Patent Application 18095311 - Methods and Systems for Operative Analysis and Management - Rejection

Title: Methods and Systems for Operative Analysis and Management

Application Information

  • Invention Title: Methods and Systems for Operative Analysis and Management
  • Application Number: 18095311
  • Submission Date: 2025-04-09
  • Effective Filing Date: 2023-01-10
  • Filing Date: 2023-01-10
  • National Class: 382
  • National Sub-Class: 128000
  • Examiner Employee Number: 83210
  • Art Unit: 2668
  • Tech Center: 2600

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

The following patents and published applications were cited in the rejection:

  • Wolf et al., US-PGPUB 2021/0307840
  • Hunter et al., US-PGPUB 2022/0132026
  • Isaac et al., US-PGPUB 2019/0180247

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-12, 15, 17-20, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Wolf et al. (US-PGPUB 2021/0307840) in view of Hunter et al. (US-PGPUB 2022/0132026).

Regarding claim 1, Wolf et al. discloses a method for analyzing surgeries comprising the steps of:
recording images of a surgery with a camera, wherein said images comprise a visual element chosen from a surgeon's hands during said surgery, a patient's surgery area, equipment used in said surgery, and instruments used in said surgery, (see at Par. 0060-0061, several cameras (e.g., overhead cameras 115, 121, and 123, and a tableside camera 125) for capturing video/image data during surgery. For example, camera 115 may be configured to track a surgical instrument (also referred to as a surgical tool) within location 127, an anatomical structure, a hand of surgeon 131, an incision, a movement of anatomical structure, and the like, [i.e., recording images of a surgery with a camera, “tableside camera 125 may be used for capturing video/image data during surgery”, wherein said images comprise a visual element chosen from a surgeon's hands during said surgery, a patient's surgery area, equipment used in said surgery, and instruments used in said surgery, “surgical instrument, and incision tool implicitly from a hand of surgeon 131”]);
saving said images of said surgery, (see at least: Par. 0084-0085, the repository may include a memory device, where the video footage may be stored for retrieval, [i.e., implicitly storing the video frame of the surgery in memory device, “repository”]);
generating a timestamp in said images, (see at least: Par. 0096, a subgroup of frames may generate tags or labels associated with the frames, correspond to differing surgical event-related categories, where the tags may include a timestamp, time range, frame number, or other means for associating the surgical event-related category to the subgroup of frames. See also, Par. 0315, medical information may be associated with a timestamp or other information indicating a time of capture, [i.e., generating a timestamp in said images, “generate tags or labels associated with the frames, including a timestamp …etc.”]);
chapterizing said images of said surgery into different chapters, (see at least: Par. 0096, generating tags or labels associated with the frames, which the tags may correspond to differing surgical event-related categories, such as a procedure step, a safety milestone, a point of decision, an intraoperative event, an operative milestone, or an intraoperative decision, [i.e., chapterizing said images of said surgery, “categorizing the frames”, into different chapters, “implicit by generating tags corresponding to differing surgical event-related categories, such as a procedure step, a safety milestone …etc., ”]); and 
analyzing said recorded images of said surgery, (see at least: Par. 0062, video/image data obtained from camera 121 may be analyzed to identify a ROI during the surgical procedure, and the camera control application may be configured to cause camera 115 to zoom towards the ROI identified by camera 121, [i.e., analyzing said recorded images of said surgery, “analyzing video/image data obtained from camera 121”]).
While disclosing generating a timestamp in said images (Par. 0096), Wolf et al. does not expressly disclose displaying a timestamp in said images.
However, Hunter discloses displaying a timestamp in said images, (see at least: Par. 0055, 0193, the annotation data comprises time-stamp data generated based on a time-stamp of the image displayed on the display assembly, [i.e., implicitly displaying a timestamp in one or more images]).
Wolf and Hunter are combinable because they are both concerned with object recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Wolf to annotate the displayed image with a time-stamp, as taught by Hunter, in order to indicate a time at which the image (e.g., video frame) was captured, (Hunter, Par. 0193).
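The chapterization mapped above (Wolf, Par. 0096) amounts to tagging recorded frames with timestamps and surgical-event categories and grouping consecutive frames that share a category. The sketch below is purely illustrative and is not taken from the application or the cited references; the frame fields, event labels, and chapter structure are assumptions.

    # Illustrative sketch: group timestamped, event-tagged video frames into chapters.
    # All field names and labels are hypothetical, not from Wolf or the application.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Frame:
        index: int           # frame number within the recording
        timestamp_s: float   # seconds since the start of the recording
        event_label: str     # e.g. "incision" or "implant placement" (hypothetical labels)

    @dataclass
    class Chapter:
        label: str
        start_s: float
        end_s: float
        frames: List[Frame] = field(default_factory=list)

    def chapterize(frames: List[Frame]) -> List[Chapter]:
        """Start a new chapter whenever the event label changes between frames."""
        chapters: List[Chapter] = []
        for frame in frames:
            if not chapters or chapters[-1].label != frame.event_label:
                chapters.append(Chapter(frame.event_label, frame.timestamp_s, frame.timestamp_s))
            chapters[-1].frames.append(frame)
            chapters[-1].end_s = frame.timestamp_s  # each chapter carries its own timeframe
        return chapters

Under this reading, each chapter carries its own start and end timestamps, which is consistent with the timeframe-per-chapter limitation addressed for claim 5 below.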

Regarding claim 2, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
Wolf further discloses wherein said images of said surgery comprises moving images and wherein said camera comprises a video camera, (see at least: Par. 0060-0061, at least one camera 115 may capture video/image data, “moving images”, where camera 115 is implicitly a video camera).

Regarding claim 3, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
Wolf further discloses a step of pre-surgery analysis of said patient based on patient data chosen from wearable patient data, patient health records, patient medical records, patient magnetic resonance imaging, patient computed tomography scan, and any combination thereof, (see at least: Par. 0112, a graphical user interface may display the frames of surgical video in a window alongside text or images reflecting patient-related data associated with the frames, including any data associated with a patient, such as name, age, sex, weight, height, … medical record of the patient, [i.e., performing the pre-surgery analysis of said patient, based on patient data, “implicit by displaying graphical user interface to surgeon, reflecting patient-related data”]).

Regarding claim 4, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
Wolf further discloses wherein said chapters of said recorded images categorize said images of said surgery by categories chosen from when specific equipment is used, when specific instrument is used, and each surgical step, (see at least: Par. 0096, generating tags or labels associated with the frames, which the tags may correspond to differing surgical event-related categories, such as a procedure step, a safety milestone, a point of decision, an intraoperative event, an operative milestone, or an intraoperative decision, [i.e., chapterizing said images of said surgery, “categorizing the frames”, into different chapters, “implicit by generating tags corresponding to differing surgical event-related categories, such as a procedure step, a safety milestone …etc.”]).

Regarding claim 5, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
Wolf further discloses wherein said chapters of said images include said timestamp to show the timeframe for each chapter, (see at least: Par. 0334, ascertaining a time of information capture by the piece of equipment. For example, the medical information captured by the piece of equipment may include timestamp information indicating the time of capture, [i.e., wherein said chapters of said images include said timestamp to show the timeframe for each chapter, “timestamp information indicating the time of capture, by the piece of equipment, of the medical information”]).

Regarding claim 6, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
The combined teaching of Wolf and Hunter as a whole does not expressly disclose a step of starting a new chapter with said images when a new instrument is used or when an instrument is removed in said surgery.
However, Wolf discloses a surgical event-related category, which may include an indicator such as a sign, pointer, tag, or code identifying a surgical event-related category, (Par. 0089); and providing an ID to identify a particular piece of equipment, where the ID may be contained within an active or passive electronic tag included as original equipment with a piece of medical equipment or a tag added later, (Par. 0300-0302), where the tag added later is technically equivalent to a new tag being added to an image of a new piece of medical equipment, [i.e., starting a new chapter with said images, “adding new tag to images”, when a new instrument is used or when an instrument is removed in said surgery, “implicitly for particular or new piece of medical equipment”].

Regarding claim 8, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 1.
Wolf further discloses wherein said step of analyzing said images of said surgery comprises a step chosen from: identifying anatomic structures of said patient's surgery area; identifying pathologic structures of said patient's surgery area; identifying each equipment or instrument used in said surgery; identifying equipment or instrument duration of use; identifying equipment or instrument placement; identifying equipment or instrument selection; identifying surgeon technique in said surgery; comparing said surgery with recorded images of another surgery; and any combination thereof, (see at least: Par. 0058, analyzing the image data and/or the preprocessed image data using one or more anatomical detection algorithms; and Par. 0047, visual action recognition algorithms may be used to analyze the video and detect the interactions between the surgical instrument and the anatomical structure or tissue, [i.e., implicitly identifying anatomic structures of said patient's surgery area]).

Regarding claim 9, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 8.
Wolf further discloses a step of collecting data from said step of analyzing said images of said surgery to provide collected data, (see at least: Par. 0062, video/image data obtained from camera 121 may be analyzed to identify a ROI during the surgical procedure, [i.e., analyzing said images of said surgery to provide collected data, “ROI”]).

Regarding claim 10, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
Wolf further discloses a step of creating a dynamic surgical scorecard of said surgery from said collected data, (see at least: Par. 0247, 0253, a machine learning model may be trained to process video frames and generate competency-related scores using a training dataset including video clips of previous surgeries, [i.e., creating a dynamic surgical scorecard of said surgery, “competency-related scores”, from said collected data, “from video clips of previous surgeries”]).
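Wolf's competency-related scores (Par. 0247, 0253) come from a trained machine learning model; the snippet below is only a simplified stand-in showing how a "dynamic surgical scorecard" might be assembled from collected per-procedure metrics. The feature names, weights, and baseline are invented for illustration and do not reflect Wolf's actual model.

    # Hypothetical scorecard: compare collected metrics against a baseline from prior surgeries.
    from typing import Dict

    FEATURE_WEIGHTS: Dict[str, float] = {
        "out_of_plane_events": -3.0,    # more deviations than baseline lowers the score
        "instrument_time_min": -0.5,    # longer-than-baseline instrument use lowers the score
        "milestones_completed": 2.0,    # extra completed milestones raise the score
    }

    def scorecard(collected: Dict[str, float], baseline: Dict[str, float]) -> float:
        """Return a 0-100 score for one surgery relative to the baseline."""
        score = 100.0
        for feature, weight in FEATURE_WEIGHTS.items():
            score += weight * (collected.get(feature, 0.0) - baseline.get(feature, 0.0))
        return max(0.0, min(100.0, score))

    print(scorecard({"out_of_plane_events": 2, "milestones_completed": 6},
                    {"out_of_plane_events": 1, "milestones_completed": 5}))  # 99.0

A learned model, as in Wolf, would replace the hand-set weights with parameters fitted on labeled clips of previous surgeries.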

Regarding claim 11, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 10.
Wolf further discloses wherein said dynamic surgical scorecard can be created for an agenda chosen from a surgeon, type of surgery, practice of surgeons, type of instrument used, and type of said surgery area, (see at least: Par. 0247, data indicating previous performance assessments may include prior competency-related scores, assessments, or evaluations of a medical professional or other person involved in a surgical procedure, [i.e., practice of surgeons]).

Regarding claim 12, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
Wolf further discloses a step of creating a baseline for regulatory measures using said collected data, (see at least: Par. 0345, historical data may include a statistical model and/or a machine learning model based on an analysis of information and/or video footage from historical surgical procedures, and a skill level of the surgeon may be determined based on how well the surgeon performs during the event, which may be based on timeliness, effectiveness, adherence to a preferred technique, a lack of injury or adverse effects, or any other indicator of skill that may be gleaned from analyzing the footage, [which technically enables creating a baseline for regulatory measures, “implicitly based on a skill level of the surgeon, determined based on timeliness, effectiveness, adherence to a preferred technique, a lack of injury or adverse effects”, using said collected data, “from analyzing the footage”]).

Regarding claim 15, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
Wolf further discloses a step of utilizing said collected data in pre-operation evaluation, (see at least: Par. 0152, additional examples of data may include room temperature, type of surgical instruments used, or any other data related to the surgical procedure and recorded before, during, or after the surgical procedure, [i.e., utilizing said collected data, “recording temperature, type of surgical instruments used, and surgical procedure data”, in pre-operation evaluation, “before the surgical procedure”]).

Regarding claim 23, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
Wolf further discloses a step of providing a real-time analysis of said surgery during said surgery wherein said real-time analysis is based on a comparison of said collected data and data input of said surgery, (see at least: Par. 0150-0151, a machine learning model may be trained using training examples to detect interactions between surgical instruments and anatomical structures from videos, and the trained machine learning model may be used to analyze the video footage based on the historical surgical footage, and detect the interaction between the medical instrument and the anatomical structure, [i.e., providing a real-time analysis of said surgery during said surgery, “using machine learning model to detect interactions between surgical instruments and anatomical structures from videos”, wherein said real-time analysis is based on a comparison of said collected data and data input of said surgery, “implicitly based on comparison between the video footage and historical surgical footage”]).

Regarding claim 24, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 23.
Wolf further discloses wherein said real-time analysis provides a recommendation to said surgeon in said surgery for a surgery step, (see at least: Par. 0164-0165, stored data may be used to determine whether the interface area between an instrument and a biological structure is outside of the surgical plane, implicitly based on comparing a real-time video feed or broadcast of the surgical procedure with the stored data, and upon such a determination, an out-of-surgical plane signal may be outputted indicating a deviation from the surgical plane by the surgical instrument, which enables a surgeon wielding a surgical instrument to experience real-time notification via the out-of-surgical plane signal that a deviation from the surgical plane by the surgical instrument has occurred; and Par. 0170, outputting an out-of-plane signal or a warning signal may improve patient surgical outcomes by alerting a surgeon to deviations or potential deviations of surgical instruments from surgical planes, thereby enabling the surgeon to make corrections to the location of the surgical instrument to operate within the surgical plane, where the warning signal technically provides a recommendation to said surgeon in said surgery for a surgery step to make corrections to the location of the surgical instrument to operate within the surgical plane, [i.e., wherein said real-time analysis, “implicitly based on comparing a real-time video feed or broadcast of the surgical procedure with the stored data”, provides a recommendation to said surgeon in said surgery for a surgery step, “implicit by alerting the surgeon to make corrections to the location of the surgical instrument to operate within the surgical plane”]).
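The out-of-surgical-plane signal described for claim 24 (Wolf, Par. 0164-0170) is, at its core, a geometric check followed by a warning. The sketch below is an assumption about how such a check could look; the plane representation, units, and 2 mm tolerance are not from the reference.

    # Hypothetical out-of-plane check: warn when an instrument tip drifts from the planned plane.
    import math
    from typing import Tuple

    Vector = Tuple[float, float, float]

    def distance_to_plane(point: Vector, plane_point: Vector, plane_normal: Vector) -> float:
        """Unsigned distance (same units as the inputs) from a point to a plane."""
        norm = math.sqrt(sum(c * c for c in plane_normal))
        return abs(sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))) / norm

    def check_frame(tip: Vector, plane_point: Vector, plane_normal: Vector,
                    tolerance_mm: float = 2.0) -> str:
        """Return a recommendation when the instrument is outside the surgical plane."""
        if distance_to_plane(tip, plane_point, plane_normal) > tolerance_mm:
            return "out-of-plane: move the instrument back toward the planned surgical plane"
        return "within plane"

    print(check_frame((0.0, 0.0, 3.1), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # out-of-plane

In Wolf, the determination is described as being driven by analysis of the video feed against stored data rather than by an explicit geometric threshold like the one assumed here.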

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wolf et al. and Hunter et al., as applied to claim 9 above, and further in view of Isaac et al. (US-PGPUB 2019/0180247).

Regarding claim 13, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
The combined teaching of Wolf and Hunter as a whole does not expressly disclose wherein said regulatory measures are chosen from Healthcare Effectiveness Data and Information Set, Merit Based Incentive Payments Systems, Medicare Access and Chip Reauthorization Act of 2015, and risk sharing models.
However, Isaac et al. discloses wherein said regulatory measures are chosen from Healthcare Effectiveness Data and Information Set, Merit Based Incentive Payments Systems, Medicare Access and Chip Reauthorization Act of 2015, and risk sharing models, (Par. 0096, the performance metrics may be defined by provider specialty and/or cost center, where parameters may be used to measure the quality of work against various external industry standard data sets and metrics, such as, for example, the Healthcare Effectiveness Data and Information Set (HEDIS), [i.e., regulatory measures, “parameters that measure the quality of work of the surgeon”, are chosen from Healthcare Effectiveness Data and Information Set, Merit Based Incentive Payments Systems, Medicare Access and Chip Reauthorization Act of 2015, and risk sharing models, “external industry standard data sets and metrics, such as, for example, the Healthcare Effectiveness Data and Information Set”]).
Wolf, Hunter, and Isaac are combinable because they are all concerned with surgical procedures. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Wolf and Hunter to use the parameters that measure the quality of work against standard data sets and metrics, as taught by Isaac, in order to provide an enhanced efficiency, automation, and reliability process for acquiring and retaining top medical talent, (Isaac, Par. 0004).

Regarding claim 14, the combined teaching of Wolf and Hunter as a whole discloses the limitations of claim 9.
The combined teaching of Wolf and Hunter as a whole does not expressly disclose a step of using said collected data with Current Procedural Terminology codes.
Isaac discloses using said collected data with Current Procedural Terminology codes, (see at least: Figs. 8-9, and Par. 0093-0096, obtaining actual performance measured data by defining performance metrics based on using parameters to measure the quality of work against various external industry standard data sets and metrics, such as, for example, the Healthcare Effectiveness Data and Information Set (HEDIS); and Par. 0098, applying relative value unit to the various Current Procedural Terminology® (CPT) codes used to describe and bill professional services, [i.e., using said collected data, “various external industry standard data sets and metrics”, with Current Procedural Terminology codes, “CPT”]).
Wolf, Hunter, and Isaac are combinable because they are all concerned with surgical procedures. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Wolf and Hunter to use the various external industry standard data sets and metrics, as taught by Isaac, in order to provide an automated and consistent solution for the rules that are applied to the practice management data before it is utilized for bonus calculation purposes, (Isaac, Par. 0101).
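Isaac's use of relative value units (RVUs) with CPT codes (Par. 0098) follows the general professional-billing pattern in which a service's RVUs are multiplied by a conversion factor. The sketch below only illustrates that pattern; the codes, RVU values, and conversion factor are placeholders, not data from Isaac or from any fee schedule.

    # Illustrative only: apply relative value units to CPT codes to price professional services.
    CPT_RVU = {"47562": 10.0, "29881": 7.5}   # hypothetical RVUs per CPT code
    CONVERSION_FACTOR = 33.0                   # hypothetical dollars per RVU

    def professional_fee(cpt_code: str, units: int = 1) -> float:
        """Fee = RVUs for the code x units x conversion factor (placeholder values)."""
        return CPT_RVU.get(cpt_code, 0.0) * units * CONVERSION_FACTOR

    print(professional_fee("47562"))  # 330.0 with the placeholder values above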

Allowable Subject Matter
Claims 7, 16, 21, and 25-26 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

With respect to claim 7, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“wherein said step of chapterizing said images of said surgery comprises the steps of: automatically accepting a data input to a computer based at least in part on said recorded images of said surgery; establishing in said computer a first chapter identification determination model automated computational transform program with starting chapter identification parameters; automatically applying said first chapter identification determination model automated computational transform program with said starting chapter identification parameters to at least some of said data input to automatically create a first chapter identification determination model transform; generating a first chapter identification determination model completed output based on said first chapter identification determination model transform; automatically varying said starting chapter identification parameters for said first chapter identification determination model automated computational transform program to establish a second chapter identification determination model automated computational transform program that differs from said first chapter identification determination model automated computational transform program in the way that it determines chapter identification from said data input; automatically applying said second chapter identification determination model automated computational transform program with said automatically varied starting chapter identification parameters to at least some of said recorded images of said surgery to automatically create a second chapter identification determination model transform; generating a different, second chapter identification determination model completed output based on said second chapter identification determination model transform; automatically comparing said first chapter identification determination model completed output with said different, second chapter identification determination model completed output; automatically determining whether said first chapter identification determination model completed output or said different, second chapter identification determination model completed output is likely to provide identification of a new chapter; providing a chapter identification indication based on said step of automatically determining whether said first chapter identification determination model completed output or said different, second chapter identification determination model completed output is likely to provide said identification of said new chapter; and storing automatically improved chapter identification parameters that are determined to identify said new chapter for future use to automatically self-improve chapter identification determination models”
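Stripped of the claim language, claim 7 recites a vary-and-compare loop: run a chapter identification model with starting parameters, run a varied copy of it, compare the two outputs, keep whichever is more likely to identify a new chapter, and store the improved parameters for future use. The toy sketch below is one assumed way to picture that loop; the threshold model, the variation rule, and the comparison against reference boundaries are illustrative inventions, not the claimed or any prior-art implementation.

    # Toy vary-and-compare loop: keep whichever parameter set better identifies chapter boundaries.
    import random
    from typing import Dict, List

    def identify_chapters(signal: List[float], params: Dict[str, float]) -> List[int]:
        """Toy model: a new chapter starts where the frame-to-frame change exceeds a threshold."""
        return [i for i in range(1, len(signal))
                if abs(signal[i] - signal[i - 1]) > params["threshold"]]

    def self_improve(signal: List[float], reference: List[int],
                     params: Dict[str, float], rounds: int = 20) -> Dict[str, float]:
        best, best_out = params, identify_chapters(signal, params)
        for _ in range(rounds):
            varied = {"threshold": best["threshold"] * random.uniform(0.8, 1.2)}  # vary the parameters
            out = identify_chapters(signal, varied)
            # keep whichever output is closer to the reference chapter boundaries
            if len(set(out) ^ set(reference)) < len(set(best_out) ^ set(reference)):
                best, best_out = varied, out
        return best  # improved parameters stored for future use

Claims 16 and 25 recite the same vary, compare, and store pattern for preoperative plan models and surgery characteristic identification models, respectively.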

With respect to claim 16, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s), in consideration of the claim as a whole:
“providing collected data from a plurality of images from different surgeries; providing patient data for said patient; automatically accepting a data input to a computer based at least in part on said collected data and said patient data; establishing in said computer a first preoperative plan model automated computational transform program with starting preoperative plan parameters; automatically applying said first preoperative plan model automated computational transform program with said starting preoperative plan parameters to at least some of said data input to automatically create a first preoperative plan model transform; generating a first preoperative plan model completed output based on said first preoperative plan model transform; automatically varying said starting preoperative plan parameters for said first preoperative plan model automated computational transform program to establish a second preoperative plan model automated computational transform program that differs from said first preoperative plan model automated computational transform program in the way that it determines preoperative plans from said data input; automatically applying said second preoperative plan model automated computational transform program with said automatically varied starting preoperative plan parameters to at least some of said data input to automatically create a second preoperative plan model transform; generating a different, second preoperative plan model completed output based on said second preoperative plan model transform; automatically comparing said first preoperative plan model completed output with said different, second preoperative plan model completed output; automatically determining whether said first preoperative plan model completed output or said different, second preoperative plan model completed output is likely to provide an optimal preoperative plan; providing a preoperative plan model identification indication based on said step of automatically determining whether said first preoperative plan model completed output or said different, second preoperative plan model completed output is likely to provide said optimal preoperative plan; and storing automatically improved preoperative plan model parameters that are determined to identify said optimal preoperative plan for future use to automatically self-improve preoperative plan models”

With respect to claim 21, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“automatically determine a type of pathology that is associated with a type of instrument, a type of implant, or a type of surgical intervention based on collected data from other surgeries; automatically associate a case type from surgical common procedural codes (CPT's); automatically determine a collective of an amount of time usage of an instrument or implant per a CPT code, a procedure, or pathology identified; automatically determine a frequency or a type in which an implant is utilized per type of a procedure or pathology encountered; automatically determine a type and an amount of time instrument utilization per procedure coding or pathology encountered; automatically determine implant or instrument variations over time based on collected data; automatically compare a time usage for implants in different surgeries and automatically determine similar pathology among different surgeries; and automatically compare instrument use time in different surgeries and automatically determine similar pathology among different surgeries.”

With respect to claim 25, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“automatically accepting a data input to a computer based at least in part on said recorded images of said surgery; establishing in said computer a first surgery characteristic identification determination model automated computational transform program with starting surgery characteristic identification parameters; automatically applying said first surgery characteristic identification determination model automated computational transform program with said starting surgery characteristic identification parameters to at least some of said data input to automatically create a first surgery characteristic identification determination model transform; generating a first surgery characteristic identification determination model completed output based on said first surgery characteristic identification determination model transform; automatically varying said starting surgery characteristic identification parameters for said first surgery characteristic identification determination model automated computational transform program to establish a second surgery characteristic identification determination model automated computational transform program that differs from said first surgery characteristic identification determination model automated computational transform program in the way that it determines surgery characteristic identification from said data input; automatically applying said second surgery characteristic identification determination model automated computational transform program with said automatically varied starting surgery characteristic identification parameters to at least some of said recorded images of said surgery to automatically create a second surgery characteristic identification determination model transform; generating a different, second surgery characteristic identification determination model completed output based on said second surgery characteristic identification determination model transform; automatically comparing said first surgery characteristic identification determination model completed output with said different, second surgery characteristic identification determination model completed output; automatically determining whether said first surgery characteristic identification determination model completed output or said different, second surgery characteristic identification determination model completed output is likely to provide identification of a surgery characteristic; providing a surgery characteristic identification indication based on said step of automatically determining whether said first surgery characteristic identification determination model completed output or said different, second surgery characteristic identification determination model completed output is likely to provide said identification of said surgery characteristic; and storing automatically improved surgery characteristic identification parameters that are determined to identify said surgery characteristic for future use to automatically self-improve surgery characteristic identification determination models”.

The relevant prior art of record, Wolf et al. (US-PGPUB 2021/0307840), discloses a method for analyzing surgeries comprising the steps of: recording images of a surgery with a camera, wherein said images comprise a visual element chosen from a surgeon's hands during said surgery, a patient's surgery area, equipment used in said surgery, and instruments used in said surgery; saving said images of said surgery; generating a timestamp in said images; chapterizing said images of said surgery into different chapters; and analyzing said recorded images of said surgery, (see the citations to Wolf at Par. 0060-0062, 0084-0085, and 0096 in the rejection of claim 1 above).
Wolf further discloses using tables 511 and 513 shown in Fig. 5, which may have information about surgical procedures, such as the type of procedure, patient information or characteristics, length of the procedure, a location of the procedure, a surgeon's identity or other information, an associated anesthesiologist's identity, etc., (Par. 0152-0153); and further discloses training machine learning models to predict outcomes based on image-related data using training data based on historical examples, (Par. 0188).
However, Wolf fails to teach or suggest, either alone or in combination with the other cited references, the underlined limitations of claims 7, 16, 21, and 25, in consideration of the claim as a whole.

A further prior art of record, Hunter et al. (US-PGPUB 2022/0132026), discloses displaying a timestamp in said images, (see at least: Par. 0055, 0193, the annotation data comprises time-stamp data generated based on a time-stamp of the image displayed on the display assembly, [i.e., implicitly displaying a timestamp in one or more images]), but fails to teach or suggest, either alone or in combination with the other cited references, the underlined limitations of claims 7, 16, 21, and 25, in consideration of the claim as a whole.

Claim 26 is in condition for allowance in view of its dependency from claim 25.

Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached on (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.




/AMARA ABDI/
Primary Examiner, Art Unit 2668
04/03/2025