Patent Application 18296332 - METHOD, COMPUTER PROGRAM, DEVICE, AND SYSTEM FOR TRACKING A TARGET OBJECT - Rejection

From WikiPatents

Title: METHOD, COMPUTER PROGRAM, DEVICE, AND SYSTEM FOR TRACKING A TARGET OBJECT

Application Information

  • Invention Title: METHOD, COMPUTER PROGRAM, DEVICE, AND SYSTEM FOR TRACKING A TARGET OBJECT
  • Application Number: 18296332
  • Submission Date: 2025-05-19
  • Effective Filing Date: 2023-04-05
  • Filing Date: 2023-04-05
  • National Class: 382
  • National Sub-Class: 103000
  • Examiner Employee Number: 71512
  • Art Unit: 2668
  • Tech Center: 2600

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 0

Cited Patents

  • CN110610514A (Zhang et al.), published 2019-12-24 — applied in the § 102 rejection below.

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1-11 and 13 are pending.
Claim 12 is canceled via preliminary amendment filed 4/5/2023.  

Specification
Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. 
Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps.
Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length.
See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –


(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.


(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9, 11 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by CN110610514A, published 2019-12-24, hereinafter “Zhang et al”.

Regarding claim 1:
Zhang et al disclose the following as claimed:
A method of tracking a target object in an image stream “[0002] The present invention relates to the field of intelligent transportation technology, and in particular to a method, a device and an electronic device for realizing multi-target tracking; [0010] Get the video stream of the camera as a continuous input image;” captured by a camera at a capture frequency (Fc) ([0012] “tracking frame rate is not greater than a capture frame rate (Fc) of the image”) said method comprising: 
a tracking phase comprising a plurality of iterations of tracking said target object ([0007, 0011, 0036] i.e., “multi-target tracking”; “each target object is tracked based on the tracking frame rate of each target object”), for each image of said image stream as a processed image (tracked image) and said tracking phase comprising detecting at least one object (see above), and a position of said at least one object (position of tracked object is inherent), in the processed image (tracked image), and identifying said target object among the at least one object that is detected in said processed image ([0011-16] “target tracking”, “each target object is tracked separately” i.e., tracking phase, “target detection on the image”); 
wherein said tracking phase is carried out at a detection frequency (Fs), lower than said capture frequency (Fc) ([0012] “tracking frame rate (Fs) is not greater than a capture frame rate (Fc) of the image”), 
such that two images processed during two successive iterations of the tracking phase are separated by at least one non-processed image to which said tracking phase is not applied ([0031] “the target object is tracked in k frames of the image for every n frames of image input” implies non-processed images in between; see also [0177] “For example, when the tracking frame rate of the target object is 3/5 of the image acquisition frame rate, the target object is tracked in 3 frames among every 5 frames. In a specific example, when the 1st, 2nd, 3rd, 4th and 5th frames are input, the target object can be tracked in the 1st, 3rd and 5th frames.”)
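For illustration of the Fs/Fc relationship at issue, the following is a minimal sketch of a tracking loop that applies the tracking phase only to every n-th captured frame; the helper names (capture_frame, detect_objects, identify_target) are hypothetical placeholders and do not come from the application or from Zhang et al.

    # Minimal sketch: tracking phase applied at a detection frequency Fs
    # lower than the capture frequency Fc, so that at least one captured
    # frame between two successive tracking iterations is left unprocessed.
    # capture_frame, detect_objects and identify_target are hypothetical
    # placeholders for the camera interface, the object detector, and the
    # target-identification step.
    def track_stream(capture_frame, detect_objects, identify_target,
                     fc=30.0, fs=10.0):
        n = max(1, round(fc / fs))  # process every n-th frame (n >= 2 when Fs < Fc)
        target_track = []           # (frame_index, position) of the target
        frame_index = 0
        while True:
            frame = capture_frame()
            if frame is None:       # end of stream
                break
            if frame_index % n == 0:    # tracking phase: this is a processed image
                detections = detect_objects(frame)   # e.g. [(position, ...), ...]
                position = identify_target(detections, target_track)
                if position is not None:
                    target_track.append((frame_index, position))
            # frames with frame_index % n != 0 are non-processed images:
            # the tracking phase is not applied to them
            frame_index += 1
        return target_track

With fc=30.0 and fs=10.0, n=3: frames 0, 3, 6, ... are processed, and the two captured frames between successive iterations never enter the tracking phase, mirroring the claim language and the 3-of-5-frames example in [0177].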


Regarding claim 2:
The method according to claim 1, further comprising estimating a position of the target object at a time located between capture times of said two images processed during said two successive iterations of the tracking phase, based upon the position of said target object in each of said two images that are processed.  Zhang et al further disclose at ([0027] “Furthermore, the moving speed of the target object is obtained by motion estimation”; [0150] “The image of the target area is subjected to target detection by using a HOG-based feature classification algorithm or an SSD algorithm to extract the target object and obtain the detection area and category of the target object”).  Motion estimation implies finding motion vectors between images in a sequence, for use in video/image compression, object tracking, and image analysis, inter alia.  SSD and HOG-based feature classification algorithms facilitate target object tracking and positional estimation in an image.
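As an illustration of what such a positional estimate between two processed frames could look like, the sketch below linearly interpolates the target position at an intermediate capture time from the positions found in the two surrounding processed images. This is a generic constant-velocity stand-in for motion estimation, not the specific method of the application or of Zhang et al.

    # Sketch: estimate the target position at a time t located between the
    # capture times t0 and t1 of two successively processed images, by
    # linear interpolation of the positions (x, y) measured in those images.
    # Assumes roughly constant velocity between the two processed frames.
    def interpolate_position(t, t0, pos0, t1, pos1):
        if t1 == t0 or not (t0 <= t <= t1):
            raise ValueError("t must lie between the two processed capture times")
        alpha = (t - t0) / (t1 - t0)
        x = pos0[0] + alpha * (pos1[0] - pos0[0])
        y = pos0[1] + alpha * (pos1[1] - pos0[1])
        return (x, y)

    # Example: target at (100, 50) at t0=0.0 s and (130, 50) at t1=0.5 s;
    # its estimated position at t=0.2 s is (112.0, 50.0).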

Regarding claim 3:
The method according to claim 1, wherein the tracking phase is implemented for said each image every N image(s), where N≥2 or N≥20, such that said two successive iterations of the tracking phase are applied to two images separated, over time, from said N images, which are not processed.  The rejection of claim 1 above is fully applicable here, especially the last limitation of claim 1.  Zhang et al further disclose at ([0031] “the target object is tracked in k frames of the image for every n frames of image input” implies non-processed images in between; see also [0177] “For example, when the tracking frame rate of the target object is 3/5 of the image acquisition frame rate, the target object is tracked in 3 frames among every 5 frames. In a specific example, when the 1st, 2nd, 3rd, 4th and 5th frames are input, the target object can be tracked in the 1st, 3rd and 5th frames.”)

Regarding claim 4:
The method according to claim 1, wherein the tracking phase is carried out for each image captured every predetermined duration (DUR) of seconds.  The rejections of claims 1 and 3 above are fully applicable here.  Zhang et al further disclose at ([0031] “the target object is tracked in k frames of the image for every n frames of image input” implies non-processed images in between; see also [0177] “For example, when the tracking frame rate of the target object is 3/5 of the image acquisition frame rate, the target object is tracked in 3 frames among every 5 frames. In a specific example, when the 1st, 2nd, 3rd, 4th and 5th frames are input, the target object can be tracked in the 1st, 3rd and 5th frames.”).  Because not all frames from the example are tracked, the implication is that a time duration is predetermined.  In other words, “the target object is tracked in k frames of the image for every n frames of image input” implies a set duration (see the sketch below).
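By way of illustration only, selecting frames by elapsed time rather than by frame count could look like the hypothetical helper below, which admits a frame into the tracking phase once a predetermined duration DUR has passed since the last processed frame; it illustrates the claim language and is not taken from either document.

    # Sketch: decide whether a captured frame should enter the tracking
    # phase based on a predetermined duration DUR (in seconds) elapsed
    # since the last processed frame, rather than a fixed frame count.
    def should_process(capture_time, last_processed_time, dur):
        """Return True if at least `dur` seconds have elapsed since the
        last processed frame (or if no frame has been processed yet)."""
        if last_processed_time is None:
            return True
        return capture_time - last_processed_time >= dur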

Regarding claim 5:
The method according to claim 1, wherein the image stream is captured prior to a first iteration of the plurality of iterations of the tracking phase such that the target object is not tracked in real time.  Zhang et al further disclose at ([0010] “Get the video stream of the camera as a continuous input image”, which implies the image stream is captured prior to real-time tracking).
 
Regarding claim 6:
The method according to claim 1, wherein the method is implemented to carry out real-time tracking of the target object, said method further comprising transmitting said each image that is processed from the camera to a tracking device. Zhang et al further disclose at ([0007] “The technical problem to be solved by the present invention is to provide a method, device and electronic equipment for realizing multi-target tracking, which can reduce the data processing amount of target tracking and realize real-time tracking of multiple target objects”; see also [0034] “A target detection module, used to perform target detection on the image and extract multiple target objects”).

Regarding claim 7:
The method according to claim 6, wherein said transmitting said each image that is processed from the camera to the tracking device is carried out at a request of said tracking device.  Zhang et al further disclose at ([0040] “When the computer program instructions are executed by the processor, the processor executes the steps in the method for implementing multi-target tracking as described above”; the execution of program instructions by the processor to carry out target tracking implies “at a request” as claimed).

Regarding claim 8:
The method according to claim 6, wherein the camera is arranged to only capture processed images.  Zhang et al further disclose at ([Abstract] “According to the technical scheme, for different target objects, the tracking frame rate is dynamically adjusted in real time, the data processing amount of target tracking can be reduced, and real-time tracking of multiple target objects is achieved.”)  The dynamically adjusted frame rate implies that only processed images are captured, so as to achieve a reduced frame rate in real time. 
 
Regarding claim 9:
The method according to claim 1, wherein said detecting is carried out by an artificial intelligence model comprising a neural network, wherein said artificial intelligence model is previously trained to detect a presence of an object in an image.  Zhang et al further disclose at ([0095] “Furthermore, the target detection unit is specifically used to perform target detection on the image of the target area using a HOG-based feature classification algorithm or an SSD algorithm, extract the target object, and obtain the detection area and category of the target object.”)  An SSD (Single Shot Detector) algorithm is a supervised machine-learning model based on a convolutional neural network (CNN).  In other words, it is trained in advance on labeled object data.
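For a concrete sense of the kind of HOG-based detection cited at [0095], the sketch below uses OpenCV's stock pretrained HOG + linear-SVM people detector; it is a generic example of the technique, not the detector trained by Zhang et al or by the applicant.

    # Sketch: HOG-based object detection with OpenCV's built-in,
    # pretrained HOG + linear-SVM people detector. Returns bounding
    # boxes (x, y, w, h) and confidence weights for detected objects.
    import cv2

    def detect_people(image):
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
        boxes, weights = hog.detectMultiScale(image,
                                              winStride=(8, 8),
                                              padding=(8, 8),
                                              scale=1.05)
        # Each box gives a detected object's position in the processed image.
        return list(zip(boxes, weights))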

Regarding claim 11, which recites a corresponding “non-transitory computer program comprising executable instructions, which, when said executable instructions are executed by a computer apparatus, implement a method of” claim 1.  Thus, the rejection analysis of claim 1 is fully applicable here.  Zhang et al further disclose at ([0040] “When the computer program instructions are executed by the processor, the processor executes the steps in the method for implementing multi-target tracking as described above.”)

Regarding claim 13, which recites a “system that tracks a target object” corresponding to method claim 1.  Thus, the rejection analysis of claim 1 is fully applicable here.  Zhang et al further disclose at ([0048] “FIG4 is a block diagram of an electronic device for implementing multi-target tracking according to an embodiment of the present invention”).
  
Allowable Subject Matter
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The best prior art of record, “Zhang et al”, teaches claims 1-9, 11 and 13 as outlined above.  However, the applied prior art fails to anticipate or render obvious the further limitations as recited: “wherein said identifying said target object in said processed image comprises for each object of said at least one object that is detected in said processed image, calculating a spatial distance between the position of said each object of said at least one object and the position of the target object detected on a previously processed image, spatial filtering of the each object of the at least one object based on said spatial distance that is calculated for said each object and a predetermined spatial distance threshold value (SDS), calculating an appearance distance between a visual signature of the target object detected on the previously processed image and a visual signature of said each object that is retained after the spatial filtering, and identifying the target object based on said appearance distance of said each object.”
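To clarify what this allowed limitation amounts to algorithmically, the following is a minimal sketch of the two-stage identification it recites: spatial gating against a threshold SDS, followed by appearance matching on visual signatures. The particular distance measures (Euclidean and cosine) are illustrative assumptions only, as the claim does not fix them.

    # Sketch of the claim-10 identification stage: (1) compute a spatial
    # distance between each detected object and the target's last known
    # position, (2) filter out objects farther away than a threshold SDS,
    # (3) compute an appearance distance between visual signatures, and
    # (4) identify the target as the closest match in appearance.
    # Euclidean and cosine distances are illustrative assumptions.
    import math

    def identify_target(detections, last_position, last_signature, sds):
        """detections: list of (position, signature) tuples, where
        position is (x, y) and signature is a feature vector."""
        def spatial_distance(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def appearance_distance(a, b):      # cosine distance
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return 1.0 - dot / (na * nb) if na and nb else 1.0

        # Spatial filtering: keep only objects within SDS of the last position.
        retained = [(pos, sig) for pos, sig in detections
                    if spatial_distance(pos, last_position) <= sds]
        if not retained:
            return None
        # Identify the target as the retained object whose visual signature
        # is closest in appearance to the target's previous signature.
        return min(retained,
                   key=lambda d: appearance_distance(d[1], last_signature))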


Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Supervisory Patent Examiner, VU LE, whose telephone number is (571)272-7332. The examiner can normally be reached M-F 8:00 - 17:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VU LE/Supervisory Patent Examiner, Art Unit 2668
