
Patent Application 18304549 - EFFICIENT MOTION ESTIMATION IN AN IMAGE - Rejection


Title: EFFICIENT MOTION ESTIMATION IN AN IMAGE PROCESSING DEVICE

Application Information

  • Invention Title: EFFICIENT MOTION ESTIMATION IN AN IMAGE PROCESSING DEVICE
  • Application Number: 18304549
  • Submission Date: 2025-05-23
  • Effective Filing Date: 2023-04-21
  • Filing Date: 2023-04-21
  • National Class: 382
  • National Sub-Class: 107000
  • Examiner Employee Number: 82141
  • Art Unit: 1774
  • Tech Center: 1700

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 3

Cited Patents

The following patents were cited in the rejection:

  • Tsurumi (US 2010/0103290 A1)
  • Makino (US 2018/0218504 A1)
  • Nir (US 2021/0368217 A1)

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b)  CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.


The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.


Claims 1-4, 10-12, 18-20, and 25-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 2, 10, 18, and 25 recite “wherein generating the transform matrix is performed further in accordance with the cross-correlation.” The claim language has multiple interpretations: it could mean that the transform matrix is generated and then a cross-correlation is performed, or that generating the transform matrix itself includes the cross-correlation. Under either interpretation, it is unclear how the generation of the transform matrix can be performed “further.” For the purpose of furthering prosecution, the Examiner interprets the transform matrix as being generated in consideration of a cross-correlation.


Claims 3, 11, 19, and 26 recite “wherein the transform matrix is further generated in accordance with the detected one or more objects,” which raises an ambiguity similar to the analysis above: it is unclear how the transform matrix can be “further” generated. For the purpose of furthering prosecution, the Examiner interprets the transform matrix as being generated in consideration of the detected one or more objects.

	Claims 4, 12, 20, and 27 are rejected due to their dependency from rejected claims 3, 11, 19, and 26.


Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-12, 14, 16-20, 23-27 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi (US 2010/0103290 A1) in view of Makino (US 2018/0218504 A1).

As to claims 1-4, 6, and 8, they are the method counterparts of apparatus claims 9-12, 14, and 16, and therefore a similar analysis is provided below.

As to claim 9, Tsurumi teaches an apparatus (an image processing apparatus; Abstract), comprising: a memory (main memory, 781, Fig. 68) storing processor-readable code (program; Title); and at least one processor coupled to the memory (multicore processor connected to main memory, 781 and 800, Fig. 68), the at least one processor configured to execute the processor-readable code to cause the at least one processor to perform operations including: determining a first set of correlation parameters (affine transformation parameter; [0170]) for a first frame (immediately previous frame; [0170]) of a sequence of frames (Figs. 27A, 27B); determining a second set of correlation parameters (calculate affine transformation parameters; [0170]) for a second frame (current frame; [0170]) of the sequence of frames (Figs. 27A, 27B); and generating a transform matrix indicating motion from the first frame to the second frame in accordance with the first set of correlation parameters and the second set of correlation parameters (the matrix expression of affine transformation can be represented with Expression 1; [0160]). Tsurumi does not explicitly teach inverting the transform matrix to produce an inverted transform matrix indicating motion from the second frame to the first frame. Tsurumi does teach that an affine transformation parameter calculating process is executed multiple times, wherein affine transformation parameters are calculated based on the three optical flows ([0159]). Makino teaches a moving object detection apparatus (Title) wherein the transformation matrix may be a transformation matrix for affine transformation ([0057]), and inverting the transform matrix to produce an inverted transform matrix indicating motion from the second frame to the first frame (said transformation matrix indicates transformation between coordinates of the pixels of the frame T (second frame) and coordinates of the pixels of the frame T-1 (first frame) corresponding to the pixels of the frame T. Specifically, this transformation matrix transforms the coordinates of the pixels of the frame T-1 to coordinates of pixels of the frame T. Therefore, the coordinates of representative points set in the frame T are transformed by an inverse matrix of this transformation matrix; [0058]). It would have been obvious to one of ordinary skill in the art at the time of filing to have modified the affine transformation of Tsurumi with the inverse matrix of Makino in order to reduce the influence of flows of representative points that are not included in a background area by performing an optimization calculation method that is not easily influenced by outliers (Makino [0057]).
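For illustration only, and not drawn from Tsurumi or Makino, the following minimal numpy sketch shows how a 2D affine motion model of the kind discussed above can be written as a 3x3 homogeneous matrix built from estimated parameters, and how inverting that matrix yields the motion from the second frame back to the first; all parameter values and function names are hypothetical.

    import numpy as np

    def affine_matrix(a, b, c, d, tx, ty):
        # Homogeneous form of the affine map [x', y'] = [[a, b], [c, d]] @ [x, y] + [tx, ty].
        return np.array([[a, b, tx],
                         [c, d, ty],
                         [0.0, 0.0, 1.0]])

    # Hypothetical parameters estimated from frame 1 -> frame 2 (small rotation plus translation).
    theta = np.deg2rad(2.0)
    M = affine_matrix(np.cos(theta), -np.sin(theta),
                      np.sin(theta),  np.cos(theta), 5.0, -3.0)

    # Inverting the matrix describes the motion from frame 2 back to frame 1.
    M_inv = np.linalg.inv(M)

    p1 = np.array([100.0, 50.0, 1.0])  # a point in frame 1 (homogeneous coordinates)
    p2 = M @ p1                        # where that point lands in frame 2
    back = M_inv @ p2                  # mapped back to frame 1
    assert np.allclose(back, p1)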

As to claim 10, Tsurumi and Makino teach the apparatus of claim 9, wherein the at least one processor is further configured to execute the processor-readable code to cause the at least one processor to perform operations (see details in claim 9) including: Makino teaches determining a cross-correlation from the first frame to the second frame, wherein generating the transform matrix is performed further in accordance with the cross-correlation (the cross-correlation maximization method or the Lucas-Kanade method is interpreted to be the cross-correlation; [0053]). It would have been obvious to one of ordinary skill in the art at the time of filing to modify the optical flow process used in Tsurumi ([0109]) with the cross-correlation methods taught by Makino to obtain better motion estimation for selected features such as corners or edges.
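As a rough, non-authoritative sketch of what a cross-correlation between two frames can look like in practice (it is not the cross-correlation maximization method or the Lucas-Kanade method cited from Makino [0053]), the following Python fragment scores candidate displacements of a patch from the first frame against the second frame using normalized cross-correlation; the patch size, search range, and function names are assumptions.

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation of two equally sized patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_offset(frame1, frame2, top, left, size=16, search=8):
        # Find the (dx, dy) offset in frame2 that maximizes NCC with a patch of frame1.
        ref = frame1[top:top + size, left:left + size].astype(float)
        best_score, best_dxy = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > frame2.shape[0] or x + size > frame2.shape[1]:
                    continue
                score = ncc(ref, frame2[y:y + size, x:x + size].astype(float))
                if score > best_score:
                    best_score, best_dxy = score, (dx, dy)
        return best_dxy, best_score

Offsets found for several such patches could then feed the estimation of the transform matrix parameters.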

As to claim 11, Tsurumi and Makino teach the apparatus of claim 9, wherein the at least one processor is further configured to execute the processor-readable code to cause the at least one processor to perform operations (see details in Claim 9) including: detecting one or more objects of the first frame, wherein the transform matrix is further generated in accordance with the detected one or more objects (the image processing apparatus may further include an object detecting unit configured to detect an object included in the compositing target images; Tsurumi [0015]).  

As to claim 12, Tsurumi and Makino teach the apparatus of claim 11, wherein detecting one or more objects of the first frame comprises performing corner detection on the first frame (affine transformation parameters are calculated using the optical flows corresponding to three corner points detected from the images 320 and 330; Tsurumi [0153], Figs. 6A-6C).  
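For context only, corner detection of the sort referenced in the Tsurumi citation is often implemented with a Harris-style response; the sketch below is a generic illustration under that assumption, not the detector actually used by Tsurumi, and the window size and constant k are illustrative.

    import numpy as np

    def harris_response(img, k=0.04):
        # Harris corner response: strongly positive values indicate corner-like points.
        img = img.astype(float)
        iy, ix = np.gradient(img)          # simple finite-difference gradients
        ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

        def box3(a):
            # Average the gradient products over a 3x3 window (border left at zero).
            out = np.zeros_like(a)
            out[1:-1, 1:-1] = sum(
                a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            return out

        sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
        det = sxx * syy - sxy * sxy
        trace = sxx + syy
        return det - k * trace * trace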

As to claim 14, Tsurumi and Makino teach the apparatus of claim 9, wherein the at least one processor is further configured to execute the processor-readable code to cause the at least one processor to perform operations including: determining a third set of correlation parameters for a third frame of the sequence of frames; detecting one or more objects of the third frame; and generating a transform matrix indicating motion from the third frame to the second frame in accordance with the second set of correlation parameters and the third set of correlation parameters (calculating process for affine transformation parameters is repeated; Tsurumi [0211]).  

As to claim 16, Tsurumi and Makino teach the apparatus of claim 9, wherein the at least one processor is further configured to execute the processor-readable code to cause the at least one processor to perform operations including: determining that the second frame is an odd-numbered frame of the sequence of frames, wherein inverting the transform matrix is performed in accordance with the determination that the second frame is an odd-numbered frame of the sequence of frames (Makino [0057] discloses frames T and T-1, this process is repeated and therefore could be labeled in any order within a series of frames).  

As to claims 17-20 and 23, they differ from claims 9-12 and 16 in that they are directed to a non-transitory computer-readable medium storing instructions performed by the processor of claims 9-12 and 16. Tsurumi and Makino teach a main memory storing instructions (Tsurumi, 781, Fig. 68).

As to claims 24-27 and 30, they differ from claims 9-12 and 16 in that they are directed to an image capture device rather than an apparatus. Tsurumi and Makino teach that the moving picture input unit 110 is configured to input a moving picture imaged by an imaging apparatus such as a digital video camera or the like (Tsurumi [0108]).


Claims 5, 13, 21 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi and Makino as applied to claims 1-4, 6, 8-12, 14, 16-20, 23-27 and 30 above, and further in view of Kwang Moo Yi et al. (“Detection of Moving Objects with Non-Stationary Cameras in 5.8 ms: Bringing Motion Detection to your Mobile Device,” CVPR 2013 Workshops, 2013; disclosed in Makino [0003]; hereinafter “Kwang”).

As to claim 5, it is the method counterpart of claim 13, and therefore a similar analysis is provided below.

As to claim 13, Tsurumi and Makino teach the apparatus of claim 9 (see analysis for claim 9 above); however, they do not explicitly teach wherein determining the first set of correlation parameters comprises determining a first independent mean and a first independent variance of the first frame, and wherein determining the second set of correlation parameters comprises determining a second independent mean and a second independent variance of the second frame. Kwang teaches a method for detecting a moving object from video captured by a moving camera, wherein a background model based on an average value (independent mean), a variance (independent variance), and an age of pixel values is calculated for each of the areas obtained by dividing each frame included in the video (Makino [0003]). It would have been obvious to one of ordinary skill in the art at the time of filing to combine Kwang with Tsurumi and Makino since they are all within the same field of endeavor.
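As a hedged illustration of the per-area statistics described in the Kwang citation (an average value and a variance computed for areas obtained by dividing each frame), and not an implementation of that paper, the sketch below computes an independent mean and variance for each cell of a frame divided into a grid; the grid size and names are assumptions.

    import numpy as np

    def grid_mean_variance(frame, grid=8):
        # Divide the frame into grid x grid cells and return the mean and
        # variance of the pixel values in each cell.
        h, w = frame.shape[:2]
        ch, cw = h // grid, w // grid
        means = np.zeros((grid, grid))
        variances = np.zeros((grid, grid))
        for gy in range(grid):
            for gx in range(grid):
                cell = frame[gy * ch:(gy + 1) * ch, gx * cw:(gx + 1) * cw]
                means[gy, gx] = cell.mean()
                variances[gy, gx] = cell.var()
        return means, variances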

As to claim 21, it differs from claim 13 in that it is directed to a non-transitory computer-readable medium storing instructions performed by the processor of claim 13. Tsurumi and Makino teach a main memory storing instructions (Tsurumi, 781, Fig. 68).

As to claim 28, it differs from claim 13 in that it is directed to an image capture device rather than an apparatus. Tsurumi and Makino teach that the moving picture input unit 110 is configured to input a moving picture imaged by an imaging apparatus such as a digital video camera or the like (Tsurumi [0108]).

Claims 7, 15, 22 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Tsurumi and Makino as applied to claims 1-4, 6, 8-12, 14, 16-20, 23-27 and 30 above, and further in view of Nir (US 2021/0368217 A1).

As to claim 7, it is the method counterpart of claim 15, and therefore a similar analysis is provided below.

As to claim 15, Tsurumi and Makino teach the apparatus of claim 9 (see analysis for claim 9 above), wherein the at least one processor is further configured to execute the processor-readable code to cause the at least one processor to perform operations; however, they do not explicitly teach refraining from performing object detection on the second frame. Nir teaches a system for encoding videos (Abstract) wherein a sequence of frames is analyzed for objects of interest within the frame (Fig. 1B), and if two consecutive frames are substantially identical, the second frame of the two may not be analyzed in order to avoid object detection costs ([0061]). It would have been obvious to have modified Tsurumi and Makino at the time of filing with this encoding technique taught by Nir for the purposes of cost savings (Nir [0061]).
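For illustration of the general idea attributed to Nir [0061] (skipping analysis of a frame that is substantially identical to the previous one), and not Nir's actual implementation, the following sketch reuses the prior detection result when a simple frame-difference test falls below a threshold; the threshold value and function names are assumptions.

    import numpy as np

    def substantially_identical(frame_a, frame_b, threshold=2.0):
        # Treat frames as substantially identical when the mean absolute
        # pixel difference is below a small threshold.
        diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
        return diff.mean() < threshold

    def detect_with_skipping(frames, detect_objects):
        # Run detect_objects on each frame, but refrain from re-detecting on a
        # frame that is substantially identical to the one before it.
        results, prev_frame, prev_result = [], None, None
        for frame in frames:
            if prev_frame is not None and substantially_identical(prev_frame, frame):
                results.append(prev_result)   # reuse the previous result
            else:
                prev_result = detect_objects(frame)
                results.append(prev_result)
            prev_frame = frame
        return results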

As to claim 22, it differs from claim 15 in that it is directed to a non-transitory computer-readable medium storing instructions performed by the processor of claim 15. Tsurumi and Makino teach a main memory storing instructions (Tsurumi, 781, Fig. 68).

As to claim 29, it differs from claim 15 in that it is directed to an image capture device rather than an apparatus. Tsurumi and Makino teach that the moving picture input unit 110 is configured to input a moving picture imaged by an imaging apparatus such as a digital video camera or the like (Tsurumi [0108]).

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
Ma et al. (US 20090016610) teach systems and methods that facilitate image motion analysis.
Abbeloos et al. (US 20240144487) teach a method for tracking positions of objects.


Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAIRE X WANG whose telephone number is (571)270-1051. The examiner can normally be reached M-F 8:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yvonne Eyler can be reached at (571) 272-1200. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLAIRE X WANG/Supervisory Patent Examiner, Art Unit 1774

