Patent Application 18184294 - Scene Classification Method, Apparatus and Computer Program Product - Rejection

From WikiPatents

Title: Scene Classification Method, Apparatus and Computer Program Product

Application Information

  • Invention Title: Scene Classification Method, Apparatus and Computer Program Product
  • Application Number: 18184294
  • Submission Date: 2025-05-15
  • Effective Filing Date: 2023-03-15
  • Filing Date: 2023-03-15
  • National Class: 382
  • National Sub-Class: 159000
  • Examiner Employee Number: 98143
  • Art Unit: 2662
  • Tech Center: 2600

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 3

Cited Patents

The following patents were cited in the rejection:

  • US 2023/0082097 A1 (Choi et al.)
  • US 10,303,980 B1 (Kim et al.)
  • US 2019/0220709 A1 (Freeman et al.)
  • US 2022/0082689 A1 (Ji Su Kim)

Office Action Text


    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Acknowledgement is made of Applicant’s claim of priority from the United Kingdom Patent Application No. GB2205533.9, filed on April 14, 2022.

Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 03/15/2023 and 11/17/2023 have been reviewed and the listed references have been considered.

Drawings
Regarding the 3-page drawings filed on 03/15/2023, figures 1-2 are objected to as they depict block diagrams without “readily identifiable” descriptors of each block, as required by 37 CFR 1.84(n).  Rule 84(n) requires “labeled representations” of graphical symbols, such as blocks; and any that are “not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.”  In the case of figures 1-2, the blocks are not readily identifiable per se and therefore require the insertion of text that identifies the function of that block.  That is, each vacant block should be provided with a corresponding label identifying its function or purpose.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application.  Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.  The figure or figure number of an amended drawing should not be labeled as “amended.”  If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency.  Additional replacement sheets may be necessary to show the renumbering of the remaining figures.  Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d).  If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action.  The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.  Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 24-27, 29, 31-37, 39 and 41-43 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 2023/0082097 A1) in view of Kim et al. (US 10,303,980 B1).

Regarding claim 24, Choi teaches A method comprising: receiving feature maps generated from sensor data provided by a sensor system; (Choi, ¶0030: “A multi-sensor data-based fusion information generation method”) processing the feature maps (Choi, ¶0036: “the feature maps are converted based on a single 3D coordinate system through a process 240”) (Choi, ¶0036: “feature maps expressed using a single 3D coordinate system are fused … the converted feature maps are concatenated”) and classifying (Choi, ¶0003: “recognizing or detecting an object through a statistical classifier”) a scene (Choi, ¶0053: “artificial intelligence technologies for recognizing an environment”) based on the generated inner products (Choi, ¶0036: “Using the fused feature map, a recognition, such as object detection or region segmentation, is performed”).  However, Choi does not explicitly teach using longitudinal and lateral feature pooling to generate longitudinal and lateral feature pool outputs.

In an analogous field of endeavor, Kim teaches, using longitudinal and lateral feature pooling (Kim, col. 1, lines 24-29: “the first direction is in a direction of the rows of the at least one decoded feature map and the second direction is in a direction of the columns thereof, concatenating each of features of each of the rows per each of the columns in a direction of a channel, to thereby generate at least one reshaped feature map”; applicant’s specification ¶0041: “longitudinal and lateral axes along the column and row directions of the feature maps”) to generate longitudinal and lateral feature pool outputs. (Kim, col. 2, lines 4-6: “max pooling, and ReLU, to thereby generate one or more reduced feature maps by reducing a size”).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi using the teachings of Kim to introduce row and column features.  A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of generating a reduced 2D feature map.  Therefore, it would have been obvious to combine the analogous arts Choi and Kim to obtain the invention of claim 24.  
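
For illustration only, the following is a minimal NumPy sketch of the row/column feature pooling the examiner reads onto Kim. It is not code from Choi, Kim, or the application; the function name, array shapes, and the choice to collapse each spatial axis entirely are assumptions.

    import numpy as np

    def pool_longitudinal_lateral(feature_map, mode="max"):
        """One plausible reading of the claimed pooling: collapse a (C, H, W)
        feature map along its row and column axes to obtain two 1-D profiles
        per channel.  Per the application as quoted above, the longitudinal and
        lateral axes run along the column and row directions of the feature map."""
        reduce_fn = np.max if mode == "max" else np.mean  # claim 26 permits either
        longitudinal_pool = reduce_fn(feature_map, axis=1)  # collapse rows    -> (C, W)
        lateral_pool = reduce_fn(feature_map, axis=2)       # collapse columns -> (C, H)
        return longitudinal_pool, lateral_pool

    # Example with an invented 64-channel, 32x32 feature map from an upstream detector.
    fmap = np.random.rand(64, 32, 32)
    long_out, lat_out = pool_longitudinal_lateral(fmap, mode="max")
    print(long_out.shape, lat_out.shape)  # (64, 32) (64, 32)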

Regarding claim 25, Choi in view of Kim teaches, The method of claim 24, wherein generating the inner product further comprises: concatenating the longitudinal and lateral feature pool outputs. (Kim, col. 1, lines 26-29: “concatenating each of features of each of the rows per each of the columns in a direction of a channel, to thereby generate at least one reshaped feature map”).  The proposed combination as well as the motivation for combining Choi and Kim references presented in the rejection of claim 24, apply to claim 25 and are incorporated herein by reference.  Thus, the method recited in claim 25 is met by Choi and Kim.

Regarding claim 26, Choi in view of Kim teaches, The method of claim 24, wherein the longitudinal and lateral feature pooling comprises at least one of: maximum feature pooling; or mean feature pooling. (Kim, col. 2, lines 4-6: “max pooling, and ReLU, to thereby generate one or more reduced feature maps by reducing a size”).  The proposed combination as well as the motivation for combining Choi and Kim references presented in the rejection of claim 24, apply to claim 26 and are incorporated herein by reference.  Thus, the method recited in claim 26 is met by Choi and Kim.
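
Continuing the same hypothetical sketch, claims 25-27 as mapped above would concatenate the two pool outputs and take inner products against per-class weights to produce scene classification scores. The learned weight matrix below is an assumption; the office action does not spell out what the inner products are taken against.

    import numpy as np

    def scene_scores(longitudinal_pool, lateral_pool, weights):
        """Concatenate the two pool outputs (claim 25) and take one inner
        product per scene class (claims 24 and 27).  'weights' stands in for
        learned per-class weight vectors, which is an assumption here."""
        pooled = np.concatenate([longitudinal_pool.ravel(), lateral_pool.ravel()])
        return weights @ pooled  # one inner product per scene class

    # Example reusing the hypothetical shapes from the previous sketch.
    rng = np.random.default_rng(0)
    long_out, lat_out = rng.random((64, 32)), rng.random((64, 32))
    W = rng.random((5, 64 * 32 * 2))  # five invented scene classes
    print(scene_scores(long_out, lat_out, W).shape)  # (5,)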

Regarding claim 27, Choi in view of Kim teaches, The method of claim 24, wherein classifying the scene (Choi, ¶0006: “a plurality of cameras for including all viewing angles needs to be used to recognize a surrounding 360-degree environment”) further comprises: generating one or more scene classification scores (Choi, ¶0003: “recognizing or detecting an object through a statistical classifier using the acquired feature values”) using the generated inner products. (Choi, ¶0006: “extract integrated surrounding environment awareness information from the fused feature value”).

Regarding claim 29, Choi in view of Kim teaches, The method of claim 24, wherein the sensor system is at least one of a radio detection and ranging (RADAR) or a light detection and ranging (LIDAR) system. (Choi, ¶0002: “data acquired from a variety of sensors (e.g., a camera, a LiDAR, and a radar)”).

Regarding claim 31, Choi in view of Kim teaches, The method of claim 24, further comprising: generating feature maps from the sensor data provided by the sensor system, wherein generating the feature maps comprises processing the sensor data (Choi, ¶0016: “information generation method for 360-degree detection and recognition of a surrounding object proposed herein includes acquiring a feature map from a multi-sensor signal”) through an object detection system. (Choi, ¶0053: “method may apply to various artificial intelligence technologies for recognizing an environment or an object”).

Regarding claim 32, Choi in view of Kim teaches, The method of claim 31, wherein the object detection system comprises an artificial neural network architecture. (Choi, ¶0016: “recognition of a surrounding object proposed herein includes a sensor data collector configured to acquire a feature map from a multi-sensor signal using a DNN”).

Regarding claim 33, Choi in view of Kim teaches, The method of claim 32, wherein the artificial neural network architecture is a Radar Deep Object Recognition network. (Choi, ¶0007: “information of a camera, a LiDAR, and a radar, based on a deep learning network in a situation in which object recognition information for autonomous driving”).
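
Purely as orientation for claims 31-33, the sketch below shows the kind of small convolutional backbone that could stand in for an object detection system whose intermediate output is a feature map. The class name, layer sizes, and input shape are invented; this is not the deep learning network described in Choi.

    import torch
    import torch.nn as nn

    class TinyDetectionBackbone(nn.Module):
        """Illustrative stand-in for a detection network that turns sensor data
        into feature maps; nothing here is taken from the cited references."""
        def __init__(self, in_channels=1, out_channels=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, out_channels, kernel_size=3, padding=1), nn.ReLU(),
            )

        def forward(self, x):
            return self.features(x)

    # A single-channel 32x32 "sensor image" yields a 64-channel feature map.
    fmap = TinyDetectionBackbone()(torch.rand(1, 1, 32, 32))
    print(fmap.shape)  # torch.Size([1, 64, 32, 32])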

Regarding claim 34, it recites a system with elements corresponding to the steps of the method recited in claim 24.  Therefore, the recited elements of system claim 34 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 24.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  Additionally, Choi teaches, A system comprising: one or more processors; (Choi, ¶0060: “system including a graphic processor unit”) and a non-transitory computer-readable medium coupled to the one or more processors, the non-transitory computer-readable medium storing instructions that, when executed by the one or more processors, (Choi, ¶0063: “computer storage medium or device, to be interpreted by the processing device or to provide an instruction or data to the processing device”) cause the one or more processors to: receive, (Choi, ¶0060: “multi-sensor information acquired from an embedded system including a graphic processor unit”) via an input, feature maps (Choi, ¶0058: “a LiDAR feature map as an input”).

Regarding claim 35, it recites a system with elements corresponding to the steps of the method recited in claim 25.  Therefore, the recited elements of system claim 35 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 25.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 36, it recites a system with elements corresponding to the steps of the method recited in claim 26.  Therefore, the recited elements of system claim 36 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 26.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 37, it recites a system with elements corresponding to the steps of the method recited in claim 27.  Therefore, the recited elements of system claim 37 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 27.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 39, it recites a system with elements corresponding to the steps of the method recited in claim 29.  Therefore, the recited elements of system claim 39 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 29.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 41, it recites a system with elements corresponding to the steps of the method recited in claim 31.  Therefore, the recited elements of system claim 41 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 31.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 42, it recites a system with elements corresponding to the steps of the method recited in claim 32.  Therefore, the recited elements of system claim 42 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 32.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.  

Regarding claim 43, it recites a system with elements corresponding to the steps of the method recited in claim 33.  Therefore, the recited elements of system claim 43 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 33.  Additionally, the rationale and motivation to combine Choi and Kim presented in rejection of claim 24, apply to this claim.

Claims 28 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 2023/0082097 A1), in view of Kim et al. (US 10,303,980 B1) and further in view of Freeman et al. (US 2019/0220709 A1).

Regarding claim 28, Choi in view of Kim teaches, The method of claim 27.  However, the combination of Choi and Kim does not explicitly teach wherein the one or more scene classification scores provide a probability value indicating the probability that an associated scene is detected.

In an analogous field of endeavor, Freeman teaches, wherein the one or more scene classification scores provide a probability value indicating the probability that an associated scene is detected. (Freeman, ¶0013: “if road scenes are considered, there can be a class for vehicles, another class for pedestrians, another class for roads and another class for buildings. Since there are four predetermined classes in this example, for each image four probability values, in particular pseudo probability values, are generated. The probability value for one of the classes then indicates the probability that the image shows an object from this particular class”).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Kim using the teachings of Freeman to introduce probability calculations.  A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of generating a value for indicating the probability of target detection.  Therefore, it would have been obvious to combine the analogous arts Choi, Kim and Freeman to obtain the invention in claim 28.  
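
As a purely illustrative companion to the Freeman passage quoted above, a softmax over the scene classification scores yields per-class values that can be read as (pseudo) probabilities that the associated scene is detected. Softmax is an assumption here; the office action only quotes Freeman's description of per-class probability values.

    import numpy as np

    def scene_probabilities(scores):
        """Map raw scene classification scores to pseudo probability values that
        sum to one, in the spirit of the per-class probabilities Freeman describes."""
        shifted = scores - scores.max()  # subtract the max for numerical stability
        exp = np.exp(shifted)
        return exp / exp.sum()

    print(scene_probabilities(np.array([2.0, 0.5, -1.0])))  # approx. [0.79 0.18 0.04]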

Regarding claim 38, it recites a system with elements corresponding to the steps of the method recited in claim 28.  Therefore, the recited elements of system claim 38 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 28.  Additionally, the rationale and motivation to combine Choi, Kim and Freeman presented in rejection of claim 28, apply to this claim.  

Claims 30 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 2023/0082097 A1), in view of Kim et al. (US 10,303,980 B1) and further in view of Ji Su Kim (US 2022/0082689 A1).

Regarding claim 30, Choi in view of Kim teaches, The method of claim 29, wherein the feature maps represent a vehicle-centric (Choi, ¶0015: “object detection or a region segmentation is performed by reconstructing precision map information around an own vehicle as a 2D image, by acquiring the feature map”) coordinate system, (Choi, ¶0016: “a coordinate system converter configured to convert the acquired feature map to an integrated 3D coordinate system”).  However, the combination of Choi and Kim does not explicitly teach wherein a direction of rows and columns of the feature maps are parallel with respective longitudinal and lateral axes of the vehicle-centric coordinate system.

In an analogous field of endeavor, Ji Su Kim teaches, wherein a direction of rows and columns of the feature maps are parallel with respective longitudinal and lateral axes of the vehicle-centric coordinate system. (Ji Su Kim, ¶0012: “The grid map including the plurality of cells may include a plurality of rows and columns that extend parallel to longitudinal and lateral directions of the vehicle from the front or rear side of the vehicle”).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Choi in view of Kim using the teachings of Ji Su Kim to introduce a map with rows and columns parallel to longitudinal and lateral directions of the vehicle.  A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of a feature map representing the surroundings of the vehicle.  Therefore, it would have been obvious to combine the analogous arts Choi, Kim and Ji Su Kim to obtain the invention in claim 30.  
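
For claim 30 as mapped above, the hypothetical sketch below builds indices into a vehicle-centric grid aligned with the vehicle's longitudinal and lateral axes, in the spirit of the Ji Su Kim passage quoted. The cell size, grid extent, and axis conventions are assumptions, not values from any reference.

    import numpy as np

    def to_grid_indices(points_xy, cell_size=0.5, extent=50.0):
        """Map ego-frame points (x longitudinal/forward, y lateral/left, in metres)
        to (row, col) cells of a square grid centred on the vehicle, with one grid
        index per vehicle axis.  cell_size and extent are illustrative values."""
        n = int(2 * extent / cell_size)
        rows = ((points_xy[:, 0] + extent) / cell_size).astype(int)  # follows longitudinal axis
        cols = ((points_xy[:, 1] + extent) / cell_size).astype(int)  # follows lateral axis
        keep = (rows >= 0) & (rows < n) & (cols >= 0) & (cols < n)
        return rows[keep], cols[keep]

    # Two in-range points and one beyond the assumed 50 m extent.
    pts = np.array([[10.0, -3.0], [0.0, 0.0], [120.0, 5.0]])
    r, c = to_grid_indices(pts)
    print(np.column_stack([r, c]))  # [[120  94] [100 100]]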

Regarding claim 40, it recites a system with elements corresponding to the steps of the method recited in claim 30.  Therefore, the recited elements of system claim 40 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 30.  Additionally, the rationale and motivation to combine Choi, Kim and Ji Su Kim presented in rejection of claim 30, apply to this claim.  

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.




/MEHRAZUL ISLAM/Examiner, Art Unit 2662                                                                                                                                                                                                        

/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662                                                                                                                                                                                                        
