Patent Application 18308179 - COMPUTER-READABLE RECORDING MEDIUM STORING - Rejection

From WikiPatents

Title: COMPUTER-READABLE RECORDING MEDIUM STORING TRAINING PROGRAM AND IDENTIFICATION PROGRAM, AND TRAINING METHOD

Application Information

  • Invention Title: COMPUTER-READABLE RECORDING MEDIUM STORING TRAINING PROGRAM AND IDENTIFICATION PROGRAM, AND TRAINING METHOD
  • Application Number: 18308179
  • Submission Date: 2025-05-20
  • Effective Filing Date: 2023-04-27
  • Filing Date: 2023-04-27
  • National Class: 382
  • National Sub-Class: 103000
  • Examiner Employee Number: 81106
  • Art Unit: 2671
  • Tech Center: 2600

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 1

Cited Patents

No patents were cited in this rejection.

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 4/27/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.


Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The flow chart in MPEP 2106, Subject Matter Eligibility Test For Products and Processes, will be referred to in establishing that the claimed subject matter is ineligible.
Regarding claim 1:
Step 1: The claim recites a non-transitory computer-readable recording medium, which would be categorized as a device under the four recognized statutory categories.
Step 2A Prong One: However, the claim is further directed to the abstract ideas of acquiring and classifying images, which are mental processes (see MPEP 2106.04(a)(2)); and to the abstract ideas of calculating a feature amount of the image and training a machine learning model, which are mathematical calculations (see MPEP 2106.04(a)(2)).
Step 2A Prong Two: Additional elements include generic computer elements (memory). The addition of generic computer elements amounts to merely an instruction to apply the abstract idea using generic computer elements, and does not integrate the judicial exception into a practical application (see MPEP 2106.05(d)).
Step 2B: The additional claim elements do not amount to significantly more than the judicial exception, as explained above. Therefore, the claim is ineligible.
Regarding claims 2-6, the additional limitations are directed to the abstract ideas of acquiring processing, which is a mental process (see MPEP 2106.04(a)(2)), and training a model, which is a mathematical calculation (see MPEP 2106.04(a)(2)). The addition of further judicial exceptions does not amount to significantly more, and therefore the claims are all ineligible.
Regarding claims 7-13, the rationale provided in the rejection of claim 1 is incorporated herein. In addition, the recording medium of claim 1 corresponds to the recording medium of claim 7, and the recording media of claims 1-6 correspond to the methods of claims 8-13, which perform the steps disclosed herein. Therefore, the claims are all ineligible.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-11, and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ruan (Liheng Ruan et al. “Facial expression recognition in facial occlusion scenarios: A path selection multi-network”. 2022 June 07).
Regarding claim 1, Ruan discloses a non-transitory computer-readable recording medium storing a training program for causing a computer to execute processing (Ruan Page 3/8, embodied within the database in order to implement and execute the training) comprising: 
acquiring a plurality of images that includes a face of a person (Ruan Page 3/8 and Fig 2, images are acquired into a database that include faces of a person); 
classifying the plurality of images, based on a combination of whether or not an action unit related to a motion of a specific portion of the face occurs and whether or not an occlusion is included in an image in which the action unit occurs (Ruan Page 3/8 and Fig. 1-2, the images are classified based on expression and where the occlusion occurs); 
calculating a feature amount of the image by inputting each of the plurality of classified images into a machine learning model (Ruan Page 3/8 and Table 1, calculating the amount of images in the database for each expression); and 
training the machine learning model so as to decrease a first distance between feature amounts of an image in which the action unit occurs and an image with an occlusion with respect to the image in which the action unit occurs and to increase a second distance between feature amounts of the image with the occlusion with respect to the image in which the action unit occurs and an image with an occlusion with respect to an image in which the action unit does not occur (Ruan Page 4/8-5/8, Fig. 3 and Table 3, training the model to better predict the expression associated with an inputted image depending on the occlusion based upon the classified images used for training).
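The two distance relationships recited in this claim can be illustrated with a minimal sketch. The feature vectors and the use of Euclidean distance here are assumptions for illustration only; the claim does not specify a metric, and this is not part of the office action or the cited art.

```python
import math

def claim1_distances(feat_au, feat_au_occluded, feat_no_au_occluded):
    """Compute the two distances the claimed training adjusts.

    feat_au:             feature amount of an image in which the action unit occurs
    feat_au_occluded:    feature amount of an occluded image in which the
                         action unit occurs
    feat_no_au_occluded: feature amount of an occluded image in which the
                         action unit does not occur
    """
    # First distance: between the action-unit image and its occluded
    # counterpart; the claimed training decreases this.
    d_first = math.dist(feat_au, feat_au_occluded)
    # Second distance: between the occluded action-unit image and an
    # occluded non-action-unit image; the claimed training increases this.
    d_second = math.dist(feat_au_occluded, feat_no_au_occluded)
    return d_first, d_second
```

Training toward these objectives would cluster occluded and unoccluded examples of the same action-unit state while separating occluded examples of opposite states.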

Regarding claim 2, Ruan discloses the non-transitory computer-readable recording medium according to claim 1, wherein the acquiring processing refers to a storage unit that stores a plurality of face images of a person to which whether or not the action unit occurs is added (Ruan Page 3/8 and Fig. 2, the database stores the different facial expressions used for training an input image), based on an input image with correct answer information that indicates whether or not the action unit occurs and acquires an image of which whether or not the action unit occurs is opposite to whether or not the action unit occurs in the input image (Ruan Page 6/8 and Fig. 7, the prediction results from the training model showing the accuracy of the expression within the image and whether it is correct or not).

Regarding claim 3, Ruan discloses the non-transitory computer-readable recording medium according to claim 2, wherein the acquiring processing acquires an image with an occlusion by shielding a part of the image, based on the input image and the acquired image (Ruan Page 3/8 and Fig. 1-Fig. 3, the unknown image with an occlusion is input into the machine learning model and trained from different occlusions within the upper face, lower face, or eye mask).

Regarding claim 4, Ruan discloses the non-transitory computer-readable recording medium according to claim 3, wherein the acquiring processing shields at least a part of an action portion related to the action unit (Ruan Page 3/8 and Fig 2, the expressions identified include different occlusions).

Regarding claim 6, Ruan discloses the non-transitory computer-readable recording medium according to claim 1, for causing a computer to further execute processing comprising:
training an identification model so as to output whether or not an action unit occurs indicated by correct answer information, in a case where a feature amount obtained by inputting an image to which the correct answer information that indicates whether or not the action unit occurs is added into the machine learning model is input (Ruan Page 6/8 and Fig. 7, the results indicate whether the prediction for the given expression is correct for the associated occlusion).

Regarding claim 7, Ruan discloses a non-transitory computer-readable recording medium storing an identification program for causing a computer to execute processing (Ruan Page 3/8, embodied within the database in order to implement and execute the identification training) comprising: 
calculating a feature amount of an image by inputting each of a plurality of images classified based on a combination of whether or not an action unit related to a motion of a specific portion of a face of a person occurs and whether or not an occlusion is included in an image in which the action unit occurs into a machine learning model and acquiring the machine learning model that is trained to decrease a distance between feature amounts of an image in which the action unit occurs and an image with an occlusion with respect to the image in which the action unit occurs and to increase a distance between feature amounts of an image with an occlusion with respect to the image in which the action unit occurs and an image with an occlusion with respect to an image in which the action unit does not occur (Ruan Page 3/8-5/8, Fig. 1-3 and Table 3, calculating the amount of images in the database for each expression, such that the images are classified based on expression and where the occlusion occurs, such that the model is trained to better predict the expression associated with an inputted image depending on the occlusion based upon the classified images used for training); and
identifying whether or not a specific action unit occurs in a face of a person included in an image to be identified, based on a feature amount obtained by inputting the image to be identified that includes the face of the person into the acquired machine learning model (Ruan Page 6/8 and Fig. 7, identifying whether the prediction for the associated expression is correct for the associated occlusion).
Regarding claims 8-11 and 13, the rationale provided in the rejection of claims 1-4 and 6 is incorporated herein. In addition, the recording media of claims 1-4 and 6 correspond to the methods of claims 8-11 and 13, which perform the steps disclosed herein.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ruan in view of Hyunwoo (J. Hyunwoo. “Learning Entity and Relation Embedding for Knowledge Graph Completion”. 2019 November 13).
Regarding claim 5, Ruan does not disclose, but Hyunwoo teaches, the non-transitory computer-readable recording medium according to claim 1, wherein the training processing trains the machine learning model based on a loss function Loss of a formula (1):

    [Formula (1): image of the loss function Loss, media_image1.png]

when the first distance is set to d_o, the second distance is set to d_au, a margin parameter regarding the first distance is set to m_o, and a margin parameter regarding the second distance is set to m_au (Hyunwoo Page 2/6, the loss function of the model is taught to include the distance and margins).
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to modify the training processing of Ruan with the teachings of Hyunwoo to include the loss function formula, in order to obtain a more accurate result by keeping the distance between corresponding samples as small as possible.
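Because formula (1) appears only as an image in the record, a commonly used margin-based form consistent with the recited variables (d_o, d_au, m_o, m_au) can be sketched as follows. This is an assumption for illustration, not the applicant's actual formula or Hyunwoo's.

```python
def margin_loss(d_o, d_au, m_o, m_au):
    # Penalize the first distance d_o when it exceeds its margin m_o
    # (pull matched pairs closer together), and penalize the second
    # distance d_au when it falls below its margin m_au (push
    # mismatched pairs further apart).
    return max(d_o - m_o, 0.0) + max(m_au - d_au, 0.0)
```

With this form the loss is zero whenever the first distance is within its margin and the second distance exceeds its margin, matching the training goal recited in claim 1.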

Regarding claim 12, the rationale provided in the rejection of claim 5 is incorporated herein. In addition, the recording medium of claim 5 corresponds to the method of claim 12, which performs the steps disclosed herein.

Conclusion
The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure is: Kage (US Pub 20070122005).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Rudolph whose telephone number is (571)272-8243. The examiner can normally be reached M-F 7:30 AM - 3:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.



/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671



