Patent Application 18296400 - DOMAIN ADAPTION FOR PROSTATE CANCER DETECTION - Rejection

Application Information

  • Invention Title: DOMAIN ADAPTION FOR PROSTATE CANCER DETECTION
  • Application Number: 18296400
  • Submission Date: 2025-05-21
  • Effective Filing Date: 2023-04-06
  • Filing Date: 2023-04-06
  • National Class: 382
  • National Sub-Class: 131000
  • Examiner Employee Number: 84557
  • Art Unit: 2682
  • Tech Center: 2600

Rejection Summary

  • 102 Rejections: 1
  • 103 Rejections: 1

Cited Patents

The following patents were cited in the rejection:

  • US 2023/0111306 (Anand et al.)

Office Action Text


    DETAILED ACTION
Notice of AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102
2.	The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.


3. Claims 1-3, 5-12 and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chiou et al (Harnessing Uncertainty in Domain Adaptation for MRI Prostate Lesion Segmentation, Jan. 18, 2021) (Applicant submitted reference).
Regarding claim 1, Chiou et al teaches: A computer-implemented method for performing a medical imaging analysis task using a machine learning based model, comprising: receiving one or more input medical images acquired using one or more out-of-distribution image acquisition parameters and having out-of-distribution imaging properties, the one or more out-of-distribution image acquisition parameters and the out-of-distribution imaging properties being out-of-distribution with respect to training data on which the machine learning based model is trained [Abstract (Medical images with parameters differing from those in training data.)]; generating one or more synthesized medical images from the one or more input medical images using a machine learning based generator network, the one or more synthesized medical images being generated for one or more in-distribution image acquisition parameters and having in-distribution imaging properties [page 3: p01, page 5: Diverse Image-to-Image Translation Network (Employs a GAN to translate OOD images into the style of the in-distribution target domain by generating multiple outputs/synthesized images.)], the one or more in-distribution image acquisition parameters and the in-distribution imaging properties being in-distribution with respect to the training data on which the machine learning based model is trained [page 3: method]; performing the medical imaging analysis task based on the one or more synthesized medical images using the machine learning based model; and outputting results of the medical imaging analysis task [page 9: p02: Impact of the ratio of synthesized to real data on the performance (Final output includes segmentation results from synthesized images possessing in-distribution image properties.)].
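For claim 1 as mapped above, the recited inference pipeline can be summarized in a minimal sketch, assuming hypothetical PyTorch modules generator (out-of-distribution to in-distribution translation) and task_model (the trained analysis model); the names and framework are assumptions for illustration, not taken from the claims or the cited reference:

    # Minimal sketch of the recited inference pipeline (hypothetical module names).
    import torch
    import torch.nn as nn

    def run_analysis(ood_image: torch.Tensor,
                     generator: nn.Module,
                     task_model: nn.Module) -> torch.Tensor:
        # Translate the out-of-distribution input into the in-distribution style,
        # then apply the trained analysis model to the synthesized image.
        with torch.no_grad():
            synthesized = generator(ood_image)    # in-distribution style image
            prediction = task_model(synthesized)  # e.g. lesion segmentation/detection map
        return prediction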

Regarding claim 2, Chiou et al further teaches: The computer-implemented method of claim 1, wherein the out-of-distribution image acquisition parameters and the in-distribution image acquisition parameters comprise parameters of an MRI (magnetic resonance imaging) scanner [page 3: p02].

Regarding claim 3, Chiou et al further teaches: The computer-implemented method of claim 2, wherein the parameters of the MRI scanner comprise b-value settings [page 7: 2.3].

Regarding claim 5, Chiou et al further teaches: The computer-implemented method of claim 1, wherein the machine learning based generator network is jointly trained with another machine learning based generator network, the other machine learning based generator network generating synthesized out-of-distribution medical images from in-distribution training medical images [page 8: 3 Result].

Regarding claim 6, Chiou et al further teaches: The computer-implemented method of claim 5, wherein the machine learning based generator network generates reconstructed in-distribution medical images from the synthesized out-of-distribution medical images and wherein the machine learning based generator network is trained based on segmentation predictions from the reconstructed in-distribution medical images generated using the machine learning based model [page 3: p01].
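Claims 5 and 6 recite joint training of two generators with reconstruction and segmentation-based supervision. A minimal sketch of one such training step, assuming hypothetical modules g_in2ood, g_ood2in, and seg_model (none of these names come from the claims or the cited reference):

    # Hypothetical joint-training step: one generator maps in-distribution images to the
    # out-of-distribution style, the other maps them back; the reconstructed image is also
    # passed through the segmentation model so its predictions stay consistent with those
    # on the original image.
    import torch.nn.functional as F

    def joint_step(x_in, g_in2ood, g_ood2in, seg_model):
        x_ood_syn = g_in2ood(x_in)      # synthesized out-of-distribution image
        x_in_rec = g_ood2in(x_ood_syn)  # reconstructed in-distribution image
        cycle_loss = F.l1_loss(x_in_rec, x_in)
        seg_loss = F.mse_loss(seg_model(x_in_rec), seg_model(x_in).detach())
        return cycle_loss + seg_loss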

Regarding claim 7, Chiou et al further teaches: The computer-implemented method of claim 1, wherein the machine learning based generator network is trained using a discriminator network, the discriminator network distinguishing between 1) synthesized in-distribution images generated by the machine learning based generator network and corresponding segmentation predictions generated from the machine learning based model and 2) real in-distribution images and corresponding segmentation predictions as being real or synthesized [page 5: Diverse Image-to-Image Translation Network].
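The pair-wise discriminator input recited in claim 7 can be sketched as follows, assuming a hypothetical discriminator disc that scores a channel-wise concatenation of an image and its segmentation prediction:

    # Hypothetical discriminator pass over (image, segmentation prediction) pairs:
    # the discriminator must judge whether each pair is real or synthesized.
    import torch

    def discriminator_scores(disc, seg_model, real_in, synth_in):
        real_pair = torch.cat([real_in, seg_model(real_in)], dim=1)
        fake_pair = torch.cat([synth_in, seg_model(synth_in)], dim=1)
        return disc(real_pair), disc(fake_pair)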

Regarding claim 8, Chiou et al further teaches: The computer-implemented method of claim 1, wherein the one or more input medical images comprises one or more mpMRI (multi-parametric magnetic resonance imaging) images [page 3: p02].

Regarding claim 9, Chiou et al further teaches: The computer-implemented method of claim 1, wherein the one or more input medical images depict a prostate of a patient and the medical imaging analysis task is detection of prostate cancer [page 3: p02].

Claims 10-12 and 14 have been analyzed and rejected with regard to claims 1-3 and 7 respectively.

Claims 15-20 have been analyzed and rejected with regard to claims 1, 2, 5, 6, 8, and 9 respectively and in accordance with Chiou et al's further teaching on: A non-transitory computer readable medium storing computer program instructions for performing a medical imaging analysis task using a machine learning based model [abstract (A non-transitory computer readable medium storing computer program instructions would be an inherent structure for implementing the image analysis process.)].

Claim Rejections - 35 USC § 103
4.	The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.  Patentability shall not be negated by the manner in which the invention was made.


5. Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Chiou et al (Harnessing Uncertainty in Domain Adaptation for MRI Prostate Lesion Segmentation, Jan. 18, 2021) (Applicant submitted reference) and in further view of Anand et al (US Pub: 2023/0111306).
Regarding claim 4, Chiou et al does not specify generating synthesized medical images based on metadata.  In the same field of endeavor, Anand et al teaches: The computer-implemented method of claim 1, wherein generating one or more synthesized medical images from the one or more input medical images using a machine learning based generator network comprises: generating the one or more synthesized medical images based on metadata of values of the out-of-distribution image acquisition parameters and values of the in-distribution image acquisition parameters [p0053, p0060].  Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the two to generate synthesized medical images based on metadata associated with the respective images, so as to make use of the attribute information.
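The metadata-conditioned generation recited in claim 4 can be sketched as below, assuming a hypothetical generator that accepts an image together with a vector of acquisition-parameter values (e.g. source and target b-values); the signature is an assumption for illustration, not the disclosure of either reference:

    # Hypothetical metadata-conditioned translation: acquisition parameters of the input
    # (out-of-distribution) and desired (in-distribution) images are encoded as a vector
    # and supplied to the generator alongside the image.
    import torch

    def translate_with_metadata(generator, image, source_params, target_params):
        meta = torch.tensor(list(source_params) + list(target_params),
                            dtype=torch.float32).unsqueeze(0)
        return generator(image, meta)  # generator assumed to accept (image, metadata)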

Claim 13 has been analyzed and rejected with regard to claim 4.

Contact
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAN ZHANG whose telephone number is (571)270-3751. The examiner can normally be reached on Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu can be reached on 571-272-7490.  The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.  Status information for published applications may be obtained from either Private PAIR or Public PAIR.  Status information for unpublished applications is available through Private PAIR only.  For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).  If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

    /Fan Zhang/
    Patent Examiner, Art Unit 2682
