Patent Application 18328688 - SYSTEMS AND METHODS FOR PROCESSING FORMATTED - Rejection

From WikiPatents

Title: SYSTEMS AND METHODS FOR PROCESSING FORMATTED DATA IN COMPUTATIONAL STORAGE

Application Information

  • Invention Title: SYSTEMS AND METHODS FOR PROCESSING FORMATTED DATA IN COMPUTATIONAL STORAGE
  • Application Number: 18328688
  • Submission Date: 2025-05-21
  • Effective Filing Date: 2023-06-02
  • Filing Date: 2023-06-02
  • National Class: 712
  • National Sub-Class: 220000
  • Examiner Employee Number: 82556
  • Art Unit: 2182
  • Tech Center: 2100

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

No patents were cited in this rejection.

Office Action Text


    DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection.  Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114.  Applicant's submission filed on March 24, 2025, has been entered.
 
Claims 1-20 are pending and presented for examination in this Office action. Claims 1, 3, 6, 8, 10, 13, 15-16, and 19 are newly amended by the RCE received April 22, 2025.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-6, 8-9, 12-13, 15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (Liu) (Processing-in-Memory for Energy-efficient Neural Network Training: A Heterogeneous Approach) in view of Applicant Admitted Prior Art (AAPA).
Consider claim 1, Liu discloses a method for performing computations, the method comprising: receiving, at a storage device, first data associated with a first data set, the first data having a first format (page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receiving, at a processor core of the storage device (page 658, section III. A., first paragraph, lines 3-4, a programmable PIM, which is an ARM core), a request to perform a function on the first data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM), the function (for example, page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM) comprising a first operation (for example, page 659, right column, first indented paragraph, line 13, computation phase 1) and a second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)); performing, by a first acceleration engine of the processor core (page 658, section III. A., first paragraph, line 3, programmable PIM), the first operation on the first data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on first instructions stored on the processor core (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs); and performing, by a first circuit of the storage device that is external to the processor core (page 658, section III. 
A., first paragraph, line 4, massive fixed-function PIMs), the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).
However, Liu does not disclose that the first data comprises first page data corresponding to a first database page.
On the other hand, AAPA discloses first page data corresponding to a first database page (paragraphs [0055]-[0056] and paragraph [0065]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the conventional subject matter disclosed by AAPA with the invention of Liu in order to accelerate processing of database data in particular. Alternatively, this modification entails simple substitution of one known element (the data of Liu) for another (database page data in particular, of AAPA) to obtain predictable results (the invention of Liu, wherein the data which is received and operated on is database page data in particular), which is an example of a rationale that may support a conclusion of obviousness as per MPEP 2143.

Consider claim 2, the overall combination entails the method of claim 1 (see above), wherein the storage device is configured to receive the request to perform the function via a communication protocol (Liu, page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM; Examiner submits that some form of communication protocol must be present for the storage device to validly process what the storage device is receiving).

Consider claim 5, the overall combination entails the method of claim 1 (see above), further comprising: receiving, at the storage device, second data associated with a second data set, the second data having a second format (Liu, page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receiving a request to perform the function on the second data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM); and performing, by a second acceleration engine of the processor core (page 658, section III. A., first paragraph, line 3, programmable PIM; page 659, left column, section: Heterogeneous PIM platform module, first paragraph, lines 6-7, each programmable PIM), the first operation on the second data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on at least one second instruction stored on the processor core (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs).

Consider claim 6, the overall combination entails the method of claim 5 (see above), further comprising performing, by the first circuit of the storage device (Liu, page 658, section III. A., first paragraph, line 4, massive fixed-function PIMs), the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the at least one second instruction (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).

Consider claim 8, Liu discloses a system for performing computations, the system comprising: a processor core (page 658, section III. A., first paragraph, lines 3-4, a programmable PIM, which is an ARM core) storing first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM) and comprising a first acceleration engine of the processor core (page 658, section III. A., first paragraph, line 3, programmable PIM); and a first circuit of the system that is external to the processor core and communicatively coupled to the processor core (page 658, section III. A., first paragraph, line 4, massive fixed-function PIMs), wherein the processor core is implemented in hardware (page 658, section III. A., first paragraph, lines 3-4, a programmable PIM, which is an ARM core) and is configured to: receive first data associated with a first data set, the first data having a first format (page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receive a request to perform a function on the first data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM), the function (for example, page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM) comprising a first operation (for example, page 659, right column, first indented paragraph, line 13, computation phase 1) and a second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)); cause the first acceleration engine (page 658, section III. 
A., first paragraph, line 3, programmable PIM) to perform the first operation on the first data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on the first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs); and cause the first circuit to perform the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).
However, Liu does not disclose that the first data comprises first page data corresponding to a first database page.
On the other hand, AAPA discloses first page data corresponding to a first database page (paragraphs [0055]-[0056] and paragraph [0065]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the conventional subject matter disclosed by AAPA with the invention of Liu in order to accelerate processing of database data in particular. Alternatively, this modification entails simple substitution of one known element (the data of Liu) for another (database page data in particular, of AAPA) to obtain predictable results (the invention of Liu, wherein the data which is received and operated on is database page data in particular), which is an example of a rationale that may support a conclusion of obviousness as per MPEP 2143.

Consider claim 9, the overall combination entails the system of claim 8 (see above), wherein the processor core is configured to receive the request to perform the function via a communication protocol (Liu, page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM; Examiner submits that some form of communication protocol must be present for the storage device to validly process what the storage device is receiving).

Consider claim 12, the overall combination entails the system of claim 8 (see above), wherein the processor core is configured to: receive second data associated with a second data set, the second data having a second format (Liu, page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receive a request to perform the function on the second data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM); and cause a second acceleration engine of the processor core to perform (page 658, section III. A., first paragraph, line 3, programmable PIM; page 659, left column, section: Heterogeneous PIM platform module, first paragraph, lines 6-7, each programmable PIM) the first operation on the second data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on at least one second instruction stored on the processor core (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs).

Consider claim 13, Liu discloses the system of claim 12 (see above), wherein the processor core is configured to cause the first circuit to perform (Liu, page 658, section III. A., first paragraph, line 4, massive fixed-function PIMs) the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the at least one second instruction (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).

Consider claim 15, Liu discloses a storage device for performing computations, the storage device comprising: a processor core (page 658, section III. A., first paragraph, lines 3-4, a programmable PIM, which is an ARM core) storing first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM) and comprising a first acceleration engine (page 658, section III. A., first paragraph, line 3, programmable PIM); and a first circuit of the storage device that is external to the processor core and communicatively coupled to the processor core (page 658, section III. A., first paragraph, line 4, massive fixed-function PIMs), wherein the processor core is implemented in hardware (page 658, section III. A., first paragraph, lines 3-4, a programmable PIM, which is an ARM core) and is configured to: receive first data associated with a first data set, the first data having a first format (page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receive a request to perform a function on the first data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM), the function (for example, page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM) comprising a first operation (for example, page 659, right column, first indented paragraph, line 13, computation phase 1) and a second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)); cause the first acceleration engine (page 658, section III. 
A., first paragraph, line 3, programmable PIM) to perform the first operation on the first data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on the first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs); and cause the first circuit to perform the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the first result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the first instructions (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).
However, Liu does not disclose that the first data comprises first page data corresponding to a first database page.
On the other hand, AAPA discloses first page data corresponding to a first database page (paragraphs [0055]-[0056] and paragraph [0065]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the conventional subject matter disclosed by AAPA with the invention of Liu in order to accelerate processing of database data in particular. Alternatively, this modification entails simple substitution of one known element (the data of Liu) for another (database page data in particular, of AAPA) to obtain predictable results (the invention of Liu, wherein the data which is received and operated on is database page data in particular), which is an example of a rationale that may support a conclusion of obviousness as per MPEP 2143.

Consider claim 18, the overall combination entails the storage device of claim 15 (see above), wherein the processor core is configured to: receive second data associated with a second data set, the second data having a second format (Liu, page 662, right column, first unindented paragraph, line 1, input data of the operations; page 663, section VI. A., first paragraph, lines 12-13, data transfer between the CPU and main memory); receive a request to perform the function on the second data (page 660, Table II, host submits work to accelerators; page 662, left column, first unindented paragraph, lines 1-3, the runtime on CPU is only responsible for offloading a kernel – which can have a part of its computation offloadable to fixed-function PIMs – to the programmable PIM); and cause a second acceleration engine of the processor core to perform (page 658, section III. A., first paragraph, line 3, programmable PIM; page 659, left column, section: Heterogeneous PIM platform module, first paragraph, lines 6-7, each programmable PIM) the first operation on the second data (for example, page 659, right column, first indented paragraph, line 13, computation phase 1), based on at least one second instruction stored on the processor core (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM), to generate second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs).

Consider claim 19, the overall combination entails the storage device of claim 18 (see above), wherein the processor core is configured to cause the first circuit that is external to the processor core to perform (Liu, page 658, section III. A., first paragraph, line 4, massive fixed-function PIMs) the second operation (for example, page 659, right column, first indented paragraph, line 15, convolution computation (“Conv(...)” in the figure)) on the second result data (page 659, right column, first indented paragraph, lines 3-4, a kernel in the programmable PIM can trigger data processing with fixed-function PIMs; page 660, second column, first unindented paragraph, lines 10-14, between the fixed-function PIMs and programmable PIM, we employ the same consistency scheme: updates to memory locations by the entire set of fixed-function PIMs are not visible until the end of the kernel call to the fixed-function PIMs), based on the at least one second instruction (page 659, right column, first indented paragraph, line 3, kernel in the programmable PIM).

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu and AAPA as applied to claims 1, 8, and 15 above, and further in view of Ahuja et al. (Ahuja) (US 20050273614 A1).
Consider claim 7, the combination thus far entails the method of claim 1 (see above), wherein: the request is received via an application programming interface (API) of the storage device (Liu, page 661, section IV. A., first paragraph, lines 4-5, the API achieves the following functionality: (1) offloading a specific operation into specific PIM(s)). 
However, the combination thus far does not entail that the first operation is a parsing operation or a decoding operation; and the second operation is a scan operation.
On the other hand, Ahuja discloses a first operation being a parsing operation or a decoding operation; and a second operation being a scan operation ([0042], lines 4-6, engine (not shown) that parses the queries, scans the tag database 42, and retrieves the found object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ahuja with the combination of Liu and AAPA in order to increase system capability by supporting data search via parsing and scanning operations. 

Consider claim 14, the combination thus far entails the system of claim 8 (see above), wherein: the request is received via an application programming interface (API) of the storage device (Liu, page 661, section IV. A., first paragraph, lines 4-5, the API achieves the following functionality: (1) offloading a specific operation into specific PIM(s)). 
However, the combination thus far does not entail that the first operation is a parsing operation or a decoding operation; and the second operation is a scan operation.
On the other hand, Ahuja discloses a first operation being a parsing operation or a decoding operation; and a second operation being a scan operation ([0042], lines 4-6, engine (not shown) that parses the queries, scans the tag database 42, and retrieves the found object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ahuja with the combination of Liu and AAPA in order to increase system capability by supporting data search via parsing and scanning operations.

Consider claim 20, the combination thus far entails the storage device of claim 15 (see above), wherein: the request is received via an application programming interface (API) coupled to the processor core (Liu, page 661, section IV. A., first paragraph, lines 4-5, the API achieves the following functionality: (1) offloading a specific operation into specific PIM(s)). 
However, the combination thus far does not entail that the first operation is a parsing operation or a decoding operation; and the second operation is a scan operation.
On the other hand, Ahuja discloses a first operation being a parsing operation or a decoding operation; and a second operation being a scan operation ([0042], lines 4-6, engine (not shown) that parses the queries, scans the tag database 42, and retrieves the found object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ahuja with the combination of Liu and AAPA in order to increase system capability by supporting data search via parsing and scanning operations.

Allowable Subject Matter
Claims 3-4, 10-11, and 16-17 are allowed.

Response to Arguments
Applicant on page 10 argues: “Applicant resubmits herewith the drawings originally submitted on December 30, 2024 (the ‘Drawings’), which were also sent to the Examiner on March 19, 2025, via email. As discussed during the Interview, Applicant submits that, after conducting a reasonable review of the Drawings, the Drawings appear to have been made by a process giving them satisfactory reproduction characteristics. Accordingly, Applicant respectfully requests that the objections be withdrawn.”
In view of the aforementioned drawings, the previously presented objection to the drawings is overcome.

Applicant on page 10 argues: “Applicant has amended claims 6, 13, and 19, and respectfully requests that the rejections of claims 6, 13, and 19 under § 112(b) be withdrawn.”
In view of the aforementioned amendments, the previously presented rejections of claims 6, 13, and 19 are withdrawn.

Applicant on page 12 argues: “Accordingly, Applicant respectfully requests the rejections of claims 1, 8, and 15 be withdrawn and that these claims be allowed.”
In view of the newly amended limitations of independent claims 1, 8, and 15, Examiner is now relying on 35 U.S.C. § 103 to reject the aforementioned claims; see the Claim Rejections - 35 USC § 103 section above. Examiner also notes that Xi et al. (cited in the IDS received June 2, 2023) also appears to render obvious the newly amended limitation; see, for example, page 1, section: NDP in Modern Database System. While it may not be obvious to modify Liu in the particular manner necessary to result in the overall subject matter of claim 3, Examiner submits, after consideration of both the Liu reference and the general (and specific) state of the art of database processing before the effective filing date of the claimed invention, that it would have nevertheless still been obvious for the first data of Liu to be, in particular, first page data corresponding to a first database page.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEITH E VICARY whose telephone number is (571)270-1314. The examiner can normally be reached Monday to Friday, 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Caldwell can be reached at (571)272-3702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/KEITH E VICARY/
Primary Examiner, Art Unit 2182