Patent Application 18296316 - Machine Learning model for Occupancy Detection - Rejection
Application Information
- Invention Title: Machine Learning model for Occupancy Detection
- Application Number: 18296316
- Submission Date: 2025-05-13
- Effective Filing Date: 2023-04-05
- Filing Date: 2023-04-05
- National Class: 382
- National Sub-Class: 103000
- Examiner Employee Number: 88366
- Art Unit: 2675
- Tech Center: 2600
Rejection Summary
- 101 Rejections: 1
- 102 Rejections: 1
- 103 Rejections: 3
Cited Patents
The following patents were cited in the rejection:
- Meggers et al., US 2021/0208002 A1
- Persegol et al., US 2023/0314227 A1
- Velipasalar et al., US 2020/0089967 A1 (made of record but not relied upon)
Office Action Text
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of mental processes without significantly more.

Claims 1 and 11 recite "identify, by an object detection system, a heat signature generated by an object, and wherein the object has a type; generate a bounding box around the heat signature; determine the type of the object which generated the heat signature; and determine an occupancy of the room based on a number of bounding boxes corresponding to objects of a specified type," all of which can be done mentally by looking at thermal images, or by simply printing and drawing on them. This judicial exception is not integrated into a practical application, nor does it include additional elements sufficient to amount to significantly more than the judicial exception, because the recited "preprocess the thermal data" is mere insignificant extra-solution activity, and the recited "an electronic device, comprising: a computer processor; a thermal camera configured to capture thermal data, the thermal data comprising a heat signature of an object present in a room; and a non-transitory computer readable medium, comprising stored instructions that when executed by the computer processor cause the electronic device to: receive thermal data from a thermal camera, the thermal data comprising a heat signature generated by an object present in a room" merely recites generic components that are commercially available to anyone.

- Claims 2 and 12 recite generic components that are commercially available to anyone.
- Claims 3 and 13 recite insignificant extra-solution activity.
- Claims 4 and 14 recite the use of a machine learning model, considered generic, and confidence scores, considered insignificant extra-solution activity. See the USPTO 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence.
- Claims 5 and 15 recite the use of a machine learning model, considered generic, and the training, considered insignificant extra-solution activity. See the USPTO 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence.
- Claims 6 and 16 recite generic components that are commercially available to anyone.
- Claims 7 and 17 recite processes that can be done mentally.
- Claims 8 and 18 recite processes that can be done mentally and/or with pen and paper.
- Claims 9 and 19 recite processes that can be done mentally and/or with pen and paper.
- Claims 10 and 20 recite processes that can be done mentally.
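To make the counting limitation concrete: once typed bounding boxes exist, the recited occupancy determination reduces to a single tally. The sketch below is purely illustrative and is not code from the application or any cited reference; the `Detection` type and `room_occupancy` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    object_type: str   # e.g., "person" or "radiator" (hypothetical labels)
    box: tuple         # (x, y, width, height) of the bounding box

def room_occupancy(detections, specified_type="person"):
    """Occupancy = number of bounding boxes whose object type matches."""
    return sum(1 for d in detections if d.object_type == specified_type)

# Three person boxes and one non-person heat source -> occupancy of 3.
detections = [
    Detection("person", (10, 20, 30, 60)),
    Detection("person", (50, 22, 28, 58)),
    Detection("person", (90, 18, 31, 61)),
    Detection("radiator", (0, 0, 15, 40)),
]
assert room_occupancy(detections) == 3
```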
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6 and 11-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tse, Rita, et al., "Privacy aware crowd-counting using thermal cameras," Twelfth International Conference on Digital Image Processing (ICDIP 2020), Vol. 11519, SPIE, 2020 (hereinafter "Tse").

Regarding claim 11, Tse discloses an electronic device, comprising: a computer processor (see Tse pgs. 2 and 3, and Fig. 1, where a "computer" is used); a thermal camera configured to capture thermal data (see Tse pgs. 2 and 3, Table 1, and Figs. 1 and 2, where a thermal camera captures thermal images), the thermal data comprising a heat signature of an object present in a room (see Tse pg. 6, and Figs. 4, 7, and 8, where thermal images are captured of people and other objects located in rooms, including both a classroom and a hallway); and a non-transitory computer readable medium, comprising stored instructions that when executed by the computer processor cause the electronic device to (see Tse pgs. 2 and 3, and Figs. 1, 13, and 14, where a "computer" is used to execute software): receive thermal data from a thermal camera (see Tse pgs. 2 and 3, Table 1, and Figs. 1 and 2, where a thermal camera captures thermal images), the thermal data comprising a heat signature generated by an object present in a room (see Tse pg. 6, and Figs. 4, 7, and 8, where thermal images are captured of people and other objects located in rooms, including both a classroom and a hallway); preprocess the thermal data (see Tse pg. 4, where the thermal images are converted from raw data to color pictures); identify, by an object detection system, a heat signature generated by an object, wherein the object has a type (see Tse pg. 8, and Figs. 12-15, where "object detection" is performed on the preprocessed thermal image); generate a bounding box around the heat signature (see Tse pgs. 8 and 9, and Figs. 13-15, where bounding boxes are drawn on the images); determine the type of the object which generated the heat signature (see Tse Abstract, pgs. 8 and 9, and Figs. 13-15, where objects are classified as humans); and determine an occupancy of the room based on a number of bounding boxes corresponding to objects of a specified type (see Tse Abstract, pgs. 8 and 9, and Figs. 13-15, where the model is "able to classify and count humans," and the resulting bounding boxes are predicted to be humans labeled as "person0", "person1", and "person2", thereby indicating an occupancy of three people). Claim 1 is rejected under the same analysis as claim 11 above.
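For orientation, here is a minimal sketch of the claim 11 pipeline as mapped onto Tse (receive thermal data, preprocess, detect, count person boxes). The `run_yolo` stub stands in for Tse's trained YOLOv3 model and returns canned detections here; its interface is an assumption, not taken from Tse or the application.

```python
import numpy as np

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Convert a raw thermal array to a normalized 8-bit image (cf. Tse pg. 4)."""
    lo, hi = float(raw.min()), float(raw.max())
    scale = (hi - lo) or 1.0          # avoid division by zero on a flat frame
    return ((raw - lo) / scale * 255).astype(np.uint8)

def run_yolo(image: np.ndarray) -> list:
    """Stand-in for a trained YOLOv3 detector; returns canned detections here."""
    return [
        {"type": "person", "box": (12, 8, 10, 22), "confidence": 0.99},
        {"type": "person", "box": (40, 9, 11, 21), "confidence": 0.99},
    ]

def count_occupancy(raw_frame: np.ndarray) -> int:
    image = preprocess(raw_frame)     # preprocess the thermal data
    detections = run_yolo(image)      # identify heat signatures, draw boxes
    return sum(d["type"] == "person" for d in detections)

# Tse's sensor yields a 60*80 integer array roughly every 2 seconds.
frame = np.random.randint(2900, 3100, size=(60, 80))
print(count_occupancy(frame))         # -> 2 with the canned detections above
```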
Regarding claim 12, Tse discloses wherein the thermal data from the thermal camera comprises one or more thermal images captured in sequence, each thermal image further comprising a matrix of pixel values (see Tse pgs. 2-4 and Table 1, where images are captured "every 2 seconds" and the thermal images are "a 60*80 integer array" converted to a "480 * 360" pixel matrix). Claim 2 is rejected under the same analysis as claim 12 above.

Regarding claim 13, Tse discloses wherein the instruction that causes the computer processor to preprocess thermal data from the thermal camera comprises instructions that when executed by the computer processor, cause the computer processor to perform one or more of: removing noise from the one or more thermal images (see Tse pg. 7, where "Flat Field Correction" is used to "remove artifacts" and/or "distortions"); and normalize pixel values of the one or more thermal images (see Tse pg. 4, where a "Normalization Formula" is applied). Claim 3 is rejected under the same analysis as claim 13 above.

Regarding claim 14, Tse discloses wherein the instruction that causes the computer processor to determine the type of the object which generated the heat signature comprises instructions that when executed by the computer processor, cause the computer processor to: access, by an object detection system, one or more captured thermal images including heat signatures generated by corresponding objects, each object having an object type; and apply, by the object detection system, a trained machine-learned model to the one or more captured thermal images to predict an object type and a location of objects within each thermal image, and to produce an associated confidence score representative of a prediction of the object type for each object (see Tse Abstract, pgs. 6-9, and Figs. 13-15, where the YOLOv3 model is trained to be "able to classify and count humans," and the resulting bounding boxes are predicted with "99%" confidence to be humans labeled as "person0", "person1", and "person2"). Claim 4 is rejected under the same analysis as claim 14 above.

Regarding claim 15, Tse discloses wherein the machine-learned model was trained by a process comprising: access, by the object detection system, a set of training data, the set of training data comprising thermal images including heat signatures generated by corresponding objects; and train, by the object detection system, the machine-learned model using the set of training data, the machine-learned model configured to predict the object type associated with a heat signature and a location of the object within a thermal image (see Tse Abstract, pgs. 6-9, and Figs. 13-15, where the YOLOv3 model is trained to be "able to classify and count humans," and the resulting bounding boxes are predicted with "99%" confidence to be humans labeled as "person0", "person1", and "person2"). Claim 5 is rejected under the same analysis as claim 15 above.

Regarding claim 16, Tse discloses wherein the set of training data comprises thermal images including heat signatures of human and non-human objects (see Tse Figs. 4, 7, 8, and 14, where the images comprise both humans and other objects, such as plants, walls, floors, chairs, and tables). Claim 6 is rejected under the same analysis as claim 16 above.
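As a concrete illustration of the claim 13 preprocessing steps mapped above (noise removal and pixel normalization), here is a hedged sketch. The median filter is one common denoising choice and is an assumption here; Tse itself relies on flat-field correction, which requires sensor calibration data not shown.

```python
import numpy as np

def denoise(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Median-filter each pixel over a k x k neighborhood to suppress speckle."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    # Stack all k*k shifted views of the image, then take the per-pixel median.
    shifted = [padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.median(np.stack(shifted), axis=0)

def normalize(image: np.ndarray) -> np.ndarray:
    """Min-max normalize pixel values to [0, 1] (cf. Tse's normalization formula)."""
    lo, hi = image.min(), image.max()
    if hi <= lo:
        return np.zeros_like(image, dtype=float)
    return (image - lo) / (hi - lo)

clean = normalize(denoise(np.random.randint(2900, 3100, size=(60, 80))))
```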
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 7, 8, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tse as applied to claims 2 and 12 above, and further in view of Meggers et al., US 2021/0208002 A1 (hereinafter "Meggers").

Regarding claim 17, Tse does not explicitly disclose wherein the non-transitory computer readable medium further comprises instructions that when executed by the computer processor, cause the computer processor to: track a path of the heat signature of the object using the one or more thermal images captured in sequence. However, Meggers discloses this limitation (see Meggers paras. 0101 and 0107, where humans are tracked in a sequence of thermal images). It would have been obvious to one of ordinary skill in the art before the effective filing date to track the people in Tse's images using the tracking algorithm of Meggers, because it is predictable that doing so would improve the accuracy of Tse's human count (see Meggers para. 0107: "[b]y tracking the humans, an accurate count can be maintained"). Claim 7 is rejected under the same analysis as claim 17 above.
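A minimal sketch of the claim 17 limitation (tracking a path of a heat signature across thermal images captured in sequence). The `assign_ids` matcher is left abstract and assumed here; one distance-based criterion in the spirit of Meggers is sketched after the claim 18 discussion below.

```python
from collections import defaultdict

def centroid(box):
    """Centre of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def track_paths(frames_of_boxes, assign_ids):
    """Return {track_id: [centroid per frame present]} across a frame sequence."""
    paths = defaultdict(list)
    current = {}                              # track_id -> last known box
    for boxes in frames_of_boxes:
        current = assign_ids(current, boxes)  # persistent IDs for this frame
        for track_id, box in current.items():
            paths[track_id].append(centroid(box))
    return dict(paths)
```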
Regarding claim 18, Tse discloses the limitations receive a first thermal image indicating a location of the bounding box associated with a first time; receive a second thermal image indicating a location of the bounding box associated with a second time; and assign a unique identification number to the bounding box in the first thermal image (see Tse Abstract, pgs. 8 and 9, and Figs. 13-15, where bounding boxes are predicted to be humans labeled as "person0", "person1", and "person2"). Tse does not explicitly disclose instructions that cause the computer processor to: calculate a distance traveled by the bounding box, based in part on the location of the bounding box from the first time to the second time; and responsive to the distance traveled meeting a predefined distance criterion, assign the unique identification number to the bounding box in the second thermal image. However, Meggers discloses these limitations (see Meggers paras. 0101 and 0107: ". . . the system could be detecting if new objects enter the field of view and determining if they are human-shaped. If so, the system could create bounding box coordinates for the human-shaped object, compute a centroid of each, then tracking the object in the field of view by, e.g., when comparing a first scan to a successive scan of the same area, assuming that centroids with minimum Euclidean distance between them are the same object"). Claim 8 is rejected under the same analysis as claim 18 above.
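A hedged sketch of the claim 18 distance criterion, following the minimum-Euclidean-distance centroid matching Meggers describes: a box in the second image keeps the unique identification number of the nearest box in the first image when the centroid displacement meets the threshold; otherwise it receives a fresh number. `MAX_DIST` is a hypothetical threshold, and `centroid` is reused from the claim 17 sketch above.

```python
import math
from itertools import count

_new_id = count()
MAX_DIST = 25.0   # pixels; an assumed predefined distance criterion

def assign_ids(previous: dict, boxes: list) -> dict:
    """Carry an ID over when a box moved less than MAX_DIST; else mint a new one."""
    assigned, unused = {}, dict(previous)
    for box in boxes:
        c = centroid(box)
        best = min(unused, key=lambda i: math.dist(c, centroid(unused[i])),
                   default=None)
        if best is not None and math.dist(c, centroid(unused[best])) <= MAX_DIST:
            assigned[best] = box   # same object: keep its unique ID
            del unused[best]
        else:
            assigned[next(_new_id)] = box   # new object entering the scene
    return assigned
```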
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tse in view of Meggers as applied to claims 8 and 18 above, and further in view of Ding, Meng, et al., "Action recognition of individuals on an airport apron based on tracking bounding boxes of the thermal infrared target," Infrared Physics & Technology 117 (2021): 103859 (hereinafter "Ding").

Regarding claim 19, Tse does not explicitly disclose wherein the non-transitory computer readable medium further comprises instructions that when executed by the computer processor, cause the computer processor to: determine a location vector for the bounding box, based in part on a change in location of the bounding box from the first time to the second time. However, Ding discloses this limitation (see Ding pgs. 5 and 6, "4.1. Visual tracking based on short-term spatiotemporal features", where the tracked bounding boxes in thermal images are described using the feature vector "[a, b, w, h]", denoting the position, width, and height of the bounding boxes in each frame, which changes to reflect the new location of the bounding box as the underlying person moves over time). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the feature vector of Ding to describe and store the bounding boxes of Tse, as modified by Meggers, because it is predictable that Ding's vector would be a more efficient description of the bounding boxes than storing each and every coordinate of the entire bounding box for each image frame. Claim 9 is rejected under the same analysis as claim 19 above.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tse in view of Meggers and Ding as applied to claims 9 and 19 above, and further in view of Persegol et al., US 2023/0314227 A1 (hereinafter "Persegol").

Regarding claim 20, Tse does not explicitly disclose wherein the instruction that causes the computer processor to determine a location vector for the bounding box based in part on a change in location of the bounding box from the first time to the second time further comprises instructions that when executed by the computer processor, cause the computer processor to: remove bounding boxes having a location vector that does not meet an object criterion. However, Persegol discloses this limitation (see Persegol paras. 0120-0122, where an object criterion is used that does not "identify and/or count people" whose location vector is positioned within a designated "portion of the scene"). It would have been obvious to one of ordinary skill in the art before the effective filing date to allow Tse's users to use Persegol's object criterion, as previously modified by Meggers and Ding, to control exactly which portions of a scene to search for persons, because it is predictable that doing so would improve the accuracy of people detection and classification by eliminating other known hot bodies present in the scene that are likely to be misclassified, such as "a radiator" (see Persegol para. 0121). Claim 10 is rejected under the same analysis as claim 20 above.
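To visualize the Ding-style location vector applied above for claim 19: each tracked box is stored per frame as [a, b, w, h], and the change between two times gives the box's motion. A short sketch with hypothetical helper names:

```python
import numpy as np

def location_vector(box) -> np.ndarray:
    """[a, b, w, h]: top-left position plus width and height of the box."""
    return np.asarray(box, dtype=float)

def motion_vector(box_t1, box_t2) -> np.ndarray:
    """Change in the location vector from the first time to the second."""
    return location_vector(box_t2) - location_vector(box_t1)

# A box that moved 5 px right and 2 px down between frames:
print(motion_vector((10, 20, 30, 60), (15, 22, 30, 60)))  # [5. 2. 0. 0.]
```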
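Likewise, for the claim 20 object criterion drawn from Persegol, a hedged sketch of removing bounding boxes located within a designated portion of the scene (for example, around a known radiator). The zone coordinates and helper names are hypothetical:

```python
EXCLUDED_ZONES = [(0, 0, 20, 45)]   # (x, y, w, h) regions not to search

def in_zone(point, zone) -> bool:
    """True if a point falls inside an (x, y, w, h) rectangle."""
    px, py = point
    zx, zy, zw, zh = zone
    return zx <= px <= zx + zw and zy <= py <= zy + zh

def apply_object_criterion(boxes: list) -> list:
    """Remove bounding boxes whose centre lies in any excluded scene portion."""
    def center(box):
        x, y, w, h = box
        return (x + w / 2, y + h / 2)
    return [b for b in boxes
            if not any(in_zone(center(b), z) for z in EXCLUDED_ZONES)]
```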
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- Velipasalar et al., US 2020/0089967 A1, discloses an occupancy detection and/or counting system (see Velipasalar para. 0027).
- Gomez, Andres, Francesco Conti, and Luca Benini, "Thermal image-based CNN's for ultra-low power people recognition," Proceedings of the 15th ACM International Conference on Computing Frontiers, 2018, discloses a thermal image people recognition system (see Gomez Abstract and Fig. 1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW M MOYER, whose telephone number is (571) 272-9523. The examiner can normally be reached Monday-Friday, 9-5 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Director John Barlow, can be reached at 571-272-4550. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW M MOYER/
Supervisory Patent Examiner, Art Unit 2675