Patent Application 17808735 - LINE SCANNER HAVING INTEGRATED PROCESSING - Rejection
Application Information
- Invention Title: LINE SCANNER HAVING INTEGRATED PROCESSING CAPABILITY
- Application Number: 17808735
- Submission Date: 2025-05-12
- Effective Filing Date: 2022-06-24
- Filing Date: 2022-06-24
- National Class: 348
- National Sub-Class: 522000
- Examiner Employee Number: 91286
- Art Unit: 2485
- Tech Center: 2400
Rejection Summary
- 102 Rejections: 0
- 103 Rejections: 1
Cited Patents
The following patents were cited in the rejection:
- Zheng, US PG Publication 2020/0225030
- Rafii, US PG Publication 2017/0272728 A1
Office Action Text
DETAILED ACTION

Election/Restrictions

Applicant’s election without traverse of Claims 1-11 in the reply filed on 11/19/2024 is acknowledged.

Newly submitted claims 16, 18, and 20 are directed to an invention that is independent or distinct from the invention originally claimed for the following reasons: they include features from non-elected claims 12-15; they include a difference in distance that does not exist in the elected species; and the elected species has cameras that detect both projected patterns and reflective markers. Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 16-20 are withdrawn from consideration as being directed to a non-elected invention. See 37 CFR 1.142(b) and MPEP § 821.03.

To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.

Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-11, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zheng (US PG Publication 2020/0225030) in view of Rafii (US PG Publication 2017/0272728 A1).

Regarding Claim 1, Zheng (US PG Publication 2020/0225030) discloses a system comprising: a first light source (pattern projector 104 [0025], Fig. 1) operable to project at least one line of light onto an object (the pattern projector 104 may be a single-line or multi-line line laser that operates with the second waveband [0031]); a second light source (the photogrammetric module provides supplemental lighting on the measured object from a supplemental light source with a first waveband [0027]) operable to illuminate reflective markers associated with the object (illuminate the markers disposed on or adhered to the surface of the object [0027]); a first camera and a second camera (at least one of the two cameras is a multipurpose camera 101 used for both photogrammetry and three-dimensional scanning [0025]), each having at least one image sensor (two cameras [0025]) that receives first reflected light from the at least one line of light (the three-dimensional scanning module is configured to perform, by operating the two cameras and the one pattern projector 104, three-dimensional scanning on the measured object [0025]) and second reflected light from the reflective markers (the photogrammetric module is configured to perform, by operating the multipurpose camera 101, global photogrammetry on a measured object and obtain three-dimensional coordinates of markers on a surface of the object [0025]); at least one processor (inherent, see computations in [0032]-[0035]) that determines locations of the at least one line of light (obtain image two-dimensional coordinates of feature points projected by the pattern projector onto the surface of the measured object [0033]) on the at least one image sensor (two-dimensional images 202, 203 [0033]) based at least in part on the received first reflected light (feature points projected by the pattern projector onto the surface of the measured object [0033]), the at least one processor further determining locations of the reflective markers (the photogrammetric module of the system obtains the three-dimensional coordinates of the non-encoded markers on the surface of the measured object [0032]) based at least in part on the second reflected light (operating the multipurpose camera to capture a set of raw images 201 of the measured object from different positions and angles [0032]); and a frame physically coupled to each of the first light source, the second light source, and the at least one image sensor (a frame member 103 that fixes the two cameras and the pattern projector is used to ensure that the relative position between the two cameras and the pattern projector is fixed [0031]); a scanner device (scanner device of Fig. 1) comprising the frame (frame member 103, Fig. 1, [0031]) and the at least one image sensor (multipurpose camera 101, Fig. 1, [0025]), which are physically coupled to the frame (camera is on the frame, Fig. 1) and integrated within the scanner device (see Fig. 1, where the camera is on the frame and together the camera and the frame are the scanner device), wherein the at least one processor causes each of the first camera and the second camera to simultaneously process the first reflected light and the second reflected light received by the at least one image sensor (see Fig. 2 and [0032]-[0033]: the only dependency between the photogrammetric module and the three-dimensional scanning module is the output of the second 3D reconstructor 204, which means that the preceding functions in each processing chain can be performed simultaneously; also, Fig. 3 and [0034] show synchronous image capture of both photogrammetry and 3D scanning images, indicating that both subsequent processes can operate simultaneously. Note that although Zheng does not explicitly state these operations occur simultaneously, it is within the ordinary skill to perform them simultaneously, as Zheng has already disclosed their dependencies and timing).

Zheng does not disclose, but Rafii (US PG Publication 2017/0272728 A1) teaches, a frame (system 10 is a hand-held device [0050]-[0051], Fig. 2A) physically coupled to the one or more processors (includes processor, FPGA, host processor 108, Fig. 1, [0050]); a scanner device (scanning system, Abstract; scanning system 10, Fig. 1, 2A, [0071]) comprising the at least one processor (host processor 108, Fig. 1, which is local to the scanning system 10 [0071], [0050]) physically coupled to the frame (scanning system 10 is a handheld device [0051], Fig. 2A, and it can be seen from Fig. 2A that scanning system 10 is a rectangular object about the size of a smart phone, having the cameras and the processor inside it. Although Rafii does not disclose where inside the scanning system 10 the processor 108 is physically placed, those of ordinary skill in the art can deduce that it is physically installed and not floating or falling around the scanning system. It is therefore physically connected to a frame).
One of ordinary skill in the art before the application was filed would have been motivated to modify Zheng with the on-board processor of Rafii and divide computation between the on-board processor and the network computer, as Rafii does, because it was well-known before the application was filed that smaller hand-held devices usually incorporate lower powered computing capabilities compared with network computers, and it was customary in the art before the application was filed to transmit data to a back-end computer for performing additional computations and returning the result to the hand-held device, providing a good balance of power consumption and operability to the hand-held device, and providing convenience to the user.

Regarding Claim 2, Zheng (US PG Publication 2020/0225030) discloses the system of claim 1 wherein the frame includes a handle (handheld [0025], Fig. 1).

Regarding Claim 3, Zheng (US PG Publication 2020/0225030) discloses the system of claim 2 wherein the system is sized for handheld operation without attachment to an external mechanical device (handheld [0025]).
Regarding Claim 4, Zheng (US PG Publication 2020/0225030) discloses the system of claim 1 wherein: the at least one image sensor includes a first image sensor that receives an image that includes the first reflected light and the second reflected light (one of the two cameras is a multipurpose camera 101 [0025], used for both photogrammetry and three-dimensional scanning [0025]; the photogrammetric module provides supplemental lighting on the measured object from a supplemental light source with a first waveband, and the three-dimensional scanning module provides supplemental lighting on the measured object from a supplemental light source with a second waveband [0027]); and the at least one processor further determines the locations on the at least one image sensor of the reflective markers and of the projected lines of light (identifying features such as the encoded markers, the non-encoded markers, and the contour lines projected by the pattern projector through an image matching algorithm, and then obtaining a coordinate set of the center coordinates of the markers and of the center line of the contour lines on the image [0035]), the determined locations based at least in part on the image (one of the two cameras is a multipurpose camera 101 [0025], used for both photogrammetry and three-dimensional scanning [0025]).

Regarding Claim 5, Zheng (US PG Publication 2020/0225030) discloses the system of claim 1. Zheng does not disclose, but Rafii (US PG Publication 2017/0272728 A1) teaches, wherein the at least one processor includes at least one field programmable gate array (FPGA) (system 10 includes FPGA [0050]-[0051], Fig. 2A).
One of ordinary skill in the art before the application was filed would have been motivated to modify Zheng with the on-board processor of Rafii and divide computation between the on-board processor and the network computer, as Rafii does, because it was well-known before the application was filed that smaller hand-held devices usually incorporate lower powered computing capabilities compared with network computers, and it was customary in the art before the application was filed to provide data to a back-end computer for performing additional computations and returning the result to the hand-held device, providing a good balance of power consumption and operability to the hand-held device, and providing convenience to the user.

Regarding Claim 6, Zheng (US PG Publication 2020/0225030) discloses the system of claim 3. Zheng does not disclose, but Rafii (US PG Publication 2017/0272728 A1) teaches, wherein the at least one processor sends the determined locations of the reflective markers and the determined locations of the projected lines of light (transmit the generated point clouds or the raw images [0111]) to a remote computing unit for further processing to determine 3D coordinates of points on the object (generating the 3D point clouds may also be performed by a remote processor 18, which may communicate with the central processing unit 220 through a network communications interface [0111]), the remote computing unit being separate from the scanner device (communicates over network interface [0111]), the remote computing unit comprising at least one of a wearable computing unit, an external computer, and a networked computer (processor 18 communicates over network interface [0111]).
One of ordinary skill in the art before the application was filed would have been motivated to modify Zheng with the on-board processor of Rafii and divide computation between the on-board processor and the network computer, as Rafii does, because it was well-known before the application was filed that smaller hand-held devices usually incorporate lower powered computing capabilities compared with network computers, and it was customary in the art before the application was filed to transmit data to a back-end computer for performing additional computations and returning the result to the hand-held device, providing a good balance of power consumption and operability to the hand-held device, and providing convenience to the user.

Regarding Claim 7, Zheng (US PG Publication 2020/0225030) discloses the system of claim 6. Zheng does not disclose, but Rafii (US PG Publication 2017/0272728 A1) teaches, wherein the remote computing unit sends the determined 3D coordinates of points on the object to a mobile device for display (the 3D model may be transmitted back to the scanning device 10 and displayed 152 on the display component 150 of the scanning system 10 [0111]).

One of ordinary skill in the art before the application was filed would have been motivated to modify Zheng with the on-board processor of Rafii and divide computation between the on-board processor and the network computer, as Rafii does, for the reasons stated in the rejection of Claim 1: it was well-known before the application was filed that smaller hand-held devices usually incorporate lower powered computing capabilities compared with network computers, and it was customary in the art to transmit data to a back-end computer for performing additional computations and returning the result to the hand-held device, providing a good balance of power consumption and operability to the hand-held device, and providing convenience to the user.
Regarding Claim 8, Zheng (US PG Publication 2020/0225030) discloses storing the determined locations of the one or more lines of light and the determined locations of the one or more markers (data structure is sorted and stored [0036]). The remainder of Claim 8 is rejected on the grounds provided in Claim 1.

Regarding Claim 9, Zheng (US PG Publication 2020/0225030) discloses the method of claim 8 further comprising operating the system in a handheld mode, the system being unattached to an articulated arm coordinate measuring machine (AACMM) (handheld [0025]).

Regarding Claim 10, the claim is rejected on the grounds provided in Claim 4.

Regarding Claim 11, the claim is rejected on the grounds provided in Claim 6.

Regarding Claim 17, Zheng (US PG Publication 2020/0225030) discloses the method of claim 8, wherein the at least one processor causes the first camera and the second camera to simultaneously determine the locations of the line of light on the image sensors (first 2D image extractor, second 2D image extractor, Fig. 2) based at least in part on the first reflected light, and the locations of the reflective markers on the image sensors based at least in part on the second reflected light (image 2D coordinates 211 of the encoded markers and the non-encoded markers in each image [0032]; two-dimensional coordinates 221 of the non-encoded markers in each image and image two-dimensional coordinates of feature points projected by the pattern projector onto the surface of the measured object [0033]).
Regarding Claim 19, Zheng (US PG Publication 2020/0225030) discloses the system of Claim 1, wherein the scanner device comprises the first light source and the second light source (supplemental light source 106 is a set of LEDs of which the center waveband is the second waveband, and the supplemental light source 105 is a set of LEDs of which the center waveband is the first waveband; pattern projector 104 may be a single-line or multi-line line laser that operates with the second waveband; Fig. 1, [0031]).

Response to Arguments

Applicant’s remarks filed 4/25/2025 have been considered but are unpersuasive. Applicant argues in the Remarks at pp. 11-12 that Rafii does not disclose a collection of features that are mapped to Zheng. This argument is not persuasive against the combination of references because the prima facie case of obviousness does not require Rafii to disclose the features mapped to Zheng. Obviousness is based on what the combination of references suggests to those of ordinary skill in the art, and Applicant’s arguments against Rafii alone are not persuasive against the combination of Zheng modified by Rafii. Zheng is modified by Rafii to include an on-board processor. Applicant concedes that Rafii teaches an on-board processor. Remarks at 13. Thus, the combination of references teaches all of the elements that Applicant alleges that Rafii does not alone show.

At p. 17 of the Remarks, Applicant traverses the rationale to combine the references, but Applicant does not provide any reason for the traversal. Applicant instead reiterates Rafii’s failure to teach features mapped to Zheng. This is not persuasive against the motivation to combine, as the motivation statement on record sets forth a well-known problem-solution that is addressed by the combination.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- US 7912673 B2
- US 20150138349 A1
- US 10309770 B2

THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI whose telephone number is (571) 270-5631. The examiner can normally be reached M-F 8-7.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHADAN E HAGHANI/ Examiner, Art Unit 2485