Patent Application 17707379 - APPARATUSES AND METHODS FOR GENERATING AUGMENTED - Rejection

Title: APPARATUSES AND METHODS FOR GENERATING AUGMENTED REALITY INTERFACE

Application Information

  • Invention Title: APPARATUSES AND METHODS FOR GENERATING AUGMENTED REALITY INTERFACE
  • Application Number: 17707379
  • Submission Date: 2025-04-09
  • Effective Filing Date: 2022-03-29
  • Filing Date: 2022-03-29
  • National Class: 705
  • National Sub-Class: 304000
  • Examiner Employee Number: 96438
  • Art Unit: 3626
  • Tech Center: 3600

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

No patents were cited in this rejection.

Office Action Text


    DETAILED ACTION
Notice of Pre-AIA or AIA Status
       The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
No amendments were made to the pending claims, which are hereby entered.
Claims 1, 4 - 10, 12, 14 - 17 and 19 - 25 are pending and have been examined. 
This action is made FINAL.
Response to Arguments
Applicant's arguments filed January 23, 2025 have been fully considered but they are not persuasive. 
Arguments regarding the 112(a) rejection have been considered and were found to be persuasive in view of cited paragraph ¶0081 of the applicant's specification. Therefore, this rejection has been withdrawn in view of the applicant's remarks on pp. 2 – 4. Specifically, one of ordinary skill in the art would recognize with reasonable clarity from the disclosure (the last two sentences of ¶0081 of the specification) the subset of available actions for an item based on “a length of the temporal difference” and the omission of other virtual user interface elements associated with other predefined actions. See MPEP 2163(I)(b) and (II)(A) for more details.
Regarding the applicant's arguments against the 101 rejection of the pending claims on pages 4 – 16: The applicant's arguments directed to Step 2A Prong 1 and to the Step 2A Prong 2 – Step 2B analysis have been considered. However, these arguments are not persuasive, and the Examiner respectfully disagrees for the following reasons:
For Step 2A-Prong 1, starting on p. 7: The applicant argues that “the Office Action’s characterization of these elements as involving advertising, marketing or sales activities or behaviors is based on improper importing of elements of the description into the claims”. The Examiner disagrees, because this is not how the Examiner concluded that the claimed invention and its pending claims are directed to an abstract idea. First, the pending claims were analyzed and “evaluated after determining what [the] applicant has invented by reviewing the entire application disclosure and construing the claims in accordance with their broadest reasonable interpretation (BRI)” for Step 2A-Prong 1 and 2 (see MPEP § 2106, subsection II, for more information about the importance of understanding what the applicant has invented, and MPEP § 2111 for more information about the BRI). On that basis, the amended claims and their limitation steps, as a whole and individually, were still considered to be directed to “commercial interactions or legal interactions”, because the claim language and its limitations are directed to determining that a detected object (e.g. a purchased product) is an identified item associated with an identified user account and their history (including the time of a “temporal milestone” and the current time) in order to provide virtual user interface elements associated with actions or “post-purchase actions” (as previously referred to by the applicant in ¶0081 of the specification), which encompass advertising, marketing or sales activities or behaviors. Thus, the Examiner disagrees that any particular embodiment from the written description was read into a claim, or that a specific order of the steps was improperly read in based on the examples provided by the applicant, in view of MPEP 2111.01(II). Rather, the limitations were interpreted in light of the specification and given their “broadest reasonable interpretation”.

Second, the applicant argues on p. 10 of the remarks that the “subject matter of claim 14 may be usable in the domain of commerce, but it is not directed to advertising, marketing or sales activities”, and is instead directed to “providing improvements to an AR interface”. The Examiner disagrees, because the AR interface is providing these “virtual user interface element[s]” associated with “available actions” that the user can interact with based on the “length of the temporal difference being within the respective valid temporal range of each available action”. These “temporal ranges”, by definition of the time and history tracked for a product item, are directed to at least sales activities or behaviors in view of ¶0081 of the applicant's specification. Therefore, no improvement to an AR interface is apparent other than using AR technology and a computer as a tool to perform an abstract idea (see MPEP 2106.04(d)(I) and MPEP 2106.05(f)), and thus no inventive concept is provided at Step 2B.

Lastly, the applicant's comparison between claim 14 and Example vi is misapplied, and the Examiner disagrees, because Example vi is a hypothetical example that provides claims that do not recite (set forth or describe) an abstract idea; it does not define 101 eligibility based on an improvement to the functioning of the GUI (see MPEP 2106.04(a)(1)). Compared to Example vi, the applicant's claim limitations still recite, at a high level of generality, the provision of an “AR interface” (being applied) with “virtual user interface elements” rendered and overlaid in real-time for display purposes, without further limiting how at least the “rendering” functions are specifically achieved, when the claims are evaluated individually and as a whole. Thus, the claim limitations reciting the abstract idea are not meaningfully limited and are “generally linking the use of the judicial exception to a particular technological environment”, such as technology related to “AR interfaces”, as referred to by the applicant, which in turn fails to “reflect the disclosed improvement for the functioning of a computer or any other technology/technical field” (see MPEP 2106.04(d)(1)). Moreover, the applicant's response on p. 13 as to how the “rendering” functions, which merely re-states that the “AR interface is rendered specifically rendering only virtual user interface elements that have been identified as being valid”, is not persuasive to cure the deficiency of reflecting or further limiting how the claimed AR interface with this rendering function is distinct from other AR interfaces performing the same function. Therefore, no improvements to an AR interface are apparent other than using AR technology and a computer as a tool to perform an abstract idea (see MPEP 2106.04(d)(I) and MPEP 2106.05(f)).

For Step 2A-Prong 2 and Step 2B, starting on p. 14: The applicant's arguments regarding the use of an “image capture device to capture the real-world video and the image output device to output the virtual user interface elements overlaid on the real-world video” to provide an “AR interface” are not persuasive, because the claims’ additional elements (including the processor, the image capture device, the image output device and a non-transitory computer readable medium), individually and in combination, do not integrate the judicial exception into a practical application. This is due to the following reasons:
the recitation of the claims “must include the components or steps of the invention that provide the improvement described in the specification” (see MPEP 2106.05(a)); instead, the claims invoke “computers merely as a tool” to perform the business process, which merely adds the words “apply it” to the judicial exception (see MPEP 2106.05(a)(I) and 2106.05(f)(2) & (3)). This is because, for an AR interface to work, as the applicant asserted on p. 14, certain feature elements are inherent in the technology, such as an “image capture device” and an “image output device”, which confirms the use of general computers with the aid of AR technology (as a tool) to perform the abstract idea.

and, the claims and their additional elements do not amount to “more than generally linking the use of a judicial exception to a particular technological environment or field of use”, which in this case is “AR interfaces” designed for after-purchase options, employing generic computer functions and AR technology to execute the abstract idea (see MPEP 2106.05(h)).

Therefore, these additional elements do not add significantly more to the abstract idea because they simply apply the abstract idea on a “processor” that uses an “image capture device” and “an image output device” (see claim 14) and a “non-transitory computer readable medium” (see claim 20), without any recitation of details of how to carry out the abstract idea. In other words, and contrary to the applicant's remarks on p. 14, the evaluated claims do not reflect the recitation of:
an improvement to the functioning of a computer or other technology or technical field (see MPEP 2106.05(a));
and/or the use or application of the judicial exception in a meaningful way beyond generally linking its use to a particular environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (see MPEP 2106.05(e)).

Therefore, claiming an “improved user interface that is more tailored and user-friendly”, as referred to by the applicant on p. 14, amounts to “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" (or to an “AR interface”), and this does not integrate the judicial exception into a practical application or provide an inventive concept at Step 2B (see MPEP 2106.05(f)(2); TLI Communications). This is because the claim still invokes computers or other machinery (e.g. AR technology) merely as a tool to perform an existing process (see MPEP 2106.05(f)) and to obtain the intended result of a user-friendly application in an AR interface (see MPEP 2106.05(h)).

Also, limiting the use of the idea to one particular environment, such as the detection of objects in an “input frame” to provide and “enable the presentation” of virtual user interface elements related to a purchased product, by using generic computer components and merely reciting computing technologies (e.g. Augmented Reality (AR) interfaces) and/or other machinery merely as a tool, “does not add significantly more” and “cannot integrate the judicial exception into a practical application” (see MPEP 2106.05(h)). The claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method (e.g., using their ordinary capacity for economic or other tasks, such as to receive, store, or transmit data) is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology (see MPEP 2106.05(a)(II)).

Subsequently, these limitations do not serve to improve technology or the “AR interface” technology area so as to be eligible at Step 2B, do not qualify as “significantly more”, and do not provide an “inventive concept”, for the reasons stated above (see MPEP 2106.05(a) & (f)). Finally, the claims are not meaningful limitation steps, as they are recited in a very general, nominal or broad manner that amounts to a “bare assertion of an improvement [that] would be apparent to one of ordinary skill in the art” when compared to the specification (see MPEP 2106.04(d)(1) and MPEP 2106.05(a)). Specifically, the claims fail to reflect how the display of “certain virtual UI elements while selectively omitting other virtual UI elements” is improved via the “rendering” step (see p. 15 of the remarks). Thus, for the reasons stated above, the Examiner respectfully disagrees and maintains the 35 U.S.C. § 101 rejection of these pending claims.

Regarding the applicant's arguments against the rejection under 35 U.S.C. § 103 of the pending claims on pages 16 – 21: Applicant's arguments with respect to claim 1 have been considered but are not persuasive for the following reasons:
Starting on p. 18 of the remarks: The Examiner disagrees with the applicant, because Reddy's teachings are mischaracterized and Reddy still teaches this limitation step under the broadest reasonable interpretation (BRI). As the applicant asserts, Reddy teaches “a start date and an end date for a period for which the service history is requested” (see ¶0084 – 85; Reddy), wherein the start date is interpreted as the identified item history, such as the purchase/shipping date, and the end date is directed to the current or present time/date. Moreover, “Entitlements module 326” may process the “time between entitlements module 326 receiving history information request up to a time limit” (directed to a valid temporal range, which is “a determination of how much time has passed (a “time window”)”, as the applicant asserts; see ¶0092; Reddy), in accordance with the applicant's specification at ¶0080 – 81.

On p. 18 of the remarks, the applicant alleges that “the Office Action appears to have misunderstood the Applicant's previous remarks with respect to this element of claim 1” and that Reddy does not teach the claim recitation of "a length of the temporal difference being within the respective valid temporal range of each available action”. The Examiner maintains that, under BRI, Reddy teaches the “length of the temporal difference” because Reddy's system can inherently infer and identify this temporal difference between a temporal milestone (e.g. the time that a product was purchased or the product's shipped time) and the current time at which the user requests the warranty (e.g. the length of the temporal difference). The system considers the product information, which includes “product warranty expiry information, product warranty expiry date” (e.g. also directed to the length that the temporal difference should have) and “history of warranty entitlements services availed”, that are directed to the temporal difference to obtain a temporal milestone from the identified item in the history and a current time, in accordance with the applicant's specification at ¶0077 and ¶0080 – 81 (see ¶0114; Reddy).
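Purely as an illustration of the comparison discussed above, and not as code from the application or from Reddy (the function name, field names and one-year window below are hypothetical), a minimal sketch of testing the elapsed time since a temporal milestone against a valid temporal range is:

    from datetime import date, timedelta

    def within_valid_range(milestone: date, current: date, valid_days: int) -> bool:
        # "Length of the temporal difference": elapsed time between the temporal
        # milestone (e.g. a purchase or shipping date) and the current time.
        temporal_difference = current - milestone
        # The action is available only if that length falls within the valid
        # temporal range (here, a warranty-style window expressed in days).
        return timedelta(0) <= temporal_difference <= timedelta(days=valid_days)

    # Example: an item purchased 2022-03-29, checked on 2022-09-01, against a
    # hypothetical one-year window.
    print(within_valid_range(date(2022, 3, 29), date(2022, 9, 1), 365))  # True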

On p. 19 of the remarks: The applicant misreads Reddy and disagrees that “one of ordinary skill in the art would find Reddy to "obviously" omit certain user interface elements based on eligibility and product information/history”, because Reddy merely states that the “user interface module” allows users to view and choose the type of services and fixes, “out of available options, as to when the service can be provided at the service center or by a house call by a service technician”. Further, the applicant interprets Reddy as meaning that “unavailable dates/times are grayed-out (but still displayed) or that a user interface element is modified so that the unavailable dates/times are unselectable.” However, the Examiner maintains that Reddy still, at least implicitly, teaches the “omitting other virtual user interface elements” limitation, as Reddy discloses that only “available options” will be presented to the user, which under BRI, regardless of the criteria (e.g. either unavailable dates/times or relevance), will modify or omit (e.g. intentionally leave out or exclude) the options that do not satisfy whatever the criteria is. Moreover, the applicant's “omitting” limitation does not explicitly further limit how “other virtual user interface elements associated with other predefined actions” are being omitted by any specific criteria or “relevance”, as the applicant alleges. Thus, Reddy still teaches this broadly claimed limitation. As for the arguments regarding the application of “impermissible hindsight analysis based on applicant’s disclosure” on p. 20, this is incorrect and the Examiner disagrees, because obviousness was determined based on “knowledge which was within the level of ordinary skill in the art” when “rendering, in the AR interface, a respective virtual user interface element associated with each available action”, which is analogous to available actions that can be found in a computer interface, wherein this computer interface technology is well-known in the art. The Examiner also reminds the applicant that there is no requirement that an express, written motivation to combine must appear in prior art references before a finding of obviousness (see MPEP 2145(X)(A)). Lastly, Reddy's system enables “users to have sufficient evidence of product ownership, purchase related details, services entitlement information and related processes to efficiently manage post-purchase and reverse logistics customer relationship management services associated with a product” (¶0007; Reddy) and “to take advantage of post-product purchase services such as in-warranty entitlements and after sales support” (¶0003; Reddy).

Therefore, for the reasons stated above, the Examiner respectfully disagrees and maintains the 35 U.S.C. § 103 rejection of these pending claims.
Claim Rejections - 35 USC § 101
       35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4 - 10, 12, 14 - 17 and 19 - 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. All claims were evaluated for the purpose of advancing compact prosecution. It should first be noted that claim 14 is used as representative of the independent claim set 1, 14 and 20. Step 1: the claimed invention falls under the statutory categories of a machine and a process. However, Step 2A Prong 1: the abstract idea is defined by the elements of:
process an input frame of a sequence of frames of a real-world video captured … to detect and track an object in the input frame in real-time;
determine that the detected object is an identified item recorded in a history associated with an identified user account based on a query of the history;
generate the AR interface that is tailored to the identified item by: determining a temporal difference between a temporal milestone associated with the identified item in the history and a current time;
identifying a set of two or more predefined actions associated with the identified item, each predefined action being valid for a respective valid temporal range;
identifying, from the set of two or more predefined actions, a subset of one or more available actions associated with the identified item based on length of the temporal difference being within the respective valid temporal range of each available action; and
provide … wherein each respective virtual user interface element is rendered as an overlay on the sequence of frames in real-time.

These limitations describe a method and a system for identifying and detecting user purchase history and account data to efficiently provide post-purchase actions to the user. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Thus, these limitations are directed to the abstract idea of a certain method of organizing human activity in the form of “commercial or legal interactions”, as these claims recite the steps of determining that a detected object is an identified item associated with an identified user account and their history (including the time of a “temporal milestone” and the current time) to provide virtual user interface elements associated with actions or “post-purchase actions”, which directly involves advertising, marketing or sales activities or behaviors. As disclosed in the specification in ¶0004, this invention “presents solutions to a problem that is specific to computers and in particular generation of AR interfaces. A challenge in designing AR interfaces is to ensure that the user is presented with a user-friendly interface with relevant virtual elements tailored to a real-world object” as a way to “enhance user experience rather than introducing unintended barriers (e.g., overly complex options, having to navigate through a menu, having to dismiss irrelevant options, etc.)”.
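For illustration only, and not as a characterization of the claims beyond the analysis above (the action names and time windows below are hypothetical), the identified steps of computing a temporal difference and keeping only the actions whose valid temporal range contains it can be sketched as:

    from datetime import datetime, timedelta

    # Hypothetical predefined actions, each valid for its own temporal range
    # (expressed as elapsed time since the temporal milestone, e.g. purchase).
    PREDEFINED_ACTIONS = {
        "return_item": (timedelta(days=0), timedelta(days=30)),
        "warranty_repair": (timedelta(days=0), timedelta(days=365)),
        "write_review": (timedelta(days=1), timedelta(days=90)),
    }

    def available_actions(milestone: datetime, now: datetime) -> list:
        # Temporal difference between the milestone and the current time.
        diff = now - milestone
        # Keep only the subset whose valid temporal range contains that length;
        # the remaining actions are omitted, so no virtual UI element is shown.
        return [name for name, (lo, hi) in PREDEFINED_ACTIONS.items() if lo <= diff <= hi]

    print(available_actions(datetime(2024, 1, 1), datetime(2024, 2, 10)))
    # ['warranty_repair', 'write_review']  (the 30-day return window has lapsed)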

Step 2A Prong 2: For independent claims 1, 14 and 20, the judicial exception is not integrated into a practical application, because the claims as a whole, when considering their additional element(s) of an image capture device and image output device; an AR interface (from claims 1, 14 and 20); a processor (from claims 14 and 20); and a non-transitory computer readable medium (from claim 20), individually and in combination, merely use these elements as a tool to perform the abstract idea (refer to MPEP 2106.05(f)). These elements, including the computer and the Augmented Reality (AR) technology being used, are recited at a high level of generality. These feature steps are performed generally to apply the abstract idea without placing any limits on how the steps of at least processing the “input frame of a sequence of frames” and generating and rendering in the “AR interface” are performed distinctly from generic computer components and general AR technology functions. Thus, each function is recited to generally “apply it” to a computer and to general AR technology. See MPEP 2106.05(f).

Thus, these limitations merely indicate “a field of use or technological environment in which to apply a judicial exception”, which does “not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application” (MPEP 2106.05(h)). Therefore, the claim set has not integrated the abstract idea into a practical application, and the claims are found to be directed to the abstract idea identified by the examiner.

Step 2B: For independent claims 1, 14 and 20, these claims recite the additional elements of an image capture device and image output device; an AR interface (from claims 1, 14 and 20); a processor (from claims 14 and 20); and a non-transitory computer readable medium (from claim 20), and these are not sufficient to amount to significantly more than the judicial exception. This means that there are no additional element(s) claimed in the dependent claims that could be significantly more than the judicial exception; rather, they further recite the abstract idea. As indicated in Step 2A Prong 2, the additional element(s) in the claims merely use a generic computer device or computing technologies and/or other machinery as a tool to perform an abstract idea, which does not constitute a practical application and only amounts to a mere instruction to practice the invention. Thus, these elements do not render the claims eligible (refer to MPEP 2106.05(f) and 2106.05(h)). This is because, if the claimed invention is asserted to improve “upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art” (see MPEP 2106.05(a)). The rationale set forth for the second prong of the eligibility test above is also applicable to and re-evaluated at Step 2B. Therefore, this rationale is sufficient as a basis for the rejection; the claims are not patent eligible, consistent with MPEP 2106.

Dependent claims 4 - 10, 12, 15 - 17, 19 and 21 - 25 fall under the same abstract idea of a method of organizing human activity. They describe the additional limitation steps of:
Claims 4 - 10, 12, 15 - 17, 19 and 21 - 25: further describe the abstract idea of the method of processing an input frame to detect and track an object in the input frame, including multiple virtual user interface (UI) elements that prompt the user to perform different physical manipulations to change the object's orientation (e.g. pose or reference marker changes), which triggers the removal of a particular virtual UI element, and storing and processing subsequent frames that can be “tagged with timestamps” to confirm physical manipulation of detected objects. Thus, these claims are directed to the abstract idea grouping of “engaging in commercial or legal interactions”, as they involve sales activities or behaviors.

Step 2A Prong 2 and Step 2B: The dependent claims do not recite additional elements beyond those addressed above. The claim limitations further describe the abstract idea and recite functions that amount to no more than mere instructions to apply the exception using a generic computer component (refer to MPEP 2106.05(f)) and/or link to a computer implementing the “use of ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components” (refer to MPEP 2106.05(f)(2)), and they do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. These claims are directed to an abstract idea.

Therefore, the additional elements mentioned above are nothing more than descriptive language about the elements that define the abstract idea, and these claims remain rejected under 35 U.S.C. § 101 as well.
Claim Rejections - 35 USC § 103
       In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
        The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

     Claims 1 - 5, 7 - 20 and 22 - 24 are rejected under 35 U.S.C. 103 as being unpatentable over Maestas (U.S. Patent No. 11055531 B1) in view of Reddy (U.S. Pub No. 20160110722 A1) in further view of Little (U.S. Patent No. 11436828 B1).
Regarding claims 1, 14 and 20: 
This independent claim set is represented by claim 14.
Maestas teaches:
processor coupled to communicate with an image capture device and an image output device, wherein the processor is configured to: (In C5; L34 – 46; Fig. 3 (300): teaches that a “Damage management system 300 may comprise a centralized manager 302”, which includes a “server” run by “a single computer” (directed to the processor; see C5; L47 – 52), and “a remote device 304”, which includes a “camera” (see C6; L1 – 4; directed to the image capture device). The system also provides that “a networked surveillance camera could be uploaded to centralized manager 302 for use in detecting damage”, deriving from various “internet-of-things (IOT) devices 33” (see C6; L21 – 27; directed to an image output device). Also, refer to Fig. 8 for the user interface display.)
process an input frame of a sequence of frames of a real-world video captured by the image capture device to detect and track an object in the input frame in real-time; (In C4; L10 – 26; Fig. 2 (202 – 204): teaches that the system in step 202 captures “one or more physical objects in their initial states”, which involves capturing “an image as part of a sequence of images that form a video” (directed to the input frame), and in step 204 “image recognition and/or machine vision algorithms” can be used to process the “physical object(s)” (directed to detecting and tracking the object).)
determine that the detected object is an identified item recorded in a history associated with an identified user account based on a query of the history (In C7; L57 – 67; Fig. 5: teaches that the system “may then obtain access (for example, via the cloud) to the customer's order history information, including the order history” of the item, which “allows centralized manager 302 to automatically determine the price that the customer paid for television 402” (or any “consumer item”; see C8; L15 – 21), “without requiring the customer to look-up, or recall from memory, the price they paid”. For other types of “user information” that are directed to the user account, see C6; L43 – 50.)

Maestas does not explicitly teach determining an item's temporal difference based on the item's history and the current time, identifying a set of predefined actions and a subset of one or more available actions based on the length of the temporal difference being within a valid temporal range, and omitting other virtual user interface (UI) elements. However, Reddy teaches:
determining a temporal difference between a temporal milestone associated with the identified item in the history and a current time; (In ¶0092; Fig. 6 (602): teaches that the “consumer application module 302 may send a history information request, constituting a part of the product information and information pertaining to a start and end date for which a service history is requested” (directed to the temporal difference; the start date is interpreted as the identified item history, such as the purchase/shipping date, and the end date is directed to the current or present time/date). Then, “Entitlements module 326 may process the request and validate the user information and the product information” (directed to determining the temporal difference received), which can be further processed in a “time window” that may be the “time between entitlements module 326 receiving history information request up to a time limit” (directed to a valid temporal range), in accordance with the applicant's specification at ¶0080 – 81. Also, refer to ¶0082 and ¶0084 – 85 for general details of the start and end dates, and to ¶0104 for more details corresponding to “step 602”.)
identifying a set of two or more predefined actions associated with the identified item, each predefined action being valid for a respective valid temporal range; (In ¶0106; Fig. 6 (606 – 610); Figs. 24 – 25: teaches that “in response to receiving the user information and the product information, the server may generate a digital object at 606, in order to the enable the user to utilize the service associated with the product”, which is directed to identifying a set of two or more predefined actions associated with the identified item. Such a service enablement feature includes more than two actions associated with “a warranty service associated with the product” (see ¶0109 and ¶0114), receiving “information related to the service associated with the product” and other “pre-authorized services” (see ¶0110 – 111). Also, refer to ¶0082 for an example regarding a user request for repairing a damaged item based on a “time limit” to schedule a “product repair”, wherein all of the service enablement options and the product repair request stated above are directed to two or more predefined actions associated with the identified item.)
identifying, from the set of two or more predefined actions, a subset of one or more available actions associated with the identified item based on length of the temporal difference being within the respective valid temporal range of each available action; and (In ¶0114; Fig. 6 (610); Figs. 24 – 25: teaches that the system allows the user to “schedule a warranty service” and “view service history associated with a warranty”, in which the “product warranty expiry can be notified to the user in order to enable the user to purchase an extended warranty at the right time”, which is directed to identifying a subset of an available action based on a length of the temporal difference that is within the valid temporal range of each available action. Thus, the system can inherently infer and identify this temporal difference between a temporal milestone (e.g. the time that a product was purchased or the product's shipped time) and the current time at which the user requests the warranty. The system considers the product information, which includes “product warranty expiry information, product warranty expiry date” and “history of warranty entitlements services availed”, that are directed to the temporal difference to obtain a temporal milestone from the identified item in the history and a current time, in accordance with the applicant's specification at ¶0077 and ¶0080 – 81. See ¶0082 – 83 and ¶0092, in which the system, via the “warranty entitlements module”, allows the user to apply for a “product warranty” within a “time window” to “schedule a product repair with a technician” (directed to another subset of available action(s)) based on a “service history request” that includes “product information” and “a start date and an end date for a period for which the service history is requested” (see ¶0084), which is directed to the temporal difference. Also, refer to ¶0104 – 106 and ¶0108 – 109 for more details regarding the functional steps of Fig. 6.)

and omitting other virtual user interface elements associated with other predefined actions in the set of two or more predefined actions; and (In ¶0083; Figs. 24 – 25: teaches that the “user interface module can be configured to enable the user to view information associated with the service appointment via the user interface”, wherein the user can “choose the service type, the type of fixes, select the date and time of service, out of available options”, which obviously implies/suggests that certain virtual user interface elements can be omitted or removed if not relevant based on the eligibility and the user's and the respective product information/history. Also, in ¶0077 – 78, the “digital object” generated in the interface can “automatically unwrap information when placed in a mobile phone” to “enable the user to utilize the service associated with the product”; the unwrapped data can include “ownership information, warranty information such as, but not limited to, one or more warranty service providers, one or more third party extended service providers, and one or more warranty service centers, in order to utilize a warranty service associated with the product, as well as digital insurance information”, which is directed to virtual user interface elements that can be omitted/removed or added depending on the enablement conditions. Refer to ¶0082, wherein requests for services are enabled for the user only under specific conditions that need to be satisfied for “product repair” requests.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas with the abilities of determining an item's temporal difference based on the item's history and the current time, identifying available actions and a subset of one or more available actions based on the length of the temporal difference being within a valid temporal range, and omitting other virtual user interface (UI) elements, as taught by Reddy, in order to enable “users to have sufficient evidence of product ownership, purchase related details, services entitlement information and related processes to efficiently manage post-purchase and reverse logistics customer relationship management services associated with a product” (¶0007; Reddy) and “to take advantage of post-product purchase services such as in-warranty entitlements and after sales support” (¶0003; Reddy).

Maestas teaches that AR can be implemented in the system's interface “to superimpose projected repairs onto various regions of the object as seen through a live video of the damaged object” (C10; L9 – 14; Maestas), such as overlaying “lightning strike data” (C9; L5 – 10; Maestas). But neither Maestas nor Reddy explicitly teaches generating and providing an AR interface that specifically includes virtual user interface (UI) elements rendered as an overlay on the sequence of frames in real-time. However, Little teaches:
generate the AR interface that is tailored to the identified item by: (In C9; L27 – 40; Figs. 5 – 6 and 9 – 18: teaches a “VR/AR/MR processor 102 provides an indication of object identifications, the metadata and/or other information to the VR/AR/MR rendering device 304” which “displays the object identifications, metadata and/or other information to the user 302 in association with the actual environment 300 and/or a representation of the actual environment 300” which can also be displayed in a mobile interface (see Fig. 9).)
rendering, in the AR interface, a respective virtual user interface element associated with each available action in the subset of one or more available actions (In Fig. 15: teaches the rendering of a respective virtual user interface element related to reporting a damaged window, which further provides the available action of confirming that the window is damaged by providing “yes” and “no” buttons. Similarly, the user can “submit a claim” based on the virtual user interface element related to a “summary” displayed (see Fig. 18). Refer to C5; L10 – 19 and C7 – 8; L66 – 67 and L1 – 5 for more general details of the rendering function.)
provide, via the image output device, the AR interface wherein each respective virtual user interface element is rendered as an overlay on the sequence of frames in real-time. (In Figs. 5 – 6 and 9 – 18: teaches different AR interfaces that include virtual user interface elements such as buttons, instructions and messages associated with different actions/requests selected by the user, which are overlaid on the sequence of frames in real-time, in accordance with the applicant's specification at ¶0051 and ¶0064 – 66. See C4; L3 – 11 for general details of AR and mixed reality and virtual objects, and C14; L50 – 67; Fig. 11, wherein “an image of the object 1102 c is displayed to the user as an image of the undamaged object 1102 a in a virtual overlay on top of the damaged object 1102 b”.)
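As a purely illustrative sketch of per-frame overlay rendering of this kind (the data structures and labels below are hypothetical and are not drawn from Little, Maestas or the application), only the available actions contribute a virtual UI element to each frame, and omitted actions contribute none:

    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        index: int
        overlays: list = field(default_factory=list)  # virtual UI element labels

    def render_overlays(frames, available_actions):
        # Attach one virtual UI element per available action to every incoming
        # frame; omitted actions contribute no overlay at all.
        for frame in frames:
            frame.overlays = [f"button:{action}" for action in available_actions]

    frames = [Frame(i) for i in range(3)]
    render_overlays(frames, ["warranty_repair", "write_review"])
    print(frames[0].overlays)  # ['button:warranty_repair', 'button:write_review']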

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas and Reddy with the ability of generating and providing an AR interface that specifically includes virtual user interface (UI) elements rendered as an overlay on the sequence of frames in real-time, as taught by Little, in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user”, which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

Regarding claims 4, 15 and 22: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claims 1, 14 and 20, respectively.
Maestas further teaches:
wherein a particular one of the respective virtual user interface elements is a prompt to perform a physical manipulation of the detected object to change an orientation of the detected object in the input frame captured by the image capture device, and (In C8; L22 – 35; Figs. 6 – 7: teaches that “a user (i.e., a customer or an adjuster) may capture images (photos or video) of a house 600 using remote device 304” (directed to performing a physical manipulation of the detected object). “After a loss causing event, a user may capture images 652 of house 600 in its modified state (i.e., its state after the loss causing event), as shown in FIG. 7”. Also, refer to C7; L32 – 45, in which other physical manipulations of the object can be performed on smaller objects such as “jewelry 406”. Finally, “a customer may be asked to upload a receipt (i.e., an electronic receipt or an image of a physical receipt) to help the system determine the purchase price, vendor where the item was purchased, etc.” (see C7; L46 – 56).)

Reddy teaches that the system can “continuously” be “updated with new discount and rebate offers” to sort “the information and presents the one or more offers for the user to view in the consumer application module 302” via the interface (see Fig. 25 and ¶0094; Reddy). But neither Maestas nor Reddy explicitly teaches updating the AR interface to remove a particular virtual user interface element after performance of the physical manipulation is detected. However, Little further teaches:
wherein the AR interface is updated to remove the particular one virtual user interface element after performance of the physical manipulation is detected in the sequence of frames. (In C11; L16 – 26; Figs. 5 – 6 and 9 – 18: teaches that “the virtual assistant 602 may request the user 302 to manipulate an object in the actual environment so that an image input portion of the VR/AR/MR rendering device 304 may be able to obtain an image of a label on the object”, such as a “serial number or other pertinent information about the object”; upon receiving such action, the system may “utilize the information about an object to populate one or more entries in a database, such as to populate and/or modify entries in the inventory and claim database”, and after further interactions the system may remove previous virtual UI elements to output a “summary” of the completed assistance related to the identified objects to submit a claim (see Fig. 18 and C18; L23 – 47), in accordance with the applicant's specification at ¶14 – 17 and ¶97.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas and Reddy with the ability of updating the AR interface to remove a particular virtual user interface element after performance of the physical manipulation is detected, as taught by Little. Maestas uses “any details about the object that may be necessary to ensure the replacement is substantially similar to the damaged object” to determine “the price the customer paid for the object” (C11; L56 – 67; Maestas) and provides “estimated repair/replacement costs to a customer”, in which the system “can build an internal ranking of vendors based on the quality of repairs, repair costs, repair times, etc.” and “this information can be used to select the best vendors when choices are available” (C12; L20 – 37; Maestas), meaning that the customer can elect at least one post-purchase action when available within the user interface, as suggested by Maestas. This leads to combining the prior art of Little and the disclosed AR interface in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user”, which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

Regarding claim 5: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 4.
Maestas further teaches:
further comprising: storing one or more subsequent frames of the sequence of frames captured by the image capture device, including the detected object during or after performance of the physical manipulation. (In C5; L58 – 60: teaches that the “Remote device 304 may also comprise any device capable of storing and/or transmitting captured information.” Also, refer to C4; L27 – 45 for details regarding capturing data of an object before and after a “loss or damage” event.)

Regarding claims 7, 16 and 23: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claims 4, 15 and 22, respectively.
Maestas further teaches:
processing one or more subsequent frames of the sequence of frames captured by the image capture device, to confirm performance of the physical manipulation of the detected object; (In C8; L36 – 49; Fig. 8: teaches “After the image and/or other information has been captured for the house in its initial and modified states, a damage management system may detect differences between the initial and modified states of house 600. This activity corresponds with step 208 in FIG. 2. Referring to FIG. 8, the damage management system uses information from the images 650 of the initial state and information from images 652 of the modified state to detect one or more differences. The detected differences may indicate damaged regions and/or damaged structures” which further includes such detection with sensors to confirm the change in the initial state of the object (see C9; L1 – 24).)

Reddy teaches that the system can “continuously” be “updated with new discount and rebate offers” to sort “the information and presents the one or more offers for the user to view in the consumer application module 302” via the interface (see Fig. 25 and ¶0094; Reddy). But neither Maestas nor Reddy explicitly teaches removing a particular virtual user interface element after confirming the performance of the physical manipulation. However, Little further teaches:
wherein the particular one virtual user interface element is removed from the AR interface in response to confirming the performance of the physical manipulation. (In C11; L16 – 26; Figs. 5 – 6 and 9 – 18: teaches that “the virtual assistant 602 may request the user 302 to manipulate an object in the actual environment so that an image input portion of the VR/AR/MR rendering device 304 may be able to obtain an image of a label on the object”, such as a “serial number or other pertinent information about the object”; upon receiving such action, the system may “utilize the information about an object to populate one or more entries in a database, such as to populate and/or modify entries in the inventory and claim database”, and after further interactions the system may remove previous virtual UI elements to output a “summary” of the completed assistance related to the identified objects to submit a claim (see Fig. 18 and C18; L23 – 47), in accordance with the applicant's specification at ¶14 – 17 and ¶97. Refer to C14; L21 – 37 and C20; L58 – 65, wherein the system compares/confirms, and the user further confirms, damaged objects before submitting the insurance claim.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas and Reddy with the ability of removing a particular virtual user interface element after confirming the performance of the physical manipulation, as taught by Little. Maestas uses “any details about the object that may be necessary to ensure the replacement is substantially similar to the damaged object” to determine “the price the customer paid for the object” (C11; L56 – 67; Maestas) and provides “estimated repair/replacement costs to a customer”, in which the system “can build an internal ranking of vendors based on the quality of repairs, repair costs, repair times, etc.” and “this information can be used to select the best vendors when choices are available” (C12; L20 – 37; Maestas), meaning that the customer can elect at least one post-purchase action when available within the user interface, as suggested by Maestas. This leads to combining the prior art of Little and the disclosed AR interface in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user”, which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

Regarding claim 8: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 7.
Maestas further teaches:
wherein confirming the performance of the physical manipulation comprises: processing the one or more subsequent frames to detect a change in pose of the detected object or to detect a changed reference marker on the detected object. (In C9; L32 – 45; Fig. 8: teaches that the system provides a “visualization” in which “the type of damage can be displayed with different colors or other patterns” (directed to a changed reference marker on the detected object). “For example, in the exemplary embodiment of FIG. 8, a first pattern 860 is used to display damage that is in urgent need of repair/replacement. A second pattern 862, in contrast, indicates damage that may not need immediate replacement. Moreover, the remainder of the house can be visualized using a third pattern 864, indicating regions with no damage at all”.)
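For illustration only (the thresholds and field names below are hypothetical and are not drawn from Maestas or the application), a change in pose or a changed reference marker between two detections could be confirmed as follows:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        # Hypothetical per-frame detection result for the tracked object.
        yaw_degrees: float           # estimated orientation of the object
        marker_id: Optional[str]     # reference marker visible on the object, if any

    def manipulation_confirmed(before: Detection, after: Detection,
                               min_rotation: float = 30.0) -> bool:
        # Confirm the physical manipulation if the object's pose changed enough
        # or a different reference marker became visible.
        rotated = abs(after.yaw_degrees - before.yaw_degrees) >= min_rotation
        marker_changed = after.marker_id is not None and after.marker_id != before.marker_id
        return rotated or marker_changed

    print(manipulation_confirmed(Detection(0.0, "front"), Detection(175.0, "back")))  # True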

Regarding claims 9, 17 and 24: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claims 4, 15 and 22, respectively.
Maestas further teaches:
further comprising: detecting a difference in the detected object based on a comparison between a captured image of the detected object after performance of the physical manipulation and a stored previous image of the detected object or a reference object associated with the identified item. (In C8; L36 – 39: teaches “After the image and/or other information has been captured for the house in its initial and modified states, a damage management system may detect differences between the initial and modified states of house 600.” Also, refer to C4; L65 – 67 and C5; L1 – 3 for an example in which “if a house is damaged after a storm but no images of the house exist in its initial condition (i.e., its pre-storm condition), then it may be possible to use an architectural model of house as a baseline to compare with images of the house after the storm in order to detect possible damage”)

Regarding claim 10: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 4.
Neither Maestas nor Reddy explicitly teaches detecting an identifier, after performance of a physical manipulation, in a captured image of the detected object. However, Little further teaches:
further comprising: detecting, in a captured image of the detected object after performance of the physical manipulation, an identifier; (In C11; L16 – 26; Figs. 5 – 6 and 9 – 18: teaches that “the virtual assistant 602 may request the user 302 to manipulate an object in the actual environment so that an image input portion of the VR/AR/MR rendering device 304 may be able to obtain an image of a label on the object” such as “serial number or other pertinent information about the object” which is directed to an identifier.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas and Reddy with the ability of detecting an identifier, after performance of a physical manipulation, in a captured image of the detected object, as taught by Little, in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user”, which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

Regarding claim 12: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 1.
Maestas further teaches:
wherein at least one of the respective virtual user interface elements is provided as a virtual overlay superimposed on the detected object in the sequence of frames. (In C10; L9 – 14: teaches that the system “can use augmented reality to superimpose projected repairs onto various regions of the object as seen through a live video of the damaged object. Augmented reality techniques can be implemented using various conventional augmented reality frameworks or toolkits.” Also, refer to C9; L5 – 10 for an example in which the system can “overlay lightning strike data, which may be available from a third party service, in combination with geospatial information to determine locations on a structure (such as a house or other building) where lighting may have hit and damaged the structure”.)

Regarding claim 19: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 14.
Maestas further teaches:
wherein the apparatus is one of: a smartphone; a tablet; a wearable device; or a projection device. (In C4; L59 – 61: teaches that “various kinds of imaging devices could be used, including phones, tablets or other computing devices with cameras, digital cameras and/or other known imaging devices.”.)

Claims 6, 21 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Maestas (U.S. Patent No. 11055531 B1) in view of Reddy (U.S. Pub No. 20160110722 A1) in further view of Little (U.S. Patent No. 11436828 B1) and Nihal (WO Pub No. 2016069763 A1).
Regarding claim 6:
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claim 5.
Neither Maestas nor Reddy teaches prompting the user to perform multiple physical manipulations to change the orientation of the detected object in different manners. However, Little further teaches:
wherein the particular one virtual user interface element is a first prompt to perform a first physical manipulation to change the orientation of the detected object, wherein subsequent to the first prompt a second virtual user interface element is provided that is a second prompt to perform a second physical manipulation to further change the orientation of the detected object, (In C11; L16 – 26; Figs. 5 – 6, 9 – 18 and Fig. 21 (2102 and 2104): Under the broadest reasonable interpretation (BRI), this limitation is satisfied by the prior art reference, since it teaches that “the virtual assistant 602 may request the user 302 to manipulate an object in the actual environment so that an image input portion of the VR/AR/MR rendering device 304 may be able to obtain an image of a label on the object”, such as a “serial number or other pertinent information about the object”, which is directed to different physical manipulations that include different orientations. Refer to C20; L28 – 44 and Fig. 21 (2102 and 2104) for other types of changes in direction/orientation performed on a physical object to document any damages or missing parts.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas in view of Reddy with the ability to prompt the user to perform multiple physical manipulations to change the orientation of the detected object in different manners, as taught by Little, in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user” which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

None of Maestas, Reddy, or Little explicitly teaches tagging stored input frames with timestamps for each prompt. However, Nihal teaches:
and wherein the stored one or more subsequent frames are tagged with timestamps corresponding to the first prompt and the second prompt. (In ¶0046: teaches that the system stores “recorded videos” that are “tagged with several types of attributes either automatically based on image recognition techniques or manually by people (either internally by the company, its employees, independent contractors or externally by users)” and these recorded videos include an “appropriate timestamp (i.e. the point in time and optionally even the time range in the video where such a tag occurs)”.)
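
For context only, the following is a minimal, hypothetical sketch of tagging stored frames with timestamps tied to the prompt under which they were captured, in the general manner Nihal describes for tagged, timestamped recordings. It is not Nihal's implementation or the claimed invention; all names (TaggedFrame, FrameStore, prompt_id) are illustrative assumptions.

```python
# Minimal illustrative sketch only; not taken from Nihal or the claimed invention.
# Shows one way stored frames could be tagged with timestamps tied to the prompt
# that was active when they were captured. All names here are hypothetical.
import time
from dataclasses import dataclass, field


@dataclass
class TaggedFrame:
    frame: bytes      # raw image data (placeholder type)
    timestamp: float  # capture time, seconds since the epoch
    prompt_id: str    # e.g. "first_prompt" or "second_prompt"


@dataclass
class FrameStore:
    frames: list = field(default_factory=list)

    def store(self, frame: bytes, prompt_id: str) -> None:
        # Tag each stored frame with the current time and the active prompt.
        self.frames.append(TaggedFrame(frame, time.time(), prompt_id))

    def frames_for_prompt(self, prompt_id: str) -> list:
        # Makes the stored frames searchable by the prompt they correspond to.
        return [f for f in self.frames if f.prompt_id == prompt_id]


store = FrameStore()
store.store(b"...frame bytes...", prompt_id="first_prompt")
store.store(b"...frame bytes...", prompt_id="second_prompt")
print(len(store.frames_for_prompt("first_prompt")))  # -> 1
```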

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas, Reddy and Little with the ability of tagging stored input frames with timestamps for each prompt, as taught by Nihal, in order to “mak[e] it easily searchable and connected to metadata from other videos” within the database (¶0053; Nihal).

Regarding claims 21 and 25: 
The combination of Maestas, Reddy and Little, as shown in the rejection above, discloses the limitations of claims 15 and 22, respectively.
Maestas further teaches:
wherein the particular one virtual user interface element is a first prompt to perform a first physical manipulation to change the orientation of the detected object, and wherein the processor is further configured to: (In C7; L35 – 45; Figs. 6 – 7: teaches that “a customer (or an adjuster) may walk through (and around the outside of) a home and take images (photos or video) of all the physical objects that need to be inventoried”, which might also include the action to “collect and arrange the objects on table 407 or other location where the objects can more easily captured” (directed to performing a physical manipulation of the detected object). Also, refer to C8; L22 – 35 for general details and to C7; L32 – 45, in which other physical manipulations of the object can be performed on smaller objects such as “jewelry 406”, and which can also include the customer’s action when being “asked to upload a receipt (i.e., an electronic receipt or an image of a physical receipt) to help the system determine the purchase price, vendor where the item was purchased, etc.” (see C7; L46 – 56).)
store one or more subsequent frames of the sequence of frames captured by the image capture device, including the detected object during or after performance of the first physical manipulation; (In C6; L17 – 27; Fig. 5 (302): teaches that the user device or other devices used to capture images of the physical objects “upload”, and thus store, this image information via the “centralized manager 302”, which is directed to storing subsequent frames from the image capture device that include the detected object during or after performance of the physical manipulation. Also, the “centralized manager 302” has access to the customer’s “order history” or “purchase history” when comparing order receipts against the obtained images of the damaged objects (see C7; L57 – 67 and C11; L53 – 67, respectively) and “may be in communication with one or more databases 312” (see C5; L47 – 55).)
store another one or more subsequent frames of the sequence of frames captured by the image capture device, including the detected object during or after performance of the second physical manipulation; (In C6; L17 – 27; Fig. 5 (302): teaches that the customer can also be “asked to upload a receipt (i.e., an electronic receipt or an image of a physical receipt) to help the system determine the purchase price, vendor where the item was purchased, etc.” (directed to a second physical manipulation; see C7; L46 – 55) which is also stored and sent to the “centralized manager 302”.)

Neither Maestas nor Reddy explicitly teaches the ability of specifically providing a second virtual user interface element that is the second prompt to perform a second physical manipulation. However, Little further teaches:
subsequent to the first prompt, providing a second virtual user interface element that is a second prompt to perform a second physical manipulation to further change the orientation of the detected object; and (In Figs. 10 and 12 – 15: under BRI this limitation is satisfied, because this prior art teaches in Figs. 10 and 12 – 15 a transition in which a second virtual user interface element is shown based on user input and the provision of image capture upon the virtual assistant’s request (see C11; L16 – 26), wherein the virtual assistant can ask for further user actions and the system renders elements such as follow-up questions as pop-up messages or confirmation buttons based on the second physical manipulation requested to further change the orientation, in accordance with applicant’s specification at ¶06, wherein these virtual user interface elements are “virtual selection options”.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas in view of Reddy with the ability of specifically providing a second virtual user interface element that is the second prompt to perform a second physical manipulation, as taught by Little, in order to “assist the user in preparing a report of damage to objects after an incident, such as to submit a claim for damage to an insurance company” and to also include a “virtual assistant [that] may assist the user, for example, much as if an actual assistant were physically present with the user” which can “help a user to maximize the use of their insurance coverage.” (C1; L35 – 41; Little). Also, “a user may be better able to prepare a report of damage to objects after an incident, such as to submit a claim for damage” by using AR/VR technology (C3; L64 – 66; Little).

None of Maestas, Reddy, or Little explicitly teaches tagging stored input frames with timestamps for each prompt. However, Nihal teaches:
wherein the stored one or more subsequent frames are tagged with timestamps corresponding to the first prompt and the second prompt. (In ¶0046: teaches that the system stores “recorded videos” that are “tagged with several types of attributes either automatically based on image recognition techniques or manually by people (either internally by the company, its employees, independent contractors or externally by users)” and these recorded videos include an “appropriate timestamp (i.e. the point in time and optionally even the time range in the video where such a tag occurs)”.)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Maestas, Reddy and Little with the ability of tagging stored input frames with timestamps for each prompt, as taught by Nihal, in order to “mak[e] it easily searchable and connected to metadata from other videos” within the database (¶0053; Nihal).
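
For context only, the following is a brief, hypothetical sketch of the overall sequence mapped above for claims 21 and 25: a first prompt is rendered, frames are stored during or after the first physical manipulation, a second prompt is then rendered, further frames are stored, and each stored record carries the timestamp of its corresponding prompt. It is not the implementation of Maestas, Little, Nihal, or the claimed invention; run_prompt_sequence(), capture_frame, and show_prompt are illustrative assumptions.

```python
# Brief, hypothetical sketch only; it is not Little's, Maestas's, Nihal's, or the
# applicant's implementation. It illustrates the general sequence mapped above:
# render a first prompt, store frames captured during/after the first manipulation,
# then render a second prompt and store frames for the second manipulation, with
# each record carrying the timestamp of the prompt it corresponds to.
import time
from typing import Callable, List, Tuple


def run_prompt_sequence(
    capture_frame: Callable[[], bytes],
    show_prompt: Callable[[str], None],
    frames_per_prompt: int = 3,
) -> List[Tuple[str, float, bytes]]:
    """Return (prompt_text, timestamp, frame) records for two sequential prompts."""
    prompts = [
        "Rotate the object so its label faces the camera.",  # first prompt
        "Turn the object over to show its underside.",       # second prompt
    ]
    records: List[Tuple[str, float, bytes]] = []
    for prompt in prompts:
        show_prompt(prompt)                 # render the virtual UI element
        prompt_time = time.time()           # timestamp tied to this prompt
        for _ in range(frames_per_prompt):  # frames captured during/after the manipulation
            records.append((prompt, prompt_time, capture_frame()))
    return records


# Example usage with stand-in capture/display callables.
records = run_prompt_sequence(
    capture_frame=lambda: b"...frame bytes...",
    show_prompt=lambda text: print(f"[AR prompt] {text}"),
)
print(len(records))  # -> 6
```
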
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
Mani (U.S. Pub No. 20200117336 A1) is pertinent because it “relates to home appliances, in particular, to providing real-time virtual interactions with appliance products, and providing virtual aids for user interaction with appliance products.”
Yoffee (U.S. Pub No. 20220207898 A1) is pertinent because it “relate[s] generally to the field of electronic warranty verification. More particularly, embodiments of the present disclosure relate to systems, methods, and non-transitory computer readable medium capable of performing remote artificial intelligence-assisted electronic warranty verification to assist an entity with warranty verification of a product.”
Schweinfurth (U.S. Patent No. 11430051 B1) is pertinent because it “relates to a system and method for reordering a consumer product from a merchant, and more specifically relates to generating an identification tag, corresponding to a consumer product, that when scanned enables the consumer to reorder, return, and/or review the consumer product.”
Behara (U.S. Pub No. 20230206288 A1) is pertinent because it discloses that a “user device may receive, from a server device, a three-dimensional model of a product, and may display the three-dimensional model of the product, with a product review option, in an augmented reality user interface”.
Aggarwal (U.S. Pub No. 20180005208 A1) is pertinent because it “relates to interactive communication systems and, more particularly, to systems and methods to visualize user spending data in an altered reality.”
Angel (U.S. Pub No. 20090024628 A1) is pertinent because it “provide a home management system and method with a 360-degree virtual surface rendering application.”
Nelson (U.S. Patent No. 11501516 B2) is pertinent because “a computer implemented application for performing property inspections, capturing images associated with property inspections, and using those captured images are described herein”.
Davis (U.S. Pub No. 20210374875 A1) is pertinent because it “relates to augmented reality and, more particularly, to systems and methods for generating and displaying an enhanced situation visualization on a user computing device.”
Rathod – b (U.S. Pub No. 20210042724 A1) is pertinent because it “relates to enabling user to directly make payment to current or nearest place or particular place or searched or selected place on map associated merchant or user.”
Wilkinson (US Pub No. 20170364860 A1) is pertinent because it “relate[s] generally to providing products and services to individuals.”
Kentris (U.S. Pub No. 20200402001 A1) is pertinent because it “relates to computer-implemented e-commerce platforms.”
Flores (U.S. Pub No. 20190244214 A1) is pertinent because it provides “customized authorization of item self-returns.” 
Jacobs (U.S. Patent No. 10796290 B2) is pertinent because it “relates to systems and methods for facilitating a transaction using augmented reality, and more particularly using an interactive augmented environment.”
Keyvani (U.S. Patent No. 10929679 B1) is pertinent because it “relate[s] to computer systems and methods for providing an augmented reality graphical interface for assisting persons in identification of objects and assembly of products.”
Magee (U.S. Pub No. 20210124923 A1) is pertinent because it “relates generally to remote monitoring techniques, and more particularly, to a system and method for processing a refund request arising from a shopping session in a cashierless store.”
Isaacson (WO Pub No. 2020046906 A1) is pertinent because it “relates to applying new payment processes to making simplified purchases of products using a code or NFC tag configured on an object or a product in-store, in-stadium ordering, vehicle rentals, cryptocurrency payments, or purchasing a product through a scan of an already purchased product. A task or combination of tasks can be performed in a simplified way as initiated by a code or an NFC tag being interacted with by a mobile device.”
Kaehler (U.S. Pub No. 20220005095 A1) is pertinent because it is about “augmented reality (AR) devices, systems and methods that facilitate the purchase of one or more items or products at a retail location.”
Rathod (U.S. Pub No. 20180350144 A1) is pertinent because it is about “Systems and methods for virtual world simulations of the real-world or emulate real-life or real-life activities in virtual world or real life simulator or generating a virtual world based on real environment.”
Sun, An Augmented Reality Online Assistance Platform for Repair Tasks (May 2021) is pertinent because it discusses leveraging “a remote rendering technique that we proposed previously to relieve the rendering burden of the augmented reality end devices. By conducting a user study, we show that the proposed method outperforms conventional instructional videos and sketches. The answers to the questionnaires show that the proposed method receives higher recommendation than sketching, and, compared to conventional instructional videos, is outstanding in terms of instruction clarity, preference, recommendation, and confidence of task completion.”
XR Today Team, RE’FLEKT Review: Industrial Communication Enhanced by AR (February 12, 2021) is pertinent because it discusses a “Munich-based AR and MR specialist, RE’FLEKT” company which released an application called “RE’FLEKT ONE [that] includes tools for scalable production of AR apps that can be used for training, sharing instructions, product communication, etc. – essentially converting your 2D manuals into interactive guides.”
Pieper, The Best AR Apps for eCommerce (April 26, 2021) is pertinent because it discusses “the best AR apps for eCommerce” to “allow businesses of all sizes to digitally upscale their offers.”

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ivonnemary Rivera Gonzalez whose telephone number is (571)272-6158. The examiner can normally be reached Mon - Fri 9:00AM - 5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached on (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.




/IVONNEMARY RIVERA GONZALEZ/Examiner, Art Unit 3626
/NATHAN C UBER/Supervisory Patent Examiner, Art Unit 3626