Patent Application 18366017 - DISPLAY APPARATUS - Rejection


Patent Application 18366017 - DISPLAY APPARATUS

Title: DISPLAY APPARATUS

Application Information

  • Invention Title: DISPLAY APPARATUS
  • Application Number: 18366017
  • Submission Date: 2025-05-12
  • Effective Filing Date: 2023-08-07
  • Filing Date: 2023-08-07
  • National Class: 725
  • National Sub-Class: 037000
  • Examiner Employee Number: 88715
  • Art Unit: 2426
  • Tech Center: 2400

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 1

Cited Patents

The following patents were cited in the rejection:

  • US 2013/0290911 (Praphul et al.)
  • US 2019/0018243 (Christopher Iain PARKINSON)

Office Action Text


    DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to application 18/366,017 filed 8/7/2023.
Claims 1-20 are presented for examination.

Objections to the claims
Claims 4-17 relate to computing an effective success frame rate related to the play and pause control gesture, determining whether it exceeds an effective threshold, and determining a ratio as the first effective success frame rate of the play and pause control gesture. These claims are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The combination of the cited features makes them non-obvious.
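Purely as an illustrative sketch of how an effective success frame rate could be computed as a ratio and compared against a threshold (the function name, frame counts, and threshold value below are assumptions, not taken from the application):

    # Hypothetical illustration: ratio of frames in which the play/pause control
    # gesture is successfully detected to the total frames considered, compared
    # against an assumed effective threshold.
    def effective_success_frame_rate(success_frames: int, total_frames: int) -> float:
        if total_frames == 0:
            return 0.0
        return success_frames / total_frames

    THRESHOLD = 0.8  # assumed effective threshold
    rate = effective_success_frame_rate(success_frames=42, total_frames=50)
    print(rate, rate >= THRESHOLD)  # 0.84 True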

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.  Patentability shall not be negated by the manner in which the invention was made.


The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Praphul et al., Pub. No. US 2013/0290911 (hereafter Praphul) in view of Christopher Iain PARKINSON, Pub. No. US 2019/0018243 (hereafter PARKINSON).

Regarding Claim 1, Praphul discloses a display apparatus [FIG.1, para.0014: Discloses a display unit (element 124).], comprising:
a display configured to present an image and/or a user interface [FIG.1, para.0014: Discloses the display unit (element 124) is a digital television configured to facilitate the transmission of audio and video signals to an operating user; and para.0016: Discloses the display unit (element 124) represents an electronic visual display configured to present video and images to a user; and para.0024: Discloses a graphical user interface (a user interface).];
an image collector or a user input interface configured to connect with the image collector, the image collector being configured to collect a user image [para.0013: Discloses capturing user image with an infrared camera (an image collector); and para.0014: Discloses a gesture detection controller embedded with entertainment devices (elements 105, 110, 115) along with sensors to feed the controller (e.g. cameras, microphones, infrared-cameras, etc.).];
a memory configured to store instructions and data associated with the display [FIG.2, para.0016: Discloses a gesture database (element 223 - memory) and a computer-readable storage medium (element 219). The gesture database (element 223) stores a list of gestures (data) along with an associated operation command (data) and destination device (data). The storage medium (element 219) represents volatile storage (e.g. random-access memory), non-volatile storage (e.g. hard disk drive, read-only memory, compact disc read-only memory, flash storage, etc.), or combinations thereof. Furthermore, the storage medium (element 219) may include software (element 213) that is executable by a host device (element 210).];
one or more processors in connection with the display, the image collector or the user input interface and the memory and configured to execute the instructions [FIG.2, para.0016: Discloses the host or entertainment device (element 210) includes a gesture detection module (element 211) and a signal processor (element 209) for facilitating the gesture and multimodal control. FIG.2 illustrates the signal processor (element 209) in connection with the display (element 224).] to cause the display apparatus to:
in response to a switch command for a gesture detection function switch on the user interface [para.0014: Discloses the requisite sensors feed the controller (user inputs - e.g. cameras, microphones, infrared-cameras, etc.); and FIG(s).3A-G, para.0017: Discloses gesture commands made by the user. The different hand gestures (received via the user input) are mapped to different commands (switching command for a gesture detection function) for the processor to execute. For example, FIG. 3A depicts a gesture command in which the user's hand forms a closed fist and a thumb pointing in a westward direction. Such a gesture command may be mapped to a "BACK" operation such as skipping to a previous chapter on a DVD player device. FIG. 3C depicts yet another gesture command in which the user's hand is open, fingers close together, and the thumb is perpendicular to the index finger. Such a gesture may be mapped to a "STOP" operation such as stopping the playback; and FIG.4D, para.0022-0023: Discloses "spatial" meta-interaction, for example, FIG.4D illustrates detecting different regions of recognized (via user interface) user hand positions. The hand position in the different regions causes switching to a different command. The different spatial regions are mapped to different devices such that an interaction triggered in a particular spatial region is destined towards a particular device. Thus, particular gestures act as a "toggle meta-interaction" to switch.],
Praphul does not explicitly disclose detect whether the image collector is occupied by a specified application that needs to start the image collector based on an attribute state of the image collector; in response to the attribute state being a first state, determine that the image collector is occupied by the specified application that needs to start the image collector, and not perform a gesture detection function; in response to the attribute state being a second state, determine that the image collector is not occupied by the specified application that needs to start the image collector, and perform the gesture detection function (emphasis added to distinguish the elements not taught by Praphul). However, in analogous art, PARKINSON discloses a computing device equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition [para.0081]. Paragraph 0025 and the operational flow diagram of FIG. 1B depict a conventional operational flow 100b in which a second application process 110b requests access to the same microphone (a user interface) of FIG. 1A while a first application process 110a already has access (not busy, the attribute state being a first state) to the microphone (the user interface). Thus, the operational flow 100b illustrates how requests to access an input resource are handled when that resource is already being utilized by an application process. Because the first application process 110a has already been given exclusive access to the microphone (the user interface), the audio stack 130 determines that the associated hardware resource is busy or unavailable (attribute state being a second state), and the second application process 110b is blocked from accessing the microphone. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Praphul with detecting whether the image collector is occupied by a specified application that needs to start the image collector based on an attribute state of the image collector, as taught by PARKINSON, in order to yield predictable results such as providing operating systems that communicate with the hardware layer, which in turn facilitates the translation of the received instruction to initiate [PARKINSON: para.0001].
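As an illustrative sketch only (not taken from Praphul or PARKINSON; the class, method, and state names below are hypothetical), the claimed gating logic performs gesture detection only when the image collector's attribute state indicates it is not occupied by a specified application:

    from enum import Enum

    class AttributeState(Enum):
        OCCUPIED = 1    # first state: collector occupied by a specified application
        AVAILABLE = 2   # second state: collector not occupied

    class ImageCollector:
        """Hypothetical camera wrapper; a real attribute state would come from the platform."""
        def __init__(self, state: AttributeState):
            self._state = state

        def query_attribute_state(self) -> AttributeState:
            return self._state

    def on_gesture_switch_command(collector: ImageCollector) -> bool:
        """Perform the gesture detection function only if the collector is not occupied."""
        if collector.query_attribute_state() is AttributeState.OCCUPIED:
            return False   # first state: do not perform gesture detection
        return True        # second state: perform gesture detection

    print(on_gesture_switch_command(ImageCollector(AttributeState.AVAILABLE)))  # True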

Regarding Claim 2, the combined teachings of Praphul and PARKINSON disclose the display apparatus according to claim 1, and Praphul further discloses wherein the one or more processors are further configured to execute the instructions to cause the display apparatus to:
in response to the gesture detection function being enabled, acquire a user image comprising a user gesture collected by the image collector [FIG(s).3A-I: Illustrates the image collector acquiring user hand gestures; and FIG.4D: Illustrates acquiring user gestures.];
in response to detecting that the user gesture in the user image is a play and pause control gesture, acquire a playing mode for playing a video file [FIG.3, para(s).0017-0019: Discloses play control gestures (FIG(s). 3A, 3B, 3D) and pause control gesture (FIG. 3C).];
in response to the playing mode being full-screen playing, respond to a command generated from the play and pause control gesture by performing a play operation or a pause operation on the video file [FIG.3, para(s).0017-0019: Discloses play (FIG(s).3A, 3B, 3D), pause control gesture (FIG. 3C), and playing mode being full-screen playing (FIG. 3E).];
in response to the playing mode being small-screen playing, not respond to the command generated from the play and pause control gesture [FIG.3, para(s).0017-0019: Discloses play (FIG(s).3A, 3B, 3D), pause control gesture (FIG. 3C), and playing mode being small-screen playing (FIG. 3F).].
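As an illustrative sketch only (hypothetical names, not claim language or reference disclosure), the playing-mode gate recited in claim 2 amounts to responding to the play/pause command only during full-screen playing:

    class Player:
        """Hypothetical video player stub."""
        def __init__(self):
            self.playing = False

        def toggle_play_pause(self):
            self.playing = not self.playing

    def handle_play_pause_gesture(playing_mode: str, player: Player) -> bool:
        """Respond to the play and pause control gesture only in full-screen playing mode."""
        if playing_mode == "full_screen":
            player.toggle_play_pause()   # perform the play or pause operation on the video file
            return True
        return False                     # small-screen playing: do not respond to the command

    p = Player()
    print(handle_play_pause_gesture("full_screen", p), p.playing)   # True True
    print(handle_play_pause_gesture("small_screen", p), p.playing)  # False True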

Regarding Claim 3, the combined teachings of Praphul and PARKINSON disclose the display apparatus according to claim 2, and Praphul further discloses wherein the one or more processors are further configured to execute the instructions to cause the display apparatus to:
acquire a signal source ID for identifying a channel type [para.0024, FIG.4E: Discloses controlling multiple devices. Here, the particular attributes of the gesture are analyzed for determining the appropriate destination device (channel type). More specifically, the meta-interaction may be embedded within the gesture command itself. A hand swipe gesture from left to right may mean "increase volume" of a particular device, while the number of fingers held-out while making the gesture may specify whether the gesture command is destined for the first device 405, second device 410, or third device 415 (acquire a signal source ID).];
in response to the signal source ID indicating a first channel type, acquire playing mode broadcast data for playing the video file, and determine whether to respond to the command generated from the play and pause control gesture based on the playing mode indicated by the playing mode broadcast data [para.0024, FIG.4E: Discloses a hand swipe gesture from left to right may mean "increase volume" (acquire playing mode broadcast data for playing the video file) of a particular device (the signal source ID).];
in response to the signal source ID indicating a second channel type, not respond to the command generated from the play and pause control gesture [para.0024, FIG.4E: Discloses the "increase volume" command is to the second device (a second channel type) based on the hand swipe gesture from left to right (the signal source ID). Thus, the first device (a second channel type) does not respond to the command generated from the play and pause control gesture (increasing volume).].
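Again as an illustrative sketch (the mapping of signal source IDs to channel types below is an assumption, not taken from the claims or Praphul), the channel-type gate of claim 3 can be expressed as:

    def respond_to_play_pause(signal_source_id: str, playing_mode: str) -> bool:
        """Decide whether to respond to the play/pause gesture based on channel type."""
        # Hypothetical mapping of signal source IDs to channel types.
        first_channel_sources = {"atv", "dtv"}
        channel_type = "first" if signal_source_id in first_channel_sources else "second"
        if channel_type == "second":
            return False                          # second channel type: do not respond
        return playing_mode == "full_screen"      # first channel type: gate on the playing mode broadcast data

    print(respond_to_play_pause("dtv", "full_screen"))    # True
    print(respond_to_play_pause("hdmi1", "full_screen"))  # False (second channel type)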

Regarding Claim 18, Praphul discloses a control method for a display apparatus [FIG.1, para.0014: Discloses a display unit (element 124).], comprising:
an image collector [para.0013: Discloses capturing user image with an infrared camera (an image collector); and para.0014: Discloses a gesture detection controller embedded with entertainment devices (elements 105, 110, 115) along with sensors to feed the controller (e.g. cameras, microphones, infrared-cameras, etc.).];
in response to a switch command for a gesture detection function switch on a user interface [para.0014: Discloses the requisite sensors feed the controller (user inputs - e.g. cameras, microphones, infrared-cameras, etc.); and FIG(s).3A-G, para.0017: Discloses gesture commands made by the user. The different hand gestures (received via the user input) are mapped to different commands (switching command for a gesture detection function) for the processor to execute. For example, FIG. 3A depicts a gesture command in which the user's hand forms a closed fist and a thumb pointing in a westward direction. Such a gesture command may be mapped to a "BACK" operation such as skipping to a previous chapter on a DVD player device. FIG. 3C depicts yet another gesture command in which the user's hand is open, fingers close together, and the thumb is perpendicular to the index finger. Such a gesture may be mapped to a "STOP" operation such as stopping the playback; and FIG.4D, para.0022-0023: Discloses "spatial" meta-interaction, for example, FIG.4D illustrates detecting different regions of recognized (via user interface) user hand positions. The hand position in the different regions causes switching to a different command. The different spatial regions are mapped to different devices such that an interaction triggered in a particular spatial region is destined towards a particular device. Thus, particular gestures act as a "toggle meta-interaction" to switch.],
Praphul does not explicitly disclose detecting whether an image collector is occupied by a specified application that needs to start the image collector based on an attribute state of the image collector; in response to the attribute state being a first state, determining that the image collector is occupied by the specified application that needs to start the image collector, and not performing a gesture detection function; in response to the attribute state being a second state, determining that the image collector is not occupied by the specified application that needs to start the image collector, and performing the gesture detection function (emphasis added to distinguish the elements not taught by Praphul). However, in analogous art, PARKINSON discloses a computing device equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition [para.0081]. Paragraph 0025 and the operational flow diagram of FIG. 1B depict a conventional operational flow 100b in which a second application process 110b requests access to the same microphone (a user interface) of FIG. 1A while a first application process 110a already has access (not busy, the attribute state being a first state) to the microphone (the user interface). Thus, the operational flow 100b illustrates how requests to access an input resource are handled when that resource is already being utilized by an application process. Because the first application process 110a has already been given exclusive access to the microphone (the user interface), the audio stack 130 determines that the associated hardware resource is busy or unavailable (attribute state being a second state), and the second application process 110b is blocked from accessing the microphone. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Praphul with detecting whether the image collector is occupied by a specified application that needs to start the image collector based on an attribute state of the image collector, as taught by PARKINSON, in order to yield predictable results such as providing operating systems that communicate with the hardware layer, which in turn facilitates the translation of the received instruction to initiate [PARKINSON: para.0001].

Regarding Claim 19, the combined teachings of Praphul and PARKINSON disclose the method according to claim 18, and Praphul further discloses the method further comprising:
in response to the gesture detection function being enabled, acquiring a user image comprising a user gesture collected by the image collector [FIG(s).3A-I: Illustrates the image collector acquiring user hand gestures; and FIG.4D: Illustrates acquiring user gestures.];
in response to detecting that the user gesture in the user image is a play and pause control gesture, acquiring a playing mode for playing a video file [FIG.3, para(s).0017-0019: Discloses play control gestures (FIG(s). 3A, 3B, 3D) and pause control gesture (FIG. 3C).];
in response to the playing mode being full-screen playing, responding to a command generated from the play and pause control gesture by performing a play operation or a pause operation on the video file [FIG.3, para(s).0017-0019: Discloses play (FIG(s).3A, 3B, 3D), pause control gesture (FIG. 3C), and playing mode being full-screen playing (FIG. 3E).];
in response to the playing mode being small-screen playing, not responding to a command generated from the play and pause control gesture [FIG.3, para(s).0017-0019: Discloses play (FIG(s).3A, 3B, 3D), pause control gesture (FIG. 3C), and playing mode being small-screen playing (FIG. 3F).].

Regarding Claim 20, the combined teachings of Praphul and PARKINSON disclose the method according to claim 19, and Praphul further discloses the method further comprising:
acquiring a signal source ID for identifying a channel type [para.0024, FIG.4E: Discloses controlling multiple devices. Here, the particular attributes of the gesture are analyzed for determining the appropriate destination device (channel type). More specifically, the meta-interaction may be embedded within the gesture command itself. A hand swipe gesture from left to right may mean "increase volume" of a particular device, while the number of fingers held-out while making the gesture may specify whether the gesture command is destined for the first device 405, second device 410, or third device 415 (acquire a signal source ID).];
in response to the signal source ID indicating a first channel type, acquiring playing mode broadcast data for playing the video file, and determining whether to respond to a command generated from the play and pause control gesture based on the playing mode indicated by the playing mode broadcast data [para.0024, FIG.4E: Discloses a hand swipe gesture from left to right may mean "increase volume" (acquire playing mode broadcast data for playing the video file) of a particular device (the signal source ID).];
in response to the signal source ID indicating a second channel type, not responding to the command generated from the play and pause control gesture [para.0024, FIG.4E: Discloses the "increase volume" command is to the second device (a second channel type) based on the hand swipe gesture from left to right (the signal source ID). Thus, the first device (a second channel type) does not respond to the command generated from the play and pause control gesture (increasing volume).].

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. 
Gokturk et al., (US 2011/0291926) – Discloses recognizing gestures of body parts using depth perceptive sensors [para.0019]. The identified hand gesture is compared to a set of multiple designated hand gestures that correspond to a simple code or a series of commands to an electronic system [para.0053]. For example, a user directing one hand at an electronic device while raising two fingers may be recognized and interpreted as a first command [para.0087].
Chris Kalaboukis, (US 11,803,831) – Discloses features: (1) capture of a real-time image of the user's body; (2) detection or recognition of a gesture; (3) associating and mapping the gesture with a control event indicative of a directive to perform a particular transaction (col.12, lines 63-67).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL OCAK whose telephone number is (571) 272-2774.  The examiner can normally be reached on M-F 8:00 AM - 5:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached on 571-272-4195.  The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.  Status information for published applications may be obtained from either Private PAIR or Public PAIR.  Status information for unpublished applications is available through Private PAIR only.  For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADIL OCAK/Primary Examiner, Art Unit 2426                                                                                                                                                                                                        




