Patent Application 18300817 - HEADPHONES AND METHODS FOR ADJUSTING SOUND - Rejection
Title: HEADPHONES AND METHODS FOR ADJUSTING SOUND EFFECTS OF HEADPHONES
Application Information
- Invention Title: HEADPHONES AND METHODS FOR ADJUSTING SOUND EFFECTS OF HEADPHONES
- Application Number: 18300817
- Submission Date: 2025-05-21
- Effective Filing Date: 2023-04-14
- Filing Date: 2023-04-14
- National Class: 381
- National Sub-Class: 074000
- Examiner Employee Number: 88772
- Art Unit: 2695
- Tech Center: 2600
Rejection Summary
- 102 Rejections: 1
- 103 Rejections: 5
Cited Patents
The following patents were cited in the rejection:
- Lin et al., US 2015/0195663 A1
- Kemmerer et al., US 10,462,551 B2
- Weinans et al., US 2007/0206829 A1
Office Action Text
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the optimization module, the input module and the parsing module must be shown or the feature(s) canceled from the claim(s). No new matter should be entered. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an optimization module, configured to adjust,” “a communications module, configured to communicate,” “a parsing module configured to parse,” and “the input module is configured to input” in claims 1-3, 5-6, 11-12, 16-18 and 20-21. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim 10 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lin et al. (US 2015/0195663 A1), hereinafter “Lin.”

As to claim 10, Lin discloses a method for adjusting a sound effect of a headphone (Fig. 6), comprising: obtaining wearing state information of a human ear and/or type information of an audio file to be played (¶0086, Fig. 6. “Once the media is selected, metadata for the media is parsed and/or analyzed to determine if the media contains music, voice, or a movie, and what additional details are available such as the artist, genre or song name (610).”); selecting, based on the wearing state information of the human ear and/or the type information of the audio file to be played, a corresponding sound effect mode on a terminal device that is in a communication connection with the headphone (¶0086, Fig. 6. “The parsed/analyzed data is used to request a sound profile from a server over a network, such as the Internet, or from local storage (615). For example, Alpine could maintain a database of sound profiles matched to various types of media and matched to various types of reproduction devices. The sound profile could contain parameters for increasing or decreasing various frequency bands and other sound parameters for enhancing portions of the audio.
Such aspects could include dynamic equalization, crossover gain, dynamic noise compression, time delays, and/or three-dimensional audio effects.”); and adjusting, based on corresponding values of sound parameters of the selected sound effect mode, the sound effect when the headphone plays the audio file (¶0086, Fig. 6. “The sound profile is received (620) and then adjusted to a particular user's preference (625) if necessary. The adjusted sound profile is then transmitted (630) to a reproduction device, such as a pair of headphones.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Lin.

As to claim 1, Lin discloses a headphone (headphones 120, fig. 1.), comprising: a speaker, configured to play an audio file (¶0028, Figs. 1 and 3. “Headphones 120 can include stereo speakers including separate drivers for the left and right ear to provide distinct audio to each ear”); a communication module, configured to establish a communication connection with a terminal device, wherein the terminal device includes a display screen configured to display a first user interface, the first user interface includes one or more selection controls configured to select any sound effect mode from at least two different sound effect modes, and sound parameters of audio streams of the audio file played by the speaker of the headphone in the different sound effect modes have different values (¶0027, ¶0062 and ¶0084, Figs. 1a, 3 and 8-9a/b. “mobile device 110 can alternatively connect to headphones 120 using wireless connection 160.” “Network interface 380 can be wired or wireless.” “FIG. 8 shows an exemplary user interface by which the user can select which aspects of tuning should be utilized when a sound profile is applied.”); an optimization module, configured to adjust, based on a correspondence between a certain sound effect mode and its corresponding values of the sound parameters, values of the sound parameters of an audio stream of the audio file to the corresponding values of the sound parameters of the certain sound effect mode (¶0031, ¶0062-0063 and ¶0081, Fig. 3. “Mobile device 110 can allow users to tune the sound profile of their headphone to their own preferences and/or apply predefined sound profiles suited to the genre, artist, song, or the user. For example, mobile device 110 can use Alpine's Tune-It mobile application.
Tune-It can allow users quickly modify their headphone devices to suite their individual tastes.” “The sound profile could contain parameters for increasing or decreasing various frequency bands and other sound parameters for enhancing portions of the audio. Such aspects could include dynamic equalization, crossover gain, dynamic noise compression, time delays, and/or three-dimensional audio effects.” “Processor 350 can process the audio information to apply sound profiles.” Processor of headphones applies the user modified sound profile to the audio.); and a controller, configured to control the communication module to receive a selection instruction for the certain sound effect mode instructed by a user by triggering a selection control of the one or more selection controls, control the communication module to receive the audio stream of the audio file from the terminal device, control the optimization module to process the audio stream of the audio file and output the audio stream of the audio file to the speaker, and control the speaker to play the audio file (¶0062-0063, Fig. 3. “An audio signal, user input, metadata, other input or any portion or combination thereof can be processed in reproduction system 300 using the processor 350. Processor 350 can be used to perform analysis, processing, editing, playback functions, or to combine various signals, including adding metadata to either or both of audio and image signals.” “Processor 350 can also include interfaces to pass digital audio information to amplifier 320. Processor 350 can process the audio information to apply sound profiles, create a mono signal and apply low pass filter.” Processor receives the audio from the mobile device, processes the audio based on the sound profile, and passes the audio to the amplifier for playback by the speakers.).

Lin does not expressly disclose a separate optimization module and controller.
However, the processor of Lin provides the functionality of both the claimed optimization module and controller. Based on the specification, the optimization module appears to be executed by a processor. Before the effective filing date of the claimed invention, it would have been obvious that a single processor performing two different functions could be replaced by two processors performing the functions. The motivation would have been an obvious duplication of parts which would yield predictable results.

As to claim 2, Lin discloses the at least two different sound effect modes include a first sound effect mode and a second sound effect mode, wherein in the first sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band of 200 Hz-10 kHz has a flat trend, and a signal response in a frequency band below 100 Hz is weaker than the signal response in the frequency band of 200 Hz-10 kHz (¶0084-0085 and Figs. 8-9a/b. “For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical.” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.); and in the second sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band below 1 kHz is weaker than a signal response in a frequency band above 1 kHz (¶0084-0085 and Figs. 8-9a/b.
“For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical.” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.).

As to claim 3, Lin discloses the at least two different sound effect modes include a third sound effect mode, in the third sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band of 400 Hz-3 kHz is significantly enhanced relative to signal responses in other frequency bands (¶0084-0085 and Figs. 8-9a/b. “For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical.” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.).

As to claim 5, Lin discloses the headphone further includes a parsing module configured to parse a type of an audio file to be played (¶0064 and ¶0081, Fig. 3. “Processor 350 can parse and/or analyze metadata and request sound profiles via network 380.”); and the controller selects a specific sound effect mode from the at least two different sound effect modes based on the type of the audio file determined by the parsing module, and controls the speaker to play the audio file in the specific sound effect mode (¶0064 and ¶0081, Fig. 3.
“Processor 350 can parse and/or analyze metadata and request sound profiles via network 380.” “Once the media is selected, metadata for the media is parsed and/or analyzed to determine if the media contains music, voice, or a movie, and what additional details are available such as the artist, genre or song name (610).”).

As to claim 6, Lin discloses the headphone further includes a storage and an input module, the input module is configured to input the corresponding values of the sound parameters of each sound effect mode of the at least two different sound effect modes, and the storage is configured to store the corresponding values of the sound parameters of each sound effect mode (¶0060, ¶0063 and ¶0066, Fig. 3. “An input 340 including one or more input devices can be configured to receive instructions and information. For example, in some implementations input 340 can include a number of buttons. In some other implementations input 340 can include one or more of a touch pad, a touch screen, a cable interface, and any other such input devices known in the art. Input 340 can include knob 290.” “Processor 350 can use memory 360 to aid in the processing of various signals, e.g., by storing intermediate results.”).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Kemmerer et al. (US 10,462,551 B2), hereinafter “Kemmerer.”

As to claim 4, Lin does not expressly disclose the headphone further includes a sensor configured to detect a wearing state of a human ear; and the controller is configured to select a specific sound effect mode from the at least two different sound effect modes based on the wearing state of the human ear detected by the sensor, and control the speaker to play the audio file in the specific sound effect mode. Kemmerer discloses the headphone further includes a sensor configured to detect a wearing state of a human ear (Kemmerer, Col. 12 lines 31-43, Fig. 2.
“This acoustic transfer function can be continuously calculated, or triggered by an event, e.g., a sensor event such as detection of movement of the personal audio device 10 by a sensor such as an infra-red sensor or capacitive proximity sensor. The threshold value is established using calculations of voltage differentials when the personal audio device 10 is off-head, and on-head, respectively, e.g., between the driver signal and feedback microphone signal(s). When the magnitude of this acoustic transfer function (G.sub.sd) is below a threshold value (Yes to decision 220), the control circuit 30 determines that the personal audio device 10 has changed from an on-head state to an off-head state (process 230).”); and the controller is configured to select a specific sound effect mode from the at least two different sound effect modes based on the wearing state of the human ear detected by the sensor, and control the speaker to play the audio file in the specific sound effect mode (Kemmerer, Col. 15 lines 18-34, Fig. 2. “In response to determining that the personal audio device 10 has had a state change, either from on-head state to off-head state, or from off-head state to on-head state, in some examples, the control circuit 30 is further configured to adjust at least one function of the personal audio device 10 (process 260)… In some cases, the control circuit 30 is configured to adjust functions including one or more of: an audio playback function, a power function, a capacitive touch interface function, an active noise reduction (ANR) function, a controllable noise cancellation (CNC) function or a shutdown timer function.”).

Lin and Kemmerer are analogous art because they are from the same field of endeavor with respect to headphones. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to adjust audio functions when the headphones are removed, as taught by Kemmerer.
The motivation would have been to automatically adjust settings when the headphones are removed instead of manually.

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Ghani (Ghani, Uzair. "How to Access the Control Center on iPhone X." wccftech, 3 Nov. 2017, wccftech.com/iphone-x-control-center/.).

As to claim 7, Lin does not expressly disclose the first user interface includes a main operation interface, and the main operation interface includes at least one of: a playback operation control, a Bluetooth connection control, and a tuning control, wherein the playback operation control is configured to control the speaker to play or pause a playback of the audio file, the Bluetooth connection control is configured to connect the headphone to the terminal device, and the tuning control is configured to adjust a volume of the speaker when playing the audio file. Ghani discloses the first user interface includes a main operation interface, and the main operation interface includes at least one of: a playback operation control, a Bluetooth connection control, and a tuning control, wherein the playback operation control is configured to control the speaker to play or pause a playback of the audio file, the Bluetooth connection control is configured to connect the headphone to the terminal device, and the tuning control is configured to adjust a volume of the speaker when playing the audio file (Ghani, first figure. Interface showing playback controls, Bluetooth toggle and volume control shown.).

Lin and Ghani are analogous art because they are from the same field of endeavor with respect to audio playback devices. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to display Bluetooth, playback and volume controls, as taught by Ghani. The motivation would have been to allow the user to make multiple adjustments from the same interface.
As to claim 8, Lin in view of Ghani discloses the first user interface includes a sound effect setting interface, and the sound effect setting interface pops up when the terminal receives a sound effect setting instruction from the user (Lin, Figs. 8-9a/c. Having an interface pop up is well known, routine and conventional and would have been obvious to one of ordinary skill in the art.); and each selection control of the one or more selection controls corresponding to the at least two different sound effect modes includes an operation region for receiving the selection instruction from the user, and a ratio of a total area of the operation regions of the one or more selection controls to a total area of the sound effect setting interface is greater than or equal to 0.2 and less than or equal to 1 (Lin, Figs. 8-9a/c. Full screen, i.e. equal to 1. Further, the size of an operation region is nothing more than a simple design choice.).

As to claim 9, Lin in view of Ghani discloses the each selection control of the one or more selection controls corresponding to the at least two different sound effect modes further includes an operation description region configured to explain functions of the operation region, wherein when the first user interface is the main operation interface, the operation description region and the operation region at least partially overlap, and when the first user interface is the sound effect setting interface, the operation description region and the operation region are disposed independently (Lin, Figs. 8-9a/b and Ghani, first and second figures. Whether to have text and icons overlap is nothing more than a simple design choice, both of which are well known, routine and conventional in the art.).

Claims 11-13, 15-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Weinans et al. (US 2007/0206829 A1), hereinafter “Weinans.”

As to claim 11, Lin discloses a headphone (headphones 120, fig.
1.), comprising: a speaker, configured to play an audio file (¶0028, Figs. 1 and 3. “Headphones 120 can include stereo speakers including separate drivers for the left and right ear to provide distinct audio to each ear”); a communication module, configured to establish a communication connection with at least one terminal device, wherein the at least one terminal device includes a display screen configured to display a second user interface (¶0027, ¶0062 and ¶0084, Figs. 1a, 3 and 8-9a/b. “mobile device 110 can alternatively connect to headphones 120 using wireless connection 160.” “Network interface 380 can be wired or wireless.”); and a controller, configured to control the communication module to receive the audio file from the terminal device that is in the communication connection with the headphone and control the speaker to play the audio file (¶0062-0063, Fig. 3. “An audio signal, user input, metadata, other input or any portion or combination thereof can be processed in reproduction system 300 using the processor 350. Processor 350 can be used to perform analysis, processing, editing, playback functions, or to combine various signals, including adding metadata to either or both of audio and image signals.” “Processor 350 can also include interfaces to pass digital audio information to amplifier 320. Processor 350 can process the audio information to apply sound profiles, create a mono signal and apply low pass filter.” Processor receives the audio from the mobile device, processes the audio based on the sound profile, and passes the audio to the amplifier for playback by the speakers.). 
Lin does not expressly disclose the second user interface includes a multi-device connection function control configured to turn on or off a dual-device connection function, when the multi-device connection function control is turned on, the communication module is configured to establish the communication connection with at least two terminal devices, and when the multi-device connection control is turned off, the communication module is configured to establish a communication connection with only one terminal device.

Weinans discloses the second user interface includes a multi-device connection function control configured to turn on or off a dual-device connection function, when the multi-device connection function control is turned on, the communication module is configured to establish the communication connection with at least two terminal devices, and when the multi-device connection control is turned off, the communication module is configured to establish a communication connection with only one terminal device (Weinans, ¶0064-0066 and ¶0069, Figs. 1-3. “switch 7 is operable between two settings: 1) A single point connection setting. In this position, control unit 3 is configured only to relay audio signals received from a first device to speaker 11. This first device is therefore a first prioritized device, preferably a voice call device such as mobile phone 2. … 2) A multipoint connection setting. In this position, signal transceiver 6 is allowed to connect to other devices than the first device 2. Signal transceiver 6 may therefore be connected to first device 2 and a second device 3 or 4, or both 3 and 4, or even only to one or more second devices 3 or 4.” “As an alternative solution, the switch may be operated by means of commands transmitted from the first priority device.” Switch between single point and multipoint connection may be operated by first priority device (e.g. mobile phone 2).).
Lin and Weinans are analogous art because they are from the same field of endeavor with respect to wireless audio connections. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to connect to multiple devices, as taught by Weinans. The motivation would have been to simultaneously connect for voice calls from one device and audio from another (Weinans, ¶0001 and ¶0011).

As to claim 12, Lin in view of Weinans discloses the multi-device connection function control is a dual-device connection function control configured to turn on a dual-device connection function (Weinans, ¶0064-0066, Figs. 1-3. “A multipoint connection setting. In this position, signal transceiver 6 is allowed to connect to other devices than the first device 2. Signal transceiver 6 may therefore be connected to first device 2 and a second device 3 or 4, or both 3 and 4, or even only to one or more second devices 3 or 4.”); and the communication module is configured to establish a Bluetooth connection with at least one terminal device (Both Lin, ¶0027, and Weinans, ¶0063, disclose a Bluetooth connection.). The motivation is the same as claim 11 above.

As to claim 13, Lin in view of Weinans does not expressly disclose the second user interface further includes a terminal connection control configured to establish or disconnect a communication connection between the headphone and the terminal device. However, the examiner takes Official Notice that connecting/disconnecting Bluetooth devices via an interface is well-known, routine and conventional in the art and would have been obvious to one of ordinary skill in the art.
As to claim 15, Lin in view of Weinans discloses wherein the display screen is further configured to display a third user interface, the third user interface includes one or more selection controls configured to select any sound effect mode from at least two different sound effect modes, and sound parameters of audio streams of the audio file played by the speaker of the headphone in the different sound effect modes have different values (Lin, ¶0027, ¶0062 and ¶0084, Figs. 1a, 3 and 8-9a/b. “mobile device 110 can alternatively connect to headphones 120 using wireless connection 160.” “Network interface 380 can be wired or wireless.” “FIG. 8 shows an exemplary user interface by which the user can select which aspects of tuning should be utilized when a sound profile is applied.”).

As to claim 16, Lin in view of Weinans discloses configured to adjust, based on a correspondence between a certain sound effect mode and its corresponding values of the sound parameters, values of the sound parameters of an audio stream of the audio file to the corresponding values of the sound parameters of the certain sound effect mode (Lin, ¶0031, ¶0062-0063 and ¶0081, Fig. 3. “Mobile device 110 can allow users to tune the sound profile of their headphone to their own preferences and/or apply predefined sound profiles suited to the genre, artist, song, or the user. For example, mobile device 110 can use Alpine's Tune-It mobile application. Tune-It can allow users quickly modify their headphone devices to suite their individual tastes.” “The sound profile could contain parameters for increasing or decreasing various frequency bands and other sound parameters for enhancing portions of the audio. 
Such aspects could include dynamic equalization, crossover gain, dynamic noise compression, time delays, and/or three-dimensional audio effects.” “Processor 350 can process the audio information to apply sound profiles.” Processor of headphones applies the user-modified sound profile to the audio.). Lin does not expressly disclose a separate optimization module and controller. However, the processor of Lin provides the functionality of both the claimed optimization module and controller. Based on the specification, the optimization module appears to be executed by a processor. Before the effective filing date of the claimed invention, it would have been obvious that a single processor performing two different functions could be replaced by two processors performing the functions. The motivation would have been an obvious duplication of parts which would have yielded predictable results.

As to claim 17, Lin in view of Weinans discloses the at least two different sound effect modes include a first sound effect mode and a second sound effect mode, wherein in the first sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band of 200 Hz-10 kHz has a flat trend, and a signal response in a frequency band below 100 Hz is weaker than the signal response in the frequency band of 200 Hz-10 kHz (Lin, ¶0084-0085 and Figs. 8-9a/b. “For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical."” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. 
Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.); and in the second sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band below 1 kHz is weaker than a signal response in a frequency band above 1 kHz (Lin, ¶0084-0085 and Figs. 8-9a/b. “For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical."” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.).

As to claim 18, Lin in view of Weinans discloses the at least two different sound effect modes include a third sound effect mode, in the third sound effect mode, in a frequency response curve of an electrical signal of the audio file processed by the optimization module, a signal response in a frequency band of 400 Hz-3 kHz is significantly enhanced relative to signal responses in other frequency bands (Lin, ¶0084-0085 and Figs. 8-9a/b. “For example, if a user selects "Music" with selector 812, selector 813 could present different genres, such as "Rock," "Jazz," and "Classical."” “The user can manipulate any or all of these sliders up or down along their vertical ranges 930 to modify the sound presented.” User can adjust the equalization sliders to modify frequency response curves of different genre presets. Boosting/attenuating certain frequencies is a simple design choice that would have been obvious to one of ordinary skill in the art.).
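The three claimed sound effect modes discussed above can be pictured as per-band gain tables. In the sketch below, the band edges follow the claim language, but the gain values in dB and all identifiers are assumptions of this illustration, not values taken from Lin or from the claims.

```python
# Hypothetical sketch of the claimed sound effect modes as per-band gain
# tables. Each entry is (low_hz, high_hz, gain_db); gains are illustrative.
SOUND_EFFECT_MODES = {
    # First mode: flat trend across 200 Hz-10 kHz, weaker below 100 Hz.
    "mode_1": [(0, 100, -6.0), (100, 200, -3.0), (200, 10_000, 0.0)],
    # Second mode: response below 1 kHz weaker than above 1 kHz.
    "mode_2": [(0, 1_000, -4.0), (1_000, 20_000, 0.0)],
    # Third mode: 400 Hz-3 kHz enhanced relative to other bands.
    "mode_3": [(0, 400, 0.0), (400, 3_000, 6.0), (3_000, 20_000, 0.0)],
}

def gain_at(mode, freq_hz):
    """Return the gain (dB) a given mode applies at a frequency."""
    for low, high, gain in SOUND_EFFECT_MODES[mode]:
        if low <= freq_hz < high:
            return gain
    return 0.0  # outside all defined bands: unity gain
```

In this picture, Lin's genre presets and equalization sliders correspond to choosing and editing one of these tables; the claimed frequency-band relationships are relative comparisons between entries of a single table.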
As to claim 20, Lin in view of Weinans discloses the headphone further includes a parsing module configured to parse a type of an audio file to be played (Lin, ¶0064 and ¶0081, Fig. 3. “Processor 350 can parse and/or analyze metadata and request sound profiles via network 380.”); and the controller selects a specific sound effect mode from the at least two different sound effect modes based on the type of the audio file determined by the parsing module, and controls the speaker to play the audio file in the specific sound effect mode (Lin, ¶0064 and ¶0081, Fig. 3. “Processor 350 can parse and/or analyze metadata and request sound profiles via network 380.” “Once the media is selected, metadata for the media is parsed and/or analyzed to determine if the media contains music, voice, or a movie, and what additional details are available such as the artist, genre or song name (610).”).

As to claim 21, Lin in view of Weinans discloses the headphone further includes a storage and an input module, the input module is configured to input the corresponding values of the sound parameters of each sound effect mode of the at least two different sound effect modes, and the storage is configured to store the corresponding values of the sound parameters of each sound effect mode (Lin, ¶0060, ¶0063 and ¶0066, Fig. 3. “An input 340 including one or more input devices can be configured to receive instructions and information. For example, in some implementations input 340 can include a number of buttons. In some other implementations input 340 can include one or more of a touch pad, a touch screen, a cable interface, and any other such input devices known in the art. Input 340 can include knob 290.” “Processor 350 can use memory 360 to aid in the processing of various signals, e.g., by storing intermediate results.”).

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Weinans, as applied to claim 15 above, and further in view of Kemmerer.
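The parse-then-select flow quoted from Lin for claim 20 above (metadata parsed to determine music, voice, or movie, then a profile applied) can be sketched as follows. The function names and the type-to-mode mapping are hypothetical illustrations, not taken from Lin.

```python
# Hypothetical sketch of claim 20's parsing module and controller:
# metadata determines the media type, and the controller picks a
# sound effect mode from that type. Names and mappings are illustrative.
TYPE_TO_MODE = {
    "music": "music_mode",
    "voice": "voice_mode",
    "movie": "movie_mode",
}

def parse_media_type(metadata):
    """Parsing module: determine the type of the audio file from metadata.
    Falls back to "music" when the metadata carries no type field."""
    return metadata.get("type", "music")

def select_mode(metadata):
    """Controller: choose a sound effect mode based on the parsed type."""
    media_type = parse_media_type(metadata)
    return TYPE_TO_MODE.get(media_type, "music_mode")
```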
As to claim 19, Lin in view of Weinans does not expressly disclose the headphone further includes a sensor configured to detect a wearing state of a human ear; and the controller is configured to select a specific sound effect mode from the at least two different sound effect modes based on the wearing state of the human ear detected by the sensor, and control the speaker to play the audio file in the specific sound effect mode. Kemmerer discloses the headphone further includes a sensor configured to detect a wearing state of a human ear (Kemmerer, Col. 12 lines 31-43, Fig. 2. “This acoustic transfer function can be continuously calculated, or triggered by an event, e.g., a sensor event such as detection of movement of the personal audio device 10 by a sensor such as an infra-red sensor or capacitive proximity sensor. The threshold value is established using calculations of voltage differentials when the personal audio device 10 is off-head, and on-head, respectively, e.g., between the driver signal and feedback microphone signal(s). When the magnitude of this acoustic transfer function (G.sub.sd) is below a threshold value (Yes to decision 220), the control circuit 30 determines that the personal audio device 10 has changed from an on-head state to an off-head state (process 230).”); and the controller is configured to select a specific sound effect mode from the at least two different sound effect modes based on the wearing state of the human ear detected by the sensor, and control the speaker to play the audio file in the specific sound effect mode (Kemmerer, Col. 15 lines 18-34, Fig. 2. 
“In response to determining that the personal audio device 10 has had a state change, either from on-head state to off-head state, or from off-head state to on-head state, in some examples, the control circuit 30 is further configured to adjust at least one function of the personal audio device 10 (process 260)… In some cases, the control circuit 30 is configured to adjust functions including one or more of: an audio playback function, a power function, a capacitive touch interface function, an active noise reduction (ANR) function, a controllable noise cancellation (CNC) function or a shutdown timer function.”). The motivation is the same as claim 4 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Jabra (Jabra Elite 75t User Manual. GN Audio A/S, 2020); and Turner (Turner, Nicholas. "Jabra Sound Plus App | Full Review / Walk-Through." YouTube, 5 Mar. 2020, www.youtube.com/watch?v=LgSjrCPCbok.).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES K MOONEY whose telephone number is (571)272-2412. The examiner can normally be reached Monday-Thursday, 8:30-6:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at (571)272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAMES K MOONEY/Primary Examiner, Art Unit 2695