Patent Application 18226259 - SYSTEMS AND METHODS OF GENERATING CONSCIOUSNESS - Rejection

Title: SYSTEMS AND METHODS OF GENERATING CONSCIOUSNESS AFFECTS USING ONE OR MORE NON-BIOLOGICAL INPUTS

Application Information

  • Invention Title: SYSTEMS AND METHODS OF GENERATING CONSCIOUSNESS AFFECTS USING ONE OR MORE NON-BIOLOGICAL INPUTS
  • Application Number: 18226259
  • Submission Date: 2025-05-14
  • Effective Filing Date: 2023-07-26
  • Filing Date: 2023-07-26
  • National Class: 345
  • National Sub-Class: 156000
  • Examiner Employee Number: 83629
  • Art Unit: 2118
  • Tech Center: 2100

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 4

Cited Patents

No patents were cited in this rejection.

Office Action Text


    Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This office action is in response to the request for continued examination filed on December 18, 2024.
Claim 63 has been amended.
Claim 65 has been added.
Claims 51-65 are pending.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 51, 53, 60, 63, and 65 are rejected under 35 U.S.C. 103 as being unpatentable over Guzak et al. (US Publication 20110040155A1) in view of Gansca et al. (US Publication 20140215351A1), Johnson et al. (US Publication 20150150023A1), and Albouyeh et al. (US Publication 20170344225A1).
Regarding claim 51, Guzak teaches a method of generating a visual consciousness affect representation, said method comprising:
receiving, from memory of a client device and/or a server, one or more shares originating from one or more users … a client device application presented on one or more client devices, each of said shares contains one or more submissions (a method and computer program product for incorporating human emotions in a computing environment. In this aspect, sensory inputs of a user can be received by a computing device … the input 122 can include voice input, which is processed in accordance with a set of one or more voice recognition and/or voice analysis programs)([0005], [0042]; shared input (e.g. voice submission) is received from client devices);
receiving, from said memory of said client device and/or said server, a non-biological input not originating from one or more of said users and said non-biological input originating from a device or a module (input manually entered by a user via a peripheral (keyboard, mouse, microphone, etc.), and other environmental inputs (e.g., video, audio, etc.) gathered by capture devices)([0010]); 
extracting, from each submission and said non-biological input, one or more categories of each of one or more consciousness input types to generate a list identifying one or more extracted categories (aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.))([0022]; a list of extracted categories is generated from inputs);
calculating, using a client module on said client device and/or a server module and using one or more of said shares and said non-biological input, a dominant category of one or more of said shares and said non-biological input … (Results from each sensory channel can be aggregated to determine a current emotional state of a user … The sensory aggregator 132 can use the input 124 to generate standardized emotion data 126, which an emotion data consumer 134 utilizes to produce emotion adjusted output 128 presentable upon output device 135 … positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.) … standardized emotion datum 126 result from combining positive and negative scores … assessing whether the resultant score exceeds a previously established certainty threshold … The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category … and the like)([0010], [0021], [0022], and [0047]; a dominant category is calculated from combining scores of extracted categories);
determining, using said client module on said client device and/or said server module on said server and based on one or more of said shares and said non-biological input, an intensity of said dominant category of one or more of said shares and said non-biological input (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include input category … a value, strength … and the like)([0047]); 
storing, in memory of said server and/or said client device, said dominant category of one or more of said shares and said non-biological input and said intensity of said dominant category (For each user, historical data 124, 126 can be maintained)([0058]); 
conveying, using said client module and/or said server module, said dominant category of one or more of said shares … said dominant category from said client device and/or said server … (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category, name/identifier … and the like)([0047]; attributes including a category and reference value (i.e. name/identifier) are conveyed from client devices); and 
visually presenting, on said display interface of said plurality of client devices, one or more of said shares and said visual consciousness affect representation corresponding to one or more of said shares … wherein said consciousness affect representation is based on said dominant category of one or more of said shares posted on … said client device application and said non-biological input, wherein said visual consciousness affect is chosen from a group comprising color, weather pattern, image, and animation … (The emotion dimension values from each of the sensory channels can be aggregated to generate at least one emotion datum value, which is a standards-defined value for an emotional characteristic of the user ... The output handler 254 can alter application output based upon datum 126. For example, handler 254 can generate text, images, sounds, and the like that correspond to a given emotion datum 126)([0005] and [0061]).
Guzak differs from the claim in that Guzak fails to teach the share submissions are posted on a website, conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application. However, share submissions originating from users and posted on a website or application (i.e. social web page or networking application), conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application is taught by Gansca (Referring now to FIG. 2(b), a user may scroll down in the list of topics until a user reaches a topic of interest 224, for example, “Working on xmas.” ... In FIG. 2(c), a sentiment thermometer 272 is provided to a user. The meaning (e.g., sentiment associated) with each color in the sentiment thermometer 272 is identical to the meaning (e.g., sentiment associated) with each color in the graphical object 122 ... The user may scroll upwards (FIG. 2(d)) or downwards (FIG. 2(e)) from the view shown in FIG. 2(c) to choose a desired color to be associated with the selected topic 224 ... A different sentiment is associated with each color to facilitate the expression of varying degrees of positive and negative (or neutral) sentiment ... As shown in FIG. 2(e), a user may select, for example, the color dark blue (237) ... the user is taken to a screen shot 239 shown in FIG. 2(f), where the user may leave an optional narrative, for example, regarding the selected topic 224, in field 242 ... Clicking on the entry 236 takes a user to a screen where the entry 236 is displayed, along with any comments 252 and/or likes 256 the entry 236, as shown in FIG. 2(i) ... In some embodiments, the users may express their sentiment via any platform, including but not limited to, social networking applications, web page, smartphone application, text message, and any other platform where a user has the ability to make a color selection)([0078], [0081], [0084], [0090]; Figures 2a-2i – an exemplary embodiment of a user sharing submissions and intensity to a social network to be conveyed to other users is shown, the affect representation (i.e. sentiment associated with dark blue) is presented adjacent to the “So not fun!” share). The examiner notes Guzak and Gansca teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak to include the sharing, conveying, and presenting of Gansca such that the method shares submissions to a social network to visually convey a user’s consciousness affect. One would be motivated to make such a combination to provide the advantage of allowing a user to uniquely express their opinion in a social network.
The combination of Guzak-Gansca fails to teach the dominant category corresponds to a category having highest contribution value. However, calculating a dominant category which corresponds to a category having highest contribution value is taught by Johnson (Referring now to FIG. 9, a flowchart of a process 900 for identifying an emotion, a cognitive state, a sentiment, or other attribute associated with a document ... e.g., a webpage ... If the number of positive words or phrases exceeds the number of negative words or phrase ... setting the primary document emotion variable to the emotion associated with the highest positive frame count (e.g., Crave>Happiness>Gratitude) … If the number of positive words or phrases is less than the number of negative words or phrases ... process 900 may include setting the primary document emotion variable to the emotion associated with the highest negative frame count)([0178], [0184], and [0185]; a dominant (i.e., primary) category is based on which category's count (e.g., Crave, Happiness, etc.) is the highest). The examiner notes Guzak, Gansca, and Johnson teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca to include the calculating of Johnson such that the method calculates a dominant category by determining which category has the highest contribution value. One would be motivated to make such a combination to provide the advantage of facilitating analysis of big data.
The combination of Guzak-Gansca-Johnson fails to teach said visual consciousness affect representation is a predetermined size depending upon said calculated value obtained from intensity of said category. However, a visual consciousness affect representation being a predetermined size based upon a calculated value obtained from intensity of a category is taught by Albouyeh (the visual characteristics of the visual indicators may be altered to convey the sentiment ... Visual characteristics of the visual indicators may include icon size, icon shape, icon color, icon labels, icon patterns, icon borders, and so forth)([0037]; sentiment is a calculated value of the intensity of a category). The examiner notes Guzak, Gansca, Johnson, and Albouyeh teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca-Johnson to include the predetermined size representation of Albouyeh such that the method varies a size of a visual consciousness affect based on calculated intensity of a category. One would be motivated to make such a combination to provide the advantage of providing additional graphical representations to convey consciousness.
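For illustration only, the combined rationale applied to claim 51 can be pictured in a few lines of Python. This is a hypothetical sketch, not code from any cited reference; the lexicon, function names, and size formula are all assumptions made for the example.

```python
# Hypothetical sketch of the combined rationale -- not code from Guzak,
# Gansca, Johnson, or Albouyeh. All names and values are illustrative.
from collections import Counter
from typing import Iterable

# Assumed token-to-category lexicon, in the spirit of Guzak's classification
# of inputs as indicative of positive or negative emotions ([0022]).
CONSCIOUSNESS_LEXICON = {
    "happy": "joy", "excited": "joy", "calm": "peaceful",
    "sad": "sad", "bored": "sad", "frantic": "concerned",
}

def extract_categories(submissions: Iterable[str]) -> Counter:
    """Extract categories from each submission and tally contribution values."""
    counts: Counter = Counter()
    for text in submissions:
        for token in text.lower().split():
            category = CONSCIOUSNESS_LEXICON.get(token.strip(".,!?"))
            if category:
                counts[category] += 1
    return counts

def dominant_category(counts: Counter) -> tuple[str, int]:
    """Per Johnson's rationale: the category with the highest contribution value."""
    return counts.most_common(1)[0]

def icon_size(intensity: int, base: int = 16, step: int = 4, cap: int = 48) -> int:
    """Per Albouyeh's rationale: a predetermined size derived from intensity."""
    return min(base + step * intensity, cap)

shares = ["So happy and excited today!", "A bit sad this morning", "happy again"]
counts = extract_categories(shares)
category, intensity = dominant_category(counts)
print(category, intensity, icon_size(intensity))  # joy 3 28
```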
Regarding claim 53, Guzak-Gansca-Johnson-Albouyeh teach the method of generating a visual consciousness affect representation of claim 51, wherein said consciousness input includes at least one input chosen from a group comprising emotional state input, reasoned input, location information input, physical awareness input and spiritual insight input, and said non-biological input is at least one input chosen from a group comprising emotional state input, reasoned input, location information input, physical awareness input and synchronicity input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy ... negative emotions ... sad … the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105. Other body language interpretation analytics can also be performed to determine sensory input (i.e., body language can be analyzed to determine if user 105 is nervous, calm, indecisive, etc.))([0022] and [0037]).
Regarding claim 60, Guzak-Gansca-Johnson-Albouyeh teach the method of generating a visual consciousness affect representation of claim 51, wherein a first intensity information accompanies each of one or more of said submissions, said non-biological input contains a second intensity information (Guzak - In-channel processor 130 can include ... a data store 220, and/or other such components. The data store 220 can include device specific data 222, user specific data 224, configurable channel specific rules 226, and mappings to a specific standard 228 to which the generated processed input 124 conforms)([0048]; each input is associated with corresponding intensity (i.e., sense of force) information such as rules, mappings, etc.) and said extracting comprises:
identifying, in each of said submissions and said non-biological input, information relating to one or more consciousness input types (Guzak - The input/output 122-128 can include numerous attributes defining a data instance)([0047]); and
extracting, from said information relating to one or more of said consciousness input types, information relating to one or more said categories of each of said consciousness input types (“categories”) to generate said list identifying one or more extracted categories from each of said submissions and said non-biological input, and wherein each of said extracted categories is assigned a predetermined value that is at least in part based on said first intensity information (Guzak - The dimensional emotion evaluation component 216 converts a score/value computed by the processing component 212 into a standardized value/form. Component 216 can use to-standard mapping data 228)([0054]; corresponding information such as mappings information is used to assign a value (i.e., positive or negative score) to extracted categories).
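Purely as an illustration of the intensity-weighted value assignment the examiner reads onto Guzak's mapping data 228, the following hypothetical sketch may help; the linear scaling and all names are assumptions, not Guzak's disclosed method.

```python
# Hypothetical sketch: assign each extracted category a predetermined value
# based, at least in part, on the submission's accompanying first intensity
# information. The linear scaling is an assumption for illustration only.
def assign_values(extracted: dict[str, int], first_intensity: float) -> dict[str, float]:
    return {category: count * first_intensity for category, count in extracted.items()}

print(assign_values({"joy": 3, "sad": 1}, first_intensity=1.5))
# {'joy': 4.5, 'sad': 1.5}
```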
Regarding system claim 63, the claim generally corresponds to method claim 51 and recites similar features in system form; therefore, the claim is rejected under similar rationale.
Regarding claim 65, Guzak teaches a method of generating a visual consciousness affect representation, said method comprising:
receiving, from memory of a client device and/or a server, one or more shares originating from one or more users … a client device application presented on one or more client devices, each of said shares contains one or more consciousness inputs (a method and computer program product for incorporating human emotions in a computing environment. In this aspect, sensory inputs of a user can be received by a computing device … the input 122 can include voice input, which is processed in accordance with a set of one or more voice recognition and/or voice analysis programs)([0005], [0042]; shared input (e.g., voice consciousness input) is received from client devices);
receiving, from said memory of said client device and/or said server, a non-biological input not originating from one or more of said users and said non-biological input originating from a device or a module (input manually entered by a user via a peripheral (keyboard, mouse, microphone, etc.), and other environmental inputs (e.g., video, audio, etc.) gathered by capture devices)([0010]);
extracting, from each consciousness input and said non-biological input, one or more categories of each of one or more consciousness input types to generate a list identifying one or more extracted categories (aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.))([0022]; a list of extracted categories is generated from inputs);
calculating, using a client module on said client device and/or a server module and using one or more of said shares and said non-biological input and an age of each of said shares and non-biological input, a dominant category of one or more of said shares and said non-biological input … (Results from each sensory channel can be aggregated to determine a current emotional state of a user … The sensory aggregator 132 can use the input 124 to generate standardized emotion data 126, which an emotion data consumer 134 utilizes to produce emotion adjusted output 128 presentable upon output device 135 … positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.) … standardized emotion datum 126 result from combining positive and negative scores … assessing whether the resultant score exceeds a previously established certainty threshold … The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category … and the like … the receiving processor can optionally define a processing unit, which is a set of data (such as a time window of data) to be analyzed)([0010], [0021], [0022], [0047], and [0066]; a dominant category is calculated from combining scores of extracted categories for a given age (i.e., amount of time));
determining, using said client module on said client device and/or said server module on said server and based on one or more of said shares and said non-biological input, an intensity of said dominant category of one or more of said shares and said non-biological input (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include input category … a value, strength … and the like)([0047]);
storing, in memory of said server and/or said client device, said dominant category of one or more of said shares and said non-biological input and said intensity of said dominant category (For each user, historical data 124, 126 can be maintained)([0058]);
conveying, using said client module and/or said server module, said dominant category of one or more of said shares … said dominant category from said client device and/or said server … (The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category, name/identifier … and the like)([0047]; attributes including a category and reference value (i.e. name/identifier) are conveyed from client devices); and 
visually presenting, on said display interface of said plurality of client devices, one or more of said shares and said visual consciousness affect representation corresponding to one or more of said shares … wherein said consciousness affect representation is based on said dominant category of one or more of said shares posted on … said client device application and said non-biological input, wherein said visual consciousness affect is chosen from a group comprising color, weather pattern, image, and animation … (The emotion dimension values from each of the sensory channels can be aggregated to generate at least one emotion datum value, which is a standards-defined value for an emotional characteristic of the user ... The output handler 254 can alter application output based upon datum 126. For example, handler 254 can generate text, images, sounds, and the like that correspond to a given emotion datum 126)([0005] and [0061]).
Guzak differs from the claim in that Guzak fails to teach the shared consciousness inputs are posted on a website, conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application. However, shared consciousness inputs originating from users and posted on a website or application (i.e. social web page or networking application), conveying a category of said shares and said intensity to said website and/or said client device application presented on a plurality of said client devices, and visually presenting a visual consciousness affect representation corresponding to said shares, wherein said visual consciousness affect representation appears adjacent to said shares and is based on said category of said shares posted on said website and/or said client device application is taught by Gansca (Referring now to FIG. 2(b), a user may scroll down in the list of topics until a user reaches a topic of interest 224, for example, “Working on xmas.” ... In FIG. 2(c), a sentiment thermometer 272 is provided to a user. The meaning (e.g., sentiment associated) with each color in the sentiment thermometer 272 is identical to the meaning (e.g., sentiment associated) with each color in the graphical object 122 ... The user may scroll upwards (FIG. 2(d)) or downwards (FIG. 2(e)) from the view shown in FIG. 2(c) to choose a desired color to be associated with the selected topic 224 ... A different sentiment is associated with each color to facilitate the expression of varying degrees of positive and negative (or neutral) sentiment ... As shown in FIG. 2(e), a user may select, for example, the color dark blue (237) ... the user is taken to a screen shot 239 shown in FIG. 2(f), where the user may leave an optional narrative, for example, regarding the selected topic 224, in field 242 ... Clicking on the entry 236 takes a user to a screen where the entry 236 is displayed, along with any comments 252 and/or likes 256 the entry 236, as shown in FIG. 2(i) ... In some embodiments, the users may express their sentiment via any platform, including but not limited to, social networking applications, web page, smartphone application, text message, and any other platform where a user has the ability to make a color selection)([0078], [0081], [0084], [0090]; Figures 2a-2i – an exemplary embodiment of a user sharing submissions and intensity to a social network to be conveyed to other users is shown, the affect representation (i.e. sentiment associated with dark blue) is presented adjacent to the “So not fun!” share). The examiner notes Guzak and Gansca teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak to include the sharing, conveying, and presenting of Gansca such that the method shares consciousness inputs to a social network to visually convey a user’s consciousness affect. One would be motivated to make such a combination to provide the advantage of allowing a user to uniquely express their opinion in a social network.
The combination of Guzak-Gansca fails to teach the dominant category corresponds to a category having highest contribution value. However, calculating a dominant category which corresponds to a category having highest contribution value is taught by Johnson (Referring now to FIG. 9, a flowchart of a process 900 for identifying an emotion, a cognitive state, a sentiment, or other attribute associated with a document ... e.g., a webpage ... If the number of positive words or phrases exceeds the number of negative words or phrase ... setting the primary document emotion variable to the emotion associated with the highest positive frame count (e.g., Crave>Happiness>Gratitude) … If the number of positive words or phrases is less than the number of negative words or phrases ... process 900 may include setting the primary document emotion variable to the emotion associated with the highest negative frame count)([0178], [0184], and [0185]; a dominant (i.e., primary) category is based on which category's count (e.g., Crave, Happiness, etc.) is the highest). The examiner notes Guzak, Gansca, and Johnson teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca to include the calculating of Johnson such that the method calculates a dominant category by determining which category has the highest contribution value. One would be motivated to make such a combination to provide the advantage of facilitating analysis of big data.
The combination of Guzak-Gansca-Johnson fails to teach said visual consciousness affect representation is a predetermined size depending upon said calculated value obtained from intensity of said category. However, a visual consciousness affect representation being a predetermined size based upon a calculated value obtained from intensity of a category is taught by Albouyeh (the visual characteristics of the visual indicators may be altered to convey the sentiment ... Visual characteristics of the visual indicators may include icon size, icon shape, icon color, icon labels, icon patterns, icon borders, and so forth)([0037]; sentiment is a calculated value of the intensity of a category). The examiner notes Guzak, Gansca, Johnson, and Albouyeh teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the consciousness representation of Guzak-Gansca-Johnson to include the predetermined size representation of Albouyeh such that the method varies a size of a visual consciousness affect based on calculated intensity of a category. One would be motivated to make such a combination to provide the advantage of providing additional graphical representations to convey consciousness.
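Claim 65 differs from claim 51 chiefly in that the age of each share and non-biological input factors into the calculation. As an illustration only (the exponential decay and all names below are assumptions, not the method of any cited reference), an age-dependent dominant-category calculation might look like:

```python
# Hypothetical sketch: age-weighted dominant-category calculation.
# The exponential decay is an assumed weighting, not Guzak's disclosed method.
import math

def age_weighted_dominant(shares: list[tuple[str, int, float]],
                          half_life: float = 600.0) -> str:
    """shares = (category, score, age_seconds); older shares contribute less."""
    totals: dict[str, float] = {}
    for category, score, age in shares:
        weight = math.exp(-age * math.log(2) / half_life)  # halves every half_life seconds
        totals[category] = totals.get(category, 0.0) + score * weight
    return max(totals, key=totals.get)

print(age_weighted_dominant([("joy", 3, 1200.0), ("sad", 2, 30.0)]))  # sad
```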

Claim 52 is rejected under 35 U.S.C. 103 as being unpatentable over Guzak, Gansca, Johnson, and Albouyeh, and further in view of Sadanandan et al. (US Publication 20130282808A1).
Regarding claim 52, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above. Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach visually presenting an illustration or photo of an object associated with said identified category. However, visually presenting an illustration or photo of an object associated with an identified category is taught by Sadanandan (The script 116 uses the keywords identified from the analysis of the textual content ... to identify the corresponding mood indicators ... The script 116 then identifies the current mood or state of mind of the user using the mood and context indicators ... The updated user-profile image is packaged with the webpage and transmitted to the client-device for rendering)([0031], [0032], and [0033]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Sadanandan teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the presenting of Guzak-Gansca-Johnson-Albouyeh to include the presenting of Sadanandan such that the method presents an object indicative of an identified mood category. One would be motivated to make such a combination to provide the advantage of improving the conveyance of a visual consciousness by automatically updating a realistic appearance of a user based on the user's current state of mind or user's contextual interest.

Claim 54 is rejected under 35 U.S.C. 103 as being unpatentable over Guzak, Gansca, Johnson, and Albouyeh, and further in view of Kaleal (US Publication 20160086500A1).
Regarding claim 54, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein said emotional state input represents an emotional state of said user (Guzak - metadata of the user input 181 can be evaluated to determine emotions of user 105 (e.g., typing pattern analysis, hand steadiness when manipulating a joystick/pointer, etc.))([0038]), said reasoned input represents an expression of said user (Guzak - the processed input 122 can include ... user's facial expressions)([0051]). 
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input includes location information input representing location of said client devices and physical awareness input including one information, associated with said user and chosen from a group comprising general health information, body type, and biology awareness. However, Kaleal discloses location information input representing location of said client devices (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]) and physical awareness input including one information associated with a user and chosen from a group comprising general health information, body type, and biology awareness (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.

Claims 55-59 are rejected under 35 U.S.C. 103 as being unpatentable over Guzak, Gansca, Johnson, and Albouyeh, and further in view of Kaleal and Gilley et al. (US Publication 20150004578A1).
Regarding claim 55, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein said emotional state input includes one category chosen from a group comprising love, no love, joy, sad, concerned, annoyed, trust, defiant, peaceful, aggressive, accept, reject, interested, distracted, optimistic and doubtful, and said emotional state input is not the same as reasoned input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are not the same as reasoned input (e.g. visual facial input)).
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.
The combination of Guzak-Gansca-Johnson-Albouyeh-Kaleal fails to teach the input is not the same as spiritual insight input. However, different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Kaleal, and Gilley teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Kaleal to include spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.
Regarding claim 56, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein said reasoned input includes one category chosen from a group comprising understood, solve, recognize, sight, hear, smell, touch, and taste, and said reasoned input is not the same as emotional state input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) input is not the same as emotional input (e.g. happy)). 
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.
The combination of Guzak-Gansca-Johnson-Albouyeh-Kaleal fails to teach the input is not the same as spiritual insight input. However, different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Kaleal, and Gilley teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Kaleal to include spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.
Regarding claim 57, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are different input) and reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) is different input). 
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input is not the same as physical awareness input including one category chosen from a group comprising fit, not fit, energetic, tired, healthy, sick, hungry and full and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) including one category chosen from a group comprising fit, not fit, energetic, tired, healthy, sick, hungry and full (how the user is feeling (e.g., sore, sick, energized, sad, tired, etc.))([0245]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.
The combination of Guzak-Gansca-Johnson-Albouyeh-Kaleal fails to teach the input is not the same as spiritual insight input. However, different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Kaleal, and Gilley teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Kaleal to include spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.
Regarding claim 58, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are different input), reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) is different input), and input includes one category chosen from a group comprising attraction, repulsion, calm, unrest, anticipate, remember, solitude, and congestion (Guzak - aggregator 132 can initially classify inputs 124 as being ... calm)([0022]).
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.
The combination of Guzak-Gansca-Johnson-Albouyeh-Kaleal fails to teach the input is not the same as spiritual insight input. However, different spiritual insight input is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Kaleal, and Gilley teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Kaleal to include spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.
Regarding claim 59, Guzak-Gansca-Johnson-Albouyeh teach the method as applied above, wherein generating a visual consciousness affect representation includes different emotional state input (Guzak - aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions ... happy or ... negative emotions ... sad)([0022]; happy and sad are different input) and reasoned input (Guzak - environmental input 185 can include images, video, and audio captured by an audio/video capture device 184 ... the environmental input 185 can include images/video of a face of a user 105, which is processed to discern a facial expression of the user 105 ... Environmental input 185 can include speech analyzed)([0037]; sight (e.g. video) is different input).
Guzak-Gansca-Johnson-Albouyeh differs from the claim in that Guzak-Gansca-Johnson-Albouyeh fails to teach the input is not the same as physical awareness input and location information input. However, Kaleal discloses different physical awareness input (based on the information identifying the presence and/or status/level of various biomarkers, analysis component 212 can determine one or more characteristics associated with a state of a human body system of the user, such as whether the body system is in a healthy state or an unhealthy state)([0092]) and location information input (reception component 204 can receive information regarding a user's location using various known location determination techniques)([0082]). The examiner notes Guzak, Gansca, Johnson, Albouyeh, and Kaleal teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh to include location information input and physical awareness input of Kaleal such that the method includes location input and physical awareness input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of providing additional tangible types of inputs in determining consciousness affect.
The combination of Guzak-Gansca-Johnson-Albouyeh-Kaleal fails to teach the input is not the same as spiritual insight input including one category chosen from a group comprising hug, missing, energy, shield, flash, deja vu, presence, and universe. However, different spiritual insight input including one category chosen from a group comprising hug, missing, energy, shield, flash, deja vu, presence, and universe is taught by Gilley (The lifestyle companion system also can interview the user about non-health related topics, e.g., spirituality/religion, identity (e.g., sense of belonging) ... career ... goals)([0044]; spiritual input (e.g. energy, universe, and presence (i.e. sense of belonging)) is different input). The examiner notes Guzak, Gansca, Johnson, Albouyeh, Kaleal, and Gilley teach a method for generating output based on user’s consciousness affect. As such, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the input of Guzak-Gansca-Johnson-Albouyeh-Kaleal to include spiritual insight input of Gilley such that the method includes spiritual insight input in generating a visual consciousness affect representation. One would be motivated to make such a combination to provide the advantage of factoring in intangible input in determining consciousness affect.

Allowable Subject Matter
Claims 61, 62, and 64 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.


Response to Arguments
Applicant's arguments with respect to claims 51-65 have been considered but are moot in view of the new ground(s) of rejection.
Regarding claim 65, applicant argues the combination of Guzak, Gansca, and Albouyeh fails to teach “calculating, using a client module on said client device and/or a server module and using one or more of said shares and said non-biological input and an age of each of said shares and non-biological input, a dominant category of one or more of said shares and said non-biological input”; the examiner respectfully disagrees. 
Guzak discloses extracting categories (i.e., emotions) from shared input of sensory capture devices to generate a list of emotions (e.g., positive and negative) “one or more sensory capture devices 110 can capture raw (or pre-processed) sensory input 122 from a user 105 ... in-channel processor 130 can transform the sensory input 122 to processed input 124 … aggregator 132 can initially classify inputs 124 as being indicative of either positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.)” ([0019] and [0022]). Guzak further discloses calculating a dominant category (i.e., standardized emotion) using the captured input and an age (i.e., an amount of time of captured input) “Results from each sensory channel can be aggregated to determine a current emotional state of a user … The sensory aggregator 132 can use the input 124 to generate standardized emotion data 126, which an emotion data consumer 134 utilizes to produce emotion adjusted output 128 presentable upon output device 135 … positive emotions (e.g., happy, excited, calm, etc.) or negative emotions (e.g., sad, bored, frantic, etc.) … standardized emotion datum 126 result from combining positive and negative scores … assessing whether the resultant score exceeds a previously established certainty threshold … The input/output 122-128 can include numerous attributes defining a data instance. These attributes can include an input category … and the like … the receiving processor can optionally define a processing unit, which is a set of data (such as a time window of data) to be analyzed” ([0010], [0021], [0022], [0047], and [0066]).
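Guzak's optional “processing unit” (a time window of data, [0066]) can be pictured with a short hedged sketch; the window length, data shape, and names below are assumptions for illustration only, not Guzak's disclosed implementation.

```python
# Hypothetical sketch of a time-windowed processing unit in the spirit of
# Guzak [0066]: only inputs whose age falls inside the window are aggregated.
from dataclasses import dataclass

@dataclass
class SensoryInput:
    category: str       # e.g., "positive" or "negative" per Guzak [0022]
    score: int          # contribution toward the standardized emotion datum
    age_seconds: float  # time elapsed since the input was captured

def aggregate_window(inputs: list[SensoryInput], window_seconds: float = 300.0) -> dict[str, int]:
    """Combine scores, but only for inputs inside the assumed time window."""
    totals: dict[str, int] = {}
    for item in inputs:
        if item.age_seconds <= window_seconds:
            totals[item.category] = totals.get(item.category, 0) + item.score
    return totals

recent = [SensoryInput("positive", 2, 60.0), SensoryInput("negative", 1, 120.0),
          SensoryInput("positive", 5, 900.0)]
print(aggregate_window(recent))  # {'positive': 2, 'negative': 1} -- the 900 s input is too old
```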

Conclusion
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. The documents cited therein and enumerated below teach methods and apparatuses for generating a visual affect based on analysis of input.
20100257117A1
20120101808A1
20130103667A1
20130231920A1
20170220579A1
8949263B1
9262517B2
WO2010144618A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yongjia Pan whose telephone number is (571)270-1177. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Baderman, can be reached at 571-272-3644. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/YONGJIA PAN/Primary Examiner, Art Unit 2118