
Patent Application 18453379 - INTEGRATED ANTI-TROLLING AND CONTENT MODERATION - Rejection

From WikiPatents


Title: INTEGRATED ANTI-TROLLING AND CONTENT MODERATION SYSTEM FOR AN ONLINE PLATFORM AND METHOD THEREOF

Application Information

  • Invention Title: INTEGRATED ANTI-TROLLING AND CONTENT MODERATION SYSTEM FOR AN ONLINE PLATFORM AND METHOD THEREOF
  • Application Number: 18453379
  • Submission Date: 2025-04-10T00:00:00.000Z
  • Effective Filing Date: 2023-08-22T00:00:00.000Z
  • Filing Date: 2023-08-22T00:00:00.000Z
  • National Class: 709
  • National Sub-Class: 206000
  • Examiner Employee Number: 89905
  • Art Unit: 2454
  • Tech Center: 2400

Rejection Summary

  • 102 Rejections: 0
  • 103 Rejections: 2

Cited Patents

The following patents were cited in the rejection:

Office Action Text


    DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment
The amendment filed on January 31, 2025 has been entered.
Claims 1, 4, and 6-12 are pending.
Claims 2-3 and 5 have been canceled.
Claims 1, 4, and 6-12 are rejected.

Response to Arguments
Applicant's arguments filed January 31, 2025 have been fully considered. It should first be noted that Applicant has provided an unfair and excessive 35 pages of arguments, many of which are redundant, likely realizing that the Office is not equipped to deal with such a huge number of arguments. It should also be noted that the actual written description is only seven pages long, making the arguments five times as long as the actual disclosure.  Perhaps if such effort had been expended in writing a more complete and thorough disclosure, prosecution would have advanced more quickly.
Regarding the Claim Objections discussed on pages 5-6 of Applicant’s Remarks, the arguments were persuasive, and the objections have been withdrawn.
Regarding the Specification discussed on pages 6-8 of Applicant’s Remarks,
Applicant generally argues as follows:
The Examiner alleged the specification does not appear to meet the requirements of 37 C.F.R. § 1.71(a)-(c), which states as follows: (a) The specification must include a written description of the invention or discovery and of the manner and process of making and using the same...Therefore, the specification appears to violate the requirements of 37 C.F.R. § 1.71(a)-(c).
Response: The applicant respectfully submits that the specification provides a description in full, clear, concise, and exact terms that enables any person skilled in the art to make and use the invention. The specification thoroughly describes the physical implementation of the invention in paragraphs [0016-0023].
The applicant further states that the drawings, as disclosed in figure 1, represent the components of the invention in block diagram form, which is standard practice for software-based inventions where the novel aspects lie in the functionality of the different modules. The multiple modules shown in figure 1 are specifically configured to function interdependently, wherein each module corresponds to discrete physical components comprising software components executed on standard computing components such as processors, memory, and network interfaces; as these are the fundamental building blocks of any social media content management system, they are well understood and enable any person skilled in the art to make and use the same.
The applicant states that the specification provides detailed descriptions of both the system components and their interconnections. For example, paragraph [0018] describes the hybrid approach of the troll detection module, which incorporates both on-device and remote components, illustrating the distributed nature of the system architecture. The communication interfaces between these components are implemented using standard networking protocols and APIs, which, though not explicitly mentioned, are well understood by a person skilled in the art. The applicant further states that regarding the process description, figure 2 provides a clear flowchart showing the steps of the method, which is further elaborated in paragraphs [0024-0029].
The specification describes the flow of user-generated content, from initial input analysis, through moderation, to display or alteration of the content. The troll detection method begins by analyzing user input in real-time, performed through both on-device and remote components using custom tokenization. The method utilizes the troll detection module, integrated with a fine-tuned AI model, to optimize the accuracy of the troll detection, executed on standard computing components. In the next step, users are enabled to designate moderators who evaluate and endorse comments and replies before they become visible, utilizing the personalized moderation module. Further, alternative phrasing suggestions are offered to the users in real-time before posting, utilizing the AI-driven suggestions module. In the next step, the collaborative moderation module is utilized to establish a framework in which multiple moderators work collectively to assess content and achieve consensus on content categorization. The final step involves recording true positive trolling instances and implementing temporary commenting restrictions utilizing the profile management module, thereby representing an automated approach to user behavior management. These steps, executed by multiple modules on standard computing components, work together in an integrated manner. The method thereby does not fall under the category of a mere abstract concept; instead, it is established as a concrete technological solution to the challenge of online content moderation. The steps of the method are described with sufficient detail for implementation by one skilled in the art.
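The five method steps recited above can be sketched, purely for illustration, as a single pipeline. Nothing below appears in the specification or claims; every function name, threshold, and data structure is a hypothetical assumption introduced only to make the sequence of steps concrete.

```python
# Illustrative sketch only; none of this code appears in the specification,
# and every name, threshold, and data structure is a hypothetical assumption.

def detect(text):
    """Stand-in for the claimed troll detection module: a crude keyword score."""
    return 1.0 if "idiot" in text.lower() else 0.0

def suggest(text):
    """Stand-in for the AI-driven suggestions module: soften flagged wording."""
    return text.lower().replace("idiot", "person i disagree with")

def moderate_post(text, author, moderator_votes, profiles):
    """Walk one comment through the five claimed method steps."""
    # Step 1: real-time analysis of the user input (troll detection module).
    if detect(text) > 0.5:
        # Step 2: offer an alternative phrasing before posting
        # (AI-driven suggestions module).
        text = suggest(text)
    # Steps 3-4: designated moderators assess the content and reach a
    # simple majority consensus (personalized/collaborative moderation).
    flagged = sum(moderator_votes) > len(moderator_votes) / 2
    # Step 5: record the verified instance and apply a temporary
    # commenting restriction (profile management module).
    if flagged:
        profile = profiles.setdefault(author,
                                      {"true_positives": 0, "restricted": False})
        profile["true_positives"] += 1
        profile["restricted"] = True
        return None   # content withheld from the feed
    return text       # content published
```

A sketch of this kind shows the sequencing of the steps, but not the substance the rejection concerns: the actual tokenization, the AI model, or the moderation software are all left as stubs, just as they are in the specification.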
The specification clearly explains the mode of operation through practical examples. For instance, paragraph [0020] describes the functionality of the AI-driven suggestions module, which provides alternative phrasing suggestions to users in real-time as they type, demonstrating the interactive nature of the system. The collaborative moderation process is further detailed in paragraph [0021], showing how multiple moderators work together to evaluate content generated by the users.
The best mode contemplated by the applicant is thoroughly described throughout the specification, particularly in paragraphs [0016-0023], which detail the preferred embodiment, including all critical components and their interactions. The specification provides clear guidance on implementing each module, from the troll detection module to the profile management module.
 Therefore, applicant submits that the specification fully complies with the requirements of 37 CFR 1.71(a)-(c), providing a clear, concise, and exact description that enables any person skilled in the art to make and use the invention while disclosing the best mode contemplated by the applicant. Hence, the examiner is requested to withdraw the rejection. 



Applicant is reminded that 37 C.F.R. § 1.71 specifically requires the following detailed description and specification of the invention:
(a) The specification must include a written description of the invention or discovery and of the manner and process of making and using the same, and is required to be in such full, clear, concise, and exact terms as to enable any person skilled in the art or science to which the invention or discovery appertains, or with which it is most nearly connected, to make and use the same.
(b) The specification must set forth the precise invention for which a patent is solicited, in such manner as to distinguish it from other inventions and from what is old. It must describe completely a specific embodiment of the process, machine, manufacture, composition of matter or improvement invented, and must explain the mode of operation or principle whenever applicable. The best mode contemplated by the inventor of carrying out his invention must be set forth.
(c) In the case of an improvement, the specification must particularly point out the part or parts of the process, machine, manufacture, or composition of matter to which the improvement relates, and the description should be confined to the specific improvement and to such parts as necessarily cooperate with it or as may be necessary to a complete understanding or description of it. 37 C.F.R. § 1.71(a)-(c), emphasis added.

A further requirement provided by the Federal Circuit is as follows:
 "The ‘written description’ requirement implements the principle that a patent must describe the technology that is sought to be patented; the requirement serves both to satisfy the inventor’s obligation to disclose the technologic knowledge upon which the patent is based, and to demonstrate that the patentee [inventor] was in possession of the invention that is claimed." Capon v. Eshhar, 418 F.3d 1349, 1357, 76 USPQ2d 1078, 1084 (Fed. Cir. 2005), emphasis added.
Applicant alleges that figure 1 “represents the components of the invention” and that “each module corresponds to discrete physical components that comprises software components, executed on standard computing components such as processors, memory, and network interfaces.”  However, that first of only two diagrams does not present any physical components whatsoever.  The main box is labeled as “Platform,” which is defined in the computing art as follows:
In IT, a platform is any hardware or software used to host an application or service. An application platform, for example, consists of hardware, an operating system (OS), and coordinating programs that use the instruction set for a particular processor or microprocessor. In this case, the platform creates a foundation that ensures object code executes successfully. www.techtarget.com/searchitoperations/definition/platform.

However, the short four-page detailed description of the invention does not mention any hardware or software components, or any operating system on which the “modules” are to operate. The only description of the figure 1 “platform” in the specification is that it is an “online platform” (paragraphs [0012], [0024]) or a “social media platform” (paragraph [0009]), which a person of ordinary skill in the art might take to mean operating system platforms such as Windows or Mac, or social media platforms such as Facebook, Instagram, TikTok, WhatsApp, YouTube, or many others. Since none of those platforms are disclosed, it is assumed that Applicant is proposing to obtain a US Patent for its own platform. It should be noted that much of the written description and the claims contain descriptions that appear to be related to marketing the product. Examples include such statements as:
This unique approach ensures a transparent, accountable, and civil discourse. Additionally, the invention promotes politeness through the integration of AI-driven suggestions.  Specification, paragraph [0016].
[T]he troll detection module achieves remarkable precision in discerning instances of trolling, safeguarding the online community from offensive and disruptive content. Specification, paragraph [0018].
An Artificial Intelligence (AI)-driven suggestions module configured to provide the users with alternative rephrasing suggestions before posting, fostering the promotion of polite and constructive communication. Claim 1. 
 
Applicant should note that such promotional material is not relevant to the patenting process, which should focus on the technical aspects of the invention.  
Applicant also provides in the above arguments one-sentence descriptions of the “modules” included in figure 1 and further described in figure 2, and argues that they “work together in an integrated manner” and provide “a concrete technological solution to the challenge of online content moderation.”  However, the specification, in referring to figure 2, provides only one-sentence descriptions of each “module” (paragraph [0024]), and then contends that there are advantages to the approach, “including its transformative impact on online interactions and content moderation,” which is another marketing-oriented statement, rather than a technical disclosure.
Applicant then goes on to argue that the specification “explains the mode of operation through practical examples,” provides a “best mode,” and concludes as follows:
[T]he specification fully complies with the requirements of 37 CFR 1.71(a)-(c), providing a clear, concise, and exact description that enables any person skilled in the art to make and use the invention while disclosing the best mode contemplated by the applicant. Hence, the examiner is requested to withdraw the rejection.

Examiner respectfully disagrees. Because the specification discloses insufficient technical information to enable a person of ordinary skill in the art to make and use the invention, the rejection is maintained.  
Regarding the rejections under 35 U.S.C. § 112(a), Applicant argues very extensively as follows:
Claims 1-12 are rejected under 35 U.S.C. § 112(a) or 35 U.S.C. § 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement... has actually been achieved.
Response: The applicant respectfully submits that claim 1 has been amended to provide clear and sufficient details enabling one skilled in the art to make and/or use the invention. The applicant states that the troll detection module, as described in paragraph [0018], employs a hybrid approach incorporating both on-device and remote components to ensure prompt and accurate identification of trolling activity. The hybrid approach utilizes a custom tokenization process along with a fine-tuned AI model to analyze user-generated content for detecting trolling behavior in real-time.
The applicant further states that the criteria for defining and detecting trolling behavior are established through user interactions and engagement patterns, as described in paragraph [0023], which identifies emerging forms of trolling. The claimed invention adopts the dictionary definition of the term 'trolling' as the act of leaving an insulting message on the internet in order to annoy someone, and helps to detect and analyze such behavior. Further, an iterative refinement process continuously improves the trolling-detection capabilities of the system through user engagement and feedback.
The applicant further states that the process of detecting trolling behavior involves multiple steps, as outlined in paragraph [0024]: real-time analysis of user input using the troll detection module, evaluation of content against established behavioral patterns, assessment through collaborative moderation in which multiple moderators work collectively, integration of user feedback to refine the detection models, and implementation of a profile management module that tracks verified instances of trolling. These implementation details, combined with the knowledge of one skilled in the art regarding content moderation techniques, provide sufficient instruction for implementing the claimed invention. The specification provides clear guidance on the trolling behavior and the technical processes used to detect it.
The real-time operation, as detailed in paragraph [0018], incorporates both on-device and remote components for comprehensive analysis, wherein the troll detection module employs custom tokenization to break down and analyze content before it is posted. The real-time processing enables the system to intercept potentially offensive content before it reaches the platform, thereby preventing "the propagation of offensive content at its source."
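For illustration only, an on-device "custom tokenization" pre-check of the kind described above might look like the following minimal sketch. The specification discloses no tokenizer, so the regex, the blocklist, and the function names are all assumptions introduced here, not the claimed process.

```python
import re

# A minimal sketch of what a "custom tokenization" pre-check *might* look
# like; the specification discloses no tokenizer, so the regex, the
# blocklist, and the function names are all hypothetical assumptions.

BLOCKLIST = {"troll", "idiot"}   # hypothetical on-device lexicon

def tokenize(text):
    """Lowercase the input and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def intercept(text):
    """On-device pre-check: flag content before it reaches the platform."""
    return any(token in BLOCKLIST for token in tokenize(text))
```

Under these assumptions, `intercept("Don't be a TROLL!")` returns `True`, so the content could be held back at the source before any remote component is consulted; the remote, fine-tuned model would then be a second stage that the specification likewise leaves undescribed.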
For propagation control, as outlined in paragraphs [0021-0022], the collaborative moderation module assesses content before it becomes visible. Multiple moderators evaluate and establish consensus on content classification, wherein content flagged as trolling is prevented from appearing in user feeds. The profile management module further imposes temporary commenting restrictions upon detection of trolling behavior.
The system refines the troll detection module and enhances its suggestion algorithms, as detailed in paragraph [0023], through tracking of user behavior patterns that identifies recurring occurrences of trolling. Users with verified instances of trolling behavior face restrictions, while the profile management module maintains records of true positive trolling counts. These records enable the system to apply preventive measures. Through this comprehensive approach, the troll detection module identifies and actively prevents the propagation of offensive content by addressing it at multiple points: during content creation, before posting, during moderation, and through user management.
The applicant respectfully submits that the specification addresses the storage and access of the suggestions through the integrated system architecture shown in figure 1, where the multiple modules, including the AI-driven suggestions module, operate interdependently using standard computing components. A person skilled in the art would understand that the suggestions are stored and accessed through conventional memory components that are fundamental to any content management system, even though not explicitly detailed in the claimed invention. Further, the continuous evolution of suggestions through user engagement and behavior analysis is explicitly described in paragraph [0023], which states that the system continuously evolves through user engagement and behavior analysis. By studying user interactions and engagement patterns, the system refines its troll detection models and enhances its suggestion algorithms. This indicates an integrated system where suggestions are dynamically stored and updated.
The evaluation of communication quality is measurable through the tracking capabilities of the profile management module and the collaborative moderation module, which together provide metrics for assessing the trolling behavior. The specification provides sufficient technical detail for one skilled in the art to implement the suggestion, as it relies on standard computing infrastructure that is readily apparent to those familiar with social media platforms and content management systems. Since memory storage and access are fundamental aspects of any computing system, particularly social media platforms, a person of ordinary skill would understand how to implement these basic components without requiring explicit disclosure of standard technical infrastructure. 
The applicant respectfully submits this response to address the rejection under 35 U.S.C. § 112(a) concerning the enablement of the claims 1-12, highlighting the capacity of the specification to ensure that any person skilled in the art can make and use the invention as claimed. 
Hence, the examiner is requested to withdraw the rejection. 
The Examiner alleged that the same issues discussed above for Claim 1 are also applicable to Claim 12, i.e., that the specification would not enable a person of ordinary skill in the art to make and use the invention.
Response: The applicant respectfully submits that claim 12 of the non-provisional application has been amended to provide clear and sufficient details enabling one skilled in the art to make and/or use the invention. The applicant states that the specification discloses a comprehensive method of technical implementation through claim 12, wherein each step of the claimed method works in conjunction with different specialized modules described in paragraphs [0018-0023] to achieve specific technical functions.
The method of anti-trolling and content moderation, as disclosed in claim 12, begins with the analysis of user input through both on-device and remote components using custom tokenization, as described in paragraph [0018], utilizing the troll detection module, which is integrated with a fine-tuned AI model to optimize detection accuracy. Further, users are allowed to designate moderators, utilizing the personalized moderation module, who evaluate and endorse comments and replies before they are made visible on the platform, as detailed in paragraph [0019]. In the next step, alternative phrasing suggestions are provided to users in real-time before posting, through the AI-driven suggestions module, as specified in paragraph [0020]. Further, the collaborative moderation module allows multiple moderators to work collaboratively to assess content and achieve consensus on content categorization, as outlined in paragraph [0021]. In the final step, user profiles are updated to record true positive trolling instances and temporary comment restrictions are applied for detected trolling, utilizing the profile management module, as disclosed in paragraph [0022]. The different steps of the method, executed by multiple modules on standard computing components, are integrated to work together in a cohesive manner. Each step builds upon the previous one to create a comprehensive technical solution for anti-trolling and content moderation.
The applicant states that the implementations of the method, as disclosed in the different steps, are built upon fundamental computing technologies. The technical foundation described in paragraphs [0016-0023] provides sufficient information for one skilled in the art to implement the steps of the method using well-understood computing components and architectures, including content analysis systems, user management frameworks, and moderation tools. The multiple steps of the method work together through standard data flow and process management techniques, creating an integrated solution that is implementable by a person skilled in the art. The fundamental computing technologies, though not explicitly mentioned in the specification, are well known and enable a person skilled in the art to implement the claimed method and make and use the invention as claimed.
One skilled in the art would recognize that such implementation could utilize standard technologies such as database management systems for storing moderator and user data, authentication mechanisms for user access control, distributed processing for consensus building among moderators, and established content analysis techniques. These fundamental computing technologies are apparent and accessible to the person skilled in the art. 
Hence, the examiner is requested to withdraw the rejection.
Regarding Claims 2-11, the Examiner alleged that because the claims depend from a rejected base claim, they are also rejected.
Response: The applicant submits that since the rejection of the dependent claims 2-11 is solely due to their dependency on claim 1, their rejection should be overcome by the demonstration that claim 1 meets the enablement requirement. Further, a claim-wise explanation for the enablement requirement is detailed below.
Claim 2: The applicant respectfully submits that claim 2 has been cancelled.
Claim 3: The applicant respectfully submits that claim 3 has been cancelled.
Claim 4: The applicant states that the specification enables the functionality of claim 4 through the interdependent operation of the collaborative moderation module and profile management module detailed in paragraphs [0021, 0022]. These modules execute on standard computing components to display troll detection scores, user statistics, and enable content flagging functionality. The collaborative moderation module implements the ability to view troll detection scores and user statistics as part of its collective assessment framework described in paragraph [0021]. The implementation of this module enables moderators to access comprehensive statistical data through the integrated tracking capability of the system. Further, the collaborative moderation module enables moderators to flag comments or posts for additional review, establishing a technical framework for synchronized assessment that creates a coordinated review system through the integrated communication framework of the system. 
The profile management module executes the functionality to mark users as allowed or blocked through its automated enforcement mechanisms described in paragraph [0022]. This implementation enables systematic user status management integrated with the access control systems of the platform.
The interdependent functionality of these modules creates a comprehensive moderation framework where moderators can access metrics, collaborate on content assessment, and implement user status controls through the integrated technical architecture of the system, as disclosed in paragraphs [0021, 0022]. 
Claim 5: The applicant respectfully submits that claim 5 has been cancelled.
Claim 6: The applicant states that the specification enables the functionality of claim 6 through the functionality of the profile management module detailed in paragraph [0022]. The profile management module executes on standard computing components to prevent comments from blocked users from appearing on the platform. One such example is the implementation of a content restriction framework utilizing conventional user management systems that control content visibility based on user status. These operations, while not explicitly detailed, are executed on standard content management infrastructure that is well understood by a person skilled in the art.
Claim 7: The applicant states that the specification enables the functionality of claim 7 through the comprehensive integration of all modules, namely the troll detection module, personalized moderation module, AI-driven suggestions module, collaborative moderation module, and profile management module, as detailed in paragraphs [0018-0022]. These modules execute on standard computing components to implement the complete content moderation workflow for users without specific moderation status. The user-generated content undergoes real-time analysis through the troll detection module, receives AI-driven suggestions before posting, requires assessment by nominated moderators, involves collaborative moderation decisions, and is subject to profile management tracking utilizing the interdependent modules as disclosed in paragraphs [0018-0022]. The suggestion algorithms and troll detection models, as described in paragraph [0023], are continuously refined through an iterative process based on user interactions and engagement patterns, enhancing the troll detection capabilities of the system over time. These operations are executed on standard computational hardware that is well understood by a person skilled in the art.
Claim 8: The applicant states that the specification enables the functionality of claim 8 through the integration of the profile management module and personalized moderation module detailed in paragraphs [0019, 0022]. These modules execute the functionality that allows direct posting of content from allowed users. 
The profile management module implements the capability to identify and track users marked as allowed through its user status management framework described in paragraph [0022]. This enables systematic tracking of trusted users who have demonstrated consistent adherence to the guidelines of the platform. 
The personalized moderation module executes the expedited posting functionality, allowing content from allowed users to bypass standard moderation processes as detailed in paragraph [0019]. This implementation creates an efficient content flow where the comments and replies from the trusted users are automatically posted. 
The interaction between the modules enables dynamic content management where user status directly influences content visibility pathways. The classification capabilities of the profile management module in conjunction with the content processing of the personalized moderation module create streamlined publishing for trusted users while maintaining platform integrity as described in paragraphs [0019, 0022]. 
This technical implementation establishes an automated system for expedited content publishing based on user status classification, representing a specific approach to efficient content management through the integrated operation of the profile management module and personalized moderation modules. 
Claim 9: The applicant states that the specification enables the functionality of claim 9 through the implementation of the profile management module, as detailed in paragraph [0022]. The profile management module executes automated enforcement of temporary commenting restrictions based on detected trolling behavior. The applicant states that the profile management module implements real-time tracking of trolling behavior through its monitoring capabilities described in paragraph [0022]. This enables systematic identification of trolling instances by analyzing user content and interactions through the hybrid analysis framework of the troll detection module, as detailed in paragraph [0018]. The profile management module executes automated restriction enforcement upon identification of trolling behavior, as specified in paragraph [0022]. This implementation creates a dynamic response system where restrictions are automatically applied based on verified instances of trolling behavior.
The integrated operation between the profile management module and troll detection module enables automated detection and response, where trolling behavior triggers systematic restriction implementation. This technical framework establishes an automated enforcement system that operates based on verified behavioral patterns, as described in paragraphs [0018, 0022]. 
Claim 10: The applicant states that the specification enables the functionality of claim 10 through the implementation of the profile management module, as detailed in paragraph [0022]. The profile management module implements comprehensive moderator performance tracking capabilities that enable users to access detailed statistics about moderator activities. This includes maintaining records of the number of users and posts moderated by each moderator, as specified in paragraph [0022]. The profile management module executes statistics compilation and display functionality through its tracking framework. The implementation maintains organized records of moderator activities, allowing systematic access to performance metrics that help users understand moderator effectiveness and workload distribution.
Claim 11: The applicant states that the specification enables the functionality of claim 11 through the implementation of the profile management module, as described in paragraph [0022]. The profile management module implements comprehensive tracking of moderator performance metrics, maintaining counts of false positives, true positives, true negatives, and false negatives for the content evaluations made by each moderator. Its statistical tracking capabilities systematically record the accuracy of moderator decisions through these four specific metrics. The profile management module further implements functionality to make these performance metrics accessible to users, enabling transparency in the moderation process. The tracking framework maintains organized records of each moderator's assessment accuracy, providing users with quantitative measures of moderator effectiveness in content categorization.
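The four counts tracked above form a standard confusion matrix, so summary accuracy figures could in principle be derived from them. The formulas below are the textbook definitions of accuracy, precision, and recall, added here purely for illustration; the specification does not disclose any such computation.

```python
# The four counts named in the claim form a standard confusion matrix.
# The formulas below are the textbook definitions of accuracy, precision,
# and recall; the specification itself discloses no such computation.

def moderator_metrics(tp, fp, tn, fn):
    """Summarize a moderator's tracked counts as standard accuracy figures."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,  # share of all decisions that were correct
        "precision": tp / (tp + fp),    # share of flagged content truly trolling
        "recall": tp / (tp + fn),       # share of actual trolling that was caught
    }
```

For example, a moderator with 8 true positives, 2 false positives, 85 true negatives, and 5 false negatives would show 93% accuracy and 80% precision under these definitions.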
The applicant states that across claims 4, 6, 7, 8, 9, 10, and 11, the specification comprehensively describes the technological implementations and data processing techniques that enable these claims. Claims 4, 6, 7, 8, 9, 10, and 11 are supported by specific descriptions of the functionality of the system, enabling a person of ordinary skill in the art to make and use the invention.
Therefore, the examiner is requested to withdraw the rejection. 


Applicant argues that the enablement rejection is not justified, beginning with the limitations of Claim 1.  Applicant asserts the following regarding the troll detection module:
[T]he troll detection module as described in paragraph [0018] employs a hybrid approach, incorporating both on-device and remote components to ensure prompt and accurate identification of trolling activity. The hybrid approach utilizes custom tokenization process along with a fine-tuned AI model to analyze the user-generated content for detecting the trolling behavior in real-time.
A person of ordinary skill in the art (POSITA) would need to understand what the “custom tokenization process” and “fine-tuned AI model” are in order to make and use the invention.  By providing no information whatsoever about either process, the specification does not support enablement.  The further arguments about the dictionary definition of trolling are interesting, but most POSITAs already understand the concept. Applicant’s argument concludes with the statement that “an iterative refinement process continuously improves the trolling-detection capabilities of the system through the user engagement and feedback.”  However, the iterative refinement process is not disclosed, nor is a display device that would allow users to engage, thereby further demonstrating the enablement issue.
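For orientation only: a tokenizer feeding a weighted scoring model is one conventional shape such a system could take. The following Python sketch is entirely hypothetical; the specification discloses neither the tokenization rules nor the model, and the token list, weights, and threshold here are invented solely for illustration.

```python
import re

# Invented-for-illustration hostile-token weights; a disclosed system would
# need to define where such a vocabulary comes from and how it is tuned.
HOSTILE_WEIGHTS = {"idiot": 0.6, "loser": 0.5, "pathetic": 0.4}

def tokenize(text: str) -> list[str]:
    """Naive lowercase word tokenization; a real 'custom tokenization
    process' would have to be specified in far more detail."""
    return re.findall(r"[a-z']+", text.lower())

def trolling_score(text: str) -> float:
    """Sum the weights of hostile tokens, capped at 1.0."""
    tokens = tokenize(text)
    return min(1.0, sum(HOSTILE_WEIGHTS.get(t, 0.0) for t in tokens))

def is_trolling(text: str, threshold: float = 0.5) -> bool:
    """Binary decision against an arbitrary illustrative threshold."""
    return trolling_score(text) >= threshold
```

A "fine-tuned AI model" would replace the fixed weight table with a trained classifier; the sketch only shows the tokenize-then-score shape of the pipeline.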
Applicant next states that there is “evaluation of content against established behavioral patterns, assessment through collaborative moderation where multiple moderators work collectively.”  Several issues arise with this argument.  A POSITA would wonder how the established behavioral patterns are determined and where they are stored, given that no database is disclosed.  Furthermore, the same POSITA might wonder how the collaborative moderation can occur, since no user devices and no software for conducting such sessions are disclosed.  Therefore, this argument does not support Applicant’s contention that the following 35 U.S.C. § 112(a) requirement is met: 
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same …
Applicant provides additional arguments for Claim 1 and concludes as follows:
One skilled in the art would recognize that such implementation could utilize standard technologies such as database management systems for storing moderator and user data, authentication mechanisms for user access control, distributed processing for consensus building among moderators, and established content analysis techniques. These fundamental computing technologies are apparent and accessible to the person skilled in the art. 
Although it is true that such technologies do exist in the art, Applicant’s contention that their descriptions, or even a mention of their need, do not need to be provided as part of this patent application is inconsistent with patent practice and might even be seen as somewhat cavalier, since the statute requires that the written description of the invention be provided in “full, clear, concise, and exact terms.”  Therefore, Examiner respectfully declines to withdraw the enablement rejection.
Applicant next provides arguments for independent Claim 12 similar to those for Claim 1, and acknowledges the lack of disclosure for basic underlying technologies needed to enable the invention with the following statement:  
The fundamental computing technologies, though not explicitly mentioned in the specification, are well known and enable the person skilled in the art to implement the claimed method to make and use the invention as claimed.
Again, Examiner respectfully disagrees and reiterates that the written description of the invention must be provided in “full, clear, concise, and exact terms.”   
Applicant next presents extremely detailed arguments for the dependent claims.  Examiner acknowledges that Claims 2, 3, and 5 are canceled. 
Regarding Claim 4, Applicant states that “These modules execute on standard computing components to display troll detection scores, user statistics, and enable content flagging functionality.”  However, such components are not disclosed, in violation of the enablement requirement.  
Regarding Claim 6, Applicant argues as follows, which Examiner again contends violates the enablement requirement: 
The profile management module execute on standard computing components to prevent comments from blocked users from appearing in the platform. One such example is the implementation of the content restriction framework utilizing conventional user management systems that control content visibility based on user status. These operations, while not explicitly detailed, are executed on standard content management infrastructure that is well-understood by a person skilled in the art.
It would be helpful to a POSITA attempting to make and use the invention to know exactly which content management infrastructure is being utilized.
Regarding Claim 7, Applicant argues as follows:
The user-generated content undergoes real-time analysis through the troll detection module, receives AI-driven suggestions before posting, requires assessment by nominated moderators, involves collaborative moderation decisions, and is subject to profile management tracking utilizing the interdependent modules… These modules execute on standard computing components to implement the complete content moderation workflow for users without specific moderation status.

There is no description of where the AI-driven suggestions are stored or what specific AI software generates them.  As with previous claims, there is no description of the “standard computing components to implement the complete content moderation workflow for users without specific moderation status.”  Therefore, this claim is also in violation of the written description requirements.
Regarding Claims 8-11, Applicant again presents a description of how the claims are disclosed in the specification, but provides no description of the hardware or software needed to make the processes work.  These claims are not consistent with the written description requirement that a description of the invention be provided in “full, clear, concise, and exact terms.”
Therefore, the rejection under 35 U.S.C. § 112(a) has been maintained.
Regarding the rejections under 35 U.S.C. § 101, Applicant argues as follows:
The Examiner alleged the process recited in the independent claims describes Certain Methods of Organizing Human Activity, which includes managing personal behavior or interactions between people. 
Response: The applicant submits that the subject matter of the claimed invention does not fall under a judicial exception as the claims does not fall under a law of nature, a natural phenomenon, or an abstract idea. The claimed invention comprises an integrated anti-trolling and content moderation system for an online platform that enhances the technical functioning of the platform. 
The applicant submits that the claimed invention implements a distinct technical architecture through its troll detection module that employs a hybrid approach combining on-device and remote components with custom tokenization and fine-tuned AI model calibration, as described in paragraph [0018]. This technical implementation enables real-time analysis of user-generated content through a computing framework that cannot be described as certain methods of organizing human activity or abstract idea. 
A key technological feature of the invention is its AI-driven suggestions module that provides alternative rephrasing suggestions in real-time before the posting of content, as detailed in paragraph [0020]. This proactive intervention through AI processing represents a novel technical approach that transforms the way content moderation is performed, moving beyond a simple content filtering as disclosed in the prior arts. 
Further, as described in paragraph [0023], the system continuously evolves through user engagement and behavioral pattern analysis. This adaptive capability enables the system to refine the troll detection method and enhance the suggestion algorithms over time, creating a dynamic technical solution that improves with usage. The system tracks user behavior patterns to identify recurring instances of trolling and automatically adjusts the detection and prevention mechanisms of trolling accordingly. 
The applicant states that the integrated operation of the multiple modules, as claimed and supported by paragraphs [0018-0023] of the specification, creates an automated system where content analysis, improvement, and enforcement are performed through technological means. The combination of real-time processing capabilities, AI-driven suggestions, and continuous adaptation of the system demonstrates the invention as a technological solution for content moderation rather than falling under any judicial exception categories. Accordingly, as per the Revised Guidance, the claims of the invention are patent-eligible at Prong One and the claims do not require further analysis for a practical application of the judicial exception in Step 2A. 
Therefore, the examiner is requested to withdraw the rejection. The Examiner alleged there are no disclosures in the specification or recitations in the claims of processors, memory, or non-transitory computer-readable medium. Thus, no physical elements are cited that would preclude the invention from being classified as Certain Methods of Organizing Human Activity. 
Response: The applicant respectfully submits that multiple modules of the invention functions interdependently using standard computing components which is well understood by a person skilled in the art, even though not explicitly detailed. The hybrid architecture of the troll detection module that incorporates both on-device and remote components, as described in paragraph [0018], inherently requires a distributed computing architecture to perform real-time analysis of user-generated content. The real-time analysis through both on-device and remote components cannot be achieved through human activity alone and necessitates standard computing infrastructure for simultaneous local and remote processing. 
The AI-driven suggestions module, as detailed in paragraph [0020], provides contextual rephrasing suggestions in real-time before content posting, which requires computational capabilities to process and analyze content as users type. This immediate analysis and suggestion generation cannot be performed through human mental processes and relies on standard computing components for real-time processing and response. Further, the collaborative moderation module, as outlined in paragraph [0021], enables multiple moderators to collectively assess content and establish consensus, requiring a networked system with real-time data synchronization. The collaborative framework necessitates standard computing infrastructure to enable simultaneous assessment and consensus building among multiple moderators. The profile management module, as described in paragraph [0022], tracks true positive trolling counts and implements automated temporary commenting restrictions. This automated tracking and enforcement functionality necessitates standard database systems and enforcement mechanisms that operate without human intervention. The ability to maintain records of trolling instances and automatically enforce restrictions demonstrates the technological nature of the implementation. 
These modules working together in an integrated manner, as shown in figure-1 and described in paragraphs [0018-0022], create a technological solution that goes beyond organizing human activity. The interdependent operation of these modules on standard computing components, while not explicitly detailed, is fundamental to implementing the claimed functionalities and would be well understood by a person skilled in the art of content management systems. The real-time analysis, automated enforcement, and collaborative features demonstrate that the invention represents a technological improvement rather than merely organizing human activity. 
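To illustrate what an on-device/remote split can mean in practice (the application, as the examiner notes, discloses no such implementation, and all function names and thresholds below are invented), a minimal sketch: a cheap local heuristic handles clear-cut content on the device, and only borderline content is referred to a remote classifier, simulated here by a stub.

```python
def local_score(text: str) -> float:
    """Fast on-device heuristic: fraction of shouted (all-caps) words.
    Purely illustrative; no such heuristic is disclosed."""
    words = text.split()
    if not words:
        return 0.0
    shouted = sum(1 for w in words if len(w) > 2 and w.isupper())
    return shouted / len(words)

def remote_score(text: str) -> float:
    """Stand-in for a remote model call; a real system would issue an RPC
    to a hosted classifier. The keyword check is a placeholder only."""
    return 0.9 if "troll" in text.lower() else 0.1

def hybrid_is_trolling(text: str, lo: float = 0.2, hi: float = 0.8) -> bool:
    """Decide locally when confident; defer borderline cases to the remote."""
    s = local_score(text)
    if s >= hi:        # clearly hostile: block on-device, no network round trip
        return True
    if s < lo:         # clearly fine: allow on-device
        return False
    return remote_score(text) >= 0.5   # borderline: defer to remote model
```

In a real deployment the remote call would reach a hosted model over the network; the stub merely stands in for it so the control flow of the split can be shown.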
The applicant respectfully submits that multiple modules of the invention functions interdependently using standard computing components which is well understood by a person skilled in the art even though not explicitly mentioned. The interdependent functioning of the multiple modules is not abstract methods of organizing human Activity, but rather specific technological solutions that require standard computing components to function. Therefore, the examiner is requested to withdraw the rejection. The Examiner alleged based on the analysis under Prong 2, Step 2A of the Revised Guidance, independent Claims 1 and 12 do not recite a "practical application" to overcome the judicial exception. 
Response: The applicant submits that the subject matter of the claimed invention does not fall under judicial exception. With reference to the objection with respect to the claim rejections under 35 U.S.C. §101, the applicant hereby submits that the subject matter does not fall under the categories of nature-based products or an abstract idea. The invention discloses an integrated anti-trolling and content moderation system for an online platform, that utilizes multiple interactive modules to proactively manage user interactions. The system is detailed in the newly amended claim 1, which includes a troll detection module employing custom tokenization and fine-tuned AI model calibration to optimize the accuracy of the detection of the trolling behavior. Further, various interactive modules such as a troll detection module, a personalized moderation module, an AI-driven suggestions module, a collaborative moderation module, and a profile management module collectively enhance the functionality of the system by capturing trolling behavior, fostering polite and constructive communication, and effectively managing user profiles. 
The applicant states that the claims include elements (multiple specialized modules) that provide specific, technical solutions to problems in the realm of online communication, specifically in moderating user interactions in real-time. These elements amount to significantly more than the judicial exception, as they employ innovative system which is a physical system to detect, analyze, and manage online content, which are clearly beyond the general abstract idea of content moderation. The system is an architecture with a combination of modules which execute the configured functionalities to result in the technically improved output making the system distinct in terms of functionality. As disclosed in the specification through paragraphs [0018-0023], the claimed invention is implemented through an integrated system architecture wherein multiple specialized modules work interdependently on standard computing components to execute specific technical functions. The troll detection module employs a hybrid approach incorporating both on-device and remote components, utilizing custom tokenization and fine-tuned AI models to optimize real-time detection accuracy of trolling behavior. The personalized moderation module enables user nomination of moderators who assess content before visibility, establishing a technical framework for pre-emptive content control. The AI-driven suggestions module provides real-time alternative phrasing suggestions before content posting, implementing proactive content improvement. The collaborative moderation module enables multiple moderators to collectively assess the content and establish consensus on classification, creating a structured framework for comprehensive content evaluation. The profile management module tracks true positive trolling counts and implements automated temporary commenting restrictions, providing systematic behavior management. 
These modules are technically integrated to function interdependently, and the integrated approach creates technical improvements by enabling real-time intervention at multiple points: during content creation, before posting, during moderation, and through automated user management. 
Further, the amendments introduce specific technological implementations that enhance the ability of the system to perform its intended functions. For instance the troll detection module employs a custom tokenization and a fine-tuned AI model calibration to optimize the accuracy of the troll detection. 
The technological improvements fundamentally transform the nature of the system into a practical and functional application that directly addresses the technical problem of managing the quality of user interaction in digital platforms. 
Therefore, the applicant respectfully requests the reconsideration and withdrawal of the rejection under 35 U.S.C. § 101, as the amended claims now clearly include inventive concepts that integrate technical elements and practical applications sufficient to qualify as significantly more than any judicial exception. 
Therefore, the examiner is requested to withdraw the rejection. The Examiner alleged in the instant application, since intercepting trolling and all the other functions being claimed are well-understood, routine, and conventional activities, there is no aspect of the invention that amounts to "significantly more" than the judicial exception. 
Response: The applicant respectfully submits that the claimed invention goes well beyond the conventional approaches of content moderation and trolling prevention through its novel technical implementation and integration of multiple components that work together in an unconventional manner. 
The hybrid approach of the invention for troll detection, incorporating both on-device and remote components, represents an unconventional technical solution. The system employs custom tokenization and fine-tuned AI models to achieve precise detection of trolling behavior in real-time, which is distinctly different from conventional moderation systems. 
The system employs a hybrid approach combining on-device and remote components, wherein custom tokenization and fine-tuned AI models work together to enable real-time analysis before content publication. Instead of simply deleting or blocking potentially problematic content, the system provides alternative phrasing suggestions through its AI-driven suggestions module before the content is posted, promoting constructive communication. The suggestions of alternative phrasing differs from conventional moderation systems where the contents are directly blocked or deleted. 
As detailed in paragraph [0023], the system continuously evolves through user engagement and behavior analysis, utilizing an iterative refinement process to improve detection accuracy based on identified behavioral patterns and user feedback. This proactive, learning-based approach with constructive suggestions distinguishes it from traditional reactive moderation systems that typically only remove or block contents. 
Further, the personalized moderation module of the system enables a user to nominate moderators who assess content before it becomes visible to content creators or the users. This represents a departure from conventional moderation systems that does not have user-nominated moderators for pre-publication assessment. The system further implements a collaborative moderation module where multiple moderators work together to establish consensus on content classification. 
The applicant states that the AI-driven suggestions module of the invention provides contextually relevant alternative phrasing suggestions prior to posting, representing a proactive and preventive approach which is not found in the prior arts. This real-time, AI-driven guidance system works in conjunction with the troll detection and moderation modules to create an integrated solution that is beyond the functionality of the conventional system. 
The profile management module further implements a tracking for true positive trolling counts coupled with an automated temporary commenting restrictions. This systematic approach to user behavior management and automated enforcement represents a technical advancement over conventional systems that typically rely on basic restriction mechanisms. 
The combination of the elements such as real-time hybrid troll detection, AI-driven suggestions, personalized moderation, collaborative consensus building, and automated profile management, creates a technical solution that is significantly more than conventional content moderation systems. The integrated system provides specific improvements to the functioning of online platforms in ways that amounts to "significantly more" than the judicial exception. 
Therefore, the examiner is requested to withdraw the rejection. The Examiner alleged accordingly, the analysis under the multiple steps of the Revised Guidance leads to the determination that independent Claims 1, 9, and 17 are directed to an abstract idea under 35 U.S.C. 101, and so are the dependent claims, which incorporate the abstract idea by virtue of their dependencies. Therefore, Claims 1-20 are directed to a judicial exception, and are not patent-eligible.

Response: The applicant respectfully submits that claim 9 is not an independent claim, and claim 17 does not exist in the non-provisional specification. Further, the current invention does not contain the set of claims 1-20. Accordingly, the objection is not addressed considering the discrepancy in the numbering of the claims.

Examiner respectfully disagrees with Applicant’s contention that “the claimed invention does not fall under a judicial exception as the claims does not fall under a law of nature, a natural phenomenon, or an abstract idea.”   One of the three groupings of abstract ideas that is applicable to the instant invention is Certain Methods of Organizing Human Activity, which includes “managing personal behavior or relationships or interactions between people, including social activities.”  It seems apparent that trolling another individual is personal behavior and that the purpose of the invention is to prevent trolling, which is clearly an example of “managing personal behavior.”  
Applicant argues that “the claimed invention implements a distinct technical architecture through its troll detection module that employs a hybrid approach combining on-device and remote components with custom tokenization and fine-tuned AI model calibration.”   But the claimed steps go beyond mere troll detection to include “provide the users with alternative rephrasing suggestions,” “fostering the promotion of polite and constructive communication,” and “collaboratively evaluate content and establish consensus.”  All these activities involve managing personal behavior, which results in the invention being categorized as an abstract idea.   
As for the “distinct technical architecture,” that architecture does not change the overall purpose of the invention, which is to detect trolling activity and to make suggestions to change the interactions.
As was stated in the office action, there are only two ways to overcome the rejection.  One way is to “determine whether the judicial exception is integrated into a practical application” and the definition of a “practical application” falls into four possible categories: 
(1) Improvements to the Functioning of a Computer or to Any Other Technology or Technical Field

(2) Particular Machine 

(3) Particular Transformation

(4) Other Meaningful Limitations
The first category is not applicable because no computer has been defined as part of the invention.  The second category of “Particular Machine” is also not applicable because no machine has been disclosed.  The third category of “Particular Transformation” is also not relevant to this invention.  As for “Other Meaningful Limitations,” none appear to be disclosed.
The other way to overcome the rejection is to recite additional elements that render the claim patent eligible by providing "significantly more" than the recited judicial exception.  However, since over one million prior art references were found that disclose the concept of trolling, the invention does not appear to be directed to "significantly more" than the recited judicial exception.
Therefore, the rejection will be sustained.
Regarding the rejections under 35 U.S.C. § 103, Applicant argues as follows:
Claims 1-3, 5, 8-9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Brown (US 2021/0377052 A1, hereinafter referred to as Brown) in view of Packalen et al. (US 2019/0026601 A1, hereinafter referred to as Packalen). 
Response: The applicant respectfully submits that this response addresses each rejection under 35 U.S.C. § 103 for claims 1-3, 5, 8-9, and 12, rejected as unpatentable over Brown (US 2021/0377052 A1) and Packalen et al. (US 2019/0026601 A1) and provides substantiations to overcome the objections raised by the Examiner. 
Regarding Claim 1, the Examiner alleged Brown teaches: "An integrated anti-trolling and content moderation system for an online platform" and "a troll detection module (102) configured to perform real-time analysis of user-generated content, detecting instances of trolling behavior through both on-device and remote components" (paragraph [0040]). [The system includes a user intake process that requires a prospective user to submit sample content prior to being accepted into the social media system, in order to create a barrier for trolls and those who practice hate speech ([0040])]

Response: The applicant respectfully submits this response to address the objection raised under 35 U.S.C. § 103 for claim 1. The applicant submits that the claimed invention is non-obvious over the cited prior arts Brown (US 2021/0377052 A1) and Packalen et al. (US 2019/0026601 A1). The applicant states that the troll detection module of the claimed invention is configured to perform real-time analysis of user-generated content, identifying trolling behavior through both on-device and remote components. This configuration allows for immediate and dynamic response to the contents as they are created. 
On the other hand, the invention of Brown, as disclosed in paragraph [0040], incorporates a user intake process requiring prospective users to submit sample content prior to acceptance into the platform. This process serves as a barrier to entry but does not involve ongoing, real-time analysis of user interactions and is fundamentally static. 
Brown's approach is preventative at the user registration level, whereas the claimed invention provides continuous real-time monitoring through "both on-device and remote components". This hybrid approach enables more efficient and effective detection of trolling behavior as it occurs, rather than just during initial screening. 
The invention of Packalen, as disclosed in paragraphs [0069-0072], implements a basic moderation model that classifies content as acceptable or unacceptable through machine learning algorithms. The system of Packalen lacks the hybrid approach of on-device and remote components and does not disclose custom tokenization processes for optimizing detection accuracy or real-time analysis framework. 
The real-time, adaptive analysis performed by the troll detection module in the claimed invention substantially enhances the effectiveness and efficiency of content moderation beyond the initial screening approach disclosed by Brown or Packalen. Therefore, the claimed invention provides a novel and non-obvious improvement over the prior arts, addressing the limitations of static user intake systems by offering a continuous, adaptive response to online trolling.

The Examiner alleged "a personalized moderation module (103) configured to enable the user to nominate moderators" (paragraphs [0037], [0038]). [Techniques are provided for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with content; authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]). Each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038])] 
The applicant states that the personalized moderation module of the claimed invention allows the users to actively nominate moderators, granting them significant control over the moderation process. The personalized moderation module enables nomination and integrates the moderators into the content moderation workflow, allowing them to pre-screen content before it becomes public on the platform. 
On the other hand, the invention of Brown, as disclosed in paragraphs [0037] and [0038], employs moderator-supplied tags and maintains authorship lists that include a mix of anonymous and non-anonymous authors. While the system as disclosed by Brown provides a degree of customization in the content stream each user receives, it does not disclose or suggest allowing users to nominate moderators who actively participate in content moderation. Brown's approach utilizes the passive use of tags for content filtering and distribution, without user-directed control over moderation personnel or their active involvement in content approval processes. Packalen, on the other hand, as disclosed in paragraph [0072], relies on webmasters who decide on the kinds of content that are appropriate based on the publishing policies. While Packalen talks about performing the moderation, it does not teach or suggest a system where users can nominate their own moderators or establish a personalized moderation workflow. Packalen's approach represents a traditional moderation structure without user participation in moderator selection. The applicant states that the ability for users to nominate moderators in the claimed invention represents a technical improvement over the static system of Brown and Packalen. The personalized moderation module in the claimed invention advances the teachings of Brown and Packalen by enabling direct user involvement in the moderation process through the nomination of moderators. This feature promotes greater transparency, accountability, and community involvement in content moderation, which are not addressed by Brown or Packalen. 
The Examiner alleged "an Artificial Intelligence (AI)-driven suggestions module (104) configured to provide users with alternative rephrasing suggestions before posting, fostering the promotion of polite and constructive communication" (paragraph [0042]). [An item of content, along with the moderated tag set is fed to a machine learning system for re-training on the moderated content, in order to accurately pinpoint and remove harmful content ([0042]).] (NOTE: The machine learning system is equivalent to the "Artificial Intelligence (AI)-driven suggestions module" and the removal of harmful content to "alternative rephrasing suggestions.")
The applicant submits that the AI-driven suggestions module of the claimed invention is configured to provide real-time, alternative phrasing suggestions to users before posting their content. This module is proactively aimed at fostering polite and constructive communication by suggesting enhancements to user-generated content before it is made public. On the other hand, the invention of Brown, as disclosed in paragraph [0042], uses a machine learning model to re-train on moderated content, which is a reactive process focused on identifying and removing harmful content after it has been created and tagged. Further, the system of Packalen, as described in paragraph [0070], focuses solely on content classification through machine learning algorithms. It does not teach or suggest providing alternative phrasing suggestions or any proactive intervention to improve content quality before posting. The system of Packalen is reactive in nature, analyzing content after it has been created rather than offering constructive suggestions during content creation. 
The applicant states that the proactive approach of the claimed invention, by providing suggestions for rephrasing before content is posted, represents a technical advancement over Brown's and Packalen's reactive content filtering systems. The proactive approach of "alternative rephrasing suggestions" is completely different from the reactive approach of "removal of harmful content" and is not obvious, as moderation by removal of content and prevention before posting are two completely different concepts that require different sets of AI capabilities and user interaction protocols. The Examiner alleged "a collaborative moderation module (105) configured to facilitate multiple moderators to collaboratively evaluate content and establish consensus on content classification" (paragraphs [0040], [0037]). [The social media system is flagged for human moderators to determine if the content is eligible to be on the social media system, and if the prospective user is allowed to join the social media system, based on the sample content they provided ([0040]). Collaboration among authors on content is supported with authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).] 
The applicant submits that the collaborative moderation module of the claimed invention enables multiple moderators to work together to assess content and reach a consensus on content classification. The collaborative moderation module is configured to foster a cooperative environment where moderators collectively decide on the appropriateness of content, 
leveraging diverse perspectives to improve moderation accuracy and fairness. On the other hand, the invention of Brown, as disclosed in paragraphs [0040] and [0037], includes human moderators to determine whether a user's content is eligible to be on the platform based on provided sample content. While Brown mentions collaboration among authors via authorship lists that include a mix of anonymous and non-anonymous authors, it does not teach or suggest collaborative moderation involving multiple moderators working together to evaluate and classify content dynamically. 
The system of Packalen, as disclosed in paragraph [0072], describes individual webmaster-based moderation decisions. There is no teaching or suggestion of multiple moderators working collaboratively or reaching consensus on content decisions. The lack of a collaborative framework in Packalen's system makes it fundamentally different from the claimed invention's approach to collective decision-making in content moderation. The applicant states that implementing a system where multiple moderators collaboratively assess content and establish consensus, as disclosed in the claimed invention, involves coordination among multiple moderators and real-time communication features that are not suggested by either Brown or Packalen. This approach requires a modified technical framework and provides a clear improvement in terms of ensuring comprehensive and fair content evaluation. The collaborative moderation module of the claimed invention introduces a novel and non-obvious approach to content moderation on online platforms by facilitating collaboration among multiple moderators to establish consensus. The claimed invention thereby significantly advances the systems of Brown and Packalen, which lack such a collaborative framework for moderation. The claimed invention creates a structured framework for multiple moderators to work together, which is fundamentally superior to the systems of Brown and Packalen, which contemplate review by an individual moderator. 
The Examiner alleged "a profile management module (106)" (paragraph [0039]). [Upon creation of a user account/profile, the content in the personalized content stream is based on the users' selections of which preset categories to include and which to exclude, such as cats ([0039]).] Brown does not teach: "track and display true positive trolling counts, coupled with the imposition of temporary commenting restrictions upon the identification of trolling behavior." 
The Examiner alleged Packalen teaches: "track and display true positive trolling counts, coupled with the imposition of temporary commenting restrictions upon the identification of trolling behavior" (paragraphs [0086], [0072]). [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The webmasters eliminate improper content, trolling, spamming, or flaming, and a moderator can remove unsuitable contributions to make it less violent, severe, intense, or rigorous; contributions that are irrelevant, obscene, illegal, or insulting with regards to useful or informative contributions might also be removed ([0072]).]... in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007). 
The applicant submits that the profile management module of the claimed invention is configured to track and display true positive trolling counts and enforce temporary commenting restrictions based on detected trolling behavior. The profile management module dynamically adjusts user access and privileges in real-time, based on the user's behavior, thus playing a critical role in maintaining the quality and safety of the online platform. On the other hand, the invention of Brown, as disclosed in paragraph [0039], allows users to customize their content stream by selecting categories of interest, such as excluding certain topics (e.g., cats). However, Brown's approach is fundamentally static and limited to content filtering based on user preferences. 
The system of Packalen discloses basic frequency counting of too frequent and too rare text items in paragraph [0086]. Packalen further discusses content removal by webmasters in paragraph [0072], but it does not teach or suggest the systematic tracking of true positive trolling instances combined with automated restriction implementation. The applicant submits that neither Brown nor Packalen teaches the claimed functionality of "tracking and displaying true positive trolling counts coupled with temporary commenting restrictions". The claimed invention's dynamic approach to user behavior tracking and automated restriction enforcement represents a significant technical advancement over the static filtering and basic content removal approaches disclosed in the prior art. 
The applicant submits that each component of the claimed invention introduces significant advancements over the prior art, offering a synergistic effect that enhances the functionality and effectiveness of online content moderation. The technological distinctions demonstrate that the claimed invention represents a significant advancement over Brown and Packalen by offering proactive intervention, collaborative moderation, and dynamic behavior management that are not contemplated by the disclosure of Brown or Packalen. Thus, the claimed invention is not obvious to one skilled in the art, and the applicant respectfully requests the withdrawal of the rejection under 35 U.S.C. § 103 for claim 1. 
Regarding Claim 12, 
The Examiner alleged Brown teaches: 
"An integrated anti-trolling and content moderation method for an online platform" and "analyzing user input in real-time to identify potential trolling behavior using a troll detection module comprising on-device and remote components (201)" (paragraph [0040]). [The system includes a user intake process that requires a prospective user to submit sample content prior to being accepted into the social media system, in order to create a barrier for trolls and those who practice hate speech; a machine learning system analyzes the sample content and classifies it according to previously applied metadata ([0040]).] 
Response: The applicant respectfully submits that the independent claim 12 recites an integrated anti-trolling and content moderation method comprising a unique sequence of interconnected steps that operate synergistically. The cited references, whether taken alone or in combination, fail to teach or suggest the claimed method as a whole and hence the method is non-obvious over the cited prior art. 
The applicant states that the method disclosed in the claimed invention analyzes the user input in real-time to identify potential trolling behavior using a hybrid approach with on-device and remote components. In contrast, Brown teaches a one-time user intake process that screens sample content before a user is accepted into the social media platform. The method as disclosed by Brown is static, wherein the initial screening approach fundamentally differs from the continuous, real-time analysis methodology of the claimed invention. 
The applicant states that the approach of Brown is preventative at the user registration level, whereas the claimed invention provides continuous real-time monitoring through "both on-device and remote components". This hybrid approach enables more efficient and effective detection of trolling behavior as it occurs, rather than only at the initial screening. The invention of Packalen, as disclosed in paragraphs [0069]-[0072], implements a basic moderation model that classifies content as acceptable or unacceptable through machine learning algorithms and lacks the hybrid approach of on-device and remote components for troll detection. The real-time, adaptive analysis performed by the troll detection module in the claimed invention substantially enhances the effectiveness and efficiency of content moderation beyond the initial screening approach disclosed by Brown or Packalen. Therefore, the claimed invention provides a novel and non-obvious improvement over the prior art, addressing the limitations of a static user intake method by offering a continuous, adaptive response to online trolling. 
"enabling users to designate moderators who evaluate and endorse comments and replies prior to the user or content creator's visibility (202)" (paragraphs [0037],[0038]).[Techniques are provided for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with content; authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]). Each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]).] 
Response: The applicant states that the claimed method enables the users to designate moderators who evaluate and endorse comments before they become public on the platform. 
In contrast, Brown provides techniques for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with the content. Brown further discusses authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]), wherein each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]). The method of Brown discloses generic moderator-supplied tags and content streams and fails to suggest allowing users to actively select their own moderators. 
Packalen, on the other hand, as disclosed in paragraph [0072], relies on webmasters who decide on the kinds of content that are appropriate based on the publishing policies. While Packalen discusses performing moderation, it does not teach or suggest a method where users can nominate their own moderators or establish a personalized moderation workflow. Packalen's approach represents a traditional moderation structure without user participation in moderator selection. 
Thus, the user-driven approach of the claimed invention represents an advancement over the generalized moderation approach of Brown and Packalen. 
"offering alternative phrasing suggestions to the user before posting through an AI-driven suggestions feature, promoting a culture of politeness and constructive communication (203)" (paragraph [0042]). [An item of content, along with the moderated tag set, is fed to a machine learning system for re-training on the moderated content, in order to accurately pinpoint and remove harmful content ([0042]).] (NOTE: The machine learning system is equivalent to the "Artificial Intelligence (AI)-driven suggestions module" and the removal of harmful content to "alternative rephrasing suggestions.") 
The applicant submits that the claimed method offers alternative phrasing suggestions to users before posting, through an AI-driven feature, to foster a culture of politeness and constructive communication. 
On the other hand, the invention of Brown, as disclosed in paragraph [0042], uses a machine learning model to re-train on moderated content, which is a reactive process focused on identifying and removing harmful content after it has been created and tagged. Further, Packalen, as described in paragraph [0070], focuses solely on content classification through machine learning algorithms. It does not teach or suggest providing alternative phrasing suggestions or any proactive intervention to improve content quality before posting. The method of Packalen is reactive in nature, analyzing content after it has been created rather than offering constructive suggestions during content creation. 
The applicant states that the proactive approach of the claimed invention, by providing suggestions for rephrasing before content is posted, represents a technical advancement over the reactive content filtering methods of Brown and Packalen. The proactive approach of "alternative rephrasing suggestions" is completely different from the reactive approach of "removal of harmful content". Further, moderation by removal of content and rephrasing before posting are two completely different concepts that require different sets of AI capabilities and user interaction protocols. The claimed method actively promotes constructive communication by suggesting alternatives, rather than simply filtering out undesirable content. 
"collaboratively moderating content by involving multiple moderators to collectively assess content and achieve consensus on content categorization (204)" (paragraphs [0040], [0037]). [The social media system is flagged for human moderators to determine if the content is eligible to be on the social media system, and if the prospective user is allowed to join the social media system, based on the sample content they provided ([0040]). Collaboration among authors on content is supported with authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).] 
The applicant submits that the claimed method involves multiple moderators to collaboratively assess content and reach consensus on content categorization. The method fosters a cooperative environment where moderators collectively decide on the appropriateness of content, leveraging diverse perspectives to improve moderation accuracy and fairness. 
On the other hand, the invention of Brown, as disclosed in paragraphs [0040] and [0037], includes human moderators to determine whether a user's content is eligible to be on the platform based on provided sample content. While Brown mentions collaboration among authors via authorship lists that include a mix of anonymous and non-anonymous authors, it does not teach or suggest collaborative moderation involving multiple moderators working together to evaluate and classify content dynamically. 
Packalen, on the other hand, as disclosed in paragraph [0072], describes individual webmaster-based moderation decisions without any teaching or suggestion of multiple moderators working collaboratively or reaching consensus on content decisions. The lack of a collaborative framework in Packalen makes it fundamentally different from the claimed invention's approach. The applicant states that implementing a method where multiple moderators collaboratively assess content and establish consensus, as disclosed in the claimed invention, involves coordination among multiple moderators and real-time communication features that are not suggested by either Brown or Packalen. This approach requires a modified technical framework and provides a clear improvement in terms of ensuring comprehensive and fair content evaluation. The claimed invention introduces a novel and non-obvious approach to content moderation on online platforms by facilitating collaboration among multiple moderators to establish consensus, and thereby significantly advances the methods of Brown and Packalen, which lack such a collaborative framework for moderation. The claimed invention creates a structured framework for multiple moderators to work together, which is fundamentally superior to both Brown and Packalen. 
"updating user profiles" (paragraph [0039]). [Upon creation of a user account/profile, the content in the personalized content stream is based on the users' selections of which preset categories to include and which to exclude, such as cats ([0039]).] 
Brown does not teach: "record true positive trolling instances and applying temporary comment restrictions for detected trolling (205)." 
Packalen teaches: "record true positive trolling instances and applying temporary comment restrictions for detected trolling (205)" (paragraphs [0086], [0072]). [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The webmasters eliminate improper content, trolling, spamming, or flaming, and a moderator can remove unsuitable contributions to make it less violent, severe, intense, or rigorous; contributions that are irrelevant, obscene, illegal, or insulting with regards to useful or informative contributions might also be removed ([0072]).]... in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007). 
The applicant submits that the claimed method comprises updating user profiles to record true positive trolling instances and applying temporary commenting restrictions for detected trolling. 
The invention of Brown, as disclosed in paragraph [0039], allows users to customize their content stream by selecting categories of interest, such as excluding certain topics (e.g., cats). However, Brown's approach is fundamentally static and limited to content filtering based on user preferences. 
The invention of Packalen discloses basic frequency counting of too frequent and too rare text items in paragraph [0086]. Packalen further discusses content removal by webmasters in paragraph [0072], but it does not teach or suggest the systematic tracking of true positive trolling instances combined with automated restriction implementation. 
The applicant submits that neither Brown nor Packalen teaches the claimed functionality of "updating user profiles to record true positive trolling instances and applying temporary commenting restrictions for detected trolling". The claimed invention's dynamic approach to user behavior tracking and automated restriction enforcement represents a significant technical advancement over the static filtering and basic content removal approaches disclosed in the prior art. 
The applicant submits that each step of the claimed method introduces significant advancements over the prior art, offering a synergistic effect that enhances the functionality and effectiveness of online content moderation. The technological distinctions demonstrate that the claimed invention represents a significant advancement over Brown and Packalen by offering proactive intervention, collaborative moderation, and dynamic behavior management that are not contemplated by the disclosure of Brown or Packalen. Thus, the claimed invention is not obvious to one skilled in the art, and the applicant respectfully requests the withdrawal of the rejection under 35 U.S.C. § 103 for claim 12. 
Regarding Claim 2, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: 
"wherein the troll detection module employs custom tokenization" (paragraph [0044]).[Non-fungible tokens (NFTs) are enabled for documents to be efficiently distributed ([0044]).] 
Brown does not teach: 
"wherein the troll detection module employs...AI model calibration to optimize troll detection accuracy" (paragraphs [0086], [0070]). [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The moderator tool executes a machine learning algorithm that creates a moderation model for how to perform moderation of data ([0070]).] (NOTE: The machine learning algorithm is equivalent to the "AI model.") 
Response: The applicant respectfully submits that claim 2 has been cancelled. 
Regarding Claim 3, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: "wherein the nominated moderator assesses and approves comments and replies prior to content creators' visibility" (paragraph [0096]). [For a content viewer to have the privilege of being able to submit tags, they must go through an intake process in order to be approved by a moderator ([0096]).] 
Response: The applicant respectfully submits that claim 3 has been cancelled. 
Regarding Claim 5, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: 
"wherein the moderator's approval of a comment or reply is a prerequisite for the content creator's visibility to said comment or reply" (paragraph [0072]). [Depending on the type of content and intended audience of each client, the webmaster of each client decides what kinds of user content and comments are appropriate, and moderation is performed in order to ensure that the contents to be published are acceptable; a moderator may remove unsuitable contributions ([0072]).] 
Response: The applicant respectfully submits that claim 5 has been cancelled. 
Regarding Claim 8, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: "wherein user comments or replies from users marked as allowed are directly posted to the platform without undergoing moderation, thereby expediting the visibility of said comments or replies" (paragraph [0073]). [Each client, however, has its own type of contents, with respect to style, theme, subject and acceptable features to be published ([0073]).] 
Response: The applicant respectfully submits this response to the rejection under 35 U.S.C. § 103 for claim 8, dependent on claim 1, which alleges obviousness over Brown in view of Packalen. 
The applicant submits that the profile management module of the claimed invention is configured to track and display true positive trolling counts and enforce temporary commenting restrictions based on detected trolling behavior. The profile management module dynamically adjusts user access and privileges in real-time, based on the user's behavior, thus playing a critical role in maintaining the quality and safety of the online platform. On the other hand, the invention of Brown, as disclosed in paragraph [0039], allows users to customize their content stream by selecting categories of interest, such as excluding certain topics (e.g., cats). However, Brown's approach is fundamentally static and limited to content filtering based on user preferences. 
The system of Packalen discloses basic frequency counting of too frequent and too rare text items in paragraph [0086]. Packalen further discusses content removal by webmasters in paragraph [0072], but it does not teach or suggest the systematic tracking of true positive trolling instances combined with automated restriction implementation. The applicant submits that neither Brown nor Packalen teaches the claimed functionality of "tracking and displaying true positive trolling counts coupled with temporary commenting restrictions". The claimed invention's dynamic approach to user behavior tracking and automated restriction enforcement represents a significant technical advancement over the static filtering and basic content removal approaches disclosed in the prior art. 
The applicant submits that each component of the claimed invention introduces significant advancements over the prior art, offering a synergistic effect that enhances the functionality and effectiveness of online content moderation. The technological distinctions demonstrate that the claimed invention represents a significant advancement over Brown and Packalen by offering proactive intervention, collaborative moderation, and dynamic behavior management that are not contemplated by the disclosure of Brown or Packalen. Thus, the claimed invention is not obvious to one skilled in the art, and the applicant respectfully requests the withdrawal of the rejection under 35 U.S.C. § 103 for claim 8. 
Regarding Claim 12, 
The Examiner alleged Brown teaches: 
"An integrated anti-trolling and content moderation metho for an online platform" and "analyzing user input in real-time to identify potential trolling behavior using a troll detection module comprising on-device and remote components (201)" (paragraph [0040]).[The system includes a user intake process that requires a prospective user to submit sample content prior to being accepted into the social media system, in order to create a barrier for trolls and those who practice hate speech; a machine learning system analyzes the sample content and classifies it according to previously applied metadata ([0040]).] 
Response: The applicant respectfully submits that the independent claim 12 recites an integrated anti-trolling and content moderation method comprising a unique sequence of interconnected steps that operate synergistically. The cited references, whether taken alone or in combination, fail to teach or suggest the claimed method as a whole and hence the method is non-obvious over the cited prior art. 
The applicant states that the method disclosed in the claimed invention analyses the user input in real-time to identify potential trolling behavior using a hybrid approach with an on- device and remote components. In contrast, Brown teaches a one-time user intake process that screens sample content before being accepted into the social media platform. The method as disclosed by Brown is static wherein the initial screening approach fundamentally differs from the continuous, real-time analysis methodology of the claimed invention. 
The applicant states that the approach of Brown is preventative at the user registration level, whereas the claimed invention provides continuous real-time monitoring through "both on- device and remote components". This hybrid approach enables more efficient and effective detection of trolling behavior as it occurs, rather than only at the initial screening. The invention of 
Packalen, as disclosed in paragraphs [0069-0072], implements a basic moderation model that classifies content as acceptable or unacceptable through machine learning algorithms and lacks the hybrid approach of on-device and remote components for troll detection. The real-time, adaptive analysis performed by the troll detection module in the claimed invention substantially enhances the effectiveness and efficiency of content moderation beyond the initial screening approach disclosed by Brown or Packalen. Therefore, the claimed invention provides novel and non-obvious improvement over the prior arts, addressing the limitations of static user intake method by offering a continuous, adaptive response to online trolling. 
"enabling users to designate moderators who evaluate and endorse comments and replies prior to the user or content creator's visibility (202)" (paragraphs [0037],[0038]).[Techniques are provided for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with content; authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]). Each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]).] 
Response: The applicant states that the claimed method enables the users to designate moderators who evaluate and endorse comments before it becomes public on the platform. 
In contrast, Brown provides the techniques for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with the content. Brown further talks about authorship lists that can contain a mix of anonymous and non- anonymous authors ([0037]) wherein each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]). The method of Brown discloses generic moderator-supplied tags and content streams and fails to suggest allowing users to actively select their own moderators. 
Packalen, on the other hand, as disclosed in paragraph [0072], relies on webmasters who decide on the kinds of content that are appropriate based on the publishing policies. While Packalen talks about performing the moderation, it does not teach or suggest a method where users can nominate their own moderators or establish a personalized moderation workflow. Packalen approach represents a traditional moderation structure without user participation in moderator selection. 
Thus, the user-driven approach of the claimed invention represents an advancement than the generalized moderation approach over Brown and Packalen. 
"offering alternative phrasing suggestions to the user before posting through an Al-driven suggestions feature, promoting a culture of politeness and constructive communication (203)" (paragraph [0042]). [An item of content, along with the moderated tag set is fed to a machine learning system for re-training on the moderated content, in order to accurately pinpoint and remove harmful content ([0042)).] (NOTE: The machine learning system is equivalent to the 
"Artificial Intelligence (Al)-driven suggestions module" and the removal of harmful content ot"alternative rephrasing suggestions.") 
The applicant submits that the claimed method offers alternative phrasing suggestions to users before posting, through an AI-driven feature, to foster a culture of politeness and constructive communication. 
On the other hand, the invention of Brown, as disclosed in paragraph ([0042]) uses a machine learning model to re-train on moderated content, which is a reactive process focused on identifying and removing harmful content after it has been created and tagged. Further, Packalen, as described in paragraph [0070], focuses solely on content classification through machine learning algorithms. It does not teach or suggest providing alternative phrasing suggestions or any proactive intervention to improve content quality before posting. The method of Packalen is reactive in nature, analyzing content after it has been created rather than offering constructive suggestions during content creation. 
The applicant states that the proactive approach of the claimed invention by providing suggestions for rephrasing before content is posted, represents a technical advancement over the reactive content filtering method of Brown and Packalen. The proactive approach of "alternative rephrasing suggestions" is completely different than the reactive approach of "removal of harmful content". Further, the concept of moderation in terms of removal of content and the rephrasing before posting are two completely different concepts, that requires a different set of AI capabilities and user interaction protocols. The claimed method actively promotes constructive communication by suggesting alternatives, rather than simply filtering out undesirable content. 
"collaboratively moderating content by involving multiple moderators to collectively assess content and achieve consensus on content categorization (204)"(paragraphs [0040],[0037]).[The social media system is flagged for human moderators to determine if the content is eligible to be on the social media system, and fi the prospective user is allowed to join the social media system, based on the sample content they provided ([0040]). Collaboration among authors on content is supported with authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]). 
The applicant submits that the claimed method involves multiple moderators to collaboratively assess content and reach consensus on content categorization. The method fosters a cooperative environment where moderators collectively decide on the appropriateness of content, leveraging diverse perspectives to improve moderation accuracy and fairness. 
On the other hand, the invention of Brown, as disclosed in paragraphs [0040] and [0037], includes human moderators who determine whether a user's content is eligible to be on the platform based on provided sample content. While Brown mentions collaborative authorship lists among authors, including a mix of anonymous and non-anonymous authorship, it does not teach or suggest collaborative moderation involving multiple moderators working together to evaluate and classify content dynamically.
Packalen, on the other hand, as disclosed in paragraph [0072], describes individual webmaster-based moderation decisions without any teaching or suggestion of multiple moderators working collaboratively or reaching consensus on content decisions. The lack of a collaborative framework in Packalen makes it fundamentally different from the approach of the claimed invention. The applicant states that implementing a method in which multiple moderators collaboratively assess content and establish consensus, as disclosed in the claimed invention, involves coordination among multiple moderators and real-time communication features that are not suggested by either Brown or Packalen. This approach requires a modified technical framework and provides a clear improvement in terms of ensuring comprehensive and fair content evaluation. The claimed invention introduces a novel and non-obvious approach to content moderation on online platforms by facilitating collaboration among multiple moderators to establish consensus, thereby significantly advancing over the methods of Brown and Packalen, which lack such a collaborative framework for moderation. The claimed invention creates a structured framework for multiple moderators to work together, which is fundamentally superior to both Brown and Packalen.
"updating user profiles" (paragraphs [00391). [Upon creation of a user account/profile, the content in the personalized content stream is based on the users' selections of which preset categories to include and which to exclude, such as cats ([0039]). 
Brown does not teach: "record true positive trolling instances and applying temporary comment restrictions for detected trolling (205)." 
Packalen teaches: "record true positive trolling instances and applying temporary comment restrictions for detected trolling (205)" (paragraphs [0086],[0072]). [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The webmasters eliminate improper content, trolling, spamming, or flaming, and a moderator can remove unsuitable contributions to make it less violent, severe, intense, or rigorous; contributions that are irrelevant, obscene, illegal, or insulting with regards to useful or informative contributions might also be removed ([0072]).]... in the same way to show a prima facie case of obviousness (MPEP 2143(1)C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007). 
The applicant further states that the claimed method comprises updating user profiles to record true positive trolling instances and applying temporary commenting restrictions for detected trolling. 
The invention of Brown, as disclosed in paragraph ([0039]) allows users to customize their content stream by selecting categories of interest, such as excluding certain topics (e.g., cats). However, Brown's approach is fundamentally static and limited to content filtering based on user preferences.
The invention of Packalen discloses basic frequency counting of too frequent and too rare text items in paragraph [0086]. Packalen further discusses content removal by webmasters in paragraph [0072], but it does not teach or suggest the systematic tracking of true positive trolling instances combined with automated restriction implementation.
The applicant submits that neither Brown nor Packalen teaches the claimed functionality of "updating user profiles to record true positive trolling instances and applying temporary commenting restrictions for detected trolling". The dynamic approach of the claimed invention to user behavior tracking and automated restriction enforcement represents a significant technical advancement over the static filtering and basic content removal approaches disclosed in the prior art.
The applicant submits that each step of the claimed method introduces significant advancements over the prior art, offering a synergistic effect that enhances the functionality and effectiveness of online content moderation. The technological distinctions demonstrate that the claimed invention represents a significant advancement over Brown and Packalen by offering proactive intervention, collaborative moderation, and dynamic behavior management that are not contemplated by the disclosures of Brown or Packalen. Thus, the claimed invention is not obvious to one skilled in the art, and the applicant respectfully requests the withdrawal of the rejection under 35 U.S.C. § 103 for claim 12.
Regarding Claim 2, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: 
"wherein the troll detection module employs custom tokenization" (paragraph [0044]).[Non-fungible tokens (NFTs) are enabled for documents to be efficiently distributed ([0044]).] 
Brown does not teach: 
"wherein the troll detection module employs...Al model calibration to optimize troll detection accuracy' (paragraphs [0086],[0070]). [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The moderator tool executes a machine learning algorithm that creates a moderation model for how to perform moderation of data ([0070)).] (NOTE: The machine learning algorithm is equivalent to the "Al model.") 
Response: The applicant respectfully submits that claim 2 has been cancelled.
Regarding Claim 3, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: "wherein the nominated moderator assesses and approves comments and replies prior to content creators' visibility" (paragraph [0096]). [For a content viewer to have the privilege of being able to submit tags, they must go through an intake process in order to be approved by a moderator ([0096]).]
Response: The applicant respectfully submits that claim 3 has been cancelled.
Regarding Claim 5, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: 
"wherein the moderator's approval of a comment or reply is a prerequisite for the content creator's visibility to said comment or reply' (paragraph [0072]). [Depending on the type of content and intended audience of each client, the webmaster of each client decides what kinds of user content and comments are appropriate, and moderation is performed in order to ensure that the contents to be published are acceptable; a moderator may remove unsuitable contributions ([0072]).] 
Response: The applicant respectfully submits that claim 5 has been cancelled. 
Regarding Claim 8, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: "wherein user comments or replies from users marked as allowed are directly posted to the platform without undergoing moderation, thereby expediting the visibility of said comments or replies" (paragraph [0073]). [Each client, however, has its own type of contents, with respect to style, theme, subject and acceptable features to be published ([0073]).]
Response: The applicant respectfully submits this response to the rejection under 35 U.S.C. § 103 for claim 8, dependent on claim 1, which alleges obviousness over Brown in view of Packalen. 
The applicant states that claim 8 describes a specific feature whereby users marked as "allowed" can post comments directly without moderation, expediting content visibility. On the other hand, Brown's disclosure indicates that each client has its own type of contents with respect to style, theme, subject, and acceptable features to be published, meaning that different clients have different content preferences and standards. Varying content standards for different clients do not teach or suggest the claimed technical capability of allowing trusted users to bypass moderation.
Since neither Brown nor Packalen discloses the automated expedited posting feature for pre-approved users, claim 8 is not rendered obvious by the cited references.
Hence, the examiner is requested to withdraw the rejection.
Regarding Claim 9, 
Brown in view of Packalen teaches all the limitations of parent Claim 1. Brown teaches: 
"wherein the temporary commenting restrictions is imposed upon identification of trolling behavior (paragraphs [0058],[00721]). [Due to the machine learning approach, the tool is able to moderate messages according to specific conventions and rules of the forum at hand ([0058]). Depending on the type of content and intended audience of each client, the webmaster of each client decides what kinds of user content and comments are appropriate, and moderation is performed in order to ensure that the contents to be published are acceptable in accordance with a publishing policy of each webmaster or other person with the responsibility for the contents to be published ([0072]).] 
Response: The applicant respectfully submits this response to the rejection under 35 U.S.C. § 103 for claim 9, dependent on claim 1, which alleges obviousness over Brown in view of Packalen. 
The applicant states that claim 9 discloses the automatic imposition of temporary commenting restrictions upon identification of trolling behavior. On the other hand, the invention of Packalen, as disclosed in paragraphs [0058] and [0072], describes general content moderation based on forum rules and publishing policies. While this content moderation is broader in scope, it does not teach the specific technical feature of automatically imposing temporary restrictions in direct response to detected trolling behavior. The claimed invention provides an automated, targeted response to trolling, while Brown merely describes manual moderation based on general guidelines. Since neither Brown nor Packalen discloses this automated temporary restriction mechanism triggered by trolling detection, claim 9 is not rendered obvious by the cited references.
Further, the claimed invention has the following technical advancements over the cited prior art:
> The claimed invention provides a unique hybrid approach for troll detection by integrating both on-device and remote components along with custom tokenization and AI model calibration that enables real-time analysis of user-generated content for detecting trolling behavior. Neither Brown nor Packalen discloses the hybrid architecture with custom tokenization. 
> The claimed invention discloses a personalized moderation module where users nominate their own moderators who assess content before publication. While Brown teaches moderation and Packalen discusses automated content assessment, neither Brown nor Packalen discloses user-nominated moderators for pre-publication assessment. 
> The claimed invention comprises an AI-driven suggestion module that provides alternative rephrasing suggestions before posting, promoting constructive communication. The proactive approach of alternative rephrasing suggestions is not disclosed by either Brown or Packalen. 
> The claimed invention comprises a collaborative moderation module where multiple moderators work together to evaluate content and establish consensus, creating a balanced and nuanced approach to content moderation. Neither Brown nor Packalen teaches this multi-moderator collaborative assessment. 
> The claimed system continuously evolves through user engagement and behavior analysis. The adaptive feature of the claimed invention allows the system to continuously improve its effectiveness in real time, a capability which is not addressed by either Brown or Packalen.
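For illustration only, the hybrid on-device/remote detection architecture described in the first point above could be sketched as a two-stage pipeline. This is a hypothetical sketch: the tokenizer, word list, threshold, and the remote-model stand-in are all invented for this example, as neither the claims nor the specification disclose such details.

```python
# Hypothetical sketch: a lightweight on-device check runs first, and only
# ambiguous content is escalated to a (stand-in) remote AI model.
import re

ON_DEVICE_BLOCKLIST = {"idiot", "loser"}  # illustrative only

def custom_tokenize(text: str) -> list[str]:
    # Illustrative "custom tokenization": lowercase word tokens only.
    return re.findall(r"[a-z']+", text.lower())

def on_device_check(text: str) -> str:
    tokens = custom_tokenize(text)
    if any(t in ON_DEVICE_BLOCKLIST for t in tokens):
        return "trolling"
    # Longer messages are escalated for deeper analysis.
    return "uncertain" if len(tokens) > 3 else "clean"

def remote_check(text: str) -> str:
    # Stand-in for a calibrated remote AI model; always benign here.
    return "clean"

def detect(text: str) -> str:
    """Hybrid verdict: decide on-device when possible, else go remote."""
    verdict = on_device_check(text)
    return remote_check(text) if verdict == "uncertain" else verdict
```

The design intent of such a split is that cheap, private on-device screening handles clear cases in real time, while the remote component is reserved for content the local check cannot classify.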
In light of the above details, the applicant submits that the claimed invention is not obvious to a person skilled in the art. The cited documents Brown and Packalen do not motivate a person to arrive at the claimed invention, which combines on-device and remote components with custom tokenization and AI calibration. Further, the integration of user-nominated moderators, proactive AI-driven content suggestions, collaborative multi-moderator consensus building, and automated behavior tracking with temporary restrictions creates a comprehensive system that delivers superior results in preventing trolling and moderating content, results that are not predictable from the cited references either individually or in combination.
Hence, the examiner is requested to withdraw the rejection under 35 U.S.C. § 103.
Examiner Note 
Prior art was not found for the rejection of Claims 4, 6-7, and 10-11. However, since the claims are rejected under both 35 U.S.C. § 101 and 35 U.S.C. § 112(a), they cannot be considered as allowable subject matter.
Response: The applicant submits that the rejections of the dependent claims under 35 U.S.C. § 101 and § 112(a) have been thoroughly addressed in previous responses to support the allowability of claims 4, 6-7, and 10-11. Further, the absence of prior art for these claims highlights the technological improvements of the claimed invention supported by the specification.
Therefore, given the responses to both the § 101 and § 112(a) rejections and considering the lack of prior art concerns, the applicant asserts that claims 4, 6-7, and 10-11 should be recognized as allowable subject matter. 
Hence, the examiner is requested to withdraw the rejection. 
Examiner respectfully disagrees with all the arguments presented by Applicant for Claims 1 and 12.  MPEP 2111 states as follows:
During patent examination, the pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Federal Circuit’s en banc decision in Phillips v. AWH Corp., 415 F.3d 1303, 1316, 75 USPQ2d 1321, 1329 (Fed. Cir. 2005) expressly recognized that the USPTO employs the "broadest reasonable interpretation" standard:  
The Patent and Trademark Office ("PTO") determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction "in light of the specification as it would be interpreted by one of ordinary skill in the art." In re Am. Acad. of Sci. Tech. Ctr., 367 F.3d 1359, 1364[, 70 USPQ2d 1827, 1830] (Fed. Cir. 2004). Indeed, the rules of the PTO require that application claims must "conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description." 37 CFR 1.75(d)(1).

Using this standard in examining the instant Claims 1 and 12, Examiner asserts that Brown in view of Packalen teaches all the limitations of the two independent Claims 1 and 12. 
Examiner acknowledges that Claims 2, 3, and 5 have been canceled and will no longer be considered.  In the current prosecution cycle, the rejections of dependent Claims 4 and 6-11 with respect to prior art will be withdrawn.
With respect to the Examiner’s Note, since the rejections under 35 U.S.C. § 101 and 112(a) both remain as outstanding, there is no allowable subject matter. 

Specification
The specification does not appear to meet the requirements of 37 C.F.R. § 1.71(a)-(c), which states as follows:
(a) The specification must include a written description of the invention or discovery and of the manner and process of making and using the same, and is required to be in such full, clear, concise, and exact terms as to enable any person skilled in the art or science to which the invention or discovery appertains, or with which it is most nearly connected, to make and use the same. 
(b) The specification must set forth the precise invention for which a patent is solicited, in such manner as to distinguish it from other inventions and from what is old. It must describe completely a specific embodiment of the process, machine, manufacture, composition of matter or improvement invented, and must explain the mode of operation or principle whenever applicable. The best mode contemplated by the inventor of carrying out his invention must be set forth. 
(c) In the case of an improvement, the specification must particularly point out the part or parts of the process, machine, manufacture, or composition of matter to which the improvement relates, and the description should be confined to the specific improvement and to such parts as necessarily cooperate with it or as may be necessary to a complete understanding or description of it.  
37 C.F.R. § 1.71 (a)-(c), emphasis added.
Although Claim 1 recites a “system,” no description of typical physical system components, such as processors or computer-readable media, or of how the various disclosed modules are interconnected with each other, appears anywhere in the seven-page specification, which contains only three pages of discussion in the detailed description. The descriptions are very brief given the complexity of the subject matter.  In addition, although the drawing labeled “Figure 1” is a picture of a box with the words “Online social networking platform 101” at the top, followed by enclosed boxes for each of the modules being claimed, there are no physical components, including processors, storage, or network elements, indicated in the system drawing to provide any context for the physical system.  The only other drawing, labeled “Figure 2,” merely lists the steps in the process, which, as indicated, are not fully explained in the specification. 
The first limitation of Claim 1 recites as follows:   
a troll detection module (102) configured to perform real-time analysis of user- generated content, detecting instances of trolling behavior through both on-device and remote components.
However, there is no detailed discussion of how such analysis and detection occur, but merely the following short descriptions, which extol the virtues of the process without a full explanation of how the function is performed: 
The troll detection module (102) employs a hybrid approach, incorporating both on-device and remote components to ensure prompt and accurate identification of trolling behavior. By utilizing custom tokenization and fine-tuned AI models, the troll detection module achieves remarkable precision in discerning instances of trolling, safeguarding the online community from offensive and disruptive content. Specification, paragraph [0018], emphasis added.

The system continuously evolves through user engagement and behavior analysis. By studying user interactions and engagement patterns, the system refines its troll detection models and enhances its suggestions algorithms. This iterative process ensures that the system remains adaptable and effective in tackling emerging forms of trolling and promoting positive online
interactions. Specification, paragraph [0023], emphasis added.
	The above statements refer to troll detection models and modules, but there is no discussion of these elements in “full, clear, concise, and exact terms as to enable any person skilled in the art or science to make and use the invention.” The same issues exist for the other limitations in the two independent Claims 1 and 12.  Therefore, the specification appears to violate the requirements of 37 C.F.R. § 1.71(a)-(c).
	
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA  35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the enablement requirement.  The claim(s) contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. 
Regarding Claim 1,
The first claim limitation recites “a troll detection module (102) configured to perform real-time analysis of user-generated content, detecting instances of trolling behavior through both on-device and remote components,” but there is no description in the specification to explain how the content analysis is performed, how trolling is defined, or exactly what process is used to detect behavior that amounts to trolling.  In discussing the advantages of the invention, the specification discloses as follows:
One primary advantage lies in the integration of a robust anti-trolling mechanism. Through the implementation of real-time troll detection models, the invention effectively intercepts and prevents the propagation of offensive content at its source. This proactive approach contributes to a more welcoming and secure digital space, fostering meaningful and constructive conversations. Specification, paragraph [0025], emphasis added.

However, without an explanation of the “troll detection module,” this limitation is not supported.  For example, it is not clear how the system performs the interceptions or prevents “the propagation of offensive content at its source.”   The same problems appear to be true of the other modules, such as the “Artificial Intelligence (AI)-driven suggestions module,” which is disclosed as follows:
Advancing further, step (203) introduces alternative phrasing suggestions to users before their posts, facilitated by an AI-driven suggestions feature, thereby fostering a culture of politeness and constructive communication. This seamless progression leads to step (204), wherein content is collaboratively moderated with the engagement of multiple moderators working collectively to assess content and achieve consensus on its categorization.  Specification, paragraph [0024], emphasis added.

A person of ordinary skill in the art might wonder where the suggestions are stored, how they were acquired, and how the user might access the suggestions.  Furthermore, it is unclear how one would evaluate whether the positive goal of a “culture of politeness and constructive communication” has actually been achieved.  
Regarding Claim 12,
The first claim limitation recites “analyzing user input in real-time to identify potential trolling behavior using a troll detection module comprising on-device and remote components (201).”  The same issues discussed above for Claim 1 are also applicable to Claim 12.  The specification would not enable a person of ordinary skill in the art to make and use the invention.
Regarding Claims 2-11,
Because the claims depend from a rejected base claim, they are also rejected.	
	
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for the reasons set forth below. 
Claims 1 and 12 are independent.  An understanding of the invention can be derived from a reading of independent Claim 12, which is reproduced below.
12.	 An integrated anti-trolling and content moderation system for an online platform, the system (100) comprising: 
a troll detection module (102) configured to perform real-time analysis of user- generated content, detecting instances of trolling behavior through both on-device and remote components;
a personalized moderation module (103) configured to enable the user to nominate moderators; 
an Artificial Intelligence (AI)-driven suggestions module (104) configured to provide users with alternative rephrasing suggestions before posting, fostering the promotion of polite and constructive communication; 
a collaborative moderation module (104) configured to facilitate multiple moderators to collaboratively evaluate content and establish consensus on content classification; and 
a profile management module (105) configured to track and display true positive trolling counts, coupled with the imposition of temporary commenting restrictions upon the identification of trolling behavior.
The purpose of the invention appears to be interception and prevention of trolling by analysis of content and the use of moderators.  
Under the 2019 Revised Guidance1, it is necessary to first look to whether the claim recites:
(1) any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity such as a
fundamental economic practice, or mental processes); and

(2) additional elements that integrate the judicial exception into a practical application (see Manual for Patent Examining Procedure ("MPEP") §§ 2106.05(a)-(c), (e)-(h)).

Only if a claim (1) recites a judicial exception and (2) does not integrate that exception into a practical application, then it is necessary look to whether the claim:
(3) adds a specific limitation beyond the judicial exception that are not "well-understood, routine, conventional" in the field (see MPEP § 2106.05(d)); or

(4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. 


Prong One - Abstract Idea
The Revised Guidance extracts and synthesizes key concepts identified by the courts as abstract ideas to explain that the abstract idea exception includes the following groupings of subject matter, when recited as such in a claim limitation:
(a) Mathematical concepts-mathematical relationships, mathematical formulas or equations, mathematical calculations;

(b) Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and

(c) Mental processes-concepts performed in the human mind (including an observation, evaluation, judgment, opinion).

Under the Revised Guidance, if the claim does not recite a judicial exception (a law of nature, natural phenomenon, or subject matter within the enumerated groupings of abstract ideas above), then the claim is patent-eligible at Prong One. This determination concludes the eligibility analysis, except in rare situations. However, if the claim recites a judicial exception (i.e., an abstract idea enumerated above, a law of nature, or a natural phenomenon), the claim requires further analysis for a practical application of the judicial exception in Step 2A.

Prong Two, Step 2A - Practical Application
If a claim recites a judicial exception in Step 2A, a determination is made whether the recited judicial exception is integrated into a practical application of that exception by: (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception(s); and (b) evaluating those additional elements individually and in combination to determine whether they integrate the exception into a practical application.
The seven identified "practical application" sections of the MPEP are cited in the Revised Guidance under Step 2A. The first four constitute “practical applications,” as follows:
(1) MPEP § 2106.05(a) Improvements to the Functioning of a Computer or to Any Other Technology or Technical Field

(2) MPEP § 2106.05(b) Particular Machine 

(3) MPEP § 2106.05(c) Particular Transformation

(4) MPEP § 2106.05(e) Other Meaningful Limitations.

The last three do not constitute “practical applications,” as follows:

(5) MPEP § 2106.05(f) Mere Instructions to Apply an Exception

(6) MPEP § 2106.05(g) Insignificant Extra-Solution Activity 

(7) MPEP § 2106.05(h) Field of Use and Technological Environment

If the recited judicial exception is integrated into a practical application as determined under one or more of the MPEP sections cited above, then the claim is not directed to the judicial exception, and the patent-eligibility inquiry ends. If not, then analysis proceeds to Step 2B.

Prong Two, Step 2B - "Inventive Concept" or "Significantly More"
The Federal Circuit has held that a claim that recites a judicial exception under Step 2A is nonetheless patent eligible at the second step of the Alice/Mayo test (USPTO Step 2B). This can occur if the claim recites additional elements that render the claim patent eligible by providing "significantly more" than the recited judicial exception, such as because the additional elements were unconventional in combination. Therefore, if a claim has been determined to be directed to a judicial exception under Revised Step 2A, the additional elements must be evaluated individually and in combination under Step 2B to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). Under the Revised Guidance, it must be determined in Step 2B whether an additional element or combination of elements: (1) "Adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present;" or (2) "simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present." See Revised Guidance, III.B.
If the Examiner determines under Step 2B that the element (or combination of elements) amounts to significantly more than the exception itself, the claim is eligible, thereby concluding the eligibility analysis.
However, if a determination is made that the element or combination of elements does not amount to significantly more than the exception itself, the claim is ineligible under Step 2B, and the claim should be rejected for lack of subject matter eligibility.
Analysis
In accordance with Prong One of the Revised Guidance, the steps recited in independent Claims 1 and 12, both of which recite analogous subject matter, are directed to a judicial exception. The process recited in the independent claims describes Certain Methods of Organizing Human Activity, which includes managing personal behavior or interactions between people. The term "troll" is defined as "to antagonize (others) online by deliberately posting inflammatory, irrelevant, or offensive comments or other disruptive content" (www.merriam-webster.com/dictionary/troll).
Specifically, the claimed "anti-trolling," "nominate moderators," "provide users with alternative rephrasing suggestions," "fostering the promotion of polite and constructive communication," "collaboratively evaluate content and establish consensus on content classification," and "track and display true positive trolling counts" functions all fall within "trolling" as defined above, and all represent functions directed to the abstract idea of managing personal behavior or relationships or interactions between people. The dependent Claims 4 and 6-11 also recite additional limitations associated with moderator approvals and assessment, scoring, evaluations, identification of behavior, and counting instances of true and false positives by moderators.
There are no disclosures in the specification or recitations in the claims of processors, memory, or non-transitory computer-readable media. Thus, no physical elements are recited that would preclude the invention from being classified as Certain Methods of Organizing Human Activity.
Accordingly, under Prong One, the independent claims recite an abstract idea.

Step 2A
Prong Two

After determining under Prong One that the claims recite a judicial exception, under Step 2A, Prong Two, the analysis is conducted to determine whether the judicial exception is integrated into a practical application. Such considerations as "Improvements to the Functioning of a Computer" or a "Particular Machine" are not applicable, since no computers or machines are disclosed. There also does not appear to be a "Particular Transformation," which is defined as "a transformation or reduction of a particular article to a different state or thing." MPEP § 2106.05(c). Since there is no article to be transformed, this circumstance does not exist. Finally, since intercepting trolling and all the other functions being claimed are well-understood, routine, and conventional activities, there are no "Other Meaningful Limitations" to transform the abstract idea into a practical application. See MPEP § 2106.05(e).
Based on the analysis under Step 2A, Prong Two of the Revised Guidance, independent Claims 1 and 12 do not recite a "practical application" to overcome the judicial exception.



Step 2B
Next, if a claim has been determined to be directed to a judicial exception under the Revised Guidance, the additional elements must be evaluated individually and in combination under Step 2B to determine whether they provide an inventive concept by   amounting to “significantly more” than the judicial exception itself.  It must be determined in Step 2B whether an additional element or combination of elements: (1) “Adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present;” or (2) “simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.” See Revised Guidance, III.B.
In the instant application, since intercepting trolling and all the other functions being claimed are well-understood, routine, and conventional activities, there is no aspect of the invention that amounts to "significantly more" than the judicial exception.
Accordingly, the analysis under the multiple steps of the Revised Guidance leads to the determination that independent Claims 1 and 12 are directed to an abstract idea under 35 U.S.C. 101, as are the dependent claims, which incorporate the abstract idea by virtue of their dependencies. Therefore, Claims 1, 4, and 6-12 are directed to a judicial exception, and are not patent-eligible.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA  35 U.S.C. 102 and 103 (or as subject to pre-AIA  35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA  to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.  
 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Brown (US 2021/0377052 A1, hereinafter referred to as Brown) in view of Packalen et al. (US 2019/0026601 A1, hereinafter referred to as Packalen).
Regarding Claim 1,
Brown teaches:
	“An integrated anti-trolling and content moderation system for an online platform” and “a troll detection module (102) configured to perform real-time analysis of user-generated content, detecting instances of trolling behavior through both on-device and remote components” (paragraph [0040]).  [The system includes a user intake process that requires a prospective user to submit sample content prior to being accepted into the social media system, in order to create a barrier for trolls and those who practice hate speech ([0040]).]
“a personalized moderation module (103) configured to enable the user to nominate moderators” (paragraphs [0037], [0038]).  [Techniques are provided for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with content; authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).  Each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]).]
“an Artificial Intelligence (AI)-driven suggestions module (104) configured to provide users with alternative rephrasing suggestions before posting, fostering the promotion of polite and constructive communication” (paragraph [0042]).  [An item of content, along with the moderated tag set, is fed to a machine learning system for re-training on the moderated content, in order to accurately pinpoint and remove harmful content ([0042]).]  (NOTE: The machine learning system is equivalent to the “Artificial Intelligence (AI)-driven suggestions module,” and the removal of harmful content corresponds to the “alternative rephrasing suggestions.”)
“a collaborative moderation module (104) configured to facilitate multiple moderators to collaboratively evaluate content and establish consensus on content classification” (paragraphs [0040], [0037]).  [The social media system is flagged for human moderators to determine if the content is eligible to be on the social media system, and if the prospective user is allowed to join the social media system, based on the sample content they provided ([0040]). Collaboration among authors on content is supported with authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).]
“a profile management module (105)” (paragraph [0039]).  [Upon creation of a user account/profile, the content in the personalized content stream is based on the users' selections of which preset categories to include and which to exclude, such as cats ([0039]).]
Brown does not teach:
“track and display true positive trolling counts, coupled with the imposition of temporary commenting restrictions upon the identification of trolling behavior.”
Packalen teaches:
“track and display true positive trolling counts, coupled with the imposition of temporary commenting restrictions upon the identification of trolling behavior” (paragraphs [0086], [0072]).  [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The webmasters eliminate improper content, trolling, spamming, or flaming, and a moderator can remove unsuitable contributions to make the content less violent, severe, intense, or rigorous; contributions that are irrelevant, obscene, illegal, or insulting with regard to useful or informative contributions might also be removed ([0072]).]
Both Brown and Packalen teach systems in which trolling is evaluated, and those systems are comparable to that of the instant application.  Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to include in the Brown disclosure, the ability to count items of text, including trolling, as taught by Packalen.  Such inclusion would have increased the usefulness of the messaging system by providing the ability to evaluate the extent of the trolling problem in a system via the counts, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding Claim 12,
 Brown teaches:
	“An integrated anti-trolling and content moderation method for an online platform” and “analyzing user input in real-time to identify potential trolling behavior using a troll detection module comprising on-device and remote components (201)” (paragraph [0040]).  [The system includes a user intake process that requires a prospective user to submit sample content prior to being accepted into the social media system, in order to create a barrier for trolls and those who practice hate speech; a machine learning system analyzes the sample content and classifies it according to previously applied metadata ([0040]).]
“enabling users to designate moderators who evaluate and endorse comments and replies prior to the user or content creator's visibility (202)” (paragraphs [0037], [0038]).  [Techniques are provided for computerized moderation, authorship recording, and distribution of social media content using moderator-supplied tags associated with content; authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).  Each user gets a personalized content stream, which may be posted by the user and/or other users to the social media system ([0038]).]
“offering alternative phrasing suggestions to the user before posting through an AI-driven suggestions feature, promoting a culture of politeness and constructive communication (203)” (paragraph [0042]).  [An item of content, along with the moderated tag set, is fed to a machine learning system for re-training on the moderated content, in order to accurately pinpoint and remove harmful content ([0042]).]  (NOTE: The machine learning system is equivalent to the “Artificial Intelligence (AI)-driven suggestions module,” and the removal of harmful content corresponds to the “alternative rephrasing suggestions.”)
“collaboratively moderating content by involving multiple moderators to collectively assess content and achieve consensus on content categorization (204)” (paragraphs [0040], [0037]).  [The social media system is flagged for human moderators to determine if the content is eligible to be on the social media system, and if the prospective user is allowed to join the social media system, based on the sample content they provided ([0040]). Collaboration among authors on content is supported with authorship lists that can contain a mix of anonymous and non-anonymous authors ([0037]).]
“updating user profiles” (paragraph [0039]).  [Upon creation of a user account/profile, the content in the personalized content stream is based on the users' selections of which preset categories to include and which to exclude, such as cats ([0039]).]
Brown does not teach:
“record true positive trolling instances and applying temporary comment restrictions for detected trolling (205).”
Packalen teaches:
“record true positive trolling instances and applying temporary comment restrictions for detected trolling (205)”  (paragraphs [0086], [0072]).  [Frequency counts are performed in order to remove too frequent and too rare text items ([0086]). The webmasters eliminate improper content, trolling, spamming, or flaming, and a moderator can remove unsuitable contributions to make it less violent, severe, intense, or rigorous; contributions that are irrelevant, obscene, illegal, or insulting with regards to useful or informative contributions might also be removed ([0072]).]  
Both Brown and Packalen teach systems in which trolling is evaluated, and those systems are comparable to that of the instant application.  Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to include in the Brown disclosure, the ability to count items of text, including trolling, as taught by Packalen.  Such inclusion would have increased the usefulness of the messaging system by providing the ability to evaluate the extent of the trolling problem in a system via the counts, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).

Examiner Note
Prior art was not found for the rejection of Claims 4, 6-7, and 10-11.  However, since the claims are rejected under both 35 U.S.C. 101 and 35 U.S.C. 112(a), they cannot be considered allowable subject matter.

Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHYLLIS A BOOK whose telephone number is (571)272-0698. The examiner can normally be reached M-F 10:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS can be reached on 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.





/PHYLLIS A BOOK/Primary Examiner, Art Unit 2454                                                                                                                                                                                                        


1 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (January 7, 2019) (hereinafter "Revised Guidance") (https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf)
    

