US Patent Application 17825734. CONTENT MODERATION SERVICE FOR SYSTEM GENERATED CONTENT simplified abstract

From WikiPatents
Revision as of 15:08, 3 December 2023 by Wikipatents (Creating a new page)

CONTENT MODERATION SERVICE FOR SYSTEM GENERATED CONTENT

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Bernhard Kohlmeier of Seattle, WA (US)

Natalie Ann Ramirez of Seattle, WA (US)

CONTENT MODERATION SERVICE FOR SYSTEM GENERATED CONTENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 17825734, titled 'CONTENT MODERATION SERVICE FOR SYSTEM GENERATED CONTENT'.

Simplified Explanation

- The patent application describes a method carried out by a content moderation service.
- The method involves receiving system generated content from an application that has triggered a content moderation list.
- The trigger can be a word, phrase, image, audio, or video that requires a moderation action.
- The content moderation service also receives context for the trigger: other words, phrases, images, or media in proximity to the trigger.
- The trigger and its context are evaluated together using a content moderation process to determine a risk assessment.
- The risk assessment is then provided to the application, which uses it to determine the appropriate moderation action.
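The flow above (detect a trigger from a moderation list, gather nearby context, evaluate both, and return a risk assessment) can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the moderation list, the context window size, and the toy scoring rule are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical moderation list: terms flagged as requiring a moderation action.
MODERATION_LIST = {"spoiler", "leak"}

@dataclass
class RiskAssessment:
    trigger: str
    context: str
    score: float  # 0.0 (benign) .. 1.0 (high risk)

def find_trigger(words):
    """Return (index, word) for the first moderation-list hit, or None."""
    for i, word in enumerate(words):
        if word.lower() in MODERATION_LIST:
            return i, word
    return None

def gather_context(words, index, window=3):
    """Collect words in proximity to the trigger (the trigger's 'context')."""
    lo, hi = max(0, index - window), index + window + 1
    return " ".join(words[lo:hi])

def evaluate(trigger, context):
    """Toy risk model: more moderation-list terms near the trigger -> higher risk."""
    hits = sum(1 for w in context.split() if w.lower() in MODERATION_LIST)
    return RiskAssessment(trigger, context, min(1.0, hits / 3))

def moderate(system_generated_text):
    """Mirror the claimed flow: detect trigger, gather context, assess risk."""
    words = system_generated_text.split()
    found = find_trigger(words)
    if found is None:
        return None  # no trigger, nothing to moderate
    index, trigger = found
    return evaluate(trigger, gather_context(words, index))
```

In the claimed method the returned risk assessment is handed back to the calling application, which then chooses the moderation action; the caller here would do the same with the `RiskAssessment` object.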


Original Abstract Submitted

A method carried out by a content moderation service can include receiving system generated content from an application identified as a result of detecting a trigger associated with a content moderation list in the system generated content, wherein the trigger associated with a content moderation list is a word, phrase, image, audio, or video indicated as requiring a moderation action; receiving context for the trigger, wherein the context for the trigger comprises other words or phrases or images or other media in proximity to the trigger; evaluating the trigger in combination with the context for the trigger according to a content moderation process to obtain a determination of a risk assessment; and providing the risk assessment to the application for determining the moderation action to the application.