17972548. BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA simplified abstract (Microsoft Technology Licensing, LLC)
Contents
- 1 BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 How does the anonymity technique ensure that the synthetic data does not contain any specific details that can be linked back to the confidential data?
- 1.12 What are the potential limitations or drawbacks of deploying the target model back in the eyes-off environment to classify the confidential data?
- 1.13 Original Abstract Submitted
BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA
Organization Name
Microsoft Technology Licensing, LLC
Inventor(s)
David Benjamin Levitan of Bothell WA (US)
Robert Alexander Sim of Bellevue WA (US)
Julia S. Mcanallen of Seattle WA (US)
Huseyin Atahan Inan of Redmond WA (US)
Girish Kumar of Redmond WA (US)
BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA - A simplified explanation of the abstract
This abstract first appeared for US patent application 17972548 titled 'BUILDING ANNOTATED MODELS BASED ON EYES-OFF DATA'.
Simplified Explanation
The patent application describes a method for building annotated models based on eyes-off data, where a synthetic data generation model is trained in an eyes-off environment using an anonymity technique on confidential data. The synthetic data is then used to train a target model in an eyes-on environment, which is later deployed back in the eyes-off environment to classify the confidential data.
- Synthetic data generation model trained in eyes-off environment
- Anonymity technique used on confidential data
- Synthetic data created without specific details linked to confidential data
- Annotated synthetic data used to train target model in eyes-on environment
- Target model deployed back in eyes-off environment to classify confidential data
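The workflow above can be sketched end to end. This is our own toy illustration, not code from the patent: the function names (`fit_generator`, `generate`, `train_classifier`) are invented, the "anonymity technique" is stood in for by simple noise on learned statistics, and the data is synthetic one-dimensional toy data.

```python
import random
import statistics

random.seed(0)

# --- Eyes-off environment: confidential records never leave here ---
confidential = [(random.gauss(0.0, 1.0), "benign") for _ in range(200)] + \
               [(random.gauss(3.0, 1.0), "urgent") for _ in range(200)]

def fit_generator(data, noise=0.5):
    """Learn per-class summary statistics, perturbed with noise as a
    stand-in for the patent's (unspecified) anonymity technique."""
    params = {}
    for label in {lbl for _, lbl in data}:
        vals = [x for x, lbl in data if lbl == label]
        params[label] = (statistics.mean(vals) + random.gauss(0, noise),
                         statistics.stdev(vals))
    return params

def generate(params, n_per_class=200):
    """Emit annotated synthetic examples; no record maps to a real one."""
    return [(random.gauss(mu, sd), label)
            for label, (mu, sd) in params.items()
            for _ in range(n_per_class)]

# --- Eyes-on environment: humans may inspect the synthetic data ---
synthetic = generate(fit_generator(confidential))

def train_classifier(data):
    """Toy 1-D nearest-mean classifier trained on synthetic data only."""
    means = {label: statistics.mean(x for x, lbl in data if lbl == label)
             for label in {lbl for _, lbl in data}}
    def classify(x):
        return min(means, key=lambda lbl: abs(x - means[lbl]))
    return classify

classify = train_classifier(synthetic)

# --- Back in the eyes-off environment: classify confidential records ---
accuracy = sum(classify(x) == y for x, y in confidential) / len(confidential)
```

The key property being illustrated is that the target model is never trained on the confidential records themselves, only on synthetic data derived from perturbed aggregates.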
Potential Applications
The technology could be applied in industries where sensitive data needs to be protected while still allowing for the development of accurate models and algorithms.
Problems Solved
This technology addresses the challenge of training models on confidential data without compromising the privacy and security of the information.
Benefits
The method allows for the development of accurate models without exposing sensitive data, ensuring data privacy and security.
Potential Commercial Applications
One potential commercial application of this technology could be in the healthcare industry, where patient data needs to be protected while still enabling the development of predictive models for diagnosis and treatment.
Possible Prior Art
One possible prior art could be the use of synthetic data generation techniques in machine learning to protect sensitive information during model training.
Unanswered Questions
How does the anonymity technique ensure that the synthetic data does not contain any specific details that can be linked back to the confidential data?
The patent application does not provide detailed information on the specific mechanisms used in the anonymity technique to prevent the identification of confidential data in the synthetic data.
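Although the application leaves the mechanism unspecified, one widely used family of anonymity techniques is differential privacy, which adds calibrated noise so that any released statistic is insensitive to any single confidential record. A minimal sketch (our own illustration, not from the patent) of a Laplace-noised counting query:

```python
import math
import random

random.seed(1)

def laplace_noise(scale):
    # Laplace(0, scale): a random sign times an Exponential(scale) draw
    return random.choice((-1, 1)) * -scale * math.log(1 - random.random())

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical confidential records (labels only, for illustration)
emails = ["invoice", "meeting", "invoice", "newsletter"]
noisy = dp_count(emails, lambda r: r == "invoice", epsilon=0.5)
```

A generator trained against such noised statistics can produce synthetic data whose distribution resembles the confidential data without any output depending strongly on one individual record.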
What are the potential limitations or drawbacks of deploying the target model back in the eyes-off environment to classify the confidential data?
The patent application does not discuss any potential challenges or risks associated with deploying the target model back in the eyes-off environment, such as the accuracy of the classification results or the potential for data leakage.
Original Abstract Submitted
Systems and methods are directed to building annotated models based on eyes-off data. Specifically, a synthetic data generation model is trained and used to further train a target model. The synthetic data generation model is trained within an eyes-off environment using an anonymity technique on confidential data. The synthetic data generation model is then used to create synthetic data that closely represents the confidential data but without any specific details that can be linked back to the confidential data. The synthetic data is then annotated and used to train the target model within an eyes-on environment. Subsequently, the target model is deployed back within the eyes-off environment to classify the confidential data.