18372900. Scalable Feature Selection Via Sparse Learnable Masks simplified abstract (GOOGLE LLC)

From WikiPatents

Scalable Feature Selection Via Sparse Learnable Masks

Organization Name

GOOGLE LLC

Inventor(s)

Sercan Omer Arik of San Francisco CA (US)

Yihe Dong of New York NY (US)

Scalable Feature Selection Via Sparse Learnable Masks - A simplified explanation of the abstract

This abstract first appeared for US patent application 18372900 titled 'Scalable Feature Selection Via Sparse Learnable Masks'.

Simplified Explanation

The patent application introduces an approach called Sparse Learnable Masks (SLM) for feature selection in machine learning models. SLM integrates learnable sparse masks into end-to-end training; to handle the non-differentiability of selecting a fixed number of features, it uses dual mechanisms that automatically scale the mask to a desired feature sparsity and gradually temper that sparsity for effective learning. SLM also employs an objective that increases the mutual information between the selected features and the labels in an efficient, scalable manner.

  • SLM is a canonical approach for feature selection in machine learning models.
  • SLM integrates learnable sparse masks into end-to-end training.
  • SLM includes dual mechanisms for automatic mask scaling to achieve a desired feature sparsity.
  • SLM gradually adjusts sparsity for effective learning.
  • SLM aims to increase mutual information between selected features and labels efficiently and scalably.
  • Empirical results show that SLM can achieve or improve upon state-of-the-art results on benchmark datasets while reducing computational complexity and cost.
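The core idea of a learnable mask with gradually tempered sparsity can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the patented mechanism: here a softmax over learned feature scores is sharpened by lowering a temperature, so the mask moves from diffuse (early training) toward near one-hot (late training); SLM's dual scaling mechanisms are more involved.

```python
import numpy as np

def sparse_mask(scores, temperature):
    """Turn learnable feature scores into a soft selection mask.

    Lower temperatures push the softmax toward a sparse, near
    one-hot mask; a training loop would temper the temperature
    downward over time (illustrative only, not SLM's exact scheme).
    """
    z = scores / temperature
    z = z - z.max()                       # numerical stability
    mask = np.exp(z) / np.exp(z).sum()
    return mask

rng = np.random.default_rng(0)
scores = rng.normal(size=8)               # stand-in for learned parameters

soft = sparse_mask(scores, temperature=5.0)   # early training: diffuse mask
hard = sparse_mask(scores, temperature=0.05)  # late training: near one-hot

# Masked features would be features * mask; the low temperature
# concentrates weight on the highest-scoring features, approximating
# discrete selection while staying differentiable.
```

Annealing the temperature inside end-to-end training is what lets gradient descent handle an otherwise discrete selection step.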

Potential Applications

The technology can be applied in various fields such as image recognition, natural language processing, and bioinformatics for feature selection in machine learning models.

Problems Solved

1. Automatic selection of a desired number of features in machine learning models.
2. Efficient increase of mutual information between selected features and labels.
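To make the mutual-information objective concrete, the quantity SLM aims to increase can be estimated for discrete data from an empirical joint distribution. This histogram estimator only illustrates what mutual information measures; the patent describes a differentiable, scalable objective, not this computation.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X; Y) in nats for two discrete arrays
    via the empirical joint distribution."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (x_idx, y_idx), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of x
    py = joint.sum(axis=0, keepdims=True)   # marginal of y
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# A feature identical to the label carries maximal information;
# a feature independent of the label carries none.
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
informative = y.copy()
noise = np.array([0, 1, 0, 1, 0, 1, 0, 1])

mi_informative = mutual_information(informative, y)
mi_noise = mutual_information(noise, y)
print(mi_informative)  # log 2 ≈ 0.693
print(mi_noise)        # 0.0
```

Selecting features that score high on this quantity is exactly what problem 2 above describes; doing it efficiently inside gradient-based training is the claimed contribution.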

Benefits

1. Improved performance on benchmark datasets.
2. Reduced computational complexity and cost.

Potential Commercial Applications

The technology can be utilized in industries such as healthcare, finance, and e-commerce for optimizing machine learning models with efficient feature selection.

Possible Prior Art

Possible prior art includes traditional feature selection methods used in machine learning, such as filter, wrapper, and embedded methods.
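For contrast with SLM's end-to-end approach, a filter method can be sketched in a few lines: each feature is scored independently of any model (here with absolute Pearson correlation against the label, an illustrative choice) and the top-k are kept. The decoupling of scoring from training is what distinguishes this prior-art family from SLM.

```python
import numpy as np

def filter_select(X, y, k):
    """Filter-style feature selection: score each feature
    independently (absolute Pearson correlation with the label)
    and keep the top-k indices."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    )
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
y = rng.normal(size=200)
X = rng.normal(size=(200, 10))
X[:, 3] = y + 0.1 * rng.normal(size=200)  # plant one informative feature

selected = filter_select(X, y, k=2)
print(selected)  # feature 3 ranks first
```

Because the scores never see the downstream model, filter methods can miss feature interactions that an end-to-end method like SLM can in principle exploit.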

What are the limitations of the SLM approach in feature selection?

The article does not mention any potential limitations or drawbacks of the SLM approach in feature selection.

How does SLM compare to other feature selection techniques in terms of computational efficiency?

The article does not provide a direct comparison of SLM with other feature selection techniques in terms of computational efficiency.


Original Abstract Submitted

Aspects of the disclosure are directed to a canonical approach for feature selection referred to as sparse learnable masks (SLM). SLM integrates learnable sparse masks into end-to-end training. For the fundamental non-differentiability challenge of selecting a desired number of features, SLM includes dual mechanisms for automatic mask scaling by achieving a desired feature sparsity and gradually tempering this sparsity for effective learning. SLM further employs an objective that increases mutual information (MI) between selected features and labels in an efficient and scalable manner. Empirically, SLM can achieve or improve upon state-of-the-art results on several benchmark datasets, often by a significant margin, while reducing computational complexity and cost.