20240054178. FACTORIZING VECTORS BY UTILIZING RESONATOR NETWORKS simplified abstract (International Business Machines Corporation)

From WikiPatents

FACTORIZING VECTORS BY UTILIZING RESONATOR NETWORKS

Organization Name

International Business Machines Corporation

Inventor(s)

Jovin Langenegger of Gerlikon (CH)

Kumudu Geethan Karunaratne of Gattikon (CH)

Michael Andreas Hersche of Zurich (CH)

Abu Sebastian of Adliswil (CH)

Abbas Rahimi of Rüschlikon (CH)

FACTORIZING VECTORS BY UTILIZING RESONATOR NETWORKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240054178, titled 'FACTORIZING VECTORS BY UTILIZING RESONATOR NETWORKS'.

Simplified Explanation

The patent application describes a computer-implemented method for factorizing a vector using resonator network modules, namely an unbinding module and search-in-superposition modules.

  • The unbinding module produces unbound vectors, which represent estimates of the codevectors of the product vector.
  • A reversible first operation is performed on the unbound vectors to obtain quasi-orthogonal vectors.
  • The quasi-orthogonal vectors are fed to the search-in-superposition modules, which rely on a single codebook to obtain transformed vectors.
  • A second operation, the inverse of the first, is applied to the transformed vectors to yield refined estimates of the codevectors.
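The steps above can be sketched as a toy resonator loop in NumPy. This is a minimal illustration under assumptions the abstract does not fix: bipolar (+/-1) vectors, elementwise-multiply binding, and a fixed random coordinate permutation standing in for the reversible first operation (the method only requires that the operation be invertible).

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, F = 1024, 10, 3    # dimension, codevectors per factor, number of factors

# Single shared codebook of random bipolar codevectors (assumed +/-1 encoding).
codebook = rng.choice([-1, 1], size=(M, D))

# One reversible "first operation" per factor, illustrated as a fixed random
# permutation (a hypothetical choice, not specified by the abstract).
perms = [rng.permutation(D) for _ in range(F)]

def first_op(v, f):
    """Apply factor f's reversible operation (permute coordinates)."""
    return v[perms[f]]

def second_op(v, f):
    """Inverse of the first operation (un-permute coordinates)."""
    out = np.empty_like(v)
    out[perms[f]] = v
    return out

# Ground truth: each factor is an inverse-transformed codevector, so its
# transformed version lies in the single shared codebook.
true_idx = rng.integers(0, M, size=F)
factors = [second_op(codebook[i], f) for f, i in enumerate(true_idx)]
product = np.prod(np.stack(factors), axis=0)   # binding = elementwise multiply

# Initialize each estimate as the superposition of all possible factor values.
est = [np.where(sum(second_op(c, f) for c in codebook) >= 0, 1, -1)
       for f in range(F)]

for _ in range(50):                            # resonator iterations
    for f in range(F):
        others = np.prod(np.stack([est[g] for g in range(F) if g != f]), axis=0)
        unbound = product * others             # unbinding (multiply is self-inverse)
        quasi = first_op(unbound, f)           # quasi-orthogonal vector
        sims = codebook @ quasi                # similarities to shared codebook
        transformed = codebook.T @ sims        # search in superposition
        est[f] = second_op(np.where(transformed >= 0, 1, -1), f)

recovered = [int(np.argmax(codebook @ first_op(e, f))) for f, e in enumerate(est)]
```

When the other factor estimates are correct, unbinding isolates one factor exactly; the permutation maps it back into the shared codebook, where the weighted superposition denoises it before the inverse operation is applied.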

---

Potential Applications

  • Signal processing
  • Data compression
  • Pattern recognition

Problems Solved

  • Efficient vector factorization
  • Utilizing a single codebook for transformation
  • Reversible operations for refining estimates

Benefits

  • Improved accuracy in estimating codevectors
  • Simplified vector factorization process
  • Reduced computational complexity


Original Abstract Submitted

The disclosure includes a computer-implemented method of factorizing a vector by utilizing resonator network modules. Such modules include an unbinding module, as well as search-in-superposition modules. The method includes the following steps. A product vector is fed to the unbinding module to obtain unbound vectors. The latter represent estimates of codevectors of the product vector. A first operation is performed on the unbound vectors to obtain quasi-orthogonal vectors. The first operation is reversible. The quasi-orthogonal vectors are fed to the search-in-superposition modules, which rely on a single codebook. In this way, transformed vectors are obtained, utilizing a single codebook. A second operation is performed on the transformed vectors. The second operation is an inverse operation of the first operation, which makes it possible to obtain refined estimates of the codevectors.
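Read against a common resonator-network formalization (an assumption; the abstract fixes no notation), the four steps for the product vector s, factor estimates x̂, reversible operation P_f, and shared codebook C can be written as:

\[
\begin{aligned}
\mathbf{u}_f &= \mathbf{s} \oslash \prod_{g \neq f} \hat{\mathbf{x}}_g && \text{(unbinding)}\\
\mathbf{q}_f &= P_f(\mathbf{u}_f) && \text{(first, reversible operation)}\\
\mathbf{t}_f &= \operatorname{sign}\!\left(\mathbf{C}^{\top}\mathbf{C}\,\mathbf{q}_f\right) && \text{(search in superposition, single codebook } \mathbf{C}\text{)}\\
\hat{\mathbf{x}}_f &\leftarrow P_f^{-1}(\mathbf{t}_f) && \text{(second operation, inverse of the first)}
\end{aligned}
\]

Here ⊘ denotes unbinding (the inverse of the binding operation), and the sign-thresholded projection through C is one standard choice of search-in-superposition update.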