18046322. PROCESSING TENSORS simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)


PROCESSING TENSORS

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Julian Heyne of Stuttgart (DE)

Razvan Peter Figuli of Remchingen (DE)

Cedric Lichtenau of Stuttgart (DE)

Holger Horbach of Aidlingen (DE)

PROCESSING TENSORS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18046322, titled 'PROCESSING TENSORS'.

Simplified Explanation

The present disclosure describes a method for efficiently accessing a multidimensional tensor of elements in computer memory.

  • The tensor is organized into two-dimensional pages, each containing one-dimensional sticks. The method loads the pages linearly and, for each page, loads the non-empty sticks from memory using the page's base address.
  • The base address of the subsequent page is determined from the number of loaded sticks together with an address offset that accounts for the page's potentially empty sticks.
  • When the number of loaded pages reaches a chunk size, the chunk page counter is reinitialized and linear loading continues with the subsequent page (see the sketch below).
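
The loading scheme can be pictured with a short sketch. The Python snippet below is an illustration only: the stick size, the number of sticks per page, and all names (load_tensor, non_empty_counts, and so on) are assumptions made for this example and do not come from the patent application.

# Minimal sketch of linear page-by-page loading with a chunk counter.
# All sizes and names here are illustrative assumptions, not taken
# from the patent application.

STICK_BYTES = 128          # assumed size of one stick in bytes
STICKS_PER_PAGE = 32       # assumed number of sticks per page

def load_tensor(memory, num_pages, non_empty_counts, chunk_size):
    """Load the non-empty sticks of each page, reinitializing a chunk
    counter whenever chunk_size pages have been loaded."""
    loaded = []
    base = 0                                # base address of the current page
    chunk_counter = 0
    for page in range(num_pages):
        n = non_empty_counts[page]          # non-empty sticks in this page
        for s in range(n):                  # load each non-empty stick
            addr = base + s * STICK_BYTES
            loaded.append(memory[addr:addr + STICK_BYTES])
        # The address offset accounts for the potentially empty sticks
        # of the page when deriving the next page's base address.
        empty_offset = (STICKS_PER_PAGE - n) * STICK_BYTES
        base = base + n * STICK_BYTES + empty_offset
        chunk_counter += 1
        if chunk_counter == chunk_size:     # chunk boundary reached
            chunk_counter = 0               # reinitialize and continue
    return loaded

Here a flat byte buffer stands in for the computer system's memory; the disclosure itself is concerned with how the pages and sticks are laid out and addressed in memory, not with any particular data container.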

Potential Applications

This method could be applied in fields such as data processing, image recognition, and machine learning, wherever algorithms require efficient access to multidimensional data structures.

Problems Solved

This method solves the problem of efficiently accessing and loading data from a multidimensional tensor stored in memory, reducing processing time and improving overall system performance.

Benefits

The benefits of this method include faster data access, optimized memory usage, and improved computational efficiency in handling large-scale multidimensional data structures.

Potential Commercial Applications

Potential commercial applications of this technology include data analytics platforms, AI systems, scientific computing software, and any application that deals with complex data structures.

Possible Prior Art

Possible prior art includes existing methods for accessing multidimensional arrays in computer memory using various loading techniques and algorithms.

Unanswered Questions

How does this method compare to existing tensor access techniques in terms of speed and efficiency?

This article does not provide a direct comparison with existing techniques, so it is unclear how this method stacks up against other approaches in terms of performance and efficiency.

Are there any limitations or constraints to implementing this method in real-world applications?

The article does not address any potential limitations or constraints that may arise when implementing this method in practical, real-world scenarios.


Original Abstract Submitted

The present disclosure relates to a method of accessing a n-dimensional tensor of elements in a memory by a computer system. The multidimensional tensor comprises two-dimensional arrays, herein referred to as pages, each page being configured to comprise a predefined number of one-dimensional arrays of elements, herein referred to as sticks. The method includes linearly loading page per page of the tensor, and doing the following for each page: loading the non-empty sticks of the page from the memory using a base address of the page and determining a base address for the subsequent page using the number of loaded sticks and using an address offset indicative of potential empty sticks of the page. In case the number of loaded pages reaches a chunk size, the chunk page counter may be reinitialized and the linear loading may be continued with a subsequent page.
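
As a rough reading of the base-address step in the abstract, the base address of the subsequent page can be thought of as the current base address plus the bytes covered by the loaded sticks plus an offset for the page's empty sticks. The helper below is a hedged illustration; the stick size and the function name are assumptions, not details from the application.

# Illustrative only: deriving a subsequent page's base address from the
# number of loaded sticks and an address offset for empty sticks.
# STICK_BYTES and next_page_base are assumed names, not from the application.

STICK_BYTES = 128  # assumed stick size in bytes

def next_page_base(page_base, loaded_sticks, empty_stick_offset):
    """Base address of the subsequent page: the current base address
    plus the bytes of the loaded (non-empty) sticks plus an address
    offset covering the page's potential empty sticks."""
    return page_base + loaded_sticks * STICK_BYTES + empty_stick_offset

# Example: a page at 0x1000 with 24 loaded sticks and 8 empty sticks
print(hex(next_page_base(0x1000, 24, 8 * STICK_BYTES)))  # 0x2000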