18086469. APPLICATION PROGRAMMING INTERFACE TO GENERATE A TENSOR ACCORDING TO A TENSOR MAP simplified abstract (NVIDIA Corporation)

APPLICATION PROGRAMMING INTERFACE TO GENERATE A TENSOR ACCORDING TO A TENSOR MAP

Organization Name

NVIDIA Corporation

Inventor(s)

Harold Carter Edwards of Campbell CA (US)

Stephen Anthony Bernard Jones of San Francisco CA (US)

Alexander Lev Minkin of Los Altos CA (US)

Olivier Giroux of Santa Clara CA (US)

Gokul Ramaswamy Hirisave Chandra Shekhara of Bangalore (IN)

Vishalkumar Ketankumar Mehta of Stäfa (CH)

Aditya Avinash Atluri of Redmond WA (US)

Apoorv Parle of Santa Clara CA (US)

Chao Li of Austin TX (US)

Ronny Meir Krashinsky of Portola Valley CA (US)

Alan Kaatz of Seattle WA (US)

Andrew Robert Kerr of Atlanta GA (US)

Jack H. Choquette of Palo Alto CA (US)

APPLICATION PROGRAMMING INTERFACE TO GENERATE A TENSOR ACCORDING TO A TENSOR MAP - A simplified explanation of the abstract

This abstract first appeared for US patent application 18086469 titled 'APPLICATION PROGRAMMING INTERFACE TO GENERATE A TENSOR ACCORDING TO A TENSOR MAP'.

Simplified Explanation

The patent application describes apparatuses, systems, and techniques for translating a first tensor into a second tensor according to a tensor map without storing information about the memory transaction corresponding to the translation.

  • One or more circuits execute an application programming interface (API) that translates the first tensor into the second tensor according to the tensor map, without storing information about the corresponding memory transactions.
  • Because no memory-transaction details need to be retained, the approach reduces memory overhead and improves the performance of tensor translation (an illustrative sketch follows this list).
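As an illustration of the kind of interface described above, the sketch below encodes a tensor map on the host using the CUDA driver call cuTensorMapEncodeTiled (available since CUDA 12). The patent text does not name this or any particular API; the tensor shape, tile size, and option values here are illustrative assumptions, not the claimed implementation.

    // Hypothetical host-side sketch (assumed CUDA 12+ driver API): encode a
    // tensor map describing 64 x 64 tiles of a 1024 x 1024 float tensor.
    #include <cuda.h>
    #include <cstdio>

    #define CHECK(call)                                                      \
      do {                                                                   \
        CUresult err_ = (call);                                              \
        if (err_ != CUDA_SUCCESS) {                                          \
          std::fprintf(stderr, "CUDA error %d at line %d\n",                 \
                       static_cast<int>(err_), __LINE__);                    \
          return 1;                                                          \
        }                                                                    \
      } while (0)

    int main() {
      CHECK(cuInit(0));
      CUdevice dev;
      CHECK(cuDeviceGet(&dev, 0));
      CUcontext ctx;
      CHECK(cuCtxCreate(&ctx, 0, dev));

      // Source ("first") tensor: 1024 x 1024 floats in global memory.
      const cuuint64_t rows = 1024, cols = 1024;
      CUdeviceptr src;
      CHECK(cuMemAlloc(&src, rows * cols * sizeof(float)));

      // The tensor map records layout and tiling once; kernels that use it do
      // not have to track the individual memory transactions it implies.
      CUtensorMap tensorMap;
      cuuint64_t globalDim[2]      = {cols, rows};            // fastest dim first
      cuuint64_t globalStrides[1]  = {cols * sizeof(float)};  // byte stride, dim 1
      cuuint32_t boxDim[2]         = {64, 64};                // tile ("box") shape
      cuuint32_t elementStrides[2] = {1, 1};                  // dense tiles

      CHECK(cuTensorMapEncodeTiled(
          &tensorMap, CU_TENSOR_MAP_DATA_TYPE_FLOAT32, /*tensorRank=*/2,
          reinterpret_cast<void*>(src), globalDim, globalStrides, boxDim,
          elementStrides, CU_TENSOR_MAP_INTERLEAVE_NONE,
          CU_TENSOR_MAP_SWIZZLE_NONE, CU_TENSOR_MAP_L2_PROMOTION_NONE,
          CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE));

      // A kernel would receive `tensorMap` (e.g. as a __grid_constant__
      // parameter) and let the hardware materialize each tile -- the "second"
      // tensor -- in shared memory according to the map.
      std::printf("tensor map encoded\n");

      CHECK(cuMemFree(src));
      CHECK(cuCtxDestroy(ctx));
      return 0;
    }

Because the map is an opaque, self-contained descriptor, a kernel that consumes it can let the hardware carry out the tile transfers without the program recording per-transaction bookkeeping, which is the behavior the abstract describes.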

Potential Applications

This technology could be applied in fields such as machine learning, artificial intelligence, image processing, and data analysis, where tensor operations are common.

Problems Solved

This technology solves the problem of memory overhead and performance issues associated with storing information about memory transactions during tensor translation processes.

Benefits

The benefits of this technology include improved efficiency, reduced memory usage, and enhanced performance in tensor translation tasks.

Potential Commercial Applications

Potential commercial applications of this technology could include optimizing neural network operations, accelerating image processing algorithms, and enhancing data analysis tools.

Possible Prior Art

Possible prior art includes tensor-operation techniques in machine learning frameworks that store memory transaction details during tensor translation.

Unanswered Questions

1. How does this technology compare to existing methods of tensor translation in terms of speed and memory usage?

2. Are there any limitations or constraints when applying this technology to large-scale tensor operations?


Original Abstract Submitted

Apparatuses, systems, and techniques to cause a first tensor to be translated into a second tensor according to a tensor map without storing information about a memory transaction corresponding to the translation. In at least one embodiment, one or more circuits are to perform an application programming interface (API) to cause a first tensor to be translated into a second tensor according to a tensor map without storing information about one or more memory transactions corresponding to the translation.