17987185. INCREASING PROCESSING RESOURCES IN PROCESSING CORES OF A GRAPHICS ENVIRONMENT simplified abstract (Intel Corporation)

INCREASING PROCESSING RESOURCES IN PROCESSING CORES OF A GRAPHICS ENVIRONMENT

Organization Name

Intel Corporation

Inventor(s)

Jiasheng Chen of El Dorado Hills CA (US)

Chunhui Mei of San Diego CA (US)

Ben J. Ashbaugh of Folsom CA (US)

Naveen Matam of Folsom CA (US)

Joydeep Ray of Folsom CA (US)

Timothy Bauer of Hillsboro OR (US)

Guei-Yuan Lueh of San Jose CA (US)

Vasanth Ranganathan of El Dorado Hills CA (US)

Prashant Chaudhari of Folsom CA (US)

Vikranth Vemulapalli of Folsom CA (US)

Nishanth Reddy Pendluru of Folsom CA (US)

Piotr Reiter of Gdansk (PL)

Jain Philip of Bangalore (IN)

Marek Rudniewski of Gdansk (PL)

Christopher Spencer of Chuluota FL (US)

Parth Damani of Folsom CA (US)

Prathamesh Raghunath Shinde of Folsom CA (US)

John Wiegert of Aloha OR (US)

Fataneh Ghodrat of Hudson MA (US)

INCREASING PROCESSING RESOURCES IN PROCESSING CORES OF A GRAPHICS ENVIRONMENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 17987185 titled 'INCREASING PROCESSING RESOURCES IN PROCESSING CORES OF A GRAPHICS ENVIRONMENT'.

Simplified Explanation

The patent application describes an apparatus for increasing the processing resources available within the processing cores of a graphics environment. The apparatus combines multiple processing resources with message arbiter-processing resource (MA-PR) routers, local shared cache (LSC) sequencers, and instruction caches (ICs).

  • Multiple processing resources execute one or more execution threads.
  • Message arbiter-processing resource (MA-PR) routers, one per pair of processing resources, arbitrate the routing of thread control messages from the message arbiter between the two resources of the pair.
  • Local shared cache (LSC) sequencers provide an interface between at least one local shared cache of the processing core and the processing resources.
  • Instruction caches (ICs) store the instructions of the execution threads, and each cache interfaces with a portion of the processing resources.
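
The abstract describes hardware, not software, so the following is only a rough, illustrative software model of the components listed above. The type names (ThreadControlMessage, ProcessingResource, MaPrRouter) and the parity-based arbitration rule are assumptions introduced for illustration and do not appear in the patent; the sketch simply shows one way a per-pair router could arbitrate thread control messages from a message arbiter between two processing resources.

    // Illustrative-only model of the apparatus; names and the arbitration
    // rule are assumptions, not taken from the patent.
    #include <cstdio>
    #include <vector>

    struct ThreadControlMessage {
        int thread_id;   // thread the control message targets
        int payload;     // opaque control payload
    };

    struct ProcessingResource {
        int id;
        void accept(const ThreadControlMessage& m) {
            std::printf("PR %d handles control message for thread %d\n", id, m.thread_id);
        }
    };

    // One router per pair of processing resources: it decides which member of
    // its pair receives a thread control message from the message arbiter.
    struct MaPrRouter {
        ProcessingResource* pair[2];
        void route(const ThreadControlMessage& m) {
            pair[m.thread_id % 2]->accept(m);   // toy arbitration: route by thread parity
        }
    };

    int main() {
        std::vector<ProcessingResource> prs = {{0}, {1}, {2}, {3}};
        std::vector<MaPrRouter> routers = {{{&prs[0], &prs[1]}},
                                           {{&prs[2], &prs[3]}}};
        ThreadControlMessage msg{7, 0};
        routers[0].route(msg);   // thread 7 is odd, so this router picks PR 1
        return 0;
    }

In an actual device each router, sequencer, and cache would be fixed-function hardware; the model above only mirrors the relationships among the components (a router per pair of resources, a shared message arbiter as the message source), not their implementation.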

Potential Applications

This technology could be applied in graphics processing units (GPUs), gaming consoles, virtual reality systems, and high-performance computing systems.

Problems Solved

This technology solves the problem of limited processing resources in graphics environments, enabling faster and more efficient execution of multiple threads.

Benefits

The benefits of this technology include improved performance, enhanced multitasking capabilities, reduced latency, and overall better user experience in graphics-intensive applications.

Potential Commercial Applications

Potential commercial applications of this technology include graphics cards for gaming, data centers for high-performance computing, and virtual reality systems for immersive experiences.

Possible Prior Art

Possible prior art includes the use of multi-core processors in computing systems to increase processing power and efficiency.

Unanswered Questions

How does this technology compare to existing solutions in terms of performance and scalability?

This article does not provide a direct comparison with existing solutions, leaving the reader wondering about the advantages of this technology over current options.

What are the potential limitations or drawbacks of implementing this technology in real-world applications?

The article does not address any potential limitations or drawbacks of implementing this technology, leaving the reader curious about any challenges that may arise during practical use.


Original Abstract Submitted

An apparatus to facilitate increasing processing resources in processing cores of a graphics environment is disclosed. The apparatus includes a plurality of processing resources to execute one or more execution threads; a plurality of message arbiter-processing resource (MA-PR) routers, wherein a respective MA-PR router of the plurality of MA-PR routers corresponds to a pair of processing resources of the plurality of processing resources and is to arbitrate routing of a thread control message from a message arbiter between the pair of processing resources; a plurality of local shared cache (LSC) sequencers to provide an interface between at least one LSC of the processing core and the plurality of processing resources; and a plurality of instruction caches (ICs) to store instructions of the one or more execution threads, wherein a respective IC of the plurality of ICs interfaces with a portion of the plurality of processing resources.