18508356. CACHE SIZE CHANGE simplified abstract (TEXAS INSTRUMENTS INCORPORATED)

From WikiPatents

CACHE SIZE CHANGE

Organization Name

TEXAS INSTRUMENTS INCORPORATED

Inventor(s)

Abhijeet Ashok Chachad of Plano TX (US)

Naveen Bhoria of Plano TX (US)

David Matthew Thompson of Dallas TX (US)

Neelima Muralidharan of Murphy TX (US)

CACHE SIZE CHANGE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18508356 titled 'CACHE SIZE CHANGE'.

Simplified Explanation

The abstract describes a method in which a level one (L1) controller changes the size of an L1 main cache by servicing pending read and write requests from a CPU core, stalling new requests from the CPU core, and writing back and invalidating the L1 main cache. A level two (L2) controller then receives an indication that the L1 main cache has been invalidated, flushes its pipeline, stalls requests from any master, and reinitializes a shadow L1 main cache.

  • Determining and changing L1 main cache size by L1 controller
  • Servicing pending read and write requests from CPU core by L1 controller
  • Stalling new requests from CPU core by L1 controller
  • Writing back and invalidating L1 main cache by L1 controller
  • Receiving L1 main cache invalidation indication by L2 controller
  • Flushing pipeline and stalling requests by L2 controller
  • Reinitializing shadow L1 main cache by L2 controller
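The L1 controller's side of the sequence above can be sketched as a minimal simulation. This is an illustrative model only, not code from the patent; the class and attribute names (L1Controller, pending, lines) are hypothetical, and the cache is stood in for by a dictionary.

```python
class L1Controller:
    """Hypothetical model of the L1 controller's cache-resize sequence."""

    def __init__(self, cache_size_kb):
        self.cache_size_kb = cache_size_kb
        self.pending = []     # read/write requests already accepted from the CPU core
        self.stalled = False  # when True, new CPU requests are held off
        self.lines = {}       # tag -> data, stands in for the L1 main cache
        self.l2 = None        # optional L2 controller notified of invalidation

    def resize(self, new_size_kb):
        # 1. Service all pending read and write requests from the CPU core.
        while self.pending:
            self.pending.pop(0)()  # each entry is a callable completing one request
        # 2. Stall new read and write requests from the CPU core.
        self.stalled = True
        # 3. Write back and invalidate the L1 main cache (writeback elided here;
        #    invalidation is modeled by discarding all lines).
        self.lines.clear()
        # 4. Apply the new cache size and signal L2 that L1 has been invalidated.
        self.cache_size_kb = new_size_kb
        if self.l2 is not None:
            self.l2.on_l1_invalidated(new_size_kb)
        self.stalled = False
```

The ordering matters: pending requests drain before the stall, so no accepted request observes the cache mid-resize, and the invalidation indication reaches L2 only after L1 is fully clean.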

Potential Applications

The technology described in the patent application could be applied in computer systems, processors, and cache management systems.

Problems Solved

This technology helps in efficiently managing cache sizes, handling pending requests, and ensuring data integrity in multi-level cache systems.

Benefits

The benefits of this technology include improved performance, reduced latency, optimized cache utilization, and enhanced system reliability.

Potential Commercial Applications

Potential commercial applications of this technology include high-performance computing systems, server infrastructure, data centers, and other computing devices requiring efficient cache management.

Possible Prior Art

Possible prior art includes existing cache management techniques in computer systems, processors, and memory hierarchies.

Unanswered Questions

How does this technology impact overall system performance?

This technology can potentially improve system performance by optimizing cache utilization and reducing data access latency.

What are the potential challenges in implementing this technology in real-world systems?

Some potential challenges in implementing this technology could include compatibility issues with existing hardware and software, integration complexities, and performance trade-offs that need to be carefully balanced.


Original Abstract Submitted

A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.
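The L2 controller's response described in the abstract, i.e. flushing its pipeline, stalling requests from any master, and reinitializing the shadow L1 main cache by clearing its previous contents and changing its size, can be sketched the same way. Again, this is a hypothetical model; the names (L2Controller, shadow_l1, pipeline) are illustrative, not from the patent.

```python
class L2Controller:
    """Hypothetical model of the L2 controller's reaction to L1 invalidation."""

    def __init__(self, shadow_l1_size_kb):
        self.pipeline = []   # in-flight transactions, each modeled as a callable
        self.stalled = False
        self.shadow_l1_size_kb = shadow_l1_size_kb
        self.shadow_l1 = {}  # L2's shadow copy of the L1 main cache state

    def on_l1_invalidated(self, new_l1_size_kb):
        # Receiving the indication that L1 has been invalidated:
        # flush the L2 pipeline.
        while self.pipeline:
            self.pipeline.pop(0)()
        # With the pipeline flushed, stall requests received from any master.
        self.stalled = True
        # Reinitialize the shadow L1 main cache: clear previous contents
        # and change its size to match the resized L1.
        self.shadow_l1.clear()
        self.shadow_l1_size_kb = new_l1_size_kb
        self.stalled = False
```

The shadow cache must be rebuilt rather than patched because, after a resize, L1's set indexing changes and the old shadow tags no longer correspond to valid L1 locations.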