18451775. Performance and Power Balanced Cache Partial Power Down Policy simplified abstract (MediaTek Inc.)
Contents
- 1 Performance and Power Balanced Cache Partial Power Down Policy
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 Performance and Power Balanced Cache Partial Power Down Policy - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
Performance and Power Balanced Cache Partial Power Down Policy
Organization Name
MediaTek Inc.
Inventor(s)
Yu-Pin Chen of Hsinchu City (TW)
Jia-Ming Chen of Hsinchu City (TW)
Chien-Yuan Lai of Hsinchu City (TW)
Ya Ting Chang of Hsinchu City (TW)
Cheng-Tse Chen of Hsinchu City (TW)
Performance and Power Balanced Cache Partial Power Down Policy - A simplified explanation of the abstract
This abstract first appeared for US patent application 18451775 titled 'Performance and Power Balanced Cache Partial Power Down Policy'.
Simplified Explanation
The computing system described in this patent application performs partial cache deactivation: it estimates the leakage power of a cache from operating conditions such as voltage and temperature, identifies a region of the cache as a deactivation candidate based on cache hit counts, and adjusts the size of that region based on the leakage power and the bandwidth of the next-level memory hierarchy device.
- The computing system estimates the leakage power of a cache based on voltage and temperature.
- A region of the cache is identified for deactivation based on cache hit counts.
- The size of the region for deactivation is adjusted based on leakage power and memory hierarchy device bandwidth.
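The three steps above can be sketched in code. This is a hypothetical illustration, not the patent's implementation: the leakage formula, the thresholds, and all function names are placeholder assumptions; the patent only states that leakage is estimated from voltage and temperature and that the region size is tuned against next-level bandwidth.

```python
def estimate_leakage_power(voltage_v: float, temp_c: float) -> float:
    """Toy leakage model: leakage grows with voltage and temperature.
    The coefficients here are arbitrary placeholders."""
    return 0.05 * voltage_v ** 2 * (1.0 + 0.02 * (temp_c - 25.0))

def pick_candidate_region(hit_counts: list[int]) -> int:
    """Choose the cache region (e.g., a way or bank) with the fewest hits
    as the deactivation candidate."""
    return min(range(len(hit_counts)), key=lambda i: hit_counts[i])

def adjust_region_size(base_size: int, leakage_w: float,
                       next_level_bw: float,
                       leakage_threshold: float = 0.1,
                       bw_threshold: float = 10.0) -> int:
    """Grow the powered-down region when leakage is high and the next-level
    memory device has bandwidth to absorb the extra misses; shrink it when
    leakage is low or that bandwidth is scarce. Thresholds are illustrative."""
    if leakage_w > leakage_threshold and next_level_bw > bw_threshold:
        return base_size * 2                # deactivate more of the cache
    if leakage_w < leakage_threshold / 2 or next_level_bw < bw_threshold / 2:
        return max(base_size // 2, 1)       # keep more of the cache powered
    return base_size

# Example: hot chip at nominal voltage, ample next-level bandwidth.
leak = estimate_leakage_power(voltage_v=0.9, temp_c=85.0)
region = pick_candidate_region([120, 5, 340, 88])   # region 1 is coldest
size = adjust_region_size(base_size=2, leakage_w=leak, next_level_bw=16.0)
print(region, round(leak, 4), size)
```

The key design point the patent highlights is the last function: deactivating cache pushes misses to the next memory level, so the policy only enlarges the powered-down region when that device's bandwidth can tolerate the added traffic.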
Potential Applications
This technology could be applied in various computing systems and devices where power optimization is crucial, such as mobile devices, IoT devices, and data centers.
Problems Solved
This technology helps in reducing power consumption and improving energy efficiency in computing systems by selectively deactivating cache regions based on usage patterns and operating conditions.
Benefits
The benefits of this technology include lower power consumption, extended battery life for mobile devices, improved performance in data centers, and overall energy efficiency in computing systems.
Potential Commercial Applications
Potential commercial applications of this technology include mobile devices, IoT devices, servers, and other computing systems where power optimization is a key factor for performance and efficiency.
Possible Prior Art
Possible prior art includes existing techniques for cache management and power optimization in computing systems; however, the specific approach of estimating leakage power and sizing the deactivated region based on the bandwidth of the next-level memory hierarchy device may be novel.
Unanswered Questions
How does this technology impact overall system performance?
The article does not examine the trade-off between the power saved by deactivating cache regions and the performance cost of serving the resulting misses from the next-level memory device. It would be interesting to explore how this technology balances power efficiency with computing performance.
What are the potential limitations of this technology in real-world applications?
The article does not address any potential limitations or challenges that may arise when implementing this technology in practical computing systems. It would be valuable to investigate any constraints or drawbacks that could affect the effectiveness of this approach.
Original Abstract Submitted
A computing system performs partial cache deactivation. The computing system estimates the leakage power of a cache based on operating conditions of the cache including voltage and temperature. The computing system further identifies a region of the cache as a candidate for deactivation based on cache hit counts. The computing system then adjusts the size of the region for the deactivation based on the leakage power and a bandwidth of a memory hierarchy device. The memory hierarchy device is at the next level to the cache in a memory hierarchy of the computing system.