18392310. METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSIBLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS simplified abstract (Intel Corporation)


METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSIBLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS

Organization Name

Intel Corporation

Inventor(s)

Yao Zu Dong of Shanghai (CN)

Kun Tian of Shanghai (CN)

Fengguang Wu of Tengchong (CN)

Jingqi Liu of Shanghai (CN)

METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSIBLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18392310 titled 'METHOD AND APPARATUS TO USE DRAM AS A CACHE FOR SLOW BYTE-ADDRESSIBLE MEMORY FOR EFFICIENT CLOUD APPLICATIONS'.

Simplified Explanation

The abstract describes a method for identifying frequently accessed memory pages in virtualized systems and migrating them to faster memory to improve access speed; a rough code sketch follows the list below.

  • Virtualized systems: The technology involves virtual machines running on a processor.
  • Memory page migration: Pages are moved from slower memory to faster memory based on access patterns.
  • Improved performance: By moving frequently accessed pages to faster memory, overall system performance is enhanced.
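The following is a minimal Python sketch of this general hot-page migration pattern, not the patented implementation: it assumes a simple two-tier model in which a hypothetical per-page access counter stands in for scanning accessed bits in page table entries, and a fixed-capacity "fast" (DRAM) pool holds promoted pages.

<pre>
from collections import defaultdict

HOT_THRESHOLD = 8  # hypothetical: accesses per sampling window to count a page as "hot"

class TwoTierMemory:
    """Toy model: pages live in a slow tier or a fast (DRAM) tier."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast_pages = set()             # page numbers currently in fast memory
        self.access_counts = defaultdict(int)

    def record_access(self, page):
        """Stand-in for sampling accessed bits in page table entries."""
        self.access_counts[page] += 1

    def migrate_hot_pages(self):
        """Promote frequently accessed pages to the fast tier, demoting cold ones."""
        hot = [p for p, n in self.access_counts.items()
               if n >= HOT_THRESHOLD and p not in self.fast_pages]
        hot.sort(key=lambda p: self.access_counts[p], reverse=True)
        for page in hot:
            if len(self.fast_pages) >= self.fast_capacity:
                coldest = min(self.fast_pages, key=lambda p: self.access_counts[p])
                self.fast_pages.remove(coldest)   # demote the coldest fast page
            self.fast_pages.add(page)             # promote the hot page to DRAM
        self.access_counts.clear()                # start a new sampling window
</pre>

In a real hypervisor the counts would come from periodically scanning (and clearing) the accessed bits of the guest's page table entries, and promotion would also copy the page contents and update the guest-to-host memory mapping rather than just moving a page number between sets.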

Potential Applications

This technology can be applied in cloud computing environments, data centers, and virtualized servers to optimize memory usage and improve system performance.

Problems Solved

1. Slow access speed to memory pages in virtualized systems.
2. Inefficient memory management in virtualized environments.

Benefits

1. Enhanced system performance.
2. Improved memory utilization.
3. Increased efficiency in virtualized systems.

Potential Commercial Applications

  • Optimizing memory usage in cloud computing services.
  • Improving performance in virtualized servers.
  • Enhancing data center efficiency.

Possible Prior Art

Possible prior art includes existing memory management techniques for virtualized systems that optimize memory usage and access speed.

Unanswered Questions

How does this technology impact energy consumption in virtualized systems?

This article does not address the potential impact of memory page migration on energy consumption in virtualized systems.

What are the potential security implications of migrating memory pages in virtualized environments?

The article does not discuss the security implications of moving memory pages between different memory types in virtualized systems.


Original Abstract Submitted

Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
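To make the claimed sequence concrete, here is a small Python sketch that mirrors the steps in the abstract: pick a frequently accessed guest page backed by the slower byte-addressable memory, pause the VM, migrate the page to the faster byte-addressable memory, then resume. The hypervisor hooks (pause, copy_page, resume) and the data structures are hypothetical placeholders, since the application does not publish an API.

<pre>
from dataclasses import dataclass, field

ACCESS_THRESHOLD = 4  # hypothetical cutoff for "frequently accessed"

@dataclass
class PageTableEntry:
    guest_page: int        # guest page number
    tier: str              # "slow" or "fast" byte-addressable memory
    access_count: int = 0  # accesses observed for this entry

@dataclass
class PageTable:
    entries: list = field(default_factory=list)

def promote_hot_page(vm, page_table):
    """Identify a hot guest page in slow memory, pause the VM, migrate the
    page to the faster memory, and resume execution."""
    # 1. Identify a guest page by the number of accesses to its page table entry.
    candidates = [e for e in page_table.entries
                  if e.tier == "slow" and e.access_count >= ACCESS_THRESHOLD]
    if not candidates:
        return None
    victim = max(candidates, key=lambda e: e.access_count)

    # 2. Pause execution of the VM and the application running inside it.
    vm.pause()                      # hypothetical hypervisor hook
    try:
        # 3. Migrate the guest page to a target page in the faster memory.
        vm.copy_page(victim.guest_page, src_tier="slow", dst_tier="fast")
        victim.tier = "fast"        # the entry now points at the fast-memory frame
        victim.access_count = 0
    finally:
        # 4. Resume the VM once the mapping is consistent.
        vm.resume()                 # hypothetical hypervisor hook
    return victim.guest_page
</pre>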