18464068. In-Memory Distributed Cache simplified abstract (GOOGLE LLC)

From WikiPatents


Organization Name

GOOGLE LLC

Inventor(s)

Asa Briggs of Zurich (CH)

In-Memory Distributed Cache - A simplified explanation of the abstract

This abstract first appeared for US patent application 18464068, titled 'In-Memory Distributed Cache'.

Simplified Explanation

The abstract describes a method for an in-memory distributed cache that lets a client device write data directly to the random access memory (RAM) of a memory host. In simplified terms:

  • A client device sends a write request to store a block of client data in the RAM of a memory host.
  • The method determines whether the client device has permission to write that block of client data at the memory host.
  • It also checks whether the block of client data is already saved at the memory host and whether a free block of RAM is available.
  • If all three conditions hold (the client has permission, the block is not already saved, and a free block of RAM exists), the write request is allowed and the client writes the block of client data to the free block of RAM.
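The admission checks above can be sketched in a few lines of Python. This is an illustrative model only, not the patented implementation; the names (MemoryHost, write_request, capacity) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryHost:
    """Toy model of a memory host holding a fixed number of RAM blocks."""
    capacity: int                                      # total RAM blocks available
    permissions: set = field(default_factory=set)      # client ids allowed to write
    blocks: dict = field(default_factory=dict)         # key -> stored block data

    def write_request(self, client_id: str, key: str, data: bytes) -> bool:
        """Allow the write only if all three conditions from the abstract hold."""
        if client_id not in self.permissions:          # 1. client lacks permission
            return False
        if key in self.blocks:                         # 2. block already saved here
            return False
        if len(self.blocks) >= self.capacity:          # 3. no free block of RAM
            return False
        self.blocks[key] = data                        # write into the free block
        return True
```

A quick usage example: a permitted client's first write succeeds, a duplicate write of the same block is refused, and an unknown client is refused regardless of capacity.

```python
host = MemoryHost(capacity=2, permissions={"client-a"})
host.write_request("client-a", "k1", b"data")   # allowed
host.write_request("client-a", "k1", b"data")   # refused: already saved
host.write_request("client-b", "k2", b"data")   # refused: no permission
```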

Potential applications of this technology:

  • Distributed databases: This method can be used in distributed databases to improve performance by caching frequently accessed data in memory.
  • Content delivery networks (CDNs): CDNs can utilize this method to cache popular content closer to end-users, reducing latency and improving content delivery speed.
  • Real-time analytics: In-memory caching can be beneficial for real-time analytics platforms, allowing faster data processing and analysis.

Problems solved by this technology:

  • Performance optimization: By caching data in memory, the method reduces the need to retrieve data from slower storage devices, improving overall system performance.
  • Scalability: The distributed nature of the cache allows for scaling across multiple memory hosts, accommodating larger amounts of data and increasing system capacity.
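The abstract does not specify how data blocks are distributed across memory hosts; one common technique for this kind of scaling is hash-based placement, where each block's key deterministically selects a host. The sketch below is an assumption for illustration, not part of the patented method.

```python
import hashlib


def host_for_key(key: str, hosts: list) -> str:
    """Deterministically map a block key to one of N memory hosts.

    Uses a cryptographic hash so keys spread roughly evenly across hosts;
    simple modulo placement, not part of the patent's claims.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(hosts)
    return hosts[index]
```

Because the mapping depends only on the key and the host list, any client can compute which host holds a given block without a central lookup, which is one way a distributed cache accommodates more hosts as data grows.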

Benefits of this technology:

  • Faster data access: Storing data in memory enables quicker retrieval and processing, leading to reduced latency and improved response times.
  • Improved system performance: By reducing the reliance on slower storage devices, the method enhances the overall performance of the system.
  • Scalability and flexibility: The distributed cache can be easily scaled by adding more memory hosts, allowing for increased storage capacity and accommodating growing data demands.


Original Abstract Submitted

A method for an in-memory distributed cache includes receiving a write request from a client device to write a block of client data in random access memory (RAM) of a memory host and determining whether to allow the write request by determining whether the client device has permission to write the block of client data at the memory host, determining whether the block of client data is currently saved at the memory host, and determining whether a free block of RAM is available. When the client device has permission to write the block of client data at the memory host, the block of client data is not currently saved at the memory host, and a free block of RAM is available, the write request is allowed and the client is allowed to write the block of client data to the free block of RAM.