- LRU (Least Recently Used): LRU removes the item that has gone the longest without being accessed when the cache is full. It assumes that recently used items are likely to be used again soon, which makes it a strong default for many workloads.
- FIFO (First-In, First-Out): In this policy, the first item that was added to the cache is the first one to be removed when the cache is full. It is simple to implement but may not perform well when old items are still heavily used.
- LFU (Least Frequently Used): LFU keeps track of how often each item in the cache is accessed and removes the item with the lowest access frequency when the cache is full. It aims to evict the items that are used the least (a minimal sketch follows this list).
- MRU (Most Recently Used): MRU removes the most recently accessed item when the cache is full. This is the opposite of LRU and is useful in scenarios such as sequential scans, where the item just touched is the one least likely to be needed again.
- Random Replacement: In this policy, a random item is removed when it is time to evict. It is simple, but because it ignores usage patterns it may not perform well in practice.
- ARC (Adaptive Replacement Cache): ARC is a self-tuning cache replacement algorithm that combines elements of both LRU and LFU. It dynamically adjusts its replacement policy based on the recent access patterns of items in the cache.
- 2Q (Two Queues): The 2Q algorithm maintains two separate queues for recently and frequently used items. It aims to strike a balance between evicting old items and retaining frequently accessed ones.
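The gist implements LRU below; as a point of contrast, here is a minimal LFU sketch (not from the original gist, and kept deliberately naive): it tracks an access count per key and evicts the key with the lowest count. The linear eviction scan keeps the sketch short; real LFU implementations use frequency buckets to avoid it.

```javascript
// Minimal LFU sketch: one Map for values, one for access counts.
class LFUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.values = new Map() // key -> value
    this.counts = new Map() // key -> access frequency
  }

  get(key) {
    if (!this.values.has(key)) return -1
    this.counts.set(key, this.counts.get(key) + 1) // bump frequency on every hit
    return this.values.get(key)
  }

  put(key, value) {
    if (this.capacity <= 0) return
    if (!this.values.has(key) && this.values.size >= this.capacity) {
      // Evict the least frequently used key (linear scan for brevity)
      let lfuKey = null
      let lfuCount = Infinity
      for (const [k, c] of this.counts) {
        if (c < lfuCount) {
          lfuCount = c
          lfuKey = k
        }
      }
      this.values.delete(lfuKey)
      this.counts.delete(lfuKey)
    }
    this.values.set(key, value)
    this.counts.set(key, (this.counts.get(key) ?? 0) + 1)
  }
}
```

The LRU implementation from the gist follows: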
```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity
    this.cache = new Map()
  }

  get(key) {
    if (this.cache.has(key)) {
      // Re-insert the key so it moves to the end of the Map's
      // insertion order, marking it as most recently used
      const value = this.cache.get(key)
      this.cache.delete(key)
      this.cache.set(key, value)
      return value
    }
    return -1 // Key not found
  }

  put(key, value) {
    if (this.cache.has(key)) {
      // If the key already exists, delete it so the re-insert below
      // moves it to the most recently used position
      this.cache.delete(key)
    } else if (this.cache.size >= this.capacity) {
      // At capacity: evict the least recently used item, which is
      // the first key in the Map's insertion order
      const firstKey = this.cache.keys().next().value
      this.cache.delete(firstKey)
    }
    // Insert (or re-insert) the key-value pair as most recently used
    this.cache.set(key, value)
  }
}

// Example usage:
const lruCache = new LRUCache(3) // Create an LRU cache with a capacity of 3
lruCache.put(1, 1)
lruCache.put(2, 2)
lruCache.put(3, 3)
console.log(lruCache.get(1)) // Output: 1
console.log(lruCache.cache) // Output: Map { 2 => 2, 3 => 3, 1 => 1 }
lruCache.put(4, 4) // At capacity, so the least recently used key (2) is evicted
console.log(lruCache.get(2)) // Output: -1 (key 2 has been removed from the cache)
console.log(lruCache.cache) // Output: Map { 3 => 3, 1 => 1, 4 => 4 }
```
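Note that this implementation leans on a property of JavaScript's Map: keys iterate in insertion order. Deleting and re-inserting a key on every access moves it to the end of that order, so the first key is always the least recently used, and `get`, `put`, and eviction all run in O(1) amortized time.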
"key: value" pairs of data that are stored in a cache. The cache is a temporary storage location that can be accessed quickly. Caches are used to improve performance by reducing the need to fetch data from slower, longer-term storage locations, such as databases or disk drives.
A cache is a high-speed data storage layer that temporarily stores a subset of data, typically recently or frequently accessed data, in order to serve future requests more quickly. The primary purpose of a cache is to reduce data retrieval latency and improve overall system performance by reducing the need to fetch data from slower, longer-term storage locations, such as databases or disk drives. Caches are used in various computing systems, including enterprise systems, to optimize data access and improve application responsiveness.
In enterprise systems, caches are used for a variety of purposes, including:
- Improving Performance: One of the primary purposes of caches is to accelerate data access. By storing frequently used data in a cache, applications can retrieve it more quickly, reducing the time required to access and process the data. This can lead to significant performance improvements in enterprise applications.
- Reducing Database Load: Caching can help reduce the load on backend databases, especially when read requests far outnumber writes. By serving frequently requested data from a cache, databases handle fewer read requests, freeing up resources for other operations (see the cache-aside sketch after this list).
- Minimizing Network Traffic: In distributed enterprise systems, caches can be used to store data locally, reducing the need to fetch data over the network. This helps minimize network latency and bandwidth usage.
- Enhancing Scalability: Caching can improve the scalability of enterprise systems. By reducing the workload on backend resources, such as databases or web services, caches enable applications to handle more concurrent users or requests.
- Enabling Offline Access: In some cases, caches are used to enable offline access to data. Cached data can be accessed even when the application is not connected to the network, which can be valuable in certain enterprise scenarios.
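As a rough illustration of the database-offloading point above, here is a cache-aside (lazy loading) sketch. The `cache` Map and `db.fetchUser` below are hypothetical in-memory stand-ins, not any specific library's API:

```javascript
// Cache-aside: check the cache first, fall back to the database on a miss,
// then populate the cache so subsequent readers skip the database entirely.
const cache = new Map() // stand-in for Redis, Memcached, etc.
const db = {
  async fetchUser(id) {
    // Stand-in for an expensive database query
    return { id, name: `user-${id}` }
  },
}

async function getUser(id) {
  const key = `user:${id}`
  if (cache.has(key)) {
    return cache.get(key) // cache hit: no database round trip
  }
  const user = await db.fetchUser(id) // cache miss: one database read
  cache.set(key, user)
  return user
}

// The first call reads from the "database"; the second is served from the cache.
console.log(await getUser(42))
console.log(await getUser(42))
```

A real deployment would also give cached entries an expiry so stale data ages out.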
Examples of caches used in enterprise systems include:
- Web Page Caches: Web servers often use caches to store the HTML content of web pages. This reduces the load on web servers and accelerates page load times for users.
- Database Caches: Database management systems (DBMS) use caches to store frequently accessed data pages or query results. This improves query performance and reduces the need to fetch data from disk.
- Content Delivery Network (CDN) Caches: CDNs cache content, such as images, videos, and static files, at edge locations closer to end users. This reduces the latency of content delivery and offloads traffic from origin servers.
- Object Caches: Some enterprise applications, especially those using object-oriented databases, employ object caches to store frequently accessed objects in memory.
- API Response Caches: In RESTful API-based systems, responses from APIs are often cached to reduce the load on the API server and improve response times (a minimal TTL sketch follows this list).
- Session Caches: Caches can store session data for web applications, reducing the need to fetch session information from a database on each request.
- Message Queues with Caches: Some message queuing systems use in-memory caches to temporarily store and process messages, improving message throughput and reducing latency.
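To make the API response caching idea concrete, here is a minimal TTL (time-to-live) sketch around the standard `fetch` API (available globally in browsers and Node.js 18+); it is illustrative only, not a production cache:

```javascript
// Tiny TTL cache for JSON API responses: entries expire after ttlMs and
// are refetched on the next request for that URL.
const responseCache = new Map() // url -> { body, expiresAt }

async function cachedFetchJson(url, ttlMs = 30_000) {
  const now = Date.now()
  const entry = responseCache.get(url)
  if (entry && entry.expiresAt > now) {
    return entry.body // fresh enough: serve from cache, skip the network
  }
  const res = await fetch(url)
  const body = await res.json()
  responseCache.set(url, { body, expiresAt: now + ttlMs })
  return body
}
```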
Caching strategies and technologies may vary depending on the specific requirements of the enterprise system, but the common goal is to enhance performance and efficiency by storing frequently used data in a faster and more accessible location.
There are several cache software solutions available for various purposes, from general-purpose caching to specialized caching for specific applications or services. Here are some popular cache software options:
- Redis: Redis is an in-memory data store that can be used as a cache, database, or message broker. It is known for its high performance and versatility, making it a popular choice for caching (a brief usage sketch follows this list).
- Memcached: Memcached is a distributed memory object caching system. It is designed for simplicity and speed, making it suitable for caching frequently accessed data.
- Nginx: Nginx, a popular web server and reverse proxy, includes a built-in caching module that can cache web content such as static files, API responses, and whole pages.
- Varnish: Varnish Cache is an HTTP accelerator and caching reverse proxy. It is often used to improve the performance of web servers and reduce the load on backend servers.
- Squid: Squid is a widely used caching proxy server that can cache web content, DNS lookups, and more. It is often deployed in network environments to speed up web access.
- Couchbase: Couchbase is a NoSQL database that includes caching features. It can serve both as a database and as a caching layer to accelerate data access.
- Hazelcast: Hazelcast is an open-source in-memory data grid platform that can be used for distributed caching and data sharing across multiple nodes in a cluster.
- Aerospike: Aerospike is a high-performance NoSQL database that includes caching capabilities. It is designed for low-latency, high-throughput applications.
- Ehcache: Ehcache is an open-source, Java-based caching library that adds caching to Java applications. It is often integrated with Java EE and Spring applications.
- Apache JCS (Java Caching System): Apache JCS is a distributed caching system for Java applications. It provides in-memory, disk, and distributed caching.
- Amazon ElastiCache: Amazon ElastiCache is a managed caching service from AWS. It supports both Redis and Memcached, letting you set up and manage caches in the cloud.
- Microsoft Azure Cache for Redis: Azure Cache for Redis is a managed Redis service in Microsoft Azure. It offers high availability, security, and scalability for Redis-based caching.
- Google Cloud Memorystore: Google Cloud Memorystore is a fully managed Redis service on Google Cloud Platform, suitable for caching and session management.
- Couchbase Mobile: Couchbase Mobile provides caching and database capabilities for mobile and edge applications, allowing data to be cached and synchronized across devices.
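As an example of the first option, here is what basic caching with Redis looks like from Node.js using the node-redis client (v4-style API; treat this as a sketch, since details vary by client version):

```javascript
import { createClient } from 'redis'

const client = createClient({ url: 'redis://localhost:6379' })
await client.connect()

// Store a value with a 60-second expiry, then read it back.
await client.set('session:42', JSON.stringify({ userId: 42 }), { EX: 60 })
const session = await client.get('session:42') // null after the key expires
console.log(JSON.parse(session))

await client.quit()
```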
The choice of cache software depends on your specific use case, performance requirements, scalability needs, and programming language preferences. Different cache solutions have their own strengths and may be better suited to different scenarios.