Implementing an LFU Cache in Java: Theory, Code, and Comparison with LRU
This article explains the Least Frequently Used (LFU) cache eviction strategy, contrasts it with LRU, and provides a complete Java implementation—including node definition, insertion, sorting, removal, and retrieval logic—along with a test scenario and discussion of LFU's drawbacks.
The article begins by introducing cache eviction policies, describing LRU (Least Recently Used) and then focusing on LFU (Least Frequently Used), which evicts the data with the lowest access frequency over a period.
LFU Definition – LFU tracks how often each cached item is accessed; when the cache is full, the item with the smallest access count (and the oldest timestamp when counts tie) is evicted.
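The eviction rule can be sketched with a tiny, self-contained example. The `Entry` record and the sample data here are hypothetical illustrations, not part of the article's implementation:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LfuRuleDemo {
    // Hypothetical entry: key, access count, and last-access timestamp.
    record Entry(String key, int count, long time) {}

    public static void main(String[] args) {
        List<Entry> entries = List.of(
                new Entry("A", 3, 100),
                new Entry("B", 1, 200),  // lowest count -> eviction candidate
                new Entry("C", 1, 300)); // same count as B, but accessed later

        // Evict the smallest count; break ties with the oldest timestamp.
        Entry victim = Collections.min(entries,
                Comparator.comparingInt(Entry::count)
                          .thenComparingLong(Entry::time));

        System.out.println(victim.key()); // prints "B"
    }
}
```

B loses the tie-break against C because its timestamp is older, which is exactly the count-then-time ordering the article's `Node.compareTo` encodes below.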
Node Definition – each cache entry is represented by a Node object that stores the key, value, access count, and last access time, and implements Comparable<Node> to enable sorting by count then time.
/** Cache node. */
public static class Node implements Comparable<Node> {
    // key
    Object key;
    // value
    Object value;
    // last access time
    long time;
    // access count
    int count;

    public Node(Object key, Object value, long time, int count) {
        this.key = key;
        this.value = value;
        this.time = time;
        this.count = count;
    }

    public Object getKey() { return key; }
    public void setKey(Object key) { this.key = key; }
    public Object getValue() { return value; }
    public void setValue(Object value) { this.value = value; }
    public long getTime() { return time; }
    public void setTime(long time) { this.time = time; }
    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }

    @Override
    public int compareTo(Node o) {
        // Order by access count first; break ties by access time.
        int compare = Integer.compare(this.count, o.count);
        if (compare == 0) {
            return Long.compare(this.time, o.time);
        }
        return compare;
    }
}

LFU Cache Class – a generic class LFU<K,V> holds the capacity, a LinkedHashMap<K,Node> for the entries, and provides methods for insertion, retrieval, sorting, and eviction.
import java.util.*;

public class LFU<K, V> {
    // total capacity
    private int capacity;
    // all cached nodes
    private Map<K, Node> caches;

    // constructor
    public LFU(int size) {
        this.capacity = size;
        caches = new LinkedHashMap<>(size);
    }
}

Insertion (put) – if the key is new and the cache is full, the least-used entry is evicted first; a new node is then created with count = 1 and the current nanosecond timestamp. Existing keys have their value overwritten, their count incremented, and their timestamp refreshed. After each insertion the cache is re-sorted.
/** Add an element. */
public void put(K key, V value) {
    Node node = caches.get(key);
    if (node == null) {
        // New key: evict the least-used entry if the cache is full.
        if (caches.size() >= capacity) {
            K leastKey = removeLeastCount();
            caches.remove(leastKey);
        }
        node = new Node(key, value, System.nanoTime(), 1);
        caches.put(key, node);
    } else {
        // Existing key: overwrite the value, refresh count and time.
        node.value = value;
        node.setCount(node.getCount() + 1);
        node.setTime(System.nanoTime());
    }
    sort();
}

Sorting – the map entries are copied to a list, sorted in descending order using the Node.compareTo method (highest count and newest time first), the original map cleared, and the sorted entries re-inserted.
private void sort() {
    List<Map.Entry<K, Node>> list = new ArrayList<>(caches.entrySet());
    Collections.sort(list, new Comparator<Map.Entry<K, Node>>() {
        @Override
        public int compare(Map.Entry<K, Node> o1, Map.Entry<K, Node> o2) {
            // Reverse the natural order: most-used, most-recent entries first.
            return o2.getValue().compareTo(o1.getValue());
        }
    });
    caches.clear();
    for (Map.Entry<K, Node> e : list) {
        caches.put(e.getKey(), e.getValue());
    }
}

Eviction of the Least Used – uses Collections.min on the node collection, which relies on Node.compareTo to find the entry with the smallest count (and oldest time when counts tie), and returns its key.
private K removeLeastCount() {
    Collection<Node> values = caches.values();
    Node min = Collections.min(values);
    return (K) min.getKey();
}

Retrieval (get) – fetches the node, increments its count, updates the timestamp, re-sorts the cache, and returns the stored value; returns null if the key is absent.
public V get(K key) {
    Node node = caches.get(key);
    if (node != null) {
        node.setCount(node.getCount() + 1);
        node.setTime(System.nanoTime());
        sort();
        return (V) node.value;
    }
    return null;
}

Test Scenario – creates an LFU cache of capacity 5, inserts six keys (A–F, which evicts A), performs several get operations to change counts and timestamps, re-inserts key 3, and finally prints the cache contents in sorted order, which should be C → B → F → E → D.
public static void main(String[] args) {
    LFU<Integer, String> lfuCache = new LFU<>(5);
    lfuCache.put(1, "A");
    lfuCache.put(2, "B");
    lfuCache.put(3, "C");
    lfuCache.put(4, "D");
    lfuCache.put(5, "E");
    lfuCache.put(6, "F"); // cache is full: evicts A, the oldest entry with count 1
    lfuCache.get(2);
    lfuCache.get(2);
    lfuCache.get(3);
    lfuCache.get(6);
    lfuCache.put(3, "C"); // re-insert: bumps C's count and refreshes its timestamp
    for (Map.Entry<Integer, Node> e : lfuCache.caches.entrySet()) {
        System.out.println(e.getValue().value);
    }
}

Differences and Drawbacks – LRU evicts based on recency, while LFU evicts based on frequency. LFU can keep items that were hot only briefly long after they go cold, letting stale data occupy memory, and it may evict newly added items quickly because their access count is still low.
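For contrast, LRU needs no counters at all: Java's LinkedHashMap can maintain access order directly. The following is a minimal sketch, separate from the article's LFU code, using the constructor's accessOrder flag and the removeEldestEntry hook:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch: with accessOrder=true, LinkedHashMap moves an entry
// to the tail on every access, so the head is the least recently used.
public class LRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRU(int capacity) {
        super(capacity, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; evict the head once the cache is over capacity.
        return size() > capacity;
    }

    public static void main(String[] args) {
        LRU<Integer, String> cache = new LRU<>(2);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.get(1);      // 1 becomes the most recently used
        cache.put(3, "C"); // evicts 2, the least recently used
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```

Note how recency alone decides the victim here: key 2 is evicted even though all keys were accessed equally often, which is exactly where LRU and LFU diverge.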
Conclusion – The article provides a concise introduction to LFU, a full Java implementation, and a comparison with LRU, highlighting when LFU is useful and its limitations; understanding such cache algorithms is essential for developers working on performance‑critical systems.