
An Introduction to Redis: Basics, Performance, and Comparison with Memcached

Redis is an open‑source, in‑memory NoSQL database that provides ultra‑fast key‑value storage, rich data structures, persistence, clustering and extensible modules, making it the preferred distributed cache over Memcached, which lacks these features and is now rarely chosen for new projects.

Sohu Tech Products

Backend projects that need distributed caching usually use Redis. Redis is not only a cache; it can also serve as a distributed lock, delayed queue, rate limiter, etc.
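To make the distributed-lock use case concrete, here is a minimal in-process sketch of the pattern Redis locks are built on: `SET key value NX EX seconds` to acquire, and a check-and-delete to release. `FakeRedis`, `acquire_lock`, and `release_lock` are illustrative names, and the dict-backed client is a stand-in for a real Redis connection, not the actual client library.

```python
import time
import uuid

class FakeRedis:
    """Tiny in-process stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self._store = {}  # key -> (value, absolute expiry timestamp)

    def set(self, key, value, nx=False, ex=None):
        # Mimics SET key value NX EX seconds: with NX, succeed only if
        # the key is absent (or its previous value has expired).
        if nx and key in self._store and self._store[key][1] > time.monotonic():
            return None
        expires = time.monotonic() + (ex if ex is not None else float("inf"))
        self._store[key] = (value, expires)
        return True

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] <= time.monotonic():
            return None
        return entry[0]

    def delete(self, key):
        self._store.pop(key, None)

def acquire_lock(client, name, ttl=10):
    # A unique token identifies the holder, so another process that
    # grabs the lock after our TTL expires cannot be released by us.
    token = str(uuid.uuid4())
    if client.set(f"lock:{name}", token, nx=True, ex=ttl):
        return token
    return None

def release_lock(client, name, token):
    # Only the holder (matching token) may release. Against a real Redis
    # server this check-and-delete must be one atomic Lua script.
    if client.get(f"lock:{name}") == token:
        client.delete(f"lock:{name}")
        return True
    return False
```

The TTL prevents a crashed holder from blocking everyone forever; the token prevents one client from releasing a lock it no longer owns.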

This article summarizes seven fundamental Redis questions, setting aside more advanced topics such as data types, persistence, the threading model, and performance tuning, so readers can self-test their basic Redis knowledge.

What is Redis?

Redis (REmote DIctionary Server) is an open-source, BSD-licensed NoSQL key-value database written in C. Unlike traditional databases, Redis keeps its data in memory (with optional persistence to disk), which gives it extremely fast read/write performance and makes it widely used as a distributed cache.

Redis provides many built‑in data structures (String, Hash, Sorted Set, Bitmap, HyperLogLog, GEO) and supports transactions, persistence, Lua scripting, and clustering solutions such as Redis Sentinel and Redis Cluster.
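As one example of what those richer structures buy you, a sorted set makes a leaderboard a one-liner (`ZADD`, `ZINCRBY`, `ZREVRANGE`). The sketch below is a plain-Python stand-in that only illustrates the semantics of those commands; `Leaderboard` and its method names are hypothetical, not part of any Redis client.

```python
class Leaderboard:
    """Illustrates what a Redis sorted set (ZADD / ZINCRBY / ZREVRANGE)
    provides: members kept ordered by score. In-process stand-in only."""

    def __init__(self):
        self._scores = {}  # member -> score

    def zadd(self, member, score):
        # ZADD: set the member's score
        self._scores[member] = score

    def zincrby(self, member, delta):
        # ZINCRBY: atomically add delta to the member's score
        self._scores[member] = self._scores.get(member, 0) + delta
        return self._scores[member]

    def top(self, n):
        # ZREVRANGE 0 n-1 WITHSCORES: highest scores first
        return sorted(self._scores.items(), key=lambda kv: -kv[1])[:n]
```

With Memcached's plain key/value model, the same feature would require serializing the whole ranking into one value and rewriting it on every update.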

Redis has no external dependencies; Linux and macOS are the most common development and testing platforms, and Linux is recommended for production deployment.

For personal learning you can install Redis locally or use the official online Redis environment (some commands are unavailable).

Why is Redis so fast?

Redis achieves high performance through three main factors:

In‑memory storage – memory access is thousands of times faster than disk.

Reactor‑based single‑threaded event loop with I/O multiplexing.

Optimized internal data structures and implementations.

See the image “Why is Redis so fast?” for a visual summary.

Common Distributed‑Cache Technology Choices

The classic choices are Memcached and Redis. Nowadays most projects prefer Redis; Memcached usage has declined sharply.

Some large-scale projects have open-sourced Redis-compatible KV stores, such as Tencent's Tendis (built on RocksDB and 100% Redis-compatible). However, Tendis has seen little recent maintenance and is not recommended for new projects.

Other Redis alternatives include:

Dragonfly: an in-memory database compatible with the Redis and Memcached APIs, claimed by its developers to be the fastest.

KeyDB: a high-performance Redis fork focused on multithreading and memory efficiency.

Overall, Redis remains the first‑choice distributed cache due to its mature ecosystem and extensive documentation.

Redis vs. Memcached: Similarities and Differences

Similarities:

Both are memory‑based databases primarily used as caches.

Both support expiration policies.

Both deliver very high performance.

Differences:

Redis offers richer data types (list, set, sorted set, hash, etc.) while Memcached only supports simple key/value.

Redis supports persistence (RDB snapshots and AOF logs), so data can survive restarts and be recovered after a failure; Memcached keeps data only in memory.

When memory is exhausted, Redis evicts keys according to a configurable maxmemory policy (e.g. LRU/LFU variants); Memcached simply discards the least recently used items.

Redis has native clustering; Memcached relies on client‑side sharding.

Memcached uses a multithreaded, non‑blocking I/O model; Redis uses a single‑threaded event loop (Redis 6.0 adds optional networking threads).

Redis natively supports Pub/Sub, Lua scripting, and transactions; Memcached offers none of these.

Memcached uses lazy expiration only; Redis combines lazy and periodic expiration.

Given these advantages, Memcached is rarely the preferred choice for new projects.
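The expiration difference above can be sketched in a few lines. The toy store below combines the two strategies the article attributes to Redis: lazy deletion (evict an expired key only when it is accessed) plus a periodic sweep that samples keys with TTLs, so stale keys that are never read still get freed. `ExpiringStore` is an illustrative stand-in, not Redis's actual implementation.

```python
import random
import time

class ExpiringStore:
    """Toy key-value store combining lazy and periodic expiration."""

    def __init__(self):
        self._data = {}    # key -> value
        self._expiry = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ttl=None):
        self._data[key] = value
        if ttl is not None:
            self._expiry[key] = time.monotonic() + ttl

    def get(self, key):
        # Lazy expiration: evict an expired key only when it is touched.
        if key in self._expiry and time.monotonic() >= self._expiry[key]:
            self._data.pop(key, None)
            self._expiry.pop(key, None)
        return self._data.get(key)

    def periodic_sweep(self, sample_size=20):
        # Periodic expiration: sample some keys that carry a TTL and
        # evict the expired ones, even if nobody ever reads them again.
        keys = random.sample(list(self._expiry),
                             min(sample_size, len(self._expiry)))
        now = time.monotonic()
        for k in keys:
            if now >= self._expiry[k]:
                self._data.pop(k, None)
                self._expiry.pop(k, None)
```

Lazy expiration alone would let never-read keys leak memory; the periodic sweep bounds that leak without scanning the whole keyspace.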

Why Use Redis / Why Use a Cache?

High Performance : Frequently accessed, rarely changing data can be cached, allowing subsequent reads to be served directly from memory, which is orders of magnitude faster than disk.

High Concurrency: A single Redis instance can handle 100k+ QPS, compared with roughly 10k QPS for a typical MySQL instance (4 cores, 8 GB RAM). Redis clusters can push throughput even higher.

QPS (Queries Per Second) measures how many queries a server can execute each second.

By offloading read‑heavy workloads to Redis, overall system concurrency improves dramatically.

Common Cache Read/Write Strategies

For detailed explanations, see the article “Three Common Cache Read/Write Strategies”.
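One of those strategies, cache-aside, can be sketched as follows. Plain dicts stand in for Redis and the database, and the function names are illustrative; against a real Redis the backfill would be `SET key value EX ttl` and the invalidation would be `DEL key`.

```python
def cache_aside_get(cache, db, key):
    """Cache-aside read: try the cache, fall back to the database,
    then backfill the cache so the next read is served from memory."""
    value = cache.get(key)
    if value is not None:
        return value                # cache hit
    value = db.get(key)             # cache miss: read from the DB
    if value is not None:
        cache[key] = value          # in real Redis: SET key value EX ttl
    return value

def cache_aside_update(cache, db, key, value):
    # On writes, update the database first, then invalidate (rather than
    # update) the cached copy to avoid stale data under concurrent writers.
    db[key] = value
    cache.pop(key, None)            # in real Redis: DEL key
```

Invalidate-on-write is the usual choice because updating the cache in place can race with a concurrent reader's backfill and leave a stale value behind.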

What Are Redis Modules and What Are They Used For?

Since Redis 4.0, modules allow extending Redis functionality via dynamically loaded shared objects (.so files). Developers can create custom modules for search, distributed locks, rate limiting, etc.

Officially recommended modules include:

RediSearch: full-text search engine.

RedisJSON: JSON data handling.

RedisGraph: graph database.

RedisTimeSeries: time-series data.

RedisBloom: Bloom filter implementation.

RedisAI: execution of deep-learning / machine-learning models.

RedisCell: distributed rate limiting.

More information is available at https://redis.io/modules .

Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
