
Pika Best Practices: 30 Tips for Optimizing the RocksDB‑Based Redis‑Compatible Storage

This article presents thirty practical recommendations for deploying, configuring, and maintaining Pika—a high‑capacity, RocksDB‑backed Redis‑compatible storage system—covering version selection, thread settings, hardware choices, key design, memory management, replication, backup, compaction, security, and monitoring to achieve reliable and high‑performance operation.

360 Tech Engineering

Best Practice 1: Run the latest stable Pika release (3.0.x) whenever possible; the older 2.2.x and 2.3.x lines contain known bugs that have since been fixed.

Best Practice 2: Align the number of Pika threads with the total CPU thread count; for multi‑instance deployments you may lower the per‑instance thread count but never below half of the CPU threads.
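As a sketch, on a dedicated host with 16 hardware threads the relevant pika.conf entry might look like this (the option name follows the 3.0.x configuration file; the values are illustrative, so verify against your own version's config comments):

```
# pika.conf — worker threads sized to the host (16 hardware threads assumed)
thread-num : 16
# Two instances sharing the same 16-thread host: 8 each,
# i.e. never below half the CPU thread count per this tip.
```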

Best Practice 3: Deploy Pika on fast SSDs rather than mechanical disks, and keep master and slave hardware as similar as possible to avoid performance discrepancies.

Best Practice 4: Limit the number of fields per key (especially for hash, list, zset) to under 10,000 for latency‑sensitive workloads; large keys should be split into multiple smaller keys.
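One common way to apply this tip is to shard a large hash into fixed sub-keys by hashing the field name, so no single key accumulates an unbounded field count. A minimal sketch (the key names and shard count are hypothetical, not Pika APIs):

```python
import hashlib

def shard_key(base_key: str, field: str, shards: int = 16) -> str:
    """Map a hash field to one of `shards` sub-keys so that no single
    Pika key grows past the ~10,000-field guideline."""
    h = int(hashlib.md5(field.encode("utf-8")).hexdigest(), 16)
    return f"{base_key}:{h % shards}"

# Reads and writes for a field then target the sub-key, e.g. conceptually:
#   HSET shard_key("user:42:attrs", "email") email alice@example.com
#   HGET shard_key("user:42:attrs", "email") email
```

Because the mapping is deterministic, any client can locate a field without a lookup table; the trade-off is that whole-hash operations (HGETALL, HLEN) must fan out across all shards.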

Best Practice 5: Configure root-connection-num to allow local administrative access even when maxclients is exhausted, preventing lock‑out during emergencies.
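In pika.conf this pairing might look as follows (values are illustrative, not recommendations):

```
# pika.conf — keep an admin escape hatch when the client limit is hit
maxclients : 20000           # ceiling for ordinary client connections
root-connection-num : 2      # extra slots reserved for connections from 127.0.0.1,
                             # usable even after maxclients is exhausted
```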

Best Practice 6: Use client kill all to terminate all non‑replication connections safely when needed.

Best Practice 7: Adjust the timeout setting to close idle connections proactively, reducing connection‑count pressure and memory usage.

Best Practice 8: Monitor memory consumption; if total memory exceeds expectations or 10 GB, run client kill all followed by tcmalloc free to reclaim connection memory, and upgrade if the issue persists.
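The reclamation sequence from this tip can be issued from redis-cli against the Pika port (an illustrative session: it requires a live instance, the port is deployment-specific, and tcmalloc free is a Pika-specific admin command, not a Redis one):

```
redis-cli -p 9221 client kill all   # drop all non-replication client connections
redis-cli -p 9221 tcmalloc free     # ask tcmalloc to return freed pages to the OS
```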

Best Practice 9: Avoid single‑node deployments; a minimal cluster should consist of at least one master and one slave, with failover handled via LVS, VIP floating, or configuration‑management middleware.

Best Practice 10: Prefer master‑slave clusters over dual‑master setups, as the latter requires stricter operational discipline and has more complex recovery procedures.

Best Practice 11: When running a single‑node Pika on reliable storage, you may disable binlog ( write-binlog=no ) to improve write performance, but a replica is still recommended for disaster recovery.
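The corresponding pika.conf setting is a single line; note the consequence spelled out in the comment:

```
# pika.conf — single node on reliable storage only
write-binlog : no   # skips binlog writes for faster ingest;
                    # replicas cannot sync while binlog is disabled
```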

Best Practice 12: Increase open_file_limit to accommodate the growing number of SST files; alternatively, enlarge target-file-size-base to reduce the total file count.

Best Practice 13: Never modify the write2file or manifest files in the log directory, as they are critical for binlog continuity and replica synchronization.

Best Practice 14: Limit the full‑sync bandwidth using db-sync-speed (≤75 MB/s on 1 GbE, ≤500 MB/s on 10 GbE) to prevent network saturation during large data transfers.
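A hedged pika.conf example for a 1 GbE host (the unit is MB/s; in many config versions a negative value leaves the transfer uncapped, but check your file's comments):

```
# pika.conf — cap full-sync (replica bootstrap) bandwidth
db-sync-speed : 75    # ≤75 MB/s on 1 GbE; up to 500 on 10 GbE per this tip
```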

Best Practice 15: Use keys * with extreme caution; although it does not block Pika, it can temporarily consume large amounts of memory when many keys exist.

Best Practice 16: Trigger keyspace statistics manually with info keyspace 1 ; monitor is_scaning_keyspace in info stats to know when the scan is in progress.
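A typical check cycle from redis-cli (illustrative session against a live instance; the port is deployment-specific):

```
redis-cli -p 9221 info keyspace 1   # kick off an asynchronous keyspace scan
redis-cli -p 9221 info stats        # is_scaning_keyspace shows whether the scan is running
redis-cli -p 9221 info keyspace     # read the counts from the last completed scan
```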

Best Practice 17: Avoid running info keyspace 1 or keys * during a full compaction, as it can cause temporary data size inflation.

Best Practice 18: Configure compact-cron or enable auto_compact to regularly clean up expired or deleted keys and prevent storage bloat.
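In 3.0.x configuration files the scheduled form is expressed as a time window plus a disk-usage trigger; the exact syntax is version-dependent, so treat the line below as a sketch and confirm against the comments in your own pika.conf:

```
# pika.conf — scheduled compaction window (format: start-end/usage-ratio)
compact-cron : 03-04/30   # between 03:00 and 04:00, when disk usage exceeds 30%
```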

Best Practice 19: Monitor invalid_keys via info keyspace 1 ; if the count is high, run a manual compact or schedule regular compactions.

Best Practice 20: Retain write2file logs for at least 48 hours to facilitate replica bootstrapping, scaling, and maintenance.

Best Practice 21: Increase sync-thread-num on heavy‑write masters (e.g., >50 k QPS) to improve replica synchronization throughput.
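The knob lives in pika.conf; the value below is illustrative, and the right number depends on write volume and replica count:

```
# pika.conf — more binlog-consumer threads on a write-heavy master
sync-thread-num : 12   # raise from the default when sustained writes exceed ~50k QPS
```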

Best Practice 22: Ensure at least 30 % free disk space before initiating a full compaction to avoid running out of space due to temporary SST growth.
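The 30% rule is easy to enforce from operational tooling before issuing a compact. A minimal sketch (the data path is a hypothetical example, not a Pika default):

```python
import shutil

def safe_to_compact(path: str = "/data/pika/db", min_free_ratio: float = 0.30) -> bool:
    """Return True only if the filesystem holding `path` has at least
    `min_free_ratio` of its capacity free, per the 30% rule of thumb."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_ratio

# Gate a scheduled full compaction on the check, e.g.:
#   if safe_to_compact("/data/pika/db"):
#       ... issue `compact` via your admin tooling ...
```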

Best Practice 23: When disk I/O is a bottleneck, consider upgrading storage, reducing write intensity, or increasing write-buffer-size , keeping in mind that memtable flushing will eventually be required.

Best Practice 24: Disable RocksDB compression in Pika if client‑side compression is feasible; this reduces CPU overhead at the cost of larger disk usage.

Best Practice 25: Implement read‑write separation: scale read capacity by adding more replicas while keeping writes on the master.

Best Practice 26: Before a full compaction, delete existing backups if disk space is limited: backups are hard links to the live SST files, and since compaction rewrites those files the backup copies stop sharing storage, so disk usage can temporarily double.

Best Practice 27: Enable slow‑log persistence to error logs by setting slowlog-write-errorlog=yes to avoid excessive memory consumption.

Best Practice 28: Use userpass and userblacklist to restrict privileged commands for non‑admin users, providing a safer alternative to Redis’s rename‑command feature.
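A pika.conf sketch of the split-credential setup (passwords and the blacklisted command list are illustrative; the blacklist is comma-separated per the config file comments):

```
# pika.conf — separate admin and application credentials
requirepass   : admin_secret    # full-privilege password for administrators
userpass      : app_secret      # restricted password for application clients
userblacklist : FLUSHALL,FLUSHDB,SHUTDOWN,CONFIG,KEYS   # denied to userpass clients
```

Clients authenticating with userpass can run normal data commands but are refused the blacklisted ones, which covers the main use case of Redis's rename-command without mangling command names.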

These practices collectively help operators achieve stable, high‑performance Pika deployments in production environments.

Tags: performance, operations, Redis, best practices, RocksDB, Pika, database tuning
Written by

360 Tech Engineering

Official tech channel of 360, building the most professional technology aggregation platform for the brand.
