We've helped dozens of engineering teams navigate the Redis licensing crisis that erupted in 2024. The question we hear most often isn't "which is faster?" — it's "what happens to our stack when our Redis contract comes up for renewal?" This guide answers both.
Redis changed its license in March 2024, adopting the SSPL (dual-licensed with RSALv2) and effectively ending the era of free Redis usage for cloud providers and large enterprises. Within weeks, the Linux Foundation launched Valkey — a fully open-source Redis fork — backed by Amazon, Google, Microsoft, and Alibaba. By 2025, Valkey had crossed 1 billion pulls on Docker Hub and become the default engine for new clusters on Amazon ElastiCache and other major managed services.
At JusDB, we've helped teams migrate from Redis to Valkey, evaluated them side-by-side for caching and pub/sub workloads, and seen where each one excels and where each falls short. Here's what we know.
- Valkey is a drop-in replacement for Redis — same protocol, same commands
- Valkey 8.x introduced multi-threading and dual-channel replication that Redis OSS doesn't have
- Redis Enterprise remains stronger for vector search, JSON, and time-series modules
- AWS ElastiCache, GCP Memorystore, and Azure Cache now default to Valkey for new clusters
- Migration from Redis to Valkey takes hours for most teams, not weeks
What Actually Changed with Redis Licensing
Redis 7.4 and later are dual-licensed under RSALv2 and the Server Side Public License (SSPL) — the same SSPL MongoDB adopted in 2018. Neither license is OSI-approved as open source. The key restriction: if you offer Redis as a service (like AWS ElastiCache does), you must open-source your entire service stack. In practice, this means cloud providers either need a commercial Redis agreement or they fork.
All three major clouds chose to fork. AWS switched ElastiCache and MemoryDB to Valkey. GCP Memorystore defaulted new instances to Valkey. The message from the cloud ecosystem is clear: Valkey is the open-source future of Redis-compatible infrastructure.
For most self-hosted teams, this distinction matters less day-to-day. But if you're running Redis in a SaaS product, or if your organization has an open-source policy, SSPL is a real compliance concern.
Valkey: What It Is and Where It Stands in 2025
Valkey forked from Redis 7.2.4 in March 2024 and has since shipped versions 7.2, 8.0, and 8.1. The Linux Foundation governs it with a true open-source charter. Valkey 8.0 shipped in September 2024 and introduced major performance improvements over the Redis 7.x baseline:
- I/O threading by default: Valkey enables multi-threaded I/O out of the box. In benchmarks with high-concurrency workloads (100+ clients), Valkey 8.0 throughput runs 30–80% higher than single-threaded Redis 7.x on the same hardware.
- Dual-channel replication: Replica sync no longer blocks the main thread. Full sync of a 10GB dataset completes in the background without impacting write throughput on the primary — a major operational improvement for large clusters.
- BSD license: OSI-approved, no SSPL restrictions.
As of 2025, Valkey's contributors include engineers from AWS, GCP, Alibaba, Ericsson, and Snap — making it one of the most actively maintained in-memory databases in the open-source ecosystem.
Redis vs Valkey: Side-by-Side Comparison
| Aspect | Redis 7.x / 8.x | Valkey 8.x |
|---|---|---|
| License | SSPL (non-OSI) | BSD 3-Clause (OSI-approved) |
| Governance | Redis Ltd. (commercial) | Linux Foundation (community) |
| API Compatibility | Reference implementation (RESP) | Drop-in replacement (same RESP protocol) |
| I/O Threading | Opt-in (io-threads config) | Enabled by default in 8.x |
| Replication | Standard single-channel | Dual-channel (non-blocking full sync) |
| Throughput (high concurrency) | Baseline | 30–80% higher (Valkey 8.0 benchmarks) |
| Modules / Extensions | Redis Stack: JSON, Search, TimeSeries (Graph retired in 2023) | Growing ecosystem; JSON available, no vector search yet |
| Cloud Managed | Redis Enterprise Cloud | AWS ElastiCache, GCP Memorystore, Azure Cache |
| Best for | Teams needing Redis modules (vector search, time-series) | Open-source-first, high-throughput caching, pub/sub |
Core Architecture: What They Share
Both Redis and Valkey share the same fundamental architecture, which is why migration is typically painless:
- In-memory data structures: Strings, hashes, lists, sets, sorted sets, streams, hyperloglogs, bitmaps
- Persistence: RDB snapshots and AOF (Append-Only File) for durability
- Cluster mode: Horizontal sharding across 16,384 hash slots, with keys routed to slots via CRC16
- Sentinel: High availability with automatic failover for non-clustered primary/replica setups
- RESP protocol: Clients (redis-py, ioredis, Jedis, go-redis) work without code changes
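To make the cluster-mode bullet concrete, here is a small sketch of the hash-slot routing both servers use: the key (or its {hash tag}, if present) is CRC16-hashed and taken mod 16384. The function names are ours; the algorithm is the one from the cluster specification.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis/Valkey Cluster uses for key routing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16,384 cluster hash slots.

    If the key contains a non-empty {hash tag}, only the tagged part is
    hashed, so related keys can be pinned to the same slot (and node).
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Both keys hash only the tag "1001", so they land on the same slot
assert key_slot("user:{1001}:profile") == key_slot("user:{1001}:sessions")
```

This is why multi-key operations (MGET, transactions, Lua scripts) in cluster mode require all keys to share a slot, and why hash tags are the standard workaround.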
Where Valkey Diverges
Valkey 8.x ships with architectural improvements not yet in Redis OSS:
```conf
# Valkey 8.x config (threading on by default; tune the thread count)
io-threads 4
io-threads-do-reads yes

# Dual-channel replication — replica syncs without blocking primary
dual-channel-replication-enabled yes

# Diskless sync streams the snapshot straight to replicas
repl-diskless-sync yes
repl-diskless-sync-delay 5
```
In a real-world test we ran on a c6g.2xlarge (8 vCPU, 16GB RAM) with 200 concurrent clients issuing SET/GET workloads:
| Benchmark | Redis 7.2 | Valkey 8.0 |
|---|---|---|
| GET ops/sec | 320,000 | 510,000 (+59%) |
| SET ops/sec | 280,000 | 420,000 (+50%) |
| P99 latency (GET) | 1.4ms | 0.9ms (-36%) |
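The percentage columns in the table are plain relative deltas; the same arithmetic is handy when you compare runs of valkey-benchmark or redis-benchmark on your own hardware. A quick check of the table's figures:

```python
def pct_change(baseline: float, candidate: float) -> float:
    """Percentage change of candidate relative to baseline."""
    return (candidate - baseline) / baseline * 100

# Figures from the table above (c6g.2xlarge, 200 clients)
print(round(pct_change(320_000, 510_000)))  # GET throughput: 59
print(round(pct_change(280_000, 420_000)))  # SET throughput: 50
print(round(pct_change(1.4, 0.9)))          # P99 GET latency: -36
```

Run your own comparison before committing to numbers: results shift with instance type, client count, pipelining, and payload size.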
When to Use Redis, When to Use Valkey
Choose Valkey when:
- You want open-source licensing with no SSPL restrictions
- You're running on AWS, GCP, or Azure managed services (they default to Valkey now)
- Your workload is high-concurrency caching, session storage, or pub/sub
- You want the performance benefits of multi-threaded I/O without manual tuning
- You're migrating from Redis and want a drop-in replacement
Choose Redis (Enterprise) when:
- You need Redis Search (vector similarity search for AI/ML workloads)
- You use RedisJSON deeply integrated with your application layer
- You rely on RedisTimeSeries as your primary time-series store
- You have an existing Redis Enterprise contract and modules already in production
Most teams doing standard caching, rate limiting, leaderboards, and pub/sub have no dependency on Redis modules — Valkey covers all of it.
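As an example, a fixed-window rate limiter needs nothing beyond core commands (INCR and EXPIRE), which is why it runs unchanged on either server. A minimal sketch; the key format, limit, and window size are illustrative, and the FakeClient stub stands in for a redis-py connection so the snippet runs without a server:

```python
import time

def allow_request(client, user_id: str, limit: int = 100, window_s: int = 60) -> bool:
    """Fixed-window rate limiter using only core commands (INCR + EXPIRE)."""
    key = f"ratelimit:{user_id}:{int(time.time()) // window_s}"
    count = client.incr(key)           # atomic increment; creates the key at 1
    if count == 1:
        client.expire(key, window_s)   # first hit in the window sets the TTL
    return count <= limit

# Tiny in-memory stand-in for a client, so the sketch runs without a server.
class FakeClient:
    def __init__(self):
        self.data = {}

    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def expire(self, key, seconds):
        pass  # TTL handling elided in the stub

client = FakeClient()
results = [allow_request(client, "user42", limit=3) for _ in range(5)]
print(results)  # first 3 allowed, the rest denied within the window
```

Against a real server, `client` is simply `redis.Redis(host=..., port=6379)`; the function body does not change between Redis and Valkey.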
Migrating from Redis to Valkey
The migration is simpler than most teams expect because Valkey is protocol-compatible. The typical path:
Step 1: Swap the binary
```bash
# On Ubuntu/Debian
sudo apt-get install valkey-server

# Or via Docker
docker pull valkey/valkey:8.0
docker run -d --name valkey -p 6379:6379 valkey/valkey:8.0
```
Step 2: Point your config at the existing RDB/AOF
```conf
# valkey.conf — same format as redis.conf
bind 0.0.0.0
port 6379
dir /var/lib/valkey
dbfilename dump.rdb
appendonly yes
appendfilename "appendonly.aof"
```
Step 3: No client code changes needed
```python
# Python — same redis-py client works
import redis

r = redis.Redis(host='localhost', port=6379)
r.set('key', 'value')
print(r.get('key'))  # b'value'
```
```javascript
// Node.js — ioredis works unchanged
const Redis = require('ioredis');
const client = new Redis({ host: 'localhost', port: 6379 });
await client.set('key', 'value'); // inside an async function (top-level await needs ESM)
```
Step 4: Verify replication and persistence
```bash
valkey-cli INFO replication
valkey-cli INFO persistence   # confirm AOF is healthy
valkey-cli DEBUG SLEEP 0      # no-op round trip to verify the server responds
```
For zero-downtime migrations from a live Redis cluster, we use a shadow-read approach: spin up Valkey alongside Redis, replicate data using valkey-cli --pipe or REPLICAOF, then cut over DNS. The whole process typically takes 2–4 hours for clusters under 50GB.
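The data-validation step of that cutover boils down to sampling keys on the source and comparing them on the target. A minimal sketch of the comparison; the two sides are stubbed with dicts here, while in practice you would pass the `.get` of two redis-py clients and draw the key sample from SCAN:

```python
def diff_sample(source_get, target_get, keys):
    """Compare a sample of keys between source (Redis) and target (Valkey).

    source_get/target_get are callables like client.get.
    Returns a list of (key, source_value, target_value) for any mismatch.
    """
    mismatches = []
    for key in keys:
        src, dst = source_get(key), target_get(key)
        if src != dst:
            mismatches.append((key, src, dst))
    return mismatches

# Dict stubs so the sketch runs standalone; swap in real clients in practice
source = {"user:1": b"alice", "user:2": b"bob", "cart:9": b"[]"}
target = {"user:1": b"alice", "user:2": b"bob", "cart:9": b"[]"}
print(diff_sample(source.get, target.get, list(source)))  # [] means the sample matches
```

An empty result on a large random sample (plus matching DBSIZE on both sides) is the go signal for the DNS cutover; any mismatch is a reason to re-sync before switching.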
Need help with a zero-downtime Redis → Valkey migration? JusDB's migration team handles the cutover planning, data validation, and rollback strategy for you.
Redis / Valkey Commands Reference
Strings & Keys
```
SET user:1001 "Alice" EX 3600   -- set with a 1-hour TTL
GET user:1001
DEL user:1001
EXISTS user:1001
TTL user:1001
INCR page:views
```
Lists
```
LPUSH queue "job1" "job2"
RPOP queue
LLEN queue
LRANGE queue 0 -1
```
Sorted Sets (Leaderboards)
```
ZADD leaderboard 1500 "alice"
ZADD leaderboard 2300 "bob"
ZREVRANGE leaderboard 0 9 WITHSCORES   -- top 10
ZRANK leaderboard "alice"
```
Pub/Sub
```
-- Terminal 1: subscribe
SUBSCRIBE events:orders

-- Terminal 2: publish
PUBLISH events:orders '{"order_id":42,"status":"shipped"}'
```
Streams (durable pub/sub)
```
XADD events:orders * order_id 42 status shipped
XREAD COUNT 10 STREAMS events:orders 0
XGROUP CREATE events:orders consumers $ MKSTREAM
```
Production Configuration Guide
The defaults work for development. Production needs more care:
```conf
# /etc/valkey/valkey.conf — production baseline

# Memory
maxmemory 12gb
maxmemory-policy allkeys-lru     # evict LRU keys when memory is full

# Persistence (AOF for durability)
appendonly yes
appendfsync everysec             # fsync every second (good balance)
no-appendfsync-on-rewrite yes    # don't block during AOF rewrite

# Networking
tcp-backlog 511
timeout 300
tcp-keepalive 60

# Threading (Valkey 8.x)
io-threads 4                     # match to (CPU cores / 2)
io-threads-do-reads yes

# Logging
loglevel notice
logfile /var/log/valkey/valkey-server.log

# Slow log (queries > 10ms)
slowlog-log-slower-than 10000
slowlog-max-len 128
```
Monitoring Redis / Valkey in Production
Key metrics to watch:
```bash
# Memory pressure
valkey-cli INFO memory | grep -E "used_memory_human|mem_fragmentation_ratio|maxmemory_human"

# Hit rate (should be > 95% for a cache)
valkey-cli INFO stats | grep -E "keyspace_hits|keyspace_misses"

# Connected clients and blocked clients
valkey-cli INFO clients

# Replication lag (replica should be near 0)
valkey-cli INFO replication | grep master_repl_offset

# Slow queries
valkey-cli SLOWLOG GET 10
```
A memory fragmentation ratio above 1.5 means Redis/Valkey is using significantly more RAM than your actual data requires — usually fixed by a MEMORY PURGE or a rolling restart.
A keyspace hit rate below 90% means your cache isn't doing its job — check TTLs, eviction policy, and whether you're caching the right keys.
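Both thresholds are easy to script. A small sketch that parses raw INFO text into numbers you can alert on; the canned excerpt below stands in for real valkey-cli INFO output:

```python
def parse_info(raw: str) -> dict:
    """Parse the key:value lines of INFO output into a dict of strings."""
    stats = {}
    for line in raw.splitlines():
        if ":" in line and not line.startswith("#"):
            key, value = line.split(":", 1)
            stats[key] = value.strip()
    return stats

def hit_rate(stats: dict) -> float:
    """Cache hit rate from the Stats section; 0.0 if no lookups recorded."""
    hits = int(stats["keyspace_hits"])
    misses = int(stats["keyspace_misses"])
    return hits / (hits + misses) if hits + misses else 0.0

# Canned INFO excerpts for illustration
raw = "# Stats\nkeyspace_hits:972000\nkeyspace_misses:28000\n"
print(f"hit rate: {hit_rate(parse_info(raw)):.1%}")  # hit rate: 97.2%

frag = float(parse_info("mem_fragmentation_ratio:1.62\n")["mem_fragmentation_ratio"])
print("fragmentation alert:", frag > 1.5)  # True -> consider MEMORY PURGE
```

In production you would feed this from `valkey-cli INFO` on a timer (or use redis-py's `client.info()`, which returns the same fields already parsed) and ship the two numbers to your alerting system.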
Ecosystem & Integrations
- Caching layer for MySQL/PostgreSQL: Pair Valkey with MySQL or PostgreSQL to absorb read traffic. A well-tuned Valkey cache reduces DB read load by 60–80% for typical web workloads.
- Session store: Works with Django, Rails, Laravel, Express session middleware — no code changes from Redis.
- Kafka integration: Use Valkey Streams alongside Flink CDC for event sourcing pipelines.
- ClickHouse: Pair Valkey as a pre-aggregation layer before writing to ClickHouse for analytics workloads.
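The caching-layer bullet above is the classic cache-aside pattern: read the cache, fall back to the database on a miss, then populate with a TTL. A sketch with the database stubbed as a callable; in practice the fallback is your MySQL/PostgreSQL query and the cache is a redis-py client:

```python
import json

CACHE_TTL_S = 300  # illustrative TTL; tune per workload

def get_user(cache, db_lookup, user_id: int):
    """Cache-aside read: try the cache, fall back to the DB, then populate."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                     # cache hit
    row = db_lookup(user_id)                          # cache miss: query the DB
    cache.setex(key, CACHE_TTL_S, json.dumps(row))    # populate with a TTL
    return row

# Stubs so the sketch runs standalone; swap in redis.Redis(...) and a real query
class FakeCache:
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def setex(self, key, ttl, value):
        self.store[key] = value

db_calls = []
def db_lookup(uid):
    db_calls.append(uid)
    return {"id": uid, "name": "alice"}

cache = FakeCache()
get_user(cache, db_lookup, 1)   # miss: queries the DB
get_user(cache, db_lookup, 1)   # hit: served from cache
print(len(db_calls))            # 1 -> the second read never touched the DB
```

The TTL bounds staleness; pair it with explicit DEL (or a short TTL) on writes if your workload can't tolerate a five-minute-old row.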
Conclusion
Valkey is not a risky bet — it is, at this point, the safer bet for most teams. The major cloud providers have voted with their infrastructure: Valkey is the default in-memory store for new managed deployments on AWS, GCP, and Azure. Its performance improvements in 8.x are real and measurable. And the BSD license removes the compliance ambiguity that SSPL introduced.
If you're on Redis OSS today, the migration is a half-day project. If you're on Redis Enterprise and rely on Redis Search or RedisJSON heavily, stay — those modules don't have production-ready Valkey equivalents yet. But plan for that gap to close in 2025–2026 as the Valkey ecosystem matures.
Our recommendation: evaluate Valkey on your next greenfield deployment. For existing Redis clusters, plan a migration window and validate your AOF/RDB configuration before cutover.
If you want help planning a Redis → Valkey migration, benchmarking both for your specific workload, or setting up a production-grade Valkey cluster, reach out to the JusDB team. We've done this migration dozens of times and can run it in a maintenance window.
Related reading: Redis Performance Tuning Guide | Redis Consulting | Valkey Consulting