Network Latency Optimization
Minimize Your Ping
Latency starts at the network level. No matter how optimized your code is, the speed of light remains the ultimate bottleneck. The single most impactful action you can take to minimize latency is to co-locate your infrastructure geographically with the gRPC endpoint.
Target: 0-1 milliseconds network latency. Aim for sub-millisecond ping between your client application and the gRPC endpoint.
Recommended Data Centers
Co-locate your infrastructure in these data centers for optimal performance:
- Europe
- North America
Providers: Cherry Servers, Latitude, velia.net, hostkey.com, Teraswitch
Endpoint:
https://grpc.solanatracker.io
Measuring Latency
It’s possible to achieve 1-2 ms latency without co-locating if you’re in the same region. Measure your latency from the machine you plan to deploy on before committing to a location; a quick sketch follows the targets below.
Latency Targets:
- Excellent: 0-1ms (co-located)
- Good: 1-2ms (same region)
- Acceptable: 2-5ms (nearby region)
- Poor: >5ms (consider relocating)
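A rough check, as a minimal TypeScript sketch: time the TCP handshake to the endpoint (assuming TLS on port 443). A plain ping or mtr run from the deployment host gives the same signal.

```typescript
import { connect } from "node:net";

// Approximate network latency by timing the TCP handshake to the gRPC endpoint.
// For ongoing measurements, prefer ping/mtr from the actual deployment host.
function measureConnectLatency(host: string, port: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = connect({ host, port });
    socket.once("connect", () => {
      const ms = Number(process.hrtime.bigint() - start) / 1_000_000;
      socket.destroy();
      resolve(ms);
    });
    socket.once("error", reject);
  });
}

measureConnectLatency("grpc.solanatracker.io", 443) // assumes TLS on port 443
  .then((ms) => console.log(`TCP connect latency: ${ms.toFixed(2)} ms`))
  .catch(console.error);
```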
Connection Management
Distribute Load Across Multiple Clients
Streaming multiple high-load addresses (e.g., Meteora DLMM, Pump.fun, DEX programs) in a single subscription can quickly overwhelm a single client connection.
Problems with a single client:
- Message backlog at the network layer
- Single consumer thread bottleneck
- Client may disconnect due to unmanageable backlog
Best Practice: Split high-load addresses across multiple gRPC clients and distribute processing across separate CPU cores/threads.
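A sketch of the split, assuming the @triton-one/yellowstone-grpc TypeScript client; the program IDs are placeholders, and each client's handler should feed its own processing pipeline (ideally pinned to its own core or worker).

```typescript
import Client, { CommitmentLevel } from "@triton-one/yellowstone-grpc";

const ENDPOINT = "https://grpc.solanatracker.io";
const TOKEN = process.env.GRPC_TOKEN; // assumption: token provided via environment

// Placeholders -- substitute the actual high-load program IDs you stream.
const PUMP_FUN_PROGRAM = "<pump.fun program id>";
const METEORA_DLMM_PROGRAM = "<meteora dlmm program id>";

// One dedicated client per high-load program, each with its own stream and handler.
async function streamProgram(name: string, programId: string) {
  const client = new Client(ENDPOINT, TOKEN, undefined);
  const stream = await client.subscribe();

  stream.on("data", (update) => {
    // Hand off to this client's own queue/worker; keep this callback cheap.
  });

  // Filter fields follow the Yellowstone proto; adjust to your client version.
  stream.write({
    accounts: {},
    slots: {},
    transactions: {
      [name]: {
        vote: false,
        failed: false,
        accountInclude: [programId],
        accountExclude: [],
        accountRequired: [],
      },
    },
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
    accountsDataSlice: [],
    commitment: CommitmentLevel.PROCESSED,
  });
}

streamProgram("pumpfun", PUMP_FUN_PROGRAM).catch(console.error);
streamProgram("meteora", METEORA_DLMM_PROGRAM).catch(console.error);
```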
When to Use Multiple Clients
High-Load Programs
Use separate clients for:
- DEX programs (Raydium, Jupiter, Orca)
- Pump.fun program
- Meteora DLMM
- Popular lending protocols
Moderate-Load Addresses
Moderate-load tokens, pools, and wallets do not need dedicated clients; combine them in one subscription as described under Connection Efficiency below.
Connection Efficiency
Over-fragmenting your connections for moderate-load addresses can quickly hit connection limits.
Best Practice: Don’t create a new connection for each token, pool, or wallet address. Combine them in a single subscribe request.
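By contrast with the high-load split above, moderate-load addresses belong in one request on one client. A sketch, again assuming the Yellowstone TypeScript client, with placeholder addresses:

```typescript
import Client, { CommitmentLevel } from "@triton-one/yellowstone-grpc";

async function subscribeModerateLoad() {
  const client = new Client("https://grpc.solanatracker.io", process.env.GRPC_TOKEN, undefined);
  const stream = await client.subscribe();
  stream.on("data", (update) => {
    // ...route by account and process...
  });

  // One filter covering many moderate-load addresses -- no per-address connections.
  stream.write({
    accounts: {
      watched: {
        account: ["<token address>", "<pool address>", "<wallet address>"], // placeholders
        owner: [],
        filters: [],
      },
    },
    slots: {},
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
    accountsDataSlice: [],
    commitment: CommitmentLevel.CONFIRMED,
  });
}

subscribeModerateLoad().catch(console.error);
```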
Client Configuration
Expand Max Message Size
Yellowstone gRPC clients have a default maximum size of 4 MB (4194304 bytes) for incoming messages. When streaming account updates or block updates, you can hit this limit.
Required: Configure your gRPC client to avoid hitting the 4 MB message limit.
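A TypeScript sketch, assuming the @triton-one/yellowstone-grpc client, which passes grpc-js channel options through its constructor; the Rust and Go clients expose equivalent settings.

```typescript
import Client from "@triton-one/yellowstone-grpc";

// Raise the receive limit well above the 4 MB default; the value is in bytes.
const client = new Client(
  "https://grpc.solanatracker.io",
  process.env.GRPC_TOKEN, // assumption: token provided via environment
  {
    "grpc.max_receive_message_length": 1024 * 1024 * 1024, // 1 GB, matching the checklist below
  }
);
```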
Keepalive Configuration
Configure keepalive to maintain persistent connections:
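A sketch using grpc-js channel options via the TypeScript client, with the 30-second interval from the checklist below; the values are illustrative and other clients expose equivalent settings.

```typescript
import Client from "@triton-one/yellowstone-grpc";

const client = new Client("https://grpc.solanatracker.io", process.env.GRPC_TOKEN, {
  "grpc.keepalive_time_ms": 30_000,         // ping the server every 30 s
  "grpc.keepalive_timeout_ms": 5_000,       // treat the connection as dead if no ack within 5 s
  "grpc.keepalive_permit_without_calls": 1, // keep pinging even while the stream is idle
});
```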
Connection Timeouts
Set reasonable timeout values:
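One way to bound connection setup and transport-level reconnect delays, sketched with grpc-js channel options plus a manual timeout around the initial subscribe; the values are illustrative.

```typescript
import Client from "@triton-one/yellowstone-grpc";

const client = new Client("https://grpc.solanatracker.io", process.env.GRPC_TOKEN, {
  "grpc.initial_reconnect_backoff_ms": 1_000, // first transport reconnect attempt after 1 s
  "grpc.max_reconnect_backoff_ms": 10_000,    // cap transport-level backoff at 10 s
});

// Do not wait forever for the stream to open; fail fast and let your retry logic take over.
async function subscribeWithTimeout(timeoutMs = 10_000) {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`subscribe timed out after ${timeoutMs} ms`)), timeoutMs)
  );
  return Promise.race([client.subscribe(), timeout]);
}
```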
Processing Optimization
Asynchronous Processing
Decouple I/O (receiving messages) from CPU-bound work (deserializing, filtering, business logic):
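One way to do this in TypeScript: the stream callback only enqueues raw updates, and a separate loop drains the queue and runs the heavy logic. A minimal sketch (queue and handler names are illustrative):

```typescript
import Client from "@triton-one/yellowstone-grpc";

async function main() {
  const client = new Client("https://grpc.solanatracker.io", process.env.GRPC_TOKEN, undefined);
  const stream = await client.subscribe();
  // ...write your SubscribeRequest here (see the earlier sketches)...

  // I/O side: the callback only enqueues -- no decoding or business logic here.
  const queue: unknown[] = [];
  stream.on("data", (update) => queue.push(update));

  // CPU side: a separate loop drains the queue without blocking message receipt.
  while (true) {
    const update = queue.shift();
    if (update === undefined) {
      await new Promise((r) => setTimeout(r, 1)); // yield until more data arrives
      continue;
    }
    await processUpdate(update);
  }
}

async function processUpdate(update: unknown) {
  // deserialize, filter, run business logic
}

main().catch(console.error);
```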
Worker Thread Distribution
For extremely high-volume streams, distribute processing across worker threads:
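A sketch using Node's built-in worker_threads: the main thread owns the gRPC stream and round-robins raw updates across a small worker pool. The worker file name is illustrative.

```typescript
// main.ts -- owns the gRPC stream and fans raw updates out to a worker pool.
import { Worker } from "node:worker_threads";
import os from "node:os";

const workers = Array.from(
  { length: Math.max(1, os.cpus().length - 1) },
  () => new Worker(new URL("./update-worker.js", import.meta.url))
);

let next = 0;
export function dispatch(update: unknown) {
  // Round-robin raw updates; heavy decoding and filtering happen off the main thread.
  workers[next].postMessage(update);
  next = (next + 1) % workers.length;
}
// Wire it to the stream from the previous sketch: stream.on("data", dispatch);
```

```typescript
// update-worker.ts -- each instance runs on its own thread/core.
import { parentPort } from "node:worker_threads";

parentPort?.on("message", (update) => {
  // deserialize, filter, and run business logic for this shard of the stream
});
```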
Error Handling
Implement Exponential Backoff
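A minimal sketch; the connect callback stands in for whatever opens your client and subscription, and the delays and cap are illustrative.

```typescript
// Reconnect with exponential backoff: 1s, 2s, 4s, ... capped at 30s, reset on success.
async function connectWithBackoff(
  connect: () => Promise<void>, // opens the client + subscription (your code)
  baseDelayMs = 1_000,
  maxDelayMs = 30_000
) {
  let attempt = 0;
  while (true) {
    try {
      await connect();
      attempt = 0; // connection succeeded; reset the backoff window
      return;
    } catch (err) {
      const delay = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
      // Consider adding random jitter in production to avoid synchronized reconnects.
      console.error(`connect failed (attempt ${attempt + 1}), retrying in ${delay} ms`, err);
      await new Promise((r) => setTimeout(r, delay));
      attempt++;
    }
  }
}
```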
Handle Stream Errors Gracefully
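A sketch of the stream-level handlers: log the error, close the stream, and reconnect instead of letting the process crash. It assumes the Yellowstone TypeScript client; in production, pair this with the backoff helper above.

```typescript
import Client from "@triton-one/yellowstone-grpc";

const client = new Client("https://grpc.solanatracker.io", process.env.GRPC_TOKEN, undefined);

async function runStream(): Promise<void> {
  const stream = await client.subscribe();

  stream.on("data", (update) => {
    // enqueue for processing (see the asynchronous-processing sketch above)
  });

  stream.on("error", (err: Error) => {
    // gRPC surfaces disconnects and server errors here; never leave this unhandled.
    console.error("stream error:", err.message);
    stream.end();
    setTimeout(() => runStream().catch(console.error), 1_000); // or reuse connectWithBackoff
  });

  stream.on("end", () => {
    console.warn("stream ended by server, reconnecting");
    setTimeout(() => runStream().catch(console.error), 1_000);
  });
}

runStream().catch(console.error);
```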
Monitoring & Observability
Track Key Metrics
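A minimal sketch of the kind of counters worth exporting: messages received, messages processed, current backlog, and reconnects. The metric names and the reporting sink are illustrative.

```typescript
// Minimal in-process metrics; swap the console sink for Prometheus, StatsD, etc.
const metrics = {
  messagesReceived: 0,
  messagesProcessed: 0,
  reconnects: 0,
  queueDepth: 0,
};

// Call these from the stream handler, processing loop, and reconnect logic.
export function onMessageReceived() { metrics.messagesReceived++; metrics.queueDepth++; }
export function onMessageProcessed() { metrics.messagesProcessed++; metrics.queueDepth--; }
export function onReconnect() { metrics.reconnects++; }

// Report every 10 s so message rate and backlog trends are visible.
setInterval(() => {
  console.log(JSON.stringify({ ts: new Date().toISOString(), ...metrics }));
}, 10_000);
```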
Alert on Anomalies
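A sketch of simple threshold checks layered on top of the counters from the previous sketch: alert when the backlog grows or the message rate drops to zero. Thresholds and the alert sink are illustrative.

```typescript
// `metrics` comes from the tracking sketch above; thresholds are illustrative.
const MAX_QUEUE_DEPTH = 10_000;
let lastReceived = 0;

setInterval(() => {
  if (metrics.queueDepth > MAX_QUEUE_DEPTH) {
    raiseAlert(`backlog of ${metrics.queueDepth} messages -- processing is falling behind`);
  }
  if (metrics.messagesReceived === lastReceived) {
    raiseAlert("no messages received in the last interval -- stream may be stalled");
  }
  lastReceived = metrics.messagesReceived;
}, 30_000);

function raiseAlert(message: string) {
  // Hook up PagerDuty, Slack, email, etc.; logging shown as a stand-in.
  console.error(`[ALERT] ${message}`);
}
```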
Security Best Practices
Protect Your Credentials
Never commit credentials to version control!
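A sketch of reading the token from the environment instead of hard-coding it; the variable name is illustrative.

```typescript
import Client from "@triton-one/yellowstone-grpc";

// Read the token from the environment (e.g. injected by your secret manager or CI),
// never from a constant checked into the repository.
const token = process.env.SOLANA_TRACKER_GRPC_TOKEN;
if (!token) {
  throw new Error("SOLANA_TRACKER_GRPC_TOKEN is not set");
}

const client = new Client("https://grpc.solanatracker.io", token, undefined);
```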
Rotate Tokens Regularly
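A hedged sketch of one rotation approach: periodically fetch a fresh token from wherever you store secrets and rebuild the client. fetchCurrentToken and the interval are placeholders for your own setup.

```typescript
import Client from "@triton-one/yellowstone-grpc";

// Placeholder: pull the current token from your secret store or config service.
async function fetchCurrentToken(): Promise<string> {
  return process.env.SOLANA_TRACKER_GRPC_TOKEN!;
}

let client: Client; // rebuilt whenever the token rotates

async function rotateClient() {
  const token = await fetchCurrentToken();
  client = new Client("https://grpc.solanatracker.io", token, undefined);
  // Re-subscribe with the new client here (see the reconnection sketches above).
}

// Rotate on your own schedule; every 24 h shown as an example.
void rotateClient();
setInterval(() => void rotateClient(), 24 * 60 * 60 * 1_000);
```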
Performance Checklist
Production Deployment Checklist:
- Co-locate in recommended data center (0-1ms latency)
- Separate high-load programs into different clients
- Configure max message size to 1GB
- Implement keepalive (30s interval)
- Use asynchronous processing
- Implement exponential backoff reconnection
- Monitor message rates and backlog
- Set up error alerting
- Use environment variables for credentials
- Implement token rotation mechanism
- Log metrics for analysis
- Test failover scenarios
Common Pitfalls
Processing Blocking Network I/O
Problem: Slow processing logic blocks receiving new messages.
Solution: Decouple message receipt from processing using queues and async processing.
Too Many Connections
Problem: Creating a separate connection for each address.
Solution: Combine moderate-load addresses in a single subscription.
Insufficient Error Handling
Problem: Stream crashes on errors without recovery.
Solution: Implement comprehensive error handling with automatic reconnection.
No Monitoring
Problem: Can’t diagnose performance issues or failures.
Solution: Implement metrics tracking and alerting from day one.