Network Latency Optimization
Minimize Your Ping
Latency starts at the network level. No matter how optimized your code is, the speed of light remains the ultimate bottleneck. The single most impactful action you can take to minimize latency is to achieve geographical co-location with the gRPC endpoint.
Target: 0-1 ms network latency. Aim for sub-millisecond ping between your client application and the gRPC endpoint.
Recommended Data Centers
Co-locate your infrastructure in these data centers for optimal performance:
Providers: Cherry Servers, Latitude, velia.net, hostkey.com, Teraswitch
Endpoint: https://grpc.solanatracker.io
Measuring Latency
It's possible to achieve 1-2 ms latency without co-locating if you're in the same region. Use these tools to measure your latency:
# Test ping to EU endpoint
ping grpc.solanatracker.io
# Test ping to US endpoint
ping grpc-us.solanatracker.io
# Detailed route analysis
mtr grpc.solanatracker.io
Latency Targets:
Excellent: 0-1ms (co-located)
Good: 1-2ms (same region)
Acceptable: 2-5ms (nearby region)
Poor: >5ms (consider relocating)
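To check latency from inside your application rather than the shell, you can time a raw TCP connect from Node.js; the handshake costs one round trip, so the result is comparable to ping. A minimal sketch, assuming the endpoint accepts connections on port 443:

import { connect } from 'net';

function measureConnectLatency(host: string, port = 443): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const socket = connect(port, host, () => {
      // Connection established - one full round trip has elapsed
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      socket.destroy();
      resolve(elapsedMs);
    });
    socket.on('error', reject);
    socket.setTimeout(5000, () => {
      socket.destroy();
      reject(new Error('Connect timed out'));
    });
  });
}

measureConnectLatency('grpc.solanatracker.io')
  .then((ms) => console.log(`TCP connect latency: ${ms.toFixed(2)} ms`))
  .catch(console.error);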
Connection Management
Distribute Load Across Multiple Clients
Streaming multiple high-load addresses (e.g., Meteora DLMM, Pump.fun, DEX programs) in a single subscription can quickly overwhelm a single client connection.
Problems with a single client:
Message backlog at network layer
Single consumer thread bottleneck
Client may disconnect due to unmanageable backlog
Best Practice: Split high-load addresses across multiple gRPC clients and distribute processing across separate CPU cores/threads.
When to Use Multiple Clients
High-Load Programs
Use separate clients for:
DEX programs (Raydium, Jupiter, Orca)
Pump.fun program
Meteora DLMM
Popular lending protocols
// Separate client for each high-volume program
const raydiumClient = new Client(endpoint, token);
const jupiterClient = new Client(endpoint, token);
const pumpfunClient = new Client(endpoint, token);

// Process on different threads/cores
await Promise.all([
  processRaydiumStream(raydiumClient),
  processJupiterStream(jupiterClient),
  processPumpfunStream(pumpfunClient)
]);
Connection Efficiency
Over-fragmenting your connections for moderate-load addresses can quickly hit connection limits.
Best Practice: Don't create a new connection for each token, pool, or wallet address. Combine them in a single subscribe request.
Example of efficient connection management:
class ConnectionManager {
  private highLoadClients: Map<string, Client> = new Map();
  private moderateLoadClient: Client;

  async initialize() {
    // Separate clients for high-load programs
    this.highLoadClients.set('raydium', new Client(endpoint, token));
    this.highLoadClients.set('jupiter', new Client(endpoint, token));
    this.highLoadClients.set('pumpfun', new Client(endpoint, token));

    // Single client for all moderate-load addresses
    this.moderateLoadClient = new Client(endpoint, token);
  }

  async addModerateLoadAddress(address: string) {
    // Dynamically add address to existing subscription
    await this.moderateLoadClient.modifySubscription({
      accounts: {
        moderate: {
          account: [address],
          owner: [],
          filters: []
        }
      }
    });
  }
}
Learn more about modifying subscriptions on the fly.
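For the initial subscription itself, several moderate-load addresses can share one request rather than one connection each. A minimal sketch, assuming the Yellowstone gRPC SubscribeRequest field layout; the addresses are placeholders:

const stream = await moderateLoadClient.subscribe();

// One subscription covering several moderate-load addresses
stream.write({
  accounts: {
    moderate: {
      // Placeholder addresses - combine tokens, pools, and wallets here
      account: [
        "TokenAddress1111111111111111111111111111111",
        "PoolAddress11111111111111111111111111111111",
        "WalletAddress111111111111111111111111111111"
      ],
      owner: [],
      filters: []
    }
  },
  slots: {},
  transactions: {},
  transactionsStatus: {},
  blocks: {},
  blocksMeta: {},
  entry: {},
  accountsDataSlice: []
});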
Client Configuration
Expand Max Message Size
Yellowstone gRPC clients have a default maximum size of 4 MB (4194304 bytes) for incoming messages. When streaming account updates or block updates, you can hit this limit.
Required: Configure your gRPC client to avoid hitting the 4 MB message limit.
const client = new Client(
  "https://grpc.solanatracker.io",
  "your-x-token",
  {
    "grpc.max_receive_message_length": 1024 * 1024 * 1024 // 1 GB
  }
);
Keepalive Configuration
Configure keepalive to maintain persistent connections:
const client = new Client(
  endpoint,
  token,
  {
    "grpc.keepalive_time_ms": 30000,          // Send keepalive every 30s
    "grpc.keepalive_timeout_ms": 5000,        // Wait 5s for keepalive response
    "grpc.keepalive_permit_without_calls": 1  // Allow keepalive without active calls
  }
);
Connection Timeouts
Set reasonable timeout values:
{
  "grpc.initial_reconnect_backoff_ms": 1000, // Start with 1s backoff
  "grpc.max_reconnect_backoff_ms": 30000,    // Max 30s backoff
  "grpc.min_reconnect_backoff_ms": 1000      // Min 1s backoff
}
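Putting it together, all of the channel options above can go on a single client:

const client = new Client(endpoint, token, {
  "grpc.max_receive_message_length": 1024 * 1024 * 1024,
  "grpc.keepalive_time_ms": 30000,
  "grpc.keepalive_timeout_ms": 5000,
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.initial_reconnect_backoff_ms": 1000,
  "grpc.max_reconnect_backoff_ms": 30000,
  "grpc.min_reconnect_backoff_ms": 1000
});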
Processing Optimization
Asynchronous Processing
Decouple I/O (receiving messages) from CPU-bound work (deserializing, filtering, business logic):
class StreamProcessor {
  private messageQueue: Array<any> = [];
  private processing = false;

  async handleMessage(data: any) {
    // Add to queue (fast)
    this.messageQueue.push(data);

    // Process asynchronously
    if (!this.processing) {
      this.processQueue();
    }
  }

  private async processQueue() {
    this.processing = true;
    while (this.messageQueue.length > 0) {
      const message = this.messageQueue.shift();
      // CPU-intensive processing
      await this.parseAndProcess(message);
    }
    this.processing = false;
  }

  private async parseAndProcess(message: any) {
    // Your parsing and business logic here
  }
}
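Wiring the processor into a stream then amounts to enqueueing from the data handler, which keeps the event loop free for network I/O (this assumes client.subscribe() returns a duplex stream, consistent with the error-handling examples below):

const processor = new StreamProcessor();
const stream = await client.subscribe();

stream.on("data", (message: any) => {
  // Enqueue only - the heavy work happens in processQueue()
  processor.handleMessage(message);
});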
Worker Thread Distribution
For extremely high-volume streams, distribute processing across worker threads:
import { Worker } from 'worker_threads';

class MultiThreadProcessor {
  private workers: Worker[] = [];
  private roundRobin = 0;

  constructor(numWorkers: number = 4) {
    for (let i = 0; i < numWorkers; i++) {
      this.workers.push(new Worker('./processor-worker.js'));
    }
  }

  async handleMessage(data: any) {
    // Distribute to workers in round-robin fashion
    const worker = this.workers[this.roundRobin];
    worker.postMessage(data);
    this.roundRobin = (this.roundRobin + 1) % this.workers.length;
  }
}
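The worker script itself is not shown above; a minimal sketch of what ./processor-worker.js might contain (compiled from TypeScript), with the parsing helper as a placeholder:

import { parentPort } from 'worker_threads';

parentPort?.on('message', (data: any) => {
  // CPU-intensive parsing runs here, off the main thread
  const result = parseAndProcess(data);
  parentPort?.postMessage(result);
});

function parseAndProcess(data: any) {
  // Your parsing and business logic here (placeholder)
  return data;
}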
Error Handling
Implement Exponential Backoff
class ResilientStreamManager {
  private reconnectAttempts = 0;
  private readonly maxReconnectAttempts = 10;
  private readonly baseReconnectDelay = 1000;

  private async reconnect(subscribeRequest: SubscribeRequest): Promise<void> {
    if (this.reconnectAttempts >= this.maxReconnectAttempts) {
      console.error("Max reconnection attempts reached.");
      return;
    }

    this.reconnectAttempts++;
    const delay = this.baseReconnectDelay * Math.pow(2, Math.min(this.reconnectAttempts - 1, 5));
    console.log(`Reconnect attempt ${this.reconnectAttempts}/${this.maxReconnectAttempts} in ${delay}ms...`);

    setTimeout(() => {
      this.connect(subscribeRequest).catch(console.error);
    }, delay);
  }
}
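The connect() method referenced above is not shown; the essential detail is resetting reconnectAttempts once the stream is healthy again, so a later failure starts back at the base delay. A sketch under those assumptions (this.client and this.handleMessage are placeholders):

// Inside ResilientStreamManager - a sketch of the connect() it calls
private async connect(subscribeRequest: SubscribeRequest): Promise<void> {
  const stream = await this.client.subscribe();

  stream.on("data", (message: any) => {
    this.reconnectAttempts = 0; // Healthy again - reset the backoff
    this.handleMessage(message); // Placeholder for your processing entry point
  });

  stream.on("error", () => this.reconnect(subscribeRequest));
  stream.on("end", () => this.reconnect(subscribeRequest));

  stream.write(subscribeRequest);
}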
Handle Stream Errors Gracefully
stream . on ( "error" , ( error ) => {
console . error ( "Stream error:" , error );
// Log error details
logErrorToMonitoring ( error );
// Don't crash - attempt to reconnect
this . handleDisconnect ( subscribeRequest );
});
stream . on ( "end" , () => {
console . log ( "Stream ended, reconnecting..." );
this . handleDisconnect ( subscribeRequest );
});
Monitoring & Observability
Track Key Metrics
class StreamMetrics {
  // Counters are public so the AnomalyDetector below can read them
  messagesReceived = 0;
  messagesProcessed = 0;
  errors = 0;
  private lastReportTime = Date.now();
  private lastProcessedCount = 0;

  recordMessage() {
    this.messagesReceived++;
  }

  recordProcessed() {
    this.messagesProcessed++;
  }

  recordError() {
    this.errors++;
  }

  report() {
    const now = Date.now();
    const elapsed = (now - this.lastReportTime) / 1000;
    // Rate is computed over the last reporting interval, not the whole run
    const processedThisInterval = this.messagesProcessed - this.lastProcessedCount;

    console.log(`\n=== Stream Metrics (${elapsed.toFixed(1)}s) ===`);
    console.log(`Messages Received: ${this.messagesReceived}`);
    console.log(`Messages Processed: ${this.messagesProcessed}`);
    console.log(`Processing Rate: ${(processedThisInterval / elapsed).toFixed(2)}/sec`);
    console.log(`Errors: ${this.errors}`);
    console.log(`Backlog: ${this.messagesReceived - this.messagesProcessed}`);

    this.lastReportTime = now;
    this.lastProcessedCount = this.messagesProcessed;
  }
}
Alert on Anomalies
class AnomalyDetector {
  private readonly backlogThreshold = 1000;
  private readonly errorRateThreshold = 0.01; // 1%

  checkHealth(metrics: StreamMetrics) {
    const backlog = metrics.messagesReceived - metrics.messagesProcessed;
    if (backlog > this.backlogThreshold) {
      this.alert(`High backlog: ${backlog} messages`);
    }

    const errorRate = metrics.errors / metrics.messagesReceived;
    if (errorRate > this.errorRateThreshold) {
      this.alert(`High error rate: ${(errorRate * 100).toFixed(2)}%`);
    }
  }

  private alert(message: string) {
    console.error(`[ALERT] ${message}`);
    // Send to your monitoring system
  }
}
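Tying the two together: record from the stream handlers and check health on a fixed interval. The wiring below reuses the StreamProcessor example; the 10-second interval is illustrative:

const metrics = new StreamMetrics();
const detector = new AnomalyDetector();

stream.on("data", (message: any) => {
  metrics.recordMessage();
  processor.handleMessage(message);
  // Call metrics.recordProcessed() when a message finishes processing,
  // and metrics.recordError() in your error paths
});

setInterval(() => {
  detector.checkHealth(metrics);
  metrics.report();
}, 10_000);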
Security Best Practices
Protect Your Credentials
Never commit credentials to version control!
# .env file (add to .gitignore)
GRPC_ENDPOINT=https://grpc.solanatracker.io
GRPC_X_TOKEN=your-secret-token

# .gitignore
.env
*.env
.env.*
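Loading those values at startup keeps tokens out of the source tree entirely. A sketch assuming the dotenv package (any environment loader works):

import 'dotenv/config';

const endpoint = process.env.GRPC_ENDPOINT;
const token = process.env.GRPC_X_TOKEN;

// Fail fast if credentials are missing rather than connecting with undefined
if (!endpoint || !token) {
  throw new Error("GRPC_ENDPOINT and GRPC_X_TOKEN must be set");
}

const client = new Client(endpoint, token);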
Rotate Tokens Regularly
class TokenRotationManager {
  private currentToken: string;
  private nextToken?: string;

  async rotateToken(newToken: string) {
    this.nextToken = newToken;
    // Gracefully switch to new token
    await this.reconnectWithNewToken();
    this.currentToken = this.nextToken;
    this.nextToken = undefined;
  }

  private async reconnectWithNewToken() {
    // Disconnect current stream
    this.disconnect();
    // Connect with new token
    await this.connect(this.nextToken!);
  }
}
Production Deployment Checklist:
Co-locate with the gRPC endpoint, or verify latency against the targets above
Raise grpc.max_receive_message_length above the 4 MB default
Configure keepalive and reconnect backoff
Split high-load programs across dedicated clients
Implement reconnection with exponential backoff
Track metrics and alert on backlog and error-rate anomalies
Keep credentials in environment variables and rotate tokens regularly
Common Pitfalls
Processing Blocking Network I/O
Problem: Slow processing logic blocks the receipt of new messages.
Solution: Decouple message receipt from processing using queues and asynchronous processing.
Connection Fragmentation
Problem: Creating a separate connection for each address.
Solution: Combine moderate-load addresses into a single subscription.
Insufficient Error Handling
Problem: Stream crashes on errors without recovery.
Solution: Implement comprehensive error handling with automatic reconnection.
No Monitoring
Problem: Can't diagnose performance issues or failures.
Solution: Implement metrics tracking and alerting from day one.