Rate Limits

The Transfer API implements rate limiting to ensure fair usage and maintain service quality for all clients.

Overview

Rate limits control the number of API requests you can make within a specific time window. Exceeding these limits results in temporary request rejection until the limit resets.

Why Rate Limits?

  • Prevent abuse - Protect against excessive API usage
  • Ensure availability - Maintain service quality for all clients
  • Fair resource allocation - Distribute API capacity equitably
  • System stability - Prevent overload and service degradation

Good Practice

Design your integration to stay well below rate limits with efficient request patterns and caching.

Rate Limit Values

Standard Endpoints

| Endpoint Category   | Limit        | Window   | Per       |
| ------------------- | ------------ | -------- | --------- |
| Standard Operations | 100 requests | 1 minute | Client ID |
| Batch Operations    | 10 requests  | 1 minute | Client ID |
| Transfer Queries    | 100 results  | 1 page   | Request   |

Detailed Breakdown

Limit: 100 requests per minute

Applies to:

  • GET /transfer/{id} - Get single transfer
  • PUT /transfer/{id}/reserve - Reserve transfer
  • PUT /transfer/{id}/provide-password - Provide password
  • PUT /transfer/{id}/sign - Sign transfer
  • PUT /transfer/{id}/register - Register transfer
  • DELETE /transfer/{id} - Revoke transfer
  • POST /transfer - Create single transfer

Example:

Window: 12:00:00 - 12:00:59
Allowed: 100 requests maximum
Reset: 12:01:00 (new window starts)

Calculation:

  • Per client ID
  • Rolling 60-second window
  • Resets every minute
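One simple way to stay under a per-minute limit is to pace requests evenly across the window instead of bursting. A minimal sketch (the helper names and the `doRequest` callback are ours, not part of the Transfer API):

```javascript
// 100 requests per 60 000 ms => at most one request every 600 ms
function requestIntervalMs(maxRequests, windowMs) {
  return Math.ceil(windowMs / maxRequests);
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Fire `count` requests at a steady pace, spacing them by the interval
async function pacedRequests(doRequest, count, maxRequests = 100, windowMs = 60000) {
  const interval = requestIntervalMs(maxRequests, windowMs);
  for (let i = 0; i < count; i++) {
    await doRequest();
    if (i < count - 1) await sleep(interval);
  }
}
```

Even pacing keeps you comfortably inside the window regardless of whether the server counts on fixed or rolling boundaries.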

Rate Limit Headers

Response Headers

Every API response includes rate limit information in headers:

HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1729426860
Content-Type: application/json

{...response body...}

Header Details

| Header                | Type    | Description                          | Example    |
| --------------------- | ------- | ------------------------------------ | ---------- |
| X-RateLimit-Limit     | integer | Maximum requests allowed in window   | 100        |
| X-RateLimit-Remaining | integer | Requests remaining in current window | 87         |
| X-RateLimit-Reset     | integer | Unix timestamp when limit resets     | 1729426860 |

Check Headers

Always check X-RateLimit-Remaining to avoid hitting limits.

Monitoring Rate Limits

async function makeRequestWithRateLimitCheck(url, options) {
  const response = await fetch(url, options);

  // Extract rate limit headers
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  // Log rate limit info
  console.log(`Rate limit: ${remaining}/${limit} remaining`);

  // Warn if approaching limit
  if (remaining < limit * 0.1) { // Less than 10% remaining
    console.warn('Approaching rate limit!');
    const waitTime = (reset - Math.floor(Date.now() / 1000)) * 1000;
    console.warn(`Limit resets in ${waitTime}ms`);
  }

  return response;
}

Rate Limit Exceeded

Error Response

When you exceed the rate limit, you receive a 429 Too Many Requests response:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1729426920
Retry-After: 45
Content-Type: application/json

{
"error": "rate_limit_exceeded",
"error_description": "API rate limit exceeded. Try again in 45 seconds.",
"retry_after": 45
}

Error Details

| Field             | Type    | Description                     |
| ----------------- | ------- | ------------------------------- |
| error             | string  | Error code: rate_limit_exceeded |
| error_description | string  | Human-readable message          |
| retry_after       | integer | Seconds to wait before retrying |

HTTP Status Code

  • Status: 429 Too Many Requests
  • Retry-After header: Seconds until you can retry
  • X-RateLimit-Reset header: Unix timestamp of reset time
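Both headers can feed a single wait-time calculation: prefer Retry-After when present, fall back to X-RateLimit-Reset, and use a conservative default otherwise. A sketch (the helper name is ours; `headers` is anything with a `get` method, such as a fetch `Headers` object):

```javascript
// Derive how many seconds to wait after a 429 response
function waitSecondsFrom429(headers, nowSeconds = Math.floor(Date.now() / 1000)) {
  // Retry-After is authoritative when the server sends it
  const retryAfter = parseInt(headers.get('Retry-After'), 10);
  if (!Number.isNaN(retryAfter)) return retryAfter;

  // Otherwise compute the distance to the advertised reset timestamp
  const reset = parseInt(headers.get('X-RateLimit-Reset'), 10);
  if (!Number.isNaN(reset)) return Math.max(0, reset - nowSeconds);

  return 60; // conservative default when neither header is present
}
```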

Handling Rate Limits

Strategy 1: Respect Retry-After

// Small helper used throughout these examples
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        // Rate limit exceeded
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;

        console.warn(`Rate limit exceeded. Waiting ${retryAfter}s...`);

        if (attempt < maxRetries - 1) {
          await sleep(retryAfter * 1000);
          continue; // Retry
        }
        throw new Error('Rate limit exceeded, max retries reached');
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
    }
  }
}

Strategy 2: Proactive Rate Limiting

class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.requests = [];
  }

  async waitIfNeeded() {
    const now = Date.now();

    // Remove old requests outside window
    this.requests = this.requests.filter(
      time => now - time < this.windowMs
    );

    // Check if at limit
    if (this.requests.length >= this.maxRequests) {
      // Wait until the oldest request falls out of the window
      const oldestRequest = this.requests[0];
      const waitTime = this.windowMs - (now - oldestRequest);

      console.log(`Rate limit reached. Waiting ${waitTime}ms...`);
      await sleep(waitTime);

      // Recursive call to recheck
      return this.waitIfNeeded();
    }

    // Record this request
    this.requests.push(now);
  }

  async execute(fn) {
    await this.waitIfNeeded();
    return await fn();
  }
}

// Usage
const limiter = new RateLimiter(100, 60000); // 100 req/min

async function createTransfer(data) {
  return await limiter.execute(async () => {
    return await apiClient.createTransfer(data);
  });
}

Strategy 3: Request Queuing

class RequestQueue {
  constructor(rateLimiter) {
    this.queue = [];
    this.processing = false;
    this.rateLimiter = rateLimiter;
  }

  async enqueue(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;

    this.processing = true;

    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();

      try {
        await this.rateLimiter.waitIfNeeded();
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(new RateLimiter(100, 60000));

// Enqueue requests
const results = await Promise.all([
  queue.enqueue(() => createTransfer(data1)),
  queue.enqueue(() => createTransfer(data2)),
  queue.enqueue(() => createTransfer(data3))
]);

Strategy 4: Exponential Backoff

async function exponentialBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429) {
        // Calculate backoff time: 2^attempt * 1000ms, capped at 32s
        const backoffTime = Math.min(Math.pow(2, attempt) * 1000, 32000);

        console.log(`Attempt ${attempt + 1} failed. Backing off ${backoffTime}ms`);

        if (attempt < maxRetries - 1) {
          await sleep(backoffTime);
          continue;
        }
      }

      throw error;
    }
  }
}

// Usage
const transfer = await exponentialBackoff(
  () => createTransfer(transferData)
);

Rate Limit Troubleshooting

Frequently Hit Rate Limits

Problem: Consistently receiving 429 errors

Common causes:

  • Too many requests in short period
  • Not implementing retry delays
  • Polling instead of using callbacks
  • Not batching operations

Solutions:

  1. Implement rate limit checking before requests
  2. Use exponential backoff for retries
  3. Switch to callback-based approach
  4. Use batch endpoints where possible
  5. Add request queuing
  6. Cache frequently accessed data
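Caching is often the highest-leverage fix, since repeated reads of the same transfer spend rate-limited requests for identical data. A minimal TTL cache sketch (class and function names here are illustrative, not part of the Transfer API):

```javascript
// Cache entries expire after a fixed time-to-live
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }

  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || now - entry.storedAt > this.ttlMs) return undefined;
    return entry.value;
  }

  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, storedAt: now });
  }
}

// Consult the cache before spending a rate-limited request
async function getTransferCached(id, cache, fetchTransfer) {
  const cached = cache.get(id);
  if (cached !== undefined) return cached;
  const transfer = await fetchTransfer(id);
  cache.set(id, transfer);
  return transfer;
}
```

Pick a TTL short enough that stale transfer state is acceptable for your use case.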

Rate Limit Reset Not Working

Problem: Still getting 429 after waiting for reset

Possible causes:

  • Incorrect reset time calculation
  • Using wrong time zone
  • Not waiting full period

Solutions:

  1. Use X-RateLimit-Reset header (Unix timestamp)
  2. Calculate wait time correctly:
    const resetTime = parseInt(headers['X-RateLimit-Reset']);
    const now = Math.floor(Date.now() / 1000);
    const waitSeconds = Math.max(0, resetTime - now);
  3. Add buffer time (extra few seconds)
  4. Verify system clock is synchronized
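Putting steps 2-4 together: sleep until the advertised reset time plus a small safety buffer to absorb clock skew. A sketch (the helper names and 2-second buffer are our suggestion, not an API requirement):

```javascript
// Milliseconds to wait: distance to the reset timestamp plus a buffer
function msUntilReset(resetUnixSeconds, nowMs = Date.now(), bufferMs = 2000) {
  return Math.max(0, resetUnixSeconds * 1000 - nowMs) + bufferMs;
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// resetUnixSeconds comes from the X-RateLimit-Reset header
async function waitForReset(resetUnixSeconds) {
  await sleep(msUntilReset(resetUnixSeconds));
}
```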

Different Limits for Different Endpoints

Problem: Confused about which limits apply

Solution:

  • Standard endpoints: 100/min
  • Batch endpoints: 10/min
  • Query pagination: 100 results/page

Track limits separately:

const limiters = {
  standard: new RateLimiter(100, 60000),
  batch: new RateLimiter(10, 60000)
};

// Use appropriate limiter
await limiters.standard.execute(() => getTransfer(id));
await limiters.batch.execute(() => createBatchTransfer(data));

Testing Rate Limits

Test Script

async function testRateLimits() {
  console.log('Testing rate limits...');

  const results = {
    successful: 0,
    rateLimited: 0,
    errors: 0
  };

  // Make 110 requests (10 over limit)
  for (let i = 0; i < 110; i++) {
    try {
      const response = await getTransfer('test-transfer-id');
      results.successful++;

      // Log remaining
      const remaining = response.headers['X-RateLimit-Remaining'];
      console.log(`Request ${i + 1}: ${remaining} remaining`);
    } catch (error) {
      if (error.status === 429) {
        results.rateLimited++;
        console.log(`Request ${i + 1}: Rate limited!`);
      } else {
        results.errors++;
      }
    }
  }

  console.log('Results:', results);
  // Expected: ~100 successful, ~10 rate limited
}