Overview

Rate limiting prevents excessive API calls and ensures fair resource usage for all integrators.

Rate Limited Endpoints

/foods/get-foods

Limit: 1 request per 30 seconds
Why: Menu data doesn’t change frequently

/orders/get-current

Limit: 1 request per 30 seconds
Why: Prevents aggressive polling

How It Works

  1. First Request - Your first call to a rate-limited endpoint succeeds normally.
  2. Cooldown Starts - A 30-second cooldown timer starts from the first successful request.
  3. Subsequent Requests - Any requests within the 30-second window return a 429 error.
  4. Cooldown Expires - After 30 seconds, you can make another request.
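The cooldown described above can be modeled as a small pure check (a sketch; the constant and function names are illustrative, only the 30-second value comes from the documented limit):

```javascript
// The documented cooldown window: 30 seconds per rate-limited endpoint.
const COOLDOWN_MS = 30000;

// Returns true when a new request is allowed: either no call has been
// made yet, or at least 30 seconds have passed since the last one.
function cooldownExpired(lastCallAt, now) {
  return lastCallAt === null || now - lastCallAt >= COOLDOWN_MS;
}
```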

Rate Limit Error Response

{
  "status": false,
  "error": "çok fazla istek",
  "message": "Çok fazla istek. Lütfen 30 saniye bekleyin."
}
HTTP Status: 429 Too Many Requests

The error and message fields are returned in Turkish; they translate to "too many requests" and "Too many requests. Please wait 30 seconds."
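The examples in this guide call a sleep helper and check error.status === 429; minimal versions (these helper names are assumptions, not part of the API) might look like:

```javascript
// Resolve after the given number of milliseconds; used to wait out cooldowns.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// True when an error object carries the 429 status shown above.
function isRateLimited(error) {
  return Boolean(error) && error.status === 429;
}
```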

Polling Strategy

// Poll every 30 seconds (or slightly more)
setInterval(async () => {
  try {
    const orders = await getOrders();
    processOrders(orders);
  } catch (error) {
    if (error.status === 429) {
      console.log('Rate limited (this shouldn\'t happen with 30s interval)');
    }
  }
}, 30000); // 30 seconds

❌ Wrong Approach

// ❌ DON'T: Too frequent!
setInterval(pollOrders, 5000); // Every 5 seconds - will get rate limited!

// ❌ DON'T: Aggressive retry
while (true) {
  await pollOrders(); // Immediate retry - will get rate limited!
}

Handling 429 Errors

async function pollOrders() {
  try {
    return await getOrders();
  } catch (error) {
    if (error.status === 429) {
      // Wait 30 seconds and retry
      await sleep(30000);
      return await getOrders();
    }
    throw error;
  }
}

Best Practices

const ENDPOINTS = {
  '/orders/get-current': 30000,  // 30 seconds
  '/foods/get-foods': 1800000,   // 30 minutes (cache locally!)
};

function getPollingInterval(endpoint) {
  return ENDPOINTS[endpoint] || 60000; // Default 1 minute
}

Don’t fetch menu on every order - cache it!

class MenuCache {
  constructor() {
    this.menu = null;
    this.lastFetch = null;
  }
  
  async getMenu() {
    const now = Date.now();
    const cacheAge = now - this.lastFetch;
    
    // Use cache if less than 30 minutes old
    if (this.menu && cacheAge < 1800000) {
      return this.menu;
    }
    
    // Fetch fresh menu
    this.menu = await fetchMenu();
    this.lastFetch = now;
    
    return this.menu;
  }
  
  invalidate() {
    this.menu = null;
  }
}

const menuCache = new MenuCache();

// Use cached menu when processing orders
async function processOrder(order) {
  const menu = await menuCache.getMenu();
  // Process order with cached menu
}
class RateLimitTracker {
  constructor() {
    this.lastCalls = {};
  }
  
  canCall(endpoint, limit = 30000) {
    const now = Date.now();
    const lastCall = this.lastCalls[endpoint];
    
    if (!lastCall) {
      return true;
    }
    
    const timeSince = now - lastCall;
    return timeSince >= limit;
  }
  
  recordCall(endpoint) {
    this.lastCalls[endpoint] = Date.now();
  }
  
  async safeCall(endpoint, apiFunction, limit = 30000) {
    if (!this.canCall(endpoint, limit)) {
      const waitTime = limit - (Date.now() - this.lastCalls[endpoint]);
      console.log(`Waiting ${waitTime}ms to avoid rate limit...`);
      await sleep(waitTime);
    }
    
    this.recordCall(endpoint);
    return await apiFunction();
  }
}

const rateLimiter = new RateLimitTracker();

// Usage
const orders = await rateLimiter.safeCall(
  '/orders/get-current',
  () => getOrders()
);
class RequestQueue {
  constructor() {
    this.queue = [];
    this.processing = false;
    this.minDelay = 30000; // 30 seconds between requests
  }
  
  add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }
  
  async process() {
    if (this.processing || this.queue.length === 0) {
      return;
    }
    
    this.processing = true;
    
    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();
      
      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }
      
      // Wait before next request
      if (this.queue.length > 0) {
        await sleep(this.minDelay);
      }
    }
    
    this.processing = false;
  }
}

const queue = new RequestQueue();

// Usage - requests are automatically spaced out
const orders1 = await queue.add(() => getOrders());
const orders2 = await queue.add(() => getOrders()); // Will wait 30s

Monitoring Rate Limits

class RateLimitMonitor {
  constructor() {
    this.rateLimitHits = 0;
    this.totalRequests = 0;
    this.startTime = Date.now();
  }
  
  recordRequest() {
    this.totalRequests++;
  }
  
  recordRateLimit() {
    this.rateLimitHits++;
    console.warn(`⚠️ Rate limit hit! Total: ${this.rateLimitHits}`);
    
    if (this.rateLimitHits > 10) {
      this.sendAlert();
    }
  }
  
  getStats() {
    const uptime = Date.now() - this.startTime;
    const hitRate = this.totalRequests > 0
      ? (this.rateLimitHits / this.totalRequests) * 100
      : 0;
    
    return {
      totalRequests: this.totalRequests,
      rateLimitHits: this.rateLimitHits,
      hitRate: hitRate.toFixed(2) + '%',
      uptime: Math.floor(uptime / 1000) + 's'
    };
  }
  
  sendAlert() {
    console.error('🚨 High rate limit hits detected!');
    console.log('Stats:', this.getStats());
    // Send to monitoring service
  }
}

const monitor = new RateLimitMonitor();

// Usage
async function monitoredRequest() {
  monitor.recordRequest();
  
  try {
    return await apiCall();
  } catch (error) {
    if (error.status === 429) {
      monitor.recordRateLimit();
    }
    throw error;
  }
}

Testing Rate Limits

async function testRateLimiting() {
  console.log('Testing rate limit...');
  
  // First call - should succeed
  console.log('Call 1...');
  try {
    await getOrders();
    console.log('✅ Call 1 succeeded');
  } catch (error) {
    console.error('❌ Call 1 failed:', error.message);
  }
  
  // Immediate second call - should fail with 429
  console.log('Call 2 (immediate)...');
  try {
    await getOrders();
    console.log('✅ Call 2 succeeded (unexpected!)');
  } catch (error) {
    if (error.status === 429) {
      console.log('✅ Call 2 rate limited (expected)');
    } else {
      console.error('❌ Call 2 failed with different error:', error);
    }
  }
  
  // Wait 30 seconds
  console.log('Waiting 30 seconds...');
  await sleep(30000);
  
  // Third call - should succeed again
  console.log('Call 3 (after 30s)...');
  try {
    await getOrders();
    console.log('✅ Call 3 succeeded');
  } catch (error) {
    console.error('❌ Call 3 failed:', error.message);
  }
}

Common Mistakes

Avoid These Patterns:

  1. Polling too frequently - Always use 30+ second intervals
  2. Immediate retry on 429 - Wait the full 30 seconds
  3. Multiple simultaneous requests - Space out requests
  4. Not caching menu data - Cache locally, don’t refetch constantly
  5. Ignoring 429 errors - Handle them gracefully
Recommended Polling Intervals

| Endpoint                | Recommended | Maximum   | Reason                       |
| ----------------------- | ----------- | --------- | ---------------------------- |
| /orders/get-current     | 30-40s      | Every 30s | Real-time order updates      |
| /foods/get-foods        | 30-60min    | Every 30s | Menu doesn’t change often    |
| /restaurants/status/get | 5min        | N/A       | Status changes infrequently  |

Production Configuration

const config = {
  development: {
    orderPolling: 30000,      // 30 seconds
    menuRefresh: 300000,      // 5 minutes (more frequent for testing)
    statusCheck: 60000        // 1 minute
  },
  production: {
    orderPolling: 30000,      // 30 seconds
    menuRefresh: 1800000,     // 30 minutes
    statusCheck: 300000       // 5 minutes
  }
};

const env = process.env.NODE_ENV || 'development';
const intervals = config[env];

// Use in your application
setInterval(pollOrders, intervals.orderPolling);
setInterval(refreshMenu, intervals.menuRefresh);

Summary

  Do:

  • Poll orders every 30+ seconds
  • Cache menu data for 30+ minutes
  • Handle 429 errors gracefully
  • Wait 30 seconds before retrying
  • Monitor rate limit hits
  • Use request queuing

  Don’t:

  • Poll more frequently than every 30 seconds
  • Fetch the menu on every order
  • Ignore 429 errors
  • Retry immediately
  • Make multiple simultaneous requests
  • Default to aggressive intervals

Need Higher Limits?

If your use case requires higher rate limits, contact us to discuss options.