# Rate Limits
The API applies rate limits to ensure fair usage and protect service reliability. Here's how they work and how to stay within them.
## Limits
| Scope | Limit | Window |
|---|---|---|
| All endpoints | 60 requests | Per minute, per stream |
The limit applies across all endpoints and all API keys for a stream combined. If you have multiple keys for the same stream, they share the same 60-request budget. If you hit 60 requests within a one-minute window, subsequent requests return 429 Too Many Requests until the window resets.
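Because every key for a stream draws from the same budget, it can help to track that budget client-side before sending a request. A minimal sketch (the class name and defaults are illustrative, not part of the API; it mirrors the documented behaviour of a window that starts at the first request and resets 60 seconds later):

```js
// Track your own request budget client-side (illustrative sketch).
// The window starts at the first request and resets 60 seconds later,
// matching the behaviour described above.
class RequestBudget {
  constructor(limit = 60, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.windowStart = null;
    this.count = 0;
  }

  // Returns true (and records the request) if sending now stays within
  // the budget; false if you should wait for the window to reset.
  tryAcquire(now = Date.now()) {
    if (this.windowStart === null || now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // a new window begins with this request
      this.count = 0;
    }
    if (this.count >= this.limit) return false;
    this.count += 1;
    return true;
  }
}
```

If `tryAcquire()` returns false, queue or drop the request instead of sending it and eating a 429.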
## What happens when you hit the limit

```
HTTP 429 Too Many Requests

{
  "message": "Rate limit exceeded (60 requests per minute)"
}
```
Wait a few seconds and try again. The window resets after 60 seconds from your first request in the window.
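A small retry helper that waits out a 429 might look like this. This is a sketch, not an official client: `fetchFn` is a stand-in for your actual fetch call, and the fixed wait interval is an assumption, since no `Retry-After` header is documented here.

```js
// Retry a request when it comes back 429 (illustrative sketch).
// fetchFn stands in for () => fetch(url, { headers }).
async function fetchWithRetry(fetchFn, { retries = 3, waitMs = 5000 } = {}) {
  let response;
  for (let attempt = 0; attempt <= retries; attempt++) {
    response = await fetchFn();
    if (response.status !== 429) return response; // success or a non-rate-limit error
    if (attempt < retries) {
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
  return response; // still 429 after all retries: surface it to the caller
}
```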
## Staying within limits

### Use caching on your side
The most effective way to stay within limits is to cache API responses in your application. Most station data doesn't change more than a few times per day.
Recommended cache durations:
| Data | Suggested cache | Why |
|---|---|---|
| Station info | 5–10 minutes | Rarely changes |
| Shows list | 2–5 minutes | Changes when shows are added/edited |
| Episodes list | 1–2 minutes | Changes when new episodes publish |
| Schedule | 5–10 minutes | Changes when schedule is edited |
| Now playing | 15–30 seconds | Changes when shows transition |
| Search results | 1–2 minutes | Results shift as episodes are added |
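Those durations can be applied with a small in-memory TTL cache. A sketch (the function and field names are illustrative; `fetchFn` stands in for the actual fetch-and-parse call):

```js
// Minimal in-memory TTL cache keyed by URL (illustrative sketch).
const cache = new Map();

async function cachedFetch(url, fetchFn, ttlMs, now = Date.now()) {
  const entry = cache.get(url);
  if (entry && now - entry.fetchedAt < ttlMs) {
    return entry.data; // still fresh: costs no rate-limit budget
  }
  const data = await fetchFn(url); // e.g. u => fetch(u, { headers }).then(r => r.json())
  cache.set(url, { data, fetchedAt: now });
  return data;
}
```

For example, `cachedFetch('/v1/shows', getJson, 2 * 60_000)` would serve the shows list from memory for two minutes between real requests.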
### Batch your requests
If you need multiple types of data, fetch them in parallel rather than sequentially. The limit counts total requests per minute, not concurrency, so a parallel burst is fine as long as the total stays under 60.
```js
// Good: parallel requests (3 requests counted)
const [station, episodes, shows] = await Promise.all([
  fetch('/v1/station', { headers }),
  fetch('/v1/episodes?limit=10', { headers }),
  fetch('/v1/shows', { headers })
]);

// Avoid: polling in a tight loop.
// This burns through your limit quickly.
while (true) {
  await fetch('/v1/now-playing', { headers });
  await sleep(1000); // 60 requests per minute = exactly the limit
}
```
### Use cursor pagination
For large episode lists, use cursor-based pagination instead of high offsets. Cursor pagination is more efficient and gives you consistent results:
```js
// First page
const page1 = await fetch('/v1/episodes?limit=20', { headers });
const data = await page1.json();

// Next page (uses cursor from previous response)
if (data.next_cursor) {
  const page2 = await fetch(`/v1/episodes?limit=20&cursor=${data.next_cursor}`, { headers });
}
```
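Putting those two steps into a loop that collects every page might look like this. It's a sketch: I've assumed the list field is named `items` (adjust to the actual response shape), and `fetchPage` abstracts the HTTP call so the loop structure stays clear.

```js
// Walk all pages of a cursor-paginated list (illustrative sketch).
// fetchPage(cursor) should return the parsed JSON for one page,
// e.g. { items: [...], next_cursor: '...' } — next_cursor absent on the last page.
async function fetchAllPages(fetchPage) {
  const items = [];
  let cursor = null;
  do {
    const data = await fetchPage(cursor);
    items.push(...data.items);
    cursor = data.next_cursor ?? null; // no cursor means this was the last page
  } while (cursor !== null);
  return items;
}
```

Remember that each page is one request against the 60-per-minute budget, so keep `limit` high enough that large lists don't need dozens of pages.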
### Poll now-playing sparingly
If you're displaying live "now playing" information, don't poll more often than every 15 seconds. The API caches now-playing data for 15 seconds anyway, so faster polling just wastes your rate limit budget.
```js
// Good: poll every 30 seconds
setInterval(async () => {
  const response = await fetch('/v1/now-playing', { headers });
  updateNowPlaying(await response.json());
}, 30000);
```
## Server-side caching
The API caches responses server-side to keep things fast. This means:
- Two identical requests within the cache window return the same data
- After a write operation, it may take up to the cache TTL (15–60 seconds) for changes to appear in subsequent GET requests
| Endpoint | Server cache TTL |
|---|---|
| Station | 60 seconds |
| Episodes (list) | 30 seconds |
| Episodes (single) | 30 seconds |
| Shows (list) | 30 seconds |
| Shows (single) | 30 seconds |
| Schedule | 60 seconds |
| Now Playing | 15 seconds |
| Search | 30 seconds |
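If your integration writes data and then re-reads it, budget for these TTLs. A read-after-write helper might look like this; it is entirely illustrative, since there's no documented way to bypass the server cache, so the only options are to wait or to tolerate a briefly stale read.

```js
// After a write, wait out the server cache TTL before re-reading
// (illustrative sketch; waiting is the only way to guarantee freshness
// when the server cache can't be bypassed).
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function writeThenRead(writeFn, readFn, ttlMs) {
  const writeResult = await writeFn();
  await sleep(ttlMs); // allow the cached entry to expire
  return { writeResult, fresh: await readFn() };
}
```

In many UIs it's simpler to update local state optimistically after the write and let the next scheduled refresh pick up the server's version.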
## If you need higher limits
60 requests per minute is enough for most integrations. If you have a use case that requires more:
- Consider caching responses on your side (most common solution)
- Batch related requests together
- Reduce polling frequency for live data
If you still need higher limits after optimising, contact support to discuss your use case.