## Smooth out traffic peaks
Every rate limiter from this package has an `execEvenly` option. If `execEvenly` is set to `true`, actions are delayed evenly, but points are consumed immediately. This effectively creates a FIFO queue using the `setTimeout` function, and it works in distributed environments too.
```javascript
const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');

// ioredis takes options at the top level (not nested under `options`)
const redisClient = new Redis({
  enableOfflineQueue: false,
});

const opts = {
  storeClient: redisClient,
  points: 10, // 10 tokens
  duration: 1, // per second, so default `execEvenlyMinDelayMs` is 100ms
  execEvenly: true, // delay actions evenly
};

const rateLimiterRedis = new RateLimiterRedis(opts);
```
- 1st executed immediately. 9 remaining points.
- 2nd delayed for 100ms (`1000 / (8 + 2)`). 8 remaining points.
- 3rd delayed for 111ms (`1000 / (7 + 2)`). 7 remaining points.
- 4th delayed for 125ms (`1000 / (6 + 2)`). 6 remaining points.
- ...
- 9th delayed for 333ms (`1000 / (1 + 2)`). 1 remaining point.
- 10th delayed for 500ms (`1000 / (0 + 2)`). 0 remaining points.
The next example demonstrates how `execEvenlyMinDelayMs` shapes traffic, and how some actions may be pushed into the next duration. Here `execEvenlyMinDelayMs` is 100ms, since `duration` is 1000ms with 10 `points`.
- 1st executed immediately. 9 remaining points. The 1-second duration starts here. All other actions consume at the last ms and are executed in the next duration.
- 2nd delayed for 200ms (`1 / (8 + 2)` is less than 100, so `execEvenlyMinDelayMs` is used to calculate the delay: `2 * 100`). 2 consumed points and 8 remaining points.
- 3rd delayed for 300ms (`3 * 100`). 3 consumed points and 7 remaining points.
- 4th delayed for 400ms (`4 * 100`). 4 consumed points and 6 remaining points.
- ...
- 9th delayed for 900ms (`9 * 100`). 9 consumed points and 1 remaining point.
- 10th delayed for 1000ms (`10 * 100`). 10 consumed points and 0 remaining points.

So the last action in the current duration is executed at the end of the next duration.
An action is delayed by the formula `milliseconds before points reset / (remaining points in current duration + 2)`, with `execEvenlyMinDelayMs = duration in milliseconds / points` between actions. If the calculated delay is less than `execEvenlyMinDelayMs`, the delay becomes `consumed points in current duration * execEvenlyMinDelayMs` instead. This formula results in a dynamic leak rate, as it aims to divide the spare time evenly before points reset.

Executing actions evenly relies on fixed-window rate limiting. Therefore, if the first action in a duration is done immediately and all other actions arrive in the last millisecond, there is no spare time left to divide between them; this is where `execEvenlyMinDelayMs` takes effect, as the second example above shows.