Comparative benchmarks
bombardier -c 1000 -l -d 30s -r 2000 -t 1s http://127.0.0.1:3000/endpoint
- 1000 concurrent connections
- test duration of 30 seconds
- rate capped at 2000 req/sec
- each package was tested 5 times; the best results are shown below
rate-limiter-flexible
Statistics Avg Stdev Max
Reqs/sec 1994.75 494.56 7353.12
Latency 6.34ms 7.36ms 126.48ms
Latency Distribution
50% 4.75ms
75% 6.54ms
90% 9.28ms
95% 14.79ms
99% 44.96ms
HTTP codes:
1xx - 0, 2xx - 30000, 3xx - 0, 4xx - 30004, 5xx - 0
express-limiter https://github.com/ded/express-limiter
Statistics Avg Stdev Max
Reqs/sec 1994.67 493.40 8298.33
Latency 8.33ms 9.54ms 211.85ms
Latency Distribution
50% 5.75ms
75% 8.37ms
90% 14.92ms
95% 23.09ms
99% 58.27ms
HTTP codes:
1xx - 0, 2xx - 46756, 3xx - 0, 4xx - 13221, 5xx - 0
Average latency:
- rate-limiter-flexible: 6ms
- express-limiter: 8ms

rate-limiter-flexible is slightly faster than express-limiter.
tl;dr: The fixed window algorithm used in rate-limiter-flexible is about 20x faster under high traffic than the fastest rolling window implementation tested.
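The fixed window idea behind that speed difference can be sketched as follows (a minimal in-memory illustration, not the actual rate-limiter-flexible code): each key holds a single counter that is reset when the window rolls over, so every check is O(1) regardless of traffic.

```javascript
// Minimal fixed window limiter sketch: one counter per key per window.
class FixedWindowLimiter {
  constructor(points, durationMs) {
    this.points = points;          // max requests allowed per window
    this.durationMs = durationMs;  // window length in milliseconds
    this.counters = new Map();     // key -> { windowStart, count }
  }
  // Returns true if the request is allowed, false if rate limited.
  consume(key, now = Date.now()) {
    const windowStart = Math.floor(now / this.durationMs) * this.durationMs;
    let entry = this.counters.get(key);
    if (!entry || entry.windowStart !== windowStart) {
      // New window: the counter resets, old state is discarded.
      entry = { windowStart, count: 0 };
      this.counters.set(key, entry);
    }
    entry.count += 1;
    return entry.count <= this.points;
  }
}

// Same rule as the benchmark: maximum 100 requests per 1 second.
const limiter = new FixedWindowLimiter(100, 1000);
let allowed = 0;
for (let i = 0; i < 150; i++) {
  if (limiter.consume('user1', 5000)) allowed++; // all calls in one window
}
console.log(allowed); // 100
```

Because the state per key is just two numbers, the cost per request stays constant no matter how many requests hit the same window.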
The same benchmark settings were used for all tests:
bombardier -c 1000 -l -d 30s -r 2000 -t 1s http://127.0.0.1:3000/endpoint
- 1000 concurrent connections
- test duration of 30 seconds
- rate capped at 2000 req/sec
Four libraries were tested: this one and 3 rolling-window libraries from GitHub:
- this one, using a fixed window
- https://github.com/peterkhayes/rolling-rate-limiter
- https://github.com/tj/node-ratelimiter
- https://github.com/fastest963/node-redis-rolling-limit
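For contrast, the rolling window approach can be sketched like this (a simplified sliding-log illustration, not code taken from any of the libraries above): every request's timestamp is stored for the length of the window, so each check costs time and memory proportional to the traffic inside the window.

```javascript
// Simplified sliding-log rolling window limiter: stores a timestamp per
// request and prunes expired ones on every call.
class RollingWindowLimiter {
  constructor(points, durationMs) {
    this.points = points;
    this.durationMs = durationMs;
    this.log = new Map(); // key -> array of request timestamps
  }
  consume(key, now = Date.now()) {
    const stamps = this.log.get(key) || [];
    // Drop timestamps that have fallen out of the window: O(n) per request.
    const fresh = stamps.filter((t) => now - t < this.durationMs);
    fresh.push(now);
    this.log.set(key, fresh);
    return fresh.length <= this.points;
  }
}

// Same rule as the benchmark: maximum 100 requests per 1 second.
const limiter = new RollingWindowLimiter(100, 1000);
let allowed = 0;
for (let i = 0; i < 150; i++) {
  if (limiter.consume('user1', 5000 + i)) allowed++; // 150 requests within 1s
}
console.log(allowed); // 100
```

The per-request filtering and the growing timestamp log are why rolling window implementations fall behind the fixed window counter under heavy load.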
There are 4 simple Express 4.x endpoints, each limited by a different library, launched in node:10.5.0-jessie and redis:4.0.10-alpine Docker containers and managed by PM2 with 4 workers. Docker images are recreated before each test.
All limiters are created with the same rule: a maximum of 100 requests per 1 second. The key for every request is a randomly generated number from 0 to 10.
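As a rough sanity check of that rule (an illustrative simulation, not part of the benchmark itself): with keys drawn uniformly from 0 to 10, at most 11 × 100 = 1100 of the 2000 requests in any one second can pass, so a large share of requests is rejected with a 4xx status in the results below.

```javascript
// Illustrative simulation: 2000 requests in one window, keys drawn
// uniformly from 0..10, 100 points allowed per key.
const POINTS = 100;
const KEYS = 11;    // keys 0..10
const TOTAL = 2000; // requests per second in the benchmark

const counts = new Array(KEYS).fill(0);
let passed = 0;
for (let i = 0; i < TOTAL; i++) {
  const key = Math.floor(Math.random() * KEYS); // same key rule as the benchmark
  if (counts[key] < POINTS) {
    counts[key]++;
    passed++;
  }
}
// At most KEYS * POINTS = 1100 requests can pass in a single window,
// so the rest of the 2000 req/sec is rate limited.
console.log(passed <= KEYS * POINTS); // true
```

This explains why every run below reports a substantial 4xx count alongside the 2xx responses.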
rate-limiter-flexible
Statistics Avg Stdev Max
Reqs/sec 1994.75 494.56 7353.12
Latency 6.34ms 7.36ms 126.48ms
Latency Distribution
50% 4.75ms
75% 6.54ms
90% 9.28ms
95% 14.79ms
99% 44.96ms
HTTP codes:
1xx - 0, 2xx - 30000, 3xx - 0, 4xx - 30004, 5xx - 0
rolling-rate-limiter
Statistics Avg Stdev Max
Reqs/sec 2002.28 1004.72 25535.21
Latency 281.73ms 2.01s 32.92s
Latency Distribution
50% 16.52ms
75% 55.35ms
90% 148.76ms
95% 257.40ms
99% 8.74s
HTTP codes:
1xx - 0, 2xx - 55206, 3xx - 0, 4xx - 1197, 5xx - 0
others - 3530
Errors:
the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection - 3275
dial tcp 127.0.0.1:3000: connect: operation timed out - 175
dial tcp 127.0.0.1:3000: connect: connection reset by peer - 80
ratelimiter
Statistics Avg Stdev Max
Reqs/sec 2021.58 995.59 22045.06
Latency 219.30ms 1.81s 31.66s
Latency Distribution
50% 15.54ms
75% 45.39ms
90% 129.52ms
95% 206.34ms
99% 8.34s
HTTP codes:
1xx - 0, 2xx - 54628, 3xx - 0, 4xx - 1038, 5xx - 0
others - 4226
Errors:
the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection - 3931
dial tcp 127.0.0.1:3000: connect: connection reset by peer - 150
dial tcp 127.0.0.1:3000: connect: operation timed out - 145
redis-rolling-limit
Statistics Avg Stdev Max
Reqs/sec 1975.33 944.00 25855.43
Latency 148.29ms 1.18s 29.00s
Latency Distribution
50% 8.39ms
75% 26.26ms
90% 114.73ms
95% 214.40ms
99% 2.84s
HTTP codes:
1xx - 0, 2xx - 56727, 3xx - 0, 4xx - 0, 5xx - 0
others - 2274
Errors:
the server closed connection before returning the first response byte. Make sure the server returns 'Connection: close' response header before closing the connection - 2165
dial tcp 127.0.0.1:3000: connect: connection reset by peer - 70
dial tcp 127.0.0.1:3000: connect: operation timed out - 39
Average latency:
- rate-limiter-flexible: 7ms
- rolling-rate-limiter: 282ms
- ratelimiter: 219ms
- redis-rolling-limit: 148ms
The fixed window approach is clearly far faster under high traffic.