Lightfront Limiter

Lightfront Limiter is a robust, standalone rate limiting service that supports three types of rate limits:
- Concurrents: A simple counter that can be capped at a certain value. (e.g. a user can make 3 concurrent connections at any given time)
- Throttles: A request rate per period of time. (e.g. a user can make 100 requests per minute).
- Quotas: A resource bound limit per time bucket. (e.g. a user is limited to 1000MB of data per day)
See the Configuration section below for more details on configuring limits.
API
Limiter is accessible via an HTTP or gRPC API. Both modes are available on the same port at runtime. The available methods are:
setLimits
: Create new limits.
getLimits
: Get limits by their IDs.
getLimitsAll
: Get all limits.
removeLimits
: Remove limits by their IDs.
removeLimitsAll
: Remove all limits.
peekStates
: Peek checks the provided keys against their associated limits. This is identical to checkStates
, except that peek does not mutate any concurrent/throttle counters, so it can be used safely for pure inspection.
checkStates
: Used to determine whether the provided keys are allowed by their associated limits. If any of the associated limits are concurrents or throttles, counters will be incremented appropriately. It returns a unique request ID to use in the subsequent finishRequest
call.
finishRequest
: After a request is completed, use the request ID from the checkStates
call here. If any of the associated limits were concurrents, their counters will be decremented. If any limits were quotas and a resource update is provided, the quota resource usage will also be updated.
See the .proto definition for the full spec. A minimal sketch of the typical check/finish lifecycle follows.
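As a sketch of that lifecycle over the HTTP API (assuming the service runs locally on the default port with no authentication, and using the routes and payload shapes from the Example Usage section below):
# Check the key "user1" against limits c1, t1 and q1; concurrent/throttle counters are incremented.
RESPONSE=$(curl -sS -X POST -d '{"user1":["c1","t1","q1"]}' http://localhost:5838/state/check)
REQUEST_ID=$(echo "$RESPONSE" | jq -r '.request_id')
# ...serve the request only if the response's "allowed" field is true...
# Finish the request: concurrent counters are decremented and 10 units of resource r1 are recorded.
curl -sS -X POST -d "{\"request_id\":\"$REQUEST_ID\",\"resources\":{\"r1\":10}}" http://localhost:5838/request/finish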
Configuration
There are two ways to create limits:
- Pre-define some limits via a JSON file. See
limits.json
for an example (a sketch of the format follows this list).
- Via the
setLimits
call at runtime.
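For the first option, a hedged sketch of what such a file might contain, assuming it mirrors the setLimits payload shown in Example Usage below (see the bundled limits.json for the authoritative schema):
[
  {"id": "c1", "type": 1, "limit": 10, "expires": 20},
  {"id": "t1", "type": 2, "rate": 100, "max_burst": 200, "refresh": 60},
  {"id": "q1", "type": 3, "limit": 100, "resource": "r1", "period_bucket": 1}
]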
Parameters
Limiter accepts several parameters via environment variables for further configuration (an example follows this list).
REDIS_URL
: URL to the Redis instance or cluster.
PORT
: [OPTIONAL, default: 5838] Which port to run on.
DEFAULT_LIMITS_FILE
: [OPTIONAL, default: limits.json] The file to use for creating default limits.
AUTH_ENABLED
: [OPTIONAL, default: false] A boolean flag that enables authentication on the service.
AUTH_TOKENS
: [OPTIONAL, default: empty] A comma-separated list of tokens to be used during authentication.
TLS_CERT_FILE
: [OPTIONAL, default: empty] The path to a custom certificate for SSL/TLS.
TLS_KEY_FILE
: [OPTIONAL, default: empty] The path to a custom certificate key for SSL/TLS.
LOG_LEVEL
: [OPTIONAL, default: 1] Which log level to run in (0: debug, 1: info, 2: warning, 3-4: error, 5: fatal).
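For example, a local configuration with authentication and debug logging enabled might look like the following (the Redis URL and token values are placeholders; how these variables reach the service depends on how you run it, e.g. exported in the shell, set in an env file, or set in the Docker Compose file):
export REDIS_URL=redis://localhost:6379
export PORT=5838
export AUTH_ENABLED=true
export AUTH_TOKENS=token-a,token-b
export LOG_LEVEL=0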
Running
Limiter comes with Docker Compose pre-configured. This is the simplest and fastest way to run the service.
To run via Docker:
docker-compose -f docker/docker-compose.yml up --build
Deploying

To deploy to Heroku, use the 1-click button above. NOTE: Heroku does not support HTTP/2, so gRPC requests will not work on a Heroku deployment.
Kubernetes deployment coming soon.
Example Usage
Assuming you have the service running on http://localhost:5838
with no authentication:
- Create some new limits:
curl -sS -X POST -d '[{"id":"c1","type":1,"limit":10,"expires":20},{"id":"t1","type":2,"rate":100,"max_burst":200,"refresh":60},{"id":"q1","type":3,"limit":100,"resource":"r1","period_bucket":1}]' http://localhost:5838/limit | jq
[
{
"id": "c1",
"type": "CONCURRENT",
"limit": "10",
"expires": "20"
},
{
"id": "t1",
"type": "THROTTLE",
"refresh": "60",
"rate": "100",
"max_burst": "200"
},
{
"id": "q1",
"type": "QUOTA",
"limit": "100",
"period_bucket": "SECOND",
"resource": "r1"
}
]
- Peek using the previously created limits for the key
user1
:
curl -sS -X POST -d '{"user1":["c1","t1","q1"]}' http://localhost:5838/state/peek | jq
{
"allowed": true,
"request_id": "70264732-5db4-48e3-9e34-92b780e821c6",
"states_by_key": [
{
"allowed": true,
"key": "user1",
"states": [
{
"allowed": true,
"limit": {
"id": "c1",
"type": "CONCURRENT",
"limit": "10",
"expires": "20"
},
"used": "0",
"remaining": "10"
},
{
"allowed": true,
"limit": {
"id": "t1",
"type": "THROTTLE",
"refresh": "60",
"rate": "100",
"max_burst": "200"
},
"remaining": "200"
},
{
"allowed": true,
"limit": {
"id": "q1",
"type": "QUOTA",
"limit": "100",
"period_bucket": "SECOND",
"resource": "r1"
},
"used": "0",
"remaining": "100"
}
]
}
]
}
- Check using the previously created limits for the key
user1
:
curl -sS -X POST -d '{"user1":["c1","t1","q1"]}' http://localhost:5838/state/check | jq
{
"allowed": true,
"request_id": "f9eb016c-484c-4ae3-938f-f9e70c70b215",
"states_by_key": [
{
"allowed": true,
"key": "user1",
"states": [
{
"allowed": true,
"limit": {
"id": "c1",
"type": "CONCURRENT",
"limit": "10",
"expires": "20"
},
"used": "1",
"remaining": "9"
},
{
"allowed": true,
"limit": {
"id": "t1",
"type": "THROTTLE",
"refresh": "60",
"rate": "100",
"max_burst": "200"
},
"remaining": "199"
},
{
"allowed": true,
"limit": {
"id": "q1",
"type": "QUOTA",
"limit": "100",
"period_bucket": "SECOND",
"resource": "r1"
},
"used": "0",
"remaining": "100"
}
]
}
]
}
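Note that, unlike the peek above, this check has incremented the counters: the concurrent limit c1 now shows 1 used and 9 remaining, and the throttle t1 has 199 burst remaining. The quota q1 is unchanged until resource usage is reported via finishRequest.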
- Finish the previous check request and add some used resources:
curl -sS -X POST -d '{"request_id":"f9eb016c-484c-4ae3-938f-f9e70c70b215","resources":{"r1":10}}' http://localhost:5838/request/finish | jq
Benchmark
docker run --rm --read-only -v `pwd`:`pwd` -w `pwd` --network="host" jordi/ab -p test_data_3.json -T application/json -c 500 -n 5000 http://localhost:5838/state/peek
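The -p flag supplies the POST body for ab. The contents of test_data_3.json are not included here; an assumed example in the same shape as the peek payload above would be:
{"user1":["c1","t1","q1"]}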
Testing
Testing can be done with the following make command:
make test