If you’ve ever built a public-facing API, you’ve likely run into the issue of clients making too many requests—sometimes intentionally, sometimes by accident. Left unchecked, this can overwhelm your backend, slow down your service, or even take your API offline.
A simple, scalable, and efficient way to handle this is Rate Limiting.
In this post, I’ll walk you through implementing rate limiting in an ASP.NET Core Web API using Redis as the distributed cache store. This approach is production-ready, efficient under load, and works well even when you’re running your API on multiple servers.
Some common real-world reasons for implementing rate limiting:

- Protecting your backend from abusive clients or buggy retry loops
- Keeping response times predictable under sudden bursts of traffic
- Enforcing fair usage across all API consumers

Here’s what we’ll do:

1. Set up an ASP.NET Core Web API project
2. Connect it to Redis using StackExchange.Redis
3. Write a rate-limiting service backed by a Redis counter
4. Plug that service into the pipeline as middleware
5. Test that requests over the limit get HTTP 429
RateLimitingApi/
├── Controllers/
│   └── SampleController.cs
├── Middleware/
│   └── RateLimitingMiddleware.cs
├── Services/
│   └── RateLimitingService.cs
├── Program.cs
├── appsettings.json
└── RateLimitingApi.csproj
dotnet add package StackExchange.Redis

In your Program.cs:
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
{
    var configuration = builder.Configuration.GetConnectionString("Redis")
        ?? throw new InvalidOperationException("Missing 'Redis' connection string.");
    return ConnectionMultiplexer.Connect(configuration);
});

In your appsettings.json:
{
  "ConnectionStrings": {
    "Redis": "localhost:6379"
  }
}

(Adjust the Redis connection string as per your setup.)
// Services/RateLimitingService.cs
using StackExchange.Redis;

public class RateLimitingService
{
    private readonly IDatabase _redisDb;

    // Fixed window: at most 100 requests per client per minute.
    private readonly int _limit = 100;
    private readonly TimeSpan _timeWindow = TimeSpan.FromMinutes(1);

    public RateLimitingService(IConnectionMultiplexer redis)
    {
        _redisDb = redis.GetDatabase();
    }

    public async Task<bool> IsLimitExceededAsync(string clientId)
    {
        var key = $"rate_limit:{clientId}";

        // INCR is atomic, so concurrent requests (even across servers) are counted safely.
        var count = await _redisDb.StringIncrementAsync(key);

        // First request in the window: start the expiry clock on the counter key.
        if (count == 1)
        {
            await _redisDb.KeyExpireAsync(key, _timeWindow);
        }

        return count > _limit;
    }
}
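Stripped of Redis, the service above is a classic fixed-window counter. To make the algorithm itself easy to see, here is a minimal in-memory sketch of the same logic (illustrative only — the class and names are my own, and an in-memory version cannot share state across servers the way the Redis counter does):

```csharp
using System;
using System.Collections.Generic;

// In-memory fixed-window limiter mirroring the Redis INCR + EXPIRE pattern.
public class InMemoryFixedWindowLimiter
{
    private readonly int _limit;
    private readonly TimeSpan _window;
    private readonly Dictionary<string, (long Count, DateTime WindowStart)> _counters = new();

    public InMemoryFixedWindowLimiter(int limit, TimeSpan window)
    {
        _limit = limit;
        _window = window;
    }

    public bool IsLimitExceeded(string clientId, DateTime now)
    {
        // New client, or the old window has elapsed: start a fresh window
        // (the equivalent of the Redis key expiring).
        if (!_counters.TryGetValue(clientId, out var entry) || now - entry.WindowStart >= _window)
        {
            entry = (0, now);
        }

        entry.Count++; // the equivalent of INCR
        _counters[clientId] = entry;
        return entry.Count > _limit;
    }
}

public static class Demo
{
    public static void Main()
    {
        var limiter = new InMemoryFixedWindowLimiter(limit: 3, window: TimeSpan.FromMinutes(1));
        var t0 = DateTime.UtcNow;

        for (int i = 1; i <= 5; i++)
            Console.WriteLine($"request {i}: exceeded={limiter.IsLimitExceeded("1.2.3.4", t0)}");
        // requests 1-3 pass; requests 4-5 are over the limit

        // Two minutes later the window has rolled over, so the count resets.
        Console.WriteLine(limiter.IsLimitExceeded("1.2.3.4", t0.AddMinutes(2)));
    }
}
```

One caveat of the fixed-window approach (in both versions): a client can burst up to 2× the limit across a window boundary. Sliding-window or token-bucket variants smooth that out at the cost of more bookkeeping.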
// Middleware/RateLimitingMiddleware.cs
public class RateLimitingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly RateLimitingService _rateLimiter;

    public RateLimitingMiddleware(RequestDelegate next, RateLimitingService rateLimiter)
    {
        _next = next;
        _rateLimiter = rateLimiter;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // Identify the client by IP; swap in an API key or user id if you have one.
        var clientId = context.Connection.RemoteIpAddress?.ToString() ?? "unknown";

        if (await _rateLimiter.IsLimitExceededAsync(clientId))
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            await context.Response.WriteAsync("Rate limit exceeded. Try again later.");
            return;
        }

        await _next(context);
    }
}
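One small refinement worth considering (my own addition, not part of the code above): a 429 response is friendlier when it tells well-behaved clients how long to back off. Inside the `if` block, you could set a `Retry-After` header before writing the body:

```csharp
// The fixed window is one minute, so the client never waits longer than 60s.
context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
context.Response.Headers["Retry-After"] = "60";
await context.Response.WriteAsync("Rate limit exceeded. Try again later.");
```

Many HTTP client libraries and API gateways honor `Retry-After` automatically, which turns hard failures into polite backoff.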
// Program.cs
using RateLimitingApi.Middleware;
using RateLimitingApi.Services;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);

// Register the Redis connection (shown earlier) and the rate-limiting service.
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
    ConnectionMultiplexer.Connect(builder.Configuration.GetConnectionString("Redis")!));
builder.Services.AddSingleton<RateLimitingService>();
builder.Services.AddControllers();

var app = builder.Build();

// Rate limiting runs early in the pipeline, so blocked requests never reach a controller.
app.UseMiddleware<RateLimitingMiddleware>();
app.MapControllers();
app.Run();
// Controllers/SampleController.cs
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("[controller]")]
public class SampleController : ControllerBase
{
    [HttpGet("hello")]
    public IActionResult Get()
    {
        return Ok("Hello, World!");
    }
}
You can now start your API:
dotnet run
Then hit the endpoint repeatedly:
GET http://localhost:5000/sample/hello
After exceeding 100 requests within a minute (from the same IP), you’ll start getting:
HTTP 429 Too Many Requests
Rate limit exceeded. Try again later.
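If you want to trip the limit quickly from a terminal (assuming the API is listening on port 5000 as above, and you have `curl` installed), a simple loop does it:

```shell
# Fire 105 requests and tally the status codes; the last few should be 429.
for i in $(seq 1 105); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000/sample/hello
done | sort | uniq -c
```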
I’ve pushed the complete working code here:
👉 GitHub Repo: https://github.com/DheerGupta35959/RateLimitingApi
Rate limiting is one of those things many teams put off until it’s too late, but it’s critical for the long-term health and stability of your APIs. This approach using ASP.NET Core and Redis is simple, scalable, and works well even in cloud environments like Azure or AWS.