The Single await That Brought Down Our .NET Throughput (and What We Learned)

A single misplaced await slowed down an entire .NET app in production. Learn how async/await can silently throttle performance, why await inside loops causes sequential execution, and how to fix it with Task.WhenAll and throttling. A real-world lesson every C# developer should know.

We recently encountered a performance mystery in one of our .NET 7 APIs. Everything seemed fine in development and QA, but once production traffic hit a threshold, response times exploded while CPU usage stayed low. It wasn't memory, GC, scaling, or infrastructure. It was one tiny misplaced await.

Here’s how a simple misstep turned into a production disaster — and how we fixed it.


The Puzzle: Slow under load, but no obvious culprit

Our API passed all tests, handled the expected load on staging, and logs showed nothing alarming. Then we launched, and things got weird:

  • Some requests would take much longer than expected.
  • Throughput would drop under moderate load.
  • CPU usage and resource metrics didn’t scream “bottleneck.”

We tried everything: deep profiling, GC tuning, scaling out. Nada.

During one late-night review, someone asked: “Why are we awaiting inside the loop?” That question turned everything around.


The culprit: an await inside a loop

Here’s a simplified version of what we had:

public async Task ProcessItemsAsync(List<Item> items)
{
    foreach (var item in items)
    {
        await _externalService.SendAsync(item); // wait for this call to finish before starting the next item
    }
}

At first glance, that looks fine: each item gets its own asynchronous call. But the problem is subtle:

By using await inside the loop, we force sequential execution:

“Wait for this one to finish, then move to the next.”

Under light load, the delay might be negligible. Under heavy load, or when each item triggers multiple sub-operations, the penalty compounds quickly.

In our case, some items triggered 50+ internal operations. That meant each iteration waited for long chains of calls to complete before proceeding to the next. That serialized work severely throttled throughput.
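
To see how the penalty compounds, here is a minimal, self-contained sketch. It uses a hypothetical stand-in for our service (Task.Delay simulating roughly 100 ms of I/O latency per call), not our actual code:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical stand-in for _externalService.SendAsync: ~100 ms of I/O latency per call.
static Task SendAsync(int item) => Task.Delay(100);

var items = Enumerable.Range(0, 50).ToList();

var sw = Stopwatch.StartNew();
foreach (var item in items)
{
    await SendAsync(item); // each iteration waits for the previous call to finish
}
Console.WriteLine($"Sequential loop: {sw.ElapsedMilliseconds} ms"); // roughly 50 x 100 ms, about 5,000 ms

With 50 items, the loop takes roughly the sum of all the delays, which is exactly the serialization that was hurting us.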


The fix: launch everything concurrently, await together

We changed the method to:

public async Task ProcessItemsAsync(List<Item> items)
{
    var tasks = items
        .Select(item => _externalService.SendAsync(item)); // each call starts as the sequence is enumerated
    await Task.WhenAll(tasks); // await all of them together
}

With this approach:

  1. All of the send calls get started up front (Task.WhenAll enumerates the lazy Select and kicks off each call without waiting for the previous one).
  2. We await their completion collectively with Task.WhenAll.

That means the calls run concurrently (as concurrently as the thread pool and the external service allow) instead of lining up one after another.
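
Continuing the hypothetical Task.Delay sketch from earlier (same SendAsync stand-in, same 50 items), the concurrent version finishes in roughly the time of the single slowest call rather than the sum of all of them:

var swConcurrent = Stopwatch.StartNew();
var sendTasks = items.Select(item => SendAsync(item)); // lazy; each call starts as Task.WhenAll enumerates the sequence
await Task.WhenAll(sendTasks);
Console.WriteLine($"Task.WhenAll: {swConcurrent.ElapsedMilliseconds} ms"); // roughly 100 ms, the longest single call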


The impact (and caveats)

Once deployed, we saw massive improvements. Throughput climbed, response times steadied, and the bottleneck vanished.

However, a few lessons emerged:

  • Concurrency matters, but constraints matter more
    If the external service has rate limits or capacity constraints, flooding it with parallel requests may overload it or provoke errors.
  • Use throttling mechanisms
    A pattern we adopted:

    var semaphore = new SemaphoreSlim(5); // allow at most 5 sends in flight at once
    var tasks = items.Select(async item => {
        await semaphore.WaitAsync(); // wait for a free slot before starting this send
        try
        {
            await _externalService.SendAsync(item);
        }
        finally
        {
            semaphore.Release(); // free the slot even if the send throws
        }
    });
    await Task.WhenAll(tasks);
    

    This caps the maximum simultaneous calls (5 in this example) but still avoids full serialization. (A Parallel.ForEachAsync alternative is sketched after this list.)

  • Profiling can mislead
    Because our app didn’t crash and CPU usage wasn’t pegged, many tools missed the serial bottleneck.
  • Always ask before “awaiting”
    When you write await, pause: “Do I need to wait here before moving on?” If not, consider concurrent alternatives.
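
As an aside on the throttling bullet above: on .NET 6 and later (including the .NET 7 runtime this API ran on), the same concurrency cap can be expressed with Parallel.ForEachAsync instead of a hand-rolled semaphore. A minimal sketch, assuming the same _externalService dependency as before:

public async Task ProcessItemsAsync(List<Item> items)
{
    var options = new ParallelOptions { MaxDegreeOfParallelism = 5 }; // at most 5 sends in flight at once
    await Parallel.ForEachAsync(items, options, async (item, cancellationToken) =>
    {
        await _externalService.SendAsync(item);
    });
}

It behaves much like the semaphore pattern: calls run concurrently up to the cap, and the method completes when every item has been processed.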

Takeaway

Async/await is a powerful tool, but misused it becomes a trap. That one misplaced await was silently throttling our entire system. Once we restructured the logic to start the tasks up front and await them together, performance recovered.

Whenever you reach for await, don’t assume it’s benign. Think: is sequential wait what I really want? If not, structure for concurrency (with limits as needed). Your future self (and your ops team) will thank you.

Sep 29, 2025
By Dheer Gupta