What Is the Event Loop?

The event loop is the traffic controller for JavaScript work. JavaScript runs one piece of code at a time, and the event loop decides what to run next so your app keeps responding.

Think of it like a small team processing tasks in a strict order: finish current work, then check urgent queued tasks, then check regular queued tasks.

How It Works

  1. Run synchronous code on the call stack first.
  2. When the stack is clear, drain all queued microtasks (for example, Promise callbacks).
  3. Then run one macrotask (for example, a setTimeout callback).
  4. Repeat the cycle.

Definitions

A quick glossary before the simulator, so each queue and step makes sense without jargon confusion.

Tick

One pass of the event loop where JavaScript checks what can run next. In each tick, work runs in priority order.

Example: A tick might run one sync task, then any waiting microtasks, then one macrotask.

Call Stack

The active work area where JavaScript runs functions right now. It must be clear before queued async callbacks can run.

Example: If a long loop is still on the stack, timers and Promise callbacks wait.
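As a small sketch of this, a busy-wait loop (an assumed ~50ms block, purely for illustration) holds the stack and delays even a zero-delay timer:

```javascript
// A long synchronous loop keeps the call stack busy,
// so the zero-delay timer cannot fire on time.
const scheduled = Date.now();

setTimeout(() => {
  // Runs only after the stack is clear; the measured delay
  // reflects the blocking loop, not the 0ms we asked for.
  console.log(`timer ran after ~${Date.now() - scheduled}ms`);
}, 0);

// Busy-wait for ~50ms of synchronous work.
while (Date.now() - scheduled < 50) {}
```

The timer logs roughly the duration of the loop, not 0ms, because its callback had to wait in the macrotask queue.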

Queue

A waiting line for work that is ready to run later. The event loop pulls from queues when the call stack is free.

Example: A callback waits in a queue until current synchronous code finishes.

Microtask

High-priority queued work that runs right after the current stack finishes and before macrotasks.

Example: Promise .then() and queueMicrotask() callbacks are microtasks.

Macrotask

Regular queued work that runs after microtasks. This includes timers and many browser event callbacks.

Example: setTimeout, setInterval, and many message/event callbacks are macrotasks.

Task Queue (Macrotask Queue)

The queue holding regular macrotask work waiting for its turn in the loop.

Example: A setTimeout(..., 0) callback is queued here and runs after microtasks are drained.

Microtask Queue

The queue for microtasks, which is drained before the event loop runs the next macrotask.

Example: If many Promise callbacks are queued, they can run before the next timer callback.

Web APIs / Runtime APIs

Browser/runtime systems handle timers, network, and events outside the call stack until callbacks are ready.

Example: A fetch() request finishes in runtime APIs first, then its Promise callback is queued for the event loop.

Execution Examples

Practical event loop examples in a beginner-friendly flow.

Example 1: Sync, Microtask, Then Timer

JavaScript

console.log("Render start")
Promise.resolve().then(() => console.log("Flush state"))
setTimeout(() => console.log("Next tick timer"), 0)
console.log("Render end")

Synchronous code always finishes first. Then microtasks run. Timers run in a later event-loop turn.

Expected Order: Render start -> Render end -> Flush state -> Next tick timer

Example 2: Microtask Chain Priority

JavaScript

setTimeout(() => console.log("timeout"), 0)
queueMicrotask(() => {
  console.log("microtask 1")
  queueMicrotask(() => console.log("microtask 2"))
})
console.log("sync")

Microtasks can queue more microtasks. The microtask queue is drained fully before the timer queue is checked.

Expected Order: sync -> microtask 1 -> microtask 2 -> timeout

Example 3: async/await Resume Order

JavaScript

async function loadProfile() {
  console.log("load start")
  const user = await Promise.resolve("Ada")
  console.log("load done", user)
}

console.log("before call")
loadProfile()
console.log("after call")

`await` pauses the async function and resumes it as a microtask once the awaited promise resolves.

Expected Order: before call -> load start -> after call -> load done Ada

Example 4: Microtasks Between Timers

JavaScript

console.log("start")
setTimeout(() => {
  console.log("timer A")
  Promise.resolve().then(() => console.log("microtask in A"))
}, 0)
setTimeout(() => console.log("timer B"), 0)
console.log("end")

After each macrotask callback, microtasks run before moving to the next macrotask.

Expected Order: start -> end -> timer A -> microtask in A -> timer B

Timing APIs: Why setTimeout Feels Inaccurate

Timers in JavaScript are scheduling hints, not hard real-time guarantees. Many surprising bugs come from expecting exact millisecond behavior. Each heading below states a common misconception, followed by what actually happens.

setTimeout(fn, 1000) means "exactly 1000ms later"

What actually happens: It means "run no earlier than 1000ms." If the call stack is busy, microtasks are still draining, or other macrotasks are ahead, it runs later.

Why: Timer callbacks enter the macrotask queue. They still wait for their turn after current work is done.

Practical tip: Treat timer delay as a minimum threshold, not a precise deadline.

setTimeout(fn, 0) runs immediately

What actually happens: Zero-delay timers still run in a later event-loop turn. Promise callbacks (.then, await) usually run before them.

Why: Microtasks are drained before the event loop selects the next macrotask.

Practical tip: Use queueMicrotask/Promise microtasks for "right-after-sync" behavior; use timers for "later turn" behavior.

setInterval fires on perfect rhythm

What actually happens: Intervals drift. If a callback runs long, the next tick is delayed. Slow tabs/devices can increase drift.

Why: The loop cannot start a new interval callback until current work yields. Scheduler pressure accumulates timing error.

Practical tip: For accurate recurring work, schedule the next setTimeout based on Date.now() or performance.now() correction.
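The drift-correction tip can be sketched as a self-scheduling timer. `startAccurateInterval` and its parameter names are invented for this illustration:

```javascript
// Sketch: a self-correcting recurring timer. Instead of "now + interval",
// each tick aims at the ideal absolute time, so drift does not accumulate.
function startAccurateInterval(onTick, intervalMs) {
  const startedAt = performance.now();
  let tickCount = 0;
  let stopped = false;

  function scheduleNext() {
    if (stopped) return;
    tickCount += 1;
    // Ideal absolute time of the next tick, measured from the start.
    const idealNext = startedAt + tickCount * intervalMs;
    const delay = Math.max(0, idealNext - performance.now());
    setTimeout(() => {
      onTick(tickCount);
      scheduleNext();
    }, delay);
  }

  scheduleNext();
  return () => { stopped = true; }; // call to stop the interval
}
```

A plain `setInterval` accumulates whatever delay each callback adds; this version recomputes the remaining delay from `performance.now()` every tick, so a slow tick shortens the next wait instead of pushing the whole schedule back.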

Background tabs keep timer precision

What actually happens: Browsers often clamp/throttle timers in background tabs to save battery and CPU.

Why: Visibility and power-saving policies reduce callback frequency when the page is not active.

Practical tip: Handle visibilitychange and recompute elapsed time when the tab becomes active again.
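A minimal sketch of that tip, assuming a browser environment; `startedAt` is a hypothetical timestamp captured when the timer-driven work began:

```javascript
// Sketch: recompute real elapsed time when the tab becomes visible again.
// Wall-clock subtraction is immune to background-tab timer throttling.
function elapsedSince(startedAt, now = Date.now()) {
  return now - startedAt;
}

// Browser wiring (commented out so the sketch also runs outside a browser):
// const startedAt = Date.now();
// document.addEventListener("visibilitychange", () => {
//   if (document.visibilityState === "visible") {
//     const elapsed = elapsedSince(startedAt);
//     // ...resync counters/animations using the true elapsed time
//   }
// });
```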

Timer order is always predictable across environments

What actually happens: Browser and Node.js timing phases are similar but not identical (setImmediate, process.nextTick, timer phases).

Why: Different runtimes implement scheduling details differently around I/O and microtask handling.

Practical tip: Avoid relying on fragile ordering tricks; write explicit sequencing with clear async boundaries.

Common Gotchas

  • setTimeout(fn, 0) is never immediate. It runs only after current sync code and queued microtasks finish.
  • Recursive microtasks can starve rendering and timers if you keep re-queueing them in a tight chain.
  • await inside a loop is sequential by default. Use Promise.all when you truly want parallel I/O.
  • CPU-heavy sync tasks (JSON.parse on huge payloads, large loops, heavy regex) block everything on the main thread.
  • Timer delays are minimum thresholds, not guarantees; actual execution can be later under load or in background tabs.
  • Node-specific behavior (process.nextTick, setImmediate) differs from browser scheduling and should be learned separately.
  • User events can feel delayed when the call stack is busy, even if the click already happened in the UI.
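The await-in-a-loop gotcha from the list above can be sketched like this; `fetchItem` is a hypothetical stand-in for any I/O call:

```javascript
// Simulated I/O: each call resolves after ~50ms.
const fetchItem = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(`item-${id}`), 50));

async function sequential(ids) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchItem(id)); // each await waits for the previous one
  }
  return results; // total time grows ~50ms per id
}

async function concurrent(ids) {
  // All requests start immediately; total time is ~one request.
  return Promise.all(ids.map(fetchItem));
}
```

Use the sequential form only when each call depends on the previous result; for independent calls, `Promise.all` lets the timers (or network requests) overlap in the same waiting window.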

Video Walkthrough

Recommended explainer video: one of the clearest event loop walkthroughs for beginners.

Direct link: https://www.youtube.com/watch?v=eiC58R16hb8

Interactive Event Loop Simulator

Add sync tasks, microtasks, and macrotasks, then run ticks to see the queue priority in action.


Pros and Cons

Pros

  • Simple mental model for async workflows.
  • Great for I/O-heavy apps like APIs and dashboards.
  • Avoids many classic thread-locking problems.

Cons

  • Heavy CPU work can block everything else.
  • Long sync tasks hurt perceived responsiveness.
  • You still need careful task scheduling for smooth UX.

JavaScript vs Other Languages

Direct comparison against JavaScript so it is easier to see what changes when you pick Go, Java, or Python.

JavaScript (Node.js / Browser)

Directly vs JavaScript: This is the baseline for the page. JavaScript uses a single-threaded event loop for user code and works best when most time is spent waiting on I/O.

Where it tends to win: Great developer velocity and very strong ecosystem for APIs, BFF layers, realtime UIs, and full-stack product teams.

Tradeoffs vs JavaScript: CPU-heavy work can block the loop unless you offload to workers/processes.

Go

Directly vs JavaScript: Compared with JavaScript, Go usually gives you easier parallel CPU usage out of the box via goroutines while still handling I/O-heavy services very well.

Where it tends to win: Often stronger for high-concurrency backend services when you need both I/O concurrency and CPU parallelism in one process.

Tradeoffs vs JavaScript: Less frontend/full-stack reuse than JavaScript and different tooling ergonomics for product teams.

Java

Directly vs JavaScript: Compared with JavaScript, modern Java (especially reactive or virtual-thread setups) can deliver higher raw throughput in many backend workloads, with heavier operational complexity.

Where it tends to win: Mature enterprise tooling, strong JIT performance, excellent observability, and robust large-system practices.

Tradeoffs vs JavaScript: Usually more configuration and platform overhead than JavaScript-centric stacks for smaller teams.

Python (asyncio)

Directly vs JavaScript: Compared with JavaScript, Python async feels similar conceptually (event loop + await), but typical API throughput is often lower without specialized optimization.

Where it tends to win: Excellent for data/ML-adjacent services and scripting-heavy teams that prioritize ecosystem breadth.

Tradeoffs vs JavaScript: Frequently lower web throughput than tuned JavaScript/Go/Java services in identical benchmark conditions.

Performance Snapshot (Same Benchmark Run)

TechEmpower Framework Benchmarks (Continuous Run) on 2026-02-20. These are framework-level snapshots from one shared run. Use them as directional comparisons, not absolute guarantees for every app.

Language / Framework       | JSON req/s | JSON vs JS | Plaintext req/s | Plaintext vs JS
JavaScript (Fastify)       | 816,314    | 1.00x      | 1,141,781       | 1.00x
Go (Fiber v2)              | 1,595,223  | 1.95x      | 11,926,807      | 10.45x
Java (Spring WebFlux)      | 1,233,566  | 1.51x      | 2,399,052       | 2.10x
Python (FastAPI + Uvicorn) | 301,786    | 0.37x      | 345,879         | 0.30x

Source: TechEmpower raw run data

Non-blocking I/O in Practice

Real company examples where event-loop and non-blocking I/O patterns produced measurable outcomes in production systems.

PayPal: Parallel Java vs Node.js rollout on a high-traffic page

Company: PayPal

Story: PayPal rebuilt its account overview page in parallel: one Java version and one Node.js version with the same functionality and production-like tests.

Why non-blocking I/O helped: Node.js let the team handle high request concurrency efficiently while waiting on multiple upstream API calls per route.

Measured impact: PayPal reported roughly 2x requests/second and about 35% lower average response time (~200ms faster) for the Node.js version in that comparison.

Source: PayPal Tech Blog — Node.js at PayPal (2013)

LinkedIn Mobile: one blocking call caused major throughput collapse

Company: LinkedIn

Story: LinkedIn engineers found a synchronous disk write in logging while tuning their Node.js mobile server.

Why non-blocking I/O helped: Because Node.js is single-threaded at the JavaScript execution layer, one blocking call stalled the event loop and reduced system throughput.

Measured impact: LinkedIn reported throughput dropping from thousands of requests/sec to only a few dozen until they removed that synchronous path.

Source: LinkedIn Engineering — Blazing fast Node.js (2011)

Uber Marketplace Edge: mobile API fan-in/fan-out at large scale

Company: Uber

Story: Uber described its mobile frontline API tier as over 600 stateless endpoints that aggregate multiple services.

Why non-blocking I/O helped: Uber explicitly notes Node.js helped manage large quantities of concurrent connections in this edge layer, where many requests are I/O-bound and waiting on downstream systems.

Measured impact: Operationally, this enabled a high-concurrency API edge architecture while Uber scaled traffic-heavy Marketplace workloads.

Source: Uber Engineering — Tech Stack Part II

PayPal (later phase): persistent connections for Node services

Company: PayPal

Story: PayPal Performance Engineering analyzed top Node apps under increased traffic and found connection churn was limiting scalability.

Why non-blocking I/O helped: Enabling keep-alive/persistent connections reduced repeated DNS, connection, and TLS overhead in non-blocking service-to-service calls.

Measured impact: PayPal reported up to ~18% CPU gains and up to ~14% P95 latency gains after framework-level rollout for top Node apps.

Source: PayPal Tech Blog — Enabling Node Apps To Do More With Less (2021)

Threading and High-Performance Compute

JavaScript in the browser usually runs user code on one main thread. That keeps the model simple, but heavy computation on that thread can freeze UI updates.

For high-performance workloads, split CPU-heavy work into Web Workers (browser) or worker threads/processes (server). Keep main-thread work short and stream progress updates back to the UI.