The event loop is the traffic controller for JavaScript work. JavaScript runs one piece of code at a time, and the event loop decides what to run next so your app keeps responding.
Think of it like a small team processing tasks in a strict order: finish current work, then check urgent queued tasks, then check regular queued tasks.
Quick glossary before the simulator so you can understand each queue and step without jargon confusion.
Tick: One pass of the event loop where JavaScript checks what can run next. In each tick, work runs in priority order.
Example: A tick might run one sync task, then any waiting microtasks, then one macrotask.
Call stack: The active work area where JavaScript runs functions right now. It must be clear before queued async callbacks can run.
Example: If a long loop is still on the stack, timers and Promise callbacks wait.
Queue: A waiting line for work that is ready to run later. The event loop pulls from queues when the call stack is free.
Example: A callback waits in a queue until current synchronous code finishes.
Microtask: High-priority queued work that runs right after the current stack finishes and before macrotasks.
Example: Promise .then() and queueMicrotask() callbacks are microtasks.
Macrotask: Regular queued work that runs after microtasks. This includes timers and many browser event callbacks.
Example: setTimeout, setInterval, and many message/event callbacks are macrotasks.
Macrotask queue (task queue): The queue holding regular macrotask work waiting for its turn in the loop.
Example: A setTimeout(..., 0) callback is queued here and runs after microtasks are drained.
Microtask queue: The queue for microtasks, which is drained before the event loop runs the next macrotask.
Example: If many Promise callbacks are queued, they can run before the next timer callback.
Web APIs (runtime APIs): Browser and runtime systems that handle timers, network, and events outside the call stack until callbacks are ready.
Example: A fetch() request finishes in runtime APIs first, then its Promise callback is queued for the event loop.
Practical event loop examples in a beginner-friendly flow.
```javascript
console.log("Render start")
Promise.resolve().then(() => console.log("Flush state"))
setTimeout(() => console.log("Next tick timer"), 0)
console.log("Render end")
```
Synchronous code always finishes first. Then microtasks run. Timers run in a later event-loop turn.
Expected Order: Render start -> Render end -> Flush state -> Next tick timer
```javascript
setTimeout(() => console.log("timeout"), 0)

queueMicrotask(() => {
  console.log("microtask 1")
  queueMicrotask(() => console.log("microtask 2"))
})

console.log("sync")
```
Microtasks can queue more microtasks. The microtask queue is drained fully before the timer queue is checked.
Expected Order: sync -> microtask 1 -> microtask 2 -> timeout
```javascript
async function loadProfile() {
  console.log("load start")
  const user = await Promise.resolve("Ada")
  console.log("load done", user)
}

console.log("before call")
loadProfile()
console.log("after call")
```
`await` pauses the async function and resumes it as a microtask once the awaited promise resolves.
Expected Order: before call -> load start -> after call -> load done Ada
```javascript
console.log("start")

setTimeout(() => {
  console.log("timer A")
  Promise.resolve().then(() => console.log("microtask in A"))
}, 0)

setTimeout(() => console.log("timer B"), 0)

console.log("end")
```
After each macrotask callback, microtasks run before moving to the next macrotask.
Expected Order: start -> end -> timer A -> microtask in A -> timer B
setTimeout Feels Inaccurate

Timers in JavaScript are scheduling hints, not hard real-time guarantees. This is the source of many surprising bugs when people expect exact millisecond behavior.
setTimeout(fn, 1000) means "exactly 1000ms later"

What actually happens: It means "run no earlier than 1000ms." If the call stack is busy, microtasks are still draining, or other macrotasks are ahead, it runs later.
Why: Timer callbacks enter the macrotask queue. They still wait for their turn after current work is done.
Practical tip: Treat timer delay as a minimum threshold, not a precise deadline.
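A minimal sketch of this "minimum, not deadline" behavior (the 50ms/200ms timings are illustrative): a 50ms timer cannot fire while synchronous work keeps the call stack busy.

```javascript
// Schedule a 50ms timer, then keep the call stack busy for ~200ms.
// The callback cannot run until the stack is free, so it fires late.
const scheduled = Date.now()
let firedAfter = 0

setTimeout(() => {
  firedAfter = Date.now() - scheduled
  console.log(`timer fired after ~${firedAfter}ms (asked for 50ms)`)
}, 50)

// Busy-wait: synchronous work that monopolizes the call stack.
const blockUntil = Date.now() + 200
while (Date.now() < blockUntil) { /* spin */ }
```

Because the stack only clears after the busy loop ends, the timer fires around 200ms after scheduling, not 50ms.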
setTimeout(fn, 0) runs immediately

What actually happens: Zero-delay timers still run in a later event-loop turn. Promise callbacks (.then, await) usually run before them.
Why: Microtasks are drained before the event loop selects the next macrotask.
Practical tip: Use queueMicrotask/Promise microtasks for "right-after-sync" behavior; use timers for "later turn" behavior.
setInterval fires on perfect rhythm

What actually happens: Intervals drift. If a callback runs long, the next tick is delayed. Slow tabs/devices can increase drift.
Why: The loop cannot start a new interval callback until current work yields. Scheduler pressure accumulates timing error.
Practical tip: For accurate recurring work, schedule the next setTimeout based on Date.now() or performance.now() correction.
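One way to sketch that correction: re-arm a setTimeout each tick and compute every delay from the ideal schedule (start + n * period), so errors do not accumulate. The startTicker helper is illustrative, not a standard API.

```javascript
// Drift-corrected ticker: each delay is derived from the ideal schedule,
// so a slow tick shortens the next delay instead of shifting all ticks.
function startTicker(periodMs, onTick) {
  const start = Date.now()
  let n = 0
  let timer = null
  let stopped = false

  function arm() {
    n += 1
    const tick = n
    const target = start + tick * periodMs          // ideal time for this tick
    const delay = Math.max(0, target - Date.now())  // absorb drift so far
    timer = setTimeout(() => {
      if (stopped) return
      arm()       // schedule the next tick first, so onTick cannot push it back
      onTick(tick)
    }, delay)
  }

  arm()
  return () => { stopped = true; clearTimeout(timer) }
}

// Usage: tick every 100ms, stop after 5 ticks.
const ticks = []
const stop = startTicker(100, (tick) => {
  ticks.push(tick)
  if (tick >= 5) stop()
})
```

Compare with plain setInterval, where a 30ms-long callback quietly pushes every subsequent tick later and later.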
Timers keep firing normally in background tabs

What actually happens: Browsers often clamp/throttle timers in background tabs to save battery and CPU.
Why: Visibility and power-saving policies reduce callback frequency when the page is not active.
Practical tip: Handle visibilitychange and recompute elapsed time when the tab becomes active again.
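A sketch of that tip, with the pure timestamp logic separated from the browser-only wiring (createStopwatch is an illustrative name, not a built-in): store an absolute start time and recompute elapsed time on demand, instead of counting interval ticks that may have been clamped.

```javascript
// Elapsed time that survives background-tab throttling: recompute from a
// stored timestamp rather than trusting that every interval tick fired.
function createStopwatch(now = () => Date.now()) {
  const startedAt = now()
  return { elapsedMs: () => now() - startedAt }
}

const stopwatch = createStopwatch()

// Browser-only wiring: when the tab becomes visible again, recompute the
// true elapsed time from the stored timestamp.
if (typeof document !== "undefined") {
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "visible") {
      console.log("total elapsed:", stopwatch.elapsedMs(), "ms")
    }
  })
}
```

Injecting the clock (`now`) also makes the logic easy to test without real waiting.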
Timer scheduling is identical in the browser and Node.js

What actually happens: Browser and Node.js timing phases are similar but not identical (setImmediate, process.nextTick, timer phases).
Why: Different runtimes implement scheduling details differently around I/O and microtask handling.
Practical tip: Avoid relying on fragile ordering tricks; write explicit sequencing with clear async boundaries.
Key takeaways:

- setTimeout(fn, 0) is never immediate. It runs only after current sync code and queued microtasks finish.
- await inside a loop is sequential by default. Use Promise.all when you truly want parallel I/O.
- CPU-heavy synchronous work (JSON.parse on huge payloads, large loops, heavy regex) blocks everything on the main thread.
- Node.js scheduling (process.nextTick, setImmediate) differs from browser scheduling and should be learned separately.

Recommended explainer video: one of the clearest event loop walkthroughs for beginners.
Direct link: https://www.youtube.com/watch?v=eiC58R16hb8
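To make the await-in-a-loop point above concrete, here is a sketch with a fake 50ms I/O call (fakeFetch and the timings are illustrative stand-ins for real network requests):

```javascript
// fakeFetch stands in for real I/O; it resolves after ~50ms.
const fakeFetch = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(`user-${id}`), 50))

// Sequential: each await waits for the previous request to finish.
async function sequential(ids) {
  const results = []
  for (const id of ids) {
    results.push(await fakeFetch(id)) // ~50ms per id, back to back
  }
  return results
}

// Parallel: all requests start immediately; await the combined promise.
function parallel(ids) {
  return Promise.all(ids.map(fakeFetch)) // ~50ms total, regardless of count
}

async function main() {
  let t = Date.now()
  await sequential([1, 2, 3])
  console.log("sequential:", Date.now() - t, "ms") // ~150ms

  t = Date.now()
  await parallel([1, 2, 3])
  console.log("parallel:", Date.now() - t, "ms") // ~50ms
}

main()
```

Use the sequential form when each request depends on the previous result; otherwise the parallel form finishes in roughly the time of the slowest single request.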
Add sync tasks, microtasks, and macrotasks, then run ticks to see the queue priority in action.
Direct comparison against JavaScript so it is easier to see what changes when you pick Go, Java, or Python.
Directly vs JavaScript: JavaScript itself is the baseline for this page: a single-threaded event loop for user code that works best when most time is spent waiting on I/O.
Where it tends to win: Great developer velocity and very strong ecosystem for APIs, BFF layers, realtime UIs, and full-stack product teams.
Tradeoffs vs JavaScript: CPU-heavy work can block the loop unless you offload to workers/processes.
Directly vs JavaScript: Compared with JavaScript, Go usually gives you easier parallel CPU usage out of the box via goroutines while still handling I/O-heavy services very well.
Where it tends to win: Often stronger for high-concurrency backend services when you need both I/O concurrency and CPU parallelism in one process.
Tradeoffs vs JavaScript: Less frontend/full-stack reuse than JavaScript and different tooling ergonomics for product teams.
Directly vs JavaScript: Compared with JavaScript, modern Java (especially reactive or virtual-thread setups) can deliver higher raw throughput in many backend workloads, with heavier operational complexity.
Where it tends to win: Mature enterprise tooling, strong JIT performance, excellent observability, and robust large-system practices.
Tradeoffs vs JavaScript: Usually more configuration and platform overhead than JavaScript-centric stacks for smaller teams.
Directly vs JavaScript: Compared with JavaScript, Python async feels similar conceptually (event loop + await), but typical API throughput is often lower without specialized optimization.
Where it tends to win: Excellent for data/ML-adjacent services and scripting-heavy teams that prioritize ecosystem breadth.
Tradeoffs vs JavaScript: Frequently lower web throughput than tuned JavaScript/Go/Java services in identical benchmark conditions.
TechEmpower Framework Benchmarks (Continuous Run) on 2026-02-20. These are framework-level snapshots from one shared run. Use them as directional comparisons, not absolute guarantees for every app.
| Language / Framework | JSON req/s | JSON vs JS | Plaintext req/s | Plaintext vs JS |
|---|---|---|---|---|
| JavaScript (Fastify) | 816,314 | 1.00x | 1,141,781 | 1.00x |
| Go (Fiber v2) | 1,595,223 | 1.95x | 11,926,807 | 10.45x |
| Java (Spring WebFlux) | 1,233,566 | 1.51x | 2,399,052 | 2.10x |
| Python (FastAPI + Uvicorn) | 301,786 | 0.37x | 345,879 | 0.30x |
Source: TechEmpower raw run data
Real company examples where event-loop and non-blocking I/O patterns produced measurable outcomes in production systems.
Company: PayPal
Story: PayPal rebuilt its account overview page in parallel: one Java version and one Node.js version with the same functionality and production-like tests.
Why non-blocking I/O helped: Node.js let the team handle high request concurrency efficiently while waiting on multiple upstream API calls per route.
Measured impact: PayPal reported roughly 2x requests/second and about 35% lower average response time (~200ms faster) for the Node.js version in that comparison.
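The pattern behind numbers like these can be sketched as a route handler that fans out to its upstream APIs concurrently instead of awaiting them one by one. The endpoint URLs and response shapes below are hypothetical, not PayPal's real APIs.

```javascript
// Default upstream call; assumes a global fetch (Node 18+ or browsers).
const defaultFetchJson = (url) => fetch(url).then((res) => res.json())

async function accountOverview(userId, fetchJson = defaultFetchJson) {
  // All three upstream calls are in flight simultaneously; the event
  // loop stays free to serve other requests while they resolve.
  const [profile, balance, activity] = await Promise.all([
    fetchJson(`/api/profile/${userId}`),
    fetchJson(`/api/balance/${userId}`),
    fetchJson(`/api/activity/${userId}`),
  ])
  return { profile, balance, activity }
}
```

Injecting fetchJson keeps the handler testable; the route's latency is now roughly that of the slowest upstream call rather than the sum of all three.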
Company: LinkedIn
Story: LinkedIn engineers found a synchronous disk write in logging while tuning their Node.js mobile server.
Why non-blocking I/O helped: Because Node.js is single-threaded at the JavaScript execution layer, one blocking call stalled the event loop and reduced system throughput.
Measured impact: LinkedIn reported throughput dropping from thousands of requests/sec to only a few dozen until they removed that synchronous path.
Company: Uber
Story: Uber described its mobile frontline API tier as over 600 stateless endpoints that aggregate multiple services.
Why non-blocking I/O helped: Uber explicitly notes Node.js helped manage large quantities of concurrent connections in this edge layer, where many requests are I/O-bound and waiting on downstream systems.
Measured impact: Operationally, this enabled a high-concurrency API edge architecture while Uber scaled traffic-heavy Marketplace workloads.
Company: PayPal
Story: PayPal Performance Engineering analyzed top Node apps under increased traffic and found connection churn was limiting scalability.
Why non-blocking I/O helped: Enabling keep-alive/persistent connections reduced repeated DNS, connection, and TLS overhead in non-blocking service-to-service calls.
Measured impact: PayPal reported up to ~18% CPU gains and up to ~14% P95 latency gains after framework-level rollout for top Node apps.
Source: PayPal Tech Blog — Enabling Node Apps To Do More With Less (2021)
JavaScript in the browser usually runs user code on one main thread. That is great for simplicity, but pure heavy compute can freeze UI updates.
For high-performance workloads, split CPU-heavy work into Web Workers (browser) or worker threads/processes (server). Keep main-thread work short and stream progress updates back to the UI.