Node.js Interview Questions
Premise
“Any application that can be written in JavaScript, will eventually be written in JavaScript.” – Jeff Atwood
This was said back in 2007, and it is proving true to this day: think of any technical keyword, and there is probably a JavaScript library built around it. So if JavaScript is this popular and in demand, it is a great programming language to learn. But that is not the only skill required, since you have to apply it to solve practical problems, and one such problem is building scalable products.
Gen Z backend
After the jQuery era of animations, development shifted to single-page applications for better control of UI/UX, which brought frontend frameworks such as AngularJS and Angular. Then JavaScript was made available on literally any modern machine as a standalone runtime, i.e. Node.js. It was widely accepted as a backend technology and topped the 2020 StackOverflow survey for the second year in a row.
As developers are busy gaining experience in Node.js, it is handy to have a curated list of Node.js interview questions to revise. Also, to further consolidate your knowledge of JavaScript, refer to this source.
Beginner Node.js Interview Questions
1. What is a first class function in Javascript?
When functions can be treated like any other variable, they are called first-class functions. Many other programming languages, for example Scala and Haskell, follow this along with JS. Because of this, a function can be passed as a parameter to another function (a callback), and a function can return another function (a higher-order function). map() and filter() are popularly used higher-order functions.
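As a quick illustration (the function names here are arbitrary), functions can be assigned to variables, passed as arguments, and returned from other functions:

```javascript
// Functions are values: they can be stored, passed, and returned.
const double = (n) => n * 2; // assigned to a variable like any other value

// Higher-order function: takes a function as a parameter (a callback)
const applyTwice = (fn, value) => fn(fn(value));

// Higher-order function: returns a new function
const multiplier = (factor) => (n) => n * factor;
const triple = multiplier(3);

console.log(applyTwice(double, 5)); // 20
console.log(triple(7)); // 21
console.log([1, 2, 3].map(double)); // [2, 4, 6]
```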
2. What tools can be used to assure consistent code style?
ESLint can be used with any IDE to ensure a consistent coding style which further helps in maintaining the codebase.
3. What is the purpose of module.exports?
This is used to expose functions of a particular module or file to be used elsewhere in the project. This can be used to encapsulate all similar functions in a file which further improves the project structure.
For example, suppose you have a file of utility functions that fetch solutions to a problem statement in different programming languages:
const getSolutionInJavaScript = async ({
problem_id
}) => {
...
};
const getSolutionInPython = async ({
problem_id
}) => {
...
};
module.exports = { getSolutionInJavaScript, getSolutionInPython };
Thus, using module.exports, we can use these functions in some other file:
const { getSolutionInJavaScript, getSolutionInPython} = require("./utils")
4. List the two arguments that async.queue takes as input.
- Task Function
- Concurrency Value
5. What is REPL?
REPL in Node.js stands for Read, Eval, Print, Loop: an interactive shell that evaluates code on the go.
6. How many types of API functions are there in Node.js?
There are two types of API functions:
- Asynchronous, non-blocking functions - mostly I/O operations, which can be forked out of the main loop.
- Synchronous, blocking functions - mostly operations that influence the process running in the main loop.
7. How do you create a simple server in Node.js that returns Hello World?
const http = require("http");
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World\n');
}).listen(3000);
8. Why is Node.js single-threaded?
Node.js was created explicitly as an experiment in async processing. The theory was that async processing on a single thread could offer more performance and scalability under typical web loads than the existing thread-based implementations of scaling used by other frameworks.
9. What is fork in Node.js?
A fork in general is used to spawn child processes. In Node, child_process.fork() creates a new Node.js process with its own V8 instance, so multiple workers can execute code in parallel.
10. What are the advantages of using promises instead of callbacks?
The main advantage of using promises is that you get an object representing the eventual result, on which you can decide the action to take once the async task completes. This gives more manageable code and avoids callback hell.
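A small sketch of the difference (addOne here is a stand-in for any asynchronous task):

```javascript
// Promise style: the async result is an object we can chain on,
// with a single .catch() covering every step of the chain.
const addOne = (n) =>
  new Promise((resolve) => setTimeout(() => resolve(n + 1), 10));

addOne(1)
  .then(addOne) // each step stays at the same nesting level
  .then(addOne)
  .then((result) => console.log('result:', result)) // result: 4
  .catch((err) => console.error('one handler for any step:', err));
```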
11. What are some commonly used timing features of Node.js?
- setTimeout/clearTimeout – This is used to implement delays in code execution.
- setInterval/clearInterval – This is used to run a code block multiple times.
- setImmediate/clearImmediate – Any function passed as the setImmediate() argument is a callback that's executed in the next iteration of the event loop.
- process.nextTick – Both setImmediate and process.nextTick appear to be doing the same thing; however, you may prefer one over the other depending on your callback’s urgency.
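A short demo of the first two pairs (the delays chosen here are arbitrary):

```javascript
let ticks = 0;
const interval = setInterval(() => {
  ticks += 1;
  if (ticks === 3) clearInterval(interval); // stop the repeating timer
}, 10);

const timer = setTimeout(() => {
  console.log('one-shot timeout fired; interval ticked', ticks, 'times');
}, 100);
// clearTimeout(timer) would cancel the one-shot callback before it runs
```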
12. Explain the steps by which “Control Flow” controls function calls.
- Control the order of execution
- Collect data
- Limit concurrency
- Call the following step in the program.
13. How is Node.js better than other frameworks most popularly used?
- Node.js provides simplicity in development because of its non-blocking I/O and event-based model, which result in short response times and concurrent processing, unlike other frameworks where developers have to use thread management.
- It runs on Chrome's V8 engine, which is written in C++, is highly performant, and improves constantly.
- Also, since we use JavaScript in both the frontend and backend, development is much faster.
- And at last, there are ample libraries available, so we don't need to reinvent the wheel.
14. How do you manage packages in your node.js project?
Packages can be managed by a number of package managers, each with its own configuration file. Most projects use npm or yarn. Both provide access to almost all JavaScript libraries, with extended features for controlling environment-specific configurations. To pin the versions of libraries installed in a project, we use package.json and package-lock.json, so that there is no issue porting the app to a different environment.
15. What is Node.js and how it works?
Node.js is a JavaScript runtime that uses Chrome's V8 JavaScript engine. Basically, Node.js is based on an event-driven architecture where I/O runs asynchronously, making it lightweight and efficient. It is also used in developing desktop applications through a popular framework called Electron, as it provides APIs to access OS-level features such as the file system, network, etc.
Here is a Free course on Node.js for beginners to master the fundamentals of Node.js.
Intermediate Node.js Interview Questions
1. What do you understand by callback hell?
async_A(function(){
async_B(function(){
async_C(function(){
async_D(function(){
....
});
});
});
});
For the above example, we are passing callback functions and it makes the code unreadable and not maintainable, thus we should change the async logic to avoid this.
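The same flow can be flattened with promises and async/await; here async_A, async_B, and async_C are promise-returning stand-ins for the callback-based functions above:

```javascript
// Promise-returning stand-ins for the nested callbacks above
const async_A = () => Promise.resolve('A');
const async_B = (prev) => Promise.resolve(prev + 'B');
const async_C = (prev) => Promise.resolve(prev + 'C');

async function run() {
  // reads top to bottom instead of nesting one level per step
  const a = await async_A();
  const b = await async_B(a);
  const c = await async_C(b);
  return c;
}

run().then((value) => console.log(value)); // "ABC"
```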
2. Explain the concept of stub in Node.js?
Stubs are used in writing tests, which are an important part of development. A stub replaces a function that the code under test depends on with a controllable fake.
This helps in scenarios where we need to test:
- External calls which make tests slow and difficult to write (e.g HTTP calls/ DB calls)
- Triggering different outcomes for a piece of code (e.g. what happens if an error is thrown/ if it passes)
For example, this is the function:
const request = require('request');
const getPhotosByAlbumId = (id) => {
const requestUrl = `https://jsonplaceholder.typicode.com/albums/${id}/photos?_limit=3`;
return new Promise((resolve, reject) => {
request.get(requestUrl, (err, res, body) => {
if (err) {
return reject(err);
}
resolve(JSON.parse(body));
});
});
};
module.exports = getPhotosByAlbumId;
To test this function this is the stub
const expect = require('chai').expect;
const request = require('request');
const sinon = require('sinon');
const getPhotosByAlbumId = require('./index');
describe('with Stub: getPhotosByAlbumId', () => {
before(() => {
sinon.stub(request, 'get')
.yields(null, null, JSON.stringify([
{
"albumId": 1,
"id": 1,
"title": "A real photo 1",
"url": "https://via.placeholder.com/600/92c952",
"thumbnailUrl": "https://via.placeholder.com/150/92c952"
},
{
"albumId": 1,
"id": 2,
"title": "A real photo 2",
"url": "https://via.placeholder.com/600/771796",
"thumbnailUrl": "https://via.placeholder.com/150/771796"
},
{
"albumId": 1,
"id": 3,
"title": "A real photo 3",
"url": "https://via.placeholder.com/600/24f355",
"thumbnailUrl": "https://via.placeholder.com/150/24f355"
}
]));
});
after(() => {
request.get.restore();
});
it('should getPhotosByAlbumId', (done) => {
getPhotosByAlbumId(1).then((photos) => {
expect(photos.length).to.equal(3);
photos.forEach(photo => {
expect(photo).to.have.property('id');
expect(photo).to.have.property('title');
expect(photo).to.have.property('url');
});
done();
});
});
});
3. Describe the exit codes of Node.js?
Exit codes give us an idea of how a process got terminated/the reason behind termination.
A few of them are:
- Uncaught fatal exception (code 1) - There has been an exception that is not handled
- Unused (code 2) - This is reserved by Bash
- Internal JavaScript Evaluation Failure (code 4) - The bootstrapping process failed to return a function value when evaluated
- Fatal Error (code 5) - There has been a fatal error in V8, with a description of it on stderr
- Internal Exception Handler Run-time Failure (code 7) - There was an exception when the internal exception handler was invoked
4. Why does Node.js use Google's V8 engine?
Well, are there any other options available? Yes, of course: we have SpiderMonkey from Firefox and Chakra from Edge. But Google's V8 is the most evolved JavaScript and WebAssembly engine we have so far (being open-source, a huge community helps develop features and fix bugs) and among the fastest (it is written in C++ and compiles JavaScript down to machine code). And it is portable to almost every machine known.
5. Why should you separate Express app and server?
The Express app holds the routes, middleware, and business logic, whereas the server is only responsible for binding that app to a port and handling other network concerns. This separation ensures that the business logic is encapsulated and decoupled from the network configuration, which makes the project more readable and maintainable, and lets you test the API without starting a real server.
6. Explain what a Reactor Pattern in Node.js is?
The reactor pattern is a pattern for nonblocking I/O operations, but in general it is used in any event-driven architecture.
There are two components in this: 1. Reactor 2. Handler.
Reactor: Its job is to dispatch the I/O event to appropriate handlers
Handler: Its job is to actually work on those events
7. What is middleware?
Middleware comes in between your request and business logic. It is mainly used to capture logs and enable rate limit, routing, authentication, basically whatever that is not a part of business logic. There are third-party middleware also such as body-parser and you can write your own middleware for a specific use case.
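The mechanics can be shown without a framework, since a middleware is just a function of (req, res, next); this hypothetical logger is tested with plain objects standing in for Express's request and response:

```javascript
// A middleware reads/annotates the request, then hands off with next().
function logger(req, res, next) {
  req.log = `${req.method} ${req.url}`; // attach data for later handlers
  next();
}

// Simulate what Express does: run the middleware, then the route handler.
const req = { method: 'GET', url: '/users' };
const res = {};
logger(req, res, () => {
  console.log(req.log); // "GET /users"
});
```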
8. What are Node.js buffers?
In general, a buffer is a temporary memory area, mainly used by streams to hold on to some data until it is consumed. Buffers were introduced with additional use cases beyond JavaScript's Uint8Array and are mainly used to represent a fixed-length sequence of bytes. They also support legacy encodings like ASCII, utf-8, etc. A buffer is fixed (non-resizable) memory allocated outside the V8 heap.
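A few of these properties in a short example:

```javascript
// Buffers are fixed-length byte sequences allocated outside the V8 heap.
const buf = Buffer.from('hello', 'utf-8');

console.log(buf.length); // 5 bytes
console.log(buf[0]); // 104 (byte value of 'h')
console.log(buf.toString('ascii')); // "hello" — legacy encodings supported
console.log(buf.toString('hex')); // "68656c6c6f"

// Fixed length: writing past the end does not grow the buffer.
const fixed = Buffer.alloc(4);
fixed.write('toolong');
console.log(fixed.toString()); // "tool"
```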
9. What are Node.js streams?
Streams are instances of EventEmitter which can be used to work with streaming data in Node.js. They can be used for handling and manipulating streaming large files(videos, mp3, etc) over the network. They use buffers as their temporary storage.
There are mainly four types of the stream:
- Writable: streams to which data can be written (for example, fs.createWriteStream()).
- Readable: streams from which data can be read (for example, fs.createReadStream()).
- Duplex: streams that are both Readable and Writable (for example, net.Socket).
- Transform: Duplex streams that can modify or transform the data as it is written and read (for example, zlib.createDeflate()).
10. How can we use async/await in Node.js?
Here is an example of using async-await pattern:
// this code is to retry with exponential backoff
function wait (timeout) {
return new Promise((resolve) => {
setTimeout(() => {
resolve()
}, timeout);
});
}
// request() is assumed to be any promise-returning HTTP helper
async function requestWithRetry (url) {
const MAX_RETRIES = 10;
for (let i = 0; i <= MAX_RETRIES; i++) {
try {
return await request(url);
} catch (err) {
const timeout = Math.pow(2, i);
console.log('Waiting', timeout, 'ms');
await wait(timeout);
console.log('Retrying', err.message, i);
}
}
}
11. How does Node.js overcome the problem of blocking of I/O operations?
Node.js has an event loop that handles all I/O operations asynchronously, without blocking the main function.
So for example, if some network call needs to happen it will be scheduled in the event loop instead of the main thread(single thread). And if there are multiple such I/O calls each one will be queued accordingly to be executed separately(other than the main thread).
Thus even though we have single-threaded JS, I/O ops are handled in a nonblocking way.
12. Differentiate between process.nextTick() and setImmediate()?
Both can be used to switch to an asynchronous mode of operation by listener functions.
process.nextTick() schedules the callback to run before the event loop continues to its next phase, while setImmediate() queues the callback for the check phase of the next event-loop iteration. The event loop runs in the following manner:
timers –> pending callbacks –> idle, prepare –> poll (connections, data, etc.) –> check –> close callbacks
Thus, process.nextTick() adds its callback to a queue that is drained right after the current operation, before the event loop continues, while setImmediate() places its callback in the check phase of the next event-loop iteration.
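The ordering can be observed directly (the 20 ms delay is arbitrary, chosen so both callbacks have fired by then):

```javascript
const order = [];

order.push('sync'); // plain synchronous code runs first
process.nextTick(() => order.push('nextTick')); // before the loop continues
setImmediate(() => order.push('setImmediate')); // check phase, next iteration

setTimeout(() => {
  console.log(order); // [ 'sync', 'nextTick', 'setImmediate' ]
}, 20);
```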
13. If Node.js is single threaded then how does it handle concurrency?
The main loop is single-threaded and all async calls are managed by libuv library.
For example:
const crypto = require("crypto");
const start = Date.now();
function logHashTime() {
crypto.pbkdf2("a", "b", 100000, 512, "sha512", () => {
console.log("Hash: ", Date.now() - start);
});
}
logHashTime();
logHashTime();
logHashTime();
logHashTime();
This gives the output:
Hash: 1213
Hash: 1225
Hash: 1212
Hash: 1222
This is because libuv sets up a thread pool to handle such work. The pool defaults to four threads regardless of the number of cores, but you can override this with the UV_THREADPOOL_SIZE environment variable.
14. What is an event-loop in Node JS?
Everything that is async is managed by the event loop using a queue and listeners. We can get the idea from the following diagram (event-loop phases):
So when an async function (or I/O) needs to be executed, the main thread sends it to a different thread, allowing V8 to keep executing the main code. The event loop involves different phases with specific tasks, such as timers, pending callbacks, idle or prepare, poll, check, and close callbacks, each with its own FIFO queue. Also, in between iterations, it checks for pending async I/O or timers and shuts down cleanly if there aren't any.
Advanced Node.js Interview Questions
1. What is an Event Emitter in Node.js?
EventEmitter is a Node.js class that objects capable of emitting events are based on. Listener functions are attached to named events using the eventEmitter.on() function. Thus, whenever such an object emits an event, the attached functions are invoked synchronously.
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('an event occurred!');
});
myEmitter.emit('event');
2. Enhancing Node.js performance through clustering.
Node.js applications run on a single processor, which means that by default they don't take advantage of a multiple-core system. Cluster mode is used to start up multiple Node.js processes, thereby having multiple instances of the event loop. When we use cluster in a Node.js app, behind the scenes multiple Node.js processes are created, along with a parent process called the cluster manager, which is responsible for monitoring the health of the individual instances of our application.
3. What is a thread pool and which library handles it in Node.js
The thread pool is handled by the libuv library. libuv is a multi-platform C library that provides support for asynchronous I/O-based operations such as file systems, networking, and concurrency.
4. What is WASI and why is it being introduced?
Node.js provides an implementation of the WebAssembly System Interface specification through its WASI API, implemented with the WASI class. WASI was introduced because it makes it possible for WebAssembly code to use the underlying operating system via a collection of POSIX-like functions, further enabling applications to use resources more efficiently and to access features that require system-level access.
5. How are worker threads different from clusters?
Cluster:
- There is one process on each CPU with an IPC to communicate.
- In case we want to have multiple servers accepting HTTP requests via a single port, clusters can be helpful.
- The processes are spawned in each CPU thus will have separate memory and node instance which further will lead to memory issues.
Worker threads:
- There is only one process in total with multiple threads.
- Each thread has one Node instance (one event loop, one JS engine) with most of the APIs accessible.
- Shares memory with other threads (e.g. SharedArrayBuffer)
- Worker threads can be used for CPU-intensive tasks like processing data; since Node.js is single-threaded, such synchronous tasks can be made more efficient by leveraging worker threads.
6. How to measure the duration of async operations?
Performance API provides us with tools to figure out the necessary performance metrics. A simple example would be using async_hooks and perf_hooks
'use strict';
const async_hooks = require('async_hooks');
const {
performance,
PerformanceObserver
} = require('perf_hooks');
const set = new Set();
const hook = async_hooks.createHook({
init(id, type) {
if (type === 'Timeout') {
performance.mark(`Timeout-${id}-Init`);
set.add(id);
}
},
destroy(id) {
if (set.has(id)) {
set.delete(id);
performance.mark(`Timeout-${id}-Destroy`);
performance.measure(`Timeout-${id}`,
`Timeout-${id}-Init`,
`Timeout-${id}-Destroy`);
}
}
});
hook.enable();
const obs = new PerformanceObserver((list, observer) => {
console.log(list.getEntries()[0]);
performance.clearMarks();
observer.disconnect();
});
obs.observe({ entryTypes: ['measure'], buffered: true });
setTimeout(() => {}, 1000);
This would give us the exact time it took to execute the callback.
7. How to measure the performance of async operations?
Performance API provides us with tools to figure out the necessary performance metrics.
A simple example would be:
const { PerformanceObserver, performance } = require('perf_hooks');
const obs = new PerformanceObserver((items) => {
console.log(items.getEntries()[0].duration);
performance.clearMarks();
});
obs.observe({ entryTypes: ['measure'] });
performance.measure('Start to Now');
performance.mark('A');
doSomeLongRunningProcess(() => {
performance.measure('A to Now', 'A');
performance.mark('B');
performance.measure('A to B', 'A', 'B');
});
Advanced Express.js Interview Questions
1. How do you approach API versioning (URI vs header vs content negotiation)?
API versioning is used to evolve APIs without breaking existing clients. Common approaches differ in how the version is specified.
- URI versioning includes the version in the URL (for example, /v1/users). It is simple, explicit, and easy to debug, which makes it the most commonly used approach in practice.
- Header-based versioning sends the version in a request header. It keeps URLs clean but requires clients and infrastructure to handle custom headers correctly.
- Content negotiation uses the Accept header with versioned media types. It is flexible but adds complexity and is harder to reason about during debugging.
In production systems, URI-based versioning is often preferred for clarity, while header-based or content negotiation approaches are used when stricter API contracts are required.
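A framework-free sketch of URI versioning: dispatch on the version prefix of the path (the handlers and response shapes here are hypothetical; in Express you would mount one Router per version, e.g. app.use('/v1', v1Router)):

```javascript
// Hypothetical versioned handlers keyed by prefix and path
const handlers = {
  v1: { '/users': () => ({ users: [], format: 'legacy' }) },
  v2: { '/users': () => ({ data: { users: [] }, format: 'envelope' }) },
};

function route(url) {
  const [, version, ...rest] = url.split('/'); // "/v1/users" -> ["", "v1", "users"]
  const path = '/' + rest.join('/');
  const handler = handlers[version] && handlers[version][path];
  return handler ? handler() : { error: 'not found' };
}

console.log(route('/v1/users').format); // "legacy"
console.log(route('/v2/users').format); // "envelope"
```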
2. How do you prevent blocking the event loop when doing heavy work in an Express route?
Express runs on Node.js, which uses a single-threaded event loop. Any CPU-intensive or long-running synchronous work inside a route can block the event loop and degrade performance for all requests.
To prevent this:
- Offload CPU-heavy tasks to worker threads or separate processes
- Use asynchronous, non-blocking APIs for I/O operations
- Move background work to queues or job workers
- Avoid synchronous loops, heavy JSON processing, or crypto operations inside routes
In production systems, Express routes should remain lightweight and delegate heavy work elsewhere to maintain responsiveness.
3. How do you implement streaming responses (downloads or large payloads) in Express?
Streaming responses allow data to be sent incrementally instead of loading the entire payload into memory.
In Express, this is typically done using Node.js streams, such as readable streams piped directly to the response.
Common use cases include:
- File downloads
- Large exports (CSV, logs)
- Proxying data from another service
Streaming improves memory efficiency and enables backpressure handling. It is preferred over buffering large payloads in memory, especially for high-traffic or data-heavy endpoints.
4. How do you implement idempotency for POST endpoints (payment or order APIs)?
Idempotency ensures that repeated requests produce the same result, which is critical for APIs handling payments, orders, or retries.
A common approach is:
- Require clients to send an idempotency key with the request
- Store the key along with the request result in persistent storage
- If the same key is received again, return the previous response instead of reprocessing
Idempotency logic is usually implemented as middleware or service-level logic, before executing the main operation. This prevents duplicate charges or duplicate resource creation during retries or network failures.
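A minimal sketch of the approach as plain (req, res, next) middleware; the in-memory Map stands in for persistent storage (Redis or a database in production), and the handler and header names are illustrative:

```javascript
const seen = new Map(); // key -> stored response (use Redis/DB in production)

function idempotency(req, res, next) {
  const key = req.headers['idempotency-key'];
  if (key && seen.has(key)) {
    res.body = seen.get(key); // replay the stored response
    res.replayed = true;
    return; // do not reprocess
  }
  res.remember = (body) => { // handler records its result under the key
    if (key) seen.set(key, body);
    res.body = body;
  };
  next();
}

// Hypothetical handler that "charges" once per processed request
let charges = 0;
const handler = (req, res) => { charges += 1; res.remember({ charged: true }); };

const request = { headers: { 'idempotency-key': 'pay-001' } };
const res1 = {};
idempotency(request, res1, () => handler(request, res1));
const res2 = {}; // the retry with the same key
idempotency(request, res2, () => handler(request, res2));

console.log('charges:', charges); // 1 — the retry was replayed, not reprocessed
```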
5. How do you design pagination (offset vs cursor) and handle consistency?
Pagination is used to return large datasets in manageable chunks.
- Offset-based pagination uses page numbers or offsets and is simple to implement, but it can become slow and inconsistent when data changes frequently.
- Cursor-based pagination uses a stable reference (such as an ID or timestamp) and provides better performance and consistency for large or frequently updated datasets.
In production systems:
- Offset pagination is suitable for small or static datasets
- Cursor pagination is preferred for APIs with large tables or real-time data
Handling consistency involves defining clear sort order and ensuring cursors are stable and deterministic.
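A cursor-pagination sketch over an in-memory, id-sorted dataset (in a real API the filter would be a database query such as WHERE id > cursor ORDER BY id LIMIT n):

```javascript
const items = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));

function page(cursor, limit) {
  // deterministic sort order (by id) keeps cursors stable
  const data = items.filter((it) => it.id > cursor).slice(0, limit);
  const nextCursor = data.length ? data[data.length - 1].id : null;
  return { data, nextCursor };
}

const first = page(0, 3);
console.log(first.data.map((i) => i.id)); // [1, 2, 3]
const second = page(first.nextCursor, 3);
console.log(second.data.map((i) => i.id)); // [4, 5, 6]
```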
6. How do you implement graceful shutdown for an Express server?
Graceful shutdown ensures that an Express server stops accepting new requests while allowing in-flight requests to complete.
A typical approach includes:
- Listening for shutdown signals such as SIGTERM or SIGINT
- Stopping the server from accepting new connections
- Allowing active requests to finish within a timeout
- Closing database connections and other resources cleanly
This prevents dropped requests during deployments or restarts and is essential for zero-downtime production systems.
7. How do you handle centralized logging and correlation IDs across microservices?
Centralized logging collects logs from multiple services into a single searchable system, making debugging and tracing easier.
Correlation IDs are used to track a single request across services by:
- Generating or propagating a request ID at the edge
- Attaching the ID to logs, headers, and downstream calls
In Express, correlation IDs are typically handled via middleware and included in all log entries. This approach helps trace failures, measure latency across services, and debug distributed systems effectively.
Express.js Interview Questions
1. What is Express.js, and where does it sit in a Node.js backend stack?
Express.js is a minimal and flexible web framework built on top of Node.js. It provides a structured way to handle HTTP requests, routing, and middleware, without hiding core Node.js behavior.
In a typical Node.js backend stack, Express sits between the HTTP server and application logic:
- Node.js handles the event loop and low-level networking
- Express handles routing, middleware, and request–response flow
- Business logic, databases, and services are layered on top of Express
Express is commonly used to build REST APIs, backend services, and web applications because it offers structure while remaining lightweight.
2. What is middleware in Express, and what does next() do?
Middleware in Express is a function that executes between receiving a request and sending a response. It can read or modify the request and response objects or end the request cycle.
Common uses of middleware include:
- Authentication and authorization
- Logging
- Request parsing
- Validation and error handling
The next() function passes control to the next middleware or route handler in the chain. If next() is not called and no response is sent, the request will hang.
Middleware order matters in Express, as it determines how requests flow through the application.
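The ordering and hand-off can be shown with mock objects; this is conceptually what Express does for app.use(auth); app.use(logger); app.get('/', handler):

```javascript
const order = [];
const auth = (req, res, next) => { order.push('auth'); next(); };
const logger = (req, res, next) => { order.push('logger'); next(); };
const handler = (req, res) => order.push('handler');

// Each middleware decides whether to pass control on via next().
const req = {};
const res = {};
auth(req, res, () => logger(req, res, () => handler(req, res)));
console.log(order); // [ 'auth', 'logger', 'handler' ]
```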
3. What’s the difference between app.use() and route handlers like app.get()?
app.use() is used to register middleware that runs for every request or for a specific path prefix. It is not tied to a particular HTTP method.
Route handlers like app.get(), app.post(), etc., are method-specific and only run when both the path and HTTP method match.
Key differences:
- app.use() - middleware, path-based, runs for all HTTP methods
- app.get() / app.post() - route handlers, method-specific
- Middleware registered with app.use() typically runs before route handlers
This distinction is important for understanding request flow and middleware ordering in Express.
4. How do you serve static files in Express, and what are common pitfalls?
Static files in Express are served using the built-in express.static middleware.
Example: app.use(express.static('public'));
This allows files inside the public directory (HTML, CSS, images, JS) to be served directly.
Common pitfalls include:
- Placing express.static after route handlers, causing requests to never reach it
- Exposing sensitive files by serving the wrong directory
- Incorrect path resolution when using relative paths
- Forgetting to configure caching headers for production
Serving static files is simple, but incorrect placement or configuration can lead to security and performance issues.
5. How do route parameters work in Express (for example, /:id)?
Route parameters allow dynamic values to be captured directly from the URL. They are defined using a colon (:) and are available on the req.params object.
Example:
app.get('/users/:id', (req, res) => {
const userId = req.params.id;
res.send(userId);
});
In this route, a request to /users/42 sets req.params.id to "42" (route parameters are always strings).
Route parameters are commonly used for identifying resources, such as user IDs or order IDs. They are matched positionally and should be validated before use.
6. What is express.Router() and why do we use it?
express.Router() is a modular routing mechanism that allows routes and middleware to be grouped and managed separately from the main application.
It is used to:
- Organize routes by feature or domain
- Keep the main app.js or server.js file clean
- Apply middleware to specific route groups
Example usage:
const router = express.Router();
router.get('/users', getUsers);
router.post('/users', createUser);
app.use('/api', router);
Using routers improves maintainability and scalability, especially in larger Express applications with multiple features or teams working on the same codebase.
7. How do you handle JSON request bodies safely in Express, and what middleware is typically used?
JSON request bodies in Express are handled using built-in body parsing middleware.
The commonly used middleware is:
app.use(express.json());
This middleware parses incoming JSON payloads and makes the data available on req.body.
To handle JSON bodies safely, it is important to:
- Set reasonable size limits to prevent large payload abuse
- Validate and sanitize input before using it
- Handle parsing errors gracefully
In production applications, JSON parsing is usually combined with request validation and security middleware.
8. What is the difference between query params, route params, and body?
These are three different ways to pass data in an HTTP request:
1. Route params are part of the URL path and are accessed via req.params
Example: /users/:id
2. Query params are key-value pairs appended to the URL and are accessed via req.query
Example: /users?page=2
3. Request body contains data sent with methods like POST or PUT and is accessed via req.body
Used for creating or updating resources
Understanding the difference helps in designing clear APIs and handling data correctly in Express applications.
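To make the three locations concrete, here is where each piece of data lives for a request like POST https://api.example.com/users/42?notify=true with a JSON body (the host and values are made up; the URL class only illustrates the parsing Express does for you):

```javascript
const url = new URL('https://api.example.com/users/42?notify=true');

// Route param: Express matches /users/:id and exposes "42" on req.params.id
const id = url.pathname.split('/').pop();
console.log('route param id:', id); // "42" (always a string)

// Query param: everything after the "?" lands on req.query
console.log('query notify:', url.searchParams.get('notify')); // "true"

// Body: sent in the request payload, parsed (e.g. by express.json()) onto req.body
const body = JSON.parse('{"name":"Ada"}');
console.log('body name:', body.name); // "Ada"
```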
Express.js Interview Questions (Intermediate)
1. How do you handle errors in Express (error-handling middleware signature)?
Express handles errors using error-handling middleware, which has a special function signature with four arguments.
(err, req, res, next)
Key points:
- Error-handling middleware must be defined after all routes
- Calling next(err) forwards errors to the error handler
- It centralizes error formatting, logging, and response logic
Synchronous errors are caught automatically, while asynchronous errors must be passed explicitly using next(err) or handled with a wrapper. Centralized error handling is critical for consistent responses and easier debugging in production.
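The four-argument signature can be exercised with plain objects instead of a running Express app (the route and error shape here are illustrative):

```javascript
// Error-handling middleware: recognized by its four-argument signature.
function errorHandler(err, req, res, next) {
  res.statusCode = err.status || 500;
  res.body = { error: err.message }; // one consistent error shape
}

// A route that fails and forwards the error with next(err)
function route(req, res, next) {
  const err = new Error('user not found');
  err.status = 404;
  next(err); // async errors must be forwarded this way too
}

const res = {};
route({}, res, (err) => errorHandler(err, {}, res, () => {}));
console.log(res.statusCode, res.body); // 404 { error: 'user not found' }
```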
2. How do you structure routes and controllers in a production Express app?
In a production Express application, routes and controllers are usually separated by responsibility to keep the codebase maintainable and testable.
A common structure is:
- Routes: define HTTP paths and attach middleware
- Controllers: contain request-handling logic
- Services: handle business logic and external calls
Routes remain thin and delegate work to controllers, while controllers avoid direct database or infrastructure concerns.
This separation makes it easier to scale the codebase, apply middleware consistently, and test components independently.
3. How do you implement modular routing (feature-based routing)?
Modular routing groups routes by feature or domain instead of HTTP method. Each feature exposes its own router, which is then mounted on the main application.
Example approach:
- users/routes.js
- users/controller.js
- users/service.js
Each feature router is mounted using: app.use('/users', usersRouter);
This pattern improves clarity, supports team ownership, and reduces merge conflicts in large codebases. It is the most common routing approach in production Express applications.
4. How do you write custom middleware for authentication and authorization?
Custom authentication or authorization middleware is written as a function that runs before protected routes. It typically verifies credentials (such as a token or session) and decides whether the request should proceed.
A common pattern is:
- Extract credentials from headers or cookies
- Validate them (JWT verification, session lookup, etc.)
- Attach user context to req if valid
- Block the request if invalid
The middleware is then applied using app.use() or at the router level, ensuring all protected routes pass through it. Authorization checks (roles, permissions) are usually layered on top of authentication.
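A sketch of the pattern with a hypothetical token store; verifyToken stands in for real verification (e.g. jsonwebtoken's verify()), and the requests are mock objects:

```javascript
// Hypothetical token -> user lookup standing in for JWT/session validation
const users = { 'token-123': { id: 1, role: 'admin' } };
const verifyToken = (token) => users[token] || null;

function authenticate(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  const user = verifyToken(token);
  if (!user) { res.statusCode = 401; return; } // block invalid requests
  req.user = user; // attach user context for later handlers
  next();
}

// Authorization layered on top of authentication
const requireRole = (role) => (req, res, next) => {
  if (req.user.role !== role) { res.statusCode = 403; return; }
  next();
};

const req = { headers: { authorization: 'Bearer token-123' } };
const res = {};
authenticate(req, res, () =>
  requireRole('admin')(req, res, () => (res.statusCode = 200)));
console.log(res.statusCode); // 200
```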
5. What’s the difference between synchronous errors and async errors in Express routes?
Synchronous errors are thrown directly during request handling and are automatically caught by Express. They are forwarded to the error-handling middleware without extra effort.
Asynchronous errors (inside promises or async/await) are not caught automatically unless they are explicitly passed to the error handler.
Key points:
- Sync error -> throw new Error() is enough
- Async error -> must be passed using next(err) or handled via an async wrapper
- Missing async error handling can cause unhandled promise rejections
In production apps, async route handlers are usually wrapped to ensure all errors reach centralized error middleware.
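One common wrapper looks like this, a small sketch rather than a library-specific API (Express 5 catches rejected async handlers itself, but in Express 4 this pattern is still needed):

```javascript
// Catches rejections from async route handlers and forwards the
// error to next(), so centralized error middleware always sees it.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// usage sketch (route registration assumed):
// app.get('/users/:id', asyncHandler(async (req, res) => {
//   const user = await findUser(req.params.id); // may reject
//   res.json(user);
// }));
```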
6. How do you implement request validation, and where should it live?
Request validation ensures incoming data is well-formed and safe before business logic runs.
In Express applications:
- Validation is typically implemented as middleware
- It runs before controllers
- It validates params, query strings, and request bodies
Validation logic should live:
- Close to routes for clarity
- Separate from controllers to keep them clean
Common practices include schema-based validation and returning consistent validation error responses. Keeping validation centralized and reusable helps maintain API reliability and security.
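A minimal hand-rolled validation middleware illustrates the shape; in practice schema libraries such as Joi or zod are common, but the placement before controllers is the same:

```javascript
// rules: a map of field name -> predicate. Returns middleware that
// rejects the request with a consistent 400 response on failure.
function validateBody(rules) {
  return (req, res, next) => {
    const errors = [];
    for (const [field, check] of Object.entries(rules)) {
      if (!check(req.body?.[field])) errors.push(`invalid ${field}`);
    }
    if (errors.length) {
      return res.status(400).json({ errors }); // consistent error shape
    }
    next();
  };
}

// usage sketch:
// router.post('/users', validateBody({
//   email: (v) => typeof v === 'string' && v.includes('@'),
//   age: (v) => Number.isInteger(v) && v >= 0,
// }), createUserController);
```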
7. How do you implement rate limiting and request throttling?
Rate limiting and throttling are used to protect APIs from abuse and control traffic spikes.
In Express, this is typically implemented as middleware that:
- Tracks request counts per client (IP, user, or token)
- Enforces limits over a fixed or sliding time window
- Rejects requests once limits are exceeded
Rate limiting is usually applied:
- At the edge (reverse proxy, API gateway) for coarse limits
- In Express middleware for fine-grained, route-specific limits
Throttling strategies may differ for authenticated vs unauthenticated users. In production, rate limits are often backed by a shared store (for example, Redis) to work across multiple instances.
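To make the idea concrete, here is a deliberately naive fixed-window, per-IP limiter kept in process memory; production systems typically use a shared store such as Redis instead, and the `windowMs`/`max` values are illustrative:

```javascript
// Fixed-window counter per client IP. Resets the window when it
// expires; rejects with 429 once the per-window max is exceeded.
function rateLimit({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return (req, res, next) => {
    const key = req.ip;
    const now = Date.now();
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // new window
      return next();
    }
    if (++entry.count > max) {
      return res.status(429).json({ error: 'Too many requests' });
    }
    next();
  };
}
```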
8. How does trust proxy impact IP extraction and security in Express deployments?
The trust proxy setting tells Express whether it should trust headers added by a reverse proxy, such as X-Forwarded-For, when determining the client’s IP address.
This setting is important when Express runs behind:
- Load balancers (for example, AWS ALB)
- Reverse proxies (for example, NGINX)
- CDNs and edge networks (for example, Cloudflare)
Key impacts:
- Affects how req.ip and req.ips are populated
- Influences rate limiting, logging, and authentication logic
- Incorrect configuration can allow IP spoofing or break security controls
In production, trust proxy should be enabled only when a trusted proxy is actually in front of the app, and configured as narrowly as possible.
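In code this is a one-line setting on the Express app; a common narrow configuration, assuming exactly one trusted proxy hop, looks like:

```javascript
// app is an Express application. Trust exactly one proxy hop
// (e.g. a single NGINX or load balancer in front), so req.ip is
// derived from X-Forwarded-For rather than the proxy's address.
app.set('trust proxy', 1);

// Other accepted values include 'loopback', an IP/CIDR list,
// or a custom function, whichever is narrowest for your topology.
```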
9. How do you handle file uploads securely in Express?
Handling file uploads securely involves controlling what is uploaded, how much is uploaded, and where it is stored.
Key practices include:
- Enforcing file size limits to prevent memory or disk exhaustion
- Validating file types using MIME type and file signature checks, not just file extensions
- Storing files outside the application root to avoid direct execution
- Renaming files to avoid collisions and path traversal issues
- Scanning uploads if required for malware or policy compliance
Uploads are typically handled using middleware, with validation applied before files are processed or persisted. In production systems, files are often stored in object storage or a dedicated file service rather than on the application server.
Node.js Interview Questions for Experienced
1. How do you decide between cluster and worker_threads for scaling a Node service?
The choice depends on what kind of work needs to be scaled.
- cluster is used to scale a Node.js service across multiple CPU cores by running multiple processes. Each process has its own event loop and memory space, making it suitable for handling more concurrent requests.
- worker_threads are used to offload CPU-intensive tasks within the same process. They share memory and are useful for heavy computation that would otherwise block the event loop.
In practice:
- Use cluster for request-level concurrency and horizontal scaling
- Use worker threads for CPU-bound work that cannot be made asynchronous
2. What are common causes of event loop blocking, and how do you detect them?
Event loop blocking happens when synchronous or long-running tasks prevent Node.js from handling other requests.
Common causes include:
- CPU-intensive computations
- Large synchronous loops
- Heavy JSON serialization or parsing
- Expensive regex operations
- Blocking file system or crypto calls
Detection usually involves:
- Monitoring event loop delay
- Observing increased request latency under load
- Profiling the application to identify long-running synchronous functions
This question tests whether the candidate understands how Node’s runtime behavior directly impacts application performance.
3. How do you diagnose memory leaks in Node.js in production?
Diagnosing memory leaks in production focuses on identifying objects that keep growing and are not released.
Common approaches include:
- Monitoring heap usage over time to see if memory keeps increasing without stabilizing
- Comparing heap snapshots taken at different intervals to find retained objects
- Checking for unbounded caches, global references, event listeners, or closures
- Observing garbage collection behavior; frequent or long GC cycles often indicate memory pressure
- Correlating memory growth with specific traffic patterns or features
The goal is to distinguish between expected memory usage (such as caches) and unintended object retention.
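A lightweight first step is periodic heap sampling with `process.memoryUsage()`; heavier tools (heap snapshots, profilers) come in once a trend is confirmed:

```javascript
// Log heap usage periodically and compare samples over time.
// If heapUsed climbs across samples without ever leveling off,
// suspect unintended retention rather than a warming cache.
function sampleHeap() {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  const toMb = (bytes) => Math.round(bytes / 1048576);
  return {
    at: new Date().toISOString(),
    heapUsedMb: toMb(heapUsed),
    heapTotalMb: toMb(heapTotal),
    rssMb: toMb(rss),
  };
}

// usage sketch: setInterval(() => console.log(sampleHeap()), 60_000).unref();
```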
4. What metrics do you monitor for Node services beyond CPU and RAM?
Beyond CPU and memory, production Node services require runtime and application-level metrics.
Commonly monitored metrics include:
- Event loop delay to detect blocking
- Request latency (p50, p95, p99)
- Throughput and error rates
- Garbage collection frequency and duration
- Thread pool usage for async operations
- Database and external dependency latency
- Queue depth or backlog for background jobs
These metrics help detect performance degradation early and pinpoint whether issues are caused by the runtime, application logic, or dependencies.
5. Explain backpressure in streams. What breaks if you ignore it?
Backpressure is the mechanism that allows a system to slow down data producers when consumers can’t keep up. In Node.js streams, it prevents memory from growing uncontrollably.
If backpressure is ignored:
- Buffers keep filling up, increasing memory usage
- The process may experience GC pressure or crashes
- Latency increases as data queues up
- Downstream systems can be overwhelmed
Properly handling backpressure using stream piping and respecting write() return values ensures data flows at a rate the system can safely process.
6. How do you implement graceful shutdown in Node (SIGTERM) without dropping requests?
Graceful shutdown allows a Node service to stop cleanly during deploys or restarts.
A typical approach includes:
- Listening for SIGTERM or SIGINT signals
- Stopping the server from accepting new connections
- Allowing in-flight requests to complete within a timeout
- Closing database connections, queues, and other resources
This prevents request loss and ensures the service exits predictably, which is essential for reliable production deployments.
7. What’s your approach to zero-downtime deployments?
Zero-downtime deployments aim to update services without interrupting active traffic.
A typical approach includes:
- Draining existing connections before shutting down instances
- Respecting keep-alive connections and setting reasonable timeouts
- Using load balancers to stop routing new traffic to instances being terminated
- Performing graceful shutdown so in-flight requests complete
This ensures deployments do not cause failed requests or user-visible outages.
8. How do you design retries and timeouts to avoid retry storms?
Retries and timeouts must be designed carefully to avoid amplifying failures.
Best practices include:
- Setting strict timeouts for external calls
- Limiting retry attempts and adding exponential backoff
- Using jitter to avoid synchronized retries
- Avoiding retries for non-idempotent operations
- Combining retries with circuit breakers
This approach prevents cascading failures when a dependency becomes slow or unavailable.
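A small sketch of retries with exponential backoff and full jitter; the defaults are illustrative, not recommendations for every dependency, and a circuit breaker would wrap this at a higher layer:

```javascript
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

// Retry fn up to `attempts` times. Delay grows exponentially and is
// randomized ("full jitter") so clients do not retry in lockstep.
async function retry(fn, { attempts = 3, baseMs = 100 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts
      const backoff = baseMs * 2 ** i;
      await sleep(Math.random() * backoff); // full jitter
    }
  }
}
```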
9. What’s the difference between process-level concurrency and thread-level concurrency in Node?
Process-level concurrency runs multiple Node.js processes (for example, using cluster).
- Each process has its own event loop and memory
- Scales well across CPU cores
- Strong isolation, but higher memory usage
Thread-level concurrency uses worker threads within a single process.
- Threads share memory and can communicate efficiently
- Best suited for CPU-intensive tasks
- Requires careful handling to avoid race conditions
In practice, process-level concurrency is used to scale request handling, while thread-level concurrency is used to offload heavy computation.
10. How do you secure Node apps against common risks (secrets, headers, dependencies, input validation)?
Securing Node applications requires controls at multiple layers.
Common practices include:
- Managing secrets via environment variables or secret managers, not code or repos
- Setting secure HTTP headers to protect against common attacks
- Keeping dependencies updated and scanning for vulnerabilities
- Validating and sanitizing all external inputs
- Limiting error details exposed to clients
Security is an ongoing process and must be enforced consistently across development, deployment, and runtime environments.
11. How do you manage configuration safely across environments (12-factor principles)?
Managing configuration safely means keeping config separate from code and environment-specific.
Common practices aligned with 12-factor principles include:
- Using environment variables for all environment-specific settings
- Avoiding hard-coded secrets or environment-specific logic
- Validating configuration at startup and failing fast on missing values
- Using secret managers for sensitive data
- Keeping configuration consistent across local, staging, and production
This approach simplifies deployments and reduces configuration-related failures.
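The fail-fast validation step might look like this; the variable names are examples, not a prescribed schema:

```javascript
// Validate configuration once at startup and crash immediately if
// anything required is missing, rather than failing mid-request later.
function loadConfig(env = process.env) {
  const required = ['DATABASE_URL', 'PORT'];
  const missing = required.filter((key) => !env[key]);
  if (missing.length) {
    throw new Error(`Missing required config: ${missing.join(', ')}`);
  }
  return { databaseUrl: env.DATABASE_URL, port: Number(env.PORT) };
}
```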
12. How do you structure a large Node monorepo (packages, shared libraries, boundaries)?
A large Node monorepo is structured to balance code reuse and isolation.
Common patterns include:
- Organizing code into packages by domain or service
- Extracting shared logic into well-defined shared libraries
- Enforcing clear boundaries between packages to prevent tight coupling
- Using tooling to manage dependencies and versioning consistently
This structure improves scalability, maintainability, and collaboration across teams working in the same repository.
Scenario-Based Express.js Interview Questions
1. How do you capture diagnostics safely without taking down the service?
Capturing diagnostics in production must be done carefully to avoid additional outages.
Safe practices include:
- Using structured logs with adjustable log levels
- Capturing heap snapshots or profiles during low traffic
- Applying sampling to avoid excessive overhead
- Avoiding blocking operations during diagnostics
- Collecting data incrementally rather than all at once
This ensures observability and debugging are possible without destabilizing the running service.
2. A route randomly returns “Cannot set headers after they are sent.” What causes this and how do you fix it?
This error occurs when Express attempts to send more than one response for the same request.
Common causes include:
- Calling res.send() / res.json() and then continuing execution
- Missing return statements after sending a response
- Calling next() after a response has already been sent
- Multiple async code paths resolving and sending responses
- Error handling logic that sends a response twice
To fix it:
- Ensure each request has exactly one response path
- Add return after sending a response
- Avoid calling next() once a response is finalized
- Carefully handle async logic, so only one branch responds
Interviewers use this scenario to check whether candidates understand Express request-response lifecycle control.
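The classic buggy shape and its fix can be shown side by side; the handler below is a generic illustration, not a specific application's code:

```javascript
// Buggy: both statements can run, producing two responses.
// function handler(req, res) {
//   if (!req.query.id) res.status(400).json({ error: 'id required' });
//   res.json({ id: req.query.id }); // still executes after the 400
// }

// Fixed: return after responding so exactly one path sends.
function handler(req, res) {
  if (!req.query.id) {
    return res.status(400).json({ error: 'id required' });
  }
  res.json({ id: req.query.id });
}
```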
3. How do you isolate work using separate processes, queues, or worker threads?
Isolation is used to prevent heavy or unstable workloads from impacting request handling.
Common approaches include:
- Running heavy tasks in separate processes or services
- Using queues to process background jobs asynchronously
- Offloading CPU-intensive work to worker threads
- Keeping Express routes lightweight and delegating work
Choosing the right isolation strategy depends on whether the workload is CPU-bound, I/O-bound, or latency-sensitive.
4. How do you redesign an endpoint using streaming and limits?
When an endpoint struggles under load due to large payloads or unbounded processing, redesigning it with streaming and limits improves stability.
Typical changes include:
- Streaming request and response data instead of buffering everything in memory
- Applying size limits on request bodies and uploads
- Using pagination or chunked responses for large datasets
- Enforcing timeouts and backpressure-aware processing
This approach reduces memory usage, prevents event loop blocking, and makes endpoints more resilient under high traffic.
5. How do you design circuit breakers, bulkheads, and fallbacks?
These patterns are used to protect services from cascading failures when dependencies become slow or unavailable.
A common approach includes:
- Circuit breakers to stop calling a failing dependency after repeated errors and allow recovery after a cooldown
- Bulkheads to isolate resources (threads, connections, queues) so one failing component does not exhaust the entire system
- Fallbacks to return cached data, defaults, or partial responses when a dependency is unavailable
In Express-based systems, these patterns are usually implemented at the service or client layer, not directly inside route handlers, to keep request handling predictable.
6. How do you check event loop delay, thread pool starvation, or a stuck DB pool?
These issues affect throughput even when CPU or memory looks normal.
Typical checks include:
- Measuring event loop delay to detect synchronous blocking
- Monitoring thread pool usage for saturation caused by crypto, file I/O, or DNS
- Inspecting database pool metrics for exhausted or unreleased connections
- Checking request latency patterns under load
- Correlating slow requests with blocking operations or resource exhaustion
This scenario tests whether a candidate can reason about Node.js runtime internals rather than focusing only on application code.
7. How do you confirm a memory leak vs cache growth vs GC pressure?
When memory usage keeps increasing, the key is to determine why memory is not being released.
Common ways to differentiate include:
- Checking whether memory grows steadily (leak) or stabilizes after warming up (cache growth)
- Reviewing cache size limits, eviction policies, and TTLs
- Observing GC behavior; frequent or long garbage collection pauses indicate GC pressure
- Taking heap snapshots over time and comparing retained objects
- Looking for unbounded data structures, global references, or event listeners
This helps distinguish between intentional memory usage (caches) and unintentional retention (leaks).
8. How do you identify infinite loops, regex backtracking, JSON stringify hotspots, or tight polling?
These issues cause CPU spikes and event loop blocking, often only visible under load.
Common ways to identify them include:
- Monitoring event loop delay and CPU usage during incidents
- Reviewing code paths with synchronous loops or heavy computations
- Profiling the application to find slow functions or repeated executions
- Watching for excessive timers, polling intervals, or unbounded retries
- Inspecting regex usage for patterns prone to catastrophic backtracking
Once identified, fixes typically involve rewriting logic to be asynchronous, limiting execution frequency, or moving heavy work off the request path.
9. Users report inconsistent results due to caching. How do you set correct cache headers and invalidation?
Inconsistent results usually mean cache scope or invalidation is incorrect.
Key checks and fixes include:
- Setting explicit cache headers (Cache-Control, ETag, Last-Modified) based on data freshness requirements
- Avoiding caching for user-specific or frequently changing responses
- Using ETags or conditional requests to revalidate cached content
- Ensuring cache keys include all relevant request dimensions (auth, query params, locale)
- Implementing clear invalidation on writes (purge, versioned keys, or TTLs)
The goal is to cache only safe, deterministic responses and invalidate them predictably when data changes.
10. An endpoint is slow under load. How do you separate DB latency vs CPU blocking vs network issues?
To isolate performance bottlenecks, the first step is to measure where time is being spent.
Common checks include:
- Reviewing database query timings and connection pool metrics
- Checking for event loop blocking or long synchronous operations
- Inspecting outbound calls and network latency to dependencies
- Comparing response times with and without database access
- Reviewing logs and traces under load
This process helps distinguish whether slowness is caused by database pressure, CPU-bound work, or network dependencies, rather than guessing.
11. Your API is getting abused. What’s your layered approach?
When an API is being abused, protection should be applied in multiple layers, not just at the application level.
A typical layered approach includes:
- Rate limiting at the edge or middleware level to cap request volume
- Authentication and authorization to restrict access to known users
- WAF rules to block common attack patterns
- Caching for safe, repeatable requests to reduce backend load
- Request validation to reject malformed or abusive inputs early
This approach ensures abuse is controlled even if one protection layer fails.
12. You added an auth middleware, but some routes are bypassing it. How do you diagnose middleware order?
When authentication middleware is bypassed, the issue is almost always related to middleware ordering or mounting scope.
Key things to check:
- Whether the auth middleware is registered before the affected routes
- Whether it is attached using app.use() or only to specific routers
- If routes are mounted before the middleware is applied
- Whether next() is being called unconditionally inside the auth middleware
- If some routes are mounted on a different router or path prefix
In Express, middleware executes in the order it is defined, so incorrect placement can cause routes to skip critical checks.
This scenario tests whether the candidate understands Express execution flow and middleware sequencing, which is a core production concept.
Node.js MCQs
Which of the following are Node.js stream types?
Which module is used to serve static files in Node.js?
Which of the following statements is valid to import a module in a file?
What is the use of underscore variable in REPL session?
What is the default scope of Node.js application?
What is the full form of NPM?
How to check equality of two nodes?
How many Node object methods are available?
How to make node modules available externally?
Is Node multithreaded?