Starting a Run
You can start a collection run in multiple ways.

From the Sidebar Action Menu
In the sidebar, click the action menu (⋯) on a collection and select Run Collection, or on a folder and select Run Folder. Only the requests within that scope are executed.
Use the Command Palette
Open the Command Palette (Cmd/Ctrl+Shift+P) and run LiteClient: Run Collection. Select the collection you want to run.

Requests are executed in the order they appear in the sidebar. Use drag-and-drop reordering to control execution order.
Variable Chaining
The primary use case for the Collection Runner is variable chaining — setting variables in one request’s scripts and using them in subsequent requests. Variables set via `pm.environment.set()` or `pm.globals.set()` in a script persist for the remainder of the run.
- Request 1 — Login: `POST {{baseUrl}}/auth/login`. A post-response script extracts the token from the response and saves it as an environment variable.
- Request 2 — Get User Profile: reads the variable saved by Request 1.

How Scripts Work with the Runner
Pre-request and post-response scripts run for every request during a collection run, just as they do for individual requests.

- Pre-request scripts run before each request is sent — use them to generate timestamps, signatures, or dynamic values
- Post-response scripts run after each response is received — use them to extract data, set variables, and write assertions
- Variables carry forward — any variable set in one request’s script is available to all subsequent requests in the run
- Test results accumulate — all `pm.test()` assertions across the run are collected and displayed in the results panel
Scripts run in the same sandboxed environment as individual requests. The same 2-second timeout and 100KB size limit apply per script. See Scripting for the full pm API reference.
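For example, the Login request from the chaining example above could capture its token with a post-response script along these lines (a sketch, not LiteClient-specific code: it assumes the login response is JSON with a `token` field, and the variable name `authToken` is illustrative):

```javascript
// Post-response script on the Login request (sketch).
// Assumes a response body like { "token": "..." }.
const token = pm.response.json().token;

// Persists for the remainder of the run.
pm.environment.set("authToken", token);
```

Subsequent requests, such as Get User Profile, can then reference the value, for example in an `Authorization: Bearer {{authToken}}` header.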
Cookie Persistence
Cookies set during a collection run carry forward between requests automatically. If a server sets a cookie on the first request (e.g., a session cookie), subsequent requests to the same domain include that cookie — just like a browser would. This is especially useful for workflows that depend on session-based authentication or CSRF tokens.

Results Panel
The results panel provides real-time feedback as the runner executes each request.

What You’ll See
- Progress bar — tracks how many requests have completed out of the total
- Results table — each request is listed with its pass/fail status based on script assertions
- Expandable rows — click a row to view the full response body, headers, test results, and console logs for that request
- Passed
- Failed
Requests where all `pm.test()` assertions pass are marked with a green checkmark. Requests without any tests are also marked as passed.

History

Each request executed by the Collection Runner is saved to your request history. This means you can:

- Review the exact request and response for any step in the run
- Re-send individual requests from the run
- Compare responses across different runs
History entries from a collection run are identical to manually executed requests — they include the full request configuration, response body, headers, and timing.
Best Practices
Order requests intentionally
Place setup requests (login, create resources) at the top of your collection. The runner executes top-to-bottom, so dependencies should come first.
Use assertions to catch failures early
Add `pm.test()` assertions to validate each step. If a login request fails, the test result makes it immediately clear where the run broke down.
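As a minimal example (a sketch using the `pm` assertion API), a status check on each request surfaces the first broken step immediately:

```javascript
// Post-response script: flag the step as failed if the request errored.
pm.test("status is 200", () => {
  pm.response.to.have.status(200);
});
```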
Combine with environments
Use environment variables like `{{baseUrl}}` so the same collection run works against local, staging, and production servers by switching environments.
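As an illustration (a sketch, not LiteClient's actual implementation), placeholder resolution against the active environment works roughly like this:

```javascript
// Sketch: resolve {{name}} placeholders against an environment object.
// Unknown names are left untouched so missing variables are easy to spot.
function resolve(template, env) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(env, name) ? env[name] : match
  );
}

const staging = { baseUrl: "https://staging.example.test" };
console.log(resolve("{{baseUrl}}/auth/login", staging));
// → https://staging.example.test/auth/login
```

Switching the active environment swaps the object that placeholders resolve against, so the same collection runs unchanged against any server.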
Keep scripts idempotent
Design your scripts so they produce the same result regardless of prior state. This makes runs repeatable and easier to debug.
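For instance, deriving values from the current response keeps a script idempotent, while accumulating state across runs does not (variable names here are hypothetical):

```javascript
// Idempotent: the value depends only on this run's response.
pm.environment.set("userId", pm.response.json().id);

// Not idempotent: the result depends on leftover state from earlier runs.
const count = Number(pm.environment.get("runCount") || 0);
pm.environment.set("runCount", String(count + 1));
```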