Six load testing tools in seven deployments across five language ecosystems, chosen for ecosystem compatibility, not familiarity. The same engineering principle as the UI automation series: one shared target, documented trade-offs, honest results.
k6 · Locust (embedded + standalone) · Gatling · NBomber · Artillery · Apache JMeter
Performance testing is part of quality — not a separate discipline. Three principles govern every decision in this series.
Each performance tool is chosen to be language- and ecosystem-native to its paired automation framework. A Python team owns Locust: same venv, same CI job, same review culture. A Java team owns Gatling: same Maven workflow. A .NET team owns NBomber: same IDE, same NuGet. Scaling from portfolio to production is a configuration change, not a tool migration.
TypeScript → k6 · Python → Locust · Java → Gatling · .NET → NBomber · JS → Artillery

Five embedded tools run against QA Lab on CloudFront, a CDN that cannot degrade under portfolio-scale load. That is deliberate: CDN tests measure SLO compliance, not capacity. The standalone projects (Locust/Vercel, JMeter/Railway) target real application servers where degradation is measurable and documentable as engineering findings.
Embedded: SLO compliance · Standalone: real degradation

Both standalone projects produce a FINDINGS.md: an engineering document, not a test report. The REM Waste finding (a BS1 4DJ concurrency race in a module-level counter) and the Player API finding (ConcurrentDictionary write contention, 502/503 errors at 30 VU on Railway's 512 MB RAM) are architectural constraints invisible to sequential E2E tests. Performance testing as investigation, not just measurement.
FINDINGS.md in both standalone repos · Real infrastructure constraints

Five performance tools are embedded as /performance folders in existing automation repos. Each runs as a separate CI job after functional tests pass: performance as a quality gate, not a separate phase.
- **k6**: Thresholds as code: p(95)<500, rate<0.01.
- **Locust**: @task decorators. Same virtual environment as the pytest suite, no toolchain switch. Interactive web UI (localhost:8089) for exploration; --headless mode for CI. Two profiles: smoke (5 VU) and baseline (10 VU). --exit-code-on-error 1 as the CI quality gate. A minimal sketch follows this list.
- **Gatling**: mvn test never triggers Gatling; only mvn gatling:test -Pperformance does. Three simulation profiles: smoke (5 VU), baseline (10 VU), cold/warm cache. Assertions DSL: percentile(95).lt(500). Best-in-class HTML report.
- **NBomber**: A dedicated project (QALab.Performance.csproj). Same NuGet workflow, same IDE, same debugger as the NUnit suite. TreatWarningsAsErrors applied. HTML + JSON dual report. Typed assertions DSL. The argument is the principle: a .NET team picks a .NET tool.
- **Artillery**: @artillery/plugin-playwright. Captures FCP, LCP, TTFB, CLS, and INP under concurrent load, measuring user experience, not just server response time. HTTP and browser modes. Max 3 concurrent browser VUs (a RAM constraint documented, not hidden).
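To ground the Locust row, here is a minimal sketch of what an embedded locustfile looks like; the class name, host, and endpoint are illustrative assumptions, not the repo's actual code:

```python
# locustfile.py: minimal sketch; host, class, and endpoint are assumed.
from locust import HttpUser, task, between


class QALabUser(HttpUser):
    host = "https://qalab.example.com"  # hypothetical target URL
    wait_time = between(1, 3)           # think time between tasks

    @task
    def load_home(self):
        # Mark non-200 responses as failures so --exit-code-on-error 1
        # turns them into a red CI job.
        with self.client.get("/", catch_response=True) as resp:
            if resp.status_code != 200:
                resp.failure(f"unexpected status {resp.status_code}")
```

Run it interactively via the web UI for exploration, or headless in CI with the profiles described above, e.g. `locust --headless -u 10 -r 2 --run-time 2m --exit-code-on-error 1`.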
Two standalone performance repos target real application servers. Both surface engineering findings invisible to sequential E2E tests. Both produce FINDINGS.md as a documented portfolio artifact.

A standalone Locust suite exercises the complete booking flow across four postcodes, each targeting a different failure mode: happy path, empty state, latency simulation, concurrency constraint. The BS1 4DJ postcode revealed an architectural race condition invisible to Playwright's sequential test execution.
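The shape of that race is worth making explicit. The sketch below is not the REM Waste code, just the canonical failure mode the finding describes: a module-level counter updated by a non-atomic read-modify-write, safe under one sequential browser, lossy under concurrent virtual users:

```python
# Hypothetical reproduction of the failure mode, not the actual service code.
import threading
import time

slots_remaining = 5  # module-level state shared across all request handlers


def reserve_slot():
    """One simulated request handler: check availability, then book."""
    global slots_remaining
    if slots_remaining > 0:   # read
        time.sleep(0.01)      # stands in for real work; widens the race window
        slots_remaining -= 1  # write: not atomic with the read above
        return True
    return False


# A sequential E2E suite issues these one at a time and never overlaps.
# Ten concurrent users (what Locust sends) can all pass the check together:
threads = [threading.Thread(target=reserve_slot) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(slots_remaining)  # typically -5 here: five more bookings than slots
```

The usual fix is an atomic check-and-decrement at the datastore, or a lock around the critical section; the portfolio point is that only concurrent load surfaces the bug at all.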
JMeter 5.6.3 targeting a real ASP.NET Core 8 REST API with full JWT authentication. A full CRUD cycle per VU: login, token extraction, operations, cleanup. A stepped stress ramp (5 → 15 → 30 VU) demonstrates real degradation on Railway's 512 MB RAM free tier. A 55% market-share tool, included with enterprise framing, not advocacy.
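JMeter expresses this flow as a test-plan tree (HTTP samplers plus a JSON extractor for the token). Sketched in Python with requests purely for readability, one VU's cycle looks like the following; the host, paths, and field names are assumptions, not the Player API's actual contract:

```python
# One virtual user's CRUD cycle, as the JMeter thread group models it.
# Endpoints and payloads are illustrative assumptions.
import requests

BASE = "https://player-api.example.app"  # hypothetical deployment URL


def one_vu_cycle() -> None:
    s = requests.Session()

    # 1. Login, then extract the JWT (JMeter: HTTP Request + JSON Extractor).
    resp = s.post(f"{BASE}/api/auth/login",
                  json={"username": "loadtest", "password": "placeholder"})
    s.headers["Authorization"] = f"Bearer {resp.json()['token']}"

    # 2. Operations under the token: create, read, update.
    pid = s.post(f"{BASE}/api/players", json={"name": "vu-player"}).json()["id"]
    s.get(f"{BASE}/api/players/{pid}")
    s.put(f"{BASE}/api/players/{pid}", json={"name": "vu-player-updated"})

    # 3. Cleanup, so the stepped ramp (5 → 15 → 30 VU) doesn't accrete state.
    s.delete(f"{BASE}/api/players/{pid}")
```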
Artillery with @artillery/plugin-playwright is the only tool in this series running real Chromium instances. The distinction between HTTP and browser-level load is an engineering concept worth demonstrating explicitly.
| Capability | HTTP Tools (k6, Locust, Gatling, NBomber, JMeter) | Artillery + Browser Plugin |
|---|---|---|
| Server response time | ✓ Yes | ✓ Yes |
| HTML / CSS / JS rendering | — No | ✓ Yes (real Chromium) |
| JavaScript execution time | — No | ✓ Yes |
| Core Web Vitals (FCP, LCP, INP, CLS) | — No | ✓ Yes (via Playwright) |
| Async state transitions | — No | ✓ Yes (Playwright-driven) |
| RAM per virtual user | ~1 MB | ~150 MB (Chromium overhead) |
| Max practical concurrent VUs | 1,000s (network-limited) | ~3 (RAM-limited — documented) |
| Measures “server experience” | ✓ Yes | ✓ Yes + user experience |