
Cross-Stack
Performance Series

Six distinct load testing instruments in seven deployments across five language ecosystems — chosen for ecosystem compatibility, not familiarity. The same engineering principle as the UI automation series: one shared target, documented trade-offs, honest results.

k6 · Locust (embedded + standalone) · Gatling · NBomber · Artillery · Apache JMeter

⚡ 6 Distinct Instruments · 7 Deployments · 5 Language Ecosystems · ✓ CI Quality Gates on All 7 · 5 Embedded Augmentations · 2 Standalone Repos · Real Degradation · ~$0/month Infrastructure · Browser-Level Load (Artillery)

The Engineering Concept

Performance testing is part of quality — not a separate discipline. Three principles govern every decision in this series.

Embedded Augmentations

Five performance tools embedded as /performance folders in existing automation repos. Each runs as a separate CI job after functional tests pass — performance as a quality gate, not a separate phase.
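The gate pattern is the same in every ecosystem: compute a percentile, compare it to a budget, fail the CI job on breach. A minimal stdlib sketch of that logic — the thresholds mirror the k6 card's p(95)&lt;500 and rate&lt;0.01; function and variable names here are illustrative, not taken from any of the repos:

```python
import statistics

def p95(latencies_ms):
    # statistics.quantiles with n=20 returns 19 cut points;
    # index 18 is the 95th percentile.
    return statistics.quantiles(latencies_ms, n=20)[18]

def gate(latencies_ms, errors, total, p95_limit_ms=500, err_limit=0.01):
    # Pass only if both budgets hold: latency p95 under the limit
    # and error rate under 1% (the k6-style thresholds quoted below).
    return p95(latencies_ms) < p95_limit_ms and (errors / total) < err_limit
```

A CI step can call `gate(...)` and exit nonzero on `False`, which is exactly how a threshold failure turns into a red pipeline.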

k6
TypeScript Ecosystem
Embedded in: qa-lab-playwright · Target: QA Lab / CloudFront
JavaScript DSL running in k6’s own V8 runtime — zero friction with the TypeScript repo. Cold/warm CDN cache comparison via request tagging. Grafana Cloud output optional. Three scripts: slo-smoke.js, slo-baseline.js, cdn-cold-warm.js. Thresholds as CI quality gates: p(95)<500, rate<0.01.
p95 ~65ms · p99 ~120ms · Error 0.00% · Warm p95 ~55ms
View k6 performance section →
Locust
Python Ecosystem
Embedded in: qa-lab-pytest-python · Target: QA Lab / CloudFront
Python-native HttpUser with @task decorators. Same virtual environment as the pytest suite — no toolchain switch. Interactive web UI (localhost:8089) for exploration; --headless mode for CI. Two profiles: smoke (5 VU) and baseline (10 VU). --exit-code-on-error 1 as the CI quality gate.
p95 38ms · p99 650ms · Error 0%
View Locust performance section →
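The Locust setup described above reduces to a short locustfile. This is a generic sketch of the HttpUser/@task pattern, not code from the repo — the class name, task weights, and endpoints are illustrative:

```python
from locust import HttpUser, task, between

class QaLabUser(HttpUser):
    # Pace each virtual user with a 1-2s think time between tasks.
    wait_time = between(1, 2)

    @task(3)
    def home(self):
        # Weighted 3:1 against the heavier endpoint below.
        self.client.get("/")

    @task(1)
    def api_items(self):
        # Hypothetical endpoint; `name=` groups requests in the stats table.
        self.client.get("/api/items", name="items")
```

A CI invocation along the lines of `locust -f locustfile.py --headless -u 5 -r 1 --run-time 60s --exit-code-on-error 1 --host <target>` runs the smoke profile and fails the job if any request errors.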
Gatling
Java Ecosystem
Embedded in: qa-lab-selenium-java · Target: QA Lab / CloudFront
Java DSL via Maven Gatling Plugin 4.x. Fully isolated behind a Maven profile — mvn test never triggers Gatling; only mvn gatling:test -Pperformance does. Three simulation profiles: smoke (5 VU), baseline (10 VU), cold/warm cache. Assertions DSL: percentile(95).lt(500). Best-in-class HTML report.
p75 ~15ms · p99 ~32ms · Error <1%
View Gatling performance section →
NBomber
.NET Ecosystem
Embedded in: qa-lab-playwright-csharp · Target: QA Lab / CloudFront
.NET-native C# scenarios in a standalone console project (QALab.Performance.csproj). Same NuGet workflow, same IDE, same debugger as the NUnit suite. TreatWarningsAsErrors applied. HTML + JSON dual report. Typed assertions DSL. The argument is the principle: a .NET team picks a .NET tool.
Smoke: 101 RPS, p95 47ms · Baseline: 192 RPS, p95 55ms · Error 0%
View NBomber performance section →
Artillery
JS/YAML Ecosystem
Embedded in: qa-lab-cypress · Target: QA Lab / CloudFront
Unique in series: the only tool running real Chromium browser instances via @artillery/plugin-playwright. Captures FCP, LCP, TTFB, CLS, and INP under concurrent load — measuring user experience, not just server response time. HTTP and browser modes. Max 3 concurrent browser VUs (RAM constraint documented, not hidden).
HTTP p95 22ms · Browser TTFB 62ms · Avg page load 561ms · INP 32ms
View Artillery performance section →

Standalone Projects — Real Degradation

Two standalone performance repos targeting real application servers. Both surface engineering findings invisible to sequential E2E tests. Both produce FINDINGS.md as a documented portfolio artifact.

Locust
rem-waste-qa-performance
Target: REM Waste booking app · Vercel free tier · Python

Standalone Locust suite testing the complete booking flow across four postcodes — each testing a different failure mode (happy path, empty state, latency simulation, concurrency constraint). The BS1 4DJ postcode revealed an architectural race condition invisible to Playwright’s sequential test execution.

🔎 Engineering Finding — BS1 4DJ Race Condition
The BS1 4DJ postcode uses a module-level counter for retry simulation. Under concurrent requests, multiple users receive 500 errors simultaneously — the counter is application-wide state, not request-scoped. Invisible to sequential test execution. Only surfaced under concurrent load. A QA finding, not a performance finding.
p95 (5 VU): 79ms · error rate (5 VU): 0% · Race ⚠ at 10 VU
View REM Waste performance section →
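The mechanism behind that finding can be reproduced in miniature: a module-level counter is shared by every request, so "fail every Nth call" retry logic that is deterministic for one sequential user misfires for several concurrent users at once. This is a hypothetical sketch of the failure mode, not the application's code:

```python
# Module-level state, as in the finding: application-wide, not request-scoped.
counter = 0

def handle_request():
    global counter
    # Unsynchronized read-modify-write: under concurrency, two users can
    # observe the same counter value, so the "every 3rd request fails"
    # simulation returns 500 to multiple users simultaneously.
    current = counter
    counter = current + 1
    return 500 if current % 3 == 2 else 200  # simulated retry behaviour

# One sequential user (Playwright-style execution) sees a clean 1-in-3
# pattern and the race never surfaces.
sequential = [handle_request() for _ in range(9)]
```

With threads interleaving between the read and the write, the counter loses updates and the failure pattern breaks — which is why the bug only appeared under Locust's concurrent load.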
Apache JMeter
player-api-tests-performance
Target: Player API (ASP.NET Core 8) · Railway free tier · JMX / CLI

JMeter 5.6.3 targeting a real ASP.NET Core 8 REST API with full JWT authentication. Full CRUD cycle per VU — login, token extraction, operations, cleanup. Stepped stress ramp (5 → 15 → 30 VU) demonstrates real degradation on Railway’s 512MB RAM free tier. Included for its roughly 55% market share and enterprise ubiquity, framed as context rather than advocacy.

🔎 Engineering Finding — Write Contention & RAM Ceiling
ConcurrentDictionary write contention visible in p99 long tail at 15+ VU. Railway 502/503 errors emerge at 25–30 concurrent users — the 512MB RAM ceiling on free tier. Cold start ~5s after inactivity, isolated by a dedicated warm-up Thread Group before load begins. Real degradation on real infrastructure with real documented constraints.
avg (5 VU smoke): 126ms · p99 at 30 VU: ~7.2s · errors at 30 VU: 3–5%
View Player API performance section →
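The stepped-ramp idea above — hold concurrency at one plateau, record results, then step up — can be sketched with stdlib threads. This is a conceptual stand-in for JMeter Thread Groups, not the JMX plan itself; the request body and iteration counts are illustrative:

```python
import concurrent.futures
import time

def fake_request():
    # Stand-in for one sampler iteration (login -> CRUD -> cleanup).
    time.sleep(0.001)
    return 200

def run_step(vus, iterations_per_vu):
    # One plateau of the ramp: `vus` concurrent workers, each performing
    # a fixed number of iterations, as a Thread Group would.
    with concurrent.futures.ThreadPoolExecutor(max_workers=vus) as pool:
        futures = [pool.submit(fake_request)
                   for _ in range(vus * iterations_per_vu)]
        return [f.result() for f in futures]

# Stepped stress ramp: hold at 5, then 15, then 30 concurrent workers,
# comparing latency and error rate per plateau.
results = {vus: run_step(vus, iterations_per_vu=2) for vus in (5, 15, 30)}
```

Against a real target, the per-plateau results are where degradation shows up: the p99 long tail at 15 VU and the 502/503s at 25–30 VU described in the finding.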

HTTP Load vs Browser-Level Load

Artillery with @artillery/plugin-playwright is the only tool in this series running real Chromium instances. The distinction between HTTP and browser-level load is an engineering concept worth demonstrating explicitly.

| Capability | HTTP Tools (k6, Locust, Gatling, NBomber, JMeter) | Artillery + Browser Plugin |
| --- | --- | --- |
| Server response time | ✓ Yes | ✓ Yes |
| HTML / CSS / JS rendering | — No | ✓ Yes (real Chromium) |
| JavaScript execution time | — No | ✓ Yes |
| Core Web Vitals (FCP, LCP, INP, CLS) | — No | ✓ Yes (via Playwright) |
| Async state transitions | — No | ✓ Yes (Playwright-driven) |
| RAM per virtual user | ~1 MB | ~150 MB (Chromium overhead) |
| Max practical concurrent VUs | 1,000s (network-limited) | ~3 (RAM-limited — documented) |
| Measures “server experience” | ✓ Yes | ✓ Yes + user experience |
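The ~3 browser-VU ceiling is plain arithmetic on the RAM figures above. A small sketch of that budgeting, where the free-RAM and headroom figures are illustrative assumptions about a typical load-generation laptop, not measured values:

```python
def max_browser_vus(free_ram_mb, per_vu_mb=150, headroom_mb=1024):
    # Chromium-based VUs are RAM-bound: budget is free RAM minus the
    # headroom reserved for the OS and the load generator itself.
    usable = max(free_ram_mb - headroom_mb, 0)
    return usable // per_vu_mb

# With roughly 1.5 GB free: (1536 - 1024) // 150 = 3 browser VUs,
# versus hundreds of ~1 MB HTTP VUs in the same budget.
```

The same function with `per_vu_mb=1` shows why HTTP tools scale to thousands of VUs before the network, not memory, becomes the limit.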
Evgenii Subbotin
QA/SDET Lead · QA Architect · SAFe RTE

20 years in software engineering. Performance testing is part of quality, not a separate phase. Six tools in seven deployments, five ecosystems, two standalone investigations — built to demonstrate performance engineering judgment, not a list of tool names.