Leaving PHP, Part 2: the runtime model is from a different era
Part 1 looked at the people you can hire to write PHP. Part 2 looks at the architecture they would inherit on day one. PHP's defining choice, a shared-nothing, process-per-request runtime, was the right answer for the 2005 web. In 2026 it is a structural tax that the rest of the stack has stopped paying.
PHP can close most of the gap with RoadRunner, Swoole, or FrankenPHP. Doing so means abandoning the architecture PHP's ecosystem assumes, while most libraries, most documentation, and most developers you can hire still assume the classic model. You end up running "PHP, but not really PHP," with a smaller library subset and a steeper onboarding curve. Meanwhile Node, Go, .NET, and modern Python ship long-lived runtimes by default.
What "shared-nothing" actually means
Tideways, a PHP-focused APM vendor, describes the model plainly: for every request, a node starts from scratch, sharing no memory with the prior request on the same node. It loads every required object, performs the processing, and releases all resources once the request has been fulfilled. The DeployHQ runtime guide adds the consequence: every request starts with a clean slate, with no leaked state, no memory accumulation, no cross-request contamination. Simple, predictable, easy to debug.
That last sentence is the genuine virtue of the model. Memory leaks cannot accumulate across requests. Crashes are isolated. Horizontal scaling is trivial: add more PHP-FPM workers. For 2005-era CMS and form-driven web pages, this was elegant.
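The difference in state lifetimes can be sketched in a few lines. This is a toy model, not real PHP-FPM or Node internals: the shared-nothing worker rebuilds its hot data on every request, while the long-lived worker builds it once at boot.

```python
def expensive_cache_build():
    """Stand-in for autoload + config parse + route-table build."""
    return {"routes": ["/users", "/orders"], "config": {"debug": False}}

class SharedNothingWorker:
    """PHP-FPM style: every request starts from a blank slate."""
    def __init__(self):
        self.builds = 0

    def handle(self, request):
        cache = expensive_cache_build()   # rebuilt on every single request
        self.builds += 1
        return f"handled {request} with {len(cache['routes'])} routes"

class LongLivedWorker:
    """Node/Go style: state built once at boot, reused for every request."""
    def __init__(self):
        self.cache = expensive_cache_build()  # built exactly once
        self.builds = 1

    def handle(self, request):
        return f"handled {request} with {len(self.cache['routes'])} routes"

fpm, node = SharedNothingWorker(), LongLivedWorker()
for r in range(1000):
    fpm.handle(r)
    node.handle(r)

print(fpm.builds, node.builds)  # 1000 vs 1
```

The upside named above falls out of the same model: because `SharedNothingWorker` keeps nothing, it also leaks nothing.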
The cost is everywhere else in the modern stack:
- No persistent in-process state. Caches, prepared statements, and hot data structures must be rebuilt or fetched from Redis on every request.
- No real connection pooling. Each PHP-FPM worker holds its own DB connection. With N workers across M servers you carry N×M connections to your database, versus a single Node process serving the same load from one pool of, say, 20.
- No websockets, no long-polling, no SSE without bolting on a separate runtime.
- Bootstrap overhead on every request. Autoloader, framework boot, container build, route registration. OPcache shrinks this cost; it does not eliminate it.
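The bootstrap point is worth putting in numbers. A back-of-envelope sketch, where the 5 ms boot and 20 ms handler figures are assumptions for illustration, not measurements:

```python
# How per-request bootstrap inflates latency (all figures assumed, not measured).

boot_ms = 5.0      # autoload + framework boot + container + routes, with OPcache
handler_ms = 20.0  # actual business logic and I/O

shared_nothing = boot_ms + handler_ms  # boot is paid on every request
long_lived = handler_ms                # boot amortized to ~0 per request

overhead = shared_nothing / long_lived - 1
print(f"per-request cost: {shared_nothing:.0f} ms vs {long_lived:.0f} ms "
      f"({overhead:.0%} overhead)")  # 25 ms vs 20 ms (25% overhead)
```

The shorter the handler, the worse the ratio: a 5 ms cache-hit endpoint would pay 100% overhead under the same assumptions.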
Modern stacks invert this. Node, Go, .NET, JVM-based languages, and modern Python (FastAPI, Starlette) all assume a long-lived process: load the framework once, hold connections, keep hot data in memory, stream responses, terminate only on deploy.
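The long-lived model is easy to see in the modern Python stacks the paragraph names. A minimal asyncio sketch, with a hypothetical `Pool` class standing in for a real async DB driver: everything expensive happens once at process start, and every request reuses it.

```python
import asyncio

class Pool:
    """Stand-in for a real async DB pool; not a real driver API."""
    def __init__(self, size):
        self.size = size

    async def query(self, sql):
        await asyncio.sleep(0)  # stand-in for network I/O
        return f"rows for {sql!r}"

async def main():
    pool = Pool(size=5)  # opened once, at process start, held until deploy

    async def handle(request_id):
        # Every request reuses the same pool; nothing is rebuilt.
        return await pool.query(f"SELECT 1 /* req {request_id} */")

    results = await asyncio.gather(*(handle(i) for i in range(100)))
    return len(results), pool.size

served, pool_size = asyncio.run(main())
print(served, pool_size)  # 100 requests served over 5 pooled connections
```

This is the default shape of a Node, Go, .NET, or FastAPI service; in PHP it is the shape you only get after adopting one of the alternative runtimes discussed next.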
The benchmark reality
Public benchmarks have flaws: they measure microbenchmark scenarios, not real apps. But when a gap is consistent and large across rounds and methodologies, it is signal.
TechEmpower Round 22 (released 2023-11-15), the most-cited public framework benchmark, consistently places PHP frameworks near the bottom of mainstream contenders. Highly tuned frameworks written in Rust, Go, C#, and specialised Java stacks dominate the throughput rankings, while typical PHP frameworks sit much lower in raw req/s. On the simplest, CPU-bound endpoints (plaintext / JSON), the fastest Go / Rust / Java stacks achieve 10–30× the requests per second of baseline PHP frameworks in the same environment. In some TFB permutations, the top .NET or Go implementations score 20–40× what a typical Laravel example manages.
Chart: indicative TechEmpower-style throughput, plaintext / JSON (relative)
Toptal's I/O performance comparison reaches the same ordering at 5,000 concurrent connections: Go wins, followed by Java, Node, and finally PHP. Their explanation is architectural, not language-level: at high connection volume, the overhead of spawning a new process per connection, plus the memory each process carries, becomes the dominant factor under PHP+Apache and tanks its performance.
A few honest caveats from the same sources:
- PHP 7 → 8 closed a real gap. PHP 8.3 produced measurable additional gains over 8.2 (one Kinsta-cited Laravel demo workload showed up to ~38% improvement).
- Real apps spend most of their time on I/O (database, network, templates), so language-level CPU gaps shrink in production.
- These are version-upgrade gains, not a change in the architectural nature of PHP.
The architectural ceiling is the point. Even a perfectly tuned Laravel app on PHP 8.5 with OPcache and JIT does not approach what a Go service does on the same hardware, because the runtime model is different.
"But Swoole, RoadRunner, FrankenPHP solve this"
The PHP community has built three serious answers to the runtime problem. Each closes most of the architectural gap. Each comes with a bill.
| Runtime | Model | Persistent state | Native concurrency |
|---|---|---|---|
| PHP-FPM | Process pool, fresh per request | No | No |
| RoadRunner | Go app server, long-lived workers | Per worker | No |
| FrankenPHP | Go app server, worker mode | Per worker | No |
| Swoole (PECL) | Event-driven async runtime | Yes | Coroutines |
| Node.js | Single long-lived process, event loop | Yes | Native |
The DeployHQ comparison summarises RoadRunner's model: it manages a pool of long-lived PHP worker processes, and each worker loads your application once and handles many requests sequentially. Swoole goes further, taking a fundamentally different approach. Rather than bolting a persistent worker onto PHP's traditional execution model, Swoole extends PHP itself with an asynchronous, event-driven runtime, similar to what Node.js provides for JavaScript. A single Swoole worker can handle thousands of concurrent connections using cooperative multitasking.
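Cooperative multitasking is why one worker can hold thousands of connections. A sketch of the same idea in asyncio rather than Swoole's own PHP API: each "connection" yields while it waits, so a thousand of them complete in roughly one I/O tick, not a thousand.

```python
import asyncio
import time

async def connection(i):
    # Simulated slow client or upstream call; the coroutine yields here,
    # letting the single worker service other connections meanwhile.
    await asyncio.sleep(0.1)
    return i

async def main():
    start = time.monotonic()
    results = await asyncio.gather(*(connection(i) for i in range(1000)))
    return len(results), time.monotonic() - start

handled, elapsed = asyncio.run(main())
# 1000 concurrent "connections" finish in roughly one 0.1 s wait,
# not 1000 × 0.1 s, because the waits overlap on one event loop.
print(handled, round(elapsed, 2))
```

Under classic PHP-FPM the equivalent would be 1,000 workers each blocking for the full wait, which is exactly the per-connection overhead Toptal's numbers show.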
The catch is what you give up:

- Global and static state now survives between requests, so memory leaks and cross-request bleed become your problem again.
- Much of the ecosystem assumes a fresh process per request; libraries that stash state in globals need auditing or replacing.
- Swoole's coroutines do not mix with blocking extensions, and debugging an async PHP stack is a different discipline.
- The developers you can hire for "PHP" overwhelmingly know the classic model, which steepens onboarding (Part 1's argument, compounding here).
The fact that PHP needs three competing runtimes (RoadRunner, Swoole, FrankenPHP) to reach feature-parity with Node's default behaviour is itself the argument.
The connection-pool problem, in numbers
Take a mid-sized SaaS at modest scale. 100 req/s, 10 PHP-FPM workers per pod, 5 pods, Postgres backend.
Under classic PHP-FPM, each worker holds a DB connection. That is 10 × 5 = 50 simultaneous connections to Postgres just from the application tier, for a workload that a Node service would cover with 5: one pool per pod, with roughly one active connection each at this load.
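The scenario's arithmetic, written out (all figures are the scenario's assumptions, not measurements):

```python
# Application-tier connections to Postgres under the two models.

workers_per_pod = 10   # PHP-FPM workers, each holding its own PDO connection
pods = 5
fpm_connections = workers_per_pod * pods  # one connection per worker

pool_size_per_pod = 1  # a Node pool can serve ~100 req/s with one active conn
node_connections = pool_size_per_pod * pods

postgres_default_max = 100  # Postgres max_connections default
print(fpm_connections, node_connections)  # 50 vs 5
print(f"FPM uses {fpm_connections / postgres_default_max:.0%} of the default "
      f"ceiling before migrations, cron jobs, or a second service connect")
```

Double the pods or the worker count and the FPM number hits the ceiling; the pooled number barely moves.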
Chart: Postgres connections held by the application tier
Postgres max_connections defaults to 100. A PHP shop hits the connection ceiling at growth levels where a Node shop is still on the default config. The PHP fix is PgBouncer or a managed pooler, which is fine, but it is an extra moving part the Node team does not need.
PHP's PDO persistent connections only persist within a single FPM worker, not across workers. Real pooling requires Swoole, RoadRunner, or an external pooler. None of this is unsolvable. It is just work that other ecosystems do not make you do.
A scorecard for greenfield work in 2026
If you are greenfielding a service and the workload includes any of:

- WebSockets, SSE, or long-polling
- streamed or token-by-token responses
- long-lived connections or high-fanout concurrent I/O
- hot in-process caches or real connection pooling
The PHP architecture is fighting you on every line. You can win the fight (Swoole and RoadRunner are real engineering), but you are spending budget on plumbing that other stacks ship for free.
If your workload is the classic PHP fit (request-response CRUD, server-rendered HTML, CMS, e-commerce, admin panels), the runtime model is genuinely fine and probably faster to develop in than the alternatives. PHP is not bad at its original job. It is bad at the jobs that did not exist when it was designed.
The strategic question, again, is forward-looking: is the next workload you build going to look like 2005's web, or 2026's? If your answer is "we mostly render HTML and process forms," PHP earns its keep. If your answer is "we are building anything real-time, agentic, streaming, or edge-deployed," and increasingly every product is, the runtime model is a reason to leave.
Up next in the series
The runtime argument is the most measurable of the ten. The next post moves to the one that quietly costs you the most every day, the one that does not show up on a benchmark chart but shows up on every PR you review.
If you want to start the migration before Part 10 lands, book a demo and we will walk you through what Pext does to your codebase.