# Pext vs AI conversion vs manual rewrite
When a team decides to move off PHP, it has three options: transpile with a tool like Pext, feed the code to an AI and ask it to convert it, or rewrite everything from scratch. The conversation usually starts with features and ends with risk. It should start earlier: with what these approaches actually assume about the transition itself.
## The comparison at a glance
This is a simplified view, but it reflects real trade-offs:
| Factor | Pext | AI conversion | Manual rewrite |
|---|---|---|---|
| Speed | Fast | Medium | Slow |
| Short-term risk | Low | High | High |
| Code quality | Medium (improves over time) | Medium (unpredictable) | High (eventually) |
| Cost | Low | High | Very high |
| Lock-in | Mitigable | None | None |
| Scalability | High | Low | Medium |
Pext wins clearly on speed and cost. Manual rewrite wins on final code quality but takes months or years to get there. AI conversion sits in the middle on most dimensions while carrying the highest short-term execution risk. None of these are knock-out punches; the right choice depends on codebase size, team capacity, and how much disruption you can absorb.
## The problem nobody acknowledges
Here is what AI conversion and manual rewrite have in common: neither acknowledges how PHP actually works at the level required to map it faithfully to Node.js.
PHP has reference semantics, copy-on-write arrays, integer overflow behavior, specific type coercion rules, destructor timing tied to reference counting, and a standard library with decades of quirks. JavaScript has none of those things, or has different, incompatible versions of them. Before you write a single line of target code, you need a complete glossary of every mismatch, how each one behaves, and what the correct equivalent is.
Teams that attempt a manual rewrite almost never start with this inventory. They discover the mismatches one by one, in production, through failing tests. Designing that mapping correctly could take months before a single line of application code is rewritten. Trial and error stretches that to years. The same constraint applies to AI: it will produce syntactically valid JavaScript, but it has no focused, tested model of PHP-to-JS semantic equivalence. It will get most things right and silently get the subtle things wrong.
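One mismatch makes the point concretely. The sketch below is plain JavaScript, with the PHP behavior described in comments: PHP arrays have value semantics, JavaScript arrays are references, and PHP's loose equality coerces numeric strings where JavaScript's does not.

```javascript
// PHP: `$b = $a;` copies the array (copy-on-write), so `$b[] = 4;`
// leaves $a untouched. A line-for-line JS port silently diverges:
const a = [1, 2, 3];
const b = a;            // alias, not a copy
b.push(4);
console.log(a.length);  // 4 — the "copy" mutated the original

// A faithful translation must insert an explicit copy. Note that spread
// is shallow, while PHP's copy recurses into nested arrays, so the full
// mapping is more involved than this:
const c = [1, 2, 3];
const d = [...c];
d.push(4);
console.log(c.length);  // 3 — matches the PHP behavior

// Loose equality diverges too: PHP compares numeric strings as numbers,
// so "10" == "1e1" is true in PHP; JS compares them as strings:
console.log('10' == '1e1');  // false
```

Multiply this by every mismatch in the inventory above and the scale of the mapping problem becomes clear.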
This is the work Pext's team has already done. We spent the time understanding PHP's internals at the level required to build a correct transpiler. The semantic mapping is encoded in the compiler and the runtime, tested against real PHP projects at every level of complexity, from simple utility libraries to PHPUnit itself. That groundwork is not glamorous, but it is the difference between a working migration and an expensive experiment.
## Your test suite is not free to port
Application code is one part of the migration. The test suite is another, and it is routinely underestimated.
PHPUnit and Jest are not equivalent frameworks with different syntax. They have different paradigms. PHPUnit is class-based, assertion-driven, with lifecycle hooks tied to object construction. Jest is module-based, callback-driven, with a completely different model for setup, teardown, data providers, and expectations. Porting a PHPUnit test suite to Jest is not a mechanical translation. It requires rethinking every test at the structural level.
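To make the paradigm gap concrete, here is a sketch of how a PHPUnit data provider has to be restructured for Jest. The `add` function and the provider data are hypothetical; the PHPUnit side appears as comments, and the Jest call is shown but exercised with a plain loop so the example stays self-contained:

```javascript
// PHPUnit (class-based, annotation-driven):
//   /** @dataProvider additionProvider */
//   public function testAdd(int $a, int $b, int $expected): void {
//       $this->assertSame($expected, add($a, $b));
//   }
// Jest has no data-provider annotation; the provider becomes plain data
// fed to test.each, and setup moves from lifecycle methods to callbacks:
const add = (a, b) => a + b;       // hypothetical function under test
const additionProvider = [
  [1, 1, 2],
  [0, 5, 5],
  [-2, 2, 0],
];
// In Jest:
//   test.each(additionProvider)('add(%i, %i) = %i', (a, b, expected) => {
//     expect(add(a, b)).toBe(expected);
//   });
// Exercised here without Jest so the sketch runs standalone:
for (const [x, y, expected] of additionProvider) {
  console.log(add(x, y) === expected);  // true, true, true
}
```

This is the trivial case. Custom assertions, mocked dependencies, and lifecycle hooks tied to object construction require far more invasive restructuring.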
For a codebase with years of test coverage written by senior engineers, that rethinking is not trivial. Those tests encode domain knowledge. The edge cases, the unusual assertions, the custom expectations built for specific business rules: they represent investment that is easy to accidentally discard during a port. The new Jest tests may pass and still miss the thing the original PHPUnit test was actually checking.
There is a more immediate problem too: you are paying again for work already done. A senior engineer spent time understanding the correct behavior of a subsystem and wrote tests that prove it. Rewriting those tests in a different framework re-spends that time without adding new coverage. The tests cover the same ground again, in a different shape, at the same cost.
Pext sidesteps this entirely. We ported PHPUnit to JavaScript as part of the project, and the transpiled PHP application runs against it directly. The original test suite keeps working from day one. No rewrite, no paradigm shift, no lost coverage. The institutional knowledge embedded in those tests is preserved and immediately useful. You can move toward Jest incrementally if you choose, but the question on day one is not "how do we port 3,000 PHPUnit tests" but rather "how do we make the Node application better", which is a much more productive place to start.
## The PHP standard library is not free to replace
When an AI or a manual rewrite encounters a PHP standard library call, the approach is usually the same: find the nearest Node.js equivalent and use it. For simple cases, this works. For everything else, it produces workarounds.
PCRE is the clearest example. Perl-compatible regular expressions are built into PHP. The syntax is consistent, the behavior is well-specified, and the features (named captures, lookaheads, Unicode properties, possessive quantifiers) are used throughout enterprise PHP codebases, often in validation logic that seniors tuned carefully over years. Node.js does not have native PCRE support. Its RegExp is capable but not PCRE-compatible.
An AI or a manual rewrite will attempt to convert PCRE patterns to native JS regex. For straightforward patterns, this works. For complex patterns, it requires workarounds. Those workarounds get scattered across the codebase wherever a PCRE call existed. Eventually someone notices the pattern and extracts a utility function, maybe `advancedRegexMatch`, to centralize the hacks. That function becomes the new PCRE. Except it is not PCRE: it is a collection of edge-case patches that will silently fail on patterns the original PHP handled correctly.
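The gap is easy to demonstrate: some PCRE syntax is rejected outright by JavaScript's RegExp, and some is accepted with a silently different meaning.

```javascript
// PCRE possessive quantifiers (and atomic groups) have no JS equivalent;
// a mechanically ported pattern throws at construction time:
let threw = false;
try {
  new RegExp('\\d++');        // valid PCRE, rejected by JS
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw);           // true

// Worse than a hard error: \p{L} silently changes meaning without the
// u flag, so the ported validation runs but rejects valid input.
const noFlag = /\p{L}/;       // matches the literal text "p{L}"
const withFlag = /\p{L}/u;    // Unicode property escape: any letter
console.log(noFlag.test('é'));   // false — the port silently broke
console.log(withFlag.test('é')); // true
```

The hard-error case at least fails loudly. The silent case is the one that survives review and ships.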
The conclusion teams reach after weeks of this is: can we just have `pcre_match` in Node.js? That is exactly what `pext-pcre` is. Install the module, and PCRE works in Node.js the same way it works in PHP. No workarounds, no scattered hacks, no utility function accumulating exceptions. The complex validations that a senior built with confidence in PHP remain as correct and as readable in the Node.js application.
This is not lock-in. It is a quality-of-life module that exists because the gap between PHP and Node is real and because plugging it cleanly is better than routing around it forever. Node.js has substantial advantages over PHP: concurrency, ecosystem, tooling, deployment flexibility. But the places where it differs from PHP are precisely the most painful during a migration. Pext's runtime modules exist to absorb that pain at the right layer, so the application code does not have to.
## AI is not the competition
It is tempting to frame this as Pext versus AI. That is the wrong frame. AI is a tool in the same toolbox, and it is a good one in the right context.
Pext's transpiler is semantically deterministic. Every PHP construct maps to a defined JavaScript equivalent: no guessing, no hallucination, no silent approximation. What AI can do is improve the syntax of that output: identifier renaming, idiomatic restructuring, small-scale transformations that do not alter control flow or types but move the result closer to canonical JavaScript. Pext is designed to enable that layer of AI augmentation, not replace it.
AI is excellent at refactoring code that already runs correctly. Given working JavaScript, it can rename identifiers to idiomatic style, restructure classes, extract utilities, and add types at a pace no human team matches. That is Phase 2 work, after the migration is complete and the application is verified on Node. Using AI to do the initial semantic mapping from PHP to JS is asking it to do the hard part without the right foundation, and it will show.
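A toy illustration of the split (hypothetical code, not actual Pext output): the first function keeps the PHP shape, the second is the behavior-preserving refactor that AI is good at producing once the code already runs on Node.

```javascript
// Transpiler-style output: PHP structure rendered in JavaScript syntax.
function get_user_names($users) {
  const $names = [];
  for (let $i = 0; $i < $users.length; $i++) {
    $names.push($users[$i]['name']);
  }
  return $names;
}

// Phase 2: an idiomatic refactor with identical behavior.
const getUserNames = (users) => users.map((user) => user.name);

const input = [{ name: 'ada' }, { name: 'alan' }];
console.log(get_user_names(input));  // [ 'ada', 'alan' ]
console.log(getUserNames(input));    // [ 'ada', 'alan' ]
```

The refactor is safe precisely because the first version already runs and is covered by tests: any divergence is caught immediately.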
The combination that actually works: Pext handles the mechanical correctness of the migration, AI handles the progressive improvement of the result.
## Code quality is not fixed on day one
The table above marks Pext's code quality as "Medium." That is accurate for the initial output, and it is worth being honest about what that means. The transpiler preserves your PHP logic faithfully. It does not produce idiomatic JavaScript. The output looks like PHP with JavaScript syntax, because that is what it is. Variable names, class structures, and control flow patterns all come from the original PHP source.
This is not a permanent state. It is the starting point. Once you are on Node, you can assess the output at your own pace, prioritize the areas that matter most, and improve them incrementally. That process can be AI-augmented. The Strangler Fig pattern, for instance, lets you identify and replace high-value areas with idiomatic Node.js code while the rest of the application continues to run untouched. The improvement is scoped, reversible, and done without halting ongoing development.
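In dispatch terms, the Strangler Fig approach can be sketched like this (all names here are hypothetical, and a real application would do this at the router or reverse-proxy layer): rewritten idiomatic handlers take over path by path, while everything else falls through to the transpiled code, which keeps running unchanged.

```javascript
// High-value paths that have been rewritten in idiomatic Node.js.
const rewrittenHandlers = new Map([
  ['/checkout', () => 'idiomatic-node'],   // replaced incrementally
]);

// Stand-in for the transpiled application, still serving everything else.
const legacyTranspiledHandler = () => 'transpiled-php';

function dispatch(path) {
  const handler = rewrittenHandlers.get(path) ?? legacyTranspiledHandler;
  return handler();
}

console.log(dispatch('/checkout'));       // 'idiomatic-node'
console.log(dispatch('/legacy-report'));  // 'transpiled-php'
```

Removing an entry from the map rolls a route back to the transpiled version, which is what makes the improvement reversible.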
A manual rewrite eventually produces high-quality code, but that quality only matters once the rewrite is done, which is typically many months away. During that time, the PHP codebase continues to accumulate changes that the rewrite team must track and reconcile. The quality of the final result is real, but the cost of getting there is substantial and the window of disruption is long.
## Ship first, improve later
The strategic frame that matters most: the goal of the migration is to reach Node.js, not to immediately have perfect Node.js code.
Once you are on Node, the question changes from "how do we port this PHP to JavaScript" to "how do we make this JavaScript better." That is a much easier question. Your team knows JavaScript. Your toolchain knows JavaScript. AI assistants are optimized for JavaScript. The entire Node ecosystem is available to you. The marginal cost of improving Node.js code is low and keeps falling.
Pext gets you to that position fast. The transpiler runs against your full PHP codebase in a matter of days or weeks, not months. Your team never stops shipping. The PHP application keeps running during the migration. Development does not halt while you wait for a rewrite to catch up.
Manual rewrites halt or slow the PHP roadmap while the rewrite is in progress. AI conversions require substantial human review and correction passes that scale poorly with codebase size. Both approaches ask you to defer the benefits of being on Node until after the migration is done. Pext inverts that: you get to Node quickly, then improve from a position of stability.
## On lock-in
The lock-in concern is legitimate and worth addressing directly. We have written a full post on this: Mitigating runtime lock-in. The short version is that Pext's runtime is modular and escrow-available. Enterprise customers get access to the runtime source. The generated JavaScript uses readable, named identifiers that you own. There is no proprietary cloud dependency in the production path.
Lock-in with Pext is a function of how much of the runtime your application actually uses, and that footprint shrinks over time as the compiler improves and as you progressively replace runtime-backed calls with vanilla JavaScript. The direction of travel is deliberately toward less dependency, not more.
Manual rewrites and AI conversions have no lock-in by definition. That is true, but it is also beside the point when the primary risk is not lock-in but whether the migration succeeds at all, on time, without breaking production. Pext's lock-in is manageable. The alternative risks are not: a stalled rewrite, silently incorrect AI output, months of runway consumed.
If you want to see how this looks against your actual codebase, book a demo.