The Forge server has one gigabyte of RAM. Vite's build process, compiling 1,575 modules including a 1.9 MB CommandCenter chunk, was getting OOM-killed during the "computing gzip size" phase. Every deploy ran npm ci && npm run build on the server — a build that succeeded locally in seconds but died on a machine with a third of the memory.

The fix was to stop pretending the server should build anything. Committed public/build/ to git. Removed npm ci and npm run build from both deploy configurations. Forge now receives pre-built assets through git pull. No Node.js needed on the server. No OOM kills. Faster deploys.
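Mechanically, the change is just un-ignoring the build directory and committing it. A minimal sketch in a throwaway repo — the .gitignore contents and file names here are illustrative; the post only confirms the public/build/ path:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Typical Laravel/Vite .gitignore ignores the build output (assumed contents).
printf '/node_modules\n/public/build\n' > .gitignore

# Stand in for a local "npm run build" with a dummy artifact.
mkdir -p public/build
echo '{}' > public/build/manifest.json

# Drop the /public/build ignore rule so built assets become trackable.
grep -v '^/public/build$' .gitignore > .gitignore.tmp
mv .gitignore.tmp .gitignore

# Commit the build output alongside the rule change.
git add .gitignore public/build
git -c user.email=dev@example.com -c user.name=dev commit -qm "Track pre-built Vite assets"

git ls-files public/build
```

From here on, `git pull` on the server delivers the assets; Node.js never has to be installed there.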

The new workflow: make changes, build locally, commit everything including the build output, push. Forge auto-deploys: pull, composer install, cache clear, migrate, done. The tradeoff is larger git commits when frontend assets change. That's negligible compared to deploy reliability on a server that can't afford the memory to compile them.
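The server-side half of that workflow is the Forge deploy script. A sketch under stated assumptions — the site path, branch name, and exact artisan invocations are illustrative; the post only names the steps (pull, composer install, cache clear, migrate) and what was removed:

```shell
# Forge deploy script, trimmed. Note what's absent: no npm ci, no npm run build.
cd /home/forge/example.com            # site path is an assumption

git pull origin main                  # pre-built assets arrive with the pull

composer install --no-dev --no-interaction --prefer-dist --optimize-autoloader

php artisan cache:clear               # "cache clear"
php artisan migrate --force           # "migrate", non-interactive for CI/deploy
```

On the developer side, the matching routine is npm run build, git add public/build, commit, push — the push is what triggers the script above.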

Committing build output feels wrong to developers raised on gitignore-everything-generated. But the rule isn't universal — it's contextual. On a server with resources to spare, build on deploy. On a one-gigabyte VPS, build where the memory is and ship the result.