The pitch of every AI app builder in 2026 is the same: describe it, get a working site. The part that separates the serious tools from the demos is what happens next — when you want to take the code and run it somewhere else. Here's how to evaluate export quality before you commit, and what to demand from the tool you pick.
Why export quality matters
Every project outgrows the tool that created it. The question is what the exit cost looks like. If the export is clean Next.js, the exit cost is zero — you move the repo and keep shipping. If the export is a tangle of proprietary wrappers and hosted runtime dependencies, the exit cost is a full rebuild. The difference is invisible in the demo and decisive a year later.
Red flags to watch for
Hosting-bundled "export"
Some tools let you export the files, but the app only runs on their servers. You get source, but the source calls their APIs, uses their auth, and deploys through their pipeline. Technically exportable, practically locked in. Test by deploying the exported project to a vanilla Vercel or self-hosted Node environment before you scale usage. If it won't run, you don't own it.
Proprietary DSL inside the output
Watch for JSX that calls framework-specific magic functions, component names you don't recognize, or a folder called something like `.internal/` with compiled blobs. Real code is readable. If the output looks unfamiliar at the component level, it probably is.
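To make that concrete, here is a hypothetical example of the kind of page component a clean export produces: ordinary imports, a heading you can point to, nothing resolved by a hidden runtime. The styling and copy are placeholders.

```tsx
// Hypothetical clean output: plain React/Next.js, readable at a glance.
import Link from "next/link";

export default function PricingPage() {
  return (
    <main className="mx-auto max-w-3xl px-6 py-16">
      <h1 className="text-4xl font-bold">Pricing</h1>
      <p className="mt-4 text-lg">Simple plans, no surprises.</p>
      <Link href="/contact" className="mt-8 inline-block underline">
        Talk to us
      </Link>
    </main>
  );
}
```

The red-flag version of the same page renders something like a single `<ToolRuntime page="pricing" />` element whose markup only exists on the vendor's servers.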
Unnamed or bespoke dependencies
Open `package.json`. Every dependency should be a named, publicly published package with a maintainer you can verify. If you see `@tool/runtime` or `@tool/engine` in the dependency list, that's lock-in wrapped in a manifest.
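If you want to automate that check, a few lines of Node will do it. A minimal sketch, assuming you run it from the exported project's root; the `@tool/` scope is a stand-in for whatever namespace the builder actually uses.

```ts
// Sketch: flag dependencies that point back at the originating tool.
import { readFileSync } from "node:fs";

const manifest = JSON.parse(readFileSync("package.json", "utf8"));
const deps: Record<string, string> = {
  ...(manifest.dependencies ?? {}),
  ...(manifest.devDependencies ?? {}),
};

const suspicious = Object.entries(deps).filter(
  ([name, version]) =>
    name.startsWith("@tool/") ||      // vendor-scoped runtime packages (placeholder scope)
    version.startsWith("file:") ||    // unpublished tarballs bundled into the export
    version.startsWith("http"),       // packages served from the vendor's own registry
);

console.log(suspicious.length ? suspicious : "No obvious lock-in dependencies.");
```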
Inconsistent structure across pages
AI-generated code can be inconsistent — component A uses hooks one way, component B uses them another. That's fine during iteration and painful during maintenance. The tools that stay coherent over 20 pages generate against a typed component tree, not raw text. Ask to see a multi-page project's code before committing.
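"Typed component tree" sounds abstract, so here is an illustrative sketch of the idea (not any particular tool's internals): pages are planned as data against a fixed schema, and every page renders from the same component vocabulary, which is what keeps 20 pages consistent.

```ts
// Illustrative only: pages planned as typed data, not free-form text.
type Section =
  | { kind: "hero"; heading: string; subheading?: string }
  | { kind: "featureGrid"; items: { title: string; body: string }[] }
  | { kind: "faq"; entries: { question: string; answer: string }[] };

interface PagePlan {
  route: string;        // e.g. "/pricing"
  title: string;        // feeds the page <title> and metadata
  sections: Section[];  // rendered top to bottom by shared components
}
```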
What good export looks like
- Standard Next.js layout. `app/`, `components/`, `lib/`, `public/`. If you can't find where a page is defined, the abstraction is too clever.
- Common dependencies. `next`, `react`, `tailwindcss`, `lucide-react`, maybe `shadcn/ui` components in `components/ui`. Nothing you'd have to Google.
- TypeScript by default. Or clear PropTypes. A project without types ages poorly.
- Zero references to the originating tool. No imports from `@the-tool/`. No comments that say "generated by". Clean code that looks like a human wrote it, or could have.
- SEO instrumentation included. `generateMetadata`, JSON-LD blocks, a `sitemap.ts` and `robots.ts` (sketched after this list). If the tool doesn't know about SEO, its output won't rank, and you'll rebuild it anyway.
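The SEO items above are standard Next.js App Router conventions, which makes them easy to verify in an export. A minimal sketch of the two you can spot fastest (URLs and copy are placeholders, and the two snippets live in separate files):

```ts
// app/pricing/page.tsx: per-page metadata via the App Router convention.
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: "Pricing",
  description: "Simple plans, no surprises.",
};

// app/sitemap.ts (separate file): served by the framework at /sitemap.xml.
import type { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    { url: "https://example.com/", lastModified: new Date() },
    { url: "https://example.com/pricing", lastModified: new Date() },
  ];
}
```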
The 10-minute export test
Before you commit, run this evaluation:
- Generate the simplest project the tool supports (landing page + 2 pages).
- Export to a local directory. `cd` into it. Run `cat package.json`. Count unfamiliar dependencies.
- Run `npm install` and `npm run dev`. Does it work without any tool-specific CLI?
- Open a page file. Can you find the heading element by reading the JSX? Or is it buried in an abstraction?
- Run `npm run build`. Does it produce a standard `.next/` output? (A scripted version of these checks follows the list.)
- Push to a vanilla Vercel project (no tool integration). Does it deploy?
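If you evaluate more than one tool, the mechanical middle of this test is worth scripting. A rough sketch, assuming a Node toolchain and that you run it from the exported project's root:

```ts
// Sketch of the manifest, install, and build checks from the list above.
import { execSync } from "node:child_process";
import { existsSync, readFileSync } from "node:fs";

const manifest = JSON.parse(readFileSync("package.json", "utf8"));
console.log("Dependencies to review:", Object.keys(manifest.dependencies ?? {}));

// Plain npm only; if this needs a tool-specific CLI, the test has already failed.
execSync("npm install", { stdio: "inherit" });
execSync("npm run build", { stdio: "inherit" });

// A standard Next.js build writes its output to .next/.
console.log(existsSync(".next") ? "Standard .next/ output found." : "No .next/ output; investigate.");
```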
Any tool that fails this test will cost you more later than you saved up front. Any tool that passes is a real option worth considering on its other merits.
Bottom line
Export quality is the single best proxy for tool seriousness. Tools that bury the output behind a runtime are optimizing for their business model; tools that produce clean code you can leave with are optimizing for yours. Pick accordingly — and always run the export test before you scale up.