Vibe-coding meets Shopify: what it actually took
Written by

Ammar Haider

For well over 18 months now, tools like Lovable, Bolt, and Replit have been redefining how the world builds custom websites and apps. Yet here we are in 2026, and most Shopify pages are still either handwritten by a developer or dragged and dropped from a fixed template library.
This post is about why that gap exists and what it took to close it.
The template ceiling
If you've ever built a landing page on Shopify, you've probably used a page builder: tools like PageFly, GemPages, or Shogun that let you create pages outside your theme's default templates. They all follow the same pattern. You pick from a library of templates, then swap around text, images, and blocks along with brand styling until it's close enough.
But the structure of the page is already decided for you. The layout, the number of sections, the hierarchy of content: all of that was set when the template was designed, and no amount of dragging and dropping changes it. Over the past few months, many of these tools have added AI features, but only superficially. Under the hood, it's still manual assembly from pre-built parts. The template library is the product. That's their moat, and their ceiling. These tools were designed before Large Language Models (LLMs) existed, and adding AI to a drag-and-drop editor doesn't change what it is.
Beyond the ceiling
The Shopify ecosystem is so used to templates that moving away from them will take some adjustment. But look at what becomes possible when the ceiling is removed. Instead of browsing a library hoping something is close enough, you describe exactly what you want: "an asymmetrical masonry image grid on the right, with a rotating set of features around a central product image on the left." That's not a layout you'll find in any template library. It doesn't exist until you describe it.
This is the trade-off. Templates give you a starting point but decide the structure for you. Without them, you define the structure yourself, and the AI builds it. The result is pages that match the brief, not pages that approximate it.

Making AI speak Liquid
When LLMs generate code, they draw on vast amounts of open-source code from their training data, so their output naturally converges toward the most popular languages, frameworks, and technologies. If you ask an LLM coding agent like Cursor or Claude to generate front-end code, you'll usually get React. That's simply how LLMs work, and it's what tools like Lovable output.
Shopify has its own templating language called Liquid, and the overwhelming majority of stores are powered by it. To build a vibe-coding tool that actually works for these stores, you need to generate Liquid as the output. This is genuinely hard. Getting an LLM to produce it reliably requires serious prompt engineering, validation, and post-processing. And the real challenge is that Shopify pages aren't static. They pull live product data, collections, and metafields directly from the store.
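One part of that reliability work is mechanical validation of what the model emits. As a minimal illustration (not Shopify's actual validator, and far simpler than a real pipeline), here is a sketch of a check that generated Liquid has balanced block tags:

```python
import re

def validate_liquid(source: str) -> list[str]:
    """Check generated Liquid for unbalanced block tags.

    A toy post-processing check: real validation also covers filters,
    objects, and schema JSON. The tag set below is a partial list.
    """
    # Liquid block tags that must be closed by a matching {% end... %}
    BLOCK_TAGS = {"if", "unless", "for", "case", "capture", "schema"}
    errors: list[str] = []
    stack: list[str] = []
    for match in re.finditer(r"\{%-?\s*(\w+)", source):
        tag = match.group(1)
        if tag in BLOCK_TAGS:
            stack.append(tag)
        elif tag.startswith("end"):
            opener = tag[3:]
            if not stack or stack[-1] != opener:
                errors.append(f"unexpected {{% {tag} %}}")
            else:
                stack.pop()
    # Anything left open never got its closing tag
    errors.extend(f"unclosed {{% {tag} %}}" for tag in stack)
    return errors
```

A well-formed snippet like `{% if product.available %}In stock{% endif %}` passes cleanly, while a dangling `{% for %}` is flagged before it ever reaches a storefront.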
The "genie effect"
Anyone building with AI models knows the importance of token efficiency, because output tokens directly affect both performance and cost. But not all tokens are equal, even if they're priced the same. For instance, asking for the capital of New Zealand is very different from mapping the 3D structure of an undiscovered protein.
We built a generation pipeline that asks the LLM to produce only compressed, token-efficient outputs, which are then expanded through a post-processing pipeline into full, production-ready Liquid with all the necessary data bindings, schema blocks, and styling. Instead of asking an LLM to write 500 lines of Liquid directly, we ask it for only the essential structure and the main content decisions. Think of it like a genie: "Phenomenal cosmic power, itty-bitty living space." This lets us generate pages faster, price competitively, and pass the savings on to merchants.
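To make the shape of that expansion step concrete, here's a deliberately tiny sketch. The spec format and section fields are invented for illustration, not QuickPages' actual schema; the point is that the model only pays tokens for the decisions, while deterministic code supplies the boilerplate:

```python
import json

def expand_section(spec: dict) -> str:
    """Expand a compact section spec into full, renderable Liquid.

    Toy example: the LLM decides `type`, `heading`, and `body`;
    the schema block and settings bindings are generated for free.
    """
    schema = json.dumps({
        "name": spec["type"],
        "settings": [{"type": "text", "id": "heading", "label": "Heading"}],
    })
    return "\n".join([
        f'<section class="{spec["type"]}">',
        f"  <h2>{{{{ section.settings.heading | default: '{spec['heading']}' }}}}</h2>",
        f"  <p>{spec['body']}</p>",
        "</section>",
        "{% schema %}",
        schema,
        "{% endschema %}",
    ])
```

A dozen tokens of spec become a complete section with a settings schema attached, which is where most of the cost savings come from.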

Betting on Liquid
Liquid is Shopify's server-side rendering language, native to an estimated 99%+ of stores. And contrary to what you might hear about Hydrogen (Shopify's React-based headless framework) being the next big thing, Liquid is still very much where the investment is going. Recently, Shopify's CEO Tobi Lütke personally shipped an improvement to Liquid's parse-and-render speed, making it 53% faster. That will have a material effect on performance for millions of online stores.
Deep design
It's one thing to create the structure of a page, but how does QuickPages know how to make it look and feel like the brand's original design? This is where our deep design profiles come in, built by a proprietary brand intelligence agent.
This is something that has only very recently become possible. Over the past few months, frontier models have made huge leaps in their ability to understand and reason about visual design. They can look at a website and extract not just the obvious things like hex codes and font names, but the subtler qualities: spacing patterns, visual weight, imagery style, tone of voice. QuickPages was built to take full advantage of this.
Our brand intelligence agent crawls your store and builds what we call a deep design profile, composed of hundreds of data points. Colours, typography, layout patterns, imagery choices, content tone. All of it extracted without you lifting a finger. No uploading brand guidelines. No picking from style presets. No configuring colour schemes. You install, it analyses, and you're ready to generate.
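To give a flavour of what the crawl step pulls out, here's a heavily simplified sketch. The function name and the handful of signals are illustrative only; a real profile covers hundreds of data points and goes well beyond regexes over CSS:

```python
import re

def extract_design_hints(html: str, css: str) -> dict:
    """Pull basic brand signals from a page's markup and styles.

    Toy illustration of the crawl: here we only grab hex colours,
    font families, and top-level headings.
    """
    colours = sorted(set(re.findall(r"#[0-9a-fA-F]{6}\b", css)))
    fonts = sorted(set(
        f.strip().strip("'\"")
        for decl in re.findall(r"font-family\s*:\s*([^;}]+)", css)
        for f in decl.split(",")
    ))
    headings = re.findall(r"<h1[^>]*>(.*?)</h1>", html, re.S)
    return {"colours": colours, "fonts": fonts, "h1": headings}
```

The subtler qualities, like spacing patterns, visual weight, and tone of voice, are exactly the parts that need a frontier model rather than pattern matching.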
The results are uncanny. Here's an original store design next to a page generated by QuickPages for that same store.

Fine-tuning the last 10%
AI generation gets you most of the way there. But "most of the way" isn't good enough. What happens next is where the experience comes together.
The chat agent
QuickPages has a purpose-built chat agent that's pre-loaded with your brand's deep design profile and the full context of the page you're working on. This isn't a generic chatbot bolted on. It knows your colors, your fonts, your tone, and it knows what's on the page in front of you.
Talking to it is surprisingly gratifying. You can ask it to regenerate a section, change the layout, adjust the copy, shift the tone. It handles image generation too, so you're not leaving the app to source visuals. It's already quite powerful, and there's a lot more planned. The key thing is that it feels like working with someone who already understands the brief, not starting from scratch with every interaction.
Generating images in QuickPages:
Visual editing
Sometimes it's faster to show than to describe. For those moments, we built a visual editing layer that lets you refine generated pages without touching code.
This was one of the harder engineering challenges. We took inspiration from Lovable's visual edits feature. Our implementation parses and manipulates the Liquid HTML AST (Abstract Syntax Tree) directly, which is what allows clean, reliable edits to native Liquid code rather than patching over it. Tailwind CSS made this possible: because each utility class maps to a single style property, every visual attribute becomes an individually configurable control. Generation and editing speak the same language.
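The one-class-per-property idea is what makes a visual control trivially safe to implement. A sketch of the principle, with real Tailwind class names but an invented, much-reduced family mapping:

```python
# Utility families are mutually exclusive: setting a new value within a
# family replaces whichever member was there. This mapping is a tiny
# illustrative subset, not a complete Tailwind table.
FAMILIES = {
    "font-size": ("text-xs", "text-sm", "text-base", "text-lg", "text-xl"),
    "weight": ("font-light", "font-normal", "font-medium", "font-bold"),
}

def set_utility(class_attr: str, family: str, value: str) -> str:
    """Swap the active class in `family` for `value` on a class attribute."""
    assert value in FAMILIES[family]
    # Drop any existing member of the family, keep everything else
    kept = [c for c in class_attr.split() if c not in FAMILIES[family]]
    return " ".join(kept + [value])
```

Dragging a font-size slider then reduces to `set_utility(node_classes, "font-size", "text-lg")` on the relevant AST node, with no risk of clobbering unrelated styles.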
The same AST tooling is also handed to the chat agent, which means when you ask it to make changes, it's making precise, structural edits to the Liquid rather than regenerating from scratch. Chat for broad strokes, visual editing for fine-tuning.

What's next
We're just getting started. Analytics integration is coming, so you'll be able to see how your generated pages actually perform. After that, A/B testing: the ability to generate multiple variants from a single brief, deploy them in minutes, and let the data decide which one wins.
The bigger picture is this. For the first time on Shopify, marketing teams will be able to create and test landing page variants at the speed of their ad campaigns, not at the speed of their design team's backlog.
QuickPages is live on the Shopify App Store. Your first page is on us.