Generative UI: A rich, custom, visual interactive user experience for any prompt
November 18, 2025
Yaniv Leviathan, Google Fellow, Dani Valevski, Senior Staff Software Engineer, and Yossi Matias, Vice President & Head of Google Research
—
We introduce a novel implementation of generative UI, enabling AI models to create immersive experiences and interactive tools and simulations, all generated completely on the fly for any prompt. This is now rolling out in the Gemini app and Google Search, starting with AI Mode.
—
Generative UI is a powerful capability where an AI model creates not only content but an entire user experience. We present a new implementation that dynamically generates immersive visual experiences and interactive interfaces, such as web pages, games, tools, and applications, fully customized in response to any prompt or instruction. Prompts can range from a single word to detailed, complex instructions. Unlike static, predefined interfaces, these new interfaces are created dynamically, in real time.
In our paper, “Generative UI: LLMs are Effective UI Generators” (available [here](https://generativeui.github.io/static/pdfs/paper.pdf)), we describe the core principles enabling generative UI and demonstrate its viability. Evaluations show that, setting generation speed aside, human raters strongly prefer these generated interfaces over standard LLM outputs. This marks a step toward fully AI-generated user experiences that adapt in real time to user needs, rather than relying on catalogs of existing applications.
Our research on generative UI also appears today in the [Gemini app](https://blog.google/products/gemini/gemini-3-gemini-app) through an experiment called dynamic view and in AI Mode in [Google Search](http://blog.google/products/search/gemini-3-search-ai-mode).
—

*Generative UI is useful for diverse applications. For any user question or prompt, from a single word to complex instructions, the model creates a fully custom interface.*
**Left:** Getting tailored fashion advice ([demo](https://generativeui.github.io/static/demos/carousel.html?result=fashion-advisor))
**Middle:** Learning about fractals ([demo](https://generativeui.github.io/static/demos/carousel.html?result=fractal-explorer))
**Right:** Teaching mathematics ([demo](https://generativeui.github.io/static/demos/carousel.html?result=basketball-math))
More examples on the [project page](https://generativeui.github.io/).
—
### Bringing generative UI to Google products
Generative UI capabilities are rolling out as experiments in the [Gemini app](https://blog.google/products/gemini/gemini-3-gemini-app) via dynamic view and visual layout. Dynamic view uses generative UI to design and code a fully customized interactive response to each prompt, tailoring the experience to different user needs: explaining the microbiome to a 5-year-old, for example, calls for a very different interface than explaining it to an adult.
Dynamic view supports scenarios ranging from learning probability ([demo](https://generativeui.github.io/static/demos/carousel.html?result=rolling-an-8)) to practical tasks like [event planning](https://generativeui.github.io/static/demos/carousel.html?result=thanksgiving) and [fashion advice](https://generativeui.github.io/static/demos/carousel.html?result=fashion-advisor). These interfaces enable interactive learning, exploration, and play. Initially, users may see only one of these experiments while we gather feedback.
—

*Example of generative UI in dynamic view based on the prompt: “Create a Van Gogh gallery with life context for each piece.”*
—
Generative UI experiences are also integrated into [Google Search AI Mode](http://blog.google/products/search/gemini-3-search-ai-mode), delivering dynamic visual experiences with interactive tools and simulations specifically generated for user questions. Powered by Gemini 3’s multimodal understanding and coding abilities, AI Mode generates customized interfaces on the fly, enhancing comprehension and task completion.
Generative UI features in AI Mode are available now to Google AI Pro and Ultra subscribers in the U.S. Select “Thinking” from the model drop-down menu to try it out.
—

*Example of AI Mode in Google Search with the prompt: “Show me how RNA polymerase works, the stages of transcription, and differences between prokaryotic and eukaryotic cells.”*
—
### How the generative UI implementation works
Our implementation, detailed in the [paper](https://generativeui.github.io/static/pdfs/paper.pdf), is based on Google’s Gemini 3 Pro model with three key additions (see the sketch after this list):
1. **Tool access:** A server provides the model with essential tools such as image generation and web search; tool outputs can be integrated by the model or, for efficiency, sent directly to the user’s browser.
2. **Carefully crafted system instructions:** Detailed instructions specify the system’s goal and provide planning guidance, examples, technical formatting requirements, tool manuals, and tips for avoiding common errors.
3. **Post-processing:** Outputs are refined with post-processing to fix frequent issues.
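As a concrete illustration, the three pieces above could be wired together roughly as follows. This is a minimal sketch, not the production system: the instruction text and every name (`call_model`, `run_tool`, `postprocess_html`, `ModelReply`) are hypothetical stand-ins.

```python
"""Minimal sketch of the generative UI recipe: system instructions,
a tool loop, and post-processing. All names are hypothetical."""
import re
from dataclasses import dataclass

SYSTEM_INSTRUCTIONS = (
    "You design and code a complete, self-contained HTML page that answers "
    "the user's prompt. Plan the experience first, then emit one HTML "
    "document. Call the image_generation or web_search tools when assets "
    "or facts are needed."
)

@dataclass
class ModelReply:
    text: str
    tool_call: dict | None = None  # e.g. {"name": "web_search", "args": {...}}

def call_model(messages: list[tuple[str, str]]) -> ModelReply:
    """Stub for a call to a coding-capable LLM (e.g. Gemini 3 Pro)."""
    return ModelReply(text="Here is your page:\n<html><body><h1>Fractals</h1></body></html>")

def run_tool(tool_call: dict) -> str:
    """Stub for server-side tools such as image generation or web search."""
    return f"[result of {tool_call['name']}]"

def postprocess_html(text: str) -> str:
    """Fix a frequent issue: stray prose wrapped around the HTML document."""
    m = re.search(r"<html.*</html>", text, re.DOTALL | re.IGNORECASE)
    return m.group(0) if m else text.strip()

def generate_ui(prompt: str) -> str:
    messages = [("system", SYSTEM_INSTRUCTIONS), ("user", prompt)]
    while True:  # tool loop: keep calling until the model emits the final page
        reply = call_model(messages)
        if reply.tool_call is None:
            return postprocess_html(reply.text)
        messages.append(("tool", run_tool(reply.tool_call)))

print(generate_ui("explain fractals to a 5-year-old"))
```

The branch described above, where tool outputs bypass the model and go directly to the user’s browser for efficiency, is omitted here for brevity.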
—

*A high-level system overview of the generative UI implementation.*
—
For some applications, consistent styles may be preferred. Our implementation can be configured to produce results and generated assets in a uniform style for all users. Otherwise, it uses default styles or lets users influence styling via prompts, as seen in dynamic view in the Gemini app.
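As a rough sketch of what this configuration could look like, a uniform style can simply be an extra specification appended to the system instructions; the helper name and theme text below are hypothetical.

```python
# Hypothetical sketch: a uniform style is enforced by appending a style
# specification to the system instructions. With no specification, the
# model falls back to default styles and user prompts can steer styling.
BASE_INSTRUCTIONS = "Design and code a complete, self-contained HTML page for the prompt."

WIZARD_GREEN = (
    "Style every page with the 'Wizard Green' theme: a green accent "
    "palette, rounded cards, and the same header layout for every user."
)

def build_instructions(style_spec: str | None = None) -> str:
    if style_spec is None:
        return BASE_INSTRUCTIONS  # defaults apply; prompts may override styling
    return f"{BASE_INSTRUCTIONS}\n\n{style_spec}"

print(build_instructions(WIZARD_GREEN))
```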
—

*Screenshots of generative UI outputs with consistent “Wizard Green” styling.*
—
### Generative UI outputs are strongly preferred over standard formats
We developed PAGEN, a dataset of expert-made websites, to facilitate evaluations of generative UI implementations.
In user preference studies, expert-designed sites scored highest, closely followed by results from our generative UI implementation. The generated interfaces were strongly preferred over baseline LLM raw-text and markdown outputs, with performance dependent on the quality of the underlying model. See more in the [paper](https://generativeui.github.io/static/pdfs/paper.pdf).
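For intuition about how such preference studies are typically aggregated (the actual protocol and results are in the paper), a win rate per arm can be computed from pairwise rater votes; the arm names and votes below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical pairwise votes: (winner, loser) for one prompt and one rater.
votes = [
    ("expert_site", "generative_ui"),
    ("generative_ui", "raw_text"),
    ("generative_ui", "markdown"),
    ("expert_site", "raw_text"),
]

wins: dict[str, int] = defaultdict(int)
comparisons: dict[str, int] = defaultdict(int)
for winner, loser in votes:
    wins[winner] += 1
    comparisons[winner] += 1
    comparisons[loser] += 1

# Rank arms by the share of pairwise comparisons they won.
for arm in sorted(comparisons, key=lambda a: wins[a] / comparisons[a], reverse=True):
    print(f"{arm}: won {wins[arm]}/{comparisons[arm]} comparisons")
```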
—
### Opportunities ahead
Generative UI is in its early days with room for improvement. Current implementations can take a minute or more to generate results and sometimes produce inaccuracies; these challenges are active research areas.
Generative UI exemplifies the “magic cycle of research,” where breakthroughs inspire innovation that opens further research opportunities. We see potential to extend generative UI with wider service access, greater context adaptation, human feedback incorporation, and richer visual and interactive interfaces. We are excited about its future possibilities.
—
**Labels:**
Generative AI, Human-Computer Interaction and Visualization, Product
—
**Quick links:**
– [Paper](https://generativeui.github.io/static/pdfs/paper.pdf)
– [Project Page](https://generativeui.github.io/)
—