How will ChatGPT and Generative AI change User Interface design?

What is ChatGPT good at? Text. Because all it has ever been trained on is text.

Large Language Models (the family of models that GPT-3 belongs to) are created by “training” on huge sets of text – scraped from the web by crawlers, or extracted from sources like Wikipedia.

GPT-3 alone was trained on hundreds of billions of words, and it can string those words together into the amazing answers to natural-language questions we’ve already seen.

But I’ve been wondering… what if we could train a large language model not on written prose, but on user interfaces?

Modern digital design platforms like Figma describe their layouts and components as structured data – recording the position, size, color and other attributes of every object on the screen.
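As a rough illustration, here’s a minimal sketch (in TypeScript, using the real Figma Plugin API) of serializing selected layers into a flat, text-like record – the kind of representation a language model could plausibly be trained on. The `<node … />` output format is my own invention, not anything Figma or an existing model actually uses.

```typescript
// Minimal sketch: serialize the selected Figma layers into a flat text record.
// figma.currentPage.selection and the x/y/width/height properties are real
// Plugin API features; the "<node ... />" output format is invented here.
function serializeNode(node: SceneNode): string {
  const attrs = [
    `type=${node.type}`,
    `name="${node.name}"`,
    `x=${Math.round(node.x)}`,
    `y=${Math.round(node.y)}`,
    `w=${Math.round(node.width)}`,
    `h=${Math.round(node.height)}`,
  ];
  return `<node ${attrs.join(" ")} />`;
}

const record = figma.currentPage.selection.map(serializeNode).join("\n");
console.log(record); // e.g. <node type=FRAME name="Login" x=0 y=0 w=375 h=812 />
```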

It’s not a stretch to imagine training a large language model on a million (or even a billion) screen designs.

The result would be a sort of “visual autocomplete”, where UX and UI designers could produce wireframes and even finished screen designs by laying out the basics and letting generative AI fill in the rest!
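To make that loop concrete, here’s a hedged sketch of what “visual autocomplete” might look like in practice: send a partial layout (in the serialized format above) to an imagined layout-completion model and collect the nodes it proposes. The endpoint, request shape and response shape are all invented for illustration – no such service exists.

```typescript
// Hypothetical "visual autocomplete": post a partial layout to an imagined
// layout-completion model and get back extra nodes in the same text format.
// The endpoint URL and response shape are invented for this sketch.
async function completeLayout(partialLayout: string): Promise<string[]> {
  const response = await fetch("https://example.com/v1/layout-complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: partialLayout, maxNodes: 20 }),
  });
  const { completion } = (await response.json()) as { completion: string };
  // Each line of the completion is one generated node record.
  return completion.split("\n").filter((line) => line.startsWith("<node"));
}

// Usage: give the model a frame and a heading, and let it propose the rest.
const draft = [
  '<node type=FRAME name="Sign up" x=0 y=0 w=375 h=812 />',
  '<node type=TEXT name="Title" x=24 y=64 w=327 h=40 />',
].join("\n");
completeLayout(draft).then((nodes) => console.log(nodes));
```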

These designs wouldn’t be the “flat” bitmap graphics we’ve seen DALL-E produce, but editable digital designs made of components that can be moved around and edited nondestructively.

And if a team is already using an established design system, the model could inherit those styles – automatically “theming” its mockups to match the organization’s existing look and feel.
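A design system could slot into that pipeline as a simple token lookup: the model proposes generic style roles, and the organization’s design tokens supply the concrete colors and type styles. The token names and placeholder syntax below are assumptions made up for the sketch.

```typescript
// Sketch of "theming" generated nodes with an existing design system: the
// model emits generic role placeholders like {color.primary}, and a token
// table maps each role to a real value. All token names here are invented.
type DesignTokens = Record<string, string>;

const orgTokens: DesignTokens = {
  "color.primary": "#0B5FFF",
  "color.surface": "#FFFFFF",
  "font.heading": "Inter Bold 28/34",
  "font.body": "Inter Regular 16/24",
};

// Replace {role} placeholders with token values; unknown roles pass through.
function applyTheme(generatedNode: string, tokens: DesignTokens): string {
  return generatedNode.replace(
    /\{([\w.]+)\}/g,
    (match: string, role: string) => tokens[role] ?? match
  );
}

const themed = applyTheme(
  '<node type=TEXT name="Title" font={font.heading} fill={color.primary} />',
  orgTokens
);
// -> <node type=TEXT name="Title" font=Inter Bold 28/34 fill=#0B5FFF />
```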

UX and UI designers aren’t going away. A good experience designer brings user insights from research as well as knowledge of human psychology – and blends them with the organization’s product vision.

Advances in AI will mean that designers spend less time building screens and more time focusing on solving real problems for real users – and that’s a great outcome to look forward to.
