Feeling Our Way Forward

The first implementations within any new technological wave are often awkward.

The first radio broadcasters treated their medium like an auditory newspaper. The first web pages acted as digital brochures. The first smartphone apps attempted to replicate desktop experiences.

The Lumière brothers’ Workers Leaving the Lumière Factory (1895) is one of the earliest films ever made[1]. It clearly lacks the editing and narrative techniques now synonymous with television, film, and streaming video.

Speaking of streaming video, Me at the zoo (2005) is the first YouTube video ever uploaded. It similarly feels awkwardly casual and unplanned compared to videos uploaded to the platform these days.

Visionaries aside, we usually come to grips with these new mediums by treating them as extensions of what we already know: transferring content wholesale, then gradually adapting it to fit its new home.

I think this is the most likely path for AI-infused[2] interfaces too: vestiges of traditional interface elements will be implemented, prove redundant, and be removed over time.

Pre-empting vestiges

Distilling an interface down to its core task(s) and then working outward may get us to this realisation more quickly than guessing at vestiges ahead of time. Noah echoed this sentiment in his Figma Config talk earlier this year:

I think it's possible we'll shift from an app-centric world, where we're opening big monolithic containers and clicking around to express our intents, to more dynamic task-based computing. Emerging tech like ChatGPT might move us away from websites and apps back to what we've been doing since the dawn of Google: typing in questions and getting answers.

Noah Levin, AI: The next chapter in design

Said another way, application-based computing necessitates knobs and dials to pre-empt anything you might need to do within its bounds. Task-based computing focusses instead on a back-and-forth to refine and ultimately fulfil a specific task, with AI as the copilot.

Task-based computing may work well for booking a flight, ordering dinner, or giving creative direction on a poster. I predict application-based computing will remain relevant for lower-level interactions: playing audio, editing video, or manipulating images. AI will itself have a low-level role in these cases: preparing transcripts, suggesting edits, and extending the toolset.
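To make the contrast concrete, here’s a minimal sketch of what a task-based loop might look like in TypeScript. It assumes nothing beyond a generic chat-style model call; callModel, fulfilTask, and the prompt wording are illustrative names I’ve made up, not any particular product’s API.

```ts
// A stand-in for whatever chat-style model endpoint you have to hand.
async function callModel(prompt: string): Promise<string> {
  // Stub: swap in a real chat-completion request here.
  return `Draft result for: ${prompt}`;
}

// A single intent plus iterative refinements, rather than a panel of
// knobs and dials pre-empting every option up front.
async function fulfilTask(
  intent: string,
  refinements: string[] = [],
): Promise<string> {
  const prompt = refinements.length
    ? `Fulfil this task: ${intent}. Refinements so far: ${refinements.join("; ")}`
    : `Fulfil this task: ${intent}`;
  return callModel(prompt);
}

// Each round of feedback narrows the result toward fulfilment.
console.log(await fulfilTask("Book a flight to Melbourne on Friday"));
console.log(
  await fulfilTask("Book a flight to Melbourne on Friday", [
    "morning departure",
    "window seat",
  ]),
);
```

The design choice is the point: the interface’s surface area shrinks to one intent and a list of refinements, and everything else is negotiated in the back-and-forth.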

This is a good place to bring up a point from Barton: there may come a time when AI does a better job of designing these interfaces than we do, thereby steering the evolution of its own interface design.

Keeping score

I’d like to maintain a visual record irrespective of how and when (and by whom) AI-infused interfaces shake out.

Davo put me onto Futurepedia and Future Tools. Mobbin maintains app and web categories for ‘Artificial Intelligence’. But I’ve yet to find a concentrated repository of AI-infused interfaces organised by interaction types. Here’s my starting point:

Generative AI Interfaces
An Airtable base (View on Airtable)

That Airtable base has been updated each time I’ve come across an AI-infused interface over the last week or so. Its most useful formats are probably the filtered gallery views.

I’ll add more examples and galleries, and perhaps create a front-end, if this repository continues to have legs.
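If it does, a small schema sketch hints at how ‘organised by interaction types’ might work in practice. Everything below is a guess at plausible fields and categories, not the actual structure of the base:

```ts
// Field names and categories are my guesses, not the base's real schema.
type InteractionType =
  | "inline suggestion"
  | "chat panel"
  | "command palette"
  | "generative canvas";

type InterfaceEntry = {
  product: string;              // e.g. a text editor's AI assistant
  url: string;                  // where the interface lives
  interaction: InteractionType; // the pattern being catalogued
  platform: "web" | "mobile" | "desktop";
  capturedAt: string;           // ISO date the screenshot was taken
  notes?: string;               // anything noteworthy about the pattern
};

// Filtered galleries then fall out of simple predicates:
const chatPanels = (entries: InterfaceEntry[]): InterfaceEntry[] =>
  entries.filter((entry) => entry.interaction === "chat panel");
```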


  1. It’s often referred to as the earliest motion picture ever made. A Louis Le Prince film, however, predates it by seven years. ↩︎

  2. Is ‘natural’ a better way to describe these? ‘Conversational’ is too prescriptive, given that conversation isn’t always the pattern. ↩︎