The first implementations within any new technological wave are often awkward.
The Lumière brothers’ Workers Leaving The Lumière Factory (1895) is one of the earliest films ever made. It clearly lacks the editing and narrative techniques that are now synonymous with television, film, and streaming video.
Speaking of streaming video, Me at the zoo (2005) is the first YouTube video ever uploaded. It similarly feels awkwardly casual and unplanned compared to videos uploaded to the platform these days.
Visionaries aside, we usually come to grips with a new medium by treating it as an extension of what we already know: transferring content wholesale and then augmenting it, over time, to fit its new home.
I think this is also the most likely case for AI-infused interfaces; vestiges of traditional interface elements will be implemented, turn out to be redundant, and then be removed over time.
Distilling an interface down to its core task(s) and then working outward may get us to this realisation more quickly than guessing at vestiges ahead of time. Noah echoed this sentiment in his Figma Config talk earlier this year:
Said another way, application-based computing necessitates knobs and dials to pre-empt anything you might need to do within its bounds. Task-based computing focusses instead on a back-and-forth to refine and ultimately fulfil a specific task, with AI as the Copilot.
Task-based computing may work well for booking a flight, ordering dinner, or giving creative direction on a poster. I predict application-based computing will remain relevant for lower-level interactions: playing audio, editing video, or manipulating images. AI will itself have a low-level role in these cases; preparing transcripts, suggesting edits, and extending the toolset.
This is a good place to bring up a point from Barton: there may come a time where AI does a better job of designing these interfaces than we do, thus controlling the evolution of its own interface design.
I’d like to maintain a visual record irrespective of how and when (and by whom) AI-infused interfaces shake out.
Davo put me onto Futurepedia and Future Tools. Mobbin maintains app and web categories for ‘Artificial Intelligence’. But I’ve yet to find a concentrated repository of AI-infused interfaces organised by interaction types. Here’s my starting point:
Generative AI Interfaces
An Airtable base
That Airtable base has been updated each time I’ve come across an AI-infused interface over the last week or so. Its most useful formats are probably the filtered gallery views, which include:
I’ll add more examples, galleries, and perhaps create a front-end if this repository continues to have legs.
Update: Davo sent me