Learning interfaces

(I.e. interfaces that learn, not how to learn interfaces.)

Bret Victor, Magic Ink: Information Software and the Graphical Interface:

One simple approach to learning is to discover a common attribute of recent contexts, and narrow the current context along that attribute’s dimension. For example, in a music library, as the user chooses several bluegrass songs in a row, the software can graphically emphasize other songs in this genre. With further confidence, it might consider de-emphasizing or omitting songs outside of the genre. As another example, consider a user who requests information about “Lightwave,” then about “Bryce,” then “Blender.” These terms have many meanings individually, but as a group they are clearly names of 3D rendering software packages. A subsequent search for “Maya,” another 3D package, should not display information about the ancient civilization. In fact, information about Maya could be presented automatically.

Another simple approach is to establish the user’s velocity through the data space. If a person asks a travel guide about the Grand Canyon on one day, and Las Vegas the next day, the following day the software might suggest attractions around Los Angeles.
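
Victor is describing behaviour, not an implementation, but the first approach is easy to sketch: look for an attribute whose value dominates the user's recent selections, emphasise matching items, and only de-emphasise the rest once confidence is high. Here is a minimal sketch in Python, where the `Song` fields and the 0.8 confidence threshold are my assumptions, not Victor's:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Song:
    title: str
    genre: str  # the attribute we try to narrow on

def dominant_genre(recent: list[Song]) -> tuple[str, float] | None:
    """Most common genre among recent picks, with its share as a confidence."""
    if not recent:
        return None
    genre, count = Counter(s.genre for s in recent).most_common(1)[0]
    return genre, count / len(recent)

def rank(library: list[Song], recent: list[Song]) -> list[tuple[Song, str]]:
    """Tag each song for display: emphasise the inferred genre, and only
    de-emphasise (never silently drop) other genres at high confidence."""
    inferred = dominant_genre(recent)
    if inferred is None:
        return [(song, "normal") for song in library]
    genre, confidence = inferred
    tags = []
    for song in library:
        if song.genre == genre:
            tags.append((song, "emphasised"))
        elif confidence > 0.8:  # arbitrary threshold; an assumption
            tags.append((song, "de-emphasised"))
        else:
            tags.append((song, "normal"))
    return tags
```

The search example works the same way with a "topic" attribute in place of genre: once "Lightwave", "Bryce", and "Blender" all share the value "3D software", a query for "Maya" is resolved against that value first.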
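
The second, "velocity" approach amounts to extrapolating the user's trajectory through the data space. Assuming each query can be mapped to a coordinate (here latitude/longitude, with approximate values; that mapping is the genuinely hard part), a toy sketch:

```python
def predict_next(points: list[tuple[float, float]]) -> tuple[float, float] | None:
    """Linearly extrapolate one step along the user's recent trajectory."""
    if len(points) < 2:
        return None
    (lat1, lon1), (lat2, lon2) = points[-2], points[-1]
    return (2 * lat2 - lat1, 2 * lon2 - lon1)

grand_canyon = (36.11, -112.11)  # approximate coordinates
las_vegas = (36.17, -115.14)
print(predict_next([grand_canyon, las_vegas]))
# -> roughly (36.23, -118.17): continuing west into southern California,
#    loosely in the direction of Los Angeles
```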

My notes on Magic Ink say about this:

In Magic Ink, Victor proposes an AI system to work out what you want to search for based on things you just looked at. That sounds annoying and failure-prone to me.

Alan Briolat writes in response to a news report on how ‘Tesla’s new Model S will automatically shift between park, reverse, and drive’:

The other linked articles relating to ‘mode confusion’, confusing control placement, confusing placement of feedback displays, etc. are all somewhat terrifying too.

I was just complaining IRL for the millionth time about this general class of problem: humans are good at learning patterns, so stop making gadgets that guess, and focus on consistent interfaces whose rules users can easily internalise.

So many ‘operator error’ accidents result from a human and a computer not agreeing on what the current rules are, and magic makes rules more complicated, not less complicated.

The higher the stakes, the more dangerous a learning interface is. The search engine example could be useful, or creepy, or annoyingly wrong; automatic direction setting in a car could be useful, or creepy, or deadly wrong.