Monday, November 14, 2011

The Principle of Least Surprise as a survival trait

It has been a while since my last post.

I have been on a bit of a roller coaster at my workplace and at home, dealing with the fact that my productivity has been insufficient for all the things that I have taken on.

Thank the Gods for tools.

I will be writing about a number of new tools I've discovered and some tools I have retired in the past few months, but today I specifically want to talk about something very interesting I discovered in the last month that reinforces some old-fashioned developer wisdom.

One of my favorite principles that you will hear bandied about is the Principle of Least Surprise (POLS). You can go to Wikipedia for the full definition if you want, but I will summarize a key portion of it here:

"..when two elements of an interface conflict, or are ambiguous, the behaviour should be that which will least surprise the user; in particular a programmer should try to think of the behavior that will least surprise someone who uses the program, rather than that behavior that is natural from knowing the inner workings of the program..."

A week and a half ago I was involved in a training course that dealt with the latest findings of neuroscience and cognitive science with regard to the impact they have on people learning new things and shifting their behavior in organizations. It was actually much broader than that, but that was my focus when I went there. The course itself was very focused and pragmatic, rather than theoretical.

The fields of neurophysiology and cognitive science have had a lot of significant shifts in their accepted worldviews. A lot of what's happening in these fields is the natural result of our being able to directly monitor brain and nerve activity in ways we couldn't 10 years ago. It was probably inevitable that as we started to be able to directly measure things, there would be some surprises.

One typical set of discussions can be found in "Cognitive Science". That volume contains a fascinating back-and-forth on the fundamentals of perception and cognition in two articles: "Situated Action: A Neurophysiological Response to Vera and Simon" by W.J. Clancey, in Cognitive Science volume 17, pages 87 to 116, which is immediately followed by "Situated Action: A Reply to William Clancey" by Vera and Simon. You typically won't be able to find something like this except in your nearest technical library (that's where I had to go). The articles highlight the difficulty involved in analyzing cognition when the only tool you have for direct observation is the very process you are studying. The attempt to tease out reality from observation is fascinating in its own right. Not to mention that both parties successfully avoid the subtle, vicious, underhanded verbal stilettos that characterize so many scholarly exchanges. But I expect only a few neuro/bio geeks will be willing to make the effort to read them. Nor should you have to.

Rather than spending a lot of time giving you all the results of all of the research in a semi-scholarly manner, I will instead give you a summary of the key point and direct you to some more popular presentations of the material.

One of the key points that has been pushed to the fore (while confounding many cognitive scientists along the way) is that the brain, for the most part, is not processing in sequence. Our mental model has largely been: we perceive something and then, either consciously or unconsciously, choose an action. You will see terms in the literature like "reaction/action cycle" or "perception/action cascade". The measured reality runs counter to this. The brain appears to act, for the most part, as a high-speed, parallel, pattern-matching computer, constantly working in the now to predict what's going to happen next. Its operation is such that by the time you have received the perception (i.e. the perception has fully bloomed in your consciousness), you are already in motion. In effect, action and perception arise concurrently, with perception lagging behind action. You are already in action by the time you consciously perceive.

At one level it's very logical. In terms of survival, humans are not very fast, not very strong, and not very aware. Predicting the future is a great tool for staying alive in those circumstances. Hence the old saw, "We only learn from our mistakes." Learning from mistakes in a way that puts you in action to avoid them, without being slowed down by conscious thought, is clearly the way to go when avoiding predators. It explains how boxers can block punches faster than they can consciously perceive them. Your brain is taking the middleman of conscious thought out of the equation.

At another level it feels very illogical. Notice I say "it feels". It really is an emotional reaction rather than a logical analysis of data. It especially appears counter to our own experience of our own experience. That is one of the downsides of trying to use cognition to analyze cognition.

Leaving aside all of the implications this has for living life (just thinking about what this means for romantic relationships could get you into trouble), the implications for user interfaces are, to me, significant.

The Principle of Least Surprise appears to be much more critical than we expect, because we are dealing with an agent (the user's brain) that is designed to be in motion before it has seen our nice pretty screens. It certainly means that usability testing becomes even more important, because it is our only real on-the-ground way to evaluate the "survivability" of our user interfaces.

I am sure that I don't have even a solid beginning of an understanding of the implications of this new way of looking at things. I am pretty sure that if we stop relating to our user interfaces as works of art and craft and start relating to them the way the human brain does, as just another thing to survive, we will have some breakthroughs in the way we design and test user interfaces. Of course, it does give rise to some great possible names for testing measurements and regimens, like "Survivable User Quotient" and "User Interface Survivability Evaluation". We should be able to get some really great innuendo somehow. Maybe even a reality TV show: "I Survived Your UI". We could even have SDD, aka Survivability Driven Development.

I will just leave you with an example of a situation I deal with every day that illustrates this. I work with a development and training organization that measures everything it can to track the effectiveness of its courses: everything from the percentage of people who complete a course to homework completion and so on. We have a website that allows us to track many of these statistics, and the website is specifically designed with the intention that you pay attention when you put the statistics in. What this means at the user interface level is that for each set of statistics there are save and cancel buttons next to each row. The cancel button does what you would expect: it cancels the changes that have been input and returns you to the previous state. The save button acts the way a "commit" button would for a revision control system: it commits the changes for the row in such a way that they cannot be altered without explicitly sending an e-mail to the person who manages the statistics and having her change them at the database level.
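To make the mismatch concrete, here is a rough, hypothetical sketch of the two behaviors (the names `StatRow`, `saveRow`, and `finalizeRow` are mine, not the actual site's code). The button labeled "Save" on that site behaves like `finalizeRow` below, where a user walking up to it would expect something closer to `saveRow`.

```typescript
// Rough, hypothetical sketch of the two behaviors; not the actual site's code.
interface StatRow {
  id: number;
  values: Record<string, number>;
  finalized: boolean; // once true, only the statistics manager can change it at the database level
}

// What a user expects "Save" to do: persist the edits and stay editable.
function saveRow(row: StatRow, edits: Record<string, number>): StatRow {
  if (row.finalized) {
    throw new Error("Row is finalized; e-mail the statistics manager to change it.");
  }
  return { ...row, values: { ...row.values, ...edits } };
}

// What that site's "Save" button actually does: commit the row irrevocably.
function finalizeRow(row: StatRow, edits: Record<string, number>): StatRow {
  return { ...row, values: { ...row.values, ...edits }, finalized: true };
}
```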

Being a developer who has learned from over 20 years of mistakes, I save everything as I'm editing it. That habit is critical to my survival as a developer and has saved me a great deal of embarrassment (which will do for motivation until a real predator comes along). As a result, I have routinely had to send e-mails to correct statistics that I had partially entered and saved along the way. It has been that way for three years for me, and it will probably continue to be that way, simply because my day-to-day work as a developer constantly reinforces the "survival" value of saving my work. If this were a user interface I had a choice about working with, I would automatically avoid it, simply because its usage runs counter to one of my good "survival" practices.

The Principle of Least Surprise: it's not just a good idea, it's survival.
