Negative Assumptions in User Interface Design
- By Matt Stephens
- July 20, 2005
This might sound overly negative, but when designing a software UI, you should always try to avoid making basic assumptions. Actually there's a bit more to it than that, so let's try again: When designing a software UI, it's good to make basic assumptions about what the user will be doing, but generally bad to make basic assumptions about what the user won't be doing.
The sort of basic assumptions I'm talking about are generally to do with user interface design, but (at the risk of making a basic assumption) the same thought process could also be extended to other disciplines such as code-level design and architecture.
So far that probably all seems quite abstract, so here's an example. With Windows, Microsoft made the basic assumption that if the current window is maximized (i.e. full-screen), you wouldn't want to drag it around; it just wouldn't make sense. So, admirably, they locked down that part of the UI: the window can't be moved whilst maximized. This means there's one less customization available to users (which is a good thing!), one less way users can mess up their desktops and become confused, and one less way users can cause expense to themselves and their employers by fiddling around moving windows when they should instead be getting some real work done.
However, with the advent of twin-screen displays (they're now more common than you might think), the old assumption that a maximized window should be immovable is, quite simply, flawed. I've grown used to using a twin-screen display, and often rely on it when I'm working on two related Java files or Word documents at once. It's almost sublime to be able to copy text from one screen to another without messing around minimizing and resizing windows so that both are visible. With a twin-screen display, I just maximize a window on one screen, and maximize another window on the other screen. Then I can refer to both just by rolling my eyeballs (a scary sight), and drag text between screens. Joy!
An application that naturally occupies the full screen (and your full attention) is referred to as a sovereign application by interaction design guru Alan Cooper. The sovereign application is generally run full-screen, and often contains so much stuff that it really, genuinely needs to be run full-screen. A sovereign application also (in theory at least) gets your complete attention whilst you're working on a project. So, does it really make sense to have two screens side-by-side, each running a different sovereign application?
The answer is a resounding yes, because—though your attention will only be on one screen or the other at a time—you're often working on, say, more than one Word document simultaneously, or several program files for the same project. A single project may also consist of several different file types (e.g., UML diagrams, XML files, properties files, JPEGs, whatever). Each is created in a different sovereign application, even though you're almost invariably working on several different file types to create a single end result. It's in this (very common) situation that twin screens are a godsend.
So, back to the original basic assumption: When you launch an application window, the program has no way of knowing which screen it should load on. So, 50 percent of the time, you'll probably want to flip it over to the other screen. The obvious (and consistent) thing would be to drag the window's title-bar over to the other screen: the window should remain maximized, but simply flip over to its new home. However, Microsoft's basic assumption about maximized windows means this part of the UI is locked down, so you have to mess around shrinking windows, dragging them over, and re-maximizing them on the other display.
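The "drag while maximized" behaviour described above is simple enough to model in a few lines. Here's a minimal sketch in Python; the `Rect`, `Window`, and `drag_maximized` names are hypothetical types invented for illustration, not any real windowing API. The idea: instead of refusing the drag, the window manager simply re-maximizes the window onto whichever screen the cursor is now over.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # A screen's usable area in desktop coordinates.
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

@dataclass
class Window:
    bounds: Rect
    maximized: bool = False

def screen_at(screens: list[Rect], px: int, py: int) -> Rect:
    # Find the screen under the cursor; fall back to the first screen.
    for s in screens:
        if s.contains(px, py):
            return s
    return screens[0]

def drag_maximized(window: Window, screens: list[Rect], cursor_x: int, cursor_y: int) -> None:
    # The fix for the basic assumption: a drag on a maximized window
    # re-maximizes it onto the screen the cursor is over, rather than
    # being ignored.
    if window.maximized:
        window.bounds = screen_at(screens, cursor_x, cursor_y)
```

With two 1280x1024 screens side by side, dragging a window maximized on the left screen to a cursor position on the right screen just swaps its bounds for the right screen's area, with no restore/move/re-maximize dance and the maximized state left intact.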
This usability issue might not seem that bad, and isn't exactly the worst usability trait that an "Evil Empire" basher could hope to latch on to. But when you use twin screens on a regular basis, it can become quite annoying (especially if you're moving rapidly from file to file, frequently opening different document windows). It all comes back to the original basic assumption: that a maximized window occupies the full usable screen area. This is no longer necessarily true (and hasn't been for several years now).
So, back to our original rule of thumb:
When designing software, you should try to avoid making basic assumptions about what the user won't be doing.
It's fine to make basic assumptions about what the user will be doing. In fact, this is an essential part of UI design: you definitely want to focus the UI around specific tasks (use case-driven development takes this approach, as do most forms of scenario-based interaction design). However, locking down the UI (preventing certain things from being possible) because of assumptions about what the user won't be doing is an entirely different matter.
It may be that these negative assumptions are primarily what makes software so annoying to use. How often have you cursed a program because it steadfastly wouldn't allow you to do something? As it turns out, it's quite an easy thing to fix, simply by focusing the UI on what the user will be doing, not on what they won't be doing, or might not want to do. If part of your job involves designing user interfaces, it's worth keeping this in mind.