Several new user interface technologies and interaction principles seem to define a new generation of user interfaces that will move off the flat screen and into the physical world to some extent. Many of these next generation interfaces will not have the user control the computer through commands, but will have the computer adapt the dialogue to the user's needs based on its inferences from observing the user.
Most current user interfaces are fairly similar and belong to one of two common types: either the traditional alphanumeric full screen terminals with a keyboard and function keys, or the more modern WIMP workstations with windows, icons, menus, and a pointing device. In fact, most new user interfaces released after 1983 have been remarkably similar. In contrast, the next generation of user interfaces may move beyond the standard WIMP paradigm to involve elements like virtual realities, head mounted displays, sound and speech, pen and gesture recognition, animation and multimedia, limited artificial intelligence, and highly portable computers with cellular or other wireless communication capabilities. It is hard to envision this hodgepodge of technologies combined in a single, unified user interface design, and indeed, it may be one of the defining characteristics of the next generation user interfaces that they abandon the principle of conforming to a canonical interface style and instead become more radically tailored to the requirements of individual tasks.
The fundamental technological trends leading to the emergence of several experimental and some commercial systems approaching next generation capabilities certainly include the well known phenomena that CPU speed, memory storage capacity, and communications bandwidth all increase exponentially with time, often doubling in as little as two years. In a few years, personal computers will be powerful enough to support such highly demanding user interfaces, and these interfaces will also be necessary if computer use is to spread to user populations beyond the already mostly penetrated market of office workers.
Traditional user interfaces were function oriented: the user accessed whatever the system could do by specifying the function first and then its arguments. For example, to delete a file in a line-oriented system, the user would first issue the delete command in some way, such as typing delete, and would then specify the name of the file to be deleted. The typical syntax for function oriented interfaces was thus a verb-noun syntax.
In contrast, modern graphical user interfaces are object oriented: the user first accesses the object of interest and then modifies it by operating upon it. There are several reasons for choosing an object oriented approach for graphical user interfaces. One is the desire to depict the objects of interest continuously so that the user can manipulate them directly. Icons are good at depicting objects but often poor at depicting actions, leading objects to dominate the visual interface. Furthermore, the object oriented approach implies a noun-verb syntax, where a file is deleted by first selecting the file and then issuing the delete command (for example by dragging it into the recycle bin). With this syntax, the computer knows the operand at the time the user tries to select the operator, and it can therefore help the user select a function that is appropriate for that object, for instance by showing only valid commands in menus. This eliminates an entire category of syntax errors caused by mismatches between operator and operand.
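The benefit of knowing the operand before the operator can be sketched in a few lines of code. This is a minimal illustration with invented object types and command names, not the API of any real toolkit: because the object is selected first, the interface can filter the menu down to valid commands and reject mismatched operator/operand pairs outright.

```python
# Hypothetical mapping from object types to the commands valid for them.
COMMANDS = {
    "file":   {"open", "delete", "rename", "duplicate"},
    "folder": {"open", "delete", "rename"},
    "trash":  {"open", "empty"},
}

def valid_commands(selected_object_type):
    """Commands to show in a menu once an object of this type is selected."""
    return sorted(COMMANDS.get(selected_object_type, set()))

def run(selected_object_type, command):
    """Noun-verb order: the operand is already known when the operator is
    chosen, so an operator/operand mismatch can be caught up front."""
    if command not in COMMANDS.get(selected_object_type, set()):
        raise ValueError(f"'{command}' is not valid for a {selected_object_type}")
    return f"{command} applied to {selected_object_type}"
```

With a verb-noun syntax the system would only discover the mismatch after both tokens had been entered; here the invalid pairing never even appears as an option.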
A further change in functionality access is likely to occur at the macro level in the move from application oriented to document oriented systems. Traditional operating systems have been based on the notion of applications that the user used one at a time. Even window systems and other attempts at application integration typically forced the user to use one application at a time, even though other applications were running in the background. Also, any given document or data file was only operated on by one application at a time. Some systems allow the construction of pipelines connecting multiple applications, but even these systems still basically have the applications act sequentially on the data.
The application model is constraining to users who have integrated tasks that require multiple applications to solve. Past approaches to alleviating this mismatch have included integrated software and composite editors that could deal with multiple data types in a single document. No single program is likely to satisfy all computer users, however, no matter how tightly integrated it is, so other approaches have also been invented to break the application barrier. Cut and paste mechanisms have been available for several years to allow the inclusion of data from one application in a document belonging to another application. Recent systems even allow live links back to the original application such that changes in the original data can be reflected in the copy in the new document (such as Microsoft's OLE technology). However, these mechanisms are still constrained by the basic application model, which requires each document to belong to a specific application at any given time.
An alternative model is emerging in object oriented operating systems where the basic object of interest is the user's document. Any given document can contain sub objects of many different types, and the system will take care of activating the appropriate code to display, print, edit, or email these data types as required. The main difference is that the user no longer needs to think in terms of running applications, since the data itself knows how to integrate the available functionality in the system. In some sense, such an object oriented system is the ultimate composite editor, but the difference compared to traditional, tightly integrated multimedia editors is that the system is open and allows plug and play addition of new or upgraded functionality as the user desires without changing the rest of the system.
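The document oriented model described above can be sketched as follows. All names here are invented for illustration; no real operating system API is implied. A document holds typed sub objects, a registry maps each data type to the code that handles it, and installing a new handler upgrades every document at once without changing the document or the other handlers.

```python
# Registry of renderers, one per data type (plug and play: new types can be
# registered at runtime without modifying existing documents or handlers).
RENDERERS = {}

def register(data_type, renderer):
    RENDERERS[data_type] = renderer

class Document:
    """A document is a sequence of (data_type, payload) sub objects; the
    system, not an owning application, dispatches each to its handler."""
    def __init__(self, parts):
        self.parts = parts

    def render(self):
        out = []
        for data_type, payload in self.parts:
            renderer = RENDERERS.get(data_type)
            if renderer is None:
                out.append(f"[no handler installed for {data_type}]")
            else:
                out.append(renderer(payload))
        return "\n".join(out)

register("text", lambda s: s)
register("table", lambda rows: "\n".join(" | ".join(r) for r in rows))

doc = Document([("text", "Quarterly summary"),
                ("table", [["Q1", "120"], ["Q2", "150"]]),
                ("chart", {"data": [1, 2, 3]})])   # no chart handler yet
```

Note that the document containing a chart degrades gracefully until a chart handler is registered; the same dispatch scheme would apply to printing, editing, or emailing each data type.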
Even the document oriented systems may not have broken sufficiently with the past to achieve a sufficient match with the users' task requirements. It is possible that the very notion of files and a file system is outdated and should be replaced with a generalised notion of an information space with interlinked information objects in a hypertext manner. As personal computers get multi-gigabyte hard disks, and additional terabytes become available over the Internet, users will need to access hundreds of thousands or even millions of information objects. To cope with this mass of information, users will need to think of these objects in more flexible ways than simply as files, and information retrieval facilities need to be made available on several different levels of granularity to allow users to find and manipulate associations between their data. In addition to hypertext and information retrieval, research approaching this next generation data paradigm includes the concept of piles of loosely structured information objects; the information workspace with multiple levels of information storage connected by animated computer graphics to induce a feeling of continuity; personal information management systems where information is organised according to the time it was accessed by the individual user; and the integration of fisheye hierarchical views of an information space with feedback from user queries. Also, several commercial products are already available to add full text search capabilities to existing file systems, but these utility programs are typically not integrated with the general file user interface.
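The combination of hypertext linking and retrieval at word granularity can be sketched in miniature. This is a toy model with invented names, not a real retrieval system: each information object carries text and outgoing links, and a simple inverted index supports full text search across the whole space, independent of any file hierarchy.

```python
from collections import defaultdict

class InfoObject:
    """An information object: text content plus hypertext links to others."""
    def __init__(self, oid, text):
        self.oid, self.text = oid, text
        self.links = []                     # ids of linked objects

class InfoSpace:
    def __init__(self):
        self.objects = {}
        self.index = defaultdict(set)       # word -> ids of objects using it

    def add(self, oid, text):
        self.objects[oid] = InfoObject(oid, text)
        for word in text.lower().split():   # naive tokenisation for the sketch
            self.index[word].add(oid)

    def link(self, src, dst):
        self.objects[src].links.append(dst)

    def search(self, *words):
        """Ids of objects containing all query words (full text AND query)."""
        sets = [self.index.get(w.lower(), set()) for w in words]
        return set.intersection(*sets) if sets else set()

space = InfoSpace()
space.add("memo1", "budget meeting notes for March")
space.add("memo2", "travel budget request")
space.link("memo1", "memo2")
```

A real system would add ranking, stemming, and retrieval at coarser granularities (collections, piles, workspaces), but the key point survives even in this sketch: objects are found by content and association rather than by their location in a directory tree.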