Why are clients usually attracted to graphical user interfaces? Why aren't more applications developed using a console (command line / terminal) interface?
As is often the case, let's start with the first section of the Wikipedia article.
In computing, a graphical user interface (GUI, sometimes pronounced "gooey") is a type of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLI), which require commands to be typed on the keyboard.
Reference: Graphical user interface
Note the steep learning curve attributed to command-line interfaces, implying that GUIs don't have one. We need to call in a test user to find out whether that is true. Is it really easier to use a graphical user interface than a command line interface? Our test user is 1 year old, very cute, and she can't type on a keyboard. But she can tell that "A Magazine Is an iPad That Does Not Work".
On a more serious and less populist note, the referenced article Dillon, A. (2003), "User Interface Design", MacMillan Encyclopedia of Cognitive Science, Vol. 4, London: MacMillan, 453-458, states the following:
4.6 Minimizing attentional and cognitive load
Theoretical insights into cognitive architecture emphasize the memory and attentional constraints of humans. These lessons have been learned by the HCI community who argue that interaction sequences should be designed to minimise short term memory load (e.g. not demanding a user choose from an excessive number of menu items; requiring the user to remember numbers or characters from one screen to another, etc.). Since recognition memory is superior to absolute recall, the use of menus is now the norm in design compared to the command line interfaces of the 1980s, which required users to memorise control arguments.
Scientifically valid, of course, but if you put two application experts side by side, one equipped with the GUI and the other with the CLI, the command line user is much faster than the graphical user interface operator. The reason for this is that (1) the CLI doesn't "waste" computer power drawing a GUI and (2) the CLI doesn't provide feedback unless you do things the wrong way. But the cognitive load is still on the CLI operator: she/he needs to know exactly what to type and may also be more skilled than a GUI operator.
As a consequence, SharePoint administration is now moving toward heavy use of PowerShell rather than its GUI equivalent, Central Administration. In the previous version (2010) you could do much more in Central Administration than in the current 2013 version, probably because "you need to know what you're doing" in PowerShell and can't accidentally break something you didn't intend to break.
Two worlds growing further apart. Interesting.
Simply because it requires less mental work :-)
You are referring to a paradigm called WIMP, which was developed at Xerox PARC back in the 1970s.
WIMP (Windows, Icons, Menus, Pointing devices) experimented with a "digital version" of real-life elements, represented by icons. This was very intuitive.
According to Don Norman, from the book 'Design of Everyday Things':
Design must convey the essence of a device’s operation; the way it works; the possible actions that can be taken; and, through feedback, just what it is doing at any particular moment. Design is really an act of communication, which means having a deep understanding of the person with whom the designer is communicating.
Also, he lists seven design principles for transforming difficult tasks into easy ones:
- Use both knowledge in the world and in the head
- Simplify the structure of tasks
- Make things visible
- Get the mappings right
- Exploit the power of constraints, both natural and artificial
- Design for Error
- When all else fails, standardize
So, judging by the above principles, if you compare the command-line UI and the GUI, the GUI wins out.
BTW, one of the reasons why Apple's products were successful, is because they applied these design principles from 'Design of Everyday Things' (Don Norman worked for Apple).
To me, being user friendly means:
- Well debugged and idiot proof (don't allow simple mistakes to crash your program)
- Easy to understand (after writing your program, try using it as if you had never seen it before; don't just assume everyone knows what your program is thinking or talking about)
- Last but not least, contextual help (be sure to provide plenty of help for your users; it is almost a certainty that someone will become confused about something, so make sure your app is well documented)
It has to be familiar and easy for anyone to navigate. If it's confusing and unfamiliar, it won't be user-friendly. It must also do what it is designed to do, and a person has to be able to accomplish what it was made for without struggle.
It is because 1) in humans, visual thinking and language-based thinking are separate (albeit linked) processes, and 2) a mediocre command line interface is much worse than a mediocre GUI.
A command line interface delivers no useful cues to the visual thinking brain areas, rendering half of our usual orientation ability useless. It works with language only. And that language is very weird when compared to normal human languages. It requires high precision, unlike the language communication we are hardwired for. And both the actions of the classic console (giving no useful cues what you are expected to do) and the reactions of the classic console (spitting out a cryptic message, doing nothing or doing the wrong thing when you do something wrong) are not very helpful for a typical human learning process.
So, humans starting to work with a console soon feel frustrated. Command line tools only remain popular with the people who 1) have a very high preference for the verbal channel and its focus on abstract thought and 2) are very good at the specific style of learning known as RTFM (which is the same style needed for learning abstract concepts such as maths, and mostly useless for physical skills such as playing soccer). The preference for this learning style has always been rare.
Note that when there is no need for the user to learn the correct language before they start experiencing some kind of success, the CLI is accepted by everybody. Google search is basically a CLI - but it is gentle. It never says "bad command or file name", and always delivers some results, even if they are worse than what you expected.
Beyond the learnability issue, the CLI is not generally worse than the GUI. The GUI is better for situations where spatial relationships in the information are important (e.g. photo manipulation), and the CLI is better for situations where conditional, temporal, causal and other abstract/logical relationships are important. There is a reason why all programming languages in wide use are word-based, despite lots of effort to develop graphic representations in the form of flow charts, the whole family of UML diagrams and more besides. It is just that this kind of information can be processed through verbal thinking much more successfully than through visual thinking.
But most applications are not extremely focused on manipulating either spatial or abstract relationships, and so we use interfaces which combine visual clues with words, like the traditional GUI which has labelled buttons.
A major ease-of-use area where GUIs tend to have an advantage is that for people it's easier to recognize than to remember. That is, you may not remember a specific command or option, but would easily and immediately recognize it if it were shown to you; so interfaces that show the available options are easier to use, even if that doesn't make them more efficient for people who know/remember them. This is particularly important for systems you use rarely: no matter how much effort you put into learning it, if you use some CLI system twice a year, you are likely to need documentation (or Google) to remember how to do what you successfully did last time; but you would immediately recognize the correct things if the UI had shown them to you.
Another issue is discoverability: common GUI paradigms let users find out what their possibilities are right there in the interface, as opposed to reading documentation outside of it. This is important for ease of learning, since for any even moderately complex software most users will not know all the possibilities, and they can't search for "how to do X" if they don't know that X is possible in the first place.
For an example, contrast standard Unix cd/ls usage with the text-mode GUI of Midnight Commander:
You get pretty much the same functionality, at the cost of extra visual (and mental) space, but with an advantage in discoverability: it preemptively shows the options and filenames you're likely to need, so you can recognize and select them.
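The recall-driven side of that contrast can be sketched in a few plain-shell commands (the directory names here are hypothetical, made up for illustration):

```shell
# Plain shell: nothing is shown until you ask, and every step
# requires recalling (or first listing) the names involved.
mkdir -p /tmp/cli_demo/projects   # set up a hypothetical directory tree
cd /tmp/cli_demo
ls                                # explicit "discovery" step
cd projects                       # you must retype the name you just read
pwd                               # prints: /tmp/cli_demo/projects
```

Midnight Commander keeps the same directory listing permanently visible in a panel, so the separate "discovery" step and the retyping disappear: you just point at `projects` and press Enter.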
The essence of the answer is in recognition vs recall. Most people will be attracted to GUIs because they are cognitively easier to use.
Command line interfaces force you to memorize and recall commands in order to do anything. Even to conjure up a help menu or a list of commands, you must know what to type.
GUIs, on the other hand, present options and only require that you visually scan around and recognize the command you want to execute. ("Oh, there it is. I want that option.")
Recognition is always easier than recall. (Think of multiple-choice tests vs. tests where you must write in the answer.)
Indeed, when GUIs are hard to use, it's often because the desired option is hidden or there are so many options in view that the person looking at the interface has trouble finding the option they want.
Unix is user friendly—it's just picky about its friends.
– Teo van de Bunt
The essence of your question has already been answered, but just a few small notes:
It's not fair to say that graphical user interfaces are universally user-friendly, or even more user-friendly than the equivalent command line interface. There's a reason the command line persists to this day, well after we developed the ability to build GUIs for most problems.
In order for an interface to be user-friendly, it needs to be friendly to its user. As anyone here can tell you, users (despite almost always being human) don't all think alike, and don't share needs.
The Unix philosophy describes the design of most good command-line interfaces. In a nutshell, it's about composability—building small, stateless, discrete tools that can be effectively stitched together in sequence to meet a wide array of different problems—as opposed to contextual design—what most of the UX discipline focuses on.
Some of the key benefits of composable design are:
Portability – many core command line tools haven't been meaningfully updated since they were first written in the 1970s, and yet have been deployed on a huge array of devices, file systems and operating systems relatively trivially.
Power – as stated above, the normal user of the command line is a very different person to your average computer user. By being targeted at system administrators, developers and other (relatively) professional user groups, command line tools are often much more powerful than their graphical equivalents (often going so far as to give you enough rope to hang yourself). The classic quote:
"Unix was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things." – Doug Gwyn
Scriptability – since command line tools all share the same input (textual arguments), it's easy to string them together to do things that would be very difficult to do graphically. This is especially important for solving problems you couldn't have anticipated when you designed the system in the first place.
Compare Google Chrome to a command line tool like cURL. Trying to use a tool like cURL to read and answer questions on your favourite website would be an exercise in frustration. On the other hand, say you wanted to pull out all the Wikipedia pages anyone links to on Stack Exchange each day. Doing so with cURL (plus a little standard text processing) is relatively simple; trying to do it in Chrome would be much harder (if not impossible) without Google themselves (or some extension developer) building that specific functionality directly into the browser.
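As a sketch of the kind of pipeline this describes: the `curl` fetch is shown only in a comment (the URL is illustrative), and a small inline HTML sample stands in for the downloaded page, so the link-extraction step is what's actually demonstrated:

```shell
# In practice the HTML would come from a fetch such as:
#   curl -s 'https://ux.stackexchange.com/questions' | ...
# Here an inline sample replaces the network call.
printf '%s\n' \
  '<a href="https://en.wikipedia.org/wiki/Command-line_interface">CLI</a>' \
  '<a href="https://example.com/other">other</a>' \
  '<a href="https://en.wikipedia.org/wiki/Graphical_user_interface">GUI</a>' |
grep -oE 'https://[a-z]+\.wikipedia\.org/wiki/[A-Za-z_()-]+'
# prints the two Wikipedia URLs and drops the example.com link
```

The same composability means the output can be piped onward (to `sort -u`, a file, a mail command) without the extraction tool knowing or caring.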
Integration – while it may not be immediately obvious to the user, a lot of graphical tools integrate directly into relevant command line tools behind the scenes. Since command line tools take simple textual input as parameters, it's easy to leverage the tool from other, more complex graphical applications where necessary. A hypothetical graphical file transfer application could provide drag-and-drop support, progress meters for file transfer, visual file selection dialogs, etc. and leverage another simpler, command line interface for the actual file transfer itself, converting the abstract concepts of clicking on a location on screen into the actual active command necessary.
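A toy sketch of that integration pattern, with `cp` standing in for the transfer tool (the file names, and the GUI front end itself, are hypothetical):

```shell
# A GUI front end would translate a drag-and-drop gesture into
# exactly this kind of plain textual invocation behind the scenes.
src="/tmp/integration_demo_src.txt"
dest="/tmp/integration_demo_dest.txt"
echo "dragged file contents" > "$src"    # the file the user "dragged"
cp -- "$src" "$dest"                     # the "transfer": a CLI tool with text args
echo "transfer complete: $dest"
```

Because the underlying tool takes only textual arguments, the graphical layer can be swapped out, scripted around, or removed entirely without touching the transfer logic.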
OK, so getting down to the actual question you asked: if a CLI is so powerful, why do we generally design systems with graphical interfaces?
Well, the answer comes down to fitness for purpose, and the especially important point that the average computer user generally doesn't need to worry about things like scriptability, portability and integration. They're best served by systems that abstract the complexity away and present a meaningful, simple representation of the operation that better corresponds to how the user thinks the operation works. After all, a picture is worth a thousand words.
For example, we've all seen the classic Windows file-copy animation, with documents flying from one folder to another, a million times. That simple little animation carries with it a huge amount of information: that files are being moved, where they are coming from, where they are going, and that the operation is still in progress.
Whether or not you believe that the animation is effectively communicating all that, it's important to realise that trying to communicate all those facts in words in a similar amount of space would be largely impossible.
As computers get more powerful, animations like these don't all have to be choreographed in advance like this. Instead they can be responsive to the actions of the user and the operation(s) he or she performs.
Take this animation (from Mac OS X):
GIF from Cult of Mac
These animations are hugely powerful at describing the abstractions that exist in this interface implicitly and naturally.
Performing this operation on the command line is extremely difficult. In this GUI, it's a half-second operation.
Here, the user is identifying the block of text they want to save for later without doing much more than simply pointing to it. The structure, links, typeface, font size and other properties of the text are defined in HTML and CSS, but the user isn't consciously made aware of that fact; they simply see content in the format it makes sense in, and can interact with it directly there.
Once it's selected, they simply drag the text itself to the location they want it (in this case, the desktop), and the user gets to see the text itself—still formatted—moving from where they're taking it (the browser window) to the exact spot they're putting it (on the desktop); no abstract flying document graphic necessary. When the user lets go, an image of a document is created in the spot they dropped the text, but this time the actual text the user dropped there is used to populate the icon itself (unlike the older Windows example, the file isn't visually "blank"). Double-clicking on the newly-created document shows another animation: a smooth transition from the document icon on the desktop to the same document open in a text editor, indicating that the text editor window and the icon on disk are conceptually the same thing (only zoomed in).
I know a lot of the other answers here have already made the claim that the main advantage of graphical interfaces is their discoverability—items can be designed with their affordances made clear. While I agree it's true that GUIs are inherently able to make the supported actions discoverable to the user, I don't agree that discoverability can always be assumed to exist in graphical interfaces (you need only look at the criticism that has sprung up around Windows 8's gesture-driven Charms menu for an example of a GUI where actions aren't inherently apparent at all times).
To me, the biggest advantage graphical interfaces have over CLIs (for average computer users with more usual needs) is that concept of abstraction—of metaphor—that allows complex systems and processes to be boiled down into simple-to-execute interactions that can depend on higher-order thinking to remove the need for explicit descriptions altogether.