How do you know when you have included enough 'intelligence' in a decision support system? Provide 3 examples of user interfaces that support your answer.


Need about 2-3 pages with peer-reviewed references. APA formatted. Attached chapter for the week.

Note: No AI generated content. It gets filtered out in plagiarism check!

USER INTERFACE

To the decision maker, the user interface is the DSS. The user interface includes all the mechanisms by which commands, requests, and data are entered into the DSS as well as all the methods by which results and information are output by the system. It does not matter how well the system performs; if the decision maker cannot access models and data and peruse results, invoke assistance, share results, or in some other way interact with the system, then the system cannot provide decision support. In fact, if the interface does not meet their needs and expectations, decision makers often will abandon use of the system entirely regardless of its modeling power or data availability.

To paraphrase Dickens, it is the most exciting of times for designing user interfaces, and it is the most frustrating of times for designing user interfaces. It is an exciting time because advances in computing technologies, interface design, and Web and mobile technologies have opened a wide range of opportunities for making more useful, more easily used, and more aesthetically pleasing representations of options, data, and information. It is a frustrating time because legacy systems still exist, and there are a wide range of user preferences. Some DSS must be built using technologies that actually limit the development of user interfaces. Others must at least interact with such legacy systems and are therefore limited in the range of options available. In this chapter, the focus will be on the future. However, remember that "the future" may take a long time to get to some installations.

Decision Support Systems for Business Intelligence by Vicki L. Sauter Copyright © 2010 John Wiley & Sons, Inc.



GOALS OF THE USER INTERFACE

The purpose of the user interface is communication between the human and the computer, known as human-computer interaction (HCI). As with person-to-person communication, the goal of HCI is to minimize the amount of incorrectly perceived information (on both parts) while also minimizing the amount of effort expended by the decision maker. Said differently, the goal is to design systems that minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task so that users can avail themselves of the full potential of the system.

Although there has been an active literature on HCI since the 1990s, the actual implementation of that goal continues to be more an "art" than a science. With experience, designers become more attuned to what users want and need and can better provide it through good color combinations, appropriate placement of input and output windows, and generally good composition of the work environment. The key to making the most of this literature is knowing when to apply it: some of the material is pertinent to all user interface design, while other material applies only in certain circumstances. There are, however, some guiding principles, and those will be discussed first.

A prime concern of this goal is the speed at which decision makers can glean available information. Humans have powerful pattern-seeking visual systems. If they focus, humans can perceive as many as 625 separate points in a square inch and thus can take in substantial information. The eyes constantly scan the environment for cues, and the associated brain components act as a massive parallel processor, attempting to understand the patterns among those cues. The visual system includes preattentive processing, which allows humans to recognize some attributes quite quickly, long before the rest of the brain is aware that it has perceived the information. Good user interfaces will exploit that preattentive processing to get the important information noticed and perceived quickly. However, the information is sent to short-term visual processing in our brain, which is limited and is purged frequently. Specifically, short-term visual memory holds only three to nine chunks of information at a time. When new information is available (we see another image), the old information is lost unless it has been moved along to our attention. Hence we lose the information before it is actually perceived. Since preattentive processing is much faster than attentive processing, one goal is to encode important information for rapid perception. If the data are presented well, so that important and informative patterns are highlighted, the preattentive processes will discern the patterns and they will stand out. Otherwise the data may be missed, be incomprehensible, or even be misleading.

The attributes that invoke the preattentive processing include the hue and intensity of the color, the location, the orientation, the form of the object (width, size, shape, etc.), and motion. For example, more intense colors are likely to provoke preattentive processing, especially if those around it are more neutral. Longer, wider images will get more attention, as will variations in the shapes of the items and their being grouped together. However, clutter, too much unnecessary decoration, and an effort to overdesign the interface may actually slow down the perception and therefore work against us.

In addition to making the information quickly apparent, the user interface must be effective. These interfaces must allow users to work in a comfortable way and to focus on the data and the models in a way that supports a decision. Equally important is that the interface must allow these things without causing users frustration and hesitation and without requiring them to ask questions. This requires designers to make navigation of the system clear to ensure that decision makers can do what they need to do easily. It also requires the designers make the output clear and actionable. To accomplish this, designers should organize groups, whether they be menus, commands, or output, according to a well-defined principle, such as functions, entities, or use. In addition, designers should colocate items that belong to the same group. This might mean keeping menu items together or putting results for the same group together on the screen. Output should be organized to support meaningful comparisons and to discourage meaningless comparisons.
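The grouping principle above can be sketched as a simple data structure. This is only an illustration; the group names and menu labels below are invented, not taken from the chapter:

```python
# Hypothetical menu organized by one well-defined principle (function),
# with items that belong together colocated in the same group.
MENU_BY_FUNCTION = {
    "Data": ["Import data", "Refresh data", "Export data"],
    "Models": ["Run forecast", "Run sensitivity analysis"],
    "Reports": ["Summary report", "Detail report", "Share results"],
}

def items_in_group(menu, group):
    """Return the colocated items for one functional group."""
    return menu.get(group, [])

print(items_in_group(MENU_BY_FUNCTION, "Models"))
# -> ['Run forecast', 'Run sensitivity analysis']
```

A decision maker scanning such a menu only has to find the right group and then the right item within it, rather than scanning one long undifferentiated list.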

A third overall principle of interface design is that user interfaces must be easily learned. Designers want the user to master operation of the system and relate to the system intuitively. To achieve this goal, interfaces must be simple, structured, and consistent so that users know what to expect and where to expect it on the screen. A simple and well-organized interface can be remembered more easily. These systems have a minimum number of user responses, such as pointing and clicking, that require users to learn few rules but allow those rules to be generalized to more complexity. Well-designed systems will also provide good feedback to the user about why some actions are acceptable while others are not and how to fix the problem of the unacceptable actions. Such feedback can range from an hourglass icon to show that the system is processing to useful error messages when it is not. Similarly, tolerant systems that allow the user multiple ways to achieve a goal adapt to the user, thereby allowing more natural efforts to make a system perform.

The goal of making the interface easily learned (and thus used) is complicated because every system will have a range of users, from beginners to experts, who have different needs. Beginners will need basic information about the scope of a program or specifics about how to make it work. Experts, on the other hand, will need information about how to make the program more efficient, with automation, shortcuts, and hot keys, and the boundaries of safe operation of the program. In between, users need reminders on how to use known functions, how to locate unfamiliar functions, and how to understand upgrades. All of these users rely not only on the information available with the user interface but also on the feedback that the system provides to learn how to use the system. Feedback that helps the users understand what they did incorrectly and how to adjust their actions in the future is critical to learning. Not only must the feedback be provided, but also it must be constructive, helping the user to understand mistakes, not to increase his or her frustration. It should provide clear instructions about how to fix the problem.
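The kind of constructive feedback described above can be sketched in a few lines: the message names the mistake and tells the user how to correct it, rather than merely rejecting the input. The validation rule, year range, and message wording here are all invented for illustration:

```python
def validate_date_range(start_year, end_year, min_year=2000, max_year=2010):
    """Check a user-entered year range and return (ok, message).

    The feedback explains what was wrong AND how to correct it,
    instead of just refusing the input.
    """
    if start_year < min_year or end_year > max_year:
        return (False,
                f"Data are only available for {min_year}-{max_year}. "
                f"Please choose start and end years within that range.")
    if start_year > end_year:
        return (False,
                "The start year is after the end year. "
                "Please swap the two years or adjust one of them.")
    return (True, "Range accepted.")

print(validate_date_range(2005, 2003)[1])
```

Contrast this with a bare "Invalid input" message, which tells the user neither what was wrong nor what to do next.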

Finally, usable systems are ones that satisfy the user's perceptions, feelings, and opinions about the decision. Norman (2005) says that this dimension is affected significantly by aesthetics. Specifically, he says that systems that are more enjoyable make users more relaxed and open to greater insight and creative response. The user interface should not be ugly and should fit the culture of the organization. Designers should avoid "cute" displays, unnecessary decoration, and three-dimensional images because they simply detract from the main effort. Cooper (2007) believes that designing harmonious, ethical interactions that improve human situations and are well behaved is critical to satisfying user needs. Cooper (2007, p. 203) provides some guidance about creating harmonious interactions with the following:

• Less is more.
• Enable users to direct; don't force them to discuss.
• Design for the probable; provide for the possible.
• Keep tools close at hand.
• Provide feedback.
• Provide for direct manipulation and graphical input.
• Avoid unnecessary reporting.
• Provide choices.
• Optimize for responsiveness; accommodate latency.

By "ethical," Cooper (2007, p. 152) means the design should do no harm. He identifies the kinds of harm frequently seen in systems that should be avoided in DSS design as follows:

• Interpersonal harm with insults and loss of dignity (especially with error messages)
• Psychological harm by causing confusion, discomfort, frustration, or boredom
• Social and societal harm with exploitation or perpetuation of injustice

Cooper (2007, p. 251) also provides guidance about designing for good behavior when he notes that products should:

• Personalize user experience where possible
• Be deferential
• Be forthcoming
• Use common sense
• Anticipate needs
• Not burden users with internal problems with operations
• Inform
• Be perceptive
• Not ask excessive questions
• Take responsibility
• Know when to bend the rules

Throughout the chapter, we will discuss the specifics of these overriding principles of user interface design. The primary goal is to design DSS that make it easy and comfortable for decision makers to consider ill-structured problems, understand and evaluate a wide range of alternatives, and make a well-informed choice.

MECHANISMS OF USER INTERFACES

In addition to understanding the principles of good design, it is important to review the range of mechanisms for user interfaces that exist today and those mechanisms that are coming in the near future. Everyone is familiar with the keyboard and the mouse as input devices and the monitor as the primary output device. Increasingly, users are relying upon portable devices. Consider, for example, the pen-and-gesture-based device shown in Figure 5.1. Information is "written" on the device and saved using handwriting and gesture recognition. This allows the device to go where the decisions are, such as an operating room, and to provide flexible support. Or the user might rely upon a mobile phone, with a much smaller screen, such as the ones shown in Figure 5.2. These mobile devices have a substantially smaller screen yet much higher resolution. On the other hand, if the decision makers will include a group, they might rely upon wall systems to display their output, such as those shown in Figure 5.3. These large screens may have lower resolution. Designing an interface for anything from a 5 in. x 3 in. screen with gestures and handwriting recognition to one that might take an entire wall and use only voice commands is a challenging proposition. User interfaces are, however, becoming even more complicated to design. Increasingly, virtual reality is becoming more practical for DSS incorporation, so your system might include devices such as those shown in Figure 5.4 or even something like the Wii device shown in Figure 5.5.

Figure 5.1. Pen-based system. HP Tablet. Photo by Janto Dreijer. Available at http://www.wikipedia.com/File:Tablet.jpg, used under the Creative Commons Attribution ShareAlike 3.0 License.

Figure 5.2. Mobile phones as input and output devices.

Figure 5.3. Wall screens as displays. Ameren UE's Severe Weather Centre. Photo reprinted courtesy of Ameren Corporation.

The future will bring both input and output devices that are increasingly different from the keyboard and the monitor that we rely upon today. Consider the device shown in Figure 5.6, which was developed in the MIT Media Laboratory. The device is a microcomputer. It includes a projector and a camera as two of the input/output devices. This device connects with the user's cell phone to obtain Internet connectivity. The decision maker can use his or her hands, as the user is doing in the photograph, to control the computer. The small bands on his hands provide a way for the user to communicate with the camera and thus the computer. This projection system means that any surface can be a computer screen and that one may interact with the screen using just one's fingers, as shown in Figure 5.7. In this figure, the user is selecting from menus and beginning his work. You can integrate these features into any activity. Notice how the user in Figure 5.8 has invoked his computer to supplement the newspaper article with a video from a national news service. Or, the decision maker can get information while shopping. Figure 5.9 shows a person who is considering purchasing a book in a local bookstore. Among the various kinds of information considered is the Amazon rating and Amazon reviews pulled up from his computer. Notice how they are projected on the front of the book (about halfway down the book cover).

It is important to think creatively about user interfaces to be sure that we provide the richest medium that will facilitate decision making. Different media require different designs, and there is not a "one size fits all." It is important to think of the medium as a tool, to let context drive the design, and to customize for a specific platform. The general principles of this chapter will help readers evaluate the needs of the user and the medium. Most of the examples, however, will focus on current technologies.

Figure 5.4. Virtual reality devices. Ames-developed (Pop Optics), now at the Dulles annex of the National Air and Space Museum. Source: http://gimp-savvy.com/cgi-bin/ing.cgi7ailsxmzVn080jE094, used under the Creative Commons Attribution ShareAlike 3.0 License.

Figure 5.5. A Wii device. Wii remote control. Image from http://en.wikipedia.org/wiki/File:Wiimote-lite2.jpg, used under the Creative Commons Attribution ShareAlike 3.0 License.


Figure 5.6. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student, (pictured).

Figure 5.7. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Lynn Barry. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student (pictured).

DSS in Action: FRIEND

The FRIEND system is an emergency dispatch system in the Bellevue Borough, north of Pittsburgh, Pennsylvania. This system, known as the First Responder Interactive Emergency Navigational Database (FRIEND), dispatches information to police using hand-held computers in the field. The hand-held devices are too small to support keyboards or mice. Rather, police use a stylus to write on the screen or even draw pictures. These responses are transmitted immediately to the station for sharing. Police at the station can use a graphical interface or even speech commands to facilitate the sharing of information with members in the field.


Figure 5.8. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student.

USER INTERFACE COMPONENTS

We must describe the user interface in terms of its components as well as its mode of communication, as in Table 5.1. The components are not independent of the modes of communication. However, since they each highlight different design issues, we present them separately—components first.

Figure 5.9. MIT Media Lab's view of user interface device. Demonstration of the Sixth Sense Project of the MIT Media Lab. Photo taken by Sam Ogden. Photo reprinted courtesy of the MIT Media Laboratory, P. Maes, Project Director, and P. Mistry, Doctoral Student.


Table 5.1. User Interfaces

User interface components
• Action language
• Display or presentation language
• Knowledge base

Modes of communication
• Mental model
• Metaphors and idioms
• Navigation of the model
• Look

Action Language

The action language identifies the form of input used by decision makers to enter requests into the DSS. This includes the way by which decision makers request information, ask for new data, invoke models, perform sensitivity analyses, and even request mail. Historically, five main types of action languages have been used, as shown in Table 5.2.

Menus. Menus, the most common action language today, display one or more lists of alternatives, commands, or results from which decision makers can select. A menu provides a structured progression through the options available in a program to accomplish a specific task. Since they guide users through the steps of processing data and allow the user to avoid knowing the syntax of the software, menus often are called "user friendly." Menus can be invoked in any number of ways, including selecting specific keys on a keyboard, moving the mouse to a specific point on the screen and clicking it, pointing at the screen, or even speaking a particular word(s).

In many applications, menus exist as a list with radio buttons or check boxes on a page. Or the menu might be a list of terms over which the user moves the mouse and clicks to select. Or the menu might actually exist as a set of commands in a pull-down menu such as seen in the menu bar. As most computer users today are aware, you can invoke the pull-down menu by clicking on one of the words or using a hot-key shortcut. When this is done, a second set of menus is shown below the original command, as illustrated with the Analytica menu bar shown in Figure 5.10.

Menus and menu bars should not be confused with the toolbars available on most programs. In Figure 5.10, the toolbar is the set of graphical buttons shown immediately below the menu bar. They might also show up as part of the "ribbon bar" that Microsoft has built into its 2007 Access, shown in Figure 5.11. These toolbars provide direct access to some specific component of the system. They do not provide an overview of the capabilities and operation of a program in the way that menus do but rather provide a shortcut for more experienced users.

Table 5.2. Basic Action Language Types

• Menu format
• Question-answer format
• Command language format
• Input/output structured format
• Free-form natural language format


Figure 5.10. One form of a menu. Menu from Analytica. Used with permission of Lumina Decision Systems.

Menu formats guide the user through the necessary steps with a set of pictures or commands that are easy for the user to understand. In this way, the designer can illustrate for the user the full range of analyses the DSS can perform and the data that can be used for analysis. Their advantage is clear: if the menus are understandable, the DSS is very easy to use; the decision maker is not required to remember how it works and only needs to make selections on the screen. The designer can allow users keyboard control (either arrow keys or letter key combinations), mouse control, light pen control, or touch screen control.

Menus are particularly appealing to inexperienced users, who can thereby use the system immediately. They may not fully understand the complexity of the system or the range of modeling they can accomplish, but they can get some results. The menu provides a pedagogical tool describing how the system works and what it can do. Clearly this provides an advantage. In the same way, menu formats are useful to decision makers who use a DSS only occasionally, especially if there are long intervals between uses. Like the inexperienced user, these decision makers can forget the commands necessary to accomplish a task and hence profit by the guidance the menus can provide.

Menu formats tend not to be an optimal action language choice for experienced users, however, especially if these decision makers use the system frequently. Such users can become frustrated with the time and keystrokes needed to process a request when other action language formats can allow them access to more complex analyses and more flexibility. This will be discussed in more depth under the command language.

Figure 5.11. A "ribbon bar" as a menu. Microsoft's "Ribbon" in Excel 2007 from http://en.wikipedia.com/wiki/File:office2007vibbon.png. Used under the Creative Commons Attribution ShareAlike 3.0 License.


Figure 5.12. Independent command and object menus.

The advantage of the menu system hinges on the understandability of the menus. A poorly conceived menu system can make the DSS unusable and frustrating. To avoid such problems, designers must consider several features. First, menu choices should be clearly stated. The names of the options or the data should coincide with those used by the decision makers. For example, if a DSS is being created for computer sales and the decision makers refer to CRTs as "screens," then the option on the menu ought to be "screen" not "CRT." The latter may be equivalent and even more nearly correct, but if it is not the jargon used by decision makers, it may not be clear. Likewise, stating a graphing option as "HLCO," even with the descriptor "high-low-close-open," does not convey sufficient information to the user, especially not to a novice or inexperienced user.

A second feature of a well-conceived menu is that the options are listed in a logical sequence. "Logical" is, of course, defined by the environment of the users. Sometimes the logical sequence is alphabetical or numerical. Other times it is more reasonable to group similar entries together. Some designers like to order the entries in a menu according to the frequency with which they are selected. While that can provide a convenience for experienced users, it can be confusing to the novice user, who is, after all, the target of the menu and may not be aware of the frequency of responses. A better approach is to preselect a frequently chosen option so that users can simply press return or click a mouse to accept that particular answer. Improvements in software platforms make such preselection easier to implement, as we will discuss later in the chapter.
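Preselecting a frequently chosen option, as suggested above, amounts to treating an empty response as acceptance of a default. A minimal sketch (the option names are hypothetical):

```python
def prompt_with_default(options, default):
    """Build a chooser where the most frequently selected option is
    preselected; pressing return (empty input) accepts the default."""
    assert default in options
    def choose(user_input):
        # Empty input means the user accepted the preselected option.
        return default if user_input == "" else user_input
    return choose

choose_graph = prompt_with_default(["bar", "line", "scatter"], default="line")
print(choose_graph(""))     # user just pressed return -> "line"
print(choose_graph("bar"))  # an explicit choice overrides the default
```

This keeps the menu friendly for novices (one keystroke accepts the common case) without hiding the other options from anyone.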

When creating a menu, designers need to be concerned about how they group items together. Generally, the commands are in one list, and the objects of the commands¹ are in an alternate list, as shown in Figure 5.12. Of course, with careful planning, we can list the commands and objects together in the same list, as shown in Figure 5.13, and allow users to select all attributes that are appropriate.
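As a rough sketch (with invented command and variable names), the separate command and object menus of Figure 5.12 can be modeled as two lists, with a request formed by pairing one selection from the first with any number of selections from the second:

```python
COMMANDS = ["graph", "summarize", "forecast"]   # command menu (hypothetical)
OBJECTS = ["sales", "price", "advertising"]     # object (data) menu (hypothetical)

def build_request(command, objs):
    """Pair one command with the data objects it should act on,
    rejecting selections that are not on either menu."""
    if command not in COMMANDS:
        raise ValueError(f"Unknown command: {command}")
    unknown = [o for o in objs if o not in OBJECTS]
    if unknown:
        raise ValueError(f"Unknown data: {unknown}")
    return {"command": command, "objects": list(objs)}

print(build_request("graph", ["sales", "price"]))
```

A combined menu, as in Figure 5.13, would collapse the two lists into one and leave it to the selection logic to sort commands from objects.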

In today's programming environment, designers tend not to combine command and object menus. The primary reason to combine them in the past was to save input time for the user, since each menu represented a different screen that needed to be displayed. Display changes could be terribly slow, especially on highly utilized, old mainframes. The trade-off between processing time and grouping options together seemed reasonable. For most programming languages and environments, that restriction no longer holds: several menus on the same screen can all be accessed by the user. Furthermore, most modeling packages allow a user several options, depending upon earlier selections. If these were all displayed in a menu, the screen could become quite cluttered and not easy for the decision maker to use.

¹The "objects of the commands" typically refer to the data that should be selected for the particular command invoked.

Figure 5.13. Combined command and object menu.

An alternative is to provide menus that are nested in a logical sequence. For example, Figure 5.14 demonstrates a nested menu that might appear in a DSS. All users would begin the system use on the "first-level" menu. Since the user selected "graph" as the option, the system displays the two options for aggregating data for a graph: annually and quarterly.

Figure 5.14. Nested menu structure.


Note that this choice is provided prior to and independent of the selection of the variables to be graphed so that the user cannot inadvertently select the x axis as annual and the y axis as quarterly data (or vice versa).

The "third-level" menu item allows the users to specify what they want displayed on the y axis. While this limits the flexibility of the system, if carefully designed, it can represent all options needed by the user. Furthermore, it forces the user to declare what should be the dependent variable, or the variable plotted on the y axis, without using traditional jargon. This decreases the likelihood of misspecification of the graph.

The "fourth-level" menu is presented as a direct response to the selection of the dependent variable. That is, because the decision maker selected La Chef sales, the system "knows" that the only available and appropriate variables to present on the x axis are price, advertising, and the competitor's sales. In addition, the system "knows" that the time dimension for the data on the x axis must be consistent with that on the y axis and hence displays "quarterly" after the only selection that could be affected. Note that the system does not need to ask how users want the graph displayed because it has been specified without the use of jargon.

Finally, the last menu level allows the users the option of customizing the labeling and other visual characteristics of their graphs. Since the first option, standard graph, was selected, the system knows not to display the variety of options available for change. Had the user selected the customize option, the system would have moved to another menu that allows users to specify what should be changed.
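The context-sensitive filtering in the nested menu of Figure 5.14 can be sketched as a small rule table: the options offered at each level depend on the selections already made. The variable names follow the chapter's La Chef example, but the rule table itself (which x-axis variables suit each y-axis choice, and which carry their own time dimension) is invented for illustration:

```python
# Which x-axis variables are appropriate for each y-axis selection.
X_AXIS_OPTIONS = {
    "La Chef sales": ["price", "advertising", "competitor's sales"],
}
# Variables that carry their own time dimension and so must be tagged
# with the aggregation chosen at an earlier menu level (assumption).
TIME_DIMENSIONED = {"competitor's sales"}

def x_axis_menu(y_variable, aggregation):
    """Return only the x-axis options consistent with the earlier
    y-axis and aggregation choices, tagging time-dimensioned ones."""
    options = X_AXIS_OPTIONS.get(y_variable, [])
    return [f"{opt} ({aggregation})" if opt in TIME_DIMENSIONED else opt
            for opt in options]

print(x_axis_menu("La Chef sales", "quarterly"))
```

Because inconsistent pairings are never offered, the user cannot, for example, plot annual data against quarterly data by mistake.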

In early systems, designers needed to provide menu systems that made sense in a fairly linear fashion. While they could display screens as a function of the options selected to that point, such systems typically did not have the ability to provide "intelligent" steps through the process. Today's environments, which typically provide some metalogic and hypertext functionality as well as some intelligent expertise integrated into the rules, can provide paths through the menu options that relieve users of unnecessary stops along the way.

Depending upon the programming environment, the menu choices might have the check boxes or radio buttons illustrated in Figure 5.12, underscores, or simply a blank space. The system might allow the user to pull down the menu or have it pop up with a particular option. Indeed, in some systems, users can click the mouse on an iconic representation of the option. These icons are picture symbols of familiar objects that can make the system appear friendlier, such as a depiction of a monthly calendar for selecting a date.

Ideally the choice from among these options is a function of the preferences of the system designers and users. In some cases, the choice will be easy because the programming environment will support only some of the options. In other cases, multiple options are allowed, but the software restricts the meaning and uses of the individual options. For example, in some languages, the check box will support users selecting more than one of the options whereas the radio button will allow users to select only one. Before designing the menus, designers need to be familiar with the implications of their choices.
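The radio-button versus check-box distinction above reduces to a single-selection versus multiple-selection rule, which can be sketched in a few lines (widget behavior only; no particular GUI toolkit is assumed):

```python
class RadioGroup:
    """Radio buttons: selecting an option deselects the previous one."""
    def __init__(self, options):
        self.options = options
        self.selected = None
    def select(self, option):
        assert option in self.options
        self.selected = option          # replaces any prior choice

class CheckboxGroup:
    """Check boxes: each option toggles on or off independently."""
    def __init__(self, options):
        self.options = options
        self.selected = set()
    def toggle(self, option):
        assert option in self.options
        self.selected ^= {option}       # add if absent, remove if present

radio = RadioGroup(["annual", "quarterly"])
radio.select("annual"); radio.select("quarterly")
print(radio.selected)                   # only the last choice sticks

boxes = CheckboxGroup(["sales", "price", "advertising"])
boxes.toggle("sales"); boxes.toggle("price")
print(sorted(boxes.selected))           # multiple choices allowed
```

Choosing the wrong widget for the selection rule is exactly the kind of implication a designer needs to understand before building the menus.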

However the options are displ
