Wednesday, May 12, 2010
Dear Manoj,
I have done every single blog with the EXCEPTION OF THE LATEST 5 that were conference papers. I did, however, do all of the book blogs because I actually read those.
I do NOT have a blog for HCI Remixed because I didn't know if that was required or not.
Good luck with all that grading!
Chris
Thursday, April 8, 2010
Opening Skinner's Box
This book detailed ten of the more famous psychological experiments conducted in the 1900s. Each chapter is devoted to one such experiment, and each one had a profound effect on future research and our lives today (even if we don't realize it). The opening chapter is about B.F. Skinner and his conditioning boxes, wherein he used positive reinforcement to train rats to press levers (that's Skinner in the first picture). The author, Lauren Slater, did her best to get background information on each of the experimenters, sometimes tracking down family members and old research partners. It is this extra information that makes the book so enjoyable. I actually think that Slater is a bit crazy, but her personal insight brings character and feeling to what I would normally view as dispassionate scientific research. In some cases, such as with Bruce Alexander's Rat Park, Slater even attempted to carry out her own related research. In this case, she did a bunch of drugs to try and get herself addicted... which didn't work out for her (luckily).
The one thing that I found strange about this book was the fact that each chapter moves through a cyclical loop of introspection and explanation. She starts with musings and opinions, moves to facts, and then returns to musings. It's almost as if Slater trails off with her own thoughts about the experiments and forgets that she is relating them to the reader. Each of the ten experiments even related to CHI in some way! It's important to consider the psychological effects of computing instead of just the technical ones.
Anyway, it's a great book! It reads like a fictional story instead of a look at experiments.
The Inmates Part 2
- Personas - detailed, fictional customers created by the designers. Each persona should accurately reflect a certain demographic of customers. The designers should focus on meeting the needs of a set of personas instead of trying to please everyone. It is better for 10% of people to love your product than for 100% of people to merely tolerate it.
- Goals - set things that customers want to accomplish. Goals are not tasks. Tasks are steps that must be undertaken to meet end goals. Creating realistic goals for your personas to carry out can help you define what your product should do.
- Scenarios - situations in which personas need to meet their goals. Again, creating reasonable scenarios can help you design a product that is both functional and helpful. If in the course of running a scenario your persona cannot accomplish their goals, then you should modify your product. (A rough sketch of how these three ideas fit together follows this list.)
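To make the relationship between personas, goals, and scenarios concrete, here is a minimal Python sketch. The class names, fields, and the example persona are my own illustration of Cooper's ideas, not anything from the book.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """An end state the persona cares about -- not a task."""
    description: str

@dataclass
class Persona:
    """A specific, fictional customer the design targets."""
    name: str
    demographic: str
    goals: List[Goal] = field(default_factory=list)

@dataclass
class Scenario:
    """A concrete situation in which a persona pursues its goals."""
    persona: Persona
    situation: str

    def walk_through(self, can_accomplish) -> bool:
        # can_accomplish(persona, goal) encodes whether the current design
        # lets this persona reach the goal in this situation.
        return all(can_accomplish(self.persona, g) for g in self.persona.goals)

# Example: if any goal fails in a reasonable scenario, revise the product.
alice = Persona("Alice", "frequent business traveler",
                [Goal("check in for a flight in under a minute")])
commute = Scenario(alice, "standing in a taxi line with one free hand")
```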
Thursday, April 1, 2010
Image Recognition for Intelligent Interfaces
Professor Trevor Darrell from UC Berkeley was the invited speaker at the 2008 IUI conference. As such, his paper is just an abstract. Seriously, as you read this sentence you might be reading the same amount as what is in his abstract anyway. He mentions that new advances in image recognition have made image-based interfaces a viable alternative to current interfaces that try to analyze physical objects. He then goes into the various parts of the problem he will discuss.
Discussion:
That's about it. I'm assuming that he then started talking about his abstract. Thank you, Manoj, for the shortest paper ever.
Wednesday, March 17, 2010
Video Object Annotation, Navigation, and Composition
Authors:
Dan B Goldman and David Salesin (Adobe Systems, Inc.). Chris Gonterman, Brian Curless, Steven M Seitz (University of Washington)
Summary:
Objects in a video are... objects in a video. Characters, props, cars, animals, and so on can all be objects. Most video editing software is concerned with timelines and frames, even though objects are what people actually care about. Being able to tag an object and have it tracked across frames would greatly speed up the video editing process (no splicing together stills to get your point across), and that's just what the authors of this paper are working on. They focus on the annotation, navigation, and composition of videos in an object-focused way. To accomplish these tasks, videos are preprocessed and low-level motion tracking is employed to determine what objects are in the video.
Annotation deals with adding graphics (such as text bubbles, highlights, outlines, etc.) to moving objects. Uses include sports broadcasting, surveillance videos, and post-production notation for professionals. The five annotations that they implemented were graffiti, scribbles, speech balloons, path arrows, and hyperlinks.
For navigation, the new system allows a user to select an object and drag the mouse to a new location on the screen. Once released, the video will move to a time when that object is close to that release point, thus computing video cuts for the user. The system visualizes ranges of motion for an object by placing a "starburst widget" on it which uses vectors to indicate the length and direction of motion that the object undergoes forward and backward in time.
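As a rough illustration of the drag-to-navigate idea (not the authors' actual implementation), suppose the preprocessing step yields a tracked (x, y) position for the selected object in every frame. Seeking then reduces to finding the frame whose tracked position is closest to where the user released the drag:

```python
import math

def frame_for_release_point(track, release_xy):
    """track: list of (x, y) object positions, one per frame (assumed to come
    from the preprocessing / motion-tracking step).
    Returns the index of the frame where the object sits nearest to the point
    where the user released the drag, i.e., where the video should cut to."""
    rx, ry = release_xy
    return min(range(len(track)),
               key=lambda f: math.hypot(track[f][0] - rx, track[f][1] - ry))

# target_frame = frame_for_release_point(object_track, (420, 310))
```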
Video-to-still composition is all about splicing together images from the video to create a single composition. The authors use a drag-and-drop system to move selected objects forward or backward through frames until the object is where it is wanted. All other objects in the frame remain frozen in place until they are directly selected and subsequently manipulated. In this way, a composite image can be created that has each object exactly where the user wants it to be.
Discussion:
Awesome stuff... except it takes 5 minutes PER FRAME to preprocess the video! That's an epic amount of time (and that's for 720 x 480 resolution). If they can speed that up, then they are golden. You should check out the paper for yourself!
Tuesday, March 16, 2010
Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces
Chris Harrison, Scott E. Hudson (Carnegie Mellon University)
Summary:
Scratch Input is a sensor that detects the sound of a fingernail being dragged across different surfaces. The point of such a sensor is to turn almost any solid surface into a finger input device (gesture recognition). The device (built into a modified stethoscope) is small enough to be added to mobile devices. The sensor can be placed on any solid surface and picks up the distinctive high frequency of scratching (listed as 3,000 Hz or more). In this paper, the authors also go over some examples of when Scratch Input could be useful. In one, a cell phone equipped with the device is resting on a table; when an incoming call is received, the user performs a certain gesture on the tabletop and the phone takes the call on speakerphone. Another example involved placing the device on a wall and using different gestures on said wall to manipulate the playback of music. The authors found that, while testing on tables during a user study, people were able to perform a set of six gestures with an average accuracy of 89.5%. They concluded that their product is both accurate and easy to use.
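The paper's actual signal processing isn't reproduced here, but the core trick is that scratching produces mostly high-frequency energy. A minimal sketch of that detection idea follows; the 3 kHz figure comes from the paper, while the windowing and the 0.6 threshold are arbitrary values I picked for illustration.

```python
import numpy as np

def looks_like_scratch(samples, sample_rate=44100, cutoff_hz=3000, threshold=0.6):
    """Return True if most of the energy in this audio window sits above the
    ~3 kHz band associated with fingernail scratching. 'threshold' is an
    illustrative value, not one taken from the paper."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    high_energy = spectrum[freqs >= cutoff_hz].sum()
    return high_energy / (spectrum.sum() + 1e-12) >= threshold
```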
Discussion:
These guys seem to like making cheap little gadgets (my previous blog was over another such product). I wonder what it is that drives them to do this kind of research and development? Anyway, just like their last paper, this seems like a cool idea. Being super super lazy and just scratching or tapping on the wall or a table to get stuff done would rock! If I want my computer to start torrenting episodes of Archer, all I have to do is sketch out a big A. If I want my cell phone to call Dominos, but I don't want to have to reach over and pick it up, I can draw a D and then yell my order at the phone. It's every lazy person's dream!
Monday, March 8, 2010
Lightweight Material Detection for Placement-Aware Mobile Computing
Chris Harrison, Scott E. Hudson (Carnegie Mellon University)
Summary:
Placement-awareness would allow mobile devices to take certain actions without being explicitly told to do so (the authors give the example of a cell phone silencing itself while its owner is at work). In this paper, both cell phones and laptops are used to demonstrate the potential of a new sensor that observes its surroundings to determine the placement of its operating device. A user could map certain materials to locations, and the multispectral sensor could then predict its location by comparison (figure 1).
After giving some examples of use, the authors discuss how the sensor works and what it is made of. To combat situations where no light reaches the bottom of the device, the sensor is equipped with different LEDs that can be reflected off of the resting surface. It does its detection in seconds, and costs less than a dollar to manufacture. With the help of a naive Bayes classifier, the sensor learns which materials correspond to which locations with 86.6% accuracy (which, they say, is much better than anything else on the market).
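The paper uses a naive Bayes classifier over the multispectral readings; the sketch below shows the same idea with scikit-learn's GaussianNB. The number of LED channels, the reflectance values, and the location labels are all made up for illustration.

```python
from sklearn.naive_bayes import GaussianNB

# Each row is one reading: reflected intensity for each LED wavelength
# (values invented purely for illustration).
readings = [
    [0.82, 0.65, 0.40, 0.22],   # wooden desk
    [0.15, 0.12, 0.10, 0.08],   # fabric couch
    [0.95, 0.90, 0.88, 0.85],   # laminate counter
]
locations = ["office", "living room", "kitchen"]

classifier = GaussianNB().fit(readings, locations)
print(classifier.predict([[0.80, 0.66, 0.42, 0.20]]))   # -> ['office']
```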
Discussion:
This sensor has potential, but it definitely needs to have an override added to it. If for some reason the sensor thinks that you're in a location that you aren't, then you're going to get pwned. But saving energy, as they discussed in the paper, is a definite plus. I see a company catching on and then charging beastly fees for this $1 sensor (that's the way the world goes round, aka Microsoft).
Towards More Paper-like Input: Flexible Input Devices for Foldable Interaction Styles
David Gallant, Andrew Seniuk, and Roel Vertegaal (Queen's University)
Summary:
In this paper, the authors discuss their newest Foldable Input Device (FID). FIDs are an example of Foldable User Interfaces (FUIs), which combine traditional GUIs with the tangibility of paper inputs. In order to create their FID, the authors put 25-35 IR retro-reflectors onto a sheet of cardstock paper (think of it as your typical mouse pad). The IR reflectors are tracked with an augmented webcam that is attached to a computer running Windows XP. Via OpenGL, the motions and manipulations of the FID can be displayed in real time. The digital counterpart to the physical FID is known as a Sheet. Sheets respond to the manipulations of the FID, and multiple Sheets can be controlled by a single FID. Figure 1 shows some of the FID interaction techniques.
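The paper doesn't spell out its tracking math, but as a hedged illustration of how folds might be noticed from tracked reflector positions: assuming the webcam pipeline reports 2D coordinates for the corner reflectors, one crude cue is that opposite corners move toward each other when the sheet folds.

```python
import math

def fold_amount(corner_a, corner_b, flat_distance):
    """Fraction by which two opposite corner reflectors have closed in on each
    other compared with the flat (unfolded) sheet: ~0 means flat, ~1 means
    fully folded. Purely illustrative, not the authors' tracking pipeline."""
    current = math.dist(corner_a, corner_b)
    return max(0.0, 1.0 - current / flat_distance)

# e.g., treat anything above 0.5 as a "fold" gesture on the matching Sheet:
# if fold_amount(tracked_corners[0], tracked_corners[3], FLAT_DIAGONAL) > 0.5:
#     flip_page()
```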
Discussion:
This paper basically went over the different techniques of the new FID as opposed to user tests or any specific applications. The authors do, however, provide quite a lot of enthusiasm for all of the things you can do (even if they aren't sure if anyone would want to use them). I think that the idea behind the input is really cool, but cannot really see myself using a paper input in such a way unless a webpage was structured like a book and I needed to flip through it. For future research, they should find out just how much people care for their FID (no offense guys!).
Multimodal Interfaces for Automotive Applications (MIAA)
Multimodal interactions allow for natural, intuitive design. If a person can physically touch and manipulate an object by hand, then they need not try and learn how to map these natural movements to those of a mouse, keyboard, or some other input device. Here the authors are concerned with applying multimodal interfaces to cars, where inputs and outputs are limited and often time-constrained. As cars continue to incorporate more advanced features that require the integration of technology, drivers must be able to interact with them with minimal difficulty and concentration (they're driving!).
In their workshop, the authors express an interest in multi-party systems wherein passengers and drivers fill different roles that give them access to different features and devices. They also mention voice commands and dialogue as a form of input, and having multiple unique outputs for each driver/passenger role. MIAA is focused on user-centered design.
Discussion:
This paper makes the workshop sound awesome. I didn't really notice the fact that almost all inputs and outputs in a vehicle are tailored toward the driver and are therefore within their reach (which according to Donny Norman is a design issue). Some minivans and SUVs give air conditioning or entertainment controls to passengers, but not in such a way as this paper implied. Keeping functions that are not critical to the driver's role away from the driver makes sense to me! But just what kinds of multimodal inputs would they seek to incorporate other than voice?
Monday, March 1, 2010
Emotional Design: A Study of Being Boring
Your book, "Emotional Design", is boring. I struggled through the first six chapters over the course of two weeks, unable to bring myself to really concentrate. I arrived in chapter seven only to find you jabbing at Isaac Asimov's epic and award-winning science fiction writing (of which I have 26 books). Asimov endorsed "The Design of Everyday Things", a previous work of yours that I actually enjoyed. I find it rather low of you to critique his writing style now that he has died. I find that Asimov's works reflect creativity, style, and expansion; in "Emotional Design" you seem to be restating the obvious in loops of examples, striving to make a book out of a few chapter's worth of material. I sincerely hope that when I read "The Design of Future Things" I am not tempted to put it in the toaster as I was with this book.
Sincerely,
Chris
Now that that's cleared up, allow me to explain the book itself. If you ignore the blatant contradictions in his attitude (contrast this with his previous book) and his examples (robots must not look like people but must look like people), Norman makes some key observations about our emotions and emotional design that we may take for granted. He breaks design into three levels: visceral (it appeals to basic, natural impulses), behavioural (where use and function are key), and reflective (it projects a certain meaning or message). Behavioural is essentially the idea behind his previous book that we read for class. Norman continues by discussing the need for emotional design and its benefits. When people connect with something, then they will excuse some of its shortcomings. When people trust something, they accept it and are faithful to it. When things are designed with people and their emotional responses in mind, everybody wins. He concludes with some ideas about robots and incorporating emotions into them so that they might learn to care about their tasks and have a sense of awareness.
P.S.
Did you even read Asimov's books? At one point you say that "...he never had people and robots working together as a team." That is the exact premise of all three original novels in The Robot Series! Literally, a human and a robot are partners. In other novels, people LOVE robots. I mean LOVE THEM like some creepy Japanese men love them, even! You also say that robots don't work together in his novels. In "The Naked Sun", the entire planet is literally run by robots. And in the latter novels of the Foundation Series, robots are even part of all-encompassing networks of organisms similar to the one mentioned in your book. Seriously, Donny... don't hide facts just to make a point in your book. That's low.
Thursday, February 25, 2010
User-Oriented Document Summarization through Vision-Based Eye-Tracking
In this paper, the authors present their new algorithm for document summarization. Their prototype uses eye-tracking to determine which words in a document are focused on the longest by a reader. The algorithm then predicts how long users would focus on other words based on semantic similarity. Basically, their algorithm predicts what sentences the user is most likely to focus on, and then ranks them in decreasing order so as to present the "most important" information for that specific user first. What sets this apart from other summarization techniques is the fact that the documents are broken down into components (word by word) that are then combined to rank sentences and paragraphs.
Eye tracking sample. This person looks at weird things.
While testing their new algorithm, the authors continually beat MS Word AutoSummarize and MEAD (an open source summarization package) in both Precision and Recall. In the future, they hope to improve their algorithm so that documents a user has never read can be successfully summarized (currently, it only works if the user is going back over something previously read).
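As a toy version of the paper's word-to-sentence scoring idea (not the authors' algorithm, and skipping the semantic-similarity step that predicts attention for words the reader never fixated on), sentences could be ranked by the average gaze score of their words:

```python
def rank_sentences(sentences, word_attention):
    """sentences: list of token lists; word_attention: dict mapping a word to
    its measured (or predicted) gaze score. Sentences are ranked by the
    average attention over their words, highest first. A toy stand-in for
    the paper's component-wise scoring."""
    def score(tokens):
        return sum(word_attention.get(w.lower(), 0.0) for w in tokens) / max(len(tokens), 1)
    return sorted(sentences, key=score, reverse=True)

# summary = rank_sentences(doc_sentences, gaze_scores)[:3]
```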
Discussion:
I didn't know that automatic summary technology even existed. It seems to me that the authors are moving in the right direction by having summarization be user-oriented instead of generic. Everyone reviews differently and focuses on different things; software that can learn your style would be very valuable. Their future work hinted at creating a summarization tool for things that the user hasn't even read before. I could use that on my next blog!
Do You Know? Recommending People to Invite into Your Social Network
Within a social network, you explicitly define who your friends are. If Facebook forced you to be friends with everyone in your town, or school, or family then odds are you wouldn't want to use it. However, Facebook and other sites do offer suggestions as to who you may want to be friends with. Facebook will sometimes even give you a brief reason, such as the fact that you both went to the same high school.
In this paper, the authors discuss their recommendation widget, "Do You Know?". It combines information from inside and outside the social network to make recommendations that are viewed one at a time with the reasoning behind the choice. DYK is available for IBM's employee directory (all of the authors work for IBM), and 6287 users were logged during its initial test study. Overall, employees were happy with the tool, and provided feedback to the designers. The main point of confusion was the "No Thanks" button, which served as a way to permanently remove a suggestion. Some users were unsure as to just what it did (i.e., did it tell the person you rejected them?), and others never even noticed it. You can see a sample of DYK below.
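As for how the recommendations might be put together: the paper describes aggregating evidence from several sources into each suggestion, and the toy score below illustrates that shape. The signal names and weights are invented for this sketch, not taken from DYK; in the real widget the per-signal evidence also doubles as the "reason" shown next to the suggestion.

```python
def candidate_score(evidence, weights=None):
    """evidence: dict of signal name -> count for one candidate colleague,
    e.g. {"shared_connections": 4, "same_project": 1, "co_authored_docs": 2}.
    Signal names and weights are illustrative only."""
    weights = weights or {"shared_connections": 1.0,
                          "same_project": 2.0,
                          "co_authored_docs": 3.0}
    return sum(weights.get(signal, 0.0) * count
               for signal, count in evidence.items())
```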
Wednesday, February 17, 2010
PrintMarmoset: Redesigning the Print Button for Sustainability
To combat the excessive waste associated with printing things online, the authors developed PrintMarmoset. They tried to make their add-on as simple to use as possible, thereby minimizing the chance that people wouldn't use it because it frustrates them. This also promotes sustainable interaction design (SID) by incorporating it in an unobtrusive way. Users simply select parts of the website that they do not want to print, and then whatever is left will be printed. PrintMarmoset is a WYSIWYG tool, and users in the study preferred it to current printing methods.
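Roughly speaking (this is my sketch of the idea, not the actual add-on), the workflow boils down to remembering which parts of a page the user marked as clutter and stripping them out before building the print view. Here is what that could look like with BeautifulSoup, with the CSS selectors being hypothetical examples:

```python
from bs4 import BeautifulSoup

def build_print_view(html, clutter_selectors):
    """Remove everything the user has marked as clutter (stored here as CSS
    selectors, e.g. ['.sidebar', '#ad-banner']) and return the HTML that
    actually gets printed. A rough sketch of the concept, not PrintMarmoset."""
    soup = BeautifulSoup(html, "html.parser")
    for selector in clutter_selectors:
        for element in soup.select(selector):
            element.decompose()      # drop the marked element entirely
    return str(soup)
```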
Discussion:
Have you ever printed an email and a second page comes out with just an address on it? Or printed a page covered with ads that, even though you clicked 'Print Selection', still spilled out page after page? PrintMarmoset seems like a tool that was needed years ago. So why didn't anyone think of it? I think the authors are right when they say that people think paper is plentiful. It seems like everyone has this mindset that you can print things a thousand times at maximum size and spacing, and therefore wouldn't need a tool to reduce waste. Just think about Reed... that recycle bin is filled with crap that people didn't want to print. I hope that the authors will push for their add-on to be available in all browsers.
The Application of Forgiveness in Social System Design
Asimina Vasalou, Adam Joinson (both University of Bath), and Jens Riegelsberger (Google, Ltd., U.K.) are interested in applying the concept of "forgiveness" to social systems. Online communities offer many ways to control bad users, such as bans, ratings, and filters. The problem arises when good users have a lapse in judgement or make an honest mistake, and are reprimanded for it just like intentional abusers of the community. To prevent lasting harm, the authors suggest a system for social networks and communities that promotes forgiveness for people who normally have a good record or high standing.
The authors define forgiveness as follows:
"Forgiveness is the victim’s prosocial change towards the offender as s/he replaces these initial negative motivations with positive motivations."
They then list and briefly detail seven factors that can help victims move beyond an offense (I won't go into detail, but you can read about it!). These factors are Offense Severity, Intent, Apology, Reparative Actions, Non-verbal Expressions, Dyadic History, and History in the Community. As is true in real life (aka life outside of computers), forgiveness is not a guarantee. To mirror this, the authors suggest that any social system choosing to implement forgiveness should consider three key things (they apparently love lists): forgiveness isn't mandatory or unconditional, and it doesn't repair trust or remove accountability. They stress that incorporating forgiveness allows online communities to instill a sense of empathy in members, and that both offenders and victims are given the chance to recover their community status. If communities incorporate a system of forgiveness as well as they have systems of punishment (referred to as 'reparative design'), then the authors think everyone wins.
Discussion:
After reading this paper, I'm surprised that communities don't already incorporate reparative design. While playing Counter-Strike: Source with friends on the PC, I often see players banned after first offenses. When I was an admin for a server, I noticed that there weren't any warnings except temporary bans. Sometimes people really get into things, and will accidentally break a rule (such as no cursing). Do servers forgive them? Nope. They own them. Then the offender is mad, the victims couldn't care less, and the lesson learned is that the server or community sucks. I think that future work could study the application of reparative design in certain forums or servers. It's one thing to propose an idea; it's another to see if people will use it.
Thursday, February 11, 2010
Learning from IKEA Hacking: “Iʼm Not One to Decoupage a Tabletop and Call It a Day.”
In terms of using technology, IKEA hackers often place their concepts and changes online. Many websites, such as Instructables.com and IKEAHacker.com, offer opportunities for hackers to share their new instructions and techniques with others. The sense of community online lets the hackers feel like they belong and keeps them motivated to create. The authors note that people are merging computer terms (such as hacking and programming) and physical items (IKEA furniture) to create a wholly unique CHI experience and culture. In their words,
"...DIY culture is moving the workshop from the garage to the
web forum."
Discussion:
I'm a member of Instructables.com, and can testify to the craziness and awesomeness of tweaking products and designs to make new things. I have a bookshelf made out of Tetrominoes in my bedroom that I found a guide for on the site. With the exception of the creepy gyno chair that sits on the first page of this article, I found it to be really cool. Now as for the people interviewed... they seem a little out there. But what artist or sculptor isn't?
An Exploration of Social Requirements for Exercise Group Formation
Summary:
In this article, Mike Wu, Abhishek Ranjan, and Khai N. Truong (all from the University of Toronto) explain their findings on how people find exercise partners. They used an online survey of 96 people, followed by two focus groups of 12 people. The authors wanted to find out which characteristics of working out with others could be translated into the design of applications and websites devoted to bringing people together for exercise. Their results can be broken down into the following:
- Most people have or look for exercise partners
- People generally know their partner prior to working out
- Most people will not exercise alone
- People are willing to share different information to find partners (figure 1)
From these findings, the authors suggest that social networks geared towards helping people find exercise partners should have a few characteristics. First, they must allow people to collaborate about what they like and when they are available. Second, users should be allowed to update their available times constantly, instead of setting rigid schedules on calendars. Third, users should be able to vary what personal information they share with other people in the network. And finally, users should be able to choose their partners based on familiarity instead of being matched with strangers.
Figure 1. How willing people are to share information
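As a small illustration of how the scheduling and familiarity suggestions might translate into a matching rule (my own sketch, with made-up field names, not something from the paper):

```python
def rank_partners(me, others):
    """Rank potential partners by overlapping free hours, restricted to people
    'me' already knows (the study found people prefer familiar partners over
    strangers). All field names here are illustrative only."""
    familiar = [p for p in others if p["name"] in me["knows"]]
    def shared_hours(p):
        return len(set(me["free_hours"]) & set(p["free_hours"]))
    return sorted(familiar, key=shared_hours, reverse=True)

# me = {"name": "Chris", "knows": {"Brett"}, "free_hours": {17, 18, 19}}
```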
Discussion:
I found this study to be interesting and true. I know that I have a hard time making myself go to the Rec unless someone is going with me. Even then, it takes a while to hash out schedules and communicate times. Generally, it's texting that gets everyone together, so having a mobile application based on these findings would be useful and practical. I would expect that to soon be developed as future work (the authors expect it, too). I wonder if the findings would be different for Americans versus Canadians...
Monday, February 8, 2010
Social Computing Privacy Concerns: Antecedents & Effects
results = lol
“It Feels Better Than Filing”: Everyday Work Experiences in an Activity-Based Computing System
In this paper, Stephen Voida (University of Calgary) and Elizabeth D. Mynatt (Georgia Institute of Technology) studied the use of an activity-based computing system named Giornata. Activity-based computing boils down to designing interfaces that allow users to organize their tasks and information in a personal way. This differs from typical desktop computing because users can tag and manipulate items and collaborate, instead of simply placing items in hierarchical folders and sharing files without good descriptions. In order to better understand the potential benefits or hindrances of Giornata, the authors chose to create a full-featured system that was used over an average span of 54 days by the 5 focus users (two faculty members, two grad students, and an industry member).
Once opened, Giornata provides a virtual desktop for each activity, wherein all work pertaining to that unique activity is carried out. This can be compared to having multiple desktop tabs, where each desktop is devoted to something different. Tags can be applied to each activity, and documents that span different activities inherit all relevant tags. These tags can be searched for easy recovery of information, instead of relying on file names alone. Users were also able to collaborate with each other by sending activities via email directly from their activity's desktop.
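A toy model of the per-activity organization and tag inheritance described above might look like this (my own sketch, not Giornata's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    tags: set = field(default_factory=set)
    documents: set = field(default_factory=set)

def tags_for(document, activities):
    """A document that appears in several activities inherits all of their
    tags, as the paper describes."""
    return set().union(*(a.tags for a in activities if document in a.documents))

def find_documents(tag, activities):
    """Search by tag instead of by file name."""
    return {doc for a in activities if tag in a.tags for doc in a.documents}
```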
The users reported generally positive experiences using Giornata. Because they were allowed to freely use the program without restrictions on time or features (as in a typical user study), each user developed their own habits of information manipulation and activity creation. Tagging was used by every person on at least one of their activities, but the users found it to be of limited use in the short-term, although they stated that they could anticipate long-term benefits. In addition, users kept all of their needed information on their virtual desktop instead of archiving it in a file system.
The authors concluded from their study that customizable visualized data is a benefit to users, allowing them to work freely instead of through a hierarchical structure. Also, they believe that having activity-based storage and work areas allows people to more naturally organize their work as they would without using a computer.
Discussion:
I had never heard of activity-based computing prior to this paper, and I now find the topic to be pretty interesting. Being able to visually compartmentalize my work would be great. I have a habit of placing everything I need in a file structure, and then either forgetting how things are related or forgetting what items are due when. Giornata seems like it would help me with both issues. I would be interested in seeing this research expanded to include tools that are specific to certain activities, such as writing research papers and maintaining all needed information. Such an interface would be ideal for applications such as PowerPoint and Publisher as well.
Saturday, February 6, 2010
The Inmates are Running the Asylum (Part 1)
Alan Cooper wrote The Inmates are Running the Asylum for the sole purpose of exposing how little attention software programmers and their corporations pay to design. Cooper frequently uses the term "interaction design" to mean a design that is easy for users to understand and use, to the point that they will choose a product because of it. This is a fundamental choice that programmers must make before designing products, instead of simply tacking on an interface to try and mask their confusing code. The problem is that programmers can understand what they make (after all, they made it) and therefore cannot see how customers would have an issue understanding it as well. Cooper points out that people in the software industry think more like computers than humans, and therefore have detached themselves from users who aren't experts in the field of computers.
Discussion:
Seeing as how I'm already done with the first half of the book, it's obvious that I found it really interesting. Cooper's writing style meshes well with his ideas of interaction design, and before you know it you're thirty pages in. What surprises me most about this book so far is the fact that most of the companies and products he mentions are things that I have never even heard of. This is a true testament to the power of his statements. These dead companies piled on features instead of refining design, and suffered for it. In my opinion, the lack of user-centered design helped lead to the burst of the tech bubble, with so many promising companies collapsing in on themselves. Apologists built them up, but in the end the companies had nothing to offer the real consumers who needed new technology without knowing it. So far, a great book.
Thursday, February 4, 2010
How Well do Visual Verbs Work in Daily Communication for Young and Old Adults?
In this paper, Xiaojuan Ma and Perry R. Cook (both from Princeton) analyze the different ways that verbs can be visually displayed, and how well these displays convey meaning to both young (20-39) and old (55+) adults. The four visualizations in question are a single image, a panel of four images, an animation, and a video clip. They chose 48 frequently used verbs from the British National Corpus to visualize for their research. Note that verbs are harder to convey via images than nouns, as most nouns represent a single, tangible thing.
The images for the verbs were taken from tagged web pages. The authors recruited raters to give them feedback as to which images best depicted each verb, and then chose the top four. The animations were taken from a site specializing in verb animations, and the videos were recorded by the authors themselves.
WordNet score results for the two groups
The authors also found that the verbs best recognized all shared certain visual characteristics in common. Some of these include simple backgrounds, limited visual effects, and a limited use of symbols (such as a heart or a question mark to represent the verb). They also make note of the fact that different gestures have different meanings to people of different cultures and age groups, and therefore the visualizations should be examined for universality. In the future, the authors hope to apply their visual designs to help people with aphasia (a disorder characterized by difficulty understanding or producing written and spoken language).
Discussion:
I thought this article was interesting. I didn't actually see the purpose for the research, however, until I read where they were interested in helping people with aphasia. With that in mind, it casts the research in a whole new light. By visualizing verbs, people who have difficulty speaking and writing can still communicate with others, which is of obvious importance. I think that future research should be done with actual people suffering from aphasia so that the benefits to them can be directly determined. Maybe an entire, universal language could be created based on visualizations, something of great benefit to anyone attempting to communicate across language barriers.
Wednesday, February 3, 2010
Correlating Low-Level Image Statistics with Users’ Rapid Aesthetic and Affective Judgments of Web Pages
In this article, the authors were concerned with how well evaluations of websites based on decomposed low-level image statistics compared to what actual users thought about the pages themselves. Users were asked to base their decisions on four main design dimensions: Attractiveness, Pragmatic Quality, Hedonic Quality: Identity, and Hedonic Quality: Stimulation (hedonic quality refers to how pleasurable and interesting someone thinks the page is). The users rated the thirty web pages on a seven-point scale along the gamut of each dimension.
Monday, February 1, 2010
The Design of Everyday Things
Anyway... The Design of Everyday Things focuses on (you guessed it) the design of everyday things. Doors, phones, and VCRs are main characters, with noteworthy performances by projectors, lights, digital watches, and radios in supporting roles. Basically, Norman looks at something we take for granted and breaks it down into its general level of FAIL based on a few main characteristics. He then encourages designers to consider these characteristics in their future work.
Characteristics to keep in mind:
- Constraints (physical, semantic, logical, cultural). Simply put, constraints refine design by limiting it. If you want to make your Lego policeman and bike, you don't put the wheel on the officer's head, make him face backwards, and put the red light on the front of the bike. Constraints are good! Don't fight them.
- Natural Mappings. If you have four stove burners and slap four dials on there to control them, make the layouts match! Otherwise people make mistakes and can't remember stuff. Which leads into...
- Keep Knowledge in the World. If it's natural to use, then people need not focus on memorizing how to operate something. I should be able to pick up a pencil and write without having to first memorize its instruction manual.
- Maintain Visibility. If you want to turn the water on in a sink, don't hide a foot pedal under the counter. Let the user (again, naturally!) see how to use things. Sometimes you must sacrifice elegance for simplicity.
- Give Feedback. If you double-click Internet Explorer and then nothing tells you that it's currently in the process of crashing (no hour glass, warning, etc.) then you waste valuable time and are generally confused. Then you call Geek Squad and ask them what's wrong with your CPU when you don't even know what that means (but you want to sound high-tech). Let users see that what they do has a direct impact on what they're using, and they will feel better about themselves.
Sunday, January 31, 2010
Ethnography: Humans and Doors
Chris Aikens
Brett Hlavinka
Idea:
As Brett presented in class, we are interested in observing how people interact with doors, and how doors alter the interactions between people. Specifically, we want to see how people use the entrance to Zachry at different times of day. We plan on developing a model of when people hold doors for others and what kinds of people are most likely to hold a door.
Example questions:
- How close must another person be for someone to hold the door for them?
- Does gender play a role in the decision to hold the door?
- Are members of the Corps more likely to hold the door for others?
- Do people using cell phones / iPods behave differently?
Sunday, January 24, 2010
Augmenting Interactive Tables with Mice & Keyboards
*Björn Hartmann, Meredith Ringel Morris, Hrvoje Benko, Andrew D. Wilson
(Microsoft Research, *Stanford University HCI Group)
Summary:
In previous research, multi-touch surfaces are seen as an alternative to keyboards and mice, and thus the two input choices seem to become mutually exclusive. This research team seeks to combine these inputs to eliminate the limitations found in using either input type by itself. The integration of mice and keyboards offers three main functionalities - high precision/performance input, interacting with distant objects in a minimal way, and serving as proxies for the positions and identities of users on the surface itself.
The authors give the example of three students working together on a project using a multi-touch table and individual mice and keyboards. The students are able to lock files onto their keyboards, combine inputs to work on files simultaneously, share their items with each others' work areas, and access unique files by logging in via their identified input devices.
In order to link digital files to keyboards, users can either move the files across the table and dock them, or they can move their keyboard onto the files. Either way, the files are claimed via collision detection. To collaborate, simply bring another input device into close proximity to link it (note that the devices must share roughly the same orientation, i.e., the keyboards face approximately the same direction).
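Assuming the table reports a rectangle for each physical keyboard and each on-screen document (an assumption for this sketch, not a detail from the paper), the docking-by-collision idea comes down to a rectangle-overlap test:

```python
def rects_overlap(a, b):
    """Each rect is (x, y, width, height) in table coordinates."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def dock_files(keyboard_rect, files):
    """Claim every document whose rectangle collides with the keyboard's
    footprint. Illustrative only; the real system also handles dragging
    documents onto a stationary keyboard and linking mice by proximity."""
    return [f for f in files if rects_overlap(keyboard_rect, f["rect"])]
```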
Mice are used to manipulate distant objects that might be awkward to reach otherwise. In this way, the cursor behaves much like a user's finger would when selecting items on the surface. But what if the table is cluttered with cursors? In such a case, a line is projected from the mouse to its cursor across the table surface. Proximity again comes into play when linking a keyboard to a mouse. A user can touch the two devices together, or use the mouse to click on the outline of the keyboard if it is resting on the table.
The above figure shows what the researchers have so far tested their combined input design on. In the future, they plan on exploring ways to incorporate multiple users at different locations using the same surface.
Discussion:
The functionality of this idea is awesome. After reading this article, I can't imagine not being able to do this with a multi-touch surface. It seems so natural to be able to interact with touch surfaces via single input devices that I am already familiar with. The possibilities for collaboration and simultaneous multiple users are very well outlined, showing that a multi-touch table can serve some of the same functions as a regular tabletop. Pretty cool!
An example of collaboration: D) group searching H) group writing
Collabio: A Game for Annotating People within Social Networks
Michael Bernstein, Desney Tan, Greg Smith, Mary Czerwinski, Eric Horvitz
(MIT CSAIL and Microsoft Research)
Summary:
Collaborative Biography (or Collabio) is a social tagging game that is currently available on Facebook. It is a way to generate accurate information about individuals while keeping the taggers motivated. Collabio differs from other tagging applications and projects in two main ways. First, Collabio is categorized as a "tagging for you" tool. This means that the taggers themselves do not directly benefit from the tags, and instead tag in hopes that whomever they tag will do so in return. Second, Collabio differs from other Facebook tagging apps in the sense that it is more concerned with the richness of tags than with the entertainment value the application itself provides. Though it is a game, it is structured in such a way that inaccurate tags are given little or no point value, thus encouraging accuracy while still motivating users to rack up points.
Collabio has three main interfaces - Tag!, My Tags, and Leaderboard. Tag! allows users to (not surprisingly) tag their friends. An initial set of tags is generated from information pulled off of the person's profile. As these tags are confirmed by users or new tags are added, the tag cloud grows. Points are awarded for guessing tags, with the most popular tags giving the most points. An example is shown in the figure above, which I straight up cut from the article. My Tags allows users to manage the tags that people have made for them (duh). So if you don't like the fact that everyone tagged you as alcoholic, skank, or bed-wetter, you can easily delete your little annoying facts. Finally, Leaderboard serves as a motivator for users to try and get their names to the top by tagging everyone they can.
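The scoring mechanic boils down to rewarding agreement: tags that many friends have already applied are worth more than obscure or inaccurate ones. The exact formula isn't reproduced from the paper; the one-liner below just illustrates the "popular tags score higher" behaviour.

```python
def points_for_guess(guess, existing_tags):
    """existing_tags: dict of tag -> number of friends who already applied it.
    A tag nobody else has used earns the minimum; widely agreed-upon tags earn
    more. The formula is made up for illustration."""
    return 1 + existing_tags.get(guess.lower(), 0)

# points_for_guess("guitarist", {"guitarist": 7, "texan": 2})  # -> 8
```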
So just how useful are the tags to the researchers? As it turns out, they beat simply performing a web search or scanning someone's Facebook profile. Through surveys and rating tests (which are not reproduced here for all our sakes) Collabio was found to produce unique and accurate tags for people that could not be generated elsewhere. Therefore, Collabio is a step forward in information extraction in a social setting via user interaction.
Discussion:
Collabio seems like a pretty fun game to play on Facebook, but otherwise it currently lacks purpose outside the realm of research. I installed it myself and the first person it brought up to rate (my friend Don) had previously only been tagged by the Collabio Bot. The first word the Bot tagged him with was "awesome", which I find to be true but not very helpful! After piecing it together, the initial four tags generated by the Collabio Bot were "awesome Collabio Facebook tag"... very funny, guys. Combine this with the fact that I got AJAX errors half the time I was trying to guess stuff and I'm not exactly sure how this application ever got people to use it in the first place! Maybe it would work better if other friends used it, but for now it will sit next to FarmTown as another unused Facebook App.
Wednesday, January 20, 2010
User Guided Audio Selection from Complex Sound
When someone wishes to manipulate a photo or video, they are presented with a wide variety of tools and applications. Changing colors, deleting objects, merging scenes, and many other tasks which were once impossible are now commonplace. Audio processing, however, is still a complex and cumbersome task. Users cannot simply point to a section of an audio waveform and isolate an instrument in an overlay. Because of this difficulty, Paris Smaragdis developed a novel interface for selecting sounds. Most audio editors concern themselves with two main points - visualization and sound separation. Audio visualization is essentially a waveform showing the air pressure over time, and is the most widely used. Sound separation involves breaking down audio files into acoustic energy, which can be seen as a graph of time and frequency. Both of these approaches provide information, but they lack object-based interaction.
Paris Smaragdis uses audio guidance to achieve sound selection from mixed audio. This task begins with the Probabilistic Latent Component Analysis (or PLCA) model. Simply put, the PLCA model estimates what pieces of an audio mixture belong to what unique instrument or sound, based on what is expected in the mixture, the presence of a given sound at a certain time, and the overall contribution that each sound makes to the mixture. The user can then sing, hum, or play an approximation of the sound they are trying to extract or edit, and use this sample as a prior. The PLCA model then tries to match the prior to parts of the audio mixture.
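PLCA itself is a probabilistic decomposition and is more involved than can be shown here. As a much cruder stand-in (explicitly not the paper's method), the "use the user's hum as a prior" idea can be illustrated with a soft time-frequency mask that lets the mixture through wherever the hummed guide has relatively strong energy:

```python
import numpy as np

def guided_soft_mask(mixture, guide, frame=1024, hop=512):
    """Crude illustration of guidance-based selection (NOT the paper's PLCA):
    wherever the hummed or sung guide is relatively strong in a time-frequency
    bin, more of the mixture is kept. Both signals are mono arrays assumed to
    be time-aligned and of equal length."""
    def magnitude_spectrogram(x):
        frames = [x[i:i + frame] * np.hanning(frame)
                  for i in range(0, len(x) - frame, hop)]
        return np.abs(np.fft.rfft(np.array(frames), axis=1))
    mix_mag = magnitude_spectrogram(mixture)
    guide_mag = magnitude_spectrogram(guide)
    # Wiener-style soft mask: values near 1 where the guide dominates.
    return guide_mag / (guide_mag + mix_mag + 1e-9)
```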
To test this approach, Smaragdis attempted to extract a speech after mixing it with background music. Using direct playback of the original speech, perfect extraction occurred. Having someone else say the words gave poorer results, but Smaragdis states that they still "...rival modern sound separation algorithms".
Discussion:
Obviously, the usefulness of this software is bounded by how accurately a user can reproduce a sound. Having the premixed track for playback would provide near perfect extraction, but a tone deaf person trying to edit an insane guitar solo would lead to what I can only imagine to be epic failure. Aside from the accuracy of user input, this audio selection tool sounds awesome. Anyone familiar with Audacity or other sound editing and mixing tools knows how frustrating trying to edit a unique instrument can be.
I see this work being furthered by working it in a different direction. If someone was able to extract a track by matching it to my input, could I not take my input and convert it into music? It would be revolutionary to simply sing or hum the parts you wish to include in a song and have the computer match it to pitches and note lengths. Then you could skin each input with the desired synthesized effects, or match it with recorded instrumental inputs. If I knew the first thing about making that a reality I would be hard at work on it now. But until then, I'm going to claim the idea as my own intellectual property!
Tuesday, January 19, 2010
A Reconfigurable Ferromagnetic Input Device
Jonathan Hook*, Stuart Taylor, Alex Butler, Nicolas Villar, Shahram Izadi
(Microsoft Research Cambridge, *School of Computing Science)
Summary:
A reconfigurable ferromagnetic input device can be thought of as the parent class of some familiar input devices (such as a trackball mouse or a multitouch surface). Being ferrous means that the input device contains iron, and therefore the authors of this paper concern themselves with ferrofluid bladders (a liquid with suspended iron particles) and various iron solids. Ferrous objects can be placed on the sensing surface and used to create unique and application-specific input devices by monitoring changes in the magnetic flux above said surface.
Magnetic Field Disturbance
Although a lot of work has been done on inputs, this idea is unique in that it allows for customized input devices, and for the detection of deformations in ferrous objects. The device combines an analogue sensing board with a digital interface board. The sensing board is made up of sets of 16 sensor coils, while the digital interface is composed of an analogue-to-digital converter with a USB output.
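As a hedged sketch of what the software side of such sensing might look like (the paper doesn't publish its processing code, and the coil layout and scale below are assumptions): if the ADC delivers one value per coil, comparing a frame of readings against a calibrated baseline highlights where the flux is being disturbed.

```python
import numpy as np

def flux_disturbance(readings, baseline, noise_floor=0.02):
    """readings / baseline: 2D arrays with one value per sensing coil (layout
    and units are assumptions, not taken from the paper). Returns the per-coil
    change in flux with small noise zeroed out; the strongest cell hints at
    where a ferrous object is being pressed or deformed."""
    delta = np.abs(np.asarray(readings, dtype=float) - np.asarray(baseline, dtype=float))
    delta[delta < noise_floor] = 0.0
    return delta

# peak_coil = np.unravel_index(np.argmax(delta_frame), delta_frame.shape)
```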
Two different application scenarios were also discussed by the authors. Their first application concerned virtual sculpting, wherein deformations made in a ferrofluid bladder are translated to molding clay (their example image follows). Secondly, the authors looked at using their input device as a synthesizer. Various ferromagnetic objects are used to simulate the necessary actions to obtain sounds from musical instruments (their examples included striking a piano key and playing the violin). This scenario demonstrates the breadth of application for such an input device, which is what the authors wish future users to grasp. They feel that their device is generic enough to have novel application.
Discussion:
I find this input device to be groundbreaking. Researchers tend to focus on a certain type of input and on designing tools and applications that use it, while here we have an input device that can itself be customized to meet demands and fill niches. My issue with this paper was that they didn't go into more detail on their application scenarios! I would have enjoyed seeing just how the applications worked and how user-friendly this input device really is. I see ferromagnetic input devices being used for 3D mapping and editing, and space navigation. Imagine being able to fly a plane with full 3D control by hovering a ferrous object above the sensing board. To me, the applications only seem limited by the exposure of such novel input devices.