Letting Go presentations in-class September 29th
Documentation due online September 30th
Taking Control presentations in-class October 18th
Documentation due online October 21st
We tend to think of devices and user interfaces, especially for things like appliances, as an inseparable bundle. There are exceptions, like some of our entertainment systems with their unified remotes, but most other items in the home include a UI of some sort. Your washer has its UI, your dryer has another, the microwave has a UI, the espresso machine has a UI, the toaster oven has a UI. Most of these devices are not very sophisticated, nor do most of them need to be, but is there a missed opportunity here? Are there lots of missed opportunities?
There are certainly alternatives to traditional knob-and-button interactions. Ubiquitous computing and physical computing are a couple of paradigms that might offer some traction for controlling the things in our homes much differently than we traditionally have. Voice recognition and gesture are other ways we might interact with our domestic settings.
What other opportunities are present when we relax traditional notions of how things should behave? What might your home or things within it do if an occupant is known to be driving home? What if the car or bike is in the driveway, or the train has made the station? Opening the door, entering a room, within sight, within reading distance, immediately adjacent, touching…do any or all of these possibly-sensed configurations (both of people and of other technologies) represent nascent opportunities?
This two-part assignment will engage us in ways of re-thinking how our technologies configure our lives, and how our lives configure our technologies. We’ll be thinking about ways that the world could be vastly different, and prototyping new interactions with these reconfigured worlds.
The first bite of this project will be to pick something (or multiple things) that we interact with on a regular basis and create a wholly new way to specify/control/trigger/accomplish that interaction that involves Sifteo cubes as the central modality. Working within the constraints of a technology of this sort, one that offers both cool new capabilities (a wonderful set of sensors, sweet form factor, etc.) and harsh constraints (limited screen resolution and colors, for instance) will demand creativity, craft, and acumen. The look and feel of the UI you create is critical to conveying both how someone should proceed in the interaction and how your work differentiates itself from other products and brands. Look for opportunities to push boundaries and use the various constraints to be refreshing and liberating.
The second bite is not much less demanding. The shackles come off, and you will do something similar, related, wholly new, or stolen (negotiate with your peers about that last one, ok?) with technologies and modalities of your own choosing. Sometimes, though, freedom of this sort makes things much harder, because the space of opportunity is bigger and you’ll have to be able to defend your choices. Having an excuse about pixel count and color palette can be a shelter from having to explain why your various touch points don’t feel brand-consistent or how your graphics are somehow less than they could be. The limited interaction modalities of the tiny Sifteo screens could be a blessing compared to the expectations some folks now have when there’s a touchscreen display in front of them.
So, think about what it means to present the task of controlling a device using a distributed collection of tiny screens. Juxtaposition, orientation, shakes, screen presses – all are fair game. Music, talk, graphics, beeps and boops – don’t forget other channels. We’ll be supplying some examples for the Sifteos, and you all know how (and/or can work with the tutorials and your peers) to make things happen with Processing and Arduinos. You are encouraged to adapt anything and everything to enable a wide array of inputs and outputs. It is up to you to get some semantics into the Sifteo sphere, and to marry inputs and feedback in both parts of the assignment such that the interaction design has simplicity, coherence, consistency and elegance.
Remember, novel forms of user interface and interaction design are an integral part of the user experience and overall brand. A recognizable mode of getting something done (like a touch wheel for volume and scrolling) can telegraph what you’re using, who was responsible for it, and how to work with it. If you can develop effective ways of interacting with devices of this sort, you can end up with a sort of “branded physical interaction” that can speed your users along the curve of working things out and can also help your new interface design spread to more devices and settings.
We will be presenting sketches and interim work to the whole class during studio next Thursday. You will be presenting your final work for the first half of the assignment in class on September 29th. The second half will be presented on October 18th – these are healthy chunks of time, so make sure you do a thorough and stellar job!
At all four presentations, be prepared to share the libraries and tools you’ve discovered with your classmates and instructor. (Don’t be secretive! We should cooperate on technology and compete on content. After all, that’s the philosophy that led to all these libraries being available to us in the first place.)
We will be posting your documentation on design.cca.edu, with links to Java, C#, or Processing applets, or to YouTube or Vimeo videos.
Evaluation on this project will be based on the ingenuity, creativity and beauty of your solutions. Oh, and generosity towards your peers.