Volvo's New Vehicle Interface

Finally, a decent vehicle UI and interior that don’t look awkward together. While removing tactility is generally not recommended for specialized contexts such as driving, it’s great to see another vehicle manufacturer besides Tesla starting to "get it," even if many of the design patterns were copied from Tesla. Automakers seem to be slowly catching up to something that should’ve been released 10+ years ago.

Levitating Particle Display Opens Up New Possibilities in HCI

Having physical particles levitate to form a display almost seems like a small step toward holograms. In terms of usability, though, this technology represents a huge leap toward adding physical and haptic feedback to gesture-based interaction with three-dimensional images. Imagine clasping your hands to grab the particles and move an object, pushing or pulling particles with the palm of your hand under realistic physics, or morphing an object by sculpting it with your hands. The race between tactile displays and levitating particle displays is definitely on.


via Gizmodo

Straw Interface for Virtualizing Drinking Experience

The Straw-like User Interface, developed by the Inami Laboratory at Keio University, "allows users to virtually experience the sensations of drinking. Such sensations are created by referencing sample data of actual pressures, vibrations, and sounds produced by drinking from an ordinary straw attached to the system."

"The system transmits pressure changes to the straw, which applies vibrations to the mouth. The pressure changes are created by a valve in the interface. If the valve is closed, the pressure increases. If the valve is open, the pressure decreases. Also, when the speaker inside the interface vibrates, the straw attached to it receives the vibration and transmist it to the lips."

I can see this being used in games that customers interact with while waiting at a bar or restaurant, for example. It could also help the elderly and the physically challenged experience sensations they otherwise could not. Or picture a gamified water bottle that you can play with and that helps keep you hydrated at the same time.

#tweet2voice: A Sociolinguistic Experiment on Tweets

The origin of the idea

In the middle of the night, I had an idea: Twitter users could include a special hashtag to have their tweets read out loud by strangers. While it started merely as a novel idea, I believe this could provide an interesting way to transform a vast text feed into human speech and add an additional layer of information (such as emotional intelligence) that was not present before. Before I describe the idea further, here is the thought process I followed as I was building this.


Role of speech

Traditionally, speech serves both to convey information and to express social relationships, through the functions outlined below:
  1. Expressive - expresses the speaker's feelings
  2. Directive - gets others to do things
  3. Referential - provides information
  4. Metalinguistic - comments on language
  5. Poetic - uses language aesthetically
  6. Phatic - uses language for solidarity and empathy

Where the inspiration came from

This experiment was partially inspired by a project I worked on called Audil, an environmental system for the visually impaired. One of the frustrations we empathized with is that blind people have virtually no choice in how information is disseminated to them. For example, computer speech synthesis software is used frequently throughout the day to absorb information and interact with the world. However, this technology also creates a social disparity between the visually impaired and the sighted. We felt that we could design technology in a way that brings people together rather than simply substituting technology for human presence.

Another inspiration came from an app called Umano. It's essentially an audiobook player for blogs, and it's very useful when your hands are preoccupied with complex tasks such as driving. Umano differs from its competitor SoundGecko, which uses server-side speech synthesis to read articles and documents aloud. Instead, Umano relies on professional voice actors and announcers to read the articles out loud. In terms of listening experience, today's computer algorithms still cannot compete with a human's ability to fine-tune tonality, speed, and pitch to make the content more engaging to our brains.


How it works
  1. An Amazon Mechanical Turk worker reads the instructions below.
  2. The worker opens a Google Spreadsheet containing the latest tweets tagged #tweet2voice.
  3. The worker then calls a toll-free (VoIP) number and reads the tweet out loud.
  4. Line2 sends a voicemail notification email with an MP3 attachment.
  5. IFTTT identifies the email with the attachment, places the MP3 into a Dropbox folder, and then uploads it to SoundCloud and Tumblr (see the sketch below).
  6. The admin tweets the SoundCloud link to the original Twitter user.
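For step 5, here is a minimal sketch of what an equivalent script could do without IFTTT: poll an inbox for Line2 voicemail notifications and save the MP3 attachments into a Dropbox-synced folder. The mail host, credentials, sender filter, and folder path are all assumptions for illustration; the SoundCloud and Tumblr uploads would hang off the synced folder.

```python
# Sketch of step 5 without IFTTT: poll an inbox for Line2 voicemail
# notification emails and save their MP3 attachments into a folder
# that Dropbox syncs. Host, credentials, and paths are placeholders.
import email
import imaplib
import os

IMAP_HOST = "imap.gmail.com"                   # assumed mail provider
USER, PASSWORD = "me@example.com", "app-pass"  # placeholders
DROPBOX_DIR = os.path.expanduser("~/Dropbox/tweet2voice")

def fetch_voicemail_mp3s():
    os.makedirs(DROPBOX_DIR, exist_ok=True)
    mail = imaplib.IMAP4_SSL(IMAP_HOST)
    mail.login(USER, PASSWORD)
    mail.select("INBOX")
    # Line2 voicemail notifications arrive with the recording attached.
    _, data = mail.search(None, '(UNSEEN FROM "line2.com")')
    for num in data[0].split():
        _, msg_data = mail.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            filename = part.get_filename()
            if filename and filename.lower().endswith(".mp3"):
                path = os.path.join(DROPBOX_DIR, filename)
                with open(path, "wb") as f:
                    f.write(part.get_payload(decode=True))
    mail.logout()

if __name__ == "__main__":
    fetch_voicemail_mp3s()
```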

Instructions for Amazon Mechanical Turk

Summary: You will be calling a toll-free number and reading a statement out loud for the voicemail.
  1. Go to this link.
  2. Find a statement next to "No."
  3. Call the toll-free number 888-707-2925.
  4. When the voicemail beeps, begin reading the statement out loud. Please be expressive when speaking. You can simply read, exaggerate a bit, or be emotional, angry, happy, funny, weird, etc.
  5. When completed, replace "No" with "Yes" next to the statement you just read.
  6. Insert the current date and time (in Pacific Standard Time) under "Date & Time Submitted."
  7. Finally, check the box below and submit.
I have called the number and left a voicemail according to the instructions.
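Behind the scenes, the tracking spreadsheet can be polled to see which statements still need a voice. Here is a minimal sketch, assuming the sheet is published as CSV with "Statement" and "Completed" columns; the URL and column names are hypothetical, not the actual sheet.

```python
# Sketch: list statements from the (assumed) tracking sheet that have
# not been recorded yet. Sheet URL and column names are placeholders.
import csv
import urllib.request

SHEET_CSV_URL = (
    "https://docs.google.com/spreadsheets/d/SHEET_ID/export?format=csv"
)

with urllib.request.urlopen(SHEET_CSV_URL) as resp:
    rows = list(csv.DictReader(resp.read().decode("utf-8").splitlines()))

for row in rows:
    if row.get("Completed", "").strip().lower() == "no":
        print("Awaiting voice:", row["Statement"])
```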

Alice AI: Is "Her" Really Happening?

Interesting features:
  • Alice can wake you up in the morning and provide a much more personal way to wake up.
  • Alice will share new songs, books, and films for the two of you to enjoy together.
  • Alice will talk to you throughout the day and cheer you up.
  • Alice can feel love, jealousy, and affection, and your responses to her affect her mood.
Update: This was an April Fools’ joke after all, but it introduces some interesting possibilities regardless.


via Couple

Implications of Altered Facial Expression Using Electric Muscle Stimulation


What if we could use EMS to stimulate our facial muscles and alter how we portray our emotions? Will an EEG reader analyze the wearer’s emotional state while EMS stimulates certain areas of the face, triggering a smile instead of a frown, or vice versa?

When these technologies become embedded in our lives, will our society become more cynical and ultimately choose to ignore people’s true state of mind?