Body Critiques
- Why is this chapter separate from the Hands chapter?
- Why is the Limbs section separate from the Body section? +1
- Why are breath and tongue input considered part of the Limbs section?
- Break up the Speech section and use more bullet points
- How does wearing glasses or contacts affect gaze detection?
- How do we know which body gestures invoke commands and which do not? This is addressed for speech input (“a voice interface’s available commands are essentially invisible”) and should also be added for body-based input.
- Talk about why gaming and sci-fi culture inspire and shape body-based interaction techniques. Would we be exploring these techniques if gaming and sci-fi culture weren't so strong?
- Many examples in this chapter, such as gaze tracking, forearm sensing, and skin pressure sensors, were invented several years ago. Any updates? Have any made it beyond prototypes? +1
- Include how tone of voice, context, setting, culture, etc. can lead to discrepancies in speech input
- How do voice recognition systems filter out background noise? (One classical approach is sketched after this list.)
- If gaze technology averages eye movement over time and uses that as input, does that mean it's predicting the future, i.e., where the eye will go next? (See the smoothing sketch after this list.)
- A common limitation across the sections of this chapter seems to be distinguishing conscious from unconscious movements. What are some possible solutions? (One classic gaze answer, dwell-based selection, is sketched after this list.) +1
- The chapter seems to indicate that full-body input is better than input from individual body parts; however, doesn't this lead to greater gulfs of execution and evaluation?
- Discuss practical limitations, like the fact that it’s tiring to run on an omni-directional treadmill and stand in the same pose for a long period of time. Are there use cases where body-based input is actually better than just using hands?
- Talk about privacy issues with gaze, body, and voice +1
- Talk about psychological complications with peripheral awareness
- Explain why we don't have direct control over where our eyes go; this wasn't intuitive for some
- Talk about the possibilities these inputs could open up: for example, why would anyone want tongue-based input?
- Fusion and symbiosis:
  - The part on fusion lacked context and explanation
  - Talk more about digital symbiosis
  - Until AI has a sense of will, is symbiosis actually possible or present?
  - Include ethical and social questions about human-computer integration
- Terminology:
  - Explain what EEG and EMG are
- Typos and grammar:
  - “Infrafred” is misspelled
  - “Using this depth map, it build”: “build” should be “builds”
  - “moving it’s eyes in the player’s eyes move” should be “moving its eyes as the player’s eyes move”
- The Limbs video is unavailable to some students (it worked fine for me, though)
- Explain in the Speech section why AI is still not very conversational, why companies market it as if it were, and what impact this has
- Include some history/early tech in the space of body tracking
- Talk about accuracy and mainstream implementations for gaze
- Talk about populations often left out of gaze-based interaction designs
- Talk more about error handling with body-based interaction. What do the errors look like in detail? How do these interactions usually help users recover from errors? How do designs prevent these errors in the first place?
- Include more examples of multimodal body interactions
- Define these terms in the sidebar:
  - speech recognition
  - gaze recognition
  - body-based sensing
  - symbiosis
  - fusion
- For each section, go over the technique's maturity and include a deeper dive into how it works
- Diagram the process of speech recognition in a picture rather than a dense paragraph (a stage-by-stage code sketch follows this list)
- How is data for speech recognition collected? Is it just recorded in the wild? Go over the ethics of data collection
- Mention foveated rendering (sketched after this list)
- What inspired explorations of body-based input? Was it to see what's possible or to address a need?
- Mention pros and cons of different body-based inputs for accessibility
- Add examples from sci-fi movies and books to stretch the imagination
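
On the question above about cleaning background noise: the chapter doesn't describe a specific method, but one classical technique many systems build on is spectral subtraction, where a noise spectrum estimated from non-speech frames is subtracted from every frame of the signal. A minimal sketch under that assumption; the function and parameter names are mine, not from the chapter:

```python
import numpy as np

def spectral_subtraction(frames, noise_frames, floor=0.02):
    """Estimate the noise spectrum from known non-speech frames,
    then subtract it from each frame of the noisy signal.
    Both arguments are 2D arrays of equal-length windowed frames."""
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Clamp to a small floor so magnitudes never go negative; the
    # residue this crude clamp leaves behind is the well-known
    # "musical noise" artifact of spectral subtraction.
    cleaned = np.maximum(mag - noise_mag, floor * mag)
    # Subtraction changes only magnitudes; the noisy phase is reused.
    return np.fft.irfft(cleaned * np.exp(1j * phase), axis=1)
```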
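
On whether averaging gaze means predicting the future: averaging (or exponential smoothing) blends only past samples, so the filtered point lags slightly behind the eye rather than anticipating it. A toy sketch of the idea; the alpha value is illustrative:

```python
def smooth_gaze(samples, alpha=0.3):
    """Exponential moving average over raw (x, y) gaze samples.
    Every output is a weighted blend of past measurements only,
    so the estimate trails the eye; nothing here predicts where
    the eye will move next."""
    est = samples[0]
    smoothed = []
    for x, y in samples:
        est = (alpha * x + (1 - alpha) * est[0],
               alpha * y + (1 - alpha) * est[1])
        smoothed.append(est)
    return smoothed
```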
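
On distinguishing conscious from unconscious movements: for gaze, one long-standing answer to this so-called Midas touch problem is dwell-based selection, where a fixation counts as a command only if it stays on one target long enough. A minimal sketch; the class, its API, and the threshold are hypothetical, for illustration only:

```python
import time

class DwellSelector:
    """Dwell-based gaze selection: a fixation counts as a deliberate
    command only if it stays on the same target for a minimum duration,
    so incidental glances do not trigger anything."""

    def __init__(self, dwell_seconds=0.8):  # threshold is illustrative
        self.dwell_seconds = dwell_seconds
        self.target = None
        self.since = None

    def update(self, target):
        """Feed the currently gazed-at target each frame; returns the
        target once it has been dwelled on, otherwise None."""
        now = time.monotonic()
        if target != self.target:
            self.target, self.since = target, now  # gaze moved: restart timer
            return None
        if target is not None and now - self.since >= self.dwell_seconds:
            self.since = now  # fire once, then re-arm for repeat selection
            return target
        return None
```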
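
On diagramming speech recognition: until there is a figure, the dense paragraph roughly reduces to four stages. The sketch below makes the first two concrete and leaves the two learned stages as stubs, since those are trained models; every name here is an illustrative placeholder, not a real library's API:

```python
import numpy as np

def split_into_frames(audio, rate=16000, win_ms=25, hop_ms=10):
    """Stage 1: slice the waveform into short overlapping windows."""
    win, hop = int(rate * win_ms / 1000), int(rate * hop_ms / 1000)
    return np.stack([audio[i:i + win]
                     for i in range(0, len(audio) - win, hop)])

def extract_features(frames):
    """Stage 2: per-frame spectral features (a log magnitude spectrum
    here; real systems typically use mel-scale filterbanks)."""
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def acoustic_model(features):
    """Stage 3 (stub): a trained network maps each frame's features
    to a probability distribution over phonemes or characters."""
    ...

def decode(frame_probs, language_model):
    """Stage 4 (stub): search for the word sequence that best matches
    the frame probabilities and is plausible under a language model."""
    ...
```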
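
On foveated rendering: the core idea is that gaze tracking lets a renderer spend full resolution only where the user is looking, because visual acuity falls off steeply outside the fovea. A toy per-pixel shading-rate function; the radii and falloff are made-up numbers, not taken from any real engine:

```python
def shading_rate(pixel, gaze, full_res_radius=200.0, falloff=600.0):
    """Return the fraction of full shading resolution to use at `pixel`
    (screen coordinates), given the current gaze point."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= full_res_radius:
        return 1.0  # foveal region: render at full quality
    # Linear falloff toward a coarse minimum in the periphery.
    return max(0.25, 1.0 - (dist - full_res_radius) / falloff)
```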