Week 1

Readings: Bechara et al.; Chap. 1 in Affective Computing (Introduction).

Questions:
- Construct a specific example of a human-human interaction that involves affect, and how it might have an "equivalent" human-computer interaction. Describe the interaction and the affective information in both the human-human and human-computer cases.
- Construct another example interaction involving affect, but this time try to think of a human-human interaction that may not carry over to the human-computer "equivalent". In what way does "the media equation" hold (or not) for your example?
- Name (briefly) three things you learned or liked from this week's readings, and one thing you didn't like (perhaps a part that was confusing or that raised your skepticism).
- If time is short, what one question/issue would you most like to discuss based on these readings?

Week 2

Readings: Dawson 90, Schlosberg 54, and LeDoux 94; Chap. 5 in Affective Computing; and http://www.media.mit.edu/galvactivator.

Questions:
- Briefly describe an experience where the galvactivator brightness changed significantly while you or somebody you know was wearing it and had a clear change in emotional state.
- Briefly describe an experience where the galvactivator brightness changed significantly while you or somebody you know was wearing it and did not have any obvious change in emotional state. (A sketch of the conductance-to-brightness mapping appears after this list.)
- Schlosberg's "Attention-Rejection" (A-R) axis can be seen as a third axis for the arousal/activity-valence space. Another commonly used third axis is the "dominance" axis -- a raging forest fire dominates you, whereas an ant is dominated by you. Construct a scenario involving a computer where the A-R axis might be useful. Construct a scenario where the "dominance" axis might be useful.
- Construct a scenario (however fanciful) where it might be useful for an ordinary office computer to have the computer-equivalent of one of the mechanisms LeDoux describes. Make clear which mechanism you are embedding in the computer. Comment on how valuable (or not) you think this feature might be.
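
For context on the first two questions: the galvactivator is essentially a skin-conductance sensor driving an LED, and skin conductance tracks sympathetic arousal rather than valence -- which is why the display can brighten with exercise or a startle as readily as with a clear emotion. Below is a minimal sketch of the mapping in Python; read_conductance() and set_led() are hypothetical stand-ins, and the floor/ceiling values are made up (the real device does this in analog hardware).

```python
# Minimal sketch of a galvactivator-style mapping from skin conductance to
# LED brightness. read_conductance() and set_led() are hypothetical stand-ins
# passed in by the caller; the real device does all of this in analog hardware.

def brightness_from_conductance(us: float,
                                floor_us: float = 1.0,
                                ceil_us: float = 20.0) -> float:
    """Map skin conductance (microsiemens) linearly to brightness in [0, 1]."""
    return min(1.0, max(0.0, (us - floor_us) / (ceil_us - floor_us)))

def run(read_conductance, set_led, alpha: float = 0.1) -> None:
    """Smooth the raw signal so the LED tracks tonic arousal, not sensor noise."""
    smoothed = read_conductance()
    while True:
        smoothed = (1 - alpha) * smoothed + alpha * read_conductance()
        set_led(brightness_from_conductance(smoothed))
```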

Week 3

Readings (recommended in this order): Hama and Tsuda 90; Clynes et al 90; Chap. 2 in Affective Computing, pp. 47-60 (just through the section on expressing emotions); Picard et al 01; Chap. 6 in the AC book. Optional, on the Conductor's Jacket: Marrin articles TR 470 and TR 475.

Questions:
- Name a strong point of the Hama and Tsuda study. Name a weak point of the study.
- It is interesting to use measurement techniques to test for emotional interactions. Do you think love is blocked by lying but not by anger (in the sense Clynes describes)? Support or critique the evidence for this interaction.
- The Gnu York Times prints: "MIT researchers have demonstrated that a computer can recognize eight of your emotions with greater than 80% accuracy." Critique this statement; don't hesitate to be critical.
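
One quantitative handle on the last question: with eight classes, chance performance is 12.5%, so 80% is far above chance, and a critique has to rest on other grounds (a single trained subject, deliberately expressed rather than spontaneous emotions, person-dependent features, an unreported test-set size). A back-of-envelope sketch in Python; the test-set sizes are made-up placeholders:

```python
# Back-of-envelope check on "recognizes eight emotions with >80% accuracy".
# The test-set sizes below are made up; the point is that the headline number
# means little without them, or without the chance level for comparison.

import math

chance = 1.0 / 8  # random guessing over eight classes: 12.5%

def accuracy_ci(acc: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for an accuracy estimate."""
    se = math.sqrt(acc * (1.0 - acc) / n)
    return acc - z * se, acc + z * se

for n in (40, 160, 1000):
    lo, hi = accuracy_ci(0.80, n)
    print(f"n={n:4d}: 80% accuracy, 95% CI ({lo:.2f}, {hi:.2f}); chance {chance:.3f}")
```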

Week 4

Readings:
Forgas and Moylan. Going to the Movies. 1987.
Isen et al. Affect and Creativity. 1987.
Isen et al. Affect and Clinical Problem Solving. 1991.
Clore. Feelings and Judgment. 1992.
Bouhuys et al. Perceiving Faces. 1995.

Questions:
- Jack, the please-his-boss pollster, has been given ten questions with which he must canvass people's opinions. The questions relate to the overall satisfaction that people perceive with his party's political figures and their impact both locally and nationwide. He's not so dishonest that he would modify the questions, but he doesn't think it's wrong to conduct the poll in a way that makes his party's political candidates look as good as possible. (You might disagree about this.) He plans to poll 1000 people nationally by phone and 1000 locally, in person, by some "random" process. Describe three ways Jack can try to bias the opinions he collects by manipulating affect-related factors.
- "Being emotional influences thinking -- it makes your thinking less rational." To some extent, we all know this is true. List two examples from this weeks' readings that support this statement. Then list two examples from this week's readings that contradict it. Justify your examples.
- You've read about "feelings of knowing," "feelings of ease," "feelings of uncertainty," "feelings of familiarity," and other internal signals that are perceived as feelings but seem to have primarily cognitive roles. Pick one of these internal signals and argue why it might be important to implement in a machine. Give an example of how it might improve on existing machine capability. Pick another one and argue against its implementation, also with an example of why it might be undesirable.
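
On the last question, one concrete reading of a machine "feeling of uncertainty" is the entropy of a classifier's predictive distribution, used as a signal to defer rather than act. A minimal sketch, assuming only that some model supplies class probabilities; the 0.8 threshold and the defer() fallback are illustrative, not a recommended design.

```python
# Sketch of a machine "feeling of uncertainty": the normalized entropy of a
# model's predictive distribution, used to decide when to defer to a human
# instead of acting. The 0.8 threshold is an arbitrary illustration.

import math

def uncertainty(probs: list[float]) -> float:
    """Normalized entropy in [0, 1]: 0 = fully certain, 1 = maximally unsure."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))

def act_or_defer(probs: list[float], act, defer, threshold: float = 0.8):
    """Act on the most probable class when confident; otherwise ask for help."""
    if uncertainty(probs) < threshold:
        return act(max(range(len(probs)), key=probs.__getitem__))
    return defer()
```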

Week 5

Readings: Picard, "Toward Computers that Recognize and Respond to Emotion," IBM Systems J (chap. 3 in the AC book); and Kapoor et al., TR 543, "Towards a Learning Companion" (chap. 8).

Project proposal -- please include:
1. A short description of what you hope to build/create/investigate, and a sentence or two about why it is relevant to affective computing (if not obvious).
2. Special materials/equipment/abilities you will need, and whether you have them yet. If not, what help do you still need?
3. What do you hope to be able to conclude from your project?
4. If your answer to 3 is not very precise or very strong, would you like to meet with me to discuss ways to refine your plan? In my experience, students too often bite off too much, when a smaller, focused project would actually be more interesting than a big, ambitious-sounding one that you can't actually do. Please take a moment here and realistically evaluate which subsets of your grand and ambitious ideas might be most doable.
5. Is this project related to another project (or thesis) you are doing? If so, that is fine; I just want to know. Please state how this project differs from the related one.

Please let me know if you need another day or so on the proposal -- I can grant extensions of up to a few days if that helps, but I want at least to know what the holdup is.

Week 6

No class.

Week 7

Readings:
Johnson-Laird. Chap. 20: 369-384.
Chap. 2 (pp. 60-75) and chap. 7 in Affective Computing.
"What does it mean for a machine to 'have' emotions?" Chapter in book "R. Trappl, P. Petta, and S. Payr, eds.: Emotions in Humans and Artifacts." Cambridge: MA, MIT Press, 2002 (in press).
(Optional) Breese & Ball paper handed out in class.

Questions:
- "Emotions are always caused by cognitive evaluations, whether conscious or unconscious." The Johnson-Laird paper argues for this. Can you supply one or two examples based on readings to date, that appear to go against this? What is your opinion as to the truth of this statement?
- Construct a situation (imaginary system/application we might build) where one of the models in chapter 7 would be appropriate to use. For example, one of them is illustrated as being useful for a poker-playing agent. Why is the model you picked good for this situation? How well do you think the model would work in the situation? Does the model have the "right" emotions? Does it operate at the right level(s) (bodily influences, cognitive reasoning, and so forth) needed for the situation/problem you describe? What would it need that it doesn't have yet? (You don't have to build it; this is a thought experiment -- a toy sketch of a rule-based appraisal follows this list.)
- Suppose that you are hired as an expert on affective computing, to test if a machine really "has" emotions. Describe two tests you would perform with the machine. You do not need to limit your interaction to text-only as in the Turing test, but you can assume that if you wish. State any assumptions you are making. What are the strengths of your two tests? What are some weaknesses of your tests?
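
If it helps to make the chapter 7 thought experiment concrete, here is the flavor of a rule-based, OCC-style appraisal reduced to a few lines; the two appraisal dimensions and the rules are made up for illustration and are far cruder than any model in the chapter.

```python
# Toy OCC-flavored appraisal: map a minimal appraisal of an event onto an
# emotion label. The dimensions and rules are illustrative only.

def appraise(desirable: bool, caused_by_self: bool) -> str:
    if desirable:
        return "gratification" if caused_by_self else "joy"
    return "remorse" if caused_by_self else "distress"

# e.g., a poker agent that loses a hand it knows it misplayed:
print(appraise(desirable=False, caused_by_self=True))  # remorse
```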

Week 8

Readings: Fernald; Smith and Scott; Frijda; Kismet.

Questions:
- Suppose you were building a robot that had to learn from people who had no special training or experience in teaching robots or computers. What emotional capabilities would this robot need? Consider:
  - What emotions would it need to recognize?
  - What emotions would it need to express?
  - What internal regulatory emotional functions might it need?
- How do you think these emotional factors would influence the learning process in the robot?

Week 9

Readings: Klein et al. (note: this paper was accepted to Interacting with Computers and should appear any day); Card et al.; Chap. 11 in Goleman; Williams (OK to skim the detail about studies of Type A behavior); McCraty et al.

Questions:
- Briefly describe a computer interaction you or a friend has had that gave rise to one or more emotions, and name the emotion(s). Suppose the machine could have engaged the user in dialogue around the time of that interaction. Write a short dialogue that you think would have been beneficial (for you or for your friend -- you don't have to specify). Does your dialogue address emotion specifically, even if subtly? How well do you think the dialogue you constructed would have worked in reality? Describe a condition under which you would expect it to fail.
- "Computers appear less judgmental" is certainly a believable reason why people report more negative health information to computers than to trained psychiatrists. With your "affective computing" hat on, identify two other factors that you think would be important to pay attention to in designing a medical-information gathering system.
- Heart disease is the number-one killer, at 1.6 million deaths/year (all cancers combined kill ~0.5 million/year). Describe a way (it doesn't have to be implementable yet with current technology) that affective computers might potentially help reduce this number.

Week 10

Readings: Zuckerman et al 81, Ekman and O'Sullivan 91, and Reeve 93.

Questions:
- How is your project progressing? Please jot a few lines telling me what you've accomplished so far.
- Suppose you are called upon to help design the affect-related part of a lie-detection system for use in airports. When you check in as a passenger, it is already routine at the check-in counter to ask each passenger questions such as "Have your bags been out of your immediate control since you packed them? Has any stranger given you a package or present to carry?" It is also routine to walk through a metal detector, and the metal-detection systems in Boston also gather video of you as you walk through. Describe how you might augment one of these interactions (or some other part of the experience, if you prefer) to help catch people who might have malicious, life-threatening intent. What additional questions would you ask if your system involves questioning? What sensors would you use? What behavior is your system aimed at detecting? Be very specific, and give a couple of examples. What do you think would be the strengths of your system? What do you expect would be its weaknesses? What concerns does your system raise?
- A future computer tutor or mentor for kids could potentially be more effective if it could see if the child is interested or bored. Suppose you have a camera and the ability to custom-develop computer vision software. Your programmer is highly talented but has limited time and wants to know which features are most important to implement first. Suggest a set of four or so features that you would place priority on for detecting interest. Justify your answer.
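
To make the feature prioritization in the last question concrete, one common pattern is to turn each candidate feature into a per-frame score and combine the scores; Kapoor et al.'s learning-companion work combined upper-face and posture features in roughly this spirit. The names and weights below are illustrative guesses, not a validated model.

```python
# Hypothetical feature set for detecting a child's interest from video.
# Names, weights, and the linear combination are illustrative placeholders;
# a real system would learn the combination from labeled data.

FEATURE_WEIGHTS = {
    "gaze_on_task": 0.35,              # looking at the screen/tutor vs. away
    "head_lean_forward": 0.25,         # postural engagement
    "brow_and_eyelid_activity": 0.20,  # upper-face actions tied to attention
    "low_fidget_rate": 0.20,           # less restless motion read as interest
}

def interest_score(features: dict[str, float]) -> float:
    """Weighted sum of per-frame feature scores, each assumed in [0, 1]."""
    return sum(w * features.get(name, 0.0) for name, w in FEATURE_WEIGHTS.items())
```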

Week 11

Readings: F. Thomas and O. Johnston, Chaps. 3, 16, and 17 in The Illusion of Life (handed out).

Week 12

Readings: Cowie et al., "Emotion Recognition in Human-Computer Interaction." Most of the in-class discussion will be focused on the sections on speech, so you may want to give those a closer reading.

Week 13

Readings: "Super-Toys Last All Summer Long," the story that inspired Kubrick to make the movie AI, and "R.U.R.," the play that coined the word "robot." Optional: chap. 4 in Affective Computing, especially pp. 124-137.