V1S10NS

V1S10NS is an exploration of Computer Vision applications using both Processing and openFrameworks paired with an Xbox Kinect.

This project began with the goal of building an interactive sculpture that could tell when a person was smiling and respond. I quickly became interested in how computers “see” us, given the number of cameras, sensors, and other devices constantly watching us in a big city.

V1S10NS consists of three mini explorations into OpenCV, a long-established computer vision library that has been ported to several languages and use cases. Each project explores potential uses and critiques of applications powered by computer vision, and how they may come to share our lives in the future.

Exploration 1: Computer Vision

Computer Vision is a video project shot with a Kinect and analyzed with OpenCV in both Processing and openFrameworks. It explores how our computers see us in a world where we are almost always in view of one or more cameras, now even in our own homes.

Background

One of the most common uses of the OpenCV library is to identify people, animals, and objects: the computer is taught their shapes, then image and video input is processed to find a match. While undertaking this project, I was reading Jane Jacobs’s The Death and Life of Great American Cities, in which she discusses “street surveillance” by ordinary people and how this watching bestows a level of civility on the street, since so many eyes are always on it. I began to question how this kind of street surveillance differs from computer surveillance, which we often fail to notice or forget is present, and which of the two is more effective. My first mini-project grew out of that question.

For this project, I wanted to answer the question, “What type of surveillance is more effective, people or machines?”
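
To ground the question, here is a minimal sketch of the kind of detection pipeline the video relies on. It assumes Greg Borenstein’s OpenCV for Processing library and a webcam in place of the project’s Kinect footage, so treat it as an illustration rather than the project’s actual code:

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.Rectangle;

    Capture cam;
    OpenCV opencv;

    void setup() {
      size(640, 480);
      cam = new Capture(this, 640, 480);
      opencv = new OpenCV(this, 640, 480);
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  // stock frontal-face Haar cascade
      cam.start();
    }

    void draw() {
      if (cam.available()) cam.read();
      opencv.loadImage(cam);  // hand the current frame to OpenCV
      image(cam, 0, 0);
      noFill();
      stroke(0, 255, 0);
      for (Rectangle face : opencv.detect()) {  // one rectangle per detected face
        rect(face.x, face.y, face.width, face.height);
      }
    }

Every face in the frame gets a rectangle: the machine’s version of eyes on the street, tireless but indifferent.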

Exploration 2: EmotiControl

EmotiControl seeks to judge how happy, scared, and surprised users are by monitoring their emotional reactions to a series of videos: one happy, one disgusting, one sad. By watching users’ faces during these videos, it is possible to see minute changes in their emotions; a minimal logging sketch of this idea follows the questions below.

  • How could we extrapolate this information further, as an indication of, say, a user’s psychological state or empathy for others?
  • How could these readings be used to make inferences and classifications about users?
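
To make the setup concrete, the sketch below plays a stimulus video and logs a per-frame smile reading with a timestamp, producing the kind of time series these questions are asked of. The file name, the detectSmile() stub, and the per-frame sampling scheme are my assumptions for illustration, not EmotiControl’s actual code:

    import processing.video.*;

    Movie stimulus;
    Table readings = new Table();

    void setup() {
      size(640, 480);
      stimulus = new Movie(this, "happy.mp4");  // hypothetical stimulus clip
      readings.addColumn("ms");
      readings.addColumn("smiling");
      stimulus.play();
    }

    void movieEvent(Movie m) {
      m.read();  // grab new frames as they arrive
    }

    void draw() {
      image(stimulus, 0, 0);
      TableRow row = readings.addRow();
      row.setInt("ms", millis());
      row.setInt("smiling", detectSmile() ? 1 : 0);
    }

    // Stub: in a full sketch this would run a camera frame through an
    // OpenCV cascade, as in the Background sketch below.
    boolean detectSmile() {
      return false;
    }

    void keyPressed() {
      saveTable(readings, "data/reactions.csv");  // dump the time series for analysis
    }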

Background

One subset of OpenCV consists of algorithms that detect faces and, more recently, specific facial features that can hint at a user’s emotion. As I learned more about Computer Vision as it pertains to surveillance, I was fascinated by these algorithms and how they could be used to both help and control people.
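
As a rough sketch of how such a feature reading could be produced, the code below treats a mouth detected in the lower half of a detected face as a naive “smiling” signal. Assumptions: the same OpenCV for Processing library as above, its CASCADE_MOUTH cascade as a crude stand-in for a real smile detector (OpenCV also ships a dedicated haarcascade_smile.xml), and a webcam; this is not necessarily what EmotiControl used:

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.Rectangle;

    Capture cam;
    OpenCV faces, mouths;  // one detector per cascade

    void setup() {
      size(640, 480);
      cam = new Capture(this, 640, 480);
      faces = new OpenCV(this, 640, 480);
      mouths = new OpenCV(this, 640, 480);
      faces.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      mouths.loadCascade(OpenCV.CASCADE_MOUTH);
      cam.start();
    }

    void draw() {
      if (cam.available()) cam.read();
      faces.loadImage(cam);
      mouths.loadImage(cam);
      image(cam, 0, 0);
      noFill();
      for (Rectangle f : faces.detect()) {
        boolean smiling = false;
        for (Rectangle m : mouths.detect()) {
          // a mouth in the lower half of the face counts as a (very naive) smile
          if (m.y > f.y + f.height / 2
              && f.contains(m.x + m.width / 2, m.y + m.height / 2)) {
            smiling = true;
          }
        }
        stroke(smiling ? color(0, 255, 0) : color(255, 0, 0));
        rect(f.x, f.y, f.width, f.height);
      }
    }

Stacking cascades this way is crude, but it mirrors how these systems compose simple detectors into judgments about people.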

While working on this project, I was reading Brave New Worlds, an anthology of dystopian short stories. One piece, Geoff Ryman’s “Dead Space for the Unexpected”, used the idea of complete, ubiquitous surveillance (through cameras, microphones, and biological sensors) as a form of corporate control.

For my second exploration, I wanted to understand how applications of Computer Vision could be used to read users’ emotions and, as a result, control their behavior. While I had already seen examples of emotion-recognition technology used for advertising, few other applications were yet available. In this project, I speculated about what a future world might look like if it used emotion recognition technology to check in on employees’ Emotional Intelligence (EI).

EI is the ability to identify, assess, and control the emotions of oneself, of others, and of groups. Many companies have been exploring how this somewhat intangible aspect of a person’s intellectual, social, and emotional makeup predicts success in the workplace. From the preliminary studies used to test it (usually videos and surveys), they have found that people with a high EI, or EQ, are very likely to succeed at work and drive the company forward.

  • Given our advances in emotion recognition, how important could our ability to master our own emotions and to work with others’ emotions become in the future?
  • If you could have augmented information about how others were feeling, what would you do with it in performance and workplace settings?

Exploration 3: Keeping Up Appearances

Keeping Up Appearances is an interactive installation using face-tracking technology to explore the dystopian consequences of emotion/behavior monitoring.

Throughout these projects, I kept returning to a recurring feeling: if emotion recognition technology, powered by OpenCV algorithms, were taken beyond a human-computer interaction (HCI) tool or a way to collect more robust data, it could become a form of control with serious implications for human rights. By knowing how people are feeling and being able to quantify that information, it becomes possible to set acceptable parameters for when emotions should be felt, how long they should be felt, and more.

Building on my original goal of exploring interactions powered by smiles, I flipped the concept and instead asked: what happens when computers can tell that we aren’t happy enough?
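
To sketch what that could look like in code, the smile signal from the previous sketch can be averaged over a sliding window and policed against an “acceptable” rate. The window size, threshold, and response below are my assumptions, not the installation’s actual parameters:

    import gab.opencv.*;
    import processing.video.*;
    import java.awt.Rectangle;

    Capture cam;
    OpenCV faces, mouths;
    IntList history = new IntList();  // 1 = smile seen this frame, 0 = not
    int window = 150;                 // ~5 seconds at an assumed 30 fps
    float happyEnough = 0.4;          // hypothetical "acceptable" smile rate

    void setup() {
      size(640, 480);
      cam = new Capture(this, 640, 480);
      faces = new OpenCV(this, 640, 480);
      mouths = new OpenCV(this, 640, 480);
      faces.loadCascade(OpenCV.CASCADE_FRONTALFACE);
      mouths.loadCascade(OpenCV.CASCADE_MOUTH);  // crude smile stand-in, as before
      cam.start();
    }

    void draw() {
      if (cam.available()) cam.read();
      faces.loadImage(cam);
      mouths.loadImage(cam);
      image(cam, 0, 0);

      // same naive smile signal as the EmotiControl sketch
      boolean smiling = false;
      for (Rectangle f : faces.detect()) {
        for (Rectangle m : mouths.detect()) {
          if (m.y > f.y + f.height / 2
              && f.contains(m.x + m.width / 2, m.y + m.height / 2)) {
            smiling = true;
          }
        }
      }

      // sliding window of recent observations
      history.append(smiling ? 1 : 0);
      if (history.size() > window) history.remove(0);

      float rate = 0;
      for (int i = 0; i < history.size(); i++) rate += history.get(i);
      rate /= max(history.size(), 1);

      // when the smile rate falls below the acceptable parameter,
      // the installation responds -- here, just an on-screen reprimand
      if (history.size() == window && rate < happyEnough) {
        fill(255, 0, 0);
        text("INSUFFICIENT HAPPINESS DETECTED", 20, 40);
      }
    }

The unnerving part is how little code stands between measurement and enforcement: one running average and one threshold.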
