MS2 Final Project | The Proposal

It’s that time of year again: time to start curating ideas for Major Studio final projects.

At Parsons, we go through an iterative process, producing a final project over the span of 6-8 weeks and presenting it at the end.

This time around, I chose to continue exploring themes I had articulated in my first few projects this semester: ubiquitous & interactive surveillance, emotion recognition, and civil rights/societal implications.

For our first assignment, we were asked to create a proposal video outlining our idea or ideas for a final project, and to research the academic, cultural, and artistic body of work and precedents that could help inform our direction.

Discovery

Domain Mapping, Research Questions & Concept Statement

I had several questions left over from my projects this semester. I had explored several aspects of computer vision, originally intending to create more meaningful interactions for users by applying different methods of computer vision and facial recognition.

However, as I explored further and dove into the history of these technologies, their applications, and the existing art depicting them, I found myself experimenting with small projects around emotion recognition, surveillance, and ultimately control.

Control of people, and of certain groups of people, became central to two of my pieces, EmotiControl and Keeping Up Appearances. The first explored how emotion recognition could be used to track people’s reactions to media; the second questioned how emotion recognition could be used to control people’s moods by penalizing a lack of smiles.

I was also very surprised by how readily my classmates acquiesced to surveillance when the camera was fun and interactive; they went out of their way to play with it.

These were the two areas I wanted to explore for my final project proposals.

Domain Mapping

To begin this project, I did a lot of research around emotion recognition, surveillance through both cameras and online data monitoring, and uses of fun and interactive technology.

I was able to map my areas of interest into the following domains:

[Domain map: Final Project First Prototypes]

Research Questions

I also developed some research questions I wanted to explore through my project.

  • How might we better understand the implications of the marriage of ubiquitous surveillance, emotion recognition, and interactive collection mechanisms powered by artificial intelligence?
  • What potential futures will increasingly sophisticated and interconnected digital systems bring, especially around issues of predictive policing, discrimination, and human rights?
  • How will emotion-recognition technology play a role in everyday decisions about people’s character: jobs/hiring, security & policing, government?
    • How will people feel about being classified by a computer?
    • Is it fair to use these judgments to make decisions about people’s lives?
  • What does it mean when there is artificial intelligence powering surveillance?
    • How will this impact decisions on security and human rights?
    • How can fun and interactive surveillance make people more likely to allow themselves to be watched?

Concept Statement

I proposed different explorations I might make into this area, informed by an overarching concept for how these projects might answer the above questions.

INFORM is a series of critical experiments extrapolating potential future applications of the collection and usage of facial and emotion recognition data.

I seek to critically explore how people, governments, and institutions may use new interactive technologies to collect and algorithmically correlate this information and what effects it will have on people’s everyday lives.

Proposal Video

Here is my proposal video outlining my ideas for the final project.

Testing Out the Technology & Code to Connect It Together

To begin working on this, I first needed to teach myself a little Arduino and openFrameworks, figuring out how to wire up the aforementioned pan/tilt servo kit to a webcam and have it follow faces. I also needed to figure out a reliable way to track people’s emotions (since everyone has different facial features and proportions), and how to hook up a commercially purchased laser pointer to an Arduino board.
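To give a sense of the moving parts, here’s a minimal sketch of what the Arduino side of a rig like this could look like. The wiring here is an assumption for illustration, not my exact setup: two hobby servos on pins 9 and 10, with the computer streaming “pan,tilt” angle pairs over serial.

    // Minimal Arduino sketch (illustrative wiring): pan/tilt servos on
    // pins 9/10, host sends lines like "90,45\n" with angles in degrees.
    #include <Servo.h>

    Servo panServo;   // horizontal axis
    Servo tiltServo;  // vertical axis

    void setup() {
      Serial.begin(9600);
      Serial.setTimeout(50);  // don't block long on partial lines
      panServo.attach(9);
      tiltServo.attach(10);
      panServo.write(90);     // start centered
      tiltServo.write(90);
    }

    void loop() {
      // Read "pan,tilt\n" pairs sent by the face-tracking app.
      if (Serial.available() > 0) {
        int pan = Serial.parseInt();   // parseInt skips the comma
        int tilt = Serial.parseInt();
        if (Serial.read() == '\n') {   // only act on a complete line
          panServo.write(constrain(pan, 0, 180));
          tiltServo.write(constrain(tilt, 0, 180));
        }
      }
    }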

Here are some of my early results.

The Pan/Tilt Assembly & a Facial Recognition Script to Make a Webcam Track Faces

I managed to find a pan/tilt script that followed faces through a Processing tutorial; however, I wanted to use openFrameworks because of its more robust library of facial and emotion recognition features and functionality. So I rewrote the script for openFrameworks, with limited success.
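For the curious, a stripped-down version of that openFrameworks side might look something like the sketch below. It leans on the ofxCv addon’s Haar-cascade ObjectFinder and the “pan,tilt” serial protocol from the Arduino sketch above; the cascade path and serial port name are placeholders, not my exact setup.

    // ofApp.h -- minimal face-follow sketch (assumes the ofxCv addon)
    #pragma once
    #include "ofMain.h"
    #include "ofxCv.h"

    class ofApp : public ofBaseApp {
    public:
        void setup();
        void update();
        void draw();

        ofVideoGrabber cam;
        ofxCv::ObjectFinder finder;
        ofSerial serial;
    };

    // ofApp.cpp
    #include "ofApp.h"

    void ofApp::setup() {
        cam.setup(640, 480);
        // Haar cascade that ships with OpenCV, copied into bin/data.
        finder.setup("haarcascade_frontalface_default.xml");
        finder.setPreset(ofxCv::ObjectFinder::Fast);
        // Port name is a placeholder; find yours with serial.listDevices().
        serial.setup("/dev/tty.usbmodem1411", 9600);
    }

    void ofApp::update() {
        cam.update();
        if (!cam.isFrameNew()) return;
        finder.update(cam);
        if (finder.size() > 0) {
            // Map the first face's center into 0-180 degree servo angles.
            ofRectangle face = finder.getObjectSmoothed(0);
            int pan  = (int) ofMap(face.getCenter().x, 0, cam.getWidth(), 180, 0); // mirrored
            int tilt = (int) ofMap(face.getCenter().y, 0, cam.getHeight(), 0, 180);
            string msg = ofToString(pan) + "," + ofToString(tilt) + "\n";
            serial.writeBytes((unsigned char*) msg.c_str(), msg.length());
        }
    }

    void ofApp::draw() {
        cam.draw(0, 0);
        finder.draw(); // outline any detected faces
    }

Note that mapping face position straight to servo angles like this tends to oscillate once the camera itself is moving; smoothing the angles, or stepping the servos incrementally toward the face, usually behaves better.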

The Laser Pointer

After playing with the pan/tilt motor, this was surprisingly easy to achieve. The only annoying realization I had about using a store-bought laser pointer is that (a) it isn’t very powerful, and (b) you have to hardwire the on switch, which can burn it out very quickly. Both are issues I’ll have to address down the road.
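One possible way around the burnout problem, sketched below under assumptions: rather than hardwiring the switch closed, power the laser through a transistor switched by a PWM pin, so it can run at partial power and shut off between uses. The pin number, duty cycle, and serial commands here are illustrative.

    // Standalone test sketch (illustrative wiring): laser driven through
    // an NPN transistor on PWM pin 3, so it never stays hardwired on.
    const int LASER_PIN = 3;     // PWM-capable pin (assumed wiring)
    const int LASER_DUTY = 128;  // ~50% duty cycle to limit heat

    void setup() {
      pinMode(LASER_PIN, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      // Host sends 'L' to switch the laser on, 'l' to switch it off.
      if (Serial.available() > 0) {
        char cmd = Serial.read();
        if (cmd == 'L') analogWrite(LASER_PIN, LASER_DUTY);
        if (cmd == 'l') analogWrite(LASER_PIN, 0);
      }
    }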

Next Steps

Here are the next steps I will be working on now that the tech is roughly working.

  • Start building in interactive features (synthesized voice, reaction chains, etc.) and test with users.
  • Paper & cardboard prototype “outfits” for the robot. Test with users.
  • Mount the red laser in the unit and figure out how to aim it without hitting people in the eyes…
