RAINBOW
We already live in augmented reality; the user experience just needs to align with our expectations.
Rainbow is a prototype augmented-reality solution for organizing virtual information in real spaces. By collecting digital content into layers, Rainbow helps users navigate between multiple augmented experiences tied to a single location.
It was created as part of the 2017 Reality Virtually Hackathon at the MIT Media Lab. I partnered with four other students and educators from different universities, all interested in the intersection of augmented reality, privacy, identity, and trust, to create an MVP prototype, pitch deck, and video demo over the course of the two-day event.
Team Members
- Rogue Fong, New York University
- Adam Sauer, Ohio State University
- Yunxin Fan, Harvard University
- Fahad Punjwani, MIT
Check out the project on Devpost and GitHub.
Project Summary
Today, we rely on our smartphones to tap into our ever-present ‘virtual reality’, limited by the constraints of a palm-sized window as we access information and connect with one another across media.
We believe that accessing information should be effortless, ‘augmenting’ the experience of both interacting with virtual data and the world around us. But in a world of easily accessible and constant digital augmentation, how might we ensure that our virtual reality doesn’t pollute our physical one?
How can we be sure what is true and what is real in a landscape augmented by ubiquitous digital information?
To begin to answer these questions we created Rainbow, a design pattern for shareable mixed reality.
Layered Reality

To address this challenge, we designed a paradigm for ‘layered reality’: layers of mixed and augmented reality that can be programmed separately.
They act as virtual layers over a physical space, harnessing web APIs, geo-location, and local IoT devices to enhance one or more objects in that space with virtual elements.
How do layers work?
- One space can have many Layers.
- Unless explicitly programmed otherwise, a person can view only one Layer at a time.
- Each Layer carries several types of rights: content production and delivery, interactivity, purchasing, editing, and administration (modeled in the sketch after this list).
- Layers can be owned and controlled by private companies or ventures, individuals, governments, or NGOs.
- There can also be common Layers: virtual layers where any user can freely edit, add, or remove content and interactivity. These sandbox-style layers are typically controlled through a distributed form of moderation and governance, although less regulated varieties can also exist.
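The project stops at this conceptual level, so purely as an illustration (the type names, fields, and rights flags below are hypothetical, not taken from the actual prototype), the Layer model and its rights might be sketched in C++ roughly like this:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Bit flags for the per-Layer rights enumerated above.
enum LayerRights : std::uint8_t {
    RIGHT_VIEW     = 1 << 0,
    RIGHT_CONTENT  = 1 << 1,  // content production and delivery
    RIGHT_INTERACT = 1 << 2,
    RIGHT_PURCHASE = 1 << 3,
    RIGHT_EDIT     = 1 << 4,
    RIGHT_ADMIN    = 1 << 5,
};

// Who owns and controls a Layer.
enum class OwnerKind { Individual, Company, Government, NGO, University, Common };

struct Layer {
    std::string  id;
    std::string  name;
    OwnerKind    owner;
    std::uint8_t defaultRights;  // rights granted to any authorized user
};

// One physical space can host many Layers, but a viewer sees only one
// at a time, tracked here per viewing session.
struct Space {
    std::string        name;
    std::vector<Layer> layers;
    std::size_t        activeLayer = 0;  // index of the Layer being rendered
};

// A "common" Layer grants edit rights to everyone by default.
bool canEdit(const Layer& l) {
    return l.owner == OwnerKind::Common || (l.defaultRights & RIGHT_EDIT);
}
```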
Examples of Layer Types
- Public Layer: Controlled by a municipal government. Provides basic directions to subways, common attractions, and public bathrooms, plus interactivity with municipal data streams (311, 911, etc.).
- Public/Private Layer: Controlled by the local Kendall Square district. Provides information about local businesses (promotions, coupons, hours of operation) and events happening in the area.
- Private Layer owned by MIT: Controlled privately by a local university. This Layer can include tips and tricks from MIT students about places, restaurants, and more, and can have separate sub-layers that allow for ad-hoc group communication, private messaging, urban gaming, etc.
- Pop-up Layer: Controlled privately and temporarily by Blizzard Entertainment: a ‘pop-up’ Layer hosting an urban gaming experience for users subscribed to Blizzard’s World of Warcraft.
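Continuing the hypothetical sketch above, these four example Layers could be registered for a single space like so (the ownership categories and rights assignments are illustrative guesses):

```cpp
// Continuing the sketch above: the four example Layers for one space.
Space kendallSquare{"Kendall Square", {
    {"public",  "City Public Layer",       OwnerKind::Government, RIGHT_VIEW | RIGHT_INTERACT},
    {"kendall", "Kendall Square District", OwnerKind::Company,    RIGHT_VIEW | RIGHT_INTERACT},
    {"mit",     "MIT Campus Layer",        OwnerKind::University, RIGHT_VIEW | RIGHT_CONTENT},
    {"wow",     "WoW Pop-up Event",        OwnerKind::Company,    RIGHT_VIEW | RIGHT_INTERACT},
}};
```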
Presentation & Prototype
For our prototype, we illustrated how two layers could co-exist in a public square by building an experience where users interact with different types of data geo-tagged to an IoT device embedded in that square.
We used an NFC device to simulate authorization (‘logging into a layer’) and let users access two layers of reality through an MR headset: the Public Layer, showing real-time local weather collected from the embedded IoT sensors, and a Pop-Up Layer, showing a location-bound ‘in-game’ special event for World of Warcraft that only registered players could access while in this space.
Our MVP was built using the Meta mixed-reality headset (no longer in production), Unity, the Mapbox SDK, an Arduino, an NFC reader and cards, and a temperature/humidity sensor.
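The embedded code itself isn’t reproduced in this write-up, but a minimal Arduino sketch for that device might look like the following, assuming an MFRC522 NFC reader and a DHT22 temperature/humidity sensor (the specific modules, pin wiring, and serial protocol here are my assumptions, not details from the project):

```cpp
// Hypothetical sketch of the embedded "IoT device" in the public square:
// tapping an NFC card simulates logging into a Layer, and a DHT22 sensor
// supplies the Public Layer's real-time weather data over serial.
#include <SPI.h>
#include <MFRC522.h>
#include <DHT.h>

#define SS_PIN  10  // MFRC522 chip select
#define RST_PIN 9   // MFRC522 reset
#define DHT_PIN 2   // DHT22 data pin

MFRC522 nfc(SS_PIN, RST_PIN);
DHT dht(DHT_PIN, DHT22);

unsigned long lastReading = 0;

void setup() {
  Serial.begin(9600);
  SPI.begin();
  nfc.PCD_Init();
  dht.begin();
}

void loop() {
  // "Log into a layer": when a card is tapped, report its UID so the
  // headset app (listening over serial) can switch the active Layer.
  if (nfc.PICC_IsNewCardPresent() && nfc.PICC_ReadCardSerial()) {
    Serial.print("AUTH:");
    for (byte i = 0; i < nfc.uid.size; i++) {
      Serial.print(nfc.uid.uidByte[i], HEX);
    }
    Serial.println();
    nfc.PICC_HaltA();  // stop reading this card until it is re-presented
  }

  // Public Layer data: stream a local weather reading every five seconds.
  if (millis() - lastReading > 5000) {
    lastReading = millis();
    float tempC    = dht.readTemperature();
    float humidity = dht.readHumidity();
    if (!isnan(tempC) && !isnan(humidity)) {
      Serial.print("WEATHER:");
      Serial.print(tempC);
      Serial.print(",");
      Serial.println(humidity);
    }
  }
}
```

In this arrangement, the Unity application would listen on the serial port, switch the active Layer when it sees an AUTH line with a recognized card UID, and feed WEATHER lines into the Public Layer’s display.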