VRCodes – via @medialab

VR Codes
Andy Lippman and Grace Woo
VR Codes are dynamic data invisibly hidden in television and graphic displays. They allow a display to present visual information to viewers in an unimpeded way while simultaneously transmitting real-time data to a camera. Our intention is to make social displays that many people can use at once; using VR codes, users can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future where every display in the environment contains latent information embedded in VR codes.

Viral Spaces, MIT Media Lab
VRCodes are currently being developed by Pixels.IO as a spinoff of the Viral Spaces group.

Envision a world where inconspicuous and unobtrusive display surfaces act as general digital interfaces which transmit words and pictures as well as machine-compatible data. They also encode relative orientation and positioning. Any display can be a transmitter and any phone can be a receiver. Further, data can be rendered invisibly on the screen.

VRCodes comprise the design, implementation, and evaluation of a novel visible-light communications architecture based on undetectable codes, embedded in a picture, that are easily resolved by an inexpensive camera. The software-defined interface creates an interactive system in which any aspect of the signal processing can be dynamically modified to fit changing hardware peripherals as well as the demands of the desired human interaction.
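One way such codes can remain undetectable to viewers is flicker fusion: a display alternates two frames faster than the eye can follow, so a person perceives only their average, while a camera sampling individual frames sees the modulation. The following is a minimal illustrative sketch of that idea, not the published VRCodes implementation; the function names and the ±delta encoding are assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of the flicker-fusion idea behind display-embedded
# codes (NOT the published VRCodes scheme): two frames alternate rapidly,
# so a viewer perceives their average while a camera recovers the payload.

def embed(base_frame, bits, delta=8):
    """Return two frames whose temporal average equals base_frame and
    whose per-pixel difference carries one bit."""
    mod = np.where(bits, delta, -delta).astype(np.int16)
    f1 = (base_frame.astype(np.int16) + mod).clip(0, 255).astype(np.uint8)
    f2 = (base_frame.astype(np.int16) - mod).clip(0, 255).astype(np.uint8)
    return f1, f2

def decode(f1, f2):
    """Recover the bits from the sign of the frame difference."""
    return f1.astype(np.int16) - f2.astype(np.int16) > 0

base = np.full((2, 2), 128, dtype=np.uint8)      # mid-gray image
bits = np.array([[True, False], [False, True]])  # example payload
f1, f2 = embed(base, bits)

# What the eye integrates is just the base image...
assert np.array_equal((f1.astype(int) + f2.astype(int)) // 2, base)
# ...while a frame-sampling camera recovers the payload.
assert np.array_equal(decode(f1, f2), bits)
```

In a real system the modulation would be shaped to survive display refresh rates, camera exposure, and perspective distortion; this sketch only captures the perceptual-averaging trick.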

This design of a visual environment that is rich in information for both people and their devices overcomes many of the limitations imposed by radio frequency (RF) interfaces. It is scalable, directional, and potentially high capacity. We demonstrate it through NewsFlash, a multi-screen set of images where each user’s phone is an informational magnifying glass that reads codes arranged around the images.

See the MIT Media Lab PLDB entry, or contact grace@pixels.io.

VRCodes was initiated by Grace Woo in the MIT Media Lab as part of her PhD thesis. Special thanks to Andy Lippman, Ramesh Raskar, Gerald Sussman, Vincent Chan, Szymon Jakubczak, and Eyal Toledano.

Recent uses of VRCodes
Newsflash

Grace Woo, Andy Lippman

Newsflash shows a large array of screens that can be used in a public environment. Users can point their phone at a screen to pull more data from the front pages in front of them.

MIT camera uses lasers to capture images from around corners

The camera fires laser pulses at a wall, which bounces them into a room. The beams then reflect off objects and people before re-emerging and striking a detector. The detector takes measurements every few picoseconds, or trillionths of a second.

The camera does this several times, bouncing light off several different spots on the wall to cover several angles.

The system then compares the time at which each light beam returns to the detector (and their angle), to piece together a picture of the room’s geometry. It’s a bit like ultrasound, or the way Microsoft’s Kinect uses a bunch of infrared dots to determine 3D shapes.
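The arithmetic that turns timing into geometry is straightforward: light travels roughly 0.3 mm per picosecond, so each detector timestamp maps directly to a path length. A minimal sketch of that conversion (the function name is illustrative, not from MIT's code):

```python
# Sketch of the time-of-flight arithmetic described above: a detector
# timestamp in picoseconds is converted to the total distance the pulse
# traveled. Assumes a single straight-line round trip for simplicity.
C = 299_792_458.0  # speed of light, m/s

def path_length_m(round_trip_ps):
    """Total distance (meters) light traveled for a pulse detected
    round_trip_ps picoseconds after emission."""
    return C * round_trip_ps * 1e-12

# A pulse returning after 20,000 ps (20 ns) has traveled about 6 m in
# total, i.e. roughly 3 m out and 3 m back for a direct reflection.
print(round(path_length_m(20_000), 2))  # → 6.0
```

Picosecond timing is what makes the technique viable: one picosecond of timing error corresponds to only about 0.3 mm of path length, which is why the femtosecond pulses and fast detector matter. The actual around-the-corner reconstruction additionally inverts the multi-bounce geometry across many wall spots, which this sketch does not attempt.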

By Mark Brown wired.co.uk


Researchers at the Massachusetts Institute of Technology have built a camera that can see around corners, by bouncing bursts of laser light off doors or walls.

The system works like a periscope. But instead of mirrors, it uses ordinary walls, doors or floors; instead of light, the camera is equipped with a femtosecond laser that emits bursts of light so short that their duration is measured in quadrillionths of a second.

My Revised “Most Important Task First” Model

Are you more focused and energetic in the morning?
Doing your “Most Important Task” first assumes that you’re most focused first thing in the morning. The idea is to shift this big task to the time when your mental powers are at their height. If you’re naturally inclined to be more focused later in the day, this model might not work for you. You’ll want to calibrate your MIT time to your natural creative rhythms.


Step 1: Spend 30 minutes scanning email and responding to urgent items.

Step 2: Turn off email and other distractions. Focus for 2-3 hours on completing your “Most Important Task.”

Step 3: Take a lunch break away from your desk. Leaving your computer and recharging is the key to being productive after your MIT time.

Step 4: Devote the post-lunch day to taking care of ongoing tasks and other “reactionary work” that requires less mental stamina.

The Caveat

Although tackling hard work first seems like a no-brainer, I did have to alter the model a bit for it to work for me, which made me realize that this approach really depends on your personality. For some, it may be an easy switch that will exponentially increase productivity, but for others, it might cause extra stress.