How Your Device Knows Your Life through Images

Published on TechnologyReview.com

New research in neural networks may let computers identify our daily activities more accurately than the apps currently on the market, which rely on signals such as GPS location and heart rate. A new computer model has achieved about 83 percent accuracy in identifying the activities shown in real-life images, and with only a small amount of additional training it could do the same for any new user it encounters.

Led by Georgia Tech graduate students Daniel Castro and Steven Hickson, researchers have created an artificial neural network designed to identify scenes in so-called “egocentric” photographs taken from the user’s point of view. These usually come from wearable cameras like Narrative Clip, MeCam, Google Glass, and GoPro, but regular cell-phone photos often work as well. The team trained the network on a set of about 40,000 images taken by a single volunteer over a six-month period. The volunteer manually labeled each image with one of 19 basic activity categories, including driving, watching TV, family time, and hygiene.
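
To give a sense of what that training step might look like, here is a minimal sketch of fine-tuning an off-the-shelf image classifier on labeled egocentric photos. The folder of images sorted by activity label and the choice of a pretrained ResNet backbone are assumptions made for illustration; this is the general technique, not the researchers’ own code.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained CNN
# to classify egocentric photos into 19 activity labels with PyTorch.
# The dataset layout (one subfolder per activity label) is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

NUM_ACTIVITIES = 19  # e.g., driving, watching TV, family time, hygiene, ...

# Standard ImageNet-style preprocessing for the pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labeled egocentric images, one subfolder per label
train_data = datasets.ImageFolder("egocentric_images/train", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and swap in a 19-way output layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIVITIES)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```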

A separate learning algorithm combines the neural network’s guesses with metadata about the day and time at which the image was captured. This allows the network to learn common associations between activities and even make predictions about the user’s upcoming schedule.
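
One simple way to picture that second stage is a classifier that takes the network’s per-activity probabilities alongside basic time features such as the day of the week and hour of the day. The sketch below uses a random forest for that fusion step purely as an illustrative choice; the exact feature layout and model are assumptions, not the team’s published method.

```python
# Illustrative sketch of fusing CNN outputs with capture-time metadata.
# The random forest and feature layout are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_features(cnn_probs, timestamps):
    """Stack the CNN's 19 activity probabilities with simple time features."""
    time_feats = np.array([[t.weekday(), t.hour] for t in timestamps])
    return np.hstack([cnn_probs, time_feats])

def train_fusion(cnn_probs, timestamps, labels):
    """cnn_probs: (n_images, 19) softmax outputs; timestamps: datetimes;
    labels: ground-truth activity indices supplied by the volunteer."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(fuse_features(cnn_probs, timestamps), labels)
    return clf
```

Because the time features carry information about routine (commutes at 8 a.m., TV in the evening), a model built this way can pick up on those regularities and, in principle, anticipate what the user is likely to be doing next.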