OREANDA-NEWS. June 02, 2016. A human-scale “Transformer” robot. A psychedelic, animatronic giraffe. Rideable cupcakes and loads of carnival food. It was all on display last week at Maker Faire.

Thousands of artists, engineers and hobbyists descended on the 11th annual Maker Faire Bay Area in San Mateo, Calif., to show off their latest prototypes, homemade products and weekend projects.

NVIDIA was there, too, adding a dose of autonomy with our Jetson TX1 embedded computing module.

Experts and novices alike watched as our 10-watt platform generated a real-time 3D model of our booth and the surrounding area using a Stereolabs ZED passive stereo camera. Jetson also identified objects at high frame rates with the Deep Visualization Toolbox, and classified household objects using Caffe with the AlexNet neural network trained on the ImageNet dataset.
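For readers curious what the classification half of that demo looks like in code, here is a minimal sketch using Caffe's Python interface. The model paths, label file and test image are assumptions; in practice you would fetch the BVLC AlexNet weights and ImageNet synset labels with Caffe's own download scripts.

    import numpy as np
    import caffe

    # Assumed paths to the AlexNet deploy definition, pretrained BVLC
    # weights and ImageNet synset labels, as laid out in a Caffe checkout.
    MODEL_DEF = 'models/bvlc_alexnet/deploy.prototxt'
    MODEL_WEIGHTS = 'models/bvlc_alexnet/bvlc_alexnet.caffemodel'
    LABELS = 'data/ilsvrc12/synset_words.txt'

    caffe.set_mode_gpu()  # run inference on the Jetson's integrated GPU
    net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)
    net.blobs['data'].reshape(1, 3, 227, 227)  # single-image batch

    # Caffe expects C x H x W, BGR-ordered, 0-255 pixel data.
    # (ImageNet mean subtraction is omitted here for brevity.)
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))     # H x W x C -> C x H x W
    transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
    transformer.set_raw_scale('data', 255)           # [0, 1] -> [0, 255]

    image = caffe.io.load_image('mug.jpg')           # hypothetical test image
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    probs = net.forward()['prob'][0]                 # softmax over 1,000 classes
    labels = np.loadtxt(LABELS, str, delimiter='\t')
    for i in probs.argsort()[::-1][:5]:              # print top-5 predictions
        print('%.3f  %s' % (probs[i], labels[i]))

The same script runs unmodified on a desktop GPU or on the TX1 itself; Caffe simply targets whatever CUDA device is available.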

Drone Racing at Maker Faire

Just past the funnel cakes at the packed event center, a large tent housed grandstands and a safety net, which separated onlookers from a 3D drone racing circuit. Competitors used VR headsets and drone-mounted analog cameras to navigate a complex set of turns and LED-lit checkpoints at high speed.

Like at any good NASCAR race, spectators thrilled at the occasional crash into a checkpoint ring. But the sheer speed and reaction times needed to steer the drones impressed even the most sophisticated visitors.

Dustin Franklin, Jetson evangelist and robotics engineer, wowed an audience of 150 makers by explaining how to add autonomous navigation to robotic projects. His message: artificial intelligence is changing the embedded computing landscape, from Fortune 500 businesses to hackers’ garages.

Among those who swung by our booth to talk about Jetson and deep learning were corporate autonomous driving researchers, enterprise data center architects and virtualization experts, as well as a San Francisco resident who uses his Jetson TK1 to notify him when the parking spot in front of his apartment becomes vacant.
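The post doesn't say how that parking-spot watcher is built, but as a rough sketch of one plausible approach, you could point a camera at the curb and run OpenCV background subtraction over a fixed region of interest on the Jetson. Everything here, from the camera index to the ROI coordinates and the occupancy threshold, is a hypothetical placeholder, and the print statement stands in for whatever notification the real project sends.

    import cv2

    # Hypothetical region of interest covering the parking spot, in pixels.
    X, Y, W, H = 200, 300, 240, 120
    OCCUPIED_RATIO = 0.05  # assumed: >5% foreground pixels means a car is there

    cap = cv2.VideoCapture(0)  # assumed camera index
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500)

    spot_was_occupied = True
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[Y:Y + H, X:X + W]
        mask = subtractor.apply(roi)        # foreground pixels come back as 255
        occupied = (mask == 255).mean() > OCCUPIED_RATIO
        if spot_was_occupied and not occupied:
            print('Parking spot just opened up!')  # stand-in for a real alert
        spot_was_occupied = occupied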