OREANDA-NEWS. February 10, 2016.  “All clear.” That’s the terrific news Jeet Samarth Raut’s mother heard after a radiological scan.

Two weeks later, a second opinion revealed breast cancer. Certain that technology can do better, the young entrepreneur is using deep learning software powered by NVIDIA GPUs to reduce the number of incorrect diagnoses.

Whether in Raut’s rural Illinois hometown (where his mother began treatment and recovered) or in developing countries around the world, accurate diagnoses can be hampered by failures in scanning, perception and interpretation.

Raut and fellow entrepreneur and Columbia University alum Peter Wakahiu Njenga co-founded Behold.ai, a startup based in New York that’s trying to make it easy for healthcare practitioners everywhere to more accurately identify diseases from ordinary radiology image data.

“A radiologist is looking for patterns on a scan, but it’s done manually and there’s such room for error,” said Raut, who spent several years working as a research assistant at Stanford University’s Phonetics Lab, Life-span Development Lab, and Computers and Cognition Lab.

“Computer vision has gotten really good at this, so we’re using technology to make these diagnoses more accurate by drawing from bigger datasets,” he said.

Behold.ai’s ultimate goal is to build a highly portable system, with gear that can be used in developing regions, low-income areas and places where healthcare access is sparse.

A Personal Mission

Behold.ai’s software uses neural network architectures running on NVIDIA GPUs to train computers on thousands of existing medical images. These are tagged as healthy or diseased, and the system absorbs feedback from radiologists to improve its tagging accuracy as it encounters more examples.
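
Behold.ai hasn’t published its code, so the following is only a minimal sketch of this kind of supervised training loop, written in PyTorch; the dataset, model and every identifier are stand-ins, not the company’s actual system.

```python
# Illustrative sketch only -- the data, labels and model below are hypothetical
# placeholders, not Behold.ai's code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for radiology images tagged 0 (healthy) or 1 (diseased).
images = torch.randn(64, 1, 224, 224)           # grayscale scans
labels = torch.randint(0, 2, (64,)).float()     # radiologist-provided tags
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

model = nn.Sequential(                           # placeholder classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)   # compare prediction to the tag
        loss.backward()                          # feedback adjusts the model's weights
        optimizer.step()
```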

Healthcare practitioners in the field would no longer have to rely solely on their training and individual experience to spot diseased tissue. When interpreting visual data from MRIs, CT scans and retinal images, they’d get an assist from the large-scale visual recognition work performed by computers. That would let technicians carry out health assessments anywhere from refugee camps to remote villages.

“Our technology is applicable in developing and low-income areas,” said Njenga, who previously worked at Facebook on machine learning. “You can take an image of an affected area, compare it with a database of similar images, get a diagnosis and start treatment.”

Deep Learning Makes It Work

After a patient has a scan taken at a medical imaging center, it’s sent both to a radiologist for review and to Behold.ai’s servers, where the company’s deep learning technology analyzes it for anomalies. The scan is returned to the radiologist with tags, generated by models trained on giant datasets, listing the suspected ailments.
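
As a rough illustration of that tagging step, the sketch below runs a trained model over one preprocessed scan and attaches any findings that clear a confidence threshold. It assumes a multi-label model with one output per finding; the finding names and threshold are invented for the example.

```python
# Hypothetical tagging step -- the label set, threshold and model interface are
# illustrative assumptions, not Behold.ai's API.
import torch

FINDINGS = ["mass", "calcification", "effusion"]   # assumed label set
THRESHOLD = 0.5

def tag_scan(model: torch.nn.Module, scan: torch.Tensor) -> list[str]:
    """Return the suspected ailments for one preprocessed scan tensor."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(scan.unsqueeze(0))).squeeze(0)
    return [name for name, p in zip(FINDINGS, probs.tolist()) if p > THRESHOLD]
```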

Having the medical images read by two parties reduces the chances of false positives or false negatives, Raut said.

To identify abnormalities in medical images, Behold.ai uses a class of artificial neural networks called convolutional neural networks, or ConvNets. Inspired by the layered organization of the brain’s visual cortex, ConvNets are specialized for image processing tasks and use pattern recognition to perform object classification.
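
For readers unfamiliar with the idea, here is a minimal ConvNet of the general kind described: stacked convolution and pooling layers that learn visual patterns, followed by a small classifier head. The architecture is illustrative only, not Behold.ai’s network.

```python
import torch.nn as nn

# A tiny example ConvNet: convolution + pooling layers extract visual patterns,
# a linear head classifies the result. Purely illustrative architecture.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample, keep strongest responses
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),           # e.g. healthy vs. diseased
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```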

“We harness the massive parallelism of high-performance NVIDIA GPUs to speed up the process,” Njenga said. Advances in GPU programming enabled Behold.ai to train a deep ConvNet with 50 million parameters. The underlying algorithms use cuDNN, NVIDIA’s library of GPU-accelerated software building blocks for deep neural networks, Njenga said.
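
In practice, frameworks such as PyTorch call into cuDNN automatically whenever convolutions run on an NVIDIA GPU. The short sketch below, which reuses the hypothetical TinyConvNet from the earlier example, shows the general pattern of moving a model and a batch to the GPU and letting cuDNN pick fast convolution kernels; it says nothing about Behold.ai’s specific setup.

```python
import torch

# Generic pattern: convolutions on an NVIDIA GPU execute via cuDNN kernels
# under the hood. Nothing here is specific to Behold.ai.
torch.backends.cudnn.benchmark = True        # let cuDNN choose the fastest conv algorithms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyConvNet().to(device)             # hypothetical model from the earlier sketch
batch = torch.randn(32, 1, 224, 224, device=device)
logits = model(batch)                        # forward pass runs on the GPU when available
```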

Behold.ai is looking to pilot its technology next year with an established healthcare provider.