Kernels of important information can be buried deep in visual data, yet pinpointing them, particularly in huge datasets, is challenging. A new machine-learning vision platform that employs sparsely coded, hierarchical, and lateral connections within a neural network modeled on the human visual system is helping to parse this dense data and find those critical kernels of information. Developed by a team of scientists at Los Alamos National Laboratory, this Video Analysis and Search Technology, or VAST, teaches itself: the more training data it consumes, the better it becomes at identifying patterns, detecting objects, mitigating risk, and solving problems for its end users. Descartes Labs, an artificial intelligence company, applies VAST technology to satellite imagery for commercial purposes.
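The sparse-coding idea mentioned above can be illustrated in miniature. The sketch below is not VAST's actual algorithm; it is a minimal, generic sparse-coding step (iterative shrinkage-thresholding over a random dictionary), showing how an input signal gets represented by only a handful of active units, the property that makes sparse networks efficient on dense visual data. All names and sizes here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: one classic sparse-coding routine (ISTA), not the
# Laboratory's implementation. D and x are random stand-ins for a learned
# dictionary and an image patch.

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))           # dictionary: 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)           # normalize each atom to unit length
x = rng.normal(size=64)                  # input signal (e.g., an image patch)

lam = 0.1                                # sparsity penalty weight
L = np.linalg.norm(D, 2) ** 2            # step-size bound (Lipschitz constant)
a = np.zeros(256)                        # sparse code, initially all zeros

for _ in range(200):                     # iterative shrinkage-thresholding
    grad = D.T @ (D @ a - x)             # gradient of the reconstruction error
    a = a - grad / L                     # gradient step
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold

print(f"nonzero coefficients: {np.count_nonzero(a)} of {a.size}")
```

After convergence, only a fraction of the 256 coefficients remain nonzero, so the signal is described by a few active "neurons" rather than the whole population.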
Descartes Labs interprets satellite imagery to enable real-time global awareness of food production, energy infrastructure, the growth of cities, and environmental impacts. Using machine learning and remote sensing, the company converts satellite pictures into maps, which can be sequenced into time-lapse video to show changes on the Earth's surface. Today, Descartes Labs ingests and processes petabytes of image data in its cloud-based infrastructure from numerous publicly and privately owned satellites, including NASA and USGS Landsat satellites and those operated by ESA.
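The maps-to-time-lapse step described above amounts to stacking per-date rasters along a time axis and comparing consecutive frames. The sketch below uses small synthetic arrays as stand-ins for classified satellite maps; it is an assumption-laden illustration of the concept, not Descartes Labs' pipeline.

```python
import numpy as np

# Hedged sketch: sequence per-date "maps" into a time stack and flag
# per-pixel change between dates. The 8x8 integer grids are synthetic
# stand-ins for land-cover maps derived from satellite imagery.

rng = np.random.default_rng(1)

# Three 8x8 maps of a land-cover class index (values 0-3) over time.
frames = [rng.integers(0, 4, size=(8, 8)) for _ in range(3)]
stack = np.stack(frames)                  # shape (time, height, width)

# Per-pixel change mask between consecutive dates.
changed = stack[1:] != stack[:-1]         # shape (time-1, height, width)
change_rate = changed.mean(axis=(1, 2))   # fraction of pixels changed per step

print("stack shape:", stack.shape)
print("fraction changed per step:", change_rate)
```

Sequencing frames this way is what makes surface change visible: each slice of `stack` is one date's map, and the boolean mask isolates exactly where the surface differs between dates.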
Descartes Labs has raised $38.3 million in private equity capital and recently received a $1.5 million award from the Defense Advanced Research Projects Agency (DARPA) to use its satellite imagery and intelligence platform to analyze food security in the Middle East and North Africa. The company has grown to more than 40 employees, with offices in San Francisco, Los Alamos, Santa Fe, and New York City.