The U.S. Postal Service is set to purchase GPU-accelerated servers from Hewlett Packard Enterprise that it expects will accelerate package data processing by up to 10 times over previous methods.
The plan is for a spring 2020 deployment, using HPE’s Apollo 6500 servers, which come with up to eight Nvidia V100 Tensor Core GPUs. The Postal Service will also use Nvidia’s EGX edge-computing servers at nearly 200 of its processing locations in the U.S.
Nvidia announced the USPS’s plans at its GPU Technology Conference in Washington, D.C. Ian Buck, who created the CUDA programming model for Nvidia GPUs and now heads the company’s AI initiatives, made the announcement in an opening keynote focused on AI.
Buck said half of the world’s enterprises today rely on AI for network protection and security, and 80% of telecom companies will rely on it to protect their networks. “AI is a wonderful tool for looking at massive amounts of data and finding anomalies, pulling needles out of a haystack,” he told the audience.
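Buck’s “needles out of a haystack” description maps onto classic statistical anomaly detection. As an illustrative sketch only (not anything Nvidia or USPS described, and far simpler than a production network-security system), flagging outliers in traffic counts by standard-deviation distance might look like this; the function name and threshold are hypothetical:

```python
import statistics

def find_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Mostly ordinary traffic counts with one obvious spike.
traffic = [101, 99, 100, 98, 102] * 4 + [5000]
print(find_anomalies(traffic))  # → [5000]
```

Real deployments replace the z-score rule with learned models, but the shape of the task is the same: characterize “normal,” then surface what deviates.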
The USPS — which processes 485 million pieces of mail per day, or 146 billion pieces of mail per year — plans to use servers powered by Nvidia’s GPUs and deep learning software to train multiple AI algorithms for image recognition, according to Buck. Those algorithms would then be deployed to the EGX systems at the Postal Service’s package processing sites.
The aim is to improve the speed and accuracy of recognizing package labels, speeding package delivery and reducing the need for manual handling.
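The workflow the article describes is train centrally, then ship the model to edge sites for inference. As a toy sketch of that pattern only: USPS’s real system trains deep image-recognition networks on GPU servers, but here a nearest-centroid classifier over tiny synthetic “label images” (all function names and data invented for illustration) stands in for that step:

```python
import json

def train(labeled_images):
    """Training step (would run on central GPU servers): compute one
    centroid (mean pixel vector) per label class."""
    sums, counts = {}, {}
    for pixels, label in labeled_images:
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def export_model(model):
    """Serialize the trained model for shipment to edge sites."""
    return json.dumps(model)

def classify(model_json, pixels):
    """Inference step (would run on an edge server): nearest centroid."""
    model = json.loads(model_json)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, pixels))
    return min(model, key=lambda label: dist(model[label]))

# Synthetic 4-pixel "label images": bright vs. dark patterns.
data = [([0.9, 0.8, 0.9, 0.7], "priority"),
        ([0.1, 0.2, 0.1, 0.3], "standard"),
        ([0.8, 0.9, 0.7, 0.8], "priority"),
        ([0.2, 0.1, 0.3, 0.2], "standard")]
model = export_model(train(data))
print(classify(model, [0.85, 0.8, 0.75, 0.8]))  # → priority
```

The split matters operationally: the expensive training loop stays in the data center, while the serialized model is small enough to push out to nearly 200 processing sites for fast local inference.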
Nvidia AI deployments and market initiatives
AI is being embraced by a number of industries, with varying degrees of success. Nvidia uses itself as a guinea pig:
“At Nvidia we have a fleet of self-driving vehicles, which we use for both collecting data and testing our self-driving capabilities. We ingest and create literally petabytes of data every week that has to be processed by our own team of labelers and processed by AIs,” Buck told the crowd. “We have literally thousands of GPUs doing training every day, which are supporting hundreds of data scientists, which are defining the self-driving car capabilities.”
The module in Nvidia’s self-driving car is called Pegasus and consists of two Volta GPUs and two Tegra SoCs. “It’s basically an AI supercomputer inside every car processing hundreds of petabytes of data,” Buck said.
The challenge now is to actually apply AI, he said. To do so, Nvidia has a number of AI projects for the automotive, healthcare, robotics and 5G industries. For healthcare, for example, Nvidia has its Clara software development kit with pretrained models to tackle tasks such as looking for a particular kind of cancer in minutes or hours.
For IoT, Nvidia has the Metropolis application framework, aimed at cities building out sensor networks to detect unsafe driving conditions, such as a vehicle entering a freeway the wrong way. Nvidia also has the DRIVE autonomous-vehicle platform, which spans everything from cars and trucks to robotaxis and industrial vehicles. Nvidia’s Omniverse kit targets design and media, and its Aerial products, along with the EGX server, are for telcos moving to 5G.
To train new developers to build AI apps on GPUs, Nvidia announced that its Deep Learning Institute has added 12 new AI-focused courses. So far, DLI has trained more than 180,000 AI workers.