Credit: Google News
Data science is the new center of gravity for Nvidia, which introduced new AI software libraries running on its Tensor Core GPUs at its GPU Technology Conference in San Jose yesterday. The Cuda-X AI libraries are designed to speed up machine-learning and data-science operations by as much as 50x, the company said, with far-reaching implications for AI applications such as speech and image recognition as well as risk assessment, fraud detection and inventory management.
Nvidia said the Cuda-X AI software stack includes cuDNN for deep-learning primitives, cuML for machine-learning algorithms, TensorRT for optimizing trained models for inference, and other libraries. Running on Tensor Core GPUs, they can be integrated into deep-learning frameworks such as TensorFlow, PyTorch and MXNet, or into popular cloud platforms.
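Part of cuML's pitch is that it mirrors the scikit-learn API, so existing CPU code can move to the GPU with little more than an import change. A minimal sketch of that drop-in pattern (the fallback logic here is illustrative, not an Nvidia-prescribed idiom) might look like:

```python
import numpy as np

# cuML mirrors scikit-learn's estimator API, so the same call sites
# work on GPU (cuML) or CPU (scikit-learn). The try/except fallback
# below is an illustrative pattern, not part of either library.
try:
    from cuml.cluster import KMeans  # GPU path; requires an Nvidia GPU
except ImportError:
    from sklearn.cluster import KMeans  # CPU fallback

# Two well-separated blobs of synthetic 2-D points.
rng = np.random.default_rng(0)
pts = np.vstack([
    rng.normal(0.0, 0.1, (50, 2)),   # cluster near (0, 0)
    rng.normal(5.0, 0.1, (50, 2)),   # cluster near (5, 5)
]).astype(np.float32)

# Identical fit/predict calls regardless of which backend was imported.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
centers = sorted(float(c) for c in model.cluster_centers_.mean(axis=1))
```

Because the estimator interface is shared, the speedup Nvidia claims comes from swapping the backend rather than rewriting the modeling code.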
To make Cuda-X accelerated analysis available to a broad range of users, Nvidia said its T4 Tensor Core GPUs will be available via Amazon EC2 G4 instances “in the coming weeks.” When those go online, they will support machine-learning workloads as well as graphics applications including real-time ray tracing, simulation and rasterization. Eager AWS users can apply for an advance preview of EC2 G4 instances by filling out a form at the AWS website. Meanwhile, the RAPIDS open-source suite of libraries, which gives users access to parallel GPU processing and high-speed memory via Python interfaces, is already available through the Microsoft Azure Machine Learning service.
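The RAPIDS Python interface centers on cuDF, which reproduces a large subset of the pandas dataframe API on the GPU. A small sketch of that compatibility (column names and the fallback import are illustrative assumptions, not from Nvidia's announcement):

```python
# cuDF mirrors much of the pandas API, so the same dataframe code runs
# on the GPU when cuDF is installed and on pandas otherwise. The
# aliasing fallback is an illustrative pattern, not a RAPIDS feature.
try:
    import cudf as df_lib  # GPU path; requires an Nvidia GPU
except ImportError:
    import pandas as df_lib  # CPU fallback

# Hypothetical sales data used purely for demonstration.
sales = df_lib.DataFrame({
    "region": ["east", "west", "east", "west"],
    "amount": [100.0, 250.0, 50.0, 75.0],
})

# The groupby/aggregate call is spelled identically in cuDF and pandas.
totals = sales.groupby("region")["amount"].sum()
```

On a GPU, operations like this groupby run in parallel device memory, which is where the acceleration RAPIDS advertises comes from.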
Workstations Tuned for Data Science
Nvidia is also contributing its hardware and data-science software stack, including Cuda-X AI, to a new class of high-powered workstations tuned specifically to work in concert with data centers and power high-end data-science applications. According to Bob Pette, Nvidia’s VP and GM of professional visualization and Quadro graphics, the new Nvidia-certified systems will be built and sold by an array of vendors: globally, Dell, HP and Lenovo; regionally, AMAX, APY, Azken Muga, BOXX, CADNetwork, Carri, Colfax, Delta, EXXACT, Microway, Scan, Sysgen and Thinkmate. Optional enterprise support contracts will be sold by those OEMs and supported through Nvidia, Pette said.
As an example of the new systems’ performance targets, Pette said an accelerated data-science library running on dual Quadro RTX 8000 GPUs could achieve greater accuracy and 10 times faster turnaround than CPU-only processing nodes.
Real-Time Ray Tracing Is a Reality
Nvidia said its RTX platform, whose GPUs pair dedicated RT Cores for hardware-accelerated ray tracing with Tensor Cores for AI tasks such as denoising, has made inroads in the industry since its introduction at SIGGRAPH last year, including an endorsement from Pixar Animation Studios, which said it will use RTX on its upcoming films. ILM, Image Engine, MPC Film and Weta Digital are also said by Nvidia to be using RTX in their VFX workflows. Software products incorporating RTX in 2019 releases include Adobe Dimension & Substance Designer, Autodesk Arnold & VRED, Chaos Group V-Ray, Dassault Systèmes CATIA Live Rendering & SOLIDWORKS Visualize 2019, Daz 3D Daz Studio, Enscape Enscape3D, Epic Games Unreal Engine 4.22, ESI Group IC.IDO 13.0, Foundry Modo, Isotropix Clarisse 4.0, Luxion KeyShot 9, OTOY Octane 2019.2, Pixar RenderMan XPU, Redshift Renderer 3.0, Siemens NX Ray Traced Studio, and Unity Technologies Unity (2020).
On the server side, the company debuted a new RTX Server configured with 1,280 Turing GPUs across 32 RTX blade servers, each squeezing 40 GPUs into an 8RU space. With low-latency access to such servers, Nvidia argued, cloud-rendered videogames and AR/VR applications become feasible over a 5G network. Nvidia also welcomed an array of new manufacturers for its T4 servers, built for GPU-accelerated data analysis, including Cisco, Dell EMC, Fujitsu, HPE, Inspur, Lenovo and Sugon.
Last but not least, Nvidia revealed Omniverse, a new enterprise collaboration platform for studios working with real-time graphics. Supporting industry-standard technologies including Pixar’s Universal Scene Description and Nvidia’s own Material Definition Language, Omniverse maintains live, bi-directional connectivity between such applications as Autodesk Maya, Adobe Photoshop and Epic’s Unreal Engine, so artists using one application immediately see changes made by artists working in another. Naturally, the Omniverse Viewer supports RTX RT Cores, Cuda cores, and Tensor Core-accelerated AI. Nvidia said Omniverse is launching as a “lighthouse” program; interested users can request access to the client SDK or inclusion in the program at Nvidia’s developer site.
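Universal Scene Description, the interchange format underpinning Omniverse, stores scenes as composable, human-readable layers. A minimal `.usda` layer (prim names and values here are purely illustrative) looks like:

```usda
#usda 1.0
(
    defaultPrim = "Root"
)

def Xform "Root"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        color3f[] primvars:displayColor = [(0.8, 0.1, 0.1)]
    }
}
```

Because collaborating applications reference shared layers like this rather than exchanging baked exports, an edit saved from one tool can appear in another without a re-import step, which is the behavior Omniverse’s live connectivity builds on.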