Nvidia debuted a car-focused, octa-core Tegra X1 SoC with a 256-core Maxwell GPU and 15W power consumption, and demoed Tegra X1-based autopilot and IVI hardware.
At the Four Seasons Hotel in Las Vegas, Nvidia CEO Jen-Hsun Huang announced the Tegra X1 as a next-generation mobile system-on-chip. The X1 carries the mantle from Nvidia’s earlier Tegra K1, as well as the 64-bit “Project Denver” version of the K1. Yet the primary focus was on automotive applications, particularly self-driving car technology.
Nvidia Tegra X1 die photo
(click image to enlarge)
The company showcased an Nvidia Drive CX system for IVI and digital cockpit displays, as well as a Drive PX autopilot computer for autonomous cars (see further below). Both run on X1 or K1 SoCs. Nvidia did allow for a potential role for the X1 in other devices, however, by demonstrating a Tegra X1-optimized version of Epic Games’ Elemental demo, built with Unreal Engine 4 technology.
Like the Project Denver version of the Tegra K1, which first appeared in recent months in the Android 5.0-based Nexus 9 tablet, the Tegra X1 is a 64-bit device, yet it offers twice the performance and twice the power efficiency of the K1, says Nvidia. The 20nm-fabricated SoC incorporates four Cortex-A57 cores and four Cortex-A53 cores, with 2MB of L2 and 512KB of L1 cache, respectively. Power consumption is claimed to be a tidy 15W.
As is typical with Nvidia SoCs, the real draw is graphics. Only a year after moving to an unprecedented (for mobile) Kepler GPU with the Tegra K1, Nvidia is now moving up to its high-end, desktop-class Maxwell graphics technology, with 256 cores compared to Kepler’s 192. With the help of Maxwell, the Tegra X1 supports DirectX 12, OpenGL 4.5, CUDA, OpenGL ES 3.1, and the Android Extension Pack, in addition to Unreal Engine 4.
The Maxwell-driven X1 supports H.265 or VP9 video at 4K and 60fps. The SoC also supports 4K x 2K @ 60Hz displays, as well as 1080p @ 120Hz. HDMI 2.0 is supported at 60fps, and there’s support for HDCP 2.2.
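As a back-of-the-envelope check on those display modes, the raw pixel rates are simple to work out. The sketch below is our own arithmetic, not figures from Nvidia’s spec sheet:

```python
# Raw (uncompressed) pixel rates implied by the display modes Nvidia lists.
# These are simple arithmetic checks, not figures from Nvidia's spec sheet.

def pixel_rate(width, height, fps):
    """Raw pixels per second for a given display mode."""
    return width * height * fps

uhd_60 = pixel_rate(3840, 2160, 60)    # "4K x 2K @ 60Hz"
fhd_120 = pixel_rate(1920, 1080, 120)  # 1080p @ 120Hz

print(f"4K @ 60Hz:     {uhd_60 / 1e6:.1f} Mpixels/s")
print(f"1080p @ 120Hz: {fhd_120 / 1e6:.1f} Mpixels/s")
# 4K @ 60Hz pushes exactly twice the raw pixels of 1080p @ 120Hz
print(uhd_60 / fhd_120)  # -> 2.0
```

In other words, the 4K@60 mode is the more demanding of the two, moving roughly 498 million pixels per second before any compression.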
Nvidia Drive PX and CX
Nvidia has been moving into the automotive computing market for some time, gaining design wins for its Tegra 3 and Tegra K1 SoCs in in-vehicle infotainment (IVI) systems such as the Honda Connect IVI system. With the Tegra X1, automotive computing is now the primary focus, and not only for IVI, but for complete autopilots for self-driving cars.
Nvidia’s Drive CX (left) and Drive PX computers
(click image to enlarge)
To showcase the X1, Nvidia unveiled both a Drive CX computer for IVI and digital cockpit duty, and a similar, but more advanced, Drive PX autopilot prototype, both powered by Tegra X1 or K1 SoCs. The systems can illuminate 16.6 million pixels on multiple displays. This is said to be 10 times the resolution of current car computers.
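The 16.6-million-pixel figure works out almost exactly to two 4K UHD panels. The breakdown below is our own illustrative assumption, not Nvidia’s stated configuration:

```python
# Sanity check on the "16.6 million pixels" claim.
# Assumption (ours, not Nvidia's): the total is spread across 4K UHD panels.
UHD = 3840 * 2160             # 8,294,400 pixels per 4K display
total = 2 * UHD               # two such displays
print(total)                  # -> 16588800, i.e. ~16.6 million
print(round(total / 1e6, 1))  # -> 16.6
```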
The Drive PX autopilot device, which adds a second X1 chip, provides a combined 2.3 teraflops of processing power. The PX is further equipped with inputs for 12 high-resolution cameras, whose feeds can be processed simultaneously at 1.3 billion pixels per second, says Nvidia.
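To put that camera bandwidth in perspective, dividing the 1.3 billion pixels per second evenly across 12 cameras gives a rough per-camera budget. The even split and the 30fps frame rate below are illustrative assumptions on our part, not Nvidia specs:

```python
# Rough per-camera budget implied by 1.3 Gpixels/s across 12 cameras.
# The even split and the 30fps frame rate are illustrative assumptions,
# not Nvidia specifications.
TOTAL_PIXELS_PER_S = 1.3e9
CAMERAS = 12

per_camera = TOTAL_PIXELS_PER_S / CAMERAS    # ~108 Mpixels/s per camera
megapixels_at_30fps = per_camera / 30 / 1e6  # frame size if run at 30fps
print(f"{per_camera / 1e6:.0f} Mpixels/s per camera")
print(f"~{megapixels_at_30fps:.1f} MP per frame at 30fps")
```

Under those assumptions, each camera could run at roughly 3.6 megapixels per frame at 30fps, comfortably above 1080p (about 2.1 megapixels).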
Drive PX computer vision and image recognition examples
(click image to enlarge; source: Nvidia)
Drive PX also integrates self-parking, Auto-Valet, and Surround-Vision capabilities. Huang showed off the self-parking and Auto-Valet features at the Las Vegas event, using a virtual car in a photorealistic digital garage. The Drive PX-controlled car autonomously searched the garage for an open space and parked in it. Pressing the Auto-Valet key on a keyfob, Huang instructed the car to return to his current location. The Surround-Vision feature, meanwhile, offers a top-down 360-degree view of the car in real time.
“Your future cars will be the most advanced computers in the world,” Huang told the Four Seasons audience. “There will be more computing horsepower inside a car than anything you own today.”
According to a Computerworld report about the event by Agam Shah, the Tegra X1 will enable cars to recognize objects, signs, images, and lanes. Apparently, it can do this with cameras alone, without the expensive LIDAR system used in Google’s recently updated Self-Driving Car prototype.
The X1 integrates algorithms that recognize pedestrians, traffic lights, and crosswalk and speed limit signs, writes Shah. These inputs are said to feed a contextual awareness model, so the car can decide what to do next in a given situation.
An Audi rep shared the stage with Huang, according to The Verge, but no timetable was announced for incorporating the Nvidia technology. Computerworld’s Shah quotes Patrick Moorhead, principal analyst at Moor Insights & Strategy, as saying Tesla may be an early customer for the Tegra X1 or the Drive CX or PX designs. Moorhead also speculates the X1 could show up in robots.
According to The Verge, the Drive CX and PX support major automotive platforms including Linux, Android, and QNX. We wouldn’t be surprised if these systems, or at least the Tegra X1, showed up in a rumored Android M version of Android Auto. Last month, Reuters reported that Android M is a full IVI and telemetry stack, as opposed to the current Android Auto, which simply lets users control a nearby Android device from the IVI touchscreen. Yesterday, Parrot announced one of the first Android Auto implementations with its RNB6 IVI system for aftermarket sales.
When Google announced its Open Automotive Alliance a year ago at CES, Nvidia was the sole technology partner. Other members include Audi, GM, Honda, and Hyundai.
No timetable appears to have been announced for the release of the Tegra X1 or Drive CX or PX systems. More information may be found at the still rather minimalistic Tegra X1 product page, as well as the Tegra X1 blog announcement.