A mere 7 months after Volta was announced with the Tesla V100 accelerator and the GV100 GPU inside it, NVIDIA continues its breakneck pace by releasing the GV100-powered Titan V, available for sale today. Aimed at a decidedly more compute-oriented market than any Titan before it, the card brings the 815 mm² behemoth that is the GV100 die to the broader public.
The Titan V, by extension, sees the Titan lineup finally switch loyalties and start using NVIDIA’s high-end compute-focused GPUs, in this case the Volta architecture based V100. The end result is that rather than being NVIDIA’s top prosumer card, the Titan V is decidedly more focused on compute, particularly due to the combination of the price tag and the unique feature set that comes from using the GV100 GPU. Which isn’t to say that you can’t do graphics on the card – this is still very much a video card, outputs and all – but NVIDIA is first and foremost promoting it as a workstation-level AI compute card, and by extension focusing on the GV100 GPU’s unique tensor cores and the massive neural networking performance advantages they offer over earlier NVIDIA cards.
In this sense the Titan V is a return to form of sorts for the Titan family, back to the professional side of prosumer. One of the claims to fame for the original Titan was its high performance in specialized FP64 compute workloads, something that was lost with the later Titan X and Titan Xp. By switching to NVIDIA’s specialized high-end compute GPUs, the Titan V regains those formerly lost compute capabilities, all the while also gaining the compute capabilities NVIDIA has introduced since then. It’s no mistake that Jen-Hsun introduced the card at a neural networking conference, as this is a big chunk of the professional computing audience that NVIDIA is targeting with the card.
Interestingly, comparing it to the PCIe Tesla V100, I’m surprised by just how close the cards are in features and performance. NVIDIA has confirmed that the Titan V gets the GV100 GPU’s full, unrestricted FP64 compute and tensor core performance. To the best of our knowledge (and from what NVIDIA will comment on), it doesn’t appear that they’ve artificially disabled any of the GPU’s core features. What separates the Titan from the Tesla from a performance standpoint is quite simple: memory capacity, memory bandwidth, and the lack of NVLink functionality. A number of smaller differences also help differentiate the cards between server and workstation, such as passive versus active cooling and the support policies. Otherwise, for customers who are running a small number of cards, the Titan V’s feature set is remarkably close to that of the much more expensive Tesla V100, which is a very interesting development since it goes to show just how confident NVIDIA is that this won’t undermine Tesla sales.
Moving on and diving into the numbers, Titan V features 80 streaming multiprocessors (SMs) and 5120 CUDA cores, the same amount as its Tesla V100 siblings. The differences come with the memory and ROPs. In what's clearly a salvage part for NVIDIA, one of the card's 4 memory partitions has been cut, leaving Titan V with 12GB of HBM2 attached via a 3072-bit memory bus. As each memory controller is associated with a ROP partition and 768 KB of L2 cache, this in turn brings L2 down to 4.5 MB, as well as cutting down the ROP count.
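The memory figures above follow directly from GV100's layout. As a back-of-the-envelope sketch (assuming the full chip's eight 512-bit memory controllers with 768 KB of L2 each, of which Titan V has six enabled, and the 1.7 Gbps per-pin HBM2 data rate NVIDIA lists):

```python
# Sketch of Titan V memory math; controller count and per-controller
# figures are inferred from the full GV100 configuration described above.
controllers = 6                         # Titan V enables 6 of GV100's 8
bus_width = controllers * 512           # bits: 6 x 512 = 3072-bit bus
l2_kb = controllers * 768               # KB:   6 x 768 KB = 4.5 MB of L2
hbm2_gbps = 1.7                         # per-pin data rate in Gbps
bandwidth = bus_width * hbm2_gbps / 8   # GB/s across the whole bus

print(bus_width, l2_kb / 1024, bandwidth)  # 3072 4.5 652.8
```

The same arithmetic on the full 8-controller chip yields the Tesla V100's 4096-bit bus and 6 MB of L2, which is why cutting one HBM2 stack takes both capacity and cache down by a quarter.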
In terms of clockspeeds, the HBM2 has been downclocked slightly to 1.7GHz, while the 1455MHz boost clock actually matches the 300W SXM2 variant of the Tesla V100, though that accelerator is passively cooled. Notably, the number of tensor cores has not been touched, though the official 110 DL TFLOPS rating is lower than that of the 1370MHz PCIe Tesla V100, as it would appear that NVIDIA is using a clockspeed lower than their boost clock in these calculations.
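The clockspeed discrepancy is easy to check. Assuming GV100's 640 tensor cores, each performing a 4x4x4 matrix multiply-accumulate per clock (64 FMAs, or 128 FLOPs, per NVIDIA's Volta material), a rough estimate looks like this:

```python
# Rough tensor throughput estimate; per-core FMA rate is an assumption
# taken from NVIDIA's published Volta architecture figures.
tensor_cores = 640
flops_per_clock = tensor_cores * 64 * 2        # 64 FMAs/core/clock, 2 FLOPs per FMA

boost_tflops = flops_per_clock * 1455e6 / 1e12 # at the 1455MHz boost clock
implied_mhz = 110e12 / flops_per_clock / 1e6   # clock implied by the 110 TFLOPS rating

print(round(boost_tflops, 1), round(implied_mhz))  # 119.2 1343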
As for the card itself, it features a vapor chamber cooler with a copper heatsink and 16 power phases, all for the 250W TDP that has become standard with the single-GPU Titan models. Output-wise, the Titan V brings 3 DisplayPorts and 1 HDMI connector. And as for card-to-card communication, the PCB itself appears to have NVLink connections on the top, but these look to have been intentionally blocked by the shroud to prevent their use, and are presumably disabled.
As mentioned earlier, NVIDIA is unsurprisingly pushing this as a compute accelerator card, especially considering that Titan V features tensor cores and keeps the TITAN branding as opposed to GeForce TITAN. But there are those of us who know better than to assume people won’t drop $3000 to use the latest Titan card for gaming, and while gaming is not the primary (or even secondary) focus of the card, you also won't see NVIDIA denying it. In that sense the Titan V is going to be treated as a jack-of-all-trades card by the company.
To that end, no gaming performance information has been disclosed, but NVIDIA has confirmed that the card uses the standard GeForce driver stack. And on that note, yesterday NVIDIA released 388.59, bringing official Titan V support. Now, how much those drivers have actually been optimized for the GV100 is another matter entirely; Volta is a new architecture, markedly so at times. Speaking solely off the cuff here, for graphics workloads the card has more resources than the Titan Xp in almost every meaningful metric, but it's also a smaller difference on paper than you might think.
As for NVIDIA's intended market of compute and AI users, the Titan V will be supported by NVIDIA GPU Cloud, which includes TensorRT, a number of deep learning frameworks, and HPC-related tools.
If the golden shroud didn’t already suggest so, the Titan V is also carving out a new eye-watering price point, dropping in at $2999 and on sale now at the NVIDIA store. NVIDIA has, to date, been selling Tesla V100 products as fast as they can produce them, so I won't be surprised if the Titan V sells similarly well. The $3000 price tag is quite high, even by Titan standards, but with the rare Tesla V100 PCIe card going for around $10,000, the Titan V is markedly cheaper. In fact, in some respects I'm surprised NVIDIA is selling a GV100 card for so little; these are GV100 salvage parts that don't make the cut for Tesla - so the alternative would be throwing them away - but it just goes to show how confident NVIDIA is that it won't undermine the Tesla family.
At any rate, for NVIDIA professional users who have been looking to dip their toes into Volta but didn't want a full-fledged Tesla card, the Titan V is clearly going to be a popular card. Over the last two years NVIDIA's AI efforts have been firing on all cylinders, and by bringing a GV100 card down to just $3000, expect to see them crack open the market that much further. I dare say the idea of the "prosumer" Titan has died with this card, but for the rapidly growing professional compute market, this looks to be exactly the kind of card that a lot of developers have been waiting for.
Article Credits: Anandtech.com