[ZDNet] Nvidia takes aim at Tesla's custom GPU claims - Page 2 - Overclock.net - An Overclocking Community

post #11 of 26 (permalink) Old 04-24-2019, 08:41 PM
Overclocker
 
JackCY's Avatar
 
Join Date: Jun 2014
Posts: 9,039
Rep: 300 (Unique: 219)
I thought they had switched long ago; apparently not. Price/performance is very likely in favor of Tesla's custom solution, just as other companies build their own custom hardware rather than relying on GPUs for AI compute and the like. Nvidia may have a better-performing product than what Tesla bought from them, but at what price, and at what power?

Quote:
NVIDIA DRIVE AGX Pegasus

NVIDIA DRIVE AGX Pegasus™ achieves an unprecedented 320 TOPS of deep learning with an architecture built on two NVIDIA® Xavier™ processors and two next-generation TensorCore GPUs. This energy-efficient, high-performance AI computer runs an array of deep neural networks simultaneously and is designed to safely handle highly automated and fully autonomous driving. No steering wheel or pedals required.
Can't easily find which Nvidia device Tesla uses in the S, X, or even the 3, at least not now that 99% of search results with Nvidia and Tesla in the query return endless reposts of this Nvidia correction of Tesla.
Last time I saw the device in a video it wasn't small by any means.

Nvidia is never reasonably priced; it's one of the main reasons they got booted out of the console market. Don't expect them to stick around in vehicles for long either. They struggle even in monitors.
JackCY is offline  
post #12 of 26 (permalink) Old 04-25-2019, 12:52 AM
Performance is the bible
 
Join Date: Apr 2009
Posts: 6,610
Rep: 433 (Unique: 299)
Quote: Originally Posted by Hwgeek View Post
P.S. Tesla has much more experience and actual DATA, and is more advanced in autonomous cars than others, so if they decide that NV's solutions aren't good enough for them, others take that very seriously.
Nvidia's solution was more expensive, not inferior.
Tesla making their own chips costs a hell of a lot less than buying an AI chip that isn't dedicated to just what they need. They can make chips that are better suited to them, smaller and less power hungry, and run their own custom code.

Tesla laid all their groundwork on Nvidia chips. They actually had a self-driving solution running on Nvidia, but decided to wait until they could put it on their own chips, which is why it took a few more years to finalize.

Nvidia's problem is that they are not as flexible as AMD or other dedicated-solution vendors. They will not bend over backwards to make a dedicated solution for a customer, but they expect their customers to adjust themselves to Nvidia.
This has shrunk their customer list quite a lot over the last decade.


Defoler is offline  
post #13 of 26 (permalink) Old 04-25-2019, 09:40 AM
New to Overclock.net
 
StAndrew's Avatar
 
Join Date: Sep 2008
Posts: 14
Rep: 0
Quote: Originally Posted by WannaBeOCer View Post
Finally someone read it. I just wanted to see how many people would go off of what I said. I love nVidia Haha.
Aint nobody got time fo dat!


TBH, I'm a little skeptical of Nvidia's credibility. Sure, the AGX Pegasus is faster, but even as Nvidia accuses Tesla of mischaracterizing, the AGX Pegasus is a four-chip platform (two Xaviers and two Turing GPUs, IIRC), at 500 watts and, I'm sure, a hefty price tag. I'd be curious to see the cost of Tesla's package (they claim it's cheaper than Xavier) and the power draw.
StAndrew is offline  
post #14 of 26 (permalink) Old 04-25-2019, 10:14 AM
Performance is the bible
 
Join Date: Apr 2009
Posts: 6,610
Rep: 433 (Unique: 299)
Quote: Originally Posted by StAndrew View Post
Maybe I missed something.
There are two types of driving-assist systems from Nvidia: PX Xavier and PX Pegasus. The first is built to be a small 30 W self-driving system and is comparable to the Tesla solution in performance per watt; the second is a "full" AI system for complete autonomous driving, which is a 500 W system.

Tesla's solution is somewhere in the middle. Tesla are comparing their new hardware to the customized PX 2 from Nvidia, which was a 60 W version of PX 2. But Nvidia's new hardware (Xavier and Pegasus) is a lot faster even compared to Tesla's 70 W two-chip solution, especially with the new on-chip NVLink and the Turing GPU / ARM CPU combination.

The 500 W version of Pegasus is not comparable to the Tesla solution, because it is not meant for these cars. A middle-ground option, like the customized PX 2 with a Pegasus variant, or a Xavier solution, is more comparable.
Comparing Tesla's solution to Pegasus is like comparing a station wagon to a sports car. Tesla's solution is 21 times the original PX 2 according to them, but Pegasus in raw power is somewhere around 30 times, without the customization or the better software Tesla wrote over time, and without taking the new NVLinks etc. into account.

What I think was the main difference is the 80% price difference. I would expect that with the new Nvidia hardware, Tesla could get the same performance on lower-wattage hardware, since Nvidia can move mountains if they really want to.
But Tesla need to save money, and they already had the software people, knowledge, and experience earned using Nvidia hardware, so they could make a more dedicated chip for their solution, and make it cheaper because it was built in-house specifically for their workload.
Since Nvidia's solution was designed to work not only with Tesla's software but with other companies' too, they couldn't just build it to Tesla's exact specs.


Overall I think Nvidia's issue is that they don't actually have end products, not that their products are bad or worse.

They have middle products that are supposed to work with several systems, while each of their customers wants more specialized hardware.
When you look at other markets, specifically phones, this is what Apple are doing. They decided to start designing things on their own instead of relying on others at a higher price tag. This gives you complete control over your product, reduces reliance on others, and reduces cost.
I reckon Nvidia's solution will work better for the other big car manufacturers, who might not try to develop something from scratch like Tesla and will be OK with an "off-the-shelf" solution from Nvidia. Tesla are more unique in that respect. BMW, Audi, Toyota, and more are working with Nvidia hardware.



Last edited by Defoler; 04-25-2019 at 10:27 AM.
Defoler is offline  
post #15 of 26 (permalink) Old 04-25-2019, 10:52 AM
New to Overclock.net
 
Imouto's Avatar
 
Join Date: Mar 2012
Posts: 1,797
Rep: 208 (Unique: 96)
Quote: Originally Posted by StAndrew View Post
Tesla's is 72W. It is stated in the article.

Or:

Tesla: 72W / 144 TOPS = 0.5 W/TOPS
Nvidia: 500W / 320 TOPS = 1.56 W/TOPS
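Spelled out as a quick Python sanity check (only the wattage and TOPS figures quoted above; nothing else assumed):

```python
# Perf-per-watt comparison using the figures quoted in the thread:
# Tesla's computer: 72 W for 144 TOPS; Nvidia Pegasus: 500 W for 320 TOPS.
tesla_w, tesla_tops = 72, 144
pegasus_w, pegasus_tops = 500, 320

tesla_eff = tesla_w / tesla_tops        # watts per TOPS
pegasus_eff = pegasus_w / pegasus_tops  # watts per TOPS

print(f"Tesla:   {tesla_eff:.2f} W/TOPS")    # Tesla:   0.50 W/TOPS
print(f"Pegasus: {pegasus_eff:.2f} W/TOPS")  # Pegasus: 1.56 W/TOPS
```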

#EnthusiastLivesMatter

Last edited by Imouto; 04-25-2019 at 12:29 PM.
Imouto is offline  
post #16 of 26 (permalink) Old 04-25-2019, 12:05 PM
New to Overclock.net
 
DNMock's Avatar
 
Join Date: Jul 2014
Location: Dallas
Posts: 3,272
Rep: 161 (Unique: 120)
My ignorance is shining through here: why are they measuring it in tera-OPS instead of teraFLOPS? Wouldn't floating point be the preferred method of computation for a self-driving car?


DNMock is offline  
post #17 of 26 (permalink) Old 04-25-2019, 12:23 PM
New to Overclock.net
 
Hueristic's Avatar
 
Join Date: Jul 2008
Location: Bottom_Of_A_Bottle
Posts: 10,540
Rep: 433 (Unique: 289)
Quote: Originally Posted by DNMock View Post
Apparently for this application OPS matter more than FLOPS.

Quote:
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.
https://en.wikipedia.org/wiki/FLOPS

READ this thread before starting your first build!!!
ALWAYS power up a Mobo Before installing it! Consider Less than helpful posts as Free Bumps.

1.
If you can't afford to lose it don't mod or OC it.
2.
At least read the ENTIRE OP before commenting.

Semper Fi


Hueristic is offline  
post #18 of 26 (permalink) Old 04-25-2019, 12:36 PM
New to Overclock.net
 
Join Date: Jun 2008
Location: Wilts, U.K.
Posts: 3,522
Rep: 451 (Unique: 383)
Quote: Originally Posted by Imouto View Post
A problem seems to be that Tesla have given the maximum TOPS they can possibly manage, but not the maximum power used to attain those TOPS; instead they've measured the power draw in the vehicle while it runs their specific software (at 2,300 frames per second).

Software that can run at 2,300 fps on a 144 TOPS system won't push a 320 TOPS system to 100%; it'll just perform the necessary calculations faster and wait in an idle state until the next frame, or maybe never even reach its highest power state and run at lower clocks and power.
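A back-of-envelope sketch of that point in Python (the 144/320 TOPS and 2,300 fps numbers are from this thread; the fixed-per-frame-workload assumption is mine):

```python
# If Tesla's software fully loads a 144 TOPS part at 2300 fps, the same
# per-frame workload only keeps a 320 TOPS part busy for part of each frame.
fps = 2300
tops_tesla, tops_pegasus = 144e12, 320e12

ops_per_frame = tops_tesla / fps                     # work per frame, fully loading the smaller part
busy_fraction = ops_per_frame * fps / tops_pegasus   # share of each frame the bigger part spends working

print(f"Bigger part busy {busy_fraction:.0%} of each frame, idle the rest")  # 45%
```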


Darren9 is offline  
post #19 of 26 (permalink) Old 04-25-2019, 12:39 PM
Robotic Chemist
 
Asmodian's Avatar
 
Join Date: Aug 2009
Location: San Jose, California
Posts: 2,416
Rep: 179 (Unique: 119)
Yes, neural networks don't need floating point. They need speed, not precision, so they tend to use 16-bit or even 8-bit integers. Games and weather simulations want FLOPS; AI doesn't.
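As a toy illustration of the int8 point (made-up weight values; real frameworks choose quantization scales per layer, this is just the basic idea):

```python
# Quantize a few float weights to int8: store integers plus one scale factor,
# trading a little precision for much cheaper integer math at inference time.
weights = [0.82, -0.35, 0.07, -0.91]

scale = max(abs(w) for w in weights) / 127   # map the value range onto int8
q = [round(w / scale) for w in weights]      # int8 representation
dq = [v * scale for v in q]                  # what the network actually "sees"

print(q)                          # [114, -49, 10, -127]
print([round(v, 3) for v in dq])  # close to the originals, within half a quantization step
```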

It is cool to see more people doing silicon in this area though. I wonder if Tesla licensed someone's IP, or is this an actual from-scratch design?
Asmodian is offline  
post #20 of 26 (permalink) Old 04-25-2019, 01:40 PM
New to Overclock.net
 
StAndrew's Avatar
 
Join Date: Sep 2008
Posts: 14
Rep: 0
Quote: Originally Posted by Defoler View Post
Basically my point. From what I gather, Tesla is comparing their new in-house solution to Nvidia's PX Xavier, and Nvidia is saying that's not a fair assessment and that the PX Pegasus is a better comparison. But then Nvidia only mentions TOPS performance and doesn't address power or cost. A point Nvidia makes is that the Tesla solution uses two chips versus the Xavier's one, but the Pegasus uses four.

Basically dishonest marketing on both sides, but I would say Tesla is being more honest? Yeah, I'll go with that.
StAndrew is offline  