
[AMD] Simon Solotko on Cloud Computing

Simon from AMD wrote a very interesting blog post about cloud computing. It's a fairly long read, but well worth it if you can grasp the concept.

Here's a snippet.

Quote:


Some of today’s most popular applications, including enterprise email applications and web browsers, threaten to condemn users to “virtual Purgatory.” These applications attempt to synchronize increasingly large amounts of media rich data. The result is complex synchronization that is slowing clients to a crawl. Complex archive solutions are constantly struggling to encrypt, compress, archive, synchronize and recall data. The ensuing data smashup robs client PCs of free cycles, rendering them momentarily unresponsive, leaving me and thousands like me with millions of useless, small, utterly idle moments. With the rush toward multi-client data ubiquity, it looks like we are being condemned to Purgatory.

An entire generation of applications that attempt to host content simultaneously online/offline is coming. Software Titans are deploying browsers and client-compiled applications that speed the deployment of online/offline applications. Software architectures designed to provide data and application integrity while having to live in many places at once may drive our clients into virtual self destruction.

A leap to a complete and fully integrated Cloud may avoid virtual Purgatory.

Source
Thank god my computer won't have to chug data and occasionally pause for a second or two anymore. Now I can simply wait for lag time between each and every click to some server. Oh wait...
Yes, both network latency and quality of service could slow things down. It raises the question of how much of the input-handling burden needs to reside client-side, and at what point that might defeat the entire point.

But this might be a small price to pay in order to play Crysis on your watch, or listen to any piece of music ever written on your stereo, or view rich web content on your cell phone. And for keystrokes it's probably not an insurmountable challenge; it's just not a lot of upstream data.

We all seem to live with the lag of Unreal Tournament and similar ultra-fast-paced online games - the entire world needs to adjust to our keystrokes. Perhaps the latency would be even lower if the application were actually rendered online?
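As a rough back-of-the-envelope on the "keystrokes aren't a lot of upstream data" point, here's a small Python sketch; the packet size and send rate are assumptions for illustration, not measurements from any real game:

# Rough estimate of upstream input bandwidth for a fast-paced online game.
# Packet size and send rate are assumptions for illustration only.
INPUT_PACKET_BYTES = 64      # assumed: key/mouse state plus protocol overhead
PACKETS_PER_SECOND = 60      # assumed: one input update per rendered frame

upstream_bytes_per_sec = INPUT_PACKET_BYTES * PACKETS_PER_SECOND
print(f"Upstream input traffic: {upstream_bytes_per_sec / 1024:.1f} KiB/s")
# ~3.8 KiB/s -- trivial next to even a modest broadband uplink.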
Quote:

Originally Posted by 64NOMIS
Yes, both network latency and quality of service could slow things down. It raises the question of how much of the input-handling burden needs to reside client-side, and at what point that might defeat the entire point.

But this might be a small price to pay in order to play Crysis on your watch, or listen to any piece of music ever written on your stereo, or view rich web content on your cell phone. And for keystrokes it's probably not an insurmountable challenge; it's just not a lot of upstream data.

We all seem to live with the lag of Unreal Tournament and similar ultra-fast-paced online games - the entire world needs to adjust to our keystrokes. Perhaps the latency would be even lower if the application were actually rendered online?
This is a very good point.

However, there would need to be major overhauls of networking, communication, and server hardware for this to be put in place, and that would cost loads and loads of money.

Rendering or processing a game online for 32, 64, or even 108 players could be a heavy task.

I guess we will see what the future can bring us.
I think this is a stupid idea. Delivering content through video will for the most part be much slower. For example, on OCN, you have like 10 bytes which tell you what color the background is, and that's it. It doesn't cost any more bandwidth - but if this background were rendered somewhere else, it would cost at least 60 bytes per 8x8 block.
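To put those figures together, here's a quick Python sketch of the arithmetic; the screen resolution and the markup byte count are assumptions, and the 60 bytes per 8x8 block is the estimate from the paragraph above:

# Cost of describing a solid page background as markup vs. as encoded video,
# using the post's estimate of 60 bytes per encoded 8x8 block.
WIDTH, HEIGHT = 1680, 1050          # assumed desktop resolution
BYTES_PER_BLOCK = 60                # post's estimate for an encoded 8x8 block
CSS_BYTES = 25                      # assumed, e.g. "background-color:#1a1a2e;"

blocks = (WIDTH // 8) * (HEIGHT // 8)   # 210 * 131 = 27,510 blocks
video_bytes = blocks * BYTES_PER_BLOCK
print(f"Markup: {CSS_BYTES} B   Video frame: {video_bytes / 1e6:.2f} MB")
# Roughly 1.65 MB per frame just to repaint a flat background.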

Also, they would have to use lossless video and audio encoding to deliver a GUI that looks like it does today (quality-wise)... and guess what? Lossless video is very expensive CPU-wise. It's like decompressing an immense RAR in real time, but with the added complexity of content awareness.

At the end of the day, we'll still need powerful processors, so this is pretty far off. There's also the part that says even the most advanced lossless encoding costs tons of bits. You can expect a 20-minute segment of Crysis at 720p24 to be 15 GB - that's approximately 36 GB at 720p60. That's assuming you're using YV12/YUV 4:2:0 - which is only 12 bits per pixel. 24-bit RGB would be more than twice that (it's less compressible than YV12 because it has more unique data).
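For reference, the raw (uncompressed) sizes behind the YV12 vs. RGB24 comparison work out as follows; the 15 GB figure above is the post's estimate, everything else is plain arithmetic in a short Python sketch:

# Raw frame sizes for 720p video in YV12 (12 bits/pixel) vs RGB24 (24 bits/pixel),
# and the totals for a 20-minute clip before any lossless compression.
WIDTH, HEIGHT = 1280, 720
MINUTES = 20

def raw_gigabytes(bits_per_pixel, fps):
    frame_bytes = WIDTH * HEIGHT * bits_per_pixel / 8
    return frame_bytes * fps * MINUTES * 60 / 1e9

for name, bpp in (("YV12 ", 12), ("RGB24", 24)):
    for fps in (24, 60):
        print(f"{name} @ {fps} fps: {raw_gigabytes(bpp, fps):6.1f} GB raw")
# YV12 @ 24 fps is ~40 GB raw, so the 15 GB lossless figure above implies
# roughly 2.5:1 to 3:1 compression; RGB24 simply doubles the raw size.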

Of course they could avoid using lossless encoding, but then everything would look bad. You may be able to run Crysis maxed out, but you'll see artifacts. They're visible in everything, and they change the feel of everything.

I think doing it online will just add more latency. Let's be nice and assume 30ms latency between you and the server. Also, everything the server does takes 1ms. You have a very high speed and high quality connection so you can download 40GB instantly. (= download time doesn't add latency)

You press a key which is sent to the server (30ms) -> Server renders and compresses (1ms) -> You get the data (30ms) -> You decompress 24 frames of RGB24 video (40ms) OR You decompress 24 frames of YV12 video (20ms) -> Video is converted to RGB32 (10ms - this is more intensive than you think) using the chosen colorimetry -> Video is rendered (10ms)

This is about 100ms at the very least.
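Summing the per-stage numbers assumed above gives that figure; here's a trivial Python tally of the same pipeline:

# Summing the per-stage latencies assumed above for a remotely rendered frame.
cloud_pipeline_ms = {
    "input to server":         30,
    "server render+compress":   1,
    "frames back to client":   30,
    "decode 24 frames (YV12)": 20,   # the RGB24 decode figure would be ~40 ms
    "YV12 -> RGB32 convert":   10,
    "present to display":      10,
}
total = sum(cloud_pipeline_ms.values())
print(f"Remote-render path: ~{total} ms per input-to-photon round trip")
# ~101 ms with the YV12 path; using the 40 ms RGB24 decode pushes it past 120 ms.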

When it all comes down to it, we'll still need decent GPUs - trash won't cut it (my 8400GS can't render 60 fps video at 1680x1050; it lags). The same goes for CPUs for colorspace conversion and decompression.

The same results - on a high end PC from the future (because cloud computing just isn't happening in the next few years):
You press a key (1ms) -> PC renders (16-17ms (borderline delay for 60 fps)) -> It's waiting in your RAM (1ms) -> It's not compressed (0ms) -> It was rendered at RGB32 by your graphics card -> Video is rendered (10ms)

Conclusion: To reach low latency in-game, in both cases you'll need fast, high-quality connections. It's a better idea to just make connections better instead of relying on servers. If we can eliminate the connection latency issue, rendering at home will be much faster than rendering on a server.
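And the same kind of tally for the local pipeline above, using the stage timings given in that paragraph:

# The local-render pipeline described above, tallied the same way.
local_pipeline_ms = {
    "input to application":  1,
    "GPU renders frame":    17,   # ~1/60 s, the "borderline" 60 fps budget
    "frame sits in RAM":     1,
    "decompression":         0,   # nothing to decode, it was rendered locally
    "present to display":   10,
}
total = sum(local_pipeline_ms.values())
print(f"Local-render path: ~{total} ms")   # ~29 ms vs ~101 ms for the remote path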
I like the argument, because I start in the same place. But I challenge you, as I have to challenge myself, because I know things aren't going to stay the same. Perhaps for my nice big desktop I will have lots of cloud-grade compute, mostly for running my holodeck. For the other clients, we may need to open our minds a bit.

Quote:


Originally Posted by Coma

I think this is a stupid idea. Delivering content through video will for the most part be much slower. For example, on OCN, you have like 10 bytes which tell you what color the background is, and that's it. It doesn't cost any more bandwidth - but if this background were rendered somewhere else, it would cost at least 60 bytes per 8x8 block.

Also, they would have to use lossless video and audio encoding to deliver a GUI that looks like it does today (quality-wise)... and guess what? Lossless video is very expensive CPU-wise. It's like decompressing an immense RAR in real time, but with the added complexity of content awareness.

First, look at your screen. Every image and every graphic is - guess what - lossy compressed or a patterned texture. The only thing that's not is text. So one hypothesis is that if you want to render at productivity-grade fidelity, you basically need clean text on top of a compressed background. Certainly you could do it that way - there's something called Acrobat. Not uncomplicated, but not a problem we haven't dealt with before.
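As a minimal sketch of that hybrid idea (ship the imagery lossy-compressed, draw the text client-side so it stays sharp), here's an illustrative Python example using Pillow; the image size, colors, file name, and quality setting are made-up assumptions, not any real protocol:

# Minimal sketch of the hybrid idea: lossy background, client-drawn text.
# Requires Pillow; all values are illustrative assumptions.
import io
from PIL import Image, ImageDraw

# "Server side": compress a background image aggressively.
background = Image.new("RGB", (640, 360), (30, 30, 60))   # stand-in for real artwork
buf = io.BytesIO()
background.save(buf, format="JPEG", quality=40)
print(f"Background payload: {buf.tell()} bytes")

# "Client side": decode the lossy background, then render text locally on top.
frame = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
ImageDraw.Draw(frame).text((20, 20), "Crisp UI text drawn on the client",
                           fill=(255, 255, 255))
frame.save("composited_frame.png")   # text edges are untouched by the JPEG pass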

Quote:


Originally Posted by Coma

At the end of the day, we'll still need powerful processors, so this is pretty far off. There's also the part that says even the most advanced lossless encoding costs tons of bits. You can expect a 20-minute segment of Crysis at 720p24 to be 15 GB - that's approximately 36 GB at 720p60. That's assuming you're using YV12/YUV 4:2:0 - which is only 12 bits per pixel. 24-bit RGB would be more than twice that (it's less compressible than YV12 because it has more unique data).

Of course they could avoid using lossless encoding, but then everything would look bad. You may be able to run Crysis maxed out, but you'll see artifacts. They're visible in everything, and they change the feel of everything.


Again, games and video will be fine with lossy compression, in particular on every client other than an enthusiast-grade gaming desktop. I concur that shipping around lossless data is a recipe for failure, therefore we must assume that it just won't happen that way.

Quote:


Originally Posted by Coma

I think doing it online will just add more latency. Let's be nice and assume 30ms latency between you and the server. Also, everything the server does takes 1ms. You have a very high speed and high quality connection so you can download 40GB instantly. (= download time doesn't add latency)

Again, if you look at the data flow for a modern real-time online game like UT3, it seems impossible for the input to flow upstream, the world to react, and for us to play head-to-head at faster-than-real-time speeds without being bothered by all the multiplayer input that needs to get munged and processed online. But it works.
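A rough comparison of what actually flows in the two models - today's state-sync netcode (inputs up, small world-state deltas down, client renders) versus streaming finished frames from a server - might look like this Python sketch; the per-tick sizes are illustrative assumptions:

# Rough downstream-bandwidth comparison: state-sync netcode (client renders,
# server only sends world-state deltas) vs. streaming finished frames.
# Per-tick sizes are illustrative assumptions, not measurements.
TICKS_PER_SEC = 60
STATE_DELTA_BYTES = 1200            # assumed: positions/events for nearby players
FRAME_BYTES = 150_000               # assumed: one lossy-compressed 720p frame

state_sync_kBps = TICKS_PER_SEC * STATE_DELTA_BYTES / 1000
frame_stream_kBps = TICKS_PER_SEC * FRAME_BYTES / 1000
print(f"State sync:      {state_sync_kBps:8.0f} kB/s downstream")   # ~72 kB/s
print(f"Frame streaming: {frame_stream_kBps:8.0f} kB/s downstream") # ~9,000 kB/s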

Quote:


Originally Posted by Coma

You press a key which is sent to the server (30ms) -> Server renders and compresses (1ms) -> You get the data (30ms) -> You decompress 24 frames of RGB24 video (40ms) OR You decompress 24 frames of YV12 video (20ms) -> Video is converted to RGB32 (10ms - this is more intensive than you think) using the chosen colorimetry -> Video is rendered (10ms)

This is about 100ms at the very least.

OK, so does it take more time to act out the gameplay and render a 3D scene, or to decompress a video? If you assume that you've got all the data local, it's not even a race - particularly for a client with lightweight 3D compute. And in any event, the keystroke (or something even more complex with today's 3D games) needs to go upstream. The question is how heavyweight the server needs to be on the other end and, as you state, whether the network math makes sense. This is the discussion we are having.

Quote:


Originally Posted by Coma

When it all comes down to it, we'll still need decent GPUs - trash won't cut it (my 8400GS can't render 60 fps video at 1680x1050; it lags). The same goes for CPUs for colorspace conversion and decompression.

The same results - on a high end PC from the future (because cloud computing just isn't happening in the next few years):
You press a key (1ms) -> PC renders (16-17ms (borderline delay for 60 fps)) -> It's waiting in your RAM (1ms) -> It's not compressed (0ms) -> It was rendered at RGB32 by your graphics card -> Video is rendered (10ms)

Conclusion: To reach low latency in-game, in both cases you'll need fast, high-quality connections. It's a better idea to just make connections better instead of relying on servers. If we can eliminate the connection latency issue, rendering at home will be much faster than rendering on a server.

As I said, people like you and me will have beefy home clients running our holodecks that will be able to locally instantiate applications or even whole portions of the cloud.

We may actually be running the cloud that our friends are playing on.