Thread: [IGN] EA Launches Surprise Cloud Gaming Trial With Four Games
  Topic Review (Newest First)
09-17-2019 12:19 AM
Malinkadink
Quote: Originally Posted by Zero4549 View Post
basically this. console players would probably never notice. too bad you'll never convince console players to put down their overpriced, decade-outdated potato systems. brand loyalty is a wonderfully stupid thing
Hey, those decade-outdated systems can play games like Bloodborne and God of War, something the PC doesn't have access to until much later, when (or rather IF) an emulator gets developed. PC has some of its own exclusives, but I can't think of any recent ones that are up to the same production quality as Sony exclusives.

Consoles and PCs both have their place, and there's no argument that PC hardware is vastly superior, but it's also not cheap to stay on the bleeding edge, so consoles work well for most people. As for cloud gaming, I'm not a fan of the idea; there's just too much latency involved for me personally. It would be fine for some games, but nothing competitive. I already notice a huge impact on my gameplay from network latency alone, for example when I'm placed in a west coast Overwatch match (80ms) vs an east coast one (40ms).
09-16-2019 07:11 PM
Avonosac
Quote: Originally Posted by NihilOC View Post
Spoiler!
I too love to respond to messages by completely ignoring the content therein; alas, I don't have your strength of character to see the periodic fantasy through to reality.
09-15-2019 04:25 PM
skupples
I still experienced it with nuked settings. Just like, all the time, always. I pushed through it for 4-5 hours and said no thanks. It was my first and only attempt at jumping onto the modern BR craze. I guess I'm partial to ARMA and its mods, especially now that #3 has matured and can run almost like a dream.
09-15-2019 01:59 PM
UltraMega
Quote: Originally Posted by skupples View Post
lemme ask this... does PUBG have a high default latency? I always felt like that was the most intentionally latent game ever. I'm pretty sure my N64 was faster.
PUBG has definitely always felt like a high latency game for me. It is a fairly demanding game.
09-15-2019 07:32 AM
skupples
lemme ask this... does PUBG have a high default latency? I always felt like that was the most intentionally latent game ever. I'm pretty sure my N64 was faster.
09-15-2019 12:24 AM
NihilOC
Quote: Originally Posted by Darren9 View Post
It's wrong to use excuses about sending interrupts and scheduling or rescheduling threads to cores. We have gigahertz processors, a thousand million cycles per second; scheduling a thread or jumping out of user mode to service a hardware interrupt could take a million clock cycles and still complete in a tiny fraction of a frame time, in other words a negligible addition. That even assumes that running a game on your gaming PC won't be the highest-priority task and that the scheduler won't realise that and prioritise your game threads.

The reddit thread you chose to show me clearly demonstrates a measured input lag of 24ms on the gaming PC, not 100ms, so 100+ms can't be down to scheduling, interrupts, RAM access or any similar underlying system functions, otherwise that 24ms wouldn't be possible.

I already said you can have 100ms of input lag - are bad settings, bad peripherals (including the monitor) or a bad game engine that doesn't allow good settings adding the other 80ms? I'm questioning whether that's average. So far you've shown me one measured input lag of 24ms for a gaming PC vs 90ms for an Nvidia streaming service, and also said "they're generally, but not all, below 100ms anyway" - to me that doesn't seem to support your claim of 100ms as an average input lag for a gaming PC.

I never claimed to be an expert, I just asked why 100ms is considered average. You've dismissed me by showing a definite measured 24ms of input lag on a gaming PC and stating that most of the rest are also below 100ms - you can see why I'm not entirely buying it yet?
Just read something from somebody who is an expert then, because it reiterates what I just told you in a lot more detail: https://danluu.com/input-lag/

Quote:
• hardware has its own scanrate (e.g. 120 Hz for recent touch panels), so that can introduce up to 8 ms latency
• events are delivered to the kernel through firmware; this is relatively quick but system scheduling concerns may introduce a couple ms here
• the kernel delivers those events to privileged subscribers (here, backboardd) over a mach port; more scheduling loss possible
• backboardd must determine which process should receive the event; this requires taking a lock against the window server, which shares that information (a trip back into the kernel, more scheduling delay)
• backboardd sends that event to the process in question; more scheduling delay possible before it is processed
• those events are only dequeued on the main thread; something else may be happening on the main thread (e.g. as result of a timer or network activity), so some more latency may result, depending on that work
• UIKit introduced 1-2 ms event processing overhead, CPU-bound
• the application decides what to do with the event; apps are poorly written, so usually this takes many ms. the consequences are batched up in a data-driven update which is sent to the render server over IPC
• If the app needs a new shared-memory video buffer as a consequence of the event, which will happen anytime something non-trivial is happening, that will require round-trip IPC to the render server; more scheduling delays
• (trivial changes are things which the render server can incorporate itself, like affine transformation changes or color changes to layers; non-trivial changes include anything that has to do with text, most raster and vector operations)
• These kinds of updates often end up being triple-buffered: the GPU might be using one buffer to render right now; the render server might have another buffer queued up for its next frame; and you want to draw into another. More (cross-process) locking here; more trips into kernel-land.
• the render server applies those updates to its render tree (a few ms)
• every N Hz, the render tree is flushed to the GPU, which is asked to fill a video buffer
• Actually, though, there’s often triple-buffering for the screen buffer, for the same reason I described above: the GPU’s drawing into one now; another might be being read from in preparation for another frame
• every N Hz, that video buffer is swapped with another video buffer, and the display is driven directly from that memory
• (this N Hz isn’t necessarily ideally aligned with the preceding step’s N Hz)
I may not be an expert but I have studied this before, albeit a long time ago, and I haven't dismissed you. If you read my posts you'll see I've been saying that it's fine for some games, but less than ideal for twitch based FPS games or similar.

My point is that most games you play, even on a high-spec gaming PC, will end up at ~100 ms of input lag (sometimes higher, sometimes lower) and that the meaningful optimisations are done in the game engine. That's why the highly optimised CS:GO, which likely doesn't need to do very much anyway to see if a gun can fire, can get such low latencies (you'll notice it was ~80ms lower than the other games' benchmarks on streaming platforms as well, which is why the streaming platforms got sub-100ms scores).

The main improvements you'll see on a "gaming" PC vs a console are refresh rate, high-polling-rate inputs and low-latency monitors, all of which would also reduce latency for their cloud-based equivalents.
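As a rough, purely illustrative sketch of how per-stage delays like the ones above add up locally versus with a streaming hop on top (every number below is an assumption, not a figure from this thread or from danluu.com):

```python
# Illustrative only: a crude input-to-photon latency budget. All numbers are
# placeholder assumptions chosen to show how the stages sum, not measurements.
local_ms = {
    "input device polling / USB": 2,
    "OS event delivery + scheduling": 1,
    "game logic (one tick)": 8,
    "render queue + GPU frame": 16,
    "display scanout / panel response": 10,
}

streaming_extra_ms = {
    "video encode on the server": 7,
    "network round trip": 25,
    "decode + compositing on the client": 8,
}

local_total = sum(local_ms.values())
remote_total = local_total + sum(streaming_extra_ms.values())
print(f"local total:    ~{local_total} ms")
print(f"streamed total: ~{remote_total} ms")
```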
09-14-2019 08:22 PM
skupples
I can tell you my home experience vs. streaming to my Shield vs. streaming content via Nvidia to my Shield... all very drastically different things. The latency between PC >> Shield is almost unnoticeable. The latency of Nvidia's streaming service? Very noticeable. It feels like gaming on a PS2/PS3, as far as latency goes. It's "playable" for all the 5+ year old titles they list, but again... playable like an ancient console.

But again, this has nothing to do with performance and everything to do with restructuring the business model to pull in more subscribers. Not to mention their games were already being developed for STADIA... how does this surprise anyone?
09-14-2019 08:09 PM
Darren9
Quote: Originally Posted by NihilOC View Post
I think you are overestimating how quickly a computer is able to process inputs. Just pulling something from RAM takes ~10ms on a very good computer, let alone doing anything with that data.

Reducing it significantly below the ~100ms mark isn't about upgrading your computer though. You can shave a few ms off with a decent monitor (although that would also improve the input latency if the game were rendered remotely), but the majority of the improvements that reduce input lag are made in the game engine. That's why twitch-based games, like fighting or FPS games, achieve such low input latency whereas something like an RPG wouldn't.

To provide an example of why input isn't processed instantly, let's say you press "a" in Mortal Kombat. It will go something like this:

• Send interrupt to the CPU;
• Tell the game a button has been pressed;
• Process scheduler will assign CPU time to the game, it may need to switch threads if there is something scheduled on that thread that it can't interrupt (like a system process);
• Game calls the corresponding function (let's say punch in this instance);
• Function checks whether the character can currently make the move;
• If yes, start rendering the first frame of the move.

There is an entire host of stuff that goes on that I haven't listed, mostly because it's not my area of expertise, but also because most of it is obscured even from the developers behind numerous layers of abstraction anyway. It happens very, very quickly but it is not instant.

If you want an example of this look at these tests here: https://www.reddit.com/r/cloudygamer...ormance_input/. You can see that different games give differing results, although again these games are FPS games designed for low input latency so they're generally, but not all, below 100ms anyway. I would note that any improvements in engine would grant the same decrease to the corresponding game if rendered remotely, much as an improved display would.



Yeah, I know, I googled it before posting. The above guy just thinks "gaming" PCs somehow process inputs more quickly.



Oh, I agree that this is awful for consumers, but it's not a physics problem. The bad-for-consumers part will be when games that have no business being on a streaming service, e.g. twitch-based games, are forced onto them by companies like EA. Likely either as exclusives to draw users, or they'll start moving their PC library in general to a streaming service to combat piracy (although they probably won't say that's the reason). And that's before you end up with x different streaming services all demanding a small sum of money for access to their titles, some of which will be exclusive, and when the total for all those subs is added up it'll end up costing more to access the latest games than it did before.

Either way, ~180ms is not pushing up against theoretical limits. The technology isn't the issue; even distance within the processor is a negligible constraint. The traces are copper, so you're looking at a signal propagating hundreds of thousands of meters per millisecond, and even the actual stream can be transmitted primarily over fibre to the local cabinet in most western nations. The bulk of the latency is in processing, which can be reduced.
It's wrong to use excuses about sending interrupts and scheduling or rescheduling threads to cores. We have gigahertz processors, a thousand million cycles per second; scheduling a thread or jumping out of user mode to service a hardware interrupt could take a million clock cycles and still complete in a tiny fraction of a frame time, in other words a negligible addition. That even assumes that running a game on your gaming PC won't be the highest-priority task and that the scheduler won't realise that and prioritise your game threads.

The reddit thread you chose to show me clearly demonstrates a measured input lag of 24ms on the gaming PC, not 100ms, so 100+ms can't be down to scheduling, interrupts, RAM access or any similar underlying system functions, otherwise that 24ms wouldn't be possible.

I already said you can have 100ms of input lag - are bad settings, bad peripherals (including the monitor) or a bad game engine that doesn't allow good settings adding the other 80ms? I'm questioning whether that's average. So far you've shown me one measured input lag of 24ms for a gaming PC vs 90ms for an Nvidia streaming service, and also said "they're generally, but not all, below 100ms anyway" - to me that doesn't seem to support your claim of 100ms as an average input lag for a gaming PC.

I never claimed to be an expert, I just asked why 100ms is considered average. You've dismissed me by showing a definite measured 24ms of input lag on a gaming PC and stating that most of the rest are also below 100ms - you can see why I'm not entirely buying it yet?
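As a quick back-of-the-envelope check on the cycles-versus-frame-time arithmetic above (the 4 GHz clock speed and the cycle counts are assumptions for illustration only):

```python
# Illustrative arithmetic: what scheduling/interrupt overhead costs relative to
# one displayed frame, assuming a 4 GHz CPU and a 60 Hz display.
cpu_hz = 4_000_000_000          # assumed clock speed
frame_ms = 1000 / 60            # one frame at 60 Hz is ~16.7 ms

for cycles in (10_000, 100_000, 1_000_000):
    overhead_ms = cycles / cpu_hz * 1000
    print(f"{cycles:>9,} cycles ~= {overhead_ms:.4f} ms "
          f"~= 1/{frame_ms / overhead_ms:,.0f} of a frame")
```

Even the pessimistic million-cycle case lands well under a frame, which is the point being made about scheduling overhead being negligible next to everything else in the pipeline.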
09-14-2019 04:04 PM
NihilOC
Quote: Originally Posted by Darren9 View Post
I'm counting the display lag as part of the input lag, and I'm counting input lag as the time from making an action (mouse move seems more critical than a keystroke to me) to seeing it on the screen.

You definitely can have more than 100ms of input lag with a high render queue, v-sync on and a non-gaming monitor adding one or two more frames on the end but I wouldn't call that an "average" gaming PC? More like an average desktop PC being used to play games - not the same thing.
I think you are overestimating how quickly a computer is able to process inputs. Just pulling something from RAM takes ~10ms on a very good computer, let alone doing anything with that data.

Reducing it significantly below the ~100ms mark isn't about upgrading your computer though. You can shave a few ms off with a decent monitor (although that would also improve the input latency if the game were rendered remotely), but the majority of the improvements that reduce input lag are made in the game engine. That's why twitch-based games, like fighting or FPS games, achieve such low input latency whereas something like an RPG wouldn't.

To provide an example of why input isn't processed instantly, let's say you press "a" in Mortal Kombat. It will go something like this:

• Send interrupt to the CPU;
• Tell the game a button has been pressed;
• Process scheduler will assign CPU time to the game, it may need to switch threads if there is something scheduled on that thread that it can't interrupt (like a system process);
• Game calls the corresponding function (let's say punch in this instance);
• Function checks whether the character can currently make the move;
• If yes, start rendering the first frame of the move.

There is an entire host of stuff that goes on that I haven't listed, mostly because it's not my area of expertise, but also because most of it is obscured even from the developers behind numerous layers of abstraction anyway. It happens very, very quickly but it is not instant.

If you want an example of this look at these tests here: https://www.reddit.com/r/cloudygamer...ormance_input/. You can see that different games give differing results, although again these games are FPS games designed for low input latency so they're generally, but not all, below 100ms anyway. I would note that any improvements in engine would grant the same decrease to the corresponding game if rendered remotely, much as an improved display would.
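To make the Mortal Kombat flow above a bit more concrete, here is a minimal, purely illustrative sketch of that path as a single-threaded event loop; the function names and structure are assumptions for illustration, not taken from any real engine:

```python
import queue

events = queue.Queue()            # stands in for the OS delivering input events

def on_button_press(key):
    # The "interrupt" side: the OS/driver queues the event for the game.
    events.put(key)

def can_perform(move):
    return True                   # placeholder for stun/recovery/state checks

def render_first_frame(move):
    print(f"rendering first frame of {move}")

def game_loop_tick():
    # The game only sees the input on its next loop iteration, so even a
    # perfectly written handler waits up to one tick (~16 ms at 60 Hz)
    # before the first frame of the move can even start rendering.
    try:
        key = events.get_nowait()
    except queue.Empty:
        return
    if key == "a":
        move = "punch"
        if can_perform(move):
            render_first_frame(move)

on_button_press("a")
game_loop_tick()
```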

Quote: Originally Posted by skupples View Post
standard rate from click to action has been recorded many many times by many trusted resources, easy enough to google up. It was even part of the VRR Vs. no VRR debate for awhile.
Yeah, I know, I googled it before posting. The above guy just thinks "gaming" PCs somehow process inputs more quickly.

Quote: Originally Posted by Avonosac View Post
But it is a physics problem, it's exactly a physics problem. Part of the calculation is also the distance within the processors that the signal needs to traverse; nearly 200ms of input latency is just not acceptable for anything twitch-based. I wouldn't even consider a game that uses algorithms to balance the latency experience across a wide range of players to attract more casuals, because it absolutely ruins the experience for those who have good systems and networks.

I was a semi-pro CS player back in high school / early college before Steam, and ping over 50 was a killer. I also teach companies how to build platforms as a consultant while building platforms in my day job, so I know all about the economics and constraints these offerings operate under. Look at my post history; I've already ranted plenty about how the platform wars are really bad for consumers.

At the end of the day, when the platforms vanish you have nothing to show for the money you put out, so while it seems nice... just buy and build your own system.
Oh, I agree that this is awful for consumers, but it's not a physics problem. The bad-for-consumers part will be when games that have no business being on a streaming service, e.g. twitch-based games, are forced onto them by companies like EA. Likely either as exclusives to draw users, or they'll start moving their PC library in general to a streaming service to combat piracy (although they probably won't say that's the reason). And that's before you end up with x different streaming services all demanding a small sum of money for access to their titles, some of which will be exclusive, and when the total for all those subs is added up it'll end up costing more to access the latest games than it did before.

Either way, ~180ms is not pushing up against theoretical limits. The technology isn't the issue; even distance within the processor is a negligible constraint. The traces are copper, so you're looking at a signal propagating hundreds of thousands of meters per millisecond, and even the actual stream can be transmitted primarily over fibre to the local cabinet in most western nations. The bulk of the latency is in processing, which can be reduced.
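To put rough numbers on the propagation point above (the two-thirds-of-light-speed figure and the distances are assumptions for illustration):

```python
# Illustrative arithmetic: how much raw distance contributes to latency,
# assuming signals travel at roughly 2/3 the speed of light in copper or fibre.
C = 299_792_458                   # speed of light in a vacuum, m/s
v = (2 / 3) * C                   # assumed propagation speed in the medium, m/s

for metres in (0.3, 50_000, 500_000):      # a board-scale trace, 50 km, 500 km
    one_way_ms = metres / v * 1000
    print(f"{metres:>9,} m one way ~= {one_way_ms:.6f} ms "
          f"(round trip ~= {2 * one_way_ms:.6f} ms)")
```

Even a 500 km run adds only a few milliseconds each way, which is why the bulk of the ~180ms has to come from capture, encode, decode and display rather than from distance.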
09-13-2019 06:47 PM
skupples
And that latency in CS is on top of everything else.

This will work flawlessly for folks who live in certain cities, with certain infrastructure. Everyone else is boned for anything better than piss-poor performance compared to what even a console-killer PC could do. Maybe not in the graphics department, but 100% in the responsive, quality-experience department.

However, as a built-into-the-TV solution? I'm sure it'll be great.

I just inherited a giiiiaant smart TV old enough to only work as a regular TV. I love it. Never clicking that update button.