[ExtremeTech] Building GPUs Out of Entire Wafers Could Turbocharge Performance, Efficiency
post #21 of 23, 02-23-2019, 02:10 PM
8051
New to Overclock.net
Join Date: Apr 2014
Posts: 2,504
Rep: 20 (Unique: 14)
Quote: Originally Posted by prjindigo
An 8% increase in speed across the board on all GPUs used for graphical display and gaming can be had simply by putting a separate frame-buffer chip on the card between the existing tech and the output ports.

This 8% is achieved by NOT INTERRUPTING graphics generation to send frames to the display: the card writes frames to the buffer at full card speed instead of slowing down to the transmit speed of whatever connection you have to your monitor, whether that's a 22" DP1.2 link or The Verge's magical invisible unicorn-intestine-clad 18-foot 144Hz HDMI cable they showed in use at the end of their "How to build a computer" video.
They used to make dual-ported RAM (VRAM, and later WRAM) for video cards that somewhat addressed this problem: the RAMDAC and the GPU could both read from video memory simultaneously.
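
The same decoupling can be sketched in software as a mailbox-style triple buffer: the renderer always has a spare buffer to draw into and just publishes its newest finished frame, while scanout reads the latest one at its own pace, so neither side ever stalls the other. A toy single-threaded C sketch (the buffer sizes, fake frame contents, and scanout stub are all illustrative assumptions, not any real driver's code):

Code:
/* Toy sketch of a "mailbox" triple buffer. Single-threaded for clarity;
 * a real concurrent version would also have to avoid reusing the buffer
 * currently being scanned out. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define W 640
#define H 480

static uint32_t frames[3][W * H];   /* three full frame buffers        */
static atomic_int latest = -1;      /* index of newest completed frame */

static void render_loop(int nframes) {
    int draw = 0;
    for (int f = 0; f < nframes; f++) {
        for (int i = 0; i < W * H; i++)
            frames[draw][i] = (uint32_t)f;   /* fake rendering work   */
        atomic_store(&latest, draw);         /* publish the frame     */
        draw = (draw + 1) % 3;               /* rotate to next buffer */
    }
}

static void scanout_once(void) {
    int idx = atomic_load(&latest);          /* newest published frame */
    if (idx >= 0)
        printf("scanning out frame %u\n", (unsigned)frames[idx][0]);
}

int main(void) {
    render_loop(5);
    scanout_once();   /* in real hardware this runs on the display clock */
    return 0;
}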
post #22 of 23, 02-23-2019, 05:25 PM
VeritronX
PC Enthusiast
Join Date: Mar 2014
Location: South Australia
Posts: 880
Rep: 26 (Unique: 26)
Nvidia already does its GPU scheduling in software in the driver rather than on-chip, so it could split that work between GPUs. The real problem is load balancing and stitching the results back together in sync. If they sectioned off areas of the screen and only refreshed once the whole screen had been rendered, it could work.
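
A toy C sketch of that sectioning idea, using threads to stand in for GPUs and a barrier as the "whole screen rendered" sync point (the band split and all names here are illustrative assumptions):

Code:
/* Split-frame rendering sketch: the screen is cut into horizontal bands,
 * one per "GPU" (here a thread), and the frame is only presented once
 * every band has finished. Build with: cc -pthread */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define W 640
#define H 480
#define NGPUS 4

static uint32_t framebuffer[W * H];
static pthread_barrier_t frame_done;

static void *render_band(void *arg) {
    int gpu = (int)(intptr_t)arg;
    int y0 = gpu * (H / NGPUS), y1 = y0 + H / NGPUS;
    for (int y = y0; y < y1; y++)            /* fake per-band rendering  */
        for (int x = 0; x < W; x++)
            framebuffer[y * W + x] = (uint32_t)gpu;
    pthread_barrier_wait(&frame_done);       /* wait for the other bands */
    return NULL;
}

int main(void) {
    pthread_t gpus[NGPUS];
    pthread_barrier_init(&frame_done, NULL, NGPUS + 1);
    for (int i = 0; i < NGPUS; i++)
        pthread_create(&gpus[i], NULL, render_band, (void *)(intptr_t)i);
    pthread_barrier_wait(&frame_done);       /* present only when all done */
    printf("frame complete, safe to present\n");
    for (int i = 0; i < NGPUS; i++)
        pthread_join(gpus[i], NULL);
    return 0;
}

The barrier is the whole trick: no band is ever shown early, so there is no tearing at the seams between GPUs, which is exactly the sync problem described above.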

post #23 of 23, 02-25-2019, 12:25 PM
DNMock
New to Overclock.net
Join Date: Jul 2014
Location: Dallas
Posts: 3,135
Rep: 158 (Unique: 117)
Quote: Originally Posted by VeritronX
Nvidia already does its GPU scheduling in software in the driver rather than on-chip, so it could split that work between GPUs. The real problem is load balancing and stitching the results back together in sync. If they sectioned off areas of the screen and only refreshed once the whole screen had been rendered, it could work.
I've always thought, and still do, that a multi-GPU setup would benefit a ton from having a dedicated third GPU that handles all the I/O and stitching.

With the current setup the latency would be pretty high, and it just wouldn't be cost-effective to produce such a card for a niche audience, but if it's all on the same wafer and all the chips are running with pooled memory, it's suddenly not such a big issue.

Probably lots of issues with such a setup that I don't know about, though.
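
For illustration only, a toy C sketch of the idea: worker threads stand in for the rendering chips, a shared array stands in for the pooled wafer memory, and a separate compositor routine plays the dedicated third GPU that owns I/O and stitching (everything here is an assumption, not how any real driver or wafer does it):

Code:
/* Dedicated-compositor sketch: workers render tiles into one pooled
 * memory region; the compositor alone touches the display path.
 * Build with: cc -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define W 640
#define H 480
#define NWORKERS 3

static uint32_t pooled[W * H];     /* memory shared by all chips      */
static atomic_int tiles_done;      /* how many tiles are finished     */

static void *worker(void *arg) {
    int id = (int)(intptr_t)arg;
    int y0 = id * (H / NWORKERS), y1 = y0 + H / NWORKERS;
    for (int y = y0; y < y1; y++)            /* fake tile rendering */
        for (int x = 0; x < W; x++)
            pooled[y * W + x] = (uint32_t)id;
    atomic_fetch_add(&tiles_done, 1);        /* tell the compositor */
    return NULL;
}

/* The "third GPU": waits for every tile, then does the stitching and
 * output itself so the renderers never touch the display path. */
static void compositor(void) {
    while (atomic_load(&tiles_done) < NWORKERS)
        ;   /* spin; real hardware would use an interrupt instead */
    printf("all %d tiles in pooled memory, scanning out\n", NWORKERS);
}

int main(void) {
    pthread_t w[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&w[i], NULL, worker, (void *)(intptr_t)i);
    compositor();
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    return 0;
}

The point of the pooled memory is that the compositor reads the tiles in place rather than copying them over a PCIe link, which is where the latency and cost objections to doing this with discrete cards come from.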

