
[computerbase.de] DOOM + Vulkan Benchmarked. - Page 43

post #421 of 632
Quote:
Originally Posted by lolerk52 View Post

Well, it does appear to do better at lower resolutions than its brethren, but nowhere near 980 Ti perf:


Yeah, TPU is showing mostly nVIDIA titles or old games which run well on NVIDIA. Take Anno 2205 away and you get +5% on all AMD cards.
post #422 of 632
Quote:
Originally Posted by GorillaSceptre View Post

That's the only saving grace right now. If the X1 wasn't struggling against its competition, MS wouldn't even give a damn..

Vulkan and DX12 are extremely similar so I'm personally hoping Vulkan is the go-to API moving forward. That way everyone who isn't/doesn't want to deal with Win10 gets the benefits too.

Besides support from Microsoft or deals, I don't know why anyone would choose DX12 over Vulkan. And now that DX12 has competition, I think we'll have MS pushing it very hard going forward.

That is the biggest reason why DirectX was preferred over OpenGL, and I think that will be the edge DirectX 12 has over Vulkan. I would definitely prefer Vulkan to be the go-to API instead of DirectX 12 for the sake of less Windows reliance for PC games going forward, but developers unfortunately, and understandably, prefer having that support rather than not.
post #423 of 632
Quote:
Originally Posted by Remij View Post

People need to stop blaming developers. It's the GPU vendors' job to make sure their hardware is taken advantage of.

Why is it so hard to understand that developers will code for whichever hardware has the biggest install base in the market they are developing for? Why is it hard to understand that developers will often choose the path of least resistance? Just because there is a new API out there doesn't mean developers have to take advantage of it. It's the GPU vendors who stand to gain from it that need to do the work so that it's adopted.

Console ports are heavily designed for GCN architecture and Nvidia has to combat that by working with developers and adding their own incentives for people to choose their hardware over the competition's.

Devs have limited time and resources and make logical decisions based not on fanboy-ish behavior, but market research. If anything it's AMD's time to bring the heat with DX12 and really work with developers to heavily code for async compute+graphics. If developers don't, or AMD doesn't have enough resources to work with everyone, then it is what it is; they need to pick their battles. If games are developed on consoles then ported to PC, Nvidia has to make it work. If they can't.. then I'd be switching to Team Red. Developers are making decisions that make sense for them. AMD has done a lot of work recently. It's pretty much impossible for game developers to ignore them, and they have 2 fresh new APIs well suited to their hardware. They are finally more competitive and regardless of what people think is happening behind the scenes, this is good for competition.

AMD fans need to take their wins and losses graciously. Nvidia fans need to realize that AMD is back in a big way and they're not the same AMD as before. There will be some losses lol

Then how do you explain Ubisoft? tongue.gif

But seriously, ever since the unholy trinity that was Watch Dogs, Ass Creed Unity and Far Cry 4, I've pretty much written them off.
post #424 of 632
Quote:
Originally Posted by magnek View Post

Then how do you explain Ubisoft? tongue.gif

I'm French myself and trust me, there's no explaining Ubisoft. They're.. uhhh... just special. wink.giftongue.gif
post #425 of 632
Quote:
Originally Posted by Remij View Post

People need to stop blaming developers. It's the GPU vendors' job to make sure their hardware is taken advantage of.

Why is it so hard to understand that developers will code for whichever hardware has the biggest install base in the market they are developing for? Why is it hard to understand that developers will often choose the path of least resistance? Just because there is a new API out there doesn't mean developers have to take advantage of it. It's the GPU vendors who stand to gain from it that need to do the work so that it's adopted.

Console ports are heavily designed for GCN architecture and Nvidia has to combat that by working with developers and adding their own incentives for people to choose their hardware over the competition's.

Devs have limited time and resources and make logical decisions based not on fanboy-ish behavior, but market research. If anything it's AMD's time to bring the heat with DX12 and really work with developers to heavily code for async compute+graphics. If developers don't, or AMD doesn't have enough resources to work with everyone, then it is what it is; they need to pick their battles. If games are developed on consoles then ported to PC, Nvidia has to make it work. If they can't.. then I'd be switching to Team Red. Developers are making decisions that make sense for them. AMD has done a lot of work recently. It's pretty much impossible for game developers to ignore them, and they have 2 fresh new APIs well suited to their hardware. They are finally more competitive and regardless of what people think is happening behind the scenes, this is good for competition.

AMD fans need to take their wins and losses graciously. Nvidia fans need to realize that AMD is back in a big way and they're not the same AMD as before. There will be some losses lol

Not sure if serious. A console-level API better than DX9-DX11 has been a constant demand from developers for several years. It was Johan Andersson of EA DICE that kickstarted this API renaissance we are experiencing today (Mantle, Metal, DX12, Vulkan etc). He shared the concerns of the rest of the PC devs, but he did not just nag. He took it to the next level and pitched the idea, the need if you like, for a better API to Intel, MS, nvidia and AMD. It was AMD that shared his vision and they came up with Mantle, which is the father of DX12 and Vulkan. Without devs expressing their discontent and their need for better tools, we would still be left with the obsolete DX11.
post #426 of 632
Quote:
Originally Posted by Remij View Post

People need to stop blaming developers. It's the GPU vendors' job to make sure their hardware is taken advantage of.

I'm not blaming indie devs working out of a tiny office trying to make their dreams come true..

I'm talking about big business triple-A studios; those are the titles that actually need the new APIs (according to them). 99% of the games on Steam will run on weak hardware; DX12 and Vulkan + the extra work that comes with them aren't needed.

These big studios that release games like Arkham Knight, Unity, etc., etc., all blame their broken garbage on restrictive APIs like DX11. As for the vendors being responsible, well... I agree to some extent, and AMD went as far as to create their own API and push for DX12 and Vulkan in the first place.. But the onus is also on studios who are more than happy to charge $60 + season passes for their products.

Studios like id and DICE are few and far between.. Most of the other ones who have been crying for these new APIs are now backtracking. Now that refunds are the norm I think they'll all put a bit more effort in. thumb.gif
Edited by GorillaSceptre - 7/17/16 at 3:06pm
post #427 of 632
Quote:
Originally Posted by comprodigy View Post

Mahigan, your assumption about fences/async/nvidia is off. Case in point is the async demo provided by MS and altered by AMD. I can run this on my 980ti and have the same performance as I do with async off. The software isn't written to detect if async is present or not, it's just issuing command lists into multiple queues.

Not off at all... the Async Compute test you are referencing is not a heavy test. It is only meant to show off the feature itself and involves no user interaction or need for more than a single Fence between the Compute and Graphics contexts.

In a game... things are much different... have a read here, or check out the important parts below: https://msdn.microsoft.com/en-us/library/windows/desktop/dn899217(v=vs.85).aspx
GPU engines

The following diagram shows a title's CPU threads, each populating one or more of the copy, compute and 3D queues. The 3D queue can drive all three GPU engines, the compute queue can drive the compute and copy engines, and the copy queue simply the copy engine.
As the different threads populate the queues, there can be no simple guarantee of the order of execution, hence the need for synchronization mechanisms - when the title requires them.

The following image illustrates how a title might schedule work across multiple GPU engines, including inter-engine synchronization where necessary: it shows the per-engine workloads with inter-engine dependencies. In this example, the copy engine first copies some geometry necessary for rendering. The 3D engine waits for these copies to complete, and renders a pre-pass over the geometry. This is then consumed by the compute engine. The results of the compute engine Dispatch, along with several texture copy operations on the copy engine, are consumed by the 3D engine for the final Draw call.

The following pseudo-code illustrates how a title might submit such a workload.
Quote:
// Get per-engine contexts. Note that multiple queues may be exposed
// per engine, however that design is not reflected here.
copyEngine = device->GetCopyEngineContext();
renderEngine = device->GetRenderEngineContext();
computeEngine = device->GetComputeEngineContext();
copyEngine->CopyResource(geometry, ...); // copy geometry
copyEngine->Signal(copyFence, 101);
copyEngine->CopyResource(tex1, ...); // copy textures
copyEngine->CopyResource(tex2, ...); // copy more textures
copyEngine->CopyResource(tex3, ...); // copy more textures
copyEngine->CopyResource(tex4, ...); // copy more textures
copyEngine->Signal(copyFence, 102);
renderEngine->Wait(copyFence, 101); // geometry copied
renderEngine->Draw(); // pre-pass using geometry only into rt1
renderEngine->Signal(renderFence, 201);
computeEngine->Wait(renderFence, 201); // prepass completed
computeEngine->Dispatch(); // lighting calculations on pre-pass (using rt1 as SRV)
computeEngine->Signal(computeFence, 301);
renderEngine->Wait(computeFence, 301); // lighting calculated into buf1
renderEngine->Wait(copyFence, 102); // textures copied
renderEngine->Draw(); // final render using buf1 as SRV, and tex[1-4] SRVs

The following pseudo-code illustrates synchronization between the copy and 3D engines to accomplish heap-like memory allocation via a ring buffer. Titles have the flexibility to choose the right balance between maximizing parallelism (via a large buffer) and reducing memory consumption and latency (via a small buffer).
Quote:
device->CreateBuffer(&ringCB);
for (int i = 1; ; i++) {
    if (i > length) copyEngine->Wait(fence1, i - length);
    copyEngine->Map(ringCB, i % length, WRITE, pData); // copy new data
    copyEngine->Signal(fence2, i);
    renderEngine->Wait(fence2, i);
    renderEngine->Draw(); // draw using copied data
    renderEngine->Signal(fence1, i);
}

// example for length = 3:
// copyEngine->Map();
// copyEngine->Signal(fence2, 1); // fence2 = 1
// copyEngine->Map();
// copyEngine->Signal(fence2, 2); // fence2 = 2
// copyEngine->Map();
// copyEngine->Signal(fence2, 3); // fence2 = 3
// copy engine has exhausted the ring buffer, so must wait for render to consume it
// copyEngine->Wait(fence1, 1); // fence1 == 0, wait
// renderEngine->Wait(fence2, 1); // fence2 == 3, pass
// renderEngine->Draw();
// renderEngine->Signal(fence1, 1); // fence1 = 1, copy engine now unblocked
// renderEngine->Wait(fence2, 2); // fence2 == 3, pass
// renderEngine->Draw();
// renderEngine->Signal(fence1, 2); // fence1 = 2
// renderEngine->Wait(fence2, 3); // fence2 == 3, pass
// renderEngine->Draw();
// renderEngine->Signal(fence1, 3); // fence1 = 3
// now render engine is starved, and so must wait for the copy engine
// renderEngine->Wait(fence2, 4); // fence2 == 3, wait
Multi-engine scenarios

D3D12 allows developers to avoid accidentally running into inefficiencies caused by unexpected synchronization delays. It also allows developers to introduce synchronization at a higher level where the required synchronization can be determined with greater certainty. A second issue that multi-engine addresses is to make expensive operations more explicit, which includes transitions between 3D and video that were traditionally costly because of synchronization between multiple kernel contexts.
In particular, the following scenarios can be addressed with D3D12:

  • Asynchronous and low priority GPU work. This enables concurrent execution of low priority GPU work and atomic operations that enable one GPU thread to consume the results of another unsynchronized thread without blocking.
  • High priority compute work. With background compute it is possible to interrupt 3D rendering to do a small amount of high priority compute work. The results of this work can be obtained early for additional processing on the CPU.
  • Background compute work. A separate low priority queue for compute workloads allows an application to utilize spare GPU cycles to perform background computation without negative impact on the primary rendering (or other) tasks. Background tasks may include decompression of resources or updating simulations or acceleration structures. Background tasks should be synchronized on the CPU infrequently (approximately once per frame) to avoid stalling or slowing foreground work.
  • Streaming and uploading data. A separate copy queue replaces the D3D11 concepts of initial data and updating resources. Although the application is responsible for more details in the D3D12 model, this responsibility comes with power. The application can control how much system memory is devoted to buffering upload data. The app can choose when and how (CPU vs GPU, blocking vs non-blocking) to synchronize, and can track progress and control the amount of queued work.
  • Increased parallelism. Applications can use deeper queues for background workloads (e.g. video decode) when they have separate queues for foreground work.

In D3D12 the concept of a command queue is the API representation of a roughly serial sequence of work submitted by the application. Barriers and other techniques allow this work to be executed in a pipeline or out of order, but the application only sees a single completion timeline. This corresponds to the immediate context in D3D11.
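To make the streaming/upload scenario above concrete, here is a minimal sketch of a dedicated copy queue feeding the 3D queue (illustrative only; pCopyQueue, pGraphicsQueue, pUploadFence, uploadFenceValue, pUploadCommandList and pDrawCommandList are assumed to already exist):

// Submit the recorded upload work on the dedicated copy queue.
ID3D12CommandList* uploadLists[] = { pUploadCommandList };
pCopyQueue->ExecuteCommandLists(1, uploadLists);

// Mark the point on the copy queue's timeline where the upload is done.
pCopyQueue->Signal(pUploadFence, uploadFenceValue);

// The 3D queue keeps rendering and only stalls at the point where it
// actually consumes the uploaded resources.
pGraphicsQueue->Wait(pUploadFence, uploadFenceValue);
ID3D12CommandList* drawLists[] = { pDrawCommandList };
pGraphicsQueue->ExecuteCommandLists(1, drawLists);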
Synchronization APIs

Devices and Queues
The D3D 12 device has methods to create and retrieve command queues of different types and priorities. Most applications should use the default command queues because these allow for shared usage by other components. Applications with additional concurrency requirements can create additional queues. Queues are specified by the command list type that they consume.
Refer to the following creation methods of ID3D12Device:
CreateCommandQueue : creates a command queue based on information in a D3D12_COMMAND_QUEUE_DESC structure.
CreateCommandList : creates a command list of type D3D12_COMMAND_LIST_TYPE.
CreateFence : creates a fence, noting the flags in D3D12_FENCE_FLAGS. Fences are used to synchronize queues.
Queues of all types (3D, compute and copy) share the same interface and are all command-list based. Resource mapping operations remain on the queue interface, but are only allowed on 3D and compute queues (not copy).
Refer to the following methods of ID3D12CommandQueue:
ExecuteCommandLists : submits an array of command lists for execution. Each command list being defined by ID3D12CommandList.
Signal : sets a fence value when the queue (running on the GPU) reaches a certain point.
Wait : the queue waits until the specified fence reaches the specified value.
Note that bundles are not consumed by any queues and therefore this type cannot be used to create a queue.
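For reference, a bare-bones creation sketch using the methods listed above (illustrative; error handling omitted, pDevice is assumed to be an existing ID3D12Device):

// Describe and create an additional compute queue.
D3D12_COMMAND_QUEUE_DESC computeDesc = {};
computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
computeDesc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;
computeDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
ID3D12CommandQueue* pComputeQueue = nullptr;
pDevice->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&pComputeQueue));

// A command allocator and command list of the matching type.
ID3D12CommandAllocator* pComputeAllocator = nullptr;
ID3D12GraphicsCommandList* pComputeList = nullptr;
pDevice->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE, IID_PPV_ARGS(&pComputeAllocator));
pDevice->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE, pComputeAllocator, nullptr, IID_PPV_ARGS(&pComputeList));

// A fence that other queues (or the CPU) can use to synchronize with this one.
ID3D12Fence* pComputeFence = nullptr;
pDevice->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&pComputeFence));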

Fences
The multi-engine API provides explicit APIs to create and synchronize using fences. A fence is a synchronization construct determined by monotonically increasing a UINT64 value. Fence values are set by the application. A signal operation increases the fence value and a wait operation blocks until the fence has reached the requested value. An event can be fired when a fence reaches a certain value.
Refer to the methods of the ID3D12Fence interface:
GetCompletedValue : returns the current value of the fence.
SetEventOnCompletion : causes an event to fire when the fence reaches a given value.
Signal : sets the fence to the given value.
Fences allow CPU access to the current fence value, and CPU waits and signals. Independent components can share the default queues but create their own fences and control their own fence values and synchronization.
The Signal method on the ID3D12Fence interface updates a fence from the CPU side. The Signal method on ID3D12CommandQueue updates a fence from the GPU side.
All nodes in a multi-engine setup can read and react to any fence reaching the right value.
Applications set their own fence values, a good starting point might be increasing a fence once per frame.
The fence APIs provide powerful synchronization functionality but can create potentially difficult to debug issues.
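A small sketch of the CPU side of this, using the fence methods just described (illustrative; pQueue, pFence and lastSubmittedValue are assumed to exist):

// GPU side: ask the queue to set the fence to fenceValue once it reaches
// this point in its timeline.
const UINT64 fenceValue = ++lastSubmittedValue;
pQueue->Signal(pFence, fenceValue);

// CPU side: only block if the GPU has not reached that point yet.
if (pFence->GetCompletedValue() < fenceValue)
{
    HANDLE hEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    pFence->SetEventOnCompletion(fenceValue, hEvent);
    WaitForSingleObject(hEvent, INFINITE);
    CloseHandle(hEvent);
}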
Asynchronous compute and graphics example

This next example allows graphics to render asynchronously from the compute queue. There is still a fixed amount of buffered data between the two stages, however now graphics work proceeds independently and uses the most up-to-date result of the compute stage as known on the CPU when the graphics work is queued. This would be useful if the graphics work was being updated by another source, for example user input. There must be multiple command lists to allow the ComputeGraphicsLatency frames of graphics work to be in flight at a time, and the function UpdateGraphicsCommandList represents updating the command list to include the most recent input data and read from the compute data from the appropriate buffer.
The compute queue must still wait for the graphics queue to finish with the pipe buffers, but a third fence (pGraphicsComputeFence) is introduced so that the progress of graphics reading compute work versus graphics progress in general can be tracked. This reflects the fact that now consecutive graphics frames could read from the same compute result or could skip a compute result. A more efficient but slightly more complicated design would use just the single graphics fence and store a mapping to the compute frames used by each graphics frame.
Quote:
void AsyncPipelinedComputeGraphics()
{
const UINT CpuLatency = 3;
const UINT ComputeGraphicsLatency = 2;

// Compute is 0, graphics is 1
ID3D12Fence *rgpFences[] = { pComputeFence, pGraphicsFence };
HANDLE handles[2];
handles[0] = CreateEvent(nullptr, FALSE, TRUE, nullptr);
handles[1] = CreateEvent(nullptr, FALSE, TRUE, nullptr);
UINT FrameNumbers[] = { 0, 0 };

ID3D12GraphicsCommandList *rgpGraphicsCommandLists[CpuLatency];
CreateGraphicsCommandLists(ARRAYSIZE(rgpGraphicsCommandLists),
rgpGraphicsCommandLists);

// Graphics needs to wait for the first compute frame to complete, this is the
// only wait that the graphics queue will perform.
pGraphicsQueue->Wait(pComputeFence, 1);


while (1)
{
for (auto i = 0; i < 2; ++i)
{
if (FrameNumbers[i] > CpuLatency)
{
rgpFences[i]->SetEventOnCompletion(
FrameNumbers[i] - CpuLatency,
handles[i]);
}
else
{
SetEvent(handles[i]);
}
}

auto WaitResult = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
auto Stage = WaitResult - WAIT_OBJECT_0;
++FrameNumbers[Stage];

switch (Stage)
{
case 0:
{
if (FrameNumbers[Stage] > ComputeGraphicsLatency)
{
pComputeQueue->Wait(pGraphicsComputeFence,
FrameNumbers[Stage] - ComputeGraphicsLatency);
}
pComputeQueue->ExecuteCommandLists(1, &pComputeCommandList);
pComputeQueue->Signal(pComputeFence, FrameNumbers[Stage]);
break;
}
case 1:
{
// Recall that the GPU queue started with a wait for pComputeFence, 1
UINT64 CompletedComputeFrames = min(1,
pComputeFence->GetCompletedValue());
UINT64 PipeBufferIndex =
(CompletedComputeFrames - 1) % ComputeGraphicsLatency;
UINT64 CommandListIndex = (FrameNumbers[Stage] - 1) % CpuLatency;
// Update graphics command list based on CPU input and using the appropriate
// buffer index for data produced by compute.
UpdateGraphicsCommandList(PipeBufferIndex,
rgpGraphicsCommandLists[CommandListIndex]);

// Signal *before* new rendering to indicate what compute work
// the graphics queue is DONE with
pGraphicsQueue->Signal(pGraphicsComputeFence, CompletedComputeFrames - 1);
pGraphicsQueue->ExecuteCommandLists(1,
rgpGraphicsCommandLists + CommandListIndex);
pGraphicsQueue->Signal(pGraphicsFence, FrameNumbers[Stage]);
break;
}
}
}
}

Now pair all of that with what the Oxide dev Kollock stated... (I asked Kollock that very question about fences, if you read the last exchange)

Edited by Mahigan - 7/17/16 at 3:38pm
post #428 of 632
Quote:
Originally Posted by GorillaSceptre View Post

I'm not blaming indie devs working out of a tiny office trying to make their dreams come true..

I'm talking about big business triple-A studios; those are the titles that actually need the new APIs (according to them). 99% of the games on Steam will run on weak hardware; DX12 and Vulkan + the extra work that comes with them aren't needed.

These big studios that release games like Arkham Knight, Unity, etc., etc., all blame their broken garbage on restrictive APIs like DX11. As for the vendors being responsible, well... I agree to some extent, and AMD went as far as to create their own API and push for DX12 and Vulkan in the first place.. But the onus is also on studios who are more than happy to charge $60 + season passes for their products.

The situation is exactly analogous to the GPU market. As long as people keep buying Battlefield Calls XXX Remastered Diamond Premium Edition with Season Pass and exclusive preorder bonuses regardless of what kind of garbage the big studios keep churning out, why would they have any incentive to do anything differently?
post #429 of 632
Quote:
Originally Posted by magnek View Post

The situation is exactly analogous to the GPU market. As long as people keep buying Battlefield Calls XXX Remastered Diamond Premium Edition with Season Pass and exclusive preorder bonuses regardless of what kind of garbage the big studios keep churning out, why would they have any incentive to do anything differently?

Put the reason in my edit. One word - Refunds. biggrin.gif
post #430 of 632
Quote:
Originally Posted by Kuivamaa View Post

Not sure if serious. A console-level API better than DX9-DX11 has been a constant demand from developers for several years. It was Johan Andersson of EA DICE that instigated this API renaissance we are experiencing today (Mantle, Metal, DX12, Vulkan etc). He shared the concerns of the rest of the PC devs, but he did not just nag. He took it to the next level and pitched the idea, the need if you like, for a better API to Intel, MS, nvidia and AMD. It was AMD that shared his vision and they came up with Mantle, which is the father of DX12 and Vulkan. Without devs expressing their discontent and their need for better tools, we would still be left with the obsolete DX11.

Big developers with their own engines of course have much to gain from low-level APIs, where they push the latest and greatest hardware to show off their games and engines. You'll see those devs take the initiative and support the hardware better regardless of the API used, because they have huge teams with highly specialized engineers and programmers who know exactly how to code close to the metal. It's no surprise they want to push technology forward.

But then remember the point I made about developing for the largest potential market of hardware out there. Even Johan was debating pushing for DX12-only vs coding two separate paths. He said the benefits would be there, but they have to consider the market.

So relax... don't scream bloody murder and blame developers just yet when something isn't fully taken advantage of. This is a transition period and the two architectures are quite different, as we already know... So in the games where AMD gets ahead, celebrate and be happy. But when Nvidia wins games here and there.. just take solace in the fact that AMD's performance will likely be much better than it would have been before DX12. So progress is being made.