Originally Posted by MonarchX
How do you actually use the 13 MHz increments? EVGA Precision X does not go by 13 MHz, and GPU-Z reports speeds 1-2 MHz above or below what EVGA Precision X reports. Then there is NVIDIA Inspector, which also reports clocks slightly higher or lower by 1-2 MHz. There doesn't seem to be a way to definitively set a clock like 1228.5 MHz. No matter how close I try to get to 1228.5 MHz, I can never hit that precise clock, and again, different apps show different clocks. Only GPU-Z allowed me to input my clock speed manually by typing...
When you say the LLC mod won't work with the GTX 780 Ti because of its custom PCB, what do you mean by that? We have reference GTX 780 PCBs and reference GTX 780 Ti PCBs too.
My OC is getting worse... When I test for artifacts with games, benchmarks, and OCCT, I can manage 1250/7500 just fine at 1080p for many hours. If I downsample from 4K or use 2x2 (4x) SSAA, I have to lower clocks to about 1228/7400 or else I get occasional artifacts. I think 4K downsampling or 2x2 (4x) SSAA should be included in graphics-card torture tests, because it stresses the card harder than FurMark at 1080p without actually causing it to overheat or reach the set temperature limit. I am saving up for a good water-cooling setup with 2x 480 mm radiators for a CPU + video card (full block) loop. I hope to achieve those low 40-45 °C temps. I think it may improve my OC. I could then do a hard volt mod. I've done one on a Radeon 9800 Pro, but that one required no soldering.
Only cards like the EVGA Classified / Classified KingPin editions use truly custom PCBs, but they also have their own voltage controls. I so badly regret not getting the EVGA GTX 780 Ti Classified, which was only $30 more expensive at the time!
No, for the volt mod the 780 Ti IS effectively a non-reference 780: the added rectifiers and caps, together with the BIOS/driver PWM control, make the NCP4206 voltage-controller command hack void! Any command you issue to the controller gets overridden beyond a certain value (e.g., voltages above 1.212 V).
And yes you should have gone for the Classy!
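On the 13 MHz question quoted above: Kepler cards apply core clocks in discrete bins of roughly 13 MHz, so whatever offset you dial in gets snapped to the nearest bin, and each monitoring tool then rounds the resulting frequency slightly differently, which is why GPU-Z, Precision X, and Inspector disagree by 1-2 MHz. A minimal sketch of the arithmetic (the 13 MHz bin size and the 1006 MHz base clock are illustrative assumptions, not exact values for any specific card):

```python
# Kepler applies core clocks in discrete ~13 MHz bins (approximate value,
# assumed for illustration). A requested clock is snapped to the nearest bin.
BIN_MHZ = 13.0

def snap_to_bin(base_mhz: float, target_mhz: float) -> float:
    """Return the achievable clock closest to target, given the base clock."""
    steps = round((target_mhz - base_mhz) / BIN_MHZ)
    return base_mhz + steps * BIN_MHZ

# Requesting 1228.5 MHz on a hypothetical 1006 MHz base lands on a nearby
# bin, not on 1228.5 exactly -- hence no tool can show that precise clock.
print(snap_to_bin(1006.0, 1228.5))  # → 1227.0
```

This is why typing an arbitrary value into any tool still produces a slightly different effective clock: the hardware only exposes the binned frequencies.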
Originally Posted by kostacurtas
OccamRazor, is skyn3t going to update the first post whenever a new skyn3t BIOS is released?
It is a bit hard to follow the thread every day.
Any news for the skyn3t version of bios 80.80.34.00.01 from reference cards?
What BIOS are you talking about? 80.80.34.00.01 is common to all the brands!
All BIOSes will be posted in a new thread: Skyn3t & OccamRazor BIOS Repository.
It will be in my SIG soon enough! But all the BIOSes will be released in their respective threads!
Thanks, but it's done already; it will be released as soon as all the beta testers report in!
Originally Posted by Jeronbernal
Not sure if this is an issue with the program or with how I'm doing it, but...
When I overclock the core clock through Precision X with both cards synced, only one of the cards gets overclocked; the other one stays at 980 MHz. I have two 780 Ti SCs. When I check the cards in Precision X the sliders are up, but both Precision X and GPU-Z tell me that one of the cards isn't being overclocked... What can I do to fix this problem?
Thanks very much guys!
Change your SLI bridge!
Originally Posted by fab686868
Hi everyone, first of all let me say a big thanks to all the guys who have contributed to this topic and those who have provided the custom BIOSes for our cards...
I just bought and installed an ASUS GTX 780 Ti DCII OC edition, trying to follow the guidelines of the topic: fresh install of ForceWare 337.88 and Afterburner 3.0.1. I unlocked all the features in the AB menu (from the AB settings button). CPU is a 4790K on an ASUS Maximus VI Hero with BIOS 1504.
All stock, Afterburner shows a limit of +75 mV on the core voltage and a core clock of 9xx. I flashed the BIOS with the corresponding skyn3t version, and after that the core clock went to 1xxx, boost was disabled, and the core voltage limit in AB became +100. I began playing with the settings and noticed that AB doesn't show the memory voltage slider... and there is no caret on the core voltage bar to open it... Why is that? How can I enable it?
Also, with core voltage set to +100 in AB, NVIDIA Inspector 22.214.171.124 shows (with no GPU load active) a voltage of 0.875 V, while at 100% GPU load it goes to 1.050 V. Those numbers do not seem OK... shouldn't I see 1.212 V at full load? I set the power target in AB to 130% and under load it floats around 103%...
Of note, NVIDIA Inspector has a core voltage limit of +275, while AB has only +100; but NVIDIA Inspector doesn't show the memory voltage control either. So, how do I further enable voltage over 1.212 V? I'm going watercooled in a few days, but in the meantime I'd like to prepare the ground... I noticed that the guidelines for the AB hack to unlock 1.3 V refer to a previous version of AB... Are they still valid for version 3.0.1? Thanks again,
Use PrecisionX and not AB, as in the AB database the 780 Ti is NOT a reference card; only the 780 is!
Use the K-Boost feature in PX to take the voltage up to 1.212 V!
Originally Posted by famich
Hello, if I may ask: I got one Gigabyte card, no problem; for the sake of a 4K monitor I added a second one, a Gainward, a bit worse I think. I have flashed both cards with the skyn3t BIOS; however, I cannot control both of them with the old Precision X: after adding the voltage it is applied, but the clock offset is not, and both cards stay stuck in some kind of state at 925 MHz or so..
SVL7's BIOS behaves the same; only the original BIOSes work?? I am puzzled, as with one card I never had such problems.. Thanks for your input.
Send me the Gainward bios so i can take a look!
Originally Posted by Arizonian
I'm excited to see speculation news on 880 possibly coming out before end of Q4. Looking forward to 780Ti prices dropping. May just SLI this beast and be done for a while.
That will be the best side effect from the 880 release: a price drop!
But don't expect a full Maxwell with bells and whistles bashing the 780 Ti (it will be more like a GTX 680 V2 release).
Read my opinion piece: "First gen will (likely) be released on the same fabrication node as Kepler (28 nm). Why?
Let's take a peek at TSMC (Taiwan Semiconductor Manufacturing Company)!
TSMC has low-power and high-performance designs:
SiON (silicon oxynitride) - CLN28LP - low-cost/low-power devices
HKMG (high-k metal gate) - CLN28HPL - low-power/low-leakage chips
High Performance for graphics processors or microprocessors:
HKMG - CLN28HP
Currently TSMC is ramping mass production at 20 nm, BUT (you knew there was a BUT coming, right?) it's not for high-performance designs...
Back in 2011 TSMC was already mass-producing 28 nm (on low-power designs), but full Kepler only hit the market in 2013, leaving a two-year gap between starting production and actually getting good yields on high-performance chips!
So NVIDIA will rely on good old 28 nm fabrication for first-gen Maxwell!
Don't expect a full Maxwell beast upon release; that will come after the die shrink in the second gen (hopefully 20 nm), and of course NVIDIA has to do the "milking" (suck every $$$ out of each generation to fund the next, be it a refresh or a new architecture!)
Compared to Kepler, Maxwell has more registers per thread, more registers per CUDA core, more shared memory per CUDA core, and a lot more L2 cache per GPU, plus upgraded compute performance; but more importantly, it doubled performance-per-watt!
Some "wafers" for those who never saw one!
But having 10 nm doesn't mean we will see 10 nm GPUs in that time frame; luckily we will have 14/16 nm (FinFETs instead of planar HKMG) on Volta!
The problem is that the next-generation 20 nm bulk high-k metal gate and 16/14 nm FinFET processes will have a higher cost per gate than today's 28 nm HKMG!
The 16/14 nm FinFET node uses the same interconnect structure as 20 nm, so the chip area is only 8-10% smaller than at 20 nm. In addition, this node faces yield issues related to stress control, overlay, and the step coverage and process uniformity of 3D structures!
Meaning yields ($$$$) will determine how soon we see a beastly GPU released! As 28 nm HKMG matures (wafer depreciation costs) and yields increase, costs decrease, meaning even in 2017 its cost per gate will remain much lower than that of the newer nodes. FinFETs can be used for high-performance or ultra-dense designs but are not cost-effective in mainstream semiconductors. Consequently, the industry faces a mismatch between what wafer vendors are promoting and what their customers need. If this means anything, we will see very high-priced GPU chips (on 14/16 nm FinFET) [you all still remember the high-priced Titan? Now you know why...] and less powerful, much cheaper cards on 20 nm HKMG! Scaling to the 10 nm and 7 nm nodes will entail additional wafer-processing challenges for which the industry is not well prepared for the next 5 years!
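The cost argument above can be put into rough numbers. The wafer-cost and density figures below are purely illustrative assumptions, not real TSMC data; they only show the mechanism: when wafer cost rises faster than transistor density, a denser node still costs MORE per transistor than the mature one.

```python
# Illustrative, assumed relative figures (28 nm HKMG = 1.0 baseline);
# NOT real foundry numbers. Density gain at 16/14 nm over 20 nm is kept
# small (~10% area shrink), matching the shared-interconnect point above.
wafer_cost = {"28nm": 1.00, "20nm": 1.50, "16/14nm": 1.90}  # relative cost per wafer
density    = {"28nm": 1.00, "20nm": 1.40, "16/14nm": 1.55}  # relative transistors per area

# Cost per transistor scales as wafer cost divided by density.
for node in wafer_cost:
    rel = wafer_cost[node] / density[node]
    print(f"{node}: relative cost per transistor = {rel:.2f}")
# → 28nm: 1.00,  20nm: 1.07,  16/14nm: 1.23
```

Under these assumed numbers the newest node is the most expensive per transistor, which is exactly why high-end FinFET chips would carry Titan-class price tags while cheaper cards stay on the older node.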
There are other options (FD SOI and 450mm wafers) but lets see what will happen!
What will this mean for us gamers?
That for Titan/780 owners (volt-mod enabled guys!) the estimated 20-25% increase in performance is not enough to cut losses!
Let's wait for the second gen (maybe Christmas 2015?) and do our math then!
DISCLAIMER: This is my exercise in reasoning, based on my knowledge of electronics and the market; everything can change, as the $$$$ rules above all else!"
Team skyn3t

Edited by OccamRazor - 7/5/14 at 5:40am