Hopefully someone who knows better can chime in, but I have a bit of experience with this, so here's a bump with what I know.
From what I understand it depends on your settings. Some of the older cards don't support certain features, but post-Maxwell I think you don't have to worry about this at all unless you're aiming to encode at 8K. Some of the newer cards also handle certain settings faster than the older ones; I'm not sure of the specifics here.
Encoding performance is generally determined by architecture rather than card model within a generation (e.g. Maxwell vs. Pascal matters; 1060 vs. 1070 will not matter much). The exception is that a card factory-clocked to a lower TDP can take a significant hit to NVENC performance. I don't know the specifics here either.
For the above, I'm sorry but I cannot find the sources I used.
You can get a gist of what's supported for the 10 series here: https://developer.nvidia.com/video-e...support-matrix
If you're only doing encoding with the GPU, pay attention to the architecture and the number of NVENC units (a single Turing NVENC is roughly equivalent to two Pascal ones). Concurrent sessions shouldn't matter unless you're planning professional-level work that encodes several sessions at once, in which case you'd aim for a Quadro or Tesla. I'd advise Quadro here because Tesla comes with a lot more baggage aimed at things like machine learning. For architecture, newer usually means faster.
Also, the matrix says the 1070 has 2 NVENCs; I'm not 100% sure that's accurate. I'm almost certain it only has one, but I could be wrong, or it could vary by manufacturer. I may also just be misremembering.
Everything after this is inference based on information I found when re-downloading resources I've used in the past, and I'm not sure of NVIDIA's accuracy here. I'd imagine they know their own products, but I've seen typos and such in the past.
If you look at https://developer.nvidia.com/nvenc-application-note you can see the capabilities in Table 1 on page 5. It's essentially a rehash of the right side of the matrix I linked above, specifically the supported H.264 and H.265 features. On page 9 you can see the framerates per architecture at differing quality presets. I don't know offhand what those presets correspond to in terms of bitrate and settings, but it should give you a gist if you want to compare between architectures. The 970 is Maxwell gen 2 IIRC, the 1070 is Pascal, and the 2060 is Turing.
There may be things about how many encoding threads are used that I'm unaware of, but let's assume we want to maximize those framerate numbers. I believe two NVENCs can work on a single video at once, but I don't remember for sure. If they can, you can multiply the framerates by the number of NVENCs, so basically by 2 unless you want a Titan, which isn't on your list, so I'm assuming not.
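To make the "multiply by the number of NVENCs" idea concrete, here's a trivial back-of-envelope sketch. The per-engine framerate used below is a made-up placeholder, not one of NVIDIA's published numbers; substitute the real values from page 9 of the application note, and note that whether two engines can actually split one stream is exactly the part I'm unsure about.

```python
# Back-of-envelope estimate of single-stream encode throughput.
# 250 fps per engine is a MADE-UP placeholder value, not NVIDIA's number.
def aggregate_fps(per_engine_fps: float, num_nvenc: int,
                  engines_share_one_stream: bool = True) -> float:
    """Estimated throughput for one video stream."""
    if engines_share_one_stream:
        # If multiple engines can split a single stream,
        # throughput scales with the engine count.
        return per_engine_fps * num_nvenc
    # Otherwise one stream is limited to a single engine.
    return per_engine_fps

print(aggregate_fps(250, 2))         # prints 500
print(aggregate_fps(250, 2, False))  # prints 250
```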
Furthermore, I believe the Turing NVENC results in better quality than the earlier generations (obviously, since it says so in one of my links, lol). NVIDIA says it's equivalent to medium at the same bitrate. I don't know whether that's marketing or whether it actually holds. Oh, and the 2060 doesn't appear on the list; since it supports NVENC, I'm assuming it has one of each encoder, like the other cards.
This suggests that in terms of speed, 2060 = 1070 > 970. In terms of quality, it's probably 2060 > 1070 > 970, though I'm not sure about the 1070 vs. the 970; they may be equal.
Hopefully this helps.
Edited for clarity about the Maxwell generation.
EDIT 2 IMPORTANT - Okay, so the "medium" comparison is against x264 for H.264 encoding on Turing. I don't know if it applies to H.265; I think H.265 is different. See https://www.nvidia.com/content/dam/e...Whitepaper.pdf, page 22 (29 of the PDF): there appears to be an "up to" 25% bitrate savings for H.265.
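To put that "up to 25%" figure in concrete terms, here's a trivial sketch. The 8 Mbps baseline is a made-up example, and 25% is the quoted upper bound, not a guaranteed saving:

```python
# Arithmetic for the "up to 25% bitrate savings" claim.
# savings=0.25 is the quoted UPPER bound, not a guarantee.
def hevc_bitrate_estimate(h264_bitrate_kbps: float, savings: float = 0.25) -> float:
    """Bitrate needed for similar quality, if the claimed savings hold."""
    return h264_bitrate_kbps * (1.0 - savings)

# Hypothetical 8000 kbps H.264 stream at the full claimed savings:
print(hevc_bitrate_estimate(8000))  # prints 6000.0
```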