Originally Posted by Buzzin92
I think when we get near the end of Moore's Law, architecture designers will just focus all their resources on efficiency. That way they can create a much larger die without increasing power consumption and temperatures too much.
I mean, Intel's i5/i7 dies are almost half the size of AMD's Bulldozer while performing much faster, and then Intel reduced that die size further with Ivy Bridge while adding ~5% to the performance. I'm not trying to start a flame war here, but if Intel were to use the same die area as Bulldozer while keeping within the power envelope, it would destroy anything we see on the table now, even socket 2011 six-core CPUs.
Another bit of food for thought: since Intel's dies are so small, they don't need to pay anywhere near as much for manufacturing and sourcing as AMD/GF do, yet they charge more for their CPUs in retail stores. This is what's killing AMD, and why their stock is dropping like a boulder.
Fully agree with you on EVERYTHING except your choice of comparison. AMD uses much more die area than Intel for less performance, as you stated, and with higher power consumption on top. I'm not extremely knowledgeable about CPUs, but (forgetting about costs) I think the size (or transistor count) relative to power of JUST a single core, or core cluster for AMD (cache is a whole other story), falls somewhere between AMD's total and Intel's mainstream transistor count. Overall, though, I think CPUs will eventually get much more efficient with smaller dies and lower transistor counts as a whole, given the type of work they do and how unoptimised software is.
A better example IMO would be Fermi to Kepler. nVidia realized that lower clock speeds and a larger count of shader units (along with architecture tweaks, which are the entirety of the near future) pay off. In this example a bigger transistor count is so much more efficient and powerful, but again I think this comes down to the type of work GPUs do.
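To put rough numbers on that trade (a back-of-the-envelope sketch only: the shader counts, clocks, and TDPs are published reference specs, peak throughput is approximated as shaders x clock x 2 FMA ops, and real games scale far less cleanly):

```python
# Rough Fermi vs Kepler comparison from public reference specs.
# Peak FP32 throughput ~= shaders * clock * 2 (a fused multiply-add
# counts as two floating-point ops). Real workloads scale less cleanly.

cards = {
    # name: (shader count, shader clock in MHz, board TDP in W)
    "GTX 580 (Fermi)":  (512,  1544, 244),
    "GTX 680 (Kepler)": (1536, 1006, 195),
}

for name, (shaders, clock_mhz, tdp_w) in cards.items():
    gflops = shaders * clock_mhz * 2 / 1000.0
    print(f"{name}: ~{gflops:.0f} GFLOPS, ~{gflops / tdp_w:.1f} GFLOPS/W")

# GTX 580 (Fermi): ~1581 GFLOPS, ~6.5 GFLOPS/W
# GTX 680 (Kepler): ~3090 GFLOPS, ~15.8 GFLOPS/W
```

Triple the shaders at roughly two-thirds the clock: nearly double the raw throughput, and well over double the throughput per watt.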
However, going back to your Intel example: the "mainstream" 2-4 core CPUs have an extremely large portion of the die taken up by the HD 2000/2500/3000/4000 graphics, which AMD's Bulldozer chips don't include, which makes it even more mind-boggling how much better Intel has been of late!
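And to put some numbers on the die-size/cost point from the quote (a sketch only: the die areas are approximate published figures, and this ignores yield, edge dies, and wafer pricing entirely):

```python
import math

# Crude dies-per-wafer estimate: wafer area / die area, ignoring
# edge losses and defects. Die areas are approximate published
# figures for the quad-core/quad-module parts.
WAFER_DIAMETER_MM = 300
wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

dies_mm2 = {
    "Sandy Bridge 4C (32nm)": 216,
    "Ivy Bridge 4C (22nm)":   160,
    "Bulldozer 4M/8C (32nm)": 315,
}

for name, area in dies_mm2.items():
    print(f"{name}: ~{wafer_area / area:.0f} dies per 300mm wafer")

# Sandy Bridge 4C (32nm): ~327 dies per 300mm wafer
# Ivy Bridge 4C (22nm):   ~442 dies per 300mm wafer
# Bulldozer 4M/8C (32nm): ~224 dies per 300mm wafer
```

Roughly the same wafer cost, roughly twice the sellable chips for Ivy Bridge versus Bulldozer; that's the margin gap in a nutshell.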
Originally Posted by Just a nickname
Don't assume there is an end to Moore's law; it is not a fracking law. It is only a prediction.
Vacuum transistors have no medium and their frequency can be 10+ times faster than ivy's transistors (around the 40GHz -
Moore's law states that the number of transistors on a chip doubles roughly every 18 months to 2 years. This has been slowing during and after 45nm for CPUs, and lately after 40nm for GPUs. In theory that should mean computational power "doubles" as well, but that's never been true, because architectures never scale 100%. I know what you mean, but Moore's law isn't really a law.
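For reference, here's what that cadence implies on paper (a toy projection only; the 2011 starting count is roughly Sandy Bridge's published figure, and as noted above the real curve is already bending):

```python
# Toy Moore's-law projection: N(t) = N0 * 2 ** (years / doubling_period).
# The starting count is roughly Sandy Bridge 4C's published figure.
N0 = 1.16e9           # transistors in 2011 (~1.16 billion, approximate)
DOUBLING_YEARS = 2.0  # the commonly quoted cadence

for year in (2011, 2013, 2015, 2017, 2021):
    n = N0 * 2 ** ((year - 2011) / DOUBLING_YEARS)
    print(f"{year}: ~{n / 1e9:.1f}B transistors")

# 2011: ~1.2B transistors
# 2013: ~2.3B transistors
# 2015: ~4.6B transistors
# 2017: ~9.3B transistors
# 2021: ~37.1B transistors
```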
The thing I find crazy is that, yes, so many things have ridiculous potential, like the transistor types you mention, quantum transistors, etc., but we aren't close AT ALL to reaching them, and the amount of research being done is insignificant. If you look at the rate Intel has been shrinking, we'll be pretty much done by 2016, with the other fabs soon to follow; consumer "top of the line" stuff hits the wall by 2020, though honestly delays will probably push that into the late 2020s. Even now, 28nm and 22nm manufacturing is extremely difficult (evident from comments by Intel and GF, from the delays both processes had, and from their current yields).

Architecture revisions on the same manufacturing process can only do so much once they hit a point. Compare the leaks of Sandy Bridge, and actual Sandy Bridge, to the late Westmeres: amazing differences. Compare the leaks (and delays) of Haswell to Ivy Bridge: mainly good GPU improvements and minor CPU improvements, versus good CPU improvements and amazing iGPU changes. They cannot improve performance as much because it's hard to do so. Power consumption and efficiency is currently the way to improve performance, since architecture throughput gains are slowing; better efficiency means more transistors can be packed into the same space. Yes, Nehalem to Sandy Bridge was a huge jump, but they can only do so much architecture-wise without more power- and space-efficient transistors, and they are already slowing down.
A new manufacturing process must be found, and developing it will take years. Look at the timing of every AMD, Intel, ATI, and nVidia revision: 8 months to a year. The time to design a new architecture: 4+ years, and that's on a known process. When they eventually find something usable and manufacturable, the R&D costs will be really high and designing around it will take a really long time. I won't get into the scientific side, but I'm saying that within 10 years we won't get anything better than 10% improvements every two years, and after another 5-10 years, maybe 5% improvements if we're lucky, on the hardware side alone. The near and foreseeable future is minor architecture tweaks and optimisation/software tweaks.
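To show how grim those numbers are, just compound them out (pure arithmetic on the percentages above, assuming one generation every two years):

```python
# Compound the per-generation gains quoted above over ~10 years
# (5 generations at one generation every two years).
def compound(gain_per_gen: float, generations: int) -> float:
    return (1 + gain_per_gen) ** generations

for label, gain in (("historical ~2x/gen", 1.00),
                    ("10% per gen",        0.10),
                    ("5% per gen",         0.05)):
    print(f"{label}: {compound(gain, 5):.2f}x after 10 years")

# historical ~2x/gen: 32.00x after 10 years
# 10% per gen: 1.61x after 10 years
# 5% per gen: 1.28x after 10 years
```

A decade of 10% steps adds up to less than a single old-style doubling.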
The major implication I'm seeing is the slowing down of ALL technology, and the failing of Intel, nVidia, AMD, IBM, etc.
Intel had a hard time getting people to buy Ivy Bridge CPUs: the desktop parts offered little benefit, and the mobile parts aren't selling well at all. Seriously, the average Joe won't notice/need the slight improvements he'll be getting (the future is slight improvements), and even if he gets the same or slightly better performance with 8 hours of battery life instead of his current 5, he won't be willing to pay another $1000+ for it again a couple of years later.
I personally have technology as a hobby and do buy (two Ivy Bridge notebooks for the power benefits alone, an Asus UX51VZ and a Sony Vaio Z), and so, probably, do you, but most people don't/won't. If the future is mediocre 5-10% improvements, fewer people will buy. If the future is larger dies for better performance, the companies lose out on profits and will fail/slow down.
The foreseeable end to this is two decades out, three at most, and we're already in the slowdown. There is no future solution in sight, which is my point :\. I would love me some vacuum transistors
Originally Posted by awsan
I understand that laptops will still not be as fast as desktops in gaming, but they'll reach at least 90% of a desktop's performance, and the 4930XM will be faster than a desktop 4770K clock for clock, like mobile Ivy vs desktop Ivy. You'll still need a $3k laptop to match a $1.5k desktop, but sometimes it's worth it
Naah, it'll never get that close for "typical laptops". Your definition of a usable laptop is crazy. A 4930XM would only be needed/fully used in a 16"+ notebook with dual GPUs, maybe stretched to a single high-end GPU, both OC'd, in a thick 15" notebook. TBH you shouldn't be calling it a laptop, and I shouldn't be calling it a notebook. A rock would be a better term, or an anvil
Regardless, desktops will always rule thanks to their cooling headroom, but laptops edging closer is really useful. I can play any game on a 2.1kg, <2"-thick 15.6" notebook at 1080p: low/medium in Frostbite 2/CryEngine 3/unoptimised BS, up to medium/high in modern games, and ultra in anything from 2011 or earlier. Meanwhile my 13" notebook can run anything from this year and last at 720p high, and anything pre-2011 (including emulators and the like) at 1080p high, while being <1.8" thick and <1.5kg, which is really crazy and extremely useful.
Technology is slowing down though, which sucks :\ In two years I don't see much improvement occurring tbh (Haswell), except for 8-12" devices becoming usable...
Although that small improvement is always worth it to us OCNers