Wintermoon wrote:
Graphics cards are already becoming generic and programmable. In ten years, the GPU functionality will probably be completely integrated in the CPU anyway.

I think this is highly unlikely. It's more likely that the GPU will grow to the point that - for graphics-intensive applications like games - it's doing more work than the CPU, but I doubt it will ever merge. Why? For three reasons.
Firstly, because die space - putting stuff on a chip - is expensive, and the cost climbs at an incredible rate the more you try to put on at once. When you etch a silicon wafer to make chips, it's almost guaranteed that some proportion of the wafer's surface won't etch properly and thus won't work. If you have twenty 1-square-cm chips on that wafer and lose a square millimetre to a manufacturing defect, you lose one chip - 5% of the wafer. If you have five 4-square-cm chips on that wafer and lose a square millimetre, you still lose one chip... but now that one chip is 20% of your wafer. So there's a distinct economic advantage to keeping chips small, which means that since CPUs and GPUs are both growing in size and complexity, putting them both on the same die would be economically suicidal. You'd have to assume that a totally new manufacturing process with different properties will come along.
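To make the yield arithmetic concrete, here's a minimal sketch using a simple Poisson defect model (a standard textbook approximation - the defect density and wafer area below are made-up round numbers, not anything from the post):

```python
import math

WAFER_AREA_CM2 = 20.0   # hypothetical usable wafer area
DEFECT_DENSITY = 0.05   # hypothetical fatal defects per square cm

def yield_fraction(die_area_cm2):
    """Poisson model: probability a given die has zero fatal defects."""
    return math.exp(-DEFECT_DENSITY * die_area_cm2)

for die_area in (1.0, 4.0):
    dies_per_wafer = int(WAFER_AREA_CM2 // die_area)
    good_dies = dies_per_wafer * yield_fraction(die_area)
    print(f"{die_area:.0f} cm^2 dies: {dies_per_wafer} per wafer, "
          f"~{good_dies:.1f} good ({yield_fraction(die_area):.0%} yield each)")
```

With these (invented) numbers the small dies come out at roughly 95% yield each, the big ones at roughly 82% - each defect writes off a much bigger slice of the wafer as die area grows.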
Secondly, because architecturally our current PCs simply don't work that way, and they would need significant redesign before they could - and the PC industry doesn't tend to make drastic changes very often, if ever. It's very big on backwards compatibility: making sure legacy devices and applications still work, and sticking to the same architectural paradigms so the same old auxiliary components can be reused, and so on. In that regard, a change like this would likely break more things than it fixed.
Thirdly, graphics RAM is utilised in a different manner to system RAM, which is a good reason to keep them separate. Graphics RAM benefits from being quicker, and its caching policy (even if hardware-implemented) will be totally different, because the access patterns are different. On top of that, some portion of it needs to be directly accessed by the electronics that drive the video-out socket. So putting it all in one place would likely mean either lower performance or, again, a drastically increased price.
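As a rough illustration of the video-out point - the display mode below is my own assumption, not anything from the post - here's the fixed, sequential bandwidth the scanout electronics would permanently claim from a unified memory pool:

```python
# Assumed display mode - round numbers for illustration only.
WIDTH, HEIGHT = 1920, 1080
REFRESH_HZ = 60
BYTES_PER_PIXEL = 4   # 32-bit colour

# The display engine streams the whole framebuffer out every refresh,
# strictly sequentially, whether or not anything else wants the RAM.
scanout_bw = WIDTH * HEIGHT * REFRESH_HZ * BYTES_PER_PIXEL
print(f"Scanout alone: {scanout_bw / 1e9:.2f} GB/s of guaranteed traffic")
```

That's about half a gigabyte per second of memory traffic that exists before a single pixel is drawn, with an access pattern nothing like the CPU's.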
Now, to be fair, you do see this happening in some cases. Integrated graphics chipsets residing on the motherboard often share system RAM, but then... they also often drastically underperform compared to their PCI/AGP/PCI-E cousins. Embedded systems sometimes do have the graphics chipset on-die with the CPU, but there physical size typically matters more than cost or performance, so it's worth taking the manufacturing hit and the performance hit to produce something that's physically tiny.
(There are performance benefits to putting the GPU on the same silicon as the CPU - you wouldn't have to worry about a slow PCI-E bus between the two when you have a fast on-die bus instead, for example. But these days that's largely worked around by uploading texture data, geometry, draw instructions, shader programs and so on into the graphics card's RAM up front, so the bus isn't used for much anyway.)
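A toy calculation, with assumed round numbers of my own, of why the upload-once approach takes the pressure off the bus:

```python
# Toy model (my numbers, not the poster's): bus time consumed per
# second of 60 fps video, re-sending scene data every frame versus
# uploading it into the card's RAM once and referencing it after.
BUS_GB_PER_SEC = 4.0   # assumed expansion-bus bandwidth
SCENE_GB = 0.25        # assumed textures + geometry + shaders
FPS = 60

resend_every_frame = FPS * SCENE_GB / BUS_GB_PER_SEC   # seconds of bus time
upload_once = SCENE_GB / BUS_GB_PER_SEC                # one-off cost

# More than 1 s of bus time per second of video means the bus
# physically cannot keep up; the upload-once figure is negligible.
print(f"Re-send each frame: {resend_every_frame:.2f} s of bus time per second")
print(f"Upload once:        {upload_once:.3f} s total, then the bus sits idle")
```

With these figures, re-sending everything each frame would need nearly four seconds of bus time per second of video - hopeless - while uploading once costs a fraction of a second in total, which is exactly why resident graphics RAM makes the bus speed mostly a non-issue.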