The present and the future
by John Carmack
I'm still developing everything with OpenGL, and I'm still targeting mac and linux as well as windows, but I want to rationally address some points in the API debate:

D3D is clunky, etc.

Not really true anymore. MS made large strides with each release, and DX8 can't be called a lousy API anymore. One can argue various points, but they are minor points. Anti-Microsoft forces have a bad habit of focusing on early problems and not tracking the improvements that have been made in current versions. My rant of five years ago doesn't apply to the world of today. I do think the world would be a slightly better place if Microsoft had pulled an embrace-and-extend on OpenGL instead of developing a brand new API that had to be rewritten a half dozen times, but it's water under the bridge now.

Open for more sales, etc.

It has been pretty clearly demonstrated that the mac market is barely viable and the linux market is not viable for game developers to pursue. Linux ports will be done out of good will, not profit motives. From an economic standpoint, a developer is not making a bad call if they ignore the existence of all platforms but windows.

DX8 gives more features.

Some people have an odd view that an API gives you features. Assuming you don't care about software emulation, hardware gives you features, and an API exposes them. If you try to use vertex programs or bump env map on a TNT, DX8 doesn't magically make it work. DX8's hardware independence is also looking a bit silly now that they make point releases to support ATI's new hardware. They might as well say D3D-GF3 or D3D-R200 instead of DX8 and DX8.1.

All of Nvidia's new features have shown up as OpenGL extensions before they show up as new D3D features. Divergent extensions haven't been a problem until very recently: all of the vendors tended to support all the extensions they could, and if they had unique functionality, like register combiners, they made their own extension.
The current status of vertex programs does piss me off, though. I really wish ATI would have just adopted Nvidia's extension, even if it meant not exposing every last bit of their hardware.

Abstraction in a high performance environment can be dangerous. If you insist that all hardware behave the same, you prevent vendors from making significant improvements. If the spec for behavior comes from people who aren't hardware oriented, it can be a huge burden. D3D still suffers somewhat from this, with some semantics and odd features that make hardware guys wince.

The Good News

We are rapidly approaching a real golden age for graphics programming. Currently, cards and APIs are a complex mess of hundreds of states and function calls, but the next two years will see the addition of the final primitive functionality needed to allow arbitrarily complex operations with graceful performance degradation. At that point, a higher level graphics API will finally make good sense. There is debate over exactly what it is going to look like, but the model will be like C: just as any CPU can compile any C program (with various levels of efficiency), any graphics card past this point will be able to run any shader.

Some hardware vendors are a bit concerned about this, because bullet point features that you have and the other guy doesn't are a major marketing point, but the direction is a technical inevitability. They will just have to compete on price and performance. Oh, darn.

It's a Turing machine point: even if OpenGL 2.0 and DX10 don't adopt the same shader description language, they will be functionally equivalent, and shaders could be automatically translated between them. There is lots of other goop, like texture specification and context management, that will still differ between APIs, but the core day-to-day work of a graphics programmer will be largely above the disputes.

John Carmack