Better late than never, OpenGL 3.0 was released July 11, 2008...

3.2 on August 3, 2009...

http://en.wikipedia.org/wiki/Opengl#OpenGL_3.0

Preliminary OpenGL 3.0 driver support wasn't ready for another five months.

http://www.nvidia.com/object/linux_display_amd64_180.22.html

OpenGL 3.2 support didn't arrive on Nvidia GPUs until this past December.

http://www.nvidia.com/object/linux_display_amd64_190.53.html

Apple, in one 10.6.x release, will bring their drivers up to the current solutions for both Nvidia and AMD.

That will be the culmination of work done over the past 3 years.

Anyone bashing DirectX is a fanboy, plain and simple.

I'm a former OpenGL developer, and the Khronos Group ROYALLY screwed up the release of GL3.

They basically decided to please CAD developers instead of game developers when they released GL3.

Originally they promised to rewrite the state-based model of OpenGL into an object-based model. They didn't.

DirectX (Direct3D) is object-based, and there are many benefits (performance and others) that arise from an object-based system.
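To make the contrast concrete, here's a minimal Python sketch of the two models (nothing here is real OpenGL or Direct3D; the classes and names are made up purely for illustration):

```python
# Illustrative sketch: why a bind-to-edit state machine is awkward
# compared with an object model. Not real GL bindings.

class BindBasedAPI:
    """Classic GL style: you mutate whatever is currently bound."""
    def __init__(self):
        self.bound = None   # single global binding point
        self.params = {}    # per-object parameters

    def bind(self, obj_id):
        self.bound = obj_id

    def set_param(self, key, value):
        # Mutates the *bound* object. Any helper code must save and
        # restore the binding or it silently edits someone else's state.
        self.params.setdefault(self.bound, {})[key] = value

class ObjectBasedAPI:
    """D3D style: state lives on the object you hold a handle to."""
    def __init__(self):
        self.params = {}

    def set_param(self, obj_id, key, value):
        self.params.setdefault(obj_id, {})[key] = value

# With the bind-based API, a library that binds its own object breaks
# the caller unless it carefully restores the previous binding:
gl = BindBasedAPI()
gl.bind("caller_texture")
gl.bind("library_texture")        # library forgets to restore...
gl.set_param("filter", "linear")  # ...so later calls hit the wrong object
```

The object-based call carries its target explicitly, so there is no hidden global binding to clobber, and the driver can validate state per object instead of revalidating the whole state machine.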

Hell, geometry shaders in GL were STILL an extension until a few months ago. Geometry shaders have been in hardware for YEARS now :rolleyes: Anyone saying that you should just use the extensions has never been bitten in the ass by different GPU vendors supporting extensions to varying degrees. I recall one experience where I needed functionality (fp16 blending of framebuffers) and I couldn't even query the hardware via GL to see if the functionality was supported in hardware. I sent an email to the developer relations of ATi and their response was to maintain a list of cards that support it(!)...
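On the extension-checking pain specifically: even the basic step of testing for an extension has a classic pitfall, because the extension string is just a space-separated list and naive substring matching gives false positives. A small Python sketch (the extension string shown is a made-up example of the kind of list a driver reports):

```python
def has_extension(ext_string, name):
    """Check for an extension by exact token match.

    Naive substring search is a classic bug: searching for
    "GL_ARB_texture" would also match "GL_ARB_texture_float".
    Splitting on whitespace and comparing whole tokens avoids it.
    """
    return name in ext_string.split()

# Hypothetical extension string, as a driver might report it:
exts = "GL_ARB_texture_float GL_EXT_framebuffer_object"

has_extension(exts, "GL_ARB_texture_float")  # -> True
has_extension(exts, "GL_ARB_texture")        # -> False (substring would lie)
```

And as the post above notes, even a correct check only tells you the extension is *exposed*, not whether the hardware actually accelerates it.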

GL is so messed up, it's really sad now. It could have been AWESOME. Khronos told us it would be :(

OpenGL 3.1 rolled out the object model and it's been extended in OpenGL 3.2.

http://en.wikipedia.org/wiki/OpenGL#OpenGL_3.2

Instead of sending you to the OpenGL specs, I figured the wiki can summarize the changes more quickly.
 
Imagination Technologies announces POWERVR SGX545 graphics IP core w/ OpenGL 3.2/OpenCL

http://www.khronos.org/news/press/r...ces-powervr-sgx545-graphics-ip-core-with-ope/

I already posted this last week on AppleInsider.

Imagination Technologies announces POWERVR SGX545 graphics IP core with OpenGL 3.2 and OpenCL 1.0


New core takes mobile & embedded graphics family to a new level; delivers unrivalled capabilities

Las Vegas, USA, 8th January 2010: Imagination Technologies, a leading multimedia chip technologies company, announces POWERVR SGX545, the first and only DirectX10.1 capable embedded graphics IP core available for immediate licensing. SGX545 will also deliver OpenGL ES 2.x and OpenGL 3.2 to deliver class leading 3D graphics performance, and will also support OpenCL 1.0 full profile capability which will enable mobile and embedded applications to take maximum advantage of the capabilities offered by these GPU APIs for both 3D graphics and general purpose applications.

POWERVR SGX545 is available for licensing now. The IP is already proven in silicon in a test chip from Imagination and licensed by a lead partner.

Says Tony King-Smith, VP Marketing, Imagination: "Combining our many years of experience in the embedded, mobile and PC-based DirectX graphics worlds, POWERVR SGX 545 takes the possibilities of hand-held graphics to a new level by delivering a full DirectX 10.1 and OpenGL 3.x feature set as well as delivering GPU powered OpenCL heterogeneous parallel processing capabilities for the mobile and embedded markets. This makes POWERVR SGX545 a compelling solution for application processor SoC designers targeting the next generation of netbook and MID mobile products demanding exceptional graphics capabilities."

The debut of POWERVR SGX545 reinforces the SGX family’s outstanding scalability which ranges from ultra-small OpenGL ES 2.0 mobile cores through solutions for feature-rich mobile and HDTV platforms, to high-performance gaming and computing solutions. The SGX family supports a wide range of APIs including DirectX9 & 10, OpenGL ES 2.x, OpenGL 3.x, OpenVG 1.x and OpenCL 1.x.

POWERVR SGX545 delivers real-world performance of 40 million polygons/sec and 1 Gpixels/sec fillrate at 200MHz,* and is capable of driving HD screens with ultra smooth high frame rate 3D graphics content.

New features in POWERVR SGX545 include:
  • DirectX10.1 API support
  • Enhanced support for DirectX10 Geometry Shaders
  • DirectX10 Data assembler support (Vertex, primitive and instance ID generation)
  • Render target resource array support
  • Full arbitrary non power of two texture support
  • Full filtering support for F16 texture types
  • Support for all DirectX10 mandated texture formats
  • Sampling from unresolved MSAA surfaces
  • Support for Gamma on output pixels
  • Order dependent coverage based AA (anti-aliased lines)
  • Enhanced line rasterisation

SGX545 was also designed to deliver full profile OpenCL 1.0 capabilities, with advanced features including:
  • Support of round-to-nearest for floating-point math
  • Full 32-bit integer support (includes add, multiply and divide)
  • 64-bit integer emulation
  • 3D texture support
  • Support for the maximum 2D and 3D image sizes specified in the full profile.

Inside POWERVR SGX545
USSE (Universal Scalable Shader Engine), the main programmable processing unit within each POWERVR SGX545 pipeline, is a scalable multi-threaded GPU shader processing engine that efficiently processes graphics as well as many other mathematically-intensive tasks. USSE can be programmed using the GLSL language that forms part of the OpenGL ES 2.0 specification, or in the C-based parallel processing language used in the OpenCL specification – both APIs from the Khronos Group.

POWERVR SGX545 delivers the broadest range of graphics API feature sets in the industry, while also enabling developers to gain greater access to the full capabilities of the USSE-powered GP-GPU in a broad range of applications including digital imaging, video processing, game physics, cryptography, and other general computing tasks that can benefit from parallel processing.

Editor's Notes
* All fill rate figures stated assuming a scene depth complexity of x2.5.

About Imagination Technologies
Imagination Technologies Group plc (LSE:IMG) – a global leader in multimedia and communication silicon technologies – creates and licenses market-leading processor cores for graphics, video, multi-threaded embedded processing/DSP and multi-standard communications applications. These silicon intellectual property (IP) solutions for systems-on-chip (SoC) are complemented by a strong array of software tools and drivers as well as extensive developer and middleware ecosystems. Target markets include mobile phone, handheld multimedia, home consumer entertainment, mobile and low-power computing, and in-car electronics. Its licensees include many of the leading semiconductor and consumer electronics companies. Imagination has corporate headquarters in the United Kingdom, with sales and R&D offices worldwide. See: www.imgtec.com.

There is a reason Apple is getting its software stacks sewn up to leverage OpenGL 3.2/OpenCL 1.0 together, across all the markets they play in.

Nvidia is currently the only vendor without OpenCL 1.0 ready inside their GPGPUs for general consumption; it's still in beta.
 

I'm going to be the devil's advocate.

Do they have a higher-end solution? It's not like Apple will bother making a multitude of kexts when they can get the same from a low-end solution available at the time. I'd put good money on a discrete solution from the 5xxx series, due to be released soon, rather than this dead end. I mean, since they have to use Intel chipsets anyway or move to AMD, why would they use the Imagination tech when the HD 5250 would most likely be out by whenever they update the iMacs next? It would most likely have better performance and would be easier to write drivers for.
 
Here ya go.
 

[Attachment: opengl.jpg, 111.5 KB]
Will OpenGL 3.0 offer increased performance for software written against OpenGL 2.0?
I'm pretty sure you need to actually write your software in OpenGL 3.0 to take advantage of the benefits.

Is there a list of supported cards/machines floating about anywhere?
Any DX10 GPU should support OpenGL 3.0, except Intel GMAs. Basically ATI HD2xxx and nVidia 8xxx and up.

When was support for OpenGL 3.0 added to Windows and Linux?
http://techreport.com/discussions.x/16080
http://techreport.com/discussions.x/16323

nVidia added official OpenGL 3.0 support for Windows and Linux in December 2008 and ATI followed in January 2009. So it's been more than a year on other platforms.
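For anyone wondering how an application actually detects this at runtime: the usual approach is to parse the version string the driver reports. A small illustrative Python sketch (the version strings are made-up examples of the vendor-decorated format drivers typically return):

```python
def gl_version(version_string):
    """Parse the leading "major.minor" out of a GL version string.

    The string is vendor-decorated, e.g. "3.2.0 NVIDIA 195.36.15",
    so only the first two dot-separated fields of the first word
    are reliable across vendors.
    """
    major, minor = version_string.split()[0].split(".")[:2]
    return int(major), int(minor)

def supports_gl3(version_string):
    """True if the reported context version is at least 3.0."""
    return gl_version(version_string) >= (3, 0)

supports_gl3("3.2.0 NVIDIA 195.36.15")  # -> True
supports_gl3("2.1 ATI-1.6.18")          # -> False
```

Tuple comparison handles the "3.1 vs 3.10" style ordering correctly, which a plain string comparison would not.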

Hell, geometry shaders in GL were STILL an extension until a few months ago.
I think geometry shaders were promoted to core in OpenGL 3.2.

OpenGL 3.2 support didn't arrive on Nvidia GPUs until this past December.
http://techreport.com/discussions.x/17701

nVidia actually added official OpenGL 3.2 support for Windows and Linux in October 2009. It was ATI that released official OpenGL 3.2 support in December 2009.

Apple, in one 10.6.x release, will bring their drivers up to the current solutions for both Nvidia and AMD.

That will be the culmination of work done over the past 3 years.
So has it actually been confirmed that 10.6.3 will bring full support for OpenGL 3.2 rather than just OpenGL 3.0? Because while this leaked screenshot confirms OpenGL 3.0 is on the way and some progress has been made on OpenGL 3.2, it doesn't show any progress since Leopard on OpenGL 3.1. In fact, on my older MacBook Pro with the X1600 and 10.5.8, I'm still getting 1/8 OpenGL 3.1 extensions supported (ARB_texture_rectangle) which is the same as the leaked screenshot.
 
A lot of the comments written so far in this thread seem to be by people who don't realize that DirectX is so much more than OpenGL; the two aren't equivalent. DirectX also includes full support for sound, input, networking and a bunch of other useful stuff. That's why it took off and became the de facto standard. OpenGL is concerned only with graphics, and needs other libraries to do that other stuff.
OpenGL is still better.

http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX
 
Somehow I doubt that's going to happen.

It may happen when the Windows world is using 6870s or 7870s. By the time I could use a 3870 in my Intel Mac Pro, the Windows/Linux world had been using the 4870 for a few months. We need to get Mac support up to the current shipping version, not one or two versions late like now. But with Apple's emphasis so much on the iPod/iPhone area, the Mac is lucky to continue at all.
 
Somehow I doubt that's going to happen.
I think given the length of the HD 5870 and the cost, the HD 5850 is more likely. I do hope that Apple adopts the HD 5000 series though, seeing as it is the latest tech. The GPUs will likely be refreshed when the Gulftown Mac Pros roll out, which seems likely to occur before Fermi arrives in any numbers.
 
I hear a HD5870 MacPro calling.

Somehow I doubt that's going to happen.

I expect it will be available when the 7870 comes out, for twice as much as the PC card. There will then be a 2000-post thread about flashing the PC version to run in the Mac Pro, sometime when the Mayan calendar blows up or Al Gore's personal carbon footprint exceeds that of Rwanda, whichever comes first.

edit: I see heistax beat me to it. Nice to know it's not merely a personal notion of mine.
Why does Apple hate modern, high performance video cards?
 
I expect it will be available when the 7870 comes out, for twice as much as the PC card. There will then be a 2000-post thread about flashing the PC version to run in the Mac Pro, sometime when the Mayan calendar blows up or Al Gore's personal carbon footprint exceeds that of Rwanda, whichever comes first.

edit: I see heistax beat me to it. Nice to know it's not merely a personal notion of mine.
Why does Apple hate modern, high performance video cards?

Not necessarily; when the new Mac Pros were released, the 4xxx series was the latest at the time. Granted, it was the 4870, but the 4870 and 4890 have little performance difference to begin with.
 
It's too bad that Microsoft won't license their DirectX on anything else. Imagine if there was a Mac version of DirectX so that games could be ported easily and require very little re-write.

DirectX is not just a small part of Windows that can simply be licensed; it is pretty much how most things, from on-screen graphics through to audio, are handled. It would be almost impossible to drop it into OS X without loads of problems and unwanted side effects.

If developers stopped emulating their games on OS X and started actually porting them, then maybe we'd see a performance increase from this, but that'll never happen - it's too easy and cost-effective to wrap games in an emulator and push them out rather than take the time to build a native binary.

FYI, most Mac ports are native binaries. Every game from Feral has been a native Macintosh binary; I think the only people who try to emulate are Transgaming with their Cider-powered games.

Edwin
 
http://www.khronos.org/news/press/r...ces-powervr-sgx545-graphics-ip-core-with-ope/

I already posted this last week on AppleInsider.



There is a reason Apple is getting its software stacks sewn up to leverage OpenGL 3.2/OpenCL 1.0 together, across all the markets they play in.

Nvidia is currently the only vendor without OpenCL 1.0 ready inside their GPGPUs for general consumption; it's still in beta.

I do hope that it becomes the default GPU in MacBooks - because right now, if it contains NVIDIA, I am avoiding it like the plague. If that means in the future I'm forced to purchase a Windows machine, then so be it - the last thing I want to be lumped with is another piece of crap from NVIDIA that continuously breaks down because of poor quality control on Nvidia's end.
 
FYI, most Mac ports are native binaries. Every game from Feral has been a native Macintosh binary; I think the only people who try to emulate are Transgaming with their Cider-powered games.

Edwin

Yes, companies like Aspyr and Feral Interactive are actually porting the games over natively, but a port is still not going to run as well as the original. From what I understand, they have to convert the game from DirectX to OpenGL, which are two very different animals. It's probably not an easy task, and then it's probably not easy to optimize it so that it runs as well as its Windows counterpart.

Call of Duty 4 runs pretty dang well for a port, but it's nowhere near the Windows version when you turn on Shadows and Specular Map.

Like I said before... in Windows COD4, I get a constant 60fps (vsync on) with ALL of the graphics settings cranked as high as they go and 4x AA, but on the Mac version, I have to turn off Shadows, AA, Specular Map, Lighting Effects and Glow in order to have it run at 60fps, but even with those settings off, there are still a few maps that still bring my framerate down to 30fps out in big open areas. For the most part, the Mac version is good, but the PC version is GREAT.

I blame it on the developers who ported it from DirectX to OpenGL. If the game was originally written in OpenGL, it would have run the same on both platforms because there would be nothing to translate to the Mac OS except for the app package and a few other things (in order to get the game to launch in OS X).
 
Yes, companies like Aspyr and Feral Interactive are actually porting the games over natively, but a port is still not going to run as well as the original. From what I understand, they have to convert the game from DirectX to OpenGL, which are two very different animals. It's probably not an easy task, and then it's probably not easy to optimize it so that it runs as well as its Windows counterpart.

That's close enough to the truth, although other things, like the performance levels and supported features of OpenGL on the Mac (compared to OpenGL on Windows), the lack of direct access to the graphics hardware (unlike Windows), and the range of OS versions and cards we need to support, also cause extra work and effort when porting.

Things like the new 3.0 support are great, but they assume the user has 10.6.3 installed and a compatible card; you cannot port a game assuming all the users will have the latest version of the latest operating system with a high-end card. Because of this, new features can take a while before they can be used in a game, as they need to become standard across enough machines first.

Although GL (graphics) is an important part of a game's code, other items like audio, physics, threading etc. are all handled differently on a Mac, so even if everyone suddenly used GL, ports would not suddenly become easy and instantly run quickly.

Edwin
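Edwin's point about features needing to become standard across enough machines is essentially a capability check with fallbacks: ship several render paths and pick the best one the user's machine can run. A rough Python sketch (all the path names and criteria here are made up for illustration):

```python
def pick_render_path(gl_major, gl_minor, extensions):
    """Choose the best render path the user's machine can actually run.

    A port can't assume 10.6.3 plus a high-end card, so the real logic
    is always "use the newest path available, fall back otherwise".
    The path names and criteria are hypothetical.
    """
    if (gl_major, gl_minor) >= (3, 0):
        return "gl3_path"           # full-featured modern path
    if "GL_EXT_framebuffer_object" in extensions:
        return "gl2_fbo_path"       # reduced path using a common extension
    return "fixed_function_path"    # lowest common denominator

pick_render_path(3, 2, set())                          # -> "gl3_path"
pick_render_path(2, 1, {"GL_EXT_framebuffer_object"})  # -> "gl2_fbo_path"
pick_render_path(1, 5, set())                          # -> "fixed_function_path"
```

Every extra path is code the porting house has to write, test and support across the whole range of OS versions and cards, which is exactly the overhead Edwin describes.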
 
That's close enough to the truth, although other things, like the performance levels and supported features of OpenGL on the Mac (compared to OpenGL on Windows), the lack of direct access to the graphics hardware (unlike Windows), and the range of OS versions and cards we need to support, also cause extra work and effort when porting.

Things like the new 3.0 support are great, but they assume the user has 10.6.3 installed and a compatible card; you cannot port a game assuming all the users will have the latest version of the latest operating system with a high-end card. Because of this, new features can take a while before they can be used in a game, as they need to become standard across enough machines first.

Although GL (graphics) is an important part of a game's code, other items like audio, physics, threading etc. are all handled differently on a Mac, so even if everyone suddenly used GL, ports would not suddenly become easy and instantly run quickly.

Edwin

I didn't mean to imply that they'd run quickly, but once they're set up and running, the performance would be very close to the PC version.

Also, the other day I had this awesome idea. When a game goes full screen in Mac OS X, is the rest of Quartz Extreme/Core UI still running? If so, doesn't that degrade performance, because the card is then running Quartz Extreme/Core UI as well as the game? I mean, that stuff is probably using up some of the VRAM on the GPU, so the game has less to work with, right? What if Apple made it so that Quartz Extreme and Core UI turn off before a game starts, so you're left with the full ability of the card when you're in a full-screen game?

Perhaps I'm completely wrong... I don't design or port games, so I don't know.
 
Anyone bashing DirectX is a fanboy, plain and simple.

I'm a former OpenGL developer, and the Khronos Group ROYALLY screwed up the release of GL3.

They basically decided to please CAD developers instead of game developers when they released GL3.

Originally they promised to rewrite the state-based model of OpenGL into an object-based model. They didn't.

DirectX (Direct3D) is object-based, and there are many benefits (performance and others) that arise from an object-based system.

Hell, geometry shaders in GL were STILL an extension until a few months ago. Geometry shaders have been in hardware for YEARS now :rolleyes: Anyone saying that you should just use the extensions has never been bitten in the ass by different GPU vendors supporting extensions to varying degrees. I recall one experience where I needed functionality (fp16 blending of framebuffers) and I couldn't even query the hardware via GL to see if the functionality was supported in hardware. I sent an email to the developer relations of ATi and their response was to maintain a list of cards that support it(!)...

GL is so messed up, it's really sad now. It could have been AWESOME. Khronos told us it would be :(

Pretty much what I've heard too. DirectX has Microsoft fully behind it with support, documentation, consistently released drivers, an actual development staff, huge support from virtually every card maker out there, and a library of games that will make your head spin. OpenGL, from what I've read, has shoddy documentation on its website, random help articles online, random forum/blog posts, and is overall more difficult to develop for.

And it's not like Apple has exactly been amazing at pushing gaming on OS X either.
 
Pretty much what I've heard too. DirectX has Microsoft fully behind it with support, documentation, consistently released drivers, an actual development staff, huge support from virtually every card maker out there, and a library of games that will make your head spin. OpenGL, from what I've read, has shoddy documentation on its website, random help articles online, random forum/blog posts, and is overall more difficult to develop for.

And it's not like Apple has exactly been amazing at pushing gaming on OS X either.

Khronos needs to be shot. I wish an accountable OSS or standards company like Red Hat would look after OpenXX. Hell, even AMD/nVidia.

Same goes for W3C. Make a damned video standard already.

Until OpenGL has a sense of direction, it will always be the workstation standard. ;)
 