
I was discussing this on a Swedish computer forum and a guy there gave a few examples of what he thinks is bad about the OS X kernel compared to Linux and Windows. Could anyone with knowledge comment on this?

The graphics performance, I guess we can all agree, is (sadly) much better in Windows (and probably Linux too), but what about the rest? Well, the file system I would also agree is old and could use a refresh/replacement.

Quickly translated using Google Translate, so I hope it makes sense:

”If you try some more advanced system programming on OS X, you quickly realize that things that are taken for granted in Linux (POSIX real-time extensions, control over which CPU core(s) a thread runs on, newer TCP RFCs, etc.) simply do not exist in OS X. 10-12 years ago the OS X kernel was at the forefront with its features, but all of that seems to be gone compared to newer versions of the BSD kernels in terms of new developments since then. Windows also has virtually all of these features, but it is not as obvious how much is missing when comparing with Windows, because it is a different API (i.e., not POSIX).

Compared to Linux, OS X lacks any form of virtualization at the OS level (LXC and the like). This is also lacking in Windows, but Microsoft has been working on it ever since Docker became so popular that MS recognized the need for the Windows kernel to support it as well.

The file system in OS X was already outdated 10 years ago, and it still has not been fixed. Today it causes a lot less trouble than before, because SSDs basically hide the performance problems. The graphics stack is a joke from a performance perspective: comparing games available on Windows, OS X and Linux, it is pretty obvious that the low performance you get in OS X cannot be blamed on OpenGL, since Linux is typically as fast as or marginally faster than DirectX on Windows, while OS X is significantly slower.”
 
I am not too familiar with the extent of the POSIX real-time extensions, but things like fine-grained control over thread priorities, CPU affinities and other real-time goodies are fully supported in OS X, just not through the POSIX API.
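For example, here is a minimal sketch (the period/computation numbers and the affinity tag are made up for illustration) of requesting real-time scheduling and a CPU affinity hint for the current thread through the Mach thread-policy calls rather than the POSIX ones:

#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static void make_realtime_with_affinity(void)
{
    mach_port_t thread = pthread_mach_thread_np(pthread_self());

    /* Convert milliseconds into Mach absolute-time units. */
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    uint64_t ms = 1000000ull * tb.denom / tb.numer;

    /* Time-constraint (real-time) policy: roughly the role the POSIX
       real-time scheduling extensions play elsewhere. */
    thread_time_constraint_policy_data_t rt = {
        .period      = (uint32_t)(10 * ms), /* work arrives every ~10 ms        */
        .computation = (uint32_t)(2 * ms),  /* needs ~2 ms of CPU per cycle     */
        .constraint  = (uint32_t)(5 * ms),  /* ...finished within a 5 ms window */
        .preemptible = TRUE,
    };
    kern_return_t kr = thread_policy_set(thread, THREAD_TIME_CONSTRAINT_POLICY,
                                         (thread_policy_t)&rt,
                                         THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    if (kr != KERN_SUCCESS)
        fprintf(stderr, "time-constraint policy failed: %d\n", kr);

    /* Affinity tag: threads sharing a tag are hinted to share a cache.
       Weaker than Linux's hard CPU pinning, but it is what OS X offers. */
    thread_affinity_policy_data_t affinity = { .affinity_tag = 1 };
    kr = thread_policy_set(thread, THREAD_AFFINITY_POLICY,
                           (thread_policy_t)&affinity,
                           THREAD_AFFINITY_POLICY_COUNT);
    if (kr != KERN_SUCCESS)
        fprintf(stderr, "affinity policy failed: %d\n", kr);
}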

As to the other concerns: sure, OS X does not include OS-level virtualisation support (Yosemite does introduce some OS support for hypervisors, but that's a different thing as far as I understand). I do not see why this is a problem though. OS X is an OS for personal computers; I would certainly not use it for large virtualised clusters or to provide scalable services. So for me this kind of criticism is a moot point.
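For the curious, the Yosemite hypervisor support I mean is the Hypervisor.framework. A very rough sketch of what it gives you, assuming VT-x-capable hardware and leaving out all the actual guest setup (which is the hard part):

#include <Hypervisor/hv.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (hv_vm_create(HV_VM_DEFAULT) != HV_SUCCESS) {
        fprintf(stderr, "hv_vm_create failed (no VT-x, or framework unavailable?)\n");
        return 1;
    }

    /* 1 MiB of page-aligned host memory, mapped into the guest at physical address 0. */
    size_t mem_size = 1 << 20;
    void *guest_mem = NULL;
    posix_memalign(&guest_mem, 4096, mem_size);
    hv_vm_map(guest_mem, 0, mem_size,
              HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

    hv_vcpuid_t vcpu;
    hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

    /* A real hypervisor would now set guest registers and loop on hv_vcpu_run(). */

    hv_vcpu_destroy(vcpu);
    hv_vm_unmap(0, mem_size);
    hv_vm_destroy();
    free(guest_mem);
    return 0;
}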

I agree about the filesystem. Apple is clearly doing something in that direction with the Core Storage, but I can't wait for HFS+ to finally die.

Now to OpenGL. Please note that I am not 100% sure about this, but AFAIK Apple's OpenGL implementation is quite similar to how Microsoft does DirectX — you have a frontend that is provided by the OS and communicates with the driver via a predefined interface. This means that the graphics driver has to be more or less implemented from scratch for OS X. The big difference to Windows is that there, GPU manufacturers generate sales from gaming performance, so a lot of resources are being put into optimising that performance. On OS X, not so much. The Linux performance is good because the hardware vendors can reuse much of the Windows driver codebase. Unfortunately, OpenGL is a mess to implement and optimise for. The GPU performance on OS X will certainly improve with a change of graphics API to something less idiosyncratic and complex than current OpenGL. At any rate, I am quite sure that the graphics performance issues do not have much to do with the kernel.
 
Unigine Heaven Benchmark 4.0 on HD 4000

OS X 10.10.2 (OpenGL 4.0)
Min FPS: 6.9
Max FPS: 28.5
AVG: 15.2

Windows 8.1 (OpenGL 4.0)
Min FPS: 7.2
Max FPS: 29.3
AVG: 15.4

Windows 8.1 (DirectX 11)
Min FPS: 6.4
Max FPS: 29.4
AVG: 15.8
 
… anyone with knowledge …

I don't have specialist knowledge, but here goes …

… file system …

HFS Plus

Storage and file systems, HFS and legacy, kSKDiskRoleLegacyMacData and kSKDiskTypeHFS

Elsewhere I find other people using the word 'legacy' to describe HFS Plus and (unless I'm missing something) no-one arguing against that description.

Also, Apple’s Software Quality Decline – Power developers, software issues and loss of spirit – a couple of quotes from Don Brady.

Storage systems

… Apple is clearly doing something in that direction with the Core Storage …

Location independent files (patent application by Apple) – my interpretation is probably way off, but it's an interesting patent application.

Kernels, microkernels etc.

Of possible interest, my recent L4 post under 'Intel CEO Responds to Rumors of ARM-Based Macs, Says Relationship With Apple Is 'Strong''.
 
The Linux performance is good because the hardware vendors can reuse much of the Windows driver codebase.

How can it be that the Windows driver codebase can be reused for Linux but not for OS X? Thanks for your input on the other topics!


Unigine Heaven Benchmark 4.0 on HD 4000

OS X 10.10.2 (OpenGL 4.0)
Min FPS: 6.9
Max FPS: 28.5
AVG: 15.2

Windows 8.1 (OpenGL 4.0)
Min FPS: 7.2
Max FPS: 29.3
AVG: 15.4

Windows 8.1 (DirectX 11)
Min FPS: 6.4
Max FPS: 29.4
AVG: 15.8

That’s really interesting. But it surely isn’t reflected in game performance, where Windows performs far better. Maybe the drivers for Intel HD graphics are on par on all three operating systems?

Here are some benchmarks that a friend did using the game Tomb Raider (from 2013) on an iMac with these specs (the OS X version is 10.9.2, so that’s a bit old, but in my experience there’s still a big difference in performance for games when comparing Windows 8.1 and OS X 10.10.1):

- iMac (27-inch, Late 2012)
- i7 3.4 GHz
- 2 TB Fusion Drive
- 32 GB RAM
- NVIDIA GeForce GTX 680MX 2048 MB


and the benchmarks:

Here we have the results for Tomb Raider (2013) which has a built-in benchmark tool:

HIGH with vSync:
WIN: 46 / 62 / 57 fps
OSX: 16 / 37 / 33 fps

- Windows = 68-187% more frames per second.

NORMAL with vSync:
WIN: 58 / 62 / 60 fps
OSX: 30 / 40 / 36 fps

- Windows = 55-93% more frames per second.

HIGH without vSync:
WIN: 50 / 74 / 61 fps
OSX: 27 / 37 / 34 fps

- Windows = 80-100% more frames per second.

NORMAL without vSync:
WIN: 64 / 96 / 78 fps
OSX: 28 / 40 / 36 fps

- Windows = 116-140% more frames per second.

Lowering the game’s resolution to 1280 x 720 in OS X, it ran at 60 fps. But running the game at the same resolution in Windows yields 168 fps, which is almost 3x the performance. :(

In Windows it's also possible to run the game with higher settings that aren't available in OS X. So in Windows you get a better looking game that still performs a lot better.


Like I said, I still see a much bigger frame rate drop for ”heavy” games in OS X compared to Windows on the same hardware. I wonder why that is, if the Unigine Heaven Benchmark reflects actual game performance. :confused:

Anyway, thanks for your input guys! Looks like there's more to things than first meets the eye… as usual. :)
 
How can it be that the Windows driver codebase can be reused for Linux but not for OS X? Thanks for your input on the other topics!

As I have mentioned (but probably not emphasised enough), this is all based on the bits and pieces I could gather here and there about OpenGL on OS X, such as disassembling parts of OS X drivers and looking at symbol tables. This information is not really officially available, so my conclusions could be wrong. Just wanted to repeat this again before I go into more detail.

On Windows and Linux, the graphics driver is actually the entire OpenGL implementation. What the OS offers you is a sort of proxy, which in turn delegates all the relevant calls to the driver. This means that the hardware vendors (Nvidia, AMD, Intel etc.) are responsible for implementing the entire OpenGL stack — the base interface, the state machine, the shader compiler, the optimiser etc. Because the driver needs to supply an entire OpenGL implementation, there is a lot of code that is not actually tied to a particular platform. For example, the state machine implementation, the resource system and the shader compiler/optimiser. To be clear, I do not exactly *know* how Nvidia writes their drivers, but it would make a lot of sense for them to share the code for the things mentioned above between Linux and Windows. And because I know that they are very smart people, I am sure that they do exactly that.

Now, DirectX on Windows and OpenGL on OS X work differently. Here, there is a common front-end API implementation that is supplied by the OS. This front end handles the requests from normal applications, processes them in some way and then delegates them to a hardware-specific plugin. The interface between the OS and that plugin is a custom internal API, and that is what the hardware vendor needs to implement. So Nvidia/AMD etc. cannot really reuse the bulk of their Windows/Linux driver code here, because the scope of the driver is very different: they are not in complete control of the OpenGL implementation. And as I have mentioned before, they do not have that much pressure to optimise their drivers for OS X.
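To make the front-end idea concrete, here is a minimal sketch of mine (plain C against Apple's OpenGL framework, nothing official) that asks that OS-supplied front end for a core-profile context and prints who is actually doing the rendering; the strings come back from Apple's front end together with the vendor's plugin:

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl3.h>
#include <stdio.h>

int main(void)
{
    /* Ask the OS front end (CGL) for a hardware-accelerated 3.2 core profile. */
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
        kCGLPFAAccelerated,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || pix == NULL) {
        fprintf(stderr, "no matching pixel format\n");
        return 1;
    }

    CGLContextObj ctx = NULL;
    CGLCreateContext(pix, NULL, &ctx);
    CGLSetCurrentContext(ctx);

    /* These strings are reported by the OS front end + the vendor's GPU plugin. */
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}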

All in all, we see around a 20-25% difference in performance between Windows and OS X. Based on this, one could well suspect insufficient optimisation. Although it might also be that there is something inherently inefficient in Apple's front-end implementation or in the way it interfaces with the GPU plugin driver.

Ultimately, I believe that a DirectX-style implementation (a common front end, simple pluggable drivers) is the way to go. It moves most of the complexity out of the driver, increases stability and performance, and makes things more predictable. However, this is quite difficult to do properly with OpenGL because of the flaws in its design. This is why Khronos (together with Apple, AMD, Nvidia and others) is designing a modern API to replace OpenGL. If this design is done properly, I expect OS X graphics performance to come within 95% of Windows.
 
Just noticed that the driver version for Intel HD 4000 graphics jumped from 10.0.86 in OS X 10.10.1 to 10.2.46 in OS X 10.10.2.

Maybe that big jump in versioning has something to do with the performance of OS X in the Unigine benchmark becoming on par with Windows and Linux? I don't know. Just a thought.

Edit:

Very interesting post, leman.
I definitely think it's time that the graphics performance on OS X gets closer to the other OSes, whichever way that happens. Didn't know they were working on a new API to replace OpenGL. Wonder when it will be ready? Also, would it be such a big thing for Apple to change the OpenGL implementation so it is more similar to how it's done in Windows and Linux? Maybe that's too big of a change in OS X itself.
 
That’s really interesting. But it surely isn’t reflected in game performance, where Windows performs far better. Maybe the drivers for Intel HD graphics are on par on all three operating systems?
It's worth stating here that a lot of games on OS X suffer from optimization issues that Windows doesn't have. The overwhelming majority of games - particularly those that are graphically intense - are developed for Windows first, and tend to be ported to OS X. Some ports are awful, and others are quite good, but a port is a port. I would not expect ports to perform as well as the original game being played on its native operating system.
 
The problems with HFS+ have been covered in a number of places.
The implementation of hard links in HFS+ alone ...
http://arstechnica.com/apple/2011/07/mac-os-x-10-7/12/

I'm more interested in virtualization than gaming graphics. I'm not sure how well the new virtualization support in Yosemite compares with the implementation in other operating system kernels (e.g. KVM in Linux). Perhaps some VMware Fusion vs KVM benchmarks on the same hardware might be useful.
 
Didn't know they were working on a new API to replace OpenGL. Wonder when it will be ready?

http://www.slideshare.net/NeilTrevett/khronos-news-and-next-generation-opengl-from-siggraph-2014

As to when it will be ready: nobody knows. I think they already have come quite far, as they are looking for a name (https://www.khronos.org/surveys/index.php/929633/lang-en). But I hope they don't rush too much and spoil things. Then again, there is a lot of material they can work with (such as Mantle and Metal).

Also, would it be such a big thing for Apple to change the OpenGL implementation so it is more similar to how it's done in Windows and Linux? Maybe that's too big of a change in OS X itself.

OpenGL is tightly integrated with OS X and allows it to do a bunch of tricks that I am not sure any other OS can do. First of all, OS X performs all the user interface drawing via OpenGL, so they need to maintain a tight grip on how the system is implemented underneath in order to have predictable performance. Then, their particular OpenGL implementation is what allows them to perform on-the-fly GPU switching within an open application. Nvidia drivers can do something similar with Optimus, but that's not really OS-level support. Finally, OS X has something that I have not seen in any other OS (but I might be mistaken here): cross-process graphics buffers (the IOSurface API). This is a huge thing because it allows isolated programs to use the same GPU buffer without any data copying whatsoever.
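Roughly how that looks in practice, as a sketch of my own (the surface size is arbitrary and the hand-off mechanism is left out): one process creates an IOSurface and passes its global ID to another process, which then maps the very same GPU buffer via IOSurfaceLookup:

#include <CoreFoundation/CoreFoundation.h>
#include <IOSurface/IOSurface.h>
#include <stdio.h>

static IOSurfaceRef create_shared_surface(void)
{
    int width = 256, height = 256, bytes_per_element = 4; /* e.g. BGRA */
    CFNumberRef w = CFNumberCreate(NULL, kCFNumberIntType, &width);
    CFNumberRef h = CFNumberCreate(NULL, kCFNumberIntType, &height);
    CFNumberRef b = CFNumberCreate(NULL, kCFNumberIntType, &bytes_per_element);

    const void *keys[]   = { kIOSurfaceWidth, kIOSurfaceHeight, kIOSurfaceBytesPerElement };
    const void *values[] = { w, h, b };
    CFDictionaryRef props = CFDictionaryCreate(NULL, keys, values, 3,
                                               &kCFTypeDictionaryKeyCallBacks,
                                               &kCFTypeDictionaryValueCallBacks);

    IOSurfaceRef surface = IOSurfaceCreate(props);
    CFRelease(props); CFRelease(w); CFRelease(h); CFRelease(b);

    /* This ID is valid system-wide. Hand it to another process (XPC, a Mach
       port, whatever) and that process calls IOSurfaceLookup(id) to get a
       reference to the very same GPU buffer -- no copying involved. */
    printf("IOSurfaceID to share: %u\n", IOSurfaceGetID(surface));
    return surface;
}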
 
Unigine benchmark has native OpenGL support. 90% of all Mac games have DirectX>OpenGL wrappers.
 
I can't get excited about the real-time stuff. As long as VirtualBox works, that's enough virtualization for me. The filesystem complaint has some merit, but then one can understand the reasoning behind sticking with HFS+.

For me, the biggest annoyance at the kernel level in OS X is the lack of cross-process POSIX mutexes and condition variables. Working around that lack is a very serious pain when porting applications that were (for very good reasons) set up as multi-process apps.
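One common workaround (a rough sketch, with a made-up semaphore name) is to fall back on named POSIX semaphores for the cross-process locking part, since OS X does support those (unnamed sem_init semaphores, on the other hand, are not implemented):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* Any cooperating process that opens the same name gets the same semaphore. */
    sem_t *lock = sem_open("/myapp-shared-lock", O_CREAT, 0644, 1);
    if (lock == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(lock);                 /* enter cross-process critical section */
    puts("touching state shared between processes");
    sem_post(lock);                 /* leave it */

    sem_close(lock);
    /* sem_unlink("/myapp-shared-lock"); once no process needs it any more */
    return 0;
}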
 
It's worth stating here that a lot of games on OS X suffer from optimization issues that Windows doesn't have. The overwhelming majority of games - particularly those that are graphically intense - are developed for Windows first, and tend to be ported to OS X. Some ports are awful, and others are quite good, but a port is a port. I would not expect ports to perform as well as the original game being played on its native operating system.

Sure, a port is a port. But what about Blizzard games? I think they are developed in parallel, and they run much better in Windows compared to OS X. Could still be some kind of port though, I guess.

Unigine benchmark has native OpenGL support. 90% of all Mac games have DirectX>OpenGL wrappers.

That doesn't explain why the Linux versions of these games seem to perform better than the same games on OS X. Blizzard games also perform much better in Windows compared to OS X, and they are at least developed in parallel with the Windows version. I just tried World of Warcraft on both systems and the difference is big.

This is a bit old, but look here: http://www.phoronix.com/scan.php?page=article&item=intel_sandy_threesome&num=4

Anyway, I'm happy if that Unigine benchmark is representative of OpenGL on OS X – then it means the problem with the performance of games in OS X isn't the OpenGL implementation in the operating system, but lies elsewhere.
 
Blender Cycles Benchmark (BMW, default settings)
Windows 8.1 x64: 12:53 minutes
OS X 10.10.2: 11:25 minutes
 
Compared to Linux, OS X lacks any form of virtualization at the OS level (LXC and the like).

I would have thought KVM would be way more relevant to a discussion of virtualisation than LXC (which is container-based/OS-level virtualisation).

This is also lacking in Windows, but Microsoft has been working on it ever since Docker became so popular that MS recognized the need for the Windows kernel to support it as well.

I guess the person who posted this has not heard of Hyper-V? It's only been around for more than 6 years.

But guess what? Virtualisation on Linux and Windows is almost exclusively reserved for server use (or high-end Windows desktop products), whereas OS X is almost exclusively a desktop/consumer OS now.

The flipside to this is that both VMWare Fusion and Parallels have *the best* host OS integration and guest tools of pretty much any VM solution out there, regardless of the fact they are not native to the OS X kernel.

The file system in OS X was already outdated 10 years ago, and it still has not been fixed. Today it causes a lot less trouble than before, because SSDs basically hide the performance problems.

And what does this have to do with the kernel? Nothing! NTFS is out of date, the ext family is out of date... pretty much everything is "out of date" except for ZFS and Btrfs.

The graphics stack is a joke from a performance perspective: comparing games available on Windows, OS X and Linux, it is pretty obvious that the low performance you get in OS X cannot be blamed on OpenGL, since Linux is typically as fast as or marginally faster than DirectX on Windows, while OS X is significantly slower.

OS X is far behind Linux in terms of OpenGL version and graphics drivers... but you know what? I'd prefer to actually have commercial games on OS X compared to... bugger all on Linux.
 
HFS Plus – Mac OS Extended, overextended, legacy

The file system

… one can understand the reasoning behind sticking with HFS+. …

What, would you say, are Apple's reasons?

Storage systems: Core Storage

The inventor – Deric Horn – left Apple over a year ago.

To the best of my knowledge, there's still no public API. I don't expect source code to become available. In the absence of those things …

… there's libfvde (previously on Google Code, now on GitHub) but the closed approach of Apple surely limits what can be done by third parties.

Thoughts

I shouldn't take things off-topic with discussion of Microsoft's Resilient File System (ReFS) or Storage Spaces, so just this one sentence from Where is the ZFS? – Reasons to not go solely proprietary:

… an extremely wealthy organisation delivering a technology that's a poor substitute for ZFS. …

Whilst Apple appears to make relatively slow progress with its storage system and its preferred file systems, other technologies are way ahead, and from their more advanced viewpoint the distance back to Apple continues to grow.

Relative values of storage systems and file systems, simplified:

Apple < Microsoft < ZFS in Oracle Solaris < OpenZFS
 
”If you try some more advanced system programming on OS X, you quickly realize that things that are taken for granted in Linux (POSIX real-time extensions, control over which CPU core(s) a thread runs on, newer TCP RFCs, etc.) simply do not exist in OS X. 10-12 years ago the OS X kernel was at the forefront with its features, but all of that seems to be gone compared to newer versions of the BSD kernels in terms of new developments since then. Windows also has virtually all of these features, but it is not as obvious how much is missing when comparing with Windows, because it is a different API (i.e., not POSIX).

This guy seems very biased in what he says and sounds like a typical Linux fanboy.

POSIX is a group of standards that define a portable API for Unix-like operating systems. Mac OS X is Unix-based (and has been certified as such) and, in accordance with this, is POSIX compliant. POSIX guarantees that certain system calls will be available.

Essentially, Mac satisfies the API required to be POSIX compliant, which makes it a POSIX OS.

Not all versions of Linux are POSIX-compliant. Kernel versions prior to 2.6 were not compliant, and today Linux isn't officially POSIX-certified because they haven't gone out of their way to get certified (which will likely never happen). Regardless, Linux can be treated as a POSIX system for almost all intents and purposes.
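To illustrate the point, here is a small compile-time probe of my own (nothing authoritative): being "POSIX compliant" only covers the base standard, while optional parts such as the real-time extensions are advertised, or not, through option macros in <unistd.h>:

#include <stdio.h>
#include <unistd.h>

/* Prints what an option macro advertises: a positive value means supported,
   0 means "ask at runtime via sysconf", -1 means unsupported. */
#define REPORT(opt) \
    printf("%-36s %ld\n", #opt, (long)(opt))

int main(void)
{
#ifdef _POSIX_VERSION
    printf("%-36s %ld\n", "_POSIX_VERSION", (long)_POSIX_VERSION);
#endif
#ifdef _POSIX_TIMERS
    REPORT(_POSIX_TIMERS);
#else
    puts("_POSIX_TIMERS                        not defined");
#endif
#ifdef _POSIX_THREAD_PRIORITY_SCHEDULING
    REPORT(_POSIX_THREAD_PRIORITY_SCHEDULING);
#else
    puts("_POSIX_THREAD_PRIORITY_SCHEDULING    not defined");
#endif
#ifdef _POSIX_MESSAGE_PASSING
    REPORT(_POSIX_MESSAGE_PASSING);
#else
    puts("_POSIX_MESSAGE_PASSING               not defined");
#endif
    return 0;
}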

Compared to Linux, OS X lacks any form of virtualization at the OS level (LXC and the like). This is also lacking in Windows, but Microsoft has been working on it ever since Docker became so popular that MS recognized the need for the Windows kernel to support it as well.

If this were true then Parallels, VMware, etc. wouldn't be running on the system, and if they did, they would run poorly...


The file system in OS X was already outdated 10 years ago, and it still has not been fixed. Today it causes a lot less trouble than before, because SSDs basically hide the performance problems.

Did he forget that NTFS (Windows) and ext3 (Linux) are also more than 10 years old? I'm just not getting the point of what this guy is saying. He needs to do his research to understand the facts about all of this.

The graphics stack is a joke from a performance perspective: comparing games available on Windows, OS X and Linux, it is pretty obvious that the low performance you get in OS X cannot be blamed on OpenGL, since Linux is typically as fast as or marginally faster than DirectX on Windows, while OS X is significantly slower.”

OS X may be a bit slower, and this is due to Apple using an older version of OpenGL. I wouldn't say that DirectX is better than OpenGL either, because the implementation of OpenGL is very poor on Windows. Microsoft doesn't have OpenGL implemented in the root of the OS, so instead driver developers have to implement it at the driver level. This is due to how Microsoft brainwashed the public into believing DirectX is better, which in reality is their way of trying to get people to develop for it, and in return developers can't port their games unless they rewrite the code to work with OpenGL. So it's true that OpenGL being slower on the Mac is Apple's fault, but maybe they do this on purpose because they want to have a stable OS? Who knows?

Anyway... Since this Linux guy is a fanboy, maybe you should remind him how every time the kernel gets updated the graphics and audio drivers may need to be reinstalled, or people have to wait for an update to be released. Or how there are so many different variants of Linux that each variant almost needs a different package system for installing software... Or how confusing it is for a basic user to know which Linux distribution to choose. Yes, it's good to be able to choose how you want your OS, but it's a slap in the face for people who are just getting their first laptop.
 
POSIX is a group of standards that define a portable API for Unix-like operating systems. Mac OS X is Unix-based (and has been certified as such) and, in accordance with this, is POSIX compliant. POSIX guarantees that certain system calls will be available.

POSIX comes in different versions and with different extensions. OS X might be POSIX compliant, but that does not mean it supports every POSIX element out there. Of course, most of that functionality is supported by Mach or other interfaces.


OS X may be a bit slower, and this is due to Apple using an older version of OpenGL.

Please do not say things like this, because you only discredit yourself. OpenGL versions have nothing to do with performance. They have to do with functionality, which — in particular cases and if correctly implemented — might improve the performance of certain applications. I am not aware of any dramatic changes from 4.1 to 4.5 that would bring massive performance increases with them. Not to mention that hardly any OS X game uses functionality beyond version 3.2.

I wouldn't say that DirectX is better than OpenGL either, because the implementation of OpenGL is very poor on Windows. Microsoft doesn't have OpenGL implemented in the root of the OS, so instead driver developers have to implement it at the driver level.

I don't see how this is a poor implementation. It's just a radically different approach. This is ultimately what allows such great OpenGL performance on Windows and gives the vendors the freedom to implement a plethora of extensions.

This is due to how Microsoft brainwashed the public into believing DirectX is better, which in reality is their way of trying to get people to develop for it, and in return developers can't port their games unless they rewrite the code to work with OpenGL.

This, now, makes no sense at all. DirectX is better because it's more stable, has better developer tools, offers predictable performance and does not suffer as much from vendor-specific idiosyncrasies. This has not always been the case, but MS has come a long way with their graphics API.

So it's true that OpenGL being slower on the Mac is Apple's fault, but maybe they do this on purpose because they want to have a stable OS?

This makes even less sense. Ever heard of Occam's Razor?
 
Do you even do your own research before speaking? DirectX is a scam, period! It's Microsoft's way of trying to eliminate porting. You should really look up what big-name developers have said about the whole ordeal with OpenGL and DirectX. Almost every system and device that has 3D capabilities uses OpenGL. DirectX is only used on Microsoft products. http://blog.wolfire.com/2010/01/Why-you-should-use-OpenGL-and-not-DirectX

A matter of fact is that OpenGL can be a lot more powerful than DirectX. The only problem is that when you have companies like nVidia and AMD that focus more on DirectX, you get a shoddy OpenGL implementation. I've seen many videos trying to compare OpenGL with DirectX on a Windows machine, but they fail to realize that nVidia and AMD don't work as hard on their OpenGL drivers. OpenGL is not present in a lot of Windows releases unless driver developers implement it themselves.

As for POSIX, the dude clearly said there was none that existed in OS X and I was calling him out on that, so your words make no sense given where the conversation was going. Do your research before posting, seriously...

And just to prove my point even further, read what Valve has said:

Article 1: http://www.extremetech.com/gaming/1...programmer-discusses-wretched-state-of-opengl
 
Do you even do your own research before speaking?

I have developed software using both OpenGL and DirectX. I also used to moderate the OpenGL forums back when I was a younger and happier person :p I think I know a thing or two about graphics APIs, even though I'm a bit rusty.

DirectX is a scam, period! It's Microsoft's way of trying to eliminate porting.

Oh my god, every proprietary API is a scam! I am afraid to even guess what you must think of OS X, with its non-portable Cocoa, NextStep, Mach, Objective-C and Swift :D

A matter of fact is that OpenGL can be a lot more powerful than DirectX.

Of course it can. It's a flexible spec that can be extended as much as one wants. Nvidia exposes a number of close-to-metal extensions in their Windows drivers which allow you to achieve great speedups over DirectX or stock OpenGL.
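That said, any such vendor extension has to be detected at runtime before you rely on it. A small sketch of the usual check (it assumes a current core-profile context, and the extension name is only an example):

#include <OpenGL/gl3.h>
#include <stdbool.h>
#include <string.h>

/* Returns true if the current context exposes the named extension. */
static bool has_gl_extension(const char *wanted)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *name = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (name != NULL && strcmp(name, wanted) == 0)
            return true;
    }
    return false;
}

/* Usage:
       if (has_gl_extension("GL_NV_bindless_texture")) {
           // take the vendor fast path
       } else {
           // fall back to the portable path
       }
*/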

The only problem is that when you have companies like nVidia and AMD that focus more on DirectX, you get a shoddy OpenGL implementation.

The problem is that OpenGL is a 1000-page-long specification, with multiple hundreds of pages worth of extensions added on top, and a shading language without conformance testing to boot. This already makes writing an OpenGL driver a nightmare. And it's certainly not helping that different vendors have their own super-extensions. If you want to write a demanding game that performs well with OpenGL, you'd need to write an extra code path for every piece of hardware out there. This completely invalidates your point about portability. OpenGL seems portable, but in its current form, it's not!

Of course, this does not matter for most indie games that are far from GPU-demanding. Those should use OpenGL because it's easier to target more platforms. Still, OpenGL development tools are quite rudimentary, which makes it even more difficult.

As for POSIX, the dude clearly said there was none that existed in OS X and I was calling him out on that, so your words make no sense given where the conversation was going.

He said that the real-time extensions did not exist on OS X. At least try to read what's going on in the thread before accusing other people of not doing their research.
 
I have developed software using both OpenGL and DirectX. I also used to moderate the OpenGL forums back when I was a younger and happier person :p I think I know a thing or two about graphics APIs, even though I'm a bit rusty.

Yeah, you're a bit rusty, so you don't know what's going on today. I'm a developer too and have been studying. :D



Oh my god, every proprietary API is a scam! I am afraid to even guess what you must think of OS X, with its non-portable Cocoa, NextStep, Mach, Objective-C and Swift :D

What does this have to do with the facts I stated about DirectX? Did you read those articles I posted to prove how you were wrong? Or did you just decide to point something else out instead of admitting you were wrong? Didn't you know you can code in C in Xcode? C is very portable and works with Windows, Linux, Mac, etc... You can also code in Python, and it's integrated into OS X so there's nothing to install.

Of course it can. It's a flexible spec that can be extended as much as one wants. Nvidia exposes a number of close-to-metal extensions in their Windows drivers which allow you to achieve great speedups over DirectX or stock OpenGL.

You clearly ignored what I was explaining in my previous post...

The problem is that OpenGL is a 1000-page-long specification, with multiple hundreds of pages worth of extensions added on top, and a shading language without conformance testing to boot. This already makes writing an OpenGL driver a nightmare. And it's certainly not helping that different vendors have their own super-extensions. If you want to write a demanding game that performs well with OpenGL, you'd need to write an extra code path for every piece of hardware out there. This completely invalidates your point about portability. OpenGL seems portable, but in its current form, it's not!

Again, you ignored what I said about nVidia and AMD and their shoddy implementation of OpenGL, which is the main reason things are not right on Windows when it comes to OpenGL. Did you not read the Valve article I mentioned?

Of course, this does not matter for most indie games that are far from GPU-demanding. Those should use OpenGL because it's easier to target more platforms. Still, OpenGL development tools are quite rudimentary, which makes it even more difficult.

Framing OpenGL as an "indie thing" is just wrong in many respects, because your PS4, Wii, iOS, and Android devices all use OpenGL.

He said that the real-time extensions did not exist on OS X. At least try to read what's going on in the thread before accusing other people of not doing their research.

If you read what he said, the part in the parentheses was just giving examples, and if you removed that part it would say that there was none.


EDIT: I'm doing some more research on the matter, and since DirectX 12 has been announced, you are right about the issues.
 
What, would you say, are Apple's reasons?
(for sticking with HFS+)

Simple. Inertia, lack of burning need for anything better, and lack of any really obvious and obviously better alternative -- or at least, lack of an internal evangelist for an alternative.

I'm not trying to defend HFS+ except that it's been mostly good enough over the years.

As for alternatives: ZFS, maybe. Someone mentioned Btrfs; I ran it on my Linux dev machine for a week until I traced my terrible DBMS performance problems to btrfs, and removed it. XFS is a good performer but large, complicated, and possibly tricky to port. Ext4, yawn. NTFS is proprietary and is only interesting when compared to FAT.
 
@DJEmergency: look, I am not going to argue with you. If you think that OpenGL ES, OpenGL and GNMX are the same thing, that's your problem. And I am not going to explain to you the difference between 'open standards' and 'standard implementations'.

At any rate, good luck with your studying; you seem to be in desperate need of it.
 
http://www.slideshare.net/NeilTrevett/khronos-news-and-next-generation-opengl-from-siggraph-2014

As to when it will be ready: nobody knows. I think they already have come quite far, as they are looking for a name (https://www.khronos.org/surveys/index.php/929633/lang-en). But I hope they don't rush too much and spoil things. Then again, there is a lot of material they can work with (such as Mantle and Metal).

Here we go! :)

http://www.vg247.com/2015/02/03/valve-to-unveil-khronos-glnext-at-gdc-next-month/
 