You seem to imply that Apple's 64-bit transition method in Tiger and Leopard, where both 32-bit and 64-bit apps are supported while the kernel remains 32-bit, is not ideal. Apple's method actually makes a lot of sense, since it fosters the development of 64-bit apps without breaking drivers and system compatibility the way a pure 64-bit OS like Windows x64 does. When 64-bit apps run, the processor runs in pure 64-bit mode, so you get all the benefits of more registers, a larger address space, etc. When 32-bit apps run, the processor runs in 32-bit mode, so nothing breaks. The kernel is also 32-bit, but with 36-bit PAE enabled so that it can manage up to 64GB of RAM for use by both 32-bit and 64-bit applications.
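As a sanity check on those numbers (a quick illustrative calculation, not Apple-specific code): PAE widens physical addresses from 32 to 36 bits, which is exactly where the 64GB ceiling comes from, while each individual 32-bit process still lives in a 4GB virtual address space.

```python
# Back-of-the-envelope address-space arithmetic for the modes discussed above.
GiB = 2 ** 30

plain_32bit_phys = 2 ** 32   # classic 32-bit physical addressing: 4 GiB
pae_phys         = 2 ** 36   # PAE: 36 physical address bits
process_virt     = 2 ** 32   # each 32-bit process still sees only 4 GiB

print(plain_32bit_phys // GiB)  # 4
print(pae_phys // GiB)          # 64 -> the 64GB of RAM a PAE kernel can manage
print(process_virt // GiB)      # 4  -> per-process limit is unchanged by PAE
```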

The problem with that approach is that it carries a lot of performance penalty. It is basically a non-scalable approach. I understand why Apple did it - for the ease of porting to 64-bit and for driver compatibility - they simply did not have enough time and resources to attempt a full 64-bit port, which is what they are doing with Snow Leopard. They should have done that with Leopard.

The transition method Apple uses avoids the chicken-and-egg problem, where developers are reluctant to write 64-bit programs and new 64-bit drivers because there aren't enough users of 64-bit operating systems, while there aren't enough 64-bit users because there aren't the 64-bit programs to justify the switch, and there aren't enough 64-bit drivers, so many of your devices won't work.

Note that developers don't _have_ to write 64-bit programs - 32-bit programs run very well on a 64-bit kernel thanks to the excellent backward compatibility AMD built into x86_64. Also note that there aren't nearly enough OS X drivers outside of Apple among the OEMs to make this driver porting problem even significant - with the right kind of design and APIs, 32-bit to 64-bit driver ports are no big deal - most cases are just a recompile. Those gazillion Windows and Linux drivers are already happily 64-bit.

Compare this to Microsoft, who faced the same issue, albeit on a very large and arguably different scale (countless OEMs with a huge number of supported devices) - to their credit, they have resolved the driver problem with Vista x64. I am typing this on Vista x64 with 10GB RAM and a load of weird devices that just work - wireless N adapters, Bluetooth stereo headphones, cutting-edge graphics, you name it. So it is possible - Apple just chose the lame route.

With a 32-bit kernel and the ability to run 64-bit apps, 64-bit app development can get a head start, so that when a pure 64-bit OS with a 64-bit kernel arrives in Snow Leopard there are at least some 64-bit programs to encourage users to transition, so that the market is there for developers to spend time writing new 64-bit drivers and more 64-bit programs. Admittedly, in practice there isn't a glut of 64-bit programs right now - I only know of Mathematica, Cinema 4D, Chess, and Xcode 3 - but it's a good idea in concept.

Again, a 64-bit kernel does not force writing 64-bit programs - 32-bit stuff works great. So I don't understand why a 32-bit kernel was needed to kick-start the development of 64-bit apps. Apple could have just thrown in a 64-bit kernel and the rest of the situation would have remained the same. There was no understandable justification for a 32-bit kernel running a 64-bit user space.

And in 32-bit mode, the ability to devote the complete 4GB address space to an application when it is running, or to the kernel when it is running, instead of having a persistent kernel that takes up 2GB and leaves only 2GB for applications like 32-bit Windows, also makes a lot of sense. After all, you don't have issues on the Mac like in 32-bit Windows, where even games are now hitting the 2GB application address space limit and crashing. (http://www.anandtech.com/gadgets/showdoc.aspx?i=3034&p=1)
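To put numbers on the splits being compared (the 3GB case is Windows' optional /3GB boot switch, added here for context; these are the nominal limits, not exact usable figures):

```python
# Nominal user-mode address space under each 32-bit layout discussed above.
GiB = 2 ** 30

layouts = {
    "Windows default 2G/2G split": 2 * GiB,  # kernel occupies the upper 2 GiB
    "Windows /3GB switch (3G/1G)": 3 * GiB,  # kernel squeezed into 1 GiB
    "OS X style 4G/4G split":      4 * GiB,  # kernel has its own address space
}

for name, user_space in layouts.items():
    print(f"{name}: {user_space // GiB} GiB for the application")
```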

You have different issues on the Mac - scalability suffers due to the 32-bit kernel and the forced 4G/4G split. It's bad enough that no one chose to do it prior to Apple, and it shows.

The problem again is that it is not a sane solution - it involves mapping and unmapping of kernel address space on each switch from user to kernel mode.

And devoting the full 4GB address space on a 32-bit OS to an application or the kernel is not an Apple-only kludge, as you imply. Red Hat Linux has a kernel called Hugemem which allows individual applications and the kernel to exclusively consume the full 4GB space on 32-bit processors, just like in OS X. (http://blogs.oracle.com/gverma/2008/03/redhat_linux_kernels_and_proce_1.html)

You are confusing the kludges here - PAE is a performance problem, but it's an order of magnitude less horrible than the kludges I was referring to, which were: a) having to do redundant data copies between user space and the kernel at each switch due to the 4G/4G split; b) needing stubs that again copy data around between user space and syscall handlers due to the kernel being 32-bit while user space can be 64-bit.

PAE is actually supported in consumer versions of Windows too, but it's disabled because drivers need to be written to take it into account. Server drivers long ago standardized on PAE support, since before 64-bit processors were available it was the only way to get more than 4GB of memory. There was no such demand in consumer Windows, so consumer drivers don't support PAE and it's too late to get every driver rewritten.
That is correct - but PAE is again not the problem. The problem is having a 4G/4G split by default and running a 32-bit kernel while supporting a 64-bit user space and 32-bit drivers. Windows, Linux, and Solaris all avoid that for a reason - it's not that they weren't 'innovative' - it's that they understood this solution would not scale in the markets in which they operate (the server space), and that they would have to deal with a cleaner solution sooner or later. They rightly chose to face it sooner, whereas Apple in hindsight did the wrong thing - they now have to stop and do the right thing in Snow Leopard. Vista/Linux/Solaris have long since advanced past the problem - driver availability and scalability are no problems there.

When Apple transitioned to Intel, they learned from this issue and implemented PAE by default from the start. This is why there are no driver issues and Apple supports the Mac Pro running 32GB of RAM even though the Tiger and Leopard kernels are both 32-bit.

Red Hat did provide a hugemem kernel with PAE enabled in the days prior to general availability of 64-bit x86 CPUs, and they also provided the 4G/4G user/kernel split *as an option*, but as a Linux contributor I know that it is not at all encouraged, especially now that almost all CPUs are 64-bit capable already. And they do not run a 32-bit kernel to support 64-bit apps, nor do they do a 4G/4G split by default. No one does all those horrible things together except Apple.

There is a reason no one uses OSX on Server (including Macrumors.com ;) ).
 
The problem with that approach is that it carries a lot of performance penalty. It is basically a non-scalable approach. I understand why Apple did it - for the ease of porting to 64-bit and for driver compatibility - they simply did not have enough time and resources to attempt a full 64-bit port, which is what they are doing with Snow Leopard. They should have done that with Leopard.
...
Again, a 64-bit kernel does not force writing 64-bit programs - 32-bit stuff works great. So I don't understand why a 32-bit kernel was needed to kick-start the development of 64-bit apps. Apple could have just thrown in a 64-bit kernel and the rest of the situation would have remained the same. There was no understandable justification for a 32-bit kernel running a 64-bit user space.
...
You have different issues on the Mac - scalability suffers due to the 32-bit kernel and the forced 4G/4G split. It's bad enough that no one chose to do it prior to Apple, and it shows.
In terms of why they didn't go directly to a 64-bit kernel for Leopard, I think a lot of it had to do with timing. As you said, they didn't have the resources to do a full 64-bit port, especially with the iPhone going on in parallel with Leopard. Another issue was the question of legacy support for non-x64 Macs. When Leopard was released, it was only about a year since the last PPC Mac was sold; it just wasn't reasonable to drop PPC so early for Leopard. And the first Intel Macs were 32-bit Core Duo, so there was no way they could go directly to a 64-bit Intel kernel. Adding a third x64 kernel to the 32-bit PPC and x86 kernels for Leopard and maintaining all three of them probably wasn't worthwhile from a resource perspective, and it was more convenient to keep the established x86 kernel with the ability to run 64-bit apps, as in Tiger. The latest report on AppleInsider indicates that Snow Leopard is due mid-2009, which would be 3 years since the last PPC Mac was sold - enough to justify dropping the PPC kernel and adding an x64 kernel to replace it while maintaining the x86 kernel.

I don't think I understand what you mean by scalability issues with the 4GB exclusive address space for applications and the kernel. Do you mean the need for constant swapping when accessing the kernel? I grant that it isn't the best way to do things because of the TLB hit. It's not really a defense, but for OS X, I'm pretty sure it's always been this way. With PPC and the G5, the kernel was still 32-bit to maintain compatibility with the G3 and G4, but could use extended addressing so that the G5 Power Mac could support 8GB of RAM, and I'm pretty sure the 4GB exclusive address space policy was also in place. At the time of the G5 introduction, Apple wasn't as strong as it is today, so they probably didn't have the resources to maintain separate 32-bit PPC and 64-bit PPC kernels. Similarly, the 4GB exclusive mode was probably used to take advantage of as much memory as possible, instead of restricting apps to 2GB and the kernel to 2GB, and to make up for the lack of a 64-bit PPC kernel. And from the perspective of the Intel transition and x64, they aren't really losing existing performance by going with the 32-bit PAE kernel / 64-bit app / 4GB exclusive address route, and they still gain performance from the faster Intel processors - they are just choosing to leave potential performance on the table.

They left additional performance on the table, but in the end it probably doesn't matter for Apple, since overall performance has improved over time anyway and it hasn't really impacted Mac sales growth. I doubt there would have been a sharp rise in Mac sales if Apple had released Leopard with a 64-bit kernel. In the end, their kludge just means that the performance improvement moving from 32-bit kernel Leopard to 64-bit kernel Snow Leopard will likely be greater than in the 32-bit Windows to 64-bit Windows transition. The improvement comes from not having to swap the kernel and 32-bit applications anymore, since the kernel can be bigger than 4GB and stay persistent in some region above the lower 4GB along with other 64-bit apps, while 32-bit apps continue to have the full 4GB address space and swap for the lower 4GB region.
 
And the first Intel Macs were 32-bit Core Duo, so there was no way they could go directly to a 64-bit Intel kernel. Adding a third x64 kernel to the 32-bit PPC and x86 kernels for Leopard and maintaining all three of them probably wasn't worthwhile from a resource perspective, and it was more convenient to keep the established x86 kernel with the ability to run 64-bit apps, as in Tiger. The latest report on AppleInsider indicates that Snow Leopard is due mid-2009, which would be 3 years since the last PPC Mac was sold - enough to justify dropping the PPC kernel and adding an x64 kernel to replace it while maintaining the x86 kernel.

Right - Apple simply did not get a chance to plan ahead on the architecture switch, and the other factor is that Apple's growth is a recent phenomenon that wasn't the case before or immediately after the Intel switch. So to summarize, the route Apple took wasn't ideal, but it did allow them to keep selling until they had the right solution ready with Snow Leopard.

And I understood this before too, but for someone who even slightly understands what is involved in running a 32-bit kernel with a 64-bit user space while providing a 4G/4G split - it is very hard to justify.

I don't think I understand what you mean by scalability issues with the 4GB exclusive address space for applications and the kernel. Do you mean the need for constant swapping when accessing the kernel?

There are two issues. One arises from using a 32-bit kernel to support 64-bit applications, wherein the kernel needs 64-bit stubs that copy data from the application to the system call and trap handlers, and this involves switching between long mode and compatibility mode and back again. (Stubs run in long mode, the rest of the kernel in compatibility mode.)

The other relates to the 4G/4G user/kernel split, which implies separate address spaces for the kernel and user space. The system entry/exit code has to switch between the kernel page tables and the user page tables, and although TLB misses are relatively expensive, the real overhead is the manipulation of the cr3 register to switch to and from the kernel page tables on each system entry and exit.
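A rough model of that cr3 overhead (every number below is an illustrative guess, not a measurement - the point is only that the cost is paid twice per system call and grows with the syscall rate):

```python
# Illustrative-only numbers: a cr3 write invalidates the TLB, so each
# system call pays for the write itself plus refilling hot TLB entries,
# and with a 4G/4G split that happens on both entry and exit.
CR3_WRITE_CYCLES  = 100            # assumed cost of the mov-to-cr3 itself
TLB_REFILL_CYCLES = 2000           # assumed cost of repopulating hot entries
SYSCALLS_PER_SEC  = 100_000        # a syscall-heavy workload
CPU_HZ            = 2_000_000_000  # a 2 GHz core

cycles_per_syscall = 2 * (CR3_WRITE_CYCLES + TLB_REFILL_CYCLES)
overhead = cycles_per_syscall * SYSCALLS_PER_SEC / CPU_HZ

print(f"~{overhead:.0%} of one core spent switching page tables")  # ~21%
```

Under a kernel that shares the address space with user programs (the usual 2G/2G or 3G/1G layouts), that entire term disappears, which is why server OSes refused to pay it.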

They should have just had two kernels, as they will in Snow Leopard - one 32-bit and the other 64-bit - that's the right way. But anyway, it wasn't a big deal for Apple and the markets in which it operates.

But all of this of course leaves out the main point of this thread - why was Leopard unstable and/or slow on a MacBook Pro with 8GB when we know the CPU and the chipset are fine dealing with 8GB, and when Leopard itself handles >8GB OK on, say, a Mac Pro? Looks to me like only Apple can answer :)
 
just wondering

But has anyone besides iFixit tried the 8GB of RAM?

Why I ask is because that stick that he pulled out could have been bad... I know he said he didn't run a full memory test on it, and honestly those tests don't catch all RAM errors.

So basically what I am looking for is multiple people testing this theory that it freaks out with 8GB of RAM in it before we all freak out. I know you have had the same results with previous MBPs. I am just looking to confirm with this particular system.
 
I had no idea that Apple's 64-bit support was this screwed up.

This is an enlightening thread - thank y'all for the information.

Yep. Another reason why 64-bit CS4 is Vista-only. Apple really needs to put their focus back on computers or spin the business off.
 
Personally I use 4GB at work, and the only time I ran out of juice is when I tried to use the 3D features in AI CS3 (and the object was not that complex - just some text to turn 3D). I suppose I'm one of those that would need more than 4GB? But I just can't see why a task like that would require so much RAM?? After all, it isn't Maya I'm using :confused:

I'd like to see a poll of users who Need 8GB as opposed to 4GB :)

I think Maya uses RAM more intelligently than the built-in 3D tools in AI. 3D isn't AI's main function - that is my best guess.
 
Well, maybe they just need to run some more processes to use the RAM?

I'm sure they know how to get the computer to use the memory, but something about this doesn't seem right.
 
So we know it's not drivers (since a check shows everything in the 0xFFFFFFFF00000000-and-up range)

It's also not CPU or chipset related (since we know of comparable laptops which can run with 8GB, and we can run the MBP at >4GB with a diminishing success rate as we approach 8GB).

It also likely has nothing to do with OS X, since we know that the Mac Pro and Xserve models both officially support 32GB of RAM. 8GB of RAM is also supported on (probably all) OSx86 distros, and works fine on my iDeneb test box (don't flame me - I still own a MacBook Pro and MacBook).

It's likely the EFI. Something in there - either the fact that one system IDs as a "Mac Pro" and another as a "MacBook Pro", or that the EFI itself does not properly initialize the MMU for >4GB of RAM - but something in the EFI is crippling the MacBook Pro.

Anyone up to finding a way to make a MacBook Pro "pretend" to be a Mac Pro?
 
It's likely the EFI. Something in there - either the fact that one system IDs as a "Mac Pro" and another as a "MacBook Pro", or that the EFI itself does not properly initialize the MMU for >4GB of RAM - but something in the EFI is crippling the MacBook Pro.

If firmware were the issue, it would result in hard lockups, or the OS wouldn't see the full 8GB at all. In this case the OS sees all 8GB but doesn't use it. I can't imagine how the firmware would cause this.

I have a feeling it's the OS build on the MBP that's doing the limiting. We will not know for sure unless iFixit runs a test with Vista x64 or Linux x64. If they use all 8GB happily (and I am fairly certain they will), there you have it - OS X is the problem. If not, then it would be the chipset and/or firmware.
 
Mac Pro Build

I know iFixit has been silent for some time - but just throwing it out here -

It is very much possible that the OS X build that is on the MBP has hard-coded limits in the kernel that specify the maximum amount of memory to use, and/or there are other parameters that are tuned for operation with <= 4GB RAM.

If one takes the Leopard DVD that comes with a recent Mac Pro and hacks it to install on the 8GB MacBook Pro, we might see that the Mac Pro OS X kernel build works fine (i.e., uses all 8GB, as it is very likely tuned for more than 4GB of RAM). There are script checks for model numbers in the DVD/installer which prevent the Leopard build shipped with, say, a Mac Pro from being installed on a later-generation MacBook Pro, but those are very easily subverted - after copying the installer to, say, a USB or FireWire disk, you just have to enter the model # of the target Mac into an array.

I know this sounds like a lot of work to do but it will clear up or confirm the suspicion that the OS build is the problem.

Or better yet - ship me the machine with 8GB installed and I will do all the tests for you, iFixit! :)
 
One Build for ALL Intel Macs

If one takes a Leopard DVD that comes with a recent Mac Pro and hacks it ...

I know this sounds like a lot of work to do but it will clear up or confirm the suspicion that the OS build is the problem.

I've hacked the OS X Leopard (retail) DVD to install on an old mac that wasn't supported. It is a lot of work, and completely unnecessary for this current issue.

TARGET DISK MODE:
Take the new MBP or MB with 8 Gigs to a place with a Mac Pro. Boot up the Mac Pro holding "T" for target disk mode. Now connect the Mac Pro to the MBP with a FireWire cable. Boot the MBP holding "Option" to get a list of boot drives connected. Choose the drive with the FireWire icon and you're booting off a Mac Pro HDD. Macs have been able to do this since the last millennium.

COMMENT:
It could still be an EFI or driver issue, since the problem might not show up until the system actually tries to use the RAM above the supported 4GB limit, or above the seemingly functional 6GB test case.
 
Hmm... interesting. iFixit, thanks for going through the trouble.

There's way too much posted here already, esp. the heated debates about 32 vs 64 bits. WTF - it should be obvious that the OS is not the problem.

That leaves
- Weird motherboard issues
- EFI troubles

The latter is the most likely cause. I wouldn't rule out that peripheral devices map into the address space above 6 or 7GB either - who knows what would happen if you try to address those regions.

So even if it's in the EFI - as most of us assume - it doesn't mean it's fixable via a firmware update.

I do hope that 6GB is confirmed as working, and maybe also officially supported at some point. I find 4GB barely enough and would much rather have more.
 
How the heck people are repeatedly claiming "it's EFI, not the OS" while ignoring the fact that the OS *sees* all 8GB but refuses to use it, and without knowing how other OSes behave with 8GB, is beyond me. If the OS sees it all, it is not the firmware - unless you also happen to have an explanation for how that could happen.
 
How the heck people are repeatedly claiming "it's EFI, not the OS" while ignoring the fact that the OS *sees* all 8GB but refuses to use it, and without knowing how other OSes behave with 8GB, is beyond me. If the OS sees it all, it is not the firmware - unless you also happen to have an explanation for how that could happen.

Wasn't there an article a while back about either Windows or PCs (I forget if it was hardware or software) being altered to 'see' more RAM than they could use? There was some talk of a lawsuit, as high-end gamers had been effectively tricked into paying for RAM that was not usable. It was something to do with the VRAM having to be subtracted from the total real usable RAM. I only mention this in light of the comments many have made equating 'seeing' with 'usable'.
 
I've hacked the OS X Leopard (retail) DVD to install on an old mac that wasn't supported. It is a lot of work, and completely unnecessary for this current issue.

TARGET DISK MODE:
Take the new MBP or MB with 8 Gigs to a place with a Mac Pro. Boot up the Mac Pro holding "T" for target disk mode. Now connect the Mac Pro to the MBP with a FireWire cable. Boot the MBP holding "Option" to get a list of boot drives connected. Choose the drive with the FireWire icon and you're booting off a Mac Pro HDD. Macs have been able to do this since the last millennium.

COMMENT:
It could still be an EFI or driver issue, since the problem might not show up until the system actually tries to use the RAM above the supported 4GB limit, or above the seemingly functional 6GB test case.

Didn't I read there is no FireWire on the new MB?
 
Wasn't there an article a while back about either Windows or PCs (I forget if it was hardware or software) being altered to 'see' more RAM than they could use? There was some talk of a lawsuit, as high-end gamers had been effectively tricked into paying for RAM that was not usable. It was something to do with the VRAM having to be subtracted from the total real usable RAM. I only mention this in light of the comments many have made equating 'seeing' with 'usable'.

Right - Vista 32-bit edition has been altered so that if a PC has 4GB RAM it will show 4GB in system information but will only use 3.x GB, as the BIOS maps devices into the address space that the lost RAM would otherwise occupy.
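The arithmetic behind that "3.x GB" figure, with hypothetical but typically-sized device reservations below the 4GB line (the exact sizes vary per machine; these are placeholders):

```python
# Physical address space claimed by devices below 4GB on a typical 32-bit PC.
GiB = 2 ** 30
MiB = 2 ** 20

installed_ram = 4 * GiB
mmio_reservations = {
    "graphics card aperture": 512 * MiB,  # placeholder size
    "PCI/PCIe MMIO window":   256 * MiB,  # placeholder size
    "chipset/firmware/APIC":  128 * MiB,  # placeholder size
}

usable = installed_ram - sum(mmio_reservations.values())
print(f"usable RAM: {usable / GiB:.2f} GiB")  # -> usable RAM: 3.12 GiB
```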

In this case, however, as someone pointed out, the devices are not the problem - they are mapped above 4G, and they clearly do not require 4GB of address space - so if this theory applied, we would have seen, say, 8GB minus a few hundred MB usable, which is not the case.
 
Tired of all the Apple limitations, I have the feeling we are going back to 1990s Apple computing. Still, I like working on OS X, even if it's slower than Windows, and I like the design of Apple laptops better. But Apple's technology is getting very old, and design might not be enough to justify another expensive purchase of crippled technology. I was looking at Lenovo's top-of-the-line notebooks. Those things are insanely superior to MacBook Pros. They have amazing specs and features, even integrated Wacom tablets and color calibration. And you wouldn't believe what level of graphics and processing they have - Blu-ray storage, internal RAID drives... the list of differences is infinite. But they are ugly.
 
But Apple's technology is getting very old, and design might not be enough to justify another expensive purchase of crippled technology.

:confused:

How is Apple's "technology... getting very old?" The chipset in the new MB and MBP is bleeding edge, and the chips themselves are the fastest that have the needed heat dissipation characteristics. The next OS in the pipeline contains multiprocessing goodies (Grand Central, OpenCL) which haven't been incorporated into any other OS.

Whatever the cause for the 8GB RAM limitation, it's not likely "old technology." More probably it's some bug or edge condition which Apple didn't find it necessary to test and fix, because the market for 8GB laptops (while it includes you and me) is small.

I was looking at Lenovo's top-of-the-line notebooks. Those things are insanely superior to MacBook Pros. They have amazing specs and features, even integrated Wacom tablets and color calibration. And you wouldn't believe what level of graphics and processing they have - Blu-ray storage, internal RAID drives... the list of differences is infinite. But they are ugly.

They are not strictly comparable. They're bigger and heavier than the Mac laptops of equivalent sizes. Like it or not, we're not going to see a 9-lb, 2" thick 17" laptop or 7-lb 15" laptop from Apple... that's not how Steve rolls.

Also, the ThinkPad workstations are considerably more expensive than MacBook Pros.

Nevertheless, it would be good if niche machines like that were available with OS X. I've maintained for a long time that Apple needs to VERY selectively license OS X for sale only on computers in segments (such as monster laptops, true HTPCs, or high-end SFF desktops) where Apple doesn't intend to compete.
 
How is Apple's "technology... getting very old?"

In a sense he is right. Apple in 2008 does not have a true 64-bit OS, and they were not even able to port their own handful of drivers to 64-bit while the whole world went truly 64-bit some time ago. So yes, it is still old technology with lots of limitations.

Look at OEM driver support for OS X - the situation is abysmal. (Try buying an ExpressCard SATA adapter you like and see for yourself - either there isn't a driver for OS X, or if there is, it is wildly unstable, so don't get your hopes up.) Look at the performance problems.

You talk about licensing OS X to PC manufacturers, but I want to point out that it takes a lot more capability than just licensing to sell an operating system and support it in the wild open marketplace - Microsoft goes to great lengths to ensure you get stable, performing drivers for all your devices, be it 32-bit Vista or 64-bit. (WHQL - it pays off.)

I find it very limiting at the least if not old.
 
You talk about licensing OS X to PC manufacturers, but I want to point out that it takes a lot more capability than just licensing to sell an operating system and support it in the wild open marketplace - Microsoft goes to great lengths to ensure you get stable, performing drivers for all your devices, be it 32-bit Vista or 64-bit. (WHQL - it pays off.)

I find it very limiting at the least if not old.
There are only 3 main players for video and CPU chipsets. They all use common drivers, and they can make their own ones as well. Apple could also add more basic common drivers for fallback modes, like VESA/SVGA, basic SATA and IDE, and so on. Linux has a lot of that built into the Linux kernel.
 
There are only 3 main players for video and CPU chipsets. They all use common drivers, and they can make their own ones as well. Apple could also add more basic common drivers for fallback modes, like VESA/SVGA, basic SATA and IDE, and so on. Linux has a lot of that built into the Linux kernel.

It's the peripherals that matter most, though - wireless cards, Bluetooth headsets, sound cards, SAS controllers, and so on and so forth. And even for graphics chips, look at how much effort Microsoft and NVIDIA had to invest to get stable graphics drivers. It is easy when you control the hardware and the software, but it becomes a significant challenge in the PC market, where not even all the software is controlled by one company.

If you look at how good Vista has become at automatically downloading and installing drivers for all the common and uncommon devices - that's a lot of work to do with lots of OEMs.

People sort of expect that whatever junk they insert into their PC just works - if graphics card drivers suck under Vista for playing games, they will just dump the OS, for example, and go back to XP. This arguably was not Microsoft's fault - the GPU driver writers were at fault - but nevertheless, since it hurts MSFT sales, they had to invest a lot of effort to control the situation.
 
You talk about licensing OS X to PC manufacturers, but I want to point out that it takes a lot more capability than just licensing to sell an operating system and support it in the wild open marketplace - Microsoft goes to great lengths to ensure you get stable, performing drivers for all your devices, be it 32-bit Vista or 64-bit. (WHQL - it pays off.)

Apple should license to a few manufacturers in relatively small volume, providing them with a strict HCL and requiring them to adhere to it as a condition of the license. It should not license broadly or without careful attention to the hardware involved. The point here is to allow OS X to penetrate markets it currently cannot because it makes no sense for a volume, cost-centered manufacturer like Apple to develop and sell niche hardware.

The hackintosh community has shown that, even without any extra drivers, a fair cross-section of hardware works. Apple could write a few new drivers to cover the most commonly requested items. Users upgrading hardware would have to ensure, just as they do now, that such upgrade hardware was OS X compatible.

Such a scenario wouldn't be quite as seamless as Apple hardware is today, but it would 1) be marketed only at geeks and pros, and 2) offer those influential users choices they need to stay with OS X.
 
Apple should license to a few manufacturers in relatively small volume, providing them with a strict HCL and requiring them to adhere to it as a condition of the license. It should not license broadly or without careful attention to the hardware involved. The point here is to allow OS X to penetrate markets it currently cannot because it makes no sense for a volume, cost-centered manufacturer like Apple to develop and sell niche hardware.

The hackintosh community has shown that, even without any extra drivers, a fair cross-section of hardware works. Apple could write a few new drivers to cover the most commonly requested items. Users upgrading hardware would have to ensure, just as they do now, that such upgrade hardware was OS X compatible.

Such a scenario wouldn't be quite as seamless as Apple hardware is today, but it would 1) be marketed only at geeks and pros, and 2) offer those influential users choices they need to stay with OS X.
More hardware would work if they removed some of the locks built in and got rid of the PCI ID locks.
 