Exactly - finding supported hardware for Solaris has been a lot of trouble, and it's just not that well tested even on newly supported hardware - and then there is zero support. I am done fighting for drivers. Linux is mostly painless from that standpoint nowadays, and if anything goes wrong the mailing lists are reasonably helpful.

Don't listen to them, go with FreeBSD. The latest ZFS v13 stuff in 8 & RELENG_7 is very nice. :) Even if you just use the older stuff in 7.2-R you'll be fine.
 
Well, who could trust Sun anyhow? Remember Scott McNealy's "all the wood behind one arrow" and Sun Solaris's adoption of NeXT's OpenStep?! What did Sun do? Eliminate all mention of it a year or two later and launch a competing technology -- Java.

Read:
http://it.slashdot.org/comments.pl?sid=183967&cid=15193666
Isn't OpenStep a programming API whereas Java is a language? As I understood it, when Scott talked about "...behind one arrow" he meant that Sun should stay focused on its core business (and maybe outsource other things).
 
Isn't OpenStep a programming API whereas Java is a language? As I understood it, when Scott talked about "...behind one arrow" he meant that Sun should stay focused on its core business (and maybe outsource other things).

Sun paid NeXT $25 million [back in 1993] and they jointly developed the OpenStep API, designed using Objective-C, thus moving the NX API calls in NeXTstep 3.3 to the NS API calls in OpenStep 4-4.2.

OpenStep became more than just NeXTstep and extended to include EOF, not to mention improved versions of PDO; later, WebObjects was born out of the ease of use of Foundation Kit/AppKit--also leveraging Objective-C.

Sun wanted all the money from the Hardware and part of the money for the Software.

Sun actually ported OpenStep to Solaris, with all apps being Objective-C.

Oak was in development in Sun labs at the time and borrowed extensively from Objective-C and C++, later being spun out as Java.

Sun didn't understand that the OS was the draw to drive the sales, not the other way around.

It became a pissing contest between Tevanian + Jobs vs. Scott McNealy + Bill Joy and others.

The partnership was severed after the spec was released.

Fun times discussing the results with fellow NeXT employees.
 
Sun paid NeXT $25 million [back in 1993] and they jointly developed the OpenStep API, designed using Objective-C, thus moving the NX API calls in NeXTstep 3.3 to the NS API calls in OpenStep 4-4.2.

Sun also bought Lighthouse which had an impressive set of developer tools for OpenStep.

Sadly, Jonathan Schwartz also came along with the Lighthouse acquisition and ran Sun into the ground.
 
yeah, and bluray too. moving on.

What about things like opengl too? Moving backwards? :)

Seems like a decent OS, it's stable - but for a release that is trying to focus on optimization it's quite surprising to see such serious regressions 'slip' through, first revision or not...


I don't really care much anyway any more; I feel (know) there are better alternatives (for my needs?), and the 'new' Apple irritates me far too much - they're just a massive marketing machine.
 
NetApp is suing Sun for patent infringement over ZFS. DTrace can be turned off - a file system, not so easily.

So that is exactly backwards as to who is asking for indemnification. Apple would be asking Sun to cover any awards to NetApp.


Until the dust settles on that, Apple probably doesn't want to put 10-20 million copies of something out there that has a significant chance of losing in a patent battle. If that turned out to be a $10-20 per copy penalty for the violation, that is a big chunk of money.

Furthermore, because Sun is in limbo, there is probably no one at Sun right now who could sign Sun up for such a huge liability. Selling yourself to another company and taking on a huge liability in the middle of that long transaction usually leads to trouble (getting sued and/or it costing more money), or at the very least is political suicide (being tagged as someone who saddles the company with risks... not going to make the "keeper" list in the merger process).

Even if it was in the direction you stated: Sun wanted indemnification for changes Apple made to ZFS to jam it into OS X, changes that for some reason Sun was going to merge back into the core ZFS code tree.


There are also technical issues you are sweeping under the rug.

ZFS changes the way the kernel goes about dealing with the "file system". It isn't just a file system; it is a volume manager merged with a file system. That is anti-Linux design philosophy. There is no way Linus would give this a thumbs up even if it had a GPL license on it.
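(Rough sketch of what "volume manager merged with a file system" means in practice -- the pool and device names below are made up:

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # one command builds the redundant pool (the volume-manager half)
zfs create tank/home                           # file systems are then carved out of the pool (the file-system half)
zfs create tank/home/alice                     # no separate partitioning or mkfs step in between

That "one tool owns the whole stack" model is exactly what clashes with the Linux layering philosophy.)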

[ Besides, Oracle "liked" Linux in part because Oracle didn't have an operating system of its own. Now that they have Solaris... it is dubious to "give away" one of the crown jewels of Solaris, ZFS, to Linux. Putting people on Solaris is a better opportunity for Oracle to make money than putting them on Linux. In selling support contracts for OpenSolaris (or the non-open version), Oracle is the #1 player. In Linux, they are a much smaller player than Red Hat.

Besides, when has Oracle taken a software product and slapped a GPL license on it? There was stuff that was already GPL, or de facto had to be GPL'd (core Linux additions). But name something Oracle has injected into the GPL space when it didn't "have to".

The GPL was/is useful in reducing the price of what have largely, historically, been complementary offerings. If people pay less for the OS and server hardware, they can pay MORE for Oracle stuff. Reduction of the cost of complements.

Now that they will be in the OS and hardware game, why accelerate knocking down complements?

Even more so in that Solaris/ZFS is a critical enabler of Sun's universal storage devices. Those bring in cold hard cash. When was the last time you saw Oracle walk away from cash?
]



Likewise, squeezing this into the Mac OS kernel would have issues, both in user experience (people just unplugging disks) and in weaving those mods in with all of the other Snow Leopard updates in flight.

ZFS made much more sense when Apple still had the XRaid and might have been inclined to expand past just 1U servers. With Apple backtracking on the XRaid, having XSan, seeing Time Machine succeed (in the consumer space), and barely treading water on servers..... the biggest "bang for the buck" for ZFS is not there, especially in the short term.
 
I figured this would happen... :\ If only Apple had bought Sun.

Yes, I never thought about Apple buying Sun. But at the same time, I'm not sure it would make much sense.

Apple surely doesn't make its big bucks from Xserves... the big money is in the consumer devices... iPhones, iPods, MacBook Pros, and even mainstream Macs (such as the iMac and Mac Pro). Is the OS X Server base really that large? I doubt it, compared to Sun Microsystems. Does Apple really want to concentrate on the server market? Probably not.

As much of a geek as I can be sometimes - tinkering, coding (and my computer science degree)... I love the simplicity of things that just work in OS X. I have fixed more Windows boxes for people I know than I can remember. At the same time as I enjoy this about OS X, something is taken away from me - I don't know exactly how it's working underneath, or the exact details of its configuration, some of the time.

Whilst Apple should be praised for providing a way for the average computer user to set up web servers, DNS servers... I still get visions of graphic designers administering SANs using Mac OS X Server.

I do have ZFS read/write working in 10.6 retail. It works well. But it's obviously not for everyone.

ZFS, which is perhaps the most advanced modern filesystem in existence, written by some very, very clever brains, should surely be used by people who understand exactly what they are doing and why they are doing it. The options and features of ZFS are large and powerful. You aren't going to want to expose this feature set to your average Mac user.
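Just to give a flavour of that feature surface (the dataset names here are only examples):

zfs set compression=on tank/home      # transparent per-dataset compression
zfs set quota=10G tank/home/alice     # per-dataset space quota
zfs set copies=2 tank/documents       # keep extra copies of every block
zfs get all tank/home                 # dozens of tunable properties per dataset

Not exactly the sort of dialog you want to put in front of someone who just wants their Mac to work.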

Unfortunately, with ZFS, I see it as the operating system not being able to make the "right choices" for the user and the user themselves not being smart enough to know which option to choose when asked.

It would make more sense, I think, for Apple to expose small inherent advantages of ZFS to the consumer. For example, let's have all Time Machine backup disks using ZFS. That way the user won't know we are using ZFS snapshots... and if we run out of space we can have OS X ask for another disk and we'll pool them together in the background. 10.7, watch this space....
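Under the hood that could be as simple as something like this (pool and disk names are made up):

zpool create tm /dev/disk2                # the first backup disk becomes the pool
zfs create tm/backups                     # dataset that holds the backup data
zfs snapshot tm/backups@2009-09-12-1800   # each "backup" is just a cheap snapshot
zfs list -t snapshot                      # browsing the backup history
zpool add tm /dev/disk3                   # out of space? quietly grow the pool with another disk

The user never needs to see any of it; Time Machine would just present the snapshots as dated backups.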
 
... that Oracle's pending acquisition of Sun could open the door to reconsideration of ZFS licensing issues.

If one takes Oracle into the picture, then there is a chance that Btrfs (butter fs) might make it into Mac OS X. I once read that the ZFS authors find the Btrfs architecture much better. ZFS is relatively old and a few of its design decisions backfired. Btrfs, sponsored by Oracle and targeted mainly at Linux, builds on that experience and doesn't have those problems.

Also, though at the moment it doesn't make any huge difference, Btrfs has optimizations targeted at SSD.

Considering that Btrfs is at the moment in a phase of active development, Apple might get a chance to influence it to better suit its needs.

In other words, if Oracle gets involved, the situation would only change for the better. Either ZFS or Btrfs, it's going to be a huge step forward.
 
Where have you all been?

Not sure where you all have been. I've been using ZFS for >1 year on my Leopard installation. There have been some stability issues but I have not lost any data. I started researching after my iTunes library became corrupted due to some bad memory. Just did a fresh install of Leopard and ZFS on my G5 and I'm not having any issues. :apple:
 
If one takes Oracle into the picture, then there is a chance that Btrfs (butter fs) might make it into Mac OS X. I once read that the ZFS authors find the Btrfs architecture much better.

No way.
There is no way Apple is going to put GPL code into the core of Mac OS X. It will never happen. Maybe not impossible... but short of an event like Stallman becoming the richest person on the planet, buying Apple, and open-sourcing OS X... not very likely.

Companies like Apple avoid the GPL whenever they can. Note how Apple is moving off of gcc now that it has moved to GPLv3. That's just the compiler. There's no way they'd tolerate that in the core.


Never mind the fact that btrfs is Linux-kernel only (actually almost x86-kernel only, with limited big/little-endian support). A major objective of btrfs is to be compatible with ext3. Outside of Linux, that isn't a big plus.



Finally, what folks are missing at this point is that Oracle doesn't even "own" btrfs. The contributing developers span multiple companies, and it is on track to be mainlined as one of the major default file systems once it matures. Oracle is capable of walking and chewing gum at the same time. It will have Oracle DB and MySQL. All the folks who insist it can only be "ZFS" or "btrfs", "Solaris" or "Linux", "Oracle DB" or "MySQL" are missing the point. Oracle is now at the IBM-like stage where it has multiple, sometimes conflicting, irons in the fire. As long as they are all making money, that is what matters most.

Also, though at the moment it doesn't make any huge difference, Btrfs has optimizations targeted at SSD.
Errr, ZFS already takes advantage of flash (http://blogs.sun.com/studler/entry/open_flash) in commercially deployed, production storage servers... which does make a significant difference, both in terms of money earned and performance.
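For example, an SSD can already be attached to an existing pool either as a read cache (L2ARC) or as a separate intent-log device (the device names below are placeholders):

zpool add tank cache c3t0d0   # SSD as an L2ARC read cache
zpool add tank log c3t1d0     # SSD as a separate ZIL (intent log) device
zpool status tank             # the cache and log vdevs show up alongside the regular disks

That is the basis of the hybrid flash/disk setup in those production storage servers.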
 
No way.
There is no way Apple is going to put GPL code into the core of Mac OS X. It will never happen. Maybe not impossible... but short of an event like Stallman becoming the richest person on the planet, buying Apple, and open-sourcing OS X... not very likely.

Companies like Apple avoid the GPL whenever they can. Note how Apple is moving off of gcc now that it has moved to GPLv3. That's just the compiler. There's no way they'd tolerate that in the core.

[...]
The OSS NTFS driver is GPLv2 and included in the OS X kernel. They don't seem to be *that* averse to GPLv2.

And they aren't exactly moving away from GCC for license reasons. They have complained about the GCC development process and its integration with Apple's fork, and more importantly they needed a better compiler with maintainable sources - which LLVM is (modular and modern). Then they had qualms about the quality of the error checking and reporting gcc does, which impacts Xcode - and thus clang was born.
 
Apple is getting out of the computer business after they introduce their line of iTVs and iKeyboards. :rolleyes: It's all over folks. Quit waiting for a full range of updated ACDs in different sizes or an affordable quad core tower. Ain't gonna happen.
 
The OSS NTFS driver is GPLv2 and included in the OS X kernel. They don't seem to be *that* averse to GPLv2.

When you can natively boot off of a GPL kernel extension, it will be interesting to see how Apple keeps their boot code proprietary at that point.

This is a play-with-fire situation where this could be dumped if they ever got challenged on it. They'd be in deep manure if there were Core Foundation class code that imported any of those *.h files.

Someone is trying to treat this code as though it were LGPL'ed. It is not. If this is some I/O driver that is loaded into user mode/space, maybe. (I did a quick scan of the mount and some of the utils source code... hard to tell.) If this is loaded as a shared library in the kernel address space, I would bet a small sum that someone has not explained what they did to Apple's IP lawyers.

Not sure if it was FUSE, or if there is a way with I/O Kit to run user-mode file systems and/or drivers here: http://2004.eurobsdcon.org/uploads/media/EBSD04_keynote.pdf


In short, if it is run as a driver up in user mode, it really isn't in the kernel (and it is going to be heck of hard to ever boot off it).



The real problem is not necessarily adding something to VFS, but incorporating it into the same address space that the code from the foundation classes gets loaded into. The OS stripped of what really makes it OS X (just the core non-GUI Darwin stuff) is already open, so this partially skates by as "no harm, no foul" and no one presses them on it.





And they aren't exactly moving away from GCC for license reasons.

No, it isn't the sole reason, but it is awfully coincidental that Apple's gcc stopped exactly where gcc flipped to GPLv3 (http://gcc.gnu.org/ml/gcc-announce/2007/msg00003.html). Apple ships gcc 4.2.1 (released on July 18, 2007); 4.2.2 came out on Oct. 7, 2007, and Leopard shipped on Oct. 26, 2007. It could have been stability (a stable gcc at the time Leopard went to manufacturing).

Apple's released gcc is 10 releases back from where gcc is right now.




They have complained about the GCC development process and its integration with Apple's fork, and more importantly they needed a better compiler with maintainable sources - which LLVM is (modular and modern). Then they had qualms about the quality of the error checking and reporting gcc does, which impacts Xcode - and thus clang was born.

Apple's fork grew out of gcc not taking patches or Apple not really offering them up??

http://www.informit.com/articles/article.aspx?p=1390172&rll=1
The C compiler in use at the time was GCC, which was released under the GPL. To avoid the restrictions of this code, NeXT provided its front end as a library that end users would link against GCC, thereby avoiding the GPL (which only applies to distribution of the software, not how you use it). This little legal maneuver didn't work, however, so NeXT was forced to release the code.


Yeah, Apple is playing somewhat better now, but one factor that impedes their ability to get stuff mainlined is this kind of track record.
However, yes, gcc has gone through some periods where it was a pain. The current gcc is from a fork, egcs, started when folks didn't like what was happening on the mainline.
LLVM is vastly more modular.
 
I think it is Apple's shunning of Sun ever since Sun's Jonathan Schwartz said, on June 6, 2007 and before Apple had announced anything, that ZFS was coming to the Mac:

"In fact, this week you'll see that Apple is announcing at their Worldwide Developer Conference that ZFS has become the file system in Mac OS 10."

That would have earned a shun not just because he pre-announced, but because what he pre-announced wasn't even true.

If it had been "ZFS has become an optional file system in Mac OS X", that might have been merely a misstep. ZFS on Mac OS X still isn't really ready for production, and it was even less so when he threw that zinger out there. "...the file system..." has the connotation that it is the primary file system. It showed no understanding of what the situation actually was at that time (never mind stepping on the "Steve show" by telegraphing the content).
 
Sun gave away NFS and it became a big well supported standard. It then disregarded this lesson in its attempts at licensing NeWS. It appears Sun is trying to repeat the NeWS fiasco.
 
NOT a license problem

This article is poorly informed, and Robin Harris needs to do a bit more digging before presenting suppositions as truth.

If Apple's main reason for giving ZFS the cold shoulder were the type of open source license the ZFS code base is available under, they would also not have included DTrace starting with OS X 10.5. DTrace, just like ZFS and the rest of the OpenSolaris core code, is available under the very same CDDL license.

To recap:

DTrace, from the same OpenSolaris codebase as ZFS, has been in Mac OS X since 10.5. It is licensed under the CDDL.

ZFS, from the same OpenSolaris codebase as DTrace, is licensed under the CDDL.

Apple's own APSL open source license, under which it releases Darwin et al, is arguably more compatible with the CDDL in spirit and in what it demands than it is with the GPL.

/dale
 
That's backwards. ZFS makes less sense with XRaid. ZFS is designed for JBODs, not hardware RAIDs.

Not completely true. ZFS will work fine on anything that's a block device, even one presented by a RAID array such as an XRaid.

However, because of how caching RAID arrays work, you will lose some of the benefits of ZFS's data error handling, because the RAID array, as a consequence of what it does, abstracts the on-disk data out of ZFS's sight. You'll still get ZFS's error detection; ZFS just won't be able to reliably correct that error as it could with a naked hard drive, where it knows exactly where the errant block of data lives.
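As a rough illustration (the pool and device names are invented): with raw disks ZFS holds the redundancy itself and can repair a bad block, whereas with one big LUN from the RAID controller it can only report the damage.

zpool create tank mirror /dev/disk2 /dev/disk3   # ZFS owns the redundancy: it can detect *and* self-heal
zpool create tank2 /dev/disk4                    # one big LUN from the RAID box: detect only
zpool scrub tank                                 # walks every checksum, repairing bad blocks from the mirror copy
zpool status -v tank2                            # reports checksum errors (and affected files) it could not repair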

/dale
 
Hmm. My gut feeling is that if Apple wants to switch to a new file system, it is going to want to run that FS on all its products/platforms.
Which would include, of course, the iPhone.

I'm not a ZFS expert but is ZFS really practical on a flash storage device with a crappy slow CPU like the iPhone has?
 
Marketers often ask the question "would you prefer 100% of nothing or 40% of something huge?"

Stupid question here, if ZFS is GPLed and Apple is able to use it freely, what is Sun supposed to be getting 40% of?
 