Last note: Why have we only heard talk of full ZFS support in "Snow Leopard Server" and not "Snow Leopard"??? Would Apple really put it in only server and not both??? :(

I would think that most of the Macs Apple sells really wouldn't be able to take advantage of ZFS's main strength, pooled storage (zpools), since they typically ship with a single internal drive.
 
Game developers tend not to build true native applications for any platform, so unless the various engines pick up these features it's not looking good for games.

As has been said, there's been no official drop of PPC yet. Indeed, there's no reason to believe that just because the Developer Preview doesn't yet support PPC, it won't in the future.

It all depends on what they are really doing under the hood. If it's an updated compiler, like full use of LLVM as suggested, maybe the compiler that targets the virtual machine is Intel-only, or the Intel build is simply the only one ready for developers to use.

Hey, the runtime engines which take the intermediate representation (IR) down to the real hardware are already working in Leopard, so why not keep moving them forward?

Hey, if Snow Leopard really delivers on performance improvements, it's just as likely to have lower, not higher, system requirements.

If they want a fresh start and to clean the OS of all its unnecessary code, they should definitely drop PPC. The code would be much lighter, applications too if they're coded only for Intel, and the OS could finally be clean and use all the features of the x86 platform.

Most applications now are universal or Intel-only, so it won't be a problem to drop PPC. The only issue is people with PPC Macs, but I'd rather have that than being laid back and late on technology. It's time for Apple to move forward into the future.

It's time to update your Mac.
 
I'm seeing this as a new step towards a non-"OS X", maybe OS 11, but I can't think of what horrible name they could give it to match MobileMe and... Snow Leopard. Did the guy in charge of naming products at Apple quit or something?
:D

I can imagine it now... The new Mac OS will be called: Groovy OS 11.0
 
Okay, do you honestly think that Microsoft has been sitting around redesigning the user interface since the days of 3.0? They have continued to refine the kernel and are definitely concerned with performance.

http://www.microsoft.com/whdc/system/vista/kernel-en.mspx

Also, ever hear of MinWin? Microsoft cut down their operating system to 40MB and it even had a webserver, just to demonstrate that Windows at the core isn't bloated. The point: if you want to forgo backwards compatibility and drivers for every component under the sun, you'll have an operating system like OSX (tied to hardware and the fear you might not be able to run the next OS on your fancy G5).


Have you looked at it? And do you know what they have planned for that kernel?

1) They did not cut their operating system down to 40MB; they cut their kernel down. It can boot to a command prompt, and the only graphics are ASCII graphics. They can start a web server, but nothing fancy: it only says "I'm here."

And if you get DSL (Damn Small Linux), you get support for most of your hardware and a graphical interface, and it's only 50MB.

So I'm not impressed with MinWin.


Anyone know how big the kernel for the iPhone is?


2) Where do they plan to use MinWin?
MS has said that Windows 7 will use an optimized Vista kernel, not MinWin.

So what's the point of saying, "Look how cool this thing we're making is, and it will be out in about 10 years"?


There's also an MS Windows 7 tester who said something like, "If you like the interface on the iPhone, then you will be blown away by Windows 7."
Translated: "If you like what you see in one-year-old technology, then wait and see what we have in four years."

Apple says what it will have in a year, and it tends to keep to that.
MS says what it wants to have in 4-5 years, and doesn't tend to keep to that.
 

This is funny. It's interesting how the "Me" in MobileMe and the "Me" in Windows Me are almost identical.

But I think we all know that Apple is going to come up with something a little better than Me. MobileMe just refers to your personal files. Snow Leopard might end up being the name, which is perfectly fine with me.

Bring on the snow leopard!
 
It's good to see Apple trying to tackle THE big problem in computing today. I wish all these announcements weren't so light on details, but I suppose eventually we'll find out more.

The way I see Grand Central: It may or may not be the right approach to handling multiple cores. There are lots of really smart people working on the problem, and they don't all agree with each other.

But OpenCL is undoubtedly the right technical route. I hope Apple does really make it an open standard (and it gains traction, and it turns out to be well-designed) because having multiple incompatible implementations from different hardware companies sucks big time. And multiple incompatible implementations for different OS platforms sucks only slightly less. Use of graphics cards for computation will never take off in the mainstream as long as this is the situation. But the payoff for using them will be huge, so eventually it'll get done...I hope.
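For anyone curious what that kind of GPU code actually looks like: an OpenCL-style kernel is basically a small C function that runs once per data element, with the host program handing it arrays and a launch size. A rough sketch only, since the spec isn't final yet, so take the exact names with a grain of salt:

/* Data-parallel kernel: each work-item handles one array index.
   Host code (not shown) compiles this source, copies a/b/c to the
   device, and launches one work-item per element. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *c)
{
    int i = get_global_id(0);   /* index of this work-item */
    c[i] = a[i] + b[i];
}

The host-side setup (picking a device, compiling the kernel, copying buffers) is the tedious part; the appeal is that the per-element code stays plain C.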
 
pthreads are ridiculously heavyweight (at least half a meg of allocation each), as well as being difficult to use safely. Generally it makes a lot more sense to have a pool of threads available and something to manage them and dispatch work to them... ;)
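To make that concrete, here's a bare-bones sketch of the pool-and-dispatch idea in plain pthreads. All names are made up, and there's no shutdown or error handling; the point is just that threads get created once and cheap work items get queued to them, instead of paying for a pthread_create per task:

#include <pthread.h>
#include <stdlib.h>

/* A queued work item: a function to run plus its argument. */
typedef struct work {
    void (*fn)(void *);
    void *arg;
    struct work *next;
} work_t;

static work_t *pending = NULL;                 /* simple list of queued work */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  pool_wake = PTHREAD_COND_INITIALIZER;

/* Each pool thread loops here waiting for something to do. */
static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&pool_lock);
        while (pending == NULL)
            pthread_cond_wait(&pool_wake, &pool_lock);
        work_t *w = pending;
        pending = w->next;
        pthread_mutex_unlock(&pool_lock);

        w->fn(w->arg);          /* run the task outside the lock */
        free(w);
    }
    return NULL;
}

/* Create N long-lived threads once, up front. */
void pool_start(int nthreads)
{
    for (int i = 0; i < nthreads; i++) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_detach(t);
    }
}

/* Queueing a task is a malloc and a condition signal --
   far cheaper than creating a new pthread per task. */
void pool_dispatch(void (*fn)(void *), void *arg)
{
    work_t *w = malloc(sizeof *w);
    w->fn = fn;
    w->arg = arg;
    pthread_mutex_lock(&pool_lock);
    w->next = pending;
    pending = w;
    pthread_mutex_unlock(&pool_lock);
    pthread_cond_signal(&pool_wake);
}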

I'm not sure how GC is supposed to replace real threading. A multi-threaded program must be designed from the ground up to be multi-threaded (look up threads/semaphores/mutexes). Adding GC in the mix is not suddenly going to make a particular program multi-threaded. From what I'm reading GC is a totally different technology that will work to more evenly spread processes across various processor cores. Right now, I'm sure one core is more heavily used than the other(s) so spreading around the core usage even when the system is not being taxed should yield some efficiency and performance gains.
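Right, and the part no framework can do for you is deciding what data is shared and where the locks go. Even the trivial case needs that design up front; a pthreads sketch, nothing more:

#include <pthread.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* If several threads ran this loop without the lock, increments would
   get lost; no scheduler or core-balancing layer can fix that -- the
   program itself has to be written for concurrency. */
void *bump(void *unused)
{
    (void)unused;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}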
 
They did not cut their operating system down to 40MB, they cut their kernel down. So I'm not impressed with MinWin.
Apple says what it will have in a year, and tends to keep to that; MS says what it wants to have in 4-5 years, and doesn't tend to.

I agree with you 200% :cool:
 
What I found interesting about Grand Central was how the method seems to correspond to something AMD was rumoured to be doing. I'm not sure how many people here took part in those discussions, but in the lead up to Conroe's launch, there were rumours that AMD was working on something called Reverse Hyperthreading whereby they could combine 2 processors to work on a single thread. The rumoured approach was supposed to be in hardware, so as to be transparent to existing software, and would divide up a single thread, probably based on pieces that had no dependency, and execute it on each available processor. Basically getting some multi-threaded benefits for single threaded apps.

Now this Grand Central seems to work on a similar philosophy, whereby a single thread is divided into packets for execution, except it'll be done in software instead of in hardware. If Grand Central can accomplish its task on its own, or just through a simple recompile, then it could be very useful in bringing single-threaded or dual-threaded applications into the multicore world. If it requires significant recoding, like going through everything to figure out and write in hints for the compiler to divide things up, then it might not be as worthwhile.
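For reference, "hints for the compiler to divide things up" already exists elsewhere as OpenMP: you mark a loop whose iterations don't depend on each other, and the runtime splits the range across however many cores it finds (compiled with an OpenMP-aware compiler, e.g. gcc -fopenmp). Something in this spirit, though whether Grand Central asks for anything like it is pure speculation:

/* Iterations are independent, so the runtime is free to hand
   each core a chunk of the index range. */
void scale(float *data, int n, float k)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        data[i] *= k;
}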

Reverse Hyperthreading turned out to be a false rumour, but it did lead to some interesting discussions. And I find it interesting that Grand Central seems to pick up on the same overarching theme. Maybe Apple took note of it and decided to figure out how to make it work without relying on the CPU makers?
 
I'm not sure how GC is supposed to replace real threading. A multi-threaded program must be designed from the ground up to be multi-threaded.

If I remember correctly, Apple did some threading in the core libraries for OpenGL and touted much-improved performance (multi-threaded queues?) on some benchmark or another. They may have taken that to the next level.

Thread pooling and separation of work is the only real way to get that kind of scalability on multiple multi-core CPUs. I'm curious how they think this will solve the big problem of writing thread-specific code. Perhaps by making it easier to use thread pooling?
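Pure speculation on what "easier thread pooling" might mean in practice: an API where you just hand the system a function and it worries about which core runs it. These names are entirely made up for illustration, not anything Apple has described:

/* Hypothetical work-queue API -- invented names, just to show the shape.
   The OS would own one pool sized to the machine; applications enqueue
   tasks instead of creating and managing pthreads themselves. */
typedef void (*task_fn)(void *context);

void gc_submit(task_fn fn, void *context);   /* run fn(context) on some available core */
void gc_wait_all(void);                      /* block until all submitted tasks finish */

static void resize_one(void *ctx)
{
    /* ... resize a single image ... */
    (void)ctx;
}

void resize_all(void **images, int count)
{
    for (int i = 0; i < count; i++)
        gc_submit(resize_one, images[i]);
    gc_wait_all();
}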

I don't see using the graphics processors doing much for general computing workloads. If you're number crunching, you'd probably see the big gains.

This sounds a little like the mesh computing they are doing at CERN and some other places. Another poster talked about integration with XGrid. That's a really interesting proposal, passing off multiple threads to multiple hosts.

Almost makes me wish I went to WWDC this year to find out more detail.
 
Apple will more than likely stick with Roman numerals. Next version: Mac OS XI.

I don't think they'll use Roman numerals. The only reason they used them for OS 10 is that the Roman numeral for ten looks like an X, and we all know Xs make technology sexy and more cutting-edge.

I'm more interested in what their naming scheme will be, since all the "big" cats are almost gone. The cat names are all really good; yes, I think Snow Leopard sounds perfectly fine.
 
I'm seeing this as a new step towards a none "OS X", maybe OS 11... did the guy in charge of naming products at Apple quit or something?

OK I'll Byte!--)) What's a "None OSX"? :eek:
 
I still wonder what will happen with software compatibility.
I have invested over $7500 in the software I use, and if it becomes worthless on new systems I will be one mad camper.

Don't get the new system until those apps are updated to work with 10.6.
 
NO. Please do some research before posting. Snow Leopard is for INTEL CPU'S ONLY. The way it SHOULD be.

This is wrong on two counts:

1. There is no reasonable evidence that Snow Leopard is for Intel CPUs only. Actually, the threads discussing this seem to be evenly divided between people with experience developing software, who are quite sure Snow Leopard will run on PowerPC, and fanboys who think it won't.

2. If Snow Leopard were to run on Intel CPUs only, it would cause considerable damage to Apple: first by needlessly giving up more than $100 million in software profits, second by setting a precedent that would sharply reduce the value of any new Macintosh, and third by throwing away twenty years of processor independence in the NextStep source code base, with lots of invisible costs that are hard to explain to a non-developer.

Obviously that will be hard to understand for someone who feels it necessary to call people "PPC whiners".

If they want a fresh start and to clean the OS of all its unnecessary code, they should definitely drop PPC; the code would be much lighter, and the OS could finally use all the features of the x86 platform.

So what makes you think that? What experience do you have at developing software? Please tell me. I'd really like to know. The code that I am working on is about 1 million lines of code, of which about 10 are specific to either PowerPC or Intel. That is 0.001%. How much code have you written, and how much could be saved if that code were to run on Intel only instead of Intel + PowerPC?
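For the curious, the kind of processor-specific lines being talked about are typically just endianness fixups behind an #ifdef, along these lines (illustrative only, not from that codebase):

#include <stdint.h>

/* A file format that stores 32-bit values little-endian: x86 can use
   them directly, PowerPC (big-endian) has to swap. This is roughly the
   whole extent of "architecture-specific" code in portable source. */
static uint32_t from_le32(uint32_t v)
{
#if defined(__BIG_ENDIAN__)
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
#else
    return v;
#endif
}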

Even though I own and love my PPC Mac, I think it's time to let go and make it full Intel. Just imagine the resources saved by programming for just one! Maybe it'll be on time! :eek:

Expected saving in programming time: Negative. Even without PowerPC, OS X would have to run on Intel 32 bit, Intel 64 bit and ARM. But what non-developers don't realize is that being forced to write portable code keeps the code quality up, which produces huge long-term savings.
 
I'm more interested in what their naming scheme will be, since all the "big" cats are almost gone.

All big cats gone? How about "NO" for that?
Lynx, Lion, Bobcat, Cougar, Golden Cat, Mountain Lion, Serval.
 
In the lead up to Conroe's launch, there were rumours that AMD was working on something called Reverse Hyperthreading whereby they could combine 2 processors to work on a single thread.

"Reverse Hyperthreading" has been completely debunked. It never existed, not at AMD or anywhere else.
 