I Knew It Was Coming But Not As Soon As Next Summer

I Knew It Was Coming But Not As Soon As Next Summer. Excellent. I'll take two x 3 GHz per core for a total of 12 GHz inside. Now that's what I call POWER. :D
 
guifa said:
Actually, all of you are a bit off. It should be, "If this news be correct." You're using the present subjunctive in English, which is the same form as the infinitive.
Well well. I suppose this is American English, which is, of course, improper. "If this news is correct..." is far more correct than "be correct". Where DID you go to school?
Oh, and where is the New Cube?

s.
 
strider42 said:
Apple may not have dual duals if this is indeed true. They may simply replace the duals with one of the dual-core processors. The advantage for Apple is that they can still keep Power Mac performance significantly above the iMac line, without the added cost a second separate processor incurs (all the connections for it plus the cost of the processor itself; presumably a single dual-core would be cheaper than two single cores). I rather suspect that is what Apple will do. They may call it a G6 or whatever just to further differentiate.

And it's much easier to keep one cool.
 
To me, the architecture changes are at least as compelling as the CPU clock rate increases. The only question is whether we'll see these changes in the next rev, with dual 3 GHz CPUs, or in the rev after that.

Let us hope for the next revision. I'm perfectly content to wait and see what effect six months of PCIe x86 motherboards in the field has. So far there don't seem to be many issues, but they are only just now shipping.

I will say I'm a bit intrigued by Nvidia's SLI technology. I can't wait to see the real-world benchmarks coming this fall. Two fast PCI Express GPUs strapped together working hard seems quite exciting.
 
Multimedia said:
I Knew It Was Coming But Not As Soon As Next Summer. Excellent. I'll take two x 3 GHz per core for a total of 12 GHz inside. Now that's what I call POWER. :D

I hope you're saving your pennies for this machine you describe that's coming out next summer!
 
multiple processors

OS X supports two logical processors. It's possible, from what I've read, that 10.4 will support four. The idea that it's easy to continue support up to sixteen, or higher, is wrong.

When NT first came out it supported two. It wasn't able to make a real push into the server space until Microsoft was able to support four processors. That took a lot of work, and didn't work well. In fact, support for four only worked well when they got the support to eight-way. Now it supports thirty two way, and works well up to sixteen.

Memory, cache coherency, as well as issues with threading and program allocation, among others, are not easily gotten around.

Most programs have one thread and only benefit from two processors if the OS can move system ops to one processor while keeping program usage on the other. Most programs won't benefit much from hyper-threading, and indeed, might suffer. Intel has found this to be a problem as well.

A program must be compiled for multiple threads, as well as multiple processors. The more threads and processors, the more difficult the problem becomes.

Very few programs will see a benefit from four way machines. If you are running more than one program, however, you might see an advantage if the OS is written properly.
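To make the point concrete, here is a minimal, hypothetical Java sketch (not any poster's actual code): a loop only benefits from a second CPU if the programmer explicitly splits the work across threads.

```java
// Hypothetical sketch: splitting a loop across worker threads so a
// multiprocessor machine can help at all. A single-threaded version of
// this loop would run on one CPU no matter how many are installed.
public class ParallelSum {
    // Sum of squares of 0..n-1, split across nThreads workers.
    static long sumSquares(int n, int nThreads) {
        long[] partial = new long[nThreads];
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                long s = 0;
                // Each worker takes a strided slice of the range.
                for (long i = id; i < n; i += nThreads) s += i * i;
                partial[id] = s;
            });
            workers[t].start();
        }
        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            try {
                workers[t].join();   // wait for each worker to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            total += partial[t];
        }
        return total;
    }

    public static void main(String[] args) {
        // Same answer regardless of thread count; only wall time changes.
        System.out.println(sumSquares(100000, 1));
        System.out.println(sumSquares(100000, 4));
    }
}
```

Whether this actually runs faster on a dual depends on the problem splitting cleanly, which is exactly the dispute later in the thread.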

There has been criticism of Apple's server offerings because there are no four-way systems. Businesses like to have options to move to higher-capacity systems, and so far, Apple doesn't offer any. I would assume that Apple would like to correct that.

IBM was talking about dual cores as far back as 2001. They were expecting to go there when switching to 65nm, but maybe now, as Intel and AMD seem to be doing, they will not wait. They also were going to an on board memory controller, but, from what I remember, the work to get Altivec on die stopped that work for the time being. I had understood that the on die controller was a requisite for the chip to be used in a portable. Perhaps that has changed.

Hyper-threading does not make a chip equivalent to a dual chip on die. There are certain operations that can be hyper-threaded, and those that cannot. Threads can be moved between chips or on chip, if more than one is used, but the chip (Xeon) has one cache and one line out to main memory. There have been dual on-die chips with one cache, but they have not worked well. When hyper-threading is turned on, some programs see two logical PROCESSES, but not two logical cpu's. This is complex to explain. Going to Intel's or IBM's sites will give a good explanation. Much better than I could do here in such limited space.
 
BrianKonarsMac said:
Apparently you are not familiar with IBM's PowerTune technology. They don't need to design a mobile processor if they can get it working correctly, since it will effectively be better than a separate mobile design: scalable from 1/64 of clock frequency to full frequency in 3 cycles. This will cut heat and power consumption drastically, because the processor will only be running hard when you're doing heavy computing, and will enter a deep nap mode in between, i.e. when you're typing, etc.
The old G3 chips had this, and it worked great. But Apple needs to get the software to work with it too. On my 400 MHz G3 Pismo under OS 9, I'd regularly get 4.5 hours per battery, with full screen brightness. Pop in two batteries and I was good to go for 9 hours :eek:!! However, as soon as I loaded OS X on the same machine, battery life dropped to a comparatively-pathetic 2.5 hours and to this day Apple's battery life on their portables is mediocre at best. If I try playing an OpenGL game on my TiBook, I get 45 minutes per battery :mad:
 
melgross said:
When NT first came out it supported two. It wasn't able to make a real push into the server space until Microsoft was able to support four processors. That took a lot of work, and didn't work well. In fact, support for four only worked well when they got the support to eight-way. Now it supports thirty two way, and works well up to sixteen.

Actually NT supported 4 processors way back in Windows NT 3.1 Advanced Server.....
And believe it or not, Unisys was building 32-processor NT servers using Intel Pentiums back in 1995/96 (during the NT 3.51 days)

The real problem with NT's support of multiple processors wasn't/isn't NT's problem....
It is the terrible legacy architectural design of IBM PCs and compatibles...
These systems were never designed to be anything more than a Personal Computer.....
Actually it is almost amazing that they have worked as well as they have, considering their initial design flaws....
Many engineers have spent countless hours trying to mitigate a PC's poor design.


melgross said:
There has been criticism of Apple's server offerings because there are no four-way systems. Businesses like to have options to move to higher-capacity systems, and so far, Apple doesn't offer any. I would assume that Apple would like to correct that.

I agree.... I for one would love to see a 4 way PPC server...



melgross said:
IBM was talking about dual cores as far back as 2001. They were expecting to go there when switching to 65nm, but maybe now, as Intel and AMD seem to be doing, they will not wait. They also were going to an on board memory controller, but, from what I remember, the work to get Altivec on die stopped that work for the time being. I had understood that the on die controller was a requisite for the chip to be used in a portable. Perhaps that has changed.

Actually IBM has been producing dual core processors for 3 years now...
All Power 4 processors are dual core... and the Power 5 is dual core too.

I expect if IBM is making a low cost dual core CPU for Apple, then it will be based on the Power 4+ and maybe the Power 5....
Either way they will more than likely be stripped of cache.
 
melgross said:
When hyper-threading is turned on, some programs see two logical PROCESSES, but not two logical cpu's.

No, it's PROCESSORS.

With Windows (since OS X doesn't run on any HT chips, anything there is speculation) when HT is enabled you see twice as many PROCESSORS.

On my dual Xeons, the environment variable NUMBER_OF_PROCESSORS is 4. The Task Manager shows 4 CPU load windows in the performance pane.
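For what it's worth, a program can ask the OS for the same number directly; here's a hypothetical Java sketch (any counts in the comments refer to the dual-Xeon example above, not a guaranteed result):

```java
// Hypothetical sketch: querying the processor count the OS reports.
// With HyperThreading enabled this is the number of LOGICAL processors
// (e.g. 4 on a dual-Xeon box with HT on), not physical packages.
public class CpuCount {
    static int logicalCpus() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("logical CPUs: " + logicalCpus());
    }
}
```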

With Windows 2000 Pro - you cannot use the additional logical processors because 2k Pro only supports duals. With XP Pro, the "dual" support allows dual HT processors - or 4 logical processors.

As far as "operations that can be hyper-threaded" - the real issue is whether threads are waiting on the same resources. The OS doesn't see hyper-threads as anything other than a normal thread - they're scheduled as they request CPU time. If they're on real duals, they'll get 2 physical CPUs. If HT on a single, they'll get 2 logical CPUs.

The gain from HT is when threads use different chip resources. If one thread is stalled on memory, the other can use the integer unit. While one is using the float unit, the other can access memory. If both threads hit the same units, there's little benefit to HT.
 
spacemoose said:
It seems that IBM does not really have a mobile processor development program.

While this is well and good for servers/workstations, it could hurt Apple in the growing mobile market with Intel working hard to improve their "M" chips.

If IBM is not creating any mobile specific technology, and Apple is forced to retrofit their desktop chips into their mobile products, they will never compete with the already superior Intel mobile technology.

Perhaps this is why Apple has maintained its relationship with Motorola, as it intends to create G4 derivatives that will become its "M" line.

----

I think Apple will surely stick with the "G" identity for its products; it is in complete sync with the clean, simple and easily distinguishing design of their systems.

I keep hearing on this board about how the PowerBooks can't compete with a Pentium M....
Now let's get this straight.... I'm a long-time PC laptop user and I wouldn't take a slow a-- Pentium M system if you gave it to me AND paid me to use it.

Apple will not be hurt in the laptop market by anything Intel makes...
because a PowerBook isn't, and never really was, a competitor to a PC laptop.
Apple sells mainly to a niche market...
And it competes in the "I just love this cool, well-engineered work of art" and typical Mac buyer market...
 
mr.steevo said:
Well well. I suppose this is American English, which is, of course, improper. "If this news is correct..." is far more correct than "be correct". Where DID you go to school?
Oh, and where is the New Cube?

"be" is used correctly in that case when speaking in the subjunctive. It is rarely used correctly anymore, but still exists. An example is in the Three Dog Night song, Joy to the World:

If I were the king of the world,
I'd tell you what I'd do:
I'd throw away the cars and the bars and the wars,
And make sweet love to you.


See the Wikipedia article for more information about the dying subjunctive. :)

-The former English teacher
 
Melgross, what are you talking about??

OS X supports two logical processors. It's possible, from what I've read, that 10.4 will support four. The idea that it's easy to continue support up to sixteen, or higher, is wrong.


As of OSX.3, the OS supports at least two physical processors. I am not aware that it supports any logical ( hyper threaded ) processors.


When NT first came out it supported two. It wasn't able to make a real push into the server space until Microsoft was able to support four processors. That took a lot of work, and didn't work well. In fact, support for four only worked well when they got the support to eight-way. Now it supports thirty two way, and works well up to sixteen.

Memory, cache coherency, as well as issues with threading and program allocation, among others, are not easily gotten around.


These issues were due to MS's lack of technology. At the same time MS was having a hard time getting past 4 processors, IBM had OS/2 V2 up to 64 processor capable. Yes, the architecture of the day was limited, since multi-proc 386s often shared a common cache, which got cleared often by mistake, but the technology did exist. UNIX and the Mach kernel were even more advanced than the OS/2 kernel.

It is not wise to try to use MS as an example of the latest in technology.

Most programs have one thread and only benefit from two processors if the OS can move system ops to one processor while keeping program usage on the other. Most programs won't benefit much from hyper-threading, and indeed, might suffer. Intel has found this to be a problem as well.

A program must be compiled for multiple threads, as well as multiple processors. The more threads and processors, the more difficult the problem becomes.

Very few programs will see a benefit from four way machines. If you are running more than one program, however, you might see an advantage if the OS is written properly.

Bull!!! Any programmer can simply call for a new thread for any task in their code. Furthermore, the classes that a programmer might call to do tasks such as GUI controls, or file system access will very often spawn threads, due to the nature of the built in classes in the OS ( Win32, .NET, Next Framework, etc ).

Unless you are sitting in Terminal, and launching a simple program such as tar, you will be hard pressed to find a single threaded program for Windows, OSX, or most any modern day OS. Case in point, I have written a small program in JAVA that reads and modifies file attributes. I have intentionally not threaded the app, but when it does launch, it uses 7 threads, because I use the system calls found in the File object. You cannot have a GUI interface without threading.

The compiler does not perform the threading, the programmer does.
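A hypothetical Java sketch of the same observation as the File-object example above (this is my illustration, not the poster's actual program): even a program whose author never starts a thread runs on a VM that has already spawned several.

```java
// Hypothetical sketch: count the live threads in a "single-threaded"
// program. The JVM itself runs background threads (GC, finalizer,
// signal dispatcher, etc.) in addition to main, so the count is
// typically well above 1 even when the programmer created none.
public class ThreadCount {
    static int liveThreads() {
        // getAllStackTraces covers every live thread in the VM.
        return Thread.getAllStackTraces().size();
    }

    public static void main(String[] args) {
        System.out.println("live threads: " + liveThreads());
    }
}
```

Note that these hidden threads exist for the runtime's benefit; whether they speed up the program's own algorithm is the separate question argued below.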

About the only thing you are somewhat correct about is that just because a system has many physical / logical processors does not guarantee speed. Multiple processors are like torque in an engine. More torque doesn't make you go faster; it just takes a lot more of a load to slow you down.

If you can't keep the processors busy enough ( generate enough of a load ), the extra power is wasted.


Max.
 
maxvamp said:
As of OSX.3, the OS supports at least two physical processors. I am not aware that it supports any logical ( hyper threaded ) processors.

Max.
Better SMP support is "supposed" to be on the way in OS X v10.4 -- this is reading between the lines of Apple's Tiger Preview page about FreeBSD 5.x
More Power to Power-Users
The upgraded kernel, based on FreeBSD 5.x, provides optimized resource locking for better scalability across multiple processors, support for 64-bit memory pointers through the System library and standards-based access control lists. The system enhances network services via a next-generation launch daemon and centralized application logging. Tiger also features command-line access to Spotlight for searching application metadata and enables many common UNIX utilities to handle HFS+ resource forks.

Optimized Kernel Resource Locking
Optimized locking provides better SMP performance by allowing two CPUs to simultaneously access different portions of the kernel. This will improve performance of almost every task on multiprocessor machines.
Of course there's no telling how much better things will be just from jumping from FreeBSD 4.4 to 5.x, along with extremely recent versions of GCC.

http://www.freebsd.org/releases/
 
maxvamp said:
These issues were due to MS's lack of technology. At the same time MS was having a hard time getting past 4 processors, IBM had OS/2 V2 up to 64 processor capable.

This document OS/2 for SMP V2.11 Reference says 16 CPUs max.

Theoretical support for many processors is a lot different than good performance with that many. Windows NT always had architectural support for 32 processors, but system locking issues caused a performance bottleneck on many workloads with a quad CPU machine. From release to release, these bottlenecks have been addressed.

And as far as "lack of technology", did you note that Tiger claims:

Optimized Kernel Resource Locking
Optimized locking provides better SMP performance by allowing two CPUs to simultaneously access different portions of the kernel. This will improve performance of almost every task on multiprocessor machines.

So, it seems that Panther must have some "lack of technology" that makes it suffer even on dual processors.


maxvamp said:
"A program must be compiled for multiple threads, as well as multiple processors. The more threads and processors, the more difficult the problem becomes."

Bull!!! Any programmer can simply call for a new thread for any task in their code.

I think that I must agree with the first post.

Of course creating threads is easy, and of course many OS activities create hidden threads. Overlapped I/O is a common example - the O/S is doing multiple I/Os in the background to speed things up.

What can be very difficult is splitting the program algorithm up so that it can be done in parallel by multiple threads and actually result in a program speedup. For some programs, this is next to impossible - by their nature things have to be done in order, and each step must wait for the previous to complete. This becomes even harder when you try to go beyond a dual processor system.

Some programs are easy to multi-thread - by their nature they consist of lots of small tasks that can be done in parallel. Video encoding is a great example - each keyframe makes an independent starting point, you have a keyframe and then a group of intermediate frames that are based on the keyframe.

You can split the input into these keyframe chunks, farm each chunk out to a separate thread, then reassemble them as they complete. In practice one would use a small number of worker threads, and feed chunks to those threads as they complete the previous chunks.
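The worker-pool pattern just described might look like this; a hypothetical Java sketch in which squaring a number stands in for encoding a chunk (the structure, not the arithmetic, is the point):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the worker-pool pattern: independent chunks go
// to a small fixed pool of worker threads, and the results are
// reassembled in their original order.
public class ChunkFarm {
    static List<Integer> process(List<Integer> chunks, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int chunk : chunks) {
            // Each "keyframe chunk" becomes an independent task;
            // squaring stands in for the real encode step.
            futures.add(pool.submit(() -> chunk * chunk));
        }
        List<Integer> out = new ArrayList<>();
        try {
            for (Future<Integer> f : futures) out.add(f.get()); // in order
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(process(List.of(1, 2, 3, 4), 2));
    }
}
```

The fixed pool is the "small number of worker threads" mentioned above: tasks queue up and are fed to workers as they finish previous chunks.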

And of course, a task that can be farmed (or clustered) is probably a task that can be multi-threaded.

I think that your "Bull!!" comment is unjustified.
 
BULL!!!

:)

Ok, I stand corrected on the OS/2 max procs. It has been a while since I was an OS/2 freak.

The reason I called BULL was over the comment that most programs are single threaded. Aiden, do you support this comment?

I will not dispute that some scheduling can be tricky, but effective threading happens all the time in code. Even in the example you provided with the video encoding, there are at least two threads here. One thread will be reading the source video and hopefully caching some of it, while another will be doing the processing. If you take into account the UI progress, and the File I/O, I doubt that you can say that this is a single threaded app.


A comment I have made several times is that more processors doesn't always mean more power. The claim that people still need to compile all of their apps for dual proc is a fallacy. People need to optimize their apps for the most effective threading.

Finally, the technology comment was made as a direct comment of Microsoft at that time period.

Microsoft did not come into its own as an innovator until almost 2000, with the re-write of the OS/2-NT kernel into Windows 2000. Before then, in almost every market where it competed, Microsoft was technologically behind all the other players. To connect this fact with improved technology that Apple is trying to put into the next version of the OS is failed logic taken out of context. IBM was always better at kernel design than Microsoft. I suspect that it had something to do with the fact that IBM is really a hardware company that happens to be the biggest software company out there.

Feel free to dispute this stuff, but I plan on sticking to my position.

Max. :D
 
AidenShaw said:
So, it seems that Panther must have some "lack of technology" that makes it suffer even on dual processors.
Mac OS X, from its initial release through today (Panther), has used a locking construct called a funnel in the kernel. This funnel is used to protect parts of the kernel from simultaneous modification by multiple threads. Initially only one funnel existed, and it protected the majority of the BSD-based aspects of the kernel, since that code was traditionally designed not to support multiple threads running concurrently. Later a second funnel was added (I think in 10.1): one to protect the networking aspects, and the other the rest of the BSD-derived kernel. This allowed better utilization of the system (at least while in kernel) when running on multiprocessor systems.

Note that IOKit and a few other areas of the kernel are pervasively multi-threaded, using finer grain locking that allow better utilization. Also user mode code can be heavily threaded to leverage available compute resources, etc.

As for Tiger I can not delve into specifics because of NDA but the granularity and locality of locking in the kernel will be changed for the better.
 
Wow!

Sorry, the Quote feature doesn't seem to be working for me today. I'm doing it manually.

Quote:
Originally Posted by melgross
When NT first came out it supported two. It wasn't able to make a real push into the server space until Microsoft was able to support four processors. That took a lot of work, and didn't work well. In fact, support for four only worked well when they got the support to eight-way. Now it supports thirty two way, and works well up to sixteen.


macsrus:
Actually NT supported 4 processors way back in Windows NT 3.1 Advanced Server.....
And believe it or not, Unisys was building 32-processor NT servers using Intel Pentiums back in 1995/96 (during the NT 3.51 days)

me:
Yes, at 3.1 they WERE supporting four. But when it first came out, you know, ver 1.0, they were supporting two.
macsrus:
Actually IBM has been producing dual core processors for 3 years now... All Power 4 processors are dual core... and the Power 5 is dual core too.

me:
True, but I was talking about the 970, not the Power series.

AidenShaw:
Quote:

Originally Posted by melgross

When hyper-threading is turned on, some programs see two logical PROCESSES, but not two logical cpu's.


No, it's PROCESSORS.
With Windows (since OS X doesn't run on any HT chips, anything there is speculation) when HT is enabled you see twice as many PROCESSORS.

On my dual Xeons, the environment variable NUMBER_OF_PROCESSORS is 4. The Task Manager shows 4 CPU load windows in the performance pane.

me:
I agree. But the program being run doesn't see it that way.

maxvamp:
Quote:
OS X supports two logical processors. It's possible, from what I've read, that 10.4 will support four. The idea that it's easy to continue support up to sixteen, or higher, is wrong.


As of OSX.3, the OS supports at least two physical processors. I am not aware that it supports any logical ( hyper threaded ) processors.

me:
Sorry, I meant to say physical, not logical.


Quote:
When NT first came out it supported two. It wasn't able to make a real push into the server space until Microsoft was able to support four processors. That took a lot of work, and didn't work well. In fact, support for four only worked well when they got the support to eight-way. Now it supports thirty two way, and works well up to sixteen.

Memory, cache coherency, as well as issues with threading and program allocation, among others, are not easily gotten around.


These issues were due to MS's lack of technology. At the same time MS was having a hard time getting past 4 processors, IBM had OS/2 V2 up to 64 processor capable. Yes, the architecture of the day was limited, since multi-proc 386s often shared a common cache, which got cleared often by mistake, but the technology did exist. UNIX and the Mach kernel were even more advanced than the OS/2 kernel.

It is not wise to try to use MS as an example of the latest in technology.

me:
Actually, all companies had problems getting past two processors in the beginning. It wasn't just Microsoft. IBM had more experience than Microsoft did (which had none, really).

Quote:

Most programs have one thread and only benefit from two processors if the OS can move system ops to one processor while keeping program usage on the other. Most programs won't benefit much from hyper-threading, and indeed, might suffer. Intel has found this to be a problem as well.

A program must be compiled for multiple threads, as well as multiple processors. The more threads and processors, the more difficult the problem becomes.

Very few programs will see a benefit from four way machines. If you are running more than one program, however, you might see an advantage if the OS is written properly.



Bull!!! Any programmer can simply call for a new thread for any task in their code. Furthermore, the classes that a programmer might call to do tasks such as GUI controls, or file system access will very often spawn threads, due to the nature of the built in classes in the OS ( Win32, .NET, Next Framework, etc ).

Unless you are sitting in Terminal, and launching a simple program such as tar, you will be hard pressed to find a single threaded program for Windows, OSX, or most any modern day OS. Case in point, I have written a small program in JAVA that reads and modifies file attributes. I have intentionally not threaded the app, but when it does launch, it uses 7 threads, because I use the system calls found in the File object. You cannot have a GUI interface without threading.

The compiler does not perform the threading, the programmer does.

About the only thing you are somewhat correct is that just because a system has many physical / logical processors, does not guarantee speed. multi processors are like torque in an engine. More torque doesn't make you go faster, it just takes a lot more of a load to slow you down.

me:
Bull! My, such a strong word. Please think with your head, and not with your heart. I do understand how threads are spawned. You are not paying attention to the point of the argument. I am talking about parallel execution. Maybe I should have been clearer. Many, if not most, programs can't effectively use multi-threading. Most supercomputers work on problems that can be broken down into single units that can be worked upon at the same time. That's why supercomputers such as the older Crays had an average speed that was close to the peak rating. Not so with massively parallel machines. The peak is MUCH higher than the average, sometimes ten times higher.

Mr. Shaw explains this here in post #189.
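The limit being described is usually written down as Amdahl's law (my framing, not the poster's): if only a fraction p of a program can run in parallel, adding processors can never buy more than 1 / (1 - p). A hypothetical Java sketch:

```java
// Hypothetical sketch: Amdahl's law quantifies the peak-vs-average point.
// With a parallel fraction p and n processors, the overall speedup is
// 1 / ((1 - p) + p / n), which can never exceed 1 / (1 - p).
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // A program that is only 50% parallel gets 1.6x from 4 CPUs...
        System.out.println(speedup(0.5, 4));
        // ...and still under 2x from a thousand CPUs.
        System.out.println(speedup(0.5, 1000));
    }
}
```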
 
Bull != BullS**t

Please note that "Bull" is meant as a friendly dispute of a statement.

"BullS**t" implies aggression and is far less friendly, unless prefixed with "I call".


I believe Shakespeare would agree.


In the early days of multiprocessors with a common cache: one processor fills it; the second processor looks in the cache for reusable code, finds none for the process it is working on, and clears the cache; the first processor goes back to the cache to fetch its work, doesn't find it, and suffers a performance penalty.

The problem got much worse with more procs. Life / technology is much better now.

Max.

BTW: I still take a dual stance on the threading thing. Yes, there are many, many threaded apps out there. No, people don't need 4-way or bigger machines, unless they are running MS Exchange.

Max.
 
maxvamp said:
The reason I called BULL was over the comment that most programs are single threaded. Aiden, do you support this comment?

I support it in the common context that most programs do their main flow of execution in a single program thread.

I support it in the sense that hidden threads created by the operating system for various support tasks do not make the main flow of the program multi-threaded.

I support it in the belief that a program is single-threaded unless the programmer designs his algorithm for parallel execution and calls services to spawn additional threads.

I support it in contrast to your first "Bull!!!" statement, which had no mention of the main problem: exploiting parallel threads in the main program to actually speed up the program.

___________________________________
maxvamp said:
Microsoft did not come into its own as an innovator until almost 2000, with the re-write of the OS/2-NT kernel into Windows 2000.

The word "re-write" is open to interpretation, but Windows 2000 was an evolved Windows NT - not a clean start.

http://www.winntmag.com/Articles/Index.cfm?IssueID=97&ArticleID=4494

___________________________________
maxvamp said:
To connect this fact with improved technology that Apple is trying to put into the next version of the OS is failed logic taken out of context.

I see it more as a comment that a design paradigm of every operating system group has been:

First, get it working. Then get it working fast.


Apple is following the same path that Microsoft has been following - but Apple's OS design "problems" aren't as visible as were those of NT. There aren't any 8-way and 32-way PPC970 systems to bring attention to Apple's "problems".

I'd take the comment about improvements in Tiger's design to be a hint that Apple is getting ready for a 4-way (even if it's a HyperThreaded dual).
 
What does it all mean?

For anyone technologically challenged such as myself, could someone please explain what all this means in simple terms? :eek: :eek: :eek:
 
Sure

MacSA said:
For anyone technologically challenged such as myself, could someone please explain what all this means in simple terms? :eek: :eek: :eek:


The hardware rumour ("Antares") is that IBM may be developing a single chip that has two complete CPUs on it. This is called "dual-core" - referring to 2 CPU cores on one piece of silicon. It means that dual CPU Macs should get cheaper, and a quad processor might be easier to build.

It could also mean that duals could show up in lower-end systems - dual CPU iMacs or even laptops would be more feasible with the new dual-core chip.

It also means that there's a rumour that IBM will do what Intel has announced that it will do - push dual-core chips across the lineup:

http://www.xbitlabs.com/news/cpu/display/20040610151158.html

Within a year or so mainstream PCs will be dual CPU using dual-core chips. Most likely PC laptops, at least the more powerful ones, will be dual CPU.

___________________________________
The comments about SMT (Simultaneous Multi Threading, which is what Intel calls HyperThreading) describe a way where a single CPU can act like two CPUs. It's not as good as two real CPUs, but it's usually better than a single CPU and it's extremely cheap to add to the chip.

The single dual-core chip with SMT would look like a 4 CPU system, and perform like a 2.5 to 3-way system.

The SMT support is very questionable, though, since the rumour says that it's a PPC970, and the 970 does not have SMT.

____________________________________
The other discussions about threading are a digression by geeks arguing about how easy it is to use multiple CPUs to actually make your work run faster.

No need to really understand it - but we're talking about the root reasons that a dual CPU system is not twice as fast as a single, and why a quad wouldn't even be close to 4 times as fast.
 