"so in a sense it is more a marketing trick than anything else?"

Not at all! Some software can really benefit from SMT. The nice thing is that Intel's latest spin on SMT is seldom a performance negative, unlike the first iterations seen years ago.
"It's good to know. I thought the whole point was that you would get the extra power from the logical core, but as it looks it's just another term to confuse more people))"

You are paying attention to the wrong people here. SMT has nothing to do with confusing people; it is real, useful tech. How effective it is for the workload you apply to your computer is an open question. This, however, is not unlike the usage of a GPU in a computer. Some people hardly ever fully use their GPU's capability; others using the same GPU can put it into thermal overload on a daily basis.
As a side note the OS is what actually manages all those threads.
Many of those processes and threads managed by the OS don't really need the full performance of the CPU anyways.
"Hopefully it will support more than the pitiful 32 GB of memory like the current model. Life begins at 128 GB, so I sure hope the new model supports that. With 12 cores / 24 threads, the machine will be a complete joke if the memory ceiling is still the same 32 GB."

Well, there was a rumor some time ago about the new Mac Pro having Gulftown and up to 128 GB of RAM.
Looking at my tricked out, one year old 8 core Xeon Mac Pro I can only lament the passing of the days we could buy a CPU upgrade card for such a Mac from DayStar and their like. One of the beauties of a Mac with slots was exactly that. I'd gladly pay the $1,200 or so to upgrade the CPUs on a machine that set me back over $6K. Can the more technically inclined explain what happened that ended such upgrades?
You really might want to stick to a subject you understand something about. The big difference between i5 and i7 is that the i7 offers 2 logical cores per core. The i7 in the iMac does that, so it's a real i7.
As for the price, you'd think people would learn to stop speculating about price on systems that have not even been announced, much less released. Historically, the Mac Pro has fallen within a specific price range - even when it was one of the first ones using a given processor. The first Nehalem Macs were within the same price range even though the chips were fairly scarce at the time, so there's no reason to think the first Gulftowns will be any different.
"The OS scheduler needs to be smart and adaptive when running on an SMT system - otherwise you can easily get less performance than without SMT."

It depends - the greatest words ever spoken about computing.
"For example, consider the case where you have two computable threads. If these are scheduled on separate cores, you get 200% of the performance. If, however, the OS schedules those two threads on different logical CPUs on the same physical core, you get (maybe) 120%. It's pretty easy to hit this scheduling with a simulation or long encoding run."

That can sometimes happen. On the other hand, some apps can make use of both threads very well indeed. However, it is almost never a negative on the modern SMT implementations.
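The 200%-versus-120% placement effect described above can be put into a toy throughput model. The 1.2 factor for two threads sharing one core is just the figure quoted in the post, not a measured constant:

```python
# Toy model of SMT thread placement. Assumed numbers: a thread on its
# own core delivers 100% throughput; two threads sharing one physical
# core via SMT deliver ~120% combined (the figure quoted above).
PER_CORE = 1.0
SMT_SHARED = 1.2

def throughput(placement):
    """placement: list of thread counts per physical core (0, 1, or 2)."""
    total = 0.0
    for threads_on_core in placement:
        if threads_on_core == 1:
            total += PER_CORE
        elif threads_on_core == 2:
            total += SMT_SHARED
    return total

print(throughput([1, 1]))  # two threads on separate cores: 2.0 ("200%")
print(throughput([2, 0]))  # both on one core, one core idle: 1.2 ("120%")
```

With those assumed numbers, the bad placement costs you nearly half the potential throughput, which is exactly the scheduling hazard being argued about.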
"In real life, threads wake up and sleep constantly, so what's "perfect" now might be "worst case" in a millisecond. And, since it costs CPU to move a thread between logical CPUs, the scheduler shouldn't rebalance millisecond by millisecond."

This is very true on a single-core computer, not so much on a multicore model where it is not impossible for a thread to run to completion. I'm not prepared to get into the specifics of how the threads get dispatched, as frankly I don't know the details, but I would suspect that Apple is generating more threads for the OS to manage than there are hardware threads to process.
"If they don't, they're spending a lot of time in the idle state, so the number of CPUs/cores/threads isn't that important."

That is one way to look at it. The other way is that a hardware thread running at ten or twenty percent capacity can still easily handle these minor loads on the system. Sometimes that little extra capacity goes a long way.
"My rule of thumb with hyperthreading has been to turn it off unless you often have more computable threads than physical CPUs (cores)."

Where did that idea come from? Frankly, I think you are screwing yourself here, in part because you are thinking you are smarter than the scheduler about what hardware is available to use. What's more, you have no way of knowing when or if your software will need multiple threads.
"You help the OS scheduler by eliminating the possibility of having two busy threads on the same physical core when there are idle physical cores."

Where did this idea come from? A proper CPU scheduler should not leave you with free cores. Plus, this can be a huge negative if the threads are from the same process and can benefit from the local caches.
(Typing on a Core i7-940 with HT disabled.)
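As an aside, disabling HT in firmware isn't the only way to get the "one busy thread per physical core" setup argued for above. A minimal sketch, assuming Linux (`os.sched_setaffinity` doesn't exist on OS X) and assuming the common, but not guaranteed, enumeration where the first half of the logical CPU numbers are the first SMT sibling of each core:

```python
import os

def pin_to_first_siblings(pid=0):
    """Restrict a process to one logical CPU per physical core.

    Linux-only sketch. Assumes logical CPUs 0..N/2-1 are the first SMT
    sibling of each physical core -- verify this against
    /sys/devices/system/cpu/cpu*/topology/thread_siblings_list before
    relying on it on real hardware.
    """
    logical = os.cpu_count() or 1
    first_siblings = set(range(max(1, logical // 2)))
    os.sched_setaffinity(pid, first_siblings)
```

Unlike flipping HT off globally, this only constrains the one process you pin; everything else still sees all the logical CPUs.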
In any event, SMT allows for better utilization of the resources in the CPU chip. Under optimal conditions the speedup can approach that of two CPUs, though in practice it is often less. If you are concerned about the viability of SMT, then look up benchmarks that highlight where it is a success. Some apps can really give an i7 a workout.
"But the problem/complaint is that there are no apps (ok, maybe we can all put our heads together and list 3) that truly take advantage of multi-core technology. Multi-core technology has been around for almost 5 years (I've owned a few quad-core PC systems since early 2007) and even today practically nothing takes advantage of it."

That is false information. EVERYTHING takes advantage of multiple cores; it's the nature of how OS X handles applications.
"Of course it all has to start with the OS, too, to allow the apps to be written in that manner."

Which has been the case since 2000.
"With this talk about new Core processors, I take it the 13.3'' MBP would not see a change to this new hardware, since it was introduced not too long ago?"

Don't bet on it!
"I can see changes in upgraded processors for the 15'' and 17'' or iMac, but the 13.3'' can't handle the heat or isn't ready to see an update until later in 2010."

The rumored 32nm processors are supposedly very low power. The new chips could go into a 13" MBP and give it a significant boost in performance and battery lifetime.
I could see many users being rattled if the MBP 13.3'' had an extreme hardware update while many people are buying the model for Christmas.
"Ummmmm no. Even two CPUs can't double performance..."

I disagree, as some highly parallel apps do come very close to doing that and scale well across even more cores. Of course these are best-case apps, but that doesn't dismiss the reality.
"The most Intel will ever claim for SMT under ideal conditions is a 30% boost."

Yes, and some apps do a lot worse, too.
There are many factors here, but you will almost always get better throughput with SMT on modern processors.
What you need to look at is what happens when you have a plethora of threads, say spawned by GCD. If you get 150% from each core, you get 600% as opposed to 400% without SMT. In the end you get done much faster than if you only used four cores.
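The 600%-versus-400% arithmetic above works out as follows. The 150%-per-core figure and the eight-task workload are taken from the discussion, purely for illustration:

```python
cores = 4
smt_factor = 1.5       # one fully loaded SMT core ~ 150% of one thread
work_units = 8.0       # e.g. eight equal tasks queued up by GCD

aggregate_with_smt = cores * smt_factor * 100   # 600.0 ("600%")
aggregate_without = cores * 1.0 * 100           # 400.0 ("400%")

# Same workload, expressed as completion time in arbitrary units:
time_with_smt = work_units / (cores * smt_factor)  # ~1.33
time_without = work_units / cores                  # 2.0
print(aggregate_with_smt, aggregate_without)
```

So with these assumed numbers, a fully parallel batch finishes in roughly two-thirds of the time on the same four physical cores.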
The only time you might gain is with hard single-threaded apps that can benefit from clock speed. That could be a justification, if it is a proven reality. Even so, when you do that for one or two apps, you hog-tie your machine for other apps and OS flexibility in general.
Honestly, I'd re-enable that HT support and look at the apps you run a little closer. Unless you have a very specific usage pattern that justifies it, you will be better off with HT on.
Doesn't hyperthreading really act as a "queueing" system to make sure a core stays busy whenever work is available, rather than reporting its availability only after finishing execution of the prior work?
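That queueing picture can be sketched as a toy cycle-by-cycle model. Each cycle, a stream either has an instruction ready or is stalled on memory, and the core can issue from at most one stream per cycle. The stall pattern below is hand-picked for illustration, not from a real trace:

```python
# ready[t] = (stream A ready?, stream B ready?) at cycle t.
ready = [
    (True,  False),
    (False, True),
    (False, True),
    (True,  False),
    (False, False),  # both stalled: nothing can issue either way
    (True,  True),   # both ready: the shared pipeline still issues one
]

def cycles_doing_work(smt):
    issued = 0
    for a, b in ready:
        if smt:
            issued += 1 if (a or b) else 0  # any ready stream may issue
        else:
            issued += 1 if a else 0         # only stream A exists
    return issued

print(cycles_doing_work(smt=False))  # 3 of 6 cycles busy
print(cycles_doing_work(smt=True))   # 5 of 6 -- stalls filled, not magic
```

Note the two limits the model exposes: SMT only helps when one stream is stalled and the other is ready, and it never issues more than one instruction stream per cycle, which is why it adds throughput rather than doubling it.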
"Solitaire and Minesweeper? hmm..."

If you get a MP with one i9, it will have 6 physical and 12 logical cores. If you get it with two i9s, it will have 12 physical cores and 24 logical cores.
Please, someone enlighten me. Just wondering about the article: what did it mean by a dual-processor setup bringing 12 physical cores and 24(!) logical cores? Dual 6-core i9s is 12 cores. How can it be doubled to 24 logical cores?
So 1 core contains 2 "logical" cores?
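Yes, and the article's counts follow directly from that. A quick sketch of the arithmetic, with the caveat that most OS APIs (Python's `os.cpu_count()`, for example) report the logical number, not the physical one:

```python
import os

sockets = 2            # dual-processor setup from the article
cores_per_socket = 6   # a six-core Gulftown-class chip
threads_per_core = 2   # SMT ("Hyper-Threading") exposes 2 logical CPUs

physical = sockets * cores_per_socket   # 12 physical cores
logical = physical * threads_per_core   # 24 logical cores
print(physical, logical)

# On such a live machine, os.cpu_count() would report 24, the
# *logical* count; here it reports whatever this machine has.
print(os.cpu_count())
```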
Breaking news.
Lots of people already have nice monitors and don't want to look at a mirror when they are using a computer.
"The rumored 32nm processors are supposedly very low power. The new chips could go into a 13" MBP and give it a significant boost in performance and battery lifetime."

They are low power, but it appears that the 2.53/2.67 GHz variants won't arrive until Q3 2010.
"Certainly true words..."

Very true.
If you never have more active (schedulable) threads than physical cores, you cannot possibly have better throughput with SMT. You might possibly have worse throughput, if active threads get scheduled on the same core when you have idle cores.
"Do you have any statistics that say that OSX is good about thread scheduling on SMT systems? If you search for "hyperthreading performance worse" you see things like http://www.csl.cornell.edu/~vince/writeups/case_for_ht.html You might also want to look at the Linux developer discussions about trying to improve the Linux scheduler for SMT. It's not a simple "one size fits all" problem."

Actually, I've been following Linux for a long time and have seen schedulers come and go, so I understand the complexity. What I'm saying is: does it make sense for the average user to second-guess the scheduler? In most cases I'd say it is not worthwhile.
"This is exactly why I said "My rule of thumb with hyperthreading has been to turn it off unless you often have more computable threads than physical CPUs" - on a server with a typical load with many computable threads, SMT is good - you'll typically win."

That is all well and good, but can you, off the top of your head, say how many threads and processes are running when Safari is using part of a screen to run Flash movies and in another window you are running a word processor or IDE? If it is 64-bit Snow Leopard, that is two processes for Safari plus an unknown number of threads, along with your word processor and at least a couple of threads there. So quickly you have three user processes and a few threads to deal with. In any event, why would you want to burden yourself with turning SMT on and off to get what you think is better performance? Especially when your mix of apps can change at any moment.
"On a workstation/desktop with random loads, sometimes you'll have enough computable threads that SMT is a win. Sometimes you'll have fewer threads than physical CPUs, and SMT can hurt."

Exactly, so why saddle yourself with being a CPU scheduler when the OS can do it for you? Is it perfect? Certainly not, but over time it ought to do better.
"It comes from experience and testing. Right now my quad core system has 649 running threads, is averaging 25% activity (out of 100% - there's a dual CPU VM churning on some things, usually it's much less), and is averaging 2.4GHz (out of 2.93GHz)."

Would you agree that any of those 600-some-odd threads can become active at any time? If so, let's say 8 of them became active all at once; wouldn't you want to have SMT available then?
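The system-wide thread counts being traded back and forth here are easy to reproduce. A Linux-specific sketch (OS X has no /proc; there you would eyeball Activity Monitor or `top` instead):

```python
import os

def count_threads():
    """System-wide thread count, summed from /proc/<pid>/task.

    Linux-only sketch: each directory under /proc/<pid>/task is one
    thread of that process. Returns None where /proc is absent.
    """
    if not os.path.isdir('/proc'):
        return None
    total = 0
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            total += len(os.listdir(os.path.join('/proc', pid, 'task')))
        except OSError:
            continue  # the process exited mid-scan
    return total

print(count_threads())
```

An idle desktop typically shows hundreds of threads this way, which is the point of the argument: nearly all of them are asleep, so the raw count says little about how many are computable at once.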
"So, by limiting the system to 4 real threads, I never have the situation where scheduling artifacts lead to some physical cores with 2 threads and some physical cores with 0 threads."

If you say so. It may very well be the case that Mac OS has a crappy scheduler. I do wonder, though, whether this testing of yours was with the latest Snow Leopard release, as that section of the OS was totally reworked with the intent to support lots of cores.
"Also note that "knowing when or if your software will need multiple threads" is not the issue, it's "knowing when or if your software will have multiple computable threads"."

It really doesn't matter, as you can't answer that question either.
"My system has 657 threads now, so I *know* that it will always need multiple threads. I also know that my system seldom has more computable threads than physical cores."

I have to call BS on this one, because you'd have to be superhuman to know what is running on your PC from millisecond to millisecond, 24/7.
"I know from the way that I use my machine that I don't have long-running multi-threaded CPU-bound apps."

OK, I can buy that in general. But have you ever downloaded a large file while watching a movie on screen? It doesn't take much to end up using a lot of resources. This might be a bad example if you have a machine doing decode on the GPU.
"Also, it's very nebulous that "if the threads are from the same process and can benefit from the local caches" is meaningful. Some threads from the same process might be sharing common data, and L1/L2 cache sharing would be a plus. In other cases, threads from the same process might be independent, and running on separate physical cores would be a huge advantage."

That can certainly happen. Still, if the OS wants, it could schedule those two threads to run on a single core. You are not guaranteed that the OS will put the thread on another CPU.
"How does the scheduler know which is the case? It doesn't."

Risks, possibly, but "consistently" I'm not too sure about. The problem that I see is that you say you only have one CPU busy, but that doesn't mean the OS isn't feeding work to the others. The impact could be so minor you might never notice.
My usage pattern averages less than 1 CPU busy, and I have 4 CPUs. Adding 4 more logical CPUs to my mix will almost never speed things up, and risks slowing things down.
"If I were going to spend some time stealing videos with Handbrake, I'd simply turn HT back on while I was ripping them."

I don't recommend stealing. In any event, you still haven't convinced me that being a cyborg scheduler is worthwhile.
Originally posted by AidenShaw:
"Note that HT will sacrifice response time for throughput in the best of cases - think about that."

Is this another off-the-cuff remark? Because it is not valid the way I'm parsing it.
This is exactly why I postponed my purchase of the 8-core Mac Pro. The only thing I'm worried about is a price rise! I hope not! Please, God!!!
In any event the stuff I've seen tested seldom shows SMT, on modern Intel processors, actually slowing things down.
What interests me here is what actually has you thinking you see a real advantage in turning SMT off?
Cinebench scores scale predictably, save for a certain oddity surrounding quad-threaded performance. The Core i7 965 without HT blows the doors off the Core i7 with HT.
http://arstechnica.com/hardware/reviews/2008/11/nehalem-launch-review.ars/5
My point is how can you as a user possibly know how many threads and or processes are active all the time?
In any event we can go on arguing about this...