Applications should be, and most likely are, written to take advantage of available resources. A developer should be writing applications to take advantage of 8 cores already; they don't need an 8-core machine to do so.

You are not a developer, I take it?

Are you seriously suggesting that a developer should ship a product with features that are not only untested, but haven't even been tried out?

What do you prefer: unpack an 8-core Mac Pro, install Handbrake, run it, 50 percent CPU usage; or unpack an 8-core Mac Pro, install Handbrake, run it, kaboom!
 
What about cheaper 4-core Mac Pros?

Seems to me this might be a way not for Apple to release an 8 core machine, but perhaps to release a one-chip, 4 core Mac Pro. That might result in slightly lower manufacturing and/or parts costs.
 
This is true, but there are still many, many ways to optimize for multi-core processors that aren't currently being used.

For example, I am waiting for a program to compile right now. Although I have a dual core in my computer, the compiler only compiles one file at a time and usually takes about 10 minutes to do a full compile. If I had an 8-core computer with a multi-threaded compiler, I could cut the total time to just over a minute, plus a couple of seconds for linking.

You know that if you have multiple processors, you can tell the build process to use them all, i.e. compile multiple files at the same time!

I have a dual-core iMac, and if I do 'make -j3', it will use both processors. If you have a quad processor, do 'make -j5'.

Really though, this is just an example of what I was already talking about, namely doing tasks A,B,C and D, where A,B,C and D have no dependence on each other.
 
OK, I'm hardly a programmer (PHP doesn't really count), but that's the exact same description I've heard applied to what it takes to vectorize a program (i.e. make it AltiVec optimized) [that and the process of making loops that can be unrolled]. So I've got to ask: is there some difference between those two concepts? If not, it sure seems like we would have a lot more multi-core enabled apps out there already...

I'm glad you admit that PHP doesn't count :)

But to answer your question: there are situations where vectorization and multi-threading/processing are both applicable. However, vectorization *tends* to work on chunks of data that are not dependent on each other, but similar. Say you have four integers, and you need to double them all. You could vectorize that, and it'd be a lot cheaper than spawning additional threads to do each multiplication.

However, take Word for example. I don't know how it works, but let's assume that the main editor is one thread, and the real-time spell/grammar checker is a separate thread. Those two tasks are not at all the same, so you couldn't vectorize that, but you could very easily multi-thread it.

To bring it back to my cake example, let's say you had to crack four eggs. It would make sense to vectorize that, crack all four at the same time. But then let's say you have to crack one egg, pour 500ml of milk, and measure 250g of flour. You wouldn't vectorize that, you'd multi-thread it.
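To make the contrast concrete, here's a pure-Python sketch. Python stands in for real AltiVec/SSE vectorization, which happens in hardware; the doubling task is the one from the example above.

```python
from concurrent.futures import ThreadPoolExecutor

values = [1, 2, 3, 4]

# Vectorization in spirit: one uniform operation swept across similar,
# independent data. Real AltiVec/SSE would do this in a single instruction.
doubled = [v * 2 for v in values]

# Multi-threading the same job: correct, but the cost of spawning and
# scheduling threads dwarfs a single multiplication.
with ThreadPoolExecutor(max_workers=4) as pool:
    doubled_threaded = list(pool.map(lambda v: v * 2, values))

# Both yield [2, 4, 6, 8]; only the cost differs.
```

Threads earn their overhead on the dissimilar tasks (egg, milk, flour), not on four identical multiplications.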
 
You are not a developer, I take it?

Are you seriously suggesting that a developer should ship a product with features that are not only untested, but haven't even been tried out?

What do you prefer: unpack an 8-core Mac Pro, install Handbrake, run it, 50 percent CPU usage; or unpack an 8-core Mac Pro, install Handbrake, run it, kaboom!

Being a developer with a fair bit of graphics programming and multithreaded development experience, I would say the solution is somewhere in between. There's no reason software isn't being planned for the upcoming CPU architectures, with newer versions being developed to handle them. In other words, it's no secret that this hardware is coming; we've known about quad-core Clovertown CPUs for nearly a year, and engineering samples started shipping several months ago (early September, IIRC). Too bad Apple doesn't make pre-release hardware available via higher-level ADC programs; only a select few get the privilege.

Programmers should make the effort to accommodate upcoming multi-core designs into their software development cycle. Once a new system is released, it should be a minimal effort to test and tweak the software for the new system and quickly release an update, making their customers wait only a week or two from when the systems first ship, as opposed to several weeks or months while much of an application is rewritten to accommodate 8 cores because the last version was hard-coded to handle 4. And then the cycle starts again in 18 months when 12- or 16-core chips start shipping.

I don't think the software industry has really warmed up to the multi-core paradigm just yet. They have been resisting it for years, as anyone who has run multiprocessor systems over the years will attest. But this is the way it's going to be for a while, and eventually we'll hit a core barrier, just as the MHz barrier popped up. Both Intel and AMD are predicting 80 to 120 cores being the max for the x86 architecture.

So start planning and figuring out how to micro-manage threads and fibers within your code, because we'll be hitting 16 to 24 cores by 2010 and MHz per core isn't going to creep much past 3GHz. And the thread-per-task, thread-per-CPU-core mentality that many programmers have is not the proper way to approach this.
 
The negative for me is the tiny caveat at the bottom of the article. Apple releasing 8-core Mac Pros this month? Highly doubtful, in my opinion.

Also, negative sometimes just means you don't believe it (as in this case), not that it's a "negative" announcement.

Thanks for the clarification. Is there a written document on how rating criteria should be applied? If not, and each person decides what criteria they will use, then the rating really does not mean much. Maybe it does not anyway? I was thinking it was a non-scientific barometer of how people perceived the technology.
 
You are not a developer, I take it?

Are you seriously suggesting that a developer should ship a product with features that are not only untested, but haven't even been tried out?

What do you prefer: unpack an 8-core Mac Pro, install Handbrake, run it, 50 percent CPU usage; or unpack an 8-core Mac Pro, install Handbrake, run it, kaboom!

I don't think that's what he meant. I think he means instead of hard coding a program to use 8 (or however many cores), have the program dynamically use however many cores are in your computer. So if he wrote it on a 2 core machine, the program would use 2 cores. When he puts it on an 8 core computer, it'll automatically use all 8 w/o having to reprogram. The programmer should still test it and make corrections as necessary.
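That dynamic approach might look something like this in Python (a hedged sketch; `encode_chunk` is a hypothetical stand-in for a real unit of work, and a thread pool is used only for brevity — truly CPU-bound work would want processes):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Ask the OS how many cores we have instead of hard-coding 2, 4, or 8
workers = os.cpu_count() or 1

def encode_chunk(n):
    # Hypothetical stand-in for one unit of real work (e.g. a video slice)
    return n * n

# The same binary scales from a 2-core iMac to an 8-core Mac Pro untouched
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(encode_chunk, range(16)))
```

On the dual-core machine this sizes the pool to 2; move the same code to an 8-core box and it sizes itself to 8, no recompile needed.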
 
I can't wait to get my hands on my own 8-core Mac; I currently have a PC with an AMD Athlon 64. *tear* No wait, there's a 400MHz G3 iMac sitting with Tiger in my room. I think it can get close to the 8-core Mac Pro's specs :D
 
It turns out the 2.66GHz 8-core chips are about the same price as 3.0GHz 4-core chips. So the price differential will be product positioning, not raw cost.

Rocketman

Correction: You mean 2.66 GHz 4 Core chips versus 3.0 GHz 2 Core chips.

Woodcrest is a dual-core chip, Clovertown is 4. The Mac Pro uses 2 Woodcrests for 4 cores, a Mac Pro with 2 Clovertowns has 8 cores.
 
Programmers should make the effort to accommodate upcoming multi-core designs into their software development cycle. Once a new system is released, it should be a minimal effort to test and tweak the software for the new system and quickly release an update, thus making their customers only wait a week or two from when the systems first ship as opposed to several weeks/months.


This is not true at all. Multi-threading often introduces more problems, such as race conditions, deadlocks, pipeline starvation, memory leaks, and cache coherency problems. Furthermore, multithreaded apps are harder and take longer to debug. Also, using threads without good reason is not efficient (context switching) and can cause problems (thread priorities) with other apps running. This is because threads cannot yield to other threads and will block if an undesirable condition like a deadlock exists. Like on Windows, when one app has a non-responsive thread and the whole system hangs. Or like when Finder sucks and locks everything.

Also, multithreading behaves differently on different platforms with different language environments. Java threading might behave differently than p-threads (C-based) on the same system (OS X). I am a professional developer, etc.
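As a small illustration of the race conditions mentioned above (a toy Python sketch, not from any of the apps discussed): four threads increment a shared counter, and the lock is the only thing keeping the read-modify-write from losing updates.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # Without this lock, "counter += 1" is a read-modify-write
        # sequence: two threads can read the same old value and one
        # increment silently vanishes -- the classic race condition.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All 4 x 10,000 increments survive only because of the lock.
```

Bugs like this are timing-dependent, which is exactly why multithreaded apps are harder to debug: remove the lock and the program may still pass most test runs.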
 
So, that means that there's no practical reason Apple couldn't give an 8-core BTO option right away... Say, for around an additional $999? (The 3GHz quad-core model is an additional $799.) For those that need it, the extra $200 would be well worth it. For those that just want the bragging rights, well, I guess they can afford the $200.
 
So, that means that there's no practical reason Apple couldn't give an 8-core BTO option right away... Say, for around an additional $999? (The 3GHz quad-core model is an additional $799.) For those that need it, the extra $200 would be well worth it. For those that just want the bragging rights, well, I guess they can afford the $200.

Let's see if there will be a CPU market. People buy quad-core chips to max out their 3.0 Mac Pros, then they sell their 3.0 chips to the owners of 2.0 Mac Pros. I wish there was a way to put the 2.0 Xeons into Mac minis ;)
 
Anyone have an idea how this might affect OS X Server usage? I am starting the process of looking to add another Xserve, and while I like the Woodcrest numbers I see, will Clovertown be a huge impact?

I normally run AFP, Mail, FTP, web services, and LDAP, and want to add QuickTime Streaming Server along with some new features in Tiger which I hope get improved upon in Leopard.

My gut says 8 cores would give some performance improvements, but I'm sure others out there know more than I....
 
I guess this is fairly boring news for gamers, if Quake is any indication...

Yes. Games are mostly designed for single CPU, single core at this point. The Mac Pro is overkill for gaming, and I hear FB-DIMMs are detrimental to gaming performance too.

I just want a headless Mac that's more powerful than the Mini, and not as expensive as the Pro and as "workstation-ish", i.e. it should use standard desktop parts like Conroe and DDR2, and includes at least one 16x PCI-E slot that can fit, power, and cool the latest gaming cards.
 
Hmmm...

I'm really looking forward to this. If the 8-core 2.66 Mac Pro is going to cost just a little more than a quad 3GHz Mac Pro, I'm going to be buying as soon as it hits the website...

As a recent Mac switcher, coming straight in with a base-spec Mac Pro (4x2.66/4GB/1750GB HDD), I'm now happy to invest in a more powerful machine.

My only concern is the heat... my current Mac Pro runs 24/7 and 95% of the time is at full load across all 4 cores... and it's still silent, with temps never going over 52°C. Will these quad-core chips run much hotter, meaning the front fans have to spin faster/noisier to keep the machine cool?
 
CNET Overlooked Running More Than One Copy Of The Same Application At Once

Had they launched two copies of Toast and started crunching video from EyeTV recordings into high-quality DVD images, they would have realized how easily one of those 8-core systems can be hosed.

I'm very confused about when Apple is going to offer 8 cores, since the Stoakley platform chips needed to get the most out of an 8-core configuration won't be out until next spring. :confused:
 
The 2.66GHz Clovertown Is 120 Watts, While The 2.33GHz Quad Is The Same 80 Watts As The Woodies

I'm really looking forward to this. If the 8-core 2.66 Mac Pro is going to cost just a little more than a quad 3GHz Mac Pro, I'm going to be buying as soon as it hits the website...

As a recent Mac switcher, coming straight in with a base-spec Mac Pro (4x2.66/4GB/1750GB HDD), I'm now happy to invest in a more powerful machine.

My only concern is the heat... my current Mac Pro runs 24/7 and 95% of the time is at full load across all 4 cores... and it's still silent, with temps never going over 52°C. Will these quad-core chips run much hotter, meaning the front fans have to spin faster/noisier to keep the machine cool?
Maybe. If Apple goes from the 80-watt 3GHz Woody to the 120-watt 2.66GHz Clovertown, then definitely. But if Apple chooses to only offer the 80-watt 2.33GHz dual Clovertown, then perhaps not, and we'll all be happier campers. Or perhaps Apple has other cooling schemes in mind to keep a set of 2.66GHz Clovertowns quiet. Given that the logic board stays the same, I'd rather buy the 2.33GHz version.
 
Don't Encode With Handbrake From Optical DVDs. Encode From Images On HDs

For some time, Handbrake didn't use more than two cores - owners of Quad G5s reported CPU usage of exactly 50 percent, then someone changed it and Quad G5s reported 100 percent CPU usage.

What we don't know: Was the code changed to use up to four processors, or as many processors as are available? Developers are usually very unwilling to ship code that they haven't been able to try out, so expect a version using eight cores about two days after the developers have access to an eight core machine.

In the case of Handbrake, encoding to MPEG4 seems already limited by the speed of the DVD drive; you can't encode faster than you can read from the DVD. H.264 is still limited by processor speed. Using eight cores is not too difficult; for example, if you encode 60 minutes of video, just give 7 1/2 minutes to each core.
I almost NEVER use Handbrake from an optical DVD. That makes no sense to me. Why would you do that? :confused:

I use Handbrake 12-18 hours of every day, after creating high-quality DVD images from EyeTV HDTV recordings with Toast 7.1 UB. On a Mac Pro, Handbrake can use more than 3 cores and Toast can use all 4. This is why I want an 8-core Mac Pro. Once you start running Toast and Handbrake simultaneously, you see why those of us who do this kind of repetitive DVD image creation for Handbrake MP4 compression truly need 8 cores NOW. :eek:
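The even-split idea in the quoted post (60 minutes of video across eight cores, 7 1/2 minutes each) is easy to sketch. This is a hypothetical helper, not Handbrake's actual code:

```python
def split_ranges(total_minutes, cores):
    """Divide a video into one contiguous time slice per core."""
    chunk = total_minutes / cores
    return [(i * chunk, (i + 1) * chunk) for i in range(cores)]

# 60 minutes across 8 cores -> eight 7.5-minute slices,
# e.g. (0.0, 7.5), (7.5, 15.0), ... (52.5, 60.0)
slices = split_ranges(60, 8)
```

Each slice can then be encoded independently and the results concatenated, which is why video encoding parallelizes so well.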
 
This is not true at all. Multi-threading often introduces more problems, such as race conditions, deadlocks, pipeline starvation, memory leaks, and cache coherency problems. Furthermore, multithreaded apps are harder and take longer to debug. Also, using threads without good reason is not efficient (context switching) and can cause problems (thread priorities) with other apps running. This is because threads cannot yield to other threads and will block if an undesirable condition like a deadlock exists. Like on Windows, when one app has a non-responsive thread and the whole system hangs. Or like when Finder sucks and locks everything.

Yes, yes, all true... Somewhat. True in the sense of how a lot of programmers approach current threading problems and various development theories. And we're currently limited by our development tools and the operating systems to a certain degree.

Also, multithreading behaves differently on different platforms with different language environments. Java threading might behave differently than p-threads (C-based) on the same system (OS X). I am a professional developer, etc.

Yes, but so many things behave differently from one platform to another. How is writing a low-level thread management system for each platform different from writing the core functions of a 3D graphics engine that can run cross-platform and take advantage of various differences or features - OpenGL, Direct3D, 3DNow, etc.? Cross-platform development always has its issues, as does using different development tools. You obviously know this, as do many programmers, so what's the point of the doom and gloom? It's always been this way and is just a part of the development process.

Massively multithreaded apps do exist and have been written for various platforms over the years. Here in Windows and OS X land, programmers go into panic mode when multithreading is mentioned. Yet SGI had IRIX scaled to 256 CPUs, and visualization apps utilizing multithreading on individual systems as well as across cluster nodes, displaying images built by multiple graphics pipes using multithreaded OpenGL that could scale from 1 to 16 graphics pipes and any number of CPUs.

Anyway, my whole point is that the software industry will eventually have to tackle this problem head-on and will overcome it. I just don't understand the current resistance and denial exhibited by so many "developers". The hardware is coming; in many situations it's already here... Why fight it? It's time (for many) to look at threads in a new light. Upcoming CPU roadmaps place newer quad-core chips in the market in mid '07, with common Xeon and Opteron workstations/servers moving to quad-CPU (16-core) with a 45nm process and lower wattage. 8-core CPUs are to arrive in '08, and 12 and 16 cores per CPU in late '08 or early '09...

MHz isn't increasing, and the consumer still wants the next version of their game or video editor to run twice as fast with more features on the new system they just bought, which now has 32 cores instead of 18, and they'll switch to a competitor's product if you take more than two or three months to ship your software update... What do you do?
 
That really depends on the program, on how "parallelizable" the application is.

The simplest way to think of it is like this: Let's say you have a program that first has to calculate A. Then, when it's done that, it uses the result of A to calculate B. Then, when it's done that, uses the result of B to calculate C, then C to D, and so on. That's a *serial* problem there. The calculation of B can't begin until A is done, so it doesn't matter how many processors you have running, all computation is held up on one spot.

On the other hand, let's say you have an application that needs to calculate A, B, C and D, but those four values are not dependent on each other at all. In that case, you can use four processors at the same time, to calculate all four values at the same time.

Think of it like baking a cake. You can't start putting on the icing until the cake is done baking. And you can't start baking the cake until the ingredients are all mixed together. But you can have people simultaneously getting out and measuring the ingredients.

So that problem is partially parallelizable, but the majority of its workload is a serial process.

Some software applications, just by their very nature, will never be able to do anything useful with multiple processors.
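The serial-versus-parallel distinction above can be sketched like this (toy Python; `work` is a hypothetical stand-in for an expensive calculation):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # Hypothetical stand-in for an expensive calculation
    return x + 1

# Serial chain: each step needs the previous result, so extra
# cores can't help no matter how many you have.
a = work(0)
b = work(a)
c = work(b)
d = work(c)

# Independent tasks: A, B, C, and D don't depend on each other,
# so four cores can attack them all at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, [0, 10, 20, 30]))
```

The serial chain takes four steps of wall-clock time regardless of core count; the independent batch ideally takes one.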

What a very lovely analogy. Thank you.

For me... 8 cores would be for the bragging rights only... so I guess I won't get one anytime soon. I'm sure 4 would suit me fine, though. I need to upgrade my 1GHz G4!!!
 
What about additional memory requirements?

Just asking a question, understand. But is there a need to have more memory when twice as many requesting sources are accessing the memory pool?
 