Threads can be difficult - usually, when they are, a better solution and/or design is called for anyway.

You are correct that I/O processing is outside the scope of your original argument - sorry, I get sidetracked easily ;)

Just curious - what language and/or platform do you have your experience in? My experience (with threading at least) is from C++ on Solaris (with RogueWave tools) and Java on Solaris, OS X and Linux.

Admittedly, I've never had the "pleasure" of trying threading in C.

I particularly enjoy Java's threading support (and am looking forward to getting time to learn Objective-C/Cocoa's implementations) as it does all the hard work for you.
 
eric_n_dfw:

My experience (with threading at least) is from C++ on Solaris (with RogueWave tools) and Java on Solaris, OS X and Linux.
Excellent choices. :)

I've got a few years of C/C++ with threads (posix pthreads) and networking via the BSD sockets. I've played with them in their SDL (http://www.libsdl.org) form a little in order to examine Windows compatible threading and networking. Most of my forking and creative IPC was done in Perl, I've done some Java and a fair amount of PHP, some Tcl, some x86 assembly, and who knows what else. The heavy stuff has been C/C++ and Perl though.

I've done most of my programming in Linux, but I use OSX more these days for the SDL/C stuff because I like project builder a lot (which is one of those apps that does use 2 CPUs for performance). I program in Windows enough to make sure my stuff compiles there. ;) Edit: I've also done some programming on Solaris and Irix, but not a whole lot. Seemed a lot like Linux except with more stupid problems.

More to the original thread, I've been really tempted to spend $5k on a new dual 1.42, combo drive, 512mb, 120gig, 23" LCD, rad-9700 system. Drool. But what would I do with my current system? :(
 
Originally posted by ddtlm
eric_n_dfw:


Excellent choices. :)

I've got a few years of C/C++ with threads (posix pthreads) and networking via the BSD sockets. I've played with them in their SDL (http://www.libsdl.org) form a little in order to examine Windows compatible threading and networking. Most of my forking and creative IPC was done in Perl, I've done some Java and a fair amount of PHP, some Tcl, some x86 assembly, and who knows what else. The heavy stuff has been C/C++ and Perl though.

I've done most of my programming in Linux, but I use OSX more these days for the SDL/C stuff because I like project builder a lot (which is one of those apps that does use 2 CPUs for performance). I program in Windows enough to make sure my stuff compiles there. ;) Edit: I've also done some programming on Solaris and Irix, but not a whole lot. Seemed a lot like Linux except with more stupid problems.

More to the original thread, I've been really tempted to spend $5k on a new dual 1.42, combo drive, 512mb, 120gig, 23" LCD, rad-9700 system. Drool. But what would I do with my current system? :(
Do you do much OO? The reason I ask is that threaded designs and good OO design seem to go hand in hand, while procedural programming requires a lot of "roll your own" coding to support threading. (I could be wrong.)

I'll tell you one application that needs a lot of work - whether that's multithreading or just plain rewriting - and that's Quicken. That thing is such a slug at simple things that it's embarrassing!

As far as the new machines go - I'll tell ya', the Student Developer Discount price on the top-of-the-line with 512MB and the ATI 9700 is not much more than I paid for my G3/400 B&W back in '99!! :D (No discount for me back then - my company is putting me back through college now at 31. :D) So I am mighty tempted!!!
 
All this talk about threads reminded me of something.

Does anyone remember all those excited reviews when OS 8 came out and supported multithreading?

People were speculating about loads of uses for threads. The most useful one for me was the idea that Photoshop could be coded to use threads so you could fire off multiple filters on multiple documents at once while scanning in a new document and printing another. It never happened, and Photoshop still works like some kind of monotasking graphics OS.

The one area where threads could have been used under OS 8 - 9.x would have been printing; even with background printing enabled I still stare at a modal dialog box for a number of minutes before documents are even added to the printer queue.

There are numerous areas where threads could have been used for years on end, but no one used them.

Anyway back to the new 20" displays...

They're OS X only!!!

not fair!

boo hoo!
 
Your idea of a multi-threaded Photoshop sounds similar to batch-actions (you know, converting a folder of images to another format or something like that).

OS 8 and later 9 did benefit from this multi-threading. The control strip for one did a great job of running alongside any hardware-intensive application. Audio CDs played non-stop even when rendering movies with QuickTime Pro. I am sure you can come up with your own examples.

For its day, and for any single CPU G3 Mac, OS 8 and especially OS 9 were very good OSs. The move to a UNIX-based OS was not what I expected it to be but let me just say that for a multi-processor system this is the only OS I would want to use.
 
The problem with printing in Mac OS Classic has little to do with threading and much to do with the lack of preemptive multitasking.

If Photoshop took advantage of multiple processors in OS 8/9, then it was multithreaded or at least spawned processes.
 
The breakfast Mac

If you are reading this and eating breakfast at the same time then I can imagine what you are using as a cup holder for your coffee.;) Good thing not all Macs are slot-loading yet.
 
Originally posted by ktlx


I am sorry, but if your total focus on Macs is games, you are a doofus. You are paying way too much money for way too little. If you cannot stand something by Microsoft then buy a PS.

Apple should be creating PowerMacs that address its market strengths. Those are in content and media creation. Those areas benefit the most from dual processors because the applications are multithreaded.
Another boo-hooer - "You use your Mac for gaming! You can't do that, you shouldn't do that," and yada yada yada. One more time: I use my Mac for everything, gaming included, believe it or not. Thank goodness someone at Apple has realized this - have you seen their gaming page? Oops, I forgot, you're not supposed to do that; get back to that Gaussian Blur! The Mac can, should, and will do it all, period! After all, it is the digital hub. And with that, I can't wait until UT 2003 and DOOM 3 are released, so take that, all you non-gamers who are missing so much fun on the Mac. Gaming is a great way to compare machines, video cards, etc. If this were not so, Mike would not be using those figures all the time in his tests at that great site Accelerate Your Mac! I am sure there are a lot of people who would love to see a 1.25 or a 1.42 in a new iMac, but we probably won't get that, because again it would eat into the pro line and all those non-gamers might buy one of those machines instead of a PowerMac! The new displays are cool! The PowerMacs are moving forward! I just wish they would let the iMac soar, that's all!
 
Re: Which GPU do I want?

Originally posted by appeLappe
I've noticed that you can choose to upgrade the GPU to either the GeForce4 Ti or the ATI 9700 Pro for the same price.

That, for me, is a no-brainer. The 9700 stomps the GeForce4 Ti in all ways right now. For the same price, the 9700.
 
Your idea of a multi-threaded Photoshop sounds similar to batch-actions (you know, converting a folder of images to another format or something like that).

An example of a batch action :

Image1.tiff -> play action -> save file -> Image2.tiff -> play action -> save file etc...

This is linear, not parallel; you can't do anything at all with Photoshop apart from leaving it in the background and using another application while it's batch processing. Every stage of a batch process comes after the previous one; you can't run 10 separate batch processes at once, for instance. Whether it uses dual CPUs for some of the filters and functions is irrelevant to whether it's threaded or not; it only EVER does one thing at once, and you have to wait for it to finish whatever that thing might be before you can ask it to do something else. All a batch does is save you from opening files yourself; all actions do is play back what you want them to do.

If this was threaded in some way, you could batch process while working on another image or images, all with separate filters and image manipulations happening simultaneously. You can't.

OS 8 and later 9 did benefit from this multi-threading. The control strip for one did a great job of running alongside any hardware-intensive application. Audio CDs played non-stop even when rendering movies with QuickTime Pro. I am sure you can come up with your own examples.

An audio CD playing in the background uses ZERO CPU time, other than sending a request for the first track and telling the drive to start playing the CD. It doesn't tax the hardware in the slightest; all it's doing is pushing the output of the CD into the audio hardware, and then it comes out of the speakers. About the only thing I can think of where it DOES use CPU time but doesn't interfere too much with other applications is playing MP3s. Playing them in QT is the exception to that; it's worthless for playing MP3s because even opening a folder with a few dozen files can interrupt playback for a split second.

Sorry if this is nitpicky; it's just true. The only real-world use of multithreading under OS 8 - 9.x that I'm aware of is copying files in the Finder between multiple drives and folders.
 
Good value

Currently an Apple Display is the best-value PC monitor on the market. The price drops are necessary to offset people's response to war: saving money. The cynic would say that Apple's products are toys for the rich, but Mac users know that they are better value than any PC (generally Macs do have better resale value than same-generation Windows hardware).

If times were better for the computer industry, we might have had the video iPod already. G4s would probably be running at the same speed, but MHz never worried me.
 
Originally posted by Sol
Your idea of a multi-threaded Photoshop sounds similar to batch-actions (you know, converting a folder of images to another format or something like that).

OS 8 and later 9 did benefit from this multi-threading. The control strip for one did a great job of running alongside any hardware-intensive application. Audio CDs played non-stop even when rendering movies with QuickTime Pro. I am sure you can come up with your own examples.

For its day, and for any single CPU G3 Mac, OS 8 and especially OS 9 were very good OSs. The move to a UNIX-based OS was not what I expected it to be but let me just say that for a multi-processor system this is the only OS I would want to use.

If Apple read this and other related messages and simply improved multitasking a lot, it would make a huge difference in users' day-to-day operations.

Rocketman
 
I think a big part of the reason that Apple separates its consumer and pro machines by CPU and processor speed is: A) The price of the processor. Face it, to sell these things at a good margin you can only cut the cost in so many places and still turn out a good product with name-brand hardware. B) Supply of CPUs. I think this is the main one. How many G4s at 1.42GHz do you think are available? This has been true for a very long time for Apple. Because they are smaller, they just can't drive enough demand for the faster processors to get Motorola to figure out how to crank out more of them. The faster the processor, the lower the production yields.
 
Re: Regarding Multiprocessing and OS X

Originally posted by bentmywookie
Ok, for those arguing about the single processor vs. dual processor issue, all I want to say is that an OS (at its lowest level) needs to do resource allocation/management. It's the interface between the hardware and the software.

True.


So, to characterize the scenario, a piece of software comes along and tells the OS, "hey I need to cook this meal, here's the recipe, now off ya go!" And the processor looks and sees, "well I have two ovens, 3 bowls, 5 fridges, etc." and basically it uses whatever it has to get the job done. Ok, enough with this analogy (for now).

Bad analogy, because the recipe most software follows is "do this, THEN do this, THEN do this", not "do this and this and this in any order". The string of things that have to be done in order is called a thread of execution. MOST, not all, software is written with a single thread of execution for the "main task" of the software. Like baking bread: you have to mix the ingredients, then knead it, then let it rise (iterate), then bake. If you bake it first, you end up with fried eggs and singed flour. No matter how many cooks you have in the kitchen making one loaf of bread, the process can not be sped up because the labor can not be divided up and made parallel instead of in series.

That is not to say that most software is single-threaded. In fact, I would be surprised if any OS X (or indeed any post-Win95) software that does any amount of work whatsoever were single-threaded, simply because then the app would be unresponsive to both the user and the OS while it did its work. Hence, apps that "do work" tend to do their "work" in a background thread while the "main" thread continues to respond to user and OS messages. It is possible in this common case for the message-handling thread to execute on one processor while the "work" thread executes on the other processor, leaving the work thread a bit more "room" on its CPU. It is not common for this to happen, though, as the OS will preferentially place threads of a single application on the same processor (unless they both use full timeslices and the other CPU is relatively unused), since threads within an app are far more likely to share data with each other (and to require cross-thread signalling and mutexes) than threads in different apps, and these things are more efficiently done when the threads are on the same physical processor.

What multiple processors buy you, in a single-threaded-application world, is the ability to run two separate applications on two separate processors. If you had one app running full-bore and consuming massive swaths of CPU time, and a dozen other apps, the kernel will schedule the boorish app on one CPU and the other apps on the other CPU. Likewise, if you have two CPU-hog apps running, each will be scheduled on its own CPU and the rest of the processes in the system will divide up amongst the two CPUs on top of the two hogs.

Third scenario, an app is written to divide its "main work" between two threads. The OS initially schedules both threads on one CPU, then quickly sees that they are both "CPU Hog" threads and pushes one over to the other CPU next timeslice. Thus, the OS quite efficiently can use its two processors even though the app itself wasn't "strictly" written to an SMP API.

Fourth and final scenario, like the third, except the app uses the SMP API. When the second "work" thread is spawned, the application tells the kernel that it will be a CPU hog and should be placed on a different CPU from the first if possible. This removes the couple of timeslices where both threads share the same CPU before the kernel sorts out that they need to be separated.

Fifth, an improvement on the fourth, the app asks the Kernel if it has a low-usage CPU available, and only if there is a second CPU available for processing does it spawn a second thread for processing. This is a bit more difficult to program, but often significantly improves performance on the single-CPU case while not affecting multiple-CPU performance.

So, going from one single-threaded app to multiple single threaded apps to a multiple-threaded app to a multiple-threaded app that is SMP-aware, you get increasingly efficient use of the second processor.

OS X does a pretty good job of dividing up the work here, and of course you'll never see a "single" app running on OS X because OS X itself consists of multiple background processes. But, you won't see your main app run significantly faster on a dual-proc machine than on a single-proc machine if it is single-threaded. The only advantage the dual-proc machine has is that OS X's background processes are shunted over to one CPU and thus your main app has the whole CPU to itself (but, of course, not the whole System Bus or Memory Bus or disk, etc ...)

Now, a word about benchmarks: benchmarks tend to be sequential, not parallel. By this I mean one task is executed while the machine is doing nothing else, then another task is executed, then another, and so on. If the apps handling the tasks are single-threaded, you have the first scenario above (one CPU-hog thread on one CPU and the OS threads on the other), which shows very little advantage to having multiple CPUs. Because the popular applications out there usually have a single "work" thread, this skews benchmarks heavily in favor of single-CPU systems. This type of benchmarking is realistic for single-task servers whose only interaction with users is how fast the job got done. However, "real life" with a desktop or workstation computer is generally not like this. People multi-task, and get annoyed when their computers do not. Yes, they want to be able to browse the web and print that report and listen to iTunes while their CD is burning. Dual (or more) processors allow this to happen, and that is never shown in benchmarks.
 
Originally posted by iAndy
Sorry this doesn't prove that PC are difficult to upgrade, only that you might not be too hot recognizing one end of a screwdriver from another ;)

Quite true, regarding my failed attempt at upgrading (nothing to do with mechanical incompetence, just with not contacting NEC to see if their motherboard was capable of upgrading before going out and buying the upgrade card based on the chipset and BIOS information). As is always true, anecdotal evidence proves nothing; it just adds a story to tell.

However, my decision not to (generally) upgrade my CPU is purely financial, and not at all related to some mechanical ineptitude (I do quite fine swapping in and out PC parts, thank you!). Noting that I have the willpower to hold off on a new computer for a few years, not requiring the latest/greatest every 6 months, I find I more often than not have a use for the old box, or can find someone to buy it or at the very least a school that needs it. Thus, my choice is, spend $500 (assuming a new MB/innards) to end up with one faster CPU in a two-year-old system with a small HD and rickety video card, or spend $2000 to end up with a faster overall system AND add a new system to my server farm AND bump one of the oldies out for sale or donation. One computer or two? I find that the monetary difference is outweighed by the difference in utility, and once every couple of years I can afford it.


That is not necessarily the case. There are a number of companies out there making a very healthy living from cpu upgrade cards.

Ah, yes, the upgrade cards. I had looked into those and dismissed them way back when because they were reportedly highly unstable and had physical issues in my case (they ended up too tall and blocked the video card slot). But, yes, I imagine they've gotten better since then (and I can't remember the last time I heard about someone with an upgrade card having a flaky system because of it).

Note the following review of a recent add-in card at Tom's Hardware:

http://www6.tomshardware.com/cpu/20030107/index.html

ASSUMING your BIOS will support a newer processor, the PowerLeap solution can work.

Upgrading to a 1.4GHz P3 cost them $250 (assuming tax was the reason ... they later noted that the upgrade package itself was just under $160), and certainly out-performed the old 866MHz P3 they had, but didn't come anywhere near to what you would see with a real 1.4GHz P3 system (which should have been on-par with if not slightly better than the 1.7GHz P4 results they showed).

So, yes, you can use a card and "sorta" get an upgraded CPU (living with the deficiencies of the rest of your system and the added performance hit of the daughtercard). But don't expect (like most people seem to expect when they start talking about upgrading their CPU) top-of-the-line performance anywhere near that of a whole new box.

This is very analogous to the Mac situation: you can add a CPU-on-a-card upgrade (and people do), but you shouldn't go in expecting 1.25GHz performance out of your old Cube.


Boy are you digging a hole for yourself here ! ;) Check out www.Powerleap.com for info on their latest 1.4GHz upgrade offering @ $160 - enough said !

I stand corrected. Do note that that is a Celeron, not a P3 (you have to hit the "Add to Order" button and read the fine print there ... Celerons are recommended for BX-based motherboards anyways, which is the most common situation), but, yes, a fairly good upgrade for the money.

Now, back to the scenario I was talking about (because it's the one that would conceivably give you a "fully-functioning" high-end CPU, and because the post I was replying to specifically talked about replacing their ZIF processor with another processor): I would love to see you buy a 3.04GHz P4 and fit it in a motherboard which had housed a 2.0GHz P4. Doesn't work, because Intel frequently changes the socket form factor to prevent precisely this. While you could possibly add a little bit of speed to that 2.0GHz motherboard (depending on which line of 2.0GHz P4s you got, the "2.0" or the "2.0A"), a direct chip-for-chip replacement is generally a one-time upgrade, and a minor one at that. Unless you upgrade the motherboard, etc.
 
Re: Small threads ARE used

Originally posted by eric_n_dfw
Small threads are used all over the place. You seem to think that coding them is in some way difficult. It is not. You code your objects to be thread safe (fairly trivial if you know what you are doing), you instantiate one and tell it to run. Sometimes you create a bunch and (in Java terminology) put them in a thread pool, ordering all of them to fire off at the same time.

Easy. And dangerous. Win NT dies at 500 threads on the system. Note that a thread lasts a little longer than the work you are making it do, and that creating/destroying threads is fairly expensive.


I've never looked at Mozilla's code, but if you think about it, threads would be an obvious way to pull in images for an HTML page. You fire off one for each image and they independently load the graphics without the main thread worrying about them.

Well:

1) I can assure you that the "main thread" of the application is not the same as the socket-communications thread. In any app. Period. Socket communications requires its own thread per connection, or a lot of pretty fancy programming.

2) HTTP 1.1 uses "persistent connections", which allow multiple items (the main page and its images, for instance) to be pulled over a single TCP/IP connection. This is important because TCP/IP connections are fairly expensive to make. I suspect that Mozilla "always" uses the persistent-connection speed-up instead of using multiple threads (which would be significantly slower on a narrow-band connection than a single-thread download), just so it doesn't have to worry about multiple routes of communication.


I'd presume that Final Cut Pro does similar type things - firing off a pair of threads to render one frame each.

Way too granular. But, yes, having one thread handle, say, three keyframes of video, and another thread handling the next three keyframes, should be fairly efficient. Except - oops - the key bottleneck there is likely to be the disk and system bus bandwidth, not processing speed.


Don't believe me about the browser point? Try OmniWeb and open the inspector that shows the page rendering status (I can't remember its actual window name) - each element of the page shows up and reports its progress - you can kill any one element. This is multithreading.

Well, it's one way of doing it. I'm not going to say it's the "right" way because there are trade-offs involved (the hit on your internet connection and the hit on the server included). But, yes, if the "base" case for optimization included a long network lag to the server, plenty of processing power on the server, and a fat net connection between web browser and server, then this is a good approach.

Back to the original point (I think): Mozilla is an example of an app that uses one thread to do its "work". IMHO, Mozilla is really a poor example all around regarding threading. There are times when Mozilla becomes completely unresponsive on Windows XP while downloading HTTP headers... very poor implementations in there somewhere (and, yes, I should go fix it if I know so much, but damn... "in there somewhere" is a far cry from knowing where to look in that monster!)
 
Re: Re: Small threads ARE used

Originally posted by jettredmont

1) I can assure you that the "main thread" of the application is not the same as the socket-communications thread. In any app. Period. Socket communications requires its own thread per connection, or a lot of pretty fancy programming.

Oops, HTTP is one of the times when one thread-per-socket is not required (because the client doesn't need to respond to the server sending data back ... so long as the machine TCP buffers don't overflow, which is distinctly possible if you aren't careful ...). But still, I wouldn't want to do socket communications in the main thread of a graphical application. Too messy to code and thus to maintain.
 
jettredmont:

1) I can assure you that the "main thread" of the application is not the same as the socket-communications thread. In any app. Period. Socket communications requires its own thread per connection, or a lot of pretty fancy programming.
You've severely over-asserted this claim. Threads are one approach; however, threads are definitely not needed for low-data-flow applications. In fact, I think I could construct a number of examples where threads are an inferior way to manage sockets.

That is not to say that most software is single-threaded. In fact, I would be surprised if any OS X or in fact any post-Win95 software that does any amount of work whatsoever were single threaded, simply because then the app would be unresponsive to both the user and the OS while it did its work. Hence, apps that "do work" tend to do their "work" in a background thread while the "main" thread continues to respond to user and OS messages.
I guess you didn't read through my posts at all, where I pretty much said the same thing ;) and then dismissed this as irrelevant in the single-CPU vs dual-CPU debate. I'd be amazed if the performance gained by checking for keystrokes or whatever is greater than the performance lost by the SMP OS constantly swapping the worker thread from CPU to CPU (with the OS overhead and the extra cache misses this causes).
 
Originally posted by ddtlm
jettredmont:


You've severely over-asserted this claim. Threads are one approach; however, threads are definitely not needed for low-data-flow applications. In fact, I think I could construct a number of examples where threads are an inferior way to manage sockets.

Correct. As I stated in my next reply.

But, to be clear, if you're not using a thread per socket connection, you have to be sure that:

1) You "get around" to each socket before the local (machine-defined) TCP buffers overflow. TCP buffer overflow is messy, and can cause system-wide misbehavior at least on Windows.

2) There are no assumed "response times" which conflict with your worst-case reading lag.

This is okay for HTTP connections generally, assuming you have a reasonable "round-robin" time.

I guess you didn't read through my posts at all where I pretty much said the same thing ;) , and then dismissed this as irrelevent in the single-CPU vs dual-CPU debate.

Guilty. Replied before reading the whole thread (I mean, it's 14 pages long!) Sorry for repeating!

I'd be amazed if the performance gained by checking for keystrokes or whatever is greater than the performance lost by the SMP-OS constantly swapping the worker thread from CPU to CPU (with the OS overhead and the extra cache misses this causes).

In theory, the kernel shouldn't be swapping any process back and forth between processors for just that reason (cache misses). Are you sure that OS X is doing this? If you can document it, I'd suggest shooting a bug report off to Apple, 'cause it is a bug. The kernel should switch a thread from one processor to another only when there is a "significant" advantage to doing so. "Significant" is fuzzy, yes, but it implies that the switch will last more than one or two time slices!
 
jettredmont:

In theory, the kernel shouldn't be swapping any process back and forth between processors for just that reason (cache misses). Are you sure that OS X is doing this?
I'm pretty sure, based mainly on the CPU monitor usually showing each CPU 50% used when one worker thread is going full out. I saw this in the Linux 2.2.x days, but the 2.4.x kernels fixed it. I hope OSX fixes it some day as well.
 
...and,

Look how far off the topic of this thread has gotten. Kinda funny. It would be cool to see a visual representation (in tree form, of course) of the many tangents people go off on in a discussion like this.

:)

BTW, where are the new iMacs!!?!?!!! :confused:
 
macphisto:

BTW, where are the new iMacs!!?!?!!!
Apple can't just go and do new product releases every day. Give them a few weeks.

More on topic: damn, it is hard to pass up the top-end PowerMac and 23" screen. An actual decent deal. I could get the Sony version of the screen, but it costs $800 more (or so) and its only real benefit is a second video input.
 