
anshelm

macrumors member
Original poster
Jan 17, 2002
As per krossfyter's request, I'm going to try to simplify some concepts and put them in analogy form. I think it might be helpful to discuss what certain things mean without taking other threads off-topic. Any corrections, better analogies, or more computer concepts to discuss are encouraged.

For starters, I have two posts to bring up (questions by krossfyter): threads and pipes. Let the fun begin! :)
 

anshelm

macrumors member
Original poster
Jan 17, 2002
Threads:

" what your opinion on the thread boggin down the system?"

I'm not sure which "system" you're referring to, so let's cover the easily explained cases. There are two ways to multitask/multithread a system.

You go out for a drive. It's a busy day, and you arrive at an intersection. This intersection is set up with a stoplight. Your road gets to go for a while, and then the light turns red. The other road in the intersection gets to go for a while, and then the light turns red for them.

Later on that drive you come to another intersection. Only this time, it's a two-way stop, and you have the stop sign. The road that intersects with yours keeps going at a fairly fast clip. Eventually you get to go after a break in the traffic on the other road.

Compare the two intersections. One intersection has a stoplight that dictates who gets to go and for how long. This slows down both roads, but it makes sure that both roads get to pass through the intersection in a timely manner. The other intersection has a stop sign. The road that doesn't have the stop sign goes faster than the road that does. However, the road that has the stop sign can sit waiting for quite some time before getting to go, so it can really slow down the traffic on that road.

The intersection with the stoplight is similar to the preemptive multitasking multithreaded system. The intersection with the stop sign is similar to the cooperative multitasking multithreaded system. There are pros and cons to both systems.

The pros of a preemptive multitasking system are that you can have a large number of programs running, and they will all get fair time slices (not equal-sized slices; the OS decides how much of a time slice each deserves). Also, it FEELS more responsive while other programs are running, because the UI gets enough time slices to keep the mouse moving. The con of a preemptive system is that all threads get slowed down somewhat, because they have to wait their turn to execute. (In this system, the system slows down threads.)

The pro of a cooperative multitasking system is that one program can run faster than on any other (popular OS) multitasking system. The big con of the cooperative system is that other threads have to wait until the current thread decides it's time to give up control. Also, it feels very unresponsive when one program refuses to give up time very often. (In this system, threads DO slow down the system.)
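If it helps to see the two intersections as code, here's a minimal Python sketch (purely an illustration, not how any particular OS is written): the first half uses OS threads, which get interrupted on a timer like the stoplight; the second half uses generators, which only hand over control when they choose to yield, like the stop sign.

```python
import threading
import time

# Preemptive (the stoplight): the OS decides when each thread's turn is up.
def worker(name):
    for i in range(3):
        print(f"preemptive: {name} doing step {i}")
        time.sleep(0.01)        # the OS can interrupt a thread at any point anyway

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Cooperative (the stop sign): each task runs until IT decides to yield.
def coop_worker(name):
    for i in range(3):
        print(f"cooperative: {name} doing step {i}")
        yield                   # politely give up control; a selfish task could just not bother

tasks = [coop_worker("A"), coop_worker("B")]
while tasks:
    for task in list(tasks):
        try:
            next(task)          # let this task run until its next yield
        except StopIteration:
            tasks.remove(task)
```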

Does that answer your question about threads slowing down the system? (A bit of a ramble, sorry).
 

anshelm

macrumors member
Original poster
Jan 17, 2002
Piping:

Beginning with an analogy. :) You decide you want to go into the manufacturing business. You also decide that you want to make computers. Easy enough, right? So you buy a warehouse and set up shop. First, you have to decide how to set up your assembly line.

To put together a simple line, you build 4 machines. Machine 1 is assigned the task of assembling the motherboard components. It takes a motherboard, puts in a processor and RAM, and sends it on. Machine 2 is assigned the task of assembling the case components. It takes the motherboard passed to it by Machine 1, puts it in the case, and screws it in. It puts in all the drives and cards and screws them in. Then it sends it on to Machine 3. Machine 3 takes the computer and plugs in all of the cords and cables (power, IDE, etc.), and sends it on. Machine 4 takes the assembled computer, puts its outer case on, and screws it in. It then sends out the completed computer.

Now this is a fairly efficient line. However, each machine can only do its task up to a certain speed, and no faster. The reason is that each task is too complex to be done extremely quickly. Because of this, each machine has a limited speed, but it gets quite a bit done in each pass.

So, to get more speed, you build a second assembly line that is much larger (say, 20 machines). Machine 1 sticks the RAM in the motherboard and sends it on. Machine 2 lifts the zero-insertion-force lever on the processor socket and sends the motherboard on. Machine 3 drops the processor (one that already has the heat sink and fan attached) into the board. Machine 4 closes the zero-insertion-force lever. Machine 5 lifts up the motherboard and puts it in the case. Machine 6 screws in the motherboard. Machine 7 puts in one drive. Machine 8 screws it in. Etc., etc. You now have a huge line of machines.

Now this is a fairly inefficient line, at first glance. Each step is SO small that it doesn't get much done. However, the speed of this line can be cranked through the roof.
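To see why that works, here's a tiny toy simulation in Python (nothing to do with real hardware, just the analogy in code, assuming every machine takes exactly one tick): on every tick each machine hands its computer forward, so once the line fills up, a finished computer rolls off every single tick.

```python
# Toy clocked simulation of an assembly line with n_stages machines.
def simulate(n_stages, n_items):
    line = [None] * n_stages              # what each machine is holding this tick
    waiting = list(range(n_items))        # computers that haven't entered the line yet
    finished, tick = [], 0
    while waiting or any(slot is not None for slot in line):
        tick += 1
        # a new computer enters machine 1; everything else moves forward one machine
        line = [waiting.pop(0) if waiting else None] + line[:-1]
        if line[-1] is not None:          # the last machine just finished a computer
            finished.append((line[-1], tick))
    return finished

print(simulate(4, 3))    # [(0, 4), (1, 5), (2, 6)]   -- short line: first one done after 4 ticks
print(simulate(20, 3))   # [(0, 20), (1, 21), (2, 22)] -- long line: slow to start, then one per tick
```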

This is all well and good. Let's make up some times. For simplicity, let's say each step in assembly line 1 takes 2 minutes, so the whole process takes 8 minutes. Now let's say each step in assembly line 2 takes 30 seconds. That's a total of 10 minutes. The shorter line gets a single computer out the door faster... but only by about 20%. You might expect the far shorter line to have a much larger advantage, but because the longer line can do each step that much faster, it stays almost as close.
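Putting those made-up times into a few lines (same invented numbers as above, nothing measured):

```python
# Invented numbers from the analogy above (in seconds).
short_stages, short_step = 4, 2 * 60    # 4 machines, 2 minutes each
long_stages,  long_step  = 20, 30       # 20 machines, 30 seconds each

# How long ONE computer takes to travel the whole line:
print(short_stages * short_step / 60)   # 8.0 minutes
print(long_stages  * long_step  / 60)   # 10.0 minutes -- the short line wins, but only by about 20%

# Once a line is kept full, a finished computer rolls off every "step" time,
# which is why the long line can be cranked through the roof:
print(3600 / short_step)                # 30 computers per hour
print(3600 / long_step)                 # 120 computers per hour
```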

Now this is all well and good, until you have a problem. You have to have the lines produce different things from time to time, based on supply-and-demand equations (after all, if you produced 10 billion computers, you wouldn't be able to charge $1,000 for each of them!). If you have to stop line 1 to decide which to produce, it will start producing again faster than line 2. This is a big performance hit on line 2's part. So, you decide to put a machine at the front of line 2 to GUESS which computer to produce. Now, if this guess is wrong and you have to empty the line, you take a bigger performance hit than if you had simply emptied the line and waited for the results from the supply-and-demand equations. So you make the guessing machine the best you possibly can. It ends up having about a 70% correct ratio.
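The payoff of that guessing machine is easiest to see as expected-value arithmetic. A rough sketch, using only the invented numbers from the analogy (real branch predictors and penalties differ):

```python
# All numbers are the invented ones from the analogy, in seconds.
flush_penalty = 20 * 30        # a wrong guess means refilling all 20 machines: 600 s
wait_penalty  = 20 * 30        # stopping to wait for the answer also leaves the line empty
hit_rate      = 0.70           # the guessing machine is right about 70% of the time

cost_if_you_always_wait = wait_penalty                      # paid on every decision
cost_if_you_guess       = (1 - hit_rate) * flush_penalty    # paid only when the guess is wrong

print(cost_if_you_always_wait)   # 600 seconds lost per decision
print(cost_if_you_guess)         # 180 seconds lost per decision, on average
```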

So line 2 can keep up with, and in some areas outdo, line 1. So you have to build a better line 1 that is 7 machines long. This might sound familiar, because it is: the G4 has 4 stages in its pipe, the G4+ has 7 stages in its pipe, and the Pentium 4 has 20 stages in its pipe.

Now, there is more to this than first meets the eye. Not all of the processes in the processor get to have a "full pipe" (which is what I just described). What does this mean? Simple, time for another analogy. :)

You decide to make certain portions of each line faster by building another line that does certain things (like putting the heat sink on the processor). Now, because of this, the original line has to wait for those sub-lines to finish. This is a performance hit, of course, and it is called a "half pipe" because it doesn't run all by itself anymore.

The G4 can do certain things better than the older Pentium 4s because of the AltiVec engine (which allows it to do certain kinds of math faster). Overall, a deeper pipe (long, with each step being very simple) can be pushed to higher clock speeds, but it doesn't necessarily outperform a shorter pipe. However, you can keep pushing the longer pipe to faster and faster speeds, so eventually it will outperform the shorter pipe. The Pentium 4 has outperformed the G4 for a short while now, so I hope Motorola really is going to bump up the speed.
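On the AltiVec point: the idea of a vector engine is doing the same piece of math on several numbers in one operation instead of one at a time. A loose conceptual parallel in Python with NumPy (AltiVec itself is programmed very differently, in C/assembly with vector intrinsics; this only illustrates the idea):

```python
import numpy as np

a = np.arange(8, dtype=np.float32)      # eight values...
b = np.full(8, 2.0, dtype=np.float32)   # ...and eight more

# One "vector" multiply handles all eight pairs at once, rather than
# looping over them one by one; that is the kind of math a vector unit speeds up.
print(a * b)
```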

Now, before the myth goes any further: MHz by itself doesn't tell you how fast work gets done. MHz is simply a measure of the clock: how many cycles (ticks) per second the processor runs at. How much actually gets done during each of those cycles depends on the design of the chip, so clock speed only comes close to meaning the same thing as performance when you're comparing very similar designs.
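To put that in numbers: the time a job takes depends on the clock AND on how much gets done per tick. These figures are invented purely for illustration (they are not real G4 or Pentium 4 measurements):

```python
# Invented, illustrative numbers only -- not real G4/Pentium 4 figures.
work = 1_000_000_000                 # some fixed pile of instructions to chew through

clock_a, per_tick_a = 800e6, 2.0     # lower MHz, but more work done per tick
clock_b, per_tick_b = 1500e6, 0.9    # higher MHz, but less work done per tick

print(work / (clock_a * per_tick_a))   # ~0.63 seconds
print(work / (clock_b * per_tick_b))   # ~0.74 seconds -- the lower-MHz chip finishes first
```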

Anyway, sorry about the length (I know that some attention drifts after longer posts).
 

lera

macrumors member
Dec 26, 2001
Originally posted by anshelm
Threads:

...explained cases. There are two ways to multitask/multithread...
...The intersection with the stoplight is similar to the preemptive multitasking multithreaded system. The intersection with the stop sign is similar to the cooperative multitasking multithreaded system. There are pros and cons to both systems...
...Also, it feels very unresponsive when one program refuses to give up time very often. (In this system, threads DO slow down the system.)

Dude, isn't multithreading about optimising for multiple processors?
(You did a beautiful job of describing multitasking, but I think you have the multithreading thing wrong.) IDK, but I think so.
 

lera

macrumors member
Dec 26, 2001
I don't think that the P4 outdoes the G4. I guess I'll have to check the benchmarks on some of the comparison sites.
 

anshelm

macrumors member
Original poster
Jan 17, 2002
Actually, multithreading and multitasking are closely related. As I've explained in other posts, threads are branch-offs (or spin-offs, if you wish to call them that) of tasks, and they report back to their tasks. A multitasking system is generally multithreaded, because threads can be handled the same way tasks are.

For those who didn't read my other posts, a simple summary: A task is an application, and a "thread" is a smaller process that the "task" creates to make work go faster.
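If you want to see that in code, here's a minimal Python sketch (the names, like `spellcheck_chunk`, are invented for the example): the application is the "task", and it spins off threads that do a slice of the work and report back.

```python
import threading
import queue

results = queue.Queue()                      # how the threads "report back" to the task

def spellcheck_chunk(chunk_id):
    # Pretend this is a slice of work the application handed off.
    results.put((chunk_id, f"chunk {chunk_id} checked"))

# The "task" (the application) creates the threads...
workers = [threading.Thread(target=spellcheck_chunk, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# ...and then collects what its threads reported back.
while not results.empty():
    print(results.get())
```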
 

anshelm

macrumors member
Original poster
Jan 17, 2002
Actually, the analogy I used for full and half piping was wrong, but it was the quickest I could come up with.

A better explanation:

You have a manufacturing plant to build these computers of yours that you are selling. You also want to build other things (such as certain computer accessories) on a different assembly line. Now, some of the machines used along the way to assemble certain things in the computers and their accessories are vastly beyond your resources to duplicate, so you decide to use the machine in both lines. Basically, one line uses it, then the other line uses it, etc.

That is what half-piping is: basically, when one pipe depends on another pipe for certain steps. It is a huge performance hit, yes, but it reduces transistor counts quite a lot and allows chips to do more without duplicating portions of the chip.
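A tiny Python sketch of that sharing idea (purely illustrative; a real chip is not written this way): two "assembly lines" that both need the same expensive "machine", so whichever gets there second has to wait.

```python
import threading
import time

expensive_machine = threading.Lock()    # the one machine both assembly lines share

def assembly_line(name, items):
    for i in range(items):
        # ...steps this line can do with its own machines...
        with expensive_machine:         # wait here if the other line is using it
            time.sleep(0.01)            # the shared step itself
        print(f"line {name} finished item {i}")

lines = [threading.Thread(target=assembly_line, args=(name, 3))
         for name in ("computers", "accessories")]
for t in lines:
    t.start()
for t in lines:
    t.join()
```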
 