First, no application software change is required. The chip should handle the parallelism transparently. If you put a crossbar switch between the 4 cores and the 3 memory controllers, then 3 different cores can pull data from three different controllers in parallel with no change in software. If you don't have too much data sitting behind a single controller, the work will naturally spread out. In contrast, if you put a large amount of memory on the other side of one controller, then it is more likely that 2 cores will both be pulling data from that pool of memory. At the extreme, if all four cores are pulling data from a single memory pool, you have the "front side bus" configuration that was abandoned with the Nehalem architecture. Intel has a pretty bonehead design if that parallelism doesn't get leveraged on a regular basis. Likewise, the OS is pretty bubblegum if it tends to herd multiple applications into single memory pools.
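If you want to see the "don't herd everything into one pool" idea from the software side, here's a minimal sketch, assuming a Linux box with libnuma installed (treat it purely as an illustration; the buffer size is arbitrary). It places one buffer on each memory node so the allocations aren't all served by a single pool:

[code]
/* Minimal sketch, assuming Linux + libnuma (compile with -lnuma).
 * Spreads per-worker buffers across memory nodes so no single pool
 * has to serve every core. Buffer size is arbitrary. */
#include <numa.h>
#include <stdio.h>

#define BUF_SIZE (64 * 1024 * 1024)  /* 64 MB per worker, arbitrary */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;
    void *buf[nodes];

    /* Place one buffer on each node instead of letting them all
     * land in whatever pool the first allocation touched. */
    for (int n = 0; n < nodes; n++) {
        buf[n] = numa_alloc_onnode(BUF_SIZE, n);
        if (buf[n] == NULL) {
            fprintf(stderr, "allocation on node %d failed\n", n);
            return 1;
        }
        printf("buffer placed on node %d\n", n);
    }

    for (int n = 0; n < nodes; n++)
        numa_free(buf[n], BUF_SIZE);

    return 0;
}
[/code]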
I understand what you're getting at, but there is an influence from the software. Your argument assumes the necessary data is actually available (and in such cases, it is correct).

It has to do with how the application loads the data into RAM from disk. If it's not optimized to keep the data fed in a manner that reduces/eliminates misses, it's going to slow the throughput down (i.e. was the file loaded sequentially for all cores, or was it broken down in a manner that allows the cores to load data simultaneously, so no core is starved while it waits for the data it needs?).
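As a rough illustration of that "broken down" loading pattern, here's a minimal sketch, assuming POSIX threads and pread() are available; each worker reads its own slice of the file by offset, so no core sits idle waiting on one sequential load. The file name and chunk math are made up for the example:

[code]
/* Minimal sketch, assuming POSIX threads and pread()
 * (compile with -lpthread). Each worker loads its own slice
 * of the file concurrently at its own offset. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

#define NWORKERS 4

struct slice {
    int    fd;
    off_t  offset;
    size_t length;
};

static void *load_slice(void *arg)
{
    struct slice *s = arg;
    char *buf = malloc(s->length);
    if (buf == NULL)
        return NULL;
    /* pread() lets every thread read at its own offset without
     * fighting over a shared file position. */
    ssize_t got = pread(s->fd, buf, s->length, s->offset);
    printf("worker read %zd bytes at offset %lld\n",
           got, (long long)s->offset);
    free(buf);
    return NULL;
}

int main(void)
{
    int fd = open("input.dat", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    size_t chunk = (st.st_size + NWORKERS - 1) / NWORKERS;

    pthread_t tid[NWORKERS];
    struct slice s[NWORKERS];

    for (int i = 0; i < NWORKERS; i++) {
        s[i].fd = fd;
        s[i].offset = (off_t)i * chunk;
        s[i].length = chunk;
        pthread_create(&tid[i], NULL, load_slice, &s[i]);
    }
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);

    close(fd);
    return 0;
}
[/code]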

A poorly written compiler can cause issues with I/O as well, and cause some degree of starvation. I see it essentially as a bottleneck caused by software rather than hardware.

There is an additional incremental speed hit with interleaving, but not leveraging the multiple memory controllers is a waste of money on the new tech. Might as well have stuck with a Core 2 arch box.
Of course it is. But not all software can, and even the software where it is possible may have problems keeping the right data fed in time, i.e. pre-loading the data needed to prevent starvation as much as possible (not a situation where there's just a lack of RAM capacity for the task).

A daughterboard just for RAM is misguided. What you want are taller heatsinks on the processor packages since they are going to be more effective.
I'm looking at it from a commodity parts (fans in this instance) and dimensional POV (while keeping within PCB layout rules). That is, yes, you could go taller and narrower to accommodate another pair of DIMM slots, but what about air flow (i.e. available fans in terms of size and CFM)?

Taller and narrower heatsinks could force the use of fans (potentially stacked) that are unable to push enough air flow due to dimensional limitations (width).

That is not a new trend. That is where the computer market has been since the start. Big machines did big jobs. The only "new" factor is gluing together a "big machine" from smaller, decoupled components.
It's the "glued" aspect that I draw the definition though. That differentiates it from previous super computers (i.e. Crays of yore), as it was a single system. Multiple enclosures may have been needed, but it wasn't commodity systems via standardized interfaces, such as Infiniband, that modern clusters are created from.

You are skipping past the point I was making. Workstations are not going to obviate all clusters.
I wasn't trying to indicate that workstations will obviate clusters. On the contrary, actually.

My point was that current and not-too-distant workstations will improve the cost/performance ratio. The additional cores will allow users to do more work for less money (i.e. an 8-core SP chip vs. a DP quad-core).

Now, in cases where even a DP system won't suffice (what I meant by "heavy lifting"), a cluster/farm is an effective means to increase the performance and resulting output. And it's getting cheaper and easier to do on a smaller scale (i.e. dedicated or at least fewer users = fewer systems = lower cost). There are independent filmmakers, for example, who have built render farms out of miscellaneous PCs.

As GPGPU processing becomes more prevalent, that could increase performance as well, without the need to increase the unit count in the cluster. And it should be more cost effective too (i.e. SP boards with 3 or 4x graphics cards = better cost/performance ratio than a DP system).

Most clusters are expensive. That means you have to share with someone else to make them cost effective. Workstations give you the freedom to do whatever crazy computational load you want and nobody else cares because they are using something else.
Costs are getting lower, making clusters more accessible to smaller companies, and even independents/consultants (video, engineering, scientific modeling, ...). For example, instead of having to purchase time, it may now be possible to construct a dedicated cluster for the desired task. As it's dedicated, it could be made smaller (fewer systems = lower cost). Combined with the improving cost/performance ratios that are expected, it should become a viable solution for users previously unable to afford it.

It won't be affordable for everyone (dedicated in-house), but with reduced costs to construct a cluster, hosted time should also come down, making that more accessible as well (I'm thinking additional hosting companies will also arise, helping to keep costs in check through competition).

I get your point in the context where you might have a number of workstations on a LAN and only infrequently/non-concurrently does a single person at a time need to run a job on the cluster (perhaps overnight when everyone goes home: John Doe gets Monday and Wednesday nights, Fred Flintstone gets Tuesday and Thursday, Barney gets Friday and Saturday, etc.).
Scheduling has its place, but it's not the only consideration I meant, as you've seen above. ;)

I'm seeing the potential for clusters to become more commonly used as a result of multiple factors converging.

If all the heavy lifting is done in the cluster, you could just get a Mac mini, since the main app running to accomplish that is an Xterm or VNC session into the real machine.
In some cases, this may be all that's needed, depending on interface and/or graphics requirements.

The other problem is software costs. If the software is licensed per system and the license is a significant percentage of the workstation/server cost, it costs more to put it into a cluster.
This is definitely the potential "fly in the ointment" aspect to clusters/farms.

Hopefully, software vendors whose products are meant to be used this way will license them accordingly, in order to generate sales in a manner that maximizes profits (i.e. not making licensing so expensive that it causes potential purchasers to exclude the product from consideration).

But it remains to be seen if this will actually happen.
 
Apple Store

I don't know what others think of this... but I've been checking the Apple Store for when Apple would be updating the iPhone 4, and it just came to me that maybe Apple is going to update everything (Mac Pro, Mac Mini, etc...) once they update for the iPhone. When will that be? I have no idea! :p
 
Yes...

The I/O of the pro is a need for the PROFESSIONAL (depending on trade of course) and PROSUMER ENTHUSIAST/INDEPENDENT PRODUCER.

A good share of above-consumer HD cameras need/require PCI expansion and HD breakout cables. An iMac, although a great asset in any arsenal, is in no way a flagship for a large Hollywood studio or the local wedding videographer.

As for storage, this is where I fall into prosumer: my iTunes library is nearly 1TB and only growing, and that's just iTunes media; what about game media, and project media? 2TB is too small for me at this point, 4TB would be livable, but 8 would be ideal. A DROBO or PROMISE solution is definitely the route for iTunes, but the rest of my media is headed to an 8TB pool.

Agreed. I'm nominally an independent music and video producer, and the machine I'm currently using is a G5 2.5 dual with 8 GB of RAM and 4.7 TB of disk.

As promising as the Core i7 iMac looks, I have a LOT of audio interfaces for my music production that may not fit into such a beast.

The Mac Pros are ungodly expensive for what you get (wasn't it Gizmodo that had the recent article "it's gotten straight stupid to buy a Mac Pro"?).

I'm just going to wait. Some more.
 
I don't know what others think of this... but I've been checking the Apple Store for when Apple would be updating the iPhone 4, and it just came to me that maybe Apple is going to update everything (Mac Pro, Mac Mini, etc...) once they update for the iPhone. When will that be? I have no idea! :p

That is exactly what I said in the very first post of this thread; it is actually the whole reason I posted this thread. They will update the store on June 15, which is a Tuesday, because that is the day the iPhone 4 goes up for pre-orders.
 
The I/O of the pro is a need for the PROFESSIONAL (depending on trade of course) and PROSUMER ENTHUSIAST/INDEPENDENT PRODUCER.

A good share of above-consumer HD cameras need/require PCI expansion and HD breakout cables. An iMac, although a great asset in any arsenal, is in no way a flagship for a large Hollywood studio or the local wedding videographer.

As for storage, this is where I fall into prosumer: my iTunes library is nearly 1TB and only growing, and that's just iTunes media; what about game media, and project media? 2TB is too small for me at this point, 4TB would be livable, but 8 would be ideal. A DROBO or PROMISE solution is definitely the route for iTunes, but the rest of my media is headed to an 8TB pool.

I love you.
 
NAS

Does anyone have experience with the HP MediaSmart Server? The single internal HD limitation of the iMac was one of the drivers pushing me towards a MP. Going NAS for my media, and benefiting from the streaming ability of the MediaSmart Server, at first glance at least, seemed compelling...
 