The "custom CPU" bit could mean lots of things - note that most of the Pros have used Xeons without heatspreaders, which are technically "custom" for Apple...
 
Perhaps "timed-exclusive" would be a better term than "custom".

To design and fab a custom part, especially in small quantities, is hideously expensive. This is probably just Intel giving Apple first dibs on upcoming tech, e.g. Thunderbolt - you know, for publicity; it also helps that Apple doesn't ship crazy volumes of desktop/laptop products, so Intel's "just starting up" manufacturing process can keep pace.

Timed-exclusive makes a lot of sense, but I can tell you we make "custom chips" in the fab I work in all the time. Rarely are they truly custom, specifically manufactured for a customer in very limited production numbers. Usually they are a chip that's already in production, with one or more layers differing from the standard production run. It is very common to change one implant layer or use a different reticle on a photo layer. Technically that makes it a custom chip, but it doesn't alter the overall chip design or the cost to the buyer.
 
But TBolt is not a network interconnect - it's a PCIe link to an expansion chassis, which is daisy-chainable to a small number of additional expansion chassis.

These expansion chassis could have PCIe "cards" for network interconnects, but the network will not be TBolt.

Right, ThunderBolt itself is not, but it could facilitate networking, and in fact could transfer any data that currently traverses PCI Express. I think in theory it could remove the need for port differentiation entirely.

I'm not sure if that means someday it could be used to daisy chain MacMinis together, I think it might but I don't know if they'd still need a switch to manage the traffic. It sounded to me like that's what the poster was asking, and I directed him to the nearest current, comparable resource, Apple's page on grid computing.
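As a rough back-of-envelope on the "transfer any data that currently traverses PCI Express" point (illustrative figures only: first-gen Thunderbolt's advertised 10 Gbit/s per channel, and PCIe 2.0's roughly 500 MB/s per lane after encoding overhead):

```python
# Sketch: how much PCIe traffic could one first-gen Thunderbolt channel carry?
# The 10 Gbit/s and 500 MB/s figures are assumptions for illustration.

def gbit_to_mbyte_per_s(gbit: float) -> float:
    """Convert a Gbit/s line rate to MB/s (1 Gbit = 1000 Mbit, 8 bits per byte)."""
    return gbit * 1000 / 8

TB_CHANNEL_GBIT = 10       # one Thunderbolt channel, advertised rate
PCIE2_LANE_MBYTE = 500     # one PCIe 2.0 lane, after 8b/10b encoding

tb_mbyte = gbit_to_mbyte_per_s(TB_CHANNEL_GBIT)   # 1250 MB/s
equiv_lanes = tb_mbyte / PCIE2_LANE_MBYTE         # about 2.5 PCIe 2.0 lanes
```

So one channel is in the ballpark of a PCIe 2.0 x2-x4 link - plenty for NICs and storage, but well short of what a full x16 graphics slot moves.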
 
It's a space hog because of the need to kill heat, I think. Can't wait to see how the revision turns out, assuming rumors are true. Once the drives go SS and optical bites it, these machines will be half their current size, easily.
Mac Pros are heavy; they might be able to shave a bit off, but I don't think Apple would want to make it too much smaller. Pros need 4 HD slots. And contrary to what Apple says, optical isn't dead yet. It's going to be a few years yet before SSDs reach the capacities working pros require. Unless Apple wants to completely abandon the pro market (a new version of FCP tells me they don't), the Mac Pro will likely stay pretty close to where it's at.

If they could make it so it would approximate, say, the size of a 3U or 4U server, it would help ease the pain of the Xserve being discontinued.
Who rackmounts a Mac Pro? Seriously, whoever made the decision to axe the XServe at Apple obviously has never worked in a datacenter or anywhere else that necessitates the use of racks. If you run an office big enough to warrant a rack (and racks by themselves are $$) you're not going to dedicate space to a Mac Pro. What about maintenance or upgrades? I've upgraded my Mac Pro a few times, trust me it's not fun to lug that beast around.
 
Custom CPU - most likely Apple will be first to get Sandy Bridge-E, in its server version (with CPU interconnect). That's all.
 
I love the MacPro case design, but, owning one, I can see how they could easily compress everything even further. It's a space hog because of the need to kill heat, I think.

If you don't count the arms and legs, it's a reasonable size for a PC with 4 slots (one double), four HDD bays, 2 optical bays, and the convenience of tool-less slide-out sleds and trays.
 
Right, ThunderBolt itself is not, but it could facilitate networking, and in fact could transfer any data that currently traverses PCI Express.

Of course, by putting a NIC in the TBolt PCIe expansion chassis - you could do networking.


I think in theory it could remove the need for port differentiation entirely.

On the host perhaps, but you'll still need a TBolt to network chassis on each host, and a network switch.

And, check prices - most 10 GbE NICs cost more than a MiniMac (and that doesn't include the cha-ching of the TBolt to 10GbE dongle), and the switches to connect them cost as much per port as a MiniMac.


I'm not sure if that means someday it could be used to daisy chain MacMinis together

Six of them....


I think it might but I don't know if they'd still need a switch to manage the traffic. It sounded to me like that's what the poster was asking, and I directed him to the nearest current, comparable resource, Apple's page on grid computing.

If you want more than 6 or 7 systems, yes you'd need a switch. And it would be a Fibre Channel or 10GbE/1GbE switch - not a TBolt switch.
___________

Why would anyone want to build a cluster of Apple's slowest and least reliable systems? Look into the history of why the first iteration of the XServe cluster at Virginia Tech was a failure (hint - the second iteration using XServes with ECC memory was usable).
 
So is it the entire Mac Pro line that will be rack-mountable, or just the server iteration?

From the article, I think just the server iteration. However, if they designed it in a way where it could look like a computer tower but you could add or remove a part to make it rack-mountable, that would be pretty cool.
 
Right, ThunderBolt itself is not, but it could facilitate networking, and in fact could transfer any data that currently traverses PCI Express. I think in theory it could remove the need for port differentiation entirely.

I'm not sure if that means someday it could be used to daisy chain MacMinis together, I think it might but I don't know if they'd still need a switch to manage the traffic. It sounded to me like that's what the poster was asking, and I directed him to the nearest current, comparable resource, Apple's page on grid computing.

In which case, instead of daisy chaining, wouldn't a star topology work better, like standard Ethernet?
If it needs a PCIe-based networking chip at one end of the cable to make it happen, why not move all those chips into one box, wire them up to talk to each other with as much bandwidth as possible, and call it a Thunderbolt hub?

If you're talking about having a whole bunch of minis in a cluster, then you're going to have some sort of hub anyway.
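The chain-vs-star trade-off above can be made concrete with a toy hop-count comparison (hypothetical helper names, just to illustrate the worst-case paths in each topology):

```python
# Toy comparison: worst-case hop count between two nodes in a daisy chain
# of n devices vs. a star with a central hub/switch.

def daisy_chain_max_hops(n: int) -> int:
    """Worst case in a chain: traffic from one end to the other crosses n-1 links."""
    return n - 1

def star_max_hops(n: int) -> int:
    """Worst case in a star: node -> hub -> node, two hops regardless of n."""
    return 2 if n > 1 else 0
```

With six minis, a chain forces up to five hops (every intermediate machine forwarding traffic), while a star never needs more than two - which is exactly why a hub/switch wins once the cluster grows.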
 
Bespoke CPUs for the MacPro?

Maybe Intel has managed to sell them on the Itanium platform! :D



I'd love to see a rack-mountable MacPro as much as I'll be sorry to see the current model retire.
 
On the TimeMachine/Airport side, I'm going with the 'not touching this with a 10-foot pole' route, given the trouble introduced by cutting power corners on the first Mac mini:
- underpowered GPU, meaning DVI output issues.

Given the HDD issues on the first batches of Time Capsules after 18 months, I'm expecting much of the same from a new unit that draws less power and dissipates less heat, but I guess we'll see. Maybe they'll do it right this time.
 
Do you mean in order to network them or to have them share a processing load? I think fiber networks may still be better for grid computing: http://www.apple.com/science/hardware/gridcomputing.html.

Depends on what you're doing. While Thunderbolt is not the ideal medium for building a cluster by itself, it does allow 10GigE connections on the mini, and hell, there are plenty of clusters currently in use (including at least three 300+ core clusters at my Uni off the top of my head, and several at the Natl Lab I interned at last summer :p) that still use 1GigE for their interconnect. For apps that aren't heavily network intensive, particularly if you're using local scratch on the nodes, it isn't necessarily a problem.

That said, for the average user, simply "thunderbolting" your minis together ain't gonna get you anything (just as with Xgrid over 1GigE now); you need apps built on a framework like MPI and workloads that are heavily parallelizable in order to take advantage of building even a small cluster!
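The "heavily parallelizable" caveat is essentially Amdahl's law: the serial fraction of a job caps the speedup no matter how many minis you chain. A minimal sketch (standard formula, illustrative numbers):

```python
# Amdahl's law: upper bound on speedup when only a fraction p of the
# work can run in parallel across n nodes/cores.

def amdahl_speedup(p: float, n: int) -> float:
    """p = parallel fraction of the workload (0..1), n = number of workers."""
    return 1.0 / ((1.0 - p) + p / n)
```

A 95%-parallel job on six minis tops out around 4.8x, and a job that is only 50% parallel can never beat 2x even with infinite nodes - which is why apps not built on something like MPI see no benefit at all.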
 
If true, damn, 6 antennas should make it a great WiFi setup for big homes. :)

Overall, Apple is coming out with some sweet hardware, with software that is going to rock.
 

Why would anyone want to build a cluster of Apple's slowest and least reliable systems? Look into the history of why the first iteration of the XServe cluster at Virginia Tech was a failure (hint - the second iteration using XServes with ECC memory was usable).

Ha, I wouldn't. Thanks for the answer.
 
Mac Minis Maybe

I love the MacPro case design, but, owning one, I can see how they could easily compress everything even further. It's a space hog because of the need to kill heat, I think. Can't wait to see how the revision turns out, assuming rumors are true. Once the drives go SS and optical bites it, these machines will be half their current size, easily.

The Intel Mac Pro is great the way it is. It has room for 8 memory slots and 2 optical drive bays, because some people need more than your entry level seems to. SSDs will have to become 10 times bigger while costing 10% of the current price before they can handle the storage needs of many. My son has his G5 Power Macs & Intel Mac Pro hooked up to 4 internal drives & 5 to 10 external drives, and he is a single-person shop. I'm running two 2 TB & two 1.5 TB drives internally in my Intel Mac Pro & have a four-drive eSATA arrangement with four 1.5 TB drives mounted on sleds. I can swap those out for some 1 TB & other sized drives, or use them to hold the internal 1.5 TB drives when they get replaced by 3 TB drives before the year is over.

SSDs for the system & programs, sure - but can you afford to store 10, 20, 50, 100 or more TBs on SSDs? And as files get bigger, more storage is needed.

All of these things need space. Currently I am using 3 slots to take care of my 5 displays. With a new Intel Mac Pro with ThunderBolt & a couple great ATI video cards that number will go down to 2. But there is always need for other PCIe cards. Maybe not for you, but for some. Currently that is an eSata card. ThunderBolt may take care of some of this other card needs, but not for everyone.

I was really thinking that the current Intel Mac Pro case is very nice. I just wish that it had padded handles as 50+ pounds is a little heavy to handle when it needs to be moved. This could be for a trip to the Apple store for one reason or another.
 
Yes and No.

Would this mean the new Mac minis are going to be graphically less powerful?
For 2D they might actually be a bit faster; for 3D they will be slower, especially when using advanced features. Worst of all, Intel's graphics don't support OpenCL.
If so, wonder if it would mean a cheaper entry price...

Possibly. It is all about pricing from Intel. The big unknown here is how much the TB bridge costs. Also the type of Intel processor makes a difference, they come in a wide array of power profiles.

Dave
 
Honestly, I'm most excited about the prospect of an Airport Extreme with 6 antennae. My home is structured such that hard-wiring a specific area is not possible, so I have to use two current-gen AEBS units. And while the 2.4GHz band can bridge the gap, the speedier 5GHz band cannot. I could really use the extra bandwidth afforded by the 5GHz band for high-bitrate media streaming, and double the antennae of the current model might make that possible.
 
There are many advantages.

I never understood what the benefits of a smaller Mac Pro are :confused:

For example, you can fit a larger number in any one area. More compact units could be placed in an EIA rack. Such a Pro could actually sit on a desktop if needed. Correctly designed, the unit can run cooler for a given amount of fan effort. Components are getting smaller every day, so advancing performance does not dictate a big box. Tighter motherboards run faster.

Dave
 