The limitations of XGrid et al.
XGrid as it currently stands is a truly marvelous technology preview. One can reasonably assume that whatever distributed computing technology Apple eventually ships will find its roots in the XGrid paradigm. However, XGrid suffers from a serious limitation - as do all distributed computing systems I have seen to date. It is not a flaw easily dismissed, or one to acknowledge en passant and brush off as something Apple's brilliant engineering team will somehow solve later. The problem is data, and the bandwidth of the network infrastructure interconnecting the nodes.
Those of you who have tried the XGrid technology preview, or who are more generally involved in distributed computing projects (the erstwhile RSA keysearch, SETI@Home, Folding@Home, etc.), have probably noticed that all of the tasks tackled are ones where a relatively small amount of data requires massive amounts of computation. Thus, for a comparatively small "wait" while transferring data, one can distribute small packets of data to independent computers for processing in parallel. Furthermore, parallelising tasks over "slow networks" is only feasible if each packet of data can be processed independently of all the others; otherwise one incurs very serious delays transacting over the network. What do I mean by "slow networks"? Sadly, anything slower than InfiniBand is "slow" for the purposes of high-performance parallel processing. Thus even Gigabit Ethernet and Fibre Channel are "slow" for the purposes of massively parallel processing.
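The trade-off above fits in a back-of-envelope model: fanning independent chunks out to workers only pays off when per-chunk compute time dwarfs per-chunk transfer time. A minimal sketch, in which the work-unit sizes, link speeds, and worker counts are purely illustrative assumptions, not measurements:

```python
# Back-of-envelope model: distribute n independent chunks, one per
# worker, versus computing them all serially on a single machine.
def speedup(n_workers, chunk_mb, net_mb_per_s, chunk_s):
    """Rough wall-clock speedup from farming out independent chunks.

    Assumes the chunks have no dependencies between them, and that
    they must all be shipped out over one shared network link.
    """
    transfer_s = n_workers * chunk_mb / net_mb_per_s  # serialised on the link
    distributed_s = transfer_s + chunk_s              # workers then run in parallel
    local_s = n_workers * chunk_s                     # one box does them in turn
    return local_s / distributed_s

# SETI@Home-style workload: tiny packets, hours of crunching each.
seti = speedup(100, 0.35, 1.0, 3600.0)    # ~99x: near-linear scaling
# HD-transition-style workload: 200 MB payloads, seconds of compute
# each, over roughly 100baseT (~12.5 MB/s).
video = speedup(100, 200.0, 12.5, 10.0)   # ~0.6x: a net slowdown
```

Under these assumed figures the compute-heavy workload scales almost linearly, while the data-heavy one comes out slower than staying local - which is precisely the limitation at issue.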
What is the point of "farming out" complex video transitions if each computer must wait for the previous one to finish and "hand off" the data? What is the point of transmitting a couple of hundred megabytes of HD video - the beginning and end frames of a transition - when you can probably compute the transition more quickly on your own box? Both entail waiting, but the latter potentially entails less waiting than the former, and certainly entails less infrastructure. Anybody who doubts the difficulty of parallelisation need look no further than that supposedly optimised resource hog and bulwark of the Mac design bureau: Photoshop. How many filters are dual-processor aware? More to the point: how many are not? And that is within the confines of a single machine, with essentially zero latency issues. I rest my case.
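The hand-off scenario can be sketched the same way: when each stage depends on the previous stage's output, the compute cannot overlap, so every hand-off simply adds transfer time on top. Again, the stage times, payload size, and link speed below are assumptions chosen for illustration:

```python
def chained_wall_time(stage_times_s, handoff_mb, net_mb_per_s):
    """Wall-clock time for a pipeline of dependent stages, where each
    stage's output (handoff_mb) must be shipped to the next machine
    before that machine can even start."""
    transfer_s = handoff_mb / net_mb_per_s
    handoffs = len(stage_times_s) - 1   # one hand-off between each pair
    return sum(stage_times_s) + handoffs * transfer_s

stages = [10.0] * 8                              # eight 10-second render steps
local = sum(stages)                              # one box: 80 s, no hand-offs
farmed = chained_wall_time(stages, 200.0, 12.5)  # 200 MB over ~100baseT
# farmed = 80 + 7 * 16 = 192 s: well over double the single-machine time
```

Because nothing runs in parallel, "farming out" a dependent chain can only ever be slower than staying on one machine - the hand-offs are pure overhead.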
I do not wish to rain on anybody's parade, but I do not see this particular rumour bringing much import to most professional Mac users' lives. Certainly not for audio, where latency is a serious issue. Certainly not for professional video editors using G5s on run-of-the-mill half-duplex 100baseT networks. Neither do I expect this technology to percolate down to consumer-level applications: though you may perceive Apple to be benevolent, remember that at heart Apple - nay, AAPL - is a corporation seeking a profit. They have clearly identified cluster computing as a significant target - witness the G5 XServe "processor blades" designed explicitly for distributed computing, and the recent introduction of the XSan network storage solution.
XSan and XGrid, running on XServe RAIDs and G5 processor-blade XServes respectively, clearly complement each other and form the two prongs of a concerted attack. This much is certain and evident. By comparison, the Big Mac massively parallel cluster was only the beginning: Apple now truly has all the ingredients necessary to become a serious player in the high-performance/high-reliability computing market. Enticing professional video editors is only one aspect of this policy - and, I expect, not a cornerstone. At the rate technology improves (yes, even at Apple's slow release rates), the time taken for a given render shrinks dramatically every year (if the promised increase from 2GHz to 3GHz "by summer" is to be believed), more-or-less in keeping with Moore's Law. Somebody remarked that it will now be possible to have tomorrow's Mac's rendering speed today. It also means that come tomorrow, you won't need that grid anymore. Apple wants you to buy PowerMacs, and it wants you to keep on buying them. Longevity of your investment means lost revenue to them. Simple as that.
I noticed somebody got excited about the prospect of this technology somehow making its way into iMovie. That is almost certainly not going to happen: iMovie will not support distributed rendering. Even in its fourth incarnation, iMovie is still an inefficient Carbon app that does not even support multiple threads running concurrently on dual-processor machines. It could not possibly be "upgraded" to run in XGrid-distributed style without a major rework. Nor is this corporate oversight on Apple's part: it is sound business strategy, since their analysts are very careful to avoid undercutting their own offerings. Final Cut Pro for the professional market, Final Cut Express for the prosumer market, and iMovie for the lowly consumer. The same goes for all the other Apple software offerings. They may share technology in select places, but there is clearly an active effort to maintain the highly profitable market segmentation currently in place.
So, overall, an interesting development, but hardly one that "Changes Everything".