
Zest28 · macrumors 68030 · Original poster · Jul 11, 2022
For the black hole scenes in the movie Interstellar, they implemented Einstein's full equations in C++, and it took about 100 hours to render with 32,000 cores. Apparently the implementation is so accurate that many scientists consider it the most faithful representation of a black hole, and the movie is even being used for scientific research.

Also, for the movie Avatar, they apparently used HP servers with 35,000 CPU cores and 104 TB of RAM in total to render it.

I believe Apple could put something like this together if they somehow found a way to combine Mac Pros or Mac Studios into a "server farm". Call this interconnection between Mac Pros or Mac Studios "hyper fusion" or whatever marketing term Apple would use.

And it's not that expensive: 1,000 Mac Studios is only $4 million, which is nothing considering that Avatar made $3 billion.
 

Maybe something like, I dunno, Xgrid...? ;^p
 
I believe Apple could put something like this together if they somehow found a way to combine Mac Pros or Mac Studios into a "server farm". Call this interconnection between Mac Pros or Mac Studios "hyper fusion" or whatever marketing term Apple would use.

And it's not that expensive: 1,000 Mac Studios is only $4 million, which is nothing considering that Avatar made $3 billion.
Apple hardware offers good value for money for interactive tasks, but not so much for long non-interactive tasks, such as rendering or scientific computing. For those tasks, other manufacturers' hardware is more cost-effective.

What is Apple's competitive advantage in long non-interactive tasks where raw performance is the primary consideration?
 
The ex-Apple engineers that founded NUVIA wanted to enter the datacenter and server space that Apple has not competed in for over a dozen years.

It is a space that Apple failed in and is unlikely to return to.
 

It's been somewhat discussed although not exactly the same idea.

Apple should start by letting users rent "M Extreme" SoCs from the cloud if they ever make an Extreme SoC.

Combining different SoCs into a single supercomputer is an entirely different problem. No one is going to buy thousands of Mac Studios and manage them in a room. It'd have to be done on Apple's side, in a data center.
 
How parallel is the problem? If you can break it down into a large number of parallel blocks, with no communication required between the blocks, then the solution is trivial. If you instead need to actively exchange data between the workers, you need specialised algorithms and hardware. The latter is the problematic part: high-speed data switches are complex and expensive, and non-uniform memory access is tricky. That's one reason a supercomputer costs more than a bunch of Mac Studios piled together.

Then again, what does the Interstellar code look like, and how does it use the cores? My Mac laptop contains 4,096 GPU cores. If the program could be ported to the GPU, an M2 Ultra could render it in under two weeks.
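The "no communication between blocks" case really is trivial: each work item is computed in isolation and the results are just collected. A minimal Python sketch of that pattern, where `shade` is a made-up stand-in for per-pixel work (not the Interstellar code):

```python
from multiprocessing import Pool

def shade(pixel):
    # Stand-in for tracing one light beam per pixel; a real
    # renderer would integrate a ray here. Pure function of its
    # input, so pixels need no communication with each other.
    x, y = pixel
    return (x * x + y * y) % 255

def render(width, height, workers=4):
    # Every pixel is an independent task; a process pool (or a
    # render farm) can split them up in any order.
    pixels = [(x, y) for y in range(height) for x in range(width)]
    with Pool(workers) as pool:
        return pool.map(shade, pixels)

if __name__ == "__main__":
    image = render(64, 64)
    print(len(image))
```

Scaling this from a process pool on one machine to thousands of cores on a farm changes the scheduling machinery, not the algorithm, which is exactly why such workloads don't need exotic interconnects.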
 
what does the Interstellar code look like, and how does it use the cores?

A.4. Implementation
DNGR was written in C++ as a command-line application. It takes as input the camera's position, velocity, and field of view, as well as the black hole's location, mass and spin, plus details of any accretion disk, star maps and nebulae.
Each of the 23 million pixels in an IMAX image defines a beam that is evolved as described in appendix A.2.
The beam's evolution is described by the set of coupled first and second order differential equations (A.15) and (A.23) that we put into fully first order form and then numerically integrate backwards in time using a custom implementation of the Runge–Kutta–Fehlberg method (see, e.g., chapter 7 of [57]). This method gives an estimate of the truncation error at every integration step so we can adapt the step size during integration: we take small steps when the beam is bending sharply relative to our coordinates, and large steps when it is bending least. We use empirically determined tolerances to control this behaviour. Evolving the beam along with its central ray triples the time per integration step, on average, compared to evolving only the central ray.
The beam either travels near the black hole and then heads off to the celestial sphere; or it goes into the black hole and is never seen again; or it strikes the accretion disk, in which case it gets attenuated by the disk's optical thickness to extinction or continues through and beyond the disk to pick up additional data from other light sources, but with attenuated amplitude. We use automatic differentiation [58] to track the derivatives of the camera motion through the ray equations.
Each pixel can be calculated independently, so we run the calculations in parallel over multiple CPU cores and over multiple computers on our render-farm.
We use the OpenVDB library [60] to store and navigate volumetric data and Autodesk's Maya [61] to design the motion of the camera. (The motion is chosen to fit the film's narrative.) A custom plug-in running within Maya creates the command line parameters for each frame. These commands are queued up on our render-farm for off-line processing.
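The adaptive integration the paper describes, taking small steps where the beam bends sharply and large steps where it doesn't, can be sketched with a generic scalar Runge–Kutta–Fehlberg solver. This is an illustrative textbook RKF45, not the DNGR code; the function name and test equations are made up:

```python
import math

def rkf45(f, t, y, t_end, tol=1e-8, h=0.1):
    """Adaptive Runge-Kutta-Fehlberg: embedded 4th- and 5th-order
    estimates give a truncation-error estimate at every step, which
    drives the step-size adaptation."""
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h/4, y + h*k1/4)
        k3 = f(t + 3*h/8, y + h*(3*k1 + 9*k2)/32)
        k4 = f(t + 12*h/13, y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
        k5 = f(t + h, y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
        k6 = f(t + h/2, y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                               + 1859*k4/4104 - 11*k5/40))
        # 4th- and 5th-order solutions; their difference estimates
        # the local truncation error.
        y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
        y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                    - 9*k5/50 + 2*k6/55)
        err = abs(y5 - y4)
        if err <= tol:          # accept the step
            t, y = t + h, y5
        # Shrink h where the solution bends sharply, grow it where
        # it is smooth (bounded so h never explodes or collapses).
        h *= min(4.0, max(0.1, 0.84 * (tol/err)**0.25)) if err > 0 else 2.0
    return y
```

For example, `rkf45(lambda t, y: y, 0.0, 1.0, 1.0)` integrates y' = y from 0 to 1 and lands close to e. DNGR applies the same error-controlled stepping to the coupled geodesic equations for each beam, in reverse time.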
 
I believe Apple could put something like this together if they somehow found a way to combine Mac Pros or Mac Studios into a "server farm". Call this interconnection between Mac Pros or Mac Studios "hyper fusion" or whatever marketing term Apple would use.

Apple's UltraFusion doesn't really work past inch-like distances, let alone hooking two boxes together.

There are thousands upon thousands of server farms already out there hooked up with Ethernet, InfiniBand, etc. Apple needs to invent next to nothing there, and needs a goofy name to reinvent the wheel even less.


And it's not that expensive: 1,000 Mac Studios is only $4 million, which is nothing considering that Avatar made $3 billion.

A thousand Mac Studios doing a coherent, supposed-to-be-accurate single computation is goofy. There is no ECC RAM. Folks tried building a "supercomputer" out of non-ECC Macs over a decade ago. It didn't work well; they eventually had to rip out all the nodes and replace them with Macs that did have ECC. The original build was largely a waste of time and money. Same thing, different day.

You can hook together a bunch of Macs doing different things for different people. Inside a MacStadium data center over 4 years ago:



There is nothing "new" for Apple to invent here. Studios would just be a different vertical stack, and actually lower compute density (unless the workload shifts almost exclusively over to the GPU cores).
 
I believe Apple could put something like this together if they somehow found a way to combine Mac Pros or Mac Studios into a "server farm". Call this interconnection between Mac Pros or Mac Studios "hyper fusion" or whatever marketing term Apple would use.
Wouldn’t be the first time.

I remember when a university used a bunch of G5s to create one of the fastest supercomputers in the world.
 
Sounds like it would be a candidate for GPU acceleration. It is written there that each pixel is calculated independently.
Yeah I think the actual rendering part would be a great fit for the GPU.

I think the issue is that they were rendering huge amounts of sparse voxel data so getting that to the GPU and then decoding it would be the issue.

Also, if you're writing an experimental renderer that does all kinds of cutting-edge relativistic math given to you by Kip Thorne, I'm not surprised you'd want to do that on a CPU where things are more straightforward (and then just buy more CPUs with Warner Bros money) :D
 