
MacRumors

macrumors bot
Original poster


In a blog post this week, software engineer Andrew Rossignol (my brother!) detailed how he managed to run generative AI on an old PowerBook G4.

(Image: PowerBook G4 running an LLM)

While hardware requirements for large language models (LLMs) are typically high, this particular PowerBook G4 model from 2005 has a mere 1.5GHz PowerPC G4 processor and 1GB of RAM. Despite this 20-year-old hardware, my brother was able to run inference with Meta's Llama 2 language model on the laptop.

The experiment involved porting the open-source llama2.c project, and then accelerating performance with a PowerPC vector extension called AltiVec.

His full blog post offers more technical details about the project.

Similar examples of generative AI models running on the PlayStation 3, Xbox 360, and other old devices have surfaced in the news from time to time.

Article Link: Software Engineer Runs Generative AI on 20-Year-Old PowerBook G4
 
Oh that's so cool!! I love PowerPC machines still. I have my PowerBook G3 Pismo still kicking around. I'll occasionally use it for writing projects or reminiscing about simpler times. Neat to see they're still keeping up with the latest trends *cough* ᶠᵃᵈˢ *cough*
 
Wow! That's impressive! I'm not a developer, so I don't really understand the specifics, but it shows there's still a lot of headroom in current tech if properly optimized!

Properly optimized.... that's the ticket! So many people have demanded that Apple add in tons of RAM, but in doing so, developers get lazy and don't optimize, and then tons of RAM still isn't enough over time. I've appreciated Apple's conservative approach to RAM over the years, as it's forced optimizations, which is critical to software being great.
 
You've just given Apple a reason to reduce the Mac's standard memory from 16GB to 4GB.

Apple: "If gen AI can run on 1GB memory, imagine what it can do with 4GB memory (which is analogous to 8GB)!"
 
If you run Linux Mint on an old Mac, you really notice that macOS is actually quite bloated. They could have made the OS a lot lighter than they did.
 


but that new iPad isn't able to run Apple Intelligence 😭
 
I remember back in the day (not sure if it was OS 9 or OS X) that you could omit printer drivers, extra languages, and more to bring the install size down, which mattered when installing from CD or over USB 2.

Also, would a similar Linux distribution have the same peripheral support out of the "box", or does it require downloading extra repositories to make the "consumer"-friendly experience work? Plus the ecosystem: QuickTime/iTunes/iLife/iWork integration.
Apple set the foundations for an ecosystem that was ready to go out of the box (I still remember downloading drivers for printers and things, but a lot was plug and play).
 
macOS, as well as iOS and iPadOS, is long overdue for a Snow Leopard-like update: zero new features, just squash bugs and reduce system resource usage.

Remains hilarious that people continue to forget Snow Leopard actually introduced hundreds of new features, especially at the system level.

Your brother should teach Apple how it’s done.

Good engineers threw in the towel on browsers for PPC because the modern web is too resource intensive and bloated to fight against. Modern apps would be no different.

There is a roughly 9x slowdown when compared to the same code running on a modern CPU with much higher memory bandwidth.

Pulling AltiVec/VMX into the mix shaved off 30 seconds, so still 8x slower. It's a great concept and I'm glad to see it done but it is in no way realistic for actual use. :)
 