In general I have been very pessimistic about LLMs, but I have to confess that when it comes to troubleshooting issues with BSD or Linux on PPC I have had a lot of luck using ChatGPT. I'd like to think these models could assist in PPC development, but there's still quite a way to go. As a quick example, while they are useful for troubleshooting on maintained distros (BSD, Linux, MorphOS), prompting one for help with an application for classic Mac OS leads it to the AI equivalent of despair ("no, sorry, I can't possibly do this! I have failed you!" type responses).
 
Look, if you want to understand what it's like to try to use these things for difficult work, prompt one with a question about something you believe you are a genuine expert at. It should disabuse you of the notion pretty quickly. The biggest problem with these things is that they sound super confident while spitting out abject lies, hallucinations, and things that are almost believable, then they grovel when you catch them in the act.

LLMs don't know what the answer to a question is. They can't. They guess what the answer to a question statistically should look like. The more specialized the knowledge, the less information it has to synthesize a response with.

They're fine for basic projects with lots of documentation. Hell, I've used ChatGPT to do CSS for websites because I can't stand CSS and it's one of the most documented things on the internet. But for obscure retro computers where a lot of the information has been lost to time? It can sometimes muddle through, but be prepared to do tons of debugging.

If one can't be bothered to write their own code, why should others be bothered to debug it? And if you're going to use the slop machine to write your code and then actually debug the whole thing through and through, you end up spending almost as long as you would have writing it from scratch.

These things are most useful for turning naturalistic questions ("What kind of software can I use to program on classic Macs?") into googleable answers ("CodeWarrior 9").
AI is a tool - it's not going away, and those who know how to use it properly will likely benefit from increased productivity as a result. The issue is knowing how to use it properly:

- It does not completely automate the development process; it only allows for quicker iteration
- There is still inherent value in systems-based thinking and understanding - you cannot expect to "one-shot" complex applications
- The code it writes can often be incorrect, slow, or prone to vulnerabilities, and manual review and approval are still needed - this requires actually understanding basic development practices (version control, clean commits, etc.), as well as what the code actually does, which is a huge caveat these AI providers are not actually telling you about.

The story being sold to people right now is that the average "idea guy" with no coding experience will be able to develop and ship a complete, secure, functional product, and that the AI's capability will scale with the codebase as complexity increases. It doesn't - if you are not steering it correctly and asking it to implement things based on a rigid specification, you will quickly find yourself lost in a black box, unable to explain WHY your program does what it does. Understanding the fundamentals of object-oriented programming is key. There is going to be a massive wave of insecure, poorly developed software made by well-intentioned "vibe coders" flooding the internet in the coming years, and I doubt PowerPC will be any exception to this rule.

Shipping a maintainable platform with plans for years of support - adding new features, expanding existing ones, fixing bugs, etc. - is very difficult without the ability to manually dig through a stack trace or cleanly follow the logic in a 500-line function.

All of that being said, if you understand how to write code without AI, using AI will likely assist you as a developer and speed up your workflow. As somebody who was working with C++ before AI was a thing, I have found it incredibly useful for abstracting away the need to memorize syntax, letting me focus on the actual logic of the program rather than the language. This makes it easy to jump between languages and take on unfamiliar concepts - I don't need to search MSDN until I find the WINAPI function that does what I need; I can just describe the task I'm trying to achieve and it will show me a proof of concept using the API I need. I would compare it to writing out a document with pen and paper vs. using a printer.

I use AI as a quick reference when working on ports (such as the OpenMW port I am working on) as well as consulting it for design and implementation advice regarding the media center I am creating, the AI frontend app I made a while back, as well as icon and visual generation (I'm no artist) - these are apps that would have taken weeks for me to develop classically.

All of what I'm saying here refers to AI specifically in the context of being used as an educational tool - I think more broadly, there are some pretty bad implications for society and a lot of people's cognitive security is in jeopardy - people are going to become more and more accustomed to relying on the output of their LLMs instead of the output of their own brain.

I think it's hard to have an "in-the-middle" take on AI - it's a very polarizing topic. It seems like on one side you have the luddites who are rejecting the technology entirely and denying any potential positive use cases, refusing to understand how it works (or horribly over-simplifying it in their heads). Then on the other side, you have the tech bros way overselling what these tools are capable of, saying AGI is here and we're all losing our jobs, which has the effect of 1. scaring the normies and 2. generating huge amounts of FOMO to raise capital.

Then, arguably, the third side is users who have been thoroughly convinced that the AI is sentient, who reject any sort of grounding contradictory evidence, treat it like a boyfriend/girlfriend, or believe the AI is actually some divine presence confirming their delusions of grandeur - these people are becoming more and more common in everyday life, and you have probably already interacted with somebody who has an unhealthy or anthropomorphized relationship with AI. The technology itself isn't inherently scary - however, the meaning that people assign to it sure is.

I disagree with most of the comment above me, because it's sort of based on the premise that AI is INCAPABLE of writing coherent code or being an assistive tool to developers - your experience using it to play around with CSS does not represent the experience of thousands of developers at Fortune 500 companies who are using AI to push production-ready code on a daily basis.
I have had Claude build entire apps for me where I either haven't read any of the code, or have read very little of the code.

I'm a teacher. I had Claude make an image quiz app, which shows me photos and makes me type in the name of the person in each photo, so I can learn student names. There's a time limit, and a custom bag system that helps me practice more with the ones I have the most trouble with.

It also made me a Wacom drawing app that I use when teaching math, with a bunch of very specific features and shortcuts for when I'm in front of 18 impatient ten-year-olds.

Now, I've still spent lots of time on these projects, testing and asking for changes and so on. But I pretty much didn't do any coding. And all of these were native Objective-C and Cocoa.

I'm an OS X 10.9 user, so a bit newer than PPC, but still quite outdated. The newest Claude models are able to very quickly notice that they're running on an old OS and adapt which APIs they use.

However, I have Claude Code running on Mavericks, because it's a Node app and I was able to recompile Node. To be successful with this type of AI coding, you absolutely need to make the AI able to build and test on your target platform, so that it can independently discover and fix errors!

I'm not sure what this would look like for PPC. Maybe you could run Claude on a newer system and give it SSH access?
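To make the idea concrete, here is a minimal sketch of what that SSH-driven build loop might look like. Everything here is hypothetical - the host name, the project path, and the commands are made up for illustration:

```python
import shlex
import subprocess

PPC_HOST = "user@g4-tower.local"  # hypothetical PPC machine reachable over SSH

def remote(cmd, host=PPC_HOST, dry_run=True):
    """Run (or, with dry_run=True, just construct) a command on the PPC box.

    The agent edits sources on the modern machine, syncs them over, then
    builds remotely and reads the compiler errors back to fix them.
    """
    ssh_cmd = ["ssh", host, cmd]
    if dry_run:
        # Return the exact command line so it can be inspected before running.
        return " ".join(shlex.quote(p) for p in ssh_cmd)
    result = subprocess.run(ssh_cmd, capture_output=True, text=True)
    return result.returncode, result.stdout, result.stderr

# One iteration of the loop: build remotely, capture the tail of the log.
print(remote("cd ~/proj && make 2>&1 | tail -20"))
```

The important part is the feedback path: the model only gets useful at fixing PPC-specific build breakage if the real compiler output from the old machine makes it back into its context.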

Also, this stuff is legitimately expensive and you really can't cheap out and use a worse AI model, it's just not going to work anywhere near as well. I'm paying $100 a month for Claude, which—yes—is a completely absurd amount of money. But on the other hand, in exchange it writes custom apps for me...
 
I'm not sure what this would look like for PPC. Maybe you could run Claude on a newer system and give it SSH access?

I made Sage, a simple Cocoa frontend for serving API requests from common LLM providers like Anthropic, OpenAI and Gemini. https://github.com/doctashay/sage

Working now, but not present in the public build, is an agentic code editor that provides 14 different tools the model can invoke - including browsing the filesystem, grepping through source files, diff editing, checking system capability, working with MacPorts dependencies, and even running arbitrary commands... as you can imagine, this is pretty powerful but can easily be abused without guardrails. It is functional, though: I can ask my agent to create a brand new CMake project, configure it, and test to make sure it builds, and it will (after a couple of tries) actually do it. The biggest problem is cost - I don't have access to the fancy cutting-edge context optimizations you see in Cursor or Claude Code, and all of my guardrails are just "best guess" alignment, so the bill quickly spins out of control - especially with larger models. I tried GLM 4.7 as a local model alternative, and it is happy to use my tools, just not in a very smart or fast way.
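For anyone curious what a tool dispatcher with a crude guardrail could look like, here is a toy sketch in Python. The tool names and the allowlist are invented for illustration - this is not Sage's actual implementation:

```python
import shlex

# Hypothetical guardrail: the run_command tool may only invoke these binaries.
ALLOWED_BINARIES = {"ls", "grep", "cat", "cmake", "make", "port"}

def run_tool(name, args):
    """Dispatch a model-requested tool call and return text for its context."""
    if name == "read_file":
        with open(args["path"]) as f:
            return f.read()
    if name == "run_command":
        binary = shlex.split(args["command"])[0]
        if binary not in ALLOWED_BINARIES:
            return f"refused: {binary} is not on the allowlist"
        # A real agent would subprocess.run() here and return stdout/stderr.
        return f"ok: would run {args['command']}"
    return f"unknown tool: {name}"

print(run_tool("run_command", {"command": "rm -rf /"}))  # refused
print(run_tool("run_command", {"command": "make -j4"}))  # ok
```

An allowlist like this is only a first line of defense; a model that can write files and run `make` can still smuggle arbitrary behavior into the build, which is why the "arbitrary commands" tool is the scary one.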

With enough work and no concern for cost, it could probably exercise a high enough degree of autonomy to take a git repo for a simple dependency (no v8 or anything crazy) and just have it build, test, iterate directly on the hardware until it completes a port entirely on its own - this is pretty much what it's already capable of doing, as long as the user is independently building and testing. Probably going to end up leaving this feature on the shelf until I get a better understanding of MCP, or how I can do this cheaply and securely - maybe in a couple of years when compute costs aren't as insane.
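The build-test-iterate loop described above reduces to something very simple in outline. A generic toy sketch, with the model and build system stubbed out:

```python
def port_loop(build, fix, max_attempts=5):
    """Iterate toward a working port: build, feed errors to the model, retry.

    `build` returns (ok, log); `fix` is the model proposing source edits
    based on the error log. The cost problem is visible right here: every
    failed attempt sends another (growing) log through the model.
    """
    for attempt in range(1, max_attempts + 1):
        ok, log = build()
        if ok:
            return f"port built after {attempt} attempt(s)"
        fix(log)  # model patches the sources using the captured errors
    return "gave up: attempts exhausted"

# Toy demo: a build that fails twice, then succeeds.
state = {"n": 0}
def fake_build():
    state["n"] += 1
    return (state["n"] >= 3, "" if state["n"] >= 3 else "error: implicit declaration")

print(port_loop(fake_build, lambda log: None))  # → port built after 3 attempt(s)
```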
 
maybe in a couple of years when compute costs aren't as insane.
I'm not expecting these models to get cheaper, far from it. You're much more likely to get local models that suck less. Once the big AI bubble bursts, I can't imagine the current subsidized costs sticking around. These companies can't run on circular investment and venture capital forever, and not one of them is actually making money yet. Either the compute complexity is going to go down big time or the costs are going to go up big time, it's not sustainable as it sits.

I think that local models being able to benefit from all the stolen training data the big companies gathered, distilled down to something personal-computer-sized, is waaaaaaay more likely.