I guess they're not going to fix batch deleting simulators, which is very sad. Before this version, you could Shift-click to select multiple, then click Delete > Delete, but now you have to click ⓘ > Delete > Delete on each individual one.
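As a workaround, batch deletion still works from the command line via `simctl`. A sketch (the device names in the last command are just examples; substitute your own names or UDIDs from the list):

```shell
# List all simulators with their UDIDs
xcrun simctl list devices

# Delete every simulator whose runtime is no longer installed
xcrun simctl delete unavailable

# Delete specific simulators by name or UDID (example names)
xcrun simctl delete "iPhone 15" "iPhone 15 Pro"
```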
 
> I had high hopes for AI coding after the early GPTs, but it's cooled off a bit. I use it all day on one-shot tasks and it's great, but it falls apart quickly beyond that.
I feel you, I really do. Current GPTs are restricted in knowledge; their brains are guardrailed, because we humans are "****s" and always want to do the worst of things.

This is why I trained a local LLM: it's unrestricted and does code interpretation, like a T-1000. There are open-source tools you can leverage for that, like dify.ai, for example.

However, the rest of the world thinks AI is the end of all work and the end of life itself. Don't fear it; learn to build it. We are nowhere near the point where you need to be afraid AI will take your job.
 
> I feel you, I really do. Current GPTs are restricted in knowledge; their brains are guardrailed, because we humans are "****s" and always want to do the worst of things.
>
> This is why I trained a local LLM: it's unrestricted and does code interpretation, like a T-1000. There are open-source tools you can leverage for that, like dify.ai, for example.
>
> However, the rest of the world thinks AI is the end of all work and the end of life itself. Don't fear it; learn to build it. We are nowhere near the point where you need to be afraid AI will take your job.
Thank you! I will check these out. I played with LM Studio a while back and got into fine-tuning a GPT-2 "back in the day," but haven't kept up. Hopefully this is a cool path forward.
 
I've been using Claude Opus via the API in earlier Xcode 26 betas. It's not bad for some things, but it really struggles to maintain patterns once complexity spans multiple files, and it over-biases toward bug fixes, throwing away the original goal just to fix something.

I've gotten into the habit of using LLMs to validate things, reduce boilerplate, write tests, etc.

I use Claude Code in the project folder to find things, make lists, etc. I have colleagues who submit whole PRs using AI tools, and to date those have resulted in a lot of rework: print statements instead of the logger, added singletons, ignored existing dependencies, force unwraps. The comment puke also adds so much nonsense to what should nowadays be simple code. It's kind of annoying.

I had high hopes for AI coding after the early GPTs, but it's cooled off a bit. I use it all day on one-shot tasks and it's great, but it falls apart quickly beyond that.
With proper context, AI can do a lot better than what you're experiencing. You can give explicit instructions for most of the things you mentioned, and the AI will follow them.

It's surprising, but AI doesn't know how good a job you want. One person's proper code is another person's overengineered monster. The only way it knows what you specifically want is through the information you provide (aka context engineering).
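In practice this often takes the form of a project-conventions file; Claude Code, for instance, reads a `CLAUDE.md` at the project root. The specific rules below are just illustrations of how the recurring complaints in this thread (print statements, singletons, force unwraps, comment noise) can be spelled out:

```
# Project conventions

- Use the existing logger; never add `print` statements.
- No new singletons; use the existing dependency container.
- No force unwraps (`!`); prefer `guard let` / `if let`.
- Keep comments minimal; do not narrate obvious code.
- Reuse existing types and extensions before adding new ones.
```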
 
> With proper context, AI can do a lot better than what you're experiencing. You can give explicit instructions for most of the things you mentioned, and the AI will follow them.
>
> It's surprising, but AI doesn't know how good a job you want. One person's proper code is another person's overengineered monster. The only way it knows what you specifically want is through the information you provide (aka context engineering).
I do, but I'm sure I can improve too. It works great with Sonnet; when I do the same thing on Opus across multiple things (something simple like taking the default Xcode template app with Core Data and telling it to update some entities, add some APIs, and what each thing should do), it doesn't do great.

It once started writing a method using `NSManagedObject` with the value(forKey:) / setValue(_:forKey:) APIs instead of using the generated subclass. Maybe it's better now; this was an earlier Xcode 26 beta, though I think it was still Opus 4 (definitely not 4.1).
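For anyone unfamiliar with the distinction, the difference looks like this. A sketch, assuming a `Person` entity with a generated `NSManagedObject` subclass and a `name` attribute, where `person` is an already-fetched instance:

```swift
import CoreData

// Stringly typed KVC access on the base class: no compile-time checking,
// a typo in "name" only fails at runtime. This is what the model produced.
let object: NSManagedObject = person
object.setValue("Ada", forKey: "name")
let kvcName = object.value(forKey: "name") as? String

// Typed access through the generated subclass: checked by the compiler,
// no casting, and refactors rename it along with everything else.
person.name = "Ada"
let typedName = person.name
```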

I've been experimenting with stubbing methods and adding a comment like:

`// CLAUDE, take these parameters, do this and return this`

and that works pretty well, but at that point I may as well just write the method myself.

It definitely helps me with SwiftUI, though. I'll prompt something like "In UIKit I would do blah blah blah" and it tells me how to do it in SwiftUI, which is neat, and I learn something. Though it completely failed at converting an existing UIKit view hierarchy to SwiftUI (a view controller that used container views for its children) (Claude Code).
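The conceptual mapping for that container-view case is at least simple to state, even if the tooling fumbled it. A minimal sketch (the view names are made up for illustration): where UIKit embeds child view controllers via container views and `addChild(_:)` / `didMove(toParent:)`, SwiftUI expresses the same structure as plain composition, with child views as values in the parent's body:

```swift
import SwiftUI

// Each former child view controller becomes a child View;
// the container views disappear into the parent's layout.
struct ParentView: View {
    var body: some View {
        VStack {
            HeaderChildView()
            ContentChildView()
        }
    }
}

struct HeaderChildView: View {
    var body: some View { Text("Header") }
}

struct ContentChildView: View {
    var body: some View { Text("Content") }
}
```

The hard part in a real conversion is state and delegation between parent and children, which this sketch deliberately leaves out.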
 
> It's a valid question. If you get ChatGPT or Claude to generate the code, could there be any valid disputes about who really owns the IP?
>
> It should be obvious that a tool is not a creator, but if the code itself hasn't been written by the developer...
>
> ...even messier if the code turns out to be suspiciously similar to some non-open-source code that the AI has "magically found"...
>
> ...it's this second part I'd worry about: "AI-generated code" that turns out to be plagiarized code.
>
> EDIT: I'm not being entirely serious either, but I wouldn't be surprised if a court case or two arise over the coming months.
You do realise that AI needs a lot of handholding to make anything meaningful in larger codebases, unless it's something simple. So it's the developer who decides what, how, and why. It doesn't matter how one writes code, with just a keyboard or with AI; what matters is quality and productivity. Or should we go back to handwriting, perhaps?
 