There ARE two teams. Plural.

"There is 2 teams :"
<snip>
I feel you, I really do. Current GPTs are restricted in knowledge and their brains are railed, because we humans are "****s" and always want to do the worst of things.
Thank you! I will check these out. I played with LLM Studio a while back and got into fine-tuning a GPT-2 'back in the day' but haven't kept up; hopefully this is a cool path forward.
This is why I trained a local LLM; it's unrestricted and it does code interpretation, like a T1000. There are open-source tools you can leverage for that... like dify.ai, for example.
However, the rest of the world thinks AI is the end of all work and the end of life itself. Don't fear it, learn to build it... we are nowhere near the point where you need to be afraid AI will take your job.
With proper context, AI can do a lot better than what you're experiencing. You can give explicit instructions for most of the things you mentioned, and the AI will follow them.

"I've been using Claude Opus via the API in earlier Xcode 26 betas. It's not bad for some things, but it really struggles to maintain patterns across multiple files as complexity grows, and it over-biases toward bug fixes, kind of throwing away the original goal just to fix something."
I've gotten into the habit of using LLMs to validate things, reduce boilerplate, write tests, etc.
I use Claude Code in the project folder to find things, make lists, etc. I have colleagues who submit whole PRs using AI tools, and to date those have resulted in a lot of rework: things like using print statements instead of the logger, adding singletons, ignoring existing dependencies, force unwraps... and the comment puke adds so much nonsense to what should nowadays be simple code. It's kind of annoying.
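To make the rework concrete, here's a minimal Swift sketch of the patterns that usually get flagged in those reviews: inject a logger instead of sprinkling print statements or adding another singleton, and use optional binding instead of force unwraps. All names here are made up for illustration, and real code would log through os.Logger rather than print.

```swift
import Foundation

// Hypothetical abstraction so callers don't depend on a Shared.instance
// singleton or on print directly.
protocol Logging {
    func log(_ message: String)
}

struct ConsoleLogger: Logging {
    // Stand-in sink; a real app would forward to os.Logger.
    func log(_ message: String) { print(message) }
}

struct UserLoader {
    let logger: Logging  // injected dependency, not a new singleton

    func name(from json: [String: Any]) -> String {
        // Instead of: json["name"] as! String  (force unwrap, crashes on bad data)
        guard let name = json["name"] as? String else {
            logger.log("missing 'name' key")
            return "unknown"
        }
        return name
    }
}
```

Usage: `UserLoader(logger: ConsoleLogger()).name(from: ["name": "Ada"])` returns "Ada", and malformed input logs an error instead of crashing.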
I had high hopes for AI coding after the early GPTs, but it's cooled off a bit. I use it all day on one-shot tasks and it's great, but it really falls apart quickly.
I do, but I'm sure I can improve too. It works great on Sonnet. When I do the same thing on Opus spanning multiple things, something simple like taking the Xcode default template app with Core Data and telling it to update some entities and add some APIs and what each thing should do, it doesn't do great.
It's surprising, but AI doesn't know how good a job you want. One person's proper code is another person's overengineered monster. The only way it knows what you specifically want is through the information you provide (aka context engineering).
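For instance, the explicit instructions being described often live in a project guidelines file the assistant reads on every run (e.g. a CLAUDE.md for Claude Code; the contents below are just an illustration of the complaints from this thread turned into rules):

```
# Project guidelines (hypothetical example)
- Use the existing logger for diagnostics; never print.
- No new singletons; use the existing dependency container.
- No force unwraps; use guard let / if let.
- Keep comments minimal; explain why, not what.
- Stay on the original task; don't abandon it to chase unrelated fixes.
```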
You do realise that AI needs a lot of handholding to make anything meaningful in larger codebases, unless it's something simple. So it's the developer who decides what, how, and why. It doesn't matter how one writes code, with just a keyboard or with AI; what matters is quality and productivity. Or should we go back to handwriting, perhaps?

It's a valid question. If you get ChatGPT or Claude to generate the code, could there be any valid disputes about who really owns the IP?
It should be obvious that a tool is not a creator, but if the code itself hasn't been written by the developer....
... even messier if the code turns out to be suspiciously similar to some non-open-source code that the AI has "magically found"...
... it's this second part I'd worry about: "AI generated code" that turns out to be plagiarized code.
EDIT: I'm not being entirely serious either, but I wouldn't be surprised if a court case or two arise over the next months.