"access full Apple developer documentation that has been designed for AI agents."

Still waiting for Apple to produce developer documentation that has been designed for humans.
You’re only coding in AI now because AI isn’t refined enough to replace you yet.

Embrace this, it's the future. I have coded with AI and it's undeniable. If you're not leveraging AI then you're going to be left behind.
I'm just pissy because for all of the good that Apple Intelligence has done for me, I could have bought an m4 Air with half of the specs and for 1/2 the price.

Yup. It just does it quicker. If you bought the M4 Pro (I have one as well) to run their AI stuff then you are a fool. I bought mine because the single core integer speed was the fastest thing on the damn planet in laptop form that didn't try and burn your dick off.
The point is the neural engine wasn't originally designed for running LLMs and has perfectly good other uses. Even if they are a bit wonky sometimes.
I'm just pissy because for all of the good that Apple Intelligence has done for me, I could have bought an m4 Air with half of the specs and for 1/2 the price.
That's not been the case for me, which I explained in my comment. Before, it could take me hours to months to generate the code I need, including verification time. Now it takes minutes to hours to generate the code, including verification. Again, huge time-savings for me. Others in my field say the same thing, at least those who use LLMs do.

The time you gain is the time you lose doing multiple verifications of the code you get from the ai … 🙂
Dude...I can say the same thing about a junior coder who writes something that might "work" but still needs to be redone and reviewed extensively before I can accept it. You know how you get AI to keep security in mind when coding? 1) Know what that actually means yourself first, then 2) Tell it to.

I think the problem isn't necessarily with the vibe coding itself; it can produce usable custom applications, but can you produce maintainable code? Is it secure?
GitHub is having problems with AI slop being submitted as PRs, with maintainers being overwhelmed by code that isn't usable, doesn't conform to the project's coding standards, and comes from submitters who often have no idea what the code does, so they can't answer questions when asked.
I think one maintainer was claiming that around 10% of the submitted AI code was of a quality high enough to even consider accepting into the project.
Hopefully the quality will improve over time, to a point where AI code submissions are of a high enough quality to be usable and are maintainable.
But vibe coding is more about making a one-shot solution for a small custom application, and if it doesn't work, you get the AI to iterate on it. That is fine if it is just for one person or a small group to use, where security holes are less of a problem, but producing a codebase that hasn't been tested and using it as the basis of a major project isn't going to work out well at the current time.
Also, a lot of people who can't code are producing apps. Again, producing something quick and dirty for themselves is fine; it is really freeing. They can get a customised app that does exactly what they want for a fraction of the price of getting it written professionally... But if it goes wrong, they have to fall back on the AI to try and correct it, because they don't understand the code.
If those people are asking AI to write new modules for an existing project and then submitting it, that is not a good situation to be in, especially as the maintainer of the project. Vibe coding has its place and it is a really great thing for really personal software, but it isn't so good for collaboration projects, at least not in its current form.
If the vibe coder can't validate the code produced, they shouldn't be adding it to projects or distributing it as a finished product. What happens when the first user reports a buffer overflow or an input sanitization problem? You, as a programmer with 20 years of experience, might be able to wade through the code to find the problem, but most people wouldn't have a clue.
I've done some simple vibe coding to get some simple scripts and apps done that help me in my day-to-day job, and I can look at the code and see whether there are any big errors in it. It has saved me days of extra work, allowing me to concentrate on the tasks at hand. For example, we are doing an Exchange migration, and getting the AI to spit out complex PowerShell scripts to list aliases or add new aliases to the users in the tenant, covering the different domains we use, is very useful and time-saving. But I wouldn't use it to create a multi-user system, because that is too complex and there are too many potential security problems it could build into the code.
I think it comes down to the right tool for the job, and vibe coding is still taking baby steps at the moment. It is fascinating and fun, but you need to know its (current) limits and when to use it, and I think that is something few of its proponents are talking about when trying to get people to use it.
With Xcode 26.3, Apple is adding support for agentic coding, allowing developers to use tools like Anthropic's Claude Agent and OpenAI's Codex right in Xcode for app creation.
Agentic coding will allow Xcode to complete more complex app development tasks autonomously. Claude, ChatGPT, and other AI models have been available for use in Xcode since Apple added intelligence features in Xcode 26, but until now, AI was limited and was not able to take action on its own. That will change with the option to use an AI coding assistant.
AI models can access more of Xcode's features to work toward a project goal, and Apple worked directly with Anthropic and OpenAI to configure their agents for use in Xcode. Agents can create new files, examine the structure of a project in Xcode, build a project directly and run tests, take image snapshots to double-check work, and access full Apple developer documentation that has been designed for AI agents.
Adding an agent to Xcode can be done with a single click in the Xcode settings, with agents able to be updated automatically as AI companies release updates. Developers will need to set up an Anthropic or OpenAI account to use those coding tools in Xcode, paying fees based on API usage.
Apple says that it aimed to ensure that Claude Agent and Codex run efficiently, with reduced token usage. It is simple to swap between agents in the same project, giving developers the flexibility to choose the agent best suited for a particular task.
While Apple worked with OpenAI and Anthropic for Xcode integration, the Xcode 26.3 features can be used with any agent or tool that uses the open standard Model Context Protocol. Apple is releasing documentation so that developers can configure and connect MCP agents to Xcode.
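Because the integration rides on an open protocol rather than a private hook, any tool that can speak MCP can, in principle, expose capabilities to an agent working inside Xcode. As a rough, hedged illustration of what sits on the other end of that protocol, here is a minimal MCP server sketch using the official TypeScript SDK; the server name, the `search_notes` tool, and its trivial behavior are invented for this example, the SDK surface shown is the 1.x-era API and may differ in newer releases, and nothing here depicts how Xcode 26.3 itself registers such a server.

```typescript
// Minimal MCP server sketch (assumes the @modelcontextprotocol/sdk and zod
// packages are installed and the project is built as an ES module).
// The "search_notes" tool is hypothetical: it just echoes the query back.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "notes-helper", version: "0.1.0" });

// Register one tool that a connected agent could discover and call.
server.tool(
  "search_notes",
  { query: z.string() },
  async ({ query }) => ({
    content: [{ type: "text", text: `Stub result for query: ${query}` }],
  })
);

// Local servers are typically wired up over stdio.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The point is only that an MCP server is an ordinary process speaking a standard transport, which is what lets Xcode stay agnostic about whose agent or tooling is on the other side.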
Using natural language commands, developers are able to instruct AI agents to complete a project, such as adding a new feature to an app. Xcode then works with the agent to break down the instructions into small tasks, and the agent is able to work on its own from there. Here's how the process works:
- A developer asks an integrated agent to add a new feature to an app.
- The agent looks at the current project to see how it's organized.
- The agent checks all relevant documentation, looking at code snippets, code samples, and the latest APIs.
- The agent begins working on the project, adding code as it goes.
- The agent builds the project, then uses Xcode to verify its work.
- If there are errors or warnings, the agent continues to work until all issues are addressed. It is able to access build logs and revise until a project is perfect (a loop sketched below).
- The agent wraps up by providing a summary of everything that happened so developers have a clear view of the implementation.

In the sidebar of a project, developers can follow along with what the agent is doing using the transcript, and can click to see where code is added to keep track of the changes. At any point, developers can go back to before an agent or model made a modification, so there are options to undo unwanted results or try out multiple options for introducing a new feature.
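To make the build-and-verify loop in the steps above concrete, here is a purely hypothetical sketch, in TypeScript for consistency with the earlier example, of the control flow those steps imply; the `AgentHost` interface and its method names are invented for illustration and are not Xcode's or any vendor's actual API.

```typescript
// Hypothetical sketch of the iterate-until-clean loop described in the steps
// above. None of these types exist in Xcode or in any agent SDK; they only
// name the roles the article describes.
type BuildResult = { succeeded: boolean; issues: string[] };

interface AgentHost {
  applyEdits(task: string): Promise<void>;         // agent writes code for the task
  buildAndTest(): Promise<BuildResult>;             // host builds the project and runs tests
  reviseFromLogs(issues: string[]): Promise<void>;  // agent reads build logs and fixes issues
}

async function runTask(host: AgentHost, task: string, maxRounds = 5): Promise<boolean> {
  await host.applyEdits(task);
  for (let round = 0; round < maxRounds; round++) {
    const result = await host.buildAndTest();
    if (result.succeeded && result.issues.length === 0) {
      return true; // clean build with no remaining issues: the task is done
    }
    await host.reviseFromLogs(result.issues); // keep revising until issues are addressed
  }
  return false; // stop after a bounded number of rounds rather than loop forever
}
```

The interesting design question such a loop raises is the bound: at some point the host has to stop paying for further revisions and hand control back to the developer with whatever summary the agent can produce.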
Apple says that agentic coding will allow developers to simplify workflows, make changes quicker, and bring new ideas to life. Apple also sees it as a learning tool that provides developers with the opportunity to learn new ways to build something or to implement an API in an app.
The release candidate of Xcode 26.3 is available for developers as of today, and a launch will likely follow in the next week or so.
Article Link: Xcode 26.3 Lets AI Agents From Anthropic and OpenAI Build Apps Autonomously
Here's the thing to all the naysayers in the comments regarding the use of agentic AI for coding. If your company is not using agentic AI, your competition is, and they're going to smoke you in terms of progress because you cannot be as productive. As a seasoned cloud dev, the amount of work my company expects can't be done without hiring a lot more people, and since they opened up the gates on usage, they expect us to use it.
It's not like we don't know how to use core tools without it, but we have to use them to stay ahead of deadlines and in terms of code quality, being able to tell it to work against a certain set of standards in Claude is scary good. Just being able to have it perform scaffolding for an environment layout is a huge time saver. Think of it as a programming language that understands English, in which you have to be literal in what you want.
They definitely are. I work for a very large global company with lots of capital with a portfolio that has cemented them pure gains for the foreseeable future. Something catastrophic would have to happen to them, and they are using the hell out of this tech hand over fist. If your company isn't using it, it's going to get crushed by the competition

Nah they aren't. Our immediate competitors are busy ****ing themselves into the ground with it and burning R&D capital.
Dude...I can say the same thing about a junior coder who writes something that might "work" but still needs to be redone and reviewed extensively before I can accept it. You know how you get AI to keep security in mind when coding? 1) Know what that actually means yourself first, then 2) Tell it to.
I am having trouble getting my 85 year old head around vibe coding. So folks, will I, in the "near" future and as the Wikipedia entry says:

Vibe coding is begging to be hacked by script kiddies. It’s not there yet, and those flaunting vibe coding on the internet have had their database keys and secrets jacked by some random script kiddies. Wait till it destroys your code base and lies about it. I use it for some basic scripting or simple test beds but would never give access to my entire project or code base to agents. At least not yet.
Depends if you have a 20 page prompt in the backend. The Karpathy quote was more forward looking, and I can see getting there one day, but I doubt it will be through an LLM.

I am having trouble getting my 85 year old head around vibe coding. So folks, will I, in the "near" future and as the Wikipedia entry says:
"The concept of vibe coding elaborates on Karpathy's claim from 2023 that "the hottest new programming language is English", meaning that the capabilities of LLMs were such that humans would no longer need to learn specific programming languages to command computers."
be able to say "Hey Siri write me an operating system for my M1 Mac studio that is better than the current one and install it on my computer"?
They definitely are. I work for a very large global company with lots of capital with a portfolio that has cemented them pure gains for the foreseeable future. Something catastrophic would have to happen to them, and they are using the hell out of this tech hand over fist. If your company isn't using it, it's going to get crushed by the competition
Yeah I read that and my first thought was, this guy isn't a software dev by profession.

As someone who has been coding for 20 years and now does a lot of vibe coding, you couldn't possibly be more misinformed.
Edit: LOVE the downvotes by the people being left behind in the dust.
I don't even know what you're trying to say to be honest. You're just all over the place with your complaints and accusations.

Comparing a language model (AI!!!) against a junior coder is really a deceptive argument and a very poor excuse.
They pitch these AI models as having superhuman capabilities that will take everyone’s jobs. The fricken CEOs of Anthropic and Meta have made those claims for two years now. They said by the middle of 2025 it would be game over for software engineers.
They did not claim that it would be a barely junior-level coder. And no junior-level coder has claimed to be a superhuman-level computer.
This thread’s title even makes the absurd claim that it can build apps autonomously. These models can barely produce a nice looking calculator without several rounds of feedback.
So don’t make poor excuses and shift goal posts. Just admit these models are flawed and buggy assistants that can help produce code fast, but not high-quality, efficient code.
Because that’s what AI models do. They are optimized to produce fast code, collect a bit of money each time, and keep the user addicted like a slot machine. The incentive is profit for the top shareholders, funds and VCs who invest in this stuff. They don’t care if it results in lots of slop. They already **** all over the web with social media and crypto and profited from slop before.
I don't even know what you're trying to say to be honest. You're just all over the place with your complaints and accusations.
Here is the reality from an actual professional: Conversation coding, with Antigravity and either Gemini or Claude, is astounding.
You reveal the fact that your entire perspective is internet-driven nonsense. None of it is based in actual experience. Good day.

That’s on you.
Yet again I have to explain that after 30 years of software development, and as a user of these models, I do not need to be lectured. People like me tell you the upsides, downsides, shortcomings, and the truth about the massive ocean of AI slop and garbage low-quality vibe-coded apps that are now filling forums and app stores. You can cover your eyes and ears and pretend it doesn’t exist, but I posted the links. You post nothing.
You reveal the fact that your entire perspective is internet-driven nonsense. None of it is based in actual experience. Good day.