It makes a lot of sense, especially considering the current situation in the Middle East, since Apple's chip design teams are largely located in Israel.

They need to both prepare for the future and, at the same time, deal with the challenges of the present.

(not getting into the politics of the situation, just how it affects Apple)
 
Yep, then they can all equally suck.
You're delusional. AI is doing work in biochemistry, astronomy, and other fields and making incredible advancements. Now it's going to do the same for electronics. Anyone who uses it to aid in their technology will come out ahead and those who don't will fall behind.

Edit: Love all the laughing emojis on my post... they're making me laugh too, because you're probably all pathetic AI-cope luddite boomers, the ones who when they hear about a useful AI feature in software say things like "as long as I can turn it off" or "nobody asked for this".
 
But what about the Apple research paper published on the eve of WWDC that says that generative AI isn't very smart at all?

In all seriousness - sounds great if we can all get more powerful, more capable chips in our hardware, sooner rather than later.
It's perfectly fine at small scale, for a few hundred lines of code. The Apple paper is about how it breaks down completely when it has to think hard about a large problem.

The trick all the generative AI startups are pursuing is getting it to work at larger scale, basically dividing a large project into tons of smaller tasks - and doing that automatically.
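
Roughly, that loop looks like this - a toy sketch only, where call_llm() is a made-up stand-in for whatever model API you'd actually use, and the prompts are invented for illustration:

```python
# Toy "planner/worker" loop: ask the model to split a large task into
# small steps, then run each step as its own prompt.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in; wire this to your model client of choice.
    raise NotImplementedError

def solve_large_task(task: str) -> list[str]:
    # Step 1: planning prompt -- one subtask per line.
    plan = call_llm(
        "Break this project into small, independent subtasks, "
        "one per line:\n" + task
    )
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # Step 2: worker prompts -- each subtask is small enough to stay in
    # the "few hundred lines of code" regime the model handles well.
    results = []
    for sub in subtasks:
        results.append(call_llm(f"Complete this subtask:\n{sub}"))
    return results
```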
 
I used AI to write hundreds of lines of code yesterday for my job. It's fairly annoying. It writes a huge volume of code that looks right - like, I'm shocked at how many things it gets right - but then it face plants on some things where I'd be thinking about putting someone into a Performance Improvement Plan if I saw an employee making similar mistakes.
This sounds familiar. There are tasks Gemini is very good at and some that are just beyond its capabilities. In general I've found it to be excellent at small jobs:
  • Writing commit comments (from the diffs) and PR summaries (from its own comments); a rough sketch of this one is below.
  • Providing SQL suggestions when given the original DDL and ANALYZE results.
  • Writing unit tests (and my least favorite part of programming: generating mock data).
  • Suggesting fixes for failing tests.
It tends to fail spectacularly on tasks that are too broad though. There's absolutely been a learning curve while incorporating LLM use into my day-to-day work; figuring out the right prompts and what size chunk of work it can accomplish has been a (worthwhile) challenge. One imagines the folks using this tech to aid in chip design have both trained the model appropriately and have themselves been trained to use that model.
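
For the commit-message bullet, the whole job is basically one prompt. A rough sketch, assuming the google-generativeai Python client; the model name, API key, and prompt wording are placeholders, not a recommendation:

```python
import subprocess
import google.generativeai as genai  # assumes the google-generativeai package

# Placeholder key and model name; swap in whatever is current for you.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

def suggest_commit_message() -> str:
    # Grab the staged diff -- the same thing you'd paste into a chat window.
    diff = subprocess.run(
        ["git", "diff", "--cached"],
        capture_output=True, text=True, check=True,
    ).stdout
    prompt = (
        "Write a one-line commit subject and a short body summarizing this "
        "diff. Mention only changes that actually appear in it:\n\n" + diff
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    print(suggest_commit_message())
```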

(It's not a leap to surmise that Apple ran into the user training problem as they started testing generalized LLMs meant for consumer use. What they really need is for the system to recognize that the task is inappropriately large and prompt the user to deconstruct the question. This seems ... challenging. But it's also why the 2024 demo doesn't feel staged: anyone giving the demo would have learned how to prompt the system for reasonable answers.)
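
That gate could be as blunt as asking the model to judge scope before answering - purely illustrative, with call_llm() again standing in for a real client:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical model call; illustration only.
    raise NotImplementedError

def answer_or_push_back(request: str) -> str:
    # Ask the model to judge scope before it attempts an answer.
    verdict = call_llm(
        "Answer only YES or NO: can the following request be completed in a "
        "single short response without being broken into parts?\n" + request
    )
    if verdict.strip().upper().startswith("NO"):
        return ("This request covers a lot of ground. Could you split it "
                "into smaller questions and ask them one at a time?")
    return call_llm(request)
```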
 
Literally every company in the entire world has some version of this going on: "We are using AI to try to make product X advance faster." Every single company.
 
But what about the Apple research paper published on the eve of WWDC that says that generative AI isn't very smart at all?

Not particularly applicable. Apple's paper was about whether these tools were actually 'reasoning' (creating novel solutions by inference or deduction, or ... something other than regurgitating what they have already seen).

This isn't likely what an AI assistant for circuit design would be. First of all, it doesn't need to be a general human-language chatbot. The AI tool could be fed tested design specs for the general logic of a circuit, and its job would just be to help lay that logic out using the design tools so that it is optimized for density, power, or some other criteria. It would not be for some joe-random in accounting to type in "give me a new M6 chip" and have a magical answer pop out. It is much more a very narrow expert task that the tool has been explicitly trained on with hundreds of examples.
(Synopsys, Cadence, and the other chip design tool vendors will be the ones training these.)

This has very little to do with random chit-chat chatbot work. Piling in all the fictional literature on the planet, every Reddit/MacRumors/internet forum thread, and random romance novels doesn't really contribute a whole lot to better circuit design. It is a 'language' problem only in the narrow sense that expert specifications go in and even more expert specifications come out. There is no 'chit chat' necessary there at all.
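
To make "optimize for density, power, or some other criteria" a bit more concrete, here is a toy version of the underlying problem: place a few blocks on a grid and score the placement on bounding-box area plus total wire length (a crude stand-in for power). Real flows from Synopsys or Cadence are vastly more sophisticated; the block names, netlist, and weights below are invented for illustration:

```python
import itertools
import random

# Toy floorplanning setup: blocks to place on a 4x4 grid, plus a netlist
# of (block, block) connections. Wire length stands in for power.
BLOCKS = ["alu", "cache", "decoder", "regfile"]
NETS = [("alu", "regfile"), ("alu", "cache"), ("decoder", "regfile")]

def cost(placement: dict[str, tuple[int, int]]) -> float:
    # Density term: area of the bounding box around all placed blocks.
    xs = [p[0] for p in placement.values()]
    ys = [p[1] for p in placement.values()]
    area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    # "Power" term: total Manhattan wire length over the netlist.
    wire = sum(
        abs(placement[a][0] - placement[b][0]) + abs(placement[a][1] - placement[b][1])
        for a, b in NETS
    )
    return area + 2 * wire  # weights are arbitrary for the illustration

def random_search(trials: int = 5000) -> dict[str, tuple[int, int]]:
    cells = list(itertools.product(range(4), range(4)))
    best, best_cost = None, float("inf")
    for _ in range(trials):
        placement = dict(zip(BLOCKS, random.sample(cells, len(BLOCKS))))
        c = cost(placement)
        if c < best_cost:
            best, best_cost = placement, c
    return best

print(random_search())
```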

If a statistical "reasoning" system had been trained on hundreds of Towers of Hanoi examples, it would have spit out the right answer. But it would have done so in far more of a "monkey see, monkey do" way - more regurgitation (or at best incremental adaptation of an incrementally different solution) than deduction of something 'novel'. If the systems had been trained on Towers of Hanoi, then Apple wouldn't have used it for their paper.
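
For reference, the recursive solution those puzzles are built on fits in a few lines; the question the paper is poking at is whether a model derives this recursion or just replays move sequences it has memorized:

```python
def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the sequence of (from, to) moves for n disks."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (
        hanoi(n - 1, src, dst, aux)
        + [(src, dst)]
        + hanoi(n - 1, aux, src, dst)
    )

print(hanoi(3))  # 2**3 - 1 = 7 moves
```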



In all seriousness - sounds great if we can all get more powerful, more capable chips in our hardware, sooner rather than later.

This is more about productivity on a given design that is still mainly driven by human experts. For example, maybe a team of 50 does what a team of 100 did in the same amount of time. Right now Apple tends to go idle in some areas: yearly iterations of the A-series mean you don't get yearly iterations of the Watch SoC, and they're unlikely to go from 3 M-series die variations to 6 or 7.

The complexity of the dies being built is also going up. At one point 2B transistors was a big budget. Then 8B, then 20B, 40B, 80B, etc. 120B really shouldn't require a bigger team. The pace and breadth of dies coming out of Apple wouldn't change; they would just contain more complexity with less of an increase in development cost.
 
Google and others have reportedly made significant improvements in chip design and software development using AI. It's essential to keep up with the times or risk being left behind.
 
Interesting. No doubt future chips will be powerful and AI will have a role in producing them. Meanwhile, I'm waiting to see Apple implement more AI features in its software.
 
"Moving the Mac to Apple Silicon was a huge bet for us," he explained. "There was no backup plan, no split-the lineup plan, so we went all in, including a monumental software effort."
And it was also the end of the Mac for me, because ever since the M1, Mac displays have been unusable for me. I'm 100% certain it could be fixed with a software knob which, of course, Apple refuses to provide.
 
This guy is probably one of the smartest people at Apple now, and he's responsible for the most exciting developments in chips - and, imo, at Apple overall - in over a decade. To me it sounds as if they will use AI not just to make the next chips, but to make sure they continue to dominate the SoC space, which I ultimately think will make up the majority of consumer PCs within the next 5 years anyway.
 
I know people like to complain about AI (I have too), but I have to admit it feels pretty amazing to be on the cusp of a major revolution.
 