
MacRumors

macrumors bot
Original poster
After poaching one of Apple's top artificial intelligence executives with a $200 million pay package to lure him away from the company, Meta has now hired two of his subordinates, Bloomberg reports.

Apple's Mark Lee and Tom Gunter are set to join Meta's Superintelligence Labs team, a newly established division tasked with building advanced AI systems capable of performing at or beyond human-level intelligence.

Earlier this month, Ruoming Pang joined Meta. Until recently, he led Apple's foundation models team. Models developed by Pang's team are used for Apple Intelligence features like email summaries, Priority Notifications, and Genmoji. Lee was Pang's first hire at Apple, while Gunter was apparently known as one of the team's most senior members.

Meta has been spending heavily on new staff and engineers to keep up with advancements from OpenAI and Google. Apple is reportedly now offering some engineers raises in an effort to retain them, but the raises are still substantially smaller than Meta's offers.

Bloomberg notes that the three departures "reflect the continuing turmoil at the Apple Foundation Models team." Apple is now believed to be considering a major change of strategy by using external models from the likes of OpenAI or Anthropic to power Siri and other Apple Intelligence features due to the shortcomings of its own models.

Apple is simultaneously developing versions with both its own models and third-party technology, and has not yet decided which to use as the foundation for Apple Intelligence beginning next year.

Article Link: Meta Poaches Two More Apple AI Executives
 
Not sure what they are seeing in Apple's AI record... Shouldn't they go after employees from companies that have had successful deployments?
Seems like Apple's privacy approach really gums up their AI efforts rather than the people they hire. The employees are probably able to work much better at Meta.

More "this company won't let me work" than "I'm a bad worker pushing a bad product on this company."
 
Meta’s AI models are legitimately awful. Apple’s low-morale AI operation is probably the best they could manage to poach from.
 
Seems like Apple's privacy approach really gums up their AI efforts rather than the people they hire. The employees are probably able to work much better at Meta.

More "this company won't let me work" than "I'm a bad worker pushing a bad product on this company."
That's how they market themselves, but they had been sending Siri recordings to third party contractors to improve the service (which didn't seem to have worked out too well).
 
During the dot-com bubble of the late 1990s and early 2000s, investors were throwing money at anyone who could code, even if they didn't have a solid idea that actually solved a real problem or created value. Money was being thrown everywhere in the hope that something would stick, and to make sure no opportunity was missed at any cost. The reality, however, was that new technology by itself doesn't create value. Innovations and solutions do. Technology merely enables the development of new innovations and solutions. That's why I personally feel this AI boom is similar to the dot-com bubble. It's an exciting new technology everyone wants to jump into, but very few truly know what's worth doing with it. In that sense, Apple might actually be smart for not panicking about it.
 
That's how they market themselves, but they had been sending Siri recordings to third party contractors to improve the service (which didn't seem to have worked out too well).
Things have happened, yes, but their revenue speaks for itself…

Apple is more private than most, but that doesn't mean they're perfect. The Siri "scandal" was just an analytics data collection system that was poorly designed and, worse, poorly communicated to the end user.

And I think it's important to note that Apple immediately made the appropriate changes in an update to iOS 13 (13.2, I think) that let you delete your data from their servers and added an obvious prompt to enable or disable collection during device setup.

Do I agree that this should never have been an issue? Of course. Do I think Apple should be able to market privacy? Yes. Do I think their claims are true? For the most part, based on the available data (i.e., revenue, terms, etc.), yes. Do I think it's 100% private? No.
 
They are poaching Apple AI workers… why not poach better talent?

Siri seriously sucks and what have they been working on? Oh, it’s just not ready or good enough for our high horse standards. What a load of horse manure.
 
They are poaching Apple AI workers… why not poach better talent?

Siri seriously sucks and what have they been working on? Oh, it’s just not ready or good enough for our high horse standards. What a load of horse manure.
The AI launch was likely 80% ready by the start of the year. They would have benefitted from launching it and collecting users' data for learning, similar to how ChatGPT launches its models and tweaks them every couple of weeks. But Tim Cook is too scared of the stock market reacting negatively to an imperfect launch, even though shipping early and iterating is atypical of Apple anyway.
 
Good for Facebook i guess?

These people can't even put together a half-decent, functional assistant that is any more advanced than what we had a decade ago.

I suppose Facebook can't afford the big dogs from the likes of Nvidia, OpenAI, Google, etc. That's where the real talent is.

Also, this just shows that Apple still isn't taking AI seriously... but then again, these people obviously aren't up to the job of putting out a useful AI product.

It's actually embarrassing how far behind Apple is when it comes to AI. It might actually be Tim's biggest blunder in his time at Apple (even more so than his ski goggles). We're already seeing Apple slip and start to go downhill. There is only so much the iPhone 17 can do for them.
 
The AI launch was likely 80% ready by the start of the year. They would have benefitted from launching it and collecting users' data for learning, similar to how ChatGPT launches its models and tweaks them every couple of weeks. But Tim Cook is too scared of the stock market reacting negatively to an imperfect launch, even though shipping early and iterating is atypical of Apple anyway.
Most likely. And now they suffer for letting their core grow soft.
 