Dear Apple,

Nice to know you have AI. Why is it still the case that the macOS spellchecker does not correct the highly improbable 'tot he' to the astronomically more likely 'to the'?

Best wishes, - a disgruntled customer
> Nice to know you have AI. Why is it still the case that the macOS spellchecker does not correct the highly improbable 'tot he' to the astronomically more likely 'to the'?

Exactly this ^

So much AI hype talk nonsense, while absolute table-stakes stuff (like correct word suggestion) is as broken as it gets.
 
I’m curious what Wall Street is thinking here. What exactly are they expecting Apple to monetize in this space? If Apple announces some next-gen Siri, would they charge for it? How many people would pay for it?
 
It's funny that, at most, people just want Siri to be a bit better at performing ordinary mundane tasks, something that probably doesn't require buying out tons of "AI" companies to do.

If Apple doesn't actually care about AI, I don't blame them; it's likely just shareholder pressure.
 
Coming from Apple, this is hilarious!
What’s funny about the research? Do you doubt the claims? If so, what in the research do you think doesn’t support the claims? You’re welcome to serve as a peer reviewer of the research if you have expertise in the area of LLMs. Have you done your own trials comparing this method and OpenAI’s GPT-4?
 


Apple researchers have developed an artificial intelligence system named ReALM (Reference Resolution as Language Modeling) that aims to radically enhance how voice assistants understand and respond to commands.


In a research paper (via VentureBeat), Apple outlines a new system for how large language models tackle reference resolution, which involves deciphering ambiguous references to on-screen entities, as well as understanding conversational and background context. As a result, ReALM could lead to more intuitive and natural interactions with devices.

Reference resolution is an important part of natural language understanding, enabling users to use pronouns and other indirect references in conversation without confusion. For digital assistants, this capability has historically been a significant challenge, limited by the need to interpret a wide range of verbal cues and visual information. Apple's ReALM system seeks to address this by converting the complex process of reference resolution into a pure language modeling problem. In doing so, it can comprehend references to visual elements displayed on a screen and integrate this understanding into the conversational flow.
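As a rough illustration of what "reference resolution as a language modeling problem" could mean, the sketch below serializes a user request and candidate entities into a single text prompt; the prompt format and the `build_prompt` helper are hypothetical, not the encoding the paper actually uses.

```python
# Hypothetical sketch: framing reference resolution as plain text
# generation. The prompt format is illustrative only, not taken
# from Apple's paper.

def build_prompt(utterance: str, candidates: list[str]) -> str:
    """Serialize the user request and candidate on-screen entities
    into one text prompt, so a language model can answer with the
    index of the entity being referred to."""
    lines = [f"[{i}] {entity}" for i, entity in enumerate(candidates)]
    return (
        "Entities on screen:\n"
        + "\n".join(lines)
        + f"\nUser request: {utterance}\n"
        + "Which entity does the request refer to?"
    )

prompt = build_prompt(
    "call the second one",
    ["555-0134 (Home)", "555-0188 (Mobile)", "555-0102 (Work)"],
)
print(prompt)  # a model fine-tuned for this task would output "[1]"
```

Framed this way, the task needs no special-purpose pipeline: the same next-token machinery that powers chat can pick out "the second one."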

ReALM reconstructs the visual layout of a screen using textual representations. This involves parsing on-screen entities and their locations to generate a textual format that captures the screen's content and structure. Apple researchers found that this strategy, combined with specific fine-tuning of language models for reference resolution tasks, significantly outperforms traditional methods, including the capabilities of OpenAI's GPT-4.
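For a sense of what such a textual reconstruction might look like, here is a minimal sketch that groups hypothetical on-screen entities into rows by position and renders them as plain text; the entity fields, row-grouping tolerance, and ordering rules are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of reconstructing a screen's layout as text.
# The entity fields, row grouping, and ordering are assumptions
# made for illustration, not details taken from the paper.

from dataclasses import dataclass

@dataclass
class ScreenEntity:
    text: str   # visible label, e.g. a phone number or button title
    x: float    # horizontal position, normalized to 0..1
    y: float    # vertical position, normalized to 0..1

def screen_to_text(entities: list[ScreenEntity], row_tol: float = 0.05) -> str:
    """Group entities into rows by vertical position, then emit one
    line per row, left to right, approximating the visual layout."""
    rows: dict[int, list[ScreenEntity]] = {}
    for e in entities:
        rows.setdefault(int(e.y / row_tol), []).append(e)
    return "\n".join(
        "  ".join(e.text for e in sorted(row, key=lambda e: e.x))
        for _, row in sorted(rows.items())
    )

screen = [
    ScreenEntity("Pharmacy: 555-0134", x=0.1, y=0.30),
    ScreenEntity("Call", x=0.8, y=0.46),
    ScreenEntity("Dentist: 555-0188", x=0.1, y=0.45),
]
print(screen_to_text(screen))
# Pharmacy: 555-0134
# Dentist: 555-0188  Call
```

The point of an encoding like this is that a text-only model can then reason about spatial relationships ("the number next to the Call button") without ever seeing pixels.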

ReALM could enable users to interact with digital assistants far more efficiently by referring to what is currently displayed on their screen, without needing precise, detailed instructions. This has the potential to make voice assistants much more useful in a variety of settings, such as helping drivers navigate infotainment systems while driving or assisting users with disabilities by providing an easier and more accurate means of indirect interaction.

Apple has now published several AI research papers. Last month, the company revealed a new method for training large language models that seamlessly integrates both text and visual information. Apple is widely expected to unveil an array of AI features at WWDC in June.

Article Link: Apple Researchers Reveal New AI System That Can Beat GPT-4
So does this mean Siri will stop asking me who I am three times per week??? :rolleyes:
 
Just my two cents - I applaud what Apple is trying to do, and I still trust them when they claim a privacy-focused approach. But please, Apple, no matter how technically clever this new ReALM technology is, for goodness sake make it actually work. Make the implementation (within Siri or some new assistant) actually work properly. No more requiring people to look up answers on a phone. No more confusion about which room a specific set of lights belongs to when I'm giving HomeKit commands. Actually make it work properly, as well as cleverly. Smarts about conversation context are all well and good, but when I say "living room" I actually do mean it, and not the "garden".
 

> New AI System That Can Beat GPT-4

At this point, one can define AI metrics and train models against each other, just like chess engines, to gain a competitive advantage. It needs more and more input, feeding us content and farming us for data. Where is my red pill now?
 
When you consider the massive lead that Apple already had with Siri, in responsiveness and usefulness, is this a surprise to anyone?
 
To me, this sounds like what the Rabbit R1 is supposed to be able to do, which boils down to the LLM figuring out how to use an app's UI.

I could see this being Tim's AI play. Chatbots are great and all, but GPT and Gemini can't actually do anything for the user; they just talk to them.
 
More like a big claim from the same company that's in talks with Alphabet, OpenAI, and Baidu to use one of their AI models.

> If ReALM is better than GPT-4, then why is Apple in discussions with OpenAI to use GPT-4?

Because they will be used for different purposes?
 