CmdrLaForge
macrumors 601
AI tools have their limits, and that gets clearer the more you use them. They're still useful, but you need to understand what you are doing.
> It’s pointless and so is the sleep score BS. Wildly inaccurate and a complete waste of time.

Apple's sleep score is far from perfect, but it's not bad. It's also quite helpful for within-person tracking of your sleep over time.
ChatGPT 5.2 says: "There are 3 letters “r” in strawberry."
> ChatGPT also kept forgetting basic information about him, including his gender and age, despite it having full access to his records.

That's because it didn't have his TikTok information 😛
> Both companies say their health tools aren't meant to replace doctors or provide diagnoses.

Then what's the point of the service if it cannot accurately analyze the data? What use does it serve if it gets things wrong?
> OpenAI announced the launch of ChatGPT Health, a dedicated section of ChatGPT where users can ask health-related questions [...]. Both companies say their health tools aren't meant to replace doctors or provide diagnoses.
> How many billions of dollars did it take for them to get ChatGPT to learn to count the number of letters in a word? 😆

A lot, although that’s not the only thing the tool can do.
Some people buy a $3000 MacBook Pro and then mostly just use it for media consumption and light browsing or other tasks. Other people use it to get a lot of computationally intensive work done. The same thing goes for LLMs. Some of us use them to get our work done much more efficiently.
> Sleep tracking causes more anxiety than good. I stopped using it when I had a good night's sleep and it gave me a Regular score just because I went to bed an hour later.

Same here. I noticed that using sleep tracking, especially after they tweaked it for 'greater accuracy', was making me more anxious. So I turned it off.
> How many billions of dollars did it take for them to get ChatGPT to learn to count the number of letters in a word? 😆

It can't. They just trained it on the strawberry question. I just asked ChatGPT the following:

how many of the letter 'r' are in 'cranberry'?

There are 2 letter **“r”**s in “cranberry.” 🍒

> It can't. They just trained it on the strawberry question. I just asked ChatGPT the following:

This is the problem I see with AI today. If your cranberry question becomes as widely spread as the strawberry question was two months ago, models will be trained to answer it correctly. They don't seem to be able to answer correctly without human intervention, though.
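For what it's worth, the question the chatbot fumbles is trivial for deterministic code. A minimal Python check (nothing beyond the standard library) confirms that "cranberry" has three 'r's, not the two ChatGPT reported:

```python
# Letter counting is exact for plain code, even though token-based
# LLMs often get it wrong: they see chunks of words, not characters.
for word in ("strawberry", "cranberry"):
    print(f"{word!r} contains {word.count('r')} letter 'r's")
# 'strawberry' contains 3 letter 'r's
# 'cranberry' contains 3 letter 'r's
```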
> Errr.... NO! Your job, as a regulatory body, is to REGULATE.

This is what happens when you put reality TV personalities and Faux News anchors in charge of government services.
They may as well just tell us all how much they've been paid to let this nonsense slide at this point.
> ChatGPT 5.2 says: "There are 3 letters “r” in strawberry."

Just like how O'Brien got Winston to see five fingers instead of four, they can manipulate AI to include as many "r"s in strawberry as they want. You won't get the truth using AI; you'll get their truth.
I've used Grok several times to review test results and treatment recommendations from my oncologist and others. In virtually every case, Grok delivered a detailed, fully customized summary of my reports in plain English. I wouldn't make treatment decisions based on AI just yet, but Grok has been a great tool to help me understand the medical jargon I've had to review. Like any AI, though, it's "buyer beware" regarding its feedback.
A reporter for The Washington Post has put ChatGPT's new optional Apple Health integration feature to the test by feeding it ten years of their Apple Watch data. The results were not encouraging, to say the least.
Earlier this month, OpenAI announced the launch of ChatGPT Health, a dedicated section of ChatGPT where users can ask health-related questions completely separated from their main ChatGPT experience. For more personalized responses, users can connect various health data services such as Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton.
ChatGPT Health can also integrate with your medical records, allowing it to analyze your lab results and other aspects of your medical history to inform its answers to your health-related questions.
With this in mind, reporter Geoffrey Fowler gave ChatGPT Health access to 29 million steps and 6 million heartbeat measurements from his Apple Health app, and asked the bot to grade his cardiac health. It gave him an F.
Feeling understandably alarmed, Fowler asked his actual doctor, who in no uncertain terms dismissed the AI's assessment entirely. His physician said Fowler was at such low risk for heart problems that his insurance likely wouldn't even cover additional testing to disprove the chatbot's findings.
Cardiologist Eric Topol of the Scripps Research Institute was likewise unimpressed with the large language model's assessment. He called ChatGPT's analysis "baseless" and said people should ignore its medical advice, as it's clearly not ready for prime time.
Perhaps the most troubling finding, though, was ChatGPT's inconsistency. When Fowler asked the same question several times, his score swung wildly between an F and a B. ChatGPT also kept forgetting basic information about him, including his gender and age, despite it having full access to his records.
Anthropic's Claude chatbot fared slightly better – though not by much. The LLM graded Fowler's cardiac health a C, but it also failed to properly account for limitations in the Apple Watch data.
Both companies say their health tools aren't meant to replace doctors or provide diagnoses. Topol rightly argued that if these bots can't accurately assess health data, then they shouldn't be offering grades at all.
Yet nothing appears to be stopping them. The U.S. Food and Drug Administration earlier this month said the agency's job is to "get out of the way as a regulator" to promote innovation. An agency commissioner drew a red line at AI making "medical or clinical claims" without FDA review, but ChatGPT and Claude argue they are just providing information.
"People that do this are going to get really spooked about their health," Topol said. "It could also go the other way and give people who are unhealthy a false sense that everything they're doing is great."
ChatGPT's Apple Health integration is currently limited to a group of beta users. Responding to the report, OpenAI said it was working to improve the consistency of the chatbot's responses. “Launching ChatGPT Health with waitlisted access allows us to learn and improve the experience before making it widely available,” OpenAI VP Ashley Alexander told the publication in a statement.
Article Link: ChatGPT's Apple Health Integration Flaws Exposed in New Report