Google, within Fitbit's app, also has Labs: opt-in programs users can participate in, and one of them is medical-records related. I said hell no to that, and that isn't even being processed through AI. But ChatGPT? Now that's a fat hell no from me, and it should be for anyone.

Also, doesn't uploading medical records run afoul of HIPAA here in the US?

Yes, but it doesn't apply here. Companies, providers, and the like can't do it, and you can't share someone else's health data that you have access to, but you're allowed to share any and all of your own health data with anyone.

It does control what ChatGPT does with it.
 
So, in the US, OpenAI has to obey HIPAA if they want to use or share my data. So, no more risk than sharing my health data with Epic or some other EHR system, right?

Of course, that only covers willful sharing, not unauthorized access to the data. But EHRs have the same potential vulnerabilities.
 
With how dumb AI is, there's no way I would want it to have access to my health records, and even more "no way" would I trust what it said.
 
Tempting, but hell no!!!! No way am I giving my health records to this unknown service....hahaha! Nice try!!!
 
Yeah. The only button I would push for any AI is Reject.

Apple's "messed up AI integration" is probably the best thing ever.
While I lost a chunk of faith in the company, I do pray they will figure out The Proper Way to integrate AI and, once more, set the standard for the industry on how this has to be done.

If anyone has a chance to do this, in this madness of AI, it's Apple.
I hope they come out strong and slap everyone with a lesson on How To. Like they did with the Mac, iPod, iPhone, and many others.


It's your call, but I highly recommend holding back anything you'd share with OpenAI/ChatGPT/Gemini/etc...
 


No no absolutely not.
 
I wonder if this would be a useful way to make sense of all the data that gets collected in Health. I'm sure it's all useful (if it wasn't, we wouldn't collect and track it) but it's also largely incomprehensible to a layperson.

Of course, said layperson would have to trust Sam Altman & Co.
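For what it's worth, you don't necessarily need any cloud AI to get a first pass at that data. This is a minimal local sketch, assuming you've used the Health app's "Export All Health Data" option, which produces an `export.xml` file of `<Record>` entries; here a tiny synthetic snippet in that format is parsed instead of a real export, and the record-type names are illustrative of Apple's `HK...Identifier` naming.

```python
# Summarize Apple Health export data locally, without uploading it anywhere.
# A real export.xml can be hundreds of MB; this uses a tiny synthetic sample
# in the same shape for illustration.
import xml.etree.ElementTree as ET
from collections import Counter

sample = """<HealthData>
  <Record type="HKQuantityTypeIdentifierHeartRate" value="62" unit="count/min"/>
  <Record type="HKQuantityTypeIdentifierHeartRate" value="71" unit="count/min"/>
  <Record type="HKQuantityTypeIdentifierStepCount" value="5400" unit="count"/>
</HealthData>"""

def summarize(xml_text: str) -> Counter:
    """Count how many records of each type the export contains."""
    root = ET.fromstring(xml_text)
    return Counter(rec.get("type") for rec in root.iter("Record"))

print(summarize(sample))
```

Counting record types is obviously not a diagnosis, but it's the kind of "what is all this stuff" overview a layperson might want, with zero trust in Sam Altman & Co. required.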
 
I get the privacy concerns and am concerned too, but this is actually one of the most genuinely helpful perks of AI. Utilizing the wealth of information available, the ability to identify health trends to diagnose and even prevent certain diseases and illnesses will prove to be an invaluable tool for doctors.
I mean, I'm already doing research and finding peer reviewed journal articles for my Dr. about my health. It's sad when I share results of clinical trials and he says "That's interesting, let me take a look and get back to you". Maybe AI will just remove the middle man because I BET you my Dr. is asking ChatGPT about that clinical trial...
 
Being discharged from hospital today after a stroke, I am buying an Apple Watch today, doctors all recommend it. I will look into this - I’m however not a fan of AI, I don’t use it.
 
I mean, I'm already doing research and finding peer reviewed journal articles for my Dr. about my health. It's sad when I share results of clinical trials and he says "That's interesting, let me take a look and get back to you". Maybe AI will just remove the middle man because I BET you my Dr. is asking ChatGPT about that clinical trial...

I bet you are incorrect, unless they are doing it on their own time and it's not connected to their patients. I work in clinical healthcare, do rounds daily, and AI is taken very seriously. We cannot access ANY AI while on campus (DNS and IP blocked), and using it without specific permission is a fireable offense.

Doctors can and do use AI, and there is nothing wrong with it. They aren't using ChatGPT and shouldn't be using publicly accessible LLMs. We have some internally handled medical AI models we are trialing... they run on site, can't be accessed remotely, and don't crawl the internet.
 
I bet you are incorrect. I work in clinical healthcare, do rounds daily, and AI is taken very seriously. We cannot access ANY AI while on campus, and using it without specific permission is a fireable offense.

Doctors can and do use AI. They aren't using ChatGPT. We have some internally-handled medical AI. It runs on site and can't be accessed remotely.
Fair point. In retrospect, he did inform me at my last visit that he was using an internal medical AI to record and generate after visit notes. Gotta say, those notes were a lot more complete than before when he had to squeeze them into the 30 seconds between patients.
 