Having worked in this AI field, I can say: I will NEVER EVER let my kids use ChatGPT. NOT EVEN for 30 minutes.
Steve Jobs never let his daughter use the iPad for longer than 30 minutes.
I know tons of people working at Meta. Even they are very strict with their kids and don't let them use Instagram.

I see more and more Gen Z users who rely on ChatGPT as a therapist. Everybody who works in this field knows this is a real danger; next to it, Instagram and other SNS platforms are "nothing". We observe that a faceless ChatGPT bot can become a user's "best friend", and those users can end up more depressed than ever.

Nobody could stop Meta, and I don't think our nation will ever be interested in regulating OpenAI and co. in this case.

Did you know that you can also use ChatGPT to learn things? That might be important for your children.
 
  • Like
Reactions: Isamilis and throAU
ChatGPT is actually not a teacher, nor is it an educational app. It's a large language model that people happen to be using as an answer box, into which they can plug all the world's questions and problems and get an answer that seems like the real thing. The problem is that the model is not designed to provide reliable information, nor does it check its own work. Its strength lies in modeling linguistic patterns. That is why it fabricates "facts" without a hiccup and spouts out blatant misinformation and lies.
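
To make the "modeling linguistic patterns" point concrete, here is a toy sketch of my own using the small open GPT-2 model (not ChatGPT itself): a language model simply continues text with plausible tokens, whether or not the continuation is true.

```python
# Toy illustration: a language model continues text with statistically
# plausible tokens; it has no built-in notion of whether they are true.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of Australia is"
outputs = generator(prompt, max_new_tokens=8, num_return_sequences=3, do_sample=True)

for out in outputs:
    print(out["generated_text"])
# A small model like GPT-2 will offer "Sydney" or "Melbourne" as readily as
# "Canberra"; fluent output and correct output are separate properties.
```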

Do you really want your children to be using this app to learn about the world?
 

Have you actually seriously used ChatGPT or are you just making assumptions based on what you think you know about LLMs?

I’m not saying blindly trust any LLM output but you can have ChatGPT provide citations for its sources and essentially have it read through a heap of web content for you and summarise it.

People seem to be hung up on the fact that an LLM might be wrong or hallucinate.

Guess what? So do people.
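
To be concrete about the citations point, here is a minimal sketch against the OpenAI Python SDK; the model name and prompt are placeholders, and this is the plain API without web search, so the sources it lists still have to be checked by hand:

```python
# Minimal sketch: ask the model for an answer plus the sources it relied on.
# Assumes the official openai Python package (v1+) and an OPENAI_API_KEY env var;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer concisely. List the sources you relied on as URLs at the "
                "end, and say 'not sure' when you are not sure."
            ),
        },
        {
            "role": "user",
            "content": "Summarise the main arguments for and against screen-time limits for teenagers.",
        },
    ],
)

print(response.choices[0].message.content)
```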
 
  • Like
Reactions: Isamilis and Kar98
Yes, I have seriously used ChatGPT.

I still stand by what I said, that ChatGPT is not designed as an educational app or as a teacher. It is a model. People just happen to be using it as some kind of magic technology that solves all the world's problems and acts like your best friend.

As for the citations and references, they are often suspect. In my use of ChatGPT, it does not provide citations unless I ask it for them.

There is nothing wrong with saying that an LLM can be wrong. I am not "hung up" on it—it's just a fact. Spend any amount of time with an LLM asking it fact-based questions, and it will eventually give you drivel. Let's call a spade a spade here, shall we?

The fact that the LLM is wrong is not the main issue. The problem is that the model presents everything it says as though it were right. Who would ever know, unless they checked the facts? I don't see that ChatGPT ever engages in any significant effort to check its so-called facts. The model is not designed to be a truth-teller or a fact provider. It IS designed to make things sound as natural as possible, and it will happily bulls**t its way through a conversation without letting you know that it really has no idea what it's talking about. Its answers are not driven by any wish on the part of the technology to give you the most reliable answer possible. It's just not designed that way. The ChatGPT screen on the web does not even come with a disclaimer that it makes mistakes (it used to, as I recall, but they apparently decided it was not necessary). Doesn't that tell you something about the irresponsibility of the developers, OpenAI?
 
I still stand by what I said, that ChatGPT is not designed as an educational app or as a teacher. It is a model. People just happen to be using it as some kind of magic technology that solves all the world's problems and acts like your best friend.

Some people are, yes. Some people also recognise where it can be useful without blindly accepting what it says as fact.

As for the citations and references, they are often suspect. In my use of ChatGPT, it does not provide citations unless I ask it for them.

This is why you check the citations.

There is nothing wrong with saying that an LLM can be wrong. I am not "hung up" on it—it's just a fact. Spend any amount of time with an LLM asking it fact-based questions, and it will eventually give you drivel. Let's call a spade a spade here, shall we?

Yes. 100% true.

However, I've had about as much or more "hallucination", fabrication, or outright misunderstanding from humans too.

My point is that no, it's not perfect, but in my experience it compares fairly well with your typical human and is much faster.

As far as kids using it as a learning tool which was kinda the original point:

100% yes! Not to the exclusion of teachers, but as a teaching aid that is 100% available all day every day, with infinite patience and willingness to talk, listen and explain - yes!
 
  • Like
Reactions: Isamilis
ChatGPT now has study mode. I haven't tried it yet, but I have been using ChatGPT a lot for work and personal use, and I even pay for a subscription for my children (high school and college). My office also encourages staff to use an LLM (they set it up on the intranet for security reasons).
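
In case it is useful, "setting it up on the intranet" basically means pointing the same client library at an internal, OpenAI-compatible server (for example vLLM or Ollama). A rough sketch; the hostname, port, API key, and model name below are made up:

```python
# Rough sketch of talking to an internally hosted, OpenAI-compatible LLM server
# (e.g. vLLM or Ollama). The hostname, port, API key, and model name are made up.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.corp.internal:8000/v1",  # hypothetical intranet endpoint
    api_key="not-needed-internally",              # many internal servers ignore the key
)

reply = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # whatever model the internal server exposes
    messages=[{"role": "user", "content": "Draft a short status update for my team."}],
)

print(reply.choices[0].message.content)
```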
 
  • Like
Reactions: jchap and throAU
As far as kids using it as a learning tool which was kinda the original point:

100% yes! Not to the exclusion of teachers, but as a teaching aid that is 100% available all day every day, with infinite patience and willingness to talk, listen and explain - yes!
Hmmm, I can concede that it might be useful as an aid in teaching. As a language model, ChatGPT does have its uses in language-related areas, I agree. For instance, I translate documents for a living, and I have used ChatGPT from time to time to give me some hints when I run into a particularly thorny sentence. Someone else here also wrote that it could be useful in learning foreign languages—again, sure, that's possible.

However, that also assumes that the teachers and the students are able to take responsibility and recognize that the tool is fallible. I'm willing to excuse it for being fallible. What I take issue with is the fact that OpenAI has removed their disclaimer of fallibility from public view. So many people are using LLMs now as replacements for search engines, which has never made sense to me. The technologies are vastly different, and the results are significantly different. For people who don't recognize this, I think there are dangers in using it. For instance, students might think that because ChatGPT gives them a perfectly natural-sounding English summary of their homework, they can trust it and use it as-is.

The LLM has become not just a springboard for learning and creativity, but a very real excuse for not learning and for just letting the technology do the work for you. Even some of Apple's ads for Apple Intelligence in recent months have tried to (humorously?) illustrate how people can just let the AI do the work for them and all will be well.

As far as an LLM being a teaching aid available 24/7, sure—you could apply that to any software solution. Computers are supposed to do our bidding—that's what they are there for. Their "patience" with us is because we have designed them to be that way.

I have not tried to learn any subjects besides language from ChatGPT, but I wonder how far I really could go. Actually, I have created quizzes with ChatGPT before regarding music production technology, one of my interests. It does all right with most of them, occasionally throwing in some false information that raises an eyebrow. That's an educational application, but I wouldn't say that it should be used as-is.
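
If anyone wants to reproduce the quiz experiment, a short script works as well as the chat window, and it forces the questions into a reviewable format. A sketch using the OpenAI Python SDK; the topic, model name, and JSON shape are placeholder choices of mine:

```python
# Sketch: generate a small multiple-choice quiz as JSON so it can be reviewed
# (and fact-checked) before anyone uses it. The model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a 5-question multiple-choice quiz about audio compressors in music "
    "production. Return a JSON object with a 'questions' list; each entry has "
    "'question', 'choices' (4 strings), and 'answer'."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # constrain the reply to valid JSON
)

quiz = json.loads(response.choices[0].message.content)
for i, item in enumerate(quiz["questions"], start=1):
    print(f"{i}. {item['question']} (answer: {item['answer']})")
```

The output still needs a human pass, of course; the occasional wrong "fact" mentioned above is exactly what you are checking for.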
 
Last edited:
ChatGPT now has study mode. I haven't tried it yet, but I have been using ChatGPT a lot for work and personal use, and I even pay for a subscription for my children (high school and college). My office also encourages staff to use an LLM (they set it up on the intranet for security reasons).
That's an interesting change. I wonder how they have gotten around the problems with erroneous facts being generated, though.
 
So many people are using LLMs now as replacements for search engines, which has never made sense to me.

To clarify, when I talk about doing this, I mean this:

[Attached screenshot: Screenshot 2025-08-06 at 3.41.52 pm.png]



Which literally makes ChatGPT search the web and read a bunch of articles to pull results for you.

As per exhibit A, fully sourced based on live results it pulled from searching the internet. No ads, no "sponsored link" garbage. Just the info I actually want:

[Attached screenshot: 1754466248571.png]
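
For anyone who prefers the API to the app, the same search-and-cite behaviour can be requested there too. A rough sketch using the OpenAI Responses API's web-search tool; the tool type string and model name are assumptions on my part and may have changed:

```python
# Rough sketch: have the model search the web and answer with citations.
# Assumes the openai Python SDK's Responses API; the tool type string
# ("web_search_preview") and model name may differ by SDK version.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],
    input="What are the current RAM options for the Mac mini, with sources?",
)

# output_text is the SDK's convenience accessor for the final text answer,
# which includes the citations produced by the search.
print(response.output_text)
```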





As far as learning goes, here's a conversation I had with the Universal Primer GPT on string theory and related tangents.

 

Attachments

  • Screenshot 2025-08-06 at 3.42.58 pm.png (743.7 KB)
Last edited: