I disagree. LLMs are getting dramatically smarter each year. There is no evidence as yet that LLM researchers are failing to maintain the hygiene of their training data.
There is a whole separate discussion, of course, about what happens to training data as the years go by and the majority of content comes to be either generated by LLMs - or at the very least quality-checked by them. The concern is not accuracy, which is easier to control, but the extent to which we want training data to self-reinforce.
If neural networks can inadvertently develop behaviour remarkably reminiscent of human intelligence and reasoning purely by sampling textual traces of human activity, it suggests that we do not understand human intelligence as well as we thought.
Perhaps the core of human cognition fundamentally operates on similar principles under the hood - a widely distributed self-learning system that progressively distils the statistical patterns of our experiences into higher-level models of abstraction, reasoning and generalised intelligence.
ChatGPT is free to use, though. The free tier is what is being baked into Apple's devices, with the same privacy protections afforded to enterprise customers. If you want the full-fat GPT-4o model (with its larger context window and higher usage limits), you still need to pay $20/mo for ChatGPT Plus and link it to your Apple Account.