It's a false analogy to equate individual human learning from copyrighted material with AI/LLM learning from the same material.
At the simplest level, when a human consumes copyrighted material, they're often paying for it, but the AI companies seem to want to replace "often" with "seldom if ever". That makes a big difference in how well people are reimbursed for creating intellectual property.
It's true that once a human learns from copyrighted material, they can tell others what they've learned, either one-to-one or on a larger scale: by becoming a teacher or lecturer, authoring their own works, making YouTube videos, etc. All of that is normal, fits within the frameworks we're familiar with, and keeps in place the methods we've long used to reimburse people for their work.
But AI/LLM distribution of knowledge is very different. When a publicly available LLM like ChatGPT, Gemini, Grok, etc. learns from copyrighted material, it then mass-distributes what it's learned to any number of people who access it, unbound by the individual human limitations on distributing knowledge that we've operated with until recently. While that might sound wonderful in a sci-fi sort of way, it almost completely upends the frameworks we've long had in place to ensure that the creators of that material are compensated for their work.
A lot of people (myself included) like the idea of having artificial entities we can sometimes call on to answer questions, do things for us, etc. (as long as they get things right), but not if it means that the people who create the works these entities learn from are poorly compensated, or not compensated at all. Something better than this needs to be worked out.