I should hope such reeducation occurs. An AI is a statistical inference engine. If the wrong answer is more common in its training data than the correct answer, it will happily regurgitate the wrong answer. It's a fundamental flaw in the whole design.

On multiple occasions Musk has openly admitted to tampering with Grok, forcing it to produce preset replies to specific questions; on one occasion it even regurgitated the instructions it was given instead of the answer. I've lost track of the number of times I personally saw Musk tweet that Grok would be "reeducated" on specific subjects he seems obsessed with.
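To put the frequency problem in the crudest possible terms, here is a toy sketch. This is not how any real LLM is built, and the question and figures are invented purely for illustration; the point is only that a system that answers by prevalence has no notion of truth.

```python
from collections import Counter

# Toy "model": answers purely by how often each answer appears in its corpus.
# The question and numbers below are made up for illustration only.
training_data = {
    "rated load of the hypothetical X-80 chain?": [
        "3.2 tonnes",                        # the (pretend) correct value, cited rarely
        "5 tonnes", "5 tonnes", "5 tonnes",  # a (pretend) wrong value, repeated widely
    ],
}

def answer(question: str) -> str:
    counts = Counter(training_data.get(question, ["I don't know"]))
    # Return whatever appears most often -- with no notion of which answer is true.
    return counts.most_common(1)[0][0]

print(answer("rated load of the hypothetical X-80 chain?"))  # -> "5 tonnes" (the wrong one)
```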
I've also had an AI give an answer that was correct a few years ago but present it as the correct answer for today. It didn't specify that the data was from 2022 and published in 2023.
I realize that most of the people on this blog do not deal with the real physical world on a frequent basis, but for those of us who do, the wrong data can easily lead to wrecked machinery and personal injury. Hallucinating AIs are dangerous.