Being able to respond to a prompt does not mean it conceptually “knows” something. Statistically, it hands you the most likely continuation of your language prompt.
ChatGPT is a *language* model; it generates speech. It is not “smart”, it’s just good at generating human-sounding speech after being trained on it. Being able to spit out a fact does not mean it “understands” a given topic.
Thanks for linking that writeup by Chomsky et al. I've seen it before, but it was worth seeing again. I think some of the arguments, in particular the ideas that ChatGPT and its ilk lack an inherent grammar and lack a morality unless one is imposed, are a bit thin, but I think it gives a clear definition of the gap in what these generative systems are capable of: they lack the ability to explain their reasoning.
This thread is talking about understanding and knowing, but I think that gives a clear test.