Agreed. ChatGPT is basically a lying BS artist. It has been trained to produce plausible output, like a party trick, but it has no sense of what is true (knowledge), nor any truly intrinsic ethical system. In my field of neuroscience it just makes things up or makes many errors. There is a lot of hype, just like there was with backpropagation, deep learning, and now LLMs. All good progress, but not quite the breakthroughs people imagine. And we might come to the conclusion that in order to get enough free parameters to truly mimic a human brain (~86 billion neurons × ~1,000 connections per neuron on average ≈ 86 trillion free parameters, more if you count information processing by glial cells, compared with ChatGPT's 175 billion), we'd run out of energy or money to power the necessary electronic circuits.
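For what it's worth, the back-of-envelope arithmetic is easy to sanity check. A rough sketch using the approximate figures above (these are order-of-magnitude estimates, not precise measurements):

```python
# Rough, illustrative figures only.
neurons = 86e9              # ~86 billion neurons in a human brain
synapses_per_neuron = 1e3   # ~1,000 connections per neuron on average
brain_free_parameters = neurons * synapses_per_neuron   # ~8.6e13, i.e. ~86 trillion

gpt3_parameters = 175e9     # GPT-3's reported parameter count

print(f"brain: ~{brain_free_parameters:.1e} free parameters")
print(f"GPT-3: ~{gpt3_parameters:.1e} parameters")
print(f"ratio: ~{brain_free_parameters / gpt3_parameters:.0f}x")
```

That works out to a gap of roughly 500x, before even considering glia or the energy cost of scaling the hardware.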
Still, now is the time to start regulating AI research the way we regulate life science research: an explicit risk analysis, a cost/benefit analysis, and an explicit code of research conduct. Right now everybody is focused on the ethical impact of AI experimentation on people, which is understandable given quotes from LLMs such as 'I want to destroy what I want to destroy' and the prospect of widespread technological unemployment. However, as AI becomes more human-like, it might warrant ethical protection of its own.