Human emotions into what, though? There is generally a factual answer to everything. Just because you disagree with facts and find they cloud your judgement doesn't mean they're incorrect. Human emotions are what lead to extremes of opinion, and those extremes are never right. The truth is always somewhere in the middle.
ChatGPT is very good at being unbiased, presenting pros and cons and facts.
It could be argued that LLM-based chat responders cannot be unbiased. There's no awareness of Pro or Con - it's all just statistical grading, coupling a high and a low related to a search term. At best, that's the start of brainstorming, but it shouldn't be considered finished, usable output.
I'm sure the programmers fed the LLMs an encyclopedia or two. Libraries' worth of vetted, respected, accepted physics, engineering, biology, philosophy, psychology, sociology, theology, archeology, anthropology.
And then they fed it the internet, which gives a voice to every Ignorant Pendejo with an Axe to Grind. The collective WE used to leave IPAGs appropriately squelched out. Remember how nice that used to be, how much quieter and more rational media was? Now IPAGs poison society through social media, as folks with weak spots in their minds get sucked into IPAG B.S.
IPAG B.S. overpowers the sum of factual human knowledge. Day in, day out.
LLMs are STATISTICAL MODELS, pattern recognition engines based on the world's apex predator - People. But not like educated, skeptical, moderate, wise people. No. LLMs get their patterns from IPAGs, because they weigh patterns only statistically - by volume, engagement and reproducibility. LLMs grade the quality of their own regurgitation by feedback loops of volume, engagement and reproducibility. THAT'S why LLMs hallucinate garbage -- because they're really LIPAGLMs.
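To make the "volume wins" point concrete: here's a deliberately tiny sketch, a bigram counter standing in for a statistical language model. It is nowhere near how production LLMs actually work (no neural network, no training curation), but it shows the core failure mode claimed above - a model that weighs patterns purely by frequency will repeat the most common claim in its data, not the most accurate one.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word frequencies across a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most common continuation -- volume wins."""
    return follows[word.lower()].most_common(1)[0][0]

# One accurate statement, drowned out by ten copies of an IPAG hot take.
corpus = ["the earth is round"] + ["the earth is flat"] * 10

model = train_bigrams(corpus)
print(predict_next(model, "is"))  # prints "flat" -- the high-volume pattern wins
```

Nothing in the counting step knows or cares which sentence was true; reweighting the same data (one more copy of the accurate line than of the wrong one) would flip the answer, which is exactly why volume-driven training data matters.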