You can set ChatGPT to search the web too and cite its sources.

Perplexity takes a completely different approach from ChatGPT.
ChatGPT spouts nonsense based on probabilities drawn from the pool of global knowledge.
Perplexity spouts nonsense based on current search results.
This means that the information provided by Perplexity can sometimes be more reliable. However, it can also take you in completely the wrong direction if there are name overlaps.
ChatGPT is more detailed, but it does not cite sources up front and, when pressed, often has to admit that it made things up.
> People preferring an ass-kissing, sycophantic bot to one that just does the job is somehow the world we live in. Guess it's not totally surprising considering we have people proposing to and getting proposed to by AIs and actually celebrating the "milestone".

People in the modern world just want a bit of mutual respect, even from a machine. You know, there have even been psychological studies about cyberbullying on platforms like Reddit (the downvote system) or Instagram, and the results were that mean comments from other users took a worse psychological toll on people than real problems from their offline lives. I won't even bring up the subject of how many people ended their lives because of online trolls.
One such lunatic
> People building parasocial relationships with chatbot system prompts. Absolutely depressing.

Nah, I heard some of them are really happy 😉
> There are religious groups worshiping LLMs on TikTok and other social media sites. Very cringey. Type "AI worship" into YouTube and see how screwed we are with the decline in critical thinking skills.

I've even heard of religious groups worshipping things that have never actually said a single word to them...
What. On. Earth...
One such lunatic
I would rather say disturbing.
Isn't the depressing part that people feel the need to do it, and not that they're actually helping themselves?
Yeah, it doesn't make any of this "okay" or good for society at large, but it's at least understandable when you look at how real people treat each other online. Not at all surprising that people find it comforting. There's a reason therapists should be afraid of AI.
Thus some people just find refuge in chatbots that will cave in to everything the user says. If OpenAI had no clue about that, they wouldn't have included the feature to start with, hence why it is so popular. So I believe they will roll out a 5.1 update or something and bring the emotional tone back (or maybe they'll reserve it for paid tiers).
Shouldn't the height of such a bar be proportional to the value it represents?
The problem isn’t GPT-5’s communication style; it's more that it gets basic things wrong. Today I asked how many weeks it has been since the 7th of April and it said 10.
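For what it's worth, the date arithmetic the model fumbled is a one-liner. A minimal sketch, assuming hypothetical dates for illustration (the "today" below is my guess at a date around when the comment was written, not something stated in the thread):

```python
from datetime import date

# Assumed dates, purely for illustration
start = date(2025, 4, 7)   # the 7th of April
today = date(2025, 8, 18)  # a hypothetical "today"

weeks = (today - start).days // 7
print(weeks)  # 19 full weeks -- nowhere near 10
```

Subtracting two `date` objects yields a `timedelta`, so the whole computation is exact integer arithmetic; there is nothing probabilistic to get wrong.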
I have nothing to add on this specific point, but I do think it's pretty funny that if you give ChatGPT a large text and ask it to count the number of times the word "dog" appears, there's a pretty decent chance it will give you a wrong answer. But if you simply ask it for a Python script to count the word "dog", it will almost certainly give you a perfectly functional script that counts them correctly, and it will even run the script if you ask.
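A minimal sketch of the kind of counting script the commenter means (the whole-word, case-insensitive matching is my assumption about what "counting the word" should do; the sample text is made up):

```python
import re

def count_word(text: str, word: str) -> int:
    # Whole-word, case-insensitive match, so "dogs" and "Dogma" don't count
    pattern = rf"\b{re.escape(word)}\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

sample = "The dog barked. A second dog joined in, and the dogs ran off."
print(count_word(sample, "dog"))  # 2 -- "dogs" is a different word
```

Deterministic code like this is exactly where these models are reliable: writing the counter is a well-trodden pattern, while doing the counting token-by-token is not.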
> What the furor revealed is something far more worrying than just a style that's "too clinical". Something that was always floating around, but we never suspected it would be this big: people are emotionally attached to and dependent on ChatGPT 4. Go see Reddit and YouTube: thousands of people literally in despair, crying, some even threatening suicide, at the loss of what they consider their best friend, their therapist, and even their lover (there are threads where both men and women "married" ChatGPT).
>
> It sounds unreal, something straight out of a sci-fi movie like "Her", but reality is already exceeding fantasy.
>
> Honestly I don't think Apple is making a mistake by taking their time with AI. Some of the most nefarious aspects of AI are just starting to show up.

Wow, all this over a soulless machine????
People are bio-electrical/chemical machines. Do you know that humans have a soul? Do you know whether consciousness might just be a product of complexity? Maybe there is a field and we are all part of it: trees, flowers, bacteria, etc.
What has this world come to?
Hearing stuff like this makes me feel like I'm in a fever dream