A problem with the ChatGPT 5.1 release that I haven't seen anyone address is the appearance of occasional spelling and grammar errors that prior models never exhibited. Among other things, 5.1 will sometimes drop short words like "a", "and", and "the", so that some responses read the way a non-native English speaker might phrase them.
5.1 also still hallucinates now and then, especially when you ask it about recent events or for things like current business names, addresses, and phone numbers. Today I asked it for local businesses that sell HVAC parts, and it gave me a list of several. I then searched Bing for more info on these businesses but couldn't find them, so I asked Gemini 3 whether it could find the businesses ChatGPT had given me. It replied that they weren't real, and it gave me the names, addresses, and phone numbers of several actual local HVAC businesses. When I asked ChatGPT about the discrepancy, it admitted that it had given me businesses that don't exist, with fictitious addresses and phone numbers, plus one possibly real business located over 700 miles away.
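For what it's worth, here's a minimal Python sketch of the cross-check I ended up doing by hand: take the listings a model returns and flag any that an independent source can't confirm. The `lookup_business` function is a hypothetical placeholder, not a real API; in practice it would wrap whatever directory or search service you trust.

```python
# Sketch of cross-checking model-suggested business listings against an
# independent source. lookup_business is hypothetical; swap in a real
# directory or search lookup.

def lookup_business(name: str) -> bool:
    """Hypothetical verifier; stand-in data for the sketch."""
    known = {"acme heating & cooling supply"}
    return name.lower() in known

def flag_unverified(listings: list[dict]) -> list[dict]:
    """Return the model-suggested listings no independent source confirms."""
    return [b for b in listings if not lookup_business(b["name"])]

suggested = [
    {"name": "Acme Heating & Cooling Supply", "phone": "555-0100"},
    {"name": "Valley HVAC Parts Depot", "phone": "555-0199"},  # possibly invented
]
print(flag_unverified(suggested))  # prints only the unverifiable entries
```

Tedious to do manually, but trivially automatable, which makes it all the stranger that the model doesn't verify this kind of answer itself.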
I asked ChatGPT why it still sometimes hallucinates, and it replied that it's still more of a "pattern recognizer": it relies on its training data to generate most answers rather than looking online for current information unless you specifically ask it to. It said its training data must contain old phone numbers, addresses, etc., and that it also simply generates "stuff that looks right based on my pattern recognition routines" (I'm paraphrasing). But I frequently get accurate current information from ChatGPT, including from 5.1, without having to tell it to check online, so it was confusing to get hallucinated answers this time. It's too inconsistent.
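If you're hitting this through the API rather than the chat UI, you can at least force the browsing behavior instead of hoping the model decides to search. This is a sketch only: it assumes the Responses API's built-in web-search tool and a "gpt-5.1" model name, and the exact tool type and model string may differ from what's current, so check the OpenAI docs.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumption: the Responses API's web-search tool; the tool "type" value
# and the model name below may differ in current docs.
resp = client.responses.create(
    model="gpt-5.1",
    tools=[{"type": "web_search"}],
    input="List businesses near me that sell HVAC parts, with sources.",
)
print(resp.output_text)
```

That still doesn't explain why the default behavior is so inconsistent, since the model clearly can and sometimes does search on its own.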
Google's business from the start has been searching the internet for current information, so OpenAI's default of answering from stale training data makes its approach to AI look partly antiquated by comparison.