#1: The Information Isn’t Always Accurate
ChatGPT answers questions with authority, but that doesn’t mean the information is correct. It sounds just as confident when it is wrong, which can confuse customers. Even its makers openly acknowledge the bot’s shortcomings, stating: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
#2: It Doesn’t Have Any Special Training
ChatGPT can comb the internet for information, but it doesn’t have special training or access to internal resources to provide personalized responses. If answers to customers’ questions aren’t available online but require industry or company knowledge, ChatGPT won’t be able to answer correctly—but that won’t stop it from responding.
#3: It Provides Different Answers Every Time
One of the lauded benefits of ChatGPT is that it offers a new response every time. But that creates an inconsistent customer service experience. Companies can never count on the bot to provide a specific answer, making monitoring customer needs and requests challenging. One of chatbots’ most common use cases is repetitive questions, such as order status or account information. These questions require a set answer, which ChatGPT can’t provide consistently.
Hi, Reality,
I find it fascinating that there are other people here who also like AI.
I would like to address some of the points you brought to the table.
Regarding point #1, you can greatly improve information accuracy in several ways. GPT's hallucinations tend to either diminish or stop altogether when you connect a database to it and ask it to read from that database.
Here's how it works: suppose you would like to know more about the Official Microsoft Windows 11 API, and GPT is making up information. Bad bot!
What you do is either fine-tune it to give the required responses (i.e., you condition it to give the right answers by showing it examples of how it should respond, and then training it), and/or simply tell GPT, "Here, read this documentation and tell me what it is about in your own words."
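To make the fine-tuning option concrete, here's a minimal sketch of what "showing it examples of how it should respond" looks like in practice: a JSONL file where each line is one example conversation. The company name, questions, and answers below are invented for illustration, and the exact file format expected by a given fine-tuning service may differ, so treat this as the general shape rather than a definitive spec.

```python
import json

# Hypothetical training examples: each one shows the model a question
# and the exact answer we want it to learn to give.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for AcmeCo."},
            {"role": "user", "content": "What is your return window?"},
            {"role": "assistant", "content": "AcmeCo accepts returns within 30 days of delivery."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support bot for AcmeCo."},
            {"role": "user", "content": "Do you ship internationally?"},
            {"role": "assistant", "content": "AcmeCo currently ships to the US and Canada only."},
        ]
    },
]

# Write one JSON object per line -- the usual JSONL layout for training data.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

You would then upload a (much larger) file like this to a fine-tuning job; a handful of examples is nowhere near enough in practice.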
Fine-tuning is more reliable, but embedding documentation into it is more versatile. With ChatGPT, one way to make it read documentation is to simply copy and paste the documentation you want and ask it to learn it. Of course, that only lasts as long as the conversation's context, so if you need a more permanent and reliable solution, you can instead use the API to make GPT read from any documentation you want.
Of course, in this case, you will pay per token. But the cost tends to be small, especially with the cheaper models (there are several model variants, some more capable than others).
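To give a rough sense of scale, here's a back-of-the-envelope cost estimate. The per-token price below is a placeholder assumption, not an official figure; check your provider's current pricing page before relying on it.

```python
# HYPOTHETICAL price, in USD per 1,000 tokens -- not an official figure.
PRICE_PER_1K_TOKENS = 0.002

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Total request cost: you pay for both what you send and what you get back."""
    total = prompt_tokens + completion_tokens
    return total / 1000 * PRICE_PER_1K_TOKENS

# Pasting a 3,000-token documentation page and getting a 500-token answer:
cost = estimate_cost(3000, 500)
print(f"${cost:.4f}")  # prints $0.0070
```

Even stuffing a whole documentation page into the prompt costs well under a cent at a price like this, which is why the approach is practical.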
I mostly covered point #2 as well, but I would like to add that ChatGPT does have special training. It is fine-tuned to interact with you in a conversation, which GPT (the base AI that you use with the API) is not. No, you CANNOT fine-tune ChatGPT, but you can definitely fine-tune GPT.
Regarding point #3, if you use the API, there's a setting called "temperature". The closer it is to zero, the more predictably the model will respond, tending to give the same response every time.
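Concretely, that's just one field in the request. The sketch below only builds the payload rather than sending it; the model name is an assumption, so swap in whichever variant you actually use.

```python
# Sketch of a chat completion request with temperature pinned to 0,
# so repeated questions get as close to identical answers as the model allows.
request = {
    "model": "gpt-3.5-turbo",  # assumed model name; use whichever you prefer
    "temperature": 0,          # 0 = most predictable, higher = more varied
    "messages": [
        {"role": "user", "content": "What is the status of order #1234?"},
    ],
}
```

This is how you get the "set answer" behavior for repetitive questions like order status: fixed prompt plus zero temperature.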
If, after setting the temperature to zero, you STILL find that it's not complying, you can simply supply it with the information you want, and either copy and paste that information and/or ask GPT to paraphrase it – whatever works for you.