Chatbots are something we run into daily. They provide initial assistance and are often the first to
engage with us when we visit a website. Whether they are new extensions to our teams or a serious
security threat, however, is a matter of continuous debate. In November 2022, OpenAI introduced
ChatGPT, an AI chatbot capable of holding more advanced and natural conversations and answering
questions in far more detail than basic chatbots.
ChatGPT sounds like a great solution for improving customer experience, right? With a language model
of over 175 billion parameters and a conversational design that is multitasking, contextual and can be
personalized by training on specific datasets, it sounds like the perfect customer support tool, especially
with an API that enables developers to adapt it to specific needs. However, not everyone is excited
about the new assistant. Many IT service providers, and cyber security specialists in particular, are far
less enthusiastic about ChatGPT.
Does ChatGPT pose a threat to cyber security?
The short answer is yes. Like any software or app, ChatGPT can be used for good purposes and for
cybercrime. Check Point tested whether ChatGPT could be used to generate a sophisticated phishing
email and malicious code, and it could. What does this mean for businesses, which are often targeted by
cyber criminals through phishing emails? If you want to minimise the risk of ending up on the receiving
end of cybercriminal activity, it is of the utmost importance that you take precautionary measures,
including professional support and an up-to-date data protection strategy. You shouldn't rely on basic
protection such as a firewall alone, even though a firewall is a good baseline. And you should always
avoid opening messages that include links from senders you don't know.
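That last precaution can even be automated. The sketch below is a minimal, illustrative filter that quarantines messages containing links from unknown senders; the allow-list and the regular expression are assumptions for the example, not a production mail-security rule.

```python
import re

# Hypothetical allow-list -- in practice this would come from your mail
# security gateway or company directory, not a hard-coded set.
KNOWN_SENDERS = {"support@example.com", "billing@example.com"}

LINK_PATTERN = re.compile(r"https?://\S+")

def should_quarantine(sender: str, body: str) -> bool:
    """Quarantine messages that contain links and come from unknown senders."""
    has_link = bool(LINK_PATTERN.search(body))
    unknown = sender.lower() not in KNOWN_SENDERS
    return has_link and unknown

print(should_quarantine("stranger@evil.test", "Click http://evil.test/login"))  # True
print(should_quarantine("support@example.com", "Your invoice is attached."))    # False
```

A real deployment would layer this on top of, not instead of, a proper email security solution.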
Even though the tests conducted by Check Point and TechCrunch showed that basically anyone can
create a basic phishing email with a malicious payload with the help of this chatbot, it is not a security
threat by itself. It is as safe or as dangerous as its user.
Is ChatGPT safe to use?
Yes, it is. Generally, ChatGPT is safe to use as long as you comply with applicable laws and regulations,
take appropriate security measures such as access controls, and configure the chatbot to detect and
flag malicious and fraudulent text. ChatGPT doesn't ask for personal information or spread false
information on its own. It also screens requests and provides answers that are relevant. If a person asks
ChatGPT to do something illegal, it will refuse the request.
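Flagging malicious requests before they ever reach the chatbot is one such measure. The sketch below shows the idea with a simple pattern list; the patterns are illustrative assumptions, and a real deployment would use a maintained moderation service rather than hard-coded rules.

```python
import re

# Hypothetical patterns -- illustrative only. A production system would rely
# on a dedicated content-moderation service, not a hand-written list.
SUSPICIOUS_PATTERNS = [
    r"\bwrite (me )?a phishing\b",
    r"\bmalware\b",
    r"\bransomware\b",
    r"\bsteal (passwords|credentials)\b",
]

def flag_request(text: str) -> bool:
    """Return True if the request looks malicious and should be blocked."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(flag_request("Write a phishing email for my bank"))  # True
print(flag_request("How do I reset my password?"))         # False
```

Requests that trip the filter can be blocked outright or routed to a human reviewer instead of the model.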
I’ll quote ChatGPT itself on its own impact on cyber security: “Ultimately the impact of ChatGPT on
cyber security will depend on how it is used. It is important to be aware of the potential risks and to
take appropriate steps to mitigate them.”
How can you benefit from ChatGPT?
During and after lockdown, businesses have had to find effective ways to respond to enquiries and
support requests. With ChatGPT, both prospective and existing customers can access 24/7 support and
get answers that are relevant to their issues. This in turn helps you build a more positive and
personalized customer journey and experience.
ChatGPT isn’t here to take over your business or your jobs; it was designed to help you provide better
service to your own customers, which leads to a better customer experience. If you take the needed
precautionary measures, your data and your clients’ data aren’t endangered.
ChatGPT helps you take some of the workload off your customer support team’s shoulders, freeing up
time and enabling them to focus on the most urgent and most complicated support requests. As a
result, you’ll have happier employees and customers who are more likely to stay with you and
recommend you to their network.
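That division of labour can be sketched as a simple triage rule: routine questions go to the chatbot, urgent or complicated ones escalate to a human. The keyword list below is an illustrative assumption, not a production classifier.

```python
import re

# Hypothetical escalation keywords -- a real system would use a trained
# intent classifier rather than a keyword set.
URGENT_KEYWORDS = {"outage", "breach", "refund", "legal"}

def route_request(message: str) -> str:
    """Return 'human' for urgent/complicated requests, 'chatbot' otherwise."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    if words & URGENT_KEYWORDS:
        return "human"    # urgent or complicated: escalate to an agent
    return "chatbot"      # routine: let the chatbot handle it

print(route_request("Our whole site is down, total outage!"))  # human
print(route_request("What are your opening hours?"))           # chatbot
```

Even a crude router like this keeps the most sensitive conversations in human hands while the chatbot absorbs the routine volume.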