ChatGPT: What are the cyber risks?

As technology advances, so do the dangers we face in the digital world. Artificial intelligence (AI) has long been a part of the cyber security industry, but the newest versions of AI, such as ChatGPT, have rapidly broken new ground and are already having a profound impact on the future.

Developed by the artificial intelligence research laboratory OpenAI, ChatGPT (Chat Generative Pre-Trained Transformer) is an AI-powered chatbot platform designed for conversational AI systems such as virtual assistants and chatbots. ChatGPT uses the very large and sophisticated GPT (Generative Pre-trained Transformer) language model to generate human-like responses in text form. ChatGPT does not look up information in real time; the responses it generates are based on the data it was trained on.

The security risks of ChatGPT

Data theft

Attackers use various tools and techniques to steal data. There is a concern that ChatGPT may make a cyber criminal’s life easier. ChatGPT’s ability to impersonate others, write flawless text, and create code can be misused by anyone with malicious intent.

Storing data

ChatGPT does save your personal information, and the real risk is that it collects data from your conversations. When using AI, it is extremely easy to feed it private information by mistake, for example by forgetting to redact a document that you ask it to proofread. In addition, ChatGPT stores other data, including:

    • Personal information: your name, contact details, login credentials, payment information, and transaction records.
    • Device information: IP address, location, browser type, and the date and time that you start using ChatGPT as well as the length of your session. ChatGPT also retrieves your device’s name and operating system. It uses cookies to track your browsing activity both in the chat window and on its site. It claims to use this information for analytics and to find out exactly how you interact with ChatGPT.
    • Information entered into the chat: ChatGPT records and stores transcripts of your conversations. This means any information you put into the chat, including personal information, is logged.
    • The privacy policy states that if you intend to enter personal data into the chat, you must provide the people involved with adequate privacy notices, obtain their consent, and be able to show that you are processing this data lawfully. Further, if you are entering information defined as personal data under the GDPR, you must contact OpenAI to execute its Data Processing Addendum.

This personal information may be shared with a number of people and entities, including:

    • Vendors and service providers.
    • Other businesses.
    • Affiliates.
    • Legal entities.
    • AI trainers who review your conversations.

Malware development

Researchers have found that ChatGPT can aid in malware development. For example, a user with little or no knowledge of malicious software could use the technology to write functional malware. Malware authors can also use ChatGPT to develop advanced software, such as a polymorphic virus, which changes its code to evade detection.


Phishing emails

One of the easiest ways to spot a phishing email is to look for spelling and grammatical mistakes. There is a concern that hackers will use ChatGPT to write phishing emails that read as though they were written by a professional.

BEC (Business email compromise)

Business email compromise (BEC) is a type of social engineering attack where a scammer uses email to trick someone in an organisation into sharing confidential company data or sending money. Security software usually detects BEC attacks by identifying patterns. However, a BEC attack powered by ChatGPT can get past security filters.

Within seconds, ChatGPT can write text in a real person's voice and style, producing a convincing email that reads as though it were authored by a specific individual.

Spam

Spammers usually take a while to write their text. With ChatGPT, they can boost their workflow by generating spam text instantly. Although most spam is harmless, some can carry malware or direct users to malicious websites.

Plagiarism

With the world relying more on chatbots powered by AI, expect ethical dilemmas to arise as people use the tool to take credit for content they did not write themselves.

Ransomware

Ransomware's ability to hijack computer systems has helped extortionists make small fortunes. Many of these attackers do not write their own code; instead, they buy it from ransomware creators on Dark Web marketplaces. They may no longer have to rely on third parties, though, as ChatGPT can write malicious code capable of encrypting an entire system in a ransomware attack.

Fake news

In the age of clickbait journalism and the rise of social media, it can be challenging to tell the difference between fake and authentic news stories. Spotting fake stories is important because some spread propaganda while others lead to malicious pages. For instance, fake news stories about natural disasters sometimes trick unsuspecting users into sending donations to scammers. There is a fear that ChatGPT could be used to spread misinformation.