
What’s an AI assistant?

Artificial intelligence (AI) assistants such as ChatGPT are software programs that use AI to help users with everyday tasks. They work by processing natural language input (voice or text) and generating responses that simulate a human conversation.

ChatGPT is a language model developed by OpenAI that is based on a neural network architecture called GPT (Generative Pre-trained Transformer). It has been trained on a massive corpus of text data, allowing it to understand and generate human-like language.
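To make this concrete, here is a minimal sketch of how an application talks to a hosted assistant. It assumes Python, the official `openai` client library (v1 or later), an API key in the environment, and an illustrative model name; it is just an example of the request/response pattern, not NINTH EAST tooling:

```python
# Minimal sketch: send text to a hosted AI assistant and read the reply.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the
# environment. The model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute whatever model you use
    messages=[
        {"role": "user", "content": "Draft a short email scheduling a meeting."},
    ],
)

# The assistant's answer comes back as generated text.
print(response.choices[0].message.content)
```

Note that everything placed in `messages` leaves your network and is processed by a third-party service, which is exactly where the risks discussed below begin.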

Now that we understand the basic concept of AI assistants, do they introduce new or additional risks that organisations need to be aware of?

The short answer is a resounding YES: AI assistants can introduce additional risks for organisations and individuals alike.

Let’s dig in deeper to understand why.

Artificial intelligence (AI) assistants have become increasingly popular in the workplace (and at home). They can help with tasks like scheduling, answering questions, and even drafting emails. With that growing adoption, however, come new cybersecurity risks that organisations need to be aware of.

Here are some of the risks that ChatGPT and other AI assistants bring to the workplace:

  1. Data privacy: AI assistants need access to a lot of data to function effectively, which can include sensitive information such as emails, documents, and other confidential material. If this data is not properly secured before it is shared with an assistant, it can be exposed to unauthorised parties, leading to data breaches and other security incidents (a simple redaction sketch follows this list).

  2. Malware and phishing attacks: AI assistants can themselves be targets of malware and phishing attacks. Hackers can use a compromised assistant to gain access to sensitive information or to distribute malware to other systems on the network.

  3. Manipulation: AI assistants can be manipulated by cybercriminals to trick users into disclosing sensitive information or performing actions that compromise security. For example, a hacker could use voice-mimicking technology to impersonate a user’s voice and gain access to sensitive information.

  4. Lack of transparency: AI assistants use complex algorithms to process data and generate responses. This can make it difficult for users to understand how the AI assistant is making decisions and whether it is following ethical and legal guidelines.
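To illustrate risk 1 above: one practical control is to redact obvious identifiers before any text is sent to an external assistant. The sketch below is plain Python with deliberately simple, hypothetical patterns; a production data-loss-prevention filter would need to be far more thorough:

```python
import re

# Hypothetical, deliberately simple patterns; a real DLP filter would cover
# many more identifier types and edge cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-9]\d{8}\b"),  # AU-style numbers
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com or call 0412345678 about invoice 1234."
print(redact(prompt))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED] about invoice 1234.
```

Running text through a filter like this before it leaves your network reduces what a breach at the assistant provider can expose.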

Overall, while AI assistants can be valuable tools in the workplace, they also bring new cybersecurity risks. Organisations need to take proactive steps to mitigate these risks and ensure the security of their data and systems. NINTH EAST recommends the following mitigating steps:

  1. Develop policies and procedures for AI assistants: Organisations should develop policies and procedures that govern the use of AI assistants in the workplace. This should include guidelines for data privacy, security, and ethical use of AI.

  2. Implement strong authentication: Organisations should implement strong authentication mechanisms to prevent unauthorised access to AI assistants and the data they access (see the gateway sketch after this list).

  3. Monitor and log AI assistant activity: Organisations should monitor and log all activity performed by AI assistants so that potential security incidents can be identified and acted on (see the audit-logging sketch after this list).

  4. Regularly update and patch AI assistants: Organisations should keep AI assistants, and the platforms that host them, regularly updated and patched to address known vulnerabilities.
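On point 2, a common pattern is to place an authenticated gateway in front of the assistant rather than letting staff (or other systems) call it directly. Below is a minimal sketch assuming Flask; the token store is hypothetical, and in practice you would plug in your identity provider (SSO/OAuth) instead:

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical token store; in practice, validate against your identity
# provider (SSO/OAuth) rather than a hard-coded set.
VALID_TOKENS = {"s3cr3t-staff-token"}

@app.before_request
def require_bearer_token():
    """Reject any request without a valid 'Authorization: Bearer <token>'."""
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        abort(401)

@app.post("/assistant")
def assistant_proxy():
    prompt = request.get_json(force=True).get("prompt", "")
    # Here the gateway would forward the (redacted, logged) prompt to the
    # AI assistant and return its reply; stubbed out in this sketch.
    return jsonify({"reply": f"(assistant response to: {prompt!r})"})

if __name__ == "__main__":
    app.run(port=8080)
```

A gateway like this is also the natural place to apply the redaction and audit logging shown in the other sketches.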
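On point 3, an audit trail can start as simply as wrapping every assistant call so that who asked what, and when, is written to a log your security team can review. Here is a minimal sketch using only the Python standard library; `call_assistant` is a hypothetical stand-in for your real integration:

```python
import logging

# Dedicated audit logger writing to an append-only file.
audit = logging.getLogger("ai_assistant_audit")
handler = logging.FileHandler("ai_assistant_audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def call_assistant(prompt: str) -> str:
    """Hypothetical stand-in for the real AI assistant integration."""
    return f"(response to: {prompt!r})"

def audited_call(user_id: str, prompt: str) -> str:
    """Record who asked what before and after each assistant call."""
    # Consider logging a redacted prompt, or just its length, if prompts
    # themselves may contain sensitive data (see risk 1 above).
    audit.info("user=%s prompt=%r", user_id, prompt)
    reply = call_assistant(prompt)
    audit.info("user=%s reply_chars=%d", user_id, len(reply))
    return reply

print(audited_call("jsmith", "Summarise the Q3 board paper."))
```

Feeding these logs into your existing monitoring then turns unusual assistant activity into something your security team can actually detect and investigate.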

NINTH EAST can assist you in developing these policies and procedures, as well as in establishing an ongoing data security strategy that ensures risks are mitigated as they arise.