We recently added a blog article, What is Generative AI? In it, we covered how this type of AI is designed to create something new, such as images, sounds, or text, by using ML algorithms to analyse and learn from large datasets of existing content, then using that knowledge to generate new content. Now, we will focus on two large language models (LLMs): Google’s BARD and OpenAI’s ChatGPT.
LLMs are a type of Generative AI trained on very large amounts of text data, including books, articles, code, and other forms of text. From this data, they learn the patterns of human language and can generate text that is both grammatically correct and semantically meaningful. LLMs are changing the way we interact with computers in a number of ways. For example, they are being used to develop chatbots that can hold more natural and engaging conversations with humans, and to build new text-based interfaces that let users interact with computers more intuitively.
LLMs are still under development, but they have the potential to revolutionise the way we interact with computers. As they become more powerful and sophisticated, they will become increasingly integrated into our everyday lives.
Overall, LLMs offer a wide variety of benefits.
In addition to these benefits, LLMs are being used to develop new and innovative applications. For example, they are being used to create virtual assistants that help people with daily tasks, and to develop new forms of entertainment, such as interactive games and movies.
There is a growing number of LLMs available on the market, but this article will focus on BARD and ChatGPT.
From a cybersecurity perspective, NINTH EAST believes that LLMs can be an effective tool for improving several areas of security practice. Some examples are:
LLMs can be used to analyse large amounts of data to detect anomalies that may indicate a cyberattack. Our NINTH EAST Risk Technology practice is currently trialling LLMs as part of our overall methods for detecting cyberattacks and breaches; early-stage results indicate that, with the correct prompts, detection accuracy currently sits at 87%.
They can also be used to develop new security tools and technologies that help prevent and mitigate cyberattacks. The value AI brings to the cybersecurity industry is widening, and proper assessment of its applicability continues, but as with the previous point, early-stage results are positive and indicate opportunities for innovation.
One known and effective way to mitigate the risk of cyberattack is educating the employees who have access to systems and protected data. LLMs can be used to create relevant educational content that helps employees learn cybersecurity best practices, and to do so in a very timely fashion when known attacks or vulnerabilities emerge.
Generative AI or LLMs with access to real-world information can help organisations respond to cyberattacks more effectively, assisting the cyber response team in identifying the source of an attack, applying proven mitigations, and recovering with solutions or steps validated by other organisations that have gone through the same or similar attacks.
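To make the first example above more concrete, here is a minimal sketch of prompting an LLM to flag anomalous log entries. This is an illustration only, not our production method: `build_detection_prompt`, `query_llm`, and `detect_anomalies` are hypothetical names, and `query_llm` is a stub standing in for a real model API call.

```python
def build_detection_prompt(log_lines):
    """Assemble a classification prompt for a batch of log lines."""
    numbered = "\n".join(f"{i}: {line}" for i, line in enumerate(log_lines, 1))
    return (
        "You are a security analyst. For each numbered log line below, "
        "answer ANOMALOUS or NORMAL, one per line, in order.\n\n" + numbered
    )

def query_llm(prompt):
    # Placeholder: swap in a real model API call here. This stub flags
    # lines mentioning failed logins, purely so the sketch is runnable.
    verdicts = []
    for line in prompt.splitlines():
        if ":" in line and line.split(":", 1)[0].isdigit():
            verdicts.append("ANOMALOUS" if "failed login" in line.lower() else "NORMAL")
    return "\n".join(verdicts)

def detect_anomalies(log_lines):
    """Return the subset of log lines the model labels ANOMALOUS."""
    verdicts = query_llm(build_detection_prompt(log_lines)).splitlines()
    return [line for line, v in zip(log_lines, verdicts) if v == "ANOMALOUS"]

logs = [
    "2024-01-05 09:12 user alice logged in from 10.0.0.4",
    "2024-01-05 09:13 user bob failed login x50 from 203.0.113.9",
]
print(detect_anomalies(logs))  # flags the failed-login line
```

In practice, the prompt wording, batching, and the handling of model output all need tuning, and any verdicts should be validated by a human analyst before action is taken.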
Take a seat ChatGPT, let’s talk BARD!
BARD’s Proposition
Bard’s proposition is a new approach to large language models (LLMs) that is designed to be more powerful, more informative, and more engaging than previous LLMs. Bard is trained on a massive dataset of text and code, which allows it to generate text that is both grammatically correct and semantically meaningful. Bard is also able to access and process information from the real world through Google Search, which gives it a wider range of knowledge than previous LLMs.
ChatGPT is another LLM trained on a massive dataset of text. However, ChatGPT is not able to access and process information from the real world, so its knowledge is limited to the text it was trained on. As a result, it cannot answer questions about the real world as accurately or informatively as Bard.
In addition to its ability to access and process information from the real world, Bard has a number of other claimed advantages over ChatGPT: it is more powerful, generating text that is more complex and nuanced; more informative, answering questions in a more comprehensive and detailed way; and more engaging, making it more likely to hold the user’s attention.
Taken together, these capabilities give Bard’s proposition a number of benefits over earlier LLMs.
Output Comparison
The outputs from both BARD and ChatGPT for the prompt “What is Gartner listing as the top cybersecurity concerns for 2024 onwards?” are shown below.
Analysing the two outputs, it is clear that Google’s BARD holds a significant advantage over ChatGPT because of its access to real-world information. That alone, in our opinion, could make BARD the better platform. With the understanding that it is still only an experimental project, we are excited to see how the platform operates once the large number of hallucinations we are seeing, and that general users are by all reports experiencing as well, is reduced.
Conclusion
Both ChatGPT and BARD are powerful options available right now, but they should be used carefully. Considering that both are marked as experimental, human intervention or validation is still required to ensure that the outputs are correct.