
A beginner's guide to ChatGPT

Everyone’s talking about ChatGPT, and the internet is full of examples of AI shaking up how we ask questions. It’s a powerful technology, but to get the most out of it there are some things you need to consider...


Firstly, let’s consider what ChatGPT actually is.


It’s built on an artificial intelligence technology known as a ‘large language model’ - or LLM. This means an algorithm has been trained on a large amount of text. In ChatGPT’s case that text came largely from the internet, but it could also include scientific research, books, social media posts and so on.


The algorithm analyses the relationships between different words and, from those relationships, calculates probabilities: given the words so far, which word is most likely to come next? Ask it a question and it builds an answer, word by word, based on those probabilities. Do this at massive scale with machine learning and the result is a chatbot that can answer complex questions in a recognisable, human-like way. It beats Alexa hands down!
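To make the ‘probabilities’ idea concrete, here’s a deliberately tiny Python sketch. It’s nothing like a real LLM (which uses neural networks trained on vast amounts of text), but it shows the core mechanic: count which words follow which, then turn those counts into next-word probabilities.

```python
# Toy next-word predictor: count which words follow which in some
# training text, then convert the counts into probabilities.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the sofa"
words = training_text.split()

# For each word, count the words that immediately follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Return each candidate next word with its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: n / total for candidate, n in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.25, 'sofa': 0.25}
```

A real LLM does the same kind of thing with whole sequences of words rather than single pairs, and with billions of learned parameters rather than a simple table of counts - which is why its answers read so fluently.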


Sounds good, right? Well, yes it does, but as with everything... there is a downside. Using such a large volume of data means it’s not possible to filter out everything that’s inaccurate or offensive.


And as a result, it does get things wrong - and it’s often those examples that make the press.

Things like getting facts wrong, being coaxed into creating toxic content, or being biased or gullible towards particular arguments.


A common worry is that LLMs will ‘learn’ from your prompts and offer your information up to other users. There is a concern here, but not for the reason you might think.


Currently, these models don’t give other users your information. But - and this is important - your queries are visible to the organisation that provides the service. This means the provider, or its partners, may be able to read what you’ve typed.


Queries can be sensitive for different reasons - perhaps because of the data they contain, or because of who’s asking the question.


Imagine a CEO being discovered to have asked ‘how best to lay off an employee?’, or someone asking revealing health or relationship questions.


And, as more LLMs spring up, so does the risk that the queries stored online might be hacked, leaked, or accidentally made public.



Ok, what about the NCSC (the National Cyber Security Centre): what do they recommend?


Their advice is really quite simple. Don’t include sensitive information in your queries - and don’t submit queries that would cause you grief if they were made public.


So, is it possible for my organisation to use these tools to automate tasks?


While it’s not recommended that you use ‘public’ LLMs like ChatGPT to process sensitive information, ‘cloud-hosted’ or ‘self-hosted’ models might be a better fit.


If it’s cloud-hosted, check the terms of use and privacy policy: you’ll want to understand how your data is managed and who it’s available to. And if it’s self-hosted, you’ll want to carry out a security assessment.
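To make the self-hosted option concrete, here’s a minimal, purely illustrative Python sketch. It assumes a hypothetical model server running on your own infrastructure that exposes an OpenAI-compatible HTTP API (many self-hosting tools do); the URL, port and model name below are placeholders, not real endpoints.

```python
# Illustrative query to a hypothetical self-hosted model. Because the
# server runs on your own infrastructure, the query never leaves it.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",  # placeholder local endpoint
    json={
        "model": "local-model",  # placeholder model name
        "messages": [
            {"role": "user", "content": "Summarise our leave policy in one paragraph."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Even with a setup like this, the security assessment still matters: you’re now responsible for the patching, access control and logging of the server that holds your queries.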


The use of this technology isn’t always well-intentioned. There have already been some striking demonstrations of LLMs helping to write malware.


LLMs can create convincing-looking results, but they’re currently only suited to simple tasks - they’re useful for ‘helping experts save time’.


But an expert capable of creating highly capable malware is likely to be able to coach an LLM into doing so too.


And LLMs can also be used to advise on technical problems. If a cyber-criminal is struggling to escalate privileges or find data, they might ask an LLM and receive an answer that’s not unlike a search engine result, but with more context.


As LLMs excel at replicating writing styles on demand, there’s a risk of criminals using them to write convincing phishing emails in multiple languages.


We might see more convincing phishing attacks, or criminals trying techniques they weren’t familiar with previously.


So, what does the future look like?


It’s an exciting time, but there are risks involved in the use of public LLMs, and individuals and organisations should take great care with the data they choose to submit.


The NCSC is aware of other emerging threats relating to cyber security and the adoption of LLMs, and you can guarantee that we haven’t heard the last of them… things are just getting started!


Note: This blog was not written by ChatGPT...or was it?!


 

Reporting

Report all fraud and cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to report@phishing.gov.uk. Report SMS scams by forwarding the original message to 7726 (spells SPAM on the keypad).

 

