A new article co-authored by GCHQ's chief data scientist looks at how large language models (LLMs) like ChatGPT could give rise to new and unanticipated security risks.

It says there are 'serious concerns' about individuals entering sensitive information as questions into these models, as well as about 'prompt hacking', in which models are tricked into producing harmful or misleading outputs.

- Gordon Corera