Microsoft Builds Offline LLM for US Intelligence Services

Tom Everill | 16 May 2024


 

Summary

  • Microsoft is reportedly developing a generative AI model for American intelligence agencies, built on OpenAI's GPT-4 model, to analyse classified information securely. 

  • Concerns have been raised about the potential for AI systems to mislead officials due to limitations of language models, which operate on statistical probabilities and are prone to generating factually incorrect outputs, or 'hallucinations'. 


Microsoft has reportedly been building a generative AI model designed specifically for American intelligence agencies. According to a Bloomberg report, this marks the first time Microsoft has deployed a major language model in a secure setting, allowing spy agencies to analyse classified information without connectivity risks and hold secure conversations with a chatbot similar to ChatGPT or Microsoft Copilot. However, the development raises concerns that AI systems could mislead officials if not implemented carefully, given the inherent design limitations of language models. 

The new AI service, which does not yet have a public name, is built on OpenAI’s GPT-4 model. It will be used for writing computer code, analysing information, and powering virtual assistants that converse in a human-like manner. Microsoft holds a license to the technology as part of a deal tied to its significant investments in OpenAI. 

Developing the system involved 18 months of work to modify an AI supercomputer in Iowa, according to William Chappell, Microsoft's CTO for strategic missions and technology. The modified GPT-4 model is designed to read files provided by its users but cannot access the open Internet, addressing intelligence agencies’ growing interest in using generative AI to process classified data while mitigating the risks of data breaches or hacking attempts. 
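The Bloomberg report does not detail the system’s internals, but the general pattern it describes can be sketched: at inference time the model’s context is assembled solely from analyst-supplied files, with no network access. The Python sketch below illustrates that pattern only; the file names and the `local_llm_generate` function are hypothetical placeholders, not details of Microsoft’s deployment.

```python
# Minimal sketch of the air-gapped pattern described above, not Microsoft's
# actual system: the prompt is built exclusively from analyst-supplied local
# files, and there are deliberately no network imports or calls.
from pathlib import Path


def build_prompt(question: str, file_paths: list[str]) -> str:
    """Assemble a prompt solely from user-provided local files."""
    sections = []
    for path in file_paths:
        text = Path(path).read_text(encoding="utf-8")
        sections.append(f"--- {path} ---\n{text}")
    context = "\n\n".join(sections)
    return f"Context documents:\n{context}\n\nQuestion: {question}\nAnswer:"


def local_llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an on-premises model's completion call."""
    return "(model output would appear here)"


if __name__ == "__main__":
    prompt = build_prompt(
        "Summarise the key findings.",
        ["report_a.txt", "report_b.txt"],  # hypothetical analyst files
    )
    print(local_llm_generate(prompt))
```

The key design property is that nothing outside the supplied files enters or leaves the prompt pipeline, which is what removes the connectivity risks described above.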

GPT-4, as a Large Language Model (LLM), essentially predicts the most likely next item (token) in a sequence based on user input and produces output accordingly; it is not capable of reasoning or critical thought. Overreliance on these fundamentally non-rational forms of intelligence could therefore lead to a range of negative externalities. For example, in March, BISI reported on research into LLMs’ diplomatic and military decision-making capabilities and their observed tendency towards ‘arms-race dynamics’ and ‘extreme’ behaviour. 
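To make this mechanism concrete, the toy Python example below shows the core next-token step: the network produces a score (logit) for each token in its vocabulary, a softmax converts those scores into a probability distribution, and decoding selects a likely continuation. The vocabulary and logit values here are invented purely for illustration.

```python
# Toy illustration of next-token prediction (not GPT-4 itself): the model
# assigns a probability to every candidate token and then selects or
# samples a continuation. Vocabulary and logits are invented for clarity.
import numpy as np

vocab = ["the", "agency", "report", "concluded", "tomorrow"]
# Hypothetical raw scores (logits) the network produced for the next token
logits = np.array([0.2, 1.5, 3.1, 2.7, -0.5])

# Softmax turns logits into a probability distribution over the vocabulary
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in sorted(zip(vocab, probs), key=lambda x: -x[1]):
    print(f"{token:>10}: {p:.3f}")

# Greedy decoding: emit the single most probable token. Note that nothing
# here checks whether the chosen token is *true* -- only that it is likely.
print("next token:", vocab[int(np.argmax(probs))])
```

Nothing in this procedure verifies that the selected token is factually correct; it is only statistically likely, which is the root of the concerns discussed below.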

One serious drawback of using models like GPT-4 to analyse important data is their potential to construct flawed summaries, draw incorrect conclusions, or provide inaccurate information to their users. These factually incorrect outputs are known as ‘hallucinations’. Because trained neural networks operate on statistical probabilities rather than functioning as databases, they make poor factual resources unless augmented with external access to information through techniques such as retrieval-augmented generation (RAG). 
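As an illustration of the RAG technique mentioned above, the sketch below retrieves the documents most relevant to a query and prepends them to the prompt, so the model answers from supplied text rather than from its statistical priors alone. TF-IDF retrieval from scikit-learn is used to keep the example self-contained (production systems typically use dense vector embeddings), and the sample documents and `llm_generate` function are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch: find the most
# relevant documents for a query, then ground the model's prompt in them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The satellite was launched in 2019 from Baikonur.",
    "The treaty was signed by four states in 2021.",
    "The facility's output doubled between 2020 and 2022.",
]


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]


def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for the language model call."""
    return "(grounded answer would appear here)"


query = "When was the satellite launched?"
context = "\n".join(retrieve(query, documents))
# Grounding the prompt in retrieved text constrains the model to the
# supplied evidence instead of its memorised statistical patterns.
print(llm_generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```

Even with retrieval, outputs still require verification: a model can misread or ignore the supplied context, so RAG reduces, rather than eliminates, the risk of hallucination.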

Given this limitation, it is entirely possible that GPT-4 could misinform or mislead America's intelligence agencies if not used properly. The lack of public information about the system's oversight, limitations on its use, and auditing processes for accuracy raises concerns about the consequences of relying on AI-generated insights for critical decision-making. 



Forecast

  • Short-term

    • As the use of generative AI models in secure settings becomes more prevalent, intelligence agencies will likely continue to explore ways to leverage these technologies for analysing classified data. However, the potential for misinformation and inaccurate conclusions may lead to increased scrutiny and the development of more robust oversight mechanisms. 

  • Medium-term

    • The deployment of AI models in secure settings by major tech companies like Microsoft may prompt other nations to accelerate their efforts in developing similar technologies for their own intelligence agencies. This could lead to an intensified race to harness the power of generative AI in the realm of national security and geopolitics. 

  • Long-term

    • The increasing reliance on AI models for processing and analysing sensitive information may necessitate the development of new international frameworks and agreements to regulate the use of these technologies in the context of intelligence gathering and national security. As the capabilities of AI continue to advance, the need for collaboration between governments, tech companies, and academia will become increasingly important to ensure the responsible and ethical use of these powerful tools. 
