nBrain AI | Regal Technology Partners

Your Data Never Trains the AI.
Here's Exactly How It Works.

A simple, visual explanation of how Regal's AI platform uses language models without ever exposing, sharing, or training on your proprietary data.

Prepared for: Regal Technology Partners
Topic: Data Security & AI Architecture
Date: March 2026
Classification: Confidential

We Use the AI. We Don't Train It.

The most important thing to understand: the large language model (like GPT-5) is a pre-built tool. It was trained by Microsoft/OpenAI on public internet data — books, websites, research papers — long before it ever sees your environment. We never change the model. We never feed it your data to learn from. We use it the same way you'd use a calculator: you give it a problem, it gives you an answer, and it forgets everything immediately.

Think of It Like Hiring a Contractor

Imagine you hire a highly skilled engineer who went to the best schools and has decades of general knowledge. You bring them into your secure facility, hand them a specific document, and ask them a question about it. They read the document, give you their expert answer, and then you shred the document and wipe their memory of it.

That's exactly how this works. The AI has general knowledge from its training (school). Your documents are handed to it one question at a time (the specific task). It answers and immediately forgets. It never takes your documents home. It never learns from them. It never gets smarter from your data.

The Data Flow: What Goes Where

Here's exactly what happens when someone on your team asks the AI a question. Follow the arrows:

Your Secure Environment (you own and control this)
Your documents, your data, your Azure Government subscription. Everything lives here and never leaves.

    → Question sent via API (encrypted tunnel — nothing stored) →

Azure OpenAI API
Encrypted connection. No data stored. No logging by Microsoft.

    → Processed →

The LLM (GPT-5) (pre-trained model — unchanged by your data)
Reads the question. Generates an answer. Immediately forgets. No learning occurs.

    ← Answer returned to the API, then passed back to your environment ←

The Critical Point

The arrows represent an API call — the same way your phone talks to a weather service. You send a question, you get an answer, and nothing is stored on the other end. Microsoft's Azure OpenAI Service contractually guarantees that your inputs and outputs are not used to train, improve, or modify the model in any way.
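The shape of that call can be sketched in a few lines of Python. This is an illustration only, not Regal's actual implementation: the function name, the system message text, and the sample excerpts are all made up for the example. The point is structural — everything the model needs travels inside one payload, and nothing in the payload refers to any server-side state.

```python
def build_request(question: str, excerpts: list[str]) -> dict:
    """Package one self-contained request. Everything the model needs
    travels inside this payload; the service keeps no state between calls."""
    context = "\n\n".join(excerpts)
    return {
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided document excerpts."},
            {"role": "user",
             "content": f"{context}\n\nQuestion: {question}"},
        ],
        # Note what is absent: no session ID, no conversation token,
        # no user history. Each call stands entirely alone.
    }

payload = build_request("What is the connector torque spec?",
                        ["Excerpt A ...", "Excerpt B ..."])
```

Because each payload is complete in itself, the next question can be answered by a fresh, memoryless instance of the model — which is exactly what "stateless" means here.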

What We Build vs. What the LLM Does

There are two completely separate things happening. On the left is everything we custom-build for Regal — this is where the intelligence specific to your operations lives. On the right is the generic AI model that never changes.

What We Build for Regal

  • AI Agents — Custom-built instructions that tell the AI how to behave, what to look for, and how to respond for your specific use cases
  • System Prompts — The detailed rules, logic, and context that shape every AI response to fit Regal's needs
  • Document Retrieval (RAG) — The pipeline that finds the right documents from your library and hands them to the AI at query time
  • Workflow Automation — The business logic that routes questions, applies permissions, and manages user interactions
  • Data Connectors — Integrations with your ERP, PLM, MES, and other systems
API Boundary

What the LLM Already Is

  • Pre-trained model — Trained on public internet data by Microsoft/OpenAI before it ever reaches your environment
  • General knowledge — Understands language, logic, reasoning, math, code — but knows nothing about Regal specifically
  • Stateless — Every API call is independent. It has no memory between questions
  • Unchanged — Your usage does not modify, improve, or alter the model in any way
  • Shared architecture, isolated processing — The model exists in Azure Government, but your data is processed in isolation and never retained
What nBrain Customizes: The Agent Layer

This is where all the "training" happens — but it's not training the AI model. It's building the instructions, rules, and logic that wrap around the model.

  • Prompt engineering — crafting the right instructions
  • Agent behavior rules — "when asked about X, look in Y"
  • Response formatting — how answers should be structured
  • Permission logic — who can ask what
  • Document retrieval strategy — which docs to search and how
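A behavior rule like "when asked about X, look in Y" can be as simple as a routing table. The sketch below is illustrative only — the keywords and collection names are hypothetical, not Regal's real document categories — but it shows why none of this touches the model itself: the rules live entirely in our layer.

```python
# Hypothetical routing table: keyword -> document collection to search.
ROUTING_RULES = {
    "thermal": "engineering_specs",
    "quality": "qa_reports",
    "invoice": "erp_exports",
}

def route(question: str) -> str:
    """Choose which document collection to search for a given question."""
    q = question.lower()
    for keyword, collection in ROUTING_RULES.items():
        if keyword in q:
            return collection
    return "general_library"  # fallback when no rule matches
```

Changing a rule here changes agent behavior immediately, with no retraining of anything — it is configuration, not learning.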
What Never Changes: The AI Model Itself

The model's weights — the actual "brain" of the AI — remain exactly as Microsoft/OpenAI shipped them. Your data never touches this.

  • Model weights are frozen — your data cannot alter them
  • No fine-tuning occurs with your data
  • No training pipeline exists in your environment
  • No data is sent to OpenAI for model improvement
  • The model is the same whether Regal uses it or not
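The frozen-weights point can be made concrete with a toy model. The numbers below stand in for billions of real parameters; the analogy is that inference reads the weights and the prompt, and writes nothing.

```python
# Toy "model": a tuple of weights standing in for billions of parameters.
# Tuples are immutable in Python, mirroring the frozen-weights guarantee.
WEIGHTS = (0.12, -0.70, 1.40)

def infer(prompt: str, weights: tuple = WEIGHTS) -> float:
    """Inference is a pure function of frozen weights plus the prompt."""
    return sum(w for w in weights) * len(prompt)

snapshot = WEIGHTS
infer("highly proprietary question")
assert WEIGHTS == snapshot  # usage did not change the model
```

Training, by contrast, would be a function that returns *new* weights — and no such function exists anywhere in this architecture.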

What Happens When an Engineer Asks a Question

Let's trace exactly what happens from the moment someone types a question to when they get an answer:

1. Engineer types a question

"What are the thermal specifications for the connector assembly in our current DMS-R program?"

Happens inside your secure environment
2. The agent searches your documents

Our custom-built retrieval system searches Regal's document library (your vector database) and finds the 3-5 most relevant document sections. These documents never leave your environment — the search happens entirely inside your Azure subscription.

Your data stays in your environment
3. The question + relevant document excerpts are sent to the API

The question and the retrieved document snippets are packaged together and sent to the Azure OpenAI API over an encrypted connection. This is like sending a sealed envelope — only the API can open it, and it's processed in an isolated session.

Encrypted in transit — TLS 1.3
4. The model reads, answers, and forgets

The AI model reads the question and document excerpts, generates a detailed answer, and returns it. The moment the response is sent back, the model's memory of this interaction is wiped. There is no session, no history, no learning. The next question starts completely fresh.

No data retained. No learning. No model change.
5. The answer is returned to the engineer

The engineer sees the answer in the platform — with citations pointing back to the original source documents so they can verify the information. The entire interaction is logged in your audit trail.

Answer + audit log stay in your environment
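The five steps above can be sketched end-to-end in a few lines. This is a toy illustration, not Regal's production pipeline: simple keyword overlap stands in for the real embedding search, and `llm_call` is a stub you supply in place of a live Azure OpenAI request.

```python
def retrieve(question: str, library: list[str], top_k: int = 3) -> list[str]:
    """Step 2: find the most relevant documents. A real system uses
    embeddings and a vector database; keyword overlap stands in here."""
    words = set(question.lower().split())
    scored = sorted(library,
                    key=lambda doc: -len(words & set(doc.lower().split())))
    return scored[:top_k]

def ask(question: str, library: list[str], llm_call) -> dict:
    excerpts = retrieve(question, library)   # step 2: search stays local
    request = {"question": question,
               "excerpts": excerpts}         # step 3: packaged for the API
    answer = llm_call(request)               # step 4: stateless model call
    return {"answer": answer,
            "citations": excerpts}           # step 5: answer + sources
```

With a stubbed `llm_call` (for example, `lambda req: "stubbed answer"`) the whole flow runs locally — which mirrors the architectural point: the model call is a replaceable, stateless boundary, and everything else lives in your environment.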

What Microsoft Puts in Writing

These aren't just our promises — these are Microsoft's contractual commitments in their Azure OpenAI Service Data Processing Addendum:

1. Your Data Is Not Used to Train Models

Microsoft contractually guarantees that your prompts, inputs, outputs, and any data processed through Azure OpenAI Service are never used to train, retrain, or improve any Microsoft or OpenAI models. Period.

2. Your Data Is Not Shared with OpenAI

When you use Azure OpenAI (as opposed to OpenAI directly), your data is never sent to OpenAI. It stays within Microsoft's Azure infrastructure. OpenAI has no access to your prompts or responses.

3. Your Data Is Not Accessible to Other Customers

Each API call is processed in complete isolation. No other Azure customer can see, access, or benefit from your data. Your processing is a sealed, private session.

4. Your Data Is Processed Only Where You Choose

In Azure Government Cloud, your data is processed exclusively in US sovereign regions (US Gov Virginia / US Gov Arizona). It never crosses international boundaries or leaves government-controlled infrastructure.

When We Say "Training" — What We Actually Mean

The word "training" causes confusion because it means two very different things in AI. Here's the distinction:

We Do NOT Do This: Model Training

This is what people fear — and it's not what we do. Model training means feeding data into the AI's neural network to change its weights and make it "learn" new information permanently.

  • Changes the model's internal parameters
  • Data becomes part of the model permanently
  • Requires massive compute and GPU clusters
  • The model "remembers" the training data forever
  • Other users of the model could potentially access derived knowledge
This Is What We Do: Agent Configuration

We write instructions that tell a pre-built model how to behave for Regal's specific use cases. The model itself never changes — we just give it better directions.

  • System prompts — "You are an engineering assistant for Regal Technology..."
  • Retrieval rules — "Search these document categories first..."
  • Response templates — "Format answers with source citations..."
  • Permission logic — "Only show F-35 data to cleared users..."
  • Workflow automation — "Route quality questions to the QA agent..."
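Agent configuration of this kind is plain data wrapped around the model. The sketch below is illustrative: every name and value is hypothetical, and none of it reads or writes model weights.

```python
# Hypothetical agent configuration: instructions and rules, not training.
AGENT_CONFIG = {
    "system_prompt": ("You are an engineering assistant for Regal "
                      "Technology. Cite sources for every answer."),
    "retrieval": {"categories": ["specs", "quality"], "top_k": 5},
    "restricted_programs": {"F-35"},
}

def allowed(user_clearances: set, program: str, config=AGENT_CONFIG) -> bool:
    """Permission logic: restricted programs require a matching clearance."""
    if program in config["restricted_programs"]:
        return program in user_clearances
    return True
```

Note where the enforcement happens: in our code, before anything is sent to the API. The model never sees a document the permission logic withholds.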

Another Way to Think About It

Model training is like teaching someone a new language. It changes who they are fundamentally. Once they learn it, they can't unlearn it, and they'll use that knowledge with everyone they interact with.

Agent configuration (what we do) is like handing someone a job description and a reference binder. They read the instructions before each task, do the work, and hand the binder back. They don't become a different person. They don't remember the binder's contents after the task. And the next person who uses them gets a fresh start with their own binder.

Your Data Protection Summary

Your documents stay in your Azure subscription

All source documents, embeddings, and search indexes live entirely within infrastructure you own and control.

The AI model is not trained on your data

The model weights are frozen. Your usage does not change, improve, or modify the AI in any way.

Every interaction is stateless and forgotten

Each question is an independent API call. The model retains nothing between calls. No session, no memory, no history.

Microsoft guarantees this in writing

The Azure OpenAI Data Processing Addendum contractually prohibits Microsoft from using your data for model training or sharing it with any third party.

No data ever leaves your environment

Nothing is sent to OpenAI. Nothing goes to third-party servers. Nothing crosses borders. API calls stay within Azure Government infrastructure.

No model can be built from your data

We never create derivative models, fine-tuned models, or trained systems from your proprietary information. The model is generic. Your data is private.

Your Data. Your Control. Always.

We build the intelligence around the model — the agents, the logic, the workflows. We never put your data inside the model. That's the fundamental architecture guarantee.