Frequently Asked Questions

What exactly does JD Fortress AI provide?

We deliver fully offline, on-premises AI powered by advanced large language models (LLMs).

Think of it as having enterprise-grade AI capabilities similar to a premium GPT subscription — but completely disconnected from the internet and public clouds. No data ever leaves your environment.

Alternatively, imagine building your own private, intelligent company Wikipedia: it draws exclusively from your internal documents, policies, contracts, and knowledge base. You can ask complex questions and receive accurate, structured answers — unlocking powerful automation across operations while maintaining total data sovereignty.

Can you provide concrete examples of how this benefits a business?

Absolutely — here are realistic scenarios for a law firm using our secure, local RAG-powered AI:

  • Instantly retrieve and summarize relevant case law, precedents, and internal memos when drafting advice or pleadings — reducing research time from hours to minutes.

  • Automate compliance checks: upload new regulations or client contracts and ask the system to flag risks, inconsistencies, or required actions.

  • Generate first-draft client correspondence, NDAs, or engagement letters tailored to your house style and past precedents.

  • Quickly answer due-diligence queries during mergers or acquisitions by cross-referencing your full document repository.

  • Support junior lawyers and paralegals with accurate, context-aware explanations of complex clauses or procedures — improving efficiency without risking external data exposure.

These capabilities transform routine business-as-usual (BAU) tasks into fast, reliable, and fully private processes.

How much does it cost?

Our pricing varies depending on your specific requirements. We offer a range of options to fit your needs and budget. We have developed a Cost Reduction Potential Calculator to help you decide; with reasonable, even conservative assumptions, an ROI in under six months is readily achievable. Please contact us for more information.

How frequently are the underlying language models updated, and who handles this?

We follow a controlled, predictable update cadence: major model refreshes occur approximately every six months, driven purely by significant performance and capability improvements in the rapidly evolving LLM landscape.

Because the system runs entirely locally/on-premises, it has no internet-facing exposure and therefore no external attack surface, so routine security patches and vulnerability fixes are not required in day-to-day operation.

How does my company’s data get integrated into the AI system?

Loading and preparing your data is one of the most important (and technical) parts of the deployment — which is why we handle it end-to-end as part of our professional services.

In simple terms: we help you organise, securely convert, vectorise (turn into searchable numerical representations), and index your entire company knowledge base so the AI can instantly find and use the right information when answering questions. You don’t need to manage any of the complex technical steps — our team takes care of the full RAG pipeline setup tailored to your documents and workflows.
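To make the retrieval idea concrete, here is a deliberately toy sketch of the vectorise/index/retrieve cycle described above: documents are turned into vectors, stored in an index, and matched against a question by similarity. The bag-of-words vectors and the `vectorise`/`retrieve` helpers are simplified illustrations only; production RAG pipelines (including ours) use dense neural embeddings and a proper vector database.

```python
import math
import re
from collections import Counter

def vectorise(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real pipelines use dense neural embeddings instead.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Standard cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(chunks):
    # The "index" pairs each chunk with its vector;
    # real deployments store these in a vector database.
    return [(chunk, vectorise(chunk)) for chunk in chunks]

def retrieve(index, question, k=2):
    # Rank chunks by similarity to the question and keep the top k.
    q = vectorise(question)
    ranked = sorted(index, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

docs = [
    "Engagement letters must be signed before work begins.",
    "Invoices are payable within 30 days of issue.",
    "All NDAs require partner approval.",
]
index = build_index(docs)
print(retrieve(index, "Do NDAs need partner approval?", k=1))
# → ['All NDAs require partner approval.']
```

The retrieved chunks are what the local LLM sees as context when composing its answer, which is why careful ingestion matters so much.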

What if much of our data still exists only on paper (physical documents)?

Not a problem: we can incorporate paper-based records. We offer optional, bespoke digitisation and scanning services to convert physical files into searchable digital formats suitable for secure ingestion into the RAG system. This is priced separately as a professional service. Please contact us for a tailored quote and timeline.

Can email archives be included in the knowledge base?

Yes, historical email archives can be securely processed and added — this is a common and valuable source of institutional knowledge.

For security and compliance reasons, we recommend working only with non-live, archived exports rather than real-time mailboxes. We can design a periodic, controlled refresh process (e.g. monthly or quarterly exports) that keeps the system up-to-date without creating ongoing exposure risks. We’ll work with your IT/compliance team to define the safest approach.
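As an illustration of working from non-live exports, the sketch below reads an archived mbox file (a common export format) and pulls out the subject and plain-text body of each message for ingestion. It uses only the Python standard library; the sample message is synthetic, and this is a simplified stand-in rather than our actual ingestion code.

```python
import mailbox
import os
import tempfile

def extract_messages(mbox_path):
    # Yield (subject, body) pairs from an archived mbox export,
    # taking the first text/plain part and skipping attachments.
    for msg in mailbox.mbox(mbox_path):
        part = msg
        if msg.is_multipart():
            for p in msg.walk():
                if p.get_content_type() == "text/plain":
                    part = p
                    break
        payload = part.get_payload(decode=True)
        body = payload.decode("utf-8", errors="replace") if payload else ""
        yield msg.get("Subject", "(no subject)"), body.strip()

# Demo with a tiny synthetic mbox export (illustrative data only).
raw = (
    "From clerk@example.com Thu Jan  1 00:00:00 2025\n"
    "From: clerk@example.com\n"
    "Subject: Engagement terms\n"
    "\n"
    "Signed letter attached for the file.\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".mbox", delete=False) as f:
    f.write(raw)
messages = list(extract_messages(f.name))
os.unlink(f.name)
print(messages)
# → [('Engagement terms', 'Signed letter attached for the file.')]
```

Because the input is a static export rather than a live mailbox, the process can run entirely inside the secure environment on whatever cadence your compliance team approves.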

Which file formats are supported?

We fully support the most common business formats, including:

  • PDF

  • Microsoft Word (.doc, .docx)

  • Microsoft Excel (.xls, .xlsx)

  • Microsoft PowerPoint (.ppt, .pptx)

…and many others. Our ingestion pipeline handles structured, semi-structured, and unstructured content effectively.
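Conceptually, ingestion starts by routing each file to a format-specific parser based on its extension. The sketch below shows that routing step only; the handler names are hypothetical placeholders standing in for real extractors, not part of our product.

```python
from pathlib import Path

# Hypothetical dispatch table; the handler names are placeholders
# for real format-specific extractors.
HANDLERS = {
    ".pdf": "pdf_text_extractor",
    ".doc": "word_converter", ".docx": "word_converter",
    ".xls": "spreadsheet_reader", ".xlsx": "spreadsheet_reader",
    ".ppt": "slide_reader", ".pptx": "slide_reader",
}

def route(path):
    # Pick an ingestion handler from the file extension (case-insensitive).
    suffix = Path(path).suffix.lower()
    if suffix not in HANDLERS:
        raise ValueError(f"Unsupported format: {suffix or path}")
    return HANDLERS[suffix]

print(route("contracts/NDA_2024.docx"))
# → word_converter
```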

How frequently is the ingested company data refreshed or updated?

Refresh frequency is fully customisable and defined in your Service Level Agreement (SLA). Options range from manual/on-demand updates to scheduled automated syncs (daily, weekly, monthly, etc.), depending on your needs and security policies. Please reach out so we can discuss the right cadence and any associated setup costs.
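One way a scheduled sync can stay cheap is by being incremental: only documents whose content actually changed since the last run are re-ingested. The sketch below illustrates that idea with content hashes; it is a simplified illustration under that assumption, not our actual sync engine.

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(root):
    # Map each file (relative path) to a content hash,
    # so changes can be detected between syncs.
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def changed_files(old, new):
    # Files that are new or modified since the previous snapshot.
    return [f for f, h in new.items() if old.get(f) != h]

# Demo: two files, one of which is edited between "syncs".
root = Path(tempfile.mkdtemp())
(root / "policy.txt").write_text("v1")
(root / "handbook.txt").write_text("stable")
before = snapshot(root)
(root / "policy.txt").write_text("v2")
after = snapshot(root)
print(changed_files(before, after))
# → ['policy.txt']
```

Only the changed files would then be re-vectorised and re-indexed, which keeps even a daily cadence practical for large repositories.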

Since the system is fully isolated with no internet connection at all, how do I actually get answers/results to my everyday work computer?

Correct — JDFortVault operates in true air-gapped mode for maximum security and compliance (ideal for the most sensitive environments).

This intentional separation actually encourages more deliberate, focused use of AI — which many of our high-security clients value.

Practical access methods include:

  • Running the system on a dedicated, secure machine connected via your internal local network (Ethernet) — answers appear directly in a browser or approved client interface on that machine.

  • Using “sneakernet” (secure USB drives or similar removable media) to transfer questions to the air-gapped system and bring back responses.

We help design the workflow that best balances security and usability for your team.

Which parts of the system are offline, and which require internet connectivity?

Very simply:

  • Offline / fully local: The actual AI reasoning and language model (LLM) runs entirely on your premises or in your VPC — no data is ever sent to external providers like OpenAI, Anthropic, etc.

  • Online / connected (only when needed): Internet access is used solely to securely retrieve your own company data from approved cloud storage (e.g. Google Cloud, Azure, AWS S3) into the local environment for processing. No prompts, questions, or generated outputs ever leave your control.

The LLM never “phones home” — your sensitive data stays inside your fortress.

JDFortVault (Fully Air-gapped / Offline)

If my data is already stored securely in Google Cloud, why is a local/hybrid AI solution more secure?

Great question — and one we hear often.

Even when data resides in a highly secure cloud environment (such as Google Cloud with strong access controls), sending that data out to a third-party LLM provider (e.g., via API calls to public models) creates a material risk: your confidential information temporarily leaves your perimeter and is processed by someone else’s infrastructure.

With JDFortSpan:

  • Your data never leaves your controlled environment to reach an external LLM.

  • You keep using your existing secure cloud storage as the source of truth.

  • Internet is used only to pull your own files into the local, on-premises LLM instance for private processing.

  • All AI inference happens locally — so answers, insights, and generated content remain entirely within your fortress.

This gives you the best of both worlds: the convenience of cloud-hosted data combined with the ironclad privacy of offline, on-premises AI.

JDFortSpan (Local LLM with controlled connectivity)