
Local AI + WHMCS MCP Server

Run AI models on your own hardware with LM Studio or Ollama and connect to WHMCS via MCP Server. No API keys to cloud providers. No data leaves your network. Suitable for hosting providers under GDPR, HIPAA, or SOC 2 requirements, or anyone who wants full control over their AI stack.

Zero Cloud Exposure

Data stays on your server

Compliance Ready

GDPR, HIPAA, SOC 2

No Internet Required

Fully air-gapped capable

Hardware Requirements by Model

Local AI performance depends on your hardware and model choice. Here are tested configurations for WHMCS MCP operations.

| Model | RAM | VRAM | Speed | Quality |
|---|---|---|---|---|
| Llama 3.1 8B | 8 GB | 6 GB (optional) | Fast (15-30 tok/s on CPU) | Good for simple queries |
| Llama 3.1 70B | 48 GB | 24 GB recommended | Moderate (5-10 tok/s) | Excellent, near cloud quality |
| Mistral 7B | 8 GB | 6 GB (optional) | Fast (20-35 tok/s on CPU) | Good for structured queries |
| Qwen 2.5 32B | 24 GB | 16 GB recommended | Moderate (8-15 tok/s) | Very good balance of speed and quality |
| DeepSeek R1 14B | 16 GB | 10 GB recommended | Moderate (10-20 tok/s) | Strong reasoning, good for multi-step queries |

Speed measured on Apple M2 Pro (CPU inference). GPU inference with NVIDIA cards is 2-5x faster. Benchmarks from February 2026.
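As a rough rule of thumb (an approximation, not a benchmark from the table above), a model's memory footprint is about parameter count times bytes per weight at the chosen quantization, plus runtime overhead for the KV cache and inference engine. A minimal sketch, assuming 4-bit quantization and ~30% overhead:

```python
def estimate_ram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.3) -> float:
    """Rough RAM estimate: weight storage (params * bits/8 bytes) plus ~30% runtime overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# Llama 3.1 8B at 4-bit quantization: ~5.2 GB, consistent with the 8 GB row above
print(estimate_ram_gb(8, 4))
# Llama 3.1 70B at 4-bit quantization: ~45.5 GB, consistent with the 48 GB row
print(estimate_ram_gb(70, 4))
```

Higher-precision quantizations (8-bit, 16-bit) scale the weight term proportionally, which is why the same model can need twice the RAM depending on the build you download.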

What You Can Do

Query your WHMCS with AI, entirely on your hardware. No data leaves your infrastructure.

Private Revenue Analysis

Query WHMCS for revenue breakdowns without any data leaving your infrastructure. Ask for aggregated numbers (no client PII). Useful for preparing board reports or sharing data with external consultants where raw WHMCS access is not appropriate.

Example prompt:

Show me revenue trends for the last 12 months. Break down by product category.

Returns a table of monthly revenue per product group. No client names, emails, or identifying data in the output.

Regulated Industry Compliance

Healthcare, finance, and government hosting providers operate under strict data residency rules. Local AI means no WHMCS data crosses jurisdictional boundaries. GDPR Article 44 (international transfers) does not apply when processing stays on-premises.

Example prompt:

List all clients in the "Enterprise" group with their service status and contract end dates.

Full client data stays on your server. The AI model processes it locally. No cross-border data transfer occurs.

Air-Gapped Operations

Once the AI model is downloaded, the entire stack works without internet. WHMCS runs on your server, MCP Server runs as a WHMCS addon, LM Studio runs on your workstation. Disconnect from the internet and it still works.

Example prompt:

Generate a summary of all support tickets opened this week grouped by category.

Works offline. No DNS, no API calls, no cloud dependencies.

Safe Data Export for External Tools

Use local AI to query WHMCS for aggregated, non-sensitive data (revenue trends, ticket counts, product stats). Copy that sanitized summary and upload it to NotebookLM, Google Sheets, or ChatGPT for presentations. Your raw WHMCS data never leaves your server.

Example prompt:

Get aggregated revenue trends by product (no client names). Export as a summary I can share externally.

You get clean numbers safe to share. The two-step workflow (local query, then external tool) keeps PII on-premises.

How to Connect

Set up local AI with your WHMCS. Two options: LM Studio (recommended) or Ollama.

1. Install MCP Server on your WHMCS

Upload the addon, activate it, and generate a Bearer token. Full installation guide

2. Install LM Studio or Ollama

Download LM Studio (recommended, native MCP since v0.3.17) or Ollama (via community wrapper). Load a model like Llama 3, Mistral, or Qwen.

3. Configure MCP connection

Add your WHMCS MCP Server to the local AI config:

// LM Studio MCP Configuration (~/.lmstudio/mcp.json)
{
  "mcpServers": {
    "whmcs": {
      "url": "https://your-whmcs.com/modules/addons/mx_mcp/mcp.php",
      "headers": {
        "Authorization": "Bearer YOUR_BEARER_TOKEN"
      }
    }
  }
}

# Ollama (via the community mcp-client-for-ollama wrapper)
npm install -g mcp-client-for-ollama
mcp-client --server https://your-whmcs.com/modules/addons/mx_mcp/mcp.php --header "Authorization: Bearer YOUR_BEARER_TOKEN"

4. Test the connection

Ask your local AI a simple query like “Get WHMCS system status”. LM Studio will show a tool call confirmation dialog before executing.

Use External AI Safely

Extract data via MCP with your local AI, then feed sanitized summaries to external tools. Your raw WHMCS data never leaves your server.

The Two-Step Privacy Workflow

WHMCS (your server) → Local AI + MCP (your machine) → External Tool (sanitized data only)

1. Query via MCP locally: Ask your local AI (LM Studio, Ollama) for aggregated WHMCS data. Example: “Show me revenue trends by product for the last 12 months. No client names or emails.”

2. Export the sanitized summary: Copy the aggregated output (trends, counts, percentages). No PII, no client details, no sensitive billing data.

3. Feed to external AI: Upload the summary to NotebookLM for board presentations, Google Sheets for reports, or any external tool for further analysis.
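Before step 3, it is worth double-checking that the copied summary really is free of PII. A minimal sketch of such a check; the `check_export` name and the patterns are illustrative, not part of MCP Server, and you should extend them for your own data:

```python
import re

# Patterns that should never appear in an externally shared summary.
# Illustrative only; add your own (client names, domains, IBANs, ...).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def check_export(text: str) -> list[str]:
    """Return the names of PII patterns found in the text (empty list = safe to share)."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

print(check_export("Q4 revenue: $42,310 across 3 product groups"))  # []
print(check_export("Contact jane@example.com about the invoice"))   # ['email']
```

Running the exported text through a check like this makes the two-step workflow auditable instead of relying on eyeballing the output.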

Real example:

A hosting provider in healthcare asks LM Studio (via MCP) for “total revenue by product category for Q4 2025, with growth percentages.” The result is a table of product names and revenue numbers. No patient-related data, no client emails, no billing addresses. They upload this summary to NotebookLM to generate a board presentation. Their raw WHMCS database was never exposed to any cloud service.

Example Prompts

Privacy-safe queries you can ask your local AI via MCP.

1. Show total revenue by product for Q4 2025. No client names, just aggregated numbers.

2. How many support tickets were opened vs resolved this month?

3. List the top 10 most popular products by active service count.

4. What is our average ticket resolution time this quarter?

5. Generate a summary of overdue invoices by amount range (under $100, $100-500, over $500).
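The bucketing asked for in prompt 5 is simple enough to verify by hand. A sketch of the grouping the AI is expected to produce; the invoice amounts are made-up sample data, not WHMCS output:

```python
from collections import Counter

def bucket(amount: float) -> str:
    """Assign an overdue invoice amount to one of the three ranges from the prompt."""
    if amount < 100:
        return "under $100"
    if amount <= 500:
        return "$100-500"
    return "over $500"

# Made-up sample amounts for illustration
overdue = [49.00, 120.50, 89.99, 640.00, 300.00, 1200.00]
summary = Counter(bucket(a) for a in overdue)
print(dict(summary))  # {'under $100': 2, '$100-500': 2, 'over $500': 2}
```

Spot-checking a handful of invoices against the AI's summary like this is a quick way to catch a model that silently mis-grouped boundary values such as exactly $100 or $500.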

Data Flow

How Local AI Connects to WHMCS

LM Studio and Ollama on your machine connect to the MCP Server addon in your WHMCS. 100% offline. Your WHMCS data never leaves your infrastructure.

[Diagram: MX Metrics, WHMCS, and MX Proposals connect through MCP Server (the AI Integration Hub) to your local AI.]

Better Together

Run the entire MX ecosystem privately. Every module's data flows through MCP to your local AI.

MX Metrics data flows through MCP to your local AI. Query revenue trends, churn rates, and LTV without any financial data leaving your infrastructure. Ideal for regulated hosting providers.

Learn about MX Metrics →

MX Proposals data stays fully on-premises. Query proposal status, acceptance rates, and pipeline value through local AI. No client proposal details ever reach a cloud service.

Learn about MX Proposals →

Frequently Asked Questions

Does LM Studio support MCP natively?
Yes. Since LM Studio v0.3.17 (June 2025), MCP is supported natively via HTTP/URL transport. Configure MCP servers in ~/.lmstudio/mcp.json. When a model calls an MCP tool, LM Studio shows a confirmation dialog before executing. You approve or deny each tool call.
Does Ollama support MCP?
Ollama does not have native MCP support as of February 2026. The community-maintained mcp-client-for-ollama wrapper bridges Ollama to MCP servers. Install it via npm (npm install -g mcp-client-for-ollama) and it handles the MCP protocol translation. It works reliably with WHMCS MCP Server but requires Node.js.
What hardware do I need?
Minimum: 16 GB RAM and a modern CPU for 7-8B parameter models (Llama 3.1 8B, Mistral 7B). These handle simple WHMCS queries well. For complex multi-step operations, 32+ GB RAM with a GPU (NVIDIA with 12+ GB VRAM) and a 32-70B model gives much better results. The MCP query itself is lightweight. The hardware requirement is for the AI model, not the WHMCS connection.
Is local AI slower than cloud AI?
Yes, typically 2-5x slower for generating text. A simple client lookup takes 5-8 seconds on local AI vs 2-3 seconds on Claude. However, the MCP query part (your machine talking to WHMCS) takes the same time (~500ms). For operations where privacy matters more than speed, or where you run queries in batch (n8n + local AI), the tradeoff is worth it.
Which local model works best with WHMCS MCP?
For most WHMCS operations, Qwen 2.5 32B or Llama 3.1 70B give the best results. These models handle structured data well and follow MCP tool call formats reliably. Smaller models (7-8B) work for simple lookups but sometimes struggle with multi-step tool chains. If you have limited hardware, start with Llama 3.1 8B and upgrade as needed.
Can I use local AI to prepare data for external presentations?
Yes. This is a common two-step workflow: (1) Use local AI + MCP to query WHMCS for aggregated data, like "Show revenue by product for last 12 months, no client names." (2) Copy the sanitized output to NotebookLM, Google Slides, or any external tool. Your raw WHMCS data stays local. Only the aggregated summary (revenue numbers, ticket counts, product stats) goes external.

100% private AI for your WHMCS

Install MCP Server, connect your local AI, and query WHMCS data without any cloud exposure. Your data never leaves your server.

100% Offline · 46 WHMCS Tools · $22/month