Integration Guide
Local AI + WHMCS
100% private WHMCS operations with local AI models
Estimated time: 20 minutes · Transport: HTTP/URL (LM Studio native), wrappers (Ollama)
Quick Reference
- Min Version: LM Studio v0.3.17+ / Ollama v0.5+
- Transport: HTTP/URL (LM Studio) / STDIO wrapper (Ollama)
- Public URL: Not required
- Node.js: Not required (LM Studio)
- Privacy Level: 100% Offline
- Cloud Dependency: None
- Avg Response Time: 5-20 seconds, depending on model size and hardware
Prerequisites
- WHMCS 8.0 or higher with admin access
- MCP Server addon installed and activated (installation guide)
- LM Studio v0.3.17+ (download) or Ollama (download)
- At least 16GB RAM and a modern CPU (GPU optional but recommended for larger models)
Step 1: Generate an API Key
- Go to your WHMCS admin panel
- Navigate to Addons > MCP Server
- Click Generate New API Key
- Copy the API key and save it securely. You will use it as the Bearer token in the next step.
Important: The API key is shown only once. If you lose it, you will need to generate a new one.
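Both setup options below send this key on every request as an HTTP Bearer token. As a small illustrative sketch (the auth_header helper is hypothetical, not part of LM Studio or the Ollama wrapper), this is the header the clients construct from your key:

```python
# Illustrative only: how MCP clients present the API key from Step 1.
# auth_header is a hypothetical helper, not part of any client shown here.
def auth_header(api_key: str) -> dict:
    """Build the Authorization header used by the MCP HTTP transport."""
    return {"Authorization": f"Bearer {api_key}"}

print(auth_header("your-api-key-here"))
# {'Authorization': 'Bearer your-api-key-here'}
```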
Option A: LM Studio Setup (Recommended)
LM Studio has had native MCP support since v0.3.17. This is the recommended path for local AI with WHMCS.
1. Download a model
- Open LM Studio and go to the Discover tab
- Search for a model that supports tool calling (Llama 3, Mistral, Qwen 2.5, or similar)
- Download a model that fits your hardware. For 16GB RAM, a 7B or 8B parameter model works well.
2. Configure MCP
Create or edit the MCP configuration file at:
- Location:
~/.lmstudio/mcp.json
Add the following configuration (replace the URL and API key with your own):
```json
// LM Studio MCP Configuration (~/.lmstudio/mcp.json)
{
  "mcpServers": {
    "whmcs": {
      "url": "https://your-whmcs.com/modules/addons/mx_mcp/mcp.php",
      "headers": {
        "Authorization": "Bearer your-bearer-token-here"
      }
    }
  }
}
```

3. Enable MCP in LM Studio
- Open LM Studio and load your downloaded model
- Go to Settings > MCP and verify the WHMCS server appears
- Start a new chat. LM Studio will show a tool confirmation dialog when the AI wants to call a WHMCS tool.
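A malformed mcp.json is a common reason the server never appears under Settings > MCP. The sketch below is a quick structural sanity check; it validates only the keys this guide uses (url plus an Authorization header), not LM Studio's full configuration schema:

```python
# Sanity-check an mcp.json document before loading it in LM Studio.
# Checks only the fields this guide relies on, not the full schema.
import json

def validate_mcp_config(text: str) -> list:
    """Return a list of problems found in an mcp.json document (empty = OK)."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        return ["missing or empty 'mcpServers' object"]
    problems = []
    for name, server in servers.items():
        if "url" not in server:
            problems.append(f"server '{name}' has no 'url'")
        if "Authorization" not in server.get("headers", {}):
            problems.append(f"server '{name}' has no Authorization header")
    return problems

# The example configuration from this guide should pass cleanly.
sample = """
{
  "mcpServers": {
    "whmcs": {
      "url": "https://your-whmcs.com/modules/addons/mx_mcp/mcp.php",
      "headers": {"Authorization": "Bearer your-bearer-token-here"}
    }
  }
}
"""
print(validate_mcp_config(sample))  # an empty list means the structure looks right
```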
Option B: Ollama Setup (via Wrapper)
Ollama does not have native MCP support yet. Use the community-maintained mcp-client-for-ollama wrapper to connect.
1. Install Ollama and pull a model
```shell
# Install Ollama (if not already installed)
# Visit https://ollama.ai for installation instructions

# Pull a model with tool-calling support
ollama pull llama3
# or
ollama pull mistral
```

2. Install the MCP client wrapper

```shell
npm install -g mcp-client-for-ollama
```

3. Connect to WHMCS

```shell
mcp-client --server https://your-whmcs.com/modules/addons/mx_mcp/mcp.php --header "Authorization: Bearer your-bearer-token-here"
```

Note: The Ollama wrapper is community-maintained. For the most reliable experience, we recommend LM Studio, which has native MCP support built in.
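Before starting the wrapper, you can confirm the Ollama daemon is actually serving. The sketch below assumes Ollama's default local API (port 11434, GET /api/tags, which lists pulled models); adjust the base URL if your install differs:

```python
# Confirm Ollama is serving locally and list pulled models.
# Assumes Ollama's default local REST API: GET http://localhost:11434/api/tags
import json
import urllib.request

def list_local_models(base_url: str = "http://localhost:11434") -> list:
    """Return model names from Ollama's /api/tags endpoint, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
    except OSError:
        return []  # daemon not running, wrong port, or blocked
    return [m.get("name", "") for m in data.get("models", [])]

print(list_local_models())  # e.g. ['llama3:latest'] when Ollama is running
```

An empty list means the wrapper will not be able to reach Ollama either; start the daemon first.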
Test the Connection
Regardless of which option you chose, test the connection:
- LM Studio: Start a new chat and type: "Get WHMCS system status"
- Ollama: In the mcp-client session, type: "Get WHMCS system status"
- If the connection is working, you should see system information from your WHMCS installation
- LM Studio will show a tool confirmation dialog before executing. Click Allow to proceed.
Success! If you see WHMCS system info, the connection is working. Your data never left your server. Try queries like “List all active clients” or “Show revenue for this month”.
Troubleshooting
Model does not call tools
- Not all local models support tool calling. Use models that explicitly support function/tool calls (Llama 3, Mistral, Qwen 2.5).
- In LM Studio, check the model card for “tool use” or “function calling” support
- Smaller models (under 7B parameters) may struggle with tool calling. Try a larger model if available.
- Be explicit in your prompt: “Use the WHMCS tools to get the system status”
Slow responses or out of memory
- Local AI models are CPU/GPU intensive. Ensure at least 16GB RAM is available.
- Use a smaller model (7B instead of 13B or 70B) if your hardware is limited
- Close other memory-intensive applications while running local AI
- A dedicated GPU (even a mid-range one) significantly improves performance
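The sizing advice above follows a common rule of thumb: a quantized model needs roughly parameters × bits-per-weight / 8 bytes of memory, plus runtime overhead. The sketch below is an approximation only (4-bit quantization assumed by default; the 20% overhead figure is an assumption, and real usage grows with context length):

```python
# Back-of-envelope memory estimate for a quantized local model.
# bytes ≈ parameters * bits_per_weight / 8, plus ~20% assumed overhead
# for the KV cache and runtime. Rough guidance only.
def approx_model_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    return round(params_billion * bits / 8 * overhead, 1)

print(approx_model_gb(7))    # ~4.2 GB: comfortable on a 16GB machine
print(approx_model_gb(13))   # ~7.8 GB: workable, but leave headroom
print(approx_model_gb(70))   # ~42 GB: needs a high-memory GPU or server
```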
Ollama wrapper connection issues
- Ensure Node.js is installed and up to date (v18+ recommended)
- Verify the mcp-client-for-ollama package is installed globally
- Check that Ollama is running: ollama list should show your downloaded models
- Try the LM Studio path instead, as it has native MCP support
“Connection refused” to WHMCS
- Verify your WHMCS URL is correct and accessible from your machine
- Check that the MCP Server addon is activated in WHMCS
- Ensure your API key is valid and has not been revoked
- If running on the same machine, try using https://localhost or the local IP
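A small probe helps separate "server unreachable" from "auth or addon misconfigured": any HTTP status in response, even 401 or 404, proves the web server and addon path answered, while a transport error points to DNS, firewall, TLS, or a refused port. The URL below is a placeholder; substitute your real WHMCS address:

```python
# Minimal reachability probe for the MCP endpoint (URL is a placeholder).
import urllib.error
import urllib.request

def probe(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=5):
            return "reachable"
    except urllib.error.HTTPError as e:
        # The server answered with an error status: check the addon and auth header
        return f"reachable (HTTP {e.code})"
    except urllib.error.URLError as e:
        # No HTTP response at all: network-level problem
        return f"unreachable: {e.reason}"
    except OSError as e:
        return f"unreachable: {e}"

# Output depends on your network; "unreachable" here means a true connection issue.
print(probe("https://your-whmcs.com/modules/addons/mx_mcp/mcp.php"))
```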
Upgrading from v1.1.x?
If you set up the MCP Server before v1.3.0, the authentication method changed. Here is what is different and how to update your config.
Previous method (v1.1.x)
Still works for backward compatibility. In v1.3.0, authentication changed from API keys to Bearer tokens, and the endpoint changed from sse.php to mcp.php. If you are still on v1.1.x, the old server URL and auth header were:
```
// Old server URL (v1.1.x):
https://your-whmcs.com/modules/addons/mx_mcp/mcp/sse.php

// Old authentication header:
MCP_API_KEY: your-api-key-here
```
To upgrade: update the server URL to use mcp.php and replace the MCP_API_KEY header with Authorization: Bearer YOUR_TOKEN.