A TypeScript-based Model Context Protocol (MCP) client for testing MCP servers, with support for multiple LLM providers and LLM tool calling capabilities.
- Support for multiple LLM providers (Ollama, OpenAI, OpenRouter, Deepseek)
- Support for MCP servers (default: mcp_k8s_server)
- LLM tool calling through MCP servers
- Configurable via JSON config file or environment variables
- Interactive console interface for testing
```bash
# Install dependencies
npm install

# Build the project
npm run build

# Make the CLI executable
chmod +x ./dist/index.js
```

```bash
# Run with default configuration
npm start

# Run with custom configuration file
npm start -- ./config.json
```

You can configure the client using a JSON file or environment variables.

Example configuration file:
```json
{
  "llm": {
    "provider": "ollama",
    "baseUrl": "http://localhost:11434/",
    "model": "llama3",
    "temperature": 0.7,
    "maxTokens": 1000
  },
  "mcpServers": {
    "k8s": {
      "name": "mcp_k8s_server",
      "command": "mcp_k8s_server",
      "enabled": true
    },
    "custom": {
      "name": "custom_mcp_server",
      "command": "custom_mcp_server",
      "enabled": false
    }
  },
  "defaultMCPServer": "k8s"
}
```

An MCP server can be launched locally as a command, with optional arguments:

```json
{
  "mcpServers": {
    "local": {
      "name": "local_mcp_server",
      "command": "mcp_server",
      "args": ["--option1", "--option2"],
      "enabled": true
    }
  }
}
```

Alternatively, an MCP server can be reached remotely via its base URL:

```json
{
  "mcpServers": {
    "remote": {
      "name": "remote_mcp_server",
      "baseUrl": "http://192.168.182.128:8000/",
      "enabled": true
    }
  }
}
```

Multiple MCP servers can be configured, with one of them set as the default:

```json
{
  "mcpServers": {
    "k8s": {
      "name": "mcp_k8s_server",
      "baseUrl": "http://192.168.182.128:8000/",
      "enabled": true
    },
    "custom": {
      "name": "custom_mcp_server",
      "baseUrl": "http://192.168.1.100:9000/",
      "enabled": false
    }
  },
  "defaultMCPServer": "k8s"
}
```

The `llm` section selects and configures the LLM provider.

Ollama:

```json
{
  "llm": {
    "provider": "ollama",
    "baseUrl": "http://localhost:11434/",
    "model": "llama3",
    "temperature": 0.7,
    "maxTokens": 1000
  }
}
```

Note: Ollama typically doesn't require an API key when running locally.

OpenAI:

```json
{
  "llm": {
    "provider": "openai",
    "apiKey": "sk-your-openai-key-here",
    "model": "gpt-3.5-turbo",
    "temperature": 0.7,
    "maxTokens": 1000
  }
}
```

OpenRouter:

```json
{
  "llm": {
    "provider": "openrouter",
    "apiKey": "your-openrouter-key-here",
    "baseUrl": "https://openrouter.ai/api/v1",
    "model": "anthropic/claude-3-opus",
    "temperature": 0.7,
    "maxTokens": 1000
  }
}
```

Deepseek:

```json
{
  "llm": {
    "provider": "deepseek",
    "apiKey": "your-deepseek-key-here",
    "baseUrl": "https://api.deepseek.com/v1",
    "model": "deepseek-chat",
    "temperature": 0.7,
    "maxTokens": 1000
  }
}
```

The following environment variables are also supported:

- `LLM_PROVIDER`: The LLM provider to use (ollama, openai, openrouter, deepseek)
- `LLM_API_KEY`: API key for the LLM provider
- `LLM_BASE_URL`: Base URL for the LLM provider's API
- `LLM_MODEL`: Model to use for the LLM
- `LLM_TEMPERATURE`: Temperature setting for the LLM
- `LLM_MAX_TOKENS`: Maximum tokens to generate
- `DEFAULT_MCP_SERVER`: Default MCP server to use
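Taken together, the configuration and environment overrides correspond roughly to the TypeScript sketch below. The interface names, the `applyEnvOverrides` helper, and its merge rules are illustrative assumptions inferred from the examples above, not the project's actual type definitions or code.

```typescript
// Illustrative sketch of the configuration shape implied by the examples above,
// plus one way environment variables could override it. Names are assumptions.
interface LLMConfig {
  provider: "ollama" | "openai" | "openrouter" | "deepseek";
  apiKey?: string;      // typically not needed for a local Ollama instance
  baseUrl?: string;     // provider API endpoint
  model: string;
  temperature?: number;
  maxTokens?: number;
}

interface MCPServerConfig {
  name: string;
  command?: string;     // local servers are spawned as a command...
  args?: string[];
  baseUrl?: string;     // ...remote servers are reached over HTTP
  enabled: boolean;
}

interface ClientConfig {
  llm: LLMConfig;
  mcpServers: Record<string, MCPServerConfig>;
  defaultMCPServer?: string;
}

// Apply environment variable overrides on top of a loaded config file.
function applyEnvOverrides(config: ClientConfig): ClientConfig {
  const env = process.env;
  return {
    ...config,
    llm: {
      ...config.llm,
      provider: (env.LLM_PROVIDER as LLMConfig["provider"] | undefined) ?? config.llm.provider,
      apiKey: env.LLM_API_KEY ?? config.llm.apiKey,
      baseUrl: env.LLM_BASE_URL ?? config.llm.baseUrl,
      model: env.LLM_MODEL ?? config.llm.model,
      temperature: env.LLM_TEMPERATURE ? Number(env.LLM_TEMPERATURE) : config.llm.temperature,
      maxTokens: env.LLM_MAX_TOKENS ? Number(env.LLM_MAX_TOKENS) : config.llm.maxTokens,
    },
    defaultMCPServer: env.DEFAULT_MCP_SERVER ?? config.defaultMCPServer,
  };
}
```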
Once the client is running, you can use the following commands:
- `help`: Show available commands

  ```
  > help
  ```

- `exit` or `quit`: Exit the application

  ```
  > exit
  ```

- `servers`: List available MCP servers

  ```
  > servers
  Available MCP Servers:
  ---------------------
  * 1. mcp_k8s_server (Enabled)
    2. custom_mcp_server (Disabled)
  ```

- `use <server-key>`: Set the active MCP server

  ```
  > use k8s
  Active MCP server set to: mcp_k8s_server
  ```

- `enable <server>`: Enable an MCP server

  ```
  > enable custom
  Server 'custom_mcp_server' enabled.
  ```

- `disable <server>`: Disable an MCP server

  ```
  > disable custom
  Server 'custom_mcp_server' disabled.
  ```

- `tools`: List tools for the active MCP server

  ```
  > tools
  Tools for mcp_k8s_server:
  -------------------------
  1. get_pods
     Description: Get all pods in the namespace
  2. get_pod_logs
     Description: Get logs for a specific pod
  ```

- `resources`: List resources for the active MCP server

  ```
  > resources
  Resources for mcp_k8s_server:
  -----------------------------
  1. k8s://namespaces
     Name: Kubernetes Namespaces
     Description: List of all namespaces in the cluster
  ```

- `call <tool> <args>`: Call a tool with JSON arguments

  ```
  > call get_pods {"namespace": "default"}
  Tool Result:
  ------------
  {
    "content": [
      {
        "type": "text",
        "text": "[{\"name\":\"nginx-pod\",\"namespace\":\"default\",\"status\":\"Running\"}]"
      }
    ]
  }
  ```

- `resource <uri>`: Read a resource from the active MCP server

  ```
  > resource k8s://namespaces
  Resource Content:
  -----------------
  {
    "contents": [
      {
        "uri": "k8s://namespaces",
        "text": "[\"default\", \"kube-system\", \"kube-public\"]"
      }
    ]
  }
  ```

- `clear`: Clear chat history

  ```
  > clear
  Chat history cleared.
  ```

- `config`: Show current configuration

  ```
  > config
  Current Configuration:
  ---------------------
  {
    "llm": {
      "provider": "openai",
      "baseUrl": "https://api.openai.com/v1",
      "model": "gpt-3.5-turbo",
      "temperature": 0.7,
      "maxTokens": 1000
    },
    "mcpServers": {
      "k8s": {
        "name": "mcp_k8s_server",
        "baseUrl": "http://192.168.182.128:8000/sse",
        "enabled": true
      }
    },
    "defaultMCPServer": "k8s"
  }
  ```
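Under the hood, commands like `tools` and `call` map onto MCP client calls. The sketch below assumes the official `@modelcontextprotocol/sdk` TypeScript client (import paths, option shapes, and method signatures may differ between SDK versions) and the hypothetical `MCPServerConfig` shape sketched earlier; it is not the project's actual implementation.

```typescript
// Sketch only: connect to a configured MCP server and exercise `tools` / `call`.
// Assumes @modelcontextprotocol/sdk; exact paths and APIs may vary by version.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

async function connectToServer(server: MCPServerConfig): Promise<Client> {
  // Local entries ("command") are spawned as a child process over stdio;
  // remote entries ("baseUrl") are reached over SSE.
  const transport = server.command
    ? new StdioClientTransport({ command: server.command, args: server.args ?? [] })
    : new SSEClientTransport(new URL(server.baseUrl!));

  const client = new Client({ name: "mcp-test-client", version: "0.1.0" });
  await client.connect(transport);
  return client;
}

async function demo(server: MCPServerConfig) {
  const client = await connectToServer(server);

  // `tools`: list the tools exposed by the active server.
  const { tools } = await client.listTools();
  console.log(`Tools for ${server.name}: ${tools.map((t) => t.name).join(", ")}`);

  // `call get_pods {"namespace": "default"}`: invoke a tool with JSON arguments.
  const result = await client.callTool({ name: "get_pods", arguments: { namespace: "default" } });
  console.log(JSON.stringify(result, null, 2));
}
```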
You can also send any message to chat with the LLM:
```
> What pods are running in the default namespace?

LLM wants to use tool: get_pods
Arguments: {
  "namespace": "default"
}

Tool result: {
  "content": [
    {
      "type": "text",
      "text": "[{\"name\":\"nginx-pod\",\"namespace\":\"default\",\"status\":\"Running\"}]"
    }
  ]
}

LLM: I found 1 pod running in the default namespace:
- nginx-pod (Status: Running)

> Tell me a joke about programming.

LLM: Why do programmers prefer dark mode?
Because light attracts bugs!
```
The client now supports LLM tool calling, allowing the LLM to:
- Learn about available tools from MCP servers
- Decide when to use tools based on user queries
- Return a JSON tool call when needed
- Process tool results and provide a conversational response
- When you send a message to an LLM with tools available:
  - The system prompt includes detailed tool descriptions
  - The LLM can choose to answer directly or call a tool
- If the LLM decides to call a tool, it will respond with a JSON object:

  ```json
  {
    "tool": "tool-name",
    "arguments": {
      "argument-name": "value"
    }
  }
  ```

- The client will then (see the sketch after the workflow diagram below):
  - Parse this response and recognize it as a tool call
  - Execute the tool with the provided arguments
  - Add the tool result to the conversation history
  - Send the complete conversation history (including the tool result) back to the LLM
  - Get a natural, conversational response that incorporates the tool result
  - Display this final response to the user
The tool calling workflow follows these steps:
```mermaid
sequenceDiagram
    participant User
    participant Client
    participant LLM
    participant Server as MCP Server

    User->>Client: User question
    Client->>LLM: Send question with chat history & tool descriptions
    LLM-->>Client: Return tool call JSON
    Client->>Server: Execute tool with parameters
    Server-->>Client: Return tool result
    Note over Client: Add tool result to chat history
    Client->>LLM: Send updated chat history with tool result
    LLM-->>Client: Return final conversational response
    Client->>User: Display user-friendly response
```
This ensures the LLM has full context when generating its final response, resulting in more natural and helpful answers that properly incorporate the tool information.
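A condensed sketch of this workflow is shown below. It is illustrative only: the function and type names are assumptions, `chatWithLLM` stands in for whichever provider client is configured, `callTool` stands in for the MCP tool invocation, and the system prompt wording is invented for the example.

```typescript
// Sketch of the tool calling workflow described above (all names are illustrative).
type ChatMessage = { role: "system" | "user" | "assistant" | "tool"; content: string };
type ToolInfo = { name: string; description?: string };

// Build a system prompt that teaches the LLM the available tools and the
// JSON tool-call format shown earlier.
function buildSystemPrompt(tools: ToolInfo[]): string {
  const toolDocs = tools
    .map((t) => `- ${t.name}: ${t.description ?? "(no description)"}`)
    .join("\n");
  return [
    "You have access to the following tools:",
    toolDocs,
    'To use a tool, reply with ONLY a JSON object: {"tool": "<name>", "arguments": {...}}.',
    "Otherwise, answer the user directly.",
  ].join("\n\n");
}

async function answerWithTools(
  question: string,
  history: ChatMessage[],
  chatWithLLM: (messages: ChatMessage[]) => Promise<string>,
  callTool: (name: string, args: Record<string, unknown>) => Promise<unknown>,
): Promise<string> {
  history.push({ role: "user", content: question });

  // 1. Ask the LLM; history[0] is assumed to be the system prompt built above.
  const reply = await chatWithLLM(history);

  // 2. Check whether the reply uses the JSON tool-call format.
  let toolCall: { tool: string; arguments?: Record<string, unknown> } | undefined;
  try {
    const parsed = JSON.parse(reply);
    if (parsed && typeof parsed.tool === "string") toolCall = parsed;
  } catch {
    // Not JSON: treat the reply as a direct answer.
  }
  if (!toolCall) {
    history.push({ role: "assistant", content: reply });
    return reply;
  }

  // 3. Execute the tool and add the result to the conversation history.
  const result = await callTool(toolCall.tool, toolCall.arguments ?? {});
  history.push({ role: "assistant", content: reply });
  history.push({ role: "tool", content: JSON.stringify(result) });

  // 4. Ask the LLM again with the tool result so it can answer conversationally.
  const finalReply = await chatWithLLM(history);
  history.push({ role: "assistant", content: finalReply });
  return finalReply;
}
```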
Example session:

```
MCP Client started.
LLM provider: openai
Active MCP server: mcp_k8s_server
Loaded 3 tools for LLM to use.
Type "help" for available commands.

> What pods are running in the default namespace?

LLM wants to use tool: get_pods
Arguments: {
  "namespace": "default"
}

Tool result: {
  "content": [
    {
      "type": "text",
      "text": "[{\"name\":\"nginx-pod\",\"namespace\":\"default\",\"status\":\"Running\"}]"
    }
  ]
}

LLM: I found 1 pod running in the default namespace:
- nginx-pod (Status: Running)

> servers

Available MCP Servers:
---------------------
* 1. mcp_k8s_server (Enabled)
  2. custom_mcp_server (Disabled)

> exit
```
License: MIT