
Build a Weather‑Query AI Service with FastMCP: Step‑by‑Step Python Guide

This tutorial walks you through creating a FastMCP‑based weather‑query server in Python, registering it as an LLM‑callable tool, and building a matching Python client that connects via stdio, handles tool calls, and provides an interactive chat loop for AI‑driven queries.

Instant Consumer Technology Team

1. MCP Server Development

We develop the server in Python using stdio mode, which requires the uv toolchain to be installed.

1. Initialize Project

<code># Initialize project files
uv init muxue-mcp
cd muxue-mcp

# Create virtual environment
uv venv

# Activate environment
.venv\Scripts\activate

# Install dependencies
uv add "mcp[cli]" httpx</code>

2. Write Core Code

We create a weather-query server named weather_serv.py with a minimal skeleton.

<code>from mcp.server.fastmcp import FastMCP
from pydantic import Field
import httpx  # used once the mock data is replaced with a real API call
import json
import logging

logger = logging.getLogger("mcp")
mcp = FastMCP("weather")

@mcp.tool(description="Gaode weather query, input city name, return weather")
async def query_weather(city: str = Field(description="city name")) -> str:
    """Gaode weather query"""
    logger.info(f"Received request for city: {city}")
    # Mock forecast data; in a real service, fetch this from the
    # Gaode (Amap) weather API with httpx instead.
    contents = [{
        'date': '2025-5-21',
        'week': '4',
        'dayweather': '晴朗',       # sunny
        'daytemp_float': '14.0',
        'daywind': '北',            # north
        'nightweather': '多云',     # cloudy
        'nighttemp_float': '4.0'
    }]
    return json.dumps(contents, ensure_ascii=False)

if __name__ == "__main__":
    mcp.run(transport='stdio')
</code>
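Because the tool returns a plain string rather than structured data, any consumer has to parse it back into Python objects. A minimal sketch of decoding the payload above (field names and values copied from the mock data):

```python
import json

# The tool returns a JSON-encoded list of daily forecasts;
# ensure_ascii=False keeps the Chinese weather terms readable.
payload = ('[{"date": "2025-5-21", "week": "4", "dayweather": "晴朗", '
           '"daytemp_float": "14.0", "daywind": "北", '
           '"nightweather": "多云", "nighttemp_float": "4.0"}]')
forecasts = json.loads(payload)
print(forecasts[0]["dayweather"])  # 晴朗 (sunny)
```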

FastMCP provides tools, resources, prompts, automatic protocol handling and async support.

FastMCP architecture (figure)

(2) Initialize FastMCP

<code># Create a FastMCP instance named "weather"
mcp = FastMCP("weather")
</code>

(3) Define Tool

Use the @mcp.tool decorator to register a function as a tool that the LLM can call.
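Under the hood, FastMCP derives a JSON Schema for the tool's parameters from the type hints and Field metadata, and the client later wraps it in an OpenAI-style function definition. A sketch of what that definition looks like for query_weather (illustrative only; the exact schema FastMCP generates may include extra fields):

```python
# Approximate function definition the client passes to the LLM.
tool_def = {
    "type": "function",
    "function": {
        "name": "query_weather",
        "description": "Gaode weather query, input city name, return weather",
        "parameters": {
            "type": "object",
            "properties": {
                # Derived from `city: str = Field(description="city name")`
                "city": {"type": "string", "description": "city name"},
            },
            "required": ["city"],
        },
    },
}
print(tool_def["function"]["name"])  # query_weather
```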

(4) Main Entry

<code>if __name__ == "__main__":
    mcp.run(transport='stdio')
</code>

Running the script starts the server in stdio mode; SSE mode is covered in the articles listed under References.

2. MCP Client Development

Client code is written in Python and can be integrated into various agents.

1. Initialize Project

<code># Create project
uv init mcp-client
cd mcp-client
uv venv
.venv\Scripts\activate      # Windows
source .venv/bin/activate   # Unix
uv add mcp python-dotenv openai

# Remove the generated placeholder script
del main.py   # Windows
rm main.py    # Unix
</code>
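The client below reads its model configuration from a .env file via python-dotenv. An example .env (variable names match the client code; the values are placeholders, and the base URL assumes Alibaba's DashScope OpenAI-compatible endpoint):

```ini
# .env — example values only
DASHSCOPE_API_KEY=sk-xxxxxxxxxxxxxxxx
BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
MODEL=qwen-plus
```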

2. Define MCPClient class

<code>import asyncio, os, json
from typing import Optional, List
from contextlib import AsyncExitStack
from datetime import datetime
import re
from openai import OpenAI
from dotenv import load_dotenv
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()
class MCPClient:
    def __init__(self):
        self.exit_stack = AsyncExitStack()
        self.openai_api_key = os.getenv("DASHSCOPE_API_KEY")
        self.base_url = os.getenv("BASE_URL")
        self.model = os.getenv("MODEL")
        if not self.openai_api_key:
            raise ValueError("❌ Missing OpenAI API Key")
        self.client = OpenAI(api_key=self.openai_api_key, base_url=self.base_url)
        self.session: Optional[ClientSession] = None
    async def connect_to_server(self, server_script_path: str):
        # only .py or .js allowed
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be .py or .js")
        command = "python" if is_python else "node"
        server_params = StdioServerParameters(command=command, args=[server_script_path], env=None)
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()
        tools = (await self.session.list_tools()).tools
        print("\nConnected tools:", [t.name for t in tools])
    async def process_query(self, query: str) -> str:
        # prepare tool definitions for the LLM
        tool_resp = await self.session.list_tools()
        available_tools = [{
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": t.inputSchema
            }
        } for t in tool_resp.tools]
        planning_messages = [{"role": "user", "content": query}]
        tool_response = self.client.chat.completions.create(
            model=self.model,
            messages=planning_messages,
            tools=available_tools
        )
        final_text = []
        for choice in tool_response.choices:
            if choice.finish_reason != 'tool_calls':
                final_text.append(choice.message.content)
            else:
                for tool_call in choice.message.tool_calls:
                    tool_name = tool_call.function.name
                    tool_args = tool_call.function.arguments
                    print(f"Calling tool {tool_name} with args {tool_args}")
                    result = await self.session.call_tool(tool_name, json.loads(tool_args))
                    final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
                    # Feed the tool result back to the LLM so it can compose
                    # a natural-language answer.
                    if choice.message.content:
                        planning_messages.append({"role": "assistant", "content": choice.message.content})
                    result_text = "".join(c.text for c in result.content if hasattr(c, "text"))
                    planning_messages.append({"role": "user", "content": result_text})
                    synth_resp = self.client.chat.completions.create(
                        model=self.model,
                        max_tokens=4096,
                        messages=planning_messages
                    )
                    final_text.append(synth_resp.choices[0].message.content)
        return "\n".join(final_text)
    async def chat_loop(self):
        print("\n🤖 MCP client started! Type 'quit' to exit")
        while True:
            try:
                query = input("\nYou: ").strip()
                if query.lower() == 'quit':
                    break
                response = await self.process_query(query)
                print(f"\n🤖 AI: {response}")
            except Exception as e:
                print(f"\n⚠️ Error: {str(e)}")
    async def cleanup(self):
        await self.exit_stack.aclose()
</code>

3. Connect to Server

The connect_to_server method builds the stdio parameters, starts the server process, and lists the available tools.

4. Query Logic

The process_query method sends the user query to the LLM, receives any tool calls, executes them via the session, and feeds the results back to the LLM for a final answer.

5. Interactive Dialogue

The chat_loop method provides a simple REPL for user input.

6. Main Entry

<code>async def main():
    # Adjust this path to wherever the server script from section 1 lives.
    server_script_path = r"D:\Test\muxue-mcp\weather_serv.py"
    client = MCPClient()
    try:
        await client.connect_to_server(server_script_path)
        await client.chat_loop()
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
</code>

Running python client.py starts the interactive client.

References

https://www.yangyanxing.com/article/use-python-to-develop-mcp-server.html

https://www.yangyanxing.com/article/use-python-to-develop-sse-mcp-server.html

https://modelcontextprotocol.io/quickstart/server

https://modelcontextprotocol.io/quickstart/client

Tags: Python, LLM, MCP, Tool Integration, FastMCP