
MCP LangChain Server

Created by sanjeetkumaritoutlook-user, 7 months ago

An MCP-style LangChain + Ollama Python project that:

Runs completely free and locally

Accepts MCP-style { action, params } via a Flask API

Routes to a local LLM (via Ollama) for answers
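The { action, params } routing can be sketched as a small dispatcher. A minimal sketch, assuming illustrative action names ("get_time", "echo") and a handler table; this is not the project's exact code.

```python
# Hypothetical sketch of the action routing behind the Flask endpoint.
# The action names and handler layout are assumptions for illustration.
from datetime import datetime, timezone

HANDLERS = {
    "get_time": lambda params: {"time": datetime.now(timezone.utc).isoformat()},
    "echo": lambda params: {"echo": params.get("text", "")},
}

def handle_action(action: str, params: dict) -> dict:
    """Route an MCP-style {action, params} payload to its handler."""
    handler = HANDLERS.get(action)
    if handler is None:
        return {"error": f"unknown action: {action}"}
    return handler(params)
```

A table-driven dispatcher keeps each action isolated, so adding one means adding one entry rather than growing an if/elif chain.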

File structure:

├── app.py           ← Flask API (MCP-style web server)

├── mcp_agent.py     ← LangChain agent: interprets and executes actions

├── requirements.txt ← Python dependencies

└── README.md        ← Project overview

A successful test of the MCP-based LangChain server confirms that:

The LangChain agent received the correct action,

executed it using your logic (from mcp_agent.py),

and responded to Postman with the correct time.
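The Postman exchange above has roughly this shape; the field names follow the { action, params } convention, while the "get_time" action name and the result structure are assumptions.

```python
import json

# MCP-style request body sent from Postman (action name is an assumption):
request_body = json.dumps({"action": "get_time", "params": {}})

# A response of the assumed shape the server returns:
response_body = '{"result": {"time": "2024-05-01T12:00:00+00:00"}}'
result = json.loads(response_body)["result"]
print(result["time"])
```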

Next steps:

Test with other action values

Add new functions to mcp_agent.py (more intelligent tasks)

Integrate this with a frontend/UI
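One way to keep new actions easy to add is a registry in mcp_agent.py. The decorator pattern and the "reverse" action below are a sketch of how you might structure it, not the project's existing code.

```python
# Sketch: a registry so new actions live in one place in mcp_agent.py.
# The decorator and the example action are illustrative assumptions.
HANDLERS: dict = {}

def action(name: str):
    """Register a function as the handler for an MCP-style action."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@action("reverse")
def reverse_text(params: dict) -> dict:
    # Example of a newly added action: reverse the given text.
    return {"reversed": params.get("text", "")[::-1]}
```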

## Ollama models

Explore https://ollama.com/library

## Run gemma:2b

ollama run gemma:2b

gemma:2b uses just ~2–3 GB of RAM.

## Run mistral

ollama run mistral

mistral needs only ~4 GB of RAM and still performs well on general tasks.

## Run llama3

ollama run llama3

Ollama runs models like llama3 entirely on your machine, in memory. Even though it is optimized, LLaMA 3 still needs at least ~6 GB of free RAM, and ideally more (8–12 GB total system RAM recommended).

## Other Ollama commands

List downloaded models: ollama list

Pull other models: ollama pull mistral, ollama pull codellama

Stop a running model: ollama stop <model-name>

Start your MCP server:

python app.py

You need to install LangChain's community modules package:

pip install langchain-community

## Recommended full setup (for safety)

If you're using LangChain with Ollama, it's best to ensure these are installed:

pip install langchain langchain-community langchain-core langchainhub

pip install ollama
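With those packages installed and an Ollama daemon running, invoking a local model from LangChain looks roughly like this. A sketch only: the model name and prompt are placeholders, and it requires `ollama serve` plus a pulled model to actually run.

```python
# Sketch: requires the packages above plus a running Ollama daemon.
from langchain_community.llms import Ollama

llm = Ollama(model="gemma:2b")  # any model pulled via `ollama pull`
answer = llm.invoke("In one sentence, what is the Model Context Protocol?")
print(answer)
```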

Both gemma:2b and llama3 have a training cut-off date.

Static Knowledge: These models are trained on data available up to a certain point (e.g., mid-2023 for many models).

They don’t know anything that happened after their training cut-off date.

Want Real News in a LangChain/Ollama Setup?

You can combine the local model with:

RAG (Retrieval-Augmented Generation), where you fetch live news via API (like NewsAPI or Google News),

Then pass that info as context to the LLM to summarize or answer based on it.

The result: a LangChain + news API integration backed by a local Ollama model.
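A minimal sketch of that RAG flow, assuming a NewsAPI key (the endpoint below is NewsAPI's top-headlines; the prompt format is illustrative). Only the prompt-building half runs offline; the fetched context would then be passed to the local model, e.g. via `llm.invoke(prompt)`.

```python
import json
import urllib.request

def build_news_prompt(articles, question):
    """Format fetched headlines as context for the local LLM."""
    context = "\n".join(
        f"- {a['title']}: {a.get('description') or ''}" for a in articles
    )
    return (
        "Answer using only the news below.\n\n"
        f"News:\n{context}\n\nQuestion: {question}"
    )

def fetch_headlines(api_key):
    # NewsAPI top-headlines endpoint; needs a real key from newsapi.org.
    url = f"https://newsapi.org/v2/top-headlines?country=us&apiKey={api_key}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["articles"]

# prompt = build_news_prompt(fetch_headlines("YOUR_KEY"), "What happened today?")
# Then pass `prompt` to the local Ollama model to summarize or answer.
```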
