
Aide

Created by UltraRepo

AI development extension for Visual Studio Code, with support for UltraRepo, private AI LLMs, and private MCP servers


AI Platform and Tools

AI Dev Environment (AIDE) for VS Code

UltraRepo AIDE is an open-source extension for Visual Studio Code that works with your private data and LLMs, improving productivity and code quality for your team.

Features

  • Supports Private AI Models: Ollama, Open WebUI.

  • Supports Public AI Models: GPT-4, o1, Claude, Gemini, GitHub models, Ollama, and other OpenAI-compatible local models, using your API key from OpenAI, Azure OpenAI Service, Google, Anthropic, or other providers.

  • 💥 Model Context Protocol (MCP) to bring your own tools and DeepClaude (DeepSeek R1 + Claude) mode for best AI responses.

  • 📂 Chat with your Files: Add multiple files and images to your chat using @ for seamless collaboration.

  • 📃 Streaming Answers: Receive real-time responses to your prompts in the sidebar conversation window.

  • 📖 Prompt Manager: Chat with your own prompts (use # to search).

  • 🔥 Stop Responses: Interrupt responses at any time to save your tokens.

  • 📝 Code Assistance: Create files or fix your code with one click or keyboard shortcuts.

  • ➡️ Export Conversations: Export all your conversation history at once in Markdown format.

  • 🐛 Automatic Partial Code Detection: Automatically continues and combines responses when they are cut off.

  • 📰 Custom Prompt Prefixes: Customize what you are asking ChatGPT with ad-hoc prompt prefixes.

  • 💻 Seamless Code Integration: Copy, insert, or create new files directly from ChatGPT's code suggestions.

  • ➕ Editable Prompts: Edit and resend previous prompts.

  • 🛡️ Telemetry Free: No usage data is collected.

Recent Release Highlights

  • v4.8.0: New logo and new models.
  • v4.7.0: Added Model Context Protocol (MCP) integration.
  • v4.6.9: Added GitHub Copilot provider.
  • v4.6.7: Added DeepClaude mode (DeepSeek + Claude) for best AI responses.
  • v4.6.5: Added reasoning models (DeepSeek R1 and o3-mini).
  • v4.6.3: Added chatting with files (including text files and images).
  • v4.6.0: Added flexible prompt management with the /manage-prompt command; use prompts with #promptname.
  • v4.5.0: Added support for Google Generative AI models and reduced extension size.

Installation

  • Install the extension from the Visual Studio Marketplace, or search for UltraRepo AIDE in the VS Code Extensions view and click Install.
  • Reload Visual Studio Code after installation.

AI Services

Configure the extension by setting your API keys and preferences in the settings.

| Configuration | Description |
| --- | --- |
| API Key | Required; get one from OpenAI, Azure OpenAI, Anthropic, or another AI service |
| API Base URL | Optional; defaults to "https://api.openai.com/v1" |
| Model | Optional; defaults to "gpt-4o" |

Refer to the following sections for more details on configuring various AI services.
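These options map onto the extension's VS Code settings. A minimal settings.json sketch with placeholder values (the setting keys are the ones listed in the Configurations section below):

```json
{
  "ultrarepo.gpt3.apiKey": "your-api-key",
  "ultrarepo.gpt3.apiBaseUrl": "https://api.openai.com/v1",
  "ultrarepo.gpt3.model": "gpt-4o"
}
```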

OpenAI
| Configuration | Example |
| --- | --- |
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://api.openai.com/v1 (Optional) |
Ollama

Pull your model first from the Ollama library, then set up the base URL and custom model.

| Configuration | Example |
| --- | --- |
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | qwen2.5 |
| API Base URL | http://localhost:11434/v1/ |
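With the settings above, the extension talks to Ollama's OpenAI-compatible endpoint. A stdlib-only sketch of what such a request looks like (the helper name and prompt are illustrative, not part of the extension):

```python
import json
import urllib.request

# Values taken from the Ollama table above.
BASE_URL = "http://localhost:11434/v1"
MODEL = "qwen2.5"  # the model pulled via `ollama pull qwen2.5`

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for a local Ollama server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Say hello")
# Sending it requires a running Ollama server:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```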
DeepSeek

Ollama provider:

| Configuration | Example |
| --- | --- |
| API Key | ollama (Optional) |
| Model | custom |
| Custom Model | deepseek-r1 |
| API Base URL | http://localhost:11434/v1/ |

DeepSeek provider:

| Configuration | Example |
| --- | --- |
| API Key | your-deepseek-key |
| Model | deepseek-reasoner |
| API Base URL | https://api.deepseek.com |

SiliconFlow (SiliconCloud) provider:

| Configuration | Example |
| --- | --- |
| API Key | your-siliconflow-key |
| Model | custom |
| Custom Model | deepseek-ai/DeepSeek-R1 |
| API Base URL | https://api.siliconflow.cn/v1 |

Azure AI Foundry provider:

| Configuration | Example |
| --- | --- |
| API Key | your-azure-ai-key |
| Model | DeepSeek-R1 |
| API Base URL | https://[endpoint-name].[region].models.ai.azure.com |
Anthropic Claude
| Configuration | Example |
| --- | --- |
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |
Google Gemini
| Configuration | Example |
| --- | --- |
| API Key | your-api-key |
| Model | gemini-2.0-flash-thinking-exp-1219 |
| API Base URL | https://generativelanguage.googleapis.com/v1beta (Optional) |
Azure OpenAI

For Azure OpenAI Service, set apiBaseUrl to the format https://[YOUR-ENDPOINT-NAME].openai.azure.com/openai/deployments/[YOUR-DEPLOYMENT-NAME].

| Configuration | Example |
| --- | --- |
| API Key | your-api-key |
| Model | gpt-4o |
| API Base URL | https://endpoint-name.openai.azure.com/openai/deployments/deployment-name |
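The bracketed placeholders combine into a single base URL. A small sketch of the composition (the helper function is hypothetical; only the URL format comes from the section above):

```python
def azure_openai_base_url(endpoint_name: str, deployment_name: str) -> str:
    """Compose an Azure OpenAI base URL in the format
    https://[endpoint].openai.azure.com/openai/deployments/[deployment]."""
    return (
        f"https://{endpoint_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
    )

url = azure_openai_base_url("endpoint-name", "deployment-name")
```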
GitHub Copilot

GitHub Copilot is supported with built-in authentication (a popup will ask for your permission when using GitHub Copilot models).

Note: Currently gpt-4o, gpt-4o-mini, o1, o1-mini, and claude-3.5-sonnet are supported (refer to the doc here for details). MCP tools are not yet supported via the UltraRepo AIDE extension.

| Configuration | Example |
| --- | --- |
| Provider | GitHubCopilot |
| API Key | github |
| Model | custom |
| Custom Model | claude-3.5-sonnet |
GitHub Models

For GitHub Models, get your GitHub token from here.

| Configuration | Example |
| --- | --- |
| API Key | your-github-token |
| Model | o1 |
| API Base URL | https://models.inference.ai.azure.com |
OpenAI-compatible Models

To use OpenAI-compatible APIs, set Model to "custom" and then specify your custom model name.

Example for groq:

| Configuration | Example |
| --- | --- |
| API Key | your-groq-key |
| Model | custom |
| Custom Model | mixtral-8x7b-32768 |
| API Base URL | https://api.groq.com/openai/v1 |
DeepClaude (DeepSeek + Claude)
| Configuration | Example |
| --- | --- |
| API Key | your-api-key |
| Model | claude-3-sonnet-20240229 |
| API Base URL | https://api.anthropic.com/v1 (Optional) |
| Reasoning API Key | your-deepseek-api-key |
| Reasoning Model | deepseek-reasoner (or deepseek-r1, depending on your provider) |
| Reasoning API Base URL | https://api.deepseek.com (or your own base URL) |

Configurations

Full list of configuration options
| Setting | Default | Description |
| --- | --- | --- |
| ultrarepo.gpt3.apiKey | | OpenAI API key. Get your API key from OpenAI. |
| ultrarepo.gpt3.apiBaseUrl | https://api.openai.com/v1 | Optional override for the OpenAI API base URL. If you customize it, make sure it has the same format, e.g. starts with https:// without a trailing slash. The completions endpoint suffix is added internally, e.g. ${apiBaseUrl}/v1/completions |
| ultrarepo.gpt3.organization | | OpenAI Organization ID. |
| ultrarepo.gpt3.model | gpt-4o | OpenAI model to use for your prompts. Documentation. If you face a 400 Bad Request, make sure you are using the right model for your integration method. For local or self-hosted LLMs compatible with OpenAI, select custom and specify your custom model name in #ultrarepo.gpt3.customModel#. |
| ultrarepo.gpt3.customModel | | Specify your custom model name here if you selected custom in #ultrarepo.gpt3.model#. This allows you to use a custom model name for local or self-hosted LLMs compatible with OpenAI. |
| ultrarepo.gpt3.maxTokens | 1024 | The maximum number of tokens to generate in the completion. |
| ultrarepo.gpt3.temperature | 1 | The sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. |
| ultrarepo.gpt3.top_p | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| ultrarepo.systemPrompt | | System prompts for the copilot. |
| ultrarepo.gpt3.generateCode-enabled | true | Enable the code-generation context menu item for the selected comment/code for Codex. |
| ultrarepo.gpt3.searchGrounding.enabled | false | Enable search grounding for the Gemini model. Only available for Google Gemini models. |
| ultrarepo.promptPrefix.addTests | Implement tests for the following code | The prompt prefix used for adding tests for the selected code |
| ultrarepo.promptPrefix.addTests-enabled | true | Enable the add-tests prompt prefix in the context menu |
| ultrarepo.promptPrefix.findProblems | Find problems with the following code | The prompt prefix used for finding problems in the selected code |
| ultrarepo.promptPrefix.findProblems-enabled | true | Enable the find-problems prompt prefix in the context menu |
| ultrarepo.promptPrefix.optimize | Optimize the following code | The prompt prefix used for optimizing the selected code |
| ultrarepo.promptPrefix.optimize-enabled | true | Enable the optimize prompt prefix in the context menu |
| ultrarepo.promptPrefix.explain | Explain the following code | The prompt prefix used for explaining the selected code |
| ultrarepo.promptPrefix.explain-enabled | true | Enable the explain prompt prefix in the context menu |
| ultrarepo.promptPrefix.addComments | Add comments for the following code | The prompt prefix used for adding comments to the selected code |
| ultrarepo.promptPrefix.addComments-enabled | true | Enable the add-comments prompt prefix in the context menu |
| ultrarepo.promptPrefix.completeCode | Complete the following code | The prompt prefix used for completing the selected code |
| ultrarepo.promptPrefix.completeCode-enabled | true | Enable the complete-code prompt prefix in the context menu |
| ultrarepo.promptPrefix.adhoc-enabled | true | Enable the ad-hoc command prompt prefix for the selected code in the context menu |
| ultrarepo.promptPrefix.customPrompt1 | | Your custom prompt 1. Disabled by default; set a custom prompt and enable it if you prefer a customized prompt. |
| ultrarepo.promptPrefix.customPrompt1-enabled | false | Enable custom prompt 1. If you enable this item, make sure to set #ultrarepo.promptPrefix.customPrompt1#. |
| ultrarepo.promptPrefix.customPrompt2 | | Your custom prompt 2. Disabled by default; set a custom prompt and enable it if you prefer a customized prompt. |
| ultrarepo.promptPrefix.customPrompt2-enabled | false | Enable custom prompt 2. If you enable this item, make sure to set #ultrarepo.promptPrefix.customPrompt2#. |
| ultrarepo.response.showNotification | false | Choose whether you'd like to receive a notification when the ChatGPT bot responds to your query. |
| ultrarepo.response.autoScroll | true | When a new question or response is added to the conversation window, the extension automatically scrolls to the bottom. Disable this setting to change that behaviour. |
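The temperature and top_p settings above are standard sampling controls. As an illustration of how nucleus (top-p) filtering works in principle (this function is a teaching sketch, not extension code):

```python
def top_p_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize (nucleus sampling)."""
    kept: dict[str, float] = {}
    cumulative = 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

# With top_p=0.8, only the top tokens covering 80% of the probability mass survive.
probs = {"a": 0.6, "b": 0.3, "c": 0.08, "d": 0.02}
filtered = top_p_filter(probs, top_p=0.8)
```

Lower top_p values therefore restrict generation to fewer, more likely tokens, which is why 0.1 in the table above means only the top 10% of probability mass is considered.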

How to install locally

Build and install locally

We highly recommend installing the extension directly from the VS Code Marketplace for the easiest setup and automatic updates. However, for advanced users, building and installing locally is also an option.

  • Install vsce (The Visual Studio Code Extension Manager) if you don't have it on your machine:
    • npm install --global vsce
  • Build and package the extension, then install the generated .vsix manually:
npm run build
npm run package
code --uninstall-extension UltraRepo.aide
code --install-extension ultrarepo-aide-*.vsix

Acknowledgement

AI Toolkit for TypeScript

This extension utilizes the AI Toolkit for TypeScript to seamlessly integrate with a variety of AI providers. This allows for flexible and robust AI functionality within the editor. We appreciate the work by Vercel in creating this valuable resource.

gencay/vscode-chatgpt

This extension is built on the widely-used gencay/vscode-chatgpt project, which has garnered over 500,000 downloads. We are deeply grateful for the foundation laid by the original author, Gencay, and the community that supported it.

Unfortunately, the original author has decided to stop maintaining the project, and the new recommended Genie AI extension is not open-source. This fork continues the development to keep the project open and accessible to everyone.

License

This project is released under the ISC License; see LICENSE for details. The copyright notice and the respective permission notices must appear in all copies.
