LNR Server 02: Cascading Failure Scenario Simulation
📣 Important Notice
⚠️ As the paper is under review, no content in this repository may be reused by anyone until this notice is removed. Thank you for your understanding! 🙏
1. Overview & Objectives
This repository contains the complete implementation, experimental data, and supplementary results for the paper ×××, developed at XXX University in China.
Pending publication, the code is shared under a restrictive license. Once the paper is accepted, the repository will transition to an MIT license. Please contact the corresponding author with any inquiries regarding academic use during the review period.
2. Videos of Agent Operation
2.1 Operation of the developed prototype
↓↓↓ A demonstration of the developed prototype operating the TCG-TE LNR agents with graph-guided MCP tools.
The full video can be found here
↓↓↓ A demonstration of the developed prototype integrating a new MCP server into the TCG-TE LNR agents.
The full video can be found here
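The full implementation lives in this repository; purely as an illustration of the integration shown in the second video, the sketch below registers an additional MCP server alongside an existing one and hands the discovered tools to a LangGraph agent via langchain-mcp-adapters. All server names, file paths, the endpoint URL, and the model identifier are placeholders, not the project's actual configuration, and a recent version of langchain-mcp-adapters is assumed.

```python
# Minimal sketch (illustrative only): wiring an extra MCP server into a
# LangGraph ReAct agent. Names, paths, endpoints, and the model id are
# placeholders and do not reflect this repository's real setup.
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main():
    # Each entry registers one MCP server; "new_server" shows how an
    # additional server would be added next to an existing one.
    client = MultiServerMCPClient(
        {
            "existing_tools": {
                "command": "python",
                "args": ["servers/existing_server.py"],  # placeholder path
                "transport": "stdio",
            },
            "new_server": {
                "url": "http://localhost:8000/mcp",  # placeholder endpoint
                "transport": "streamable_http",
            },
        }
    )
    # Discover the tools exposed by all registered MCP servers.
    tools = await client.get_tools()

    # Build a simple ReAct-style agent over those tools (placeholder model id).
    agent = create_react_agent("openai:gpt-4o", tools)

    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "List the available tools."}]}
    )
    print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```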
2.2 Operation of agents based on the NPG-TE pattern
↓↓↓ A snippet of the operation of the NPG-TE agent with discrete MCP tools driven by GPT-5.
↓↓↓ A screenshot of the agent's response
The full video can be found here
↓↓↓ A snippet of the operation of the NPG-TE agents with discrete MCP tools driven by GPT-4o.
↓↓↓ A screenshot of the agent's response
The full video can be found here
2.3 Operation of agents based on the TCG-TE pattern
↓↓↓ A snippet of the operation of the TCG-TE agents with graph-guided MCP tools driven by Claude 3.7 Sonnet.
The full video can be found here
↓↓↓ A snippet of the operation of TCG-TE agents with graph-guided MCP tools driven by GPT-4.1.
↓↓↓ A screenshot of the agent's response
The full video can be found here
3. Repository Structure
4. Acknowledgments
This work heavily relies on excellent open-source projects, including but not limited to:
- LangGraph & LangChain
- Hugging Face MTEB leaderboard
- NetworkX, PyTorch Geometric, and numerous LLM providers (OpenAI, Anthropic, Qwen, Llama, etc.)
We are deeply grateful to all contributors to these foundational projects.