
CareerProof

Career and workforce intelligence built on a deep HR ontology: skill taxonomies, role definitions and responsibilities, compensation and incentive structures, learning and development pathways, sourcing strategies, and role/skill evolution mapping. This structured foundation, combined with a RAG knowledge base curated from 50+ premium sources (HBR, McKinsey, BCG, Gartner, Forrester) and updated 3x daily with live web research, powers 6 guided skills and 42 MCP tools for two audiences: working professionals getting personalized career intelligence (CV optimization, salary benchmarking, career strategy), and HR/TA teams running structured talent evaluation, candidate shortlisting, compensation analysis, and consulting-grade workforce research reports.

Example Use Cases (for HR/TA teams):

1. Custom Evaluation Models: Train CareerProof on your organization's existing assessment rubrics, scorecards, and evaluation criteria to build custom eval models that evaluate candidates through your specific lens. Upload your competency frameworks and historical assessments, then run inference on new candidates, scored and ranked exactly how your team would, at scale.
2. Candidate Evaluation & Shortlisting: Set up a hiring context with a company profile and job description, upload candidate CVs, then batch-rank them with GEM competency scoring and JD-FIT matching. Apply your custom eval models for organization-specific scoring, or deep-dive any candidate with a 360-degree evaluation including tailored interview questions derived from skill taxonomy analysis.
3. Workforce Research Reports: Generate consulting-grade PDF reports across 16 types (salary benchmarking, skills gap analysis, org design, DEI assessment, succession planning, sourcing strategy, and more). Each report is grounded in real-time market data from premium sources and structured around the HR ontology: role definitions, compensation structures, L&D pathways, and skill evolution mapping.
4. Compensation & Incentive Benchmarking: Get market-calibrated salary and total compensation intelligence for any role, location, and industry. Analysis is structured around compensation and incentive frameworks from the HR ontology, enriched with live web research and curated knowledge base data covering base salary, equity, bonuses, and benefits.

Example Use Cases (for the working professional or career coach):

1. Career Intelligence Chat (Hyper-Personalized): Ask career strategy questions and get hyper-personalized responses that fuse your CV context with deep insights from the career and workforce RAG knowledge base. Salary benchmarks calibrated to your function and location, industry disruption analysis mapped to your skill profile, and career pivot recommendations grounded in role evolution data. These are not surface-level answers, but intelligence drawn from the same sources that inform executive strategy.
2. CV Optimization (Hyper-Personalized): Upload your CV and receive a hyper-personalized positioning pipeline that combines your actual experience with deep insights from the career and workforce RAG knowledge base. Market analysis calibrated to your industry and seniority, career opportunity identification grounded in role/skill evolution data, and targeted edits with trade-off analysis. This is not generic advice, but intelligence shaped by 50+ premium research sources and your unique career trajectory.

Splid MCP

# Splid MCP Server

A Model Context Protocol (MCP) server that exposes Splid (splid.app) via tools, powered by the reverse-engineered `splid-js` client.

- Language/Runtime: Node.js (ESM) + TypeScript
- Transport: Streamable HTTP (and stdio for local inspector)
- License: MIT

## Quick start

1) Install

```bash
npm install
```

2) Configure env

Create a `.env` in the project root:

```
CODE=YOUR_SPLID_INVITE_CODE
PORT=8000
```

3) Build and run

```bash
npm run build
npm run dev
```

4) Inspect locally

```bash
npm run inspect
```

Then connect to `http://localhost:8000/mcp` using "Streamable HTTP".

## Tools

All tools support an optional group selector to override the default from `CODE`:

- `groupId?: string`
- `groupCode?: string` (invite code)
- `groupName?: string` (reserved; not yet supported)

If none is provided, the server uses the default group from `CODE`.

### health

- Purpose: connectivity check
- Output: `{ ok: true }`

### whoami

- Purpose: show the currently selected group and its members
- Input: none
- Output: JSON containing group info and members

### createExpense

- Purpose: create a new expense entry
- Input:
  - `title: string`
  - `amount: number > 0`
  - `currencyCode?: string` (defaults to the group default when omitted)
  - `payers: { userId?: string; name?: string; amount: number > 0 }[]` (at least 1)
  - `profiteers: { userId?: string; name?: string; share: number in (0,1] }[]` (at least 1)
  - Optional group selector fields
- Rules:
  - Names are case-insensitive and resolved to member GlobalId; unknown names return a clear error.
  - The sum of all `share` values must equal 1 (±1e-6).
- Example (names):

```json
{
  "title": "Dinner",
  "amount": 12.5,
  "payers": [{ "name": "Alice", "amount": 12.5 }],
  "profiteers": [{ "name": "Bob", "share": 0.6 }, { "name": "Alice", "share": 0.4 }]
}
```

- Example (userIds):

```json
{
  "title": "Dinner",
  "amount": 12.5,
  "payers": [{ "userId": "<GlobalId>", "amount": 12.5 }],
  "profiteers": [{ "userId": "<GlobalId>", "share": 1 }]
}
```

### listEntries

- Purpose: list recent entries in a group
- Input:
  - `limit?: number` (1..100, default 20)
  - Optional group selector fields
- Output: array of entries

### getGroupSummary

- Purpose: show balances/summary for a group
- Input: optional group selector fields
- Output: summary object (balances computed via Splid)

### Streamable HTTP

- URL: `http://localhost:8000/mcp`
- No auth headers required; use MCP Inspector to test.

## Troubleshooting

- "Bad Request: Server not initialized": refresh and reconnect; the first POST must be `initialize`.
- 400 with share errors: ensure shares are in (0,1] and sum to 1.
- Unknown name: check exact member names in the `whoami` output.

## Configuration

- Env variables:
  - `CODE`: Splid invite/join code for the default group
  - `PORT` (optional): default 8000

## Acknowledgements

- Splid JS client: https://github.com/LinusBolls/splid-js
- MCP Server template / docs: https://github.com/InteractionCo/mcp-server-template

## License

MIT
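The profiteer-share rule above (each share in (0,1], shares summing to 1 within ±1e-6) can be sketched as a small validator. This is a minimal illustration of the documented rule only, not the server's actual code; the helper name `sharesSumToOne` is hypothetical:

```typescript
// Minimal sketch of createExpense's profiteer-share rule:
// each share must lie in (0, 1] and all shares must sum to 1 (±1e-6).
// Hypothetical helper, not taken from the server's source.
const SHARE_TOLERANCE = 1e-6;

function sharesSumToOne(shares: number[]): boolean {
  // Reject any share outside the documented (0, 1] interval.
  if (shares.some((s) => s <= 0 || s > 1)) return false;
  const sum = shares.reduce((a, b) => a + b, 0);
  return Math.abs(sum - 1) <= SHARE_TOLERANCE;
}

// Matches the README's first example: Bob 0.6 + Alice 0.4 = 1.
console.log(sharesSumToOne([0.6, 0.4])); // true
console.log(sharesSumToOne([0.5, 0.6])); // false (sums to 1.1)
```

The tolerance absorbs floating-point rounding, which is why the README phrases the rule as "equal 1 (±1e-6)" rather than exact equality.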

ScreenMonitorMCP

ScreenMonitorMCP - Revolutionary AI Vision Server

Give AI real-time sight and screen interaction capabilities.

ScreenMonitorMCP is a revolutionary MCP (Model Context Protocol) server that provides Claude and other AI assistants with real-time screen monitoring, visual analysis, and intelligent interaction capabilities. This project enables AI to see, understand, and interact with your screen in ways never before possible.

Why ScreenMonitorMCP?

Transform your AI assistant from text-only to a visual powerhouse that can:

- Monitor your screen in real-time and detect important changes
- Click UI elements using natural language commands
- Extract text from any part of your screen
- Analyze screenshots and videos with AI
- Provide intelligent insights about screen activity

Core Features

Smart Monitoring System

- start_smart_monitoring() - Enable intelligent monitoring with configurable triggers
- get_monitoring_insights() - AI-powered analysis of screen activity
- get_recent_events() - History of detected screen changes
- stop_smart_monitoring() - Stop monitoring with preserved insights

Natural Language UI Interaction

- smart_click() - Click elements using descriptions like "Save button"
- extract_text_from_screen() - OCR text extraction from screen regions
- get_active_application() - Get current application context

Visual Analysis Tools

- capture_and_analyze() - Screenshot capture with AI analysis
- record_and_analyze() - Video recording with AI analysis
- query_vision_about_current_view() - Ask AI questions about current screen

System Performance

- get_system_metrics() - Comprehensive system health dashboard
- get_cache_stats() - Cache performance statistics
- optimize_image() - Advanced image optimization
- simulate_input() - Keyboard and mouse simulation
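Tools like smart_click are invoked through the standard MCP tools/call request. A sketch of what a client might send is below; the JSON-RPC envelope follows the MCP specification, but the argument key `element_description` is an assumption, since ScreenMonitorMCP's actual input schema is not shown in this listing:

```typescript
// Hypothetical MCP "tools/call" request for smart_click.
// Envelope per the MCP JSON-RPC spec; the argument name
// "element_description" is assumed, not ScreenMonitorMCP's real schema.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "smart_click",
    arguments: { element_description: "Save button" },
  },
};

console.log(JSON.stringify(request, null, 2));
```

In practice an MCP client library builds this envelope for you; the sketch only shows the shape the server ultimately receives.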

Research-Quest: Scientific Discovery Platform

Research-Quest

A comprehensive desktop extension implementing the Research-Quest framework for systematic scientific reasoning through an 8-stage graph-based methodology.

Overview

The Research-Quest desktop extension provides researchers with a powerful tool for conducting systematic scientific analysis through a structured, graph-based approach. This extension implements the complete Research-Quest framework as defined in the Research-Quest.md specification, enabling:

- Systematic Research Methodology: 8-stage process from initialization to reflection
- Multi-dimensional Confidence Tracking: Bayesian belief updates with statistical rigor
- Interdisciplinary Research Support: Bridge nodes connecting different domains
- Causal Inference Capabilities: Pearl's do-calculus and counterfactual reasoning
- Temporal Pattern Analysis: Dynamic relationship modeling
- Bias Detection & Mitigation: Systematic bias identification and correction
- Impact Assessment: Research significance and utility estimation
- Collaborative Research Features: Multi-researcher attribution and consensus building

Architecture

8-Stage Research-Quest Framework

1. Initialization: Create root node with task understanding
2. Decomposition: Break down the research task into fundamental dimensions
3. Hypothesis/Planning: Generate competing hypotheses with detailed metadata
4. Evidence Integration: Bayesian confidence updates with typed relationships
5. Pruning/Merging: Graph refinement based on confidence and impact
6. Subgraph Extraction: Focus on high-value research pathways
7. Composition: Generate structured research narratives
8. Reflection: Comprehensive quality audit and validation
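The Bayesian belief update behind stage 4 (Evidence Integration) can be sketched as a standard posterior computation. This is a minimal single-dimension illustration under assumed names; the framework's actual update tracks multiple confidence dimensions and typed evidence relationships:

```typescript
// Minimal sketch of a Bayesian confidence update for a hypothesis node.
// Names are illustrative, not taken from the Research-Quest codebase.
//
// P(H|E) = P(E|H)·P(H) / (P(E|H)·P(H) + P(E|~H)·P(~H))
function updateConfidence(
  prior: number,              // current confidence P(H)
  pEvidenceGivenH: number,    // P(E|H): likelihood of the evidence if H holds
  pEvidenceGivenNotH: number, // P(E|~H): likelihood if H does not hold
): number {
  const supporting = pEvidenceGivenH * prior;
  const opposing = pEvidenceGivenNotH * (1 - prior);
  return supporting / (supporting + opposing);
}

// Evidence four times likelier under H lifts a 50% prior to about 80%.
console.log(updateConfidence(0.5, 0.8, 0.2)); // ≈ 0.8
```

Repeating this update as each piece of evidence arrives is what lets node confidence rise or fall incrementally rather than being reassessed from scratch.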