
Tag: #hr

248 results found

Scratchpad MCP

scratchpad-mcp is an MCP server that gives AI agents persistent, token-efficient storage. It solves a specific waste problem: agents constantly re-read files they've already seen, re-summarize documents they've already processed, and re-load context they've already understood. Every one of those round-trips burns tokens for no new information. This server fixes that with eight tools designed around how agents actually work:

Versioned writes. write_file automatically versions every write and keeps the 10 most recent versions per file. Storage is append-only on success and atomic on failure, so partial writes can't corrupt state.

Structured diffs. read_file accepts a since_version parameter and returns a JSON line-diff against that prior version instead of the full content. Agents that have already seen v1 can ask "what changed in v3?" and get a small structured payload they can reason about, not the entire file again.

Append-only logs. append_log and read_log give agents an event stream they can replay. Cursor-based pagination (since_entry + last_entry_id + has_more) means an agent can checkpoint where it left off and resume cheaply.

On-demand summaries. summarize_file calls Claude Haiku to summarize files over ~2,000 estimated tokens. Summaries are cached per file version, so repeat calls on an unchanged file cost nothing. The threshold is enforced server-side, so you can't accidentally pay to summarize something small.

Per-agent isolation. Every operation is scoped by an agent_id parameter, so one server instance can serve many agents without leaking state between them.

Storage limits. 1 MB per file write, 64 KB per log entry, and 1,000 files / 100k log entries / 100 MB total per agent: sane multi-tenant guardrails out of the box.

Backed by a single SQLite file (a Postgres migration is on the roadmap). All SQL is parameterized, paths are validated against a strict allowlist, and the security model is documented honestly: it's safe for one-user-per-process deployments today, and the V2 plan derives agent_id from the caller's API key for true multi-tenancy. Build agents that remember what they've already seen.
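The versioned-write and diff-since-version pattern can be sketched in a few lines. This is a hypothetical in-memory illustration, not the server's actual implementation: the class, the 1-based version numbering, and the unified-diff format are assumptions; only the tool names and the 10-version window come from the description above.

```python
import difflib

# Hypothetical sketch of versioned writes plus "diff since version N" reads.
class Scratchpad:
    MAX_VERSIONS = 10  # the server keeps the 10 most recent versions per file

    def __init__(self):
        self.files = {}  # path -> list of version contents (oldest first)

    def write_file(self, path, content):
        versions = self.files.setdefault(path, [])
        versions.append(content)
        del versions[:-self.MAX_VERSIONS]  # drop versions beyond the window
        return len(versions)  # version number within the retained window

    def read_file(self, path, since_version=None):
        versions = self.files[path]
        latest = versions[-1]
        if since_version is None:
            return {"content": latest}
        # Return a small line-diff against the prior version, not full content
        old = versions[since_version - 1].splitlines()
        new = latest.splitlines()
        return {"diff": list(difflib.unified_diff(old, new, lineterm=""))}

pad = Scratchpad()
pad.write_file("notes.md", "alpha\nbeta\n")
pad.write_file("notes.md", "alpha\nbeta\ngamma\n")
delta = pad.read_file("notes.md", since_version=1)["diff"]  # only the change
```

An agent that already holds v1 pays only for the few diff lines rather than re-reading the whole file on every turn.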

AI HR Management Toolkit

You have 50 resumes to screen. Your AI assistant can reason about candidates, but it cannot open PDFs, extract structured data, or track pipeline stages. This toolkit bridges that gap. Give your AI assistant 24 tools covering the entire hiring workflow:

- Parse PDFs, DOCX, TXT, Markdown, and URLs into structured JSON
- Extract skills, experience, keywords, and entities algorithmically
- Score and rank candidates against job descriptions
- Run a full ATS: jobs, candidates, interviews, offers, notes, and analytics

23 of 24 tools are 100% algorithmic: no LLM calls, no API keys required. The AI calls tools, interprets the results, and delivers analysis. You just ask questions.

All 24 MCP Tools

All tools return structured JSON with next_steps hints so the AI knows what to call next.

Resume Parsing & Ingestion
- parse_resume: Parse PDF / DOCX / TXT / MD / URL into raw text plus contacts, keywords, and a section map
- batch_parse_resumes: Parse up to 20 files in one call, full pipeline on each
- inspect_pipeline: Run the 5-stage analysis pipeline for confidence scores, entity counts, and a data quality report

Text Analysis & NLP
- extract_keywords: TF-IDF keyword + bigram extraction with NER entity classification
- detect_patterns: Find date ranges, dollar/percent metrics, team sizes, section boundaries, and career trajectory signals
- classify_entities: NER with 12 entity types (PERSON, ORG, SKILL, JOB_TITLE, LOCATION, DATE, …) plus context disambiguation
- extract_skills_structured: Map extracted skills into 13 categories with proficiency estimation (beginner → expert)
- extract_experience_structured: Parse work history into a structured timeline with start/end dates, achievements, and technologies
- analyze_resume_comprehensive: Master tool; full pipeline + entities + keywords + skills + experience in one call

Candidate Matching & Scoring
- compute_similarity: Cosine, Jaccard, TF-IDF overlap, and skill-match scores between resume and job description
- assess_candidate: Score against up to 8 weighted criteria axes for a weighted total and a pass / review / reject decision (optional LLM)
- manage_candidates: Rank, filter, compare, and recommend pipeline stage changes across a candidate pool

Export & Notifications
- export_results: Export structured parse results to JSON or CSV
- send_email: Send results via SMTP (config passed per call; no server-side secrets stored)

ATS: Jobs
- ats_manage_jobs: Full CRUD for job postings: create, read, update, delete, list, search by title/department/status

ATS: Candidates & Pipeline
- ats_manage_candidates: CRUD + pipeline operations: add, update, move stage, bulk-move, filter by stage/score/tags
- ats_pipeline_analytics: Stage distribution, conversion rates, average time-in-stage, bottleneck detection, drop-off analysis
- ats_dashboard_stats: One-call hiring health report: open roles, candidates by stage, interview load, offer acceptance rate
- ats_search: Global full-text search across all ATS entities (candidates, jobs, interviews, offers, notes)

ATS: Interviews
- ats_schedule_interview: Create, update, and delete interviews with conflict detection and an interviewer availability check
- ats_interview_feedback: Submit structured feedback, compute a consensus score, and summarize feedback across all interviewers

ATS: Offers & Notes
- ats_manage_offers: Full offer lifecycle: draft → pending → approved → sent → accepted / declined / expired
- ats_manage_notes: Add, update, search, and delete timestamped candidate notes

Testing & Seeding
- ats_generate_demo_data: Generate a realistic sample ATS dataset (jobs, candidates, interviews, offers) for testing

assess_candidate is the one exception to "no LLM calls": it optionally calls an LLM when you supply provider + apiKey, and falls back to fully algorithmic scoring otherwise.
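The weighted-criteria idea behind a tool like assess_candidate can be sketched briefly. This is a hypothetical illustration of algorithmic fallback scoring, not the toolkit's actual code: the function name, the 0-to-1 score scale, and the pass/review/reject thresholds are all assumptions.

```python
# Hypothetical sketch: combine per-axis scores by weight, then map the
# weighted total to a pass / review / reject decision. Thresholds assumed.
def assess(scores: dict[str, float], weights: dict[str, float]) -> dict:
    total_weight = sum(weights.values())
    weighted = sum(scores[axis] * w for axis, w in weights.items()) / total_weight
    if weighted >= 0.75:
        decision = "pass"
    elif weighted >= 0.5:
        decision = "review"
    else:
        decision = "reject"
    return {"score": round(weighted, 3), "decision": decision}

# Three of the up-to-eight criteria axes, with example weights
result = assess(
    scores={"skills": 0.9, "experience": 0.7, "education": 0.6},
    weights={"skills": 0.5, "experience": 0.3, "education": 0.2},
)
```

Because the weights are normalized by their sum, a team can express priorities in any convenient scale (percentages, 1-to-10 importance) without changing the result.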

CareerProof

Career and workforce intelligence built on a deep HR ontology: skill taxonomies, role definitions and responsibilities, compensation and incentive structures, learning and development pathways, sourcing strategies, and role/skill evolution mapping. This structured foundation, combined with a RAG knowledge base curated from 50+ premium sources (HBR, McKinsey, BCG, Gartner, Forrester) and updated 3x daily with live web research, powers 6 guided skills and 42 MCP tools for two audiences: working professionals getting personalized career intelligence (CV optimization, salary benchmarking, career strategy), and HR/TA teams running structured talent evaluation, candidate shortlisting, compensation analysis, and consulting-grade workforce research reports.

Example Use Cases (for HR/TA teams):

1. Custom Evaluation Models: Train CareerProof on your organization's existing assessment rubrics, scorecards, and evaluation criteria to build custom eval models that evaluate candidates through your specific lens. Upload your competency frameworks and historical assessments, then run inference on new candidates, scored and ranked exactly how your team would, at scale.

2. Candidate Evaluation & Shortlisting: Set up a hiring context with a company profile and job description, upload candidate CVs, then batch-rank them with GEM competency scoring and JD-FIT matching. Apply your custom eval models for organization-specific scoring, or deep-dive any candidate with a 360-degree evaluation including tailored interview questions derived from skill taxonomy analysis.

3. Workforce Research Reports: Generate consulting-grade PDF reports across 16 types (salary benchmarking, skills gap analysis, org design, DEI assessment, succession planning, sourcing strategy, and more). Each report is grounded in real-time market data from premium sources and structured around the HR ontology: role definitions, compensation structures, L&D pathways, and skill evolution mapping.

4. Compensation & Incentive Benchmarking: Get market-calibrated salary and total compensation intelligence for any role, location, and industry. Analysis is structured around compensation and incentive frameworks from the HR ontology, enriched with live web research and curated knowledge base data covering base salary, equity, bonuses, and benefits.

Example Use Cases (for the working professional or career coach):

1. Career Intelligence Chat (Hyper-Personalized): Ask career strategy questions and get hyper-personalized responses that fuse your CV context with deep insights from the career and workforce RAG knowledge base. Salary benchmarks calibrated to your function and location, industry disruption analysis mapped to your skill profile, and career pivot recommendations grounded in role evolution data: not surface-level answers, but intelligence drawn from the same sources that inform executive strategy.

2. CV Optimization (Hyper-Personalized): Upload your CV and receive a hyper-personalized positioning pipeline that combines your actual experience with deep insights from the career and workforce RAG knowledge base. Market analysis calibrated to your industry and seniority, career opportunity identification grounded in role/skill evolution data, and targeted edits with trade-off analysis: not generic advice, but intelligence shaped by 50+ premium research sources and your unique career trajectory.

CodeGraph Rust

🎯 Overview

CodeGraph is a powerful CLI tool that combines MCP (Model Context Protocol) server management with sophisticated code analysis capabilities. It provides a unified interface for indexing projects, managing embeddings, and running MCP servers with multiple transport options. All you now need is an agent (or several) to create your very own deep code and project knowledge synthesizer system!

Key Capabilities

🔍 Advanced Code Analysis: Parse and analyze code across multiple languages using Tree-sitter
🚄 Dual Transport Support: Run MCP servers with STDIO, HTTP, or both simultaneously
🎯 Vector Search: Semantic code search using FAISS-powered vector embeddings
📊 Graph-Based Architecture: Navigate code relationships with RocksDB-backed graph storage
⚡ High Performance: Optimized for large codebases with parallel processing and batched embeddings
🔧 Flexible Configuration: Extensive configuration options for embedding models and performance tuning

Raw Performance ✨✨✨

170K lines of Rust code parsed in 0.49 s, and 21,024 embeddings generated in 3:24 min, on an M3 Pro 32 GB with Qdrant/all-MiniLM-L6-v2-onnx on CPU (no Metal acceleration used):

Parsing completed: 353/353 files, 169397 lines in 0.49s (714.5 files/s, 342852 lines/s)
[00:03:24] [########################################] 21024/21024 Embeddings complete

✨ Features

Core Features

Project Indexing
- Multi-language support (Rust, Python, JavaScript, TypeScript, Go, Java, C++)
- Incremental indexing with file watching
- Parallel processing with configurable workers
- Smart caching for improved performance

MCP Server Management
- STDIO transport for direct communication
- HTTP streaming with SSE support
- Dual transport mode for maximum flexibility
- Background daemon mode with PID management

Code Search
- Semantic search using embeddings
- Exact match and fuzzy search
- Regex and AST-based queries
- Configurable similarity thresholds

Architecture Analysis
- Component relationship mapping
- Dependency analysis
- Code pattern detection
- Architecture visualization support
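The core idea behind semantic code search with a configurable similarity threshold can be illustrated in miniature. This is a conceptual sketch only, not CodeGraph's actual API or its FAISS-backed index: the function names, the tiny 3-dimensional vectors, and the 0.3 threshold are assumptions made for the example.

```python
import math

# Conceptual sketch: rank stored code-chunk embeddings by cosine similarity
# to a query embedding, keeping only hits above a configurable threshold.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, index, threshold=0.3):
    # index: list of (chunk_name, embedding) pairs
    scored = [(name, cosine(query_vec, vec)) for name, vec in index]
    hits = [(name, score) for name, score in scored if score >= threshold]
    return sorted(hits, key=lambda h: h[1], reverse=True)

# Toy 3-dimensional embeddings; real models emit hundreds of dimensions
index = [
    ("parse_config", [0.9, 0.1, 0.0]),
    ("render_html", [0.0, 0.2, 0.9]),
]
results = search([1.0, 0.0, 0.0], index)
```

A vector library like FAISS replaces the linear scan here with approximate nearest-neighbor structures, which is what keeps search fast on large codebases.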