

Quizlar

Quizlar is the learning MCP server. It turns whatever source the user brings — a YouTube lecture, a PDF, a URL, a textbook passage, or a pasted block of notes — into flashcards, then runs interactive quizzes with FSRS spaced-repetition scheduling so the material actually sticks. Every tool maps one-to-one to something a real learner does: ingest, quiz, track progress, review what's due.

Built as a voice-first tutor (LiveKit + Deepgram + ElevenLabs); the MCP surface exposes the same primitives the voice agent uses internally, so your agent gets production-grade grading, STT-aware answer parsing, and the same FSRS scheduler that powers the consumer app.

Three use cases:

1. "Quiz me on X." Call quiz_me(topic) — a composite tool that builds the deck and starts the quiz in one step. Then loop submit_answer → end_quiz. Grading is tier-1 exact match → phonetic fuzz → short LLM fallback (the same pipeline that ships in the voice product).

2. Study from a YouTube lecture. create_deck_from_youtube pulls the transcript, clusters it into concepts, and generates cards proportional to the video length. Poll get_job_status, then run the quiz loop.

3. Daily spaced-repetition review. get_study_recommendations returns exactly the cards due under the user's FSRS schedule, prioritised across all decks. Quizlar is the scheduler of record — your agent executes the plan instead of reinventing one each session.

Auth: Bearer token (sk-qz-<32 chars>) for headless installs, or full OAuth 2.1 / DCR / PKCE for one-click clients (Smithery, Claude Connector). Mint keys at https://quizlar.app/settings/api-keys.

22 tools total. Voice and text equal-status. Education-first.
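Quizlar's grading internals aren't published, but the tiered pipeline described above (exact match, then fuzzy matching, then an LLM fallback) is a common pattern. A minimal sketch, using difflib's SequenceMatcher as a stand-in for the phonetic-fuzz tier and an optional judge callable for the LLM tier — all names here are illustrative, not Quizlar's actual API:

```python
import re
import unicodedata
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase, strip accents and punctuation -- typical STT-aware cleanup."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def grade(answer: str, expected: str, llm_judge=None) -> tuple[bool, str]:
    """Return (correct, tier). Tiers: exact -> fuzzy -> optional LLM fallback."""
    a, e = normalize(answer), normalize(expected)
    if a == e:                                   # tier 1: exact match
        return True, "exact"
    ratio = SequenceMatcher(None, a, e).ratio()  # tier 2: fuzzy similarity
    if ratio >= 0.85:
        return True, "fuzzy"
    if llm_judge is not None:                    # tier 3: LLM-as-judge fallback
        return bool(llm_judge(answer, expected)), "llm"
    return False, "miss"
```

In a real deployment the `llm_judge` tier would be a short model call; keeping it as an injected callable keeps the first two (cheap) tiers testable offline.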

petro-mcp

petro-mcp — Petroleum Engineering MCP Server

petro-mcp exposes petroleum engineering workflows to Claude and other MCP-compatible LLMs through natural language. Instead of writing Python scripts, just ask your AI assistant.

Capabilities (80+ tools across the full upstream workflow):

- Well Logs (LAS): Parse LAS files, extract curves and headers, compute Vshale, porosity (density, neutron-density, sonic, effective), water saturation (Archie, Simandoux, Indonesian), permeability (Timur, Coates), and net pay.
- Decline Curve Analysis: Arps exponential/hyperbolic/harmonic fits, advanced models (Duong, PLE, SEPD), EUR calculation, Monte Carlo EUR distributions, bootstrap confidence intervals, probabilistic forecasts, price sensitivities.
- Rate Transient Analysis (RTA): Agarwal-Gardner, Blasingame, NPI, flowing material balance, normalized rate, sqrt-time, material balance time, permeability estimation, radius of investigation.
- Production Analytics: CSV production data queries, trend analysis, anomaly detection (shut-ins, rate jumps, water breakthrough, GOR blowouts), producing ratios (GOR, WOR, water cut).
- PVT & Reservoir: Black-oil correlations (Standing, Beggs-Robinson, Hall-Yarborough, Lee-Gonzalez-Eakin, Sutton), brine PVT, bubble point, oil compressibility, gas Z-factor, volumetric OOIP/OGIP, recovery factors, Havlena-Odeh, P/Z analysis.
- Drilling & Wellbore: Hydrostatic pressure, ECD, kill mud weight, MAASP, burst/collapse pressure, bit pressure drop, nozzle TFA, annular velocity, dogleg severity, vertical section, well survey, anticollision, wellbore tortuosity.
- Production Engineering: Nodal analysis (Vogel IPR + VLP), Beggs-Brill multiphase flow, choke flow, erosional velocity, Turner/Coleman critical rates, hydrate temperature/inhibitor, ICP/FCP, HPT.
- Economics: NPV, IRR, payout period, PV10, breakeven price, well economics, operating netback, price sensitivity.
- Units: Oilfield unit conversions across pressure, rate, volume, length, density, viscosity, and more.

Why petro-mcp? Purpose-built for petroleum engineers. Other energy MCP servers focus on commodity prices; this one runs the actual engineering calculations — log interpretation, decline analysis, reservoir engineering, drilling, production, and economics — all through plain English.

Install: pip install petro-mcp → configure in Claude Desktop → ask away.

Links: GitHub: https://github.com/petropt/petro-mcp · PyPI: https://pypi.org/project/petro-mcp/ · Web tools: https://tools.petropt.com

License: MIT · Author: Groundwork Analytics
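For reference on what the decline-curve tools compute, the Arps family the listing mentions is standard textbook math. A minimal sketch (this is the classic formula set, not petro-mcp's actual source; names like `qi` and `di` are the conventional symbols):

```python
import math


def arps_rate(qi: float, di: float, t: float, b: float = 0.0) -> float:
    """Arps decline-curve rate at time t.

    qi -- initial rate (e.g. STB/d)
    di -- nominal initial decline rate (1/time, same time unit as t)
    b  -- hyperbolic exponent: b=0 exponential, 0<b<1 hyperbolic, b=1 harmonic
    """
    if b == 0.0:                 # exponential: q = qi * exp(-di*t)
        return qi * math.exp(-di * t)
    if b == 1.0:                 # harmonic: q = qi / (1 + di*t)
        return qi / (1.0 + di * t)
    # hyperbolic: q = qi / (1 + b*di*t)^(1/b)
    return qi / (1.0 + b * di * t) ** (1.0 / b)


def eur_exponential(qi: float, di: float, q_limit: float) -> float:
    """Cumulative production to an economic-limit rate under exponential
    decline: Np = (qi - q_limit) / di."""
    return (qi - q_limit) / di
```

A fitting tool layers least-squares estimation of (qi, di, b) on top of these; the Monte Carlo EUR tools then resample those fitted parameters.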

CareerProof

Career and workforce intelligence built on a deep HR ontology — skill taxonomies, role definitions and responsibilities, compensation and incentive structures, learning and development pathways, sourcing strategies, and role/skill evolution mapping. This structured foundation, combined with a RAG knowledge base curated from 50+ premium sources (HBR, McKinsey, BCG, Gartner, Forrester) and updated 3x daily with live web research, powers 6 guided skills and 42 MCP tools for two audiences: working professionals getting personalized career intelligence (CV optimization, salary benchmarking, career strategy), and HR/TA teams running structured talent evaluation, candidate shortlisting, compensation analysis, and consulting-grade workforce research reports.

Example Use Cases (for HR/TA teams):

1. Custom Evaluation Models — Train CareerProof on your organization's existing assessment rubrics, scorecards, and evaluation criteria to build custom eval models that score candidates through your specific lens. Upload your competency frameworks and historical assessments, then run inference on new candidates — scored and ranked exactly how your team would, at scale.

2. Candidate Evaluation & Shortlisting — Set up a hiring context with company profile and job description, upload candidate CVs, then batch-rank them with GEM competency scoring and JD-FIT matching. Apply your custom eval models for organization-specific scoring, or deep-dive any candidate with a 360-degree evaluation, including tailored interview questions derived from skill taxonomy analysis.

3. Workforce Research Reports — Generate consulting-grade PDF reports across 16 types (salary benchmarking, skills gap analysis, org design, DEI assessment, succession planning, sourcing strategy, and more). Each report is grounded in real-time market data from premium sources and structured around the HR ontology — role definitions, compensation structures, L&D pathways, and skill evolution mapping.

4. Compensation & Incentive Benchmarking — Get market-calibrated salary and total compensation intelligence for any role, location, and industry. Analysis is structured around compensation and incentive frameworks from the HR ontology, enriched with live web research and curated knowledge base data covering base salary, equity, bonuses, and benefits.

Example Use Cases (for working professionals and career coaches):

1. Career Intelligence Chat (Hyper-Personalized) — Ask career strategy questions and get hyper-personalized responses that fuse your CV context with deep insights from the career and workforce RAG knowledge base. Salary benchmarks calibrated to your function and location, industry disruption analysis mapped to your skill profile, and career pivot recommendations grounded in role evolution data — not surface-level answers, but intelligence drawn from the same sources that inform executive strategy.

2. CV Optimization (Hyper-Personalized) — Upload your CV and receive a hyper-personalized positioning pipeline that combines your actual experience with deep insights from the career and workforce RAG knowledge base. Market analysis calibrated to your industry and seniority, career opportunity identification grounded in role/skill evolution data, and targeted edits with trade-off analysis — not generic advice, but intelligence shaped by 50+ premium research sources and your unique career trajectory.
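CareerProof's GEM competency scoring and JD-FIT matching are proprietary, but the batch-rank step it describes boils down to scoring each candidate against a weighted job-description profile and sorting. A toy illustration of that shape (every name, skill, and weight here is hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    skills: dict[str, float]  # skill -> assessed proficiency, 0..1


def jd_fit(candidate: Candidate, jd_weights: dict[str, float]) -> float:
    """Weighted overlap between a candidate's skills and the JD's skill weights.

    Skills the candidate lacks contribute 0; the result is normalized to 0..1.
    """
    total = sum(jd_weights.values())
    got = sum(w * candidate.skills.get(skill, 0.0) for skill, w in jd_weights.items())
    return got / total if total else 0.0


def shortlist(candidates: list[Candidate],
              jd_weights: dict[str, float],
              top_n: int = 3) -> list[tuple[str, float]]:
    """Batch-rank candidates by JD fit, highest first."""
    ranked = sorted(candidates, key=lambda c: jd_fit(c, jd_weights), reverse=True)
    return [(c.name, round(jd_fit(c, jd_weights), 3)) for c in ranked[:top_n]]
```

A production system replaces the self-contained skill dictionary with extraction from uploaded CVs and the flat weights with an organization's trained evaluation model, but the rank-and-cut step stays the same.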