Your AI development stack, curated

The best AI coding tools, MCP workflows, and Claude Code skills — organized for developers. From editor setup to production integrations.

Build your AI stack

Tools, MCP servers, and skills that work together — from editor to production.

AI Coding Tools
8+ tools indexed
Editor extensions, code completion, pair programming tools. Cursor, Windsurf, Copilot, and more.
MCP Servers
6+ MCP servers indexed
Connect your AI to GitHub, databases, browsers, search, and production infrastructure.
Claude Code Skills
6+ skills indexed
Reusable workflow modules for debugging, refactoring, code review, and planning.

MCP Servers

More →

n8n MCP Server Trigger

The MCP Server Trigger is a first-party n8n core node that turns an n8n workflow into a Model Context Protocol server endpoint. Instead of chaining conventional trigger nodes, it connects only to tool nodes so remote MCP clients can list tools and invoke them over long-lived Server-Sent Events or streamable HTTP transports (stdio is explicitly unsupported). Each node exposes separate test and production MCP URLs and optional bearer or header authentication; the documentation explains how to proxy Claude Desktop through `npx mcp-remote` and covers queue-mode caveats for multi-replica webhook deployments.
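For Claude Desktop, which speaks stdio, the `npx mcp-remote` proxy pattern described above looks roughly like this in `claude_desktop_config.json`; the server name and URL are placeholders for the production MCP URL your trigger node displays, not values from n8n's docs:

```json
{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance.example.com/mcp/your-path"]
    }
  }
}
```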

DuckDB MCP community extension (`duckdb_mcp`)

The DuckDB-distributed community extension `duckdb_mcp` embeds MCP client and server capabilities directly inside DuckDB. Install it with `INSTALL duckdb_mcp FROM community` followed by `LOAD duckdb_mcp`; after that, SQL can attach remote MCP servers (stdio/TCP/WebSocket transports), enumerate resources (`mcp_list_resources`), invoke remote tools (`mcp_call_tool`), and wrap responses with `read_csv`/`read_json`/`read_parquet` URIs routed through `mcp://`. In the reverse direction, DuckDB can publish tables, queries, and execution-bound tools (`mcp_publish_table`, `mcp_publish_query`, `mcp_publish_execution_tool`), while `mcp_server_start` exposes them to external MCP-compatible clients.
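A rough SQL session illustrating the client-side flow; the `INSTALL`/`LOAD` lines come from the extension's docs, while the attach arguments, server alias, and tool name are illustrative placeholders whose exact signatures should be checked against the extension's reference:

```sql
-- Load the community extension
INSTALL duckdb_mcp FROM community;
LOAD duckdb_mcp;

-- Attach a remote MCP server (transport and argument details are
-- illustrative), then enumerate its resources and invoke a tool
ATTACH 'stdio:///usr/local/bin/example-mcp-server' AS remote (TYPE mcp);
SELECT * FROM mcp_list_resources('remote');
SELECT mcp_call_tool('remote', 'search', '{"query": "duckdb"}');
```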

Neon MCP Server

The official Neon MCP integration exposes Neon Postgres projects to MCP-capable assistants via Streamable HTTP (`https://mcp.neon.tech/mcp`), legacy SSE (`https://mcp.neon.tech/sse`), or a locally launched `@neondatabase/mcp-server-neon` package. The documentation lists tools for project and branch lifecycle, SQL execution, migration rehearsal branches, slow-query diagnostics, Neon Auth provisioning, Data API setup, and embedded Neon docs retrieval—each mapped to Neon API operations.
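For clients that accept a remote server URL directly (Cursor's `mcp.json`, for example), the hosted Streamable HTTP endpoint can be configured without running the local package; the exact key names vary by client, so treat this shape as a sketch:

```json
{
  "mcpServers": {
    "neon": {
      "url": "https://mcp.neon.tech/mcp"
    }
  }
}
```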

Qdrant MCP Server

Official Qdrant MCP server implementation that gives AI agents a semantic memory layer backed by Qdrant vector search. It exposes MCP tools for storing information and retrieving relevant context, so assistants can persist and recall facts across sessions instead of relying only on short chat history.

Ollama MCP Server

Community-maintained Model Context Protocol bridge that exposes Ollama's local HTTP API—model listing, pulls, chat, and OpenAI-compatible completions—to MCP clients such as Claude Desktop and Cursor. Published on npm as `ollama-mcp-server` (maintained fork of NightTrek/Ollama-mcp); requires a running Ollama daemon reachable at `OLLAMA_HOST` (default `http://127.0.0.1:11434`).

Shopify Dev MCP

Official Shopify Dev MCP server from the Shopify AI Toolkit: connects Claude Code, Cursor, VS Code, Gemini CLI, Codex, and similar clients to Shopify developer documentation, GraphQL schemas, and validation workflows, so assistants stop guessing API shapes. It runs locally via npx using the `@shopify/dev-mcp` package, and Shopify documents that no authentication is required for this developer-resources server. Part of Shopify's broader AI Toolkit alongside plugins and optional skill bundles.
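Since the server runs locally over stdio with no auth, a typical client entry is just the npx invocation; the `mcpServers` key and entry name follow the common client config convention and may differ slightly per client:

```json
{
  "mcpServers": {
    "shopify-dev-mcp": {
      "command": "npx",
      "args": ["-y", "@shopify/dev-mcp@latest"]
    }
  }
}
```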

Claude Code Skills

More →

Postmortem trigger and root-cause taxonomy

Distills Appendix C (“Results of Postmortem Analysis”) from Google’s SRE workbook: it explains why Google catalogs standardized postmortem fields—linking outages to observable triggers versus deeper root-cause categories—so reliability leaders can prioritize systemic fixes rather than anecdotal ones. The appendix cites a multi-year corpus (labeled 2010–2017 in the workbook) showing that binary pushes accounted for roughly 37% of outage triggers and configuration pushes for about 31%, with additional slices for user-behavior spikes, pipelines, upstream providers, performance decay, capacity, and hardware. A companion table correlates outages with qualitative root causes such as faulty software (~41%), development-process gaps (~20%), emergent complexity (~17%), deployment planning weaknesses (~7%), and network failures (~3%). Teams use these distributions to sanity-check whether their incident queues skew differently and to steer investment into the failure classes that have historically dominated.

Example SLO document authoring

Operationalizes Appendix A from Google’s SRE workbook by translating the illustrative “Example Game Service” SLO dossier into a checklist teams can mimic: articulate the user-facing workload, nominate rolling measurement windows (the appendix uses four weeks), pair each subsystem with tightly defined SLIs (availability from load balancers excluding 5xx, latency percentile gates, freshness for derived tables, correctness via probers, completeness for pipelines), cite explicit numerator/denominator language, rationalize rounding policies, quantify per-objective error budgets, and reference the sibling error budget policy for enforcement.
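The numerator/denominator discipline the checklist asks for can be made concrete in a few lines. This is a minimal sketch: the function name, counts, and window framing are illustrative, not taken from the appendix.

```python
# Availability SLI with an explicit numerator and denominator: good events
# are non-5xx responses measured at the load balancer, evaluated over a
# rolling four-week window. All names and figures here are illustrative.

def availability_sli(total_requests: int, server_error_5xx: int) -> float:
    """Fraction of requests served without a 5xx (numerator / denominator)."""
    good = total_requests - server_error_5xx
    return good / total_requests

# 10,000 requests in the window, 50 of them 5xx -> 99.5% availability
window_sli = availability_sli(10_000, 50)
assert abs(window_sli - 0.995) < 1e-9
```

Writing the SLI as an explicit good-events-over-valid-events ratio is what lets the error budget below be computed mechanically rather than argued about.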

Error budget policy drafting

Translates Google’s worked example error-budget policy into a repeatable playbook for tying release tempo to measured reliability: define goals (protect users from repeated SLO misses while preserving innovation incentives), spell out what happens when the rolling window consumes its budget (freeze changes except urgent defects or security work), codify outage investigation thresholds, and document escalation paths when stakeholders disagree about budget math.
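A minimal sketch of the budget math such a policy hinges on, assuming a 99.9% SLO over a rolling 28-day window; the threshold and freeze rule here are illustrative stand-ins for whatever the team's own policy codifies:

```python
# Error budget for an availability SLO: the allowed unreliability
# (1 - SLO) spread over the measurement window, plus the freeze rule
# that fires once the window's budget is spent. Values are illustrative.

def error_budget_minutes(slo: float, window_days: int = 28) -> float:
    """Total allowed downtime for the window, in minutes."""
    return (1 - slo) * window_days * 24 * 60

def should_freeze(downtime_minutes: float, slo: float = 0.999) -> bool:
    """True once measured downtime consumes the whole budget."""
    return downtime_minutes >= error_budget_minutes(slo)

budget = error_budget_minutes(0.999)   # ~40.3 minutes for a 28-day window
assert not should_freeze(20, 0.999)    # budget remaining: keep shipping
assert should_freeze(45, 0.999)        # budget blown: freeze non-urgent changes
```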

Creating and maintaining Cursor skills

Defines how to author, revise, and validate SKILL.md files so agent skills stay executable, scoped, and testable. It focuses on turning vague know-how into reusable operational instructions with clear triggers, deterministic steps, and verification checks.
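A skeletal SKILL.md following the pattern described, with a usage trigger in the description, deterministic steps, and a verification check at the end; the skill name, frontmatter fields, and steps are all illustrative, so check them against the skill format your agent actually loads:

```markdown
---
name: fix-flaky-test
description: Diagnose and stabilize a flaky test. Use when a test passes locally but fails intermittently in CI.
---

1. Reproduce: run the failing test 20 times in isolation and record the failure rate.
2. Classify the cause: timing, ordering, shared state, or an external dependency.
3. Fix the root cause; never paper over it with a bare retry.
4. Verify: rerun the test 20 times and confirm zero failures before closing.
```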

Designing with LLM structured outputs

This skill covers when and how to ask an LLM for machine-readable payloads: define a JSON Schema (or the vendor's equivalent), enable the structured-output feature your provider documents, validate responses in application code, and handle refusals or validation errors explicitly. It applies to tool-calling agents, extraction pipelines, configuration emitters, and any workflow where brittle text parsing creates production risk.
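The validate-in-application-code step can be as simple as a strict parser that fails loudly instead of trusting the model. The schema fields below are fabricated for illustration, and the stdlib `json` module stands in for whatever schema validator your stack uses:

```python
import json

# Parse a model reply and enforce the fields the schema promised, raising
# instead of silently accepting a malformed payload. REQUIRED is a toy
# schema, not any vendor's structured-output format.

REQUIRED = {"name": str, "priority": int}

def parse_task(raw: str) -> dict:
    """Validate a structured-output reply before the application uses it."""
    data = json.loads(raw)                     # raises on non-JSON refusals
    for field, expected_type in REQUIRED.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return data

task = parse_task('{"name": "rotate keys", "priority": 1}')
assert task["priority"] == 1
```

Keeping the validation separate from the request means refusals, truncation, and schema drift all surface as ordinary exceptions you can retry or log.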

Maintaining Cursor Project Rules

Follow Cursor's official Rules documentation when you want persistent Agent guidance tied to a repository. Project rules encode architecture expectations, risky-folder guardrails, or repeatable workflows; Cursor applies them via Always Apply, intelligent relevance, glob-scoped attachments, or manual @mentions. Use `.mdc` frontmatter for finer control and reference templates with `@file` instead of pasting large snippets.
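A sketch of a glob-scoped rule file using the `.mdc` frontmatter conventions the docs describe; the description, globs, rule lines, and referenced template file are placeholders, not Cursor defaults:

```markdown
---
description: API route conventions
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate request bodies with the shared schema helpers before handler logic runs.
- Never write raw SQL in route files; go through the repository layer.

@api-route-template.ts
```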
