Your AI development stack, curated

The best AI coding tools, MCP workflows, and Claude Code skills — organized for developers. From editor setup to production integrations.

Build your AI stack

Tools, MCP servers, and skills that work together — from editor to production.

AI Coding Tools
8+ tools indexed
Editor extensions, code completion, and pair-programming tools: Cursor, Windsurf, Copilot, and more.
MCP Servers
6+ MCP servers indexed
Connect your AI to GitHub, databases, browsers, search, and production infrastructure.
Claude Code Skills
6+ skills indexed
Reusable workflow modules for debugging, refactoring, code review, and planning.

MCP Servers


Qdrant MCP Server

Official Qdrant MCP server implementation that gives AI agents a semantic memory layer backed by Qdrant vector search. It exposes MCP tools for storing information and retrieving relevant context, so assistants can persist and recall facts across sessions instead of relying only on short chat history.
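A minimal client config entry for this server might look like the following. The `uvx mcp-server-qdrant` invocation and the `QDRANT_URL` / `COLLECTION_NAME` variable names are assumptions based on the official package's conventions; check its README before copying.

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "agent-memory"
      }
    }
  }
}
```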

Ollama MCP Server

Community-maintained Model Context Protocol bridge that exposes Ollama's local HTTP API—model listing, pulls, chat, and OpenAI-compatible completions—to MCP clients such as Claude Desktop and Cursor. Published on npm as `ollama-mcp-server` (maintained fork of NightTrek/Ollama-mcp); requires a running Ollama daemon reachable at `OLLAMA_HOST` (default `http://127.0.0.1:11434`).
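Based on the details above, a Claude Desktop config entry could look like this. The package name and `OLLAMA_HOST` default come from the description; the `npx -y` launch pattern is an assumption.

```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "ollama-mcp-server"],
      "env": {
        "OLLAMA_HOST": "http://127.0.0.1:11434"
      }
    }
  }
}
```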

Shopify Dev MCP

Official Shopify Dev MCP server from the Shopify AI Toolkit: connects Claude Code, Cursor, VS Code, Gemini CLI, Codex, and similar clients to Shopify developer documentation, GraphQL schemas, and validation workflows without guessing API shapes. Runs locally via npx using the @shopify/dev-mcp package; Shopify documents that no authentication is required for this developer-resources server. Part of Shopify's broader AI Toolkit alongside plugins and optional skill bundles.
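The npx setup described above can be sketched as a client config entry (the server key name and `@latest` tag are illustrative; no auth variables are needed per Shopify's docs):

```json
{
  "mcpServers": {
    "shopify-dev": {
      "command": "npx",
      "args": ["-y", "@shopify/dev-mcp@latest"]
    }
  }
}
```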

piLoci MCP

piLoci MCP is a self-hosted memory server for AI agents that exposes project-scoped memory storage and retrieval through the Model Context Protocol. Built to run on Raspberry Pi 5, it provides semantic recall, project listing, and user identity tools. Teams connect Claude Desktop, Codex, and other MCP clients to share persistent context without sending memory data to cloud services.

Webflow MCP Server

Connect any LLM to your Webflow sites via the Model Context Protocol. Manage pages, collections, CMS items, e-commerce products, forms, and users through natural language — enabling AI-driven site management and content workflows.

Cloudflare MCP

Bridges AI agents to Cloudflare Workers, KV storage, R2 object storage, and D1 databases for edge deployment inspection and management. Agents can check Workers status, inspect KV namespaces, query D1 databases, and monitor R2 buckets directly from the coding environment.

Claude Code Skills


Creating and maintaining Cursor skills

Defines how to author, revise, and validate SKILL.md files so agent skills stay executable, scoped, and testable. It focuses on turning vague know-how into reusable operational instructions with clear triggers, deterministic steps, and verification checks.
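A minimal sketch of the kind of SKILL.md this skill targets, assuming the common agent-skill convention of YAML frontmatter followed by deterministic, verifiable steps; the skill name, trigger, and steps here are hypothetical:

```markdown
---
name: flaky-test-triage
description: Use when a CI test fails intermittently and needs root-cause triage.
---

# Flaky test triage

1. Re-run the failing test 5 times in isolation; record pass/fail counts.
2. Diff the logs of a failing run against a passing run.
3. Check for shared state: global fixtures, clock dependence, network, test ordering.
4. Verify the fix by running the test 20 times before closing the issue.
```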

Designing with LLM structured outputs

This skill covers when and how to ask an LLM for machine-readable payloads: define a JSON Schema (or the vendor's equivalent), enable the structured-output feature your provider documents, validate responses in application code, and handle refusals or validation errors explicitly. It applies to tool-calling agents, extraction pipelines, configuration emitters, and any workflow where brittle text parsing creates production risk.
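The validate-in-application-code step can be sketched in Python. This is a minimal stdlib-only sketch: the `REQUIRED_FIELDS` shape is a hypothetical example, and a real pipeline would typically use a full JSON Schema validator rather than hand-rolled type checks.

```python
import json

# Hypothetical target shape for a model's structured reply; the field
# names are illustrative, not from any specific vendor schema.
REQUIRED_FIELDS = {"title": str, "priority": int, "tags": list}


def parse_structured_reply(raw: str) -> dict:
    """Parse and validate a JSON payload emitted by an LLM.

    Raises ValueError on malformed JSON or schema violations so the
    caller can retry or fall back instead of acting on bad data.
    """
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model did not return valid JSON: {exc}") from exc
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"missing required field: {field!r}")
        if not isinstance(payload[field], expected):
            raise ValueError(f"field {field!r} is not {expected.__name__}")
    return payload


ticket = parse_structured_reply(
    '{"title": "Fix login redirect", "priority": 1, "tags": ["auth"]}'
)
```

Raising on bad payloads, rather than silently patching them, is what makes refusal and validation-error handling explicit at the call site.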

Maintaining Cursor Project Rules

Follow Cursor's official Rules documentation when you want persistent Agent guidance tied to a repository. Project rules encode architecture expectations, risky-folder guardrails, or repeatable workflows; Cursor applies them via Always Apply, intelligent relevance, glob-scoped attachments, or manual @mentions. Use .mdc frontmatter for finer control and reference templates with @file instead of pasting large snippets.
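As an illustration, a glob-scoped project rule might look like the following `.mdc` file. The frontmatter keys follow Cursor's documented `description` / `globs` / `alwaysApply` fields; the rule content and the referenced `@api-route-template.ts` are hypothetical:

```markdown
---
description: Conventions for API route handlers
globs: ["src/api/**/*.ts"]
alwaysApply: false
---

- Validate request bodies before touching the database.
- Follow the pattern in @api-route-template.ts instead of pasting code inline.
```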

Structured AI meeting notes

Converts raw meeting transcripts into structured, actionable notes with decision logs, assigned action items, and key context preserved for future AI retrieval. This skill bridges the gap between what was discussed in a meeting and what AI agents need to know when acting on outcomes days or weeks later.

Incident response

Structured process for handling production incidents from detection to resolution and post-mortem. Covers severity assessment using P0-P3 grading, team coordination with a designated incident commander, communication templates for stakeholders and users, and structured post-mortem requirements to drive organizational learning from every significant outage.

Context-Aware QA Skill

Context-Aware QA is a prompting technique in which an AI model is instructed to retrieve and cite authoritative sources before answering factual questions. By combining retrieval-augmented generation (RAG) with explicit verification instructions, it reduces hallucinations in production AI systems by grounding answers in retrieved text rather than the model's parametric memory.
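The retrieve-then-cite instruction pattern can be sketched as a simple prompt builder. This is a minimal illustration; a real system would plug in an actual retrieval step and model call, and the wording of the instructions is an assumption, not a fixed standard.

```python
def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    """Assemble a QA prompt that forces the model to cite retrieved sources.

    `snippets` would come from a retrieval step (vector search, web search,
    etc.); numbering them lets the model cite claims as [1], [2], ...
    """
    sources = "\n".join(f"[{i}] {text}" for i, text in enumerate(snippets, 1))
    return (
        "Answer the question using ONLY the numbered sources below, and cite "
        "each claim as [n]. If the sources do not contain the answer, say "
        '"I don\'t know" instead of guessing.\n\n'
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )


prompt = build_grounded_prompt(
    "When was MCP announced?",
    ["Anthropic announced the Model Context Protocol in November 2024."],
)
```

The explicit "I don't know" escape hatch is what does the anti-hallucination work: it gives the model a sanctioned answer when retrieval comes back empty.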
