What happened
Anthropic is packaging repeatable behavior in Claude as Skills: prompts, allowed tools, and policy text bundled in one place. The point is to keep "how we use AI here" in versioned artifacts that survive turnover and sprint churn, much like runbooks, except the assistant can execute them once MCP and data access are wired up.
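A Skill in this sense is just a directory of versioned text that tooling can load and validate. A minimal sketch of such a loader, assuming each skill directory holds a `SKILL.md` with YAML-style frontmatter above the prompt and policy body (the layout and the `name`/`description` field names are illustrative assumptions, not a statement of Anthropic's published format):

```python
import re
from pathlib import Path

def load_skill(skill_dir: str) -> dict:
    """Parse a hypothetical SKILL.md: key-value frontmatter between
    '---' fences, followed by the prompt/policy body in Markdown."""
    text = Path(skill_dir, "SKILL.md").read_text(encoding="utf-8")
    match = re.match(r"---\n(.*?)\n---\n(.*)", text, re.DOTALL)
    if not match:
        raise ValueError("SKILL.md missing frontmatter fences")
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    for required in ("name", "description"):  # minimal contract for CI linting
        if required not in meta:
            raise ValueError(f"frontmatter missing {required!r}")
    return {"meta": meta, "body": match.group(2).strip()}
```

Validating a required-field contract at load time is what makes these artifacts lintable and diffable in CI, the same way library code is.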
Why it matters
Rollouts fail when every team writes fresh prompts and nobody can diff what changed. Skills give product, security, and engineering one document to fight over instead of scattered chat logs. Tool choice matters less than whether you maintain these modules with the same care as libraries and services.
Directory impact
This touches Notion, filesystem MCP for file-backed context, and skills such as brainstorming or executing plans. Skills only help when the underlying context is trustworthy and people actually verify outputs. Teams on Kimi or other long-context assistants should ask the same question: how do we reuse behavior without copy-paste drift?
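One way to catch copy-paste drift mechanically, again assuming a hypothetical layout where each skill ships as a `SKILL.md` file: fingerprint every copy and compare across repos or teams.

```python
import hashlib
from pathlib import Path

def skill_fingerprints(root: str) -> dict[str, str]:
    """Hash every SKILL.md under root so duplicated copies can be
    compared; identical content yields identical digests."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("SKILL.md")
    }

def drifted(a: dict[str, str], b: dict[str, str]) -> list[str]:
    """Skills present in both trees whose content no longer matches."""
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
```

Run in CI across the repos that vendor a skill, this turns "did someone fork the prompt?" from an archaeology exercise into a failing check.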
What to watch next
Watch for open interchange formats versus closed platforms. Shared formats would let platform teams swap models under one governance layer. If formats stay proprietary, lock-in fights will heat up as Skills become the integration surface for AI behavior.