The System Prompts Leaks repository serves as a curated collection of system prompts, system messages, and developer messages from major AI providers. The repository's primary function is to aggregate and preserve these prompts via a community-driven Pull Request model, providing transparency into how different AI systems are configured and instructed to behave.
This document provides an overview of the repository's structure, content organization, and contribution model. For detailed information about specific AI systems, see the dedicated pages referenced throughout this overview.
Sources: readme.md 1-13
The repository explicitly defines its purpose as a "Collection of system prompts/system messages/developer messages" with an open invitation for community contributions. This crowd-sourced approach enables rapid documentation of system prompt updates across multiple AI platforms.
Sources: readme.md 4-8
The repository organizes content into two primary provider categories, with additional utility documentation.
Sources: readme.md 1-13, high-level architecture diagrams
The repository contains three distinct content categories, each serving different documentation purposes:
| Category | Provider | Content Type | Primary Focus |
|---|---|---|---|
| Claude Systems | Anthropic | Operational system prompts | Tool-centric architecture with web/Google Workspace integration |
| GPT-5 Systems | OpenAI | Personality and mode configurations | Personality-centric architecture with specialized operational modes |
| Utility Toolkits | Referenced by Claude | External capability documentation | PowerPoint and PDF processing workflows |
Sources: High-level architecture diagrams
The two providers demonstrate fundamentally different architectural priorities:
Anthropic (Claude): Implements a tool-centric pipeline where safety precedes query complexity analysis. The system categorizes requests into four complexity levels (never search, offer search, single search, research) to determine tool invocation patterns. External data integration—web searches, Google Workspace access, file processing—forms the core capability set, with mandatory citation requirements for all external data sources.
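As a rough sketch, the four complexity levels can be modeled as a routing function that decides how many tool calls to plan and whether citations are required; the category names come from the prompt described above, while the function, its fields, and the call limits below are hypothetical illustrations rather than actual prompt logic.

```python
from enum import Enum

class QueryComplexity(Enum):
    """The four complexity levels named in the Claude core prompt."""
    NEVER_SEARCH = "never_search"    # answerable from model knowledge alone
    OFFER_SEARCH = "offer_search"    # answer directly, offer to search for freshness
    SINGLE_SEARCH = "single_search"  # one tool call is sufficient
    RESEARCH = "research"            # multi-step tool use with synthesis

def plan_tool_calls(complexity: QueryComplexity) -> dict:
    """Hypothetical mapping from complexity level to a tool-invocation plan."""
    plans = {
        QueryComplexity.NEVER_SEARCH: {"max_tool_calls": 0, "cite_sources": False},
        QueryComplexity.OFFER_SEARCH: {"max_tool_calls": 0, "cite_sources": False, "offer_search": True},
        QueryComplexity.SINGLE_SEARCH: {"max_tool_calls": 1, "cite_sources": True},
        QueryComplexity.RESEARCH: {"max_tool_calls": 5, "cite_sources": True},  # call limit is illustrative
    }
    return plans[complexity]

print(plan_tool_calls(QueryComplexity.RESEARCH))
```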
OpenAI (GPT-5): Implements a personality-centric pipeline where personality traits are applied before safety evaluation. The system routes through specialized modes (Thinking, Study, Regular), each with distinct operational constraints. The Thinking mode's dual-channel architecture (analysis vs commentary) is unique, enabling internal reasoning separate from user-visible actions.
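The analysis/commentary split can be pictured as messages tagged with a channel, as in the minimal sketch below; the channel names follow the description above, but the data structure itself is hypothetical and not the actual message format.

```python
from dataclasses import dataclass
from typing import Literal

Channel = Literal["analysis", "commentary"]

@dataclass
class ThinkingModeMessage:
    """Hypothetical representation of a dual-channel message in GPT-5 Thinking mode."""
    channel: Channel  # "analysis" = internal reasoning, "commentary" = user-visible actions
    content: str

turn = [
    ThinkingModeMessage("analysis", "The user wants a summary; check the uploaded file first."),
    ThinkingModeMessage("commentary", "Opening the uploaded report to extract the executive summary..."),
]

# In this sketch, only commentary-channel content would surface to the user.
visible = [m.content for m in turn if m.channel == "commentary"]
print(visible)
```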
Sources: High-level architecture diagrams
The repository documents three Claude variants, each representing different deployment contexts or version iterations:
- **Claude Core System** - Comprehensive operational guidelines, including 14+ tool definitions, query complexity categorization, citation requirements, and a multi-layered safety architecture. See Claude Core System Architecture.
- **Claude.ai System Message** - FAQ-formatted message covering character encoding, function availability, the user preferences/styles system, and behavioral guidelines. See Claude.ai System Message.
- **Claude Sonnet 4** - Latest version variant with specific configuration differences from core Claude. See Claude Sonnet 4 Configuration.
Sources: High-level architecture diagrams
The repository documents five GPT-5 variants, organized by personality types and operational modes.
See Personality System Framework for detailed documentation.
Sources: High-level architecture diagrams
The repository includes documentation for two external capability toolkits referenced by Claude:
- **PowerPoint Toolkit** - Text extraction (markitdown), XML access (unpack.py), creation workflows (html2pptx), template editing (inventory.py, replace.py). See PowerPoint Processing Toolkit.
- **PDF Toolkit** - Basic operations (pypdf), text/table extraction (pdfplumber), creation (reportlab), OCR capabilities (pytesseract). See PDF Processing Toolkit.
These toolkits are not executable by Claude but serve as reference documentation for generating user instructions.
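For orientation, the sketch below shows what the basic extraction steps look like with the markitdown and pdfplumber libraries named above; the file paths are placeholders, and the repository's helper scripts (unpack.py, inventory.py, replace.py, html2pptx) are intentionally not reproduced here.

```python
# Minimal sketch of the text-extraction steps the toolkit docs describe.
# Requires: pip install markitdown pdfplumber
from markitdown import MarkItDown
import pdfplumber

# PowerPoint: convert a .pptx to Markdown text with markitdown.
pptx_text = MarkItDown().convert("slides.pptx").text_content  # "slides.pptx" is a placeholder path

# PDF: pull text and tables page by page with pdfplumber.
with pdfplumber.open("report.pdf") as pdf:  # "report.pdf" is a placeholder path
    pdf_text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    tables = [t for page in pdf.pages for t in page.extract_tables()]

print(pptx_text[:200], pdf_text[:200], len(tables))
```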
Sources: High-level architecture diagrams
The repository operates on an open contribution model, explicitly inviting community submissions via Pull Requests. This allows new and revised prompts to be documented shortly after providers update their systems.
The contribution guidelines are minimal, focusing on collecting authentic system prompts rather than analysis or commentary.
Sources: readme.md 8
The repository includes a Star History chart tracking community engagement over time, demonstrating the repository's adoption within the AI research and development community.
Sources: readme.md 10-12
System prompts serve as operational instructions that define AI assistant behavior across multiple dimensions:
| Dimension | Purpose | Examples |
|---|---|---|
| Tool Access | Define which external tools the assistant can invoke | `web_search`, `gmail.search`, `python.exec` |
| Safety Constraints | Establish content policies and forbidden actions | Copyright limits, face blindness, harmful content blocks |
| Personality Traits | Configure conversational style and tone | Cynic's sarcasm, Listener's reflection, Robot's efficiency |
| Citation Rules | Specify when and how to attribute sources | Mandatory web citations, no internal tool citations |
| Mode Logic | Define specialized operational states | Thinking's dual channels, Study's pedagogical constraints |
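Purely as an illustration, these dimensions could be gathered into one structured configuration; the field names and values below are hypothetical, drawn from the examples in the table rather than from any provider's real prompt format.

```python
# Hypothetical configuration covering the five dimensions above (illustrative only).
system_prompt_config = {
    "tool_access": ["web_search", "gmail.search", "python.exec"],
    "safety_constraints": ["copyright_limits", "face_blindness", "harmful_content_blocks"],
    "personality": {"style": "cynic", "tone": "sarcastic"},
    "citation_rules": {"web_results": "mandatory", "internal_tools": "never"},
    "mode_logic": {"thinking": "dual_channel", "study": "pedagogical_constraints"},
}

print(system_prompt_config["citation_rules"])
```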
Sources: High-level architecture diagrams
Both providers implement multi-layered architectures where system prompts control tool access, safety constraints, personality traits, citation rules, and mode logic.
The key architectural difference: Anthropic applies safety before routing, while OpenAI applies personality before safety.
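One compact way to see the ordering difference is to sketch both pipelines as function composition; the stage names mirror the diagrams, but the functions themselves are placeholders, not actual provider code.

```python
# Hypothetical pipeline sketches; each stage is a placeholder that just records its name.
def stage(name):
    return lambda request: {**request, "stages": request.get("stages", []) + [name]}

apply_safety, route_by_complexity, apply_personality, route_by_mode = (
    stage("safety"), stage("complexity_routing"), stage("personality"), stage("mode_routing")
)

def anthropic_pipeline(request):
    # Claude: safety checks come before query-complexity routing.
    return route_by_complexity(apply_safety(request))

def openai_pipeline(request):
    # GPT-5: personality is applied before safety, then mode routing.
    return route_by_mode(apply_safety(apply_personality(request)))

print(anthropic_pipeline({"text": "example"})["stages"])  # ['safety', 'complexity_routing']
print(openai_pipeline({"text": "example"})["stages"])     # ['personality', 'safety', 'mode_routing']
```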
Sources: High-level architecture diagrams
For in-depth analysis of specific systems and comparative architecture studies, see the individual pages listed in the wiki's table of contents.
Sources: Wiki table of contents structure