Staircase AI MCP Security and Compliance
This article explains how Staircase AI MCP handles authentication, data flow, and compliance so that IT, Security, and Compliance teams can make informed decisions about enabling MCP.
Overview
Staircase AI Model Context Protocol (MCP) allows users to query Staircase account intelligence directly from supported AI assistants, for example, Claude, ChatGPT, and Gemini. Instead of switching between tools, users can ask questions about accounts, stakeholders, and topics and receive responses backed by Staircase AI data.
Key Benefits
- Improved workflow efficiency: Customer-facing teams access Staircase insights in the tools they already use.
- Better meeting preparation: AI assistants can synthesize account context into briefings and summaries.
- Faster research: Cross-account analysis becomes conversational instead of manual.
Security Model Summary
| Aspect | Implementation |
|---|---|
| Authentication | Cloudflare Access using OAuth 2.0 (Google or Microsoft identity providers) |
| Authorization | Per-user; respects existing Staircase permissions |
| Data flow | Direct connection between the user’s LLM tool and the Staircase MCP server |
| LLM provider data usage | No training on customer data by default (controlled by user and provider settings) |
Authentication Architecture
Staircase AI MCP authenticates users through Cloudflare Access using OAuth 2.0.
This authentication flow is separate from Staircase AI’s in-app login. Cloudflare Access provides the OAuth layer specifically for MCP connections.
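For readers who want to understand what the Cloudflare Access layer provides, the following Python sketch shows how a service protected by Cloudflare Access typically verifies the JWT that Access issues (signature, audience, and expiry). It is illustrative only and not Staircase's implementation; the team domain and application audience (AUD) tag below are placeholders.

```python
# Requires: pip install pyjwt[crypto]
import jwt

# Placeholder values; the real team domain and application AUD tag belong to the
# Cloudflare Access application that fronts the MCP server.
TEAM_DOMAIN = "example-team.cloudflareaccess.com"
APP_AUDIENCE = "example-application-aud-tag"

# Cloudflare Access publishes its signing keys at /cdn-cgi/access/certs.
jwks_client = jwt.PyJWKClient(f"https://{TEAM_DOMAIN}/cdn-cgi/access/certs")

def verify_access_jwt(token: str) -> dict:
    """Verify the signature, audience, and expiry of a Cloudflare Access JWT."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=APP_AUDIENCE,
    )
```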
Supported Identity Providers
The following table lists the supported and unsupported identity providers:
| Provider | Supported |
|---|---|
| Google | Yes |
| Microsoft (Azure AD) | Yes |
| Okta | No |
| SAML SSO | No |
Users can authenticate using the same Google or Microsoft account configured for their organization’s SSO.
For more information on how to connect LLMs to the Staircase AI MCP server, refer to the Connect Staircase AI to LLMs Using MCP article.
Token Management
When a user configures MCP in their LLM tool (such as Claude Desktop or ChatGPT), the first query triggers an authentication prompt. The user signs in using Google or Microsoft through Cloudflare Access, which then issues a JSON Web Token (JWT) access token. The token is stored locally on the user’s device in encrypted form and is reused for subsequent queries until it expires.
The following table lists the token properties and their values:
| Property | Value |
|---|---|
| Token format | JSON Web Token (JWT) |
| Local storage | Encrypted |
| Access token lifetime | 12 hours |
| Refresh token | Included in responses |
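As an illustration of how a client can decide whether a cached access token is still usable before sending a query, the following Python sketch decodes a JWT payload (without verifying the signature) and compares its `exp` claim to the current time. This is a generic JWT check, not Staircase-specific code.

```python
import base64
import json
import time

def jwt_is_expired(token: str, leeway_seconds: int = 60) -> bool:
    """Decode the JWT payload (no signature check) and compare its exp claim to now."""
    payload_b64 = token.split(".")[1]
    # Restore base64url padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"] <= time.time() + leeway_seconds
```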
Data Flow and Boundaries
Staircase MCP uses a direct, authenticated connection between the user’s device and Staircase infrastructure.

The following table describes the data exchanged and the direction in which it flows:
| Data | Source → Destination | Notes |
|---|---|---|
| User query | LLM tool → Staircase MCP server | User-entered prompt |
| Staircase response | Staircase → LLM tool | Structured account intelligence and evidence |
- Raw customer communications never leave Staircase; only summarized intelligence is returned to the LLM tool.
- PII-anonymized data remains protected when PII anonymization is enabled for the organization.
- Cross-organization data access is not possible; all queries are scoped to the authenticated user’s organization.
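The following Python sketch approximates the request and response exchange described above. The endpoint URL, header, and payload fields are hypothetical; the actual MCP server address, authentication scheme, and response schema come from your organization's Staircase configuration.

```python
import requests

# Hypothetical endpoint; in practice the user's LLM tool manages this connection.
MCP_URL = "https://mcp.example.staircase.ai/query"

def query_staircase(access_token: str, prompt: str) -> dict:
    """Send a user query over the authenticated connection and return structured intelligence."""
    response = requests.post(
        MCP_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json={"query": prompt},
        timeout=30,
    )
    response.raise_for_status()
    # Structured account intelligence and evidence, not raw customer communications.
    return response.json()
```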
LLM Provider Data Handling
Data handling by LLM providers (Anthropic, OpenAI, Google) is governed by:
- The user’s LLM tool settings: Each provider offers configurable privacy controls.
- The organization’s LLM plan: Enterprise plans typically include enhanced privacy protections.
Staircase AI returns structured data to the user’s local MCP client. It does not send data directly to LLM providers. Gainsight recommends reviewing LLM provider data policies before enabling MCP.
Provider defaults (verify with provider)
| Provider | Training on user data | Enterprise option |
|---|---|---|
| Anthropic (Claude) | No (by default) | Team / Enterprise |
| OpenAI (ChatGPT) | Opt-out available | Team / Enterprise |
| Google (Gemini) | Varies by product | Workspace integrations |
Security Controls
Each user authenticates individually, and user identity is verified for every request. There are no shared organization-level tokens or API keys, so sessions cannot be transferred between users.
Staircase MCP implements protections against prompt injection attacks, including:
- Input sanitization
- Output validation
- Guardrails against instruction override attempts
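To make these terms concrete, the following Python sketch shows simplified versions of an input-sanitization check and an output-validation step. It is illustrative only and does not reflect Staircase's actual guardrail implementation.

```python
import re

# Illustrative patterns only; real guardrails are more extensive.
INSTRUCTION_OVERRIDE_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
]

def sanitize_input(user_query: str) -> str:
    """Reject queries that look like instruction-override attempts."""
    for pattern in INSTRUCTION_OVERRIDE_PATTERNS:
        if re.search(pattern, user_query, flags=re.IGNORECASE):
            raise ValueError("Query rejected: possible prompt injection")
    return user_query.strip()

def validate_output(payload: dict, allowed_fields: set) -> dict:
    """Return only the structured fields the response schema expects."""
    return {key: value for key, value in payload.items() if key in allowed_fields}
```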
Permission Model
MCP queries respect existing Staircase permissions. The following table lists each Staircase setting and the corresponding MCP behavior:
| Staircase setting | MCP behavior |
|---|---|
| MCP disabled for org | User cannot connect |
| MCP enabled, user not authorized | Connection fails |
| MCP enabled, user authorized | User can query permitted Staircase data |
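Expressed as logic, the table above reduces to a two-condition gate: both the organization-level MCP setting and the per-user authorization must allow access. The following Python sketch paraphrases that behavior and is not Staircase's actual authorization code.

```python
def check_mcp_access(org_mcp_enabled: bool, user_authorized: bool) -> None:
    """Gate MCP queries on the org-level toggle and per-user authorization,
    mirroring the rows of the table above."""
    if not org_mcp_enabled:
        raise PermissionError("MCP is disabled for this organization; the user cannot connect.")
    if not user_authorized:
        raise PermissionError("MCP is enabled, but this user is not authorized; the connection fails.")
    # Otherwise the user can query the Staircase data their permissions allow.
```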
Compliance and Certifications
Staircase AI holds SOC 2 Type II certification and is compliant with the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
MCP operates within Staircase’s existing compliance framework.
GDPR Considerations
- MCP data is subject to existing data subject request (DSR) processes.
- Data residency follows existing Staircase configurations.
- User authentication constitutes consent for MCP data access.