It is Monday morning. Anna, CTO of a mid-sized logistics company, has just finished another frustrating meeting with her development team. For three months, they have been trying to integrate their new AI assistant with internal systems - the customer database, fleet management system, invoicing platform, and dozens of other tools. Each integration requires a separate connector, its own authorization logic, and constant maintenance. When one system changes its API, the integrations break. Anna counts in her head: four developers over three months, hundreds of hours of debugging, and the AI assistant can still only answer basic questions because it lacks access to actual business data. “There must be a better way,” she thinks, browsing the morning tech news.
And today, November 25, 2024, Anthropic announced something that could change everything: Model Context Protocol (MCP) - an open standard for communication between AI systems and external data sources and tools. This article provides a deep analysis of this groundbreaking announcement: what MCP is, how it works, why it has the potential to become a universal standard, and how enterprises can prepare for the era of truly integrated artificial intelligence. For technology leaders like Anna, this could be the moment when AI integration stops being a nightmare and becomes a competitive advantage.
What is Model Context Protocol and why did Anthropic create it?
“42% of enterprise-scale companies have actively deployed AI in their business, while another 40% are exploring or experimenting with AI.”
— IBM, Global AI Adoption Index 2024
Model Context Protocol (MCP) is an open communication standard that defines a universal way to connect AI models with external data sources, tools, and services. It can be compared to USB for the world of artificial intelligence - instead of building a separate cable for each device, we have one universal standard that works everywhere.
Anthropic, the creator of Claude models, designed MCP as a response to a fundamental problem in contemporary enterprise AI deployments. Even the most advanced language models are isolated from actual business context. They can generate excellent responses based on their training knowledge, but they lack access to current company data - documents, databases, calendars, CRM systems, or developer tools. This isolation drastically limits the practical usefulness of AI in corporate environments.
Previous solutions to this problem relied on building custom integrations for each AI model + data source combination. With M models and N data sources, a company had to maintain M × N separate connectors. Any update on either side required rebuilding connections. This approach is inefficient, expensive, and inflexible.
MCP solves this problem by introducing an abstraction layer. Instead of M × N integrations, only M + N protocol implementations are needed. Each AI model implements MCP once, each data source implements MCP once, and all combinations work automatically. This is a fundamental architectural change that dramatically reduces integration complexity as the number of models and systems grows.
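The arithmetic behind this claim is easy to check. A quick sketch, with purely illustrative numbers:

```python
def custom_connectors(models: int, sources: int) -> int:
    """Point-to-point integration: one connector per model/source pair."""
    return models * sources

def mcp_implementations(models: int, sources: int) -> int:
    """With a shared protocol: each side implements MCP exactly once."""
    return models + sources

# For a hypothetical company with 4 AI models and 25 internal systems:
print(custom_connectors(4, 25))   # 100 connectors to build and maintain
print(mcp_implementations(4, 25)) # 29 protocol implementations
```

The gap widens as either side grows: adding a 26th system costs one new MCP server, not four new connectors.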
A key decision by Anthropic was to release MCP as an open standard. The protocol is published under an open source license, and the specification is available to all AI providers, tool creators, and enterprises. This strategic move aims to create an ecosystem - the more participants adopt the standard, the greater the value for everyone.
How does MCP architecture work and what are its key components?
The Model Context Protocol architecture is based on a client-server model with clearly defined roles and communication protocols. Understanding this architecture is crucial for anyone planning to deploy MCP in an enterprise environment.
MCP Host is the application or environment that runs the AI model and manages connections to MCP servers. This could be a desktop application (like Claude Desktop), an IDE environment, a chatbot platform, or any other application utilizing language models. The host is responsible for initiating connections, managing sessions, and passing requests between the AI model and MCP servers.
MCP Server is a lightweight service that exposes a specific data source or tool according to the MCP standard. A server can provide access to a database, file system, external service API, developer tool, or any other resource. A key feature of MCP servers is their modularity - each server handles one specific data source, which facilitates development, testing, and maintenance.
MCP Client is a component built into the host that implements the communication protocol with servers. The client manages the connection lifecycle, handles errors, and translates AI model requests into MCP protocol calls.
The protocol defines three basic types of resources that servers can expose:
Resources are data that the AI model can read - documents, database records, configuration files, system logs. Resources are identified by URIs and can be fetched on demand or streamed in real-time.
Tools are actions that the AI model can perform - sending an email, creating a task in a project management system, executing an SQL query, running a script. Tools have defined input and output schemas, allowing the AI model to invoke them correctly.
Prompts are predefined interaction templates that a server can expose - standard query formats, report templates, analytical procedures. Prompts facilitate standardization of interactions with specific data sources.
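The three primitives can be modeled in a few lines. This is a stdlib-only toy, not the MCP SDK; all names, URIs, and schemas below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    uri: str           # resources are addressed by URI
    description: str

@dataclass
class Tool:
    name: str
    description: str
    input_schema: dict  # JSON Schema describing expected arguments

@dataclass
class Prompt:
    name: str
    template: str       # predefined interaction template

# What a hypothetical server might declare to the host:
catalog = {
    "resources": [Resource("crm://customers/recent",
                           "Recently updated customer records")],
    "tools": [Tool("send_email", "Send an email on the user's behalf",
                   {"type": "object",
                    "properties": {"to": {"type": "string"},
                                   "subject": {"type": "string"},
                                   "body": {"type": "string"}},
                    "required": ["to", "subject", "body"]})],
    "prompts": [Prompt("quarterly_report",
                       "Summarize sales for {quarter} by region.")],
}
```

The defined input schema is what lets the model construct a valid `send_email` call without any prior knowledge of the mail system behind it.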
Communication in MCP occurs via JSON-RPC 2.0 using various transport layers. The protocol supports communication through standard input/output (stdio) for local servers and through HTTP with Server-Sent Events (SSE) for remote servers. This flexibility allows MCP deployment in both local environments and distributed cloud architectures.
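On the wire, every message is plain JSON-RPC 2.0, regardless of transport. A minimal framing sketch - the `tools/call` method name follows the protocol, while the tool name and arguments are invented:

```python
import json

def make_request(request_id: int, method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 request as it would travel over stdio or HTTP."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

raw = make_request(1, "tools/call",
                   {"name": "query_database",
                    "arguments": {"sql": "SELECT * FROM orders LIMIT 5"}})

msg = json.loads(raw)
print(msg["method"])          # tools/call
print(msg["params"]["name"])  # query_database
```

Because the envelope is identical for every server, a host needs exactly one serializer, not one per integration.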
Why is MCP a breakthrough compared to previous AI integration methods?
To appreciate the significance of Model Context Protocol, it is worth examining previous methods of integrating AI systems with enterprise data and their limitations.
Retrieval-Augmented Generation (RAG) has become the de facto standard for contextualizing AI models. In the RAG approach, documents are indexed in a vector database, and during a query, the system retrieves the most relevant fragments and appends them to the prompt. RAG works well for static document corpora but has significant limitations. First, it requires continuous synchronization of the index with data sources. Second, it does not handle structured data well (SQL databases, APIs). Third, it does not allow for performing actions - the model can only read, not write.
Function Calling introduced by OpenAI allows AI models to invoke predefined functions. This is a step in the right direction, but the implementation is specific to each AI provider. Functions must be defined in the application code, which means adding a new data source requires changes to the application and redeployment.
Custom API Integrations is an approach where development teams build dedicated connectors for each AI model + data source combination. This is the most flexible but also the most expensive solution. It requires specialized knowledge of each system, continuous maintenance, and does not scale well.
MCP addresses the limitations of all these approaches by standardizing the communication layer. The AI model does not need to know how a specific database or API works - it communicates through the universal MCP protocol. The MCP server translates standard requests into specific calls to the target system.
Key advantages of MCP include separation of concerns (the AI model does not need to implement integration logic), composability (any MCP servers can be combined in a single session), standardization (one protocol for all integrations), security (centralized permission management through the host), and ecosystem (a growing library of ready-made MCP servers).
According to Anthropic’s internal data, development teams that migrated from custom integrations to MCP reported an average 70% reduction in time needed to add a new data source to their AI system.
What is Agentic AI and how does MCP enable building autonomous agents?
Agentic AI is a paradigm in which AI systems not only answer questions but autonomously take actions to achieve complex goals. An AI agent can plan sequences of steps, use tools, monitor progress, and adapt strategy based on results. This is a fundamental shift from traditional chatbots or AI assistants that operate reactively - responding to individual queries without the ability to execute multi-step tasks.
The Agentic AI concept is based on several key capabilities. First is planning - the agent can decompose a complex goal into a sequence of smaller steps. Second is tool use - the agent can invoke external functions, APIs, and systems to execute individual steps. Third is memory - the agent tracks the context of the entire task and results of previous steps. Fourth is adaptation - the agent can modify the plan when encountering obstacles or unexpected results.
Model Context Protocol is a key enabler for Agentic AI because it provides agents with standardized access to the tools and data needed for autonomous operation. Without MCP, each agent would need built-in knowledge of every system it was to interact with. With MCP, an agent can dynamically discover available resources and tools, then use them to achieve its goals. This is a fundamental difference - the agent becomes independent of specific system implementations.
A typical AI agent workflow with MCP looks as follows. The user defines a high-level goal, for example, “Prepare a sales report for the last quarter and send it to the management team.” The agent analyzes the goal and decomposes it into subtasks: retrieve sales data, aggregate and analyze, generate report, identify recipients, send email.
For each subtask, the agent queries available MCP servers for relevant resources and tools. The MCP server for the CRM system exposes a tool for retrieving sales data. The MCP server for the company directory allows searching for management team email addresses. The MCP server for the email system exposes a tool for sending emails. The agent automatically understands the data schemas and parameters of each tool thanks to the standardized description in the MCP protocol.
The agent executes the sequence of steps using the discovered tools, monitors the results of each step, and adapts the plan as needed. For example, if sales data is incomplete, the agent can automatically retrieve missing data from an alternative source. If the email system returns an error, the agent can try an alternative communication channel or inform the user about the problem.
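The execute-monitor-adapt loop above can be sketched in a few lines. The tools and the single-step plan are stand-ins; a real agent would discover its tools dynamically through MCP servers:

```python
# Toy agent loop: execute each step, fall back to an alternative source
# when a tool fails instead of aborting the whole plan.
def fetch_primary(_context):
    raise RuntimeError("sales data incomplete")  # simulated failure

def fetch_backup(_context):
    return {"q3_sales": 1_250_000}               # alternative source succeeds

PLAN = [
    {"step": "retrieve sales data",
     "tool": fetch_primary, "fallback": fetch_backup},
]

def run(plan):
    context, log = {}, []
    for step in plan:
        try:
            result = step["tool"](context)
            log.append((step["step"], "ok"))
        except Exception:
            # adapt: try the alternative source instead of giving up
            result = step["fallback"](context)
            log.append((step["step"], "recovered via fallback"))
        context[step["step"]] = result           # memory for later steps
    return context, log

context, log = run(PLAN)
print(log)  # [('retrieve sales data', 'recovered via fallback')]
```

The `context` dict plays the role of agent memory: each step's result stays available to every step that follows.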
It is worth highlighting the difference between first and second generation agents. First generation agents have pre-programmed action paths - a sequence of steps defined by the programmer. Second generation agents, supported by MCP, operate more flexibly - dynamically planning and adapting their actions based on available tools and results of previous steps. This flexibility is crucial for handling unforeseen situations in real business environments.
This architecture enables building agents that are simultaneously powerful (they have access to many systems) and secure (all actions go through a controlled protocol with permission management). The agent does not have direct access to systems - all interactions are mediated by MCP servers, which allows implementation of security policies, limits, and auditing.
What are practical MCP use cases in an enterprise environment?
Model Context Protocol opens a broad spectrum of applications in enterprises. Below are the most important use cases already being implemented by early adopters of the protocol. Each of these cases demonstrates the unique value of standardized AI integration with business systems.
Intelligent business data analysis is a use case where the AI agent has access to multiple data sources through MCP servers - data warehouses, CRM systems, e-commerce platforms, analytical tools. Users can ask questions in natural language (“Which products had the highest margin in the northern region in Q3?”), and the agent automatically identifies relevant sources, retrieves data, performs analysis, and presents results. This eliminates the need for SQL knowledge, navigating between systems, or manually combining data from different sources. In practice, this means a business analyst who previously needed hours to prepare a report can get an answer in minutes. The agent not only retrieves data but can also create visualizations, identify trends, and formulate recommendations.
DevOps process automation leverages MCP servers for code repositories, CI/CD systems, monitoring tools, and cloud platforms. The AI agent can automatically analyze error logs, identify problematic commits, propose fixes, create pull requests, and monitor deployments. According to pilot deployment data, this approach reduces mean time to resolution by 40%. In an on-call scenario, the agent can automatically conduct initial diagnosis of a production problem before an engineer is woken up in the middle of the night - often the problem can be resolved autonomously or at least precisely localized.
Next-generation customer support combines MCP servers for the knowledge base, ticket history, ticketing system, and product catalog. The AI agent can comprehensively handle customer inquiries - from simple questions about order status, through solving technical problems, to escalation to the appropriate specialist. The agent has full context of customer history and can take actions (refunds, order changes) without passing the case to a human. Importantly, the agent can combine information from multiple systems in a way that would be impossible for a traditional chatbot - for example, verify delivery status in the logistics system, check payment history in the financial system, and propose personalized compensation based on the customer profile in CRM.
Documentation and knowledge management uses MCP servers for document management systems, company wikis, code repositories, and communication platforms. The AI agent can search for information scattered across many systems, identify outdated documentation, suggest updates, and automatically generate summaries for new employees. In large organizations where institutional knowledge is scattered across hundreds of systems and thousands of documents, such an agent becomes an invaluable guide. A new employee can ask “What does the approval process for orders over 100,000 PLN look like?” and receive a precise answer based on current documentation and actual flows in systems.
Compliance and audit is a case particularly important for regulated industries such as finance, pharmaceuticals, or energy. MCP servers for financial systems, access logs, and transaction registries allow the AI agent to continuously monitor compliance, identify anomalies, and generate audit reports. The agent can automatically flag potential violations and initiate clarification procedures. In practice, this means transitioning from periodic, manual audits to continuous real-time monitoring. The agent can detect an unusual transaction pattern minutes after it occurs, not weeks later during a quarterly review.
HR process automation is another promising area. MCP servers for the HR system, recruitment platform, training system, and company calendar enable the AI agent to comprehensively support personnel processes. The agent can automatically plan new employee onboarding by booking training, assigning mentors, and preparing personalized development paths. It can also analyze absence and turnover data, identifying potential problems before they become critical.
What does security and permission management look like in MCP?
Security is a fundamental aspect of any integration system in an enterprise environment. For many organizations, security concerns are the main barrier to AI solution adoption. Model Context Protocol was designed with security in mind from the ground up, implementing a multi-layered protection model that addresses these concerns.
Least privilege architecture is the basic principle of MCP. Each MCP server declares exactly what resources and tools it exposes, and the host decides which of them are available to the AI model in a given session. The AI model has no default access to anything - each resource must be explicitly made available. This principle means that even if an AI agent “wanted” to access sensitive data, it cannot do so if the appropriate server has not been configured and approved.
Host-level control means that the MCP host (the application running the AI model) is responsible for managing connections to servers and enforcing security policies. The host can implement user approval for sensitive operations (e.g., the agent asks for confirmation before sending an email), time and quantity limits (limits on the number of database queries), logging of all interactions (full audit trail), and integration with enterprise IAM systems (authentication through Active Directory, SSO).
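Two of these host responsibilities - quantity limits and an audit trail - are simple to sketch. The class and thresholds below are invented for illustration, not part of any MCP implementation:

```python
# Toy host-side guardrails: a call quota plus an audit log wrapped
# around every tool invocation the AI model requests.
class GuardedHost:
    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.calls = 0
        self.audit_log = []

    def call_tool(self, tool_name: str, handler, **kwargs):
        if self.calls >= self.max_calls:
            self.audit_log.append(("denied", tool_name))
            raise PermissionError(f"quota of {self.max_calls} calls exhausted")
        self.calls += 1
        result = handler(**kwargs)
        self.audit_log.append(("allowed", tool_name))
        return result

host = GuardedHost(max_calls=2)
host.call_tool("sql_query", lambda **kw: "42 rows", sql="SELECT 1")
host.call_tool("sql_query", lambda **kw: "1 row", sql="SELECT 2")
# A third call would raise PermissionError and be logged as denied.
```

User-approval prompts for sensitive operations fit the same pattern: a check before `handler(**kwargs)` that blocks until a human confirms.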
Server isolation is another security layer. Each MCP server runs as a separate process with its own permissions. Compromise of one server does not grant access to others. Servers can be run in containers or sandboxes for additional isolation. In practice, this means the MCP server for the CRM system has no access to the financial database - even if it runs on the same machine.
Transport security is ensured through encrypted communication. For remote MCP servers, communication occurs over HTTPS with TLS 1.3. The protocol also supports server authentication through X.509 certificates and JWT tokens, allowing verification of server identity before establishing a connection.
Granular access control allows defining policies at the level of individual resources and tools. An administrator can configure that a given user has access to read sales data but cannot modify it. They can also restrict access temporally (only during business hours) or geographically (only from the corporate network).
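A policy evaluator of this kind fits in a dozen lines. The policy shape, URI, and field names below are made up; the point is the default-deny evaluation order:

```python
from datetime import time

# Hypothetical policy: sales data is readable, never writable,
# and only during business hours.
POLICY = {
    "crm://sales": {"actions": {"read"}, "hours": (time(8), time(18))},
}

def is_allowed(policy: dict, uri: str, action: str, at: time) -> bool:
    rule = policy.get(uri)
    if rule is None:
        return False                  # default deny: unknown resource
    if action not in rule["actions"]:
        return False                  # e.g. writes rejected
    start, end = rule["hours"]
    return start <= at <= end         # temporal restriction

print(is_allowed(POLICY, "crm://sales", "read", time(10, 30)))   # True
print(is_allowed(POLICY, "crm://sales", "write", time(10, 30)))  # False
print(is_allowed(POLICY, "crm://sales", "read", time(22, 0)))    # False
```

Network-based geographic restrictions would slot in as one more predicate in the same chain.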
Auditability is built into the protocol. All requests and responses can be logged, allowing full reconstruction of AI agent actions. This is crucial for compliance and debugging. In case of a security incident or doubts about agent actions, it is possible to reconstruct exactly what queries were sent and what data was returned.
For enterprises with stringent security requirements, MCP offers the ability to deploy completely on-premise, without communication with external services. MCP servers can run in private infrastructure, and data never leaves the controlled environment. The AI model can be run locally or communicate with the cloud only for the model itself, without transmitting business data.
It is worth emphasizing that MCP security is a function of implementation, not just specification. Enterprises deploying MCP should conduct their own risk analysis and implement appropriate controls tailored to their regulatory and business context. The protocol provides mechanisms, but it is the organization that decides how strictly to configure them.
How to start MCP deployment in an organization step by step?
Deploying Model Context Protocol in an enterprise requires a systematic approach that considers both technical and organizational aspects. Below is a practical, phased implementation path.
Phase 1: Assessment and Planning (2-4 weeks)
Start with an inventory of existing systems and data sources that could be exposed through MCP. Identify use cases with the highest business value - places where AI access to data could bring the greatest benefits. Assess technical readiness - do systems have APIs that can be wrapped with an MCP server? Define security and compliance requirements. At the end of this phase, you should have a prioritized list of integrations and a preliminary solution architecture.
Phase 2: Proof of Concept (4-6 weeks)
Choose one or two low-risk but visibly valuable use cases. Implement MCP servers for selected data sources - you can use existing servers from the growing open source ecosystem or build your own. Configure the MCP host (e.g., Claude Desktop) and test the end-to-end flow. Measure business value - time savings, response quality, user satisfaction. Identify challenges and lessons for the future.
Phase 3: Pilot Expansion (6-8 weeks)
Based on PoC experience, extend deployment to additional use cases. Build or adapt MCP servers for additional systems. Implement production security mechanisms - permission management, auditing, monitoring. Train the support team and prepare documentation. Launch a pilot with a larger group of users and collect feedback.
Phase 4: Production and Scaling (ongoing)
Move to production deployment with full support. Build an internal catalog of MCP servers and best practices. Establish server lifecycle management processes - updates, monitoring, incident response. Continue expanding with new integrations based on business priorities.
A key success factor is engaging business owners of individual systems. MCP is not just an IT project - it is a change in how the organization uses its data. Without business support, deployment may encounter organizational barriers and not reach its full potential.
What is the MCP ecosystem and what ready-made servers are already available?
One of the greatest advantages of an open standard is the ability to build an ecosystem of ready-made components. Already at the MCP announcement, Anthropic released a library of reference servers, and the community is actively developing more. This ecosystem is a key factor accelerating adoption - instead of building everything from scratch, enterprises can use ready-made components.
Official servers from Anthropic include integrations with the most popular developer and productivity platforms. The GitHub server enables browsing repositories, code analysis, managing issues and pull requests, as well as access to GitHub Actions and workflows. The GitLab server offers similar capabilities for the alternative platform, including GitLab CI/CD support. The Google Drive server allows searching, reading, and analyzing documents, spreadsheets, and presentations. The Slack server enables access to conversation history, sending messages, and managing channels. The PostgreSQL server allows executing SQL queries to the database with full permission control.
Additionally, Anthropic has released servers for the local file system (with access control to specified directories), memory (persistent memory for the agent), web search (through Brave Search integration), and terminal (executing system commands in a controlled environment).
Community servers are rapidly growing in number. Enthusiasts and companies are publishing servers for Jira, Confluence, Notion, Asana, Salesforce, HubSpot, AWS, Azure, and dozens of other platforms. The quality and maturity of these servers varies, but the ecosystem is developing dynamically. In the first week after the announcement, over 50 community servers appeared. For popular platforms (like Jira or Salesforce), several alternative implementations are already available, allowing selection of the best fit for needs.
Server building frameworks facilitate creating custom integrations. Anthropic has released SDKs for Python and TypeScript that abstract low-level protocol details. Developers define resources and tools using simple decorators and schemas, and the SDK handles JSON-RPC communication, serialization, error handling, and connection management. A simple MCP server can be built in a few dozen lines of code. For more complex integrations, the SDK offers advanced features like data streaming, large file handling, and asynchronous operations.
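The decorator-and-schema style these SDKs offer can be imitated with the stdlib alone. This is not the real SDK API - just a sketch of the registration pattern it abstracts away:

```python
import inspect

class ToyServer:
    """Loosely mimics decorator-based tool registration (illustrative only)."""
    def __init__(self, name: str):
        self.name = name
        self.tools = {}

    def tool(self, func):
        # Derive a minimal parameter list from the function signature;
        # a real SDK would build a full JSON Schema from type hints.
        params = list(inspect.signature(func).parameters)
        self.tools[func.__name__] = {"handler": func, "params": params}
        return func

server = ToyServer("demo")

@server.tool
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed response)."""
    return f"order {order_id}: shipped"

print(server.tools["lookup_order"]["params"])         # ['order_id']
print(server.tools["lookup_order"]["handler"]("A1"))  # order A1: shipped
```

What the real SDKs add on top of this pattern - JSON-RPC wiring, serialization, error handling - is exactly the boilerplate the developer no longer writes.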
It is worth noting that MCP is agnostic to AI provider. Although Anthropic is the protocol creator, nothing prevents other providers (OpenAI, Google, Meta) from implementing MCP support. The more providers adopt the standard, the greater the ecosystem value for everyone.
For enterprises planning MCP deployment, it is recommended to start with official Anthropic servers, which are best tested. As deployment matures, community servers can be added or custom ones built for specific internal systems.
How does MCP affect the future of AI systems architecture in enterprises?
Model Context Protocol is not just an integration tool - it is a paradigm shift in how AI systems are designed. Its adoption will have far-reaching consequences for IT architecture in enterprises. Technology leaders who understand these implications early will be better able to prepare their organizations for the coming transformation.
From monolithic assistants to compositional agents - the traditional approach to enterprise AI relied on building dedicated assistants for specific tasks. The HR assistant had access to HR systems, the sales assistant to CRM, the IT assistant to developer tools. This approach led to fragmentation and duplication of effort. MCP enables building universal agents that dynamically compose their capabilities from available MCP servers. One agent can handle tasks from multiple domains, adapting to context. This is a fundamental change in architecture - instead of N dedicated assistants, the organization can have one universal agent with dynamically configurable capabilities.
Data Mesh for AI - the Data Mesh concept promotes decentralization of data ownership while maintaining interoperability. MCP naturally supports this model - each team can expose their data through an MCP server, maintaining control over implementation and security, while enabling its use by AI agents throughout the organization. The team responsible for sales data maintains their MCP server, defines available resources and tools, and manages permissions. AI agents throughout the organization can use this data without needing to understand implementation details.
API-first becomes MCP-first - organizations that have adopted an API-first approach to system integration have a natural path to MCP. Existing APIs can be wrapped with MCP servers, immediately making them available to AI agents. In the future, new systems may be designed with native MCP support from the start. We can expect enterprise software vendors to start offering ready-made MCP servers as part of their products - similar to how they offer APIs and webhooks today.
Democratization of data access - MCP combined with natural language AI interfaces enables data access for non-technical users. A business analyst does not need to know SQL or navigate between dozens of systems - they can simply ask a question to an AI agent that has access to all relevant sources. This changes the power dynamics in the organization - data access ceases to be the domain of IT and data analysts, becoming available to every employee.
New roles and competencies - MCP adoption creates demand for new competencies and roles in the organization. MCP Server Developer is a specialist in building and maintaining MCP servers, understanding both the protocol and the specifics of integrated systems. AI Systems Architect is a role responsible for designing AI agent ecosystems and their integrations, ensuring consistency, security, and scalability. Prompt Engineer evolves toward Agent Designer, designing behaviors of autonomous agents, defining their goals, constraints, and action strategies.
AI Governance - with the growing autonomy of AI agents, the importance of governance grows. Who is responsible when an agent makes a wrong decision? How to audit agent actions? How to ensure regulatory compliance? MCP provides technical mechanisms (logging, access control), but organizations must build governance processes and structures around these mechanisms.
What are the limitations and challenges associated with MCP?
Despite its enormous potential, Model Context Protocol is not a panacea and has limitations that should be considered when planning deployment.
Ecosystem maturity - MCP is a freshly announced standard. The ecosystem of servers, tools, and best practices is still taking shape. Early adopters must expect to build their own solutions and encounter undocumented edge cases.
Performance and latency - each MCP tool call is an additional communication round-trip. For agents executing many steps, cumulative latency can be significant. Performance optimization requires careful architecture design and potentially local caching.
Debugging complexity - MCP-based systems introduce a new level of complexity. When an AI agent does not behave as expected, the cause may lie in the AI model, MCP host, MCP server, or target system. Debugging and observability tools are still in their infancy.
Version management - the MCP protocol will evolve, as will servers and hosts. Managing compatibility between versions in a production environment requires discipline and processes that are still crystallizing.
Operational costs - each MCP server is an additional component to deploy, monitor, and maintain. With a large number of integrations, operational overhead can be significant.
AI model limitations - MCP solves the data access problem but does not eliminate fundamental limitations of language models - hallucinations, reasoning errors, context limitations. An AI agent with access to real data can generate erroneous conclusions just as easily as without such access.
Enterprises should approach MCP with enthusiasm but also realism. The best results will be achieved by organizations that treat MCP as part of a broader AI strategy, not as a magical solution to all problems.
How does MCP compare to competing standards and initiatives?
Model Context Protocol did not emerge in a vacuum. The AI integration market is active, and several other initiatives are competing for the standard position.
OpenAI Plugins and GPT Actions are OpenAI’s response to the integration problem. Plugins allow GPT models to call external APIs. However, they are locked into the OpenAI ecosystem and require publication in the OpenAI store, which limits enterprise applications.
LangChain and LlamaIndex are popular frameworks for building AI applications. They offer their own abstractions for integration with tools and data. They are more mature than MCP but less standardized - each project implements integrations in its own way.
Semantic Kernel from Microsoft is an SDK for building AI applications with integrations. It is well integrated with the Microsoft ecosystem (Azure, Office 365) but less universal.
MCP stands out in this landscape with several characteristics. It is an open standard, not a framework or product. It is agnostic to AI provider and platform. It focuses on protocol standardization, not application implementation. It has the support of Anthropic, one of the AI market leaders.
A probable scenario is coexistence of different approaches. MCP may become the transport layer standard, while frameworks like LangChain are built on top of it. AI providers may implement MCP support alongside their native integration mechanisms.
For enterprises, avoiding vendor lock-in is crucial. Choosing an open standard like MCP ensures flexibility and reduces risk associated with dependence on a single provider.
How does ARDURA Consulting support enterprises in deploying MCP and Agentic AI?
Transformation toward Agentic AI using Model Context Protocol is a complex undertaking requiring a combination of technical competencies, project experience, and understanding of business specifics. ARDURA Consulting, as a trusted technology partner with over a decade of enterprise deployment experience, offers comprehensive support at every stage of this journey.
AI Strategy and Architecture - our systems architects design the target architecture of AI ecosystems with MCP as the integration layer. We analyze the systems landscape, identify highest-value use cases, and design migration paths that minimize risk.
MCP Server Development - ARDURA development teams specialize in building MCP servers for enterprise systems. Integration with legacy ERP, custom CRM, or data lake - we deliver servers that are secure, efficient, and easy to maintain.
Staff Augmentation for AI Projects - through our Staff Augmentation model, we provide AI specialists and integration engineers to strengthen client teams. The flexible collaboration model allows scaling resources according to needs.
Quality Assurance for AI Systems - our testing teams offer specialized QA services for MCP systems and AI agents, testing functionality, security, and performance.
ARDURA Consulting approaches each project as a Trusted Advisor. We help clients understand where MCP and Agentic AI can deliver real business value and guide them through implementation with long-term success in mind.
If your organization is considering Model Context Protocol deployment, we invite you to a conversation. Contact us to schedule a consultation.
| MCP Maturity Stage | Characteristics | Typical Deployments | Next Steps |
| --- | --- | --- | --- |
| **Stage 1: Exploration** | Organization learns about MCP, tests basic use cases in development environment. | Claude Desktop with official MCP servers (GitHub, Google Drive) | Identify 2-3 internal systems for integration, build PoC |
| **Stage 2: Pilot** | First production MCP deployment for selected use case, limited user group. | Custom MCP servers for 1-2 internal systems, basic security management | Measure pilot ROI, extend to additional use cases |
| **Stage 3: Scaling** | MCP as standard AI integration layer, multiple servers, broad adoption. | MCP server catalog, centralized management, IAM integration | Build competency centers, standardize server development processes |
| **Stage 4: Transformation** | Agentic AI as strategic organizational capability, autonomous agents supporting key processes. | Complex agent ecosystems, advanced orchestration, continuous optimization | Explore new use cases, share experiences with ecosystem |
Planning an AI integration project? Learn about our Software Development services.
See also
- AI-Driven Development: How Artificial Intelligence Supports Software Development
- AI and Automation in SAM: How Intelligent License Management Can Reduce Your IT Costs by 30%
- Microservices Architecture vs Monolith: How to Choose the Right Approach
Let’s discuss your project
Have questions or need support? Contact us - our experts are happy to help.