Over the past year, we’ve been working on integrating OpenEdge applications with AI agents using the Model Context Protocol (MCP). Recently, Progress introduced its own OpenEdge MCP Server, giving developers an official option for exposing ABL business logic to AI.
At first glance, both approaches seem to solve the same problem. And in many ways, they do. But when you look closer, the differences become quite interesting.
Same Goal, Different Paths
Both approaches ultimately aim to expose ABL business logic running on PAS for OpenEdge to AI agents.
- The OpenEdge MCP Server connects to PASOE via REST/Web and automatically exposes operations as MCP tools.
- A custom MCP server approach allows you to implement MCP servers yourself and integrate with OpenEdge using your preferred technology stack.
From an architectural standpoint, both end up in the same place: AI agents calling into ABL code.
The Progress Approach: OpenAPI MCP
The OpenEdge MCP Server takes a very opinionated approach:
- You provide an OpenAPI (Swagger) specification
- The server automatically converts each operation into an MCP tool
- These tools are exposed to AI agents with built-in security, scoping, and guardrails
This has some clear advantages:
- Minimal manual wiring
- Strong, consistent structure
- Enterprise-grade security built in
- Fast path from existing REST APIs to AI
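The conversion step at the heart of this approach can be sketched in a few lines. The real server's internals are not public, so this is purely illustrative: plain dicts standing in for an OpenAPI document, and a hypothetical `getCustomer` operation standing in for a PASOE REST endpoint.

```python
# Illustrative sketch: each OpenAPI operation becomes an MCP-style tool.
# This is NOT the OpenEdge MCP Server's actual code, just the mapping idea.

def openapi_to_mcp_tools(spec: dict) -> list[dict]:
    """Turn each OpenAPI operation into an MCP-style tool definition."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            properties = {
                p["name"]: {
                    "type": p.get("schema", {}).get("type", "string"),
                    "description": p.get("description", ""),
                }
                for p in op.get("parameters", [])
            }
            tools.append({
                # operationId becomes the tool name the AI agent sees
                "name": op["operationId"],
                "description": op.get("summary", f"{method.upper()} {path}"),
                "inputSchema": {"type": "object", "properties": properties},
            })
    return tools

# Hypothetical spec fragment for a PASOE REST endpoint
spec = {
    "paths": {
        "/customers/{custNum}": {
            "get": {
                "operationId": "getCustomer",
                "summary": "Fetch a customer by number",
                "parameters": [
                    {"name": "custNum", "in": "path",
                     "schema": {"type": "integer"},
                     "description": "Customer number"},
                ],
            }
        }
    }
}

tools = openapi_to_mcp_tools(spec)
print(tools[0]["name"])  # getCustomer
```

The quality of the resulting tools depends entirely on the quality of the spec: a missing `summary` or an untyped parameter propagates straight into what the AI agent sees.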
Where I See Limitations
1. Container-First Deployment
The MCP Server is designed as a Docker-based solution. That’s great for standardized deployments — but for lightweight setups, or environments without container infrastructure, it can feel a bit heavy.
2. Dependency on OpenAPI
The entire approach relies on having a clean OpenAPI specification. Generating high-quality specs from PASOE isn’t always straightforward.
3. OpenAPI vs. MCP — Different Concerns
OpenAPI and MCP solve different problems. Mapping one to the other works — but it’s not always a perfect fit.
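One concrete example of the mismatch: an OpenAPI response is a typed HTTP payload, while an MCP tool result is a list of content blocks aimed at a language model. The customer data below is made up; the `content`/`isError` result shape follows the MCP specification.

```python
import json

# What a REST endpoint returns, per its OpenAPI response schema:
# a typed JSON body, plus HTTP status, headers, and media type.
http_response = {"custNum": 42, "name": "Lift Tours", "balance": 199.50}

# What an MCP tool call returns: content blocks the model can read.
# The typed structure is flattened into text inside the payload.
mcp_result = {
    "content": [
        {"type": "text", "text": json.dumps(http_response)}
    ],
    "isError": False,
}

# Status codes, headers, and content negotiation have no direct MCP
# counterpart; error semantics collapse into a boolean plus text.
print(mcp_result["content"][0]["type"])  # text
```

Mapping tools generated from OpenAPI therefore have to make judgment calls — which status codes count as errors, how response schemas are serialized — and those calls are not always the ones you would make yourself.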
Custom MCP Servers: One Example Using .NET
Instead of starting with OpenAPI, another option is to build MCP servers directly using one of the available SDKs.
In our case, we chose a .NET Core-based implementation. That is just one example of a custom MCP server approach; we picked .NET for a few reasons:
- We have a lot of experience with the technology
- It is robust and production-proven
- It is platform-independent
From an MCP perspective, this does not differ much from Python (FastMCP) or Node.js based approaches.
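Whatever the SDK, every MCP server ultimately answers JSON-RPC requests such as `tools/list` and `tools/call`. The sketch below shows that dispatch loop in plain Python with no SDK at all; `call_abl` is a hypothetical stand-in for however you invoke your ABL logic on PASOE.

```python
# Minimal sketch of what any custom MCP server does under the hood.
# call_abl() is a hypothetical placeholder for the PASOE/OpenEdge call.

def call_abl(proc: str, params: dict) -> str:
    # In a real server this would invoke ABL business logic on PASOE
    # (e.g. via REST or an Open Client bridge). Stubbed out here.
    return f"ran {proc} with {params}"

TOOLS = {
    "getCustomer": {
        "description": "Fetch a customer record from the OpenEdge database",
        "inputSchema": {"type": "object",
                        "properties": {"custNum": {"type": "integer"}},
                        "required": ["custNum"]},
    }
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to an MCP result payload."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, **t} for n, t in TOOLS.items()]}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        text = call_abl(request["params"]["name"], args)
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp["result"]["tools"][0]["name"])  # getCustomer
```

An SDK like FastMCP or the official C# package handles the transport, handshake, and schema plumbing for you — but the shape of the conversation is the same, which is why the choice of stack matters less than it might seem.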
Where the Custom Approach Shines
1. Full Control Over Tool Design
You are not constrained by existing REST APIs. You can design MCP tools specifically for AI use cases.
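To make that concrete, here is a hypothetical contrast: three fine-grained tools that mirror REST endpoints one-to-one, versus a single tool designed around the question the agent actually asks. All names are invented for illustration.

```python
# Tools that simply mirror existing REST endpoints, one per operation:
rest_mirrored = ["getCustomer", "getOrdersForCustomer", "getOrderLines"]

# One purpose-built tool that answers the agent's real question in a
# single round trip, with a description written for the model.
ai_first_tool = {
    "name": "summarizeCustomerActivity",
    "description": (
        "Return a customer's profile, open orders, and recent order "
        "lines in one call. Use this instead of chaining lookups."
    ),
    "inputSchema": {"type": "object",
                    "properties": {"custNum": {"type": "integer"}},
                    "required": ["custNum"]},
}

# Fewer, richer tools mean fewer round trips and less room for the
# model to mis-chain individual calls.
print(ai_first_tool["name"])  # summarizeCustomerActivity
```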
2. Dynamic Prompts and Behavior
Because everything is implemented in code, you can generate prompts dynamically and adapt behavior at runtime.
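A small sketch of what that looks like in practice — assembling prompt text at runtime from live state rather than a static template. The function and role names here are hypothetical.

```python
from datetime import date

def build_system_prompt(available_tools: list[str], user_role: str) -> str:
    """Assemble a prompt reflecting today's tools and the caller's role."""
    lines = [
        f"Today is {date.today():%Y-%m-%d}.",
        f"You are assisting a user with the '{user_role}' role.",
        "You may call these tools:",
    ]
    lines += [f"  - {name}" for name in sorted(available_tools)]
    # Behavior adapts at runtime, e.g. based on the caller's permissions
    if user_role != "admin":
        lines.append("Do not call tools that modify data.")
    return "\n".join(lines)

prompt = build_system_prompt(["getCustomer", "updateCustomer"], "viewer")
print(prompt.splitlines()[-1])  # Do not call tools that modify data.
```

With an OpenAPI-driven server, this kind of per-request adaptation has to fit within whatever configuration the product exposes; in code, it is just another function.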
3. Flexible Integration Options
A custom MCP server is not tied to a specific deployment model: it can run in a container, as a background service, or as a plain process alongside your existing infrastructure.
So Which One Should You Use?
It depends on your requirements.
- For standardized, secure, low-effort setups → OpenEdge MCP Server
- For flexibility and control → Custom MCP server approach
Final Thoughts
Both approaches are valid — and they can even complement each other.
For us, the choice was mostly about timing — we started long before the OpenEdge MCP Server was available.
And today, given our requirements, I’m still not inclined to switch.
But it’s great to see Progress entering this space. Bringing OpenEdge business logic into the world of AI is what matters most.