Jul 25, 2025
Posted by
Jacky Liang
Model Context Protocol (MCP) is eating the LLM app world right now.
When Anthropic launched it in late November 2024, the AI community's reaction to the new protocol standard was lukewarm at best.
Then suddenly, a few months later, every LLM dev conversation lands on the same buzzword: MCP this, MCP that.
In this piece, we want to cut through the hype and share our engineers’ POV of what MCP is, when to use it (and not), and the major security implications to consider when deploying and using MCP.
We won’t go too in-depth about what MCP is and its client-host-server architecture. There’s plenty of great content on that, including this “Model Context Protocol (MCP), clearly explained (why it matters)” video by Greg Isenberg. So below is just a simple definition.
Created by Anthropic, MCP lets AI systems connect to databases, APIs, and tools through a standardized protocol. Think of it as USB-C for AI applications: just like USB-C provides a universal way to connect any device to any peripheral, MCP provides a universal way to connect any AI model to any data source or tool.
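Under the hood, that "universal connector" is just JSON-RPC 2.0 messages. Here's a minimal sketch of what a client-server exchange looks like on the wire; the `query_database` tool and its schema are made-up examples, not part of the spec:

```python
import json

# MCP messages are JSON-RPC 2.0. The client asks a server which tools it
# exposes ("tools/list"), and the server answers with tool names plus a
# JSON Schema describing each tool's inputs.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A hypothetical server response (the tool itself is illustrative):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

print(json.dumps(request))
```

Any MCP-capable client can read that schema and call the tool without ever seeing your API docs, which is the whole point.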
While function calling (also known as tool calling) works great when you're building tools for your own AI app, MCP shines when you need to expose your APIs to multiple users or AI applications. Instead of everyone reading your API docs and implementing custom integrations, they can just plug into your MCP server with a standardized interface.
But obviously, we at TigerData have a habit of wanting to look beyond the hype to see the practical truth. And truth is what we went out to seek.
We cornered three unwilling TigerData engineers who have actually built with MCP and asked them to cut through the noise for us.
What we got was direct and realistic, which is exactly what we needed.
Anton Umnikov, cloud solutions architect at TigerData, drops this analogy that simplifies MCP’s utility: "LLMs without tools are like a computer without access to internet." Anton’s been building with LLMs for three years, watched the hype cycles come and go, and he's not buying into the MCP trend lightly.
But then James Guthrie, staff software engineer on the AI team at TigerData, jumps in with the reality check: "I can't see how it won't be a thing, but there are quite a few hurdles to broader adoption like security."
John Pruitt, staff software engineer on the AI team at TigerData, meanwhile, draws the language server protocol (LSP) comparison: "MCP reminds me of LSP for IDEs. It's a way to give agents agency to interact with their environment." Just like LSP lets any code editor understand any programming language through a standard protocol, MCP lets any AI system connect to any external tool or data source.
Three engineers. Three different angles. Same conclusion: MCP is inevitable, but widespread adoption isn’t quite ready yet.
"You're sending customer information to MCP servers with questionable security protocols and LLM providers," James says flatly. This isn't theoretical hand-waving; it's happening right now as developers connect MCP to production systems without considering the security implications.
Many current MCP servers are highly insecure.
In 2025 alone, security researchers have disclosed CVE-2025-6514, a critical vulnerability in mcp-remote that lets attackers trigger arbitrary OS command execution when a client connects to an untrusted MCP server. Another critical flaw, CVE-2025-49596, with a CVSS score of 9.4, allows attackers to execute remote code on developers' machines simply by visiting a malicious website while running MCP Inspector.
Recent research by Backslash found hundreds of MCP servers with network exposure vulnerabilities, binding to all interfaces (0.0.0.0) and allowing arbitrary command execution through careless subprocess usage. Knostic researchers discovered 1,862 MCP servers exposed to the internet, with all 119 manually tested servers allowing access to internal tool listings without authentication.
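Two of those failure modes have straightforward fixes. The sketch below (our illustration, not code from the research) shows a local server bound to loopback instead of all interfaces, and a tool call that passes an argument list to `subprocess` so model-controlled text can't inject shell commands:

```python
import socketserver
import subprocess

# Fix 1: bind a local MCP server to loopback only. Binding to "0.0.0.0"
# exposes it on every interface, including the public internet on a
# misconfigured host -- the exact exposure Backslash found in the wild.
class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        self.wfile.write(self.rfile.readline())

def make_server():
    # "127.0.0.1", never "0.0.0.0"; port 0 picks a free ephemeral port
    return socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)

# Fix 2: never splice untrusted text into a shell string. An argv list
# (no shell=True) means a ";" in the input stays a literal character
# instead of starting a second command.
def run_tool(user_arg: str) -> str:
    result = subprocess.run(
        ["echo", user_arg],  # argv list, not a shell string
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

With the argv form, an input like `"hi; rm -rf /"` is just echoed back verbatim rather than executed.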
Anton works with Canadian banks. His take? "They are very, very cautious about adopting Gen AI in general." TigerData takes our customers' privacy and security exceptionally seriously, and we work with institutions that understand what happens when customer data leaks.
Meanwhile, John's fighting more... implementation battles:
"Not all clients support the entire MCP specification." While everyone debates philosophy, he's debugging why his MCP server crashes when Claude asks it to do something basic. Observability for MCP during development and production still needs work.
Based on what we’re seeing, here's where MCP works well:
Gone are the days of reading API docs manually: "You don't need to read documentation manually anymore, just tell the LLM what you want to accomplish while connected to a Docs MCP server," James explains. How much time have you wasted parsing Stripe's docs, versus just pasting its llms.txt into your LLM, or connecting to the docs’ MCP server so the LLM can figure it out itself?
Database context that doesn’t suck: John wants "to connect Claude Desktop directly to the database I'm working on." Not to replace his SQL skills, but to give LLMs enough context to stop suggesting table names that don't exist and understand the holistic schema of his DB.
No more boring task automation: All three mention the same things—tests, commit messages, CSV imports. The stuff you hate doing but must do. This is perfect MCP territory.
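The database-context use case is easy to picture in code. A sketch of the idea: instead of letting the model guess table names, introspect the real schema and hand it over as context. We use SQLite here so the example is self-contained; against TimescaleDB/Postgres you'd query `information_schema.columns` instead:

```python
import sqlite3

# Collect every table's CREATE statement so the LLM sees the actual
# schema -- column names, types, and all -- instead of hallucinating them.
def schema_summary(conn: sqlite3.Connection) -> str:
    rows = conn.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    return "\n".join(sql for _, sql in rows if sql)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts TIMESTAMP, value REAL)")
print(schema_summary(conn))  # feed this string into the model's context
```

A database MCP server does essentially this on the model's behalf, exposing the schema (and queries against it) as tools.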
Here's how they're really approaching MCP (and it's not what the tutorials tell you):
Build it from scratch first: Anton built MCP servers from scratch. Why? "If you do MCP the hard way, you'll understand the basics." No shortcuts. No abstractions. Just JSON-RPC and pain from figuring out the spec until you get it.
Human-in-the-loop everything: James wants confirmation prompts (something that’s catching on in many agentic CLI tools like Claude Code): "This is the action I think should be taken. Do you want to take this action?" Avoid YOLO mode at all costs if you don’t want your entire database deleted by an overeager LLM agent.
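That confirmation gate is a few lines of code. A sketch (function name and shape are ours, not from any framework): every risky tool call routes through a yes/no prompt, and the action simply never executes unless a human approves it.

```python
def confirm_and_run(description, action, ask=input):
    """Run `action` only if the human approves `description`.

    `ask` defaults to a terminal prompt; an agent framework would
    surface the same question in its own UI.
    """
    answer = ask(f"This is the action I think should be taken: {description}. "
                 "Do you want to take this action? [y/N] ")
    if answer.strip().lower() != "y":
        return None  # refused: the action never runs
    return action()

# Hypothetical usage -- the destructive SQL only fires on an explicit "y":
# confirm_and_run("DROP TABLE metrics",
#                 lambda: db.execute("DROP TABLE metrics"))
```

Defaulting to "no" (anything but an explicit "y" refuses) is the important design choice: a distracted keystroke can't approve a destructive action.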
Type every line: John's rule is simple—he types out AI-suggested code instead of copying it. "If I have to type it, I know I understand it." Friction in this case helps John learn, instead of blindly clicking “apply”.
None of them see MCP replacing their current workflow.
"Using LLM or MCP is something I do in addition to my general workflow," James explains. "It's complementary. It doesn't drive the work."
Context determines everything.
Authentication logic? Understand every line. Destructive database operations? Verify every line of SQL. CSV import script? Let the AI handle it. "You're always on a continuum between how much control you need and how much you're willing to relinquish."
Convenience can easily crowd out oversight, because letting an agent make all the decisions (even the unsafe ones) feels so easy and convenient!
Anton nails it: "Right now most people are doing hello world kind of stuff."
The gap between proof-of-concept demos and production systems is massive. Security frameworks for MCP are in their infancy. Client implementations are inconsistent. Enterprise compliance is barely a footnote.
This obviously isn't stopping adoption of MCP. But we are seeing it slow to a less frantic pace since its peak in Q1 2025.
"The more you understand, the more you've tried, the better you'll be prepared for what comes next," Anton adds. The pragmatic teams aren’t rushing to deploy MCP everywhere. They are building and learning it, but only deploying it when it makes sense for their customers, and more importantly, with as few security blindspots as possible.
If you're building with LLMs right now, here's what our engineers suggest.
Use MCP when:

- You need to expose your APIs or data to many users or multiple AI applications through one standardized interface
- You want an agent to pull in live context (docs, database schemas, tools) instead of pasting it in by hand
- You're automating repetitive chores like tests, commit messages, and CSV imports

Use function calling when:

- You're building tools for your own AI app and control both sides of the integration
- Your tool set is small, fixed, and doesn't need to be shared with other applications
Security. Comes. First.
Build from scratch first. Create an MCP server by hand using JSON-RPC. Build apps using other MCP servers. The best way to learn is to build it. Understand the protocol fundamentals before trusting abstractions.
You are the conductor of tools. As Anton puts it: "Think of yourself as a conductor of AI workers and tools, guiding them to do specific things rather than blindly trusting their output. Avoid YOLO mode unless you have nothing to lose."
It’s easy to fall for the hype in a very hype-y space. Yet the race isn't to deploy MCP fastest, but to understand how it helps your customers and to protect their data.
Watch our video interviews with Anton, James, and John for their unfiltered takes on MCP's reality versus the hype. You can find them on any of our social channels, including LinkedIn and X, starting this week.
About the Author:
Jacky Liang is a developer advocate at TigerData with an AI and LLMs obsession. He's worked at Pinecone, Oracle Cloud, and Looker Data as both a software developer and product manager, which has shaped the way he thinks about software.
He cuts through AI hype to focus on what actually works. How can we use AI to solve real problems? What tools are worth your time? How will this technology actually change how we work?
When he's not writing or speaking about AI, Jacky builds side projects and tries to keep up with the endless stream of new AI tools and research—an impossible task, but he keeps trying anyway. His model of choice is Claude Sonnet 4 and his favorite coding tool is Claude Code.