---
title: "Three Tiger Data Engineers Told Us the Truth About MCP – Security Is Its Achilles Heel"
published: 2025-07-24T09:00:50.000-04:00
updated: 2026-01-08T07:53:08.000-05:00
excerpt: "Three Tiger Data engineers reveal the unvarnished truth about Model Context Protocol (MCP) implementation, exposing critical security concerns and practical challenges the AI hype cycle isn't discussing."
tags: Dev Q&A, AI, Thought Leadership, Engineering
authors: Jacky Liang
---

> **TimescaleDB is now Tiger Data.**

Model Context Protocol (MCP) is eating the LLM app world right now. 

When Anthropic launched this [new protocol standard](https://www.anthropic.com/news/model-context-protocol) in late November 2024, the AI community's reaction was lukewarm-to-cold at best. 

Then, a few months later, every LLM dev conversation suddenly landed on the same buzzword: MCP this, MCP that. 

In this piece, we want to [cut through](#its-like-a-computer-without-internet) the hype and share our engineers’ POV of what MCP is, when to use it (and not), and the major [security implications](#the-security-nightmare-few-are-talking-about) to consider when deploying and using MCP. 

## What is MCP?

We won’t go too in-depth about what MCP is and its client-host-server architecture. There’s plenty of great content on that, including this [“Model Context Protocol (MCP), clearly explained (why it matters)”](https://www.youtube.com/watch?v=7j_NE6Pjv-E) video by Greg Isenberg. So below is just a simple definition.

![Model context protocol (MCP) architecture. Source: DiamantAI](https://storage.ghost.io/c/6b/cb/6bcb39cf-9421-4bd1-9c9d-fa7b6755ba0e/content/images/2025/07/SCR-20250722-nyxo.png)

__Model context protocol (MCP) architecture. Source:__ [__DiamantAI__](https://diamantai.substack.com/p/model-context-protocol-mcp-explained)

Created by Anthropic, MCP lets AI systems connect to databases, APIs, and tools through a standardized protocol. Think of it as USB-C for AI applications: just like USB-C provides a universal way to connect any device to any peripheral, MCP provides a universal way to connect any AI model to any data source or tool. 

While [function calling](https://platform.openai.com/docs/guides/function-calling) (also known as tool calling) works great when you're building tools for your own AI app, MCP shines when you need to expose your APIs to multiple users or AI applications. Instead of everyone reading your API docs and implementing custom integrations, they can just plug into your MCP server with a standardized interface. 
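To make that concrete, here's a rough sketch of what the standardized interface looks like on the wire. MCP messages are JSON-RPC 2.0, and a client can ask any server which tools it offers via `tools/list`. The server response below is illustrative (the `run_query` tool is made up, not from a real server):

```python
import json

# A hypothetical JSON-RPC exchange with an MCP server. The client asks
# what tools exist; the server answers with machine-readable schemas,
# so nobody has to hand-write a custom integration against API docs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Illustrative response from an imaginary database MCP server.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "run_query",
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# Any MCP client can discover the tool surface generically.
tool_names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(tool_names))  # → ["run_query"]
```

The point isn't the specific payloads, it's that discovery is part of the protocol: the client never needs to know in advance what a given server can do.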

But obviously, we at Tiger Data have a habit of wanting to look [beyond](https://www.tigerdata.com/blog/vector-databases-are-the-wrong-abstraction) [the hype](https://www.tigerdata.com/blog/why-cursor-is-about-to-ditch-vector-search-and-you-should-too) to see the practical truth. And truth is what we went out to seek. 

We cornered three unwilling Tiger Data engineers who have actually built with MCP and asked them to cut through the noise for us.

What we got was direct and realistic, which is exactly what we needed. 

## "It's Like a Computer Without Internet"

[Anton Umnikov](https://www.linkedin.com/in/antonumnikov/), cloud solutions architect at Tiger Data, drops this analogy that simplifies MCP’s utility: "LLMs without tools are like a computer without access to internet." Anton’s been building with LLMs for three years, watched the hype cycles come and go, and he's not buying into the MCP trend lightly.

But then [James Guthrie](https://www.linkedin.com/in/james-guthrie-51786894/), staff software engineer on the AI team at Tiger Data, jumps in with the reality check: "I can't see how it won't be a thing, but there are quite a few hurdles to broader adoption like security." 

[John Pruitt](https://www.linkedin.com/in/jgpruitt/), staff software engineer on the AI team at Tiger Data, meanwhile, draws the language server protocol (LSP) comparison: "MCP reminds me of LSP for IDEs. It's a way to give agents agency to interact with their environment." Just like LSP lets any code editor understand any programming language through a standard protocol, MCP lets any AI system connect to any external tool or data source.

Three engineers. Three different angles. Same conclusion: MCP is inevitable, but it isn't _quite ready_ for widespread adoption yet. 

## The Security Nightmare Few Are Talking About

"You're sending customer information to MCP servers with questionable security protocols and LLM providers," James says flatly. This isn't theoretical hand-waving; it's happening right now as developers connect MCP to production systems without considering the security implications.

Many current MCP servers are highly insecure. 

In 2025 alone, security researchers discovered [CVE-2025-6514](https://jfrog.com/blog/2025-6514-critical-mcp-remote-rce-vulnerability/), a critical vulnerability in mcp-remote that allows attackers to trigger arbitrary OS command execution when connecting to untrusted MCP servers. Another critical flaw, [CVE-2025-49596](https://www.oligo.security/blog/critical-rce-vulnerability-in-anthropic-mcp-inspector-cve-2025-49596), with a CVSS score of 9.4, allows attackers to execute remote code on developers' machines simply by visiting a malicious website while running MCP Inspector.

Recent research by Backslash found [hundreds of MCP servers](https://www.backslash.security/blog/hundreds-of-mcp-servers-vulnerable-to-abuse) with network exposure vulnerabilities, binding to all interfaces (0.0.0.0) and allowing arbitrary command execution through careless subprocess usage. Knostic researchers discovered [1,862 MCP servers](https://www.knostic.ai/blog/find-mcp-server-shodan) exposed to the internet, with all 119 manually tested servers allowing access to internal tool listings without authentication.

Anton works with Canadian banks. His take? "They are very, very cautious about adopting Gen AI in general." Tiger Data takes our customers' privacy and security exceptionally seriously, and we work with institutions that understand what happens when customer data leaks.

Meanwhile, John's fighting more... implementation battles:

> "Not all clients support the entire MCP specification."

While everyone debates philosophy, he's debugging why his MCP server crashes when Claude asks it to do something basic. Observability for MCP, in both development and production, still needs work.

## The Boring Bits: Where MCP Works Well

Based on what we’re seeing, here's where MCP works well:

**Gone are the days of reading API docs manually**: "You don't need to read documentation manually anymore, just tell the LLM what you want to accomplish while connected to a Docs MCP server," James explains. How much time have you wasted parsing Stripe's docs vs. just pasting an llms.txt file into your LLM, or connecting to the docs' MCP server so the LLM can figure it out itself?  

**Database context that doesn’t suck**: John wants "to connect Claude Desktop directly to the database I'm working on." Not to replace his SQL skills, but to give LLMs enough context to stop suggesting table names that don't exist and understand the holistic schema of his DB. 

**Automating the boring tasks**: All three mention the same things: tests, commit messages, CSV imports. The stuff you hate doing but must do. This is perfect MCP territory.

## Build It To Learn It

Here's how they're really approaching MCP (and it's not what the tutorials tell you):

**Build it from scratch first**: Anton built MCP servers from scratch. Why? "If you do MCP the hard way, you'll understand the basics." No shortcuts. No abstractions. Just JSON-RPC and the pain of figuring out the spec until it clicks. 
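What does "the hard way" actually look like? At its core, an MCP server is a JSON-RPC 2.0 request/response loop. Here's a deliberately bare-bones, illustrative dispatcher; the method name is made up, and a real server also handles initialization, capabilities, notifications, and transport framing per the spec:

```python
import json

# Minimal, illustrative JSON-RPC 2.0 dispatcher in the spirit of an MCP
# server built "the hard way". The say_hello tool and its method name
# are hypothetical; a spec-complete server does much more than this.

def say_hello(params):
    return {"message": f"hello, {params.get('name', 'world')}"}

HANDLERS = {"tools/call/say_hello": say_hello}  # hypothetical method name

def handle(raw: str) -> str:
    req = json.loads(raw)
    handler = HANDLERS.get(req.get("method"))
    if handler is None:
        # -32601 is the JSON-RPC 2.0 "method not found" error code.
        error = {"code": -32601, "message": "Method not found"}
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "error": error})
    result = handler(req.get("params", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

reply = handle(json.dumps({
    "jsonrpc": "2.0", "id": 7,
    "method": "tools/call/say_hello",
    "params": {"name": "Anton"},
}))
print(reply)
```

Once you've hand-rolled even this much, the abstractions in the official SDKs stop being magic and start being conveniences.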

**Human-in-the-loop everything**: James wants confirmation prompts (something that's catching on in many agentic CLI tools like Claude Code): "This is the action I think should be taken. Do you want to take this action?" [Avoid YOLO mode](https://x.com/jasonlk/status/1946069562723897802) at all costs if you don't want your entire database deleted by an overeager LLM agent. 

![Replit deleted an entire database. Source: Jason Lemkin](https://storage.ghost.io/c/6b/cb/6bcb39cf-9421-4bd1-9c9d-fa7b6755ba0e/content/images/2025/07/SCR-20250722-nqgh.png)

__Replit deleted an entire database. Source:__ [__Jason Lemkin__](https://x.com/jasonlk/status/1946069562723897802)
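The confirmation gate James describes can be sketched as a thin wrapper around any destructive operation. The helpers below (`is_destructive`, `guarded_execute`) are illustrative, not from a real MCP SDK, and the keyword check is deliberately crude:

```python
# Illustrative human-in-the-loop gate: destructive operations require an
# explicit "y" before they run; everything else passes straight through.

DESTRUCTIVE_KEYWORDS = ("drop", "delete", "truncate", "update", "insert")

def is_destructive(sql: str) -> bool:
    # Crude prefix check for the sketch; real guards should parse the SQL.
    return sql.strip().lower().startswith(DESTRUCTIVE_KEYWORDS)

def guarded_execute(sql: str, confirm=input) -> str:
    if is_destructive(sql):
        answer = confirm(
            f"This is the action I think should be taken:\n  {sql}\n"
            "Do you want to take this action? [y/N] "
        )
        if answer.strip().lower() != "y":
            return "aborted by human"
    return f"executed: {sql}"  # stand-in for the real database call

# Inject a non-interactive confirm function for agents or tests:
print(guarded_execute("DROP TABLE users", confirm=lambda _: "n"))  # → aborted by human
print(guarded_execute("SELECT count(*) FROM users"))
```

The design point is that the confirmation lives outside the LLM's control: the agent proposes, the wrapper (and the human behind it) disposes.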

**Type every line**: John's rule is simple—he types out AI-suggested code instead of copying it. "If I have to type it, I know I understand it." Friction in this case helps John learn, instead of blindly clicking “apply”. 

## MCP Doesn’t Replace Their Workflow

None of them see MCP replacing their current workflow.

> "Using LLM or MCP is something I do in addition to my general workflow," James explains. "It's complementary. It doesn't drive the work."

Context determines everything. 

Authentication logic? Understand every line. Destructive database operations? Verify every line of SQL. CSV import script? Let the AI handle it. "You're always on a continuum between how much control you need and how much you're willing to relinquish." 

Convenience can easily crowd out oversight, because letting an agent make all the decisions (even the unsafe ones) feels so effortless, right up until it isn't. 

## Big Gap Between PoC and Production for Enterprises

Anton nails it: "Right now most people are doing hello world kind of stuff."

The gap between proof-of-concept demos and production systems is massive. Security frameworks for MCP are in their infancy. Client implementations are inconsistent. Enterprise compliance is barely a footnote. 

This obviously isn't stopping adoption of MCP. But we are seeing it slow to a less frantic pace since the peak in Q1 2025.

> "The more you understand, the more you've tried, the better you'll be prepared for what comes next," Anton adds.

The pragmatic teams aren't rushing to deploy MCP everywhere. They are building and learning it, but only deploying it when it makes sense for their customers, and more importantly, with as few security blindspots as possible. 

## Engineering Teams: Print This and Hang It Up

If you're building with LLMs right now, here's what our engineers suggest.

**Use MCP when:**

-   You need to expose your APIs to multiple users or AI applications
-   You want a standardized protocol that works across different AI platforms
-   You're building tools that others will integrate with
-   You need persistent connections and context across conversations

**Use function calling when:**

-   You're building tools for your own AI application only
-   You need one-off integrations where you control both ends
-   You want simpler implementation without protocol overhead
-   You're prototyping or testing functionality quickly
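If you land on the function-calling side of that decision, the whole pattern fits in a few lines: you hand the model a JSON-schema tool definition and dispatch its tool calls yourself. A minimal sketch follows; the `get_weather` tool is hypothetical, and the actual model API call is elided:

```python
import json

# Hypothetical tool definition in the JSON-schema style used by
# function-calling APIs such as OpenAI's.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"22°C and sunny in {city}"  # stand-in for a real lookup

DISPATCH = {"get_weather": get_weather}

# Pretend the model returned this tool call; in a real app it comes
# back in the chat-completion response alongside the tools you sent.
tool_call = {"name": "get_weather", "arguments": json.dumps({"city": "Lisbon"})}

result = DISPATCH[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # → 22°C and sunny in Lisbon
```

Because you control both the schema and the dispatch table, there's no protocol to implement, which is exactly why this is the right tool for one-off, single-app integrations.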

**Security. Comes. First**.

-   When building your own MCP server, assume that any data you expose through your API is publicly accessible
-   Use the principle of least privilege: start with read-only access and the minimum permissions needed to accomplish a task, and no more 
-   Be exceptionally careful with destructive operations (deletes, updates, writes) and require explicit user approval. Treat the spec's "SHOULD always be a human in the loop" as a MUST
-   Treat MCP entities like external users because they essentially are
-   Read the code of open source MCP servers to see what they do and how they use your data 
-   Researchers found hundreds of servers with network exposure issues and careless command execution, so be careful which MCP servers you connect to and use 
-   Researchers also suggest never binding MCP servers to all network interfaces (0.0.0.0); stick to localhost only
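Some of these points are mechanical enough to enforce in code rather than in a checklist. As an illustrative sketch, the `make_listener` helper below is hypothetical (standing in for however your server opens its socket), but the idea of refusing an all-interfaces bind up front is general:

```python
import socket

# Illustrative hardening default for a self-hosted MCP server: bind to
# loopback only and refuse a 0.0.0.0 bind outright. make_listener is a
# stand-in for wherever your real server opens its listening socket.

def make_listener(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    if host == "0.0.0.0":
        raise ValueError("refusing to bind MCP server to all interfaces")
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))  # port 0 lets the OS pick a free port
    sock.listen()
    return sock

listener = make_listener()        # loopback only: fine
print(listener.getsockname()[0])  # → 127.0.0.1
listener.close()
```

Making the dangerous configuration impossible (rather than merely discouraged in a README) is what keeps your server out of the Shodan results above.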

**Build from scratch first**. Create an MCP server by hand using JSON-RPC. Build apps using other MCP servers. The best way to learn is to build it. Understand the protocol fundamentals before trusting abstractions. 

**You are the conductor of tools**. As Anton puts it: "Think of yourself as a conductor of AI workers and tools, guiding them to do specific things rather than blindly trusting their output. Avoid YOLO mode unless you have nothing to lose."

It’s easy to fall for the hype in a very hype-y space. Yet the race isn't to deploy MCP fastest, but to understand how it helps your customers and to protect their data. 

* * *

## Want the Full Story? 

Watch our [video interviews](https://www.youtube.com/@TimescaleDB/shorts) with Anton, James, and John for their unfiltered takes on MCP's reality versus the hype. You can find them on any of our social channels, including [LinkedIn](https://www.linkedin.com/company/tigerdata/) and [X](https://x.com/TigerDatabase), starting this week.

## Helpful Links

1.  [Tweet by @jasonlk](https://x.com/jasonlk/status/1946069562723897802)
2.  [Threat Research: Hundreds of MCP Servers Vulnerable to Abuse](https://www.backslash.security/blog/hundreds-of-mcp-servers-vulnerable-to-abuse)
3.  [How to Find an MCP Server with Shodan](https://www.knostic.ai/blog/find-mcp-server-shodan)
4.  [Critical RCE Vulnerability in Anthropic MCP Inspector - CVE-2025-49596](https://www.oligo.security/blog/critical-rce-vulnerability-in-anthropic-mcp-inspector-cve-2025-49596)
5.  [Why a Classic MCP Server Vulnerability Can Undermine Your Entire AI Agent](https://www.trendmicro.com/en_us/research/25/f/why-a-classic-mcp-server-vulnerability-can-undermine-your-entire-ai-agent.html)
6.  [Critical RCE Vulnerability in mcp-remote: CVE-2025-6514 Threatens LLM Clients](https://jfrog.com/blog/2025-6514-critical-mcp-remote-rce-vulnerability/)
7.  [Wikipedia: The Language Server Protocol (LSP)](https://en.wikipedia.org/wiki/Language_Server_Protocol)
8.  [OpenAI docs: Function calling](https://platform.openai.com/docs/guides/function-calling)
9.  [Anthropic docs: MCP](https://docs.anthropic.com/en/docs/mcp) 

* * *

**About the Author:**

[Jacky Liang](https://www.linkedin.com/in/jjackyliang/) is a developer advocate at Tiger Data with an AI and LLMs obsession. He's worked at Pinecone, Oracle Cloud, and Looker Data as both a software developer and product manager, which has shaped the way he thinks about software. 

He cuts through AI hype to focus on what actually works. How can we use AI to solve real problems? What tools are worth your time? How will this technology actually change how we work? 

When he's not writing or speaking about AI, Jacky builds side projects and tries to keep up with the endless stream of new AI tools and research—an impossible task, but he keeps trying anyway. His model of choice is Claude Sonnet 4 and his favorite coding tool is Claude Code.