How to Spot AI Snake Oil: 5 Red Flags to Watch Out For
- Reuben Piryatinsky
- Jun 23
- 4 min read
We're in the middle of an AI gold rush, and every vendor is suddenly "AI-powered." Flashy demos, glossy decks, and buzzword-loaded pitches are showing up in every boardroom, each promising transformation at scale.
The problem? Most of it is snake oil.
Our clients have been asking us which vendors are trustworthy and which claims are actually backed by evidence. Through our work on AI vendor selection, we've put together a framework for identifying AI snake oil.
What is AI Snake Oil?
AI snake oil is the illusion of innovation: demos built on clean sample data, vague claims of intelligence, and promises of plug-and-play automation. But when you dig deeper, there’s little substance behind the scenes. As someone who’s spent years on both the vendor and buyer side of enterprise technology, I’ve seen how quickly AI can go from promise to problem when decision-makers aren’t asking the right questions.
So, let’s cut through the hype. Here are five red flags to watch for during AI vendor demos, along with the questions you should be asking before you consider signing a contract.
1. “No Data Prep Needed” or Demo Only on Sample Data
If the demo works beautifully, but only on their sample data, that’s a red flag.
Real AI doesn’t work without real data. Your systems are messy. You’ve got missing values, custom fields, legacy schemas. If a vendor says their model doesn’t need your data, or worse, doesn’t ask for it, question why.
Ask this:
“Can you show us how this works using our real data, today?”
“How do you handle schema mapping, anomalies, and missing data?”
What to watch for:
Clear explanations of their data pipeline, their ETL process, and how they contextualize inputs for your domain.
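If you want a concrete sense of how messy your data really is before a vendor touches it, a few lines of pandas will tell you. This is a minimal sketch only: the file name and the "expected" columns are hypothetical stand-ins for your own extract and for whatever schema the vendor's demo assumes.

```python
# A quick data-quality probe to run before (or during) a vendor POC.
# Hypothetical example: "crm_export.csv" stands in for whatever messy
# extract your real systems produce.
import pandas as pd

df = pd.read_csv("crm_export.csv")

# How much of each column is actually populated?
missing = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:")
print(missing.head(10))

# Do your columns match the schema the vendor's demo assumed?
# (This expected set is invented for illustration.)
expected = {"account_id", "created_at", "region", "annual_revenue"}
actual = set(df.columns)
print("Columns the demo expects but your data lacks:", expected - actual)
print("Custom/legacy columns the demo never saw:", actual - expected)
```

If the gaps this surfaces don't come up in the vendor's pipeline story, that silence is itself an answer.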
2. Scripted, Polished Demos with No Room for Variability
If it feels like a concept car, shiny but impractical, it probably is.
Choreographed click-throughs are built to impress, not to operate in production. Often, what’s powering the demo is manual labor, not machine intelligence.
Ask this:
“Can we go off-script and try a custom use case?”
“Can we test how this handles edge cases in our workflow?”
What to look for:
Unedited recordings or live demos using client data, ideally from companies in your industry. The vendor should also be willing to build a small POC with your data to demonstrate how their product handles your specific use case.
3. Overuse of Buzzwords with No Substance
You’ll hear it all: “predictive AI,” “generative engines,” “neural this,” “automagic that.” If the vendor can’t clearly explain what kind of AI is powering their tool, or how it was trained, it’s probably marketing smoke.
Ask this:
“Is this supervised or unsupervised learning?”
“What data did you train it on, and what’s your model’s accuracy or recall?”
Quick test:
If they can’t describe the inputs, outputs, and core methodology in plain English, they either don’t understand it or don’t want you to.
You may be surprised by what you find out. Some "AI" models are actually deterministic, rules-based algorithms. That's fine in itself - but the vendor needs to be honest about it.
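For illustration, here's what a deterministic, rules-based "model" often looks like under the hood. The lead-scoring rules below are invented; the point is that nothing in them is learned from data.

```python
# Illustrative only: a "predictive AI" that is really a hand-written rule set.
# Every threshold here was chosen by a person, not learned from data.
def score_lead(lead: dict) -> str:
    """Hypothetical lead-scoring 'model' that is really just if/else rules."""
    if lead.get("annual_revenue", 0) > 10_000_000:
        return "hot"
    if lead.get("industry") == "healthcare" and lead.get("employees", 0) > 500:
        return "warm"
    return "cold"

print(score_lead({"annual_revenue": 25_000_000}))  # -> "hot"
```

Rules like these can be perfectly useful - they're transparent and cheap to run - but they shouldn't be sold to you as machine learning.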
4. Black-Box Models with No Explainability
If the vendor says, “It’s proprietary,” and can’t show you logs, decision trails, or model behavior, you’re exposing your organization to risk, bias, and regulatory blind spots. The vendor must be able to explain how their model behaves.
Moreover, many vendors' AI capabilities are simply thin wrappers around third-party models such as OpenAI's o3 or Anthropic's Claude Sonnet - another dependency you deserve to know about.
Ask this:
“How do we audit model decisions?”
“Where can we see confidence scores, rationale, or logs?”
What you want:
Explainable AI, with outputs you can trace, test, and dispute. Especially critical in compliance-heavy industries like banking, insurance, and healthcare.
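What does a traceable output actually look like? Here's a minimal sketch of one: every prediction logged with its inputs, output, confidence, and model version. The field names and JSONL format are assumptions, not any particular vendor's API - but this is the kind of artifact a vendor should be able to show you.

```python
# A minimal sketch of an audit trail: every prediction recorded with its
# inputs, output, confidence score, and model version, so decisions can
# be traced and disputed later. Field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_prediction(inputs: dict, output: str, confidence: float,
                   model_version: str, logfile: str = "predictions.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_prediction({"claim_id": "C-1042", "amount": 1800.0},
               output="flag_for_review", confidence=0.87,
               model_version="risk-model-2024-06")
```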
5. One-Size-Fits-All or “Set-It-and-Forget-It” Claims
There is no AI that works out of the box for every vertical, every dataset, and every use case. Anyone who says “you can go live in a day” is either lying or vastly oversimplifying the effort ahead.
Ask this:
“How do you tailor this for our domain, compliance needs, and operational scale?”
“What does a feedback loop look like post-deployment?”
Mature vendors will talk about:
Custom model retraining, versioning, data refresh cadences, and integration complexity, not shortcuts.
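To make "feedback loop" concrete, here's a hedged sketch of one simple post-deployment check: compare logged predictions against outcomes observed later, and flag when accuracy drifts below a threshold. It reuses the hypothetical log format from the earlier sketch; the threshold and schema are assumptions, not a prescription.

```python
# Hedged sketch of a post-deployment feedback loop: join logged predictions
# with ground-truth outcomes observed later, and flag accuracy drift that
# warrants retraining. Log format matches the earlier illustrative sketch.
import json

def accuracy_from_logs(log_path: str, outcomes: dict) -> float:
    """outcomes maps a record id to the ground-truth label observed later."""
    correct = total = 0
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            claim_id = rec["inputs"].get("claim_id")
            if claim_id in outcomes:
                total += 1
                correct += rec["output"] == outcomes[claim_id]
    return correct / total if total else float("nan")

acc = accuracy_from_logs("predictions.jsonl", {"C-1042": "flag_for_review"})
if acc < 0.80:  # retraining trigger, chosen arbitrarily for illustration
    print(f"Accuracy {acc:.2f} below threshold; schedule model retraining.")
```

A mature vendor will have something equivalent running already - and will be happy to walk you through it.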
Bonus Red Flags
They don’t ask about your data, workflows, or users.
They can’t provide concrete ROI metrics or case studies.
They reference “clients like you” without naming a single one.
Quick Checklist for Your Next AI Demo
| Red Flag | What to Ask |
| --- | --- |
| No live demo with your data | “Let’s test a real use case.” |
| Scripted demo flow | “Show me an unscripted, live interaction.” |
| Buzzword brochure | “Explain the model type, training data, metrics.” |
| Black-box approach | “How do you audit or trace model decisions?” |
| One-size-fits-all claim | “How is this tailored to our org and scale?” |
The Bottom Line
We are at a real inflection point in enterprise AI. But the winners won’t be the vendors shouting the loudest. They will be the ones building effective, contextualized, and explainable AI systems - models trained on data relevant to your domain and the specific problem at hand, and designed for your messy, nuanced environment.
So before you commit, push past the slides. Ask the hard questions.
Looking for AI Vendor Selection?
If you are evaluating vendors to find the best fit for your organization, let's talk. We've developed a comprehensive approach that helps CIOs evaluate vendors using a quantitative scorecard covering strategic, technical, and operational fit - taking the guesswork out so you can be confident you've made the best choice.
Get in touch to discuss our proven AI vendor selection system.