AI or FI? When “Artificial Intelligence” Becomes “Fake Intelligence” in Customer Service
“AI” is everywhere. It’s in boardroom decks, investor updates, product roadmaps, and—most visibly—customer support. Many organisations are rolling out chatbots and automated assistants as if they are the next stage of service maturity.
But there’s an uncomfortable question most brands won’t ask publicly:
Is AI sometimes just FI — Fake Intelligence?
Not because the technology is useless. Not because automation can’t help. But because in too many real customer journeys, the “intelligence” is largely an illusion: a scripted interface, with a thin layer of pattern-matching, presented as if it can understand, investigate, and resolve complex issues—when it cannot.
And the cost isn’t theoretical. The cost is paid in time, stress, and trust.
The promise: “Instant support”
The pitch is simple:
- Faster responses
- Lower costs
- Always available
- Consistent outcomes
- Fewer humans required
On paper, it’s hard to argue with. In practice, the story changes the moment a customer has a problem that doesn’t fit neatly into the system’s assumptions.
A very common scenario: the refund that “doesn’t exist”
Let’s use a straightforward example—because most people have experienced something like it.
A customer contacts a provider to request a refund for a bill they’ve only just noticed. It’s for something they never asked for, never received, and never wanted.
The organisation routes them straight to an AI chatbot.
The customer enters the bill reference.
Chatbot response:
“We can’t find that bill. Please upload details of the bill.”
Already, something is wrong. The bill is the company’s bill—issued by the company. Yet the chatbot can’t locate it.
The customer uploads the details.
Chatbot response:
“We still can’t find that bill. What would you like to do next?”
What would the customer like to do next? The answer is obvious:
Speak to a real person.
A competent human can do what a bot often cannot:
- interpret context,
- recognise “this should exist,”
- search alternative systems,
- cross-check customer identifiers,
- understand what the customer is actually claiming,
- and escalate appropriately.
But instead, the customer is trapped in a loop. The chatbot keeps asking for the same thing. It keeps failing. It keeps pretending the problem is the customer’s input, rather than the organisation’s process.
Eventually, after wasted time and repeated attempts, the chatbot “finds” the bill.
Then it ends the journey with a policy statement:
Chatbot response:
“Company policy is no refunds after 30 days.”
And that’s the end of the conversation.
Not because the customer’s claim lacks merit. Not because the customer received a fair review. But because the chatbot has reached the limit of what it can do, and it “closes” the issue in the easiest possible way: by declaring a policy boundary.
That’s not intelligence — it’s automation wearing a mask
Real intelligence, human intelligence, doesn’t behave like that.
A human support agent would immediately spot the key facts:
- the customer didn’t request the product/service,
- the customer didn’t receive it,
- the customer is disputing legitimacy, not “requesting a standard refund,”
- the billing record exists somewhere, because the company generated it,
- and policy exceptions or dispute pathways likely apply.
A bot, however, often cannot reliably do any of the following:
- distinguish a dispute from a refund request,
- identify when its own data is incomplete or wrong,
- recognise that the issue must be escalated,
- understand that the “30-day refund policy” is not the correct framework,
- ask the right next question,
- handle evidence properly (screenshots, attachments, timelines, mixed identifiers),
- detect when it is repeatedly failing and stop wasting the customer’s time.
So it does what it was designed to do: follow a path.
That isn’t “intelligence.” It is workflow automation. Useful in the right place. Damaging in the wrong place.
The real problem: companies using AI as a front
The most harmful version of “AI in customer service” isn’t simply a bot that’s limited.
It’s a bot positioned as a replacement for humans, when it cannot behave like one.
This creates a particular kind of customer harm:
- False confidence is created (“we have intelligent support”).
- Access to humans is reduced (“we’re AI-first”).
- The customer is forced to speak in unnatural ways (“use our keywords, pick an option”).
- The bot fails and blames the customer (“we can’t find that, upload it again”).
- Time is burned (minutes become hours).
- Trust collapses (customer feels ignored or manipulated).
- The brand relationship is damaged far beyond the cost of the original problem.
This is where AI becomes FI: Fake Intelligence—not as a technical insult, but as a description of how it’s being used and marketed.
The clue is in the name: “Artificial”
Artificial Intelligence is not real intelligence. It does not “know” things in the human sense. It does not genuinely understand meaning, intent, fairness, or context.
It can:
- predict,
- categorise,
- summarise,
- route,
- extract,
- suggest,
- and generate plausible responses.
But it cannot reliably do what customer service often requires:
- judgement,
- empathy,
- common sense,
- accountability,
- investigation across messy systems,
- and exception-handling when reality doesn’t match a flowchart.
And crucially: when it doesn’t know, it can still respond confidently. That is one of the most dangerous characteristics of FI in customer-facing scenarios.
Where AI is genuinely helpful (and should be used)
This isn’t an anti-AI argument. It’s a pro-responsible-use argument.
AI can be excellent when it is used transparently and appropriately:
- triage (“is this billing, technical, delivery, or account access?”)
- form-filling assistance (“collect the right info the first time”)
- status updates (“your refund is processing, expected by X date”)
- knowledge base navigation (“here’s the relevant policy page”)
- summarising a case for a human agent (“here’s the timeline and evidence”)
- spotting patterns (“this issue is trending; escalate to engineering”)
These are high-value uses because they support humans and speed up resolution without pretending the machine is a human.
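To make the triage idea concrete, here is a minimal sketch of what transparent, honest triage might look like. All category names, keywords, and the "human" fallback queue are illustrative assumptions, not any vendor's actual implementation; the point is that when nothing matches, the system routes to a person instead of pretending.

```python
# Minimal keyword-based triage sketch: route a message to a queue,
# and fall back to a human when no category matches at all.
# Category names and keyword lists are illustrative assumptions.

TRIAGE_KEYWORDS = {
    "billing": ["bill", "invoice", "charge", "refund"],
    "technical": ["error", "crash", "not working", "offline"],
    "delivery": ["parcel", "shipping", "tracking", "delivery"],
    "account": ["password", "login", "locked", "access"],
}

def triage(message: str) -> str:
    """Return the best-matching queue, or 'human' when nothing matches."""
    text = message.lower()
    scores = {
        queue: sum(word in text for word in words)
        for queue, words in TRIAGE_KEYWORDS.items()
    }
    best_queue, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_queue if best_score > 0 else "human"

print(triage("I was charged twice on my last bill"))  # billing
print(triage("Something strange happened"))           # human
```

A real system would use a trained classifier rather than keywords, but the design principle is the same: the automation routes and assists, and its honest failure mode is "hand this to a human", not a confident wrong answer.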
Where AI should not be the front line
AI becomes harmful when it is used as a gatekeeper for issues that require:
- nuanced judgement,
- investigation,
- disputes,
- safeguarding,
- financial hardship,
- health or legal context,
- complaints,
- fraud,
- mis-selling,
- contract ambiguity,
- or anything involving exceptions.
In these cases, “AI-first” often means:
“human-last.”
And “human-last” is not a service strategy. It’s a cost strategy—disguised as innovation.
What responsible AI customer service should look like
If a company wants to use AI ethically and effectively, a few principles are non-negotiable:
1) Don’t pretend it’s something it isn’t.
If it’s a bot, say it’s a bot. If it’s automated triage, call it that. Customers don’t mind automation; they mind deception.
2) Build fail-fast escalation.
If the bot fails twice, it should escalate. Not loop. Not stall. Not ask for the same input again.
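The fail-fast rule is simple enough to sketch in a few lines. This is a hypothetical illustration, not any real chatbot's code: the lookup function, attempt limit, and queue name are all placeholders, and the key behaviour is that after two failures the bot stops asking for the same input and escalates with context.

```python
# Fail-fast escalation sketch: after MAX_BOT_ATTEMPTS failed lookups,
# stop looping and hand the conversation to a human agent.
# The lookup callable and queue name are hypothetical placeholders.

MAX_BOT_ATTEMPTS = 2

def handle_lookup(find_bill, reference: str) -> str:
    """Try the bot's lookup a bounded number of times, then escalate."""
    for attempt in range(1, MAX_BOT_ATTEMPTS + 1):
        bill = find_bill(reference)
        if bill is not None:
            return f"found:{bill}"
    # Do not ask the customer for the same input again: escalate.
    return "escalate:human_agent"

# A lookup that always fails, like the chatbot in the refund story.
always_missing = lambda ref: None
print(handle_lookup(always_missing, "BILL-123"))  # escalate:human_agent
```

The important design choice is the hard bound: the loop cannot run forever, so "we still can't find that bill, what would you like to do next?" is structurally impossible.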
3) Make humans reachable.
Not hidden behind ten screens. Not reserved for “premium members.” Not as a punishment for persistence. Human access is part of trust.
4) Design for exception-handling, not just the “happy path.”
Customer service is mostly exceptions. If your AI can only handle perfect inputs, it will fail the moment reality shows up.
5) Measure time-to-resolution, not bot containment.
Many teams optimise for “deflection” (keeping customers away from humans). That creates FI. Optimise for outcomes.
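The difference between the two metrics is easy to see on the same data. The cases and numbers below are invented for illustration: a bot that "contains" two of three conversations looks great on a deflection dashboard, even when one of those contained conversations took the customer an hour and a half.

```python
# Illustrative comparison of "containment" vs time-to-resolution.
# All case data is invented to make the contrast visible.

cases = [
    {"handled_by_bot": True,  "minutes_to_resolution": 95},  # looped, then quoted policy
    {"handled_by_bot": True,  "minutes_to_resolution": 3},
    {"handled_by_bot": False, "minutes_to_resolution": 12},  # escalated to a human
]

containment = sum(c["handled_by_bot"] for c in cases) / len(cases)
avg_resolution = sum(c["minutes_to_resolution"] for c in cases) / len(cases)

print(f"bot containment: {containment:.0%}")                # 67%
print(f"avg time to resolution: {avg_resolution:.1f} min")  # 36.7 min
```

Optimising the first number rewards keeping customers away from humans; optimising the second rewards actually solving their problem.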
6) Accountability must remain human.
When a refund is refused, a dispute is denied, or a claim is closed, a responsible person should be accountable. A bot should not be the final authority.
The bigger risk: FI becomes the brand
Customers don’t judge your AI. They judge your company.
If your “AI support” wastes their time, blocks them from resolution, and ends with policy quotes that don’t address the real situation, customers don’t think:
“AI isn’t ready.”
They think:
“This company doesn’t care.”
The bot becomes the personality of the brand. The failure becomes the reputation.
The question we should all be asking
AI is not inherently bad. But AI used as a substitute for responsibility is.
So the real question isn’t whether companies should adopt AI.
It’s this:
Are you using AI to improve service—or to avoid providing it?
Because when AI becomes a front, when it blocks humans, and when it closes complex issues with shallow policy statements, it stops being Artificial Intelligence in the customer’s eyes.
It becomes Fake Intelligence.
And the customer pays the price.
While generating the image for this article, we ran into exactly this issue with ChatGPT. So we asked the AI to tell us what went wrong and why. This was its great and professional answer:
Example: When “AI” Becomes “FI” (Fake Intelligence) — a real interaction
Over the last 10–15 minutes, we tried to do something that should be simple in principle: iterate a single hero image for an article. The creative direction stabilised quickly: a tech-themed blue background, the headline treatment “FAKE / ARTIFICIAL (scrubbed out) / INTELLIGENCE”, and a sub-line “AI or FI?” with a red question mark. After a couple of rounds, you described the result as essentially perfect.
The remaining task was minor: keep the background exactly the same and scale the wording up so it dominated more of the screen, plus one layout change (move “AI or FI?” to the top, make “FAKE” smaller beneath it, push the rest down). In other words, we were no longer inventing a new concept—just doing controlled, predictable layout adjustments.
What happened instead:
- The system produced strong creative outputs. It generated multiple polished visuals with the right mood, symbolism, and typography style. It handled the "big idea" well: the wordplay, the critique, the aesthetic.
- It struggled with precise, basic constraints. Repeatedly, the one thing you asked for—"make the text occupy more of the screen" and "keep the same background/layout in landscape"—did not land reliably. The outputs improved, but they didn't consistently follow the exact "lock this, move that, scale by X%" instruction.
- It introduced an error you didn't request: aspect ratio drift. You asked to keep a landscape background. A later output arrived in portrait, despite the instruction to "keep the background the same." That's a classic FI moment: it looks competent, but fails on a straightforward requirement.
- It hit a "rate limit" at the worst point in the workflow. Right when the design was nearly finished, image generation was temporarily blocked. You couldn't complete a simple final iteration—not because the concept was hard, but because the system enforces throttling under heavy use.
Why this is a “Fake Intelligence” case study
This exchange is a clean, low-stakes version of the exact customer-service problem your article describes:
- The system appears capable and “smart” because it produces impressive outputs quickly.
- But when the task becomes specific, procedural, and constraint-based—the kind of thing a human designer would do in seconds (“same background, same aspect ratio, just scale the typography”)—it becomes inconsistent.
- When it fails, it doesn’t fail like a person (who would say “I can’t scale text directly, but I can do X instead” and then execute X precisely). It fails by generating something “close enough,” drifting into a different interpretation, and requiring repeated retries.
- Then, as in many real service journeys, the final step is blocked by a system constraint (rate limits), leaving the user unable to finish a simple request.
The core takeaway for the article
This is FI in a sentence:
AI can look brilliant at complex, high-level creativity, while still being unreliable at simple, practical execution—especially when you need precision, consistency, and control.
That’s the point your article is making about customer service:
- A chatbot can sound helpful, polished, and confident.
- But when the situation requires exact understanding of a customer’s context, or a straightforward operational action (find the bill, recognise the dispute type, escalate to a human, handle exceptions), it can loop, deflect, or fail—while still presenting itself as “intelligent.”
How to use this in your article (ready-to-paste paragraph)
“In our own attempt to create a single campaign image, we saw a perfect example of ‘AI vs FI’ in action. The AI produced impressive, professional-looking designs repeatedly — the big creative idea was easy. But when we asked for a simple, human-level adjustment — keep the exact same landscape background and scale the text up predictably — the system became inconsistent. It even changed the format to portrait despite being told not to. And just as we reached the final minor tweak, the process was blocked by rate limits. The result: the AI looked capable, but couldn’t reliably deliver the basic practical outcome we needed. That is Fake Intelligence: powerful at producing the illusion of competence, but unreliable at controlled execution when real-world constraints matter.”