
AI is Neither the Magical Replacement for Human Analysts Nor a Useless Gimmick

by Vitalii Kamynin, April 10th, 2025

Too Long; Didn't Read

AI is being used to help analysts with routine tasks. But it can also be a real contender on the analytics team. We'll walk through real-world use cases with ChatGPT to show how AI can complement analysts' work.


Every day, every minute, and every second, AI is digging deeper into data analytics. It automates routine tasks, writes SQL queries, cleans messy datasets, builds charts — and, at first glance, even seems to understand what it's doing. Well... almost.


This raises a very real, slightly unsettling question: Will AI replace data analysts? Or at least those junior analysts still wrangling spreadsheets in Excel?


On the surface, it makes sense. Analytics is about spotting patterns, finding correlations, and building scripts. Can AI find correlations? Can it structure data? It can. Put it all together, and AI starts to look like less of an assistant — and more like a real contender on the analytics team.


But there’s a catch. Several, in fact.

AI in Analytics: Between Expectations and Reality

If you take a closer look at what’s being written about AI in data analytics today, you’ll notice a clear divide in perspectives.


On one side are the overly optimistic takes, usually aimed at beginners. Here, AI is positioned as a universal solution: it quickly finds patterns, cleans data, writes SQL queries, and even draws conclusions. After reading a few of these, you might start to believe analysts are on the verge of becoming obsolete.


On the other side are experienced professionals raising very valid concerns. These articles focus on AI’s weaknesses: how it struggles with complex cases, generates questionable insights, and doesn’t always grasp what it’s dealing with. The takeaway often sounds like: “AI isn’t a replacement for humans — it’s just a lightweight helper.”


But both sides are missing the complete picture. The first tends to overestimate what AI can do. The second risks underestimating the real value it already brings to day-to-day analytical work. So what are we left with? Either buy into the hype, or ignore a tool that’s already capable of saving hours of effort.


The goal of this article is to offer a third perspective: a practical take from those already using AI in real workflows. Let’s see where it helps, where it fails, and how to actually make it work without expecting miracles.


We’ll walk through real-world use cases with ChatGPT, taking a clear look at how AI can complement the analyst’s work and why thoughtful integration is no longer just a trend but a new layer of professional reality.

Simple Technical Tasks: How Good Is AI, Really?

Before we dive into the examples, here’s a bit of context.


I’m the founder and former CTO of Infolek (until 2022) and currently the CTO of OMNI Digital. I'm focused on building innovative solutions for automating and optimizing business processes in pharmacies and pharmaceutical distribution, including in Kazakhstan. The examples below come from real-world challenges we’ve encountered — drawn directly from actual projects.


Example 1: Parsing Kazakhstan Addresses into Segments


Imagine a dataset with 30,000 addresses from Kazakhstan. The task is to split each address into separate components: region, district, locality, street, and building number.


A straightforward prompt like “split the address into these parts” doesn’t quite work — the structure of the data is too complex. There are many nuances. For instance, if you try to split by delimiter, the number of segments can vary widely — from 3 (e.g., Almaty, Baizakov Street, bld.7) to 8 or more (e.g., Kazakhstan, Aktobe Region, Aktobe, Almaty District, Kargaly Residential Area, Yeset Batyr Microdistrict, 1st Microdistrict, bld.34). Some addresses even contain multiple nested localities. In short — the rules aren’t consistent.
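The inconsistency is easy to see with a naive delimiter split in Python. The two addresses below are the samples from the text; everything else is illustrative:

```python
# Naive comma splitting of the two sample addresses:
# segment counts vary wildly, so no fixed positional mapping works.
addr_short = "Almaty, Baizakov Street, bld.7"
addr_long = ("Kazakhstan, Aktobe Region, Aktobe, Almaty District, "
             "Kargaly Residential Area, Yeset Batyr Microdistrict, "
             "1st Microdistrict, bld.34")

print(len(addr_short.split(", ")))  # 3 segments
print(len(addr_long.split(", ")))   # 8 segments
```

With the same delimiter yielding anywhere from 3 to 8+ pieces, a one-shot "split the address" prompt has nothing stable to hold on to — hence the iteration.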


The solution? Iteration. Prompt → result → refinement → another prompt. Rinse and repeat — until it works.


Solving this manually takes a lot more time. But here’s where AI shines:


Given a precise prompt, it can quickly test hypotheses, generate scripts or regular expressions, and adapt to complex conditions. It easily picks out specific subtasks when they’re clearly formulated — like this one:


“Create a new column adr1_hc2 that contains a substring composed of the final address segments, joined by commas, that were not previously classified. Extract the digits starting from the first encountered number, up to the first non-numeric character…”
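Here is a minimal sketch of the kind of code such a prompt tends to produce. The function name and example segments are illustrative assumptions, not the project's actual script:

```python
import re

def extract_building_number(unclassified_segments):
    """Join the trailing unclassified segments with commas, then pull
    the digits from the first number up to the first non-numeric
    character (illustrative version of the subtask quoted above)."""
    tail = ", ".join(unclassified_segments)
    match = re.search(r"\d+", tail)
    return tail, (match.group(0) if match else None)

extract_building_number(["Baizakov Street", "bld.7"])
# → ("Baizakov Street, bld.7", "7")
```

The point isn't that this regex is hard — it's that a tightly scoped, unambiguous subtask like this is exactly what the model executes reliably.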


With clear instructions, AI becomes a powerful helper — fast, consistent, and surprisingly accurate in handling technical logic.

Bulk Data Validation: Where AI Hits Its Limits (and Then Pushes Past Them)

Example 2: Validating Kazakh Rural Districts


Let’s say you have a spreadsheet with 1,000 rural districts in Kazakhstan — and a list of assumed matches to the administrative regions they belong to. The problem? There’s no reliable open-source table you can just plug into your model. A human would have to search for each entry manually — one by one.


ChatGPT handles single-row validation pretty well. Give it one district, and it can cross-check, reason, and suggest corrections. But try to scale that to 1,000 rows in the standard interface, and things get tricky. Fast.


Ask it to process the full table in one go, and ChatGPT will politely (and sometimes not so politely) refuse. It starts throwing limitations and safety warnings or just says, "I’m sorry, I can’t help with that."


The workaround? Use the API.


By switching to the API and automating the prompt flow, the task became totally manageable — and was completed successfully. What took hours of manual effort could now be done consistently at scale.
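As a rough sketch of what that flow can look like against OpenAI's Chat Completions HTTP endpoint — the model name, prompt wording, and response handling here are my assumptions, not the exact production setup:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(district, assumed_region, model="gpt-4o-mini"):
    """One validation request for a single spreadsheet row."""
    prompt = (
        f"Rural district: {district}\n"
        f"Assumed region: {assumed_region}\n"
        "Is this district really in this region? Answer YES or NO; "
        "if NO, name the correct region."
    )
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0}

def validate_row(district, assumed_region):
    """POST one row to the API and return the model's verdict."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(district, assumed_region)).encode(),
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()

# Validating 1,000 rows then becomes a plain loop over validate_row —
# no chat-window refusals, and every prompt stays identical.
```

The win isn't just scale: a fixed prompt template means every row is checked by the same criteria, which the copy-paste chat workflow can't guarantee.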


The key takeaway:


AI can absolutely assist with bulk data validation — but it needs the right infrastructure. For anything beyond small-scale checks, you’ll need to go beyond the chat window and tap into automation.

Advanced Analytics: When AI Starts to Struggle

Example 3: Investigating Data Discrepancies in Pharmaceutical Reports


A pharmaceutical distributor sends monthly reports. One month, something’s off — 95% of customer names and addresses no longer match any previous reports. It’s not just a formatting issue — it looks like a completely different dataset.


A quick manual review showed that about 25% of the records could be partially matched using semi-automated methods. The rest? They needed full-on human review.
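One common shape of such a semi-automated pass is fuzzy string matching. Here's a hedged sketch using Python's stdlib `difflib`; the customer names and the 0.8 threshold are illustrative, not from the actual reports:

```python
from difflib import SequenceMatcher

def best_match(name, previous_names, threshold=0.8):
    """Return the most similar previously seen name,
    or None if nothing clears the similarity threshold."""
    score, match = max(
        (SequenceMatcher(None, name.lower(), p.lower()).ratio(), p)
        for p in previous_names
    )
    return match if score >= threshold else None

best_match("Pharm-Service LLP", ["PharmService LLP", "MedKaz Ltd"])
# → "PharmService LLP"
```

This catches punctuation and spacing drift, but it's exactly the kind of method that tops out around partial coverage: genuinely renamed or restructured records score low and fall through to a human.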


So, we decided to put AI to the test.


  • First Attempt: Let AI figure it out on its own. We uploaded both files and asked ChatGPT to identify discrepancies and explain what was happening. The result was a complete failure: the model didn't even come close to identifying the issue — it just highlighted the obvious mismatches without meaningful analysis.


  • Second Attempt: Help it along. We gave ChatGPT detailed context: data structure, business logic, reporting rules, possible scenarios, and hypotheses to test. We guided it step by step. Still no luck. The model either offered vague generalizations or latched onto irrelevant details. No progress.


  • Final Attempt: Use AI as a tool, not a brain. A human analyst took over. Using ChatGPT for routine checks, hypothesis scripting, and basic structuring, the analyst tested several theories in about 30–40 minutes. Eventually, a pattern emerged: the issue lay in the distributor’s reporting system — confirmed later by their tech support.


The takeaway:


When it’s about structuring or visualizing known data, AI does well. But when the task requires identifying hidden patterns, dealing with non-linear relationships, or forming new hypotheses — AI hits its current limits.


It’s not that it’s terrible. It’s just not a replacement for experience, context, and human intuition — at least not yet.


AI is great at the boring stuff. It handles repetitive tasks like text parsing, pattern recognition, and basic structuring, and writes scripts and regex faster than most junior analysts. It can even run simple forecasts — but let’s be clear: the quality of those forecasts still depends on a human analyst setting the direction, checking the assumptions, and approving the results.


AI can also help visualize data — as long as the metrics are well-defined. But when it comes to identifying non-standard KPIs, it often misses the point entirely. Why? Because novelty isn’t its strength — it's still pattern-matching inside known boundaries.


Need to scale? That’s where APIs come in. With the right setup, you can automate large-scale tasks that would normally eat up hours (or days) of manual work. Bulk validation, mass transformations, structured prompting — all of it becomes doable.


But there’s a catch. AI still throws out false correlations and misleading conclusions — especially when your data is messy, incomplete, or when the problem isn’t clearly defined. That’s when a human needs to step in. Not just to review the output but to ask the right questions in the first place.


The hard truth? You still need a human brain for anything nuanced, nonlinear, or business-specific. Analysts know how to adapt methods to context, create custom metrics, and see patterns no model would notice. Humans bring intuition. AI brings speed. And right now, that’s the real power combo.