April 17, 2026

why data teams disagree even when everyone is right

How we see the world is shaped by what we spend time doing. The longer we stay in one domain, the more that lens hardens.

You see it clearly in data teams.

Put a data engineer, an analyst, and an analytics engineer in the same room and ask a simple question:

What does good data look like? You will not get one answer. You will get three.

Because they are not just doing different jobs. They are thinking in different models.

Start with the engineer.

When an engineer looks at data, they do not first see revenue, customers, or churn.

They see structure. Tables. Partitions. Schemas. Pipelines. Latency. Failure points.

To them, data is not insight yet. It is a system in motion. One that can fail in subtle ways if not handled carefully. Their instinct is to ask questions most people skip:

Where does this data come from?
How often does it arrive?
What happens if it fails?
Will this still work when usage doubles or triples?

In their world, correctness means guarantees. A dataset is only "good" if it is reproducible, consistent, and resilient under pressure.

That is why engineers hesitate when asked to quickly produce a metric.

It is not resistance. It is memory. They have seen what happens when something fragile becomes something important.

A beautiful dashboard built on an unstable pipeline is not success. It is a delayed outage. So engineers build foundations. Quietly. Methodically. And sometimes, from the outside, slowly.

Now shift perspective.

When an analyst looks at data, they do not see systems. They see meaning.

Trends. Behavior. Patterns. Opportunities.

To them, data is not infrastructure. It is a story waiting to be told. Their instinct is different:

What is happening?
Why is it happening?
What should we do about it?

They care less about how the data arrived and more about what it reveals. In their world, correctness is not just about pipelines. It is about interpretation.

A dataset is only useful if it reflects reality and helps someone make a decision. That is why analysts push for speed.

Not because they are careless, but because they understand a different risk. Perfect data delivered too late is often useless.

So they explore. They iterate. They ask uncomfortable questions.

And sometimes, they build logic in places engineers would never approve of. Because the alternative is no answer at all.

Now place these two perspectives side by side. The tension becomes obvious.

Engineers optimize for stability and scale. Analysts optimize for speed and meaning.

So when an analyst says, "Can we just get this metric quickly?" the engineer hears, "Can we cut corners on reliability?"

And when an engineer says, "We need to redesign the pipeline first," the analyst hears, "We are delaying answers again."

Neither side is wrong. They are solving for different risks.

Engineers are trying to prevent systems from breaking. Analysts are trying to prevent decisions from being made in the dark.

Both fears are valid. This is where the analytics engineer comes in.

Not as a third silo, but as a bridge.

If engineers think in systems and analysts think in questions, analytics engineers think in transformations.

Joins. Grain. Models. Lineage. Consistency.

But more importantly, they ask a different question:

What should this data represent? They care about definitions.

What exactly is revenue?
What counts as an active user?
At what level of detail should this dataset exist?
How do we make this reusable so no one has to rebuild it?

If engineers bring structure and analysts bring meaning, analytics engineers bring shape.

They take raw data and turn it into something both trustworthy and usable. They are the ones who say:

Let us not just answer this once. Let us define it properly so we never have to ask it again.

For example:

You are working at a company that sells products online, and someone asks: What is our revenue this month?

It sounds simple. But watch how each role approaches it.

The data engineer looks upstream.

Revenue depends on orders, payments, refunds.

Are they coming from different systems?
Do they arrive in real time or in batches?
What happens if events arrive out of order?
How do we handle duplicates?

Their concern is not the number.

It is whether the data behind the number can be trusted.

Meanwhile, the analyst is already writing a query.

They sum successful payments for the month and move on.

It is fast. It is useful. It answers the question. But there are gaps.

What about refunds?
What about partial payments?
What is the official definition of revenue?
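The analyst's quick pass can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual query: the record shape and field names are assumptions, and the comments flag exactly what the fast answer leaves out.

```python
from datetime import date

# Hypothetical payment records; field names are assumptions for illustration.
payments = [
    {"amount": 120.0, "status": "succeeded", "paid_at": date(2026, 4, 2)},
    {"amount": 80.0,  "status": "succeeded", "paid_at": date(2026, 4, 15)},
    {"amount": 50.0,  "status": "failed",    "paid_at": date(2026, 4, 16)},
    {"amount": 200.0, "status": "succeeded", "paid_at": date(2026, 3, 28)},
]

# The quick answer: sum successful payments for the month and move on.
# Note what it silently skips: refunds, partial payments, the official definition.
revenue_april = sum(
    p["amount"]
    for p in payments
    if p["status"] == "succeeded"
    and p["paid_at"].year == 2026
    and p["paid_at"].month == 4
)
print(revenue_april)  # 200.0
```

Fast, useful, and decision-ready, which is exactly the analyst's point. The gaps only bite later, when two dashboards disagree.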

Then the analytics engineer reframes the problem.

Not, what is revenue right now?

But, what should revenue mean for the company?

Should it be based on orders or payments?
Do we recognize it before or after payment?
How do refunds affect it?

They build a model.

A reusable definition. Now every dashboard and report uses the same logic. The question does not just get answered. It gets standardized.
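A minimal sketch of what that standardized definition might look like, assuming one possible policy (successful payments minus refunds, each recognized in the month it occurs). The names and the refund-handling choice are illustrative assumptions, not the only valid accounting:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Payment:
    amount: float
    status: str       # e.g. "succeeded" or "failed" (assumed statuses)
    paid_at: date

@dataclass(frozen=True)
class Refund:
    amount: float
    refunded_at: date

def monthly_revenue(payments, refunds, year, month):
    """One shared definition of revenue: successful payments minus
    refunds, both recognized in the month they occur."""
    gross = sum(
        p.amount for p in payments
        if p.status == "succeeded"
        and p.paid_at.year == year and p.paid_at.month == month
    )
    refunded = sum(
        r.amount for r in refunds
        if r.refunded_at.year == year and r.refunded_at.month == month
    )
    return gross - refunded
```

The value is not the arithmetic. It is that every dashboard calls `monthly_revenue` instead of re-deriving its own version of the number.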

The same pattern shows up everywhere.

Take another example: active users.

An engineer thinks about processing billions of events efficiently. An analyst counts logins over 30 days. The analytics engineer asks: What does active actually mean?
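The same move works here: pin the definition down once, with the window and the qualifying events as explicit parameters. A hedged sketch, with illustrative names and an assumed event shape:

```python
from datetime import date, timedelta

def active_users(events, as_of, window_days=30, qualifying=frozenset({"login"})):
    """A user counts as 'active' if they performed a qualifying event
    within the trailing window. Both the window and the event types
    are policy choices made explicit, not facts about the data."""
    cutoff = as_of - timedelta(days=window_days)
    return {
        e["user_id"]
        for e in events
        if e["type"] in qualifying and cutoff < e["at"] <= as_of
    }
```

Whether a page view should count, or whether 30 days is the right window, is exactly the conversation the analytics engineer forces before the metric ships.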

Same problem. Three different starting points.

This is why so many data problems feel harder than they should be.

A dashboard is wrong, not because the SQL is broken, but because definitions do not align.

A pipeline is delayed, not because engineers are slow, but because reliability was underestimated.

A model goes unused, not because it is incorrect, but because it does not match how the business thinks.

When teams do not recognize these differences, they misread each other. The real strength of a data team does not come from choosing one mindset. It comes from layering them.

Engineers make data trustworthy. Analytics engineers make it usable. Analysts make it useful. Remove any one of these, and things fall apart.

Together, they form more than a pipeline. They form a system of understanding.

- dr. calculus

