Why do I need Datamin?
A vitamin for your data
  • of workers say they could save 6 or more hours a week with automation
  • of data collected today has lost some, or even all, of its business value
  • of companies still use manual processes for monitoring, changes, incidents, exceptions, and disruptions
  • and that’s a minimal cost they pay every month for fixing the consequences
Every team in your organization wants to:
  • Have easy access to data
  • Be able to analyze data without writing code or SQL queries
  • Make data-driven decisions without waiting for BI or Data Engineers to help them
This is called a Data-Driven Culture!
Data-Driven Culture requires Operational Analytics
Traditional data analysis is typically based on large volumes of historical data. That is valuable, but it is not enough in today’s world. Unlike traditional Data Analytics, Operational Analytics works with the data that helps you make the decisions that matter right now.
Examples of the data Operational Analytics deals with:
  • Orders that were placed in the last hour.
  • Bank transactions that were processed within the last day.
  • Users that created an account yesterday but still haven’t completed it.
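As a minimal illustration (the function, record shape, and field names are our own, not Datamin’s), the “orders placed in the last hour” example boils down to filtering records by a freshness cutoff:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of an operational-analytics filter: keep only records
# fresh enough to act on, e.g. orders placed in the last hour.
# The record shape ({"id": ..., "placed_at": ...}) is illustrative.
def placed_in_last_hour(orders, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=1)
    return [o for o in orders if o["placed_at"] >= cutoff]

# Usage
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
orders = [
    {"id": 1, "placed_at": now - timedelta(minutes=10)},  # fresh
    {"id": 2, "placed_at": now - timedelta(hours=3)},     # stale
]
fresh = placed_in_last_hour(orders, now=now)  # keeps only order 1
```

In a real pipeline the cutoff would be applied in the source query itself, but the idea is the same: operational analytics only looks at data that is still actionable.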
The three main pillars of Operational Analytics software are:
  • Configuring data expectations (OKRs, KPIs, metrics, SLAs, etc.)
  • Comparing the status quo with the expected value
  • Automating actions based on the result of this comparison
And this is exactly what Datamin does.
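The three pillars can be sketched in a few lines of Python. This is a hypothetical illustration of the expectation–compare–act loop, not the Datamin API; all names here are our own:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    name: str                      # e.g. an SLA, KPI, or metric
    expected: float                # pillar 1: the configured expected value
    measure: Callable[[], float]   # how to obtain the current value
    action: Callable[[str], None]  # pillar 3: what to do on violation

def evaluate(exp: Expectation) -> bool:
    """Pillar 2: compare the status quo with the expectation, then act."""
    current = exp.measure()
    ok = current >= exp.expected
    if not ok:
        exp.action(f"{exp.name}: expected >= {exp.expected}, got {current}")
    return ok

# Usage: alert when fewer orders than expected were placed in the last hour.
alerts = []
check = Expectation(
    name="orders_last_hour",
    expected=100,
    measure=lambda: 42,   # stand-in for a real query against fresh data
    action=alerts.append, # stand-in for e.g. a Slack or PagerDuty notification
)
evaluate(check)           # appends an alert, since 42 < 100
```

The point of the sketch is the separation of concerns: the expectation is declarative configuration, the comparison is generic, and the action is pluggable automation.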
What makes Datamin different from data observability software (Monte Carlo, BigEye)?
Traditional data observability software focuses on data quality and data infrastructure. In other words, it tracks how data is created or updated, which source it comes from, where it goes next, and whether it arrives in the expected state. The main task of data observability is to detect problems in data pipelines, infrastructure, or transformation/aggregation algorithms, and to notify data, DevOps, or software engineers so that they can fix them. This also makes such platforms extremely tech-heavy. Unlike Datamin, they involve no task automation or operational analytics.
How is Datamin different from BI software (Tableau, Looker)?
BI tools are visualization platforms that typically work with historical data. Their main use case is visualizing the long-term perspective, when users have enough time to make a decision. Datamin is operational software that works with fresh data to automate the decisions you need to make immediately, because your colleagues, customers, or partners are waiting for them.
How is Datamin different from workflow automation software (Zapier, n8n)?
Workflow automation is not the core of Datamin as a product; we use workflows only because they provide a familiar, comfortable user experience. Workflow automation tools are fundamentally event-driven, which makes it difficult to run analysis across multiple data sets and then trigger a workflow from the result. The main task of Datamin, as operational analytics software, is to automate data analytics and the decision-making based on its results, which is different from passively waiting for a trigger.
What about Reverse ETL: Hightouch, Census?
The answer here is similar to the previous one. Reverse ETL is focused on copying data from one place to another without deeply analyzing the data itself.
Do you want to say that Datamin is better than all these products?
No, that’s not what we want to say. These products are all different and solve different problems. Many of the tools mentioned above, such as Tableau, can be integrated with Datamin into one powerful data stack. You can read more about how to connect Datamin to Tableau in our documentation.
If, even after reading this page, you still have doubts about whether you should give Datamin a try: