Process Analysis
Process analysis is the discipline of examining how work actually happens — not how it is supposed to happen — and identifying the gap between the two. It is the analytical foundation of every process improvement and automation initiative. Without it, you are optimizing assumptions rather than reality.
Introduction to Process Mining
Process mining is the discipline of reconstructing actual process execution from digital footprints — the event logs that every IT system generates when work happens. It answers a question that interviews and workshops cannot: not how people say the process works, but how it actually ran, in every variant, across every transaction, over any time period.
| Technique | What It Reveals | Best Used For | Limitation |
|---|---|---|---|
| Process Observation | What actually happens — workarounds, informal steps, real exception handling | Operational processes with visible execution | Time-consuming; observer effect can alter behavior |
| Stakeholder Interviews | The “why” behind current design; political constraints; known pain points | Understanding context and history | Subjective; people describe the intended process, not the actual one |
| Data & Log Analysis | Volume, cycle times, error rates, exception frequency — objective system evidence | Any system-supported process | Shows what, not why; requires data access |
| Value Stream Mapping | Where time is spent: value-added vs. wait vs. waste across the full end-to-end flow | End-to-end process improvement | Requires significant cross-functional collaboration |
| Process Mining | Actual execution paths derived from system logs — all variants, not just the intended path | Complex, high-volume system-supported processes | Requires structured event log data |
Every time a user opens a case, updates a record, or triggers a workflow step, the system writes an event to a log: who, what, when. Process mining reads millions of these events and reconstructs the actual process as a map — showing every path taken, how frequently, how long each step took, and where it deviated from the intended design. This is not sampling. It is the complete picture. Typical questions it answers:
- What paths does the process actually take — and how many variants exist?
- What percentage of transactions follow the intended path vs. deviations?
- Where do rework loops occur and how frequently?
- How much time does each step take — average, median, and worst-case?
- Where are the bottlenecks — which steps create queue buildup?
- Are compliance steps being executed in the required sequence and by the required roles?
- Which cases deviated from the approved process — and why?
- Which process variants are candidates for automation?
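The reconstruction itself is conceptually simple: group events by case ID, order each case's trace by timestamp, and count the distinct activity sequences (variants) that emerge. A minimal sketch in plain Python — the case IDs, activity names, and timestamps are invented for illustration:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical event log: each event carries the three mandatory
# fields (Case ID, Activity Name, Timestamp). Data is illustrative.
events = [
    ("C1", "Receive", "2024-01-02 09:00"),
    ("C1", "Review",  "2024-01-02 11:30"),
    ("C1", "Approve", "2024-01-03 10:00"),
    ("C2", "Receive", "2024-01-02 09:15"),
    ("C2", "Review",  "2024-01-02 14:00"),
    ("C2", "Rework",  "2024-01-03 09:00"),  # rework loop: sent back
    ("C2", "Review",  "2024-01-03 15:00"),
    ("C2", "Approve", "2024-01-04 10:00"),
    ("C3", "Receive", "2024-01-02 10:00"),
    ("C3", "Review",  "2024-01-02 16:00"),
    ("C3", "Approve", "2024-01-03 12:00"),
]

# Group events by case, then order each trace by timestamp.
traces = defaultdict(list)
for case_id, activity, ts in events:
    traces[case_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

# A variant is the ordered sequence of activities a case followed.
variants = Counter(
    tuple(act for _, act in sorted(trace)) for trace in traces.values()
)

intended = ("Receive", "Review", "Approve")
conformant = variants.get(intended, 0)
total = sum(variants.values())
print(f"{len(variants)} variants across {total} cases")   # → 2 variants across 3 cases
print(f"{conformant / total:.0%} follow the intended path")  # → 67% follow the intended path
for variant, count in variants.most_common():
    print(count, " -> ".join(variant))
```

Real tools apply the same grouping at the scale of millions of events and render the result as a process map, but the variant count and conformance percentage above are exactly the first two questions in the list.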
| Aspect | Details |
|---|---|
| Minimum data requirement | An event log with three mandatory fields: Case ID (the transaction or case identifier), Activity Name (what happened), Timestamp (when it happened). Resource (who) and additional attributes enrich the analysis but are not required. |
| What it works best on | System-supported processes with structured logging — loan processing, account operations, claims, order management. Any process where a system records each step as it happens. |
| What it cannot reveal | The “why” — context, business reasons, political factors, informal decisions not recorded in the system. Process mining shows what happened; it takes human analysis to understand why. |
| Data volume needed | Meaningful analysis typically requires hundreds to thousands of case instances. A 30-case sample produces unreliable pattern detection. |
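With the same three mandatory fields, elapsed time per step falls out of the timestamps: a step's duration is the gap between its event and the next event in the same case. A small sketch, again with invented data, that ranks steps by mean elapsed time to surface bottleneck candidates:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, median

# Hypothetical event log rows with the three mandatory fields
# (Case ID, Activity Name, Timestamp). Data is illustrative.
rows = [
    ("C1", "Receive", "2024-01-02T09:00"),
    ("C1", "Review",  "2024-01-02T11:00"),
    ("C1", "Approve", "2024-01-03T11:00"),
    ("C2", "Receive", "2024-01-02T10:00"),
    ("C2", "Review",  "2024-01-02T12:00"),
    ("C2", "Approve", "2024-01-04T12:00"),
]

# Rebuild each case's trace, ordered by timestamp.
traces = defaultdict(list)
for case_id, activity, ts in rows:
    traces[case_id].append((datetime.fromisoformat(ts), activity))

# Step duration = time from this event to the next event in the same
# case (the final activity of a trace has no following event).
step_hours = defaultdict(list)
for trace in traces.values():
    trace.sort()
    for (t0, act), (t1, _) in zip(trace, trace[1:]):
        step_hours[act].append((t1 - t0).total_seconds() / 3600)

# Rank by mean elapsed time: the slowest step is the bottleneck candidate.
for act, hrs in sorted(step_hours.items(), key=lambda kv: -mean(kv[1])):
    print(f"{act}: mean {mean(hrs):.1f}h, median {median(hrs):.1f}h, worst {max(hrs):.1f}h")
# → Review: mean 36.0h, median 36.0h, worst 48.0h
# → Receive: mean 2.0h, median 2.0h, worst 2.0h
```

Reporting mean, median, and worst-case together matters: a step with a modest mean but an extreme worst case points to queue buildup rather than uniformly slow work.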
Traditional mapping (interviews, workshops) takes days, relies on people’s recall, and produces the intended process. Process mining takes hours once data is available, relies on system records, and produces the actual process. They are complementary: use mining to discover reality, use workshops to understand the context behind it.

