Clustering Analysis

Clustering Analysis can be used to explain what kind of cases the eventlog contains. It divides cases into groups so that cases within each group are as similar as possible in terms of their case attribute values and occurred event types. Clustering is based on unsupervised machine learning and uses the ''kmodes'' algorithm with categorized values for case attribute values and event type occurrences. Due to the nature of the algorithm, different clustering runs may produce different results. Also, the clustering is usually not performed for all cases but for a sample of cases (to improve performance); when the sample is representative, it provides results as good as those from the entire dataset. See this [https://en.wikipedia.org/wiki/Cluster_analysis Wikipedia article] for more about the idea behind clustering.


Clustering analysis is an easy way to understand and explain the eventlog without knowing anything about it beforehand. It can also be used to check data integrity, as the analysis might reveal that the eventlog actually contains data from two distinct processes that cannot be meaningfully compared to each other.


[[File:Clusteringanalysis.png|1100px]]
== Using Clustering Analysis ==
Clustering analysis is available as a view in the [[Navigation_Menu#Clustering_Analysis|Navigation menu]]. Also, when creating a custom dashboard, the clustering analysis can be added to the dashboard as a preset. The dashboard remembers the clustering settings if they are changed.


Clustering analysis results are shown in a table where rows are grouped so that each group is a cluster and each row shows one describing feature of the cluster. The table has the following columns:
* '''Feature''': The describing feature of the cluster, i.e., a case attribute and its value, or the name of an occurred event type.
* '''Cluster density %''': Share of cases having this feature within the cluster, i.e., the number of cases in this particular cluster having the value shown on the row, divided by the number of cases in the cluster, multiplied by 100.
* '''Total density %''': Share of cases having this feature in the entire eventlog, i.e., the total number of cases having the value shown on the row, divided by the total number of cases, multiplied by 100.
* '''Contribution %''': Explains how much more common this feature is in this cluster compared to the entire eventlog. The higher the value, the more the feature characterizes this particular cluster. The contribution percentage is calculated by subtracting the total density percentage from the cluster density percentage, as illustrated in the sketch below.
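As a worked example of these three columns, the following sketch computes the percentages for a single feature using made-up counts; all numbers are illustrative only, not from a real eventlog:

<syntaxhighlight lang="python">
# Made-up counts: a cluster of 250 cases out of a 1000-case eventlog,
# where the feature occurs in 200 cases of the cluster and in 300 cases
# of the entire eventlog.
cases_in_cluster = 250
cases_total = 1000
feature_in_cluster = 200
feature_total = 300

cluster_density = feature_in_cluster / cases_in_cluster * 100  # 80.0 %
total_density = feature_total / cases_total * 100              # 30.0 %
contribution = cluster_density - total_density                 # 50.0 %
</syntaxhighlight>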


Note that in some special datasets, fewer clusters may be returned than requested. This occurs, for example, when there are fewer cases than requested clusters, or when the features of the cases don't have unique enough values.


== Clustering Analysis Settings ==
Clustering analysis has the following settings:
* '''Clusters''': Number of clusters into which the cases are divided.
* '''Cluster rows''': Number of describing features shown for each cluster. The features are shown in the order of strongest contribution.
* '''Attributes''': Case attributes that are taken into the clustering analysis. If none are selected, all case attributes are used. You can restrict the selection if you want the clustering to be based only on certain features.
* '''Events''': Event types whose occurrences are taken into the clustering analysis. If none are selected, all event types are used.


Also, the size of the sample used in the clustering can be changed in the chart settings (in the ''Analyze'' tab).


== Clustering Analysis Calculation ==
The clustering analysis is calculated as follows:
# Taking a random sample of cases from the entire eventlog.
# Performing the clustering for the sampled cases using the kmodes algorithm. As a result, each case belongs to a certain cluster.
# [[Root_Causes|Root causes analysis]] is run to find the characterizing features for each cluster.
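To illustrate these three steps, below is a minimal sketch that performs the same phases with the open-source ''kmodes'' Python package and ''pandas''. It is a simplified stand-in, not QPR ProcessAnalyzer's internal implementation; the input file, column contents, sample size, and number of clusters are all hypothetical:

<syntaxhighlight lang="python">
import pandas as pd
from kmodes.kmodes import KModes

# Hypothetical case-level table: one row per case, containing categorical
# case attribute values and 0/1 flags for event type occurrences.
cases = pd.read_csv("cases.csv")

# Step 1: take a random sample of cases from the entire eventlog.
sample = cases.sample(n=min(1000, len(cases)), random_state=42)

# Step 2: cluster the sampled cases with k-modes (a k-means variant for
# categorical data). Each case gets a cluster label.
km = KModes(n_clusters=4, init="Huang", n_init=5)
sample = sample.assign(cluster=km.fit_predict(sample))

# Step 3: a root cause analysis would then compare, per cluster, how common
# each feature value is inside the cluster versus in the whole sample
# (see the density calculation sketch earlier on this page).
print(sample.groupby("cluster").size())
</syntaxhighlight>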


The root causes analysis is used so that the clustering analysis doesn't need to show the individual cases in each cluster, but rather the features that describe each cluster (a long list of individual cases wouldn't be easy to read). Note that the case attribute and event type settings are used both to determine which features are taken into the clustering phase and for which features the root causes analysis is run.


Technically, the feature data used by the clustering analysis consists of numeric values between 0 and 1. Case attribute values are converted into that format as follows (a small sketch after the list illustrates the conversion):
* Numbers and dates are scaled between 0 and 1 (the minimum value is converted to 0 and the maximum to 1).
* Textual columns are "one-hot" encoded into multiple columns so that each unique value gets its own column. For each unique value column, a value of 1 means that the case has that particular attribute value, and a value of 0 means that the attribute value is something else.
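The following ''pandas'' sketch shows this conversion for a hypothetical case table with one numeric, one date, and one textual attribute; the column names and values are made up for illustration:

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical case attributes.
df = pd.DataFrame({
    "Cost": [10.0, 50.0, 90.0],
    "Created": pd.to_datetime(["2023-01-01", "2023-06-01", "2023-12-31"]),
    "Region": ["North", "South", "North"],
})

# Numbers and dates: min-max scale into the 0..1 range.
cost = (df["Cost"] - df["Cost"].min()) / (df["Cost"].max() - df["Cost"].min())
days = df["Created"].astype("int64")  # nanoseconds since epoch, scales like any number
created = (days - days.min()) / (days.max() - days.min())

# Textual columns: one-hot encode so that each unique value gets its own 0/1 column.
region = pd.get_dummies(df["Region"], prefix="Region", dtype=int)

features = pd.concat([cost, created, region], axis=1)
print(features)
</syntaxhighlight>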
 
 
[[Category: QPR ProcessAnalyzer]]
