Best Practices for Designing Models

Revision as of 00:20, 23 March 2022

Limit the number of returned rows.

Prefer ready-made measures and dimensions over custom ones. For simple calculations, the Statistical calculations may be used. The Adjustment expression is also useful.

As Analyzed objects, prefer Cases over Events, as there are usually far more events than cases, and many KPIs can be calculated from the case point of view. Variations, Event types and Flows are also generally fast. On the other hand, Flow Occurrences is slow, as their number exceeds even the event count.
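
The row-count difference behind this advice can be sketched generically (plain Python with a made-up event log; this is not QPR ProcessAnalyzer's expression language):

```python
# Illustration: an event log typically has far more events than cases,
# so analyses rooted at Cases iterate over far fewer objects.
from collections import defaultdict

# Hypothetical event log: (case_id, event_type) pairs.
events = [
    (1, "Order"), (1, "Ship"), (1, "Invoice"),
    (2, "Order"), (2, "Ship"), (2, "Invoice"), (2, "Return"),
    (3, "Order"), (3, "Ship"),
]

# Event-level view: one row per event.
event_rows = len(events)

# Case-level view: one row per case; a case KPI (here, the event count)
# is computed once per case instead of once per event.
events_per_case = defaultdict(int)
for case_id, _ in events:
    events_per_case[case_id] += 1
case_rows = len(events_per_case)

print(event_rows, case_rows)  # 9 events but only 3 cases
```

In a real model the ratio is often much larger than 3:1, which is why case-rooted calculations tend to be faster.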

Don't use dimensioning when it's not needed. When there is a row for each root object anyway, dimensioning is unnecessary. For example, using Cases as Analyzed objects and dimensioning by case id produces a row for each case, but the same result can be achieved by disabling dimensioning.

Use presets as examples. In many cases, you will find what you are looking for in the presets.

Use on-screen settings for settings that users often want to change, as they are easier to use than opening the settings dialog. They also guide users toward parameters that are relevant from the analysis viewpoint.

Exporting data: for large amounts of data, prefer CSV export over Excel export.

Avoid Custom layout settings as their compatibility with future QPR ProcessAnalyzer versions might not be maintained. Use Custom layout only when it's absolutely necessary for the visualization.

Consider sampling. It improves performance, but in many cases it cannot be used, as it affects the analysis results, for example object counts. When sampling is acceptable, it is very useful for improving calculation performance in large models.
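
Why sampling speeds things up but distorts counts can be shown with a generic sketch (plain Python over invented KPI values, not a QPR ProcessAnalyzer feature call):

```python
# Illustration: sampling cases reduces the amount of data to process,
# but raw counts from a sample are wrong unless scaled back up by the
# inverse sampling rate; ratio-style KPIs such as averages survive
# sampling much better than counts do.
import random

random.seed(0)  # deterministic for the example
case_durations = [float(i % 10 + 1) for i in range(1000)]  # made-up KPI

sample = random.sample(case_durations, 100)  # 10% sample of the cases

full_count = len(case_durations)   # 1000 cases in the full model
sampled_count = len(sample)        # 100 -- an object count computed on
                                   # the sample understates reality 10x
avg_full = sum(case_durations) / len(case_durations)
avg_sample = sum(sample) / len(sample)  # approximates avg_full
```

The calculation over `sample` touches a tenth of the data, which is where the performance gain in large models comes from.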

The same dashboard can easily use different models, and filtering still works. A model optimized for a specific chart might improve performance.

Sorting affects performance, so use it only when it's relevant for the analysis.

The Group rows exceeding maximum setting affects performance, so use it only when the grouped information is useful for the analysis.

For slow charts, use the Benchmark Performance to find the fastest settings. Usually, setting up a working chart is the first step; if the chart turns out to be too slow, you can then try to find another, faster way to calculate the same result.

Limit the number of shown attributes or event types, if some of them are not needed.

Disable creating filters from the chart if no meaningful filters can be created from it.

Use conditional formatting to improve KPI visualization.

Note special values, such as null and empty strings, and set a descriptive label for them. E.g. ...
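
The idea can be sketched generically (plain Python; the value and label names below are invented for illustration):

```python
# Illustration: map special attribute values (null, empty string) to
# descriptive labels before showing them, so a chart legend shows
# "(not set)" rather than a blank or "None".
SPECIAL_LABELS = {None: "(not set)", "": "(empty)"}

def label(value):
    """Return a human-readable label for a raw attribute value."""
    return SPECIAL_LABELS.get(value, str(value))

labels = [label(v) for v in ["Paid", None, "", "Open"]]
# labels == ["Paid", "(not set)", "(empty)", "Open"]
```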

Avoid calculating the same things multiple times in different measures. If there is a repeating expression, create a separate measure for it and define it as a variable, which can then be referenced from the other measures.
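
The same factoring principle in a generic sketch (plain Python with made-up numbers, standing in for the measure-as-variable pattern):

```python
# Illustration: compute a shared intermediate once and reuse it,
# instead of repeating the same expression inside every measure.
durations_h = [5.0, 12.0, 48.0, 2.5]   # made-up case durations in hours

# Shared intermediate, computed once (analogous to defining one measure
# as a variable that the other measures reference).
total_h = sum(durations_h)

# Two "measures" reusing the shared value instead of re-summing.
average_h = total_h / len(durations_h)
share_of_longest = max(durations_h) / total_h
```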

Check that each measure and dimension has a descriptive unit. The general terms "cases" and "events" might not describe the counts best; e.g. the cases might actually be orders.

Use custom labels if they describe the measures and dimensions better. Still, for many measures and dimensions, the automatically generated title is suitable.

Mappings can be done freely, so dimensions don't always need to go to the X-axis, nor measures to the Y-axis.

Note the number of charts in a dashboard: the more charts there are, the slower the dashboard is.

An alternative to a chart filter is an Analyzed objects selection that contains the filtering, e.g. ...; this might improve performance.

Use the model description to document the necessary details regarding the model for other users.

Note the Load Model on Startup setting, and consider when it's appropriate to use it.

Creating models

Use the most suitable datatypes for case and event attributes. If there are only two possible values, boolean is the best choice. The true and false values can be mapped to a textual presentation, so strings are not needed to get the desired texts for visualizations. If numerical data cannot contain decimals, or decimal precision is not required for the analysis, integer should be used over float. If the attribute value is a numerical score (such as a number between 1 and 5), integer is better than string. String is usually the slowest datatype.
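
The size argument can be illustrated generically (plain Python object sizes, not QPR ProcessAnalyzer's internal storage; the attribute values are invented):

```python
# Illustration: a boolean representation of a two-valued attribute is
# more compact than the equivalent strings; the textual presentation
# can be produced only at display time.
import sys

flags = [True, False, True, True]                       # compact storage
texts = ["Approved", "Rejected", "Approved", "Approved"]  # string storage

bool_bytes = sum(sys.getsizeof(v) for v in flags)
str_bytes = sum(sys.getsizeof(v) for v in texts)
assert bool_bytes < str_bytes  # each string costs more than a boolean

# Map booleans to display text only when rendering the visualization.
display = ["Approved" if v else "Rejected" for v in flags]
```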

All datatypes support null values to mark missing or otherwise special values. The null value can be freely used to mark anything - it's just a matter of convention.

Include only the case and event attributes that the dashboards need. For ad hoc analysis more attributes may be useful, but they are not needed by dashboards. Model loading is slower when there are more attributes.

Include only the events that are needed by the dashboards.

Shorter event type names are easier to read in the UI and provide slightly better performance. This also applies to case and event attribute values.

Use calculated attributes to pre-calculate case-level KPIs from measures. They cannot be used when event type filtering is applied. On the other hand, don't use calculated attributes unnecessarily, because they are stored in memory and thus consume memory like normal attributes. Don't calculate anything at the entire model level in the calculated attribute expression, because that leads to very slow performance in model loading.
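
The pre-calculation idea in a generic sketch (plain Python over an invented log; the per-case-only rule is the key point):

```python
# Illustration: pre-calculate a case-level KPI once when the model is
# loaded, so dashboards read a stored value instead of re-deriving it
# from the events on every refresh.
# Hypothetical log: case_id -> event timestamps (hours from case start).
event_times = {
    "c1": [0.0, 4.0, 9.5],
    "c2": [0.0, 1.0],
    "c3": [0.0, 30.0, 31.0, 40.0],
}

# The "calculated attribute": per-case duration, derived from that
# case's own events only -- never from model-wide aggregates, which
# would make model loading very slow.
case_duration = {cid: max(ts) - min(ts) for cid, ts in event_times.items()}
# case_duration == {"c1": 9.5, "c2": 1.0, "c3": 40.0}
```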