Big Data Chart

The Big data chart is similar to the in-memory chart, except that in the Big data chart, backend processing is performed in the datasource where the eventlog data is stored. Depending on the [[QPR_ProcessAnalyzer_Project_Workspace#Models|model type]], processing is done in Snowflake (for models using Snowflake datatables) or in SQL Server (for models using Local datatables).

The Big data chart can be added to a dashboard by selecting the second item from the tool palette (''Big Data Chart'').

== Differences to in-memory chart ==

Visualization settings in the Big data chart are mainly the same as in the in-memory chart. On the other hand, data selection, measures, and dimensions work differently. The main differences are:

* There is a different set of analyzed objects, measures, and dimensions available.
* Filtering can be done for each measure and dimension separately.
* The lists of measures and dimensions are identical; they differ only in the additional aggregation selection that measures have.
* Custom expressions are written as SQL expressions, which differs from the eventlog objects available in the in-memory charts. Note also that the measure expression in the Big Data Chart doesn't contain the aggregation logic, so the measure and dimension expressions are identical. The aggregation selection from the dropdown list is also available for custom measures (see the sketch after this list).
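
As a minimal sketch of this idea, the example below shows how a dimension expression and a measure expression written as plain SQL could be combined with an aggregation chosen from the dropdown. The table and column names (events, event_type, duration_days) and the resulting query are illustrative assumptions only, not the actual query produced by QPR ProcessAnalyzer.

<pre>
-- Hypothetical custom expressions (no aggregation logic inside the expressions):
--   Dimension expression:            event_type
--   Measure expression:              duration_days
--   Aggregation selected (dropdown): AVG
--
-- Conceptually, the backend could combine these into a query such as:
SELECT
    event_type          AS dimension,  -- dimension expression used as written
    AVG(duration_days)  AS measure     -- selected aggregation wrapped around the measure expression
FROM events
GROUP BY event_type;
</pre>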

The following functionalities supported by the in-memory chart are not available in the Big data chart: Presets, Group rows exceeding maximum, Analyzed objects sample size, Find root causes, and Business calendars. In addition, the following measure/dimension settings are not available: Round to decimals, Calculate measure for, Variable name, Custom aggregation expression, and Adjustment expression.

== When to use Big Data Chart ==

Snowflake-powered calculation allows practically unlimited scaling as the amount of data and the number of users increase. The Big data chart is the component to use in dashboards for Snowflake models. In addition, the Big data chart can be used for models using local datatables; note that the processing is then performed in SQL Server, which is not optimal for analytical queries. There are still special use cases where the Big data chart is the best option for a model using local datatables:

* The eventlog is filtered heavily so that the number of remaining cases and events is low (usually a few thousand at most). Processing can then be done in SQL Server without using in-memory processing, which requires less memory.
* If the model is not currently loaded into memory, the Big data chart is the faster option compared to the in-memory chart, when the time required to load the model into memory is also taken into account.