In-memory Models Management


Memory management settings determine how long models and calculation results are kept in the memory, balancing between performance and memory consumption. The longer the items are kept in the memory, the more memory is reserved, but on the other hand users perceive better performance, as models and calculation results are more often available in the memory cache (i.e., a higher hit rate). Models loaded into memory reserve the space they need and are not dropped automatically. The rest of the space is available for other cached objects, which may be dropped if the memory is becoming full.

== Memory usage behavior ==

When models are loaded into memory, the QPR ProcessAnalyzer (IIS worker process) memory usage increases, but when models are dropped, memory consumption doesn't decrease. When models are loaded into memory after dropping models, the memory consumption doesn't increase, so it appears that the memory from the dropped models can be reused by the newly loaded models. Using models further increases memory consumption, because filter eventlogs are cached.

== Memory usage monitoring ==

QPR ProcessAnalyzer memory usage can be monitored as follows:
* In the server's Windows Task Manager, see how much memory the QPR ProcessAnalyzer process (w3wp.exe) is consuming.
* In the User Settings dialog, see the QPR ProcessAnalyzer memory usage.

Based on the memory usage alone, it may be difficult to identify how much unused memory there is in the server, because some of the memory may be reserved by the w3wp.exe process even though it's not currently in use by the eventlogs. To see how much memory is actually in use, the administrator can run a manual garbage collection for the w3wp.exe process by running GarbageCollection() in the Expression Designer. It may be necessary to run the command several times. After running the command, the QPR ProcessAnalyzer memory usage can be checked.

Example command that runs the garbage collection three times and then shows the QPR ProcessAnalyzer memory usage:

<pre>
GarbageCollection();
Sleep(2000);
GarbageCollection();
Sleep(2000);
GarbageCollection();
Sleep(2000);
"Memory consumption: " + Round((UsedProcessMemory / 1024 / 1024), 0) + "MB"
</pre>

== Drop unused filters after setting ==

For models, there is the '''Drop Unused Filters After''' setting available in the [[QPR_ProcessAnalyzer_Project_Workspace#Editing_Model_Settings|Model properties]] dialog. The setting determines the duration after which the unused filters in the model are dropped from the memory. The setting is defined in the format ''HH:mm:ss'' or ''d.HH:mm:ss'', for example 01:00:00 (one hour), 00:30:00 (30 minutes) or 1.00:00:00 (24 hours). If this setting is not defined, the server level [[Installing_QPR_ProcessAnalyzer_Server#Server_settings_file_(appsettings.json)|default setting]] is used. When any calculation is performed using a filter, the filter's last used time is updated. Note that the value must be greater than zero seconds.
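For illustration, the setting value can be read like a .NET-style time span. The following Python sketch (an illustration only, not product code) converts a ''HH:mm:ss'' or ''d.HH:mm:ss'' value into a duration and rejects values that are not greater than zero seconds:

<pre>
# Illustration only: convert a "HH:mm:ss" or "d.HH:mm:ss" setting value
# (e.g. "01:00:00", "00:30:00" or "1.00:00:00") into a duration.
from datetime import timedelta

def parse_drop_after(value):
    days = 0
    if "." in value.split(":")[0]:           # a leading "d." part is present
        day_part, value = value.split(".", 1)
        days = int(day_part)
    hours, minutes, seconds = (int(p) for p in value.split(":"))
    duration = timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)
    if duration <= timedelta(0):
        raise ValueError("The value must be greater than zero seconds")
    return duration

print(parse_drop_after("1.00:00:00"))   # 1 day, 0:00:00
print(parse_drop_after("00:30:00"))     # 0:30:00
</pre>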

== Best practices for memory management ==

Follow these best practices for the QPR ProcessAnalyzer server memory configuration:
* Disable extending memory into disk (the page file), because it slows down the system remarkably. Instead, make sure that there is enough memory in the system for the QPR ProcessAnalyzer workloads (more information: https://docs.microsoft.com/en-us/windows/client-management/introduction-page-file).
* Make sure that no more models are loaded into memory than there is space available for. The more simultaneous usage there is, the more memory is also required for caching. (A quick check of the server memory is sketched after this list.)
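As a quick check for the points above, the following Python sketch (an illustration only, assuming the psutil package is available on the server) reports the total and available physical memory and the configured page file (swap) size:

<pre>
# Illustration only: report physical memory and page file (swap) size on the
# server as a quick check for the best practices above.
import psutil

physical = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"Physical memory: {physical.total / 1024**3:.1f} GB total, "
      f"{physical.available / 1024**3:.1f} GB available")
print(f"Page file / swap: {swap.total / 1024**3:.1f} GB configured")
</pre>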

== Memory stored objects ==

The following types of objects are managed in the QPR ProcessAnalyzer server memory:

{| class="wikitable"
! Stored object !! Contents !! Unused objects dropped after !! Time to recreate after dropping !! Memory consumption
|-
| Models (model eventlogs)
| Models contain the eventlog data (events and cases) and objects calculated from the eventlog (e.g., event types, variations and flows).
| Models are never dropped automatically from the memory, even when there is a memory shortage in the server. To drop a model from memory, you need to drop the model in the Workspace. Note that if the server has been restarted, only models that have the automatic loading set are loaded into memory directly after the restart.
| Slow. Models are loaded from the database, requiring a considerable amount of data to be transferred, which takes much more time than e.g. calculating filters and analyses.
| High
|-
| Filters (filter eventlogs)
| Filters contain all filtered data in the model, which includes similar structures to the models.
| Primarily defined by the '''Drop Unused Filters After''' setting in the Model settings dialog. If this is not set for a model, the server level DropUnusedFiltersAfter setting is used. If the server setting is not defined, a default value of 1 hour is used. When there is a memory shortage in the server, filters may be dropped from the memory earlier to free memory.
| Fast. Filters are calculated from the model data that already exists in the memory. Practically, filters are subsets of models.
| Medium
|-
| Calculation results
| Results of the calculations made in dashboards.
| Unused calculation results are kept 30 minutes in the memory. When there is a memory shortage in the server, they may be dropped from the memory earlier to free memory.
| Fast. Analysis results are calculated from filters, which already exist in memory.
| Low
|-
| Datatables
| Contents of datatables.
| There is a fixed duration of 1 hour. When there is a memory shortage in the server, they may be dropped from the memory earlier to free memory.
| Between slow and fast, depending on how much data there is in the datatable.
| Between high and low, depending on how much data there is in the datatable.
|}

If the QPR ProcessAnalyzer server doesn't have enough memory to store more objects, already stored filters, calculation results and datatables are dropped from the memory, starting from the one with the longest time since the last usage. Thus, when there is a memory shortage, objects may be dropped earlier than their settings define. Filters, calculation results and datatables are dropped during a memory shortage, but models are never dropped automatically. This is because recalculating filters and calculation results is usually faster than loading models from the database. That is why, when trying to load more models than there is available memory in the server, an out of memory error may occur.
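As a rough illustration of this eviction behavior (a simplified sketch, not the actual server implementation), the logic resembles a least-recently-used cache in which models are never evicted:

<pre>
# Simplified sketch of the eviction behavior described above (not the actual
# server implementation): when memory runs short, filters, calculation results
# and datatables are dropped starting from the least recently used one, while
# models are never dropped automatically.
import time

class MemoryStore:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.objects = {}   # name -> {"kind", "size_mb", "last_used"}

    def used_mb(self):
        return sum(o["size_mb"] for o in self.objects.values())

    def touch(self, name):
        # Any calculation using the object updates its last used time.
        self.objects[name]["last_used"] = time.monotonic()

    def add(self, name, kind, size_mb):
        # Evict the least recently used non-model objects until the new object fits.
        evictable = sorted(
            (n for n, o in self.objects.items() if o["kind"] != "model"),
            key=lambda n: self.objects[n]["last_used"])
        while self.used_mb() + size_mb > self.capacity_mb and evictable:
            del self.objects[evictable.pop(0)]
        if self.used_mb() + size_mb > self.capacity_mb:
            # Only models remain, and they are never evicted automatically.
            raise MemoryError("Not enough memory to load more models")
        self.objects[name] = {"kind": kind, "size_mb": size_mb,
                              "last_used": time.monotonic()}
</pre>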