Automatic Model Loading on Server Startup

QPR ProcessAnalyzer models can be loaded automatically into the backend server memory when the system starts (e.g. after a software update). When models are large and loading takes a long time, it's useful to load them in advance. Automatic loading can be switched on in the '''Model properties''' dialog (available in the models list in the header), in the '''Details''' tab, by selecting the '''Load Model on Startup''' checkbox.


It's possible to control how many models are loaded simultaneously, to limit how much server capacity the model loading can use. By default, two models are loaded at a time. To change this setting, see the [[PA_Configuration_database_table|PA_Configuration table]]. This setting only affects model loading on startup (not model loading initiated by users). If loading of a model fails, loading continues with the next model.
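
The sketch below is only a rough illustration of what changing such a setting directly in the database could look like, using Python and pyodbc. The setting name (''ModelLoadingOnStartupConcurrency'') and the column names (''Name'', ''Value'') are assumptions made for the example; check the [[PA_Configuration_database_table|PA_Configuration table]] page for the actual key and value format before making any changes.
<pre>
# Hypothetical sketch: changing the number of models loaded concurrently on
# startup by editing the PA_Configuration table. The setting name and the
# column names below are assumptions, not confirmed identifiers.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=ProcessAnalyzer;Trusted_Connection=yes;"
)
cursor = conn.cursor()

setting_name = "ModelLoadingOnStartupConcurrency"  # assumed key name
new_value = "4"                                    # e.g. four models at a time

# Update the row if it exists, otherwise insert it.
cursor.execute(
    "UPDATE PA_Configuration SET Value = ? WHERE Name = ?",
    new_value, setting_name,
)
if cursor.rowcount == 0:
    cursor.execute(
        "INSERT INTO PA_Configuration (Name, Value) VALUES (?, ?)",
        setting_name, new_value,
    )
conn.commit()
</pre>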


Model loading is also started when a calculation is run for the model. If the calculation that initiated the model loading is cancelled, the model loading still continues. If a calculation is waiting for a model loading to complete and the model loading is cancelled, an error is returned to the user waiting for the calculation.
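
As a conceptual illustration only (this is not QPR ProcessAnalyzer code), the following Python asyncio sketch mirrors the behaviour described above: the model load runs as a shared task, cancelling a calculation that waits for it leaves the load running, and cancelling the load itself surfaces an error to any calculation still waiting for it.
<pre>
# Conceptual sketch of the loading semantics described above; all names and
# structure are illustrative, not the actual server implementation.
import asyncio

async def load_model(name: str) -> str:
    await asyncio.sleep(2)  # stands in for the actual model loading work
    return f"{name} loaded"

async def run_calculation(load_task: asyncio.Task) -> str:
    # shield(): cancelling this calculation does not cancel the shared load.
    model = await asyncio.shield(load_task)
    return f"calculation finished on {model}"

async def main() -> None:
    load_task = asyncio.create_task(load_model("SalesModel"))

    # Cancelling a calculation that is waiting for the load: the load goes on.
    calc = asyncio.create_task(run_calculation(load_task))
    await asyncio.sleep(0.1)
    calc.cancel()
    try:
        await calc
    except asyncio.CancelledError:
        print("calculation cancelled; model load still running:", not load_task.done())

    # Cancelling the model load itself: a waiting calculation gets an error.
    calc2 = asyncio.create_task(run_calculation(load_task))
    await asyncio.sleep(0.1)
    load_task.cancel()
    try:
        await calc2
    except asyncio.CancelledError:
        print("calculation failed: model loading was cancelled")

asyncio.run(main())
</pre>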


Note also the [[Roles_and_Permissions#Model_Permissions|security context used to load models]].


[[Category: QPR ProcessAnalyzer]]