<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.onqpr.com/pa/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MarHink</id>
	<title>QPR ProcessAnalyzer Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.onqpr.com/pa/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MarHink"/>
	<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php/Special:Contributions/MarHink"/>
	<updated>2026-04-06T13:53:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27626</id>
		<title>Create Predicted Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27626"/>
		<updated>2026-01-20T14:15:37Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides instructions on how to install, configure, and use eventlog predictions. The prediction creates a new model that contains the source model data and the predictions. It can predict case attributes for the generated new cases and event attributes for the predicted events. By default, predictions use the [https://en.wikipedia.org/wiki/Transformer_(deep_learning) Transformer] neural network architecture; [https://en.wikipedia.org/wiki/Long_short-term_memory LSTM]- and [https://en.wikipedia.org/wiki/Gated_recurrent_unit GRU]-based architectures are also supported.&lt;br /&gt;
&lt;br /&gt;
To distinguish between the real (source data) and predicted events and cases, the model contains the following attributes:&lt;br /&gt;
* Event attribute &#039;&#039;&#039;Predicted&#039;&#039;&#039; denotes whether the event is from the source data (&#039;&#039;false&#039;&#039;) or whether it&#039;s predicted (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
* Case attribute &#039;&#039;&#039;Generated&#039;&#039;&#039; denotes whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the prediction generated it as a new case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for prediction ==&lt;br /&gt;
The following prerequisites must be fulfilled to run the eventlog prediction:&lt;br /&gt;
* QPR ProcessAnalyzer 2024.8 or later in use&lt;br /&gt;
* Snowflake connection is configured&lt;br /&gt;
* Source models are stored to Snowflake&lt;br /&gt;
&lt;br /&gt;
== Install prediction to Snowflake ==&lt;br /&gt;
To install the eventlog prediction to Snowflake:&lt;br /&gt;
# Go to Snowflake and create a Snowflake-managed stage named &#039;&#039;&#039;DECISION_INTELLIGENCE&#039;&#039;&#039; in the same schema that is configured for QPR ProcessAnalyzer (in the Snowflake connection string). Use the settings shown in the following image: [[File:Create_Snowflake_stage.png]]&lt;br /&gt;
# Open the created stage and upload the &#039;&#039;&#039;predict.pyz&#039;&#039;&#039; file into the stage (request the file from your QPR representative).&lt;br /&gt;
# Create the following procedure in the same schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE OR REPLACE PROCEDURE QPRPA_SP_PREDICTION(&amp;quot;CONFIGURATION&amp;quot; OBJECT)&lt;br /&gt;
RETURNS OBJECT&lt;br /&gt;
LANGUAGE PYTHON&lt;br /&gt;
STRICT&lt;br /&gt;
RUNTIME_VERSION = &#039;3.11&#039;&lt;br /&gt;
PACKAGES = (&#039;nltk&#039;,&#039;numpy&#039;,&#039;networkx&#039;,&#039;pandas&#039;,&#039;scikit-learn&#039;,&#039;snowflake-snowpark-python&#039;,&#039;tensorflow==2.12.0&#039;,&#039;dill&#039;,&#039;psutil&#039;,&#039;prophet&#039;,&#039;holidays&#039;,&#039;python-kubernetes&#039;,&#039;docker-py&#039;,&#039;cryptography&#039;)&lt;br /&gt;
HANDLER = &#039;main&#039;&lt;br /&gt;
EXECUTE AS OWNER&lt;br /&gt;
AS &#039;&lt;br /&gt;
import sys&lt;br /&gt;
def main(session, parameters_in: dict) -&amp;gt; dict:&lt;br /&gt;
	session.file.get(&#039;&#039;@decision_intelligence/predict.pyz&#039;&#039;, &#039;&#039;/tmp&#039;&#039;)&lt;br /&gt;
	sys.path.append(&#039;&#039;/tmp/predict.pyz&#039;&#039;)&lt;br /&gt;
	import predict&lt;br /&gt;
	return predict.main(session, parameters_in)&lt;br /&gt;
&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
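The procedure above fetches predict.pyz from the stage into /tmp and imports it from there. The following is a minimal local sketch of the same loader pattern, using a stand-in predict module (the real predict.pyz is distributed by QPR); it only illustrates that appending a zip archive to sys.path makes its modules importable.

```python
import os
import sys
import tempfile
import zipfile

# Build a stand-in predict.pyz (the real file comes from your QPR representative).
tmp_dir = tempfile.mkdtemp()
archive = os.path.join(tmp_dir, "predict.pyz")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr(
        "predict.py",
        "def main(session, parameters_in):\n    return {'echo': parameters_in}\n",
    )

# Same mechanism as in the stored procedure: a zip archive on sys.path
# is importable via Python's zipimport support.
sys.path.append(archive)
import predict

result = predict.main(None, {"cases_to_generate": 10})
```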
&lt;br /&gt;
== Create prediction script in QPR ProcessAnalyzer ==&lt;br /&gt;
1. Create the following example expression script (e.g., with name &#039;&#039;&#039;Create prediction model&#039;&#039;&#039;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let completeCaseEventTypeNames = [&amp;quot;&amp;lt;event type name found only in complete cases&amp;gt;&amp;quot;, &amp;quot;&amp;lt;another event type name&amp;gt;&amp;quot;, &amp;quot;...&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
let eventTypeColumnName = sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
_system.ML.GeneratePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My prediction model&amp;quot;,      // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,         // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,     // Target project to create the model into.&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{         // Training parameters.&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 200&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{       // Model generation parameters.&lt;br /&gt;
    &amp;quot;cases_to_generate&amp;quot;: 1000&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;ExcludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,    // Should a prediction model be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000     // Maximum number of cases to use from the source model (random sampled).&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure the prediction for the previously created script as instructed in the next chapter. At minimum, replace the placeholder tags listed below with suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this source model will be used to train the prediction model so that it can generate new cases and continuations for incomplete existing cases.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event type name found only in complete cases&amp;gt;&#039;&#039;&#039;: This example script has been hard-coded to determine whether a case is complete or incomplete based on the existence of this event type.&lt;br /&gt;
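The completeness rule used by the example script can be sketched as follows. This is a hypothetical Python helper that mirrors the logic of the IncludeCases/ExcludeCases filters (in the actual script, the filtering is performed by those filter definitions, not by a function like this):

```python
def is_complete_case(case_event_type_names, complete_case_event_type_names):
    # A case counts as complete when it contains at least one of the
    # event types that occur only in complete cases.
    return any(
        name in complete_case_event_type_names
        for name in case_event_type_names
    )
```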
&lt;br /&gt;
== Configure prediction ==&lt;br /&gt;
The prediction script has the following settings in the GeneratePredictionModel call:&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created in the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the prediction is made. The model can be selected, for example, by id with the ModelById function or by name with the ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;RecreatePredictionModel&#039;&#039;&#039;: When &#039;&#039;true&#039;&#039;, a new ML model is trained when the script is run. When &#039;&#039;false&#039;&#039;, the prediction is run using a pre-existing ML model, if one exists.&lt;br /&gt;
* &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039;: Training parameters.&lt;br /&gt;
** &#039;&#039;&#039;attributes&#039;&#039;&#039;: Attribute configurations (for more information, see the chapter below).&lt;br /&gt;
** &#039;&#039;&#039;generate_start_time_trend_images&#039;&#039;&#039;: If set to true, two images (the results of the Prophet plot and plot_components functions) are generated for each cross-validated Prophet parameter combination, and also for the finally selected parameters.&lt;br /&gt;
*** The images are generated into stage files with the following path names:&lt;br /&gt;
**** plot: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}.png&lt;br /&gt;
**** plot_components: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}_comp.png&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_case_clusters&#039;&#039;&#039;: Set the maximum number of clusters to divide the case attribute values into.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;max_num_traces_in_training&#039;&#039;&#039;: Set the maximum number of traces used in training.&lt;br /&gt;
*** When training, every case of length N is split into N traces (a.k.a. prefixes) (p_1, ..., p_N), where p_x contains the first x events of the full case.&lt;br /&gt;
**** If more traces are available than this configured value, the cases to include are randomly sampled so that the maximum is exceeded by at most one case.&lt;br /&gt;
**** If null, all the traces are used regardless (which may easily lead to running out of memory).&lt;br /&gt;
**** The default value is 100000.&lt;br /&gt;
** &#039;&#039;&#039;num_epochs_to_train&#039;&#039;&#039;: How many times the training set is used in training. The best performing model out of all the iterations will be selected.&lt;br /&gt;
*** The default value is 500.&lt;br /&gt;
** &#039;&#039;&#039;num_extra_years_to_reserve_in_created_model&#039;&#039;&#039;: Number of additional years, after the year of the last timestamp in the training data, to reserve in the capacity of the created ML model. This allows the model to predict timestamps in the range from the minimum timestamp year in the training data to the maximum timestamp year plus this value.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;reserve_extra_sequence_length&#039;&#039;&#039;: How many extra events to reserve space for in the ML model compared to the number of events the longest case in the training data has.&lt;br /&gt;
*** The default value is 5.&lt;br /&gt;
** &#039;&#039;&#039;samples_per_epoch&#039;&#039;&#039;: If not null, specifies (approximately) how many traces/prefixes are used to represent one epoch of data in the training. The actual value used is made divisible by batch_size using the following formula:&lt;br /&gt;
*** max(floor(samples_per_epoch / batch_size), 1) * batch_size&lt;br /&gt;
*** If null, every epoch uses all the traces/prefixes in the training data.&lt;br /&gt;
*** The default value is null.&lt;br /&gt;
** &#039;&#039;&#039;validation_split&#039;&#039;&#039;: Percentage of traces/prefixes used to evaluate the loss and any model metrics at the end of each epoch. The model is not trained on this data.&lt;br /&gt;
*** If 0, separate validation data is not used; instead, all the training data is also used as validation data.&lt;br /&gt;
*** The default value is 0.&lt;br /&gt;
* &#039;&#039;&#039;GenerationConfiguration&#039;&#039;&#039;: Event generation parameters. When null, no generation is done. For example, the following parameters are supported:&lt;br /&gt;
** &#039;&#039;&#039;avoid_repeated_activities&#039;&#039;&#039;: Array of activity names that should occur at most once in any case. The probability of selecting any of the activities specified in this configuration more than once is set to 0.&lt;br /&gt;
*** An empty array means that activity generation is not restricted by this setting at all.&lt;br /&gt;
*** A null value means that no activity may occur more than once (a shortcut for specifying all the activity names).&lt;br /&gt;
*** The default value is an empty array.&lt;br /&gt;
** &#039;&#039;&#039;cases_to_generate&#039;&#039;&#039;: Maximum number of cases to create. The number of created cases is further limited by the capabilities of the trained model and the &#039;&#039;case_generation_start_time&#039;&#039; and &#039;&#039;case_generation_end_time&#039;&#039; parameters.&lt;br /&gt;
*** By default, the number of cases is not limited by this setting.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_start_time&#039;&#039;&#039;: If defined, new cases are generated only after this timestamp (given as a string in ISO datetime format).&lt;br /&gt;
*** If undefined, the latest start event timestamp used in the training data is used.&lt;br /&gt;
*** The default value is undefined.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_end_time&#039;&#039;&#039;: If defined, new events and cases will not be generated after this timestamp (given as string in ISO datetime format). E.g., &amp;quot;2015-01-01T00:00:00&amp;quot;.&lt;br /&gt;
*** The default value is unlimited (the only limit comes from the capacity of the trained model).&lt;br /&gt;
** &#039;&#039;&#039;generate_debug_event_attributes&#039;&#039;&#039;: &lt;br /&gt;
*** If true, additional columns will be added containing, e.g., probabilities of the selected activity and other activities.&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_events&#039;&#039;&#039;:&lt;br /&gt;
*** Specifies the maximum number of events to generate for any case.&lt;br /&gt;
*** If unspecified (the default), the value equals &#039;&#039;&amp;lt;the maximum number of events in any case in the training data&amp;gt;&#039;&#039; + &#039;&#039;&amp;lt;the value of reserve_extra_sequence_length in training&amp;gt;&#039;&#039;.&lt;br /&gt;
** &#039;&#039;&#039;min_prediction_probability&#039;&#039;&#039;:&lt;br /&gt;
*** The minimum probability of any prediction. If the probability of a prediction is lower than this, it will never be picked. &lt;br /&gt;
*** The default value is 0.01.&lt;br /&gt;
** &#039;&#039;&#039;temperature&#039;&#039;&#039;: &lt;br /&gt;
*** If 0, the generated next activity will always be the one that is the most probable. &lt;br /&gt;
*** If 1, the generated next activity is purely based on the probabilities returned by the trained ML model. &lt;br /&gt;
*** This behavior is interpolated when using values between 0 and 1.&lt;br /&gt;
*** The default value is 0.9.&lt;br /&gt;
* &#039;&#039;&#039;TrainingDataFilter&#039;&#039;&#039;: [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter]] to select the cases that are used to train the prediction model. This filter is needed to train the model using only completed cases. Incomplete cases should not be used for training, so that the model doesn&#039;t incorrectly learn that cases should end like them.&lt;br /&gt;
* &#039;&#039;&#039;IncompleteCasesFilter&#039;&#039;&#039;: Optional [[Filtering_in_QPR_ProcessAnalyzer_Queries|filter]] to select which cases the prediction is made for. To improve performance of the prediction, it&#039;s recommended to include only the incomplete cases for which new events might appear, and skip the completed cases for which new events are not expected anymore.&lt;br /&gt;
* &#039;&#039;&#039;TrainingCaseSampleSize&#039;&#039;&#039;: Maximum number of cases to take from the source model (cases are selected randomly). Use a lower setting to speed up the ML model training. The greater the value, the more subtle phenomena the prediction can learn from the data.&lt;br /&gt;
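Several of the numeric settings above can be made concrete with a short sketch. The following Python is an illustrative reading of the documented behavior (prefix splitting from max_num_traces_in_training, the samples_per_epoch rounding formula, and temperature/min_prediction_probability sampling), not QPR ProcessAnalyzer's actual implementation; in particular, the power-law temperature interpolation is one common interpretation and is an assumption here.

```python
import math
import numpy as np

def split_case_into_prefixes(events):
    # A case of length N yields N traces/prefixes p_1..p_N,
    # where p_x contains the first x events of the case.
    return [events[:x] for x in range(1, len(events) + 1)]

def effective_samples_per_epoch(samples_per_epoch, batch_size):
    # The documented rounding: max(floor(samples_per_epoch / batch_size), 1) * batch_size
    return max(math.floor(samples_per_epoch / batch_size), 1) * batch_size

def sample_next_activity(probs, temperature=0.9, min_prediction_probability=0.01, rng=None):
    # Hypothetical sketch: predictions below the minimum probability are never
    # picked; temperature 0 is greedy argmax, temperature 1 samples from the
    # model distribution, and values in between interpolate.
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(probs, dtype=float)
    p = np.where(p >= min_prediction_probability, p, 0.0)
    if temperature == 0:
        return int(np.argmax(p))
    p = p ** (1.0 / temperature)  # sharpens toward argmax as temperature approaches 0
    p = p / p.sum()
    return int(rng.choice(len(p), p=p))
```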
&lt;br /&gt;
== Attribute configuration ==&lt;br /&gt;
Attribute configuration is used in &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039; (see the chapter above) to configure which event and case attributes are used in the prediction model and how they are used.&lt;br /&gt;
&lt;br /&gt;
At the top level, the configuration is split into two sections: &amp;quot;event&amp;quot; and &amp;quot;case&amp;quot;. &amp;quot;event&amp;quot; is used to configure event attributes, whereas &amp;quot;case&amp;quot; is used for case attributes.&lt;br /&gt;
&lt;br /&gt;
The next level supports one key: &amp;quot;input&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The next level after that supports the following settings:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;categorical_groups&#039;&#039;&#039;: An array of categorical attribute group configuration objects used to define groups of attributes that are bundled together in the trained model as separate input or output features. Each attribute group forms its own input or output vector used in the model training and generation.&lt;br /&gt;
** If null, only one group will be created with all the available categorical attributes included.&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;attributes&#039;&#039;&#039;: An array of attribute names.&lt;br /&gt;
**** If null, all the input attributes are to be included in this group.&lt;br /&gt;
*** &#039;&#039;&#039;max_num_clusters&#039;&#039;&#039;: The maximum number of clusters (input- or output vector feature values) to use to represent this group of attributes.&lt;br /&gt;
**** Default value: 20&lt;br /&gt;
**** NOTE: Clustering is used by default to convert a set of attribute values into an input- or output vector used by the prediction model.&lt;br /&gt;
*** &#039;&#039;&#039;ignore_values_threshold&#039;&#039;&#039;: The minimum percentage of objects that must have a specific attribute value for that value to be taken into account as a unique attribute value within this categorical group.&lt;br /&gt;
**** Depending on the context, the default value is taken from one of the following configuration settings:&lt;br /&gt;
***** ignore_values_threshold_for_case_attribute_values&lt;br /&gt;
****** Used when clustering case attributes when generating attribute values for generated new cases.&lt;br /&gt;
****** Default value is 0.01.&lt;br /&gt;
***** ignore_values_threshold_for_case_attributes&lt;br /&gt;
****** Used when clustering case attributes.&lt;br /&gt;
****** Default value is 0.1.&lt;br /&gt;
***** ignore_values_threshold_for_event_attributes&lt;br /&gt;
****** Used when clustering event attributes.&lt;br /&gt;
****** Default value is 0.1.&lt;br /&gt;
* &#039;&#039;&#039;columns&#039;&#039;&#039;: An array of attribute column configuration objects used to define the columns in the input data that are used as event or case attributes.&lt;br /&gt;
** If null, all the columns will be included as categorical attributes (except case id, event type (only for event) and timestamp (only for event) columns).&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;label&#039;&#039;&#039;: Column name.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the column. Supported types are:&lt;br /&gt;
**** &#039;&#039;&#039;categorical&#039;&#039;&#039;: Values can take on one of a limited, and usually fixed, number of possible values.&lt;br /&gt;
**** &#039;&#039;&#039;numeric&#039;&#039;&#039;: Value is considered as a continuous numeric value.&lt;br /&gt;
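The ignore_values_threshold behavior described above amounts to frequency-based filtering of attribute values before clustering. A minimal illustrative sketch, assuming a simple share-of-objects interpretation (the real implementation runs inside the Snowflake procedure and is not shown here):

```python
from collections import Counter

def values_above_threshold(values, ignore_values_threshold=0.1):
    # Keep only attribute values whose share of objects reaches the threshold;
    # rarer values are ignored when forming the categorical group.
    counts = Counter(values)
    total = len(values)
    return {value for value, count in counts.items()
            if count / total >= ignore_values_threshold}
```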
&lt;br /&gt;
==== Example ====&lt;br /&gt;
In this example, all event attributes are used as input for the prediction model. In addition, a separate machine learning input vector is created for the SAP_User event data column, supporting at most 10 clusters.&lt;br /&gt;
&lt;br /&gt;
In addition, for case attributes, only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot; and &amp;quot;Customer Group&amp;quot; case data columns are used as categorical attributes, and &amp;quot;Cost&amp;quot; as a numeric attribute. Furthermore, the four categorical case attributes are grouped into three groups, each of which is used as its own input vector for the prediction model.&lt;br /&gt;
&lt;br /&gt;
When generating, all event attributes are included as columns for generated events. Generated cases will have only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, and &amp;quot;Cost&amp;quot; columns.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;attributes&amp;quot;: #{&lt;br /&gt;
  &amp;quot;event&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: None&lt;br /&gt;
        },&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: [&amp;quot;SAP_User&amp;quot;],&lt;br /&gt;
          &amp;quot;max_num_clusters&amp;quot;: 10&lt;br /&gt;
        }&lt;br /&gt;
      ],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: None&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;case&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [#{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Account Manager&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Customer Group&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;]&lt;br /&gt;
      }],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: [&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Region&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Account Manager&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Customer Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Cost&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;numeric&amp;quot; }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Predicting case attribute values ==&lt;br /&gt;
QPR ProcessAnalyzer can also be used to predict, for example, the final values of case attributes of running cases. The following script gives an example of how to do this.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let caseAttributeToPredict = &amp;quot;&amp;lt;name of the case attribute&amp;gt;&amp;quot;;&lt;br /&gt;
let resultModelName = &amp;quot;&amp;lt;name of the model to be created/replaced&amp;gt;&amp;quot;;&lt;br /&gt;
let generateDebugCaseAttributes = false; // Set to true to generate columns for prediction probabilities.&lt;br /&gt;
let casesToPredictFilter = &amp;quot;&amp;lt;JSON filter for cases for which the prediction is to be performed&amp;gt;&amp;quot;;&lt;br /&gt;
let casesToUseForTrainingFilter = &amp;quot;&amp;lt;JSON filter for cases to be used for ML model training&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
&lt;br /&gt;
_system.ML.GenerateCaseAttributePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: resultModelName,                                     // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                                  // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,                              // Target project to create the model into.&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: false,                            // Should a prediction model be overwritten if one already exists for this source model and target model name combination. &lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,                             // Maximum number of cases to use from the source model (random sampled). &lt;br /&gt;
  &amp;quot;CommonConfiguration&amp;quot;: #{                                    // Common parameters used by both training and generation.&lt;br /&gt;
    &amp;quot;output_case_attribute_groups&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;attributes&amp;quot;: [caseAttributeToPredict]                   // Attribute whose value is to be predicted.&lt;br /&gt;
    }]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{                                  // Training parameters.&lt;br /&gt;
    &amp;quot;max_num_case_attribute_clusters&amp;quot;: 80,&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },                            &lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{                                // Case attribute generation parameters.&lt;br /&gt;
    &amp;quot;generate_debug_case_attributes&amp;quot;: generateDebugCaseAttributes // Should probability and probability_all columns be generated in addition to the actual prediction, which is created into a new column named Predicted_&amp;lt;attribute name&amp;gt;&lt;br /&gt;
  },                                                       &lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: ParseJson(casesToUseForTrainingFilter), // Filter JSON for cases to be used for training.&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: ParseJson(casesToPredictFilter)      // Filter JSON for cases whose case attribute value is to be predicted.&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27618</id>
		<title>Create Predicted Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27618"/>
		<updated>2026-01-15T11:28:22Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Attribute configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides instructions on how to install, configure, and use eventlog predictions. The prediction creates a new model that contains the source model data and the predictions. It can predict case attributes for the generated new cases and event attributes for the predicted events. To distinguish between the real (source data) and predicted events and cases, the model contains the following attributes:&lt;br /&gt;
* Event attribute &#039;&#039;&#039;Predicted&#039;&#039;&#039; denotes whether the event is from the source data (&#039;&#039;false&#039;&#039;) or whether it&#039;s predicted (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
* Case attribute &#039;&#039;&#039;Generated&#039;&#039;&#039; denotes whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the prediction generated it as a new case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for prediction ==&lt;br /&gt;
The following prerequisites must be fulfilled to run the eventlog prediction:&lt;br /&gt;
* QPR ProcessAnalyzer 2024.8 or later in use&lt;br /&gt;
* Snowflake connection is configured&lt;br /&gt;
* Source models are stored to Snowflake&lt;br /&gt;
&lt;br /&gt;
== Install prediction to Snowflake ==&lt;br /&gt;
To install the eventlog prediction to Snowflake:&lt;br /&gt;
# Go to Snowflake and create a Snowflake-managed stage named &#039;&#039;&#039;DECISION_INTELLIGENCE&#039;&#039;&#039; in the same schema that is configured for QPR ProcessAnalyzer (in the Snowflake connection string). Use the settings shown in the following image: [[File:Create_Snowflake_stage.png]]&lt;br /&gt;
# Open the created stage and upload the &#039;&#039;&#039;predict.pyz&#039;&#039;&#039; file into the stage (request the file from your QPR representative).&lt;br /&gt;
# Create the following procedure in the same schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE OR REPLACE PROCEDURE QPRPA_SP_PREDICTION(&amp;quot;CONFIGURATION&amp;quot; OBJECT)&lt;br /&gt;
RETURNS OBJECT&lt;br /&gt;
LANGUAGE PYTHON&lt;br /&gt;
STRICT&lt;br /&gt;
RUNTIME_VERSION = &#039;3.11&#039;&lt;br /&gt;
PACKAGES = (&#039;nltk&#039;,&#039;numpy&#039;,&#039;networkx&#039;,&#039;pandas&#039;,&#039;scikit-learn&#039;,&#039;snowflake-snowpark-python&#039;,&#039;tensorflow==2.12.0&#039;,&#039;dill&#039;,&#039;psutil&#039;,&#039;prophet&#039;,&#039;holidays&#039;,&#039;python-kubernetes&#039;,&#039;docker-py&#039;,&#039;cryptography&#039;)&lt;br /&gt;
HANDLER = &#039;main&#039;&lt;br /&gt;
EXECUTE AS OWNER&lt;br /&gt;
AS &#039;&lt;br /&gt;
import sys&lt;br /&gt;
def main(session, parameters_in: dict) -&amp;gt; dict:&lt;br /&gt;
	session.file.get(&#039;&#039;@decision_intelligence/predict.pyz&#039;&#039;, &#039;&#039;/tmp&#039;&#039;)&lt;br /&gt;
	sys.path.append(&#039;&#039;/tmp/predict.pyz&#039;&#039;)&lt;br /&gt;
	import predict&lt;br /&gt;
	return predict.main(session, parameters_in)&lt;br /&gt;
&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create prediction script in QPR ProcessAnalyzer ==&lt;br /&gt;
1. Create the following example expression script (e.g., with name &#039;&#039;&#039;Create prediction model&#039;&#039;&#039;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let completeCaseEventTypeNames = [&amp;quot;&amp;lt;event type name found only in complete cases&amp;gt;&amp;quot;, &amp;quot;&amp;lt;another event type name&amp;gt;&amp;quot;, &amp;quot;...&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
let eventTypeColumnName = sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
_system.ML.GeneratePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My prediction model&amp;quot;,      // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,         // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,     // Target project to create the model into.&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{         // Training parameters.&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 200&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{       // Model generation parameters.&lt;br /&gt;
    &amp;quot;cases_to_generate&amp;quot;: 1000&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;ExcludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,    // Whether a prediction model should be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000     // Maximum number of cases to use from the source model (randomly sampled).&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure prediction for the previously created script as instructed in the next chapter. At minimum, replace the placeholder tags listed below with suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as source model. The data in this source model will be used to train the prediction model so that it can generate new cases and continuations for incomplete existing cases.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event type name found only in complete cases&amp;gt;&#039;&#039;&#039;: This example script has been hard-coded to determine whether a case is complete or incomplete based on the existence of this event type.&lt;br /&gt;
&lt;br /&gt;
== Configure prediction ==&lt;br /&gt;
The prediction script has the following settings in the GeneratePredictionModel call:&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the prediction is made. Model can be selected for example based on id with ModelById function or by name with ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;RecreatePredictionModel&#039;&#039;&#039;: When &#039;&#039;true&#039;&#039;, a new ML model is trained when the script is run. When &#039;&#039;false&#039;&#039;, the prediction is run using a possibly pre-existing ML model.&lt;br /&gt;
* &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039;: Training parameters.&lt;br /&gt;
** &#039;&#039;&#039;attributes&#039;&#039;&#039;: Attribute configurations (for more information, see the chapter below).&lt;br /&gt;
** &#039;&#039;&#039;generate_start_time_trend_images&#039;&#039;&#039;: If set to true, two images are generated for each cross-validated Prophet parameter combination, and also for the final selected parameters, showing the output of the plot and plot_components functions.&lt;br /&gt;
*** The images will be generated into stage files with the following path names:&lt;br /&gt;
**** plot: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}.png&lt;br /&gt;
**** plot_components: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}_comp.png&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_case_clusters&#039;&#039;&#039;: Set the maximum number of clusters to divide the case attribute values into.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;max_num_traces_in_training&#039;&#039;&#039;: Set the maximum number of traces used in training.&lt;br /&gt;
*** When training, every case of length N is split into N traces (a.k.a. prefixes) p_1, ..., p_N, where p_x contains the first x events of the full case.&lt;br /&gt;
**** If more traces are available than this configured value, the cases to include are randomly sampled so that the maximum is exceeded by at most one case.&lt;br /&gt;
**** If null, all traces are used regardless of count, which may easily lead to running out of memory.&lt;br /&gt;
**** The default value is 100000.&lt;br /&gt;
** &#039;&#039;&#039;num_epochs_to_train&#039;&#039;&#039;: How many times the training set is used in training. The best performing model out of all the iterations will be selected.&lt;br /&gt;
*** The default value is 500.&lt;br /&gt;
** &#039;&#039;&#039;num_extra_years_to_reserve_in_created_model&#039;&#039;&#039;: Number of additional years after the year of the last timestamp in the training data to reserve in the capacity of the created ML model. This allows the model to predict timestamps in the range between the minimum timestamp year in the training data and the maximum timestamp year plus this value.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;reserve_extra_sequence_length&#039;&#039;&#039;: How many extra events to reserve space for in the ML model compared to the number of events the longest case in the training data has.&lt;br /&gt;
*** The default value is 5.&lt;br /&gt;
** &#039;&#039;&#039;samples_per_epoch&#039;&#039;&#039;: If not null, specifies (approximately) how many traces/prefixes will be used to represent one epoch of data in the training. The actual value used will be made divisible by batch_size using this formula:&lt;br /&gt;
*** max(floor(samples_per_epoch / batch_size), 1) * batch_size&lt;br /&gt;
*** If null, every epoch will use all the traces/prefixes in the training data.&lt;br /&gt;
*** The default value is null.&lt;br /&gt;
** &#039;&#039;&#039;validation_split&#039;&#039;&#039;: Percentage of traces/prefixes to use to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data.&lt;br /&gt;
*** If 0, separate validation data is not used; instead, all the training data is also used as validation data.&lt;br /&gt;
*** The default value is 0.&lt;br /&gt;
* &#039;&#039;&#039;GenerationConfiguration&#039;&#039;&#039;: Event generation parameters. When null, no generation is done. For example, following parameters are supported:&lt;br /&gt;
** &#039;&#039;&#039;avoid_repeated_activities&#039;&#039;&#039;: Array of activity names that should occur at most once in any case. The probability of selecting any of the activities specified in this configuration more than once is set to be 0. &lt;br /&gt;
*** Empty array means that activity generation is not restricted by this setting at all. &lt;br /&gt;
*** null value means that there should not be any activities that can occur more than once (shortcut for specifying all the activity names).&lt;br /&gt;
*** The default value is an empty array.&lt;br /&gt;
** &#039;&#039;&#039;cases_to_generate&#039;&#039;&#039;: Maximum number of cases to create. The number of created cases is further limited by the capabilities of the trained model and the &#039;&#039;case_generation_start_time&#039;&#039; and &#039;&#039;case_generation_end_time&#039;&#039; parameters.&lt;br /&gt;
*** By default, the number of cases is not limited by this setting.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_start_time&#039;&#039;&#039;: If defined, new cases will be generated after this timestamp (given as string in ISO datetime format). &lt;br /&gt;
*** If undefined, the latest start event timestamp used in the training data is used.&lt;br /&gt;
*** The default value is undefined.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_end_time&#039;&#039;&#039;: If defined, new events and cases will not be generated after this timestamp (given as string in ISO datetime format). E.g., &amp;quot;2015-01-01T00:00:00&amp;quot;.&lt;br /&gt;
*** The default value is unlimited (the only limit comes from the capacity of the trained model).&lt;br /&gt;
** &#039;&#039;&#039;generate_debug_event_attributes&#039;&#039;&#039;: &lt;br /&gt;
*** If true, additional columns will be added containing, e.g., probabilities of the selected activity and other activities.&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_events&#039;&#039;&#039;:&lt;br /&gt;
*** Specifies the maximum number of events to generate for any case.&lt;br /&gt;
*** If unspecified (the default), the value equals &#039;&#039;&amp;lt;the maximum number of events in any case in the training data&amp;gt;&#039;&#039; + &#039;&#039;&amp;lt;the value of reserve_extra_sequence_length in training&amp;gt;&#039;&#039;.&lt;br /&gt;
** &#039;&#039;&#039;min_prediction_probability&#039;&#039;&#039;: &lt;br /&gt;
*** The minimum probability of any prediction. If the probability of a prediction is lower than this, it will never be picked. &lt;br /&gt;
*** The default value is 0.01.&lt;br /&gt;
** &#039;&#039;&#039;temperature&#039;&#039;&#039;: &lt;br /&gt;
*** If 0, the generated next activity will always be the one that is the most probable. &lt;br /&gt;
*** If 1, the generated next activity is purely based on the probabilities returned by the trained ML model. &lt;br /&gt;
*** This behavior is interpolated when using values between 0 and 1.&lt;br /&gt;
*** The default value is 0.9.&lt;br /&gt;
* &#039;&#039;&#039;TrainingDataFilter&#039;&#039;&#039;: [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter]] to select the cases that are used to train the prediction model. This filter is needed to train the model using only completed cases. Incomplete cases should not be used for training, as the model would otherwise incorrectly learn that cases may end in those intermediate states.&lt;br /&gt;
* &#039;&#039;&#039;IncompleteCasesFilter&#039;&#039;&#039;: Optional [[Filtering_in_QPR_ProcessAnalyzer_Queries|filter]] to select which cases the prediction is made for. To improve performance of the prediction, it&#039;s recommended to include only the incomplete cases for which new events might appear, and skip the completed cases for which new events are not expected anymore.&lt;br /&gt;
* &#039;&#039;&#039;TrainingCaseSampleSize&#039;&#039;&#039;: Maximum number of cases to take from the source model (cases are selected randomly). Use a lower setting to speed up the ML model training. The greater the value, the more subtle phenomena the prediction can learn from the data.&lt;br /&gt;
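The trace splitting and the samples_per_epoch rounding described above can be sketched in plain Python (an illustration of the documented formulas, not the actual implementation):&lt;br /&gt;

```python
from math import floor

def case_prefixes(events):
    # A case of length N is split into N traces (prefixes) p_1..p_N,
    # where p_x contains the first x events of the case.
    return [events[:x] for x in range(1, len(events) + 1)]

def effective_samples_per_epoch(samples_per_epoch, batch_size):
    # Documented rounding: max(floor(samples_per_epoch / batch_size), 1) * batch_size
    if samples_per_epoch is None:
        return None  # every epoch uses all traces/prefixes
    return max(floor(samples_per_epoch / batch_size), 1) * batch_size

prefixes = case_prefixes(["Created", "Approved", "Shipped"])
print(len(prefixes))                          # 3
print(effective_samples_per_epoch(1000, 64))  # 960, rounded down to a multiple of 64
print(effective_samples_per_epoch(10, 64))    # 64, at least one full batch
```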
&lt;br /&gt;
== Attribute configuration ==&lt;br /&gt;
Attribute configuration is used in &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039; (see the chapter above) to configure which event- and case attributes should be used in prediction model and how they are used.&lt;br /&gt;
&lt;br /&gt;
At the top level, the configuration is split into two sections: &amp;quot;event&amp;quot; and &amp;quot;case&amp;quot;. &amp;quot;Event&amp;quot; configures event attributes, whereas &amp;quot;case&amp;quot; configures case attributes.&lt;br /&gt;
&lt;br /&gt;
The next level supports one value: &amp;quot;input&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The next level after that supports the following settings:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;categorical_groups&#039;&#039;&#039;: An array of categorical attribute group configuration objects used to define groups of attributes that will be bundled together in the trained model, either as separate input- or output features. Each attribute group will form its own input- or output vector used in the model training and generation.&lt;br /&gt;
** If null, only one group will be created with all the available categorical attributes included.&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;attributes&#039;&#039;&#039;: An array of attribute names.&lt;br /&gt;
**** If null, all the input attributes are to be included in this group.&lt;br /&gt;
*** &#039;&#039;&#039;max_num_clusters&#039;&#039;&#039;: The maximum number of clusters (input- or output vector feature values) to use to represent this group of attributes.&lt;br /&gt;
**** Default value: 20&lt;br /&gt;
**** NOTE: Clustering is used by default to convert a set of attribute values into an input- or output vector used by the prediction model.&lt;br /&gt;
*** &#039;&#039;&#039;ignore_values_threshold&#039;&#039;&#039;: The minimum percentage of objects that must have a specific attribute value in order for that value to be taken into account as a unique attribute value within this categorical group.&lt;br /&gt;
**** Depending on the context, the default value is any one of the following configurations:&lt;br /&gt;
***** ignore_values_threshold_for_case_attribute_values&lt;br /&gt;
****** Used when clustering case attributes when generating attribute values for generated new cases.&lt;br /&gt;
****** Default value is 0.01.&lt;br /&gt;
***** ignore_values_threshold_for_case_attributes&lt;br /&gt;
****** Used when clustering case attributes.&lt;br /&gt;
****** Default value is 0.1.&lt;br /&gt;
***** ignore_values_threshold_for_event_attributes&lt;br /&gt;
****** Used when clustering event attributes.&lt;br /&gt;
****** Default value is 0.1.&lt;br /&gt;
* &#039;&#039;&#039;columns&#039;&#039;&#039;: An array of attribute column configuration objects used to define columns in the input data that are to be used as event- or case attributes.&lt;br /&gt;
** If null, all the columns will be included as categorical attributes (except case id, event type (only for event) and timestamp (only for event) columns).&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;label&#039;&#039;&#039;: Column name.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the column. Supported types are:&lt;br /&gt;
**** &#039;&#039;&#039;categorical&#039;&#039;&#039;: Values can take on one of a limited, and usually fixed, number of possible values.&lt;br /&gt;
**** &#039;&#039;&#039;numeric&#039;&#039;&#039;: Value is considered as a continuous numeric value.&lt;br /&gt;
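As a rough illustration of the ignore_values_threshold settings above (a simplified sketch of the idea, not the actual implementation), attribute values whose relative frequency falls below the threshold are ignored when forming the group&#039;s input or output vector:&lt;br /&gt;

```python
from collections import Counter

def frequent_values(values, threshold):
    # Keep only attribute values whose relative frequency is at least
    # `threshold` (e.g. 0.1 means 10% of objects); rarer values are
    # ignored within the categorical group.
    counts = Counter(values)
    total = len(values)
    return {v for v, c in counts.items() if c / total >= threshold}

# Hypothetical Region case attribute with one rare misspelled value.
regions = ["North"] * 60 + ["South"] * 35 + ["Nrth"] * 5
print(sorted(frequent_values(regions, 0.1)))  # ['North', 'South']
```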
&lt;br /&gt;
==== Example ====&lt;br /&gt;
Use all event attributes as input for the prediction model. In addition, create an additional machine learning input vector for the SAP_User event data column, supporting at most 10 clusters.&lt;br /&gt;
&lt;br /&gt;
In addition, for case attributes, only &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot; and &amp;quot;Customer Group&amp;quot; case data columns are used as categorical attributes and &amp;quot;Cost&amp;quot; as a numeric attribute. Furthermore, the four categorical case attributes are grouped into three groups, each of which is used as its own input vector for the prediction model.&lt;br /&gt;
&lt;br /&gt;
When generating, all event attributes will be included for generated events as columns. Generated cases will have only &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot;,  &amp;quot;Customer Group&amp;quot;, and &amp;quot;Cost&amp;quot; columns.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;attributes&amp;quot;: #{&lt;br /&gt;
  &amp;quot;event&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: null&lt;br /&gt;
        },&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: [&amp;quot;SAP_User&amp;quot;],&lt;br /&gt;
          &amp;quot;max_num_clusters&amp;quot;: 10&lt;br /&gt;
        }&lt;br /&gt;
      ],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: null&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;case&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [#{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Account Manager&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Customer Group&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;]&lt;br /&gt;
      }],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: [&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Region&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Account Manager&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Customer Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Cost&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;numeric&amp;quot; }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Predicting case attribute values ==&lt;br /&gt;
QPR ProcessAnalyzer can also be used to predict, for example, the final values of case attributes of running cases. The following script gives an example of how to do this.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let caseAttributeToPredict = &amp;quot;&amp;lt;name of the case attribute&amp;gt;&amp;quot;;&lt;br /&gt;
let resultModelName = &amp;quot;&amp;lt;name of the model to be created/replaced&amp;gt;&amp;quot;;&lt;br /&gt;
let generateDebugCaseAttributes = false; // Set to true to generate columns for prediction probabilities.&lt;br /&gt;
let casesToPredictFilter = &amp;quot;&amp;lt;JSON filter for cases for which the prediction is to be performed&amp;gt;&amp;quot;;&lt;br /&gt;
let casesToUseForTrainingFilter = &amp;quot;&amp;lt;JSON filter for cases to be used for ML model training&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
&lt;br /&gt;
_system.ML.GenerateCaseAttributePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: resultModelName,                                     // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                                  // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,                              // Target project to create the model into.&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: false,                            // Whether a prediction model should be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,                             // Maximum number of cases to use from the source model (randomly sampled).&lt;br /&gt;
  &amp;quot;CommonConfiguration&amp;quot;: #{                                    // Common parameters used by both training and generation.&lt;br /&gt;
    &amp;quot;output_case_attribute_groups&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;attributes&amp;quot;: [caseAttributeToPredict]                   // Attribute whose value is to be predicted.&lt;br /&gt;
    }]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{                                  // Training parameters.&lt;br /&gt;
    &amp;quot;max_num_case_attribute_clusters&amp;quot;: 80,&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{                                // Case attribute generation parameters.&lt;br /&gt;
    &amp;quot;generate_debug_case_attributes&amp;quot;: generateDebugCaseAttributes // Whether probability and probability_all columns are generated in addition to the actual prediction, which is created into a new column named Predicted_&amp;lt;attribute name&amp;gt;.&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: ParseJson(casesToUseForTrainingFilter), // Filter JSON for cases to be used for training.&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: ParseJson(casesToPredictFilter)      // Filter JSON for cases whose case attribute value is to be predicted.&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27617</id>
		<title>Create Predicted Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27617"/>
		<updated>2026-01-15T11:05:58Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Attribute configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article has instructions how to install, configure and use eventlog predictions. The prediction creates a new model that contains the source model data and the predictions. It&#039;s able to predict case attributes for the generated new cases and event attributes for the predicted events. To distinguish the real (source data) and predicted events and cases, there are following attributes in the model:&lt;br /&gt;
* Event attribute &#039;&#039;&#039;Predicted&#039;&#039;&#039; denotes whether the event is from the source data (&#039;&#039;false&#039;&#039;) or whether it&#039;s predicted (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
* Case attribute &#039;&#039;&#039;Generated&#039;&#039;&#039; denotes whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the prediction generated it as a new case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for prediction ==&lt;br /&gt;
Following prerequisites need to be fulfilled to run the eventlog prediction:&lt;br /&gt;
* QPR ProcessAnalyzer 2024.8 or later in use&lt;br /&gt;
* Snowflake connection is configured&lt;br /&gt;
* Source models are stored to Snowflake&lt;br /&gt;
&lt;br /&gt;
== Install prediction to Snowflake ==&lt;br /&gt;
To install the eventlog prediction to Snowflake:&lt;br /&gt;
# Go to Snowflake, and create a Snowflake-managed stage with name &#039;&#039;&#039;DECISION_INTELLIGENCE&#039;&#039;&#039; to the same schema configured to QPR ProcessAnalyzer (in the Snowflake connection string). Use settings in the following image: [[File:Create_Snowflake_stage.png]]&lt;br /&gt;
# Open the created stage and upload the &#039;&#039;&#039;predict.pyz&#039;&#039;&#039; file into the stage (ask the file from your QPR representative).&lt;br /&gt;
# Create the following procedure to the same schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE OR REPLACE PROCEDURE QPRPA_SP_PREDICTION(&amp;quot;CONFIGURATION&amp;quot; OBJECT)&lt;br /&gt;
RETURNS OBJECT&lt;br /&gt;
LANGUAGE PYTHON&lt;br /&gt;
STRICT&lt;br /&gt;
RUNTIME_VERSION = &#039;3.11&#039;&lt;br /&gt;
PACKAGES = (&#039;nltk&#039;,&#039;numpy&#039;,&#039;networkx&#039;,&#039;pandas&#039;,&#039;scikit-learn&#039;,&#039;snowflake-snowpark-python&#039;,&#039;tensorflow==2.12.0&#039;,&#039;dill&#039;,&#039;psutil&#039;,&#039;prophet&#039;,&#039;holidays&#039;,&#039;python-kubernetes&#039;,&#039;docker-py&#039;,&#039;cryptography&#039;)&lt;br /&gt;
HANDLER = &#039;main&#039;&lt;br /&gt;
EXECUTE AS OWNER&lt;br /&gt;
AS &#039;&lt;br /&gt;
import sys&lt;br /&gt;
def main(session, parameters_in: dict) -&amp;gt; dict:&lt;br /&gt;
	session.file.get(&#039;&#039;@decision_intelligence/predict.pyz&#039;&#039;, &#039;&#039;/tmp&#039;&#039;)&lt;br /&gt;
	sys.path.append(&#039;&#039;/tmp/predict.pyz&#039;&#039;)&lt;br /&gt;
	import predict&lt;br /&gt;
	return predict.main(session, parameters_in)&lt;br /&gt;
&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create prediction script in QPR ProcessAnalyzer ==&lt;br /&gt;
1. Create the following example expression script (e.g., with name &#039;&#039;&#039;Create prediction model&#039;&#039;&#039;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let completeCaseEventTypeNames = [&amp;quot;&amp;lt;event type name found only in complete cases&amp;gt;&amp;quot;, &amp;quot;&amp;lt;another event type name&amp;gt;&amp;quot;, &amp;quot;...&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
let eventTypeColumnName = sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
_system.ML.GeneratePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My prediction model&amp;quot;,      // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,         // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,     // Target project to create the model into.&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{         // Training parameters.&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 200&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{       // Model generation parameters.&lt;br /&gt;
    &amp;quot;cases_to_generate&amp;quot;: 1000&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;ExcludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;Values&amp;quot;: completeCaseEventTypeNames&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,    // Whether a prediction model should be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000     // Maximum number of cases to use from the source model (randomly sampled).&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure prediction for the previously created script as instructed in the next chapter. At minimum, replace the placeholder tags listed below with suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as source model. The data in this source model will be used to train the prediction model so that it can generate new cases and continuations for incomplete existing cases.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event type name found only in complete cases&amp;gt;&#039;&#039;&#039;: This example script has been hard-coded to determine whether a case is complete or incomplete based on the existence of this event type.&lt;br /&gt;
&lt;br /&gt;
== Configure prediction ==&lt;br /&gt;
The prediction script has the following settings in the GeneratePredictionModel call:&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the prediction is made. Model can be selected for example based on id with ModelById function or by name with ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;RecreatePredictionModel&#039;&#039;&#039;: When &#039;&#039;true&#039;&#039;, a new ML model is trained when the script is run. When &#039;&#039;false&#039;&#039;, the prediction is run using a possibly pre-existing ML model.&lt;br /&gt;
* &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039;: Training parameters.&lt;br /&gt;
** &#039;&#039;&#039;attributes&#039;&#039;&#039;: Attribute configurations (for more information, see the chapter below).&lt;br /&gt;
** &#039;&#039;&#039;generate_start_time_trend_images&#039;&#039;&#039;: If set to true, two images are generated for each cross-validated Prophet parameter combination, and also for the final selected parameters, showing the output of the plot and plot_components functions.&lt;br /&gt;
*** The images will be generated into stage files with the following path names:&lt;br /&gt;
**** plot: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}.png&lt;br /&gt;
**** plot_components: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}_comp.png&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_case_clusters&#039;&#039;&#039;: Set the maximum number of clusters to divide the case attribute values into.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;max_num_traces_in_training&#039;&#039;&#039;: Set the maximum number of traces used in training.&lt;br /&gt;
*** When training, every case of length N is split into N traces (a.k.a. prefixes) p_1, ..., p_N, where p_x contains the first x events of the full case.&lt;br /&gt;
**** If more traces are available than this configured value, the cases to include are randomly sampled so that the maximum is exceeded by at most one case.&lt;br /&gt;
**** If null, all traces are used regardless of count, which may easily lead to running out of memory.&lt;br /&gt;
**** The default value is 100000.&lt;br /&gt;
** &#039;&#039;&#039;num_epochs_to_train&#039;&#039;&#039;: How many times the training set is used in training. The best performing model out of all the iterations will be selected.&lt;br /&gt;
*** The default value is 500.&lt;br /&gt;
** &#039;&#039;&#039;num_extra_years_to_reserve_in_created_model&#039;&#039;&#039;: Number of additional years after the year of the last timestamp in the training data to reserve in the capacity of the created ML model. This allows the model to predict timestamps in the range between the minimum timestamp year in the training data and the maximum timestamp year plus this value.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;reserve_extra_sequence_length&#039;&#039;&#039;: How many extra events to reserve space for in the ML model compared to the number of events the longest case in the training data has.&lt;br /&gt;
*** The default value is 5.&lt;br /&gt;
** &#039;&#039;&#039;samples_per_epoch&#039;&#039;&#039;: If not null, specifies (approximately) how many traces/prefixes are used to represent one epoch of data in the training. The actual value used is made divisible by batch_size using the following formula:&lt;br /&gt;
*** max(floor(samples_per_epoch / batch_size), 1) * batch_size&lt;br /&gt;
*** For example, with samples_per_epoch = 1000 and batch_size = 64, the actual value used is max(floor(1000 / 64), 1) * 64 = 15 * 64 = 960.&lt;br /&gt;
*** If null, every epoch uses all the traces/prefixes in the training data.&lt;br /&gt;
*** The default value is null.&lt;br /&gt;
** &#039;&#039;&#039;validation_split&#039;&#039;&#039;: Percentage of traces/prefixes to use to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data.&lt;br /&gt;
*** If 0, separate validation data will not be used. Instead, all the training data will be used also as validation data.&lt;br /&gt;
*** The default value is 0.&lt;br /&gt;
* &#039;&#039;&#039;GenerationConfiguration&#039;&#039;&#039;: Event generation parameters. When null, no generation is done. For example, the following parameters are supported:&lt;br /&gt;
** &#039;&#039;&#039;avoid_repeated_activities&#039;&#039;&#039;: Array of activity names that should occur at most once in any case. The probability of selecting any of the activities specified in this configuration more than once is set to 0.&lt;br /&gt;
*** An empty array means that activity generation is not restricted by this setting at all.&lt;br /&gt;
*** A null value means that no activity may occur more than once (a shortcut for specifying all the activity names).&lt;br /&gt;
*** The default value is an empty array.&lt;br /&gt;
** &#039;&#039;&#039;cases_to_generate&#039;&#039;&#039;: Maximum number of cases to create. The number of created cases is further limited by the capabilities of the trained model and the &#039;&#039;case_generation_start_time&#039;&#039; and &#039;&#039;case_generation_end_time&#039;&#039; parameters.&lt;br /&gt;
*** The default value does not limit the number of cases by itself.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_start_time&#039;&#039;&#039;: If defined, new cases will be generated after this timestamp (given as string in ISO datetime format). &lt;br /&gt;
*** If undefined, the latest start event timestamp used in the training data is used.&lt;br /&gt;
*** The default value is undefined.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_end_time&#039;&#039;&#039;: If defined, new events and cases will not be generated after this timestamp (given as string in ISO datetime format). E.g., &amp;quot;2015-01-01T00:00:00&amp;quot;.&lt;br /&gt;
*** The default value is unlimited (the only limit comes from the capacity of the trained model).&lt;br /&gt;
** &#039;&#039;&#039;generate_debug_event_attributes&#039;&#039;&#039;: &lt;br /&gt;
*** If true, additional columns will be added containing, e.g., probabilities of the selected activity and other activities.&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_events&#039;&#039;&#039;:&lt;br /&gt;
*** Specifies the maximum number of events to generate for any case.&lt;br /&gt;
*** If unspecified (=default), the value equals to &#039;&#039;&amp;lt;the maximum number of events in any case in the training data&amp;gt;&#039;&#039;+&#039;&#039;&amp;lt;the value of reserve_extra_sequence_length in training&amp;gt;&#039;&#039;.&lt;br /&gt;
** &#039;&#039;&#039;min_prediction_probability&#039;&#039;&#039;:&lt;br /&gt;
*** The minimum probability of any prediction. If the probability of a prediction is lower than this, it will never be picked. &lt;br /&gt;
*** The default value is 0.01.&lt;br /&gt;
** &#039;&#039;&#039;temperature&#039;&#039;&#039;: &lt;br /&gt;
*** If 0, the generated next activity will always be the one that is the most probable. &lt;br /&gt;
*** If 1, the generated next activity is purely based on the probabilities returned by the trained ML model. &lt;br /&gt;
*** This behavior is interpolated when using values between 0 and 1.&lt;br /&gt;
*** The default value is 0.9.&lt;br /&gt;
* &#039;&#039;&#039;TrainingDataFilter&#039;&#039;&#039;: [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter]] to select specific cases that are used to train the prediction model. This filter is required so that the model is trained using only completed cases. Incomplete cases should not be used for training, so that the model doesn&#039;t incorrectly learn that cases should end where the incomplete cases currently end.&lt;br /&gt;
* &#039;&#039;&#039;IncompleteCasesFilter&#039;&#039;&#039;: Optional [[Filtering_in_QPR_ProcessAnalyzer_Queries|filter]] to select which cases the prediction is made for. To improve performance of the prediction, it&#039;s recommended to include only the incomplete cases for which new events might appear, and skip the completed cases for which new events are not expected anymore.&lt;br /&gt;
* &#039;&#039;&#039;TrainingCaseSampleSize&#039;&#039;&#039;: Maximum number of cases to take from the source model (cases are selected randomly). Use a lower setting to speed up the ML model training. The greater the value, the more subtle phenomena the prediction can learn from the data.&lt;br /&gt;
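&lt;br /&gt;
For illustration, the parameters described above could be collected into a parameter dictionary as in the following sketch. The project and model names, values and chosen settings are placeholders only, not recommendations:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let predictionParameters = #{&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;),&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: Project,&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{&lt;br /&gt;
    &amp;quot;temperature&amp;quot;: 0.9&lt;br /&gt;
  }&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;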
&lt;br /&gt;
== Attribute configuration ==&lt;br /&gt;
Attribute configuration is used in &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039; (see the chapter above) to configure which event and case attributes are used in the prediction model and how they are used.&lt;br /&gt;
&lt;br /&gt;
The configuration is in the top level split into two sections: &amp;quot;event&amp;quot; and &amp;quot;case&amp;quot;. &amp;quot;Event&amp;quot; is used to configure event attributes, whereas &amp;quot;case&amp;quot; is used for case attributes.&lt;br /&gt;
&lt;br /&gt;
The next level supports one value: &amp;quot;input&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The next level after that supports the following settings:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;categorical_groups&#039;&#039;&#039;: An array of categorical attribute group configuration objects used to define groups of attributes that will be bundled together in the trained model, either as separate input or output features. Each attribute group forms its own input or output vector used in the model training and generation.&lt;br /&gt;
** If null, only one group will be created with all the available categorical attributes included.&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;attributes&#039;&#039;&#039;: An array of attribute names.&lt;br /&gt;
**** If null, all the input attributes are to be included in this group.&lt;br /&gt;
*** &#039;&#039;&#039;max_num_clusters&#039;&#039;&#039;: The maximum number of clusters (input or output vector feature values) to use to represent this group of attributes.&lt;br /&gt;
**** Default value: 20&lt;br /&gt;
**** NOTE: Clustering is used by default to convert a set of attribute values into an input or output vector used by the prediction model.&lt;br /&gt;
*** &#039;&#039;&#039;ignore_values_threshold&#039;&#039;&#039;: The minimum percentage of objects having a specific attribute value in order for that attribute value to be taken into account as a unique attribute value within this categorical group.&lt;br /&gt;
**** Depending on the context, the default value is any one of the following configurations:&lt;br /&gt;
***** ignore_values_threshold_for_case_attribute_values&lt;br /&gt;
***** ignore_values_threshold_for_case_attributes&lt;br /&gt;
***** ignore_values_threshold_for_event_attributes&lt;br /&gt;
* &#039;&#039;&#039;columns&#039;&#039;&#039;: An array of attribute column configuration objects used to define the columns in the input data that are to be used as event or case attributes.&lt;br /&gt;
** If null, all the columns will be included as categorical attributes (except case id, event type (only for event) and timestamp (only for event) columns).&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;label&#039;&#039;&#039;: Column name.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the column. Supported types are:&lt;br /&gt;
**** &#039;&#039;&#039;categorical&#039;&#039;&#039;: Values can take on one of a limited, and usually fixed, number of possible values.&lt;br /&gt;
**** &#039;&#039;&#039;numeric&#039;&#039;&#039;: Value is considered as a continuous numeric value.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
Use all event attributes as input for the prediction model. In addition, create an additional machine learning input vector for the SAP_User event data column, supporting at most 10 unique values.&lt;br /&gt;
&lt;br /&gt;
In addition, for case attributes, only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot; and &amp;quot;Customer Group&amp;quot; case data columns are used as categorical attributes, and &amp;quot;Cost&amp;quot; as a numeric attribute. Furthermore, the four categorical case attributes are grouped into three groups, each of which is used as its own input vector for the prediction model.&lt;br /&gt;
&lt;br /&gt;
When generating, all event attributes will be included for generated events as columns. Generated cases will have only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, and &amp;quot;Cost&amp;quot; columns.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;attributes&amp;quot;: #{&lt;br /&gt;
  &amp;quot;event&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: null&lt;br /&gt;
        },&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: [&amp;quot;SAP_User&amp;quot;],&lt;br /&gt;
          &amp;quot;max_num_clusters&amp;quot;: 10&lt;br /&gt;
        }&lt;br /&gt;
      ],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: null&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;case&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [#{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Account Manager&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Customer Group&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;]&lt;br /&gt;
      }],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: [&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Region&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Account Manager&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Customer Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Cost&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;numeric&amp;quot; }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Predicting case attribute values ==&lt;br /&gt;
QPR ProcessAnalyzer can also be used to predict, for example, the final values of case attributes of running cases. The following script gives an example of how to perform this.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let caseAttributeToPredict = &amp;quot;&amp;lt;name of the case attribute&amp;gt;&amp;quot;;&lt;br /&gt;
let resultModelName = &amp;quot;&amp;lt;name of the model to be created/replaced&amp;gt;&amp;quot;;&lt;br /&gt;
let generateDebugCaseAttributes = false; // Set to true to generate columns for prediction probabilities.&lt;br /&gt;
let casesToPredictFilter = &amp;quot;&amp;lt;JSON filter for cases for which the prediction is to be performed&amp;gt;&amp;quot;;&lt;br /&gt;
let casesToUseForTrainingFilter = &amp;quot;&amp;lt;JSON filter for cases to be used for ML model training&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
&lt;br /&gt;
_system.ML.GenerateCaseAttributePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: resultModelName,                                     // Name of the PA model to generate into the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                                  // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,                              // Target project to create the model into.&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: false,                            // Should a prediction model be overwritten if one already exists for this source model and target model name combination. &lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,                             // Maximum number of cases to use from the source model (random sampled). &lt;br /&gt;
  &amp;quot;CommonConfiguration&amp;quot;: #{                                    // Common parameters used by both training and generation.&lt;br /&gt;
    &amp;quot;output_case_attribute_groups&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;attributes&amp;quot;: [caseAttributeToPredict]                   // Attribute whose value is to be predicted.&lt;br /&gt;
    }]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{                                  // Training parameters.&lt;br /&gt;
    &amp;quot;max_num_case_attribute_clusters&amp;quot;: 80,&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },                            &lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{                                // Case attribute generation parameters.&lt;br /&gt;
    &amp;quot;generate_debug_case_attributes&amp;quot;: generateDebugCaseAttributes // Should probability and probability_all-columns be generated as well as the actual prediction created into a new column named Predicted_&amp;lt;attribute name&amp;gt;&lt;br /&gt;
  },                                                       &lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: ParseJson(casesToUseForTrainingFilter), // Filter JSON for cases to be used for training.&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: ParseJson(casesToPredictFilter)      // Filter JSON for cases whose case attribute value is to be predicted.&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27598</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27598"/>
		<updated>2026-01-12T08:22:24Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Create a new object-centric model based on already existing object-centric model containing only filtered objects and events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions and entities can be found from the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
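&lt;br /&gt;
As a minimal sketch, the script called in the example above could read the passed parameters simply as variables:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
// Contents of the called script: the passed parameters are available as variables.&lt;br /&gt;
WriteLog(`Called with parameter1 = ${parameter1}`);&lt;br /&gt;
return parameter1;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;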
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from dashboards and allows reusing the same expressions across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary, where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The above example shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (Snowflake) ===&lt;br /&gt;
The following script converts textual data from the &amp;quot;TimestampString&amp;quot; column into dates in the &amp;quot;Timestamp&amp;quot; column by trying different time formats and using the first suitable one.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
DatatableById(1)&lt;br /&gt;
  .UpdateRows(&lt;br /&gt;
    true,&lt;br /&gt;
    &amp;quot;Timestamp&amp;quot;,&lt;br /&gt;
    Coalesce(&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;DD.MM.YYYY HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY-MM-DD HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY/MM/DD HH24:MI:SS.FF3&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (in-memory) ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, you need to set up the following in the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), the data can be converted into strings using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
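&lt;br /&gt;
For example, the conversion part of the script above would then look like the following sketch:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
// Convert the column values into strings instead of numbers.&lt;br /&gt;
let data = Column(columnName);&lt;br /&gt;
if (data == null) {&lt;br /&gt;
	null;&lt;br /&gt;
} else {&lt;br /&gt;
	ToString(data);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;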
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function to show a dataframe as an HTML table, and uses the function for a literal dataframe.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. The new datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping.`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with columns, you can change the &#039;&#039;Import&#039;&#039; call to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable to Snowflake. Replace the &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
SqlDataFrame is used to avoid loading the whole datatable into memory. Filtering is performed as the first operation to minimize the amount of work required from the data table&#039;s data source.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
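The filter-then-group-then-count shape of this query can be illustrated in plain Python (the sample rows and field name below are made up for illustration; the real query pushes this work to the data source instead of evaluating it in memory):

```python
from collections import Counter
from datetime import datetime

# Illustrative sample rows; a real datatable would come from the data source.
rows = [
    {"Start Time": datetime(2014, 1, 5)},
    {"Start Time": datetime(2014, 1, 20)},
    {"Start Time": datetime(2014, 3, 2)},
    {"Start Time": datetime(2015, 7, 1)},  # filtered out: wrong year
]

# Filter first (cheapest to push down), then group by month and count.
counts = Counter(r["Start Time"].month for r in rows if r["Start Time"].year == 2014)
result = sorted(counts.items())  # ordered by month ascending

print(result)  # [(1, 2), (3, 1)]
```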
&lt;br /&gt;
=== Function for filtering SqlDataFrame by removing rows having, or replacing, the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace them&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * If not empty/null, the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are&lt;br /&gt;
 * included as a single aggregated &amp;quot;Others&amp;quot; value, and this parameter defines the name used for it.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
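The core idea of ColumnWithMinUsage can be illustrated in plain Python: count value frequencies, keep the values passing the thresholds, and either drop the infrequent rows or relabel them with a common value. The function and variable names below are illustrative, not part of the QPR API:

```python
from collections import Counter

def filter_by_usage(values, max_unique=None, min_usage=None, others=None):
    """Keep the most frequent values; drop or relabel the infrequent ones."""
    counts = Counter(values)
    total = len(values)
    accepted = set(counts)
    if max_unique is not None:
        # Keep only the max_unique most common values.
        accepted &= {v for v, _ in counts.most_common(max_unique)}
    if min_usage is not None:
        # Keep only values used by at least the given fraction of all rows.
        accepted &= {v for v, c in counts.items() if c / total >= min_usage}
    if others is None:
        return [v for v in values if v in accepted]        # drop infrequent rows
    return [v if v in accepted else others for v in values]  # relabel them

data = ["North", "North", "North", "South", "South", "East"]
print(filter_by_usage(data, max_unique=2))                 # drops the "East" row
print(filter_by_usage(data, min_usage=0.3, others="_Others"))
```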
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
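The usage calculation above is +1/-1 bookkeeping: a case contributes +1 when it switches onto a resource and -1 when it switches away, and a running sum over time gives the concurrent usage of that resource. A plain-Python sketch of that idea (event data and names are illustrative):

```python
# Events ordered by timestamp: (case_id, resource).
events = [
    ("c1", "Alice"),
    ("c2", "Alice"),
    ("c1", "Bob"),    # c1 moves from Alice to Bob
    ("c2", "Bob"),    # c2 moves from Alice to Bob
]

def usage_over_time(events, resource):
    """Running count of cases currently on the given resource."""
    prev_by_case, usage, out = {}, 0, []
    for case, res in events:
        prev = prev_by_case.get(case)
        if prev != res:
            if prev == resource:
                usage -= 1   # case left the resource
            if res == resource:
                usage += 1   # case arrived at the resource
        prev_by_case[case] = res
        out.append(usage)
    return out

print(usage_over_time(events, "Alice"))  # [1, 2, 1, 0]
print(usage_over_time(events, "Bob"))    # [0, 0, 1, 2]
```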
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables, for cases and events) containing the filtered event log defined by the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDataTable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .Join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDataTable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model with the given name &lt;br /&gt;
 * already exists, the event and case data in that model will be replaced with the newly generated &lt;br /&gt;
 * event and case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites old one) into project named &amp;quot;TestData&amp;quot; containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
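The effect of the RepeatNTimes step can be sketched in plain Python: emit N copies of the rows, prefixing each copy's case id with the repeat index. The row structure below is illustrative:

```python
def repeat_n_times(rows, case_id_key, n):
    """Return n copies of rows, each copy's case ids prefixed with the repeat index."""
    result = []
    for i in range(1, n + 1):
        for row in rows:
            copy = dict(row)
            copy[case_id_key] = f"{i}-{row[case_id_key]}"  # e.g. "2-A" on the 2nd repeat
            result.append(copy)
    return result

rows = [{"CaseId": "A", "EventType": "Start"}]
out = repeat_n_times(rows, "CaseId", 3)
print([r["CaseId"] for r in out])  # ['1-A', '2-A', '3-A']
```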
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases whose event type sequences match certain [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
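The same declare-pattern regular expressions can be tried out in plain Python. Like the Regexp function used above, `re.fullmatch` requires the whole trace string to match the pattern. The traces below are illustrative single-letter tokens, as produced by the tokenization step in the expression:

```python
import re

def response_pattern(a, b):
    # Every occurrence of a must eventually be followed by b: [^a]*(a.*b)*[^a]*
    return re.compile(f"[^{a}]*({a}.*{b})*[^{a}]*")

traces = {"c1": "xab", "c2": "xa", "c3": "xx"}
matches = {cid: bool(response_pattern("a", "b").fullmatch(t))
           for cid, t in traces.items()}
print(matches)  # {'c1': True, 'c2': False, 'c3': True}
```

Only "c2" violates the response constraint, since its "a" is never followed by a "b".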
&lt;br /&gt;
=== Perform a query and send results as E-mail in a HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The contents of this script are just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;testuser@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = ParseJson(Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code)&lt;br /&gt;
  .Set(&amp;quot;IncludeCollect&amp;quot;, true);&lt;br /&gt;
let resultDf = Query(queryConfiguration);&lt;br /&gt;
let mailBodyHtml = resultDf.`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: QPR ProcessAnalyzer Server needs to have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;br /&gt;
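The nested string templates above build an HTML table from the query result's columns and rows. The same structure, sketched in plain Python for clarity (the column names and row data are illustrative):

```python
def to_html_table(columns, rows, caption="Example query"):
    """Render a result set as a simple HTML table."""
    head = "".join(f"<th>{c}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{v}</td>" for v in row) + "</tr>"
        for row in rows
    )
    return (f"<table><caption>{caption}</caption>"
            f"<thead><tr>{head}</tr></thead>"
            f"<tbody>{body}</tbody></table>")

html = to_html_table(["Month", "Count"], [[1, 42], [2, 17]])
print(html)
```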
&lt;br /&gt;
=== Converting a case-centric model to object-centric model ===&lt;br /&gt;
This function serves as an example of how a case-centric model can be converted into an object-centric model with just one object type: &amp;quot;Case&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ConvertCCModelToOCModel(model, newModelName) &lt;br /&gt;
{&lt;br /&gt;
  let connection = model.EventsDataTable.DataSourceConnection;&lt;br /&gt;
  let caseIdColumn = model.EventsDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;];&lt;br /&gt;
  let eventToObjectTableName = `${newModelName} - event-to-object`;&lt;br /&gt;
  let eventsTableName = `${newModelName} - events`;&lt;br /&gt;
  let objectsTableName = `${newModelName} - objects`;&lt;br /&gt;
          &lt;br /&gt;
  let eventsDf = model.EventsDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventType&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
      &amp;quot;OcelEventTime&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventId&amp;quot;, #sql{Concat(&amp;quot;evt-&amp;quot;, Cast(RowNumber([Column(&amp;quot;OcelEventTime&amp;quot;)]), &amp;quot;String&amp;quot;))});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventToObjectSourceId&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
      &amp;quot;OcelEventToObjectTargetId&amp;quot;: caseIdColumn])&lt;br /&gt;
    .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventToObjectQualifier&amp;quot;, #sql{#expr{caseIdColumn} })&lt;br /&gt;
    .Persist(eventToObjectTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RemoveColumns([caseIdColumn])&lt;br /&gt;
    .Persist(eventsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let casesDt = model.CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelObjectId&amp;quot;: model.CasesDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;]&lt;br /&gt;
    ])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelObjectType&amp;quot;, #sql{&amp;quot;Case&amp;quot;})&lt;br /&gt;
    .Persist(objectsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let newConfiguration = #{&lt;br /&gt;
    &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTableName,&lt;br /&gt;
      &amp;quot;Objects&amp;quot;: objectsTableName,&lt;br /&gt;
      &amp;quot;EventToObject&amp;quot;: eventToObjectTableName&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
&lt;br /&gt;
  model.Project  &lt;br /&gt;
    .CreateModel(#{      &lt;br /&gt;
      &amp;quot;Name&amp;quot;: newModelName,  &lt;br /&gt;
      &amp;quot;Description&amp;quot;: model.Description,  &lt;br /&gt;
      &amp;quot;Configuration&amp;quot;: newConfiguration  &lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let ccModel = ModelById(&amp;lt;model id&amp;gt;);&lt;br /&gt;
ConvertCCModelToOCModel(ccModel, `ocel - ${ccModel.Name}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
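The three tables the function produces can be sketched in plain Python: an events table with generated event ids, an event-to-object table linking each event to its case, and an objects table with one Case-type object per case. The row structure below is illustrative:

```python
def cc_to_oc(events, cases):
    """Split case-centric data into OCEL-style events, links and objects."""
    oc_events, links = [], []
    for i, evt in enumerate(events, start=1):
        event_id = f"evt-{i}"  # generated id, as in the RowNumber-based column
        oc_events.append({"OcelEventId": event_id,
                          "OcelEventType": evt["EventType"],
                          "OcelEventTime": evt["TimeStamp"]})
        links.append({"OcelEventToObjectSourceId": event_id,
                      "OcelEventToObjectTargetId": evt["CaseId"]})
    objects = [{"OcelObjectId": c["CaseId"], "OcelObjectType": "Case"}
               for c in cases]
    return oc_events, links, objects

events = [{"CaseId": "A", "EventType": "Start", "TimeStamp": "2024-01-01"}]
cases = [{"CaseId": "A"}]
oc_events, links, objects = cc_to_oc(events, cases)
print(links)
```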
&lt;br /&gt;
=== Show duplicate rows in datatable ===&lt;br /&gt;
This script returns all rows in a datatable that appear more than once, sorted by number of occurrences with the most frequent first. The number of occurrences is returned as the last column. Replace the datatable id with the correct one.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let datatableId = 1;&lt;br /&gt;
let rowCountColumn = &amp;quot;Row count&amp;quot;;&lt;br /&gt;
let columns = DatatableById(datatableId).Columns.Name;&lt;br /&gt;
DatatableById(datatableId)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .GroupBy(columns)&lt;br /&gt;
  .Aggregate(&lt;br /&gt;
    [rowCountColumn: columns[0]],&lt;br /&gt;
    [&amp;quot;Count&amp;quot;]&lt;br /&gt;
  )&lt;br /&gt;
  .Where(Column(rowCountColumn) &amp;gt; 1)&lt;br /&gt;
  .OrderByColumns([rowCountColumn], [false])&lt;br /&gt;
  .Collect()&lt;br /&gt;
  .ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a new object-centric model containing only filtered objects and events based on an existing object-centric model ===&lt;br /&gt;
This script creates a new object-centric model based on another existing object-centric model: it applies the given filter to the original model and creates all the tables needed to store only the relevant rows of each object-centric data table.&lt;br /&gt;
&lt;br /&gt;
NOTE: Replace &amp;lt;nowiki&amp;gt;&amp;lt;source model id&amp;gt;, &amp;lt;target model name&amp;gt;, &amp;lt;target project id&amp;gt; as well as the value of ocelItems with the applicable values before use.&amp;lt;/nowiki&amp;gt;&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let newModelName = &amp;quot;&amp;lt;target model name&amp;gt;&amp;quot;;&lt;br /&gt;
let sourceModelId = &amp;lt;source model id&amp;gt;;&lt;br /&gt;
let targetProjectId = &amp;lt;target project id&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
// Replace the value of the ocelItems below with the configuration of object-centric filter to apply.&lt;br /&gt;
// Note: Use expression language syntax here, not JSON.&lt;br /&gt;
let ocelItems = [#{&lt;br /&gt;
  &amp;quot;Include&amp;quot;: true,&lt;br /&gt;
  &amp;quot;ObjectAttributeValue&amp;quot;: #{&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Attribute&amp;quot;: &amp;quot;po_product&amp;quot;,&lt;br /&gt;
    &amp;quot;Values&amp;quot;: [&lt;br /&gt;
      &amp;quot;0Cows&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}];&lt;br /&gt;
&lt;br /&gt;
let m = ModelById(sourceModelId);&lt;br /&gt;
if (!m.IsOcelModel)&lt;br /&gt;
  throw `Model ${m.Name} is not an OCPM model`;&lt;br /&gt;
if (CountTop(m.CheckModelValidity()) &amp;gt; 0)&lt;br /&gt;
  throw `Model ${m.Name} is not a valid OCPM model`;&lt;br /&gt;
&lt;br /&gt;
let targetProject = ProjectById(targetProjectId);&lt;br /&gt;
if (CountTop(targetProject.Models.Where(Name == newModelName)) &amp;gt; 0)&lt;br /&gt;
  throw `Model having name ${newModelName} already exists in project ${targetProject.Name}. Rename or delete the existing model before running this script.`;&lt;br /&gt;
&lt;br /&gt;
let newConfiguration = #{&lt;br /&gt;
  &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
    &amp;quot;EventTypes&amp;quot;: #{},&lt;br /&gt;
    &amp;quot;ObjectTypes&amp;quot;: #{}&lt;br /&gt;
  }&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating common tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function PersistDataFrame(dataFrame, configurationName, configurationDict, dataTableName)&lt;br /&gt;
{&lt;br /&gt;
  WriteLog(`Creating filtered table: ${dataTableName}`);&lt;br /&gt;
  configurationDict.Set(configurationName, dataTableName);&lt;br /&gt;
  return dataFrame.Persist(dataTableName, #{&amp;quot;ProjectId&amp;quot;: targetProjectId, &amp;quot;Append&amp;quot;: false});&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let objectsDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;Objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjects.PrimaryKey);&lt;br /&gt;
let newObjectsDt = PersistDataFrame(objectsDf, &amp;quot;Objects&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - objects`);&lt;br /&gt;
&lt;br /&gt;
let objectToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;ObjectToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjectToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelObjectToObjectSourceId&amp;quot;, &amp;quot;OcelObjectToObjectTargetId&amp;quot;, &amp;quot;OcelObjectToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newObjectToObjectDt = PersistDataFrame(objectToObjectDf, &amp;quot;ObjectToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - object-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;EventToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelEventToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;, &amp;quot;OcelEventToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newEventToObjectDt = PersistDataFrame(eventToObjectDf, &amp;quot;EventToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - event-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventsDf = m.OcelEvents.SqlDataFrame&lt;br /&gt;
  .Join(eventToObjectDf.SelectDistinct([&amp;quot;OcelEventToObjectSourceId&amp;quot;]), [&amp;quot;OcelEventId&amp;quot;: &amp;quot;OcelEventToObjectSourceId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
  .RemoveColumns([&amp;quot;OcelEventToObjectSourceId&amp;quot;])&lt;br /&gt;
  .SetPrimaryKey(m.OcelEvents.PrimaryKey);&lt;br /&gt;
let newEventsDt = PersistDataFrame(eventsDf, &amp;quot;Events&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - events`);&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating event type tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.EventTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let eventTypeName = GetContext(ar);&lt;br /&gt;
  let eventTypeDt = m.OcelEventType(eventTypeName);&lt;br /&gt;
  let eventTypeDf = eventTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newEventsDt.SqlDataFrame.Select([&amp;quot;OcelEventId&amp;quot;]), [&amp;quot;OcelEventTypeEventId&amp;quot;: &amp;quot;OcelEventId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelEventId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(eventTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(eventTypeDf, eventTypeName, newConfiguration.OcelDataSource.EventTypes, `${newModelName} - eventtype - ${eventTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
  &lt;br /&gt;
WriteLog(&amp;quot;Creating object type tables...&amp;quot;);&lt;br /&gt;
let newUniqueObjectsDf = newObjectsDt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .SelectDistinct([&amp;quot;OcelObjectId&amp;quot;]);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.ObjectTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let objectTypeName = GetContext(ar);&lt;br /&gt;
  let objectTypeDt = m.OcelObjectType(objectTypeName);&lt;br /&gt;
  let objectTypeDf = objectTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newUniqueObjectsDf, [&amp;quot;OcelObjectTypeObjectId&amp;quot;: &amp;quot;OcelObjectId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelObjectId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(objectTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(objectTypeDf, objectTypeName, newConfiguration.OcelDataSource.ObjectTypes, `${newModelName} - objecttype - ${objectTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Creating model ${newModelName}...`);&lt;br /&gt;
&lt;br /&gt;
let newModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: newModelName, &amp;quot;Configuration&amp;quot;: newConfiguration});&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Model (id=${newModel.Id}) created with configuration:\r\n${ToJson(newConfiguration)}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27597</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27597"/>
		<updated>2026-01-12T08:18:53Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See [[Managing_Scripts#Creating_Script|Workspace]] for instructions on creating expression scripts. Documentation for the syntax, functions and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
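&lt;br /&gt;
For illustration, the called script can read the passed parameters directly as variables (a minimal sketch; the parameter names are hypothetical and correspond to the Run call example above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Body of the called script: the passed parameters are available as variables.&lt;br /&gt;
WriteLog(`parameter1 is ${parameter1}`);&lt;br /&gt;
// The value of the last evaluated expression is returned to the caller.&lt;br /&gt;
return `processed ${parameter1}`;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;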
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Storing expressions in scripts instead of dashboards is a way to separate complex expressions from dashboards and to reuse the same expressions across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Connect to a web service, fetch data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.Keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (Snowflake) ===&lt;br /&gt;
The following script converts textual data from the column &amp;quot;TimestampString&amp;quot; into dates in the column &amp;quot;Timestamp&amp;quot; by trying different time formats and using the first one that matches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
DatatableById(1)&lt;br /&gt;
  .UpdateRows(&lt;br /&gt;
    true,&lt;br /&gt;
    &amp;quot;Timestamp&amp;quot;,&lt;br /&gt;
    Coalesce(&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;DD.MM.YYYY HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY-MM-DD HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY/MM/DD HH24:MI:SS.FF3&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (in-memory) ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into a string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
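&lt;br /&gt;
As a sketch, the &#039;&#039;SetColumns&#039;&#039; part of the script above would then become (assuming the same &#039;&#039;columnName&#039;&#039; variable):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		} else {&lt;br /&gt;
			ToString(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;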
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that renders a dataframe as an HTML table, and uses the function on a literal dataframe.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.Columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with their columns, you can change the &#039;&#039;targetDt.Import&#039;&#039; line to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable to Snowflake. Replace the &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
This script creates a copy of a data table where every column of type &#039;&#039;Any&#039;&#039; is converted into a string column. Replace &#039;&#039;&amp;lt;dataTableId&amp;gt;&#039;&#039; with the id of the source data table.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
SqlDataFrame is used to avoid loading the whole datatable into memory. Filtering is performed as the first operation to minimize the amount of work required from the data source of the data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering SqlDataFrame by removing rows having, or replacing, the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or to replace them&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value should be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other-values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues);&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the events and cases data tables of a model can be exported directly as CSV:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
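The shape of the exported CSV can be sketched outside the expression language. This hypothetical Python snippet (names are illustrative, not part of the PA API) mirrors how the export concatenates the fixed columns with one column per event attribute:

```python
def events_to_rows(events, attr_names):
    # Mirror of the export: three fixed columns followed by one column
    # per event attribute; missing attribute values become None.
    header = ["CaseId", "EventType", "TimeStamp"] + attr_names
    rows = [
        [e["case"], e["type"], e["ts"]] + [e["attrs"].get(a) for a in attr_names]
        for e in events
    ]
    return header, rows
```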
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
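The lag / usage-diff / running-sum logic above can be illustrated in plain Python. This is a simplified single-resource sketch under the same assumptions (each case occupies one resource at a time); the real expression pushes the equivalent computation into Snowflake SQL window functions:

```python
def usage_over_time(events, resource):
    # events: (timestamp, case_id, resource) tuples; the dict below plays
    # the role of Lag(...) partitioned by CaseId.
    prev = {}
    usage, history = 0, []
    for ts, case, res in sorted(events):
        p = prev.get(case)
        if p != res:               # unchanged resource contributes 0
            if p == resource:      # the case left the resource
                usage -= 1
            elif res == resource:  # the case entered the resource
                usage += 1
        prev[case] = res
        history.append(usage)      # running Sum of the usage diffs
    return history
```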
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables, for cases and events) containing the filtered event log for the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .Join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
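The 440-character checks above truncate from the front, keeping the end of the name so the distinguishing filter-name/timestamp suffix survives. The same idea in Python (illustrative only):

```python
def truncate_keep_suffix(name, limit=440):
    # Keep the trailing characters so the " - filter - timestamp" suffix
    # that distinguishes generated names is never cut off.
    if len(name) > limit:
        return name[len(name) - limit:]
    return name
```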
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing one) in the given target project with the given &lt;br /&gt;
 * number of repetitions of the given source model.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model with the given &lt;br /&gt;
 * name already exists, the event and case data in that model are replaced with the newly &lt;br /&gt;
 * generated event and case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites the old one) in the project named &amp;quot;TestData&amp;quot;, containing the data from the model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in the project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
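The RepeatNTimes helper boils down to replicating the rows and prefixing the case id with the repeat index, as this hypothetical Python sketch shows (the actual script does this lazily on SQL data frames):

```python
def repeat_rows(rows, case_idx, num_repeats):
    # Append num_repeats copies of the rows, prefixing the case id column
    # with "1-", "2-", ... so each copy gets distinct case ids.
    result = []
    for i in range(1, num_repeats + 1):
        for row in rows:
            copy = list(row)
            copy[case_idx] = f"{i}-{copy[case_idx]}"
            result.append(copy)
    return result
```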
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This is an example expression that shows how POSIX-style regular expressions can be used to search for cases in an event log having certain event type patterns, known as [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
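The same declare-pattern regexes can be sanity-checked outside the expression language. This Python sketch assumes full-trace matching semantics and single-letter tokens, as in the commented patterns above; the function names are illustrative:

```python
import re

def chain_response(a, b):
    # [^a]*(ab[^a]*)*[^a]* : every occurrence of a is immediately followed by b
    return re.compile(f"[^{a}]*({a}{b}[^{a}]*)*[^{a}]*")

def response(a, b):
    # [^a]*(a.*b)*[^a]* : every occurrence of a is eventually followed by b
    return re.compile(f"[^{a}]*({a}.*{b})*[^{a}]*")
```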
&lt;br /&gt;
=== Perform a query and send results as E-mail in a HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON used as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The contents of this script are just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;testuser@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = ParseJson(Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code)&lt;br /&gt;
  .Set(&amp;quot;IncludeCollect&amp;quot;, true);&lt;br /&gt;
let resultDf = Query(queryConfiguration);&lt;br /&gt;
let mailBodyHtml = resultDf.`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: The QPR ProcessAnalyzer Server must have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;br /&gt;
&lt;br /&gt;
=== Converting a case-centric model to object-centric model ===&lt;br /&gt;
This function serves as an example of how a case-centric model can be converted into an object-centric model having just one object type: &amp;quot;Case&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ConvertCCModelToOCModel(model, newModelName) &lt;br /&gt;
{&lt;br /&gt;
  let connection = model.EventsDataTable.DataSourceConnection;&lt;br /&gt;
  let caseIdColumn = model.EventsDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;];&lt;br /&gt;
  let eventToObjectTableName = `${newModelName} - event-to-object`;&lt;br /&gt;
  let eventsTableName = `${newModelName} - events`;&lt;br /&gt;
  let objectsTableName = `${newModelName} - objects`;&lt;br /&gt;
          &lt;br /&gt;
  let eventsDf = model.EventsDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventType&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
      &amp;quot;OcelEventTime&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventId&amp;quot;, #sql{Concat(&amp;quot;evt-&amp;quot;, Cast(RowNumber([Column(&amp;quot;OcelEventTime&amp;quot;)]), &amp;quot;String&amp;quot;))});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventToObjectSourceId&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
      &amp;quot;OcelEventToObjectTargetId&amp;quot;: caseIdColumn])&lt;br /&gt;
    .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventToObjectQualifier&amp;quot;, #sql{#expr{caseIdColumn} })&lt;br /&gt;
    .Persist(eventToObjectTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RemoveColumns([caseIdColumn])&lt;br /&gt;
    .Persist(eventsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let casesDt = model.CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelObjectId&amp;quot;: model.CasesDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;]&lt;br /&gt;
    ])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelObjectType&amp;quot;, #sql{&amp;quot;Case&amp;quot;})&lt;br /&gt;
    .Persist(objectsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let newConfiguration = #{&lt;br /&gt;
    &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTableName,&lt;br /&gt;
      &amp;quot;Objects&amp;quot;: objectsTableName,&lt;br /&gt;
      &amp;quot;EventToObject&amp;quot;: eventToObjectTableName&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
&lt;br /&gt;
  model.Project  &lt;br /&gt;
    .CreateModel(#{      &lt;br /&gt;
      &amp;quot;Name&amp;quot;: newModelName,  &lt;br /&gt;
      &amp;quot;Description&amp;quot;: model.Description,  &lt;br /&gt;
      &amp;quot;Configuration&amp;quot;: newConfiguration  &lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let ccModel = ModelById(&amp;lt;model id&amp;gt;);&lt;br /&gt;
ConvertCCModelToOCModel(ccModel, `ocel - ${ccModel.Name}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
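Conceptually, the conversion generates one event row with a synthetic event id, one event-to-object link per (event, case) pair, and one &quot;Case&quot; object per distinct case. A hypothetical Python sketch of that mapping (names are illustrative, not the PA API):

```python
def cc_to_ocel(events):
    # events: (case_id, event_type, timestamp) tuples from the source model.
    ocel_events, event_to_object, cases = [], [], set()
    ordered = sorted(events, key=lambda e: e[2])  # RowNumber over the timestamp
    for i, (case, etype, ts) in enumerate(ordered, start=1):
        event_id = f"evt-{i}"
        ocel_events.append((event_id, etype, ts))
        event_to_object.append((event_id, case))  # link the event to its case object
        cases.add(case)
    objects = [(c, "Case") for c in sorted(cases)]  # single object type "Case"
    return ocel_events, event_to_object, objects
```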
&lt;br /&gt;
=== Show duplicate rows in datatable ===&lt;br /&gt;
This script returns all rows in a datatable that appear more than once (sorted with the most frequent occurrences first). Additionally, the number of occurrences is returned as the last column. Replace the datatable id with the correct one.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let datatableId = 1;&lt;br /&gt;
let rowCountColumn = &amp;quot;Row count&amp;quot;;&lt;br /&gt;
let columns = DatatableById(datatableId).Columns.Name;&lt;br /&gt;
DatatableById(datatableId)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .GroupBy(columns)&lt;br /&gt;
  .Aggregate(&lt;br /&gt;
    [rowCountColumn: columns[0]],&lt;br /&gt;
    [&amp;quot;Count&amp;quot;]&lt;br /&gt;
  )&lt;br /&gt;
  .Where(Column(rowCountColumn) &amp;gt; 1)&lt;br /&gt;
  .OrderByColumns([rowCountColumn], [false])&lt;br /&gt;
  .Collect()&lt;br /&gt;
  .ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
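The group-by-all-columns-and-count approach can be mirrored in plain Python (illustrative only; in the script above the equivalent work happens in the SQL data frame):

```python
from collections import Counter

def duplicate_rows(rows):
    # Group identical rows, keep those occurring more than once,
    # most frequent first, with the count appended as the last column.
    counts = Counter(tuple(r) for r in rows)
    return [list(r) + [n] for r, n in counts.most_common() if n > 1]
```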
&lt;br /&gt;
=== Create a new object-centric model based on already existing object-centric model containing only filtered objects and events ===&lt;br /&gt;
This script creates a new OCPM model based on another existing model in PA: it applies the given filter to the original model and creates all the tables needed to store only the relevant rows for all the OCPM data tables.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let newModelName = &amp;quot;&amp;lt;target model name&amp;gt;&amp;quot;;&lt;br /&gt;
let sourceModelId = &amp;lt;source model id&amp;gt;;&lt;br /&gt;
let targetProjectId = &amp;lt;target project id&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
// Replace the value of the ocelItems below with the configuration of object-centric filter to apply.&lt;br /&gt;
// Note: Use expression language syntax here, not JSON.&lt;br /&gt;
let ocelItems = [#{&lt;br /&gt;
  &amp;quot;Include&amp;quot;: true,&lt;br /&gt;
  &amp;quot;ObjectAttributeValue&amp;quot;: #{&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Attribute&amp;quot;: &amp;quot;po_product&amp;quot;,&lt;br /&gt;
    &amp;quot;Values&amp;quot;: [&lt;br /&gt;
      &amp;quot;0Cows&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}];&lt;br /&gt;
&lt;br /&gt;
let m = ModelById(sourceModelId);&lt;br /&gt;
if (!m.IsOcelModel)&lt;br /&gt;
  throw `Model ${m.Name} is not an OCPM model`;&lt;br /&gt;
if (CountTop(m.CheckModelValidity()) &amp;gt; 0)&lt;br /&gt;
  throw `Model ${m.Name} is not a valid OCPM model`;&lt;br /&gt;
&lt;br /&gt;
let targetProject = ProjectById(targetProjectId);&lt;br /&gt;
if (CountTop(targetProject.Models.Where(Name == newModelName)) &amp;gt; 0)&lt;br /&gt;
  throw `Model having name ${newModelName} already exists in project ${targetProject.Name}. Rename or delete the existing model before running this script.`;&lt;br /&gt;
&lt;br /&gt;
let newConfiguration = #{&lt;br /&gt;
  &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
    &amp;quot;EventTypes&amp;quot;: #{},&lt;br /&gt;
    &amp;quot;ObjectTypes&amp;quot;: #{}&lt;br /&gt;
  }&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating common tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function PersistDataFrame(dataFrame, configurationName, configurationDict, dataTableName)&lt;br /&gt;
{&lt;br /&gt;
  WriteLog(`Creating filtered table: ${dataTableName}`);&lt;br /&gt;
  configurationDict.Set(configurationName, dataTableName);&lt;br /&gt;
  return dataFrame.Persist(dataTableName, #{&amp;quot;ProjectId&amp;quot;: targetProjectId, &amp;quot;Append&amp;quot;: false});&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let objectsDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;Objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjects.PrimaryKey);&lt;br /&gt;
let newObjectsDt = PersistDataFrame(objectsDf, &amp;quot;Objects&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - objects`);&lt;br /&gt;
&lt;br /&gt;
let objectToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;ObjectToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjectToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelObjectToObjectSourceId&amp;quot;, &amp;quot;OcelObjectToObjectTargetId&amp;quot;, &amp;quot;OcelObjectToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newObjectToObjectDt = PersistDataFrame(objectToObjectDf, &amp;quot;ObjectToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - object-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;EventToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelEventToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;, &amp;quot;OcelEventToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newEventToObjectDt = PersistDataFrame(eventToObjectDf, &amp;quot;EventToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - event-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventsDf = m.OcelEvents.SqlDataFrame&lt;br /&gt;
  .Join(eventToObjectDf.SelectDistinct([&amp;quot;OcelEventToObjectSourceId&amp;quot;]), [&amp;quot;OcelEventId&amp;quot;: &amp;quot;OcelEventToObjectSourceId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
  .RemoveColumns([&amp;quot;OcelEventToObjectSourceId&amp;quot;])&lt;br /&gt;
  .SetPrimaryKey(m.OcelEvents.PrimaryKey);&lt;br /&gt;
let newEventsDt = PersistDataFrame(eventsDf, &amp;quot;Events&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - events`);&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating event type tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.EventTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let eventTypeName = GetContext(ar);&lt;br /&gt;
  let eventTypeDt = m.OcelEventType(eventTypeName);&lt;br /&gt;
  let eventTypeDf = eventTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newEventsDt.SqlDataFrame.Select([&amp;quot;OcelEventId&amp;quot;]), [&amp;quot;OcelEventTypeEventId&amp;quot;: &amp;quot;OcelEventId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelEventId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(eventTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(eventTypeDf, eventTypeName, newConfiguration.OcelDataSource.EventTypes, `${newModelName} - eventtype - ${eventTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
  &lt;br /&gt;
WriteLog(&amp;quot;Creating object type tables...&amp;quot;);&lt;br /&gt;
let newUniqueObjectsDf = newObjectsDt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .SelectDistinct([&amp;quot;OcelObjectId&amp;quot;]);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.ObjectTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let objectTypeName = GetContext(ar);&lt;br /&gt;
  let objectTypeDt = m.OcelObjectType(objectTypeName);&lt;br /&gt;
  let objectTypeDf = objectTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newUniqueObjectsDf, [&amp;quot;OcelObjectTypeObjectId&amp;quot;: &amp;quot;OcelObjectId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelObjectId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(objectTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(objectTypeDf, objectTypeName, newConfiguration.OcelDataSource.ObjectTypes, `${newModelName} - objecttype - ${objectTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Creating model ${newModelName}...`);&lt;br /&gt;
&lt;br /&gt;
let newModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: newModelName, &amp;quot;Configuration&amp;quot;: newConfiguration});&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Model (id=${newModel.Id}) created with configuration:\r\n${ToJson(newConfiguration)}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27596</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=27596"/>
		<updated>2026-01-12T08:17:08Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions, and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The call waits until the run has completed, and the Run function returns the called script&#039;s return value.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
&lt;br /&gt;
Expression scripts can also be called from dashboards. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from dashboards and allows reusing them across several dashboards.&lt;br /&gt;
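&lt;br /&gt;
As an illustration (a hypothetical sketch; the parameter names match the call above), the body of the called script can read the passed parameters directly as variables:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Body of the called script (id 123): the parameters passed in the Run call&lt;br /&gt;
// are available here as variables.&lt;br /&gt;
WriteLog(`parameter1 is ${parameter1} and parameter2 is ${parameter2}`);&lt;br /&gt;
return parameter1;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;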
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
An SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact a web service, fetch data, and store it in a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (Snowflake) ===&lt;br /&gt;
The following script converts textual data in the &amp;quot;TimestampString&amp;quot; column into dates in the &amp;quot;Timestamp&amp;quot; column by trying different time formats and using the first one that matches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
DatatableById(1)&lt;br /&gt;
  .UpdateRows(&lt;br /&gt;
    true,&lt;br /&gt;
    &amp;quot;Timestamp&amp;quot;,&lt;br /&gt;
    Coalesce(&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;DD.MM.YYYY HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY-MM-DD HH24:MI:SS.FF3&amp;quot;),&lt;br /&gt;
      TryToTimestamp(Column(&amp;quot;TimestampString&amp;quot;), &amp;quot;YYYY/MM/DD HH24:MI:SS.FF3&amp;quot;)&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data (in-memory) ===&lt;br /&gt;
This script converts a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
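&lt;br /&gt;
For example (a sketch of the same pattern with &#039;&#039;ToString&#039;&#039; substituted), the column-setting lambda above becomes:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		} else {&lt;br /&gt;
			// Convert to string instead of numeric.&lt;br /&gt;
			ToString(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;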
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that renders a DataFrame as an HTML table, and applies the function to a literal DataFrame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. The new datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as target table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy.`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with columns, you can change the line &#039;&#039;targetDt.Import(sourceDt.SqlDataFrame);&#039;&#039; to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable in Snowflake. Replace &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query the number of rows in a data table having a datetime value in a given year, grouped by month, and return the result as CSV ===&lt;br /&gt;
The SqlDataFrame is used to avoid loading the whole datatable into memory. Filtering is performed as the first operation to minimize the amount of work required from the data table&#039;s data source.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering an SqlDataFrame by removing or replacing rows with the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace them&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. Each returned value must be used by at least the given fraction (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * If not empty/null, the attribute values excluded by the minValueUsage or maxNumUniqueValues filtering are&lt;br /&gt;
 * aggregated into a single &amp;quot;Others&amp;quot; value, and this parameter defines the name used for it.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  } else {&lt;br /&gt;
	// Neither filtering option is enabled: return the dataframe unchanged.&lt;br /&gt;
	df;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The events and cases datatables of a model can also be exported directly as CSV:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  };&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables, for cases and events) containing the filtered eventlog for the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N is the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model with the given name&lt;br /&gt;
 * already exists, the event and case data in that model are replaced with the newly generated event&lt;br /&gt;
 * and case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites the existing one) in the project named &amp;quot;TestData&amp;quot;, containing the data from the model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in the project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
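The RepeatNTimes helper above can also be illustrated outside the PA expression language. The following is a minimal plain-pandas sketch of the same idea (the column names and data are hypothetical): each copy of the event data gets its case IDs prefixed with the iteration number so the copies stay distinct.

```python
import pandas as pd

# Hypothetical event data; in PA this would come from the events data table.
events = pd.DataFrame({"CaseId": ["C1", "C2"], "EventType": ["Start", "Start"]})

def repeat_n_times(df, case_id_column, num_repeats):
    # Repeat the rows num_repeats times, prefixing each copy's case IDs
    # with the iteration number (e.g. "2-C1") to keep cases unique.
    copies = []
    for i in range(1, num_repeats + 1):
        copy = df.copy()
        copy[case_id_column] = f"{i}-" + copy[case_id_column]
        copies.append(copy)
    return pd.concat(copies, ignore_index=True)

repeated = repeat_n_times(events, "CaseId", 3)
```

With three repeats of two cases, the result contains six rows with case IDs "1-C1" through "3-C2".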
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This is an example expression that shows how POSIX-style regular expressions can be used to search an event log for cases exhibiting certain event type patterns, known as [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
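The behavior of these declare-pattern regexes can be checked on token traces outside the event log as well. Below is a small plain-Python sketch of the response pattern (every occurrence of token a must eventually be followed by token b), using full-string matching against hypothetical traces; Python's re module behaves the same as POSIX regexes for patterns this simple.

```python
import re

def response_pattern(a, b):
    # Declare "response" constraint as a regex over a token trace:
    # every occurrence of token a must eventually be followed by token b.
    # Same shape as in the script above: [^a]*(a.*b)*[^a]*
    return re.compile(f"[^{a}]*({a}.*{b})*[^{a}]*")

p = response_pattern("a", "b")
assert p.fullmatch("cab") is not None   # a is later followed by b: satisfied
assert p.fullmatch("ccc") is not None   # no a at all: trivially satisfied
assert p.fullmatch("ba") is None        # a with no later b: violated
```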
&lt;br /&gt;
=== Perform a query and send results as E-mail in an HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The content of this script is simply the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;testuser@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = ParseJson(Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code)&lt;br /&gt;
  .Set(&amp;quot;IncludeCollect&amp;quot;, true);&lt;br /&gt;
let resultDf = Query(queryConfiguration);&lt;br /&gt;
let mailBodyHtml = resultDf.`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: The QPR ProcessAnalyzer Server must have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;br /&gt;
&lt;br /&gt;
=== Converting a case-centric model to object-centric model ===&lt;br /&gt;
This function serves as an example of how a case-centric model can be converted into an object-centric model having just one object type: &amp;quot;Case&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ConvertCCModelToOCModel(model, newModelName) &lt;br /&gt;
{&lt;br /&gt;
  let connection = model.EventsDataTable.DataSourceConnection;&lt;br /&gt;
  let caseIdColumn = model.EventsDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;];&lt;br /&gt;
  let eventToObjectTableName = `${newModelName} - event-to-object`;&lt;br /&gt;
  let eventsTableName = `${newModelName} - events`;&lt;br /&gt;
  let objectsTableName = `${newModelName} - objects`;&lt;br /&gt;
          &lt;br /&gt;
  let eventsDf = model.EventsDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventType&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
      &amp;quot;OcelEventTime&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventId&amp;quot;, #sql{Concat(&amp;quot;evt-&amp;quot;, Cast(RowNumber([Column(&amp;quot;OcelEventTime&amp;quot;)]), &amp;quot;String&amp;quot;))});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventToObjectSourceId&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
      &amp;quot;OcelEventToObjectTargetId&amp;quot;: caseIdColumn])&lt;br /&gt;
    .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventToObjectQualifier&amp;quot;, #sql{#expr{caseIdColumn} })&lt;br /&gt;
    .Persist(eventToObjectTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RemoveColumns([caseIdColumn])&lt;br /&gt;
    .Persist(eventsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let casesDt = model.CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelObjectId&amp;quot;: model.CasesDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;]&lt;br /&gt;
    ])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelObjectType&amp;quot;, #sql{&amp;quot;Case&amp;quot;})&lt;br /&gt;
    .Persist(objectsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let newConfiguration = #{&lt;br /&gt;
    &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTableName,&lt;br /&gt;
      &amp;quot;Objects&amp;quot;: objectsTableName,&lt;br /&gt;
      &amp;quot;EventToObject&amp;quot;: eventToObjectTableName&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
&lt;br /&gt;
  model.Project  &lt;br /&gt;
    .CreateModel(#{      &lt;br /&gt;
      &amp;quot;Name&amp;quot;: newModelName,  &lt;br /&gt;
      &amp;quot;Description&amp;quot;: model.Description,  &lt;br /&gt;
      &amp;quot;Configuration&amp;quot;: newConfiguration  &lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let ccModel = ModelById(&amp;lt;model id&amp;gt;);&lt;br /&gt;
ConvertCCModelToOCModel(ccModel, `ocel - ${ccModel.Name}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
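The mapping performed by ConvertCCModelToOCModel can be sketched in plain pandas (hypothetical data, not the PA expression language): each case becomes one object of type "Case", and each event is linked to its case through a row in the event-to-object table.

```python
import pandas as pd

# Hypothetical case-centric event log.
events = pd.DataFrame({
    "CaseId": ["C1", "C1", "C2"],
    "EventType": ["Create", "Approve", "Create"],
})
# Assign each event a unique OCEL event id, mirroring the "evt-" prefix above.
events["OcelEventId"] = ["evt-" + str(i + 1) for i in range(len(events))]

# Event-to-object table: each event points at its case, qualified by the
# original case id column name.
event_to_object = pd.DataFrame({
    "OcelEventToObjectSourceId": events["OcelEventId"],
    "OcelEventToObjectTargetId": events["CaseId"],
    "OcelEventToObjectQualifier": "CaseId",
})

# Objects table: one object per distinct case, all of object type "Case".
objects = pd.DataFrame({"OcelObjectId": events["CaseId"].unique()})
objects["OcelObjectType"] = "Case"
```

The three resulting frames correspond to the events, event-to-object, and objects tables persisted by the script.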
&lt;br /&gt;
=== Show duplicate rows in datatable ===&lt;br /&gt;
This script returns all rows in a datatable that appear more than once, sorted with the most frequent rows first. Additionally, the number of occurrences is returned as the last column. Replace the datatable id with the correct one before use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let datatableId = 1;&lt;br /&gt;
let rowCountColumn = &amp;quot;Row count&amp;quot;;&lt;br /&gt;
let columns = DatatableById(datatableId).Columns.Name;&lt;br /&gt;
DatatableById(datatableId)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .GroupBy(columns)&lt;br /&gt;
  .Aggregate(&lt;br /&gt;
    [rowCountColumn: columns[0]],&lt;br /&gt;
    [&amp;quot;Count&amp;quot;]&lt;br /&gt;
  )&lt;br /&gt;
  .Where(Column(rowCountColumn) &amp;gt; 1)&lt;br /&gt;
  .OrderByColumns([rowCountColumn], [false])&lt;br /&gt;
  .Collect()&lt;br /&gt;
  .ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
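The same group-by-all-columns-and-count approach can be sketched in plain pandas (hypothetical data): group by every column, count each group, keep groups occurring more than once, and sort by count in descending order.

```python
import pandas as pd

# Hypothetical datatable contents: rows (1, "x") and (2, "y") are duplicated.
df = pd.DataFrame({"a": [1, 1, 2, 2, 2, 3], "b": ["x", "x", "y", "y", "y", "z"]})

# Group by all columns, count occurrences into a "Row count" column,
# keep rows appearing more than once, most frequent first.
dupes = (df.groupby(list(df.columns))
           .size()
           .reset_index(name="Row count"))
dupes = dupes[dupes["Row count"] > 1].sort_values("Row count", ascending=False)
```

Here the result has two rows: (2, "y") with count 3 and (1, "x") with count 2.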
&lt;br /&gt;
=== Create a new object-centric model based on an already existing object-centric model containing only filtered objects and events ===&lt;br /&gt;
Script for creating a new OCPM model based on an already existing model in PA: it applies the given filter to the original model and creates all the tables needed to store only the relevant rows for all the OCPM data tables.&lt;br /&gt;
&lt;br /&gt;
NOTE: Replace &amp;lt;nowiki&amp;gt;&amp;lt;source model id&amp;gt;, &amp;lt;target model name&amp;gt;, &amp;lt;target project id&amp;gt; as well as the value of ocelItems with the applicable values before use.&amp;lt;/nowiki&amp;gt;&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let newModelName = &amp;quot;&amp;lt;target model name&amp;gt;&amp;quot;;&lt;br /&gt;
let sourceModelId = &amp;lt;source model id&amp;gt;;&lt;br /&gt;
let targetProjectId = &amp;lt;target project id&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
// Replace the value of the ocelItems below with the configuration of object-centric filter to apply.&lt;br /&gt;
// Note: Use expression language syntax here, not JSON.&lt;br /&gt;
let ocelItems = [#{&lt;br /&gt;
  &amp;quot;Include&amp;quot;: true,&lt;br /&gt;
  &amp;quot;ObjectAttributeValue&amp;quot;: #{&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Attribute&amp;quot;: &amp;quot;po_product&amp;quot;,&lt;br /&gt;
    &amp;quot;Values&amp;quot;: [&lt;br /&gt;
      &amp;quot;0Cows&amp;quot;&lt;br /&gt;
    ]&lt;br /&gt;
  }&lt;br /&gt;
}];&lt;br /&gt;
&lt;br /&gt;
let m = ModelById(sourceModelId);&lt;br /&gt;
if (!m.IsOcelModel)&lt;br /&gt;
  throw `Model ${m.Name} is not an OCPM model`;&lt;br /&gt;
if (CountTop(m.CheckModelValidity()) &amp;gt; 0)&lt;br /&gt;
  throw `Model ${m.Name} is not a valid OCPM model`;&lt;br /&gt;
&lt;br /&gt;
let targetProject = ProjectById(targetProjectId);&lt;br /&gt;
if (CountTop(targetProject.Models.Where(Name == newModelName)) &amp;gt; 0)&lt;br /&gt;
  throw `Model having name ${newModelName} already exists in project ${targetProject.Name}. Rename or delete the existing model before running this script.`;&lt;br /&gt;
&lt;br /&gt;
let newConfiguration = #{&lt;br /&gt;
  &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
    &amp;quot;EventTypes&amp;quot;: #{},&lt;br /&gt;
    &amp;quot;ObjectTypes&amp;quot;: #{}&lt;br /&gt;
  }&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating common tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function PersistDataFrame(dataFrame, configurationName, configurationDict, dataTableName)&lt;br /&gt;
{&lt;br /&gt;
  WriteLog(`Creating filtered table: ${dataTableName}`);&lt;br /&gt;
  configurationDict.Set(configurationName, dataTableName);&lt;br /&gt;
  return dataFrame.Persist(dataTableName, #{&amp;quot;ProjectId&amp;quot;: targetProjectId, &amp;quot;Append&amp;quot;: false});&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let objectsDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;Objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjects.PrimaryKey);&lt;br /&gt;
let newObjectsDt = PersistDataFrame(objectsDf, &amp;quot;Objects&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - objects`);&lt;br /&gt;
&lt;br /&gt;
let objectToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;ObjectToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelObjectToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelObjectToObjectSourceId&amp;quot;, &amp;quot;OcelObjectToObjectTargetId&amp;quot;, &amp;quot;OcelObjectToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newObjectToObjectDt = PersistDataFrame(objectToObjectDf, &amp;quot;ObjectToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - object-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventToObjectDf = m.CacheTableSqlDataFrame(#{&lt;br /&gt;
  &amp;quot;CacheTableType&amp;quot;: &amp;quot;EventToObject&amp;quot;,&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: #{},&lt;br /&gt;
  &amp;quot;OcelItems&amp;quot;: ocelItems &lt;br /&gt;
}).SetPrimaryKey(m.OcelEventToObject.PrimaryKey)&lt;br /&gt;
  .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;, &amp;quot;OcelEventToObjectQualifier&amp;quot;]);&lt;br /&gt;
let newEventToObjectDt = PersistDataFrame(eventToObjectDf, &amp;quot;EventToObject&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - event-to-object`);&lt;br /&gt;
&lt;br /&gt;
let eventsDf = m.OcelEvents.SqlDataFrame&lt;br /&gt;
  .Join(eventToObjectDf.SelectDistinct([&amp;quot;OcelEventToObjectSourceId&amp;quot;]), [&amp;quot;OcelEventId&amp;quot;: &amp;quot;OcelEventToObjectSourceId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
  .RemoveColumns([&amp;quot;OcelEventToObjectSourceId&amp;quot;])&lt;br /&gt;
  .SetPrimaryKey(m.OcelEvents.PrimaryKey);&lt;br /&gt;
let newEventsDt = PersistDataFrame(eventsDf, &amp;quot;Events&amp;quot;, newConfiguration.OcelDataSource, `${newModelName} - events`);&lt;br /&gt;
&lt;br /&gt;
WriteLog(&amp;quot;Creating event type tables...&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.EventTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let eventTypeName = GetContext(ar);&lt;br /&gt;
  let eventTypeDt = m.OcelEventType(eventTypeName);&lt;br /&gt;
  let eventTypeDf = eventTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newEventsDt.SqlDataFrame.Select([&amp;quot;OcelEventId&amp;quot;]), [&amp;quot;OcelEventTypeEventId&amp;quot;: &amp;quot;OcelEventId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelEventId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(eventTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(eventTypeDf, eventTypeName, newConfiguration.OcelDataSource.EventTypes, `${newModelName} - eventtype - ${eventTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
  &lt;br /&gt;
WriteLog(&amp;quot;Creating object type tables...&amp;quot;);&lt;br /&gt;
let newUniqueObjectsDf = newObjectsDt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .SelectDistinct([&amp;quot;OcelObjectId&amp;quot;]);&lt;br /&gt;
&lt;br /&gt;
m.Configuration.OcelDataSource.ObjectTypes.ToArray().{&lt;br /&gt;
  let ar = _;&lt;br /&gt;
  let objectTypeName = GetContext(ar);&lt;br /&gt;
  let objectTypeDt = m.OcelObjectType(objectTypeName);&lt;br /&gt;
  let objectTypeDf = objectTypeDt.SqlDataFrame&lt;br /&gt;
    .Join(newUniqueObjectsDf, [&amp;quot;OcelObjectTypeObjectId&amp;quot;: &amp;quot;OcelObjectId&amp;quot;], &amp;quot;inner&amp;quot;)&lt;br /&gt;
    .RemoveColumns([&amp;quot;OcelObjectId&amp;quot;])&lt;br /&gt;
    .SetPrimaryKey(objectTypeDt.PrimaryKey);&lt;br /&gt;
  PersistDataFrame(objectTypeDf, objectTypeName, newConfiguration.OcelDataSource.ObjectTypes, `${newModelName} - objecttype - ${objectTypeName}`);&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Creating model ${newModelName}...`);&lt;br /&gt;
&lt;br /&gt;
let newModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: newModelName, &amp;quot;Configuration&amp;quot;: newConfiguration});&lt;br /&gt;
&lt;br /&gt;
WriteLog(`Model (id=${newModel.Id}) created with configuration:\r\n${ToJson(newConfiguration)}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Embed_to_Website&amp;diff=27267</id>
		<title>Embed to Website</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Embed_to_Website&amp;diff=27267"/>
		<updated>2025-11-11T12:58:02Z</updated>

		<summary type="html">&lt;p&gt;MarHink: Added required change details on Cross-Origin-Opener-Policy header.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QPR ProcessAnalyzer can be embedded into another website, such as Microsoft Sharepoint. The embedding is based on the &#039;&#039;iframe&#039;&#039; HTML element. By default, QPR ProcessAnalyzer only allows itself to be embedded by a website in the same origin, which is enforced by the Content-Security-Policy HTTP header. More information on how to change the CSP setting is available in [[QPR_ProcessAnalyzer_Security_Hardening#HTTP_Response_Headers|Security Hardening]]. Note also the special behavior when using [[#Using_SAML_authentication_with_embedding|SAML authentication with embedding]].&lt;br /&gt;
&lt;br /&gt;
== Example embedding webpage ==&lt;br /&gt;
This is a simple example page containing an iframe element that embeds QPR ProcessAnalyzer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;!DOCTYPE html&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
  &amp;lt;body&amp;gt;&lt;br /&gt;
    &amp;lt;iframe src=&amp;quot;https://processanalyzer.company.com/qprpa/ui/#/dashboard?sys:dashboardIdentifier=/MyProject/MyDashboard&amp;quot; height=&amp;quot;600&amp;quot; width=&amp;quot;900&amp;quot;&amp;gt;&amp;lt;/iframe&amp;gt;&lt;br /&gt;
  &amp;lt;/body&amp;gt;&lt;br /&gt;
&amp;lt;/html&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Change CSP frame-ancestors setting ==&lt;br /&gt;
By default, the Content-Security-Policy HTTP header &#039;&#039;&#039;frame-ancestors&#039;&#039;&#039; directive is set to &#039;&#039;&#039;self&#039;&#039;&#039;, which requires the parent website where QPR ProcessAnalyzer is embedded to be located in the same origin. If the parent website is in another origin, the CSP needs to be changed as follows (allowing QPR ProcessAnalyzer to be embedded in the example.com website):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
frame-ancestors &#039;self&#039; example.com;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the CSP prevents the embedded website from showing, there will be a descriptive error message in the browser console. It contains details about which CSP directive prevented the page from opening.&lt;br /&gt;
&lt;br /&gt;
More information about the Content-Security-Policy HTTP header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP.&lt;br /&gt;
&lt;br /&gt;
== Using SAML authentication with embedding ==&lt;br /&gt;
When QPR ProcessAnalyzer is embedded into another website with the [[SAML_2.0_Federated_Authentication|SAML authentication]] enabled, the authentication process is handled in a separate browser window or tab. This is necessary because some identity providers (IdP) do not permit authentication within an embedded frame (iframe element). Consequently, when SAML authentication begins, the identity provider&#039;s page will open in a new browser window. While this window is active, the embedded (original) window displays a message informing the user that authentication is in progress in a separate window.&lt;br /&gt;
&lt;br /&gt;
If the browser has popup blocking enabled, the new window cannot open. In such cases, a descriptive message prompts the user to disable popup blocking. Additionally, the user can manually open the authentication window by clicking the embedded frame, even if popups are blocked. Once the identity provider login is successful, the new window will close, and the original browser window will redirect to QPR ProcessAnalyzer. However, if the separate browser window is closed before completing the login, the original window will display the standard QPR ProcessAnalyzer login view.&lt;br /&gt;
&lt;br /&gt;
In addition, for SAML authentication to work correctly when the identity provider is located on a &#039;&#039;&#039;different origin&#039;&#039;&#039; than the QPR ProcessAnalyzer server itself, the &amp;lt;code&amp;gt;Cross-Origin-Opener-Policy&amp;lt;/code&amp;gt; HTTP header must be changed from its default value of &amp;lt;code&amp;gt;&amp;quot;same-origin&amp;quot;&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;&amp;quot;unsafe-none&amp;quot;&amp;lt;/code&amp;gt;.&lt;br /&gt;
----&#039;&#039;&#039;Additional Information:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Cross-Origin-Opener-Policy (COOP) Header:&#039;&#039;&#039; This is a response header that provides a way for a document to control whether a new document opened in a top-level context (like a new tab or window) shares the same browsing context group as its opener. Setting it to &amp;lt;code&amp;gt;unsafe-none&amp;lt;/code&amp;gt; allows the cross-origin authentication flow to complete successfully in this embedded scenario. For more technical details, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Cross-Origin-Opener-Policy&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27122</id>
		<title>Create Predicted Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27122"/>
		<updated>2025-10-21T06:59:26Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Predicting case attribute values */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides instructions on how to install, configure, and use eventlog predictions. The prediction creates a new model that contains the source model data and the predictions. It is able to predict case attributes for the generated new cases and event attributes for the predicted events. To distinguish between the real (source data) and predicted events and cases, the model contains the following attributes:&lt;br /&gt;
* Event attribute &#039;&#039;&#039;Predicted&#039;&#039;&#039; denotes whether the event is from the source data (&#039;&#039;false&#039;&#039;) or whether it&#039;s predicted (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
* Case attribute &#039;&#039;&#039;Generated&#039;&#039;&#039; denotes whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the prediction generated it as a new case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for prediction ==&lt;br /&gt;
The following prerequisites must be fulfilled to run the eventlog prediction:&lt;br /&gt;
* QPR ProcessAnalyzer 2024.8 or later in use&lt;br /&gt;
* A Snowflake connection is configured&lt;br /&gt;
* Source models are stored in Snowflake&lt;br /&gt;
&lt;br /&gt;
== Install prediction to Snowflake ==&lt;br /&gt;
To install the eventlog prediction to Snowflake:&lt;br /&gt;
# Go to Snowflake, and create a Snowflake-managed stage named &#039;&#039;&#039;DECISION_INTELLIGENCE&#039;&#039;&#039; in the same schema that is configured for QPR ProcessAnalyzer (in the Snowflake connection string). Use the settings in the following image: [[File:Create_Snowflake_stage.png]]&lt;br /&gt;
# Open the created stage and upload the &#039;&#039;&#039;predict.pyz&#039;&#039;&#039; file into the stage (request the file from your QPR representative).&lt;br /&gt;
# Create the following procedure in the same schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE OR REPLACE PROCEDURE QPRPA_SP_PREDICTION(&amp;quot;CONFIGURATION&amp;quot; OBJECT)&lt;br /&gt;
RETURNS OBJECT&lt;br /&gt;
LANGUAGE PYTHON&lt;br /&gt;
STRICT&lt;br /&gt;
RUNTIME_VERSION = &#039;3.11&#039;&lt;br /&gt;
PACKAGES = (&#039;nltk&#039;,&#039;numpy&#039;,&#039;networkx&#039;,&#039;pandas&#039;,&#039;scikit-learn&#039;,&#039;snowflake-snowpark-python&#039;,&#039;tensorflow==2.12.0&#039;,&#039;dill&#039;,&#039;psutil&#039;,&#039;prophet&#039;,&#039;holidays&#039;,&#039;python-kubernetes&#039;,&#039;docker-py&#039;,&#039;cryptography&#039;)&lt;br /&gt;
HANDLER = &#039;main&#039;&lt;br /&gt;
EXECUTE AS OWNER&lt;br /&gt;
AS &#039;&lt;br /&gt;
import sys&lt;br /&gt;
def main(session, parameters_in: dict) -&amp;gt; dict:&lt;br /&gt;
	session.file.get(&#039;&#039;@decision_intelligence/predict.pyz&#039;&#039;, &#039;&#039;/tmp&#039;&#039;)&lt;br /&gt;
	sys.path.append(&#039;&#039;/tmp/predict.pyz&#039;&#039;)&lt;br /&gt;
	import predict&lt;br /&gt;
	return predict.main(session, parameters_in)&lt;br /&gt;
&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
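If you prefer to script the installation, steps 1 and 2 can also be done from a SnowSQL client. The stage options below are an assumption based on the settings shown in the image above, so verify them against your environment:&lt;br /&gt;

```sql
-- Step 1 (assumed SQL equivalent of the stage settings in the image above):
-- create the Snowflake-managed (internal) stage in the schema used by QPR ProcessAnalyzer.
CREATE STAGE IF NOT EXISTS DECISION_INTELLIGENCE
  ENCRYPTION = (TYPE = 'SNOWFLAKE_SSE');

-- Step 2: from a SnowSQL client, upload predict.pyz into the stage
-- (replace the local path with the location of your predict.pyz file).
-- PUT file:///path/to/predict.pyz @DECISION_INTELLIGENCE AUTO_COMPRESS = FALSE;
```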
&lt;br /&gt;
== Create prediction script in QPR ProcessAnalyzer ==&lt;br /&gt;
1. Create the following example expression script (e.g., with name &#039;&#039;&#039;Create prediction model&#039;&#039;&#039;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let completeCaseEventTypeName = &amp;quot;&amp;lt;event type name found only in complete cases&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
let eventTypeColumnName = sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
_system.ML.GeneratePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My prediction model&amp;quot;,      // Name of the PA model to generate in the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,         // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,     // Target project to create the model into.&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{         // Training parameters.&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 200&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{       // Model generation parameters.&lt;br /&gt;
    &amp;quot;cases_to_generate&amp;quot;: 1000&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;StringifiedValues&amp;quot;: [&lt;br /&gt;
              `0${completeCaseEventTypeName}`&lt;br /&gt;
            ]&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;ExcludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;StringifiedValues&amp;quot;: [&lt;br /&gt;
              `0${completeCaseEventTypeName}`&lt;br /&gt;
            ]&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,    // Should a prediction model be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000     // Maximum number of cases to use from the source model (random sampled).&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure prediction for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this source model will be used to train the prediction model so that it can generate new cases and continuations for incomplete existing cases.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event type name found only in complete cases&amp;gt;&#039;&#039;&#039;: This example script has been hard-coded to determine whether a case is complete or incomplete based on the existence of this event type.&lt;br /&gt;
&lt;br /&gt;
== Configure prediction ==&lt;br /&gt;
The prediction script has the following settings in the GeneratePredictionModel call:&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the prediction is made. Model can be selected for example based on id with ModelById function or by name with ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;RecreatePredictionModel&#039;&#039;&#039;: When &#039;&#039;true&#039;&#039;, a new ML model is trained when the script is run. When &#039;&#039;false&#039;&#039;, the prediction is run using a possibly pre-existing ML model. &lt;br /&gt;
* &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039;: Training parameters.&lt;br /&gt;
** &#039;&#039;&#039;attributes&#039;&#039;&#039;: Attribute configurations (for more information, see the chapter below).&lt;br /&gt;
** &#039;&#039;&#039;generate_start_time_trend_images&#039;&#039;&#039;: If set to true, two images (the results of the Prophet plot and plot_components functions) will be generated for each cross-validated Prophet parameter combination and also for the final selected parameters. &lt;br /&gt;
*** The images will be generated into stage files with the following path names:&lt;br /&gt;
**** plot: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}.png&lt;br /&gt;
**** plot_components: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}_comp.png&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_case_clusters&#039;&#039;&#039;: Set the maximum number of clusters to divide the case attribute values into.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;max_num_traces_in_training&#039;&#039;&#039;: Set the maximum number of traces used in training.&lt;br /&gt;
*** When training, every case of length N will be split into N traces (a.k.a. prefixes) (p_1, ..., p_N), where p_x contains the first x events of the full case.&lt;br /&gt;
**** If there are more traces available than this configured value, the cases to include will be randomly sampled so that the maximum is exceeded by at most one case.&lt;br /&gt;
**** If null, all the traces will be used, no matter what (may easily lead to running out of memory).&lt;br /&gt;
**** The default value is 100000.&lt;br /&gt;
** &#039;&#039;&#039;num_epochs_to_train&#039;&#039;&#039;: How many times the training set is used in training. The best performing model out of all the iterations will be selected.&lt;br /&gt;
*** The default value is 500.&lt;br /&gt;
** &#039;&#039;&#039;num_extra_years_to_reserve_in_created_model&#039;&#039;&#039;: Number of additional years, after the year of the last timestamp in the training data, to reserve in the capacity of the created ML model. This allows the model to predict timestamps in the range from the minimum timestamp year in the training data to the maximum timestamp year plus this value.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;reserve_extra_sequence_length&#039;&#039;&#039;: How many extra events to reserve space for in the ML model compared to the number of events the longest case in the training data has.&lt;br /&gt;
*** The default value is 5.&lt;br /&gt;
** &#039;&#039;&#039;samples_per_epoch&#039;&#039;&#039;: If not null, specifies (approximately) how many traces/prefixes will be used to represent one epoch of data in the training. The actual value used will be made divisible by batch_size using this formula:&lt;br /&gt;
*** max(floor(samples_per_epoch / batch_size), 1) * batch_size&lt;br /&gt;
*** If null, every epoch will use all the traces/prefixes in the training data.&lt;br /&gt;
*** The default value is null.&lt;br /&gt;
** &#039;&#039;&#039;validation_split&#039;&#039;&#039;: Percentage of traces/prefixes to use to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data.&lt;br /&gt;
*** If 0, separate validation data will not be used. Instead, all the training data will also be used as validation data.&lt;br /&gt;
*** The default value is 0.&lt;br /&gt;
* &#039;&#039;&#039;GenerationConfiguration&#039;&#039;&#039;: Event generation parameters. When null, no generation is done. For example, the following parameters are supported:&lt;br /&gt;
** &#039;&#039;&#039;avoid_repeated_activities&#039;&#039;&#039;: Array of activity names that should occur at most once in any case. The probability of selecting any of the activities specified in this configuration more than once is set to be 0. &lt;br /&gt;
*** Empty array means that activity generation is not restricted by this setting at all. &lt;br /&gt;
*** A null value means that no activity may occur more than once (a shortcut for specifying all the activity names).&lt;br /&gt;
*** The default value is an empty array.&lt;br /&gt;
** &#039;&#039;&#039;cases_to_generate&#039;&#039;&#039;: Maximum number of cases to create. The number of created cases is further limited by the capabilities of the trained model and the &#039;&#039;case_generation_start_time&#039;&#039; and &#039;&#039;case_generation_end_time&#039;&#039; parameters.&lt;br /&gt;
*** The default value is such that the number of cases, by itself, is not limited.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_start_time&#039;&#039;&#039;: If defined, new cases will be generated after this timestamp (given as string in ISO datetime format). &lt;br /&gt;
*** If undefined, the latest start event timestamp used in the training data is used.&lt;br /&gt;
*** The default value is undefined.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_end_time&#039;&#039;&#039;: If defined, new events and cases will not be generated after this timestamp (given as string in ISO datetime format). E.g., &amp;quot;2015-01-01T00:00:00&amp;quot;.&lt;br /&gt;
*** The default value is unlimited (the only limit comes from the capacity of the trained model).&lt;br /&gt;
** &#039;&#039;&#039;generate_debug_event_attributes&#039;&#039;&#039;: &lt;br /&gt;
*** If true, additional columns will be added containing, e.g., probabilities of the selected activity and other activities.&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_events&#039;&#039;&#039;:&lt;br /&gt;
*** Specifies the maximum number of events to generate for any case.&lt;br /&gt;
*** If unspecified (the default), the value equals &#039;&#039;&amp;lt;the maximum number of events in any case in the training data&amp;gt;&#039;&#039; + &#039;&#039;&amp;lt;the value of reserve_extra_sequence_length in training&amp;gt;&#039;&#039;.&lt;br /&gt;
** &#039;&#039;&#039;min_prediction_probability&#039;&#039;&#039;: &lt;br /&gt;
*** The minimum probability of any prediction. If the probability of a prediction is lower than this, it will never be picked. &lt;br /&gt;
*** The default value is 0.01.&lt;br /&gt;
** &#039;&#039;&#039;temperature&#039;&#039;&#039;: &lt;br /&gt;
*** If 0, the generated next activity will always be the one that is the most probable. &lt;br /&gt;
*** If 1, the generated next activity is purely based on the probabilities returned by the trained ML model. &lt;br /&gt;
*** This behavior is interpolated when using values between 0 and 1.&lt;br /&gt;
*** The default value is 0.9.&lt;br /&gt;
* &#039;&#039;&#039;TrainingDataFilter&#039;&#039;&#039;: [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter]] to select the cases that are used to train the prediction model. This filter is needed to train the model using only completed cases. Incomplete cases should not be used for training, as the model would otherwise incorrectly learn that cases can end while still incomplete.&lt;br /&gt;
* &#039;&#039;&#039;IncompleteCasesFilter&#039;&#039;&#039;: Optional [[Filtering_in_QPR_ProcessAnalyzer_Queries|filter]] to select which cases the prediction is made for. To improve performance of the prediction, it&#039;s recommended to include only the incomplete cases for which new events might appear, and skip the completed cases for which new events are not expected anymore.&lt;br /&gt;
* &#039;&#039;&#039;TrainingCaseSampleSize&#039;&#039;&#039;: Maximum number of cases to take from the source model (cases are selected randomly). Use a lower setting to speed up the ML model training. The greater the value, the more subtle phenomena the prediction can learn from the data.&lt;br /&gt;
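To make some of the parameters above concrete, the following Python sketch shows the documented samples_per_epoch rounding formula and one plausible way &#039;&#039;min_prediction_probability&#039;&#039; and &#039;&#039;temperature&#039;&#039; could interact when picking the next activity. This is an illustration, not the product code; in particular, the exponent-scaling interpolation scheme for temperature is an assumption.&lt;br /&gt;

```python
import math
import random

def effective_samples_per_epoch(samples_per_epoch, batch_size):
    # Formula from the documentation: the value is made divisible by batch_size.
    return max(math.floor(samples_per_epoch / batch_size), 1) * batch_size

def pick_next_activity(probs, temperature=0.9, min_prediction_probability=0.01, rng=None):
    """Illustrative sketch (assumption, not the product implementation)."""
    rng = rng or random.Random(0)
    # Candidates below the probability floor are never picked.
    probs = {a: p for a, p in probs.items() if p >= min_prediction_probability}
    if temperature == 0:
        # Greedy: always the most probable activity.
        return max(probs, key=probs.get)
    # One plausible interpolation between greedy (0) and the model
    # distribution (1): exponent scaling of the probabilities.
    scaled = {a: p ** (1.0 / temperature) for a, p in probs.items()}
    r = rng.random() * sum(scaled.values())
    acc = 0.0
    for activity, weight in scaled.items():
        acc += weight
        if r <= acc:
            return activity
    return activity  # numerical safety fallback

# e.g. samples_per_epoch=1000 and batch_size=32 gives 992 samples per epoch
print(effective_samples_per_epoch(1000, 32))
```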
&lt;br /&gt;
== Attribute configuration ==&lt;br /&gt;
Attribute configuration is used in &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039; (see the chapter above) to configure which event and case attributes are used in the prediction model and how they are used.&lt;br /&gt;
&lt;br /&gt;
At the top level, the configuration is split into two sections: &amp;quot;event&amp;quot; and &amp;quot;case&amp;quot;. &amp;quot;Event&amp;quot; is used to configure event attributes, whereas &amp;quot;case&amp;quot; is used for case attributes.&lt;br /&gt;
&lt;br /&gt;
The next level supports one value: &amp;quot;input&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The next level after that supports the following settings:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;categorical_groups&#039;&#039;&#039;: An array of categorical attribute group configuration objects used to define groups of attributes that will be bundled together in the trained model, either as separate input- or output features. Each attribute group will form its own input- or output vector used in the model training and generation.&lt;br /&gt;
** If null, only one group will be created with all the available categorical attributes included.&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;attributes&#039;&#039;&#039;: An array of attribute names.&lt;br /&gt;
**** If null, all the input attributes are to be included in this group.&lt;br /&gt;
*** &#039;&#039;&#039;max_num_clusters&#039;&#039;&#039;: The maximum number of clusters (input- or output vector feature values) to use to represent this group of attributes.&lt;br /&gt;
**** Default value: 20&lt;br /&gt;
**** NOTE: Clustering is used by default to convert a set of attribute values into an input- or output vector used by the prediction model.&lt;br /&gt;
* &#039;&#039;&#039;columns&#039;&#039;&#039;: An array of attribute column configuration objects used to define columns in the input data that are to be used as event- or case attributes.&lt;br /&gt;
** If null, all the columns will be included as categorical attributes (except case id, event type (only for event) and timestamp (only for event) columns).&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;label&#039;&#039;&#039;: Column name.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the column. Supported types are:&lt;br /&gt;
**** &#039;&#039;&#039;categorical&#039;&#039;&#039;: Values can take on one of a limited, and usually fixed, number of possible values.&lt;br /&gt;
**** &#039;&#039;&#039;numeric&#039;&#039;&#039;: Value is considered as a continuous numeric value.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
Use all event attributes as input for the prediction model. In addition, create an additional machine learning input vector for the SAP_User event data column, supporting at most 10 clusters.&lt;br /&gt;
&lt;br /&gt;
In addition, for case attributes, only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot; and &amp;quot;Customer Group&amp;quot; case data columns are used as categorical attributes, and &amp;quot;Cost&amp;quot; is used as a numeric attribute. Furthermore, the four categorical case attributes are grouped into three groups, each of which is used as its own input vector for the prediction model.&lt;br /&gt;
&lt;br /&gt;
When generating, all event attributes will be included for generated events as columns. Generated cases will have only &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot;,  &amp;quot;Customer Group&amp;quot;, and &amp;quot;Cost&amp;quot; columns.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;attributes&amp;quot;: #{&lt;br /&gt;
  &amp;quot;event&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: None&lt;br /&gt;
        },&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: [&amp;quot;SAP_User&amp;quot;],&lt;br /&gt;
          &amp;quot;max_num_clusters&amp;quot;: 10&lt;br /&gt;
        }&lt;br /&gt;
      ],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: None&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;case&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [#{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Account Manager&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Customer Group&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;]&lt;br /&gt;
      }],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: [&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Region&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Account Manager&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Customer Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Cost&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;numeric&amp;quot; }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Predicting case attribute values ==&lt;br /&gt;
QPR ProcessAnalyzer can also be used, e.g., to predict the final values of case attributes of running cases. The following script gives an example of how to perform this.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let caseAttributeToPredict = &amp;quot;&amp;lt;name of the case attribute&amp;gt;&amp;quot;;&lt;br /&gt;
let resultModelName = &amp;quot;&amp;lt;name of the model to be created/replaced&amp;gt;&amp;quot;;&lt;br /&gt;
let generateDebugCaseAttributes = false; // Set to true to generate columns for prediction probabilities.&lt;br /&gt;
let casesToPredictFilter = &amp;quot;&amp;lt;JSON filter for cases for which the prediction is to be performed&amp;gt;&amp;quot;;&lt;br /&gt;
let casesToUseForTrainingFilter = &amp;quot;&amp;lt;JSON filter for cases to be used for ML model training&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
&lt;br /&gt;
_system.ML.GenerateCaseAttributePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: resultModelName,                                     // Name of the PA model to generate in the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                                  // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,                              // Target project to create the model into.&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: false,                            // Should a prediction model be overwritten if one already exists for this source model and target model name combination. &lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,                             // Maximum number of cases to use from the source model (random sampled). &lt;br /&gt;
  &amp;quot;CommonConfiguration&amp;quot;: #{                                    // Common parameters used by both training and generation.&lt;br /&gt;
    &amp;quot;output_case_attribute_groups&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;attributes&amp;quot;: [caseAttributeToPredict]                   // Attribute whose value is to be predicted.&lt;br /&gt;
    }]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{                                  // Training parameters.&lt;br /&gt;
    &amp;quot;max_num_case_attribute_clusters&amp;quot;: 80,&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },                            &lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{                                // Case attribute generation parameters.&lt;br /&gt;
    &amp;quot;generate_debug_case_attributes&amp;quot;: generateDebugCaseAttributes // Whether to also generate probability and probability_all columns in addition to the actual prediction, which is created into a new column named Predicted_&amp;lt;attribute name&amp;gt;&lt;br /&gt;
  },                                                       &lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: ParseJson(casesToUseForTrainingFilter), // Filter JSON for cases to be used for training.&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: ParseJson(casesToPredictFilter)      // Filter JSON for cases whose case attribute value is to be predicted.&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27121</id>
		<title>Create Predicted Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Predicted_Eventlog&amp;diff=27121"/>
		<updated>2025-10-21T06:58:09Z</updated>

		<summary type="html">&lt;p&gt;MarHink: Added example of predicting case attribute values&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article has instructions how to install, configure and use eventlog predictions. The prediction creates a new model that contains the source model data and the predictions. It&#039;s able to predict case attributes for the generated new cases and event attributes for the predicted events. To distinguish the real (source data) and predicted events and cases, there are following attributes in the model:&lt;br /&gt;
* Event attribute &#039;&#039;&#039;Predicted&#039;&#039;&#039; denotes whether the event is from the source data (&#039;&#039;false&#039;&#039;) or whether it&#039;s predicted (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
* Case attribute &#039;&#039;&#039;Generated&#039;&#039;&#039; denotes whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the prediction generated it as a new case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for prediction ==&lt;br /&gt;
The following prerequisites must be fulfilled to run the eventlog prediction:&lt;br /&gt;
* QPR ProcessAnalyzer 2024.8 or later in use&lt;br /&gt;
* Snowflake connection is configured&lt;br /&gt;
* Source models are stored to Snowflake&lt;br /&gt;
&lt;br /&gt;
== Install prediction to Snowflake ==&lt;br /&gt;
To install the eventlog prediction to Snowflake:&lt;br /&gt;
# Go to Snowflake and create a Snowflake-managed stage named &#039;&#039;&#039;DECISION_INTELLIGENCE&#039;&#039;&#039; in the same schema that is configured for QPR ProcessAnalyzer (in the Snowflake connection string). Use the settings shown in the following image: [[File:Create_Snowflake_stage.png]]&lt;br /&gt;
# Open the created stage and upload the &#039;&#039;&#039;predict.pyz&#039;&#039;&#039; file into the stage (request the file from your QPR representative).&lt;br /&gt;
# Create the following procedure in the same schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE OR REPLACE PROCEDURE QPRPA_SP_PREDICTION(&amp;quot;CONFIGURATION&amp;quot; OBJECT)&lt;br /&gt;
RETURNS OBJECT&lt;br /&gt;
LANGUAGE PYTHON&lt;br /&gt;
STRICT&lt;br /&gt;
RUNTIME_VERSION = &#039;3.11&#039;&lt;br /&gt;
PACKAGES = (&#039;nltk&#039;,&#039;numpy&#039;,&#039;networkx&#039;,&#039;pandas&#039;,&#039;scikit-learn&#039;,&#039;snowflake-snowpark-python&#039;,&#039;tensorflow==2.12.0&#039;,&#039;dill&#039;,&#039;psutil&#039;,&#039;prophet&#039;,&#039;holidays&#039;,&#039;python-kubernetes&#039;,&#039;docker-py&#039;,&#039;cryptography&#039;)&lt;br /&gt;
HANDLER = &#039;main&#039;&lt;br /&gt;
EXECUTE AS OWNER&lt;br /&gt;
AS &#039;&lt;br /&gt;
import sys&lt;br /&gt;
def main(session, parameters_in: dict) -&amp;gt; dict:&lt;br /&gt;
	session.file.get(&#039;&#039;@decision_intelligence/predict.pyz&#039;&#039;, &#039;&#039;/tmp&#039;&#039;)&lt;br /&gt;
	sys.path.append(&#039;&#039;/tmp/predict.pyz&#039;&#039;)&lt;br /&gt;
	import predict&lt;br /&gt;
	return predict.main(session, parameters_in)&lt;br /&gt;
&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Create prediction script in QPR ProcessAnalyzer ==&lt;br /&gt;
1. Create the following example expression script (e.g., with name &#039;&#039;&#039;Create prediction model&#039;&#039;&#039;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let completeCaseEventTypeName = &amp;quot;&amp;lt;event type name found only in complete cases&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
let eventTypeColumnName = sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
_system.ML.GeneratePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My prediction model&amp;quot;,      // Name of the PA model to generate in the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,         // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,     // Target project to create the model into.&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{         // Training parameters.&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 200&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{       // Model generation parameters.&lt;br /&gt;
    &amp;quot;cases_to_generate&amp;quot;: 1000&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;StringifiedValues&amp;quot;: [&lt;br /&gt;
              `0${completeCaseEventTypeName}`&lt;br /&gt;
            ]&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Items&amp;quot;: [&lt;br /&gt;
      #{&lt;br /&gt;
        &amp;quot;Type&amp;quot;: &amp;quot;ExcludeCases&amp;quot;,&lt;br /&gt;
        &amp;quot;Items&amp;quot;: [&lt;br /&gt;
          #{&lt;br /&gt;
            &amp;quot;Type&amp;quot;: &amp;quot;EventAttributeValue&amp;quot;,&lt;br /&gt;
            &amp;quot;Attribute&amp;quot;: eventTypeColumnName,&lt;br /&gt;
            &amp;quot;StringifiedValues&amp;quot;: [&lt;br /&gt;
              `0${completeCaseEventTypeName}`&lt;br /&gt;
            ]&lt;br /&gt;
          }&lt;br /&gt;
        ]&lt;br /&gt;
      }&lt;br /&gt;
    ]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: true,    // Should a prediction model be overwritten if one already exists for this source model and target model name combination.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000     // Maximum number of cases to use from the source model (random sampled).&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure prediction for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this source model will be used to train the prediction model so that it can generate new cases and continuations for incomplete existing cases.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event type name found only in complete cases&amp;gt;&#039;&#039;&#039;: This example script has been hard-coded to determine whether a case is complete or incomplete based on the existence of this event type.&lt;br /&gt;
&lt;br /&gt;
== Configure prediction ==&lt;br /&gt;
The prediction script has the following settings in the GeneratePredictionModel call:&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the prediction is made. Model can be selected for example based on id with ModelById function or by name with ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;RecreatePredictionModel&#039;&#039;&#039;: When &#039;&#039;true&#039;&#039;, a new ML model is trained when the script is run. When &#039;&#039;false&#039;&#039;, the prediction is run using a possibly pre-existing ML model. &lt;br /&gt;
* &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039;: Training parameters.&lt;br /&gt;
** &#039;&#039;&#039;attributes&#039;&#039;&#039;: Attribute configurations (for more information, see the chapter below).&lt;br /&gt;
** &#039;&#039;&#039;generate_start_time_trend_images&#039;&#039;&#039;: If set to true, two images (the results of the Prophet plot and plot_components functions) will be generated for each cross-validated Prophet parameter combination and also for the final selected parameters. &lt;br /&gt;
*** The images will be generated into stage files with the following path names:&lt;br /&gt;
**** plot: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}.png&lt;br /&gt;
**** plot_components: @decision_intelligence_testing/{model_name}_st_RMSE={rmse_value or &amp;quot;final&amp;quot;}_comp.png&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_case_clusters&#039;&#039;&#039;: Set the maximum number of clusters to divide the case attribute values into.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;max_num_traces_in_training&#039;&#039;&#039;: Set the maximum number of traces used in training.&lt;br /&gt;
*** When training, every case of length N is split into N traces (a.k.a. prefixes) (p_1, ..., p_N), where p_x contains the first x events of the full case.&lt;br /&gt;
**** If there are more traces available than this configured value, the cases to include will be randomly sampled so that the maximum is exceeded by at most one case.&lt;br /&gt;
**** If null, all the traces will be used, which may easily lead to running out of memory.&lt;br /&gt;
**** The default value is 100000.&lt;br /&gt;
** &#039;&#039;&#039;num_epochs_to_train&#039;&#039;&#039;: How many times the training set is used in training. The best performing model out of all the iterations will be selected.&lt;br /&gt;
*** The default value is 500.&lt;br /&gt;
** &#039;&#039;&#039;num_extra_years_to_reserve_in_created_model&#039;&#039;&#039;: Number of additional years, after the year of the last timestamp in the training data, to reserve in the capacity of the created ML model. This allows the model to predict timestamps in the range from the minimum timestamp year in the training data to the maximum timestamp year plus this value.&lt;br /&gt;
*** The default value is 20.&lt;br /&gt;
** &#039;&#039;&#039;reserve_extra_sequence_length&#039;&#039;&#039;: How many extra events to reserve space for in the ML model compared to the number of events the longest case in the training data has.&lt;br /&gt;
*** The default value is 5.&lt;br /&gt;
** &#039;&#039;&#039;samples_per_epoch&#039;&#039;&#039;: If not null, specifies (approximately) how many traces/prefixes will be used to represent one epoch of data in the training. The actual value used will be made divisible by batch_size using this formula:&lt;br /&gt;
*** max(floor(samples_per_epoch / batch_size), 1) * batch_size&lt;br /&gt;
*** If null, every epoch will use all the traces/prefixes in the training data.&lt;br /&gt;
*** The default value is null.&lt;br /&gt;
** &#039;&#039;&#039;validation_split&#039;&#039;&#039;: Percentage of traces/prefixes to use to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data.&lt;br /&gt;
*** If 0, separate validation data will not be used. Instead, all the training data will be used also as validation data.&lt;br /&gt;
*** The default value is 0.&lt;br /&gt;
* &#039;&#039;&#039;GenerationConfiguration&#039;&#039;&#039;: Event generation parameters. When null, no generation is done. For example, the following parameters are supported:&lt;br /&gt;
** &#039;&#039;&#039;avoid_repeated_activities&#039;&#039;&#039;: Array of activity names that should occur at most once in any case. The probability of selecting any of the activities specified in this configuration more than once is set to be 0. &lt;br /&gt;
*** Empty array means that activity generation is not restricted by this setting at all. &lt;br /&gt;
*** A null value means that no activity may occur more than once (a shortcut for specifying all the activity names).&lt;br /&gt;
*** The default value is an empty array.&lt;br /&gt;
** &#039;&#039;&#039;cases_to_generate&#039;&#039;&#039;: Maximum number of cases to create. The number of created cases is further limited by the capabilities of the trained model and the &#039;&#039;case_generation_start_time&#039;&#039; and &#039;&#039;case_generation_end_time&#039;&#039; parameters.&lt;br /&gt;
*** The default value is such that the number of cases is not limited by this setting.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_start_time&#039;&#039;&#039;: If defined, new cases will be generated after this timestamp (given as string in ISO datetime format). &lt;br /&gt;
*** If undefined, the latest start event timestamp used in the training data is used.&lt;br /&gt;
*** The default value is undefined.&lt;br /&gt;
** &#039;&#039;&#039;case_generation_end_time&#039;&#039;&#039;: If defined, new events and cases will not be generated after this timestamp (given as string in ISO datetime format). E.g., &amp;quot;2015-01-01T00:00:00&amp;quot;.&lt;br /&gt;
*** The default value is unlimited (the only limit comes from the capacity of the trained model).&lt;br /&gt;
** &#039;&#039;&#039;generate_debug_event_attributes&#039;&#039;&#039;: &lt;br /&gt;
*** If true, additional columns will be added containing, e.g., probabilities of the selected activity and other activities.&lt;br /&gt;
*** The default value is false.&lt;br /&gt;
** &#039;&#039;&#039;max_num_events&#039;&#039;&#039;:&lt;br /&gt;
*** Specifies the maximum number of events to generate for any case.&lt;br /&gt;
*** If unspecified (=default), the value equals &#039;&#039;&amp;lt;the maximum number of events in any case in the training data&amp;gt;&#039;&#039;+&#039;&#039;&amp;lt;the value of reserve_extra_sequence_length in training&amp;gt;&#039;&#039;.&lt;br /&gt;
** &#039;&#039;&#039;min_prediction_probability&#039;&#039;&#039;:&lt;br /&gt;
*** The minimum probability of any prediction. If the probability of a prediction is lower than this, it will never be picked. &lt;br /&gt;
*** The default value is 0.01.&lt;br /&gt;
** &#039;&#039;&#039;temperature&#039;&#039;&#039;: &lt;br /&gt;
*** If 0, the generated next activity will always be the one that is the most probable. &lt;br /&gt;
*** If 1, the generated next activity is purely based on the probabilities returned by the trained ML model. &lt;br /&gt;
*** This behavior is interpolated when using values between 0 and 1.&lt;br /&gt;
*** The default value is 0.9.&lt;br /&gt;
* &#039;&#039;&#039;TrainingDataFilter&#039;&#039;&#039;: [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter]] to select specific cases that are used to train the prediction model. This filter is needed to train the model using only the completed cases. Incomplete cases should not be used for training, because the model would otherwise incorrectly learn that cases should end the way the incomplete cases do.&lt;br /&gt;
* &#039;&#039;&#039;IncompleteCasesFilter&#039;&#039;&#039;: Optional [[Filtering_in_QPR_ProcessAnalyzer_Queries|filter]] to select which cases the prediction is made for. To improve performance of the prediction, it&#039;s recommended to include only the incomplete cases for which new events might appear, and skip the completed cases for which new events are not expected anymore.&lt;br /&gt;
* &#039;&#039;&#039;TrainingCaseSampleSize&#039;&#039;&#039;: Maximum number of cases to take from the source model (cases are selected randomly). Use a lower setting to speed up the ML model training. The greater the value, the more subtle phenomena the prediction can learn from the data.&lt;br /&gt;
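The trace splitting and the samples_per_epoch rounding described above can be sketched in Python as follows (an illustration only; the function and variable names are not part of the QPR ProcessAnalyzer API):&lt;br /&gt;

```python
import math

def split_into_prefixes(case_events):
    """Split a case of N events into N traces (prefixes) p_1, ..., p_N,
    where p_x contains the first x events of the case."""
    return [case_events[:x] for x in range(1, len(case_events) + 1)]

def effective_samples_per_epoch(samples_per_epoch, batch_size):
    """Round samples_per_epoch down to a multiple of batch_size, but never
    below one full batch: max(floor(samples_per_epoch / batch_size), 1) * batch_size."""
    return max(math.floor(samples_per_epoch / batch_size), 1) * batch_size

# A case with 4 events produces 4 training prefixes.
prefixes = split_into_prefixes(["Created", "Approved", "Shipped", "Closed"])
print(len(prefixes))                           # 4
print(prefixes[1])                             # ['Created', 'Approved']

print(effective_samples_per_epoch(1000, 64))   # 960
print(effective_samples_per_epoch(10, 64))     # 64 (never below one batch)
```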
&lt;br /&gt;
== Attribute configuration ==&lt;br /&gt;
Attribute configuration is used in &#039;&#039;&#039;TrainingConfiguration&#039;&#039;&#039; (see the chapter above) to configure which event and case attributes should be used in the prediction model and how they are used.&lt;br /&gt;
&lt;br /&gt;
At the top level, the configuration is split into two sections: &amp;quot;event&amp;quot; and &amp;quot;case&amp;quot;. &amp;quot;Event&amp;quot; is used to configure event attributes, whereas &amp;quot;case&amp;quot; is used for case attributes.&lt;br /&gt;
&lt;br /&gt;
The next level supports one value: &amp;quot;input&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The level below that supports the following settings:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;categorical_groups&#039;&#039;&#039;: An array of categorical attribute group configuration objects used to define groups of attributes that will be bundled together in the trained model as separate input or output features. Each attribute group will form its own input or output vector used in the model training and generation.&lt;br /&gt;
** If null, only one group will be created with all the available categorical attributes included.&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;attributes&#039;&#039;&#039;: An array of attribute names.&lt;br /&gt;
**** If null, all the input attributes are to be included in this group.&lt;br /&gt;
*** &#039;&#039;&#039;max_num_clusters&#039;&#039;&#039;: The maximum number of clusters (input- or output vector feature values) to use to represent this group of attributes.&lt;br /&gt;
**** Default value: 20&lt;br /&gt;
**** NOTE: Clustering is used by default to convert a set of attribute values into an input- or output vector used by the prediction model.&lt;br /&gt;
* &#039;&#039;&#039;columns&#039;&#039;&#039;: An array of attribute column configuration objects used to define columns in the input data that are to be used as event- or case attributes.&lt;br /&gt;
** If null, all the columns will be included as categorical attributes (except the case id column and, for events, the event type and timestamp columns).&lt;br /&gt;
** The following settings are supported by these objects:&lt;br /&gt;
*** &#039;&#039;&#039;label&#039;&#039;&#039;: Column name.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the column. Supported types are:&lt;br /&gt;
**** &#039;&#039;&#039;categorical&#039;&#039;&#039;: Values can take on one of a limited, and usually fixed, number of possible values.&lt;br /&gt;
**** &#039;&#039;&#039;numeric&#039;&#039;&#039;: Value is considered as a continuous numeric value.&lt;br /&gt;
&lt;br /&gt;
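The clustering mentioned under max_num_clusters can be illustrated with a simple frequency-based Python sketch: keep the most frequent attribute-value combinations as their own clusters and bucket the long tail into a shared one. This is only an assumed, simplified illustration, not the actual clustering algorithm used by QPR ProcessAnalyzer; the function name and data layout are hypothetical.&lt;br /&gt;

```python
from collections import Counter

def cluster_attribute_values(rows, attributes, max_num_clusters):
    """Map each combination of categorical attribute values to one of at
    most max_num_clusters cluster ids. The most frequent combinations get
    their own cluster; everything else falls into the last, shared cluster."""
    combos = [tuple(row[a] for a in attributes) for row in rows]
    ranked = Counter(combos).most_common(max_num_clusters - 1)
    top = {combo: i for i, (combo, _count) in enumerate(ranked)}
    return [top.get(combo, max_num_clusters - 1) for combo in combos]

rows = [
    {"Region": "EU", "Product Group": "A"},
    {"Region": "EU", "Product Group": "A"},
    {"Region": "US", "Product Group": "B"},
    {"Region": "APAC", "Product Group": "C"},
]
# The two EU/A rows share cluster 0; the rarer combinations get the remaining ids.
print(cluster_attribute_values(rows, ["Region", "Product Group"], 3))  # [0, 0, 1, 2]
```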
==== Example ====&lt;br /&gt;
Use all event attributes as input for the prediction model. In addition, an extra machine learning input vector is created for the SAP_User event data column, supporting at most 10 clusters.&lt;br /&gt;
&lt;br /&gt;
In addition, for case attributes, only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot; and &amp;quot;Customer Group&amp;quot; case data columns are used as categorical attributes and &amp;quot;Cost&amp;quot; as a numeric attribute. Furthermore, the four categorical case attributes are grouped into three groups, each of which is used as its own input vector for the prediction model.&lt;br /&gt;
&lt;br /&gt;
When generating, all event attributes will be included as columns for the generated events. Generated cases will have only the &amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, and &amp;quot;Cost&amp;quot; columns.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;attributes&amp;quot;: #{&lt;br /&gt;
  &amp;quot;event&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: None&lt;br /&gt;
        },&lt;br /&gt;
        #{&lt;br /&gt;
          &amp;quot;attributes&amp;quot;: [&amp;quot;SAP_User&amp;quot;],&lt;br /&gt;
          &amp;quot;max_num_clusters&amp;quot;: 10&lt;br /&gt;
        }&lt;br /&gt;
      ],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: None&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;case&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;categorical_groups&amp;quot;: [#{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Account Manager&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Customer Group&amp;quot;]&lt;br /&gt;
      }, #{&lt;br /&gt;
        &amp;quot;attributes&amp;quot;: [&amp;quot;Region&amp;quot;, &amp;quot;Product Group&amp;quot;]&lt;br /&gt;
      }],&lt;br /&gt;
      &amp;quot;columns&amp;quot;: [&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Region&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Account Manager&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Customer Group&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;categorical&amp;quot; },&lt;br /&gt;
        #{ &amp;quot;label&amp;quot;: &amp;quot;Cost&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;numeric&amp;quot; }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Predicting case attribute values ==&lt;br /&gt;
QPR ProcessAnalyzer can also be used to predict, for example, the final values of case attributes of running cases. The following script gives an example of how to perform this.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let caseAttributeToPredict = &amp;quot;&amp;lt;name of the case attribute&amp;gt;&amp;quot;;&lt;br /&gt;
let resultModelName = &amp;quot;&amp;lt;name of the model to be created/replaced&amp;gt;&amp;quot;;&lt;br /&gt;
let generateDebugCaseAttributes = false;&lt;br /&gt;
let casesToPredictFilter = &amp;quot;&amp;lt;JSON filter for cases for which the prediction is to be performed&amp;gt;&amp;quot;;&lt;br /&gt;
let casesToUseForTrainingFilter = &amp;quot;&amp;lt;JSON filter for cases to be used for ML model training&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
&lt;br /&gt;
_system.ML.GenerateCaseAttributePredictionModel(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: resultModelName,                                     // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                                  // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,                              // Target project to create the model into.&lt;br /&gt;
  &amp;quot;RecreatePredictionModel&amp;quot;: false,                            // Whether an existing prediction model for this source model and target model name combination should be overwritten.&lt;br /&gt;
  &amp;quot;TrainingCaseSampleSize&amp;quot;: 10000,                             // Maximum number of cases to use from the source model (randomly sampled).&lt;br /&gt;
  &amp;quot;CommonConfiguration&amp;quot;: #{                                    // Common parameters used by both training and generation.&lt;br /&gt;
    &amp;quot;output_case_attribute_groups&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;attributes&amp;quot;: [caseAttributeToPredict]                   // Attribute whose value is to be predicted.&lt;br /&gt;
    }]&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;TrainingConfiguration&amp;quot;: #{                                  // Training parameters.&lt;br /&gt;
    &amp;quot;max_num_case_attribute_clusters&amp;quot;: 80,&lt;br /&gt;
    &amp;quot;num_epochs_to_train&amp;quot;: 100&lt;br /&gt;
  },                            &lt;br /&gt;
  &amp;quot;GenerationConfiguration&amp;quot;: #{                                // Case attribute generation parameters.&lt;br /&gt;
    &amp;quot;generate_debug_case_attributes&amp;quot;: generateDebugCaseAttributes // Whether probability and probability_all columns should be generated in addition to the actual prediction, which is created into a new column named Predicted_&amp;lt;attribute name&amp;gt;&lt;br /&gt;
  },                                                       &lt;br /&gt;
  &amp;quot;TrainingDataFilter&amp;quot;: ParseJson(casesToUseForTrainingFilter), // Filter JSON for cases to be used for training.&lt;br /&gt;
  &amp;quot;IncompleteCasesFilter&amp;quot;: ParseJson(casesToPredictFilter)      // Filter JSON for cases whose case attribute value is to be predicted.&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=SAML_2.0_Federated_Authentication&amp;diff=27096</id>
		<title>SAML 2.0 Federated Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=SAML_2.0_Federated_Authentication&amp;diff=27096"/>
		<updated>2025-10-03T07:28:15Z</updated>

		<summary type="html">&lt;p&gt;MarHink: Updated SAML related URLs to have the correct casings.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QPR ProcessAnalyzer supports authenticating users with federated authentication using the SAML 2.0 protocol. QPR ProcessAnalyzer works as a &#039;&#039;&#039;service provider (SP)&#039;&#039;&#039; and uses an external &#039;&#039;&#039;identity provider (IdP)&#039;&#039;&#039; to provide user identity (i.e., authenticate users). Commonly used identity providers are Azure AD and Microsoft Active Directory Federation Services (ADFS).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
When QPR ProcessAnalyzer is configured as a SAML 2.0 service provider (SP), users can authenticate to QPR ProcessAnalyzer via the configured SAML 2.0 identity provider (IdP). When accessing QPR ProcessAnalyzer, users are automatically redirected to the identity provider for authentication. When the authentication is done, users are redirected back to QPR ProcessAnalyzer, where the user is then automatically logged in. When using federated authentication, users don&#039;t normally see the QPR ProcessAnalyzer login page. The login page can be accessed (e.g., when logging in using QPR ProcessAnalyzer user management credentials) by adding the &#039;&#039;&#039;forceLogin=1&#039;&#039;&#039; parameter to the url, e.g. &amp;lt;nowiki&amp;gt;https://customer.onqpr.com/QPRPA/ui/#/login?forceLogin=1&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
QPR ProcessAnalyzer can also automatically redirect users to the identity provider from the url &#039;&#039;&#039;/qprpa/Saml2&#039;&#039;&#039;, e.g. &amp;lt;nowiki&amp;gt;https://customer.onqpr.com/qprpa/Saml2&amp;lt;/nowiki&amp;gt;. Redirection to this url can be configured in IIS for when users access QPR ProcessAnalyzer with the server name only, e.g. &amp;lt;nowiki&amp;gt;https://customer.onqpr.com&amp;lt;/nowiki&amp;gt;. The advantage of using this url is that the QPR ProcessAnalyzer web application is not loaded before the authentication, making the authentication flow faster. When going to the identity provider using a url starting with &amp;lt;nowiki&amp;gt;https://customer.onqpr.com/QPRPA/ui/&amp;lt;/nowiki&amp;gt;, the QPR ProcessAnalyzer web application is loaded before going to the identity provider.&lt;br /&gt;
&lt;br /&gt;
When a user logs in to QPR ProcessAnalyzer for the first time, a user account is created in QPR ProcessAnalyzer user management. This account can only log in using the federated authentication, because the user account doesn&#039;t have a password in QPR ProcessAnalyzer. User accounts are matched between QPR ProcessAnalyzer and the identity provider using usernames.&lt;br /&gt;
&lt;br /&gt;
When the &#039;&#039;SAMLGroupsAttribute&#039;&#039; setting has been configured, QPR ProcessAnalyzer user management is kept in synchronization with the identity provider&#039;s information regarding which groups the users belong to. This way, the operative user management can be done outside QPR ProcessAnalyzer. Still, the groups to be used need to be created beforehand and permissions assigned to the groups.&lt;br /&gt;
&lt;br /&gt;
Additional notes for the SAML authentication:&lt;br /&gt;
* QPR ProcessAnalyzer needs to use https when SAML 2.0 authentication is used.&lt;br /&gt;
* The identity provider needs to publish the identity provider metadata, because QPR ProcessAnalyzer reads the identity provider settings from there.&lt;br /&gt;
* QPR ProcessAnalyzer only supports SAML POST binding (e.g., SAML redirect binding is not supported).&lt;br /&gt;
* &#039;&#039;SAML AuthnRequests&#039;&#039; are signed using an out-of-the-box certificate embedded in QPR ProcessAnalyzer. This certificate is valid until 1.1.2035. It&#039;s also possible to use a custom certificate (see the SAMLSigningCertificate setting). The certificate public key is available in the service provider metadata published by the [[Web API: saml2|QPR ProcessAnalyzer Web API]].&lt;br /&gt;
* &#039;&#039;SAML Assertions&#039;&#039; must be signed (by the identity provider) to be accepted by QPR ProcessAnalyzer.&lt;br /&gt;
* SAML Assertions can optionally be encrypted by the identity provider. This requires the SAMLEncryptionCertificate setting to be defined in the [[PA_Configuration_database_table#SAML_2.0_Federated_Authentication_Settings|QPR ProcessAnalyzer configuration table]].&lt;br /&gt;
* Logout request to identity provider is not supported by QPR ProcessAnalyzer.&lt;br /&gt;
* If the user clicks the logout button, the user is redirected to the QPR ProcessAnalyzer login page, where the user can click the &#039;&#039;&#039;Log in using SSO&#039;&#039;&#039; button to log in again.&lt;br /&gt;
* If the QPR ProcessAnalyzer session expires, the user is redirected back to the identity provider for a new login.&lt;br /&gt;
&lt;br /&gt;
==Configuring SAML to QPR ProcessAnalyzer==&lt;br /&gt;
&lt;br /&gt;
To configure the SAML 2.0 authentication, follow these steps:&lt;br /&gt;
# Define settings &#039;&#039;&#039;SAMLMetadataUrl&#039;&#039;&#039;, &#039;&#039;&#039;ServiceProviderLocation&#039;&#039;&#039;, &#039;&#039;&#039;SAMLUserIdAttribute&#039;&#039;&#039;, &#039;&#039;&#039;SAMLGroupsAttribute&#039;&#039;&#039; (optional), &#039;&#039;&#039;SAMLEncryptionCertificate&#039;&#039;&#039; (optional), and &#039;&#039;&#039;SAMLSigningCertificate&#039;&#039;&#039; (optional) in the [[PA_Configuration_database_table#SAML_2.0_Federated_Authentication_Settings|QPR ProcessAnalyzer configuration table]]. QPR ProcessAnalyzer needs to be restarted for the settings to take effect.&lt;br /&gt;
# Configure a redirection from the root path of the QPR ProcessAnalyzer server to &#039;&#039;&#039;/qprpa/Saml2&#039;&#039;&#039;, so that users are automatically redirected to the identity provider for authentication.&lt;br /&gt;
# The identity provider configuration depends on which identity provider is used. See below for help on configuring [[#Using Azure AD as Identity Provider|Azure AD]] and [[#Using ADFS as Identity Provider|ADFS]] as the identity provider.&lt;br /&gt;
&lt;br /&gt;
If there are any issues with the authentication, please check the [[QPR_ProcessAnalyzer_Logs|QPR ProcessAnalyzer logs]].&lt;br /&gt;
&lt;br /&gt;
==Using Azure AD as Identity Provider==&lt;br /&gt;
Azure Active Directory (AAD) can be used as an identity provider to log in to QPR ProcessAnalyzer. The following configurations are needed:&lt;br /&gt;
# Login to https://portal.azure.com as a cloud application admin or an application admin for your Azure AD tenant.&lt;br /&gt;
# Click &#039;&#039;&#039;Azure Active Directory&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;Enterprise Applications&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;New application&#039;&#039;&#039;. Select &#039;&#039;&#039;Non-gallery application&#039;&#039;&#039;.&lt;br /&gt;
# Define &#039;&#039;&#039;Name&#039;&#039;&#039; for the application, e.g., &amp;quot;QPR ProcessAnalyzer&amp;quot;.&lt;br /&gt;
# Go to &#039;&#039;&#039;Manage&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;Single sign-on&#039;&#039;&#039; &amp;gt; &#039;&#039;&#039;SAML&#039;&#039;&#039;.&lt;br /&gt;
# Click the &#039;&#039;&#039;Edit&#039;&#039;&#039; pencil on &#039;&#039;&#039;Basic SAML Configuration&#039;&#039;&#039; and define the following settings (where &amp;lt;hostname&amp;gt; is the name of the QPR ProcessAnalyzer server):&lt;br /&gt;
## &#039;&#039;&#039;Identifier (Entity ID):&#039;&#039;&#039; https://&amp;lt;hostname&amp;gt;/qprpa/Saml2&lt;br /&gt;
## &#039;&#039;&#039;Reply URL (Assertion Consumer Service URL):&#039;&#039;&#039; https://&amp;lt;hostname&amp;gt;/qprpa/Saml2/Acs&lt;br /&gt;
## &#039;&#039;&#039;Sign on URL:&#039;&#039;&#039; It&#039;s recommended to leave this setting empty.&lt;br /&gt;
# Copy the &#039;&#039;&#039;App Federation Metadata Url&#039;&#039;&#039; to the clipboard and store it in the &#039;&#039;SAMLMetadataUrl&#039;&#039; setting in the QPR ProcessAnalyzer configuration table.&lt;br /&gt;
# If you want QPR ProcessAnalyzer to synchronize group memberships between Azure AD and QPR ProcessAnalyzer, also add the &#039;&#039;&#039;Group Claim&#039;&#039;&#039; from the user&#039;s &#039;&#039;&#039;Attributes &amp;amp; Claims&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
More information about Azure Active Directory: https://docs.microsoft.com/en-us/azure/active-directory/&lt;br /&gt;
&lt;br /&gt;
Instructions for setting up the optional SAML assertions encryption: https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/howto-saml-token-encryption?WT.mc_id=migration_service_aad_-inproduct-azureportal.&lt;br /&gt;
&lt;br /&gt;
==Using ADFS as Identity Provider==&lt;br /&gt;
ADFS (Active Directory Federation Services) can be used as an identity provider to login to QPR ProcessAnalyzer. For ADFS setup, follow the ADFS configuration guide in https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust with the following notes:&lt;br /&gt;
* Step 4: Select option &#039;&#039;&#039;Enter data about the relying party manually&#039;&#039;&#039; as metadata is not available.&lt;br /&gt;
* Step 5: Name can be chosen freely.&lt;br /&gt;
* Step 7: Disable option &#039;&#039;&#039;Enable support for the WS-Federation Passive protocol&#039;&#039;&#039;. Select option &#039;&#039;&#039;Enable support for the SAML 2.0 WebSSO protocol&#039;&#039;&#039; and define the url &#039;&#039;&#039;https://&amp;lt;hostname&amp;gt;/qprpa/Saml2/Acs&#039;&#039;&#039;, where &amp;lt;hostname&amp;gt; is the QPR ProcessAnalyzer server hostname.&lt;br /&gt;
* Step 8: Define url &#039;&#039;&#039;https://&amp;lt;hostname&amp;gt;/qprpa/Saml2/Acs&#039;&#039;&#039; where &amp;lt;hostname&amp;gt; is the QPR ProcessAnalyzer server hostname.&lt;br /&gt;
* Step 11: Select option &#039;&#039;&#039;Configure claims issuance policy for this application&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Example claims issuance rule:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
c:[Type == &amp;quot;http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname&amp;quot;, Issuer == &amp;quot;AD AUTHORITY&amp;quot;]&lt;br /&gt;
=&amp;gt; issue(store = &amp;quot;Active Directory&amp;quot;, types = (&amp;quot;http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn&amp;quot;, &amp;quot;http://schemas.xmlsoap.org/claims/CommonName&amp;quot;, &amp;quot;http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress&amp;quot;, &amp;quot;http://schemas.xmlsoap.org/claims/Group&amp;quot;), query = &amp;quot;;userPrincipalName,displayName,mail,tokenGroups;{0}&amp;quot;, param = c.Value);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== SAML 2.0 Authentication API ==&lt;br /&gt;
QPR ProcessAnalyzer has the [[Web_API:_saml2/acs|/Saml2/Acs]] endpoint, which accepts a SAML assertion from the IdP and returns an HTTP redirection to the QPR ProcessAnalyzer Web UI. The url contains a &#039;&#039;sys:samlHash&#039;&#039; parameter, which is used by the Web UI to log in the user using the [[Web_API:_Token|/token]] endpoint (to get a session token to use in interactions with the Web API).&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&#039;&#039;&#039;Problem&#039;&#039;&#039;: When trying to authenticate with SAML, QPR ProcessAnalyzer responds: &#039;&#039;Bad Request - Request too long&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Solution&#039;&#039;&#039;: This error occurs when the SAML assertion (the message from the IdP to QPR ProcessAnalyzer) contains too much data. The SAML assertion is encoded into an http header (as part of a cookie), which has a certain allowed maximum length. One option is to increase the limits in the Windows http.sys web server (https://learn.microsoft.com/en-us/troubleshoot/developer/webapps/iis/iisadmin-service-inetinfo/httpsys-registry-windows). Alternatively, it may be possible to reduce the amount of data the IdP includes in the SAML assertion.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* General information about federated authentication: https://en.wikipedia.org/wiki/Federated_identity&lt;br /&gt;
* General information about SAML 2.0: https://en.wikipedia.org/wiki/SAML_2.0&lt;br /&gt;
* SAML 2.0 in Azure AD: https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol&lt;br /&gt;
* General information about ADFS: https://en.wikipedia.org/wiki/Active_Directory_Federation_Services&lt;br /&gt;
* ADFS documentation: https://msdn.microsoft.com/en-us/library/bb897402.aspx&lt;br /&gt;
&lt;br /&gt;
[[Category: QPR ProcessAnalyzer]]&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Simulated_Eventlog&amp;diff=26796</id>
		<title>Create Simulated Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Simulated_Eventlog&amp;diff=26796"/>
		<updated>2025-08-22T14:22:18Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Transformation: event_resource_limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article has instructions on how to install, configure and use eventlog simulations. The simulation creates a new model that contains both the source model data and the new simulated data. The case attribute &#039;&#039;&#039;Simulated&#039;&#039;&#039; can be used to determine whether the case is in the source data (&#039;&#039;false&#039;&#039;) or whether the simulation generated it as a new simulated case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for simulation ==&lt;br /&gt;
As a prerequisite, prediction must be installed in the Snowflake account in use, as described in [[Create Predicted Eventlog|Install prediction in Snowflake]].&lt;br /&gt;
&lt;br /&gt;
== Create simulation script in QPR ProcessAnalyzer ==&lt;br /&gt;
You can create your own simulation script from scratch, or use one of the scripts provided below as a starting point.&lt;br /&gt;
&lt;br /&gt;
=== Creating a simulation model that deletes specific events ===&lt;br /&gt;
1. Create the following example expression script (e.g., with the name &amp;quot;&#039;&#039;&#039;Create simulation model - delete events&#039;&#039;&#039;&amp;quot;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let flowFromEventType = &amp;quot;&amp;lt;from event type of a flow to modify&amp;gt;&amp;quot;, flowToEventType = &amp;quot;&amp;lt;to event type of a flow to modify&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
_system.ML.ApplyTransformations(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My simulation model - delete&amp;quot;,   // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,               // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,           // Target project to create the model into.&lt;br /&gt;
  &amp;quot;Transformations&amp;quot;: [#{                    // Transformation configurations.&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
    &amp;quot;column&amp;quot;: sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
    &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;from&amp;quot;: flowFromEventType,&lt;br /&gt;
      &amp;quot;to&amp;quot;: flowToEventType,&lt;br /&gt;
      &amp;quot;probability&amp;quot;: 1.0,&lt;br /&gt;
      &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
        &amp;quot;value&amp;quot;: 0.0&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;delete&amp;quot;: true&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure simulation for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this model serves as the source data modified by the simulation transformations.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;from event type of a flow to modify&amp;gt;&#039;&#039;&#039;: From-event type name of flows from which the from-event is to be deleted.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;to event type of a flow to modify&amp;gt;&#039;&#039;&#039;: To-event type name of flows from which the from-event is to be deleted.&lt;br /&gt;
&lt;br /&gt;
=== Creating a simulation model that automates all resources belonging to the same role as a specified resource ===&lt;br /&gt;
1. Create the following example expression script (e.g., with name &amp;quot;&#039;&#039;&#039;Create simulation model&#039;&#039;&#039; &#039;&#039;&#039;- automate&#039;&#039;&#039;&amp;quot;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let resourceColumnName = &amp;quot;&amp;lt;event data column having resource names&amp;gt;&amp;quot;;&lt;br /&gt;
let resourceNameToAutomate = &amp;quot;&amp;lt;resource value whose role should be automated&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
_system.ML.ApplyTransformations(#{&lt;br /&gt;
  &amp;quot;PredictionProcedureName&amp;quot;: &amp;quot;qprpa_sp_prediction&amp;quot;,&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My simulation model - automate&amp;quot;,   // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                 // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,             // Target project to create the model into.&lt;br /&gt;
  &amp;quot;Transformations&amp;quot;: [#{                      // Transformation configurations&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;resources_to_roles&amp;quot;,&lt;br /&gt;
    &amp;quot;resource_column&amp;quot;: resourceColumnName,&lt;br /&gt;
    &amp;quot;role_column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
    &amp;quot;role_name_template&amp;quot;: &amp;quot;Role %d&amp;quot;&lt;br /&gt;
  }, #{&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;role_name&amp;quot;: #{&lt;br /&gt;
        &amp;quot;input&amp;quot;: &amp;quot;resource_to_role_map&amp;quot;,&lt;br /&gt;
        &amp;quot;value_path&amp;quot;: [resourceNameToAutomate]&lt;br /&gt;
      }&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
    &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;from_input&amp;quot;: &amp;quot;role_name&amp;quot;,&lt;br /&gt;
      &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
        &amp;quot;value&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;probability&amp;quot;: 0.5&lt;br /&gt;
      }&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure simulation for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this model serves as the source data modified by the simulation transformations.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event data column having resource names&amp;gt;&#039;&#039;&#039;: Name of the event data column that contains resource names.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;resource value whose role should be automated&amp;gt;&#039;&#039;&#039;: Name of the resource whose role is to be automated.&lt;br /&gt;
&lt;br /&gt;
== Configure simulation ==&lt;br /&gt;
The simulation script has the following settings in the ApplyTransformations call:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the simulation is made. The model can be selected, for example, by id with the ModelById function or by name with the ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;Transformations&#039;&#039;&#039;: Array of transformation configuration objects. Each object supports the following parameters:&lt;br /&gt;
** &#039;&#039;&#039;type&#039;&#039;&#039;: Defines the type of the transformation to perform. See below for more details on the supported transformations. Supported values are:&lt;br /&gt;
*** &#039;&#039;&#039;enforce_resource_limits&#039;&#039;&#039;: Used to modify the given event log so that the given maximum resource limits are not exceeded.&lt;br /&gt;
*** &#039;&#039;&#039;extract_max_resource_usages&#039;&#039;&#039;: Used to extract, for every value of a specified column, the maximum number of concurrent cases in the given event log that have that value.&lt;br /&gt;
*** &#039;&#039;&#039;generate&#039;&#039;&#039;: Used to generate a new event log using a trained ML model.&lt;br /&gt;
*** &#039;&#039;&#039;modify_flow_durations&#039;&#039;&#039;: Used to modify durations of flows and possibly remove events having specific flows.&lt;br /&gt;
*** &#039;&#039;&#039;modify_values&#039;&#039;&#039;: Used to modify values of a dictionary given as input (e.g., dictionary generated by extract_max_resource_usages).&lt;br /&gt;
*** &#039;&#039;&#039;resources_to_roles&#039;&#039;&#039;: Performs &amp;quot;organization mining&amp;quot; by trying to group together column values (e.g., resources) that are used in a similar fashion in the given event log (e.g., resources that are often present in a similar set of activities).&lt;br /&gt;
** &#039;&#039;&#039;input&#039;&#039;&#039;: Can be used to specify that the transformation&#039;s input parameters get their values from the previous transformation&#039;s result.&lt;br /&gt;
*** The value is either a direct mapping given as just the name of a property in the previous transformation&#039;s result, or a value mapping configuration object that supports the following parameters:&lt;br /&gt;
**** &#039;&#039;&#039;input&#039;&#039;&#039;: Name of the property to take from the previous transformation&#039;s result as the root object from which the actual value is extracted.&lt;br /&gt;
**** &#039;&#039;&#039;value_path&#039;&#039;&#039;: An array of property names to traverse within the root object.&lt;br /&gt;
&lt;br /&gt;
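As an illustration, the following hypothetical input mapping (the property and resource names are illustrative) combines both forms: &#039;&#039;limits&#039;&#039; is mapped directly from the &#039;&#039;role_limits&#039;&#039; property of the previous transformation&#039;s result, while &#039;&#039;role_name&#039;&#039; is extracted from inside the &#039;&#039;resource_to_role_map&#039;&#039; object using a value_path:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
&amp;quot;input&amp;quot;: #{&lt;br /&gt;
  &amp;quot;limits&amp;quot;: &amp;quot;role_limits&amp;quot;,&lt;br /&gt;
  &amp;quot;role_name&amp;quot;: #{&lt;br /&gt;
    &amp;quot;input&amp;quot;: &amp;quot;resource_to_role_map&amp;quot;,&lt;br /&gt;
    &amp;quot;value_path&amp;quot;: [&amp;quot;Tina&amp;quot;]&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;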
=== Transformation: enforce_resource_limits ===&lt;br /&gt;
Using the given input data, this transformation generates a new event log that does not exceed the concurrency limits of the specified column values.&lt;br /&gt;
&lt;br /&gt;
Event rows are traversed in time order. If outputting an event would exceed a limit, then instead of outputting the actual event, a copy of it (with the event properties copied) is created to represent the queue for the actual event.&lt;br /&gt;
&lt;br /&gt;
Only after an event leaves the column value that has a queue is the event that has waited longest in the queue generated (following the FIFO principle).&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
* &#039;&#039;&#039;column&#039;&#039;&#039;: Name of the column having the values whose concurrent usage is to be limited by the specified limits.&lt;br /&gt;
* &#039;&#039;&#039;limits&#039;&#039;&#039;: Specifies an object containing key-value pairs where keys are column values and values are integers specifying the maximum number of concurrent cases in the given event log that can contain the given value.&lt;br /&gt;
* &#039;&#039;&#039;queue_event_activity_name&#039;&#039;&#039;: If set, specifies the name template used for queue-events. In this template, when a queue event is created, %s is replaced with the name of the activity this queue event is queuing to.&lt;br /&gt;
** If not set, the activity name is not altered at all for the queue event.&lt;br /&gt;
* &#039;&#039;&#039;queue_event_column&#039;&#039;&#039;: &lt;br /&gt;
** If queue_event_activity_name is set:&lt;br /&gt;
*** If the event represents a queue-event, the value in this column specifies the name of the queue-activity.&lt;br /&gt;
*** Otherwise, the value is null.&lt;br /&gt;
** If queue_event_activity_name is not set:&lt;br /&gt;
*** If the event represents a queue-event, the value in this column is True.&lt;br /&gt;
*** Otherwise, the value is False.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Event log with resource limits enforced.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;enforce_resource_limits&amp;quot;,&lt;br /&gt;
  &amp;quot;queue_event_column&amp;quot;: &amp;quot;Queue&amp;quot;,&lt;br /&gt;
  &amp;quot;queue_event_activity_name&amp;quot;: &amp;quot;%s - Queue&amp;quot;,&lt;br /&gt;
  &amp;quot;limits&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Role 3&amp;quot;: None&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;input&amp;quot;: #{&lt;br /&gt;
    &amp;quot;limits&amp;quot;: &amp;quot;role_limits&amp;quot;,&lt;br /&gt;
    &amp;quot;column&amp;quot;: &amp;quot;role_column&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: extract_max_resource_usages ===&lt;br /&gt;
Extracts, for every value of a specified column, the maximum number of concurrent cases in the given event log that have that value.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;resource_column&#039;&#039;&#039;:&lt;br /&gt;
** The name of the column representing the resources whose maximum concurrent case usages are to be calculated.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
&lt;br /&gt;
* max_resource_usages:&lt;br /&gt;
** A dictionary object containing resource names as keys (unique resource_column values) and their maximum usage in the event log.&lt;br /&gt;
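&lt;br /&gt;
For example, assuming the event log contains resources named &amp;quot;Tina&amp;quot; and &amp;quot;Tom&amp;quot; (the names and counts below are illustrative), the output could have the following form:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;max_resource_usages&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Tina&amp;quot;: 3,&lt;br /&gt;
    &amp;quot;Tom&amp;quot;: 1&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;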
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;extract_max_resource_usages&amp;quot;,&lt;br /&gt;
  &amp;quot;resource_column&amp;quot;: &amp;quot;SAP_User&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: generate ===&lt;br /&gt;
Generate a new event log using the configured model [[Create Predicted Eventlog|prediction generation parameters (GenerationConfiguration)]].&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
Supports all the same parameters as those supported by model prediction generation configuration.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Does not support inputs.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Generated event log.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;generate&amp;quot;,&lt;br /&gt;
  &amp;quot;model_name&amp;quot;: &amp;quot;ML model&amp;quot;,&lt;br /&gt;
  &amp;quot;cases_to_generate&amp;quot;: 100,&lt;br /&gt;
  &amp;quot;max_num_events&amp;quot;: 20&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: modify_flow_durations ===&lt;br /&gt;
Modify durations of flows and possibly remove events having specific flows.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;column&#039;&#039;&#039;: The name of the column based on which the flows are created. Usually this is the column containing activities, but could also be, e.g., organization units, users, …&lt;br /&gt;
* &#039;&#039;&#039;flows&#039;&#039;&#039;: Flows to transform. Contains an array of flow transformation configuration objects. Each object defines transformations performed on one flow type defined by starting and ending column values. Supports the following properties:&lt;br /&gt;
** &#039;&#039;&#039;delete&#039;&#039;&#039;: Same as delete_from.&lt;br /&gt;
** &#039;&#039;&#039;delete_from&#039;&#039;&#039;: If defined, specifies whether the &amp;quot;from event&amp;quot; of the matched flow should be removed after applying the operation.&lt;br /&gt;
** &#039;&#039;&#039;delete_to&#039;&#039;&#039;: If defined, specifies whether the &amp;quot;to event&amp;quot; of the matched flow should be removed after applying the operation.&lt;br /&gt;
** &#039;&#039;&#039;from&#039;&#039;&#039;: Column value starting the flow.&lt;br /&gt;
*** If this and from_input are both undefined, any starting value is accepted.&lt;br /&gt;
** &#039;&#039;&#039;from_input&#039;&#039;&#039;: If defined, specifies the name of the transformation-level parameter from which the actual column value starting the flow is read from.&lt;br /&gt;
*** Overrides the value defined in from-parameter.&lt;br /&gt;
** &#039;&#039;&#039;operation&#039;&#039;&#039;: Specifies the actual flow duration modification operation to perform as value modification configuration object where the value is the duration in seconds. Supports the following properties:&lt;br /&gt;
*** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the operation to any matching instance of the flow.&lt;br /&gt;
**** Value should be a numeric value between 0 and 1.0. &lt;br /&gt;
**** This probability applies only to this operation.&lt;br /&gt;
**** The default value is 1.0.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the operation. The following types are supported:&lt;br /&gt;
**** &#039;&#039;&#039;add&#039;&#039;&#039;: Sets the value to be the current value plus the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;multiply&#039;&#039;&#039;: Sets the value to be the current value multiplied by the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;set_value&#039;&#039;&#039;: Sets the value to be exactly the number specified by the value.&lt;br /&gt;
*** &#039;&#039;&#039;value&#039;&#039;&#039;: Value used by the operation.&lt;br /&gt;
** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the operation to any matching instance of the flow.&lt;br /&gt;
*** Value should be a numeric value between 0 and 1.0.&lt;br /&gt;
*** This probability applies, in addition to the operation specified by the operation-parameter, also to any possible other transformations, such as event deletion.&lt;br /&gt;
*** The default value is 1.0.&lt;br /&gt;
** &#039;&#039;&#039;to&#039;&#039;&#039;: Column value ending the flow.&lt;br /&gt;
*** If this and to_input are both undefined, any ending value is accepted.&lt;br /&gt;
** &#039;&#039;&#039;to_input&#039;&#039;&#039;: If defined, specifies the name of the transformation-level parameter from which the actual column value ending the flow is read.&lt;br /&gt;
*** Overrides the value defined in to-parameter.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Transformed event log.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
  &amp;quot;column&amp;quot;: &amp;quot;Organization&amp;quot;,&lt;br /&gt;
  &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
    &amp;quot;from&amp;quot;: &amp;quot;Delivery&amp;quot;,&lt;br /&gt;
    &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
      &amp;quot;value&amp;quot;: 0.0&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;delete&amp;quot;: true&lt;br /&gt;
  }]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: modify_values ===&lt;br /&gt;
Modify values of an object given as input (e.g., object generated by extract_max_resource_usages).&lt;br /&gt;
&lt;br /&gt;
Due to the required inputs, this transformation can&#039;t be the first transformation to perform.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;values&#039;&#039;&#039;: Array of value configuration objects. Each object supports the following properties:&lt;br /&gt;
** &#039;&#039;&#039;input&#039;&#039;&#039;: Name of the result to modify, where the result is the output of the previous transformation.&lt;br /&gt;
** &#039;&#039;&#039;input_key_from&#039;&#039;&#039;: If defined, specifies the name of the property of an input object whose value contains the name of the property to be modified.&lt;br /&gt;
** &#039;&#039;&#039;input_key_value_path&#039;&#039;&#039;: If input_key_from is defined and is represented as an object, this configuration should specify an array of property names to traverse into the object. &lt;br /&gt;
*** The value at the end of this path will be used as the name of the property to modify in the input.&lt;br /&gt;
** &#039;&#039;&#039;operation&#039;&#039;&#039;: Specifies the actual value modification operation to perform as value modification configuration object. Supports the following properties:&lt;br /&gt;
*** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the operation to the value.&lt;br /&gt;
**** Value should be a numeric value between 0 and 1.0. &lt;br /&gt;
**** This probability applies only to this operation.&lt;br /&gt;
**** The default value is 1.0.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the operation. The following types are supported:&lt;br /&gt;
**** &#039;&#039;&#039;add&#039;&#039;&#039;: Sets the value to be the current value plus the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;multiply&#039;&#039;&#039;: Sets the value to be the current value multiplied by the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;set_value&#039;&#039;&#039;: Sets the value to be exactly the number specified by the value.&lt;br /&gt;
*** &#039;&#039;&#039;value&#039;&#039;&#039;: Value used by the operation.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Output of the previous transformation operation. &lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
The same output as the previous performed transformation, except with the specified value modifications applied.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;modify_values&amp;quot;,&lt;br /&gt;
  &amp;quot;values&amp;quot;: [#{&lt;br /&gt;
    &amp;quot;input&amp;quot;: &amp;quot;role_limits&amp;quot;,&lt;br /&gt;
    &amp;quot;input_key_from&amp;quot;: &amp;quot;resource_to_role_map&amp;quot;,&lt;br /&gt;
    &amp;quot;input_key_value_path&amp;quot;: [&amp;quot;Tina&amp;quot;],&lt;br /&gt;
    &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;multiply&amp;quot;,&lt;br /&gt;
      &amp;quot;value&amp;quot;: 0.5&lt;br /&gt;
    }&lt;br /&gt;
  }]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: resources_to_roles ===&lt;br /&gt;
Performs &amp;quot;organization mining&amp;quot; by grouping together column values (e.g., resources) that are used in a similar fashion in the given event log, e.g., resources that are often present in a similar set of activities.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;resource_column&#039;&#039;&#039;: The name of the column containing names of resources.&lt;br /&gt;
* &#039;&#039;&#039;resource_limits&#039;&#039;&#039;: A dictionary object containing resource names as keys and their maximum concurrent usages as values.&lt;br /&gt;
** If set, when building role_limits output, these values will be summed for each resource into the resulting role-based usage limit.&lt;br /&gt;
** If not set, each resource in a role will be counted as one, when calculating the role_limits.&lt;br /&gt;
* &#039;&#039;&#039;role_column&#039;&#039;&#039;: The name of the column to be created, whose values indicate the role to which the resource belongs.&lt;br /&gt;
* &#039;&#039;&#039;role_name_template&#039;&#039;&#039;: If set, specifies the name template used for role names. In this template, %d will be replaced by a numeric value starting from 1. &lt;br /&gt;
** The default value is &amp;quot;Role %d&amp;quot;.&lt;br /&gt;
* &#039;&#039;&#039;similarity_threshold&#039;&#039;&#039;: The minimum value of the Pearson correlation coefficient calculated between two resources for them to be considered as having the same role.&lt;br /&gt;
** The default value is 0.7.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
&lt;br /&gt;
* Transformed event log.&lt;br /&gt;
* Result dictionary object containing the following properties:&lt;br /&gt;
** &#039;&#039;&#039;resource_column&#039;&#039;&#039;: The name of the column containing names of resources.&lt;br /&gt;
** &#039;&#039;&#039;resource_to_role_map&#039;&#039;&#039;: An object containing resource names as property names and role names as values.&lt;br /&gt;
** &#039;&#039;&#039;role_column&#039;&#039;&#039;: The name of the generated column whose values indicate the role to which the resource belongs.&lt;br /&gt;
** &#039;&#039;&#039;role_limits&#039;&#039;&#039;: An object containing role names as property names and the maximum usage for each role as values.&lt;br /&gt;
&lt;br /&gt;
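As an illustration, assuming resources &amp;quot;Tina&amp;quot; and &amp;quot;Tom&amp;quot; end up grouped into the same role (all names and limits below are hypothetical), the result dictionary could look as follows:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;resource_column&amp;quot;: &amp;quot;SAP_User&amp;quot;,&lt;br /&gt;
  &amp;quot;role_column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
  &amp;quot;resource_to_role_map&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Tina&amp;quot;: &amp;quot;Role 1&amp;quot;,&lt;br /&gt;
    &amp;quot;Tom&amp;quot;: &amp;quot;Role 1&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;role_limits&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Role 1&amp;quot;: 4&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;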
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;resources_to_roles&amp;quot;,&lt;br /&gt;
  &amp;quot;resource_column&amp;quot;: &amp;quot;SAP_User&amp;quot;,&lt;br /&gt;
  &amp;quot;role_column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
  &amp;quot;role_name_template&amp;quot;: &amp;quot;Role %d&amp;quot;,&lt;br /&gt;
  &amp;quot;input&amp;quot;: #{&lt;br /&gt;
    &amp;quot;resource_limits&amp;quot;: &amp;quot;max_resource_usages&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Create_Simulated_Eventlog&amp;diff=26795</id>
		<title>Create Simulated Eventlog</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Create_Simulated_Eventlog&amp;diff=26795"/>
		<updated>2025-08-22T13:50:46Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Transformation: modify_flow_durations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This article provides instructions for installing, configuring, and using eventlog simulations. The simulation creates a new model that contains both the source model data and the new simulated data. The case attribute &#039;&#039;&#039;Simulated&#039;&#039;&#039; indicates whether the case comes from the source data (&#039;&#039;false&#039;&#039;) or was generated by the simulation as a new simulated case (&#039;&#039;true&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
== Prerequisites for simulation ==&lt;br /&gt;
As a prerequisite, prediction must be installed into the Snowflake environment in use, as described in [[Create Predicted Eventlog|Install prediction in Snowflake]].&lt;br /&gt;
&lt;br /&gt;
== Create simulation script in QPR ProcessAnalyzer ==&lt;br /&gt;
You can create your own simulation script from scratch, or use one of the scripts provided below as a starting point.&lt;br /&gt;
&lt;br /&gt;
=== Creating a simulation model that deletes specific events ===&lt;br /&gt;
1. Create the following example expression script (e.g., with name &amp;quot;&#039;&#039;&#039;Create simulation model&#039;&#039;&#039; &#039;&#039;&#039;- delete events&#039;&#039;&#039;&amp;quot;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let flowFromEventType = &amp;quot;&amp;lt;from event type of a flow to modify&amp;gt;&amp;quot;, flowToEventType = &amp;quot;&amp;lt;to event type of a flow to modify&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
_system.ML.ApplyTransformations(#{&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My simulation model - delete&amp;quot;,   // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,               // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,           // Target project to create the model into.&lt;br /&gt;
  &amp;quot;Transformations&amp;quot;: [#{                    // Transformation configurations.&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
    &amp;quot;column&amp;quot;: sourceModel.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
    &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;from&amp;quot;: flowFromEventType,&lt;br /&gt;
      &amp;quot;to&amp;quot;: flowToEventType,&lt;br /&gt;
      &amp;quot;probability&amp;quot;: 1.0,&lt;br /&gt;
      &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
        &amp;quot;value&amp;quot;: 0.0&lt;br /&gt;
      },&lt;br /&gt;
      &amp;quot;delete&amp;quot;: true&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure simulation for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this model serves as the source data modified by the simulation transformations.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;from event type of a flow to modify&amp;gt;&#039;&#039;&#039;: From-event type name of flows from which the from-event is to be deleted.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;to event type of a flow to modify&amp;gt;&#039;&#039;&#039;: To-event type name of flows from which the from-event is to be deleted.&lt;br /&gt;
&lt;br /&gt;
=== Creating a simulation model that automates all resources belonging to the same role as a specified resource ===&lt;br /&gt;
1. Create the following example expression script (e.g., with name &amp;quot;&#039;&#039;&#039;Create simulation model&#039;&#039;&#039; &#039;&#039;&#039;- automate&#039;&#039;&#039;&amp;quot;):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sourceModel = ProjectByName(&amp;quot;&amp;lt;project name&amp;gt;&amp;quot;).ModelByName(&amp;quot;&amp;lt;model name&amp;gt;&amp;quot;);&lt;br /&gt;
let resourceColumnName = &amp;quot;&amp;lt;event data column having resource names&amp;gt;&amp;quot;;&lt;br /&gt;
let resourceNameToAutomate = &amp;quot;&amp;lt;resource value whose role should be automated&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let targetProject = Project;&lt;br /&gt;
_system.ML.ApplyTransformations(#{&lt;br /&gt;
  &amp;quot;PredictionProcedureName&amp;quot;: &amp;quot;qprpa_sp_prediction&amp;quot;,&lt;br /&gt;
  &amp;quot;Name&amp;quot;: &amp;quot;My simulation model - automate&amp;quot;,   // Name of the PA model to generate to the target project.&lt;br /&gt;
  &amp;quot;SourceModel&amp;quot;: sourceModel,                 // Snowflake-based PA model used for training the prediction model.&lt;br /&gt;
  &amp;quot;TargetProject&amp;quot;: targetProject,             // Target project to create the model into.&lt;br /&gt;
  &amp;quot;Transformations&amp;quot;: [#{                      // Transformation configurations&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;resources_to_roles&amp;quot;,&lt;br /&gt;
    &amp;quot;resource_column&amp;quot;: resourceColumnName,&lt;br /&gt;
    &amp;quot;role_column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
    &amp;quot;role_name_template&amp;quot;: &amp;quot;Role %d&amp;quot;&lt;br /&gt;
  }, #{&lt;br /&gt;
    &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
    &amp;quot;input&amp;quot;: #{&lt;br /&gt;
      &amp;quot;role_name&amp;quot;: #{&lt;br /&gt;
        &amp;quot;input&amp;quot;: &amp;quot;resource_to_role_map&amp;quot;,&lt;br /&gt;
        &amp;quot;value_path&amp;quot;: [resourceNameToAutomate]&lt;br /&gt;
      }&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
    &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
      &amp;quot;from_input&amp;quot;: &amp;quot;role_name&amp;quot;,&lt;br /&gt;
      &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
        &amp;quot;value&amp;quot;: 1,&lt;br /&gt;
        &amp;quot;probability&amp;quot;: 0.5&lt;br /&gt;
      }&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;2. Configure simulation for the previously created script as instructed in the next chapter. At minimum, replace the tags listed below with some suitable values:&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;project name&amp;gt;&#039;&#039;&#039;: Name of the project in which the source model is located.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;model name&amp;gt;&#039;&#039;&#039;: Name of the model to be used as the source model. The data in this source model will be used as the source data to be modified by the simulation transformations.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;event data column having resource names&amp;gt;&#039;&#039;&#039;: Name of the event data column that contains resource names.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;resource value whose role should be automated&amp;gt;&#039;&#039;&#039;: Name of the resource whose role is to be automated.&lt;br /&gt;
&lt;br /&gt;
== Configure simulation ==&lt;br /&gt;
Simulation script has the following settings in the ApplyTransformations call:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;: Name of the QPR ProcessAnalyzer model that is created to the target project. The model will contain the source model content and the predictions.&lt;br /&gt;
* &#039;&#039;&#039;SourceModel&#039;&#039;&#039;: Source model for which the simulation is made. Model can be selected for example based on id with ModelById function or by name with ModelByName function.&lt;br /&gt;
* &#039;&#039;&#039;TargetProject&#039;&#039;&#039;: Target project to create the new model into.&lt;br /&gt;
* &#039;&#039;&#039;Transformations&#039;&#039;&#039;: Array of transformation configuration objects. Each object supports the following parameters:&lt;br /&gt;
** &#039;&#039;&#039;type&#039;&#039;&#039;: Defines the type of the transformation to perform. See below for more details on the supported transformations. Supported values are:&lt;br /&gt;
*** &#039;&#039;&#039;enforce_resource_limits&#039;&#039;&#039;: Used to modify the given event log so that the given maximum resource limits are not exceeded.&lt;br /&gt;
*** &#039;&#039;&#039;extract_max_resource_usages&#039;&#039;&#039;: Used to extract, for every value of a specified column, the maximum number of concurrent cases in given event log that have that value. &lt;br /&gt;
*** &#039;&#039;&#039;generate&#039;&#039;&#039;: Used to generate a new event log using a trained ML model.&lt;br /&gt;
*** &#039;&#039;&#039;modify_flow_durations&#039;&#039;&#039;: Used to modify durations of flows and possibly remove events having specific flows.&lt;br /&gt;
*** &#039;&#039;&#039;modify_values&#039;&#039;&#039;: Used to modify values of a dictionary given as input (e.g., dictionary generated by extract_max_resource_usages).&lt;br /&gt;
*** &#039;&#039;&#039;resources_to_roles&#039;&#039;&#039;: Performs &amp;quot;organization mining&amp;quot; by trying to group together column values (e.g., resources) that are used in similar fashion in given event log (e.g., resources that are often present in similar set of activities).&lt;br /&gt;
**&#039;&#039;&#039;input&#039;&#039;&#039;: Can be used to specify that given transformation input parameters get their values from the previous transformation result.&lt;br /&gt;
***Value can be either a direct mapping (just the name of the transformation result property), or a value mapping configuration object that supports the following parameters:&lt;br /&gt;
****&#039;&#039;&#039;input&#039;&#039;&#039;: Name of the parameter to get from the previous transformation result as the root object of the actual value to extract.&lt;br /&gt;
****&#039;&#039;&#039;value_path&#039;&#039;&#039;: An array of property names to traverse into the root object.&lt;br /&gt;
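The direct and path-based mappings can be illustrated with a small Python sketch. This is a conceptual illustration only; the function name and data shapes are hypothetical and not part of the product API:&lt;br /&gt;

```python
def resolve_input(previous_result, mapping):
    """Sketch of input mapping resolution.

    mapping is either a plain property name ("direct mapping"), or a
    dict with "input" (the root property of the previous transformation
    result) and "value_path" (keys to traverse into that root object).
    """
    if isinstance(mapping, str):               # direct mapping
        return previous_result[mapping]
    value = previous_result[mapping["input"]]
    for key in mapping.get("value_path", []):  # walk down the path
        value = value[key]
    return value
```

For example, a mapping of `#{"input": "resource_to_role_map", "value_path": [resourceName]}` would resolve to the role name of a single resource in the previous transformation's result.&lt;br /&gt;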
&lt;br /&gt;
=== Transformation: enforce_resource_limits ===&lt;br /&gt;
Using the given input data, this transformation generates a new event log that does not exceed the concurrency limits specified for column values.&lt;br /&gt;
&lt;br /&gt;
Event rows are traversed in time order. If at some point a limit would be exceeded, the actual event is not output; instead, a copy of it (with copied event properties) is created to represent the queue for that event.&lt;br /&gt;
&lt;br /&gt;
When an event leaves a column value that has a queue, the event that has been waiting the longest in that queue is generated next (following the FIFO principle).&lt;br /&gt;
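The queueing rule described above can be sketched in Python. This is a conceptual illustration only, not the product implementation; the event tuple shape (&amp;quot;enter&amp;quot;/&amp;quot;leave&amp;quot; records) is an assumption made for the example:&lt;br /&gt;

```python
from collections import deque, defaultdict

def enforce_limits(events, limits):
    """Conceptual sketch of FIFO capacity enforcement.

    events: list of ("enter"/"leave", timestamp, case_id, resource)
    tuples in time order; limits maps resource -> max concurrent cases.
    Returns output records: normal events plus synthetic "queue"
    records for cases that had to wait.
    """
    active = defaultdict(set)      # resource -> cases currently using it
    waiting = defaultdict(deque)   # resource -> FIFO queue of cases
    out = []
    for kind, ts, case, res in events:
        cap = limits.get(res)
        if kind == "enter":
            if cap is not None and len(active[res]) >= cap:
                waiting[res].append(case)         # capacity full: queue
                out.append(("queue", ts, case, res))
            else:
                active[res].add(case)
                out.append(("event", ts, case, res))
        else:  # leave
            active[res].discard(case)
            if waiting[res]:                      # admit longest waiter
                nxt = waiting[res].popleft()
                active[res].add(nxt)
                out.append(("event", ts, nxt, res))
    return out
```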
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
* &#039;&#039;&#039;column&#039;&#039;&#039;: Name of the column having the values whose concurrent usage is to be limited by specified limits&lt;br /&gt;
* &#039;&#039;&#039;limits&#039;&#039;&#039;: An object containing key-value pairs, where each key is a column value and each value is an integer specifying the maximum number of concurrent cases in the given event log that can have that column value.&lt;br /&gt;
* &#039;&#039;&#039;queue_event_activity_name&#039;&#039;&#039;: If set, specifies the name template used for queue-events. In this template, when a queue event is created, %s is replaced with the name of the activity this queue event is queuing to.&lt;br /&gt;
** If not set, the activity name is not altered at all for the queue event.&lt;br /&gt;
* &#039;&#039;&#039;queue_event_column&#039;&#039;&#039;: &lt;br /&gt;
** If queue_event_activity_name is set:&lt;br /&gt;
*** If the event represents a queue-event, the value in this column specifies the name of the queue-activity.&lt;br /&gt;
*** Otherwise, the value is null.&lt;br /&gt;
** If queue_event_activity_name is not set:&lt;br /&gt;
*** If the event represents a queue-event, the value in this column is True.&lt;br /&gt;
*** Otherwise, the value is False.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Event log with resource limits enforced.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;enforce_resource_limits&amp;quot;,&lt;br /&gt;
  &amp;quot;queue_event_column&amp;quot;: &amp;quot;Queue&amp;quot;,&lt;br /&gt;
  &amp;quot;queue_event_activity_name&amp;quot;: &amp;quot;%s - Queue&amp;quot;,&lt;br /&gt;
  &amp;quot;limits&amp;quot;: #{&lt;br /&gt;
    &amp;quot;Role 3&amp;quot;: None&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;input&amp;quot;: #{&lt;br /&gt;
    &amp;quot;limits&amp;quot;: &amp;quot;role_limits&amp;quot;,&lt;br /&gt;
    &amp;quot;column&amp;quot;: &amp;quot;role_column&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: extract_max_resource_usages ===&lt;br /&gt;
Extract, for every value of a specified column, the maximum number of concurrent cases in given event log that have that value.&lt;br /&gt;
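This corresponds to a peak-concurrency sweep over the time spans during which cases occupy each resource. A hedged Python sketch of the idea (the interval representation is an assumption made for illustration, not the product's data model):&lt;br /&gt;

```python
from collections import defaultdict

def max_resource_usages(intervals):
    """Sketch: maximum number of concurrent cases per resource.

    intervals: list of (resource, start, end) tuples, each being the
    span during which one case occupies that resource.
    Returns a dict mapping resource -> peak concurrency.
    """
    deltas = defaultdict(list)
    for res, start, end in intervals:
        deltas[res].append((start, 1))    # case starts using resource
        deltas[res].append((end, -1))     # case releases resource
    peaks = {}
    for res, evs in deltas.items():
        evs.sort()                        # at equal times, -1 sorts first
        cur = peak = 0
        for _, d in evs:
            cur += d
            peak = max(peak, cur)
        peaks[res] = peak
    return peaks
```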
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;resource_column&#039;&#039;&#039;:&lt;br /&gt;
** The name of the column representing the resources whose maximum concurrent case usages are to be calculated.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
&lt;br /&gt;
* max_resource_usages:&lt;br /&gt;
** A dictionary object containing resource names as keys (unique resource_column values) and their maximum usage in the event log.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;extract_max_resource_usages&amp;quot;,&lt;br /&gt;
  &amp;quot;resource_column&amp;quot;: &amp;quot;SAP_User&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: generate ===&lt;br /&gt;
Generate a new event log using the configured model [[Create Predicted Eventlog|prediction generation parameters (GenerationConfiguration)]].&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
Supports all the same parameters as those supported by model prediction generation configuration.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Does not support inputs.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Generated event log.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;generate&amp;quot;,&lt;br /&gt;
  &amp;quot;model_name&amp;quot;: &amp;quot;ML model&amp;quot;,&lt;br /&gt;
  &amp;quot;cases_to_generate&amp;quot;: 100,&lt;br /&gt;
  &amp;quot;max_num_events&amp;quot;: 20&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: modify_flow_durations ===&lt;br /&gt;
Modify durations of flows and possibly remove events having specific flows.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;column&#039;&#039;&#039;: The name of the column based on which the flows are created. Usually this is the column containing activities, but could also be, e.g., organization units, users, …&lt;br /&gt;
* &#039;&#039;&#039;flows&#039;&#039;&#039;: Flows to transform. Contains an array of flow transformation configuration objects. Each object defines transformations performed on one flow type defined by starting and ending column values. Supports the following properties:&lt;br /&gt;
** &#039;&#039;&#039;delete&#039;&#039;&#039;: Same as delete_from.&lt;br /&gt;
** &#039;&#039;&#039;delete_from&#039;&#039;&#039;: If defined, specifies whether the &amp;quot;from event&amp;quot; of the matched flow should be removed after applying the operation.&lt;br /&gt;
** &#039;&#039;&#039;delete_to&#039;&#039;&#039;: If defined, specifies whether the &amp;quot;to event&amp;quot; of the matched flow should be removed after applying the operation.&lt;br /&gt;
** &#039;&#039;&#039;from&#039;&#039;&#039;: Column value starting the flow.&lt;br /&gt;
*** If this and from_input are both undefined, any starting value is accepted.&lt;br /&gt;
** &#039;&#039;&#039;from_input&#039;&#039;&#039;: If defined, specifies the name of the transformation-level parameter from which the actual column value starting the flow is read from.&lt;br /&gt;
*** Overrides the value defined in from-parameter.&lt;br /&gt;
** &#039;&#039;&#039;operation&#039;&#039;&#039;: Specifies the actual flow duration modification operation to perform as value modification configuration object where the value is the duration in seconds. Supports the following properties:&lt;br /&gt;
*** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the operation to any matching instance of the flow. &lt;br /&gt;
**** Value should be a numeric value between 0 and 1.0. &lt;br /&gt;
**** This probability applies only to this operation.&lt;br /&gt;
**** The default value is 1.0.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the operation. The following types are supported:&lt;br /&gt;
**** &#039;&#039;&#039;add&#039;&#039;&#039;: Sets the value to be the current value plus the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;multiply&#039;&#039;&#039;: Sets the value to be the current value multiplied by the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;set_value&#039;&#039;&#039;: Sets the value to be exactly the number specified by the value.&lt;br /&gt;
*** &#039;&#039;&#039;value&#039;&#039;&#039;: Value used by the operation.&lt;br /&gt;
** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the transformation to any matching instance of the flow.&lt;br /&gt;
*** Value should be a numeric value between 0 and 1.0.&lt;br /&gt;
*** This probability applies, in addition to the operation specified by the operation-parameter, also to any other possible transformations, such as event deletion.&lt;br /&gt;
*** The default value is 1.0.&lt;br /&gt;
** &#039;&#039;&#039;to&#039;&#039;&#039;: Column value ending the flow.&lt;br /&gt;
*** If this and to_input are both undefined, any ending value is accepted.&lt;br /&gt;
** &#039;&#039;&#039;to_input&#039;&#039;&#039;: If defined, specifies the name of the transformation-level parameter from which the actual column value ending the flow is read.&lt;br /&gt;
*** Overrides the value defined in to-parameter.&lt;br /&gt;
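The operation semantics above amount to simple arithmetic on the flow duration, optionally skipped based on the probability. A minimal Python sketch (the function name and the rng parameter, which stands in for the random draw, are illustrative assumptions):&lt;br /&gt;

```python
import random

def apply_operation(duration, op, rng=random.random):
    """Sketch: apply one duration-modification operation.

    op: {"type": "add"|"multiply"|"set_value", "value": n,
         "probability": p}  (probability defaults to 1.0)
    """
    if rng() >= op.get("probability", 1.0):
        return duration                    # this instance is skipped
    kind, value = op["type"], op["value"]
    if kind == "add":
        return duration + value
    if kind == "multiply":
        return duration * value
    if kind == "set_value":
        return value
    raise ValueError(f"unknown operation type: {kind}")
```

Note that random.random() returns values in [0, 1), so with the default probability of 1.0 the operation is always applied.&lt;br /&gt;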
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
Transformed event log.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;modify_flow_durations&amp;quot;,&lt;br /&gt;
  &amp;quot;column&amp;quot;: &amp;quot;Organization&amp;quot;,&lt;br /&gt;
  &amp;quot;flows&amp;quot;: [#{&lt;br /&gt;
    &amp;quot;from&amp;quot;: &amp;quot;Delivery&amp;quot;,&lt;br /&gt;
    &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;set_value&amp;quot;,&lt;br /&gt;
      &amp;quot;value&amp;quot;: 0.0&lt;br /&gt;
    },&lt;br /&gt;
    &amp;quot;delete&amp;quot;: true&lt;br /&gt;
  }]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: modify_values ===&lt;br /&gt;
Modify values of an object given as input (e.g., object generated by extract_max_resource_usages).&lt;br /&gt;
&lt;br /&gt;
Due to the required inputs, this transformation can&#039;t be the first transformation to perform.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;values&#039;&#039;&#039;: Array of value configuration objects. Each object supports the following properties:&lt;br /&gt;
** &#039;&#039;&#039;input&#039;&#039;&#039;: Name of the result to modify, where the result is the output of the previous transformation.&lt;br /&gt;
** &#039;&#039;&#039;input_key_from&#039;&#039;&#039;: If defined, specifies the name of the property of an input object whose value contains the name of the property whose value is to be modified.&lt;br /&gt;
** &#039;&#039;&#039;input_key_value_path&#039;&#039;&#039;: If input_key_from is defined and is represented as an object, this configuration should specify an array of property names to traverse into the object. &lt;br /&gt;
*** The value at the end of this path will be used as the name of the property to modify in the input.&lt;br /&gt;
** &#039;&#039;&#039;operation&#039;&#039;&#039;: Specifies the actual value modification operation to perform as value modification configuration object. Supports the following properties:&lt;br /&gt;
*** &#039;&#039;&#039;probability&#039;&#039;&#039;: If defined, specifies the probability of applying the operation to the value. &lt;br /&gt;
**** Value should be a numeric value between 0 and 1.0. &lt;br /&gt;
**** This probability applies only to this operation.&lt;br /&gt;
**** The default value is 1.0.&lt;br /&gt;
*** &#039;&#039;&#039;type&#039;&#039;&#039;: Type of the operation. The following types are supported:&lt;br /&gt;
**** &#039;&#039;&#039;add&#039;&#039;&#039;: Sets the value to be the current value plus the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;multiply&#039;&#039;&#039;: Sets the value to be the current value multiplied by the number specified by the value.&lt;br /&gt;
**** &#039;&#039;&#039;set_value&#039;&#039;&#039;: Sets the value to be exactly the number specified by the value.&lt;br /&gt;
*** &#039;&#039;&#039;value&#039;&#039;&#039;: Value used by the operation.&lt;br /&gt;
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Output of the previous transformation operation. &lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
The same output as the previous performed transformation, except with the specified value modifications applied.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;modify_values&amp;quot;,&lt;br /&gt;
  &amp;quot;values&amp;quot;: [#{&lt;br /&gt;
    &amp;quot;input&amp;quot;: &amp;quot;role_limits&amp;quot;,&lt;br /&gt;
    &amp;quot;input_key_from&amp;quot;: &amp;quot;resource_to_role_map&amp;quot;,&lt;br /&gt;
    &amp;quot;input_key_value_path&amp;quot;: [&amp;quot;Tina&amp;quot;],&lt;br /&gt;
    &amp;quot;operation&amp;quot;: #{&lt;br /&gt;
      &amp;quot;type&amp;quot;: &amp;quot;multiply&amp;quot;,&lt;br /&gt;
      &amp;quot;value&amp;quot;: 0.5&lt;br /&gt;
    }&lt;br /&gt;
  }]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transformation: resources_to_roles ===&lt;br /&gt;
Performs &amp;quot;organization mining&amp;quot; by grouping together column values (e.g., resources) that are used in a similar fashion in the given event log, e.g., resources that are often present in a similar set of activities.&lt;br /&gt;
&lt;br /&gt;
==== Supported parameters ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;resource_column&#039;&#039;&#039;: The name of the column containing names of resources.&lt;br /&gt;
* &#039;&#039;&#039;resource_limits&#039;&#039;&#039;: A dictionary object mapping resource names to their maximum concurrent usages.&lt;br /&gt;
** If set, when building role_limits output, these values will be summed for each resource into the resulting role-based usage limit.&lt;br /&gt;
** If not set, each resource in a role will be counted as one, when calculating the role_limits.&lt;br /&gt;
* &#039;&#039;&#039;role_column&#039;&#039;&#039;: The name of the column to be created and whose values will indicate the role to which the resource belongs.&lt;br /&gt;
* &#039;&#039;&#039;role_name_template&#039;&#039;&#039;: If set, specifies the name template used for role names. In this template, %d will be replaced by a numeric value starting from 1. &lt;br /&gt;
** The default value is &amp;quot;Role %d&amp;quot;.&lt;br /&gt;
* &#039;&#039;&#039;similarity_threshold&#039;&#039;&#039;: The minimum value of the Pearson correlation coefficient calculated between two resources for them to be considered to have the same role.&lt;br /&gt;
** The default value is 0.7.&lt;br /&gt;
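Conceptually, resources are grouped by the similarity of their activity-usage profiles. The following simplified Python sketch illustrates this idea with a greedy grouping over Pearson correlations; it is an illustration of the principle only, and the actual mining algorithm may differ:&lt;br /&gt;

```python
import math
from collections import defaultdict

def resources_to_roles(events, threshold=0.7):
    """Sketch: group resources whose activity-usage profiles
    correlate above a Pearson threshold.

    events: list of (resource, activity) pairs.
    Returns a dict mapping resource -> role name ("Role %d" style).
    """
    activities = sorted({a for _, a in events})
    idx = {a: i for i, a in enumerate(activities)}
    counts = defaultdict(lambda: [0] * len(activities))
    for res, act in events:                 # activity-count vector per resource
        counts[res][idx[act]] += 1

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    roles, mapping = [], {}
    for res, vec in counts.items():
        for i, members in enumerate(roles): # join first sufficiently similar role
            if pearson(vec, counts[members[0]]) >= threshold:
                members.append(res)
                mapping[res] = f"Role {i + 1}"
                break
        else:                               # no similar role found: new role
            roles.append([res])
            mapping[res] = f"Role {len(roles)}"
    return mapping
```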
&lt;br /&gt;
==== Inputs ====&lt;br /&gt;
Event log to operate on.&lt;br /&gt;
&lt;br /&gt;
==== Outputs ====&lt;br /&gt;
&lt;br /&gt;
* Transformed event log.&lt;br /&gt;
* Result dictionary object containing the following properties:&lt;br /&gt;
** &#039;&#039;&#039;resource_column&#039;&#039;&#039;: The name of the column containing names of resources.&lt;br /&gt;
** &#039;&#039;&#039;resource_to_role_map&#039;&#039;&#039;: An object containing resource names as property names and role names as values.&lt;br /&gt;
** &#039;&#039;&#039;role_column&#039;&#039;&#039;: The name of the generated column whose values indicate the role to which the resource belongs.&lt;br /&gt;
** &#039;&#039;&#039;role_limits&#039;&#039;&#039;: An object containing role names as property names and maximum usage for that role as value.&lt;br /&gt;
&lt;br /&gt;
==== Example ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
#{&lt;br /&gt;
  &amp;quot;type&amp;quot;: &amp;quot;resources_to_roles&amp;quot;,&lt;br /&gt;
  &amp;quot;resource_column&amp;quot;: &amp;quot;SAP_User&amp;quot;,&lt;br /&gt;
  &amp;quot;role_column&amp;quot;: &amp;quot;Role&amp;quot;,&lt;br /&gt;
  &amp;quot;role_name_template&amp;quot;: &amp;quot;Role %d&amp;quot;,&lt;br /&gt;
  &amp;quot;input&amp;quot;: #{&lt;br /&gt;
    &amp;quot;resource_limits&amp;quot;: &amp;quot;max_resource_usages&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26264</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26264"/>
		<updated>2025-04-25T11:27:24Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored to scripts instead of dashboards, which is a way to separate complex expressions from dashboards and allows reusing expressions across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary, where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Connect to a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function to show a DataFrame as an HTML table, and uses the function for a literal DataFrame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named as &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created here. E.g., when moving project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named as &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with columns, you can change line 22 to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script copies a single data table to Snowflake. Replace &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source data table.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
An SqlDataFrame is used to avoid loading the whole data table into memory. Filtering is performed as the first operation to minimize the amount of work required from the data table&#039;s data source.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Filter an SqlDataFrame by removing or replacing rows with the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that filters out the most infrequently occurring attribute values, or replaces them&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * If not empty/null, the attribute values excluded by the minValueUsage or maxNumUniqueValues filtering are aggregated into a single value whose name is given by this parameter (e.g. &amp;quot;Others&amp;quot;).&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsage or maxNumUniqueValues is enabled.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  } else {&lt;br /&gt;
	// If neither filtering option is enabled, return the input DataFrame unchanged.&lt;br /&gt;
	df;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
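&lt;br /&gt;
Both functions return the exported data as CSV text. For example, to export the events of a model (replace &amp;lt;model id&amp;gt; with the id of a model having event data; ExportModelCases is used the same way):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
ExportModelEvents(ModelById(&amp;lt;model id&amp;gt;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;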
&lt;br /&gt;
Alternatively, a model&#039;s events data table can be exported directly as CSV:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The same for the cases data table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the specified model&#039;s event data table that contains the resource used by each event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two data tables, for events and cases) containing the event log filtered by the given filter id. The script also works if the model doesn&#039;t have a cases data table.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .Join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition will generate an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix for the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model with the given name&lt;br /&gt;
 * already exists, its event and case data will be replaced with the newly generated event and&lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;This creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites the old one) in the project named &amp;quot;TestData&amp;quot;, containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases matching certain event type patterns, known as [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with the id of a model having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Perform a query and send results as E-mail in a HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The contents of this script are just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;testuser@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = ParseJson(Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code)&lt;br /&gt;
  .Set(&amp;quot;IncludeCollect&amp;quot;, true);&lt;br /&gt;
let resultDf = Query(queryConfiguration);&lt;br /&gt;
let mailBodyHtml = resultDf.`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: The QPR ProcessAnalyzer Server must have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;br /&gt;
&lt;br /&gt;
=== Converting a case-centric model to object-centric model ===&lt;br /&gt;
This function serves as an example of how a case-centric model can be converted into an object-centric model having just one object type: &amp;quot;Case&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with the id of a model having event data.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ConvertCCModelToOCModel(model, newModelName) &lt;br /&gt;
{&lt;br /&gt;
  let connection = model.EventsDataTable.DataSourceConnection;&lt;br /&gt;
  let caseIdColumn = model.EventsDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;];&lt;br /&gt;
  let eventToObjectTableName = `${newModelName} - event-to-object`;&lt;br /&gt;
  let eventsTableName = `${newModelName} - events`;&lt;br /&gt;
  let objectsTableName = `${newModelName} - objects`;&lt;br /&gt;
          &lt;br /&gt;
  let eventsDf = model.EventsDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventType&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;EventType&amp;quot;],&lt;br /&gt;
      &amp;quot;OcelEventTime&amp;quot;: model.EventsDataTable.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventId&amp;quot;, #sql{Concat(&amp;quot;evt-&amp;quot;, Cast(RowNumber([Column(&amp;quot;OcelEventTime&amp;quot;)]), &amp;quot;String&amp;quot;))});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RenameColumns([&lt;br /&gt;
      &amp;quot;OcelEventToObjectSourceId&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
      &amp;quot;OcelEventToObjectTargetId&amp;quot;: caseIdColumn])&lt;br /&gt;
    .Select([&amp;quot;OcelEventToObjectSourceId&amp;quot;, &amp;quot;OcelEventToObjectTargetId&amp;quot;])&lt;br /&gt;
    .WithColumn(&amp;quot;OcelEventToObjectQualifier&amp;quot;, #sql{#expr{caseIdColumn} })&lt;br /&gt;
    .Persist(eventToObjectTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  eventsDf&lt;br /&gt;
    .RemoveColumns([caseIdColumn])&lt;br /&gt;
    .Persist(eventsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let casesDt = model.CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .RenameColumns(#{&lt;br /&gt;
      &amp;quot;OcelObjectId&amp;quot;: model.CasesDataTable.ColumnMappings[&amp;quot;CaseId&amp;quot;]&lt;br /&gt;
    })&lt;br /&gt;
    .WithColumn(&amp;quot;OcelObjectType&amp;quot;, #sql{&amp;quot;Case&amp;quot;})&lt;br /&gt;
    .Persist(objectsTableName, #{&amp;quot;Connection&amp;quot;: connection});&lt;br /&gt;
&lt;br /&gt;
  let newConfiguration = #{&lt;br /&gt;
    &amp;quot;OcelDataSource&amp;quot;: #{&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTableName,&lt;br /&gt;
      &amp;quot;Objects&amp;quot;: objectsTableName,&lt;br /&gt;
      &amp;quot;EventToObject&amp;quot;: eventToObjectTableName&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
&lt;br /&gt;
  model.Project  &lt;br /&gt;
    .CreateModel(#{      &lt;br /&gt;
      &amp;quot;Name&amp;quot;: newModelName,  &lt;br /&gt;
      &amp;quot;Description&amp;quot;: model.Description,  &lt;br /&gt;
      &amp;quot;Configuration&amp;quot;: newConfiguration  &lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let ccModel = ModelById(&amp;lt;model id&amp;gt;);&lt;br /&gt;
ConvertCCModelToOCModel(ccModel, `ocel - ${ccModel.Name}`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=26041</id>
		<title>Object-centric Process Mining Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=26041"/>
		<updated>2025-03-25T15:28:23Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Object-centric model structure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QPR ProcessAnalyzer supports object-centric process mining (OCPM) based on the OCEL 2.0 standard (https://www.ocel-standard.org). To use object-centric functionality, you need to transform data into the [[#Object-centric_model_structure|suitable format]] for the [[#Create_object-centric_model|object-centric model]]. Object-centric models can be analyzed in the object-centric flowchart and with (case-centric) charts because the object-centric model can be converted into a case-centric eventlog using [[#Object-centric_perspectives|perspectives]]. To use the OCPM functionality, Snowflake needs to be used as the calculation engine.&lt;br /&gt;
&lt;br /&gt;
== Create object-centric model ==&lt;br /&gt;
Create a new object-centric model as follows:&lt;br /&gt;
# In the Workspace, open the project where to create the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;model&#039;&#039;&#039;.&lt;br /&gt;
# Define a name for the new model.&lt;br /&gt;
# Set &#039;&#039;&#039;Model type&#039;&#039;&#039; as &#039;&#039;&#039;Object-centric&#039;&#039;&#039;.&lt;br /&gt;
# Click &#039;&#039;&#039;Create&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configure object-centric model datatables ==&lt;br /&gt;
Datatables for the object-centric model need to exist in the same project as the model. Datatables can be set for the model as follows:&lt;br /&gt;
# In the Workspace, select the object-centric model and click &#039;&#039;&#039;Properties&#039;&#039;&#039;.&lt;br /&gt;
# In the model properties dialog, open the &#039;&#039;&#039;Datasource&#039;&#039;&#039; tab.&lt;br /&gt;
# Add a JSON configuration like the following to the textbox:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Objects&amp;quot;: &amp;quot;OCPM: objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Events&amp;quot;: &amp;quot;OCPM: events&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectToObject&amp;quot;: &amp;quot;OCPM: object-object&amp;quot;,&lt;br /&gt;
  &amp;quot;EventToObject&amp;quot;: &amp;quot;OCPM: event-object&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectTypes&amp;quot;: {&lt;br /&gt;
    &amp;quot;Invoice&amp;quot;: &amp;quot;OCPM object: Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Payment&amp;quot;: &amp;quot;OCPM object: Payment&amp;quot;,&lt;br /&gt;
    &amp;quot;Purchase Order&amp;quot;: &amp;quot;OCPM object: Purchase Order&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;EventTypes&amp;quot;: { &lt;br /&gt;
    &amp;quot;Approve Purchase Requisition&amp;quot;: &amp;quot;OCPM event: Approve Purchase Requisition&amp;quot;,&lt;br /&gt;
    &amp;quot;Change PO Quantity&amp;quot;: &amp;quot;OCPM event: Change PO Quantity&amp;quot;,&lt;br /&gt;
    &amp;quot;Create Purchase Order&amp;quot;: &amp;quot;OCPM event: Create Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Invoice&amp;quot;: &amp;quot;OCPM event: Insert Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Payment&amp;quot;: &amp;quot;OCPM event: Insert Payment&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON configuration needs to have the following properties:&lt;br /&gt;
* &#039;&#039;&#039;Objects&#039;&#039;&#039;: Objects datatable name.&lt;br /&gt;
* &#039;&#039;&#039;Events&#039;&#039;&#039;: Events datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectToObject&#039;&#039;&#039;: Object-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;EventToObject&#039;&#039;&#039;: Event-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectTypes&#039;&#039;&#039;: Key-value pairs mapping object type names to datatable names. Note that the object type names need to match the object type names in the objects datatable.&lt;br /&gt;
* &#039;&#039;&#039;EventTypes&#039;&#039;&#039;: Key-value pairs mapping event type names to datatable names. Note that the event type names need to match the event type names in the events datatable.&lt;br /&gt;
&lt;br /&gt;
It&#039;s also possible to store all object attributes in one shared table and all event attributes in another:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Objects&amp;quot;: &amp;quot;OCPM: objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Events&amp;quot;: &amp;quot;OCPM: events&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectToObject&amp;quot;: &amp;quot;OCPM: object-object&amp;quot;,&lt;br /&gt;
  &amp;quot;EventToObject&amp;quot;: &amp;quot;OCPM: event-object&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectTypes&amp;quot;: {&lt;br /&gt;
    &amp;quot;Invoice&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Payment&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Purchase Order&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;EventTypes&amp;quot;: { &lt;br /&gt;
    &amp;quot;Approve Purchase Requisition&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Change PO Quantity&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Create Purchase Order&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Invoice&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Payment&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note also that the Objects table (&amp;quot;OCPM: objects&amp;quot; in the above example) must not contain object types which are not specified in the ObjectTypes section (and same for event types in the EventTypes section).&lt;br /&gt;
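As a sketch of what this constraint means in practice, the following Python snippet checks a hypothetical, inlined Objects table and Events table against the ObjectTypes and EventTypes sections of the configuration (the rows, names, and the check itself are illustrative, not QPR ProcessAnalyzer functionality):&lt;br /&gt;

```python
# Hypothetical pre-import check: every type appearing in the Objects and
# Events tables must be listed in the model configuration.
config = {
    "ObjectTypes": {"Invoice": "OCPM object: Invoice", "Payment": "OCPM object: Payment"},
    "EventTypes": {"Insert Invoice": "OCPM event: Insert Invoice"},
}
objects_rows = [
    {"OcelObjectId": "o1", "OcelObjectType": "Invoice"},
    {"OcelObjectId": "o2", "OcelObjectType": "Freight"},  # not in the configuration
]
events_rows = [{"OcelEventId": "e1", "OcelEventType": "Insert Invoice"}]

# Types present in the data but missing from the configuration:
unknown_object_types = {r["OcelObjectType"] for r in objects_rows} - set(config["ObjectTypes"])
unknown_event_types = {r["OcelEventType"] for r in events_rows} - set(config["EventTypes"])
print(unknown_object_types)  # {'Freight'}
print(unknown_event_types)   # set()
```

Rows like the &amp;quot;Freight&amp;quot; object above would need to be removed from the datatable or added to the ObjectTypes section before the model can be used.&lt;br /&gt;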
&lt;br /&gt;
== Import from OCEL 2.0 JSON file ==&lt;br /&gt;
An object-centric model can be imported from an OCEL 2.0 JSON file as follows:&lt;br /&gt;
# In the Workspace, open the project where to import the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;Import Model&#039;&#039;&#039;.&lt;br /&gt;
# Select the OCEL 2.0 JSON file from the disk and click &#039;&#039;&#039;Open&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
An object-centric model and its datatables are created.&lt;br /&gt;
&lt;br /&gt;
Example OCEL 2.0 eventlogs: https://www.ocel-standard.org/event-logs/overview/ (download the JSON format, which is the format QPR ProcessAnalyzer supports).&lt;br /&gt;
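For orientation, the snippet below parses a minimal inlined OCEL 2.0-style JSON document. The field names follow the published OCEL 2.0 JSON schema and are an assumption here; this is not the QPR ProcessAnalyzer import code:&lt;br /&gt;

```python
import json

# A tiny OCEL 2.0-style document, inlined instead of read from a file.
ocel_json = json.loads("""
{
  "objectTypes": [{"name": "Invoice", "attributes": []}],
  "eventTypes": [{"name": "Insert Invoice", "attributes": []}],
  "objects": [{"id": "o1", "type": "Invoice"}],
  "events": [{"id": "e1", "type": "Insert Invoice", "time": "2024-01-01T00:00:00Z",
              "relationships": [{"objectId": "o1", "qualifier": "invoice"}]}]
}
""")

# Inspect what the import would turn into object type tables and events.
object_types = [t["name"] for t in ocel_json["objectTypes"]]
event_count = len(ocel_json["events"])
print(object_types, event_count)
```

Listing the object and event types of a downloaded file this way is a quick sanity check before importing it.&lt;br /&gt;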
&lt;br /&gt;
== Filtering object-centric model ==&lt;br /&gt;
Object-centric models can be filtered by object attribute values, similar to filtering by case attribute values in case-centric models. An object attribute filter defines the following settings:&lt;br /&gt;
* Object type to be filtered&lt;br /&gt;
* Object attribute name&lt;br /&gt;
* Object attribute values&lt;br /&gt;
* Include or exclude logic&lt;br /&gt;
* Number of object relation steps&lt;br /&gt;
&lt;br /&gt;
When the number of object relation steps is zero, only objects of the selected type are filtered. For example, with zero steps, including items excludes all other object types, and excluding items includes all other object types.&lt;br /&gt;
&lt;br /&gt;
When object relation steps is one, objects directly related to the filtered object with an object-to-object relation are included or excluded.&lt;br /&gt;
&lt;br /&gt;
When object relation steps is two or more, several object-to-object relations are followed to find the included or excluded objects. The relations are traversed only in one direction, either forward or backward. This is based on the fact that in object-centric models, every object-to-object relation has a direction, i.e., a starting object and an ending object. When creating an object-centric model, this relation direction needs to be chosen carefully to produce the desired filtering results.&lt;br /&gt;
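The directed traversal can be illustrated with a small Python sketch. The relation rows and object ids are hypothetical; starting from the filtered objects, only forward relations are followed, up to the configured number of steps:&lt;br /&gt;

```python
from collections import defaultdict

# Hypothetical rows mirroring OcelObjectToObjectSourceId/TargetId pairs.
relations = [
    ("po1", "inv1"),   # Purchase Order -> Invoice
    ("inv1", "pay1"),  # Invoice -> Payment
]
forward = defaultdict(set)
for src, tgt in relations:
    forward[src].add(tgt)

def related_objects(start_ids, steps):
    """Follow relations in the forward direction only, up to `steps` hops."""
    found, frontier = set(start_ids), set(start_ids)
    for _ in range(steps):
        frontier = {t for s in frontier for t in forward[s]} - found
        found |= frontier
    return found

print(related_objects({"po1"}, 0))  # {'po1'}
print(related_objects({"po1"}, 2))  # {'po1', 'inv1', 'pay1'}
```

With zero steps only the purchase order itself is affected; with two steps the related invoice and payment are reached as well. Note that starting from &amp;quot;pay1&amp;quot; none of the other objects would be found, which is why the relation direction matters.&lt;br /&gt;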
&lt;br /&gt;
Object attribute filter rules can be created for the entire dashboard by pressing the blue plus button in the dashboard header (requires that an object-centric model is selected for the dashboard). Object attribute filter rules can also be added for an individual chart/flowchart by opening the &#039;&#039;Filter&#039;&#039; tab in the chart/flowchart settings and pressing the &#039;&#039;&#039;Object-centric filter&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
Alternatively, object attribute filter rules can be created by making a selection in a chart that shows object attribute values as dimensions or columns. When making the selection, the user can choose between &#039;&#039;&#039;Include Objects&#039;&#039;&#039; and &#039;&#039;&#039;Exclude Objects&#039;&#039;&#039;. It&#039;s also possible to use case-centric filtering by selecting &#039;&#039;&#039;Include Cases&#039;&#039;&#039; or &#039;&#039;&#039;Exclude Cases&#039;&#039;&#039;. Note that case-centric attribute filtering doesn&#039;t work with the object-centric flowchart or with other charts that have a different perspective selected.&lt;br /&gt;
&lt;br /&gt;
=== Changes for QPR ProcessAnalyzer 2025.3 ===&lt;br /&gt;
Starting from QPR ProcessAnalyzer 2025.3, it&#039;s possible to leave the object relation steps setting empty. In that case, the object-to-object relations are followed so that every object type is traversed once to find the related objects, and the relation direction (forward or backward) no longer matters.&lt;br /&gt;
&lt;br /&gt;
== Object-centric model structure ==&lt;br /&gt;
An object-centric model contains the datatables described in the table below. Datatables can be named freely, as the model JSON configuration defines the datatable for each type of data. However, the datatables need to use the column names specified in the table below, because those are the column names assumed by the object-centric model (i.e., column names cannot be selected freely).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&#039;&#039;&#039;Datatable role&#039;&#039;&#039;&lt;br /&gt;
!&#039;&#039;&#039;Contained data&#039;&#039;&#039;&lt;br /&gt;
! &#039;&#039;&#039;Datatable columns&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
||Objects&lt;br /&gt;
||Objects in the model (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039;: Unique id for the object (among all objects in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectType&#039;&#039;&#039;: Object type name (such as Order, Invoice, Delivery). Note that the model JSON configuration needs to use the same object type names.&lt;br /&gt;
|-&lt;br /&gt;
||Events&lt;br /&gt;
||Events in the model (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;: Unique id for the event (among all events in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039;: Event type name (such as Order created, Invoice sent). Note that the model JSON configuration needs to use the same event type names.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039;: Event timestamp.&lt;br /&gt;
|-&lt;br /&gt;
||Object-object relations&lt;br /&gt;
||Relations between objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectSourceId&#039;&#039;&#039;: Source object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectTargetId&#039;&#039;&#039;: Target object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Event-object relations&lt;br /&gt;
||Relations between events and objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectSourceId&#039;&#039;&#039;: Event id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectTargetId&#039;&#039;&#039;: Object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Object attributes (several datatables)&lt;br /&gt;
||Object attribute values, each object type in a separate table (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeObjectId&#039;&#039;&#039;: Object id. Matches the &#039;&#039;OcelObjectId&#039;&#039; column in the objects datatable.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeTime&#039;&#039;&#039;: Timestamp from which the attribute values are valid.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeChangedField&#039;&#039;&#039;: Changed object attribute name (not used currently).&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Object attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the object attribute values (column name is the object attribute name).&lt;br /&gt;
|-&lt;br /&gt;
||Event attributes (several datatables)&lt;br /&gt;
||Event attribute values, each event type in a separate table (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTypeEventId&#039;&#039;&#039;: Event id. Matches the &#039;&#039;OcelEventId&#039;&#039; column in the events datatable.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Event attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the event attribute values (column name is the event attribute name).&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Object-centric perspectives ==&lt;br /&gt;
Perspectives convert an object-centric model into a traditional case-centric eventlog, making it possible to view and analyze object-centric models in the analyses provided by charts. A single perspective is not able to describe the object-centric model entirely, but only from a certain limited viewpoint. By using analyses with several perspectives, it&#039;s possible to get a more complete picture of the object-centric model. A perspective starts from a certain object type and traverses the object-object relations as many steps as desired.&lt;br /&gt;
&lt;br /&gt;
To define a perspective, the following settings are defined in the chart settings:&lt;br /&gt;
* &#039;&#039;&#039;Base Object type&#039;&#039;&#039;: Objects of this type will be the cases in the projected case-centric eventlog.&lt;br /&gt;
* &#039;&#039;&#039;Object Relation Steps&#039;&#039;&#039;: Specifies how many object-object relations will be traversed in order to find events connected to the base objects. Value zero means that only those events are returned that are directly connected to the base objects.&lt;br /&gt;
* &#039;&#039;&#039;Show Event Types&#039;&#039;&#039;: List of event type names included in the perspective eventlog. If no event types are explicitly defined, all events are included, but without their event attributes.&lt;br /&gt;
&lt;br /&gt;
The resulting perspective eventlog will have the following columns:&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039; (mapped to case id)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039; (mapped to event type)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039; (mapped to timestamp)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;&lt;br /&gt;
* Object attributes of the base object type. Note that the object attribute values are &amp;quot;repeated&amp;quot; for all events belonging to the same object.&lt;br /&gt;
* Event attributes of the selected event types. Values are null for events that don&#039;t have the attribute.&lt;br /&gt;
&lt;br /&gt;
The base object type attributes are available as case attributes. As the object attribute values may change over time in the OCEL 2.0 data, the last attribute value is used as the case attribute value. Note that other object types&#039; attributes are not available as case attributes, so the object type whose attributes are needed must be set as the base object type.&lt;br /&gt;
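A minimal Python sketch of this projection, using hypothetical rows and a made-up object attribute named Value, flattens the base objects&#039; events into a case-centric eventlog and repeats the last attribute value on every event:&lt;br /&gt;

```python
# Illustrative rows; column names follow the perspective description above.
objects = [{"OcelObjectId": "inv1", "OcelObjectType": "Invoice"}]
event_to_object = [{"OcelEventToObjectSourceId": "e1", "OcelEventToObjectTargetId": "inv1"},
                   {"OcelEventToObjectSourceId": "e2", "OcelEventToObjectTargetId": "inv1"}]
events = [{"OcelEventId": "e1", "OcelEventType": "Insert Invoice", "OcelEventTime": "2024-01-01"},
          {"OcelEventId": "e2", "OcelEventType": "Clear Invoice", "OcelEventTime": "2024-02-01"}]
# Object attribute rows may repeat per timestamp; the last value wins.
invoice_attrs = [{"OcelObjectTypeObjectId": "inv1", "OcelObjectTypeTime": "2024-01-01", "Value": 100},
                 {"OcelObjectTypeObjectId": "inv1", "OcelObjectTypeTime": "2024-01-15", "Value": 120}]

# Base object type "Invoice", zero relation steps.
base_ids = {o["OcelObjectId"] for o in objects if o["OcelObjectType"] == "Invoice"}

# Keep only the last attribute value per object (ISO dates sort as strings).
last_attr = {}
for row in sorted(invoice_attrs, key=lambda r: r["OcelObjectTypeTime"]):
    last_attr[row["OcelObjectTypeObjectId"]] = row["Value"]

events_by_id = {e["OcelEventId"]: e for e in events}
eventlog = [
    {"CaseId": link["OcelEventToObjectTargetId"],
     "EventType": events_by_id[link["OcelEventToObjectSourceId"]]["OcelEventType"],
     "TimeStamp": events_by_id[link["OcelEventToObjectSourceId"]]["OcelEventTime"],
     "Value": last_attr[link["OcelEventToObjectTargetId"]]}  # repeated per event
    for link in event_to_object if link["OcelEventToObjectTargetId"] in base_ids
]
print(len(eventlog), eventlog[0]["Value"])  # 2 120
```

Both events of the invoice carry the last attribute value (120), matching the behavior described above where case attribute values are repeated for all events of the same object.&lt;br /&gt;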
&lt;br /&gt;
== Save perspective to filter ==&lt;br /&gt;
It&#039;s possible to include the object-centric perspective in a stored filter. When a filter is selected, the perspective in the filter is also applied to the dashboard. This makes it possible to quickly change perspectives for the entire dashboard. A chart-specific perspective overrides the dashboard-level perspective, so the dashboard-level perspective is only applied to charts that don&#039;t have a chart-specific perspective defined.&lt;br /&gt;
&lt;br /&gt;
A perspective can be added to a filter as follows:&lt;br /&gt;
# Go to the &#039;&#039;Process Discovery&#039;&#039; dashboard.&lt;br /&gt;
# Open the &#039;&#039;Session variables&#039;&#039; dialog in the dots menu on top right.&lt;br /&gt;
# Paste the filter json to the &#039;&#039;Value&#039;&#039; of the &#039;&#039;Filter&#039;&#039; variable (it might be easiest to start with a filter without filter rules, and then add the filter rules using the UI).&lt;br /&gt;
# Click &#039;&#039;Done&#039;&#039; button for the dialog.&lt;br /&gt;
# Save the filter by hovering over the &#039;&#039;Unsaved filter&#039;&#039; item (in the filters dropdown list) in the header and clicking &#039;&#039;Save as new filter&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Example: Filter json without any filter rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example: Filter json with a filter rule:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;Type&amp;quot;: &amp;quot;IncludeEvents&amp;quot;,&lt;br /&gt;
      &amp;quot;Items&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
          &amp;quot;Type&amp;quot;: &amp;quot;Attribute&amp;quot;,&lt;br /&gt;
          &amp;quot;Attribute&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
          &amp;quot;StringifiedValues&amp;quot;: [ &amp;quot;0Event 1&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Differences to OCEL 2.0 standard ==&lt;br /&gt;
Object-centric models in QPR ProcessAnalyzer mainly follow the OCEL 2.0 standard, with the following differences:&lt;br /&gt;
* Changing object attribute values over time is not supported.&lt;br /&gt;
* The &#039;&#039;ocel_time&#039;&#039; field of each event type table is moved to the events datatable (as every event has a timestamp).&lt;br /&gt;
* &#039;&#039;*_map_type&#039;&#039; columns are not needed, as the model settings are used for the same purpose.&lt;br /&gt;
* Object type tables: If OcelObjectTypeChangedField is not null, all the other field values are copied from the previous entry except: &lt;br /&gt;
** &#039;&#039;OcelObjectTypeChangedField&#039;&#039; which has the names of the changed fields as a comma separated string. &lt;br /&gt;
** The actual changed field which has the new value. &lt;br /&gt;
** &#039;&#039;OcelObjectTypeTime&#039;&#039; which has the timestamp when the value changed.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=26040</id>
		<title>Object-centric Process Mining Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=26040"/>
		<updated>2025-03-25T14:17:14Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Object-centric model structure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QPR ProcessAnalyzer supports object-centric process mining (OCPM) based on the OCEL 2.0 standard (https://www.ocel-standard.org). To use object-centric functionality, you need to transform data into the [[#Object-centric_model_structure|suitable format]] for the [[#Create_object-centric_model|object-centric model]]. Object-centric models can be analyzed in the object-centric flowchart and with (case-centric) charts because the object-centric model can be converted into a case-centric eventlog using [[#Object-centric_perspectives|perspectives]]. To use the OCPM functionality, Snowflake needs to be used as the calculation engine.&lt;br /&gt;
&lt;br /&gt;
== Create object-centric model ==&lt;br /&gt;
Create a new object-centric model as follows:&lt;br /&gt;
# In the Workspace, open the project where to create the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;model&#039;&#039;&#039;.&lt;br /&gt;
# Define a name for the new model.&lt;br /&gt;
# Set &#039;&#039;&#039;Model type&#039;&#039;&#039; as &#039;&#039;&#039;Object-centric&#039;&#039;&#039;.&lt;br /&gt;
# Click &#039;&#039;&#039;Create&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configure object-centric model datatables ==&lt;br /&gt;
Datatables for the object-centric model need to exist in the same project as the model. Datatables can be set for the model as follows:&lt;br /&gt;
# In the Workspace, select the object-centric model and click &#039;&#039;&#039;Properties&#039;&#039;&#039;.&lt;br /&gt;
# In the model properties dialog, open the &#039;&#039;&#039;Datasource&#039;&#039;&#039; tab.&lt;br /&gt;
# Add a JSON configuration like the following to the textbox:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Objects&amp;quot;: &amp;quot;OCPM: objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Events&amp;quot;: &amp;quot;OCPM: events&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectToObject&amp;quot;: &amp;quot;OCPM: object-object&amp;quot;,&lt;br /&gt;
  &amp;quot;EventToObject&amp;quot;: &amp;quot;OCPM: event-object&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectTypes&amp;quot;: {&lt;br /&gt;
    &amp;quot;Invoice&amp;quot;: &amp;quot;OCPM object: Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Payment&amp;quot;: &amp;quot;OCPM object: Payment&amp;quot;,&lt;br /&gt;
    &amp;quot;Purchase Order&amp;quot;: &amp;quot;OCPM object: Purchase Order&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;EventTypes&amp;quot;: { &lt;br /&gt;
    &amp;quot;Approve Purchase Requisition&amp;quot;: &amp;quot;OCPM event: Approve Purchase Requisition&amp;quot;,&lt;br /&gt;
    &amp;quot;Change PO Quantity&amp;quot;: &amp;quot;OCPM event: Change PO Quantity&amp;quot;,&lt;br /&gt;
    &amp;quot;Create Purchase Order&amp;quot;: &amp;quot;OCPM event: Create Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Invoice&amp;quot;: &amp;quot;OCPM event: Insert Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Payment&amp;quot;: &amp;quot;OCPM event: Insert Payment&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The JSON configuration needs to have the following properties:&lt;br /&gt;
* &#039;&#039;&#039;Objects&#039;&#039;&#039;: Objects datatable name.&lt;br /&gt;
* &#039;&#039;&#039;Events&#039;&#039;&#039;: Events datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectToObject&#039;&#039;&#039;: Object-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;EventToObject&#039;&#039;&#039;: Event-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectTypes&#039;&#039;&#039;: Key-value pairs mapping object type names to datatable names. Note that the object type names need to match the object type names in the objects datatable.&lt;br /&gt;
* &#039;&#039;&#039;EventTypes&#039;&#039;&#039;: Key-value pairs mapping event type names to datatable names. Note that the event type names need to match the event type names in the events datatable.&lt;br /&gt;
&lt;br /&gt;
It&#039;s also possible to store all object attributes in one shared table and all event attributes in another:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Objects&amp;quot;: &amp;quot;OCPM: objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Events&amp;quot;: &amp;quot;OCPM: events&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectToObject&amp;quot;: &amp;quot;OCPM: object-object&amp;quot;,&lt;br /&gt;
  &amp;quot;EventToObject&amp;quot;: &amp;quot;OCPM: event-object&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectTypes&amp;quot;: {&lt;br /&gt;
    &amp;quot;Invoice&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Payment&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Purchase Order&amp;quot;: &amp;quot;OCPM object attributes&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;EventTypes&amp;quot;: { &lt;br /&gt;
    &amp;quot;Approve Purchase Requisition&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Change PO Quantity&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Create Purchase Order&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Invoice&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Payment&amp;quot;: &amp;quot;OCPM event attributes&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note also that the Objects table (&amp;quot;OCPM: objects&amp;quot; in the above example) must not contain object types which are not specified in the ObjectTypes section (and same for event types in the EventTypes section).&lt;br /&gt;
&lt;br /&gt;
== Import from OCEL 2.0 JSON file ==&lt;br /&gt;
An object-centric model can be imported from an OCEL 2.0 JSON file as follows:&lt;br /&gt;
# In the Workspace, open the project where to import the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;Import Model&#039;&#039;&#039;.&lt;br /&gt;
# Select the OCEL 2.0 JSON file from the disk and click &#039;&#039;&#039;Open&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
An object-centric model and its datatables are created.&lt;br /&gt;
&lt;br /&gt;
Example OCEL 2.0 eventlogs: https://www.ocel-standard.org/event-logs/overview/ (download the JSON format, which is the format QPR ProcessAnalyzer supports).&lt;br /&gt;
&lt;br /&gt;
== Filtering object-centric model ==&lt;br /&gt;
Object-centric models can be filtered by object attribute values, similar to filtering by case attribute values in case-centric models. An object attribute filter defines the following settings:&lt;br /&gt;
* Object type to be filtered&lt;br /&gt;
* Object attribute name&lt;br /&gt;
* Object attribute values&lt;br /&gt;
* Include or exclude logic&lt;br /&gt;
* Number of object relation steps&lt;br /&gt;
&lt;br /&gt;
When the number of object relation steps is zero, only objects of the selected type are filtered. For example, with zero steps, including items excludes all other object types, and excluding items includes all other object types.&lt;br /&gt;
&lt;br /&gt;
When object relation steps is one, objects directly related to the filtered object with an object-to-object relation are included or excluded.&lt;br /&gt;
&lt;br /&gt;
When object relation steps is two or more, several object-to-object relations are followed to find the included or excluded objects. The relations are traversed only in one direction, either forward or backward. This is based on the fact that in object-centric models, every object-to-object relation has a direction, i.e., a starting object and an ending object. When creating an object-centric model, this relation direction needs to be chosen carefully to produce the desired filtering results.&lt;br /&gt;
&lt;br /&gt;
Object attribute filter rules can be created for the entire dashboard by pressing the blue plus button in the dashboard header (requires that an object-centric model is selected for the dashboard). Object attribute filter rules can also be added for an individual chart/flowchart by opening the &#039;&#039;Filter&#039;&#039; tab in the chart/flowchart settings and pressing the &#039;&#039;&#039;Object-centric filter&#039;&#039;&#039; button.&lt;br /&gt;
&lt;br /&gt;
Alternatively, object attribute filter rules can be created by making a selection in a chart that shows object attribute values as dimensions or columns. When making the selection, the user can choose between &#039;&#039;&#039;Include Objects&#039;&#039;&#039; and &#039;&#039;&#039;Exclude Objects&#039;&#039;&#039;. It&#039;s also possible to use case-centric filtering by selecting &#039;&#039;&#039;Include Cases&#039;&#039;&#039; or &#039;&#039;&#039;Exclude Cases&#039;&#039;&#039;. Note that case-centric attribute filtering doesn&#039;t work with the object-centric flowchart or with other charts that have a different perspective selected.&lt;br /&gt;
&lt;br /&gt;
=== Changes for QPR ProcessAnalyzer 2025.3 ===&lt;br /&gt;
Starting from QPR ProcessAnalyzer 2025.3, it&#039;s possible to leave the object relation steps setting empty. In that case, the object-to-object relations are followed so that every object type is traversed once to find the related objects, and the relation direction (forward or backward) no longer matters.&lt;br /&gt;
&lt;br /&gt;
== Object-centric model structure ==&lt;br /&gt;
An object-centric model contains the datatables described in the table below. Datatables can be named freely, as the model JSON configuration defines the datatable for each type of data. However, the datatables need to use the column names specified in the table below, because those are the column names assumed by the object-centric model (i.e., column names cannot be selected freely).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&#039;&#039;&#039;Datatable role&#039;&#039;&#039;&lt;br /&gt;
!&#039;&#039;&#039;Contained data&#039;&#039;&#039;&lt;br /&gt;
! &#039;&#039;&#039;Datatable columns&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
||Objects&lt;br /&gt;
||Objects in the model (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039;: Unique id for the object (among all objects in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectType&#039;&#039;&#039;: Object type name (such as Order, Invoice, Delivery). Note that the model json configuration needs to use the same object type names.&lt;br /&gt;
|-&lt;br /&gt;
||Events&lt;br /&gt;
||Events in the model (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;: Unique id for the event (among all events in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039;: Event type name (such as Order created, Invoice sent). Note that the model json configuration needs to use the same event type names.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039;: Event timestamp.&lt;br /&gt;
|-&lt;br /&gt;
||Object-object relations&lt;br /&gt;
||Relations between objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectSourceId&#039;&#039;&#039;: Source object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectTargetId&#039;&#039;&#039;: Target object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Event-object relations&lt;br /&gt;
||Relations between events and objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectSourceId&#039;&#039;&#039;: Event id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectTargetId&#039;&#039;&#039;: Object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Object attributes (several datatables)&lt;br /&gt;
||Object attribute values, each object type in a separate table (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeObjectId&#039;&#039;&#039;: Object id. Matches to the objects datatable &#039;&#039;OcelObjectId&#039;&#039; column.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeTime&#039;&#039;&#039;: Timestamp which the attribute value is valid from (not used currently).&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeChangedField&#039;&#039;&#039;: Changed object attribute name (not used currently).&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Object attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the object attribute values (column name is the object attribute name).&lt;br /&gt;
|-&lt;br /&gt;
||Event attributes (several datatables)&lt;br /&gt;
||Event attribute values, each event type in a separate table (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTypeEventId&#039;&#039;&#039;: Event id. Matches to the events datatable &#039;&#039;OcelEventId&#039;&#039; column.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Event attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the event attribute values (column name is the event attribute name).&lt;br /&gt;
|}&lt;br /&gt;
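&lt;br /&gt;
As a hypothetical illustration of the structure (example ids and type names only, not from any actual dataset), the objects datatable could contain rows like these, with the object attribute values stored in the separate per-type object attribute datatables:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&#039;&#039;&#039;OcelObjectId&#039;&#039;&#039;&lt;br /&gt;
!&#039;&#039;&#039;OcelObjectType&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
||order-1&lt;br /&gt;
||Order&lt;br /&gt;
|-&lt;br /&gt;
||invoice-1&lt;br /&gt;
||Invoice&lt;br /&gt;
|}&lt;br /&gt;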
&lt;br /&gt;
== Object-centric perspectives ==&lt;br /&gt;
Perspectives convert an object-centric model into a traditional case-centric eventlog, allowing object-centric models to be viewed and analyzed in the analyses provided by charts. A single perspective is not able to describe the object-centric model entirely, but only from a certain limited viewpoint. By using analyses with several perspectives, it&#039;s possible to get a more complete picture of the object-centric model. A perspective starts from a certain object type and traverses the object-object relations as many steps as desired.&lt;br /&gt;
&lt;br /&gt;
To define a perspective, configure the following in the chart settings:&lt;br /&gt;
* &#039;&#039;&#039;Base Object type&#039;&#039;&#039;: Objects of this type will be the cases in the projected case-centric eventlog.&lt;br /&gt;
* &#039;&#039;&#039;Object Relation Steps&#039;&#039;&#039;: Specifies how many object-object relations will be traversed in order to find events connected to the base objects. Value zero means that only those events are returned that are directly connected to the base objects.&lt;br /&gt;
* &#039;&#039;&#039;Show Event Types&#039;&#039;&#039;: List of event type names that are included in the perspective eventlog. If no event types are explicitly defined, all events will be included, but their event attributes will not be.&lt;br /&gt;
&lt;br /&gt;
The resulting perspective eventlog will have the following columns:&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039; (mapped to case id)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039; (mapped to event type)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039; (mapped to timestamp)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;&lt;br /&gt;
* Object attributes of the base object type. Note that the object attribute values are &amp;quot;repeated&amp;quot; for all events belonging to the same object.&lt;br /&gt;
* Event attributes of the selected event types. Values are null for events that don&#039;t have the attribute.&lt;br /&gt;
&lt;br /&gt;
The base object type attributes are available as case attributes. As the object attribute values may change over time in the OCEL 2.0 data, the last attribute value is used as the case attribute value. Note that other object types&#039; attributes are not available as case attributes, so the object type whose attributes are needed must be set as the base object type.&lt;br /&gt;
&lt;br /&gt;
== Save perspective to filter ==&lt;br /&gt;
It&#039;s possible to include the object-centric perspective in a stored filter. When a filter is selected, the perspective in the filter is also applied to the dashboard. This allows quickly changing the perspective for the entire dashboard. A chart-specific perspective overrides the dashboard-level perspective, so the dashboard-level perspective is only applied to charts that don&#039;t have a chart-specific perspective defined.&lt;br /&gt;
&lt;br /&gt;
Perspective can be added to a filter as follows:&lt;br /&gt;
# Go to the &#039;&#039;Process Discovery&#039;&#039; dashboard.&lt;br /&gt;
# Open the &#039;&#039;Session variables&#039;&#039; dialog in the dots menu on top right.&lt;br /&gt;
# Paste the filter json into the &#039;&#039;Value&#039;&#039; of the &#039;&#039;Filter&#039;&#039; variable (it might be easiest to start with a filter without filter rules, and then add the filter rules using the UI).&lt;br /&gt;
# Click the &#039;&#039;Done&#039;&#039; button to close the dialog.&lt;br /&gt;
# Save the filter by hovering over the &#039;&#039;Unsaved filter&#039;&#039; (filters dropdown list) in the header and clicking &#039;&#039;Save as new filter&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Example: Filter json without any filter rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example: Filter json with a filter rule:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;Type&amp;quot;: &amp;quot;IncludeEvents&amp;quot;,&lt;br /&gt;
      &amp;quot;Items&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
          &amp;quot;Type&amp;quot;: &amp;quot;Attribute&amp;quot;,&lt;br /&gt;
          &amp;quot;Attribute&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
          &amp;quot;StringifiedValues&amp;quot;: [ &amp;quot;0Event 1&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Differences to OCEL 2.0 standard ==&lt;br /&gt;
Object-centric models in QPR ProcessAnalyzer are mainly following the OCEL 2.0 standard, but there are the following differences:&lt;br /&gt;
* Changing of object attributes values over time is not supported.&lt;br /&gt;
* &#039;&#039;ocel_time&#039;&#039; field of each event type table is moved to the events datatable (as every event has a timestamp). &lt;br /&gt;
* &#039;&#039;*_map_type&#039;&#039; columns are not needed as the model settings are used for the same purpose. &lt;br /&gt;
* Object type tables: If OcelObjectTypeChangedField is not null, all the other field values are copied from the previous entry except: &lt;br /&gt;
** &#039;&#039;OcelObjectTypeChangedField&#039;&#039; which has the names of the changed fields as a comma separated string. &lt;br /&gt;
** The actual changed field which has the new value. &lt;br /&gt;
** &#039;&#039;OcelObjectTypeTime&#039;&#039; which has the timestamp when the value changed.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26025</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26025"/>
		<updated>2025-03-19T14:13:25Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Perform a query and send results as E-mail in a HTML table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
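&lt;br /&gt;
As an illustration (a sketch assuming the parameter names used in the example above), the called script could use a passed parameter directly as a variable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Inside the called script, the passed parameters are plain variables.&lt;br /&gt;
WriteLog(`Received parameter1: ${parameter1}`);&lt;br /&gt;
return parameter1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;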
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from dashboards and allows reusing expressions across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
An SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The above example shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact to a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal point. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
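&lt;br /&gt;
For example, to convert to string instead, the conversion branch in the script above becomes (only the conversion function changes):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if (data == null) {&lt;br /&gt;
	null;&lt;br /&gt;
} else {&lt;br /&gt;
	ToString(data);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;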
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that shows a dataframe as an HTML table, and uses the function on a literal dataframe.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.Columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named as &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created here. E.g., when moving project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named as &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}) already exists, skipping copy of data table &amp;quot;${Name}&amp;quot; (id: ${Id})`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only to create the Snowflake datatables with columns, you can change line 22 to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable to Snowflake. Replace the &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
SqlDataFrame is used to prevent loading the whole datatable into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data source of the datatable.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering an SqlDataFrame by removing rows having the most infrequently occurring column values, or replacing those values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace their values&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value should be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other-values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
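&lt;br /&gt;
The functions above can be called, for example, as follows (a sketch; replace &#039;&#039;&amp;lt;model id&amp;gt;&#039;&#039; with the id of the model to export):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let m = ModelById(&amp;lt;model id&amp;gt;);&lt;br /&gt;
ExportModelEvents(m);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the events and cases data tables of a model can be exported directly as CSV as shown below.&lt;br /&gt;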
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in the event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing the event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model that contains the resource used by each event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables for cases and events) containing the filtered event log from the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition will add an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with &lt;br /&gt;
 * given name, event and case data in this model will be replaced with the new generated event and &lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites an existing one) in the project named &amp;quot;TestData&amp;quot;, containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases containing certain event type patterns ([https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns]).&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with the identifier of a model that has event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Perform a query and send results as E-mail in an HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The content of this script is just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;testuser@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = ParseJson(Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code)&lt;br /&gt;
  .Set(&amp;quot;IncludeCollect&amp;quot;, true);&lt;br /&gt;
let resultDf = Query(queryConfiguration);&lt;br /&gt;
let mailBodyHtml = resultDf.`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: QPR ProcessAnalyzer Server has to have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26002</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26002"/>
		<updated>2025-03-18T14:30:34Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Perform a query and send results as E-mail in a HTML table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions, and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
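&lt;br /&gt;
For example, the called script can read the passed values directly as variables (a hypothetical sketch; the parameter names follow the call above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Contents of the called script (id 123):&lt;br /&gt;
if (parameter2 == false) {&lt;br /&gt;
  WriteLog(`parameter1 was ${parameter1}`);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;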
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from dashboards and allows expressions to be reused across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL scripts can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The above example shows how the reports can be accessed by name.&lt;br /&gt;
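&lt;br /&gt;
The returned dictionary can also be iterated generically, e.g. to log the row count of each report (a sketch based on the result of the Run call above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
result.Keys.{&lt;br /&gt;
  let reportName = _;&lt;br /&gt;
  WriteLog(`Report ${reportName} has ${CountTop(result[reportName].Rows)} rows`);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;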
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Connect to a web service, fetch some data, and store it in a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into a string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
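&lt;br /&gt;
For example, to convert into strings instead, the conversion lambda in the script above could be simplified as follows (a sketch; an explicit null check may still be needed depending on the data):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; ToString(Column(columnName))&lt;br /&gt;
])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;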
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function to show a dataframe as an HTML table, and uses the function on a literal dataframe.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. The new datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Data table &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy.`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only to create the Snowflake data tables with their columns, you can change line 22 to the following:&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single data table in Snowflake. Replace &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source data table.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
SqlDataFrame is used to avoid loading the whole data table into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data source of the data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering a SqlDataFrame by removing rows having the most infrequently occurring column values, or replacing those values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace those values&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the attribute values excluded by the minValueUsage or maxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  } else {&lt;br /&gt;
	// Neither filtering option is enabled; return the DataFrame unchanged.&lt;br /&gt;
	df;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two data tables, for cases and events) containing the filtered event log from the given filter id. The script also works if the model doesn&#039;t have a cases data table.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition will add an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with &lt;br /&gt;
 * given name, event and case data in this model will be replaced with the new generated event and &lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites an existing one) in the project named &amp;quot;TestData&amp;quot;, containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases having certain event type patterns ([https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns]).&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with the id of a model having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Perform a query and send results as E-mail in an HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The contents of this script are just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;test@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code;&lt;br /&gt;
let resultDf = Query(ParseJson(queryConfiguration));&lt;br /&gt;
let mailBodyHtml = resultDf.Collect().`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: QPR ProcessAnalyzer Server has to have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26001</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26001"/>
		<updated>2025-03-18T14:29:35Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Perform a query and send results as E-mail in a HTML table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See [[Managing_Scripts#Creating_Script|Workspace]] for how expression scripts can be created. Documentation for the syntax, functions, and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The call waits until the run is completed, and the Run function returns the called script&#039;s return value.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Storing expressions in scripts instead of dashboards is a way to separate complex expressions from dashboards and to reuse the same expressions across several dashboards.&lt;br /&gt;
&lt;br /&gt;
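For illustration, here is a minimal sketch of what the called script (id 123 above) could look like; the script body and log message are hypothetical, but the parameters are available as plain variables as described above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// parameter1 and parameter2 are available as variables in the called script.&lt;br /&gt;
WriteLog(`Called with parameter1=${parameter1} and parameter2=${parameter2}`);&lt;br /&gt;
// The value of the last statement is the script&#039;s return value.&lt;br /&gt;
`Processed ${parameter1}`;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;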
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by name.&lt;br /&gt;
&lt;br /&gt;
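As a sketch, the reports dictionary can also be iterated by key, for example to log the size of each report (using only the Keys property and dictionary indexing shown above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
result.Keys.{&lt;br /&gt;
  let reportName = _;&lt;br /&gt;
  WriteLog(`Report ${reportName} has ${CountTop(result[reportName].Rows)} rows`);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;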
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact a web service, fetch some data, and store it in a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them in a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script converts a single column to a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into a string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
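&lt;br /&gt;
As a sketch, only the conversion call inside the column function changes; everything else in the script above stays the same:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Instead of ToFloat(data), convert the column values to strings:&lt;br /&gt;
ToString(data);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;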
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that renders a DataFrame as an HTML table, and applies the function to a literal DataFrame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named as &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created here. E.g., when moving project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named as &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as target table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with columns, you can change line 22 to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable in Snowflake. Replace &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query the number of rows in a given data table having a datetime value in a given year, grouped by month, and return the result as CSV ===&lt;br /&gt;
The SqlDataFrame is used to avoid loading the whole datatable into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data source of the data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering an SqlDataFrame by removing or replacing rows with the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace their values&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value should be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  } else {&lt;br /&gt;
    // No filtering requested: return the dataframe unchanged.&lt;br /&gt;
    df;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate the value usages of a single column for each event in the event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables for cases and events) containing the filtered event log from the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition will add an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with &lt;br /&gt;
 * given name, event and case data in this model will be replaced with the new generated event and &lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites an existing one) in the project named &amp;quot;TestData&amp;quot;, containing the data from the model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in the project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This is an example expression showing how POSIX-style regular expressions can be used to search an event log for cases having certain event type patterns, known as [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Perform a query and send results as E-mail in a HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The content of this script is just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;test@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code;&lt;br /&gt;
let resultDf = Query(ParseJson(queryConfiguration));&lt;br /&gt;
let mailBodyHtml = resultDf.Collect().`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;NOTE: The PA server must have an SMTP server configured. Also, remember to update the values of replyToAddress and recipientsArray before use.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26000</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=26000"/>
		<updated>2025-03-18T14:26:41Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Perform a query and send results as E-mail in a HTML table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions, and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The call waits until the run is completed, and the Run function returns the called script&#039;s return value.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from the dashboards and allows reusing them across several dashboards.&lt;br /&gt;
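As a minimal sketch (the parameter names are illustrative and must match the keys passed to Run), a called script can reference the passed parameters directly as variables:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Contents of a called script: parameters appear as predefined variables.&lt;br /&gt;
if (parameter2) {&lt;br /&gt;
  WriteLog(`parameter1 was: ${parameter1}`);&lt;br /&gt;
}&lt;br /&gt;
// The value of the last expression is returned to the caller.&lt;br /&gt;
parameter1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;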
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
An SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.Keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
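As a sketch, the string conversion differs from the script above only in the conversion call inside the column lambda:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
columnName: () =&amp;gt; {&lt;br /&gt;
	let data = Column(columnName);&lt;br /&gt;
	if (data == null) {&lt;br /&gt;
		null;&lt;br /&gt;
	} else {&lt;br /&gt;
		// String conversion instead of numeric&lt;br /&gt;
		ToString(data);&lt;br /&gt;
	}&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;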
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that shows a dataframe as an HTML table, and uses the function on a literal dataframe.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.Columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as target table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy.`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with their columns, you can change line 22 to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.Head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable to Snowflake. Replace the &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
An SqlDataFrame is used to prevent loading the whole datatable into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data source of the data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering SqlDataFrame by removing rows having, or replacing, the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values, or to replace their values&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values having the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value should be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other-values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
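To see what the Lag / CaseWhen / running-Sum combination computes, the same idea can be sketched in plain Python (a standalone illustration with made-up events, not using any QPR APIs):&lt;br /&gt;

```python
# Sketch of the usage-diff technique: events are (case_id, timestamp, resource).
# For one tracked resource value, compute +1/-1 transitions per case and keep a
# running total over all events ordered by timestamp.
def usage_over_time(events, resource_value):
    # Order all events globally by timestamp, as the windowed Sum does.
    ordered = sorted(events, key=lambda e: e[1])
    prev_by_case = {}          # last seen resource per case (the Lag step)
    total, series = 0, []
    for case_id, ts, resource in ordered:
        prev = prev_by_case.get(case_id)
        if resource == prev:
            diff = 0           # same resource still in use by this case
        elif prev == resource_value:
            diff = -1          # tracked resource released
        elif resource == resource_value:
            diff = 1           # tracked resource acquired
        else:
            diff = 0
        total += diff
        series.append((ts, total))
        prev_by_case[case_id] = resource
    return series

events = [
    ("c1", 1, "R1"), ("c2", 2, "R1"), ("c1", 3, "R2"), ("c2", 4, "R2"),
]
print(usage_over_time(events, "R1"))
```

Each event compares its resource to the previous event of the same case, contributing +1 when the tracked resource is taken into use and -1 when it is released; the running total is the concurrent usage of that resource at each point in time.&lt;br /&gt;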
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables for cases and events) containing the filtered event log defined by the given filter id. The script also works if the model doesn't have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing one) in the given target project with the given &lt;br /&gt;
 * number of repetitions of the given source model.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with &lt;br /&gt;
 * given name, event and case data in this model will be replaced with the new generated event and &lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites the old one) in the project named &amp;quot;TestData&amp;quot;, containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
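The core of ''RepeatNTimes'' — appending N copies of the source rows while prefixing each copy's case ids with the repeat index — can be illustrated with plain Python rows (hypothetical data, not using any QPR or dataframe APIs):&lt;br /&gt;

```python
def repeat_n_times(rows, case_id_key, num_repeats):
    # Each repetition prefixes "<i>-" to the case id so the copies stay
    # distinct cases in the resulting data.
    result = []
    for i in range(1, num_repeats + 1):
        for row in rows:
            copy = dict(row)
            copy[case_id_key] = f"{i}-{row[case_id_key]}"
            result.append(copy)
    return result

rows = [{"CaseId": "A", "EventType": "Start"}, {"CaseId": "B", "EventType": "Start"}]
out = repeat_n_times(rows, "CaseId", 3)
print(len(out), out[0]["CaseId"], out[-1]["CaseId"])  # 6 1-A 3-B
```

In the script itself the same append-and-prefix loop runs on SqlDataFrames, so the repetition happens in the data source rather than in memory.&lt;br /&gt;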
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases whose event types follow certain patterns ([https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns]).&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
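The tokenization idea — map each event type to a single character, concatenate a case's events into a trace string, and test patterns against it — can be tried in plain Python. Python's ''re'' dialect differs slightly from POSIX, but these particular patterns behave the same; the event types and traces below are made up for illustration:&lt;br /&gt;

```python
import re

# Map each event type to one character, as the Token column does
# (sorted event types get "a", "b", "c", ...).
event_types = ["Create", "Approve", "Ship"]
token = {t: chr(ord("a") + i) for i, t in enumerate(sorted(event_types))}

def trace(case_events):
    # Concatenate the case's event tokens in order into a single string.
    return "".join(token[e] for e in case_events)

def chain_response(a, b):
    # [^a]*(ab[^a]*)*[^a]* : every occurrence of a is immediately followed by b
    return re.compile(f"[^{a}]*({a}{b}[^{a}]*)*[^{a}]*")

# Every "Create" must be directly followed by "Ship".
p = chain_response(token["Create"], token["Ship"])
print(bool(p.fullmatch(trace(["Create", "Ship", "Approve"]))))   # prints True
print(bool(p.fullmatch(trace(["Create", "Approve", "Ship"]))))   # prints False
```

In the expression above the same matching runs in the data source via ''Regexp'' on the aggregated ''Trace'' column, so no traces need to be loaded into memory.&lt;br /&gt;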
&lt;br /&gt;
=== Perform a query and send results as E-mail in an HTML table ===&lt;br /&gt;
This example requires two scripts that are both located in the same project:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. The content of this script is simply the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;test@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code;&lt;br /&gt;
let resultDf = Query(ParseJson(queryConfiguration));&lt;br /&gt;
let mailBodyHtml = resultDf.Collect().`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=25999</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=25999"/>
		<updated>2025-03-18T14:17:10Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions, and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command waits until the run is completed, and the return value of the called script is returned by the Run function call.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Expressions can be stored in scripts instead of dashboards, which separates complex expressions from the dashboards and allows reusing them across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The example above shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Contact a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into a string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
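As a standalone illustration of the same null-safe conversion pattern (plain Python, not the product's expression language): ''None'' values stay ''None'', everything else is parsed with a period as the decimal point, and unparseable data raises an error, matching the failure behaviour described above:&lt;br /&gt;

```python
def to_float_or_none(value):
    # None stays None; anything else must parse as a period-decimal number,
    # otherwise a ValueError is raised (the conversion "fails").
    if value is None:
        return None
    return float(value)

column = ["3.14", None, "42"]
print([to_float_or_none(v) for v in column])  # [3.14, None, 42.0]
```

Guarding the null case first is what lets rows with missing values survive the conversion while genuinely bad data still surfaces as an error.&lt;br /&gt;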
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that shows a DataFrame as an HTML table, and uses the function on a literal DataFrame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;. New datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping copy`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only create the Snowflake datatables with columns, you can change the &#039;&#039;targetDt.Import(sourceDt.SqlDataFrame);&#039;&#039; line to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script creates a copy of a single datatable to Snowflake. Replace the &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query number of rows in given data table having a datetime value in given year grouped by month and return resulting table as CSV ===&lt;br /&gt;
SqlDataFrame is used to avoid loading the whole data table into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data table&#039;s data source.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
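The same filter-first, then-aggregate shape can be sketched in plain Python (an illustration only; the dictionary rows and the Start Time key are assumptions, and the real query above runs inside the data source):

```python
from collections import Counter
from datetime import datetime

def monthly_counts(rows, year):
    """Count rows whose Start Time falls in the given year, grouped by month.

    Filtering is done first so the grouping only sees matching rows,
    mirroring the SqlDataFrame pipeline above."""
    in_year = (r for r in rows if r["Start Time"].year == year)
    counts = Counter(r["Start Time"].month for r in in_year)
    return sorted(counts.items())  # [(month, count), ...], ordered by month

rows = [
    {"Start Time": datetime(2014, 1, 5)},
    {"Start Time": datetime(2014, 1, 9)},
    {"Start Time": datetime(2014, 3, 2)},
    {"Start Time": datetime(2015, 1, 1)},  # different year, filtered out
]
```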
&lt;br /&gt;
=== Function for filtering an SqlDataFrame by removing, or replacing, rows having the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or to replace them&lt;br /&gt;
 * with a given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for any attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included into the comparison. Every returned value must be used by at least the given fraction (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * If not empty/null, the attribute values excluded by the minValueUsage or maxNumUniqueValues filtering are included&lt;br /&gt;
 * as a single aggregated value, and this parameter defines the name used for it. If empty/null, those rows are removed.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  } else {&lt;br /&gt;
	// No filtering requested: return the DataFrame unchanged.&lt;br /&gt;
	df;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
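For reference, the filtering logic the function implements can be expressed compactly in plain Python. This sketch operates on an in-memory list of values rather than an SqlDataFrame, and its parameter names are only loosely modeled on the ones above:

```python
from collections import Counter

def filter_infrequent(values, max_unique=None, min_usage=None, others=None):
    """Keep the most frequent values; drop or relabel the rest.

    max_unique: keep at most this many distinct values (highest counts first).
    min_usage:  keep values whose share of all rows is at least this fraction.
    others:     if given, infrequent values are replaced by this label
                instead of their rows being dropped."""
    counts = Counter(values)
    total = len(values)
    accepted = {v for v, c in counts.items()
                if min_usage is None or c / total >= min_usage}
    if max_unique is not None:
        accepted &= {v for v, _ in counts.most_common(max_unique)}
    if others is None:
        return [v for v in values if v in accepted]
    return [v if v in accepted else others for v in values]
```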
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate the usage of every value of a single column at each event in an event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
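The idea behind the usage columns can be illustrated in plain Python: order the events by timestamp, emit +1 when a case starts using the resource and -1 when it stops, and keep a running sum. This is only a sketch with an assumed in-memory event format, not the SqlDataFrame implementation above:

```python
def usage_over_time(events, resource):
    """events: list of (timestamp, case_id, resource_value) tuples.
    Returns [(timestamp, usage), ...] where usage is how many cases
    hold `resource` after each event."""
    events = sorted(events)          # order by timestamp across all cases
    prev_by_case = {}
    usage, out = 0, []
    for ts, case, res in events:
        prev = prev_by_case.get(case)
        if prev != res:              # the case switched resources
            if prev == resource:
                usage -= 1           # case released the resource
            if res == resource:
                usage += 1           # case acquired the resource
        prev_by_case[case] = res
        out.append((ts, usage))
    return out
```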
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two data tables, for cases and events) containing the filtered event log of the given filter id. The script also works if the model doesn&#039;t have a cases data table.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing one) in the given target project, with the given number of&lt;br /&gt;
 * repetitions of the given source model&#039;s data.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used as the source of the data and from which the connection is copied for the target model if a&lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with the&lt;br /&gt;
 * given name, the event and case data in that model will be replaced with the newly generated event and&lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites the old one) in the project named &amp;quot;TestData&amp;quot;, containing the data from the model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in the project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
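The core of RepeatNTimes can be illustrated in plain Python, using in-memory rows instead of an SqlDataFrame (the row format is an assumption):

```python
def repeat_n_times(rows, case_id_key, num_repeats):
    """Duplicate rows num_repeats times, prefixing each copy's case id
    with '<repeat index>-' so the copies stay distinct cases."""
    result = []
    for i in range(1, num_repeats + 1):
        for row in rows:
            copy = dict(row)
            copy[case_id_key] = f"{i}-{row[case_id_key]}"
            result.append(copy)
    return result
```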
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This example expression shows how POSIX-style regular expressions can be used to search an event log for cases matching certain event type patterns ([https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns]).&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with a valid model identifier having event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
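As a quick sanity check, the same pattern regexes behave as expected with Python&#039;s re module when each event type is encoded as a single letter, as the expression above does with its Token column (only two of the five patterns are shown):

```python
import re

def matches(pattern, trace):
    """Full-trace match, corresponding to the Regexp check on the Trace column."""
    return re.fullmatch(pattern, trace) is not None

def response(a, b):        # every a is eventually followed by a b
    return f"[^{a}]*({a}.*{b})*[^{a}]*"

def chain_response(a, b):  # every a is immediately followed by b
    return f"[^{a}]*({a}{b}[^{a}]*)*[^{a}]*"
```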
&lt;br /&gt;
=== Perform a query and send the results by e-mail as an HTML table ===&lt;br /&gt;
This example requires two scripts:&lt;br /&gt;
&lt;br /&gt;
# Expression-type script to execute (can be, e.g., scheduled to run daily).&lt;br /&gt;
# Expression-type script containing the query JSON to use as the basis for the e-mail. In this example, this script is named &amp;quot;Send query as E-mail - query JSON&amp;quot;. Its content is just the JSON representation of a query, which can be extracted, e.g., from any PA chart view.&lt;br /&gt;
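Conceptually, the HTML table that script #1 interpolates from the query result boils down to the following plain-Python sketch (function and parameter names are hypothetical):

```python
def rows_to_html_table(columns, rows, caption="Example query"):
    """Render query results as a simple HTML table, mirroring the
    string-template interpolation used in the expression script."""
    head = "".join(f"<th>{c}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{v}</td>" for v in row) + "</tr>"
        for row in rows
    )
    return (f"<table><caption>{caption}</caption>"
            f"<thead><tr>{head}</tr></thead>"
            f"<tbody>{body}</tbody></table>")
```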
&lt;br /&gt;
Script #1 should contain the following code:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let replyToAddress = &amp;quot;noreply@test.com&amp;quot;;&lt;br /&gt;
let recipientsArray = [&amp;quot;test@test.com&amp;quot;];&lt;br /&gt;
let queryConfiguration = Project.ScriptByName(&amp;quot;Send query as E-mail - query JSON&amp;quot;).Code;&lt;br /&gt;
let resultDf = Query(ParseJson(queryConfiguration));&lt;br /&gt;
let mailBodyHtml = resultDf.Collect().`&lt;br /&gt;
&amp;lt;table&amp;gt;&lt;br /&gt;
  &amp;lt;caption&amp;gt;Example query&amp;lt;/caption&amp;gt;&lt;br /&gt;
  &amp;lt;thead&amp;gt;&lt;br /&gt;
    &amp;lt;tr&amp;gt;&lt;br /&gt;
      ${StringJoin(&amp;quot;&amp;quot;, _.Columns.`&lt;br /&gt;
        &amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;&lt;br /&gt;
      `)}&lt;br /&gt;
    &amp;lt;/tr&amp;gt;&lt;br /&gt;
  &amp;lt;/thead&amp;gt;&lt;br /&gt;
  &amp;lt;tbody&amp;gt;&lt;br /&gt;
    ${StringJoin(&amp;quot;&amp;quot;, _.Rows.`&lt;br /&gt;
      &amp;lt;tr&amp;gt;&lt;br /&gt;
        ${StringJoin(&amp;quot;&amp;quot;, _.`&lt;br /&gt;
          &amp;lt;td&amp;gt;${_}&amp;lt;/td&amp;gt;&lt;br /&gt;
        `)}&lt;br /&gt;
      &amp;lt;/tr&amp;gt;&lt;br /&gt;
    `)}&lt;br /&gt;
  &amp;lt;/tbody&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
`;&lt;br /&gt;
&lt;br /&gt;
SendEmail(#{&lt;br /&gt;
  &amp;quot;ReplyTo&amp;quot;: [replyToAddress],&lt;br /&gt;
  &amp;quot;To&amp;quot;: recipientsArray,&lt;br /&gt;
  &amp;quot;Subject&amp;quot;: &amp;quot;Example query E-mail&amp;quot;,&lt;br /&gt;
  &amp;quot;IsBodyHtml&amp;quot;: true,&lt;br /&gt;
  &amp;quot;Body&amp;quot;: mailBodyHtml&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Web_API:_saml2&amp;diff=25989</id>
		<title>Web API: saml2</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Web_API:_saml2&amp;diff=25989"/>
		<updated>2025-03-17T12:30:36Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The &#039;&#039;&#039;Saml2&#039;&#039;&#039; method returns the SAML 2.0 service provider (SP) metadata. No authentication is required to fetch the metadata. Usually the service provider metadata URL is configured in the identity provider (IdP), which can then read, e.g., the needed public encryption keys.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Url: GET qprpa/Saml2&lt;br /&gt;
Content-Disposition: attachment; filename=&amp;quot;customer.onqpr.com_qprpa_Saml2.xml&amp;quot;&lt;br /&gt;
Content-Type: application/samlmetadata+xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;EntityDescriptor xmlns=&amp;quot;urn:oasis:names:tc:SAML:2.0:metadata&amp;quot; xmlns:saml2=&amp;quot;urn:oasis:names:tc:SAML:2.0:assertion&amp;quot; cacheDuration=&amp;quot;PT1H&amp;quot; entityID=&amp;quot;https://customer.onqpr.com/qprpa/Saml2&amp;quot; ID=&amp;quot;_76ac281969e84420924d4e25d22b7c4e&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;Signature xmlns=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;SignedInfo&amp;gt;&lt;br /&gt;
         &amp;lt;CanonicalizationMethod Algorithm=&amp;quot;http://www.w3.org/2001/10/xml-exc-c14n#&amp;quot; /&amp;gt;&lt;br /&gt;
         &amp;lt;SignatureMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmldsig-more#rsa-sha256&amp;quot; /&amp;gt;&lt;br /&gt;
         &amp;lt;Reference URI=&amp;quot;...&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;Transforms&amp;gt;&lt;br /&gt;
               &amp;lt;Transform Algorithm=&amp;quot;http://www.w3.org/2000/09/xmldsig#enveloped-signature&amp;quot; /&amp;gt;&lt;br /&gt;
               &amp;lt;Transform Algorithm=&amp;quot;http://www.w3.org/2001/10/xml-exc-c14n#&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;/Transforms&amp;gt;&lt;br /&gt;
            &amp;lt;DigestMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#sha256&amp;quot; /&amp;gt;&lt;br /&gt;
            &amp;lt;DigestValue&amp;gt;...&amp;lt;/DigestValue&amp;gt;&lt;br /&gt;
         &amp;lt;/Reference&amp;gt;&lt;br /&gt;
      &amp;lt;/SignedInfo&amp;gt;&lt;br /&gt;
      &amp;lt;SignatureValue&amp;gt;...&amp;lt;/SignatureValue&amp;gt;&lt;br /&gt;
      &amp;lt;KeyInfo&amp;gt;&lt;br /&gt;
         &amp;lt;X509Data&amp;gt;&lt;br /&gt;
            &amp;lt;X509Certificate&amp;gt;...&amp;lt;/X509Certificate&amp;gt;&lt;br /&gt;
         &amp;lt;/X509Data&amp;gt;&lt;br /&gt;
      &amp;lt;/KeyInfo&amp;gt;&lt;br /&gt;
   &amp;lt;/Signature&amp;gt;&lt;br /&gt;
   &amp;lt;SPSSODescriptor AuthnRequestsSigned=&amp;quot;false&amp;quot; WantAssertionsSigned=&amp;quot;false&amp;quot; protocolSupportEnumeration=&amp;quot;urn:oasis:names:tc:SAML:2.0:protocol&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;KeyDescriptor use=&amp;quot;signing&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;KeyInfo xmlns=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;X509Data&amp;gt;&lt;br /&gt;
               &amp;lt;X509Certificate&amp;gt;...&amp;lt;/X509Certificate&amp;gt;&lt;br /&gt;
            &amp;lt;/X509Data&amp;gt;&lt;br /&gt;
         &amp;lt;/KeyInfo&amp;gt;&lt;br /&gt;
      &amp;lt;/KeyDescriptor&amp;gt;&lt;br /&gt;
      &amp;lt;SingleLogoutService Binding=&amp;quot;urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect&amp;quot; Location=&amp;quot;https://customer.onqpr.com/QPRPA/Saml2/Logout&amp;quot; /&amp;gt;&lt;br /&gt;
      &amp;lt;AssertionConsumerService Binding=&amp;quot;urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST&amp;quot; Location=&amp;quot;https://customer.onqpr.com/QPRPA/Saml2/Acs&amp;quot; isDefault=&amp;quot;true&amp;quot; index=&amp;quot;0&amp;quot; /&amp;gt;&lt;br /&gt;
      &amp;lt;AssertionConsumerService Binding=&amp;quot;urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact&amp;quot; Location=&amp;quot;https://customer.onqpr.com/QPRPA/Saml2/Acs&amp;quot; isDefault=&amp;quot;false&amp;quot; index=&amp;quot;1&amp;quot; /&amp;gt;&lt;br /&gt;
   &amp;lt;/SPSSODescriptor&amp;gt;&lt;br /&gt;
&amp;lt;/EntityDescriptor&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Category: QPR ProcessAnalyzer]]&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=25971</id>
		<title>Expression Script Examples</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Expression_Script_Examples&amp;diff=25971"/>
		<updated>2025-03-06T09:12:53Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains script examples written in the QPR ProcessAnalyzer expression language. See how expression scripts can be created in the [[Managing_Scripts#Creating_Script|Workspace]]. Documentation for the syntax, functions and entities can be found on the main page in the [[QPR_ProcessAnalyzer_Wiki#For_Developers|KPI Expression Language]] section.&lt;br /&gt;
&lt;br /&gt;
== Calling Expression Script from Expression ==&lt;br /&gt;
Expression scripts can be called from an expression using the [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Run]] function with the following syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: false,&lt;br /&gt;
  &amp;quot;parameter3&amp;quot;: 123.45&lt;br /&gt;
})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The call waits until the run is completed, and the Run function returns the called script&#039;s return value.&lt;br /&gt;
&lt;br /&gt;
Parameters can be passed to the called script, and the parameters are available as variables in the script. The parameters can contain any type of data.&lt;br /&gt;
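&lt;br /&gt;
For illustration, inside the called script the passed parameters can be read directly as variables. This is a minimal sketch of what the body of the called script (id 123 above) could look like, using the parameter names from the example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Parameters passed to Run are available here as variables.&lt;br /&gt;
WriteLog(`Called with parameter1=${parameter1} and parameter2=${parameter2}`);&lt;br /&gt;
return parameter2 ? parameter1 : null;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;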
&lt;br /&gt;
Expression scripts can also be called from a dashboard. Storing expressions in scripts instead of dashboards separates complex expressions from the dashboards and allows reusing them across several dashboards.&lt;br /&gt;
&lt;br /&gt;
== Calling SQL Script from Expression ==&lt;br /&gt;
SQL script can be called from an expression using the Run function as follows (similar to calling [[#Calling Expression Script from Expression|expression scripts]]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
let result = ScriptById(123).Run(#{&lt;br /&gt;
  &amp;quot;parameter1&amp;quot;: &amp;quot;value1&amp;quot;,&lt;br /&gt;
  &amp;quot;parameter2&amp;quot;: 321&lt;br /&gt;
});&lt;br /&gt;
let arrayOfAllReports = result.Keys;&lt;br /&gt;
let report1 = result.Report1;&lt;br /&gt;
let report2 = result.Report2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SQL scripts can return multiple &#039;&#039;reports&#039;&#039;, which are combined into a dictionary where the key is the name of the report (&amp;quot;sheet name&amp;quot;) and the value is the report data as a DataFrame. The above example shows how the reports can be accessed by their names.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
=== Call web service===&lt;br /&gt;
Connect to a web service, fetch some data, and store it to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let datatableName = &amp;quot;Web Service Data&amp;quot;;&lt;br /&gt;
let webServiceData = CallWebService(&lt;br /&gt;
    #{&amp;quot;Address&amp;quot;: &amp;quot;https://processanalyzer.onqpr.com/qprpa/api/serverinfo&amp;quot;}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
let targetDatatable = Project.Datatables.Where(Name==datatableName);&lt;br /&gt;
if (Count(targetDatatable) == 0) {&lt;br /&gt;
	targetDatatable = Project.CreateDatatable(datatableName)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Setting value&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
	.AddColumn(&amp;quot;Data read&amp;quot;, &amp;quot;DateTime&amp;quot;);&lt;br /&gt;
} else {&lt;br /&gt;
	targetDatatable = targetDatatable[0];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let currentTime = Now;&lt;br /&gt;
let dataAsDf = ToDataFrame(&lt;br /&gt;
	webServiceData.keys.{&lt;br /&gt;
        let key = _;&lt;br /&gt;
        [key, webServiceData[key], currentTime];&lt;br /&gt;
    },&lt;br /&gt;
	[&amp;quot;Setting name&amp;quot;, &amp;quot;Setting value&amp;quot;, &amp;quot;Data read&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
targetDatatable.Import(dataAsDf, #{&amp;quot;Append&amp;quot;:true});&lt;br /&gt;
WriteLog(`${CountTop(dataAsDf.Rows)} rows written to datatable`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Store data to datatable ===&lt;br /&gt;
Get all models in the system and store them to a datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let newDatatable = Project&lt;br /&gt;
    .CreateDatatable(&amp;quot;Models list &amp;quot; + ToString(Now, &amp;quot;dd.MM.yyyy HH:mm:ss&amp;quot;))&lt;br /&gt;
    .AddColumn(&amp;quot;Model name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Project name&amp;quot;, &amp;quot;String&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Created time&amp;quot;, &amp;quot;DateTime&amp;quot;)&lt;br /&gt;
    .AddColumn(&amp;quot;Cases&amp;quot;, &amp;quot;Integer&amp;quot;);&lt;br /&gt;
let startTime = Now;&lt;br /&gt;
let modelsData = ToDataFrame(&lt;br /&gt;
    Models.([Name, Project.Name, CreatedDate, NCases]),&lt;br /&gt;
    [&amp;quot;Model name&amp;quot;, &amp;quot;Project name&amp;quot;, &amp;quot;Created time&amp;quot;, &amp;quot;Cases&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
WriteLog(`Listing models took ${(Now - startTime).TotalSeconds.Round(2)} seconds.`);&lt;br /&gt;
newDatatable.Import(modelsData);&lt;br /&gt;
WriteLog(`Datatable ${newDatatable.Id} created.`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Convert datatable column data ===&lt;br /&gt;
This script can be used to convert a single column into a numerical data type. To use the script, set up the following at the beginning of the script:&lt;br /&gt;
* Project name where the datatable is located.&lt;br /&gt;
* Datatable name&lt;br /&gt;
* Name of the column to be converted&lt;br /&gt;
&lt;br /&gt;
Note that the conversion fails if there is data that cannot be converted into numerical format. The conversion assumes that a period (.) is used as the decimal separator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
let projectName = &amp;quot;New Project&amp;quot;;&lt;br /&gt;
let datatableName = &amp;quot;qpr processanalyzer events&amp;quot;;&lt;br /&gt;
let columnName = &amp;quot;Event order in case&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
let project = (Projects.Where(Name==projectName))[0];&lt;br /&gt;
let datatable = (project.Datatables.Where(Name==datatableName))[0];&lt;br /&gt;
DatatableById(datatable.Id).DataFrame&lt;br /&gt;
.SetColumns([&lt;br /&gt;
	columnName: () =&amp;gt; {&lt;br /&gt;
		let data = Column(columnName);&lt;br /&gt;
		if (data == null) {&lt;br /&gt;
			null;&lt;br /&gt;
		 } else {&lt;br /&gt;
			ToFloat(data);&lt;br /&gt;
		}&lt;br /&gt;
	}&lt;br /&gt;
])&lt;br /&gt;
.Persist(datatable.Name, [&amp;quot;ProjectId&amp;quot;: project.Id, &amp;quot;Append&amp;quot;: false]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead of converting to numeric (with the &#039;&#039;ToFloat&#039;&#039; function), data can be converted into string using the &#039;&#039;ToString&#039;&#039; function.&lt;br /&gt;
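&lt;br /&gt;
As a sketch, only the conversion call in the column lambda changes; for a string conversion, the &#039;&#039;else&#039;&#039; branch in the script above would read:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ToString(data);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;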
&lt;br /&gt;
=== Show DataFrame as HTML table ===&lt;br /&gt;
&lt;br /&gt;
This script defines a function that renders a DataFrame as an HTML table, and uses the function on a literal DataFrame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function dataframeToHtmlTable(df) {&lt;br /&gt;
	return&lt;br /&gt;
`&amp;lt;table&amp;gt;&lt;br /&gt;
	&amp;lt;tr&amp;gt;&lt;br /&gt;
		${StringJoin(&amp;quot;\r\n\t\t&amp;quot;, df.Columns.`&amp;lt;th&amp;gt;${_}&amp;lt;/th&amp;gt;`)}&lt;br /&gt;
	&amp;lt;/tr&amp;gt;&lt;br /&gt;
	${StringJoin(&amp;quot;&amp;quot;, df.Rows.(&lt;br /&gt;
		&amp;quot;\r\n\t&amp;lt;tr&amp;gt;&amp;quot; + StringJoin(&amp;quot;&amp;quot;, _.`\r\n\t\t&amp;lt;td&amp;gt;${ToString(_)}&amp;lt;/td&amp;gt;`) + &amp;quot;\r\n\t&amp;lt;/tr&amp;gt;&amp;quot;&lt;br /&gt;
	))}&lt;br /&gt;
&amp;lt;/table&amp;gt;`&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let data = ToDataFrame(&lt;br /&gt;
	[&lt;br /&gt;
		[&amp;quot;one&amp;quot;, &amp;quot;two&amp;quot;, &amp;quot;three&amp;quot;],&lt;br /&gt;
		[&amp;quot;four&amp;quot;, &amp;quot;five&amp;quot;, &amp;quot;six&amp;quot;],&lt;br /&gt;
		[&amp;quot;seven&amp;quot;, &amp;quot;eight&amp;quot;, &amp;quot;nine&amp;quot;]&lt;br /&gt;
	],&lt;br /&gt;
	[&amp;quot;Column 1&amp;quot;, &amp;quot;Column 2&amp;quot;, &amp;quot;Column 3&amp;quot;]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
return dataframeToHtmlTable(data);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy local datatables to Snowflake ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
// Copies all datatables in a project to another project including datatable contents.&lt;br /&gt;
// Usage instructions:&lt;br /&gt;
// 1. Create expression script in the project from where you want to copy the datatables.&lt;br /&gt;
// 2. Create a new project named &amp;quot;&amp;lt;name of the project to be moved&amp;gt; - Snowflake&amp;quot;; the new datatables will be created there. E.g., when moving a project named &amp;quot;SAP_OrderToCash&amp;quot;, the target project should be named &amp;quot;SAP_OrderToCash - Snowflake&amp;quot;.&lt;br /&gt;
// 3. Run the script.&lt;br /&gt;
// NOTE: Columns of type &amp;quot;Any&amp;quot; will be created as &amp;quot;String&amp;quot;-columns in Snowflake, thus it is recommended that actual data types are set for the tables prior to the move.&lt;br /&gt;
&lt;br /&gt;
let sourceProject = Project;&lt;br /&gt;
let sourceProjectName = Project.Name;&lt;br /&gt;
let targetProjectName = `${sourceProjectName} - Snowflake`;&lt;br /&gt;
let targetProject = First(Projects.Where(Name == targetProjectName));&lt;br /&gt;
if (IsNull(targetProject)) {&lt;br /&gt;
  WriteLog(`Unable to find target project named &amp;quot;${targetProjectName}&amp;quot;. Aborting operation.`);&lt;br /&gt;
  return;&lt;br /&gt;
}&lt;br /&gt;
let dts = sourceProject.DataTables;&lt;br /&gt;
WriteLog(`Copying all ${CountTop(dts)} data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
dts.{&lt;br /&gt;
  let sourceDt = _;&lt;br /&gt;
  WriteLog(`Starting to copy data table &amp;quot;${Name}&amp;quot; (id: ${Id}) having ${NRows} rows and ${NColumns} columns.`);&lt;br /&gt;
  let targetDt;&lt;br /&gt;
  targetDt = targetProject.DatatableByName(sourceDt.Name);&lt;br /&gt;
  if (targetDt == null) {&lt;br /&gt;
    targetDt = targetProject.CreateDataTable(sourceDt.Name, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: targetProject.Id})});&lt;br /&gt;
    targetDt.Import(sourceDt.SqlDataFrame);&lt;br /&gt;
    WriteLog(`Finished copying data table &amp;quot;${Name}&amp;quot; (id: ${Id}) to table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id})`);&lt;br /&gt;
  } else {&lt;br /&gt;
    WriteLog(`Datatable &amp;quot;${Name}&amp;quot; (id: ${Id}) already exists as table &amp;quot;${targetDt.Name}&amp;quot; (id: ${targetDt.Id}); skipping.`);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
WriteLog(`Finished copying all the data tables found in project &amp;quot;${sourceProject.Name}&amp;quot; (id: ${sourceProject.Id}) to Snowflake in project &amp;quot;${targetProject.Name}&amp;quot; (id: ${targetProject.Id})`);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t need to copy the data but only to create the Snowflake datatables with columns, change line 22 to&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
targetDt.Import(sourceDt.SqlDataFrame.head(0));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copy single datatable to Snowflake ===&lt;br /&gt;
This script copies a single datatable to Snowflake. Replace &#039;&#039;&amp;lt;tableId1&amp;gt;&#039;&#039; with the id of the source datatable.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function CopyDataTableToSnowflake(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let sourceDt = DataTableById(dataTableId);&lt;br /&gt;
  sourceDt.SqlDataFrame.Persist(`${sourceDt.Name} - Snowflake`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: sourceDt.Project.Id})});&lt;br /&gt;
}&lt;br /&gt;
CopyDataTableToSnowflake(&amp;lt;tableId1&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create a copy of a data table that has all Any-type columns changed to String-type columns ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line&amp;gt;&lt;br /&gt;
function ConvertAnyDataTypesToStringsToNewTable(dataTableId)&lt;br /&gt;
{&lt;br /&gt;
  let dt = DataTableById(dataTableId);&lt;br /&gt;
  let sdf = dt.SqlDataFrame;&lt;br /&gt;
  let cts = dt.ColumnTypes;&lt;br /&gt;
  cts.{&lt;br /&gt;
    let ct = _;&lt;br /&gt;
    if (ct.DataType == &amp;quot;Any&amp;quot;) {&lt;br /&gt;
      let n = ct.Name;&lt;br /&gt;
      sdf = sdf.WithColumn(ct.Name, #sql{Cast(Column(Variable(&amp;quot;n&amp;quot;)), &amp;quot;ShortString&amp;quot;)});&lt;br /&gt;
    }&lt;br /&gt;
  };&lt;br /&gt;
  sdf.Persist(`${dt.Name} - Converted`, #{&amp;quot;Append&amp;quot;: false, &amp;quot;ProjectId&amp;quot;: dt.Project.Id});&lt;br /&gt;
}&lt;br /&gt;
ConvertAnyDataTypesToStringsToNewTable(&amp;lt;dataTableId&amp;gt;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Query the number of rows in a data table having a datetime value in a given year, grouped by month, returning the result as CSV ===&lt;br /&gt;
SqlDataFrame is used to avoid loading the whole datatable into memory first. Filtering is performed as the first operation to minimize the amount of work required from the data table&#039;s data source.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
DataTableById(&amp;lt;data table id&amp;gt;)&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .Where(#sql{2014 == Year(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .WithColumn(&amp;quot;Month&amp;quot;, #sql{Month(Column(&amp;quot;Start Time&amp;quot;))})&lt;br /&gt;
  .GroupBy([&amp;quot;Month&amp;quot;]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
  .OrderByColumns([&amp;quot;Month&amp;quot;], [true])&lt;br /&gt;
  .Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Function for filtering an SqlDataFrame by removing or replacing rows with the most infrequently occurring column values ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/***&lt;br /&gt;
 * @name ColumnWithMinUsage&lt;br /&gt;
 * @description&lt;br /&gt;
 * Generic function that can be used to filter out the most infrequently occurring attribute values or replace their values&lt;br /&gt;
 * with given common value.&lt;br /&gt;
 * @param df:&lt;br /&gt;
 * DataFrame to operate on.&lt;br /&gt;
 * @param columnName:&lt;br /&gt;
 * Name of the column to be filtered.&lt;br /&gt;
 * @param newColumnName:&lt;br /&gt;
 * Name of the column that will contain the new value of the original column after filtering (if includeOthers was applied).&lt;br /&gt;
 * @param maxNumUniqueValues:&lt;br /&gt;
 * Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
 * @param minValueUsage:&lt;br /&gt;
 * Minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
 * @param includeOthers:&lt;br /&gt;
 * Should the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
 * If not empty/null, defines the name used for these other values.&lt;br /&gt;
 */&lt;br /&gt;
function ColumnWithMinUsage(df, columnName, newColumnName, maxNumUniqueValues, minValueUsage, includeOthers)&lt;br /&gt;
{&lt;br /&gt;
  let all = df&lt;br /&gt;
	.GroupBy([])&lt;br /&gt;
	.Aggregate([&amp;quot;NAllTotal&amp;quot;], [&amp;quot;Count&amp;quot;])&lt;br /&gt;
	.WithColumn(&amp;quot;__Join2&amp;quot;, #sql{1});&lt;br /&gt;
  let minValueUsageEnabled = !IsNullTop(minValueUsage);&lt;br /&gt;
  let maxNumUniqueValuesEnabled = !IsNullTop(maxNumUniqueValues);&lt;br /&gt;
  if (minValueUsageEnabled || maxNumUniqueValuesEnabled) {&lt;br /&gt;
	// Perform column value-based filtering if minValueUsageEnabled or maxNumUniqueValuesEnabled is defined.&lt;br /&gt;
    let valueColumnName = &amp;quot;__ValueNew&amp;quot;;&lt;br /&gt;
	let filteredValuesColumns = [valueColumnName: columnName];&lt;br /&gt;
	let filteredValues = df&lt;br /&gt;
	  .GroupBy([columnName]).Aggregate([&amp;quot;Count&amp;quot;], [&amp;quot;Count&amp;quot;]);&lt;br /&gt;
	if (minValueUsageEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithColumn(&amp;quot;__Join&amp;quot;, #sql{1})&lt;br /&gt;
		.Join(all, [&amp;quot;__Join&amp;quot;: &amp;quot;__Join2&amp;quot;], &amp;quot;leftouter&amp;quot;)&lt;br /&gt;
        .WithColumn(&amp;quot;Usage&amp;quot;, #sql{Column(&amp;quot;Count&amp;quot;) / Column(&amp;quot;NAllTotal&amp;quot;)});&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;Usage&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
	if (maxNumUniqueValuesEnabled) {&lt;br /&gt;
	  filteredValues = filteredValues&lt;br /&gt;
		.WithRowNumberColumn(&amp;quot;RowNumber&amp;quot;, [&amp;quot;Count&amp;quot;], null, [false]);&lt;br /&gt;
	  filteredValuesColumns = Concat(filteredValuesColumns, [&amp;quot;RowNumber&amp;quot;]);&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	filteredValues = filteredValues&lt;br /&gt;
	  .Select(filteredValuesColumns);&lt;br /&gt;
&lt;br /&gt;
	// Generate select returning all the accepted values.&lt;br /&gt;
	let allValues = filteredValues&lt;br /&gt;
	  .(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;gt;= #expr{minValueUsage}}) : _)&lt;br /&gt;
	  .(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;lt;= #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
	  .Select([valueColumnName, newColumnName: valueColumnName]);&lt;br /&gt;
&lt;br /&gt;
	if (!IsNullTop(includeOthers)) {&lt;br /&gt;
	  // If includeOthers is defined, replace original values with the variable defined in includeOthers.&lt;br /&gt;
	  let otherValues = filteredValues&lt;br /&gt;
		.(minValueUsageEnabled ? Where(#sql{Column(&amp;quot;Usage&amp;quot;) &amp;lt; #expr{minValueUsage}}) : _)&lt;br /&gt;
		.(maxNumUniqueValuesEnabled ? Where(#sql{Column(&amp;quot;RowNumber&amp;quot;) &amp;gt; #expr{maxNumUniqueValues}}) : _)&lt;br /&gt;
		.WithColumn(newColumnName, #sql{#expr{includeOthers}})&lt;br /&gt;
		.Select([valueColumnName, newColumnName]);&lt;br /&gt;
	  allValues = allValues.Append(otherValues)&lt;br /&gt;
	}&lt;br /&gt;
	df.Join(allValues, [columnName: valueColumnName], &amp;quot;inner&amp;quot;)&lt;br /&gt;
	  .RemoveColumns([valueColumnName]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// The following example will return only rows containing two of the most common values for Region-column.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, 2, null, null);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&lt;br /&gt;
// The following example will return all input rows, but will replace the values of rows whose Region-column&lt;br /&gt;
// has a value used by less than 15% of all the rows with a new value: &amp;quot;_Others&amp;quot;.&lt;br /&gt;
//let df = DataTableById(&amp;lt;data table id&amp;gt;).SqlDataFrame;&lt;br /&gt;
//df = ColumnWithMinUsage(df, &amp;quot;Region&amp;quot;, &amp;quot;_Filtered&amp;quot;, null, 0.15, &amp;quot;_Others&amp;quot;);&lt;br /&gt;
//df.Collect().ToCsv();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Export model events and cases ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function ExportModelEvents(m) {&lt;br /&gt;
  let attrs = m.EventAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Events.Concat(&lt;br /&gt;
      [Case.Name, Type.Name, ToString(TimeStamp, &amp;quot;yyyy-MM-dd HH:mm:ss.fff&amp;quot;)], &lt;br /&gt;
      {let evt = _; attrs.{let att = _; evt.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;, &amp;quot;EventType&amp;quot;, &amp;quot;TimeStamp&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ExportModelCases(m) {&lt;br /&gt;
  let attrs = m.CaseAttributes;&lt;br /&gt;
  ToDataFrame(&lt;br /&gt;
    m.EventLog.Cases.Concat(&lt;br /&gt;
      [Name], &lt;br /&gt;
      {let cas = _; attrs.{let att = _; cas.Attribute(att)}}&lt;br /&gt;
    ), &lt;br /&gt;
    Concat(&lt;br /&gt;
      [&amp;quot;CaseId&amp;quot;], &lt;br /&gt;
      attrs.Name&lt;br /&gt;
    )&lt;br /&gt;
  ).ToCsv(true);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).EventsDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
First(Models.Where(Name==&amp;quot;SAP OtC Extended&amp;quot;)).CasesDataTable.DataFrame.ToCsv(true)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Calculate all the value usages of a single column for each event in event data table ===&lt;br /&gt;
This query could be used, e.g., to find out the maximum resource usage for every resource found in the event data table.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
function WithUsageColumns(resourceColumn)&lt;br /&gt;
{&lt;br /&gt;
  function WithTotalUsageColumnOfSingleResource(resourceColumn, resourceValue)&lt;br /&gt;
  {&lt;br /&gt;
    _&lt;br /&gt;
      .WithColumn(&amp;quot;_Prev&amp;quot;, #sql{Lag(Column(resourceColumn), [TimeStamp, EventType], [true, true], [CaseId], 1, null)})&lt;br /&gt;
      .WithColumn(&amp;quot;_UsageDiff&amp;quot;, #sql{&lt;br /&gt;
        CaseWhen(&lt;br /&gt;
          Column(resourceColumn) == Column(&amp;quot;_Prev&amp;quot;), 0, &lt;br /&gt;
          Column(&amp;quot;_Prev&amp;quot;) == #expr{resourceValue}, -1,&lt;br /&gt;
          Column(resourceColumn) == #expr{resourceValue}, 1,&lt;br /&gt;
          0)&lt;br /&gt;
      })&lt;br /&gt;
      .WithColumn(`${resourceValue}_Usage`, #sql{Sum(Column(&amp;quot;_UsageDiff&amp;quot;), [TimeStamp, EventType])})&lt;br /&gt;
      .RemoveColumns([&amp;quot;_Prev&amp;quot;, &amp;quot;_UsageDiff&amp;quot;])&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let sdf = _;&lt;br /&gt;
  let allValues = sdf.SelectDistinct([resourceColumn]).OrderByColumns([resourceColumn], [true]).Collect().Column(resourceColumn);&lt;br /&gt;
  allValues.{&lt;br /&gt;
    let v = _;&lt;br /&gt;
    sdf = sdf.WithTotalUsageColumnOfSingleResource(resourceColumn, v)&lt;br /&gt;
  }&lt;br /&gt;
  sdf&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
dt&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .WithUsageColumns(&amp;lt;resource column name&amp;gt;)&lt;br /&gt;
  .OrderByColumns([dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;]], [true])&lt;br /&gt;
  .Collect().ToCsv()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Where:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;model id&amp;gt; is the id of the model containing event data to be examined.&lt;br /&gt;
* &amp;lt;resource column name&amp;gt; is the name of the column in the event data table of the specified model containing the resource being used by that event.&lt;br /&gt;
&lt;br /&gt;
NOTE: This expression uses functionalities that are only supported in Snowflake-based data tables.&lt;br /&gt;
&lt;br /&gt;
=== Create new Snowflake model from filter ===&lt;br /&gt;
This script creates a new Snowflake model (and two datatables, one for cases and one for events) containing the eventlog filtered by the given filter id. The script also works if the model doesn&#039;t have a cases datatable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let filter = FilterById(1); // filter id&lt;br /&gt;
let model = filter.model;&lt;br /&gt;
let project = model.project;&lt;br /&gt;
let nameSuffix = &amp;quot; - &amp;quot; + filter.name + &amp;quot; - &amp;quot; + ToString(Now, &amp;quot;dd-MM-yyyy HH:mm:ss&amp;quot;);&lt;br /&gt;
let eventsDatatableName = model.EventsDataTable.Name + nameSuffix;&lt;br /&gt;
if (eventsDatatableName.length &amp;gt; 440) {&lt;br /&gt;
  eventsDatatableName = eventsDatatableName.Substring(eventsDatatableName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
let eventsData = model&lt;br /&gt;
  .EventsDataTable&lt;br /&gt;
  .SqlDataFrame&lt;br /&gt;
  .ApplyFilter(&lt;br /&gt;
    filter.rules,&lt;br /&gt;
    model.CasesDataTable?.SqlDataFrame&lt;br /&gt;
  );&lt;br /&gt;
project&lt;br /&gt;
  .CreateDatatable(eventsDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
  .Import(eventsData);&lt;br /&gt;
let modelConfiguration = model.Configuration;&lt;br /&gt;
modelConfiguration.DataSource.Events.Set(&amp;quot;DataTableName&amp;quot;, eventsDatatableName);&lt;br /&gt;
if (model.CasesDataTable != null) {&lt;br /&gt;
  let eventsDataCaseIdColumn = &amp;quot;CaseId_&amp;quot; + ToString(Random());&lt;br /&gt;
  let casesDatatableName = model.CasesDataTable.Name + nameSuffix;&lt;br /&gt;
  if (casesDatatableName.length &amp;gt; 440) {&lt;br /&gt;
    casesDatatableName = casesDatatableName.Substring(casesDatatableName.length - 440);&lt;br /&gt;
  }&lt;br /&gt;
  let casesData = model&lt;br /&gt;
    .CasesDataTable&lt;br /&gt;
    .SqlDataFrame&lt;br /&gt;
    .join(&lt;br /&gt;
	  eventsData.SelectDistinct([eventsDataCaseIdColumn: modelConfiguration.DataSource.Events.Columns.CaseId]),&lt;br /&gt;
      [modelConfiguration.DataSource.Cases.Columns.CaseId: eventsDataCaseIdColumn]&lt;br /&gt;
	).Select(model.CasesDataTable.ColumnNames);&lt;br /&gt;
  project&lt;br /&gt;
    .CreateDatatable(casesDatatableName, #{&amp;quot;Connection&amp;quot;: CreateSnowflakeConnection()})&lt;br /&gt;
    .Import(casesData);&lt;br /&gt;
  modelConfiguration.DataSource.Cases.Set(&amp;quot;DataTableName&amp;quot;, casesDatatableName);&lt;br /&gt;
}&lt;br /&gt;
let modelName = model.Name + nameSuffix;&lt;br /&gt;
if (modelName.length &amp;gt; 440) {&lt;br /&gt;
  modelName = modelName.Substring(modelName.length - 440);&lt;br /&gt;
}&lt;br /&gt;
project&lt;br /&gt;
  .CreateModel(#{    &lt;br /&gt;
    &amp;quot;Name&amp;quot;: modelName,&lt;br /&gt;
    &amp;quot;Description&amp;quot;: model.Description,&lt;br /&gt;
    &amp;quot;Configuration&amp;quot;: modelConfiguration&lt;br /&gt;
  });&lt;br /&gt;
return modelName;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a model consisting of multiple copies of cases in an existing model ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * @name CreateTestModel&lt;br /&gt;
 * @description&lt;br /&gt;
 * Creates a new model (or overwrites an existing) to given target project with given number of &lt;br /&gt;
 * repetitions of given source model.&lt;br /&gt;
 * Each repetition adds an &amp;quot;&amp;lt;N&amp;gt;-&amp;quot; prefix to the CaseId columns, where N equals the repeat index.&lt;br /&gt;
 * @param sourceModel&lt;br /&gt;
 * PA model used for the source data and from where the connection is copied for the target model if a &lt;br /&gt;
 * new one has to be created.&lt;br /&gt;
 * @param numRepeats&lt;br /&gt;
 * Number of times the data in the source model should be repeated in the generated model.&lt;br /&gt;
 * @param targetProject&lt;br /&gt;
 * Project in which the target model resides.&lt;br /&gt;
 * @param targetModelName&lt;br /&gt;
 * Specifies the name of the test model in the given target project. If a model already exists with &lt;br /&gt;
 * given name, event and case data in this model will be replaced with the new generated event and &lt;br /&gt;
 * case data.&lt;br /&gt;
 * @returns&lt;br /&gt;
 * Model object of the test model having the newly generated data.&lt;br /&gt;
 */&lt;br /&gt;
function CreateTestModel(sourceModel, numRepeats, targetProject, targetModelName) &lt;br /&gt;
{&lt;br /&gt;
  let eventsColumnMappings = sourceModel.EventsDataTable.ColumnMappings;&lt;br /&gt;
  let casesColumnMappings = sourceModel.CasesDataTable.ColumnMappings;&lt;br /&gt;
  let connection = sourceModel.EventsDataTable.DataSourceConnection;&lt;br /&gt;
&lt;br /&gt;
  function CreateResultModel()&lt;br /&gt;
  {&lt;br /&gt;
    function GetTable(tableName) &lt;br /&gt;
    {&lt;br /&gt;
      let tableConfiguration = #{&lt;br /&gt;
        &amp;quot;Name&amp;quot;: tableName,&lt;br /&gt;
        &amp;quot;Connection&amp;quot;: connection&lt;br /&gt;
      };&lt;br /&gt;
      let resultTable = targetProject.DataTableByName(tableName);&lt;br /&gt;
      if (resultTable == null)&lt;br /&gt;
      {&lt;br /&gt;
        resultTable = targetProject.CreateDataTable(tableConfiguration)&lt;br /&gt;
          .Modify(#{&amp;quot;NameInDataSource&amp;quot;: null})&lt;br /&gt;
          .Synchronize();&lt;br /&gt;
      }&lt;br /&gt;
      return resultTable;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    let eventsTableName = `${targetModelName} - events`;&lt;br /&gt;
    let casesTableName = `${targetModelName} - cases`;&lt;br /&gt;
    let targetModel = targetProject.ModelByName(targetModelName);&lt;br /&gt;
    let eventsTable, casesTable = null;&lt;br /&gt;
&lt;br /&gt;
    if (targetModel != null)&lt;br /&gt;
    {&lt;br /&gt;
      eventsTable = targetModel.EventsDataTable;&lt;br /&gt;
      casesTable = targetModel.CasesDataTable;&lt;br /&gt;
    }&lt;br /&gt;
    else {&lt;br /&gt;
      eventsTable = GetTable(eventsTableName);&lt;br /&gt;
      if (sourceModel.CasesDataTable != null) {&lt;br /&gt;
        casesTable = GetTable(casesTableName);&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      let timestampMapping = eventsColumnMappings[&amp;quot;TimeStamp&amp;quot;];&lt;br /&gt;
      eventsColumnMappings.Remove(&amp;quot;TimeStamp&amp;quot;);&lt;br /&gt;
      eventsColumnMappings.Set(&amp;quot;Timestamp&amp;quot;, timestampMapping);&lt;br /&gt;
&lt;br /&gt;
      let modelConfiguration = #{&lt;br /&gt;
        &amp;quot;DataSource&amp;quot;: #{&lt;br /&gt;
          &amp;quot;Events&amp;quot;:#{&lt;br /&gt;
            &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
            &amp;quot;DataTableName&amp;quot;: eventsTableName,&lt;br /&gt;
            &amp;quot;Columns&amp;quot;: eventsColumnMappings&lt;br /&gt;
          }&lt;br /&gt;
        }&lt;br /&gt;
      };&lt;br /&gt;
&lt;br /&gt;
      if (casesColumnMappings != null) {&lt;br /&gt;
        modelConfiguration[&amp;quot;DataSource&amp;quot;].Set(&amp;quot;Cases&amp;quot;, #{&lt;br /&gt;
          &amp;quot;DataSourceType&amp;quot;: &amp;quot;datatable&amp;quot;,&lt;br /&gt;
          &amp;quot;DataTableName&amp;quot;: casesTableName,&lt;br /&gt;
          &amp;quot;Columns&amp;quot;: casesColumnMappings&lt;br /&gt;
        });&lt;br /&gt;
      }&lt;br /&gt;
&lt;br /&gt;
      targetModel = targetProject.CreateModel(#{&amp;quot;Name&amp;quot;: targetModelName, &amp;quot;Configuration&amp;quot;: modelConfiguration});&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    eventsTable.Truncate();&lt;br /&gt;
    casesTable?.Truncate();&lt;br /&gt;
&lt;br /&gt;
    return #{&lt;br /&gt;
      &amp;quot;TargetModel&amp;quot;: targetModel,&lt;br /&gt;
      &amp;quot;Events&amp;quot;: eventsTable,&lt;br /&gt;
      &amp;quot;Cases&amp;quot;: casesTable&lt;br /&gt;
    };&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  function RepeatNTimes(sourceDf, caseIdColumn, numRepeats)&lt;br /&gt;
  {&lt;br /&gt;
    let resultDf = null;&lt;br /&gt;
    for (let i = 1; i &amp;lt;= numRepeats; ++i) {&lt;br /&gt;
      let iterationDf = sourceDf&lt;br /&gt;
        .WithColumn(caseIdColumn, #sql{Concat(#expr{i}, &amp;quot;-&amp;quot;, Column(#expr{caseIdColumn}))});&lt;br /&gt;
      resultDf = resultDf == null ? iterationDf : resultDf.Append(iterationDf); &lt;br /&gt;
    }&lt;br /&gt;
    resultDf;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  let resultModel = CreateResultModel();&lt;br /&gt;
  let sourceEventDataDf = sourceModel.EventsDataTable.SqlDataFrame;&lt;br /&gt;
  let resultEventDataDf = RepeatNTimes(sourceEventDataDf, eventsColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
  resultModel[&amp;quot;Events&amp;quot;].Import(resultEventDataDf);&lt;br /&gt;
&lt;br /&gt;
  let sourceCaseDataDf = sourceModel.CasesDataTable?.SqlDataFrame;&lt;br /&gt;
  if (sourceCaseDataDf != null) {&lt;br /&gt;
    let resultCaseDataDf = RepeatNTimes(sourceCaseDataDf, casesColumnMappings[&amp;quot;CaseId&amp;quot;], numRepeats);&lt;br /&gt;
    resultModel[&amp;quot;Cases&amp;quot;].Import(resultCaseDataDf);&lt;br /&gt;
  }&lt;br /&gt;
  resultModel[&amp;quot;TargetModel&amp;quot;];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;Example usage:&amp;lt;blockquote&amp;gt;CreateTestModel(ProjectByName(&amp;quot;Project&amp;quot;).ModelByName(&amp;quot;SAP_OrderToCash - Snowflake&amp;quot;), 3, ProjectByName(&amp;quot;TestData&amp;quot;), &amp;quot;TestModel&amp;quot;);&amp;lt;/blockquote&amp;gt;Creates a new model named &amp;quot;TestModel&amp;quot; (or overwrites an existing one) in the project named &amp;quot;TestData&amp;quot;, containing the data from model &amp;quot;SAP_OrderToCash - Snowflake&amp;quot; in project &amp;quot;Project&amp;quot; repeated three times.&lt;br /&gt;
&lt;br /&gt;
=== Analyzing declare patterns found in event log ===&lt;br /&gt;
&lt;br /&gt;
This is an example expression showing how POSIX-style regular expressions can be used to search an event log for cases that contain certain event type patterns, known as [https://www.researchgate.net/publication/277631859_Generating_Event_Logs_Through_the_Simulation_of_Declare_Models declare patterns].&lt;br /&gt;
&lt;br /&gt;
Note: Before use, replace &amp;lt;model id&amp;gt; with the identifier of a model that contains event data.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot; line=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
let dt = ModelById(&amp;lt;model id&amp;gt;).EventsDataTable;&lt;br /&gt;
let sdf = dt.SqlDataFrame.Head(1000);&lt;br /&gt;
let caseIdColumn = dt.ColumnMappings[&amp;quot;CaseId&amp;quot;], timeStampColumn = dt.ColumnMappings[&amp;quot;TimeStamp&amp;quot;], eventTypeColumn = dt.ColumnMappings[&amp;quot;EventType&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
let eventTypesDf = sdf&lt;br /&gt;
  .SelectDistinct([eventTypeColumn])&lt;br /&gt;
  .OrderByColumns([eventTypeColumn], [true])&lt;br /&gt;
  .WithRowNumberColumn(&amp;quot;Token&amp;quot;, [eventTypeColumn])&lt;br /&gt;
  .WithColumn(&amp;quot;Token&amp;quot;, #sql{Char(Column(&amp;quot;Token&amp;quot;) + Unicode(&amp;quot;a&amp;quot;) - 1)});&lt;br /&gt;
&lt;br /&gt;
sdf = sdf&lt;br /&gt;
  .Join(eventTypesDf.Select([&amp;quot;_EventType2&amp;quot;: eventTypeColumn, &amp;quot;Token&amp;quot;]), [eventTypeColumn: &amp;quot;_EventType2&amp;quot;], &amp;quot;leftouter&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
function RespondedExistencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*((a.*b.*)|(b.*a.*))*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*((${a}.*${b}.*)|(${b}.*${a}.*))*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a.*b)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}.*${b})*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function AlternateResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(a[^a]*b[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}[^${a}]*${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function ChainResponsePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^a]*(ab[^a]*)*[^a]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${a}]*(${a}${b}[^${a}]*)*[^${a}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function PrecedencePattern(a, b)&lt;br /&gt;
{&lt;br /&gt;
  // [^b]*(a.*b)*[^b]*&lt;br /&gt;
  ToSqlExpression(`Regexp(Column(&amp;quot;Trace&amp;quot;), &amp;quot;[^${b}]*(${a}.*${b})*[^${b}]*&amp;quot;)`)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let tracesDf = sdf&lt;br /&gt;
  .GroupBy([caseIdColumn])&lt;br /&gt;
  .Aggregate([&amp;quot;Trace&amp;quot;: &amp;quot;Token&amp;quot;], [#{&amp;quot;Function&amp;quot;: &amp;quot;list&amp;quot;, &amp;quot;Ordering&amp;quot;: [timeStampColumn], &amp;quot;Separator&amp;quot;: &amp;quot;&amp;quot;}])&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceNL&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;RespondedExistenceCA&amp;quot;, #sql{#expr{RespondedExistencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternNL&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ResponsePatternCA&amp;quot;, #sql{#expr{ResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternNL&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;AlternateResponsePatternCA&amp;quot;, #sql{#expr{AlternateResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternNL&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;ChainResponsePatternCA&amp;quot;, #sql{#expr{ChainResponsePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternNL&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;n&amp;quot;, &amp;quot;l&amp;quot;)}})&lt;br /&gt;
  .WithColumn(&amp;quot;PrecedencePatternCA&amp;quot;, #sql{#expr{PrecedencePattern(&amp;quot;c&amp;quot;, &amp;quot;a&amp;quot;)}})&lt;br /&gt;
;&lt;br /&gt;
&lt;br /&gt;
[tracesDf.Collect().ToCsv(), eventTypesDf.Collect().ToCsv()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=SQL_Scripting_Commands&amp;diff=25670</id>
		<title>SQL Scripting Commands</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=SQL_Scripting_Commands&amp;diff=25670"/>
		<updated>2025-01-14T15:49:52Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* --#ImportSapQuery */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists QPR ProcessAnalyzer commands that can be used in SQL scripts. Each command precedes one or two SQL queries, which set parameters for the command or define the data used by the command.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;display: flex;flex-wrap: wrap;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 230px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
=== Data Extraction ===&lt;br /&gt;
* [[#--.23CallWebService|CallWebService]]&lt;br /&gt;
* [[#--.23ImportOdbcQuery|ImportOdbcQuery]]&lt;br /&gt;
* [[#--.23ImportOleDbQuery|ImportOleDbQuery]]&lt;br /&gt;
* [[#--.23ImportSalesforceQuery|ImportSalesforceQuery]]&lt;br /&gt;
* [[#--.23ImportSapQuery|ImportSapQuery]]&lt;br /&gt;
* [[#--.23ImportSqlQuery|ImportSqlQuery]] (ADO.Net)&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 230px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Data Output ===&lt;br /&gt;
* [[#--.23ImportDataTable|ImportDataTable]]&lt;br /&gt;
* [[#--.23SendEmail|SendEmail]]&lt;br /&gt;
* [[#--.23ShowReport|ShowReport]]&lt;br /&gt;
* [[#--.23WriteLog|WriteLog]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 230px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Script Flow ===&lt;br /&gt;
* [[#--.23RunQuery|RunQuery]] ([[RunQuery Script Examples|examples]])&lt;br /&gt;
* [[#--.23Commit|Commit]]&lt;br /&gt;
* [[#--.23Exit|Exit]]&lt;br /&gt;
* [[#--.23GetAnalysis|GetAnalysis]]&lt;br /&gt;
* [[#--.23Run|Run]]&lt;br /&gt;
* [[#--.23StartBackground|StartBackground]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= --#CallWebService =&lt;br /&gt;
Extracts data via Web Service. This command takes one SELECT query as parameter.&lt;br /&gt;
&lt;br /&gt;
== Query ==&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&lt;br /&gt;
; Address&lt;br /&gt;
: Defines the URI of the service to call. Mandatory.&lt;br /&gt;
; Method&lt;br /&gt;
: Defines the HTTP method to use for the call. Must be one of the following: GET (default), POST, PUT, DELETE. Optional.&lt;br /&gt;
; Body&lt;br /&gt;
: Defines the message body text to send to the service. Default value is empty. Optional.&lt;br /&gt;
; Encoding&lt;br /&gt;
: Defines the encoding method to use. The supported options are listed in the [https://msdn.microsoft.com/en-us/library/system.text.encoding%28v=vs.110%29.aspx System.Text.Encoding documentation]. Default value is UTF8. Optional.&lt;br /&gt;
; Timeout&lt;br /&gt;
: Number of milliseconds to wait before the request times out. Default value is 60000. Optional.&lt;br /&gt;
; ExecuteInClientSide&lt;br /&gt;
: Defines whether the web service call is made from the QPR ScriptLauncher or from the server. If TRUE or 1, the call is executed in the ScriptLauncher; if FALSE or 0, the call is executed in the server. Default value is FALSE. Optional.&lt;br /&gt;
; DefaultNetworkCredentials&lt;br /&gt;
: Optional. Defines the possibility to use default network credentials in web service calls:&lt;br /&gt;
: 1 = use the default network credentials.&lt;br /&gt;
: 0 = don&#039;t use the default network credentials.&lt;br /&gt;
: If the CallWebService command is run on the server side (ExecuteInClientSide=False), the default network credentials can be used only if AllowForwardingNetworkCredentials is set to true in the server configuration (it is false by default). If the command is run on the client side (ExecuteInClientSide=True), the default network credentials can always be used.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
;&amp;lt;nowiki&amp;gt;&amp;lt;other parameters&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
: All the rest of the passed parameters not listed above are added as extra headers to the request. For example, &#039;&#039;Content-Type&#039;&#039; and &#039;&#039;Accept&#039;&#039; HTTP headers can be added. Optional.&lt;br /&gt;
&lt;br /&gt;
== Result ==&lt;br /&gt;
The result of the request is passed to the script following the CallWebService operation in the following variables:&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ResponseText&amp;lt;/code&amp;gt; The response text received from the remote server. If there was an error in processing the request, this will contain the received error message. NVARCHAR(MAX). &lt;br /&gt;
: &amp;lt;code&amp;gt;@_ResponseStatusCode&amp;lt;/code&amp;gt; The numeric status code received from the remote server. INT. &lt;br /&gt;
: &amp;lt;code&amp;gt;@_ResponseSuccess&amp;lt;/code&amp;gt; True only if the request returned status code that represents a success. BIT.&lt;br /&gt;
&lt;br /&gt;
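For quick reference, a minimal configuration-query sketch is shown below; the address, timeout, and header values are illustrative placeholders:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--#CallWebService&lt;br /&gt;
SELECT &#039;Address&#039;, &#039;https://example.com/api/status&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Method&#039;, &#039;GET&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Timeout&#039;, &#039;30000&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Accept&#039;, &#039;application/json&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After the call, the response is available in @_ResponseText, @_ResponseStatusCode, and @_ResponseSuccess.&lt;br /&gt;
&lt;br /&gt;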
See examples at the [[CallWebService Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#Commit =&lt;br /&gt;
[https://docs.microsoft.com/en-us/sql/t-sql/language-elements/commit-transaction-transact-sql?view=sql-server-ver15 Commits] the currently open SQL transaction in the sandbox database and starts a new transaction. The commit command can be executed at any point in the script. Note that the command does not have any parameters, i.e. there is no preceding SELECT statement before the --#Commit statement.&lt;br /&gt;
&lt;br /&gt;
If the commit command is not used, the database transaction in the sandbox database is committed when the script is completed. On the other hand, if the script execution encounters an error, the SQL transaction is rolled back. &lt;br /&gt;
&lt;br /&gt;
The commit command is useful in the following circumstances:&lt;br /&gt;
* If the sandbox database is configured to allow storing permanent objects, commit can be used to preserve changes even if the script execution encounters an error.&lt;br /&gt;
* When the script handles a large amount of data, it&#039;s better to commit during the script run so that the database transaction log doesn&#039;t grow too large.&lt;br /&gt;
* Committing changes makes them visible for other users in the database.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--#Commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= --#Exit =&lt;br /&gt;
Stops the execution of the script and gives a message to the user. This command takes one SELECT query as its parameter.&lt;br /&gt;
&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; Exit&lt;br /&gt;
: Defines whether to stop the script execution:&lt;br /&gt;
: 1 = stop execution of the current script and call the script defined by the RunScriptId parameter if it is given.&lt;br /&gt;
: 0 = if a value for the RunScriptId parameter is given, pause the execution of the current script and call the given script, then resume running the current script after the given script ends. If a value for RunScriptId is not given, do not pause or stop execution of the current script.&lt;br /&gt;
; MessageText&lt;br /&gt;
: Text to be shown to the user after the script execution is finished if the script finished because of the Exit command, i.e. when Exit=1. The default value is &amp;quot;Script execution finished.&amp;quot;, which is shown also when the script finished normally, i.e. when Exit=0. The text is also written to the script log.&lt;br /&gt;
; RunScriptId&lt;br /&gt;
: Optional. The Id of the script to be run. Can be empty. Note that the script can call itself, so be careful not to create a looping script.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
&lt;br /&gt;
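For quick reference, a minimal sketch that stops the script with a custom message (the message text is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--#Exit&lt;br /&gt;
SELECT &#039;Exit&#039;, &#039;1&#039;&lt;br /&gt;
UNION ALL SELECT &#039;MessageText&#039;, &#039;Stopping: no rows to process.&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;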
See examples at the [[Exit Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#GetAnalysis =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #dfdfdf;padding:0.5em 1em 0.5em 1em;background-color:#E7EAEC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
--#GetAnalysis command is deprecated and it will be removed in a future release. Use the more flexible [[SQL_Scripting_Commands#--.23RunQuery|--#RunQuery]] command instead.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Creates an analysis from the data provided by the preceding SQL statements given as parameters. This command can take several queries, one for each analysis to be performed. The queries and their analysis results are independent of one another.&lt;br /&gt;
&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; &amp;lt;Analysis Parameter&amp;gt;&lt;br /&gt;
: The --#GetAnalysis command supports the following analysis types:&lt;br /&gt;
* DataTableAnalysis=18: Reads a data table from the SQL server and stores it in a temporary table&lt;br /&gt;
* Etl=19&lt;br /&gt;
* EtlReport=20&lt;br /&gt;
* RunScript=25&lt;br /&gt;
* ExpressionAnalysis=33&lt;br /&gt;
; TargetTable&lt;br /&gt;
: The temporary table to which the analysis is to be stored. When the TargetTable parameter is used, the &amp;quot;Table&amp;quot; result type of the ForceAnalysisResultType parameter is also automatically used. If the specified temporary table already exists in the database then its contents are deleted before storing analysis.&lt;br /&gt;
; Show&lt;br /&gt;
: Optional. If TRUE or 1, the analysis is opened after the script is run. If the Show parameter is set to TRUE or 1 and the TargetTable parameter is used in the same GetAnalysis command, the analysis result is stored in the target table in tabular format.&lt;br /&gt;
; Title&lt;br /&gt;
: Optional. Name of the CSV file created when Show is TRUE or 1. Default value is the name of the analysis type.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
; MaximumCount&lt;br /&gt;
: Used with the Operation Log Analysis analysis type. Integer. The maximum number of rows returned. Optional. Default value is 1000.&lt;br /&gt;
&lt;br /&gt;
See examples at the [[GetAnalysis Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportDataTable =&lt;br /&gt;
Imports data from an SQL query to a datatable. This command takes two SELECT queries as parameters.&lt;br /&gt;
&lt;br /&gt;
== First Query ==&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; ProjectId or ProjectName&lt;br /&gt;
: The id or the name of the project in which the target data table exists.&lt;br /&gt;
; DataTableId or DataTableName&lt;br /&gt;
: The id or the name of the existing/new target data table.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with the existing contents of the target datatable. When the value is 1, existing rows in the target datatable are not deleted (and new columns in the imported data are added to the datatable). When the value is 0, existing rows in the target datatable are deleted before the import (columns are still preserved). Not used when creating a new data table.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
&lt;br /&gt;
== Second Query ==&lt;br /&gt;
; &amp;lt;nowiki&amp;gt;&amp;lt;data&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
: The database query whose results are to be imported. Note that if the query doesn&#039;t return any data, the datatable is not created.&lt;br /&gt;
&lt;br /&gt;
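For quick reference, a minimal sketch with both queries; the project name, datatable name, and temporary table name are illustrative:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--#ImportDataTable&lt;br /&gt;
SELECT &#039;ProjectName&#039;, &#039;MyProject&#039;&lt;br /&gt;
UNION ALL SELECT &#039;DataTableName&#039;, &#039;ImportedEvents&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Append&#039;, &#039;0&#039;&lt;br /&gt;
SELECT * FROM #SourceData&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;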
See examples at the [[ImportDataTable Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportOdbcQuery =&lt;br /&gt;
Extracts data from an ODBC data source and imports it into a QPR ProcessAnalyzer datatable or a temporary table. Column names from the query result are used. If a column name contains characters that are illegal in table names, the illegal characters are converted to underscores. Columns are extracted as text data. To use ImportOdbcQuery, define a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and the value column are of type NVARCHAR. The supported keys for this command are:&lt;br /&gt;
&lt;br /&gt;
; TargetTable: The temporary table to which the data is to be imported. If not used, define the target using the ProjectId/ProjectName, DataTableId/DataTableName, and Append parameters described below. &lt;br /&gt;
; ProjectId / ProjectName&lt;br /&gt;
: The id or the name of the project in which the target data table exists.&lt;br /&gt;
; DataTableId / DataTableName&lt;br /&gt;
: The id or the name of the existing/new target data table.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with an existing target data table and its contents. TRUE or any integer other than 0 = the target data table and its existing contents are not deleted before the import. If a user imports into a data table with &#039;Append&#039; = FALSE or 0, the contents of the data table are deleted before the import. If a user imports into a temporary table (i.e. TargetTable) with &#039;Append&#039; = FALSE or 0, the whole temporary table is deleted before the import. Not used when creating a new data table.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
; OdbcConnectionString&lt;br /&gt;
: The ODBC driver connection string that includes the settings needed to establish the initial connection. Mandatory. See [http://msdn.microsoft.com/en-us/library/system.data.odbc.odbcconnection.connectionstring%28v=vs.110%29.aspx?cs-save-lang=1&amp;amp;cs-lang=csharp#code-snippet-1 OdbcConnection.ConnectionString Property in Microsoft Development Network] for more information on the possible connection strings.&lt;br /&gt;
; OdbcConnectionStringKey&lt;br /&gt;
: [[Storing_Secrets_for_Scripts|Secret name]] for the connection string. Alternative for the OdbcConnectionString property.&lt;br /&gt;
; OdbcQueryString&lt;br /&gt;
: The SQL query string. Mandatory. Note that if the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
; QueryExecutionTimeout&lt;br /&gt;
: Defines the timeout in seconds for the ODBC command execution. If not specified, the default value is 600 seconds.&lt;br /&gt;
; ExecuteInClientSide&lt;br /&gt;
: Defines whether the command is executed from the QPR ScriptLauncher or from the server. This parameter is used, for example, when there is no server connection available. If TRUE or 1, the query is executed in the QPR ScriptLauncher; if FALSE or 0, the query is executed in the server. Client-side execution supports only a data table as the import destination: if &#039;TargetTable&#039; has been defined as the import destination and the value of this parameter is TRUE or 1, an error message is shown. Optional. Default value is FALSE.&lt;br /&gt;
&lt;br /&gt;
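As a minimal sketch, the key/value configuration above is given as a SELECT statement preceding the command line in the script (the connection string, project, and table names below are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT &#039;OdbcConnectionString&#039;, &#039;Driver={SQL Server};Server=MyServer;Database=SourceDb;Trusted_Connection=yes;&#039;&lt;br /&gt;
UNION ALL SELECT &#039;OdbcQueryString&#039;, &#039;SELECT * FROM SourceTable&#039;&lt;br /&gt;
UNION ALL SELECT &#039;ProjectName&#039;, &#039;MyProject&#039;&lt;br /&gt;
UNION ALL SELECT &#039;DataTableName&#039;, &#039;MyDataTable&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Append&#039;, &#039;FALSE&#039;&lt;br /&gt;
--#ImportOdbcQuery&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;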
See examples in the [[ImportOdbcQuery Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportOleDbQuery =&lt;br /&gt;
Extracts data from an OLE DB data source and imports it to a QPR ProcessAnalyzer datatable or a temporary table. Column names from the query result are used. This command can both create new datatables and modify existing ones. To use the ImportOleDbQuery command, define a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&lt;br /&gt;
&lt;br /&gt;
; TargetTable&lt;br /&gt;
: The temporary table to which the data is to be imported. If not used, define the target using the ProjectId/ProjectName, DataTableId/DataTableName, and Append parameters described below.&lt;br /&gt;
; ProjectId / ProjectName&lt;br /&gt;
: The id or the name of the project in which the target data table exists.&lt;br /&gt;
; DataTableId / DataTableName&lt;br /&gt;
: The id or the name of the existing/new target data table.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with an existing target data table and its contents. When TRUE (or any non-zero integer), the target data table and its existing contents are not deleted before the import. When FALSE or 0 and importing into a data table, the contents of the data table are deleted before the import; when FALSE or 0 and importing into a temporary table (i.e. TargetTable), the whole temporary table is deleted before the import. Not used when creating a new data table.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
; OleDbConnectionString&lt;br /&gt;
: The OLE DB connection string that includes the settings needed to establish the initial connection. Mandatory. See [http://msdn.microsoft.com/en-us/library/system.data.oledb.oledbconnection.connectionstring%28v=vs.110%29.aspx OleDbConnection.ConnectionString Property in Microsoft Development Network] for more information on the possible connection strings.&lt;br /&gt;
; OleDbQueryString&lt;br /&gt;
: The SQL query string. Mandatory. Note that if the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
; QueryExecutionTimeout&lt;br /&gt;
: Defines timeout in seconds for the OLE DB command execution. If not specified, default value is 600 seconds.&lt;br /&gt;
; ExecuteInClientSide&lt;br /&gt;
: Defines whether the command is executed in the QPR ScriptLauncher or in the server. This parameter can be used, for example, when there is no server connection available. When TRUE or 1, the query is executed in the QPR ScriptLauncher; when FALSE or 0, the query is executed in the server. Only a data table is supported as the import destination: if &#039;TargetTable&#039; has been defined as the import destination and this parameter is TRUE (or any non-zero integer), you will receive an error message. Optional. Default value is FALSE.&lt;br /&gt;
&lt;br /&gt;
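As a sketch, an ImportOleDbQuery statement might look as follows (the provider, connection string, and names below are placeholders). With CatchOperationExceptions set to 1, a failed import sets &amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; to 1 instead of stopping the script, so the subsequent statements can react to the failure:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT &#039;OleDbConnectionString&#039;, &#039;Provider=MSOLEDBSQL;Data Source=MyServer;Initial Catalog=SourceDb;Integrated Security=SSPI;&#039;&lt;br /&gt;
UNION ALL SELECT &#039;OleDbQueryString&#039;, &#039;SELECT * FROM SourceTable&#039;&lt;br /&gt;
UNION ALL SELECT &#039;ProjectName&#039;, &#039;MyProject&#039;&lt;br /&gt;
UNION ALL SELECT &#039;DataTableName&#039;, &#039;MyDataTable&#039;&lt;br /&gt;
UNION ALL SELECT &#039;CatchOperationExceptions&#039;, &#039;1&#039;&lt;br /&gt;
--#ImportOleDbQuery&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;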
See examples at the [[ImportOleDbQuery Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportSalesforceQuery =&lt;br /&gt;
Extracts data from the Salesforce cloud using its REST API and imports the data to a datatable. The command takes one SELECT query as its parameter. If the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
&lt;br /&gt;
More information about the Salesforce REST API: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/intro_rest.htm.&lt;br /&gt;
&lt;br /&gt;
== Query ==&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; TargetTable&lt;br /&gt;
: Temporary table to which the data is imported. If not used, define the target using the ProjectId/ProjectName, DataTableId/DataTableName, and Append parameters described below.&lt;br /&gt;
; ProjectId / ProjectName&lt;br /&gt;
: Id or the name of the project in which the target datatable is located.&lt;br /&gt;
; DataTableId / DataTableName&lt;br /&gt;
: Id or the name of the target data table. If DataTableName is used, the ProjectId or ProjectName can also be used to define the project where the datatable is located.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with the existing contents of the target data table. When TRUE or 1, the existing contents of the target datatable are not deleted in the import. When FALSE or 0, the existing contents of the target datatable are deleted before importing new data. Not used when creating a new data table.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
; SalesforceUser&lt;br /&gt;
: Username for the Salesforce cloud.&lt;br /&gt;
; SalesforcePW&lt;br /&gt;
: Password for the Salesforce cloud.&lt;br /&gt;
; SalesforcePWKey&lt;br /&gt;
: [[Storing_Secrets_for_Scripts|Secret name]] for the stored Salesforce password. Alternative for the SalesforcePW property.&lt;br /&gt;
; SalesforceUrl&lt;br /&gt;
: Optional. Salesforce web service url.&lt;br /&gt;
; SalesforceQueryMode&lt;br /&gt;
: Optional. Determines which Salesforce query function to use. One of the following values (1, 2 or 3) can be used:&lt;br /&gt;
: 1: &#039;&#039;&#039;QueryAll&#039;&#039;&#039; (default): Executes specified SOQL query, except unlike &#039;&#039;Query&#039;&#039;, &#039;&#039;QueryAll&#039;&#039; returns records that are deleted because of a merge or delete. More information: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_queryall.htm.&lt;br /&gt;
: 2: &#039;&#039;&#039;Query&#039;&#039;&#039;: Executes the specified SOQL query. More information: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm.&lt;br /&gt;
: 3: &#039;&#039;&#039;sObject Describe&#039;&#039;&#039;: Completely describes the individual metadata at all levels for the specified object. For example, this can be used to retrieve the fields, URLs, and child relationships for the Account object. More information: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_describe.htm.&lt;br /&gt;
; SalesforceQuery&lt;br /&gt;
: Query to run in the Salesforce cloud to fetch the data, defined as SOQL (Salesforce Object Query Language). More information: https://developer.salesforce.com/docs/atlas.en-us.236.0.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_sosl_intro.htm.&lt;br /&gt;
; SalesforceQueryRetries&lt;br /&gt;
: Optional. Number of retries to attempt if the Salesforce query doesn&#039;t succeed. Default value is 3.&lt;br /&gt;
; SalesforceQueryRetryWait&lt;br /&gt;
: Optional. Number of milliseconds to wait between query retries. Default is 3000 ms.&lt;br /&gt;
; SalesforceBatchSize&lt;br /&gt;
: Optional. Data is queried from Salesforce in batches, and this setting determines the batch size. The value can be between 200 and 2000, and the default value is 500.&lt;br /&gt;
&lt;br /&gt;
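As a sketch, a Salesforce import into a datatable might be configured as follows (the username, secret name, and object fields below are placeholders; the password is read from a stored secret via SalesforcePWKey):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT &#039;SalesforceUser&#039;, &#039;integration.user@example.com&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SalesforcePWKey&#039;, &#039;SalesforcePassword&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SalesforceQuery&#039;, &#039;SELECT Id, Name, CreatedDate FROM Account&#039;&lt;br /&gt;
UNION ALL SELECT &#039;ProjectName&#039;, &#039;MyProject&#039;&lt;br /&gt;
UNION ALL SELECT &#039;DataTableName&#039;, &#039;Accounts&#039;&lt;br /&gt;
--#ImportSalesforceQuery&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;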
== Notes ==&lt;br /&gt;
If you get error &#039;&#039;INVALID_TYPE sObject type &#039;Objectname&#039; is not supported&#039;&#039;:&lt;br /&gt;
* Check that the object in question exists or that the object name is correct.&lt;br /&gt;
* Verify that the Salesforce user has rights to the object. &lt;br /&gt;
** You have to give access to the new custom objects and VisualForce pages from the user&#039;s profile, and you have to check the &amp;quot;Customize Application&amp;quot; checkbox under the same profile (https://developer.salesforce.com/forums/?id=906F00000008qG6IAI). Contact your Salesforce administrator.&lt;br /&gt;
* The Salesforce user may need extra license to access the object. Special 3rd party custom objects may need separate license. Contact your Salesforce application administrator.&lt;br /&gt;
&lt;br /&gt;
See examples at the [[ImportSalesforceQuery Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportSapQuery =&lt;br /&gt;
Extracts data from an SAP system and imports it to QPR ProcessAnalyzer datatable or a temporary table. Column names from the query result are used. If a column name contains illegal characters for table names, the illegal characters are converted to be underscore characters (e.g. &amp;quot;sap:Owner&amp;quot; -&amp;gt; &amp;quot;sap_Owner&amp;quot;). Columns are extracted as text data. Note that using this command requires [[QPR_ProcessAnalyzer_ScriptLauncher#Installing_SAP_NetWeaver_RFC_Library|installing SAP NetWeaver RFC Library]].&lt;br /&gt;
&lt;br /&gt;
To use the ImportSapQuery command, define a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&lt;br /&gt;
; TargetTable&lt;br /&gt;
: If this parameter is given, the results are stored into a temporary SQL table in the ETL sandbox. If the TargetTable parameter is not given, define the target using the ProjectId/ProjectName, DataTableId/DataTableName, and Append parameters described below.&lt;br /&gt;
; ProjectId / ProjectName&lt;br /&gt;
: The id or the name of the project in which the target data table exists.&lt;br /&gt;
; DataTableId / DataTableName&lt;br /&gt;
: The id or the name of the existing/new target data table.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with an existing target data table and its contents. When TRUE (or any non-zero integer), the target data table and its existing contents are not deleted before the import. When FALSE or 0 and importing into a data table, the contents of the data table are deleted before the import; when FALSE or 0 and importing into a temporary table (i.e. TargetTable), the whole temporary table is deleted before the import. Not used when creating a new data table.&lt;br /&gt;
; ConvertDataTypes&lt;br /&gt;
: List of SAP data types that are converted into respective data types supported by SQL Server instead of using NVARCHAR. Defined by listing the data type identifier characters in any order. Available data type identifying characters are &#039;&#039;&#039;IFPCDTNX&#039;&#039;&#039;. If not defined, all data is converted to NVARCHAR. Example: &#039;&#039;IFP&#039;&#039; (convert only numeric data types: Integer, Float, Packed number) ([[Importing_Data_from_SAP|more information]]).&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
;ImportChunkSize&lt;br /&gt;
:Specifies the size of the chunks of data used when importing the data from SAP to the QPR ProcessAnalyzer server. The value is the approximate maximum number of data cells in each chunk (a chunk of a table consists of &amp;lt;number of rows&amp;gt; * &amp;lt;number of columns&amp;gt; data cells). Default value is 200000. A smaller value splits big imports into more chunks, taking more time in total, but makes importing each chunk faster, possibly helping in some timeout situations.&lt;br /&gt;
; SapUser&lt;br /&gt;
: SAP username used to connect to SAP. Mandatory. Corresponds to the &amp;quot;USER&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapPW&lt;br /&gt;
: Password of the SAP user used to connect to SAP. Mandatory. Corresponds to the &amp;quot;PASSWD&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapPWKey&lt;br /&gt;
: [[Storing_Secrets_for_Scripts|Secret name]] for the stored SAP password. Alternative for the SapPW property.&lt;br /&gt;
; SapClient&lt;br /&gt;
: The SAP backend client. Mandatory. Corresponds to the &amp;quot;CLIENT&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapAppServerHost&lt;br /&gt;
: The hostname or IP of the specific SAP application server, to which all connections shall be opened. Mandatory if SapMessageServerHost is not defined. Corresponds to the &amp;quot;ASHOST&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapMessageServerHost&lt;br /&gt;
: The hostname or IP of the SAP system’s message server (central instance). Mandatory if SapAppServerHost is not defined. Corresponds to the &amp;quot;MSHOST&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapSystemNumber&lt;br /&gt;
: The SAP system’s system number. Mandatory if SapSystemID is not defined. Corresponds to the &amp;quot;SYSNR&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapSystemID&lt;br /&gt;
: The SAP system’s three-letter system ID. Mandatory if SapSystemNumber is not defined. Corresponds to the &amp;quot;SYSID&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; ExecuteInClientSide&lt;br /&gt;
: Defines whether the command is executed in the QPR ScriptLauncher or in the server. This parameter can be used, for example, when there is no server connection available. When TRUE or 1, the query is executed in the QPR ScriptLauncher; when FALSE or 0, the query is executed in the server. Only a data table is supported as the import destination: if &#039;TargetTable&#039; has been defined as the import destination and this parameter is TRUE (or any non-zero integer), you will receive an error message. Optional. Default value is FALSE.&lt;br /&gt;
; SapLanguage&lt;br /&gt;
: SAP language used. Default value is &amp;quot;EN&amp;quot;. Optional. Corresponds to the &amp;quot;LANG&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapPoolSize&lt;br /&gt;
: The maximum number of RFC connections that this destination will keep in its pool. Default value is &amp;quot;5&amp;quot;. Optional. Corresponds to the &amp;quot;POOL_SIZE&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapMaxPoolSize&lt;br /&gt;
: The maximum number of RFC connections that can be opened for this destination; use this parameter to prevent an unlimited number of connections from being opened. Default value is &amp;quot;10&amp;quot;. Optional. Corresponds to the &amp;quot;MAX_POOL_SIZE&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapConnectionIdleTimeout&lt;br /&gt;
: If a connection has been idle for more than SapConnectionIdleTimeout seconds, it will be closed and removed from the connection pool upon checking for idle connections or pools. Default value is &amp;quot;600&amp;quot;. Optional. Corresponds to the &amp;quot;IDLE_TIMEOUT&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapRouter&lt;br /&gt;
: List of host names and service names / port numbers for the SAPRouter in the following format: /H/hostname/S/portnumber. Optional. Corresponds to the &amp;quot;SAPROUTER&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapLogonGroup&lt;br /&gt;
: The logon group from which the message server shall select an application server. Optional. Corresponds to the &amp;quot;GROUP&amp;quot; constant on SAP side. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapQueryMode&lt;br /&gt;
: If this number is set to &amp;quot;1&amp;quot;, then the query result will have the SAP Table field names as data table column names and actual data rows as rows. If this is set to &amp;quot;3&amp;quot;, the query result will get the field descriptions from the SAP query using NO_DATA parameter, i.e. the returned columns are the following (in this order): Field, Type, Description, Length, Offset. Default value is &amp;quot;1&amp;quot;. Optional. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapQueryTable&lt;br /&gt;
: Name of the SAP table to be extracted. Specifies the value for the parameter QUERY_TABLE in tab: &#039;Import&#039; or function module &#039;rfc_read_table&#039; in SAP. Mandatory. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info. Note that if the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
; SapRowcount&lt;br /&gt;
: The maximum amount of rows to fetch. Specifies the value for parameter ROWCOUNT in tab: &#039;Import&#039; or function module &#039;rfc_read_table&#039; in SAP. Optional. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapRowskips&lt;br /&gt;
: The number of rows to skip. Specifies the value for parameter ROWSKIPS in tab: &#039;Import&#039; or function module &#039;rfc_read_table&#039; in SAP. Optional. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapWhereClause&lt;br /&gt;
: A comma separated list of WHERE clause elements passed for the SapQueryTable. Can be used with or without the SapWhereClauseSelect parameter. If used together with the SapWhereClauseSelect parameter, use the SapWhereClause parameter first. NOTE: The default maximum length for the Where Clause string is 72 characters in SAP, so the recommended maximum length of the SapWhereClause value is also 72 characters. In effect, specifies the value for parameter OPTIONS in tab: &#039;Import&#039; or function module &#039;rfc_read_table&#039; in SAP. Optional. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapWhereClauseSelect&lt;br /&gt;
: The SELECT query to be executed in QPR ProcessAnalyzer sandbox. Used with or without the SapWhereClause parameter to pass WHERE clauses to SapQueryTable. If used together with the SapWhereClause parameter, use the SapWhereClause parameter first. The query is expected to return a table with at least one column, as the contents from the rows in the first column of the table are concatenated together to form the WHERE clause in SAP RFC_ReadTable. Therefore, it&#039;s recommended to first create the table with the WHERE clauses into a temporary table. In addition, it&#039;s recommended to have an order number column in the table and use that in the SELECT query to make sure the WHERE clause elements are concatenated in the correct order. The default maximum length for Where Clause string is 72 characters in SAP, so the recommended maximum length for the WHERE clause string in each row of the table is also 72. In effect, specifies the value for parameter OPTIONS in tab: &#039;Import&#039; or function module &#039;rfc_read_table&#039; in SAP. Optional. The contents up to the first 10 rows in the first column of the SELECT query are shown in the QPR ProcessAnalyzer [[QPR_ProcessAnalyzer_Logs#Script_Log|Script Log]]. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&amp;lt;br/&amp;gt;&lt;br /&gt;
; SapFieldNames&lt;br /&gt;
: A comma separated list of field names for columns to be imported. Default value is empty, resulting in all columns being imported. Specifies the value for parameter FIELDNAME in tab: &#039;Tables&#039; for table &#039;FIELDS&#039; for function module &#039;rfc_read_table&#039; in SAP. Optional. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; SapFunction&lt;br /&gt;
: Specifies the SAP function that is called inside the #ImportSapQuery command. Optional. The default value is RFC_READ_TABLE; another possible value is BBP_RFC_READ_TABLE. See the [http://help.sap.com/saphelp_nw04/helpdata/en/e9/23c80d66d08c4c8c044a3ea11ca90f/content.htm SAP .NET Connector documentation] for more info.&lt;br /&gt;
; UseAnyAsColumnType&lt;br /&gt;
: Determines datatable column data types for the created columns. When &#039;&#039;true&#039;&#039;, &amp;quot;Any&amp;quot; type of columns are created (resulting in SQL_variant columns in SQL Server), and when &#039;&#039;false&#039;&#039;, data types depend on the ConvertDataTypes parameter. The default value is true when running the import in a SQL script, and false otherwise.&lt;br /&gt;
;AliasUser&lt;br /&gt;
:&lt;br /&gt;
;AppServerService&lt;br /&gt;
:&lt;br /&gt;
;CharacterFaultIndicatorToken&lt;br /&gt;
:&lt;br /&gt;
;Codepage&lt;br /&gt;
:&lt;br /&gt;
;GatewayHost&lt;br /&gt;
:&lt;br /&gt;
;GatewayService&lt;br /&gt;
:&lt;br /&gt;
;IdleCheckTime&lt;br /&gt;
:&lt;br /&gt;
;LogonCheck&lt;br /&gt;
:&lt;br /&gt;
;MaxPoolWaitTime&lt;br /&gt;
:&lt;br /&gt;
;MessageServerService&lt;br /&gt;
:&lt;br /&gt;
;Name&lt;br /&gt;
:&lt;br /&gt;
;NoCompression&lt;br /&gt;
:&lt;br /&gt;
;OnCharacterConversionError&lt;br /&gt;
:&lt;br /&gt;
;PartnerCharSize&lt;br /&gt;
:&lt;br /&gt;
;PasswordChangeEnforced&lt;br /&gt;
:&lt;br /&gt;
;ProgramId&lt;br /&gt;
:&lt;br /&gt;
;R3Name&lt;br /&gt;
:&lt;br /&gt;
;RegistrationCount&lt;br /&gt;
:&lt;br /&gt;
;RepositoryDestination&lt;br /&gt;
:&lt;br /&gt;
;RepositoryPassword&lt;br /&gt;
:&lt;br /&gt;
;RepositorySncMyName&lt;br /&gt;
:&lt;br /&gt;
;RepositoryUser&lt;br /&gt;
:&lt;br /&gt;
;RepositoryX509Certificate&lt;br /&gt;
:&lt;br /&gt;
;SapSso2Ticket&lt;br /&gt;
:&lt;br /&gt;
;SncLibraryPath&lt;br /&gt;
:Full path including file name of the [[Importing_Data_from_SAP#SNC_encrypted_connection|SNC]] shared library to be used.&lt;br /&gt;
;SncMode&lt;br /&gt;
: Determines whether connections will be secured with [[Importing_Data_from_SAP#SNC_encrypted_connection|SNC]]. Value &#039;&#039;&#039;0&#039;&#039;&#039; doesn&#039;t use SNC (default) and value &#039;&#039;&#039;1&#039;&#039;&#039; uses SNC.&lt;br /&gt;
;SncMyName&lt;br /&gt;
:Token/identifier representing the external RFC program. In most cases this can be omitted, as the installed [[Importing_Data_from_SAP#SNC_encrypted_connection|SNC]] solution usually knows its own SNC name. Only for solutions supporting “multiple identities” you may need to specify the identity to be used for this particular destination/server. The value varies depending on the installed SNC solution (Secude, Kerberos, NTLM, etc.). Example for Secude: p/secude:CN=ALEREMOTE, O=Mustermann-AG, C=DE&lt;br /&gt;
;SncPartnerName&lt;br /&gt;
:The backend&#039;s [[Importing_Data_from_SAP#SNC_encrypted_connection|SNC]] name.&lt;br /&gt;
;SncPartnerNames&lt;br /&gt;
:&lt;br /&gt;
;SncQop&lt;br /&gt;
:Quality of service to be used for SNC communication of this particular destination/server. One of the following values:&lt;br /&gt;
* 1: Digital signature&lt;br /&gt;
* 2: Digital signature and encryption&lt;br /&gt;
* 3: Digital signature, encryption, and user authentication&lt;br /&gt;
* 8: Default value defined by back-end system&lt;br /&gt;
* 9: Maximum value that the current security product supports&lt;br /&gt;
;SystemIds&lt;br /&gt;
:&lt;br /&gt;
;UseSapGui&lt;br /&gt;
:&lt;br /&gt;
;X509Certificate&lt;br /&gt;
:&lt;br /&gt;
&lt;br /&gt;
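As a sketch, extracting rows from a single SAP table might be configured as follows (the connection details, table name, and date filter below are placeholders; note the doubled single quotes needed to embed quotes in the SapWhereClause value, and that the password is read from a stored secret via SapPWKey):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT &#039;SapUser&#039;, &#039;RFCUSER&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapPWKey&#039;, &#039;SapPassword&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapClient&#039;, &#039;100&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapAppServerHost&#039;, &#039;sap.example.com&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapSystemNumber&#039;, &#039;00&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapQueryTable&#039;, &#039;VBAK&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SapWhereClause&#039;, &#039;ERDAT &amp;gt;= &#039;&#039;20230101&#039;&#039;&#039;&lt;br /&gt;
UNION ALL SELECT &#039;ConvertDataTypes&#039;, &#039;IFP&#039;&lt;br /&gt;
UNION ALL SELECT &#039;ProjectName&#039;, &#039;MyProject&#039;&lt;br /&gt;
UNION ALL SELECT &#039;DataTableName&#039;, &#039;SalesOrderHeaders&#039;&lt;br /&gt;
--#ImportSapQuery&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;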
See examples at the [[ImportSapQuery Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ImportSqlQuery =&lt;br /&gt;
Extracts data from an ADO.NET source (usually an SQL Server database) and imports it to a QPR ProcessAnalyzer datatable or a temporary table. Column names from the query result are used. This command can both create new data tables and modify existing ones. To use the ImportSqlQuery command, define a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&lt;br /&gt;
&lt;br /&gt;
; TargetTable&lt;br /&gt;
: The temporary table to which the data is to be imported. If not used, define the target using the ProjectId/ProjectName, DataTableId/DataTableName, and Append parameters described below.&lt;br /&gt;
; ProjectId / ProjectName&lt;br /&gt;
: The id or the name of the project in which the target data table exists.&lt;br /&gt;
; DataTableId / DataTableName&lt;br /&gt;
: The id or the name of the existing/new target data table.&lt;br /&gt;
; Append&lt;br /&gt;
: Defines what to do with an existing target data table and its contents. When TRUE (or any non-zero integer), the target data table and its existing contents are not deleted before the import. When FALSE or 0 and importing into a data table, the contents of the data table are deleted before the import; when FALSE or 0 and importing into a temporary table (i.e. TargetTable), the whole temporary table is deleted before the import. Not used when creating a new data table.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
; SqlConnectionString&lt;br /&gt;
: The SQL connection string that includes the settings needed to establish the initial connection. Mandatory. See [http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring%28v=vs.110%29.aspx SqlConnection.ConnectionString Property in Microsoft Development Network] for more information on the connection parameters.&lt;br /&gt;
; SqlQueryString&lt;br /&gt;
: The SQL query string. Mandatory. Note that if the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
; QueryExecutionTimeout&lt;br /&gt;
: Defines timeout in seconds for the SQL command execution. If not specified, default value is 600 seconds.&lt;br /&gt;
; ExecuteInClientSide&lt;br /&gt;
: Defines whether the command is executed by QPR ScriptLauncher (client side) or by the server. Use this parameter, for example, when no server connection is available. TRUE or 1 = the query is executed on the client side; FALSE or 0 = the query is executed on the server side. Only a data table is supported as the import destination: if &#039;TargetTable&#039; has been defined as the import destination and the value of this parameter is TRUE (or any integer other than &amp;quot;0&amp;quot;), you will receive an error message. Optional. Default value is FALSE.&lt;br /&gt;
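&lt;br /&gt;
As a sketch (assuming the key/value SELECT convention used by the script commands; the connection string, query, and target name are placeholders), a minimal ImportSqlQuery call might look like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;SqlConnectionString&#039;, &#039;Server=db01;Database=Sales;Integrated Security=SSPI&#039;&lt;br /&gt;
UNION ALL SELECT &#039;SqlQueryString&#039;, &#039;SELECT OrderId, CreatedDate FROM Orders&#039;&lt;br /&gt;
UNION ALL SELECT &#039;TargetTable&#039;, &#039;RawOrders&#039;&lt;br /&gt;
--#ImportSqlQuery&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;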
&lt;br /&gt;
See examples at the [[ImportSqlQuery Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#Run =&lt;br /&gt;
Runs another script with the specified parameters. This command can take multiple SELECT queries, which are passed as parameters to the called script. The first SELECT configures the script call by defining the id of the script to be called.&lt;br /&gt;
&lt;br /&gt;
== First Query ==&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; ScriptId&lt;br /&gt;
: Mandatory. The Id of the called script.&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
: &amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
&lt;br /&gt;
== Following Queries ==&lt;br /&gt;
Subsequent queries are optional; they are used for passing parameters to the called script. The maximum number of arguments is 10. Each argument is created as a temporary table with the names &#039;&#039;&#039;#_Arg1&#039;&#039;&#039;, &#039;&#039;&#039;#_Arg2&#039;&#039;&#039;, ... &#039;&#039;&#039;#_Arg10&#039;&#039;&#039;. In the created temporary tables, all columns are of the type SQL_VARIANT. If the column names have not been specified, then &#039;&#039;&#039;Value_0&#039;&#039;&#039;, &#039;&#039;&#039;Value_1&#039;&#039;&#039; etc. are used.&lt;br /&gt;
The possible arguments are as follows:&lt;br /&gt;
* &#039;&#039;&#039;@_Argv&#039;&#039;&#039;: Number of provided parameters (between 0 and 10) (type INT)&lt;br /&gt;
* &#039;&#039;&#039;#_Arg1&#039;&#039;&#039;, &#039;&#039;&#039;#_Arg2&#039;&#039;&#039;, ... &#039;&#039;&#039;#_Arg10&#039;&#039;&#039;: arguments passed to that script&lt;br /&gt;
&lt;br /&gt;
Each argument exists in the called script until the next --#Run command is executed in that script. After the called script has finished, the main script continues its execution.&lt;br /&gt;
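&lt;br /&gt;
As a sketch (assuming the key/value convention described above; the script id and the temporary table name are placeholders), a --#Run call passing one argument might look like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;ScriptId&#039;, &#039;42&#039;&lt;br /&gt;
SELECT CaseId, Cost FROM #ImportedRows&lt;br /&gt;
--#Run&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In the called script, the second query would be available as the temporary table &#039;&#039;&#039;#_Arg1&#039;&#039;&#039;.&lt;br /&gt;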
&lt;br /&gt;
See examples at the [[Run Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#RunQuery =&lt;br /&gt;
Runs an [[Web_API:_Expression/query|expression language query]] and stores the results to a [[QPR_ProcessAnalyzer_Project_Workspace#Datatables|datatable]] or to a temporary table in the scripting database. The following parameters can be used in the command:&lt;br /&gt;
* &#039;&#039;&#039;Configuration&#039;&#039;&#039;: Expression language query to run, written in JSON as specified in [[Web_API:_Expression/query|Web API: Expression/query]]. Queries can be created using a [[QPR_ProcessAnalyzer_Chart|chart]]: open the &#039;&#039;&#039;Query&#039;&#039;&#039; (in the &#039;&#039;&#039;Advanced&#039;&#039;&#039; tab) to see the query made by the chart, which is compatible with what can be specified in the &#039;&#039;Configuration&#039;&#039; parameter.&lt;br /&gt;
* &#039;&#039;&#039;TargetTable&#039;&#039;&#039;: When specified, results are stored to a temporary table with that name in the scripting sandbox. The temporary table can be read using the subsequent commands. When the script ends, temporary tables are automatically removed.&lt;br /&gt;
* &#039;&#039;&#039;DatatableId&#039;&#039;&#039;: When specified, data is stored to the defined existing datatable. When using the datatable id, neither the ProjectName nor the ProjectId parameter needs to be defined.&lt;br /&gt;
* &#039;&#039;&#039;DataTableName&#039;&#039;&#039;: When specified, data is stored to the datatable with that name, located in the same project as the script. If you want to use a different project, specify either the ProjectName or the ProjectId parameter.&lt;br /&gt;
* &#039;&#039;&#039;ProjectName&#039;&#039;&#039;: Specifies a project by name where the results datatable is stored. Used together with the DataTableName parameter.&lt;br /&gt;
* &#039;&#039;&#039;ProjectId&#039;&#039;&#039;: Specifies a project by id where the results datatable is stored. Used together with the DataTableName parameter.&lt;br /&gt;
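&lt;br /&gt;
As a sketch (assuming the same key/value SELECT convention as the other script commands; the target name is a placeholder and the JSON query is elided), a --#RunQuery call might look like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;TargetTable&#039;, &#039;QueryResults&#039;&lt;br /&gt;
UNION ALL SELECT &#039;Configuration&#039;, &#039;...&#039; -- JSON query as specified in Web API: Expression/query&lt;br /&gt;
--#RunQuery&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;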
&lt;br /&gt;
See [[RunQuery Script Examples]].&lt;br /&gt;
&lt;br /&gt;
= --#SendEmail =&lt;br /&gt;
Sends an e-mail and writes a message to the script log indicating whether sending the email was successful. Script execution continues even if the sending is not successful.&lt;br /&gt;
&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; CatchOperationExceptions&lt;br /&gt;
: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
&#039;&#039;&#039;E-mail Parameters&#039;&#039;&#039;&lt;br /&gt;
; EmailFrom&lt;br /&gt;
: Defines the from address for this e-mail message. Mandatory.&lt;br /&gt;
; EmailTo&lt;br /&gt;
: Defines the recipient(s) for this e-mail message, given as a comma-separated list. Mandatory.&lt;br /&gt;
; EmailSubject&lt;br /&gt;
: Defines the subject of the email. Default value is empty. Optional.&lt;br /&gt;
; EmailBody&lt;br /&gt;
: Defines the message body. Default value is empty. Optional.&lt;br /&gt;
; EmailCc&lt;br /&gt;
: Defines the carbon copy recipient(s) for this e-mail message, given as a comma-separated list. Optional.&lt;br /&gt;
; EmailBcc&lt;br /&gt;
: Defines the blind carbon copy recipient(s) for this e-mail message, given as a comma-separated list. Optional.&lt;br /&gt;
; EmailIsBodyHtml&lt;br /&gt;
: Defines whether the e-mail message body is in HTML. TRUE (or any integer other than &amp;quot;0&amp;quot;) = body is in HTML; FALSE (or &amp;quot;0&amp;quot;) = body is not in HTML. Default value is FALSE. Optional.&lt;br /&gt;
; EmailSender&lt;br /&gt;
: Defines the sender&#039;s address for this e-mail message. Default value is empty. Optional.&lt;br /&gt;
; EmailReplyTo&lt;br /&gt;
: Defines the ReplyTo address(es) for the mail message, given as a comma-separated list. Optional.&lt;br /&gt;
; EmailPriority&lt;br /&gt;
: Defines the priority of this e-mail message. Possible values are &amp;quot;High&amp;quot;, &amp;quot;Normal&amp;quot;, and &amp;quot;Low&amp;quot;. Default value is &amp;quot;Normal&amp;quot;. Optional.&lt;br /&gt;
; EmailDeliveryNotification&lt;br /&gt;
: Defines the delivery notifications for this e-mail message. Possible values are &amp;quot;Delay&amp;quot;, &amp;quot;Never&amp;quot;, &amp;quot;None&amp;quot;, &amp;quot;OnFailure&amp;quot;, and &amp;quot;OnSuccess&amp;quot;. Default value is &amp;quot;None&amp;quot;. Optional.&lt;br /&gt;
; EmailBodyEncoding&lt;br /&gt;
: Defines the encoding used to encode the message body. Supported encodings are listed in the &amp;quot;Remarks&amp;quot; section at http://msdn.microsoft.com/en-us/library/System.Text.Encoding.aspx. UTF8 is used by default. Optional.&lt;br /&gt;
; EmailSubjectEncoding&lt;br /&gt;
: Defines the encoding used for the subject content for this e-mail message. Supported encodings are listed in the &amp;quot;Remarks&amp;quot; section at http://msdn.microsoft.com/en-us/library/System.Text.Encoding.aspx. UTF8 is used by default. Optional.&lt;br /&gt;
; EmailAttachmentQuery&lt;br /&gt;
: Defines a query to fetch the parameters for adding attachments to the email. Each row (except the header row) in the query result corresponds to one attachment. The result must contain the following columns in this order: name of the attachment, content of the attachment (sent as-is without any modifications; supports binary values), media type (supported types are text/plain, text/html, text/xml, and image/jpeg), and creation time (SQL datetime). The names of the columns do not matter. If the result doesn&#039;t contain all of these columns, an error is written into the Progress log, and the email is not sent. Optional.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;SMTP Server Parameters&#039;&#039;&#039;&lt;br /&gt;
; SmtpServer&lt;br /&gt;
: Defines the hostname or the IP address of the server. Mandatory for the first occurrence of the SendEmail command during script execution.&lt;br /&gt;
; SmtpPort&lt;br /&gt;
: Defines the port of the SMTP server. Default value is &amp;quot;25&amp;quot;. Optional.&lt;br /&gt;
; SmtpAuthenticationUsername&lt;br /&gt;
: Defines the user name for the SMTP server. Note that the user name is in plain text and visible to all users who have access to the script. Optional.&lt;br /&gt;
; SmtpAuthenticationPassword&lt;br /&gt;
: Defines the password for the SMTP server. Note that the password is in plain text and visible to all users who have access to the script. Optional.&lt;br /&gt;
; SmtpEnableSSL&lt;br /&gt;
: Defines whether SSL should be enabled for the SMTP connection. TRUE (or any integer other than &amp;quot;0&amp;quot;) = SSL is enabled; FALSE (or &amp;quot;0&amp;quot;) = SSL is not enabled. Default value is &amp;quot;FALSE&amp;quot;. Optional.&lt;br /&gt;
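&lt;br /&gt;
As a sketch (assuming the key/value SELECT convention described above; all addresses and the server name are placeholders), a minimal --#SendEmail call might look like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;SmtpServer&#039;, &#039;smtp.example.com&#039;&lt;br /&gt;
UNION ALL SELECT &#039;EmailFrom&#039;, &#039;reports@example.com&#039;&lt;br /&gt;
UNION ALL SELECT &#039;EmailTo&#039;, &#039;user1@example.com,user2@example.com&#039;&lt;br /&gt;
UNION ALL SELECT &#039;EmailSubject&#039;, &#039;Script finished&#039;&lt;br /&gt;
UNION ALL SELECT &#039;EmailBody&#039;, &#039;The nightly import completed successfully.&#039;&lt;br /&gt;
--#SendEmail&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;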
&lt;br /&gt;
See examples at the [[SendEmail Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#ShowReport =&lt;br /&gt;
Outputs the result of an SQL query to a CSV file when the script is run from [[QPR_ProcessAnalyzer_ScriptLauncher|QPR ProcessAnalyzer ScriptLauncher]]. This command takes two SELECT queries as parameters.&lt;br /&gt;
== First Query ==&lt;br /&gt;
The SQL query whose results are shown.&lt;br /&gt;
; &amp;lt;nowiki&amp;gt;&amp;lt;data&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
: Mandatory. The database query whose results are to be returned.&lt;br /&gt;
&lt;br /&gt;
== Second Query ==&lt;br /&gt;
Configures the command using a SELECT statement returning two columns: the first column is for a key and the second one is for a value of that key. The values in both the key column and in the value column are of type NVARCHAR. The supported keys for this command are:&amp;lt;br/&amp;gt;&lt;br /&gt;
; &amp;lt;Analysis Parameter&amp;gt;&lt;br /&gt;
: Optional. The analysis parameters given for the operation. Some suggested parameters to be used:&lt;br /&gt;
:; Title&lt;br /&gt;
:: The name of the created CSV file.&lt;br /&gt;
:; MaximumCount&lt;br /&gt;
:: The maximum number of rows to show (0 = all, default = 1000).&lt;br /&gt;
&lt;br /&gt;
; CatchOperationExceptions: Optional. Defines whether to stop the script execution or to continue to run the script from the next statement if an exception occurs when running the script:&lt;br /&gt;
: 1 = don&#039;t stop execution of the script, continue running the script from the next statement.&lt;br /&gt;
: 0 = stop execution of the current script and show the exception.&lt;br /&gt;
: The following script variables will be set and are shown in the script log:&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionOccurred&amp;lt;/code&amp;gt; If there was an exception, then this value is 1, otherwise 0. INT&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionType&amp;lt;/code&amp;gt; If there was an exception, shows the C# class name for the exception, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionMessage&amp;lt;/code&amp;gt; If there was an exception, contains a message that would have been displayed, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
:&amp;lt;code&amp;gt;@_ExceptionDetails&amp;lt;/code&amp;gt; If there was an exception, contains the details that would have been displayed, including the system stack trace, NVARCHAR(MAX), otherwise NULL.&lt;br /&gt;
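&lt;br /&gt;
As a sketch (assuming the two-query convention described above; the temporary table and title are placeholders), a --#ShowReport call might look like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT CaseId, Duration FROM #Results&lt;br /&gt;
SELECT &#039;Title&#039;, &#039;CaseDurations&#039;&lt;br /&gt;
UNION ALL SELECT &#039;MaximumCount&#039;, &#039;0&#039;&lt;br /&gt;
--#ShowReport&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;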
&lt;br /&gt;
See examples at the [[ShowReport Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#StartBackground =&lt;br /&gt;
Continues the script run in the background, i.e. the parent script execution completes while the rest of the script continues to execute. When running in the background, a script cannot output any results using the ShowReport command or GetAnalysis with the Show parameter. Scripts running in the background can be terminated via the [[QPR_ProcessAnalyzer_Logs#Task_Log|Task log]]. Note also that a script running in the background cannot execute in the client side mode.&lt;br /&gt;
&lt;br /&gt;
Takes one SELECT query as a parameter. The following parameter is supported:&lt;br /&gt;
&lt;br /&gt;
; Enabled&lt;br /&gt;
: Boolean value defining whether the script is run in background starting from this command. TRUE = run in background, FALSE = don&#039;t run in background. Default value is TRUE.&lt;br /&gt;
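&lt;br /&gt;
As a sketch (assuming the single key/value query convention described above), the rest of a script can be moved to the background like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;Enabled&#039;, &#039;TRUE&#039;&lt;br /&gt;
--#StartBackground&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;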
&lt;br /&gt;
See examples at the [[StartBackground Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
= --#WriteLog =&lt;br /&gt;
Adds the first column&#039;s values from the preceding SQL statements to the log that is shown after the whole script execution is completed. In addition to the WriteLog command, you can also use the [https://docs.microsoft.com/en-us/sql/t-sql/language-elements/print-transact-sql?view=sql-server-ver15 Print SQL statement] to generate log entries into the script execution log. The difference to the WriteLog command is that the Print statement can also use variables.&lt;br /&gt;
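&lt;br /&gt;
As a sketch (the logged text is a placeholder), --#WriteLog logs the first column of the preceding SELECT:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT &#039;Data import phase completed&#039;&lt;br /&gt;
--#WriteLog&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;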
&lt;br /&gt;
See examples at the [[WriteLog Script Examples]] page.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
[[Category: QPR ProcessAnalyzer]]&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=ExtractSap_Function&amp;diff=25669</id>
		<title>ExtractSap Function</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=ExtractSap_Function&amp;diff=25669"/>
		<updated>2025-01-14T15:41:10Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The ExtractSap function imports data from an SAP R/3 system using its RFC interface and returns a DataFlow. The function is available in the generic context and also in the project context. When using secure strings, the function needs to be called in the project context, as secure strings are project specific. Using the ExtractSap function requires installing the [[QPR_ProcessAnalyzer_ScriptLauncher#Installing_SAP_NetWeaver_RFC_Library|SAP NetWeaver RFC Library]]. See [[Importing_Data_from_SAP|Importing Data from SAP]] for more information about connecting to SAP.&lt;br /&gt;
&lt;br /&gt;
Parameters:&lt;br /&gt;
* &#039;&#039;&#039;UseGateway&#039;&#039;&#039;: Boolean value indicating whether data extraction should be performed through the gateway (&#039;&#039;true&#039;&#039;) or by QPR ProcessAnalyzer Server (&#039;&#039;false&#039;&#039;, default value). The gateway may be needed to access on-premise systems that are not available in the public network.&lt;br /&gt;
* &#039;&#039;&#039;User&#039;&#039;&#039;: SAP username used to connect to SAP. Mandatory. Corresponds to the &amp;quot;USER&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;Password&#039;&#039;&#039;: Password for the SAP user used to connect to SAP. Mandatory. Corresponds to the &amp;quot;PASSWD&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;PasswordKey&#039;&#039;&#039;: [[Storing_Secrets_for_Scripts|Secret name]] for the stored SAP password. Alternative for the Password property.&lt;br /&gt;
* &#039;&#039;&#039;Client&#039;&#039;&#039;: The SAP backend client. Mandatory. Corresponds to the &amp;quot;CLIENT&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;AppServerHost&#039;&#039;&#039;: Hostname or IP of the specific SAP application server, to which all connections shall be opened. Mandatory if MessageServerHost is not defined. Corresponds to the &amp;quot;ASHOST&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;MessageServerHost&#039;&#039;&#039;: Hostname or IP of the SAP system’s message server (central instance). Mandatory if AppServerHost is not defined. Corresponds to the &amp;quot;MSHOST&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;SystemNumber&#039;&#039;&#039;: SAP system’s system number. Mandatory if SystemID is not defined. Corresponds to the &amp;quot;SYSNR&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;SystemID&#039;&#039;&#039;: SAP system’s three-letter system ID. Mandatory if SystemNumber is not defined. Corresponds to the &amp;quot;SYSID&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;Options&#039;&#039;&#039;: Array of clause elements. Specifies the value for the parameter OPTIONS in tab &#039;Import&#039; of function module &#039;rfc_read_table&#039; in SAP. NOTE: the maximum length of one string is 72 characters (SAP limit). Optional. Example: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0000017450&#039;&amp;quot;].&lt;br /&gt;
* &#039;&#039;&#039;OptionString&#039;&#039;&#039;: String of clause elements. Specifies the value for the parameter OPTIONS in tab &#039;Import&#039; of function module &#039;rfc_read_table&#039; in SAP. Maximum length of one string is 72 characters (SAP limit). An OptionString longer than 72 characters is split by &#039; AND &#039; and &#039; OR &#039; into several OPTIONS parameters to avoid the SAP 72-character limit. Optional. Example: &amp;quot;VBELN BETWEEN &#039;0000017448&#039; AND &#039;0000017450&#039;&amp;quot;.&lt;br /&gt;
* &#039;&#039;&#039;FieldNames&#039;&#039;&#039;: Comma separated list of field names for columns to be extracted. Default value is empty, which will extract all columns. Specifies the value for parameter FIELDNAME in tab &#039;Tables&#039; for table &#039;FIELDS&#039; for function module &#039;rfc_read_table&#039; in SAP. Optional.&lt;br /&gt;
* &#039;&#039;&#039;Function&#039;&#039;&#039;: SAP function name that is called in SAP. Optional. The default value is RFC_READ_TABLE. Another possible function name is BBP_RFC_READ_TABLE.&lt;br /&gt;
* &#039;&#039;&#039;Language&#039;&#039;&#039;: SAP language used. Default value is &amp;quot;EN&amp;quot;. Optional. Corresponds to the &amp;quot;LANG&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;Rowcount&#039;&#039;&#039;: Maximum number of rows to fetch. Specifies the value for the parameter ROWCOUNT in tab &#039;Import&#039; of function module &#039;rfc_read_table&#039; in SAP. Optional.&lt;br /&gt;
* &#039;&#039;&#039;Rowskips&#039;&#039;&#039;: Number of rows to skip. Specifies the value for the parameter ROWSKIPS in tab &#039;Import&#039; of function module &#039;rfc_read_table&#039; in SAP. Optional.&lt;br /&gt;
* &#039;&#039;&#039;PoolSize&#039;&#039;&#039;: Maximum number of RFC connections that this destination will keep in its pool. Default value is &amp;quot;5&amp;quot;. Optional. Corresponds to the &amp;quot;POOL_SIZE&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;IdleTimeout&#039;&#039;&#039;: If a connection has been idle for more than the defined time in seconds, it will be closed and removed from the connection pool. Default value is &amp;quot;600&amp;quot;. Optional. Corresponds to the &amp;quot;IDLE_TIMEOUT&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;ImportChunkSize&#039;&#039;&#039;: Specifies the size of the chunks of data used when importing the data from SAP to PA Server. The value represents the approximate maximum number of data cells in each chunk (a chunk consists of &amp;lt;number of rows&amp;gt; * &amp;lt;number of columns&amp;gt; data cells). Default value is 200000. A smaller value causes big imports to be split into more chunks, taking more time in total, but it makes importing each chunk faster, possibly helping in some timeout situations.&lt;br /&gt;
* &#039;&#039;&#039;Router&#039;&#039;&#039;: List of host names and service names or port numbers for the SAPRouter in the following format: /H/hostname/S/portnumber. Optional. Corresponds to the &amp;quot;SAPROUTER&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;LogonGroup&#039;&#039;&#039;: The logon group from which the message server shall select an application server. Optional. Corresponds to the &amp;quot;GROUP&amp;quot; constant in SAP.&lt;br /&gt;
* &#039;&#039;&#039;Mode&#039;&#039;&#039;: If this number is set to &amp;quot;1&amp;quot;, then the query result will have the SAP Table field names as data table column names and actual data rows as rows. If this is set to &amp;quot;3&amp;quot;, the query result will get the field descriptions from the SAP query using NO_DATA parameter, i.e. the returned columns are the following (in this order): Field, Type, Description, Length, Offset. Default value is &amp;quot;1&amp;quot;. Optional.&lt;br /&gt;
* &#039;&#039;&#039;QueryTable&#039;&#039;&#039;: Name of the SAP table to be extracted. Specifies the value for the parameter QUERY_TABLE in tab &#039;Import&#039; of function module &#039;rfc_read_table&#039; in SAP. Mandatory. Note that if the query doesn&#039;t return any data, the target data table or temporary table is not created.&lt;br /&gt;
* &#039;&#039;&#039;[[Importing_Data_from_SAP|ConvertDataTypes]]&#039;&#039;&#039;: List of SAP data types that are converted into respective expression language data types. Defined by listing the data type identifier characters in any order. Available data type identifying characters are IFPCDTNX. If not defined, all data types are converted. Example: IFP (convert only numeric data types: Integer, Float, Packed number).&lt;br /&gt;
* &#039;&#039;&#039;Ping&#039;&#039;&#039;: If this is set to true, the SAP server is pinged before the SAP query execution. The purpose is to help troubleshoot SAP connection issues. Optional.&lt;br /&gt;
* &#039;&#039;&#039;TraceLevel&#039;&#039;&#039;: Sets the SAP trace level, which can be 0, 1, 2, 3 or 4. Setting a higher trace level helps to troubleshoot SAP connection issues. &#039;&#039;sapnco.dll&#039;&#039; writes a log file to the current working folder, with a name such as &#039;nco_rfc_5484_2.trc&#039;. Optional.&lt;br /&gt;
* &#039;&#039;&#039;AliasUser&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;AppServerService&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;CharacterFaultIndicatorToken&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Codepage&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;GatewayHost&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;GatewayService&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;IdleCheckTime&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;LogonCheck&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;MaxPoolWaitTime&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;MessageServerService&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Name&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;NoCompression&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;OnCharacterConversionError&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;PartnerCharSize&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;PasswordChangeEnforced&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;ProgramId&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;R3Name&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RegistrationCount&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RepositoryDestination&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RepositoryPassword&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RepositorySncMyName&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RepositoryUser&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;RepositoryX509Certificate&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;SapSso2Ticket&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;SncLibraryPath&#039;&#039;&#039;: Full path including file name of the SNC shared library to be used.&lt;br /&gt;
* &#039;&#039;&#039;SncMode&#039;&#039;&#039;: Determines whether connections will be secured with SNC. Value 0 doesn&#039;t use SNC (default) and value 1 uses SNC.&lt;br /&gt;
* &#039;&#039;&#039;SncMyName&#039;&#039;&#039;: Token/identifier representing the external RFC program. In most cases this can be omitted, as the installed SNC solution usually knows its own SNC name. Only for solutions supporting “multiple identities” you may need to specify the identity to be used for this particular destination/server. Varies depending on the installed SNC solution (Secude, Kerberos, NTLM, etc.). Example for Secude: p/secude:CN=ALEREMOTE, O=Mustermann-AG, C=DE&lt;br /&gt;
* &#039;&#039;&#039;SncPartnerName&#039;&#039;&#039;: The backend&#039;s SNC name.&lt;br /&gt;
* &#039;&#039;&#039;SncPartnerNames&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;SncQop&#039;&#039;&#039;: Quality of service to be used for SNC communication of this particular destination/server. One of the following values:&lt;br /&gt;
** 1: Digital signature&lt;br /&gt;
** 2: Digital signature and encryption&lt;br /&gt;
** 3: Digital signature, encryption, and user authentication&lt;br /&gt;
** 8: Default value defined by back-end system&lt;br /&gt;
** 9: Maximum value that the current security product supports&lt;br /&gt;
* &#039;&#039;&#039;SystemIds&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;UseSapGui&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;X509Certificate&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
&lt;br /&gt;
An example of performing an SAP extraction and persisting the extracted data to a data table having id 1. Note: the data table must already exist when using this approach.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sapConnection = #{ &lt;br /&gt;
  &amp;quot;AppServerHost&amp;quot;: &amp;quot;sap01&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemNumber&amp;quot;: &amp;quot;00&amp;quot;,&lt;br /&gt;
  &amp;quot;User&amp;quot;: &amp;quot;user1&amp;quot;,&lt;br /&gt;
  &amp;quot;PasswordKey&amp;quot;: &amp;quot;SapPW1&amp;quot;,&lt;br /&gt;
  &amp;quot;Router&amp;quot;: &amp;quot;&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemID&amp;quot;: &amp;quot;QPR&amp;quot;,&lt;br /&gt;
  &amp;quot;Client&amp;quot;: &amp;quot;800&amp;quot;,&lt;br /&gt;
  &amp;quot;Language&amp;quot;: &amp;quot;EN&amp;quot;,&lt;br /&gt;
  &amp;quot;PoolSize&amp;quot;: 5,&lt;br /&gt;
  &amp;quot;PoolSizeMax&amp;quot;: 10,&lt;br /&gt;
  &amp;quot;IdleTimeout&amp;quot;: 600 &lt;br /&gt;
 };&lt;br /&gt;
let queryParameters = sapConnection.Extend(#{&lt;br /&gt;
  &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
  &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
  &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0000017450&#039;&amp;quot;]&lt;br /&gt;
});&lt;br /&gt;
let resultsFlow = ProjectById(1)&lt;br /&gt;
  .ExtractSap(queryParameters);&lt;br /&gt;
DatatableById(1)&lt;br /&gt;
  .Import(resultsFlow);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example of performing an SAP extraction and persisting the extracted data to a table named &amp;quot;SAPData&amp;quot; in the project having id 1. If the data table does not yet exist, a new one is created in Snowflake.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sapConnection = #{ &lt;br /&gt;
  &amp;quot;AppServerHost&amp;quot;: &amp;quot;sap01&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemNumber&amp;quot;: &amp;quot;10&amp;quot;,&lt;br /&gt;
  &amp;quot;User&amp;quot;: &amp;quot;exampleuser&amp;quot;,&lt;br /&gt;
  &amp;quot;Password&amp;quot;: &amp;quot;examplepassword&amp;quot;,&lt;br /&gt;
  &amp;quot;Router&amp;quot;: &amp;quot;/H/127.0.0.1/A/1234/H/&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemID&amp;quot;: &amp;quot;QPR&amp;quot;,&lt;br /&gt;
  &amp;quot;Client&amp;quot;: &amp;quot;200&amp;quot;,&lt;br /&gt;
  &amp;quot;Language&amp;quot;: &amp;quot;EN&amp;quot;,&lt;br /&gt;
  &amp;quot;PoolSize&amp;quot;: 5,&lt;br /&gt;
  &amp;quot;PoolSizeMax&amp;quot;: 10,&lt;br /&gt;
  &amp;quot;IdleTimeout&amp;quot;: 600,&lt;br /&gt;
  &amp;quot;LogonGroup&amp;quot;: &amp;quot;GROUPXNAME&amp;quot;,&lt;br /&gt;
  &amp;quot;Mode&amp;quot;: &amp;quot;1&amp;quot;&lt;br /&gt;
 };&lt;br /&gt;
let queryParameters = sapConnection.Extend(#{&lt;br /&gt;
  &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
  &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
  &amp;quot;Options&amp;quot;: [&amp;quot;VBELN EQ &#039;0060000039&#039;&amp;quot;, &amp;quot;OR VBELN EQ &#039;0060000040&#039;&amp;quot;]&lt;br /&gt;
});&lt;br /&gt;
ProjectById(1)&lt;br /&gt;
  .ExtractSap(queryParameters)&lt;br /&gt;
  .Persist(&amp;quot;SAPData&amp;quot;, #{&amp;quot;ProjectId&amp;quot;: 1, &amp;quot;Connection&amp;quot;: CreateSnowflakeConnection(#{&amp;quot;ProjectId&amp;quot;: 1})})&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example of performing an SAP extraction, a simple transformation of the extracted data, and an import to a table named &amp;quot;TransformedSAPData&amp;quot; in the project having id 1.&lt;br /&gt;
&lt;br /&gt;
Note: The example uses ScriptLauncher as a gateway and works only with ScriptLauncher version 2023.1 or later, with the &amp;quot;UseLegacyClientSideImport&amp;quot; setting in appsettings.json set to false (if the setting is available in the used version).&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
let sapConnection = #{ &lt;br /&gt;
  &amp;quot;AppServerHost&amp;quot;: &amp;quot;sap01&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemNumber&amp;quot;: &amp;quot;10&amp;quot;,&lt;br /&gt;
  &amp;quot;User&amp;quot;: &amp;quot;exampleuser&amp;quot;,&lt;br /&gt;
  &amp;quot;Password&amp;quot;: &amp;quot;examplepassword&amp;quot;,&lt;br /&gt;
  &amp;quot;Router&amp;quot;: &amp;quot;/H/127.0.0.1/A/1234/H/&amp;quot;,&lt;br /&gt;
  &amp;quot;SystemID&amp;quot;: &amp;quot;QPR&amp;quot;,&lt;br /&gt;
  &amp;quot;Client&amp;quot;: &amp;quot;200&amp;quot;,&lt;br /&gt;
  &amp;quot;Language&amp;quot;: &amp;quot;EN&amp;quot;,&lt;br /&gt;
  &amp;quot;PoolSize&amp;quot;: 5,&lt;br /&gt;
  &amp;quot;PoolSizeMax&amp;quot;: 10,&lt;br /&gt;
  &amp;quot;IdleTimeout&amp;quot;: 600,&lt;br /&gt;
  &amp;quot;LogonGroup&amp;quot;: &amp;quot;GROUPXNAME&amp;quot;,&lt;br /&gt;
  &amp;quot;Mode&amp;quot;: &amp;quot;1&amp;quot;&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(&lt;br /&gt;
    sapConnection.Extend([&lt;br /&gt;
      &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
      &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
      &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
      &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    dataFlow.Persist(&amp;quot;TransformedSAPData&amp;quot;, [&amp;quot;ProjectName&amp;quot;: &amp;quot;TestData&amp;quot;, &amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=25552</id>
		<title>Object-centric Process Mining Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=Object-centric_Process_Mining_Model&amp;diff=25552"/>
		<updated>2024-12-10T09:02:27Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Object-centric model structure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QPR ProcessAnalyzer supports object-centric process mining (OCPM) based on the OCEL 2.0 standard (https://www.ocel-standard.org). To use object-centric functionality, you need to transform data into the [[#Object-centric_model_structure|suitable format]] for the [[#Create_object-centric_model|object-centric model]]. Object-centric models can be analyzed in the object-centric flowchart and with (case-centric) charts because the object-centric model can be converted into a case-centric eventlog using [[#Object-centric_perspectives|perspectives]]. To use the OCPM functionality, Snowflake needs to be used as the calculation engine.&lt;br /&gt;
&lt;br /&gt;
== Create object-centric model ==&lt;br /&gt;
Create a new object-centric model as follows:&lt;br /&gt;
# In the Workspace, open the project where to create the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;Model&#039;&#039;&#039;.&lt;br /&gt;
# Define a name for the new model.&lt;br /&gt;
# Set &#039;&#039;&#039;Model type&#039;&#039;&#039; as &#039;&#039;&#039;Object-centric&#039;&#039;&#039;.&lt;br /&gt;
# Click &#039;&#039;&#039;Create&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Configure object-centric model datatables ==&lt;br /&gt;
Datatables for the object-centric model need to exist in the same project as the model. Datatables can be set for the model as follows:&lt;br /&gt;
# In the Workspace, select the object-centric model and click &#039;&#039;&#039;Properties&#039;&#039;&#039;.&lt;br /&gt;
# In the model properties dialog, open the &#039;&#039;&#039;Datasource&#039;&#039;&#039; tab.&lt;br /&gt;
# Add a json configuration like the following to the textbox:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Objects&amp;quot;: &amp;quot;OCPM: objects&amp;quot;,&lt;br /&gt;
  &amp;quot;Events&amp;quot;: &amp;quot;OCPM: events&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectToObject&amp;quot;: &amp;quot;OCPM: object-object&amp;quot;,&lt;br /&gt;
  &amp;quot;EventToObject&amp;quot;: &amp;quot;OCPM: event-object&amp;quot;,&lt;br /&gt;
  &amp;quot;ObjectTypes&amp;quot;: {&lt;br /&gt;
    &amp;quot;Invoice&amp;quot;: &amp;quot;OCPM object: Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Payment&amp;quot;: &amp;quot;OCPM object: Payment&amp;quot;,&lt;br /&gt;
    &amp;quot;Purchase Order&amp;quot;: &amp;quot;OCPM object: Purchase Order&amp;quot;&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;EventTypes&amp;quot;: { &lt;br /&gt;
    &amp;quot;Approve Purchase Requisition&amp;quot;: &amp;quot;OCPM event: Approve Purchase Requisition&amp;quot;,&lt;br /&gt;
    &amp;quot;Change PO Quantity&amp;quot;: &amp;quot;OCPM event: Change PO Quantity&amp;quot;,&lt;br /&gt;
    &amp;quot;Create Purchase Order&amp;quot;: &amp;quot;OCPM event: Create Purchase Order&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Invoice&amp;quot;: &amp;quot;OCPM event: Insert Invoice&amp;quot;,&lt;br /&gt;
    &amp;quot;Insert Payment&amp;quot;: &amp;quot;OCPM event: Insert Payment&amp;quot;&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The json configuration needs to have the following properties:&lt;br /&gt;
* &#039;&#039;&#039;Objects&#039;&#039;&#039;: Objects datatable name.&lt;br /&gt;
* &#039;&#039;&#039;Events&#039;&#039;&#039;: Events datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectToObject&#039;&#039;&#039;: Object-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;EventToObject&#039;&#039;&#039;: Event-to-object relation datatable name.&lt;br /&gt;
* &#039;&#039;&#039;ObjectTypes&#039;&#039;&#039;: Key-value pairs mapping object type names to datatable names. Note that the object type names need to match the object type names in the objects datatable.&lt;br /&gt;
* &#039;&#039;&#039;EventTypes&#039;&#039;&#039;: Key-value pairs mapping event type names to datatable names. Note that the event type names need to match the event type names in the events datatable.&lt;br /&gt;
&lt;br /&gt;
== Import from OCEL 2.0 JSON file ==&lt;br /&gt;
An object-centric model can be imported from an OCEL 2.0 JSON file as follows:&lt;br /&gt;
# In the Workspace, open the project where to import the model.&lt;br /&gt;
# Select &#039;&#039;&#039;NEW&#039;&#039;&#039; in the top right menu and select &#039;&#039;&#039;Import Model&#039;&#039;&#039;.&lt;br /&gt;
# Select the OCEL 2.0 JSON file from the disk and click &#039;&#039;&#039;Open&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
An object-centric model and its datatables are created.&lt;br /&gt;
&lt;br /&gt;
Example OCEL 2.0 eventlogs: https://www.ocel-standard.org/event-logs/overview/ (download the json format supported by QPR ProcessAnalyzer)&lt;br /&gt;
&lt;br /&gt;
== Object-centric model structure ==&lt;br /&gt;
The object-centric model contains the datatables described in the table below. Datatables can be named freely, as the model json configuration defines the datatable for each type of data. However, the datatables need to use the column names specified in the table below, because those are the column names assumed by the object-centric model (i.e., column names cannot be selected freely).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!&#039;&#039;&#039;Datatable role&#039;&#039;&#039;&lt;br /&gt;
!&#039;&#039;&#039;Contained data&#039;&#039;&#039;&lt;br /&gt;
! &#039;&#039;&#039;Datatable columns&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
||Objects&lt;br /&gt;
||Objects in the model (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039;: Unique id for the object (among all objects in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectType&#039;&#039;&#039;: Object type name (such as Order, Invoice, Delivery). Note that the model json configuration needs to use the same object type names.&lt;br /&gt;
|-&lt;br /&gt;
||Events&lt;br /&gt;
||Events in the model (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;: Unique id for the event (among all events in the model).&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039;: Event type name (such as Order created, Invoice sent). Note that the model json configuration needs to use the same event type names.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039;: Event timestamp.&lt;br /&gt;
|-&lt;br /&gt;
||Object-object relations&lt;br /&gt;
||Relations between objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectSourceId&#039;&#039;&#039;: Source object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectTargetId&#039;&#039;&#039;: Target object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Event-object relations&lt;br /&gt;
||Relations between events and objects (one row per relation).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectSourceId&#039;&#039;&#039;: Object id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectTargetId&#039;&#039;&#039;: Event id in the relation.&lt;br /&gt;
* &#039;&#039;&#039;OcelEventToObjectQualifier&#039;&#039;&#039;: Describes the type of the relation as free-form text (not used currently).&lt;br /&gt;
|-&lt;br /&gt;
||Object attributes (several datatables)&lt;br /&gt;
||Object attribute values, each object type in a separate table (one row per object).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeObjectId&#039;&#039;&#039;: Object id. Matches to the objects datatable &#039;&#039;OcelObjectId&#039;&#039; column.&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeTime&#039;&#039;&#039;: Timestamp which the attribute value is valid from (not used currently).&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectTypeChangedField&#039;&#039;&#039;: Changed object attribute name (not used currently).&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Object attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the object attribute values (column name is the object attribute name).&lt;br /&gt;
|-&lt;br /&gt;
||Event attributes (several datatables)&lt;br /&gt;
||Event attribute values, each event type in a separate table (one row per event).&lt;br /&gt;
||&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTypeEventId&#039;&#039;&#039;: Event id. Matches to the events datatable &#039;&#039;OcelEventId&#039;&#039; column.&lt;br /&gt;
* &#039;&#039;&#039;&amp;lt;Event attributes&amp;gt;&#039;&#039;&#039;: Columns for each of the event attribute values (column name is the event attribute name).&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Object-centric perspectives ==&lt;br /&gt;
Perspectives convert an object-centric model into a traditional case-centric eventlog, making it possible to view and analyze object-centric models in the analyses provided by charts. A single perspective is not able to describe the object-centric model entirely, but only from a certain limited viewpoint. By using analyses with several perspectives, it&#039;s possible to get a more complete picture of the object-centric model. A perspective starts from a certain object type and traverses the object-object relations as many steps as desired.&lt;br /&gt;
&lt;br /&gt;
To define a perspective, the following settings are defined in the chart settings:&lt;br /&gt;
* &#039;&#039;&#039;Base Object type&#039;&#039;&#039;: Objects of this type will be the cases in the projected case-centric eventlog.&lt;br /&gt;
* &#039;&#039;&#039;Object Relation Steps&#039;&#039;&#039;: Specifies how many object-object relations are traversed in order to find events connected to the base objects. A value of zero means that only events directly connected to the base objects are returned.&lt;br /&gt;
* &#039;&#039;&#039;Show Event Types&#039;&#039;&#039;: List of event type names that are included into the perspective eventlog. If no event types are explicitly defined, all events are included, but their event attributes are not.&lt;br /&gt;
&lt;br /&gt;
The resulting perspective eventlog will have the following columns:&lt;br /&gt;
* &#039;&#039;&#039;OcelObjectId&#039;&#039;&#039; (mapped to case id)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventType&#039;&#039;&#039; (mapped to event type)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventTime&#039;&#039;&#039; (mapped to timestamp)&lt;br /&gt;
* &#039;&#039;&#039;OcelEventId&#039;&#039;&#039;&lt;br /&gt;
* Object attributes of the base object type. Note that the object attribute values are &amp;quot;repeated&amp;quot; for all events belonging to the same object.&lt;br /&gt;
* Event attributes of the selected event types. Values are null for events that don&#039;t have the attribute.&lt;br /&gt;
&lt;br /&gt;
The base object type&#039;s attributes are available as case attributes. As object attribute values may change over time in the OCEL 2.0 data, the last attribute value is used as the case attribute value. Note that attributes of other object types are not available as case attributes, so the object type whose attributes are needed must be set as the base object type.&lt;br /&gt;
&lt;br /&gt;
== Save perspective to filter ==&lt;br /&gt;
It&#039;s possible to include the object-centric perspective in a stored filter. When a filter is selected, the perspective in the filter is also applied to the dashboard. This makes it possible to quickly change the perspective for the entire dashboard. A chart-specific perspective overrides the dashboard-level perspective, so the dashboard-level perspective is only applied to charts that don&#039;t have a chart-specific perspective defined.&lt;br /&gt;
&lt;br /&gt;
A perspective can be added to a filter as follows:&lt;br /&gt;
# Go to the &#039;&#039;Process Discovery&#039;&#039; dashboard.&lt;br /&gt;
# Open the &#039;&#039;Session variables&#039;&#039; dialog in the dots menu on top right.&lt;br /&gt;
# Paste the filter json to the &#039;&#039;Value&#039;&#039; of the &#039;&#039;Filter&#039;&#039; variable (it might be easiest to start with a filter without filter rules, and then add the filter rules using the UI).&lt;br /&gt;
# Click the &#039;&#039;Done&#039;&#039; button to close the dialog.&lt;br /&gt;
# Save the filter by hovering over &#039;&#039;Unsaved filter&#039;&#039; (in the filters dropdown list) in the header and clicking &#039;&#039;Save as new filter&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Example: Filter json without any filter rules:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;: &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example: Filter json with a filter rule:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;Items&amp;quot;: [&lt;br /&gt;
    {&lt;br /&gt;
      &amp;quot;Type&amp;quot;: &amp;quot;IncludeEvents&amp;quot;,&lt;br /&gt;
      &amp;quot;Items&amp;quot;: [&lt;br /&gt;
        {&lt;br /&gt;
          &amp;quot;Type&amp;quot;: &amp;quot;Attribute&amp;quot;,&lt;br /&gt;
          &amp;quot;Attribute&amp;quot;: &amp;quot;OcelEventId&amp;quot;,&lt;br /&gt;
          &amp;quot;StringifiedValues&amp;quot;: [ &amp;quot;0Event 1&amp;quot; ]&lt;br /&gt;
        }&lt;br /&gt;
      ]&lt;br /&gt;
    }&lt;br /&gt;
  ],&lt;br /&gt;
  &amp;quot;Perspective&amp;quot;: {&lt;br /&gt;
    &amp;quot;ObjectType&amp;quot;:  &amp;quot;Container&amp;quot;,&lt;br /&gt;
    &amp;quot;RecursionDepth&amp;quot;: 0&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Differences to OCEL 2.0 standard ==&lt;br /&gt;
Object-centric models in QPR ProcessAnalyzer mainly follow the OCEL 2.0 standard, with the following differences:&lt;br /&gt;
* Changing object attribute values over time is not supported.&lt;br /&gt;
* The &#039;&#039;ocel_time&#039;&#039; field of each event type table is moved to the events datatable (as every event has a timestamp).&lt;br /&gt;
* &#039;&#039;*_map_type&#039;&#039; columns are not needed, as the model settings are used for the same purpose.&lt;br /&gt;
* Object type tables: If OcelObjectTypeChangedField is not null, all the other field values are copied from the previous entry, except:&lt;br /&gt;
** &#039;&#039;OcelObjectTypeChangedField&#039;&#039;, which has the names of the changed fields as a comma-separated string.&lt;br /&gt;
** The actual changed field, which has the new value.&lt;br /&gt;
** &#039;&#039;OcelObjectTypeTime&#039;&#039;, which has the timestamp when the value changed.&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25481</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25481"/>
		<updated>2024-11-20T10:24:06Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionality. The System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides properties dedicated to different areas of scripting.&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation [[Create_Predicted_Eventlog#Create_prediction_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation [[Create_Simulated_Eventlog#Create_simulation_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs the given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of the results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
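&lt;br /&gt;
The ordering of the return values can be sketched with the following minimal, hypothetical example (the functions and their return values are only for illustration):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
results&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Based on the behavior described above, &#039;&#039;results&#039;&#039; contains the value of the first function (2) followed by the value of the second function (&amp;quot;second&amp;quot;).&lt;br /&gt;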
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to perform three tasks:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters are defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into the datatable identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing the properties of selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-value pairs:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****The analysis column Name contains the event attribute name.&lt;br /&gt;
****The analysis column Value contains the event attribute value and the number of occurrences in the case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****The analysis column Type has &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
****Default = undefined, in which case the other values are not included into the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression does not have a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
Modifies, in place, the column types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**The data table object whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  [#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all of that logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and adds all newly added rows to the script log.&lt;br /&gt;
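The polling behaviour can be sketched as follows (plain TypeScript, not Expression Language; readNewRows, writeToScriptLog and isDone are hypothetical stand-ins for the internal operations):

```typescript
// Sketch of the internal poll-and-forward loop (plain TypeScript, not
// Expression Language). readNewRows, writeToScriptLog and isDone are
// hypothetical stand-ins for the internal operations.
async function forwardLogRows(
  readNewRows: (afterRow: number) => Promise<string[]>, // rows added since last poll
  writeToScriptLog: (row: string) => void,
  isDone: () => boolean,       // true once the callback function has finished
  pollIntervalMs = 5000        // the documented 5-second polling interval
): Promise<void> {
  let lastSeen = 0;
  while (true) {
    const rows = await readNewRows(lastSeen); // fetch newly added rows
    rows.forEach(writeToScriptLog);           // append them to the script run log
    lastSeen += rows.length;
    if (isDone()) break;                      // final drain done: stop polling
    await new Promise<void>(r => setTimeout(r, pollIntervalMs));
  }
}
```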
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and write the generated rows to the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25480</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25480"/>
		<updated>2024-11-19T13:53:09Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. The System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation [[Create_Predicted_Eventlog#Create_prediction_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation [[Create_Simulated_Eventlog#Create_simulation_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
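The ordering guarantee resembles Promise.all in plain TypeScript (an analogy only, not the actual implementation): functions start concurrently, but results come back in argument order rather than completion order.

```typescript
// Plain-TypeScript analogy (not the actual implementation): like Promise.all,
// results keep the order of the input functions, not completion order.
async function runParallel<T>(functions: Array<() => Promise<T>>): Promise<T[]> {
  return Promise.all(functions.map(f => f())); // start all, preserve input order
}
```

Even if the second function finishes first, its result still occupies the second slot of the returned array.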
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
*Transform the extracted data by adding a new column.&lt;br /&gt;
*Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-values: &lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
****Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
*****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
****Default = undefined, in which case the excluded values are not included in the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
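How MinValueUsage, MaxNumUniqueValues and IncludeOthers plausibly interact can be sketched in plain TypeScript (illustrative only; selectValues and its parameters are hypothetical names, not part of the product API):

```typescript
// Illustrative sketch (plain TypeScript) of how MinValueUsage,
// MaxNumUniqueValues and IncludeOthers plausibly interact; selectValues and
// its parameters are hypothetical names, not part of the product API.
function selectValues(
  valueCounts: Map<string, number>,  // cases per attribute value
  totalCases: number,
  minValueUsage: number | null,      // e.g. 0.2 = value must cover >= 20% of cases
  maxNumUniqueValues: number | null, // cap on unique values per attribute
  includeOthers: string | undefined  // label for the aggregated remainder
): Map<string, number> {
  const entries = Array.from(valueCounts.entries()).sort((a, b) => b[1] - a[1]); // highest usage first
  let kept = entries.filter(([, n]) => minValueUsage == null || n / totalCases >= minValueUsage);
  if (maxNumUniqueValues != null) kept = kept.slice(0, maxNumUniqueValues);
  const result = new Map(kept);
  if (includeOthers !== undefined) {
    const keptNames = new Set(kept.map(([v]) => v));
    const others = entries
      .filter(([v]) => !keptNames.has(v))
      .reduce((sum, [, n]) => sum + n, 0);
    if (others > 0) result.set(includeOthers, others); // aggregate dropped values
  }
  return result;
}
```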
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression has no value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
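The count-based metrics (returned when WeightingExpression has no value) can be sketched in plain TypeScript (illustrative only; the formulas are a plausible reading of the column descriptions above, not the product's exact definitions):

```typescript
// Illustrative sketch of the count-based root cause metrics (plain TypeScript,
// not Expression Language). "selected"/"compared" are the case counts for one
// root cause; "allSelected"/"allCases" are totals over the analyzed event log.
function countMetrics(
  selected: number,   // Selected: selected cases having this root cause
  compared: number,   // Compared: non-selected cases having this root cause
  allSelected: number,
  allCases: number
) {
  const total = selected + compared;        // Total
  const avgShare = allSelected / allCases;  // average share of selected cases
  return {
    total,
    selectedPercentage: selected / total,              // SelectedPercentage
    differencePercentage: selected / total - avgShare, // DifferencePercentage
    // Contribution: how many selected cases this root cause has beyond what
    // the average share would predict for its total case count.
    contribution: selected - avgShare * total,
  };
}
```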
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
Modifies, in place, the column types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**The data table object whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  [#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all of that logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and adds all newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and write the generated rows to the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25479</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25479"/>
		<updated>2024-11-19T13:52:05Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* ML.ApplyTransformations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. The System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel&lt;br /&gt;
**ApplyTransformations&lt;br /&gt;
*Parallel&lt;br /&gt;
**Run&lt;br /&gt;
*RootCauses&lt;br /&gt;
**FindForDataFrame&lt;br /&gt;
*Utils&lt;br /&gt;
**GetSampledEvents&lt;br /&gt;
**ModifyColumnTypes&lt;br /&gt;
**RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation [[Create_Predicted_Eventlog#Create_prediction_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation [[Create_Simulated_Eventlog#Create_simulation_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
*Transform the extracted data by adding a new column.&lt;br /&gt;
*Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-values: &lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
****Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value to be included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
*****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this limit, only that number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that value is used as the label for all the aggregated values.&lt;br /&gt;
****Default = undefined, in which case the excluded values are not included in the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression does not have a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
In-place modifies the column types of given columns in given data table.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**The data table whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and adds all the newly added rows to the script run log.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25478</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25478"/>
		<updated>2024-11-19T13:50:46Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* ML.GeneratePredictionModel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides further properties dedicated to different areas of interest in scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel&lt;br /&gt;
**ApplyTransformations&lt;br /&gt;
*Parallel&lt;br /&gt;
**Run&lt;br /&gt;
*RootCauses&lt;br /&gt;
**FindForDataFrame&lt;br /&gt;
*Utils&lt;br /&gt;
**GetSampledEvents&lt;br /&gt;
**ModifyColumnTypes&lt;br /&gt;
**RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation [[Create_Predicted_Eventlog#Create_prediction_script_in_QPR_ProcessAnalyzer|here]].&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation here: [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions generating them appear in the &#039;&#039;functions&#039;&#039;-parameter.&lt;br /&gt;
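&lt;br /&gt;
As a minimal sketch of the call shape (the two functions and their literal return values below are illustrative only), the following call runs two functions in parallel and returns their results in parameter order:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The first element of the returned array is the result of the first function and the second element the result of the second function, regardless of which function finishes first.&lt;br /&gt;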
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
*Transform the extracted data by adding a new column.&lt;br /&gt;
*Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-value pairs: &lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
****Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value to be included in the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
*****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this limit, only that number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that value is used as the label for all the aggregated values.&lt;br /&gt;
****Default = undefined, in which case the excluded values are not included in the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression does not have a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
In-place modifies the column types of given columns in given data table.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**The data table whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and adds all the newly added rows to the script run log.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25477</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25477"/>
		<updated>2024-11-19T13:49:54Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* ML.GeneratePredictionModel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides further properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel&lt;br /&gt;
**ApplyTransformations&lt;br /&gt;
*Parallel&lt;br /&gt;
**Run&lt;br /&gt;
*RootCauses&lt;br /&gt;
**FindForDataFrame&lt;br /&gt;
*Utils&lt;br /&gt;
**GetSampledEvents&lt;br /&gt;
**ModifyColumnTypes&lt;br /&gt;
**RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation here: [[Create Predicted Eventlog|Create_Predicted_Eventlog#Create_prediction_script_in_QPR_ProcessAnalyzer]]&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation here: [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
*Transform the extracted data by adding a new column.&lt;br /&gt;
*Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
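&lt;br /&gt;
The following minimal sketch (with illustrative literal values only) shows the ordering of the return value:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here results[0] contains the value returned by the first function and results[1] the value returned by the second, regardless of which function finishes first.&lt;br /&gt;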
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-values: &lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
****Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given percentage (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
*****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that string is used as the value for all the aggregated values.&lt;br /&gt;
****By default (undefined), the excluded values are not included in the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression has no value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**The data table object whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes its logging information into the given data table, so that all logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally, the table is polled every 5 seconds for new rows, and all newly added rows are appended to the script log.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25476</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25476"/>
		<updated>2024-11-19T13:48:49Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides further properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel&lt;br /&gt;
**ApplyTransformations&lt;br /&gt;
*Parallel&lt;br /&gt;
**Run&lt;br /&gt;
*RootCauses&lt;br /&gt;
**FindForDataFrame&lt;br /&gt;
*Utils&lt;br /&gt;
**GetSampledEvents&lt;br /&gt;
**ModifyColumnTypes&lt;br /&gt;
**RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
==ML.GeneratePredictionModel==&lt;br /&gt;
Documentation here: [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
&lt;br /&gt;
==ML.ApplyTransformations==&lt;br /&gt;
Documentation here: [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
&lt;br /&gt;
==Parallel.Run==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
**An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
*Transform the extracted data by adding a new column.&lt;br /&gt;
*Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==RootCauses.FindForDataFrame==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
**Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
*&#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
**A parameter convertible to a StringDictionary object with the following supported key-values: &lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
****Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
****If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
****An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
****Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
****If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
****An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
****Only event attributes of type string are included.&lt;br /&gt;
****If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
****If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
****If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
****Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
****Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
****Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
***&#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
****Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
*****A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
****A row is filtered out of the result if the expression result is null.&lt;br /&gt;
***&#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
****The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
****If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
***&#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
****The minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given percentage (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
*****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
****Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
****If not defined or null, all values are included (=default).&lt;br /&gt;
***&#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:  &lt;br /&gt;
****Whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
****If given any string value, that string is used as the value for all the aggregated values.&lt;br /&gt;
****By default (undefined), the excluded values are not included in the results.&lt;br /&gt;
***&#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
****Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
****Must not be null.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
*Common columns:&lt;br /&gt;
**&#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
***Type of the root cause:&lt;br /&gt;
****&amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
****&amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
**&#039;&#039;&#039;Name&#039;&#039;&#039;:  &lt;br /&gt;
***When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
***When type is EventAttributeValue, event attribute name.&lt;br /&gt;
**&#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
***When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
***When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;:  &lt;br /&gt;
***The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
***The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
**&#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
**Columns when WeightingExpression has no value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
**Columns when WeightingExpression has a value:&lt;br /&gt;
***&#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
****The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
****The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
***&#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
****The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
****The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
****The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Calculate root cause analysis for the given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.GetSampledEvents==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first. &lt;br /&gt;
&lt;br /&gt;
===Parameters ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;sourceModel&#039;&#039;&#039;: &lt;br /&gt;
**ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
*&#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
**The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
**JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
A [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Get a sample containing all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as the &amp;quot;Product Group&amp;quot; case attribute value, from the model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.ModifyColumnTypes==&lt;br /&gt;
Modifies the column types of the given columns in the given data table in place.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
*&#039;&#039;&#039;dataTable&#039;&#039;&#039;: &lt;br /&gt;
**Data table object whose column types are to be modified.&lt;br /&gt;
*&#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
**Array of column name/type definitions to set.&lt;br /&gt;
***Only columns that are to be changed are required to be listed.&lt;br /&gt;
***Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
===Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Utils.RunFunctionWithParallelLogging==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and adds all the newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
===Parameters===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
**A DataTable used for logging.&lt;br /&gt;
*&#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
===Return value===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and write the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=QPR_ProcessAnalyzer_Wiki&amp;diff=25475</id>
		<title>QPR ProcessAnalyzer Wiki</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=QPR_ProcessAnalyzer_Wiki&amp;diff=25475"/>
		<updated>2024-11-19T13:42:43Z</updated>

		<summary type="html">&lt;p&gt;MarHink: Added system library link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div class=&amp;quot;downloadButton&amp;quot; style=&amp;quot;width:190px;float:right;margin: 3px 12px 0px 15px;&amp;quot;&amp;gt;[[Online_Learning_Platform|Online Learning&amp;lt;br /&amp;gt;Platform]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Welcome to QPR ProcessAnalyzer Wiki! QPR ProcessAnalyzer is a software for turning event and transactional data into visual process analysis and intelligence. Topics in this documentation are divided based on user roles for process analysts, developers and administrators.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;height:5px;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
== For Process Analysts ==&lt;br /&gt;
This section contains information on how to get started with QPR ProcessAnalyzer and how to create your first dashboards! This section also describes how to use filters and how to make different kinds of analyses with QPR ProcessAnalyzer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;display: flex;flex-wrap: wrap;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 210px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
=== Getting Started ===&lt;br /&gt;
* [[Getting Started with QPR ProcessAnalyzer]]&lt;br /&gt;
* [[QPR_ProcessAnalyzer_Native_App_in_Snowflake|Snowflake Native App]]&lt;br /&gt;
* [[Introduction to Process Mining|Introduction to Process Mining]]&lt;br /&gt;
* [[Process_Mining_Concepts|Process Mining Concepts]]&lt;br /&gt;
* [[Log_in_QPR_ProcessAnalyzer|Log in QPR ProcessAnalyzer]]&lt;br /&gt;
* [[Languages and Localization|Language and Localization Settings]]&lt;br /&gt;
* [[User Settings|User Settings]]&lt;br /&gt;
* [[Navigation_Menu|Navigation Menu Functions]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 210px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Working with Dashboards ===&lt;br /&gt;
* [[QPR_ProcessAnalyzer_Project_Workspace|Project Workspace]]: [[QPR_ProcessAnalyzer_Project_Workspace#Projects|Projects]], [[QPR_ProcessAnalyzer_Project_Workspace#Dashboards|Dashboards]], [[QPR_ProcessAnalyzer_Project_Workspace#Models|Models]], [[QPR_ProcessAnalyzer_Project_Workspace#Datatables|Datatables]], [[Managing_Scripts|Scripts]], [[QPR_ProcessAnalyzer_Project_Workspace#Recycle_Bin|Recycle Bin]]&lt;br /&gt;
* [[Filtering_in_QPR_ProcessAnalyzer|Using Filters]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Dashboard Designer|Creating Dashboards]]&lt;br /&gt;
* [[AI Assistant for QPR ProcessAnalyzer|AI Assistant]] (powered by generative AI)&lt;br /&gt;
* [[Dashboard Variables|Dynamic Variables in Dashboards]]&lt;br /&gt;
* [[Business Calendar|Business Calendar to Calculate Durations]]&lt;br /&gt;
* [[Best Practices for Designing Dashboards|Best Practices for Designing Dashboards]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 370px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Analyses and Visualizations ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:left; width:50%;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Process Flowchart|Process Flowchart]]&lt;br /&gt;
* [[Object-Centric_Flowchart|Object-Centric Flowchart]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Chart|Chart]] / [[Snowflake Chart|Snowflake Chart]]&lt;br /&gt;
** [[QPR ProcessAnalyzer Graphs|Graphs]]&lt;br /&gt;
** [[QPR_ProcessAnalyzer_Table|Table]]&lt;br /&gt;
** [[QPR_ProcessAnalyzer_Pivot_Table|Pivot Table]]&lt;br /&gt;
** [[QPR_ProcessAnalyzer_KPI_Card|KPI Card]]&lt;br /&gt;
** [[Measure,_Dimension_and_Column_Settings|Measure Settings]]&lt;br /&gt;
** [[Chart_On-screen_Settings|On-screen Settings]]&lt;br /&gt;
** [[Chart_Linked_Settings|Linked Settings]]&lt;br /&gt;
** [[Actions_to_Run_Script_in_Table|Run Script Actions]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:left; width:50%;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Root Causes|Root Causes Analysis]]&lt;br /&gt;
* [[Clustering Analysis|Clustering Analysis]]&lt;br /&gt;
* [[Conformance Analysis|Conformance Analysis]]&lt;br /&gt;
* [[Design Diagram|Design Diagram]] / [[QPR ProcessAnalyzer BPMN Editor|BPMN Editor]]&lt;br /&gt;
* [[Gantt_Chart|Gantt Chart]]&lt;br /&gt;
* [[Sankey_Chart|Sankey Chart]]&lt;br /&gt;
* [[Label and Link]]&lt;br /&gt;
* [[Image|Image]]&lt;br /&gt;
* [[Filter_Selectors|Filter Selectors]] / [[Dropdown_List_Selector|Dropdown List Selector]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== For Citizen Developers ==&lt;br /&gt;
This section describes how to build ETL scripts that transform the source data into process mining models. There is also a detailed description of how the process mining models can be configured so that they are optimal for the desired analyses. Finally, there is reference documentation for all expression language related functionality, which can be used both when writing custom KPIs in dashboards and in ETL scripts.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;display: flex;flex-wrap: wrap;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 370px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
=== Expression Language Reference ===&lt;br /&gt;
* [[SQL_Expressions|SQL Expressions for Snowflake]]&lt;br /&gt;
* In-memory expressions: [[QPR_ProcessAnalyzer_Expressions|Basic Syntax and Operations]] / [[Generic Functions in QPR ProcessAnalyzer|Generic Functions]]&lt;br /&gt;
* [[Generic_Objects_in_Expression_Language|Generic types]] ([[Generic_Properties_in_Expression_Language|Generic Properties]], [[Generic_Objects_in_Expression_Language#Array|Array]], [[Generic_Objects_in_Expression_Language#DateTime|DateTime]], [[Generic_Objects_in_Expression_Language#String|String]], [[Generic_Objects_in_Expression_Language#Timespan|Timespan]],  [[Generic_Objects_in_Expression_Language#Dictionary|Dictionary]])&lt;br /&gt;
* [[Process_Mining_Objects_in_Expression_Language|In-memory models API]] ([[Process_Mining_Objects_in_Expression_Language#AttributeType|AttributeType]], [[Process_Mining_Objects_in_Expression_Language#Case|Case]], [[Process_Mining_Objects_in_Expression_Language#Event|Event]], [[Process_Mining_Objects_in_Expression_Language#EventLog|EventLog]], [[Process_Mining_Objects_in_Expression_Language#EventType|EventType]], [[Process_Mining_Objects_in_Expression_Language#Flow|Flow]], [[Process_Mining_Objects_in_Expression_Language#FlowOccurrence|FlowOccurrence]], [[Process_Mining_Objects_in_Expression_Language#Variation|Variation]])&lt;br /&gt;
* Configuration objects ([[Dashboard_in_Expression_Language|Dashboard]], [[Datatable_in_Expression_Language|Datatable]], [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Model|Model]], [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Filter|Filter]], [[Diagram_in_Expression_Language|Diagram]], [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Script|Script]], [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#Project|Project]], [[QPR_ProcessAnalyzer_Objects_in_Expression_Language#User.2FGroup|User/Group]])&lt;br /&gt;
* Tabular data ([[DataFrame in Expression Language|DataFrame]], [[SqlDataFrame in Expression Language|SqlDataFrame]],  [[DataFlow_in_Expression_Language|DataFlow]])&lt;br /&gt;
* [[Machine_Learning_Functions_in_Expression_Language|Machine Learning API]] / [[Conformance_Checking|Conformance Checking]]&lt;br /&gt;
* [[Filtering_in_QPR_ProcessAnalyzer_Queries|Filter Rules JSON]]&lt;br /&gt;
* [[System Library]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 250px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Create Process Mining Models ===&lt;br /&gt;
* [[Creating Process Mining Model|Walkthrough: Creating Process Mining Model]]&lt;br /&gt;
* [[Exporting_and_Importing_Data_in_QPR_ProcessAnalyzer|Importing Data from CSV, XES and PACM files]]&lt;br /&gt;
* [[Object-centric_Process_Mining_Model|Object-centric Process Mining]]&lt;br /&gt;
* [[Event Ordering for Identical Timestamps|Event Ordering for Identical Timestamps]]&lt;br /&gt;
* [[Managing Time Zones and Local Time|Time Zones and Local Time]]&lt;br /&gt;
* [[Best_Practices_for_Designing_Models|Best Practices for Designing Models]]&lt;br /&gt;
* [[Create_Predicted_Eventlog|Create Predicted Eventlog]]&lt;br /&gt;
* [[Create Simulated Eventlog]]&lt;br /&gt;
==== For In-memory Models ====&lt;br /&gt;
* [[Calculated Attributes in QPR ProcessAnalyzer|Calculated Case and Event Attributes]]&lt;br /&gt;
* [[Email Notifications|Email Notifications]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Model Datasources|Model datasources]] ([[QPR ProcessAnalyzer Model Datasources#Loading Data from Datatables|Datatable]], [[QPR ProcessAnalyzer Model Datasources#Loading Script|Loading Script]], [[QPR ProcessAnalyzer Model Datasources#ODBC_Datasource|ODBC]])&lt;br /&gt;
* [[Case Level Permissions|Case Level Permissions]]&lt;br /&gt;
* [[Automatic Model Loading on Server Startup|Keeping Models Always Available]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 250px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Data Integrations and Connectors ===&lt;br /&gt;
* [[Managing Scripts|Managing Scripts in Workspace]]&lt;br /&gt;
* [[Datatable_Properties_Dialog|Managing Datatables in Workspace]]&lt;br /&gt;
* [[SQL Scripting for ETL|Writing SQL Scripts]]&lt;br /&gt;
* [[SQL Scripting Commands|SQL Scripting Commands Reference]]&lt;br /&gt;
* [[Storing Secrets for Scripts|Storing Secrets]]&lt;br /&gt;
* [[QPR ProcessAnalyzer ScriptLauncher|Installing and using QPR ScriptLauncher]]&lt;br /&gt;
* [[Importing_Data_from_SAP|How to Import Data from SAP]]&lt;br /&gt;
* [[Anonymize data|Anonymize data]]&lt;br /&gt;
* [[Expression Script Examples|Expression Script Examples]]&lt;br /&gt;
* [[QPR ProcessAnalyzer API|QPR ProcessAnalyzer REST API]]&lt;br /&gt;
* [[Sample Eventlog Files|Sample Eventlogs]]&lt;br /&gt;
* [[QPR TaskRecorder]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== For System Administrators ==&lt;br /&gt;
This section starts with the planning of QPR ProcessAnalyzer installation. After all requirements have been fulfilled, you can continue with the installation and configuration of QPR ProcessAnalyzer. Finally, learn how to manage users and perform other administrative tasks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;display: flex;flex-wrap: wrap;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 290px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
=== Planning Installation ===&lt;br /&gt;
* [[QPR ProcessAnalyzer System Requirements|System Requirements]]&lt;br /&gt;
* [[QPR ProcessAnalyzer System Architecture|System Architecture]]&lt;br /&gt;
* [[User Session Management|User Session Management]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Release Notes|QPR ProcessAnalyzer Release Notes]]&lt;br /&gt;
* [[QPR_TaskRecorder_Release_Notes|QPR TaskRecorder Release Notes]]&lt;br /&gt;
* [[QPR_ProcessAnalyzer_Downloads|Downloads Page]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 290px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installing and Configuring ===&lt;br /&gt;
* [[Installing QPR ProcessAnalyzer Server|Install QPR ProcessAnalyzer Server]] (or [[Updating_QPR_ProcessAnalyzer_Server|update existing]])&lt;br /&gt;
* [[Snowflake_Connection_Configuration|Configure Snowflake Connection]]&lt;br /&gt;
* [[Setting_up_Scripting_Sandbox|Setting up SQL Scripting Sandbox]]&lt;br /&gt;
* [[SAML_2.0_Federated_Authentication|SAML 2.0 Authentication]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Security Hardening|Security Hardening]]&lt;br /&gt;
* [[Activate_QPR_ProcessAnalyzer_using_ActivationUtility|Activation without Internet Connection]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;flex: 1 0 290px;border:1px solid #dfdfdf;padding:0 1em 1em 1.5em;background-color:#F7FAFC;margin:10px 0px 0px 10px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Administrating System ===&lt;br /&gt;
* [[Roles and Permissions|User Roles and Permissions]]&lt;br /&gt;
* [[Manage Users and Groups|Managing Users]]&lt;br /&gt;
* [[PA_Configuration_database_table|System Configurations]]&lt;br /&gt;
* [[QPR ProcessAnalyzer Logs|Logs for Audit and Troubleshooting]]&lt;br /&gt;
* [[In-memory_Models_Management|In-memory Models Management]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Agreements ==&lt;br /&gt;
See the [[QPR End User Software License Agreement]] and [[QPR Software as a Service Agreement]].&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25474</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25474"/>
		<updated>2024-11-19T13:35:36Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The system library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionality. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which exposes further properties dedicated to the different areas of interest for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel (documentation here: [[Create Predicted Eventlog|GeneratePredictionModel]])&lt;br /&gt;
** ApplyTransformations (documentation here: [[Create Simulated Eventlog|ApplyTransformations]])&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
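&lt;br /&gt;
The entries in the hierarchy above map directly to property paths on &#039;&#039;_system&#039;&#039;. As a minimal sketch (here &#039;&#039;modelId&#039;&#039; is a hypothetical variable, and a null filter is assumed to mean no filtering):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([() =&amp;gt; 1, () =&amp;gt; 2]);&lt;br /&gt;
let sampleSdf = _system.Utils.GetSampledEvents(modelId, 100, null);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;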
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs the given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-value pairs:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, no event attributes are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** The analysis column Name contains the event attribute name.&lt;br /&gt;
**** The analysis column Value contains the event attribute value and the number of occurrences in the case, in the format &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** The analysis column Type has &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value to be included in the comparison. The number of cases having each returned value must be at least the given share (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
***** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** The maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Defines whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given a string value, that string is used as the label for all the aggregated values.&lt;br /&gt;
**** Default is undefined, meaning the other values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: The value used to indicate null values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
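&lt;br /&gt;
As a minimal sketch, the parameters above can be passed as a dictionary (here &#039;&#039;model&#039;&#039; is assumed to hold the Model object, and the attribute and value names are hypothetical):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let rootCausesSdf = _system.RootCauses.FindForDataFrame(model, #{&lt;br /&gt;
  &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Region&amp;quot;],&lt;br /&gt;
  &amp;quot;Selection&amp;quot;: #{&amp;quot;Items&amp;quot;:[#{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[#{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Region&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;North&amp;quot;]}]}]},&lt;br /&gt;
  &amp;quot;MaximumRowCount&amp;quot;: 100&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;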
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample containing all the events of at most 1000 cases that have &amp;quot;Hats&amp;quot; as the &amp;quot;Product Group&amp;quot; case attribute value, from the model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** Data table object whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set.&lt;br /&gt;
*** Only columns that are to be changed are required to be listed.&lt;br /&gt;
*** Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and appends all newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
** A DataTable used for logging.&lt;br /&gt;
* &#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Run a stored procedure named &amp;quot;StoredProcedureTest&amp;quot; in Snowflake that generates new rows into the log table identified by logTableId, and write the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25473</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25473"/>
		<updated>2024-11-19T13:26:55Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel (documentation here: [[Create Predicted Eventlog|GeneratePredictionModel]])&lt;br /&gt;
** ApplyTransformations (documentation here: [[Create Simulated Eventlog|ApplyTransformations]])&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
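&lt;br /&gt;
A minimal sketch of collecting the results (the trivial functions are illustrative only):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; 2 * 3&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here &#039;&#039;results&#039;&#039; contains the two return values in the same order as the functions were given.&lt;br /&gt;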
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only that many values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering should be included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given any string value, that string is used as the value of the aggregated group.&lt;br /&gt;
**** Default = undefined =&amp;gt; the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
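&lt;br /&gt;
As a minimal sketch (the parameter values here are illustrative, not from the original article), the value-filtering parameters above could be combined so that only the five most used values each covering at least 5% of the compared cases are kept, with the rest aggregated under an &amp;quot;Others&amp;quot; value:&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Region&amp;quot;],&lt;br /&gt;
  &amp;quot;MinValueUsage&amp;quot;: 0.05,&lt;br /&gt;
  &amp;quot;MaxNumUniqueValues&amp;quot;: 5,&lt;br /&gt;
  &amp;quot;IncludeOthers&amp;quot;: &amp;quot;Others&amp;quot;,&lt;br /&gt;
  &amp;quot;ValueIfNull&amp;quot;: &amp;quot;(blank)&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;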
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample containing all the events of at most 1000 cases that have &amp;quot;Hats&amp;quot; as the &amp;quot;Product Group&amp;quot; case attribute value, from the model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** Data table object whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set.&lt;br /&gt;
*** Only columns that are to be changed are required to be listed.&lt;br /&gt;
*** Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, in such a way that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and appends all newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
** A DataTable used for logging.&lt;br /&gt;
* &#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Run a given stored procedure that generates new rows into the log table identified by logTableId, and write the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25472</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25472"/>
		<updated>2024-11-19T13:26:24Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** GeneratePredictionModel (documented here: [[Create Predicted Eventlog|GeneratePredictionModel]])&lt;br /&gt;
** ApplyTransformations (documented here: [[Create Simulated Eventlog|ApplyTransformations]])&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
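&lt;br /&gt;
A minimal sketch of collecting the results (the trivial functions are illustrative only):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; 2 * 3&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here &#039;&#039;results&#039;&#039; contains the two return values in the same order as the functions were given.&lt;br /&gt;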
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, empty, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of result if expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value to be included in the comparison: each returned value must occur in at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** The maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given a string value, that string is used as the label for the aggregated value.&lt;br /&gt;
**** Default = undefined, meaning the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression is not defined:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression is defined:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
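To make the relationships between the non-weighted columns concrete, the following sketch computes them for a single root-cause value. The formulas are an illustrative assumption derived from the column descriptions above, not the documented implementation of RootCauses.FindForDataFrame:&lt;br /&gt;

```python
# Hypothetical sketch of the non-weighted root-cause metrics.
# These formulas are one plausible reading of the column
# descriptions above; the exact implementation of
# RootCauses.FindForDataFrame may differ.

def root_cause_metrics(selected, total, all_selected, all_total):
    """Metrics for one root-cause value.

    selected: selected cases having this value (column Selected)
    total: all cases having this value (column Total)
    all_selected / all_total: selected and total counts over all analyzed cases
    """
    avg_share = all_selected / all_total          # average share of selected cases
    selected_pct = selected / total               # SelectedPercentage
    difference_pct = selected_pct - avg_share     # DifferencePercentage
    contribution = selected - avg_share * total   # cases above/below expectation
    contribution_pct = contribution / all_selected
    return {
        "SelectedPercentage": selected_pct,
        "DifferencePercentage": difference_pct,
        "Contribution": contribution,
        "ContributionPercentage": contribution_pct,
    }

# 40 of the 100 cases with this value are selected, while on average
# 20% of all 1000 analyzed cases (200 cases) are selected.
metrics = root_cause_metrics(selected=40, total=100, all_selected=200, all_total=1000)
```

In this reading, Contribution measures how many more (or fewer) selected cases the value has than would be expected from the average selected share.&lt;br /&gt;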
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of given model where given filter is first applied.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
In-place modifies the column types of given columns in given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** The data table whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set.&lt;br /&gt;
*** Only columns that are to be changed are required to be listed.&lt;br /&gt;
*** Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  [#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}]&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, so that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and appends all newly added rows to the script run log.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
** A DataTable used for logging.&lt;br /&gt;
* &#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Run given stored procedure that generates new rows to log table identified by logTableId and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25471</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25471"/>
		<updated>2024-11-19T13:24:49Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Utils.RunFunctionWithParallelLogging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script implements a parallel extract, transform, and load (ETL) pipeline using _system.Parallel.Run:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters are defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the transformed data into the data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, empty, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of result if expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value to be included in the comparison: each returned value must occur in at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** The maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given a string value, that string is used as the label for the aggregated value.&lt;br /&gt;
**** Default = undefined, meaning the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression is not defined:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression is defined:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
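To make the relationships between the non-weighted columns concrete, the following sketch computes them for a single root-cause value. The formulas are an illustrative assumption derived from the column descriptions above, not the documented implementation of RootCauses.FindForDataFrame:&lt;br /&gt;

```python
# Hypothetical sketch of the non-weighted root-cause metrics.
# These formulas are one plausible reading of the column
# descriptions above; the exact implementation of
# RootCauses.FindForDataFrame may differ.

def root_cause_metrics(selected, total, all_selected, all_total):
    """Metrics for one root-cause value.

    selected: selected cases having this value (column Selected)
    total: all cases having this value (column Total)
    all_selected / all_total: selected and total counts over all analyzed cases
    """
    avg_share = all_selected / all_total          # average share of selected cases
    selected_pct = selected / total               # SelectedPercentage
    difference_pct = selected_pct - avg_share     # DifferencePercentage
    contribution = selected - avg_share * total   # cases above/below expectation
    contribution_pct = contribution / all_selected
    return {
        "SelectedPercentage": selected_pct,
        "DifferencePercentage": difference_pct,
        "Contribution": contribution,
        "ContributionPercentage": contribution_pct,
    }

# 40 of the 100 cases with this value are selected, while on average
# 20% of all 1000 analyzed cases (200 cases) are selected.
metrics = root_cause_metrics(selected=40, total=100, all_selected=200, all_total=1000)
```

In this reading, Contribution measures how many more (or fewer) selected cases the value has than would be expected from the average selected share.&lt;br /&gt;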
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of given model where given filter is first applied.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** The data table whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set.&lt;br /&gt;
*** Only columns that are to be changed are required to be listed.&lt;br /&gt;
*** Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, so that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and appends all newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
** A DataTable used for logging.&lt;br /&gt;
* &#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Run a stored procedure that generates new rows into the log table identified by logTableId, and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25470</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25470"/>
		<updated>2024-11-19T13:24:19Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Utils.ModifyColumnTypes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
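The ordering guarantee can be sketched with two trivial functions (the lambdas and their values here are purely illustrative):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here &#039;&#039;results&#039;&#039; gets the first function&#039;s result at index 0 and the second function&#039;s result at index 1, regardless of which function finishes first.&lt;br /&gt;
&lt;br /&gt;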
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null or empty string or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of result if expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
**** If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default = undefined, in which case the other values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
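The function can also be called directly in a script. The following is a minimal sketch, assuming the model is resolved with ModelById and using an illustrative &amp;quot;Region&amp;quot; case attribute:&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let rootCausesSdf = _system.RootCauses.FindForDataFrame(&lt;br /&gt;
  ModelById(modelId),&lt;br /&gt;
  #{&lt;br /&gt;
    &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Region&amp;quot;],&lt;br /&gt;
    &amp;quot;MaximumRowCount&amp;quot;: 100&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
rootCausesSdf.Collect();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;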
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, with the given filter applied first.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get all events of a sample of at most 1000 cases having &amp;quot;Hats&amp;quot; as the &amp;quot;Product Group&amp;quot; case attribute value, from the model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
    &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
      &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
      &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
      &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
    }]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** The data table whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set.&lt;br /&gt;
*** Only columns that are to be changed are required to be listed.&lt;br /&gt;
*** Columns that don&#039;t exist in the data table will be skipped.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
Runs the given function, which writes logging information into the given data table, so that all the logging is also included in the generated script run log (when run inside a script).&lt;br /&gt;
&lt;br /&gt;
Internally polls the table every 5 seconds for new rows and appends all newly added rows to the script log.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;logTable&#039;&#039;&#039;:&lt;br /&gt;
** A DataTable used for logging.&lt;br /&gt;
* &#039;&#039;&#039;callbackFunc&#039;&#039;&#039;:&lt;br /&gt;
** Function that uses the given data table for logging its current status.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
The result of the callback function.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Run a stored procedure that generates new rows into the log table identified by logTableId, and log the generated rows into the script run log.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.RunFunctionWithParallelLogging(DataTableById(logTableId), () =&amp;gt; {&lt;br /&gt;
    CreateSnowflakeConnection().CallStoredProcedure(&amp;quot;StoredProcedureTest&amp;quot;, #{})&lt;br /&gt;
});&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25469</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25469"/>
		<updated>2024-11-19T13:05:26Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Utils.ModifyColumnTypes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
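The ordering guarantee can be sketched with two trivial functions (the lambdas and their values here are purely illustrative):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here &#039;&#039;results&#039;&#039; gets the first function&#039;s result at index 0 and the second function&#039;s result at index 1, regardless of which function finishes first.&lt;br /&gt;
&lt;br /&gt;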
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. Is applied on top of the filtered event log (specified by Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null or empty string or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of result if expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value included in the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
**** If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default = undefined, in which case the other values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has first been applied.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has first been applied.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
Modifies, in place, the types of the given columns in the given data table.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;dataTable&#039;&#039;&#039;:&lt;br /&gt;
** The data table object whose column types are to be modified.&lt;br /&gt;
* &#039;&#039;&#039;columnTypesToSet&#039;&#039;&#039;:&lt;br /&gt;
** Array of column name/type definitions to set (not all columns in the data table need to be listed).&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns the modified data table object.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Modify column &amp;quot;CaseId&amp;quot; to be of type string.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
_system.Utils.ModifyColumnTypes(&lt;br /&gt;
  eventsTable, &lt;br /&gt;
  #{#{&amp;quot;Name&amp;quot;: &amp;quot;CaseId&amp;quot;, &amp;quot;DataType&amp;quot;: &amp;quot;String&amp;quot;}}&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25468</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25468"/>
		<updated>2024-11-19T12:55:58Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Utils.GetSampledEvents */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
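&lt;br /&gt;
As a minimal sketch of the ordering guarantee (the function bodies below are only illustrative expressions, not library calls):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; 2 * 3&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Regardless of which function finishes first, the result of the first function comes first in &#039;&#039;results&#039;&#039;.&lt;br /&gt;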
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, empty, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** In the results, the Name column contains the event attribute name.&lt;br /&gt;
**** In the results, the Value column contains the event attribute value and the number of occurrences in the case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** In the results, the Type column has &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** Expression, if defined, must be any of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g. using the ToSqlExpression function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage of a value included in the comparison: the number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for any attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Specifies whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If any string value is given, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default = undefined, meaning the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has first been applied.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing sampled events of the given model, after the given filter has first been applied.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Get a sample of all the events of at most 1000 cases having &amp;quot;Hats&amp;quot; as &amp;quot;Product Group&amp;quot; case attribute value from model identified by modelId.&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let eventDataSampleSdf = _system.Utils.GetSampledEvents(modelId, 1000, #{&lt;br /&gt;
  &amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
    &amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&lt;br /&gt;
	&amp;quot;Items&amp;quot;:[#{&lt;br /&gt;
	  &amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&lt;br /&gt;
	  &amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&lt;br /&gt;
	  &amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]&lt;br /&gt;
	}]&lt;br /&gt;
  }]&lt;br /&gt;
});&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.ModifyColumnTypes ==&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25467</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25467"/>
		<updated>2024-11-19T12:48:02Z</updated>

		<summary type="html">&lt;p&gt;MarHink: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
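&lt;br /&gt;
As a minimal sketch of the ordering guarantee (the function bodies below are only illustrative expressions, not library calls):&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; 2 * 3&lt;br /&gt;
]);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Regardless of which function finishes first, the result of the first function comes first in &#039;&#039;results&#039;&#039;.&lt;br /&gt;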
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, empty, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is empty array or not specified, event attributes not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;:&lt;br /&gt;
**** Expression providing a weight for each case.&lt;br /&gt;
**** The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g. using the ToSqlExpression function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;:&lt;br /&gt;
**** The minimum total usage of a value to include in the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only that number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:&lt;br /&gt;
**** Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given any string value, that string is used as the value for all the aggregated values.&lt;br /&gt;
**** Default = undefined, in which case the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
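&lt;br /&gt;
As a minimal sketch of a direct Expression Language call (an illustrative assumption: a model object is available in the &#039;&#039;model&#039;&#039; variable and the model has a case attribute named Region), the parameters can be given as a dictionary:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let rootCauses = _system.RootCauses.FindForDataFrame(model, #{&lt;br /&gt;
  &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Region&amp;quot;],&lt;br /&gt;
  &amp;quot;MaximumRowCount&amp;quot;: 100&lt;br /&gt;
});&lt;br /&gt;
rootCauses.Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;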
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression does not have a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculates a root cause analysis for the given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
Returns a [[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;sourceModel&#039;&#039;&#039;:&lt;br /&gt;
** ProcessAnalyzer model object of the model whose event data is to be filtered and sampled.&lt;br /&gt;
* &#039;&#039;&#039;sampledCaseCount&#039;&#039;&#039;:&lt;br /&gt;
** The maximum number of cases to return (or null if all cases should be returned).&lt;br /&gt;
* &#039;&#039;&#039;filter&#039;&#039;&#039;:&lt;br /&gt;
** JSON filter to be applied on the event data of the source model prior to performing the sampling.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
[[SqlDataFrame in Expression Language|SqlDataFrame]] containing the sampled events of the given model, after the given filter has been applied.&lt;br /&gt;
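&lt;br /&gt;
=== Example ===&lt;br /&gt;
A minimal sketch (the &#039;&#039;model&#039;&#039; variable and the empty filter are illustrative assumptions):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let sampledEvents = _system.Utils.GetSampledEvents(model, 100, #{&amp;quot;Items&amp;quot;: []});&lt;br /&gt;
sampledEvents.Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;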
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25466</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25466"/>
		<updated>2024-11-19T12:41:03Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The System Library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The System Library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
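&lt;br /&gt;
As a minimal sketch, the returned array preserves the order of the given functions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
let results = _system.Parallel.Run([&lt;br /&gt;
  () =&amp;gt; 1 + 1,&lt;br /&gt;
  () =&amp;gt; &amp;quot;second&amp;quot;&lt;br /&gt;
]);&lt;br /&gt;
results&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;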
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to perform the following three tasks:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing the properties of the selected cases against those of all cases in the given model.&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter JSON (#30921#) that is applied to the event log before calculating the root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;:&lt;br /&gt;
**** Selection JSON (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of the case attributes to include in the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer, or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with a specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of the event attributes to include in the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with a specified name, an error message is shown.&lt;br /&gt;
**** The analysis column Name contains the event attribute name.&lt;br /&gt;
**** The analysis column Value contains the event attribute value and the number of occurrences in the case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** The analysis column Type has &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;:&lt;br /&gt;
**** Expression providing a weight for each case.&lt;br /&gt;
**** The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g. using the ToSqlExpression function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;:&lt;br /&gt;
**** The minimum total usage of a value to include in the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only that number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;:&lt;br /&gt;
**** Defines whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given any string value, that string is used as the value for all the aggregated values.&lt;br /&gt;
**** Default = undefined, in which case the excluded values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression does not have a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculates a root cause analysis for the given model using parameters read from the [[Web API: Expression/query|query configuration]].&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25465</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25465"/>
		<updated>2024-11-19T12:38:42Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* RootCauses.FindRootCausesDataFrame */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The System Library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The System Library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindForDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to perform the following three tasks:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindForDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing the properties of the selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;model&#039;&#039;&#039;:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* &#039;&#039;&#039;parameters&#039;&#039;&#039;:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** &#039;&#039;&#039;Filter&#039;&#039;&#039;: &lt;br /&gt;
**** Filter JSON (#30921#) that is applied to the event log before calculating the root causes.&lt;br /&gt;
*** &#039;&#039;&#039;Selection&#039;&#039;&#039;:&lt;br /&gt;
**** Selection JSON (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** &#039;&#039;&#039;CaseAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of the case attributes to include in the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer, or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with a specified name, an error message is shown.&lt;br /&gt;
*** &#039;&#039;&#039;EventAttributeTypes&#039;&#039;&#039;:&lt;br /&gt;
**** An array of strings with the names of the event attributes to include in the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with a specified name, an error message is shown.&lt;br /&gt;
**** The analysis column Name contains the event attribute name.&lt;br /&gt;
**** The analysis column Value contains the event attribute value and the number of occurrences in the case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** The analysis column Type has &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** &#039;&#039;&#039;WeightingExpression&#039;&#039;&#039;: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** &#039;&#039;&#039;MaximumRowCount&#039;&#039;&#039;:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** &#039;&#039;&#039;MinValueUsage&#039;&#039;&#039;: &lt;br /&gt;
**** The minimum total usage for a value to be included in the comparison. Each returned value must occur in at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
***** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;MaxNumUniqueValues&#039;&#039;&#039;: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** &#039;&#039;&#039;IncludeOthers&#039;&#039;&#039;: &lt;br /&gt;
**** Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
**** If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default is undefined, in which case the other values are not included in the results.&lt;br /&gt;
*** &#039;&#039;&#039;ValueIfNull&#039;&#039;&#039;: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** &#039;&#039;&#039;Type&#039;&#039;&#039;: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** &#039;&#039;&#039;Name&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** &#039;&#039;&#039;Value&#039;&#039;&#039;: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** &#039;&#039;&#039;Total&#039;&#039;&#039;: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** &#039;&#039;&#039;Selected&#039;&#039;&#039;: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** &#039;&#039;&#039;Compared&#039;&#039;&#039;:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** &#039;&#039;&#039;Contribution&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;ContributionPercentage&#039;&#039;&#039;: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** &#039;&#039;&#039;DifferencePercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedPercentage&#039;&#039;&#039;:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** &#039;&#039;&#039;SelectedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;ComparedWeight&#039;&#039;&#039;: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** &#039;&#039;&#039;TotalWeight&#039;&#039;&#039;:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
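To make the unweighted Contribution and SelectedPercentage column descriptions above concrete, here is a minimal sketch. The formulas are an interpretation inferred from the column descriptions, not the documented product implementation, and the function names are hypothetical.

```python
# Hypothetical interpretation of the unweighted root-cause columns,
# inferred from the column descriptions above (not the product code).

def contribution(selected: int, total: int,
                 all_selected: int, all_total: int) -> float:
    """How many more (or fewer) selected cases this root cause has than
    expected under the average selected share of all analyzed cases."""
    average_share = all_selected / all_total  # average fraction of selected cases
    return selected - average_share * total   # deviation from the expected count

def selected_percentage(selected: int, total: int) -> float:
    """Share of selected cases among all cases having this root cause."""
    return selected / total

# Example: 1000 analyzed cases, 200 selected (20% average share);
# one root cause value covers 100 cases, 50 of them selected.
print(contribution(50, 100, 200, 1000))  # 30.0
print(selected_percentage(50, 100))      # 0.5
```

A positive contribution means the value is over-represented among the selected cases; a value matching the average share contributes zero.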
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the query configuration.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25464</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25464"/>
		<updated>2024-11-19T12:36:56Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Return value */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified). &lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** The minimum total usage for a value to be included in the comparison. Each returned value must occur in at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
***** If not defined or null, all values are included (=default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
**** If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default is undefined, in which case the other values are not included in the results.&lt;br /&gt;
*** ValueIfNull: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** Type: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** Name: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** Value: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** Total: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** Selected: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** Compared:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression has no value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** SelectedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** ComparedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** TotalWeight:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
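To make the unweighted Contribution and SelectedPercentage column descriptions above concrete, here is a minimal sketch. The formulas are an interpretation inferred from the column descriptions, not the documented product implementation, and the function names are hypothetical.

```python
# Hypothetical interpretation of the unweighted root-cause columns,
# inferred from the column descriptions above (not the product code).

def contribution(selected: int, total: int,
                 all_selected: int, all_total: int) -> float:
    """How many more (or fewer) selected cases this root cause has than
    expected under the average selected share of all analyzed cases."""
    average_share = all_selected / all_total  # average fraction of selected cases
    return selected - average_share * total   # deviation from the expected count

def selected_percentage(selected: int, total: int) -> float:
    """Share of selected cases among all cases having this root cause."""
    return selected / total

# Example: 1000 analyzed cases, 200 selected (20% average share);
# one root cause value covers 100 cases, 50 of them selected.
print(contribution(50, 100, 200, 1000))  # 30.0
print(selected_percentage(50, 100))      # 0.5
```

A positive contribution means the value is over-represented among the selected cases; a value matching the average share contributes zero.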
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Calculate root cause analysis for given model using parameters read from the query configuration.&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25463</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25463"/>
		<updated>2024-11-19T12:34:28Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Return value */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
	&amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot; ,&amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes of type &amp;quot;String&amp;quot; are included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not included in the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified). &lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** The minimum total usage for a value to be included in the comparison. Each returned value must occur in at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
***** If not defined or null, all values are included (=default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include in the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Should the rest of the attribute values not included due to MinValueUsage or MaxNumUniqueValues filtering be included as an aggregated &amp;quot;Others&amp;quot; value?&lt;br /&gt;
**** If given any string value, that value is used as the value for all the aggregated values.&lt;br /&gt;
**** Default is undefined, in which case the other values are not included in the results.&lt;br /&gt;
*** ValueIfNull: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** Type: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** Name: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** Value: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** Total: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** Selected: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** Compared:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression does not have a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** SelectedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** ComparedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** TotalWeight:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
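The column descriptions above do not spell out exact formulas. As a reading aid, the following Python sketch shows one natural interpretation of the unweighted Contribution and DifferencePercentage columns (the selected count minus the count expected at the average selection rate); this is an assumption for illustration, not the product's documented formula.

```python
def contribution(selected, total, selected_all, total_all):
    """Cases contributing to the deviation from the average selection rate.

    selected:     selected cases having this root cause
    total:        all cases having this root cause
    selected_all: selected cases among all analyzed cases
    total_all:    all analyzed cases
    (Illustrative interpretation of the column descriptions above.)
    """
    average_rate = selected_all / total_all   # average share of selected cases
    expected_selected = total * average_rate  # expected count if no correlation
    return selected - expected_selected       # deviation, in cases

def difference_percentage(selected, total, selected_all, total_all):
    """Deviation of this value's selection rate from the average rate."""
    return selected / total - selected_all / total_all
```

For instance, if 100 of 500 analyzed cases are selected (20%) and a root cause value covers 50 cases of which 30 are selected, the expected selected count would be 10, giving a Contribution of 20 under this reading.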
&lt;br /&gt;
Example: The following query configuration calculates a root cause analysis for the given model using parameters read from the query configuration:&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
   &amp;quot;Root&amp;quot;: &amp;quot;let m = _; _system.RootCauses.&#039;&#039;&#039;FindForDataFrame&#039;&#039;&#039;(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
   &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
     &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
       &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Region&amp;quot;],&lt;br /&gt;
       &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;: [{&amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;, &amp;quot;Items&amp;quot;: [{&amp;quot;Type&amp;quot;: &amp;quot;CaseAttributeValue&amp;quot;, &amp;quot;Attribute&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;StringifiedValues&amp;quot;: [&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
       &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
       &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
       &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
       &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
     }&lt;br /&gt;
   },&lt;br /&gt;
   &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
     {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
   ]&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25460</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25460"/>
		<updated>2024-11-19T12:28:55Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest in scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs the given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of the results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters are defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into the data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
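The chunked hand-off between the transform and load tasks above is a producer–consumer pattern. A rough Python analogue with made-up chunk data standing in for the SAP extraction (`queue.Queue` plays the role of the data flow, and a sentinel value plays the role of `Complete()`):

```python
import queue
import threading

SENTINEL = object()  # marks the end of the stream, like Complete() above

def transform_worker(chunks, flow, transform):
    """Consume raw chunks, transform each, push downstream, then signal end."""
    try:
        for chunk in chunks:
            flow.put(transform(chunk))
        flow.put(SENTINEL)
    except Exception:
        flow.put(SENTINEL)  # crude stand-in for Fail(...)

def load_worker(flow, sink):
    """Append transformed chunks to the sink until the stream completes."""
    while (chunk := flow.get()) is not SENTINEL:
        sink.extend(chunk)

raw_chunks = [[1, 2], [3, 4], [5]]  # stand-in for extracted data chunks
flow, sink = queue.Queue(), []
t1 = threading.Thread(target=transform_worker,
                      args=(raw_chunks, flow, lambda c: [x * 10 for x in c]))
t2 = threading.Thread(target=load_worker, args=(flow, sink))
t1.start(); t2.start(); t1.join(); t2.join()
# sink now holds every transformed row in order: [10, 20, 30, 40, 50]
```

As in the script above, the loader starts before extraction finishes and processes chunks as they arrive, so the two stages overlap in time.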
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using the ToSqlExpression function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified). &lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** The minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering should be included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If any string value is given, that string is used as the label for the aggregated value.&lt;br /&gt;
**** Default = undefined =&amp;gt; the other values will not be included in the results.&lt;br /&gt;
*** ValueIfNull: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** Type: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes&lt;br /&gt;
** Name: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** Value: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** Total: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** Selected: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** Compared:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression does not have a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** SelectedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** ComparedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** TotalWeight:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
Notable differences between this function and FindRootCauses (#71058#) used for in-memory event logs:&lt;br /&gt;
&lt;br /&gt;
* In FindRootCauses, zero-weighted contribution rows are removed from the results.&lt;br /&gt;
* A non-numeric case cost causes FindForDataFrame to throw an exception, whereas FindRootCauses behaves as if null was used.&lt;br /&gt;
* All case data table columns can be used as CaseAttributeTypes in FindForDataFrame, even the case id column, which can&#039;t be used in FindRootCauses.&lt;br /&gt;
* WeightingExpression is given as a SqlExpression, not as an expression language expression as in FindRootCauses.&lt;br /&gt;
* FindForDataFrame only supports querying one type of columns at a time. Querying, e.g., columns having both string and integer values will throw an exception.&lt;br /&gt;
* If an empty array is given for the CaseAttributeTypes parameter, FindForDataFrame throws an exception, whereas FindRootCauses returns a result without any rows.&lt;br /&gt;
* FindForDataFrame converts attribute values to strings, because internally it uses the UNPIVOT instruction for calculating the Value column (which requires the same data type for all unpivoted columns). So even if a numeric case attribute is used, the Value column will contain strings, and sorting by Value will use string comparison.&lt;br /&gt;
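The last point has a practical consequence for sorting: once numeric values are converted to strings, ordering by the Value column is lexicographic rather than numeric. A quick Python illustration:

```python
values = [9, 10, 100, 2]

numeric_order = sorted(values)                 # numeric comparison
string_order = sorted(str(v) for v in values)  # what string conversion yields

print(numeric_order)  # [2, 9, 10, 100]
print(string_order)   # ['10', '100', '2', '9'] -- "10" sorts before "9"
```

So a dashboard that sorts a numeric case attribute by the Value column will show, e.g., 10 before 9.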
&lt;br /&gt;
Example: The following query configuration calculates a root cause analysis for the given model using parameters read from the query configuration:&lt;br /&gt;
&lt;br /&gt;
 {&lt;br /&gt;
   &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
   &amp;quot;Root&amp;quot;: &amp;quot;let m = _; _system.RootCauses.&#039;&#039;&#039;FindForDataFrame&#039;&#039;&#039;(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
   &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
     &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
       &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Region&amp;quot;],&lt;br /&gt;
       &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;: [{&amp;quot;Type&amp;quot;: &amp;quot;IncludeCases&amp;quot;, &amp;quot;Items&amp;quot;: [{&amp;quot;Type&amp;quot;: &amp;quot;CaseAttributeValue&amp;quot;, &amp;quot;Attribute&amp;quot;: &amp;quot;Product Group&amp;quot;, &amp;quot;StringifiedValues&amp;quot;: [&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
       &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
       &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
       &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
       &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
     }&lt;br /&gt;
   },&lt;br /&gt;
   &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
     {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
   ]&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25459</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25459"/>
		<updated>2024-11-19T12:27:32Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting QPR ProcessAnalyzer functionalities. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides additional properties dedicated to different areas of interest in scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs the given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of the results returned by the called functions, in the same order as the functions appear in the &#039;&#039;functions&#039;&#039; parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters are defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into the data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of the selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines the selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** If defined, the expression must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using the ToSqlExpression function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount:&lt;br /&gt;
**** The maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified). &lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** The minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (=default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Whether the attribute values excluded by the MinValueUsage or MaxNumUniqueValues filtering should be included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If any string value is given, that string is used as the label for the aggregated value.&lt;br /&gt;
**** Default = undefined =&amp;gt; the other values will not be included in the results.&lt;br /&gt;
*** ValueIfNull: Value used to indicate null-values.&lt;br /&gt;
**** Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
**** Must not be null.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
Returns a SqlDataFrame object (#70611#) with the following columns:&lt;br /&gt;
&lt;br /&gt;
* Common columns:&lt;br /&gt;
** Type: &lt;br /&gt;
*** Type of the root cause:&lt;br /&gt;
**** &amp;quot;CaseAttributeValue&amp;quot; for case attributes.&lt;br /&gt;
**** &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
** Name: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute name.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute name.&lt;br /&gt;
** Value: &lt;br /&gt;
*** When type is CaseAttributeValue, case attribute value.&lt;br /&gt;
*** When type is EventAttributeValue, event attribute value and number of occurrences in case.&lt;br /&gt;
** Total: &lt;br /&gt;
*** The total number of cases having the found root cause.&lt;br /&gt;
** Selected: &lt;br /&gt;
*** The number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
** Compared:&lt;br /&gt;
*** The number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
** Columns when WeightingExpression does not have a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage: &lt;br /&gt;
**** The percent of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
** Columns when WeightingExpression has a value:&lt;br /&gt;
*** Contribution: &lt;br /&gt;
**** The sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** ContributionPercentage: &lt;br /&gt;
**** The percent of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
*** DifferencePercentage:&lt;br /&gt;
**** The deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
*** SelectedPercentage:&lt;br /&gt;
**** The percent of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
*** SelectedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
*** ComparedWeight: &lt;br /&gt;
**** The sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
*** TotalWeight:&lt;br /&gt;
**** The sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
4. Notable differences between this function and FindRootCauses (#71058#) used for in-memory event logs:&lt;br /&gt;
&lt;br /&gt;
4.1. In FindRootCauses, zero-weighted contribution rows are removed from the results.&lt;br /&gt;
&lt;br /&gt;
4.2. A non-numeric case cost in FindForDataFrame causes an exception to be thrown, whereas FindRootCauses behaves as if null were used.&lt;br /&gt;
&lt;br /&gt;
4.3. All the case-data table columns can be used as CaseAttributeTypes in FindForDataFrame, even the case id column, which can&#039;t be used in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.4. WeightingExpression is given as SqlExpression, not expression language expression as in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.5. FindForDataFrame only supports querying one type of columns at a time. Querying, e.g., columns having both string and integer values will throw an exception.&lt;br /&gt;
&lt;br /&gt;
4.6. If an empty array is given for CaseAttributeTypes-parameter, FindForDataFrame throws an exception, whereas FindRootCauses returns a result without any rows.&lt;br /&gt;
&lt;br /&gt;
4.7. FindForDataFrame converts attribute values to string, because internally it uses UNPIVOT instruction for calculating Value column (requires same data type for unpivoted columns). So even if numeric case attribute is used, Value column will contain strings and sorting by Value will use string comparison.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.&#039;&#039;&#039;FindForDataFrame&#039;&#039;&#039;(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
   &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;], &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]}, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaxNumUniqueValues&amp;quot;: 2, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaximumRowCount&amp;quot;: 1000, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
  },&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [ &lt;br /&gt;
&lt;br /&gt;
   {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
Calculates a root cause analysis for the given model using parameters read from the query configuration.&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25456</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25456"/>
		<updated>2024-11-19T12:23:36Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides further properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039;-parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR_ProcessAnalyzer_Expressions#In-memory_expression_blocks_in_SQL_expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount: &lt;br /&gt;
**** Maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** Minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given any string value, that value is used as the label for all the aggregated values.&lt;br /&gt;
**** Default is undefined, in which case the excluded values are not included into the results.&lt;br /&gt;
*** ValueIfNull: &lt;br /&gt;
**** Value used to indicate null values. Must not be null.&lt;br /&gt;
**** Default: &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
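The interaction of the MinValueUsage, MaxNumUniqueValues, IncludeOthers and ValueIfNull parameters can be sketched in Python as follows. This is only an illustration of the documented behavior, not the actual implementation; the function name and signature are made up for the example.&lt;br /&gt;

```python
from collections import Counter

def filter_attribute_values(values, min_value_usage=None, max_num_unique=None,
                            include_others=None, value_if_null="(blank)"):
    """Hypothetical sketch of how attribute values are picked for comparison."""
    # ValueIfNull: nulls are shown using the given label.
    values = [value_if_null if v is None else v for v in values]
    counts = Counter(values)
    total = len(values)
    kept = list(counts.items())
    # MinValueUsage: drop values used by less than the given share of cases.
    if min_value_usage is not None:
        kept = [(v, c) for v, c in kept if c / total >= min_value_usage]
    # MaxNumUniqueValues: keep only the most used values.
    if max_num_unique is not None:
        kept = sorted(kept, key=lambda vc: -vc[1])[:max_num_unique]
    result = dict(kept)
    # IncludeOthers: aggregate everything filtered out under one label.
    if include_others is not None:
        dropped = total - sum(result.values())
        if dropped:
            result[include_others] = dropped
    return result
```

For example, with values A (5 cases), B (3), C (1) and one null, MinValueUsage 0.2 and MaxNumUniqueValues 2 keep only A and B, and IncludeOthers aggregates the remaining two cases.&lt;br /&gt;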
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
A SqlDataFrame object (#70611#) with the following columns:&lt;br /&gt;
&lt;br /&gt;
Common columns:&lt;br /&gt;
* Type: type of the root cause; &amp;quot;CaseAttributeValue&amp;quot; for case attributes, &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
* Name: case attribute name (when Type is CaseAttributeValue) or event attribute name (when Type is EventAttributeValue).&lt;br /&gt;
* Value: case attribute value (when Type is CaseAttributeValue), or event attribute value and number of occurrences in case (when Type is EventAttributeValue).&lt;br /&gt;
* Total: total number of cases having the found root cause.&lt;br /&gt;
* Selected: number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
* Compared: number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
&lt;br /&gt;
Columns when WeightingExpression has no value:&lt;br /&gt;
* Contribution: the number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
* ContributionPercentage: the percentage of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
* DifferencePercentage: deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
* SelectedPercentage: percentage of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
Columns when WeightingExpression has a value:&lt;br /&gt;
* Contribution: sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
* ContributionPercentage: the percentage of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
* DifferencePercentage: deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
* SelectedPercentage: percentage of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
* SelectedWeight: sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
* ComparedWeight: sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
* TotalWeight: sum of weights of all cases with that root cause.&lt;br /&gt;
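To make the column semantics concrete, the following Python sketch computes Total, Selected, Compared and SelectedPercentage for a single root cause, with and without case weights. It only illustrates the semantics described above; the function and field names are hypothetical and not part of QPR ProcessAnalyzer.&lt;br /&gt;

```python
def root_cause_counts(cases, selected_ids, attribute, value, weight=None):
    """Hypothetical sketch of the Total/Selected/Compared columns and their
    weighted counterparts. `cases` is a list of dicts; `weight`, if given,
    names a numeric field providing the case weight."""
    w = (lambda c: c[weight]) if weight else (lambda c: 1)
    # All cases having the found root cause (attribute == value).
    matching = [c for c in cases if c.get(attribute) == value]
    total = sum(w(c) for c in matching)
    selected = sum(w(c) for c in matching if c["id"] in selected_ids)
    compared = total - selected  # matching cases outside the selection
    # SelectedPercentage: share of selected among all cases with the root cause.
    selected_pct = selected / total if total else 0.0
    return {"Total": total, "Selected": selected,
            "Compared": compared, "SelectedPercentage": selected_pct}
```

Passing a weight field corresponds to defining WeightingExpression: the counts become sums of case weights rather than numbers of cases.&lt;br /&gt;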
&lt;br /&gt;
Notable differences between this function and FindRootCauses (#71058#) used for in-memory event logs:&lt;br /&gt;
* In FindRootCauses, zero-weighted contribution rows are removed from the results.&lt;br /&gt;
* A non-numeric case cost in FindForDataFrame causes an exception to be thrown, whereas FindRootCauses behaves as if null were used.&lt;br /&gt;
* All the case data table columns can be used as CaseAttributeTypes in FindForDataFrame, even the case id column, which can&#039;t be used in FindRootCauses.&lt;br /&gt;
* WeightingExpression is given as a SqlExpression, not as an expression language expression as in FindRootCauses.&lt;br /&gt;
* FindForDataFrame only supports querying one type of columns at a time. Querying, e.g., columns having both string and integer values will throw an exception.&lt;br /&gt;
* If an empty array is given for the CaseAttributeTypes parameter, FindForDataFrame throws an exception, whereas FindRootCauses returns a result without any rows.&lt;br /&gt;
* FindForDataFrame converts attribute values to strings, because internally it uses the UNPIVOT instruction to calculate the Value column (UNPIVOT requires the same data type for all unpivoted columns). So even if a numeric case attribute is used, the Value column will contain strings and sorting by Value uses string comparison.&lt;br /&gt;
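The string-conversion behavior described above means that ordering by the Value column compares values lexicographically. A minimal Python sketch of the difference, independent of the product:&lt;br /&gt;

```python
# Numeric values sort numerically, but once cast to strings (as the
# UNPIVOT-based Value column is), they sort lexicographically.
numeric = [2, 10, 9]
as_strings = [str(v) for v in numeric]

print(sorted(numeric))     # [2, 9, 10]
print(sorted(as_strings))  # ['10', '2', '9'] -- "10" sorts before "2"
```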
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following query configuration calculates a root cause analysis for the given model using parameters read from the configuration:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;json&amp;quot;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
  &amp;quot;Root&amp;quot;: &amp;quot;let m = _; _system.RootCauses.FindForDataFrame(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
    &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
      &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;, &amp;quot;Customer Group&amp;quot;, &amp;quot;Product Group&amp;quot;, &amp;quot;Region&amp;quot;],&lt;br /&gt;
      &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]},&lt;br /&gt;
      &amp;quot;MaxNumUniqueValues&amp;quot;: 2,&lt;br /&gt;
      &amp;quot;MaximumRowCount&amp;quot;: 1000,&lt;br /&gt;
      &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
      &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
    }&lt;br /&gt;
  },&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [&lt;br /&gt;
    {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
  ]&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25455</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25455"/>
		<updated>2024-11-19T12:22:45Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Returns */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. The system library is referenced in scripts via the &#039;&#039;_system&#039;&#039; property, which provides further properties dedicated to different areas of interest for scripting.&lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039;-parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* SAP-extraction from VBAK-table in SAP (connection parameters defined in connectionParametersDict-dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
Based on the similar in-memory function: [[FindRootCauses Function|EventLog.FindRootCauses]].&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
&lt;br /&gt;
* model:&lt;br /&gt;
** Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
* parameters:&lt;br /&gt;
** A parameter convertible to a StringDictionary object with the following supported key-values:&lt;br /&gt;
*** Filter: &lt;br /&gt;
**** Filter json (#30921#) that is applied to the event log before calculating root causes.&lt;br /&gt;
*** Selection: &lt;br /&gt;
**** Selection json (#30927#) that defines selected cases to find root causes for. It is applied on top of the filtered event log (specified by the Filter parameter).&lt;br /&gt;
**** If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
*** CaseAttributeTypes: &lt;br /&gt;
**** An array of strings with the names of case attributes included into the root causes.&lt;br /&gt;
**** Only case attributes of type string, integer or boolean are included.&lt;br /&gt;
**** If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have a case attribute with the specified name, an error message is shown.&lt;br /&gt;
*** EventAttributeTypes:&lt;br /&gt;
**** An array of strings with the names of event attributes included into the root causes.&lt;br /&gt;
**** Only event attributes of type string are included.&lt;br /&gt;
**** If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes.&lt;br /&gt;
**** If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes.&lt;br /&gt;
**** If the model doesn&#039;t have an event attribute with the specified name, an error message is shown.&lt;br /&gt;
**** Analysis column Name should contain Event Attribute Name.&lt;br /&gt;
**** Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count).&lt;br /&gt;
**** Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value.&lt;br /&gt;
*** WeightingExpression: &lt;br /&gt;
**** Expression providing weights for each case.&lt;br /&gt;
**** The expression, if defined, must be one of the following types:&lt;br /&gt;
***** A string containing the [[SQL Expressions|SqlExpression]] to evaluate.&lt;br /&gt;
***** A SqlExpression object, created e.g., using ToSqlExpression-function (see also: [[QPR ProcessAnalyzer Expressions#In-memory expression blocks in SQL expressions|In-memory expression blocks in SQL expressions]]).&lt;br /&gt;
**** A row is filtered out of the result if the expression result is null.&lt;br /&gt;
*** MaximumRowCount: &lt;br /&gt;
**** Maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified).&lt;br /&gt;
**** If undefined, 200 is used.&lt;br /&gt;
**** If set to 0, all rows are returned.&lt;br /&gt;
*** MinValueUsage: &lt;br /&gt;
**** Minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given percentage (a float between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** MaxNumUniqueValues: &lt;br /&gt;
**** Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this value, only the given number of values with the highest usage are included.&lt;br /&gt;
**** If not defined or null, all values are included (default).&lt;br /&gt;
*** IncludeOthers: &lt;br /&gt;
**** Whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as an aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
**** If given any string value, that value is used as the label for all the aggregated values.&lt;br /&gt;
**** Default is undefined, in which case the excluded values are not included into the results.&lt;br /&gt;
*** ValueIfNull: &lt;br /&gt;
**** Value used to indicate null values. Must not be null.&lt;br /&gt;
**** Default: &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Return value ===&lt;br /&gt;
A SqlDataFrame object (#70611#) with the following columns:&lt;br /&gt;
&lt;br /&gt;
Common columns:&lt;br /&gt;
* Type: type of the root cause; &amp;quot;CaseAttributeValue&amp;quot; for case attributes, &amp;quot;EventAttributeValue&amp;quot; for event attributes.&lt;br /&gt;
* Name: case attribute name (when Type is CaseAttributeValue) or event attribute name (when Type is EventAttributeValue).&lt;br /&gt;
* Value: case attribute value (when Type is CaseAttributeValue), or event attribute value and number of occurrences in case (when Type is EventAttributeValue).&lt;br /&gt;
* Total: total number of cases having the found root cause.&lt;br /&gt;
* Selected: number of cases that have the found root cause and belong to the selected cases.&lt;br /&gt;
* Compared: number of cases that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
&lt;br /&gt;
Columns when WeightingExpression has no value:&lt;br /&gt;
* Contribution: the number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
* ContributionPercentage: the percentage of cases which contribute to the deviation from the average percentage.&lt;br /&gt;
* DifferencePercentage: deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases.&lt;br /&gt;
* SelectedPercentage: percentage of selected cases that have the found root cause out of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
Columns when WeightingExpression has a value:&lt;br /&gt;
* Contribution: sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
* ContributionPercentage: the percentage of case weights which contribute to the deviation from the average percentage.&lt;br /&gt;
* DifferencePercentage: deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases.&lt;br /&gt;
* SelectedPercentage: percentage of selected case weights that have the found root cause out of all case weights with that root cause.&lt;br /&gt;
* SelectedWeight: sum of weights that have the found root cause and belong to the selected cases.&lt;br /&gt;
* ComparedWeight: sum of weights that have the found root cause and don&#039;t belong to the selected cases.&lt;br /&gt;
* TotalWeight: sum of weights of all cases with that root cause.&lt;br /&gt;
&lt;br /&gt;
4. Notable differences between this function and FindRootCauses (#71058#) used for in-memory event logs:&lt;br /&gt;
&lt;br /&gt;
4.1. In FindRootCauses, Zero-weighted contibution rows are removed from the results.&lt;br /&gt;
&lt;br /&gt;
4.2. A non-numeric case cost in FindForDataFrame causes an exception to be thrown, whereas FindRootCauses behaves as if null was used.&lt;br /&gt;
&lt;br /&gt;
4.3. All the case-data table columns can be used as CaseAttributeTypes in FindForDataFrame, even the case id column, which can&#039;t be used in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.4. WeightingExpression is given as SqlExpression, not expression language expression as in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.5. FindForDataFrame only supports querying columns of one data type at a time. Querying, e.g., columns having both string and integer values will throw an exception.&lt;br /&gt;
&lt;br /&gt;
4.6. If an empty array is given for CaseAttributeTypes-parameter, FindForDataFrame throws an exception, whereas FindRootCauses returns a result without any rows.&lt;br /&gt;
&lt;br /&gt;
4.7. FindForDataFrame converts attribute values to string, because internally it uses UNPIVOT instruction for calculating Value column (requires same data type for unpivoted columns). So even if numeric case attribute is used, Value column will contain strings and sorting by Value will use string comparison.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.&#039;&#039;&#039;FindForDataFrame&#039;&#039;&#039;(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
   &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;], &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]}, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaxNumUniqueValues&amp;quot;: 2, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaximumRowCount&amp;quot;: 1000, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
  },&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [ &lt;br /&gt;
&lt;br /&gt;
   {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
Calculates root cause analysis for the given model using parameters read from the query configuration.&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25454</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25454"/>
		<updated>2024-11-19T10:29:58Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* RootCauses.FindRootCausesDataFrame */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Returns ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039;-parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into the data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
1 Finds root causes for a particular process phenomenon by comparing properties of selected cases against those of all cases in the given model (#27619#).&lt;br /&gt;
&lt;br /&gt;
1.1. Based on the similar in-memory function: EventLog.FindRootCauses (#71058#).&lt;br /&gt;
&lt;br /&gt;
1.2. Based on IDataFrame (#70613#)-based calculations, which, in case of SqlDataFrame (#70611#), offload the actual work to the data sources (#70864#, e.g., Snowflake).&lt;br /&gt;
&lt;br /&gt;
2. Parameters: &lt;br /&gt;
&lt;br /&gt;
2.1. model:&lt;br /&gt;
&lt;br /&gt;
2.1.1. Model object of a model whose data tables are used to calculate the root causes.&lt;br /&gt;
&lt;br /&gt;
2.2. parameters&lt;br /&gt;
&lt;br /&gt;
2.2.1. A parameter convertible to a StringDictionary (#48323#) object with the following supported key-values:&lt;br /&gt;
&lt;br /&gt;
2.2.1.1 Filter: filter json (#30921#) that is applied to the event log before calculating root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.2 Selection: selection json (#30927#) that defines selected cases to find root causes for&lt;br /&gt;
&lt;br /&gt;
2.2.1.2.1 Selection is applied on top of the filtered event log (specified by Filter parameter)&lt;br /&gt;
&lt;br /&gt;
2.2.1.2.2 If nothing is selected, 100% of cases are counted as selected.&lt;br /&gt;
&lt;br /&gt;
2.2.1.3 CaseAttributeTypes: array of strings with the names of case attributes included into the root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.3.1 Only case attributes of type string, integer or boolean are included&lt;br /&gt;
&lt;br /&gt;
2.2.1.3.2 If CaseAttributeTypes is null, an empty string, or not specified, all case attributes having type &amp;quot;String&amp;quot; are included into the root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.3.3 If the model doesn&#039;t have a case attribute with the specified name, an error message is shown&lt;br /&gt;
&lt;br /&gt;
2.2.1.4 EventAttributeTypes: array of strings with the names of event attributes included into the root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.1 Only event attributes of type string are included&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.2 If EventAttributeTypes is null, all event attributes having type &amp;quot;String&amp;quot; are included into the root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.3 If EventAttributeTypes is an empty array or not specified, event attributes are not applied to the root causes&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.4 If the model doesn&#039;t have an event attribute with the specified name, an error message is shown&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.5 Analysis column Name should contain Event Attribute Name&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.6 Analysis column Value should contain Event Attribute Value and number of occurrences in case: &amp;lt;value&amp;gt; (count)&lt;br /&gt;
&lt;br /&gt;
2.2.1.4.7 Analysis column Type should have &amp;quot;EventAttributeValue&amp;quot; as its value&lt;br /&gt;
&lt;br /&gt;
2.2.1.5 WeightingExpression: Expression that provides weights for each case. &lt;br /&gt;
&lt;br /&gt;
2.2.1.5.1 Expression, if defined, must be any of the following types:&lt;br /&gt;
&lt;br /&gt;
2.2.1.5.1.1. A string containing the SqlExpression (#70558#).&lt;br /&gt;
&lt;br /&gt;
2.2.1.5.1.1.1. Supports extended functionalities specified here #70773#.&lt;br /&gt;
&lt;br /&gt;
2.2.1.5.1.2. A SqlExpression object, created, e.g., using the ToSqlExpression function (#71302#).&lt;br /&gt;
&lt;br /&gt;
2.2.1.5.2 A row is filtered out of the result if the expression result is null. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.2.1.6 MaximumRowCount: maximum number of the most and least contributing root causes to return. Thus, the actual number of returned rows can be at most two times this value (if specified). &lt;br /&gt;
&lt;br /&gt;
2.2.1.6.1. If undefined, 200 is used.&lt;br /&gt;
&lt;br /&gt;
2.2.1.6.2. If set to 0, all rows are returned.&lt;br /&gt;
&lt;br /&gt;
2.2.1.7. MinValueUsage: Minimum total usage of a value included into the comparison. The number of cases having each returned value must be at least the given fraction (a float value between 0.0 and 1.0) of all the compared cases.&lt;br /&gt;
&lt;br /&gt;
2.2.1.7.1. If not defined or null, all values are included (=default).&lt;br /&gt;
&lt;br /&gt;
2.2.1.8. MaxNumUniqueValues: Maximum number of unique values to include into the comparison for each attribute column. If the number of unique values for an attribute exceeds this limit, only the given number of values with the highest usage are included.&lt;br /&gt;
&lt;br /&gt;
2.2.1.8.1. If not defined or null, all values are included (=default).&lt;br /&gt;
&lt;br /&gt;
2.2.1.9. IncludeOthers: Whether the attribute values excluded by MinValueUsage or MaxNumUniqueValues filtering are included as a single aggregated &amp;quot;Others&amp;quot; value.&lt;br /&gt;
&lt;br /&gt;
2.2.1.9.1. If given any string value, that value is used as the label for all the aggregated values.&lt;br /&gt;
&lt;br /&gt;
2.2.1.9.2. Default = undefined =&amp;gt; other values will not be included into the results.&lt;br /&gt;
&lt;br /&gt;
2.2.1.10. ValueIfNull: Value used to indicate null-values.&lt;br /&gt;
&lt;br /&gt;
2.2.1.10.1. Default = &amp;quot;(blank)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
2.2.1.10.2. Must not be null.&lt;br /&gt;
&lt;br /&gt;
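As a rough illustration of the row-limiting behaviour described for MaximumRowCount above, the following hypothetical Python sketch keeps the most and least contributing rows (limit_rows and the row dictionaries are made up for illustration; this is not the product&#039;s implementation):&lt;br /&gt;
&lt;br /&gt;
```python
def limit_rows(rows, maximum_row_count):
    # 0 means: return all rows (see 2.2.1.6.2).
    if maximum_row_count == 0:
        return rows
    ordered = sorted(rows, key=lambda r: r["Contribution"], reverse=True)
    # Keep at most N most and N least contributing rows, so the result
    # has at most two times maximum_row_count rows (see 2.2.1.6).
    head = ordered[:maximum_row_count]
    tail = ordered[-maximum_row_count:]
    # Avoid returning the same row twice when the two sets overlap.
    return head + [r for r in tail if r not in head]

rows = [
    {"Name": "Region", "Value": "North", "Contribution": 12.0},
    {"Name": "Region", "Value": "South", "Contribution": -7.5},
    {"Name": "Region", "Value": "East", "Contribution": 1.0},
]
print(limit_rows(rows, 1))
```
&lt;br /&gt;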
3 Returns a SqlDataFrame object (#70611#) with the following columns:&lt;br /&gt;
&lt;br /&gt;
3.1 Common columns &lt;br /&gt;
&lt;br /&gt;
3.1.1 Type: type of the root cause&lt;br /&gt;
&lt;br /&gt;
3.1.1.1 &amp;quot;CaseAttributeValue&amp;quot; for case attributes&lt;br /&gt;
&lt;br /&gt;
3.1.1.2 &amp;quot;EventAttributeValue&amp;quot; for event attributes&lt;br /&gt;
&lt;br /&gt;
3.1.2 Name: &lt;br /&gt;
&lt;br /&gt;
3.1.2.1 When type is CaseAttributeValue, case attribute name&lt;br /&gt;
&lt;br /&gt;
3.1.2.2 When type is EventAttributeValue, event attribute name&lt;br /&gt;
&lt;br /&gt;
3.1.3 Value: &lt;br /&gt;
&lt;br /&gt;
3.1.3.1 When type is CaseAttributeValue, case attribute value&lt;br /&gt;
&lt;br /&gt;
3.1.3.2 When type is EventAttributeValue, event attribute value and number of occurrences in case&lt;br /&gt;
&lt;br /&gt;
3.1.4 Total: total number of cases having the found root cause&lt;br /&gt;
&lt;br /&gt;
3.1.5 Selected: Number of cases that have the found root cause and belong to the selected cases&lt;br /&gt;
&lt;br /&gt;
3.1.6 Compared: number of cases that have the found root cause and don&#039;t belong to the selected cases&lt;br /&gt;
&lt;br /&gt;
3.2 Columns when WeightingExpression does not have a value&lt;br /&gt;
&lt;br /&gt;
3.2.1 Contribution: the number of cases which contribute to the deviation from the average percentage of selected cases among all analyzed cases&lt;br /&gt;
&lt;br /&gt;
3.2.2 ContributionPercentage: the percent of cases which contribute to the deviation from the average percentage&lt;br /&gt;
&lt;br /&gt;
3.2.3 DifferencePercentage: deviation in percentage between selected cases with the found root cause and the average percentage of selected cases among all analyzed cases&lt;br /&gt;
&lt;br /&gt;
3.2.4 SelectedPercentage: percent of selected cases that have the found root cause out of all cases with that root cause&lt;br /&gt;
&lt;br /&gt;
3.3 Columns when WeightingExpression has a value&lt;br /&gt;
&lt;br /&gt;
3.3.1 Contribution: Sum of case weights which contribute to the deviation from the average percentage of selected case weights among all analyzed cases&lt;br /&gt;
&lt;br /&gt;
3.3.2 ContributionPercentage: the percent of case weights which contribute to the deviation from the average percentage&lt;br /&gt;
&lt;br /&gt;
3.3.3 DifferencePercentage: deviation in percentage between selected case weights with the found root cause and the average percentage of selected case weights among all analyzed cases&lt;br /&gt;
&lt;br /&gt;
3.3.4 SelectedPercentage: percent of selected case weights that have the found root cause out of all case weights with that root cause&lt;br /&gt;
&lt;br /&gt;
3.3.5 SelectedWeight: Sum of weights that have the found root cause and belong to the selected cases&lt;br /&gt;
&lt;br /&gt;
3.3.6 ComparedWeight: Sum of weights that have the found root cause and don&#039;t belong to the selected cases&lt;br /&gt;
&lt;br /&gt;
3.3.7 TotalWeight: Sum of weights of all cases with that root cause&lt;br /&gt;
&lt;br /&gt;
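The unweighted columns in section 3.2 can be illustrated with the following hypothetical Python sketch (contribution_columns and the formulas are one interpretation of the descriptions above, not the product&#039;s exact implementation):&lt;br /&gt;
&lt;br /&gt;
```python
def contribution_columns(selected, total, selected_all, total_all):
    # Average share of selected cases among all analyzed cases.
    avg_rate = selected_all / total_all
    # Cases contributing to the deviation from the average share (3.2.1).
    contribution = selected - total * avg_rate
    return {
        "Contribution": contribution,
        # Share of the contribution among all selected cases (3.2.2).
        "ContributionPercentage": contribution / selected_all,
        # Deviation of this value's selection share from the average (3.2.3).
        "DifferencePercentage": selected / total - avg_rate,
        # Share of selected cases among all cases with this value (3.2.4).
        "SelectedPercentage": selected / total,
    }

# Example: 30 of the 40 cases with some value are selected, while
# overall 100 of 400 analyzed cases are selected.
print(contribution_columns(selected=30, total=40, selected_all=100, total_all=400))
```
&lt;br /&gt;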
4. Notable differences between this function and FindRootCauses (#71058#) used for in-memory event logs:&lt;br /&gt;
&lt;br /&gt;
4.1. In FindRootCauses, zero-weighted contribution rows are removed from the results.&lt;br /&gt;
&lt;br /&gt;
4.2. A non-numeric case cost in FindForDataFrame causes an exception to be thrown, whereas FindRootCauses behaves as if null was used.&lt;br /&gt;
&lt;br /&gt;
4.3. All the case-data table columns can be used as CaseAttributeTypes in FindForDataFrame, even the case id column, which can&#039;t be used in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.4. WeightingExpression is given as SqlExpression, not expression language expression as in FindRootCauses.&lt;br /&gt;
&lt;br /&gt;
4.5. FindForDataFrame only supports querying columns of one data type at a time. Querying, e.g., columns having both string and integer values will throw an exception.&lt;br /&gt;
&lt;br /&gt;
4.6. If an empty array is given for CaseAttributeTypes-parameter, FindForDataFrame throws an exception, whereas FindRootCauses returns a result without any rows.&lt;br /&gt;
&lt;br /&gt;
4.7. FindForDataFrame converts attribute values to string, because internally it uses UNPIVOT instruction for calculating Value column (requires same data type for unpivoted columns). So even if numeric case attribute is used, Value column will contain strings and sorting by Value will use string comparison.&lt;br /&gt;
&lt;br /&gt;
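The string-conversion caveat in 4.7 can be demonstrated with a small Python sketch (unpivot is a hypothetical stand-in for the SQL UNPIVOT step, not an actual API):&lt;br /&gt;
&lt;br /&gt;
```python
def unpivot(row, attribute_names):
    # Melt attribute columns into Name/Value pairs; values are converted
    # to strings so columns of different types can share the Value column.
    pairs = []
    for name in attribute_names:
        pairs.append({"Name": name, "Value": str(row[name])})
    return pairs

print(unpivot({"Region": "North", "Priority": 3}, ["Region", "Priority"]))

# Because values become strings, sorting by Value uses string comparison:
# lexicographically, "10" sorts before "2" and "3".
print(sorted(["3", "10", "2"]))  # prints ['10', '2', '3']
```
&lt;br /&gt;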
Example:&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;ProcessingMethod&amp;quot;: &amp;quot;DataFrame&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Root&amp;quot;:&amp;quot;let m = _; _system.RootCauses.&#039;&#039;&#039;FindForDataFrame&#039;&#039;&#039;(m, _query.Configuration.Parameters.FindRootCausesParameters.Clone().Extend(#{\&amp;quot;Filter\&amp;quot;: _query.Configuration.Filter}))&amp;quot;,&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Parameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
   &amp;quot;FindRootCausesParameters&amp;quot;: {&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;CaseAttributeTypes&amp;quot;: [&amp;quot;Account Manager&amp;quot;,&amp;quot;Customer Group&amp;quot;,&amp;quot;Product Group&amp;quot;,&amp;quot;Region&amp;quot;], &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;Selection&amp;quot;: {&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;IncludeCases&amp;quot;,&amp;quot;Items&amp;quot;:[{&amp;quot;Type&amp;quot;:&amp;quot;CaseAttributeValue&amp;quot;,&amp;quot;Attribute&amp;quot;:&amp;quot;Product Group&amp;quot;,&amp;quot;StringifiedValues&amp;quot;:[&amp;quot;0Hats&amp;quot;]}]}]}, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaxNumUniqueValues&amp;quot;: 2, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MaximumRowCount&amp;quot;: 1000, &lt;br /&gt;
&lt;br /&gt;
     &amp;quot;MinValueUsage&amp;quot;: 0.20,&lt;br /&gt;
&lt;br /&gt;
     &amp;quot;WeightingExpression&amp;quot;: &amp;quot;Column(\&amp;quot;Cost\&amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
  },&lt;br /&gt;
&lt;br /&gt;
  &amp;quot;Ordering&amp;quot;: [ &lt;br /&gt;
&lt;br /&gt;
   {&amp;quot;Name&amp;quot;: &amp;quot;Contribution&amp;quot;, &amp;quot;Direction&amp;quot;: &amp;quot;Descending&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
  ]&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
Calculates root cause analysis for the given model using parameters read from the query configuration.&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
	<entry>
		<id>https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25453</id>
		<title>System Library</title>
		<link rel="alternate" type="text/html" href="https://wiki.onqpr.com/pa/index.php?title=System_Library&amp;diff=25453"/>
		<updated>2024-11-19T10:26:59Z</updated>

		<summary type="html">&lt;p&gt;MarHink: /* Parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;System library is a collection of Expression Language functions and properties that provide additional tools for scripting Process Analyzer functionalities. System library is referenced in scripts via &#039;&#039;_system&#039;&#039;-property, which provides additional properties dedicated for different areas of interests for scripting. &lt;br /&gt;
&lt;br /&gt;
The following hierarchy shows the properties and functions available in System Library:&lt;br /&gt;
&lt;br /&gt;
* ML&lt;br /&gt;
** [[Create Predicted Eventlog|GeneratePredictionModel]]&lt;br /&gt;
** [[Create Simulated Eventlog|ApplyTransformations]]&lt;br /&gt;
* Parallel&lt;br /&gt;
** Run&lt;br /&gt;
* RootCauses&lt;br /&gt;
** FindRootCausesDataFrame&lt;br /&gt;
* Utils&lt;br /&gt;
** GetSampledEvents&lt;br /&gt;
** ModifyColumnTypes&lt;br /&gt;
** RunFunctionWithParallelLogging&lt;br /&gt;
&lt;br /&gt;
== Parallel.Run ==&lt;br /&gt;
Runs given functions in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;functions&#039;&#039;&#039;:&lt;br /&gt;
** An array of functions to run in parallel.&lt;br /&gt;
&lt;br /&gt;
=== Returns ===&lt;br /&gt;
An array of results returned by the called functions, in the same order as the corresponding functions appear in the &#039;&#039;functions&#039;&#039;-parameter.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
The following script uses _system.Parallel.Run to run three functions:&lt;br /&gt;
&lt;br /&gt;
* Extract data from the VBAK table in SAP (connection parameters defined in the connectionParametersDict dictionary).&lt;br /&gt;
* Transform the extracted data by adding a new column.&lt;br /&gt;
* Load the data into the data table identified by dataTableId.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;typescript&amp;quot;&amp;gt;&lt;br /&gt;
function ExtractTransformAndLoad(extractFunc, transformFunc, loadFunc)&lt;br /&gt;
{&lt;br /&gt;
  let rawDataFlow = extractFunc();&lt;br /&gt;
  let transformedDataFlow = ToDataFlow();&lt;br /&gt;
&lt;br /&gt;
  _system.Parallel.Run([&lt;br /&gt;
    () =&amp;gt; Catch({&lt;br /&gt;
      let df;&lt;br /&gt;
      while (!IsNullTop(df = rawDataFlow.Collect(#{&amp;quot;CollectChunk&amp;quot;: true}))) {&lt;br /&gt;
        transformedDataFlow.Append(transformFunc(df));&lt;br /&gt;
        WriteLog(`A chunk having ${df.NRows} rows has been transformed.`);&lt;br /&gt;
      }&lt;br /&gt;
      if (rawDataFlow.HasError) {&lt;br /&gt;
        transformedDataFlow.Fail(&amp;quot;Error occurred during data extraction.&amp;quot;);&lt;br /&gt;
      }&lt;br /&gt;
      else {&lt;br /&gt;
        transformedDataFlow.Complete();&lt;br /&gt;
      }&lt;br /&gt;
    }, {&lt;br /&gt;
      transformedDataFlow.Fail(&amp;quot;Error occurred during transformation calculation.&amp;quot;);&lt;br /&gt;
    }),&lt;br /&gt;
    () =&amp;gt; {&lt;br /&gt;
      loadFunc(transformedDataFlow);&lt;br /&gt;
    }&lt;br /&gt;
  ]);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ExtractTransformAndLoad(&lt;br /&gt;
  () =&amp;gt; ExtractSap(connectionParametersDict.Extend(&lt;br /&gt;
    [&lt;br /&gt;
    &amp;quot;FieldNames&amp;quot;: &amp;quot;VBELN,ERDAT,ERZET,ERNAM,NETWR,WAERK&amp;quot;, &lt;br /&gt;
    &amp;quot;QueryTable&amp;quot;: &amp;quot;VBAK&amp;quot;,&lt;br /&gt;
    &amp;quot;Options&amp;quot;: [&amp;quot;VBELN BETWEEN &#039;0000017448&#039;&amp;quot;, &amp;quot;AND &#039;0060000042&#039;&amp;quot;],&lt;br /&gt;
    &amp;quot;UseGateway&amp;quot;: true&lt;br /&gt;
    ])&lt;br /&gt;
  ),&lt;br /&gt;
  df =&amp;gt; df.SetColumns([&amp;quot;Test&amp;quot;: () =&amp;gt; `${Column(&amp;quot;NETWR&amp;quot;)} ${Column(&amp;quot;WAERK&amp;quot;)}`]),&lt;br /&gt;
  dataFlow =&amp;gt; {&lt;br /&gt;
    DataTableById(dataTableId).Import(dataFlow, [&amp;quot;Append&amp;quot;: 0]);&lt;br /&gt;
  }&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
DataTableById(dataTableId).SqlDataFrame.OrderByColumns([&amp;quot;VBELN&amp;quot;], [true]).Collect()&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== RootCauses.FindRootCausesDataFrame ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.GetSampledEvents ==&lt;br /&gt;
foo&lt;br /&gt;
&lt;br /&gt;
== Utils.RunFunctionWithParallelLogging ==&lt;br /&gt;
foo&lt;/div&gt;</summary>
		<author><name>MarHink</name></author>
	</entry>
</feed>