
Catch-up on Rules Engine Enhancements

April 2019: 6.2 Release

Rules Engine is the control room for business automation. The Rules Engine is where Admins can build business rules to trigger CTAs, update scorecard measures, send emails and pull data from any object into a Gainsight object. Rules analyze data from SFDC or Gainsight sources or from the Matrix Data Platform. The Rules Engine only functions with source objects that are related, either through Master-Detail or Lookup, to the Salesforce Account object. Rules can only reference one object at a time, plus the Account and/or CustomerInfo object.

  1. Ability to Create Folders in Rules Engine: With this release, you can create folders in Rules Engine to organize your Rules. All the folders are displayed in the left pane of the Rules List page. The folder to which a Rule belongs is displayed against the Rule Name. By default, Gainsight provides a folder known as Uncategorized, and all existing Rules are part of this folder. You can neither rename this folder nor delete it.


You can create new folders and move the existing rules into the new folder. You can nest a folder in another folder; up to a single level of nesting is allowed. You cannot create folders in the default Uncategorized folder. You can perform the following tasks with folders.

  1. Create Folder

    To create a new folder:

    1. Click the + icon.

    2. (Optional) Select the folder under which you want to nest this folder.

    3. In the Folder Name field, enter a name for the folder.

    4. Click ADD.

29. Create folder.gif

The above image illustrates the maximum level of folder nesting. A folder nested under another folder (and not directly under the Home folder) can have the same name as its parent folder. However, two custom folders which exist at the same level under the Home folder cannot have the same name.
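The naming rule can be sketched as a quick check (illustrative Python, not Gainsight's implementation; the folder names are made up): a new folder name only has to be unique among its siblings at the same level.

```python
# Sketch of the folder-naming rule: duplicate names are allowed across
# levels (a child may match its parent), but not among siblings.
def can_create(folders, parent, name):
    """folders maps a parent folder to the set of its child folder names.
    parent=None represents a top-level folder directly under Home."""
    siblings = folders.get(parent, set())
    return name not in siblings  # unique only within the same level

folders = {None: {"Renewals", "Onboarding"}, "Renewals": set()}
can_create(folders, None, "Renewals")        # False: sibling name clash
can_create(folders, "Renewals", "Renewals")  # True: child may match parent
```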

  1. Move Rules to Folder

    Once you create a folder, you can move the Rules into the newly created folder. A rule cannot be part of multiple folders. To move the rules to a folder:

    1. Select the check box for the required rule(s).

    2. From the MOVE TO drop-down, select the required folder.

    3. Click OK.

31. Move rule.gif

  1. Delete Folders

    You can delete a folder only if it has no sub folders under it and does not hold any Rules. You cannot delete the Uncategorized folder. To delete a folder, select the required folder and click the delete icon.

Delete Scorecards.gif

  1. Folder for new Rules

    The New Rule page is now updated with the Folder field. You can now select a folder for the new rule. If no folder is selected, the Rule is moved to the Uncategorized folder.

33. New rule folder.gif

  1. Integration of Gainsight Analyzer with Rules Engine: Previously, to scan a rule, you needed to navigate to the Gainsight Analyzer page. However, with this enhancement, you can now scan rules from the Rules List page. A new scan icon has been introduced for every rule to accomplish this.


When you click the scan icon for a rule, you are navigated to the Rules Analyzer page and you can view the scan results for that particular rule.

Scan results.gif

You can also scan a rule from the Rule Preview page.


  1. Ability to Archive files in S3 Bucket: Previously, a file stored in the S3 bucket remained there even after it was used in a rule execution. You had to either manually delete it or replace it before the next execution.

    However, with this release, Gainsight provides you the option to archive used files. A Do Not Archive check box has been introduced. When you clear this check box, used files are automatically moved to an archive folder. The Archived File Path field is also introduced as part of this enhancement. You must specify a path for the archive folder; a new folder is created at the specified path.

37. Archive.gif

By default, the archive folder is created at the same level at which the source file is located. For example, if your source CSV file is located directly in the S3 bucket and not nested in any folder, the archive folder is created at the top level of the S3 bucket. If your source CSV file is nested in a folder, the archive folder is also created at that nested level. You can modify the default path, if required.
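As a rough illustration of this default-path rule (a sketch, not Gainsight code; the folder name `archived` and the example keys are assumptions), the archive prefix can be derived from the source file's S3 key:

```python
# Sketch: the archive folder defaults to the same nesting level as the
# source file inside the S3 bucket.
import posixpath

def default_archive_path(source_key, archive_folder="archived"):
    """Return the assumed default archive prefix for an S3 object key."""
    prefix = posixpath.dirname(source_key)  # "" for a top-level file
    return posixpath.join(prefix, archive_folder)

default_archive_path("Accounts.csv")             # -> "archived"
default_archive_path("exports/q1/Accounts.csv")  # -> "exports/q1/archived"
```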


IMPORTANT:

  • By default, the Do Not Archive check box is selected. You must clear the check box to archive files.

  • For a custom bucket, Gainsight must have the required permissions to create a folder in your S3 bucket. If Gainsight does not have the required permissions, the rule is executed but the archive folder is not created.
     

  1. Ability to use “Includes” operator for String fields: Previously, you could not use the Includes operator on String fields while applying filters in a Dataset. As a result, you had to add the required String field to the Filters section multiple times, once for each value to be filtered. For example, in the Name field of the Company object, if you wanted to filter data for three names, you had to add the Name field three times to the Filters section.

    However, with this release Gainsight has introduced the Includes operator for filtering. You can use this operator to include multiple filter values in a single filter. This enhancement is applicable only for MDA objects.

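The difference can be sketched in Python (illustrative data only, not Gainsight code): a single Includes-style membership test replaces several equality filters combined with OR.

```python
# Sketch of the Includes operator: one filter with a list of values
# instead of one filter per value.
rows = [{"Name": "Acme"}, {"Name": "Globex"},
        {"Name": "Initech"}, {"Name": "Umbrella"}]

# Before: one equality filter per value, combined with OR
before = [r for r in rows
          if r["Name"] == "Acme" or r["Name"] == "Globex"
          or r["Name"] == "Initech"]

# After: a single Includes-style filter
wanted = {"Acme", "Globex", "Initech"}
after = [r for r in rows if r["Name"] in wanted]

assert before == after  # both approaches yield the same rows
```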

February 2019: 6.0 Release

N/A

January 2019: 5.22 Release


  1. Auto Trigger Rules with an S3 Dataset: The S3 Dataset Task in Rules Engine allows you to create a dataset from a CSV file in the S3 bucket. Previously, when new files were uploaded to the Gainsight managed S3 bucket, you had to manually execute the S3 rule to fetch the updated content from the S3 file.

    However, with this enhancement, a new Event Schedule type called S3 File is introduced to automate updating the S3 dataset. You must create an S3 dataset task and set its configuration; the S3 dataset rule is then auto-triggered every time a new CSV file with the same configuration as defined in the S3 dataset is uploaded to the S3 bucket.
     
    Note: To trigger a Rule with non-Gainsight events, select the Event framework option.

    When you select the S3 File Schedule type, the rule is auto triggered whenever a new CSV file is uploaded to the S3 bucket.

1. S3 file.png

When you upload a CSV file to the S3 bucket and you are using the EQUALS option in the File Path field, you must ensure that the old and new CSV files have the same name. However, if you are using the STARTSWITH option in the File Path field, the rule is triggered for any uploaded CSV file whose name matches the start of the string. Gainsight performs a case-sensitive match when comparing the old and new file names in the S3 bucket.

Consider an instance in which you create an S3 Dataset by selecting the STARTSWITH option in the File Path field and use “Accounts” as the name. If you have a file in the S3 bucket named Accounts.csv, this file can trigger the rule, since it meets the STARTSWITH criteria. Now, if you add two new files to your S3 bucket called Accounts1.csv and Accounts2.csv, each file can trigger the rule (the rule is executed twice, once for each file added). In this case, the rule is triggered based on an event; the event here is the upload of a new file that meets the STARTSWITH criteria.
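The matching behavior described above can be sketched as follows (a hedged illustration, not Gainsight's actual implementation):

```python
# Sketch of the trigger match: EQUALS needs an exact, case-sensitive
# file-name match; STARTSWITH fires for any file name beginning with
# the configured string (also case-sensitive).
def triggers(config_value, mode, uploaded_name):
    if mode == "EQUALS":
        return uploaded_name == config_value
    if mode == "STARTSWITH":
        return uploaded_name.startswith(config_value)
    return False

triggers("Accounts", "STARTSWITH", "Accounts1.csv")  # True
triggers("Accounts", "STARTSWITH", "accounts1.csv")  # False: case differs
triggers("Accounts.csv", "EQUALS", "Accounts.csv")   # True
```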

Note: The Column headers for Accounts1.csv and Accounts2.csv must match with the existing S3 Dataset task configuration.

2. Column Header.png

A time-based schedule (either Basic or Advanced), in contrast to the event-based schedule, triggers the rule only once, at the scheduled time. The content from both files is merged during this single execution.

Business Use Case

You record all your business activities, like Sales data, profits, number of customers, and so on, in a single CSV file. This data is highly dynamic and can be updated frequently. Previously, whenever you changed the contents of this CSV file, you had to run the rule manually to update the configured S3 dataset. However, with this enhancement, when the CSV file is updated in the Gainsight managed S3 bucket, the rule is triggered automatically based on the S3 File Event schedule and the configured S3 dataset is updated automatically.

To create an S3 File event schedule:

  1. Create a Rule with an S3 Dataset task. For more information on how to create S3 dataset tasks, see the S3 Dataset Task in Bionic Rules article.
  2. Navigate to the Schedule page.
  3. Select Event from the Schedule type drop-down menu.
  4. Select the S3 File option.
  5. Select your S3 dataset task from the Task drop-down menu.
  6. Click SAVE.

Limitations:

  • The S3 Dataset rule cannot be part of a rule chain.
  • The File Name setting in the S3 Dataset Task cannot use the "Pattern" wildcard.
  • If a rule has multiple S3 datasets, you can auto-trigger the rule for only one of the S3 datasets. This dataset must be selected from the Task drop-down menu.
  • Your CSV file must be located in the Gainsight managed S3 bucket. The rule is not auto-triggered if the file is configured from a custom bucket.
  1. Save a rule with multiple Datasets, without using a Merge task: Previously, when you created multiple tasks (Transformation or Pivot tasks) on a single dataset, it was mandatory to create a Merge task. With this enhancement, you can save a rule with multiple datasets without using a Merge task. However, you must ensure that all of your datasets are interconnected, resembling a tree structure. If you have created two or more independent datasets, you still require a Merge task.

    What has changed with this enhancement:

    In the following diagram, a single dataset is created (Dataset A). A Transformation task and a Pivot task were created from this dataset. Previously, it was mandatory to create a Merge task to combine the Transformation and Pivot tasks. With this enhancement, a Merge task is not required, since the output datasets are interconnected and resemble a tree structure. You can also create multiple Transformation and Pivot tasks on Dataset A without requiring a Merge task.


3. Dataset.png

What has not changed with this enhancement:

In the following diagram, two datasets A and B are created. These two datasets are independent and not connected in any way. Thus, a Merge task is required in this case.

4. Merger tasks.png

If you create a Transformation task or Pivot task on any of the above datasets, you will still require a Merge task.

5. Pivot Task.png

In the above case, you must create a Merge task to combine the transformation task from Dataset A and Pivot task from Dataset B.
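One way to picture this rule (a sketch under the assumption that each task records the dataset it was created from; this is not Gainsight code): a Merge task is needed exactly when the tasks do not all trace back to a single root dataset.

```python
# Sketch: a Merge task is required when the task graph has more than
# one independent root dataset (i.e. it is not a single tree).
def needs_merge(tasks):
    """tasks maps each task/dataset name to its source (None for a root)."""
    def root(name):
        while tasks[name] is not None:
            name = tasks[name]
        return name
    roots = {root(name) for name in tasks}
    return len(roots) > 1

# One dataset with a transformation and a pivot hanging off it: no merge.
needs_merge({"A": None, "transform": "A", "pivot": "A"})  # False
# Two independent datasets: a Merge task is still required.
needs_merge({"A": None, "B": None, "pivot": "B"})         # True
```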

Business Use Case  

    You create a dataset which contains all important CTA data like Created Date, Due Date, Priority, and so on. You create a Transformation task on this dataset and use the Date Diff formula function to calculate the number of days left for a CTA to reach its due date (Today’s date - Due date). You also create a Pivot task on the Priority field and classify all CTAs based on their Priority levels (High, Medium, and Low). Previously, you had to merge the Transformation and Pivot tasks even though the merge had no logical significance. With this enhancement, you can save the rule without creating a Merge task.

To use this enhancement:

  1. Create a Dataset. For more information on how to create a Dataset, see the Bionic Rule Task Creation article.
  2. Create a Pivot task on the Dataset. For more information on how to create Pivot tasks, see the Pivot tasks in Bionic Rules article.
  3. Create a transformation task on the same Dataset. For more information on how to create Transformation tasks, see the Transformation tasks in Bionic Rules article.
  4. (Optional) Create multiple transformation tasks and Pivot tasks on the same dataset.

You can execute rule actions on the output datasets created from the above tasks without creating a merge task on the output datasets.

  1. Rule Execution Results up to 30 days: Gainsight has increased the time period for downloading rule results from the EXECUTION HISTORY tab from 7 days to 30 days. The 30-day period starts from the rule execution day. With this enhancement, if an Admin misses downloading the rule results within a week, they now have an extended period of 30 days to download the results.

    To use this enhancement:
    1. Navigate to Administration > Rules Engine > RULES LIST.
    2. Click any Rule Name in the list. (select a rule which was executed more than 7 days ago but less than 30 days ago)
    3. Click the EXECUTION HISTORY tab.
    4. Click the download results icon. The Rule results are downloaded to your system.

6. Download rule results.gif

  1. Task Name and Output Dataset Name are Auto-Populated: The Task Name and Output Dataset Name are auto-populated based on the source Object name on the Setup Rule page while creating a dataset. The Dataset name is auto-populated in the format “Fetch from <Object Name>”. For example, if you use the "Call to Action" Object in the Setup Rule page, the Task Name and Output Dataset Name will be “Fetch from Call to Action”. The names are auto-populated only for a normal Dataset or an S3 Dataset; they are not auto-populated for Transformation, Pivot, or Merge tasks.

    To use this enhancement:
    1. Create a new Rule. For more information on how to create a Rule, refer to the Bionic Rule Task Creation article.
    2. Click DATASET.
    3. Select a required Source object.
    4. Drag and drop the required fields to Show and Filters sections and apply the required configurations.  
    5. Click SAVE.

7. Rule task creation.gif

  1. Preview Rule Summary from Rule Chain: Previously, if you were viewing a series of Rules in a Rule Chain and wanted to view the Rule Summary for a particular rule, you had to navigate to the Rules List page, search for the required rule, and then preview it.

    With this enhancement, you can now Preview a rule even from the Rule Chain page.

    To use this enhancement:
    1. Navigate to Administration > Operations > Rules Engine.
    2. Click the RULE CHAIN tab.
    3. Click VIEW for the required rule chain. The list of rules in the rule chain is displayed.
    4. Select the required rule to preview it.

8. Preview Rule.gif

Apart from this, the Rule Chains to which a Rule belongs are now hyperlinked in the respective Rule Info tab. You can click a Rule Chain hyperlink to view the respective Rule Chain.

To use this enhancement:

  1. Navigate to Administration > Operations > Rules Engine.
  2. Click the rule name hyperlink (select a rule which is part of a rule chain).
  3. Click the rule chain name hyperlink in the Rule Info tab, to view the specific rule chain.

9. Rule chain.gif

  1. View Filters from the Rule Preview page: With this enhancement, you can now view the filters and the associated Advanced Logic applied in the Rule setup page, Transformation task, and Pivot task, under the Tasks section located on the RULE SETUP page.

    To use this enhancement:
    1. Click the Rule Name hyperlink (Select a rule which has filters).
    2. Click the RULE SETUP tab.
    3. Expand the required dataset and tasks in the Tasks section. You can now view the various filters and the associated Advanced Logic applied on the filters.

10. Advanced Logic.gif

December 2018: 5.21 Release

S3 Dataset Enhancements

  1. Ability to include multiple Date and Datetime formats in a single CSV file: Rules Engine is the control room for business automation. It allows you to build business rules that help trigger CTAs (Calls to Action), update Scorecards, send emails, and do much more.

    The S3 dataset task in Rules Engine allows you to create a dataset from a CSV file in the S3 bucket. Previously, you could not use multiple Date and Datetime formats in a single CSV file; you were restricted to a single format for Date and Datetime data across your entire CSV file.

    However, with this enhancement, you can now use multiple Date and Datetime formats in a single CSV file. This is useful, for example, in organizations that compile data from multiple departments, which in turn may use different Date and Datetime formats in their sales data (like transaction date, transaction time, etc.).
    The Columns section of the S3 Dataset is now enhanced with a settings icon. You can use this icon to select a Date or Datetime format for each column that uses a Date or Datetime data type. If you do not select any format for a Date or Datetime column, the format selected in the Default Date Configuration section is applied to that column. For a detailed step-by-step procedure, refer to the S3 Dataset Task in Bionic Rules article.

    Note: A single column of your CSV file cannot have multiple Date or Datetime formats. All the entries in a column must use the same Date or Datetime format.

To use this enhancement:

  1. Create a rule with an S3 dataset task. To learn more about configuring an S3 dataset, refer to the S3 Dataset Task in Bionic Rules article.
  2. In the S3 dataset task, perform the following tasks in the Columns section:
    1. Select the DATE or DATETIME option from the Data Type column, as per the S3 Dataset configuration.
    2. Click the settings icon for the Date or Datetime data type fields. The respective Column Properties - Date or Column Properties - DateTime window is displayed.
    3. Select the required format.
    4. Select the timezone for Column Properties - DateTime.
    5. Click SAVE.

C S3 - enhancement.gif

Note: If you do not select a format for Date or DateTime fields, or a Time Zone, the format or Time Zone specified in the Default Date Configuration section is used.
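The fallback behavior can be sketched in Python (column names and formats are illustrative assumptions, not Gainsight internals): each column uses its own format if one is set; otherwise the default configuration applies.

```python
# Sketch: per-column date formats with a fallback to the default
# configuration for columns that declare none.
from datetime import datetime

DEFAULT_FORMAT = "%Y-%m-%d"                    # default date configuration
column_formats = {"Renewal Date": "%d/%m/%Y"}  # per-column overrides

def parse_date(column, value):
    fmt = column_formats.get(column, DEFAULT_FORMAT)
    return datetime.strptime(value, fmt).date()

parse_date("Renewal Date", "31/01/2019")  # uses the column's own format
parse_date("Created Date", "2019-01-31")  # falls back to the default
```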

Default date time.png

November 2018: 5.20 Release

  1. Formula Field enhancements:
  • New “Case Expression” formula function: With this release, Gainsight has launched a new formula function called Case Expression. This formula function allows you to categorize data based on your requirements. You can use it to create a custom output field whose values are populated for the records that match your specific set of requirements.
    Note: Excel has a similar function called SWITCH.

    By using the Case Expression in the Data Transformation task rather than using Action Criteria, you can improve the efficiency of rule execution when the data update/action is executed.

  Business use cases for this function include:

  • You can use the Case Expression to categorize customers, based on the revenue they generate, as Platinum, Gold, Silver, Bronze, etc.
  • You can categorize customers based on their NPS responses like Promoters, Passives, and Detractors.
  • You can categorize your Customers based on their geographical locations like Asia Pacific, Middle East, Europe, Australia, and so on.
  • You can categorize your customers based on the number of employees they have as Jupiter, Mars, Earth and so on.  

    The Customer Categorization with Case Expression tutorial provides you with step-by-step instructions to configure the first use case.

    Anatomy of Case Expression:

  • The Case Expression function is made up of multiple Cases (a maximum of 10).

  • Every Case consists of multiple Criteria (a maximum of 5). Each criterion is a specific requirement that a record should match. For example, a criterion for a customer to be classified as a Detractor can be an NPS score between 0-6.

  • Every Case has an associated Action that is executed when a record matches the given criteria: a value (for example, Detractor) is populated in the output field. This value can be a custom value or fetched from another field in the source dataset.

1 nps_case-expressions.gif

  • Execution of the Case Expression in detail:
    • Execution of the Case Expression begins with the evaluation of the first case on a record. If all the criteria in this case are satisfied by the record, the action associated with this case is executed. The execution of Case expression halts here for this record and none of the other cases are evaluated.
    • However, if the first case is not satisfied, the system evaluates the second case on the same record, and so on. If none of the available cases are satisfied by the record, the default case is executed.

This process is applied to all the records.  

The Default Case: The Case Expression also has a default case. This default case does not have any criteria; it only has an action, the default action. You cannot delete the default case. When a record does not match any of the specified criteria, the action associated with the default case is executed.

The result of this execution is an output field populated with values that categorize the records.
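The first-match evaluation described above can be sketched as follows (an illustrative model of the semantics, not Gainsight's implementation; the NPS thresholds follow the Detractor/Passive/Promoter example):

```python
# Sketch: cases are tried in order, the first case whose criteria all
# hold wins, and the default action fires when none match.
def case_expression(cases, default, record):
    """cases: list of (criteria, value); criteria is a list of predicates."""
    for criteria, value in cases:
        if all(check(record) for check in criteria):
            return value  # later cases are never evaluated
    return default        # the default case has no criteria

nps_cases = [
    ([lambda r: r["nps"] <= 6], "Detractor"),
    ([lambda r: 7 <= r["nps"] <= 8], "Passive"),
]
case_expression(nps_cases, "Promoter", {"nps": 3})   # "Detractor"
case_expression(nps_cases, "Promoter", {"nps": 10})  # default: "Promoter"
```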

A detailed workflow of Case Expression execution for the NPS survey responses is shown below.

NPS Case expression (2).png

  • Case Expression UI elements

3 Case Expression UI elements.png

  • Output Field Label: Enter a name for the output field.
  • Output Data Type: Select the data type for the output field. The available data types are Boolean, Number, and String. In the above example, String is selected.
  • Case 1 and Case 2: These are the two cases present in the above image. You can add up to ten cases.
  • Advanced Logic: By default, AND logic is applied when you have multiple Criteria in a Case. You can change it to an OR condition. However, the above example has only one criterion; hence, Advanced Logic is not applicable here.
  • Then: Then is the Action field. In this scenario, the output is populated with either Renewal date missed, Renewal date not missed, or Today is renewal date.
  • Default: If the record does not match Case 1 or Case 2, the Default value is set. 

    To learn more about how to use this feature, refer to the Formula Fields in Bionic Rules article.
  1. Enhancement in Load to Relationship Action Type: If you use multiple relationship types in your org, previously you had to create multiple Load to Relationship actions in order to load data to each of the relationship types. Furthermore, you had to select the relationship type for each action and configure field mappings for each one. Also, rules with multiple actions require more execution time, and you needed to reconfigure every action whenever you added additional fields to your source dataset.

    With this release, Gainsight has enhanced the Load to Relationship Action type. The Relationship Type field is now moved into the Field Mappings section, so you can configure Relationship Type as a field mapping. This allows you to configure multiple Relationship Types within a single Load to Relationship action, eliminating the need to create multiple actions. This enhancement greatly improves rule execution time, since rules with a single action execute faster than rules with multiple actions. Also, when you add additional fields to your source dataset, you need not modify multiple actions; you just update the single action.

    You can use the enhanced version of Load to Relationship Action type in two ways:

    1. Dynamic mapping: If the Relationship Type ID field is present in your source dataset, you can map it to the target Relationship Type field to dynamically populate Relationship Type.
      The following example demonstrates a use case where you could use the Dynamic mapping capability:

  • You have defined a Relationship Type for each of the Products you sell in your organization and the Relationship Type name is the same as the Product Name.
  • You also have a subscriptions object where each Subscription has the details about the Product you sold and the company you sold that product to.
  • You can merge Subscriptions with the Relationship Type object using a single merge task, with Name as the merge key, to get the Relationship Type ID field.
  • You can then map this field from the source in the Load to Relationship action type, to dynamically create Relationships across multiple relationship types using a single action.
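The merge step in this use case can be sketched as a simple name-keyed join (illustrative field names and data, not Gainsight code):

```python
# Sketch: join Subscriptions to the Relationship Type object on Name so
# each row carries the Relationship Type ID needed for dynamic mapping.
relationship_types = [{"Id": "RT-1", "Name": "ProductA"},
                      {"Id": "RT-2", "Name": "ProductB"}]
subscriptions = [{"Company": "Acme", "Product": "ProductA"},
                 {"Company": "Globex", "Product": "ProductB"}]

type_id_by_name = {rt["Name"]: rt["Id"] for rt in relationship_types}
merged = [dict(s, RelationshipTypeId=type_id_by_name[s["Product"]])
          for s in subscriptions]
# merged[0] now carries RelationshipTypeId "RT-1" alongside the
# subscription fields, ready to map in a single Load to Relationship action.
```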

Prerequisite: To use the Load to Relationship Action type dynamically, ensure that the Relationship Type ID is included in the source dataset.

To use Load to Relationship Action type dynamically:

  1. Click + ACTION.
  2. Select the Load to Relationship Action type.
  3. Map the queried Relationship Type ID field to the target Relationship Type (Picklist) field.
  4. Check the default value check box and choose a default Relationship Type, if applicable for your use case.
  5. Select the Include in identifiers check box.
  6. Perform other mappings, as required.
  7. Click SAVE.

dynamic mapping.gif

  1. Manual Mapping: If you have not included the Relationship Type field in the Source Dataset, you can manually map the Relationship Type. 
    To use Load to Relationship Action type manually:
    1. Click + ACTION.
    2. Select the Load to Relationship Action type.
    3. Click Add Custom Field.
    4. Select the Relationship Type (string) field.
    5. Select a Relationship Type from the list of available relationship types.
    6. Select the Include in identifiers check box.
    7. Perform other mappings, as required.
    8. Click SAVE.

manual mapping_1.gif

 

  • Mapping the Relationship Type, Account Name, and Relationship Name fields is mandatory.
  • If the Relationship Type field is null for a record in the source dataset, the default value is used for that record. If no default value is selected, the relationship is not created for that record and it is marked as an error.
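The null-handling rule in the last bullet can be sketched as (illustrative only, not Gainsight code):

```python
# Sketch: a null Relationship Type falls back to the configured default;
# with no default, the record errors out instead of creating a relationship.
def resolve_type(record_type, default=None):
    if record_type is not None:
        return record_type
    if default is not None:
        return default
    raise ValueError("no Relationship Type and no default: record errors out")

resolve_type("RT-1")                # "RT-1"
resolve_type(None, default="RT-2")  # falls back to the default, "RT-2"
```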
