Gainsight Inc.

Best Practices of Rules Engine

Introduction

The purpose of this document is to outline best practices for the Rules Engine that help in creating optimized, less error-prone, and easy-to-maintain rules.

Intended Audience

The intended audience for this document includes Gainsight Administrators, Solutions Architects, Customer Success Architects, Technical Success Architects, and similar roles.

Naming Standards

The goal is to establish naming standards that are consistent, readable, and easy to understand. While creating a rule, it is always a good idea to follow a naming standard.

  • Rule Naming Standard: The name of a rule should explain the type of operation and the source of the data. Below are the recommendations with examples.
    • Rules to load data to MDA/SFDC - this includes loading data to standard objects such as Customer, Usage, Company, User, Company Person, etc.
      <<Action Name>>: <<Operation>> - <<Target Object>>
      Example: Load to Gainsight: Load WoW Usage Data - Usage Metrics
    • Rules to set score
      <<Set Score>>: <<Measure Name>> - <<Scorecard Name>>
      Example: Set Score: NPS® - Account Scorecard
    • Rules to Create CTAs
      <<CTA>>: <<Reason>>
      Example: CTA: Drop in Adoption Score
    • Rules to load data to Relationships
      Load to Relationship: <<Relationship Name>> - <<Operation>>
      Example: Load to Relationship: Retailer - Create and Update
    • Rules for relationships:
      <<Relationship Type>>: <<Action Name>> - <<Operation/Reason>>
      Example: Retailer: Call to Action - Detractor NPS® Score    
    • Other Rules (Load to Features, Milestones, Success Plans, etc.)
      <<Action Name>>: <<Feature/Milestone/Success Plan name>>
      Example: Load to Milestone: EBR completed

Note: If a rule consists of different actions or writes data to multiple objects, try to separate the action names/target object names with ‘/’. If the final name is going to be lengthy, give the rule a generic name and use the description section to provide more details.

  • Task Naming Standards: The name of a task should explain the type of operation and the source of the data. Below are the recommendations with examples.
    • Datasets
      Fetch << Criteria>> <<Object Name>>
      Examples: Fetch 120 days of data from Usage Data
                Fetch Closed Won Opportunities
    • Merge Task
      Merge << Dataset1 >> <<Dataset 2>> - <<Join Type Shortname>>
      Example: Merge Account Opportunity - EQ
      Description                                 Join Type
      Retain common records from both datasets    EQ
      Retain all records from the Left dataset    LF
      Retain all records from the Right dataset   RT
      Retain all records from both datasets       FL
    • Transformation
      <<Type of Operations/Filter Criteria>>
      Examples: Calculate Weekly Usage
                Filter Accounts without Usage
    • Pivot
      <<Pivot Fieldname>>
      Example: Pivot Eventname
    • S3 Datasets
      S3 <<Filename>>
      Example: S3 Usage_YYYY_MM_DD.csv

Note: Name of the task should not start with a number.

  • Description: Always provide a description for each rule/object. This helps in understanding the purpose of the rule or each task and makes the rule logic easier to follow. Specify details if multiple actions are performed in the rule.
  • Reserved keywords: Avoid the below characters in rule names; these characters hamper search capabilities.
    • ^, *, (, ), [, ], |, etc.
  • Number rules with Sequence: If a certain set of rules must run in a particular sequence, number the rules; that makes the execution order easier to understand (this is applicable only to rules that are part of the same rule chain).
    Example:
    • Load to Gainsight: Load Events into Usage Day Agg - 1.0
    • Load to Gainsight: Load Events dayagg into Usage Metrics - 2.1
    • Load to Gainsight: Load Events dayagg into Usage Metrics - 2.2

Performance Best Practices

Use Filters

  • Incremental Data: Wherever possible, process only incremental data based on the last modified date, and add other appropriate filters as well.
  • Gainsight Customers only: If using an SFDC object as the source, make sure to select the “Apply to Gainsight Customers only” checkbox unless you are trying to load new customers.
  • Active Customers Only: Try to add a Customer_info.Status = ‘Active’ filter to the rule.
  • Capture Changed Data only (Field level): Wherever possible, update records only when the data you are trying to write differs from the existing data.
    For example: If you are trying to update the status to Active in the Contacts object, add a filter in the source dataset to fetch only records whose status is not equal to Active, and update the status only for those contacts.
  • Not already a Customer: If trying to add new customers to the Customer Info object using Load to Customers, make sure that the customer is not already a customer by adding the filter account.customer_info = NULL.
  • Scorecard History as Source: If Account/Relationship Scorecard History is used as the source, make sure to filter on Time Granularity, since this object contains both month-level and week-level data.
  • S3 Exports and Imports: Add appropriate filters while importing data from S3 or exporting data to an S3 file to avoid dealing with huge volumes.
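The incremental-data and changed-data filters above can be sketched as plain Python. This is only an illustration of the filtering logic, not a Gainsight API; the field names `modified_date` and `status` are hypothetical:

```python
from datetime import datetime

def incremental_changed_rows(rows, last_run, target_status="Active"):
    """Keep only rows modified since the last run whose status still needs
    the update (combines the incremental-data and changed-data-only filters)."""
    return [
        r for r in rows
        if r["modified_date"] > last_run and r["status"] != target_status
    ]

rows = [
    {"id": 1, "modified_date": datetime(2024, 5, 2), "status": "Inactive"},
    {"id": 2, "modified_date": datetime(2024, 5, 2), "status": "Active"},    # already Active: skipped
    {"id": 3, "modified_date": datetime(2024, 4, 1), "status": "Inactive"},  # not recently modified: skipped
]
to_update = incremental_changed_rows(rows, last_run=datetime(2024, 5, 1))
```

Only row 1 survives both filters, so the rule touches one record instead of three.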

Avoid Multiple Datasets, Transformations and Merges

  • Lookups: If multiple objects can be joined using lookups in MDA/SFDC, use lookups to fetch data in a single dataset instead of having multiple datasets and merges.
  • Dataspaces: Use dataspaces to join multiple SFDC objects that have parent-child relationships, instead of creating multiple datasets.
  • Filtered and Aggregated Data from Source: Aggregate and filter data while fetching it from the source itself, instead of creating additional transformations to filter/aggregate the data later.
  • Pivot Task: Try to leverage the Pivot functionality to avoid multiple transformations and merges. For more information on the Pivot task, refer to the Pivot Tasks in Bionic Rules article.

Use Built-in Functions

  • Case Function: Use a Case statement wherever the values to be written to the target object are based on the values in a field or on different combinations. For an example of Case statement usage, refer to the Customer Categorization with Case Expression Formula field article.
  • Period over period calculations: Use built-in functions, and also make sure to add appropriate date filters to the source dataset.
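The Case-style branching described above maps field values to output values, as in this minimal Python sketch (the NPS bands are the standard thresholds; the function name is illustrative, not a Rules Engine built-in):

```python
def categorize_nps(score: int) -> str:
    # CASE-style branching: first matching condition wins
    if score >= 9:
        return "Promoter"
    if score >= 7:
        return "Passive"
    return "Detractor"
```

A single Case expression like this replaces several near-identical actions that each filter on one score range.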

Combining Multiple Rules/Actions

  • Combine Rules: If the same dataset is being processed to perform multiple actions, add them as separate actions in the same rule instead of creating multiple rules.
  • Advanced Criteria: Avoid multiple actions; use advanced criteria with OR conditions to perform the same action based on different criteria.
  • Default option at Object level: Instead of having multiple actions to set default values and then updating data from the source, use the default option at the object level or in the rule action.

Others

  • Updates in one go: When multiple rules are configured to update/load data to the Customer Info/Account object, try to point those rules to the Company object and use a single rule to load data from Company to Customer Info/Account.
  • Add Custom fields to Customer Info: The Account object usually has many dependent background SFDC jobs (workflow rules, validation rules, Process Builder processes), so writing data to the Account object is time-consuming. Hence, consider adding new fields and loading data to Customer Info instead of the Account object.
  • Includes and Excludes operators: If multiple filter conditions on the same field (of picklist type) have to be added to the rule, use the Includes/Excludes option instead of multiple OR conditions.
  • Company object is the Single Source of Truth: Instead of adding fields and loading Account/Customer Info/Company attributes to custom objects, try to use a lookup to the Company/Account object.
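The Includes/Excludes idea is set membership: one condition against a set of values instead of a chain of ORs. A minimal sketch (the stage values and record shape are hypothetical):

```python
# One Includes filter instead of a chain of OR conditions on a picklist field
allowed_stages = {"Closed Won", "Negotiation", "Proposal"}

opportunities = [
    {"name": "Acme", "stage": "Closed Won"},
    {"name": "Globex", "stage": "Prospecting"},
    {"name": "Initech", "stage": "Proposal"},
]

included = [o for o in opportunities if o["stage"] in allowed_stages]      # Includes
excluded = [o for o in opportunities if o["stage"] not in allowed_stages]  # Excludes
```

A single membership test stays correct as the value list grows, whereas a long OR chain must be edited in every rule that repeats it.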

Data Related Issues and Best Practices

Duplicates in the Target  

  • NULL Values in Identifiers: Make sure identifier fields never receive NULL values from the source.
  • Duplicates from the Source: The source should not have duplicates based on the identifier fields.
  • Insert Operation: Avoid the Insert operation in rules; prefer upsert so repeated runs do not create duplicate records.
  • Consistent Identifiers: Make sure to use the same set of identifiers while upserting/updating data across different rules/actions.
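The first two checks above (no NULL identifiers, no duplicate identifier combinations in the source) can be sketched as a pre-load validation in Python. This is illustrative logic, not a Gainsight feature; the field names are hypothetical:

```python
def check_identifiers(rows, id_fields):
    """Flag rows with NULL identifiers or duplicate identifier combinations,
    both of which cause duplicates in the target on upsert."""
    seen = set()
    null_rows, duplicate_rows = [], []
    for row in rows:
        key = tuple(row.get(f) for f in id_fields)
        if any(v is None for v in key):
            null_rows.append(row)      # NULL in an identifier field
        elif key in seen:
            duplicate_rows.append(row)  # same identifier combination seen before
        else:
            seen.add(key)
    return null_rows, duplicate_rows

rows = [
    {"account_id": "A1", "week": "2024-18"},
    {"account_id": "A1", "week": "2024-18"},  # duplicate identifiers
    {"account_id": None, "week": "2024-18"},  # NULL identifier
]
nulls, dupes = check_identifiers(rows, ["account_id", "week"])
```

Any row landing in either list would produce an unwanted extra record in the target.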

Wrong Data Updates 

  • Granularity: Fields used to perform aggregations and merges in the rule should be at the same granularity as the identifiers used to perform update/insert operations.
  • Update Relationships: When trying to update relationships, make sure the relationship type selected on the rule setup page and the relationship type selected in the actions are the same.
  • Joins while merging data: Use appropriate joins when merging data from two datasets.
  1. When using Retain all records from the Right/Left dataset, always make sure that the dataset from which you want to retain records is the master dataset, and that the field selected in the Account lookup and the identifiers on the action page are from the master dataset.
  2. The Retain all records from both datasets option should be avoided as much as possible. When using this option, the possibilities below exist, so choose fields carefully while mapping data and selecting identifiers on the actions page.
  • Data is present in both datasets.
  • Data is present only in dataset 1; in this case all the fields from dataset 2 will have NULL values.
  • Data is present only in dataset 2; in this case all the fields from dataset 1 will have NULL values.
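The four merge options behave like the standard SQL join types, and the NULL cases above fall out of the "retain all records from both datasets" (full outer) option. A toy Python sketch of the semantics (not how the Rules Engine is implemented; the shortnames follow the naming table earlier in this article):

```python
def merge(left, right, key, how):
    """Tiny merge illustrating the four join shortnames:
    EQ (inner), LF (left), RT (right), FL (full outer)."""
    left_by_key = {row[key]: row for row in left}
    right_by_key = {row[key]: row for row in right}
    keys = {
        "EQ": left_by_key.keys() & right_by_key.keys(),
        "LF": left_by_key.keys(),
        "RT": right_by_key.keys(),
        "FL": left_by_key.keys() | right_by_key.keys(),
    }[how]
    # Fields from the missing side come back as None (NULL), which is why
    # identifiers must always come from the retained/master dataset.
    return [
        {key: k, "left": left_by_key.get(k), "right": right_by_key.get(k)}
        for k in sorted(keys)
    ]

accounts = [{"id": "A1"}, {"id": "A2"}]
usage = [{"id": "A2"}, {"id": "A3"}]
full = merge(accounts, usage, "id", "FL")  # A1 lacks usage, A3 lacks account
```

With `FL`, the A1 row carries NULLs on the usage side and the A3 row carries NULLs on the account side, exactly the two one-sided cases listed above.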

Missing Data

  • Hard Coded Filters: Make sure there are no hard-coded filters based on ID fields such as Account ID, Contact ID, Customer ID, etc. while fetching data or at the action level.
  • Account Lookup: Make sure to choose the correct Account ID in the Account lookup dropdown if multiple Account ID fields are present in the source, and use the same field as the identifier in actions.
  • Schedule & Data Availability in Source: If filters in the rule are based on date = rule date, make sure the data will be available in the source before the rule's scheduled run.
  • Enforce Dependency: Utilize rule dependencies option in rules chain to ensure rules run in the correct order.
  • Schedule & Filters: Make sure filters in the rule are configured as per the schedule of the rules.
    Example: If rule is scheduled to run weekly, make sure to have filters on the date fields to fetch weekly data.
  • Cron Schedule: When scheduling a rule, make sure you are not missing any data.
    Example: If a rule is scheduled to run on the 31st of every month, it will not run in months that don’t have 31 days. In this case, the rule should be scheduled to run on the last day of the month using the Cron Scheduler.
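The month-end pitfall above comes down to "last day of the month" being a computed date, not a fixed day number. A standard-library Python sketch of that computation (illustrative only; the Cron Scheduler handles this for you):

```python
import calendar
from datetime import date

def last_day_of_month(d: date) -> date:
    # monthrange returns (weekday of the 1st, number of days in the month)
    _, num_days = calendar.monthrange(d.year, d.month)
    return date(d.year, d.month, num_days)

# February 2024 has 29 days, so a "run on the 31st" schedule would silently
# skip it; scheduling on the computed last day does not.
run_date = last_day_of_month(date(2024, 2, 10))
```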

Maintainability Best Practices 

  • Rules without Actions: Rules that do not perform any action and do not export data to S3 should be inactivated.
  • Rules with the same name: Avoid creating multiple rules with the same name.
  • Rules for Testing and one-time operations: If a rule is created only for testing or to perform a one-time operation, make the rule inactive immediately after the testing or one-time operation is complete.
  • Failure emails: Make sure to configure failure emails.
  • Parallel Execution: You can run two rules simultaneously. This allows rules to run at their scheduled time, even if other rules with the same dependencies are already running. Admins can select the run preference of each rule by navigating to the Rule Configuration Page > Schedule. For more information on scheduling rules, refer to the Schedule and Execute Rules article.
    Note: If the rules are related, you can create a Rule Chain to ensure that they will run sequentially. 
  • Numbering Rules: Use a numbering scheme to make it easy to remember the order when constructing the rule chain.
NPS, Net Promoter, and Net Promoter Score are registered trademarks of Satmetrix Systems, Inc., Bain & Company and Fred Reichheld