IMPORTANT NOTE: Gainsight is upgrading Connectors 2.0 with Horizon Experience. This article applies to tenants that have not yet been upgraded to the Horizon Experience of Connectors 2.0. If you are using Connectors 2.0 with Horizon Experience, you can find the documentation here.

Admins can use the S3 Connector to bring customer data into Gainsight securely and simply. To load usage data into Gainsight using the S3 Connector, your org must have the S3 Connector enabled.

To learn more about the S3 Connector, refer to S3 Connector and S3 Connector FAQs.

Assumption

S3 Connector is enabled in your organization. For more information, refer to Integrating Amazon S3 with Gainsight.

Use the following steps to load usage data in Gainsight:

Step 1: Create MDA Object

The S3 Connector allows you to set up mappings that read customer data files and bring them into MDA objects, so that you can then use the data with Rules and Reporting. The S3 Connector connects your S3 bucket to one object within Gainsight’s Matrix Data Architecture (MDA). Once the raw usage data resides in the Gainsight MDA, the system performs aggregations on it to achieve optimal performance while generating reports.


To create an MDA object:  

  1. Navigate to Administration > Data Management.
  2. Click + OBJECT.
  3. Enter a name and description for the MDA object.
  4. Select either Manual or Data File Upload (CSV):
    • Select Manual to create the fields/columns manually.
    • Select Data File Upload (CSV) to insert a template file that already has the fields/columns configured (a sketch of such a template follows these steps).
  5. Click NEXT. The object is created and the Add Field screen appears.
  6. In the Add Field screen, enter the required details and click Add. A new field/column is created, and the MDA object creation is complete.
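
For illustration, here is a minimal sketch of a template CSV for the Data File Upload (CSV) option. The column names (AccountID, Date, PageViews, Logins) and values are hypothetical; use the fields your MDA object actually requires.

import csv

# Minimal sketch: generate a hypothetical template CSV for the
# Data File Upload (CSV) option. Column names are illustrative only.
with open("usage_template.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["AccountID", "Date", "PageViews", "Logins"])  # header line
    writer.writerow(["0011a00000XyZzA", "2023-01-15", "120", "8"])  # sample row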

Step 2: Create and Schedule S3 Project

By creating a project, you can fetch your customer’s data into the MDA.

Notes:

  • You can create multiple projects on an existing MDA custom object (Matrix Data-Object).
  • To edit existing projects, click the individual project in the project list page.

To create a project:

  1. Navigate to Administration > Connectors > S3 Connector; then click + Data Ingest Job.
  2. Under the Data Ingest Job Setup tab, enter the following details:
  • Data Ingest Job Name: Enter the desired name for the data ingest job.
  • Matrix Data-Object: Select an existing Matrix Data-Object.
  • Source CSV file: Enter the name of the CSV file that you want picked up from the Amazon S3 input folder. For example, doublequotes.csv.


  • CSV Properties: Select the appropriate CSV properties. The recommended properties are as follows (a sketch of a CSV written with these properties follows this procedure):
    • Character Encoding: UTF-8
    • Separator: , (Comma)
    • Quote Char: " (Double Quote)
    • Escape Char: \ (Backslash)
    • Header Line: 1 (Mandatory)
  • Select data load operation: Select Insert or Upsert. If you select Upsert, you must select the key fields used to identify unique records.
  3. Click Proceed to Field Mapping; then, under the Field Mapping tab, map each Target Object Field to the appropriate Source CSV Field:
  • Map all or a few object fields with the header fields in the CSV file and vice versa.
  • Click Select All to map all object fields with the CSV headers.
  • Choose multiple object fields and then click >> to map the selected object fields with the CSV headers.
  • Click UnMap All to unmap all of the fields that you set for mapping.
  • For Derived Field mappings, click the Show import lookup icon. The Data import lookup configuration dialog appears. This configuration looks up a standard object and matches fields to fetch Gainsight IDs (GSIDs) from the looked-up object, populating values in the target field. Derived mappings can be performed only for target fields of the GSID data type. For more information, refer to Gainsight S3 Connector and Data Import Lookup.
  4. Click Next; then, under the Schedule tab, enter the following details and click Run Now or set a recurring schedule:
    • On Success: If a job has partial or full data import success, a success notification email is sent to the email ID entered here.
    • On Failure: When all records fail to import, a failure notification email is sent to the email ID entered here.

Note: When you click Run Now, the data ingest configuration is saved automatically.

  • (Optional) Post file upload: If you do not want to schedule your data ingest job, you can choose to execute it whenever the file is uploaded to the Input folder. This option has the following limitations:
    • The file name/file cannot be used in other data ingestion projects. An error occurs if such an operation is performed.
    • While editing an existing Data Ingest Job, you cannot modify the existing Matrix Data-Object. You need to create another Data Ingest Job with a different Matrix Data-Object for data ingestion.
    • On any given day, you can upload up to five files with a maximum size of 200 MB each, and each file must be uploaded with a minimum gap of two hours.
  • Time based schedule: Schedules the data ingestion daily, weekly, or monthly.
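
As a reference, here is a minimal Python sketch that writes a CSV matching the recommended properties above (UTF-8 encoding, comma separator, double-quote quote character, backslash escape, one header line). The file name and column names are hypothetical.

import csv

rows = [
    ["0011a00000XyZzA", "2023-01-15", 'Search "advanced"'],
]

with open("doublequotes.csv", "w", newline="", encoding="utf-8") as f:  # UTF-8
    writer = csv.writer(
        f,
        delimiter=",",      # Separator: comma
        quotechar='"',      # Quote Char: double quote
        escapechar="\\",    # Escape Char: backslash
        doublequote=False,  # escape embedded quotes instead of doubling them
    )
    writer.writerow(["AccountID", "Date", "Feature"])  # Header Line: 1
    writer.writerows(rows)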

A success message appears once the data ingest job is saved successfully. In addition, you can check the execution history using View Execution History.

In case of failure, you can click the status of a particular data ingest job to view the cause. The Failed column also contains a link to download the error file, which contains the failure reason for the job.


Step 3: Upload CSV using Cyberduck

Gainsight recommends using the Cyberduck tool to push your files into your Amazon S3 bucket. Cyberduck is one of the most popular ETL tools that support connecting to Amazon S3. For a detailed list of ETL tools that you can use, see S3 FAQs. If you prefer to upload programmatically, a sketch using the AWS SDK follows the steps below.

Note: You may want to confirm the frequency and granularity of your usage data updates (daily, weekly, or monthly; account-, instance-, or user-level).

  1. Download and install the Cyberduck tool from https://cyberduck.io/?l=en.
  2. Click Open Connection and fill in the required connection information.


  3. After the connection is established, you can see a list of folders, one for each Gainsight bucket (folder) configured in the S3 Connector. Three sub-folders are available: input, archived, and error.
  4. Navigate to the input folder.
  5. Click File > Upload.
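
If you prefer not to use a GUI client, the upload can also be scripted with the AWS SDK. The following is a minimal sketch using boto3; the credentials, bucket name, and folder path are placeholders, so substitute the values from your Gainsight S3 Connector configuration.

import boto3

# Placeholder credentials and bucket; use the values provided in your
# Gainsight S3 Connector configuration.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload into the connector's input folder so the ingest job can pick it up.
s3.upload_file("doublequotes.csv", "your-gainsight-bucket", "input/doublequotes.csv")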

Limitations:

  • Unique filename for post file upload: Specify a unique filename for all event-based data ingest projects (post file upload). Also, the file name used must not be a suffix of a filename that already exists.
  • 500 MB file size limitation: The S3 Connector supports file sizes up to 500 MB; the suggested size is 200 MB (a pre-upload check is sketched after this list).
  • Set up the project and then upload the file: Files that exist in the bucket before "post file upload" is set up are not picked up.
  • Upload the file with the exact filename: Uploading a file and then renaming it to match the file name set in the project does not work.
  • Ingestion job starts immediately: There is no provision to configure a delay time for post file upload; the ingestion job starts immediately.
  • File processing happens in sequence: Upload a new file only after the previous file’s processing has started, or after the previous file has moved to the archive folder. If the previous file is still in the input folder, the new file overwrites it.
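
For example, a simple client-side check against these limits might look like the following sketch. The 500 MB cap and 200 MB suggestion come from this article; the file path is hypothetical.

import os

MAX_BYTES = 500 * 1024 * 1024        # S3 Connector hard limit
SUGGESTED_BYTES = 200 * 1024 * 1024  # suggested file size

def check_upload_size(path):
    # Raise if the file exceeds the connector's 500 MB limit; warn above 200 MB.
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        raise ValueError(f"{path} exceeds the 500 MB S3 Connector limit")
    if size > SUGGESTED_BYTES:
        print(f"Warning: {path} is above the suggested 200 MB size")

check_upload_size("doublequotes.csv")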

Step 4: Create Usage Data Measures

For more information on how to create usage data measures, refer to Usage Data Configurations.

  1. Navigate to Administration > Usage Configuration.
  2. Click + Add Measures. The Add Measures dialog appears.
  3. Type the measures, separated by a return (for example, Logins, Page Views, and Emails Sent, each on its own line).
  4. Click Process. The measures added are displayed.
  5. Click ADD ALL. The measures are created and become available in the Load to Usage action (while creating a rule).

Step 5: Load to Usage Data using a Rule

Use the following procedure if you want to load data into the Salesforce Usage Data object. Reporting (C360 > Usage report) is possible only when you load data into the Usage Data object.

Note: If Salesforce Account ID is not present in the MDA object, use this procedure.

  1. Create an MDA object.
  2. Build a rule to populate a second MDA object that contains the Salesforce Account ID and External Identifier, which you will join with the MDA Usage Data object. For example, the source object can be the SFDC Account object, syncing External Identifier and SFDC Account ID (upsert on SFDC Account ID).

Join the two MDA objects (Admin > Data Management > MDA Usage Data object). The join starts from the MDA Usage object and goes to the secondary MDA object, matching the External Identifier field on both objects. After the rule is run and the join is complete, you can use both objects’ fields in a rule or report. For more information, refer to MDA Joins. A conceptual sketch of this join follows.
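
The following pandas sketch illustrates the join conceptually: usage rows pick up the Salesforce Account ID by matching External Identifier values across the two objects. All column names and values here are hypothetical.

import pandas as pd

usage = pd.DataFrame({
    "ExternalIdentifier": ["acme-01", "globex-02"],
    "PageViews": [120, 45],
})
accounts = pd.DataFrame({
    "ExternalIdentifier": ["acme-01", "globex-02"],
    "SFDCAccountID": ["0011a00000Aaa", "0011a00000Bbb"],
})

# Join from the usage object to the secondary object on External Identifier.
joined = usage.merge(accounts, on="ExternalIdentifier", how="left")
print(joined)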

Step 6: Load to Usage MDA Object

You can load customer data and store it in the Usage MDA object using the Rules Engine. After loading the data to the MDA object, you can use it in the report builder.

  1. Navigate to Administration > Rules Engine.
  2. Click + Rule.
  3. In the Setup Rule screen, select Matrix Data.  
  4. In the Select a source object list, select the MDA object you created in Step 1. The fields available for the MDA object are populated.
    • If you completed Step 5 in this procedure (joining the two objects to obtain the SFDC Account ID), click through the join (+ sign) and place the Salesforce Account ID in the Show section, as it is required in the rule to load to the MDA Usage Data object.
  5. Drag the rest of the needed fields into the Show and Filters sections.
  6. Click NEXT. The Setup Action screen appears.
  7. Select Load to Usage from the Action Type list.
  8. In the Field Mappings section, map the required fields. The Account and Date field mappings are required.
  9. Click + FIELD MAPPING and select Usage Data Aggregation Level Name. A text box appears. Enter one of the following values, based on the configuration of your org:
    • INSTANCELEVEL or ACCOUNTLEVEL (loads instance-level or account-level data)
    • USERLEVEL or ACCOUNTLEVEL (loads user-level or account-level data)
  10. Click Save and run the rule as per your requirement.