This connector lets you fetch data from Amazon S3 into the Gainsight MDA repository. When you use S3 buckets, the job still pushes the data to MDA objects/subject areas; only the ingestion method changes, not the destination. S3 is a place to store files, not a database that you use for Rules or Reporting. The S3 Connector allows you to set up mappings that read those files and bring them into MDA objects, which you can then use for Rules and Reporting. Once the raw usage data resides in Gainsight MDA, the system performs aggregations on it to achieve optimal performance while generating reports. The S3 Connector is available at Administration > Operations > Connectors > S3 Connector.

Prerequisites

  • CSV files for all your projects must be placed under: s3://gsext-lr7yqwhf1o0laliaqiqhiwemn...a-Ingest/Input.
    Note: When using Cyberduck, you must copy the Bucket Access Path that follows s3://
  • Before using this connector, you must contact Gainsight Customer Support and obtain credentials to integrate your Amazon S3 bucket with Gainsight. For more information, refer to the Integrating Amazon S3 with Gainsight section in this article.
  • Use the S3 SDK, S3cmd, or an S3 browser to copy the CSV file into the input folder and perform data load operations (an SDK example is sketched below). For Windows, use either Cyberduck or S3 Browser. For Mac, use Cyberduck version 5.2.2 (the newest version of Cyberduck removes the ability to specify a path). For more information, refer to the How to Use Cyberduck Tool section.
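For reference, here is a minimal sketch of uploading a CSV into the input folder with the AWS SDK for Python (boto3). The bucket name and key prefix are placeholders, since the full Bucket Access Path is specific to your tenant; the credentials come from the View S3 Config page:

    import boto3

    # Credentials come from Administration > Operations > Connectors >
    # S3 Connector (View S3 Config); bucket and prefix are placeholders.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECURITY_TOKEN",
    )

    # Copy the CSV file into the project's input folder.
    s3.upload_file(
        Filename="usage.csv",
        Bucket="YOUR_BUCKET_NAME",
        Key="Data-Ingest/Input/usage.csv",  # placeholder path
    )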

Limitations

  • If you have multiple files with different column formats, you must create a corresponding project for each format on the desired Matrix Data-Object.

  • We recommend that the CSV file size not exceed 200MB.

Integrating Amazon S3 with Gainsight

To integrate Amazon S3 with Gainsight:

  1. Go to Administration > Operations > Connectors > S3 Connector.

  2. Click S3 Connector; then click NEXT.
 Gainsight automatically populates the Access Key and Security Token for the S3 bucket created for you; click the View S3 Config link to view these credentials. You might have to reload the page for them to appear.
 Optionally, you can click the Reset button to reset the Access Key and Security Token.
  3. Click Test to check whether S3 has been successfully integrated with Gainsight.

  4. Click NEXT to perform different data load operations on S3 data. For more information on the operations that you can perform on S3 data, refer to Creating Projects for Data Ingest and How to Upsert Data into Matrix Data-Object.

Creating Projects for Data Ingest

Once you have integrated Amazon S3 with Gainsight, you are ready to create a project.

Notes:

  • You can create multiple projects on an existing MDA custom object (Matrix Data-Object).
  • To edit an existing project, click the individual project on the project list page.

To create a project:

1. Navigate to Administration > Operations > Connectors > S3 Connector; then click + DATA INGEST JOB.

2. Under Project Setup tab, enter the following details:

  • Data Ingest Job Name: The desired data ingest job name.
  • Matrix Data-Object: Select an existing Matrix Data-Object.
  • Input: The path for the input file.
  • Archived: The path where files are archived after processing.
  • Failed: The path where the file is placed if the operation fails.
  • Key Encryption: Select this to encrypt the file to be uploaded.
    • Note: If you are unable to see Key Encryption, contact Gainsight Support.
    • Recommended/Verified tools to encrypt file: GPG Keychain and OpenPGP Studio.
  • Select Type: The type of encryption. (This field appears only when you select the Key Encryption check box.)
  • Write to error folder: Select this if you want the error file written to the path specified in the Failed field. (This field appears only when you select the Key Encryption check box.)
  • Source CSV file: Enter the name of the CSV file to be picked up from the Amazon S3 input folder.
  • Select data load operation: Select the Insert or Upsert check box. The Upsert check box is available only when a Matrix Data-Object is selected. Once you select Upsert, you need to select the key fields that identify unique records.
  • CSV Properties: Select the appropriate CSV properties. Gainsight recommends the following (see the sketch after this list for a file written with these properties):
    • Char (Character) Encoding: UTF-8
    • Separator: , (Comma)
    • Quote Char: " (Double Quote)
    • Escape Char: \ (Backslash)
    • Header Line: 1 (Mandatory)
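As a sketch of these recommendations, the following Python snippet writes a CSV with UTF-8 encoding, a comma separator, double-quote quote char, backslash escape char, and a single header line (the field names are illustrative):

    import csv

    rows = [
        {"account_id": "A-001", "logins": 42},
        {"account_id": "A-002", "logins": 7},
    ]

    with open("usage.csv", "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["account_id", "logins"],  # Header Line: 1
            delimiter=",",        # Separator: comma
            quotechar='"',        # Quote Char: double quote
            escapechar="\\",      # Escape Char: backslash
            doublequote=False,    # escape embedded quotes rather than doubling
        )
        writer.writeheader()
        writer.writerows(rows)

    # If Key Encryption is enabled for the project, the file can be
    # encrypted before upload, for example with the GPG command line:
    #   gpg --encrypt --recipient <key-id> usage.csv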

3. Click PROCEED TO FIELD MAPPING; under the Field Mapping tab, map Target Object Field with Source CSV Field appropriately.

  • You can map all or a few object fields with the header fields in the CSV file and vice-versa.
  • You can click Select All to map all object fields with the CSV headers.
  • You can choose multiple object fields and then click >> to map the selected object fields with the CSV headers.
  • You can click UnMap All to unmap all of the fields that you set for mapping.

4. Click NEXT; then under the Schedule tab, enter the following details and click RUN NOW or set a recurring schedule.

  • On Success: A success notification email is sent to the email ID entered here.
  • On Failure: A failure notification email is sent to the email ID entered here.

Note: When you click RUN NOW, the data ingest configuration is saved automatically.

(Optional) If you do not want to schedule your data ingest job, you can choose to execute it whenever the file is uploaded to the Input folder, using the Post file upload option. This option has the following limitations:

  • The same file or file name cannot be used in other data ingest projects; an error occurs if such an operation is attempted.
  • While editing an existing Data Ingest Job, you cannot modify the existing Matrix Data-Object. You need to create another Data Ingest Job with a different Matrix Data-Object for data ingestion.
  • On any given day, you can upload up to five files, each with a maximum size of 200MB. Files must be uploaded with a minimum gap of two hours between them.

Note: You can learn about the success or failure of a data load through the notification mechanism when using the S3 Connector to upload a data file into MDA. A Webhook notification is sent to the Callback URL you enter. The Callback URL must use HTTPS, support the POST method, and return a success response of 2XX. Header values are submitted in the form of key and value pairs. Admins may test the URL by using the TEST IT ONCE button, which sends a message of “TestMessage” to the endpoint; a minimal receiver sketch follows the field list below.

Users receive two messages at the endpoint:

i. TestMessage, which is used for validating the URL.

ii. The notification at the endpoint that contains the following fields:

  • S3 Job Id (Project Id)
  • S3 Project Name
  • Time taken (in milliseconds)
  • Total number of rows
  • Succeeded rows
  • Failed rows
  • S3 error file name
  • Status (Failure, Success, or Partial Success)
  • Status Id
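As an illustration only, here is a minimal sketch of such an endpoint using Flask; the payload key names are assumptions, since the exact schema is not documented here, and the service must be served over HTTPS:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/s3-ingest-callback", methods=["POST"])
    def s3_ingest_callback():
        payload = request.get_json(silent=True)

        # TEST IT ONCE sends "TestMessage" to validate the URL.
        if payload is None or payload == "TestMessage":
            return "", 200

        # A real notification carries the fields listed above; the key
        # names used here are assumptions for illustration.
        status = payload.get("status")
        failed_rows = payload.get("failedRows")
        print(f"Ingest status={status}, failed rows={failed_rows}")

        return "", 200  # the connector expects a 2XX success response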

5. Click SAVE. A success message appears once the data ingest job is saved successfully. In addition, you can check the execution history using View Execution History.

In case of failure, you can click the status of a particular data ingest job to view the cause. The Failed column also contains a link to the records that failed.

Data Import Lookup

This feature allows you to look up a standard object and match a column to fetch Gainsight IDs (GSIDs) from the looked-up object. Its main purpose is to populate fields from another object during data ingest.

In Administration > Operations > Connectors > S3 Connector, perform the following steps to use the Data Import Lookup feature:

  1. Navigate to S3 Connectors > S3 Configuration > Data Ingest Job Setup > Field mapping.
  2. Click Select All, then click the >> icon to add all the available target object fields to be mapped.
  3. Click the Data Import Lookup icon.

Currently, Data Import Lookup is supported only for fields with datatype GSID. When you apply a data import lookup, you look up another object, match it on a field, and fetch the data back. For example, if you select Account: Owner ID as the field name and you want the GSID of the standard object to be matched, you look up the User object, match it by the SFDC User Id field, and then populate the GSID.
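Conceptually, the lookup behaves like the following pandas sketch (this illustrates the matching logic only, not the connector's implementation; column names and GSID values are made up):

    import pandas as pd

    # Incoming CSV rows carrying an SFDC User Id.
    source = pd.DataFrame({"sfdc_user_id": ["005A1", "005B2"]})

    # The looked-up User object, holding the GSID to fetch.
    user_object = pd.DataFrame({
        "SFDC User Id": ["005A1", "005B2"],
        "GSID": ["GSID-1", "GSID-2"],
    })

    # Look up the User object, match on SFDC User Id, populate the GSID.
    resolved = source.merge(
        user_object, left_on="sfdc_user_id", right_on="SFDC User Id", how="left"
    )
    print(resolved[["sfdc_user_id", "GSID"]])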

  4. When there are multiple matches, or when no match is found, select from the given options as needed.


IMPORTANT: If there are multiple Account and User identifiers (multiple mappings), Admins can use Data Import Lookup.

How to Upsert Data into Matrix Data-Object

You must create a data ingest job to perform an Upsert on an existing Matrix Data-Object.

To upsert data into a Matrix Data-Object:

1. Navigate to the Amazon S3 Connector and click the desired data ingest job.

2. Select the Upsert check box; then add appropriate fields in Select key fields to identify unique records.
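To illustrate what the key fields do, here is a conceptual Python sketch of upsert semantics (the connector performs this server-side; this only shows the matching behavior):

    def upsert(existing, incoming, key_fields):
        # Index existing records by their key-field values.
        index = {tuple(r[k] for k in key_fields): r for r in existing}
        for rec in incoming:
            key = tuple(rec[k] for k in key_fields)
            if key in index:
                index[key].update(rec)   # key match: update the record
            else:
                existing.append(rec)     # no match: insert a new record
                index[key] = rec
        return existing

    rows = [{"account_id": "A1", "logins": 10}]
    new = [{"account_id": "A1", "logins": 12}, {"account_id": "A2", "logins": 3}]
    print(upsert(rows, new, key_fields=["account_id"]))
    # A1 is updated to 12 logins; A2 is inserted.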
 

3. Click PROCEED TO FIELD MAPPING.

4. Under the Field mapping tab, map Target Object Field to Source CSV Field appropriately, if required.

5. Click RUN NOW, or set a recurring schedule using the Set recurring schedule check box.

Viewing Execution History and S3 Configuration

  • Once you have created a data ingest job and performed a data load operation, you can view the execution history using View Execution History.
  • You can click the View S3 Config link to view or to configure Amazon S3 for Gainsight.

Troubleshoot Data Load Operation

You can check the data load operation details on Amazon S3:

  • archived: Once a CSV file has been used for a data load operation, it is moved from the input folder of the S3 bucket to the archived folder. Errors, if any, are recorded in a file and placed under the error folder.
  • error: This folder contains records of data that failed to load.
  • input: This folder contains the CSV files waiting to be ingested (see the sketch below for checking these folders programmatically).
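As a convenience, the following sketch lists the archived and error folders with boto3 to confirm the outcome of a run; the bucket name and prefixes are the same placeholders used earlier:

    import boto3

    s3 = boto3.client("s3")

    # Placeholder bucket/prefixes; use your actual Bucket Access Path.
    for prefix in ("Data-Ingest/Archived/", "Data-Ingest/Error/"):
        resp = s3.list_objects_v2(Bucket="YOUR_BUCKET_NAME", Prefix=prefix)
        for obj in resp.get("Contents", []):
            print(prefix, obj["Key"], obj["Size"])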
     

How to Use Cyberduck Tool

1. Download and install the Cyberduck tool from https://cyberduck.io/?l=en.

2. Click Open Connection and fill in the required details, including the Bucket Access Path and the Access Key and Security Token obtained from the S3 Connector page.
 

3. You will find a list of folders, one for each data ingest project configured in the S3 Connector. Inside each folder, there are three subfolders: input, archived, and error.

4. Navigate to the input folder and upload files into it by selecting the Upload option in the File menu, or by right-clicking.
 

Limitations

  1. Unique file name for post file upload: You must specify a unique file name for each Event Based Data Ingest (Post File Upload) project. The file name used must also not be a suffix of a file name that already exists.
  2. 500 MB file size limit: The S3 Connector supports files up to 500 MB only; the suggested size is 200 MB.
  3. Set up the project before uploading the file: Files that exist in the bucket before "post file upload" is set up are not picked up.
  4. Upload the file with the exact file name: A file that is uploaded and then renamed to match the file name set in the project is not ingested.
  5. No configurable delay time: There is no provision to configure a delay time for "post file upload"; the file is ingested immediately.
  6. Upload a new file only after processing of the previous file has started, or after the previous file has been moved to the archived folder. If the previous file is still in the input folder, the new file overwrites it.