
Gainsight S3 Connector

Gainsight Standard Edition
This article supports Gainsight Standard Edition. This Edition is built on Gainsight's state-of-the-art Matrix Data Architecture (MDA) platform and is designed to help customer success professionals drive revenue, increase retention, and scale operations. To learn more about Gainsight Standard Edition, click here.

If you are using Gainsight Salesforce Edition, which is built on Salesforce and stores customer business data in SFDC, you can find supporting documentation here.

 

The S3 Connector helps you fetch data from a CSV file in an Amazon S3 bucket into Gainsight objects. An Amazon S3 bucket is a storage space for files that may contain your business data. The S3 Connector lets you set up mappings to read those files and fetch the data into MDA objects. The S3 Connector supports integrating with Gainsight objects from the Gainsight Managed Bucket only; its credentials are displayed on the Gainsight S3 Connector page. Once data is stored in MDA objects, you can use it in other Gainsight product areas such as Rules and Reporting.

You can also ingest raw usage data into a Gainsight object using the S3 Connector. Once the raw usage data is stored in a Gainsight object, the system performs aggregations on it to achieve optimal performance when generating reports. The S3 Connector supports loading data into Gainsight standard and custom objects, except the User and Person objects. The S3 Connector is available at Administration > Operations > Connectors > S3 Connector.


Prerequisites

  • Make sure that the formats of the Date and DateTime values in the CSV file are supported in Gainsight. For the list of supported formats, refer to Gainsight Data Management.
  • All of your projects need their CSV files placed in the Gainsight bucket at the Bucket Access Path: s3://gsext-lr7yqwhf1............a-Ingest/Input. To find your bucket access path, navigate to Administration > Operations > Connectors > S3 Connector.
    Note: When using Cyberduck, copy only the portion of the Bucket Access Path after s3://
  • Use an S3 SDK, S3cmd, or an S3 browser to copy the CSV file into the input folder for data load operations. On Windows, use either Cyberduck or S3 Browser. Gainsight recommends Cyberduck or the Amazon S3 Chrome extension, both described later in this article; a scripted upload sketch follows this list.
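For a scripted upload, here is a minimal sketch assuming the boto3 SDK. The bucket name and key are placeholders: build them from the Bucket Access Path shown on your S3 Configuration page, and use the Access Key and Security Token from the same page as the credentials.

    # Minimal upload sketch, assuming the boto3 SDK. All bracketed values
    # are placeholders taken from your S3 Configuration page.
    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="<Access Key from the S3 Configuration page>",
        aws_secret_access_key="<Security Token from the S3 Configuration page>",
    )
    # Place the CSV in the input folder of the Gainsight Managed bucket.
    s3.upload_file(
        Filename="CompanyDetails.csv",
        Bucket="<bucket name from the Bucket Access Path>",
        Key="<folder path from the Bucket Access Path>/input/CompanyDetails.csv",
    )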

Limitations

  • If your files have different column formats, you must create a corresponding project for each format for the desired Gainsight object.
  • Gainsight recommends that the CSV file size not exceed 200 MB.

Integrating Amazon S3 with Gainsight

To integrate Amazon S3 with Gainsight:

  1. Navigate to Administration > Operations > Connectors > S3 Connector. The S3 Configuration page appears.
  2. Click NEXT. Gainsight automatically populates the Access Key and Security Token for the S3 Gainsight Managed bucket created for you. You might have to reload the page and then click the View S3 Config link to view these credentials. Optionally, you can click Reset to reset the Access Key and Security Token.

S3 Connector 1.gif

  3. Click Test to check whether S3 has been successfully integrated with Gainsight. If the connection is valid, the message Connection Successful appears.

S3 Connector 2.gif

Create Data Ingest Jobs

Once you have integrated the S3 Connector with Gainsight, you are ready to create a data ingest job.

Notes

  • You can create multiple data ingest jobs on an existing custom object (Matrix Data-Object).
  • To edit an existing data ingest job, click the individual job in the list page.

To create a Data Ingest Job:

  1. Navigate to Administration > Operations > Connectors > S3 Connector.
  2. Click + DATA INGEST JOB.

S3 Connector 3.png

  3. Under the Data Ingest Job Setup tab, enter the following details:
  • Data Ingest Job Name: The desired data ingest job name.
  • Matrix Data-Object: Select an existing Gainsight object.
  • Input: Path of the input folder. This is the location in the S3 bucket where you place your CSV input files.
  • Archived: Path of the archive folder. Once a data ingest job succeeds, the input file is moved to this folder.
  • Failed: Path of the folder to which the input file is moved after a failed data ingest job.
  • Key Encryption: Select this option to upload an encrypted file.
    • Note: If you cannot see Key Encryption, contact Gainsight Support.
    • Recommended/verified tools to encrypt files: GPG Keychain and OpenPGP Studio (see the command-line sketch after the image below).
  • Select Type: The type of encryption. (This field appears only when you select the Key Encryption check box.)
  • Write to error folder: Select this if you want the error file written to the path specified in the Failed field. (This field appears only when you select the Key Encryption check box.)
    • Note: The Key Encryption, Select Type, and Write to error folder options appear only when Key Encryption is enabled in Gainsight. To enable Key Encryption, contact support@gainsight.com.
  • Source CSV file: Enter the name of the CSV file to be picked from the Amazon S3 input folder, for example, CompanyDetails.csv

S3 Connector 4.gif
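GPG Keychain and OpenPGP Studio are GUI tools; as a rough command-line equivalent, here is a minimal sketch (Python wrapping the standard gpg CLI) for encrypting a CSV before upload. It assumes the recipient's public key has already been imported, and the key ID is a placeholder; the encryption used must match what you choose in Select Type.

    # Optional sketch: encrypting a CSV before upload when Key Encryption
    # is enabled. The recipient key ID is a placeholder.
    import subprocess

    subprocess.run(
        [
            "gpg", "--encrypt",
            "--recipient", "<recipient key ID>",
            "--output", "CompanyDetails.csv.gpg",
            "CompanyDetails.csv",
        ],
        check=True,
    )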

  • Select data load operation: Select either the Insert or Upsert radio button. The Upsert option can be selected only after you select a Gainsight object. Once you select Upsert, click the + button and select the key fields that identify unique records (a conceptual sketch follows this list).
  • CSV Properties: Select the appropriate CSV properties. Gainsight recommends the following:
    • Char (Character) Encoding: UTF-8
    • Separator: , (Comma)
    • Quote Char: “ (Double Quote)
    • Escape Char: Backslash
    • Header Line: 1 (Mandatory)
    • Multi select separator: ; (Semicolon)
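Conceptually, an Upsert uses the selected key fields to decide whether an incoming record updates an existing record or is inserted as a new one. The following sketch (plain Python, with hypothetical field names) illustrates the idea only; it is not Gainsight's implementation.

    # Conceptual sketch: how key fields identify unique records during
    # an Upsert. Records whose key fields match an existing record
    # update it; the rest are inserted.
    def upsert(existing, incoming, key_fields):
        index = {tuple(r[f] for f in key_fields): r for r in existing}
        for rec in incoming:
            key = tuple(rec[f] for f in key_fields)
            if key in index:
                index[key].update(rec)    # key match: update the record
            else:
                existing.append(rec)      # no match: insert a new record
                index[key] = rec
        return existing

    rows = upsert(
        existing=[{"Company Name": "Acme", "CSM": "alice@example.com"}],
        incoming=[{"Company Name": "Acme", "CSM": "bob@example.com"}],
        key_fields=["Company Name"],
    )   # Acme's CSM is updated rather than duplicated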

4MT.png
Notes:

  • You must select a Character Encoding format in the S3 job configuration. By default, UTF-8 is selected, but you can change it as required.
  • Use the same separator in the job configuration that is used in the input CSV file. By default, , (comma) is selected as the separator, but you can change it as required.
  • The Quote Character is used to import a value (along with any special characters) enclosed in quotation marks. Use the same Quote Character in the job configuration that is used in the input CSV file. By default, Double Quote is selected, but you can change it to Single Quote as required.
  • The Escape Character is used to include special characters in a value. By default, Backslash is used as the Escape Character before a special character. Gainsight recommends using Backslash in the CSV file to avoid any discrepancy in the data after loading (a writer sketch follows these notes).
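As an illustration, here is a minimal sketch (Python standard library, hypothetical columns) of writing an input CSV that matches the recommended properties above.

    # Writes a CSV matching the recommended properties: UTF-8 encoding,
    # comma separator, double-quote quote character, backslash escape
    # character, and one header line.
    import csv

    rows = [{"Company Name": "Acme, Inc.", "CSM Email": "csm@example.com"}]
    with open("CompanyDetails.csv", "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=["Company Name", "CSM Email"],
            delimiter=",",      # Separator
            quotechar='"',      # Quote Char
            escapechar="\\",    # Escape Char
        )
        writer.writeheader()    # Header Line: 1
        writer.writerows(rows)  # "Acme, Inc." is quoted: it contains a comma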
  4. Click PROCEED TO FIELD MAPPING. Under the Field Mapping tab, map each Target Object Field to the appropriate Source CSV Field.
  • You can map all or only some of the fields to the header fields in the CSV file. Select multiple object fields and then click the Field Mapping icon to map the selected fields to the CSV headers.
  • You can click Select All to map all fields to the CSV headers.
  • Click the UnMap icon to unmap a specific field mapping, or UnMap All to unmap all the fields you have mapped.

S3 Connector 5.png

  5. While mapping Date and DateTime fields between the Source CSV field and the Target MDA object, click the Clock icon. The Select a Timezone dialog box appears.

S3 Connector 6.png

  6. Select a timezone from the dropdown list and click Ok. This assigns a timezone to the Date and DateTime values; the values are then converted from the selected timezone into UTC and stored in the Gainsight object. If you do not select a timezone, the records are assumed to be in the Gainsight timezone, and the Date and DateTime values are converted from the Gainsight timezone into UTC before being stored (see the conversion sketch after the note below). For more information, refer to Timezone Standardization at Gainsight.
  7. For Derived Field mappings, click the Show import lookup icon. The Data import lookup configuration dialog appears. This lets you look up the same or another standard object, match fields to fetch Gainsight IDs (GSIDs) from the looked-up object, and populate them in the target field. Derived mappings can be performed only for target fields of the GSID data type.
  8. There are two types of lookups: Direct and Self. A Direct lookup lets admins look up another MDA standard object and fetch the GSIDs of records from that object. A Self lookup lets admins look up the same standard object and fetch the GSID of another record into the target field. For more information, refer to Data Import Lookup.
  9. In the following example using a Direct import lookup, we look up the User object, match the CSV file header CSM Email with User::Email, and bring the correct GSID from the lookup object User into the target field Company::CSM. Click the + button to match multiple fields between the CSV file and the lookup object. When there are multiple matches, or when no match is found, you can select from the given options as needed. Click Apply.

S3_Derived Mappings.gif

Note: If there are multiple Account and User identifiers (multiple mappings), admins can use multiple field matching as shown above. In the image above, CSM Email and CSM Name from the CSV file are matched with Email and Name in the User object.
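As an illustration of the timezone handling in the step above, here is a minimal sketch (plain Python, standard library only; the sample value and timezone are hypothetical) of converting a CSV DateTime value from the selected timezone into UTC for storage.

    # A DateTime value from the CSV is interpreted in the selected
    # timezone and converted to UTC, which is what gets stored.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    raw_value = "2017-05-15 09:30:00"              # value as it appears in the CSV
    selected_tz = ZoneInfo("America/Los_Angeles")  # timezone chosen in the dialog

    local_dt = datetime.strptime(raw_value, "%Y-%m-%d %H:%M:%S").replace(tzinfo=selected_tz)
    utc_dt = local_dt.astimezone(timezone.utc)     # value stored in the Gainsight object
    print(utc_dt)                                  # 2017-05-15 16:30:00+00:00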

  10. When the Field Mappings and Derived Field Mappings are complete, click NEXT. The Schedule page appears.
  11. Enter the following details and click RUN NOW or set a recurring schedule.
  • On Success: If a job imports data with partial or full success, a success notification email is sent to the email ID entered here.
  • On Failure: When all records fail to import, a failure notification email is sent to the email ID entered here.

Note: When you click Run Now, the data ingest configuration is saved automatically.

(Optional) If you do not want to schedule your data ingest job, you can choose to execute it whenever the file is uploaded to the input folder, using the Post file upload option. This option has the following limitations:

  • The file name/file cannot be used in other data ingestion projects; an error occurs if such an operation is performed.
  • While editing an existing data ingest job, you cannot modify the existing Gainsight object. You need to create another data ingest job with a different Gainsight object for data ingestion.
  • The time-based schedule options (daily, weekly, and monthly) do not apply when this option is used.

S3 Connector 7.png

Note: You can learn about the success or failure of a data load through the notification mechanism while using the S3 Connector to upload data (files) into MDA. A Webhook notification is sent to the Callback URL you enter. The Callback URL must use HTTPS, support the POST method, and return a Success response of 200. Header values are submitted as key-value pairs. Admins can test the URL using the TEST IT ONCE button, which sends a “TestMessage” to the endpoint.

S3 Connector 8.png

Users receive two messages at the endpoint:

  • TestMessage, which is used to validate the URL.
  • The notification at the endpoint, which contains the following fields (a minimal receiver sketch follows this list):
    • S3 Job Id (Project Id)
    • S3 Project Name
    • Time taken (in milliseconds)
    • Total number of rows
    • Succeeded rows
    • Failed rows
    • S3 error file name
    • Status (Failure, success, or partial success)
    • Status Id
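For illustration, here is a minimal sketch of a Callback URL endpoint, assuming Flask. This article does not document the exact payload keys Gainsight sends, so the handler simply logs whatever arrives; the route path is hypothetical. The endpoint must be reachable over HTTPS, accept POST, and return a 200 response.

    # Minimal Webhook receiver sketch, assuming Flask.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/s3-notifications", methods=["POST"])
    def s3_notification():
        # The first message received is the "TestMessage" sent by the
        # TEST IT ONCE button; later messages carry the job fields
        # listed above (S3 Job Id, Status, Succeeded rows, and so on).
        payload = request.get_json(silent=True) or {}
        app.logger.info("S3 Connector notification: %s", payload)
        return "OK", 200  # a 200 Success response acknowledges receipt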
  12. Click SAVE. A success message appears once the data ingest job is saved successfully. In addition, you can check the execution history using View Execution History.

9MT.png
In case of failure, click the status of a particular data ingest job to view the cause. The Failed column also contains a link to download the error file, which contains the reason the job failed.

Upsert Data into Gainsight Object

You must create a data ingest job to perform an Upsert on an existing Matrix Data-Object.
To upsert data into a Matrix Data-Object:

  1. Navigate to S3 Connector and click the desired data ingest job.
  2. Select the Upsert option; click the + button and add the appropriate fields in Select key fields to identify unique records.

S3 Connector 9.png

  3. Click PROCEED TO FIELD MAPPING.
  4. In the Field Mapping tab, map Target Object Field to Source CSV Field appropriately, if required.
  5. In the Schedule tab, click RUN NOW, or set a recurring schedule using the Set recurring schedule checkbox.

S3 Connector 10.png

Viewing Execution History and S3 Configuration

  • Once you have created a data ingest job and performed a data load operation, you can view the execution history using View Execution History, as shown in the image below.

S3 Connector 11.png

  • You can see successful and failed jobs in the S3 Execution Logs page, as shown below. For a failed job, click the Failure status of the data ingest job to view the cause. The Failed column also contains a link to download the error file, which contains the reason the job failed.

13MT.png

  • Click View S3 Config to view or configure Amazon S3 for Gainsight, as shown below. It provides the Bucket Access Path, Access Key, and Security Token for the S3 bucket connection.
  • Click TEST to test the S3 bucket connection. If the connection is valid, the message Connection Successful appears.

14MT.png

Troubleshoot Data Load Operation

You can check the data load operation details on Amazon S3:

  • archived: Once a CSV file has been used for a data load operation and the data ingest job succeeds, the file is moved from the input folder of the S3 bucket to the archived folder.
  • error: This folder contains the error files, which hold the records that failed to load.
  • input: This folder contains the CSV files to be used for data import.

15MT.png

Using the Cyberduck Tool to Upload CSV Files into the S3 Bucket

  1. Click View S3 Config on the S3 Configuration page and open the Gainsight Managed bucket details created for your tenant.
  2. Download and install the Cyberduck tool from https://cyberduck.io/?l=en.
  3. Open the Cyberduck application and click + at the bottom left of the page to add a bucket as a bookmark. The Add a Bookmark dialog appears.
  4. Enter the following details in the Add a Bookmark dialog:
    1. Select Amazon S3 from the dropdown list. Nickname, Port, and URL are selected automatically.
    2. Copy the Access Key from the S3 Configuration page and enter it in Access Key ID.
    3. Click More Options, then copy the Bucket Access Path from the S3 Configuration page and enter it in Path.
    4. Close the Add a Bookmark dialog. The Gainsight Managed S3 bucket is added as a bookmark.

S3_Connector_12.png

  5. Click Open Connection. Copy the Security Token from the S3 Configuration page, enter it in Secret Access Key, and click Login.
  6. Enter the following details:
    1. Select Server: S3 (Amazon Simple Storage Service).
    2. Access Key ID: the Access Key from the Gainsight S3 Configuration page.
    3. Secret Access Key: the Security Token from the Gainsight S3 Configuration page. Then click Connect.

S3_Connector_13.png

In the screenshot above: (1) select the connection type, (2) copy/paste the Access Key, (3) copy/paste the Security Token, and (4) click Connect.
  7. You can see three subfolders in MDA-Data-Ingest: input, archived, and error.
  8. Navigate to the input folder and upload files into it by selecting the Upload option in the File menu, or by right-clicking.

S3 Connector 14.png

Using the Google Chrome Extension to Connect to Amazon S3

You can download and use the Google Chrome extension Amazon S3 to connect to Gainsight’s Amazon S3 bucket. Once the extension is added to your Chrome browser, the Amazon S3 symbol appears in the top right corner of the browser. Use the Amazon S3 link to download the extension.

To add a new S3 connection through Google Chrome Extension:

  1. Click the Amazon S3 symbol. The S3 Connection page appears.
  2. Click +S3Connection and enter the following:
    1. Connection Name: Assign a unique connection name.
    2. Bucket Name: from the Bucket Access Path on the Gainsight S3 Configuration page.
    3. Access Key: from the Gainsight S3 Configuration page.
    4. Security Token: from the Gainsight S3 Configuration page.
  3. Click Save after entering the credentials. Click Submit to establish the connection, or Test to verify that the connection is valid.

S3_Connector_15.png

You can now navigate to the input folder and upload the required CSV files for data ingestion into the Gainsight objects through S3 Connector.

20MT.png

Conditions

  • Unique file name for post file upload: Specify a unique file name for each event-based data ingest job (Post File Upload). The file name must also not be a suffix of a file name that already exists.
  • 500 MB file size limit: The S3 Connector supports files up to 500 MB; however, Gainsight recommends that the file size not exceed 200 MB.
  • Set up the project before uploading the file: Files that exist in the bucket before "post file upload" is configured are not picked up.
  • Upload the file with the exact file name only: A file uploaded under a different name and then renamed to match the file name set in the project is not ingested.
  • No delay time can be configured: There is no provision to configure a delay time for "post file upload." The file is ingested immediately.
  • Upload a new file only after processing of the previous file has started, or the previous file has been moved to the archive folder. If the previous file is still in the input folder, the new file overwrites it (a pre-upload check sketch follows this list).
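Following the last condition above, here is a minimal sketch (assuming boto3, with the same placeholder bucket and key as earlier) of checking the input folder before uploading, so a new file does not overwrite one that is still waiting to be processed.

    # Check whether the previous file is still in the input folder
    # before uploading a new one.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "<bucket name from the Bucket Access Path>"
    key = "<folder path from the Bucket Access Path>/input/CompanyDetails.csv"

    try:
        s3.head_object(Bucket=bucket, Key=key)
        print("Previous file is still in the input folder; wait for it to be processed.")
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            s3.upload_file("CompanyDetails.csv", bucket, key)  # safe to upload
        else:
            raise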