This article supports Gainsight NXT, the next evolution of the Customer Success platform. If you are using Gainsight CS Salesforce Edition, you can find supporting documentation by visiting the home page, and selecting CS > Salesforce Edition.
Note: Gainsight recommends creating S3 data ingest jobs using Bionic Rules. For more information, refer to the S3 Dataset Task in Bionic Rules article. If you have already created S3 data ingest jobs in Connectors 1.0 and want to update their configurations, continue referring to this article.
For more information on how to create an S3 Connection, refer to the Create an S3 Connection in Connectors 2.0 (Horizon Experience) or Create an S3 Connection in Connectors 2.0 article.
The S3 connector helps you fetch data from a CSV file in an Amazon S3 bucket into Gainsight objects. An Amazon S3 bucket is a space to store files that may contain your business data. The S3 connector allows you to set up mappings to read those files and fetch data into MDA objects. The S3 Connector supports integrating with Gainsight objects from the Gainsight Managed Bucket only, whose credentials are displayed on the Gainsight S3 Connector page. Once data is stored in MDA objects, you can use it in other Gainsight product areas such as Rules and Reporting.
You can also ingest raw usage data into a Gainsight object using the S3 connector. Once the raw usage data is stored in a Gainsight object, the system performs aggregations on it to achieve optimal performance while generating reports. The S3 Connector supports loading data into Gainsight standard and custom objects, except the User and Person objects. The S3 Connector is available at Administration > Connectors > S3 Connector.
This article describes how to:
- Integrate Amazon S3 with Gainsight
- Create Data Ingest Jobs
- Upsert data into a Gainsight Object
- View execution history and S3 configuration
- Troubleshoot data load operation
Prerequisites
- Make sure that the formats of the Date and DateTime values in the CSV file are supported in Gainsight. For the list of supported formats, refer to the Gainsight Data Management article.
- All of your projects need their CSV files to be placed in the Gainsight bucket with the Bucket Access Path: s3://gsext-lr7yqwhf1............a-Ingest/Input. To find your bucket access path, navigate to Administration > Operations > Connectors > S3 Connector.
Note: When using Cyberduck, copy only the portion of the Bucket Access Path after s3://.
- Use the S3 SDK, s3cmd, or an S3 browser to copy the CSV file into the input folder to perform data load operations. For Windows, use either Cyberduck or S3 Browser.
- Gainsight recommends using Cyberduck or Google Chrome extension of S3 bucket to upload CSV/TSV files into Gainsight Managed S3 bucket. For more information, refer to the Upload CSV/TSV files into S3 Bucket article.
- If you have multiple files with different column formats, create a corresponding project for each of them for the desired Gainsight object.
- Gainsight recommends that the CSV file size not exceed 200 MB.
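Before uploading, you can check a file against these prerequisites locally. The following sketch (the file name, date column, and chosen date format are illustrative assumptions, not Gainsight requirements) verifies that a CSV stays under the recommended 200 MB limit and that a date column parses in one supported format.

```python
import csv
import os
from datetime import datetime

MAX_SIZE_BYTES = 200 * 1024 * 1024   # recommended 200 MB cap
DATE_FORMAT = "%m/%d/%Y"             # assumption: one supported Gainsight format

def validate_csv(path: str, date_column: str) -> list[str]:
    """Return a list of problems found; an empty list means the file looks loadable."""
    problems = []
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        problems.append("file exceeds the recommended 200 MB limit")
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)   # header line 1 is mandatory
        for line_no, row in enumerate(reader, start=2):
            try:
                datetime.strptime(row[date_column], DATE_FORMAT)
            except (KeyError, ValueError):
                problems.append(f"line {line_no}: bad or missing {date_column!r} value")
    return problems
```

Running this before each upload catches malformed dates early, instead of discovering them in the error folder after the job runs.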
Integrating Amazon S3 with Gainsight
To integrate Amazon S3 with Gainsight:
- Navigate to Administration > Operations > Connectors > S3 Connector. The S3 Configuration page appears.
- Click NEXT. Gainsight automatically populates the Access Key and Security Token for the S3 Gainsight Managed bucket created for you. You might have to reload the page, and then click the View S3 Config link, to view these credentials. Optionally, click Reset to reset the Access Key and Security Token.
- Click Test to check whether S3 has been successfully integrated with Gainsight. If the connection is valid, the message Connection Successful is displayed.
Create Data Ingest Jobs
Once you have integrated the S3 connector with Gainsight, you are ready to create a data ingest job.
- You can create multiple data ingest jobs on an existing custom object (Matrix Data-Object).
- To edit an existing data ingest job, click the individual job in the list page.
To create a Data Ingest Job:
- Navigate to Administration > Operations > Connectors > S3 Connector.
- Click + DATA INGEST JOB.
- Under Data Ingest Job Setup tab, enter the following details:
- Data Ingest Job Name: The desired data ingest job name.
- Matrix Data-Object: Select an existing Gainsight Object.
- Input: Path for the input file. This indicates location of the S3 bucket path to load your CSV input files.
- Archived: Path of the archiving file. Once a data ingest job is successful, the input file is moved to Archived folder.
- Failed: Path to which the input file is moved after a failed data ingest job.
- Key Encryption: Select this check box to encrypt the file to be uploaded.
- Note: If you are unable to see Key Encryption, contact Gainsight Support.
- Recommended/Verified tools to encrypt file: GPG Keychain and OpenPGP Studio.
- Select Type: The type of encryption. (This field appears only when you select the Key Encryption check box.)
- Write to error folder: Select this if you want to write the error file to the path specified in the Failed field. (This field appears only when you select the Key Encryption check box.)
- Note: Key Encryption, Select Type, and Write to error folder options appear only when Key Encryption is enabled in Gainsight. To enable the Key Encryption, contact firstname.lastname@example.org.
- Source CSV file: Enter the name of the CSV file to be picked from the Amazon S3 input folder, for example, CompanyDetails.csv.
- Select data load operation: Select either the Insert or Upsert radio button. Note that you must select a Gainsight object before you can select the Upsert radio button. Once you select Upsert, click the + button and select the key fields that identify unique records.
- CSV Properties: Select the appropriate CSV properties. Gainsight recommends the following:
- Char (Character) Encoding: UTF-8
- Separator: , (Comma)
- Quote Char: “ (Double Quote)
- Escape Char: Backslash
- Header Line: 1 (Mandatory)
- Multi select separator: ; (Semicolon)
- Selecting a Character Encoding format is required in the S3 job configuration. UTF-8 is selected by default, but you can change it as required.
- Use the same separator in the job configuration as in the input CSV file. , (Comma) is selected by default, but you can change it as required.
- The Quote Character is used to import a value (along with any special characters) enclosed in quotation marks. Use the same quote character in the job configuration as in the input CSV file. Double Quote is selected by default, but you can change it to Single Quote as required.
- The Escape Character is used to include special characters in a value. By default, Backslash is used as the escape character before a special character. Gainsight recommends using Backslash in the CSV file to avoid any discrepancy in the data after loading.
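A file matching the recommended properties can be produced with Python's standard csv module. This sketch (column names and the output file name are illustrative; the file is written to the current directory) uses UTF-8 encoding, a comma separator, double-quote quoting, backslash escaping, the header on line 1, and semicolons inside a multi-select value.

```python
import csv

# Dialect mirroring the recommended S3 Connector CSV properties.
class GainsightDialect(csv.Dialect):
    delimiter = ","              # Separator: comma
    quotechar = '"'              # Quote Char: double quote
    escapechar = "\\"            # Escape Char: backslash
    doublequote = False          # escape embedded quotes with backslash, not ""
    skipinitialspace = False
    lineterminator = "\r\n"
    quoting = csv.QUOTE_MINIMAL

rows = [
    ["Name", "ARR", "Products"],             # header line (mandatory, line 1)
    ["Acme, Inc.", "120000", "CS;PX"],       # semicolon separates multi-select values
]

with open("CompanyDetails.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f, dialect=GainsightDialect).writerows(rows)
```

Keeping the writer's dialect identical to the job configuration avoids the separator and quote-character mismatches described above.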
- Click PROCEED TO FIELD MAPPING; under the Field Mapping tab, map Target Object Field with Source CSV Field appropriately.
- You can map all or a few fields with the header fields in the CSV file. You can choose multiple object fields and then click the Field Mapping icon to map the selected fields with the CSV headers.
- You can click Select All to map all fields with the CSV headers.
- Click the UnMap icon for a specific field mapping or UnMap All to unmap all the fields that you set for mapping.
- While mapping Date and DateTime fields between the Source CSV field and the Target MDA object, click the Clock icon. The Select a Timezone dialog box appears.
- Select a timezone from the dropdown list and click Ok. This assigns a timezone to the Date and DateTime values, which are then converted from the selected timezone into UTC and stored in the Gainsight object. If you do not select a timezone, the records are assumed to be in the Gainsight timezone and are converted from it into UTC before being stored.
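The conversion described above can be illustrated with Python's zoneinfo module: a naive DateTime value from the CSV is interpreted in the selected timezone (America/New_York here, purely as an example) and converted to UTC for storage.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def to_utc(value: str, fmt: str, tz_name: str) -> datetime:
    """Interpret a naive CSV date/time value in the selected timezone; return UTC."""
    naive = datetime.strptime(value, fmt)
    return naive.replace(tzinfo=ZoneInfo(tz_name)).astimezone(ZoneInfo("UTC"))

# 05/01/2023 10:00 in America/New_York (EDT, UTC-4) is stored as 14:00 UTC.
stored = to_utc("05/01/2023 10:00", "%m/%d/%Y %H:%M", "America/New_York")
```

If no timezone were selected, the same logic would apply with the Gainsight timezone in place of `tz_name`.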
- For Derived Field mappings, click the Show import lookup icon. The Data import lookup configuration dialog appears. This configuration looks up the same or another standard object, matches fields to fetch Gainsight IDs (GSIDs) from the looked-up object, and populates them in the target field. Derived mappings can be performed only for target fields of the GSID data type.
- There are two types of lookups: Direct and Self. A Direct lookup enables admins to look up another MDA standard object and fetch the GSIDs of records from it. A Self lookup enables admins to look up the same standard object and fetch the GSID of another record into the target field. For more information, refer to the Data Import Lookup article.
- In the following example using a Direct import lookup, we look up the User object, match the CSV file header CSM Email with User::Email, and bring the correct GSID from the lookup object User into the target field Company::CSM. Click the + button to match multiple fields between the CSV file and the lookup object to import the correct GSID from the standard object. When there are multiple matches, or when no match is found, you can select from the given options as needed. Click Apply.
Note: If there are multiple Account and User Identifiers (multiple mappings), Admins can use multiple field matching as shown above. In the image above, CSM Email and CSM Name from CSV file match by Email and Name in the User object.
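The import lookup behaves like a keyed join. The following sketch (record contents and GSID values are illustrative, not a real Gainsight API) resolves a GSID from a User-like lookup object by matching one or more CSV headers against lookup fields, falling back to a configured value when zero or multiple matches are found.

```python
# Illustrative lookup records standing in for the User object.
users = [
    {"GSID": "GSID-001", "Email": "jane@acme.com", "Name": "Jane Doe"},
    {"GSID": "GSID-002", "Email": "raj@acme.com", "Name": "Raj Patel"},
]

def lookup_gsid(csv_row, matches, lookup_records, fallback=None):
    """matches maps CSV headers to lookup-object fields, e.g. {"CSM Email": "Email"}."""
    hits = [
        rec for rec in lookup_records
        if all(csv_row.get(hdr) == rec.get(fld) for hdr, fld in matches.items())
    ]
    if len(hits) == 1:
        return hits[0]["GSID"]
    return fallback  # zero or multiple matches: use the configured option

row = {"Company": "Acme", "CSM Email": "jane@acme.com", "CSM Name": "Jane Doe"}
csm_gsid = lookup_gsid(row, {"CSM Email": "Email", "CSM Name": "Name"}, users)
```

Matching on two fields, as in the note above, narrows the result when a single field (such as Name) is not unique in the lookup object.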
- When Field Mappings and Derived Field Mappings are completed, click NEXT. The Schedule page appears.
- Enter the following details and click RUN NOW or set a recurring schedule.
- On Success: If a job has partial/full data import success, a success notification email is sent to the email ID entered here.
- On Failure: When all records fail to import, a failure notification email is sent to the email ID entered here.
Note: When you click Run Now, the data ingest configuration is saved automatically.
(Optional) If you do not want to schedule your data ingest job, you can choose to execute it whenever the file is uploaded to the Input folder using the Post file upload option. The following are the limitations for using this option.
- The file name/file cannot be used in other data ingestion projects. An error occurs if such an operation is performed.
- While editing an existing Data Ingest Job, you cannot modify the existing Gainsight object. You need to create another Data Ingest Job with a different Gainsight object for data ingestion.
- Time-based schedule: Schedule the job to run daily, weekly, or monthly.
Note: You can learn about the success or failure of the data load through the notification mechanism while using the S3 Connector to upload data (files) into MDA. A webhook notification is sent to the Callback URL you enter. The Callback URL must use HTTPS, support the POST method, and return a success response of 200. Header values are submitted in the form of key-value pairs. Admins can test the URL using the TEST IT ONCE button, which sends a “TestMessage” to the endpoint.
Users receive two messages at the endpoint:
- TestMessage, which is used for validating the URL.
- The notification at the endpoint that contains the following fields:
- S3 Job Id (Project Id)
- S3 Project Name
- Time taken (in milliseconds)
- Total number of rows
- Succeeded rows
- Failed rows
- S3 error file name
- Status (Success, Failure, or Partial Success)
- Status Id
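A callback endpoint must accept POST over HTTPS and return 200. This sketch shows only the handler logic; the payload key names are assumptions based on the field list above (Gainsight does not publish a schema here), and the HTTPS hosting is assumed to be handled elsewhere.

```python
import json

def handle_notification(body: bytes):
    """Parse an S3 Connector webhook POST body and acknowledge with 200.

    The payload keys used below are assumptions, not a documented schema.
    """
    payload = json.loads(body)
    if payload.get("message") == "TestMessage":
        # Sent by the TEST IT ONCE button to validate the Callback URL.
        return 200, "callback URL validated"
    summary = (
        f"job {payload.get('projectName')}: {payload.get('status')} "
        f"({payload.get('succeededRows')}/{payload.get('totalRows')} rows)"
    )
    return 200, summary  # always return 200 so the sender sees success
```

Distinguishing the TestMessage from real notifications, as above, keeps URL validation from polluting downstream alerting.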
- Click SAVE. A success message appears once the data ingest job is saved successfully. In addition, you can check the execution history using View Execution History.
In case of failure, you can click the status of a particular data ingest job to view the cause. The Failed column also contains a link to download the error file, which contains the failure reason for the job.
Upsert Data into Gainsight Object
You must create a data ingest job to perform an Upsert on an existing Matrix Data-Object.
To upsert data into a Matrix Data-Object:
- Navigate to S3 Connector > [Click on the desired data ingest job].
- Select the Upsert check box; click the + button and add appropriate fields in Select key fields to identify unique records.
- Click PROCEED TO FIELD MAPPING.
- In the Field mapping tab, map Target Object Field to Source CSV Field appropriately, if required.
- In the Schedule tab, click RUN NOW, or set a recurring schedule using the Set recurring schedule checkbox.
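Upsert semantics can be pictured as a merge keyed on the selected key fields. In this sketch (field names and values are illustrative), incoming rows update existing records whose key fields match and are inserted as new records otherwise.

```python
def upsert(existing, incoming, key_fields):
    """Merge incoming rows into existing records, keyed on key_fields."""
    index = {tuple(rec[k] for k in key_fields): rec for rec in existing}
    for row in incoming:
        key = tuple(row[k] for k in key_fields)
        if key in index:
            index[key].update(row)   # matching key fields: update the record
        else:
            existing.append(row)     # no match: insert as a new record
    return existing

records = [{"Email": "jane@acme.com", "ARR": 100}]
rows = [{"Email": "jane@acme.com", "ARR": 120},
        {"Email": "raj@acme.com", "ARR": 80}]
merged = upsert(records, rows, ["Email"])
```

This is why the key fields must uniquely identify records: if two records share the same key values, an upsert cannot tell which one to update.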
Viewing Execution History and S3 Configuration
Once you have created a data ingest job and have performed a data load operation, you can view the execution history using View Execution History as shown in the image below.
You can see successful and failed jobs on the S3 Execution Logs page, as shown below. In case of a failed job, you can click the Failure status of a particular data ingest job to view the cause. The Failed column also contains a link to download the error file, which contains the failure reason for the job.
- Click View S3 Config to view or to configure Amazon S3 for Gainsight as shown below. It provides Bucket Access Path, Access Key, and Security Token for S3 bucket connection.
- Click TEST to test the S3 bucket connection. When the connection is good, it shows a message Connection Successful.
Troubleshoot Data Load Operation
You can check the data load operation details on Amazon S3:
- archived: Once the CSV file is used for a data load operation and the data ingest job succeeds, the file is moved from the input folder of the S3 bucket to the archived folder.
- error: This folder contains the error files, which hold the records that failed to load.
- input: This folder contains the CSV file to be used for data import.