Admins can use the S3 Connector to bring customer data into Gainsight securely and simply. To load usage data using the S3 Connector, the connector must be enabled in your org. For more information, refer to Integrating Amazon S3 with Gainsight.
Use the following steps to load usage data in Gainsight:
The S3 Connector lets you set up mappings that read customer data files and bring them into MDA objects, so that you can then use the data with Rules and Reporting. The S3 Connector connects your S3 bucket to one Subject Area within Gainsight’s Matrix Data Architecture (MDA). Once the raw usage data resides in Gainsight MDA, the system performs aggregations on it to achieve optimal performance while generating reports.
To create an MDA object:
- Navigate to Administration > Data Management.
- Click + OBJECT.
- Enter a name and description for the MDA object.
- Select either Manual or Data File Upload (CSV):
- Select Manual to create the fields/columns manually.
- Select Data File Upload (CSV) to upload a template file that already has the fields/columns configured.
- Click NEXT. The object is created and the Add Field screen appears.
- In the Add Field screen, enter the required details and click Add. A new field/column will be created. The MDA object creation is complete.
By creating a project, you will be able to fetch your customer’s data to the MDA.
- You can create multiple projects on an existing MDA custom object (Matrix Data-Object).
- To edit existing projects, click the individual project in the project list page.
To create a project:
- Navigate to Administration > Connectors > S3 Connector; then click + Data Ingest Job.
- Under Project Setup tab, enter the following details:
- Data Ingest Job Name: The desired data ingest job name.
- Matrix Data-Object: Select an existing Matrix Data-Object.
- Source CSV file: Enter the name of the CSV file to be picked up from the Amazon S3 input folder. For example, doublequotes.csv.
- CSV Properties: Select appropriate CSV Properties. Recommended CSV properties:
- Character Encoding: UTF-8
- Separator: , (Comma)
- Quote Char: “ (Double Quote)
- Escape Char: Backslash
- Header Line: 1 (Mandatory)
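If you generate the source CSV programmatically, the recommended properties above map directly onto the settings of Python's standard csv module. This is only an illustrative sketch; the file name and column names below are examples, not fields the connector requires.

```python
import csv

# Write a CSV matching the recommended connector properties:
# UTF-8 encoding, comma separator, double-quote quote char,
# backslash escape char, and a mandatory header line.
# File and column names here are examples only.
with open("doublequotes.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(
        f,
        delimiter=",",
        quotechar='"',
        escapechar="\\",
        doublequote=False,          # escape embedded quotes with backslash
        quoting=csv.QUOTE_MINIMAL,
    )
    writer.writerow(["account_id", "usage_date", "page_views"])  # header line 1
    writer.writerow(["ACC-001", "2024-01-15", 42])
    writer.writerow(["ACC-002, Inc.", "2024-01-15", 7])  # comma forces quoting
```

A field containing the separator (like the company name above) is wrapped in double quotes, which is exactly what the Quote Char setting tells the connector to expect.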
- Select data load operation: Select Insert/Upsert.
- Click Proceed to Field Mapping; under the Field Mapping tab, map Target Object Field with Source CSV Field appropriately.
- Map all or some of the object fields to the header fields in the CSV file.
- Click Select All to map all object fields with the CSV headers.
- Choose multiple object fields and then click >> to map the selected object fields with the CSV headers.
- Click UnMap All to unmap all of the fields that you set for mapping.
- Click Next; then under the Schedule tab, enter the following details and click Run Now or set a recurring schedule.
- On Success: A success notification email is sent to the email ID entered here.
- On Failure: A failure notification email is sent to the email ID entered here.
Note: When you click Run Now, the data ingest configuration is saved automatically.
- (Optional) If you do not want to schedule your data ingest job, you can choose to execute it whenever the file is uploaded to the Input folder using the Post file upload option. The following are the limitations for using this option.
- The same file/file name cannot be used in other data ingest projects; doing so results in an error.
- While editing an existing Data Ingest Job, you cannot modify the existing Matrix Data-Object. You need to create another Data Ingest Job with a different Matrix Data-Object for data ingestion.
- On any given day, you can upload up to five files, each up to 200 MB in size. Files must be uploaded at least two hours apart.
- Time-based schedule: schedule the job to run daily, weekly, or monthly.
A success message appears once the data ingest job is saved successfully. In addition, you can check the execution history using View Execution History.
If a job fails, click its status to view the cause. The Failed column also contains a link to the records that failed.
Gainsight recommends using the Cyberduck tool to push your files into Amazon’s S3 bucket. Cyberduck is one of the most popular ETL tools that support connecting to Amazon S3. For a detailed list of ETL tools that you can use, see S3 FAQs.
Note: You may want to know the frequency and granularity of your usage data updates (daily, weekly, monthly; account/instance/user-level).
- Download and install the Cyberduck tool from https://cyberduck.io/?l=en
- Click Open Connection and fill in the required info as shown in the image below:
- After the connection is established, you can see a list of folders, one for each Gainsight bucket (folder) configured in the S3 Connector. Three sub-folders are available: input, archived, and error.
- Navigate to the input folder.
- Click File > Upload.
- Unique filename for post file upload: Specify a unique filename for all the event based data ingest projects (post file upload). Also, the file name used must not be a suffix of a filename that already exists.
- 500 MB file size limitation: The S3 Connector supports file sizes up to 500 MB; the suggested size is 200 MB.
- Set up the project and then upload the file: Files that exist in the bucket before "post file upload" is set up are not picked up.
- Upload the file with the exact filename: Uploading a file and then renaming it to match the filename set in the project does not work.
- Ingestion job starts immediately: A provision to configure delay time for post file upload is not present. The ingestion job starts immediately.
- File processing happens in a sequence: You have to upload a new file only after the previous file processing has started, or if the previous file is moved to the archive folder. If the previous file is still in the input folder, the new file will overwrite the older file.
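Because a rejected upload can stall ingestion, it can help to validate a file locally before dropping it into the input folder. The helper below is a hypothetical sketch, not part of the connector; it only checks the limits listed above (500 MB hard limit, 200 MB suggested size, mandatory header line, UTF-8 encoding).

```python
import os

MAX_BYTES = 500 * 1024 * 1024          # hard limit from the list above
RECOMMENDED_BYTES = 200 * 1024 * 1024  # suggested size from the list above

def check_before_upload(path):
    """Hypothetical pre-upload check mirroring the documented limitations.

    Returns a list of problems; an empty list means the file looks safe
    to drop into the S3 input folder.
    """
    problems = []
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        problems.append("file exceeds the 500 MB limit")
    elif size > RECOMMENDED_BYTES:
        problems.append("file exceeds the suggested 200 MB size")
    try:
        with open(path, encoding="utf-8") as f:
            header = f.readline().strip()
        if not header:
            problems.append("missing mandatory header line")
    except UnicodeDecodeError:
        problems.append("file is not valid UTF-8")
    return problems
```

Running this before each upload catches the most common rejection causes without waiting for a failed ingest job notification.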
For more information on how to create usage data measures, refer to Usage Data Configurations.
- Navigate to Administration > Usage Configuration.
- Click + Add Measures. The Add Measures dialog appears.
- Type the measures, one per line.
- Click Process. The measures added will be displayed.
- Click ADD ALL. The measures will be created and will be available in the Load to Usage action (while creating a rule).
Use the following procedure if you want to load to the Salesforce Usage Data object. Reporting (CS360 > Usage report) is possible only when you load to the Usage Data object.
Note: If Salesforce Account ID is not present in the MDA object, use this procedure.
- Create an MDA object.
- Build a rule to populate a second MDA object that contains the Salesforce Account ID and External Identifier alongside the MDA Usage Data object. For example, the source object can be the SFDC Account object; sync the External Identifier and SFDC Account ID fields (upsert on SFDC Account ID).
- Join the two MDA objects (Admin > Data Management > MDA Usage Data object). The join starts from the MDA Usage object and goes to the secondary MDA object, on the External Identifier field of both objects. After the rule runs and the join is complete, you can use both objects/fields in a rule or report. For more information, refer to MDA Joins.
You can load customer data and store it in the Usage MDA object using the Rules Engine. After loading the data to the MDA object, you can use it in the report builder.
- Navigate to Administration > Rules Engine.
- Click + Rule.
- In the Setup Rule screen, select Matrix Data.
- In the Select a source object list, select the MDA object you created earlier. The fields available for the MDA object are populated.
- If you joined the two objects to obtain the SFDC Account ID, click through the join (+ sign) and place the Salesforce Account ID in the Show section, as it is required in the rule to load to the MDA Usage Data object.
- Drag the rest of the needed fields into the Show and Filters sections.
- Click NEXT. The Setup Action screen appears.
- Select Load to Usage from the Action Type list.
- In the Field Mappings section, map the required fields. Account and Date field mappings are required.
- Click + FIELD MAPPING and select Usage Data Aggregation Level Name. A text box appears. You can enter one of these values based on the configuration of your org.
- INSTANCELEVEL OR ACCOUNTLEVEL (loads instance level data or account level data)
- USERLEVEL OR ACCOUNTLEVEL (loads user level data or account level data)
- Click Save and run the rule as per your requirement.