Redshift - Daily exports from AB Tasty to Redshift

This feature is currently available in Early Adoption. Please contact your CSM to enroll in the Early Adopter program.

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift Serverless lets you access and analyze data without all the configuration of a provisioned data warehouse.

The connector

Step 1: Create an AWS account

  1. Create an AWS account.
  2. Go to Amazon Redshift (use the search bar).

Step 2: Create a cluster 

Once you're on AWS Redshift, click “Create cluster” (top right of the screen). More information is available in the AWS documentation.

  1. Enter an identifier of your choice.
  2. Choose your own configuration (AWS can help you if you select “Help me choose”).
  3. In the Data Warehouse configuration section, create an admin user:
    1. Enter a username.
    2. Enter a password (we recommend adding it manually).
  4. Click “Create cluster”.

Once your cluster is created, it is displayed in the AWS Redshift Clusters list.
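If you prefer to script this step rather than use the console, here is a minimal sketch with Python and boto3 (an assumption on our side, since the article only covers the console flow; the identifier, node type, region, and credentials are placeholders):

import boto3

redshift = boto3.client("redshift", region_name="eu-west-3")

# Placeholder values; use the identifier and admin credentials of your choice.
redshift.create_cluster(
    ClusterIdentifier="my-abtasty-cluster",
    ClusterType="single-node",        # or "multi-node" with NumberOfNodes
    NodeType="dc2.large",             # pick the configuration that suits you
    MasterUsername="awsadmin",        # the admin username
    MasterUserPassword="ChangeMe123", # the admin password
)

# The cluster takes a few minutes to become available.
redshift.get_waiter("cluster_available").wait(ClusterIdentifier="my-abtasty-cluster")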

Step 3: Check VPC security group

  1. Click on the new cluster, open the Properties tab, and click the instance listed under VPC security group.
  2. Click the Security group ID.

Inside Inbound rules, check that at least one rule has the following configuration:

  • IP version: IPv4
  • Type: All traffic 
  • Source: 0.0.0.0/0 

If no rule matches, edit or add an inbound rule with this specific configuration.
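The same inbound rule can be added programmatically; a boto3 sketch, where the security group ID is a placeholder and the wide-open 0.0.0.0/0 rule simply mirrors the configuration above:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-3")

# IpProtocol "-1" means all protocols, matching the "All traffic" rule type.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your cluster's security group ID
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)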

Step 4: Edit publicly accessible setting

  1. Go back to your cluster, click Actions, and select Modify publicly accessible setting.
  2. Click “Turn on Publicly accessible” and save your changes.
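This toggle maps to a single API call; a boto3 sketch, with the cluster identifier as a placeholder:

import boto3

redshift = boto3.client("redshift", region_name="eu-west-3")

# Equivalent of "Turn on Publicly accessible" in the console.
redshift.modify_cluster(
    ClusterIdentifier="my-abtasty-cluster",  # placeholder
    PubliclyAccessible=True,
)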

Step 5: Configure the console Query editor v2

  1. Connect to the Data Warehouse

Stay inside your cluster and click Query data → Query in query editor v2.

A new window will open, and a modal will be displayed for the first connection.

  • Choose Data Warehouse user name and password.
  • By default, the name of your Data Warehouse is dev. Don’t edit this input.
  • User name: enter the admin username you registered in Step 2 (the admin user of your cluster), along with its password.
  • Click to connect: the connection is now established.
  2. Create a new user

     Now that the connection is established, you need to create another user (which is not the admin of the cluster).

    Go to the console and copy/paste the script below (this is the username and password you will later enter in the app2 integration hub).

    CREATE USER username PASSWORD 'Userpassword';

    Replace username and Userpassword with the username and password of your choice.

    The username has to be lowercase, and the password must contain at least one digit.

    Then run the script.

  3. Give rights to the new user

Grant the new user all the rights it needs to write data (provided by app2). Copy and paste this script into the console:

GRANT CREATE ON DATABASE dev TO username;
GRANT USAGE, CREATE ON SCHEMA public TO username;
GRANT SELECT ON svv_table_info TO username;

Replace username with the name of your user, then:

  • Select the first line and execute it.
  • Do the same for the second and third lines, one at a time.
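If you'd rather run these statements from a script than from Query editor v2, here is a minimal sketch using Python and psycopg2 (an assumption on our side; the host, credentials, and the abtasty_writer user are placeholders):

import psycopg2

# Connect as the cluster admin created in Step 2 (placeholder values).
conn = psycopg2.connect(
    host="my-abtasty-cluster.xxxxxxxx.eu-west-3.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsadmin",
    password="ChangeMe123",
)
conn.autocommit = True  # run each statement in its own implicit transaction

statements = [
    "CREATE USER abtasty_writer PASSWORD 'Writerpass1';",  # lowercase name, one digit
    "GRANT CREATE ON DATABASE dev TO abtasty_writer;",
    "GRANT USAGE, CREATE ON SCHEMA public TO abtasty_writer;",
    "GRANT SELECT ON svv_table_info TO abtasty_writer;",
]

with conn.cursor() as cur:
    for sql in statements:
        cur.execute(sql)

conn.close()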

Step 6: Create an S3 bucket

Global Configuration

Go to AWS S3 (enter “S3” or “Bucket” in the search bar) and click Create bucket.

  1. Create a bucket using the following configuration:
  • Bucket name: a name of your choice
  • AWS Region: choose the appropriate region
  • ACLs: Disabled
  • Block all public access: Unchecked
  • Bucket versioning: Disabled
  • Encryption type: Server-side encryption with Amazon S3 managed keys
  • Bucket key: Enabled

Click Create bucket.

  2. Now that the bucket is created, click on it and go to the Permissions tab → Edit Block public access.
  3. Uncheck Block all public access.
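A boto3 sketch of the same bucket setup (the bucket name and region are placeholders; the put_public_access_block call mirrors unchecking Block all public access):

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Create the bucket; versioning is disabled and SSE-S3 encryption is on by default.
s3.create_bucket(
    Bucket="my-abtasty-export-bucket",  # placeholder: a name of your choice
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Equivalent of unchecking "Block all public access".
s3.put_public_access_block(
    Bucket="my-abtasty-export-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)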

Step 7: Configuration of bucket Policy

Go back to the bucket → Permissions tab → Edit bucket policy.

Copy and paste the following script:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::NAME BUCKET"
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*Object",
      "Resource": "arn:aws:s3:::NAME BUCKET/*"
    }
  ]
}

Replace NAME BUCKET with the real name of the created bucket.
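The same policy can be applied with boto3 (the bucket name is a placeholder; the policy body is the one above):

import json
import boto3

bucket = "my-abtasty-export-bucket"  # placeholder: the bucket created in Step 6

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*Object",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))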

Step 8: Create an IAM User

Create a new user and user group

We need to add a new IAM user who will have rights on the S3 bucket.

  • Search for IAM via the search bar.

  • Click Users → Create user.
    • Enter a user name and click Next.
  • On the second step, you have to create a new user group before finalizing the creation of the user.
  • User group name: enter the name of the user group you want.

Permissions policies: attach to this new group the policy granting the required rights on the S3 bucket.

As soon as the new user group is created, assign it to the new user.

Click Next and finalize the creation of the new user.
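A boto3 sketch of the user and group creation (the names are placeholders, and since the article does not name the exact permissions policy, AmazonS3FullAccess is shown purely as an example):

import boto3

iam = boto3.client("iam")

# Placeholder names for the group and user.
iam.create_group(GroupName="abtasty-s3-exporters")

# Example policy only: attach whichever policy grants the rights on your S3 bucket.
iam.attach_group_policy(
    GroupName="abtasty-s3-exporters",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

iam.create_user(UserName="abtasty-exporter")
iam.add_user_to_group(GroupName="abtasty-s3-exporters", UserName="abtasty-exporter")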

Step 9: Create an access key

As soon as the user is created, click on it, open the Security credentials tab, and click Create access key.

  • Choose the Third-party service option
  • Add a description
  • Click on create access key

An access key ID and a secret access key are now created.

We recommend saving them as you will need them later.
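Programmatically, this is a single call; a boto3 sketch, reusing the placeholder user name from the previous step:

import boto3

iam = boto3.client("iam")

# Create the access key for the IAM user from Step 8 (placeholder name).
resp = iam.create_access_key(UserName="abtasty-exporter")

# Save both values now: the secret is only returned at creation time.
access_key_id = resp["AccessKey"]["AccessKeyId"]
secret_access_key = resp["AccessKey"]["SecretAccessKey"]
print(access_key_id)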

Step 10: Set up the connector 

Go to the integration page > Data Warehouse > Redshift > Create connector

  1. Enter a name for the connector
  2. Host: Go to AWS Redshift and copy the endpoint URL.

    Paste it in the Host field, deleting all characters after the domain name.

    Example endpoint URL: julien-nied-cluster.cgtw9e5st7qf.eu-west-3.redshift.amazonaws.com:5439/dev

    Enter only: julien-nied-cluster.cgtw9e5st7qf.eu-west-3.redshift.amazonaws.com

  3. Port: 5439 by default (you can find it in the endpoint URL; if yours differs, copy and paste it).
  4. Data Warehouse: “dev” by default (you can find it in the endpoint URL; if yours differs, copy and paste it).
  5. Schema and JDBC parameters URL: leave empty.
  6. Authorization method: choose Username and password.
  7. Username: enter the username you created above (the new user created inside the console, not the cluster admin).
  8. Password: enter the password you created above (for that same user).
  9. Loading Method: choose S3 staging.
  10. S3 Bucket name: the name of the bucket you created.

  11. S3 Bucket path: leave empty.
  12. S3 Bucket region: choose the region where the bucket was created (e.g. eu-west-1).
  13. AWS access key Id: paste the access key ID you created with the new IAM user.
  14. AWS access secret key: paste the secret access key you created with the new IAM user.
  15. Click “Test connection”. If everything works well, validate by clicking “Next step”.

Your connector is now set up, and you can proceed to set up your Export.

You will get an error message if one of the fields contains an error.
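If “Test connection” fails and you want to isolate the problem, you can check the database credentials yourself; a minimal sketch with Python and psycopg2 (an assumption on our side; all values are placeholders taken from the fields above):

import psycopg2

# Same values as the connector form (placeholders).
conn = psycopg2.connect(
    host="julien-nied-cluster.cgtw9e5st7qf.eu-west-3.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="abtasty_writer",   # the non-admin user created in Step 5
    password="Writerpass1",
)

with conn.cursor() as cur:
    cur.execute("SELECT 1;")  # succeeds only if host, port, and credentials are valid
    print(cur.fetchone())

conn.close()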

The export

To set up your daily export, please refer to the guide: Data Warehouse integrations: General information.

Step 1: Generate your payload

Refer to the Data Warehouse article to create your payload.

Step 2: Set up the export

  1. Export name: the name of your export; give it an explicit name to easily retrieve it in AB Tasty.
  2. Name of the table: the name of the table we will create in Redshift.
  3. Data exporter query: paste the payload of your Data Explorer query.
  4. Click Save and create.

The Redshift integration is now complete, and you will soon see the data flowing into your dedicated Data Warehouse (it can take up to 2–3 hours, depending on the size of your report).

The export is activated upon creation, and new data will be appended to the existing data daily.
