Amazon Redshift + Go Fig

Data Stack

Connect your Redshift data warehouse to Go Fig for seamless financial data integration and analysis.

Amazon Redshift is where many enterprise finance, data, and marketing teams land years of historical warehouse data. Go Fig connects to your Redshift cluster (provisioned or Serverless) as a first-class source in the Financial Intelligence Graph, so AI financial analysts like Celeste can query curated marts, fact tables, and materialized views alongside QuickBooks, NetSuite, Salesforce, and HRIS data. The connector uses read-only users, respects workload management (WLM) queue assignments, and supports UNLOAD-based extracts for large backfills so analytical queries don't compete with production ETL. Go Fig handles Redshift's specifics: late-binding views, AZ64 and ZSTD compression metadata, SUPER semi-structured columns, and Redshift Spectrum external tables pointing at S3. For finance teams standardized on AWS, connecting via PrivateLink or VPC peering keeps data traffic inside the AWS backbone and avoids public internet exposure entirely.

Key facts

Deployments
Provisioned, Serverless, Spectrum
Sync mode
Query-based + UNLOAD for large tables
Networking
PrivateLink, VPC peering, IP allowlist
Auth
DB user or IAM via GetClusterCredentials
Concurrency
Runs in a dedicated WLM queue

SOC 2 Type II · All integrations

What you can do with Amazon Redshift data in Go Fig

Mart-layer financial reporting

Point Celeste at curated fact and dim tables in your Redshift mart layer for revenue, cost, and unit-economics reporting without lifting data out of the warehouse.

Cost allocation from event data

Aggregate high-cardinality event or usage tables in Redshift, then join to GL segments and customer dimensions for margin and cost-to-serve analysis.

Board-ready exec reporting

Blend Redshift KPI marts with bookings, billings, and headcount from SaaS systems so CFOs present one coherent number set rather than reconciling across tabs.

Data available from Amazon Redshift

Go Fig extracts and normalizes the following data from your Amazon Redshift account:

Standard tables
Materialized views
Late-binding views
Redshift Spectrum external tables
SUPER semi-structured columns
STL/SVL system tables
Query history
WLM queue metadata
Partitioned fact tables
Custom UNLOAD-based extracts

How to connect Amazon Redshift

1

Establish private connectivity

For production, provision AWS PrivateLink from your Redshift VPC to Go Fig or set up VPC peering. If neither is available, allowlist Go Fig's published egress CIDRs in the cluster security group and enable SSL (require_ssl=true) on the parameter group.

2

Create a scoped read-only user and WLM queue

Run CREATE USER fig_reader PASSWORD DISABLE (for IAM auth) and GRANT USAGE on the schemas plus SELECT on the tables, views, and materialized views Go Fig should read. Assign the user to a dedicated WLM queue with a bounded concurrency slot so analytical refreshes can't saturate the cluster during month-end ETL.
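The DDL for this step can be generated programmatically. A minimal sketch, assuming a hypothetical finance_mart schema and table names; substitute the objects Go Fig should actually read:

```python
# Sketch: generate the scoped read-only DDL described above. The schema and
# table names ("finance_mart", "fct_revenue", "dim_customer") are hypothetical.
READ_ONLY_USER = "fig_reader"

def grant_statements(schemas_to_tables):
    """Build CREATE USER / GRANT statements for a read-only Redshift user."""
    # PASSWORD DISABLE means the user can only authenticate via IAM
    stmts = [f"CREATE USER {READ_ONLY_USER} PASSWORD DISABLE;"]
    for schema, tables in schemas_to_tables.items():
        stmts.append(f"GRANT USAGE ON SCHEMA {schema} TO {READ_ONLY_USER};")
        for table in tables:
            stmts.append(f"GRANT SELECT ON {schema}.{table} TO {READ_ONLY_USER};")
    return stmts

for stmt in grant_statements({"finance_mart": ["fct_revenue", "dim_customer"]}):
    print(stmt)
```

Granting table by table, rather than ALL TABLES IN SCHEMA, keeps the scope auditable and means new tables are not exposed by default.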

3

Register IAM role and connect

Attach a minimal IAM role granting redshift:GetClusterCredentials for fig_reader and (if using Spectrum or UNLOAD) s3:GetObject on the relevant bucket. In Go Fig, paste the cluster endpoint, database, and role ARN. Go Fig introspects pg_catalog and svv_external_tables to enumerate objects.
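A minimal policy for that role can look like the sketch below. The account ID, cluster name, and bucket ARN are placeholder values, not real resources:

```python
import json

# Sketch of a minimal IAM policy for the connector role described above.
# All resource ARNs here are illustrative placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TempDbCredentials",
            "Effect": "Allow",
            "Action": "redshift:GetClusterCredentials",
            # Scope to the fig_reader database user on one cluster only
            "Resource": "arn:aws:redshift:us-east-1:123456789012:dbuser:my-cluster/fig_reader",
        },
        {
            "Sid": "StagingReads",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            # Only needed when Spectrum external tables or UNLOAD staging are in play
            "Resource": "arn:aws:s3:::my-staging-bucket/go-fig/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Scoping the dbuser ARN to fig_reader prevents the role from minting temporary credentials for any other database user.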

4

Choose extract strategy per table

For small dimension and mart tables, Go Fig runs SELECT with incremental filters on a last_updated column. For large fact tables, Go Fig issues an UNLOAD to a staging S3 prefix in Parquet with PARTITION BY a date column, then reads Parquet back. This offloads scan cost from leader/compute nodes and keeps syncs cheap on Serverless.
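The decision logic reads roughly as follows. The row-count threshold, S3 prefix, role ARN, and partition column are illustrative assumptions, not Go Fig defaults:

```python
# Sketch of the per-table extract decision described above. Threshold,
# bucket, IAM role, and partition column are hypothetical placeholders.
UNLOAD_THRESHOLD = 50_000_000  # rows; above this, stage via UNLOAD to S3

def extract_sql(table, row_count, cursor_col="last_updated", since="2024-01-01"):
    if row_count < UNLOAD_THRESHOLD:
        # Small dims/marts: incremental SELECT on a watermark column
        return f"SELECT * FROM {table} WHERE {cursor_col} > '{since}';"
    # Large facts: stage Parquet to S3, partitioned by a date column,
    # so repeated reads hit S3 instead of the cluster
    return (
        f"UNLOAD ('SELECT * FROM {table} WHERE {cursor_col} > ''{since}''') "
        f"TO 's3://my-staging-bucket/go-fig/{table}/' "
        f"IAM_ROLE 'arn:aws:iam::123456789012:role/fig-unload' "
        f"FORMAT AS PARQUET PARTITION BY (event_date);"
    )

print(extract_sql("finance_mart.dim_customer", 200_000))
print(extract_sql("events.fct_usage", 900_000_000))
```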

Authentication: Two supported paths. (1) A dedicated read-only Redshift database user, granted USAGE on the relevant schemas and SELECT on the tables, views, and materialized views Go Fig should access. (2) IAM-based auth via redshift:GetClusterCredentials, which issues short-lived temporary credentials tied to a read-only IAM role. IAM auth is preferred for production since no long-lived passwords are stored. For private clusters, use an AWS PrivateLink endpoint to Go Fig's VPC, VPC peering, or an SSH bastion as a fallback.

Common Questions About Amazon Redshift Integration

Will Go Fig contend with our production ETL on the Redshift cluster?

Not if it's set up correctly. Go Fig runs under a dedicated WLM queue with a concurrency slot cap you control, so analytical reads are bounded. For Redshift Serverless, query capacity scales independently. For very large fact-table syncs, Go Fig uses UNLOAD to Parquet in S3 rather than repeatedly scanning the cluster, which is the same pattern AWS recommends for BI extracts.
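On a provisioned cluster with manual WLM, the bounded queue described above is expressed in the wlm_json_configuration parameter. A sketch, with illustrative slot counts and memory shares; the fig_readers user group is a hypothetical name:

```python
import json

# Sketch of a manual WLM config carving out a bounded queue for the
# connector. Concurrency and memory numbers are illustrative only.
wlm_config = [
    {
        "user_group": ["fig_readers"],  # hypothetical group containing fig_reader
        "query_concurrency": 2,          # hard cap on concurrent connector queries
        "memory_percent_to_use": 10,
    },
    {
        # Default queue for everything else (production ETL, BI tools)
        "query_concurrency": 5,
        "memory_percent_to_use": 90,
    },
]
print(json.dumps(wlm_config))
```

Applied via the cluster parameter group, this guarantees connector reads never hold more than two slots, regardless of refresh volume.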

Does Go Fig support Redshift Spectrum for data sitting in S3?

Yes. Spectrum external schemas and external tables are enumerated the same way as local tables. Go Fig reads through the Spectrum layer by default, or it can read the underlying S3 objects directly via the Amazon S3 connector to bypass Spectrum compute charges on large scans.

How does Go Fig handle the SUPER type and semi-structured data?

SUPER columns are pulled as JSON and Go Fig can either keep them as JSON for downstream parsing or unnest specific paths into first-class columns. PartiQL path expressions you've encoded in views are preserved. Nested arrays can be flattened into separate rows where that maps more cleanly onto a finance question.
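The unnesting step works roughly like this sketch, where a SUPER column has arrived as JSON and a nested array of line items is flattened into one row per item. The payload shape is hypothetical:

```python
import json

# Sketch: flatten a nested array from a SUPER column (pulled as JSON) into
# one row per line item. The invoice payload shape is a made-up example.
super_value = json.loads(
    '{"invoice_id": "INV-19", "lines": ['
    '{"sku": "A1", "amount": 120.0}, {"sku": "B2", "amount": 80.0}]}'
)

def flatten_lines(doc):
    """Unnest the nested 'lines' array into flat rows keyed by the parent id."""
    return [
        {"invoice_id": doc["invoice_id"], "sku": ln["sku"], "amount": ln["amount"]}
        for ln in doc["lines"]
    ]

rows = flatten_lines(super_value)
print(rows)
```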

Can I connect to a private Redshift cluster without exposing it to the internet?

Yes. PrivateLink from your Redshift VPC to Go Fig is the recommended production pattern and keeps all traffic on the AWS backbone. VPC peering and Transit Gateway are also supported. If none of those are available, Go Fig's published egress IPs can be allowlisted in the cluster security group with SSL required.

What about RA3 vs Serverless, and do workload settings matter?

Both are fully supported. On RA3 with managed storage, Go Fig benefits from the decoupled storage model since reads don't compete with ETL compute. On Serverless, base RPU sizing and max capacity directly affect cost for large extracts, so we recommend UNLOAD-based extracts for fact tables and query-based reads for smaller dimension and mart tables.

Ready to connect Amazon Redshift?

See how your Amazon Redshift data looks in Go Fig with a personalized demo.

Book a Demo