This page provides you with instructions on how to extract data from Amazon RDS and load it into Google BigQuery. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Amazon RDS?
Amazon RDS (Relational Database Service) lets users spin up cloud-based database instances without worrying about infrastructure provisioning, software maintenance, or many of the administrative tasks involved in running a database on premises.
Cloud platforms can scale up or down quickly to meet changing demand. RDS takes advantage of that capability to let users add database instances as needed. It offers automatic backup and recovery for database instances, and can replicate data across multiple availability zones for high availability.
RDS supports six different database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and Microsoft SQL Server.
What is Google BigQuery?
Google BigQuery is a data warehouse that delivers super-fast results from SQL queries, which it accomplishes using a powerful engine dubbed Dremel. With BigQuery, there's no spinning up (and down) clusters of machines as you work with your data. In that sense, BigQuery prioritizes querying over administration: it's fast, and there's little infrastructure to manage, which is why most folks use it.
Getting data out of Amazon RDS
The most common way to get data out of any database is to write SQL SELECT queries. As part of any query you can join tables, specify filters, and sort and limit results.
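For example, here's a minimal sketch of such an extraction script in Python, assuming your RDS instance runs MySQL and you have the mysql-connector-python package installed. The endpoint, credentials, and table and column names below are hypothetical placeholders.

```python
# Sketch: export rows from an RDS MySQL instance to a CSV file.
# Assumes `pip install mysql-connector-python`; the host, credentials,
# and table/column names are hypothetical placeholders.
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # RDS endpoint
    user="report_user",
    password="secret",
    database="sales",
)

# A query that joins tables, filters, sorts, and limits results.
query = """
    SELECT o.id, o.total, c.email, o.created_at
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at >= '2023-01-01'
    ORDER BY o.created_at
    LIMIT 100000
"""

cursor = conn.cursor()
cursor.execute(query)

with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)

cursor.close()
conn.close()
```

The same pattern applies to the other RDS engines; only the driver (psycopg2 for PostgreSQL, for instance) and connection details change.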
Loading data into Google BigQuery
Google Cloud Platform offers a helpful guide for loading data into BigQuery. You can use the bq command-line tool to upload the files to your awaiting datasets, adding the correct schema and data type information along the way. The bq load command is your friend here; you can find the syntax in the bq command-line tool quickstart guide. Iterate through this process as many times as it takes to load all of your tables into BigQuery.
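If you'd rather script the load than run the bq CLI by hand, here's a rough sketch using the google-cloud-bigquery Python client library instead. The project, dataset, table, and schema names are placeholders matching the hypothetical CSV from the extraction sketch above.

```python
# Sketch: load the exported CSV into BigQuery via the Python client
# library (`pip install google-cloud-bigquery`), as an alternative to
# the bq CLI. Project, dataset, table, and schema are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the CSV header row
    schema=[
        bigquery.SchemaField("id", "INTEGER"),
        bigquery.SchemaField("total", "NUMERIC"),
        bigquery.SchemaField("email", "STRING"),
        bigquery.SchemaField("created_at", "TIMESTAMP"),
    ],
)

with open("orders.csv", "rb") as f:
    job = client.load_table_from_file(
        f, "my-project.sales.orders", job_config=job_config
    )

job.result()  # block until the load job finishes
print(f"Loaded {job.output_rows} rows")
```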
Keeping Amazon RDS data up to date
At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.
The key is to build your script in such a way that it can identify incremental updates to your data. You can identify key fields that your script can use to bookmark its progression through the data, and pick up where it left off as it looks for updated records. Timestamp fields such as updated_at or created_at, or an auto-incrementing primary key, work best for this. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in your database, as in the sketch below.
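Here's one rough way to sketch that bookmarking logic, building on the hypothetical MySQL extraction script above. Persisting the bookmark to a local file is an assumption for illustration; a real pipeline might store it in a database or object store instead.

```python
# Sketch: incremental extraction using an updated_at bookmark.
# The bookmark is persisted to a local file between runs, so a
# cron-scheduled run only pulls rows changed since the last run.
# Connection details and column names are hypothetical.
import pathlib
import mysql.connector

BOOKMARK_FILE = pathlib.Path("orders.bookmark")

def read_bookmark():
    # Fall back to a date before any data if no bookmark exists yet.
    if BOOKMARK_FILE.exists():
        return BOOKMARK_FILE.read_text().strip()
    return "1970-01-01 00:00:00"

def extract_new_rows(conn):
    cursor = conn.cursor()
    cursor.execute(
        "SELECT id, total, updated_at FROM orders "
        "WHERE updated_at > %s ORDER BY updated_at",
        (read_bookmark(),),
    )
    rows = cursor.fetchall()
    cursor.close()
    if rows:
        # Save the highest updated_at seen as the new bookmark.
        BOOKMARK_FILE.write_text(str(rows[-1][2]))
    return rows

conn = mysql.connector.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",
    user="report_user", password="secret", database="sales",
)
new_rows = extract_new_rows(conn)  # then load these into BigQuery as above
conn.close()
```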
Other data warehouse options
BigQuery is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, PostgreSQL, Snowflake, or Microsoft Azure SQL Data Warehouse, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To Postgres, To Snowflake, To Panoply, To Azure SQL Data Warehouse, and To S3.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Amazon RDS to Google BigQuery automatically. With just a few clicks, Stitch starts extracting your Amazon RDS data, structuring it in a way that's optimized for analysis, and inserting that data into your Google BigQuery data warehouse.