
Amazon Redshift Data Types: An In-Depth Guide

Understanding Redshift data types is a crucial first step to unlocking the full power of the data warehouse. Here's a complete guide you can keep on hand.


Data is needed to make any smart business decision today. However, for the data collected to be useful, it needs to be structured and stored in the right format. That's where data types come in.

Data types are critical, especially in the data management stage, because they define how applications interpret values and what kind of operations—logical, mathematical or relational operations—can be performed. Thus every data variable collected should be assigned a data type. Besides operations, data types are also important for data integrity, resource optimization, memory allocation, logic, and storage in general. 

This article will explore data types in Amazon Redshift, one of the most powerful cloud-based data warehousing services. We will discuss why understanding data types is important in Redshift and closely look at their significance.

What is Amazon Redshift?

Amazon Redshift is a fully managed data warehouse cloud service that lets users access and analyze huge volumes of structured and unstructured data. By huge volumes, we mean up to the range of exabytes (10^18 bytes). This data warehouse, developed by Amazon, also supports large data migrations and can connect to standard SQL clients and BI (business intelligence) tools.

One thing that makes Redshift a popular warehouse application for organizations is its fast query and I/O performance for large datasets. Other amazing features of this data warehouse are:

  • It can scale its nodes up and down to meet demand. Hence, it is often described as a petabyte-scale data warehousing tool.
  • Virtually limitless concurrency, so multiple queries can be run against data in Amazon S3 (object storage service) regardless of the data size and complexity.
  • Fully managed; thus, it is cost-effective as there are zero upfront costs.
  • Automatically backs up your data to Amazon S3 for any disaster recovery.
  • It organizes data in columnar data storage, which is ideal for data warehousing and analytics because queries typically aggregate over large datasets.
  • Indexes and materialized views are not required, so less space is used.
  • It has powerful parallel processing and compression techniques.
  • Improved query performance because its columnar data storage and parallel processing reduce the amount of I/O needed to perform queries.

Back to data types: let's discuss why you should care about the data types of your columns in Redshift.

Why is it important to understand Redshift data types?

Understanding Amazon Redshift data types is important for efficient data storage and query performance in data management. 

Here are some reasons why it is important to understand data types in Redshift:

  • Data types determine how data is stored and processed. This impacts the storage space, performance, and query execution time. 
  • They ensure data integrity and consistency.
  • They prevent data corruption, particularly during the ETL process, which can occur when the wrong data types are used.
  • They help integrations between different tools run smoothly and reliably.

Amazon Redshift Data Types

Data types are crucial for data management and analysis. Amazon Redshift supports a wide range of data types.

Numeric types

These data types are used for mathematical and aggregation functions, and they preserve the precision and scale of exact or approximate values. Numeric columns also accept NULL, which represents the absence of a value.

Under the numeric data types are:

  • Integers
  • Decimals
  • Floating-point numbers

Integer: The SMALLINT, INTEGER, and BIGINT data types store whole numbers of various ranges. By whole numbers, we mean numbers without decimal or fractional components.

SMALLINT has the smallest storage size (2 bytes) and ranges from -32768 to +32767. INTEGER (or INT4) uses 4 bytes and ranges from -2147483648 to +2147483647, while BIGINT uses 8 bytes and ranges from -9223372036854775808 to +9223372036854775807.
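
As a quick sketch (the table and column names here are hypothetical), the three integer types might be declared like this:

    -- Hypothetical table illustrating the three integer types
    CREATE TABLE order_counts (
        region_id  SMALLINT,  -- 2 bytes, -32768 to +32767
        order_id   INTEGER,   -- 4 bytes, -2147483648 to +2147483647
        page_views BIGINT     -- 8 bytes, -9223372036854775808 to +9223372036854775807
    );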

Decimal: This type stores exact numeric values in up to 128 bits, with up to 38 digits of precision. To define a decimal column, you specify its precision and scale. Precision is the total number of digits on both sides of the decimal point, while scale is the number of digits after the decimal point. Thus a decimal of 234.67 has a precision of 5 and a scale of 2.
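
For illustration, here is a minimal sketch (the prices table is hypothetical) of a column sized to hold values like 234.67:

    -- DECIMAL(precision, scale): 5 total digits, 2 of them after the decimal point
    CREATE TABLE prices (
        item_id INTEGER,
        amount  DECIMAL(5, 2)  -- holds values such as 234.67
    );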

Floating-point numbers: These are used to store numbers with variable precision. Redshift offers REAL and DOUBLE PRECISION; the difference lies in their precision. REAL has a precision of about 6 digits and can range from 1E-37 to 1E+37, while DOUBLE PRECISION has a precision of about 15 digits.
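
A minimal sketch (the sensor_readings table is hypothetical) showing both floating-point types side by side:

    CREATE TABLE sensor_readings (
        reading_id  BIGINT,
        rough_value REAL,             -- about 6 digits of precision
        fine_value  DOUBLE PRECISION  -- about 15 digits of precision
    );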

Character types

These are your CHAR (character) and VARCHAR (character varying). They can also be called the string data type in Redshift. However, unlike most data management applications, they are defined in terms of bytes, not characters. The Character types store user-generated values or values you want as text—for example, a name or username.

  • CHAR: is a fixed-length string variable with a maximum storage value of 4,096 bytes. It is important to note that a CHAR column can only have single-byte characters; thus, a CHAR(10) column will have a maximum length of 10 bytes. 
  • VARCHAR, however, can store multibyte characters with a maximum of four bytes per character. They have a maximum storage value of 65,535 bytes.

Other character types include BPCHAR (256 bytes), NCHAR (4,096 bytes), and NVARCHAR (65,535 bytes). Depending on the length specification, they are implicitly stored as a CHAR or VARCHAR type. For example, a BPCHAR column is converted to a CHAR(256) column.
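
To make the bytes-versus-characters distinction concrete, here is a minimal sketch (the users table is hypothetical). A multibyte string like 'José' occupies 5 bytes in UTF-8, so it counts as 5 toward a VARCHAR limit even though it is only 4 characters long:

    CREATE TABLE users (
        country_code CHAR(2),     -- fixed length, single-byte characters only
        username     VARCHAR(50)  -- limit is 50 bytes, multibyte characters allowed
    );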

Datetime types

The datetime data types represent calendar dates and times of day. The range, resolution, and storage size vary depending on the data type.

  • DATE stores date data without any time or time zone information. It has a storage size of 4 bytes and a resolution of one day.
  • TIME data types refer to the time independent of a specific date and time zone. It has 8-byte storage and can store up to six digits of precision for fractional seconds. By default, TIME is stored in Coordinated Universal Time (UTC).
  • TIMETZ, unlike the TIME data types, stores time values with a timezone. It has the same storage size and precision value as time.
  • TIMESTAMP stores date and time data without any time zone information. By default, TIMESTAMP values are treated as UTC. It has a storage size of 8 bytes with six digits of precision for fractional seconds.
  • TIMESTAMPTZ is TIMESTAMP with time zone information. It has the same 8-byte storage and six digits of precision for fractional seconds as TIMESTAMP.
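
A minimal sketch (the events table is hypothetical) declaring one column of each datetime type:

    CREATE TABLE events (
        event_date DATE,         -- calendar date only
        event_time TIME,         -- time of day, no time zone (stored in UTC)
        local_time TIMETZ,       -- time of day with time zone
        created_at TIMESTAMP,    -- date and time, no time zone
        created_tz TIMESTAMPTZ   -- date and time with time zone
    );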

Boolean types

This data type stores single-byte logical values. Values are stored as true or false, with t representing true and f representing false; NULL can also be stored. It uses 1 byte of storage. A typical use case is a column recording whether an order was approved or declined, as in the sketch below.
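
A minimal sketch (the orders table is hypothetical) tracking approval with a single BOOLEAN column:

    CREATE TABLE orders (
        order_id INTEGER,
        approved BOOLEAN  -- stored as 't' or 'f'; NULL allowed
    );

    -- A boolean column can be used directly in a filter
    SELECT order_id FROM orders WHERE approved = true;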

Super data types

This schemaless data type encompasses Amazon Redshift's complex types, ARRAY and STRUCT, and is used to store semistructured data and documents as values. Such data doesn't look like regular tabular data; instead, it contains complex entities like arrays, tuples, and nested JSON structures. SUPER also supports null, boolean, string, and numeric values.

A SUPER value in Redshift can hold up to 16 MB of data.
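
Here is a minimal sketch (the customer_profiles table and its JSON content are hypothetical) of storing and querying a SUPER value with JSON_PARSE and PartiQL-style navigation:

    CREATE TABLE customer_profiles (
        customer_id INTEGER,
        profile     SUPER  -- semistructured, JSON-like data
    );

    -- JSON_PARSE turns a JSON string into a SUPER value
    INSERT INTO customer_profiles
    VALUES (1, JSON_PARSE('{"name": "Ana", "tags": ["vip", "beta"]}'));

    -- Navigate into the nested structure with dot and bracket notation
    SELECT profile.name, profile.tags[0]
    FROM customer_profiles;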

VARBYTE type

The VARBYTE data type stores variable-length binary data such as images, videos, and other binary large objects (BLOBs). It is important to remember that binary data can take up a lot of storage space, raising storage costs, so VARBYTE should be used judiciously; very large binary values can also affect query performance. Applying appropriate compression encodings and keeping columns only as wide as needed reduces storage requirements and helps maintain performance.
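
As a rough sketch (the file_blobs table is hypothetical), a VARBYTE column can be sized to the expected payload and loaded from hex strings or text values:

    CREATE TABLE file_blobs (
        file_id INTEGER,
        content VARBYTE(1024)  -- variable-length binary, sized to what you actually need
    );

    -- FROM_HEX converts a hexadecimal string to binary; character strings can also be cast
    INSERT INTO file_blobs VALUES (1, FROM_HEX('6162')), (2, 'abc'::VARBYTE);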

HLLSKETCH type

This data type stores the results of the HyperLogLog algorithm, a probabilistic, memory-efficient way to estimate the number of distinct values in a large dataset.
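
As an illustration (the page_visits table is hypothetical), distinct users per day could be pre-aggregated into sketches and then estimated:

    -- Build one HyperLogLog sketch per day of distinct user_ids
    CREATE TABLE daily_sketches AS
    SELECT visit_date, HLL_CREATE_SKETCH(user_id) AS users_sketch
    FROM page_visits
    GROUP BY visit_date;

    -- Estimate the distinct count from each stored sketch
    SELECT visit_date, HLL_CARDINALITY(users_sketch) AS approx_distinct_users
    FROM daily_sketches;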

Challenges with Redshift Data Types

Though Amazon Redshift allows its users to store and analyze data in various data types, dealing with these data types can be challenging.

Here are some challenges you may encounter while dealing with Redshift data types:

  • Converting data types can be tricky and time-consuming, especially when loading large datasets into Redshift, and it can create data integrity issues. You can mitigate this with explicit casts (see the sketch after this list) or by using a data operations and integration platform.
  • Performance issues can arise when the wrong data type is used.
  • Data types have storage and cost implications. For example, choosing VARCHAR where a fixed-length CHAR would do, or declaring columns far wider than needed, can increase memory use during query processing.
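
As a small sketch of handling conversions explicitly (staging_orders and its columns are hypothetical), both CAST and the :: shorthand work in Redshift:

    -- Convert text columns to the target types before loading them downstream
    SELECT CAST(order_total AS DECIMAL(10, 2)) AS order_total_dec,
           signup_date::TIMESTAMP              AS signup_ts
    FROM staging_orders;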

Key takeaway

Understanding Redshift data types and picking the right data types for your data analytics use case is crucial. However, as with other tools, a mismatch in data types can lead to data loss and corruption during integration. To avoid mismatches when integrating your tools with Redshift, you should try Estuary.

Estuary Flow is a real-time data operations and integration platform with a wide range of connectors. With these connectors, you can sync, load, and build pipelines across your data stack, Amazon Redshift included.
 

Want to try Estuary Flow? Sign up for a free trial in the web app, or contact our developer support team with any questions you have!


