As of June 20, 2025, AWS has closed access to Amazon Timestream for LiveAnalytics for new customers. While AWS did not provide a specific reason, the move likely reflects a shift in strategic focus and pricing adjustments. This change has prompted many teams to start evaluating the best migration path after the Amazon Timestream restrictions. Existing customer workloads remain unaffected, and AWS has confirmed continued investment in the service’s security, availability, and performance.

With this shift, AWS recommends Amazon Timestream for InfluxDB as the primary alternative due to its similar functionality and seamless transition. For more advanced or customized needs, workloads can also be migrated to Aurora or RDS PostgreSQL, often using Amazon S3 as an intermediate data layer. This article will guide you through each of these migration paths after Amazon Timestream, helping you assess which option best fits your use case.

What is Amazon Timestream for LiveAnalytics?

Amazon Timestream for LiveAnalytics is a fast, scalable, fully managed, purpose-built time series database. It lets you store and analyze trillions of time series data points per day.

But what exactly can it be used for? Let’s look at a real-life scenario. Imagine a company managing a modern office filled with hundreds of sensors measuring temperature, humidity, movement, and energy usage. Timestream for LiveAnalytics could be used to collect and process this sensor data in real time—calculating average temperatures, detecting anomalies in energy consumption (e.g., sudden spikes), or flagging unauthorized movement after hours. It also integrates with AWS services like Amazon QuickSight and AWS Lambda for data visualization and automation.

However, with AWS phasing out access for new users, many organizations are now evaluating their migration path after Amazon Timestream. Whether it’s moving to Amazon Timestream for InfluxDB or shifting workloads to Aurora or PostgreSQL, identifying the right path is key for maintaining continuity and performance.

What are Timestream for InfluxDB, Aurora and RDS Postgres?

Amazon Timestream for InfluxDB is also a managed time series database, built for setting up, operating, and scaling time series workloads with query response times of only a few milliseconds. It helps DevOps teams and application developers run InfluxDB databases on AWS using open-source APIs for real-time time series applications.

Amazon Aurora is a cloud-based, fully managed relational database service. It provides high performance and availability at global scale for well-known database engines such as PostgreSQL and MySQL. The key features of Aurora are high performance and availability, cost-effectiveness, and ease of migration. Typical use cases include customer relationship management (CRM), enterprise resource planning (ERP), building Software-as-a-Service (SaaS) applications, and going serverless.

Lastly, Amazon RDS, which stands for Relational Database Service, is an easy-to-manage relational database service optimized for total cost of ownership. Undifferentiated database management tasks, such as provisioning, configuring, backing up, and patching, are automated by Amazon RDS. With a choice of eight engines and two deployment options, customers can create a new database in minutes and customize it to meet their requirements. The key benefits of RDS are ease of management, choice of engines, high availability, and operational expertise.

Exporting data to S3

Before migrating to Timestream for InfluxDB or Amazon Aurora/RDS PostgreSQL, the data should first be exported to Amazon S3. The reason for doing so is to create a durable intermediate storage layer that forms the basis for subsequent database-specific ingestion. AWS recommends using the Timestream for LiveAnalytics export tool for migrating time series data. The tool uses a time-based chunking strategy, which breaks the data into smaller pieces that can be processed independently.
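The export tool automates this end to end, but to make the chunking idea concrete, here is a minimal Python sketch of a time-chunked export built on Timestream’s UNLOAD statement and boto3. The database, table, and bucket names, as well as the date range, are hypothetical placeholders; treat this as an illustration of the approach rather than a replacement for the AWS tool.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical names - replace with your own database, table, and bucket.
DATABASE = "sensor_db"
TABLE = "office_metrics"
BUCKET = "my-timestream-export"

query_client = boto3.client("timestream-query")

def export_chunk(start: datetime, end: datetime) -> None:
    """Run one UNLOAD query covering a single time window."""
    query = f"""
        UNLOAD (
            SELECT * FROM "{DATABASE}"."{TABLE}"
            WHERE time >= from_iso8601_timestamp('{start.isoformat()}')
              AND time <  from_iso8601_timestamp('{end.isoformat()}')
        )
        TO 's3://{BUCKET}/export/{start:%Y-%m-%d}/'
        WITH (format = 'CSV', compression = 'GZIP')
    """
    # A production migration (like the AWS export tool) would also paginate
    # responses and retry failed chunks; both are omitted here for brevity.
    query_client.query(QueryString=query)

# Export one day at a time so each chunk can be rerun independently on failure.
start = datetime(2025, 6, 1, tzinfo=timezone.utc)
end = datetime(2025, 6, 20, tzinfo=timezone.utc)
while start < end:
    export_chunk(start, start + timedelta(days=1))
    start += timedelta(days=1)
```

Because each day lands under its own S3 prefix, a failed window can simply be re-exported without touching the chunks that already succeeded.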


Moreover, the tool significantly decreases migration risk by automatically retrying failed operations. It also captures migration statistics in a DynamoDB table, tracking metrics such as the configuration used and the number of records exported, along with other data points that help you confirm that your migration is complete.

Migration to Timestream for InfluxDB

AWS developed a Python script to determine the cardinality of your Timestream for LiveAnalytics table. Cardinality in InfluxDB is the “number of unique measurement, tag set, and field key combinations in an InfluxDB bucket.” This script checks whether the cardinality is under 10 million and helps you choose a suitable Timestream for InfluxDB instance type.
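The AWS script handles this for you, but the underlying idea can be sketched with a single query run via boto3. The database, table, and dimension names below are hypothetical and only illustrate the concept; adapt them to your own schema.

```python
import boto3

# Hypothetical database, table, and dimension names used only for illustration.
CARDINALITY_QUERY = """
    SELECT COUNT(*) AS series_cardinality FROM (
        SELECT DISTINCT measure_name, building, floor, sensor_id
        FROM "sensor_db"."office_metrics"
    ) AS unique_series
"""

client = boto3.client("timestream-query")
response = client.query(QueryString=CARDINALITY_QUERY)

# Scalar results come back as strings in the first row of the response.
cardinality = int(response["Rows"][0]["Data"][0]["ScalarValue"])
print(f"Estimated series cardinality: {cardinality}")
if cardinality >= 10_000_000:
    print("Cardinality exceeds 10 million - review the recommended instance sizing.")
```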

Once the data is exported to S3 and the cardinality is calculated, the data can be ingested from Amazon S3 into Timestream for InfluxDB. This is handled in the automation process, which makes use of InfluxDB’s import tools to convert the information into its specialized time series structure.

The migration path after Amazon Timestream consists of four steps:

  1. Data unloading, using the Timestream for LiveAnalytics export tool.
  2. Data transformation, converting the exported Timestream for LiveAnalytics data into InfluxDB line protocol format (see the sketch after this list).
  3. Data ingestion, in which the line protocol dataset is ingested into the Timestream for InfluxDB instance.
  4. Validation, an optional step confirming that every line protocol point has been ingested.
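To make steps 2 and 3 more tangible, here is a simplified Python sketch that turns rows from an exported CSV into line protocol and writes them to a Timestream for InfluxDB instance with the influxdb-client library. The column layout, endpoint, token, organization, and bucket are assumptions for illustration; the AWS migration script handles the full format (escaping, batching, multi-measure records) for you.

```python
import csv
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

def to_line_protocol(row: dict) -> str:
    """Convert one exported CSV row into a line protocol point.

    Assumes a simplified export layout with 'time', 'measure_name', and
    'measure_value::double' columns plus dimension columns; real exports may
    also need escaping of spaces/commas and multi-measure handling.
    """
    tags = ",".join(
        f"{key}={value}" for key, value in row.items()
        if key not in ("time", "measure_name", "measure_value::double") and value
    )
    # Truncate to microsecond precision so datetime can parse it, then convert
    # to the nanosecond epoch timestamp that line protocol expects (UTC assumed).
    parsed = datetime.fromisoformat(row["time"][:26]).replace(tzinfo=timezone.utc)
    timestamp_ns = int(parsed.timestamp() * 1_000_000_000)
    prefix = f'{row["measure_name"]},{tags}' if tags else row["measure_name"]
    return f'{prefix} value={row["measure_value::double"]} {timestamp_ns}'

# Hypothetical endpoint, token, org, and bucket for the target instance.
with InfluxDBClient(url="https://<instance-endpoint>:8086",
                    token="<token>", org="my-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    with open("exported_chunk.csv", newline="") as csv_file:
        lines = [to_line_protocol(row) for row in csv.DictReader(csv_file)]
    write_api.write(bucket="migrated-data", record=lines)
```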

For installation and usage, follow the script’s README on GitHub.

Migration to Aurora/RDS Postgres

Once your data is in S3, it can be ingested into Amazon RDS/Aurora PostgreSQL. AWS recommends using AWS Database Migration Service (DMS), with S3 as the source and PostgreSQL as the target. If DMS is not suitable for your case, AWS has prepared a Python-based utility to migrate CSV data from S3 to PostgreSQL; a minimal sketch of the same idea follows the feature list below.

Some of the script’s key features include:

  • Parallel File Processing: Supports simultaneous ingestion of multiple CSV files using multithreading.
  • Connection Pooling: Efficiently manages database connections.
  • Automatic Column Detection: Extracts column names from CSV headers.
  • Retry Logic: Retries failed operations, with progressively longer wait times after each failure.
  • File Management: Moves finished files to a dedicated folder, so failed ones can be resumed later without starting over.
  • Logging: Keeps detailed logs to help with monitoring and troubleshooting.
  • Error Notifications: Sends SNS notifications in case of failures.
  • Secure Credentials: Stores and retrieves database passwords via AWS Secrets Manager.
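As a rough idea of what such a utility does under the hood, here is a minimal single-file sketch using boto3 and psycopg2: it reads one exported CSV from S3, detects the columns from the header, and loads it with PostgreSQL’s COPY. The bucket, key, table, and connection details are placeholders, and the parallelism, retries, file management, and SNS notifications listed above are intentionally left out.

```python
import csv
import io

import boto3
import psycopg2

# Placeholder names - point these at your own export bucket and target table.
BUCKET = "my-timestream-export"
KEY = "export/2025-06-01/part-0000.csv"
TABLE = "office_metrics"

# Download one exported CSV chunk (assumed uncompressed here) from S3.
s3 = boto3.client("s3")
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode("utf-8")

# Detect column names from the CSV header, quoting them in case the exported
# headers contain special characters.
header = next(csv.reader(io.StringIO(body)))
columns = ", ".join(f'"{col}"' for col in header)

# In practice, fetch credentials from AWS Secrets Manager instead of hard-coding.
conn = psycopg2.connect(host="my-cluster.cluster-xyz.eu-west-1.rds.amazonaws.com",
                        dbname="metrics", user="migrator",
                        password="<from-secrets-manager>")
with conn, conn.cursor() as cur:
    # COPY streams the whole file in a single round trip; the header row is skipped.
    cur.copy_expert(
        f"COPY {TABLE} ({columns}) FROM STDIN WITH (FORMAT csv, HEADER true)",
        io.StringIO(body),
    )
conn.close()
```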

For installation and usage, follow the script’s README on GitHub.

If you’re navigating a broader AWS migration, evaluating your cloud architecture, or need support tailoring the right database strategy, our team at WizzDev can help. We’ve guided startups and enterprise clients through complex IoT data pipelines, cloud integrations, and AWS optimization projects—so you’re not alone in this.