- Workstation preparation - Prepare the workstation to access TiDB Cloud
- TiDB on EKS - TiDB deployment on EKS (Cluster/Monitoring/Dashboard/Online Diag)
- TiDB installation on AWS - Deploy a TiDB cluster on AWS using TiUP
- TiDB Cloud CLI (to be deprecated) - TiDB Cloud CLI implemented on top of the REST API
- TiDB on AKS (Terraform) - TiDB deployment on AKS (Cluster/Monitoring/Dashboard/Scaling) with Terraform
- TiDB on AKS (BYOC) - TiDB deployment on AKS in the customer's account
- Kafka deployment - Self-managed (OP) Kafka deployment on AWS using a one-command script
- TiDB on EKS to TiDB Cloud - Data migration from TiDB on EKS to TiDB Cloud
- TiDB to TiDB full copy - Data copy from TiDB Cloud to TiDB
- TiDB to Oracle (MSK/Glue) - Data replication from TiDB to Oracle through TiCDC, MSK (Confluent JDBC sink), and Glue Schema Registry
- TiDB to Oracle (S3) - Data replication from TiDB to Oracle through TiCDC and S3 (CSV files)
- TiDB to Postgres - Data replication from TiDB to PostgreSQL through TiCDC and Kafka (Debezium source and Confluent JDBC sink)
- TiDB to Elasticsearch - Data replication from TiDB to Elasticsearch through TiCDC and self-managed (OP) Kafka (Debezium sink)
- TiDB to Elasticsearch (MSK/Glue) - Data replication from TiDB to Elasticsearch through TiCDC, MSK (Debezium sink), and Glue Schema Registry
- TiDB to Redshift - Data replication from TiDB to Redshift through TiCDC and Kafka (Confluent Redshift sink)
- TiDB to Redshift (MSK/Glue) - Data replication from TiDB to Redshift through TiCDC, MSK (Confluent Redshift sink), and Glue Schema Registry
- TiDB to Aurora full copy - Data copy from TiDB Cloud to Aurora
- TiDB data transfer to S3 with Glue ETL - Use Glue ETL to transfer the data from TiDB to AWS S3.
- Mongo to TiDB - Data replication from MongoDB to TiDB through Kafka (Debezium MongoDB source and customized JDBC sink)
- Aurora to TiDB Cloud - Data replication from Aurora to TiDB through DM
- Aurora to TiDB Cloud with TiDB Cloud API - Data replication from Aurora to TiDB through DM with the TiDB Cloud API, which makes the process much easier
- SQL Server to TiDB Cloud with Kafka - Data migration from SQL Server to TiDB Cloud with Kafka
- MySQL to TiDB Cloud - Data migration from MySQL on EC2 to TiDB Cloud using DM on cloud
- Postgres to TiDB Cloud with Confluent - Data replication from PostgreSQL to TiDB Cloud with Confluent Kafka
- Postgres to TiDB Cloud with OP Kafka - Data replication from PostgreSQL to TiDB Cloud with self-managed (OP) Kafka
- RU estimation - Estimate RUs (Request Units) for different operations
- Count performance tuning - Performance tuning for table count queries
- Low latency during heavy batch - TiKV node isolation between batch and online transactions
- Resource control to improve latency during heavy batch - Resource control between batch and online transactions to reduce the impact on online traffic [batch: insert/select, Dumpling]
- Multiple access points - Create multiple access points via the Open API
- Read-only mode test - Test read-only mode
- Titan performance test - Test Titan storage engine performance
- OSS Insight on AWS - Query performance test using the OSS Insight project's tables (https://ossinsight.io/)
- Audit log - Audit log verification in the OP environment
- transferdb DDL conversion - Document on schema (DDL) conversion from Oracle to MySQL; please refer to transferdb
- uuid generator - Use Baidu's uid-generator to generate sequence IDs (https://github.com/baidu/uid-generator.git)
- PowerDNS on TiDB - Deploy PowerDNS on TiDB
- Prometheus to New Relic metrics (to be deprecated) - Convert Prometheus metrics to New Relic
- Prometheus Datadog integration - Integrate Prometheus with Datadog
- Thanos for Prometheus HA - Thanos deployment
- JDBC connection failure - Connection failure (Connector/J) to TiDB Cloud
- Encoding comparison between binary and utf8mb4 - Compare the behavior of the binary and utf8mb4 character sets
- Data access from Redshift to TiDB Cloud - Redshift accesses TiDB Cloud data through federated query
- Performance test with JMeter - Use JMeter to run performance tests against TiDB Cloud
- mockdata (TiDB) - Generate mock data for TiDB for performance and compatibility tests
- mockdata (Oracle) - Generate Oracle data for data replication; one customer needed to verify migration of hundreds of tables
- mask data (TiDB) - Mask data according to configured rules
- JDBC connection pool automatic recovery - Verify JDBC connection pool automatic recovery during TiDB scale-in
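
Most of the TiDB-to-downstream replication entries above share the same first step: creating a TiCDC changefeed with a Kafka sink. A minimal sketch of that step, assuming a running TiCDC server; the host addresses, topic name, Kafka version, and changefeed ID below are illustrative placeholders, not values from any of the demos:

```shell
# Create a changefeed that streams TiDB changes into a Kafka topic.
# --server points at a running TiCDC server; adapt broker address,
# topic, protocol, and kafka-version to your environment.
cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --sink-uri="kafka://127.0.0.1:9092/tidb-changes?protocol=canal-json&kafka-version=2.4.0" \
  --changefeed-id="tidb-to-kafka-demo"

# Check the replication status of the new changefeed.
cdc cli changefeed query \
  --server=http://127.0.0.1:8300 \
  --changefeed-id="tidb-to-kafka-demo"
```

From there, the individual demos differ mainly in what consumes the topic: a Confluent JDBC sink for Oracle/Postgres/Redshift, or a Debezium-format consumer for Elasticsearch.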
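
The uid-generator entry above is a Snowflake-variant ID generator: a 64-bit ID packed from a delta-seconds timestamp, a worker ID, and an in-second sequence (the library's documented defaults are 28, 22, and 13 bits respectively). A minimal Python sketch of that bit layout; the epoch and clock handling here are simplified assumptions, not the library's actual implementation:

```python
import time

# Bit widths following uid-generator's documented defaults
# (28-bit delta-seconds, 22-bit worker ID, 13-bit sequence).
TIME_BITS, WORKER_BITS, SEQ_BITS = 28, 22, 13
EPOCH = 1_600_000_000  # illustrative custom epoch, in seconds


class UidGenerator:
    def __init__(self, worker_id: int):
        assert 0 <= worker_id < (1 << WORKER_BITS)
        self.worker_id = worker_id
        self.last_second = -1
        self.sequence = 0

    def next_id(self) -> int:
        second = int(time.time()) - EPOCH
        if second == self.last_second:
            self.sequence = (self.sequence + 1) & ((1 << SEQ_BITS) - 1)
            if self.sequence == 0:
                # Sequence exhausted within this second: spin until the next one.
                while int(time.time()) - EPOCH == second:
                    time.sleep(0.001)
                second = int(time.time()) - EPOCH
        else:
            self.sequence = 0
        self.last_second = second
        # Pack: | delta-seconds | worker ID | sequence |
        return (second << (WORKER_BITS + SEQ_BITS)) \
            | (self.worker_id << SEQ_BITS) \
            | self.sequence


gen = UidGenerator(worker_id=1)
ids = [gen.next_id() for _ in range(5)]
assert ids == sorted(ids) and len(set(ids)) == 5  # strictly increasing, unique
```

IDs from a single generator are strictly increasing; uniqueness across machines comes from assigning each instance a distinct worker ID, which the real library does via a database table.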
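
The binary vs utf8mb4 comparison entry above hinges on one difference: a binary column compares raw bytes, while a utf8mb4 column under a case-insensitive collation (e.g. utf8mb4_general_ci) compares normalized characters. A rough Python analogy — byte equality vs casefolded equality; `casefold()` is a simplification, not MySQL's actual collation weighting:

```python
def binary_equal(a: str, b: str) -> bool:
    # binary: compare the raw UTF-8 encoded bytes
    return a.encode("utf-8") == b.encode("utf-8")


def ci_equal(a: str, b: str) -> bool:
    # rough analogy for a case-insensitive utf8mb4 collation
    return a.casefold() == b.casefold()


print(binary_equal("TiDB", "tidb"))  # False: byte values differ
print(ci_equal("TiDB", "tidb"))      # True: case-insensitive match
```

The practical consequence is that `WHERE name = 'tidb'` can match `'TiDB'` in a utf8mb4_general_ci column but never in a binary one, and index lookups and uniqueness constraints follow the same rule.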