| Developer(s) | PingCAP Inc. |
|---|---|
| Initial release | October 15, 2017[1] |
| Stable release | 7.5.0[2] / December 1, 2023 |
| Written in | Go (TiDB), Rust (TiKV) |
| Available in | English, Chinese |
| Type | NewSQL |
| License | Apache License 2.0 |
TiDB (/ˈtaɪdiːbiː/; the "Ti" stands for titanium) is an open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads.[3] It is designed to be MySQL-compatible. It is developed and supported primarily by PingCAP and licensed under Apache 2.0, though it is also available as a paid product. TiDB drew its initial design inspiration from Google's Spanner[4] and F1[5] papers.[6]
Release history
See all TiDB release notes.
- On April 7, 2022, TiDB 6.0 GA was released.
- On April 7, 2021, TiDB 5.0 GA was released.
- On May 28, 2020, TiDB 4.0 GA was released.
- On June 28, 2019, TiDB 3.0 GA was released.
- On April 27, 2018, TiDB 2.0 GA was released.
- On October 16, 2017, TiDB 1.0 GA was released.
Main features
Horizontal scalability
TiDB can expand both SQL processing and storage capacity by adding new nodes. This makes capacity scaling easier and more flexible than in traditional relational databases, which typically scale only vertically.
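The effect of adding a storage node can be sketched as follows. This is a deliberately simplified, hypothetical illustration: node names and the round-robin placement are made up, and TiDB's actual placement decisions are made by its Placement Driver (PD) component, not by this logic.

```python
# Hypothetical sketch: redistributing range-based shards ("Regions")
# when a storage node is added. Names and the round-robin placement
# are illustrative only; TiDB's PD component handles real scheduling.

def rebalance(regions, nodes):
    """Assign regions to nodes as evenly as possible, round-robin."""
    placement = {node: [] for node in nodes}
    for i, region in enumerate(regions):
        placement[nodes[i % len(nodes)]].append(region)
    return placement

regions = [f"region-{i}" for i in range(12)]

three = rebalance(regions, ["tikv-1", "tikv-2", "tikv-3"])
four = rebalance(regions, ["tikv-1", "tikv-2", "tikv-3", "tikv-4"])

print({n: len(r) for n, r in three.items()})  # 4 regions per node
print({n: len(r) for n, r in four.items()})   # 3 regions per node
```

Adding a fourth node lowers the per-node share of data and load without any change to the SQL layer, which is the property the paragraph above describes.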
MySQL compatibility
TiDB presents itself to applications as a MySQL 5.7 server, so users can continue to use existing MySQL client libraries.[7] Because TiDB's SQL processing layer is built from scratch rather than forked from MySQL, compatibility is not 100%, and there are known behavior differences between MySQL and TiDB.[8]
Distributed transactions with strong consistency
TiDB internally shards a table into small range-based chunks that are referred to as "Regions".[9] Each Region defaults to approximately 100 MB in size, and TiDB uses a two-phase commit internally to ensure that regions are maintained in a transactionally consistent way.
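The range-based sharding idea can be sketched with a simple lookup: sorted split keys cut the key space into contiguous Regions, and finding a key's Region is a binary search over the boundaries. The keys below are illustrative; real TiDB Regions hold encoded row keys and are split automatically around a size threshold.

```python
import bisect

# Hypothetical sketch of range-based sharding: sorted split keys divide
# the key space into contiguous "Regions". Keys are illustrative only.

split_keys = ["g", "n", "t"]  # 3 boundaries -> 4 Regions

def region_for(key):
    """Return the index of the Region whose range contains `key`."""
    return bisect.bisect_right(split_keys, key)

print(region_for("apple"))  # 0: keys before "g"
print(region_for("melon"))  # 1: keys in ["g", "n")
print(region_for("zebra"))  # 3: keys from "t" onward
```

A transaction touching keys in several Regions must commit atomically across them, which is why a two-phase commit is needed on top of this layout.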
Cloud native
TiDB is designed to work in the cloud to make deployment, provisioning, operations, and maintenance flexible. The storage layer of TiDB, called TiKV, became a Cloud Native Computing Foundation (CNCF) member project in August 2018, as a Sandbox level project,[10] and became an incubation-level hosted project in May 2019.[11] TiKV graduated from CNCF in September 2020.[12] The architecture of the TiDB platform also allows SQL processing and storage to be scaled independently of each other.
Real-time HTAP
TiDB can support both online transaction processing (OLTP) and online analytical processing (OLAP) workloads. TiDB has two storage engines: TiKV, a rowstore, and TiFlash, a columnstore. Data can be replicated from TiKV to TiFlash in real time to ensure that TiFlash processes the latest data.
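The rowstore/columnstore distinction can be illustrated by laying out the same three records both ways. The dictionaries below are a conceptual sketch, not TiKV's or TiFlash's actual on-disk encodings.

```python
# Illustrative contrast between a rowstore (TiKV-like) and a columnstore
# (TiFlash-like) layout of the same table; not TiDB's actual encodings.

rows = [
    {"id": 1, "name": "a", "amount": 10},
    {"id": 2, "name": "b", "amount": 20},
    {"id": 3, "name": "c", "amount": 30},
]

# Rowstore: one record per key -- efficient point reads/writes (OLTP).
rowstore = {r["id"]: r for r in rows}

# Columnstore: one contiguous array per column -- efficient scans (OLAP).
columnstore = {col: [r[col] for r in rows] for col in rows[0]}

print(rowstore[2])                 # fetch one order by primary key
print(sum(columnstore["amount"]))  # aggregate one column without the rest
```

Keeping both layouts in sync via real-time replication is what lets one cluster serve transactional lookups and analytical aggregations over the same data.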
High availability
TiDB uses the Raft consensus algorithm[13] to ensure that data is highly available and safely replicated throughout storage in Raft groups. In the event of failure, a Raft group will automatically elect a new leader for the failed member, and self-heal the TiDB cluster without any required manual intervention. Failure and self-healing operations are transparent to the applications.
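The availability guarantee reduces to quorum arithmetic: a Raft group of N replicas stays available (and can elect a new leader) as long as a strict majority of replicas is reachable. The sketch below shows only that arithmetic, not a Raft implementation.

```python
# Minimal sketch of the availability property Raft provides: a replica
# group remains available while a strict majority of members is alive.
# This is the quorum arithmetic only, not a Raft implementation.

def has_quorum(alive, total):
    return alive > total // 2

replicas = 3
print(has_quorum(3, replicas))  # True  -- all replicas healthy
print(has_quorum(2, replicas))  # True  -- one failure is tolerated
print(has_quorum(1, replicas))  # False -- group unavailable
```

With the common three-replica configuration, one node can fail without interrupting service; a five-replica group tolerates two failures.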
Deployment methods
Kubernetes with Operator
TiDB can be deployed in a Kubernetes-enabled cloud environment by using TiDB Operator.[14] An Operator is a method of packaging, deploying, and managing a Kubernetes application. It is designed for running stateful workloads and was first introduced by CoreOS in 2016.[15] TiDB Operator[16] was originally developed by PingCAP and open-sourced in August 2018.[17] TiDB Operator can be used to deploy TiDB on a laptop,[18] Google Cloud Platform's Google Kubernetes Engine,[19] and Amazon Web Services' Elastic Container Service for Kubernetes.[20]
TiUP
TiDB 4.0 introduced TiUP, a cluster operation and maintenance tool that helps users quickly install and configure a TiDB cluster with a few commands.[21]
TiDB Ansible
TiDB can also be deployed using an Ansible playbook, though this method is no longer recommended.[22]
Docker
Docker can be used to deploy TiDB in containers across multiple machines, and Docker Compose can be used to deploy TiDB with a single command for testing purposes.[23]
Tools
TiDB has a series of open-source tools built around it to help with data replication and migration for existing MySQL and MariaDB users.
TiDB Data Migration (DM)
TiDB Data Migration (DM) is suited for replicating data from already-sharded MySQL or MariaDB tables to TiDB.[24] A common use case of DM is to connect sharded MySQL or MariaDB tables to TiDB, treating TiDB as a near-real-time replica, and then run analytical workloads directly on the TiDB cluster.
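Logically, shard-merging replication routes rows from several upstream shard tables into one downstream table. The sketch below shows only that data movement; the table names are hypothetical and this is not DM's configuration or mechanism.

```python
# Hypothetical sketch of what shard-merging replication does logically:
# rows from several sharded upstream tables land in one downstream
# table. Table names and rows are illustrative, not DM configuration.

shards = {
    "orders_0": [(1, "alice"), (4, "dave")],
    "orders_1": [(2, "bob")],
    "orders_2": [(3, "carol")],
}

# Downstream: a single merged table, queryable as one unit.
merged = sorted(row for rows in shards.values() for row in rows)
print(merged)  # all four orders, now in one table
```

Once merged, analytical queries can span what used to be separate shards without application-side fan-out.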
Backup & Restore
Backup & Restore (BR) is a distributed backup and restore tool for TiDB cluster data. It offers high backup and restore speeds for large-scale TiDB clusters.[25]
Dumpling
Dumpling is a data export tool that exports data stored in TiDB or MySQL. It lets users make logical full backups or full dumps from TiDB or MySQL.[26]
TiDB Lightning
TiDB Lightning is a tool that supports high-speed full import of a large MySQL dump into a new TiDB cluster, providing a faster experience than executing each SQL statement. It is used to quickly populate an initially empty TiDB cluster with a large amount of data, in order to speed up testing or production migration. The speedup is achieved by parsing SQL statements into key-value pairs and then directly generating Sorted String Table (SST) files for ingestion into RocksDB.[27][28]
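The core idea can be sketched as follows: rather than replaying INSERT statements, rows are encoded as key-value pairs and pre-sorted so the storage engine can ingest them as ready-made sorted files. The key format below is invented for illustration; TiDB's real row-key encoding differs.

```python
# Illustrative sketch of the fast-import idea: encode rows as key-value
# pairs and sort them up front, so they can be written as sorted files
# (SSTs) instead of replayed as SQL. The key format here is made up.

rows = [(3, "carol"), (1, "alice"), (2, "bob")]

kv_pairs = sorted(
    (f"t_orders_r_{row_id:08d}".encode(), name.encode())
    for row_id, name in rows
)

for k, v in kv_pairs:
    print(k, v)
# Keys come out in sorted order, ready for direct SST ingestion.
```

Skipping SQL execution and writing pre-sorted data directly is what gives the import its speed advantage over statement-by-statement replay.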
TiDB Binlog
TiDB Binlog is a tool used to collect the logical changes made to a TiDB cluster. It is used to provide incremental backup and replication, either between two TiDB clusters, or from a TiDB cluster to another downstream platform.[29]
It is similar in functionality to MySQL primary-secondary replication. The main difference is that since TiDB is a distributed database, the binlog generated by each TiDB instance needs to be merged and sorted according to the time of the transaction commit before being consumed downstream.[30]
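The merge-and-sort step described above can be sketched as a k-way merge of per-instance event streams keyed by commit timestamp. The event tuples are illustrative, not TiDB Binlog's actual wire format.

```python
import heapq

# Sketch of merging per-instance binlog streams by transaction commit
# timestamp before downstream consumption. Event tuples of
# (commit_ts, instance, change) are illustrative only.

stream_a = [(100, "tidb-1", "INSERT x"), (130, "tidb-1", "UPDATE y")]
stream_b = [(110, "tidb-2", "DELETE z"), (120, "tidb-2", "INSERT w")]

# Each stream is already sorted locally; heapq.merge produces one
# globally ordered stream for the downstream consumer.
merged = list(heapq.merge(stream_a, stream_b))
print([ts for ts, _, _ in merged])  # [100, 110, 120, 130]
```

Ordering by commit timestamp ensures the downstream replica applies changes in the same logical order in which the distributed cluster committed them.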
References
- ↑ "1.0 GA release notes". GitHub.
- ↑ "Release 7.5.0". December 1, 2023. Retrieved December 19, 2023.
- ↑ Xu, Kevin (October 17, 2018). "How TiDB combines OLTP and OLAP in a distributed database". InfoWorld.
- ↑ "Spanner: Google's Globally-Distributed Database". 2012.
- ↑ "F1: A Distributed SQL Database That Scales". 2013.
- ↑ Hall, Susan (April 17, 2017). "TiDB Brings Distributed Scalability to SQL". The New Stack.
- ↑ Tocker, Morgan (November 14, 2018). "Meet TiDB: An open source NewSQL database". Opensource.com.
- ↑ "Compatibility with MySQL". PingCAP.
- ↑ "TiKV Architecture". TiKV.
- ↑ Evans, Kristen (August 28, 2018). "CNCF to Host TiKV in the Sandbox". Cloud Native Computing Foundation.
- ↑ CNCF (May 21, 2019). "TOC Votes to Move TiKV into CNCF Incubator". Cloud Native Computing Foundation. Retrieved August 19, 2020.
- ↑ TiKV Authors (September 2, 2020). "Celebrating TiKV's CNCF Graduation". TiKV.
- ↑ "The Raft Consensus Algorithm".
- ↑ Jackson, Joab (January 22, 2019). "Database Operators Bring Stateful Workloads to Kubernetes". The New Stack.
- ↑ Philips, Brandon (November 3, 2016). "Introducing Operators: Putting Operational Knowledge into Software". CoreOS.
- ↑ "TiDB Operator GitHub repo". GitHub.
- ↑ "Introducing the Kubernetes Operator for TiDB". InfoWorld. August 16, 2018.
- ↑ "Deploy TiDB to Kubernetes on Your Laptop".
- ↑ "Deploy TiDB, a distributed MySQL compatible database, to Kubernetes on Google Cloud".
- ↑ "Deploy TiDB, a distributed MySQL compatible database, on Kubernetes via AWS EKS". GitHub.
- ↑ Long, Heng (April 19, 2020). "Get a TiDB Cluster Up in Only One Minute". PingCAP. Retrieved August 19, 2020.
- ↑ "Ansible Playbook for TiDB". GitHub.
- ↑ "How to Spin Up an HTAP Database in 5 Minutes With TiDB + TiSpark".
- ↑ "DM GitHub Repo". GitHub.
- ↑ Shen, Taining (April 13, 2020). "How to Back Up and Restore a 10-TB Cluster at 1+ GB/s". PingCAP.
- ↑ "Dumpling Overview". PingCAP.
- ↑ Chan, Kenny (January 30, 2019). "Introducing TiDB Lightning". PingCAP.
- ↑ "TiDB Lightning Overview". PingCAP.
- ↑ "TiDB Binlog Cluster Overview". PingCAP.
- ↑ Wang, Xiang (January 29, 2019). "TiDB-Binlog Architecture Evolution and Implementation Principles". PingCAP.