Learning and using the IoT database TDengine: very convenient and extremely fast; an open-source IoT database that supports cluster deployment, partitioning, topics, and stream computing

Posted by freewebsys



Preface


The original link for this article is:
https://blog.csdn.net/freewebsys/article/details/108971807

Reproduction without the author's permission is prohibited.
The author's CSDN page: https://blog.csdn.net/freewebsys
The author's Juejin page: https://juejin.cn/user/585379920479288
The author's Zhihu page: https://www.zhihu.com/people/freewebsystem

1. Background on TDengine


Background: In 2019, TAOS Data, the company founded by Jianhui Tao, announced that it would open-source its time-series database product TDengine, a move that genuinely surprised the industry. How much attention could a time-series database built for IoT, connected vehicles, the industrial internet, and other typical time-series workloads really attract on GitHub? Would opening up the core code give a startup more opportunities, or hold it back? Within three months of going open source the project had gained over ten thousand stars, and the company went on to close three consecutive funding rounds, gradually vindicating TAOS Data's strategy.
https://www.163.com/dy/article/H2GLI1PQ0530UH99.html

Official website:
https://www.taosdata.com/

2. Trying it out with Docker


https://docs.taosdata.com/get-started/
https://hub.docker.com/r/tdengine/tdengine

Start the container:

$ docker run -itd \
   -p 6030-6041:6030-6041 \
   -p 6030-6041:6030-6041/udp \
   tdengine/tdengine

Then run the benchmark command (taosBenchmark) inside the container:


taosBenchmark 
[09/07 09:43:40.210544] INFO: taos client version: 3.0.1.0


         Press enter key to continue or Ctrl-C to stop


[09/07 09:47:29.304734] INFO: create database: <CREATE DATABASE IF NOT EXISTS test precision 'ms';>
[09/07 09:47:31.314766] INFO: stable meters does not exist, will create one
[09/07 09:47:31.316766] INFO: create stable: <CREATE TABLE IF NOT EXISTS test.meters (ts TIMESTAMP,current float,voltage int,phase float) TAGS (groupid int,location binary(16))>
[09/07 09:47:31.324614] INFO: generate stable<meters> columns data with lenOfCols<80> * prepared_rand<10000>
[09/07 09:47:31.334906] INFO: generate stable<meters> tags data with lenOfTags<54> * childTblCount<10000>
[09/07 09:47:31.338802] INFO: start creating 10000 table(s) with 8 thread(s)
[09/07 09:47:31.340590] INFO: thread[0] start creating table from 0 to 1249
[09/07 09:47:31.341547] INFO: thread[1] start creating table from 1250 to 2499
[09/07 09:47:31.342576] INFO: thread[2] start creating table from 2500 to 3749
[09/07 09:47:31.343289] INFO: thread[3] start creating table from 3750 to 4999
[09/07 09:47:31.344594] INFO: thread[4] start creating table from 5000 to 6249
[09/07 09:47:31.345317] INFO: thread[5] start creating table from 6250 to 7499
[09/07 09:47:31.346115] INFO: thread[6] start creating table from 7500 to 8749
[09/07 09:47:31.350073] INFO: thread[7] start creating table from 8750 to 9999
[09/07 09:47:33.166145] INFO: Spent 1.8280 seconds to create 10000 table(s) with 8 thread(s), already exist 0 table(s), actual 10000 table(s) pre created, 0 table(s) will be auto created


         Press enter key to continue or Ctrl-C to stop


[09/07 09:48:25.825688] INFO: record per request (30000) is larger than insert rows (10000) in progressive mode, which will be set to 10000
[09/07 09:48:25.840859] INFO: Estimate memory usage: 11.74MB


         Press enter key to continue or Ctrl-C to stop


[09/07 09:48:35.176099] INFO: thread[0] start progressive inserting into table from 0 to 1249
[09/07 09:48:35.176250] INFO: thread[1] start progressive inserting into table from 1250 to 2499
[09/07 09:48:35.176307] INFO: thread[2] start progressive inserting into table from 2500 to 3749
[09/07 09:48:35.176597] INFO: thread[4] start progressive inserting into table from 5000 to 6249
[09/07 09:48:35.176876] INFO: thread[6] start progressive inserting into table from 7500 to 8749
[09/07 09:48:35.176939] INFO: thread[7] start progressive inserting into table from 8750 to 9999
[09/07 09:48:35.178943] INFO: thread[5] start progressive inserting into table from 6250 to 7499
[09/07 09:48:35.182180] INFO: thread[3] start progressive inserting into table from 3750 to 4999

[09/07 09:49:05.349983] INFO: thread[4] has currently inserted rows: 4370000
[09/07 09:49:05.360251] INFO: thread[6] has currently inserted rows: 4500000
[09/07 09:49:05.364481] INFO: thread[0] has currently inserted rows: 4620000
[09/07 09:49:05.370841] INFO: thread[5] has currently inserted rows: 4380000
[09/07 09:49:05.378918] INFO: thread[7] has currently inserted rows: 4510000
[09/07 09:49:05.393132] INFO: thread[2] has currently inserted rows: 4440000
[09/07 09:49:05.400107] INFO: thread[3] has currently inserted rows: 4360000
[09/07 09:49:05.401223] INFO: thread[1] has currently inserted rows: 4470000
[09/07 09:49:35.434374] INFO: thread[1] has currently inserted rows: 8970000
[09/07 09:49:35.440311] INFO: thread[7] has currently inserted rows: 8900000
[09/07 09:49:35.445450] INFO: thread[4] has currently inserted rows: 8880000
[09/07 09:49:35.452617] INFO: thread[5] has currently inserted rows: 8890000
[09/07 09:49:35.452651] INFO: thread[3] has currently inserted rows: 8690000
[09/07 09:49:35.456406] INFO: thread[0] has currently inserted rows: 9030000
[09/07 09:49:35.467113] INFO: thread[6] has currently inserted rows: 8810000
[09/07 09:49:35.493582] INFO: thread[2] has currently inserted rows: 8710000
[09/07 09:49:59.716787] INFO: thread[6] completed total inserted rows: 12500000, 151074.78 records/second
[09/07 09:50:00.189079] INFO: thread[0] completed total inserted rows: 12500000, 150185.50 records/second
[09/07 09:50:00.213614] INFO: thread[1] completed total inserted rows: 12500000, 150167.37 records/second
[09/07 09:50:00.676656] INFO: thread[5] completed total inserted rows: 12500000, 149337.44 records/second
[09/07 09:50:00.883392] INFO: thread[4] completed total inserted rows: 12500000, 148994.40 records/second
[09/07 09:50:01.317899] INFO: thread[3] completed total inserted rows: 12500000, 148216.44 records/second
[09/07 09:50:01.371287] INFO: thread[7] completed total inserted rows: 12500000, 148093.75 records/second
[09/07 09:50:01.409091] INFO: thread[2] completed total inserted rows: 12500000, 148080.71 records/second
[09/07 09:50:01.414528] INFO: Spent 86.232537 seconds to insert rows: 100000000 with 8 thread(s) into test 1159655.08 records/second
[09/07 09:50:01.414588] INFO: insert delay, min: 1.79ms, avg: 67.00ms, p90: 300.71ms, p95: 322.98ms, p99: 343.11ms, max: 419.40ms

MacBook configuration:
CPU: 2.6 GHz 6-core Intel Core i7
Memory: 16 GB 2667 MHz DDR4

The benchmark script has two main steps, each started by pressing Enter.
1) First it creates the tables:
1.8280 seconds to create 10000 table(s) with 8 thread(s)
2) Then it inserts 100 million rows:
Spent 86.232537 seconds to insert rows: 100,000,000 with 8 thread(s) into test 1159655.08 records/second

That is impressively fast.
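
The data model taosBenchmark uses is TDengine's super table pattern: one super table (test.meters) defines the columns and the tags, and each simulated device gets its own child table (d0 ... d9999) created from it. A minimal sketch of the same structure in plain SQL (the child-table name d1001 and the tag values here are made up for illustration):

-- super table: shared columns plus per-device tags
CREATE DATABASE IF NOT EXISTS test;
CREATE STABLE IF NOT EXISTS test.meters
    (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
    TAGS (groupid INT, location BINARY(16));

-- child table: one per device, created from the super table
CREATE TABLE IF NOT EXISTS test.d1001
    USING test.meters TAGS (1, 'California.SF');

-- data rows are written into the child table
INSERT INTO test.d1001 VALUES (NOW, 10.2, 219, 0.32);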

The commands are very similar to MySQL's, and you can use the interactive taos CLI:


taos
Welcome to the TDengine Command Line Interface, Client Version:3.0.1.0
Copyright (c) 2022 by TDengine, all rights reserved.

Server is Community Edition.

taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 test                           |
Query OK, 3 rows in database (0.008884s)

taos> use test;
Database changed.

taos> show tables;
           table_name           |
=================================
 d1250                          |
 d2500                          |
 d1251                          |
 d2501                          |
 d7500                          |
 ...

Query OK, 10000 rows in database (0.069536s)

taos> select * from d1;
           ts            |       current        |   voltage   |        phase         |
======================================================================================
 2017-07-14 02:40:00.000 |             10.00000 |         110 |              0.32222 |
 2017-07-14 02:40:00.001 |              9.84000 |         114 |              0.32222 |
 2017-07-14 02:40:00.002 |             10.12000 |         116 |              0.33056 |
 2017-07-14 02:40:00.003 |             10.04000 |         114 |              0.34167 |
 2017-07-14 02:40:00.004 |              9.96000 |         112 |              0.33056 |
...

Query OK, 10000 rows in database (0.012816s)
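
Queries can also aggregate directly over the super table, which covers all 10,000 child tables at once. A small sketch of typical time-series queries against the benchmark data (the 1-minute window size is arbitrary; _wstart is TDengine 3.0's window-start pseudo column):

-- total rows and average voltage across every meter
SELECT COUNT(*), AVG(voltage) FROM test.meters;

-- downsample into 1-minute windows
SELECT _wstart, AVG(current), MAX(voltage)
FROM test.meters
INTERVAL(1m);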

Inserting via JDBC is also supported:
https://docs.taosdata.com/develop/insert-data/sql-writing/

package com.taos.example;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.List;


public class RestInsertExample {
    private static Connection getConnection() throws SQLException {
        String jdbcUrl = "jdbc:TAOS-RS://localhost:6041?user=root&password=taosdata";
        return DriverManager.getConnection(jdbcUrl);
    }

    private static List<String> getRawData() {
        return Arrays.asList(
                "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,'California.SanFrancisco',2",
                "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,'California.SanFrancisco',2",
                "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,'California.SanFrancisco',2",
                "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,'California.SanFrancisco',3",
                "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,'California.LosAngeles',2",
                "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,'California.LosAngeles',2",
                "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,'California.LosAngeles',3",
                "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,'California.LosAngeles',3"
        );
    }

    /**
     * Builds one multi-table INSERT statement: for every raw line it appends a
     * "child_table USING power.meters TAGS(...) VALUES(...)" clause.
     */
    private static String getSQL() {
        StringBuilder sb = new StringBuilder("INSERT INTO ");
        for (String line : getRawData()) {
            String[] ps = line.split(",");
            sb.append("power." + ps[0]).append(" USING power.meters TAGS(")
                    .append(ps[5]).append(", ") // tag: location
                    .append(ps[6]) // tag: groupId
                    .append(") VALUES(")
                    .append('\'').append(ps[1]).append('\'').append(",") // ts
                    .append(ps[2]).append(",") // current
                    .append(ps[3]).append(",") // voltage
                    .append(ps[4]).append(") "); // phase
        }
        return sb.toString();
    }

    public static void insertData() throws SQLException {
        try (Connection conn = getConnection()) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE DATABASE power KEEP 3650");
                stmt.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) " +
                        "TAGS (location BINARY(64), groupId INT)");
                String sql = getSQL();
                int rowCount = stmt.executeUpdate(sql);
                System.out.println("rowCount=" + rowCount); // rowCount=8
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        insertData();
    }
}

Topics and stream computing are also supported:
https://docs.taosdata.com/taos-sql/tmq/

CREATE TOPIC [IF NOT EXISTS] topic_name AS subquery;

SHOW TOPICS;
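
Roughly, a topic exposes the result set of a query to subscribers over TMQ, while a stream continuously computes windowed results into a new table. A minimal sketch following the CREATE TOPIC / CREATE STREAM syntax in the TDengine 3.0 docs (the names topic_meters, avg_current_stream, and test.avg_current are made up for illustration):

-- topic: subscribers receive new rows of test.meters as they arrive
CREATE TOPIC IF NOT EXISTS topic_meters AS
    SELECT ts, current, voltage, phase FROM test.meters;

-- stream: continuously downsample current into 30-second windows
CREATE STREAM IF NOT EXISTS avg_current_stream
    INTO test.avg_current AS
    SELECT _wstart, AVG(current) AS avg_current
    FROM test.meters INTERVAL(30s);

SHOW STREAMS;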

3. Summary


TDengine really is a nice database. IoT data has a simple structure but comes in huge volumes, and TDengine is heavily optimized around exactly these characteristics, so it handles massive amounts of data well.
There is also a company behind it providing commercial technical support, which is a big plus.

The original link for this article is:
https://blog.csdn.net/freewebsys/article/details/108971807
