Installing and trying out Prisma with TiDB

Posted by rongfengliang (荣锋亮)


TiDB used to be deployed with the official Ansible scripts; now there is a docker-compose version, which makes it easy to spin up for testing.

For the complete working configuration, see https://github.com/rongfengliang/prisma-tidb

Install TiDB

  • Clone the repository
git clone https://github.com/pingcap/tidb-docker-compose.git
  • Start the cluster

    Pulling the images is a bit slow, so give it a moment.

cd tidb-docker-compose && docker-compose pull # Get the latest Docker images
docker-compose up -d
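
Once the containers are up, TiDB listens on port 4000 and speaks the MySQL protocol, so a quick way to confirm the cluster is reachable (assuming a MySQL client is installed locally; in this setup root has no password) is:

mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version();"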

Initialize the Prisma project

  • init
prisma init
  • Point the database configuration at TiDB
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.13
    ports:
    - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: 10.6.201.9 # IP address of the host running TiDB
            port: 4000
            user: root
            migrations: true
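
prisma init also generates a prisma.yml next to this docker-compose file; a minimal sketch of what it needs to point at is shown below (the datamodel file name is whatever init generated for you, datamodel.prisma here is only an assumption):

endpoint: http://localhost:4466
datamodel: datamodel.prisma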

Alternatively, run everything through a single docker-compose file.
The complete docker-compose file after the changes:

version: '2.1'
services:
  prisma:
    image: prismagraphql/prisma:1.13
    ports:
    - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: tidb
            port: 4000
            user: root
            migrations: true
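  # PD (Placement Driver) nodes: store cluster metadata and handle scheduling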
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
    restart: on-failure
  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
    restart: on-failure
  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
    restart: on-failure
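  # TiKV nodes: the distributed key-value storage layer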
  tikv0:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data/tikv0
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv1:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data/tikv1
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv2:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data/tikv2
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure

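  # TiDB: the stateless SQL layer, exposes the MySQL protocol on port 4000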
  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
    command:
      - --store=tikv
      - --path=pd0:2379,pd1:2379,pd2:2379
      - --config=/tidb.toml
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
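  # TiSpark: Spark on top of TiKV for analytics (not required for this trial)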
  tispark-master:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-slave0:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    restart: on-failure

  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"

  # monitors
  pushgateway:
    image: prom/pushgateway:v0.3.1
    command:
    - --log.level=error
    restart: on-failure
  prometheus:
    user: root
    image: prom/prometheus:v2.2.1
    command:
      - --log.level=error
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - ./data:/data
    restart: on-failure
  grafana:
    image: grafana/grafana:4.6.3
    environment:
      GF_LOG_LEVEL: error
    ports:
      - "3000:3000"
    restart: on-failure
  dashboard-installer:
    image: pingcap/tidb-dashboard-installer:v2.0.0
    command: ["grafana:3000"]
    volumes:
      - ./config/grafana-datasource.json:/datasource.json:ro
      - ./config/pd-dashboard.json:/pd.json:ro
      - ./config/tikv-dashboard.json:/tikv.json:ro
      - ./config/tidb-dashboard.json:/tidb.json:ro
      - ./config/overview-dashboard.json:/overview.json:ro
    restart: on-failure
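
prisma deploy below pushes the datamodel to the Prisma server, which creates the corresponding tables in TiDB. A minimal datamodel.prisma matching the mutation used later in this post (the User type and its fields are inferred from that mutation, so treat it as a sketch):

type User {
  id: ID! @unique
  name: String!
}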
  • Start
docker-compose up -d
  • deploy
prisma deploy
  • Open the Prisma playground
http://localhost:4466

  • Data operations (a read-back query sketch follows this list)
mutation {
    createUser(data:{
    name:"rongfengliang"
  }){
    id
    name
  }
}

  • TiDB monitoring UI (Grafana, exposed on port 3000 in the compose file above)
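
As a quick check after the mutation, the created records can be read back in the playground (the users query is the one Prisma generates for the User type sketched above):

query {
  users {
    id
    name
  }
}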

References

https://pingcap.com/docs/op-guide/docker-compose/
https://github.com/rongfengliang/prisma-tidb
