Installing and Configuring Redash 9 (Docker-Based)

Posted by ShenLiang2025


A detailed guide to installing and configuring Redash 9 with Docker

Install Docker

Uninstall any Docker packages already on the system:

apt-get remove docker docker-engine docker.io

Install Docker with the official convenience script:

curl -sSL https://get.docker.com/ | sh

Install docker-compose

sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

Alternatively:

Find docker-compose-linux-x86_64 on the https://github.com/docker/compose/releases page, download it manually, and place it under /usr/local/bin:

cd /usr/local/bin

mv docker-compose-linux-x86_64 docker-compose

chmod +x docker-compose
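The manual move-and-chmod steps above can be combined into one call to coreutils `install`. This is only a sketch: the default paths are taken from the article, and the two parameters are hypothetical additions so the function can be exercised without touching /usr/local/bin.

```shell
# Sketch: install a downloaded docker-compose release binary onto PATH.
# Defaults match the article's filenames; arguments exist only for illustration.
install_compose() {
  src="${1:-docker-compose-linux-x86_64}"        # the downloaded release binary
  dest="${2:-/usr/local/bin/docker-compose}"     # final location on PATH
  install -m 0755 "$src" "$dest"                 # copy + set executable bit in one step
}
```

Afterwards, `docker-compose --version` should print the installed version.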

Download Redash 9

https://github.com/getredash/redash/tree/release/9.0.x

Download the source code from the release/9.0.x branch, then unpack it:

unzip redash-release-9.0.x.zip

cd redash-release-9.0.x

Initialize the environment

sudo docker-compose up

Prepare dependency packages

Manually download the IBM DB2 (ibm_db) driver packages:

wget https://pypi.tuna.tsinghua.edu.cn/packages/98/cb/f77d9bd5f64246074af364cc30e20e3044c533890f3b67d30e89615c2fc5/ibm_db-3.0.1.tar.gz

wget https://public.dhe.ibm.com/ibmdl/export/pub/software/data/db2/drivers/odbc_cli/linuxx64_odbc_cli.tar.gz

Upload both archives into a packages directory (create it first) inside the Redash 9 source tree, matching the `COPY packages/…` lines used below.
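The staging step above can be sketched as a small function. The directory arguments are hypothetical additions for testing; the defaults reflect the article's layout.

```shell
# Sketch: stage the two downloaded tarballs into the packages/ directory
# that the modified Dockerfile copies from.
stage_packages() {
  downloads="${1:-.}"                    # where the wget downloads landed
  srctree="${2:-redash-release-9.0.x}"   # the unpacked Redash source tree
  mkdir -p "$srctree/packages"
  cp "$downloads/ibm_db-3.0.1.tar.gz" \
     "$downloads/linuxx64_odbc_cli.tar.gz" \
     "$srctree/packages/"
}
```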

Modify the Dockerfile

WORKDIR /app

COPY packages/ibm_db-3.0.1.tar.gz packages/linuxx64_odbc_cli.tar.gz ./

RUN tar -zxvf ibm_db-3.0.1.tar.gz

RUN tar -zxvf linuxx64_odbc_cli.tar.gz -C ibm_db-3.0.1/

WORKDIR /app/ibm_db-3.0.1

RUN python setup.py install

WORKDIR /app

COPY requirements.txt requirements_bundles.txt requirements_dev.txt requirements_all_ds.txt ./

Note: insert the lines above immediately before the existing `COPY requirements.txt …` line in the Dockerfile.

Switch to a pip mirror

Use a pip mirror inside China by appending the parameters -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com to the pip install commands.
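As an alternative to repeating `-i` on every command, the mirror can be made the default through pip's configuration file. This is a sketch using the mirror URL from the article; any PyPI mirror works the same way, and `PIP_CONF_DIR` here is a hypothetical override variable (pip itself reads the file from standard locations such as `~/.pip/pip.conf` or `/etc/pip.conf`).

```shell
# Optional: persist the mirror in pip.conf so pip install needs no extra flags.
conf_dir="${PIP_CONF_DIR:-$HOME/.pip}"   # hypothetical override; ~/.pip is a standard location
mkdir -p "$conf_dir"
cat > "$conf_dir/pip.conf" <<'EOF'
[global]
index-url = http://pypi.douban.com/simple/
trusted-host = pypi.douban.com
EOF
```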

The complete Dockerfile

FROM node:12 as frontend-builder

RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean

# Controls whether to build the frontend assets
ARG skip_frontend_build

WORKDIR /frontend
COPY package.json package-lock.json /frontend/
COPY viz-lib /frontend/viz-lib
RUN if [ "x$skip_frontend_build" = "x" ] ; then npm ci --unsafe-perm; fi

COPY client /frontend/client
COPY webpack.config.js /frontend/
RUN if [ "x$skip_frontend_build" = "x" ] ; then npm run build; else mkdir -p /frontend/client/dist && touch /frontend/client/dist/multi_org.html && touch /frontend/client/dist/index.html; fi

FROM python:3.7-slim

EXPOSE 5000

# Controls whether to install extra dependencies needed for all data sources.
ARG skip_ds_deps
# Controls whether to install dev dependencies.
ARG skip_dev_deps

RUN useradd --create-home redash

# Ubuntu packages
RUN apt-get update && \
  apt-get install -y \
    curl \
    gnupg \
    build-essential \
    pwgen \
    libffi-dev \
    sudo \
    git-core \
    wget \
    # Postgres client
    libpq-dev \
    # ODBC support:
    g++ unixodbc-dev \
    # for SAML
    xmlsec1 \
    # Additional packages required for data sources:
    libssl-dev \
    default-libmysqlclient-dev \
    freetds-dev \
    libsasl2-dev \
    unzip \
    libsasl2-modules-gssapi-mit #&& \
  # MSSQL ODBC Driver:
  #curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
  #curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list && \
  #apt-get update && \
  #ACCEPT_EULA=Y apt-get install -y msodbcsql17 odbcinst1debian2 unixodbc && \
  #apt-get clean && \
  #rm -rf /var/lib/apt/lists/*

#ARG databricks_odbc_driver_url=https://databricks.com/wp-content/uploads/2.6.10.1010-2/SimbaSparkODBC-2.6.10.1010-2-Debian-64bit.zip
#ADD $databricks_odbc_driver_url /tmp/simba_odbc.zip
#RUN unzip /tmp/simba_odbc.zip -d /tmp/ \
  #&& dpkg -i /tmp/SimbaSparkODBC-*/*.deb \
  #&& echo "[Simba]\nDriver = /opt/simba/spark/lib/64/libsparkodbc_sb64.so" >> /etc/odbcinst.ini \
  #&& rm /tmp/simba_odbc.zip \
  #&& rm -rf /tmp/SimbaSparkODBC*

WORKDIR /app

# Disable pip cache and version check
ENV PIP_DISABLE_PIP_VERSION_CHECK=1
ENV PIP_NO_CACHE_DIR=1

# We first copy only the requirements file, to avoid rebuilding on every file
# change.

# Build and install the IBM DB2 driver from the locally staged packages
COPY packages/ibm_db-3.0.1.tar.gz packages/linuxx64_odbc_cli.tar.gz ./
RUN tar -zxvf ibm_db-3.0.1.tar.gz
RUN tar -zxvf linuxx64_odbc_cli.tar.gz -C ibm_db-3.0.1/
WORKDIR /app/ibm_db-3.0.1
RUN python setup.py install

WORKDIR /app
COPY requirements.txt requirements_bundles.txt requirements_dev.txt requirements_all_ds.txt ./
RUN if [ "x$skip_dev_deps" = "x" ] ; then pip install -r requirements.txt -r requirements_dev.txt -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com; else pip install -r requirements.txt -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com; fi
RUN if [ "x$skip_ds_deps" = "x" ] ; then pip install -r requirements_all_ds.txt -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com; else echo "Skipping pip install -r requirements_all_ds.txt" ; fi

COPY . /app
COPY --from=frontend-builder /frontend/client/dist /app/client/dist
RUN chown -R redash /app

USER redash

ENTRYPOINT ["/app/bin/docker-entrypoint"]
CMD ["server"]

Deploy Redash

docker-compose up

Note: if pip reports dependency conflicts while installing the Python packages, remove the version pins/ranges from requirements.txt and requirements_all_ds.txt.
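The pin-stripping step above can be sketched with sed. This is a rough heuristic, not an exact requirements parser: back up the original files first, since loosening pins may pull in incompatible newer releases.

```shell
# Sketch: strip version specifiers (==, >=, <, ~=, != ...) from requirements lines
# so pip resolves versions itself. Heuristic only; review the result by hand.
strip_pins() {
  sed -E 's/[[:space:]]*[<>=!~].*$//' "$@"
}
# e.g.  strip_pins requirements.txt > requirements.unpinned.txt
```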

Initialize the Redash database tables

sudo docker-compose -f docker-compose.yml run --rm server create_db

Start and stop the Redash services

# Once initialization succeeds, the Redash services can be started with the start command

sudo docker-compose start

sudo docker-compose stop
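For convenience, the three docker-compose invocations above can be wrapped in a hypothetical helper. The `COMPOSE` override exists only so the wrapper can be dry-run; by default it calls the same commands the article uses.

```shell
# Hypothetical wrapper around the init/start/stop invocations above.
# Override COMPOSE (e.g. COMPOSE=echo) for a dry run.
redash_ctl() {
  compose="${COMPOSE:-sudo docker-compose}"   # unquoted below on purpose: "sudo docker-compose" is two words
  case "$1" in
    init)  $compose -f docker-compose.yml run --rm server create_db ;;
    start) $compose start ;;
    stop)  $compose stop ;;
    *)     echo "usage: redash_ctl {init|start|stop}" >&2; return 1 ;;
  esac
}
```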

Install the frontend

Install npm

sudo apt install libssl1.0-dev

sudo apt install nodejs-dev

sudo apt install node-gyp

sudo apt install npm

Install sql-formatter

#1 Download the sql-formatter master zip package:

https://codeload.github.com/getredash/sql-formatter/zip/refs/heads/master

#2 Install sql-formatter:

npm i sql-formatter -S

Build the frontend

# (On CentOS; on Ubuntu/Debian use the apt packages installed above instead)
curl --silent --location https://rpm.nodesource.com/setup_6.x | bash -

yum install -y nodejs

npm install -g cnpm --registry=https://registry.npm.taobao.org

cnpm install

cnpm run build

Make sure the redash-release-9.0.x/client/dist directory has content; only then will the system pages load. Alternatively, download files someone else has already built; see:

Link: https://pan.baidu.com/s/1vCkJFxYUdYjSEeQ4ZwH0IQ

Extraction code: u2re
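The "dist directory has content" check above can be scripted. The path parameter is a hypothetical addition for testing; the default matches the source tree used throughout this article.

```shell
# Sanity check before opening the UI: the build must have produced
# non-empty assets under client/dist.
check_dist() {
  dist="${1:-redash-release-9.0.x/client/dist}"
  test -d "$dist" && test -n "$(ls -A "$dist" 2>/dev/null)"
}
```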

Access Redash

Once the services are running, open the Redash web UI in a browser (the server container listens on port 5000, per the EXPOSE 5000 in the Dockerfile) and complete the initial admin setup.
