
Quickly Start a Dev Environment for MySQL, PostgreSQL, MongoDB, Redis, and Kafka Using Docker Compose

Oct 28, 2024, 07:40 AM


This guide shows how to quickly set up development environments for MySQL, PostgreSQL, MongoDB, Redis, and Kafka using Docker Compose and Bitnami images, including the environment variables and a UI tool for each database. We'll walk through the process step by step:

Why use Bitnami images?

  1. Pre-configured and optimized: Bitnami images are pre-configured following best practices, making them easier to set up and optimize for common use cases.

  2. Security: Bitnami regularly updates its images to patch vulnerabilities, making them a more secure option than some community-maintained images that may be updated less frequently.

  3. Consistency across environments: Bitnami ensures its images behave consistently across different environments, making them a good choice for testing, development, and production setups.

  4. Ease of use: They usually include scripts and sensible defaults that simplify deployment, reducing the need for manual configuration and setup.

  5. Documentation and support: Bitnami provides detailed documentation and, through its parent company VMware, sometimes commercial support, which is valuable for troubleshooting and enterprise use.

One more important note concerns licensing, which can vary, but Bitnami software is generally free to use; its containers and packages are based on open-source software under licenses such as MIT, Apache 2.0, or GPL... read more about open-source licenses.

Step 1: Install Docker and Docker Compose

  1. Install Docker: follow the instructions for your operating system in the official Docker documentation.
  2. Install Docker Compose: follow the instructions in the Docker Compose installation guide.
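
You can verify both installations before moving on (version output will differ; depending on how Compose was installed, the command may be docker-compose instead of docker compose):

docker --version
docker compose version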

Step 2: Project structure

Create the following project structure:

dev-environment/
├── components   # for mounting container volumes
├── scripts/
│   ├── pgadmin
│   │   ├── servers.json  # lets pgAdmin automatically load the PostgreSQL server
│   ├── create-topics.sh  # for creating kafka topics
│   ├── mongo-init.sh     # init script for mongodb
│   ├── mysql-init.sql    # init script for mysql
│   ├── postgres-init.sql # init script for postgres
├── .env
├── docker-compose.yml
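
One way to scaffold this layout from a shell (a sketch; the file contents are filled in over the next steps):

mkdir -p dev-environment/components dev-environment/scripts/pgadmin
cd dev-environment
touch .env docker-compose.yml scripts/create-topics.sh scripts/mongo-init.sh scripts/mysql-init.sql scripts/postgres-init.sql scripts/pgadmin/servers.json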

Step 3: The .env file

Create a .env file with the following content:

# MySQL Configuration
MYSQL_PORT=23306
MYSQL_USERNAME=dev-user
MYSQL_PASSWORD=dev-password
MYSQL_DATABASE=dev_database

# PostgreSQL Configuration
POSTGRES_PORT=25432
POSTGRES_USERNAME=dev-user
POSTGRES_PASSWORD=dev-password
POSTGRES_DATABASE=dev_database

# MongoDB Configuration
MONGO_PORT=27017
MONGO_USERNAME=dev-user
MONGO_PASSWORD=dev-password
MONGO_DATABASE=dev_database

# Redis Configuration
REDIS_PORT=26379
REDIS_PASSWORD=dev-password

# Kafka Configuration
KAFKA_PORT=29092
KAFKA_USERNAME=dev-user
KAFKA_PASSWORD=dev-password

# UI Tools Configuration
PHPMYADMIN_PORT=280
PGADMIN_PORT=281
MONGOEXPRESS_PORT=28081
REDIS_COMMANDER_PORT=28082
KAFKA_UI_PORT=28080

# Data Directory for Volumes
DATA_DIR=./
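
Docker Compose automatically reads a .env file that sits next to docker-compose.yml and substitutes these values into the ${...} placeholders used in the next step, so no extra flags are needed.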

Step 4: The docker-compose.yml file

Create the docker-compose.yml file:

version: '3.8'
services:
  dev-mysql:
    image: bitnami/mysql:latest
    # This container_name can be used for internal connections between containers (running on the same docker virtual network)
    container_name: dev-mysql
    ports:
      # This mapping means that requests sent to the ${MYSQL_PORT} on the host machine will be forwarded to port 3306 in the dev-mysql container. This setup allows users to access the MySQL database from outside the container, such as from a local machine or another service.
      - '${MYSQL_PORT}:3306'
    environment:
      # Setup environment variables for container
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    volumes:
      # Syncs MySQL data from inside the container to the host machine, to keep it across container restarts
      - '${DATA_DIR}/components/mysql/data:/bitnami/mysql/data'
      # Add custom script to init db
      - './scripts/mysql-init.sql:/docker-entrypoint-initdb.d/init.sql'

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: dev-phpmyadmin
    # The depends_on option in Docker specifies that a container should be started only after the specified dependent container (e.g., dev-mysql) has been started (but not ensuring that it is ready)
    depends_on:
      - dev-mysql
    ports:
      - '${PHPMYADMIN_PORT}:80'
    environment:
      - PMA_HOST=dev-mysql
      # use internal port for internal connections, not exposed port ${MYSQL_PORT}
      - PMA_PORT=3306
      - PMA_USER=${MYSQL_USERNAME}
      - PMA_PASSWORD=${MYSQL_PASSWORD}

  #=======

  dev-postgresql:
    image: bitnami/postgresql:latest
    container_name: dev-postgresql
    ports:
      - '${POSTGRES_PORT}:5432'
    environment:
      - POSTGRESQL_USERNAME=${POSTGRES_USERNAME}
      - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRESQL_DATABASE=${POSTGRES_DATABASE}
    volumes:
      # This setup will ensure that PostgreSQL data from inside container is synced to host machine, enabling persistence across container restarts.
      - '${DATA_DIR}/components/postgresql/data:/bitnami/postgresql/data'
      # Most relational databases support a special docker-entrypoint-initdb.d folder. This folder is used to initialise the database automatically when the container is first created.
      # We can put .sql or .sh scripts there and they run automatically when the database is first initialised; here ./scripts/postgres-init.sql from the host machine is mounted into the container and executed on first startup
      - ./scripts/postgres-init.sql:/docker-entrypoint-initdb.d/init.sql:ro

  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: dev-pgadmin
    depends_on:
      - dev-postgresql
    ports:
      - '${PGADMIN_PORT}:80'
    # user: root used to ensure that the container has full administrative privileges,
    # necessary when performing actions that require elevated permissions, such as mounting volumes (properly read or write to the mounted volumes), executing certain entrypoint commands, or accessing specific directories from host machine
    user: root
    environment:
      # PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD - Sets the default credentials for the pgAdmin user
      - PGADMIN_DEFAULT_EMAIL=admin@dev.com
      - PGADMIN_DEFAULT_PASSWORD=${POSTGRES_PASSWORD}
      # PGADMIN_CONFIG_SERVER_MODE - determines whether pgAdmin runs in server mode (multi-user) or desktop mode (single-user). We’re setting it to false, so we won’t be prompted for login credentials
      - PGADMIN_CONFIG_SERVER_MODE=False
      # PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED - controls whether a master password is required to access saved server definitions and other sensitive information
      - PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED=False
    volumes:
      # This setup will ensure that PGAdmin data from inside container is synced to host machine, enabling persistence across container restarts.
      - '${DATA_DIR}/components/pgadmin:/var/lib/pgadmin'
      # This setup to make PGAdmin automatically detect and connect to PostgreSQL when it starts (following the config being set in servers.json)
      - ./scripts/pgadmin/servers.json:/pgadmin4/servers.json:ro

  #=======

  dev-mongodb:
    image: bitnami/mongodb:latest
    container_name: dev-mongodb
    ports:
      - '${MONGO_PORT}:27017'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGODB_ROOT_USER=${MONGO_USERNAME}
      - MONGODB_ROOT_PASSWORD=${MONGO_PASSWORD}
      - MONGODB_DATABASE=${MONGO_DATABASE}
    volumes:
      - '${DATA_DIR}/components/mongodb/data:/bitnami/mongodb'
      # This line maps ./scripts/mongo-init.sh from host machine to /docker-entrypoint-initdb.d/mongo-init.sh inside container with 'ro' mode (read only mode) which means container can't modify the mounted file
      - ./scripts/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      # - ./scripts/mongo-init.sh:/bitnami/scripts/mongo-init.sh:ro

  mongo-express:
    image: mongo-express:latest
    container_name: dev-mongoexpress
    depends_on:
      - dev-mongodb
    ports:
      - '${MONGOEXPRESS_PORT}:8081'
    environment:
      - ME_CONFIG_MONGODB_ENABLE_ADMIN=true
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USERNAME}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASSWORD}
      # - ME_CONFIG_MONGODB_SERVER=dev-mongodb
      # - ME_CONFIG_MONGODB_PORT=${MONGO_PORT}
      - ME_CONFIG_MONGODB_URL=mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@dev-mongodb:${MONGO_PORT}/${MONGO_DATABASE}?authSource=admin&ssl=false&directConnection=true
    restart: unless-stopped
    # 'restart: unless-stopped' restarts a container automatically unless it is explicitly stopped by the user.
    # some others: 1. 'no': (Default option if not specified) meaning the container won't automatically restart if it stops or crashes.
    #              2. 'always': The container will restart regardless of the reason it stopped, including if Docker is restarted.
    #              3. 'on-failure': The container will restart only if it exits with a non-zero status indicating an error. (and won't restart if it stops when completing as short running task and return 0 status).

  #=======

  dev-redis:
    image: bitnami/redis:latest
    container_name: dev-redis
    ports:
      - '${REDIS_PORT}:6379'
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    volumes:
      - '${DATA_DIR}/components/redis:/bitnami/redis'
    networks:
      - dev-network

  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: dev-redis-commander
    depends_on:
      - dev-redis
    ports:
      - '${REDIS_COMMANDER_PORT}:8081'
    environment:
      - REDIS_HOST=dev-redis
      # While the exposed port ${REDIS_PORT} is bound to the host network, redis-commander still uses the internal port 6379 (used inside the Docker virtual network) to connect to Redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    networks:
      - dev-network
    # This networks setup is optional; if it is not set, both redis-commander and redis are assigned to the default Docker network (usually named bridge) and can still reach each other

  #=======

  dev-kafka:
    image: 'bitnami/kafka:latest'
    container_name: dev-kafka
    ports:
      - '${KAFKA_PORT}:9094'
    environment:
      # Sets the timezone for the container to "Asia/Shanghai". This ensures that logs and timestamps inside the Kafka container align with the Shanghai timezone.
      - TZ=Asia/Shanghai
      # KAFKA_CFG_NODE_ID=0: Identifies the Kafka node with ID 0. This is crucial for multi-node Kafka clusters to distinguish each node uniquely.
      - KAFKA_CFG_NODE_ID=0
      # KAFKA_CFG_PROCESS_ROLES=controller,broker: Specifies the roles the Kafka node will perform, in this case, both as a controller (managing cluster metadata) and a broker (handling messages).
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      # KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@<your_host>:9093: Defines the quorum voters for the Kafka controllers. It indicates that node 0 (the current node) acts as a voter for controller decisions and will be accessible at 9093 on <your_host>.
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@<your_host>:9093
      # The following lists different listeners for Kafka. Each listener binds a protocol to a specific port:
      # PLAINTEXT for client connections (:9092). CONTROLLER for internal controller communication (:9093). EXTERNAL for external client access (:9094). SASL_PLAINTEXT for SASL-authenticated clients (:9095).
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094,SASL_PLAINTEXT://:9095
      # KAFKA_CFG_ADVERTISED_LISTENERS specifies how clients should connect to Kafka externally:
      # PLAINTEXT at dev-kafka:9092 for internal communication. EXTERNAL at 127.0.0.1:${KAFKA_PORT} (host access). SASL_PLAINTEXT for SASL connections (kafka:9095).
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://dev-kafka:9092,EXTERNAL://127.0.0.1:${KAFKA_PORT},SASL_PLAINTEXT://kafka:9095
      # The following maps security protocols to each listener. For example, CONTROLLER uses PLAINTEXT, and EXTERNAL uses SASL_PLAINTEXT.
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT
      # Indicates that the CONTROLLER role should use the CONTROLLER listener for communications.
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      # Specifies users with relevant passwords that can connect to Kafka using SASL authentication
      - KAFKA_CLIENT_USERS=${KAFKA_USERNAME}
      - KAFKA_CLIENT_PASSWORDS=${KAFKA_PASSWORD}
    volumes:
      - '${DATA_DIR}/components/kafka/data:/bitnami/kafka/data'
      # Maps a local file create-topics.sh from the ./scripts directory to the path /opt/bitnami/kafka/create_topic.sh inside the Kafka container
      # This script can be used to automatically create Kafka topics when the container starts
      - ./scripts/create-topics.sh:/opt/bitnami/kafka/create_topic.sh:ro
    # The following command starts the Kafka server in the background using /opt/bitnami/scripts/kafka/run.sh, then sleeps 5 seconds to give the server time to come up.
    # It then executes the create_topic.sh script to create the Kafka topics (invoked via bash, since a read-only mounted file may not carry the execute bit), and uses 'wait' to keep the script running until all background processes (like the Kafka server) finish.
    command: >
      bash -c "
      /opt/bitnami/scripts/kafka/run.sh & sleep 5; bash /opt/bitnami/kafka/create_topic.sh; wait
      "

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: dev-kafka-ui
    ports:
      - '${KAFKA_UI_PORT}:8080'
    environment:
      # Sets the name of the Kafka cluster displayed in the UI as "local."
      - KAFKA_CLUSTERS_0_NAME=local
      # Specifies the address (dev-kafka:9092) for the Kafka broker that the UI should connect to.
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=dev-kafka:9092
      # Uses the provided ${KAFKA_USERNAME} for SASL (Simple Authentication and Security Layer) authentication with the Kafka cluster.
      - KAFKA_CLUSTERS_0_SASL_USER=${KAFKA_USERNAME}
      # Uses the ${KAFKA_PASSWORD} for authentication with the Kafka broker.
      - KAFKA_CLUSTERS_0_SASL_PASSWORD=${KAFKA_PASSWORD}
      # Sets the SASL mechanism as 'PLAIN', which is a simple username-password-based authentication method.
      - KAFKA_CLUSTERS_0_SASL_MECHANISM=PLAIN
      # Configures the communication protocol as SASL_PLAINTEXT, which means it uses SASL for authentication without encryption over plaintext communication.
      - KAFKA_CLUSTERS_0_SECURITY_PROTOCOL=SASL_PLAINTEXT
    depends_on:
      - dev-kafka

networks:
  dev-network:
    driver: bridge

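Before starting anything, it can be useful to render the final configuration and confirm that the ${...} placeholders resolve against .env as expected:

docker-compose config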

Step 5: Scripts

Create the necessary scripts in the scripts folder.

  1. pgadmin/servers.json:

    {
      "Servers": {
        "1": {
          "Name": "Local PostgreSQL",
          "Group": "Servers",
          "Host": "dev-postgresql",
          "Port": 5432,
          "MaintenanceDB": "dev_database",
          "Username": "dev-user",
          "Password": "dev-password",
          "SSLMode": "prefer",
          "Favorite": true
        }
      }
    }
    
  2. create-topics.sh (a verification command follows this list):

    #!/bin/bash

    # Wait for Kafka to be ready
    until /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092; do
      echo "Waiting for Kafka to be ready..."
      sleep 2
    done
    
    # Create topics
    /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic latestMsgToRedis
    /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic msgToPush
    /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic offlineMsgToMongoMysql
    
    echo "Topics created."
    
  3. mongo-init.sh:

    # mongosh --: Launches the MongoDB shell, connecting to the default MongoDB instance.
    # "$MONGO_INITDB_DATABASE": Specifies the database to connect to (using the value from the environment variable).
    # <<EOF: Indicates the start of a multi-line input block. Everything between <<EOF and EOF is treated as MongoDB shell commands to be executed. 
    # db.getSiblingDB('admin'): Switches to the admin database, which is the default administrative database in MongoDB. It allows you to perform administrative tasks like user creation, where the user dev-user will be created.
    # db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD'): Authenticates against the admin database with the root credentials, which is needed because the operations below require authentication.
    # The user dev-user is created in the admin database with the specified username and password.
    # { role: 'root', db: 'admin' }: Allows full access to the admin database.
    # { role: 'root', db: '$MONGO_INITDB_DATABASE' }: Grants full permissions on dev_database as well.
    
    mongosh -- "$MONGO_INITDB_DATABASE" <<EOF
    db = db.getSiblingDB('admin')
    db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD')
    db.createUser({
      user: "$MONGODB_ROOT_USER",
      pwd: "$MONGODB_ROOT_PASSWORD",
      roles: [
        { role: 'root', db: 'admin' },
        { role: 'root', db: '$MONGO_INITDB_DATABASE' }
      ]
    })
    
    db = db.getSiblingDB('$MONGO_INITDB_DATABASE');
    db.createCollection('users');
    db.users.insertMany([
      { username: 'user1', email: 'user1@example.com' },
      { username: 'user2', email: 'user2@example.com' }
    ]);
    EOF
    
  4. mysql-init.sql:

    -- CREATE TABLE IF NOT EXISTS test (id SERIAL PRIMARY KEY, name VARCHAR(50));
    
    BEGIN;
    
    -- structure setup
    
    CREATE TABLE users (
        id SERIAL PRIMARY KEY,
        username VARCHAR(50) NOT NULL,
        email VARCHAR(100) NOT NULL
    );
    
    -- data setup
    
    INSERT INTO users (username, email) 
    VALUES ('user1', 'user1@example.com');
    
    INSERT INTO users (username, email) 
    VALUES ('user2', 'user2@example.com');
    
    COMMIT;
    
  5. postgres-init.sql:

    -- CREATE TABLE IF NOT EXISTS test (id SERIAL PRIMARY KEY, name VARCHAR(50));
    
    BEGIN;
    
    -- structure setup
    
    CREATE TABLE users (
        id SERIAL PRIMARY KEY,
        username VARCHAR(50) NOT NULL,
        email VARCHAR(100) NOT NULL
    );
    
    -- data setup
    
    INSERT INTO users (username, email) 
    VALUES ('user1', 'user1@example.com');
    
    INSERT INTO users (username, email) 
    VALUES ('user2', 'user2@example.com');
    
    COMMIT;
    
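Once the stack is running, a quick way to confirm that create-topics.sh did its job is to list the topics from the host (assuming the dev-kafka container name from the compose file above):

docker exec dev-kafka /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092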

Step 6: Run Docker Compose

In a terminal, navigate to the dev-environment folder and run:

docker-compose up -d

This command starts all the services, each with its own defined container, ports, and environment configuration.
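
To confirm that the containers came up (output will vary), list them and, if needed, follow the logs of a single service:

docker-compose ps
docker-compose logs -f dev-mysql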

Step 7: Access the databases with UI tools

  • phpMyAdmin: accessible at http://localhost:280
  • Mongo Express: accessible at http://localhost:28081
  • pgAdmin 4: accessible at http://localhost:281
  • Redis Commander: accessible at http://localhost:28082
  • Kafka UI: accessible at http://localhost:28080

Each UI tool is already configured to connect to its respective database container.

Step 8: Access via the CLI

First, we need to load all the environment variables from the .env file into the current CLI session. To do this, we can use the following command:

export $(grep -v '^#' .env | xargs)
  • grep -v '^#' .env: filters comments (lines starting with #) out of the .env file.
  • xargs: turns each line into a key=value pair.
  • export: loads the variables into the current environment, making them available to the session.
  1. Access the MySQL CLI

    • To access the MySQL database inside the dev-mysql container:
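
    For example (a minimal sketch, assuming the .env variables were exported into the current shell as shown above; the Bitnami image ships the mysql client):

    docker exec -it dev-mysql mysql -u "$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"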
  2. Access the PostgreSQL CLI

    • To access the PostgreSQL database inside the dev-postgresql container:
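
    For example, passing PGPASSWORD so psql does not prompt for a password (a sketch using the same exported variables):

    docker exec -it -e PGPASSWORD="$POSTGRES_PASSWORD" dev-postgresql psql -U "$POSTGRES_USERNAME" -d "$POSTGRES_DATABASE"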
  3. Access the MongoDB CLI

    • To access the MongoDB shell inside the dev-mongodb container:
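
    For example, authenticating against the admin database (matching the authSource=admin setting used by mongo-express above):

    docker exec -it dev-mongodb mongosh -u "$MONGO_USERNAME" -p "$MONGO_PASSWORD" --authenticationDatabase admin "$MONGO_DATABASE"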
  4. Access the Redis CLI

    • To access the Redis CLI inside the dev-redis container:
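
    For example (redis-cli warns that passing -a on the command line is insecure, which is acceptable for a local dev environment):

    docker exec -it dev-redis redis-cli -a "$REDIS_PASSWORD"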
  5. Access the Kafka CLI

    • To access the Kafka CLI inside the dev-kafka container:
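
    The Kafka CLI tools live under /opt/bitnami/kafka/bin inside the container; one simple approach is an interactive shell (a sketch, using the internal PLAINTEXT listener on 9092):

    docker exec -it dev-kafka bash
    # then, inside the container, for example:
    /opt/bitnami/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic msgToPush --from-beginning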

Summary

This setup uses Docker Compose together with environment variables, Bitnami images, and volume mappings to create a reproducible development environment. With docker-compose up -d you can quickly spin up the entire environment, and with docker-compose down you can tear it down again, which makes it well suited to local development and testing.
