


Quickly Start Dev Environment For MySQL, PostgreSQL, MongoDB, Redis, and Kafka Using Docker Compose
Here's how to quickly set up a development environment with MySQL, PostgreSQL, MongoDB, Redis, and Kafka using Docker Compose, with Bitnami images, environment variables, and UI tools for each database. We'll go through the process step by step.
Why use Bitnami images?
Pre-Configured and Optimized: Bitnami images come pre-configured with best practices, making them easier to set up and optimize for common use cases.
Security: Bitnami regularly updates its images to address vulnerabilities, providing a more secure option compared to some community-maintained images that may not be updated as frequently.
Consistency Across Environments: Bitnami ensures that their images work consistently across different environments, making them a good choice for testing, development, and production setups.
Ease of Use: They often include scripts and defaults that simplify deployments, reducing the need for manual configuration and setup.
Documentation and Support: Bitnami provides detailed documentation and sometimes support through their parent company, VMware, which can be valuable for troubleshooting and enterprise usage.
Another important note is about licensing. It varies, but Bitnami software is generally free to use: its containers and packages are based on open-source software and use licenses such as MIT, Apache 2.0, or GPL. Read more about open-source licenses.
Step 1: Install Docker and Docker Compose
- Install Docker: Follow the instructions for your OS from the official Docker documentation.
- Install Docker Compose: Follow the instructions from the Docker Compose installation guide. You can verify both installations as shown below.
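For example (the output will vary with your installed versions):

docker --version
docker-compose --version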
Step 2: Project Structure
Create the following project structure:
dev-environment/
├── components/               # for mounting container volumes
├── scripts/
│   ├── pgadmin/
│   │   └── servers.json      # lets pgAdmin automatically load the PostgreSQL server
│   ├── create-topics.sh      # creates Kafka topics
│   ├── mongo-init.sh         # init script for MongoDB
│   ├── mysql-init.sql        # init script for MySQL
│   └── postgres-init.sql     # init script for PostgreSQL
├── .env
└── docker-compose.yml
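If you like, you can create the skeleton in one go; the names below simply follow the tree above:

mkdir -p dev-environment/components dev-environment/scripts/pgadmin
touch dev-environment/.env dev-environment/docker-compose.yml
touch dev-environment/scripts/create-topics.sh dev-environment/scripts/mongo-init.sh
touch dev-environment/scripts/mysql-init.sql dev-environment/scripts/postgres-init.sql
touch dev-environment/scripts/pgadmin/servers.json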
Step 3: (.env) File
Create a .env file with the following content:
# MySQL Configuration
MYSQL_PORT=23306
MYSQL_USERNAME=dev-user
MYSQL_PASSWORD=dev-password
MYSQL_DATABASE=dev_database

# PostgreSQL Configuration
POSTGRES_PORT=25432
POSTGRES_USERNAME=dev-user
POSTGRES_PASSWORD=dev-password
POSTGRES_DATABASE=dev_database

# MongoDB Configuration
MONGO_PORT=27017
MONGO_USERNAME=dev-user
MONGO_PASSWORD=dev-password
MONGO_DATABASE=dev_database

# Redis Configuration
REDIS_PORT=26379
REDIS_PASSWORD=dev-password

# Kafka Configuration
KAFKA_PORT=29092
KAFKA_USERNAME=dev-user
KAFKA_PASSWORD=dev-password

# UI Tools Configuration
PHPMYADMIN_PORT=280
PGADMIN_PORT=281
MONGOEXPRESS_PORT=28081
REDIS_COMMANDER_PORT=28082
KAFKA_UI_PORT=28080

# Data Directory for Volumes
DATA_DIR=./
Step 4: (docker-compose.yml) File
Create the docker-compose.yml file:
version: '3.8'

services:
  dev-mysql:
    image: bitnami/mysql:latest
    # This container_name can be used for internal connections between containers (running on the same Docker virtual network)
    container_name: dev-mysql
    ports:
      # Requests sent to ${MYSQL_PORT} on the host machine are forwarded to port 3306 inside the dev-mysql container.
      # This lets you reach MySQL from outside the container, e.g. from the local machine or another service.
      - '${MYSQL_PORT}:3306'
    environment:
      # Environment variables for the container
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    volumes:
      # Syncs MySQL data from inside the container to the host machine, keeping it across container restarts
      - '${DATA_DIR}/components/mysql/data:/bitnami/mysql/data'
      # Custom script to initialise the database
      - './scripts/mysql-init.sql:/docker-entrypoint-initdb.d/init.sql'

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    container_name: dev-phpmyadmin
    # depends_on tells Docker to start this container only after dev-mysql has started (it does not guarantee MySQL is ready)
    depends_on:
      - dev-mysql
    ports:
      - '${PHPMYADMIN_PORT}:80'
    environment:
      - PMA_HOST=dev-mysql
      # Use the internal port for container-to-container connections, not the exposed port ${MYSQL_PORT}
      - PMA_PORT=3306
      - PMA_USER=${MYSQL_USERNAME}
      - PMA_PASSWORD=${MYSQL_PASSWORD}

  #=======
  dev-postgresql:
    image: bitnami/postgresql:latest
    container_name: dev-postgresql
    ports:
      - '${POSTGRES_PORT}:5432'
    environment:
      - POSTGRESQL_USERNAME=${POSTGRES_USERNAME}
      - POSTGRESQL_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRESQL_DATABASE=${POSTGRES_DATABASE}
    volumes:
      # Keeps PostgreSQL data on the host machine so it persists across container restarts
      - '${DATA_DIR}/components/postgresql/data:/bitnami/postgresql/data'
      # Most relational databases support a special docker-entrypoint-initdb.d folder, used to initialise the database
      # automatically when the container is first created. We can put .sql or .sh scripts there; here
      # ./scripts/postgres-init.sql from the host machine is mounted into the container and run on first start.
      - ./scripts/postgres-init.sql:/docker-entrypoint-initdb.d/init.sql:ro

  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: dev-pgadmin
    depends_on:
      - dev-postgresql
    ports:
      - '${PGADMIN_PORT}:80'
    # user: root gives the container full administrative privileges, which is needed for actions that require
    # elevated permissions, such as reading/writing the mounted volumes, executing certain entrypoint commands,
    # or accessing specific directories from the host machine
    user: root
    environment:
      # PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD - default credentials for the pgAdmin user
      - PGADMIN_DEFAULT_EMAIL=admin@dev.com
      - PGADMIN_DEFAULT_PASSWORD=${POSTGRES_PASSWORD}
      # PGADMIN_CONFIG_SERVER_MODE - whether pgAdmin runs in server mode (multi-user) or desktop mode (single-user).
      # We set it to False, so we won't be prompted for login credentials
      - PGADMIN_CONFIG_SERVER_MODE=False
      # PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED - whether a master password is required to access saved server
      # definitions and other sensitive information
      - PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED=False
    volumes:
      # Keeps pgAdmin data on the host machine so it persists across container restarts
      - '${DATA_DIR}/components/pgadmin:/var/lib/pgadmin'
      # Lets pgAdmin automatically detect and connect to PostgreSQL on startup (using the config in servers.json)
      - ./scripts/pgadmin/servers.json:/pgadmin4/servers.json:ro

  #=======
  dev-mongodb:
    image: bitnami/mongodb:latest
    container_name: dev-mongodb
    ports:
      - '${MONGO_PORT}:27017'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_DATABASE}
      - MONGODB_ROOT_USER=${MONGO_USERNAME}
      - MONGODB_ROOT_PASSWORD=${MONGO_PASSWORD}
      - MONGODB_DATABASE=${MONGO_DATABASE}
    volumes:
      - '${DATA_DIR}/components/mongodb/data:/bitnami/mongodb'
      # Maps ./scripts/mongo-init.sh from the host machine to /docker-entrypoint-initdb.d/mongo-init.sh inside the
      # container in 'ro' (read-only) mode, which means the container cannot modify the mounted file
      - ./scripts/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
      # - ./scripts/mongo-init.sh:/bitnami/scripts/mongo-init.sh:ro

  mongo-express:
    image: mongo-express:latest
    container_name: dev-mongoexpress
    depends_on:
      - dev-mongodb
    ports:
      - '${MONGOEXPRESS_PORT}:8081'
    environment:
      - ME_CONFIG_MONGODB_ENABLE_ADMIN=true
      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USERNAME}
      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASSWORD}
      # - ME_CONFIG_MONGODB_SERVER=dev-mongodb
      # - ME_CONFIG_MONGODB_PORT=${MONGO_PORT}
      - ME_CONFIG_MONGODB_URL=mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@dev-mongodb:${MONGO_PORT}/${MONGO_DATABASE}?authSource=admin&ssl=false&directConnection=true
    # 'restart: unless-stopped' restarts the container automatically unless it is explicitly stopped by the user.
    # Other options: 1. 'no' (default if not specified): the container won't restart automatically if it stops or crashes.
    # 2. 'always': the container restarts regardless of why it stopped, including when Docker itself is restarted.
    # 3. 'on-failure': the container restarts only if it exits with a non-zero status indicating an error
    #    (it won't restart if a short-running task completes and returns status 0).
    restart: unless-stopped

  #=======
  dev-redis:
    image: bitnami/redis:latest
    container_name: dev-redis
    ports:
      - '${REDIS_PORT}:6379'
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    volumes:
      - '${DATA_DIR}/components/redis:/bitnami/redis'
    networks:
      - dev-network

  redis-commander:
    image: rediscommander/redis-commander:latest
    container_name: dev-redis-commander
    depends_on:
      - dev-redis
    ports:
      - '${REDIS_COMMANDER_PORT}:8081'
    environment:
      - REDIS_HOST=dev-redis
      # Although ${REDIS_PORT} is exposed on the host network, redis-commander still uses the internal port 6379
      # (used inside the Docker virtual network) to connect to Redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=${REDIS_PASSWORD}
    networks:
      - dev-network
    # This networks setup is optional; if it isn't set, redis-commander and redis are both attached to the default
    # Docker network (usually named bridge) and can still reach each other

  #=======
  dev-kafka:
    image: 'bitnami/kafka:latest'
    container_name: dev-kafka
    ports:
      - '${KAFKA_PORT}:9094'
    environment:
      # Sets the container timezone to "Asia/Shanghai" so logs and timestamps inside the Kafka container align with that timezone
      - TZ=Asia/Shanghai
      # KAFKA_CFG_NODE_ID=0: identifies this Kafka node as ID 0; crucial for distinguishing nodes in multi-node clusters
      - KAFKA_CFG_NODE_ID=0
      # KAFKA_CFG_PROCESS_ROLES: the roles this node performs, here both controller (managing cluster metadata) and broker (handling messages)
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      # KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: the quorum voters for the Kafka controllers; node 0 (this node) acts as a
      # voter for controller decisions and is reachable at port 9093 on <your_host>
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@<your_host>:9093
      # Listeners bind a protocol to a port: PLAINTEXT for client connections (:9092), CONTROLLER for internal
      # controller communication (:9093), EXTERNAL for external client access (:9094), SASL_PLAINTEXT for
      # SASL-authenticated clients (:9095)
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094,SASL_PLAINTEXT://:9095
      # Advertised listeners tell clients how to connect: PLAINTEXT at dev-kafka:9092 for internal communication,
      # EXTERNAL at 127.0.0.1:${KAFKA_PORT} for host access, SASL_PLAINTEXT for SASL connections (kafka:9095)
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://dev-kafka:9092,EXTERNAL://127.0.0.1:${KAFKA_PORT},SASL_PLAINTEXT://kafka:9095
      # Maps a security protocol to each listener; for example, CONTROLLER uses PLAINTEXT and EXTERNAL uses SASL_PLAINTEXT
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT,PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT
      # The CONTROLLER role uses the CONTROLLER listener for its communications
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      # Users and passwords allowed to connect to Kafka using SASL authentication
      - KAFKA_CLIENT_USERS=${KAFKA_USERNAME}
      - KAFKA_CLIENT_PASSWORDS=${KAFKA_PASSWORD}
    volumes:
      - '${DATA_DIR}/components/kafka/data:/bitnami/kafka/data'
      # Maps create-topics.sh from ./scripts on the host to /opt/bitnami/kafka/create_topic.sh inside the Kafka container;
      # this script is used to create Kafka topics automatically when the container starts
      - ./scripts/create-topics.sh:/opt/bitnami/kafka/create_topic.sh:ro
    # Starts the Kafka server in the background via /opt/bitnami/scripts/kafka/run.sh, sleeps 5 seconds so the server
    # is fully up, runs create_topic.sh to create the Kafka topics, then uses 'wait' to keep the script running until
    # all background processes (like the Kafka server) finish
    command: >
      bash -c "
      /opt/bitnami/scripts/kafka/run.sh &
      sleep 5;
      /opt/bitnami/kafka/create_topic.sh;
      wait
      "

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: dev-kafka-ui
    ports:
      - '${KAFKA_UI_PORT}:8080'
    environment:
      # Name of the Kafka cluster displayed in the UI
      - KAFKA_CLUSTERS_0_NAME=local
      # Address (dev-kafka:9092) of the Kafka broker the UI should connect to
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=dev-kafka:9092
      # Username used for SASL (Simple Authentication and Security Layer) authentication with the Kafka cluster
      - KAFKA_CLUSTERS_0_SASL_USER=${KAFKA_USERNAME}
      # Password used for authentication with the Kafka broker
      - KAFKA_CLUSTERS_0_SASL_PASSWORD=${KAFKA_PASSWORD}
      # SASL mechanism 'PLAIN': a simple username/password-based authentication method
      - KAFKA_CLUSTERS_0_SASL_MECHANISM=PLAIN
      # SASL_PLAINTEXT: SASL authentication without encryption, over plaintext communication
      - KAFKA_CLUSTERS_0_SECURITY_PROTOCOL=SASL_PLAINTEXT
    depends_on:
      - dev-kafka

networks:
  dev-network:
    driver: bridge
Step 5: Scripts
Create the necessary scripts in the scripts folder.
- pgadmin/servers.json:
{ "Servers": { "1": { "Name": "Local PostgreSQL", "Group": "Servers", "Host": "dev-postgresql", "Port": 5432, "MaintenanceDB": "dev_database", "Username": "dev-user", "Password": "dev-password", "SSLMode": "prefer", "Favorite": true } } }
- create-topics.sh:
#!/bin/bash

# Wait for Kafka to be ready
until /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092; do
  echo "Waiting for Kafka to be ready..."
  sleep 2
done

# Create topics
/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic latestMsgToRedis
/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic msgToPush
/opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 8 --topic offlineMsgToMongoMysql

echo "Topics created."
- mongo-init.sh:
#!/bin/bash
# mongosh -- "$MONGO_INITDB_DATABASE": launches the MongoDB shell against the database named in the environment variable.
# <<EOF ... EOF: everything between the markers is treated as MongoDB shell commands to be executed.
# db.getSiblingDB('admin'): switches to the admin database, MongoDB's administrative database, where the user is created.
# db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD'): authenticates against the admin database,
# which is needed because the following operations require authentication.
# db.createUser(...): creates the dev user in the admin database with the 'root' role on 'admin' (full access)
# and on '$MONGO_INITDB_DATABASE'.
# The script then switches to '$MONGO_INITDB_DATABASE', creates a 'users' collection, and seeds it with two documents.
mongosh -- "$MONGO_INITDB_DATABASE" <<EOF
db = db.getSiblingDB('admin')
db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD')
db.createUser({
  user: "$MONGODB_ROOT_USER",
  pwd: "$MONGODB_ROOT_PASSWORD",
  roles: [
    { role: 'root', db: 'admin' },
    { role: 'root', db: '$MONGO_INITDB_DATABASE' }
  ]
})

db = db.getSiblingDB('$MONGO_INITDB_DATABASE');
db.createCollection('users');
db.users.insertMany([
  { username: 'user1', email: 'user1@example.com' },
  { username: 'user2', email: 'user2@example.com' }
]);
EOF
- mysql-init.sql:
-- CREATE TABLE IF NOT EXISTS test (id SERIAL PRIMARY KEY, name VARCHAR(50));

BEGIN;

-- structure setup
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  username VARCHAR(50) NOT NULL,
  email VARCHAR(100) NOT NULL
);

-- data setup
INSERT INTO users (username, email) VALUES ('user1', 'user1@example.com');
INSERT INTO users (username, email) VALUES ('user2', 'user2@example.com');

COMMIT;
- postgres-init.sql:
-- CREATE TABLE IF NOT EXISTS test (id SERIAL PRIMARY KEY, name VARCHAR(50));

BEGIN;

-- structure setup
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  username VARCHAR(50) NOT NULL,
  email VARCHAR(100) NOT NULL
);

-- data setup
INSERT INTO users (username, email) VALUES ('user1', 'user1@example.com');
INSERT INTO users (username, email) VALUES ('user2', 'user2@example.com');

COMMIT;
Step 6: Run Docker Compose
In your terminal, navigate to the dev-environment folder and run:
docker-compose up -d
This command will start all the services, each with its own container, port, and environment configuration as defined.
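To confirm that everything came up, you can check container status and, if needed, tail the logs of a specific service, for example:

docker-compose ps
docker-compose logs -f dev-kafka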
Step 7: Access the Databases Using UI Tools
- phpMyAdmin: Access via http://localhost:280
- Mongo Express: Access via http://localhost:28081
- pgAdmin 4: Access via http://localhost:281
- Redis Commander: Access via http://localhost:28082
- Kafka UI: Access via http://localhost:28080
Each UI tool is already configured to connect to its respective database container.
Step 8: Access via CLI
First of all, we need to load the environment variables from the .env file into the current CLI session. To do that, we can use the following command:
export $(grep -v '^#' .env | xargs)
- grep -v '^#' .env: Filters out comments (lines starting with #) from the .env file.
- xargs: Converts each line into key=value pairs.
- export: Loads the variables into the current environment, making them available for use in the session.
- Access MySQL CLI
- To access the MySQL database inside the dev-mysql container:
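For example, using the MySQL client bundled in the image (this assumes the .env variables are exported in your shell, as above):

docker exec -it dev-mysql mysql -u"$MYSQL_USERNAME" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"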
- Access PostgreSQL CLI
- To access the PostgreSQL database inside the dev-postgresql container:
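For example (psql will prompt for the password from POSTGRES_PASSWORD):

docker exec -it dev-postgresql psql -U "$POSTGRES_USERNAME" -d "$POSTGRES_DATABASE"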
{ "Servers": { "1": { "Name": "Local PostgreSQL", "Group": "Servers", "Host": "dev-postgresql", "Port": 5432, "MaintenanceDB": "dev_database", "Username": "dev-user", "Password": "dev-password", "SSLMode": "prefer", "Favorite": true } } }
- Access MongoDB CLI
- To access the MongoDB shell inside the dev-mongodb container:
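For example, using mongosh bundled in the image and authenticating against the admin database:

docker exec -it dev-mongodb mongosh -u "$MONGO_USERNAME" -p "$MONGO_PASSWORD" --authenticationDatabase admin "$MONGO_DATABASE"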
- Access Redis CLI
- To access the Redis CLI inside the dev-redis container:
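For example:

docker exec -it dev-redis redis-cli -a "$REDIS_PASSWORD"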
- Access Kafka CLI
- To access the Kafka CLI inside the dev-kafka container:
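For example, to list the topics created by create-topics.sh, and optionally read one of them from the beginning (both use the internal PLAINTEXT listener on port 9092 inside the container):

docker exec -it dev-kafka /opt/bitnami/kafka/bin/kafka-topics.sh --list --bootstrap-server localhost:9092
docker exec -it dev-kafka /opt/bitnami/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic msgToPush --from-beginning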
Summary
This setup uses Docker Compose with environment variables, Bitnami images, and volume mappings to create a reproducible development environment. You can quickly spin up the entire environment with docker-compose up -d and tear it down again with docker-compose down, making it well suited to local development and testing.