


How to use Docker for application monitoring and log management
Docker has become an essential technology in modern application delivery, but monitoring applications and managing their logs in Docker remains a challenge. As Docker's networking features, such as service discovery and load balancing, continue to mature, the need for a complete, stable, and efficient application monitoring system keeps growing.
In this article, we will briefly introduce how to use Docker for application monitoring and log management, with concrete code examples.
Using Prometheus for application monitoring
Prometheus is an open-source, pull-based monitoring and alerting toolkit originally developed at SoundCloud. It is written in Go and is widely used in microservice and cloud environments. As a monitoring tool, it can track a container's CPU, memory, network, and disk usage, and it offers a multi-dimensional data model, flexible queries, alerting, and visualization, allowing you to react and make decisions quickly.
Note that Prometheus collects data in pull mode: it periodically scrapes a /metrics endpoint exposed by the monitored application. Therefore, the monitored application must expose a /metrics endpoint, and Prometheus must be configured with the IP and port at which to reach it. Below is a simple Node.js application.
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.get('/metrics', (req, res) => {
  res.send(`
# HELP api_calls_total Total API calls
# TYPE api_calls_total counter
api_calls_total 100
`)
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})
In this code, the /metrics endpoint returns a single metric, api_calls_total, in the Prometheus exposition format.
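In real applications you would rarely hand-format the exposition text. As a sketch (an assumption, not part of the original example), the same counter can be maintained with the prom-client npm package, which renders the /metrics payload for you:

// minimal sketch using the prom-client package (npm install express prom-client)
const express = require('express')
const client = require('prom-client')
const app = express()

// counter equivalent to the hand-written api_calls_total metric above
const apiCalls = new client.Counter({
  name: 'api_calls_total',
  help: 'Total API calls'
})

app.get('/', (req, res) => {
  apiCalls.inc() // count every request to the root route
  res.send('Hello World!')
})

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})

app.listen(3000)

This way the counter reflects real traffic instead of the fixed value 100 returned above.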
Next, pull the official Prometheus image and create a docker-compose.yml file in which we run both the Node.js application and Prometheus, so the latter can scrape the former.
version: '3'
services:
  node:
    image: node:lts
    # mount the application source so index.js is available inside the container
    # (assumes `npm install express` was run in this directory on the host)
    working_dir: /app
    volumes:
      - ./:/app
    command: node index.js
    ports:
      - 3000:3000
  prometheus:
    image: prom/prometheus:v2.25.2
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - 9090:9090
The docker-compose.yml file defines two services: node, which runs the Node.js application, and prometheus, which runs the monitoring server. The node service publishes port 3000, so the application's /metrics endpoint is reachable from the host through that port mapping, while the Prometheus UI and API are exposed on port 9090.
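Before pointing Prometheus at the endpoint, it is worth a quick sanity check from the host; for example (a hypothetical check, using the port published above):

curl http://localhost:3000/metrics

If everything is wired correctly, this prints the api_calls_total exposition text from the application.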
Finally, in the prometheus.yml file, we define the scrape targets.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  # optional: only useful if a node-exporter service is also running at node:9100
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node:9100']
  - job_name: 'node-js-app'
    static_configs:
      - targets: ['node:3000']
In this file, we define the jobs whose metrics Prometheus should collect. The targets parameter lists each target's address and port; because both containers share the same Compose network, the Node.js application is addressed by its service name, node, on port 3000.
Finally, run the docker-compose up command to start the application together with its monitoring service, and browse the collected metrics in the Prometheus UI at http://localhost:9090.
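In the UI, the raw counter can also be turned into a rate. For example, the following PromQL expression (illustrative; it assumes the counter actually changes over time, as the prom-client variant above would produce) computes the per-second call rate over the last five minutes:

rate(api_calls_total[5m])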
Using Elasticsearch and Logstash for log management
In Docker, application log data is scattered across different containers. To manage these logs in one place, you can use Elasticsearch and Logstash from the ELK stack to centralize them, which makes the logs much easier to monitor and analyze.
Before starting, pull the Docker images of Logstash and Elasticsearch and create a docker-compose.yml file.
In this file, we define three services. bls is an nginx-based service that simulates a business API: every response is logged both to stdout and to a log file. The logstash service is built from the official Logstash image and collects, filters, and forwards the logs. The elasticsearch service stores the logs and makes them searchable.
version: '3'
services:
  bls:
    image: nginx:alpine
    volumes:
      - ./log:/var/log/nginx
      - ./public:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
  logstash:
    image: logstash:7.10.1
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      # share the nginx log directory so the pipeline's file input can read access.log
      - ./log:/var/log/nginx:ro
    environment:
      - "ES_HOST=elasticsearch"
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data
In this configuration, the container's nginx log directory is mounted onto the host's file system. The logging options cap each container log file at 10 MB and keep at most 10 files, limiting the storage the logs can occupy.
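Two quick ways to inspect these logs from the host (standard Docker CLI usage; the service name comes from the compose file above):

# follow the container's stdout stream
docker-compose logs -f bls

# locate the json-file driver's log file on the host (Linux default location)
docker inspect --format '{{.LogPath}}' $(docker-compose ps -q bls)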
In the logstash service's pipeline directory, we define a pipeline named nginx_pipeline.conf. This file handles the collection, filtering, and forwarding of the nginx logs. Following the usual ELK flow, Logstash processes each received log line according to the configured conditions and sends the result to the Elasticsearch cluster created above. The pipeline defines the following processing logic:
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => [ "${ES_HOST}:9200" ]
    index => "nginx_log_index"
  }
}
This pipeline has three parts: a file input that reads lines from the local nginx access log; a grok filter that parses each line against the stock COMBINEDAPACHELOG pattern; and an elasticsearch output that ships the parsed events to the cluster, whose address is injected into the container through the ES_HOST environment variable.
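For illustration, here is a made-up access-log line and a few of the fields the stock COMBINEDAPACHELOG pattern extracts from it (field names are from the pattern; the values are hypothetical):

127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.68.0"

clientip => 127.0.0.1
verb     => GET
request  => /index.html
response => 200
bytes    => 612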
With the ELK configuration above complete, we have an efficient log management system: every log line is shipped to a central store where it can easily be searched, filtered, and visualized.
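As a quick check that events are arriving, you can query the index over Elasticsearch's REST API. The compose file above does not publish port 9200 to the host, so either add a ports mapping to the elasticsearch service or run curl inside the container, for example:

docker-compose exec elasticsearch curl 'http://localhost:9200/nginx_log_index/_search?q=response:404&pretty'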