1. Docker CE and Harbor Private Repository Installation Task

(1) Install Docker CE and Docker Compose

Container Cloud Setup Task

The master node uses the k8s_harbor_install.sh script for one-click installation:

[root@master opt]# sh k8s_harbor_install.sh

(2) Install the private repository. The master node uses the k8s_image_push.sh script:

[root@master opt]# sh k8s_image_push.sh

2. Permanently enable IP forwarding (written to the kernel configuration): edit /etc/sysctl.conf with vim and add the line net.ipv4.ip_forward = 1, then run sysctl -p to load it.

#### **(3) [1] Container Orchestration SkyWalking**

Write the /root/docker-compose.yaml file on the master node (the required image package SkyWalking.tar is available under the HTTP service), with the following specific requirements:
(1) Container name: elasticsearch; image: elasticsearch:7.8.0; port mapping: 9200:9200; (note: 6.8.0)
(2) Container name: oap; image: apache/skywalking-oap-server:8.0.1-es7; port mapping: 11800:11800, 12800:12800; (note: 6.4.0)
(3) Container name: ui; image: apache/skywalking-ui:8.0.1; port mapping: 8082:8080.
After completing the orchestration and deployment of the SkyWalking service, submit the username, password, and IP of the master node to the answer box.

```yaml
version: '2'
services:
  elasticsearch:
    image: elasticsearch:7.8.0
    container_name: skywalking-es
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      discovery.type: single-node
      TZ: Asia/Shanghai
  oap:
    image: apache/skywalking-oap-server:8.0.1-es7
    container_name: skywalking-oap
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    restart: always
    ports:
      - 11800:11800
      - 12800:12800
    environment:
      TZ: Asia/Shanghai
  ui:
    image: apache/skywalking-ui:8.0.1
    container_name: skywalking-ui
    depends_on:
      - oap
    links:
      - oap
    restart: always
    ports:
      - 8082:8080
    environment:
      collector.ribbon.listOfServers: oap:12800
      security.user.admin.password: 123456
```

```shell
docker pull elasticsearch:7.8.0
docker pull apache/skywalking-oap-server:8.1.0-es7
docker run --name skywalking-oap-server \
  --restart always -d \
  -p 1234:1234 \
  -p 11800:11800 \
  -p 12800:12800 \
  -e TZ=Asia/Shanghai \
  -e SW_STORAGE=elasticsearch7 \
  -e SW_STORAGE_ES_CLUSTER_NODES=172.16.0.61:9200 \
  apache/skywalking-oap-server:8.1.0-es7
```
Script Description
SW_STORAGE: Specifies the storage method for the data source, default is H2
SW_STORAGE_ES_CLUSTER_NODES: Specifies the Elasticsearch service cluster nodes.
Deploy the SkyWalking UI
```shell
docker pull apache/skywalking-ui:8.0.1
# add the login password afterwards
docker run -d --name skywalking-ui \
  -e TZ=Asia/Shanghai \
  -p 8072:8080 \
  -e SW_OAP_ADDRESS=172.16.0.61:12800 \
  --restart=always \
  apache/skywalking-ui:8.0.1
```
SW_OAP_ADDRESS: Specifies the OAP service address
http://172.16.0.61:8072/
1. Use the `docker-compose ps` command to check that the containers are Up. (1.4 points)
2. Check that the SkyWalking homepage loads correctly. (1.6 points)
#### **(3) [2] Container Orchestration for WordPress**
Write the `/root/wordpress/docker-compose.yaml` file on the master node with the following specific requirements:
(1) Container Name: wordpress; Image: wordpress:latest; Port Mapping: 82:80;
(2) Container Name: mysql; Image: mysql:5.6;
(3) MySQL root user password: 123456;
(4) Create the database wordpress.
After completing the orchestration and deployment of WordPress, submit the username, password, and IP address of the master node to the answer box.
```yaml
version: "3"
services:
  mysql:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
  wordpress:
    depends_on:
      - mysql
    image: wordpress:latest
    ports:
      - 82:80
    environment:
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
```
```yaml
version: "3"
services:
  wordpress-mysql:
    container_name: wordpress-mysql
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: 123456
      MYSQL_USER: wordpress
      MYSQL_DATABASE: wordpress
      #MYSQL_PASSWORD: wordpress
  wordpress:
    links:
      - wordpress-mysql:mysql
    container_name: wordpress
    image: wordpress
    restart: always
    ports:
      - 82:80
    environment:
      WORDPRESS_DB_HOST: mysql:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_PASSWORD: 123456
```
1. Use the provided scripts on the master node to complete the installation of Docker CE, docker-compose, and the Harbor repository. Import all images from the /opt/images directory and push them to the private repository. (1 point)
```shell
[root@master ~]# mount /dev/sr1 /mnt
[root@master ~]# cp -rvf /mnt/* /opt/
[root@master ~]# ls -l /opt
total 50
drwxr-xr-x. 1 root root  2048 Oct 10  2020 ChinaskillMall
dr-xr-xr-x. 1 root root  2048 Oct 10  2020 ChinaskillProject
drwxr-xr-x. 1 root root  2048 Oct 10  2020 docker-compose
dr-xr-xr-x. 1 root root  2048 Oct 10  2020 harbor
dr-xr-xr-x. 1 root root  6144 Oct 10  2020 images
-rwxr-xr-x. 1 root root  3049 Oct  7  2020 k8s_harbor_install.sh
-rwxr-xr-x. 1 root root  5151 Oct  3  2020 k8s_image_push.sh
-rwxr-xr-x. 1 root root  1940 Oct  6  2020 k8s_master_install.sh
-rwxr-xr-x. 1 root root  3055 Oct  6  2020 k8s_node_install.sh
dr-xr-xr-x. 1 root root 20480 Oct 10  2020 kubernetes-repo
dr-xr-xr-x. 1 root root  2048 Oct 10  2020 yaml
[root@master ~]# /opt/k8s_harbor_install.sh
[root@master ~]# /opt/k8s_image_push.sh
```
2. Complete the installation of the Kubernetes cluster on the master and node nodes. (2 points)

```shell
[root@master ~]# /opt/k8s_master_install.sh
[root@node ~]# ~/k8s_node_install.sh
```
#### Container Orchestration for ownCloud

  1. Write the /root/owncloud/docker-compose.yaml file on the master node with the following requirements: Container 1 Name: owncloud; Image: owncloud:latest; Mount path: /data/db/owncloud:/var/www/html/data; owncloud port mapping: 5679:80; Container 2 name: owncloud-db; Image: mysql:5.6; Database password: 123456.
```yaml
version: "3"
services:
  owncloud:
    image: 10.0.0.10/library/owncloud:latest
    restart: always
    ports:
      - 5679:80
    volumes:
      - /data/db/owncloud:/var/www/html/data
    environment:
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USER=owncloud
      - OWNCLOUD_DB_PASSWORD=123456
  owncloud-db:
    image: 10.0.0.10/library/mysql:5.6
    restart: always
    ports:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_DATABASE=owncloud
      - MYSQL_USER=owncloud
      - MYSQL_PASSWORD=123456
```
```shell
[root@master owncloud]# cat docker-compose.yaml
version: '3'
services:
  owncloud:
    image: owncloud:latest
    container_name: owncloud
    volumes:
      - "/data/db/owncloud:/var/www/html/data"
    links:
      - mysql:mysql
    ports:
      - "5679:80"
  mysql:
    image: mysql:5.6
    container_name: owncloud-db
    volumes:
      - "/data/db/mysql:/var/lib/mysql"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
      MYSQL_DATABASE: owncloud
```
#### ==Container Orchestration SkyWalking Kibana==
Pull the skywalking:latest and kibana:latest images from the repository on the node. Create a docker-compose.yaml file, orchestrate the deployment of the SkyWalking services, and set the restart policy. (2 points)
#### Container Orchestration: Prometheus and Grafana
1. Write the /root/prometheus/docker-compose.yaml file on the master node with the following requirements:
Container Name: prometheus; Image: prom/prometheus:v2.0.0; Port Mapping: 9090:9090;
Container Name: grafana; Image: grafana/grafana:4.2.0; Port Mapping: 3000:3000
- Set up Grafana to connect to the Prometheus service for monitoring node status
```yaml
version: "3"
services:
  prometheus:
    image: master/library/prometheus:v2.0.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
  grafana:
    image: master/library/grafana:4.2.0
    ports:
      - 3000:3000
    restart: always
    depends_on:
      - prometheus
  prometheus-exporter:
    image: prom/node-exporter
    container_name: prometheus-exporter
    hostname: prometheus-exporter
    ports:
      - 9100:9100
```

Create the prometheus.yml configuration:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['10.4.7.10:9090']
```

After starting the service, open http://localhost:3000 in a browser; you should see the Grafana login page. The initial username and password are both admin, and you will be required to change the password after logging in.
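The compose file above also runs a node-exporter on port 9100; for Prometheus to actually monitor node status, a scrape job for the exporter can be appended to prometheus.yml. This is a sketch: the 10.4.7.10 address follows the example above and is an assumption for your environment.

```yaml
  # appended under the existing scrape_configs list in prometheus.yml
  - job_name: 'node'
    static_configs:
      - targets: ['10.4.7.10:9100']
```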

#### Orchestrate and Deploy Redis Cluster

Write the /root/redis/docker-compose.yaml file on the master node to orchestrate and deploy a Redis cluster, with the following specific requirements: implement a Redis cluster with one master and two slaves; master container name: redis-master; slave container names: redis-slave-1, redis-slave-2; restart policy for all containers: always; Redis password: 123456; master node port mapping: 6370:

```yaml
version: "2"
services:
  redis-master:
    container_name: redis-master
    image: goharbor/redis-photon:v2.1.0
    restart: always
    command: redis-server --port 6379 --requirepass 123456 --appendonly yes
    ports:
      - 6379:6379
    volumes:
      - /data/:/data
  redis-slave-1:
    container_name: redis-slave-1
    image: goharbor/redis-photon:v2.1.0
    restart: always
    command: redis-server --slaveof redis-master 6379 --port 6371 --requirepass 123456 --masterauth 123456 --appendonly yes
    ports:
      - 6371:6371
    volumes:
      - /data/:/data
  redis-slave-2:
    container_name: redis-slave-2
    image: goharbor/redis-photon:v2.1.0
    restart: always
    command: redis-server --slaveof redis-master 6379 --port 6372 --requirepass 123456 --masterauth 123456 --appendonly yes
    ports:
      - 6372:6372
    volumes:
      - /data/:/data
```
#### Orchestrate and Deploy RabbitMQ Service
Pull the rabbitmq:latest image from the repository on the node, create a docker-compose.yaml file, orchestrate and deploy the RabbitMQ service, and set the restart policy. (2 points)
Container Name: rabbitmq; Image: rabbitmq:3.8.3-management;
Set the default RabbitMQ user and password to root.
Set the container restart policy to always;
Enable the RabbitMQ management plugin.
```yaml
version: '3'
services:
  rabbitmq:
    image: rabbitmq:3.8.3-management
    container_name: rabbitmq
    restart: always
    hostname: RabbitMQ
    ports:
      - 15672:15672
      - 5672:5672
    volumes:
      - ./data:/var/lib/rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=root
      - RABBITMQ_DEFAULT_PASS=root
```
```shell

[root@master rabbitmq]# docker exec -it 51c6031928fe /bin/bash

root@Rabbitmq:/# rabbitmq-plugins enable rabbitmq_management_agent rabbitmq_management
```
#### Deploying an ES Cluster
Pull the `elasticsearch:latest` and `kibana:latest` images from the repository on the node, create a `docker-compose.yaml` file, orchestrate the deployment of the ES cluster, and set the restart policy.
Create the data mount directories in advance and set permissions:

```shell
mkdir -p {es01,es02,es03}/data
chmod -R 777 es01 es02 es03
```

The resulting layout:

```
├── es01
│   └── data
├── es02
│   └── data
└── es03
    └── data
```

Run `docker-compose up -d` to start.
If startup fails because the kernel's virtual memory map limit is too low, the error message is as follows:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Modify the configuration file:

vi /etc/sysctl.conf

Append one line of content at the end of the file:
vm.max_map_count=262144
Run the following command to take effect immediately:
/sbin/sysctl -p
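To confirm the new limit is active, the live kernel value can be read back directly (a sketch; works on any Linux host):

```shell
# Read the live kernel setting; after sysctl -p this should print 262144
cat /proc/sys/vm/max_map_count
```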

```yaml
version: '2.2'
services:
  es01:
    image: elasticsearch:7.8.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es01/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: elasticsearch:7.8.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es02/data:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: elasticsearch:7.8.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es03/data:/usr/share/elasticsearch/data
    networks:
      - elastic
  kib01:
    image: kibana:latest
    container_name: kib01
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
    networks:
      - elastic
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
```

An alternative answer using elasticsearch:5.6-alpine:

```yaml
version: "2"
services:
  es:
    image: elasticsearch:5.6-alpine
    container_name: es
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    networks:
      - net-es
  es1:
    image: elasticsearch:5.6-alpine
    container_name: es1
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9201:9200"
      - "9301:9300"
    volumes:
      - ./config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    networks:
      - net-es
  es2:
    image: elasticsearch:5.6-alpine
    container_name: es2
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - "9202:9200"
      - "9302:9300"
    volumes:
      - ./config/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    networks:
      - net-es
networks:
  net-es:
```

#### Deploy Master-Slave MySQL Services
Pull the mysql:latest image from the repository on the node, create a docker-compose.yaml file, orchestrate and deploy the primary and replica MySQL services, and set the restart policy. (2 points)
### Dockerfile to Build MongoDB Image (National Competition Case Study)
Write a Dockerfile to create a MongoDB image, name the image as mall-mongodb:v1.1, and push it to a private repository. The specific requirements are as follows:
(1) Based on the centos:centos7.5.1804 base image;
(2) Specify the author as Chinaskill;
(3) Open port: 27017;
(4) Set the service to start automatically at boot.
The Dockerfile is written as follows:
```shell

[root@docker dockerfilemongodb]# vim mongodbfile

FROM centos:centos7.5.1804
MAINTAINER Chinaskill
COPY mongodb.repo /etc/yum.repos.d/mongodb.repo
RUN yum install -y mongodb-org
EXPOSE 27017
RUN systemctl enable mongod
```
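The Dockerfile copies a mongodb.repo into the image. A minimal sketch of that file, assuming the upstream MongoDB 4.2 yum repository (the version and gpgcheck settings are illustrative, not from the original task):

```shell
# Write a hypothetical mongodb.repo pointing at the MongoDB 4.2 yum repo
cat > mongodb.repo <<'EOF'
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.2/x86_64/
gpgcheck=0
enabled=1
EOF
```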
Build the MongoDB image as follows:
```shell

[root@docker dockerfilemongodb]# docker build -t="mongodb" -f mongodbfile .

```
Start the MongoDB container
```shell
[root@docker dockerfilemongodb]# docker run --name mongodb -itd --privileged=true -p 27017:27017 mongodb /sbin/init
```
Enter the MongoDB container and check the status of the mongod service.
```shell

[root@docker dockerfilemongodb]# docker exec -it mongodb /bin/bash

[root@acbc8d439e27 /]# systemctl status mongod

```
MongoDB yum installation: https://www.cnblogs.com/tianyamoon/p/9860656.html
MongoDB reference materials: https://www.runoob.com/mongodb/mongodb-linux-install.html
# Docker Compose Deployment: Redis Master-Slave Sentinel
Master-slave replication refers to copying the data from one Redis server to other Redis servers. The former is called the master node, and the latter is called the slave node; the replication of data is unidirectional, and it can only be from the master node to the slave node.
By default, each Redis server is a master node; and one master node can have multiple slave nodes (or no slave nodes), but one slave node can only have one master node.
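Outside of containers, the same master/slave relationship is expressed by two lines in the replica's redis.conf. This is a sketch: the master address and password are illustrative, and on Redis versions before 5 the directive is `slaveof` rather than `replicaof`.

```
replicaof 192.168.1.10 6379
masterauth 123456
```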
The role of master-slave replication
The main functions of master-slave replication include:
1. Data redundancy: Master-slave replication provides a hot backup of the data, a form of redundancy in addition to persistence.
2. Fault recovery: When the primary node fails, the secondary node can provide services to achieve quick fault recovery; in fact, it is a form of redundancy for the service.
3. Load Balancing: Based on master-slave replication, combined with read-write separation, the master node can provide write services while the slave nodes can provide read services (i.e., applications connect to the master node when writing Redis data and connect to the slave nodes when reading Redis data), thereby distributing server load; especially in scenarios where there is less writing and more reading, by distributing read loads across multiple slave nodes, it can significantly increase the concurrency of Redis servers.
4. High Availability Foundation: In addition to the above functions, master-slave replication is also the foundation for implementing sentinels and clusters, thus making it the basis for Redis high availability.
## 4. Deploy Master-Slave
Objective: 1 master node, 2 slave nodes, and 3 sentinel nodes
## 4.1 Preparation
```shell
# Install docker-compose
sudo yum install docker-compose -y
```
## 4.2 Master/Slave Deployment
```shell
# Create the redis-compose home
mkdir /home/Docker/docker-compose/redis && cd /home/Docker/docker-compose/redis

[root@hadoop4 redis]# cat docker-compose.yml
```

```yaml
version: '3'
services:
  master:
    image: redis
    container_name: redis-master
    command: redis-server --requirepass 123456 --masterauth 123456
    ports:
      - 6380:6379
  slave1:
    image: redis
    container_name: redis-slave-1
    ports:
      - 6381:6379
    command: redis-server --slaveof redis-master 6379 --requirepass 123456 --masterauth 123456
  slave2:
    image: redis
    container_name: redis-slave-2
    ports:
      - 6382:6379
    command: redis-server --slaveof redis-master 6379 --requirepass 123456 --masterauth 123456
```
Note that if the Redis client access password `requirepass` is set, the replica set synchronization password `masterauth` should also be set to the same value.
Additionally, when using the sentinel mode for failover, the existing Master may become a Slave. Therefore, the masterauth parameter should also be included in the current Master container.
Running `docker-compose up -d` produces 3 Redis containers, mapped to host ports 6380, 6381, and 6382 respectively. By default they are attached to the `redis_default` bridge network (the name Compose derives from the project directory).
## 4.3 Sentinel Deployment
The Sentinel containers we are about to create must reach the three Redis containers above, so the Sentinel docker-compose file needs to join the existing external `redis_default` bridge network.
```shell
[root@hadoop4 sentinel]# pwd
/home/Docker/docker-compose/redis/sentinel
[root@hadoop4 sentinel]# cat docker-compose.yml
```
```yaml
version: '3'
services:
  sentinel-1:
    image: redis
    container_name: redis-sentinel-1
    ports:
      - 26379:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel1.conf:/usr/local/etc/redis/sentinel.conf
  sentinel-2:
    image: redis
    container_name: redis-sentinel-2
    ports:
      - 26380:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel2.conf:/usr/local/etc/redis/sentinel.conf
  sentinel-3:
    image: redis
    container_name: redis-sentinel-3
    ports:
      - 26381:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./sentinel3.conf:/usr/local/etc/redis/sentinel.conf
networks:
  default:
    external:
      name: redis_default
```
Create a sentinel file and copy the following content into it.
```shell
[root@hadoop4 sentinel]# cat sentinel1.conf
port 26379
dir /tmp
sentinel monitor mymaster 172.18.0.3 6379 2
sentinel auth-pass mymaster 123456
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
```
Note that the IP address 172.18.0.3 above is the IP of the Redis master container after the master/slave deployment started; it was obtained with `docker inspect <container>`. The auth-pass value must match the master/slave access password set earlier.
Copy sentinel1.conf to sentinel2.conf and sentinel3.conf, since each sentinel container mounts its own file.
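A minimal sketch of generating the three sentinel config files (the monitor address and password follow the example above and are assumptions for your environment):

```shell
# Write the first sentinel config, then duplicate it for the other two sentinels
cat > sentinel1.conf <<'EOF'
port 26379
dir /tmp
sentinel monitor mymaster 172.18.0.3 6379 2
sentinel auth-pass mymaster 123456
EOF
cp sentinel1.conf sentinel2.conf
cp sentinel1.conf sentinel3.conf
```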
```shell
# Start the containers
docker-compose up -d
```
The output of docker ps is as follows:

```shell
[root@hadoop4 sentinel]# docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED             STATUS             PORTS                                NAMES
0bf5fd88b0b2   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   6379/tcp, 0.0.0.0:26380->26379/tcp   redis-sentinel-2
27e22208cef7   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   6379/tcp, 0.0.0.0:26379->26379/tcp   redis-sentinel-1
a67bac1aca69   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   6379/tcp, 0.0.0.0:26381->26379/tcp   redis-sentinel-3
16333f8c1787   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6381->6379/tcp               redis-slave-1
26e186d4d9d9   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6382->6379/tcp               redis-slave-2
a8368ebdfcbb   redis   "docker-entrypoint.s…"   About an hour ago   Up About an hour   0.0.0.0:6380->6379/tcp               redis-master
```

## 5. Verification

Enter the redis-master container to view the master-slave relationship:

```shell
[root@hadoop4 redis]# docker exec -it redis-master bash
root@a8368ebdfcbb:/data# redis-cli -h localhost
localhost:6379> auth 123456
OK
localhost:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.18.0.2,port=6379,state=online,offset=7043,lag=0
slave1:ip=172.18.0.4,port=6379,state=online,offset=7043,lag=0
master_failover_state:no-failover
master_replid:94386248d441ecd4bfab9933e2eb4ff597943a87
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:7043
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:7043
```

This completes the setup of the Redis master-slave sentinel.