From 0 to 1: Getting Started with etcd

Posted Jun 29, 2020 · 16 min read


Author: kaliarch
Link: https://juejin.im/post/5e02fb...

Background: While working with Kubernetes recently, I was confused about what etcd actually does. Studying it on its own makes some of the features in k8s easier to understand.

I. Overview

1.1 Introduction to etcd

etcd is an open source project initiated by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.

1.2 Development History

1.3 Features of etcd

  • Simple: easy to install and configure, with an HTTP API for interaction
  • Secure: supports SSL certificate verification
  • Fast: according to the official benchmarks, a single instance supports 2k+ read operations per second
  • Reliable: uses the Raft algorithm to achieve availability and consistency of distributed data

1.4 Concept Terminology

  • Raft: the algorithm etcd uses to guarantee strong consistency in a distributed system.
  • Node: an instance of the Raft state machine.
  • Member: an etcd instance. It manages a Node and can serve client requests.
  • Cluster: an etcd cluster formed by multiple Members working together.
  • Peer: the name for another Member in the same etcd cluster.
  • Client: a client that sends HTTP requests to the etcd cluster.
  • WAL: Write-Ahead Log, the log format etcd uses for persistent storage.
  • Snapshot: a snapshot of etcd's data state, taken to keep the number of WAL files from growing too large.
  • Proxy: an etcd mode that provides reverse proxy service for an etcd cluster.
  • Leader: the node produced by elections in the Raft algorithm that handles all data submissions.
  • Follower: a node that lost the election; it acts as a subordinate node in Raft and helps provide the algorithm's strong consistency guarantee.
  • Candidate: when a Follower receives no heartbeat from the Leader for a certain period, it switches to Candidate and starts an election.
  • Term: the period during which a node is Leader, lasting until the next election; each such period is called a Term.
  • Index: the data item number. Raft locates data using the Term and the Index.

1.5 Data read and write sequence

In order to guarantee strong data consistency, all data in an etcd cluster flows in one direction, from the Leader (master node) to the Followers: every Follower's data must be consistent with the Leader's, and any inconsistent data is overwritten.

Users can read from and write through any node in the etcd cluster (see the sketch below):

  • Read: since the data on all nodes in the cluster is strongly consistent, a read can be served by any node in the cluster
  • Write: the etcd cluster has a single Leader. A write sent to the Leader is handled directly and then distributed by the Leader to all Followers; a write sent to a Follower is forwarded to the Leader, which then distributes it to all Followers
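
A minimal sketch of this behavior using the v3 etcdctl against the three-node cluster deployed later in this article (the key /demo/key is hypothetical; the client transparently forwards writes to the Leader):

export ETCDCTL_API=3

# Write through any endpoint -- the request is forwarded to the Leader internally.
etcdctl --endpoints=http://172.16.0.14:2379 put /demo/key "value"

# Read from a different node -- strong consistency means the same data is returned.
etcdctl --endpoints=http://172.16.0.17:2379 get /demo/key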

1.6 leader election

Assume a cluster of three nodes, each running an election timer whose duration is random. The Raft algorithm uses these random timers to initiate the Leader election. The node whose timer expires first sends the other two nodes a request to become Leader; once the other nodes respond with their votes, that node is elected Leader.

After becoming a leader, the node will send notifications to other nodes at regular intervals to ensure that it is still the leader. In some cases, when the Followers do not receive the Leader's notification, for example, if the Leader node is down or loses connection, other nodes will repeat the previous election process to elect a new Leader.

1.7 Determine whether data is written

etcd considers a write successful once the write request has been processed by the Leader node and distributed to a majority of the nodes. How is the majority determined? If the total number of nodes is N, a majority is Quorum = N/2 + 1. The left-hand table in the figure referenced above lists the Quorum corresponding to each total number of cluster nodes (Instances); Instances minus Quorum gives the number of failure-tolerant nodes (the number of node failures the cluster can tolerate).

Therefore, the recommended minimum cluster size is 3: with 1 or 2 nodes the number of failure-tolerant nodes is 0, so once one node goes down the whole cluster stops working properly. The table below works the formula through for small clusters.
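
Reconstructed from the formula, the figure's table looks like this:

Instances   Quorum (N/2+1)   Failure-tolerant nodes
1           1                0
2           2                0
3           2                1
4           3                1
5           3                2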

II. etcd Architecture and Analysis

2.1 Architecture diagram

2.2 Architecture analysis

From the architecture diagram of etcd, we can see that etcd is mainly divided into four parts.

  • HTTP Server: handles API requests sent by users, as well as synchronization and heartbeat requests from other etcd nodes.
  • Store: handles the transactions for the various functions etcd supports, including data indexing, node state changes, monitoring and feedback, and event processing and execution. It is the concrete implementation of most of the API functions etcd provides to users.
  • Raft: the concrete implementation of the Raft strong-consistency algorithm; the core of etcd.
  • WAL: the Write-Ahead Log is etcd's data storage format. Besides keeping the state of all data in memory together with the node's index, etcd persists data through the WAL: all data is recorded in the log before it is committed.
  • Snapshot: a state snapshot taken to keep the log from growing too large;
  • Entry: the concrete log content that is stored.

Usually, a user request is sent via the HTTP Server to the Store for the concrete transaction processing. If it involves modifying node state, the request is handed to the Raft module for the state change and log record, then synchronized to the other etcd nodes to confirm the commit, and finally the data is committed and synchronized once more.

III. Application Scenarios

3.1 Service Registration and Discovery

etcd can be used for service registration and discovery

  • Front-end and back-end business registration discovery

Back-end services register themselves in etcd; the front end and middleware can then conveniently look up the relevant servers in etcd and bind calls between servers according to the call relationships.

  • Multiple groups of back-end server registration discovery

Multiple groups of identical stateless back-end app instances can all register themselves in etcd at the same time. Through haproxy, the front end obtains the back-end IP and port list from etcd and then forwards requests; this enables failover and shields the front end from the multiple groups of app instances behind it. A registration sketch follows below.
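
A minimal registration sketch using the v2 API used elsewhere in this article; the /services/web key layout is hypothetical. Each instance refreshes its own key before the TTL expires, so crashed instances disappear automatically:

# Register this instance with a 30-second TTL (key and value layout are illustrative).
etcdctl set /services/web/172.16.0.8 '{"port": 8080}' --ttl 30

# Refresh the registration periodically from the instance itself.
while true; do
    etcdctl set /services/web/172.16.0.8 '{"port": 8080}' --ttl 30 > /dev/null
    sleep 10
done

# A discovery client (e.g. a haproxy config generator) lists the live instances.
etcdctl ls /services/web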

3.2 News Publish and Subscribe

etcd can act as message middleware: producers register topics in etcd and publish messages to them, while consumers subscribe to the topics in etcd to receive the messages the producers send. A watch-based sketch follows below.
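
A minimal pub/sub sketch built on etcd's watch mechanism (the /topics/news key is hypothetical):

# Consumer: block and print every update to the topic key.
etcdctl watch --forever /topics/news

# Producer (from another shell): publish a message by updating the key.
etcdctl set /topics/news "hello subscribers"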

3.3 Load balancing

Multiple groups of back ends for the same service register themselves in etcd, and etcd monitors and checks the registered services. A service consumer first obtains the real ip:port of the available service providers from etcd and then spreads its requests across the multiple groups of services; in this way etcd serves a load-balancing function.

3.4 Distributed Notification and Coordination

  • A watcher on etcd detects that a service has gone missing and notifies a checking service to investigate
  • A controller writes a start-service command to etcd, and etcd notifies the service to perform the corresponding operation
  • When the work is completed, the worker updates its status in etcd, and etcd notifies the user

3.5 Distributed Lock

When multiple competitor nodes contend for a resource, etcd acts as the master controller and successfully grants the lock to exactly one node in the distributed cluster, as in the sketch below.
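
The v3 etcdctl exposes a built-in lock command; a minimal sketch (the lock name mutex1 is arbitrary):

export ETCDCTL_API=3

# Terminal 1: acquire the lock; etcdctl prints the ownership key and holds
# the lock until the process exits.
etcdctl lock mutex1

# Terminal 2: this call blocks until terminal 1 releases the lock.
etcdctl lock mutex1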

3.6 Distributed Queue

With multiple competitor nodes, etcd creates a queue entry corresponding to each node; reading the entries in queue order identifies the corresponding competitor in etcd, as in the sketch below.
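
A minimal sketch using etcd v2's in-order keys, which map naturally onto a FIFO queue (the /queue directory name is arbitrary):

# Each enqueue creates a key with a monotonically increasing index as its name.
etcdctl mk --in-order /queue job1
etcdctl mk --in-order /queue job2

# A consumer lists the queue sorted by index and processes the head entry first.
etcdctl ls --sort /queue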

3.7 Clustering and Monitoring and Leader Election

Based on the Raft algorithm, etcd elects a Leader among its own nodes; applications can likewise use etcd to run their own leader elections, as sketched below.
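
A minimal sketch with the v3 etcdctl elect command (the election name my-service and the proposal values are arbitrary):

export ETCDCTL_API=3

# Candidate 1: campaign in election "my-service"; blocks while holding leadership.
etcdctl elect my-service instance-1

# Candidate 2 (another shell): blocks until candidate 1 resigns or its lease expires.
etcdctl elect my-service instance-2

# Observer: print the current leader of the election.
etcdctl elect --listen my-service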

IV. Installation and Deployment

4.1 Stand-alone deployment

You can install from the binary release or from source, but then you have to write the configuration file and the service unit file yourself. The yum installation method is recommended.

hostnamectl set-hostname etcd-1
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
# The etcd version in the yum repository is 3.3.11; do a binary install if you need the latest version
yum -y install etcd
systemctl enable etcd

You can inspect the effective configuration file of the yum-installed etcd and adjust the data directory, listening URLs, etcd name, and so on to your needs. By default:

  • etcd stores data in the default.etcd/ directory under the current path by default

  • Communicates with other nodes in the cluster at http://localhost:2380

  • Provides the HTTP API service at http://localhost:2379 for client interaction

  • The node name defaults to default

  • heartbeat is 100ms: how often the Leader notifies the Followers that it is still the Leader

  • election is 1000ms: how long a Follower waits without hearing a heartbeat before starting an election

  • The snapshot count is 10000: how many committed entries accumulate before a snapshot is taken (see the tuning sketch after the configuration excerpt below)

  • The cluster and each node generate a uuid

  • On startup, Raft runs to elect a Leader

    [root@VM_0_8_centos tmp]# grep -Ev "^#|^$" /etc/etcd/etcd.conf
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
    ETCD_NAME="default"
    ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
    [root@VM_0_8_centos tmp]# systemctl status etcd
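
The heartbeat, election, and snapshot settings can be tuned in the same configuration file. A minimal sketch with illustrative values, not recommendations (the variable names are the standard etcd environment settings; etcd's docs suggest an election timeout of roughly 10x the heartbeat interval):

# /etc/etcd/etcd.conf -- Raft timing and snapshot tuning (illustrative values)
ETCD_HEARTBEAT_INTERVAL="100"   # ms between Leader heartbeats to Followers
ETCD_ELECTION_TIMEOUT="1000"    # ms a Follower waits without a heartbeat before campaigning
ETCD_SNAPSHOT_COUNT="10000"     # committed transactions before a snapshot is triggered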

4.2 Cluster deployment

For a cluster deployment it is best to use an odd number of nodes, which achieves the best fault tolerance.

4.2.1 Host information

4.2.2 host configuration

In this example, three nodes are used to deploy the etcd cluster; add the host entries below on each node:

cat >> /etc/hosts << EOF
172.16.0.8 etcd-0-8
172.16.0.14 etcd-0-14
172.16.0.17 etcd-0-17
EOF

4.2.3 etcd installation

Install etcd on all three nodes:

wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum -y install etcd
systemctl enable etcd
mkdir -p /data/app/etcd/
chown etcd:etcd /data/app/etcd/

4.2.4 etcd configuration

  • etcd default configuration file (omitted)

etcd-0-8 configuration:

[root@etcd-server ~]# hostnamectl set-hostname etcd-0-8
[root@etcd-0-8 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.8:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"
ETCD_NAME="etcd-0-8"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.8:2379"
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380 "
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"

etcd-0-14 configuration:

[root@etcd-server ~]# hostnamectl set-hostname etcd-0-14
[root@etcd-server ~]# mkdir -p /data/app/etcd/
[root@etcd-0-14 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.14:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.14:2379"
ETCD_NAME="etcd-0-14"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.14:2379"ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"

etcd-0-17 configuration:

[root@etcd-server ~]# hostnamectl set-hostname etcd-0-17
[root@etcd-server ~]# mkdir -p /data/app/etcd/
[root@etcd-0-17 ~]# egrep "^#|^$" /etc/etcd/etcd.conf -v
ETCD_DATA_DIR="/data/app/etcd/"
ETCD_LISTEN_PEER_URLS="http://172.16.0.17:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.17:2379"
ETCD_NAME="etcd-0-17"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.0.17:2380"ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.0.17:2379"
ETCD_INITIAL_CLUSTER="etcd-0-8=http://172.16.0.8:2380,etcd-0-17=http://172.16.0.17:2380,etcd-0-14=http://172.16.0.14:2380 "
ETCD_INITIAL_CLUSTER_TOKEN="etcd-token"
ETCD_INITIAL_CLUSTER_STATE="new"

Start the service on each node after configuring it:

systemctl start etcd

4.2.5 View cluster status

  • View etcd status

    [root@etcd-0-8 default.etcd]# systemctl status etcd

    etcd.service - Etcd Server

    Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
    Active: active (running) since Tue 2019-12-03 15:55:28 CST; 8s ago
    Main PID: 24510 (etcd)
    CGroup: /system.slice/etcd.service

                    24510 /usr/bin/etcd --name=etcd-0-8 --data-dir=/data/app/etcd/ --listen-client-urls=http://172.16.0.8:2379

    Dec 03 15:55:28 etcd-0-8 etcd[24510]: set the initial cluster version to 3.0
    Dec 03 15:55:28 etcd-0-8 etcd[24510]: enabled capabilities for version 3.0
    Dec 03 15:55:30 etcd-0-8 etcd[24510]: peer 56e0b6dad4c53d42 became active
    Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream Message reader)
    Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream Message writer)
    Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream MsgApp v2 reader)
    Dec 03 15:55:30 etcd-0-8 etcd[24510]: established a TCP streaming connection with peer 56e0b6dad4c53d42 (stream MsgApp v2 writer)
    Dec 03 15:55:32 etcd-0-8 etcd[24510]: updating the cluster version from 3.0 to 3.3
    Dec 03 15:55:32 etcd-0-8 etcd[24510]: updated the cluster version from 3.0 to 3.3
    Dec 03 15:55:32 etcd-0-8 etcd[24510]: enabled capabilities for version 3.3

Check the listening ports (if the loopback address is not being listened on, etcdctl on the local machine cannot connect with its default settings):

[root@etcd-0-8 default.etcd]# netstat -lntup |grep etcd
tcp 0 0 172.16.0.8:2379 0.0.0.0:* LISTEN 25167/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 25167/etcd
tcp 0 0 172.16.0.8:2380 0.0.0.0:* LISTEN 25167/etcd

View the cluster status (you can see etcd-0-17 in the member list):

[root@etcd-0-8 default.etcd]# etcdctl member list
2d2e457c6a1a76cb: name=etcd-0-8 peerURLs=http://172.16.0.8:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.8:2379 isLeader=false
56e0b6dad4c53d42: name=etcd-0-14 peerURLs=http://172.16.0.14:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.14:2379 isLeader=true
d2d2e9fc758e6790: name=etcd-0-17 peerURLs=http://172.16.0.17:2380 clientURLs=http://127.0.0.1:2379,http://172.16.0.17:2379 isLeader=false

[root@etcd-0-8 ~]# etcdctl cluster-health
member 2d2e457c6a1a76cb is healthy: got healthy result from http://127.0.0.1:2379
member 56e0b6dad4c53d42 is healthy: got healthy result from http://127.0.0.1:2379
member d2d2e9fc758e6790 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
V. Simple Usage

5.1 Create

  • set

Set the value of a key. For example:

$ etcdctl set /testdir/testkey "Hello world"
Hello world

# Supported options include:
--ttl '0' expiration time of the key in seconds; if not set (default 0), the key never expires
--swap-with-value value perform the set only if the key's current value equals value
--swap-with-index '0' perform the set only if the key's current index equals the specified index
  • mk

If the given key does not exist, create it with the given value. For example:

$ etcdctl mk /testdir/testkey "Hello world"
Hello world

# When the key already exists, this command reports an error, for example:
$ etcdctl mk /testdir/testkey "Hello world"
Error: 105: Key already exists (/testdir/testkey) [8]

# Supported options are:
--ttl '0' expiration time in seconds; if not set (default 0), the key never expires
  • mkdir

If the given key directory does not exist, create it. For example:

$ etcdctl mkdir testdir2

# Supported options are:
--ttl '0' expiration time in seconds; if not set (default 0), the directory never expires
  • setdir

Create a key directory. If the directory does not exist, it is created; if it exists, its TTL is updated.

$ etcdctl setdir testdir3

# Supported options are:
--ttl '0' expiration time in seconds; if not set (default 0), the directory never expires

5.2 Delete

  • rm

Delete a key. For example:

$ etcdctl rm /testdir/testkey
PrevNode.Value: Hello

# When the key does not exist, an error is reported. For example:
$ etcdctl rm /testdir/testkey
Error: 100: Key not found (/testdir/testkey) [7]

# Supported options are:
--dir delete the key if it is an empty directory or a key-value pair
--recursive delete the directory and all subkeys
--with-value delete only if the existing value matches
--with-index '0' delete only if the existing index matches
  • rmdir

Delete an empty directory or a key-value pair.

$ etcdctl setdir dir1
$ etcdctl rmdir dir1

# If the directory is not empty, an error is reported:
$ etcdctl set /dir/testkey hihi
$ etcdctl rmdir /dir
Error: 108: Directory not empty (/dir) [17]

5.3 Update

  • update

When the key exists, update its value. For example:

$ etcdctl update /testdir/testkey "Hello"
Hello

# When the key does not exist, an error is reported. For example:
$ etcdctl update /testdir/testkey2 "Hello"
Error: 100: Key not found (/testdir/testkey2) [6]

# Supported options are:
--ttl '0' expiration time in seconds; if not set (default 0), the key never expires
  • updatedir

Update an existing directory.

$ etcdctl updatedir testdir2

# Supported options are:
--ttl '0' expiration time in seconds; if not set (default 0), the directory never expires

5.4 Query

  • get

Get the value of the specified key. For example:

$ etcdctl get /testdir/testkey
Hello world

# When the key does not exist, an error is reported. For example:
$ etcdctl get /testdir/testkey2
Error: 100: Key not found (/testdir/testkey2) [5]

# Supported options are:
--sort sort the results
--consistent send the request to the leader node to guarantee the consistency of the returned content
  • ls

List the keys or subdirectories under a directory (the root directory by default). By default the contents of subdirectories are not shown.

For example:

$ etcdctl ls
/testdir
/testdir2
/dir

$ etcdctl ls dir
/dir/testkey

# Supported options include:
--sort sort the output
--recursive if the directory has subdirectories, output their contents recursively
-p append / to directory entries in the output to distinguish them

5.5 watch

  • watch

Watch a key for changes; once the key is updated, the latest value is printed and the command exits.
For example: the user updates the testkey value to "Hello watch".

$ etcdctl get /testdir/testkey
Hello world

$ etcdctl set /testdir/testkey "Hello watch"
Hello watch
$ etcdctl watch /testdir/testkey
Hello watch

Supported options include:

--forever keep watching until the user presses CTRL+C to exit
--after-index '0' keep watching starting from the specified index
--recursive also return all keys and subkeys
  • exec-watch

Watch a key for changes and execute a given command once the key is updated.
For example: the user updates the testkey value.

$ etcdctl exec-watch testdir/testkey -- sh -c 'ls'
config Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md

Supported options include:

--after-index '0' keep watching starting from the specified index
--recursive also return all keys and subkeys

5.6 Backup

Back up etcd data.

$ etcdctl backup --data-dir /var/lib/etcd --backup-dir /home/etcd_backup

Supported options include:

--data-dir the etcd data directory
--backup-dir the path the backup is written to
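
A sketch of recovering from such a backup on a single node; --force-new-cluster is a real etcd flag that discards the old cluster membership so the restored node starts as a fresh one-member cluster:

# Start etcd against the backup copy, dropping the old membership information.
etcd --data-dir /home/etcd_backup --force-new-cluster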

5.7 member

Use the list, add, and remove subcommands to list, add, and remove etcd instances in an etcd cluster.

List the members of the cluster:

$ etcdctl member list
8e9e05c52164694d: name=dev-master-01 peerURLs=http://localhost:2380 clientURLs=http://localhost:2379 isLeader=true

Remove a member from the cluster:

$ etcdctl member remove 8e9e05c52164694d
Removed member 8e9e05c52164694d from cluster

Add a new member to the cluster (see the join sketch after the output):

$ etcdctl member add etcd3 http://192.168.1.100:2380
Added member named etcd3 with ID 8e9e05c52164694d to cluster
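
After member add, the new instance itself must still be started and told to join the existing cluster rather than bootstrap a new one. A sketch in the same configuration style used earlier (the addresses come from the example above):

# /etc/etcd/etcd.conf on the new node (sketch):
ETCD_NAME="etcd3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.100:2380"
ETCD_INITIAL_CLUSTER="dev-master-01=http://localhost:2380,etcd3=http://192.168.1.100:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"   # join the running cluster rather than create a new one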

Example

# Set a key value
[root@etcd-0-8 ~]# etcdctl set /msg "hello k8s"
hello k8s

# Get the value of the key
[root@etcd-0-8 ~]# etcdctl get /msg
hello k8s

# Get key details
[root@etcd-0-8 ~]# etcdctl -o extended get /msg
Key: /msg
Created-Index: 12
Modified-Index: 12
TTL: 0
Index: 12

hello k8s

# Getting a non-existent key returns an error
[root@etcd-0-8 ~]# etcdctl get /xxzx
Error: 100: Key not found (/xxzx) [12]

# Set a key with a ttl; it is deleted automatically after it expires
[root@etcd-0-8 ~]# etcdctl set /testkey "tmp key test" --ttl 5
tmp key test
[root@etcd-0-8 ~]# etcdctl get /testkey
Error: 100: Key not found (/testkey) [14]

# Key swap (compare-and-swap) operation
[root@etcd-0-8 ~]# etcdctl get /msg
hello k8s
[root@etcd-0-8 ~]# etcdctl set --swap-with-value "hello k8s" /msg "goodbye"
goodbye
[root@etcd-0-8 ~]# etcdctl get /msg
goodbye

# mk creates the key only when it does not exist (set overwrites an existing key)
[root@etcd-0-8 ~]# etcdctl get /msg
goodbye
[root@etcd-0-8 ~]# etcdctl mk /msg "mktest"
Error: 105: Key already exists (/msg) [18]
[root@etcd-0-8 ~]# etcdctl mk /msg1 "mktest"
mktest

# Create self-ordering (in-order) keys
[root@etcd-0-8 ~]# etcdctl mk --in-order /queue s1
s1
[root@etcd-0-8 ~]# etcdctl mk --in-order /queue s2
s2
[root@etcd-0-8 ~]# etcdctl ls --sort /queue
/queue/00000000000000000021
/queue/00000000000000000022
[root@etcd-0-8 ~]# etcdctl get /queue/00000000000000000021
s1

# Update the key value
[root@etcd-0-8 ~]# etcdctl update /msg1 "update test"
update test
[root@etcd-0-8 ~]# etcdctl get /msg1
update test

# Update the key's ttl and value
[root@etcd-0-8 ~]# etcdctl update --ttl 5 /msg "aaa"
aaa

# Create a directory
[root@etcd-0-8 ~]# etcdctl mkdir /testdir

# Delete an empty directory
[root@etcd-0-8 ~]# etcdctl mkdir /test1
[root@etcd-0-8 ~]# etcdctl rmdir /test1

# Delete a non-empty directory
[root@etcd-0-8 ~]# etcdctl get /testdir/test
/testdir/test: is a directory
[root@etcd-0-8 ~]#
[root@etcd-0-8 ~]# etcdctl rm --recursive /testdir

# List directory contents
[root@etcd-0-8 ~]# etcdctl ls /
/tmp
/msg1
/queue
[root@etcd-0-8 ~]# etcdctl ls /tmp
/tmp/a
/tmp/b

# Recursively list the contents of a directory
[root@etcd-0-8 ~]# etcdctl ls --recursive /
/msg1
/queue
/queue/00000000000000000021
/queue/00000000000000000022
/tmp
/tmp/b
/tmp/a

# Watch a key; when the key changes, the change is printed
[root@etcd-0-8 ~]# etcdctl watch /msg1
xxx
[root@VM_0_17_centos ~]# etcdctl update /msg1 "xxx"
xxx

# Watch a directory; when any node under it changes, the change is printed
[root@etcd-0-8 ~]# etcdctl watch --recursive /
[update] /msg1
xxx
[root@VM_0_17_centos ~]# etcdctl update /msg1 "xxx"
xxx

# Keep watching until `CTRL+C` quits the watch
[root@etcd-0-8 ~]# etcdctl watch --forever /

# Watch the directory and execute a command when there is a change
[root@etcd-0-8 ~]# etcdctl exec-watch --recursive / -- sh -c "echo change"
change

# Backup
[root@etcd-0-14 ~]# etcdctl backup --data-dir /data/app/etcd --backup-dir /root/etcd_backup
2019-12-04 10:25:16.113237 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113268 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113272 I | ignoring EntryConfChange raft entry
2019-12-04 10:25:16.113293 I | ignoring member attribute update on /0/members/2d2e457c6a1a76cb/attributes
2019-12-04 10:25:16.113299 I | ignoring member attribute update on /0/members/d2d2e9fc758e6790/attributes
2019-12-04 10:25:16.113305 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes
2019-12-04 10:25:16.113310 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes
2019-12-04 10:25:16.113314 I | ignoring member attribute update on /0/members/2d2e457c6a1a76cb/attributes
2019-12-04 10:25:16.113319 I | ignoring member attribute update on /0/members/d2d2e9fc758e6790/attributes
2019-12-04 10:25:16.113384 I | ignoring member attribute update on /0/members/56e0b6dad4c53d42/attributes

# Use the v3 API
[root@etcd-0-14 ~]# export ETCDCTL_API=3
[root@etcd-0-14 ~]# etcdctl --endpoints="http://172.16.0.8:2379,http://172.16.0.14:2379,http://172.16.0.17:2379" snapshot save mysnapshot.db
Snapshot saved at mysnapshot.db
[root@etcd-0-14 ~]# etcdctl snapshot status mysnapshot.db -w json
{"hash":928285884,"revision":0,"totalKey":5,"totalSize":20480}
VI. Summary

  • By default etcd keeps only 1000 historical events, so it is not suitable for scenarios with a large number of update operations, where older history is lost. The typical etcd application scenarios are configuration management and service discovery, which read far more than they write.
  • Compared with ZooKeeper, etcd is much simpler to use. However, to achieve real service discovery, etcd should be combined with other tools (such as registrator, confd, etc.) to register and update services automatically.
  • Currently, there is no graphical tool for etcd.

If you find mistakes or other problems, you are welcome to leave comments and corrections. If this article helped you, likes and shares are welcome.

You are welcome to follow the author's WeChat official account: the Technical Road of Migrant Worker Brother.
