Environment:
CentOS 7, zookeeper-3.4.6.tar.gz
I. The standalone build process
1. Download and extract the Zookeeper installation package
Download the installation package onto the server and extract it with the command tar -zxvf zookeeper-3.4.6.tar.gz.
2. After extracting, create a data directory inside the installation directory with mkdir data.
3. Rename and modify the zoo_sample.cfg configuration file in the conf/ directory
Rename it with the command mv zoo_sample.cfg zoo.cfg;
Then edit it with vim and point the dataDir entry at the data directory created above.
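Putting the commands above together, a minimal sketch of the standalone setup (the installation path /opt/zookeeper-3.4.6 is an assumption; adjust it to your own directory):

    tar -zxvf zookeeper-3.4.6.tar.gz
    cd zookeeper-3.4.6
    mkdir data
    cd conf
    mv zoo_sample.cfg zoo.cfg
    vim zoo.cfg    # set dataDir=/opt/zookeeper-3.4.6/data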
4. After the modification is complete, start the Zookeeper service and check its status
In the bin/ directory, start the service with ./zkServer.sh start.
Check the current status with ./zkServer.sh status; Mode: standalone indicates a single-node service.
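With 3.4.6 the output of these two commands looks roughly like the following (the config path depends on where you installed it):

    ./zkServer.sh start
      ZooKeeper JMX enabled by default
      Using config: .../conf/zoo.cfg
      Starting zookeeper ... STARTED
    ./zkServer.sh status
      Mode: standalone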
II. The Zookeeper cluster build process
1. Create a separate directory to hold the multiple ZK installation directories (since only one virtual machine is available, the cluster is simulated by running the nodes on different ports of the same host)
Create the cluster installation directory: /home/Zookeeper/Cluster.
2. Download the ZK installation package into that directory, extract it, rename it, and copy it into three directories: zookeeper01, zookeeper02, and zookeeper03 (see the sketch after this step)
Download the zookeeper-3.4.6.tar.gz file into the directory above.
Extract it with the command tar -zxvf zookeeper-3.4.6.tar.gz.
Then rename the extracted zookeeper-3.4.6 directory to zookeeper01 with the command mv zookeeper-3.4.6 zookeeper01.
Copy it into the 02 and 03 installation directories with the command cp -r zookeeper01 zookeeper0N, run once for each copy.
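The commands for steps 1 and 2 as a single sketch (the /home/Zookeeper/Cluster path comes from step 1):

    mkdir -p /home/Zookeeper/Cluster
    cd /home/Zookeeper/Cluster
    # download zookeeper-3.4.6.tar.gz into this directory, then:
    tar -zxvf zookeeper-3.4.6.tar.gz
    mv zookeeper-3.4.6 zookeeper01
    cp -r zookeeper01 zookeeper02
    cp -r zookeeper01 zookeeper03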
3. Create a data directory in each of the three installation directories, and create a myid file in each data directory to store the node number
Use the commands mkdir data, cd data, touch myid, and then write the node number into the myid file.
The values 1, 2, and 3 are the node numbers of the respective nodes; they are referenced again in the next step.
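For example, from the cluster directory created above:

    cd /home/Zookeeper/Cluster
    mkdir zookeeper01/data zookeeper02/data zookeeper03/data
    echo 1 > zookeeper01/data/myid
    echo 2 > zookeeper02/data/myid
    echo 3 > zookeeper03/data/myid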
4. Modify the zoo_sample.cfg file in the conf/ directory: change the port, the data/log path, and add the cluster information
Rename it with the command mv zoo_sample.cfg zoo.cfg;
Edit it with vim to change the data directory and the port and to add the cluster node entries. Each node's clientPort must be different. Add lines of the form server.myid=ip:port1:port2, where myid is the value written to the data/myid file in the previous step; port1 and port2 must differ from the clientPort, and on a single machine each node must use its own port1 and port2.
The zoo.cfg files under node 02 and node 03 are configured in the same way;
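As a sketch, zookeeper01's zoo.cfg for a single-machine pseudo-cluster could look like this (the client ports 2181-2183, the internal ports, and the loopback IP are assumptions, not values taken from the original setup):

    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/home/Zookeeper/Cluster/zookeeper01/data
    clientPort=2181
    server.1=127.0.0.1:2888:3888
    server.2=127.0.0.1:2889:3889
    server.3=127.0.0.1:2890:3890

Node 02 and node 03 keep the same server.* lines but change dataDir to their own data directories and clientPort to 2182 and 2183 respectively.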
5. Start all node services separately and check the election results.
In the bin directory of each node, start the service with ./zkServer.sh start.
Check each node's startup status with ./zkServer.sh status.
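For example, using the assumed cluster paths from above:

    /home/Zookeeper/Cluster/zookeeper01/bin/zkServer.sh start
    /home/Zookeeper/Cluster/zookeeper02/bin/zkServer.sh start
    /home/Zookeeper/Cluster/zookeeper03/bin/zkServer.sh start
    /home/Zookeeper/Cluster/zookeeper01/bin/zkServer.sh status
    /home/Zookeeper/Cluster/zookeeper02/bin/zkServer.sh status
    /home/Zookeeper/Cluster/zookeeper03/bin/zkServer.sh status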
6. The ZK leader election process
When the cluster first starts, node Server1 starts alone. Server1 votes for itself and receives that one vote. Because the number of started nodes is not more than half of all nodes, no leader can be elected and Server1 stays in the LOOKING state.
When the second node, Server2, starts, it votes for itself because its myid is the larger of the two. Server1 detects that a new node has joined and votes again; after comparing myid values it switches its vote to Server2, so Server2 now holds two votes. The started nodes, and the votes for Server2, now account for more than half of all nodes, so a leader can be elected: Server1 updates its state from LOOKING to FOLLOWING, and Server2 is elected leader and updates its state from LOOKING to LEADING.
When the third node, Server3, starts, it finds that a leader has already been elected, so it follows the majority, votes for Server2, and becomes a follower. The election is then complete.
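If the three nodes are started fresh in the order 01, 02, 03 as above, the status commands should therefore report something like the following (an expectation based on the election rules, not a captured log):

    zookeeper01  ./zkServer.sh status  ->  Mode: follower
    zookeeper02  ./zkServer.sh status  ->  Mode: leader
    zookeeper03  ./zkServer.sh status  ->  Mode: follower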