Apache Zookeeper Setup & Testing Instructions for RHEL/CentOS 7
Single Node Setup Instructions
1. Ensure you have the syndeia-cloud-3.4_cassandra_zookeeper_kafka_setup.zip (or latest service pack) downloaded to your home directory from the download/license instructions sent out by our team.
Note: the .ZIP will create a separate folder for its contents when extracted, so there is no need to pre-create one.
2. Ensure you satisfy Zookeeper's pre-requisites, ie: have (Open|Oracle)JDK/JRE, memory, HD space, etc. (see https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_systemReq for more details).
3. If using a firewall, ensure the following port is accessible (consult your local network admin if required): TCP port 2181 (this is the port to listen to for client connections).
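If you want to confirm the client port is actually reachable from another machine, the check below is one way to do it. This is a sketch, not part of the official setup: port_open is a hypothetical helper name, and it relies on bash's /dev/tcp feature (so it must be run under bash, not a plain POSIX sh).

```shell
# port_open HOST PORT -> exit 0 if a TCP connection succeeds within 2 seconds.
# Uses bash's built-in /dev/tcp pseudo-device, so no nc/telnet is required.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# ex (run from a client machine once Zookeeper is up):
# port_open zookeeperServer1.mydomain.tld 2181 && echo "2181 reachable"
```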
Download, Install, Configure & Run Apache Zookeeper
1. Download Zookeeper 3.4.8 from https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz (ie: wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz)
2. Use tar to extract the .tar.gz file to /opt/ and create a logs folder in it, ie: ZK_build_ver=3.4.8; sudo tar -xvzf zookeeper-${ZK_build_ver}.tar.gz -C /opt/ ; sudo mkdir {/opt/zookeeper-${ZK_build_ver}/logs,/var/lib/zookeeper} ; where ZK_build_ver = the version you downloaded, ex: 3.4.8.
3. Create/update a symlink to the current version, ie: sudo ln -nfs /opt/zookeeper-${ZK_build_ver} /opt/zookeeper-current
4. Create a new group named kafka-zookeeper, ie: sudo groupadd --system kafka-zookeeper
5. Create a new user named zookeeper in the previously created group, ie: sudo useradd --system --groups kafka-zookeeper zookeeper
6. Give the zookeeper user ownership of the extracted folder, symlink & data directory, ie: sudo chown -R zookeeper:kafka-zookeeper {/opt/zookeeper-{${ZK_build_ver},current},/var/lib/zookeeper}
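The brace expansions in steps 2 and 6 are easy to get wrong, and a typo there silently creates directories (or chowns paths) you did not intend. One way to preview what the shell will actually do is to prefix the expansion with echo before running it with sudo; this requires bash (brace expansion is not POSIX):

```shell
# Preview the brace expansions used in steps 2 & 6 before running them for real
ZK_build_ver=3.4.8

# step 2's mkdir argument expands to two separate paths:
echo {/opt/zookeeper-${ZK_build_ver}/logs,/var/lib/zookeeper}
# -> /opt/zookeeper-3.4.8/logs /var/lib/zookeeper

# step 6's nested braces expand to three separate paths:
echo {/opt/zookeeper-{${ZK_build_ver},current},/var/lib/zookeeper}
# -> /opt/zookeeper-3.4.8 /opt/zookeeper-current /var/lib/zookeeper
```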
7. Create the configuration file /etc/zookeeper/conf/zoo.cfg & paste in the configuration below, ie: sudo mkdir -p /etc/zookeeper/conf/ && sudo cp /opt/zookeeper-${ZK_build_ver}/conf/zoo_sample.cfg /etc/zookeeper/conf/zoo.cfg (to use as a template), then edit it, replacing its contents with the following:
# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
# specify all zookeeper servers
# The first port is used by followers to connect to the leader
# The second one is used for leader election
#server.1=zookeeperServer1.mydomain.tld:2888:3888
#server.2=zookeeperServer2.mydomain.tld:2888:3888
#server.3=zookeeperServer3.mydomain.tld:2888:3888
# To avoid seeks ZooKeeper allocates space in the transaction log file in
# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
# for changing the size of the blocks is to reduce the block size if snapshots
# are taken more often. (Also, see snapCount).
#preAllocSize=65536
# Clients can submit requests faster than ZooKeeper can process them,
# especially if there are a lot of clients. To prevent ZooKeeper from running
# out of memory due to queued requests, ZooKeeper will throttle clients so that
# there is no more than globalOutstandingLimit outstanding requests in the
# system. The default limit is 1,000.
# ZooKeeper logs transactions to a
# transaction log. After snapCount transactions are written to a log file a
# snapshot is started and a new transaction log file is started. The default
# snapCount is 10,000.
#snapCount=1000
# If this option is defined, requests will be logged to a trace file named
# traceFile.year.month.day.
#traceFile=
# Leader accepts client connections. Default value is "yes". The leader machine
# coordinates updates. For higher update throughput at the slight expense of
# read throughput the leader can be configured to not accept clients and focus
# on coordination.
#leaderServes=yes
Note: In particular, please pay attention to the value of dataDir=/var/lib/zookeeper and ensure the zookeeper user and kafka-zookeeper group have access to the directory.
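A quick way to double-check this is to read dataDir back out of the config file and verify the directory exists. The helper below is only a sketch for illustration: check_datadir is a name invented here, and it assumes the zoo.cfg layout shown above (one dataDir=... line, no leading whitespace).

```shell
# check_datadir CFG_FILE -> read dataDir= from a zoo.cfg-style file and verify
# the directory it names exists. Follow up with ls -ld on the printed path to
# confirm zookeeper:kafka-zookeeper ownership.
check_datadir() {
  local cfg=$1 dir
  dir=$(grep -E '^dataDir=' "$cfg" | cut -d= -f2-)
  [ -d "$dir" ] || { echo "dataDir '$dir' missing or not a directory"; return 1; }
  echo "dataDir OK: $dir"
}

# ex (on a node configured per step 7):
# check_datadir /etc/zookeeper/conf/zoo.cfg
```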
Note: For a quick start, the above settings will get you up and running; however, for any multi-node deployment scenarios you will need to specify server.n entries, where n = server #, and create a /var/lib/zookeeper/myid file on each server specifying that server's id (see steps 4-5 of https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkMulitServerSetup for more details)
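To make the myid requirement concrete, the sketch below writes a node's id into the data directory. write_myid is a hypothetical helper (not part of Zookeeper); the id passed in must match the n in that host's server.n= line in zoo.cfg, and on a real node you would run it as a user that can write to /var/lib/zookeeper (e.g. via sudo -u zookeeper).

```shell
# write_myid ID [DATA_DIR] -> write this node's server id into DATA_DIR/myid
# (DATA_DIR defaults to /var/lib/zookeeper, the dataDir used in this guide).
write_myid() {
  local id=$1 data_dir=${2:-/var/lib/zookeeper}
  printf '%s\n' "$id" > "$data_dir/myid"
  echo "wrote id $id to $data_dir/myid"
}

# ex: on the host named in server.2=..., run: write_myid 2
```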
8. Start the Zookeeper service, ie: /opt/zookeeper-<release_ver>/bin/zkServer.sh start /etc/zookeeper/conf/zoo.cfg, where <release_ver> = the version you downloaded, ex: 3.4.8.
Note: Apache Zookeeper doesn't include a native systemd .service file by default. While systemd will dynamically create one at runtime via its SysV compatibility module, you may wish to create one yourself to exercise better control over the various service parameters. For your convenience we have created a systemd zookeeper.service file (included in the syndeia-cloud-3.4_cassandra_zookeeper_kafka_setup.zip download). To use this, copy it to /etc/systemd/system, reload systemd units, enable zookeeper to start on boot, and start the service, ie: sudo cp ~/syndeia-cloud-3.4-SP3_2022-07-01_cassandra_zookeeper_kafka_setup/conf/init/systemd/zookeeper.service /etc/systemd/system/. && sudo systemctl daemon-reload && sudo systemctl enable zookeeper && sudo systemctl start zookeeper
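For readers writing their own unit instead, the fragment below is a minimal sketch of what such a service might look like, assuming the paths, user, and group used earlier in this guide. The actual zookeeper.service shipped in the .zip may differ; Type=forking is used here because zkServer.sh start daemonizes and exits.

```ini
# /etc/systemd/system/zookeeper.service -- minimal sketch, not the shipped unit
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
Type=forking
User=zookeeper
Group=kafka-zookeeper
ExecStart=/opt/zookeeper-current/bin/zkServer.sh start /etc/zookeeper/conf/zoo.cfg
ExecStop=/opt/zookeeper-current/bin/zkServer.sh stop /etc/zookeeper/conf/zoo.cfg
Restart=on-failure

[Install]
WantedBy=multi-user.target
```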
9. If the service successfully starts, you should get the command prompt back. If you are using the systemd service file, you can verify that it started by checking that "Active: active (running)" shows up in the output of systemctl status zookeeper:
$ sudo systemctl status zookeeper
● zookeeper.service
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-04-05 14:04:23 EDT; 4h 16min ago
  Process: 21740 ExecStart=/opt/zookeeper-3.4.8/bin/zkServer.sh start /etc/zookeeper/conf/zoo.cfg (code=exited, status=0/SUCCESS)
 Main PID: 21747 (java)
   CGroup: /system.slice/zookeeper.service
           └─21747 /usr/bin/java -Dzookeeper.log.dir=/opt/zookeeper-current/logs -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp /etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.

Apr 05 14:04:22 zookeeperServer1.mydomain.tld systemd[1]: Starting zookeeper.service...
Apr 05 14:04:22 zookeeperServer1.mydomain.tld zkServer.sh[21740]: ZooKeeper JMX enabled by default
Apr 05 14:04:22 zookeeperServer1.mydomain.tld zkServer.sh[21740]: Using config: /etc/zookeeper/conf/zoo.cfg
Apr 05 14:04:23 zookeeperServer1.mydomain.tld systemd[1]: Started zookeeper.service.
10. To examine the log file, you can use sudo journalctl -xeu zookeeper. To follow the log, you can use sudo journalctl -xfeu zookeeper. You should see output similar to the following (abridged) text:
[...]
2019-04-05 14:04:23,046 - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2019-04-05 14:04:23,051 - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2019-04-05 14:04:23,051 - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 0
2019-04-05 14:04:23,051 - INFO [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2019-04-05 14:04:23,052 - WARN [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running in standalone mode
2019-04-05 14:04:23,066 - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2019-04-05 14:04:23,067 - INFO [main:ZooKeeperServerMain@95] - Starting server
2019-04-05 14:04:23,095 - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.8-1--1, built on Fri, 26 Feb 2016 14:51:43 +0100
2019-04-05 14:04:23,095 - INFO [main:Environment@100] - Server environment:host.name=zookeeperServer1.mydomain.tld
2019-04-05 14:04:23,095 - INFO [main:Environment@100] - Server environment:java.version=1.8.0_131
2019-04-05 14:04:23,095 - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2019-04-05 14:04:23,095 - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-oracle/jre
2019-04-05 14:04:23,096 - INFO [main:Environment@100] - Server environment:java.class.path=/etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/netty.jar:/usr/share/java/slf4j-api.jar:/usr/share/java/slf4j-log4j12.jar:/usr/share/java/zookeeper.jar
2019-04-05 14:04:23,096 - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-04-05 14:04:23,096 - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2019-04-05 14:04:23,096 - INFO [main:Environment@100] - Server environment:java.compiler=<NA>
2019-04-05 14:04:23,097 - INFO [main:Environment@100] - Server environment:os.name=Linux
2019-04-05 14:04:23,098 - INFO [main:Environment@100] - Server environment:os.arch=amd64
2019-04-05 14:04:23,098 - INFO [main:Environment@100] - Server environment:os.version=4.18.16-x86_64-linode118
2019-04-05 14:04:23,098 - INFO [main:Environment@100] - Server environment:user.name=zookeeper
2019-04-05 14:04:23,098 - INFO [main:Environment@100] - Server environment:user.home=/var/lib/zookeeper
2019-04-05 14:04:23,098 - INFO [main:Environment@100] - Server environment:user.dir=/
2019-04-05 14:04:23,107 - INFO [main:ZooKeeperServer@787] - tickTime set to 2000
2019-04-05 14:04:23,110 - INFO [main:ZooKeeperServer@796] - minSessionTimeout set to -1
2019-04-05 14:04:23,110 - INFO [main:ZooKeeperServer@805] - maxSessionTimeout set to -1
2019-04-05 14:04:23,124 - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
11. Validate correct operation (see the validation section below) and create/update an archive image to use as a new base image in case the node needs to be rebuilt or if you wish to create a cluster. Before making the image you may wish to first stop the service and optionally disable it temporarily to prevent auto-start on boot, ie: sudo systemctl stop zookeeper && sudo systemctl disable zookeeper
Multi-Node (Cluster) Setup Instructions (Adding nodes to an existing single-node)
11. Deploy another instance of your Zookeeper base image.
12. Make any appropriate changes for the MAC address (ex: in the VM settings and/or udev, if used).
13. Setup forward & reverse DNS records on your DNS server (consult your IT admin/sysadmin if required) and set the hostname and primary DNS suffix on the machine itself (sudo hostnamectl set-hostname <new_Zookeeper_node_FQDN>, where FQDN = Fully Qualified Domain Name, ex: zookeeper2.mycompany.com)
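Before joining a new node to the cluster, it can save debugging time to confirm that forward and reverse DNS agree. The helper below is a sketch built on getent (so it honors /etc/hosts as well as DNS); check_dns is a name invented here, not a standard tool.

```shell
# check_dns FQDN -> resolve the name, then look the address back up and verify
# the reverse result includes the same name.
check_dns() {
  local fqdn=$1 ip
  ip=$(getent hosts "$fqdn" | awk '{print $1; exit}')
  [ -n "$ip" ] || { echo "no address found for $fqdn"; return 1; }
  if getent hosts "$ip" | grep -qw "$fqdn"; then
    echo "DNS OK: $fqdn <-> $ip"
  else
    echo "reverse lookup of $ip does not return $fqdn"; return 1
  fi
}

# ex: check_dns zookeeper2.mycompany.com
```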
14. SSH to the IP (or the FQDN of the new node if DNS has already propagated).
Note: If using Fail2Ban, update the sender line in /etc/fail2ban/jail.local to root@<new_Zookeeper_node_FQDN>, then restart the fail2ban service (sudo systemctl restart fail2ban)
15. Follow the "Clustered (Multi-server) Setup" section of the Zookeeper Administrator's Guide: https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkMulitServerSetup
16. Repeat steps 11 ~ 15 for each additional cluster node.
Validating Zookeeper Operation for 1-node (or multiple nodes)
17. To validate Zookeeper operation, we connect to each node using the included zkCli.sh client script. To do this, SSH to the server (or node 1 if testing a cluster) and perform the following steps:
17.1. Open a new terminal window
17.2. In the new terminal window, run /opt/zookeeper-<release_ver>/bin/zkCli.sh -server <zookeeperServerN.mydomain.tld>:2181, where N = the server # and zookeeperServerN.mydomain.tld = your server FQDN. For Zookeeper running on localhost, you should see output similar to the below:
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
17.3. Type close to close the connection.
17.4. Type quit to exit the client.
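When validating a whole cluster, it is easy to mistype one of the per-node commands. The loop below simply prints the zkCli.sh command to run against each node; the FQDNs in ZK_NODES are placeholders from this guide's examples, so substitute your own before use.

```shell
# Print the validation command for each cluster node (substitute your FQDNs).
ZK_NODES="zookeeperServer1.mydomain.tld zookeeperServer2.mydomain.tld zookeeperServer3.mydomain.tld"
for node in $ZK_NODES; do
  echo "/opt/zookeeper-current/bin/zkCli.sh -server ${node}:2181"
done
```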