
Pre-requisites:

1.  Ensure you have the syndeia-cloud-3.5_cassandra_zookeeper_kafka_setup.zip (or latest service pack) downloaded to your home directory, per the download/license instructions sent out by our team.  

(info)  Note: the .ZIP creates its own folder for its contents when extracted, so there is no need to pre-create a separate folder for it.  

2.  Review Apache Cassandra's recommendations, e.g.: (Open|Oracle)JDK/JRE, memory, filesystem selection, tuning parameters, etc., in Deployment.  

(info)  Note:  Syndeia Cloud can be deployed on a different machine than Cassandra, but these steps will mostly focus on a single-node deployment.  

Single Node Setup Instructions

1. Deploy a new standard RHEL/CentOS 7 headless image on a physical or virtual machine (VM), install from a Kickstart script, or install manually from media.

2. Set up forward & reverse DNS records on your DNS server (consult your IT admin/sysadmin if required) and set the hostname and primary DNS suffix on the machine itself if necessary.  
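
(info) For example, assuming the hypothetical FQDN cassandra.mycompany.com, a minimal sketch for setting the hostname and spot-checking the DNS records might be (dig is provided by the bind-utils package):

# set the machine's hostname to its FQDN (cassandra.mycompany.com is hypothetical)
sudo hostnamectl set-hostname cassandra.mycompany.com
# verify the machine reports the expected FQDN
hostname -f
# spot-check forward & reverse DNS resolution
dig +short cassandra.mycompany.com   # should return the node's IP
dig +short -x <node_IP>              # should return the FQDN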

3. If using a firewall, ensure the following ports are accessible (consult your local network admin if required): TCP ports 7000, 7001, 7199, 9042, 9142, 9160 (for details on what each port is used for, see http://cassandra.apache.org/doc/latest/faq/index.html#what-ports & https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/security/secFirewallPorts.html#secFirewallPorts__firewall_table).  
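
(info) If the node runs firewalld (the RHEL/CentOS 7 default), a minimal sketch for opening these ports locally might look like the following; adjust the zone to suit your environment:

# open the Cassandra ports listed above in the default zone
sudo firewall-cmd --permanent --add-port={7000,7001,7199,9042,9142,9160}/tcp
sudo firewall-cmd --reload
# verify
sudo firewall-cmd --list-ports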

(info) Note: If required by your IT department, perform any other standard configuration (e.g.: create a separate admin account; set the timezone, date & time, or synchronize with an NTP server; disable root logins; change the default SSH port; install Fail2Ban; enable & configure the local firewall; etc.)
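
(info) As one example of the optional configuration above, a minimal sketch for synchronizing time with an NTP server via chrony (the stock RHEL/CentOS 7 time daemon) might be:

# install chrony, enable it at boot, and start it
sudo yum install -y chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
# verify synchronization status
chronyc tracking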



Download, Install & Run Apache Cassandra

4. Follow the "Installation from RPM packages" section at https://cassandra.apache.org/download/#installation-from-rpm-packages. This will set up https://www.apache.org/dist/cassandra/redhat/311x/ as a yum repo, install Cassandra (sudo yum install cassandra-3.11.13) and its dependencies (e.g., openjdk), configure it to run as a boot service, and start it.
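
(info) For reference, a condensed sketch of that section follows; verify the exact repo file contents against the download page before using it:

# add the Apache Cassandra 3.11.x yum repo
sudo tee /etc/yum.repos.d/cassandra.repo <<'EOF'
[cassandra]
name=Apache Cassandra
baseurl=https://www.apache.org/dist/cassandra/redhat/311x/
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.apache.org/dist/cassandra/KEYS
EOF

# install Cassandra and its dependencies, then enable & start the service
sudo yum install -y cassandra-3.11.13
sudo systemctl daemon-reload
sudo systemctl enable cassandra
sudo systemctl start cassandra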

(info) Note 1: the instructions on the Apache Cassandra download page currently mention using the legacy SysV command sudo service cassandra start to start the Cassandra service.  While this works, you will notice it redirects to systemctl, which is the standard way to manage services on RHEL/CentOS 7, i.e.: sudo systemctl start cassandra

(info) Note 2: Apache Cassandra doesn't include a native systemd .service file by default.  While systemd will dynamically create one at runtime via its SysV compatibility module, you may wish to create one yourself to exercise better control over the various service parameters.  For your convenience we have created a systemd cassandra.service & a tmpfiles.d cassandra.conf file (included in the syndeia-cloud-3.5_cassandra_zookeeper_kafka_setup.zip download).  To use these, copy the tmpfiles.d cassandra.conf to /etc/tmpfiles.d/, run it, copy cassandra.service to /etc/systemd/system/, reload systemd's units, enable cassandra to start on boot, and start the service, i.e.:

sudo cp <tmpfiles.d_conf_file_download_dir>/cassandra.conf /etc/tmpfiles.d/.
sudo systemd-tmpfiles --create --boot /etc/tmpfiles.d/cassandra.conf
sudo cp <service_file_download_dir>/cassandra.service /etc/systemd/system/.
sudo systemctl daemon-reload
sudo systemctl enable cassandra
sudo systemctl start cassandra

5. Verify/configure the following settings in /etc/cassandra/conf/cassandra.yaml ((info) 'YourClusterName' = a cluster name of your choice, ex: 'SC 3.5 Prod Cluster'):

cluster_name: 'YourClusterName'
num_tokens: 256
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"
listen_address: localhost
rpc_address: localhost
write_request_timeout_in_ms: 20000
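
(info) After editing, a quick sketch to spot-check the values (the seeds entry lives on its own indented line, hence the separate grep):

grep -E '^(cluster_name|num_tokens|authenticator|authorizer|listen_address|rpc_address|write_request_timeout_in_ms):' /etc/cassandra/conf/cassandra.yaml
grep -n 'seeds:' /etc/cassandra/conf/cassandra.yaml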

(info) Note 1:  FQDN = Fully Qualified Domain Name, ex: cassandra.mycompany.com 

(info) Note 2:  For a quick start, the above settings will get you up and running; however, for any production deployment scenario you may wish to implement other settings to enhance security (e.g.: changing the default cassandra superuser password, enabling encryption, etc.) & performance (e.g.: setting the data & commitlog directories, swap file settings, etc.).  See Appendix B2.11 for more details.  

(warning) If you frequently deal with large artifact sizes, you may also want to bump up batch_size_fail_threshold_in_kb from its default of 50 (KB) to, for example, 100.
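
(info) If you decide to raise it, a one-line sed sketch (assuming the stock, uncommented default line is present) would be:

# bump the batch size fail threshold from 50 KB to 100 KB
sudo sed -i 's/^batch_size_fail_threshold_in_kb: 50$/batch_size_fail_threshold_in_kb: 100/' /etc/cassandra/conf/cassandra.yaml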

6. If any changes were made in the above step, type sudo systemctl restart cassandra to restart the service.  If the service starts successfully, you should get the command prompt back.  To confirm, verify that "Active: active (running)" shows up in the output of systemctl status cassandra:

$ systemctl status cassandra
● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/rc.d/init.d/cassandra; bad; vendor preset: disabled)
   Active: active (running) since Wed 2022-09-07 02:41:38 EST; 1 weeks 6 days ago
     Docs: man:systemd-sysv-generator(8)
  Process: 3536 ExecStart=/etc/rc.d/init.d/cassandra start (code=exited, status=0/SUCCESS)
 Main PID: 3761 (java)
   CGroup: /system.slice/cassandra.service
           ‣ 3761 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.332-1.b09-2.el8_6.x86_64/jre/bin/java -Xloggc:/var/log/cassandra/gc.log -XX:+UseParNewGC -XX:+UseCon...

Sep 07 02:41:35 cassandra.mycompany.com systemd[1]: Starting LSB: distributed storage system for structured data...
Sep 07 02:41:36 cassandra.mycompany.com su[3570]: (to cassandra) root on none
Sep 07 02:41:38 cassandra.mycompany.com cassandra[3536]: Starting Cassandra: OK
Sep 07 02:41:38 cassandra.mycompany.com systemd[1]: Started LSB: distributed storage system for structured data.
$

7. To examine the log file, you can use less /var/log/cassandra/system.log.  To follow the log, you can use tail -f /var/log/cassandra/system.log.  You should see output similar to the following (abridged) text (for the full text of an example successful startup, see Appendix A1.1):

$ less /var/log/cassandra/system.log
[...]
INFO  [main] 2022-09-07 13:55:43,277 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
INFO  [main] 2022-09-07 13:55:43,613 Config.java:481 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=PasswordAuthenticator; authorizer=CassandraAuthorizer; auto_boo
tstrap=true; auto_snapshot=true; back_pressure_enabled=false; back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}; batch_size_fail_t
hreshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; broadcast_address=null; broadcast_rpc_address=null; buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000; 
[...]
INFO  [main] 2022-09-07 13:55:43,613 DatabaseDescriptor.java:367 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
[...]
INFO  [main] 2022-09-07 13:55:43,886 CassandraDaemon.java:471 - Hostname: cassandra.mycompany.com
INFO  [main] 2022-09-07 13:55:43,887 CassandraDaemon.java:478 - JVM vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.8.0_332
[...]
INFO  [main] 2022-09-07 13:55:50,097 QueryProcessor.java:163 - Preloaded 328 prepared statements
INFO  [main] 2022-09-07 13:55:50,098 StorageService.java:617 - Cassandra version: 3.11.13
INFO  [main] 2022-09-07 13:55:50,098 StorageService.java:618 - Thrift API version: 20.1.0
INFO  [main] 2022-09-07 13:55:50,098 StorageService.java:619 - CQL supported versions: 3.4.4 (default: 3.4.4)
INFO  [main] 2022-09-07 13:55:50,099 StorageService.java:621 - Native protocol supported versions: 3/v3, 4/v4, 5/v5-beta (default: 4/v4)
INFO  [main] 2022-09-07 13:55:50,134 IndexSummaryManager.java:85 - Initializing index summary manager with a memory pool size of 98 MB and a resize interval of 60 minutes
INFO  [main] 2022-09-07 13:55:50,142 MessagingService.java:753 - Starting Messaging Service on cassandra.mycompany.com/127.0.0.1:7000 (eth0)
INFO  [main] 2022-09-07 13:55:50,168 StorageService.java:706 - Loading persisted ring state
INFO  [main] 2022-09-07 13:55:50,169 StorageService.java:819 - Starting up server gossip
INFO  [main] 2022-09-07 13:55:50,224 TokenMetadata.java:479 - Updating topology for cassandra.mycompany.com/127.0.0.1
INFO  [main] 2022-09-07 13:55:50,225 TokenMetadata.java:479 - Updating topology for cassandra.mycompany.com/127.0.0.1
[...]
INFO  [main] 2022-09-07 13:55:50,392 StorageService.java:2268 - Node localhost/127.0.0.1 state jump to NORMAL
INFO  [main] 2022-09-07 13:55:50,404 AuthCache.java:172 - (Re)initializing CredentialsCache (validity period/update interval/max entries) (2000/2000/1000)
INFO  [main] 2022-09-07 13:55:50,406 Gossiper.java:1655 - Waiting for gossip to settle...
INFO  [main] 2022-09-07 13:55:58,408 Gossiper.java:1686 - No gossip backlog; proceeding
INFO  [main] 2022-09-07 13:55:58,470 NativeTransportService.java:70 - Netty using native Epoll event loop
[...]
INFO  [main] 2022-09-07 13:55:58,520 Server.java:156 - Starting listening for CQL clients on localhost/127.0.0.1:9042 (unencrypted)...
INFO  [main] 2022-09-07 13:55:58,623 ThriftServer.java:116 - Binding thrift service to localhost/127.0.0.1:9160
INFO  [Thread-2] 2022-09-07 13:55:58,629 ThriftServer.java:133 - Listening for thrift clients...

8. Open a new terminal window.
9. In the new terminal window, run nodetool status; you should see output similar to the following:

$ nodetool status
Datacenter: datacenter1
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens       Owns (effective)  Host ID                               Rack
UN  127.0.0.1  206.93 KB  256          100.0%            41ab853b-5d48-4c4e-8d59-40e165acadae  rack1


$ 

10. Validate correct operation and create an archive image to use as a new base image if the node needs to be rebuilt or if you wish to create a cluster.  

(info)  Before making the image, you may wish to first stop the service and optionally disable it temporarily to prevent auto-start on boot, i.e.:  sudo systemctl stop cassandra && sudo systemctl disable cassandra 



Multi-Node (Cluster) Setup Instructions

Enabling your single-node deployment for cluster operation

If you followed the steps in the previous section to deploy a single Cassandra node, you will need to make a few adjustments so the Cassandra ports are no longer bound to localhost and are accessible from other cluster nodes. 

(warning) If you have not already done so, you may wish to secure the cassandra superuser account by changing its password from the default, especially if you will be binding to a public interface on the internet (see Appendix B2.11 on how to do this).  

11. Make a backup of cassandra.yaml & open it for editing.   
12. Change the following settings (a scripted sketch follows the snippet below):  

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # where w.x.y.z = the IP address of the node
          - seeds: "w.x.y.z"
[...]

listen_address: # set to the IP of the node, or leave blank to pick up the OS-provided IP/FQDN
rpc_address:    # set to the IP of the node, or leave blank to pick up the OS-provided IP/FQDN
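
(info) As a scripted sketch of steps 11-12 (192.168.1.2 is a hypothetical node IP; substitute your own, and adjust the patterns if your file differs from the stock single-node values in step 5):

# step 11: back up cassandra.yaml first
sudo cp -a /etc/cassandra/conf/cassandra.yaml /etc/cassandra/conf/cassandra.yaml.bak

# step 12: point seeds at the node's IP and unbind from localhost
sudo sed -i 's/- seeds: "127.0.0.1"/- seeds: "192.168.1.2"/' /etc/cassandra/conf/cassandra.yaml
sudo sed -i 's/^listen_address: localhost/listen_address: 192.168.1.2/' /etc/cassandra/conf/cassandra.yaml
sudo sed -i 's/^rpc_address: localhost/rpc_address: 192.168.1.2/' /etc/cassandra/conf/cassandra.yaml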

(warning) IMPORTANT: Be aware that if you have other Cassandra server(s) with the same cluster_name as this one, this node will attempt to join that cluster when the service is restarted, which may cause issues that will be difficult to troubleshoot later.  If you do not wish for this to occur, you will need to first update cluster_name in the system keyspace via CQLSH (on ALL Cassandra server(s) in the cluster you are renaming); see Appendix B2.13 for details on how to do this.  

13. Save cassandra.yaml
14. Restart the Cassandra service, i.e.: sudo systemctl restart cassandra (use systemctl status cassandra to verify it started; if not, review your changes for errors).
15. Update any firewall configurations on the OS and/or externally, e.g., in AWS or with your cloud provider.

(info) Note: if JanusGraph is installed, you will also need to update the syndeia_cloud_graph configuration (see "Enabling your single-node deployment for cluster operation" in the Multi-Node section on the JanusGraph page to do this).  



Adding new nodes to an existing single-node

16. Deploy another instance of your Cassandra base image and make any appropriate changes to the cloned MAC address, if necessary (ex: in the VM settings and/or udev, if used).
17. Set up forward & reverse DNS records on your DNS server (consult your IT admin/sysadmin if required) and set the hostname and primary DNS suffix on the machine itself (sudo hostnamectl set-hostname <new_Cassandra_node_FQDN>, where FQDN = Fully Qualified Domain Name, ex: cassandra2.mycompany.com).
18. SSH to the IP (or the FQDN of the new node if DNS has already propagated).

(info) Note: If using Fail2Ban, update the sender line in /etc/fail2ban/jail.local to root@<new_Cassandra_node_FQDN>. Restart the fail2ban service (sudo systemctl restart fail2ban)
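
(info) A sketch of that update (cassandra2.mycompany.com is a hypothetical FQDN; this assumes a standard "sender = " line already exists in jail.local):

# point Fail2Ban's notification sender at the new node's FQDN, then restart it
sudo sed -i 's/^sender = .*/sender = root@cassandra2.mycompany.com/' /etc/fail2ban/jail.local
sudo systemctl restart fail2ban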

19. Follow "Initializing a multiple node cluster (single datacenter)" at https://docs.datastax.com/en/cassandra/3.0/cassandra/initialize/initSingleDS.html

(info) Note 1: in the provided example cassandra.yaml, rpc_address is shown set to 0.0.0.0; however, leaving it blank will let it pick up the address automatically.  

(info) Note 2: in steps 3a and 7, to stop/start Cassandra installed via a package on a RHEL/CentOS 7 system, use sudo systemctl stop cassandra (or start, respectively)

(warning) IMPORTANT: Pay special attention to steps 3b & 4: the data directory must be empty for a node to join the cluster, and auto_bootstrap: false should only be added in cassandra.yaml on seed nodes. Per the “Prerequisites” section, normally one would elect a subset of the nodes to be seeds (usually 2-3 per datacenter is sufficient)
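
(info) For illustration, a sketch of those two points (paths assume the stock package layout):

# on a cloned node that will JOIN the cluster: stop Cassandra & clear its data dirs
sudo systemctl stop cassandra
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*

# on SEED nodes only: the stock cassandra.yaml has no auto_bootstrap line, so append one
echo 'auto_bootstrap: false' | sudo tee -a /etc/cassandra/conf/cassandra.yaml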

20. Repeat steps 16 ~ 19 for each additional cluster node.



Validating Cassandra Operation (or Cluster Replication) for 1-node (or multiple nodes)

21. To validate Cassandra operation (or cluster replication), we create a sample keyspace and tables, insert test data, create a user (role), grant permissions, and perform a basic query with a Consistency Level (CL) of ONE against the server (or against each node separately if a cluster has been set up).  To do this, connect via CQLSH on the server (or on node 1 if testing a cluster) and perform the following steps (a combined sketch follows the sub-steps below).  

21.1. In CQLSH, create a new sample keyspace with a Replication Factor (RF) equal to the total # of nodes you have (see Appendix B2.6 for sample CQL code on how to do this for a single server or multi-node cluster).  

21.2. In CQLSH create new sample tables (see Appendix B2.7 for sample CQL code on how to do this).

21.3. In CQLSH insert test data into the new tables (see Appendix B2.8 for sample CQL code on how to do this).

21.4. In CQLSH create a new login user (role) with a password and GRANT ALL PERMISSIONS to the keyspace created earlier (see Appendix B2.9 for sample CQL code on how to do this). 
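
(info) Taken together, a hedged CQLSH sketch of sub-steps 21.1 ~ 21.4 for a 3-node cluster might look like the following (test_ks, test_table & test_user are hypothetical names; the authoritative CQL is in Appendices B2.6 ~ B2.9; for a single server use 'replication_factor': 1):

cqlsh <node1_IP_or_FQDN> -u cassandra -p '<cassandra_superuser_password>' <<'EOF'
-- 21.1: sample keyspace with RF = total node count (3 here)
CREATE KEYSPACE test_ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
-- 21.2: sample table
CREATE TABLE test_ks.test_table (id int PRIMARY KEY, payload text);
-- 21.3: test data
INSERT INTO test_ks.test_table (id, payload) VALUES (1, 'replication test row');
-- 21.4: login role, granted all permissions on the new keyspace
CREATE ROLE test_user WITH PASSWORD = '<test_user_password>' AND LOGIN = true;
GRANT ALL PERMISSIONS ON KEYSPACE test_ks TO test_user;
EOF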

22. Exit CQLSH and run nodetool status until you see Owns show 100.0% for your server (or for all nodes if testing on a cluster).

23. If testing only a single node, skip ahead and perform only steps 27-28.

(info) Note: For cluster testing, the steps below assume you are testing with a 3-node cluster.

24. Stop the Cassandra service on node 1. If you run nodetool status on node 2 or 3, you should now see node 1 show as down (DN):

$ nodetool status
Datacenter: dc1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
DN  192.168.1.2   487.87 KiB 256          100.0%            377d1403-ca85-45ae-a1ca-60e09a75425b  rack1
UN  192.168.1.3   644.57 KiB 256          100.0%            141c21db-4f79-476b-b818-ee6d2da16d7d  rack1
UN  192.168.1.4   497.5 KiB  256          100.0%            3d4b81b6-5ccd-4b0b-b126-17d5eed3b634  rack1

25. On the test node (ex: 2), ensure the Cassandra service is running; if not, start it.
26. On the other node (ex: 3), stop the Cassandra service.
27. On the test node (ex: 2), connect via CQLSH or via DataStax DevCenter on your machine (see Appendix B2.10 on where to obtain this), set the Consistency Level (CL) to ONE (CONSISTENCY ONE;) (see Appendix B2.12 for more details on Consistency Levels), and issue a simple SELECT * FROM <keyspace>.<table_name>; (a sketch follows step 29 below)  
28. Verify that you get 1 row back.
29. Repeat steps 25 ~ 28 but switch the nodes, i.e.: restart the service on node 3, stop the service on node 2, and issue the query on node 3.
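
(info) For example, from the test node, a sketch of the CL=ONE query in steps 27 ~ 28 (reusing the hypothetical test_ks/test_table/test_user names from step 21; 192.168.1.3 is a placeholder IP):

# set CL to ONE and query; expect the single row inserted in step 21.3 to come back
cqlsh 192.168.1.3 -u test_user -p '<test_user_password>' \
      -e "CONSISTENCY ONE; SELECT * FROM test_ks.test_table;"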

