Automated Deployment: Linux

Overview

In this section, the Automated Deployment method for Syndeia Cloud on Linux OS is presented. In the Automated Deployment method, Syndeia admins will run scripts that download and configure Syndeia Cloud and its infrastructure components - Cassandra, JanusGraph, Kafka, and Zookeeper.

There is also an option to first download Syndeia Cloud and its infrastructure components, and then run the automated scripts to install. This is useful when deploying Syndeia in an air-gapped environment. See the Offline Installation Mode (Optional) section below to learn more.

Syndeia admins must use either the Automated Deployment (presented here) or the Semi-Automated Deployment method. See the guidance on the parent page: Deployment Methods.


Prerequisites Summary

  1. The Deployment page has been read and understood

  2. Supported OS & shell:

    1. Preferred and Recommended - RHEL/CentOS/Alma Linux v7.9-8.10 with Console and/or SSH access enabled

      • (INFORMATIONAL: SC Web ports 9000 (HTTP) | 9443 (HTTPS) will be opened automatically by adding a firewall service definition at /etc/firewalld/services/syndeia.xml)

      • bash shell is used

  3. Software: Compatibility Layer Components

    1. Linux:

      1. JRE/JDK (automatically installed during Cassandra installation)

  4. Software: Infrastructure Components (automatically downloaded OR can use offline mode, see below)

    1. Apache Cassandra

    2. JanusGraph

    3. Apache Zookeeper

    4. Apache Kafka

  5. Software: Syndeia Cloud (SC) Components

    1. Syndeia Cloud (SC) media file .ZIPs downloaded: download the .ZIPs from the password-protected links provided in the Intercax Helpdesk request where you originally received your Syndeia Cloud license. Filenames for each are as follows.

      1. syndeia-cloud-3.6_cassandra_zookeeper_kafka_setup.zip

      2. syndeia-cloud-3.6_janusgraph_setup.zip

      3. syndeia-cloud-3.6.zip

Installation Logging

Before proceeding with any deployment steps in the CLI, it is highly recommended that you first enable input & output logging of the terminal, as the shell does not do this by default.

Either enable (maximum) logging in your terminal of choice and/or use GNU script. See Appendix F6.1: Installation Logging for more details.
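For example, GNU script (part of util-linux) can record the whole session to a transcript file; the filename below is just an example:

```shell
# Record all terminal input & output to a timestamped transcript file
script ~/syndeia_install_$(date +%Y%m%d_%H%M%S).log

# ... perform the deployment steps inside the recorded shell ...
# type "exit" (or press Ctrl-D) when done to stop recording
```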

Offline Installation Mode (Optional)

If you are not in an offline (ex: air-gapped) environment & your server has internet access, simply skip to the next step.

If you are in an air-gapped environment, or wish to do an offline installation, please see the Offline Installation Mode (Optional) page.


Steps

Extract SC Media

  1. Launch a Terminal with bash (ensure you are in your home directory)

  2. Place all SC packages in your home directory

  3. Unzip all SC packages (a loop ensures each archive is extracted; passing multiple .ZIPs to a single unzip invocation would make it treat the extra names as archive members):

    for f in syndeia-cloud-3.6*.zip; do unzip "$f"; done

Deploy Apache Cassandra

  1. cd to the cassandra_zookeeper_kafka_setup package’s bin directory:

    cd ~/syndeia-cloud-3.6*_cassandra_zookeeper_kafka_setup/bin
  2. Run the Apache Cassandra pre-setup script:

    ./syndeia-cloud-3.6_cassandra_pre-setup.bash

    This will download Cassandra 4.1 and upgrade from 3.11.x to 4.1, or upgrade using an already-downloaded Cassandra 4.1. Depending on what version you are upgrading from, you should see output similar to the below:

Verification:

  1. Verify Cassandra is up and functioning by running nodetool status:
    If you are on Linux:

    You should get output similar to the following:
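The check itself (nodetool ships with the Cassandra distribution; each node should report state UN, i.e. Up/Normal):

```shell
# Each node listed should show "UN" (Up/Normal) in the first column
nodetool status
```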

Deploy Apache Zookeeper (ZK)

  1. Run the Apache Zookeeper (ZK) pre-setup script:
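Assuming the script follows the same naming pattern as the Cassandra one (verify the actual filename with ls in the package's bin directory), this would look like:

```shell
# Run from the cassandra_zookeeper_kafka_setup package's bin directory
cd ~/syndeia-cloud-3.6*_cassandra_zookeeper_kafka_setup/bin
./syndeia-cloud-3.6_zookeeper_pre-setup.bash   # filename assumed from the package naming pattern
```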

Verification

  1. Verify Zookeeper is up and functioning by running zkCli.sh:

    You should get output similar to the following:
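A minimal sketch of the check (the install path is an assumption; adjust to wherever Zookeeper was deployed):

```shell
# Connect to the local Zookeeper and list the root znodes
/opt/zookeeper/bin/zkCli.sh -server localhost:2181
# At the "zk>" prompt:
#   ls /
#   quit
```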

Deploy Apache Kafka

  1. Run the Apache Kafka pre-setup script. This will download and install Kafka 3.7.0, or install an already-downloaded Kafka 3.7.0:
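Assuming the script follows the same naming pattern as the others in the package (verify the actual filename with ls), this would look like:

```shell
# Run from the cassandra_zookeeper_kafka_setup package's bin directory
./syndeia-cloud-3.6_kafka_pre-setup.bash   # filename assumed from the package naming pattern
```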

Verification

  1. Verify Kafka is up and functioning by creating a test topic, producer with test events and a consumer that replays them:

    … producer:

    … consumer:

    You should see:
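A sketch of the round-trip test using Kafka's standard console tools (the commands assume Kafka's bin directory is on the PATH; adjust the bootstrap server to your host):

```shell
# 1. Create a test topic
kafka-topics.sh --create --topic sc-test --bootstrap-server localhost:9092

# 2. Produce a few test events (type lines, then Ctrl-C to finish)
kafka-console-producer.sh --topic sc-test --bootstrap-server localhost:9092

# 3. Replay them with a consumer (Ctrl-C to exit)
kafka-console-consumer.sh --topic sc-test --from-beginning --bootstrap-server localhost:9092
```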

Deploy JanusGraph (JG)

  1. cd to the janusgraph_setup package’s bin directory:

  2. Run the JanusGraph (JG) pre-setup script:

  3. Run the (main) JanusGraph (JG) setup script, ie:
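Assuming the scripts follow the naming pattern of the other packages (verify the actual filenames with ls), these steps would look like:

```shell
cd ~/syndeia-cloud-3.6*_janusgraph_setup/bin
./syndeia-cloud-3.6_janusgraph_pre-setup.bash   # filenames assumed; check with ls
./syndeia-cloud-3.6_janusgraph_setup.bash
```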

Note: the current password for the Cassandra admin when installed via the pre-setup script is the Cassandra project’s documented default, ie: cassandra. We recommend changing this via CQLSH once SC deployment has been successfully completed (see Appendix B2.11 for instructions on how to do this), especially if the Cassandra node will (eventually) be exposed in a multi-node topology.

Verification:

  1. Verify JG is up and functioning by running the Gremlin client, ie:

    Then execute the following commands after it starts up:
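A sketch of the check (the JanusGraph install path and properties file are assumptions; adjust to your deployment):

```shell
# Launch the Gremlin console bundled with JanusGraph
/opt/janusgraph/bin/gremlin.sh
# At the "gremlin>" prompt, e.g.:
#   graph = JanusGraphFactory.open('conf/janusgraph-cql.properties')
#   g = graph.traversal()
#   g.V().count()
```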

Deploy Syndeia Cloud (SC)

  1. cd to the syndeia-cloud package’s bin directory:

  2. Run the Syndeia Cloud (SC) pre-setup script:

    During execution, you will be prompted to set credentials for JMX monitoring. By default Syndeia Cloud has been configured with JMX enabled & the script will prompt to set reader & read-write credentials. If you do not wish to use JMX, you can disable it pre- or post-SC setup via the steps in Appendix C3.6.

  3. Run the (main) Syndeia Cloud (SC) setup script, ie:
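Assuming the scripts follow the same naming pattern as the infrastructure packages (verify the actual filenames with ls in the package's bin directory), these steps would look like:

```shell
cd ~/syndeia-cloud-3.6/bin
./syndeia-cloud-3.6_pre-setup.bash   # filenames assumed; check with ls
./syndeia-cloud-3.6_setup.bash
```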

Verification

On the server and/or your local machine, launch a web browser & check the following to validate that the application is correctly running:  

  1. http://<syndeia_server_FQDN>:9000 should give you:  

    Figure 1: SC Sign-In

    To log in as the default administrator and create users, see the User Management section.

  2. Once logged in, please verify you see:

    1. a bar graph gets rendered on the Dashboard home page and

    2. the installed version shows correctly under Help > About in the sidebar.

      Figure 2: Help > About

       

Review the following sections to learn about monitoring the services and locating service logs.


How to Manage Services & Check Logs

Services

Services are all managed using systemd’s systemctl command with a verb, ie: status, start, stop, restart, followed by the service name, ie: cassandra, janusgraph, zookeeper, kafka, sc-SC-short-service-name(s) (or sc-* to reference all SC services). For more information run systemctl --help and/or man systemctl.

Example usage for Cassandra, Zookeeper, Kafka, JanusGraph, and Syndeia Cloud follows:

Apache Cassandra

  • To check the summary status:

  • To start the service:

  • To stop the service:

  • To restart the service:
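Concretely (the same verbs apply to the zookeeper, kafka, and janusgraph units):

```shell
sudo systemctl status cassandra    # check the summary status
sudo systemctl start cassandra     # start the service
sudo systemctl stop cassandra      # stop the service
sudo systemctl restart cassandra   # restart the service
```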

Apache Zookeeper

  • To check the summary status:

  • To start the service:

  • To stop the service:

  • To restart the service:

Apache Kafka

  • To check the summary status:

  • To start the service:

  • To stop the service:

  • To restart the service:

JanusGraph

  • To check the summary status:

  • To start the service:

  • To stop the service:

  • To restart the service:

Syndeia Cloud

Syndeia Cloud 3.6 is defined by two sets of services:

  1. SC 3.6 Core Services: sc-store, sc-auth, sc-graph, sc-web-gateway

  2. SC 3.6 Integration Services ( = New!): sc-aras, sc-artifactory, sc-bitbucket, sc-collaborator, sc-confluence, sc-doors, sc-dscr, sc-dse3, sc-dt, sc-github, sc-genesys, sc-gitlab, sc-jama, sc-jira, sc-restful, sc-sysmlv2, sc-tc, sc-testrail, sc-twcloud, sc-volta, sc-wc

  • To check the summary status for a specific service, ex: web-gateway:

  • To start a specific service, ex: web-gateway:

  • To stop a service, ex: web-gateway:

  • To restart a service, ex: web-gateway:
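For example, using the web-gateway service (sc-* is quoted so the glob is passed to systemctl rather than expanded by the shell):

```shell
sudo systemctl status sc-web-gateway    # summary status for one service
sudo systemctl restart sc-web-gateway   # restart one service

# Operate on all SC services at once:
sudo systemctl status 'sc-*'
sudo systemctl restart 'sc-*'
```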


Logs & Monitoring

Logs on Linux can be viewed using the journalctl command, ex: sudo journalctl -xeu service-name, ie: cassandra, janusgraph, zookeeper, kafka, sc-SC-short-service-name(s)
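For example:

```shell
sudo journalctl -xeu cassandra        # jump to the end of the cassandra log, with extra detail
sudo journalctl -fu sc-web-gateway    # follow a service's log live (Ctrl-C to stop)
```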

Raw log files are located in the following locations:

Apache Cassandra

Apache Zookeeper

Zookeeper creates a log file of the form zookeeper-<accountName>-server-<serverFQDN>.log

This is cumbersome, so for simplicity we have created a symlink to it as zookeeper.log

Apache Kafka

Apache Kafka generates several files with the extension .log in its logs folder.

Most of these files constitute Kafka’s DB; the only one of concern for diagnostics/troubleshooting is the one named server.log.

JanusGraph

Syndeia Cloud

For single-node deployments, one can use the common logs directory with symlinks to every service’s logs folder (this is useful if, say, one wishes to quickly archive all logs to submit for troubleshooting).

For an individual service’s logs, log files will be under:

… where $service_name = any one from the below two sets of services:

  1. SC 3.6 Core Services: store, auth, graph, web-gateway

  2. SC 3.6 Integration Services ( = New!): aras, artifactory, bitbucket, collaborator, confluence, doors, dscr, dse3, digital-thread, github, genesys, gitlab, jama, jira, restful, sysmlv2, teamcenter, testrail, twcloud, volta, windchill

    Note, teamcenter has an additional logs subfolder under logs/SOA for the logs generated by the Siemens SOA client libs.


Troubleshooting

The pre-setup scripts have been tested on the documented platforms so most scripts should execute reliably (ideally flawlessly).

However, it’s always possible that:

  • you may have missed a step (or steps)

  • you have a non-standard environment

  • there’s a genuine bug

If you do experience any issues, please perform the following before opening that Helpdesk ticket:

  1. Review the output: if you notice an error, stop rather than simply moving forward (this is one of the reasons trace mode is (currently) left enabled rather than hiding the output).

  2. Uninstall & try again: follow the below instructions to reset to a clean state and start over (in case a step or steps were missed):

Uninstall

Save the below file named SC_stack_uninstall.bash to /opt/, set execute permission on it (ie: chmod ug+x SC_stack_uninstall.bash), then run it from a bash Terminal, ie: ./SC_stack_uninstall.bash (on Windows this would be a Cygwin Terminal; on Linux this would be a bash shell prompt).
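If the file is not at hand, the sketch below illustrates the general shape of such an uninstall script. It is NOT the official Intercax script: the unit names come from this guide, the install paths are assumptions, and the destructive step is left commented out so you must verify the paths before enabling it.

```shell
#!/usr/bin/env bash
# ILLUSTRATIVE SKETCH ONLY -- not the official SC_stack_uninstall.bash.
set -euo pipefail

# Stop & disable every service in the stack (ignore units that don't exist)
for svc in 'sc-*' janusgraph kafka zookeeper cassandra; do
    sudo systemctl stop "$svc" 2>/dev/null || true
    sudo systemctl disable "$svc" 2>/dev/null || true
done

# Remove installed components -- VERIFY these assumed paths first, then uncomment:
# sudo rm -rf /opt/cassandra /opt/zookeeper /opt/kafka /opt/janusgraph /opt/syndeia-cloud
```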

Tips

  1. Ensure you DON’T run ANY of the setup scripts with root or sudo!

  2. Ensure you DO have enough space allocated to /opt, /var, /tmp (check via sudo df -h).

  3. Ensure you DON'T have a noexec mount option for /tmp (check via mount; note, it may not be directly under / and could be a sub-dir under /).

  4. Ensure you DO have enough CPU cores, ie: check via:

    If that # is < the minimum requirement (14 for SC 3.6), please bump this value up.

  5. Ensure you DO have the SC media downloaded to the home dir and NOT a subdirectory or anywhere else
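The core count in tip 4 can be checked with, for example:

```shell
nproc                          # number of usable logical CPU cores
lscpu | grep -E '^CPU\(s\):'   # same information, with more detail available via lscpu
```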