
Overview

In this section, the Automated Deployment method for Syndeia Cloud on Linux is presented. With this method, Syndeia admins run scripts that download and configure Syndeia Cloud and its infrastructure components: Cassandra, JanusGraph, Kafka, and Zookeeper. There is also an option to download Syndeia Cloud and its infrastructure components first, and then run the automated scripts to install them. This is useful when working in an air-gapped environment.

(info) Syndeia admins must use either the Automated Deployment (presented here) or the Semi-Automated Deployment method. See the guidance on the parent page: Deployment Methods.


Prerequisites Summary

  1. Deployment page read and understanding of:

  2. Supported OS deployed:

    1. Preferred and Recommended - RHEL/CentOS/Alma Linux v7.9+ with

      • Console and/or SSH access enabled

  3. Software: Compatibility Layer Components

    1. Linux:

      1. JRE/JDK (automatically installed during Cassandra installation)

  4. Software: Infrastructure Components (automatically downloaded OR can use offline mode, see below)

    1. Apache Cassandra

    2. JanusGraph

    3. Apache Zookeeper

    4. Apache Kafka

  5. Software: Syndeia Cloud (SC) Components

    1. Syndeia Cloud (SC) media file .ZIPs downloaded: download the .ZIPs from the password-protected links provided in the Intercax Helpdesk ticket where you originally requested your Syndeia Cloud license. Filenames for each are as follows.

      1. syndeia-cloud-3.5-SP2_cassandra_zookeeper_kafka_setup.zip

      2. syndeia-cloud-3.5-SP2_janusgraph_setup.zip

      3. syndeia-cloud-3.5-SP2.zip

Installation Logging

Before proceeding with any deployment steps in the CLI, it is highly recommended that you first enable input and output logging of the terminal, as the shell does not do this by default.

Either enable (maximum) logging in your terminal of choice and/or use GNU script. See Appendix F6.1: Installation Logging for more details.
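A minimal sketch of session logging with GNU script (part of util-linux) follows; the log filename is an example, adjust the path to taste.

```shell
# Shown in one-shot form (-c) so the example terminates on its own; for a
# whole session, run `script -a logfile`, do the deployment, then type `exit`.
LOG=~/syndeia_install_$(date +%Y%m%d_%H%M%S).log
script -a -c "echo 'deployment commands would run here'" "$LOG"
echo "Session recorded to $LOG"
```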

Offline Installation Mode (Optional)

If you are not in an offline (e.g. air-gapped) environment and your server has internet access, simply skip to the next step.

If you are in an air-gapped environment, or wish to do an offline installation, please see the Offline Installation Mode (Optional) page.


Steps

Extract SC Media

  1. Launch a Terminal with bash (ensure you are in your home directory)
    (info) for Windows Cygwin (and Linux!), this means ~/ NOT C:\Users\...

  2. Unzip all SC packages:

    unzip syndeia-cloud-3.5*.zip

Apache Cassandra

  1. cd to the cassandra_zookeeper_kafka_setup package’s bin directory:

    cd ~/syndeia-cloud-3.5_cassandra_zookeeper_kafka_setup/bin
  2. Run the Apache Cassandra pre-setup script:

    ./syndeia-cloud-3.5_cassandra_pre-setup.bash

Verification:

  1. Verify Cassandra is up and functioning by running nodetool status:
    If you are on Linux:

    nodetool status 

    You should get output similar to the following:

    Datacenter: dc1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  192.168.1.3   644.57 KiB 256          100.0%            141c21db-4f79-476b-b818-ee6d2da16d7d  rack1
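As an additional (optional) check, you can confirm CQL connectivity with cqlsh. This sketch assumes the stock superuser credentials (cassandra/cassandra, as noted in the JanusGraph section below) and that cqlsh is on the PATH:

```shell
# Query the node's release version over CQL; a result row confirms Cassandra
# is accepting client connections. Falls back to a warning if unreachable.
cqlsh -u cassandra -p cassandra -e "SELECT release_version FROM system.local;" \
  || echo "cqlsh check failed (is Cassandra up and cqlsh on the PATH?)"
CQL_CHECK=done
```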

Apache Zookeeper (ZK)

  1. Run the Apache Zookeeper (ZK) pre-setup script:

    ./syndeia-cloud-3.5_zookeeper_pre-setup.bash

Verification

  1. Verify Zookeeper is up and functioning by running zkCli.sh:

    sudo -u zookeeper /opt/zookeeper-current/bin/zkCli.sh -server localhost:2181

    You should get output similar to the following:

    Connecting to localhost:2181
    Welcome to ZooKeeper!
    JLine support is enabled
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    [zk: localhost:2181(CONNECTED) 0]
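A non-interactive alternative, useful in scripts, is zkServer.sh status, which prints the server mode of a running ZooKeeper instance (path per the install layout above):

```shell
# Reports standalone, leader, or follower for a healthy node.
# -n makes sudo fail fast instead of prompting for a password.
sudo -n -u zookeeper /opt/zookeeper-current/bin/zkServer.sh status \
  || echo "ZooKeeper status check failed (is the service running?)"
ZK_CHECK=done
```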

Apache Kafka

  1. Run the Apache Kafka pre-setup script:

    ./syndeia-cloud-3.5_kafka_pre-setup.bash

Verification

  1. Verify Kafka is up and functioning by creating a test topic, a producer with test events, and a consumer that replays them:

    /opt/kafka-current/bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

    … producer:

    /opt/kafka-current/bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
    Test event1
    Test event2

    … consumer:

    /opt/kafka-current/bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092

    You should see:

    Test event1
    Test event2
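Once verified, you may wish to remove the test topic so it does not linger on the broker; a sketch using the same kafka-topics.sh tool:

```shell
# Delete the quickstart-events verification topic created above.
TOPIC=quickstart-events
/opt/kafka-current/bin/kafka-topics.sh --delete --topic "$TOPIC" \
  --bootstrap-server localhost:9092 \
  || echo "Topic delete failed (is Kafka running?)"
```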

JanusGraph (JG)

  1. cd to the janusgraph_setup package’s bin directory:

    cd ~/syndeia-cloud-3.5_janusgraph_setup/bin
  2. Run the JanusGraph (JG) pre-setup script:

    ./syndeia-cloud-3.5_janusgraph_pre-setup.bash
  3. Run the (main) JanusGraph (JG) setup script, ie:

    ./syndeia-cloud-3.5_janusgraph_setup.bash 

Note: when installed via the pre-setup script, the Cassandra admin password is the Cassandra project’s documented default, ie: cassandra. We recommend changing this via CQLSH once SC deployment has been successfully completed (see Appendix B2.11 for instructions), especially if the Cassandra node will (eventually) be exposed in a multi-node topology.

Verification:

  1. Verify JG is up and functioning by running the Gremlin client, ie:

    /opt/janusgraph-current/bin/gremlin.sh

    Then execute the following commands after it starts up:

    :remote connect tinkerpop.server conf/remote.yaml session
    :remote console
    graph = ConfiguredGraphFactory.open('syndeia_cloud_graph');
    // should return: ==>standardjanusgraph[cql:[cassandra.mydomain.com]]
    g = graph.traversal();
    g.V();
    g.E();
    // The last 2 commands above should not return any results since the graph (syndeia_cloud_graph) is empty - no vertices or edges.

Syndeia Cloud (SC)

  1. cd to the syndeia-cloud package’s bin directory:

    cd ~/syndeia-cloud-3.5-SP2/bin
  2. Run the Syndeia Cloud (SC) pre-setup script:

    ./syndeia-cloud-3.5_install_pre-setup.bash

    (info) During execution, you will be prompted to set credentials for JMX monitoring. By default, Syndeia Cloud is configured with JMX enabled, and the script will prompt you to set reader and read-write credentials. If you do not wish to use JMX, you can disable it before or after SC setup via the steps in Appendix C3.6.

  3. Run the (main) Syndeia Cloud (SC) setup script, ie:

    ./syndeia-cloud-3.5_install.bash

Verification

On the server and/or your local machine, launch a web browser and check the following to validate that the application is running correctly:

  1. http://<syndeia_server_FQDN>:9000 should give you:  

    (info) To log in as the default administrator and create users, see the User Management section.

  2. Once logged in, please verify you see:

    1. a bar graph gets rendered (and not a never-ending spinner followed by an error message) on the Dashboard home page and

    2. the installed version shows correctly under Help > About in the sidebar.
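The browser check can also be approximated headlessly with curl; SC_URL below is a placeholder, substitute your server's FQDN:

```shell
# Probe the web gateway and print the HTTP status; 200 indicates the
# application is serving pages.
SC_URL="http://localhost:9000"   # e.g. http://syndeia.example.com:9000
code=$(curl -s -o /dev/null -w '%{http_code}' "$SC_URL")
code=${code:-000}                # 000 = connection failed
echo "HTTP status: $code"
```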

Congratulations! You have a running Syndeia Cloud instance.

Review the following sections to learn about monitoring the services and locating service logs.


How to Manage Services & Check Logs

Services

Linux

Services are all managed using systemd’s systemctl command with a verb, ie: status, start, stop, restart, followed by the service name, ie: cassandra, janusgraph, zookeeper, kafka, sc-SC-short-service-name(s) (or sc-* to reference all SC services). For more information, run systemctl --help and/or man systemctl.

Example usage for Cassandra, Zookeeper, Kafka, JanusGraph, and Syndeia Cloud follows:

Apache Cassandra

  • To check the summary status:

    sudo systemctl status cassandra
  • To start the service:

    sudo systemctl start cassandra
  • To stop the service:

    sudo systemctl stop cassandra
  • To restart the service:

    sudo systemctl restart cassandra

Apache Zookeeper

  • To check the summary status:

    sudo systemctl status zookeeper
  • To start the service:

    sudo systemctl start zookeeper
  • To stop the service:

    sudo systemctl stop zookeeper
  • To restart the service:

    sudo systemctl restart zookeeper

Apache Kafka

  • To check the summary status:

    sudo systemctl status kafka
  • To start the service:

    sudo systemctl start kafka
  • To stop the service:

    sudo systemctl stop kafka
  • To restart the service:

    sudo systemctl restart kafka

JanusGraph

  • To check the summary status:

    sudo systemctl status janusgraph
  • To start the service:

    sudo systemctl start janusgraph
  • To stop the service:

    sudo systemctl stop janusgraph
  • To restart the service:

    sudo systemctl restart janusgraph

Syndeia Cloud

Syndeia Cloud 3.5 is defined by two sets of services:

  1. SC 3.5 Core Services: sc-store, sc-auth, sc-graph, sc-web-gateway

  2. SC 3.5 Integration Services: sc-aras, sc-artifactory, sc-bitbucket, sc-collaborator, sc-confluence, sc-doors, sc-github, sc-gitlab, sc-jama, sc-jira, sc-restful, sc-sysmlv2, sc-testrail, sc-twcloud, sc-volta, sc-wc

To perform an operation on all services at once, you can use the wildcard sc-* to match all of the above.

  • To check the summary status for a specific service, ex: web-gateway:

    sudo systemctl status sc-web-gateway
  • To start a specific service, ex: web-gateway:

    sudo systemctl start sc-web-gateway

To start all services together with their dependencies, use the specially defined SC group target, sc.target (ie: sudo systemctl start sc.target)

  • To stop a service, ex: web-gateway:

    sudo systemctl stop sc-web-gateway
  • To restart a service, ex: web-gateway:

    sudo systemctl restart sc-web-gateway
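To see the state of every SC unit in one shot, the sc-* wildcard works with list-units as well; a sketch:

```shell
# List all Syndeia Cloud units and their active states.
# -n makes sudo fail fast rather than prompting for a password.
sudo -n systemctl list-units 'sc-*' --all --no-pager \
  || echo "systemctl query failed (non-systemd host or no sudo rights?)"
SC_SVC_CHECK=done
```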

Logs & Monitoring

Linux

Logs on Linux can be viewed using the journalctl command, ex: sudo journalctl -xeu service-name, where service-name is one of: cassandra, janusgraph, zookeeper, kafka, or sc-SC-short-service-name(s)

To tail a particular service’s logs, add the -f switch to the journalctl command, ie: sudo journalctl -xfeu service-name, or use the sudo tail -f /path/to/service.log command.

Raw log files are located in the following locations:

Apache Cassandra

/var/log/cassandra/system.log

Apache Zookeeper

Zookeeper creates a log file of the form zookeeper-accountName-server-serverFQDN.log

This is cumbersome, so for simplicity we have created a symlink to it named zookeeper.log

/opt/zookeeper-current/logs/zookeeper.log

Apache Kafka

Apache Kafka generates several files with the extension .log in its logs folder.

(warning) Most of these files constitute Kafka’s DB; the only one of concern for diagnostics/troubleshooting is the one named server.log

/opt/kafka-current/logs/server.log

JanusGraph

/opt/janusgraph-current/logs/gremlin-server.log

Syndeia Cloud

For single-node deployments, one can use the common logs directory, which contains symlinks to every service’s logs folder (this is useful if, say, one wishes to quickly archive all logs to submit for troubleshooting)

/opt/icx/syndeia-cloud-current/logs

For an individual service’s logs, log files will be under:

/opt/icx/syndeia-cloud-current/$service_name-impl-3.5/logs/$service_name.log

… where $service_name is any one from the below two sets of services:

  1. SC 3.5 Core Services: sc-store, sc-auth, sc-graph, sc-web-gateway

  2. SC 3.5 Integration Services: sc-aras, sc-artifactory, sc-bitbucket, sc-collaborator, sc-confluence, sc-doors, sc-github, sc-gitlab, sc-jama, sc-jira, sc-restful, sc-sysmlv2, sc-testrail, sc-twcloud, sc-volta, sc-wc
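For the archive-for-troubleshooting case mentioned above, a sketch using the common logs directory (paths as documented; adjust the output filename to taste):

```shell
# Bundle the common logs directory into one archive; -h follows the
# symlinks so each service's actual log files are captured.
STAMP=$(date +%Y%m%d_%H%M%S)
tar -czhf ~/syndeia_logs_$STAMP.tar.gz -C /opt/icx/syndeia-cloud-current logs \
  || echo "Archive failed (logs directory missing on this host?)"
```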


Troubleshooting

The pre-setup scripts have been tested on the documented platforms, so most scripts should execute reliably (ideally flawlessly).

However, it’s always possible that:

  • you may have missed a step (or steps)

  • you have a non-standard environment

  • there’s a genuine bug

If you do experience any issues, please perform the following before opening that Helpdesk ticket:

  1. Review the output: if you notice an error, 🛑 stop! 🖐 rather than simply moving forward (this is one of the reasons trace mode is currently left enabled rather than hiding the output).

  2. Uninstall & try again: follow the below instructions to reset to a clean state and start over (in case a step or steps were missed):

Uninstall

WARNING: The following will uninstall SC and all dependencies

Save the below file, named SC_stack_uninstall.bash, to /opt/, set execute permission on it (ie: chmod ug+x SC_stack_uninstall.bash), then run it from a bash Terminal, ie: ./SC_stack_uninstall.bash (on Windows this would be a Cygwin Terminal; on Linux, a bash shell prompt)

Tips

  1. Ensure you DON’T run ANY of the setup scripts with root or sudo!

  2. Ensure you DO have enough space allocated to /opt, /var, and /tmp ( (info) check via sudo df -h ).

  3. Ensure you DON'T have a noexec mount option for /tmp ( (info) check via mount ; note that /tmp may not be a separate mount directly under / and could instead be a sub-directory under / ).

  4. Ensure you DO have enough CPU cores, ie: check via:

    cores=$(grep -c '^processor' /proc/cpuinfo); echo $cores

    If that number is below the minimum requirements (8 cores for SC 3.4, 11 for SC 3.5), please increase this value.

  5. Ensure you DO have the SC media downloaded to the home dir and NOT a subdirectory or anywhere else
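The checks in tips 2-4 can be combined into a hypothetical preflight sketch (thresholds taken from the tips above; adjust as needed):

```shell
# Preflight: CPU core count, free space on key mounts, and noexec on /tmp.
min_cores=11                                  # SC 3.5 minimum per the tips
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "CPU cores: $cores (minimum: $min_cores)"
df -h /opt /var /tmp 2>/dev/null || df -h
if mount | grep ' /tmp ' | grep -q noexec; then
  echo "WARNING: /tmp is mounted noexec"
fi
```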
