Migrate (3.4 SP3 -> 3.5 SP2)

This section describes the process for migrating from Syndeia Cloud 3.4 SP3 to Syndeia Cloud 3.5 SP2 (the latest service pack). In the remainder of this document, Syndeia 3.5 refers to Syndeia 3.5 SP2.

Overview

Please read the following carefully before proceeding further.

(1) The starting point of the migration process is Syndeia Cloud 3.4 SP3. If you are currently running 3.4, 3.4 SP1, or 3.4 SP2, first upgrade to 3.4 SP3 and then return here. That upgrade process is described in 3.4 Service Pack 3 (3.4 SP3).

(2) The estimated time to complete the migration procedure, including build upload (at 6 Mbit/s or faster) but excluding the time to clone the original server(s), is ~2-3 hours.

(3) The overall approach for the migration process is as follows.

  • Stages 1-2 – The existing Syndeia 3.4 SP3 production environment is cloned to create a new environment that serves as the basis for Syndeia Cloud 3.5. Syndeia admins should make the Syndeia Cloud 3.4 SP3 production deployment unavailable to Syndeia users before cloning. The Syndeia 3.4 SP3 production server can be brought back online after cloning, but any data created in it after cloning will not be migrated. This approach ensures that the existing Syndeia Cloud 3.4 SP3 deployment remains available as a golden starting point for the migration process.

  • Stages 3-6 – Syndeia Cloud 3.5 installs are downloaded, services are started, and an upgrade is performed in the cloned environment.

  • Stages 7-10 – Syndeia Cloud 3.5 infrastructure services (Cassandra, JanusGraph, Zookeeper, Kafka) are upgraded to newer versions in the cloned environment. This is required for: (1) new capabilities in Syndeia Cloud 3.5, and (2) cybersecurity hardening of the Syndeia Cloud system.

  • Stages 11-12 – Syndeia Cloud 3.5 services are started and the data is verified. The cloned environment is now the new Syndeia Cloud 3.5-based production environment.


The Syndeia Cloud 3.4 → 3.5 migration process has the following 12 stages.

NOTE: The pages above have been written from the standpoint of a *NIX deployment running commands in a bash shell (NOT sh, dash, ksh, or zsh; they share many similar syntax features but are all subtly different!).
For a Windows deployment, the process is identical; just keep the following rules in mind:

  1. Where it says bash and/or "shell", use the Cygwin Terminal (which runs bash).
    Note: most .bat tools can be run from the Cygwin Terminal; the ONE exception is gremlin.bat, which (currently) seems to work only from Windows CMD.EXE.

  2. Where it says .sh, substitute in .bat, EXCEPT for ALL Intercax-provided scripts.

  3. For the main SC JG setup and the SC setup scripts, use the _windows-suffixed scripts instead.

  4. For any component (Cassandra, JanusGraph, etc.) that requires a command, script, or executable to be run (e.g., cqlsh, gremlin.sh/gremlin.bat, etc.), first cd into the bin directory for that component, e.g., Cassandra: cd /opt/apache-cassandra-current/bin, JanusGraph: cd /opt/janusgraph-current/bin. (Tip: if you don't like doing this, you can add each bin directory to your Windows System PATH environment variable.)

  5. Ignore any commands to cd /etc/systemd/system beforehand.

  6. Where it says systemctl, substitute in Windows NT SCM's sc.exe.

  7. Where it says systemctl restart ..., substitute in sc.exe stop ... immediately followed by sc.exe start ...

  8. Where it says systemctl enable, substitute in sc.exe config start= delayed-auto (sc.exe's delayed auto-start setting).

  9. Where it says journalctl, open the raw log file instead

  10. Where it says ln -nfs, use winln -fs (and make sure the original symlink is deleted first if you are updating it, or you will get an error).

  11. Ignore any commands to create any groups or users (this is typically not done in a Windows environment, except for hardening)

  12. Ignore any sudo or sudo -u ... command prefixes.
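As a quick sanity check of the systemctl rules above, the substitutions can be sketched as a small bash helper that prints the sc.exe equivalent of a given systemctl invocation. This is only an illustration of the mapping, not an Intercax-provided tool; the service name syndeia-cloud below is hypothetical, so substitute the actual service names registered with the Windows NT SCM.

```shell
#!/usr/bin/env bash
# Sketch of the systemctl -> sc.exe substitutions described above.
# "syndeia-cloud" is a hypothetical service name for illustration only.
sc_equivalent() {
  local action="$1" service="$2"
  case "$action" in
    # restart = sc.exe stop followed by sc.exe start, back to back
    restart) echo "sc.exe stop ${service} && sc.exe start ${service}" ;;
    # enable = configure delayed auto-start (sc.exe's flag value is delayed-auto)
    enable)  echo "sc.exe config ${service} start= delayed-auto" ;;
    # other actions (start, stop, etc.) map one-to-one
    *)       echo "sc.exe ${action} ${service}" ;;
  esac
}

sc_equivalent restart syndeia-cloud  # sc.exe stop syndeia-cloud && sc.exe start syndeia-cloud
sc_equivalent enable  syndeia-cloud  # sc.exe config syndeia-cloud start= delayed-auto
```

Note the space after start= in the sc.exe config call: sc.exe requires it as part of its option syntax.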