Migrate (3.4 SP3 -> 3.5 SP2)
This section describes the process for migrating from Syndeia Cloud 3.4 SP3 to Syndeia Cloud 3.5 SP2 (the latest service pack). In the remainder of this document, Syndeia 3.5 refers to Syndeia 3.5 SP2.
Overview
Please read the following carefully before proceeding further.
(1) The starting point of the migration process is Syndeia Cloud 3.4 SP3. If you are currently running 3.4, 3.4 SP1, or 3.4 SP2, please first upgrade to 3.4 SP3 and then return here. The process is described in 3.4 Service Pack 3 (3.4 SP3).
(2) The estimated time to complete the migration procedure, including the build upload (assuming a 6 Mbit/s or faster connection) but excluding the time needed to clone the original server(s), is ~2-3 hours.
(3) The overall approach for the migration process is as follows.
Stages 1-2 – The existing Syndeia 3.4 SP3 environment running in production is cloned to create a new environment that is the basis for Syndeia Cloud 3.5. Syndeia admins should make the Syndeia Cloud 3.4 SP3 production deployment unavailable to Syndeia users before cloning. The Syndeia 3.4 SP3 production server can be brought back online after cloning, but any data created in it after cloning will not be migrated. This approach ensures that the existing Syndeia Cloud 3.4 SP3 deployment is always available as a golden starting point for the migration process.
Stages 3-6 – Syndeia Cloud 3.5 installs are downloaded, services are started, and an upgrade is performed in the cloned environment.
Stages 7-10 – Syndeia Cloud 3.5 infrastructure services (Cassandra, JanusGraph, Zookeeper, Kafka) are upgraded to newer versions in the cloned environment. This is required for: (1) new capabilities in Syndeia Cloud 3.5, and (2) cybersecurity hardening of the Syndeia Cloud system.
Stages 11-12 – Syndeia Cloud 3.5 services are started and the data is verified. The cloned environment is now the new Syndeia Cloud 3.5-based production environment.
The Syndeia Cloud 3.4 → 3.5 migration process has the following 12 stages.
NOTE: The pages above have been written from the standpoint of a *NIX deployment running commands in a `bash` shell (NOT `sh`, `dash`, `ksh`, or `zsh`; they share many similar syntax features but are all subtly different!).

For a Windows deployment, the process is identical, except please keep the following rules in mind:

- Where it says `bash` and/or "shell", please use the Cygwin Terminal (which runs `bash`). Note: most `.bat` tools should be able to run from the Cygwin Terminal; the ONE exception is `gremlin.bat`, which seems to (currently) only work from Windows `CMD.EXE`.
- Where it says `.sh`, substitute in `.bat`, EXCEPT for ALL Intercax-provided scripts. For the main SC JG setup and the SC setup scripts, use the `_windows`-suffixed scripts instead.
- For any component (Cassandra, JanusGraph, etc.) that requires a command, script, or executable to be run (ex: `cqlsh`, `gremlin.sh`/`gremlin.bat`, etc.), ensure you have first `cd`-ed into the `bin` directory for that component, ex: Cassandra = `cd /opt/apache-cassandra-current/bin`, JanusGraph = `cd /opt/janusgraph-current/bin`. (Tip: if you don't like doing this, you can add each to your Windows System `PATH` environment variable.)
- Ignore any commands to `cd /etc/systemd/system` beforehand.
- Where it says `systemctl`, substitute in Windows NT SCM's `sc.exe`.
- Where it says `systemctl restart ...`, substitute in `sc.exe stop ...` followed by `sc.exe start ...` back to back.
- Where it says `systemctl enable`, substitute in `sc.exe config <service> start= delayed-auto` (`delayed-auto` is the value `sc.exe` accepts for delayed automatic start).
- Where it says `journalctl`, open the raw log file instead.
- Where it says `ln -nfs`, use `winln -fs` (and make sure the original symlink is deleted first if you are updating it, or you will get an error).
- Ignore any commands to create any groups or users (this is typically not done in a Windows environment, except for hardening).
- Ignore any `sudo` or `sudo -u ...` command prefixes.
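To illustrate the systemctl-to-SCM mapping above, here is a minimal bash sketch. This is a hypothetical helper, not part of the Syndeia tooling, and the service name `syndeia-cloud` is a placeholder; it only prints the Windows equivalent of each systemd command rather than executing anything:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the Windows SCM (sc.exe) equivalent of a
# systemctl command, following the substitution rules above.
# It only echoes the translated command; it does not execute anything.
translate_service_cmd() {
  local action="$1" service="$2"
  case "$action" in
    restart)
      # systemctl restart -> sc.exe stop followed by sc.exe start
      echo "sc.exe stop $service && sc.exe start $service"
      ;;
    enable)
      # systemctl enable -> delayed automatic start via sc.exe config
      echo "sc.exe config $service start= delayed-auto"
      ;;
    *)
      # start, stop, query, etc. map one-to-one
      echo "sc.exe $action $service"
      ;;
  esac
}

# Example usage ("syndeia-cloud" is a placeholder service name):
translate_service_cmd restart syndeia-cloud
translate_service_cmd enable syndeia-cloud
```

Running the two example calls prints the `sc.exe stop`/`sc.exe start` pair and the delayed-start configuration command, which you would then run from the Cygwin Terminal (or `CMD.EXE`) against the actual Syndeia Cloud service names on your deployment.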