We are glad that you are reading this page, because that means you are already convinced of the power of Syndeia 3.3 and its new features. You have either already installed Syndeia Cloud 3.3 or are in the process of doing so, and you are wondering what happens to the data you have painstakingly created with Syndeia Cloud 3.2. Because Syndeia 3.3 is such a big release, a lot has changed behind the scenes, and that means, unavoidably, you will have to migrate your data so that it complies with the data format expected by Syndeia 3.3. This page explains what the entire migration process entails and the steps to successfully transition to Syndeia 3.3.

NOTE: For the remainder of the document, whenever Syndeia 3.3 is mentioned, it means Syndeia Cloud 3.3; the same applies to Syndeia 3.2.

...

  1. Syndeia 3.2 installed and running on a server. Syndeia 3.2 requires the Cassandra database and a syndeia keyspace to hold 3.2 data. This is our source server (database).

  2. Syndeia 3.3 installed and running on a server. Syndeia 3.3 also requires the Cassandra database and syndeia_cloud_store & syndeia_cloud_auth keyspaces to hold 3.3 data. Additionally, Syndeia 3.3 uses JanusGraph behind the scenes, which requires additional syndeia_cloud_graph & syndeia_cloud_graph_config keyspaces to hold graph data. This is our target server (database).

  3. The superuser should already have been created and the setup steps already run on the Syndeia 3.3 server.

  4. The migration process requires an additional syndeia_cloud_devops keyspace on the target 3.3 server. This keyspace has three sets of tables - a pre_stage_* set, a migrate_stage_* set, and a duplicate_* set of tables. A quick way to verify that all required keyspaces exist on the target server is sketched just after this list.

  5. The Syndeia-Migration-3.3 utility.

  6. The user performing the migration should be an admin user. Additionally, they should have the credentials for each external repository (server), because one of the steps requires running the Syndeia dashboard and connecting to each of the repositories using those credentials. This user should also have permission to view all the data in each of the external repositories (servers).
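
Before starting, it can be helpful to confirm that the target Cassandra instance already has every keyspace mentioned above. The following is a minimal sketch using the DataStax Python driver; the host, port, and credentials are placeholders for illustration and should be replaced with your own values.

```python
# Minimal sketch: check that the keyspaces required by Syndeia 3.3 and the
# migration process exist on the target Cassandra instance.
# Host, port, and credentials below are placeholders, not Syndeia defaults.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

REQUIRED_KEYSPACES = {
    "syndeia_cloud_store",
    "syndeia_cloud_auth",
    "syndeia_cloud_graph",
    "syndeia_cloud_graph_config",
    "syndeia_cloud_devops",   # used by the migration process
}

auth = PlainTextAuthProvider(username="cassandra", password="cassandra")
cluster = Cluster(["target-cassandra.example.com"], port=9042, auth_provider=auth)
session = cluster.connect()

rows = session.execute("SELECT keyspace_name FROM system_schema.keyspaces")
existing = {row.keyspace_name for row in rows}

missing = REQUIRED_KEYSPACES - existing
if missing:
    print("Missing keyspaces:", ", ".join(sorted(missing)))
else:
    print("All required keyspaces are present.")

cluster.shutdown()
```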

...

  • Replicate - First, copy all data from the source (3.2) database to the pre_stage tables (note that the 3.2 database has only four tables: repositories, containers, artifacts, and relations). Once the data has been copied from the 3.2 server to the pre_stage tables, we can safely disconnect from the 3.2 server; its role ends there. When the data is copied into the corresponding pre_stage tables, the value of the status column is set to -2.

  • Enhance - In 3.3 we also have type-related tables, which did not exist in 3.2. Hence, in this step we also have to connect to a running instance of Slim. Using this running instance, we can get the external repository types and internal relation types (essentially hard-coded data defined by us), which we first insert into the pre_stage tables. Then, for containers and artifacts, we need to connect to the actual repositories, get the type information for each container and artifact, and populate the corresponding pre_stage tables. Once all data has been enhanced, the value of the status column is set to -1.

  • Stage - Once all the data has been enhanced, we insert all data from the pre_stage tables into the migrate_stage tables. During the insertion, we first update the value of the status column to 0 in the pre_stage tables and then set this same value of 0 in the migrate_stage tables (because the final part of the migration process only picks up records whose status is 0). The status progression across these steps is sketched after this list.

  • Load - Load all the data from the various migrate_stage tables (whose status is 0), create the corresponding *CreateForm(s) or *UpdateForm(s) for each piece of data, and call the relevant POST / PUT Cloud APIs on the Syndeia 3.3 server. A minimal sketch of this step is also shown after this list.
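
To make the status progression concrete, here is a minimal sketch of how a single record moves through the Replicate, Enhance, and Stage phases. Only the pre_stage_*/migrate_stage_* naming and the -2 / -1 / 0 status values come from the description above; the specific table and column names other than status are illustrative assumptions.

```python
# Minimal sketch of the status column driving a record through the pipeline.
# Table and column names other than "status" are illustrative assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(["target-cassandra.example.com"])
session = cluster.connect("syndeia_cloud_devops")

# Replicate: a 3.2 repository row lands in a pre_stage table with status -2.
session.execute(
    "INSERT INTO pre_stage_repositories (id, name, status) VALUES (%s, %s, %s)",
    ("repo-123", "My Jira Server", -2),
)

# Enhance: type information has been fetched, so status moves to -1.
session.execute(
    "UPDATE pre_stage_repositories SET status = -1 WHERE id = %s",
    ("repo-123",),
)

# Stage: the enhanced row is marked 0 in pre_stage and copied into
# migrate_stage with status 0, which makes it eligible for the Load step.
session.execute(
    "UPDATE pre_stage_repositories SET status = 0 WHERE id = %s",
    ("repo-123",),
)
session.execute(
    "INSERT INTO migrate_stage_repositories (id, name, status) VALUES (%s, %s, %s)",
    ("repo-123", "My Jira Server", 0),
)

cluster.shutdown()
```

The Load phase can be pictured in the same spirit: read the migrate_stage rows whose status is 0, wrap each one in a create form, and POST it to the Syndeia 3.3 server. The endpoint path, payload fields, and authentication shown below are illustrative assumptions, not the documented Syndeia Cloud API contract.

```python
# Minimal sketch of the Load step under assumed endpoint and payload names.
import requests
from cassandra.cluster import Cluster

SYNDEIA_URL = "https://syndeia33.example.com"            # assumed base URL
AUTH_HEADER = {"Authorization": "Bearer <admin-token>"}  # assumed auth scheme

cluster = Cluster(["target-cassandra.example.com"])
session = cluster.connect("syndeia_cloud_devops")

# Pick up only records that the Stage phase marked as ready (status = 0).
rows = session.execute(
    "SELECT id, name, status FROM migrate_stage_repositories "
    "WHERE status = 0 ALLOW FILTERING"
)

for row in rows:
    create_form = {"externalId": row.id, "name": row.name}  # assumed *CreateForm shape
    resp = requests.post(
        f"{SYNDEIA_URL}/repositories",   # assumed endpoint
        json=create_form,
        headers=AUTH_HEADER,
        timeout=30,
    )
    resp.raise_for_status()

cluster.shutdown()
```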
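In practice the Syndeia-Migration-3.3 utility performs these steps for you; the sketches above are only meant to illustrate the flow of data and the meaning of the status values, not to replace the utility.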

...

Syndeia Cloud Migration Utilities

...