This repository has been archived by the owner on Jun 21, 2024. It is now read-only.

Requirements #20

Open
4 of 14 tasks
IljaN opened this issue Sep 17, 2018 · 1 comment
IljaN commented Sep 17, 2018

Create a CLI tool which allows data migration from one owncloud instance ("source") to another ("target"). Data on the source instance is not modified.

Both instances must stay online (no maintenance!) during migration.

The following things should be exportable:

User Level (Milestone 1)

  • Users + Preferences
  • Files
  • Shares
  • Versions (does not currently work with S3)
  • Acceptance testing: Feature specs
  • Documentation
  • Beta Release in github repo --> releases (no marketplace yet)

User Level (Milestone 2)

  • Comments
  • Trashbin
  • Ext. Storages (Personal)
  • Avatars + Activity (Low prio)

Instance Global

  • Groups
  • Systemtags
  • Ext. Storages (Global)

For each exported user a directory should be created, which can then be imported on another owncloud installation.

The import/export process should be split into three steps:

  1. Global owncloud data (Groups, Systemtags, ext Storages)
  2. User Data + Files
  3. Shares

The split exists because users depend on their groups already being present, while shares require the files to be created first.

The export process is not monolithic; e.g. there is a single command invocation per user:

occ export:user [email protected] /var/export
occ import:user [email protected] /var/export  
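Since the export is one invocation per user, a batch migration is just a loop over users. A minimal sketch of how a wrapper could assemble these invocations (the `occ` path and the user list are assumptions for illustration):

```python
# Sketch: assembling one occ invocation per user, since the export
# process is not monolithic. The occ binary path is an assumption.

def export_commands(users, export_dir, occ="occ"):
    """One export:user invocation per user on the source."""
    return [[occ, "export:user", user, export_dir] for user in users]

def import_commands(users, export_dir, occ="occ"):
    """One import:user invocation per user on the target."""
    return [[occ, "import:user", user, export_dir] for user in users]
```

Each returned command list could then be handed to something like `subprocess.run`.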

Export format

Data is exported to a directory per owncloud and user with a specific layout: e.g.

|--groups.json
|--tags.json
|--ext_storages.json
[email protected]
   |--metadata.json
   |--activity.json
   +--trashbin
   +--files
      |--foo.docx
      |--dog.gif

Note: This is just a raw example to bring the idea across, this will probably evolve during development.
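A sketch of how the export side could create that skeleton on disk; the file names follow the example layout above, and like the layout itself this will probably change:

```python
import json
from pathlib import Path

# Sketch: creating the per-user export layout shown above.
# The file names follow the example; the exact layout may evolve.

INSTANCE_FILES = ("groups.json", "tags.json", "ext_storages.json")

def create_export_skeleton(root, user_id):
    """Create the instance-level files and the per-user directory tree."""
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    for name in INSTANCE_FILES:
        path = root / name
        if not path.exists():
            path.write_text("{}")   # filled in by the instance-global export
    user_dir = root / user_id
    (user_dir / "files").mkdir(parents=True, exist_ok=True)
    (user_dir / "trashbin").mkdir(parents=True, exist_ok=True)
    (user_dir / "metadata.json").write_text(json.dumps({"user": {"userId": user_id}}))
    return user_dir
```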

metadata.json

{
  "date": "2018-07-30T16:22:33+00:00",
  "user": {
    "userId": "admin",
    "displayName": "admin",
    "email": null,
    "quota": "default",
    "backend": "Database",
    "home": "/home/ilja/code/core2/data/admin",
    "enabled": true,
    "groups": [
      "admin"
    ],
    "preferences": [
      {
        "appId": "core",
        "configKey": "lang",
        "configValue": "en"
      },
      {
        "appId": "core",
        "configKey": "timezone",
        "configValue": "Europe/Berlin"
      }
    ],
    "files": [
      {
        "type": "folder",
        "path": "files/emptyfolder",
        "eTag": "5b5f37bd3a5b6",
        "permissions": 31
      },
      {
        "type": "folder",
        "path": "files/folder",
        "eTag": "5b5f2ebe44645",
        "permissions": 31
      },
      {
        "type": "file",
        "path": "files/folder/somefile.txt",
        "eTag": "81281d297cd48cf7e30cbf420921b360",
        "permissions": 27
      },
      {
        "type": "file",
        "path": "files/welcome.txt",
        "eTag": "275bae4be6e9ff5fc5cc342fe20c13cb",
        "permissions": 27
      }
    ]
  }
}
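On import, this payload is what the target has to consume. A minimal sketch of parsing it (only a few fields are pulled out; a real import would consume all of them):

```python
import json

# Sketch: reading the metadata.json payload shown above on import.

def read_metadata(text):
    """Return (userId, groups, file entries) from a metadata.json string."""
    user = json.loads(text)["user"]
    files = [(f["type"], f["path"], f["eTag"]) for f in user.get("files", [])]
    return user["userId"], user.get("groups", []), files
```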

Plugin system

To allow export of app data, provide import/export hooks for other apps. Potential problem: inter-app dependencies.

Files

Files are imported/recreated using owncloud APIs on the target system.

  • Restore etags in filecache
  • Possibility to mv or cp the files from the export directory into the owncloud data directory (customer-specific)
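The customer-specific mv-or-cp option could be a single flag on the file placement step. A sketch (restoring etags in the filecache is a database concern and not shown here):

```python
import shutil
from pathlib import Path

# Sketch of the mv-or-cp option from the list above: copy by default,
# move when the admin wants the export directory drained.

def place_file(src, dst, move=False):
    """Copy (default) or move an exported file into the data directory."""
    dst = Path(dst)
    dst.parent.mkdir(parents=True, exist_ok=True)
    if move:
        shutil.move(str(src), str(dst))
    else:
        shutil.copy2(str(src), str(dst))
    return dst
```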

Users

If a user already exists in LDAP on the target system, reuse the available user (match by email).
If a group a user belongs to is missing, import the user anyway but warn the admin.
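Both rules above are simple lookups. A sketch, where the dict shapes mirror the metadata.json example and are otherwise assumptions:

```python
# Sketch: reuse an existing LDAP user on the target when the email
# matches; collect missing groups so the admin can be warned.

def resolve_target_user(exported_user, ldap_users):
    """Return the matching LDAP userId, or None if the user must be created."""
    email = exported_user.get("email")
    if email:
        for candidate in ldap_users:
            if candidate.get("email") == email:
                return candidate["userId"]
    return None

def missing_groups(exported_user, target_groups):
    """Groups the admin should be warned about; the import proceeds anyway."""
    return [g for g in exported_user.get("groups", []) if g not in target_groups]
```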

Shares

If a Sharee is not available on the "target" system the share is converted to a federated share pointing to the source system. As soon as both users are available on the target system the federated share is converted to a local share.

Group Shares

Group shares are exported. On import they are recreated only if the group exists, regardless of whether the group members are the same. As there are no federated group shares, this is "best effort"; this behavior still allows keeping group shares when, e.g., complete instances are migrated and groups are exported & imported first.

Public links

Move the hash IDs and passwords to the new system.

User mapping

Every import command should have a --user-mapping option which receives a mapping.json
to import the users under a new uid on the new system.

We need to document that this file must be created beforehand by the admin and should not be changed until the migration process is completed.
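A possible shape for mapping.json and how an import could apply it; the exact file format is an assumption, and unmapped uids pass through unchanged:

```python
import json

# Sketch: a mapping.json shape (source uid -> target uid) and its
# application during import. The format itself is an assumption.

def load_mapping(text):
    """Parse a mapping.json string into a dict of source -> target uids."""
    return json.loads(text)

def map_uid(uid, mapping):
    """Translate a source uid to its target uid; unmapped uids pass through."""
    return mapping.get(uid, uid)
```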

Example usage

Step 1: Export the "cloud" as a preparation step, which includes groups, global ext. storages and tags

occ export:groups     ->  occ import:groups
occ export:storages   ->  occ import:storages    # global ext. storages
occ export:systemtags ->  occ import:systemtags

Step 2:

  • User export, on system A.
  • User import on system B; this step will ignore groups and tags if not present (they should have been created by the commands above)
occ export:user [email protected] 
occ import:user [email protected] [--user-mapping [email protected]] (or a mapping file)

Step 3: Migrate shares to fed. shares on system A:

occ migrate:shares [email protected] --user-mapping [email protected]

Export / Import for specific app related data (Step 2):

occ export:activities [email protected]
occ import:activities [email protected] [--user-mapping [email protected]]

Note: This is just a raw example to bring the idea across, this will probably evolve further during development.

@IljaN IljaN added the orga label Sep 17, 2018

IljaN commented Sep 17, 2018

Important to note: the import:user command already maps as much as possible, so there is no need to run "occ migrate:shares" on the receiving instance.

Scenario

  • Instance A and B
  • Users A1, A2, A3 are migrated to B1, B2, B3.
  • A1 shares to A2, A2 shares to A3. (A1 -> A2, A2 -> A3)
  1. exports of users A1, A2, A3
    • system exports data (files, shares, etc) as is, no mapping
  2. on B: occ import:user A3 --user-mapping B3 (A1 -> A2, A2 -> A3)
    • system imports user data from A3 and maps all user id usages to B3 where the user is owner (storage owner, share owner, etc)
    • the share pointing to A3 is left alone since A3 is the recipient here (will be addressed later)
  3. on A: occ migrate:shares A3 --user-mapping B3 (A1 -> A2, A2 -> B3)
    • system maps all share recipients from A3 to B3. Since B is a different domain than A, this local share becomes a federated share
  4. on B: occ import:user A2 --user-mapping B2 (A1 -> A2, B2 -> B3)
    • system imports user data from A2 and maps all user id usages to B2 where the user is owner (storage owner, share owner, etc.). This converts the A2 -> B3 share to B2 -> B3. Since the domain is now the same, the federated share gets converted to a local one.
    • since there was a share owned by A2 it gets converted to be owned by B2 instead
  5. on A: occ migrate:shares A2 --user-mapping B2 (A1 -> B2, B2 -> B3)
  6. on B: occ import:user A1 --user-mapping B1 (B1 -> B2, B2 -> B3)
  7. on A: occ migrate:shares A1 --user-mapping B1 (not needed, no changes)
  • pro: designed for live migration and satisfies the "no maintenance mode" requirement
  • cons:
    • relies on admins executing commands in proper order on separate instances
      • can be mitigated with safeguards like locks, see below
      • admin could be advised to write scripts containing a series of these commands (might need puppet or other tools to be able to distribute commands across servers)
    • data is temporarily in an inconsistent state: a share points at a not-yet-existing user
      • can be mitigated with safeguards/locks
      • risk of permanent inconsistency if admin forgets/skips a step
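The scenario above can be sketched as two pure rewrites over (owner, recipient) pairs: import:user maps the owner side, migrate:shares maps the recipient side. Running them in the listed order reproduces the intermediate states from the steps:

```python
# Sketch: the scenario above as rewrites over (owner, recipient) pairs.
# import:user maps the owner side; migrate:shares maps the recipient side.

def map_owner(shares, src, dst):
    """import:user on B: rewrite shares owned by src to be owned by dst."""
    return [(dst if owner == src else owner, rcpt) for owner, rcpt in shares]

def map_recipient(shares, src, dst):
    """migrate:shares on A: rewrite shares received by src to go to dst."""
    return [(owner, dst if rcpt == src else rcpt) for owner, rcpt in shares]

shares = [("A1", "A2"), ("A2", "A3")]       # initial state on A
shares = map_owner(shares, "A3", "B3")      # step 2: A3 owns no shares, no change
shares = map_recipient(shares, "A3", "B3")  # step 3: A2 -> B3 (becomes federated)
shares = map_owner(shares, "A2", "B2")      # step 4: B2 -> B3 (local again on B)
shares = map_recipient(shares, "A2", "B2")  # step 5: A1 -> B2
shares = map_owner(shares, "A1", "B1")      # step 6: B1 -> B2
```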
