diff --git a/docs/images/_img/django_add_ipython_image.png b/docs/images/_img/django_add_ipython_image.png deleted file mode 100644 index f2df713..0000000 Binary files a/docs/images/_img/django_add_ipython_image.png and /dev/null differ diff --git a/docs/images/images.md b/docs/images/images.md deleted file mode 100644 index e2feeda..0000000 --- a/docs/images/images.md +++ /dev/null @@ -1,91 +0,0 @@ -# ipynbsrv - -> IPython Notebook Multi-User Server -> - https://git.rackster.ch/groups/ipynbsrv - -## Images - -### Building the images - -The following chapters will tell you how to build Docker images to be used in ipynbsrv. Through the currently available images aren't very generic, one should be able to create others as well by extracting some of the information found here. - -#### Base LDAP - -This is the base image for all images that will be made available to end users. It performs operations such as: - -- Initializing the LDAP client/configuration -- Disable root login -- Prevent non-authorized (in the sense of not owner) users from accessing the container -- etc. - -Other images should inherit from it by declaring: - - FROM ipynbsrv/base-ldap:latest - -at the very top of the corresponding `Dockerfile`. - -To build the image, execute the following commands on the Docker host: - -```bash -$ IMG_NAME=base-ldap -$ BRANCH=master -$ mkdir $IMG_NAME -$ cd $IMG_NAME -$ wget https://git.rackster.ch/ipynbsrv/dockerfiles/raw/$BRANCH/$IMG_NAME/Dockerfile -$ docker build -t ipynbsrv/$IMG_NAME . -``` - -#### IPython Notebook (Py2) - -The **ipynbsrv** stack was documented and developed as part of a school project. The main goal was the creation of a multi-user IPython notebook server, so it's not a surprise the only available image right now is an IPython one. - -Within the `Dockerfile` and startup script, those actions are performed: - -- Install the latest IPython version -- Create a working directory `/data` which contains the user's home directory, the shares and public dirs -- Initialize an IPython profile, set the `base_url` and start a new notebook instance -- some more... - -To build the image, issue the commands below: - -> Make sure you have already built the `base-ldap` image. It is the base image in use! - -```bash -$ IMG_NAME=ipython2-notebook -$ BRANCH=master -$ mkdir $IMG_NAME -$ cd $IMG_NAME -$ wget https://git.rackster.ch/ipynbsrv/dockerfiles/raw/$BRANCH/$IMG_NAME/Dockerfile -$ wget https://git.rackster.ch/ipynbsrv/dockerfiles/raw/$BRANCH/$IMG_NAME/$IMG_NAME.bin -$ docker build -t ipynbsrv/$IMG_NAME . -``` - -During the build process, some errors might show up. That is because some commands try to open an interactive dialog - and that is not possible. Just ignore them for now. - -#### IPython Notebook (Py3) - -To build the image, issue the commands below: - -> Make sure you have already built the `base-ldap` image. It is the base image in use! - -```bash -$ IMG_NAME=ipython3-notebook -$ BRANCH=master -$ mkdir $IMG_NAME -$ cd $IMG_NAME -$ wget https://git.rackster.ch/ipynbsrv/dockerfiles/raw/$BRANCH/$IMG_NAME/Dockerfile -$ wget https://git.rackster.ch/ipynbsrv/dockerfiles/raw/$BRANCH/$IMG_NAME/$IMG_NAME.bin -$ docker build -t ipynbsrv/$IMG_NAME . -``` - -During the build process, some errors might show up. That is because some commands try to open an interactive dialog - and that is not possible. Just ignore them for now. - -### Registering the images - -To make an image available to end users, you have to add the image to the application. 
- -Open the administration interface (`http://"dedicated node"/admin`) and login with the superuser account. Click on `Images` in the `IPython Notebook Server Web Interface` box and create a new entry like on the screen below: - -![Django Admin Interface: Adding the IPython Notebook image](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/images/_img/django_add_ipython_image.png) - -> Make sure to adjust fields that are different for a given image! \ No newline at end of file diff --git a/docs/install/_installation.md b/docs/install/_installation.md new file mode 100644 index 0000000..c576e3b --- /dev/null +++ b/docs/install/_installation.md @@ -0,0 +1,431 @@ +# ipynbsrv + +> IPython Notebook Multi-User Server +> - https://git.rackster.ch/groups/ipynbsrv + +## Installation + +The following introduction steps explain how to setup a fresh box as an IPython notebook multi-user server. +If you follow the whole guide step-by-step, you should end-up with a fully functional, ready-to-use system. + +### Requirements + +- a dedicated hardware/virtualized node (will be the `Docker` host) +- around 4GB of ram (at very least 2GB as per the `Docker` requirements) +- some basic Linux skills + +### Tested Distributions + +- CentOS 7 64-bit (in theory only) +- Ubuntu 14.04 (LTS) 64-bit +- Ubuntu 14.10 64-bit + +> **Recommended distro:** Ubuntu 14.04 (LTS) 64-bit + +### Dedicated Node + +Everything starts at the dedicated node, which will be configured to host Docker containers. We assume you already installed a fresh copy of the recommended distro on the machine and are connected to it either directly or via SSH. + +To make the setup as easy as possible, we wrote a tiny shell script that will perform all needed operations for you. Just fetch and execute it as follow: + +```bash +$ apt-get -y install wget # or yum for EL +$ BRANCH=master +$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/setup_docker_host.sh +$ chmod +x setup_docker_host.sh && ./setup_docker_host.sh +``` + +> Note: Commands prefixed with `$` are meant to be run under `root` account. + +All it does is, create some directories inside `/srv/ipynbsrv`, install the Docker packages/environment and configure the system to use `LDAP` as an additional backend for user management. + +There is one thing you should double-check (we noticed serveral *problems* here) however. Open the file `/etc/nsswitch.conf` and ensure the lines for `passwd`, `group` and `shadow` end with `ldap`, like so: + +```bash +passwd: compat ldap +group: compat ldap +shadow: compat ldap +``` + +### LDAP Container + +> It is already time to bootstrap your first Docker container. Yay! + +The Django application (more on that later) itself and some core features like `user shares` depend on a centralized account directory. We have choosen an LDAP server for that purpose, so the next thing you're going to do is to create a container for it. + +Again, there is a shell script available that will perform most operations for you. +Start over by issueing: + +```bash +$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_ldap_container.sh +$ chmod +x create_ldap_container.sh && ./create_ldap_container.sh +``` + +and follow the introductions printed on screen. + +If everything went well, you should end up with an **All done!** message. + +#### User Management + +As said, we will use the newly created container for user management. So why not create one right away? + +> To be honest, we are not LDAP experts at all. 
We therefor use a graphical application called `Apache Directory Studio` to manage our users/groups. Head over and install a local copy for the next steps: [https://directory.apache.org/studio/downloads.html](https://directory.apache.org/studio/downloads.html) + +> Note: At this stage it is assumed that you have created the LDAP container, installed the `Apache Directory Studio` and know the IP address of your dedicated box (Docker host). + +Open `Apache Directory Studio` and create a new connection: + + File -> New -> LDAP Browser -> LDAP Connection + +Enter the IP address of your dedicated box into the `Hostname` field and verify the parameters by clicking on `Test connection`. If it works, you can continue to the next wizard page. + +You will be asked for authentication credentials. Fill them like this: + + Authentification method: Simple Authentification + Bind DN or user: cn=admin,dc=ipynbsrv,dc=ldap + Bind password: "the password you took when creating the LDAP container" + +and verify they are correct. If they are, you can finish the wizard and connect to your LDAP container. + +You should have a view similiar to this: + +![Apache Directory Studio Connection](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_connection.png) + +##### Creating Records + +Now that you are connected to the LDAP server, we can continue by creating a new group (needed by the user) and the user itself afterwards. + +###### Creating a Group + + Right-click on "ou=groups" -> New -> New Entry -> Create entry from scratch + +In the upcoming dialog, choose the object class `posixGroup`, click `Add` and go on to the next screen, which you should fill in like this: + +![Apache Directory Studion Group Creation CN](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group_cn.png) + +> The value of `cn` is the desired username for which this group is. + +Click `Next` and enter a group ID. If this is your first group (and it should be), enter something like `2500` (so we have some offset to the default system groups which are around `500`) and finish the process. + +Again, you should end up with a view like this: + +![Apache Directory Studio Group Overview](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group.png) + +I have already right-clicked somewhere in the information window, because we need to add another attribute to the group: + + Right-click -> New Attribute -> memberUid -> Finish + +and enter the same username in the red-colored field. Done! + +![Apache Directory Studio Group Overview](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group_final.png) + +> Note: From now on, you should choose `Use existing entry as template` when creating a new group. That way you do not have to fill in everything again each time (**but do not forget to change the username fields**). + +###### Creating a User + + Right-click on "ou=users" -> New -> New Entry -> Create entry from scratch + +In the upcoming dialog, choose the object classes `inetOrgPerson` and `posixAccount`, click `Add` and go on to the next screen. + +As with the group, use `cn=username` as `RND` and click `Next`. 
You end up with a window that has some red-bordered fields (`gidNumber`, `sn` etc.), which you must fill out like on the screen below: + +![Apache Directory Studio User Wizard](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_user.png) + +> The `gidNumber` is the ID of the group you have just created. I like to keep it in sync with the `uidNumber`, so it is easier to remember. + +Close the window by clicking on `Finish`. As a last step, you have to add a password to this user account. Proceed as follow: + + Right-click -> New Attribute -> userPassword -> Finish + +and enter the desired password in the popping-out window. Done. + +> Right now, only the default (and not so secure) `MD5` hashing algorithm is supported... + +> Note: From now on, you should choose `Use existing entry as template` when creating a new user. That way you do not have to fill in everything again each time (**but do not forget to change the username/group/password fields**). + +### Postgres Container + +As most useful applications, we need a database to store application information. We decided to use `Postgres` for that purpose. For that reason, we're going to create yet another container. + +Again, there is a shell script available that will perform most operations for you. +Start over by issueing: + +```bash +$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_postgresql_container.sh +$ chmod +x create_postgresql_container.sh && ./create_postgresql_container.sh +``` + +and follow the introductions printed on screen. + +If everything went well, you should end up with an **All done!** message. + +> That was an easy one, wasn't it? + +### Web Interface (WUI) Container + +The WUI container is the trickiest one to setup, yet everyone should be able to suceed. The container will communicate with the others we have created (`LDAP` and `Postgres`) and expose our web application over `HTTP`. + +Yes, you have guessed correctly. There is yet another script to bootstrap the container for you: + +```bash +$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_wui_container.sh +$ chmod +x create_wui_container.sh && ./create_wui_container.sh +``` + +It will bring you right into the container, where you need to issue all the commands found below. + +#### LDAP + +As already done on the dedicated node, we need to install and configure the `PAM LDAP` module: + +```bash +$ apt-get update +$ apt-get -y install libpam-ldap +``` + +When prompted, enter: + + LDAP server: ldap://ipynbsrv_ldap/ + Distinguished name: dc=ipynbsrv,dc=ldap + 3, No, No + +There is one thing you should double-check. Open the file `/etc/nsswitch.conf` and ensure the lines for `passwd`, `group` and `shadow` end with `ldap`, like so: + +```bash +passwd: compat ldap +group: compat ldap +shadow: compat ldap +``` + +#### Nginx/OpenResty + +Because we need special Nginx modules, we decided to use the `OpenResty` derivate, which includes them. 
+Sadly we cannot install the package via `apt/aptitude`, but need to compile it from source: + +```bash +$ OPENRESTY_VERSION=1.7.7.1 +$ apt-get -y install libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make wget + +$ cd /usr/local/src +$ wget http://openresty.org/download/ngx_openresty-$OPENRESTY_VERSION.tar.gz +$ tar xzvf ngx_openresty-$OPENRESTY_VERSION.tar.gz +$ cd ngx_openresty-$OPENRESTY_VERSION + +$ ./configure \ + --user=www-data \ + --group=www-data \ + \ + --with-ipv6 \ + --with-pcre --with-pcre-jit \ + --with-http_auth_request_module \ + \ + --without-http_echo_module \ + --without-http_xss_module \ + --without-http_coolkit_module \ + --without-http_form_input_module \ + --without-http_srcache_module \ + --without-http_lua_module \ + --without-http_lua_upstream_module \ + --without-http_memc_module \ + --without-http_redis2_module \ + --without-http_redis_module \ + --without-http_rds_json_module \ + --without-http_rds_csv_module \ + --without-lua_cjson \ + --without-lua_redis_parser \ + --without-lua_rds_parser \ + --without-lua_resty_dns \ + --without-lua_resty_memcached \ + --without-lua_resty_redis \ + --without-lua_resty_mysql \ + --without-lua_resty_upload \ + --without-lua_resty_upstream_healthcheck \ + --without-lua_resty_string \ + --without-lua_resty_websocket \ + --without-lua_resty_lock \ + --without-lua_resty_lrucache \ + --without-lua_resty_core \ + --without-http_ssi_module \ + --without-http_geo_module \ + --without-http_split_clients_module \ + --without-http_fastcgi_module \ + --without-http_scgi_module \ + --without-http_memcached_module \ + --without-http_limit_conn_module \ + --without-http_limit_req_module \ + --without-http_empty_gif_module \ + --without-http_upstream_ip_hash_module \ + --without-mail_pop3_module \ + --without-mail_imap_module \ + --without-mail_smtp_module + +$ make +$ make install +``` + +To make it auto-start on boot, create the file `/etc/my_init.d/nginx.sh` with those lines inside: + +```bash +#!/bin/sh +exec /usr/local/openresty/nginx/sbin/nginx +``` + +and ensure it is executable: + +```bash +chmod +x /etc/my_init.d/nginx.sh +``` + +#### Python/uwsgi/npm + +Not much to say here, those are just some of the packages (mainly Python stuff) we need. + +```bash +$ apt-get -y install python-pip uwsgi-plugin-python +$ apt-get -y install python-dev libldap2-dev libsasl2-dev libssl-dev # for django-auth-ldap +$ apt-get -y install python-psycopg2 # for Django PostgreSQL +$ apt-get -y install nodejs-legacy npm +$ npm -g install bower less # for frontend assets +$ pip install mkdocs # for the user guide +``` + +#### Django/Application + +Finally here, you are going to clone the source code repository, create a dedicated user (things should not be run under `root`, should they?) and populate the database. 
+ +First, install the `git` version control system: + +```bash +$ apt-get -y install git +``` + +and continue by creating the dedicated user and cloning the repository: + +```bash +$ useradd --home-dir /srv/ipynbsrv --create-home --system ipynbsrv +$ su ipynbsrv +``` + +```bash +cd ~ +mkdir -p data/homes data/public data/shares +BRANCH=master +git clone -b $BRANCH https://git.rackster.ch/ipynbsrv/ipynbsrv.git _repo +ln -s /srv/ipynbsrv/_repo/ /srv/ipynbsrv/www +``` + +As `root` again (use `exit` to become `root`), install some more Python modules and configure `Nginx`: + +```bash +$ cd /srv/ipynbsrv/_repo/ +$ pip install -r requirements.txt +$ mkdir -p /var/run/ipynbsrv/ +$ mkdir /usr/local/openresty/nginx/conf/sites-enabled +$ ln -s /srv/ipynbsrv/_repo/lib/confs/nginx/ipynbsrv.conf /usr/local/openresty/nginx/conf/sites-enabled/ + +$ nano /usr/local/openresty/nginx/conf/nginx.conf +``` + +and change these values to: + +```bash +user www-data; +worker_processes auto; + +http { + # remove the servers already defined here, but not other stuff like mime.types etc. + include /usr/local/openresty/nginx/conf/sites-enabled/*.conf; +} +``` + +and you are mostly done with the preparation! + +##### Django Application + +The final steps include defining the Django application settings, populating the database and some other little changes. + +Start over by changing to the `ipynbsrv` user account: + +```bash +$ su ipynbsrv +cd ~/www +``` + +###### settings.py + +Everyone familiar with `Django` knows this file. It contains the application's settings. +Some of them need adjustment, so open it with `nano ipynbsrv/settings.py` and define those options: + +| Option | Value +|----------------------------|------------------------------------- +| SECRET_KEY | Some randomly generated characters +| DEBUG | Change to `False` +| TEMPLATE_DEBUG | Change to `False` +| ALLOWED_HOSTS | ['*'] +| DATABASES.default.PASSWORD | The `Postgres` password you took +| DATABASES.ldap.PASSWORD | The `LDAP` admin password you took +| TIME_ZONE | Your timezone (e.g. `Europe/Zurich`) +| DOCKER\_API\_VERSION | Get it with `docker version` +| DOCKER\_IFACE\_IP | The IP address of the `docker0` iface + +> Note: For `DOCKER_IFACE_IP` issue `ifconfig docker0 | grep inet\ addr:` on the dedicated node. + +All other values should be fine. + +Now that you have the `DOCKER_IFACE_IP`, open `/srv/ipynbsrv/_repo/lib/confs/nginx/ipynbsrv.conf` and replace: + + proxy_pass http://172.17.42.1:$1; + +with: + + proxy_pass http://"DOCKER_IFACE_IP":$1; + +###### manage.py + +`manage.py` is Django's utility script to perform setup and maintenance tasks. 
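Before creating or applying migrations, it can be worth a quick sanity check that the settings you just edited actually load. This is an optional step and uses only standard Django management commands, nothing specific to `ipynbsrv`:

```bash
python manage.py check  # validates the project configuration and reports misconfigured settings
python manage.py help   # lists all available management commands
```

If `check` reports errors, fix `ipynbsrv/settings.py` before continuing.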
+ +Use it to create migrations (if any) and populate/alter the database: + +```bash +python manage.py makemigrations +python manage.py migrate +``` + +Because we are using `LESS` to produce `CSS` and `bower` to manage external dependencies, you need to compile the styles and install the deps (like `jQuery` etc.): + +```bash +cd ipynbsrv/wui/static/ +bower install # installs external dependencies +mkdir css +lessc less/main.less css/main.css # compile LESS to CSS +cd ~/www +``` + +> If `bower install` doesn't work, try forcing `git` to use HTTP protocol: `git config --global url.https://.insteadOf git://` + +The user guide must be generated as well: + +```bash +cd /srv/ipynbsrv/_repo/docs/user-guide/ +mkdocs build --clean +``` + +Last but not least, finalize the whole setup by issueing: + +```bash +python manage.py collectstatic +python manage.py createsuperuser +``` + +which will create a local superuser account (the admin account). + +Leave the container with: + +```bash +exit +exit +``` + +so the script continues. It will create a local image from the container and bootstrap a new instance using that one. As soon as it has completed, you're all done. + +Congratulations! \ No newline at end of file diff --git a/docs/install/installation.md b/docs/install/installation.md index c576e3b..02ac8c8 100644 --- a/docs/install/installation.md +++ b/docs/install/installation.md @@ -1,210 +1,143 @@ -# ipynbsrv +# Installation -> IPython Notebook Multi-User Server -> - https://git.rackster.ch/groups/ipynbsrv +> A step-by-step guide on how to setup an `ipynbsrv` server/infrastructure. -## Installation +## Introduction -The following introduction steps explain how to setup a fresh box as an IPython notebook multi-user server. -If you follow the whole guide step-by-step, you should end-up with a fully functional, ready-to-use system. +Before we begin with the installation, it is incessant to leave a few words about the concepts and architecture of `ipynbsrv`. The main reason for that is that `ipynbsrv` is not really an application, but a giant project consisting of several (independent) components – each of it playing an important role within the whole setup. Since most of these components are exchangeable, the necessarily install steps will vary depending on the concrete component implementation/specification you pick. This makes it harder to get started, but once you understand the concepts and ideas behind that approach, you'll be loving it – promised. -### Requirements +### Architecture -- a dedicated hardware/virtualized node (will be the `Docker` host) -- around 4GB of ram (at very least 2GB as per the `Docker` requirements) -- some basic Linux skills +Basically, the whole can be seen as a multilayered architecture project. Each component is either part of one layer or defines the layer itself. Additionally, each component can be structured and multilayered itself (i.e. the software stuff). -### Tested Distributions +Because there has to be at least one (sub)component plugging all the others together and coordinating their behavior, minimal requirements are specified for each of them. These specifications come either in the form of contracts (i.e. for the software backends) or formal descriptions (i.e. the networking). Everything together can therefor only work, if every single component fully fulfills its relevant specifications. 
-- CentOS 7 64-bit (in theory only) -- Ubuntu 14.04 (LTS) 64-bit -- Ubuntu 14.10 64-bit - -> **Recommended distro:** Ubuntu 14.04 (LTS) 64-bit - -### Dedicated Node - -Everything starts at the dedicated node, which will be configured to host Docker containers. We assume you already installed a fresh copy of the recommended distro on the machine and are connected to it either directly or via SSH. - -To make the setup as easy as possible, we wrote a tiny shell script that will perform all needed operations for you. Just fetch and execute it as follow: - -```bash -$ apt-get -y install wget # or yum for EL -$ BRANCH=master -$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/setup_docker_host.sh -$ chmod +x setup_docker_host.sh && ./setup_docker_host.sh -``` - -> Note: Commands prefixed with `$` are meant to be run under `root` account. +Since a specification contains only the minimal set of requirements, two components that are totally specification conform, can have nearly nothing in common. For that reason, every component has to provide its own setup instructions (if any). Depending on the component's layer, its install steps may affect the installation process of other components as well. Besides, the type of deployment (single-server or multi-server) can have an impact too. For that reason, one has to inspect every component's install guide before actually starting. This makes the installation – as initially said – complex. -All it does is, create some directories inside `/srv/ipynbsrv`, install the Docker packages/environment and configure the system to use `LDAP` as an additional backend for user management. +### Components -There is one thing you should double-check (we noticed serveral *problems* here) however. Open the file `/etc/nsswitch.conf` and ensure the lines for `passwd`, `group` and `shadow` end with `ldap`, like so: +As touched in the previous sections, `ipynbsrv` consists of mostly independent and replaceable components. The following list gives you a brief and shorten overview of the different components currently involved: -```bash -passwd: compat ldap -group: compat ldap -shadow: compat ldap -``` - -### LDAP Container - -> It is already time to bootstrap your first Docker container. Yay! - -The Django application (more on that later) itself and some core features like `user shares` depend on a centralized account directory. We have choosen an LDAP server for that purpose, so the next thing you're going to do is to create a container for it. - -Again, there is a shell script available that will perform most operations for you. -Start over by issueing: - -```bash -$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_ldap_container.sh -$ chmod +x create_ldap_container.sh && ./create_ldap_container.sh -``` +- **Networking:** To limit access to user created containers to its owner, there has to be a network that is not reachable from the outside. An internal only network has to exist. This network is used for other thing as well and is the most low-end component. The standard implementation uses `Open vSwitch` to create such a network. +- **Core Infrastructure:** The core infrastructure itself is not a component (or if you'd like to call it like that anyway, think of it as one giant component consisting of various parts). Beside a handful of directories on the filesystem, it includes an LDAP directory server, a Postgresql DB server, an Nginx web server and a few Django applications. 
The project won't work without them and they are not meant to be replaceable, that's why they are grouped under the *Core Infrastructure* name. The default implementation/install guide uses Docker containers to run these services, but one is free to install them somewhere else. +- **Backends:** Backend components are on one hand the most powerful abstraction in `ipynbsrv` (they abstract the storage, container and user/group backends), the most complex on the other hand. Take the `Docker` container backend as an example: it consists of the Docker platform (so it has to provide an install guide for that), Python code to communicate with that platform (it has to provide an `ipynbsrv.contract.container_backend` implementation for that) and a set of preconfigured images to run containers from (it has to provide `Dockerfile`s for that). Depending on the deployment (i.e. multi-server), additional tools like the `Docker Registry` are needed as well. -and follow the introductions printed on screen. +If you have (roughly) understood this concept of components, you're ready to go. -If everything went well, you should end up with an **All done!** message. +> PS: Do I have complied with my promise? ;) -#### User Management +## Requirements -As said, we will use the newly created container for user management. So why not create one right away? +The following requirements are only valid for the core infrastructure. Each component may define own requirements as well, so don't take the ones listed here as given: -> To be honest, we are not LDAP experts at all. We therefor use a graphical application called `Apache Directory Studio` to manage our users/groups. Head over and install a local copy for the next steps: [https://directory.apache.org/studio/downloads.html](https://directory.apache.org/studio/downloads.html) +- a dedicated hardware or virtualized node +- at least 2GB of RAM +- intermediate *nix skills -> Note: At this stage it is assumed that you have created the LDAP container, installed the `Apache Directory Studio` and know the IP address of your dedicated box (Docker host). +### Tested operating systems -Open `Apache Directory Studio` and create a new connection: +- OS X 10.10.x (Yosemite) +- CentOS 7 64-bit (in theory only) +- Ubuntu 14.04 (LTS) +- Ubuntu 14.10 +- Ubuntu 15.04 (LTS) +- Ubuntu 15.10 - File -> New -> LDAP Browser -> LDAP Connection +> **Recommended OS:** Ubuntu 15.04 (LTS) 64-bit -Enter the IP address of your dedicated box into the `Hostname` field and verify the parameters by clicking on `Test connection`. If it works, you can continue to the next wizard page. +## Available Components -You will be asked for authentication credentials. Fill them like this: +Below you can find a list of all currently available components. Make sure to consult their documentations before you start, as some might only work in combination with others. - Authentification method: Simple Authentification - Bind DN or user: cn=admin,dc=ipynbsrv,dc=ldap - Bind password: "the password you took when creating the LDAP container" +> If you are aware of one not listed, please submit a merge request. Thanks! -and verify they are correct. If they are, you can finish the wizard and connect to your LDAP container. 
+### Container Backends -You should have a view similiar to this: +- [Docker](https://git.rackster.ch/ipynbsrv/backends/blob/master/docs/container_backends.md#docker) +- [HttpRemote](https://git.rackster.ch/ipynbsrv/backends/blob/master/docs/container_backends.md#httpremote) -![Apache Directory Studio Connection](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_connection.png) +### Networking -##### Creating Records +- [Docker](https://git.rackster.ch/ipynbsrv/backends/blob/master/docs/container_backends.md#docker) (single-server only) +- [Open vSwitch](https://git.rackster.ch/ipynbsrv/ipynbsrv/blob/master/docs/install/networking/openvswitch.md) -Now that you are connected to the LDAP server, we can continue by creating a new group (needed by the user) and the user itself afterwards. +### Storage Backends -###### Creating a Group +- [LocalFileSystem](https://git.rackster.ch/ipynbsrv/backends/blob/master/docs/storage_backends.md#localfilesystem) - Right-click on "ou=groups" -> New -> New Entry -> Create entry from scratch +### User/Group Backends -In the upcoming dialog, choose the object class `posixGroup`, click `Add` and go on to the next screen, which you should fill in like this: +- [LdapBackend](https://git.rackster.ch/ipynbsrv/backends/blob/master/docs/usergroup_backends.md#ldapbackend) -![Apache Directory Studion Group Creation CN](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group_cn.png) +## Getting started -> The value of `cn` is the desired username for which this group is. +> If you are looking for an easier install guide (this one describes the modular approach), the [Easy Install Guide](https://git.rackster.ch/ipynbsrv/ipynbsrv/blob/master/docs/install/easy_installation.md) might be for you. +> –––– +> If you landed here and have not yet read the **Introduction** chapter, do yourself a flavor and start there. -Click `Next` and enter a group ID. If this is your first group (and it should be), enter something like `2500` (so we have some offset to the default system groups which are around `500`) and finish the process. +Now that you are familiar enough with `ipynbsrv`'s architecture, we can proceed to the actual installation steps. Because the setup you're going to deploy depends extensively on the components you choose, the following steps are very generic. -Again, you should end up with a view like this: +The following chapters assume you have a running box (see **Requirements**) and an open `root` console. Commands prefixed with `$` are meant to be run as `root`. -![Apache Directory Studio Group Overview](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group.png) +### 1. Setting up Networking -I have already right-clicked somewhere in the information window, because we need to add another attribute to the group: +The very first component to setup is networking. Right now, you have the choice between *Docker* and *Open vSwitch* (see **Available Components**). If you plan to deploy a multi-server setup, you cannot use *Docker*. The actual installation steps are defined by the component, so head over to its documentation to get specification comply networking. - Right-click -> New Attribute -> memberUid -> Finish +> The reference implementation uses `Open vSwitch`. -and enter the same username in the red-colored field. Done! +### 2. 
Setting up the Core Infrastructure (Part 1) -![Apache Directory Studio Group Overview](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_group_final.png) +Setting up the core infrastructure is splitted into two parts. The main reason for that is that backends (see below) might influent the way you have to deploy the core infrastructure. Thus, this first part only includes the steps that *should* not vary, while the second part contains the rest. -> Note: From now on, you should choose `Use existing entry as template` when creating a new group. That way you do not have to fill in everything again each time (**but do not forget to change the username fields**). +#### 2.1 Deploying a PostgreSQL server -###### Creating a User +The core application relies on a working PostgreSQL server to store its data. Make sure you have one running somewhere the application can access it. - Right-click on "ou=users" -> New -> New Entry -> Create entry from scratch +> The reference implementation uses the official PostgreSQL Docker container image and links the container into the application container. -In the upcoming dialog, choose the object classes `inetOrgPerson` and `posixAccount`, click `Add` and go on to the next screen. +#### 2.2 Deploying an LDAP server -As with the group, use `cn=username` as `RND` and click `Next`. You end up with a window that has some red-bordered fields (`gidNumber`, `sn` etc.), which you must fill out like on the screen below: +The *Lightweight Directory Access Protocol* is a widely supported protocol to communicate with directory servers like Window's Active Directory. The core application needs full access to such a server to store user and group information on it. While you're free to use other procotols/servers as well, this most likely does not work at the moment. -![Apache Directory Studio User Wizard](https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/master/docs/install/_img/apache_directory_studio_user.png) +The server must have two organizational units for users and groups, best named `users` and `groups`. Depending on the container backend you pick, created containers (also on remote nodes) need to access the server. -> The `gidNumber` is the ID of the group you have just created. I like to keep it in sync with the `uidNumber`, so it is easier to remember. +> The reference implementation runs `slapd` within a Docker container. To ensure containers and remote nodes have access to it, the services is binded onto the internal IPv4 address of the master node. -Close the window by clicking on `Finish`. As a last step, you have to add a password to this user account. Proceed as follow: +### 3. Setting up the Storage Backend - Right-click -> New Attribute -> userPassword -> Finish +Storage backends define the way how and where directories and files are stored. This includes the user's home directories, publication directories and user created share directories. -and enter the desired password in the popping-out window. Done. +Because the core application as well as user containers need to access these resources (directories and files), the storage backend should be the first backend to setup. Again, open the documentation of the storage backend in use to see how it needs to be set up. -> Right now, only the default (and not so secure) `MD5` hashing algorithm is supported... +> The reference implementation uses the `LocalFileSystem` backend. The working directory is set to `/srv/ipynbsrv/data`. 
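As a rough sketch only: with the `LocalFileSystem` backend and the `/srv/ipynbsrv/data` working directory mentioned above, the expected directory tree could be prepared like this (the `homes`, `public` and `shares` names follow the layout used by the legacy guide in this repository; consult the storage backend's own documentation for the authoritative layout and permissions):

```bash
$ mkdir -p /srv/ipynbsrv/data/homes   # user home directories
$ mkdir -p /srv/ipynbsrv/data/public  # publication directories
$ mkdir -p /srv/ipynbsrv/data/shares  # user created share directories
```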
-> Note: From now on, you should choose `Use existing entry as template` when creating a new user. That way you do not have to fill in everything again each time (**but do not forget to change the username/group/password fields**). +### 4. Setting up the Container Backend -### Postgres Container +Container backends are by far the most complex component to install (and implement). Since the specification for such backends do not include the required install steps, you have to read the concrete backends documentation again. It should tell you how to install the (container) isolation product itself, how to configure it so it plays nicely with `ipynbsrv`, how to add/create images for it and what additional steps are needed for a working multi-server deployment. -As most useful applications, we need a database to store application information. We decided to use `Postgres` for that purpose. For that reason, we're going to create yet another container. +> The reference implementation uses the `Docker` backend in combination with the `HttpRemote` proxy backend (if deploying a multi-server setup). -Again, there is a shell script available that will perform most operations for you. -Start over by issueing: +### 5. Setting up the User Backend -```bash -$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_postgresql_container.sh -$ chmod +x create_postgresql_container.sh && ./create_postgresql_container.sh -``` +Beside the internal usage of the `LdapBackend` for communication with the core infrastructure LDAP server (if that is the way you go), this backend (talking about it because no alternatives exist atm) can also be used to let users from an external LDAP server access the application. In general, user backends allow one to use an existing user directory to be used as `ipynbsrv`'s authentication backend. -and follow the introductions printed on screen. +But enough. Please consult the backend's own documentation to get it working. -If everything went well, you should end up with an **All done!** message. +> The reference implementation uses the `LdapBackend` backend and reuses the LDAP server from the core infrastructure to simulate an external server. -> That was an easy one, wasn't it? +### 6. Setting up the Core Infrastructure (Part 2) -### Web Interface (WUI) Container +Now that all backends are ready and you have read there documentations for potential notes about setting up the core infrastructure, we can finally deploy the rest of the core. -The WUI container is the trickiest one to setup, yet everyone should be able to suceed. The container will communicate with the others we have created (`LDAP` and `Postgres`) and expose our web application over `HTTP`. +#### 6.1 Deploying the Nginx Web Server -Yes, you have guessed correctly. There is yet another script to bootstrap the container for you: +Nginx is our web server of choice. A lot of the core features (i.e. container access control) depend on it. It will be configured to serve the Django application (but not only), so make sure to install Nginx either directly on the hardware node or within a powerful enough container. -```bash -$ wget https://git.rackster.ch/ipynbsrv/ipynbsrv/raw/$BRANCH/lib/scripts/create_wui_container.sh -$ chmod +x create_wui_container.sh && ./create_wui_container.sh -``` - -It will bring you right into the container, where you need to issue all the commands found below. 
- -#### LDAP - -As already done on the dedicated node, we need to install and configure the `PAM LDAP` module: +Because special Nginx modules are needed, we decided to use the OpenResty derivate, which includes them out-of-the-box. Sadly, we cannot install the package via apt/aptitude, but need to compile it from source. The following commands will help you with that: ```bash -$ apt-get update -$ apt-get -y install libpam-ldap -``` - -When prompted, enter: - - LDAP server: ldap://ipynbsrv_ldap/ - Distinguished name: dc=ipynbsrv,dc=ldap - 3, No, No - -There is one thing you should double-check. Open the file `/etc/nsswitch.conf` and ensure the lines for `passwd`, `group` and `shadow` end with `ldap`, like so: - -```bash -passwd: compat ldap -group: compat ldap -shadow: compat ldap -``` - -#### Nginx/OpenResty - -Because we need special Nginx modules, we decided to use the `OpenResty` derivate, which includes them. -Sadly we cannot install the package via `apt/aptitude`, but need to compile it from source: - -```bash -$ OPENRESTY_VERSION=1.7.7.1 -$ apt-get -y install libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make wget +$ OPENRESTY_VERSION=1.7.10.2 +$ apt-get -y install libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make wget # dependencies $ cd /usr/local/src $ wget http://openresty.org/download/ngx_openresty-$OPENRESTY_VERSION.tar.gz @@ -263,74 +196,68 @@ $ make $ make install ``` -To make it auto-start on boot, create the file `/etc/my_init.d/nginx.sh` with those lines inside: +> This will install OpenResty under `/usr/local/openresty` and Nginx under `/usr/local/openresty/nginx`. -```bash -#!/bin/sh -exec /usr/local/openresty/nginx/sbin/nginx -``` +Make sure the Nginx service is starting on boot by executing `/usr/local/openresty/nginx/sbin/nginx` during startup. -and ensure it is executable: +> The reference implementation is creating a dedicated Docker container for the Nginx web server and the stuff from the next chapters. -```bash -chmod +x /etc/my_init.d/nginx.sh -``` - -#### Python/uwsgi/npm +#### 6.2 Installing additional packages -Not much to say here, those are just some of the packages (mainly Python stuff) we need. +Not much to say here, those are just some of the packages (mainly Python) we need: ```bash -$ apt-get -y install python-pip uwsgi-plugin-python -$ apt-get -y install python-dev libldap2-dev libsasl2-dev libssl-dev # for django-auth-ldap +$ apt-get -y install python-pip # package manager +$ apt-get -y install uwsgi-plugin-python # uwsgi plugin (used for Nginx uwsgi_pass) $ apt-get -y install python-psycopg2 # for Django PostgreSQL $ apt-get -y install nodejs-legacy npm $ npm -g install bower less # for frontend assets $ pip install mkdocs # for the user guide ``` -#### Django/Application +#### 6.3 Deploying the Django Application -Finally here, you are going to clone the source code repository, create a dedicated user (things should not be run under `root`, should they?) and populate the database. +> Because no code has been publiched to `PyPie` or any other repository yet, everything needs to be installed from source. For that purpose, the source repositories are cloned via `git`. Once the project is mature enough, we expect the installation process to become easier. 
-First, install the `git` version control system: +##### 6.3.1 Preparing the base + +As a first prerequisite, install the `git` package: ```bash $ apt-get -y install git ``` -and continue by creating the dedicated user and cloning the repository: +After creating the directory where the application will resist, it is already time to clone the application repositories: ```bash -$ useradd --home-dir /srv/ipynbsrv --create-home --system ipynbsrv -$ su ipynbsrv -``` - -```bash -cd ~ -mkdir -p data/homes data/public data/shares BRANCH=master git clone -b $BRANCH https://git.rackster.ch/ipynbsrv/ipynbsrv.git _repo -ln -s /srv/ipynbsrv/_repo/ /srv/ipynbsrv/www ``` -As `root` again (use `exit` to become `root`), install some more Python modules and configure `Nginx`: +> A smart location for the repository is `/srv/ipynbsrv/_repo`. + +Since the `ipynbsrv` Django application has several dependencies, they need to be installed before the application will work: ```bash -$ cd /srv/ipynbsrv/_repo/ +$ cd _repo $ pip install -r requirements.txt +``` + +Next, we're going to create a directory that will contain the `uwsgi` socket and tell Nginx about the application's vhost file. This virtual host configuration for Nginx makes sure that i.e. requests are send to the `uwsgi` backend: + +```bash $ mkdir -p /var/run/ipynbsrv/ $ mkdir /usr/local/openresty/nginx/conf/sites-enabled $ ln -s /srv/ipynbsrv/_repo/lib/confs/nginx/ipynbsrv.conf /usr/local/openresty/nginx/conf/sites-enabled/ - -$ nano /usr/local/openresty/nginx/conf/nginx.conf ``` -and change these values to: +> The linking command assumes you have cloned the repository to `/srv/ipynbsrv/_repo`. + +Last but not least, make sure the Nginx main configuration at `/usr/local/openresty/nginx/conf/nginx.conf` reflects the snippet below: ```bash user www-data; -worker_processes auto; +worker_processes auto; # = CPU count http { # remove the servers already defined here, but not other stuff like mime.types etc. @@ -338,76 +265,50 @@ http { } ``` -and you are mostly done with the preparation! - -##### Django Application - -The final steps include defining the Django application settings, populating the database and some other little changes. +##### 6.3.2 Installing additional Python packages -Start over by changing to the `ipynbsrv` user account: +Remember the initial note regarding unpublished packages? They can be manually installed with: ```bash -$ su ipynbsrv -cd ~/www +$ cd /usr/local/src +$ git clone https://git.rackster.ch/ipynbsrv/contract.git +$ cd contract && pip install -e . && cd .. +$ git clone https://git.rackster.ch/ipynbsrv/common.git +$ cd common && pip install -e . && cd .. +$ git clone https://git.rackster.ch/ipynbsrv/client.git +$ cd client && pip install -e . && cd .. +$ git clone https://git.rackster.ch/ipynbsrv/backends.git +$ cd backends && pip install -e . && cd .. ``` -###### settings.py - -Everyone familiar with `Django` knows this file. It contains the application's settings. -Some of them need adjustment, so open it with `nano ipynbsrv/settings.py` and define those options: - -| Option | Value -|----------------------------|------------------------------------- -| SECRET_KEY | Some randomly generated characters -| DEBUG | Change to `False` -| TEMPLATE_DEBUG | Change to `False` -| ALLOWED_HOSTS | ['*'] -| DATABASES.default.PASSWORD | The `Postgres` password you took -| DATABASES.ldap.PASSWORD | The `LDAP` admin password you took -| TIME_ZONE | Your timezone (e.g. 
`Europe/Zurich`) -| DOCKER\_API\_VERSION | Get it with `docker version` -| DOCKER\_IFACE\_IP | The IP address of the `docker0` iface - -> Note: For `DOCKER_IFACE_IP` issue `ifconfig docker0 | grep inet\ addr:` on the dedicated node. - -All other values should be fine. - -Now that you have the `DOCKER_IFACE_IP`, open `/srv/ipynbsrv/_repo/lib/confs/nginx/ipynbsrv.conf` and replace: +> Note: This chapter should be removed once the packages have been published. - proxy_pass http://172.17.42.1:$1; +##### 6.3.3 Making the application ready -with: - - proxy_pass http://"DOCKER_IFACE_IP":$1; - -###### manage.py - -`manage.py` is Django's utility script to perform setup and maintenance tasks. - -Use it to create migrations (if any) and populate/alter the database: +After having completed the above preparation steps, the next few commands will look familiar to everyone already having used Django in the past. They are all about initializing the Django application and should be run in the repository root. ```bash -python manage.py makemigrations -python manage.py migrate +$ python manage.py makemigrations +$ python manage.py migrate ``` -Because we are using `LESS` to produce `CSS` and `bower` to manage external dependencies, you need to compile the styles and install the deps (like `jQuery` etc.): +As we are using `LESS` to produce `CSS` and `bower` to manage external dependencies, you need to compile the styles and install the deps (like `jQuery` etc.): ```bash -cd ipynbsrv/wui/static/ -bower install # installs external dependencies -mkdir css -lessc less/main.less css/main.css # compile LESS to CSS -cd ~/www +$ cd ipynbsrv/web/static +$ bower install --allow-root # installs external dependencies +$ mkdir css +$ lessc less/main.less css/main.css # compile LESS to CSS +$ cd ../../.. ``` -> If `bower install` doesn't work, try forcing `git` to use HTTP protocol: `git config --global url.https://.insteadOf git://` +> If `bower install` doesn't work, try forcing `git` to use the HTTP protocol: `git config --global url.https://.insteadOf git://` The user guide must be generated as well: ```bash -cd /srv/ipynbsrv/_repo/docs/user-guide/ -mkdocs build --clean +$ cd docs/user-guide +$ mkdocs build --clean ``` Last but not least, finalize the whole setup by issueing: @@ -419,13 +320,6 @@ python manage.py createsuperuser which will create a local superuser account (the admin account). -Leave the container with: - -```bash -exit -exit -``` - -so the script continues. It will create a local image from the container and bootstrap a new instance using that one. As soon as it has completed, you're all done. +### 7. Configuring the Application -Congratulations! \ No newline at end of file +TODO: Login to admin, define variables (backends), add backends and servers and images. \ No newline at end of file diff --git a/docs/install/networking/openvswitch.md b/docs/install/networking/openvswitch.md new file mode 100644 index 0000000..439a927 --- /dev/null +++ b/docs/install/networking/openvswitch.md @@ -0,0 +1,74 @@ +# Open vSwitch + +> Guide to setup Open vSwitch as `ipynbsrv`'s internal networking solution. + +## Introduction + +As per the `ipynbsrv` networking requirements, every node must have its unique IPv4 address within a private/internal network that cannot be reached from the outside. With Open vSwitch, we can setup such a network between our nodes. 
It doesn't matter which reserved IPv4 network range you pick for that network, as long as it doesn't conflict with other existing networks (check with `ifconfig` if you are unsure about other networks in use). This guide assumes the `192.168.0.0/24` network was picked. + +## Installation + +Open vSwitch packages are available in all major Linux distributions, so the installation should be straight-forward. +On Debian based distributions you'll have to run: + +```bash +apt-get install -y openvswitch-switch openvswitch-ipsec +``` + +> If the nodes are protected by a firewall, make sure to open the ports `500, 1723 and 4500` as well as to allow the `esp` and `ah` IP protocols. + +## Setting up the interface + +Now that Open vSwitch is installed, we need to create an Open vSwitch bridge interface, which will act as the nodes' virtual switch. All of the following commands need to be executed on every node (if not stated otherwise). + + + +To create the interface, issue: + +```bash +$ ovs-vsctl add-br ovsbr0 +$ ovs-vsctl set bridge ovsbr0 stp_enable=true +``` + +To assign an IPv4 address from the picked range to the created `ovsbr0` interface, execute the following statements: + +```bash +$ ifconfig ovsbr0 up 192.168.0.1 netmask 255.255.255.0 +$ ifconfig ovsbr0 mtu 1420 +``` + +> `192.168.0.1` is the internal only IPv4 address of the current node. Make sure every node has another IP address. Usually the master node will have `x.x.x.1`. +> –––– +> `255.255.255.0` is the netmask of the private network. If you plan to deploy more than 254 nodes, pick a `/16` or `/8` range. +> –––– +> These commands are best placed in `/etc/rc.local` so they are executed on boot. Make sure to put them before `exit 0`. + +## Establishing connections between the nodes + +Open vSwitch is installed and running, but no connections between the nodes have been added yet. Don't worry, adding them is as simple as the installation was. + +Basically, the following command needs to be executed on the two nodes between which the connection should be established. Executing that command instructs Open vSwitch to create and establish a `GRE over IPSec` connection beween the two nodes: + +```bash +$ ovs-vsctl add-port ovsbr0 gre_master_slave1 -- set interface gre_master_slave1 type=ipsec_gre options:remote_ip=10.0.0.2 options:psk=ipynbsrv +``` + +> `gre_master_slave1` is the connection's name. It must be unique and the same on both nodes. +> –––– +> `10.0.0.2` is the IPv4 address under which the remote node can be reached. +> –––– +> `psk=ipynbsrv` is the password used to encrypt the connection. + +For a minimal setup, you have to establish one connection to the master node at least. A full-meshed network might however perform better, so you're encouraged to establish additional connections between other nodes as well. + +## Troubleshooting + +### 1. Connections are not established after a reboot + +We saw this quite often. The solution is to restart the Open vSwitch services on the nodes: + +```bash +$ service openvswitch-ipsec restart && service openvswitch-switch restart +``` + +> Other services connecting to remote nodes via the internal network might need a restart as well, as soon as the connections have been established. \ No newline at end of file diff --git a/ipynbsrv/admin/admin.py b/ipynbsrv/admin/admin.py index 5cfcdc5..4cfd7bc 100644 --- a/ipynbsrv/admin/admin.py +++ b/ipynbsrv/admin/admin.py @@ -322,7 +322,7 @@ def get_readonly_fields(self, request, obj=None): :inherit. 
""" if obj: - return ['backend_pk', 'command', 'name', 'protected_port', 'public_ports', 'owner'] + return ['backend_pk', 'command', 'protected_port', 'public_ports', 'owner'] return [] @@ -592,12 +592,33 @@ def get_fieldsets(self, request, obj=None): }) ] + def get_readonly_fields(self, request, obj=None): + """ + :inherit. + """ + if obj is not None and hasattr(obj, 'backend_user'): + return ['groups', 'is_staff', 'username'] + else: + return ['is_staff'] + def get_urls(self): + """ + TODO. + """ urls = super(UserAdmin, self).get_urls() my_urls = patterns('', (r'^import_users/$', self.import_users)) return my_urls + urls + def has_add_permission(self, request): + """ + :inherit. + """ + return False + def import_users(self, request): + """ + TODO. + """ # custom view which should return an HttpResponse try: # Todo: imports @@ -611,21 +632,6 @@ def import_users(self, request): self.message_user(request, "Operation failed.", messages.ERROR) return HttpResponseRedirect(reverse('admin:auth_user_changelist')) - def get_readonly_fields(self, request, obj=None): - """ - :inherit. - """ - if obj is not None and hasattr(obj, 'backend_user'): - return ['groups', 'is_staff', 'username'] - else: - return ['is_staff'] - - def has_add_permission(self, request): - """ - :inherit. - """ - return False - # register the model admins with the site admin_site = CoreAdminSite(name='ipynbsrv') diff --git a/ipynbsrv/core/auth/authentication_backends.py b/ipynbsrv/core/auth/authentication_backends.py index 6fe4266..68a78d0 100644 --- a/ipynbsrv/core/auth/authentication_backends.py +++ b/ipynbsrv/core/auth/authentication_backends.py @@ -2,7 +2,7 @@ from django.core.exceptions import PermissionDenied from ipynbsrv.contract.errors import AuthenticationError, ConnectionError, \ UserNotFoundError -from ipynbsrv.core.helpers import get_user_backend_connected +from ipynbsrv.core.helpers import get_internal_ldap_connected, get_user_backend_connected from ipynbsrv.core.models import BackendGroup, BackendUser, \ CollaborationGroup import logging @@ -35,11 +35,13 @@ def authenticate(self, username=None, password=None): return None # not allowed, Django only user try: + internal_ldap = get_internal_ldap_connected() user_backend = get_user_backend_connected() user_backend.auth_user(username, password) if user is not None: # existing user if not user.check_password(password): - user.set_password(password) + user.set_password(password) # XXX: not needed. should we leave it empty? + internal_ldap.set_user_password(username, password) user.save() else: # new user uid = BackendUser.generate_internal_uid() @@ -61,6 +63,7 @@ def authenticate(self, username=None, password=None): return None finally: try: + internal_ldap.disconnect() user_backend.disconnect() except: pass diff --git a/ipynbsrv/core/auth/checks.py b/ipynbsrv/core/auth/checks.py index 7123ab2..255e5c0 100644 --- a/ipynbsrv/core/auth/checks.py +++ b/ipynbsrv/core/auth/checks.py @@ -1,3 +1,13 @@ +from django.contrib.auth.models import User +from django.core.exceptions import ObjectDoesNotExist +from django.http.response import HttpResponse +from ipynbsrv.core.models import PortMapping + + +COOKIE_NAME = 'username' +URI_HEADER = 'HTTP_X_ORIGINAL_URI' + + def login_allowed(user): """ @user_passes_test decorator to check whether the user is allowed to access the application or not. 
@@ -8,3 +18,35 @@ def login_allowed(user): if user is None or user.get_username() is None: # AnonymousUser return False return hasattr(user, 'backend_user') # not super = internal only user + + +def workspace_auth_access(request): + """ + This view is called by Nginx to check either a user is authorized to + access a given workspace or not. + + The username can be obtained from the signed cookie 'username', + while the port/container needs to be extracted from the 'X-Original-URI' header. + + Response codes of 20x will allow the user to access the requested resource. + """ + if request.method == "GET": + username = request.get_signed_cookie(COOKIE_NAME, default=None) + if username: # ensure the signed cookie set at login is there + try: + user = User.objects.get(username=username) + uri = request.META.get(URI_HEADER) + if uri: # ensure the X- header is present. its set by Nginx + splits = uri.split('/') + if len(splits) >= 3: + base_url = splits[2] + parts = base_url.decode('hex').split(':') + internal_ip = parts[0] + port = parts[1] + mapping = PortMapping.objects.filter(external_port=port).filter(server__internal_ip=internal_ip) + if mapping.exists() and mapping.first().container.owner == user.backend_user: + return HttpResponse(status=200) + except ObjectDoesNotExist: + pass + + return HttpResponse(status=403) diff --git a/ipynbsrv/core/signals/backend_users.py b/ipynbsrv/core/signals/backend_users.py index 60c359c..f50f4f1 100644 --- a/ipynbsrv/core/signals/backend_users.py +++ b/ipynbsrv/core/signals/backend_users.py @@ -78,8 +78,6 @@ def create_public_directory(sender, user, **kwargs): storage_backend.set_dir_mode(public_dir, 0755) except StorageBackendError as ex: raise ex - else: - logger.warn("Public directory for user %s already exists." % user.django_user.get_username()) @receiver(backend_user_deleted) @@ -151,24 +149,6 @@ def remove_public_directory(sender, user, **kwargs): raise ex -@receiver(backend_user_modified) -def update_password_on_internal_ldap(sender, user, fields, **kwargs): - """ - Update the password on the internal LDAP server on change. - """ - if user is not None: - try: - internal_ldap = get_internal_ldap_connected() - internal_ldap.set_user_password(user.backend_pk, user.django_user.password) - except UserNotFoundError: - user.delete() # XXX: cleanup - finally: - try: - internal_ldap.disconnect() - except: - pass - - @receiver(post_delete, sender=BackendUser) def post_delete_handler(sender, instance, **kwargs): """ diff --git a/ipynbsrv/settings.py b/ipynbsrv/settings.py index b33ecad..7fa6c35 100644 --- a/ipynbsrv/settings.py +++ b/ipynbsrv/settings.py @@ -121,7 +121,7 @@ # LEGACY SETTINGS FROM OLD IPYNBSRV. NEEDS REFACTORING LOGIN_URL = '/accounts/login' LOGIN_REDIRECT_URL = '/accounts/flag' - +PUBLIC_URL = '/public/' VARS_MODULE_PATH = 'ipynbsrv.core.conf' diff --git a/ipynbsrv/web/settings.py b/ipynbsrv/web/settings.py index ef89ab5..158d7ca 100644 --- a/ipynbsrv/web/settings.py +++ b/ipynbsrv/web/settings.py @@ -1,22 +1,22 @@ -''' +""" Setting storing the name of the cookie that is used to check access via the reverse proxy to containers. -''' +""" AUTH_COOKIE_NAME = 'username' -''' +""" Setting storing the name of the header that is indicating the requested URI, the reverse proxy is adding to sub-requests. -''' +""" PROXY_URI_HEADER = 'HTTP_X_ORIGINAL_URI' -''' +""" Setting storing the URL under which the application\'s documentation can be found. 
-''' +""" URL_DOCS = '/docs/' -''' +""" Setting storing the URL under which the user publications can be found. -''' +""" URL_PUBLIC = '/public/' diff --git a/ipynbsrv/web/templates/web/snippets/navbar.html b/ipynbsrv/web/templates/web/snippets/navbar.html index 154cb4f..f0de9c8 100644 --- a/ipynbsrv/web/templates/web/snippets/navbar.html +++ b/ipynbsrv/web/templates/web/snippets/navbar.html @@ -22,32 +22,25 @@
diff --git a/ipynbsrv/web/templates/web/snippets/navbar.html b/ipynbsrv/web/templates/web/snippets/navbar.html
index 154cb4f..f0de9c8 100644
--- a/ipynbsrv/web/templates/web/snippets/navbar.html
+++ b/ipynbsrv/web/templates/web/snippets/navbar.html
@@ -22,32 +22,25 @@
 Notifications {% if new_notifications_count >= 0 %}{{ new_notifications_count }}{% endif %}
-
\ No newline at end of file
+
diff --git a/ipynbsrv/web/templates/web/user/login.html b/ipynbsrv/web/templates/web/user/login.html
index 6af4881..1ffa528 100644
--- a/ipynbsrv/web/templates/web/user/login.html
+++ b/ipynbsrv/web/templates/web/user/login.html
@@ -39,7 +39,7 @@
-    An overview of all published notebooks is available under the public listing section here.
+    An overview of all published notebooks is available under the public listing section here.
diff --git a/ipynbsrv/web/urls.py b/ipynbsrv/web/urls.py
index 9311d14..fd13ebc 100644
--- a/ipynbsrv/web/urls.py
+++ b/ipynbsrv/web/urls.py
@@ -54,7 +54,7 @@
     url(r'^notifications/mark_as_read$', 'ipynbsrv.web.views.notifications.mark_as_read', name='notification_mark_as_read'),
 
     # internal
-    url(r'^_workspace_auth_check$', 'ipynbsrv.web.views.common.workspace_auth_access'),
+    url(r'^_workspace_auth_check$', 'ipynbsrv.core.auth.checks.workspace_auth_access'),
     url(r'^error/404$', 'ipynbsrv.web.views.system.error_404'),
     url(r'^error/500$', 'ipynbsrv.web.views.system.error_500'),
diff --git a/ipynbsrv/web/views/accounts.py b/ipynbsrv/web/views/accounts.py
index 7c14103..014bca0 100644
--- a/ipynbsrv/web/views/accounts.py
+++ b/ipynbsrv/web/views/accounts.py
@@ -1,20 +1,19 @@
 from django.contrib.auth.decorators import user_passes_test
 from django.core.urlresolvers import reverse
 from django.http import HttpResponseRedirect
-from django.shortcuts import redirect
 from ipynbsrv.core.auth.checks import login_allowed
 from ipynbsrv.web import settings
 
 
 @user_passes_test(login_allowed)
 def create_cookie(request):
-    '''
+    """
     The flag view is called after a successful user login.
 
     Since we use Nginx, which does a subrequest to check authorization
     of workspace access, we need a way to identify the user there.
     So we bypass here to create a signed cookie for that purpose.
-    '''
+    """
     response = HttpResponseRedirect(reverse('dashboard'))
     response.set_signed_cookie(settings.AUTH_COOKIE_NAME, request.user.username, httponly=True)
     return response
@@ -22,12 +21,12 @@ def create_cookie(request):
 
 @user_passes_test(login_allowed)
 def remove_cookie(request):
-    '''
+    """
     The unflag view is called before a user is actually logged out.
 
     We use that chance to remove the cookie we created after his login
     which authorizes him to access his workspaces.
-    '''
+    """
     response = HttpResponseRedirect(reverse('accounts_logout'))
     response.delete_cookie(settings.AUTH_COOKIE_NAME)
     return response
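For reference, a minimal sketch of the cookie round trip these two views implement together with `workspace_auth_access`, written directly against Django's signed-cookie API; apart from `AUTH_COOKIE_NAME` (from `ipynbsrv.web.settings`), the view names and redirect target below are illustrative only:

```python
# Illustrative only: the signed-cookie round trip between login ("flag") and the proxy auth check.
from django.http import HttpResponse, HttpResponseRedirect
from ipynbsrv.web import settings


def flag_example(request):
    # After a successful login: remember the username in a cookie that Django signs for us.
    response = HttpResponseRedirect('/')
    response.set_signed_cookie(settings.AUTH_COOKIE_NAME, request.user.username, httponly=True)
    return response


def auth_check_example(request):
    # On every proxied workspace request: read the cookie back. get_signed_cookie()
    # returns the default (None) if the cookie is missing or has been tampered with.
    username = request.get_signed_cookie(settings.AUTH_COOKIE_NAME, default=None)
    return HttpResponse(status=200 if username else 403)
```

The signature is what makes the scheme usable for authorization: a user can read the cookie value, but cannot forge one for another username without the server's `SECRET_KEY`.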
diff --git a/ipynbsrv/web/views/common.py b/ipynbsrv/web/views/common.py
index f034be8..ce43229 100644
--- a/ipynbsrv/web/views/common.py
+++ b/ipynbsrv/web/views/common.py
@@ -1,11 +1,6 @@
 from django.contrib.auth.decorators import user_passes_test
-from django.contrib.auth.models import User
-from django.core.exceptions import ObjectDoesNotExist
-from django.http.response import HttpResponse
 from django.shortcuts import render
 from ipynbsrv.core.auth.checks import login_allowed
-from ipynbsrv.core.models import Container
-from ipynbsrv.web import settings
 from ipynbsrv.web.api_client_proxy import get_httpclient_instance
 
 
@@ -24,36 +19,3 @@ def dashboard(request):
         'containers': containers,
         'new_notifications_count': new_notifications_count
     })
-
-
-def workspace_auth_access(request):
-    '''
-    This view is called by Nginx to check either a user is authorized to
-    access a given workspace or not.
-
-    The username can be obtained from the signed cookie 'username',
-    while the port/container needs to be extracted from the 'X-Original-URI' header.
-
-    Response codes of 20x will allow the user to access the requested resource.
-    '''
-
-    """
-    Todo: rewrite
-    """
-#    if request.method == "GET":
-#        username = request.get_signed_cookie(settings.AUTH_COOKIE_NAME, default=None)
-#        if username:  # ensure the signed cookie set at login is there
-#            try:
-#                user = User.objects.get(username=username)
-#                uri = request.META.get(settings.PROXY_URI_HEADER)
-#                if uri:  # ensure the X- header is present. its set by Nginx
-#                    splits = uri.split('/')
-#                    if len(splits) >= 3:
-#                        port = splits[2]
-#                        mapping = PortMapping.objects.filter(external=port)
-#                        if mapping.exists() and mapping.first().container.owner == user:
-#                            return HttpResponse(status=200)
-#            except ObjectDoesNotExist:
-#                pass
-#
-    return HttpResponse(status=403)
diff --git a/lib/confs/nginx/ipynbsrv.conf b/lib/confs/nginx/ipynbsrv.conf
index d8c3a82..45e1616 100644
--- a/lib/confs/nginx/ipynbsrv.conf
+++ b/lib/confs/nginx/ipynbsrv.conf
@@ -32,7 +32,7 @@ server {
         proxy_set_header X-Original-URI $request_uri;
         proxy_pass_request_body off;
 
-        proxy_pass http://$host/_workspace_auth_check;
+        proxy_pass http://127.0.0.1/_workspace_auth_check;
     }
 
     # location for documentation
@@ -59,13 +59,19 @@ server {
     location ~* /ct/([^\/]+)/(.*) {
         # authorization
         # ensure only container's owner can access it
-        #satisfy all;
-        #auth_request /auth;
+        satisfy all;
+        auth_request /auth;
 
         # get the IP and port from encoded part
         set $decoded_backend '';
         set_decode_hex $decoded_backend $1;
 
+        # use the Django error pages
+        # forbidden = 404 so the user doesn't know there is a container
+        # 50x grouped to 500
+        error_page 403 404 /error/404;
+        error_page 500 502 503 504 /error/500;
+
         # needed for websockets connections
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
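To tie the proxy side back to the auth view: the `$1` captured by `location ~* /ct/([^\/]+)/(.*)` is the same hex-encoded `internal_ip:port` segment that `workspace_auth_access` decodes, and `set_decode_hex` recovers the literal address into `$decoded_backend` (its later use lies outside the hunks shown here). A tiny Python 2 sketch of how such a path segment would be produced; the address is made up, and how the application actually builds these URLs is not part of this diff:

```python
# Python 2 sketch: producing the /ct/<hex>/ segment that set_decode_hex reverses (made-up address).
backend = '192.168.0.2:8888'             # server-internal IP and externally mapped port
encoded = backend.encode('hex')          # -> '3139322e3136382e302e323a38383838'
workspace_path = '/ct/%s/' % encoded     # what the location block above matches
print workspace_path
```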
echo "------------------------------------------------------------" sleep 2 -docker commit $CT_NAME ipynbsrv/wui:init +docker commit $CT_NAME ipynbsrv/wui:install docker rm $CT_NAME echo "------------------------------------------------------------" @@ -46,6 +46,5 @@ docker run \ --name="${CT_NAME}" \ -p 80:80 \ --link ipynbsrv_ldap:ipynbsrv_ldap --link ipynbsrv_postgresql:ipynbsrv_postgresql \ - -v /srv/ipynbsrv/homes:/srv/ipynbsrv/data/homes -v /srv/ipynbsrv/public:/srv/ipynbsrv/data/public \ - -v /srv/ipynbsrv/shares:/srv/ipynbsrv/data/shares \ + -v /srv/ipynbsrv/data:/srv/ipynbsrv/data \ ipynbsrv/wui:init $CMD diff --git a/lib/scripts/setup_docker_host.sh b/lib/scripts/setup_docker_host.sh index 2f8a839..8a370d6 100755 --- a/lib/scripts/setup_docker_host.sh +++ b/lib/scripts/setup_docker_host.sh @@ -45,9 +45,6 @@ if [ $PS == "deb" ]; then # autostart on boot update-rc.d docker defaults update-rc.d docker enable - # enable memory and swap accounting (not used yet, --memory=limit) - sed -i 's/GRUB_CMDLINE_LINUX="find_preseed=\/preseed.cfg noprompt"/GRUB_CMDLINE_LINUX="find_preseed=\/preseed.cfg noprompt cgroup_enable=memory swapaccount=1"/' /etc/default/grub - update-grub else $INSTALL docker systemctl start docker.service @@ -60,9 +57,6 @@ curl --fail -L -O https://github.com/phusion/baseimage-docker/archive/master.tar tar xzf master.tar.gz ./baseimage-docker-master/install-tools.sh rm -rf master.tar.gz baseimage-docker-master -# pull the base image for our templates -docker pull phusion/baseimage:0.9.15 -docker pull phusion/baseimage:0.9.16 # create the data directories DATA="/srv/ipynbsrv"