There are several ways to access the servers:
- eduroam and LAN (all servers at TU Munich are accessible from within the TUM network)
- VPN provided by RBG:
  - this option only works for ls1 employees
  - this VPN also gives access to the management network (e.g. for IPMI access)
  - use the il1 profile from here
- VPN provided by LRZ:
  - this VPN is also accessible to students, see this link
- Via SSH jump host:
  - We have one proxy jump host that holds all SSH keys added to the NixOS configuration, i.e. in modules/users.nix.
  - Reproducible example (see also the `~/.ssh/config` sketch below):

    ```
    SSH_AUTH_SOCK= ssh -v -F /dev/null -i <path/to/privkey> -oProxyCommand="ssh [email protected] -i <path/to/privkey> -W %h:%p" <yourusername>@graham.dse.in.tum.de
    ```

  - Keys are uploaded via the machine bill whenever the NixOS configuration is updated.
  - You can generate an SSH config file for all TUM hosts with this script, providing your username as an argument.
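For day-to-day use, OpenSSH's `ProxyJump` directive is more convenient than the raw `ProxyCommand` above. A minimal `~/.ssh/config` sketch; the jump-host user/hostname and the key paths are placeholders for the (redacted) values in the example above:

```
Host dse-jump
    HostName <jumphost>            # the proxy jump host from the example above
    User <jumpuser>
    IdentityFile <path/to/privkey>

Host graham.dse.in.tum.de
    User <yourusername>
    IdentityFile <path/to/privkey>
    ProxyJump dse-jump
```

Afterwards a plain `ssh graham.dse.in.tum.de` tunnels through the jump host automatically.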
All servers at TUM have public IPv6/IPv4 addresses and DNS records following the format `$hostname.dse.in.tum.de` for the machine itself and `$hostname-mgmt.dse.in.tum.de` for the IPMI/BMC interface, e.g. bill has the addresses `bill.dse.in.tum.de` and `bill-mgmt.dse.in.tum.de`.
- Expansion cards and slots
- Network graph (see also networking notes in "Expansion cards and slots")
Our EPYC servers are shared machines on which many users usually work concurrently. These servers (or individual devices in them) are sometimes used exclusively by a single user to conduct benchmarks.
- single-socket Xeon Gold 5317
- dual-socket Xeon Gold 6326, GPU
Note: these servers are equipped with Persistent Memory (PM). For information on how to set up the PM in App-Direct mode, please see here.
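The linked page is authoritative; as a rough orientation, App-Direct provisioning typically follows the usual ipmctl/ndctl flow sketched below. The device name `/dev/pmem0` and the mountpoint are assumptions and vary per machine:

```
sudo ipmctl create -goal PersistentMemoryType=AppDirect   # set the goal, then reboot
sudo ndctl create-namespace --mode=fsdax                  # after reboot: creates e.g. /dev/pmem0
sudo mkfs.ext4 /dev/pmem0                                 # device name is an assumption
sudo mkdir -p /mnt/pmem
sudo mount -o dax /dev/pmem0 /mnt/pmem                    # DAX mount for direct access
```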
These serve as GitHub Actions runners for the Systemprogramming and Cloud Systems labs.
Each of these machines is equipped with an Alveo U50 FPGA card. These servers are manually managed by @atsushikoshiba. They run Ubuntu, which means that accounts/SSH keys added to this repo won't appear on them. These machines are also not backed up.
We have a shared NFS-based `/home` mounted. The NFS for `/home` is backed by an NVMe disk on `nardole` and is limited to 1 TB. If you need fast local disk access, use `/scratch/$YOURUSER`; however, unlike `/home` and `/share`, this directory is not included in the backup. If you want to share larger datasets between machines, use `/share`, which is backed by two hard disks (15 TB capacity).
Both NFS exports stored on `nardole` are also replicated to `bill` every 15 minutes using ZFS replication based on syncoid. In case of hardware problems with `nardole`, `bill` can take over serving the NFS.
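Conceptually, each replication run boils down to a syncoid invocation like the following; the pool/dataset names are assumptions, not the actual layout on `nardole`:

```
# Incremental ZFS send/receive driven by syncoid; --no-sync-snap reuses the
# existing 15-minute auto-snapshots instead of creating new ones.
syncoid --no-sync-snap tank/export/home root@bill:tank/export/home
syncoid --no-sync-snap tank/export/share root@bill:tank/export/share
```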
Our NFS server allows connections from the `2a09:80c0:102::/64` network. Add the following line to `/etc/hosts`:

```
2a09:80c0:102::f000:0 nfs
```

And the following lines to `/etc/fstab` to mount a shared `/home` and `/share`:

```
nfs:/export/home /home nfs4 nofail,timeo=14 0 2
nfs:/export/share /share nfs4 nofail,timeo=14 0 2
```
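A quick way to activate and verify the mounts without rebooting:

```
sudo mount /home && sudo mount /share
df -h /home /share    # should list nfs:/export/home and nfs:/export/share
```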
ZFS is used on all machines whenever possible. We enable automatic snapshots of the filesystem every 15 minutes. Snapshots can be accessed by entering the `.zfs` directory of a ZFS dataset mountpoint.
- for NFS-mounted directories, snapshots are on the NFS master node (`nardole`?, `/export/home/.zfs` or `/export/share/.zfs`)
- for local ZFS datasets (`zfs list`), snapshots are at `/.zfs`, `/home/.zfs`, ...
- note that `.zfs` is hidden from `ls`; see the recovery example below
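For example, to recover an accidentally deleted file from one of the 15-minute snapshots (snapshot and file names are placeholders):

```
ls /home/.zfs/snapshot/                              # list available snapshots
cp /home/.zfs/snapshot/<snapname>/$USER/<file> ~/    # copy the old version back
```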
Furthermore, `/share` and `/home` are backed up daily to the RBG storage using borgbackup.
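If you need something older than the ZFS snapshots, the borg repository can be inspected with standard borgbackup commands. The repository URL is not documented here, so it is shown as a placeholder; ask an admin for the real location:

```
borg list <repo-url>                                   # list the daily archives
borg extract <repo-url>::<archive> home/<user>/<file>  # restore a single file
```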
Our chair currently has three networks:
- `il01_16` for the machines:
  - order 10 Gbit/s SFP+ connectors for fiber!
  - IPv4: 131.159.102.0/24
  - IPv6: 2a09:80c0:102::/64
- `il01_15` for management:
  - usually 1 Gbit/s RJ-45
  - IPv4: 172.24.90.0/24
- L3 Switch "Craig" (`craig-mgmt.dse.in.tum.de`):
  - 6x 100 Gbit/s QSFP
  - many 10 Gbit/s SFP+
  - IP: to be configured
  - VLAN example config (Layer 2 -> Static VLAN Config):
    - VLAN ID 1: untagged ports Fx0/1-48, Cx0/1-2, Cx0/4; forbidden ports Cx0/3, Cx0/5
    - VLAN ID 2 (name: vlan2): untagged ports Cx0/3, Cx0/5; forbidden ports Fx0/1-48, Cx0/1-2, Cx0/4
To add a new machine, send the MAC addresses of your host interface and your IPMI/management interface to [email protected]. If the RBG asks which networks to connect your machine to, tell them `il01_16` for the machine and `il01_15` for IPMI/BMC.
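To find those MAC addresses (the `ipmitool` channel number below is the common default and may differ on your board):

```
ip -br link show                                    # host interface MACs
sudo ipmitool lan print 1 | grep -i 'MAC Address'   # BMC/IPMI MAC, channel 1
```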
A graph of how the servers are connected right now can be found here.
- amy
- bill
- clara
- doctor
- donna
- martha
- rose