====== Server Inventory ======

This page lists all physical hosts and virtual machines in ABI's infrastructure.

===== Physical Hosts =====

All physical hosts run **FreeBSD**.

^ Hostname ^ Alias(es) ^ Role ^ OS ^ CPU ^ Sockets x Cores ^ RAM ^ Storage ^ Notes ^
| geonosis | nas0, genomic | HPC hypervisor + NAS | FreeBSD | AMD EPYC 7702 | 2 x 64 (128 cores / 256 threads) | *TODO* | ZFS (genomic data, user workspaces, VM images) | Runs all HPC bhyve VMs |
| mustafar | nas1 | Primary NAS | FreeBSD | *TODO* | *TODO* | *TODO* | ZFS (home, proj, db) | Serves NFS to all nodes |
| bane | -- | IT services | FreeBSD | *TODO* | *TODO* | *TODO* | *TODO* | Jails + Devuan VM for slurmctld |
| hoth | bak1 | Backup server | FreeBSD | *TODO* | *TODO* | *TODO* | ZFS (backup target) | zelta-based ZFS send/recv |

----

===== Virtual Machines =====

==== HPC VMs (on geonosis, bhyve) ====

^ VM ^ vCPUs ^ RAM ^ OS ^ Slurm Partition ^ State ^ Purpose ^
| ssh-01 | 2 | 4G | *TODO* | N/A | Running | Login node (''ssh.abi.am'') |
| ssh-02 | 4 | 8G | *TODO* | N/A | Running | Login node (''ssh.abi.am'') |
| thin-01 | 64 | 384G | *TODO* | compute (default), thin | Running | General computation |
| thin-02 | 64 | 384G | *TODO* | compute (default), thin | Running | General computation |
| thick-01 | 64 | 768G | *TODO* | compute (default), thick | Locked | High-memory computation |
| dl-01 | 2 | 8G | *TODO* | download | Stopped | Data downloads |
| dl-02 | 2 | 8G | *TODO* | download | Stopped | Data downloads |
| rshiny0 | 4 | 16G | *TODO* | N/A | Running | R Shiny application server |
| rshiny1 | 4 | 16G | *TODO* | N/A | Running | R Shiny application server |

==== Infrastructure VM (on bane) ====

^ VM ^ vCPUs ^ RAM ^ OS ^ State ^ Purpose ^
| slurmctld | *TODO* | *TODO* | Devuan | Running | Slurm controller daemon |

==== Legacy / Stopped VMs (on geonosis) ====

These are not in active use. IT will document them as needed.
^ VM ^ vCPUs ^ RAM ^ State ^ Notes ^
| comp0 | 32 | 32G | Stopped | Legacy compute node |
| dna0 | 8 | 32G | Stopped | *TODO* |
| mitte-dev-01 | 2 | 4G | Stopped | Development / testing |

----

===== Physical Host Details =====

==== geonosis (nas0 / genomic) ====

^ Property ^ Value ^
| Hostname | geonosis |
| Aliases | nas0, genomic |
| FQDN | geonosis.local.abi.am |
| Role | HPC hypervisor + NAS |
| OS | FreeBSD *TODO: version* |
| CPU model | AMD EPYC 7702 64-Core Processor |
| Sockets | 2 (dual socket) |
| Physical cores | 128 (256 threads) |
| RAM (physical) | *TODO* |
| Storage | ZFS pool(s) for genomic data, user workspaces, and VM images |
| Virtualization | bhyve -- hosts all HPC VMs (see table above) |
| ZFS datasets served | /mnt/nas0/user, /mnt/nas0/proj |
| IP address | *TODO* |
| Location | *TODO: rack, room* |

**VM resource allocation on geonosis:**

  * Total vCPU allocated: 2+4+64+64+64+2+2+4+4 = 210 vCPUs (+ legacy VMs when running)
  * Total vRAM allocated: 4+8+384+384+768+8+8+16+16 = 1596G (+ legacy VMs when running)
  * Physical CPU: 128 cores (256 threads) from 2x AMD EPYC 7702. Active vCPU allocation is 210 of 256 threads (~82% commit when all VMs run).
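As a sanity check, the totals above can be recomputed from the per-VM figures in the HPC VM table with a quick shell sketch (the numbers are copied from that table; nothing here queries the hypervisor itself):

```shell
# Per-VM allocations, in table order:
# ssh-01 ssh-02 thin-01 thin-02 thick-01 dl-01 dl-02 rshiny0 rshiny1
vcpus="2 4 64 64 64 2 2 4 4"
vram_g="4 8 384 384 768 8 8 16 16"

total_vcpu=0
for v in $vcpus; do total_vcpu=$((total_vcpu + v)); done

total_vram=0
for g in $vram_g; do total_vram=$((total_vram + g)); done

threads=256  # 2x EPYC 7702: 2 sockets x 64 cores x 2 threads/core
commit_pct=$((100 * total_vcpu / threads))

echo "vCPU allocated: ${total_vcpu} / ${threads} threads (${commit_pct}%)"
echo "vRAM allocated: ${total_vram}G"
```

This prints 210 vCPUs (82% of 256 threads) and 1596G, matching the figures above. Note that thick-01 is locked and dl-01/dl-02 are stopped, so the commit ratio of the VMs actually running at any given time is lower.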
==== mustafar (nas1) ====

^ Property ^ Value ^
| Hostname | mustafar |
| Alias | nas1 |
| FQDN | *TODO* |
| Role | Primary NAS |
| OS | FreeBSD *TODO: version* |
| CPU model | *TODO* |
| CPU cores | *TODO* |
| RAM | *TODO* |
| Storage technology | ZFS |
| ZFS pool | znas1 |
| ZFS datasets | abi/home, abi/user, abi/proj, abi/collections/db |
| NFS exports | /mnt/home (~32TB), /mnt/nas1/proj (~32TB), /mnt/nas1/db (~32TB) |
| Compression | On (transparent, ZFS-level) |
| IP address | *TODO* |
| Location | *TODO: rack, room* |

==== bane ====

^ Property ^ Value ^
| Hostname | bane |
| FQDN | *TODO* |
| Role | IT services server |
| OS | FreeBSD *TODO: version* |
| CPU model | *TODO* |
| CPU cores | *TODO* |
| RAM | *TODO* |
| IP address | *TODO* |
| Location | *TODO: rack, room* |

**Jails on bane** (managed via ''jailer''):

^ Jail ^ Hostname ^ IPv4 ^ State ^ Service ^
| www-01 | www-01.abi.am | 37.26.174.181/28 | Active | nginx / web server (public-facing) |
| git-01 | git-01.abi.am | 37.26.174.182/28 | Active | Forgejo (Git hosting, public-facing) |
| mx-01 | mx-01.abi.am | 37.26.174.190/28 | Active | Mail server (public-facing) |
| dns-01 | dns-01.local.abi.am | 172.20.42.53/24, 172.20.200.53/24 | Active | DNS |
| dhcp-01 | dhcp-01.local.abi.am | 172.20.42.67/24, 172.20.200.67/24 | Active | DHCP |
| ldap-01 | ldap-01.local.abi.am | 172.20.42.38/24 | Active | LDAP (slapd). See [[infra:ldap|LDAP]] |
| psql-01 | psql-01.local.abi.am | 172.20.200.34/24 | Active | PostgreSQL |
| mysql-01 | mysql-01.local.abi.am | 172.20.200.36/24 | Active | MySQL |
| adg-01 | adg-01.local.abi.am | 172.20.200.55/24 | Active | AdGuard (DNS filtering) |
| nms-01 | nms-01.local.abi.am | 172.20.200.161/24, 172.20.42.161/24 | Active | Network monitoring |
| unifi-01 | unifi-01.local.abi.am | 172.20.200.99/24 | Active | UniFi controller (WiFi management) |
| wiki-01 | wiki-01.local.abi.am | 172.20.200.77/24 | Active | Wiki (DokuWiki) |
| aaa-01 | -- | -- | Stopped | *TODO* |
| caps-log-01 | -- | -- | Stopped | *TODO* |
| dash-01 | -- | -- | Stopped | *TODO* |
| garage-01 | -- | -- | Stopped | *TODO* |
| log-01 | -- | -- | Stopped | *TODO* |
| nb-01 | -- | -- | Stopped | *TODO* |
| pr-01 | -- | -- | Stopped | *TODO* |

**VM on bane:**

^ VM ^ OS ^ Purpose ^
| slurmctld | Devuan | Slurm controller daemon (slurmctld) |

==== hoth (bak1) ====

^ Property ^ Value ^
| Hostname | hoth |
| Alias | bak1 |
| FQDN | *TODO* |
| Role | Backup server |
| OS | FreeBSD *TODO: version* |
| CPU model | *TODO* |
| CPU cores | *TODO* |
| RAM | *TODO* |
| Storage technology | ZFS |
| Backup tool | [[https://zelta.space|zelta]] (ZFS send/recv) |
| What is backed up | User home directories, selected project directories |
| IP address | *TODO* |
| Location | *TODO: rack, room* |

----

===== VM Details =====

==== ssh-01 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 2 |
| RAM | 4G |
| Public hostname | ''ssh.abi.am'' (shared with ssh-02) |
| Purpose | Login node -- SSH entry point for users |
| Slurm | Not a Slurm node |
| Notes | Internet-facing |

==== ssh-02 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 4 |
| RAM | 8G |
| Public hostname | ''ssh.abi.am'' (shared with ssh-01) |
| Purpose | Login node -- SSH entry point for users |
| Slurm | Not a Slurm node |
| Notes | Internet-facing. More resources than ssh-01. |
==== thin-01 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 64 |
| RAM | 384G |
| Slurm partitions | compute (default), thin |
| Purpose | General-purpose computation |

==== thin-02 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 64 |
| RAM | 384G |
| Slurm partitions | compute (default), thin |
| Purpose | General-purpose computation |

==== thick-01 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 64 |
| RAM | 768G |
| Slurm partitions | compute (default), thick |
| Purpose | High-memory computation (genome assembly, pilon, etc.) |
| Notes | Use ''%%--partition=thick%%'' when you need >384G RAM |

==== dl-01 / dl-02 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 2 each |
| RAM | 8G each |
| Slurm partition | download |
| Purpose | Data download tasks only (not for computation) |

==== rshiny0 / rshiny1 ====

^ Property ^ Value ^
| Runs on | geonosis (bhyve) |
| OS | *TODO* |
| vCPUs | 4 each |
| RAM | 16G each |
| VNC | 127.0.0.1:5900 / :5901 |
| Purpose | R Shiny application servers |
| Slurm | Not Slurm nodes |

----

===== Hardware Procurement Log =====

*TODO: Optional. Track when hardware was purchased, warranty info, etc.*

^ Host ^ Purchased ^ Warranty Until ^ Vendor ^ Notes ^
| geonosis | *TODO* | *TODO* | *TODO* | *TODO* |
| mustafar | *TODO* | *TODO* | *TODO* | *TODO* |
| bane | *TODO* | *TODO* | *TODO* | *TODO* |
| hoth | *TODO* | *TODO* | *TODO* | *TODO* |

----

===== Maintenance History =====

*TODO: Optional. Log significant maintenance events.*

^ Date ^ Host/VM ^ Action ^ Who ^
| *TODO* | *TODO* | *TODO* | *TODO* |

----

===== See Also =====

  * [[infra:overview|Infrastructure Overview]]
  * [[infra:network|Network]]
  * [[infra:ldap|LDAP Configuration]]
  * [[infra:monitoring|Monitoring]]