SilverSurfer

A small web and file server I built together with my friend Ada Reinthal for Salwa Foundation.

Thursday, 02.06.22

14:17, Lukas

Note: these notes are pretty incomplete, because Ada did everything and I'm just trying to understand lol

Setting up everything with Ada today. Since I forgot to include a hole for the power button in the case I built, we are setting the machine to turn on automatically when plugged in. That way I can remove the button altogether. We do this in the BIOS: Advanced > Embedded Controller > Power Fail Resume Type, set to Always On.

The wifi bridge works perfectly. It's not as fast as built-in wifi, but it works out of the box with a switch, which is great.

We are going to set up the server from scratch today, running Alpine Linux. On this we will install Visual Studio Code, so we have a comfortable environment to work in, then a VPN/Docker network, and something like Grav. Ada will guide me through it because I only understand these things in theory lol.

We start by installing all the services we want with Docker on Ubuntu, save the compose files, and then try to install everything with (more or less) one command on Alpine Linux.

15:09, Lukas

We are going for an Alpine data install. That means there is a directory (/var) which is persistent: it doesn't just live in RAM or get mounted read-only like in a plain diskless install. All the Docker containers will use this directory to store their important files. This way, even if the power goes out, the files are saved.
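
To make this concrete, here is a rough sketch of how we plan to use it; the folder names are placeholders, and the volume lines are the kind of thing that goes into the compose files later.

#keep all container data on the persistent /var partition (folder names are placeholders)
mkdir -p /var/appdata/grav /var/appdata/vscode /var/appdata/cells
#each docker compose file then bind-mounts its data into one of these folders, e.g.
#  volumes:
#    - /var/appdata/grav:/config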

15:50, Lukas

Ada uncommented the apk community repository in /etc/apk/repositories, which lets us install more experimental apps, for example things developed by the community, and added tags to some of the repositories.
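
For reference, the file ends up looking something like this; the release version and the package name are just examples.

#/etc/apk/repositories after the edit (v3.15 stands in for whatever release we run)
#  http://dl-cdn.alpinelinux.org/alpine/v3.15/main
#  http://dl-cdn.alpinelinux.org/alpine/v3.15/community
#  @edge http://dl-cdn.alpinelinux.org/alpine/edge/community
apk update
#a tagged repository lets us pull individual packages from it explicitly
apk add some-package@edge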

Things we added to the fresh Alpine install

17:24, Lukas

Alpine Linux is set up. It was a bit of a fight lol. We start by installing Docker and creating the Docker network "theStack". We added this network to all the docker compose files; all the containers in it can communicate with each other using their host names.

#install docker
apk add docker
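#save the system changes so they survive a reboot (needed on a data/diskless install)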
lbu commit
#enable controlling docker from non-root user account
adduser $USER docker
#start docker on boot, and start it now
rc-update add docker
rc-service docker start
#create the network!
docker network create theStack
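
Each compose file then joins that shared network instead of creating its own. Roughly, it looks like the fragment below; the exact syntax depends on the compose file version, so treat it as a sketch.

#in every docker-compose file, reference the network as external, roughly:
#  networks:
#    default:
#      external:
#        name: theStack
#afterwards we can check which containers ended up in the network with
docker network inspect theStack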

Once that's done, we continue with docker compose. This sets up all the apps we configured in the compose files before. For now we set up nginx (as a reverse proxy), Visual Studio Code (which gives us a super nice way to modify the system afterwards), Pydio Cells (as a potential Nextcloud replacement) and Grav (to host a website). We navigate to the directory with the docker-compose file and enter:

docker-compose up -d
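
A quick way to check that everything actually came up; the service name here is just an example of whatever we called it in the compose file.

#list the services and their state
docker-compose ps
#follow one service's logs if something looks off
docker-compose logs -f grav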

We then set up nginx. It runs on port 81. I set up a user account, using

(redacted)

I also added an access list entry for:

(redacted)

This basically lets me password protect something like VS Code (which has access to all the data on the machine more or less).
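
If the access list behaves like HTTP basic auth (which is my understanding), a rough way to test it from another machine would be something like this; the host name and credentials are placeholders.

#without credentials the proxy should turn the request away
curl -I http://code.silversurfer.lan
#with a user/password that matches the access list it should go through
curl -I -u username:password http://code.silversurfer.lan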

Using the token set up in the VS Code docker compose file, I can now access VS Code on my local network over port 3000!

Ada also installed WireGuard, and I created a ZeroTier network.
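
For my own reference, joining the ZeroTier network from a client machine goes roughly like this; the network ID is a placeholder and the new member still has to be authorized in the ZeroTier web interface.

#on the machine that should get access (the zerotier-one service has to be running)
zerotier-cli join <network-id>
#check that the join went through and an address was assigned
zerotier-cli listnetworks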

18:20, Lukas

We tried to do the port forwarding thing, but we can't SSH into the pub server lol. We will use the mokumkraakt server instead (which is much cooler anyway)...

01:27, Lukas

And then it all got really difficult. We never managed to SSH into that server, and we tried to set up the port forwarding ourselves, but it never worked. A very frustrating evening of adjusting several layers of proxy servers. We gave up for today. Argh.
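
For the record, the kind of thing we were going for is roughly a reverse SSH tunnel: SilverSurfer opens a connection out to a machine with a public address, and that machine forwards a port back through it. Host name and ports here are placeholders.

#run on SilverSurfer: requests hitting port 8080 on the public machine get tunnelled back to local port 80
ssh -N -R 8080:localhost:80 user@public-server.example.org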

Friday, 29.05.22

10:42, Lukas

Today I started trying to control a bunch of LEDs through the built-in Arduino controller. For this I followed this guide: https://udoo.org/docs-bolt/Arduino_Leonardo_compatible(ATmega32U4)/Getting_Started_with_Arduino_Leonardo.html

It's super simple: just download the files and install with the two scripts (one fixes a permissions issue). Then it works right away. One of the pins (13) has a built-in LED, and one of the example sketches can control it directly. This gives me hope lol.

12:15, Lukas

Configuring the NeoPixels seems easy enough, following this guide and translating to the pinout from the guide above: https://learn.adafruit.com/adafruit-neopixel-uberguide

It seems that I should be able to power around 28 LEDs quite comfortably (not too bright) from the 5V supply of the board itself, so without a separate power supply, which is great (the guide's rule of thumb is up to 60 mA per pixel at full white and more like 20 mA in typical use, so 28 LEDs stay well under an amp). There are some examples that come with the Adafruit NeoPixel Arduino library and they immediately work!

Friday, 29.04.22

Met with Ada and Leila today. We decided that for storage we will go with an HC4 and two drives, which will be configured as a NAS. Leila will have a crack at this, which is a good way for her to get used to the command line etc. I will buy them soon.

We spent the day doing some pretty basic Docker practice with her. Kinda wholesome day lol.

Friday, 22.04.22

15:57, Lukas

Today I had a really constructive meeting with Ada; we made a bunch of decisions about setting up this server.

The first question was whether to install apps in containers or "on metal". We have decided to run everything through Docker: this basically creates a small container for each app with its own dependencies. This might come with a small performance tax, but the computer is fast enough, one of the fastest SBCs you can get. In return it allows us to easily install and uninstall apps for testing, without worrying that any of their dependencies might conflict or whatever. This will make it much easier to let someone else take over the system in the end, since everything is more or less plug and play.

We will run everything in a virtual network (a Docker network) behind a reverse proxy, which is similar to what a web server does. This way we only have to expose ports 80 and 443 to the world and reroute everything from there. Normally you might run every service on its own machine, but we do it all on one.
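
Boiled down to commands, the idea looks roughly like this; in practice we will do it with compose files rather than docker run, and the image names are just from memory.

#services join the network but publish no ports of their own
docker network create theStack
docker run -d --name grav --network theStack lscr.io/linuxserver/grav
#only the reverse proxy publishes 80/443; its config forwards requests to e.g. http://grav inside the network
docker run -d --name proxy --network theStack -p 80:80 -p 443:443 nginx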

We have also decided that we will run everything on Alpine Linux. This is a really small OS that loads into RAM on boot and runs from there. It takes up something like 150MB of our 16GB of RAM, and it will be super fast.

We also tested a bit. Using Ventoy we created a bootable USB stick onto which we can just throw ISOs of Linux distros; when it's plugged in at boot, we can run them for testing (in the case of Alpine Linux this could even be enough, since it just gets loaded into memory anyway, so it doesn't really matter where it's installed from). We checked out Alpine a bit, and Ubuntu Studio, a Linux distribution tailored to creatives with a lot of open source apps preinstalled!

Then we started testing Docker. We installed Docker with apt, following the official instructions (we had to add a custom package repository to apt).
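
From memory, the official instructions boil down to roughly this; the key path and repository URL should be double-checked against the Docker docs.

#add Docker's signing key and package repository, then install
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
#the classic docker-compose command used in these notes comes from the Ubuntu repos
sudo apt-get install -y docker-compose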

Then it gets more exciting: using Docker is super easy. There is a great website, https://www.linuxserver.io/, run by a community that maintains containerized installs of a lot of different (cool) apps. Basically you make a compose file, which you can mostly copy-paste. It's a YAML file, for example:

---
version: "2.1"
services:
  openvscode-server:
    image: lscr.io/linuxserver/openvscode-server
    container_name: openvscode-server
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CONNECTION_TOKEN= #optional
      - CONNECTION_SECRET= #optional
      - SUDO_PASSWORD=password #optional
      - SUDO_PASSWORD_HASH= #optional
    volumes:
      - /path/to/appdata/config:/config
    ports:
      - 3000:3000
    restart: unless-stopped

In this you just replace a few of the variables depending on what you want, and docker does the rest. That's it. If we want to change something, we can just turn it off again, and everything else is unaffected. We installed Visual Studio Code like this. It runs on port 3000 now, and it basically replaces the terminal as our main tool to interact with the whole Docker installation. It's sandboxed, so it won't let us fuck with the system itself, but it can even start and stop Docker containers, bring up new compose files, etc.
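
In practice, turning a service on and off again is just this (run from the folder with its compose file):

#create and start the container(s) in the background
docker-compose up -d
#stop and remove them again; other containers are unaffected
docker-compose down
#pull a newer image and restart with it
docker-compose pull && docker-compose up -d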

This will eventually be exposed to the web, which is really cool: we can edit the configuration of the server through the graphical user interface of VS Code from anywhere in the world, using whatever computer we want. For this we also installed a VPN, WireGuard. I still have to figure out how exactly this works, but it will make sure that VS Code is not just openly exposed to the web.

Tuesday, 19.04.22

12:17, Lukas

I got the computer. I left for Rotterdam at 8AM and just got back, and all in all (with the electric car from yesterday) I spent around 60€ on transportation, but it's still worth it. The guy's house was super messy and the computer is really gross, but it works (I asked him to show me) and I picked up some hardware cleaning supplies on the way.

16:04, Lukas

I had to do some other things during the day at Sandberg, including setting up a proper workstation for myself, but the computer is cleaned and ready to go. I also removed the case for now; I want to experiment a bit with it. This meant I also took off the on/off switch, which was embedded in the top case. Note to self to put that back correctly! I am now going to install Ubuntu on it, and I want to try to set up the reverse proxy tunnel so I can SSH into it from outside the school. But first things first.

Installing Linux from Linux is quite easy, at least in my case, coming from Ubuntu. Download whatever version you want (I went with Ubuntu) and use the Startup Disk Creator app, which comes with Ubuntu, to create a bootable USB drive. Ideally we just plug that into the new machine, boot it, and that's it.
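
The command-line equivalent (not what I did, just for reference) is writing the ISO straight to the stick with dd; the ISO filename and /dev/sdX are placeholders, and picking the wrong device will wipe that drive.

#find the USB stick's device name first
lsblk
#write the image to the whole device, not a partition
sudo dd if=ubuntu-22.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync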

The login credentials are (for now)

un: SalwaServer pw: SilverSurfer!

Monday, 18.04.22

11:52, Lukas

I found a really suitable solution on Marktplaats (which I would never have found without Ada's advice): an Udoo Bolt Gear. It's basically a prebuilt configuration (including a quite nice-looking metal case) of the Udoo Bolt V8, a single-board computer slightly bigger than the Odroid (12cm * 12cm) but quite a bit faster.

Ada and I had actually looked into these machines; they seemed like the perfect solution, but new they were completely out of our price range, starting at around 500€ without RAM or an internal hard drive. The configuration on Marktplaats would have been closer to 700-800€ new, with an internal 128GB NVMe SSD and 16GB of RAM preinstalled. The asking price was 250€, but I talked the seller down to 200€. This is great because we can spend the rest of the budget on a really nice storage solution!

I will go to Rotterdam tonight to pick it up.

21:08, Lukas

Only I never made it to Rotterdam. I rented an electric car because it was difficult to get to the seller's place by public transport. But I didn't realize that the batteries of the small cars are too small for a trip like this, so I got stuck around halfway, had to charge the car, and turned around. Will try again tomorrow. The seller was kind of annoyed; fingers crossed tomorrow will be better.

Sunday, 17.04.2022

17:00, Lukas

I met with Ada at Lab111 and we discussed the project. We went into hardware configurations, as well as defining all the services that should run on the new server.

Ada advised me that most ARM single-board computers would not be up to the task, since the Salwa configuration is quite complicated (file server, office suite, video calling/chatting/calendar). However, it's good to think about energy efficiency: a server rack can easily cost 2000€ in electricity for one year. One option would be someone's old laptop, but of course it's good to think about form factor as well; after all, I also want to build a case for this machine.

So, we will look for SBCs with an x86 processor with at least 8 cores. The Odroid H2+ that Paul and I are using would have been perfect, but it is no longer being manufactured due to the chip shortage. There are some really powerful ones, but they are above our budget. We will check Marktplaats.

In terms of storage, we discussed setting up two large hard drives in a RAID configuration, so that they always mirror each other. That means if one drive dies, no data is lost. Additionally, we would get a plan for an offsite backup. This follows the 3-2-1 rule of backing up: 3 copies of the data, on 2 different media, with 1 copy offsite.

We are discussing two options here: either two drives directly connected to the main machine, or an additional computer (for example an Odroid HC4) with two drives, connected to the main machine over Gigabit Ethernet. The latter could function as a file server/NAS even when the main machine is offline, for example for maintenance.

We agreed that during the week I would send her configurations for approval before buying them and that Friday we would meet again to get things up and running.