~comcloudway/ansible-srht

a866d99944bc7595bda29fed78e85977ecc6ef87 — Jakob Meier 10 months ago 9585c51
Started working on documentation

NOTE: took parts of the dockerhut documentation and rewrote it to fit
the ansible playbook
4 files changed, 413 insertions(+), 0 deletions(-)

A README.md
A docs/CONFIGURATION.md
A docs/PREPARATION.md
A docs/TROUBLESHOOTING.md
A README.md => README.md +22 -0
@@ 0,0 1,22 @@
# ansible-sourcehut

WIP ansible playbook to set up a simple sourcehut instance.

**Probably not useful for larger deployments** \
Should be perfectly fine for a single user instance.

## Project support
- [x] [meta.sr.ht](https://man.sr.ht/meta.sr.ht/installation.md)
- [x] [hub.sr.ht](https://man.sr.ht/hub.sr.ht/installation.md)
- [x] [git.sr.ht](https://man.sr.ht/git.sr.ht/installation.md)
- [ ] [builds.sr.ht](https://man.sr.ht/builds.sr.ht/installation.md)
- [ ] [paste.sr.ht](https://man.sr.ht/paste.sr.ht/installation.md)
- [ ] [lists.sr.ht](https://man.sr.ht/lists.sr.ht/installation.md)
- [ ] [todo.sr.ht](https://man.sr.ht/todo.sr.ht/installation.md)

## Guide
An in-depth configuration guide can be found [here](./docs/CONFIGURATION.md).

For an example lxc deployment guide, look [here](./docs/PREPARATION.md).
If you are running into issues,
have a look at the [troubleshooting guide](./docs/TROUBLESHOOTING.md).

A docs/CONFIGURATION.md => docs/CONFIGURATION.md +224 -0
@@ 0,0 1,224 @@
# Configuration
Every sourcehut deployment consists of multiple parts,
most of which are optional;
however, you always need the core setup
(including the databases and the basic configuration file)
and the meta service.

If you want to dive deeper into sourcehut config files,
check out the official wiki:
 - [installation guide](https://man.sr.ht/installation.md)
 - [configuration guide](https://man.sr.ht/configuration.md).

By default, sourcehut uses a single configuration file,
as stated in the [configuration guide](https://man.sr.ht/configuration.md).
> sr.ht services all use a shared configuration file, [...].
> The specific options you will configure 
> will depend on the subset of 
> and configuration of sr.ht services you choose to run.

However, this playbook uses the magic of `blockinfile`
to separate the configuration options by service.
If a service supports detailed configuration,
or requires some additional installation steps,
you can find a `README.md` in the `roles/<servicename>.sr.ht/` folder.

This guide will focus on getting the basic services up and running.
To be able to change variables without interfering with git,
I'd recommend creating a file called `overwrite.yml` in `group_vars/all`,
in which you can put variable overwrites that are not sensitive information.
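
For example, a minimal `overwrite.yml` might pin just a few of the variables covered below (the values here are placeholders):
```yaml
# group_vars/all/overwrite.yml -- non-sensitive overrides only
srht_site_name: "my forge"
srht_domain: "example.com"
alpine_host_version: "v3.17"
```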

Additionally, you'll need to create a vault to store sensitive values:
```bash
ansible-vault create group_vars/all/secrets.yml
```
To access your vault later, just replace `create` with `edit`
and type the password you picked when prompted.
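
For instance, to open the vault again later:
```bash
ansible-vault edit group_vars/all/secrets.yml
```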

If in doubt about what the variables mean,
you can also have a look at the [default.yml file](../group_vars/all/default.yml).

## Service keys
As a means of authorization, sourcehut uses pregenerated keys.
By default, you have to generate them using `srht-keygen`;
however, this requires you to attach to the server running sourcehut,
which you have yet to set up.

Luckily, the `srht-keygen` script can also be used standalone,
only requiring you to install `python3` and `py3-cryptography`.
Afterwards, you can download the script using `wget`.
```bash
# assumes you are using Alpine Linux
apk add python3 py3-cryptography
wget https://git.sr.ht/~sircmpwn/core.sr.ht/blob/master/srht-keygen
```

That way, we can generate a service key 
using `python srht-keygen service` [^1],
and a network key using 
`python srht-keygen network` [^1].
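
Put together, generating both keys looks like this (run from the directory where you downloaded the script):
```bash
python srht-keygen service   # copy the output into srht_service_key
python srht-keygen network   # copy the output into srht_network_key
```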

Now you can copy the template below into your ansible vault,
replacing the string values with the keys you just generated.
```yaml
srht_service_key: "CHANGEME"
srht_network_key: "CHANGEME"
```

We also need to create a public/private keypair for webhooks.
Luckily, the *srht-keygen* util can also do that:
`python srht-keygen webhook`.
This time around, we only want to copy the private key
and save it in the vault:
```yaml
srht_private_key: "CHANGEME"
```

You should store the public key, though,
as it has to be distributed to every webhook client.
For now, just put the public key into a *pub.key* file.

## Sending emails
Additionally, sourcehut requires you to set up email using SMTP.
If you do not have an email account with SMTP/IMAP support,
you might want to take a look at
[docker-mailserver](https://github.com/docker-mailserver/)
or [purelymail](https://purelymail.com).

```yaml
srht_smtp_host: "<smtp-server>"
srht_smtp_port: "<smtp-server-port>"
srht_smtp_from: "<sender@email.com>"
srht_smtp_user: "<smtp-username>"
srht_smtp_password: "<smtp-password>"
srht_error_to: "<receiver@email.com>"
srht_error_from: "<sender@email.com>"
```

After adding your SMTP details,
you have to generate a PGP key
to enable email verification.

First of all, run `gpg --generate-key` to generate a new keypair.
It should print something like this:
> pub   ed25519 yyyy-mm-dd [SC] [expires: yyyy-mm-dd]
>       <key-id>
> uid                      Jakob Meier <hut@ccw.icu>
> sub   cv25519 yyyy-mm-dd [E] [expires: yyyy-mm-dd]

Now copy the key id and run `gpg --pinentry-mode loopback --passwd <key-id>`
to remove the password from the private key.
Afterwards, you can export the public and private keys:
- public key: `gpg --armor --export <key-id> > email.pub`
- private key: `gpg --armor --export-secret-keys <key-id> > email.priv`

Finally, put your key id into the config file:

```yaml
srht_pgp_key_id: "<your-key-id>"
```

Keep in mind, that you also have to copy the `email.*` to the server
and set their path in your `overwrite.yml` file:
```yaml
srht_pgp_privkey_path: "/path/to/email.priv"
srht_pgp_pubkey_path: "/path/to/email.pub"
```
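
If you go the file-path route and deploy into an lxc container (as in the [preparation guide](./PREPARATION.md)), one way to get the files onto the server is to copy them into the container's rootfs from the host. The destination `/etc/sr.ht/` here is just an assumption; any path that matches your `srht_pgp_*_path` settings works:
```bash
# hypothetical sketch: container named "srht", target dir /etc/sr.ht/
mkdir -p /var/lib/lxc/srht/rootfs/etc/sr.ht
cp email.priv email.pub /var/lib/lxc/srht/rootfs/etc/sr.ht/
```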

Instead of storing the public and private key on your hard drive,
you could also export them as plain text,
copying the key itself (without the BEGIN/END lines)
```bash
gpg --armor --export <key-id> # TODO: copy the plaintext key
gpg --armor --export-secret-keys <key-id> # TODO: copy the plaintext key
```
and storing them in your ansible vault:
```yaml
srht_email_pubkey: |
    KEYHERE
srht_email_privkey: |
    KEYHERE
```

## Forward-facing changes
Sourcehut allows you to customize the instance name and contact address;
this playbook exposes these options as follows:
```yaml
# The name of your network of sr.ht-based sites
srht_site_name: "sourcehut"
# The top-level info page for your site
srht_site_info: "https://sourcehut.org"
# description="$site-name, $site-blurb"
srht_site_blurb: "the hacker forge"
#
# Contact information for the site owners
srht_owner_name: "Drew DeVault"
srht_owner_email: "sir@cmpwn.com"
```

## Domain
This playbook requires you to specify which domain you want the services to run on.
Keep in mind that you'll have to create the subdomains yourself;
they are hardcoded and assumed to follow the scheme
`<service-name>.<domain>`.

Additionally you can also set the protocol,
although you most likely want to keep it as `https`.

```yaml
srht_domain: "example.com"
srht_protocol: "https"
```
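
For example, with the services from the README enabled and `srht_domain: "example.com"`, the DNS records would look roughly like this (a sketch; replace `<server-ip>` with the address of your host):
```text
meta.example.com.  A  <server-ip>
git.example.com.   A  <server-ip>
hub.example.com.   A  <server-ip>
```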

## Host
Because of the way sourcehut ships its packages,
you are restricted to the Alpine Linux versions officially supported by sourcehut.

Consult [the official mirror](https://mirror.sr.ht/alpine/)
to figure out which is the latest supported version
and update your host OS to it (unless you want to use an older version).

After setting up your host system,
you also have to specify the alpine version in the `overwrite.yml`:
```yaml
alpine_host_version: "v3.17"
```

Depending on how your server is set up,
you might also need to adjust the internal ipnet setting,
so that the services can use the internal authorization API.
If you are using `lxc`
and the container IP is somewhere in the `10.0.3.0/24` range,
you can probably use this setting:
```yaml
srht_ipnet: "127.0.0.0/8,::1/128,192.168.0.0/16,10.0.0.0/8,10.0.3.0/8"
```

## Registrations
To enable or disable public-facing account creation,
you can use the enable-registration setting.
```yaml
srht_enable_registration: "yes"
```

Depending on when you set this,
you might need to reboot or restart all services for the change to properly take effect.
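
If you followed the lxc setup from the [preparation guide](./PREPARATION.md), the simplest way to do that is to reboot the whole container from the host:
```bash
lxc-attach -n srht -- reboot
```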

# Run
After configuring your instance to your liking,
and setting up the `hosts.yml` file 
(look online or have a look at the [lxc setup guide](./PREPARATION.md)),
you can deploy it:

```bash
ansible-playbook run.yml --ask-vault-pass
```

This will probably take a couple of minutes but you should be good to go afterwards.

If you've disabled registration, or you want to create an admin account,
you have to connect to the server on which you've deployed sourcehut
and use the `metasrht-manageuser` command to create a new user.

`metasrht-manageuser -t admin -e <email> <user>`

Just make sure to remove the `-t admin` if the user is not supposed to be an administrator.
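
For example, following the command above, a regular (non-admin) account would be created like this:
```bash
metasrht-manageuser -e user@example.com someuser
```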

A docs/PREPARATION.md => docs/PREPARATION.md +132 -0
@@ 0,0 1,132 @@
# Host configuration
The following guide assumes
you are trying to set up sourcehut inside an lxc container,
using Alpine Linux as the host OS.

**You could also use this playbook to install sourcehut inside an Alpine Linux VM or on a bare-metal Alpine host.**
However, the lxc container adds another layer of security,
because sourcehut won't be able to access the host machine
in case you are running other services on it.

## Setting up lxc
The following guide assumes you are using Alpine Linux as a host OS.

Before we can start, we need some base dependencies:
``` sh
apk add lxc lxc-templates-legacy-alpine
apk add lxc-bridge iptables
```

### Networking
Before we can start the bridge interface,
you should set a static IP for the container.
Open the file `/etc/lxc/dnsmasq.conf` using your favorite text editor,
and add the following line:
```conf
dhcp-host=srht,10.0.3.3
```

Now that you have configured networking, 
you can enable and start the bridge interface:
``` sh
rc-update add dnsmasq.lxcbr0 boot
rc-service dnsmasq.lxcbr0 start
```
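
You can check that the bridge came up; it should carry the `10.0.3.1` address that we will use as the container's gateway below:
``` sh
ip addr show lxcbr0
```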

### Creating the container
Because you installed `lxc-templates-legacy-alpine` earlier,
you can now simply create the lxc container using the following command:

``` sh
lxc-create -n srht -f /etc/lxc/default.conf -t alpine -- -r v3.17
```
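
You can confirm that the container exists (and later check its state and IP) with:
``` sh
lxc-ls -f
```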

Next up, create an init service for the container,
then start it and enable it so the container autostarts at boot:

``` sh
ln /etc/init.d/lxc /etc/init.d/lxc.srht
rc-service lxc.srht start
rc-update add lxc.srht
```

### Internet access
By default, the lxc container cannot access the internet (no DNS servers are configured).
To fix this, attach to the lxc container:

```sh
lxc-attach -n srht
```

and edit the `/etc/resolv.conf` file:
``` text
nameserver 1.1.1.1
nameserver 1.0.0.1
```

Whilst attached to the lxc container, we also have to update the interface settings,
because we set a static IP for the container earlier.
Open `/etc/network/interfaces` using your favorite editor
and make sure the `eth0` config looks like this:

``` text
auto eth0
iface eth0 inet static
	address 10.0.3.3
	gateway 10.0.3.1
	netmask 255.255.255.0
	hostname $(hostname)
```

Now reboot the lxc container (`reboot`),
which will also detach you from the container.
Back on the host, you can verify that the container can access the internet:

``` sh
lxc-attach -n srht -- ping example.com
```

## Setting up ansible
The easiest way to deploy sourcehut on the lxc container
is to install ansible on the host machine.
``` sh
apk add ansible git # git is required for the next step
```

Now clone this repo and enter it:
``` sh
git clone https://git.sr.ht/~comcloudway/ansible-srht && cd ansible-srht
```

Before you can run the playbook,
you have to set up a `hosts.yml` file,
which tells ansible how to access the container.
It should probably look something like this:
```yaml
---
srht:
  hosts:
    srht:
      ansible_host: srht
      ansible_user: root
      ansible_connection: lxc
      ansible_python_interpreter: /usr/bin/python
```
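
Before moving on, you can do a quick connectivity check; the `ping` module simply verifies that ansible can reach the container and execute python inside it:
``` sh
ansible -i hosts.yml srht -m ping
```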

Now you can move on to [Configuration](./CONFIGURATION.md) and come back here
when you have successfully deployed the playbook.
If your playbook fails,
have a look at the [Troubleshooting](./TROUBLESHOOTING.md) page.

## Setting up a reverse proxy
I'd recommend using caddy to forward traffic from the host to the container,
as its config files are fairly simple and it automatically takes care of ssl certificates.
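
As a rough illustration, a Caddyfile on the host could look something like this. The hostnames are placeholders, and the assumption that the container serves each subdomain over plain HTTP on port 80 may not match how the playbook actually exposes the services, so adjust the upstream addresses and ports accordingly:

``` text
# /etc/caddy/Caddyfile (sketch)
meta.example.com {
	reverse_proxy 10.0.3.3:80
}
git.example.com {
	reverse_proxy 10.0.3.3:80
}
hub.example.com {
	reverse_proxy 10.0.3.3:80
}
```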

## Additional configuration
If you want to clone repositories via `git@`/ssh,
sourcehut needs port `22` to reach the container,
so you should forward the ssh port from the host to the lxc container.
In case you are using `nftables` as a firewall, you can use the following command:
``` sh
# assumes the "nat" table and its "prerouting" chain already exist
nft add rule ip nat prerouting tcp dport 22 dnat to 10.0.3.3:22
```

A docs/TROUBLESHOOTING.md => docs/TROUBLESHOOTING.md +35 -0
@@ 0,0 1,35 @@
# Troubleshooting
## lxc troubleshooting
The following errors are likely to occur
when running sourcehut inside of lxc.
### python missing
It appears that the alpine image is missing python by default,
which leads to ansible crashing.
If this happens to you (or you would like to prevent it in the first place),
attach to the lxc container and install python:

``` sh
lxc-attach -n srht -- apk add python3
```

### unable to start docker
If your ansible playbook fails with the error message:
> Error connecting: Error while fetching server API version

on one of the docker-related tasks, and you are running inside of lxc,
you have to modify a couple of files.

First of all, add `--exec-driver=lxc`
to the `DOCKER_OPTS` in `/etc/conf.d/docker` (inside your lxc container).

On the host system, edit `/var/lib/lxc/<container name>/config`
and add the following lines:

``` text
# For docker
lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =
```

Once you're done, restart the container and rerun the playbook.