~comcloudway/ansible-srht

b4766dadd0614c90ab6be3cca314ee616b068295 — Jakob Meier 10 months ago 4ac7bed
Added builds.sr.ht worker setup and guide
M README.md => README.md +1 -1
@@ 9,7 9,7 @@ Should be perfectly fine for a single user instance.
- [x] [meta.sr.ht](https://man.sr.ht/meta.sr.ht/installation.md)
- [x] [hub.sr.ht](https://man.sr.ht/hub.sr.ht/installation.md)
- [x] [git.sr.ht](https://man.sr.ht/git.sr.ht/installation.md)
- [ ] [builds.sr.ht](https://man.sr.ht/builds.sr.ht/installation.md)
- [x] [builds.sr.ht](https://man.sr.ht/builds.sr.ht/installation.md)
- [ ] [paste.sr.ht](https://man.sr.ht/paste.sr.ht/installation.md)
- [ ] [lists.sr.ht](https://man.sr.ht/lists.sr.ht/installation.md)
- [ ] [todo.sr.ht](https://man.sr.ht/todo.sr.ht/installation.md)

M docs/CONFIGURATION.md => docs/CONFIGURATION.md +3 -0
@@ 222,3 222,6 @@ and use the `metasrht-manageuser` command to create a new user.
`metasrht-manageuser -t admin -e <email> <user>`

Just make sure to remove the `-t admin` if the user is not supposed to be an administrator.
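For example, to create an administrator account (the email and username here are placeholders):

``` shell
metasrht-manageuser -t admin -e admin@example.com admin
```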

## Specific Setup
- [builds.sr.ht setup guide](../roles/builds.sr.ht/README.md)

A roles/builds.sr.ht/README.md => roles/builds.sr.ht/README.md +105 -0
@@ 0,0 1,105 @@
# builds.sr.ht
## Worker
Unfortunately, build images cannot be generated automatically
and still require manual creation.

### Configuration
The worker exposes two configuration options:
``` yaml
buildssrht_runner_log_dir: "/var/log/srhtrunner"
buildssrht_runner_mem: "2048"
```

`buildssrht_runner_log_dir` allows you to change the path
where the logs are stored.
This option only exists to keep the path consistent across
the handful of places that reference it,
and you probably do not need to change it.

`buildssrht_runner_mem` allows you to specify the amount of memory (RAM, in MiB,
as passed to qemu's `-m` flag) the worker container/VM is allowed to use.
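
To override these defaults, set the variables in your inventory or playbook as usual; a minimal sketch (the group name is a placeholder):
``` yaml
# group_vars/buildworkers.yml (hypothetical group name)
buildssrht_runner_mem: "4096"
```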

### Setting up an Alpine Linux image
Creating images differs from platform to platform,
but let's walk through a basic Alpine setup.

First of all, attach to your server running sourcehut
and navigate to `/var/lib/images`.
This directory holds the build images and the scripts used to manage them.

Navigate into the `alpine` directory
and create a file called `bootstrap.sh`
with the following content:
```shell
#!/bin/sh -eu

arch="${1:-x86_64}"
version="${2:-3.18.4}"
# derive the release branch (e.g. 3.18) from the full version
release="$(echo "$version" | cut -d. -f 1-2)"

# download the alpine virt iso
wget -O /tmp/alpine.iso "https://dl-cdn.alpinelinux.org/alpine/v$release/releases/$arch/alpine-virt-$version-$arch.iso"

# start the VM, sharing the current directory with the guest via 9p
"${qemu:-qemu-system-$arch}" \
		-m "${MEMORY:-4096}" \
		-smp cpus=2 \
		-nic user \
		-boot d \
		-cdrom /tmp/alpine.iso \
		-virtfs local,path=./,mount_tag=host0,security_model=passthrough,id=host0 \
		-nographic
```
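
The script takes the target architecture and Alpine version as optional positional arguments (defaulting to `x86_64` and `3.18.4`), so you can also pin them explicitly:
``` shell
sh bootstrap.sh x86_64 3.18.4
```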

Now run the script with `sh bootstrap.sh` and wait for the VM to boot.
Log in as `root` and run through the `setup-alpine` process until it wants to set up disks;
at that point exit using `Ctrl-C`.
You should now have a working internet connection,
and your mirrors/repositories should be set up.

Afterwards open `/etc/fstab` using a text editor (e.g. nano or vi)
and add the following line (`host0` matches the `mount_tag` passed to qemu in `bootstrap.sh`):
``` text
host0   /mnt    9p      trans=virtio,version=9p2000.L   0 0
```
Close the file, run `mount -a` and navigate into `/mnt`.

If you run `ls`, you should see the files from the host system.
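
If you prefer not to edit `/etc/fstab` permanently, the same share can also be mounted by hand (a one-off equivalent of the fstab entry above):
``` shell
mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt
```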

Next, enable the community repository by running:
``` shell
sed -i -r 's/^\#(.*community)/\1/' /etc/apk/repositories
apk update
```
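
Afterwards `/etc/apk/repositories` should contain an uncommented community entry; with the default CDN mirror it will look roughly like this (your mirror URL and pinned release may differ):
``` text
https://dl-cdn.alpinelinux.org/alpine/v3.18/main
https://dl-cdn.alpinelinux.org/alpine/v3.18/community
```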

Install the following packages (as listed in the `build.yml`):
``` shell
apk add e2fsprogs qemu-img qemu-system-x86_64 sfdisk syslinux
```
Keep in mind that these might differ if you are not building for `x86_64`.

Modprobe the ext4 module:
``` shell
modprobe ext4
```
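
You can verify that the module is loaded:
``` shell
lsmod | grep ext4
```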

Now that all of the dependencies are out of the way,
decide on a release to build (e.g. `3.18` or `edge`)
and `cd` into the version folder,
e.g.:
``` shell
cd edge
```

Then run `./genimage x86_64` to generate an image.

Now repeat this for every image version you want.
If all the images were generated successfully,
you should be able to submit an example `build.yml` (e.g. through the builds.sr.ht web UI):
``` yaml
image: alpine/edge
tasks:
  - say-hello: |
      echo hello
  - say-world: |
      echo world
```

M roles/builds.sr.ht/defaults/main.yml => roles/builds.sr.ht/defaults/main.yml +6 -0
@@ 1,3 1,9 @@
---
# probably not needed, but might help fix authentication issues
buildssrht_oauth_client_id: ""
buildssrht_oauth_client_secret: ""

# where to store the logs
buildssrht_runner_log_dir: "/var/log/srhtrunner"
# how much memory the worker vm may use
buildssrht_runner_mem: "2048"

M roles/builds.sr.ht/tasks/config.yml => roles/builds.sr.ht/tasks/config.yml +0 -44
@@ 40,50 40,6 @@
      # Only needed if not run behind a reverse proxy, e.g. for local development.
      # By default, the API port is 100 more than the web port
      # api-origin=http://127.0.0.1:5102

      #
      # These config options are only necessary for systems running a build runner
      [builds.sr.ht::worker]
      #
      # Name of this build runner (with HTTP port if not 80)
      name=runner.{{ srht_domain }}
      #
      # Path to write build logs
      buildlogs=./logs
      #
      # Path to the build images
      images=./images
      #
      # In production you should NOT put the build user in the docker group. Instead,
      # make a scratch user who is and write a sudoers or doas.conf file that allows
      # them to execute just the control command, then update this config option. For
      # example:
      #
      #   doas -u docker /var/lib/images/control
      #
      # Assuming doas.conf looks something like this:
      #
      #   permit nopass builds as docker cmd /var/lib/images/control
      #
      # For more information about the security model of builds.sr.ht, visit the wiki:
      #
      #   https://man.sr.ht/builds.sr.ht/installation.md
      controlcmd=./images/control
      #
      # Max build duration. See https://golang.org/pkg/time/#ParseDuration
      timeout=45m
      #
      # Http bind address for serving local build information/monitoring
      bind-address=0.0.0.0:8080
      #
      # Build trigger email
      trigger-from={{ srht_smtp_from }}
      #
      # Configure the S3 bucket and prefix for object storage. Leave empty to disable
      # object storage. Bucket is required to enable object storage; prefix is
      # optional.
      s3-bucket=
      s3-prefix=
  register: conf

- name: Enable & start builds.sr.ht service

M roles/builds.sr.ht/tasks/main.yml => roles/builds.sr.ht/tasks/main.yml +3 -2
@@ 3,8 3,6 @@
  community.general.apk:
    name:
      - builds.sr.ht
      - builds.sr.ht-images
      - builds.sr.ht-worker
    state: latest

- name: Setup /etc/hosts localhost redirect


@@ 20,3 18,6 @@

- name: Setup nginx
  ansible.builtin.import_tasks: nginx.yml

- name: Setup runner
  ansible.builtin.import_tasks: worker.yml

A roles/builds.sr.ht/tasks/worker.yml => roles/builds.sr.ht/tasks/worker.yml +94 -0
@@ 0,0 1,94 @@
---
- name: Install runner dependencies
  community.general.apk:
    name:
      - builds.sr.ht-images
      - builds.sr.ht-worker
      # NOTE: add more qemu-system-$arch packages here,
      # once sourcehut supports other architectures
      - qemu-system-x86_64
    state: latest

- name: Ensure the builds.sr.ht runner config is injected
  ansible.builtin.blockinfile:
    path: /etc/sr.ht/config.ini
    marker: "#-- {mark} ANSIBLE builds.sr.ht (runner) --#"
    block: |
      # These config options are only necessary for systems running a build runner
      [builds.sr.ht::worker]
      #
      # Name of this build runner (with HTTP port if not 80)
      name=runner.{{ srht_domain }}
      #
      # Path to write build logs
      buildlogs={{ buildssrht_runner_log_dir }}
      #
      # Path to the build images
      images=/var/lib/images/
      #
      # In production you should NOT put the build user in the docker group. Instead,
      # make a scratch user who is and write a sudoers or doas.conf file that allows
      # them to execute just the control command, then update this config option. For
      # example:
      #
      #   doas -u docker /var/lib/images/control
      #
      # Assuming doas.conf looks something like this:
      #
      #   permit nopass builds as docker cmd /var/lib/images/control
      #
      # For more information about the security model of builds.sr.ht, visit the wiki:
      #
      #   https://man.sr.ht/builds.sr.ht/installation.md
      controlcmd=/var/lib/images/control
      #
      # Max build duration. See https://golang.org/pkg/time/#ParseDuration
      timeout=45m
      #
      # Http bind address for serving local build information/monitoring
      bind-address=0.0.0.0:8080
      #
      # Build trigger email
      trigger-from={{ srht_smtp_from }}
      #
      # Configure the S3 bucket and prefix for object storage. Leave empty to disable
      # object storage. Bucket is required to enable object storage; prefix is
      # optional.
      s3-bucket=
      s3-prefix=
  register: conf

- name: Overwrite default runner setup
  ansible.builtin.template:
    src: image-control.conf
    dest: /etc/image-control.conf

- name: Make sure the runner user login shell is set correctly
  ansible.builtin.user:
    name: builds
    shell: "/bin/sh"  # may not be set to /sbin/nologin

- name: Make sure runner log dir exists
  ansible.builtin.file:
    name: "{{ buildssrht_runner_log_dir }}"
    state: "directory"
    owner: builds
    group: builds

- name: Copy runner nginx config file
  ansible.builtin.template:
    src: worker.conf
    dest: /etc/nginx/http.d/worker.sr.ht.conf
  register: nginxconf

- name: Start & enable nginx
  ansible.builtin.service:
    name: nginx
    state: restarted
    enabled: true
  when: nginxconf.changed

- name: Setup /etc/hosts localhost redirect for runner
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: "127.0.0.1    runner.{{ srht_domain }}"

A roles/builds.sr.ht/templates/image-control.conf => roles/builds.sr.ht/templates/image-control.conf +2 -0
@@ 0,0 1,2 @@
default_means="qemu"
MEMORY="{{ buildssrht_runner_mem }}"

A roles/builds.sr.ht/templates/worker.conf => roles/builds.sr.ht/templates/worker.conf +12 -0
@@ 0,0 1,12 @@
server {
	include sourcehut.conf;
	server_name runner.{{ srht_domain }};

	client_max_body_size 100M;

	location /logs {
		proxy_pass http://127.0.0.1:8080/logs;
		include headers.conf;
		include web.conf;
	}
}

M roles/git.sr.ht/tasks/ssh.yml => roles/git.sr.ht/tasks/ssh.yml +7 -0
@@ 27,6 27,13 @@
    group: git
    state: touch

- name: Manually create update-hook log file
  ansible.builtin.file:
    path: /var/log/gitsrht-update-hook
    owner: git
    group: git
    state: touch

- name: Start & enable sshd
  ansible.builtin.service:
    name: sshd