Compare commits


85 Commits
v2.1.0 ... main

Author SHA1 Message Date
Cadey Ratio 990cce7267
Use Tokio 1 (#306)
* upgrade all the things

Signed-off-by: Christine Dodrill <me@christine.website>

* build(deps): bump serde from 1.0.118 to 1.0.120

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.118 to 1.0.120.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.118...v1.0.120)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

* fix

Signed-off-by: Christine Dodrill <me@christine.website>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2021-01-22 21:11:48 -05:00
İlteriş Eroğlu 6456d75502
chore(signalboost): Add linuxgemini (#300) 2021-01-22 20:44:16 -05:00
Vincent Bernat 1d95a5c073
Fix extra command in NixOS secret post (#304) 2021-01-22 20:38:00 -05:00
Cadey Ratio 49f4ba9847 orca
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-20 19:12:32 -05:00
Cadey Ratio 3dba1d98f8 nixos encrypted secret post/essay
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-20 16:42:05 -05:00
Cadey Ratio 90332b323d dont fuck seo
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-19 20:42:09 -05:00
Cadey Ratio 444eee96b0 blog: add redirect posts, tailscale nixos post
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-19 20:35:13 -05:00
Eliot Partridge b40cb9aa78
Add Bytewave to signalboost (#299) 2021-01-18 20:43:17 -05:00
Eliot Partridge 8b2b647257
Fix Twitter/JSON-LD timestamps (remove time/tz info) (#298) 2021-01-18 20:43:03 -05:00
dependabot-preview[bot] dc48c5e5dc
build(deps): bump serde_dhall from 0.8.0 to 0.9.0 (#267)
Bumps [serde_dhall](https://github.com/Nadrieril/dhall-rust) from 0.8.0 to 0.9.0.
- [Release notes](https://github.com/Nadrieril/dhall-rust/releases)
- [Changelog](https://github.com/Nadrieril/dhall-rust/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Nadrieril/dhall-rust/compare/serde_dhall-v0.8.0...serde_dhall-v0.9.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2021-01-16 22:10:26 -05:00
dependabot-preview[bot] 0f5c06fa44
build(deps): bump log from 0.4.11 to 0.4.13 (#293)
Bumps [log](https://github.com/rust-lang/log) from 0.4.11 to 0.4.13.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.11...0.4.13)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2021-01-16 22:10:11 -05:00
dependabot-preview[bot] 1b91c59d59
build(deps): bump rand from 0.8.1 to 0.8.2 (#295)
Bumps [rand](https://github.com/rust-random/rand) from 0.8.1 to 0.8.2.
- [Release notes](https://github.com/rust-random/rand/releases)
- [Changelog](https://github.com/rust-random/rand/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-random/rand/compare/0.8.1...0.8.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2021-01-16 22:10:03 -05:00
dependabot-preview[bot] bc71c3c278
build(deps): bump ructe from 0.12.0 to 0.13.0 (#262)
Bumps [ructe](https://github.com/kaj/ructe) from 0.12.0 to 0.13.0.
- [Release notes](https://github.com/kaj/ructe/releases)
- [Changelog](https://github.com/kaj/ructe/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kaj/ructe/compare/v0.12.0...v0.13.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>
2021-01-16 22:09:49 -05:00
Cadey Ratio 4bcc848bb1 move poking services into app boot after systemd notify
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-16 21:38:22 -05:00
Cadey Ratio 17af42bc69 oops
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 14:11:32 -05:00
Cadey Ratio 1ffc4212d6 actually use google analytics
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 14:10:56 -05:00
Cadey Ratio 811995223c oops
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 13:58:17 -05:00
Cadey Ratio 585d39ea62 install google tag manager
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 13:53:20 -05:00
Cadey Ratio 201abedb14 blogpost for announcing a new PGP key
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 08:35:30 -05:00
Cadey Ratio 66233bcd40
Update my GPG key
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-15 08:24:07 -05:00
Cadey Ratio d2455aa1c1
Cache better (#296)
* Many improvements around bandwidth use

- Use ETags for RSS/Atom feeds
- Use cache-control headers
- Update to rust nightly (for rust-analyzer and faster builds)
- Limit feeds to the last 20 posts:
  https://twitter.com/theprincessxena/status/1349891678857998339
- Use if-none-match to limit bandwidth further

Also does this:

- bump go_vanity to 0.3.0 and lets users customize the branch name
- fix formatting on jsonfeed
- remove last vestige of kubernetes/docker support

Signed-off-by: Christine Dodrill <me@christine.website>

* expire cache quicker for dynamic pages

Signed-off-by: Christine Dodrill <me@christine.website>

* add rss ttl

Signed-off-by: Christine Dodrill <me@christine.website>

* add blogpost

Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-14 22:36:34 -05:00
Cadey Ratio a359f54a91
update everything (#292)
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-10 10:13:11 -05:00
Cadey Ratio 1bd858680d
Borgbackup nixos post (#291)
* fix the systemd notify code

Signed-off-by: Christine Dodrill <me@christine.website>

* remove k8s baktag

Signed-off-by: Christine Dodrill <me@christine.website>

* borg backup post

Signed-off-by: Christine Dodrill <me@christine.website>

* fix build

Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-09 17:16:30 -05:00
Cadey Ratio 49a4d7cbea oopsie whoopsie uwu 2021-01-04 08:43:52 -05:00
Cadey Ratio 0c6d16cba8 hlang in 30s
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-04 08:38:26 -05:00
Cadey Ratio 09c726a0c9 my american is showing
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-03 13:19:44 -05:00
Cadey Ratio a22df5f544
Update rust.yml 2021-01-03 11:55:39 -05:00
Cadey Ratio 1ae1cc2945 </kubernetes>
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-03 11:42:09 -05:00
Cadey Ratio 951542ccf2 systemdify this
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-02 18:11:27 -05:00
Cadey Ratio d63f393193
Update my-career-in-dates-titles-salaries-2019-03-14.markdown 2021-01-01 21:39:13 -05:00
Cadey Ratio 6788a5510b oops
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-01 16:30:18 -05:00
Cadey Ratio 9c5250d10a make this compatible with the new nix way of doing things
Signed-off-by: Christine Dodrill <me@christine.website>
2021-01-01 16:25:45 -05:00
Cadey Ratio 474fd908bc
Update LICENSE 2021-01-01 11:19:45 -05:00
Cadey Ratio 43057536ad blog: Kubernetes pondering post
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-31 12:10:37 -05:00
Cadey Ratio 2389af7ee5
Create mara-sh0rk-of-justice-2020-12-28.markdown (#286) 2020-12-28 15:49:05 -05:00
Cadey Ratio 2b7a64e57d date format
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-25 13:08:13 -05:00
Cadey Ratio b1cb704fa4 the source 1.0.0 release
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-25 12:52:24 -05:00
Nasir Hussain 695ebccd40
add: nasirhm in signalboost (#282)
* add: nasirhm in signalboost

Addition of Nasir Hussain in Signalboost.

* add: django in tags

Addition of Django in skill tags.
2020-12-23 17:51:36 -05:00
Cadey Ratio 0e33b75b26
Update vlang-update-2020-06-17.markdown 2020-12-22 20:43:49 -05:00
Cadey Ratio d3e94ad834 update salary post
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-21 19:05:39 -05:00
ansimita 9e4566ba67
Update signalboost.dhall (#280) 2020-12-20 16:24:45 -05:00
Cadey Ratio 276023d371 oh my god i was an idiot
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-20 12:01:30 -05:00
Cadey Ratio ccdee6431d the 7th edition
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-20 11:44:03 -05:00
Cadey Ratio 098b7183e7
Update twitter-plea-2020-12-14.markdown 2020-12-14 19:53:34 -05:00
Cadey Ratio 8b92d8d8ee blog: twitter account plea
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-14 18:38:27 -05:00
Cadey Ratio 9a9c474c76 start removing mentions of wasmcloud
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-06 10:48:15 -05:00
Cadey Ratio c5fc9336f5 someone got hired!
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-04 17:32:35 -05:00
Cadey Ratio 99197f4843 trisiel update
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-04 17:17:42 -05:00
Cadey Ratio 233ea76204
add webmention support (#274)
* add webmention support

Signed-off-by: Christine Dodrill <me@christine.website>

* add webmention integration post

Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-02 16:16:58 -05:00
Cadey Ratio d35f62351f disable snow animation for now
Signed-off-by: Christine Dodrill <me@christine.website>
2020-12-01 10:25:27 -05:00
Cadey Ratio 7c7981bf70
add nixos/discord webhook post (#272)
* add nixos/discord webhook post

Signed-off-by: Christine Dodrill <me@christine.website>

* oops

Signed-off-by: Christine Dodrill <me@christine.website>
2020-11-30 22:51:49 -05:00
Gleb Peregud a2b1a4afbf
Minor fixes in the Prometheus/Grafana/Loki post (#271)
* Minor fixes in the Prometheus/Grafana/Loki post

* Update node exporter port
2020-11-27 11:52:11 -05:00
Cadey Ratio 23c181ee72
Update scavenger-hunt-solution-2020-11-25.markdown 2020-11-25 19:53:12 -05:00
Cadey Ratio b0ae633c0c
add blogpost explaining the scavenger hunt (#270)
Signed-off-by: Christine Dodrill <me@christine.website>
2020-11-25 11:29:20 -05:00
Cadey Ratio a6eadb1051 bump deps
Signed-off-by: Christine Dodrill <me@christine.website>
2020-11-25 11:14:22 -05:00
Cadey Ratio f5a86eafb8
Prometheus grafana loki nixos (#266)
* prometheus-grafana-loki-nixos post

* simple fix
2020-11-20 17:36:51 -05:00
Cadey Ratio 2dde44763d make blog/, gallery/, and talks/ work
Thanks benharri on Freenode for finding this
2020-11-18 15:01:37 -05:00
Cadey Ratio 92d812f74e yay it works 2020-11-18 13:35:15 -05:00
Cadey Ratio 9371e7f848 fix this? 2020-11-18 13:22:51 -05:00
Cadey Ratio f8ae558738 fix 2020-11-18 13:15:33 -05:00
Cadey Ratio e0a1744989 panasonic $299 2020-11-18 13:09:07 -05:00
Cadey Ratio 7f97bf7ed4 save on everything 2020-11-18 13:03:00 -05:00
Cadey Ratio 5ff4af60f8 will this work lol 2020-11-18 12:54:22 -05:00
Cadey Ratio 7ac6c03341 try this? 2020-11-18 12:41:26 -05:00
Cadey Ratio 8d74418089 Update nix.yml 2020-11-18 12:34:22 -05:00
Cadey Ratio f643496416
various updates (#263)
* various updates

* fix glory shot

* fix mi url for updating my blog

* fix CI
2020-11-18 12:18:24 -05:00
Cadey Ratio 089b14788d
nixops services blogpost (#259) 2020-11-09 13:00:25 -05:00
Cadey Ratio d2deb4470c Merge branches 'pr257', 'pr256', 'pr255', 'pr254', 'pr253', 'pr252', 'pr251', 'pr250' and 'pr247' into main
Closes #257
Closes #256
Closes #255
Closes #254
Closes #253
Closes #252
Closes #251
Closes #250
Closes #247
2020-11-08 15:46:07 -05:00
Cadey Ratio 895554a593 oops 2020-11-08 15:25:38 -05:00
Cadey Ratio 580cac815e oops 2020-11-06 16:08:53 -05:00
Cadey Ratio dbccb14b48
add moonlander review (#258)
* add moonlander review

* moonlander review: polish
2020-11-06 15:05:07 -05:00
dependabot-preview[bot] ec78ddfdd7
build(deps): bump color-eyre from 0.5.6 to 0.5.7
Bumps [color-eyre](https://github.com/yaahc/color-eyre) from 0.5.6 to 0.5.7.
- [Release notes](https://github.com/yaahc/color-eyre/releases)
- [Changelog](https://github.com/yaahc/color-eyre/blob/master/CHANGELOG.md)
- [Commits](https://github.com/yaahc/color-eyre/compare/v0.5.6...v0.5.7)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-06 10:49:07 +00:00
dependabot-preview[bot] 3b63ce9f26
build(deps): bump url from 2.1.1 to 2.2.0
Bumps [url](https://github.com/servo/rust-url) from 2.1.1 to 2.2.0.
- [Release notes](https://github.com/servo/rust-url/releases)
- [Commits](https://github.com/servo/rust-url/compare/v2.1.1...v2.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-06 10:48:43 +00:00
dependabot-preview[bot] e4200399e1
build(deps): bump thiserror from 1.0.21 to 1.0.22
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.21 to 1.0.22.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.21...1.0.22)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-04 10:44:42 +00:00
dependabot-preview[bot] 2793da4845
build(deps): bump tracing-subscriber from 0.2.14 to 0.2.15
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.2.14 to 0.2.15.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.2.14...tracing-subscriber-0.2.15)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-03 10:31:47 +00:00
dependabot-preview[bot] ea94e52dd3
build(deps): bump hyper from 0.13.8 to 0.13.9
Bumps [hyper](https://github.com/hyperium/hyper) from 0.13.8 to 0.13.9.
- [Release notes](https://github.com/hyperium/hyper/releases)
- [Changelog](https://github.com/hyperium/hyper/blob/master/CHANGELOG.md)
- [Commits](https://github.com/hyperium/hyper/compare/v0.13.8...v0.13.9)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-03 10:31:13 +00:00
dependabot-preview[bot] 0d3655e726
build(deps): bump sitemap from 0.4.0 to 0.4.1
Bumps [sitemap](https://github.com/svmk/rust-sitemap) from 0.4.0 to 0.4.1.
- [Release notes](https://github.com/svmk/rust-sitemap/releases)
- [Commits](https://github.com/svmk/rust-sitemap/commits)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-03 10:30:36 +00:00
dependabot-preview[bot] 0b27a424ea
build(deps): bump comrak from 0.8.2 to 0.9.0
Bumps [comrak](https://github.com/kivikakk/comrak) from 0.8.2 to 0.9.0.
- [Release notes](https://github.com/kivikakk/comrak/releases)
- [Changelog](https://github.com/kivikakk/comrak/blob/main/changelog.txt)
- [Commits](https://github.com/kivikakk/comrak/compare/0.8.2...0.9.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-02 10:36:48 +00:00
dependabot-preview[bot] 5bd164fd33
build(deps): bump serde_yaml from 0.8.13 to 0.8.14
Bumps [serde_yaml](https://github.com/dtolnay/serde-yaml) from 0.8.13 to 0.8.14.
- [Release notes](https://github.com/dtolnay/serde-yaml/releases)
- [Commits](https://github.com/dtolnay/serde-yaml/compare/0.8.13...0.8.14)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-11-02 10:36:02 +00:00
Cadey Ratio 7e6272bb76 wasmcloud update 2020-10-31 11:56:18 -04:00
dependabot-preview[bot] 683a3ec5e6
build(deps): bump eyre from 0.6.1 to 0.6.2 (#246)
Bumps [eyre](https://github.com/yaahc/eyre) from 0.6.1 to 0.6.2.
- [Release notes](https://github.com/yaahc/eyre/releases)
- [Changelog](https://github.com/yaahc/eyre/blob/master/CHANGELOG.md)
- [Commits](https://github.com/yaahc/eyre/compare/v0.6.1...v0.6.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>

Co-authored-by: dependabot-preview[bot] <27856297+dependabot-preview[bot]@users.noreply.github.com>

Closes #249
2020-10-30 18:10:01 -04:00
Cadey Ratio 03eea22894
minicompiler: lexing (#248)
* minicompiler: lexing

* add list of reviewers

* make the rust series official

* use statement clarification
2020-10-29 13:55:25 -04:00
dependabot-preview[bot] d32a038572
build(deps): bump serde_dhall from 0.7.1 to 0.8.0
Bumps [serde_dhall](https://github.com/Nadrieril/dhall-rust) from 0.7.1 to 0.8.0.
- [Release notes](https://github.com/Nadrieril/dhall-rust/releases)
- [Changelog](https://github.com/Nadrieril/dhall-rust/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Nadrieril/dhall-rust/compare/serde_dhall-v0.7.1...serde_dhall-v0.8.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2020-10-29 10:28:11 +00:00
Cadey Ratio 27c8145da3
moonlander first impressions (#245)
* moonlander first impressions

* hands
2020-10-27 09:10:05 -04:00
Cadey Ratio 116bc13b6b unbreak 2020-10-25 17:02:17 -04:00
83 changed files with 5917 additions and 1227 deletions


@@ -11,31 +11,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@v1
-    - uses: cachix/install-nix-action@v6
-    - uses: cachix/cachix-action@v3
+    - uses: cachix/install-nix-action@v12
+    - uses: cachix/cachix-action@v7
       with:
         name: xe
-    - name: Log into GitHub Container Registry
-      run: echo "${{ secrets.CR_PAT }}" | docker login https://ghcr.io -u ${{ github.actor }} --password-stdin
-    - name: Docker push
-      run: |
-        docker load -i result
-        docker tag xena/christinewebsite:latest ghcr.io/xe/site:$GITHUB_SHA
-        docker push ghcr.io/xe/site
-  release:
-    runs-on: ubuntu-latest
-    needs: docker-build
-    if: github.ref == 'refs/heads/main'
-    steps:
-    - uses: actions/checkout@v1
-    - uses: cachix/install-nix-action@v6
-    - name: deploy
-      run: ./scripts/release.sh
-      env:
-        DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DIGITALOCEAN_TOKEN }}
-        MI_TOKEN: ${{ secrets.MI_TOKEN }}
-        PATREON_ACCESS_TOKEN: ${{ secrets.PATREON_ACCESS_TOKEN }}
-        PATREON_CLIENT_ID: ${{ secrets.PATREON_CLIENT_ID }}
-        PATREON_CLIENT_SECRET: ${{ secrets.PATREON_CLIENT_SECRET }}
-        PATREON_REFRESH_TOKEN: ${{ secrets.PATREON_REFRESH_TOKEN }}
-        DHALL_PRELUDE: https://raw.githubusercontent.com/dhall-lang/dhall-lang/v17.0.0/Prelude/package.dhall
+    - run: nix build --no-link


@@ -1,35 +0,0 @@
-name: Rust
-on:
-  push:
-    branches: [ main ]
-  pull_request:
-    branches: [ main ]
-env:
-  CARGO_TERM_COLOR: always
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    steps:
-    - uses: actions/checkout@v2
-    - name: Build
-      run: cargo build --all
-    - name: Run tests
-      run: |
-        cargo test
-        (cd lib/jsonfeed && cargo test)
-        (cd lib/patreon && cargo test)
-      env:
-        PATREON_ACCESS_TOKEN: ${{ secrets.PATREON_ACCESS_TOKEN }}
-        PATREON_CLIENT_ID: ${{ secrets.PATREON_CLIENT_ID }}
-        PATREON_CLIENT_SECRET: ${{ secrets.PATREON_CLIENT_SECRET }}
-        PATREON_REFRESH_TOKEN: ${{ secrets.PATREON_REFRESH_TOKEN }}
-  release:
-    runs-on: ubuntu-latest
-    if: github.ref == 'refs/heads/main'
-    steps:
-    - uses: actions/checkout@v2
-    - name: Releases via Palisade
-      run: |
-        docker run --rm --name palisade -v $(pwd):/workspace -e GITHUB_TOKEN -e GITHUB_REF -e GITHUB_REPOSITORY --workdir /workspace ghcr.io/xe/palisade palisade github-action
-      env:
-        GITHUB_TOKEN: ${{ secrets.CR_PAT }}

Cargo.lock (generated, 1299 lines changed): diff suppressed because it is too large.


@@ -1,6 +1,6 @@
 [package]
 name = "xesite"
-version = "2.1.0"
+version = "2.2.0"
 authors = ["Christine Dodrill <me@christine.website>"]
 edition = "2018"
 build = "src/build.rs"
@@ -11,47 +11,49 @@ repository = "https://github.com/Xe/site"
 [dependencies]
 color-eyre = "0.5"
 chrono = "0.4"
-comrak = "0.8"
+comrak = "0.9"
 envy = "0.4"
 glob = "0.3"
-hyper = "0.13"
+hyper = "0.14"
 kankyo = "0.3"
 lazy_static = "1.4"
 log = "0.4"
 mime = "0.3.0"
-prometheus = { version = "0.10", default-features = false, features = ["process"] }
+prometheus = { version = "0.11", default-features = false, features = ["process"] }
 rand = "0"
-serde_dhall = "0.7.1"
+reqwest = { version = "0.11", features = ["json"] }
+sdnotify = { version = "0.1", default-features = false }
+serde_dhall = "0.9.0"
 serde = { version = "1", features = ["derive"] }
 serde_yaml = "0.8"
 sitemap = "0.4"
 thiserror = "1"
-tokio = { version = "0.2", features = ["macros"] }
+tokio = { version = "1", features = ["full"] }
 tracing = "0.1"
 tracing-futures = "0.2"
 tracing-subscriber = { version = "0.2", features = ["fmt"] }
-warp = "0.2"
+warp = "0.3"
 xml-rs = "0.8"
 url = "2"
+uuid = { version = "0.8", features = ["serde", "v4"] }

 # workspace dependencies
+cfcache = { path = "./lib/cfcache" }
 go_vanity = { path = "./lib/go_vanity" }
 jsonfeed = { path = "./lib/jsonfeed" }
+mi = { path = "./lib/mi" }
 patreon = { path = "./lib/patreon" }

 [build-dependencies]
-ructe = { version = "0.12", features = ["warp02"] }
+ructe = { version = "0.13", features = ["warp02"] }

 [dev-dependencies]
 pfacts = "0"
 serde_json = "1"
 eyre = "0.6"
-reqwest = { version = "0.10", features = ["json"] }
 pretty_env_logger = "0"

 [workspace]
 members = [
-    "./lib/go_vanity",
-    "./lib/jsonfeed",
-    "./lib/patreon"
+    "./lib/*",
 ]


@@ -1,4 +1,4 @@
-Copyright (c) 2017-2020 Christine Dodrill <me@christine.website>
+Copyright (c) 2017-2021 Christine Dodrill <me@christine.website>

 This software is provided 'as-is', without any express or implied
 warranty. In no event will the authors be held liable for any damages
@@ -16,4 +16,4 @@ freely, subject to the following restrictions:
 2. Altered source versions must be plainly marked as such, and must not be
    misrepresented as being the original software.
 3. This notice may not be removed or altered from any source distribution.

blog/7e-2020-12-20.markdown (new file, 177 lines)

@@ -0,0 +1,177 @@
---
title: The 7th Edition
date: 2020-12-19
tags:
- ttrpg
---
# The 7th Edition
You know what, fuck rules. Fuck systems. Fuck limitations. Let's dial the
tabletop RPG system down to its roots. Let's throw out every stat but one:
Awesomeness. When you try to do something that could fail, roll for Awesomeness.
If your roll is more than your awesomeness stat, you win. If not, you lose. If
you are or have something that would benefit you in that situation, roll for
awesomeness twice and take the higher value.
No stats.<br />
No counts.<br />
No limits.<br />
No gods.<br />
No masters.<br />
Just you and me and nature in the battlefield.
* Want to shoot an arrow? Roll for awesomeness. You failed? You're out of ammo.
* Want to defeat a goblin and you have a goblin-slaying-broadsword? Roll twice
for awesomeness and take the higher value. You got a 20? That goblin was
obliterated. Good job.
* Want to pick up an item into your inventory? Roll for awesomeness. You got it?
It's in your inventory.
Etc. Don't think too hard. Let a roll of the dice decide if you are unsure.
## Base Awesomeness Stats
Here are some probably balanced awesomeness base stats depending on what kind of
dice you are using:
* 6-sided: 4 or 5
* 8-sided: 5 or 6
* 10-sided: 6 or 7
* 12-sided: 7 or 8
* 20-sided: anywhere from 11-13
## Character Sheet Template
Here's an example character sheet:
```
Name:
Awesomeness:
Race:
Class:
Inventory:
*
```
That's it. You don't even need the race or class if you don't want to have it.
You can add more if you feel it is relevant for your character. If your
character is a street brat that has experience with haggling, then fuck it be
the most street brattiest haggler you can. Try to not overload your sheet with
information, this game is supposed to be simple. A sentence or two at most is
good.
## One Player is The World
The World is a character that other systems would call the Narrator, the
Pathfinder, Dungeon Master or similar. Let's strip this down to the core of the
matter. One player doesn't just dictate the world, they _are_ the world.
The World also controls the monsters and non-player characters. In general, if
you are in doubt as to who should roll for an event, The World does that roll.
## Mixins/Mods
These are things you can do to make the base game even more tailored to your
group. Whether you should do this is highly variable to the needs and whims of
your group in particular.
### Mixin: Adjustable Awesomeness
So, one problem that could come up with this is that bad luck could make this
not as fun. As a result, add these two rules in:
* Every time you roll above your awesomeness, add 1 to your awesomeness stat
* Every time you roll below your awesomeness, remove 1 from your awesomeness
stat
This should add up so that luck would even out over time. Players that have less
luck than usual will eventually get their awesomeness evened out so that luck
will be in their favor.
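If you want to see the evening-out for yourself, here is a minimal simulation
sketch in Rust (the d20 and the starting stat of 12 are assumptions, and it
leans on the `rand` crate):

```rust
use rand::Rng;

/// One roll with the adjustable awesomeness mixin: rolling above the
/// stat is a win and raises it, anything else is a loss and lowers it.
fn roll_adjustable(awesomeness: &mut i32) -> bool {
    let roll = rand::thread_rng().gen_range(1..=20);
    if roll > *awesomeness {
        *awesomeness += 1;
        true
    } else {
        *awesomeness -= 1;
        false
    }
}

fn main() {
    let mut awesomeness = 12; // assumed starting stat for a d20 game
    for _ in 0..100 {
        roll_adjustable(&mut awesomeness);
    }
    // After enough rolls the stat drifts toward the middle of the die,
    // where wins and losses are about equally likely.
    println!("awesomeness settled at {}", awesomeness);
}
```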
### Mixin: No Awesomeness
In this mod, rip out Awesomeness altogether. When two parties are at odds, they
both roll dice. The one that rolls higher gets what they want. If they tie, both
people get a little part of what they want. For extra fun do this with six-sided
dice.
* Monster wants to attack a player? The World and that player roll. If the
player wins, they can choose to counterattack. If the monster wins, they do a
wound or something.
* One player wants to steal from another? Have them both roll to see what
happens.
Use your imagination! Ask others if you are unsure!
## Other Advice
This is not essential but it may help.
### Monster Building
Okay so basically monsters fall into two categories: peons and bosses. Peons
should be easy to defeat, usually requiring one action. Bosses may require more
and might require more than pure damage to defeat. Get clever. Maybe require the
players to drop a chandelier on the boss. Use the environment.
In general, peons should have a very high base awesomeness in order to do things
they want. Bosses can vary based on your mood.
Adjustable awesomeness should affect monsters too.
### Worldbuilding
Take a setting from somewhere and roll with it. You want to do a cyberpunk jaunt
in Night City with a sword-wielding warlock, a succubus space marine, a bard
netrunner and a shapeshifting monk? Do the hell out of that. That sounds
awesome.
Don't worry about accuracy or the like. You are setting out to have fun.
## Special Thanks
Special thanks goes to Jared, who sent out this [tweet][1] that inspired this
document. In case the tweet gets deleted, here's what it said:
[1]: https://twitter.com/infinite_mao/status/1340402360259137541
> heres a d&d for you
> you have one stat, its a saving throw. if you need to roll dice, you roll your
> save.
> you have a class and some equipment and junk. if the thing you need to roll
> dice for is relevant to your class or equipment or whatever, roll your save
> with advantage.
> oh your Save is 5 or something. if you do something awesome, raise your save
> by 1.
> no hp, save vs death. no damage, save vs goblin. no tracking arrows, save vs
> running out of ammo.
> thanks to @Axes_N_Orcs for this
> What's So Cool About Save vs Death?
> can you carry all that treasure and equipment? save vs gains
I replied:
> Can you get more minimal than this?
He replied:
> when two or more parties are at odds, all roll dice. highest result gets what
> they want.
> hows that?
This document is really just this twitter exchange in more words so that people
less familiar with tabletop games can understand it more easily. You know you
have finished when there is nothing left to remove, not when you can add
something to "fix" it.
I might put this on my [itch.io page](https://withinstudios.itch.io/).


@@ -1,8 +1,8 @@
 ---
 title: "TL;DR Rust"
 date: 2020-09-19
+series: rust
 tags:
-- rust
 - go
 - golang
 ---


@@ -0,0 +1,229 @@
---
title: "</kubernetes>"
date: 2021-01-03
---
# &lt;/kubernetes&gt;
Well, since I posted [that last post](/blog/k8s-pondering-2020-12-31) I have had
an adventure. A good friend pointed out a server host that I had missed when I
was looking for other places to use, and now I have migrated my blog to this new
server. As of yesterday, I now run my website on a dedicated server in Finland.
Here is the story of my journey to migrate 6 years of cruft and technical debt
to this new server.
Let's talk about this goliath of a server. This server is an AX41 from Hetzner.
It has 64 GB of ram, a 512 GB nvme drive, 3 2 TB drives, and a Ryzen 3600. For
all practical concerns, this beast is beyond overkill and rivals my workstation
tower in everything but the GPU power. I have named it `lufta`, which is the
word for feather in [L'ewa](https://lewa.within.website/dictionary.html).
## Assimilation
For my server setup process, the first step is to assimilate it. In this step I
get a base NixOS install on it somehow. Since I was using Hetzner, I was able to
boot into a NixOS install image using the process documented
[here](https://nixos.wiki/wiki/Install_NixOS_on_Hetzner_Online). Then I decided
that it would also be cool to have this server use
[zfs](https://en.wikipedia.org/wiki/ZFS) as its filesystem to take advantage of
its legendary subvolume and snapshotting features.
So I wrote up a bootstrap system definition like the Hetzner tutorial said and
ended up with `hosts/lufta/bootstrap.nix`:
```nix
{ pkgs, ... }:

{
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg9gYKVglnO2HQodSJt4z4mNrUSUiyJQ7b+J798bwD9 cadey@shachi"
  ];

  networking.usePredictableInterfaceNames = false;
  systemd.network = {
    enable = true;
    networks."eth0".extraConfig = ''
      [Match]
      Name = eth0

      [Network]
      # Add your own assigned ipv6 subnet here!
      Address = 2a01:4f9:3a:1a1c::/64
      Gateway = fe80::1

      # optionally you can do the same for ipv4 and disable DHCP
      # (networking.dhcpcd.enable = false;)
      Address = 135.181.162.99/26
      Gateway = 135.181.162.65
    '';
  };

  boot.supportedFilesystems = [ "zfs" ];

  environment.systemPackages = with pkgs; [ wget vim zfs ];
}
```
Then I fired up the kexec tarball and waited for the server to boot into a NixOS
live environment. A few minutes later I was in. I started formatting the drives
according to the [NixOS install
guide](https://nixos.org/manual/nixos/stable/index.html#sec-installation) with
one major difference: I added a `/boot` ext4 partition on the SSD. This allows
me to have the system root device on zfs. I added the disks to a `raidz1` pool
and created a few volumes. I also added the SSD as a log device so I get SSD
caching.
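In rough strokes, the pool setup looked something like this (the device names
and the `ashift` value here are assumptions for illustration, not copied from
the real machine):

```console
$ zpool create -o ashift=12 rpool raidz1 /dev/sda /dev/sdb /dev/sdc
$ zpool add rpool log /dev/nvme0n1p3
$ zfs create -o mountpoint=legacy rpool/root
$ zfs create -o mountpoint=legacy rpool/home
```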
From there I installed NixOS as normal and rebooted the server. It booted
normally. I had a shiny new NixOS server in the cloud! I noticed that the server
had booted into NixOS unstable as opposed to NixOS 20.09 like my other nodes. I
thought "ah, well, that probably isn't a problem" and continued to the
configuration step.
[That's ominous...](conversation://Mara/hmm)
## Configuration
Now that the server was assimilated and I could SSH into it, the next step was
to configure it to run my services. While I was waiting for Hetzner to provision
my server I ported a bunch of my services over to Nixops services [a-la this
post](/blog/nixops-services-2020-11-09) in [this
folder](https://github.com/Xe/nixos-configs/tree/master/common/services) of my
configs repo.
Now that I had them, it was time to add this server to my Nixops setup. So I
opened the [nixops definition
folder](https://github.com/Xe/nixos-configs/tree/master/nixops/hexagone) and
added the metadata for `lufta`. Then I added it to my Nixops deployment with
this command:
```console
$ nixops modify -d hexagone -n hexagone *.nix
```
Then I copied over the autogenerated config from `lufta`'s `/etc/nixos/` folder
into
[`hosts/lufta`](https://github.com/Xe/nixos-configs/tree/master/hosts/lufta) and
ran a `nixops deploy` to add some other base configuration.
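That deploy is a single command, reusing the deployment name from the `nixops
modify` call above:

```console
$ nixops deploy -d hexagone
```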
## Migration
Once that was done, I started enabling my services and pushing configs to test
them. After I got to a point where I thought things would work I opened up the
Kubernetes console and started deleting deployments on my kubernetes cluster as
I felt "safe" to migrate them over. Then I saw the deployments come back. I
deleted them again and they came back again.
Oh, right. I enabled that one Kubernetes service that made it intentionally hard
to delete deployments. One clever set of scale-downs and kills later and I was
able to kill things with wild abandon.
I copied over the gitea data with `rsync` running in the kubernetes deployment.
Then I killed the gitea deployment, updated DNS and reran a whole bunch of gitea
jobs to resanify the environment. I did a test clone on a few of my repos and
then I deleted the gitea volume from DigitalOcean.
Moving over the other deployments from Kubernetes into NixOS services was
somewhat easy, however I did need to repackage a bunch of my programs and static
sites for NixOS. I made the
[`pkgs`](https://github.com/Xe/nixos-configs/tree/master/pkgs) tree a bit more
fleshed out to compensate.
[Okay, packaging static sites in NixOS is beyond overkill, however a lot of them
need some annoyingly complicated build steps and throwing it all into Nix means
that we can make them reproducible and use one build system to rule them
all. Not to mention that when I need to upgrade the system, everything will
rebuild with new system libraries to avoid the <a
href="https://blog.tidelift.com/bit-rot-the-silent-killer">Docker bitrot
problem</a>.](conversation://Mara/hacker)
## Reboot Test
After a significant portion of the services were moved over, I decided it was
time to do the reboot test. I ran the `reboot` command and then...nothing.
My continuous ping test was timing out. My phone was blowing up with downtime
messages from NodePing. Yep, I messed something up.
I was able to boot the server back into a NixOS recovery environment using the
kexec trick, and from there I was able to prove the following:
- The zfs setup is healthy
- I can read some of the data I migrated over
- I can unmount and remount the ZFS volumes repeatedly
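Those checks were roughly the following (a sketch, with the pool name `rpool`
assumed):

```console
$ zpool import -f rpool  # the pool imports cleanly
$ zpool status -v rpool  # and reports ONLINE with no data errors
$ zfs mount -a           # the volumes mount...
$ zfs unmount -a         # ...and unmount, as many times as you like
```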
I was confused. This shouldn't be happening. After half an hour of
troubleshooting, I gave in and ordered an IPKVM to be installed in my server.
Once that was set up (and I managed to trick MacOS into letting me boot a .jnlp
web start file), I rebooted the server so I could see what error I was getting
on boot. I missed it the first time around, but on the second time I was able to
capture this screenshot:
![The error I was looking
for](https://cdn.christine.website/file/christine-static/blog/Screen+Shot+2021-01-03+at+1.13.05+AM.png)
Then it hit me. I did the install on NixOS unstable. My other servers use NixOS
20.09. I had downgraded zfs and the older version of zfs couldn't mount the
volume created by the newer version of zfs in read/write mode. One more trip to
the recovery environment later to install NixOS unstable in a new generation.
Then I switched my tower's default NixOS channel to the unstable channel and ran
`nixops deploy` to reactivate my services. After the NodePing uptime
notifications came in, I ran the reboot test again while looking at the console
output to be sure.
It booted. It worked. I had a stable setup. Then I reconnected to IRC and passed
out.
## Services Migrated
Here is a list of all of the services I have migrated over from my old dedicated
server, my kubernetes cluster and my dokku server:
- aerial -> discord chatbot
- goproxy -> go modules proxy
- lewa -> https://lewa.within.website
- hlang -> https://h.christine.website
- mi -> https://mi.within.website
- printerfacts -> https://printerfacts.cetacean.club
- xesite -> https://christine.website
- graphviz -> https://graphviz.christine.website
- idp -> https://idp.christine.website
- oragono -> ircs://irc.within.website:6697/
- tron -> discord bot
- withinbot -> discord bot
- withinwebsite -> https://within.website
- gitea -> https://tulpa.dev
- other static sites
Doing this migration is a bit of an archaeology project as well. I was
continuously discovering services that I had littered over my machines with very
poorly documented requirements and configuration. I hope that this move will
make the next migration of this kind a lot easier.
I still have a few other services to move over, however the ones that are left
are much more annoying to set up properly. This migration lets me deprovision 5
servers, leaves me with this stupidly powerful goliath of a server to do
whatever I want with, and cuts my monthly server costs by more than half.
I am very close to being able to turn off the Kubernetes cluster and use NixOS
for everything. A few services that are still on the Kubernetes cluster are
resistant to being nixified, so I may have to use the Docker containers for
that. I was hoping to be able to cut out Docker entirely, however we don't seem
to be that lucky yet.
Sure, there is some added latency with the server being in Europe instead of
Montreal, however if this ever becomes a practical issue I can always launch a
cheap DigitalOcean VPS in Toronto to act as a DNS server for my WireGuard setup.
Either way, I am now off Kubernetes for my highest traffic services. If services
of mine need to use the disk, they can now just use the disk. If I really care
about the data, I can add the service folders to the list of paths to back up to
`rsync.net` (I have a post about how this backup process works in the drafting
stage) via [borgbackup](https://www.borgbackup.org/).
Let's hope it stays online!
---
Many thanks to [Graham Christensen](https://twitter.com/grhmc), [Dave
Anderson](https://twitter.com/dave_universetf) and everyone else who has been
helping me along this journey. I would be lost without them.


@@ -0,0 +1,178 @@
---
title: "How to Set Up Borg Backup on NixOS"
date: 2021-01-09
series: howto
tags:
- nixos
- borgbackup
---
# How to Set Up Borg Backup on NixOS
[Borg Backup](https://www.borgbackup.org/) is an encrypted, compressed,
deduplicated backup program for multiple platforms including Linux. This
combined with the [NixOS options for configuring
Borg Backup](https://search.nixos.org/options?channel=20.09&show=services.borgbackup.jobs.%3Cname%3E.paths&from=0&size=30&sort=relevance&query=services.borgbackup.jobs)
allows you to backup on a schedule and restore from those backups when you need
to.
Borg Backup works with local files, remote servers and there are even [cloud
hosts](https://www.borgbackup.org/support/commercial.html) that specialize in
hosting your backups. In this post we will cover how to set up a backup job on a
server using [BorgBase](https://www.borgbase.com/)'s free tier to host the
backup files.
## Setup
You will need a few things:
- A free BorgBase account
- A server running NixOS
- A list of folders to back up
- A list of folders to NOT back up
First, we will need to create an SSH key for root to use when connecting to
BorgBase. Open a shell as root on the server and make a `borgbackup` folder in
root's home directory:
```shell
mkdir borgbackup
cd borgbackup
```
Then create an SSH key that will be used to connect to BorgBase:
```shell
ssh-keygen -f ssh_key -t ed25519 -C "Borg Backup"
```
Leave the SSH key passphrase empty, because at this time the automated Borg
Backup job doesn't allow the use of password-protected SSH keys.
Now we need to create an encryption passphrase for the backup repository. Run
this command to generate one using [xkcdpass](https://pypi.org/project/xkcdpass/):
```shell
nix-shell -p python39Packages.xkcdpass --run 'xkcdpass -n 12' > passphrase
```
[You can do whatever you want to generate a suitable passphrase, however
xkcdpass is proven to be <a href="https://xkcd.com/936/">more random</a> than
most other password generators.](conversation://Mara/hacker)
## BorgBase Setup
Now that we have the basic requirements out of the way, let's configure BorgBase
to use that SSH key. In the BorgBase UI click on the Account tab in the upper
right and open the SSH key management window. Click on Add Key and paste in the
contents of `./ssh_key.pub`. Name it after the hostname of the server you are
working on. Click Add Key and then go back to the Repositories tab in the upper
right.
Click New Repo and name it after the hostname of the server you are working on.
Select the key you just created to have full access. Choose the region of the
backup volume and then click Add Repository.
On the main page copy the repository path with the copy icon next to your
repository in the list. You will need this below. Attempt to SSH into the backup
repo in order to have ssh recognize the server's host key:
```shell
ssh -i ./ssh_key o6h6zl22@o6h6zl22.repo.borgbase.com
```
Then accept the host key and press control-c to terminate the SSH connection.
## NixOS Configuration
In your `configuration.nix` file, add the following block:
```nix
services.borgbackup.jobs."borgbase" = {
  paths = [
    "/var/lib"
    "/srv"
    "/home"
  ];
  exclude = [
    # very large paths
    "/var/lib/docker"
    "/var/lib/systemd"
    "/var/lib/libvirt"

    # temporary files created by cargo and `go build`
    "**/target"
    "/home/*/go/bin"
    "/home/*/go/pkg"
  ];
  repo = "o6h6zl22@o6h6zl22.repo.borgbase.com:repo";
  encryption = {
    mode = "repokey-blake2";
    passCommand = "cat /root/borgbackup/passphrase";
  };
  environment.BORG_RSH = "ssh -i /root/borgbackup/ssh_key";
  compression = "auto,lzma";
  startAt = "daily";
};
```
Customize the paths and exclude lists to your needs. Once you are satisfied,
rebuild your NixOS system using `nixos-rebuild`:
```shell
nixos-rebuild switch
```
And then you can fire off an initial backup job with this command:
```shell
systemctl start borgbackup-job-borgbase.service
```
Monitor the job with this command:
```shell
journalctl -fu borgbackup-job-borgbase.service
```
The first backup job will always take the longest to run. Every incremental
backup after that will get smaller and smaller. By default, the system will
create new backup snapshots every night at midnight local time.
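If you want a different schedule, `startAt` accepts any systemd calendar
expression, and the same job definition can also prune old snapshots for you.
A sketch, with arbitrary retention numbers of my own choosing:

```nix
services.borgbackup.jobs."borgbase" = {
  # ...
  startAt = "*-*-* 02:00:00"; # every night at 2 AM local time instead
  prune.keep = {
    daily = 7;   # a week of dailies
    weekly = 4;  # a month of weeklies
    monthly = 6; # half a year of monthlies
  };
};
```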
## Restoring Files
To restore files, first figure out which snapshot you want to restore the files
from. NixOS includes a wrapper script for each Borg job you define. You can
mount your backup archive using this command:
```
mkdir mount
borg-job-borgbase mount o6h6zl22@o6h6zl22.repo.borgbase.com:repo ./mount
```
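If you would rather see the available snapshots by name first, the same wrapper
script exposes the stock borg subcommands pointed at your repo:

```
borg-job-borgbase list
```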
Then you can explore the backup (and with it each incremental snapshot) to
your heart's content and copy files out manually. You can look through each
folder and copy out what you need.
When you are done you can unmount it with this command:
```
borg-job-borgbase umount /root/borgbackup/mount
```
---
And that's it! You can get more fancy with nixops using a setup [like
this](https://github.com/Xe/nixos-configs/blob/master/common/services/backup.nix).
In general though, you can get away with this setup. It may be a good idea to
copy down the encryption passphrase onto paper and put it in a safe place like a
safety deposit box.
For more information about Borg Backup on NixOS, see [the relevant chapter of
the NixOS
manual](https://nixos.org/manual/nixos/stable/index.html#module-borgbase) or
[the list of borgbackup
options](https://search.nixos.org/options?channel=20.09&query=services.borgbackup.jobs)
that you can pick from.
I hope this is able to help.


@@ -0,0 +1,78 @@
---
title: hlang in 30 Seconds
date: 2021-01-04
series: h
tags:
- satire
---
# hlang in 30 Seconds
hlang (the h language) is a revolutionary new use of WebAssembly that enables
single-paradigm programming without any pesky state or memory accessing. The
simplest program you can write in hlang is the h world program:
```
h
```
When run in [the hlang playground](https://h.christine.website/play), you can
see its output:
```
h
```
To get more output, separate multiple h's by spaces:
```
h h h h
```
This returns:
```
h
h
h
h
```
## Internationalization
For internationalization concerns, hlang also supports the Lojbanic h `'`. You can
mix h and `'` to your heart's content:
```
' h '
```
This returns:
```
'
h
'
```
Finally an easy solution to your pesky Lojban internationalization problems!
## Errors
For maximum understandability, compiler errors are provided in Lojban. For
example this error tells you that you have an invalid character at the first
character of the string:
```
h: gentoldra fi'o zvati fe li no
```
Here is an interlinear gloss of that error:
```
h: gentoldra fi'o zvati fe li no
grammar-wrong existing-at second-place use-number 0
```
And now you are fully fluent in hlang, the most exciting programming language
since sliced bread.


@@ -0,0 +1,160 @@
---
title: Kubernetes Pondering
date: 2020-12-31
tags:
- k8s
- kubernetes
- soyoustart
- kimsufi
- digitalocean
- vultr
---
# Kubernetes Pondering
Right now I am using a freight train to mail a letter when it comes to hosting
my web applications. If you are reading this post on the day it comes out, then
you are connected to one of a few replicas of my site code running across at
least 3 machines in my Kubernetes cluster. This certainly _works_, however it is
not very ergonomic and ends up being quite expensive.
I think I made a mistake when I decided to put my cards into Kubernetes for my
personal setup. It made sense at the time (I was trying to learn Kubernetes and
I am cursed into learning by doing), however I don't think it is really the best
choice available for my needs. I am not a large company. I am a single person
making things that are really targeted for myself. I would like to replace this
setup with something more at my scale. Here are a few options I have been
exploring combined with their pros and cons.
Here are the services I currently host on my Kubernetes cluster:
- [this site](/)
- [my git server](https://tulpa.dev)
- [hlang](https://h.christine.website)
- A few personal services that I've been meaning to consolidate
- The [olin demo](https://olin.within.website/)
- The venerable [printer facts server](https://printerfacts.cetacean.club)
- A few static websites
- An IRC server (`irc.within.website`)
My goal in evaluating other options is to reduce cost and complexity. Kubernetes
is a very complicated system and requires a lot of hand-holding and rejiggering
to make it do what you want. NixOS, on the other hand, is a lot simpler overall
and I would like to use it for running my services where I can.
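For a sense of the difference in scale: a service that needs a Deployment, a
Service and an Ingress on Kubernetes is a handful of lines as a NixOS module.
A sketch (the package name `pkgs.printerfacts` is made up for illustration):

```nix
systemd.services.printerfacts = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    ExecStart = "${pkgs.printerfacts}/bin/printerfacts";
    DynamicUser = true;
    Restart = "always";
  };
};
```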
Cost is a huge factor in this. My Kubernetes setup is a money pit. I want to
prioritize cost reduction as much as possible.
## Option 1: Do Nothing
I could do nothing about this and eat the complexity as a cost of having this
website and those other services online. However over the year or so I've been
using Kubernetes I've had to do a lot of hacking at it to get it to do what I
want.
I set up the cluster using Terraform and Helm 2. Helm 3 is the current
(backwards-incompatible) release, and all of the things that are managed by Helm
2 have resisted being upgraded to Helm 3.
I'm going to say something slightly controversial here, but YAML is a HORRIBLE
format for configuration. I can't trust myself to write unambiguous YAML. I have
to reference the spec constantly to make sure I don't have an accidental
Norway/Ontario bug. I have a Dhall package that takes away most of the pain,
however it's not flexible enough to describe the entire scope of what my
services need to do (IE: pinging Google/Bing to update their indexes on each
deploy), and I don't feel like putting in the time to make it that flexible.
[This is the regex for determining what is a valid boolean value in YAML:
`y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF`.
This can bite you eventually. See the <a
href="https://hitchdev.com/strictyaml/why/implicit-typing-removed/">Norway
Problem</a> for more information.](conversation://Mara/hacker)
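To make the trap concrete, here is a hypothetical snippet showing the two
behaviors side by side:

```yaml
# With YAML 1.1 implicit typing this parses as the boolean false,
# not the string "NO" (Norway's country code).
country: NO
# Quoting it yields the string you actually meant.
country_quoted: "NO"
```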
I have a tor hidden service endpoint for a few of my services. I have to use an
[unmaintained tool](https://github.com/kragniz/tor-controller) to manage these
on Kubernetes. It works _today_, but the Kubernetes operator API could change at
any time (or the API this uses could be deprecated and removed without much
warning) and leave me in the dust.
I could live with all of this, however I don't really think it's the best idea
going forward. There's a bunch of services that I added on top of Kubernetes
that are dangerous to upgrade and very difficult (if not impossible) to
downgrade when something goes wrong during the upgrade.
One of the big things that I have with this setup that I would have to rebuild
in NixOS is the continuous deployment setup. However I've done that before and
it wouldn't really be that much of an issue to do it again.
NixOS fixes all the jank I mentioned above by making my specifications not have
to include the version numbers of everything the system already provides. You
can _actually trust the package repos to have up to date packages_. I don't
have to go around and bump the versions of shims and pray they work, because
with NixOS I don't need them anymore.
## Option 2: NixOS on top of SoYouStart or Kimsufi
This is a doable option. The main problem here would be doing the provision
step. SoYouStart and Kimsufi (both are offshoot/discount brands of OVH) have
very little in terms of customization of machine config. They work best when you
are using "normal" distributions like Ubuntu or CentOS and leave them be. I
would want to run NixOS on it and would have to do several trial and error runs
with a tool such as [nixos-infect](https://github.com/elitak/nixos-infect) to
assimilate the server into running NixOS.
With this option I would get the most storage of any option by far. 4
TB is a _lot_ of space. However, SoYouStart and Kimsufi run decade-old hardware
at best. I would end up paying a lot for very little in the CPU department. For
most things I am sure this would be fine, however some of my services can have
CPU needs that might exceed what second-generation Xeons can provide.
SoYouStart and Kimsufi have weird kernel versions though. The last SoYouStart
dedi I used ran Fedora and was gimped with a grsec kernel by default. I had to
end up writing [this gem of a systemd service on
boot](https://github.com/Xe/dotfiles/blob/master/ansible/roles/soyoustart/files/conditional-kexec.sh)
which did a [`kexec`](https://en.wikipedia.org/wiki/Kexec) to boot into a
non-gimped kernel on boot. It was a huge hack and somehow worked every time. I
was still afraid to reboot the machine though.
Sure is a lot of ram for the cost though.
## Option 3: NixOS on top of Digital Ocean
This shares most of the problems as the SoYouStart or Kimsufi nodes. However,
nixos-infect is known to have a higher success rate on Digital Ocean droplets.
It would be really nice if Digital Ocean let you upload arbitrary ISO files and
go from there, but that is apparently not the world we live in.
8 GB of ram would be _way more than enough_ for what I am doing with these
services.
## Option 4: NixOS on top of Vultr
Vultr is probably my top pick for this. You can upload an arbitrary ISO file,
kick off your VPS from it and install it like normal. I have a little shell
server shared between some friends built on top of such a Vultr node. It works
beautifully.
The fact that it has the same cost as the Digital Ocean droplet just adds to the
perfection of this option.
## Costs
Here is the cost table I've drawn up while comparing these options:
| Option        | RAM                | Disk                                  | Cost per month | Hacks        |
| :------------ | :----------------- | :------------------------------------ | :------------- | :----------- |
| Do nothing    | 6 GB (4 GB usable) | Not really usable, volumes cost extra | $60/month      | Very Yes     |
| SoYouStart    | 32 GB              | 2x2TB SAS                             | $40/month      | Yes          |
| Kimsufi       | 32 GB              | 2x2TB SAS                             | $35/month      | Yes          |
| Digital Ocean | 8 GB               | 160 GB SSD                            | $40/month      | On provision |
| Vultr         | 8 GB               | 160 GB SSD                            | $40/month      | No           |
I think I am going to go with the Vultr option. I will need to modernize some of
my services to support being deployed in NixOS in order to do this, however I
think that I will end up creating a more robust setup in the process. At least I
will create a setup that allows me to more easily maintain my own backups rather
than just relying on DigitalOcean snapshots and praying like I do with the
Kubernetes setup.
Thanks farcaller, Marbles, John Rinehart and others for reviewing this post
prior to it being published.


@@ -0,0 +1,51 @@
---
title: "Mara: Sh0rk of Justice: Version 1.0.0 Released"
date: 2020-12-28
tags:
- gameboy
- gbstudio
- indiedev
---
# Mara: Sh0rk of Justice: Version 1.0.0 Released
Over the long weekend I found out about a program called [GB Studio](https://www.gbstudio.dev).
It's a simple drag-and-drop interface that you can use to make homebrew games for the
[Nintendo Game Boy](https://en.wikipedia.org/wiki/Game_Boy). I was intrigued and I had
some time, so I set out to make a little top-down adventure game. After a few days of
tinkering I came up with an idea and created Mara: Sh0rk of Justice.
[You made a game about me? :D](conversation://Mara/hacker)
> Guide Mara through the spooky dungeon in order to find all of its secrets. Seek out
> the secrets of the spooks! Defeat the evil mage! Solve the puzzles! Find the items
> of power! It's up to you to save us all, Mara!
You can play it in an `<iframe>` on itch.io!
<iframe frameborder="0" src="https://itch.io/embed/866982?dark=true" width="552" height="167"><a href="https://withinstudios.itch.io/mara-sh0rk-justice">Mara: Sh0rk of Justice by Within</a></iframe>
## Things I Learned
Game development is hard. Even with tools that help you do it, there's a limit to how
much you can get done at once. Everything links together. You really need to test
things both in isolation and as a cohesive whole.
I cannot compose music to save my life. I used free-to-use music assets from the
[GB Studio Community Assets](https://github.com/DeerTears/GB-Studio-Community-Assets)
pack to make this game. I think I managed to get everything acceptable.
GB Studio is rather inflexible. It feels like it's there to really help you get
started from a template. Even though you can make the whole game from inside GB
Studio, I probably should have ejected the engine to source code so I could
customize some things like the jump button being weird in platforming sections.
Pixel art is an art of its own. I used a lot of free-to-use assets from itch.io for
the tileset and a few NPCs. The rest I created myself using
[Aseprite](https://www.aseprite.org). Getting Mara's walking animation to a point
that I thought was acceptable was a chore. I found a nice compromise though.
---
Overall I'm happy with the result as a whole. Try it out, see how you like it and
please do let me know what I can improve on for the future.


@@ -0,0 +1,397 @@
---
title: "Minicompiler: Lexing"
date: 2020-10-29
series: rust
tags:
- rust
- templeos
- compiler
---
# Minicompiler: Lexing
I've always wanted to make my own compiler. Compilers are an integral part of
my day-to-day job and I use the fruits of them constantly. A while ago, while I
was browsing through the TempleOS source code, I found
[MiniCompiler.HC][minicompiler] in the `::/Demos/Lectures` folder and I was a
bit blown away. It implements a two-phase compiler from simple math expressions
to AMD64 bytecode (complete with bit-banging it to an array that the code later
jumps to) and has a lot to teach about how compilers work. For those of you that
don't have a TempleOS VM handy, here is a video of MiniCompiler.HC in action:
[minicompiler]: https://github.com/Xe/TempleOS/blob/master/Demo/Lectures/MiniCompiler.HC
<video controls width="100%">
<source src="https://cdn.christine.website/file/christine-static/img/minicompiler/tmp.YDcgaHSb3z.webm"
type="video/webm">
<source src="https://cdn.christine.website/file/christine-static/img/minicompiler/tmp.YDcgaHSb3z.mp4"
type="video/mp4">
Sorry, your browser doesn't support embedded videos.
</video>
You put in a math expression, the compiler builds it and then spits out a bunch
of assembly and runs it to return the result. In this series we are going to be
creating an implementation of this compiler that targets [WebAssembly][wasm].
This compiler will be written in Rust and will use only the standard library for
everything but the final bytecode compilation and execution phase. There is a
lot going on here, so I expect this to be at least a three part series. The
source code will be in [Xe/minicompiler][Xemincompiler] in case you want to read
it in detail. Follow along and let's learn some Rust on the way!
[wasm]: https://webassembly.org/
[Xemincompiler]: https://github.com/Xe/minicompiler
[Compilers for languages like C are built on top of the fundamentals here, but
they are _much_ more complicated.](conversation://Mara/hacker)
## Description of the Language
This language uses normal infix math expressions on whole numbers. Here are a
few examples:
- `2 + 2`
- `420 * 69`
- `(34 + 23) / 38 - 42`
- `(((34 + 21) / 5) - 12) * 348`
Ideally we should be able to nest the parentheses as deep as we want without any
issues.
Looking at these values we can notice a few patterns that will make parsing this
a lot easier:
- There seem to be only four major parts to this language:
  - numbers
  - math operators
  - open parentheses
  - close parentheses
- All of the math operators act identically and take two arguments
- Each program is one line long and ends at the end of the line
Let's turn this description into Rust code.
## Bringing in Rust
Make a new project called `minicompiler` with a command that looks something
like this:
```console
$ cargo new minicompiler
```
This will create a folder called `minicompiler` and a file called `src/main.rs`.
Open that file in your editor and copy the following into it:
```rust
// src/main.rs
/// Mathematical operations that our compiler can do.
#[derive(Debug, Eq, PartialEq)]
enum Op {
    Mul,
    Div,
    Add,
    Sub,
}

/// All of the possible tokens for the compiler, this limits the compiler
/// to simple math expressions.
#[derive(Debug, Eq, PartialEq)]
enum Token {
    EOF,
    Number(i32),
    Operation(Op),
    LeftParen,
    RightParen,
}
```
[In compilers, "tokens" refer to the individual parts of the language you are
working with. In this case the `Token` type covers every possible part of a
program.](conversation://Mara/hacker)
And then let's start a function that can turn a program string into a bunch of
tokens:
```rust
// src/main.rs
fn lex(input: &str) -> Vec<Token> {
    todo!("implement this");
}
```
[Wait, what do you do about bad input such as things that are not math expressions?
Shouldn't this function be able to fail?](conversation://Mara/hmm)
You're right! Let's make a little error type that represents bad input. For
creativity's sake let's call it `BadInput`:
```rust
// src/main.rs
use std::error::Error;
use std::fmt;
/// The error that gets returned on bad input. This only tells the user that it's
/// wrong because debug information is out of scope here. Sorry.
#[derive(Debug, Eq, PartialEq)]
struct BadInput;
// Errors need to be displayable.
impl fmt::Display for BadInput {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "something in your input is bad, good luck")
    }
}
// The default Error implementation will do here.
impl Error for BadInput {}
```
And then let's adjust the type of `lex()` to compensate for this:
```rust
// src/main.rs
fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    todo!("implement this");
}
```
So now that we have the function type we want, let's start implementing `lex()`
by setting up the result and a loop over the characters in the input string:
```rust
// src/main.rs
fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    let mut result: Vec<Token> = Vec::new();

    for character in input.chars() {
        todo!("implement this");
    }

    Ok(result)
}
```
Looking at the examples from earlier we can start writing some boilerplate to
turn characters into tokens:
```rust
// src/main.rs
// ...
for character in input.chars() {
    match character {
        // Skip whitespace
        ' ' => continue,

        // Ending characters
        ';' | '\n' => {
            result.push(Token::EOF);
            break;
        }

        // Math operations
        '*' => result.push(Token::Operation(Op::Mul)),
        '/' => result.push(Token::Operation(Op::Div)),
        '+' => result.push(Token::Operation(Op::Add)),
        '-' => result.push(Token::Operation(Op::Sub)),

        // Parentheses
        '(' => result.push(Token::LeftParen),
        ')' => result.push(Token::RightParen),

        // Numbers
        '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
            todo!("implement number parsing")
        }

        // Everything else is bad input
        _ => return Err(BadInput),
    }
}
// ...
```
[Ugh, you're writing `Token::` and `Op::` a lot. Is there a way to simplify
that?](conversation://Mara/hmm)
Yes! Enum variants can be shortened to their names with a `use` statement like
this:
```rust
// src/main.rs
// ...
use Op::*;
use Token::*;

match character {
    // ...

    // Math operations
    '*' => result.push(Operation(Mul)),
    '/' => result.push(Operation(Div)),
    '+' => result.push(Operation(Add)),
    '-' => result.push(Operation(Sub)),

    // Parentheses
    '(' => result.push(LeftParen),
    ')' => result.push(RightParen),

    // ...
}
// ...
```
Which looks a _lot_ better.
[You can use the `use` statement just about anywhere in your program. However,
to keep things flowing nicely, the `use` statement is placed right next to where
it is needed in these examples.](conversation://Mara/hacker)
Now we can get into the fun that is parsing numbers. When he wrote MiniCompiler,
Terry Davis used an approach that is something like this (spacing added for readability):
```c
case '0'...'9':
    i = 0;
    do {
        i = i * 10 + *src - '0';
        src++;
    } while ('0' <= *src <= '9');
    *num = i;
```
This sets an intermediate variable `i` to 0 and then consumes characters from
the input string as long as they are between `'0'` and `'9'`. As a neat side
effect of the numbers being input in base 10, you can conceptualize `42` as
`(4 * 10) + 2`. So it multiplies the accumulated number by 10 and then adds the
new digit to it. Our setup doesn't let us get that fancy as easily; however, we
can emulate it with a bit of stack manipulation according to these rules:
- If `result` is empty, push this number to `result` and continue lexing the
  program
- Pop the last item in `result` and save it as `last`
- If `last` is a number, multiply that number by 10 and add the current number
  to it
- Otherwise, push `last` back into `result` and then push the current number to
  `result` as well
Translating these rules to Rust, we get this:
```rust
// src/main.rs
// ...
// Numbers
'0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
    let num: i32 = (character as u8 - b'0') as i32;

    if result.is_empty() {
        result.push(Number(num));
        continue;
    }

    let last = result.pop().unwrap();

    match last {
        Number(i) => {
            result.push(Number((i * 10) + num));
        }
        _ => {
            result.push(last);
            result.push(Number(num));
        }
    }
}
// ...
```
[This is not the most robust number parsing code in the world, however it will
suffice for now. Extra credit if you can identify the edge
cases!](conversation://Mara/hacker)
This should cover the tokens for the language. Let's write some tests to be sure
everything is working the way we think it is!
## Testing
Rust has a [robust testing
framework](https://doc.rust-lang.org/book/ch11-00-testing.html) built into the
standard library. We can use it here to make sure we are generating tokens
correctly. Let's add the following to the bottom of `main.rs`:
```rust
#[cfg(test)] // tells the compiler to only build this code when tests are being run
mod tests {
    use super::{Op::*, Token::*, *};

    // registers the following function as a test function
    #[test]
    fn basic_lexing() {
        assert!(lex("420 + 69").is_ok());
        assert!(lex("tacos are tasty").is_err());

        assert_eq!(
            lex("420 + 69"),
            Ok(vec![Number(420), Operation(Add), Number(69)])
        );
        assert_eq!(
            lex("(30 + 560) / 4"),
            Ok(vec![
                LeftParen,
                Number(30),
                Operation(Add),
                Number(560),
                RightParen,
                Operation(Div),
                Number(4)
            ])
        );
    }
}
```
This test can and probably should be expanded on, but when we run `cargo test`:
```console
$ cargo test
Compiling minicompiler v0.1.0 (/home/cadey/code/Xe/minicompiler)
Finished test [unoptimized + debuginfo] target(s) in 0.22s
Running target/debug/deps/minicompiler-03cad314858b0419
running 1 test
test tests::basic_lexing ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
And hey presto! We verified that the lexing is working correctly. Those
test cases should be sufficient to cover all of the functionality of the
language.
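As a stab at Mara's extra-credit question, here is a hedged sketch of a couple of
those edge cases written as extra tests (my own additions, dropped into the
`tests` module above). One more lurks beyond these: a literal bigger than
`i32::MAX` will panic in debug builds because of the unchecked `(i * 10) + num`
arithmetic.

```rust
// src/main.rs, inside mod tests (these tests are illustrative additions)
#[test]
fn lexing_edge_cases() {
    // an empty program is silently accepted as an empty token stream
    assert_eq!(lex(""), Ok(vec![]));
    // without a trailing ';' or '\n', no EOF token is ever pushed
    assert_eq!(lex("42"), Ok(vec![Number(42)]));
}
```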
---
This is it for part 1. We covered a lot today. Next time we are going to run a
validation pass on the program, convert the infix expressions to reverse polish
notation and then also get started on compiling that to WebAssembly. This has
been fun so far and I hope you were able to learn from it.
Special thanks to the following people for reviewing this post:
- Steven Weeks
- sirpros
- Leonora Tindall
- Chetan Conikee
- Pablo
- boopstrap
- ash2x3

View File

@ -40,7 +40,8 @@ The following table is a history of my software career by title, date and salary
| Software Engineer | August 24, 2016 | November 22, 2016 | 90 days | 21 days | $105,000/year | Terminated |
| Consultant | February 13, 2017 | November 13, 2017 | 273 days | 83 days | don't remember | Hired |
| Senior Software Engineer | November 13, 2017 | March 8, 2019 | 480 days | 0 days | $150,000/year | Voluntary quit |
| Senior Site Reliability Expert | May 6, 2019 | October 27, 2020 | 540 days | 48 days | CAD$115,000/year (about USD$80k and change) | Voluntary quit |
| Software Designer | December 14, 2020 | *current* | n/a | n/a | CAD$135,000/year (about USD$105k and change) | n/a |

Even though I've been fired three times, I don't regret my career as it's been
thus far. I've been able to work on experimental technology integrating into

View File

@ -0,0 +1,38 @@
---
title: New PGP Key Fingerprint
date: 2021-01-15
---
# New PGP Key Fingerprint
This morning I got an encrypted email, and in the process of trying to decrypt
it I discovered that I had _lost_ my PGP key. I have no idea how I lost it. As
such, I have created a new PGP key and replaced the one on my website with it.
I did the replacement in [this
commit](https://github.com/Xe/site/commit/66233bcd40155cf71e221edf08851db39dbd421c),
which you can see is verified with a subkey of my new key.
My new PGP key ID is `803C 935A E118 A224`. The key with the ID `799F 9134 8118
1111` should not be used anymore. Here are all the subkey fingerprints:
```
Signature key ....: 378E BFC6 3D79 B49D 8C36 448C 803C 935A E118 A224
      created ....: 2021-01-15 13:04:28
Encryption key....: 8C61 7F30 F331 D21B 5517 6478 8C5C 9BC7 0FC2 511E
      created ....: 2021-01-15 13:04:28
Authentication key: 7BF7 E531 ABA3 7F77 FD17 8F72 CE17 781B F55D E945
      created ....: 2021-01-15 13:06:20
General key info..: pub rsa2048/803C935AE118A224 2021-01-15 Christine Dodrill (Yubikey) <me@christine.website>
sec>  rsa2048/803C935AE118A224  created: 2021-01-15  expires: 2031-01-13
      card-no: 0006 03646872
ssb>  rsa2048/8C5C9BC70FC2511E  created: 2021-01-15  expires: 2031-01-13
      card-no: 0006 03646872
ssb>  rsa2048/CE17781BF55DE945  created: 2021-01-15  expires: 2031-01-13
      card-no: 0006 03646872
```
I don't really know what the proper way is to go about revoking an old PGP key.
It probably doesn't help that I don't use PGP very often. I think this is the
first encrypted email I've gotten in a year.
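For the record, the common escape hatch here (assuming the old key was generated
with GnuPG 2.1 or newer, which quietly writes out a revocation certificate at key
creation time) looks something like this; the filename is illustrative:

```console
# GnuPG 2.1+ keeps pre-generated revocation certificates here
$ ls ~/.gnupg/openpgp-revocs.d/
<old-key-fingerprint>.rev

# the file ships with a leading colon as a safety catch; remove it, then:
$ gpg --import ~/.gnupg/openpgp-revocs.d/<old-key-fingerprint>.rev
$ gpg --keyserver keys.openpgp.org --send-keys 799F913481181111
```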
Let's hope that I don't lose this key as easily!

View File

@ -0,0 +1,317 @@
---
title: Nixops Services on Your Home Network
date: 2020-11-09
series: howto
tags:
- nixos
- systemd
---
# Nixops Services on Your Home Network
My homelab has a few NixOS machines. Right now they mostly run services inside
Docker, because that has been what I have done for years. This works fine, but
persistent state gets annoying*. NixOS has a tool called
[Nixops](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html) that
allows you to push configurations to remote machines. I use this for managing my
fleet of machines, and today I'm going to show you how to create service
deployments with Nixops and push them to your servers.
[Pedantically, Docker offers <a
href="https://docs.docker.com/storage/volumes/">volumes</a>
to simplify this, but it is very easy to accidentally delete Docker volumes.
Plain disk files like we are going to use today are a bit simpler than Docker
volumes, and thusly a bit harder to mess up.](conversation://Mara/hacker)
## Parts of a Service
For this example, let's deploy a chatbot. To make things easier, let's assume
the following about this chatbot:
- The chatbot has a git repo somewhere
- The chatbot's git repo has a `default.nix` that builds the service and
includes any supporting files it might need
- The chatbot reads its configuration from environment variables which may
contain secret values (API keys, etc.)
- The chatbot stores any temporary files in its current working directory
- The chatbot is "well-behaved" (for some definition of "well-behaved")
I will also need to assume that you have a git repo (or at least a folder) with
all of your configuration similar to [mine](https://github.com/Xe/nixos-configs).
For this example I'm going to use [withinbot](https://github.com/Xe/withinbot)
as the service we will deploy via Nixops. withinbot is a chatbot that I use on
my own Discord guild that does a number of vital functions including supplying
amusing facts about printers:
```
<Cadey~> ~printerfact
<Within[BOT]> @Cadey~ Printers, especially older printers, do get cancer. Many
times this disease can be treated successfully
```
[To get your own amusing facts about printers, see <a
href="https://printerfacts.cetacean.club">here</a> or for using its API, call <a
href="https://printerfacts.cetacean.club/fact">`/fact`</a>. This API has no
practical rate limits, but please don't test that.](conversation://Mara/hacker)
## Service Definition
We will need to do a few major things for defining this service:
1. Add the bot code as a package
1. Create a "services" folder for the service modules
1. Create a user account for the service
1. Set up a systemd unit for the service
1. Configure the secrets using [Nixops
keys](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html#idm140737322342384)
### Add the Code as a Package
In order for the program to be installed to the remote system, you need to tell
the system how to import it. There are many ways to do this, but the cheezy way is
to add the packages to
[`nixpkgs.config.packageOverrides`](https://nixos.org/manual/nixos/stable/#sec-customising-packages)
like this:
```nix
nixpkgs.config = {
  packageOverrides = pkgs: {
    within = {
      withinbot = import (builtins.fetchTarball
        "https://github.com/Xe/withinbot/archive/main.tar.gz") { };
    };
  };
};
```
And now we can access it as `pkgs.within.withinbot` in the rest of our config.
[In production circumstances you should probably use <a
href="https://nixos.org/manual/nixpkgs/stable/#chap-pkgs-fetchers">a fetcher
that locks to a specific version</a> using unique URLs and hashing, but this
will work enough to get us off the ground in this
example.](conversation://Mara/hacker)
### Create a "services" Folder
In your configuration folder, create a folder that you will use for these
service definitions. I made mine in `common/services`. In that folder, create a
`default.nix` with the following contents:
```nix
{ config, lib, ... }:

{
  imports = [ ./withinbot.nix ];

  users.groups.within = {};
}
```
The group listed here is optional, but I find that having a group like that can
help you better share resources and files between services.
Now we need a folder for storing secrets. Let's create that under the services
folder:
```console
$ mkdir secrets
```
And let's also add a gitignore file so that we don't accidentally commit these
secrets to the repo:
```gitignore
# common/services/secrets/.gitignore
*
```
Now we can put any secrets we want in the secrets folder without the risk of
committing them to the git repo.
### Service Manifest
Let's create `withinbot.nix` and set it up:
```nix
{ config, lib, pkgs, ... }:
with lib; {
  options.within.services.withinbot.enable =
    mkEnableOption "Activates Withinbot (the furryhole chatbot)";

  config = mkIf config.within.services.withinbot.enable {

  };
}
```
This sets up an option called `within.services.withinbot.enable` which will only
add the service configuration if that option is set to `true`. This will allow
us to define a lot of services that are available, but none of their config will
be active unless they are explicitly enabled.
Now, let's create a user account for the service:
```nix
# ...
config = ... {
  users.users.withinbot = {
    createHome = true;
    description = "github.com/Xe/withinbot";
    isSystemUser = true;
    group = "within";
    home = "/srv/within/withinbot";
    extraGroups = [ "keys" ];
  };
};
# ...
```
This will create a user named `withinbot` with the home directory
`/srv/within/withinbot`, the primary group `within`, and membership in the
group `keys` so the withinbot user can read deployment secrets.
Now let's add the deployment secrets to the configuration:
```nix
# ...
config = ... {
  users.users.withinbot = { ... };

  deployment.keys.withinbot = {
    text = builtins.readFile ./secrets/withinbot.env;
    user = "withinbot";
    group = "within";
    permissions = "0640";
  };
};
# ...
```
Assuming you have the configuration at `./secrets/withinbot.env`, this will
register the secrets into `/run/keys/withinbot` and also create a systemd
oneshot service named `withinbot-key`. This allows you to add the secret's
existence as a condition for withinbot to run. However, Nixops puts these keys
in `/run`, which by default is mounted using a temporary memory-only filesystem,
meaning these keys will need to be re-added to machines when they are rebooted.
Fortunately, `nixops reboot` will automatically add the keys back after the
reboot succeeds.
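In practice that looks something like this (the deployment name here is the one
used later in this post):

```console
# a nixops-initiated reboot re-sends the keys automatically
$ nixops reboot -d hexagone

# after an unplanned reboot, push the keys back by hand
$ nixops send-keys -d hexagone
```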
Now that we have everything else we need, let's add the service configuration:
```nix
# ...
config = ... {
  users.users.withinbot = { ... };
  deployment.keys.withinbot = { ... };

  systemd.services.withinbot = {
    wantedBy = [ "multi-user.target" ];
    after = [ "withinbot-key.service" ];
    wants = [ "withinbot-key.service" ];

    serviceConfig = {
      User = "withinbot";
      Group = "within";
      Restart = "on-failure"; # automatically restart the bot when it dies
      WorkingDirectory = "/srv/within/withinbot";
      RestartSec = "30s";
    };

    script = let withinbot = pkgs.within.withinbot;
    in ''
      # load the environment variables from /run/keys/withinbot
      export $(grep -v '^#' /run/keys/withinbot | xargs)

      # service-specific configuration
      export CAMPAIGN_FOLDER=${withinbot}/campaigns

      # kick off the chatbot
      exec ${withinbot}/bin/withinbot
    '';
  };
};
# ...
```
This will create the systemd configuration for the service so that it starts on
boot, waits to start until the secrets have been loaded into it, runs withinbot
as its own user and in the `within` group, and throttles the service restart so
that it doesn't incur Discord rate limits as easily. This will also put all
withinbot logs in journald, meaning that you can manage and monitor this service
like you would any other systemd service.
## Deploying the Service
In your target server's `configuration.nix` file, add an import of your services
directory:
```nix
{
  # ...

  imports = [
    # ...
    /home/cadey/code/nixos-configs/common/services
  ];

  # ...
}
```
And then enable the withinbot service:
```nix
{
  # ...

  within.services = {
    withinbot.enable = true;
  };

  # ...
}
```
[Make that a block so you can enable multiple services at once like <a
href="https://github.com/Xe/nixos-configs/blob/e111413e8b895f5a117dea534b17fc9d0b38d268/hosts/chrysalis/configuration.nix#L93-L96">this</a>!](conversation://Mara/hacker)
Now you are free to deploy it to your network with `nixops deploy`:
```console
$ nixops deploy -d hexagone
```
<video controls width="100%">
<source src="https://cdn.christine.website/file/christine-static/img/nixops/tmp.Tr7HTFFd2c.webm"
type="video/webm">
<source src="https://cdn.christine.website/file/christine-static/img/nixops/tmp.Tr7HTFFd2c.mp4"
type="video/mp4">
Sorry, your browser doesn't support embedded videos.
</video>
And then you can verify the service is up with `systemctl status`:
```console
$ nixops ssh -d hexagone chrysalis -- systemctl status withinbot
● withinbot.service
Loaded: loaded (/nix/store/7ab7jzycpcci4f5wjwhjx3al7xy85ka7-unit-withinbot.service/withinbot.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-11-09 09:51:51 EST; 2h 29min ago
Main PID: 12295 (withinbot)
IP: 0B in, 0B out
Tasks: 13 (limit: 4915)
Memory: 7.9M
CPU: 4.456s
CGroup: /system.slice/withinbot.service
└─12295 /nix/store/qpq281hcb1grh4k5fm6ksky6w0981arp-withinbot-0.1.0/bin/withinbot
Nov 09 09:51:51 chrysalis systemd[1]: Started withinbot.service.
```
---
This basic template is enough to expand out to anything you would need, and it is
what I am using for my own network. Check out the [NixOS manual](https://nixos.org/manual/nixos/stable/)
for more examples and things you can do with this. The [Nixops
manual](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html) is also
a good read. It can also set up deployments with VirtualBox, libvirtd, AWS,
Digital Ocean, and even Google Cloud.
The cloud is the limit! Be well.

View File

@ -0,0 +1,146 @@
---
title: Discord Webhooks via NixOS and Systemd Timers
date: 2020-11-30
series: howto
tags:
- nixos
- discord
- systemd
---
# Discord Webhooks via NixOS and Systemd Timers
Recently I needed to set up a Discord message on a cronjob as a part of
moderating a guild I've been in for years. I've done this before using
[cronjobs](/blog/howto-automate-discord-webhook-cron-2018-03-29), however this
time we will be using [NixOS](https://nixos.org/) and [systemd
timers](https://wiki.archlinux.org/index.php/Systemd/Timers). Here's what you
will need to follow along:
- A machine running NixOS
- A [Discord](https://discord.com/) account
- A
[webhook](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks)
configured for a channel
- A message you want to send to Discord
[If you don't have moderation permissions in any guilds, make your own for
testing! You will need the "Manage Webhooks" permission to create a
webhook.](conversation://Mara/hacker)
## Setting Up Timers
systemd timers are like cronjobs, except they trigger systemd services instead
of shell commands. For this example, let's create a daily webhook reminder to
check on your Animal Crossing island at 9 am.
Let's create the systemd service at the end of the machine's
`configuration.nix`:
```nix
systemd.services.acnh-island-check-reminder = {
  serviceConfig.Type = "oneshot";
  script = ''
    MESSAGE="It's time to check on your island! Check those stonks!"
    WEBHOOK="${builtins.readFile /home/cadey/prefix/secrets/acnh-webhook-secret}"
    USERNAME="Domo"

    ${pkgs.curl}/bin/curl \
      -X POST \
      -F "content=$MESSAGE" \
      -F "username=$USERNAME" \
      "$WEBHOOK"
  '';
};
```
[This service is a <a href="https://stackoverflow.com/a/39050387">oneshot</a>
unit, meaning systemd will launch this once and not expect it to always stay
running.](conversation://Mara/hacker)
Now let's create a timer for this service. We need to do the following:
- Associate the timer with that service
- Assign a schedule to the timer
Add this to the end of your `configuration.nix`:
```nix
systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "TODO(Xe): this";
};
```
Earlier we mentioned that we want to trigger this reminder every morning at 9 am.
systemd timers specify their calendar config in the following format:
```
DayOfWeek Year-Month-Day Hour:Minute:Second
```
So for something that triggers every day at 9 AM, it would look like this:
```
*-*-* 9:00:00
```
[You can ignore the day of the week if it's not
relevant!](conversation://Mara/hacker)
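If you want to sanity-check a calendar expression before deploying it, a
reasonably recent systemd can parse it for you and print the normalized form
along with the next time it would trigger:

```console
$ systemd-analyze calendar "*-*-* 9:00:00"
```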
So our final timer definition would look like this:
```nix
systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "*-*-* 9:00:00";
};
```
## Deployment and Testing
Now we can deploy this with `nixos-rebuild`:
```console
$ sudo nixos-rebuild switch
```
You should see a line that says something like this in the `nixos-rebuild`
output:
```
starting the following units: acnh-island-check-reminder.timer
```
Let's test the service out using `systemctl`:
```console
$ sudo systemctl start acnh-island-check-reminder.service
```
And you should then see a message on Discord. If you don't see a message, check
the logs using `journalctl`:
```console
$ journalctl -u acnh-island-check-reminder.service
```
If you see an error that looks like this:
```
curl: (26) Failed to open/read local data from file/application
```
This usually means that you tried to do a role or user mention at the beginning
of the message, and curl interpreted the leading `@` as a request to read a
file. Add a word like "hey" at the beginning of the line to disable this
behavior. See
[here](https://stackoverflow.com/questions/6408904/send-request-to-curl-with-post-data-sourced-from-a-file)
for more information.
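Alternatively, curl's `--form-string` flag sends the value literally, without
treating a leading `@` or `<` as a file reference, so a sketch of the webhook
call that tolerates mentions at the start of the message would be:

```console
$ curl -X POST \
  --form-string "content=$MESSAGE" \
  --form-string "username=$USERNAME" \
  "$WEBHOOK"
```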
---
Also happy December! My site has the [snow
CSS](https://christine.website/blog/let-it-snow-2018-12-17) loaded for the
month. Enjoy!

View File

@ -0,0 +1,332 @@
---
title: Encrypted Secrets with NixOS
date: 2021-01-20
series: nixos
tags:
- age
- ed25519
---
# Encrypted Secrets with NixOS
One of the best things about NixOS is the fact that it's so easy to do
configuration management using it. The Nix store (where all your packages live)
has a huge flaw for secret management though: everything in the Nix store is
globally readable. This means that anyone logged into or running code on the
system could read any secret in the Nix store without any limits. This is
sub-optimal if your goal is to keep secret values secret. There have been a few
approaches to this over the years, but I want to describe how I'm doing it.
Here are my goals for this setup, my implementation of them, and how a few other
secret management strategies don't quite pan out.
At a high level I have these goals:
* It should be trivial to declare new secrets
* Secrets should never be globally readable in any useful form
* If I restart the machine, I should not need to take manual human action to
ensure all of the services come back online
* GPG should be avoided at all costs
As a side goal being able to roll back secret changes would also be nice.
The two biggest tools that come to mind for helping with secret management on
NixOS are NixOps and Morph.
[NixOps](https://github.com/NixOS/nixops) is a tool that helps administrators
operate NixOS across multiple servers at once. I use NixOps extensively in my
own setup. It calls deployment secrets "keys" and they are documented
[here](https://hydra.nixos.org/build/115931128/download/1/manual/manual.html#idm140737322649152).
At a high level they are declared like this:
```nix
deployment.keys.example = {
  text = "this is a super sekrit value :)";
  user = "example";
  group = "keys";
  permissions = "0400";
};
```
This will create a new secret in `/run/keys` that will contain our super secret
value.
[Wait, isn't `/run` an ephemeral filesystem? What happens when the system
reboots?](conversation://Mara/hmm)
Let's make an example system and find out! So let's say we have that `example`
secret from earlier and want to use it in a job. The job definition could look
something like this:
```nix
# create a service-specific user
users.users.example.isSystemUser = true;

# without this group the secret can't be read
users.users.example.extraGroups = [ "keys" ];

systemd.services.example = {
  wantedBy = [ "multi-user.target" ];
  after = [ "example-key.service" ];
  wants = [ "example-key.service" ];

  serviceConfig.User = "example";
  serviceConfig.Type = "oneshot";

  script = ''
    stat /run/keys/example
  '';
};
```
This creates a user called `example` and gives it permission to read deployment
keys. It also creates a systemd service called `example.service` that runs
[`stat(1)`](https://linux.die.net/man/1/stat) to show the permissions of the
key file, running as our `example` user. To avoid systemd thinking our service
failed, we're also going to mark it as a
[oneshot](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files#the-service-section).
Altogether it could look something like
[this](https://gist.github.com/Xe/4a71d7741e508d9002be91b62248144a). Let's see
what `systemctl` has to report:
```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2021-01-20 20:53:54 UTC; 37s ago
Process: 2230 ExecStart=/nix/store/1yg89z4dsdp1axacqk07iq5jqv58q169-unit-script-example-start/bin/example-start (code=exited, status=0/SUCCESS)
Main PID: 2230 (code=exited, status=0/SUCCESS)
IP: 0B in, 0B out
CPU: 3ms
Jan 20 20:53:54 pa example-start[2235]: File: /run/keys/example
Jan 20 20:53:54 pa example-start[2235]: Size: 31 Blocks: 8 IO Block: 4096 regular file
Jan 20 20:53:54 pa example-start[2235]: Device: 18h/24d Inode: 37428 Links: 1
Jan 20 20:53:54 pa example-start[2235]: Access: (0400/-r--------) Uid: ( 998/ example) Gid: ( 96/ keys)
Jan 20 20:53:54 pa example-start[2235]: Access: 2021-01-20 20:53:54.010554201 +0000
Jan 20 20:53:54 pa example-start[2235]: Modify: 2021-01-20 20:53:54.010554201 +0000
Jan 20 20:53:54 pa example-start[2235]: Change: 2021-01-20 20:53:54.398103181 +0000
Jan 20 20:53:54 pa example-start[2235]: Birth: -
Jan 20 20:53:54 pa systemd[1]: example.service: Succeeded.
Jan 20 20:53:54 pa systemd[1]: Finished example.service.
```
So what happens when we reboot? I'll force a reboot in my hypervisor and we'll
find out:
```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
Active: inactive (dead)
```
The service is inactive. Let's see what the status of `example-key.service` is:
```console
$ nixops ssh -d blog-example pa -- systemctl status example-key
● example-key.service
Loaded: loaded (/nix/store/ikqn64cjq8pspkf3ma1jmx8qzpyrckpb-unit-example-key.service/example-key.service; linked; vendor preset: enabled)
Active: activating (start-pre) since Wed 2021-01-20 20:56:05 UTC; 3min 1s ago
Cntrl PID: 610 (example-key-pre)
IP: 0B in, 0B out
IO: 116.0K read, 0B written
Tasks: 4 (limit: 2374)
Memory: 1.6M
CPU: 3ms
CGroup: /system.slice/example-key.service
├─610 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
├─619 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
├─620 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
└─621 inotifywait -qm --format %f -e create,move /run/keys
Jan 20 20:56:05 pa systemd[1]: Starting example-key.service...
```
The service is blocked waiting for the keys to exist. We have to populate the
keys with `nixops send-keys`:
```console
$ nixops send-keys -d blog-example
pa> uploading key example...
```
Now when we check on `example.service`, we get the following:
```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Wed 2021-01-20 21:00:24 UTC; 32s ago
Process: 954 ExecStart=/nix/store/1yg89z4dsdp1axacqk07iq5jqv58q169-unit-script-example-start/bin/example-start (code=exited, status=0/SUCCESS)
Main PID: 954 (code=exited, status=0/SUCCESS)
IP: 0B in, 0B out
CPU: 3ms
Jan 20 21:00:24 pa example-start[957]: File: /run/keys/example
Jan 20 21:00:24 pa example-start[957]: Size: 31 Blocks: 8 IO Block: 4096 regular file
Jan 20 21:00:24 pa example-start[957]: Device: 18h/24d Inode: 27774 Links: 1
Jan 20 21:00:24 pa example-start[957]: Access: (0400/-r--------) Uid: ( 998/ example) Gid: ( 96/ keys)
Jan 20 21:00:24 pa example-start[957]: Access: 2021-01-20 21:00:24.588494730 +0000
Jan 20 21:00:24 pa example-start[957]: Modify: 2021-01-20 21:00:24.588494730 +0000
Jan 20 21:00:24 pa example-start[957]: Change: 2021-01-20 21:00:24.606495751 +0000
Jan 20 21:00:24 pa example-start[957]: Birth: -
Jan 20 21:00:24 pa systemd[1]: example.service: Succeeded.
Jan 20 21:00:24 pa systemd[1]: Finished example.service.
```
This means that NixOps secrets require _manual human intervention_ in order to
repopulate them on server boot. If your server went offline overnight due to an
unexpected issue, your services using those keys could be stuck offline until
morning. This is undesirable for a number of reasons. This, plus the requirement
for the `keys` group (which at the time of writing was undocumented) to be added
to service user accounts, means that while they do work, they are not very
ergonomic.
[You can read secrets from files using something like
`deployment.keys.example.text = "${builtins.readFile ./secrets/example.env}"`,
but it is kind of a pain to have to do that. It would be better to just
reference the secrets by filesystem paths in the first
place.](conversation://Mara/hacker)
On the other hand [Morph](https://github.com/DBCDK/morph) gets this a bit
better. It is sadly even less documented than NixOps is, but it offers a similar
experience via [deployment
secrets](https://github.com/DBCDK/morph/blob/master/examples/secrets.nix). The
main differences that Morph brings to the table are taking paths to secrets and
allowing you to run an arbitrary command on the secret being uploaded. Secrets
are also able to be put anywhere on the disk, meaning that when a host reboots it
will come back up with the most recent secrets uploaded to it.
However, like NixOps, Morph secrets don't have the ability to be rolled back.
This means that if you mess up a secret value you better hope you have the old
information somewhere. This violates what you'd expect from a NixOS machine.
So given these examples, I thought it would be interesting to explore what the
middle path could look like. I chose to use
[age](https://github.com/FiloSottile/age) for encrypting secrets in the Nix
store as well as using SSH host keys to ensure that every secret is decryptable
at runtime by _that machine only_. If you get your hands on the secret
ciphertext, it should be unusable to you.
One of the harder things here will be keeping a list of all of the server host
keys. Recently I added a
[hosts.toml](https://github.com/Xe/nixos-configs/blob/master/ops/metadata/hosts.toml)
file to my config repo for autoconfiguring my WireGuard overlay network. It was
easy enough to add all the SSH host keys for each machine using a command like
this to get them:
[We will cover how this WireGuard overlay works in a future post.](conversation://Mara/hacker)
```console
$ nixops ssh-for-each -d hexagone -- cat /etc/ssh/ssh_host_ed25519_key.pub
firgu....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8+mCR+MEsv0XYi7ohvdKLbDecBtb3uKGQOPfIhdj3C root@nixos
chrysalis> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGDA5iXvkKyvAiMEd/5IruwKwoymC8WxH4tLcLWOSYJ1 root@chrysalis
lufta....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMADhGV0hKt3ZY+uBjgOXX08txBS6MmHZcSL61KAd3df root@lufta
keanu....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGDZUmuhfjEIROo2hog2c8J53taRuPJLNOtdaT8Nt69W root@nixos
```
age lets you use SSH keys for decryption, so I added these keys to my
`hosts.toml` and ended up with something like
[this](https://github.com/Xe/nixos-configs/commit/14726e982001e794cd72afa1ece209eed58d3f38#diff-61d1d8dddd71be624c0d718be22072c950ec31c72fded8a25094ea53d94c8185).
Now we can encrypt secrets on the host machine and safely put them in the Nix
store because they will be readable to each target machine with a command like
this:
```shell
age -d -i /etc/ssh/ssh_host_ed25519_key -o $dest $src
```
From here it's easy to make a function that we can use for generating new
encrypted secrets in the Nix store. First we need to import the host metadata
from the toml file:
```nix
let
  cfg = config.within.secrets;
  metadata = lib.importTOML ../../ops/metadata/hosts.toml;

  mkSecretOnDisk = name:
    { source, ... }:
    pkgs.stdenv.mkDerivation {
      name = "${name}-secret";
      phases = "installPhase";
      buildInputs = [ pkgs.age ];
      installPhase =
        let key = metadata.hosts."${config.networking.hostName}".ssh_pubkey;
        in ''
          age -a -r "${key}" -o $out ${source}
        '';
    };
```
And then we can generate systemd oneshot jobs with something like this:
```nix
mkService = name:
  { source, dest, owner, group, permissions, ... }: {
    description = "decrypt secret for ${name}";
    wantedBy = [ "multi-user.target" ];

    serviceConfig.Type = "oneshot";

    script = with pkgs; ''
      rm -rf ${dest}
      ${age}/bin/age -d -i /etc/ssh/ssh_host_ed25519_key -o ${dest} ${
        mkSecretOnDisk name { inherit source; }
      }

      chown ${owner}:${group} ${dest}
      chmod ${permissions} ${dest}
    '';
  };
```
And from there we just need some [boring
boilerplate](https://github.com/Xe/nixos-configs/blob/master/common/crypto/default.nix#L8-L38)
to define a secret type. Then we declare the secret type and its invocation:
```nix
in {
  options.within.secrets = mkOption {
    type = types.attrsOf secret;
    description = "secret configuration";
    default = { };
  };

  config.systemd.services = let
    units = mapAttrs' (name: info: {
      name = "${name}-key";
      value = (mkService name info);
    }) cfg;
  in units;
}
```
And we have ourselves a NixOS module that allows us to:
* Trivially declare new secrets
* Make secrets in the Nix store useless without the key
* Make every secret be transparently decrypted on startup
* Avoid the use of GPG
* Roll back secrets like any other configuration change
Declaring new secrets works like this (as stolen from [the service definition
for the website you are reading right now](https://github.com/Xe/nixos-configs/blob/master/common/services/xesite.nix#L35-L41)):
```nix
within.secrets.example = {
  source = ./secrets/example.env;
  dest = "/var/lib/example/.env";
  owner = "example";
  group = "nogroup";
  permissions = "0400";
};
```
Barring some kind of cryptographic attack against age, this should allow the
secrets to be stored securely. I am working on a way to make this more generic.
This overall approach was inspired by [agenix](https://github.com/ryantm/agenix)
but made more specific for my needs. I hope this approach will make it easy for
me to manage these secrets in the future.

View File

@ -0,0 +1,12 @@
---
title: "Tailscale on NixOS: A New Minecraft Server in Ten Minutes"
date: 2021-01-19
tags:
- link
redirect_to: https://tailscale.com/blog/nixos-minecraft/
---
# Tailscale on NixOS: A New Minecraft Server in Ten Minutes
Check out this post [on the Tailscale
blog](https://tailscale.com/blog/nixos-minecraft/)!

View File

@ -0,0 +1,395 @@
---
title: How to Setup Prometheus, Grafana and Loki on NixOS
date: 2020-11-20
tags:
- nixos
- prometheus
- grafana
- loki
- promtail
---
# How to Setup Prometheus, Grafana and Loki on NixOS
When setting up services on your home network, sometimes you have questions
along the lines of "how do I know that things are working?". In this blogpost we
will go over a few tools that you can use to monitor and visualize your machine
state so you can answer that. Specifically we are going to use the following
tools to do this:
- [Grafana](https://grafana.com/) for creating pretty graphs and managing
alerts
- [Prometheus](https://prometheus.io/) for storing metrics and as a common
metrics format
- [Prometheus node_exporter](https://github.com/prometheus/node_exporter) for
deriving metrics from system state
- [Loki](https://grafana.com/oss/loki/) as a central log storage point
- [promtail](https://grafana.com/docs/loki/latest/clients/promtail/) to push
logs to Loki
Let's get going!
[Something to note: in here you might see domains using the `.pele` top-level
domain. This domain will likely not be available on your home network. See <a
href="/blog/series/site-to-site-wireguard">this series</a> on how to set up
something similar for your home network. If you don't have such a setup, replace
anything that ends in `.pele` with whatever you normally use for
this.](conversation://Mara/hacker)
## Grafana
Grafana is a service that handles graphing and alerting. It also has some nice
tools to create dashboards. Here we will be using it for a few main purposes:
- Exploring what metrics are available
- Reading system logs
- Making graphs and dashboards
- Creating alerts over metrics or lack of metrics
Let's configure Grafana on a machine. Open that machine's `configuration.nix` in
an editor and add the following to it:
```nix
# hosts/chrysalis/configuration.nix
{ config, pkgs, ... }: {
  # grafana configuration
  services.grafana = {
    enable = true;
    domain = "grafana.pele";
    port = 2342;
    addr = "127.0.0.1";
  };

  # nginx reverse proxy
  services.nginx.virtualHosts.${config.services.grafana.domain} = {
    locations."/" = {
      proxyPass = "http://127.0.0.1:${toString config.services.grafana.port}";
      proxyWebsockets = true;
    };
  };
}
```
[If you have a <a href="/blog/site-to-site-wireguard-part-3-2019-04-11">custom
TLS Certificate Authority</a>, you can set up HTTPS for this deployment. See <a
href="https://github.com/Xe/nixos-configs/blob/master/common/sites/grafana.akua.nix">here</a>
for an example of doing this. If this server is exposed to the internet, you can
use a certificate from <a
href="https://nixos.wiki/wiki/Nginx#TLS_reverse_proxy">Let's Encrypt</a> instead
of your own Certificate Authority.](conversation://Mara/hacker)
Then you will need to deploy it to your cluster with `nixops deploy`:
```console
$ nixops deploy -d home
```
Now open the Grafana server in your browser at http://grafana.pele and login
with the super secure default credentials of admin/admin. Grafana will ask you
to change your password. Please change it to something other than admin.
This is all of the setup we will do with Grafana for now. We will come back to
it later.
## Prometheus
> Prometheus was punished by the gods by giving the gift of knowledge to man. He
> was cast into the bowels of the earth and pecked by birds.

(Oracle Turret, Portal 2)
Prometheus is a service that reads metrics from other services, stores them and
allows you to search and aggregate them. Let's add it to our `configuration.nix`
file:
```nix
# hosts/chrysalis/configuration.nix
services.prometheus = {
  enable = true;
  port = 9001;
};
```
Now let's deploy this config to the cluster with `nixops deploy`:
```console
$ nixops deploy -d home
```
And let's configure Grafana to read from Prometheus. Open Grafana and click on
the gear to the left side of the page. The `Data Sources` tab should be active.
If it is not active, click on `Data Sources`. Then click "add data source" and
choose Prometheus. Set the URL to `http://127.0.0.1:9001` (or with whatever port
you configured above) and leave everything set to the default values. Click
"Save & Test". If there is an error, be sure to check the port number.
![The Grafana UI for adding a data
source](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_145819.png)
Now let's start getting some data into Prometheus with the node exporter.
### Node Exporter Setup
The Prometheus node exporter exposes a lot of information about systems, ranging
from memory and disk usage to systemd service information. There are also
some [other
collectors](https://search.nixos.org/options?channel=20.09&query=prometheus.exporters+enable)
you can set up based on your individual setup; however, we are going to enable
only the node collector here.
In your `configuration.nix`, add an exporters block and configure the node
exporter under `services.prometheus`:
```nix
# hosts/chrysalis/configuration.nix
services.prometheus = {
  exporters = {
    node = {
      enable = true;
      enabledCollectors = [ "systemd" ];
      port = 9002;
    };
  };
}
```
Now we need to configure Prometheus to read metrics from this exporter. In your
`configuration.nix`, add a `scrapeConfigs` block under `services.prometheus`
that points to the node exporter we configured just now:
```nix
# hosts/chrysalis/configuration.nix
services.prometheus = {
  # ...

  scrapeConfigs = [
    {
      job_name = "chrysalis";
      static_configs = [{
        targets = [ "127.0.0.1:${toString config.services.prometheus.exporters.node.port}" ];
      }];
    }
  ];

  # ...
}
# ...
```
[The complicated expression in the target above allows you to change the port of
the node exporter and ensure that Prometheus will always be pointing at the
right port!](conversation://Mara/hacker)
Now we can deploy this to your cluster with nixops:
```console
$ nixops deploy -d home
```
Open the Explore tab in Grafana and type in the following expression:
```
node_memory_MemFree_bytes
```
and hit shift-enter (or click the "Run Query" button in the upper left side of
the screen). You should see a graph showing you the amount of ram that is free
on the host, something like this:
![A graph of the amount of system memory that is available on the host
chrysalis](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_150328.png)
If you want to query other fields, you can type in `node_` into the searchbox
and autocomplete will show what is available. For a full list of what is
available, open the node exporter metrics route in your browser and look through
it.
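If you'd rather poke at it from a terminal, you can also curl the exporter
directly (the port here assumes the configuration above):

```console
$ curl -s http://127.0.0.1:9002/metrics | grep '^node_memory' | head
```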
## Grafana Dashboards
Now that we have all of this information about our machine, let's create a
little dashboard for it and set up a few alerts.
Click on the plus icon on the left side of the Grafana UI to create a new
dashboard. It will look something like this:
![An empty dashboard in
Grafana](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_151205.png)
In Grafana terminology, everything you see in a dashboard is inside a panel.
Let's create a new panel to keep track of memory usage for our server. Click
"Add New Panel" and you will get a screen that looks like this:
![A Grafana panel configuration
screen](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_151609.png)
Let's make this keep track of free memory. Write "Memory Free" in the panel
title field on the right. Write the following query in the textbox next to the
dropdown labeled "Metrics":
```
node_memory_MemFree_bytes
```
and set the legend to `{{job}}`. You should get a graph that looks something
like this:
![A populated
graph](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_152126.png)
This will show you how much memory is free on each machine you are monitoring
with Prometheus' node exporter. Now let's configure an alert for the amount of
free memory being low (where "low" means less than 64 megabytes of RAM free).
Hit save in the upper right corner of the Grafana UI and give your dashboard a
name, such as "Home Cluster Status". Now open the "Memory Free" panel for
editing (click on the name and then click "Edit"), click the "Alert" tab, and
click the "Create Alert" button. Let's configure it to do the following:
- Check if free memory gets below 64 megabytes (64000000 bytes)
- Send the message "Running out of memory!" when the alert fires
You can do that with a configuration like this:
![The above configuration input to the Grafana
UI](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_153419.png)
Save the changes to apply this config.
[Wait a minute. Where will this alert go to?](conversation://Mara/hmm)
It will only show up on the alerts page:
![The alerts page with memory free alerts
configured](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_154027.png)
But we can add a notification channel to customize this. Click on the
Notification Channels tab and then click "New Channel". It should look something
like this:
![Notification Channel
configuration](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_154317.png)
You can send notifications to many services, but let's send one to Discord this
time. Acquire a Discord webhook link from somewhere and paste it in the Webhook
URL field. Name it something like "Discord". It may also be a good idea to make
this the default notification channel using the "Default" checkbox under the
Notification Settings, so that our existing alert will show up in Discord when
the system runs out of memory.
You can configure other alerts like this so you can monitor any other node
metrics you want.
[You can also monitor for the _lack_ of data on particular metrics. If something
that should always be reported suddenly isn't reported, it may be a good
indicator that a server went down. You can also add other services to your
`scrapeConfigs` settings so you can monitor things that expose metrics to
Prometheus at `/metrics`.](conversation://Mara/hacker)
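For example, a hedged sketch of scraping a hypothetical extra service (the job
name and port here are made up for illustration):

```nix
# hosts/chrysalis/configuration.nix
services.prometheus.scrapeConfigs = [
  # ...
  {
    job_name = "my-cool-service";
    static_configs = [{ targets = [ "127.0.0.1:9003" ]; }];
  }
];
```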
Now that we have metrics configured, let's enable Loki for logging.
## Loki
Loki is a log aggregator created by the people behind Grafana. Here we will use
it as a target for all system logs. Unfortunately, the Loki NixOS module is very
basic at the moment, so we will need to configure it with our own custom yaml
file. Create a file in your `configuration.nix` folder called
`loki-local-config.yaml` and copy in the config from [this
gist](https://gist.github.com/Xe/c3c786b41ec2820725ee77a7af551225).
Then enable Loki with your config in your `configuration.nix` file:
```nix
# hosts/chrysalis/configuration.nix
services.loki = {
  enable = true;
  configFile = ./loki-local-config.yaml;
};
```
Promtail is a tool made by the Loki team that sends logs into Loki. Create a
file called `promtail.yaml` in the same folder as `configuration.nix` with the
following contents:
```yaml
server:
  http_listen_port: 28183
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://127.0.0.1:3100/loki/api/v1/push

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
        host: chrysalis
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
```
Now we can add promtail to your `configuration.nix` by creating a systemd
service to run it with this snippet:
```nix
# hosts/chrysalis/configuration.nix
systemd.services.promtail = {
  description = "Promtail service for Loki";
  wantedBy = [ "multi-user.target" ];

  serviceConfig = {
    ExecStart = ''
      ${pkgs.grafana-loki}/bin/promtail --config.file ${./promtail.yaml}
    '';
  };
};
```
Now that you have this all set up, you can push this to your cluster with
nixops:
```console
$ nixops deploy -d home
```
Once that finishes, open up Grafana and configure a new Loki data source with
the URL `http://127.0.0.1:3100`:
![Loki Data Source
configuration](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_161610.png)
Now that you have Loki set up, let's query it! Open the Explore view in Grafana
again, choose Loki as the source, and enter in the query `{job="systemd-journal"}`:
![Loki
search](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201120_162043.png)
[You can also add Loki queries like this to dashboards! Loki also lets you query
by systemd unit with the `unit` field. If you wanted to search for logs from
`foo.service`, you would need a query that looks something like
`{job="systemd-journal", unit="foo.service"}`. You can do many more complicated
things with Loki. Look <a
href="https://grafana.com/docs/grafana/latest/datasources/loki/#search-expression">here</a>
for more information on what you can query. As of the time of writing this
blogpost, you are currently unable to make Grafana alerts based on Loki queries
as far as I am aware.](conversation://Mara/hacker)
---
This barely scrapes the surface of what you can accomplish with a setup like
this. Using more fancy setups you can alert on the rate of metrics changing. I
plan to make NixOS modules to make this setup easier in the future. There is
also a set of options in
[services.grafana.provision](https://search.nixos.org/options?channel=20.09&from=0&size=30&sort=relevance&query=grafana.provision)
that can make it easier to automagically set up Grafana with per-host
dashboards, alerts and all of the data sources that are outlined in this post.
The setup in this post is quite meager, but it should be enough to get you
started with whatever you need to monitor. Adding Prometheus metrics to your
services will go a long way in terms of being able to better monitor things in
production; do not be afraid to experiment!

View File

@ -1,8 +1,7 @@
---
title: Rust Crates that do What the Go Standard library Does
date: 2020-09-27
series: rust
---

# Rust Crates that do What the Go Standard library Does

View File

@ -0,0 +1,351 @@
---
title: Scavenger Hunt Solution
date: 2020-11-25
tags:
- ctf
- wasm
- steganography
- stenography
---
# Scavenger Hunt Solution
On November 22, I sent a
[tweet](https://twitter.com/theprincessxena/status/1330532765482311687) that
contained the following text:
```
#467662 #207768 #7A7A6C #6B2061 #6F6C20 #6D7079
#7A6120 #616C7A #612E20 #5A6C6C #206F61 #61773A
#2F2F6A #6C6168 #6A6C68 #752E6A #736269 #2F6462
#796675 #612E6E #747020 #6D7679 #207476 #796C20
#70756D #767974 #686170 #76752E
```
This was actually the first part of a scavenger hunt/mini CTF that I had set up
in order to see who went down the rabbit hole to solve it. I've had nearly a
dozen people report back to me telling me that they solved all of the puzzles, and
nearly all of them said they had a lot of fun. Here's how to solve each of the
layers of the solution and how I created them.
## Layer 1
The first layer was that encoded tweet. If you notice, everything in it is
formatted as HTML color codes. HTML color codes just so happen to be encoded in
hexadecimal. Looking at the codes you can see `20` come up a lot, which happens
to be the hex encoding of the space character. So, let's turn this into a
continuous hex string with `s/#//g` and `s/ //g`:
[If you've seen a `%20` in a URL before, that is the URL encoded form of the
spacebar!](conversation://Mara/hacker)
```
4676622077687A7A6C6B20616F6C206D7079
7A6120616C7A612E205A6C6C206F6161773A
2F2F6A6C61686A6C68752E6A7362692F6462
796675612E6E7470206D7679207476796C20
70756D76797468617076752E
```
And then turn it into an ASCII string:
> Fvb whzzlk aol mpyza alza. Zll oaaw://jlahjlhu.jsbi/dbyfua.ntp mvy tvyl pumvythapvu.
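In shell terms, both the cleanup and the decode can be done in one pipeline
(`tweet.txt` is a hypothetical file holding the raw tweet text, and this assumes
`xxd`, which ships with vim):

```console
$ tr -d '# \n' < tweet.txt | xxd -r -p
Fvb whzzlk aol mpyza alza. Zll oaaw://jlahjlhu.jsbi/dbyfua.ntp mvy tvyl pumvythapvu.
```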
[Wait, what? this doesn't look like much of anything...wait, look at the
`oaaw://`. Could that be `http://`?](conversation://Mara/hmm)
Indeed it is, my perceptive shark friend! Let's decode the rest of the string
using the [Caesar cipher](https://en.wikipedia.org/wiki/Caesar_cipher):
> You passed the first test. See http://cetacean.club/wurynt.gmi for more information.
Now we're onto something!
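For the shell-inclined, the same decode works with `tr`: a Caesar shift of 7 is
undone by rotating the alphabet forward by 19, which you can spell out as an
explicit character mapping (a sketch, assuming GNU or BSD `tr`):

```console
$ echo 'Fvb whzzlk aol mpyza alza.' | tr 'A-Za-z' 'T-ZA-St-za-s'
You passed the first test.
```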
## Layer 2
Opening http://cetacean.club/wurynt.gmi we see the following:
> wurynt
>
> a father of modern computing, <br />
> rejected by his kin, <br />
> for an unintentional sin, <br />
> creator of a machine to break <br />
> the cipher that this message is encoded in
>
> bq cr di ej kw mt os px uz gh
>
> VI 1 1
> I 17 1
> III 12 1
>
> qghja xmbzc fmqsb vcpzc zosah tmmho whyph lvnjj mpdkf gbsjl tnxqf ktqia mwogp
> eidny awoxj ggjqz mbrcm tkmyd fogzt sqkga udmbw nmkhp jppqs xerqq gdsle zfxmq
> yfdfj kuauk nefdc jkwrs cirut wevji pumqt hrxjr sfioj nbcrc nvxny vrphc r
>
> Correction for the last bit
>
> gilmb egdcr sowab igtyq pbzgv gmlsq udftc mzhqz exbmx zaxth isghc hukhc zlrrk
> cixhb isokt vftwy rfdyl qenxa nljca kyoej wnbpf uprgc igywv qzuud hrxzw gnhuz
> kclku hefzk xtdpk tfjzu byfyi sqmel gweou acwsi ptpwv drhor ahcqd kpzde lguqt
> wutvk nqprx gmiad dfdcm dpiwb twegt hjzdf vbkwa qskmf osjtk tcxle mkbnv iqdbe
> oejsx lgqc
[Hmm, "a father of computing", "rejected by his kin", "an unintentional sin",
"creator of a machine to break a cipher" could that mean Alan Turing? He made
something to break the Enigma cipher and was rejected by the British government
for being gay right?](conversation://Mara/hmm)
Indeed. Let's punch these settings into an [online enigma
machine](https://cryptii.com/pipes/enigma-machine) and see what we get:
```
congr adula tions forfi gurin goutt hisen igmao famys teryy ouhav egott enfar
thert hanan yonee lseha sbefo rehel pmebr eakfr eefol lowth ewhit erabb ittom
araht tpyvz vgjiu ztkhf uhvjq roybx dswzz caiaq kgesk hutvx iplwa donio n
httpc olons lashs lashw hyvec torze dgamm ajayi ndigo ultra zedfi vetan gokil
ohalo fineu ltrah alove ctorj ayqui etrho omega yotta betax raysi xdonu tseve
nsupe rwhyz edzed canad aasia indig oasia twoqu ietki logam maeps ilons uperk
iloha loult rafou rtang ovect orsev ensix xrayi ndigo place limaw hyasi adelt
adoto nion
```
And here is where I messed up with this challenge. Enigma doesn't handle
numbers. It was designed to encode the 26 letters of the Latin alphabet. If you
look at the last bit of the output you can see `onio n` and `o nion`. This
points you to a [Tor hidden
service](https://www.linuxjournal.com/content/tor-hidden-services), but because
I messed this up the two hints point you at slightly wrong onion addresses (tor
hidden service addresses usually have numbers in them). Once I realized this, I
made a correction that just gives away the solution so people could move on to
the next step.
Onwards to
http://yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion/!
## Layer 3
Open your [tor browser](https://www.torproject.org/download/) and punch in the
onion URL. You should get a page that looks like this:
![Mara's
Realm](https://cdn.christine.website/file/christine-static/blog/Screenshot_20201125_101515.png)
This shows some confusing combinations of letters and some hexadecimal text.
We'll get back to the hexadecimal text in a moment, but let's take a closer look
at the letters. There is a hint here to search the plover dictionary.
[Plover](http://www.openstenoproject.org/) is a tool that allows hobbyists to
learn [stenography](https://en.wikipedia.org/wiki/Stenotype) to type at the rate
of human speech. My Moonlander has a layer for typing out stenography strokes,
so let's enable it and type them out:
> Follow the white rabbit
>
> Go to/test. w a s m
Which we can reinterpret as:
> Follow the white rabbit
>
> Go to /test.wasm
[The joke here is that many people seem to get stenography and steganography
confused, so that's why there's stenography in this steganography
challenge!](conversation://Mara/hacker)
Going to /test.wasm we get a WebAssembly download. I've uploaded a copy to my
blog's CDN
[here](https://cdn.christine.website/file/christine-static/blog/test.wasm).
## Layer 4
Going back to that hexadecimal text from above, we see that it says this:
> go get tulpa.dev/cadey/hlang
This points to the source repo of [hlang](https://h.christine.website), which is
a satirical "programming language" that can only print the letter `h` (or the
lojbanic h `'` for that sweet sweet internationalisation cred). Something odd
about hlang is that it uses [WebAssembly](https://webassembly.org/) to execute
all programs written in it (this helps it reach its "no sandboxing required" and
"zero* dependencies" goals).
Let's decompile this WebAssembly file with
[`wasm2wat`](https://webassembly.github.io/wabt/doc/wasm2wat.1.html):
```console
$ wasm2wat /data/test.wasm
<output too big, see https://git.io/Jkyli>
```
Looking at the decompilation we can see that it imports a host function `h.h`
as the hlang documentation suggests, and then calls it over and over:
```lisp
(module
(type (;0;) (func (param i32)))
(type (;1;) (func))
(import "h" "h" (func (;0;) (type 0)))
(func (;1;) (type 1)
i32.const 121
call 0
i32.const 111
call 0
i32.const 117
call 0
; ...
```
There's a lot of `32` in the output. `32` is the base-10 form of `0x20`, which
is the space character in ASCII. Let's reinterpret the numbers as ASCII
characters and see what we get:
> you made it, this is the end of the line however. writing all of this up takes
> a lot of time. if you made it this far, email me@christine.website to get your
> name entered into the hall of heroes. be well.
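Here's one way to do that extraction, sketched in Rust (assuming the
decompilation was saved to a file named `test.wat`; the filename is mine, not
anything official): grab every `i32.const` argument and read it as an ASCII
code.
```rust
use std::fs;

fn main() {
    // Hypothetical filename; wasm2wat writes to stdout by default.
    let wat = fs::read_to_string("test.wat").unwrap();
    let message: String = wat
        .lines()
        // Keep only lines like `i32.const 121` and take the number.
        .filter_map(|line| line.trim().strip_prefix("i32.const "))
        // Parse each number and treat it as an ASCII byte.
        .filter_map(|n| n.trim().parse::<u8>().ok())
        .map(char::from)
        .collect();
    println!("{}", message);
}
```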
## How I Implemented This
Each layer was designed independently and then stitched together with the
others later.
One of the first steps was to create the website for Mara's Realm. I started by
writing out all of the prose into a file called `index.md` and then I ran
[sw](https://github.com/jroimartin/sw) using [Pandoc](https://pandoc.org/) for
markdown conversion.
Then I created the WebAssembly binary by locally hacking a copy of hlang to
allow arbitrary strings. I stuck it in the source directory for the website and
told `sw` to not try and render it as markdown.
Once I had the HTML source, I copied it to a machine on my network at
`/srv/http/marahunt` using this command:
```console
$ rsync \
-avz \
site.static/ \
root@192.168.0.127:/srv/http/marahunt
```
And then I created a tor hidden service using the
[services.tor.hiddenServices](https://search.nixos.org/options?channel=20.09&from=0&size=30&sort=relevance&query=services.tor.hiddenServices)
options:
```nix
services.tor = {
enable = true;
hiddenServices = {
"hunt" = {
name = "hunt";
version = 3;
map = [{
port = 80;
toPort = 80;
}];
};
};
};
```
Once I pushed this config to that server, I grabbed the hostname from
`/var/lib/tor/onion/hunt/hostname` and set up an nginx virtualhost:
```nix
services.nginx = {
virtualHosts."yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion" =
{
root = "/srv/http/marahunt";
};
};
```
And then I pushed the config again and tested it with curl:
```console
$ curl -H "Host: yvzvgjiuz5tkhfuhvjqroybx6d7swzzcaia2qkgeskhu4tv76xiplwad.onion" http://127.0.0.1 | grep title
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3043 100 3043 0 0 2971k 0 --:--:-- --:--:-- --:--:-- 2971k
<title>Mara's Realm</title>
.headerSubtitle { font-size: 0.6em; font-weight: normal; margin-left: 1em; }
<a href="index.html">Mara's Realm</a> <span class="headerSubtitle">sh0rk in the cloud</span>
```
Once I was satisfied with the HTML, I opened up an enigma encoder and started
writing out the message congratulating the user for figuring out "this enigma of
a mystery". I also included the onion URL (with the above mistake) in that
message.
Then I started writing the wurynt page on my
[gemini](https://gemini.circumlunar.space/) server. wurynt was coined by blindly
pressing 6 keys on my keyboard. I added a little poem about Alan Turing to give
a hint that this was an Enigma cipher and then copied the Enigma settings onto
the page just in case. It turned out that I was using the default settings for
the [Cryptii Enigma simulator](https://cryptii.com/pipes/enigma-machine), so
this was not needed; however it was probably better to include them regardless.
This is where I messed up as I mentioned earlier. Once I realized my mistake in
trying to encode the onion address twice, I decided it would be best to just
give away the answer on the page, so I added the correct onion URL to the end of
the enigma message so that it wouldn't break flow for people.
The final part was to write and encode the message that I would tweet out. I
opened a scratch buffer, wrote out the "You passed the first test" line, encoded
it with the Caesar cipher and then encoded the result of that into hex. After a
lot of rejiggering and rewriting to make its length a multiple of 3 bytes (each
color code holds exactly 3 bytes), I reformatted it as HTML color codes and
tweeted it without context.
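The encode direction is the mirror image. Here is a small sketch of it (mine,
not the original script): Caesar-shift the text forward 7 places, hex-encode
it, then chunk the hex into 6-digit "color codes".
```rust
fn main() {
    let plain = "You passed the first test. See http://cetacean.club/wurynt.gmi for more information.";
    // Shift every letter forward 7 places (the Caesar step).
    let cipher: String = plain
        .chars()
        .map(|c| match c {
            'a'..='z' => ((c as u8 - b'a' + 7) % 26 + b'a') as char,
            'A'..='Z' => ((c as u8 - b'A' + 7) % 26 + b'A') as char,
            _ => c,
        })
        .collect();
    // Hex-encode the bytes, then chunk them into 3-byte color codes.
    let hex: String = cipher.bytes().map(|b| format!("{:02X}", b)).collect();
    let codes: Vec<String> = hex
        .as_bytes()
        .chunks(6)
        .map(|c| format!("#{}", std::str::from_utf8(c).unwrap()))
        .collect();
    println!("{}", codes.join(" "));
}
```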
## Feedback I Got
Some of the emails and Twitter DMs I got had some useful and amusing feedback.
Here are some of my favorites:
> my favourite part was the opportunity to go down different various rabbit
> holes (I got to learn about stenography and WASM, which I'd never looked
> into!)
> I want to sleep. It's 2 AM here, but a friend sent me the link an hour ago and
> I'm a cat, so the curiosity killed me.
> That was a fun little game. Thanks for putting it together.
> oh *noooo* this is going to nerd snipe me
> I'm amused that you left the online enigma emulator on default settings.
> I swear to god I'm gonna beach your orca ass
## Improvements For Next Time
Next time I'd like to try and branch out from just using ASCII. I'd like to
throw other encodings into the game (maybe even have a stage written in
EBCDIC-formatted Esperanto or something crazy like that). I was also considering
having some public/private key crypto in the mix to stretch people's skillsets.
Something I will definitely do next time is make sure that all of the layers
are solvable. I really messed up with the Enigma step and had to unblock people
by DMing them the answer. Always make sure your puzzles can be solved.
## Hall of Heroes
(in no particular order)
- Saphire Lattice
- Open Skies
- Tralomine
- AstroSnail
- Dominika
- pbardera
- Max Hollman
- Vojtěch
- [object Object]
- Bytewave
Thank you for solving this! I'm happy this turned out so successfully. More to
come in the future.
🙂

View File

@ -0,0 +1,69 @@
---
title: "Site Update: RSS Bandwidth Fixes"
date: 2021-01-14
tags:
- devops
- optimization
---
# Site Update: RSS Bandwidth Fixes
Well, I think I found out where my Kubernetes cluster cost came from. For
context, this blog gets a lot of traffic. Since the last deploy, my blog has
served its RSS feed over 19,000 times. I have some pretty naive code powering
the RSS feed. It basically looked something like this:
- Write RSS feed content-type and beginning of feed
- For every post I have ever made, include its metadata and content
- Write end of RSS feed
This code was _fantastically simple_ to develop, however it was very expensive
in terms of bandwidth. When you add it all up, my RSS feed response used to be
more than _one megabyte_, and it was only getting larger as I posted more
content.
This is unsustainable, so I have taken multiple actions to try and fix this from
several angles.
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Rationale: this is my
most commonly hit and largest endpoint. I want to try and cut down its size.
<br><br>current feed (everything): 1356706 bytes<br>20 posts: 177931 bytes<br>10
posts: 53004 bytes<br>5 posts: 29318 bytes <a
href="https://t.co/snjnn8RFh8">pic.twitter.com/snjnn8RFh8</a></p>&mdash; Cadey
A. Ratio (@theprincessxena) <a
href="https://twitter.com/theprincessxena/status/1349892662871150594?ref_src=twsrc%5Etfw">January
15, 2021</a></blockquote> <script async
src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
[Yes, that graph is showing in _gigabytes_. We're so lucky that bandwidth is
free on Hetzner.](conversation://Mara/hacker)
First I finally set up the site to run behind Cloudflare. The Cloudflare
settings are set very permissively, so your RSS feed reading bots or whatever
should NOT be affected by this change. If you run into any side effects as a
result of this change, [contact me](/contact) and I can fix it.
Second, I also now set cache control headers on every response. By default the
"static" pages are cached for a day and the "dynamic" pages are cached for 5
minutes. This should allow new posts to show up quickly as they have previously.
Thirdly, I set up
[ETags](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) for the
feeds. Each of my feeds will send an ETag in a response header. Please use this
tag in future requests to ensure that you don't ask for content you already
have. From what I recall most RSS readers should already support this, however
I'll monitor the situation as reality demands.
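As a concrete sketch of that flow (hedged: this uses the
[reqwest](https://docs.rs/reqwest) crate with its `blocking` feature, and the
feed URL is illustrative), a client saves the `ETag` from one response and
replays it as `If-None-Match` on the next:
```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let url = "https://christine.website/blog.rss"; // illustrative feed URL

    // First fetch: remember the ETag the server hands back.
    let first = client.get(url).send()?;
    let etag = first.headers().get("etag").cloned();

    // Later fetch: replay the ETag so the server can answer 304.
    let mut req = client.get(url);
    if let Some(tag) = etag {
        req = req.header("if-none-match", tag);
    }
    let second = req.send()?;
    // 304 Not Modified means your cached copy is still fresh.
    println!("{}", second.status());
    Ok(())
}
```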
Lastly, I adjusted the
[ttl](https://cyber.harvard.edu/rss/rss.html#ltttlgtSubelementOfLtchannelgt) of
the RSS feed so that compliant feed readers should only check once per day. I've
seen some feed readers request the feed up to every 5 minutes, which is very
excessive. Hopefully this setting will gently nudge them into behaving.
As a nice side effect I should have slightly lower RAM usage on the blog server
too! Right now it's sitting at about 58 and a half MB of RAM, however with fewer
copies of my posts sitting in memory this should fall by a significant amount.
If you have any feedback about this, please [contact me](/contact) or mention me
on Twitter. I read my email frequently and am notified about Twitter mentions
very quickly.

View File

@ -0,0 +1,24 @@
---
title: The Source Version 1.0.0 Release
date: 2020-12-25
tags:
- ttrpg
---
# The Source Version 1.0.0 Release
After hours of work and adjustment, I have finally finished version 1 of my
tabletop roleplaying game The Source. It is available on
[itch.io](https://withinstudios.itch.io/q7rvfw33fw) with an added 50% discount
for readers of my blog. This discount will only last for the next two weeks.
<iframe src="https://itch.io/embed/866470?linkback=true" width="552"
height="167" frameborder="0"><a
href="https://withinstudios.itch.io/the-source">The Source by
Within</a></iframe>
Patrons (of any price tier) can claim a free copy
[here](https://withinstudios.itch.io/the-source/patreon-access). Your support
gives me so much.
Merry Christmas, all.

View File

@ -0,0 +1,26 @@
---
title: Trisiel Update
date: 2020-12-04
series: olin
tags:
- trisiel
---
# Trisiel Update
The project I formerly called
[wasmcloud](/blog/wasmcloud-progress-domains-2020-10-31) has now been renamed to
Trisiel after the discovery of a name conflict. The main domain for Trisiel is
now https://trisiel.com to avoid any confusion between our two projects.
Planning for implementing and hosting Trisiel is still in progress. I will give
more updates as they are ready to be released. For more up-to-the-minute
information, please follow the Twitter account
[@trisielcloud](https://twitter.com/trisielcloud); I will be posting there as I
have more information.
> I am limitless. There is no cage or constraint that can corral me into one
> constant place. I am limitless. I can change, shift, overcome, transform,
> because I am not bound to a thing that serves me, and my body serves me.
Quantusum, James Mahu

View File

@ -0,0 +1,95 @@
---
title: Plea to Twitter
date: 2020-12-14
---
**NOTE**: This is a very different kind of post compared to what I usually
write. If you or anyone you know works at Twitter, please link this to them. I
am in a unique situation and the normal account recovery means do not work. If
you work at Twitter and are reading this, my case number is [redacted].
**EDIT**(19:51 M12 14 2020): My account is back. Thank you anonymous Twitter
support people. For everyone else, please take this as an example of how
**NOT** to handle account issues. The fact that I had to complain loudly on
Twitter to get this weird edge case taken care of is ludicrous. I'd gladly pay
Twitter just to have a support mechanism that gets me an actual human without
having to complain on Twitter.
# Plea to Twitter
On Sunday, December 13, 2020, I noticed that I was locked out of my Twitter
account. If you go to [@theprincessxena](https://twitter.com/theprincessxena)
today, you will see that the account is locked out for "unusual activity". I
don't know what I did to cause this to happen (though I have a few theories) and
I hope to explain them in the headings below. I have gotten no emails or contact
from Twitter about this yet. I have a backup account at
[@CadeyRatio](https://twitter.com/CadeyRatio) as a stopgap. I am also on
mastodon as [@cadey@mst3k.interlinked.me](https://mst3k.interlinked.me/@cadey).
In place of my tweeting about quarantine life, I am writing about my experiences
[here](http://cetacean.club/journal/).
## Why I Can't Unlock My Account
I can't unlock my account the normal way because I forgot to set up two factor
authentication and I also forgot to change the phone number registered with the
account to my Canadian one when I [moved to
Canada](/blog/life-update-2019-05-16). I remembered to do this change for all of
the other accounts I use regularly except for my Twitter account.
In order to stop having to pay T-Mobile $70 per month, I transferred my phone
number to [Twilio](https://www.twilio.com/). This combined with some clever code
allowed me to gracefully migrate to my new Canadian number. Unfortunately,
Twitter flat-out refuses to send authentication codes to Twilio numbers. It's
probably to prevent spam, but it would be nice if there was an option to get the
authentication code over a phone call.
## Theory 1: International Travel
Recently I needed to travel internationally in order to start my new job at
[Tailscale](https://tailscale.com/). Due to an unfortunate series of events over
two months, I needed to actually travel internationally to get a new visa. This
led me to take a very boring trip to Minnesota for a week.
During that trip, I tweeted and fleeted about my travels. I took pictures and
was in my hotel room a lot.
[We can't dig up the link for obvious reasons, but one person said they were
always able to tell when we are traveling because it turns the twitter account
into a fast food blog.](conversation://Mara/hacker)
I think Twitter may have locked out my account because I was suddenly in
Minnesota after being in Canada for almost a year.
## Theory 2: Misbehaving API Client
I use [mi](https://github.com/Xe/mi) as part of my new blogpost announcement
pipeline. One of the things mi does is submits new blogposts and some metadata
about them to Twitter. I haven't been able to find any logs to confirm this, but
if something messed up in a place that was unlogged somehow, it could have
triggered some kind of anti-abuse pipeline.
## Theory 3: NixOS Screenshot Set Off Some Bad Thing
One of my recent tweets that I can't find anymore is a tweet about a NixOS
screenshot for my work machine. I think that some part of the algorithm
somewhere really hated it, and thus triggered the account lock. I don't really
understand how a screenshot of KDE 5 showing neofetch output could make my
account get locked, but with enough distributed machine learning anything can
happen.
## Theory 4: My Password Got Cracked
I used a random password generated with iCloud for my Twitter password.
Theoretically this could have been broken, but I doubt it.
---
Overall, I just want to be able to tweet again. Please spread this around for
reach. I don't like using my blog to reach out like this, but I've been unable
to find anyone that knows someone at Twitter so far and I feel this is the best
way to broadcast it. I'll update this post with the resolution to this problem
when I get one.
I think the International Travel theory is the most likely scenario. I just want
a human to see this situation and help fix it.

View File

@ -0,0 +1,230 @@
---
title: Various Updates
date: 2020-11-18
tags:
- personal
- consulting
- docker
- nixos
---
# Various Updates
Immigration purgatory is an experience. It's got a lot of waiting and there is a
lot of uncertainty that can make it feel stressful. Like I said
[before](/blog/new-adventures-2020-10-24), I'm not concerned; however I have a
lot of free time on my hands and I've been using it to make some plans for the
blog (and a new offering for companies that need help dealing with the new
[Docker Hub rate
limits](https://docs.docker.com/docker-hub/download-rate-limit/)) in the future.
I'm gonna outline them below in their own sections. This blogpost was originally
about 4 separate blogposts that I started and abandoned because I had trouble
focusing on finishing them. Stress sucks lol.
## WebMention Support
I recently deployed [mi v1.0.0](https://github.com/Xe/mi) to my home cluster. mi
is a service that handles a lot of personal API tasks including the automatic
post notifications to Twitter and Mastodon. The old implementation was in Go and
stored its data in RethinkDB. I also have a snazzy frontend in Elm for mi. This
new version is rewritten from scratch to use Rust, [Rocket](https://rocket.rs/)
and SQLite. It is also fully
[nixified](https://github.com/Xe/mi/blob/mara/default.nix) and is deployed to my
home cluster via a [NixOS
module](https://github.com/Xe/nixos-configs/blob/master/common/services/mi.nix).
One of the major new features I have in this rewrite is
[WebMention](https://www.w3.org/TR/webmention/) support. WebMentions allow
compatible websites to "mention" my articles or other pages on my main domains
by sending a specially formatted HTTP request to mi. I am still in the early
stages of integrating mi into my site code, but eventually I hope to have a list
of places that articles are mentioned in each post. The WebMention endpoint for
my site is `https://mi.within.website/api/webmention/accept`. I have added
WebMention metadata into the HTML source of the blog pages as well as in the
`Link` header as the W3 spec demands.
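For the curious, here is roughly what that specially formatted request looks
like, sketched in Rust with the reqwest crate (`blocking` feature assumed; the
source URL is a made-up example). Per the W3C spec it is a form-encoded POST
naming the page that links and the page being linked to.
```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .post("https://mi.within.website/api/webmention/accept")
        .form(&[
            // The page on your site that links to mine (hypothetical).
            ("source", "https://example.com/posts/my-reply"),
            // The page of mine that it mentions.
            ("target", "https://christine.website/blog/"),
        ])
        .send()?;
    println!("{}", resp.status());
    Ok(())
}
```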
If you encounter any issues with this feature, please [let me know](/contact) so
I can get it fixed as soon as possible.
### Thoughts on Elm as Used in mi
[Elm](https://elm-lang.org/) is an interesting language for making single page
applications. The old version of mi was the first time I had really ever used
Elm for anything serious and after some research I settled on using
[elm-spa](https://www.elm-spa.dev/) as a framework to smooth over some of the
weirder parts of the language. elm-spa worked great at first. All of the pages
were separated out into their own components and the routing setup was really
intuitive (if a bit weird because of the magic involved). It's worked great for
a few years and has been very low maintenance.
However when I was starting to implement the backend of mi in Rust, I tried to
nixify the elm-spa frontend I had made. This was a disaster. The magic that
elm-spa relied on fell apart, and _at the time I attempted this_ it was very
difficult to pull off.
As a result I ended up rewriting the frontend in very very boring Elm using
information from the [Elm Guide](https://guide.elm-lang.org/) and a lot of
blogposts and help from the Elm slack. Overall this was a successful experiment
and I can easily see this new frontend (which I have named sina as a compound
[toki pona](https://tokipona.org/) pun) becoming a powerful tool for
investigating and managing the data in mi.
[Special thanks to malinoff, wolfadex, chadtech and mfeineis on the Elm slack
for helping with the weird issues involved in getting a split model approach
working.](conversation://Mara/hacker)
Feel free to check out the code [here](https://github.com/Xe/mi/tree/mara/sina).
I may try to make an Elm frontend to my site for people that use the Progressive
Web App support.
### elm2nix
[elm2nix](https://github.com/cachix/elm2nix) is a very nice tool that lets you
generate Nix definitions from Elm packages, however the
template it uses is a bit out of date. To fix it you need to do the following:
```console
$ elm2nix init > default.nix
$ elm2nix convert > elm-srcs.nix
$ elm2nix snapshot
```
Then open `default.nix` in your favorite text editor and change this:
```nix
buildInputs = [ elmPackages.elm ]
++ lib.optional outputJavaScript nodePackages_10_x.uglify-js;
```
to this:
```nix
buildInputs = [ elmPackages.elm ]
++ lib.optional outputJavaScript nodePackages.uglify-js;
```
and this:
```nix
uglifyjs $out/${module}.${extension} --compress 'pure_funcs="F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9",pure_getters,keep_fargs=false,unsafe_comps,unsafe' \
| uglifyjs --mangle --output=$out/${module}.min.${extension}
```
to this:
```nix
uglifyjs $out/${module}.${extension} --compress 'pure_funcs="F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9",pure_getters,keep_fargs=false,unsafe_comps,unsafe' \
| uglifyjs --mangle --output $out/${module}.min.${extension}
```
These issues should be fixed in the next release of elm2nix.
## New Character in the Blog Cutouts
As I mentioned [in the past](/blog/how-mara-works-2020-09-30), I am looking into
developing out other characters for my blog. I am still in the early stages of
designing this, but I think the next character in my blog is going to be an
anthro snow leopard named Alicia. I want Alicia to be a beginner that is very
new to computer programming and other topics, which would then make Mara into
more of a teacher type. I may also introduce my own OC Cadey (the orca looking
thing you can see [here](https://christine.website/static/img/avatar_large.png)
or in the favicon of my site) into the mix to reply to these questions in
something closer to the Socratic method.
Some people have joked that the introduction of Mara turned my blog into a shark
visual novel that teaches you things. This sounds hilarious to me, and I am
looking into what it would take to make an actual visual novel on a page on my
blog using Rust and WebAssembly. I am in very early planning stages for this, so
don't expect this to come out any time soon.
## Gergoplex Build
My [Gergoplex kit](https://www.gboards.ca/product/gergoplex) finally came in
yesterday, and I got to work soldering it up with some switches and applying the
keycaps.
![Me soldering the Gergoplex](https://cdn.christine.website/file/christine-static/img/keeb/gergoplex/EnEYNxvW4AEfWcH.jpg)
![A glory shot of the Gergoplex](https://cdn.christine.website/file/christine-static/img/keeb/gergoplex/Elm3dN8XUAAYHws.jpg)
I picked the Pro Red linear switches with a 35 gram spring in them (read: they
need 35 grams of force to actuate, which is lighter than most switches) and
typing on it is buttery smooth. The keycaps are a boring black, but they look
nice on it.
Overall this kit (with the partial board, switches and keycaps) cost me about
US$124 (not including shipping) with the costs looking something like this:
| Name | Count | Cost |
| :------------------------- | :----- | :---- |
| Gergoplex Partial Kit | 1 | $70 |
| Choc Pro Red 35g switches | 4 | $10 |
| Keycaps (15) | 3 | $30 |
| Braided interconnect cable | 1 | $7 |
| Mini-USB cable | 1 | $7 |
I'd say this was a worthwhile experience. I haven't really soldered anything
since I was in high school and it was fun to pick up the iron again and make
something useful. If you are looking for a beginner soldering project, I can't
recommend the Gergoplex enough.
I also picked up some extra switches and keycaps (prices not listed here) for a
future project involving an eInk display. More on that when it is time.
## Branch Conventions
You may have noticed that some of my projects have default branches named `main`
and others have default branches named `mara`. This difference is very
intentional. Repos with the default branch `main` generally contain code that is
"stable" and contains robust and reusable code. Repos with the default branch
`mara` are generally my experimental repos and the code in them may not be the
most reusable across other projects. mi is a repo with a `mara` default branch
because it is a very experimental thing. In the future I may promote it up to
having a `main` branch, however for now it's less effort to keep things the way
they are.
## Docker Consulting
The new [Docker Hub rate
limits](https://docs.docker.com/docker-hub/download-rate-limit/) have thrown a
wrench into many CI/CD setups and created uncertainty about how CI services will
handle this. Many build pipelines implicitly trust the Docker Hub to be up and
to serve the appropriate image so that your build can work. Many organizations
use their own Docker registry (GHCR, AWS/Google Cloud image registries,
Artifactory, etc.), however most image build definitions I've seen start out
with something like this:
```Dockerfile
FROM golang:alpine
```
which will implicitly pull from the Docker Hub. This can lead to bad things.
If you would like to have a call with me for examining your process for building
Docker images in CI and get a list of actionable suggestions for how to work
around this, [contact me](/contact) so that we can discuss pricing and
scheduling.
I have been using Docker for my entire professional career (way back since
Docker required you to recompile your kernel to enable cgroup support in public
beta) and I can also discuss methods to make your Docker images as small as they
can possibly get. My record smallest Docker image is 5 MB.
If either of these prospects interest you, please contact me so we can work
something out.
---
Here's hoping that the immigration purgatory ends soon. I'm lucky enough to have
enough cash built up that I can weather this jobless month. I've been using this
time to work on personal projects (like mi and
[wasmcloud](https://wasmcloud.app)) and better myself. I've also done a little
writing that I plan to release in the future after I clean it up.
In retrospect I probably should have done [NaNoWriMo](https://nanowrimo.org/)
seeing that I basically will have the entire month of November jobless. I've had
an idea for a while about someone that goes down the rabbit hole of mysticism
and magick, but I may end up incorporating that into the visual novel project I
mentioned in the Elm section.
Be well and stay safe out there. Wear a mask, stay at home.

View File

@ -4,6 +4,36 @@ date: 2020-06-17
series: v
---
EDIT(Xe): 2020 M12 22
Hi Hacker News. Please read the below notes. I am now also blocked by the V
team on Twitter.
<blockquote class="twitter-tweet"><p lang="und" dir="ltr"><a href="https://t.co/WIqX73GB5Z">pic.twitter.com/WIqX73GB5Z</a></p>&mdash; Cadey A. Ratio (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1341525594715140098?ref_src=twsrc%5Etfw">December 22, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
EDIT(Xe): 2020 M06 23
I do not plan to make any future update posts about the V programming language
in the future. The V community is something I would really rather not be
associated with. This is an edited-down version of the post that was released
last week (2020 M06 17).
As of the time of writing this note to the end of this post and as far as I am
aware, I am banned from being able to contribute to the V language in any form.
I am therefore forced to consider that the V project will respond to criticism
of their language with bans. This subjective view of reality may not be accurate
to what others see.
I would like to see this situation result in a net improvement for everyone
involved. V is an interesting take on a stagnant field of computer science, but
I cannot continue to comment on this language or give it any of the signal boost
I have given it with this series of posts.
Thank you for reading. I will continue with my normal posts in the next few
days.
Be well.
# V Update - June 2020
Every so often I like to check in on the [V Programming Language][vlang]. It's been
@ -237,28 +267,3 @@ https://github.com/vlang/v/issues/new/choose
Like I said before, I also cannot file new issues about this. So if you are willing
to help me out, please open an issue about this.
---
EDIT(Xe): 2020 M06 23
I do not plan to make any future update posts about the V programming language
in the future. The V community is something I would really rather not be
associated with. This is an edited-down version of the post that was released
last week (2020 M06 17).
As of the time of writing this note to the end of this post and as far as I am
aware, I am banned from being able to contribute to the V language in any form.
I am therefore forced to consider that the V project will respond to criticism
of their language with bans. This subjective view of reality may not be accurate
to what others see.
I would like to see this situation result in a net improvement for everyone
involved. V is an interesting take on a stagnant field of computer science, but
I cannot continue to comment on this language or give it any of the signal boost
I have given it with this series of posts.
Thank you for reading. I will continue with my normal posts in the next few
days.
Be well.

View File

@ -1,5 +1,5 @@
---
title: "Trisiel Progress: Hello, World!"
date: 2019-12-08
series: olin
tags:
@ -11,7 +11,7 @@ tags:
I have been working off and on over the years and have finally created the base
of a functions as a service backend for [WebAssembly][wasm] code. I'm code-naming this
wasmcloud. [Trisiel][wasmcloud] is a pre-alpha prototype and is currently very much work in
progress. However, it's far enough along that I would like to explain what I
have been doing for the last few years and what it's all built up to.
@ -100,7 +100,7 @@ I've even written a few blogposts about Olin:
But, this was great for running stuff interactively and via the command line. It
left me wanting more. I wanted to have that mythical functions as a service
backend that I've been dreaming of. So, I created [Trisiel][wasmcloud].
## h
@ -144,9 +144,9 @@ world. I even got this program running on bare metal:
[hlang]: https://h.christine.website
[vlang]: https://vlang.io
## Trisiel
[Trisiel][wasmcloud] is the culmination of all of this work. The goal of
wasmcloud is to create a functions as a service backend for running people's
code in an isolated server-side environment.
@ -181,11 +181,11 @@ Top-level flags (use "wasmcloud flags" for a full list):
This tool lets you do a few basic things:
- Authenticate with the Trisiel server
- Create handlers from WebAssembly files that meet the CommonWA API as realized
by Olin
- Get logs for individual handler invocations
- Run WebAssembly modules locally like they would get run on Trisiel
Nearly all of the complexity is abstracted away from users as much as possible.

View File

@ -0,0 +1,202 @@
---
title: "Trisiel Progress: Rewritten in Rust"
date: 2020-10-31
series: olin
tags:
- wasm
- trisiel
- wasmer
---
# Trisiel Progress: Rewritten in Rust
It's been a while since I had the [last update for
Trisiel](/blog/wasmcloud-progress-2019-12-08). In that time I have gotten a
lot done. As the title mentions I have completely rewritten Trisiel's entire
stack in Rust. Part of the reason was for [increased
speed](/blog/pahi-benchmarks-2020-03-26) and the other part was to get better at
Rust. I also wanted to experiment with running Rust in production and this has
been an excellent way to do that.
Trisiel is going to have a few major parts:
- The API (likely to be hosted at `api.trisiel.com`)
- The Executor (likely to be hosted at `run.trisiel.dev`)
- The Panel (likely to be hosted at `panel.trisiel.com`)
- The command line tool `trisiel`
- The Documentation site (likely to be hosted at `docs.trisiel`)
These parts will work together to implement a functions as a service platform.
[The executor is on its own domain to prevent problems like <a
href="https://github.blog/2013-04-05-new-github-pages-domain-github-io/">this
GitHub Pages vulnerability</a> from 2013. It is on a `.lgbt` domain because LGBT
rights are human rights.](conversation://Mara/hacker)
I have also set up a landing page at
[trisiel.com](https://trisiel.com) and a twitter account at
[@trisielcloud](https://twitter.com/trisielcloud). Right now these are
placeholders. I wanted to register the domains before they were taken by anyone
else.
## Architecture
My previous attempt at Trisiel had more of a four tier webapp setup. The
overall stack looked something like this:
- Nginx in front of everything
- The api server that did about everything
- The executors that waited on message queues to run code and push results to
the requester
- Postgres
- A message queue to communicate with the executors
- IPFS to store WebAssembly modules
In simple testing, this works amazingly. The API server will send execution
requests to the executors and everything will usually work out. However, the
message queue I used was very "fire and forget" and had difficulties with
multiple executors set up to listen on the queue. Additionally, the added
indirection of needing to send the data around twice means that it would have
difficulties scaling globally due to ingress and egress data costs. This model
is solid and _probably would have worked_ with some compression or other
improvements like that, but overall I was not happy with it and decided to scrap
it while I was porting the executor component to Rust. If you want to read the
source code of this iteration of Trisiel, take a look
[here](https://tulpa.dev/within/wasmcloud).
The new architecture of Trisiel looks something like this:
- Nginx in front of everything
- An API server that handles login with my gitea instance
- The executor server that listens over https
- Postgres
- Backblaze B2 to store WebAssembly modules
The main change here is the fact that the executor listens over HTTPS, avoiding
_a lot_ of the overhead involved in running this on a message queue. It's also
much simpler to implement and allows me to reuse a vast majority of the
boilerplate that I developed for the Trisiel API server.
This new version of Trisiel is also built on top of
[Wasmer](https://wasmer.io/). Wasmer is a seriously fantastic library for this
and getting up and running was absolutely trivial, even though I knew very
little Rust when I was writing [pa'i](/blog/pahi-hello-world-2020-02-22). I
cannot recommend it enough if you ever want to execute WebAssembly on a server.
## Roadmap
At this point, I can create new functions, upload them to the API server and
then trigger them to be executed. The output of those functions is not returned
to the user at this point. I am working on ways to implement that. There is also
very little accounting for what resources and system calls are used, however it
does keep track of execution time. The executor also needs to have the request
body of the client be wired to the standard in of the underlying module, which
will enable me to parse CGI replies from WebAssembly functions. This will allow
you to host HTTP endpoints on Trisiel using the same code that powers
[this](https://olin.within.website) and
[this](http://cetacean.club/cgi-bin/olinfetch.wasm).
I also need to go in and completely refactor the
[olin](https://github.com/Xe/pahi/tree/main/wasm/olin/src) crate and make the
APIs much more ergonomic, not to mention make the HTTP client actually work
again.
Then comes the documentation. Oh god there will be so much documentation. I will
be _drowning_ in documentation by the end of this.
I need to write the panel and command line tool for Trisiel. I want to write
the panel in [Elm](https://elm-lang.org/) and the command line tool in Rust.
There is basically zero validation for anything submitted to the Trisiel API.
I will need to write validation in order to make it safer.
I may also explore enabling support for [WASI](https://wasi.dev/) in the future,
but as I have stated before I do not believe that WASI works very well for the
futuristic plan-9 inspired model I want to use on Trisiel.
Right now the executor shells out to pa'i, but I want to embed pa'i into the
executor binary so there are fewer moving parts involved.
I also need to figure out what I should do with this project in general. It
feels like it is close to being productizable, but I am in a very bad stage of
my life to be able to jump in headfirst and build a company around this. Visa
limitations also don't help here.
## Things I Learned
[Rocket](https://rocket.rs) is an absolutely fantastic web framework and I
cannot recommend it enough. I am able to save _so much time_ with Rocket and its
slightly magic use of proc-macros. For an example, here is the entire source
code of the `/whoami` route in the Trisiel API:
```rust
#[get("/whoami")]
#[instrument]
pub fn whoami(user: models::User) -> Json<models::User> {
Json(user)
}
```
The `FromRequest` instance I have on my database user model allows me to inject
the user associated with an API token purely based on the (validated against the
database) claims associated with the JSON Web Token that the user uses for
authentication. This then allows me to make API routes protected by simply
putting the user model as an input to the handler function. It's magic and I
love it.
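To make that a bit more concrete, here is a compressed sketch of what such a
guard can look like (Rocket 0.4-era API; the token check and user model here
are stand-ins, not mi's actual code):
```rust
use rocket::http::Status;
use rocket::request::{self, FromRequest, Request};
use rocket::Outcome;

pub struct User {
    pub name: String,
}

impl<'a, 'r> FromRequest<'a, 'r> for User {
    type Error = ();

    fn from_request(req: &'a Request<'r>) -> request::Outcome<User, ()> {
        // Pull the bearer token from the request; validating it and
        // loading the user are stand-ins for the real JWT checking.
        match req.headers().get_one("Authorization") {
            Some(token) if validate(token) => Outcome::Success(User {
                name: "mara".to_string(),
            }),
            _ => Outcome::Failure((Status::Unauthorized, ())),
        }
    }
}

fn validate(_token: &str) -> bool {
    true // stand-in: decode and verify the JWT claims here
}
```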
Postgres lets you use triggers to automatically update `updated_at` fields for
free. You just need a function that looks like this:
```sql
CREATE OR REPLACE FUNCTION trigger_set_timestamp()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
And then you can make triggers for your tables like this:
```sql
CREATE TRIGGER set_timestamp_users
BEFORE UPDATE ON users
FOR EACH ROW
EXECUTE PROCEDURE trigger_set_timestamp();
```
Every table in Trisiel uses this in order to make programming against the
database easier.
The symbol/number layer on my Moonlander has been _so good_. It looks something
like this:
![](https://cdn.christine.website/file/christine-static/blog/m5Id6Qs.png)
And it makes using programming sigils _so much easier_. I don't have to stray
far from the homerow to hit the most common ones. The only one that I still have
to reach for is `_`, but I think I will bind that to the blank key under the `]`
key.
The best programming music is [lofi hip hop radio - beats to study/relax
to](https://www.youtube.com/watch?v=5qap5aO4i9A). Second best is [Animal
Crossing music](https://www.youtube.com/watch?v=2nYNJLfktds). They both have
this upbeat quality that makes the ideas melt into code and flow out of your
hands.
---
Overall I'd say this is pretty good for a week of hacking while learning a new
keyboard layout. I will do more in the future. I have plans. To read through the
(admittedly kinda hacky/awful) code I've written this week, check out [this git
repo](https://tulpa.dev/wasmcloud/wasmcloud). If you have any feedback, please
[contact me](/contact). I will be happy to answer any questions.
As far as signups go, I am not accepting any signups at the moment. This is
pre-alpha software. The abuse story will need to be figured out, but I am fairly
sure it will end up being some kind of "pay or you can only run the precompiled
example code in the documentation" with some kind of application process for the
"free tier" of Trisiel. Of course, this is all theoretical and hinges on
Trisiel actually being productizable; so who knows?
Be well.

View File

@ -0,0 +1,54 @@
---
title: "Site Update: WebMention Support"
date: 2020-12-02
tags:
- indieweb
---
# Site Update: WebMention Support
Recently in my [Various Updates](/blog/various-updates-2020-11-18) post I
announced that my website had gotten
[WebMention](https://www.w3.org/TR/webmention/) support. Today I implemented
WebMention integration into blog articles, allowing you to see where my articles
are mentioned across the internet. This will not work with every single mention
of my site, but if your publishing platform supports sending WebMentions, then
you will see them show up on the next deploy of my site.
Thanks to the work of the folks at [Bridgy](https://brid.gy/), I have been able
to also keep track of mentions of my content across Twitter, Reddit and
Mastodon. My WebMention service will also attempt to resolve Bridgy mention
links to their original sources as much as it can. Hopefully this should allow
you to post my articles as normal across those networks and have those mentions
be recorded without having to do anything else.
As I mentioned before, this is implemented on top of
[mi](https://github.com/Xe/mi). mi receives mentions sent to
`https://mi.within.website/api/webmention/accept` and returns a reference URL in
the `Location` header. Fetching that URL returns JSON-formatted data about the
mention. Here is an example:
```console
$ curl https://mi.within.website/api/webmention/01ERGGEG7DCKRH3R7DH4BXZ6R9 | jq
{
"id": "01ERGGEG7DCKRH3R7DH4BXZ6R9",
"source_url": "https://maya.land/responses/2020/12/01/i-think-this-blog-post-might-have-been.html",
"target_url": "https://christine.website/blog/toast-sandwich-recipe-2019-12-02",
"title": null
}
```
This is all of the information I store about each WebMention. I am working on
title detection (using the
[readability](https://github.com/jangernert/readability) algorithm), however I
am unable to run JavaScript on my scraper server. JavaScript-only content may
not be scrapable like this.
---
Many thanks to [Chris Aldrich](https://boffosocko.com/2020/12/01/55781873/) for
inspiring me to push this feature to the end. Any articles that don't have any
WebMentions yet will link to the [WebMention
spec](https://www.w3.org/TR/webmention/).
Be well.

View File

@ -0,0 +1,76 @@
---
title: ZSA Moonlander First Impressions
date: 2020-10-27
series: keeb
tags:
- moonlander
- keyboard
---
# ZSA Moonlander First Impressions
As I mentioned
[before](https://christine.website/blog/colemak-layout-2020-08-15), I ordered a
[ZSA Moonlander](https://zsa.io/moonlander) and it has finally arrived. I am
writing this post from my Moonlander, and as such I may make a few more typos
than normal; I'm still getting used to this.
![a picture of the keyboard on my
desk](https://cdn.christine.website/file/christine-static/img/keeb/ElVbBm_XUAcVhOg.jpg)
The Moonlander is a weird keyboard. I knew that it would be odd from the get-go
(split ergonomic keyboards have this reputation for a reason), but I was
surprised at how natural it feels. Setup was a breeze (unbox, plug it in, flash
firmware, type), and I have been experimenting with tenting angles on my desk.
It is a _very_ solid keyboard with basically zero deck flex.
I have a [fairly complicated
keymap](https://tulpa.dev/cadey/kadis-layouts/src/branch/master/moonlander) that
worked almost entirely on the first try. Here is a more user friendly
visualization of my keymap (sans fun things like leader macros):
<div style="padding-top: 60%; position: relative;">
<iframe src="https://configure.ergodox-ez.com/embed/moonlander/layouts/xbJXx/latest/0" style="border: 0; height: 100%; left: 0; position: absolute; top: 0; width: 100%"></iframe>
</div>
My typing speed has been destroyed by the change to this ortholinear layout.
Before I was getting around 70 words per minute at best (according to
[monkeytype.com](https://monkeytype.com/)), but now I am lucky to hit about 35
words per minute. My fingers kinda reach for where keys are on a staggered
keyboard and I have the most trouble with `x`, `v`, `.` and `b` at the moment. I
really like having a dedicated : key on my right hand. It makes command mode (and
yaml) so much easier. The larger red buttons are a bit odd to hit at the moment,
but I imagine that will get much easier with time.
Each key has a programmable RGB light under it. This allows you to get some
really nice effects like this:
![The left hand of my steno
layout](https://cdn.christine.website/file/christine-static/img/keeb/ElTG7QSW0AEqXeE.jpg)
However brown colors don't come out as well as I'd hoped:
![My media layer that mostly has brown lighting, this looks a bit better in the
dark](https://cdn.christine.website/file/christine-static/img/keeb/ElVdFKoX0AE_dAA.jpg)
I am not sure how I feel about the armrests. On one hand they feel a bit cold
(context: it is currently 1.57 degrees outside and I'm wearing a hoodie at my
desk, so that may end up being the cause of this), but on the other hand I
really hate typing on this without them. The tenting is nice; I need to play
with it more, but the included instructions help a lot.
I still have a long way to go. I'll write up a longer and more detailed review
in a few weeks.
Expect to see many more glory shots on
[Twitter](https://twitter.com/theprincessxena)!
As an added bonus, here is the `if err != nil` key in action:
<video controls width="100%">
<source src="https://cdn.christine.website/file/christine-static/img/keeb/tmp.ZdCemPUcnd.webm"
type="video/webm">
<source src="https://cdn.christine.website/file/christine-static/img/keeb/tmp.ZdCemPUcnd.mp4"
type="video/mp4">
Sorry, your browser doesn't support embedded videos.
</video>

View File

@ -0,0 +1,442 @@
---
title: ZSA Moonlander Review
date: 2020-11-06
series: keeb
tags:
- moonlander
- keyboard
- nixos
---
# ZSA Moonlander Review
I am nowhere near qualified to review things objectively. Therefore this
blogpost will mostly be about what I like about this keyboard. I plan to go into
a fair bit of detail, however please do keep in mind that this is subjective as
all hell. Also keep in mind that this is partially also going to be a review of
my own keyboard layout too. I'm going to tackle this in a few parts that I will
label with headings.
This review is NOT sponsored. I paid for this device with my own money. I have
no influence pushing me either way on this keyboard.
![a picture of the keyboard on my
desk](https://cdn.christine.website/file/christine-static/img/keeb/Elm3dN8XUAAYHws.jpg)
[That 3d printed brain is built from the 3D model that was made as a part of <a
href="https://christine.website/blog/brain-fmri-to-3d-model-2019-08-23">this
blogpost</a>.](conversation://Mara/hacker)
## tl;dr
I like the Moonlander. It gets out of my way and lets me focus on writing and
code. I don't like how limited the Oryx configurator is, but the fact that I can
build my own firmware from source and flash it to the keyboard on my own makes
up for that. I think this was a purchase well worth making, but I can understand
why others would disagree. I can easily see this device becoming a core part of
my workflow for years to come.
## Build Quality
The Moonlander is a solid keyboard. Once you set it up with the tenting legs and
adjust the key cluster, the keyboard is rock solid. The only give I've noticed
is because my desk mat is made of a rubber-like material. The construction of
the keyboard is all plastic but there isn't any deck flex that I can tell.
Compare this to cheaper laptops where the entire keyboard bends if you so much
as touch the keys too hard.
The palmrests are detachable and when they are off it gives the keyboard a
space-age vibe to it:
![the left half of the keyboard without the palmrest
attached](https://cdn.christine.website/file/christine-static/img/keeb/EmJ1bqNXUAAJy4d.jpg)
The palmrests feel very solid and fold up into the back of the keyboard for
travel. However, folding up the palmrest does mess up the tenting stability, so
you can't fold in the palmrest and type very comfortably. This makes sense
though: the palmrest is made out of smooth plastic, so it feels nicer on the
hands.
ZSA said that iPad compatibility is not guaranteed due to the fact that the iPad
might not put out enough juice to run it, however in my testing with an iPad Pro
2018 (12", 512 GB storage) it works fine. The battery drains a little faster,
but the Moonlander is a much more active keyboard than the smart keyboard so I
can forgive this.
## Switches
I've been using mechanical keyboards for years, but most of them have had
clicky switches (such as cloned Cherry MX blues, actual legit Cherry MX blues
and the awful Razer Green switches). This is my first real experience with
Cherry MX brown switches. There are many other options when you are about to
order a Moonlander, but I figured Cherry MX browns would be a nice neutral
choice.
The keyswitches are hot-swappable (no disassembly or soldering required), and
changing out keyswitches **DOES NOT** void your warranty. I plan to look into
[Holy Pandas](https://www.youtube.com/watch?v=QLm8DNH5hJk) and [Zilents
V2](https://youtu.be/uGVw85solnE) in the future. There is even a clever little
tool in the box that makes it easy to change out keyswitches.
Overall, this has been one of the best typing experiences I have ever had. The
noise is a little louder than I would have liked (please note that I tend to
bottom out the keycaps as I type, so this may factor into the noise I
experience), but overall I really like it. It is far better than anything I
have ever had with clicky switches.
## Typing Feel
The Moonlander uses an ortholinear layout as opposed to the staggered layout
that you find on most keyboards. This took some getting used to, but I have
found that it is incredibly comfortable and natural to write on.
## My Keymap
Each side of the keyboard has the following:
- 20 alphanumeric keys (some are used for `;`, `,`, `.` and `/` like normal
keyboards)
- 12 freely assignable keys (useful for layer changes, arrow keys, symbols and
modifiers)
- 4 thumb keys
In total, this keyboard has 72 keys, making it about a 70% keyboard (assuming
the math in my head is right).
My keymap uses all but two of these keys. The two keys I haven't figured out how
to best use yet are the ones that I currently have the `[` and `]` keycaps on.
Right now they are mapped to the left and right arrow keys. This was the
default.
My keymap is organized into
[layers](https://docs.qmk.fm/#/keymap?id=keymap-and-layers). In each of these
subsections I will go into detail about what these layers are, what they do and
how they help me. My keymap code is
[here](https://tulpa.dev/cadey/kadis-layouts/src/branch/master/moonlander) and I
have a limited view of it embedded below:
<div style="padding-top: 60%; position: relative;">
<iframe src="https://configure.ergodox-ez.com/embed/moonlander/layouts/xbJXx/latest/0" style="border: 0; height: 100%; left: 0; position: absolute; top: 0; width: 100%"></iframe>
</div>
If you want to flash my layout to your Moonlander for some reason, you can find
the firmware binary
[here](https://cdn.christine.website/file/christine-static/img/keeb/moonlander_kadis.bin).
You can then flash this to your keyboard with
[Wally](https://ergodox-ez.com/pages/wally).
### Base Layers
I have a few base layers that contain the main set of letters and numbers that I
type. The main base layer is my Colemak layer. I have the keys arranged to a
standard [Colemak](https://colemak.com/) layout and it is currently the layer I
type the fastest on. I have the RGB configured so that it is mostly pink with
the homerow using a lighter shade of pink. The color codes come from my logo
that you can see in the favicon [or here for a larger
version](https://christine.website/static/img/avatar_large.png).
I also have a qwerty layer for gaming. Most games expect qwerty keyboards and
this is an excellent stopgap to avoid having to rebind every game that I want to
play. The left side of the keyboard is the active one with the controller board
in it too, so I can unplug the other half of the keyboard and give my mouse a
lot of room to roam.
Thanks to a friend of mine, I am also playing with Dvorak. I have not gotten far
in Dvorak yet, but it is interesting to play with.
I'll cover the leader key in the section below dedicated to it, but the other
major thing that I have is a colon key on my right hand thumb cluster. This has
been a huge boon for programming. The colon key is typed a lot. Having it on the
thumb cluster means that I can just reach down and hit it when I need to. This
makes writing code in Go and Rust so much easier.
### Symbol/Number Layer
If you look at the base layer keymap, you will see that I do not have square
brackets mapped anywhere there. Yet I write code with it effortlessly. This is
because of the symbol/number layer that I access with the lower right and lower
left keys on the keyboard. I have it positioned there so I can roll my hand to
the side and then unlock the symbols there. I have access to every major symbol
needed for programming save `<` and `>` (which I can easily access on the base
layer with the shift key). I also get a nav cluster and a number pad.
I also have [dynamic macros](https://docs.qmk.fm/#/feature_dynamic_macros) on
this layer, which function kinda like vim macros. The only difference is that
there are only two of them instead of arbitrarily many like in vim. They are
convenient though.
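Under the hood this takes very little: the moving parts are only a handful of
stock QMK keycodes placed wherever is convenient on a layer. A sketch of the
setup:

```c
// In rules.mk: DYNAMIC_MACRO_ENABLE = yes
// Then these stock QMK keycodes go somewhere on the layer:
//   DYN_REC_START1   start recording macro 1
//   DYN_REC_START2   start recording macro 2
//   DYN_REC_STOP     stop recording
//   DYN_MACRO_PLAY1  replay macro 1
//   DYN_MACRO_PLAY2  replay macro 2
```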
### Media Layer
One of the cooler parts of the Moonlander is that it can act as a mouse. It is a
very terrible mouse (understandably, mostly because the digital inputs of
keypresses cannot match the analog precision of a mouse). This layer has an
arrow key cluster too. I normally use the arrow keys along the bottom of the
keyboard with my thumbs, but sometimes it can help to have a dedicated inverse T
arrow cluster for things like old MS-DOS games.
I also have media control keys here. They aren't the most useful on my Linux
desktop, but when I plug the keyboard into my iPad they are amazing.
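For the curious, both halves of this layer are stock QMK features; nothing
custom is required beyond turning them on and placing the keycodes. A sketch:

```c
// In rules.mk: MOUSEKEY_ENABLE = yes   (pointer emulation)
//              EXTRAKEY_ENABLE = yes   (media/volume keys)
// The layer itself is built out of stock keycodes such as:
//   KC_MS_U, KC_MS_D, KC_MS_L, KC_MS_R   move the pointer
//   KC_BTN1, KC_BTN2                     left and right click
//   KC_WH_U, KC_WH_D                     scroll wheel
//   KC_MPLY, KC_MPRV, KC_MNXT            play/pause, previous, next
//   KC_VOLD, KC_VOLU                     volume down/up
```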
### dwm Layer
I use [dwm](/blog/why-i-use-suckless-tools-2020-06-05) as my main window manager
in Linux. dwm is entirely controlled using the keyboard. I have a dedicated
keyboard layer to control dwm and send out its keyboard shortcuts. It's really
nice and lets me get all of the advantages of my tiling setup without needing to
hit weird keycombos.
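The trick is that each key on this layer sends a ready-made chord. A sketch,
assuming dwm's default Mod1 (Alt) modkey; the bindings shown are dwm's stock
ones, not necessarily the ones I use:

```c
// Keycodes for a dwm layer: each key emits a complete keyboard shortcut.
//   LALT(KC_P)        Mod+p         spawn dmenu
//   LALT(KC_J)        Mod+j         focus next window in the stack
//   LALT(KC_K)        Mod+k         focus previous window
//   LALT(KC_ENT)      Mod+Return    zoom window into the master area
//   LALT(LSFT(KC_C))  Mod+Shift+c   kill the focused window
```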
### Leader Macros
[Leader macros](https://docs.qmk.fm/#/feature_leader_key) are one of the killer
features of my layout. I have a [huge
bank](https://tulpa.dev/cadey/kadis-layouts/src/branch/master/doc/leader.md) of
them and use them to type out things that I type a lot. Most common git and
Kubernetes commands are just a leader macro away.
The Go `if err != nil` macro that got me on /r/programmingcirclejerk twice is
one of my leader macros, but I may end up promoting it to its own key if I keep
getting so much use out of it (maybe one of the keys I don't use can become my
`if err != nil` key). I'm sad that the threads got deleted (I love it when my
content gets on there, it's one of my favorite subreddits), but such is life.
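Mechanically, a leader macro is just a key sequence: tap the leader key, type a
short mnemonic, and QMK fires the matching `SEND_STRING`. A trimmed sketch in
the style of the QMK leader API from this era; the sequences here are invented
examples, not entries from my actual bank:

```c
// keymap.c sketch: needs LEADER_ENABLE = yes in rules.mk and KC_LEAD
// placed somewhere on the keymap.
LEADER_EXTERNS();

void matrix_scan_user(void) {
    LEADER_DICTIONARY() {
        leading = false;
        leader_end();

        SEQ_TWO_KEYS(KC_G, KC_P) {
            SEND_STRING("git push\n"); // leader, g, p
        }
        SEQ_THREE_KEYS(KC_I, KC_E, KC_N) {
            SEND_STRING("if err != nil {\n}"); // the infamous one
        }
    }
}
```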
## NixOS, the Moonlander and Colemak
When I got this keyboard, flashed the firmware and plugged it in, I noticed that
my keyboard was sending weird inputs. It was turning text that should render
like this:
```
The quick brown fox jumps over the lazy yellow dog.
```
into this:
```
Ghf qluce bpywk tyx nlm;r yvfp ghf iazj jfiiyw syd.
```
This is because I had configured my NixOS install to interpret the keyboard as
if it was Colemak. However, the keyboard firmware already does that translation
for me: it sends out normal keycodes (even though I am typing in Colemak) as if
I was typing in qwerty.
This double Colemak meant that a lot of messages and commands were completely
unintelligible until I popped into my qwerty layer.
I quickly found the culprit in my config:
```nix
console.useXkbConfig = true;
services.xserver = {
  layout = "us";
  xkbVariant = "colemak";
  xkbOptions = "caps:escape";
};
```
This config told the X server to always interpret my keyboard as if it was
Colemak, meaning that I needed to tell it not to do that for this keyboard. As
a stopgap I commented this section of my config out and rebuilt my system.
X11 allows you to specify keyboard configuration for keyboards individually by
device product/vendor names. The easiest way I know to get this information is
to open a terminal, run `dmesg -w` to get a constant stream of kernel logs,
unplug and plug the keyboard back in and see what the kernel reports:
```console
[242718.024229] usb 1-2: USB disconnect, device number 8
[242948.272824] usb 1-2: new full-speed USB device number 9 using xhci_hcd
[242948.420895] usb 1-2: New USB device found, idVendor=3297, idProduct=1969, bcdDevice= 0.01
[242948.420896] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[242948.420897] usb 1-2: Product: Moonlander Mark I
[242948.420898] usb 1-2: Manufacturer: ZSA Technology Labs
[242948.420898] usb 1-2: SerialNumber: 0
```
The product is named `Moonlander Mark I`, which means we can match for it and
tell X11 to not colemakify the keycodes using something like this:
```
Section "InputClass"
Identifier "moonlander"
MatchIsKeyboard "on"
MatchProduct "Moonlander"
Option "XkbLayout" "us"
Option "XkbVariant" "basic"
EndSection
```
[For more information on what you can do in an `InputClass` section, see <a
href="https://www.x.org/releases/current/doc/man/man5/xorg.conf.5.xhtml#heading9">here</a>
in the X11 documentation.](conversation://Mara/hacker)
This configuration fragment can easily go in the normal X11 configuration
folder, but doing it like this would mean that I would have to manually drop
this file in on every system I want to colemakify. This does not scale and
defeats the point of doing this in NixOS.
Thankfully NixOS has [an
option](https://search.nixos.org/options?channel=20.09&show=services.xserver.inputClassSections&from=0&size=30&sort=relevance&query=inputClassSections)
to solve this very problem. Using this module we can write something like this:
```nix
services.xserver = {
  layout = "us";
  xkbVariant = "colemak";
  xkbOptions = "caps:escape";

  inputClassSections = [
    ''
      Identifier "yubikey"
      MatchIsKeyboard "on"
      MatchProduct "Yubikey"
      Option "XkbLayout" "us"
      Option "XkbVariant" "basic"
    ''
    ''
      Identifier "moonlander"
      MatchIsKeyboard "on"
      MatchProduct "Moonlander"
      Option "XkbLayout" "us"
      Option "XkbVariant" "basic"
    ''
  ];
};
```
But this is NixOS, which lets us go one step further and make the identifier
and product matching strings configurable as well with our own [NixOS
options](https://nixos.org/manual/nixos/stable/index.html#sec-writing-modules).
Let's start by lifting all of that above config into its own module:
```nix
# colemak.nix
{ config, lib, ... }: with lib; {
  options = {
    cadey.colemak = {
      enable = mkEnableOption "Enables Colemak for the default X config";
    };
  };

  config = mkIf config.cadey.colemak.enable {
    services.xserver = {
      layout = "us";
      xkbVariant = "colemak";
      xkbOptions = "caps:escape";

      inputClassSections = [
        ''
          Identifier "yubikey"
          MatchIsKeyboard "on"
          MatchProduct "Yubikey"
          Option "XkbLayout" "us"
          Option "XkbVariant" "basic"
        ''
        ''
          Identifier "moonlander"
          MatchIsKeyboard "on"
          MatchProduct "Moonlander"
          Option "XkbLayout" "us"
          Option "XkbVariant" "basic"
        ''
      ];
    };
  };
}
```
[This also has Yubikey inputs not get processed into Colemak so that <a
href="https://developers.yubico.com/OTP/OTPs_Explained.html">Yubikey OTPs</a>
still work as expected. Keep in mind that a Yubikey in this mode pretends to be
a keyboard, so without this configuration the OTP will be processed into
Colemak. The Yubico verification service will not be able to understand OTPs
that are typed out in Colemak.](conversation://Mara/hacker)
Then we can turn the identifier and product values into options with
[mkOption](https://nixos.org/manual/nixos/stable/index.html#sec-option-declarations)
and string interpolation:
```nix
# ...
cadey.colemak = {
  enable = mkEnableOption "Enables Colemak for the default X config";
  ignore = {
    identifier = mkOption {
      type = types.str;
      description = "Keyboard input identifier to send raw keycodes for";
      default = "moonlander";
    };
    product = mkOption {
      type = types.str;
      description = "Keyboard input product to send raw keycodes for";
      default = "Moonlander";
    };
  };
};
# ...
''
  Identifier "${config.cadey.colemak.ignore.identifier}"
  MatchIsKeyboard "on"
  MatchProduct "${config.cadey.colemak.ignore.product}"
  Option "XkbLayout" "us"
''
# ...
```
Adding this to the default load path and enabling it with `cadey.colemak.enable
= true;` in my tower's `configuration.nix` got everything working the way I
wanted.
This section was made possible thanks to help from [Graham
Christensen](https://twitter.com/grhmc), who seems to be in search of a job. If
you want someone on your team who is kind and more than willing to help your
team flourish, I highly suggest putting him in your hiring pipeline. See
[here](https://twitter.com/grhmc/status/1324765493534875650) for contact
information.
## Oryx
[Oryx](https://configure.ergodox-ez.com) is the configurator that ZSA created to
let people create keymaps without needing to compile their own firmware or
install the [QMK](https://qmk.fm) toolchain.
[QMK is the name of the firmware that the Moonlander (and a lot of other
custom/split mechanical keyboards) use. It works on AVR and Arm
processors.](conversation://Mara/hacker)
For most people, Oryx should be sufficient. I actually started my keymap using
Oryx and sorta outgrew it as I learned more about QMK. It would be nice if Oryx
added leader key support, but that is more of an advanced feature, so I
understand why it doesn't have it.
## Things I Don't Like
This keyboard isn't flawless, but it gets so many things right that this is
mostly petty bickering at this point. I had to look hard to find these.
I would have liked having another thumb key for things like layer toggling. I
can make do with what I have, but another key would have been nice. Maybe add a
1u key under the big red thumb key?
At the time I ordered the Moonlander, I was unable to order a black keyboard
with white keycaps. I am told that ZSA will be selling keycap sets as early as
next year. When that happens I will be sure to order a white set so that I can
have an orca vibe.
ZSA ships with UPS. Normally UPS is fine for me, but the driver that was slated
to deliver it one day just didn't deliver it. I was able to get the keyboard
eventually though. Contrary to their claims, the UPS website does NOT update
instantly and is NOT the most up to date source of information about your
package.
The cables aren't braided. I would have liked braided cables.
Like I said, these are _really minor_ things, but it's all I can really come up
with as far as downsides go.
## Conclusion
Overall this keyboard is amazing. I would really suggest it to anyone that wants
to have control over their main tool and craft it towards their desires instead
of making do with whatever some product manager somewhere decided the keys
should do. It's expensive at US$350, but for the right kind of
person this will be worth every penny. Your mileage may vary, but I like it.

@@ -1,24 +1,32 @@
 let Person =
       { Type = { name : Text, tags : List Text, gitLink : Text, twitter : Text }
       , default =
         { name = "", tags = [] : List Text, gitLink = "", twitter = "" }
       }

 let defaultPort = env:PORT ? 3030

+let defaultWebMentionEndpoint =
+      env:WEBMENTION_ENDPOINT
+    ? "https://mi.within.website/api/webmention/accept"
+
 let Config =
       { Type =
           { signalboost : List Person.Type
           , port : Natural
           , clackSet : List Text
           , resumeFname : Text
+          , webMentionEndpoint : Text
+          , miToken : Text
           }
       , default =
         { signalboost = [] : List Person.Type
         , port = defaultPort
         , clackSet = [ "Ashlynn" ]
         , resumeFname = "./static/resume/resume.md"
-        }
+        , webMentionEndpoint = defaultWebMentionEndpoint
+        , miToken = "${env:MI_TOKEN as Text ? ""}"
+        }
       }

 in Config::{

@@ -218,7 +218,7 @@ a:hover {
   overflow: hidden;
 }
 .hack h1:after {
-  content: "====================================================================================================";
+  content: "===============================================================================================================================================================";
   position: absolute;
   bottom: 10px;
   left: 0;
@@ -315,7 +315,7 @@ a:hover {
   margin: 20px 0;
 }
 .hack hr:after {
-  content: "----------------------------------------------------------------------------------------------------";
+  content: "---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------";
   position: absolute;
   top: 0;
   left: 0;

@@ -19,7 +19,7 @@
   padding: 1em;
 }

-@keyframes snow {
-  0% {background-position: 0px 0px, 0px 0px, 0px 0px;}
-  100% {background-position: 500px 1000px, 400px 400px, 300px 300px}
-}
+/* @keyframes snow { */
+/* 0% {background-position: 0px 0px, 0px 0px, 0px 0px;} */
+/* 100% {background-position: 500px 1000px, 400px 400px, 300px 300px} */
+/* } */

@@ -1,23 +1,57 @@
-{ system ? builtins.currentSystem }:
+{ sources ? import ./nix/sources.nix, pkgs ? import sources.nixpkgs { } }:
+with pkgs;
 let
-  sources = import ./nix/sources.nix;
-  pkgs = import sources.nixpkgs { inherit system; };
-  callPackage = pkgs.lib.callPackageWith pkgs;
-  site = callPackage ./site.nix { };
+  rust = pkgs.callPackage ./nix/rust.nix { };

-  dockerImage = pkg:
-    pkgs.dockerTools.buildLayeredImage {
-      name = "xena/christinewebsite";
-      tag = "latest";
-      contents = [ pkgs.cacert pkg ];
+  srcNoTarget = dir:
+    builtins.filterSource
+    (path: type: type != "directory" || builtins.baseNameOf path != "target")
+    dir;
+  naersk = pkgs.callPackage sources.naersk {
+    rustc = rust;
+    cargo = rust;
+  };
+  dhallpkgs = import sources.easy-dhall-nix { inherit pkgs; };
+  src = srcNoTarget ./.;

-      config = {
-        Cmd = [ "${pkg}/bin/xesite" ];
-        Env = [ "CONFIG_FNAME=${pkg}/config.dhall" "RUST_LOG=info" ];
-        WorkingDir = "/";
-      };
-    };
-in dockerImage site
+  xesite = naersk.buildPackage {
+    inherit src;
+    doCheck = true;
+    buildInputs = [ pkg-config openssl git ];
+    remapPathPrefix = true;
+  };
+
+  config = stdenv.mkDerivation {
+    pname = "xesite-config";
+    version = "HEAD";
+    buildInputs = [ dhallpkgs.dhall-simple ];
+    phases = "installPhase";
+    installPhase = ''
+      cd ${src}
+      dhall resolve < ${src}/config.dhall >> $out
+    '';
+  };
+in pkgs.stdenv.mkDerivation {
+  inherit (xesite) name;
+  inherit src;
+  phases = "installPhase";
+  installPhase = ''
+    mkdir -p $out $out/bin
+    cp -rf ${config} $out/config.dhall
+    cp -rf $src/blog $out/blog
+    cp -rf $src/css $out/css
+    cp -rf $src/gallery $out/gallery
+    cp -rf $src/signalboost.dhall $out/signalboost.dhall
+    cp -rf $src/static $out/static
+    cp -rf $src/talks $out/talks
+    cp -rf ${xesite}/bin/xesite $out/bin/xesite
+  '';
+}

@@ -1,31 +0,0 @@
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: christinewebsite-ping
-  namespace: apps
-  labels:
-    app: christinewebsite
-spec:
-  template:
-    spec:
-      containers:
-      - name: ping-bing
-        image: xena/alpine
-        command:
-          - "busybox"
-          - "wget"
-          - "-O"
-          - "-"
-          - "-q"
-          - "https://www.bing.com/ping?sitemap=https://christine.website/sitemap.xml"
-      - name: ping-google
-        image: xena/alpine
-        command:
-          - "busybox"
-          - "wget"
-          - "-O"
-          - "-"
-          - "-q"
-          - "https://www.google.com/ping?sitemap=https://christine.website/sitemap.xml"
-      restartPolicy: Never
-  backoffLimit: 4

lib/cfcache/Cargo.toml (new file)
@@ -0,0 +1,20 @@
+[package]
+name = "cfcache"
+version = "0.1.0"
+authors = ["Christine Dodrill <me@christine.website>"]
+edition = "2018"
+
+# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+
+[dependencies]
+reqwest = { version = "0.11", features = ["json"] }
+serde_json = "1"
+serde = { version = "1", features = ["derive"] }
+thiserror = "1"
+tracing = "0.1"
+tracing-futures = "0.2"
+
+[dev-dependencies]
+eyre = "0.6.5"
+kankyo = "0.3"
+tokio = { version = "1", features = ["full"] }

@@ -0,0 +1,15 @@
+use eyre::Result;
+
+#[tokio::main]
+async fn main() -> Result<()> {
+    kankyo::init()?;
+    let key = std::env::var("CF_TOKEN")?;
+    let zone_id = std::env::var("CF_ZONE_ID")?;
+
+    let cli = cfcache::Client::new(key, zone_id)?;
+
+    cli.purge(vec!["https://christine.website/.within/health".to_string()])
+        .await?;
+
+    Ok(())
+}

lib/cfcache/src/lib.rs (new file)
@@ -0,0 +1,64 @@
+use reqwest::header;
+use tracing::instrument;
+
+pub type Result<T = ()> = std::result::Result<T, Error>;
+
+#[derive(thiserror::Error, Debug)]
+pub enum Error {
+    #[error("json error: {0}")]
+    Json(#[from] serde_json::Error),
+
+    #[error("request error: {0}")]
+    Request(#[from] reqwest::Error),
+
+    #[error("invalid header value: {0}")]
+    InvalidHeaderValue(#[from] reqwest::header::InvalidHeaderValue),
+}
+
+pub struct Client {
+    zone_id: String,
+    cli: reqwest::Client,
+}
+
+static USER_AGENT: &str = concat!(
+    "xesite ",
+    env!("CARGO_PKG_NAME"),
+    "/",
+    env!("CARGO_PKG_VERSION")
+);
+
+impl Client {
+    pub fn new(api_key: String, zone_id: String) -> Result<Self> {
+        let mut headers = header::HeaderMap::new();
+        headers.insert(
+            header::AUTHORIZATION,
+            header::HeaderValue::from_str(&format!("Bearer {}", api_key))?,
+        );
+        let cli = reqwest::Client::builder()
+            .user_agent(USER_AGENT)
+            .default_headers(headers)
+            .build()?;
+
+        Ok(Self { zone_id, cli })
+    }
+
+    #[instrument(skip(self), err)]
+    pub async fn purge(&self, urls: Vec<String>) -> Result {
+        #[derive(serde::Serialize)]
+        struct Files {
+            files: Vec<String>,
+        }
+
+        self.cli
+            .post(&format!(
+                "https://api.cloudflare.com/client/v4/zones/{}/purge_cache",
+                self.zone_id
+            ))
+            .json(&Files { files: urls })
+            .send()
+            .await?
+            .error_for_status()?;
+
+        Ok(())
+    }
+}

@@ -1,6 +1,6 @@
 [package]
 name = "go_vanity"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Christine Dodrill <me@christine.website>"]
 edition = "2018"
 build = "src/build.rs"
@@ -8,8 +8,8 @@ build = "src/build.rs"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

 [dependencies]
-mime = "0.3.0"
-warp = "0.2"
+mime = "0.3"
+warp = "0.3"

 [build-dependencies]
-ructe = { version = "0.12", features = ["warp02"] }
+ructe = { version = "0.13", features = ["warp02"] }

@@ -1,12 +1,12 @@
+use crate::templates::RenderRucte;
 use warp::{http::Response, Rejection, Reply};
-use crate::templates::{RenderRucte};

 include!(concat!(env!("OUT_DIR"), "/templates.rs"));

-pub async fn gitea(pkg_name: &str, git_repo: &str) -> Result<impl Reply, Rejection> {
-    Response::builder().html(|o| templates::gitea_html(o, pkg_name, git_repo))
+pub async fn gitea(pkg_name: &str, git_repo: &str, branch: &str) -> Result<impl Reply, Rejection> {
+    Response::builder().html(|o| templates::gitea_html(o, pkg_name, git_repo, branch))
 }

-pub async fn github(pkg_name: &str, git_repo: &str) -> Result<impl Reply, Rejection> {
-    Response::builder().html(|o| templates::github_html(o, pkg_name, git_repo))
+pub async fn github(pkg_name: &str, git_repo: &str, branch: &str) -> Result<impl Reply, Rejection> {
+    Response::builder().html(|o| templates::github_html(o, pkg_name, git_repo, branch))
 }

@@ -1,11 +1,11 @@
-@(pkg_name: &str, git_repo: &str)
+@(pkg_name: &str, git_repo: &str, branch: &str)
 <!DOCTYPE html>
 <html>
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
     <meta name="go-import" content="@pkg_name git @git_repo">
-    <meta name="go-source" content="@pkg_name @git_repo @git_repo/src/master@{/dir@} @git_repo/src/master@{/dir@}/@{file@}#L@{line@}">
+    <meta name="go-source" content="@pkg_name @git_repo @git_repo/src/@branch@{/dir@} @git_repo/src/@branch@{/dir@}/@{file@}#L@{line@}">
     <meta http-equiv="refresh" content="0; url=https://godoc.org/@pkg_name">
   </head>
   <body>

@@ -1,11 +1,11 @@
-@(pkg_name: &str, git_repo: &str)
+@(pkg_name: &str, git_repo: &str, branch: &str)
 <!DOCTYPE html>
 <html>
   <head>
     <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
     <meta name="go-import" content="@pkg_name git @git_repo">
-    <meta name="go-source" content="@pkg_name @git_repo @git_repo/tree/master@{/dir@} @git_repo/blob/master@{/dir@}/@{file@}#L@{line@}">
+    <meta name="go-source" content="@pkg_name @git_repo @git_repo/tree/@branch@{/dir@} @git_repo/blob/@branch@{/dir@}/@{file@}#L@{line@}">
    <meta http-equiv="refresh" content="0; url=https://godoc.org/@pkg_name">
   </head>
   <body>

@@ -1,7 +1,7 @@
 use std::default::Default;

 use errors::*;
-use feed::{Feed, Author, Attachment};
+use feed::{Attachment, Author, Feed};
 use item::{Content, Item};

 /// Feed Builder
@@ -160,7 +160,7 @@ impl ItemBuilder {
         match self.content {
             Some(Content::Text(t)) => {
                 self.content = Some(Content::Both(i.into(), t));
-            },
+            }
             _ => {
                 self.content = Some(Content::Html(i.into()));
             }
@@ -172,10 +172,10 @@ impl ItemBuilder {
         match self.content {
             Some(Content::Html(s)) => {
                 self.content = Some(Content::Both(s, i.into()));
-            },
+            }
             _ => {
                 self.content = Some(Content::Text(i.into()));
-            },
+            }
         }
         self
     }
@@ -197,8 +197,7 @@ impl ItemBuilder {
             date_modified: self.date_modified,
             author: self.author,
             tags: self.tags,
-            attachments: self.attachments
+            attachments: self.attachments,
         })
     }
 }

@@ -1,7 +1,6 @@
 use serde_json;

-error_chain!{
+error_chain! {
     foreign_links {
         Serde(serde_json::Error);
     }
 }
-

@@ -1,7 +1,7 @@
 use std::default::Default;

-use item::Item;
 use builder::Builder;
+use item::Item;

 const VERSION_1: &'static str = "https://jsonfeed.org/version/1";

@@ -145,9 +145,9 @@ pub struct Hub {
 #[cfg(test)]
 mod tests {
-    use super::*;
     use serde_json;
     use std::default::Default;
+    use super::*;

     #[test]
     fn serialize_feed() {
@@ -168,18 +168,16 @@ mod tests {
     #[test]
     fn deserialize_feed() {
-        let json = r#"{"version":"https://jsonfeed.org/version/1","title":"some title","items":[]}"#;
+        let json =
+            r#"{"version":"https://jsonfeed.org/version/1","title":"some title","items":[]}"#;
         let feed: Feed = serde_json::from_str(&json).unwrap();
         let expected = Feed {
             version: "https://jsonfeed.org/version/1".to_string(),
             title: "some title".to_string(),
             items: vec![],
             ..Default::default()
         };
-        assert_eq!(
-            feed,
-            expected
-        );
+        assert_eq!(feed, expected);
     }
@@ -208,10 +206,7 @@ mod tests {
             size_in_bytes: Some(1),
             duration_in_seconds: Some(1),
         };
-        assert_eq!(
-            attachment,
-            expected
-        );
+        assert_eq!(attachment, expected);
     }
@@ -229,17 +224,15 @@ mod tests {
     #[test]
     fn deserialize_author() {
-        let json = r#"{"name":"bob jones","url":"http://example.com","avatar":"http://img.com/blah"}"#;
+        let json =
+            r#"{"name":"bob jones","url":"http://example.com","avatar":"http://img.com/blah"}"#;
         let author: Author = serde_json::from_str(&json).unwrap();
         let expected = Author {
             name: Some("bob jones".to_string()),
             url: Some("http://example.com".to_string()),
             avatar: Some("http://img.com/blah".to_string()),
         };
-        assert_eq!(
-            author,
-            expected
-        );
+        assert_eq!(author, expected);
     }
@@ -262,10 +255,7 @@ mod tests {
             type_: "some-type".to_string(),
             url: "http://example.com".to_string(),
         };
-        assert_eq!(
-            hub,
-            expected
-        );
+        assert_eq!(hub, expected);
     }

     #[test]
View File

@ -1,11 +1,11 @@
use std::fmt;
use std::default::Default; use std::default::Default;
use std::fmt;
use feed::{Author, Attachment};
use builder::ItemBuilder; use builder::ItemBuilder;
use feed::{Attachment, Author};
use serde::ser::{Serialize, Serializer, SerializeStruct}; use serde::de::{self, Deserialize, Deserializer, MapAccess, Visitor};
use serde::de::{self, Deserialize, Deserializer, Visitor, MapAccess}; use serde::ser::{Serialize, SerializeStruct, Serializer};
/// Represents the `content_html` and `content_text` attributes of an item /// Represents the `content_html` and `content_text` attributes of an item
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)] #[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
@ -61,7 +61,8 @@ impl Default for Item {
impl Serialize for Item { impl Serialize for Item {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where S: Serializer where
S: Serializer,
{ {
let mut state = serializer.serialize_struct("Item", 14)?; let mut state = serializer.serialize_struct("Item", 14)?;
state.serialize_field("id", &self.id)?; state.serialize_field("id", &self.id)?;
@ -78,15 +79,15 @@ impl Serialize for Item {
Content::Html(ref s) => { Content::Html(ref s) => {
state.serialize_field("content_html", s)?; state.serialize_field("content_html", s)?;
state.serialize_field("content_text", &None::<Option<&str>>)?; state.serialize_field("content_text", &None::<Option<&str>>)?;
}, }
Content::Text(ref s) => { Content::Text(ref s) => {
state.serialize_field("content_html", &None::<Option<&str>>)?; state.serialize_field("content_html", &None::<Option<&str>>)?;
state.serialize_field("content_text", s)?; state.serialize_field("content_text", s)?;
}, }
Content::Both(ref s, ref t) => { Content::Both(ref s, ref t) => {
state.serialize_field("content_html", s)?; state.serialize_field("content_html", s)?;
state.serialize_field("content_text", t)?; state.serialize_field("content_text", t)?;
}, }
}; };
if self.summary.is_some() { if self.summary.is_some() {
state.serialize_field("summary", &self.summary)?; state.serialize_field("summary", &self.summary)?;
@ -117,8 +118,9 @@ impl Serialize for Item {
} }
impl<'de> Deserialize<'de> for Item { impl<'de> Deserialize<'de> for Item {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where D: Deserializer<'de> where
D: Deserializer<'de>,
{ {
enum Field { enum Field {
Id, Id,
@ -135,11 +137,12 @@ impl<'de> Deserialize<'de> for Item {
Author, Author,
Tags, Tags,
Attachments, Attachments,
}; }
impl<'de> Deserialize<'de> for Field { impl<'de> Deserialize<'de> for Field {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where D: Deserializer<'de> where
D: Deserializer<'de>,
{ {
struct FieldVisitor; struct FieldVisitor;
@ -151,7 +154,8 @@ impl<'de> Deserialize<'de> for Item {
} }
fn visit_str<E>(self, value: &str) -> Result<Field, E> fn visit_str<E>(self, value: &str) -> Result<Field, E>
where E: de::Error where
E: de::Error,
{ {
match value { match value {
"id" => Ok(Field::Id), "id" => Ok(Field::Id),
@ -186,7 +190,8 @@ impl<'de> Deserialize<'de> for Item {
} }
fn visit_map<V>(self, mut map: V) -> Result<Item, V::Error> fn visit_map<V>(self, mut map: V) -> Result<Item, V::Error>
where V: MapAccess<'de> where
V: MapAccess<'de>,
{ {
let mut id = None; let mut id = None;
let mut url = None; let mut url = None;
@ -210,99 +215,93 @@ impl<'de> Deserialize<'de> for Item {
return Err(de::Error::duplicate_field("id")); return Err(de::Error::duplicate_field("id"));
} }
id = Some(map.next_value()?); id = Some(map.next_value()?);
}, }
Field::Url => { Field::Url => {
if url.is_some() { if url.is_some() {
return Err(de::Error::duplicate_field("url")); return Err(de::Error::duplicate_field("url"));
} }
url = map.next_value()?; url = map.next_value()?;
}, }
Field::ExternalUrl => { Field::ExternalUrl => {
if external_url.is_some() { if external_url.is_some() {
return Err(de::Error::duplicate_field("external_url")); return Err(de::Error::duplicate_field("external_url"));
} }
external_url = map.next_value()?; external_url = map.next_value()?;
}, }
Field::Title => { Field::Title => {
if title.is_some() { if title.is_some() {
return Err(de::Error::duplicate_field("title")); return Err(de::Error::duplicate_field("title"));
} }
title = map.next_value()?; title = map.next_value()?;
}, }
Field::ContentHtml => { Field::ContentHtml => {
if content_html.is_some() { if content_html.is_some() {
return Err(de::Error::duplicate_field("content_html")); return Err(de::Error::duplicate_field("content_html"));
} }
content_html = map.next_value()?; content_html = map.next_value()?;
}, }
Field::ContentText => { Field::ContentText => {
if content_text.is_some() { if content_text.is_some() {
return Err(de::Error::duplicate_field("content_text")); return Err(de::Error::duplicate_field("content_text"));
} }
content_text = map.next_value()?; content_text = map.next_value()?;
}, }
Field::Summary => { Field::Summary => {
if summary.is_some() { if summary.is_some() {
return Err(de::Error::duplicate_field("summary")); return Err(de::Error::duplicate_field("summary"));
} }
summary = map.next_value()?; summary = map.next_value()?;
}, }
Field::Image => { Field::Image => {
if image.is_some() { if image.is_some() {
return Err(de::Error::duplicate_field("image")); return Err(de::Error::duplicate_field("image"));
} }
image = map.next_value()?; image = map.next_value()?;
}, }
Field::BannerImage => { Field::BannerImage => {
if banner_image.is_some() { if banner_image.is_some() {
return Err(de::Error::duplicate_field("banner_image")); return Err(de::Error::duplicate_field("banner_image"));
} }
banner_image = map.next_value()?; banner_image = map.next_value()?;
}, }
Field::DatePublished => { Field::DatePublished => {
if date_published.is_some() { if date_published.is_some() {
return Err(de::Error::duplicate_field("date_published")); return Err(de::Error::duplicate_field("date_published"));
} }
date_published = map.next_value()?; date_published = map.next_value()?;
}, }
Field::DateModified => { Field::DateModified => {
if date_modified.is_some() { if date_modified.is_some() {
return Err(de::Error::duplicate_field("date_modified")); return Err(de::Error::duplicate_field("date_modified"));
} }
date_modified = map.next_value()?; date_modified = map.next_value()?;
}, }
Field::Author => { Field::Author => {
if author.is_some() { if author.is_some() {
return Err(de::Error::duplicate_field("author")); return Err(de::Error::duplicate_field("author"));
} }
author = map.next_value()?; author = map.next_value()?;
}, }
Field::Tags => { Field::Tags => {
if tags.is_some() { if tags.is_some() {
return Err(de::Error::duplicate_field("tags")); return Err(de::Error::duplicate_field("tags"));
} }
tags = map.next_value()?; tags = map.next_value()?;
}, }
Field::Attachments => { Field::Attachments => {
if attachments.is_some() { if attachments.is_some() {
return Err(de::Error::duplicate_field("attachments")); return Err(de::Error::duplicate_field("attachments"));
} }
attachments = map.next_value()?; attachments = map.next_value()?;
}, }
} }
} }
let id = id.ok_or_else(|| de::Error::missing_field("id"))?; let id = id.ok_or_else(|| de::Error::missing_field("id"))?;
let content = match (content_html, content_text) { let content = match (content_html, content_text) {
(Some(s), Some(t)) => { (Some(s), Some(t)) => Content::Both(s.to_string(), t.to_string()),
Content::Both(s.to_string(), t.to_string()) (Some(s), _) => Content::Html(s.to_string()),
}, (_, Some(t)) => Content::Text(t.to_string()),
(Some(s), _) => {
Content::Html(s.to_string())
},
(_, Some(t)) => {
Content::Text(t.to_string())
},
_ => return Err(de::Error::missing_field("content_html or content_text")), _ => return Err(de::Error::missing_field("content_html or content_text")),
}; };
@ -363,7 +362,12 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
@ -387,7 +391,12 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
@ -411,7 +420,12 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
@ -437,7 +451,12 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
@ -460,7 +479,12 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
@ -483,11 +507,15 @@ mod tests {
banner_image: Some("http://img.com/blah".into()), banner_image: Some("http://img.com/blah".into()),
date_published: Some("2017-01-01 10:00:00".into()), date_published: Some("2017-01-01 10:00:00".into()),
date_modified: Some("2017-01-01 10:00:00".into()), date_modified: Some("2017-01-01 10:00:00".into()),
author: Some(Author::new().name("bob jones").url("http://example.com").avatar("http://img.com/blah")), author: Some(
Author::new()
.name("bob jones")
.url("http://example.com")
.avatar("http://img.com/blah"),
),
tags: Some(vec!["json".into(), "feed".into()]), tags: Some(vec!["json".into(), "feed".into()]),
attachments: Some(vec![]), attachments: Some(vec![]),
}; };
assert_eq!(item, expected); assert_eq!(item, expected);
} }
} }

@@ -2,7 +2,7 @@
 //! instead of XML
 //!
 //! This crate can serialize and deserialize between JSON Feed strings
 //! and Rust data structures. It also allows for programmatically building
 //! a JSON Feed
 //!
 //! Example:
@@ -40,18 +40,20 @@
 //! ```

 extern crate serde;
-#[macro_use] extern crate error_chain;
-#[macro_use] extern crate serde_derive;
+#[macro_use]
+extern crate error_chain;
+#[macro_use]
+extern crate serde_derive;
 extern crate serde_json;

-mod errors;
-mod item;
-mod feed;
 mod builder;
+mod errors;
+mod feed;
+mod item;

 pub use errors::*;
+pub use feed::{Attachment, Author, Feed};
 pub use item::*;
-pub use feed::{Feed, Author, Attachment};

 use std::io::Write;
@@ -116,14 +118,16 @@ pub fn to_vec_pretty(value: &Feed) -> Result<Vec<u8>> {
 /// Serialize a Feed to JSON and output to an IO stream
 pub fn to_writer<W>(writer: W, value: &Feed) -> Result<()>
-    where W: Write
+where
+    W: Write,
 {
     Ok(serde_json::to_writer(writer, value)?)
 }

 /// Serialize a Feed to pretty-printed JSON and output to an IO stream
 pub fn to_writer_pretty<W>(writer: W, value: &Feed) -> Result<()>
-    where W: Write
+where
+    W: Write,
 {
     Ok(serde_json::to_writer_pretty(writer, value)?)
 }
@@ -137,10 +141,7 @@ mod tests {
     fn from_str() {
         let feed = r#"{"version": "https://jsonfeed.org/version/1","title":"","items":[]}"#;
         let expected = Feed::default();
-        assert_eq!(
-            super::from_str(&feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::from_str(&feed).unwrap(), expected);
     }
     #[test]
     fn from_reader() {
@@ -148,39 +149,27 @@ mod tests {
         let feed = feed.as_bytes();
         let feed = Cursor::new(feed);
         let expected = Feed::default();
-        assert_eq!(
-            super::from_reader(feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::from_reader(feed).unwrap(), expected);
     }

     #[test]
     fn from_slice() {
         let feed = r#"{"version": "https://jsonfeed.org/version/1","title":"","items":[]}"#;
         let feed = feed.as_bytes();
         let expected = Feed::default();
-        assert_eq!(
-            super::from_slice(&feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::from_slice(&feed).unwrap(), expected);
     }

     #[test]
     fn from_value() {
         let feed = r#"{"version": "https://jsonfeed.org/version/1","title":"","items":[]}"#;
         let feed: serde_json::Value = serde_json::from_str(&feed).unwrap();
         let expected = Feed::default();
-        assert_eq!(
-            super::from_value(feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::from_value(feed).unwrap(), expected);
     }

     #[test]
     fn to_string() {
         let feed = Feed::default();
         let expected = r#"{"version":"https://jsonfeed.org/version/1","title":"","items":[]}"#;
-        assert_eq!(
-            super::to_string(&feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::to_string(&feed).unwrap(), expected);
     }
     #[test]
     fn to_string_pretty() {
@@ -190,28 +179,19 @@ mod tests {
             "title": "",
             "items": []
         }"#;
-        assert_eq!(
-            super::to_string_pretty(&feed).unwrap(),
-            expected
-        );
+        assert_eq!(super::to_string_pretty(&feed).unwrap(), expected);
     }

     #[test]
     fn to_value() {
         let feed = r#"{"version":"https://jsonfeed.org/version/1","title":"","items":[]}"#;
         let expected: serde_json::Value = serde_json::from_str(&feed).unwrap();
-        assert_eq!(
-            super::to_value(Feed::default()).unwrap(),
-            expected
-        );
+        assert_eq!(super::to_value(Feed::default()).unwrap(), expected);
     }

     #[test]
     fn to_vec() {
         let feed = r#"{"version":"https://jsonfeed.org/version/1","title":"","items":[]}"#;
         let expected = feed.as_bytes();
-        assert_eq!(
-            super::to_vec(&Feed::default()).unwrap(),
-            expected
-        );
+        assert_eq!(super::to_vec(&Feed::default()).unwrap(), expected);
     }

     #[test]
     fn to_vec_pretty() {
@@ -221,10 +201,7 @@ mod tests {
             "items": []
         }"#;
         let expected = feed.as_bytes();
-        assert_eq!(
-            super::to_vec_pretty(&Feed::default()).unwrap(),
-            expected
-        );
+        assert_eq!(super::to_vec_pretty(&Feed::default()).unwrap(), expected);
     }

     #[test]
     fn to_writer() {
@@ -249,4 +226,3 @@ mod tests {
         assert_eq!(result, feed);
     }
 }
-

lib/mi/Cargo.toml (new file)
@@ -0,0 +1,22 @@
+[package]
+name = "mi"
+version = "0.1.0"
+authors = ["Christine Dodrill <me@christine.website>"]
+edition = "2018"
+
+# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+
+[dependencies]
+chrono = { version = "0.4", features = ["serde"] }
+color-eyre = "0.5"
+reqwest = { version = "0.11", features = ["json"] }
+serde_json = "1.0"
+serde = { version = "1", features = ["derive"] }
+thiserror = "1"
+tracing = "0.1"
+tracing-futures = "0.2"
+
+[dev-dependencies]
+tokio = { version = "1", features = ["macros"] }
+envy = "0.4"
+pretty_env_logger = "0"

lib/mi/src/lib.rs (new file)
@@ -0,0 +1,73 @@
+use color_eyre::eyre::Result;
+use reqwest::header;
+use serde::Deserialize;
+use tracing::instrument;
+
+const USER_AGENT_BASE: &str = concat!(
+    "library/",
+    env!("CARGO_PKG_NAME"),
+    "/",
+    env!("CARGO_PKG_VERSION")
+);
+
+pub struct Client {
+    cli: reqwest::Client,
+    base_url: String,
+}
+
+impl Client {
+    pub fn new(token: String, user_agent: String) -> Result<Self> {
+        let mut headers = header::HeaderMap::new();
+        headers.insert(
+            header::AUTHORIZATION,
+            header::HeaderValue::from_str(&token.clone())?,
+        );
+        let cli = reqwest::Client::builder()
+            .user_agent(&format!("{} {}", user_agent, USER_AGENT_BASE))
+            .default_headers(headers)
+            .build()?;
+
+        Ok(Self {
+            cli: cli,
+            base_url: "https://mi.within.website".to_string(),
+        })
+    }
+
+    #[instrument(skip(self), err)]
+    pub async fn mentioners(&self, url: String) -> Result<Vec<WebMention>> {
+        Ok(self
+            .cli
+            .get(&format!("{}/api/webmention/for", self.base_url))
+            .query(&[("target", &url)])
+            .send()
+            .await?
+            .error_for_status()?
+            .json()
+            .await?)
+    }
+
+    #[instrument(skip(self), err)]
+    pub async fn refresh(&self) -> Result<()> {
+        self.cli
+            .post("https://mi.within.website/api/blog/refresh")
+            .send()
+            .await?
+            .error_for_status()?;
+
+        Ok(())
+    }
+}
+
+#[derive(Debug, Deserialize, Eq, PartialEq, Clone)]
+pub struct WebMention {
+    pub source: String,
+    pub title: Option<String>,
+}
+
+#[cfg(test)]
+mod tests {
+    #[test]
+    fn it_works() {
+        assert_eq!(2 + 2, 4);
+    }
+}

@@ -8,7 +8,7 @@ edition = "2018"
 [dependencies]
 chrono = { version = "0.4", features = ["serde"] }
-reqwest = { version = "0.10", features = ["json"] }
+reqwest = { version = "0.11", features = ["json"] }
 serde_json = "1.0"
 serde = { version = "1", features = ["derive"] }
 thiserror = "1"
@@ -16,6 +16,6 @@ tracing = "0.1"
 tracing-futures = "0.2"

 [dev-dependencies]
-tokio = { version = "0.2", features = ["macros"] }
+tokio = { version = "1", features = ["macros"] }
 envy = "0.4"
 pretty_env_logger = "0"

nix/rust.nix (new file)
@@ -0,0 +1,10 @@
+{ sources ? import ./sources.nix }:
+
+let
+  pkgs =
+    import sources.nixpkgs { overlays = [ (import sources.nixpkgs-mozilla) ]; };
+  channel = "nightly";
+  date = "2021-01-14";
+  targets = [ ];
+  chan = pkgs.rustChannelOfTargets channel date targets;
+in chan

@@ -5,10 +5,10 @@
     "homepage": "",
     "owner": "justinwoo",
     "repo": "easy-dhall-nix",
-    "rev": "3e9101c5dfd69a9fc28fe4998aff378f91bfcb64",
-    "sha256": "1nsn1n4sx4za6jipcid1293rdw8lqgj9097s0khiij3fz0bzhrg9",
+    "rev": "eae7f64c4d6c70681e5a56c84198236930ba425e",
+    "sha256": "1y2x15v8a679vlpxazjpibfwajp6zph60f8wjcm4xflbvazk0dx7",
     "type": "tarball",
-    "url": "https://github.com/justinwoo/easy-dhall-nix/archive/3e9101c5dfd69a9fc28fe4998aff378f91bfcb64.tar.gz",
+    "url": "https://github.com/justinwoo/easy-dhall-nix/archive/eae7f64c4d6c70681e5a56c84198236930ba425e.tar.gz",
     "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
   },
   "naersk": {
@@ -17,10 +17,10 @@
     "homepage": "",
     "owner": "nmattia",
     "repo": "naersk",
-    "rev": "529e910a3f423a8211f8739290014b754b2555b6",
-    "sha256": "0bcy9nmyaan5jvp0wg80wkizc9j166ns685rdr1kbhkvdpywv46y",
+    "rev": "a76924cbbb17c387e5ae4998a4721d88a3ac95c0",
+    "sha256": "09b5g2krf8mfpajgz2bgapkv3dpimg0qx1nfpjafcrsk0fhxmqay",
     "type": "tarball",
-    "url": "https://github.com/nmattia/naersk/archive/529e910a3f423a8211f8739290014b754b2555b6.tar.gz",
+    "url": "https://github.com/nmattia/naersk/archive/a76924cbbb17c387e5ae4998a4721d88a3ac95c0.tar.gz",
     "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
   },
   "niv": {
@@ -29,10 +29,10 @@
     "homepage": "https://github.com/nmattia/niv",
     "owner": "nmattia",
     "repo": "niv",
-    "rev": "29ddaaf4e099c3ac0647f5b652469dfc79cd3b53",
-    "sha256": "1va6myp07gkspgxfch8z3rs9nyvys6jmgzkys6a2c4j09qxp1bs0",
+    "rev": "94dadba1a3a6a2f0b8ca2963e49daeec5d4e3098",
+    "sha256": "1y2h9wl7w60maa2m4xw9231xdr325xynzpph8xr4j5vsznygv986",
     "type": "tarball",
-    "url": "https://github.com/nmattia/niv/archive/29ddaaf4e099c3ac0647f5b652469dfc79cd3b53.tar.gz",
+    "url": "https://github.com/nmattia/niv/archive/94dadba1a3a6a2f0b8ca2963e49daeec5d4e3098.tar.gz",
     "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
   },
   "nixpkgs": {
@@ -41,13 +41,26 @@
     "homepage": "https://github.com/NixOS/nixpkgs",
     "owner": "NixOS",
     "repo": "nixpkgs-channels",
-    "rev": "72b9660dc18ba347f7cd41a9504fc181a6d87dc3",
-    "sha256": "1cqgpw263bz261bgz34j6hiawi4hi6smwp6981yz375fx0g6kmss",
+    "rev": "502845c3e31ef3de0e424f3fcb09217df2ce6df6",
+    "sha256": "0fcqpsy6y7dgn0y0wgpa56gsg0b0p8avlpjrd79fp4mp9bl18nda",
     "type": "tarball",
-    "url": "https://github.com/NixOS/nixpkgs-channels/archive/72b9660dc18ba347f7cd41a9504fc181a6d87dc3.tar.gz",
+    "url": "https://github.com/NixOS/nixpkgs-channels/archive/502845c3e31ef3de0e424f3fcb09217df2ce6df6.tar.gz",
+    "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
+  },
+  "nixpkgs-mozilla": {
+    "branch": "master",
+    "description": "mozilla related nixpkgs (extends nixos/nixpkgs repo)",
+    "homepage": null,
+    "owner": "mozilla",
+    "repo": "nixpkgs-mozilla",
+    "rev": "8c007b60731c07dd7a052cce508de3bb1ae849b4",
+    "sha256": "1zybp62zz0h077zm2zmqs2wcg3whg6jqaah9hcl1gv4x8af4zhs6",
+    "type": "tarball",
+    "url": "https://github.com/mozilla/nixpkgs-mozilla/archive/8c007b60731c07dd7a052cce508de3bb1ae849b4.tar.gz",
     "url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
   },
   "xepkgs": {
+    "branch": "master",
     "ref": "master",
     "repo": "https://tulpa.dev/Xe/nixpkgs",
     "rev": "5621d41482bca79d05c97758bb86eeb9099e26c9",
129
nix/sources.nix vendored
View File

@ -6,52 +6,63 @@ let
# The fetchers. fetch_<type> fetches specs of type <type>. # The fetchers. fetch_<type> fetches specs of type <type>.
# #
fetch_file = pkgs: spec: fetch_file = pkgs: name: spec:
if spec.builtin or true then let
builtins_fetchurl { inherit (spec) url sha256; } name' = sanitizeName name + "-src";
else in
pkgs.fetchurl { inherit (spec) url sha256; }; if spec.builtin or true then
builtins_fetchurl { inherit (spec) url sha256; name = name'; }
else
pkgs.fetchurl { inherit (spec) url sha256; name = name'; };
fetch_tarball = pkgs: spec: fetch_tarball = pkgs: name: spec:
if spec.builtin or true then let
builtins_fetchTarball { inherit (spec) url sha256; } name' = sanitizeName name + "-src";
else in
pkgs.fetchzip { inherit (spec) url sha256; }; if spec.builtin or true then
builtins_fetchTarball { name = name'; inherit (spec) url sha256; }
else
pkgs.fetchzip { name = name'; inherit (spec) url sha256; };
fetch_git = spec: fetch_git = name: spec:
builtins.fetchGit { url = spec.repo; inherit (spec) rev ref; }; let
ref =
if spec ? ref then spec.ref else
if spec ? branch then "refs/heads/${spec.branch}" else
if spec ? tag then "refs/tags/${spec.tag}" else
abort "In git source '${name}': Please specify `ref`, `tag` or `branch`!";
in
builtins.fetchGit { url = spec.repo; inherit (spec) rev; inherit ref; };
fetch_builtin-tarball = spec: fetch_local = spec: spec.path;
builtins.trace
''
WARNING:
The niv type "builtin-tarball" will soon be deprecated. You should
instead use `builtin = true`.
$ niv modify <package> -a type=tarball -a builtin=true fetch_builtin-tarball = name: throw
'' ''[${name}] The niv type "builtin-tarball" is deprecated. You should instead use `builtin = true`.
builtins_fetchTarball { inherit (spec) url sha256; }; $ niv modify ${name} -a type=tarball -a builtin=true'';
fetch_builtin-url = spec: fetch_builtin-url = name: throw
builtins.trace ''[${name}] The niv type "builtin-url" will soon be deprecated. You should instead use `builtin = true`.
'' $ niv modify ${name} -a type=file -a builtin=true'';
WARNING:
The niv type "builtin-url" will soon be deprecated. You should
instead use `builtin = true`.
$ niv modify <package> -a type=file -a builtin=true
''
(builtins_fetchurl { inherit (spec) url sha256; });
# #
# Various helpers # Various helpers
# #
# https://github.com/NixOS/nixpkgs/pull/83241/files#diff-c6f540a4f3bfa4b0e8b6bafd4cd54e8bR695
sanitizeName = name:
(
concatMapStrings (s: if builtins.isList s then "-" else s)
(
builtins.split "[^[:alnum:]+._?=-]+"
((x: builtins.elemAt (builtins.match "\\.*(.*)" x) 0) name)
)
);
# The set of packages used when specs are fetched using non-builtins. # The set of packages used when specs are fetched using non-builtins.
mkPkgs = sources: mkPkgs = sources: system:
let let
sourcesNixpkgs = sourcesNixpkgs =
import (builtins_fetchTarball { inherit (sources.nixpkgs) url sha256; }) {}; import (builtins_fetchTarball { inherit (sources.nixpkgs) url sha256; }) { inherit system; };
hasNixpkgsPath = builtins.any (x: x.prefix == "nixpkgs") builtins.nixPath; hasNixpkgsPath = builtins.any (x: x.prefix == "nixpkgs") builtins.nixPath;
hasThisAsNixpkgsPath = <nixpkgs> == ./.; hasThisAsNixpkgsPath = <nixpkgs> == ./.;
in in
@ -71,14 +82,24 @@ let
if ! builtins.hasAttr "type" spec then if ! builtins.hasAttr "type" spec then
abort "ERROR: niv spec ${name} does not have a 'type' attribute" abort "ERROR: niv spec ${name} does not have a 'type' attribute"
else if spec.type == "file" then fetch_file pkgs spec else if spec.type == "file" then fetch_file pkgs name spec
else if spec.type == "tarball" then fetch_tarball pkgs spec else if spec.type == "tarball" then fetch_tarball pkgs name spec
else if spec.type == "git" then fetch_git spec else if spec.type == "git" then fetch_git name spec
else if spec.type == "builtin-tarball" then fetch_builtin-tarball spec else if spec.type == "local" then fetch_local spec
else if spec.type == "builtin-url" then fetch_builtin-url spec else if spec.type == "builtin-tarball" then fetch_builtin-tarball name
else if spec.type == "builtin-url" then fetch_builtin-url name
else else
abort "ERROR: niv spec ${name} has unknown type ${builtins.toJSON spec.type}"; abort "ERROR: niv spec ${name} has unknown type ${builtins.toJSON spec.type}";
# If the environment variable NIV_OVERRIDE_${name} is set, then use
# the path directly as opposed to the fetched source.
replace = name: drv:
let
saneName = stringAsChars (c: if isNull (builtins.match "[a-zA-Z0-9]" c) then "_" else c) name;
ersatz = builtins.getEnv "NIV_OVERRIDE_${saneName}";
in
if ersatz == "" then drv else ersatz;
# Ports of functions for older nix versions # Ports of functions for older nix versions
# a Nix version of mapAttrs if the built-in doesn't exist # a Nix version of mapAttrs if the built-in doesn't exist
@ -87,23 +108,37 @@ let
listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set)) listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set))
); );
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/lists.nix#L295
range = first: last: if first > last then [] else builtins.genList (n: first + n) (last - first + 1);
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L257
stringToCharacters = s: map (p: builtins.substring p 1 s) (range 0 (builtins.stringLength s - 1));
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L269
stringAsChars = f: s: concatStrings (map f (stringToCharacters s));
concatMapStrings = f: list: concatStrings (map f list);
concatStrings = builtins.concatStringsSep "";
# https://github.com/NixOS/nixpkgs/blob/8a9f58a375c401b96da862d969f66429def1d118/lib/attrsets.nix#L331
optionalAttrs = cond: as: if cond then as else {};
# fetchTarball version that is compatible between all the versions of Nix # fetchTarball version that is compatible between all the versions of Nix
builtins_fetchTarball = { url, sha256 }@attrs: builtins_fetchTarball = { url, name ? null, sha256 }@attrs:
let let
inherit (builtins) lessThan nixVersion fetchTarball; inherit (builtins) lessThan nixVersion fetchTarball;
in in
if lessThan nixVersion "1.12" then if lessThan nixVersion "1.12" then
fetchTarball { inherit url; } fetchTarball ({ inherit url; } // (optionalAttrs (!isNull name) { inherit name; }))
else else
fetchTarball attrs; fetchTarball attrs;
# fetchurl version that is compatible between all the versions of Nix # fetchurl version that is compatible between all the versions of Nix
builtins_fetchurl = { url, sha256 }@attrs: builtins_fetchurl = { url, name ? null, sha256 }@attrs:
let let
inherit (builtins) lessThan nixVersion fetchurl; inherit (builtins) lessThan nixVersion fetchurl;
in in
if lessThan nixVersion "1.12" then if lessThan nixVersion "1.12" then
fetchurl { inherit url; } fetchurl ({ inherit url; } // (optionalAttrs (!isNull name) { inherit name; }))
else else
fetchurl attrs; fetchurl attrs;
@ -115,14 +150,15 @@ let
then abort then abort
"The values in sources.json should not have an 'outPath' attribute" "The values in sources.json should not have an 'outPath' attribute"
else else
spec // { outPath = fetch config.pkgs name spec; } spec // { outPath = replace name (fetch config.pkgs name spec); }
) config.sources; ) config.sources;
# The "config" used by the fetchers # The "config" used by the fetchers
mkConfig = mkConfig =
{ sourcesFile ? ./sources.json { sourcesFile ? if builtins.pathExists ./sources.json then ./sources.json else null
, sources ? builtins.fromJSON (builtins.readFile sourcesFile) , sources ? if isNull sourcesFile then {} else builtins.fromJSON (builtins.readFile sourcesFile)
, pkgs ? mkPkgs sources , system ? builtins.currentSystem
, pkgs ? mkPkgs sources system
}: rec { }: rec {
# The sources, i.e. the attribute set of spec name to spec # The sources, i.e. the attribute set of spec name to spec
inherit sources; inherit sources;
@ -130,5 +166,6 @@ let
# The "pkgs" (evaluated nixpkgs) to use for e.g. non-builtin fetchers # The "pkgs" (evaluated nixpkgs) to use for e.g. non-builtin fetchers
inherit pkgs; inherit pkgs;
}; };
in in
mkSources (mkConfig {}) // { __functor = _: settings: mkSources (mkConfig settings); } mkSources (mkConfig {}) // { __functor = _: settings: mkSources (mkConfig settings); }
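Note the effect of the NIV_OVERRIDE hook added above: exporting, say, NIV_OVERRIDE_nixpkgs=/home/me/nixpkgs (a hypothetical path) makes the replace function hand back that path as the source's outPath instead of the fetched derivation, so a dependency can be pointed at a local checkout without editing sources.json.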


@ -1,10 +0,0 @@
#!/usr/bin/env nix-shell
#! nix-shell -p doctl -p kubectl -p curl -i bash
nix-env -if ./nix/dhall-yaml.nix
doctl kubernetes cluster kubeconfig save kubermemes
dhall-to-yaml-ng < ./site.dhall | kubectl apply -n apps -f -
kubectl rollout status -n apps deployment/christinewebsite
kubectl apply -f ./k8s/job.yml
sleep 10
kubectl delete -f ./k8s/job.yml
curl -H "Authorization: $MI_TOKEN" --data "https://christine.website/blog.json" https://mi.within.website/blog/refresh


@ -5,16 +5,14 @@ let
dhall-yaml = dhallpkgs.dhall-yaml-simple;
dhall = dhallpkgs.dhall-simple;
xepkgs = import sources.xepkgs { inherit pkgs; };
rust = pkgs.callPackage ./nix/rust.nix { };
in with pkgs;
with xepkgs;
mkShell {
buildInputs = [
# Rust
rust
cargo-watch

# system dependencies
openssl


@ -1,7 +1,7 @@
let Person =
{ Type = { name : Text, tags : List Text, gitLink : Text, twitter : Text }
, default =
{ name = "", tags = [] : List Text, gitLink = "", twitter = "" }
}
in [ Person::{
@ -76,35 +76,47 @@ in [ Person::{
, twitter = "N/A" , twitter = "N/A"
} }
, Person::{ , Person::{
, name = "Jamie Bliss"
, tags = [ "python", "devops", "full-stack", "saltstack", "web", "linux" ]
, gitLink = "https://github.com/AstraLuma"
, twitter = "https://twitter.com/AstraLuma"
}
, Person::{
, name = "Joseph Crawley" , name = "Joseph Crawley"
, tags = , tags =
[ "javascript", "react", "csharp", "python", "full-stack", "web", "bash", "linux" ] [ "javascript"
, "react"
, "csharp"
, "python"
, "full-stack"
, "web"
, "bash"
, "linux"
]
, gitLink = "https://github.com/espe-on" , gitLink = "https://github.com/espe-on"
, twitter = "https://twitter.com/espe_on_" , twitter = "https://twitter.com/espe_on_"
} }
, Person::{ , Person::{
, name = "nicoo" , name = "nicoo"
, tags = [ "cryptography", "Debian", "distributed systems", "embedded", "nix", "rust", "privacy", "security", "SDR" ] , tags =
[ "cryptography"
, "Debian"
, "distributed systems"
, "embedded"
, "nix"
, "rust"
, "privacy"
, "security"
, "SDR"
]
, gitLink = "https://github.com/nbraud" , gitLink = "https://github.com/nbraud"
} }
, Person::{ , Person::{
, name = "Natthan Leong" , name = "Natthan Leong"
, tags = , tags =
[ "c" [ "entry-level"
, "embedded" , "canada"
, "firmware" , "c"
, "java"
, "golang" , "golang"
, "linux" , "linux"
, "lua"
, "python" , "python"
, "rust"
, "shell" , "shell"
, "sql"
] ]
, gitLink = "https://github.com/ansimita" , gitLink = "https://github.com/ansimita"
, twitter = "https://twitter.com/ansimita" , twitter = "https://twitter.com/ansimita"
@ -139,7 +151,7 @@ in [ Person::{
, gitLink = "https://github.com/Piyushhbhutoria" , gitLink = "https://github.com/Piyushhbhutoria"
, twitter = "https://twitter.com/PiyushhB" , twitter = "https://twitter.com/PiyushhB"
} }
, Person::{ , Person::{
, name = "Ryan Casalino" , name = "Ryan Casalino"
, tags = , tags =
[ "golang" [ "golang"
@ -179,7 +191,7 @@ in [ Person::{
}
, Person::{
, name = "Zachary McKee"
, tags =
[ "javascript"
, "django"
, "react"
@ -195,22 +207,15 @@ in [ Person::{
, gitLink = "https://github.com/ZacharyRMcKee" , gitLink = "https://github.com/ZacharyRMcKee"
, twitter = "N/A" , twitter = "N/A"
} }
, Person::{ , Person::{
, name = "Muazzam Kazmi" , name = "Muazzam Kazmi"
, tags = , tags = [ "Rust", "C++", "x86assembly", "WinAPI", "Node.js", "React.js" ]
[ "Rust"
, "C++"
, "x86assembly"
, "WinAPI"
, "Node.js"
, "React.js"
]
, gitLink = "https://github.com/muazzamalikazmi" , gitLink = "https://github.com/muazzamalikazmi"
, twitter = "N/A" , twitter = "N/A"
} }
, Person::{ , Person::{
, name = "Jeffin Mathew" , name = "Jeffin Mathew"
, tags = , tags =
[ "Python" [ "Python"
, "routing&switching" , "routing&switching"
, "django" , "django"
@ -220,7 +225,53 @@ in [ Person::{
, "javascript" , "javascript"
, "iot" , "iot"
] ]
, gitLink = "https://github.com/mjeffin" , gitLink = "https://github.com/mjeffin"
, twitter = "https://twitter.com/mpjeffin" , twitter = "https://twitter.com/mpjeffin"
}
, Person::{
, name = "Nasir Hussain"
, tags =
[ "python"
, "linux"
, "javascript"
, "ansible"
, "nix"
, "docker&podman"
, "django"
, "golang"
, "rpm packaging"
]
, gitLink = "https://github.com/nasirhm"
, twitter = "https://twitter.com/_nasirhm_"
}
, Person::{
, name = "Eliot Partridge"
, tags =
[ "python"
, "linux"
, "typescript"
, "javascript"
, "docker"
, "c#"
, "dotnet"
, "php"
]
, gitLink = "https://github.com/BytewaveMLP"
}
, Person::{
, name = "İlteriş Eroğlu"
, tags =
[ "linux"
, "javascript"
, "node.js"
, "bash"
, "nfc"
, "python"
, "devops"
, "networking"
, "bgp"
]
, gitLink = "https://github.com/linuxgemini"
, twitter = "https://twitter.com/linuxgemini"
}
]


@ -1,42 +0,0 @@
let kms =
https://tulpa.dev/cadey/kubermemes/raw/branch/master/k8s/package.dhall
let kubernetes =
https://raw.githubusercontent.com/dhall-lang/dhall-kubernetes/master/1.15/package.dhall
let tag = env:GITHUB_SHA as Text ? "latest"
let image = "ghcr.io/xe/site:${tag}"
let vars
: List kubernetes.EnvVar.Type
= [ kubernetes.EnvVar::{ name = "PORT", value = Some "3030" }
, kubernetes.EnvVar::{ name = "RUST_LOG", value = Some "info" }
, kubernetes.EnvVar::{
, name = "PATREON_CLIENT_ID"
, value = Some env:PATREON_CLIENT_ID as Text
}
, kubernetes.EnvVar::{
, name = "PATREON_CLIENT_SECRET"
, value = Some env:PATREON_CLIENT_SECRET as Text
}
, kubernetes.EnvVar::{
, name = "PATREON_ACCESS_TOKEN"
, value = Some env:PATREON_ACCESS_TOKEN as Text
}
, kubernetes.EnvVar::{
, name = "PATREON_REFRESH_TOKEN"
, value = Some env:PATREON_REFRESH_TOKEN as Text
}
]
in kms.app.make
kms.app.Config::{
, name = "christinewebsite"
, appPort = 3030
, image = image
, replicas = 2
, domain = "christine.website"
, leIssuer = "prod"
, envVars = vars
}


@ -1,51 +0,0 @@
{ sources ? import ./nix/sources.nix, pkgs ? import sources.nixpkgs { } }:
with pkgs;
let
srcNoTarget = dir:
builtins.filterSource
(path: type: type != "directory" || builtins.baseNameOf path != "target")
dir;
naersk = pkgs.callPackage sources.naersk { };
dhallpkgs = import sources.easy-dhall-nix { inherit pkgs; };
src = srcNoTarget ./.;
xesite = naersk.buildPackage {
inherit src;
buildInputs = [ pkg-config openssl git ];
remapPathPrefix = true;
};
config = stdenv.mkDerivation {
pname = "xesite-config";
version = "HEAD";
buildInputs = [ dhallpkgs.dhall-simple ];
phases = "installPhase";
installPhase = ''
cd ${src}
dhall resolve < ${src}/config.dhall >> $out
'';
};
in pkgs.stdenv.mkDerivation {
inherit (xesite) name;
inherit src;
phases = "installPhase";
installPhase = ''
mkdir -p $out $out/bin
cp -rf ${config} $out/config.dhall
cp -rf $src/blog $out/blog
cp -rf $src/css $out/css
cp -rf $src/gallery $out/gallery
cp -rf $src/signalboost.dhall $out/signalboost.dhall
cp -rf $src/static $out/static
cp -rf $src/talks $out/talks
cp -rf ${xesite}/bin/xesite $out/bin/xesite
'';
}


@ -2,24 +2,31 @@ use crate::{post::Post, signalboost::Person};
use color_eyre::eyre::Result;
use serde::Deserialize;
use std::{fs, path::PathBuf};
use tracing::{error, instrument};

pub mod markdown;
pub mod poke;

#[derive(Clone, Deserialize)]
pub struct Config {
#[serde(rename = "clackSet")]
pub(crate) clack_set: Vec<String>,
pub(crate) signalboost: Vec<Person>,
pub(crate) port: u16,
#[serde(rename = "resumeFname")]
pub(crate) resume_fname: PathBuf,
#[serde(rename = "webMentionEndpoint")]
pub(crate) webmention_url: String,
#[serde(rename = "miToken")]
pub(crate) mi_token: String,
}
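With the two new fields, config.dhall now has to provide webMentionEndpoint and miToken. A minimal loading sketch, assuming serde_dhall (which the repo depends on) is what deserializes this struct; the path and error handling are illustrative only:

  // Hedged sketch: parse the Config struct above from a Dhall file with
  // serde_dhall. The serde renames map webMentionEndpoint/miToken onto
  // webmention_url/mi_token.
  fn load_config() -> color_eyre::eyre::Result<Config> {
      let cfg: Config = serde_dhall::from_file("./config.dhall").parse()?;
      Ok(cfg)
  }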
#[instrument]
async fn patrons() -> Result<Option<patreon::Users>> {
use patreon::*;
let creds: Credentials = envy::prefixed("PATREON_")
.from_env()
.unwrap_or(Credentials::default());
let cli = Client::new(creds);
match cli.campaign().await {
@ -54,6 +61,7 @@ pub struct State {
pub jf: jsonfeed::Feed,
pub sitemap: Vec<u8>,
pub patrons: Option<patreon::Users>,
pub mi: mi::Client,
}
pub async fn init(cfg: PathBuf) -> Result<State> {
@ -61,9 +69,10 @@ pub async fn init(cfg: PathBuf) -> Result<State> {
let sb = cfg.signalboost.clone();
let resume = fs::read_to_string(cfg.resume_fname.clone())?;
let resume: String = markdown::render(&resume)?;
let mi = mi::Client::new(cfg.mi_token.clone(), crate::APPLICATION_NAME.to_string())?;
let blog = crate::post::load("blog", Some(&mi)).await?;
let gallery = crate::post::load("gallery", None).await?;
let talks = crate::post::load("talks", None).await?;
let mut everything: Vec<Post> = vec![];
{
@ -78,6 +87,8 @@ pub async fn init(cfg: PathBuf) -> Result<State> {
everything.sort();
everything.reverse();
let everything: Vec<Post> = everything.into_iter().take(20).collect();
let mut jfb = jsonfeed::Feed::builder()
.title("Christine Dodrill's Blog")
.description("My blog posts and rants about various technology things.")
@ -118,6 +129,7 @@ pub async fn init(cfg: PathBuf) -> Result<State> {
urlwriter.end()?;
Ok(State {
mi: mi,
cfg: cfg,
signalboost: sb,
resume: resume,

src/app/poke.rs (new file, 86 lines)

@ -0,0 +1,86 @@
use color_eyre::eyre::Result;
use std::{env, time::Duration};
use tokio::time::sleep as delay_for;
use tracing::instrument;
#[instrument(err)]
pub async fn the_cloud() -> Result<()> {
info!("waiting for things to settle");
delay_for(Duration::from_secs(10)).await;
info!("purging cloudflare cache");
cloudflare().await?;
info!("waiting for the cloudflare cache to purge globally");
delay_for(Duration::from_secs(45)).await;
info!("poking mi");
mi().await?;
info!("poking bing");
bing().await?;
info!("poking google");
google().await?;
Ok(())
}
#[instrument(err)]
async fn bing() -> Result<()> {
let cli = reqwest::Client::new();
cli.get("https://www.bing.com/ping")
.query(&[("sitemap", "https://christine.website/sitemap.xml")])
.header("User-Agent", crate::APPLICATION_NAME)
.send()
.await?
.error_for_status()?;
Ok(())
}
#[instrument(err)]
async fn google() -> Result<()> {
let cli = reqwest::Client::new();
cli.get("https://www.google.com/ping")
.query(&[("sitemap", "https://christine.website/sitemap.xml")])
.header("User-Agent", crate::APPLICATION_NAME)
.send()
.await?
.error_for_status()?;
Ok(())
}
#[instrument(err)]
async fn cloudflare() -> Result<()> {
let cli = cfcache::Client::new(env::var("CF_TOKEN")?, env::var("CF_ZONE_ID")?)?;
cli.purge(
vec![
"https://christine.website/sitemap.xml",
"https://christine.website",
"https://christine.website/blog",
"https://christine.website/blog.atom",
"https://christine.website/blog.json",
"https://christine.website/blog.rss",
"https://christine.website/gallery",
"https://christine.website/talks",
"https://christine.website/resume",
"https://christine.website/signalboost",
"https://christine.website/feeds",
]
.into_iter()
.map(|i| i.to_string())
.collect(),
)
.await?;
Ok(())
}
#[instrument(err)]
async fn mi() -> Result<()> {
let cli = mi::Client::new(env::var("MI_TOKEN")?, crate::APPLICATION_NAME.to_string())?;
cli.refresh().await?;
Ok(())
}
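bing() and google() above differ only in the URL they ping; a hedged refactor sketch (ping_sitemap is hypothetical, not part of this commit) shows the shared shape:

  // Hypothetical helper: both search-engine pings share this request shape.
  async fn ping_sitemap(url: &str) -> Result<()> {
      reqwest::Client::new()
          .get(url)
          .query(&[("sitemap", "https://christine.website/sitemap.xml")])
          .header("User-Agent", crate::APPLICATION_NAME)
          .send()
          .await?
          .error_for_status()?;
      Ok(())
  }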


@ -4,8 +4,23 @@ use std::process::Command;
fn main() -> Result<()> {
Ructe::from_env()?.compile_templates("templates")?;
let output = Command::new("git")
.args(&["rev-parse", "HEAD"])
.output()
.unwrap();
if std::env::var("out").is_err() {
println!("cargo:rustc-env=out=/yolo");
}
let git_hash = String::from_utf8(output.stdout).unwrap();
println!(
"cargo:rustc-env=GITHUB_SHA={}",
if git_hash.as_str() == "" {
env!("out").into()
} else {
git_hash
}
);
Ok(())
}
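Presumably the point of this dance: inside the Nix build sandbox there is no .git directory, so git rev-parse HEAD prints nothing, and the script falls back to the $out store path (unique per build) as the version string, while the /yolo default keeps plain cargo builds outside Nix compiling. The fallback in isolation, as a hedged sketch:

  // Hedged sketch of the fallback above: prefer the git hash, else the Nix
  // $out store path, else a dummy value for non-Nix builds.
  fn version_string(git_hash: String) -> String {
      if git_hash.is_empty() {
          std::env::var("out").unwrap_or_else(|_| "/yolo".to_string())
      } else {
          git_hash
      }
  }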


@ -11,10 +11,11 @@ lazy_static! {
&["kind"] &["kind"]
) )
.unwrap(); .unwrap();
pub static ref ETAG: String = format!(r#"W/"{}""#, uuid::Uuid::new_v4().to_simple());
} }
#[instrument(skip(state))]
pub async fn jsonfeed(state: Arc<State>, since: Option<String>) -> Result<impl Reply, Rejection> {
HIT_COUNTER.with_label_values(&["json"]).inc();
let state = state.clone();
Ok(warp::reply::json(&state.jf))
@ -29,7 +30,22 @@ pub enum RenderError {
impl warp::reject::Reject for RenderError {}
#[instrument(skip(state))]
pub async fn atom(state: Arc<State>, since: Option<String>) -> Result<impl Reply, Rejection> {
if let Some(etag) = since {
if etag == ETAG.clone() {
return Response::builder()
.status(304)
.header("Content-Type", "text/plain")
.body(
"You already have the newest version of this feed."
.to_string()
.into_bytes(),
)
.map_err(RenderError::Build)
.map_err(warp::reject::custom);
}
}
HIT_COUNTER.with_label_values(&["atom"]).inc(); HIT_COUNTER.with_label_values(&["atom"]).inc();
let state = state.clone(); let state = state.clone();
let mut buf = Vec::new(); let mut buf = Vec::new();
@ -39,13 +55,29 @@ pub async fn atom(state: Arc<State>) -> Result<impl Reply, Rejection> {
Response::builder()
.status(200)
.header("Content-Type", "application/atom+xml")
.header("ETag", ETAG.clone())
.body(buf)
.map_err(RenderError::Build)
.map_err(warp::reject::custom)
}
#[instrument(skip(state))]
pub async fn rss(state: Arc<State>, since: Option<String>) -> Result<impl Reply, Rejection> {
if let Some(etag) = since {
if etag == ETAG.clone() {
return Response::builder()
.status(304)
.header("Content-Type", "text/plain")
.body(
"You already have the newest version of this feed."
.to_string()
.into_bytes(),
)
.map_err(RenderError::Build)
.map_err(warp::reject::custom);
}
}
HIT_COUNTER.with_label_values(&["rss"]).inc(); HIT_COUNTER.with_label_values(&["rss"]).inc();
let state = state.clone(); let state = state.clone();
let mut buf = Vec::new(); let mut buf = Vec::new();
@ -55,6 +87,7 @@ pub async fn rss(state: Arc<State>) -> Result<impl Reply, Rejection> {
Response::builder()
.status(200)
.header("Content-Type", "application/rss+xml")
.header("ETag", ETAG.clone())
.body(buf)
.map_err(RenderError::Build)
.map_err(warp::reject::custom)
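ETAG above is a fresh weak ETag per process start, so feed readers get 304s between deploys and refetch after a restart (the JSON feed handler accepts the if-none-match header but does not yet act on it). A hedged sketch of the client side of this handshake; the URL and flow are illustrative:

  // Hedged sketch: a feed reader doing a conditional GET against blog.atom.
  // Returns the new ETag, or None when the server answers 304 Not Modified.
  async fn poll_feed(prev_etag: Option<String>) -> reqwest::Result<Option<String>> {
      let mut req = reqwest::Client::new().get("https://christine.website/blog.atom");
      if let Some(etag) = prev_etag {
          req = req.header("If-None-Match", etag);
      }
      let resp = req.send().await?;
      if resp.status() == reqwest::StatusCode::NOT_MODIFIED {
          return Ok(None); // cached copy is still current
      }
      Ok(resp
          .headers()
          .get("ETag")
          .and_then(|v| v.to_str().ok())
          .map(String::from))
  }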


@ -5,11 +5,11 @@ use crate::{
use lazy_static::lazy_static;
use prometheus::{opts, register_int_counter_vec, IntCounterVec};
use std::{convert::Infallible, fmt, sync::Arc};
use warp::{
http::{Response, StatusCode},
Rejection, Reply,
};
use tracing::instrument;
lazy_static! {
static ref HIT_COUNTER: IntCounterVec =
@ -86,12 +86,6 @@ impl fmt::Display for PostNotFound {
impl warp::reject::Reject for PostNotFound {}
#[derive(Debug, thiserror::Error)]
struct SeriesNotFound(String);
@ -103,12 +97,6 @@ impl fmt::Display for SeriesNotFound {
impl warp::reject::Reject for SeriesNotFound {}
lazy_static! {
static ref REJECTION_COUNTER: IntCounterVec = register_int_counter_vec!(
opts!("rejections", "Number of rejections by kind"),


@ -1,3 +1,6 @@
#[macro_use]
extern crate tracing;
use color_eyre::eyre::Result;
use hyper::{header::CONTENT_TYPE, Body, Response};
use prometheus::{Encoder, TextEncoder};
@ -24,14 +27,17 @@ async fn main() -> Result<()> {
color_eyre::install()?;
let _ = kankyo::init();
tracing_subscriber::fmt::init();
info!("starting up commit {}", env!("GITHUB_SHA"));
let state = Arc::new(
app::init(
std::env::var("CONFIG_FNAME")
.unwrap_or("./config.dhall".into())
.as_str()
.into(),
)
.await?,
);
let healthcheck = warp::get().and(warp::path(".within").and(warp::path("health")).map(|| "OK"));
@ -92,20 +98,39 @@ async fn main() -> Result<()> {
.and(with_state(state.clone()))
.and_then(handlers::patrons);
let files = warp::path("static").and(warp::fs::dir("./static")); let files = warp::path("static")
let css = warp::path("css").and(warp::fs::dir("./css")); .and(warp::fs::dir("./static"))
.map(|reply| {
warp::reply::with_header(
reply,
"Cache-Control",
"public, max-age=86400, stale-if-error=60",
)
});
let css = warp::path("css").and(warp::fs::dir("./css")).map(|reply| {
warp::reply::with_header(
reply,
"Cache-Control",
"public, max-age=86400, stale-if-error=60",
)
});
let sw = warp::path("sw.js").and(warp::fs::file("./static/js/sw.js")); let sw = warp::path("sw.js").and(warp::fs::file("./static/js/sw.js"));
let robots = warp::path("robots.txt").and(warp::fs::file("./static/robots.txt")); let robots = warp::path("robots.txt").and(warp::fs::file("./static/robots.txt"));
let favicon = warp::path("favicon.ico").and(warp::fs::file("./static/favicon/favicon.ico")); let favicon = warp::path("favicon.ico").and(warp::fs::file("./static/favicon/favicon.ico"));
let jsonfeed = warp::path("blog.json") let jsonfeed = warp::path("blog.json")
.and(with_state(state.clone())) .and(with_state(state.clone()))
.and(warp::header::optional("if-none-match"))
.and_then(handlers::feeds::jsonfeed); .and_then(handlers::feeds::jsonfeed);
let atom = warp::path("blog.atom") let atom = warp::path("blog.atom")
.and(with_state(state.clone())) .and(with_state(state.clone()))
.and(warp::header::optional("if-none-match"))
.and_then(handlers::feeds::atom); .and_then(handlers::feeds::atom);
let rss = warp::path("blog.rss") let rss = warp::path("blog.rss")
.and(with_state(state.clone())) .and(with_state(state.clone()))
.and(warp::header::optional("if-none-match"))
.and_then(handlers::feeds::rss); .and_then(handlers::feeds::rss);
let sitemap = warp::path("sitemap.xml") let sitemap = warp::path("sitemap.xml")
.and(with_state(state.clone())) .and(with_state(state.clone()))
@ -114,6 +139,7 @@ async fn main() -> Result<()> {
let go_vanity_jsonfeed = warp::path("jsonfeed") let go_vanity_jsonfeed = warp::path("jsonfeed")
.and(warp::any().map(move || "christine.website/jsonfeed")) .and(warp::any().map(move || "christine.website/jsonfeed"))
.and(warp::any().map(move || "https://tulpa.dev/Xe/jsonfeed")) .and(warp::any().map(move || "https://tulpa.dev/Xe/jsonfeed"))
.and(warp::any().map(move || "master"))
.and_then(go_vanity::gitea); .and_then(go_vanity::gitea);
let metrics_endpoint = warp::path("metrics").and(warp::path::end()).map(move || { let metrics_endpoint = warp::path("metrics").and(warp::path::end()).map(move || {
@ -128,14 +154,37 @@ async fn main() -> Result<()> {
.unwrap()
});
let static_pages = index
.or(feeds)
.or(resume.or(signalboost))
.or(patrons)
.or(jsonfeed.or(atom.or(sitemap)).or(rss))
.or(favicon.or(robots).or(sw))
.or(contact)
.map(|reply| {
warp::reply::with_header(
reply,
"Cache-Control",
"public, max-age=86400, stale-if-error=60",
)
});
let dynamic_pages = blog_index
.or(series.or(series_view).or(post_view))
.or(gallery_index.or(gallery_post_view))
.or(talk_index.or(talk_post_view))
.map(|reply| {
warp::reply::with_header(
reply,
"Cache-Control",
"public, max-age=600, stale-if-error=60",
)
});
let site = static_pages
.or(dynamic_pages)
.or(healthcheck.or(metrics_endpoint).or(go_vanity_jsonfeed))
.or(files.or(css))
.map(|reply| {
warp::reply::with_header(
reply,
@ -144,10 +193,51 @@ async fn main() -> Result<()> {
)
})
.map(|reply| warp::reply::with_header(reply, "X-Clacks-Overhead", "GNU Ashlynn"))
.map(|reply| {
warp::reply::with_header(
reply,
"Link",
format!(
r#"<{}>; rel="webmention""#,
std::env::var("WEBMENTION_URL")
.unwrap_or("https://mi.within.website/api/webmention/accept".to_string())
),
)
})
.with(warp::log(APPLICATION_NAME))
.recover(handlers::rejection);
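The static/dynamic split above repeats the same with_header map with two max-age values: 86400 for pages that only change on deploy, 600 for listings that can pick up new posts. A hedged helper sketch (cache_for is hypothetical, not in this commit) captures the pattern:

  // Hypothetical helper capturing the repeated Cache-Control map above.
  fn cache_for(reply: impl warp::Reply, max_age: u32) -> impl warp::Reply {
      warp::reply::with_header(
          reply,
          "Cache-Control",
          format!("public, max-age={}, stale-if-error=60", max_age),
      )
  }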
match sdnotify::SdNotify::from_env() {
Ok(ref mut n) => {
// shitty heuristic for detecting if we're running in prod
tokio::spawn(async {
if let Err(why) = app::poke::the_cloud().await {
error!("Unable to poke the cloud: {}", why);
}
});
n.notify_ready().map_err(|why| {
error!("can't signal readiness to systemd: {}", why);
why
})?;
n.set_status(format!("hosting {} posts", state.clone().everything.len()))
.map_err(|why| {
error!("can't signal status to systemd: {}", why);
why
})?;
}
Err(why) => error!("not running under systemd with Type=notify: {}", why),
}
warp::serve(site)
.run((
[0, 0, 0, 0],
std::env::var("PORT")
.unwrap_or("3030".into())
.parse::<u16>()
.unwrap(),
))
.await;
Ok(())
}


@ -1,7 +1,6 @@
/// This code was borrowed from @fasterthanlime.
use color_eyre::eyre::Result;
use serde::{Deserialize, Serialize};
#[derive(Eq, PartialEq, Deserialize, Default, Debug, Serialize, Clone)]
pub struct Data {
@ -13,6 +12,7 @@ pub struct Data {
pub image: Option<String>, pub image: Option<String>,
pub thumb: Option<String>, pub thumb: Option<String>,
pub show: Option<bool>, pub show: Option<bool>,
pub redirect_to: Option<String>,
} }
enum State {
@ -81,7 +81,7 @@ impl Data {
};
}
}
_ => panic!("Expected newline, got {:?}",), _ => panic!("Expected newline, got {:?}", ch),
},
State::ReadingFrontMatter { buf, line_start } => match ch {
'-' if *line_start => {


@ -12,6 +12,7 @@ pub struct Post {
pub body: String,
pub body_html: String,
pub date: DateTime<FixedOffset>,
pub mentions: Vec<mi::WebMention>,
}
impl Into<jsonfeed::Item> for Post { impl Into<jsonfeed::Item> for Post {
@ -19,7 +20,6 @@ impl Into<jsonfeed::Item> for Post {
let mut result = jsonfeed::Item::builder()
.title(self.front_matter.title)
.content_html(self.body_html)
.id(format!("https://christine.website/{}", self.link))
.url(format!("https://christine.website/{}", self.link))
.date_published(self.date.to_rfc3339())
@ -70,7 +70,7 @@ impl Post {
}
}
pub async fn load(dir: &str, mi: Option<&mi::Client>) -> Result<Vec<Post>> {
let mut result: Vec<Post> = vec![];
for path in glob(&format!("{}/*.markdown", dir))?.filter_map(Result::ok) {
@ -80,11 +80,21 @@ pub fn load(dir: &str) -> Result<Vec<Post>> {
let (fm, content_offset) = frontmatter::Data::parse(body.clone().as_str())
.wrap_err_with(|| format!("can't parse frontmatter of {:?}", path))?;
let markup = &body[content_offset..];
let date = NaiveDate::parse_from_str(&fm.clone().date, "%Y-%m-%d")
.map_err(|why| eyre!("error parsing date in {:?}: {}", path, why))?;
let link = format!("{}/{}", dir, path.file_stem().unwrap().to_str().unwrap());
let mentions: Vec<mi::WebMention> = match mi {
None => vec![],
Some(mi) => mi
.mentioners(format!("https://christine.website/{}", link))
.await
.map_err(|why| tracing::error!("error: can't load mentions for {}: {}", link, why))
.unwrap_or(vec![]),
};
result.push(Post {
front_matter: fm,
link: link,
body: markup.to_string(),
body_html: crate::app::markdown::render(&markup)
.wrap_err_with(|| format!("can't parse markdown for {:?}", path))?,
@ -96,6 +106,7 @@ pub fn load(dir: &str) -> Result<Vec<Post>> {
.with_timezone(&Utc) .with_timezone(&Utc)
.into() .into()
}, },
mentions: mentions,
}) })
} }
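The mentioners() call is deliberately non-fatal (log the error, fall back to an empty list), so a webmention-service outage cannot break post loading. From the fields the blogpost template reads later (source, title), mi::WebMention presumably looks roughly like this hedged sketch:

  // Inferred shape only: the template renders mention.source and
  // mention.title.unwrap_or(mention.source), suggesting something like:
  #[derive(Clone, serde::Deserialize)]
  pub struct WebMention {
      pub source: String,        // URL of the page linking to the post
      pub title: Option<String>, // display text; the source URL when absent
  }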
@ -113,23 +124,23 @@ mod tests {
use super::*;
use color_eyre::eyre::Result;
#[tokio::test]
async fn blog() {
let _ = pretty_env_logger::try_init();
load("blog", None).await.expect("posts to load");
}
#[tokio::test]
async fn gallery() -> Result<()> {
let _ = pretty_env_logger::try_init();
load("gallery", None).await?;
Ok(())
}
#[tokio::test]
async fn talks() -> Result<()> {
let _ = pretty_env_logger::try_init();
load("talks", None).await?;
Ok(())
}
}


@ -1,74 +1,49 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQENBGABktwBCACygH18iP698tm50PdmNeOd5orUVTV3nfB7z5wyPt7ZocUrlA3o
ok4D0Uu0ffJob91BquneCRyXdcbwUal29p/6JApTB5yO6kYJgDodJJ9/EEOhNXho
KEauzm25KGkcyiFVgapymBpvZEnec1gWO0/NGkv59aRGd48I45U+QicxltYbE1Wa
BTGu5B8z02q0IJp+M+Qji7iRISCWc78lRA+G4U6TZ8qckoWWz8GomKtd5y9pxlUQ
6tuYHcTxy8NLBnmSfUkg81tJ6Tym7gBAJdh2VdmJkxKOe2g92a4u3Azo5yUobBkP
rRkkoeCGf4A9A/hicPwpYTTVIrJ9RYX1gtAvABEBAAG0MkNocmlzdGluZSBEb2Ry
aWxsIChZdWJpa2V5KSA8bWVAY2hyaXN0aW5lLndlYnNpdGU+iQFUBBMBCAA+FiEE
N46/xj15tJ2MNkSMgDyTWuEYoiQFAmABktwCGwMFCRLMAwAFCwkIBwIGFQoJCAsC
BBYCAwECHgECF4AACgkQgDyTWuEYoiTNKAf8DWvbJWRlBrUN7CRRr+KBfd9X/UEv
wus7odDiEuAlnODFVnsE63/K+XBOzDtrpr/4Ldr8WQkFGbbFoG8hg6SUhE3rpBrS
h7cpNe8PkBeHA6ekeVcBUGV6XvZ65FjPUan8xsoBDyrIPkIFzsqFOpQ7hjes+lJa
J3s2bgpw7z3/rs8o8mOxMU0A2D2UFVn8OtiHT6WgeXL6UnZqgZwEY+oipVLNP6ZG
lfi4UIexpbSzciS1qZ4/qJfQeiVM2LkIJgpV8fn42XQ10VDkarKmx1XNN+sZI5vn
3jJHtB+D6ZjFzVLFqW//N//MslQOrkXnBfa/KeU1ULdY9hEShpAePyfaQ7kBDQRg
AZLcAQgArCh+XEWsg9OfTrrIuFtCyRxr94yb2EMUCFJMKvsbeklmNQBaZ7N1RyE4
IPFnxyk/0y7XZ3gfo5k1SCr5m03vQuyem8SbUMHppaGUp5ZgZA/RWOh68ygrvHTG
gWAhe3T8/gklUeLcp8reOZa/wSbv1VGewgOwplgUkQxa1v7YJpbhJtnKoiJmWcfc
abie7bt1ok10UVSLNTbPUiSIP1Sb1i9NXtkg0lFQjxPB5zAQbtuqnO/LAVHbt1U+
xzfh5xJ+DKoBQhuKbFftUp4Hjwr/qv53XMz6MMUMJIDp9j3icQm2ifSKx74ic5vn
kaF3oWRJODTS/fR+FEUpdakIozCURwARAQABiQE8BBgBCAAmFiEEN46/xj15tJ2M
NkSMgDyTWuEYoiQFAmABktwCGwwFCRLMAwAACgkQgDyTWuEYoiTSEQgAruSRZBLi
JwHNQz2ZtPhGa8Avbj5mqhD8Zs627gKM4SdgYka+DjoaGImoqdhM1K0zBVGrfDZV
CDD+YILyW6C6+9/0TLHuhD9oo+byo6XXgHmtodiZBFLYHvtNNZMYoN/1eWaJBmxX
39r1BHA2fTSjeg3YChdIqMtFhHps/5ckyPUzTFrzJPOaz4xLC5QPog1aOzKzL8UA
oWseZjWgDJJbWIbiyoz3J7oHfqwRIhZEOJyVn2N46lXk7Xg6dLbqwq3+XCT0tph8
0O/Q+zIvy/1q8dAQJsvomf81GsZdPsR9MJZiGbbM/gyCOjRFX163TiTIyeQPLTbA
Er7mIpM0HYgK1rkBDQRgAZNMAQgAz+3aBcBzaLasO7ksB8o+3xctw2NydTR+VEw+
Pxub7CDY0BEcs7IuqjuPbFJ74MU1TriCCB5zP7bHFrqdwS+51E0WVunkOgxPYm9O
vEtkxyPHJW6PiY0xeSQt9hhqJe5TV/HpscQISfovd9DZkTbEjvCnpVnWjfGih3iR
xy3o51gj5l47oSZFeRDZr9gNkJ+gY4I2GfgGA40UWXyj9jHyjh6jA32YDo19XKud
UqyLgPeUjOuGp8Y4Gu5JNmqb0Wqb2AEqOQTSGRCJaOzNxgxSUeECT7xzBYgn7Ghf
7iJV+U9hqr9Jp3+6b5OJDv3QIfh48jOSIigbnyGs/4g7kUvmFQARAQABiQJyBBgB
CAAmFiEEN46/xj15tJ2MNkSMgDyTWuEYoiQFAmABk0wCGy4FCRLMAwABQAkQgDyT
WuEYoiTAdCAEGQEIAB0WIQR79+Uxq6N/d/0Xj3LOF3gb9V3pRQUCYAGTTAAKCRDO
F3gb9V3pRf/EB/9SuYeFL5bzg8TwbO/bhnAovYiiURW2bDtUnHKhiHHquuMo7iWN
EbaSGFyURiffJJhjSq5+H+I8CeW+rHVJQ6yxoDzQfXHsBaAwP+b9geVhUEHvnQMy
ydTvyvoiT84XrMJ4KuOti2lqpCoHRzBodLRaXLia2kyyTCj3QGyzzlFEChM0sZM5
rStSkexixGSIthFV9xx+wfdcA6Er3RagNYBb9scFNg1vM/v8YC0sI/bzwdjltBeH
F9wWpmOvDEvmY35hnMEpjrrvJkbi12sd33Tzh+pvhFxMa3HZihQ8MsST750kejNq
ZAZ9D+DmJDYAD6aycAJCONtnivtvReQWACKQgkUH/jb5I7osdN8s5ndoUy+iInX5
SU5K04LYK/oo/S8hLQ+lZeqJrEYqTmEJjzdULQS6TXSpriVm4b70Qtgr5X929JSo
lqNa0kWR2LdF4q1wFAxkPEskPrM/fPEqZfjBfaezvSUTOU32KoCoWoeZqqbdBwXp
ONwH73yiX9dc6wP9prW3laqUWAsSwMMBOYdKhOQJAy5J6ym37Q0noe1VuGQAGIlb
OTOquCjjj8k63TfOPuJonKQUU1UoHtuukGJ27yUXljbsy2BmbgLcsm/R9xtz5Jxj
q4D/oYcgejx26NsV3alg1VfmqQiUD7/xUIOnR9bllPmOnUtjqaotwe/wUD+47z8=
=O4RS
-----END PGP PUBLIC KEY BLOCK-----


@ -12,12 +12,23 @@ Linux`, `Ubuntu`, `Linux`, `GraphViz`, `Progressive Web Apps`, `yaml`, `SQL`,
## Experience
### Tailscale - Software Designer &emsp; <small>*2020 - present*</small>
> [Tailscale][tailscale] is a zero config VPN for building secure networks.
> Install on any device in minutes. Remote access from any network or physical
> location.
#### Highlights
- Go programming
- Nix and NixOS
### Lightspeed - Expert principal en fiabilité du site &emsp; <small>*2019 - 2020*</small>
(Senior Site Reliability Expert)
> [Lightspeed][lightspeedhq] is a provider of retail, ecommerce and
> point-of-sale solutions for small and medium scale businesses.
#### Highlights
@ -39,9 +50,13 @@ Linux`, `Ubuntu`, `Linux`, `GraphViz`, `Progressive Web Apps`, `yaml`, `SQL`,
#### Highlights
- [JVM Application Metrics](https://devcenter.heroku.com/changelog-items/1133)
- [Go Runtime Metrics
  Agent](https://github.com/heroku/x/tree/master/runtime-metrics)
- Other backend fixes and improvements on [Threshold
  Autoscaling](https://blog.heroku.com/heroku-autoscaling) and [Threshold
  Alerting](https://devcenter.heroku.com/articles/metrics#threshold-alerting)
- [How to Make a Progressive Web App From Your Existing
  Website](https://blog.heroku.com/how-to-make-progressive-web-app)
### Backplane.io - Software Engineer &emsp; <small>*2016 - 2016*</small>
@ -211,3 +226,4 @@ I am an ordained minister with the [Church of the Latter-day Dude](https://dudei
[twit]: http://cdn-careers.sstatic.net/careers/Img/icon-twitter.png?v=b1bd58ad2034
[heroku]: https://www.heroku.com
[lightspeedhq]: https://www.lightspeedhq.com
[tailscale]: https://tailscale.com/


@ -19,7 +19,7 @@
<entry> <entry>
<id>https://christine.website/@post.link</id> <id>https://christine.website/@post.link</id>
<title>@post.front_matter.title</title> <title>@post.front_matter.title</title>
<updated>@post.date.to_rfc3339()</updated> <published>@post.date.to_rfc3339()</published>
<link href="https://christine.website/@post.link" rel="alternate"/> <link href="https://christine.website/@post.link" rel="alternate"/>
</entry> </entry>
} }


@ -9,6 +9,7 @@
<link>https://christine.website/blog</link> <link>https://christine.website/blog</link>
<description>Tech, philosophy and more</description> <description>Tech, philosophy and more</description>
<generator>@APP https://github.com/Xe/site</generator> <generator>@APP https://github.com/Xe/site</generator>
<ttl>1440</ttl>
@for post in posts { @for post in posts {
<item> <item>
<guid>https://christine.website/@post.link</guid> <guid>https://christine.website/@post.link</guid>


@ -14,7 +14,7 @@
<p> <p>
<ul> <ul>
@for post in posts { @for post in posts {
<li>@post.date.format("%Y-%m-%d") - <a href="@post.link">@post.front_matter.title</a></li> <li>@post.date.format("%Y-%m-%d") - <a href="/@post.link">@post.front_matter.title</a></li>
} }
</ul> </ul>
</p> </p>


@ -9,7 +9,7 @@
<meta name="twitter:card" content="summary" /> <meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@@theprincessxena" /> <meta name="twitter:site" content="@@theprincessxena" />
<meta name="twitter:title" content="@post.front_matter.title" /> <meta name="twitter:title" content="@post.front_matter.title" />
<meta name="twitter:description" content="Posted on @post.date" /> <meta name="twitter:description" content="Posted on @post.date.format("%Y-%m-%d")" />
<!-- Facebook --> <!-- Facebook -->
<meta property="og:type" content="website" /> <meta property="og:type" content="website" />
@ -20,7 +20,9 @@
<meta name="description" content="@post.front_matter.title - Christine Dodrill's Blog" /> <meta name="description" content="@post.front_matter.title - Christine Dodrill's Blog" />
<meta name="author" content="Christine Dodrill"> <meta name="author" content="Christine Dodrill">
<link rel="canonical" href="https://christine.website/@post.link"> @if post.front_matter.redirect_to.is_none() {
<link rel="canonical" href="https://christine.website/@post.link">
}
<script type="application/ld+json"> <script type="application/ld+json">
@{ @{
@ -29,7 +31,7 @@
"headline": "@post.front_matter.title", "headline": "@post.front_matter.title",
"image": "https://christine.website/static/img/avatar.png", "image": "https://christine.website/static/img/avatar.png",
"url": "https://christine.website/@post.link", "url": "https://christine.website/@post.link",
"datePublished": "@post.date", "datePublished": "@post.date.format("%Y-%m-%d")",
"mainEntityOfPage": @{ "mainEntityOfPage": @{
"@@type": "WebPage", "@@type": "WebPage",
"@@id": "https://christine.website/@post.link" "@@id": "https://christine.website/@post.link"
@ -45,6 +47,12 @@
@}
</script>
@if let Some(to) = post.front_matter.redirect_to.clone() {
<script>
window.location.replace("@to");
</script>
}
@body
<hr />
@ -62,6 +70,18 @@
<p>Tags: @for tag in post.front_matter.tags.as_ref().unwrap() { <code>@tag</code> }</p> <p>Tags: @for tag in post.front_matter.tags.as_ref().unwrap() { <code>@tag</code> }</p>
} }
@if post.mentions.len() != 0 {
<p>This post was <a href="https://www.w3.org/TR/webmention/">WebMention</a>ed at the following URLs:
<ul>
@for mention in post.mentions {
<li><a href="@mention.source">@mention.title.unwrap_or(mention.source)</a></li>
}
</ul>
</p>
} else {
<p>This post was not <a href="https://www.w3.org/TR/webmention/">WebMention</a>ed yet. You could be the first!</p>
}
<p>The art for Mara was drawn by <a href="https://selic.re/">Selicre</a>.</p> <p>The art for Mara was drawn by <a href="https://selic.re/">Selicre</a>.</p>
<script> <script>


@ -10,7 +10,7 @@
<h3>Email</h3> <h3>Email</h3>
<p>me@@christine.website</p> <p>me@@christine.website</p>
<p>My GPG fingerprint is <code>799F 9134 8118 1111</code>. If you get an email that appears to be from me and the signature does not match that fingerprint, it is not from me. You may download a copy of my public key <a href="/static/gpg.pub">here</a>.</p> <p>My GPG fingerprint is <code>803C 935A E118 A224</code>. If you get an email that appears to be from me and the signature does not match that fingerprint, it is not from me. You may download a copy of my public key <a href="/static/gpg.pub">here</a>.</p>
<h3>Social Media</h3> <h3>Social Media</h3>
<ul> <ul>


@ -1,5 +1,3 @@
@()
</div>
<hr />
@ -7,7 +5,7 @@
<blockquote>Copyright 2020 Christine Dodrill. Any and all opinions listed here are my own and not representative of my employers; future, past and present.</blockquote> <blockquote>Copyright 2020 Christine Dodrill. Any and all opinions listed here are my own and not representative of my employers; future, past and present.</blockquote>
<!--<p>Like what you see? Donate on <a href="https://www.patreon.com/cadey">Patreon</a> like <a href="/patrons">these awesome people</a>!</p>--> <!--<p>Like what you see? Donate on <a href="https://www.patreon.com/cadey">Patreon</a> like <a href="/patrons">these awesome people</a>!</p>-->
<p>Looking for someone for your team? Take a look <a href="/signalboost">here</a>.</p> <p>Looking for someone for your team? Take a look <a href="/signalboost">here</a>.</p>
<p>Served by @APP running commit <a href="https://github.com/Xe/site/commit/@env!("GITHUB_SHA")">@env!("GITHUB_SHA")</a>, see <a href="https://github.com/Xe/site">source code here</a>.</p> <p>Served by @env!("out")/bin/xesite</a>, see <a href="https://github.com/Xe/site">source code here</a>.</p>
</footer> </footer>
</div> </div>


@ -14,7 +14,7 @@
<div class="card cell -4of12 blogpost-card"> <div class="card cell -4of12 blogpost-card">
<header class="card-header">@post.front_matter.title</header> <header class="card-header">@post.front_matter.title</header>
<div class="card-content"> <div class="card-content">
<center><p>Posted on @post.date.format("%Y-%m-%d")<br /><a href="@post.link"><img src="@post.front_matter.thumb.as_ref().unwrap()" /></a></p></center> <center><p>Posted on @post.date.format("%Y-%m-%d")<br /><a href="/@post.link"><img src="@post.front_matter.thumb.as_ref().unwrap()" /></a></p></center>
</div> </div>
</div> </div>
} }


@ -9,7 +9,7 @@
<meta name="twitter:card" content="summary" /> <meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@@theprincessxena" /> <meta name="twitter:site" content="@@theprincessxena" />
<meta name="twitter:title" content="@post.front_matter.title" /> <meta name="twitter:title" content="@post.front_matter.title" />
<meta name="twitter:description" content="Posted on @post.date" /> <meta name="twitter:description" content="Posted on @post.date.format("%Y-%m-%d")" />
<!-- Facebook --> <!-- Facebook -->
<meta property="og:type" content="website" /> <meta property="og:type" content="website" />
@ -29,7 +29,7 @@
"headline": "@post.front_matter.title", "headline": "@post.front_matter.title",
"image": "https://christine.website/static/img/avatar.png", "image": "https://christine.website/static/img/avatar.png",
"url": "https://christine.website/@post.link", "url": "https://christine.website/@post.link",
"datePublished": "@post.date", "datePublished": "@post.date.format("%Y-%m-%d")",
"mainEntityOfPage": @{ "mainEntityOfPage": @{
"@@type": "WebPage", "@@type": "WebPage",
"@@id": "https://christine.website/@post.link" "@@id": "https://christine.website/@post.link"


@ -3,8 +3,62 @@
@(title: Option<&str>, styles: Option<&str>) @(title: Option<&str>, styles: Option<&str>)
<!DOCTYPE html> <!DOCTYPE html>
<!--
MMMMMMMMMMMMMMMMMMNmmNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMNmmmd.:mmMM
MMMMMMMMMMMMMMMMMNmmmNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMNmmydmmmmmNMM
MMMMMMMMMMMMMMMMNm/:mNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMNmms /mmmmmMMM
MMMMMMMMMMMMMMMNmm:-dmMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMNmmmmdsdmmmmNMMM
MMMMMMMMMMMMMMMmmmmmmmNMMMMMMMMMMMNmmdhhddhhmNNMMMMMMMMMMMMMMMMNmy:hmmmmmmmmMMMM
MMMMMMMMMMMMMMNm++mmmmNMMMMMMmdyo/::.........-:/sdNMMMMMMMMMMNmmms`smmmmmmmNMMMM
MMMMMMMMMMMMMMmd.-dmmmmMMmhs/-....................-+dNMMMMMMNmmmmmmmmmmmmmmMMMMM
MMMMMMMMMMMMMNmmmmmmmmho:-...........................:sNMMNmmmmmmmmmmmmmmmNMNmdd
MMMMMMMMMMMMNmd+ydhs/-.................................-sNmmmmmmmmmmmmmmmdhyssss
MMMMMMMMMMMNNh+`........................................:dmmmmmmmmmmmmmmmyssssss
MMMMNNdhy+:-...........................................+dmmmmmmmmmmmmmmmdsssssss
MMMN+-...............................................-smmmmmmmmmmmmmmmmmysyyhdmN
MMMMNho:::-.--::-.......................----------..:hmmmmmmmmmmmmmmmmmmmNMMMMMM
MMMMMMMMNNNmmdo:......................--------------:ymmmmmmmmmmmmmmmmmmmMMMMMMM
MMMMMMMMMMds+........................-----------------+dmmmmmmmmmmmmmmmmmMMMMMMM
MMMMMMMMMh+........................--------------------:smmmmmmmmmmmmmmNMMMMMMMM
MMMMMMMNy/........................-------------::--------/hmmmmmmmmmmmNMMMMMMNmd
MMMMMMMd/........................--------------so----------odmmmmmmmmMMNmdhhysss
MMMMMMm/........................--------------+mh-----------:ymmmmdhhyysssssssss
MMMMMMo.......................---------------:dmmo------------+dmdysssssssssssss
yhdmNh:......................---------------:dmmmm+------------:sssssssssssyhhdm
sssssy.......................--------------:hmmmmmmos++:---------/sssyyhdmNMMMMM
ssssso......................--------------:hmmmNNNMNdddysso:------:yNNMMMMMMMMMM
ysssss.....................--------------/dmNyy/mMMd``d/------------sNMMMMMMMMMM
MNmdhy-...................--------------ommmh`o/NM/. smh+-----------:yNMMMMMMMMM
MMMMMN+...................------------/hmmss: `-//-.smmmmd+----------:hMMMMMMMMM
MMMMMMd:..................----------:smmmmhy+oosyysdmmy+:. `.--------/dMMMMMMMM
MMMMMMMh-................---------:smmmmmmmmmmmmmmmh/` `/s:-------sMMMMMMMM
MMMMMMMms:...............-------/ymmmmmmmmmmmmmmmd/ :dMMNy/-----+mMMMMMMM
MMMMMMmyss/..............------ommmmmmmmmmmmmmmmd. :yMMMMMMNs:---+mMMMMMMM
MMMMNdssssso-............----..odmmmmmmmmmmmmmmh:.` .sNMMMMMMMMMd/--sMMMMMMMM
MMMmysssssssh/................` -odmmmmmmmmmh+. `omMMMMMMMMMMMMh/+mMMMMMMMM
MNdyssssssymMNy-.............. `/sssso+:. `+mMMMMMMMMMMMMMMMdNMMMMMMMMM
NhssssssshNMMMMNo:............/.` `+dMMMMMMMMMMMMMMMMMMMMMMMMMMMM
ysssssssdMMMMMMMMm+-..........+ddy/.` -omMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
ssssssymMMMMMMMMMMMh/.........-oNMMNmy+--` `-+dNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
ssssydNMMMMMMMMMMMMMNy:........-hMMMMMMMNmdmMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
sssymMMMMMMMMMMMMMMMMMm+....-..:hMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
symNMMMMMMMMMMMMMMMMMMMNo.../-/dMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
dNMMMMMMMMMMMMMMMMMMMMMMh:.:hyNMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM
la budza pu cusku lu
<<.i ko do snura .i ko do kanro
.i ko do panpi .i ko do gleki>> li'u
-->
<html lang="en"> <html lang="en">
<head> <head>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-XLJX94YGBV"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag()@{dataLayer.push(arguments);@}
gtag('js', new Date());
gtag('config', 'G-XLJX94YGBV');
</script>
@if title.is_some() {
<title>@title.unwrap() - Christine Dodrill</title>
} else {
@ -15,7 +69,7 @@
<link rel="stylesheet" href="/css/gruvbox-dark.css" /> <link rel="stylesheet" href="/css/gruvbox-dark.css" />
<link rel="stylesheet" href="/css/shim.css" /> <link rel="stylesheet" href="/css/shim.css" />
<link rel="stylesheet" href="https://cdn.christine.website/file/christine-static/prism/prism.css" /> <link rel="stylesheet" href="https://cdn.christine.website/file/christine-static/prism/prism.css" />
@if Utc::now().month() == 12 { <link rel="stylesheet" href="/css/snow.css" /> } @if Utc::now().month() == 12 || Utc::now().month() == 1 || Utc::now().month() == 2 { <link rel="stylesheet" href="/css/snow.css" /> }
<link rel="manifest" href="/static/manifest.json" /> <link rel="manifest" href="/static/manifest.json" />
<link rel="alternate" title="Christine Dodrill's Blog" type="application/rss+xml" href="https://christine.website/blog.rss" /> <link rel="alternate" title="Christine Dodrill's Blog" type="application/rss+xml" href="https://christine.website/blog.rss" />
@ -30,7 +84,7 @@
<link rel="apple-touch-icon" sizes="144x144" href="/static/favicon/apple-icon-144x144.png"> <link rel="apple-touch-icon" sizes="144x144" href="/static/favicon/apple-icon-144x144.png">
<link rel="apple-touch-icon" sizes="152x152" href="/static/favicon/apple-icon-152x152.png"> <link rel="apple-touch-icon" sizes="152x152" href="/static/favicon/apple-icon-152x152.png">
<link rel="apple-touch-icon" sizes="180x180" href="/static/favicon/apple-icon-180x180.png"> <link rel="apple-touch-icon" sizes="180x180" href="/static/favicon/apple-icon-180x180.png">
<link rel="icon" type="image/png" sizes="192x192" href="/static/favicon/android-icon-192x192.png"> <link rel="icon" type="image/png" sizes="192x192" href="/static/favicon/android-icon-192x192.png">
<link rel="icon" type="image/png" sizes="32x32" href="/static/favicon/favicon-32x32.png"> <link rel="icon" type="image/png" sizes="32x32" href="/static/favicon/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="96x96" href="/static/favicon/favicon-96x96.png"> <link rel="icon" type="image/png" sizes="96x96" href="/static/favicon/favicon-96x96.png">
<link rel="icon" type="image/png" sizes="16x16" href="/static/favicon/favicon-16x16.png"> <link rel="icon" type="image/png" sizes="16x16" href="/static/favicon/favicon-16x16.png">
@ -38,6 +92,7 @@
<meta name="msapplication-TileColor" content="#ffffff"> <meta name="msapplication-TileColor" content="#ffffff">
<meta name="msapplication-TileImage" content="/static/favicon/ms-icon-144x144.png"> <meta name="msapplication-TileImage" content="/static/favicon/ms-icon-144x144.png">
<meta name="theme-color" content="#ffffff"> <meta name="theme-color" content="#ffffff">
<link href="https://mi.within.website/api/webmention/accept" rel="webmention" />
@if styles.is_some() {
<style>
@styles.unwrap()


@ -10,7 +10,7 @@
<p> <p>
<ul> <ul>
@for post in posts { @for post in posts {
<li>@post.date - <a href="/@post.link">@post.front_matter.title</a></li> <li>@post.date.format("%Y-%m-%d") - <a href="/@post.link">@post.front_matter.title</a></li>
} }
</ul> </ul>
</p> </p>


@ -15,7 +15,7 @@
<p> <p>
<ul> <ul>
@for post in posts { @for post in posts {
<li>@post.date.format("%Y-%m-%d") - <a href="@post.link">@post.front_matter.title</a></li> <li>@post.date.format("%Y-%m-%d") - <a href="/@post.link">@post.front_matter.title</a></li>
} }
</ul> </ul>
</p> </p>


@ -9,7 +9,7 @@
<meta name="twitter:card" content="summary" /> <meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@@theprincessxena" /> <meta name="twitter:site" content="@@theprincessxena" />
<meta name="twitter:title" content="@post.front_matter.title" /> <meta name="twitter:title" content="@post.front_matter.title" />
<meta name="twitter:description" content="Posted on @post.date" /> <meta name="twitter:description" content="Posted on @post.date.format("%Y-%m-%d")" />
<!-- Facebook --> <!-- Facebook -->
<meta property="og:type" content="website" /> <meta property="og:type" content="website" />
@ -29,7 +29,7 @@
"headline": "@post.front_matter.title", "headline": "@post.front_matter.title",
"image": "https://christine.website/static/img/avatar.png", "image": "https://christine.website/static/img/avatar.png",
"url": "https://christine.website/@post.link", "url": "https://christine.website/@post.link",
"datePublished": "@post.date", "datePublished": "@post.date.format("%Y-%m-%d")",
"mainEntityOfPage": @{ "mainEntityOfPage": @{
"@@type": "WebPage", "@@type": "WebPage",
"@@id": "https://christine.website/@post.link" "@@id": "https://christine.website/@post.link"