# Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
| Cadey Ratio | d6468f5382 | |
@@ -1 +0,0 @@
```
nix/sources.nix linguist-vendored
```
@@ -1,5 +0,0 @@
```yaml
# These are supported funding model platforms

github: Xe
patreon: cadey
ko_fi: A265JE0
```
@@ -1,18 +0,0 @@
```yaml
name: "Nix"
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  docker-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: cachix/install-nix-action@v12
      - uses: cachix/cachix-action@v7
        with:
          name: xe
      - run: nix build --no-link
```
@@ -2,7 +2,3 @@
```diff
 cw.tar
 .env
 .DS_Store
-/result-*
-/result
-.#*
-/target
```
@@ -0,0 +1,7 @@
```yaml
language: generic

services:
- docker

script:
- docker build .
```
CHANGELOG.md (15 changed lines)
@@ -1,15 +0,0 @@
```markdown
# Changelog

New site features will be documented here.

## 2.1.0

- Blogpost bodies are now present in the RSS feed

## 2.0.1

Custom render RSS/Atom feeds

## 2.0.0

Complete site rewrite in Rust
```
File diff suppressed because it is too large
Cargo.toml (59 changed lines)
@@ -1,59 +0,0 @@
```toml
[package]
name = "xesite"
version = "2.2.0"
authors = ["Christine Dodrill <me@christine.website>"]
edition = "2018"
build = "src/build.rs"
repository = "https://github.com/Xe/site"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
color-eyre = "0.5"
chrono = "0.4"
comrak = "0.9"
envy = "0.4"
glob = "0.3"
hyper = "0.14"
kankyo = "0.3"
lazy_static = "1.4"
log = "0.4"
mime = "0.3.0"
prometheus = { version = "0.11", default-features = false, features = ["process"] }
rand = "0"
reqwest = { version = "0.11", features = ["json"] }
sdnotify = { version = "0.1", default-features = false }
serde_dhall = "0.9.0"
serde = { version = "1", features = ["derive"] }
serde_yaml = "0.8"
sitemap = "0.4"
thiserror = "1"
tokio = { version = "1", features = ["full"] }
tracing = "0.1"
tracing-futures = "0.2"
tracing-subscriber = { version = "0.2", features = ["fmt"] }
warp = "0.3"
xml-rs = "0.8"
url = "2"
uuid = { version = "0.8", features = ["serde", "v4"] }

# workspace dependencies
cfcache = { path = "./lib/cfcache" }
go_vanity = { path = "./lib/go_vanity" }
jsonfeed = { path = "./lib/jsonfeed" }
mi = { path = "./lib/mi" }
patreon = { path = "./lib/patreon" }

[build-dependencies]
ructe = { version = "0.13", features = ["warp02"] }

[dev-dependencies]
pfacts = "0"
serde_json = "1"
eyre = "0.6"
pretty_env_logger = "0"

[workspace]
members = [
  "./lib/*",
]
```
@@ -0,0 +1,21 @@
```dockerfile
FROM xena/go:1.12.1 AS build
ENV GOPROXY https://cache.greedo.xeserv.us
COPY . /site
WORKDIR /site
RUN CGO_ENABLED=0 go test -v ./...
RUN CGO_ENABLED=0 GOBIN=/root go install -v ./cmd/site

FROM xena/alpine
EXPOSE 5000
RUN apk add --no-cache bash
WORKDIR /site
COPY --from=build /root/site .
COPY ./static /site/static
COPY ./templates /site/templates
COPY ./blog /site/blog
COPY ./talks /site/talks
COPY ./css /site/css
COPY ./app /app
COPY ./app.json .
HEALTHCHECK CMD wget --spider http://127.0.0.1:5000/.within/health || exit 1
CMD ./site
```
LICENSE (2 changed lines)
@@ -1,4 +1,4 @@
```diff
-Copyright (c) 2017-2021 Christine Dodrill <me@christine.website>
+Copyright (c) 2017 Christine Dodrill <me@christine.website>
 
 This software is provided 'as-is', without any express or implied
 warranty. In no event will the authors be held liable for any damages
```
@@ -1,8 +1,5 @@
```diff
 # site
 
-[![built with
-nix](https://builtwithnix.org/badge.svg)](https://builtwithnix.org)
-![Nix](https://github.com/Xe/site/workflows/Nix/badge.svg)
-![Rust](https://github.com/Xe/site/workflows/Rust/badge.svg)
-
 My personal/portfolio website.
+
+![](https://puu.sh/vWnJx/57cda175d8.png)
```
@@ -0,0 +1,7 @@
```json
{
  "scripts": {
    "dokku": {
      "postdeploy": "curl https://www.google.com/ping?sitemap=https://christine.website/sitemap.xml"
    }
  }
}
```
@@ -0,0 +1 @@
```
/ Christine
```
@@ -1,177 +0,0 @@

---
title: The 7th Edition
date: 2020-12-19
tags:
  - ttrpg
---

# The 7th Edition

You know what, fuck rules. Fuck systems. Fuck limitations. Let's dial the tabletop RPG system down to its roots. Let's throw out every stat but one: Awesomeness. When you try to do something that could fail, roll for Awesomeness. If your roll is more than your Awesomeness stat, you win. If not, you lose. If you are or have something that would benefit you in that situation, roll for Awesomeness twice and take the higher value.

No stats.<br />
No counts.<br />
No limits.<br />
No gods.<br />
No masters.<br />
Just you and me and nature in the battlefield.

* Want to shoot an arrow? Roll for Awesomeness. You failed? You're out of ammo.
* Want to defeat a goblin but you have a goblin-slaying broadsword? Roll twice for Awesomeness and take the higher value. You got a 20? That goblin was obliterated. Good job.
* Want to pick up an item into your inventory? Roll for Awesomeness. You got it? It's in your inventory.

Etc. Don't think too hard. Let a roll of the dice decide if you are unsure.

## Base Awesomeness Stats

Here are some probably-balanced Awesomeness base stats depending on what kind of dice you are using:

* 6-sided: 4 or 5
* 8-sided: 5 or 6
* 10-sided: 6 or 7
* 12-sided: 7 or 8
* 20-sided: anywhere from 11-13
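The base stats above can be sanity-checked with a little arithmetic: a plain roll wins when it lands strictly above your Awesomeness, and an advantaged roll only loses when *both* dice come up at or below it. A quick sketch (the function names are mine, not part of the game):

```rust
/// Chance of rolling strictly above `awesomeness` on one die with `sides` sides.
fn win_probability(sides: u32, awesomeness: u32) -> f64 {
    (sides - awesomeness) as f64 / sides as f64
}

/// Chance of winning when rolling twice and taking the higher value:
/// you only lose if both dice land at or below the stat.
fn win_probability_with_advantage(sides: u32, awesomeness: u32) -> f64 {
    let lose = awesomeness as f64 / sides as f64;
    1.0 - lose * lose
}

fn main() {
    // A d20 with the suggested base stat of 11 is a near coin flip.
    println!("d20, stat 11: {:.2}", win_probability(20, 11)); // 0.45
    println!("d20, stat 11, advantage: {:.4}", win_probability_with_advantage(20, 11)); // 0.6975
}
```

So the suggested d20 range of 11-13 keeps a plain roll close to a coin flip, and having a relevant item or trait bumps you to roughly a two-in-three chance.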
## Character Sheet Template

Here's an example character sheet:

```
Name:
Awesomeness:
Race:
Class:
Inventory:
*
```

That's it. You don't even need the race or class if you don't want to have them.

You can add more if you feel it is relevant for your character. If your character is a street brat that has experience with haggling, then fuck it, be the most street-brattiest haggler you can. Try not to overload your sheet with information; this game is supposed to be simple. A sentence or two at most is good.

## One Player is The World

The World is a character that other systems would call the Narrator, the Pathfinder, the Dungeon Master or similar. Let's strip this down to the core of the matter. One player doesn't just dictate the world, they _are_ the world.

The World also controls the monsters and non-player characters. In general, if you are in doubt as to who should roll for an event, The World does that roll.

## Mixins/Mods

These are things you can do to make the base game even more tailored to your group. Whether you should do this varies with the needs and whims of your group in particular.

### Mixin: Adjustable Awesomeness

So, one problem that could come up with this is that bad luck could make the game less fun. As a result, add these two rules in:

* Every time you roll above your awesomeness, add 1 to your awesomeness stat
* Every time you roll below your awesomeness, remove 1 from your awesomeness stat

This should add up so that luck evens out over time. Players that have less luck than usual will eventually get their awesomeness evened out so that luck will be in their favor.
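The adjustment rule above is mechanical enough to sketch in a few lines of Rust (the function name is mine, purely illustrative):

```rust
/// Apply the Adjustable Awesomeness rule after one roll: rolling above the
/// stat raises it by 1 (wins get harder), rolling below lowers it by 1
/// (losses get easier), and matching it exactly leaves it alone.
fn adjust(awesomeness: i32, roll: i32) -> i32 {
    if roll > awesomeness {
        awesomeness + 1
    } else if roll < awesomeness {
        awesomeness - 1
    } else {
        awesomeness
    }
}

fn main() {
    let mut stat = 11;
    for roll in [15, 20, 3, 11] {
        stat = adjust(stat, roll);
        println!("rolled {}, Awesomeness is now {}", roll, stat);
    }
}
```

Since you win by rolling *above* the stat, nudging it up after each win and down after each loss is exactly the negative feedback that evens luck out.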
### Mixin: No Awesomeness

In this mod, rip out Awesomeness altogether. When two parties are at odds, they both roll dice. The one that rolls higher gets what they want. If they tie, both people get a little part of what they want. For extra fun, do this with six-sided dice.

* Monster wants to attack a player? The World and that player roll. If the player wins, they can choose to counterattack. If the monster wins, they do a wound or something.
* One player wants to steal from another? Have them both roll to see what happens.

Use your imagination! Ask others if you are unsure!
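The opposed-roll resolution just described fits in one function. A minimal sketch (names are mine, not part of the game):

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    FirstWins,
    SecondWins,
    SplitTheDifference,
}

/// Resolve an opposed roll under the No Awesomeness mixin: the higher roll
/// gets what they want, and a tie gives each side a little of it.
fn resolve(a: u32, b: u32) -> Outcome {
    use std::cmp::Ordering::*;
    match a.cmp(&b) {
        Greater => Outcome::FirstWins,
        Less => Outcome::SecondWins,
        Equal => Outcome::SplitTheDifference,
    }
}

fn main() {
    println!("{:?}", resolve(5, 3)); // FirstWins
    println!("{:?}", resolve(2, 2)); // SplitTheDifference
}
```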
## Other Advice

This is not essential but it may help.

### Monster Building

Okay, so basically monsters fall into two categories: peons and bosses. Peons should be easy to defeat, usually requiring one action. Bosses may require more and might require more than pure damage to defeat. Get clever. Maybe require the players to drop a chandelier on the boss. Use the environment.

In general, peons should have a very high base awesomeness in order to do things they want. Bosses can vary based on your mood.

Adjustable awesomeness should affect monsters too.

### Worldbuilding

Take a setting from somewhere and roll with it. You want to do a cyberpunk jaunt in Night City with a sword-wielding warlock, a succubus space marine, a bard netrunner and a shapeshifting monk? Do the hell out of that. That sounds awesome.

Don't worry about accuracy or the like. You are setting out to have fun.

## Special Thanks

Special thanks goes to Jared, who sent out this [tweet][1] that inspired this document. In case the tweet gets deleted, here's what it said:

[1]: https://twitter.com/infinite_mao/status/1340402360259137541

> heres a d&d for you

> you have one stat, its a saving throw. if you need to roll dice, you roll your save.

> you have a class and some equipment and junk. if the thing you need to roll dice for is relevant to your class or equipment or whatever, roll your save with advantage.

> oh your Save is 5 or something. if you do something awesome, raise your save by 1.

> no hp, save vs death. no damage, save vs goblin. no tracking arrows, save vs running out of ammo.

> thanks to @Axes_N_Orcs for this

> What's So Cool About Save vs Death?

> can you carry all that treasure and equipment? save vs gains

I replied:

> Can you get more minimal than this?

He replied:

> when two or more parties are at odds, all roll dice. highest result gets what they want.

> hows that?

This document is really just this Twitter exchange in more words so that people less familiar with tabletop games can understand it more easily. You know you have finished when there is nothing left to remove, not when you can add something to "fix" it.

I might put this on my [itch.io page](https://withinstudios.itch.io/).
@@ -1,99 +0,0 @@

---
title: "OVE-20190623-0001"
date: 2019-06-24
tags:
  - v
  - security
  - release
---

# OVE-20190623-0001

## Within Security Advisory

Root-level Remote Command Injection in the [V](https://vlang.io) playground (OVE-20190623-0001)

> The real CVEs are the friends we made along the way

— awilfox

## Summary

While playing with the [V playground](https://vlang.io/play), a root-level command injection vulnerability was discovered. This allows an unauthenticated attacker to execute arbitrary root-level commands on the playground server.

This vulnerability is instantly exploitable by a remote, unauthenticated attacker in the default configuration. To remotely exploit this vulnerability, an attacker must send specially crafted HTTP requests to the playground server containing a malformed function call.

This playground server is not open-sourced or versioned yet, but this vulnerability has led to the compromise of the box as reported by the lead developer of V.

## Remote Exploitation

V allows calling C functions through a few means:

- starting a line with a `#` character
- calling a C function with the `C.` namespace

The V playground insufficiently strips the latter form of function call, allowing an invocation such as this:

```
fn main() {
    C .system(' id')
}
```

or even this:

```
fn main() {
    C
    .system(' id')
}
```

As the server is running as the root user, successful exploitation can result in an unauthenticated user totally compromising the system, as happened earlier yesterday on June 23, 2019. As the source code and configuration of the V playground server is unknown, it is not possible to track usage of these commands.

The playground did attempt to block these attacks, but it appeared to do pattern matching on `#` or `C.`, allowing the alternative methods mentioned above.
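To see why substring matching fails here, consider a toy filter in the spirit of what the playground appears to have done. The playground's real code is unpublished, so this is purely a hypothetical reconstruction:

```rust
/// A naive filter in the spirit of the playground's apparent approach:
/// reject any line starting with `#` and any literal `C.` substring.
/// Hypothetical reconstruction; the real playground code is unpublished.
fn looks_safe(src: &str) -> bool {
    !src.contains("C.")
        && !src.lines().any(|line| line.trim_start().starts_with('#'))
}

fn main() {
    // The obvious form is caught...
    assert!(!looks_safe("fn main() { C.system('id') }"));

    // ...but splitting the call across lines never produces the literal
    // substring `C.`, so the filter waves the exploit through.
    let bypass = "fn main() {\n    C\n    .system(' id')\n}";
    assert!(looks_safe(bypass));

    println!("naive substring filters are not a sandbox");
}
```

Because the language's own parser joins `C` and `.system` across whitespace, any filter that works on the raw text rather than the parsed syntax tree can be evaded this way.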
## Security Suggestions

Do not run the playground server as a root user outside a container or other form of isolation. The fact that this server runs user-submitted code makes this kind of thing very difficult to isolate and/or secure properly. The use of an explicit sandboxing environment like [gVisor](https://gvisor.dev) or [Docker](https://www.docker.com) is suggested. The use of more elaborate sandboxing mechanisms like [CloudABI](https://cloudabi.org) or [WebAssembly](https://webassembly.org) may be practical for future developments, but is admittedly out of scope for this initial class of issues.

## GReeTZ

Special thanks to the people of [#ponydev](https://pony.dev) for helping to discover and toy with this bug.

## Timeline

All times are Eastern Standard Time.

### June 23, 2019

- 4:56 PM - The first exploit was found and the contents of /etc/passwd were dumped; other variants of this attack were proposed and tested in the meantime
- 5:00 PM - The V playground server stopped replying to HTTP and ICMP messages
- 6:26 PM - The V creator was notified of this issue
- 7:02 PM - The V creator acknowledged the issue and admitted the machine was compromised

### June 24, 2019

- 12:00 AM - This security bulletin was released
@@ -1,166 +0,0 @@

---
title: "OVE-20191021-0001"
date: "2019-10-22"
tags:
  - security
  - release
  - javascript
  - mysql
  - oh-dear-god
---

# OVE-20191021-0001

## Within Security Advisory

Multiple vulnerabilities in the mysqljs API and code.

Security Warning Level: yikes/10

## Summary

There are multiple issues exploitable by local and remote actors in [mysqljs][mysqljs]. These can cause application data leaks, database leaks, SQL injections, arbitrary code execution, and credential leaks, among other things.

Mysqljs is unversioned, so it is difficult or impossible to tell how many users are affected by this and what users can do in order to ensure they are patched against these critical vulnerabilities.

## Background

Mysqljs is a library intended to facilitate prototyping web applications and mobile applications using technologies such as [PhoneGap][phonegap] or [Cordova][cordova]. These technologies allow developers to create a web application that gets packaged and presented to users as if it were a native application.

This library is intended to help developers create persistent storage for these applications.

## Issues in Detail

There are at least seven vulnerabilities with this library; each of them will be outlined below with a fairly vague level of detail.

### mysql.js is NOT versioned

The only version information I was able to find is the following:

- The `Last-Modified` date of Friday, March 11 2016
- The `ETag` of `80edc3e5a87bd11:0`

These header values correlate to a vulnerable version of the mysql.js file.

An entire copy of this file is embedded for purposes of explanation:

```javascript
var MySql = {
    _internalCallback : function() { console.log("Callback not set")},
    Execute: function (Host, Username, Password, Database, Sql, Callback) {
        MySql._internalCallback = Callback;
        // to-do: change localhost: to mysqljs.com
        var strSrc = "http://mysqljs.com/sql.aspx?";
        strSrc += "Host=" + Host;
        strSrc += "&Username=" + Username;
        strSrc += "&Password=" + Password;
        strSrc += "&Database=" + Database;
        strSrc += "&sql=" + Sql;
        strSrc += "&Callback=MySql._internalCallback";
        var sqlScript = document.createElement('script');
        sqlScript.setAttribute('src', strSrc);
        document.head.appendChild(sqlScript);
    }
}
```

### Fundamental Operation via Cross-Site Scripting

The code operates by creating a `<script>` element. The JavaScript source of this script is dynamically generated by the remote API server. This opens the door for many kinds of Cross-Site Scripting attacks.

Especially because:

### Credentials Exposed over Plain HTTP

The script works by creating a `<script>` element pointed at an HTTP resource in order to facilitate access to the MySQL server. Line 6 shows that the API server in question is being queried over UNENCRYPTED HTTP.

```javascript
var strSrc = "http://mysqljs.com/sql.aspx?";
```

### Credentials and SQL Queries Are Not URL-Encoded Before Adding Them to a URL

Credentials and SQL queries are not URL-encoded before they are added to the `strSrc` URL. This means that values may include other HTTP parameters that could be evaluated, causing one of the two following:

### Potential for SQL Injection from Malformed User Input

It appears this API works by people submitting plain-text SQL queries. It is likely difficult to write these plain-text queries in a way that avoids SQL injection attacks.
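The missing URL-encoding above is the mechanical root of the parameter-smuggling problem: an unencoded `&` or `=` inside a password or query spills over into a new HTTP parameter. A minimal sketch of the fix, hand-rolled here only to show the transformation (a real application would use an existing URL library rather than this illustrative helper):

```rust
/// Percent-encode a string so it is safe to embed as a single URL query value.
/// Unreserved characters (per RFC 3986) pass through; everything else
/// becomes a %XX escape.
fn percent_encode(s: &str) -> String {
    let mut out = String::new();
    for b in s.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}

fn main() {
    // Without encoding, this "password" smuggles an extra `sql=` parameter
    // into the request; encoded, it stays one opaque value.
    let password = "p@ss&sql=DROP TABLE users";
    println!("{}", percent_encode(password));
    // p%40ss%26sql%3DDROP%20TABLE%20users
}
```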
### Potential for Arbitrary Code Execution

Combined with the previous issues, a SQL injection that inserts arbitrary JavaScript into the result will end up creating an arbitrary code execution bug. This could let an attacker execute custom JavaScript code on the page, which may have even more disastrous consequences depending on the usage of this library.

### Server-Side Code has Unknown Logging Enabled

This means that user credentials and database results may be logged, stored and leaked by the mysql.js API server without user knowledge. The server that is running the API server may also do additional logging of database credentials and results without user knowledge.

### Encourages Bad Practices

Mysql.js works by its API server dialing out an _UNENCRYPTED_ connection to your MySQL server over the internet. This requires exposing your MySQL server to the internet. This means that user credentials are vulnerable to anyone who has packet-capture abilities.

Mysql.js also encourages developers to commit database credentials into their application source code. Cursory searching of GitHub has found [this][leakedcreds]. I can only imagine there are countless other potential victims.

## Security Suggestions

- Do not, under any circumstances, allow connections to be made without the use of TLS (HTTPS).
- Version the library.
- Offer the source code of the API server to allow users to inspect it and ensure their credentials are not being stored by it.
- Detail how the IIS server powering this service is configured, proving that it is not keeping unsanitized access logs.
- Ensure all logging methods sanitize or remove user credentials.
- URL-encode all values being sent as part of a URL.
- Do not have your service fundamentally operate as a Cross-Site Scripting attack.
- Do not, under any circumstances, encourage developers to put database credentials in the source code of front-end web applications.

In summary, we label this a solid yikes/10 in terms of security. It would be advisable for current users of this library to re-evaluate the life decisions that have led them down this path.

## GReeTZ

Über thanks to [jadr2ddude][jaden] for helping with identifying the unfortunate scope of these massive security issues.

Hyper thanks to [J][j] for coming up with a viable GitHub search for potentially affected users.

[mysqljs]: http://www.mysqljs.com/
[phonegap]: https://phonegap.com/
[cordova]: https://cordova.apache.org/
[leakedcreds]: https://github.com/search?utf8=%E2%9C%93&q=%22https%3A%2F%2Fmysqljs.com%2Fmysql.js%22&type=Code
[jaden]: https://twitter.com/CompuJad
[j]: https://twitter.com/LombaxJay
@@ -1,640 +0,0 @@

---
title: "TL;DR Rust"
date: 2020-09-19
series: rust
tags:
  - go
  - golang
---

# TL;DR Rust

Recently I've been starting to use Rust more and more for larger and larger projects. As things have come up, I realized that I am missing a good reference for common things in Rust as compared to Go. This post contains a quick high-level overview of patterns in Rust and how they compare to patterns in Go. It will focus on code samples. This is no replacement for the [Rust book](https://doc.rust-lang.org/book/), but should help you get spun up on the various patterns used in Rust code.

Also I'm happy to introduce Mara to the blog!

[Hey, happy to be here! I'm Mara, a shark hacker from Christine's imagination. I'll interject with side information, challenge assertions and more! Thanks for inviting me!](conversation://Mara/hacker)

Let's start somewhere simple: functions.

## Making Functions

Functions are defined using `fn` instead of `func`:

```go
func foo() {}
```

```rust
fn foo() {}
```

### Arguments

Arguments can be passed by separating the name from the type with a colon:

```go
func foo(bar int) {}
```

```rust
fn foo(bar: i32) {}
```

### Returns

Values can be returned by adding `-> Type` to the function declaration:

```go
func foo() int {
    return 2
}
```

```rust
fn foo() -> i32 {
    return 2;
}
```

In Rust, values can also be returned from the last expression without the `return` keyword or a terminating semicolon:

```rust
fn foo() -> i32 {
    2
}
```

[Hmm, what if I try to do something like this. Will this work?](conversation://Mara/hmm)

```rust
fn foo() -> i32 {
    if some_cond {
        2
    }

    4
}
```

Let's find out! The compiler spits back an error:

```
error[E0308]: mismatched types
 --> src/lib.rs:3:9
  |
2 | /     if some_cond {
3 | |         2
  | |         ^ expected `()`, found integer
4 | |     }
  | |     -- help: consider using a semicolon here
  | |_____|
  |       expected this to be `()`
```

This happens because most basic statements in Rust can return values. The best way to fix this would be to move the `4` return into an `else` block:

```rust
fn foo() -> i32 {
    if some_cond {
        2
    } else {
        4
    }
}
```

Otherwise, the compiler will think you are trying to use that `if` as an expression, like this:

```rust
let val = if some_cond { 2 } else { 4 };
```

### Functions that can fail

The [Result](https://doc.rust-lang.org/std/result/) type represents things that can fail with specific errors. The [eyre Result type](https://docs.rs/eyre) represents things that can fail with any error. For readability, this post will use the eyre Result type.

[The angle brackets in the `Result` type are arguments to the type; this allows the Result type to work across any type you could imagine.](conversation://Mara/hacker)

```go
import "errors"

func divide(x, y int) (int, error) {
    if y == 0 {
        return 0, errors.New("cannot divide by zero")
    }

    return x / y, nil
}
```

```rust
use eyre::{eyre, Result};

fn divide(x: i32, y: i32) -> Result<i32> {
    match y {
        0 => Err(eyre!("cannot divide by zero")),
        _ => Ok(x / y),
    }
}
```

[Huh? I thought Rust had the <a href="https://doc.rust-lang.org/std/error/trait.Error.html">Error trait</a>, shouldn't you be able to use that instead of a third-party package like eyre?](conversation://Mara/wat)

Let's try that. However, we will need to make our own error type because the [`eyre!`](https://docs.rs/eyre/0.6.0/eyre/macro.eyre.html) macro creates its own transient error type on the fly.

First we need to make our own simple error type for a DivideByZero error:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct DivideByZero;

impl fmt::Display for DivideByZero {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "cannot divide by zero")
    }
}

impl Error for DivideByZero {}
```

So now let's use it:

```rust
fn divide(x: i32, y: i32) -> Result<i32, DivideByZero> {
    match y {
        0 => Err(DivideByZero{}),
        _ => Ok(x / y),
    }
}
```

However, there is still one thing left: the function returns a DivideByZero error, not _any_ error like the [error interface in Go](https://godoc.org/builtin#error). In order to represent that, we need to return something that implements the Error trait:

```rust
fn divide(x: i32, y: i32) -> Result<i32, impl Error> {
    // ...
}
```

And for the simple case, this will work. However, as things get more complicated this simple facade will not work due to reality and its complexities. This is why I am shipping as much as I can out to other packages like eyre or [anyhow](https://docs.rs/anyhow). Check out this code in the [Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=946057d8eb02f388cb3f03bae226d10d) to mess with this code interactively.

[Pro tip: eyre (via <a href="https://docs.rs/color-eyre">color-eyre</a>) also has support for adding <a href="https://docs.rs/color-eyre/0.5.4/color_eyre/#custom-sections-for-error-reports-via-help-trait">custom sections and context</a> to errors similar to Go's <a href="https://godoc.org/fmt#Errorf">`fmt.Errorf` `%w` format argument</a>, which will help in real-world applications. When you do need to actually make your own errors, you may want to look into crates like <a href="https://docs.rs/thiserror">thiserror</a> to help with automatically generating your error implementation.](conversation://Mara/hacker)

### The `?` Operator

In Rust, the `?` operator checks for an error in a function call; if there is one, it automatically returns the error, and otherwise it gives you the result of the function. This only works in functions that return either an Option or a Result.

[The <a href="https://doc.rust-lang.org/std/option/index.html">Option</a> type isn't shown in very much detail here, but it acts like a "this thing might not exist and it's your responsibility to check" container for any value. The closest analogue in Go is making a pointer to a value or possibly putting a value in an `interface{}` (which can be annoying to deal with in practice).](conversation://Mara/hacker)

```go
func doThing() (int, error) {
    result, err := divide(3, 4)
    if err != nil {
        return 0, err
    }

    return result, nil
}
```

```rust
use eyre::Result;

fn do_thing() -> Result<i32> {
    let result = divide(3, 4)?;
    Ok(result)
}
```

If the second argument of divide is changed to `0`, then `do_thing` will return an error.

[And how does that work with eyre?](conversation://Mara/hmm)

It works with eyre because eyre has its own error wrapper type called [`Report`](https://docs.rs/eyre/0.6.0/eyre/struct.Report.html), which can represent anything that implements the Error trait.
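The same operator works with Option: using `?` on a `None` makes the whole function return `None` early. A small sketch (my own example, not from the standard library docs):

```rust
/// Return the first character of `s` uppercased, or None if `s` is empty.
/// The `?` bails out with None instead of needing an explicit match.
fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?; // early-returns None for ""
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper("rust"), Some('R'));
    assert_eq!(first_char_upper(""), None);
    println!("ok");
}
```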
|
||||
## Macros
|
||||
|
||||
Rust macros are function calls with `!` after their name:
|
||||
|
||||
```rust
|
||||
println!("hello, world");
|
||||
```
|
||||
|
||||
## Variables
|
||||
|
||||
Variables are created using `let`:
|
||||
|
||||
```go
|
||||
var foo int
|
||||
var foo = 3
|
||||
foo := 3
|
||||
```
|
||||
|
||||
```rust
|
||||
let foo: i32;
|
||||
let foo = 3;
|
||||
```
|
||||
|
||||
### Mutability
|
||||
|
||||
In Rust, every variable is immutable (unchangeable) by default. If we try to
|
||||
change those variables above we get a compiler error:
|
||||
|
||||
```rust
|
||||
fn main() {
|
||||
let foo: i32;
|
||||
let foo = 3;
|
||||
foo = 4;
|
||||
}
|
||||
```
|
||||
|
||||
This makes the compiler return this error:

```
error[E0384]: cannot assign twice to immutable variable `foo`
 --> src/main.rs:4:5
  |
3 |     let foo = 3;
  |         ---
  |         |
  |         first assignment to `foo`
  |         help: make this binding mutable: `mut foo`
4 |     foo = 4;
  |     ^^^^^^^ cannot assign twice to immutable variable
```

As the compiler suggests, you can create a mutable variable by adding the `mut`
keyword after the `let` keyword. There is no analog to this in Go.

```rust
let mut foo: i32 = 0;
foo = 4;
```

[This is slightly a lie. There are more advanced cases involving interior
mutability and other fun stuff like that, however that is a more advanced topic
that isn't covered here.](conversation://Mara/hacker)

### Lifetimes

Rust does garbage collection at compile time. It also passes ownership of memory
to functions as soon as possible. Lifetimes are how Rust calculates how "long" a
given bit of data should exist in the program. Rust will then tell the compiled
code to destroy the data from memory as soon as possible.

[This is slightly inaccurate in order to make this simpler to explain and
understand. It's probably more accurate to say that Rust calculates _when_ to
collect garbage at compile time, but the difference doesn't really matter for
most cases.](conversation://Mara/hacker)

For example, this code will fail to compile because `quo` was moved into the
second divide call:

```rust
let quo = divide(4, 8)?;
let other_quo = divide(quo, 5)?;

// Fails to compile because ownership of quo was given to divide when creating other_quo
let yet_another_quo = divide(quo, 4)?;
```

To work around this you can pass a reference to the divide function:

```rust
let other_quo = divide(&quo, 5)?;
let yet_another_quo = divide(&quo, 4)?;
```

Or even create a clone of it:

```rust
let other_quo = divide(quo.clone(), 5)?;
let yet_another_quo = divide(quo, 4)?;
```

[You can also get more fancy with <a
href="https://doc.rust-lang.org/rust-by-example/scope/lifetime/explicit.html">explicit
lifetime annotations</a>, however as of Rust's 2018 edition they aren't usually
required unless you are doing something weird. This is also covered in more
detail in <a
href="https://doc.rust-lang.org/stable/book/ch04-00-understanding-ownership.html">The
Rust Book</a>.](conversation://Mara/hacker)

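To see the move/borrow difference in a self-contained, runnable form, here is a
sketch with a hypothetical `shout` function standing in for `divide` (which
isn't defined in this post). `String` is not `Copy`, so passing it by value
moves it, while borrowing with `&` lets the caller keep ownership:

```rust
// Takes ownership of its argument: the caller loses access to it.
fn shout(s: String) -> String {
    s.to_uppercase()
}

// Borrows its argument: the caller keeps ownership.
fn shout_ref(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let greeting = String::from("hello");

    let loud = shout_ref(&greeting); // borrow: greeting is still usable
    let louder = shout_ref(&greeting); // borrowing again is fine
    assert_eq!(loud, "HELLO");
    assert_eq!(louder, "HELLO");

    let consumed = shout(greeting); // move: greeting is gone after this line
    assert_eq!(consumed, "HELLO");
    // shout(greeting); // would not compile: value moved above
}
```
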
### Passing Mutability

Sometimes functions need mutable variables. To pass a mutable reference, add
`&mut` before the name of the variable:

```rust
let something = do_something_to_quo(&mut quo)?;
```

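As a runnable sketch of this, with a hypothetical `double_in_place` function
standing in for `do_something_to_quo`:

```rust
// Mutates its argument in place through a mutable reference.
fn double_in_place(n: &mut i32) {
    *n *= 2;
}

fn main() {
    let mut quo = 4;
    double_in_place(&mut quo); // quo must be declared `mut` for this to compile
    assert_eq!(quo, 8);
}
```
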
## Project Setup

### Imports

External dependencies are declared using the [Cargo.toml
file](https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html):

```toml
# Cargo.toml

[dependencies]
eyre = "0.6"
```

This depends on the crate [eyre](https://crates.io/crates/eyre) at version
0.6.x.

[You can do much more with version requirements with cargo, see more <a
href="https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html">here</a>.](conversation://Mara/hacker)

Dependencies can also have optional features:

```toml
# Cargo.toml

[dependencies]
reqwest = { version = "0.10", features = ["json"] }
```

This depends on the crate [reqwest](https://crates.io/crates/reqwest) at version
0.10.x with the `json` feature enabled (in this case it enables reqwest being
able to automagically convert things to/from JSON using Serde).

External dependencies can be used with the `use` statement:

```go
// go

import "github.com/foo/bar"
```

```rust
use foo; // -> foo now has the members of crate foo behind the :: operator
use foo::Bar; // -> Bar is now exposed as a type in this file

use eyre::{eyre, Result}; // exposes the eyre! and Result members of eyre
```

[This doesn't cover how the <a
href="http://www.sheshbabu.com/posts/rust-module-system/">module system</a>
works, however the post I linked there covers this better than I
can.](conversation://Mara/hacker)

## Async/Await

Async functions may be interrupted to let other things execute as needed. This
program uses [tokio](https://tokio.rs/) to handle async tasks. To run an async
task and wait for its result, do this:

```rust
let printer_fact = reqwest::get("https://printerfacts.cetacean.club/fact")
    .await?
    .text()
    .await?;
println!("your printer fact is: {}", printer_fact);
```

This will populate `printer_fact` with an amusing fact about everyone's favorite
household pet, the [printer](https://printerfacts.cetacean.club).

To make an async function, add the `async` keyword before the `fn` keyword:

```rust
async fn get_text(url: String) -> Result<String> {
    Ok(reqwest::get(&url)
        .await?
        .text()
        .await?)
}
```

This can then be called like this:

```rust
let printer_fact = get_text("https://printerfacts.cetacean.club/fact").await?;
```

## Public/Private Types and Functions

Rust has three privacy levels for functions:

- Only visible to the current file (no keyword, lowercase in Go)
- Visible to anything in the current crate (`pub(crate)`, internal packages in
  Go)
- Visible to everyone (`pub`, upper case in Go)

[You can't get a perfect analog to `pub(crate)` in Go, but <a
href="https://docs.google.com/document/d/1e8kOo3r51b2BWtTs_1uADIA5djfXhPT36s6eHVRIvaU/edit">internal
packages</a> can get close to this behavior. Additionally you can have a lot
more control over access levels than this, see <a
href="https://doc.rust-lang.org/nightly/reference/visibility-and-privacy.html">here</a>
for more information.](conversation://Mara/hacker)

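A minimal sketch of all three levels in one file (the module and function names
are made up for illustration):

```rust
mod greetings {
    // Private: only visible inside this module.
    fn exclaim(s: &str) -> String {
        format!("{}!", s)
    }

    // Crate-visible: anything in this crate can call it.
    pub(crate) fn greet(name: &str) -> String {
        exclaim(&format!("hello, {}", name))
    }

    // Public: visible to other crates as well.
    pub fn wave() -> String {
        String::from("o/")
    }
}

fn main() {
    assert_eq!(greetings::greet("Mara"), "hello, Mara!");
    assert_eq!(greetings::wave(), "o/");
    // greetings::exclaim("hi"); // would not compile: private to the module
}
```
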
## Structures

Rust structures are created using the `struct` keyword:

```go
type Client struct {
	Token string
}
```

```rust
pub struct Client {
    pub token: String,
}
```

If the `pub` keyword is not specified before a member name, it will not be
usable outside the Rust source code file it is defined in:

```go
type Client struct {
	token string
}
```

```rust
pub(crate) struct Client {
    token: String,
}
```

### Encoding structs to JSON

[serde](https://serde.rs) is used to convert structures to JSON. The Rust
compiler's
[derive](https://doc.rust-lang.org/stable/rust-by-example/trait/derive.html)
feature is used to automatically implement the conversion logic.

```go
type Response struct {
	Name        string  `json:"name"`
	Description *string `json:"description,omitempty"`
}
```

```rust
use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
pub(crate) struct Response {
    pub name: String,
    pub description: Option<String>,
}
```

## Strings

Rust has a few string types that do different things. You can read more about
this [here](https://fasterthanli.me/blog/2020/working-with-strings-in-rust/),
but at a high level most projects only use a few of them:

- `&str`, a slice reference to a `String` owned by someone else
- `String`, an owned UTF-8 string
- `PathBuf`, a filepath string (encoded in whatever encoding the OS running
  this code uses for filesystems)

The strings are different types for safety reasons. See the linked blogpost for
more detail about this.

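A small runnable sketch of moving between these types (the path assertion
assumes Unix-style separators):

```rust
use std::path::PathBuf;

fn main() {
    let owned: String = String::from("hello there");
    let slice: &str = &owned; // borrow a view into the owned String
    assert_eq!(slice.len(), 11);

    let rebuilt: String = slice.to_string(); // copy back into an owned String
    assert_eq!(rebuilt, owned);

    let mut path = PathBuf::from("/tmp");
    path.push("blog.txt"); // PathBuf inserts the separator for you
    assert_eq!(path.to_str(), Some("/tmp/blog.txt"));
}
```
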
## Enumerations / Tagged Unions

Enumerations, also known as tagged unions, are a way to specify a superposition
of one of a few different kinds of values in one type. A neat way to show them
off (along with some other fancy features like the derivation system) is with
the [structopt](https://docs.rs/structopt/0.3.14/structopt/) crate. There is no
easy analog for this in Go.

[We've actually been dealing with enumerations ever since we touched the Result
type earlier. <a
href="https://doc.rust-lang.org/std/result/enum.Result.html">Result</a> and <a
href="https://doc.rust-lang.org/std/option/enum.Option.html">Option</a> are
implemented with enumerations.](conversation://Mara/hacker)

```rust
#[derive(StructOpt, Debug)]
#[structopt(about = "A simple release management tool")]
pub(crate) enum Cmd {
    /// Creates a new release for a git repo
    Cut {
        #[structopt(flatten)]
        common: Common,
        /// Changelog location
        #[structopt(long, short, default_value = "./CHANGELOG.md")]
        changelog: PathBuf,
    },

    /// Runs releases as triggered by GitHub Actions
    GitHubAction {
        #[structopt(flatten)]
        gha: GitHubAction,
    },
}
```

Enum variants can be matched using the `match` keyword:

```rust
match cmd {
    Cmd::Cut { common, changelog } => {
        cmd::cut::run(common, changelog).await
    }
    Cmd::GitHubAction { gha } => {
        cmd::github_action::run(gha).await
    }
}
```

All variants of an enum must be matched in order for the code to compile.

[This code was borrowed from <a
href="https://github.com/lightspeed/palisade">palisade</a> in order to
demonstrate this better. If you want to see these patterns in action, check this
repository out!](conversation://Mara/hacker)

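Here's a self-contained sketch of the same pattern without the structopt
machinery (the `Shape` enum is purely illustrative):

```rust
// Each variant carries its own payload, like a tagged union.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Every variant must be handled or this won't compile.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 3.0, h: 4.0 }), 12.0);
    assert!(area(&Shape::Circle { radius: 1.0 }) > 3.14);
}
```
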
## Testing

Test functions need to be marked with the `#[test]` annotation, then they will
be run alongside `cargo test`:

```rust
#[cfg(test)] // only compile this module when testing
mod tests { // not required but it is good practice
    #[test]
    fn math_works() {
        assert_eq!(2 + 2, 4);
    }

    #[tokio::test] // needs tokio as a dependency
    async fn http_works() {
        let _ = get_html("https://within.website").await.unwrap();
    }
}
```

Avoid the use of `unwrap()` outside of tests. In the worst cases, using
`unwrap()` in production code can cause the server to crash and can incur data
loss.

[Alternatively, you can also use the <a
href="https://learning-rust.github.io/docs/e4.unwrap_and_expect.html#expect">`.expect()`</a>
method instead of `.unwrap()`. This lets you attach a message that will be
shown when the result isn't Ok.](conversation://Mara/hacker)

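A runnable sketch of both methods, using a hypothetical fallible `divide` like
the one discussed earlier:

```rust
// Hypothetical fallible function to demonstrate unwrap vs. expect.
fn divide(num: i32, den: i32) -> Result<i32, String> {
    if den == 0 {
        return Err(String::from("cannot divide by zero"));
    }
    Ok(num / den)
}

fn main() {
    // unwrap(): fine in tests, but panics with a generic message on Err.
    let quo = divide(8, 2).unwrap();
    assert_eq!(quo, 4);

    // expect(): panics with your message, which makes crashes easier to debug.
    let quo = divide(9, 3).expect("9 / 3 should never fail");
    assert_eq!(quo, 3);
}
```
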
---

This is by no means comprehensive; see the Rust book or [Learn X in Y Minutes
Where X = Rust](https://learnxinyminutes.com/docs/rust/) for more information.
This code is written to be as boring and obvious as possible. If things don't
make sense, please reach out and don't be afraid to ask questions.

@ -2,8 +2,6 @@
title: A Letter to Those That Bullied Me
date: 2018-06-16
for: Elizabeth
tags:
 - offmychest
---

# A Letter to Those Who Bullied Me

@ -1,112 +0,0 @@
---
title: "Outsider Art and Anathema"
date: 2019-10-21
tags:
 - philosophy
 - art
 - makes-u-thonk
---

# Outsider Art and Anathema

This was going to be a post about [Urbit][urbit] at first; but in the process of discussing my interest in writing something _positive_ about it, I was warned by a few people that this was a Bad Idea. I was focusing purely on the technical side of it and how closely it implemented a concept called [liquid software][liquidsoftware], but from what people were saying, it seemed like a creation that was spoiled by something outside of it, specifically the creator's political views (of which I had little idea at the time).

As much as I will probably return to the original concept in the future with another post, this feels like something I had to address first.

**DISCLAIMER:** This post references projects and people that the mainstream considers controversial. This post is not an approval of these people's views. I am focusing purely on how this affects the way art is perceived, recognized and able to be admired. I realize that the people behind the projects I have cited have said things that, if taken seriously at a societal level, could hurt me and people like me. That is not the point of this; I am trying to learn how this art works so I can create my own in the future. If this is uncomfortable for you at any point, please close this browser tab and do something else.

## Art

So, what is art?

This is a surprisingly hard question to answer. Most of the time though, I know art when I see it.

Art doesn't have to follow conventional ideas of what most people think "art" is. Art can be just about anything that you can classify as art. As a conventional example, consider something like the Mona Lisa:

![The Mona Lisa, the most famous painting in the world](https://xena.greedo.xeserv.us/files/monalisa_small.jpg)

People will accept this as art without much argument. It's a painting, and it obviously took a lot of skill and time to create. It is said that Leonardo da Vinci (the artist of the painting) created it partially [as a contribution to the state of the art of oil painting][monalisawhy].

So that painting is art, and a lot of people would consider it art; so what *would* a lot of people *not* consider art? Here's an example:

![Untitled (Perfect Lovers) by Felix Gonzalez-Torres](https://xena.greedo.xeserv.us/files/perfect-lovers.jpg)

This is *Untitled (Perfect Lovers)* by Felix Gonzalez-Torres. If you just take a look at it without context, it's just two battery-operated clocks on a wall. Where is the expertise and the like that goes into this? This is just the result of someone buying two clocks from the store and putting them somewhere, right?

Let's dig into [the description of the piece][perfectloversdescription]:

> Initially set to the same time, these identical battery-powered clocks will eventually fall out of sync, or may stop entirely. Conceived shortly after Gonzalez-Torres’s partner was diagnosed with AIDS, this work uses everyday objects to track and measure the inevitable flow of time. When one of the clocks stops or breaks, they can both be reset, thereby resuming perfect synchrony. In 1991, Gonzalez-Torres reflected, “Time is something that scares me. . . or used to. This piece I made with the two clocks was the scariest thing I have ever done. I wanted to face it. I wanted those two clocks right in front of me, ticking.”

And after reading that description, it's impossible for me to say this image is _not_ art. Even though it's made up of ordinary objects, the art comes out in the way that the clocks' eventual death relates to the eventual death of the author and their partner.

This art may be located on the fringes of what people consider "art". So what else is on the fringes?

### Outsider Art

For there to be "fringes" to the art landscape, there must be an "inside" and "outside" to it. In particular, "outsider" art usually (but not always) contains elements and themes that are outside of the mainstream. Outsiders are therefore more free to explore ideas, concepts and ways of expression that defy cultural, spiritual or other norms. Logically, every major art style you know and love started as outsider art, before it was cool. Memes are also a form of outsider art, though they are gradually being accepted into the mainstream.

It's very easy to find outsider art if you are looking for it: just fish for some on Twitter, 4chan or Reddit; you'll find plenty of artists there who are placed firmly outside of the mainstream art community.

## Computer Science

Computer science is a kind of art. It's the art of turning contextual events into effects and state. It's also the art of creating solutions for problems that could never be solved before. It's also the science of how to connect millions of people across common protocols and abstractions that they don't have to understand in order to use.

This is an art that connects millions and has shaped itself into an industry of its own. This art, like the rest of mainstream art, keeps evolving, growing and changing into something new; into a more ultimate and detailed expression of what it can be, as people explore the ways it can be created and presented. This art is also quite special because it's not very limited by physical objects or expressions in material space. It's an art that can evolve and change with the viewer.

But, since this is an art, there's still an inside and an outside. Things on the inside are generally "safe" for people to admire, use and look at. The inside contains things like Linux, Docker, Kubernetes, Intel, C, Go, PHP, Ruby and other well-known and battle-proven tools.

### The Outside

The outside, however, is where the real innovation happens. The outside is where people can really take a more critical look at what computing is, does or can be. These views can add up into fundamentally different ways of looking at computer science, much like changing a pair of glasses for another changes how you see the world around you.

As an example, consider [TempleOS][codersnotestempleos]. It's a work of outsider art by [Terry Davis][terrydavis] (1969-2018, RIP), but it's also a fully functional operating system. It has a custom-built kernel, compiler, toolchain, userland, debugger, games, and documentation system, each integrated into everything else, in ways that could realistically not be done with how mainstream software is commonly developed.

[Urbit][urbit] is another example of this. It's a fundamentally different way of looking at networked computing. Everything in Urbit is seamlessly interlinked with everything else, to the point that it can be surprising that a file you are working with actually lives on another computer. It implements software updates as invisible to the user. It allows for the model of [liquid software][liquidsoftware], or updates to a program flowing into users' computers without the users having to care about the updates. Users don't even notice the downtime.

As yet another example, consider [Minecraft][minecraft]. As of the writing of this article, it is the video game with the most copies sold in human history. It is an open world block building game where the limits of what you can make are the limits of your imagination. It has been continuously updated, refined and improved from a minimal proof of concept into the game it is today.

## The Seam

Consider this quote that comes into play a lot with outsider art:

> Genius and insanity are differentiated only by context. One person's genius is another person's insanity.

- Anonymous

These three projects are developed by people whom the mainstream has cast out. Terry Davis' mental health issues and delusions about hearing the voice of God have tainted TempleOS into being that "weird bible OS" to the point where people completely disregard it. Urbit was partially created by a right-wing reactionary (Curtis Yarvin). He has been so ostracized that he [cannot publicly talk about his work][curtisbannedfromlambdaconf] to the kind of people that would most directly benefit from learning about it. Curtis isn't even involved with Urbit anymore, and his name is still somehow an irrevocable black mark on the entire thing. Minecraft was initially created by Notch, who recently had [intro texts mentioning his name removed from the game][minecraftintrotextpatch] after he said questionable things about transgender people.

## Anathema

This "irrevocable" black mark has a name: [Anathema][anathema]. It refers to anything that is shunned by the mainstream. Outsiders that create outsider art may or may not be anathema to their respective mainstreams. This turns the art into a taboo, a curse, a stain. People no longer see an anathema as the art it is, but merely the worthless product of someone that society would probably rather forget if it had the chance.

I don't really know how well this sits with me, personally. Outsiders have unique views of the world that can provide ideas that ultimately strengthen us all. Society's role is to disseminate mainstream improvements to large groups, but real development happens at the personal level.

Does one bad apple really spoil the sociological bunch? Why does this happen? Have the political divides gotten so deeply entrenched into society that people really become beyond reproach? Isn't this a recursive trap? How does someone redeem themselves to no longer be an anathema? Is it possible for people who are anathema to redeem themselves? Why or why not? Is there room for forgiveness, or does the [original sin][originalsin] doom the sinner eternally, much like it does in Catholicism?

Are the creations of an anathema outsider artist still art? Are they still an artist even though they become unable to share their art with others?

---

I don't know. These are hard questions. I don't really have much of a conclusion here. I don't want to seem like I'm trying to prescribe a method of thinking here. I'm just sitting on the side and spouting out ideas to inspire people to think for themselves.

I'm just challenging you, the reader, to really think about what/who is and is not an anathema in your day-to-day life. Identify them. Understand where/who they are. Maybe even apply some compassion and attempt to understand their view and how they got there. I'm not saying to put yourself in danger, but just to be mindful of it.

Be well.

---

Special thanks to CelestialBoon, Grapz and MoonGoodGryph for proofreading and helping with this post. This would be a very different article without their feedback and input.

[urbit]: https://urbit.org
[liquidsoftware]: https://liquidsoftware.com
[monalisa]: https://xena.greedo.xeserv.us/files/monalisa_small.jpg
[monalisawhy]: http://www.visual-arts-cork.com/painting/sfumato.htm
[perfectlovers]: https://xena.greedo.xeserv.us/files/perfect-lovers.jpg
[perfectloversdescription]: https://www.moma.org/collection/works/81074
[codersnotestempleos]: http://www.codersnotes.com/notes/a-constructive-look-at-templeos/
[terrydavis]: https://en.wikipedia.org/wiki/Terry_A._Davis
[curtisbannedfromlambdaconf]: http://www.inc.com/tess-townsend/why-it-matters-that-an-obscure-programming-conference-is-hosting-mencius-moldbug.html
[anathema]: https://en.wikipedia.org/wiki/Anathema
[minecraft]: https://www.minecraft.net/en-us/
[minecraftintrotextpatch]: https://variety.com/2019/gaming/news/notch-removed-minecraft-1203174964/
[originalsin]: https://en.wikipedia.org/wiki/Original_sin

@ -1,10 +1,7 @@
---
title: My Experience with Atom as A Vim User
date: 2014-11-18
series: medium-archive
tags:
 - atom
 - vim
from: medium
---

My Experience with Atom as A Vim User

@ -1,229 +0,0 @@
---
title: "</kubernetes>"
date: 2021-01-03
---

# </kubernetes>

Well, since I posted [that last post](/blog/k8s-pondering-2020-12-31) I have had
an adventure. A good friend pointed out a server host that I had missed when I
was looking for other places to use, and now I have migrated my blog to this new
server. As of yesterday, I now run my website on a dedicated server in Finland.
Here is the story of my journey to migrate 6 years of cruft and technical debt
to this new server.

Let's talk about this goliath of a server. This server is an AX41 from Hetzner.
It has 64 GB of RAM, a 512 GB NVMe drive, three 2 TB drives, and a Ryzen 3600.
For all practical concerns, this beast is beyond overkill and rivals my
workstation tower in everything but GPU power. I have named it `lufta`, which is
the word for feather in [L'ewa](https://lewa.within.website/dictionary.html).

## Assimilation

For my server setup process, the first step is to assimilate it. In this step I
get a base NixOS install on it somehow. Since I was using Hetzner, I was able to
boot into a NixOS install image using the process documented
[here](https://nixos.wiki/wiki/Install_NixOS_on_Hetzner_Online). Then I decided
that it would also be cool to have this server use
[zfs](https://en.wikipedia.org/wiki/ZFS) as its filesystem to take advantage of
its legendary subvolume and snapshotting features.

So I wrote up a bootstrap system definition like the Hetzner tutorial said and
ended up with `hosts/lufta/bootstrap.nix`:

```nix
{ pkgs, ... }:

{
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg9gYKVglnO2HQodSJt4z4mNrUSUiyJQ7b+J798bwD9 cadey@shachi"
  ];

  networking.usePredictableInterfaceNames = false;
  systemd.network = {
    enable = true;
    networks."eth0".extraConfig = ''
      [Match]
      Name = eth0
      [Network]
      # Add your own assigned ipv6 subnet here!
      Address = 2a01:4f9:3a:1a1c::/64
      Gateway = fe80::1
      # optionally you can do the same for ipv4 and disable DHCP (networking.dhcpcd.enable = false;)
      Address = 135.181.162.99/26
      Gateway = 135.181.162.65
    '';
  };

  boot.supportedFilesystems = [ "zfs" ];

  environment.systemPackages = with pkgs; [ wget vim zfs ];
}
```

Then I fired up the kexec tarball and waited for the server to boot into a NixOS
live environment. A few minutes later I was in. I started formatting the drives
according to the [NixOS install
guide](https://nixos.org/manual/nixos/stable/index.html#sec-installation) with
one major difference: I added a `/boot` ext4 partition on the SSD. This allows
me to have the system root device on zfs. I added the disks to a `raidz1` pool
and created a few volumes. I also added the SSD as a log device so I get SSD
caching.

From there I installed NixOS as normal and rebooted the server. It booted
normally. I had a shiny new NixOS server in the cloud! I noticed that the server
had booted into NixOS unstable as opposed to NixOS 20.09 like my other nodes. I
thought "ah, well, that probably isn't a problem" and continued to the
configuration step.

[That's ominous...](conversation://Mara/hmm)

## Configuration

Now that the server was assimilated and I could SSH into it, the next step was
to configure it to run my services. While I was waiting for Hetzner to provision
my server I ported a bunch of my services over to Nixops services [a-la this
post](/blog/nixops-services-2020-11-09) in [this
folder](https://github.com/Xe/nixos-configs/tree/master/common/services) of my
configs repo.

Now that I had them, it was time to add this server to my Nixops setup. So I
opened the [nixops definition
folder](https://github.com/Xe/nixos-configs/tree/master/nixops/hexagone) and
added the metadata for `lufta`. Then I added it to my Nixops deployment with
this command:

```console
$ nixops modify -d hexagone -n hexagone *.nix
```

Then I copied over the autogenerated config from `lufta`'s `/etc/nixos/` folder
into
[`hosts/lufta`](https://github.com/Xe/nixos-configs/tree/master/hosts/lufta) and
ran a `nixops deploy` to add some other base configuration.

## Migration

Once that was done, I started enabling my services and pushing configs to test
them. After I got to a point where I thought things would work I opened up the
Kubernetes console and started deleting deployments on my kubernetes cluster as
I felt "safe" to migrate them over. Then I saw the deployments come back. I
deleted them again and they came back again.

Oh, right. I enabled that one Kubernetes service that made it intentionally hard
to delete deployments. One clever set of scale-downs and kills later and I was
able to kill things with wild abandon.

I copied over the gitea data with `rsync` running in the kubernetes deployment.
Then I killed the gitea deployment, updated DNS and reran a whole bunch of gitea
jobs to resanify the environment. I did a test clone on a few of my repos and
then I deleted the gitea volume from DigitalOcean.

Moving over the other deployments from Kubernetes into NixOS services was
somewhat easy, however I did need to repackage a bunch of my programs and static
sites for NixOS. I made the
[`pkgs`](https://github.com/Xe/nixos-configs/tree/master/pkgs) tree a bit more
fleshed out to compensate.

[Okay, packaging static sites in NixOS is beyond overkill, however a lot of them
need some annoyingly complicated build steps and throwing it all into Nix means
that we can make them reproducible and use one build system to rule them
all. Not to mention that when I need to upgrade the system, everything will
rebuild with new system libraries to avoid the <a
href="https://blog.tidelift.com/bit-rot-the-silent-killer">Docker bitrot
problem</a>.](conversation://Mara/hacker)

## Reboot Test
|
||||
|
||||
After a significant portion of the services were moved over, I decided it was
|
||||
time to do the reboot test. I ran the `reboot` command and then...nothing.
|
||||
My continuous ping test was timing out. My phone was blowing up with downtime
|
||||
messages from NodePing. Yep, I messed something up.
|
||||
|
||||
I was able to boot the server back into a NixOS recovery environment using the
|
||||
kexec trick, and from there I was able to prove the following:
|
||||
|
||||
- The zfs setup is healthy
|
||||
- I can read some of the data I migrated over
|
||||
- I can unmount and remount the ZFS volumes repeatedly
I was confused. This shouldn't be happening. After half an hour of
troubleshooting, I gave in and ordered an IPKVM to be installed in my server.

Once that was set up (and I managed to trick macOS into letting me boot a .jnlp
web start file), I rebooted the server so I could see what error I was getting
on boot. I missed it the first time around, but on the second time I was able to
capture this screenshot:

![The error I was looking
for](https://cdn.christine.website/file/christine-static/blog/Screen+Shot+2021-01-03+at+1.13.05+AM.png)

Then it hit me. I did the install on NixOS unstable. My other servers use NixOS
20.09. I had effectively downgraded ZFS, and the older version couldn't mount
the volume created by the newer version in read/write mode. One more trip to
the recovery environment later, I installed NixOS unstable in a new generation.
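One way to catch this kind of mismatch before a reboot is to compare ZFS versions on both systems; `zfs version` (available since ZFS 0.8) prints both the userland and kernel module versions:

```shell
# Prints the userland tools version and the loaded kernel module version.
# If the installer environment reports a newer version than the installed
# system, pools created there may not mount read/write on the older one.
zfs version
```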
Then I switched my tower's default NixOS channel to the unstable channel and ran
`nixops deploy` to reactivate my services. After the NodePing uptime
notifications came in, I ran the reboot test again while watching the console
output to be sure.

It booted. It worked. I had a stable setup. Then I reconnected to IRC and passed
out.

## Services Migrated

Here is a list of all of the services I have migrated over from my old dedicated
server, my Kubernetes cluster and my Dokku server:

- aerial -> Discord chatbot
- goproxy -> Go modules proxy
- lewa -> https://lewa.within.website
- hlang -> https://h.christine.website
- mi -> https://mi.within.website
- printerfacts -> https://printerfacts.cetacean.club
- xesite -> https://christine.website
- graphviz -> https://graphviz.christine.website
- idp -> https://idp.christine.website
- oragono -> ircs://irc.within.website:6697/
- tron -> Discord bot
- withinbot -> Discord bot
- withinwebsite -> https://within.website
- gitea -> https://tulpa.dev
- other static sites

Doing this migration is a bit of an archaeology project as well. I was
continuously discovering services that I had littered across my machines with
very poorly documented requirements and configuration. I hope that this move
will make the next migration of this kind a lot easier by comparison.

I still have a few other services to move over, however the ones that are left
are much more annoying to set up properly. This migration lets me deprovision
five servers, leaves me with this stupidly powerful goliath of a server to do
whatever I want with, and cuts my monthly server costs by more than half.

I am very close to being able to turn off the Kubernetes cluster and use NixOS
for everything. A few services that are still on the Kubernetes cluster are
resistant to being Nixified, so I may have to keep using the Docker containers
for those. I was hoping to be able to cut out Docker entirely, however we don't
seem to be that lucky yet.

Sure, there is some added latency with the server being in Europe instead of
Montreal, however if this ever becomes a practical issue I can always launch a
cheap DigitalOcean VPS in Toronto to act as a DNS server for my WireGuard setup.

Either way, I am now off Kubernetes for my highest traffic services. If services
of mine need to use the disk, they can now just use the disk. If I really care
about the data, I can add the service folders to the list of paths to back up to
`rsync.net` (I have a post about how this backup process works in the drafting
stage) via [borgbackup](https://www.borgbackup.org/).

Let's hope it stays online!

---

Many thanks to [Graham Christensen](https://twitter.com/grhmc), [Dave
Anderson](https://twitter.com/dave_universetf) and everyone else who has been
helping me along this journey. I would be lost without them.
@ -2,8 +2,6 @@
title: The Beautiful in the Ugly
date: 2018-04-23
for: Silver
tags:
- shell
---

# The Beautiful in the Ugly
@ -1,9 +1,6 @@
---
title: Web Application Development with Beego
date: 2014-11-28
tags:
- go
- beego
---

Web Application Development with Beego
@ -1,7 +1,6 @@
---
title: "Blind Men and an Elephant"
date: 2018-11-29
series: conlangs
---

# Blind Men and an Elephant
@ -1,21 +0,0 @@
---
title: "Blog Feature: Art Gallery"
date: 2019-11-01
tags:
- art
- announce
- 100th-post
---

# Blog Feature: Art Gallery

I have just implemented support for my portfolio site to also function as an art
gallery. See all of my posted art [here](/gallery).

I have been trying to get better at art for a while and I feel I'm at the level
where I'm comfortable putting it on my portfolio. Let's see how far this rabbit
hole goes.

---

Also this is my 100th post! Yay!
@ -1,178 +0,0 @@
---
title: "How to Set Up Borg Backup on NixOS"
date: 2021-01-09
series: howto
tags:
- nixos
- borgbackup
---

# How to Set Up Borg Backup on NixOS

[Borg Backup](https://www.borgbackup.org/) is an encrypted, compressed,
deduplicated backup program for multiple platforms including Linux. Combined
with the [NixOS options for configuring
Borg Backup](https://search.nixos.org/options?channel=20.09&show=services.borgbackup.jobs.%3Cname%3E.paths&from=0&size=30&sort=relevance&query=services.borgbackup.jobs),
it allows you to back up on a schedule and restore from those backups when you
need to.

Borg Backup works with local files and remote servers, and there are even [cloud
hosts](https://www.borgbackup.org/support/commercial.html) that specialize in
hosting your backups. In this post we will cover how to set up a backup job on a
server using [BorgBase](https://www.borgbase.com/)'s free tier to host the
backup files.

## Setup

You will need a few things:

- A free BorgBase account
- A server running NixOS
- A list of folders to back up
- A list of folders to NOT back up

First, we will need to create an SSH key for root to use when connecting to
BorgBase. Open a shell as root on the server and make a `borgbackup` folder in
root's home directory:

```shell
mkdir borgbackup
cd borgbackup
```

Then create an SSH key that will be used to connect to BorgBase:

```shell
ssh-keygen -f ssh_key -t ed25519 -C "Borg Backup"
```

Leave the SSH key password empty, because at the time of writing the automated
Borg Backup job doesn't allow the use of password-protected SSH keys.

Now we need to create an encryption passphrase for the backup repository. Run
this command to generate one using [xkcdpass](https://pypi.org/project/xkcdpass/):

```shell
nix-shell -p python39Packages.xkcdpass --run 'xkcdpass -n 12' > passphrase
```
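The passphrase and SSH private key now sit unencrypted in root's home directory, so it's worth tightening their permissions; this step is my addition, not part of the original instructions:

```shell
# Restrict the secrets to root only; the .pub key can stay world-readable
chmod 600 passphrase ssh_key
```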
[You can do whatever you want to generate a suitable passphrase, however
xkcdpass generates passphrases with <a href="https://xkcd.com/936/">plenty of
entropy</a> that are still easy to write down, which most other password
generators can't claim.](conversation://Mara/hacker)

## BorgBase Setup

Now that we have the basic requirements out of the way, let's configure BorgBase
to use that SSH key. In the BorgBase UI click on the Account tab in the upper
right and open the SSH key management window. Click on Add Key and paste in the
contents of `./ssh_key.pub`. Name it after the hostname of the server you are
working on. Click Add Key and then go back to the Repositories tab in the upper
right.

Click New Repo and name it after the hostname of the server you are working on.
Select the key you just created to have full access. Choose the region of the
backup volume and then click Add Repository.

On the main page copy the repository path with the copy icon next to your
repository in the list. You will need this below. Attempt to SSH into the backup
repo in order to have ssh recognize the server's host key:

```shell
ssh -i ./ssh_key o6h6zl22@o6h6zl22.repo.borgbase.com
```

Then accept the host key and press control-c to terminate the SSH connection.
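If you'd rather skip the interactive step, `ssh-keyscan` can record the host key non-interactively; substitute your own repository hostname for the example one below. Note that, like the interactive accept, this trusts whatever key the network returns on first use:

```shell
# Append the repo host's keys (hashed with -H) to root's known_hosts
ssh-keyscan -H o6h6zl22.repo.borgbase.com >> /root/.ssh/known_hosts
```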
## NixOS Configuration

In your `configuration.nix` file, add the following block:

```nix
services.borgbackup.jobs."borgbase" = {
  paths = [
    "/var/lib"
    "/srv"
    "/home"
  ];
  exclude = [
    # very large paths
    "/var/lib/docker"
    "/var/lib/systemd"
    "/var/lib/libvirt"

    # temporary files created by cargo and `go build`
    "**/target"
    "/home/*/go/bin"
    "/home/*/go/pkg"
  ];
  repo = "o6h6zl22@o6h6zl22.repo.borgbase.com:repo";
  encryption = {
    mode = "repokey-blake2";
    passCommand = "cat /root/borgbackup/passphrase";
  };
  environment.BORG_RSH = "ssh -i /root/borgbackup/ssh_key";
  compression = "auto,lzma";
  startAt = "daily";
};
```

Customize the paths and exclude lists to your needs. Once you are satisfied,
rebuild your NixOS system using `nixos-rebuild`:

```shell
nixos-rebuild switch
```

And then you can fire off an initial backup job with this command:

```shell
systemctl start borgbackup-job-borgbase.service
```

Monitor the job with this command:

```shell
journalctl -fu borgbackup-job-borgbase.service
```

The first backup job will always take the longest to run. Every incremental
backup after that will get smaller and smaller. By default, the system will
create new backup snapshots every night at midnight local time.
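To verify that snapshots are actually accumulating, you can use the per-job `borg-job-<name>` wrapper script that NixOS generates (the same one used for mounting below) to list the archives in the repository:

```shell
# List all archives in the repo; the wrapper sets BORG_RSH and the
# passphrase command from the job definition for you
borg-job-borgbase list
```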
## Restoring Files

To restore files, first figure out when you want to restore the files from.
NixOS includes a wrapper script for each Borg job you define. You can mount your
backup archive using this command:

```
mkdir mount
borg-job-borgbase mount o6h6zl22@o6h6zl22.repo.borgbase.com:repo ./mount
```

Then you can explore the backup (and with it each incremental snapshot) to
your heart's content, looking through each folder and copying out what you
need manually.

When you are done you can unmount it with this command:

```
borg-job-borgbase umount /root/borgbase/mount
```

---

And that's it! You can get more fancy with nixops using a setup [like
this](https://github.com/Xe/nixos-configs/blob/master/common/services/backup.nix).
In general though, you can get away with this setup. It may be a good idea to
copy the encryption passphrase down onto paper and put it somewhere safe, like
a safety deposit box.
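One thing the job definition above doesn't do is prune old archives, so the repository will grow without bound. The NixOS module exposes Borg's retention settings; here is a sketch of a policy (the numbers are my own, pick ones that fit your storage quota):

```nix
services.borgbackup.jobs."borgbase".prune.keep = {
  daily = 7;    # one archive per day for the last week
  weekly = 4;   # one per week for the last month
  monthly = 6;  # one per month for the last half year
};
```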
For more information about Borg Backup on NixOS, see [the relevant chapter of
the NixOS
manual](https://nixos.org/manual/nixos/stable/index.html#module-borgbase) or
[the list of borgbackup
options](https://search.nixos.org/options?channel=20.09&query=services.borgbackup.jobs)
that you can pick from.

I hope this is able to help.
@ -1,156 +0,0 @@
---
title: How I Converted my Brain fMRI to a 3D Model
date: 2019-08-23
series: howto
tags:
- python
- blender
---

# How I Converted my Brain fMRI to a 3D Model

AUTHOR'S NOTE: I just want to start this out by saying I am not an expert, and
nothing in this blogpost should be construed as medical advice. I just wanted
to see what kind of pretty pictures I could get out of an fMRI data file.

So this week I flew out to Stanford to participate in a study that involved an
fMRI of my brain while I was doing some things. I asked for (and received) a
data file from the fMRI so I could play with it and possibly 3D print it. This
blogpost is the record of my journey through various software to get a fully
usable 3D model out of the fMRI data file.

## The Data File

I was given [christine_brain.nii.gz][firstniifile] by the researcher who was
operating the fMRI. I looked around for some software to convert it to a 3D
model and [/r/3dprinting][r3dprinting] suggested the use of [FreeSurfer][freesurfer]
to generate a 3D model. I downloaded and installed the software, then started
to look for something I could do in the meantime, as this was going to take
something on the order of 8 hours to process.

### An Animated GIF

I started looking for the file format on the internet by googling "nii.gz brain image"
and I stumbled across a program called [gif\_your\_nifti][gyn]. It looked to be
mostly pure Python, so I created a virtualenv and installed it in there:

```
$ git clone https://github.com/miykael/gif_your_nifti
$ cd gif_your_nifti
$ virtualenv -p python3 env
$ source env/bin/activate
(env) $ pip3 install -r requirements.txt
(env) $ python3 setup.py install
```

Then I ran it with the following settings to get [this first result][firstgif]:

```
(env) $ gif_your_nifti christine_brain.nii.gz --mode pseudocolor --cmap plasma
```

<center><video controls> <source src="https://xena.greedo.xeserv.us/files/christine-fmri-raw.mp4" type="video/mp4">A sideways view of the brain</video></center>

<small>(sorry the video embed isn't working in Safari)</small>

It looked weird though. That's because the fMRI scanner I used has a different
rotation to what's considered "normal". The gif\_your\_nifti repo mentioned a
program called `fslreorient2std` to reorient the fMRI image, so I set out to
install and run it.

### FSL

After some googling, I found [FSL's website][fsl], which included an installer
script and required registration.

37 gigabytes of downloads and data later, I had the entire FSL suite installed
on a server of mine and ran the conversion command:

```
$ fslreorient2std christine_brain.nii.gz christine_brain_reoriented.nii.gz
```

This produced a slightly smaller [reoriented file][secondniifile].

I reran gif\_your\_nifti on this reoriented file and got [this result][secondgif],
which looked a _lot_ better:

<center><video controls> <source src="https://xena.greedo.xeserv.us/files/christine-fmri-reoriented.mp4">A properly reoriented brain</video></center>

<small>(sorry again, the video embed isn't working in Safari)</small>

### FreeSurfer

By this time I had gotten back home and [FreeSurfer][freesurfer] was done installing,
so I registered for it (god bless the institution of None) and put its license key
in the place it expected. I copied the reoriented data file to my Mac, then
set up a `SUBJECTS_DIR` and had it start running the numbers and extracting the
brain surfaces:

```
$ cd ~/tmp
$ mkdir -p brain/subjects
$ cd brain
$ export SUBJECTS_DIR=$(pwd)/subjects
$ recon-all -i /path/to/christine_brain_reoriented.nii.gz -s christine -all
```

This step took 8 hours. Once it was done I had a bunch of data in
`$SUBJECTS_DIR/christine`. I opened my shell to that folder and went into the
`surf` subfolder:

```
$ mris_convert lh.pial lh.pial.stl
$ mris_convert rh.pial rh.pial.stl
```

Now I had standard STL files that I could stick into [Blender][blender].

### Blender

Importing the STL files was really easy. I clicked on File, then Import, then
Stl. After guiding the browser to the subjects directory and finding the STL
files, I got a view that looked something like this:

<center><blockquote class="twitter-tweet"><p lang="en" dir="ltr">BRAIN <a href="https://t.co/kGSrPj0kgP">pic.twitter.com/kGSrPj0kgP</a></p>&mdash; Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1164526098526478336?ref_src=twsrc%5Etfw">August 22, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></center>

I had absolutely no idea what to do from here in Blender, so I exported the
whole thing to an STL file and sent it to a coworker for 3D printing (he said
it was going to be "the coolest thing he's ever printed").

I also exported an Unreal Engine 4 compatible model and sent it to a friend of
mine who does hobbyist game development. A few hours later I got this back:

<center><blockquote class="twitter-tweet"><p lang="und" dir="ltr"><a href="https://t.co/fXnwnSpMry">pic.twitter.com/fXnwnSpMry</a></p>&mdash; Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1164714830630203393?ref_src=twsrc%5Etfw">August 23, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></center>

(Hint: it is a take on the famous [galaxy brain memes][galaxybrain])

## Conclusion

Overall, this was fun! I got to play with many gigabytes of software that ran
my most powerful machine at full blast for 8 hours, I made a fully printable 3D
model out of it and I have some future plans for importing this data into
Minecraft (the NIFTI `.nii.gz` format has a limit of _256 layers_).

I'll be sure to write more about this in the future!

## Citations

Here are my citations in [BibTex format][citations].

Special thanks goes to Michael Lifshitz for organizing the study that I
participated in that got me this fMRI data file. It was one of the coolest
things I've ever done (if not the coolest) and I'm going to be able to get a
3D printed model of my brain out of it.

[firstniifile]: https://xena.greedo.xeserv.us/files/christine_brain.nii.gz
[secondniifile]: https://xena.greedo.xeserv.us/files/christine_brain_reoriented.nii.gz
[r3dprinting]: https://www.reddit.com/r/3Dprinting/comments/2w0zxx/magnetic_resonance_image_nii_to_stl/
[freesurfer]: https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferWiki
[gyn]: https://github.com/miykael/gif_your_nifti
[firstgif]: /static/blog/christine-fmri-raw.mp4
[secondgif]: /static/blog/christine-fmri-reoriented.mp4
[fsl]: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/
[blender]: https://www.blender.org
[galaxybrain]: https://knowyourmeme.com/memes/expanding-brain
[citations]: /static/blog/brainfmri-to-3d-model.bib
@ -1,194 +0,0 @@
---
title: Advice to People Nurturing a Career in Computering
date: 2019-06-18
tags:
- career
---

# Advice to People Nurturing a Career in Computering

Computering, or making computers do things in exchange for money, can be a
surprisingly hard field to break into as an outsider. There's lots of jargon,
tool holy wars, flamewars about the "right" way to do things and a whole host
of overhead that can make it feel difficult or impossible when starting from
scratch. I'm a college dropout; I know what it's like to be turned down over
and over because of the lack of that blessed square paper. In this post I
hope to give some general advice based on what has and hasn't worked for me
over the years.

Hopefully this can help you too.

## Make a Portfolio Site

When you are breaking into the industry, there is a huge initial "brand" issue.
You're nobody. This is both a very good thing and a very bad thing. It's a very
good thing because you have a clean slate to start from. It's also a very bad
thing because you have nothing to refer to yourself with.

Part of establishing a brand for yourself in this day and age is to make a website
(like the one you are probably reading this off of right now). This website can
be powered by anything. [GitHub Pages](https://pages.github.com) with the `github.io`
domain works, but it's probably a better idea to make your website backend from scratch.
Your website should include at least the following things:

- Your name
- A few buzzwords relating to the kind of thing you'd like to do with computers (example: I have myself listed as a "Backend Services and Devops Specialist" which sounds really impressive yet doesn't really mean much of anything)
- Tools or soft skills you are experienced with
- Links to yourself on other social media platforms (GitHub, Twitter, LinkedIn, etc.)
- Links to or words about projects of yours that you are proud of
- Some contact information (an email address is a good idea too)

If you feel comfortable doing so, I'd also suggest putting your [resume](https://christine.website/resume)
on this site too. Even if it's just got your foodservice jobs or education
history (including your high school diploma if need be).

This website can then be used as a landing page for other things in the future
too. It's _your_ space on the internet. _You_ get to decide what's up there or
not.

## Make a Tech Blog On That Site

This has been the single biggest thing to help me grow professionally. I regularly
put [articles](https://christine.website/blog) on my blog, sometimes not even about
technology topics. Even if you are writing about your take on something people have
already written about, it's still good practice. Your early posts are going to be
rough. It's normal to not be an expert when starting out in a new skill.

This helps you stand out in the interview process. I've actually managed to skip
interviews with companies purely because of the contents of my blog. One of them
had the interviewer almost word for word say the following:

> I've read your blog, you don't need to prove technical understanding to me.

It was one of the most awestruck feelings I've ever had in the hiring process.

## Find People to Mentor You

Starting out, you are not going to be very skilled in anything. One good way you
can help yourself get good at things is to go out into communities and ask for
help understanding things. As you get involved in communities, naturally you will
end up finding people who are giving a lot of advice about things. Don't be
afraid to ask people for more details.

Get involved in niche communities (like unpopular Linux distros) and help them
out, even if it's just doing spellcheck over the documentation. This kind of
stuff really makes you stand out and people will remember it.

Formal mentorship is a very hard thing to try and define. It's probably better
to surround yourself with experts in various niche topics rather than looking
for that one magic mentor. Mentorship can be a very time-consuming thing on the
expert's side. Be thankful for what you can get and try to give back by helping
other people too.

Seriously though, don't be afraid to email or DM people for more information about
topics that don't make sense in group chats. I have found that people really
appreciate that kind of stuff, even if they don't immediately have the time to
respond in detail.

## Do Stuff with Computers, Post the Results Somewhere

Repository hosting sites like GitHub and GitLab allow you to show potential
employers exactly what you can do by example. Put your code up on them, even
if you think it's "bad" or the solution could have been implemented better by
someone more technically skilled. The best way to get experience in this industry
is by doing. The best way to do things is to just do them and then let other
people see the results.

Your first programs will be inelegant, but that's okay.
Your first repositories will be bloated or inefficient, but that's okay.
Nobody expects perfection out of the gate, and honestly even for skilled experts
perfection is probably too high of a bar. We're human. We make mistakes. Our job
is to turn the results of these mistakes into the products and services that
people rely on.

## You Don't Need 100% Of The Job Requirements

Many companies treat job requirements as soft guidelines, not hard ones. It's easy
to see requirements for jobs like this:

> Applicants must have:
>
> - 1 year managing a distributed Flopnax system
> - Experience using Rilkef across multiple regions
> - Ropjar, HTML/CSS

and feel really disheartened. That "must" there seldom actually is a hard
requirement. Many companies will be willing to hire someone at a junior
level. You can learn the skills you are missing as a natural part of doing your
job. There are support structures at nearly every company for things like this.
You don't need to be perfect out of the gate.

## Interviews

This one is a bit of a weird one to give advice for. Each company ends up having
its own interviewing style, and even then individual interviewers have their
own views on how to do it. My advice here tries to be as generic as possible.

### Know the Things You Have Listed on Your Resume

If you say you know how to use a language, brush up on that language. If you say
you know how to use a tool, be able to explain what that tool does and why
people should care about it.

Don't misrepresent your skills on your resume either. It's similar to lying. It's
also a good idea to go back and prune out skills you don't feel as fresh with over
time.

### Be Yourself

It's tempting to put on a persona or try to present yourself as larger than life.
Resist this temptation. They want to see _you_, not a caricature of yourself. It's
scary to do interviews at times. It feels like you are being judged. It's not
personal. Everything in interviews is aimed at making the best decision for the
company.

Also, don't be afraid to say you don't know things. You don't need to have API
documentation memorized. They aren't looking for that. API documentation will be
available to you while you write code at your job. Interviews are usually there
to help the interviewer verify that you know how to break larger problems into
more understandable chunks. Ask questions. Ensure you understand what they are
and are not asking you. Nearly every interview that I've had that's resulted in
a job offer has had me ask questions about what they are asking.

### "Do You Have Any Questions?"

A few things I've found work really well for this:

- "Do you know of anyone who left this company and then came back?"
- "What is your favorite part of your workday?"
- "What is your least favorite part of your workday?"
- "Do postmortems have formal blame as a part of the process?"
- "Does code get reviewed before it ships into production?"
- "Are there any employee run interest groups for things like mindfulness?"

And then finally as your last question:

- "What are the next steps?"

This question in particular tends to signal interest in the person interviewing
you. I don't completely understand why, but it seems to be one of the most
useful questions to ask, especially in initial interviews with hiring managers
or human resources.

### Meditate Before Interviews

Even if it's just [watching your breath for 5 minutes](https://when-then-zen.christine.website/meditation/anapana).
I find that doing this helps reset the mind and reduces subjective experiences of
anxiety.

## Persistence

Getting the first few real jobs is tough, but after you get a year or two at any
employer things get a lot easier. Your first job is going to give you a lot of
experience. You are going to learn things about things you didn't even think
would be possible to learn about. People, processes and the like are going to
surprise or shock you.

At the end of the day though, it's just a job. It's impermanent. You might not
fit in. You might have to find another. Don't panic about it, even though it's
really, really tempting to. You can always find another job.

---

I hope this is able to help. Thanks for reading this and be well.
@ -2,7 +2,6 @@
title: "Chaos Magick Debugging"
date: 2018-11-13
thanks: CelestialBoon
series: magick
---

# Chaos Magick Debugging
@ -1,73 +0,0 @@
|
|||
---
|
||||
title: Chicken Stir Fry
|
||||
date: 2020-04-13
|
||||
series: recipes
|
||||
tags:
|
||||
- instant-pot
|
||||
- pan
|
||||
- rice
|
||||
- garlic
|
||||
---
|
||||
|
||||
# Chicken Stir Fry
|
||||
|
||||
This recipe was made up by me and my fiancé. We just sorta winged it every time
|
||||
we made it until we found something that was easy to cook and tasty. We make
|
||||
this every week or so.
|
||||
|
||||
## Recipe
|
||||
|
||||
### Ingredients
|
||||
|
||||
- Pack of 4 chicken breasts
|
||||
- A fair amount of Montreal seasoning (garlic, onion, salt, oregano)
|
||||
- 3 cups basmati rice
|
||||
- 3.75 cups water
|
||||
- 1/4th bag of frozen stir fry vegetables
|
||||
- Avocado/coconut oil
|
||||
- Standard frying pan
|
||||
- Standard chef's knife
|
||||
- Standard 11x14 cutting board
|
||||
- Two metal bowls
|
||||
- Instant Pot
|
||||
- Spatula
|
||||
|
||||
### Seasoning
|
||||
|
||||
Put the seasoning in one of the bowls and unwrap the plastic around the chicken
|
||||
breasts. Take each chicken breast out of the package (you may need to cut them
|
||||
free of eachother, use a sharp knife for that) and rub all sides of it around in
|
||||
the seasoning.
|
||||
|
||||
Put these into the other metal bowl and when you've done all four, cover with
|
||||
plastic wrap and refrigerate for about 5-6 hours.
|
||||
|
||||
Doing this helps to have the chicken soak up the flavor of the seasoning so it
|
||||
tastes better when you cook it.
|
||||
|
||||
### Cooking
|
||||
|
||||
Slice two chicken breasts up kinda like
|
||||
[this](https://www.seriouseats.com/2014/04/knife-skills-how-to-slice-chicken-breast-for-stir-fries.html)
|
||||
and then transfer to the heated pan with oil in it. Cook those and flip them
|
||||
every few minutes until you've cooked everything all the way through (random
|
||||
sampling by cutting a bit of chicken in half with the spatula and seeing if it's
|
||||
overly juicy or not is a good way to tell; or, if you have a food thermometer,
|
||||
cook to 165 degrees Fahrenheit / 75 degrees Celsius). Put this chicken into a plastic
|
||||
container for use in other meals (it goes really well on sandwiches).
|
||||
|
||||
Then repeat the slicing and cooking for the last two chicken breasts. However,
|
||||
this time put _half_ of the chicken into the plastic container you used before
|
||||
(about one chicken breast worth in total, it doesn't have to be exact). At the
|
||||
same time as the second round of chicken is cooking, put about 3 cups of rice
|
||||
and 3.75 cups of water into the Instant Pot; then seal it and set it to manual
|
||||
for 4 minutes.
|
||||
|
||||
Dump frozen vegetables on top of the remainder of the chicken and stir until the
|
||||
vegetables are warm.
|
||||
|
||||
### Serving
|
||||
|
||||
Serve the stir fry hot on a bed of rice.
|
||||
|
||||
![image of the food](/static/blog/chicken-stir-fry.jpg)
|
|
@ -1,8 +1,6 @@
|
|||
---
|
||||
title: CinemaQuestria Orchestration
|
||||
date: 2015-03-13
|
||||
tags:
|
||||
- cinemaquestria
|
||||
---
|
||||
|
||||
CinemaQuestria Orchestration
|
||||
|
|
|
@ -1,8 +1,6 @@
|
|||
---
|
||||
title: Coding on an iPad
|
||||
date: 2018-04-14
|
||||
tags:
|
||||
- ipad
|
||||
---
|
||||
|
||||
# Coding on an iPad
|
||||
|
|
|
@ -1,88 +0,0 @@
|
|||
---
|
||||
title: Colemak Layout - First Week
|
||||
date: 2020-08-22
|
||||
series: colemak
|
||||
---
|
||||
|
||||
# Colemak Layout - First Week
|
||||
|
||||
A week ago I posted the last post in this series where I announced I was going
|
||||
all colemak all the time. I have not been measuring words per minute (to avoid
|
||||
psyching myself out), but so far my typing speed has gone from intolerably slow
|
||||
to manageably slow. I have only been dipping back into qwerty for two main
|
||||
things:
|
||||
|
||||
1. Passwords, specifically the ones I have in muscle memory
|
||||
2. Coding at work that needs to be done fast
|
||||
|
||||
Other than that, everything else has been in colemak. I have written DnD-style
|
||||
game notes, hacked at my own "Linux distro", started a few QMK keymaps and more
|
||||
all via colemak.
|
||||
|
||||
Here are some of the lessons I've learned:
|
||||
|
||||
## Let Your Coworkers Know You Are Going to Be Slow
|
||||
|
||||
This kind of thing is a long-term investment. In the short term, your
|
||||
productivity is going to crash through the floor. This will feel frustrating. It
|
||||
took me an entire workday to implement and test an HTTP handler and client in
|
||||
Go. You will be making weird typos. Let your coworkers know so they don't jump
|
||||
to the wrong conclusions too quickly.
|
||||
|
||||
Also, this goes without saying, but don't do this kind of change during crunch
|
||||
time. That's a bit of a dick move.
|
||||
|
||||
## Print Out the Layout
|
||||
|
||||
I have the layout printed and taped to my monitor and iPad stand. This helps a
|
||||
lot. Instead of looking at the keyboard, I look at the layout image and let my
|
||||
fingers drift into position.
|
||||
|
||||
I also have a blank keyboard at my desk; this helps because I can't look at the
|
||||
keycaps and become confused (however this has backfired with typing numbers,
|
||||
lol). This keyboard has Cherry MX Blue switches though, which means it can be loud when
|
||||
I get to typing up a storm.
|
||||
|
||||
## Have Friends Ask You What Layout You Are Using
|
||||
|
||||
Something that works for me is to have friends ask me what keyboard layout I am
|
||||
using, so I can be mindful of the change. I have a few people asking me that on
|
||||
the regular, so I can be accountable to them and myself.
|
||||
|
||||
## macOS and iPadOS have Colemak Out of the Box
|
||||
|
||||
The settings app lets you configure colemak input without having to jailbreak or
|
||||
install a custom keyboard layout. Take advantage of this.
|
||||
|
||||
Someone has also created a colemak package for Windows that includes an
|
||||
IA-64 (Itanium) binary. It was last updated in 2004 and still works without
|
||||
hassle on Windows 10. It was the first time I've ever seen an IA-64 Windows
|
||||
binary in the wild!
|
||||
|
||||
## Relearn How To Type Your Passwords
|
||||
|
||||
I type passwords from muscle memory. I have had to rediscover what they actually
|
||||
are so I can relearn how to type them.
|
||||
|
||||
---
|
||||
|
||||
The colemak experiment continues. I also have a [ZSA
|
||||
Moonlander](https://www.zsa.io/moonlander/) and the kit for a
|
||||
[GergoPlex](https://www.gboards.ca/product/gergoplex) coming in the mail. Both
|
||||
of these run [QMK](https://qmk.fm), which allows me to fully program them with a
|
||||
rich macro engine. Here are a few of the macros I plan to use:
|
||||
|
||||
```c
|
||||
// Programming
|
||||
SUBS(ifErr, "if err != nil {\n\t\n}", KC_E, KC_I)
|
||||
SUBS(goTest, "go test ./...\n", KC_G, KC_T)
|
||||
SUBS(cargoTest, "cargo test\n", KC_C, KC_T)
|
||||
```
|
||||
|
||||
This will autotype a few common things when I press the keys "ei", "gt", or "ct"
|
||||
at the same time. I plan to add a few more as things turn up so I can more
|
||||
quickly type common idioms or commands to save me time. The `if err != nil`
|
||||
combination started as a joke, but I bet it will end up being incredibly
|
||||
valuable.
|
||||
|
||||
Be well, take care of your hands.
|
|
@ -1,27 +0,0 @@
|
|||
---
|
||||
title: Colemak Layout - Beginning
|
||||
date: 2020-08-15
|
||||
series: colemak
|
||||
---
|
||||
|
||||
# Colemak Layout - Beginning
|
||||
|
||||
I write a lot. On average I write a few kilobytes of text per day. This has been
|
||||
adding up and is taking a huge toll on my hands, especially considering the
|
||||
Covid situation. Something needs to change. I've been working on learning a new
|
||||
keyboard layout: [Colemak](https://colemak.com).
|
||||
|
||||
This post will be shorter than most of my posts because I'm writing it with
|
||||
Colemak enabled on my iPad. Writing this is painfully slow at the moment. My
|
||||
sentences are short and choppy because those are easier to type.
|
||||
|
||||
I also have a [ZSA Moonlander](https://www.zsa.io/moonlander/) on the way; it
|
||||
should be here in October or November. I will also be sure to write about that
|
||||
once I get it in the mail.
|
||||
|
||||
So far, I have about 30 words per minute on the homerow, but once I go off the
|
||||
homerow the speed tanks to less than about five.
|
||||
|
||||
However, I am making progress!
|
||||
|
||||
Be well all, don't stress your hands out.
|
|
@ -1,8 +1,6 @@
|
|||
---
|
||||
title: Coming Out
|
||||
date: 2015-12-01
|
||||
tags:
|
||||
- personal
|
||||
---
|
||||
|
||||
Coming Out
|
||||
|
|
|
@ -1,80 +0,0 @@
|
|||
---
|
||||
title: Compile Stress Test
|
||||
date: 2019-10-03
|
||||
tags:
|
||||
- rust
|
||||
---
|
||||
|
||||
This is an experiment in blogging. I am going to be putting my tweets and select replies one after another without commentary.
|
||||
|
||||
<center>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">shitty synthetic benchmark idea: how long it takes for a compiler to handle a main function with 1.2 million instances of printf("hello, world!\n") or similar</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179467293232902144?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">fun fact, you need an AWS x1.16xlarge instance to compile 1.2 million lines of rust source code</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179524224479825921?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">oh god that might not be enough</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179526505505939458?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">oh god, is that what X1 is for???<br><br>My wallet just cringed.</p>— snake enchantress (@AstraLuma) <a href="https://twitter.com/AstraLuma/status/1179529430890405888?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">They have been now <a href="https://t.co/o5vMKx583C">https://t.co/o5vMKx583C</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179527054989111296?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="und" dir="ltr"> <a href="https://t.co/le8IFrFbQT">pic.twitter.com/le8IFrFbQT</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179527358388326401?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">TFW rust uses so much ram an x1.16xlarge can't compile hello world</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179527672579465218?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Let's go x1e.32xlarge!</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179533410165018627?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">hello world</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179533767284809729?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="ca" dir="ltr">Code generators</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179533982641340416?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="und" dir="ltr"> <a href="https://t.co/BwLhk9PIb3">pic.twitter.com/BwLhk9PIb3</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179534490017976321?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Rust can't match V for compile performance</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179535227376607232?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Finally can run two electron apps.</p>— Pradeep Gowda 🇮🇳🇺🇸 (@btbytes) <a href="https://twitter.com/btbytes/status/1179539282366734337?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="und" dir="ltr"><a href="https://t.co/Ez0t5BLT9i">pic.twitter.com/Ez0t5BLT9i</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179542623687790595?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">It stopped growing at 2.66 TB of ram!</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179544161973870592?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">overheard: "im paying this computer minimum wage to compile this god damn rust program"</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179545128366743552?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The guy who's paying for the instance in slack said it</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179546147892928515?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">you magnificent cursed unholy monster</p>— Astrid 🦋 (@floofstrid) <a href="https://twitter.com/floofstrid/status/1179546307972734976?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Just a simple rust program, only 9.88090622052428380708467040696522138519972064500917... × 10^361235 possible conditions</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179546397084913664?ref_src=twsrc%5Etfw">October 2, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">oh god it's still going <a href="https://t.co/SIZJBFTDHN">pic.twitter.com/SIZJBFTDHN</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179552296662962180?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Normal couples: watch tv together or something<br><br>me and my fiancé: watch someone try to compile a 1.2 million line of code rust function over slack</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179555137376980993?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">I guess <a href="https://twitter.com/hashtag/rust?src=hash&ref_src=twsrc%5Etfw">#rust</a> isn't production-ready, it can't compile hello world.</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179557450057474048?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">no swap used though <a href="https://t.co/2Qb0pXqIme">pic.twitter.com/2Qb0pXqIme</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179558236313329664?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">what the fuck is it doing <a href="https://t.co/2CuVKhUAsF">pic.twitter.com/2CuVKhUAsF</a></p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179559687144005632?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">SURVEY SAYS:<br><br>memcpy()!</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179561276898451456?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">01:01 (Cadey) dalias: this is basically 1.2 million instances of `printf("hello, world!\n");` in void main<br>01:01 (dalias) wtf</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179562382152142848?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">AWS x1e.32large</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179559826722037760?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">perf</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179553234660315138?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">It's down to 1.36 TB now</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179565615775920134?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">"back to 1.47T ram"<br>"oh no"<br>"1.49"<br>"oh it stopped"<br>"it's definitely still in mir dataflow"</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179569054870319104?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">The memory is increasing</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179569164220092417?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">"what stage is that?"<br>"denial?"</p>— Jaden Weiss (@CompuJad) <a href="https://twitter.com/CompuJad/status/1179570411668987904?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Lol it ran out of memory!<br><br>4 TB of ram isn't enough to build hello world in <a href="https://twitter.com/hashtag/rust?src=hash&ref_src=twsrc%5Etfw">#rust</a>!</p>— Cadey Ratio 🌐 (@theprincessxena) <a href="https://twitter.com/theprincessxena/status/1179582759133761536?ref_src=twsrc%5Etfw">October 3, 2019</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
|
||||
|
||||
</center>
|
||||
|
||||
Meanwhile the same thing in Go took 5 minutes and I was able to run it on my desktop instead of having to rent a server from AWS.
|
|
@ -1,9 +1,6 @@
|
|||
---
|
||||
title: "Crazy Experiment: Ship the Frontend as an asar document"
|
||||
date: "2017-01-09"
|
||||
tags:
|
||||
- asar
|
||||
- frontend
|
||||
---
|
||||
|
||||
Crazy Experiment: Ship the Frontend as an asar document
|
||||
|
|
|
@ -2,9 +2,6 @@
|
|||
title: "Creator's Code"
|
||||
author: Christine Dodrill
|
||||
date: 2018-09-17
|
||||
tags:
|
||||
- release
|
||||
- coc
|
||||
---
|
||||
|
||||
# [Creator's Code](https://github.com/Xe/creators-code)
|
||||
|
|
|
@ -1,7 +1,6 @@
|
|||
---
|
||||
title: My Experience Cursing Out God
|
||||
date: 2018-11-21
|
||||
series: dreams
|
||||
---
|
||||
|
||||
# My Experience Cursing Out God
|
||||
|
|
|
@ -2,7 +2,6 @@
|
|||
title: Death
|
||||
date: 2018-08-19
|
||||
thanks: Sygma
|
||||
series: magick
|
||||
---
|
||||
|
||||
# Death
|
||||
|
@ -11,7 +10,7 @@ Death is a very misunderstood card in Tarot, but not for the reasons you'd think
|
|||
|
||||
Tarot does not see death in this way. Death, the skeleton knight wearing armor, does not see color, race or creed, thus he is depicted as a skeleton. He is riding towards a child and another younger person. The sun is rising in the distance, but even it cannot stop Death. Nor can royalty, as shown by the king under him, dead.
|
||||
|
||||
![](/static/img/tarot_death.jpg)
|
||||
<center>![](/static/img/tarot_death.jpg)</center>
|
||||
|
||||
Death, however, does not actually refer to the act of a physical body physically dying. Death is a change that cannot be reverted. The consequences of this change can and will affect what comes next, however.
|
||||
|
||||
|
|
|
@ -1,100 +0,0 @@
|
|||
---
|
||||
title: Death Stranding Review
|
||||
date: 2019-11-11
|
||||
series: reviews
|
||||
tags:
|
||||
- kojima
|
||||
- video-game
|
||||
- what
|
||||
---
|
||||
|
||||
# Death Stranding Review
|
||||
|
||||
NOTE: There's gonna be spoilers here. Do not read if you are not okay with this.
|
||||
For a summary of the article without spoilers: this game is a 10 out of 10 and my
|
||||
game of the year for 2019.
|
||||
|
||||
I have also been playing through this game [on
|
||||
twitch](https://twitch.tv/princessxen) and have streams archived
|
||||
[here](https://xena.greedo.xeserv.us/files/kojima_unchained).
|
||||
|
||||
There's a long-standing rule of thumb to tell fiction apart from non-fiction.
|
||||
Fiction needs to make sense to the reader. Non-fiction does not. [Death
|
||||
Stranding](https://en.wikipedia.org/wiki/Death_Stranding) turns this paradigm on
|
||||
its head. It doesn't make sense out of the gate in the way most AAA games make
|
||||
sense.
|
||||
|
||||
In many AAA games it's very clear who the Big Bad is and who the John America
|
||||
is. John America defeats the Big Bad and spreads Freedom to the masses by force.
|
||||
In Death Stranding, you have no such preconceptions going into it. The first few
|
||||
hours are a chaotic mess of exposition without explanation. First there's a
|
||||
storm, then there's monsters, then there's a baby-powered locator, then you need
|
||||
to deliver stuff a-la fetch quests, then there's Monster energy drinks, and the
|
||||
main currency of this game is Facebook likes (that mean and do absolutely
|
||||
nothing).
|
||||
|
||||
In short, Death Stranding doesn't try to make sense. It leaves questions
|
||||
unanswered. And this is honestly so refreshing in a day and age where entire
|
||||
plot points and the like are spoiled in trailers before the game's release date
|
||||
is even announced. Questions like: what is going on? Why are there monsters?
|
||||
What is the point of the game? Why the hell are there Monster energy drinks in
|
||||
your private room and canteen? Death Stranding answers only some of these over
|
||||
the course of gameplay.
|
||||
|
||||
The core of the gameplay loop is delivering cargo from point a to point b across
|
||||
a ruined America after the apocalypse. The main character is an absolute unit of
|
||||
a lad, able to carry over 120 kilograms of cargo on his back. As more and more
|
||||
cargo stacks up you create these comically tall towers of luggage that make
|
||||
balancing very difficult. You can hold on for balance using both of the shoulder
|
||||
buttons. The game maps each shoulder button to an arm of the player character.
|
||||
There's also a stamina system, and while you are gripping the cargo your stamina
|
||||
regenerates much more slowly than if you weren't doing that.
|
||||
|
||||
The game makes you deliver almost everything you can think of, from medical aid
|
||||
to antimatter bombs. The antimatter bomb deliveries are really tricky because of
|
||||
how delicate they are. If you drop the antimatter bomb, it explodes and you
|
||||
instantly game over. If you hit a rock while carrying an antimatter bomb, it
|
||||
gets damaged. If it gets damaged too many times it explodes and you die. If it
|
||||
gets dropped into water it explodes and you die. And you have to carry the
|
||||
suckers over miles of terrain and even mountains.
|
||||
|
||||
This game handles scale very elegantly. The map is _huge_, even larger than
|
||||
Skyrim or Breath of the Wild. You are the UPS man who delivers packages,
|
||||
apocalypse be damned. This game gives you a lot of quiet downtime, which really
|
||||
lets you soak in the philosophical mindfuck that Kojima cooked up for us all. As
|
||||
you approach major cities, guitar and vocal music comes in and the other sound
|
||||
effects of the game quiet down. It overall creates a very sobering and solemn
|
||||
mood that I just can't get enough of. It seems like it wouldn't fit in a game
|
||||
where you use your own blood to defeat monsters and drink _monster energy_ out
|
||||
of your canteen, but it totally does.
|
||||
|
||||
There is some mild product placement. Your canteen is full of Monster energy
|
||||
drink. Yes, that Monster. Making the player defecate shows you an ad for an AMC
|
||||
show. There are also Monster energy drinks in your safe room that increase your
|
||||
max stamina for a bit. I'm not entirely sure if the product placement was chosen
|
||||
to be there for artistic reasons (it's surreal as all hell and helps to
|
||||
complement the confusing aspects of the game), but it's very non-intrusive and
|
||||
can be ignored with little risk.
|
||||
|
||||
This game also has online components. Every time you build a structure in areas
|
||||
linked to the chiral network, other players can use, interact with and upgrade
|
||||
them so they can do more things. Other players can also give you likes, which
|
||||
again do nothing. Upgrading a zipline makes it able to handle a larger distance
|
||||
or upgrading a safe house lets you play music when people walk by it. It really
|
||||
helps to build the motif of rebuilding America. There is however room for people
|
||||
to troll others. Here's [an example of
|
||||
this](https://twitter.com/Brad_barnaby/status/1193084242743312384). There's a
|
||||
troll ladder to nowhere. There are a lot of those lying around mountains, so be
|
||||
on your guard.
|
||||
|
||||
Overall, Death Stranding is a fantastic game. It's hard. It's unforgiving. But
|
||||
the real thing that advances is the skill of the player. You make the
|
||||
deliveries. You go the distance. You do your job as the post-apocalyptic UPS man
|
||||
that America needs.
|
||||
|
||||
![UPS Simulator 2019](/static/img/ups-simulator-2019.jpg)
|
||||
|
||||
By [mmmintdesign](https://twitter.com/mmmintdesign) [source](https://twitter.com/mmmintdesign/status/1192856164331114497)
|
||||
|
||||
Score: 10 out of 10
|
||||
Christine Dodrill's Game of the Year 2019
|
|
@ -1,8 +1,6 @@
|
|||
---
|
||||
title: "Deprecation Notice: Elemental-IRCd"
|
||||
date: 2019-02-11
|
||||
tags:
|
||||
- release
|
||||
---
|
||||
|
||||
# Deprecation Notice: Elemental-IRCd
|
||||
|
|
|
@ -1,8 +1,6 @@
|
|||
---
|
||||
title: Instant Development Environments in Docker
|
||||
date: 2014-10-24
|
||||
tags:
|
||||
- release
|
||||
---
|
||||
|
||||
Instant Development Environments in Docker
|
||||
|
|
|
@ -1,435 +0,0 @@
|
|||
---
|
||||
title: Dhall for Kubernetes
|
||||
date: 2020-01-25
|
||||
tags:
|
||||
- dhall
|
||||
- kubernetes
|
||||
- witchcraft
|
||||
---
|
||||
|
||||
# Dhall for Kubernetes
|
||||
|
||||
Kubernetes is a surprisingly complicated software package. Arguably, it has to
|
||||
be that complicated, given that the problems it solves are complicated; but
|
||||
managing yaml configuration files for Kubernetes is a complicated task of its own. [YAML][yaml]
|
||||
doesn't have support for variables or type metadata. This means that the
|
||||
validity (or sensibility) of a given Kubernetes configuration file (or files)
|
||||
isn't easy to figure out without using a Kubernetes server.
|
||||
|
||||
[yaml]: https://yaml.org
|
||||
|
||||
In my [last post][cultk8s] about Kubernetes, I mentioned I had developed a tool
|
||||
named [dyson][dyson] in order to help me manage Terraform as well as create
|
||||
Kubernetes manifests from [a template][template]. This works for the majority of
|
||||
my apps, but it is difficult to extend at this point for a few reasons:
|
||||
|
||||
[cultk8s]: https://christine.website/blog/the-cult-of-kubernetes-2019-09-07
|
||||
[dyson]: https://github.com/Xe/within-terraform/tree/master/dyson
|
||||
[template]: https://github.com/Xe/within-terraform/blob/master/dyson/src/dysonPkg/deployment_with_ingress.yaml
|
||||
|
||||
- It assumes that everything passed to it is already a valid YAML term
|
||||
- It doesn't assert the type of any values passed to it
|
||||
- It is difficult to add another container to a given deployment
|
||||
- Environment variables implicitly depend on the presence of a private git repo
|
||||
- It depends on the template being correct more than the output being correct
|
||||
|
||||
So, this won't scale. People in the community have created other solutions for
|
||||
this like [Helm][helm], but a lot of them have some of the same basic problems.
|
||||
Helm also assumes that your template is correct. [Kustomize][kustomize] does
|
||||
help with a lot of the type-safe variable replacements, but it doesn't have the
|
||||
ability to ensure your manifest is valid.
|
||||
|
||||
[helm]: https://helm.sh
|
||||
[kustomize]: https://kustomize.io
|
||||
|
||||
I looked around for alternate solutions for a while and eventually found
|
||||
[Dhall][dhall] thanks to a friend. Dhall is a _statically typed_ configuration
|
||||
language. This means that you can ensure that inputs are _always_ the correct
|
||||
type or the configuration file won't load. There's also a built-in
|
||||
[dhall-to-yaml][dhallyaml] tool that can be used with the [Kubernetes
|
||||
package][dhallk8s] in order to declare Kubernetes manifests in a type-safe way.
|
||||
|
||||
[dhall]: https://dhall-lang.org
|
||||
[dhallyaml]: https://github.com/dhall-lang/dhall-haskell/tree/master/dhall-yaml#dhall-yaml
|
||||
[dhallk8s]: https://github.com/dhall-lang/dhall-kubernetes
|
||||
|
||||
Here's a small example of Dhall and the yaml it generates:
|
||||
|
||||
```dhall
|
||||
-- Mastodon usernames
|
||||
[ { name = "Cadey", mastodon = "@cadey@mst3k.interlinked.me" }
|
||||
, { name = "Nicole", mastodon = "@sharkgirl@mst3k.interlinked.me" }
|
||||
]
|
||||
```
|
||||
|
||||
Which produces:
|
||||
|
||||
```yaml
|
||||
- mastodon: "@cadey@mst3k.interlinked.me"
|
||||
name: Cadey
|
||||
- mastodon: "@sharkgirl@mst3k.interlinked.me"
|
||||
name: Nicole
|
||||
```
|
||||
|
||||
Which is fine, but we still have the type-safety problem that you would have in
|
||||
normal yaml. Dhall lets us define [record types][dhallrecord] for this data like
|
||||
this:
|
||||
|
||||
[dhallrecord]: http://www.haskellforall.com/2020/01/dhall-year-in-review-2019-2020.html
|
||||
|
||||
```dhall
|
||||
let User =
|
||||
{ Type = { name : Text, mastodon : Optional Text }
|
||||
      , default = { name = "", mastodon = None Text }
|
||||
}
|
||||
|
||||
let users =
|
||||
[ User::{ name = "Cadey", mastodon = Some "@cadey@mst3k.interlinked.me" }
|
||||
, User::{
|
||||
, name = "Nicole"
|
||||
, mastodon = Some "@sharkgirl@mst3k.interlinked.me"
|
||||
}
|
||||
]
|
||||
|
||||
in users
|
||||
```
|
||||
|
||||
Which produces:
|
||||
|
||||
```yaml
|
||||
- mastodon: "@cadey@mst3k.interlinked.me"
|
||||
name: Cadey
|
||||
- mastodon: "@sharkgirl@mst3k.interlinked.me"
|
||||
name: Nicole
|
||||
```
|
||||
|
||||
This is type-safe because you cannot add arbitrary fields to User instances
|
||||
without the compiler rejecting it. Let's add an invalid "preferred_language"
|
||||
field to Cadey's instance:
|
||||
|
||||
```dhall
|
||||
-- ...
|
||||
let users =
|
||||
[ User::{
|
||||
, name = "Cadey"
|
||||
, mastodon = Some "@cadey@mst3k.interlinked.me"
|
||||
, preferred_language = "en-US"
|
||||
}
|
||||
-- ...
|
||||
]
|
||||
```
|
||||
|
||||
Which gives us:
|
||||
|
||||
```console
|
||||
$ dhall-to-yaml --file example.dhall
|
||||
Error: Expression doesn't match annotation
|
||||
|
||||
{ + preferred_language : …
|
||||
, …
|
||||
}
|
||||
|
||||
4│ User::{ name = "Cadey", mastodon = Some "@cadey@mst3k.interlinked.me",
|
||||
5│ preferred_language = "en-US" }
|
||||
|
||||
example.dhall:4:9
|
||||
```
|
||||
|
||||
You can get [this more detailed explanation][explanation] by adding the
`--explain` flag to the `dhall-to-yaml` call.
|
||||
|
||||
[explanation]: https://clbin.com/JtVWT
|
||||
|
||||
We tried to do something that violated the contract that the type specified.
|
||||
This means that it's an invalid configuration and is therefore rejected and no
|
||||
yaml file is created.
|
||||
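If the field is genuinely wanted, the fix is to extend the type itself rather than the instance. As a sketch (not from the original example), adding `preferred_language` to the `User` record type makes the rejected instance legal:

```dhall
-- Hypothetical extension of the User type; the new field is an assumption
let User =
      { Type =
          { name : Text
          , mastodon : Optional Text
          , preferred_language : Optional Text
          }
      , default =
          { name = ""
          , mastodon = None Text
          , preferred_language = None Text
          }
      }

in  User::{
    , name = "Cadey"
    , mastodon = Some "@cadey@mst3k.interlinked.me"
    , preferred_language = Some "en-US"
    }
```

Because the type now declares the field, `dhall-to-yaml` accepts it, and instances that omit it fall back to the `None Text` default.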
|
||||
The Dhall Kubernetes package specifies record types for _every_ object available
|
||||
by default in Kubernetes. This does mean that the package is incredibly large,
|
||||
but it also makes sure that _everything_ you could possibly want to do in
|
||||
Kubernetes matches what it expects. In the [package
|
||||
documentation][k8sdhalldocs], they give an example where a
|
||||
[Deployment][k8sdeployment] is created.
|
||||
|
||||
[k8sdhalldocs]: https://github.com/dhall-lang/dhall-kubernetes/tree/master/1.15#quickstart---a-simple-deployment
|
||||
[k8sdeployment]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
|
||||
|
||||
``` dhall
|
||||
-- examples/deploymentSimple.dhall
|
||||
|
||||
-- Importing other files is done by specifying the HTTPS URL/disk location of
|
||||
-- the file. Attaching a sha256 hash (obtained with `dhall freeze`) allows
|
||||
-- the Dhall compiler to cache these files and speed up configuration loads
|
||||
-- drastically.
|
||||
let kubernetes =
|
||||
https://raw.githubusercontent.com/dhall-lang/dhall-kubernetes/1.15/master/package.dhall
|
||||
sha256:4bd5939adb0a5fc83d76e0d69aa3c5a30bc1a5af8f9df515f44b6fc59a0a4815
|
||||
|
||||
let deployment =
|
||||
kubernetes.Deployment::{
|
||||
, metadata = kubernetes.ObjectMeta::{ name = "nginx" }
|
||||
, spec =
|
||||
Some
|
||||
kubernetes.DeploymentSpec::{
|
||||
, replicas = Some 2
|
||||
, template =
|
||||
kubernetes.PodTemplateSpec::{
|
||||
, metadata = kubernetes.ObjectMeta::{ name = "nginx" }
|
||||
, spec =
|
||||
Some
|
||||
kubernetes.PodSpec::{
|
||||
, containers =
|
||||
[ kubernetes.Container::{
|
||||
, name = "nginx"
|
||||
, image = Some "nginx:1.15.3"
|
||||
, ports =
|
||||
[ kubernetes.ContainerPort::{
|
||||
, containerPort = 80
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
in deployment
|
||||
```
|
||||
|
||||
Which creates the following yaml:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx:1.15.3
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
```
|
||||
|
||||
Dhall's lambda functions can help you break this into manageable chunks. For
|
||||
example, here's a Dhall function that helps create a docker image reference:
|
||||
|
||||
```dhall
|
||||
let formatImage
|
||||
: Text -> Text -> Text
|
||||
= \(repository : Text) -> \(tag : Text) ->
|
||||
"${repository}:${tag}"
|
||||
|
||||
in formatImage "xena/christinewebsite" "latest"
|
||||
```
|
||||
|
||||
Which outputs `xena/christinewebsite:latest` when passed to `dhall text`.
|
||||
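Functions can return full records too, not just `Text`. As a hedged sketch (assuming the dhall-kubernetes package is in scope as `kubernetes`, and the helper name is made up), a function that wraps a port number in a `ContainerPort`:

```dhall
-- Hypothetical helper; assumes ./kubernetes.dhall points at dhall-kubernetes
let kubernetes = ./kubernetes.dhall

let makePort
    : Natural -> kubernetes.ContainerPort.Type
    = \(port : Natural) -> kubernetes.ContainerPort::{ containerPort = port }

in  makePort 80
```

Helpers like this let the repetitive parts of a manifest be written once and reused with different arguments.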
|
||||
All of this adds up into a powerful toolset that lets you express Kubernetes
|
||||
configuration in a way that does what you want without as many headaches.
|
||||
|
||||
Most of my apps on Kubernetes need only a few generic bits of configuration:
|
||||
|
||||
- Their name
|
||||
- What port should be exposed
|
||||
- The domain that this service should be exposed on
|
||||
- How many replicas of the service are needed
|
||||
- Which Let's Encrypt Issuer to use (currently only `"prod"` or `"staging"`)
|
||||
- The [configuration variables of the service][12factorconfig]
|
||||
- Any other containers that may be needed for the service
|
||||
|
||||
[12factorconfig]: https://12factor.net/config
|
||||
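The configuration variables end up as plain Kubernetes `EnvVar` values. A hedged sketch of what that list might look like (the variable names here are made up, and `./kubernetes.dhall` is assumed to point at the dhall-kubernetes package):

```dhall
-- Hypothetical environment for a service
let kubernetes = ./kubernetes.dhall

in  [ kubernetes.EnvVar::{ name = "PORT", value = Some "5000" }
    , kubernetes.EnvVar::{ name = "RUST_LOG", value = Some "info" }
    ]
```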
|
||||
From here, I defined all of the [bits and pieces][kubermemeshttp] for the
|
||||
Kubernetes manifests that Dyson produces and then created a `Config` type that
|
||||
helps to template them out. Here's my [`Config` type
|
||||
definition][configdefinition]:
|
||||
|
||||
[kubermemeshttp]: https://tulpa.dev/cadey/kubermemes/src/branch/master/k8s/http
|
||||
[configdefinition]: https://tulpa.dev/cadey/kubermemes/src/branch/master/k8s/app/config.dhall
|
||||
|
||||
```dhall
|
||||
let kubernetes = ../kubernetes.dhall
|
||||
|
||||
in { Type =
|
||||
{ name : Text
|
||||
, appPort : Natural
|
||||
, image : Text
|
||||
, domain : Text
|
||||
, replicas : Natural
|
||||
, leIssuer : Text
|
||||
, envVars : List kubernetes.EnvVar.Type
|
||||
, otherContainers : List kubernetes.Container.Type
|
||||
}
|
||||
, default =
|
||||
{ name = ""
|
||||
, appPort = 5000
|
||||
, image = ""
|
||||
, domain = ""
|
||||
, replicas = 1
|
||||
, leIssuer = "staging"
|
||||
, envVars = [] : List kubernetes.EnvVar.Type
|
||||
, otherContainers = [] : List kubernetes.Container.Type
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Then I defined a `makeApp` function that creates everything I need to deploy my
|
||||
stuff on Kubernetes:
|
||||
|
||||
```dhall
|
||||
let Prelude = ../Prelude.dhall
|
||||
|
||||
let kubernetes = ../kubernetes.dhall
|
||||
|
||||
let typesUnion = ../typesUnion.dhall
|
||||
|
||||
let deployment = ../http/deployment.dhall
|
||||
|
||||
let ingress = ../http/ingress.dhall
|
||||
|
||||
let service = ../http/service.dhall
|
||||
|
||||
let Config = ../app/config.dhall
|
||||
|
||||
let K8sList = ../app/list.dhall
|
||||
|
||||
let buildService =
|
||||
\(config : Config.Type)
|
||||
-> let myService = service config
|
||||
|
||||
let myDeployment = deployment config
|
||||
|
||||
let myIngress = ingress config
|
||||
|
||||
in K8sList::{
|
||||
, items =
|
||||
[ typesUnion.Service myService
|
||||
, typesUnion.Deployment myDeployment
|
||||
, typesUnion.Ingress myIngress
|
||||
]
|
||||
}
|
||||
|
||||
in buildService
|
||||
```
|
||||
|
||||
And used it to deploy the [h language website][hlang]:
|
||||
|
||||
[hlang]: https://h.christine.website
|
||||
|
||||
```dhall
|
||||
let makeApp = ../app/make.dhall
|
||||
|
||||
let Config = ../app/config.dhall
|
||||
|
||||
let cfg =
|
||||
Config::{
|
||||
, name = "hlang"
|
||||
, appPort = 5000
|
||||
, image = "xena/hlang:latest"
|
||||
, domain = "h.christine.website"
|
||||
, leIssuer = "prod"
|
||||
}
|
||||
|
||||
in makeApp cfg
|
||||
```
|
||||
|
||||
Which produces the following Kubernetes config:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
items:
|
||||
- apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
annotations:
|
||||
external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
|
||||
external-dns.alpha.kubernetes.io/hostname: h.christine.website
|
||||
external-dns.alpha.kubernetes.io/ttl: "120"
|
||||
labels:
|
||||
app: hlang
|
||||
name: hlang
|
||||
namespace: apps
|
||||
spec:
|
||||
ports:
|
||||
- port: 5000
|
||||
targetPort: 5000
|
||||
selector:
|
||||
app: hlang
|
||||
type: ClusterIP
|
||||
- apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: hlang
|
||||
namespace: apps
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: hlang
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: hlang
|
||||
name: hlang
|
||||
spec:
|
||||
containers:
|
||||
- image: xena/hlang:latest
|
||||
imagePullPolicy: Always
|
||||
name: web
|
||||
ports:
|
||||
- containerPort: 5000
|
||||
imagePullSecrets:
|
||||
- name: regcred
|
||||
- apiVersion: networking.k8s.io/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
|
||||
kubernetes.io/ingress.class: nginx
|
||||
labels:
|
||||
app: hlang
|
||||
name: hlang
|
||||
namespace: apps
|
||||
spec:
|
||||
rules:
|
||||
- host: h.christine.website
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: hlang
|
||||
servicePort: 5000
|
||||
tls:
|
||||
- hosts:
|
||||
- h.christine.website
|
||||
secretName: prod-certs-hlang
|
||||
kind: List
|
||||
```
|
||||
|
||||
And when I applied it on my Kubernetes cluster, it worked the first time and had
|
||||
absolutely no effect on the existing configuration.
|
||||
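The `otherContainers` field is what lets me bolt a sidecar onto a deployment without touching the templates. A hypothetical example (the `metrics-proxy` container and its image are made up for illustration):

```dhall
let makeApp = ../app/make.dhall

let Config = ../app/config.dhall

let kubernetes = ../kubernetes.dhall

in  makeApp
      Config::{
      , name = "hlang"
      , image = "xena/hlang:latest"
      , domain = "h.christine.website"
      , leIssuer = "prod"
      , otherContainers =
        [ kubernetes.Container::{
          , name = "metrics-proxy"
          , image = Some "xena/metrics-proxy:latest"
          , ports = [ kubernetes.ContainerPort::{ containerPort = 9090 } ]
          }
        ]
      }
```

The extra container is type-checked the same way as everything else, so a typo in its definition fails at generation time instead of at deploy time.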
|
||||
In the future, I hope to expand this to allow for multiple deployments (IE: a
|
||||
chatbot running in a separate deployment than a web API the chatbot depends on
|
||||
or non-web projects in general) as well as supporting multiple Kubernetes
|
||||
namespaces.
|
||||
|
||||
Dhall is probably the most viable replacement for Helm or other Kubernetes
|
||||
templating tools I have found in recent memory. I hope that it will be used by
|
||||
more people to help with configuration management, but I can understand that
|
||||
that may not happen. At least it works for me.
|
||||
|
||||
If you want to learn more about Dhall, I suggest checking out the following
|
||||
links:
|
||||
|
||||
- [The Dhall Language homepage](https://dhall-lang.org)
|
||||
- [Learn Dhall in Y Minutes](https://learnxinyminutes.com/docs/dhall/)
|
||||
- [The Dhall Language GitHub Organization](https://github.com/dhall-lang)
|
||||
|
||||
I hope this was helpful and interesting. Be well.
|
|
---
|
||||
title: "Don't Look Into the Light"
|
||||
date: 2019-10-06
|
||||
tags:
|
||||
- practices
|
||||
- big-rewrite
|
||||
---
|
||||
|
||||
# Don’t Look Into the Light
|
||||
|
||||
So at a previous job, we maintained a system. This system
|
||||
powered a significant part of the core of how the product was actually used (as
|
||||
far as usage metrics reported). Over time, we had bolted something onto the side
|
||||
of this product to take actions based on the numbers the product was tracking.
|
||||
|
||||
After a few years of cycling through various people, this system was very hard
|
||||
to understand. Data would flow in on one end, go to an aggregation layer, then
|
||||
get sent to storage and another aggregation layer, and then eventually all of
|
||||
the metrics were calculated. This system was fairly expensive to operate and it
|
||||
was stressing the datastores it relied on beyond what other companies called
|
||||
_theoretical_ limits. Oh, to make things even more fun: the part that takes
|
||||
actions based on the data was barely keeping up with what it needed to do. It
|
||||
was supposed to run each of the checks once a minute and was running all of them
|
||||
in 57 seconds.
|
||||
|
||||
During a planning meeting we started to complain about the state of the world
|
||||
and how godawful everything had become. The undocumented (and probably
|
||||
undocumentable) organic nature of the system had gotten out of hand. We thought
|
||||
we could kill two birds with one stone and wanted to subsume another product
|
||||
that took action based on data, as well as create a generic platform to
|
||||
reimplement the older action-taking layer on top of.
|
||||
|
||||
The rules were set, the groundwork was laid. We decided:
|
||||
|
||||
* This would be a Big Rewrite based on all of the lessons we had learned from
|
||||
the past operating the behemoth
|
||||
* This project would be future-proof
|
||||
* This project would have 75% test coverage as reported by CI
|
||||
* This project would be built with a microservices architecture
|
||||
|
||||
Those of you who have been down this road before probably have massive alarm
|
||||
bells going off in your head. This is one of those things that looks like a good
|
||||
idea on paper, can probably be passed off as a good idea to management and
|
||||
actually implemented; as happened here.
|
||||
|
||||
So we set off on our quest to write this software. The repo was created. CI was
|
||||
configured. The scripts were optimized to dump out code coverage as output. We
|
||||
strived to document everything on day 1. We took advantage of the datastore we
|
||||
were using. Everything was looking great.
|
||||
|
||||
Then the product team came in and noticed fresh meat. They soon realized that
|
||||
this could be a Big Thing to customers, and they wanted to get in on it as soon
|
||||
as possible. So we suddenly had our deadlines pushed forward and needed to get
|
||||
the whole thing into testing yesterday.
|
||||
|
||||
We set it up, set a trigger for a task, and it worked in testing. After a while
|
||||
of it consistently doing that with the continuous functional testing tooling, we
|
||||
told product it was okay to have a VERY LIMITED set of customers have at it.
|
||||
|
||||
That was a mistake. It fell apart the second customers touched it. We struggled
|
||||
to understand why. We dug into the core of the beast we had just created and
|
||||
managed to discover we made critical fundamental errors. The heart of the task
|
||||
matching code was this monstrosity of a cross join that took the other people on
|
||||
the team a few sheets of graph paper to break down and understand. The task
|
||||
execution layer worked perfectly in testing, but almost never in production.
|
||||
|
||||
And after a week of solid debugging (including making deals with other teams,
|
||||
satan, jesus and the pope to try and understand it), we had made no progress. It
|
||||
was almost as if there was some kind of gremlin in the code that was just
|
||||
randomly making things not fire if it wasn’t one of our internal users
|
||||
triggering it.
|
||||
|
||||
We had to apologize to the product team. Apparently a lot of the product team
had to go on damage control as a result of this. I can only imagine the
|
||||
trickled-down impact this had on other projects internal to the company.
|
||||
|
||||
The lesson here is threefold. First, the Big Rewrite is almost a sure-fire way
|
||||
to ensure a project fails. Avoid that temptation. Don’t look into the light. It
|
||||
looks nice, it may even feel nice. Statistically speaking, it’s not nice when
|
||||
you get to the other side of it.
|
||||
|
||||
The second lesson is that building something as microservices out of the gate is a
|
||||
terrible idea. Microservices architectures are not planned. They are an
|
||||
evolutionary result, not a fully anticipated feature.
|
||||
|
||||
Finally, don’t “design for the future”. The future [hasn’t happened
|
||||
yet](https://christine.website/blog/all-there-is-is-now-2019-05-25). Nobody
|
||||
knows how it’s going to turn out. The future is going to happen, and you can
|
||||
either adapt to it as it happens in the Now or fail to. Don’t make things overly
|
||||
modular, that leads to insane things like dynamically linking parts of an
|
||||
application over HTTP.
|
||||
|
||||
> If you 'future proof' a system you build today, chances are when the future
|
||||
> arrives the system will be unmaintainable or incomprehensible.
|
||||
\- [John Murphy](https://twitter.com/murphybytes/status/1180131195537039360)
|
||||
|
||||
---
|
||||
|
||||
This kind of advice is probably gonna feel like a slap to the face to a lot of
|
||||
people. People really put their heart into their work. It feeds egos massively.
|
||||
It can be very painful to have to say no to something someone is really
|
||||
passionate about. It can even lead to people changing their career plans
|
||||
depending on the person.
|
||||
|
||||
But this is the truth of the matter as far as I can tell. This is generally what
|
||||
happens during the Big Rewrite centred around Best Practices for Cloud Native
|
||||
software.
|
||||
|
||||
The most successful design decisions are wholly and utterly subjective to every
|
||||
kind of project you come across. What works in system A probably won’t work
|
||||
perfectly in system B. Everything is its own unique snowflake. Embrace this.
|
|
---
|
||||
title: Continuous Deployment to Kubernetes with Gitea and Drone
|
||||
date: 2020-07-10
|
||||
series: howto
|
||||
tags:
|
||||
- nix
|
||||
- kubernetes
|
||||
- drone
|
||||
- gitea
|
||||
---
|
||||
|
||||
# Continuous Deployment to Kubernetes with Gitea and Drone
|
||||
|
||||
Recently I put a complete rewrite of [the printerfacts
|
||||
server](https://printerfacts.cetacean.club) into service based on
|
||||
[warp](https://github.com/seanmonstar/warp). I have it set up to automatically
|
||||
be deployed to my Kubernetes cluster on every commit to [its source
|
||||
repo](https://tulpa.dev/cadey/printerfacts). I'm going to explain how this works
|
||||
and how I set it up.
|
||||
|
||||
## Nix
|
||||
|
||||
One of the first elements in this is [Nix](https://nixos.org/nix). I use Nix to
|
||||
build reproducible docker images of the printerfacts server, as well as managing
|
||||
my own developer tooling locally. I also pull in the following packages from
|
||||
GitHub:
|
||||
|
||||
- [naersk](https://github.com/nmattia/naersk) - an automagic builder for Rust
|
||||
crates that is friendly to the nix store
|
||||
- [gruvbox-css](https://github.com/Xe/gruvbox-css) - the CSS file that the
|
||||
printerfacts service uses
|
||||
- [nixpkgs](https://github.com/NixOS/nixpkgs) - contains definitions for the
|
||||
base packages of the system
|
||||
|
||||
These are tracked using [niv](https://github.com/nmattia/niv), which allows me
|
||||
to store these dependencies in the global nix store for free. This lets them be
|
||||
reused and deduplicated as they need to be.
|
||||
|
||||
Next, I made a build script for the printerfacts service that builds on top of
|
||||
these in `printerfacts.nix`:
|
||||
|
||||
```nix
|
||||
{ sources ? import ./nix/sources.nix, pkgs ? import <nixpkgs> { } }:
|
||||
let
|
||||
srcNoTarget = dir:
|
||||
builtins.filterSource
|
||||
(path: type: type != "directory" || builtins.baseNameOf path != "target")
|
||||
dir;
|
||||
src = srcNoTarget ./.;
|
||||
|
||||
naersk = pkgs.callPackage sources.naersk { };
|
||||
gruvbox-css = pkgs.callPackage sources.gruvbox-css { };
|
||||
|
||||
pfacts = naersk.buildPackage {
|
||||
inherit src;
|
||||
remapPathPrefix = true;
|
||||
};
|
||||
in pkgs.stdenv.mkDerivation {
|
||||
inherit (pfacts) name;
|
||||
inherit src;
|
||||
phases = "installPhase";
|
||||
|
||||
installPhase = ''
|
||||
mkdir -p $out/static
|
||||
|
||||
cp -rf $src/templates $out/templates
|
||||
cp -rf ${pfacts}/bin $out/bin
|
||||
cp -rf ${gruvbox-css}/gruvbox.css $out/static/gruvbox.css
|
||||
'';
|
||||
}
|
||||
```
|
||||
|
||||
And finally a simple docker image builder in `default.nix`:
|
||||
|
||||
```nix
|
||||
{ system ? builtins.currentSystem }:
|
||||
|
||||
let
|
||||
sources = import ./nix/sources.nix;
|
||||
pkgs = import <nixpkgs> { };
|
||||
printerfacts = pkgs.callPackage ./printerfacts.nix { };
|
||||
|
||||
name = "xena/printerfacts";
|
||||
tag = "latest";
|
||||
|
||||
in pkgs.dockerTools.buildLayeredImage {
|
||||
inherit name tag;
|
||||
contents = [ printerfacts ];
|
||||
|
||||
config = {
|
||||
Cmd = [ "${printerfacts}/bin/printerfacts" ];
|
||||
Env = [ "RUST_LOG=info" ];
|
||||
WorkingDir = "/";
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
This creates a docker image with only the printerfacts service in it and any
|
||||
dependencies that are absolutely required for the service to function. Each
|
||||
dependency is also split into its own docker layer so that it is much more
|
||||
efficient on docker caches, which translates into faster start times on existing
|
||||
servers. Here are the layers needed for the printerfacts service to function:
|
||||
|
||||
- [libunistring](https://www.gnu.org/software/libunistring/) - Unicode-safe
|
||||
string manipulation library
|
||||
- [libidn2](https://www.gnu.org/software/libidn/) - An internationalized domain
|
||||
name decoder
|
||||
- [glibc](https://www.gnu.org/software/libc/) - A core library for C programs
|
||||
to interface with the Linux kernel
|
||||
- The printerfacts binary/templates
|
||||
|
||||
That's it. It packs all of this into an image that is 13 megabytes when
|
||||
compressed.
|
||||
|
||||
## Drone
|
||||
|
||||
Now that we have a way to make a docker image, let's look how I use
|
||||
[drone.io](https://drone.io) to build and push this image to the [Docker
|
||||
Hub](https://hub.docker.com/repository/docker/xena/printerfacts/tags).
|
||||
|
||||
I have a drone manifest that looks like
|
||||
[this](https://tulpa.dev/cadey/printerfacts/src/branch/master/.drone.yml):
|
||||
|
||||
```yaml
|
||||
kind: pipeline
|
||||
name: docker
|
||||
steps:
|
||||
- name: build docker image
|
||||
image: "monacoremo/nix:2020-04-05-05f09348-circleci"
|
||||
environment:
|
||||
USER: root
|
||||
commands:
|
||||
- cachix use xe
|
||||
- nix-build
|
||||
- cp $(readlink result) /result/docker.tgz
|
||||
volumes:
|
||||
- name: image
|
||||
path: /result
|
||||
|
||||
- name: push docker image
|
||||
image: docker:dind
|
||||
volumes:
|
||||
- name: image
|
||||
path: /result
|
||||
- name: dockersock
|
||||
path: /var/run/docker.sock
|
||||
commands:
|
||||
- docker load -i /result/docker.tgz
|
||||
- docker tag xena/printerfacts:latest xena/printerfacts:$DRONE_COMMIT_SHA
|
||||
- echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
|
||||
- docker push xena/printerfacts:$DRONE_COMMIT_SHA
|
||||
environment:
|
||||
DOCKER_USERNAME: xena
|
||||
DOCKER_PASSWORD:
|
||||
from_secret: DOCKER_PASSWORD
|
||||
|
||||
- name: kubernetes release
|
||||
image: "monacoremo/nix:2020-04-05-05f09348-circleci"
|
||||
environment:
|
||||
USER: root
|
||||
DIGITALOCEAN_ACCESS_TOKEN:
|
||||
from_secret: DIGITALOCEAN_ACCESS_TOKEN
|
||||
commands:
|
||||
- nix-env -i -f ./nix/dhall.nix
|
||||
- ./scripts/release.sh
|
||||
|
||||
volumes:
|
||||
- name: image
|
||||
temp: {}
|
||||
- name: dockersock
|
||||
host:
|
||||
path: /var/run/docker.sock
|
||||
```
|
||||
|
||||
This is a lot, so let's break it up into the individual parts.
|
||||
|
||||
### Configuration
|
||||
|
||||
Drone steps normally don't have access to a docker daemon, privileged mode or
|
||||
host-mounted paths. I configured the
|
||||
[cadey/printerfacts](https://drone.tulpa.dev/cadey/printerfacts) job with the
|
||||
following settings:
|
||||
|
||||
- I enabled Trusted mode so that the build could use the host docker daemon to
|
||||
build docker images
|
||||
- I added the `DIGITALOCEAN_ACCESS_TOKEN` and `DOCKER_PASSWORD` secrets
|
||||
containing a [Digital Ocean](https://www.digitalocean.com/) API token and a
|
||||
Docker hub password
|
||||
|
||||
I then set up the `volumes` block to create a few things:
|
||||
|
||||
```yaml
|
||||
volumes:
|
||||
- name: image
|
||||
temp: {}
|
||||
- name: dockersock
|
||||
host:
|
||||
path: /var/run/docker.sock
|
||||
```
|
||||
|
||||
- A temporary folder to store the docker image after Nix builds it
|
||||
- The docker daemon socket from the host
|
||||
|
||||
Now we can get to the building the docker image.
|
||||
|
||||
### Docker Image Build
|
||||
|
||||
I use [this docker image](https://hub.docker.com/r/monacoremo/nix) to build with
|
||||
Nix on my Drone setup. As of the time of writing this post, the most recent tag
|
||||
of this image is `monacoremo/nix:2020-04-05-05f09348-circleci`. This image has a
|
||||
core setup of Nix and a few userspace tools so that it works in CI tooling. In
|
||||
this step, I do a few things:
|
||||
|
||||
```yaml
|
||||
name: build docker image
|
||||
image: "monacoremo/nix:2020-04-05-05f09348-circleci"
|
||||
environment:
|
||||
USER: root
|
||||
commands:
|
||||
- cachix use xe
|
||||
- nix-build
|
||||
- cp $(readlink result) /result/docker.tgz
|
||||
volumes:
|
||||
- name: image
|
||||
path: /result
|
||||
```
|
||||
|
||||
I first activate my [cachix](https://xe.cachix.org) cache so that any pre-built
|
||||
parts of this setup can be fetched from the cache instead of rebuilt from source
|
||||
or fetched from [crates.io](https://crates.io). This makes the builds slightly
|
||||
faster in my limited testing.
|
||||
|
||||
Then I build the docker image with `nix-build` (`nix-build` defaults to
|
||||
`default.nix` when a filename is not specified, which is where the docker build
|
||||
is defined in this case) and copy the resulting tarball to that shared temporary
|
||||
folder I mentioned earlier. This lets me build the docker image _without needing
|
||||
a docker daemon_ or any other special permissions on the host.
|
||||
|
||||
### Pushing
|
||||
|
||||
The next step pushes this newly created docker image to the Docker Hub:
|
||||
|
||||
```yaml
|
||||
name: push docker image
|
||||
image: docker:dind
|
||||
volumes:
|
||||
- name: image
|
||||
path: /result
|
||||
- name: dockersock
|
||||
path: /var/run/docker.sock
|
||||
commands:
|
||||
- docker load -i /result/docker.tgz
|
||||
- docker tag xena/printerfacts:latest xena/printerfacts:$DRONE_COMMIT_SHA
|
||||
- echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
|
||||
- docker push xena/printerfacts:$DRONE_COMMIT_SHA
|
||||
environment:
|
||||
DOCKER_USERNAME: xena
|
||||
DOCKER_PASSWORD:
|
||||
from_secret: DOCKER_PASSWORD
|
||||
```
|
||||
|
||||
First it loads the docker image from that shared folder into the docker daemon
|
||||
as `xena/printerfacts:latest`. This image is then tagged with the relevant git
|
||||
commit using the magic
|
||||
[`$DRONE_COMMIT_SHA`](https://docs.drone.io/pipeline/environment/reference/drone-commit-sha/)
|
||||
variable that Drone defines for you.
|
||||
|
||||
In order to push docker images, you need to log into the Docker Hub. I log in
|
||||
using this method in order to avoid the chance that the docker password will be
|
||||
leaked to the build logs.
|
||||
|
||||
```
|
||||
echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
|
||||
```
|
||||
|
||||
Then the image is pushed to the Docker hub and we can get onto the deployment
|
||||
step.
|
||||
|
||||
### Deploying to Kubernetes
|
||||
|
||||
The deploy step does two small things. First, it installs
|
||||
[dhall-yaml](https://github.com/dhall-lang/dhall-haskell/tree/master/dhall-yaml)
|
||||
for generating the Kubernetes manifest (see
|
||||
[here](https://christine.website/blog/dhall-kubernetes-2020-01-25)) and then
|
||||
runs
|
||||
[`scripts/release.sh`](https://tulpa.dev/cadey/printerfacts/src/branch/master/scripts/release.sh):
|
||||
|
||||
```shell
|
||||
#!/usr/bin/env nix-shell
|
||||
#! nix-shell -p doctl -p kubectl -i bash
|
||||
|
||||
doctl kubernetes cluster kubeconfig save kubermemes
|
||||
dhall-to-yaml-ng < ./printerfacts.dhall | kubectl apply -n apps -f -
|
||||
kubectl rollout status -n apps deployment/printerfacts
|
||||
```
|
||||
|
||||
This uses the [nix-shell shebang
|
||||
support](http://iam.travishartwell.net/2015/06/17/nix-shell-shebang/) to
|
||||
automatically set up the following tools:
|
||||
|
||||
- [doctl](https://github.com/digitalocean/doctl) to log into kubernetes
|
||||
- [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) to actually
|
||||
deploy the site
|
||||
|
||||
Then it logs into kubernetes (my cluster is real-life unironically named
|
||||
kubermemes), applies the generated manifest (which looks something like
|
||||
[this](http://sprunge.us/zsO4os)) and makes sure the deployment rolls out
|
||||
successfully.
|
||||
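For reference, `printerfacts.dhall` follows the same `makeApp` pattern from my kubermemes repo. A hedged sketch of it (the exact file paths and image tag here are assumptions; the real file may differ):

```dhall
-- Hypothetical sketch of printerfacts.dhall
let makeApp = ./k8s/app/make.dhall

let Config = ./k8s/app/config.dhall

in  makeApp
      Config::{
      , name = "printerfacts"
      , image = "xena/printerfacts:latest"
      , domain = "printerfacts.cetacean.club"
      , replicas = 2
      , leIssuer = "prod"
      }
```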
|
||||
This will have the kubernetes cluster automatically roll out new versions of the
|
||||
service and maintain at least two active replicas of the service. This will make
|
||||
sure that users can always have access to high-quality printer facts, even
|
||||
if one or more of the kubernetes nodes go down.
|
||||
|
||||
---
|
||||
|
||||
And that is how I continuously deploy things on my Gitea server to Kubernetes
|
||||
using Drone, Dhall and Nix.
|
||||
|
||||
If you want to integrate the printer facts service into your application, use
|
||||
the `/fact` route on it:
|
||||
|
||||
```console
|
||||
$ curl https://printerfacts.cetacean.club/fact
|
||||
A printer has a total of 24 whiskers, 4 rows of whiskers on each side. The upper
|
||||
two rows can move independently of the bottom two rows.
|
||||
```
|
||||
|
||||
There is currently no rate limit to this API. Please do not make me have to
|
||||
create one.
|
|
---
title: "Farewell Email - Heroku"
date: 2019-03-08
|
||||
for: Herokai
|
||||
subject: May our paths cross again
|
||||
tags:
|
||||
- personal
|
||||
- heroku
|
||||
---
|
||||
|
||||
# Farewell Email - Heroku
|
||||
|
|
|
---
title: Fear
date: 2018-07-24
|
||||
thanks: CelestialBoon, no really this guy is amazing and doesn't get enough credit, I'm so grateful for him.
|
||||
for: Twilight Sparkle
|
||||
series: stories
|
||||
---
|
||||
|
||||
# Fear
|
||||
|
|
|
---
|
||||
title: RSS/Atom Feeds Fixed and Announcing my Flight Journal
|
||||
date: 2020-07-26
|
||||
tags:
|
||||
- gemini
|
||||
---
|
||||
|
||||
# RSS/Atom Feeds Fixed and Announcing my Flight Journal
|
||||
|
||||
I have released version 2.0.1 of this site's code. With it I have fixed the RSS
|
||||
and Atom feed generation. For now I have had to sacrifice the post content being
|
||||
in the feed, but I will bring it back as soon as possible.
|
||||
|
||||
Victory badges:
|
||||
|
||||
[![Valid Atom Feed](https://validator.w3.org/feed/images/valid-atom.png)](/blog.atom)
|
||||
[![Valid RSS Feed](https://validator.w3.org/feed/images/valid-rss-rogers.png)](/blog.rss)
|
||||
|
||||
Thanks to [W3Schools](https://www.w3schools.com/XML/xml_rss.asp) for having a
|
||||
minimal example of an RSS feed and [this Flickr
|
||||
image](https://www.flickr.com/photos/sepblog/3652359502/) for expanding it so I
|
||||
can have the post dates be included too.
|
||||
|
||||
## Flight Journal
|
||||
|
||||
I have created a [Gemini](https://gemini.circumlunar.space) protocol server at
|
||||
[gemini://cetacean.club](gemini://cetacean.club). Gemini is an exploration of
|
||||
the space between [Gopher](https://en.wikipedia.org/wiki/Gopher_%28protocol%29)
|
||||
and HTTP. Right now my site doesn't have much on it, but I have added its feed
|
||||
to [my feeds page](/feeds).
|
||||
|
||||
Please note that the content on this Gemini site is going to be of a much more
|
||||
personal nature compared to the more professional kind of content I put on this
|
||||
blog. Please keep this in mind before casting judgement or making any kind of
|
||||
conclusions about me.
|
||||
|
||||
If you don't have a Gemini client installed, you can view the site content
|
||||
[here](https://portal.mozz.us/gemini/cetacean.club/). I plan to make a HTTP
|
||||
frontend to this site once I get [Maj](https://tulpa.dev/cadey/maj) up and
|
||||
functional.
|
||||
|
||||
## Maj
|
||||
|
||||
I have created a Gemini client and server framework for Rust programs called
|
||||
[Maj](https://tulpa.dev/cadey/maj). Right now it includes the following
|
||||
features:
|
||||
|
||||
- Synchronous client
|
||||
- Asynchronous server framework
|
||||
- Gemini response parser
|
||||
- `text/gemini` parser
|
||||
|
||||
Additionally, I have a few projects in progress for the Maj ecosystem:
|
||||
|
||||
- [majc](https://portal.mozz.us/gemini/cetacean.club/maj/majc.gmi) - an
|
||||
interactive curses client for Gemini
|
||||
- majd - An advanced reverse proxy and Lua handler daemon for people running
|
||||
Gemini servers
|
||||
- majsite - A simple example of the maj server framework in action
|
||||
|
||||
I will write more about this in the future when I have more than just this
|
||||
little preview of what is to come implemented. However, here's a screenshot of
|
||||
majc rendering my flight journal:
|
||||
|
||||
![majc preview image rendering cetacean.club](/static/img/majc_preview.png)
|
|
@@ -1,10 +1,6 @@
---
title: FFI-ing Go from Nim for Fun and Profit
date: 2015-12-20
series: howto
tags:
 - go
 - nim
---

FFI-ing Golang from Nim for Fun and Profit
@@ -1,7 +1,6 @@
---
title: A Formal Grammar of h
date: 2019-05-19
series: conlangs
---

# A Formal Grammar of `h`
@@ -1,275 +0,0 @@
---
title: "Gamebridge: Fitting Square Pegs into Round Holes since 2020"
date: 2020-05-09
series: howto
tags:
 - witchcraft
 - sm64
 - twitch
---

# Gamebridge: Fitting Square Pegs into Round Holes since 2020

Recently I did a stream called [Twitch Plays Super Mario 64][tpsm64]. During
that stream I both demonstrated and hacked on a tool I'm calling
[gamebridge][gamebridge]. Gamebridge is a tool that lets games interoperate
with programs they really shouldn't be able to interoperate with.

[tpsm64]: https://www.twitch.tv/videos/615780185
[gamebridge]: https://github.com/Xe/gamebridge

Gamebridge works by aggressively hooking into a game's input logic (through a
custom controller driver) and uses a pair of [Unix fifos][ufifo] to communicate
between it and the game it is controlling. Overall the flow of data between the
two programs looks like this:

[ufifo]: http://man7.org/linux/man-pages/man7/fifo.7.html

![A diagram explaining how control/state/data flows between components of the
gamebridge stack](/static/blog/gamebridge.png)

You can view the [source code of this diagram in GraphViz dot format
here](/static/blog/gamebridge.dot).

The main magic that keeps this glued together is the use of _blocking_ I/O.
This means that the bridge input thread will be blocked _at the kernel level_
waiting for the vblank signal to be written, and the game will also be blocked
at the kernel level waiting for the bridge input thread to write the desired
input. This effectively uses the Linux kernel to pass around a scheduling
quantum like you would in the L4 microkernel. This design consideration also
means that gamebridge has to perform _as fast as possible as much as possible_,
because it realistically only has a few hundred microseconds at best to respond
with the input data to avoid humans noticing any stutter. As such, gamebridge
is written in Rust.
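The fifo handshake itself can be sketched in a few lines of Rust. This is a hedged toy example, not gamebridge's actual code: the fifo path and the payload are made up, and it assumes a Linux-like system with `mkfifo` on the PATH. The point it shows is that opening a fifo blocks in the kernel until the other side shows up, which is exactly the scheduling trick described above.

```rust
use std::fs::{File, OpenOptions};
use std::io::{Read, Write};
use std::process::Command;
use std::thread;

// Send one "frame" of input through a fifo and read it back, using only
// blocking I/O. Returns the bytes the reader saw.
fn roundtrip(path: &str, frame: &str) -> std::io::Result<String> {
    let _ = std::fs::remove_file(path);
    Command::new("mkfifo").arg(path).status()?;

    // "game" side: open for reading. This blocks in the kernel until a
    // writer opens the fifo, just like the game blocks on gamebridge.
    let path_owned = path.to_string();
    let reader = thread::spawn(move || {
        let mut f = File::open(path_owned).unwrap();
        let mut buf = String::new();
        f.read_to_string(&mut buf).unwrap();
        buf
    });

    // "bridge" side: open for writing (which blocks until the reader is
    // there), then write the input frame and close.
    let mut f = OpenOptions::new().write(true).open(path)?;
    write!(f, "{}", frame)?;
    drop(f);

    Ok(reader.join().unwrap())
}

fn main() -> std::io::Result<()> {
    let got = roundtrip("/tmp/fifo_demo", "80 00 7f 00")?;
    println!("{}", got);
    Ok(())
}
```

Neither side needs to poll or sleep; the kernel wakes each process exactly when the other is ready.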

## Implementation

When implementing gamebridge, I had a few goals in mind:

- Use blocking I/O to have the kernel help with this
- Use threads to their fullest potential
- Unix fifos are great, let's use them
- Understand linear interpolation better
- Create a surreal demo on Twitch
- Only have one binary to start, the game itself

As a first step of implementing this, I went through the source code of the
Mario 64 PC port (though in theory this could also work for other games or even
Nintendo 64 emulators with enough work) and began to look for anything that
might be useful to understand how parts of the game work. I stumbled across
`src/pc/controller` and then found two gems that really stood out: the
interface for adding new input methods to the game and an example input method
that read from tool-assisted speedrun recordings. The controller input
interface itself is a thing of beauty; I've included a copy of it below:

```c
// controller_api.h
#ifndef CONTROLLER_API
#define CONTROLLER_API

#include <ultra64.h>

struct ControllerAPI {
    void (*init)(void);
    void (*read)(OSContPad *pad);
};

#endif
```

All you need to implement your own input method is an init function and a read
function. The init function is used to set things up and the read function is
called every frame to get inputs. The tool-assisted speedrunning input method
seemed to conform to the [Mupen64 demo file spec as described on
tasvideos.org][mupendemo], and I ended up using this to help test and verify
ideas.

[mupendemo]: http://tasvideos.org/EmulatorResources/Mupen/M64.html

The thing that struck me was how _simple_ the format was. Every frame of input
uses its own four-byte sequence. The constants in the demo file spec also
helped greatly as I figured out ways to bridge into the game from Rust. I
ended up creating two [bitflag][bitflag] structs to help with the button data,
which ended up almost being a 1:1 copy of the Mupen64 demo file spec:

[bitflag]: https://docs.rs/bitflags/1.2.1/bitflags/

```rust
bitflags! {
    // 0x0100 Digital Pad Right
    // 0x0200 Digital Pad Left
    // 0x0400 Digital Pad Down
    // 0x0800 Digital Pad Up
    // 0x1000 Start
    // 0x2000 Z
    // 0x4000 B
    // 0x8000 A
    pub(crate) struct HiButtons: u8 {
        const NONE = 0x00;
        const DPAD_RIGHT = 0x01;
        const DPAD_LEFT = 0x02;
        const DPAD_DOWN = 0x04;
        const DPAD_UP = 0x08;
        const START = 0x10;
        const Z_BUTTON = 0x20;
        const B_BUTTON = 0x40;
        const A_BUTTON = 0x80;
    }
}
```
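To make the mapping concrete, here is a hedged sketch in plain Rust (without the `bitflags` crate) of packing one frame's "hi" button byte the way the layout above describes. The constant names mirror the struct, but the `pack` helper is my own illustration, not gamebridge code.

```rust
// Bit positions for the "hi" button byte, mirroring the flags above.
const A_BUTTON: u8 = 0x80;
const B_BUTTON: u8 = 0x40;
const Z_BUTTON: u8 = 0x20;
const START: u8 = 0x10;

// Pack the pressed buttons into the single byte the demo format expects.
fn pack(a: bool, b: bool, z: bool, start: bool) -> u8 {
    let mut byte = 0;
    if a { byte |= A_BUTTON; }
    if b { byte |= B_BUTTON; }
    if z { byte |= Z_BUTTON; }
    if start { byte |= START; }
    byte
}

fn main() {
    // A + Z held on this frame
    println!("{:#04x}", pack(true, false, true, false)); // → 0xa0
}
```

Each frame of input is then just this byte plus its three siblings, written in sequence.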

### Input

This is where things get interesting. One of the more interesting side effects
of getting inputs over chat for a game like Mario 64 is that you need to [hold
buttons or even the analog stick][apress] in order to do things like jumping
into paintings or onto ledges. When you get inputs over chat, you only have
them for one frame. Therefore you need some kind of analog input (or an
emulation of one) that decays over time. One approach you can use for this is
[linear interpolation][lerp] (or lerp).

[apress]: https://youtu.be/kpk2tdsPh0A?list=PLmBeAOWc3Gf7IHDihv-QSzS8Y_361b_YO&t=13
[lerp]: https://www.gamedev.net/tutorials/programming/general-and-gameplay-programming/a-brief-introduction-to-lerp-r4954/

I implemented support for both button and analog stick lerping using a struct I
call a [Lerper][lerper]. (The file it is in is named `au.rs` because [.au.][au]
is the lojban emotion-particle for "to desire"; the name was inspired by the
struct seeming to fake what the desired inputs were.)

[lerper]: https://github.com/Xe/gamebridge/blob/b2e7ba21aa14b556e34d7a99dd02e22f9a1365aa/src/au.rs
[au]: http://jbovlaste.lojban.org/dict/au

At its core, a Lerper stores a few basic things:

- the current scalar of where the analog input is resting
- the frame number when the analog input was set to the max (or above)
- the maximum number of frames that the lerp should run for
- the goal (or where the end of the linear interpolation is; for most cases in
  this codebase the goal is 0, or neutral)
- the maximum possible output to return on `apply()`
- the minimum possible output to return on `apply()`

Every frame, the lerpers for every single input to the game get applied down
closer to zero. Mario 64 uses two signed bytes to represent the controller
input. The maximum/minimum clamps make sure that the lerped result stays in
that range.
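Those fields can be sketched as a small Rust struct. This is a hedged reconstruction of the idea, not the actual `au.rs` code; the field and method names are my guesses.

```rust
// A minimal sketch of lerp-based input decay: an input is slammed to a
// start value when a chat command comes in, then linearly interpolated
// toward the goal (usually neutral) over max_frames frames.
struct Lerper {
    start: i64,      // value when the input was last set
    goal: i64,       // where the decay ends, usually 0 (neutral)
    set_frame: u64,  // frame number when the input was set
    max_frames: u64, // how many frames the lerp runs for
    min: i64,        // output clamp, e.g. -128
    max: i64,        // output clamp, e.g. 127
}

impl Lerper {
    fn apply(&self, frame: u64) -> i64 {
        let elapsed = frame.saturating_sub(self.set_frame);
        // t goes from 0.0 at set_frame to 1.0 at set_frame + max_frames
        let t = (elapsed as f64 / self.max_frames as f64).min(1.0);
        let v = self.start as f64 + (self.goal - self.start) as f64 * t;
        (v as i64).clamp(self.min, self.max)
    }
}

fn main() {
    let stick = Lerper { start: 127, goal: 0, set_frame: 0, max_frames: 270, min: -128, max: 127 };
    println!("{} {} {}", stick.apply(0), stick.apply(135), stick.apply(270));
    // → 127 63 0
}
```

A "hold up" chat command only has to set `start` and `set_frame` once; `apply()` then does the rest of the drift back to neutral frame by frame.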

### Twitch Integration

This is one of the first times I have ever used asynchronous Rust in
conjunction with synchronous Rust. I was shocked at how easy it was to just
spin up another thread and have that thread take care of the Tokio runtime,
leaving the main thread to focus on input. This is the block of code that
handles [running the asynchronous twitch bot in parallel to the main
thread][twitchrs]:

[twitchrs]: https://github.com/Xe/gamebridge/blob/b2e7ba21aa14b556e34d7a99dd02e22f9a1365aa/src/twitch.rs#L12

```rust
pub(crate) fn run(st: MTState) {
    use tokio::runtime::Runtime;
    Runtime::new()
        .expect("Failed to create Tokio runtime")
        .block_on(handle(st));
}
```

Then the rest of the Twitch integration is boilerplate until we get to the
command parser. At its core, it just splits each chat line up into words and
looks for keywords:

```rust
let chatline = msg.data.to_string();
let chatline = chatline.to_ascii_lowercase();
let mut data = st.write().unwrap();
const BUTTON_ADD_AMT: i64 = 64;

for cmd in chatline.to_string().split(" ").collect::<Vec<&str>>().iter() {
    match *cmd {
        "a" => data.a_button.add(BUTTON_ADD_AMT),
        "b" => data.b_button.add(BUTTON_ADD_AMT),
        "z" => data.z_button.add(BUTTON_ADD_AMT),
        "r" => data.r_button.add(BUTTON_ADD_AMT),
        "cup" => data.c_up.add(BUTTON_ADD_AMT),
        "cdown" => data.c_down.add(BUTTON_ADD_AMT),
        "cleft" => data.c_left.add(BUTTON_ADD_AMT),
        "cright" => data.c_right.add(BUTTON_ADD_AMT),
        "start" => data.start.add(BUTTON_ADD_AMT),
        "up" => data.sticky.add(127),
        "down" => data.sticky.add(-128),
        "left" => data.stickx.add(-128),
        "right" => data.stickx.add(127),
        "stop" => { data.stickx.update(0); data.sticky.update(0); },
        _ => {},
    }
}
```

This implements the following commands:

| Command  | Meaning                          |
|----------|----------------------------------|
| `a`      | Press the A button               |
| `b`      | Press the B button               |
| `z`      | Press the Z button               |
| `r`      | Press the R button               |
| `cup`    | Press the C-up button            |
| `cdown`  | Press the C-down button          |
| `cleft`  | Press the C-left button          |
| `cright` | Press the C-right button         |
| `start`  | Press the start button           |
| `up`     | Press up on the analog stick     |
| `down`   | Press down on the analog stick   |
| `left`   | Press left on the analog stick   |
| `stop`   | Reset the analog stick to center |

Currently analog stick inputs will stick for about 270 frames and button
inputs will stick for about 20 frames before drifting back to neutral. The
start button is special: inputs to the start button will stick for 5 frames at
most.

### Debugging

Debugging two programs running together is surprisingly hard. I had to resort
to the tried-and-true method of using `gdb` for the main game code and
excessive amounts of printf debugging in Rust. The
[pretty\_env\_logger][pel] crate (which internally uses the [env_logger][el]
crate, whose environment variable also configures pretty\_env\_logger) helped
a lot. One of the biggest problems I encountered in developing it was fixed by
this patch, which I will paste inline:

[pel]: https://docs.rs/pretty_env_logger/0.4.0/pretty_env_logger/
[el]: https://docs.rs/env_logger/0.7.1/env_logger/

```diff
diff --git a/gamebridge/src/main.rs b/gamebridge/src/main.rs
index 426cd3e..6bc3f59 100644
@@ -93,7 +93,7 @@ fn main() -> Result<()> {
         },
     };

-    sticky = match stickx {
+    sticky = match sticky {
        0 => sticky,
        127 => {
            ymax_frame = data.frame;
```

Somehow I had been trying to adjust the y axis position of the stick by
comparing the x axis position of the stick. Finding and fixing this bug is
what made me write the Lerper type.

---

Altogether, this has been a very fun project. I've learned a lot about 3D game
design, historical source code analysis and inter-process communication. I
also learned a lot about asynchronous Rust and how it can work together with
synchronous Rust. I also got to make a fairly surreal demo for Twitch. I hope
this can be useful to others, even if it just serves as an example of how to
integrate things into strange other things from unixy first principles.

You can find out slightly more about [gamebridge][gamebridge] on its GitHub
page. Its repo also includes patches for the Mario 64 PC port source code,
including one that disables the ability for Mario to lose lives. This could
prove useful for Twitch plays attempts, as the default cap of 5 lives became
rather limiting in testing.

Be well.
@@ -1,69 +0,0 @@
---
title: The Gears and The Gods
date: 2019-11-14
tags:
 - wasm
 - philosophy
 - gods
---

# The Gears and The Gods

If there are any gods in computing, they are the authors of compilers. The
output of compilers is treated as a Heavenly Decree, sometimes used for many
sprints or even years after the output was last emitted.

People trust this output to be Correct. To tell the machine what to do and by
its will it be done. The compiler is itself a factory of servitors, each bound
by the unholy runes inscribed into it in order to make the endless sequence of
lights change colors in the right patterns.

The output of the work of the Gods is stored for later use when their might is
needed. The work of the Gods, however, is a very fickle beast. Their words of
power only make the gears turn when they are built with very specific gearing.

This means that people who rely on these sacred runes have to chain themselves
to gearing patterns. Each year new ways of tricking the gears into running
faster are developed. However, the ways the gears turn can be abused to spill
the secrets other gears are crunching on. These gearing patterns haven’t seen
any real fundamental design changes in decades, because you never know when
the output of the Old Gods is needed.

This means that the gears themselves are the chains that bind people to the
past. The gears of computation. The gears made of sand we tricked into
thinking with lightning.

But now the gears show their age. The gearing on the side of the gearing on
the side of the gearing on the side of the gearing shows its ugly head.

But the Masses never question it. Even though they take hit after hit to the
performance of the gears.

What there needs to be is some kind of Apocalypse, a revealing of the faults
in the gears. Maybe then the Masses will start to question their blind loyalty
and the chains binding them to the gears. Maybe they would even be able to try
other gear patterns.

But this is just fantasy; nobody would WILLINGLY change the gearing patterns.

Would they?

But what about the experience they’ve come to expect from their old gears?
Where they could swap out inputs to the gears with ease. Where the Output of
the Gods of old still functions.

There needs to be a Better Way to switch gearings. But this kind of solution
isn’t conducive to how people use the gears. People use the gears they do
because they don’t care. They just want things to work “like they expect it
to” and ignore things that don’t feed this addiction.

And THIS is why I’m such a big advocate for WebAssembly on the server. It lets
you take the output of the Gods and store it in a way that it can be
transparently upgraded to new sets of gearing. So that the future and the past
can work in unison instead of being enemies.

Now, all that's left is to build a bridge. A bridge that will help to unite
the past, the present and the future into a woven masterpiece of collaborative
cocreation. Where the output of the gods is a weaker chain to the gears of old
and can easily be adapted to the gears of new. Even the gears that nobody's
even dreamed of yet.
@@ -1,108 +0,0 @@
---
title: The Fear Of Missing Out
date: 2020-08-02
tags:
 - culture
 - web
---

# The Fear Of Missing Out

Humans have evolved over thousands of years with communities that are small,
tight-knit and where it is easy to feel like you know everyone in them. The
Internet changes this completely. With the Internet, it's easy to send
messages, write articles and even publish books that untold thousands of
people can read and interact with. This has led to an instinctive fear in
humanity I'm going to call the Fear of Missing Out [1].

[[1]: The Fear of Missing Out](https://en.wikipedia.org/wiki/Fear_of_missing_out)

The Internet in its current form capitalizes and makes billions off of this.
Infinite scrolling and live updating pages make it feel like there's always
something new to read. Uncountable hours of engineering and psychological
testing are spent making sure people click and scroll and click and consume
all day until that little hit of dopamine becomes its own addiction. We have
taken a system for displaying documents and accidentally turned it into a
hulking abomination that consumes the souls of all who get trapped in it,
crystallizing them in an endless cycle of checking notifications, looking for
new posts on your newsfeed, scrolling down to find just that something you
think you're looking for.

When I was in high school, I bagged groceries for a store. I also had the
opportunity to help customers out to their cars and was able to talk with
them. Obviously, I was minimum wage and had a whole bunch of other things to
do; however there were a few times that I could really get to talk with
regular customers and feel like I got to know them. What comes to mind,
however, is a story where that was not the case. One day I was helping this
older woman to her car, and she eventually said something like "All of these
people just keep going, going, going nonstop. It drives me mad. How can't they
see where they are is good enough already?" I thought for a moment and I
wasn't able to come up with a decent reply.

The infinite scrollbars and newsfeeds of the web just keep going, going,
going, going, going, going, going and going until the user gives up and does
something else. There's no consideration of _how_ the content is discovered,
or _why_ the content is discovered; it's just an endless feed of noise. One
subtle change in your worldview after another, just from the headlines alone.
Not to mention the endless torrent of advertising.

However, I think there may be a way out, a kind of detox from the infinite
scrolling, newsfeeds, notifications and the like of the internet, and I think
a good step towards that is the Gemini [2] protocol.

[[2]: Gemini Protocol](https://gemini.circumlunar.space/)

Gemini is a protocol that is somewhere between HTTP and Gopher. A user sends a
request to a Gemini server and the user gets a response back. This response
could be anything, but a little header tells the client what kind of data it
is. There's also a little markup format that's a very lightweight take on
markdown [3], but overall the entire goal of the project is to be minimal and
just serve documents.

[[3]: Gemtext markup](https://portal.mozz.us/gemini/gemini.circumlunar.space/docs/gemtext.gmi)
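To give a sense of how small the protocol is, here is a hedged sketch in Rust of parsing a Gemini response header. The function name and error handling are my own invention for illustration; the wire shape (a two-digit status code, a space, a meta string, then CRLF) comes from the Gemini specification.

```rust
// Parse a Gemini response header like "20 text/gemini\r\n" into a
// (status, meta) pair. Returns None if the line is too short or the
// status isn't numeric.
fn parse_header(line: &str) -> Option<(u8, String)> {
    let line = line.trim_end(); // drop the trailing CRLF
    if line.len() < 2 {
        return None;
    }
    let status: u8 = line[..2].parse().ok()?;
    let meta = line[2..].trim_start().to_string();
    Some((status, meta))
}

fn main() {
    // 2x statuses mean success; the meta field is then a MIME type.
    println!("{:?}", parse_header("20 text/gemini\r\n"));
}
```

That one line is the entire response metadata; everything after it is the document body. Compare that to the header soup of a modern HTTP exchange.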

I've noticed something as I browse through the known constellation of Gemini
capsules though. I keep refreshing the CAPCOM feed of posts. I keep refreshing
the mailing list archives. I keep refreshing my email client, looking for new
content, and feel frustrated when it doesn't show up like I expect it to. I'm
addicted to the newsfeeds. I'm caught in the trap that autoplay put me in. I'm
a victim of infinite scrolling and that constant little hit of dopamine that
modern social media has put on us all. Realizing this feels like realizing an
addiction to a drug (and I'd argue that it somewhat is a drug, by design: what
better way to get people exposed to ads than to make the service that serves
the ads addictive!).

I'm not sure how to best combat this. It feels kind of scary. I'm starting to
attempt to detox though. I'm writing a lot more on my Gemini capsule [4] [5].
I'm starting to really consider the Fear of Missing Out when I design and
implement things in the future. So many things update instantly on the modern
internet; it may be a good idea to attempt to make something that updates
weekly or even monthly.

[[4]: My Gemini capsule](gemini://cetacean.club)
[[5]: [experimental] My Gemini capsule over HTTP](http://cetacean.club)

I'm still going to attempt a few ideas that I have regarding long term
archival of the Gemini constellation, but I'm definitely going to make sure
that I take the time to actually consider the consequences of my actions and
what kind of world they create. I want to create the kind of world that
enables people to better themselves.

Let's work together to detox from the harmful effects of what we all have
created. I'm considering opening up a Gemini server that other people can have
accounts on and write about things that interest them.

If you want to get started with Gemini, I suggest taking a look at the main
site through the Gemini to HTTP proxy [6]. There are some clients listed in
the pages there, including a _very good_ iOS client that is currently in
TestFlight. Please do keep in mind that Gemini is very much a back-button
navigation kind of experience. The web has made people expect navigation links
to be everywhere, which can make it a weird/jarring experience at first, but
you get used to it. You can see evidence of this in my site with all the "Go
back" links on each page. I'll remove those at some point, but for now I'm
going to keep them.

[[6]: Project Gemini](https://portal.mozz.us/gemini/gemini.circumlunar.space/)

Don't be afraid of missing out. It's inevitable. Things happen. It's okay for
them to happen without you having to see them. They will still be there when
you look again.
@@ -1,228 +0,0 @@
---
title: "Get Going: Hello, World!"
date: 2019-10-28
series: get-going
tags:
 - golang
 - book
 - draft
---

# Get Going: Hello, World!

This post is a draft of the first chapter in a book I'm writing to help people
learn the [Go][go] programming language. It's aimed at people who understand
the high level concepts of programming, but haven't had much practical
experience with it. This is a sort of spiritual successor to my old
[Getting Started with Go][gswg] post from 2015. A lot has changed in the
ecosystem since then, as well as my understanding of the language.

[go]: https://golang.org
[gswg]: https://christine.website/blog/getting-started-with-go-2015-01-28

Like always, feedback is very welcome. Any feedback I get will be used to help
make this book even better.

This article is a bit of an expanded version of what the first chapter will
eventually be. I also plan to turn a version of this article into a workshop
for my dayjob.

## What is Go?

Go is a compiled programming language made by Google. It has a lot of features
out of the box, including:

* A static type system
* Fast compile times
* Efficient code generation
* Parallel programming for free*
* A strong standard library
* Cross-compilation with ease (including WebAssembly)
* and more!

\* You still have to write code that avoids race conditions; more on those
later.

### Why Use Go?

Go is a very easy to read and write programming language. Consider this
snippet:

```go
func Add(x int, y int) int {
    return x + y
}
```

This function wraps [integer
addition](https://golang.org/ref/spec#Arithmetic_operators). When you call it,
it returns the sum of x and y.

## Installing Go

### Linux

Installing Go on Linux systems is a very distribution-specific thing. Please
see [this tutorial on
DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-install-go-on-ubuntu-18-04)
for more information.

### macOS

* Go to https://golang.org/dl
* Download the .pkg file
* Double-click on it and go through the installer process

### Windows

* Go to https://golang.org/dl
* Download the .msi file
* Double-click on it and go through the installer process

### Next Steps

These next steps are needed to set up your shell for Go programs.

Pick a directory you want to store Go programs and downloaded source code in.
This is called your GOPATH. It is usually the `go` folder in your home
directory. If for some reason you want another folder for this, use that
folder instead of `$HOME/go` below.

#### Linux/macOS

This next step is unfortunately shell-specific. To find out what shell you are
using, run the following command in your terminal:

```console
$ env | grep SHELL
```

The name at the end of the path will be the shell you are using.

##### bash

If you are using bash, add the following lines to your .bashrc (Linux) or
.bash_profile (macOS):

```
export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"
```

Then reload the configuration by closing and re-opening your terminal.

##### fish

If you are using fish, create a file in ~/.config/fish/conf.d/go.fish with the
following lines:

```
set -gx GOPATH $HOME/go
set -gx PATH $PATH "$GOPATH/bin"
```

##### zsh

If you are using zsh, add the following lines to your .zshrc:

```
export GOPATH=$HOME/go
export PATH="$PATH:$GOPATH/bin"
```

#### Windows

Follow the instructions
[here](https://github.com/golang/go/wiki/SettingGOPATH#windows).

## Installing a Text Editor

For this book, we will be using VS Code. Download and install it from
https://code.visualstudio.com. The default settings will let you work with Go
code.

## Hello, world!

Now that everything is installed, let's test it with the classic "Hello,
world!" program. Create a folder in your home folder called `Code`. Create
another folder inside that Code folder called `get_going` and create yet
another subfolder called `hello`. Open a file in there with VS Code (Open
Folder -> Code -> get_going -> hello) called `hello.go` and type in the
following:

```go
// Command hello is your first Go program.
package main

import "fmt"

func main() {
    fmt.Println("Hello, world!")
}
```

This program prints "Hello, world!" and then immediately exits. Here's each of
the parts in detail:

```go
// Command hello is your first go program.
package main // Every go file must be in a package.
             // Package main is used for creating executable files.

import "fmt" // Go doesn't implicitly import anything. You need to
             // explicitly import "fmt" for printing text to
             // standard output.

func main() { // func main is the entrypoint of the program, or
              // where the computer starts executing your code
    fmt.Println("Hello, world!") // This prints "Hello, world!" followed by
                                 // a newline to standard output.
} // This ends the main function
```

Now click over to the terminal at the bottom of the VS Code window and run
this program with the following command:

```console
$ go run hello.go
Hello, world!
```

`go run` compiles and runs the code for you, without creating a persistent
binary file. This is a good way to run programs while you are writing them.

To create a binary, use `go build`:

```console
$ go build hello.go
$ ./hello
Hello, world!
```

`go build` has the compiler create a persistent binary file and puts it in the
same directory as you are running `go` from. Go will choose the filename of
the binary based on the name of the .go file passed to it. These binaries are
usually static binaries, or binaries that are safe to distribute to other
computers without having to worry about linked libraries.

## Exercises

The following is a list of optional exercises that may help you understand
more:

1. Replace the "world" in "Hello, world!" with your name.
2. Rename `hello.go` to `main.go`. Does everything still work?
3. Read through the documentation of the [fmt][fmt] package.

[fmt]: https://golang.org/pkg/fmt

---

And that about wraps it up for Lesson 1 in Go. Like I mentioned before,
feedback on this helps a lot.

Up next is an overview of data types such as integers, true/false booleans,
floating-point numbers and strings.

I plan to post the book source code on my GitHub page once I have more than
one chapter drafted.

Thanks and be well.
@ -1,7 +1,6 @@
---
title: Getting Started with Go
date: 2015-01-28
series: howto
---

Getting Started with Go

@ -1,202 +0,0 @@
---
title: "gitea-release Tool Announcement"
date: "2020-05-31"
tags:
- gitea
- rust
- release
---

# gitea-release Tool Announcement

I'm a big fan of automating things that can possibly be automated. One of the
biggest pains I've consistently had is creating and tagging releases of
software. This has been a very manual process for me: I have to write up
changelogs, bump versions and then replicate the changelog and versions in the
web UI of whatever git forge the project in question is using. This works great
at smaller scales, but can quickly become a huge pain in the butt when it needs
to be done more often. Today I've written a small tool to help me automate this
going forward. It is named
[`gitea-release`](https://tulpa.dev/cadey/gitea-release). This is one of my
largest Rust projects to date and something I am incredibly happy with. I will
be using it going forward for all of my repos on my gitea instance
[tulpa.dev](https://tulpa.dev).

`gitea-release` is a spiritual clone of the tool [`github-release`][ghrelease],
but optimized for my workflow. The biggest changes are that it works on
[gitea][gitea] repos instead of GitHub repos, is written in Rust instead of Go,
and it automatically scrapes release notes from `CHANGELOG.md` as well as
reading the version of the software from `VERSION`.

[ghrelease]: https://github.com/github-release/github-release
[gitea]: https://gitea.io

## CHANGELOG.md and VERSION files

The `CHANGELOG.md` file is based on the [Keep a Changelog][kacl] format, but
modified slightly to make it easier for this tool. Here is an example changelog
that this tool accepts:

[kacl]: https://keepachangelog.com/en/1.0.0/

```markdown
# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## 0.1.0

### FIXED

- Refrobnicate the spurious rilkefs

## 0.0.1

First release, proof of concept.
```

When a release is created for version 0.1.0, this tool will set the description
of the release to the following:

```
### FIXED

- Refrobnicate the spurious rilkefs
```

This allows the changelog file to be the ultimate source of truth for release
notes with this tool.

The `VERSION` file plays into this as well. The `VERSION` file MUST be a single
line containing a [semantic version][semver] string. This allows the `VERSION`
file to be the ultimate source of truth for software version data with this
tool.

[semver]: https://semver.org/spec/v2.0.0.html

## Release Process

When this tool is run with the `release` subcommand, the following actions take
place:

- The `VERSION` file is read and loaded as the desired tag for the repo
- The `CHANGELOG.md` file is read and the changes for the `VERSION` are
  cherry-picked out of the file
- The git repo is checked to see if that tag already exists
  - If the tag exists, the tool exits and does nothing
  - If the tag does not exist, it is created (with the changelog fragment as the
    body of the tag) and pushed to the gitea server using the supplied gitea token
- A gitea release is created using the changelog fragment, and the release name
  is generated from the `VERSION` string

## Automation of the Automation

This tool works perfectly well locally, but that alone doesn't make it fully
automated from the gitea repo. I use [drone][drone] as a CI/CD tool for my gitea
repos. Drone has a very convenient and simple-to-use [plugin
system][droneplugin] that was easy to integrate with [structopt][structopt].

[drone]: https://drone.io
[droneplugin]: https://docs.drone.io/plugins/overview/
[structopt]: https://crates.io/crates/structopt

I created a drone plugin at `xena/gitea-release` that can be configured as a
pipeline step in your `.drone.yml` like this:

```yaml
kind: pipeline
name: ci/release
steps:
  - name: whatever unit testing step
    # ...
  - name: auto-release
    image: xena/gitea-release:0.2.5
    settings:
      auth_username: cadey
      changelog_path: ./CHANGELOG.md
      gitea_server: https://tulpa.dev
      gitea_token:
        from_secret: GITEA_TOKEN
    when:
      event:
        - push
      branch:
        - master
```

This allows me to bump the `VERSION` and `CHANGELOG.md`, push that commit to
git, and a new release will automatically be created. You can see an example of
this in action with [the drone build history of the gitea-release
repo](https://drone.tulpa.dev/cadey/gitea-release). You can also see how the
`CHANGELOG.md` file grows with the [CHANGELOG of
gitea-release](https://tulpa.dev/cadey/gitea-release/src/branch/master/CHANGELOG.md).

Once the release is pushed to gitea, you can then use drone to trigger
deployment commands. For example, here is the deployment pipeline used to
automatically update the docker image for the gitea-release tool:

```yaml
kind: pipeline
name: docker
steps:
  - name: build docker image
    image: "monacoremo/nix:2020-04-05-05f09348-circleci"
    environment:
      USER: root
    commands:
      - cachix use xe
      - nix-build docker.nix
      - cp $(readlink result) /result/docker.tgz
    volumes:
      - name: image
        path: /result
    when:
      event:
        - tag

  - name: push docker image
    image: docker:dind
    volumes:
      - name: image
        path: /result
      - name: dockersock
        path: /var/run/docker.sock
    commands:
      - docker load -i /result/docker.tgz
      - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin
      - docker push xena/gitea-release
    environment:
      DOCKER_USERNAME:
        from_secret: DOCKER_USERNAME
      DOCKER_PASSWORD:
        from_secret: DOCKER_PASSWORD
    when:
      event:
        - tag

volumes:
  - name: image
    temp: {}
  - name: dockersock
    host:
      path: /var/run/docker.sock
```

This pipeline will use [Nix](https://nixos.org/nix) to build the docker image,
load it into a Docker daemon, and then log into the Docker Hub and push it. This
can then be used to do whatever you want. It may also be a good idea to push a
docker image for every commit and then re-label the tagged commits, but this
wasn't implemented in this repo.

---

I hope this tool will be useful. I will accept feedback over [any contact
method](/contact). If you want to contribute directly to the project, please
feel free to create [issues](https://tulpa.dev/cadey/gitea-release/issues) or
[pull requests](https://tulpa.dev/cadey/gitea-release/pulls). If you don't want
to create an account on my git server, get me the issue details or code diffs
somehow and I will do everything I can to fix issues and integrate code. I just
want to make this tool better however I can.

Be well.

@ -3,7 +3,6 @@ title: Gratitude
date: 2018-07-20
thanks: CelestialBoon
for: Mother Aya
series: magick
---

# Gratitude

@ -1,328 +0,0 @@
---
title: The h Programming Language
date: 2019-06-30
tags:
- wasm
- release
---

# The h Programming Language

[h](https://h.christine.website) is a project of mine that I have released
recently. It is a single-paradigm, multi-tenant friendly, Turing-incomplete
programming language that does nothing but print one of two things:

- the letter h
- a single quote (the Lojbanic "h")

It does this via [WebAssembly](https://webassembly.org). This may sound like a
pointless complication, but it actually ends up making things _a lot simpler_.
WebAssembly is a virtual machine (a fake computer that only exists in code)
intended for browsers, but I've been using it for server-side tasks.

I have written more about/with WebAssembly in the past in these posts:

- https://christine.website/talks/webassembly-on-the-server-system-calls-2019-05-31
- https://christine.website/blog/olin-1-why-09-1-2018
- https://christine.website/blog/olin-2-the-future-09-5-2018
- https://christine.website/blog/land-1-syscalls-file-io-2018-06-18
- https://christine.website/blog/templeos-2-god-the-rng-2019-05-30

This is a continuation of the following two posts:

- https://christine.website/blog/the-origin-of-h-2015-12-14
- https://christine.website/blog/formal-grammar-of-h-2019-05-19

All of the relevant code for h is [here](https://github.com/Xe/x/tree/master/cmd/h).

h is a somewhat standard three-phase compiler. Each of the phases is as follows:

## Parsing the Grammar

As mentioned in a prior post, h has a formal grammar defined as a [Parsing Expression Grammar](https://en.wikipedia.org/wiki/Parsing_expression_grammar).
I took this [grammar](https://github.com/Xe/x/blob/v1.1.7/h/h.peg) (with some
minor modifications) and fed it into a tool called [peggy](https://github.com/eaburns/peggy)
to generate a Go source [version of the parser](https://github.com/Xe/x/blob/v1.1.7/h/h_gen.go).
This parser has some minimal [wrappers](https://github.com/Xe/x/blob/v1.1.7/h/parser.go)
around it, mostly to simplify the output and remove unneeded nodes from the tree.
This simplifies the later compilation phases.

The input to h looks something like this:

```
h
```

The output syntax tree pretty-prints to something like this:

```
H("h")
```

This is also represented using a tree of nodes that looks something like this:

```
&peg.Node{
    Name: "H",
    Text: "h",
    Kids: nil,
}
```

A more complicated program will look something like this:

```
&peg.Node{
    Name: "H",
    Text: "h h h",
    Kids: {
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
        &peg.Node{
            Name: "",
            Text: "h",
            Kids: nil,
        },
    },
}
```

Now that we have this syntax tree, it's easy to go to the next phase of
compilation: generating the WebAssembly Text Format.

## WebAssembly Text Format

[WebAssembly Text Format](https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format)
is a human-editable and understandable version of WebAssembly. It is pretty
low-level, but it is actually fairly simple. Let's take an example of the h
compiler output and break it down:

```
(module
 (import "h" "h" (func $h (param i32)))
 (func $h_main
  (local i32 i32 i32)
  (local.set 0 (i32.const 10))
  (local.set 1 (i32.const 104))
  (local.set 2 (i32.const 39))
  (call $h (get_local 1))
  (call $h (get_local 0))
 )
 (export "h" (func $h_main))
)
```

Fundamentally, WebAssembly binary files are also called modules. Each .wasm file
can have only one module defined in it. Modules can have sections that contain
the following information:

- External function imports
- Function definitions
- Memory information
- Named function exports
- Global variable definitions
- Other custom data that may be vendor-specific

h only uses external function imports, function definitions and named function
exports.

`import` imports a function from the surrounding runtime with two fields: module
and function name. Because this is an obfuscated language, the function `h` from
module `h` is imported as `$h`. This function works somewhat like the C library
function [putchar()](https://www.tutorialspoint.com/c_standard_library/c_function_putchar.htm).

`func` creates a function. In this case we are creating a function named
`$h_main`. This will be the entrypoint for the h program.

Inside the function `$h_main`, there are three local variables created: `0`,
`1` and `2`. They correlate to the following values:

| Local Number | Explanation       | Integer Value |
| :----------- | :---------------- | :------------ |
| 0            | Newline character | 10            |
| 1            | Lowercase h       | 104           |
| 2            | Single quote      | 39            |

As such, this program prints a single lowercase h and then a newline.

`export` lets consumers of this WebAssembly module get a name for a function,
linear memory or global value. As we only need one function in this module,
we export `$h_main` as `"h"`.

## Compiling this to a Binary

The next phase of compiling is to turn this WebAssembly Text Format into a
binary. For simplicity, the tool `wat2wasm` from the [WebAssembly Binary Toolkit](https://github.com/WebAssembly/wabt)
is used. This tool creates a WebAssembly binary out of WebAssembly Text Format.

Usage is simple (assuming you have the WebAssembly Text Format file above saved as `h.wat`):

```
wat2wasm h.wat -o h.wasm
```

And you will create `h.wasm` with the following sha256 sum:

```
sha256sum h.wasm
8457720ae0dd2deee38761a9d7b305eabe30cba731b1148a5bbc5399bf82401a h.wasm
```

Now that the final binary is created, we can move to the runtime phase.

## Runtime

The h [runtime](https://github.com/Xe/x/blob/v1.1.7/cmd/h/run.go) is incredibly
simple. It provides the `h.h` putchar-like function and executes the `h`
function from the binary you feed it. It also times execution and keeps track
of the number of instructions the program runs. This is called "gas" for
historical reasons involving [blockchains](https://blockgeeks.com/guides/ethereum-gas/).

I use [Perlin Network's life](https://github.com/perlin-network/life) as the
implementation of WebAssembly in h. I have experience with it from [Olin](https://github.com/Xe/olin).

## The Playground

As part of this project, I wanted to create an [interactive playground](https://h.christine.website/play).
This allows users to run arbitrary h programs on my server. As the only system
call is putchar, this is safe. The playground also has some limitations on how
big of a program it can run. The playground server works like this:

- The user program is sent over HTTP with Content-Type [text/plain](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L402-L413)
- The program is [limited to 75 bytes on the server](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L44) (though this is [configurable](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L15) via flags or envvars)
- The program is [compiled](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L53)
- The program is [run](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L59)
- The output is [returned via JSON](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L65-L72)
- This output is then put [into the playground page with JavaScript](https://github.com/Xe/x/blob/v1.1.7/cmd/h/http.go#L389-L394)

The output of this call looks something like this:

```
curl -H "Content-Type: text/plain" --data "h" https://h.christine.website/api/playground | jq
{
  "prog": {
    "src": "h",
    "wat": "(module\n (import \"h\" \"h\" (func $h (param i32)))\n (func $h_main\n (local i32 i32 i32)\n (local.set 0 (i32.const 10))\n (local.set 1 (i32.const 104))\n (local.set 2 (i32.const 39))\n (call $h (get_local 1))\n (call $h (get_local 0))\n )\n (export \"h\" (func $h_main))\n)",
    "bin": "AGFzbQEAAAABCAJgAX8AYAAAAgcBAWgBaAAAAwIBAQcFAQFoAAEKGwEZAQN/QQohAEHoACEBQSchAiABEAAgABAACw==",
    "ast": "H(\"h\")"
  },
  "res": {
    "out": "h\n",
    "gas": 11,
    "exec_duration": 12345
  }
}
```

The execution duration is in [nanoseconds](https://godoc.org/time#Duration), as
it is directly a Go standard library time duration.

## Bugs h has Found

This will be updated in the future, but h has already found a bug in [Innative](https://innative.dev).
There was a bug in how Innative handled C name mangling of binaries. Output of
the h compiler is now [a test case in Innative](https://github.com/innative-sdk/innative/commit/6353d59d611164ce38b938840dd4f3f1ea894e1b#diff-dc4a79872612bb26927f9639df223856R1).
I consider this a success for the project. It is such a little thing, but it
means a lot to me for some reason. My shitpost created a test case in a project
I tried to integrate it with.

That's just awesome to me in ways I have trouble explaining.

As such, h programs _do_ work with Innative. Here's how to do it:

First, install the h compiler and runtime with the following command:

```
go get within.website/x/cmd/h
```

This will install the `h` binary to your `$GOPATH/bin`, so ensure that is part
of your path (if it is not already):

```
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```

Then create an h binary like this:

```
h -p "h h" -o hh.wasm
```

Now we need to provide Innative the `h.h` system call implementation, so open
`h.c` and enter the following:

```
#include <stdio.h>

void h_WASM_h(char data) {
  putchar(data);
}
```

Then build it to an object file:

```
gcc -c -o h.o h.c
```

Then pack it into a static library:

```
ar rsv libh.a h.o
```

Then create the shared object with Innative:

```
innative-cmd -l ./libh.a hh.wasm
```

This should create `hh.so` in the current working directory.

Now create the following [Nim](https://nim-lang.org) wrapper at `h.nim`:

```
proc hh_WASM_h() {. importc, dynlib: "./hh.so" .}

hh_WASM_h()
```

and build it:

```
nim c h.nim
```

then run it:

```
./h
h
```

And congrats, you have now compiled h to a native shared object.

## Why

Now, something you might be asking yourself as you read through this post is
something like: "Why the heck are you doing this?" That's honestly a good
question. One of the things I want to do with computers is to create art for
the sake of art. h is one of these projects. h is not a productive tool. You
cannot create anything useful with h. This is an exercise in creating a
compiler and runtime from scratch, based on my past experiences with parsing
Lojban, WebAssembly on the server and frustrating marketing around programming
tools. I wanted to create something that deliberately pokes at all of the
common ways that programming languages and tooling are advertised. I wanted to
make it a fully secure tool as well, with an arbitrary limitation of having no
memory usage. Everything is fully functional. There are a few grammar bugs that
I'm calling features.

@ -1,78 +0,0 @@
---
title: hlang in 30 Seconds
date: 2021-01-04
series: h
tags:
- satire
---

# hlang in 30 Seconds

hlang (the h language) is a revolutionary new use of WebAssembly that enables
single-paradigm programming without any pesky state or memory accessing. The
simplest program you can write in hlang is the h world program:

```
h
```

When run in [the hlang playground](https://h.christine.website/play), you can
see its output:

```
h
```

To get more output, separate multiple h's by spaces:

```
h h h h
```

This returns:

```
h
h
h
h
```

## Internationalization

For internationalization concerns, hlang also supports the Lojbanic h `'`. You
can mix h and `'` to your heart's content:

```
' h '
```

This returns:

```
'
h
'
```

Finally, an easy solution to your pesky Lojban internationalization problems!

## Errors

For maximum understandability, compiler errors are provided in Lojban. For
example, this error tells you that you have an invalid character at the first
character of the string:

```
h: gentoldra fi'o zvati fe li no
```

Here is an interlinear gloss of that error:

```
h: gentoldra     fi'o zvati  fe           li         no
   grammar-wrong existing-at second-place use-number 0
```

And now you are fully fluent in hlang, the most exciting programming language
since sliced bread.

@ -1,7 +1,6 @@
---
title: How does into Meditation
date: 2017-12-10
series: when-then-zen
---

# How does into Meditation

@ -1,164 +0,0 @@
---
title: How HTTP Requests Work
date: 2020-05-19
tags:
- http
- ohgod
- philosophy
---

# How HTTP Requests Work

Reading this webpage is possible because of millions of hours of effort by tens
of thousands of actors across thousands of companies. At some level it's a
minor miracle that this all works at all. Here's a preview into the madness
that goes into hitting enter on christine.website and this website being
loaded.

## Beginnings

The user types `https://christine.website` into the address bar and hits enter
on the keyboard. This sends a signal over USB to the computer, and the kernel
polls the USB controller for a new message. It's recognized as coming from the
keyboard. The input is then sent to the browser through an input driver talking
to a windowing server talking to the browser program.

The browser selects the memory region normally reserved for the address bar.
The browser then parses this string as an [RFC 3986][rfc3986] URI and scrapes
out the protocol (https), hostname (christine.website) and path (/). The
browser then uses this information to create an abstract HTTP request object
with the Host header set to christine.website, the HTTP method set to GET, and
the path set accordingly. This request object then passes through various
layers of credential storage and middleware to add the appropriate cookies and
other headers that tell my website what language it should localize the
response to, what compression methods the browser understands, and what browser
is being used to make the request.

[rfc3986]: https://tools.ietf.org/html/rfc3986

## Connections

The browser then checks if it already has a connection to christine.website
open. If it does not, it creates a new one. It does this by figuring out the IP
address of christine.website using [DNS][dns]. A DNS request is made over
[UDP][udp] on port 53 to the DNS server configured in the operating system
(such as 8.8.8.8, 1.1.1.1 or 75.75.75.75). The UDP connection is created using
operating system-dependent system calls and a DNS request is sent.

[udp]: https://en.wikipedia.org/wiki/User_Datagram_Protocol
[dns]: https://en.wikipedia.org/wiki/Domain_Name_System

The packet that was created is destined for the DNS server and added to the
operating system's output queue. The operating system then looks in its routing
table to see where the packet should go. If the packet matches a route, it is
queued for output on the relevant network card. The network card layer then
checks the ARP table to see what [MAC address][macaddress] the
[Ethernet][ethernet] frame should be sent to. If the ARP table doesn't have a
match, an ARP probe is broadcast to every node on the local network. The driver
then waits for an ARP response with the correct IP -> MAC address mapping and
uses this information to send the Ethernet frame to the node that matches the
IP address in the routing table. From there the packet is validated on the
router it was sent to. The router then unwraps the packet to the IP layer to
figure out the destination network interface to use. If this router also does
NAT termination, it creates an entry in the NAT table for future use for a
site-configured amount of time (for UDP at least). It then passes the packet on
to the correct node, and this process is repeated until it gets to the remote
DNS server.

[macaddress]: https://en.wikipedia.org/wiki/MAC_address
[ethernet]: https://en.wikipedia.org/wiki/Ethernet

The DNS server then unwraps the Ethernet frame into an IP packet, then a UDP
packet, and then a DNS request. It checks its database for a match, and if one
is not found, it attempts to discover the correct name server to contact by
making an NS record query to its upstreams or the authoritative name server for
the WEBSITE namespace. This kicks off another round of Ethernet frames and UDP
packets until the query reaches the upstream DNS server, which hopefully
replies with the correct address. Once the DNS server gets the information it
needs, it sends the results back to the client as a wire-format DNS response.

UDP is unreliable by design, so this packet may or may not survive the entire
round trip. It may take one or more retries for the DNS information to get to
the remote server and back, but it usually works the first time. The response
to this request is cached based on the time-to-live specified in the DNS
response. The response also contains the IP address of christine.website.

## Security

The protocol used in the URL determines which TCP port the browser connects to.
If it is http, it uses port 80. If it is https, it uses port 443. The user
specified HTTPS, so port 443 on whatever IP address DNS returned is dialed
using the operating system's network stack system calls. The [TCP][tcp]
three-way handshake is started with that target IP address and port: the client
sends a SYN packet, the server replies with a SYN-ACK packet and the client
replies with an ACK packet. This indicates that the TCP session is active and
data can be transferred and read through it.

[tcp]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol

However, this data is UNENCRYPTED by default. [Transport Layer Security][tls]
is used to encrypt this data so prying eyes can't look into it. TLS has its own
handshake too. The session is established by sending a TLS ClientHello packet
with the domain name (christine.website), the list of ciphers the client
supports, any application layer protocols the client supports (like HTTP/2) and
the list of TLS versions that the client supports. This information is sent
over the wire to the remote server using that entire long and complicated
process that I spelled out for how DNS works, except a TCP session requires the
other side to acknowledge when data is successfully received. The server on the
other end replies with a ClientHelloResponse that contains an HTTPS certificate
and the list of protocols and ciphers the server supports. Then they do an
[encryption session setup rain dance][tlsraindance] that I don't completely
understand, and the resulting channel is encrypted: cipher text is written to
and read from the wire, and a session layer translates that cipher text to
clear text for the other parts of the browser stack.

[tls]: https://en.wikipedia.org/wiki/Transport_Layer_Security
[tlsraindance]: https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/

The browser then uses the information in the ClientHelloResponse to decide how
to proceed from here.

## HTTP

If the browser notices the server supports HTTP/2, it sets up an HTTP/2 session
(with a handshake that involves a few roundtrips, like what I described for
DNS) and creates a new stream for this request. The browser then formats the
request as HTTP/2 wire-format bytes (a binary format) and writes it to the
HTTP/2 stream, which writes it to the HTTP/2 framing layer, which writes it to
the encryption layer, which writes it to the network socket and sends it over
the internet.

If the browser notices the server DOES NOT support HTTP/2, it formats the
request as HTTP/1.1 wire-formatted bytes and writes it to the encryption layer,
which writes it to the network socket and sends it over the internet using that
complicated process I spelled out for DNS.
||||
This then hits the remote load balancer, which parses the client HTTP request
and uses site-local configuration to select the best application server to
handle the response. It then forwards the client's HTTP request to the correct
server by creating a TCP session to that backend, writing the HTTP request and
waiting for a response over that TCP session. Depending on site-local
configuration, there may be layers of encryption involved.

## Application Server

Now the request finally gets to the application server. The TCP session is
accepted by the application server and the headers are read into memory. The
path is read by the application server and the correct handler is chosen. The
HTML for the front page of christine.website is rendered and written to the TCP
session, travels to the load balancer, and gets encrypted with TLS. The
encrypted HTML gets sent back over the internet to your browser, which decrypts
it and starts to parse and display the website. The browser will run into
places where it needs more resources (such as stylesheets or images), so it
will make additional HTTP requests to the load balancer to grab those too.

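The accept/read/respond loop described above can be sketched with just the
Rust standard library. This is a toy stand-in for an application server, not
the code behind christine.website; it answers a single request with a fixed
page:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Build a minimal HTTP/1.1 response carrying an HTML body.
fn render_response(html: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
        html.len(),
        html
    )
}

fn main() {
    // Bind to an OS-assigned port so this sketch can run anywhere.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Pretend to be the load balancer: open a TCP session and write a request.
    let client = thread::spawn(move || {
        let mut conn = TcpStream::connect(addr).unwrap();
        conn.write_all(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n").unwrap();
        let mut reply = String::new();
        conn.read_to_string(&mut reply).unwrap();
        reply
    });

    // The application server side: accept one session, read the request
    // headers, pick a handler (there is only one here) and write the HTML.
    let (mut session, _) = listener.accept().unwrap();
    let mut buf = [0u8; 1024];
    let _ = session.read(&mut buf).unwrap();
    session
        .write_all(render_response("<h1>Hello, world!</h1>").as_bytes())
        .unwrap();
    drop(session); // close the session so the client sees end-of-stream

    let reply = client.join().unwrap();
    assert!(reply.starts_with("HTTP/1.1 200 OK"));
    println!("{}", reply);
}
```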
---

The end result is that the user sees the website in all its glory. Given all
these moving parts, it's astounding that this works as reliably as it does.
TCP, ARP and DNS requests like these also happen at each level of the stack.
There are layers upon layers upon layers of interacting protocols and
implementations.

This is why it is hard to reliably put a website on the internet. If there is a
god, they are surely the one holding all these potentially unreliable systems
together to make everything appear like it is working.

---
title: "How I Start: Nix"
date: 2020-03-08
series: howto
tags:
- nix
- rust
---

# How I Start: Nix

[Nix][nix] is a tool that helps people create reproducible builds. This means
that given a known input, you can get the same output on other machines. Let's
build and deploy a small Rust service with Nix. This will not require the Rust
compiler to be installed with [rustup][rustup] or similar.

[nix]: https://nixos.org/nix/
[rustup]: https://rustup.rs

- Setting up your environment
- A new project
- Setting up the Rust compiler
- Serving HTTP
- A simple package build
- Shipping it in a Docker image

## Setting up your environment

The first step is to install Nix. If you are using a Linux machine, run this
script:

```console
$ curl https://nixos.org/nix/install | sh
```

This will prompt you for more information as it goes on, so be sure to follow
the instructions carefully. Once it is done, close and re-open your shell.
After you have done this, `nix-env` should exist in your shell. Try to run it:

```console
$ nix-env
error: no operation specified
Try 'nix-env --help' for more information.
```

Let's install a few other tools to help us with development. First, let's
install [lorri][lorri] to help us manage our development shell:

[lorri]: https://github.com/target/lorri

```console
$ nix-env --install --file https://github.com/target/lorri/archive/master.tar.gz
```

This will automatically download and build lorri for your system based on the
latest possible version. Once that is done, open another shell window (the
lorri docs include ways to do this more persistently, but this will work for
now) and run:

```console
$ lorri daemon
```

Now go back to your main shell window and install [direnv][direnv]:

[direnv]: https://direnv.net

```console
$ nix-env --install direnv
```

Next, follow the [shell setup][direnvsetup] needed for your shell. I personally
use `fish` with [oh my fish][omf], so I would run this:

[direnvsetup]: https://direnv.net/docs/hook.html
[omf]: https://github.com/oh-my-fish/oh-my-fish

```console
$ omf install direnv
```

Finally, let's install [niv][niv] to help us handle dependencies for the
project. This will allow us to make sure that our builds pin _everything_ to a
specific set of versions, including operating system packages.

[niv]: https://github.com/nmattia/niv

```console
$ nix-env --install niv
```

Now that we have all of the tools we will need installed, let's create the
project.

## A new project

Go to your favorite place to put code and make a new folder. I personally
prefer `~/code`, so I will be using that here:

```console
$ cd ~/code
$ mkdir helloworld
$ cd helloworld
```

Let's set up the basic skeleton of the project. First, initialize niv:

```console
$ niv init
```

This will add the latest versions of `niv` itself and the packages used for the
system to `nix/sources.json`. This will allow us to pin exact versions so the
environment is as predictable as possible. Sometimes the versions of software
in the pinned nixpkgs are too old. If this happens, you can update to the
"unstable" branch of nixpkgs with this command:

```console
$ niv update nixpkgs -b nixpkgs-unstable
```

Next, set up lorri using `lorri init`:

```console
$ lorri init
```

This will create `shell.nix` and `.envrc`. `shell.nix` is where we define the
development environment for this service. `.envrc` is used to tell direnv what
it needs to do. Let's try to activate the `.envrc`:

```console
$ cd .
direnv: error /home/cadey/code/helloworld/.envrc is blocked. Run `direnv allow`
to approve its content
```

Let's review its content:

```console
$ cat .envrc
eval "$(lorri direnv)"
```

This seems reasonable, so approve it with `direnv allow` like the error message
suggests:

```console
$ direnv allow
```

Now let's customize the `shell.nix` file to use our pinned version of nixpkgs.
Currently, it looks something like this:

```nix
# shell.nix
let
  pkgs = import <nixpkgs> {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];
}
```

This currently imports nixpkgs from the system-level version of it. This means
that different systems could have different versions of nixpkgs on them, and
that could make the `shell.nix` file hard to reproduce between machines. Let's
import the pinned version of nixpkgs that niv created:

```nix
# shell.nix
let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];
}
```

And then let's test it with `lorri shell`:

```console
$ lorri shell
lorri: building environment........ done
(lorri) $
```

And let's see if `hello` is available inside the shell:

```console
(lorri) $ hello
Hello, world!
```

You can set environment variables inside the `shell.nix` file. Do so like this:

```nix
# shell.nix
let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs {};
in
pkgs.mkShell {
  buildInputs = [
    pkgs.hello
  ];

  # Environment variables
  HELLO = "world";
}
```

Wait a moment for lorri to finish rebuilding the development environment and
then let's see if the environment variable shows up:

```console
$ cd .
direnv: loading ~/code/helloworld/.envrc
<output snipped>
$ echo $HELLO
world
```

Now that we have the basics of the environment set up, let's install the Rust
compiler.

## Setting up the Rust compiler

First, add [nixpkgs-mozilla][nixpkgsmoz] to niv:

[nixpkgsmoz]: https://github.com/mozilla/nixpkgs-mozilla

```console
$ niv add mozilla/nixpkgs-mozilla
```

Then create `nix/rust.nix` in your repo:

```nix
# nix/rust.nix
{ sources ? import ./sources.nix }:

let
  pkgs =
    import sources.nixpkgs { overlays = [ (import sources.nixpkgs-mozilla) ]; };
  channel = "nightly";
  date = "2020-03-08";
  targets = [ ];
  chan = pkgs.rustChannelOfTargets channel date targets;
in chan
```

This creates a Nix function that takes in the pre-imported list of sources,
creates a copy of nixpkgs with Rust at the nightly version `2020-03-08`
overlaid into it, and exposes the Rust package out of it. Let's add this to
`shell.nix`:

```nix
# shell.nix
let
  sources = import ./nix/sources.nix;
  rust = import ./nix/rust.nix { inherit sources; };
  pkgs = import sources.nixpkgs { };
in
pkgs.mkShell {
  buildInputs = [
    rust
  ];
}
```

Then ask lorri to recreate the development environment. This may take a bit to
run because it's setting up everything the Rust compiler requires to run.

```console
$ lorri shell
(lorri) $
```

Let's see what version of Rust is installed:

```console
(lorri) $ rustc --version
rustc 1.43.0-nightly (823ff8cf1 2020-03-07)
```

This is exactly what we expect. Rust nightly versions get released with the
date of the previous day in them. To be extra sure, let's see what the shell
thinks `rustc` resolves to:

```console
(lorri) $ which rustc
/nix/store/w6zk1zijfwrnjm6xyfmrgbxb6dvvn6di-rust-1.43.0-nightly-2020-03-07-823ff8cf1/bin/rustc
```

And now exit that shell and reload direnv:

```console
(lorri) $ exit
$ cd .
direnv: loading ~/code/helloworld/.envrc
$ which rustc
/nix/store/w6zk1zijfwrnjm6xyfmrgbxb6dvvn6di-rust-1.43.0-nightly-2020-03-07-823ff8cf1/bin/rustc
```

And now we have Rust installed at an arbitrary nightly version for _that
project only_. This will work on other machines too. Now that we have our
development environment set up, let's serve HTTP.

## Serving HTTP

[Rocket][rocket] is a popular web framework for Rust programs. Let's use it to
create a small "hello, world" server. We will need to do the following:

[rocket]: https://rocket.rs

- Create the new Rust project
- Add Rocket as a dependency
- Write our "hello world" route
- Test a build of the service with `cargo build`

### Create the new Rust project

Create the new Rust project with `cargo init`:

```console
$ cargo init --vcs git .
Created binary (application) package
```

This will create the directory `src` and a file named `Cargo.toml`. Rust code
goes in `src` and the `Cargo.toml` file configures dependencies. Adding the
`--vcs git` flag also has cargo create a [gitignore][gitignore] file so that
the target folder isn't tracked by git.

[gitignore]: https://git-scm.com/docs/gitignore

### Add Rocket as a dependency

Open `Cargo.toml` and add the following to it:

```toml
[dependencies]
rocket = "0.4.3"
```

Then download/build Rocket with `cargo build`:

```console
$ cargo build
```

This will download all of the dependencies you need and precompile Rocket,
which will help speed up later builds.

### Write our "hello world" route

Now put the following in `src/main.rs`:

```rust
#![feature(proc_macro_hygiene, decl_macro)] // language features needed by Rocket

// Import the rocket macros
#[macro_use]
extern crate rocket;

// Create route / that returns "Hello, world!"
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
```

### Test a build

Rerun `cargo build`:

```console
$ cargo build
```

This will create the binary at `target/debug/helloworld`. Let's run it locally
and see if it works:

```console
$ ./target/debug/helloworld &
$ curl http://127.0.0.1:8000
Hello, world!
$ fg
<press control-c>
```

The HTTP service works. We have a binary that was created with the Rust
compiler that Nix installed.

## A simple package build

Now that we have the HTTP service working, let's put it inside a Nix package.
We will use [naersk][naersk] to do this. Add naersk to your project with niv:

[naersk]: https://github.com/nmattia/naersk

```console
$ niv add nmattia/naersk
```

Now let's create `helloworld.nix`:

```nix
# import niv sources and the pinned nixpkgs
{ sources ? import ./nix/sources.nix, pkgs ? import sources.nixpkgs { } }:
let
  # import rust compiler
  rust = import ./nix/rust.nix { inherit sources; };

  # configure naersk to use our pinned rust compiler
  naersk = pkgs.callPackage sources.naersk {
    rustc = rust;
    cargo = rust;
  };

  # tell nix-build to ignore the `target` directory
  src = builtins.filterSource
    (path: type: type != "directory" || builtins.baseNameOf path != "target")
    ./.;
in naersk.buildPackage {
  inherit src;
  remapPathPrefix =
    true; # remove nix store references for a smaller output package
}
```

And then build it with `nix-build`:

```console
$ nix-build helloworld.nix
```

This can take a bit to run, but it will do the following things:

- Download naersk
- Download every Rust crate your HTTP service depends on into the Nix store
- Run your program's tests
- Build your dependencies into a Nix package
- Build your program with those dependencies
- Place a link to the result at `./result`

Once it is done, let's take a look at the result:

```console
$ du -hs ./result/bin/helloworld
2.1M    ./result/bin/helloworld

$ ldd ./result/bin/helloworld
        linux-vdso.so.1 (0x00007fffae080000)
        libdl.so.2 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libdl.so.2 (0x00007f3a01666000)
        librt.so.1 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/librt.so.1 (0x00007f3a0165c000)
        libpthread.so.0 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libpthread.so.0 (0x00007f3a0163b000)
        libgcc_s.so.1 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libgcc_s.so.1 (0x00007f3a013f5000)
        libc.so.6 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libc.so.6 (0x00007f3a0123f000)
        /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2 (0x00007f3a0160b000)
        libm.so.6 => /nix/store/wx1vk75bpdr65g6xwxbj4rw0pk04v5j3-glibc-2.27/lib/libm.so.6 (0x00007f3a010a9000)
```

This means that the Nix build created a 2.1 megabyte binary that only depends
on [glibc][glibc], the implementation of the C language standard library that
Nix prefers.

[glibc]: https://www.gnu.org/software/libc/

For repo cleanliness, add the `result` link to the [gitignore][gitignore]:

```console
$ echo 'result*' >> .gitignore
```

## Shipping it in a Docker image

Now that we have a package built, let's ship it in a Docker image. nixpkgs
provides [dockerTools][dockertools], which helps us create Docker images out
of Nix packages. Let's create `default.nix` with the following contents:

[dockertools]: https://nixos.org/nixpkgs/manual/#sec-pkgs-dockerTools

```nix
{ system ? builtins.currentSystem }:

let
  sources = import ./nix/sources.nix;
  pkgs = import sources.nixpkgs { };
  helloworld = import ./helloworld.nix { inherit sources pkgs; };

  name = "xena/helloworld";
  tag = "latest";

in pkgs.dockerTools.buildLayeredImage {
  inherit name tag;
  contents = [ helloworld ];

  config = {
    Cmd = [ "/bin/helloworld" ];
    Env = [ "ROCKET_PORT=5000" ];
    WorkingDir = "/";
  };
}
```

And then build it with `nix-build`:

```console
$ nix-build default.nix
```

This will create a tarball containing the Docker image information as the
result of the Nix build. Load it into docker using `docker load`:

```console
$ docker load -i result
```

And then run it using `docker run`:

```console
$ docker run --rm -itp 52340:5000 xena/helloworld
```

Now test it using curl:

```console
$ curl http://127.0.0.1:52340
Hello, world!
```

And now you have a Docker image you can run wherever you want. The
`buildLayeredImage` function used in `default.nix` also makes Nix put each
dependency of the package into its own Docker layer. This makes new versions
of your program very efficient to upgrade on your clusters; realistically,
this reduces the amount of data needed for new versions of the program down to
what changed. If nothing but some resources in their own package were changed,
only those packages get downloaded.

This is how I start a new project with Nix. I put all of the code described in
this post in [this GitHub repo][helloworldrepo] in case it helps. Have fun and
be well.

[helloworldrepo]: https://github.com/Xe/helloworld

---

For some "extra credit" tasks, try and see if you can do the following:

- Use the version of [niv][niv] that niv pinned
- Customize the environment of the container by following the [Rocket
  configuration documentation](https://rocket.rs/v0.4/guide/configuration/)
- Add some more routes to the program
- Read the [Nix
  documentation](https://nixos.org/nix/manual/#chap-writing-nix-expressions)
  and learn more about writing Nix expressions
- Configure your editor/IDE to use the `direnv` path

---
title: "How I Start: Rust"
date: 2020-03-15
series: howto
tags:
- rust
- how-i-start
- nix
---

# How I Start: Rust

[Rust][rustlang] is an exciting new programming language that makes it easy to
write understandable and reliable software. It is made by Mozilla and is used
by Amazon, Google, Microsoft and many other large companies.

[rustlang]: https://www.rust-lang.org/

Rust has a reputation for being difficult because it makes no effort to hide
what is going on. I'd like to show you how I start with Rust projects. Let's
make a small HTTP service using [Rocket][rocket].

[rocket]: https://rocket.rs

- Setting up your environment
- A new project
- Testing
- Adding functionality
- OpenAPI specifications
- Error responses
- Shipping it in a Docker image

## Setting up your environment

The first step is to install the Rust compiler. You can use any method you
like, but since we are requiring the nightly version of Rust for this project,
I suggest using [rustup][rustup]:

[rustup]: https://rustup.rs/

```console
$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly
```

If you are using [NixOS][nixos] or another Linux distribution with [Nix][nix]
installed, see [this post][howistartnix] for some information on how to set up
the Rust compiler.

[nixos]: https://nixos.org/nixos/
[nix]: https://nixos.org/nix/
[howistartnix]: https://christine.website/blog/how-i-start-nix-2020-03-08

## A new project

[Rocket][rocket] is a popular web framework for Rust programs. Let's use it to
create a small "hello, world" server. We will need to do the following:

[rocket]: https://rocket.rs/

- Create the new Rust project
- Add Rocket as a dependency
- Write the hello world route
- Test a build of the service with `cargo build`
- Run it and see what happens

### Create the new Rust project

Create the new Rust project with `cargo init`:

```console
$ cargo init --vcs git .
Created binary (application) package
```

This will create the directory `src` and a file named `Cargo.toml`. Rust code
goes in `src` and the `Cargo.toml` file configures dependencies. Adding the
`--vcs git` flag also has cargo create a [gitignore][gitignore] file so that
the target folder isn't tracked by git.

[gitignore]: https://git-scm.com/docs/gitignore

### Add Rocket as a dependency

Open `Cargo.toml` and add the following to it:

```toml
[dependencies]
rocket = "0.4.4"
```

Then download/build [Rocket][rocket] with `cargo build`:

```console
$ cargo build
```

This will download all of the dependencies you need and precompile Rocket,
which will help speed up later builds.

### Write our "hello world" route

Now put the following in `src/main.rs`:

```rust
#![feature(proc_macro_hygiene, decl_macro)] // Nightly-only language features needed by Rocket

// Import the rocket macros
#[macro_use]
extern crate rocket;

/// Create route / that returns "Hello, world!"
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
```

### Test a build

Rerun `cargo build`:

```console
$ cargo build
```

This will create the binary at `target/debug/helloworld`. Let's run it locally
and see if it works:

```console
$ ./target/debug/helloworld
```

And in another terminal window:

```console
$ curl http://127.0.0.1:8000
Hello, world!
$ fg
<press control-c>
```

The HTTP service works. We have a binary that was created with the Rust
compiler. This binary will be available at `./target/debug/helloworld`.
However, it could use some tests.

## Testing

Rocket has support for [unit testing][rockettest] built in. Let's create a
tests module and verify this route in testing.

[rockettest]: https://rocket.rs/v0.4/guide/testing/

### Create a tests module

Rust allows you to nest [modules][rustmod] within files using the `mod`
keyword. Create a `tests` module that will only build when testing is
requested:

[rustmod]: https://doc.rust-lang.org/rust-by-example/mod/visibility.html

```rust
#[cfg(test)] // Only compile this when unit testing is requested
mod tests {
    use super::*; // Modules are their own scope, so you
                  // need to explicitly use the stuff in
                  // the parent module.

    use rocket::http::Status;
    use rocket::local::*;

    #[test]
    fn test_index() {
        // create the rocket instance to test
        let rkt = rocket::ignite().mount("/", routes![index]);

        // create a HTTP client bound to this rocket instance
        let client = Client::new(rkt).expect("valid rocket");

        // get a HTTP response
        let mut response = client.get("/").dispatch();

        // Ensure it returns HTTP 200
        assert_eq!(response.status(), Status::Ok);

        // Ensure the body is what we expect it to be
        assert_eq!(response.body_string(), Some("Hello, world!".into()));
    }
}
```

### Run tests

`cargo test` is used to run tests in Rust. Let's run it:

```console
$ cargo test
   Compiling helloworld v0.1.0 (/home/cadey/code/helloworld)
    Finished test [unoptimized + debuginfo] target(s) in 1.80s
     Running target/debug/deps/helloworld-49d1bd4d4f816617

running 1 test
test tests::test_index ... ok
```

## Adding functionality

Most HTTP services return [JSON][json], or JavaScript Object Notation, as a
way to pass objects between computer programs. Let's use Rocket's [JSON
support][rocketjson] to add a `/hostinfo` route to this app that returns some
simple information:

[json]: https://www.json.org/json-en.html
[rocketjson]: https://api.rocket.rs/v0.4/rocket_contrib/json/index.html

- the hostname of the computer serving the response
- the process ID of the HTTP service
- the uptime of the system in seconds

### Encoding things to JSON

For encoding things to JSON, we will be using [serde][serde]. We will need to
add serde as a dependency. Open `Cargo.toml` and put the following lines in
it:

[serde]: https://serde.rs/

```toml
[dependencies]
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
```

This lets us use `#[derive(Serialize, Deserialize)]` on our Rust structs,
which will allow us to automate away the JSON generation code _at compile
time_. For more information about derivation in Rust, see [here][rustderive].

[rustderive]: https://doc.rust-lang.org/rust-by-example/trait/derive.html

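To get a feel for what that derive saves us from writing, here is roughly the
kind of hand-rolled JSON encoding it replaces, using the `HostInfo` shape this
post is about to define. This is an illustrative std-only sketch, not what
serde actually generates:

```rust
/// The same shape as the HostInfo struct below, encoded by hand.
struct HostInfo {
    hostname: String,
    pid: u32,
    uptime: u64,
}

impl HostInfo {
    // Hand-written JSON encoding. serde's derived Serialize impl does the
    // moral equivalent of this (plus escaping, nesting and error handling)
    // for every field, generated at compile time.
    fn to_json(&self) -> String {
        format!(
            "{{\"hostname\":\"{}\",\"pid\":{},\"uptime\":{}}}",
            self.hostname, self.pid, self.uptime
        )
    }
}

fn main() {
    let info = HostInfo {
        hostname: "shachi".to_string(),
        pid: 4291,
        uptime: 13641,
    };
    println!("{}", info.to_json());
    // prints {"hostname":"shachi","pid":4291,"uptime":13641}
}
```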
Let's define the data we will send back to the client using a
[struct][ruststruct]:

[ruststruct]: https://doc.rust-lang.org/rust-by-example/custom_types/structs.html

```rust
use serde::*;

/// Host information structure returned at /hostinfo
#[derive(Serialize, Debug)]
struct HostInfo {
    hostname: String,
    pid: u32,
    uptime: u64,
}
```

To implement this call, we will need another few dependencies in the
`Cargo.toml` file. We will use [gethostname][gethostname] to get the hostname
of the machine and [psutil][psutil] to get the uptime of the machine. Put the
following below the `serde` dependency line:

[gethostname]: https://crates.io/crates/gethostname
[psutil]: https://crates.io/crates/psutil

```toml
gethostname = "0.2.1"
psutil = "3.0.1"
```

Finally, we will need to enable Rocket's JSON support. Put the following at
the end of your `Cargo.toml` file:

```toml
[dependencies.rocket_contrib]
version = "0.4.4"
default-features = false
features = ["json"]
```

Now we can implement the `/hostinfo` route:

```rust
/// Create route /hostinfo that returns information about the host serving
/// this page.
#[get("/hostinfo")]
fn hostinfo() -> Json<HostInfo> {
    // gets the current machine hostname or "unknown" if the hostname doesn't
    // parse into UTF-8 (very unlikely)
    let hostname = gethostname::gethostname()
        .into_string()
        .unwrap_or_else(|_| "unknown".to_string());

    Json(HostInfo {
        hostname: hostname,
        pid: std::process::id(),
        uptime: psutil::host::uptime()
            .unwrap() // normally this is a bad idea, but this code is
                      // very unlikely to fail.
            .as_secs(),
    })
}
```

And then register it in the main function:

```rust
fn main() {
    rocket::ignite()
        .mount("/", routes![index, hostinfo])
        .launch();
}
```

Now rebuild the project and run the server:

```console
$ cargo build
$ ./target/debug/helloworld
```

And in another terminal test it with `curl`:

```console
$ curl http://127.0.0.1:8000/hostinfo
{"hostname":"shachi","pid":4291,"uptime":13641}
```

You can use a similar process for any other kind of route.

## OpenAPI specifications
|
||||
|
||||
[OpenAPI][openapi] is a common specification format for describing API routes.
|
||||
This allows users of the API to automatically generate valid clients for them.
|
||||
Writing these by hand can be tedious, so let's pass that work off to the
|
||||
compiler using [okapi][okapi].
|
||||
|
||||
[openapi]: https://swagger.io/docs/specification/about/
|
||||
[okapi]: https://github.com/GREsau/okapi
|
||||
|
||||
Add the following line to your `Cargo.toml` file in the `[dependencies]` block:
|
||||
|
||||
```toml
|
||||
rocket_okapi = "0.3.6"
|
||||
schemars = "0.6"
|
||||
okapi = { version = "0.3", features = ["derive_json_schema"] }
|
||||
```
|
||||
|
||||
This will allow us to generate OpenAPI specifications from Rocket routes and the
|
||||
types in them. Let's import the rocket_okapi macros and use them:
|
||||
|
||||
```rust
|
||||
// Import OpenAPI macros
|
||||
#[macro_use]
|
||||
extern crate rocket_okapi;
|
||||
|
||||
use rocket_okapi::JsonSchema;
|
||||
```
|
||||
|
||||
We need to add JSON schema generation abilities to `HostInfo`. Change:
|
||||
|
||||
```rust
|
||||
#[derive(Serialize, Debug)]
|
||||
```
|
||||
|
||||
to
|
||||
|
||||
```rust
|
||||
#[derive(Serialize, JsonSchema, Debug)]
|
||||
```
|
||||
|
||||
to generate the OpenAPI code for our type.
|
||||
|
||||
Next we can add the `/hostinfo` route to the OpenAPI schema:
|
||||
|
||||
```rust
|
||||
/// Create route /hostinfo that returns information about the host serving this
|
||||
/// page.
|
||||
#[openapi]
|
||||
#[get("/hostinfo")]
|
||||
fn hostinfo() -> Json<HostInfo> {
|
||||
// ...
|
||||
```
|
||||
|
||||
Also add the index route to the OpenAPI schema:

```rust
/// Create route / that returns "Hello, world!"
#[openapi]
#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}
```

And finally update the main function to use openapi:

```rust
fn main() {
    rocket::ignite()
        .mount("/", routes_with_openapi![index, hostinfo])
        .launch();
}
```

Then rebuild it and run the server:

```console
$ cargo build
$ ./target/debug/helloworld
```

And then in another terminal:

```console
$ curl http://127.0.0.1:8000/openapi.json
```

This should return a large JSON object that describes all of the HTTP routes and
the data they return. To see this visually, change main to this:

```rust
use rocket_okapi::swagger_ui::{make_swagger_ui, SwaggerUIConfig};

fn main() {
    rocket::ignite()
        .mount("/", routes_with_openapi![index, hostinfo])
        .mount(
            "/swagger-ui/",
            make_swagger_ui(&SwaggerUIConfig {
                url: Some("../openapi.json".to_owned()),
                urls: None,
            }),
        )
        .launch();
}
```

Then rebuild and run the service:

```console
$ cargo build
$ ./target/debug/helloworld
```

And [open the swagger UI](http://127.0.0.1:8000/swagger-ui/) in your favorite
browser. This will show you a graphical display of all of the routes and the
data types in your service. For an example, see
[here](https://printerfacts.cetacean.club/swagger-ui/index.html).

## Error responses

Earlier in the `/hostinfo` route we glossed over error handling. Let's correct
this by using the [okapi error type][okapierror], `OpenApiError`, in the
`hostinfo` function:

[okapierror]: https://docs.rs/rocket_okapi/0.3.6/rocket_okapi/struct.OpenApiError.html

```rust
/// Create route /hostinfo that returns information about the host serving
/// this page.
#[openapi]
#[get("/hostinfo")]
fn hostinfo() -> Result<Json<HostInfo>> {
    match gethostname::gethostname().into_string() {
        Ok(hostname) => Ok(Json(HostInfo {
            hostname: hostname,
            pid: std::process::id(),
            uptime: psutil::host::uptime().unwrap().as_secs(),
        })),
        Err(_) => Err(OpenApiError::new(format!(
            "hostname does not parse as UTF-8"
        ))),
    }
}
```

When the `into_string` operation fails (because the hostname is somehow invalid
UTF-8), this will result in a non-200 response with the `"hostname does not parse
as UTF-8"` message.

## Shipping it in a Docker image

Many deployment systems use [Docker][docker] to describe a program's environment
and dependencies. Create a `Dockerfile` with the following contents:

```Dockerfile
# Use the minimal image
FROM rustlang/rust:nightly-slim AS build

# Where we will build the program
WORKDIR /src/helloworld

# Copy source code into the container
COPY . .

# Build the program in release mode
RUN cargo build --release

# Create the runtime image
FROM ubuntu:18.04

# Copy the compiled service binary
COPY --from=build /src/helloworld/target/release/helloworld /usr/local/bin/helloworld

# Start the helloworld service on container boot
CMD ["/usr/local/bin/helloworld"]
```

And then build it:

```console
$ docker build -t xena/helloworld .
```

And then run it:

```console
$ docker run --rm -itp 8000:8000 xena/helloworld
```

And in another terminal:

```console
$ curl http://127.0.0.1:8000
Hello, world!
```

From here you can do whatever you want with this service. You can deploy it to
Kubernetes with a manifest that would look something like [this][k8shack].

[k8shack]: https://clbin.com/zSPDs

---

This is how I start a new Rust project. I put all of the code described in this
post in [this GitHub repo][helloworldrepo] in case it helps. Have fun and be
well.

[helloworldrepo]: https://github.com/Xe/helloworld

---

For some "extra credit" tasks, see if you can do the following:

- Customize the environment of the container by following the [Rocket
  configuration documentation](https://rocket.rs/v0.4/guide/configuration/) and
  Docker [environment variables][dockerenvvars]
- Use Rocket's [templates][rockettemplate] to make the host information show up
  in HTML
- Add tests for the `/hostinfo` route
- Make a route that always returns errors; what does it look like?

[dockerenvvars]: https://docs.docker.com/engine/reference/builder/#env
[rockettemplate]: https://api.rocket.rs/v0.4/rocket_contrib/templates/index.html

Many thanks to [Coleman McFarland](https://coleman.codes/) for proofreading this
post.

---
title: "How Mara Works"
date: 2020-09-30
tags:
 - avif
 - webp
 - markdown
---

# How Mara Works

Recently I introduced Mara to this blog, but I didn't explain much of the theory
and implementation behind them so I could proceed with the rest of that post.
There was actually a significant amount of engineering that went into
implementing Mara, and I'd like to go into detail about it as well as explain
how I integrated them into this blog.

## Mara's Background

Mara is an anthropomorphic shark. They are nonbinary and go by they/she
pronouns. Mara enjoys hacking and swimming, and is a Chaotic Good Rogue in the
tabletop games I've played her in. Mara was originally made to help test my
upcoming tabletop game The Source, and I have used them in a few solitaire
tabletop sessions (click
[here](http://cetacean.club/journal/mara-castle-charon.gmi) to read the results
of one of these).

[I use a hand-soldered <a href="https://www.ergodox.io/">Ergodox</a> with the <a
href="https://www.artofchording.com/">stenographer</a> layout so I can dab on
the haters at 200 words per minute!](conversation://Mara/hacker)

## The Theory

My blogposts have a habit of getting long, wordy and sometimes pretty damn dry.
I notice that there are usually a few common threads in how this becomes the
case, so I want to do these three things to help keep things engaging:

1. I go into detail. A lot of detail. This can make paragraphs long and wordy
   because there is legitimately a lot to cover. [fasterthanlime's Cool Bear's
   Hot Tip](https://fasterthanli.me/articles/image-decay-as-a-service) is a
   good way to help Amos focus on the core message and let another character
   bring up the finer details that may stray from it.
2. I have been looking into how to integrate concepts from the [Socratic
   method](https://en.wikipedia.org/wiki/Socratic_method) into my posts. The
   Socratic method focuses on dialogue/questions and answers between
   interlocutors as a way to explore a topic that can be dry or vague.
3. [Soatok's
   blog](https://soatok.blog/2020/09/12/edutech-spyware-is-still-spyware-proctorio-edition/)
   was an inspiration for this. Soatok dives into deep technical topics that
   can feel like a slog, and inserts some stickers between paragraphs to help
   keep things upbeat and lively.

I wanted to make a unique way to help break up walls of text using the concepts
of Cool Bear's Hot Tip and the Socratic method with some furry art sprinkled
in, and I eventually arrived at Mara.

[Fun fact! My name was originally derived from a <a
href="https://en.wikipedia.org/wiki/Mara_(demon)">Buddhist conceptual demon of
forces antagonistic to enlightenment</a>, which is deliciously ironic given that
my role is to help people understand things now.](conversation://Mara/hacker)

## How Mara is Implemented

I write my blogposts in
[Markdown](https://daringfireball.net/projects/markdown/), specifically a
dialect with some niceties from [GitHub flavored
markdown](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown)
as parsed by [comrak](https://docs.rs/comrak). Mara's interjections are actually
specially formed links, such as this:

[Hi! I am saying something!](conversation://Mara/hacker)

```markdown
[Hi! I am saying something!](conversation://Mara/hacker)
```

Notice how the destination URL doesn't actually exist. It's actually intercepted
in my [markdown parsing
function](https://github.com/Xe/site/blob/b540631792493169bd41f489c18b7369159d12a9/src/app/markdown.rs#L8)
and then an [HTML
template](https://github.com/Xe/site/blob/b540631792493169bd41f489c18b7369159d12a9/templates/mara.rs.html#L1)
is used to create the divs that make up the image and conversation bits. I have
intentionally left this open so I can add more characters in the future. I may
end up making some stickers for myself so I can reply to Mara a la [this
blogpost by
fasterthanlime](https://fasterthanli.me/articles/so-you-want-to-live-reload-rust)
(search for "What's with the @@GLIBC_2.2.5 suffixes?"). The syntax of the URL is
as follows:

```
conversation://<character>/<mood>[?reply]
```
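As a rough sketch of what the interception has to do with one of those URLs,
here is the same split done with POSIX parameter expansion. This is purely
illustrative; the real parsing lives in the Rust function linked above, and the
variable names here are made up:

```shell
#!/bin/sh
# Illustrative only: split a conversation:// URL into its character and
# mood parts, dropping the optional ?reply query.
url="conversation://Mara/hmm?reply"
rest="${url#conversation://}"  # strip the scheme -> Mara/hmm?reply
character="${rest%%/*}"        # everything before the first slash
mood="${rest#*/}"              # everything after the first slash
mood="${mood%%\?*}"            # drop the optional query string
echo "character=$character mood=$mood"
```

The character and mood then select which sticker image gets rendered into the
dialogue template.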

This will then fetch the images off of my CDN hosted by CloudFlare. However, if
you are using Tor to view my site, this may result in not being able to see the
images. I am working on ways to solve this. Please bear with me, this stuff is
hard.

You may have noticed that Mara sometimes has links inside her dialogue.
Understandably, this is something that vanilla markdown does not support.
However, I enabled putting raw HTML in my markdown, which lets this work
anyway! Consider this:

[My art was drawn by <a
href="https://selic.re">Selicre</a>!](conversation://Mara/hacker)

In the markdown source, that actually looks like this:

```markdown
[My art was drawn by <a href="https://selic.re">Selicre</a>!](conversation://Mara/hacker)
```

This is honestly one of my favorite parts of how this is implemented, though
others I have shown this to say it's kind of terrifying.

### The `<picture>` Element and Image Formats

Something you might notice about the HTML template is that I use the
[`<picture>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/picture)
element like this:

```html
<picture>
  <source srcset="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).avif" type="image/avif">
  <source srcset="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).webp" type="image/webp">
  <img src="https://cdn.christine.website/file/christine-static/stickers/@character.to_lowercase()/@(mood).png" alt="@character is @mood">
</picture>
```
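To see concretely what that template expands to for one sticker, here is a
throwaway shell sketch that fills in the same three URLs for a given character
and mood. The real rendering happens in the Rust HTML template, not shell; this
just shows the shape of the output:

```shell
#!/bin/sh
# Throwaway sketch: print the <picture> markup the template produces for
# one character/mood pair.
character=mara
mood=hacker
base="https://cdn.christine.website/file/christine-static/stickers/$character/$mood"
cat <<EOF
<picture>
  <source srcset="$base.avif" type="image/avif">
  <source srcset="$base.webp" type="image/webp">
  <img src="$base.png" alt="$character is $mood">
</picture>
EOF
```

The browser walks the `<source>` entries top to bottom and uses the first
format it supports, falling back to the plain `<img>` tag.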

The `<picture>` element allows me to specify multiple versions of the stickers
and have your browser pick the image format that it supports. It is also fully
backwards compatible with browsers that do not support `<picture>`, and in
those cases you will see the fallback image in .png format. I went into a lot
of detail about this in [a twitter
thread](https://twitter.com/theprincessxena/status/1310358201842401281?s=21),
but in short here is how each of the formats looks next to its filesize
information:

![](https://cdn.christine.website/file/christine-static/blog/mara_png.png)
![](https://cdn.christine.website/file/christine-static/blog/mara_webp.png)
![](https://cdn.christine.website/file/christine-static/blog/mara_avif.png)

The
[avif](https://reachlightspeed.com/blog/using-the-new-high-performance-avif-image-format-on-the-web-today/)
version does have the ugliest quality when blown up, however consider how small
these stickers will appear on the webpages:

[This is how big the stickers will appear, or is it?](conversation://Mara/hmm)

At these sizes most people will not notice any lingering artifacts unless they
look closely. However, at about 5-6 kilobytes per image I think the smaller
filesize greatly wins out. This helps keep page loads fast, which is something I
want to optimize for as it makes people think my website loads quickly.

I go into a lot more detail in the twitter thread, but the commands I use to get
the webp and avif versions of the stickers are as follows:

```shell
#!/bin/sh

cwebp \
  $1.png \
  -o $1.webp
avifenc \
  $1.png \
  -o $1.avif \
  -s 0 \
  -d 8 \
  --min 48 \
  --max 48 \
  --minalpha 48 \
  --maxalpha 48
```
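That script converts one sticker at a time. A small wrapper could queue up the
same commands for a whole set of stickers; this dry-run sketch (with made-up
sticker names) just prints what would run, and dropping the leading `echo`
would execute it for real, assuming `cwebp` and `avifenc` are installed:

```shell
#!/bin/sh
# Dry-run sketch: print the conversion commands for each sticker name.
# Remove the leading "echo" to actually run them (needs cwebp/avifenc).
for name in hacker hmm; do
  echo cwebp "$name.png" -o "$name.webp"
  echo avifenc "$name.png" -o "$name.avif" -s 0 -d 8 \
    --min 48 --max 48 --minalpha 48 --maxalpha 48
done
```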

I plan to automate this further in the future, but for the scale I am at this
works fine. These stickers are then uploaded to my cloud storage bucket, and
CloudFlare provides a CDN for them so they can load very quickly.

---

Anyways, this is how Mara is implemented and some of the challenges that went
into developing them as a feature (while leaving the door open for other
characters in the future). Mara is here to stay and I have gotten a lot of
positive feedback about her.

As a side note, for those of you that are not amused that I am choosing to have
Mara (and consequently furry art in general) as a site feature, I can only
hope that you can learn to respect that as an independent blogger I am free to
implement my blog (and the content that I am choosing to provide _FOR FREE_,
even though I've gotten requests to make it paid content) as I see fit. Further
complaints will only increase the amount of furry art in future posts.

Be well all.

---
title: How to Send Email with Nim
date: 2019-08-28
series: howto
tags:
 - nim
 - email
---

# How to Send Email with Nim

Nim offers an [smtp][nimsmtp] module, but it is a bit annoying to use out of
the box. This blogpost hopes to be a mini-tutorial on the basics of how to use
the smtp library and to give developers best practices for handling outgoing
email in ways that Google or iCloud will accept.

## SMTP in a Nutshell

[SMTP][SMTPrfc], or the Simple Mail Transfer Protocol, is the backbone of how
email works. It's a very simple line-based protocol, and there are wrappers for
it in almost every programming language. Usage is pretty simple:

- The client connects to the server
- The client authenticates itself with the server
- The client signals that it would like to create an outgoing message to the server
- The client sends the raw contents of the message to the server
- The client ends the message
- The client disconnects

Unfortunately, the devil is truly in the details here. There are a few headers
that _absolutely must_ be present in your emails in order for services like
GMail to accept them. They are:

- The `From` header specifying where the message was sent from
- The `MIME-Version` that your code is using (if you aren't sure, put `1.0` here)
- The `Content-Type` that your code is sending to users (probably `text/plain`)
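Put together, the raw message your client hands to the server ends up looking
roughly like this. The addresses and subject are made up for illustration, and
a here-doc stands in for whatever your library generates:

```shell
#!/bin/sh
# Roughly what a well-formed generated message looks like on the wire:
# headers first, then a blank line, then the body.
msg='From: Example Sender <sender@example.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: Testing

Hello from Nim!'
printf '%s\n' "$msg"
```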
For a more complete example, let's create a `Mailer` type and a constructor:

```nim
# mailer.nim
import asyncdispatch, logging, smtp, strformat, strutils

type Mailer* = object
  address: string
  port: Port
  myAddress: string
  myName: string
  username: string
  password: string

proc newMailer*(address, port, myAddress, myName, username, password: string): Mailer =
  result = Mailer(
    address: address,
    port: port.parseInt.Port,
    myAddress: myAddress,
    myName: myName,
    username: username,
    password: password,
  )
```

And let's write a `mail` proc to send out email:

```nim
proc mail(m: Mailer, to, toName, subject, body: string) {.async.} =
  let
    toList = @[fmt"{toName} <{to}>"]
    msg = createMessage(subject, body, toList, @[], [
      ("From", fmt"{m.myName} <{m.myAddress}>"),
      ("MIME-Version", "1.0"),
      ("Content-Type", "text/plain"),
    ])

  var client = newAsyncSmtp(useSsl = true)
  await client.connect(m.address, m.port)
  await client.auth(m.username, m.password)
  await client.sendMail(m.myAddress, toList, $msg)
  info "sent email to: ", to, " about: ", subject
  await client.close()
```

Breaking this down, you can clearly see the parts of the SMTP connection as I
laid out before. The `Mailer` creates a new transient SMTP connection,
authenticates with the remote server, sends the properly formatted email to
the server and then closes the connection cleanly.

If you want to test this code, I suggest testing it with a freely available
email provider that offers TLS/SSL-encrypted SMTP support. This also means that
you need to compile this code with `--define: ssl`, so create `config.nims` and
add the following:

```nimscript
--define: ssl
```

Here's a little wrapper using [cligen][cligen]:

```nim
when isMainModule:
  import cligen, os

  let
    smtpAddress = getEnv("SMTP_ADDRESS")
    smtpPort = getEnv("SMTP_PORT")
    smtpMyAddress = getEnv("SMTP_MY_ADDRESS")
    smtpMyName = getEnv("SMTP_MY_NAME")
    smtpUsername = getEnv("SMTP_USERNAME")
    smtpPassword = getEnv("SMTP_PASSWORD")

  proc sendAnEmail(to, toName, subject, body: string) =
    let m = newMailer(smtpAddress, smtpPort, smtpMyAddress, smtpMyName, smtpUsername, smtpPassword)
    waitFor m.mail(to, toName, subject, body)

  dispatch(sendAnEmail)
```

Usage is simple:

```console
$ nim c -r mailer.nim --help
Usage:
  sendAnEmail [required&optional-params]
Options(opt-arg sep :|=|spc):
  -h, --help                         print this cligen-erated help
  --help-syntax                      advanced: prepend,plurals,..
  -t=, --to=       string  REQUIRED  set to
  --toName=        string  REQUIRED  set toName
  -s=, --subject=  string  REQUIRED  set subject
  -b=, --body=     string  REQUIRED  set body
```

I hope this helps; this module is going to be used in my future post on how to
create an application using Nim's [Jester][jester] framework.

[nimsmtp]: https://nim-lang.org/docs/smtp.html
[SMTPrfc]: https://tools.ietf.org/html/rfc5321
[jester]: https://github.com/dom96/jester
[cligen]: https://github.com/c-blake/cligen

---
title: How to Use User Mode Linux
date: 2019-07-07
series: howto
---

# How to Use User Mode Linux

[User Mode Linux](http://user-mode-linux.sourceforge.net) is a port of the
[Linux kernel](https://www.kernel.org) to itself. This allows you to run a
full-blown Linux kernel as a normal userspace process. It is used by kernel
developers for testing drivers, but is also useful as a generic isolation layer
similar to virtual machines. It provides slightly more isolation than
[Docker](https://www.docker.com), but slightly less isolation than a full-blown
virtual machine like KVM or VirtualBox.

In general, this may sound like a weird and hard-to-integrate tool, but it does
have its uses. It is an entire Linux kernel running as a normal user. This
allows you to run potentially untrusted code without affecting the host
machine. It also allows you to test experimental system configuration changes
without having to reboot or take services down.

Also, because this kernel and its processes are isolated from the host machine,
processes running inside a user mode Linux kernel will _not_ be visible to the
host machine. This is unlike a Docker container, where processes in those
containers are visible to the host. See this (snipped) pstree output from one
of my servers:

```
containerd─┬─containerd-shim─┬─tini─┬─dnsd───19*[{dnsd}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─aerial───21*[{aerial}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─s6-svscan───s6-supervise
           │                 │      └─surl
           │                 └─9*[{containerd-shim}]
           ├─containerd-shim─┬─tini─┬─h───13*[{h}]
           │                 │      └─s6-svscan───s6-supervise
           │                 └─10*[{containerd-shim}]
           ├─containerd-shim─┬─goproxy───14*[{goproxy}]
           │                 └─9*[{containerd-shim}]
           └─32*[{containerd}]
```

Compare it to the user mode Linux pstree output:

```
linux─┬─5*[linux]
      └─slirp
```

With a Docker container, I can see the names of the processes being run in the
guest from the host. With a user mode Linux kernel, I cannot. This means that
monitoring tools that function using [Linux's auditing subsystem](https://www.digitalocean.com/community/tutorials/how-to-use-the-linux-auditing-system-on-centos-7)
_cannot_ monitor processes running inside the guest. This could be a
double-edged sword in some edge scenarios.

This post represents a lot of research and brute-force attempts at doing this.
I have had to assemble things together using old resources, reading kernel
source code, intense debugging of code that was last released when I was in
elementary school, tracking down a Heroku buildpack with a pre-built binary for
a tool I need, and other hackery that made people in IRC call me magic. I hope
that this post will function as reliable documentation for doing this with a
modern kernel and operating system.

## Setup

Setting up user mode Linux is done in a few steps:

- Installing host dependencies
- Downloading Linux
- Configuring Linux
- Building the kernel
- Installing the binary
- Setting up the guest filesystem
- Creating the kernel command line
- Setting up networking for the guest
- Running the guest kernel

I am assuming that you want to do this on Ubuntu or another Debian-like system.
I have tried to do this from Alpine (my distro of choice), but I have been
unsuccessful, as the Linux kernel seems to have glibc-isms hard-assumed in the
user mode Linux drivers. I plan to report these to upstream when I have
debugged them further.

### Installing Host Dependencies

Ubuntu requires at least the following packages installed to build the Linux
kernel (assuming a completely fresh install):

- `build-essential`
- `flex`
- `bison`
- `xz-utils`
- `wget`
- `ca-certificates`
- `bc`
- `linux-headers-4.15.0-47-generic` (though any kernel version will do)

You can install these with the following command (as root or running with sudo):

```
apt-get -y install build-essential flex bison xz-utils wget ca-certificates bc \
  linux-headers-4.15.0-47-generic
```

Additionally, running the menu configuration program for the Linux kernel
requires `libncurses-dev`. Please make sure it's installed using the following
command (as root or running with sudo):

```
apt-get -y install libncurses-dev
```

### Downloading the Kernel

Set up a location for the kernel to be downloaded and built. This will require
approximately 1.3 gigabytes of space, so please make sure that there is at
least this much space free.

Head to [kernel.org](https://www.kernel.org) and get the download URL of the
latest stable kernel. As of the time of writing this post, this URL is the
following:

```
https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz
```

Download this file with `wget`:

```
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.1.16.tar.xz
```

And extract it with `tar`:

```
tar xJf linux-5.1.16.tar.xz
```

Now enter the directory created by the tarball extraction:

```
cd linux-5.1.16
```

### Configuring the Kernel

The kernel build system is a bunch of [Makefiles](https://en.wikipedia.org/wiki/Makefile)
with a _lot_ of custom tools and scripts to automate builds. Open the
interactive configuration program:

```
make ARCH=um menuconfig
```

It will build some things and then present you with a dialog interface. You can
enable settings by pressing `Space` or `Enter` when `<Select>` is highlighted at
the bottom of the screen. You can change which item is selected in the upper
dialog with the up and down arrow keys. You can change which item is highlighted
at the bottom of the screen with the left and right arrow keys.

When there is a `--->` at the end of a feature name, that means it is a submenu.
You can enter a submenu using the `Enter` key. If you enter a menu, you can exit
it with `<Exit>`.

Enable the following settings with `<Select>`, making sure there is a `[*]` next
to them:

```
UML-specific Options:
  - Host filesystem
Networking support (enable this to get the submenu to show up):
  - Networking options:
    - TCP/IP Networking
UML Network devices:
  - Virtual network device
  - SLiRP transport
```

Then exit back out to a shell by selecting `<Exit>` until there is a dialog
asking you if you want to save your configuration. Select `<Yes>` and hit
`Enter`.

I encourage you to play around with the build settings after reading through
this post. You can learn a lot about Linux at a low level by changing flags and
seeing how they affect the kernel at runtime.

### Building the Kernel

The Linux kernel is a large program with a lot going on. Even with this rather
minimal configuration, it can take a while to build on older hardware. Build
the kernel with the following command:

```
make ARCH=um -j$(nproc)
```

This tells `make` to use all available CPU cores/hyperthreads to build the
kernel. The `$(nproc)` at the end of the build command tells the shell to paste
in the output of the `nproc` command (this command is part of `coreutils`,
which is a default package in Ubuntu).

After a while, the kernel will be built to `./linux`.

### Installing the Binary

Because user mode Linux builds a normal binary, you can install it like you
would any other command line tool. Here's the configuration I use:

```
mkdir -p ~/bin
cp linux ~/bin/linux
```

If you want, ensure that `~/bin` is in your `$PATH`:

```
export PATH=$PATH:$HOME/bin
```

### Setting up the Guest Filesystem

Create a home for the guest filesystem:

```
mkdir -p $HOME/prefix/uml-demo
cd $HOME/prefix
```

Open [alpinelinux.org](https://alpinelinux.org). Click on [Downloads](https://alpinelinux.org/downloads).
Scroll down to where it lists the `MINI ROOT FILESYSTEM`. Right-click on the
`x86_64` link and copy it. As of the time of writing this post, the latest URL
for this is:

```
http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz
```

Download this tarball to your computer:

```
wget -O alpine-rootfs.tgz http://dl-cdn.alpinelinux.org/alpine/v3.10/releases/x86_64/alpine-minirootfs-3.10.0-x86_64.tar.gz
```

Now enter the guest filesystem folder and extract the tarball:

```
cd uml-demo
tar xf ../alpine-rootfs.tgz
```

This will create a very minimal filesystem stub. Because of how this is being
run, it will be difficult to install binary packages from Alpine's package
manager `apk`, but this should be good enough to work as a proof of concept.

The tool [`tini`](https://github.com/krallin/tini) will be needed in order to
prevent the guest kernel from having its memory used up by [zombie processes](https://en.wikipedia.org/wiki/Zombie_process).
Install it by doing the following:

```
wget -O tini https://github.com/krallin/tini/releases/download/v0.18.0/tini-static
chmod +x tini
```

### Creating the Kernel Command Line

The Linux kernel has command line arguments like most other programs. To view
the command line options compiled into the user mode kernel, run it with
`--help`:

```
linux --help
User Mode Linux v5.1.16
        available at http://user-mode-linux.sourceforge.net/

--showconfig
    Prints the config file that this UML binary was generated from.

iomem=<name>,<file>
    Configure <file> as an IO memory region named <name>.

mem=<Amount of desired ram>
    This controls how much "physical" memory the kernel allocates
    for the system. The size is specified as a number followed by
    one of 'k', 'K', 'm', 'M', which have the obvious meanings.
    This is not related to the amount of memory in the host. It can
    be more, and the excess, if it's ever used, will just be swapped out.
        Example: mem=64M

--help
    Prints this message.

debug
    this flag is not needed to run gdb on UML in skas mode

root=<file containing the root fs>
    This is actually used by the generic kernel in exactly the same
    way as in any other kernel. If you configure a number of block
    devices and want to boot off something other than ubd0, you
    would use something like:
        root=/dev/ubd5

--version
    Prints the version number of the kernel.

umid=<name>
    This is used to assign a unique identity to this UML machine and
    is used for naming the pid file and management console socket.

con[0-9]*=<channel description>
    Attach a console or serial line to a host channel. See
    http://user-mode-linux.sourceforge.net/old/input.html for a complete
    description of this switch.

eth[0-9]+=<transport>,<options>
    Configure a network device.

aio=2.4
    This is used to force UML to use 2.4-style AIO even when 2.6 AIO is
    available. 2.4 AIO is a single thread that handles one request at a
    time, synchronously. 2.6 AIO is a thread which uses the 2.6 AIO
    interface to handle an arbitrary number of pending requests. 2.6 AIO
    is not available in tt mode, on 2.4 hosts, or when UML is built with
    /usr/include/linux/aio_abi.h not available. Many distributions don't
    include aio_abi.h, so you will need to copy it from a kernel tree to
    your /usr/include/linux in order to build an AIO-capable UML

nosysemu
    Turns off syscall emulation patch for ptrace (SYSEMU).
    SYSEMU is a performance-patch introduced by Laurent Vivier. It changes
    behaviour of ptrace() and helps reduce host context switch rates.
    To make it work, you need a kernel patch for your host, too.
    See http://perso.wanadoo.fr/laurent.vivier/UML/ for further
    information.

uml_dir=<directory>
    The location to place the pid and umid files.

quiet
    Turns off information messages during boot.

hostfs=<root dir>,<flags>,...
|
||||
This is used to set hostfs parameters. The root directory argument
|
||||
is used to confine all hostfs mounts to within the specified directory
|
||||
tree on the host. If this isn't specified, then a user inside UML can
|
||||
mount anything on the host that's accessible to the user that's running
|
||||
it.
|
||||
The only flag currently supported is 'append', which specifies that all
|
||||
files opened by hostfs will be opened in append mode.
|
||||
```
|
||||
|
||||
This is a lot of output, but it explains the options available in detail. Let's
|
||||
start up a kernel with a very minimal set of options:
|
||||
|
||||
```
|
||||
linux \
|
||||
root=/dev/root \
|
||||
rootfstype=hostfs \
|
||||
rootflags=$HOME/prefix/uml-demo \
|
||||
rw \
|
||||
mem=64M \
|
||||
init=/bin/sh
|
||||
```

This tells the guest kernel to do the following things:

- Assume the root filesystem is the pseudo-device `/dev/root`
- Select [hostfs](http://user-mode-linux.sourceforge.net/hostfs.html) as the root filesystem driver
- Mount the guest filesystem we have created as the root device
- In read-write mode
- Use only 64 megabytes of ram (you can get away with far less depending on what you are doing, but 64 MB seems to be a happy medium)
- Have the kernel automatically start `/bin/sh` as the `init` process

Run this command, and you should get something like the following output:

```
Core dump limits :
        soft - 0
        hard - NONE
Checking that ptrace can change system call numbers...OK
Checking syscall emulation patch for ptrace...OK
Checking advanced syscall emulation patch for ptrace...OK
Checking environment variables for a tempdir...none found
Checking if /dev/shm is on tmpfs...OK
Checking PROT_EXEC mmap in /dev/shm...OK
Adding 32137216 bytes to physical memory to account for exec-shield gap
Linux version 5.1.16 (cadey@kahless) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #30 Sun Jul 7 18:57:19 UTC 2019
Built 1 zonelists, mobility grouping on. Total pages: 23898
Kernel command line: root=/dev/root rootflags=/home/cadey/dl/uml/alpine rootfstype=hostfs rw mem=64M init=/bin/sh
Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
Inode-cache hash table entries: 8192 (order: 4, 65536 bytes)
Memory: 59584K/96920K available (2692K kernel code, 708K rwdata, 588K rodata, 104K init, 244K bss, 37336K reserved, 0K cma-reserved)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
NR_IRQS: 15
clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns
Calibrating delay loop... 7479.29 BogoMIPS (lpj=37396480)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
Checking that host ptys support output SIGIO...Yes
Checking that host ptys support SIGIO on close...No, enabling workaround
devtmpfs: initialized
random: get_random_bytes called from setup_net+0x48/0x1e0 with crng_init=0
Using 2.6 host AIO
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
futex hash table entries: 256 (order: 0, 6144 bytes)
NET: Registered protocol family 16
clocksource: Switched to clocksource timer
NET: Registered protocol family 2
tcp_listen_portaddr_hash hash table entries: 256 (order: 0, 4096 bytes)
TCP established hash table entries: 1024 (order: 1, 8192 bytes)
TCP bind hash table entries: 1024 (order: 1, 8192 bytes)
TCP: Hash tables configured (established 1024 bind 1024)
UDP hash table entries: 256 (order: 1, 8192 bytes)
UDP-Lite hash table entries: 256 (order: 1, 8192 bytes)
NET: Registered protocol family 1
console [stderr0] disabled
mconsole (version 2) initialized on /home/cadey/.uml/tEwIjm/mconsole
Checking host MADV_REMOVE support...OK
workingset: timestamp_bits=62 max_order=14 bucket_order=0
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered (default)
io scheduler bfq registered
loop: module loaded
NET: Registered protocol family 17
Initialized stdio console driver
Using a channel type which is configured out of UML
setup_one_line failed for device 1 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 2 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 3 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 4 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 5 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 6 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 7 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 8 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 9 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 10 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 11 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 12 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 13 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 14 : Configuration failed
Using a channel type which is configured out of UML
setup_one_line failed for device 15 : Configuration failed
Console initialized on /dev/tty0
console [tty0] enabled
console [mc-1] enabled
Failed to initialize ubd device 0 :Couldn't determine size of device's file
VFS: Mounted root (hostfs filesystem) on device 0:11.
devtmpfs: mounted
This architecture does not have kernel memory protection.
Run /bin/sh as init process
/bin/sh: can't access tty; job control turned off
random: fast init done
/ #
```

This gives you a _very minimal_ system, without things like `/proc` mounted or
a hostname assigned. Try the following commands:

- `uname -av`
- `cat /proc/self/pid`
- `hostname`

To exit this system, type `exit` or press Control-d. This will kill the shell,
making the guest kernel panic:

```
/ # exit
Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000000
fish: “./linux root=/dev/root rootflag…” terminated by signal SIGABRT (Abort)
```

This kernel panic happens because the Linux kernel always assumes that its init
process is running. Without it, the system cannot function and exits. Because
this kernel is itself a user mode process, it reports the panic by sending
itself `SIGABRT`, causing it to exit.

### Setting up Networking for the Guest

This is about where things get really screwy. Networking for a user mode Linux
system is where the "user mode" facade starts to fall apart. Networking at the
_system_ level is usually limited to _privileged_ execution modes, for very
understandable reasons.

#### The slirp Adventure

However, there's an ancient and largely unmaintained tool called [slirp](https://en.wikipedia.org/wiki/Slirp)
that user mode Linux can interface with. It acts as a user-level TCP/IP stack
and does not rely on any elevated permissions to run. This tool was first
released in _1995_, and its last release was made in _2006_. This tool is old
enough that compilers have changed so much in the meantime that the software
has effectively [rotten](https://en.wikipedia.org/wiki/Software_rot).

So, let's install slirp from the Ubuntu repositories and test running it:

```
sudo apt-get install slirp
/usr/bin/slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...
fish: “/usr/bin/slirp” terminated by signal SIGSEGV (Address boundary error)
```

Oh dear. Let's [install the debug symbols](https://wiki.ubuntu.com/Debug%20Symbol%20Packages)
for slirp and see if we can tell what's going on:

```
sudo apt-get install gdb slirp-dbgsym
gdb /usr/bin/slirp
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/slirp...Reading symbols from /usr/lib/debug/.build-id/c6/2e75b69581a1ad85f72ac32c0d7af913d4861f.debug...done.
done.
(gdb) run
Starting program: /usr/bin/slirp
Slirp v1.0.17 (BETA)

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500, 115200 baud]

SLiRP Ready ...

Program received signal SIGSEGV, Segmentation fault.
ip_slowtimo () at ip_input.c:457
457     ip_input.c: No such file or directory.
```

It fails at [this line](https://github.com/Pradeo/Slirp/blob/master/src/ip_input.c#L457).
Let's see the detailed stacktrace to see if anything helps us:

```
(gdb) bt full
#0  ip_slowtimo () at ip_input.c:457
        fp = 0x55784a40
#1  0x000055555556a57c in main_loop () at ./main.c:980
        so = <optimized out>
        so_next = <optimized out>
        timeout = {tv_sec = 0, tv_usec = 0}
        ret = 0
        nfds = 0
        ttyp = <optimized out>
        ttyp2 = <optimized out>
        best_time = <optimized out>
        tmp_time = <optimized out>
#2  0x000055555555b116 in main (argc=1, argv=0x7fffffffdc58) at ./main.c:95
No locals.
```

So it's failing [in its main loop](https://github.com/Pradeo/Slirp/blob/master/src/main.c#L972)
while it is trying to check if any timeouts occurred. This is where I had to give
up trying to debug this further. Let's see if building it from source works. I
re-uploaded the tarball from [Sourceforge](http://slirp.sourceforge.net) because
downloading tarballs from Sourceforge from the command line is a pain.

```
cd ~/dl
wget https://xena.greedo.xeserv.us/files/slirp-1.0.16.tar.gz
tar xf slirp-1.0.16.tar.gz
cd slirp-1.0.16/src
./configure --prefix=$HOME/prefix/slirp
make
```

This spews warnings about undefined inline functions, and then fails to link
the resulting binary. It appears that at some point between the release of this
software and the current day, gcc stopped creating symbols for inline functions
in intermediate compiled files. Let's try to globally replace the `inline`
keyword with an empty comment to see if that works:

```
vi slirp.h
:6
a
<enter>
#define inline /**/
<escape>
:wq
make
```

Nope. That doesn't work either. It continues to fail to find the symbols for
those inline functions.

This is when I gave up. I started searching GitHub for [Heroku buildpacks](https://devcenter.heroku.com/articles/buildpacks)
that already had this implemented or done. My theory was that a Heroku
buildpack would probably include the binaries I needed, so I searched for a bit
and found [this buildpack](https://github.com/sleirsgoevy/heroku-buildpack-uml).
I downloaded it and extracted `uml.tar.gz` and found the following files:

```
total 6136
-rwxr-xr-x 1 cadey cadey   79744 Dec 10  2017 ifconfig*
-rwxr-xr-x 1 cadey cadey     373 Dec 13  2017 init*
-rwxr-xr-x 1 cadey cadey  149688 Dec 10  2017 insmod*
-rwxr-xr-x 1 cadey cadey   66600 Dec 10  2017 route*
-rwxr-xr-x 1 cadey cadey  181056 Jun 26  2015 slirp*
-rwxr-xr-x 1 cadey cadey 5786592 Dec 15  2017 uml*
-rwxr-xr-x 1 cadey cadey     211 Dec 13  2017 uml_run*
```

That's a slirp binary! Does it work?

```
./slirp
Slirp v1.0.17 (BETA) FULL_BOLT

Copyright (c) 1995,1996 Danny Gasparovski and others.
All rights reserved.
This program is copyrighted, free software.
Please read the file COPYRIGHT that came with the Slirp
package for the terms and conditions of the copyright.

IP address of Slirp host: 127.0.0.1
IP address of your DNS(s): 1.1.1.1, 10.77.0.7
Your address is 10.0.2.15
(or anything else you want)

Type five zeroes (0) to exit.

[autodetect SLIP/CSLIP, MTU 1500, MRU 1500]

SLiRP Ready ...
```

It's not immediately crashing, so I think it should be good! Let's copy this
binary to `~/bin/slirp`:

```
cp slirp ~/bin/slirp
```

Just in case the person who created this buildpack takes it down, I have
[mirrored it](https://git.xeserv.us/mirrors/heroku-buildpack-uml).

#### Configuring Networking

Now let's configure networking on our guest. [Adjust your kernel command line](http://user-mode-linux.sourceforge.net/old/networking.html):

```
linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/bin/sh
```

We should get that shell again. Let's enable networking:

```
mount -t proc proc proc/
mount -t sysfs sys sys/

ifconfig eth0 10.0.2.14 netmask 255.255.255.240 broadcast 10.0.2.15
route add default gw 10.0.2.2
```

The first two commands set up `/proc` and `/sys`, which are required for
`ifconfig` to function. The `ifconfig` command sets up the network interface
to communicate with slirp. The route command sets the kernel routing table
to force all traffic over the slirp tunnel. Let's test with a DNS query:

```
nslookup google.com 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 dns.google

Name:      google.com
Address 1: 172.217.12.206 lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4006:81b::200e lga25s63-in-x0e.1e100.net
```

That works!

Let's automate this with a shell script:

```
#!/bin/sh
# init.sh

mount -t proc proc proc/
mount -t sysfs sys sys/
ifconfig eth0 10.0.2.14 netmask 255.255.255.240 broadcast 10.0.2.15
route add default gw 10.0.2.2

echo "networking set up"

exec /tini /bin/sh
```

and mark it executable:

```
chmod +x init.sh
```

and then change the kernel command line:

```
linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-demo \
  rw \
  mem=64M \
  eth0=slirp,,$HOME/bin/slirp \
  init=/init.sh
```

Then re-run it:

```
SLiRP Ready ...
networking set up
/bin/sh: can't access tty; job control turned off

nslookup google.com 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 dns.google

Name:      google.com
Address 1: 172.217.12.206 lga25s63-in-f14.1e100.net
Address 2: 2607:f8b0:4004:800::200e iad30s09-in-x0e.1e100.net
```

And networking works reliably!

## Dockerfile

So that you can more easily test this, I have created a [Dockerfile](https://github.com/Xe/furry-happiness)
that automates most of these steps and should result in a working setup. I have
a [pre-made kernel configuration](https://github.com/Xe/furry-happiness/blob/master/uml.config)
that should do everything outlined in this post, but this post outlines a more
minimal setup.

---

I hope this post is able to help you understand how to do this. This became a bit
of a monster, but it should be a comprehensive guide on how to build, install
and configure user mode Linux for modern operating systems. Next steps from here
should include installing services and other programs into the guest system.
Since Docker container images are just glorified tarballs, you should be able to
extract an image with `docker export` and then point the guest kernel's root
filesystem at the extracted directory. Then run the command that the Dockerfile
expects via a shell script.
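
That idea can be sketched as follows. This is only a sketch: the image name
`alpine:3.10` and the target directory are examples I chose, and it assumes the
`linux` binary built earlier is on your `$PATH`:

```shell
# flatten a container image into a directory that hostfs can use as /
docker create --name uml-rootfs alpine:3.10
mkdir -p $HOME/prefix/uml-alpine
docker export uml-rootfs | tar -x -C $HOME/prefix/uml-alpine
docker rm uml-rootfs

# boot the guest kernel with the extracted filesystem as its root
linux \
  root=/dev/root \
  rootfstype=hostfs \
  rootflags=$HOME/prefix/uml-alpine \
  rw \
  mem=64M \
  init=/bin/sh
```

`docker export` emits only the flattened filesystem, with none of the image
metadata, so the command and environment from the Dockerfile have to be
recreated by hand in whatever init script you use.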

Special thanks to rkeene of #lobsters on Freenode. Without his help with
attempting to debug slirp, I wouldn't have gotten this far. I have no idea how
his Slackware system works fine with slirp but my Ubuntu and Alpine systems
don't, and why the binary he gave me also didn't work; but I got something
working, and that's good enough for me.

---
title: I was Wrong about Nix
date: 2020-02-10
tags:
  - nix
  - witchcraft
---

# I was Wrong about Nix

From time to time, I am outright wrong on my blog. This is one of those times.
In my [last post about Nix][nixpost], I didn't see the light yet. I think I do
now, and I'm going to attempt to clarify below.

[nixpost]: https://christine.website/blog/thoughts-on-nix-2020-01-28

Let's talk about a simpler scenario: writing a service in Go. This service
will depend on at least the following:

- A Go compiler to build the code into a binary
- An appropriate runtime to ensure the code will run successfully
- Any data files needed at runtime

A popular way to model this is with a Dockerfile. Here's the Dockerfile I use
for my website (the one you are reading right now):

```
FROM xena/go:1.13.6 AS build
ENV GOPROXY https://cache.greedo.xeserv.us
COPY . /site
WORKDIR /site
RUN CGO_ENABLED=0 go test -v ./...
RUN CGO_ENABLED=0 GOBIN=/root go install -v ./cmd/site

FROM xena/alpine
EXPOSE 5000
WORKDIR /site
COPY --from=build /root/site .
COPY ./static /site/static
COPY ./templates /site/templates
COPY ./blog /site/blog
COPY ./talks /site/talks
COPY ./gallery /site/gallery
HEALTHCHECK CMD wget --spider http://127.0.0.1:5000/.within/health || exit 1
CMD ./site
```

This fetches the Go compiler from [an image I made][godockerfile], copies the
source code to the image, builds it (in a way that makes the resulting binary a
[static executable][staticbin]), and creates the runtime environment for it.

[godockerfile]: https://github.com/Xe/dockerfiles/blob/master/lang/go/Dockerfile
[staticbin]: https://oddcode.daveamit.com/2018/08/16/statically-compile-golang-binary/

Let's let it build and see how big the result is:

```
$ docker build -t xena/christinewebsite:example1 .
<output omitted>
$ docker images | grep xena
xena/christinewebsite  example1  4b8ee64969e8  24 seconds ago  111MB
```

Investigating this image with [dive][dive], we see the following:

[dive]: https://github.com/wagoodman/dive

- The package manager is included in the image
- The package manager's database is included in the image
- An entire copy of the C library is included in the image (even though the binary was _statically linked_ to specifically avoid this)
- Most of the files in the docker image are unrelated to my website's functionality and are involved with the normal functioning of Linux systems

Granted, [Alpine Linux][alpine] does a good job at keeping this chaff to a
minimum, but it is still there, still needs to be updated (causing all of my
docker images to be rebuilt and applications to be redeployed) and still takes
up space in transfer quotas and on the disk.
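
One way to see roughly where that space goes is `docker history`, which prints
the size each layer contributed to the image (a sketch; the tag assumes the
`example1` build from above):

```shell
# show per-layer sizes for the image built earlier;
# the base image's layers account for most of the chaff
docker history xena/christinewebsite:example1
```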

[alpine]: https://alpinelinux.org

Let's compare this to the same build process but done with Nix. My Nix setup is
done in a few phases. First I use [niv][niv] to manage some dependencies, a la
git submodules that don't hate you:

[niv]: https://github.com/nmattia/niv

```
$ nix-shell -p niv
[nix-shell]$ niv init
<writes nix/*>
```

Now I add the tool [vgo2nix][vgo2nix] with niv:

[vgo2nix]: https://github.com/adisbladis/vgo2nix

```
[nix-shell]$ niv add adisbladis/vgo2nix
```

And I can use it in my shell.nix:

```nix
let
  pkgs = import <nixpkgs> { };
  sources = import ./nix/sources.nix;
  vgo2nix = (import sources.vgo2nix { });
in pkgs.mkShell { buildInputs = [ pkgs.go pkgs.niv vgo2nix ]; }
```

And then relaunch nix-shell with vgo2nix installed and convert my [go modules][gomod]
dependencies to a Nix expression:

[gomod]: https://github.com/golang/go/wiki/Modules

```
$ nix-shell
<some work is done to compile things, etc>
[nix-shell]$ vgo2nix
<writes deps.nix>
```

Now that I have this, I can follow the [buildGoPackage
instructions][buildgopackage] from the upstream nixpkgs documentation and create
`site.nix`:

[buildgopackage]: https://nixos.org/nixpkgs/manual/#ssec-go-legacy

```nix
{ pkgs ? import <nixpkgs> {} }:
with pkgs;

assert lib.versionAtLeast go.version "1.13";

buildGoPackage rec {
  name = "christinewebsite-HEAD";
  version = "latest";
  goPackagePath = "christine.website";
  src = ./.;

  goDeps = ./deps.nix;
  allowGoReference = false;
  preBuild = ''
    export CGO_ENABLED=0
    buildFlagsArray+=(-pkgdir "$TMPDIR")
  '';

  postInstall = ''
    cp -rf $src/blog $bin/blog
    cp -rf $src/css $bin/css
    cp -rf $src/gallery $bin/gallery
    cp -rf $src/static $bin/static
    cp -rf $src/talks $bin/talks
    cp -rf $src/templates $bin/templates
  '';
}
```

And this will do the following:

- Download all of the needed dependencies and place them in the system-level Nix store so that they are not downloaded again
- Set the `CGO_ENABLED` environment variable to `0` so the Go compiler emits a static binary
- Copy all of the needed files to the right places so that the blog, gallery and talks features can load all of their data
- Depend on nothing other than a working system at runtime
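
Under those assumptions, a local build-and-run looks something like this (a
sketch; `result-bin` is the symlink `nix-build` creates for the package's `bin`
output, and `site` is the binary name from `cmd/site`):

```shell
# build the site from its Nix expression and run the resulting binary
nix-build site.nix
./result-bin/bin/site
```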

This Nix build manifest doesn't just work on Linux. It works on my mac too. The
dockerfile approach works great for Linux boxes, but (unlike what the me of a
decade ago would have hoped) the whole world just doesn't run Linux on their
desktops. The real world has multiple OSes, and Nix allows me to compensate.

So, now that we have a working _cross-platform_ build, let's see how big it
comes out as:

```
$ readlink ./result-bin
/nix/store/ayvafpvn763wwdzwjzvix3mizayyblx5-christinewebsite-HEAD-bin
$ du -hs result-bin/
89M     ./result-bin/
$ du -hs result-bin/*
11M     ./result-bin/bin
888K    ./result-bin/blog
40K     ./result-bin/css
44K     ./result-bin/gallery
77M     ./result-bin/static
28K     ./result-bin/talks
64K     ./result-bin/templates
```

As expected, most of the build results are static assets. I have a lot of larger
static assets, including an entire copy of TempleOS, so this isn't too
surprising. Let's compare this to the mac:

```
$ du -hs result-bin/
91M     result-bin/
$ du -hs result-bin/*
14M     result-bin/bin
872K    result-bin/blog
36K     result-bin/css
40K     result-bin/gallery
77M     result-bin/static
24K     result-bin/talks
60K     result-bin/templates
```

Which is damn-near identical, save some macOS-specific crud that Go has to deal
with.

I mentioned this is used for Docker builds, so let's make `docker.nix`:

```nix
{ system ? builtins.currentSystem }:

let
  pkgs = import <nixpkgs> { inherit system; };

  callPackage = pkgs.lib.callPackageWith pkgs;

  site = callPackage ./site.nix { };

  dockerImage = pkg:
    pkgs.dockerTools.buildImage {
      name = "xena/christinewebsite";
      tag = pkg.version;

      contents = [ pkg ];

      config = {
        Cmd = [ "/bin/site" ];
        WorkingDir = "/";
      };
    };

in dockerImage site
```

And then build it:

```
$ nix-build docker.nix
<output omitted>
$ docker load -i result
c6b1d6ce7549: Loading layer [==================================================>]  95.81MB/95.81MB
$ docker images | grep xena
xena/christinewebsite  latest  0d1ccd676af8  50 years ago  94.6MB
```

And the output is 16 megabytes smaller.

The image age might look weird at first, but it's part of the reproducibility
Nix offers. The date an image was built is something that can change with time
and is actually a part of the resulting file. This means that an image built one
second after another has a different cryptographic hash. Nix helpfully pins all
images to Unix timestamp 0, which just happens to be about 50 years ago.
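
You can confirm what timestamp 0 means with GNU `date` (a sketch; this assumes
a host with GNU coreutils, since BSD `date` lacks the `-d` flag):

```shell
# Unix timestamp 0 is the epoch, which Docker renders as "50 years ago"
date -u -d @0
# → Thu Jan  1 00:00:00 UTC 1970
```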
|
||||
|
||||
Looking into the image with `dive`, the only packages installed into this image
|
||||
are:
|
||||
|
||||
- The website and all of its static content goodness
|
||||
- IANA portmaps that Go depends on as part of the [`net`][gonet] package
|
||||
- The standard list of [MIME types][mimetypes] that the [`net/http`][gonethttp]
|
||||
package needs
|
||||
- Time zone data that the [`time`][gotime] package needs
|
||||
|
||||
[gonet]: https://godoc.org/net
|
||||
[gonethttp]: https://godoc.org/net/http
|
||||
[gotime]: https://godoc.org/time
|
||||
|
||||
And that's it. This is _fantastic_. Nearly all of the disk usage has been
|
||||
eliminated. If someone manages to trick my website into executing code, that
|
||||
attacker cannot do anything but run more copies of my website (that will
|
||||
immediately fail and die because the port is already allocated).
|
||||
|
||||
This strategy pans out to more complicated projects too. Consider a case where a
|
||||
frontend and backend need to be built and deployed as a unit. Let's create a new
|
||||
setup using niv:
|
||||
|
||||
```
|
||||
$ niv init
|
||||
```

Since we are using [Elm][elm] for this complicated project, let's add the
[elm2nix][elm2nix] tool so that our Elm dependencies have repeatable builds, and
[gruvbox-css][gcss] for some nice simple CSS:

[elm]: https://elm-lang.org
[elm2nix]: https://github.com/cachix/elm2nix
[gcss]: https://github.com/Xe/gruvbox-css

```
$ niv add cachix/elm2nix
$ niv add Xe/gruvbox-css
```

And then add it to our `shell.nix`:

```
let
  pkgs = import <nixpkgs> {};
  sources = import ./nix/sources.nix;
  elm2nix = (import sources.elm2nix { });
in
pkgs.mkShell {
  buildInputs = [
    pkgs.elmPackages.elm
    pkgs.elmPackages.elm-format
    elm2nix
  ];
}
```

And then enter `nix-shell` to create the Elm boilerplate:

```
$ nix-shell
[nix-shell]$ cd frontend
[nix-shell:frontend]$ elm2nix init > default.nix
[nix-shell:frontend]$ elm2nix convert > elm-srcs.nix
[nix-shell:frontend]$ elm2nix snapshot
```

And then we can edit the generated Nix expression:

```
let
  sources = import ./nix/sources.nix;
  gcss = (import sources.gruvbox-css { });
  # ...
  buildInputs = [ elmPackages.elm gcss ]
    ++ lib.optional outputJavaScript nodePackages_10_x.uglify-js;
  # ...
  cp -rf ${gcss}/gruvbox.css $out/public
  cp -rf $src/public/* $out/public/
  # ...
  outputJavaScript = true;
```

And then test it with `nix-build`:

```
$ nix-build
<output omitted>
```

And now create a `backend.nix` for your Go service like I did above. The real
magic comes from the `docker.nix` file:

```
{ system ? builtins.currentSystem }:

let
  pkgs = import <nixpkgs> { inherit system; };
  sources = import ./nix/sources.nix;
  backend = import ./backend.nix { };
  frontend = import ./frontend/default.nix { };
in

pkgs.dockerTools.buildImage {
  name = "xena/complicatedservice";
  tag = "latest";

  contents = [ backend frontend ];

  config = {
    Cmd = [ "/bin/backend" ];
    WorkingDir = "/public";
  };
}
```

Now both your backend and frontend services are built with the dependencies in
the Nix store and shipped as a repeatable Docker image.

Sometimes it might be useful to ship the dependencies to a service like
[Cachix][cachix] to help speed up builds.

[cachix]: https://cachix.org

You can install the cachix tool like this:

```
$ nix-env -iA cachix -f https://cachix.org/api/v1/install
```

And then follow the steps at [cachix.org][cachix] to create a new binary cache.
Let's assume you made a cache named `teddybear`. When you've created a new
cache, logged in with an API token and created a signing key, you can pipe
nix-build to the Cachix client like so:

```
$ nix-build | cachix push teddybear
```

And other people using that cache will benefit from your premade dependency and
binary downloads.

To use the cache somewhere, install the Cachix client and then run the
following:

```
$ cachix use teddybear
```

I've been able to use my Go, Elm, Rust and Haskell dependencies on other
machines using this. It's saved so much extra download time.

## tl;dr

I was wrong about Nix. It's actually quite good once you get past the
documentation being baroque and hard to read as a beginner. I'm going to try and
do what I can to get the documentation improved.

As far as getting started with Nix, I suggest following these posts:

- Nix Pills: https://nixos.org/nixos/nix-pills/
- Nix Shorts: https://github.com/justinwoo/nix-shorts
- NixOS: For Developers: https://myme.no/posts/2020-01-26-nixos-for-development.html

Also, I really suggest trying stuff as a vehicle to understand how things work.
I got really far by experimenting with getting [this Discord bot I am writing in
Rust][withinbot] working in Nix and have been very pleased with how it's turned
out. I don't need to use `rustup` anymore to manage my Rust compiler or the
language server. With a combination of [direnv][direnv] and [lorri][lorri], I
can avoid needing to set up language servers or the like _at all_. I can define
them as part of the _project environment_ and then trust the tools I build on
top of to take care of that for me.

[withinbot]: https://github.com/Xe/withinbot
[direnv]: https://direnv.net
[lorri]: https://github.com/target/lorri

Give Nix a try. It's worth at least that much in my opinion.
@ -1,93 +0,0 @@
---
title: Instant Pot Spaghetti
date: 2020-02-03
series: recipes
tags:
  - instant-pot
---

# Instant Pot Spaghetti

This is based on [this recipe][source], but made only with things you can find
in Costco. My fiancé and I have made this at least weekly for the last 8 months
and we love how it turns out.

[source]: https://kristineskitchenblog.com/instant-pot-spaghetti/

## Recipe

### Ingredients

- 1/2 kg ground beef (pre-cooked, or see section on browning it)
- 3 1/4 cups water
- 2 teaspoons salt
- a small amount of pepper
- 4 heaping teaspoons of garlic
- 1/2 cup butter
- 1/4 kg spaghetti noodles
- 1 jar of pasta sauce (about 870ml)

If you want it to be spicier, add more pepper. Too much can make it hard to
eat. Only experiment with the pepper amount after you've made this and decided
there's not enough pepper.

### Preparation

Put the ground beef in the instant pot. Put the water in the instant pot. Put
the salt in the instant pot. Put the pepper in the instant pot. Put the garlic
in the instant pot. Put the butter in the instant pot.

Stir for about 30 seconds, or until the garlic looks like it's distributed about
evenly in the pot.

Take the spaghetti noodles and break them in half. Place about a third of one
half one direction, the second third another, and the last yet another. Repeat
this for the other half of the pasta. This helps to not clump it together when
it's cooking.

Look at the package of spaghetti noodles. It should say something like "Ready in
X minutes" with a large number. Take that number and subtract two from it. If
you have a pasta that says it's cooked for 7 minutes, you will cook it for 5
minutes. If you have a pasta that says it's cooked for 9 minutes, you will cook
it for 7 minutes.

Put the lid on the instant pot, seal it and ensure the pressure release valve is
set to "sealing". Hit the "manual" button and select the number you figured out
above.

Leave the instant pot alone for 10 minutes after it is done. This lets the
pressure release naturally.

Use your serving utensil to open the pressure release valve. Stir and wait 3-5
minutes to serve. This makes 5 servings, but could be extended to more if you
carefully ration it.

Serve hot with salt or parmesan cheese.

## Browning Ground Beef

Browning ground beef is the act of cooking it all the way through so it is safe
to eat. It's called "browning" it because the ground beef will turn a grayish
brown when it is fully cooked.

### Ingredients

- Olive oil
- 1 teaspoon salt
- The ground beef you want to brown

### Preparation

Take the lid off of the instant pot. Cover the bottom of the pan in olive oil.
Sprinkle the salt over the olive oil. Place the ground beef in the instant pot
on top of the olive oil and salt.

Press the "sauté" button on your instant pot and use a spatula to break the
ground beef into smaller chunks while it warms up. Mix the ground beef while it
cooks. The goal is to make sure that all of the red parts turn grayish brown.

This will take anywhere from 5-10 minutes.

If you are using this ground beef for the above spaghetti recipe, you don't need
to remove it from the instant pot. You can store extra ground beef in the fridge
for use later.
@ -20,7 +20,9 @@ img {
}
</style>

<center>
![](/static/img/ios_profiles.png)
</center>

- Go up a level to General
- Select About
@ -28,7 +30,9 @@ img {
- Each root that has been installed via a profile will be listed below the heading Enable Full Trust For Root Certificates
- Users can toggle on/off trust for each root:

<center>
![](/static/img/ios_cert_trust.png)
</center>

Please understand that by doing this, users will potentially be vulnerable to a
[HTTPS man in the middle attack a-la Superfish](https://slate.com/technology/2015/02/lenovo-superfish-scandal-why-its-one-of-the-worst-consumer-computing-screw-ups-ever.html). Please ensure that you have appropriate measures in place to keep the signing key for the CA safe.
@ -1,7 +1,6 @@
---
title: "iPad Smart Keyboard: French Accents/Ligatures"
date: 2019-05-10
series: howto
---

# iPad Smart Keyboard: French Accents/Ligatures
@ -16,7 +15,5 @@ The following is the results of both blind googling and brute forcing the keyboa
| ç (cedilla) | Alt-c | français |
| œ (oe ligature) | Alt-q | œuf |
| û (circumflex) | Alt-i | hôtel |
| « (left quote) | Alt-\\ | «salut!» |
| » (right quote) | Alt-Shift-\\ | «salut!» |

You can also type an acute accent on most arbitrary characters by typing the character and then pressing Alt-Shift-e. Circumflêxes can be done postfix with Alt-Shift-i too. Thís dóesńt work on every letter, unfortunately. However it does work for enough of them. Not enough for Esperanto's `ĉu` however.
@ -1,61 +0,0 @@
---
title: "IRCv3.2 CHGHOST Extension"
date: "2013-10-04"
---

# IRCv3.2 CHGHOST Extension

The chghost client capability allows a server to directly inform clients about a
host or user change without having to send a fake quit and join. This capability
MUST be referred to as `chghost` at capability negotiation time.

When enabled, clients will get the CHGHOST message to designate the host of a
user changing for clients on common channels with them.

The CHGHOST message is one of the following:

    :nick!user@host CHGHOST user new.host.goes.here

This message represents that the user identified by nick!user@host has changed
host to another value. The first parameter is the user of the client. The
second parameter is the new host the client is using.

On irc daemons with support for changing the user portion of a client, the
second form may appear:

    :nick!user@host CHGHOST newuser host

If specified, a client may also have their user and host changed at the same
time:

    :nick!user@host CHGHOST newuser new.host.goes.here

The second and third forms should only be seen on IRC daemons that support
changing the user field of a user.

In order to take full advantage of the CHGHOST message, clients must be modified
to support it. The proper way to do so is this:

1. Enable the chghost capability at capability negotiation time during the
   login handshake.

2. Update the user and host portions of data structures and process channel
   users as appropriate.
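To make step 2 concrete, here is a minimal sketch of splitting a CHGHOST line into its pieces. This is illustrative only: the `parseChgHost` function and `ChgHost` struct are my own names, not part of the specification:

```go
package main

import (
	"fmt"
	"strings"
)

// ChgHost holds the pieces of a parsed CHGHOST message.
type ChgHost struct {
	Nick, OldUser, OldHost string // from the nick!user@host prefix
	NewUser, NewHost       string // the two CHGHOST parameters
}

// parseChgHost parses a line like
// ":nick!user@host CHGHOST newuser new.host.goes.here".
func parseChgHost(line string) (ChgHost, error) {
	var out ChgHost
	if !strings.HasPrefix(line, ":") {
		return out, fmt.Errorf("missing prefix: %q", line)
	}
	parts := strings.Fields(line[1:])
	if len(parts) != 4 || parts[1] != "CHGHOST" {
		return out, fmt.Errorf("not a CHGHOST message: %q", line)
	}
	// Split the prefix nick!user@host into its three components.
	bang := strings.IndexByte(parts[0], '!')
	at := strings.LastIndexByte(parts[0], '@')
	if bang < 0 || at < bang {
		return out, fmt.Errorf("malformed prefix: %q", parts[0])
	}
	out.Nick = parts[0][:bang]
	out.OldUser = parts[0][bang+1 : at]
	out.OldHost = parts[0][at+1:]
	out.NewUser = parts[2]
	out.NewHost = parts[3]
	return out, nil
}

func main() {
	msg, err := parseChgHost(":tim!~toolshed@backyard CHGHOST b ckyard")
	if err != nil {
		panic(err)
	}
	fmt.Println(msg.Nick, "->", msg.NewUser+"@"+msg.NewHost) // prints "tim -> b@ckyard"
}
```

After updating its data structures, a client would typically repaint any user lists that show hostnames, since no QUIT/JOIN pair will arrive to force that.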

## Examples

In this example, `tim!~toolshed@backyard` gets their username changed to `b` and
their hostname changed to `ckyard`:

    :tim!~toolshed@backyard CHGHOST b ckyard

In this example, `tim!b@ckyard` gets their username changed to `~toolshed` and
their hostname changed to `backyard`:

    :tim!b@ckyard CHGHOST ~toolshed backyard

## Errata

A previous version of this specification did not include any examples, which made
it unclear as to whether the de-facto `~` prefix should be included on CHGHOST
messages. The new examples make clear that it should be included.
@ -1,391 +0,0 @@
---
title: How I set up an IRC daemon on Kubernetes
date: 2019-12-21
series: howto
tags:
  - irc
  - kubernetes
---

# How I set up an IRC daemon on Kubernetes

[IRC][rfc1459]. It's one of the last bastions of the old internet, and still an actively developed and researched protocol. Historically, IRC daemons have been notoriously annoying to set up and maintain. I have created an IRC daemon running on top of Kubernetes, which will hopefully help remove a lot of the pain points for my personal usage. Here's how I did it.

[rfc1459]: https://tools.ietf.org/html/rfc1459

IRC is a simple protocol and only has a few major moving parts. IRC is made up of networks of servers that federate together as one logical unit. IRC is scalable from networks spanning one server to hundreds (though realistically you're not likely to find more than about 10 servers in a network).

At its core, an IRC daemon is a pub-sub system with a distributed state layer on top of it. TCP connections can either represent individual users or server trunking. Each user has their own state (nickname, ident and "real name"). Users can join channels which can have their own state (modes, topic, timestamp and ban lists). Some servers have a limit on the number of channels you can join.

So, with this in mind, let's start with a simple IRC daemon in a docker container. I chose [ngircd][ngircd] for this because it's packaged in Alpine Linux. Let's create the configuration file ngircd.conf:

[ngircd]: https://ngircd.barton.de/index.php.en

```ini
[Global]
Name = seaworld.yolo-swag.com
AdminInfo1 = ShadowNET Main Server
AdminInfo2 = New York, New York, USA
AdminInfo3 = Cadey Ratio <me@christine.website>
Info = Hosted on Kubernetes!
Listen = 0.0.0.0
MotdFile = /shadownet/motd
Network = ShadowNET
Ports = 6667
ServerGID = 65534
ServerUID = 65534

[Limits]
MaxJoins = 50
MaxNickLength = 31
MaxListSize = 100
PingTimeout = 120
PongTimeout = 20

[Options]
AllowedChannelTypes = #&+
AllowRemoteOper = yes
CloakUserToNick = yes
DNS = no
Ident = no
IncludeDir = /shadownet/secret
MorePrivacy = yes
NoticeBeforeRegistration = yes
OperCanUseMode = yes
OperChanPAutoOp = yes
PAM = no
PAMIsOptional = yes
RequireAuthPing = yes
# WebircPassword is set in secrets

[Channel]
Name = #lobby
Topic = Welcome to the new ShadowNET!
Modes = tn

[Channel]
Name = #help
Topic = Get help with ShadowNET | Ping an oper for help
Modes = tn

[Channel]
Name = #opers
Topic = Oper hideout
Modes = tnO
```

This is mostly based on the default settings in the example configuration file with a few glaring exceptions:

* The server name is `seaworld.yolo-swag.com`, which will show up when users are connecting
* My information is filled out for the admin information (which is shown when a user does /ADMIN in their client)
* It has a lot of privacy-enhancing features set up
* It disables the need to authenticate with PAM before being allowed to connect to the IRC server
* Some default channel names are reserved

So, let's create a dockerfile for this:

```dockerfile
FROM xena/alpine
COPY motd /shadownet/motd
COPY ngircd.conf /shadownet/ngircd.conf
RUN apk --no-cache add ngircd
COPY run.sh /
CMD ["/run.sh"]
```

`motd` is a plain text file that is used as the "message of the day" when users connect. Servers usually list their rules here. My motd has some ascii art and has this extra info:

```
The *new* irc.yolo-swag.com!

Connect on irc.within.website port 6667 or r4qrvdln2nvqyfbq.onion:6667

Rules:
- Don't do things that make me have to write more rules here
- This rule makes you breathe manually
```

Now you can build and push this image to the [docker hub][shadownetircd].

[shadownetircd]: https://hub.docker.com/repository/docker/shadownet/ircd

You may have noticed earlier that a comment in the config file mentioned [webirc][webirc]. This is important for us because IRC servers normally assume that the remote host information in socket calls is accurate. My Kubernetes setup has at least one level of TCP proxying at work, so this cannot pan out. Webirc offers an authenticated mechanism to let a proxy server lie about user IP addresses. My nginx-ingress setup uses the [haproxy PROXY protocol][haproxyproxy] to let underlying services know client IP addresses. So what we need is an adaptor from the haproxy PROXY protocol to webirc. I hacked one up:

[haproxyproxy]: https://www.haproxy.com/blog/haproxy/proxy-protocol/

```go
package main

import (
	"crypto/md5"
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"strings"

	"github.com/armon/go-proxyproto"
	"github.com/facebookgo/flagenv"
	_ "github.com/joho/godotenv/autoload"
	irc "gopkg.in/irc.v3"
)

var (
	webircPassword = flag.String("webirc-password", "", "the password for WEBIRC")
	webircIdent    = flag.String("webirc-ident", "snet", "the ident for WEBIRC")
	webircHost     = flag.String("webirc-host", "", "the host to connect to for WEBIRC")
	port           = flag.String("port", "5667", "port to listen on for PROXY traffic")
)

func main() {
	flagenv.Parse()
	flag.Parse()

	list, err := net.Listen("tcp", ":"+*port)
	if err != nil {
		panic(err)
	}

	log.Printf("now listening on port %s, forwarding traffic to %s", *port, *webircHost)

	proxyList := &proxyproto.Listener{Listener: list}

	for {
		conn, err := proxyList.Accept()
		if err != nil {
			log.Println(err)
			continue
		}
		go dataTo(conn)
	}
}

func dataTo(conn net.Conn) {
	defer conn.Close()

	ip, _, err := net.SplitHostPort(conn.RemoteAddr().String())
	if err != nil {
		log.Printf("what, can't split remote address: %v", err)
		ev := irc.Message{
			Command: "QUIT",
			Params: []string{
				"***",
				err.Error(),
			},
		}

		fmt.Fprintln(conn, ev.String())
		return
	}

	peer, err := net.Dial("tcp", *webircHost)
	if err != nil {
		log.Println(*webircHost, err)
		return
	}
	defer peer.Close()

	spip := strings.Split(ip, ".")

	hostname := strings.Join([]string{
		"snet",
		Hash("snet", spip[0])[:8],
		Hash("snet", spip[0]+spip[1])[:8],
		Hash("snet", spip[0]+spip[1]+spip[2]+spip[3])[:8],
	}, ".")

	ev := irc.Message{
		Command: "WEBIRC",
		Params: []string{
			*webircPassword,
			*webircIdent,
			ip,
			hostname,
		},
	}
	log.Println(ev.String())
	fmt.Fprintf(peer, "%s\r\n", ev.String())

	go io.Copy(conn, peer)
	io.Copy(peer, conn)
}

// Hash is a simple wrapper around the MD5 algorithm implementation in the
// Go standard library. It takes in data and a salt and returns the hashed
// representation.
func Hash(data string, salt string) string {
	output := md5.Sum([]byte(data + salt))
	return fmt.Sprintf("%x", output)
}
```

This proxies connections from incoming TCP sockets to the IRC server. It also creates a fancy hostname for ngircd to use when people do a /whois on users. ngircd does have its own cloaking mechanism (which I am not using here), but I figure doing the splitting on IP address classes makes an easier way to reliably ban users from channels.

Now, let's build this as a docker image and push it to the [docker hub][proxy2webirc]:

[proxy2webirc]: https://hub.docker.com/repository/docker/shadownet/proxy2webirc

```dockerfile
FROM xena/go:1.13.5 AS build
WORKDIR /shadownet
COPY go.mod .
COPY go.sum .
ENV GOPROXY https://cache.greedo.xeserv.us
RUN go mod download
COPY cmd ./cmd
RUN GOBIN=/shadownet/bin go install ./cmd/proxy2webirc

FROM xena/alpine
COPY --from=build /shadownet/bin/proxy2webirc /usr/local/bin/proxy2webirc
CMD ["/usr/local/bin/proxy2webirc"]
```

And now we get to wire this all up in a kubernetes manifest. Let's create a namespace:

```yaml
# 00_namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: ircd
```

And now we need to create the secrets that the IRC daemon will use when operating. We need the webirc password and a few operator blocks. Let's make a script to create operator blocks:

```sh
#!/bin/sh
# scripts/makeoper.sh

echo "[Operator]
Name = $1
Password = $(uuidgen)"
```

Then let's use it to create a few operator configs:

```console
$ scripts/makeoper.sh Cadey >> opers.conf
$ scripts/makeoper.sh h >> opers.conf
```

And then create the webirc password:

```console
$ echo "[Options]
WebircPassword = $(uuidgen)" >> webirc.conf
```

And then let's load these into a yaml file:

```yaml
# 01_secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: config
  namespace: ircd
type: Opaque
stringData:
  opers.conf: |
    <contents of opers.conf>
  webirc.conf: |
    <contents of webirc.conf>
```

Now all we need is the irc daemon deployment itself that ties this all together:

```yaml
# 02_ircd.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ircd
  namespace: ircd
  labels:
    app: ircd
spec:
  replicas: 1
  template:
    metadata:
      name: ircd
      labels:
        app: ircd
    spec:
      containers:
        - name: proxystrip
          image: shadownet/proxy2webirc:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5667
              name: proxiedirc
              protocol: TCP
          env:
            - name: WEBIRC_HOST
              value: 127.0.0.1:6667
            - name: WEBIRC_PASSWORD
              value: <password from webirc.conf>
        - name: ircd
          image: shadownet/ircd:latest
          imagePullPolicy: Always
          volumeMounts:
            - name: secretconfig
              mountPath: "/shadownet/secret"
      restartPolicy: Always
      volumes:
        - name: secretconfig
          secret:
            secretName: config
  selector:
    matchLabels:
      app: ircd
---
apiVersion: v1
kind: Service
metadata:
  name: ircd
  namespace: ircd
  labels:
    app: ircd
spec:
  ports:
    - port: 6667
      targetPort: 5667
      protocol: TCP
  selector:
    app: ircd
  type: NodePort
```

This will set up our IRC daemon to read the secrets from the filesystem at `/shadownet/secret`, which was configured as the `IncludeDir` in the ngircd config above.

At this point, your IRC daemon is ready to go and can be applied to your cluster whenever you want, however it may be interesting to set up a tor onion address for the IRC server. Using the [tor operator][toroperator], we can create a private key locally, load it as a kubernetes secret and then activate the tor hidden service:

[toroperator]: https://github.com/kragniz/tor-controller

```console
$ openssl genrsa -out private_key 1024
$ kubectl create secret -n ircd generic ircd-tor-key --from-file=private_key
```

Now apply this manifest:

```yaml
# 03_onion.yml
apiVersion: tor.k8s.io/v1alpha1
kind: OnionService
metadata:
  name: ircd
spec:
  version: 2
  selector:
    app: ircd
  ports:
    - targetPort: 6667
      publicPort: 6667
  privateKeySecret:
    name: ircd-tor-key
    key: private_key
```

Now, you should be able to let users connect to your IRC server to their heart's content. If you want to join the IRC server I've set up, point your IRC client at `irc.within.website`. I'll be in `#lobby`.
@ -1,160 +0,0 @@
---
title: Kubernetes Pondering
date: 2020-12-31
tags:
  - k8s
  - kubernetes
  - soyoustart
  - kimsufi
  - digitalocean
  - vultr
---

# Kubernetes Pondering

Right now I am using a freight train to mail a letter when it comes to hosting
my web applications. If you are reading this post on the day it comes out, then
you are connected to one of a few replicas of my site code running across at
least 3 machines in my Kubernetes cluster. This certainly _works_, however it is
not very ergonomic and ends up being quite expensive.

I think I made a mistake when I decided to put my cards into Kubernetes for my
personal setup. It made sense at the time (I was trying to learn Kubernetes and
I am cursed into learning by doing), however I don't think it is really the best
choice available for my needs. I am not a large company. I am a single person
making things that are really targeted for myself. I would like to replace this
setup with something more at my scale. Here are a few options I have been
exploring combined with their pros and cons.

Here are the services I currently host on my Kubernetes cluster:

- [this site](/)
- [my git server](https://tulpa.dev)
- [hlang](https://h.christine.website)
- A few personal services that I've been meaning to consolidate
- The [olin demo](https://olin.within.website/)
- The venerable [printer facts server](https://printerfacts.cetacean.club)
- A few static websites
- An IRC server (`irc.within.website`)

My goal in evaluating other options is to reduce cost and complexity. Kubernetes
is a very complicated system and requires a lot of hand-holding and rejiggering
to make it do what you want. NixOS, on the other hand, is a lot simpler overall
and I would like to use it for running my services where I can.

Cost is a huge factor in this. My Kubernetes setup is a money pit. I want to
prioritize cost reduction as much as possible.

## Option 1: Do Nothing

I could do nothing about this and eat the complexity as a cost of having this
website and those other services online. However over the year or so I've been
using Kubernetes I've had to do a lot of hacking at it to get it to do what I
want.

I set up the cluster using Terraform and Helm 2. Helm 3 is the current
(backwards-incompatible) release, and all of the things that are managed by Helm
2 have resisted being upgraded to Helm 3.

I'm going to say something slightly controversial here, but YAML is a HORRIBLE
format for configuration. I can't trust myself to write unambiguous YAML. I have
to reference the spec constantly to make sure I don't have an accidental
Norway/Ontario bug. I have a Dhall package that takes away most of the pain,
however it's not flexible enough to describe the entire scope of what my
services need to do (IE: pinging Google/Bing to update their indexes on each
deploy), and I don't feel like putting in the time to make it that flexible.

[This is the regex for determining what is a valid boolean value in YAML:
`y|Y|yes|Yes|YES|n|N|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF`.
This can bite you eventually. See the <a
href="https://hitchdev.com/strictyaml/why/implicit-typing-removed/">Norway
Problem</a> for more information.](conversation://Mara/hacker)

I have a tor hidden service endpoint for a few of my services. I have to use an
[unmaintained tool](https://github.com/kragniz/tor-controller) to manage these
on Kubernetes. It works _today_, but the Kubernetes operator API could change at
any time (or the API this uses could be deprecated and removed without much
warning) and leave me in the dust.

I could live with all of this, however I don't really think it's the best idea
going forward. There's a bunch of services that I added on top of Kubernetes
that are dangerous to upgrade and very difficult (if not impossible) to
downgrade when something goes wrong during the upgrade.

One of the big things that I have with this setup that I would have to rebuild
in NixOS is the continuous deployment setup. However I've done that before and
it wouldn't really be that much of an issue to do it again.

NixOS fixes all the jank I mentioned above by making my specifications not have
to include the version numbers of everything the system already provides. You
can _actually trust the package repos to have up to date packages_. I don't
have to go around and bump the versions of shims and pray they work, because
with NixOS I don't need them anymore.
|
||||
|
||||
## Option 2: NixOS on top of SoYouStart or Kimsufi
|
||||
|
||||
This is a doable option. The main problem here would be doing the provision
|
||||
step. SoYouStart and Kimsufi (both are offshoot/discount brands of OVH) have
|
||||
very little in terms of customization of machine config. They work best when you
|
||||
are using "normal" distributions like Ubuntu or CentOS and leave them be. I
|
||||
would want to run NixOS on it and would have to do several trial and error runs
|
||||
with a tool such as [nixos-infect](https://github.com/elitak/nixos-infect) to
|
||||
assimilate the server into running NixOS.
|
||||
|
||||
With this option I would get the most storage out of any other option by far. 4
|
||||
TB is a _lot_ of space. However, SoYouStart and Kimsufi run decade-old hardware
|
||||
at best. I would end up paying a lot for very little in the CPU department. For
|
||||
most things I am sure this would be fine, however some of my services can have
|
||||
CPU needs that might exceed what second-generation Xeons can provide.
|
||||
|
||||
SoYouStart and Kimsufi have weird kernel versions though. The last SoYouStart
|
||||
dedi I used ran Fedora and was gimped with a grsec kernel by default. I had to
|
||||
end up writing [this gem of a systemd service on
|
||||
boot](https://github.com/Xe/dotfiles/blob/master/ansible/roles/soyoustart/files/conditional-kexec.sh)
|
||||
which did a [`kexec`](https://en.wikipedia.org/wiki/Kexec) to boot into a
|
||||
non-gimped kernel on boot. It was a huge hack and somehow worked every time. I
|
||||
was still afraid to reboot the machine though.
|
||||
|
||||
Sure is a lot of RAM for the cost though.
|
||||
|
||||
## Option 3: NixOS on top of Digital Ocean
|
||||
|
||||
This shares most of the same problems as the SoYouStart and Kimsufi nodes. However,
|
||||
nixos-infect is known to have a higher success rate on Digital Ocean droplets.
|
||||
It would be really nice if Digital Ocean let you upload arbitrary ISO files and
|
||||
go from there, but that is apparently not the world we live in.
|
||||
|
||||
8 GB of RAM would be _way more than enough_ for what I am doing with these
|
||||
services.
|
||||
|
||||
## Option 4: NixOS on top of Vultr
|
||||
|
||||
Vultr is probably my top pick for this. You can upload an arbitrary ISO file,
|
||||
kick off your VPS from it and install it like normal. I have a little shell
|
||||
server shared between some friends built on top of such a Vultr node. It works
|
||||
beautifully.
|
||||
|
||||
The fact that it has the same cost as the Digital Ocean droplet just adds to the
|
||||
perfection of this option.
|
||||
|
||||
## Costs
|
||||
|
||||
Here is the cost table I've drawn up while comparing these options:
|
||||
|
||||
| Option | Ram | Disk | Cost per month | Hacks |
|
||||
| :--------- | :----------------- | :------------------------------------ | :-------------- | :----------- |
|
||||
| Do nothing | 6 GB (4 GB usable) | Not really usable, volumes cost extra | $60/month | Very Yes |
|
||||
| SoYouStart | 32 GB | 2x2TB SAS | $40/month | Yes |
|
||||
| Kimsufi | 32 GB | 2x2TB SAS | $35/month | Yes |
|
||||
| Digital Ocean | 8 GB | 160 GB SSD | $40/month | On provision |
|
||||
| Vultr | 8 GB | 160 GB SSD | $40/month | No |
|
||||
|
||||
I think I am going to go with the Vultr option. I will need to modernize some of
|
||||
my services to support being deployed in NixOS in order to do this, however I
|
||||
think that I will end up creating a more robust setup in the process. At least I
|
||||
will create a setup that allows me to more easily maintain my own backups rather
|
||||
than just relying on DigitalOcean snapshots and praying like I do with the
|
||||
Kubernetes setup.
|
||||
|
||||
Thanks farcaller, Marbles, John Rinehart and others for reviewing this post
|
||||
prior to it being published.
|
|
@ -1,54 +0,0 @@
|
|||
---
|
||||
title: kalama pali pi kulupu Kala
|
||||
date: 2020-10-12
|
||||
tags:
|
||||
- 100DaysToOffload
|
||||
---
|
||||
|
||||
# kalama pali pi kulupu Kala
|
||||
|
||||
I've wanted to write a novel for a while, and I think I've finally got a solid
|
||||
idea for it. I want to write about the good guys winning against an oppressive
|
||||
system. I've been letting the ideas and thoughts marinate in my heart for a long
|
||||
time; these short stories are how I am exploring the world and other related
|
||||
concepts. I want to use language as a tool in this world. So here is my take on
|
||||
a creation myth for the main species of this world, the Kala (the title of this
|
||||
post roughly translates to "creation story of the Kala").
|
||||
|
||||
This is day 2 of my 100 days to offload.
|
||||
|
||||
---
|
||||
|
||||
In the beginning, the gods roamed the skies. Pali, Sona and Soweli talked and
|
||||
talked about their plans.
|
||||
|
||||
tenpo wan la sewi li lon e sewi. sewi Pali en sewi Sona en sewi Soweli li toki.
|
||||
|
||||
Soweli went down to the world Pali had created. Animals of all kinds followed
|
||||
them as Soweli moved about the earth.
|
||||
|
||||
sewi Soweli li tawa e sike. soweli li kama e sike.
|
||||
|
||||
Sona followed and went towards the whales. Sona took a liking to how graceful
|
||||
they were in the water, and decided to have them be the arbiters of knowledge.
|
||||
Sona also reshaped them to look like the gods did. The Kala people resulted.
|
||||
|
||||
sewi Sona li tawa e soweli sike. sewi Sona li tawa e kala suli. sewi Sona li
|
||||
lukin li pona e kala suli. sewi Sona li pana e sona e kon tawa kala suli. sewi
|
||||
Sona li pali e jan kama kala suli. kulupu Kala li lon.
|
||||
|
||||
Pali had created the entire world, so Pali fell into a deep slumber in the
|
||||
ocean.
|
||||
|
||||
tenpo pini la sewi Pali li pali e sike. sewi Pali li lape lon telo suli.
|
||||
|
||||
Soweli had created all of the animals on the whole world, so Soweli fell asleep
|
||||
in Soweli mountain.
|
||||
|
||||
tenpo pini la sewi Soweli li pali e soweli ale. sewi Soweli li lape e nena Soweli.
|
||||
|
||||
Sona lifted themselves into the skies to watch the Kala from above. Sona keeps
|
||||
an eye on us to make sure we are using their gift responsibly.
|
||||
|
||||
sewi Sona li tawa e sewi. sewi Sona li lukin e kulupu Kala. kulupu Kala li jo
|
||||
sona li jo toki. kulupu Kala li pona e sewi Sona.
|
|
@ -1,9 +1,6 @@
|
|||
---
|
||||
title: "Land 1: Syscalls & File I/O"
|
||||
date: 2018-06-18
|
||||
series: olin
|
||||
tags:
|
||||
- wasm
|
||||
---
|
||||
|
||||
# Land 1: Syscalls & File I/O
|
||||
|
|
|
@ -2,8 +2,6 @@
|
|||
title: Let it Snow
|
||||
date: 2018-12-17
|
||||
for: the lols
|
||||
tags:
|
||||
- fluff
|
||||
---
|
||||
|
||||
# Let it Snow
|
||||
|
|
|
@ -2,8 +2,6 @@
|
|||
title: "Life Update - Montréal"
|
||||
date: "2019-05-16"
|
||||
for: "Vic"
|
||||
tags:
|
||||
- personal
|
||||
---
|
||||
|
||||
# Life Update - Montréal
|
||||
|
|
|
@ -0,0 +1,64 @@
|
|||
---
|
||||
title: "Introducing ln: The Natural Logging Library"
|
||||
date: 2019-06-04
|
||||
---
|
||||
|
||||
# Introducing ln: The Natural Logging Library
|
||||
|
||||
Logging is an annoyingly complicated topic in programming. Sometimes you log too much and your log servers run out of space or need to only have a week's retention. Sometimes you log too little and are left recreating things from scratch when support tickets come in. Sometimes this means you need to go recreate the problem using your logging infrastructure to get the same error patterns.
|
||||
|
||||
Basically it's a mess. A lot of the popular Go libraries around it are [zero-allocation nanosecond scale](https://christine.website/blog/experimental-rilkef-2018-11-30) things that offer a lot of flexibility and speed, but ultimately make this entire endeavor more painful than it needs to be. None of them seem to offer contextual storage of key->value fields either. This means you have to pass a partially constructed logger around instead of the global one Just Doing The Right Thing.
|
||||
|
||||
So let's talk about my solution for this called [`ln`](https://github.com/Xe/ln), the [natural logging library](https://en.wikipedia.org/wiki/Natural_logarithm). `ln` is based on the idea of structured logging. `ln` uses key->value pairs like this:
|
||||
|
||||
```
|
||||
// Fields for logging
|
||||
f := ln.F{
|
||||
"azure_diamond": "hunter2",
|
||||
"meme_source": "http://bash.org/?244321",
|
||||
}
|
||||
|
||||
ln.Log(ctx, f)
|
||||
```
|
||||
|
||||
and this prints something like:
|
||||
|
||||
```
|
||||
time="2009-11-10T23:00:00Z" azure_diamond=hunter2 meme_source="http://bash.org/?244321"
|
||||
```
|
||||
|
||||
Simple, right?
|
||||
|
||||
You can also put a MUTABLE key->value F into a context:
|
||||
|
||||
```
|
||||
func main() {
|
||||
ctx := context.Background()
|
||||
f := ln.F{
|
||||
"azure_diamond": "hunter2",
|
||||
"meme_source": "http://bash.org/?244321",
|
||||
}
|
||||
ctx = ln.WithF(ctx, f)
|
||||
doSomethingElse(ctx)
|
||||
|
||||
ln.Log(ctx)
|
||||
}
|
||||
|
||||
func doSomethingElse(ctx context.Context) {
|
||||
ln.WithF(ctx, ln.F{
|
||||
"hi": "mom",
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
And this [yields](https://play.golang.org/p/0-3-qPA7d6Y):
|
||||
|
||||
```
|
||||
time="2009-11-10T23:00:00Z" azure_diamond=hunter2 meme_source="http://bash.org/?244321" hi=mom
|
||||
```
|
||||
|
||||
|
||||
|
||||
---
|
||||
|
||||
This blogpost was converted from [this tweetstorm](https://twitter.com/theprincessxena/status/1129917083364597760).
|
|
@ -1,100 +0,0 @@
|
|||
---
|
||||
title: "ln - The Natural Log Function"
|
||||
date: 2020-10-17
|
||||
tags:
|
||||
- golang
|
||||
- go
|
||||
---
|
||||
|
||||
# ln - The Natural Log Function
|
||||
|
||||
One of the most essential things in software is a good interface for logging
|
||||
data to places. Logging is a surprisingly hard problem and there are many
|
||||
approaches to doing it. This time, we're going to talk about my favorite logging
|
||||
library in Go that uses my favorite function I've ever written in Go.
|
||||
|
||||
Today we're talking about [ln](https://github.com/Xe/ln), the natural log
|
||||
function. ln works with key value pairs and logs them to somewhere. By default
|
||||
it logs things to standard out. Here is how you use it:
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"within.website/ln"
|
||||
)
|
||||
|
||||
func main() {
|
||||
ctx := context.Background()
|
||||
ln.Log(ctx, ln.Fmt("hello %s", "world"), ln.F{"demo": "usage"})
|
||||
}
|
||||
```
|
||||
|
||||
ln works with key value pairs called [F](https://godoc.org/within.website/ln#F).
|
||||
This type allows you to log just about _anything_ you want, including custom
|
||||
data types with an [Fer](https://godoc.org/within.website/ln#Fer). This will let
|
||||
you annotate your data types so that you can automatically extract the important
|
||||
information into your logs while automatically filtering out passwords or other
|
||||
secret data. Here's an example:
|
||||
|
||||
```go
|
||||
type User struct {
|
||||
ID int
|
||||
Username string
|
||||
Password []byte
|
||||
}
|
||||
|
||||
func (u User) F() ln.F {
|
||||
return ln.F{
|
||||
"user_id": u.ID,
|
||||
"user_name": u.Username,
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Then if you create that user somehow, you can log the ID and username without
|
||||
logging the password on accident:
|
||||
|
||||
```go
|
||||
var theDude User = abides()
|
||||
|
||||
ln.Log(ctx, ln.Info("created new user"), theDude)
|
||||
```
|
||||
|
||||
This will create a log line that looks something like this:
|
||||
|
||||
```
|
||||
level=info msg="created new user" user_name="The Dude" user_id=1337
|
||||
```
|
||||
|
||||
[You can also put values in contexts! See <a
|
||||
href="https://github.com/Xe/ln/blob/master/ex/http.go#L21">here</a> for more
|
||||
detail on how this works.](conversation://Mara/hacker)
|
||||
|
||||
The way this is all glued together is that F itself is an Fer, meaning that the
|
||||
Log/Error functions take a variadic set of Fers. This is where my favorite Go
|
||||
function comes into play: it is the implementation of the Fer interface for F.
|
||||
Here is that function verbatim:
|
||||
|
||||
```go
|
||||
// F makes F an Fer
|
||||
func (f F) F() F {
|
||||
return f
|
||||
}
|
||||
```
|
||||
|
||||
I love how this function looks like some kind of abstract art. This function
|
||||
holds this library together.
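To see why that tiny method matters, here is a self-contained sketch (not the actual `ln` internals) of how a variadic list of Fers can be flattened into one line's worth of fields; because `F` is itself an `Fer`, plain field bags and custom types flow through the same parameter:

```go
package main

import "fmt"

// F is a bag of key->value pairs.
type F map[string]interface{}

// Fer is anything that can render itself as an F.
type Fer interface {
	F() F
}

// F makes F an Fer, mirroring the one-liner from the post.
func (f F) F() F { return f }

// merge flattens every Fer into a single field bag.
func merge(fers ...Fer) F {
	out := F{}
	for _, fer := range fers {
		for k, v := range fer.F() {
			out[k] = v
		}
	}
	return out
}

type User struct {
	ID       int
	Username string
	Password []byte
}

// F exposes only the safe fields, never the password.
func (u User) F() F {
	return F{"user_id": u.ID, "user_name": u.Username}
}

func main() {
	line := merge(F{"level": "info"}, User{ID: 1337, Username: "The Dude"})
	fmt.Println(line["user_id"], line["user_name"], line["level"]) // 1337 The Dude info
}
```

The real library does more (timestamps, filters, writers), but the variadic `merge` step is the part that `func (f F) F() F` makes possible.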
|
||||
|
||||
If you end up using ln for your projects in the future, please let me know what
|
||||
your experience is like. I would love to make this library the best it can
|
||||
possibly be. It is not a nanosecond scale zero allocation library (I think those
|
||||
kinds of things are a bit of a waste of time, because most of the time your
|
||||
logging library is NOT going to be your bottleneck), but it is designed to have
|
||||
very usable defaults and solve the problem well enough that you shouldn't need
|
||||
to care. There are a few useful tools in the
|
||||
[ex](https://godoc.org/within.website/ln/ex) package nested in ln. The biggest
|
||||
thing is the HTTP middleware, which has saved me a lot of effort when writing
|
||||
web services in Go.
|
|
@ -2,11 +2,6 @@
|
|||
title: Introducing Lokahi
|
||||
date: 2018-02-08
|
||||
github_issue: https://github.com/Xe/lokahi/issues/15
|
||||
tags:
|
||||
- hackweek
|
||||
- release
|
||||
- go
|
||||
- monitoring
|
||||
---
|
||||
|
||||
# Introducing Lokahi
|
||||
|
|
|
@ -1,373 +0,0 @@
|
|||
---
|
||||
title: "mapatei"
|
||||
date: "2019-09-22"
|
||||
series: conlangs
|
||||
tags:
|
||||
- mapatei
|
||||
- protolang
|
||||
---
|
||||
|
||||
# mapatei
|
||||
|
||||
I've been working on a project in the [Conlang Critic][conlangcritic] Discord with some friends for a while now, and I'd like to summarize what we've been doing and why here. We've been working on creating a constructed language (conlang) with the end goal of each of us going off and evolving it in our own separate ways. Our goal in this project is really to create a microcosm of the natural process of language development.
|
||||
|
||||
## Why
|
||||
|
||||
One of the questions you, as the reader, might be asking is "why?" To which I say "why not?" This is a tool I use to define, explore and challenge my fundamental understanding of reality. I don't expect anything I do with this tool to be useful to anyone other than myself. I just want to create something by throwing things at the wall and seeing what makes sense for _me_. If other people like it or end up benefitting from it I consider that icing on the cake.
|
||||
|
||||
A language is a surprisingly complicated thing. There's lots of nuance and culture encoded into it, not even counting things like metaphors and double-meanings. Creating my own languages lets me break that complicated thing into its component parts, then use that understanding to help increase my knowledge of natural languages.
|
||||
|
||||
So, like I mentioned earlier, I've been working on a conlang with some friends, and here's what we've been creating.
|
||||
|
||||
## mapatei grammar
|
||||
|
||||
mapatei is the language spoken by a primitive culture of people we call maparaja (people of the language). It is designed to be very simple to understand, speak and learn.
|
||||
|
||||
### Phonology
|
||||
|
||||
The phonology of mapatei is simple. It has 5 vowels and 17 consonants. The sounds are written mainly in [International Phonetic Alphabet](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
|
||||
|
||||
#### Vowels
|
||||
|
||||
The vowels are:
|
||||
|
||||
| International Phonetic Alphabet | Written as | Description / Bad Transcription for English speakers |
|
||||
| :--- | :--- | :--- |
|
||||
| a | a | unstressed "ah" |
|
||||
| aː | ā | stressed "AH" |
|
||||
| e | e | unstressed "ayy" |
|
||||
| eː | ē | stressed "AYY" |
|
||||
| i | i | unstressed "ee" |
|
||||
| iː | ī | stressed "EE" |
|
||||
| o | o | unstressed "oh" |
|
||||
| oː | ō | stressed "OH" |
|
||||
| u | u | unstressed "ooh" |
|
||||
| uː | ū | stressed "OOH" |
|
||||
|
||||
The long vowels (anything with the funny looking bar/macron on top of them) also mark for stress, or how "intensely" they are spoken.
|
||||
|
||||
#### Consonants
|
||||
|
||||
The consonants are:
|
||||
|
||||
| International Phonetic Alphabet | Written as | Description / Bad Transcription for English speakers |
|
||||
| :--- | :--- | :--- |
|
||||
| m | m | the m in mother |
|
||||
| n | n | the n in money |
|
||||
| ᵐb | mb | a combination of the m in mother and the b in baker |
|
||||
| ⁿd | nd | as in handle |
|
||||
| ᵑg | ng | as in finger |
|
||||
| p | p | the p in spool |
|
||||
| t | t | the t in stool |
|
||||
| k | k | the k in school |
|
||||
| pʰ | ph | the ph in pool |
|
||||
| tʰ | th | the th in tool |
|
||||
| kʰ | kh | the kh in cool |
|
||||
| ɸ~f | f | the f in father |
|
||||
| s | s | the s in sock |
|
||||
| w | w | the w in water |
|
||||
| l | l | the l in lie |
|
||||
| j | j or y | the y in young |
|
||||
| r~ɾ | r | the r in rhombus |
|
||||
|
||||
### Word Structure
|
||||
|
||||
The structure of words is based on syllables. Syllables are formed from an optional consonant followed by a vowel. There can be up to two consecutive vowels in a word, but each vowel gets its own syllable. If a word is stressed, it can only ever be stressed on the first syllable.
|
||||
|
||||
Here are some examples of words and their meanings (the periods in the words mark the boundaries between syllables):
|
||||
|
||||
| mapatei word | International Phonetic Alphabet | Meaning |
|
||||
| :--- | :--- | :--- |
|
||||
| ondoko | o.ⁿdo.ko | pig |
|
||||
| māo | maː.o | cat |
|
||||
| ameme | a.me.me | to kill/murder |
|
||||
| ero | e.ro | can, to be able to |
|
||||
| ngōe | ᵑgoː.e | I/me |
|
||||
| ke | ke | cold |
|
||||
| ku | ku | fast |
|
||||
|
||||
There are only a few parts of speech: nouns, pronouns, verbs, determiners, numerals, prepositions and interjections.
|
||||
|
||||
### Nouns
|
||||
|
||||
Nouns describe things, people, animals, animate objects (such as plants or body parts) and abstract concepts (such as days). Nouns in mapatei are divided into four classes (this is similar to how languages like French handle the concept of grammatical gender): human, animal, animate and inanimate.
|
||||
|
||||
Here are some examples of a few nouns, their meaning and their noun class:
|
||||
|
||||
| mapatei word | International Phonetic Alphabet | Class | Meaning |
|
||||
| :--- | :--- | :--- | :--- |
|
||||
| okha | o.kʰa | human | female human, woman |
|
||||
| awu | a.wu | animal | dog |
|
||||
| fōmbu | (ɸ~f)oː.ᵐbu | animate | name |
|
||||
| ipai | i.pa.i | inanimate | salt |
|
||||
|
||||
Nouns can also be singular or plural. Plural nouns are marked with the -ja suffix. See some examples:
|
||||
|
||||
| singular mapatei word | plural mapatei word | International Phonetic Alphabet | Meaning |
|
||||
| :--- | :--- | :--- | :--- |
|
||||
| ra | raja | ra.ja | person / people |
|
||||
| meko | mekoja | me.ko.ja | ant / ants |
|
||||
| kindu | kinduja | ki.ⁿdu.ja | liver / livers |
|
||||
| fīfo | fīfoja | (ɸ~f)iː.(ɸ~f)o.ja | moon / moons |
|
||||
|
||||
### Pronouns
|
||||
|
||||
Pronouns are nouns that replace a noun or noun phrase with a special meaning. Examples of pronouns in English are words like I, me, or you. This is to avoid duplication of people's names or the identity of the speaker vs the listener.
|
||||
|
||||
| Pronouns | singular | plural | Rough English equivalent |
|
||||
| :--- | :--- | :--- | :--- |
|
||||
| 1st person | ngōe | tha | I/me, we |
|
||||
| 2nd person | sīto | khē | you, y'all |
|
||||
| 3rd person human | rā | foli | he/she, they |
|
||||
| 3rd person animal | mi | wāto | they |
|
||||
| 3rd person animate | sa | wāto | they |
|
||||
| 3rd person inanimate | li | wāto | they |
|
||||
|
||||
### Verbs
|
||||
|
||||
Verbs describe actions, existence or occurrence. Verbs in mapatei are conjugated in terms of tense (or when the thing being described has/will happen/ed in relation to saying the sentence) and the number of the subject of the sentence.
|
||||
|
||||
Verb endings:
|
||||
|
||||
| Verbs | singular | plural |
|
||||
| :--- | :--- | :--- |
|
||||
| past | -fu | -phi |
|
||||
| present | | -ja |
|
||||
| future | māu $verb | māu $verb-ja |
|
||||
|
||||
For example, consider the verb ōwo (oː.wo) for to love:
|
||||
|
||||
| ōwo - to love | singular | plural |
|
||||
| :--- | :--- | :--- |
|
||||
| past | ōwofu | ōwophi |
|
||||
| present | ōwo | ōwoja |
|
||||
| future | māu ōwo | māu ōwoja |
|
||||
|
||||
### Determiners
|
||||
|
||||
Determiners are words that can function both as adjectives and adverbs in English do. A determiner gives more detail or context about a noun/verb. Determiners follow the things they describe, like French or Toki Pona. Determiners must agree with the noun they are describing in class and number.
|
||||
|
||||
| Determiners | singular | plural |
|
||||
| :--- | :--- | :--- |
|
||||
| human | -ra | -fo |
|
||||
| animal | -mi | -wa |
|
||||
| animate | -sa | -to |
|
||||
| inanimate | -li | -wato |
|
||||
|
||||
See these examples:
|
||||
|
||||
a big human: ra sura
|
||||
|
||||
moving cats: māoja wuwa
|
||||
|
||||
a short name: fōmbu uwiisa
|
||||
|
||||
long days: lundoseja khāngandiwato
|
||||
|
||||
Also consider the declensions for uri (u.ri), meaning dull:
|
||||
|
||||
| uri | singular | plural |
|
||||
| :--- | :--- | :--- |
|
||||
| human | urira | urifo |
|
||||
| animal | urimi | uriwa |
|
||||
| animate | urisa | urito |
|
||||
| inanimate | urili | uriwato |
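The agreement rule is mechanical enough to write down in code. Here is a toy sketch in Go (rather than the project's Nim) that just transcribes the suffix table above:

```go
package main

import "fmt"

// detSuffix maps noun class to [singular, plural] determiner
// suffixes, transcribed from the table above.
var detSuffix = map[string][2]string{
	"human":     {"ra", "fo"},
	"animal":    {"mi", "wa"},
	"animate":   {"sa", "to"},
	"inanimate": {"li", "wato"},
}

// decline attaches the agreement suffix to a determiner stem.
func decline(stem, class string, plural bool) string {
	s := detSuffix[class]
	if plural {
		return stem + s[1]
	}
	return stem + s[0]
}

func main() {
	fmt.Println(decline("uri", "inanimate", true)) // uriwato
	fmt.Println(decline("su", "human", false))     // sura, as in "ra sura"
}
```

This ignores stress and phonotactics entirely; it only shows that the suffix choice is a pure function of class and number.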
|
||||
|
||||
### Numerals
|
||||
|
||||
There are two kinds of numerals in mapatei: cardinal (counting) and ordinal (ordering) numbers. Numerals are always in [seximal](<https://www.seximal.net>).
|
||||
|
||||
| cardinal (base 6) | mapatei |
|
||||
| :--- | :--- |
|
||||
| 0 | fangu |
|
||||
| 1 | āre |
|
||||
| 2 | mawo |
|
||||
| 3 | piru |
|
||||
| 4 | kīfe |
|
||||
| 5 | tamu |
|
||||
| 10 | rupe |
|
||||
| 11 | rupe jo āre |
|
||||
| 12 | rupe jo mawo |
|
||||
| 13 | rupe jo piru |
|
||||
| 14 | rupe jo kīfe |
|
||||
| 15 | rupe jo tamu |
|
||||
| 20 | mawo rupe |
|
||||
| 30 | piru rupe |
|
||||
| 40 | kīfe rupe |
|
||||
| 50 | tamu rupe |
|
||||
| 100 | theli |
|
||||
|
||||
Ordinal numbers are formed by reduplicating (copying) the first syllable of the cardinal number, and they decline similarly for case. Remember that only the first syllable can be stressed, so any reduplicated syllable must become unstressed.
|
||||
|
||||
| ordinal (base 6) | mapatei |
|
||||
| :--- | :--- |
|
||||
| 0th | fangufa |
|
||||
| 1st | ārea |
|
||||
| 2nd | mawoma |
|
||||
| 3rd | pirupi |
|
||||
| 4th | kīfeki |
|
||||
| 5th | tamuta |
|
||||
| 10th | ruperu |
|
||||
| 11th | ruperu jo ārea |
|
||||
| 12th | ruperu jo mawoma |
|
||||
| 13th | ruperu jo pirupi |
|
||||
| 14th | ruperu jo kīfeki |
|
||||
| 15th | ruperu jo tamuta |
|
||||
| 20th | mawoma ruperu |
|
||||
| 30th | pirupi ruperu |
|
||||
| 40th | kīfeki ruperu |
|
||||
| 50th | tamuta ruperu |
|
||||
| 100th | thelithe |
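The reduplication rule for the simple cases can be sketched in a few lines of Go (rather than the project's Nim). This toy version copies everything up to and including the first plain vowel onto the end of the word; it ignores the stress-dropping rule, so macron words like "āre" and "kīfe" are out of scope:

```go
package main

import (
	"fmt"
	"strings"
)

// ordinal copies the first (C)V syllable of a cardinal number onto
// its end. Only handles plain ASCII vowels; stressed (macron) vowels
// and their unstressing are not modeled here.
func ordinal(cardinal string) string {
	i := strings.IndexAny(cardinal, "aeiou")
	if i < 0 {
		return cardinal
	}
	return cardinal + cardinal[:i+1]
}

func main() {
	fmt.Println(ordinal("mawo"))  // mawoma
	fmt.Println(ordinal("theli")) // thelithe
}
```

Even the "th" digraph falls out for free here, since "h" is not a vowel and so "the" is kept together as the first chunk.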
|
||||
|
||||
Cardinal numbers are optionally declined for case when used as determiners with the following rules:
|
||||
|
||||
| Numeral Class | suffix |
|
||||
| :--- | :--- |
|
||||
| human | -ra |
|
||||
| animal | -mi |
|
||||
| animate | -sa |
|
||||
| inanimate | -li |
|
||||
|
||||
Numeral declension always happens last, so the inanimate nifth (seximal 100 or decimal 36) is thelitheli.
|
||||
|
||||
Here's a few examples:
|
||||
|
||||
three pigs: ondoko pirumi
|
||||
|
||||
the second person: ra mawomara
|
||||
|
||||
one tree: kho āremi
|
||||
|
||||
the nifth day: lundose thelitheli
|
||||
|
||||
### Prepositions
|
||||
|
||||
Prepositions mark any other details about a sentence. In essence, they add information to verbs that would otherwise lack that information.
|
||||
|
||||
fa: with, adds an auxiliary possession to a sentence
|
||||
|
||||
ri: possession, sometimes indicates ownership
|
||||
|
||||
I eat with my wife: wā ngōe fa epi ri ngōe
|
||||
|
||||
ngi: the following phrase is on top of the thing being described
|
||||
|
||||
ka: then (effect)
|
||||
|
||||
ēsa: if/whether
|
||||
|
||||
If I set this dog on the rock, then the house is good: ēsa adunga ngōe pā āwu ngi, ka iri sare eserili
|
||||
|
||||
### Interjections
|
||||
|
||||
Interjections have the following meanings:
|
||||
|
||||
Usually they act like vocatives and have free word order. As a determiner they change meta-properties about the noun/verb like negation.
|
||||
|
||||
wo: no, not
|
||||
|
||||
| English | mapatei |
|
||||
| :--- | :--- |
|
||||
| No! Don't eat that! | wo! wā wo ūto |
|
||||
| I don't eat ants | wā wo ngōe mekoja |
|
||||
|
||||
### Word Order
|
||||
|
||||
mapatei has a VSO word order for sentences. This means that the verb comes first, followed by the subject, and then the object.
|
||||
|
||||
| English | mapatei | gloss |
|
||||
| :--- | :--- | :--- |
|
||||
| the/a child runs | kepheku rako | kepheku.VERB rako.NOUN.human |
|
||||
| The child gave the fish a flower | indofu rako ora āsu | indo.VERB.past rako.NOUN.human ora.NOUN.animal āsu.NOUN.animate |
|
||||
| I love you | ōwo ngōe sīto | ōwo.VERB ngōe.PRN sīto.PRN |
|
||||
| I do not want to eat right now | wā wo ngōe oko mbeli | wā.VERB wo.INTERJ ngōe.PRN oko.PREP mbeli.DET.singular.inanimate |
|
||||
| I have a lot of love, and I'm happy about it | urii ngōe erua fomboewato, jo iri ngōe phajera lo li | urii.VERB ngōe.PRN eruaja.NOUN.plural.inanimate fomboewato.DET.plural.inanimate, jo.CONJ iri.VERB ngōe.PRN phajera.DET.singular.human lo.PREP li.PRN |
|
||||
| The tree I saw yesterday is gone now | pōkhufu kho ngōe, oko iri māndosa mbe | pōkhu.VERB.past kho.NOUN.animate ngōe.PRM, oko.PREP iri.VERB māndo.DET.animate mbe.PRN |
|
||||
|
||||
## Code
|
||||
|
||||
As I mentioned earlier, I've been working on some code [here](<https://github.com/Xe/mapatei/tree/master/src/mapatei>) to handle things like making sure words are valid. This includes a word validator which I am very happy with.
|
||||
|
||||
Words are made up of syllables, which are made up of letters. In code:
|
||||
|
||||
```
|
||||
type
|
||||
Letter* = object of RootObj
|
||||
case isVowel*: bool
|
||||
of true:
|
||||
stressed*: bool
|
||||
of false: discard
|
||||
value*: string
|
||||
|
||||
Syllable* = object of RootObj
|
||||
consonant*: Option[Letter]
|
||||
vowel*: Letter
|
||||
stressed*: bool
|
||||
|
||||
Word* = ref object
|
||||
syllables*: seq[Syllable]
|
||||
|
||||
```
|
||||
|
||||
Letters are parsed out of strings using [this code](<https://github.com/Xe/mapatei/blob/92a429fa9a509af5df5b55810bda03061f21475c/src/mapatei/letters.nim#L35-L89>). It's an iterator, so users have to manually loop over it:
|
||||
|
||||
```
|
||||
import unittest
|
||||
import mapatei/letters
|
||||
|
||||
let words = ["pirumi", "kho", "lundose", "thelitheli", "fōmbu"]
|
||||
|
||||
suite "Letter":
|
||||
for word in words:
|
||||
test word:
|
||||
for l in word.letters:
|
||||
discard l
|
||||
```
|
||||
|
||||
This test loops over the given words (taken from the dictionary and enlightening test cases) and makes sure that letters can be parsed out of them.
|
||||
|
||||
Next, syllables are made out of letters, so syllables are parsed using a [finite state machine](<https://en.wikipedia.org/wiki/Finite-state_machine>) with the following transition rules:
|
||||
|
||||
| Present state | Next state for vowel | Next state for consonant | Next state for end of input |
|
||||
| :--- | :--- | :--- | :--- |
|
||||
| Init | Vowel/stressed | Consonant | Illegal |
|
||||
| Consonant | Vowel/stressed | End | Illegal |
|
||||
| Vowel | End | End | End |
|
||||
|
||||
Some other hacking was done [in the code](<https://github.com/Xe/mapatei/blob/92a429fa9a509af5df5b55810bda03061f21475c/src/mapatei/syllable.nim#L36-L76>), but otherwise it is a fairly literal translation of that truth table.
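As a rough illustration of that transition table in code (in Go rather than the project's Nim, and only for single-letter consonants and unstressed vowels; digraphs like "mb"/"th" and macron vowels are left to the real parser):

```go
package main

import (
	"fmt"
	"strings"
)

// syllables splits a word into (C)V chunks following the table above:
// a vowel always closes the current syllable, and two bare consonants
// in a row (or a trailing consonant) are illegal.
func syllables(word string) ([]string, error) {
	var out []string
	var cur strings.Builder
	for _, r := range word {
		cur.WriteRune(r)
		if strings.ContainsRune("aeiou", r) {
			out = append(out, cur.String()) // vowel ends the syllable
			cur.Reset()
		} else if cur.Len() > 1 {
			return nil, fmt.Errorf("two consonants in a row in %q", word)
		}
	}
	if cur.Len() != 0 {
		return nil, fmt.Errorf("%q ends on a consonant", word)
	}
	return out, nil
}

func main() {
	s, err := syllables("ipai")
	fmt.Println(s, err) // [i pa i] <nil>
}
```

Note that "ipai" splits as i.pa.i, matching the dictionary table earlier in the post.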
|
||||
|
||||
And finally we can check to make sure that each word only has a head-initial stressed syllable:
|
||||
|
||||
```
|
||||
type InvalidWord* = object of Exception
|
||||
|
||||
proc parse*(word: string): Word =
|
||||
var first = true
|
||||
result = Word()
|
||||
|
||||
for syll in word.syllables:
|
||||
if not first and syll.stressed:
|
||||
raise newException(InvalidWord, "cannot have a stressed syllable here")
|
||||
if first:
|
||||
first = false
|
||||
result.syllables.add syll
|
||||
```
|
||||
|
||||
And that's enough to validate every word in the [dictionary](<https://docs.google.com/spreadsheets/d/1HSGS8J8IsRzU0e8hyujGO5489CMzmHPZSBbNCtOJUAg>). Future extensions will include automatic conjugation/declension as well as going from a stream of words to an understanding of sentences.
|
||||
|
||||
## Useful Resources Used During This
|
||||
|
||||
Creating a language from scratch is surprisingly hard work. These resources helped me a lot though.
|
||||
|
||||
- [Polyglot](https://draquet.github.io/PolyGlot/) to help with dictionary management
|
||||
- [Awkwords](http://akana.conlang.org/tools/awkwords/) to help with word creation
|
||||
- These lists of core concepts: [list 1](https://forum.duolingo.com/comment/4571123) and [list 2](https://forum.duolingo.com/comment/4664475)
|
||||
- The conlang critic [discord](https://discord.gg/AxEmeDa)
|
||||
|
||||
---
|
||||
|
||||
Thanks for reading this! I hope this blogpost helps to kick off mapatei development into unique and more fleshed out derivative conlangs. Have fun!
|
||||
|
||||
Special thanks to jan Misali for encouraging this to happen.
|
||||
|
||||
[conlangcritic]: https://www.youtube.com/user/HBMmaster8472
|
|
@ -1,51 +0,0 @@
|
|||
---
|
||||
title: "Mara: Sh0rk of Justice: Version 1.0.0 Released"
|
||||
date: 2020-12-28
|
||||
tags:
|
||||
- gameboy
|
||||
- gbstudio
|
||||
- indiedev
|
||||
---
|
||||
|
||||
# Mara: Sh0rk of Justice: Version 1.0.0 Released
|
||||
|
||||
Over the long weekend I found out about a program called [GB Studio](https://www.gbstudio.dev).
|
||||
It's a simple drag-and-drop interface that you can use to make homebrew games for the
|
||||
[Nintendo Game Boy](https://en.wikipedia.org/wiki/Game_Boy). I was intrigued and I had
|
||||
some time, so I set out to make a little top-down adventure game. After a few days of
|
||||
tinkering I came up with an idea and created Mara: Sh0rk of Justice.
|
||||
|
||||
[You made a game about me? :D](conversation://Mara/hacker)
|
||||
|
||||
> Guide Mara through the spooky dungeon in order to find all of its secrets. Seek out
|
||||
> the secrets of the spooks! Defeat the evil mage! Solve the puzzles! Find the items
|
||||
> of power! It's up to you to save us all, Mara!
|
||||
|
||||
You can play it in an `<iframe>` on itch.io!
|
||||
|
||||
<iframe frameborder="0" src="https://itch.io/embed/866982?dark=true" width="552" height="167"><a href="https://withinstudios.itch.io/mara-sh0rk-justice">Mara: Sh0rk of Justice by Within</a></iframe>
|
||||
|
||||
## Things I Learned
|
||||
|
||||
Game development is hard. Even with tools that help you do it, there's a limit to how
|
||||
much you can get done at once. Everything links together. You really need to test
|
||||
things both in isolation and as a cohesive whole.
|
||||
|
||||
I cannot compose music to save my life. I used free-to-use music assets from the
|
||||
[GB Studio Community Assets](https://github.com/DeerTears/GB-Studio-Community-Assets)
|
||||
pack to make this game. I think I managed to get everything acceptable.
|
||||
|
||||
GB Studio is rather inflexible. It feels like it's there to really help you get
|
||||
started from a template. Even though you can make the whole game from inside GB
|
||||
Studio, I probably should have ejected the engine to source code so I could
|
||||
customize some things like the jump button being weird in platforming sections.
|
||||
|
||||
Pixel art is an art of its own. I used a lot of free-to-use assets from itch.io for
|
||||
the tileset and a few NPCs. The rest I created myself using
|
||||
[Aseprite](https://www.aseprite.org). Getting Mara's walking animation to a point
|
||||
that I thought was acceptable was a chore. I found a nice compromise though.
|
||||
|
||||
---
|
||||
|
||||
Overall I'm happy with the result as a whole. Try it out, see how you like it and
|
||||
please do let me know what I can improve on for the future.
|
|
@ -1,167 +0,0 @@
|
|||
---
title: "maybedoer: the Maybe Monoid for Go"
date: 2020-05-23
tags:
- go
- golang
- monoid
---

# maybedoer: the Maybe Monoid for Go

I recently posted (a variant of) this image of some Go source code to Twitter
and it spawned some interesting conversations about what it does, how it works
and why it needs to exist in the first place:

![the source code of package maybedoer](/static/blog/maybedoer.png)

This file is used to sequence functions that could fail together, allowing you
to avoid doing an `if err != nil` check on every single fallible function call.
There are two major usage patterns for it.

The first one is the imperative pattern, where you call it like this:

```go
md := new(maybedoer.Impl)

var data []byte

md.Maybe(func(context.Context) error {
	var err error

	data, err = ioutil.ReadFile("/proc/cpuinfo")

	return err
})

// add a few more maybe calls?

if err := md.Error(); err != nil {
	ln.Error(ctx, err, ln.Fmt("cannot munge data in /proc/cpuinfo"))
}
```

The second one is the iterative pattern, where you call it like this:

```go
func gitPush(repoPath, branch, to string) maybedoer.Doer {
	return func(ctx context.Context) error {
		// the repoPath, branch and to variables are available here
		return nil
	}
}

func repush(ctx context.Context) error {
	repoPath, err := ioutil.TempDir("", "")
	if err != nil {
		return fmt.Errorf("error making checkout: %v", err)
	}

	md := maybedoer.Impl{
		Doers: []maybedoer.Doer{
			gitConfig, // assume this is implemented
			gitClone(repoPath, os.Getenv("HEROKU_APP_GIT_REPO")), // and this too
			gitPush(repoPath, "master", os.Getenv("HEROKU_GIT_REMOTE")),
		},
	}

	err = md.Do(ctx)
	if err != nil {
		return fmt.Errorf("error repushing Heroku app: %v", err)
	}

	return nil
}
```
|
||||
|
||||
Both of these ways allow you to sequence fallible actions without having to
|
||||
write `if err != nil` after each of them, making this easily scale out to
|
||||
arbitrary numbers of steps. The design of this is inspired by a package used at
|
||||
a previous job where we used it to handle a lot of fiddly fallible actions that
|
||||
need to happen one after the other.
|
||||
|
||||
However, this version differs because of the `Doers` element of
|
||||
`maybedoer.Impl`. This allows you to specify an entire process of steps as long
|
||||
as those steps don't return any values. This is very similar to how Haskell's
|
||||
[`Data.Monoid.First`](http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-Monoid.html#t:First)
|
||||
type works, except in Go this is locked to the `error` type (due to the language
|
||||
not letting you describe things as precisely as you would need to get an analog
|
||||
to `Data.Monoid.First`). This is also similar to Rust's `and_then` combinator.
|
||||
|
||||
If we could return values from these functions, this would make `maybedoer`
|
||||
closer to being a monad in the Haskell sense. However we can't so we are locked
|
||||
to one specific instance of a monoid. I would love to use this for a pointer (or
|
||||
pointer-like) reference to any particular bit of data, but `interface{}` doesn't
|
||||
allow this because `interface{}` matches _literally everything_:
|

```go
var foo = []interface{}{
	1,
	3.4,
	"hi there",
	context.Background(),
	errors.New("this works too!"),
}
```

This could mean that if we changed the type of a Doer to be:

```go
type Doer func(context.Context) interface{}
```

Then it would be difficult to know how to handle returns from the function.
Arguably we could write some mechanism to check if it is an error:

```go
result := do(ctx)
if result != nil {
	switch result := result.(type) {
	case error:
		return result // result now has the static type error
	default:
		md.result = result // stash the value somewhere for later steps
	}
}
```

But then it would be difficult to know how to pipe the result into the next
function, unless we change Doer's type to be:

```go
type Doer func(context.Context, interface{}) interface{}
```

Which would require code that looks like this:

```go
func getNumber(ctx context.Context, _ interface{}) interface{} {
	return 2
}

func double(ctx context.Context, num interface{}) interface{} {
	switch num := num.(type) {
	case int:
		return num * 2
	default:
		return fmt.Errorf("wanted num to be an int, got: %T", num)
	}
}
```

But this kind of repetition would be required for _every function_. I don't
really know what the best way to solve this in a generic way would be, but I'm
fairly sure that these fundamental limitations in Go prevent this package from
being genericized to handle function outputs and inputs beyond what you can do
with currying (and maybe clever pointer usage).

I would love to be proven wrong though. If anyone can take this [source code
under the MIT license](/static/blog/maybedoer.go) and prove me wrong, I will
stand corrected and update this blogpost with the solution.

This kind of thing is easier to solve in Rust with its
[Result](https://doc.rust-lang.org/std/result/) type; arguably the entire
problem this Go package solves is irrelevant in Rust, because the solution is
already in Rust's standard library.
@ -1,397 +0,0 @@
---
title: "Minicompiler: Lexing"
date: 2020-10-29
series: rust
tags:
- rust
- templeos
- compiler
---

# Minicompiler: Lexing

I've always wanted to make my own compiler. Compilers are an integral part of
my day to day job and I use the fruits of them constantly. A while ago while I
was browsing through the TempleOS source code I found
[MiniCompiler.HC][minicompiler] in the `::/Demos/Lectures` folder and I was a
bit blown away. It implements a two-phase compiler from simple math expressions
to AMD64 machine code (complete with bit-banging it into an array that the code
later jumps to) and has a lot to teach about how compilers work. For those of
you that don't have a TempleOS VM handy, here is a video of MiniCompiler.HC in
action:

[minicompiler]: https://github.com/Xe/TempleOS/blob/master/Demo/Lectures/MiniCompiler.HC

<video controls width="100%">
  <source src="https://cdn.christine.website/file/christine-static/img/minicompiler/tmp.YDcgaHSb3z.webm"
    type="video/webm">
  <source src="https://cdn.christine.website/file/christine-static/img/minicompiler/tmp.YDcgaHSb3z.mp4"
    type="video/mp4">
  Sorry, your browser doesn't support embedded videos.
</video>

You put in a math expression, the compiler builds it and then spits out a bunch
of assembly and runs it to return the result. In this series we are going to be
creating an implementation of this compiler that targets [WebAssembly][wasm].
This compiler will be written in Rust and will use only the standard library for
everything but the final bytecode compilation and execution phase. There is a
lot going on here, so I expect this to be at least a three part series. The
source code will be in [Xe/minicompiler][Xemincompiler] in case you want to read
it in detail. Follow along and let's learn some Rust on the way!

[wasm]: https://webassembly.org/
[Xemincompiler]: https://github.com/Xe/minicompiler

[Compilers for languages like C are built on top of the fundamentals here, but
they are _much_ more complicated.](conversation://Mara/hacker)

## Description of the Language

This language uses normal infix math expressions on whole numbers. Here are a
few examples:

- `2 + 2`
- `420 * 69`
- `(34 + 23) / 38 - 42`
- `(((34 + 21) / 5) - 12) * 348`

Ideally we should be able to nest the parentheses as deep as we want without any
issues.

Looking at these values we can notice a few patterns that will make parsing this
a lot easier:

- There are only four major parts to this language:
  - numbers
  - math operators
  - open parentheses
  - close parentheses
- All of the math operators act identically and take two arguments
- Each program is one line long and ends at the end of the line

Let's turn this description into Rust code:

## Bringing in Rust

Make a new project called `minicompiler` with a command that looks something
like this:

```console
$ cargo new minicompiler
```

This will create a folder called `minicompiler` and a file called `src/main.rs`.
Open that file in your editor and copy the following into it:

```rust
// src/main.rs

/// Mathematical operations that our compiler can do.
#[derive(Debug, Eq, PartialEq)]
enum Op {
    Mul,
    Div,
    Add,
    Sub,
}

/// All of the possible tokens for the compiler, this limits the compiler
/// to simple math expressions.
#[derive(Debug, Eq, PartialEq)]
enum Token {
    EOF,
    Number(i32),
    Operation(Op),
    LeftParen,
    RightParen,
}
```

[In compilers, "tokens" refer to the individual parts of the language you are
working with. In this case every token represents every possible part of a
program.](conversation://Mara/hacker)

And then let's start a function that can turn a program string into a bunch of
tokens:

```rust
// src/main.rs

fn lex(input: &str) -> Vec<Token> {
    todo!("implement this");
}
```

[Wait, what do you do about bad input such as things that are not math expressions?
Shouldn't this function be able to fail?](conversation://Mara/hmm)

You're right! Let's make a little error type that represents bad input. For
creativity's sake let's call it `BadInput`:

```rust
// src/main.rs

use std::error::Error;
use std::fmt;

/// The error that gets returned on bad input. This only tells the user that it's
/// wrong because debug information is out of scope here. Sorry.
#[derive(Debug, Eq, PartialEq)]
struct BadInput;

// Errors need to be displayable.
impl fmt::Display for BadInput {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "something in your input is bad, good luck")
    }
}

// The default Error implementation will do here.
impl Error for BadInput {}
```

And then let's adjust the type of `lex()` to compensate for this:

```rust
// src/main.rs

fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    todo!("implement this");
}
```

So now that we have the function type we want, let's start implementing `lex()`
by setting up the result and a loop over the characters in the input string:

```rust
// src/main.rs

fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    let mut result: Vec<Token> = Vec::new();

    for character in input.chars() {
        todo!("implement this");
    }

    Ok(result)
}
```

Looking at the examples from earlier we can start writing some boilerplate to
turn characters into tokens:

```rust
// src/main.rs

// ...

for character in input.chars() {
    match character {
        // Skip whitespace
        ' ' => continue,

        // Ending characters
        ';' | '\n' => {
            result.push(Token::EOF);
            break;
        }

        // Math operations
        '*' => result.push(Token::Operation(Op::Mul)),
        '/' => result.push(Token::Operation(Op::Div)),
        '+' => result.push(Token::Operation(Op::Add)),
        '-' => result.push(Token::Operation(Op::Sub)),

        // Parentheses
        '(' => result.push(Token::LeftParen),
        ')' => result.push(Token::RightParen),

        // Numbers
        '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
            todo!("implement number parsing")
        }

        // Everything else is bad input
        _ => return Err(BadInput),
    }
}

// ...
```

[Ugh, you're writing `Token::` and `Op::` a lot. Is there a way to simplify
that?](conversation://Mara/hmm)

Yes! enum variants can be shortened to their names with a `use` statement like
this:

```rust
// src/main.rs

// ...

use Op::*;
use Token::*;

match character {
    // ...

    // Math operations
    '*' => result.push(Operation(Mul)),
    '/' => result.push(Operation(Div)),
    '+' => result.push(Operation(Add)),
    '-' => result.push(Operation(Sub)),

    // Parentheses
    '(' => result.push(LeftParen),
    ')' => result.push(RightParen),

    // ...
}

// ...
```

Which looks a _lot_ better.

[You can use the `use` statement just about anywhere in your program. However to
keep things flowing nicer, the `use` statement is right next to where it is
needed in these examples.](conversation://Mara/hacker)

Now we can get into the fun that is parsing numbers. When he wrote MiniCompiler,
Terry Davis used an approach that is something like this (spacing added for
readability):

```c
case '0'...'9':
  i = 0;
  do {
    i = i * 10 + *src - '0';
    src++;
  } while ('0' <= *src <= '9');
  *num = i;
```

This sets an intermediate variable `i` to 0 and then consumes characters from
the input string as long as they are between `'0'` and `'9'`. As a neat side
effect of the numbers being input in base 10, you can conceptualize `42` as
`(4 * 10) + 2`. So it multiplies the old digit by 10 and then adds the new digit
to the resulting number. Our setup doesn't let us get that fancy as easily,
however we can emulate it with a bit of stack manipulation according to these
rules:

- If `result` is empty, push this number to `result` and continue lexing the
  program
- Pop the last item in `result` and save it as `last`
- If `last` is a number, multiply that number by 10 and add the current number
  to it
- Otherwise push `last` back into `result` and push the current number to
  `result` as well

Translating these rules to Rust, we get this:

```rust
// src/main.rs

// ...

// Numbers
'0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' => {
    let num: i32 = (character as u8 - '0' as u8) as i32;
    if result.len() == 0 {
        result.push(Number(num));
        continue;
    }

    let last = result.pop().unwrap();

    match last {
        Number(i) => {
            result.push(Number((i * 10) + num));
        }
        _ => {
            result.push(last);
            result.push(Number(num));
        }
    }
}

// ...
```

[This is not the most robust number parsing code in the world, however it will
suffice for now. Extra credit if you can identify the edge
cases!](conversation://Mara/hacker)
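As a sketch of one such edge case: whitespace is skipped before digits are grouped, so two numbers separated only by spaces collapse into a single token. Here is a self-contained version of the lexer assembled from the snippets above (slightly condensed, with the same behavior) that demonstrates it:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug, Eq, PartialEq)]
enum Op { Mul, Div, Add, Sub }

#[derive(Debug, Eq, PartialEq)]
enum Token { EOF, Number(i32), Operation(Op), LeftParen, RightParen }

#[derive(Debug, Eq, PartialEq)]
struct BadInput;

impl fmt::Display for BadInput {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "something in your input is bad, good luck")
    }
}

impl Error for BadInput {}

fn lex(input: &str) -> Result<Vec<Token>, BadInput> {
    use Op::*;
    use Token::*;
    let mut result: Vec<Token> = Vec::new();

    for character in input.chars() {
        match character {
            ' ' => continue,
            ';' | '\n' => {
                result.push(EOF);
                break;
            }
            '*' => result.push(Operation(Mul)),
            '/' => result.push(Operation(Div)),
            '+' => result.push(Operation(Add)),
            '-' => result.push(Operation(Sub)),
            '(' => result.push(LeftParen),
            ')' => result.push(RightParen),
            '0'..='9' => {
                let num = (character as u8 - b'0') as i32;
                match result.pop() {
                    // Fold this digit into the previous number token.
                    Some(Number(i)) => result.push(Number(i * 10 + num)),
                    Some(other) => {
                        result.push(other);
                        result.push(Number(num));
                    }
                    None => result.push(Number(num)),
                }
            }
            _ => return Err(BadInput),
        }
    }

    Ok(result)
}

fn main() {
    use Token::*;
    // Whitespace is skipped before digits are grouped, so "1 2" lexes
    // to a single Number(12) rather than two separate numbers.
    assert_eq!(lex("1 2"), Ok(vec![Number(12)]));
}
```

Whether this counts as a bug or a feature depends on how the parser in the next part treats adjacent numbers; the single-expression-per-line design means it rarely bites in practice.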

This should cover the tokens for the language. Let's write some tests to be sure
everything is working the way we think it is!

## Testing

Rust has a [robust testing
framework](https://doc.rust-lang.org/book/ch11-00-testing.html) built into the
standard library. We can use it here to make sure we are generating tokens
correctly. Let's add the following to the bottom of `main.rs`:

```rust
#[cfg(test)] // tells the compiler to only build this code when tests are being run
mod tests {
    use super::{Op::*, Token::*, *};

    // registers the following function as a test function
    #[test]
    fn basic_lexing() {
        assert!(lex("420 + 69").is_ok());
        assert!(lex("tacos are tasty").is_err());

        assert_eq!(
            lex("420 + 69"),
            Ok(vec![Number(420), Operation(Add), Number(69)])
        );
        assert_eq!(
            lex("(30 + 560) / 4"),
            Ok(vec![
                LeftParen,
                Number(30),
                Operation(Add),
                Number(560),
                RightParen,
                Operation(Div),
                Number(4)
            ])
        );
    }
}
```

This test can and probably should be expanded on, but when we run `cargo test`:

```console
$ cargo test
   Compiling minicompiler v0.1.0 (/home/cadey/code/Xe/minicompiler)
    Finished test [unoptimized + debuginfo] target(s) in 0.22s
     Running target/debug/deps/minicompiler-03cad314858b0419

running 1 test
test tests::basic_lexing ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```

And hey presto! We verified that the lexing is working correctly. These test
cases cover the main functionality of the language.

---

This is it for part 1. We covered a lot today. Next time we are going to run a
validation pass on the program, convert the infix expressions to reverse polish
notation and then also get started on compiling that to WebAssembly. This has
been fun so far and I hope you were able to learn from it.

Special thanks to the following people for reviewing this post:

- Steven Weeks
- sirpros
- Leonora Tindall
- Chetan Conikee
- Pablo
- boopstrap
- ash2x3
@ -1,110 +0,0 @@
---
title: MrBeast is Postmodern Gold
date: 2019-06-05
tags:
- mrbeast
- postmodern
- youtube
---

Author's note: I've been going through a lot lately. This Monday I was in the emergency room after having a panic attack. I have a folder of writing in my notes that I use to help work off steam. I don't know why, but writing this article really helped me feel better. I can only hope it helps make your day feel better too.

# MrBeast is Postmodern Gold

The year is 2019. Politicians have fallen asleep at the wheel. Capitalism controls large segments of the hearts and minds of the populace. Social class is increasingly only a construct. Popularity is becoming irrelevant. Money has no value. The ultimate expendability of entire groups of people is as obvious as the sunrise and sunset. Nothing feels real. There's no real reason for people to get up and continue, yet life goes on. Somehow, even after a decade of aid and memes, children in Africa are _still_ starving.

The next generation has grown up with technology and advertising. Entire swaths of the market know to ignore the very advertising that keeps alive the de-facto utilities they use to communicate with friends (though the creators of those services will insist that using them is a free choice). You have to unplug your cigarette (that your friend got you hooked on) to charge your book. Marketing has driven postmodernism to a whole new level, one that leads McDonalds to ask Wendys if they are okay after Wendys posts cryptic, confusing messages. Companies that just want to do business get blocked by racist policies set by people who have all but died off since. What can be done about this? Who should we turn to for quality entertainment to help quench this generational angst against a nameless, faceless machine that controls nearly all of functional civilization?

Enter [MrBeast](https://www.youtube.com/channel/UCX6OQ3DkcsbYNE6H8uQQuVA). This youtuber has reached new levels of content purely by making capitalism itself the content. With his crew of people and their peculiar views on life, they do a good job at making some quality content for this hyper-capitalist world that they have found themselves in.

One of the main ways that YouTube creators have been under fire lately is because of politically or otherwise topically charged content. MrBeast is completely devoid of anything close to politically sensitive or insensitive. It's literally content about money and how it gets spent on things that get filmed and posted to YouTube in an effort to create more AdSense revenue in order to get even more money.

I don't really know if there is a proper way to categorize this YouTuber. He brings a unique feeling into everything he does, with such a wholesome overall experience. Sponsorship money gets donated to Twitch streamers and he makes videos of their reactions. He bought a house and had his friends put their hands on it, with the last one touching it getting the house. He went to every single Wal-Mart in the continental United States. He drove a Lego car around his local town until he got pulled over by the cops. And yes, like the YouTuber legend goes, he started many years ago doing Minecraft Let's Plays as a screechy-voiced teenager.
## Gluttony

Consider videos like [this one](https://youtu.be/7zi0bi-RDj4) where they spend an absurd amount of money eating five-star food. "This first steak is called 'Kobe (pronounced /ko.bi/) beef' and we wanted to experience it because it cost [USD]$1000 and we wanted to see if it was worth the price." Then they eat the steak and act like it's no big deal, joking that each section of the meat is worth $30-40. "Alright bros, I'm PewDiePie and we just ate kobe (pronounced /ko.bei/) beef."

Then they go to another place (which has walls that are obviously plywood spray-painted black) and he offers one of his friends $100 to eat some random grasshopper. Chris eats it almost immediately. Everyone else in the room freaks out a little, commenting on the crunch sound. "That's pretty good". Garrett turns it down. Chandler also eats it without much hesitation, later commenting on the crunch of the chitin shell of the bug.

Then MrBeast offers a plate of crickets and grasshoppers to the three. He offers $1000 for eating it. Chris sounds like he's open to eating it, but offers the rest a chance. Garrett IMMEDIATELY turns it down. Chandler eats all of them at once. He has some issues chewing them (again with the crunch, eeeeugh), but Chandler easily eats it all, instantly becoming a thousand dollars richer.

The room gags and laughs, the friendship between the boys $1200 stronger.

Then they go get goose liver served on rice and a hundred-year-old egg. Uh-oh, both of these are delicacies. How will they react?

The goose liver comes out first. MrBeast eats the hors d'œuvre in one bite. Chris has some trouble, but manages to take it down. Chandler is heaving. His friends cheer him on with loving words of compassion like "you don't like liver?"

What.

The "century egg" comes out. They make the mistake of smelling it. Oh no. MrBeast eats it just fine. Chandler spits a $500 item of food into the trash after gagging. Chris ejects it into his napkin while MrBeast chants his name. Chris gags while his friends act like they are congratulating him. "It's like someone hocked a loogie into your mouth."

Before you ask, no, this isn't an initiation stunt. They literally do this kind of stuff on a regular basis. Remember that money is the content here; the fact that all of this stuff costs ridiculous amounts of money is the main reason for these videos to be created.

Later in the video, they drive to New York to eat gold-plated tomahawk steak. I've actually had tomahawk steak once and it was really good (thanks Uncle Marc). Where else to eat a golden steak than the golden steak place?

"This is the most expensive restaurant we can find. If I don't spend $10,000 all of you can punch me; because we will spend $10,000. What's that name?"

Nobody can pronounce "Nurs-et", the name of the restaurant. "None of us knew how to pronounce it, so it must be good."

What.

It was good though.
## Foolishness

In another video of his, he gets [his friends to spend 24 hours in a horrific mockup of an "insane asylum"](https://youtu.be/nuM0Z4a7kMs). For a first in these challenges, they split into two teams: Team Red and Team Black. Four of his crew are put into straitjackets with no other instructions.

They start predictably acting like a stereotypical American view of insane people. Twitching as they talk to the camera. Rolling around on the floor. "What is time?" Chandler is banging his head against the wall.

> MrBeast: "Chris, how long do you think you're gonna last?"
> Chris: "Banana sundae."

"Insanity is repeating the same thing over and over again and expecting a different outcome."

Much like Survivor, there are cutaways to the individual teams as they plan out their high-level strategy for the "game". What. There is no strategy needed, they just need to sit in a room and be quiet for 24 hours. Reminds me of that one quote by Blaise Pascal in Pensées:

> All of humanity's problems stem from man's inability to sit quietly in a room alone.

And no, these people can't sit quietly in a room. You see them dancing back and forth in a line in front of the camera. They get locked into the room and the time-lapse shows 10 minutes of them walking around in circles.

The door gets yelled at. MrBeast notes the absurdity of the thing. The bright, unforgiving white walls of the asylum pierce the darkness of my room as I write this article.

> "Help. Me. I. Need...I don't need anything~"
> "Y'all got any beans? Y'all got any baked beans?"

- Chris

They raise someone on Chandler's shoulders, not a small accomplishment considering they don't have access to their arms. Someone speaks into the security camera: "Hello? I'm about to fall, please go back down."

MrBeast attempts to go into the room, do snow angels and not say a single thing. The occupants have other plans, yelling when the door opens to alert each other. They crowd around MrBeast, making it impossible to do his chosen task. They pin MrBeast into a corner and he tries to escape, but then there's a problem. The people won't let him leave. He manages to get out.

Later MrBeast gets an idea to mess with the people. He gets a megaphone and puts it into siren mode, expecting them to not be able to turn it off. He is proven wrong almost instantly. They used their feet to turn it off. Then they start making noise with it. The megaphone is retrieved using the most heinous of weapons, an umbrella. A layer of duct tape is added and the experiment is repeated. They still manage to turn it off. They used their teeth. Low-light conditions didn't stop them. Not having their hands didn't stop them. Can anything stop these mad lads?

They attempt to retrieve the sound emitter again. The prisoners break it in retaliation. MrBeast seems okay with that, yet disappointed. However, he suffers a casualty on his way out. MrBeast attempted to push back Chandler using the holy umbrella. Chandler took the umbrella from him with nothing but his tied-up arms.

What.

What is this video about again? What is the purpose? These people are getting money or something for being the last person standing? What is going on?

Oh, right, this is a challenge. The last two people to be in the room together win some amount of money.

Well, the people are screaming for entertainment. That's not unexpected, but that's just how it goes I guess. Quality. Content.

> Let's have a dance party and then Chandler can poop. Rate who dances better in the comments section.

- MrBeast, 10:22-ish

What.

8 hours in, Chandler somehow dislocated his entire right arm. You can see it hanging there, obviously out of place. It looks like he's in massive pain. He tore a muscle. He was pulled out of the challenge. Another challenge lost by Chandler.

Chris drops out at 14 hours. The two winners are unsure what to do with themselves and their winnings. What are they again? Five grand? Chandler tore his shoulder out of its socket and Chris risked ear damage for...FIVE GRAND?

What. Just what.

The entire channel is full of this stuff. I could go on for hours.

---

Also MrBeast, if you're reading this, add me on Fortnite. I'd love to play some Duos with you and shitpost about the price of bananas.
@ -1,92 +0,0 @@
---
title: "Book Release: Musings from Within"
date: 2020-07-28
tags:
- release
- book
- musingsfromwithin
---

# Book Release: Musings from Within

I am happy to announce that I have successfully created an eBook compilation of
the best of the posts on this blog plus a bunch of writing I have never before
made public, and the result is now available for purchase on
[itch.io](https://withinstudios.itch.io/musings-from-within) and the Kindle
Store (TODO(Xe): add kindle link here when it gets approved) for USD$5. This
book is the product of 5 years of effort writing, getting better at writing,
failing at writing and everything in between.

I have collected the following essays, poems, recipes and stories:

- Against Label Permanence
- A Letter to Those Who Bullied Me
- All There is is Now
- Alone
- Barrier
- Bricks
- Chaos Magick Debugging
- Chicken Stir Fry
- Creator’s Mission
- Death
- Died to Save Me
- Don't Look Into the Light
- Every Koan Ever
- Final Chapter
- Gratitude
- h
- How HTTP Requests Work
- Humanity
- I Love
- Instant Pot Quinoa Taco Bowls
- Instant Pot Spaghetti
- I Put Words on this Webpage so You Have to Listen to Me Now
- I Remember
- It Is Free
- Listen to Your Rubber Duck
- MrBeast is Postmodern Gold
- My Experience Cursing Out God
- Narrative of Sickness
- One Day
- Plurality-Driven Development
- Practical Kasmakfa
- Questions
- Second Go Around
- Self
- Sorting Time
- Tarot for Hackers
- The Gears and The Gods
- The Origin of h
- The Service is Already Down
- The Story of Hol
- The Sumerian Creation Myth
- Toast Sandwich Recipe
- Untitled Cyberpunk Furry Story
- We Exist
- What It’s Like to Be Me
- When Then Zen
- When Then Zen: Anapana
- When Then Zen: Wonderland Immersion
- You Are Fine
||||
Most of these are available on this site, but a good portion of them are not
|
||||
available anywhere else. There's poetry about shamanism, stories about
|
||||
reincarnation, koans and more.
|
||||
|
||||
I am also uploading eBook files to my [Patreon](https://patreon.com/cadey) page;
|
||||
anyone who supports me for $1 or more has [immediate
|
||||
access](https://www.patreon.com/posts/39825969)
|
||||
to the DRM-free ePub, MOBIPocket and PDF files of this book.
|
||||
|
||||
If you are facing financial difficulties, want to read my book, and simply
|
||||
cannot afford it, please [contact me](/contact) and I will send you my book free
|
||||
of charge.
|
||||
|
||||
Feedback and reviews of this book are more than welcome. If you decide to tweet
|
||||
or toot about it, please use the hashtag `#musingsfromwithin` so I can collect
|
||||
them into future updates to the description of the store pages, as well as
|
||||
assemble them below.
|
||||
|
||||
Enjoy the book! My hope is that you get as much from it as I've gotten from
|
||||
writing these things for the last 5 or so years. Here's to five more. I'll
|
||||
likely create another anthology/collection of them at that point.
|
|
@ -5,9 +5,7 @@ date: 2019-03-14
|
|||
|
||||
# My Career So Far in Dates/Titles/Salaries
|
||||
|
||||
Let this be inspiration to whoever is afraid of trying, failing and being fired.
|
||||
Every single one of these jobs has taught me lessons I've used daily in my
|
||||
career.
|
||||
|
||||
## First Jobs
|
||||
|
||||
|
@ -19,14 +17,11 @@ I don't have exact dates on these, but my first jobs were:
|
|||
|
||||
I ended up walking out on the delivery job, but that's a story for another day.
|
||||
|
||||
Most of what I learned from these jobs was the value of labor and when to just
|
||||
shut up and give people exactly what they are asking for. Even if it's what they
|
||||
might not want.
|
||||
|
||||
## Salaried Jobs
|
||||
|
||||
The following table is a history of my software career by title, date and salary
|
||||
(company names are omitted).
|
||||
|
||||
| Title | Start Date | End Date | Days Worked | Days Between Jobs | Salary | How I Left |
|
||||
|:----- |:---------- |:-------- |:----------- |:----------------- |:------ |:---------- |
|
||||
|
@ -40,19 +35,11 @@ The following table is a history of my software career by title, date and salary
|
|||
| Software Engineer | August 24, 2016 | November 22, 2016 | 90 days | 21 days | $105,000/year | Terminated |
|
||||
| Consultant | February 13, 2017 | November 13, 2017 | 273 days | 83 days | don't remember | Hired |
|
||||
| Senior Software Engineer | November 13, 2017 | March 8, 2019 | 480 days | 0 days | $150,000/year | Voluntary quit |
|
||||
| Senior Site Reliability Expert | May 6, 2019 | October 27, 2020 | 540 days | 48 days | CAD$115,000/year (about USD$ 80k and change) | Voluntary quit |
|
||||
| Software Designer | December 14, 2020 | *current* | n/a | n/a | CAD$135,000/year (about USD$ 105k and change) | n/a |
|
||||
|
||||
Even though I've been fired three times, I don't regret my career as it's been
|
||||
thus far. I've been able to work on experimental technology integrating into
|
||||
phone systems. I've worked in a mixed PHP/Haskell/Erlang/Go/Perl production
|
||||
environment. I've literally rebuilt most of the tool that was catalytic to my
|
||||
career a few times over. It's been the ride of a lifetime.
|
||||
|
||||
Even though I was fired, each of these failures in this chain of jobs enabled me
|
||||
to succeed the way I have. I can't wait to see what's next out of it. I only
|
||||
wonder how I can be transformed even more. I really wonder what it's gonna be
|
||||
like with the company that hired me over the border.
|
||||
|
||||
![](/static/img/my-career.jpeg)
|
||||
|
||||
|
@ -74,8 +61,4 @@ Be well.
|
|||
.i ko do gleki
|
||||
```
|
||||
|
||||
If you can, please make a blogpost similar to this. Don't include company names.
|
||||
Include start date, end date, time spent there, time spent job hunting, salary
|
||||
(if you remember it) and how you left it. Let's [end salary
|
||||
secrecy](https://thegirlpowercode.com/2018/09/12/is-salary-secrecy-coming-to-an-end/)
|
||||
one step at a time.
|
|
@ -2,7 +2,6 @@
|
|||
title: Narrative of Sickness
|
||||
date: 2018-08-13
|
||||
for: awakening
|
||||
series: magick
|
||||
---
|
||||
|
||||
# Narrative of Sickness
|
||||
|
|
|
@ -1,81 +0,0 @@
|
|||
---
|
||||
title: "Life Update: New Adventures"
|
||||
date: 2020-10-24
|
||||
tags:
|
||||
- personal
|
||||
---
|
||||
|
||||
# Life Update: New Adventures
|
||||
|
||||
Today was my last day at my job, and as of the time that I have published this
|
||||
post, I am now in between jobs. I have had an adventure at Lightspeed, but all
|
||||
things must come to an end and my adventure has come to an end there. I have a
|
||||
new job lined up and I will be heading to it soon, but for the meantime I plan
|
||||
to relax and decompress.
|
||||
|
||||
I want to finish that tabletop RPG book I have prototyped out. When I have
|
||||
something closer to a cohesive book I will post something on my
|
||||
[patreon](https://www.patreon.com/cadey) so that you all can take a look. Any
|
||||
and all feedback would be very appreciated. I hope to have it published on my
|
||||
[Itch page](https://withinstudios.itch.io/) by the end of this year. My target
|
||||
is on or about $5 for the game manual and supplemental material.
|
||||
|
||||
Thanks for reading; no, seriously, thank you. Without people like you that
|
||||
read and share articles on this blog I would never have gotten to the level of
|
||||
success that I have now. Additionally, I would like to emphasize that I am fine
|
||||
as far as new jobs go. I have primary and fallback plans in place, but if they
|
||||
all somehow fall through I will be sure to put up a note here. Please be sure to
|
||||
check out [/signalboost](/signalboost) for people to consider when making hiring
|
||||
decisions.
|
||||
|
||||
## May Our Paths Cross Again: My Farewell Letter to Lightspeed
|
||||
|
||||
Hey all,
|
||||
|
||||
Today is my last day at Lightspeed. Working at Lightspeed has been catalytic to
|
||||
my career. I have been exposed to so many people from so many backgrounds and
|
||||
you all have irreparably changed me for the better and given me the space to
|
||||
thrive. There is passion here, and it is being tapped in order to create
|
||||
fantastic solutions for our customers to enable them to succeed. I originally
|
||||
came to Montréal to live with my fiancé and if Covid had struck a week later he
|
||||
would be my husband.
|
||||
|
||||
However, I feel that I have done as much as I can at Lightspeed and our paths
|
||||
thusly need to divide. I have gotten a fantastic opportunity and I will be
|
||||
working on technology that will become a foundational part of personal and
|
||||
professional IP networking over WireGuard for many companies and people. I'm
|
||||
sorry if this comes as a shock to anyone, I don't mean to cause anyone grief
|
||||
with this move.
|
||||
|
||||
I have been attaching a little poem in Lojban to the signature of my emails,
|
||||
here it is, along with its translation:
|
||||
|
||||
```
|
||||
la budza pu cusku lu
|
||||
<<.i ko do snura .i ko do kanro
|
||||
.i ko do panpi .i ko do gleki>> li'u
|
||||
```
|
||||
|
||||
> May you be safe. May you be healthy.
|
||||
> May you be at peace. May you be happy.
|
||||
- Buddha
|
||||
|
||||
I will be reachable on the internet. See https://christine.website/contact to
|
||||
see contact information that will help you reach out to me. If you can, please
|
||||
direct replies to me@christine.website, that way I can read them after this
|
||||
account gets disabled.
|
||||
|
||||
I hope I was able to brighten your path.
|
||||
|
||||
From my world to yours,
|
||||
|
||||
--
|
||||
|
||||
Christine Dodrill
|
||||
https://christine.website
|
||||
|
||||
```
|
||||
la budza pu cusku lu
|
||||
<<.i ko do snura .i ko do kanro
|
||||
.i ko do panpi .i ko do gleki>> li'u
|
||||
```
|
|
@ -1,38 +0,0 @@
|
|||
---
|
||||
title: New PGP Key Fingerprint
|
||||
date: 2021-01-15
|
||||
---
|
||||
|
||||
# New PGP Key Fingerprint
|
||||
|
||||
This morning I got an encrypted email, and in the process of trying to decrypt
|
||||
it I discovered that I had _lost_ my PGP key. I have no idea how I lost it. As
|
||||
such, I have created a new PGP key and replaced the one on my website with it.
|
||||
I did the replacement in [this
|
||||
commit](https://github.com/Xe/site/commit/66233bcd40155cf71e221edf08851db39dbd421c),
|
||||
which you can see is verified with a subkey of my new key.
|
||||
|
||||
My new PGP key ID is `803C 935A E118 A224`. The key with the ID `799F 9134 8118
|
||||
1111` should not be used anymore. Here are all the subkey fingerprints:
|
||||
|
||||
```
|
||||
Signature key ....: 378E BFC6 3D79 B49D 8C36 448C 803C 935A E118 A224
|
||||
created ....: 2021-01-15 13:04:28
|
||||
Encryption key....: 8C61 7F30 F331 D21B 5517 6478 8C5C 9BC7 0FC2 511E
|
||||
created ....: 2021-01-15 13:04:28
|
||||
Authentication key: 7BF7 E531 ABA3 7F77 FD17 8F72 CE17 781B F55D E945
|
||||
created ....: 2021-01-15 13:06:20
|
||||
General key info..: pub rsa2048/803C935AE118A224 2021-01-15 Christine Dodrill (Yubikey) <me@christine.website>
|
||||
sec> rsa2048/803C935AE118A224 created: 2021-01-15 expires: 2031-01-13
|
||||
card-no: 0006 03646872
|
||||
ssb> rsa2048/8C5C9BC70FC2511E created: 2021-01-15 expires: 2031-01-13
|
||||
card-no: 0006 03646872
|
||||
ssb> rsa2048/CE17781BF55DE945 created: 2021-01-15 expires: 2031-01-13
|
||||
card-no: 0006 03646872
|
||||
```
|
||||
|
||||
I don't really know what the proper way is to go about revoking an old PGP key.
|
||||
It probably doesn't help that I don't use PGP very often. I think this is the
|
||||
first encrypted email I've gotten in a year.
|
||||
|
||||
Let's hope that I don't lose this key as easily!
|
|
@ -1,317 +0,0 @@
|
|||
---
|
||||
title: Nixops Services on Your Home Network
|
||||
date: 2020-11-09
|
||||
series: howto
|
||||
tags:
|
||||
- nixos
|
||||
- systemd
|
||||
---
|
||||
|
||||
# Nixops Services on Your Home Network
|
||||
|
||||
My homelab has a few NixOS machines. Right now they mostly run services inside
|
||||
Docker, because that has been what I have done for years. This works fine, but
|
||||
persistent state gets annoying*. NixOS has a tool called
|
||||
[Nixops](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html) that
|
||||
allows you to push configurations to remote machines. I use this for managing my
|
||||
fleet of machines, and today I'm going to show you how to create service
|
||||
deployments with Nixops and push them to your servers.
|
||||
|
||||
[Pedantically, Docker offers <a
href="https://docs.docker.com/storage/volumes/">volumes</a>
|
||||
to simplify this, but it is very easy to accidentally delete Docker volumes.
|
||||
Plain disk files like we are going to use today are a bit simpler than docker
|
||||
volumes, and thusly a bit harder to mess up.](conversation://Mara/hacker)
|
||||
|
||||
## Parts of a Service
|
||||
|
||||
For this example, let's deploy a chatbot. To make things easier, let's assume
|
||||
the following about this chatbot:
|
||||
|
||||
- The chatbot has a git repo somewhere
|
||||
- The chatbot's git repo has a `default.nix` that builds the service and
|
||||
includes any supporting files it might need
|
||||
- The chatbot reads its configuration from environment variables which may
|
||||
contain secret values (API keys, etc.)
|
||||
- The chatbot stores any temporary files in its current working directory
|
||||
- The chatbot is "well-behaved" (for some definition of "well-behaved")
|
||||
|
||||
I will also need to assume that you have a git repo (or at least a folder) with
|
||||
all of your configuration similar to [mine](https://github.com/Xe/nixos-configs).
|
||||
|
||||
For this example I'm going to use [withinbot](https://github.com/Xe/withinbot)
|
||||
as the service we will deploy via Nixops. withinbot is a chatbot that I use on
|
||||
my own Discord guild that does a number of vital functions including supplying
|
||||
amusing facts about printers:
|
||||
|
||||
```
|
||||
<Cadey~> ~printerfact
|
||||
<Within[BOT]> @Cadey~ Printers, especially older printers, do get cancer. Many
|
||||
times this disease can be treated successfully
|
||||
```
|
||||
|
||||
[To get your own amusing facts about printers, see <a
|
||||
href="https://printerfacts.cetacean.club">here</a> or for using its API, call <a
|
||||
href="https://printerfacts.cetacean.club/fact">`/fact`</a>. This API has no
|
||||
practical rate limits, but please don't test that.](conversation://Mara/hacker)
|
||||
|
||||
## Service Definition
|
||||
|
||||
We will need to do a few major things for defining this service:
|
||||
|
||||
1. Add the bot code as a package
|
||||
1. Create a "services" folder for the service modules
|
||||
1. Create a user account for the service
|
||||
1. Set up a systemd unit for the service
|
||||
1. Configure the secrets using [Nixops
|
||||
keys](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html#idm140737322342384)
|
||||
|
||||
### Add the Code as a Package
|
||||
|
||||
In order for the program to be installed to the remote system, you need to tell
|
||||
the system how to import it. There's many ways to do this, but the cheezy way is
|
||||
to add the packages to
|
||||
[`nixpkgs.config.packageOverrides`](https://nixos.org/manual/nixos/stable/#sec-customising-packages)
|
||||
like this:
|
||||
|
||||
```nix
|
||||
nixpkgs.config = {
|
||||
packageOverrides = pkgs: {
|
||||
within = {
|
||||
withinbot = import (builtins.fetchTarball
|
||||
"https://github.com/Xe/withinbot/archive/main.tar.gz") { };
|
||||
};
|
||||
};
|
||||
};
|
||||
```
|
||||
|
||||
And now we can access it as `pkgs.within.withinbot` in the rest of our config.
|
||||
|
||||
[In production circumstances you should probably use <a
|
||||
href="https://nixos.org/manual/nixpkgs/stable/#chap-pkgs-fetchers">a fetcher
|
||||
that locks to a specific version</a> using unique URLs and hashing, but this
|
||||
will work enough to get us off the ground in this
|
||||
example.](conversation://Mara/hacker)
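For completeness, a pinned variant of the override above might look like the following sketch. The `rev` and `sha256` here are placeholders, not real values; `nix-prefetch-git` (or similar tooling) will report the actual hash:

```nix
nixpkgs.config = {
  packageOverrides = pkgs: {
    within = {
      # pin to one exact commit so rebuilds are reproducible
      withinbot = import (pkgs.fetchFromGitHub {
        owner = "Xe";
        repo = "withinbot";
        rev = "0000000000000000000000000000000000000000"; # placeholder commit
        sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder hash
      }) { };
    };
  };
};
```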
|
||||
|
||||
### Create a "services" Folder
|
||||
|
||||
In your configuration folder, create a folder that you will use for these
|
||||
service definitions. I made mine in `common/services`. In that folder, create a
|
||||
`default.nix` with the following contents:
|
||||
|
||||
```nix
|
||||
{ config, lib, ... }:
|
||||
|
||||
{
|
||||
imports = [ ./withinbot.nix ];
|
||||
|
||||
users.groups.within = {};
|
||||
}
|
||||
```
|
||||
|
||||
The group listed here is optional, but I find that having a group like that can
|
||||
help you better share resources and files between services.
|
||||
|
||||
Now we need a folder for storing secrets. Let's create that under the services
|
||||
folder:
|
||||
|
||||
```console
|
||||
$ mkdir secrets
|
||||
```
|
||||
|
||||
And let's also add a gitignore file so that we don't accidentally commit these
|
||||
secrets to the repo:
|
||||
|
||||
```gitignore
|
||||
# common/services/secrets/.gitignore
|
||||
*
|
||||
```
|
||||
|
||||
Now we can put any secrets we want in the secrets folder without the risk of
|
||||
committing them to the git repo.
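As an example of what goes in that folder, here is a hypothetical `withinbot.env`; the variable names and values are made up for illustration, so use whatever your service actually reads:

```shell
# create a hypothetical secrets file; every value here is a placeholder
mkdir -p common/services/secrets
cat > common/services/secrets/withinbot.env <<'EOF'
# comments are fine, the service script filters them out
DISCORD_TOKEN=placeholder-token
RUST_LOG=info
EOF
# keep the file readable only by you
chmod 600 common/services/secrets/withinbot.env
```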
|
||||
|
||||
### Service Manifest
|
||||
|
||||
Let's create `withinbot.nix` and set it up:
|
||||
|
||||
```nix
|
||||
{ config, lib, pkgs, ... }:
|
||||
with lib; {
|
||||
options.within.services.withinbot.enable =
|
||||
mkEnableOption "Activates Withinbot (the furryhole chatbot)";
|
||||
|
||||
config = mkIf config.within.services.withinbot.enable {
|
||||
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
This sets up an option called `within.services.withinbot.enable` which will only
|
||||
add the service configuration if that option is set to `true`. This will allow
|
||||
us to define a lot of services that are available, but none of their config will
|
||||
be active unless they are explicitly enabled.
|
||||
|
||||
Now, let's create a user account for the service:
|
||||
|
||||
```nix
|
||||
# ...
|
||||
config = ... {
|
||||
users.users.withinbot = {
|
||||
createHome = true;
|
||||
description = "github.com/Xe/withinbot";
|
||||
isSystemUser = true;
|
||||
group = "within";
|
||||
home = "/srv/within/withinbot";
|
||||
extraGroups = [ "keys" ];
|
||||
};
|
||||
};
|
||||
# ...
|
||||
```
|
||||
|
||||
This will create a user named `withinbot` with the home directory
|
||||
`/srv/within/withinbot`, the group `within` and also in the group `keys` so the
|
||||
withinbot user can read deployment secrets.
|
||||
|
||||
Now let's add the deployment secrets to the configuration:
|
||||
|
||||
```nix
|
||||
# ...
|
||||
config = ... {
|
||||
users.users.withinbot = { ... };
|
||||
|
||||
deployment.keys.withinbot = {
|
||||
text = builtins.readFile ./secrets/withinbot.env;
|
||||
user = "withinbot";
|
||||
group = "within";
|
||||
permissions = "0640";
|
||||
};
|
||||
};
|
||||
# ...
|
||||
```
|
||||
|
||||
Assuming you have the configuration at `./secrets/withinbot.env`, this will
|
||||
register the secrets into `/run/keys/withinbot` and also create a systemd
|
||||
oneshot service named `withinbot-key`. This allows you to add the secret's
|
||||
existence as a condition for withinbot to run. However, Nixops puts these keys
|
||||
in `/run`, which by default is mounted using a temporary memory-only filesystem,
|
||||
meaning these keys will need to be re-added to machines when they are rebooted.
|
||||
Fortunately, `nixops reboot` will automatically add the keys back after the
|
||||
reboot succeeds.
|
||||
|
||||
Now that we have everything else we need, let's add the service configuration:
|
||||
|
||||
```nix
|
||||
# ...
|
||||
config = ... {
|
||||
users.users.withinbot = { ... };
|
||||
deployment.keys.withinbot = { ... };
|
||||
|
||||
systemd.services.withinbot = {
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "withinbot-key.service" ];
|
||||
wants = [ "withinbot-key.service" ];
|
||||
|
||||
serviceConfig = {
|
||||
User = "withinbot";
|
||||
Group = "within";
|
||||
Restart = "on-failure"; # automatically restart the bot when it dies
|
||||
WorkingDirectory = "/srv/within/withinbot";
|
||||
RestartSec = "30s";
|
||||
};
|
||||
|
||||
script = let withinbot = pkgs.within.withinbot;
|
||||
in ''
|
||||
# load the environment variables from /run/keys/withinbot
|
||||
export $(grep -v '^#' /run/keys/withinbot | xargs)
|
||||
# service-specific configuration
|
||||
export CAMPAIGN_FOLDER=${withinbot}/campaigns
|
||||
# kick off the chatbot
|
||||
exec ${withinbot}/bin/withinbot
|
||||
'';
|
||||
};
|
||||
};
|
||||
# ...
|
||||
```
|
||||
|
||||
This will create the systemd configuration for the service so that it starts on
|
||||
boot, waits to start until the secrets have been loaded into it, runs withinbot
|
||||
as its own user and in the `within` group, and throttles the service restart so
|
||||
that it doesn't incur Discord rate limits as easily. This will also put all
|
||||
withinbot logs in journald, meaning that you can manage and monitor this service
|
||||
like you would any other systemd service.
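The `export $(grep ... | xargs)` line in the script above is worth trying on its own. Here is a standalone sketch with a made-up env file; note this trick only works for values without spaces or shell quoting:

```shell
# write a sample env file with a comment and two variables
cat > /tmp/sample.env <<'EOF'
# this line is ignored
TOKEN=abc123
CHANNEL=general
EOF

# export every non-comment line as an environment variable
export $(grep -v '^#' /tmp/sample.env | xargs)

echo "$TOKEN $CHANNEL" # prints: abc123 general
```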
|
||||
|
||||
## Deploying the Service
|
||||
|
||||
In your target server's `configuration.nix` file, add an import of your services
|
||||
directory:
|
||||
|
||||
```nix
|
||||
{
|
||||
# ...
|
||||
imports = [
|
||||
# ...
|
||||
/home/cadey/code/nixos-configs/common/services
|
||||
];
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
And then enable the withinbot service:
|
||||
|
||||
```nix
|
||||
{
|
||||
# ...
|
||||
within.services = {
|
||||
withinbot.enable = true;
|
||||
};
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
[Make that a block so you can enable multiple services at once like <a
|
||||
href="https://github.com/Xe/nixos-configs/blob/e111413e8b895f5a117dea534b17fc9d0b38d268/hosts/chrysalis/configuration.nix#L93-L96">this</a>!](conversation://Mara/hacker)
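A sketch of that block form, enabling a second hypothetical service next to withinbot (`printerfacts` is made up here):

```nix
within.services = {
  withinbot.enable = true;
  printerfacts.enable = true; # hypothetical second service module
};
```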
|
||||
|
||||
Now you are free to deploy it to your network with `nixops deploy`:
|
||||
|
||||
```console
|
||||
$ nixops deploy -d hexagone
|
||||
```
|
||||
|
||||
<video controls width="100%">
|
||||
<source src="https://cdn.christine.website/file/christine-static/img/nixops/tmp.Tr7HTFFd2c.webm"
|
||||
type="video/webm">
|
||||
<source src="https://cdn.christine.website/file/christine-static/img/nixops/tmp.Tr7HTFFd2c.mp4"
|
||||
type="video/mp4">
|
||||
Sorry, your browser doesn't support embedded videos.
|
||||
</video>
|
||||
|
||||
|
||||
And then you can verify the service is up with `systemctl status`:
|
||||
|
||||
```console
|
||||
$ nixops ssh -d hexagone chrysalis -- systemctl status withinbot
|
||||
● withinbot.service
|
||||
Loaded: loaded (/nix/store/7ab7jzycpcci4f5wjwhjx3al7xy85ka7-unit-withinbot.service/withinbot.service; enabled; vendor preset: enabled)
|
||||
Active: active (running) since Mon 2020-11-09 09:51:51 EST; 2h 29min ago
|
||||
Main PID: 12295 (withinbot)
|
||||
IP: 0B in, 0B out
|
||||
Tasks: 13 (limit: 4915)
|
||||
Memory: 7.9M
|
||||
CPU: 4.456s
|
||||
CGroup: /system.slice/withinbot.service
|
||||
└─12295 /nix/store/qpq281hcb1grh4k5fm6ksky6w0981arp-withinbot-0.1.0/bin/withinbot
|
||||
|
||||
Nov 09 09:51:51 chrysalis systemd[1]: Started withinbot.service.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
This basic template is enough to expand out to anything you would need and is
|
||||
what I am using for my own network. This should be generic enough for most of
|
||||
your needs. Check out the [NixOS manual](https://nixos.org/manual/nixos/stable/)
|
||||
for more examples and things you can do with this. The [Nixops
|
||||
manual](https://releases.nixos.org/nixops/nixops-1.7/manual/manual.html) is also
|
||||
a good read. It can also set up deployments with VirtualBox, libvirtd, AWS,
|
||||
Digital Ocean, and even Google Cloud.
|
||||
|
||||
The cloud is the limit! Be well.
|
|
@ -1,533 +0,0 @@
|
|||
---
|
||||
title: "My NixOS Desktop Flow"
|
||||
date: 2020-04-25
|
||||
series: howto
|
||||
---
|
||||
|
||||
# My NixOS Desktop Flow
|
||||
|
||||
Before I built my current desktop, I had been using a [2013 Mac Pro][macpro2013]
|
||||
for at least 7 years. This machine has seen me through living in a few cities
|
||||
(Bellevue, Mountain View and Montreal), but it was starting to show its age. Its
|
||||
12 core Xeon is really no slouch (scoring about 5 minutes in my "compile the
|
||||
linux kernel" test), but with Intel security patches it was starting to get
|
||||
slower and slower as time went on.
|
||||
|
||||
[macpro2013]: https://www.apple.com/mac-pro-2013/specs/
|
||||
|
||||
So in March (just before the situation started) I ordered the parts for my new
|
||||
tower and built my current desktop machine. From the start, I wanted it to run
|
||||
Linux and have 64 GB of ram, mostly so I could write and test programs without
|
||||
having to worry about ram exhaustion.
|
||||
|
||||
When the parts were almost in, I had decided to really start digging into
|
||||
[NixOS][nixos]. Friends on IRC and Discord had been trying to get me to use it
|
||||
for years, and I was really impressed with a simple setup that I had in a
|
||||
virtual machine. So I decided to jump head-first down that rabbit hole, and I'm
|
||||
honestly really glad I did.
|
||||
|
||||
[nixos]: https://nixos.org
|
||||
|
||||
NixOS is built on a more functional approach to package management called
|
||||
[Nix][nix]. Parts of the configuration can be easily broken off into modules
|
||||
that can be reused across machines in a deployment. If [Ansible][ansible] or
|
||||
other tools like it let you customize an existing Linux distribution to meet
|
||||
your needs, NixOS allows you to craft your own Linux distribution around your
|
||||
needs.
|
||||
|
||||
[nix]: https://nixos.org/nix/
|
||||
[ansible]: https://www.ansible.com/
|
||||
|
||||
Unfortunately, the Nix and NixOS documentation is a bit more dense than most
|
||||
other Linux programs/distributions are, and it's a bit easy to get lost in it.
|
||||
I'm going to attempt to explain a lot of the guiding principles behind Nix and
|
||||
NixOS and how they fit into how I use NixOS on my desktop.
|
||||
|
||||
## What is a Package?
|
||||
|
||||
Earlier, I mentioned that Nix is a _functional_ package manager. This means that
|
||||
Nix views packages as a combination of inputs to get an output:
|
||||
|
||||
![A nix package is the metadata, the source code, the build instructions and
|
||||
some patches as input to a derivation to create a
|
||||
package](/static/blog/nix-package.png)
|
||||
|
||||
This is how most package managers work (even things like Windows installer
|
||||
files), but Nix goes a step further by disallowing package builds to access the
|
||||
internet. This allows Nix packages to be a lot more reproducible; meaning if you
|
||||
have the same inputs (source code, build script and patches) you should _always_
|
||||
get the same output byte-for-byte every time you build the same package at the
|
||||
same version.
|
||||
|
||||
### A Simple Package
|
||||
|
||||
Let's consider a simple example, my [gruvbox-inspired CSS file][gruvboxcss]'s
|
||||
[`default.nix`][gcssdefaultnix] file:
|
||||
|
||||
[gruvboxcss]: https://github.com/Xe/gruvbox-css
|
||||
[gcssdefaultnix]: https://github.com/Xe/gruvbox-css/blob/master/default.nix
|
||||
|
||||
```nix
|
||||
{ pkgs ? import <nixpkgs> { } }:
|
||||
|
||||
pkgs.stdenv.mkDerivation {
|
||||
pname = "gruvbox-css";
|
||||
version = "latest";
|
||||
src = ./.;
|
||||
phases = "installPhase";
|
||||
installPhase = ''
|
||||
mkdir -p $out
|
||||
cp -rf $src/gruvbox.css $out/gruvbox.css
|
||||
'';
|
||||
}
|
||||
```
|
||||
|
||||
This creates a package named `gruvbox-css` with the version `latest`. Let's
|
||||
break down its `default.nix` line by line:
|
||||
|
||||
```nix
|
||||
{ pkgs ? import <nixpkgs> { } }:
|
||||
```
|
||||
|
||||
This creates a function that either takes in the `pkgs` object or tells Nix to
|
||||
import the standard package library [nixpkgs][nixpkgs] as `pkgs`. nixpkgs
|
||||
includes a lot of utilities like a standard packaging environment, special
|
||||
builders for things like snaps and Docker images as well as one of the largest
|
||||
package sets out there.
|
||||
|
||||
[nixpkgs]: https://nixos.org/nixpkgs/
|
||||
|
||||
```nix
|
||||
pkgs.stdenv.mkDerivation {
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
This runs the [`stdenv.mkDerivation`][mkderiv] function with some arguments in an
|
||||
object. The "standard environment" comes with tools like GCC, bash, coreutils,
|
||||
find, sed, grep, awk, tar, make, patch and all of the major compression tools.
|
||||
This means that our package builds can build C/C++ programs, copy files to the
|
||||
output, and extract downloaded source files by default. You can add other inputs
|
||||
to this environment if you need to, but for now it works as-is.
|
||||
|
||||
[mkderiv]: https://nixos.org/nixpkgs/manual/#sec-using-stdenv
|
||||
|
||||
Let's specify the name and version of this package:
|
||||
|
||||
```nix
|
||||
pname = "gruvbox-css";
|
||||
version = "latest";
|
||||
```
|
||||
|
||||
`pname` stands for "package name". It is combined with the version to create the
|
||||
resulting package name. In this case it would be `gruvbox-css-latest`.
|
||||
|
||||
Let's tell Nix how to build this package:
|
||||
|
||||
```nix
|
||||
src = ./.;
|
||||
phases = "installPhase";
|
||||
installPhase = ''
|
||||
mkdir -p $out
|
||||
cp -rf $src/gruvbox.css $out/gruvbox.css
|
||||
'';
|
||||
```
|
||||
|
||||
The `src` attribute tells Nix where the source code of the package is stored.
|
||||
Sometimes this can be a URL to a compressed archive on the internet, sometimes
|
||||
it can be a git repo, but for now it's the current working directory `./.`.
|
||||
|
||||
This is a CSS file, it doesn't make sense to have to build these, so we skip the
|
||||
build phase and tell Nix to directly install the package to its output folder:
|
||||
|
||||
```shell
|
||||
mkdir -p $out
|
||||
cp -rf $src/gruvbox.css $out/gruvbox.css
|
||||
```
|
||||
|
||||
This two-liner shell script creates the output directory (usually exposed as
|
||||
`$out`) and then copies `gruvbox.css` into it. When we run this through Nix
|
||||
with `nix-build`, we get output that looks something like this:
|
||||
|
||||
```console
|
||||
$ nix-build ./default.nix
|
||||
these derivations will be built:
|
||||
/nix/store/c99n4ixraigf4jb0jfjxbkzicd79scpj-gruvbox-css.drv
|
||||
building '/nix/store/c99n4ixraigf4jb0jfjxbkzicd79scpj-gruvbox-css.drv'...
|
||||
installing
|
||||
/nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css
|
||||
```
|
||||
|
||||
And `/nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css` is the output
|
||||
package. Looking at its contents with `ls`, we see this:
|
||||
|
||||
```console
|
||||
$ ls /nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css
|
||||
gruvbox.css
|
||||
```
|
||||
|
||||
### A More Complicated Package

For a more complicated package, let's look at the [build directions of the
website you are reading right now][sitedefaultnix]:

[sitedefaultnix]: https://github.com/Xe/site/blob/master/site.nix

```nix
{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:
with pkgs;

assert lib.versionAtLeast go.version "1.13";

buildGoPackage rec {
  pname = "christinewebsite";
  version = "latest";

  goPackagePath = "christine.website";
  src = ./.;
  goDeps = ./nix/deps.nix;
  allowGoReference = false;

  preBuild = ''
    export CGO_ENABLED=0
    buildFlagsArray+=(-pkgdir "$TMPDIR")
  '';

  postInstall = ''
    cp -rf $src/blog $bin/blog
    cp -rf $src/css $bin/css
    cp -rf $src/gallery $bin/gallery
    cp -rf $src/signalboost.dhall $bin/signalboost.dhall
    cp -rf $src/static $bin/static
    cp -rf $src/talks $bin/talks
    cp -rf $src/templates $bin/templates
  '';
}
```

Breaking it down, we see some similarities to the gruvbox-css package from
above, but there are a few more interesting lines I want to point out:

```nix
{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:
```

My website uses a pinned (fixed) version of nixpkgs. This keeps my website's
deployment stable even if nixpkgs changes something that could otherwise break
it.

```nix
with pkgs;
```

[With expressions][nixwith] are one of the more interesting parts of Nix.
Essentially, they let you say "everything in this attribute set should be put
into scope". So if you have an expression that does this:

[nixwith]: https://nixos.org/nix/manual/#idm140737321975440

```nix
let
  foo = {
    ponies = "awesome";
  };
in with foo; "ponies are ${ponies}!"
```

You get the result `"ponies are awesome!"`. I use `with pkgs` here so I can use
things directly from nixpkgs without having to prefix them with `pkgs.`.

```nix
assert lib.versionAtLeast go.version "1.13";
```

This line makes the build fail if Nix is using any Go version older than 1.13.
I'm pretty sure my website's code could function on older versions of Go, but
the runtime improvements are important to it, so let's fail loudly just in
case.

```nix
buildGoPackage {
  # ...
}
```

[`buildGoPackage`](https://nixos.org/nixpkgs/manual/#ssec-go-legacy) builds a Go
package into a Nix package. It takes in the [Go package path][gopkgpath], a list
of dependencies, and whether or not the resulting package is allowed to depend
on the Go compiler.

[gopkgpath]: https://github.com/golang/go/wiki/GOPATH#directory-layout

It will then compile the Go program (and all of its dependencies) into a binary
and put that in the resulting package. This website is more than just the source
code: it also has assets like CSS files and the image earlier in the post.
Those files are copied in the `postInstall` phase:

```nix
postInstall = ''
  cp -rf $src/blog $bin/blog
  cp -rf $src/css $bin/css
  cp -rf $src/gallery $bin/gallery
  cp -rf $src/signalboost.dhall $bin/signalboost.dhall
  cp -rf $src/static $bin/static
  cp -rf $src/talks $bin/talks
  cp -rf $src/templates $bin/templates
'';
```

This results in all of the files that my website needs to run existing in the
right places.

### Other Packages

For more kinds of packages that you can build, see the [Languages and
Frameworks][nixpkgslangsframeworks] chapter of the nixpkgs documentation.

[nixpkgslangsframeworks]: https://nixos.org/nixpkgs/manual/#chap-language-support

If your favorite language isn't shown there, you can make your own build script
and do it more manually. See [here][nixpillscustombuilder] for more information
on how to do that.

[nixpillscustombuilder]: https://nixos.org/nixos/nix-pills/working-derivation.html#idm140737320334640

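As a rough sketch of what "more manually" can look like, here is a minimal
hand-written derivation using `stdenv.mkDerivation`. The `hello.sh` file name
is a hypothetical example; substitute whatever your project actually ships:

```nix
# Minimal custom package sketch. Assumes the source directory contains
# a script named hello.sh (hypothetical).
{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  name = "hello-custom";
  src = ./.;

  # Skip the default configure/build machinery; just install the script.
  installPhase = ''
    mkdir -p $out/bin
    cp $src/hello.sh $out/bin/hello
    chmod +x $out/bin/hello
  '';
}
```

Running `nix-build` on a file like this produces a store path with `bin/hello`
inside it, just like the gruvbox-css example above.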
## `nix-env` And Friends

Building your own packages is nice and all, but what about using packages
defined in nixpkgs? Nix includes a few tools that help you find, install,
upgrade and remove packages, as well as `nix-build` to build new ones.

### `nix search`

When looking for a package to install, use `$ nix search name` to see if it's
already packaged. For example, let's look for [graphviz][graphviz], a popular
piece of diagramming software:

[graphviz]: https://graphviz.org/

```console
$ nix search graphviz

* nixos.graphviz (graphviz)
  Graph visualization tools

* nixos.graphviz-nox (graphviz)
  Graph visualization tools

* nixos.graphviz_2_32 (graphviz)
  Graph visualization tools
```

There are several results here! They differ because sometimes you may want some
features of graphviz, but not all of them. For example, a server installation
of graphviz wouldn't need X windows support.

The first line of each result is the attribute: the name that the package is
exposed under inside nixpkgs. This allows multiple variants of a package to
exist in nixpkgs at the same time, for example python 2 and python 3 versions
of a library.

The second line is a description of the package from its metadata section.

The `nix` tool allows you to do a lot more than just this, but for now this is
the most important thing.

### `nix-env -i`

`nix-env` is a rather big tool that does a lot of things (similar to pacman on
Arch Linux), so I'm going to break things down into separate sections.

Let's pick the graphviz package from before and install it using `nix-env`:

```console
$ nix-env -iA nixos.graphviz
installing 'graphviz-2.42.2'
these paths will be fetched (5.00 MiB download, 13.74 MiB unpacked):
  /nix/store/980jk7qbcfrlnx8jsmdx92q96wsai8mx-gts-0.7.6
  /nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2
  /nix/store/jy35xihlnb3az0vdksyg9rd2f38q2c01-libdevil-1.7.8
  /nix/store/s895dnwlprwpfp75pzq70qzfdn8mwfzc-lcms-1.19
copying path '/nix/store/980jk7qbcfrlnx8jsmdx92q96wsai8mx-gts-0.7.6' from 'https://cache.nixos.org'...
copying path '/nix/store/s895dnwlprwpfp75pzq70qzfdn8mwfzc-lcms-1.19' from 'https://cache.nixos.org'...
copying path '/nix/store/jy35xihlnb3az0vdksyg9rd2f38q2c01-libdevil-1.7.8' from 'https://cache.nixos.org'...
copying path '/nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2' from 'https://cache.nixos.org'...
building '/nix/store/r4fqdwpicqjpa97biis1jlxzb4ywi92b-user-environment.drv'...
created 664 symlinks in user environment
```

And now let's see where the `dot` tool from graphviz is installed to:

```console
$ which dot
/home/cadey/.nix-profile/bin/dot

$ readlink /home/cadey/.nix-profile/bin/dot
/nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2/bin/dot
```

This lets you install tools into the system-level Nix store without affecting
other users' environments, even if they depend on a different version of
graphviz.

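To see what is currently installed into your user profile, `nix-env -q` lists
it; the exact output depends on what you have installed, something like:

```console
$ nix-env -q
graphviz-2.42.2
```
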
### `nix-env -e`

`nix-env -e` lets you uninstall packages installed with `nix-env -i`. Let's
uninstall graphviz:

```console
$ nix-env -e graphviz
```

Now the `dot` tool will be gone from your shell:

```console
$ which dot
which: no dot in (/run/wrappers/bin:/home/cadey/.nix-profile/bin:/etc/profiles/per-user/cadey/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin)
```

And it's like graphviz was never installed.

Notice that these package management commands are done at the _user_ level
because they only affect the currently logged-in user. This allows users to
install their own editors or other tools without having to get admins involved.

## Adding up to NixOS

NixOS builds on top of Nix and its command line tools to make an entire Linux
distribution that can be perfectly crafted to your needs. NixOS machines are
configured using a [configuration.nix][confignix] file that contains the
following kinds of settings:

[confignix]: https://nixos.org/nixos/manual/index.html#ch-configuration

- packages installed to the system
- user accounts on the system
- allowed SSH public keys for users on the system
- services activated on the system
- configuration for services on the system
- magic unix flags like the number of allowed file descriptors per process
- what drives to mount where
- network configuration
- ACME certificates

[and so much more](https://nixos.org/nixos/options.html#)

At a high level, machines are configured by setting options like this:

```nix
# basic-lxc-image.nix
{ config, pkgs, ... }:

{
  networking.hostName = "example-for-blog";
  environment.systemPackages = with pkgs; [ wget vim ];
}
```

This specifies a simple NixOS machine with the hostname `example-for-blog` and
with wget and vim installed. This is nowhere near enough to boot an entire
system, but it is good enough for describing the base layout of a basic
[LXC][lxc] image.

[lxc]: https://linuxcontainers.org/lxc/introduction/

For a more complete example of NixOS configurations, see
[here](https://github.com/Xe/nixos-configs/tree/master/hosts) or the repositories on
[this handy NixOS wiki page](https://nixos.wiki/wiki/Configuration_Collection).

The main configuration.nix file (usually at `/etc/nixos/configuration.nix`) can also
import other NixOS modules using the `imports` attribute:

```nix
# better-vm.nix
{ config, pkgs, ... }:

{
  imports = [
    ./basic-lxc-image.nix
  ];

  networking.hostName = "better-vm";
  services.nginx.enable = true;
}
```

The `better-vm.nix` file describes a machine with the hostname `better-vm` that
has wget and vim installed, but is also running nginx with its default
configuration.

Internally, every one of these options will be fed into auto-generated Nix
packages that describe the system configuration bit by bit.

### `nixos-rebuild`

One of the handy features of Nix is that every package exists in its own part
of the Nix store. This allows you to leave older versions of a package lying
around so you can roll back to them if you need to. `nixos-rebuild` is the tool
that helps you commit configuration changes to the system as well as roll them
back.

If you want to upgrade your entire system:

```console
$ sudo nixos-rebuild switch --upgrade
```

This tells nixos-rebuild to upgrade the package channels, use those to create a
new base system description, switch the running system to it and start/restart/stop
any services that were added/upgraded/removed during the upgrade. Every time you
rebuild the configuration, you create a new "generation" of configuration that
you can roll back to just as easily:

```console
$ sudo nixos-rebuild switch --rollback
```

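If you want to see which generations exist before rolling back, you can list
them by querying the system profile with `nix-env` (the generation numbers and
dates in the output are per-machine; the current generation is marked):

```console
$ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
```
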
### Garbage Collection

As upgrades happen and old generations pile up, they can end up taking a lot of
unwanted disk (and boot menu) space. To free up this space, you can use
`nix-collect-garbage`:

```console
$ sudo nix-collect-garbage
< cleans up packages not referenced by anything >

$ sudo nix-collect-garbage -d
< deletes old generations and then cleans up packages not referenced by anything >
```

The latter is a fairly powerful command and can wipe out older system states.
Only run it if you are sure you don't want to go back to an older setup.

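If deleting _every_ old generation with `-d` feels too drastic,
`nix-collect-garbage` also accepts a `--delete-older-than` flag that keeps
recent generations around. For example, to delete only generations older than
30 days:

```console
$ sudo nix-collect-garbage --delete-older-than 30d
```
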
## How I Use It

Each of these things builds on top of each other to make the base platform that
I built my desktop environment on. I have the configuration for [my
shell][xefish], [emacs][xemacs], [my window manager][xedwm] and just about [every
program I use on a regular basis][xecommon] defined in their own NixOS modules so I can
pick and choose things for new machines.

[xefish]: https://github.com/Xe/xepkgs/tree/master/modules/fish
[xemacs]: https://github.com/Xe/nixos-configs/tree/master/common/users/cadey/spacemacs
[xedwm]: https://github.com/Xe/xepkgs/tree/master/modules/dwm
[xecommon]: https://github.com/Xe/nixos-configs/tree/master/common

When I want to change part of my config, I edit the files responsible for that
part of the config and then rebuild the system to test it. If things work
properly, I commit those changes and then continue using the system like normal.

This is a little more work in the short term, but as a result I get a setup
that is easier to recreate on more machines in the future. It took me half an
hour or so to get the configuration for [zathura][zathura] right, but now I have
[a zathura
module](https://github.com/Xe/nixos-configs/tree/master/common/users/cadey/zathura)
that gets me exactly the setup I want every time.

[zathura]: https://pwmt.org/projects/zathura/

## TL;DR

Nix and NixOS ruined me. It's hard to go back.

@ -1,146 +0,0 @@
|
|||
---
title: Discord Webhooks via NixOS and Systemd Timers
date: 2020-11-30
series: howto
tags:
  - nixos
  - discord
  - systemd
---

# Discord Webhooks via NixOS and Systemd Timers

Recently I needed to set up a Discord message on a cronjob as a part of
moderating a guild I've been in for years. I've done this before using
[cronjobs](/blog/howto-automate-discord-webhook-cron-2018-03-29), however this
time we will be using [NixOS](https://nixos.org/) and [systemd
timers](https://wiki.archlinux.org/index.php/Systemd/Timers). Here's what you
will need to follow along:

- A machine running NixOS
- A [Discord](https://discord.com/) account
- A [webhook](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks)
  configured for a channel
- A message you want to send to Discord

[If you don't have moderation permissions in any guilds, make your own for
testing! You will need the "Manage Webhooks" permission to create a
webhook.](conversation://Mara/hacker)

## Setting Up Timers

systemd timers are like cronjobs, except they trigger systemd services instead
of shell commands. For this example, let's create a daily webhook reminder to
check on your Animal Crossing island at 9 am.

Let's create the systemd service at the end of the machine's
`configuration.nix`:

```nix
systemd.services.acnh-island-check-reminder = {
  serviceConfig.Type = "oneshot";
  script = ''
    MESSAGE="It's time to check on your island! Check those stonks!"
    WEBHOOK="${builtins.readFile /home/cadey/prefix/secrets/acnh-webhook-secret}"
    USERNAME="Domo"

    ${pkgs.curl}/bin/curl \
      -X POST \
      -F "content=$MESSAGE" \
      -F "username=$USERNAME" \
      "$WEBHOOK"
  '';
};
```

[This service is a <a href="https://stackoverflow.com/a/39050387">oneshot</a>
unit, meaning systemd will launch it once and not expect it to always stay
running.](conversation://Mara/hacker)

Now let's create a timer for this service. We need to do the following:

- Associate the timer with that service
- Assign a schedule to the timer

Add this to the end of your `configuration.nix`:

```nix
systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "TODO(Xe): this";
};
```

Before we mentioned that we want to trigger this reminder every morning at 9 am.
systemd timers specify their calendar config in the following format:

```
DayOfWeek Year-Month-Day Hour:Minute:Second
```

So for something that triggers every day at 9 AM, it would look like this:

```
*-*-* 9:00:00
```

[You can ignore the day of the week if it's not
relevant!](conversation://Mara/hacker)

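If you want to sanity-check a calendar expression before deploying it,
`systemd-analyze calendar` parses the expression, prints its normalized form
and reports the next time it would elapse:

```console
$ systemd-analyze calendar "*-*-* 9:00:00"
```
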
So our final timer definition would look like this:

```nix
systemd.timers.acnh-island-check-reminder = {
  wantedBy = [ "timers.target" ];
  partOf = [ "acnh-island-check-reminder.service" ];
  timerConfig.OnCalendar = "*-*-* 9:00:00";
};
```

## Deployment and Testing

Now we can deploy this with `nixos-rebuild`:

```console
$ sudo nixos-rebuild switch
```

You should see a line that says something like this in the `nixos-rebuild`
output:

```
starting the following units: acnh-island-check-reminder.timer
```

Let's test the service out using `systemctl`:

```console
$ sudo systemctl start acnh-island-check-reminder.service
```

And you should then see a message on Discord. If you don't see a message, check
the logs using `journalctl`:

```console
$ journalctl -u acnh-island-check-reminder.service
```

If you see an error that looks like this:

```
curl: (26) Failed to open/read local data from file/application
```

This usually means that you tried to do a role or user mention at the beginning
of the message and curl tried to interpret that as a file input. Add a word like
"hey" at the beginning of the line to disable this behavior. See
[here](https://stackoverflow.com/questions/6408904/send-request-to-curl-with-post-data-sourced-from-a-file)
for more information.

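Another way around that curl quirk is the `--form-string` flag, which always
treats its value as a literal string instead of interpreting a leading `@` or
`<` as a file reference. A sketch using the same variables as the service
above:

```console
$ curl -X POST \
    --form-string "content=@everyone $MESSAGE" \
    --form-string "username=$USERNAME" \
    "$WEBHOOK"
```
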
---

Also happy December! My site has the [snow
CSS](https://christine.website/blog/let-it-snow-2018-12-17) loaded for the
month. Enjoy!

@ -1,332 +0,0 @@
|
|||
---
title: Encrypted Secrets with NixOS
date: 2021-01-20
series: nixos
tags:
  - age
  - ed25519
---

# Encrypted Secrets with NixOS

One of the best things about NixOS is the fact that it's so easy to do
configuration management using it. The Nix store (where all your packages live)
has a huge flaw for secret management though: everything in the Nix store is
globally readable. This means that anyone logged into or running code on the
system could read any secret in the Nix store without any limits. This is
sub-optimal if your goal is to keep secret values secret. There have been a few
approaches to this over the years, but I want to describe how I'm doing it: my
goals, my implementation, and why a few other secret management strategies
don't quite pan out.

At a high level I have these goals:

* It should be trivial to declare new secrets
* Secrets should never be globally readable in any useful form
* If I restart the machine, I should not need to take manual human action to
  ensure all of the services come back online
* GPG should be avoided at all costs

As a side goal, being able to roll back secret changes would also be nice.

The two biggest tools that offer a way to help with secret management on NixOS
that come to mind are NixOps and Morph.

[NixOps](https://github.com/NixOS/nixops) is a tool that helps administrators
operate NixOS across multiple servers at once. I use NixOps extensively in my
own setup. It calls deployment secrets "keys" and they are documented
[here](https://hydra.nixos.org/build/115931128/download/1/manual/manual.html#idm140737322649152).
At a high level they are declared like this:

```nix
deployment.keys.example = {
  text = "this is a super sekrit value :)";
  user = "example";
  group = "keys";
  permissions = "0400";
};
```

This will create a new secret in `/run/keys` that contains our super secret
value.

[Wait, isn't `/run` an ephemeral filesystem? What happens when the system
reboots?](conversation://Mara/hmm)

Let's make an example system and find out! Say we have that `example` secret
from earlier and want to use it in a job. The job definition could look
something like this:

```nix
# create a service-specific user
users.users.example.isSystemUser = true;

# without this group the secret can't be read
users.users.example.extraGroups = [ "keys" ];

systemd.services.example = {
  wantedBy = [ "multi-user.target" ];
  after = [ "example-key.service" ];
  wants = [ "example-key.service" ];

  serviceConfig.User = "example";
  serviceConfig.Type = "oneshot";

  script = ''
    stat /run/keys/example
  '';
};
```

This creates a user called `example` and gives it permission to read deployment
keys. It also creates a systemd service called `example.service` that runs
[`stat(1)`](https://linux.die.net/man/1/stat) to show the permissions of the
key file, running as our `example` user. To avoid systemd thinking our service
failed, we also mark it as a
[oneshot](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files#the-service-section).

Altogether it could look something like
[this](https://gist.github.com/Xe/4a71d7741e508d9002be91b62248144a). Let's see
what `systemctl` has to report:

```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
     Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Wed 2021-01-20 20:53:54 UTC; 37s ago
    Process: 2230 ExecStart=/nix/store/1yg89z4dsdp1axacqk07iq5jqv58q169-unit-script-example-start/bin/example-start (code=exited, status=0/SUCCESS)
   Main PID: 2230 (code=exited, status=0/SUCCESS)
         IP: 0B in, 0B out
        CPU: 3ms

Jan 20 20:53:54 pa example-start[2235]:   File: /run/keys/example
Jan 20 20:53:54 pa example-start[2235]:   Size: 31  Blocks: 8  IO Block: 4096  regular file
Jan 20 20:53:54 pa example-start[2235]: Device: 18h/24d  Inode: 37428  Links: 1
Jan 20 20:53:54 pa example-start[2235]: Access: (0400/-r--------)  Uid: ( 998/ example)  Gid: ( 96/ keys)
Jan 20 20:53:54 pa example-start[2235]: Access: 2021-01-20 20:53:54.010554201 +0000
Jan 20 20:53:54 pa example-start[2235]: Modify: 2021-01-20 20:53:54.010554201 +0000
Jan 20 20:53:54 pa example-start[2235]: Change: 2021-01-20 20:53:54.398103181 +0000
Jan 20 20:53:54 pa example-start[2235]:  Birth: -
Jan 20 20:53:54 pa systemd[1]: example.service: Succeeded.
Jan 20 20:53:54 pa systemd[1]: Finished example.service.
```

So what happens when we reboot? I'll force a reboot in my hypervisor and we'll
find out:

```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
     Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
```

The service is inactive. Let's see what the status of `example-key.service` is:

```console
$ nixops ssh -d blog-example pa -- systemctl status example-key
● example-key.service
     Loaded: loaded (/nix/store/ikqn64cjq8pspkf3ma1jmx8qzpyrckpb-unit-example-key.service/example-key.service; linked; vendor preset: enabled)
     Active: activating (start-pre) since Wed 2021-01-20 20:56:05 UTC; 3min 1s ago
  Cntrl PID: 610 (example-key-pre)
         IP: 0B in, 0B out
         IO: 116.0K read, 0B written
      Tasks: 4 (limit: 2374)
     Memory: 1.6M
        CPU: 3ms
     CGroup: /system.slice/example-key.service
             ├─610 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
             ├─619 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
             ├─620 /nix/store/kl6lr3czkbnr6m5crcy8ffwfzbj8a22i-bash-4.4-p23/bin/bash -e /nix/store/awx1zrics3cal8kd9c5d05xzp5ikazlk-unit-script-example-key-pre-start/bin/example-key-pre-start
             └─621 inotifywait -qm --format %f -e create,move /run/keys

Jan 20 20:56:05 pa systemd[1]: Starting example-key.service...
```

The service is blocked waiting for the keys to exist. We have to populate the
keys with `nixops send-keys`:

```console
$ nixops send-keys -d blog-example
pa> uploading key ‘example’...
```

Now when we check on `example.service`, we get the following:

```console
$ nixops ssh -d blog-example pa -- systemctl status example
● example.service
     Loaded: loaded (/nix/store/j4a8f6mnaw3v4sz7dqlnz95psh72xglw-unit-example.service/example.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Wed 2021-01-20 21:00:24 UTC; 32s ago
    Process: 954 ExecStart=/nix/store/1yg89z4dsdp1axacqk07iq5jqv58q169-unit-script-example-start/bin/example-start (code=exited, status=0/SUCCESS)
   Main PID: 954 (code=exited, status=0/SUCCESS)
         IP: 0B in, 0B out
        CPU: 3ms

Jan 20 21:00:24 pa example-start[957]:   File: /run/keys/example
Jan 20 21:00:24 pa example-start[957]:   Size: 31  Blocks: 8  IO Block: 4096  regular file
Jan 20 21:00:24 pa example-start[957]: Device: 18h/24d  Inode: 27774  Links: 1
Jan 20 21:00:24 pa example-start[957]: Access: (0400/-r--------)  Uid: ( 998/ example)  Gid: ( 96/ keys)
Jan 20 21:00:24 pa example-start[957]: Access: 2021-01-20 21:00:24.588494730 +0000
Jan 20 21:00:24 pa example-start[957]: Modify: 2021-01-20 21:00:24.588494730 +0000
Jan 20 21:00:24 pa example-start[957]: Change: 2021-01-20 21:00:24.606495751 +0000
Jan 20 21:00:24 pa example-start[957]:  Birth: -
Jan 20 21:00:24 pa systemd[1]: example.service: Succeeded.
Jan 20 21:00:24 pa systemd[1]: Finished example.service.
```

This means that NixOps secrets require _manual human intervention_ in order to
repopulate them on server boot. If your server went offline overnight due to an
unexpected issue, your services using those keys could be stuck offline until
morning. This is undesirable for a number of reasons. That, plus the requirement
that the `keys` group (which at the time of writing was undocumented) be added
to service user accounts, means that while NixOps keys do work, they are not
very ergonomic.

[You can read secrets from files using something like
`deployment.keys.example.text = "${builtins.readFile ./secrets/example.env}"`,
but it is kind of a pain to have to do that. It would be better to just
reference the secrets by filesystem paths in the first
place.](conversation://Mara/hacker)

On the other hand, [Morph](https://github.com/DBCDK/morph) gets this a bit
better. It is sadly even less documented than NixOps, but it offers a similar
experience via [deployment
secrets](https://github.com/DBCDK/morph/blob/master/examples/secrets.nix). The
main differences that Morph brings to the table are taking paths to secrets and
allowing you to run an arbitrary command on the secret being uploaded. Secrets
are also able to be put anywhere on the disk, meaning that when a host reboots it
will come back up with the most recent secrets uploaded to it.

However, like NixOps, Morph secrets can't be rolled back. This means that if
you mess up a secret value, you'd better hope you have the old information
somewhere. This violates what you'd expect from a NixOS machine.

So given these examples, I thought it would be interesting to explore what the
middle path could look like. I chose to use
[age](https://github.com/FiloSottile/age) for encrypting secrets in the Nix
store as well as using SSH host keys to ensure that every secret is decryptable
at runtime by _that machine only_. If you get your hands on the secret
ciphertext, it should be unusable to you.

One of the harder things here will be keeping a list of all of the server host
keys. Recently I added a
[hosts.toml](https://github.com/Xe/nixos-configs/blob/master/ops/metadata/hosts.toml)
file to my config repo for autoconfiguring my WireGuard overlay network. It was
easy enough to add all the SSH host keys for each machine using a command like
this to get them:

[We will cover how this WireGuard overlay works in a future post.](conversation://Mara/hacker)

```console
$ nixops ssh-for-each -d hexagone -- cat /etc/ssh/ssh_host_ed25519_key.pub
firgu....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8+mCR+MEsv0XYi7ohvdKLbDecBtb3uKGQOPfIhdj3C root@nixos
chrysalis> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGDA5iXvkKyvAiMEd/5IruwKwoymC8WxH4tLcLWOSYJ1 root@chrysalis
lufta....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMADhGV0hKt3ZY+uBjgOXX08txBS6MmHZcSL61KAd3df root@lufta
keanu....> ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGDZUmuhfjEIROo2hog2c8J53taRuPJLNOtdaT8Nt69W root@nixos
```

age lets you use SSH keys for decryption, so I added these keys to my
`hosts.toml` and ended up with something like
[this](https://github.com/Xe/nixos-configs/commit/14726e982001e794cd72afa1ece209eed58d3f38#diff-61d1d8dddd71be624c0d718be22072c950ec31c72fded8a25094ea53d94c8185).

Now we can encrypt secrets on the host machine and safely put them in the Nix
store, because they will be readable to each target machine with a command like
this:

```shell
age -d -i /etc/ssh/ssh_host_ed25519_key -o $dest $src
```

From here it's easy to make a function that we can use for generating new
encrypted secrets in the Nix store. First we need to import the host metadata
from the toml file:

```nix
|
||||
let
|
||||
cfg = config.within.secrets;
|
||||
metadata = lib.importTOML ../../ops/metadata/hosts.toml;
|
||||
|
||||
mkSecretOnDisk = name:
|
||||
{ source, ... }:
|
||||
pkgs.stdenv.mkDerivation {
|
||||
name = "${name}-secret";
|
||||
phases = "installPhase";
|
||||
buildInputs = [ pkgs.age ];
|
||||
installPhase =
|
||||
let key = metadata.hosts."${config.networking.hostName}".ssh_pubkey;
|
||||
in ''
|
||||
age -a -r "${key}" -o $out ${source}
|
||||
'';
|
||||
};
|
||||
```

And then we can generate systemd oneshot jobs with something like this:

```nix
mkService = name:
  { source, dest, owner, group, permissions, ... }: {
    description = "decrypt secret for ${name}";
    wantedBy = [ "multi-user.target" ];

    serviceConfig.Type = "oneshot";

    script = with pkgs; ''
      rm -rf ${dest}
      ${age}/bin/age -d -i /etc/ssh/ssh_host_ed25519_key -o ${dest} ${
        mkSecretOnDisk name { inherit source; }
      }

      chown ${owner}:${group} ${dest}
      chmod ${permissions} ${dest}
    '';
  };
```

And from there we just need some [boring
boilerplate](https://github.com/Xe/nixos-configs/blob/master/common/crypto/default.nix#L8-L38)
to define a secret type. Then we declare the secret type and its invocation:

```nix
in {
  options.within.secrets = mkOption {
    type = types.attrsOf secret;
    description = "secret configuration";
    default = { };
  };

  config.systemd.services = let
    units = mapAttrs' (name: info: {
      name = "${name}-key";
      value = (mkService name info);
    }) cfg;
  in units;
}
```
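
The linked boilerplate defining the `secret` type isn't reproduced above. A
hypothetical sketch of it, with field names inferred from how `mkService`
destructures its arguments (the real definition is in
`common/crypto/default.nix`), could look like:

```nix
# Sketch only: submodule type with the fields mkService expects.
secret = types.submodule {
  options = {
    source = mkOption { type = types.path; };
    dest = mkOption { type = types.str; };
    owner = mkOption {
      type = types.str;
      default = "root";
    };
    group = mkOption {
      type = types.str;
      default = "root";
    };
    permissions = mkOption {
      type = types.str;
      default = "0400";
    };
  };
};
```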

And we have ourselves a NixOS module that allows us to:

* Trivially declare new secrets
* Make secrets in the Nix store useless without the key
* Have every secret transparently decrypted on startup
* Avoid the use of GPG
* Roll back secrets like any other configuration change

Declaring new secrets works like this (as stolen from [the service definition
for the website you are reading right now](https://github.com/Xe/nixos-configs/blob/master/common/services/xesite.nix#L35-L41)):
```nix
within.secrets.example = {
  source = ./secrets/example.env;
  dest = "/var/lib/example/.env";
  owner = "example";
  group = "nogroup";
  permissions = "0400";
};
```
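
Once decrypted, the service consumes the file like any other path on disk. A
hypothetical consumer (the real one is in the linked xesite service
definition) could load it as a systemd environment file, ordering itself
after the decryption oneshot, which the module names `<name>-key`:

```nix
# Sketch only: a unit that depends on the decrypted secret above.
systemd.services.example = {
  wantedBy = [ "multi-user.target" ];
  after = [ "example-key.service" ];
  wants = [ "example-key.service" ];
  serviceConfig = {
    User = "example";
    EnvironmentFile = "/var/lib/example/.env";
    ExecStart = "${pkgs.example}/bin/example";
  };
};
```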

Barring some kind of cryptographic attack against age, this should allow the
secrets to be stored securely. I am working on a way to make this more generic.
This overall approach was inspired by [agenix](https://github.com/ryantm/agenix)
but made more specific for my needs. I hope this approach will make it easy for
me to manage these secrets in the future.
@ -1,12 +0,0 @@
---
title: "Tailscale on NixOS: A New Minecraft Server in Ten Minutes"
date: 2021-01-19
tags:
  - link
redirect_to: https://tailscale.com/blog/nixos-minecraft/
---

# Tailscale on NixOS: A New Minecraft Server in Ten Minutes

Check out this post [on the Tailscale
blog](https://tailscale.com/blog/nixos-minecraft/)!
@ -1,7 +1,6 @@
---
title: "Olin: 1: Why"
date: 2018-09-01
series: olin
---

# [Olin][olin]: 1: Why
@ -46,7 +45,7 @@ right code runs as a response. I then have to make sure these things get put
in the right places and then that the right versions of things are running for
each of the relevant services. This doesn't scale very well, not to mention is
hard to secure. This leads to a lot of duplicate infrastructure over time and
as things grow. Not to mention adding in tracing, metrics and log aggregation.
as things grow. Not to mention adding in tracing, metrics and log aggreation.

I would like to change this.
@ -58,7 +57,7 @@ need to be maintained in parallel, so it might be good to get used to that early
on.

You should not have to write ANY code but the bare minimum needed in order to
perform your business logic. You don't need to care about distributed tracing.
perform your buisiness logic. You don't need to care about distributed tracing.
You don't need to care about logging.

I want this project to last decades. I want the binary modules any user of Olin