#9 Finished installation instructions

master
Keith Irwin 2023-04-08 00:49:39 -06:00
parent 768fe5c5ba
commit 3d6c67d80c
Signed by: ki9
GPG Key ID: DF773B3F4A88DA86
4 changed files with 199 additions and 50 deletions


@ -80,8 +80,7 @@ our_pubkey='ZZZZZZZZZZZZZZ' # From the client
wg set "${net_name}" peer "${our_pubkey}" allowed-ips "10.${net_num}.1.1/32"
```
Make sure the client can ping the server with `ping 10.${net_num}.0.1` and the server can ping the client with `ping 10.${net_num}.1.1`. If that's not working, post your error message on the matrix channel. If it is working, get a cup of coffee because the next section is a doozy.
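If the pings fail, wireguard itself can often tell you where things stand; a recent timestamp here means the tunnel is up and the problem is elsewhere:
```bash
wg show "${net_name}" latest-handshakes
```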
## 2. bind9
@ -214,9 +213,9 @@ zone "99.10.in-addr.arpa" {
Excellent. This file references four files that don't exist yet and must be created. Let's start with the keys. `nsupdate` uses symmetric keys, so one copy will live on the bind server and the other will be copied to the nsupdate client.
As you can see from the first lines of this file, I like to keep my keys in `/etc/bind/keys`. Let's create this directory and the two keys named above. Actually, you should rename the "admin" key to your username and give a different key to each admin. "wagon" is of course the key our future dashboard will use to update the nameserver.
```bash
mkdir /etc/bind/keys
tsig-keygen -a hmac-sha512 admin >/etc/bind/keys/admin.key
tsig-keygen -a hmac-sha512 wagon >/etc/bind/keys/wagon.key
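# Lock the keys down so only bind can read them (the "bind" group
# is an assumption from Debian's bind9 packaging):
chown -R root:bind /etc/bind/keys
chmod 750 /etc/bind/keys
chmod 640 /etc/bind/keys/*.key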
@ -288,9 +287,9 @@ $ORIGIN 1.99.10.in-addr.arpa.
1 PTR pc.myuser.mynet.
```
Easy. Now start the nameserver and check that it doesn't throw any errors:
```bash
systemctl start named
systemctl enable named
systemctl status named
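# If it fails, bind's bundled checkers can pinpoint the problem.
# The zone file path below is an assumption; use whatever paths
# your zone blocks actually reference:
named-checkconf /etc/bind/named.conf
named-checkzone mynet /etc/bind/zones/db.mynet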
@ -298,7 +297,7 @@ systemctl status named
If it's not working, fix it and then go back to your pc and check the lookups.
```bash
nslookup pc.myuser.mynet 10.99.0.1
nslookup hn.mynet 10.99.0.1
nslookup 10.99.0.1 10.99.0.1
@ -306,14 +305,14 @@ nslookup 10.99.1.1 10.99.0.1
```
Each of these commands uses `10.99.0.1` as the nameserver by setting it as the second argument; you can also make that your default nameserver or the nameserver for the `mynet` TLD. Look into setting "search domains" for your VPN interface in your operating system. `systemd-resolved` users, for example, can run these commands:
```bash
resolvectl dns mynet 10.99.0.1
resolvectl domain mynet '~mynet' '~99.10.in-addr.arpa'
```
This will tell the OS to send `.mynet` queries to our vpn nameserver. Not all programs respect this setting though; `dig`, `ping`, and your browser will work but you'll still have to set the nameserver by hand for `nslookup` (as above) and `nsupdate` using the "server" command (even though we set it in our SOA):
```bash
nsupdate -k admin.key
> server 10.99.0.1
> add test.mynet 86400 TXT "hello"
@ -328,7 +327,7 @@ The last major step is to set up the certificate authority. Unlike wireguard and
A good place to keep your SSL certs and keys is in `/etc/ssl/private/mynet`. Let's make things easier by setting some variables:
```bash
tld='mynet'
crt_dir="/etc/ssl/private/${tld}"
ca_key="${crt_dir}/_ca.key"
@ -339,7 +338,7 @@ Now we'll create the ca key and cert. You will be asked for some details about y
Here we're setting `-days 3650` which will require re-signing and re-distributing the certificate every ten years. You can avoid that by setting it to 100 years with `-days 36500`. This field is required but I think there is no limit, so you can set it to `99999999` if you want.
```bash
openssl genrsa -des3 -out "${ca_key}" 4096
openssl req -x509 -new -nodes -key "${ca_key}" -sha256 -days 3650 -out "${ca_crt}"
ln -s "${ca_crt}" "/etc/ssl/certs/${tld}.pem"
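# Optional sanity check: confirm the CA's subject and validity dates
openssl x509 -noout -subject -startdate -enddate -in "${ca_crt}"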
@ -349,7 +348,7 @@ The last step makes the cert available to verification from the host OS. This c
We can use these CA files to sign certificates for hosts using our `mynet` domain. Let's sign one for the server first:
```bash
org='My Cool Organization'
tld=mynet
host=hn
@ -393,27 +392,27 @@ openssl x509 -req -sha256 -extensions SAN \
That should do it! Let's check that the cert is valid for all domains and IPs:
```bash
openssl x509 -text -noout -in "${host_dir}/server.crt" | grep -A1 'Subject Alternative Name'
```
That should return something like:
```bash
X509v3 Subject Alternative Name:
DNS:hn.mynet, DNS:*.hn.mynet, IP Address:10.99.0.1
```
It contains our domain, wildcard domain, and IP address. Since everything went well, we can delete the CSR and cnf file:
```bash
rm -f "${crt_dir}/${host}.csr" "${crt_dir}/${host}.cnf"
```
One last thing: we need to generate a certificate and key for our pc. Everything is basically the same as with the server, except that our domain will be `pc.myuser.mynet` instead of `hn.mynet`. So let's breeze through this and check the comments from above if you get confused.
```bash
org='My Cool Organization'
tld=mynet
host='pc.myuser'
domain="${host}.${tld}"
@ -455,6 +454,137 @@ You might be thinking, this would all be easier as a script. A script that could
## 4. Wagon
Now that we have this all set up, we can use wagon. Wagon will help us add clients to wireguard, give them a domain name in bind, and create SSL certificates for them in a single step on a nice GUI dashboard.
I keep services in `/srv` so I would do:
```bash
cd /srv
git clone https://gitea.gf4.pw/gf4/wagon.git
cd wagon
```
### 4.1. Configuration
Copy the sample config and servers files, and the sample docker-compose file:
```bash
cp etc/config.sample etc/config
cp etc/servers.sample etc/servers
cp docker-compose.yml.sample docker-compose.yml
```
Configure the `docker-compose.yml` file however you like, or don't use it at all. The other two files are tab-separated text files. Lines starting with a hash (`#`) are ignored as comments.
The `etc/servers` file is a list of servers on the `/16` network. For now, just add our single server with the correct values.
```tsv
# host ipv4 ipv6 pubkey wg-endpoint admin-endpoint secret
hn 10.99.0.1 XXXX XXXXX= 1.2.3.4:51820 https://wagon-admin.hn.mynet XXXXXX
```
We're just gonna leave `XXXX` as a placeholder for ipv6 since we aren't using it. But do set the pubkey to hn's wireguard public key from above. Set admin-endpoint to whatever you want right now; this is actually used for server-to-server communication, not administration. Same thing for secret: leave it as `XXXXXX` or generate something random; in any case it isn't used unless your network has multiple servers.
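If you'd like something random for that secret anyway, this will do:
```bash
openssl rand -base64 32
```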
Now edit the `etc/config` file:
```bash
TLD='mynet'
LOCAL_SERVER='hn'
IPV4_NET='10.99.0.0/16'
IPV6_NET='fd69:1337:0:420:f4:11::/96'
WG_DNS='DNS=10.99.0.1'
SSL_CONFIG_DIR="/etc/ssl/private/${TLD}"
SSL_CA_CERT="${SSL_CONFIG_DIR}/_ca.crt"
SSL_CA_KEY="${SSL_CONFIG_DIR}/_ca.key"
SSL_ORG='My Cool Organization'
SSL_DAYS='3650'
SSL_CA_PASS='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
DNS_KEY='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=='
DNS_MASTER='10.99.0.1'
DNS_TTL='86400'
```
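Since this file holds the CA passphrase and a DNS key, it's worth keeping it (and `etc/servers`) readable only by root:
```bash
chmod 600 etc/config etc/servers
```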
This file should be mostly self-explanatory. "SSL_CA_PASS" is the CA key passphrase created in the last section. The "DNS_KEY" can be found in the "secret" field of the `/etc/bind/keys/wagon.key` file, which looks like this:
```tsig
key "wgapi-ksn" {
algorithm hmac-sha512;
secret "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==";
};
```
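If you just want that secret on one line for pasting into `etc/config`, a quick extraction (assuming GNU grep) is:
```bash
grep -oP 'secret "\K[^"]+' /etc/bind/keys/wagon.key
```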
Wagon comes as 4 services:
1. An api users can access to add/delete hosts
2. An api admins can access to add/delete hosts and users
3. A frontend for the user dashboard
4. A frontend for the admin dashboard
The two frontends were built with KnockoutJS and HTML and are very bare (no CSS) as they are packaged, but you can easily incorporate them into your existing web portal's design. There is no login (authentication is IP-based), so the frontend works fine on static sites.
For now, there's no authentication for the admin dashboard and maybe there never will be (out-of-scope). It runs on a different port, so simply set firewall and web proxy rules for whatever authentication configuration you like.
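For example, a minimal `iptables` sketch, assuming the admin API lands on port 4441 (as below) and your admin's addresses sit in `10.99.1.0/24`:
```bash
iptables -A INPUT -p tcp --dport 4441 -s 10.99.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 4441 -j DROP
```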
With that in mind, let's boot up the two API servers. This guide assumes the use of docker and docker-compose, but you can run everything outside docker too. You just need to host the `dashboard.cgi` script on one endpoint and `admin.cgi` on another. The `back/dashboard.Dockerfile` and `back/admin.Dockerfile` files can be a guide to doing so with apache2.
If you *are* using docker, you should be able to `touch /var/log/wagon.log` and run `docker-compose up` from the wagon directory. This should make the user API available on `localhost:4442` and the admin API on `localhost:4441`.
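Once they're up, a quick smoke test (any HTTP response means the listeners are alive; the exact output depends on wagon's endpoints):
```bash
curl -i http://localhost:4442/
curl -i http://localhost:4441/
```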
That's not bad. We could take requests on those ports directly, but let's take secure https requests on a subdomain instead. With `nginx`, this would work:
**`/etc/nginx/sites-enabled/wagon.conf`**
```nginx
# User API
server {
    server_name wagon-dashboard-api.hn.mynet;
    listen 10.99.0.1:443 ssl http2;
    ssl_certificate /etc/ssl/private/mynet/hn/server.crt;
    ssl_certificate_key /etc/ssl/private/mynet/hn/server.key;
    ssl_stapling off;
    allow 10.99.0.0/16;  # All users
    deny all;            # Everyone else
    location / {
        proxy_pass http://localhost:4442;
    }
}
# Admin API
server {
    server_name wagon-admin-api.hn.mynet;
    listen 10.99.0.1:443 ssl http2;
    ssl_certificate /etc/ssl/private/mynet/hn/server.crt;
    ssl_certificate_key /etc/ssl/private/mynet/hn/server.key;
    ssl_stapling off;
    allow 10.99.1.0/24;  # One admin
    allow 10.99.7.0/24;  # Another admin
    deny all;            # Everyone else
    location / {
        proxy_pass http://localhost:4441;
    }
}
```
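Then check the syntax and reload:
```bash
nginx -t && systemctl reload nginx
```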
Our frontends are going to need these APIs. At the top of `front/dashboard.js` is a hardcoded variable:
```js
const API_URL = 'https://wg-dashboard-backend.myhost.mytld'
```
Set that to the nginx virtual host we just configured:
```js
const API_URL = 'https://wagon-dashboard-api.hn.mynet'
```
Or use direct http:
```js
const API_URL = 'http://localhost:4442'
```
Do likewise in `front/admin.js` and set the `TLD` too:
```js
const API_URL = 'https://wagon-admin-api.hn.mynet'
// or const API_URL = 'http://localhost:4441'
const TLD = 'mynet'
```
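If you'd rather script these edits, a `sed` sketch from the wagon checkout (adjust the URLs if you went with direct http):
```bash
sed -i "s|^const API_URL.*|const API_URL = 'https://wagon-dashboard-api.hn.mynet'|" front/dashboard.js
sed -i "s|^const API_URL.*|const API_URL = 'https://wagon-admin-api.hn.mynet'|" front/admin.js
sed -i "s|^const TLD.*|const TLD = 'mynet'|" front/admin.js
```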
The frontends should work now, though they could use a bit of design work or integration into your existing website.
That's the whole installation, phew! Take a break. When you come back, start learning how to [use wagon](USAGE.md).


@ -66,7 +66,7 @@ Allowing access to virtual webservers is just as simple. For example, I can let
```nginx
server {
server_name dev.mypc.myuser.mynet;
listen 10.11.1.1:443 ssl http2;
ssl_certificate /path/to/downloaded/mypc.myuser.mynet/server.crt;
ssl_certificate_key /path/to/downloaded/mypc.myuser.mynet/server.key;

USAGE.md (new file)

@ -0,0 +1,19 @@
# Wagon usage
This hasn't been written yet, but it will contain good information on all the dashboards and API endpoints.
## User dashboard
TODO
## User API
TODO
## Admin dashboard
TODO
## Admin API
TODO


@ -4,7 +4,7 @@ networks:
name: wagon
ipam:
config:
- subnet: "172.19.0.0/16"
- subnet: "172.19.0.0/24"
services:
dashboard-backend:
@ -22,14 +22,14 @@ services:
- './etc:/etc/wagon:ro'
- '/var/log/wagon.log:/var/log/apache2/error.log'
# dashboard-frontend:
# build:
# context: front
# dockerfile: dashboard.Dockerfile
# container_name: wagon-dashboard-frontend
# networks:
# wagon:
# ipv4_address: 172.19.0.2
admin-backend:
build:
@ -46,25 +46,25 @@ services:
- '/etc/ssl/private:/etc/ssl/private'
- './etc:/etc/wagon:ro'
# admin-frontend:
# build:
# context: front
# dockerfile: admin.Dockerfile
# container_name: wagon-admin-frontend
# networks:
# wagon:
# ipv4_address: 172.19.0.3
# fed-backend:
# build:
# context: back
# dockerfile: fed.Dockerfile
# args:
# PORT: 4443
# cap_add:
# - NET_ADMIN
# network_mode: host
# container_name: wagon-fed-backend
# volumes:
# - '/var/log/wagon.log:/var/log/apache2/error.log'
# - './etc:/etc/wagon:ro'