Compare commits

...

152 Commits

Author SHA1 Message Date
Niklas Yann Wettengel 299716ee31 ff-niyawe1 -> ff-niyawe3 2 months ago
Niklas Yann Wettengel 66acee3db2 named: add ula prefix 8 months ago
Niklas Yann Wettengel 288ea865f0 new ip loppermann 8 months ago
Niklas Yann Wettengel fccb6ae8f0 aw1, sim2 ff-niyawe1 -> ff-loppermann1 11 months ago
Niklas Yann Wettengel 16f18a1b4c ems1, my1 ff-niyawe2 -> ff-uniko1 11 months ago
Niklas Yann Wettengel 26402592ad ko2 ff-niyawe2 -> ff-niyawe1 11 months ago
Niklas Yann Wettengel 0040112ad1 haveged no longer needed 11 months ago
Niklas Yann Wettengel ea3fb27515 fix ff-loppermann1 ffrl-ips 11 months ago
Niklas Yann Wettengel d2c9ec93a9 coc1 ff-niyawe1 -> ff-loppermann1 1 year ago
Niklas Yann Wettengel 71dadf3da4 ko1 niyawe1 -> uniko1 1 year ago
Niklas Yann Wettengel 82e6f06b6b ff-uniko1 1 year ago
Niklas Yann Wettengel aa15d8285c respondd_poller up 2 years ago
Niklas Yann Wettengel 7bee75f9e5 new ff-niyawe2 ip 2 years ago
Niklas Yann Wettengel 4d3268b80b loppermann1 2 years ago
Niklas Yann Wettengel fb0dbf28a0 new net with nat64 2 years ago
Niklas Yann Wettengel b53a8cf228 merge fastd and uplink nodes 2 years ago
Niklas Yann Wettengel e75acd9a72 respondd: use vxlan interface 3 years ago
Niklas Yann Wettengel 0fdb16e7b0 rm ww net 3 years ago
Niklas Yann Wettengel 1705b3ed49 monitor vxlan interface 3 years ago
Niklas Yann Wettengel 57d76e41f1 rm ww 3 years ago
Niklas Yann Wettengel 4f2e1d7b8d munin wg peers 3 years ago
Niklas Yann Wettengel 98b46e45eb make wgkex output unbuffered 3 years ago
Niklas Yann Wettengel d116d05912 restart wgkex on repo update 3 years ago
Niklas Yann Wettengel 140a04a8b5 run webserver for connectivity check 3 years ago
Niklas Yann Wettengel 9ec1670a26 wg 3 years ago
Niklas Yann Wettengel f394fd8166 new group wg 3 years ago
Niklas Yann Wettengel 0391e95103 rm ww01 and ww02 3 years ago
Niklas Yann Wettengel a778bc7c93 coc2 -> ff-niyawe4 3 years ago
Niklas Yann Wettengel 1381cce6f4 sim1 -> ff-niyawe4 3 years ago
Niklas Yann Wettengel acd20615ab new niyawe ssh-keys 4 years ago
Niklas Yann Wettengel f8dbb9f988 fastd grep script 4 years ago
Niklas Yann Wettengel 2874a4583e fastd verify add blacklist 4 years ago
Niklas Yann Wettengel 74ca81df46 munin: collect bind stats 4 years ago
Niklas Yann Wettengel 36440dcf2a use batctl to set hop_penalty 4 years ago
Niklas Yann Wettengel 64b0f4bd6c bind: allow requests from link-local addresses 4 years ago
Niklas Yann Wettengel 7e89a60f8c remove nat64 4 years ago
Niklas Yann Wettengel def14f0993 fastd: accept all keys 4 years ago
Niklas Yann Wettengel 2af6075cde remove old fastd peers 4 years ago
Niklas Yann Wettengel 9c2edaca05 Merge branch 'master' of git.niyawe.de:ffmyk-ansible into master 4 years ago
Niklas Yann Wettengel 65f18a5077 new pubkeys for ww 4 years ago
Niklas Yann Wettengel 41699bf138 allow dns requests from ww 4 years ago
Niklas Yann Wettengel 92885c4f58 ww02 4 years ago
Niklas Yann Wettengel 07475f6627 wireguard is now in the kernel 4 years ago
Niklas Yann Wettengel 88109f6250 ww01 4 years ago
Niklas Yann Wettengel d412bd90e3 fix batctl syntax 4 years ago
Niklas Yann Wettengel 0257727946 up batctl syntax 4 years ago
Niklas Yann Wettengel f7f85bab9a add untracked host_vars for ff-niyawe4 and ff-kraftimion1 4 years ago
Niklas Yann Wettengel 164e93f850 remove . zone in bind 4 years ago
Niklas Yann Wettengel 7bef07ac78 mark missing interfaces 5 years ago
Niklas Yann Wettengel 9670cc8980 olfb{a,s} 5 years ago
Niklas Yann Wettengel 1c34bfa814 wwlabs offloader 5 years ago
Niklas Yann Wettengel 58e999356d ffww 5 years ago
Niklas Yann Wettengel 1738af3a5d install mesh-announce 5 years ago
Niklas Yann Wettengel 27432b0ddc validate dnssec 5 years ago
Niklas Yann Wettengel fcb6d2efb0 no port 80 5 years ago
Niklas Yann Wettengel 43ed9c0c88 nat64 5 years ago
Niklas Yann Wettengel 2befca5ea4 babel: allow nat64 prefix 5 years ago
Niklas Yann Wettengel 59331e63f2 bind fix 5 years ago
Niklas Yann Wettengel 196f36c3aa use precompiled wireguard 5 years ago
Niklas Yann Wettengel ab1af81ee3 send router advertisements less often 5 years ago
Niklas Yann Wettengel 82026eeabf updated master dns server 5 years ago
Niklas Yann Wettengel a530d571a1 announce ipv6 prefix at least once a minute 6 years ago
Niklas Yann Wettengel 9223a53f37 new pacman syntax 6 years ago
Niklas Yann Wettengel b97d70626b netctl enabled path fix 6 years ago
Niklas Yann Wettengel 67c1ec3755 updated ff-uplink2 ip 6 years ago
Niklas Yann Wettengel c299d46261 call pacman only once 6 years ago
Niklas Yann Wettengel 20ca7fda98 ff-niyawe2: new ip 6 years ago
Niklas Yann Wettengel ddb300f1f2 new ip for ff-loppermann1 6 years ago
Niklas Yann Wettengel 16b105a2fc removed old host_vars samples 6 years ago
Niklas Yann Wettengel a859fd2568 updated README 6 years ago
Niklas Yann Wettengel a7928fd2fa added real host_vars 6 years ago
Niklas Yann Wettengel fa347deecc added real group_vars 6 years ago
Niklas Yann Wettengel 6bb1212cd3 updated gitignore 6 years ago
Niklas Yann Wettengel 86efc939d5 added real inventory 6 years ago
Niklas Yann Wettengel da72f062bb updated influx-scripts 6 years ago
Niklas Yann Wettengel 4bfc6f1e29 added whitelist for uplinks 6 years ago
Niklas Yann Wettengel 587d37b5f1 babel: ignore default routes on uplinks 6 years ago
Niklas Yann Wettengel 776fafa79c babel: removed old ipv6 prefixes 6 years ago
Niklas Yann Wettengel f25a8da111 babel: decreased smoothing-half-life 6 years ago
Niklas Yann Wettengel fe98873a15 monitor link between uplinks 6 years ago
Niklas Yann Wettengel a10985afe4 updated ip of munin.niyawe.de 6 years ago
Niklas Yann Wettengel d6aed0cfbd babel: insert metric into kernel table 6 years ago
Niklas Yann Wettengel dfc02c3178 babel mesh between uplinks 6 years ago
Niklas Yann Wettengel c17cedcf1b make sure nf_conntrack_ipv4 is loaded before systemd-sysctl is started 6 years ago
Niklas Yann Wettengel aff767a31f reduce tcp_timeout_established to 1 hour 6 years ago
Niklas Yann Wettengel 2c1d3f36eb drop fastd traffic from freifunk 6 years ago
Niklas Yann Wettengel f81e146f6c set net.netfilter.nf_conntrack_tcp_timeout_established later 6 years ago
Niklas Yann Wettengel 8cd5685755 add ula prefix to ip rules 6 years ago
Niklas Yann Wettengel f6f27ff950 limit log to 1 day 6 years ago
Niklas Yann Wettengel eedbf0f2be munin: use own fw_conntrack to count ipv4/6 6 years ago
Niklas Yann Wettengel 22c024eea0 remove lowered wireguard mtu 6 years ago
Niklas Yann Wettengel 10585bc4d2 babel add preferred uplink 6 years ago
Niklas Yann Wettengel 4fa23988e6 babel: use rtt for uplink selection 6 years ago
Niklas Yann Wettengel dff70e1224 initial icvpn 6 years ago
Niklas Yann Wettengel ad992a78dd set wireguard backbone mtu to 1280 6 years ago
Niklas Yann Wettengel c216adad03 added iperf3 6 years ago
Niklas Yann Wettengel dad5d1ac22 munin fix 6 years ago
Niklas Yann Wettengel 222aa7fef7 static routes: SysVStartPriority no longer exists 6 years ago
Niklas Yann Wettengel 001de11bd2 munin: ffrl if name fix 6 years ago
Niklas Yann Wettengel 4523a78c97 add munin monitoring 6 years ago
Niklas Yann Wettengel de66f3d823 fix wireguard backbone link local net size 6 years ago
Niklas Yann Wettengel 7b896973cc fix wireguard backbone down script 6 years ago
Niklas Yann Wettengel 54515eb744 clamp mtu 6 years ago
Niklas Yann Wettengel c5ed917c8e resize max conntrack 6 years ago
Niklas Yann Wettengel a85999dbac install vnstat 6 years ago
Niklas Yann Wettengel aa0593233e install nginx 6 years ago
Niklas Yann Wettengel 78d31cce2c increase mullvad metric 6 years ago
Niklas Yann Wettengel 6dd3a22f2f updated master dns server 6 years ago
Niklas Yann Wettengel fd81293b4b moved all:vars from inventory to extra file 6 years ago
Niklas Yann Wettengel 17f90b1a9f removed yaourt 6 years ago
Niklas Yann Wettengel 261732bba0 added more sample files 6 years ago
Niklas Yann Wettengel ce3ca9e97c fastd sample config up 6 years ago
Niklas Yann Wettengel d527f8e6d0 new wireguard mesh format 6 years ago
Niklas Yann Wettengel b711ff0367 babel: redistribute ula prefix 6 years ago
Niklas Yann Wettengel 2354d5ad3e endpoint for additional wireguard backbone peers is now optional 6 years ago
Niklas Yann Wettengel 4364aab1a9 send router advertisements more often 6 years ago
Niklas Yann Wettengel 53be5c3014 reject forwarded traffic going out on the default gateway 6 years ago
Niklas Yann Wettengel 09ae123075 fix 6 years ago
Niklas Yann Wettengel b7615bd04e renamed group_vars for uplink 6 years ago
Niklas Yann Wettengel acf495d4ba add unreachable rule for uplinks 6 years ago
Niklas Yann Wettengel e202073040 removed unused routing table 6 years ago
Niklas Yann Wettengel 663c6c74c6 uplink: add additional peers 6 years ago
Niklas Yann Wettengel 41b22ed59b added vimrc 6 years ago
Niklas Yann Wettengel 8225aa0e7c added uplink group 6 years ago
Niklas Yann Wettengel 03eb642632 babel mullvad_uplink fix 6 years ago
Niklas Yann Wettengel fb0ce938dc babel mullvad_uplink fix 6 years ago
Niklas Yann Wettengel 0da10ba8bc mullvad_uplink fix 6 years ago
Niklas Yann Wettengel c5e189efc1 update ssh keys 6 years ago
Niklas Yann Wettengel b8cc4bc6cf updated sample 6 years ago
Niklas Yann Wettengel 0e9d895e77 added mullvad uplink 6 years ago
Niklas Yann Wettengel 99dddff862 ffrl uplink and fastd split 6 years ago
Niklas Yann Wettengel d2270e2e50 ffmyk influx minimize memory usage 7 years ago
Niklas Yann Wettengel ca323efbf4 set conntrack table size 7 years ago
Niklas Yann Wettengel 0f8af08cd7 fixed backbone routing 7 years ago
Niklas Yann Wettengel 5fed801449 changed master dns ip 7 years ago
Niklas Yann Wettengel b58f964097 added dns zones 7 years ago
Niklas Yann Wettengel 3fd6ef10d7 fixed rule for ipv6 net 7 years ago
Niklas Yann Wettengel 6425f1ee54 install tcpdump 7 years ago
Niklas Yann Wettengel 331fe0ad34 fixed rule for ipv6 net 7 years ago
Niklas Yann Wettengel 428cb1a287 added rule for ipv6 net 7 years ago
Niklas Yann Wettengel b426d17031 enable ipv6 exit via mullvad 7 years ago
Niklas Yann Wettengel afdc5fe92b wireguard_mesh: only run batctl commands if there are peers 7 years ago
Niklas Yann Wettengel 5ec7d9ba3e route changes 7 years ago
Niklas Yann Wettengel a72158a848 changed routing policy 7 years ago
Niklas Yann Wettengel 4136cb974e added mullvad role 7 years ago
Niklas Yann Wettengel 2e5b3ff179 fixed typos 7 years ago
Niklas Yann Wettengel 8bad801b15 added routing between servers 7 years ago
Niklas Yann Wettengel 6ef6aa8d62 iptables: use template 7 years ago
Niklas Yann Wettengel 739f97d859 wireguard site mesh 7 years ago
Niklas Yann Wettengel d82f852497 fastd working 7 years ago
Niklas Yann Wettengel 90a8a597ea sysctl: load nf_conntrack module 7 years ago
Niklas Yann Wettengel d18d1ffd4f enable ipv6 routing 7 years ago

3
.gitignore vendored

@@ -1,5 +1,2 @@
inventory.ini
host_vars/*
*.swp
*.retry

3
.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "library/external_modules/ansible-aur"]
path = library/external_modules/ansible-aur
url = git://github.com/cdown/ansible-aur.git

@@ -2,20 +2,11 @@
sets up ffmyk supernodes
## usage
- load submodules
```
git submodule update --init
```
- copy inventory.ini.sample
```
cp inventory.ini.sample inventory.ini
```
- add hosts to inventory and edit variables in host_vars
- to install arch on hetzner vms run the bootstrap_arch playbook
```
ansible-playbook -i inventory.ini bootstrap_arch.yml
ansible-playbook --vault-id @prompt -i inventory.ini bootstrap_arch.yml
```
- to configure the node run
```
ansible-playbook -i inventory.ini setup_fastd.yml
ansible-playbook --vault-id @prompt -i inventory.ini setup_fastd.yml
```
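The real `inventory.ini` is gitignored, so only the steps above hint at its shape. A minimal hypothetical sketch, using placeholder hostnames and the group names that appear elsewhere in the repo (`new_fastds` from the bootstrap playbook, `uplink` from the group_vars), might look like:

```ini
; hypothetical sketch — actual hosts and groups live in the private inventory.ini
[new_fastds]
ff-example1 ansible_host=203.0.113.10

[uplink]
ff-example2 ansible_host=203.0.113.11
```

Per-host variables (fastd keys, bat0 addresses, DHCP ranges) then go in `host_vars/<hostname>`, as the sample files show.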

@@ -1,15 +0,0 @@
---
- name: reboot vserver into rescue image
hosts: new_fastds
connection: local
gather_facts: no
vars:
ansible_python_interpreter: /usr/bin/python
roles:
- role: boot-rescue
- name: install archlinux
hosts: new_fastds
user: root
roles:
- role: install_arch

@@ -0,0 +1,26 @@
---
authorized_keys:
- sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAIJipjvUGQNrDqXjIulGP/y52+y44BkkZDSguN/1NGI6AAAAABHNzaDo= niyawe@yubikey-uni
- sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAILUmx9SpIHap0rpGqR54VBkO6v+JxJn0e6p01eJ8ZMQkAAAABHNzaDo= niyawe@yubikey
- sk-ecdsa-sha2-nistp256@openssh.com AAAAInNrLWVjZHNhLXNoYTItbmlzdHAyNTZAb3BlbnNzaC5jb20AAAAIbmlzdHAyNTYAAABBBEgXSSr+3cOG3dlmMVP+uLc1AQLuhzqqDagAs/0MRxdbn9aXvN20KIUF60mxZp5z/uB5wCv0b5fB8HaBOGXgdVoAAAAEc3NoOg== niyawe@solokey
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILF/aG11fx4d+LxQN9xSgbHnY4iHX7zkmNDAbZ9+g6u3 niyawe@offline-backup
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2I7IRe94zaC3BSOEE/eLtVfyJCjSlBiPvmSqCoNBZWFkkdFC7LMbG6YUJgz0O9dQGToPc4Qt5pUszAPCM7XdDxbmWRkHOxhk0aNsnQ22aQT+DU+E2oQn7ovlOXyvrNXyRdsrNSU1AOnK0tsn4bAJnCj/KAvV7Py4JZkpblYX9xqZFwuIvii7zjLch0S2nCibZmJ+fme/l1mYWRChNZNriChFHdcv2bZDz5KQKGJ7pW3rZrbVM6/gSBEfObJkGEtXhnguqt76o0aa5LuMYEqerbbwdWgY8W5Yx3L195I65jgI3Qi6VX6VETT8UXkyxRLfhf/OrPDeblED9dHKUo10n ataflinski@manning
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqHLdqtNAK78MAbQkvuq35sjZ/QvJIQDulhovu4AA5t88WWj91+HCYSRto9rsPXNXG6D9EjiFJFM55UUe1lQjM8wP6/nBCG5BPhqksVd20SR9IY4kmYAniedKwL85MCYsLIptHaH2OUm7aXmY61CvtpwcqTpthQicup9K+Oh1SLQOBNEZ98zBfxLmvebHYBXtHgm+tFuUQrJn8Pnlx1tkjwijgi3ysqdzs24WVV5DbLb4f78NAUuyOWkoVHqMhlF4G8UFPOA/i0fFZLMLzTlOFwknJPDtLoj3QSfAGDRy7hySo+X7c3rPdjnDqd00cggq0XbYAEMqic4+61bPCGzQMAz/+LVntG2N60Uj6YpDNCpVp9NAFzymbye7xYUqUYJ96DGnRSOvBG6mydNteSCqjjWZyMGvMxJNeuXgfwkNX74WKruil/LlB0Y5iao1qW49/bi8ZIMVBpmw+JDKSQn/odNqgU+S4GkUVsiHrg1FMVQj0GHyNATiR3VbSlX9pvzfWzCiNYWNj0lowEQ5grkuRjnUukAjWZNp7rsyO0+JAVwTb/qIyx+pcRw90aWtLTUMYGFBsigpqK+r4rLsv6F9tfyQ7YBRqFMpVgSKoOguChDMvubyjVaj/IUh83W4UFT8Jzq0+4Ruj0PDlX7BHFzqbnySdTuqJr8cOQgK9qIq06w== adlerweb@OP-Server
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDViQ3+m2l5GcYcEY+wd8DYbC0r3ChS2eR496XlkUg1+M6LQXbSn2w3SEXAOlSiWEsItfuLEyDaqK4y/8ZuWLo45DdnTWPbBgSakayEd8+a9rhz0woP9fCDKusF8Wj/tzGcD2klF0bhSsB4TfIjEPn0B3H3ikbqBF0X26cZmOnXa1E14yeiyxHaMag0aNV1gmGSKE9XOqen3vVP86Tw23b1mb4qCN+SCK/JQAlLJmhPcuRU3jMIgV7+2F40Z/yhT2utietUtAzsG2Vxt+qnvDPbSoA5l3ygmzgIxmlBXEM0G6M464ZA6FoUXXUZ+HxqBvJI5X+OBAoXS75mRcZyNXS3sFIxZWdwChMGejRnnYfnrkwaCe5RSv+hjHH5B3ysHgmDDV4vTbA21e+yQHGFYAzWAe9kG8CcucHa3fQBwS2s0fhxxaqilxGytTrcL3rBxNTvTgqHIg5oDIRbiJh6T3dr5T871R/BZ9y6hHKx/vcedUI+QwF3B7L+mzNuQY+GJZJXZ3560j7RLJ1mvSBFYCtYjcfBzfbC4kU34HhY4+NAgK8pXwsG+YE0YdcZ4bYgvCpMYyQquKBjvMBzFkzHd6fpWotIU0EKBGSmusGvRykBxNmbphWrp3bd+Oi9dzVRiNf22K0JsfvODWhLOeearuBwWoKjaMQkadNo23YNIdjvMw== norbert
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKRUNWQj0vmWciklwjItP3bEth7fY52z99zndJ6Z3wJj lars@lapoppermann
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC0zC4tuyKQRoVHhNpYTsOIeLbadG43jy0ZW6QWQ4VCcmrR/ccE/P03Y4IGf4fNlnSzOjc3sTjR9+qKUbyNeVRSGUlBT6alYSqAzSTZ/dtzmnYl1Ce/WKw0J1E4C6v8TaKKsqDl0p4DoYuIihW71m2syNFSph8uO2KWZ8YNGifC/wRx9ZZQTNvsuSo0OWhiiXETs1c8a51wtqQmTEZ2b8ownX/ibJV+SEF+o+poiz0A8qDyBpcb4N2hoVd0WWy0CFkYTC6oHPlsVSowJxsj86c3o/3fcxNKwy7XNOM0lzvL/j7mVVTJyW1984p7GJ4Z+TWRRIOEHbu3OLIwk7XfF5257/GiuFuvbBIzg1/lEssdesDjdbucVCK544ZR1kcNMJ23SWJQ8tPVhZ/ED1mzou7BLlIJgewbfx3IklzSVeNlFgua+lKJ3J6WQL7SVDog1DPHLLEupsCGkM8Put8NKm6DUP/saXOLr3ulMlqjZIcvFpDx+HcBFr7NPTTPUVkSBiZwlypLAHghVMASvDoXK3hgZI+tuTcAhSMQ9dUEhEMP4OxP8aqsnl1EDfIV434XZpF007qWQviqi4LXlYlDZidtPt32kQDPRSgl4uk0YgBGOmqtY3k0QDSJIsonMeSOJ3DE2n41MWxYcOWzXKhscDC7I0prh4lJNGKxfFjvhC6rqQ== patrik@uni-koblenz.de
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC2bbsrZLJ2271iSb04qpoUDlbrH19aTXUlzFaQSp1KO0BjCxdNvY1x6ZjkIPUC0YeaVGePu0cBJFWYZKpPRiz5hbWeFgaVvhbAlhxAMSlgdjLiN2alc92mBX40NhrpgSV/hGB5KAqqBQr9y01g9I5GRl9jdXgzUA9hhbqxls6tvXxGN2SJC3TFbUj+2PPpn8Cw2ZJiKsKZIoQfs9ZQuv2xDi7E6voqBALlYWd217ZgBezklrpm48dDisGI/WdZyllgk0XyxXwRSSD8QINTPjWmKXk5ZNH65J0KyDlnrZsgQuQbsN3jGgJsPfR6tydVITd1IXtSwawUYZ+JU8wwp6CR sebastian@gartenzwerk
influx_user: !vault |
$ANSIBLE_VAULT;1.1;AES256
64393466646161346131343130636632396162376664333366663238643938666263623238613437
6637393034383233663230313638636631353734666365320a396366646234383165616561356561
32643763333635663738306137646236636532303735663539633337356562376439323338386339
6337396163633731350a373164326435363634613238663463373534313434643964653930376533
3163
influx_password: !vault |
$ANSIBLE_VAULT;1.1;AES256
32306561353961366237383834616333636136393838306330323135653232626437666162393932
6566663234383666623165363861663336653339633636630a643865663439306536323538623935
38396438636234333431626466643732666562363963666534333535353164663365336631393534
6662333331616366620a313635346565623863376232646566303864616564613234393437656535
33366461613461333130623661616166393733636438643963363361643337643736

@@ -0,0 +1,10 @@
wireguard_bb_peers:
- name: 'olfbs'
pub_key: 'AlD1EOetcyX2HN+t224nZZXYjb/4oaYW92njaKAr0AI='
ipv4: '10.222.0.213'
port: 10150
- name: 'olfba'
pub_key: 'LobyJ67+/rGkTcFSchnJMz76MGVBAz5FrFypYq9GnzQ='
ipv4: '10.222.0.212'
port: 10151
dns_ip: '2a03:2260:1016::53'

@@ -1,71 +0,0 @@
---
ansible_host: 123.123.123.123
fastd_peer_limit: 200
fastd_secret: <fastd secret key>
fastd_mesh_mac: '<mesh mac>'
bat0_ipv6: '<ipv6>'
bat0_ipv4: <ipv4>
dhcp_start: <ipv4>
dhcp_end: <ipv4>
mullvad_country: nl
mullvad_crt: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
mullvad_key: |
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
influx_user: <user>
influx_password: <password>
munin_node_plugins:
- name: cpu
- name: df
- name: df_inode
- name: dhcp-pool
- name: diskstats
- name: entropy
- name: fastd_peers
plugin: fastd_
- name: fastd_traffic
plugin: fastd_
- name: forks
- name: fw_conntrack
- name: fw_forwarded_local
- name: fw_packets
- name: if_bat0
plugin: if_
- name: if_err_bat0
plugin: if_err_
- name: if_ens3
plugin: if_
- name: if_err_ens3
plugin: if_err_
- name: if_ffmyk-mesh-vpn
plugin: if_
- name: if_err_ffmyk-mesh-vpn
plugin: if_err_
- name: if_mullvad
plugin: if_
- name: if_err_mullvad
plugin: if_err_
- name: interrupts
- name: irqstats
- name: load
- name: memory
- name: netstat
- name: nginx_request
- name: nginx_status
- name: ntp_kernel_err
- name: ntp_kernel_pll_freq
- name: ntp_kernel_pll_off
- name: ntp_offset
- name: open_files
- name: open_inodes
- name: proc_pri
- name: processes
- name: swap
- name: threads
- name: uptime
- name: users
- name: vmstat

@@ -0,0 +1,182 @@
---
ansible_host: 2a01:4f8:190:44c2:ff::2
sites:
- name: 'aw'
net4: '10.222.80.0/21'
net6: '2a03:2260:1016:0201::/64'
site_net6: 'fd62:44e1:da:0200::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
66613864623830333561306634656664623831613235336463353433393835623633313531636164
3132343936323530316438366530343336393366343735390a643862663163366661383963366461
63356536333162306635653863386430306463323963633066626336663837633762356632393163
3661353338313935330a303338343231393965333534633438396261633431613734646265373830
30623665633364343639646539616262666663333830396363336436343938613266333963363432
65303930366339626331356230316236396138653735666431633437313436303862363437313738
38626439626562386264623534646238666436656362633432666137666334643366303733396132
35396461636664396633
fastd_mesh_mac: '02:ff:41:57:00:10'
fastd_port1: 10014
bat_ipv6: '2a03:2260:1016:0201::1'
bat_ipv4: '10.222.80.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.80.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.80.50'
dhcp_end: '10.222.87.250'
vxlan_id: 11443185
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
35303461376637356232386239353362353333383966613030646361313338663839646666306237
3433636237396630623830303938663735376337666337640a346635616337306235376434643265
66396465393962326635313966653533313638646361383638373836313063346361343364306636
3033393631306137630a333763386666623835623635633839616165616362633836626135323530
35393363646161333062396139626563383334383262333066636663663634353635626334383935
3437616563363566613736623361633934643962643662366338
wireguard_mesh_pub_key: 'tf/eNi+WOlsoXTmtAvQEwRv64YME0SIE+rlQysLd/Dc='
wireguard_mesh_port: 10015
wireguard_mesh_address: 'fe80::00ff:41ff:fe57:1'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:41:57:00:01'
- name: 'coc'
net4: '10.222.48.0/21'
net6: '2a03:2260:1016:0101::/64'
site_net6: 'fd62:44e1:da:0100::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
36343336633735316533356365663562633136316164346335613665343736643538613033323837
3163666137323238323535623663393466343061393432640a363838366533663135366665343137
64393938336636336230306333376365646631393432333934326631366666363266633631366636
3232396339613063360a356636623235333161633630363361653064626232386132393065363961
64653535613861636633303164353964393461376432646539656332373461626139333166343163
65376133646361616539303338373164623933633061663635353338643036396332656332643738
61626236323463623362613335653436643631356362343866333035623662336262323166616163
61303232626638303231
fastd_mesh_mac: '02:ff:43:4f:43:10'
fastd_port1: 10012
bat_ipv6: '2a03:2260:1016:0101::1'
bat_ipv4: '10.222.48.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.48.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.48.50'
dhcp_end: '10.222.55.250'
vxlan_id: 10540244
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
37346162323035633263653630353265333838376165636664363434666263636230383339336535
3666316438633539313137666461353133376532386434650a306262643965636431303138326436
62306233303134653232663233343134393833643866396466663664656638663864656266386336
3630343163393334390a303632663962316365626330613464353263616364366533316566633730
32366232336331653366656237323561323939356235323864393463616133373035323763363261
3937633731373231316433373866643365316637323134363931
wireguard_mesh_pub_key: 'dqyoKKWYSfaov1zc1SpKbtVJPsoCDui5NsFzTCoqkBs='
wireguard_mesh_port: 10013
wireguard_mesh_address: 'fe80::00ff:43ff:fe4f:4301'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:43:4f:43:01'
- name: 'sim'
net4: '10.222.184.0/21'
net6: '2a03:2260:1016:0402::/64'
site_net6: 'fd62:44e1:da:0400::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
39303135363836313137613238633137646235366637393463346132366361363465303531653565
3439336633396532303563613536333264373863663933650a653566626462346133363433333337
64333138353862613937653065613136323238666336363635643062643538363265323335643766
6465393863393630640a643531376464336334346530393764376139623033336139616138653534
64616531313665336365323331616263613336313938316663383437353532316631636138663661
37666538656533346365393435316630323065316336303138373962393038653831623339656634
37343837373965393866653965366335636563303931333465656539316563646162626261633535
34303934616666633764
fastd_mesh_mac: '02:ff:53:49:4d:20'
fastd_port1: 10018
bat_ipv6: '2a03:2260:1016:0402::1'
bat_ipv4: '10.222.184.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.184.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.184.50'
dhcp_end: '10.222.191.250'
vxlan_id: 10908477
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
31343338643330396338336365636336363537633939396265336639666464643563353362613863
3234616436313331303433613837663033653437323839340a663838646136323265653861636539
63373462646430376265356533363932393861626133356536306237373730303132313366306538
3034653565386462640a666361653236373562653464643562636232303965663437376535646363
63333662333630383162326166323239333966323537303238353164373939343735366230313031
3731663830326363323062363637663730313736383139353732
wireguard_mesh_pub_key: 'hDx+zhY9WgabV3Sgp7fsfRRqNIzOP5z0Tl2t7wZjzBw='
wireguard_mesh_port: 10019
wireguard_mesh_address: 'fe80::00ff:53ff:fe49:4d02'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:53:49:4d:02'
wireguard_bb_name: 'loppermann1'
wireguard_bb_endpoint: '{{ ansible_host }}'
wireguard_bb_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
34643662623262646365326237626237313962663465366263386362353630633765363239333831
3632336333633862643737333864623666353935353166620a386462373161383266616633633837
33613761303136623264346435376664356235346633656531343564333334303266666462613665
3063333638323862360a653738306563393434376532313434633162666133343962313066616432
64356233663838353838326230613839663933666663393330303535653638343861656363326632
3539623766663136323061633562643365636162633134396361
wireguard_bb_pub_key: 'im56pv9JwwveDDkk8aA++0bgHjuUvUzaun4qFAZFrVc='
wireguard_bb_ipv4: '10.222.0.16'
wireguard_bb_ipv6: 'fe80::ffbb:ffbb:16'
wireguard_bb_port: 10116
wireguard_vpn_port: 10010
wireguard_vpn_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
37333837366636343138326138623361656462653861633566643831306139383964643839393234
3535393434653761643831663063386635323038343337340a336637633233623333316231346165
64643161663061356466616662336332373738306331386636373761623361343032663832663139
6465343666663861630a356231633764363030356230636631663333356665396462623862643863
66306461316633393065343063316633373530623163356530353031393132353964326238383137
3835373735333537396539353735326539633930393564376464
wireguard_vpn_address: 'fe80::d3:16ff:fee5:6239'
wireguard_vpn_client_range: '2a03:2260:1016:3000::/52'
tayga_ipv4: 10.3.0.1
tayga_pool: 10.3.0.0/16
ffrl_ip4: '185.66.194.105'
ffrl_peers:
- name: 'bbaakber'
remote: '185.66.195.0'
ip4: '100.64.10.233'
peer_ip4: '100.64.10.232'
ip6: '2a03:2260:0:58b::2'
peer_ip6: '2a03:2260:0:58b::1'
- name: 'bbafra2fra'
remote: '185.66.194.0'
ip4: '100.64.10.235'
peer_ip4: '100.64.10.234'
ip6: '2a03:2260:0:58c::2'
peer_ip6: '2a03:2260:0:58c::1'
- name: 'bbaixdus'
remote: '185.66.193.0'
ip4: '100.64.10.237'
peer_ip4: '100.64.10.236'
ip6: '2a03:2260:0:58d::2'
peer_ip6: '2a03:2260:0:58d::1'
- name: 'bbbakber'
remote: '185.66.195.1'
ip4: '100.64.10.239'
peer_ip4: '100.64.10.238'
ip6: '2a03:2260:0:58e::2'
peer_ip6: '2a03:2260:0:58e::1'
- name: 'bbbfra2fra'
remote: '185.66.194.1'
ip4: '100.64.10.241'
peer_ip4: '100.64.10.240'
ip6: '2a03:2260:0:58f::2'
peer_ip6: '2a03:2260:0:58f::1'
- name: 'bbbixdus'
remote: '185.66.193.1'
ip4: '100.64.10.243'
peer_ip4: '100.64.10.242'
ip6: '2a03:2260:0:590::2'
peer_ip6: '2a03:2260:0:590::1'

@@ -0,0 +1,183 @@
---
ansible_host: 2a01:4f8:a0:6396:2::2
#ansible_host: 10.0.2.6
sites:
- name: 'aw'
net4: '10.222.88.0/21'
net6: '2a03:2260:1016:0202::/64'
site_net6: 'fd62:44e1:da:0200::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
64316166303564616561623661653739386434373564646630396361366262303737346632656136
3164613138393838616235343936633162333032323563320a666235383763383766373761623533
36313135643830623363353966653138346364646639386339393664366565323265366630333362
6264633837626133300a373133353532656331623038346637643834613563383435366534393865
31343432663535653364643564306533383333303939656232336232306136663839376662656332
63396465303038396531653239323264346233313563636261613231343763306130316530386262
31316432383834323237386138336434663365643732643732323439313564303337636466393334
63613666333161366366
fastd_mesh_mac: '02:ff:41:57:00:20'
fastd_port1: 10014
bat_ipv6: '2a03:2260:1016:0202::1'
bat_ipv4: '10.222.88.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.88.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.88.50'
dhcp_end: '10.222.95.250'
vxlan_id: 11443185
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
63616334663237313761666462326564376439633631633839373434393636366363666139653239
3361623733653863613637616439616266393039316332380a373031626239383537316536353862
66616563356131333439303665303039393965383939383038646236643063613231616330363938
6536333561353564620a353634613666383430656639313231363431313662386138396236313364
61653766653462343937396636643132323137636331346132313763313135633263613230366336
6461376335353964343564383335346366633438383566653066
wireguard_mesh_pub_key: 'm3JXl4RCr9xNeWo9L2GXiGVCpPvRX3maaLUw6qPse1I='
wireguard_mesh_port: 10015
wireguard_mesh_address: 'fe80::00ff:41ff:fe57:2'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:41:57:00:02'
- name: 'coc'
net4: '10.222.56.0/21'
net6: '2a03:2260:1016:0102::/64'
site_net6: 'fd62:44e1:da:0100::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
64346365626531663039636230633430613338336164623065393964313538633839346537356533
3363313832333561373134363136333663313864383466360a333533643462336533336433353030
64306535326562343964373931306366613365356335386163303062363663383264353566656438
3838323261303331380a613366306566623531323162373266663863393563323232626565346163
64333835356662643561373062393831303366656138356464326232363235373734663038316336
37313164306565643032373938353434393333653531623635663030613861306663373761336233
65373565653939663832353565656262306633306461316461343735336431393033316433313164
35346363653832386138
fastd_mesh_mac: '02:ff:43:4f:43:20'
fastd_port1: 10012
bat_ipv6: '2a03:2260:1016:0102::1'
bat_ipv4: '10.222.56.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.56.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.56.50'
dhcp_end: '10.222.63.250'
vxlan_id: 10540244
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
36326163616362316539366532373738393861343162346362346165323431306133663066616632
3333633636643530393030353930396165343134313531620a346361656539383935653061643633
36613038613336313137656264663661646233396333396563643664346339356530666231633130
6662326532323239300a653662653264636462353961383437623637636161363430643935326439
37366265376637653531613537346663343364626332343931613462666366643231356335626631
6238633631656139383733333739373733356430343132353330
wireguard_mesh_pub_key: 'qshyUBm3WTO0u+InjrJ5+oTv9xVzRGoOIuZOlC5/e2A='
wireguard_mesh_port: 10013
wireguard_mesh_address: 'fe80::00ff:43ff:fe4f:4302'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:43:4f:43:02'
- name: 'sim'
net4: '10.222.176.0/21'
net6: '2a03:2260:1016:0401::/64'
site_net6: 'fd62:44e1:da:0400::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
36623461376163303538353865656462643537646265393461656337383936363634653063363938
3735616161636231633238323935313861346163636565620a353132303235636662366231393236
30323734313065356132623736633231326537626462366264653138666533633461393830336634
6530666637613164340a663133386134393235636362633833373531323132636138623163656638
34363637623331666335353464366539623936306437356538393034376232346566323431636231
32653236386632656633636438303130323065386635616462666631386361396233303965393332
63333233656336313633303166333638663335363035653230316633303233376131396135373462
34343163616561343163
fastd_mesh_mac: '02:ff:53:49:4d:10'
fastd_port1: 10018
bat_ipv6: '2a03:2260:1016:0401::1'
bat_ipv4: '10.222.176.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.176.0'
dhcp_netmask: '255.255.240.0'
dhcp_start: '10.222.176.50'
dhcp_end: '10.222.183.250'
vxlan_id: 10908477
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
61663530636333343161656664313464306533343934306335653137303463663663386663366463
6538396238616663336633326564386663343531653831650a633230653464636337653431663238
61363635616139643237626462306530313636383962653533626637666162643263323566373439
6632366462303033370a396638303765323939343335383165643739313738366363396566376337
65333237343631613636303639636231363331393262353566623564306330353038343562663464
6335616665613065393164383332633162306137396133343030
wireguard_mesh_pub_key: '3587KYreUmBTyARprP+gRKlM7Uo6HH1JJYR5v9JcMkE='
wireguard_mesh_port: 10019
wireguard_mesh_address: 'fe80::00ff:53ff:fe49:4d01'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:53:49:4d:01'
wireguard_bb_name: 'niyawe2'
wireguard_bb_endpoint: '{{ ansible_host }}'
wireguard_bb_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
35363033353438343934353931643434386239316435326139643133366438646261643261653430
3237666161356635633130306534346132313231373364340a346237316662346261633032323166
38643535303161656664303966333363383636333863623732666636386530343139623130626633
3664376633656634640a333964323139346531346461653836666232303031383036393233366262
38633235623665356136353435313030636135666637663031626333376562613035666539663535
3632363134303966613630613866366562386439313137383336
wireguard_bb_pub_key: 'ctSz9JjaPWM4Se39rSsbr39wXWfA1LJDF1OwwBui0VY='
wireguard_bb_ipv4: '10.222.0.12'
wireguard_bb_ipv6: 'fe80::ffbb:ffbb:12'
wireguard_bb_port: 10112
wireguard_vpn_port: 10010
wireguard_vpn_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
36623962663931636165643834636338373230623438306431316338633765333434626462626636
6330346538316361376531353932666363303431313737640a333931366638326164333937656566
32393639376561396161313365343563383132663338363437376563653930643835303230613336
6232616639643564360a613333666165623036613866383236323335383233376439386463333535
32616431393965313839613264326137633063366530336461643534623833306466653330373666
6364666534323361663937613837313031356262363338386563
wireguard_vpn_address: 'fe80::ce:30ff:fe37:94da'
wireguard_vpn_client_range: '2a03:2260:1016:2000::/52'
tayga_ipv4: 10.2.0.1
tayga_pool: 10.2.0.0/16
ffrl_ip4: '185.66.194.57'
ffrl_peers:
- name: 'bbafra2fra'
remote: '185.66.194.0'
ip4: '100.64.9.155'
peer_ip4: '100.64.9.154'
ip6: '2a03:2260:0:4e2::2'
peer_ip6: '2a03:2260:0:4e2::1'
- name: 'bbbfra2fra'
remote: '185.66.194.1'
ip4: '100.64.9.157'
peer_ip4: '100.64.9.156'
ip6: '2a03:2260:0:4e3::2'
peer_ip6: '2a03:2260:0:4e3::1'
- name: 'bbaixdus'
remote: '185.66.193.0'
ip4: '100.64.9.159'
peer_ip4: '100.64.9.158'
ip6: '2a03:2260:0:4e4::2'
peer_ip6: '2a03:2260:0:4e4::1'
- name: 'bbbixdus'
remote: '185.66.193.1'
ip4: '100.64.9.161'
peer_ip4: '100.64.9.160'
ip6: '2a03:2260:0:4e5::2'
peer_ip6: '2a03:2260:0:4e5::1'
- name: 'bbaakber'
remote: '185.66.195.0'
ip4: '100.64.9.163'
peer_ip4: '100.64.9.162'
ip6: '2a03:2260:0:4e6::2'
peer_ip6: '2a03:2260:0:4e6::1'
- name: 'bbbakber'
remote: '185.66.195.1'
ip4: '100.64.9.165'
peer_ip4: '100.64.9.164'
ip6: '2a03:2260:0:4e7::2'
peer_ip6: '2a03:2260:0:4e7::1'
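
The per-site host vars pair a batman-adv prefix length (`bat_ipv4_cidr`) with a DHCP subnet, netmask, and address range. A minimal sketch of the consistency check these values should satisfy, using only the standard `ipaddress` module (the helper name `dhcp_consistent` is hypothetical, not part of the repo):

```python
import ipaddress

def dhcp_consistent(subnet, netmask, start, end, cidr):
    # Build the network from the DHCP subnet/netmask pair and check that
    # its prefix length matches bat_ipv4_cidr and that the DHCP range
    # lies inside it.
    net = ipaddress.ip_network(f"{subnet}/{netmask}")
    return (net.prefixlen == cidr
            and ipaddress.ip_address(start) in net
            and ipaddress.ip_address(end) in net)

# The 'ems' site values from the host vars: 10.222.200.0/21, range .200.50-.207.250
print(dhcp_consistent('10.222.200.0', '255.255.248.0',
                      '10.222.200.50', '10.222.207.250', 21))  # True
```

A site whose netmask disagrees with `bat_ipv4_cidr` would come back `False`, which makes this a cheap pre-deploy lint for new site entries.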

@@ -0,0 +1,183 @@
---
ansible_host: 2a01:4f8:141:1063:2::2
#ansible_host: 10.0.1.6
sites:
- name: 'ems'
net4: '10.222.200.0/21'
net6: '2a03:2260:1016:0502::/64'
site_net6: 'fd62:44e1:da:0500::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
39643432623937346662666565393066356635346236313562376339373665653837376365326531
3366643661613065303837353830666566356266613036650a383531336266363036366664323439
64636330346166306464353564363266303836666134373739646566306337333666356231616364
3635616561323332340a323665353031653566646562393430666261363834353036663938636634
62363261663531383464646262306237353233346535623235643561633435623939646262313561
30656531313664326663666661636465303239353331356633353238363433336561316264613037
33636239303465623333316561653732653638633632343165383934313738303365633937373038
33396464306363333965
fastd_mesh_mac: '02:ff:45:4d:53:20'
fastd_port1: 10020
bat_ipv6: '2a03:2260:1016:0502::1'
bat_ipv4: '10.222.200.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.200.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.200.50'
dhcp_end: '10.222.207.250'
vxlan_id: 337565
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
64643165393762323161656536383934313365353664373636663937353531383333326164623434
3063356664313437353465346430303233303233343965320a373733326437616163616464356436
36323839353437656539383937333032353233316639363130666238303238623565363664613735
3037313661383930640a346235346661353435633362373861633134396466376631336637663534
34623365386161333230616339326665623535366333373436616633623634636139653766643165
3334653163353965383235356266623566666136663832396461
wireguard_mesh_pub_key: 'bOg54QrGq1DjyVQ13DKNkRYXKSy2bwhy3UM+HfCJPE8='
wireguard_mesh_port: 10021
wireguard_mesh_address: 'fe80::00ff:45ff:fe4d:5302'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:45:4d:53:02'
- name: 'ko'
net4: '10.222.24.0/21'
net6: '2a03:2260:1016:0002::/64'
site_net6: 'fd62:44e1:da::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
32616565386335373931326566326366306138386431303537386639373339306264613665613936
3630343838353631633832393265653666656164623434330a636537666266663835303561393437
61666665666162353665386434646439323730393839643464303237383034303066623731386638
6461303434383162300a303332333031396233383637653737393933636164653833303333633466
39336465616562613838646139303462306131326364356265366564356131343866313164356365
61623137653661633062613334633231633438626234303064363063396437666431353839313764
37313535646131393963353562353862363933373765316531656531353835653231643031383236
39633866633130373430
fastd_mesh_mac: '02:ff:4b:4f:00:20'
fastd_port1: 10010
bat_ipv6: '2a03:2260:1016:0002::1'
bat_ipv4: '10.222.24.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.24.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.24.50'
dhcp_end: '10.222.31.250'
vxlan_id: 10891866
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
63313939383639656138636261363033336636303837303565623733663038646637363261386666
3562656362636434653131623133396134646666633338320a303435636432363333376130626265
66306336363565303433353731646336353764353333383339303865346334636334343231343266
3732316335656636630a623364343866633765653232336363653335613065663639626439656533
65313464663534626566613238666237623562383763316331306463643339636138623166623964
3438626431373233666532623433313337356530346563323838
wireguard_mesh_pub_key: 'Nv+aZ3cD6a9qvsrXipMbVG7kGiXV3e7tb92MTbyXDl4='
wireguard_mesh_port: 10011
wireguard_mesh_address: 'fe80::00ff:4bff:fe4f:2'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:4b:4f:00:02'
- name: 'my'
net4: '10.222.72.0/21'
net6: '2a03:2260:1016:0302::/64'
site_net6: 'fd62:44e1:da:300::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
36356665356465363064623732316337393137633133383133666330353238636432643232383534
3136386561663630633461653132626531666336663962650a363164343264623664316465663264
39336561346634623530636464646261313362383533363336383138663435346265626563646461
3231313735313266610a373663363966303961363039346137353132353864326639343732613032
33626665646364643036633662316234366666303364373434656137666233613030386562643662
37663232306135643461376435653263333834366163663634646164326236643730356135386464
31303439643035643732306162666261393735333334323433306633313635373363636364306663
36396363306537636164
fastd_mesh_mac: '02:ff:4d:59:00:20'
fastd_port1: 10016
bat_ipv6: '2a03:2260:1016:0302::1'
bat_ipv4: '10.222.72.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.72.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.72.50'
dhcp_end: '10.222.79.250'
vxlan_id: 6118532
wireguard_mesh_number: 2
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
30353832633365613063633862383665666263393331323435393138643030393231643438353366
3039393736333564666530346630346130653138316436370a613763333334663731326363653863
39653139326462636531376136306666313537336265636334393831633035613337383464383838
3564356534323262370a393434353238383535363135393734636261633533323462623932366436
64613834363539303233356262373630373264623337356131623939646365653061663831343262
6464393331633661356232323338653137333635396137373636
wireguard_mesh_pub_key: 'pwwP7VxQsVyi/GUSLvyenhHgf71SNKaGwItThTWGHDg='
wireguard_mesh_port: 10017
wireguard_mesh_address: 'fe80::00ff:4dff:fe59:2'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:4d:59:00:02'
wireguard_bb_name: 'niyawe3'
wireguard_bb_endpoint: '{{ ansible_host }}'
wireguard_bb_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
62623537663537643532356163613166336165323463663033303431613136353936383439383036
3935323633386166633632626361383030653762623935630a356536393039303237326662656266
62326135303961636566616532663634343339666464383965343539653365643533383435313465
3435616562633531350a623932313866633065376435313365633062383630303836386361393938
65303266646135623234333935313566383864646337393130663733626331333134653732393264
3432383464363035626331393662343430366664613739306364
wireguard_bb_pub_key: 'zGubrJd9Wfa1Yo9I5xyJArdvX1bj7OS2VFth289PdlU='
wireguard_bb_ipv4: '10.222.0.11'
wireguard_bb_ipv6: 'fe80::ffbb:ffbb:11'
wireguard_bb_port: 10111
wireguard_vpn_port: 10010
wireguard_vpn_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
34313130643739316461343031626565323930303465623238356636636531656630396433383036
6337386336633165636633353139323366323563333464380a393438343365363661633331356438
62326531336666326662323535366463333265313130343430653162646461383230363064366264
6431663833633537660a343830623735633330643935363232366532346664353834623636326462
33393133363464313665623963393534306235653239636438343537366533306166623535663336
3864646261313135386563613637613330343935333636633434
wireguard_vpn_address: 'fe80::7e:adff:fefc:0b8c'
wireguard_vpn_client_range: '2a03:2260:1016:1000::/52'
tayga_ipv4: 10.1.0.1
tayga_pool: 10.1.0.0/16
ffrl_ip4: '185.66.194.56'
ffrl_peers:
- name: 'bbaakber'
remote: '185.66.195.0'
ip4: '100.64.9.99'
peer_ip4: '100.64.9.98'
ip6: '2a03:2260:0:4c6::2'
peer_ip6: '2a03:2260:0:4c6::1'
- name: 'bbafra2fra'
remote: '185.66.194.0'
ip4: '100.64.9.101'
peer_ip4: '100.64.9.100'
ip6: '2a03:2260:0:4c7::2'
peer_ip6: '2a03:2260:0:4c7::1'
- name: 'bbaixdus'
remote: '185.66.193.0'
ip4: '100.64.9.103'
peer_ip4: '100.64.9.102'
ip6: '2a03:2260:0:4c8::2'
peer_ip6: '2a03:2260:0:4c8::1'
- name: 'bbbakber'
remote: '185.66.195.1'
ip4: '100.64.9.105'
peer_ip4: '100.64.9.104'
ip6: '2a03:2260:0:4c9::2'
peer_ip6: '2a03:2260:0:4c9::1'
- name: 'bbbfra2fra'
remote: '185.66.194.1'
ip4: '100.64.9.107'
peer_ip4: '100.64.9.106'
ip6: '2a03:2260:0:4ca::2'
peer_ip6: '2a03:2260:0:4ca::1'
- name: 'bbbixdus'
remote: '185.66.193.1'
ip4: '100.64.9.109'
peer_ip4: '100.64.9.108'
ip6: '2a03:2260:0:4cb::2'
peer_ip6: '2a03:2260:0:4cb::1'

@@ -0,0 +1,182 @@
---
ansible_host: 2001:4c80:50:14::c04
sites:
- name: 'ems'
net4: '10.222.192.0/21'
net6: '2a03:2260:1016:0501::/64'
site_net6: 'fd62:44e1:da:0500::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
64366430303532336538633661343838386537316364613866623134663866643634633436316565
3764303032353633336662613430663961646535353262310a613238643666313033343438666235
36316438366137333430663235303237666132306362616366356439306162633430326366663862
6633353266376537640a623163646437396564666232316530616264346566633032393033616438
31313538363462633865376234363262653861656234333661613139383538643963646436396464
65613834396464613266383936326539623461646661666464623337343834326533303039623665
37386130306432313766306638343561653232656238313734396562653661376131653036353264
63646437393532356338
fastd_mesh_mac: '02:ff:45:4d:53:10'
fastd_port1: 10020
bat_ipv6: '2a03:2260:1016:0501::1'
bat_ipv4: '10.222.192.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.192.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.192.50'
dhcp_end: '10.222.199.250'
vxlan_id: 337565
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
32383031666464633861313732653264663463383036366539366431383066323438663738613265
6339636531646365336462353065633937373836323431610a343432616361646334636338306331
38663662373334653931656633373064613866336231613463303261646261323831623339616537
3933663036616664390a373965633838353535386239343864633435646566393334373637636561
38663566373433356165616535343366623562623464653034653963653235643935346632643533
6665633237376664613030373236396663383461366433303631
wireguard_mesh_pub_key: '97Ih/Gvgwj6W3Dcf0iMFm+DtLlkNEiSwIEwnUwlmMUI='
wireguard_mesh_port: 10021
wireguard_mesh_address: 'fe80::00ff:45ff:fe4d:5301'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:45:4d:53:01'
- name: 'ko'
net4: '10.222.16.0/21'
net6: '2a03:2260:1016:0001::/64'
site_net6: 'fd62:44e1:da::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62303765323237376233313337343961663435363430646565616238356261646133326562363235
6639356166623437646664323236643161353837393330650a613565306362663932383436333635
63663832616334643939623835373731323835326361373266653331346530393462616364343633
3935316666653463370a653038313766383436303862306666356138353838386362363731663631
35313830346562643434393266393039336264663939363433336435653833323038363432623431
31636465666133633538633562323437333836376632343333306332356461663163396232626564
63393432373965323037656437313762383037363534343937303462393736666534653835633433
36656539623732333130
fastd_mesh_mac: '02:ff:4b:4f:00:10'
fastd_port1: 10010
bat_ipv6: '2a03:2260:1016:0001::1'
bat_ipv4: '10.222.16.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.16.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.16.50'
dhcp_end: '10.222.23.250'
vxlan_id: 10891866
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
34656161316639303136656263333135366332393530646366373463356164326466316239303936
3932353863383437636630613562303662326232663131640a393833386164666634633964626138
33336365373833316266353865633930346664613363633235346432326430326233396336316265
3230373439313932360a653139636530383331666265393135653239363936663430623436663566
66333332363636343865663234396134346531633066626138663533333735323837373532636531
3966323936353934633637633965656663333366363634636165
wireguard_mesh_pub_key: 'jEPb55U0LjcVb+3ekAIW2Tmn07AmrBwU9DwJHwWO7i4='
wireguard_mesh_port: 10011
wireguard_mesh_address: 'fe80::00ff:4bff:fe4f:1'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:4b:4f:00:01'
- name: 'my'
net4: '10.222.64.0/21'
net6: '2a03:2260:1016:0301::/64'
site_net6: 'fd62:44e1:da:300::/64'
fastd_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
38333436396361633136336561633864383663666439613335613534336339373366396231646333
6264303364616131313966306438333135353564366134330a353438343861666337646633383534
31366233346663316434316439346639666639653433323363366161313362376262646663396330
6362356563616535640a633130623433316165313238346165376337326364306262643139376130
39326531633631656665346239386133363833623263663162356161333562636437633333643338
32623535323934306164653535633463626234623935653262633739383137326461623731623536
30366431633431363164633833323466616637633135636538656332356434333564386165643736
36303333346530376134
fastd_mesh_mac: '02:ff:4d:59:00:10'
fastd_port1: 10016
bat_ipv6: '2a03:2260:1016:0301::1'
bat_ipv4: '10.222.64.1'
bat_ipv4_cidr: 21
dhcp_subnet: '10.222.64.0'
dhcp_netmask: '255.255.248.0'
dhcp_start: '10.222.64.50'
dhcp_end: '10.222.71.250'
vxlan_id: 6118532
wireguard_mesh_number: 1
wireguard_mesh_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
63656233363539313336616565373830326235316135656535326364386339323762663433336266
6133336162323639663332343466666263653462376533620a623731663765646462663438653762
39376330613036353638356462376165393630393034343265383334616331643632323235376661
3632613063343461340a613637366461663134323738313566386432313233613862376335393732
61616262613936396661623735343131613835643431663935386134643062626430306430346130
6339373236313865653265636463373236316333646565313939
wireguard_mesh_pub_key: '+7I9fQugmzYpTssYZwQaLGwC2PfIElHyPY2iPZ7+NEs='
wireguard_mesh_port: 10017
wireguard_mesh_address: 'fe80::00ff:4dff:fe59:1'
wireguard_mesh_endpoint: '{{ ansible_host }}'
wireguard_mesh_mac: '02:ff:4d:59:00:01'
wireguard_bb_name: 'uniko1'
wireguard_bb_endpoint: '{{ ansible_host }}'
wireguard_bb_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
33323865636533656363643734313137313933353762316661623164616232333730303032613736
6238353532643966316135323861393937623739656636650a343839373332343939316533363230
30333038643766663131316136373264343536343734356139393737303030383436616366336430
3762656635303866310a333930333034613963363562313930663932333237306462663364663762
39306631356330353035386164616164656339316362366366366532373065643034613561323233
6132653032393235336566363561323563666133306639376637
wireguard_bb_pub_key: 'skqPL/XGmezXsF/3L/AO+kVF6XPw8ioGoN5T76Ukc30='
wireguard_bb_ipv4: '10.222.0.13'
wireguard_bb_ipv6: 'fe80::ffbb:ffbb:13'
wireguard_bb_port: 10113
wireguard_vpn_port: 10010
wireguard_vpn_priv_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
32393830323730303332326634336466663262356131323333363936393431613137616462346662
6330386466393666626131303362633065393630323461380a373336633762643238643662663664
62383934616366373663653033353431633535393738393830363464303466313365373833306366
6533353438663861340a636638636265653136326130346133343332376663336161626234343136
39653135633037663766333863333063393635623937323139663063333863643637306630616565
6433343965626635393231646639366663393363363734623333
wireguard_vpn_address: 'fe80::58:c9ff:fe34:9785'
wireguard_vpn_client_range: '2a03:2260:1016:4000::/52'
tayga_ipv4: 10.4.0.1
tayga_pool: 10.4.0.0/16
ffrl_ip4: '185.66.194.104'
ffrl_peers:
- name: 'bbaakber'
remote: '185.66.195.0'
ip4: '100.64.11.81'
peer_ip4: '100.64.11.80'
ip6: '2a03:2260:0:5c1::2'
peer_ip6: '2a03:2260:0:5c1::1'
- name: 'bbafra2fra'
remote: '185.66.194.0'
ip4: '100.64.11.83'
peer_ip4: '100.64.11.82'
ip6: '2a03:2260:0:5c2::2'
peer_ip6: '2a03:2260:0:5c2::1'
- name: 'bbaixdus'
remote: '185.66.193.0'
ip4: '100.64.11.85'
peer_ip4: '100.64.11.84'
ip6: '2a03:2260:0:5c3::2'
peer_ip6: '2a03:2260:0:5c3::1'
- name: 'bbbakber'
remote: '185.66.195.1'
ip4: '100.64.11.87'
peer_ip4: '100.64.11.86'
ip6: '2a03:2260:0:5c4::2'
peer_ip6: '2a03:2260:0:5c4::1'
- name: 'bbbfra2fra'
remote: '185.66.194.1'
ip4: '100.64.11.89'
peer_ip4: '100.64.11.88'
ip6: '2a03:2260:0:5c5::2'
peer_ip6: '2a03:2260:0:5c5::1'
- name: 'bbbixdus'
remote: '185.66.193.1'
ip4: '100.64.11.91'
peer_ip4: '100.64.11.90'
ip6: '2a03:2260:0:5c6::2'
peer_ip6: '2a03:2260:0:5c6::1'

@@ -0,0 +1,5 @@
[fastd]
ff-niyawe2
ff-niyawe3
ff-loppermann1
ff-uniko1

@@ -1,12 +0,0 @@
[new_fastds]
123.123.123.123 arch_hostname=fastd
[fastds]
fastd
[all:vars]
hetzner_webservice_username=<hetzner_webservice_username>
hetzner_webservice_password=<hetzner_webservice_password>
rescue_authorized_key=<fingerprint of ssh key to use in rescue mode>
authorized_keys=["<key1>", "<key2>"]
aur_user=yaourt

@@ -1 +0,0 @@
external_modules/ansible-aur/aur

@@ -1 +0,0 @@
Subproject commit 04eec3e0afdf31d09ffa79067b75e6b05c78fd61

@@ -0,0 +1,19 @@
---
- name: add aurto repo (1/3)
ansible.builtin.lineinfile:
path: /etc/pacman.conf
line: "[aurto]"
- name: add aurto repo (2/3)
ansible.builtin.lineinfile:
path: /etc/pacman.conf
line: "SigLevel = Optional TrustAll"
- name: add aurto repo (3/3)
ansible.builtin.lineinfile:
path: /etc/pacman.conf
line: "Server = https://aur.niyawe.de/"
- name: update pacman cache
pacman:
update_cache: yes
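
The three `lineinfile` tasks above append an INI-style repository section to `/etc/pacman.conf`. As a sanity check, the fragment they produce parses cleanly with `configparser` (used here only for illustration; pacman does the real parsing):

```python
import configparser

# The exact three lines the tasks above add to /etc/pacman.conf.
fragment = (
    "[aurto]\n"
    "SigLevel = Optional TrustAll\n"
    "Server = https://aur.niyawe.de/\n"
)

cfg = configparser.ConfigParser()
cfg.read_string(fragment)
print(cfg["aurto"]["Server"])  # https://aur.niyawe.de/
```

Since `lineinfile` only guarantees each line's presence, not its position, the three tasks rely on running in order against a file that does not yet contain any of them.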

@@ -1,32 +0,0 @@
# Generated by ip6tables-save v1.4.21 on Mon Feb 22 00:25:52 2016
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LOGGING - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmpv6 -j ACCEPT
# SSH-Server
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# dns
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
# http
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# ntp
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
# munin
-A INPUT -p tcp -m tcp --dport 4949 -j ACCEPT
# fastd
-A INPUT -p udp -m udp --dport 10000 -j ACCEPT
# MOSH
-A INPUT -p udp -m udp --dport 60000:61000 -j ACCEPT
# LOG
-A INPUT -j LOGGING
-A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IP6Tables-Dropped: " --log-level 4
-A LOGGING -j DROP
-A FORWARD -i bat0 -p udp --dport 10000 -j REJECT
COMMIT
# Completed on Mon Feb 22 00:25:52 2016

@@ -1,56 +0,0 @@
# Generated by iptables-save v1.4.21 on Tue Sep 8 21:44:08 2015
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -i bat0 -j MARK --set-xmark 0x1/0xffffffff
COMMIT
# Completed on Tue Sep 8 21:44:08 2015
# Generated by iptables-save v1.4.21 on Tue Sep 8 21:44:08 2015
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LOGGING - [0:0]
-A INPUT -s 127.0.0.1/32 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
# SSH-Server
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# dns
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
#dhcp
-I INPUT -i bat0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT
# http
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
# ntp
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
# munin
-A INPUT -p tcp -m tcp --dport 4949 -j ACCEPT
# iperf
-A INPUT -i bat0 -p tcp -m tcp --dport 5001 -j ACCEPT
# fastd
-A INPUT -p udp -m udp --dport 10000 -j ACCEPT
# MOSH
-A INPUT -p udp -m udp --dport 60000:61000 -j ACCEPT
# LOG
-A INPUT -j LOGGING
-A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
-A LOGGING -j DROP
-A FORWARD -i bat0 -p udp --dport 10000 -j REJECT
COMMIT
# Completed on Tue Sep 8 21:44:08 2015
# Generated by iptables-save v1.4.21 on Tue Sep 8 21:44:08 2015
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o mullvad -j MASQUERADE
COMMIT
# Completed on Tue Sep 8 21:44:08 2015

@@ -1,6 +1,6 @@
---
- name: copy iptables.rules
copy:
template:
src: iptables.rules
dest: /etc/iptables/iptables.rules
notify: reload iptables
@@ -12,7 +12,7 @@
state: started
- name: copy ip6tables.rules
copy:
template:
src: ip6tables.rules
dest: /etc/iptables/ip6tables.rules
notify: reload ip6tables

@@ -0,0 +1,94 @@
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% for site in sites %}
-A PREROUTING -i wg{{ site.name }} -p udp -m udp --dport 8472 -j NOTRACK
{% endfor %}
{% for site in sites %}
-A OUTPUT -o wg{{ site.name }} -p udp -m udp --dport 8472 -j NOTRACK
{% endfor %}
COMMIT
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
{% for site in sites %}
-A PREROUTING -i bat{{ site.name }} -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
{% for peer in groups['fastd'] | difference([inventory_hostname]) %}
-A PREROUTING -i bb{{ hostvars[peer]['wireguard_bb_name'] }} ! -s fe80::/64 ! -d fe80::/64 -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
{% for peer in wireguard_bb_peers %}
-A PREROUTING -i bb{{ peer.name }} ! -s fe80::/64 ! -d fe80::/64 -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmpv6 -j ACCEPT
# SSH-Server
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# munin
-A INPUT -p tcp -m tcp --dport 4949 -j ACCEPT
# iperf3
-A INPUT -p tcp -m tcp -s 2a03:2260:1016::/48 --dport 5201 -j ACCEPT
# dns
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
# http
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# ntp
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
# fastd / wg
-A INPUT -s 2a03:2260:1016::/48 -p udp -m udp --dport 10010:10023 -j DROP
-A INPUT -p udp -m udp --dport 10010:10023 -j ACCEPT
# respondd
-A INPUT -i bat+ -p udp -m udp --dport 1001 -j ACCEPT
# wg_prefix_provider
-A INPUT -i wgmyk -s fe80::/64 -p tcp -m tcp --dport 9999 -j ACCEPT
# wireguard_mesh
{% for site in sites %}
-A INPUT -s 2a03:2260:1016::/48 -p udp -m udp --dport {{ site.wireguard_mesh_port }} -j DROP
-A INPUT -p udp -m udp --dport {{ site.wireguard_mesh_port }} -j ACCEPT
-A INPUT -i wg{{ site.name }} -p udp --dport 8472 -j ACCEPT
{% endfor %}
# wireguard_backbone
{% for peer in groups['fastd'] | difference([inventory_hostname]) %}
-A INPUT -i bb{{ hostvars[peer]['wireguard_bb_name'] }} -p udp --dport 6696 -j ACCEPT
-A INPUT -p udp --dport {{ hostvars[peer]['wireguard_bb_port'] }} -j ACCEPT
{% endfor %}
{% for peer in wireguard_bb_peers|default([]) %}
-A INPUT -i bb{{ peer.name }} -p udp --dport 6696 -j ACCEPT
-A INPUT -p udp --dport {{ peer.port }} -j ACCEPT
{% endfor %}
# MOSH
-A INPUT -p udp -m udp --dport 60000:61000 -j ACCEPT
# ffrl bgp
{% for peer in ffrl_peers %}
-A INPUT -i {{ peer.name }} -p tcp -m tcp --dport 179 -j ACCEPT
{% endfor %}
# LOG
-A INPUT -m limit --limit 2/min -j LOG --log-prefix "IP6Tables-Dropped input: " --log-level 4
{% for site in sites %}
-A FORWARD -i bat{{ site.name }} -p udp --dport 10010:10021 -j REJECT
{% endfor %}
-A FORWARD -o {{ ansible_default_ipv6.interface }} -j REJECT
-A FORWARD -d 2a03:2260:1016::/48 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -s 2a03:2260:1016::/48 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
COMMIT
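
In the template above, the `{% for site in sites %}` loops expand to one rule per `wg<site>` interface. A sketch of what the raw-table loop renders to, using plain string formatting in place of Jinja2 (the site names are taken from the host vars earlier in this diff):

```python
# The raw-table loop emits one NOTRACK rule per wg<site> interface so
# that the VXLAN transport traffic (UDP 8472) bypasses connection
# tracking entirely -- it is high-volume and needs no state.
sites = ["ems", "ko", "my"]  # site names from the host vars above
rules = [f"-A PREROUTING -i wg{s} -p udp -m udp --dport 8472 -j NOTRACK"
         for s in sites]
print(rules[0])
```

Skipping conntrack for the mesh transport keeps the tables sized in the `net.netfilter.nf_conntrack_max` sysctl from filling with untrackable flows.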

@@ -0,0 +1,82 @@
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
{% for site in sites %}
-A PREROUTING -i bat{{ site.name }} -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
{% for peer in groups['fastd'] | difference([inventory_hostname]) %}
-A PREROUTING -i bb{{ hostvars[peer]['wireguard_bb_name'] }} -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
{% for peer in wireguard_bb_peers %}
-A PREROUTING -i bb{{ peer.name }} -j MARK --set-xmark 0x1/0xffffffff
{% endfor %}
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -s 127.0.0.1/32 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
# SSH-Server
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# iperf3
-A INPUT -p tcp -m tcp -s 10.222.0.0/16 --dport 5201 -j ACCEPT
# dns
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
#dhcp
{% for site in sites %}
-I INPUT -i bat{{ site.name }} -p udp --dport 67:68 --sport 67:68 -j ACCEPT
{% endfor %}
# http
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# ntp
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
# fastd / wg
-A INPUT -s 10.222.0.0/16 -p udp -m udp --dport 10010:10023 -j DROP
-A INPUT -p udp -m udp --dport 10010:10023 -j ACCEPT
# wireguard_mesh
{% for site in sites %}
-A INPUT -s 10.222.0.0/16 -p udp -m udp --dport {{ site.wireguard_mesh_port }} -j DROP
-A INPUT -p udp -m udp --dport {{ site.wireguard_mesh_port }} -j ACCEPT
{% endfor %}
# MOSH
-A INPUT -p udp -m udp --dport 60000:61000 -j ACCEPT
# ffrl-gre
{% for peer in ffrl_peers %}
-A INPUT -p gre -s {{ peer.remote }} -j ACCEPT
{% endfor %}
# ffrl bgp
{% for peer in ffrl_peers %}
-A INPUT -i {{ peer.name }} -p tcp -m tcp --dport 179 -j ACCEPT
{% endfor %}
-A INPUT -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped input: " --log-level 4
{% for site in sites %}
-A FORWARD -i bat{{ site.name }} -p udp --dport 10010:10023 -j REJECT
{% endfor %}
-A FORWARD -o {{ ansible_default_ipv4.interface }} -j REJECT
-A FORWARD -d 10.222.0.0/16 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
-A FORWARD -s 10.222.0.0/16 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
{% if ffrl_ip4 is defined %}
{% for peer in ffrl_peers %}
-A POSTROUTING ! -s {{ ffrl_ip4 }} -o {{ peer.name }} -j SNAT --to-source {{ ffrl_ip4 }}
{% endfor %}
{% endif %}
COMMIT

@@ -5,3 +5,10 @@
regexp: '^#?Storage='
line: 'Storage=volatile'
notify: restart systemd-journald
- name: save log for max 1 day
lineinfile:
path: /etc/systemd/journald.conf
regexp: '^#?MaxRetentionSec='
line: 'MaxRetentionSec=1day'
notify: restart systemd-journald

@@ -0,0 +1,15 @@
[Unit]
Description=Sets up IP rules and static routes
ConditionPathExists=/usr/local/bin/ffmyk-iproute.sh
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStart=/usr/local/bin/ffmyk-iproute.sh
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,26 @@
#!/bin/bash
# Everything marked with 0x1 belongs to the ffmyk table
ip -4 rule add from all fwmark 0x1 table ffmyk priority 10
ip -6 rule add from all fwmark 0x1 table ffmyk priority 10
ip -4 rule add iif nat64 table ffmyk priority 10
ip -6 rule add iif nat64 table ffmyk priority 10
ip -4 rule add to 10.1.0.0/16 table ffmyk priority 10
ip -4 rule add to 10.2.0.0/16 table ffmyk priority 10
ip -4 rule add to 10.3.0.0/16 table ffmyk priority 10
# Everything with a Freifunk IP - wherever it comes from - belongs to the ffmyk table
ip -4 rule add to 10.222.1.0/24 table ffmyk priority 10
ip -4 rule add to 10.222.2.0/23 table ffmyk priority 10
ip -4 rule add to 10.222.4.0/22 table ffmyk priority 10
ip -4 rule add to 10.222.8.0/21 table ffmyk priority 10
ip -4 rule add to 10.222.16.0/20 table ffmyk priority 10
ip -4 rule add to 10.222.32.0/19 table ffmyk priority 10
ip -4 rule add to 10.222.64.0/18 table ffmyk priority 10
ip -4 rule add to 10.222.128.0/17 table ffmyk priority 10
ip -6 rule add to 2001:470:cd45:ff00::/56 table ffmyk priority 10
ip -6 rule add to 2a03:2260:1016::/48 table ffmyk priority 10
ip -6 rule add to 64:ff9b::/96 table ffmyk priority 10
ip -6 rule add to fd62:44e1:da::/48 table ffmyk priority 10
ip -4 rule add from all iif nat64 type unreachable priority 200
ip -6 rule add from all iif nat64 type unreachable priority 200
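
The block of `10.222.*` rules above looks hand-built, but it is exactly `10.222.0.0/16` with `10.222.0.0/24` carved out (the backbone addresses such as `wireguard_bb_ipv4` live in `10.222.0.x`; reading that /24 as the deliberately excluded range is an assumption). A sketch verifying this with `ipaddress.address_exclude`:

```python
import ipaddress

# Removing 10.222.0.0/24 from 10.222.0.0/16 yields exactly the eight
# prefixes listed in the script above, from 10.222.1.0/24 up to
# 10.222.128.0/17.
whole = ipaddress.ip_network("10.222.0.0/16")
hole = ipaddress.ip_network("10.222.0.0/24")
covered = sorted(whole.address_exclude(hole))
print([str(n) for n in covered])
```

This makes it easy to regenerate the rule list if the excluded range ever changes, instead of editing eight prefixes by hand.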

@@ -0,0 +1,38 @@
---
- name: name ffmyk routing table
lineinfile:
path: /etc/iproute2/rt_tables
line: 42 ffmyk
- name: copy ffmyk iproute config script
copy:
src: ffmyk-iproute.sh
dest: /usr/local/bin/ffmyk-iproute.sh
mode: 0744
- name: copy site specific iproute up config script
template:
src: ffmyk-iproute-up.j2
dest: /usr/local/bin/ffmyk-iproute{{ item.name }}-up.sh
mode: 0744
with_items: "{{ sites }}"
- name: copy site specific iproute down config script
template:
src: ffmyk-iproute-down.j2
dest: /usr/local/bin/ffmyk-iproute{{ item.name }}-down.sh
mode: 0744
with_items: "{{ sites }}"
- name: copy ffmyk iproute systemd service
copy:
src: ffmyk-iproute.service
dest: /etc/systemd/system/ffmyk-iproute.service
mode: 0444
- name: start and enable ffmyk iproute service
systemd:
name: ffmyk-iproute.service
daemon_reload: yes
enabled: yes
state: started

@@ -0,0 +1,20 @@
#!/bin/bash
{% if item.net4 is defined %}
ip -4 route del {{item.net4 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}
{% if item.net6 is defined %}
ip -6 route del {{item.net6 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}
{% if item.site_net6 is defined %}
ip -6 route del {{item.site_net6 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}
ip -4 rule del iif bat{{ item.name }} table ffmyk
ip -6 rule del iif bat{{ item.name }} table ffmyk
{% if item.net4 is defined %}
ip -4 rule del from {{ item.net4 }} table ffmyk
{% endif %}
{% if item.net6 is defined %}
ip -6 rule del from {{ item.net6 }} table ffmyk
{% endif %}

@@ -0,0 +1,23 @@
#!/bin/bash
ip -4 rule add iif bat{{ item.name }} table ffmyk priority 10
ip -6 rule add iif bat{{ item.name }} table ffmyk priority 10
{% if item.net4 is defined %}
ip -4 rule add from {{ item.net4 }} table ffmyk priority 10
{% endif %}
{% if item.net6 is defined %}
ip -6 rule add from {{ item.net6 }} table ffmyk priority 10
{% endif %}
ip -4 rule add from all iif bat{{ item.name }} type unreachable priority 200
ip -6 rule add from all iif bat{{ item.name }} type unreachable priority 200
{% if item.net4 is defined %}
ip -4 route replace {{item.net4 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}
{% if item.net6 is defined %}
ip -6 route replace {{item.net6 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}
{% if item.site_net6 is defined %}
ip -6 route replace {{item.site_net6 }} dev bat{{ item.name }} proto static table ffmyk
{% endif %}

@@ -1,10 +1,32 @@
net.ipv4.ip_forward=1
#net.ipv6.conf.all.forwarding=1
net.ipv6.neigh.default.gc_thresh3=4096
net.ipv6.neigh.default.gc_thresh2=2048
net.ipv6.neigh.default.gc_thresh1=1024
# Otherwise ICMP error packets leave via eth0 - with source IP 10.222.x.y...
# https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
net.ipv4.icmp_errors_use_inbound_ifaddr = 1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.neigh.default.gc_thresh2=4096
net.ipv4.neigh.default.gc_thresh1=2048
net.ipv4.neigh.default.gc_interval=3600
net.ipv4.neigh.default.gc_stale_time=3600
net.ipv6.neigh.default.gc_thresh3=8192
net.ipv6.neigh.default.gc_thresh2=4096
net.ipv6.neigh.default.gc_thresh1=2048
net.ipv6.neigh.default.gc_interval=3600
net.ipv6.neigh.default.gc_stale_time=3600
# decrease nf_conntrack_tcp_timeout_established - default=432000
net.netfilter.nf_conntrack_max=1048576
net.netfilter.nf_conntrack_buckets=131072
net.netfilter.nf_conntrack_tcp_timeout_established=3600
# reboot after kernel panic
kernel.panic=1

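The conntrack limits above are sized together: 1048576 maximum entries over 131072 hash buckets keeps the average hash-chain length at 8, a common target. A trivial check of that ratio:

```python
# Sanity check of the conntrack sizing set in the sysctl hunk above:
# average chain length = max entries / hash buckets.
nf_conntrack_max = 1048576
nf_conntrack_buckets = 131072
chain_length = nf_conntrack_max // nf_conntrack_buckets
print(chain_length)
```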
@@ -1,4 +1,14 @@
---
- name: load nf_conntrack kernel module at boot
copy:
src: modules-load.d_nf_conntrack.conf
dest: /etc/modules-load.d/nf_conntrack.conf
- name: load nf_conntrack kernel module
modprobe:
name: nf_conntrack
state: present
- name: touch sysctl.conf
copy:
content: ''

@@ -0,0 +1,13 @@
set tabstop=4
set shiftwidth=4
"set expandtab
set number
set autoindent
set laststatus=2
syntax on
colorscheme darkblue
set nocompatible " be iMproved
filetype off " required!
filetype plugin indent on " required!

@@ -4,18 +4,18 @@
update_cache: yes
- name: install packages for admins
pacman:
name: '{{ item }}'
name:
- bash-completion
- bridge-utils
- htop
- mosh
- nload
- rxvt-unicode-terminfo
- screen
- tmux
- vim
- tcpdump
state: present
with_items:
- bash-completion
- bridge-utils
- htop
- mosh
- nload
- rxvt-unicode-terminfo
- screen
- tmux
- vim
- name: create bash_profile
lineinfile:
@@ -27,3 +27,8 @@
copy:
src: bashrc
dest: /root/.bashrc
- name: copy vimrc
copy:
src: vimrc
dest: /root/.vimrc

@@ -0,0 +1,5 @@
---
- name: restart babeld
systemd:
name: babeld.service
state: restarted

@@ -0,0 +1,18 @@
---
- name: install babeld
pacman:
name: babeld
state: present
- name: babeld.conf
template:
src: babeld.conf.j2
dest: /etc/babeld.conf
mode: 0640
notify: restart babeld
- name: start and enable babeld service
systemd:
name: babeld.service
enabled: yes
state: started

@@ -0,0 +1,48 @@
# Configuration for babeld. See the man page babeld(8) for
# details on the configuration format.
# Works on Linux > 3.11
ipv6-subtrees true
# You must provide at least one interface for babeld to operate on.
{% for peer in groups['fastd'] | difference([inventory_hostname]) %}
interface bb{{ hostvars[peer]['wireguard_bb_name'] }}
{% endfor %}
{% for peer in wireguard_bb_peers|default([]) %}
interface bb{{ peer.name }}
{% endfor %}
# Global options you might want to set. There are many more, see the man page.
#debug 1
local-port 33123
#diversity true
random-id true
default type tunnel rtt-min 1 rtt-max 25 max-rtt-penalty 128
smoothing-half-life 10
export-table 42
import-table 42
reflect-kernel-metric true
# Filtering rules.
in ip 10.0.0.0/8 allow
in ip 2a03:2260:1016::/48 allow
in ip 2003:46:e028::/48 allow # finzelberg
in ip fd62:44e1:da::/48 allow
{% if ffrl_ip4 is defined %}
in deny # ignore default routes on uplinks
{% endif %}
{% for peer in ffrl_peers %}
redistribute if {{ peer.name }} metric 128
{% endfor %}
# Only redistribute addresses from a given prefix, to avoid redistributing
# all local addresses
redistribute ip 10.0.0.0/8 allow
redistribute ip 2a03:2260:1016::/48 allow
redistribute ip 64:ff9b::/96 allow
redistribute ip 2003:46:e028::/48 allow # finzelberg
redistribute ip fd62:44e1:da::/48 allow
redistribute local deny

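The babeld filter above only accepts and redistributes routes that fall inside the listed prefixes; everything else (notably default routes learned on uplinks) hits a deny. A small sketch of that membership test with Python's `ipaddress` module, with the prefix list copied from the config:

```python
import ipaddress

# Prefixes taken from the "in ip ... allow" lines of the babeld config above.
allowed = [ipaddress.ip_network(p) for p in (
    "10.0.0.0/8",
    "2a03:2260:1016::/48",
    "2003:46:e028::/48",
    "fd62:44e1:da::/48",
)]

def accepted(route):
    """True when the announced route lies inside one of the allowed prefixes."""
    net = ipaddress.ip_network(route)
    return any(net.version == a.version and net.subnet_of(a) for a in allowed)

print(accepted("10.222.8.0/21"))  # inside 10.0.0.0/8
print(accepted("0.0.0.0/0"))      # default route: no match, hits the final deny
```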
@@ -11,6 +11,16 @@
owner: named
group: named
- name: create systemd-folder
file:
path: /etc/systemd/system/named.service.d
state: directory
- name: bind ip override
template:
src: ipv6.conf.j2
dest: /etc/systemd/system/named.service.d/ipv6.conf
- name: bind config
template:
src: named.conf.j2

@@ -0,0 +1,2 @@
[Service]
ExecStartPre=/usr/bin/ip addr replace {{ dns_ip }}/128 dev lo

@@ -4,27 +4,39 @@ options {
directory "/var/named";
pid-file "/run/named/named.pid";
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
dnssec-validation auto;
auth-nxdomain no; # conform to RFC1035
listen-on-v6 { {{ bat0_ipv6 }}; };
listen-on port 53 { 127.0.0.1; {{ bat0_ipv4 }}; };
allow-recursion { 127.0.0.1; 10.222.0.0/16; 2a01:198:70a:ff::/64; };
listen-on-v6 {
2a03:2260:1016::53;
{% for site in sites %}
{{ site.bat_ipv6 }};
{% endfor %}
};
listen-on port 53 {
127.0.0.1;
{% for site in sites %}
{{ site.bat_ipv4 }};
{% endfor %}
};
allow-recursion { 127.0.0.1; 10.222.0.0/16; fd62:44e1:da::/48; 2001:470:cd45:ff00::/56; 2a03:2260:1016::/48; fe80::/64; };
allow-transfer { none; };
allow-update { none; };
//forwarders {
// 85.214.20.141;
// 213.73.91.35;
//};
version none;
hostname none;
server-id none;
//dns64 64:ff9b::/96 {
// clients { any; };
//};
max-cache-size 1024M;
};
statistics-channels {
inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};
zone "localhost" IN {
@@ -52,27 +64,51 @@ zone "0.in-addr.arpa" IN {
file "empty.zone";
};
zone "." IN {
type hint;
file "root.hint";
zone "ffaw" IN {
type slave;
file "bak/ffaw.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};
zone "ffcoc" IN {
type slave;
file "bak/ffcoc.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};
zone "ffems" IN {
type slave;
file "bak/ffems.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};
zone "ffko" IN {
type slave;
file "bak/ffko.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};
zone "ffmy" IN {
type slave;
file "bak/ffmy.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};
zone "ffmyk" IN {
type slave;
file "bak/ffmyk.zone";
allow-query { any; };
masters { 10.222.100.1; };
masters { 2a01:4f8:a0:6396:1::17; };
};
//logging {
// channel xfer-log {
// file "/var/log/named.log";
// print-category yes;
// print-severity yes;
// severity info;
// };
// category xfer-in { xfer-log; };
// category xfer-out { xfer-log; };
// category notify { xfer-log; };
//};
zone "ffsim" IN {
type slave;
file "bak/ffsim.zone";
allow-query { any; };
masters { 2a01:4f8:a0:6396:1::17; };
};

@@ -4,24 +4,25 @@
name: dhcp
state: present
- name: create dhcp file for static ips
copy:
content: ''
dest: /etc/dhcpd.hosts.conf
force: no
- name: copy fastd-services-api.php
copy:
src: fastd-services-api.php
dest: /etc/fastd-services-api.php
- name: setup cronjob for fastd-services-api
cron:
name: fastd-services-api
minute: '*/10'
user: root
cron_file: fastd-api
job: '/usr/bin/php /etc/fastd-services-api.php'
#- name: create dhcp file for static ips
# copy:
# content: ''
# dest: /etc/dhcpd.hosts{{ item.name }}.conf
# force: no
# with_items: "{{ sites }}"
#
#- name: copy fastd-services-api.php
# copy:
# src: fastd-services-api.php
# dest: /etc/fastd-services-api.php
#
#- name: setup cronjob for fastd-services-api
# cron:
# name: fastd-services-api
# minute: '*/10'
# user: root
# cron_file: fastd-api
# job: '/usr/bin/php /etc/fastd-services-api.php'
- name: dhcpd.conf
template:

@@ -5,14 +5,16 @@ authoritative;
log-facility local7;
subnet 10.222.0.0 netmask 255.255.0.0 {
range {{ dhcp_start }} {{ dhcp_end }};
{% for site in sites %}
subnet {{ site.dhcp_subnet }} netmask {{ site.dhcp_netmask }} {
range {{ site.dhcp_start }} {{ site.dhcp_end }};
option routers {{ bat0_ipv4 }};
option domain-name-servers {{ bat0_ipv4 }};
option routers {{ site.bat_ipv4 }};
option domain-name-servers {{ site.bat_ipv4 }};
}
{% endfor %}
subnet {{ ansible_default_ipv4['address'] }} netmask 255.255.255.255 {
}
include "/etc/dhcpd.hosts.conf";
#include "/etc/dhcpd.hosts.conf";

@@ -1,2 +0,0 @@
key "d78c8c9b2977f732cdd00d2d4b557cfb5de1438897d33b9ec04037512dd11d6a";
remote "fastd1.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "03cb2b87af657dfc4a434c5dfe3234e947571ca5a8d114d24e0e9f9861eff558";
remote "fastd10.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "c5ddbdc98a9aa8eb4fc684571c23eabaefd6ef63b8cb9d3a31a2cd6e656c47f9";
remote "fastd11.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "d47e917875f145a27a3ef10e29bf011c1f89ab4ea313c4bd0d8bac07ffacf557";
remote "fastd12.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "2895322d66ba7aaa0daf779d795a2a44255d1d14bea639e1267149f466602fce";
remote "fastd13.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "22e08f6e9c72e77041aa635d380e03069cfe193d9f5a0551ff2188677d15d5c0";
remote "fastd14.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "78605f4cc687a1a5c2a1cbbacb6310bb4dc2546e605a1f2852aabea5e2dbecbb";
remote "fastd15.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "f753af06aff1e765a0601c21343965cd3a9abd91f98a76867589e742c041a550";
remote "fastd2.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "70a561adcea747e4758376222cddf7d43db43fac55b43e3840b6e3bc5042b170";
remote "fastd3.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "30e707472d8eed4397295554764846f309a4b046ba628d24f2acee79543d671c";
remote "fastd4.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "c785f8d8f59b75ffbec7eb417e1971dc5a123ff3507e3121352102fdea646e89";
remote "fastd5.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "c40b725a5118b7c37f76b562461db160b1c99495f1df254067de2b5772831d22";
remote "fastd6.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "72dbb9f07c272e6cfba07ebc3e318cc66e7d6e7583d6aa27fdd0445cf1bea2d8";
remote "fastd7.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "66744cda306b1087753a57a727c79a934c872e7221ec6a28ff41e3a316eff0ab";
remote "fastd8.services.freifunk-myk.de":10000;

@@ -1,2 +0,0 @@
key "a8a79387ffa4370c6ae322d99aeb5b8b82f5580ce8dfe5726e0d161a7894a6ed";
remote "fastd9.services.freifunk-myk.de":10000;

@@ -0,0 +1,7 @@
#!/bin/sh
for file in /run/ff*.socket
do
echo "$file"
nc -U "$file" | jq -r '.peers | keys[] as $k | "\(.[$k] | .connection.mac_addresses[]) \(.[$k] | .address) \($k)"' | grep "$1"
done

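The jq filter in the script above pulls MAC address, endpoint address and peer key out of fastd's status-socket JSON. An equivalent sketch in Python, over a synthetic status document (structure only; all values are invented):

```python
import json

# Synthetic example of the JSON that fastd serves on its status socket.
status = json.loads("""{
  "peers": {
    "abcd1234": {
      "address": "192.0.2.10:10000",
      "connection": {"mac_addresses": ["aa:bb:cc:dd:ee:ff"]}
    }
  }
}""")

lines = []
for key, peer in status["peers"].items():
    for mac in peer["connection"]["mac_addresses"]:
        lines.append(f"{mac} {peer['address']} {key}")
print("\n".join(lines))
```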
@@ -0,0 +1,6 @@
#!/bin/sh
if grep -q "$PEER_KEY" /etc/fastd_blacklist; then
echo "$PEER_KEY blacklisted"
exit 1
fi
exit 0

@@ -4,7 +4,37 @@
name: fastd@ffmyk.service
state: reloaded
- name: restart fastd
- name: restart fastdaw
systemd:
name: fastd@ffmyk.service
name: fastd@ffaw.service
state: restarted
- name: restart fastdcoc
systemd:
name: fastd@ffcoc.service
state: restarted
- name: restart fastdems
systemd:
name: fastd@ffems.service
state: restarted
- name: restart fastdko
systemd:
name: fastd@ffko.service
state: restarted
- name: restart fastdmy
systemd:
name: fastd@ffmy.service
state: restarted
- name: restart fastdsim
systemd:
name: fastd@ffsim.service
state: restarted
- name: restart fastdww
systemd:
name: fastd@ffww.service
state: restarted

@@ -1,82 +1,61 @@
---
- name: install fastd
become: yes
become_user: '{{ aur_user }}'
aur:
pacman:
name: fastd
tool: yaourt
state: present
- name: create ffmyk folder
- name: create site folder
file:
path: /etc/fastd/ffmyk
path: /etc/fastd/ff{{ item.name }}
state: directory
with_items: "{{ sites }}"
- name: fastd.conf
template:
src: fastd.conf.j2
dest: /etc/fastd/ffmyk/fastd.conf
dest: /etc/fastd/ff{{ item.name }}/fastd.conf
mode: 0640
notify: restart fastd
- name: create backbone folder
file:
path: /etc/fastd/ffmyk/backbone
state: directory
- name: add backbone peers
copy:
src: '{{ item }}'
dest: /etc/fastd/ffmyk/backbone/{{ item }}
with_items:
- fastd1
- fastd2
- fastd3
- fastd4
- fastd5
- fastd6
- fastd7
- fastd8
- fastd9
- fastd10
- fastd11
- fastd12
- fastd13
- fastd14
- fastd15
notify: reload fastd
notify: restart fastd{{ item.name }}
with_items: "{{ sites }}"
- name: add fastd bin folder
file:
path: /etc/fastd/ffmyk/bin
path: /etc/fastd/ff{{ item.name }}/bin
state: directory
with_items: "{{ sites }}"
- name: add fastd up script
template:
src: fastd_up.sh.j2
dest: /etc/fastd/ffmyk/bin/up.sh
dest: /etc/fastd/ff{{ item.name }}/bin/up.sh
mode: 0744
notify: restart fastd
notify: restart fastd{{ item.name }}
with_items: "{{ sites }}"
- name: add fastd peers folder
file:
path: /etc/fastd/ffmyk/peers
state: directory
- name: add fastd verify script
copy:
src: verify.sh
dest: /etc/fastd/ff{{ item.name }}/bin/verify.sh
mode: 0744
with_items: "{{ sites }}"
- name: add fastd peer api script
- name: add fastd_grep script
copy:
src: fastd-api.php
dest: /etc/fastd/ffmyk/bin/fastd-api.php
src: fastd_grep.sh
dest: /root/fastd_grep.sh
mode: 0744
- name: install fastd_grep dependencies
pacman:
name:
- openbsd-netcat
- jq
state: present
- name: setup cronjob for fastd-api
cron:
name: fastd-api
minute: '*/10'
user: root
cron_file: fastd-api
job: '/usr/bin/php /etc/fastd/ffmyk/bin/fastd-api.php'
- name: start and enable fastd service
systemd:
name: fastd@ffmyk.service
name: fastd@ff{{ item.name }}.service
enabled: yes
state: started
with_items: "{{ sites }}"

@@ -2,7 +2,7 @@
<?php
//$url = 'http://register.freifunk-myk.de/srvapi.php';
$url = 'https://www.freifunk-myk.de/node/keys';
$out = '/etc/fastd/ffmyk/peers/';
$out = '/etc/fastd/ff{{ item.name }}/peers/';
if(!is_dir($out)) die('Output Dir missing');
if(!is_writable($out)) die('Output Dir perms');

@@ -1,18 +1,13 @@
log to syslog level info;
interface "ffmyk-mesh-vpn";
interface "vpn{{ item.name }}";
method "salsa2012+gmac";
method "salsa2012+umac";
secure handshakes yes;
bind any:10000;
bind any:{{ item.fastd_port1 }};
hide ip addresses yes;
hide mac addresses yes;
mtu 1280;
peer group "clients" {
include peers from "peers";
peer limit {{ fastd_peer_limit }};
}
include peers from "backbone";
secret "{{ fastd_secret }}";
on up "/etc/fastd/ffmyk/bin/up.sh $INTERFACE";
status socket "/run/ffmyk.socket";
secret "{{ item.fastd_secret }}";
on up "/etc/fastd/ff{{ item.name }}/bin/up.sh $INTERFACE";
status socket "/run/ff{{ item.name }}1.socket";
on verify "/etc/fastd/ff{{ item.name }}/bin/verify.sh";

@@ -1,11 +1,11 @@
#!/bin/bash
ip link set address {{ fastd_mesh_mac }} dev $1
ip link set address {{ item.fastd_mesh_mac }} dev $1
ip link set up dev $1
batctl -m bat0 if add $1
batctl -m bat0 gw server 1000000/1000000
batctl -m bat0 it 10000
batctl -m bat0 mm 1
echo 128 > /sys/class/net/bat0/mesh/hop_penalty
netctl start bat0
batctl meshif bat{{ item.name }} if add $1
batctl meshif bat{{ item.name }} gw server 1000000/1000000
batctl meshif bat{{ item.name }} it 10000
batctl meshif bat{{ item.name }} mm 1
batctl meshif bat{{ item.name }} hop_penalty 64
netctl start bat{{ item.name }}
systemctl restart dhcpd4.service
systemctl restart named.service

@@ -1,12 +0,0 @@
---
- name: install haveged
pacman:
update_cache: yes
name: haveged
state: present
- name: enable haveged at boot and start it
systemd:
name: haveged.service
enabled: yes
state: started

@@ -0,0 +1,9 @@
[Unit]
Description=Iperf3 TCP Server
After=network.target
[Service]
ExecStart=/usr/bin/iperf3 -s -V
[Install]
WantedBy=multi-user.target

@@ -0,0 +1,16 @@
---
- name: install iperf3
pacman:
name: iperf3
state: present
- name: copy iperf3 systemd-service
copy:
src: iperf3-tcp.service
dest: /etc/systemd/system/iperf3-tcp.service
- name: start and enable iperf3 tcp
systemd:
name: iperf3-tcp.service
enabled: yes
state: started

@@ -0,0 +1,29 @@
---
- name: install mesh-announce dependencies
pacman:
name:
- git
- lsb-release
- ethtool
state: present
when: sites | length > 0
- name: clone mesh-announce repo
git:
repo: https://github.com/FreifunkMYK/mesh-announce.git
dest: /opt/mesh-announce
when: sites | length > 0
- name: create respondd service
template:
src: respondd.service.j2
dest: /etc/systemd/system/respondd.service
mode: 0644
when: sites | length > 0
- name: start and enable respondd service
systemd:
name: respondd
state: started
enabled: yes
when: sites | length > 0

@@ -0,0 +1,12 @@
[Unit]
Description=Respondd
After=network.target
[Service]
ExecStart=/opt/mesh-announce/respondd.py -d /opt/mesh-announce/providers {% for site in sites %}-i bat{{ site.name }} -i vx{{ site.name }} -b bat{{ site.name }} {% endfor %}
Restart=always
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[Install]
WantedBy=multi-user.target

@@ -1,37 +0,0 @@
#!/bin/bash
INTERFACE=mullvad
FAILED_FILE=/tmp/mullvad.failed
fail=false
if [ ! -e /sys/class/net/$INTERFACE ]; then
echo "$INTERFACE interface does not exist"
fail=true
else
start_bytes=$(cat /sys/class/net/$INTERFACE/statistics/rx_bytes)
sleep 30
end_bytes=$(cat /sys/class/net/$INTERFACE/statistics/rx_bytes)
if [ $(($end_bytes-$start_bytes)) -lt 1000 ]; then
#echo "no traffic via $INTERFACE"
fail=true
fi
fi
if $fail; then
systemctl is-active openvpn-client@mullvad.service > /dev/null
if [ $? -ne 0 ]; then
systemctl status openvpn-client@mullvad.service
if [ -e $FAILED_FILE ]; then
echo restart openvpn-client@mullvad.service
systemctl restart openvpn-client@mullvad.service
else
touch $FAILED_FILE
fi
fi
else
if [ -e $FAILED_FILE ]; then
rm $FAILED_FILE
fi
fi

@@ -3,6 +3,5 @@ cd /opt/ffmyk-influx
while : ;do
php -c ./php.ini -f dhcp.php
php -c ./php.ini -f traffic.php
php -c ./php.ini -f fastd.php
sleep 15
done

@@ -0,0 +1,358 @@
#!/usr/bin/env python
"""
Munin monitoring plug-in for BIND9 DNS statistics server. Tested
with BIND 9.10, 9.11, and 9.12, exporting version 3.x of the XML
statistics.
Copyright (c) 2013-2015, Shumon Huque. All rights reserved.
This program is free software; you can redistribute it and/or modify
it under the same terms as Python itself.
"""
import os, sys
import xml.etree.ElementTree as et
try:
from urllib2 import urlopen # for Python 2
except ImportError:
from urllib.request import urlopen # for Python 3
VERSION = "0.31"
HOST = os.environ.get('HOST', "127.0.0.1")
PORT = os.environ.get('PORT', "8053")
INSTANCE = os.environ.get('INSTANCE', "")
SUBTITLE = os.environ.get('SUBTITLE', "")
STATS_TYPE = "xml" # will support json later
BINDSTATS_URL = "http://%s:%s/%s" % (HOST, PORT, STATS_TYPE)
if SUBTITLE != '':
SUBTITLE = ' ' + SUBTITLE
GraphCategoryName = "dns_bind"
# Note: munin displays these graphs ordered alphabetically by graph title
GraphConfig = (
('dns_opcode_in' + INSTANCE,
dict(title='BIND [00] Opcodes In',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Queries/sec',
location="server/counters[@type='opcode']/counter",
config=dict(type='DERIVE', min=0, draw='AREASTACK'))),
('dns_qtypes_in' + INSTANCE,
dict(title='BIND [01] Query Types In',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Queries/sec',
location="server/counters[@type='qtype']/counter",
config=dict(type='DERIVE', min=0, draw='AREASTACK'))),
('dns_server_stats' + INSTANCE,
dict(title='BIND [02] Server Stats',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Queries/sec',
location="server/counters[@type='nsstat']/counter",
fields=("Requestv4", "Requestv6", "ReqEdns0", "ReqTCP", "ReqTSIG",
"Response", "TruncatedResp", "RespEDNS0", "RespTSIG",
"QrySuccess", "QryAuthAns", "QryNoauthAns", "QryReferral",
"QryNxrrset", "QrySERVFAIL", "QryFORMERR", "QryNXDOMAIN",
"QryRecursion", "QryDuplicate", "QryDropped", "QryFailure",
"XfrReqDone", "UpdateDone", "QryUDP", "QryTCP"),
config=dict(type='DERIVE', min=0))),
('dns_cachedb' + INSTANCE,
dict(title='BIND [03] CacheDB RRsets',
enable=True,
stattype='cachedb',
args='-l 0',
vlabel='Count',
location="views/view[@name='_default']/cache[@name='_default']/rrset",
config=dict(type='GAUGE', min=0))),
('dns_resolver_stats' + INSTANCE,
dict(title='BIND [04] Resolver Stats',
enable=False, # appears to be empty
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="server/counters[@type='resstat']/counter",
config=dict(type='DERIVE', min=0))),
('dns_resolver_stats_qtype' + INSTANCE,
dict(title='BIND [05] Resolver Outgoing Queries',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="views/view[@name='_default']/counters[@type='resqtype']/counter",
config=dict(type='DERIVE', min=0))),
('dns_resolver_stats_view' + INSTANCE,
dict(title='BIND [06] Resolver Stats',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="views/view[@name='_default']/counters[@type='resstats']/counter",
config=dict(type='DERIVE', min=0))),
('dns_cachestats' + INSTANCE,
dict(title='BIND [07] Resolver Cache Stats',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="views/view[@name='_default']/counters[@type='cachestats']/counter",
fields=("CacheHits", "CacheMisses", "QueryHits", "QueryMisses",
"DeleteLRU", "DeleteTTL"),
config=dict(type='DERIVE', min=0))),
('dns_cache_mem' + INSTANCE,
dict(title='BIND [08] Resolver Cache Memory Stats',
enable=True,
stattype='counter',
args='-l 0 --base 1024',
vlabel='Memory In-Use',
location="views/view[@name='_default']/counters[@type='cachestats']/counter",
fields=("TreeMemInUse", "HeapMemInUse"),
config=dict(type='GAUGE', min=0))),
('dns_socket_activity' + INSTANCE,
dict(title='BIND [09] Socket Activity',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Active',
location="server/counters[@type='sockstat']/counter",
fields=("UDP4Active", "UDP6Active",
"TCP4Active", "TCP6Active",
"UnixActive", "RawActive"),
config=dict(type='GAUGE', min=0))),
('dns_socket_stats' + INSTANCE,
dict(title='BIND [10] Socket Rates',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="server/counters[@type='sockstat']/counter",
fields=("UDP4Open", "UDP6Open",
"TCP4Open", "TCP6Open",
"UDP4OpenFail", "UDP6OpenFail",
"TCP4OpenFail", "TCP6OpenFail",
"UDP4Close", "UDP6Close",
"TCP4Close", "TCP6Close",
"UDP4BindFail", "UDP6BindFail",
"TCP4BindFail", "TCP6BindFail",
"UDP4ConnFail", "UDP6ConnFail",
"TCP4ConnFail", "TCP6ConnFail",
"UDP4Conn", "UDP6Conn",
"TCP4Conn", "TCP6Conn",
"TCP4AcceptFail", "TCP6AcceptFail",
"TCP4Accept", "TCP6Accept",
"UDP4SendErr", "UDP6SendErr",
"TCP4SendErr", "TCP6SendErr",
"UDP4RecvErr", "UDP6RecvErr",
"TCP4RecvErr", "TCP6RecvErr"),
config=dict(type='DERIVE', min=0))),
('dns_zone_stats' + INSTANCE,
dict(title='BIND [11] Zone Maintenance',
enable=False,
stattype='counter',
args='-l 0',
vlabel='Count/sec',
location="server/counters[@type='zonestat']/counter",
config=dict(type='DERIVE', min=0))),
('dns_memory_usage' + INSTANCE,
dict(title='BIND [12] Memory Usage',
enable=True,
stattype='memory',
args='-l 0 --base 1024',
vlabel='Memory In-Use',
location='memory/summary',
fields=("ContextSize", "BlockSize", "Lost", "InUse"),
config=dict(type='GAUGE', min=0))),
('dns_adbstat' + INSTANCE,
dict(title='BIND [13] adbstat',
enable=True,
stattype='counter',
args='-l 0',
vlabel='Count',
location="views/view[@name='_default']/counters[@type='adbstat']/counter",
config=dict(type='GAUGE', min=0))),
)
def unsetenvproxy():
"""Unset HTTP Proxy environment variables that might interfere"""
for proxyvar in [ 'http_proxy', 'HTTP_PROXY' ]:
os.unsetenv(proxyvar)
return
def getstatsversion(etree):
"""return version of BIND statistics"""
return etree.attrib['version']
def getdata(graph, etree, getvals=False):
stattype = graph[1]['stattype']
location = graph[1]['location']
if stattype == 'memory':
return getdata_memory(graph, etree, getvals)
elif stattype == 'cachedb':
return getdata_cachedb(graph, etree, getvals)
results = []
counters = etree.findall(location)
if counters is None: # empty result
return results
for c in counters:
key = c.attrib['name']
val = c.text
if getvals:
results.append((key, val))
else:
results.append(key)
return results
def getdata_memory(graph, etree, getvals=False):
location = graph[1]['location']
results = []
counters = etree.find(location)
if counters is None: # empty result
return results
for c in counters:
key = c.tag
val = c.text
if getvals:
results.append((key, val))
else:
results.append(key)
return results
def getdata_cachedb(graph, etree, getvals=False):
location = graph[1]['location']
results = []
counters = etree.findall(location)
if counters is None: # empty result
return results
for c in counters:
key = c.find('name').text
val = c.find('counter').text
if getvals:
results.append((key, val))
else:
results.append(key)
return results
def validkey(graph, key):
fieldlist = graph[1].get('fields', None)
if fieldlist and (key not in fieldlist):
return False
else:
return True
def get_etree_root(url):
"""Return the root of an ElementTree structure populated by
parsing BIND9 statistics obtained at the given URL"""
data = urlopen(url)
return et.parse(data).getroot()
def muninconfig(etree):
"""Generate munin config for the BIND stats plugin"""
for g in GraphConfig:
if not g[1]['enable']:
continue
print("multigraph %s" % g[0])
print("graph_title %s" % g[1]['title'] + SUBTITLE)
print("graph_args %s" % g[1]['args'])
print("graph_vlabel %s" % g[1]['vlabel'])
print("graph_category %s" % GraphCategoryName)
data = getdata(g, etree, getvals=False)
if data is not None:
for key in data:
if validkey(g, key):
print("%s.label %s" % (key, key))
if 'draw' in g[1]['config']:
print("%s.draw %s" % (key, g[1]['config']['draw']))
print("%s.min %s" % (key, g[1]['config']['min']))
print("%s.type %s" % (key, g[1]['config']['type']))
print('')
def munindata(etree):
"""Generate munin data for the BIND stats plugin"""
for g in GraphConfig:
if not g[1]['enable']:
continue
print("multigraph %s" % g[0])
data = getdata(g, etree, getvals=True)
if data is not None:
for (key, value) in data:
if validkey(g, key):
print("%s.value %s" % (key, value))
print('')
def usage():
"""Print plugin usage"""
print("""\
\nUsage: bind9stats.py [config|statsversion]\n""")
sys.exit(1)
if __name__ == '__main__':
tree = get_etree_root(BINDSTATS_URL)
args = sys.argv[1:]
argslen = len(args)
unsetenvproxy()
if argslen == 0:
munindata(tree)
elif argslen == 1:
if args[0] == "config":
muninconfig(tree)
elif args[0] == "statsversion":
print("bind9stats %s version %s" % (STATS_TYPE, getstatsversion(tree)))
else:
usage()
else:
usage()

@@ -1,5 +0,0 @@
[fastd_*]
user root
group root
env.socketfile /run/ffmyk.socket

@@ -1,124 +0,0 @@
#!/usr/bin/perl -w
# -*- perl -*-
=head1 NAME
fastd_ - Plugin to monitor fastd uptime, peers and traffic
=head1 CONFIGURATION
Set user and group to have access to the socket
Set path to socketfile if not /tmp/fastd.sock
[fastd_*]
user fastd
group fastd
env.socketfile /tmp/fastd.sock
=head1 USAGE
Link this plugin to /etc/munin/plugins/ with the type of graph (uptime, peers, traffic)
appended to the link name, e.g.: /etc/munin/plugins/fastd_peers
After creating the links, restart munin-node. Don't forget to configure the plugin!
=head1 AUTHORS
Dominique Goersch <mail@dgoersch.info>
=head1 LICENSE
GPLv2
=head1 MAGIC MARKERS
#%# family=manual
#%# capabilities=suggest
=cut
use strict;
use warnings;
use File::Basename;
use IO::Socket::UNIX qw( SOCK_STREAM );
use JSON;
my $mode = basename($0); #get basename
$mode =~ s/fastd_//; #and strip 'fastd_' to get the mode
if ($ARGV[0] and $ARGV[0] eq "config") { #config graph
if ($mode eq 'uptime') { #for uptime
print "graph_title fastd Uptime\n";
print "graph_info This graph shows the uptime of the fastd on this supernode\n";
print "graph_args -l 0\n";
print "graph_scale no\n";
print "graph_vlabel uptime in days\n";
print "graph_category fastd\n";
print "uptime.label uptime\n";
print "uptime.draw AREA\n";
}
elsif ($mode eq 'peers') { #for peers
print "graph_title fastd peers\n";
print "graph_info This graph shows the peers of the fastd on this supernode\n";
print "graph_args -l 0\n";
print "graph_scale no\n";
print "graph_vlabel peers count\n";
print "graph_category fastd\n";
print "peers.label peers\n";
print "peers.draw AREA\n";
}
elsif ($mode eq 'traffic') { #for traffic
print "graph_order down up\n";
print "graph_title fastd traffic\n";
print "graph_args --base 1000\n";
print "graph_vlabel bits in (-) / out (+) per second\n";
print "graph_category fastd\n";
print "graph_info This graph shows the traffic of fastd.\n";
print "down.label received\n";
print "down.type DERIVE\n";
print "down.graph no\n";
print "down.cdef down,8,*\n";
print "down.min 0\n";
print "up.label bps\n";
print "up.type DERIVE\n";
print "up.negative down\n";
print "up.cdef up,8,*\n";
print "up.min 0\n";
}
exit 0;
}
if ($ARGV[0] and $ARGV[0] eq "suggest") { #tell munin about our graphs
print "uptime\n";
print "peers\n";
print "traffic\n";
}
my $statusfile = exists $ENV{'socketfile'} ? $ENV{'socketfile'} : "/tmp/fastd.sock"; #get path to socket from environment or use default
my $socket = IO::Socket::UNIX->new(Type => SOCK_STREAM,Peer => $statusfile) #open socket
or die("Can't connect to server: $!\n");
my $fastdstatus = "";
foreach my $line (<$socket>) {$fastdstatus .= $line;} #read contents from socket
my $json = decode_json($fastdstatus); #decode json
my $fastd_uptime = $json->{uptime}; #get the uptime from json
#my $fastd_peers = scalar(keys(%{$json->{peers}})); #get number of peers from json
my $fastd_peers = 0;
for my $key (keys(%{$json->{peers}})) {
$fastd_peers = $fastd_peers + ($json->{peers}{$key}{connection}? 1 : 0);
}
my $fastd_rx_bytes = $json->{statistics}->{rx}->{bytes}; #get received bytes from json
my $fastd_tx_bytes = $json->{statistics}->{tx}->{bytes}; #get transmitted bytes from json
if ( $mode eq 'uptime' ) {
printf "uptime.value %.0f\n",$fastd_uptime/86400000; #fastd reports milliseconds; convert to days
} elsif ($mode eq 'peers') {
print "peers.value $fastd_peers\n"; #return number of peers
} elsif ($mode eq 'traffic') {
print "up.value $fastd_tx_bytes\n"; #return transmitted bytes
print "down.value $fastd_rx_bytes\n"; #and received bytes
}

@@ -0,0 +1,180 @@
#!/usr/bin/perl -w
=head1 NAME
fw_conntrack - Plugin to monitor the number of tracked connections
through a Linux 2.4/2.6 firewall
=head1 CONFIGURATION
This plugin must run with root privileges
=head2 CONFIGURATION EXAMPLE
/etc/munin/plugin-conf.d/global or other file in that dir must contain:
[fw_*]
user root
=head1 NOTES
ESTABLISHED+FIN_WAIT+TIME_WAIT+SYN_SENT+UDP are the most interesting
connections.
The total list also includes SYN_RECV, CLOSE, CLOSE_WAIT, LAST_ACK and
LISTEN, but these were not (often) observed on my firewall.
TOTAL is the total number of tracked connections.
ASSURED and UNREPLIED connections are complementary subsets of
ESTABLISHED.
ASSURED is set once an ACK is seen after SYN_RECV. Therefore ASSURED is
plotted but not UNREPLIED.
Note that the plugin depends on the netfilter "conntrack" userspace tool.
It comes from http://conntrack-tools.netfilter.org/
=head1 AUTHORS
=over
=item 2004.05.05: Initial version by Nicolai Langfeldt, Linpro AS, Oslo, Norway
=item 2004.05.06: Enhanced to count NATed connections after input from Xavier on munin-users list
=item 2011.09.23: Perl version by Alex Tomlins
=back
=head1 LICENSE
GPL
=head1 MAGIC MARKERS
#%# family=auto
#%# capabilities=autoconf
=cut
use strict;
use Munin::Plugin;
my $conntrack = '/usr/sbin/conntrack';
my $nf_conntrack_file = '/proc/net/nf_conntrack';
my $ip_conntrack_file = '/proc/net/ip_conntrack';
my @conntrack_max_files = qw(
/proc/sys/net/nf_conntrack_max
/proc/sys/net/netfilter/nf_conntrack_max
/proc/sys/net/ipv4/ip_conntrack_max
/proc/sys/net/ipv4/netfilter/ip_conntrack_max
);
if ( defined($ARGV[0]) and $ARGV[0] eq "autoconf" ) {
if ( -x $conntrack or -r $nf_conntrack_file or -r $ip_conntrack_file) {
print "yes\n";
} else {
print "no\n";
}
exit 0;
}
if ( defined($ARGV[0]) and $ARGV[0] eq "config" ) {
print <<EOF;
graph_title Connections through firewall
graph_vlabel Connections
graph_category network
graph_args -l 0
established.label Established
established.type GAUGE
established.draw AREA
fin_wait.label FIN_WAIT
fin_wait.type GAUGE
fin_wait.draw STACK
time_wait.label TIME_WAIT
time_wait.type GAUGE
time_wait.draw STACK
syn_sent.label SYN_SENT
syn_sent.type GAUGE
syn_sent.draw STACK
udp.label UDP connections
udp.type GAUGE
udp.draw STACK
assured.label Assured
assured.type GAUGE
assured.draw LINE2
nated.label NATed
nated.type GAUGE
nated.draw LINE1
ipv4.label IPv4
ipv4.type GAUGE
ipv4.draw LINE2
ipv6.label IPv6
ipv6.type GAUGE
ipv6.draw LINE3
total.label Total
total.type GAUGE
total.graph no
EOF
my $max;
foreach (@conntrack_max_files) {
if ( -r $_) {
chomp($max = `cat $_`);
last;
}
}
if ($max) {
print "total.warning ", $max * 8 / 10, "\n";
print "total.critical ", $max * 9 / 10, "\n";
}
exit 0;
}
my $command;
if ( -x $conntrack) {
$command = "$conntrack -L -o extended -f ipv4 2>/dev/null; $conntrack -L -o extended -f ipv6 2>/dev/null";
} elsif ( -r $nf_conntrack_file ) {
$command = "cat $nf_conntrack_file";
} else {
$command = "cat $ip_conntrack_file";
}
my %state = (
'ESTABLISHED' => 0,
'FIN_WAIT' => 0,
'TIME_WAIT' => 0,
'SYN_SENT' => 0,
'UDP' => 0,
'ASSURED' => 0,
'NATTED' => 0,
'TOTAL' => 0,
'IPV4' => 0,
'IPV6' => 0
);
open CMD, "$command|";
while (<CMD>) {
$state{'TOTAL'} ++;
$state{'UDP'} ++ if /udp /;
$state{'ASSURED'} ++ if /ASSURED/;
if (/tcp \s*\d+\s+\d+\s+(\S+)/) {
$state{$1} ++;
}
if (/src=(\S+)\s+dst=(\S+)\s+sport.*src=(\S+)\s+dst=(\S+)/) {
$state{'NATTED'} ++ if $1 ne $4 or $2 ne $3;
}
$state{'IPV4'} ++ if /ipv4 /;
$state{'IPV6'} ++ if /ipv6 /;
}
close CMD;
print "established.value $state{'ESTABLISHED'}\n";
print "fin_wait.value $state{'FIN_WAIT'}\n";
print "time_wait.value $state{'TIME_WAIT'}\n";
print "syn_sent.value $state{'SYN_SENT'}\n";
print "udp.value $state{'UDP'}\n";
print "assured.value $state{'ASSURED'}\n";
print "nated.value $state{'NATTED'}\n";
print "ipv4.value $state{'IPV4'}\n";
print "ipv6.value $state{'IPV6'}\n";
print "total.value $state{'TOTAL'}\n";
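The NATed heuristic above flags a connection when the reply tuple is not simply the mirror of the original tuple. A minimal Python sketch of the same check (the conntrack lines are hypothetical samples, not taken from a live table):

```python
import re

# Same heuristic as the plugin: capture the original (src, dst) and the
# reply (src, dst) tuples and flag NAT when the reply is not the mirror
# of the original.
TUPLE_RE = re.compile(r'src=(\S+)\s+dst=(\S+)\s+sport.*src=(\S+)\s+dst=(\S+)')

def is_nated(line):
    m = TUPLE_RE.search(line)
    if not m:
        return False
    osrc, odst, rsrc, rdst = m.groups()
    # un-NATed flows satisfy: reply src == original dst and reply dst == original src
    return osrc != rdst or odst != rsrc

# hypothetical conntrack entries for illustration
plain = ("tcp 6 431999 ESTABLISHED src=10.0.0.2 dst=93.184.216.34 sport=5000 "
         "dport=80 src=93.184.216.34 dst=10.0.0.2 sport=80 dport=5000")
nated = ("tcp 6 431999 ESTABLISHED src=10.0.0.2 dst=93.184.216.34 sport=5000 "
         "dport=80 src=93.184.216.34 dst=185.66.193.1 sport=80 dport=5000")
```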

@ -1,6 +1,8 @@
[fw_*]
user root
[if_ens3]
[if_en*]
env.speed 1000
[dhcp-pool]
user dhcp

@ -0,0 +1,3 @@
[wg_peers_*]
user root
group root

@ -0,0 +1,54 @@
#!/bin/sh
# -*- sh -*-
: << =cut
=head1 NAME
wg_peers_ - Plugin to monitor wg peers
=head1 CONFIGURATION
Set user and group to have access
[wg_peers_*]
user root
group root
=head1 USAGE
Link this plugin to /etc/munin/plugins/
After creating the links, restart munin-node. Don't forget to configure the plugin!
=head1 AUTHORS
Niklas Yann Wettengel <niyawe@niyawe.de>
=head1 LICENSE
GPLv2
=head1 MAGIC MARKERS
#%# family=manual
=cut
. "$MUNIN_LIBDIR/plugins/plugin.sh"
myname=$(basename "$0" | sed 's/^wg_peers_//g')
if [ "$1" = "config" ]; then
echo "graph_title wg${myname} peers"
echo "graph_info This graph shows the wg peers on this supernode"
echo "graph_args -l 0"
echo "graph_scale no"
echo "graph_vlabel peers count"
echo "graph_category wireguard"
echo "peers.label peers"
echo "peers.draw AREA"
exit 0
fi
echo "peers.value $(wg show wg${myname} peers | wc -l)"
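As a wildcard plugin, wg_peers_ derives the interface to query from its own symlink name (a link named wg_peers_myk would monitor wgmyk). That naming convention in isolation, with a hypothetical link name:

```shell
# hypothetical plugin link name under /etc/munin/plugins/
linkname="wg_peers_myk"
# strip the wg_peers_ prefix, as the plugin does with basename + sed
myname=$(basename "$linkname" | sed 's/^wg_peers_//g')
ifname="wg${myname}"
echo "$ifname"
```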

@ -1,37 +0,0 @@
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name localhost;
charset UTF-8;
index index.html index.htm;
root /srv/http/vnstat;
location / {
try_files $uri $uri/ =404;
autoindex on;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow ::1;
deny all;
}
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf|svg)$ {
expires 30d;
# Optional: Don't log access to assets
access_log off;
}
}

@ -1,45 +0,0 @@
#!/bin/sh
set -e
IFACES=$(ls /var/lib/vnstat/)
TARGET=/srv/http/vnstat/
for iface in $IFACES; do
/usr/bin/vnstati -i ${iface} -h -o ${TARGET}${iface}_hourly.png
/usr/bin/vnstati -i ${iface} -d -o ${TARGET}${iface}_daily.png
/usr/bin/vnstati -i ${iface} -m -o ${TARGET}${iface}_monthly.png
/usr/bin/vnstati -i ${iface} -t -o ${TARGET}${iface}_top10.png
/usr/bin/vnstati -i ${iface} -s -o ${TARGET}${iface}_summary.png
done
cat > ${TARGET}index.html <<EOT
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<title>u1 - Network Traffic</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="Content-Language" content="en" />
</head>
<body style="white-space: nowrap">
EOT
for iface in $IFACES; do
sed s/IFACE/${iface}/g >> ${TARGET}index.html <<EOT
<div style="display:inline-block;vertical-align: top">
<img src="IFACE_summary.png" alt="traffic summary" /><br>
<img src="IFACE_monthly.png" alt="traffic per month" /><br>
<img src="IFACE_hourly.png" alt="traffic per hour" /><br>
<img src="IFACE_top10.png" alt="traffic top10" /><br>
<img src="IFACE_daily.png" alt="traffic per day" />
</div>
EOT
done
echo "</body></html>" >> ${TARGET}index.html
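The loop above stamps one HTML block per interface by piping a heredoc through sed, replacing the IFACE placeholder. The substitution in isolation, with a hypothetical interface name:

```shell
# hypothetical interface name
iface="bat0"
# replace every IFACE occurrence, as the heredoc loop does
line=$(printf '%s\n' '<img src="IFACE_summary.png" alt="traffic summary" />' | sed "s/IFACE/${iface}/g")
echo "$line"
```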

@ -10,22 +10,59 @@
dest: /etc/munin/munin-node.conf
notify: restart munin-node
- name: copy fastd plugin
- name: install perl-json
pacman:
name: perl-json
state: present
- name: copy wg peers plugin
copy:
src: munin/munin_fastd_plugin
dest: /usr/lib/munin/plugins/fastd_
src: munin/munin_wg_peers
dest: /usr/lib/munin/plugins/wg_peers_
mode: 0755
notify: restart munin-node
- name: copy wg peers plugin config
copy:
src: munin/munin_wg_conf
dest: /etc/munin/plugin-conf.d/wg
mode: 0644
notify: restart munin-node
- name: enable munin plugins for wg peers
file:
path: /etc/munin/plugins/wg_peers_{{ item.name }}
src: /usr/lib/munin/plugins/wg_peers_
state: link
with_items: "{{ sites }}"
notify: restart munin-node
- name: copy dhcp-pool plugin
copy:
src: munin/munin_dhcp_pool_plugin
dest: /usr/lib/munin/plugins/dhcp-pool
mode: 0755
notify: restart munin-node
- name: enable munin plugins for dhcp
file:
path: /etc/munin/plugins/dhcp-pool
src: /usr/lib/munin/plugins/dhcp-pool
state: link
notify: restart munin-node
- name: copy fw_conntrack plugin
copy:
src: munin/munin_fw_conntrack
dest: /etc/munin/plugins/fw_conntrack
mode: 0755
notify: restart munin-node
- name: copy fastd plugin config
- name: copy bind9stats plugin
copy:
src: munin/munin_fastd_conf
dest: /etc/munin/plugin-conf.d/fastd
src: munin/bind9stats.py
dest: /etc/munin/plugins/bind9stats.py
mode: 0755
notify: restart munin-node
- name: copy global config
@ -44,17 +81,87 @@
name: perl-lwp-protocol-https
state: present
- name: install perl-json
pacman:
name: perl-json
state: present
- name: enable munin plugins for network monitoring (1/6)
file:
path: /etc/munin/plugins/if_{{ ansible_default_ipv4.interface }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
- name: enable munin plugins for network monitoring (2/6)
file:
path: /etc/munin/plugins/if_{{ ansible_default_ipv6.interface }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
- name: enable munin plugins for network monitoring (3/6)
file:
path: /etc/munin/plugins/if_{{ item[0] }}{{ item[1].name }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
with_nested:
- [ 'bat', 'vpn', 'wg', 'vx' ]
- "{{ sites }}"
- name: enable munin plugins for network monitoring (4/6)
file:
path: /etc/munin/plugins/if_bb{{ item.name }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
with_items: "{{ wireguard_bb_peers|default([]) }}"
- name: enable munin plugins for network monitoring (5/6)
file:
path: /etc/munin/plugins/if_bb{{ hostvars[item]['wireguard_bb_name'] }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
with_items: "{{ groups['fastd'] | difference([inventory_hostname]) }}"
- name: enable munin plugins for network monitoring (6/6)
file:
path: /etc/munin/plugins/if_{{ item.name }}
src: /usr/lib/munin/plugins/if_
state: link
notify: restart munin-node
with_items: "{{ ffrl_peers }}"
- name: enable munin plugins
file:
path: /etc/munin/plugins/{{ item.name }}
src: /usr/lib/munin/plugins/{{ item.plugin | default( item.name ) }}
path: /etc/munin/plugins/{{ item }}
src: /usr/lib/munin/plugins/{{ item }}
state: link
with_items: "{{ munin_node_plugins }}"
with_items:
- cpu
- df
- df_inode
- diskstats
- entropy
- forks
- fw_forwarded_local
- fw_packets
- interrupts
- irqstats
- load
- memory
- netstat
- nginx_request
- nginx_status
- ntp_kernel_err
- ntp_kernel_pll_freq
- ntp_kernel_pll_off
- ntp_offset
- open_files
- open_inodes
- proc_pri
- processes
- threads
- uptime
- users
- vmstat
notify: restart munin-node
- name: start and enable munin-node

@ -1,58 +0,0 @@
---
- name: install vnstat
pacman:
name: vnstat
state: present
- name: start and enable vnstat service
systemd:
name: vnstat.service
enabled: yes
state: started
- name: add interfaces to vnstat
command: /usr/bin/vnstat -u -i {{ item }}
args:
creates: '/var/lib/vnstat/{{ item }}'
with_items:
- bat0
- ens3
- ffmyk-mesh-vpn
- mullvad
- name: add output folder for vnstat graphs
file:
path: /srv/http/vnstat
state: directory
- name: install gd which is needed for graph generation
pacman:
name: gd
state: present
- name: add bash script to generate vnstat graphs
copy:
src: vnstat.sh
dest: /usr/local/bin/vnstat.sh
mode: 0744
- name: add cronjob to generate vnstat graphs
cron:
name: vnstat
minute: '*/5'
user: root
cron_file: fastd-api
job: '/usr/local/bin/vnstat.sh'
- name: add vnstat nginx config
copy:
src: vnstat
dest: /etc/nginx/sites-available/vnstat
notify: reload nginx
- name: enable vnstat nginx config
file:
src: /etc/nginx/sites-available/vnstat
dest: /etc/nginx/sites-enabled/vnstat
state: link
notify: reload nginx

@ -1,22 +1,6 @@
---
- name: install vnstat
include: install_vnstat.yml
- name: add bash script to check internet
copy:
src: check_internet.sh
dest: /usr/local/bin/check_internet.sh
mode: 0744
- name: add cronjob to check internet
cron:
name: check_internet
user: root
cron_file: fastd-api
job: '/usr/local/bin/check_internet.sh'
- name: install ffmyk-influx
include: install_ffmyk-influx.yml
import_tasks: install_ffmyk-influx.yml
- name: install munin
include: install_munin.yml
import_tasks: install_munin.yml

@ -1,23 +1,73 @@
<?php
date_default_timezone_set('UTC');
require('func.php');
$data = file_get_contents('/var/lib/dhcp/dhcpd.leases');
$dhcp_config = file_get_contents('/etc/dhcpd.conf');
preg_match_all('/lease ([\d\.]+) \{[^\}]+ends \d+ (\d{4}\/\d{2}\/\d{2} \d+:\d{2}:\d{2});[^\}]+}/s', $data, $match);
$num_ranges = preg_match_all('/range[\s]+([\d]+\.[\d]+\.[\d]+\.[\d]+)[\s]+([\d]+\.[\d]+\.[\d]+\.[\d]+)/', $dhcp_config, $ranges);
unset($data, $match[0]);
$lease_file_handle = fopen("/var/lib/dhcp/dhcpd.leases", "r");
$dend = time()-120;
$activeleases = array();
$clients = 0;
$lease = -1;
$start = -1;
$end = -1;
foreach($match[2] as $timeout) {
$end = strtotime($timeout.' UTC');
if($end > $dend) $clients++;
while(($line = fgets($lease_file_handle)) !== false)
{
if(preg_match('/lease ([\d]+\.[\d]+\.[\d]+\.[\d]+)/', $line, $match))
{
$lease = ip2long($match[1]);
continue;
}
elseif(preg_match('/starts \d ([\d]{4})\/([\d]{2})\/([\d]{2}) ([\d]{2}):([\d]{2}):([\d]{2})/', $line, $match))
{
$start = mktime($match[4], $match[5], $match[6], $match[2], $match[3], $match[1]);
continue;
}
elseif(preg_match('/ends \d ([\d]{4})\/([\d]{2})\/([\d]{2}) ([\d]{2}):([\d]{2}):([\d]{2})/', $line, $match))
{
$end = mktime($match[4], $match[5], $match[6], $match[2], $match[3], $match[1]);
if($lease > 0 && $start > 0 && $end > 0)
{
if( $start < time() && $end > time() )
{
$activeleases[$lease] = $lease;
$lease = -1;
$start = -1;
$end = -1;
}
}
}
}
$data = 'clients,host={{ ansible_hostname }},type=backend value='.$clients;
sendflux($data);
$pools = array();
for($range = 0; $range < $num_ranges; $range++)
{
$clients = 0;
$range_start = ip2long($ranges[1][$range]);
$range_end = ip2long($ranges[2][$range]);
foreach($activeleases as $lease)
{
if( $lease > $range_start && $lease < $range_end )
{
$clients++;
}
}
$pools[$range_start] = $clients;
}
$data = "";
foreach($pools as $range => $clients)
{
$data .= 'clients,host={{ ansible_hostname }},pool='.long2ip($range).',type=backend value='.$clients."\n";
}
sendflux($data);
?>
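The template assigns each active lease to a DHCP pool by comparing ip2long() values strictly inside each range. A sketch of the same range check in Python, with hypothetical pool and lease data:

```python
import ipaddress

# hypothetical DHCP range and active lease
ranges = [("10.222.0.10", "10.222.0.250")]
active = [ipaddress.IPv4Address("10.222.0.42")]

def pool_counts(ranges, active):
    counts = {}
    for start, end in ranges:
        lo, hi = ipaddress.IPv4Address(start), ipaddress.IPv4Address(end)
        # strict comparison, mirroring $lease > $range_start && $lease < $range_end
        counts[start] = sum(1 for ip in active if lo < ip < hi)
    return counts
```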

@ -26,9 +26,13 @@ function fastdGetPeers($file) {
return $peers;
}
$fastd_1280 = fastdGetPeers('/run/ffmyk.socket');
$data = "";
$data = 'fastdclient,mtu=1280,host={{ ansible_hostname }},type=backend value='.$fastd_1280."\n";
{% for site in sites %}
$fastd_{{ site.name }} = fastdGetPeers('/run/ff{{ site.name }}1.socket');
$data .= 'fastdclient,mtu=1280,host={{ ansible_hostname }},site={{ site.name }},type=backend value='.$fastd_{{ site.name }}."\n";
{% endfor %}
sendflux($data);

@ -1,6 +1,6 @@
<?php
function sendflux($data) {
$url = 'http://10.222.42.54:8086/write?db=freifunk';
$url = 'http://[2a03:2260:1016:302:c03:19ff:fe06:285]:8086/write?db=freifunk';
$options = array(
'http' => array(

@ -2,52 +2,26 @@
require('func.php');
function traffic($iface, $alias=false) {
if(!$alias) $alias = $iface;
/* ifconfig eth0 | grep bytes
RX bytes:700194759 (667.7 MiB) TX bytes:1090382719 (1.0 GiB)
$rx = file_get_contents('/sys/class/net/'.$iface.'/statistics/rx_bytes');
$tx = file_get_contents('/sys/class/net/'.$iface.'/statistics/tx_bytes');
*/
$data = shell_exec('ifconfig '.escapeshellarg($iface).' | grep bytes');
preg_match('/RX.+?bytes (\d+) /', $data, $match);
$rx = $match[1];
unset($match);
$data = 'rx,if='.$alias.',host={{ ansible_hostname }},type=backend value='.$rx."\n";
$data.= 'tx,if='.$alias.',host={{ ansible_hostname }},type=backend value='.$tx;
preg_match('/TX.+?bytes (\d+) /', $data, $match);
$tx = $match[1];
unset($match);
$file = '/opt/ffmyk-influx/traffic.'.base64_encode($iface).'.cache';
$out['rx'] = 0;
$out['tx'] = 0;
if(file_exists($file)) {
$cache = unserialize(file_get_contents($file));
$diff = time() - filemtime($file);
if($rx > $cache['rx']) $out['rx'] = ($rx - $cache['rx']) / $diff;
if($tx > $cache['tx']) $out['tx'] = ($tx - $cache['tx']) / $diff;
}
file_put_contents($file, serialize(array("rx" => $rx, "tx" => $tx)));
$data = 'rx,if='.$alias.',host={{ ansible_hostname }},type=backend value='.$out['rx']."\n";
$data.= 'tx,if='.$alias.',host={{ ansible_hostname }},type=backend value='.$out['tx'];
sendflux($data);
$out['if'] = $iface;
return $out;
}
(traffic('ens3', 'eth0'));
(traffic('mullvad'));
(traffic('bat0'));
(traffic('ffmyk-mesh-vpn', 'ffmyk-mesh-vpnd'));
sendflux($data);
}
(traffic('{{ ansible_default_ipv4.interface }}', 'wan'));
{% if ansible_default_ipv4.interface != ansible_default_ipv6.interface %}
(traffic('{{ ansible_default_ipv6.interface }}', 'wan6'));
{% endif %}
{% for site in sites %}
(traffic('bat{{ site.name }}'));
(traffic('wg{{ site.name }}'));
{% endfor %}
?>
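The reworked template derives byte rates from the cached counters: the delta between the current and cached value is divided by the cache file's age, and a counter reset (or missing cache) reports zero. A minimal sketch of that rate logic:

```python
# Rate from a monotonically increasing byte counter, as cached by the
# template: delta divided by elapsed seconds, zero on reset or no growth.
def rate(counter_now, counter_cached, elapsed):
    if elapsed <= 0 or counter_now <= counter_cached:
        return 0
    return (counter_now - counter_cached) / elapsed
```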

@ -38,7 +38,7 @@ host_name {{ ansible_fqdn }}
# may repeat the allow line as many times as you'd like
allow ^127\.0\.0\.1$
allow ^2a01:4f8:151:13cd::35$
allow ^2a01:4f8:272:3d5f:1::35$
allow ^::1$
# Which address to bind to;

@ -0,0 +1,12 @@
{% for site in sites %}
[fastd_peers_ff{{ site.name }}]
user root
group root
env.socketfile /run/ff{{ site.name }}1.socket
[fastd_traffic_ff{{ site.name }}]
user root
group root
env.socketfile /run/ff{{ site.name }}1.socket
{% endfor %}

@ -17,12 +17,42 @@ http {
access_log off;
error_log /var/log/nginx/error.log;
#gzip on;
gzip off;
gzip_disable "msie6";
charset UTF-8;
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
index index.html index.htm;
root /srv/http;
location / {
try_files $uri $uri/ =404;
autoindex on;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow ::1;
deny all;
}
location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf|svg)$ {
expires 30d;
# Optional: Don't log access to assets
access_log off;
}
}
# Virtual Host Config
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

@ -2,7 +2,7 @@
#/sbin/ip route replace default via $4 table ffmyk
sleep 3
echo Reroute via $route_vpn_gateway
ip route replace 0.0.0.0/1 via $route_vpn_gateway table ffmyk
ip route replace 128.0.0.0/1 via $route_vpn_gateway table ffmyk
ip route replace 0.0.0.0/0 via $route_vpn_gateway proto static table ffmyk
#ip -6 route replace default dev $dev proto static table ffmyk
exit 0

@ -0,0 +1,5 @@
---
- name: restart radvd
systemd:
name: radvd.service
state: restarted

@ -0,0 +1,17 @@
---
- name: install radvd
pacman:
name: radvd
state: present
- name: radvd config
template:
src: radvd.conf.j2
dest: /etc/radvd.conf
notify: restart radvd
- name: start and enable radvd
systemd:
name: radvd.service
enabled: yes
state: started

@ -0,0 +1,26 @@
{% for site in sites %}
interface bat{{ site.name }}
{
AdvSendAdvert on;
IgnoreIfMissing on;
MinRtrAdvInterval 10;
MaxRtrAdvInterval 300;
AdvDefaultPreference low;
AdvHomeAgentFlag off;
prefix {{ site.net6 }}
{
AdvOnLink on;
AdvAutonomous on;
AdvRouterAddr off;
};
RDNSS {{ site.bat_ipv6 }}
{
AdvRDNSSLifetime 900;
};
};
{% endfor %}

@ -0,0 +1,161 @@
#!/usr/bin/env python
import socket
import ipaddress
import threading
import time
import zlib
import json
import os.path
import sys
from wgnlpy import WireGuard
import requests
from xml.etree import ElementTree
if not os.path.exists("/etc/respondd_poller.json"):
print("/etc/respondd_poller.json missing")
sys.exit(1)
interface = None
prefix = None
yanic_addr = None
request = None
with open("/etc/respondd_poller.json", "r") as f:
config = json.load(f)
if "interface" in config:
interface = config["interface"]
if "prefix" in config:
prefix = ipaddress.IPv6Network(config["prefix"])
if "yanic_addr" in config and "yanic_port" in config:
yanic_addr = (config["yanic_addr"], int(config["yanic_port"]))
if "request" in config:
request = config["request"].encode("ascii")
wg = WireGuard()
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
last_request = dict()
last_http_request = dict()
last_response = dict()
def get_wg_peers():
wgpeers = wg.get_interface(interface).peers
for peer in wgpeers:
for ip in wgpeers[peer].allowedips:
if ip.subnet_of(prefix):
yield ip
def inflate(data):
decompress = zlib.decompressobj(-zlib.MAX_WBITS)
inflated = decompress.decompress(data)
inflated += decompress.flush()
return inflated.decode()
def cleanup():
global last_request
global last_http_request
global last_response
while True:
time.sleep(60)
old = time.monotonic() - 360
ips = []
macs = []
for ip in last_request:
if last_response[ip] < old:
ips.append(ip)
for ip in ips:
del last_response[ip]
del last_request[ip]
del last_http_request[ip]
def recv():
global sock
while True:
data, addr = sock.recvfrom(1500)
sock.sendto(data, yanic_addr)
j = json.loads(inflate(data))
last_response[ipaddress.IPv6Address(addr[0])] = time.monotonic()
def send(ip):
global request
try:
sock.sendto(request, (str(ip), 1001))
except:
print("failed to send packet to", ip)
return
def get_http_nodeinfo(ip):
global last_request
global last_http_request
global last_response
try:
print("get_http_nodeinfo", ip)
status = requests.get('http://[' + str(ip) + ']/cgi-bin/status')
except:
return
status_tree = ElementTree.fromstring(status.content)
mesh_ifs = []
interface_list = status_tree.findall(".//*[@data-interface]")
for interface in interface_list:
mesh_ifs.append(interface.attrib["data-interface"])
for mesh_if in mesh_ifs:
try:
nodeinfo = requests.get('http://[' + str(ip) + ']/cgi-bin/dyn/neighbours-nodeinfo?' + mesh_if)
except:
return
for line in nodeinfo.content.split(b'\n'):
if line.startswith(b'data: {'):
data = line.split(b': ', maxsplit=1)[1]
data = json.loads(data)
if "network" in data and "addresses" in data["network"]:
for address in data["network"]["addresses"]:
if ipaddress.IPv6Network(address).subnet_of(prefix):
node_ip = ipaddress.IPv6Address(address)
now = time.monotonic()
if node_ip not in last_request:
last_request[node_ip] = now
last_response[node_ip] = now
if node_ip not in last_http_request or now - last_http_request[node_ip] > 300:
last_http_request[node_ip] = now
get_http_nodeinfo(node_ip)
def scan_wg_peers():
global last_request
global last_http_request
global last_response
while True:
print("scanning wg peers")
request_threads = []
now = time.monotonic()
for net in get_wg_peers():
ip = ipaddress.IPv6Address(str(net.network_address) + "1")
if ip not in last_request:
last_request[ip] = now
last_http_request[ip] = now
last_response[ip] = now
request_thread = threading.Thread(target=get_http_nodeinfo, args=(ip,))
request_thread.start()
request_threads.append(request_thread)
if len(request_threads) > 10:
for thread in request_threads:
thread.join()
request_threads = []
time.sleep(60)
listen_thread = threading.Thread(target=recv)
listen_thread.start()
cleanup_thread = threading.Thread(target=cleanup)
cleanup_thread.start()
scan_thread = threading.Thread(target=scan_wg_peers)
scan_thread.start()
last_wg_time = 0
while True:
for ip in last_request:
now = time.monotonic()
if now - last_request[ip] > 15:
last_request[ip] = now
send(ip)
time.sleep(1)
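respondd payloads are raw DEFLATE streams without a zlib header, which is why inflate() above constructs the decompressor with -zlib.MAX_WBITS. A round-trip sketch of that framing:

```python
import zlib

def deflate_raw(data: bytes) -> bytes:
    c = zlib.compressobj(wbits=-zlib.MAX_WBITS)  # raw stream, no zlib header
    return c.compress(data) + c.flush()

def inflate_raw(data: bytes) -> str:
    # matches the poller's inflate(): negative wbits = raw DEFLATE
    d = zlib.decompressobj(-zlib.MAX_WBITS)
    return (d.decompress(data) + d.flush()).decode()
```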

@ -0,0 +1,12 @@
[Unit]
Description=respondd_poller
After=network.target
[Service]
ExecStart=/opt/respondd_poller/venv/bin/python -u /opt/respondd_poller/respondd_poller.py
Restart=always
WorkingDirectory=/opt/respondd_poller
Environment=PYTHONPATH=/opt/respondd_poller
[Install]
WantedBy=multi-user.target

@ -0,0 +1,49 @@
---
- name: install respondd_poller dependencies
pacman:
name:
- git
- python-virtualenv
- python-setuptools
- python-pip
state: present
- name: create venv
command:
cmd: "python -m venv /opt/respondd_poller/venv"
creates: /opt/respondd_poller/venv
- name: install respondd_poller requirements
copy:
src: requirements.txt
dest: /opt/respondd_poller/requirements.txt
mode: 0644
- name: install respondd_poller script
copy:
src: respondd_poller.py
dest: /opt/respondd_poller/respondd_poller.py
mode: 0644
- name: install requirements
pip:
requirements: /opt/respondd_poller/requirements.txt
virtualenv: /opt/respondd_poller/venv
- name: install respondd_poller config
template:
src: respondd_poller.json.j2
dest: /etc/respondd_poller.json
mode: 0644
- name: create respondd_poller service
copy:
src: respondd_poller.service
dest: /etc/systemd/system/respondd_poller.service
mode: 0644
- name: start and enable respondd_poller service
systemd:
name: respondd_poller
state: started
enabled: yes

@ -0,0 +1,7 @@
{
"interface":"wgmyk",
"prefix":"2a03:2260:1016::/48",
"yanic_addr": "fe80::41:18ff:fec5:5041%wgmyk",
"yanic_port": 10001,
"request":"GET nodeinfo statistics neighbours"
}

@ -0,0 +1,5 @@
---
- name: restart tayga
systemd:
name: tayga.service
state: restarted

@ -0,0 +1,30 @@
---
- name: install tayga
pacman:
name: tayga
state: present
- name: tayga.conf
template:
src: tayga.conf.j2
dest: /etc/tayga.conf
mode: 0644
notify: restart tayga
- name: create systemd override folder
ansible.builtin.file:
path: /etc/systemd/system/tayga.service.d
state: directory
- name: systemd override.conf
template:
src: systemd_override.conf.j2
dest: /etc/systemd/system/tayga.service.d/override.conf
mode: 0644
notify: restart tayga
- name: start and enable tayga service
systemd:
name: tayga.service
enabled: yes
state: started

@ -0,0 +1,10 @@
[Service]
ExecStart=
ExecStartPre=/usr/bin/tayga --mktun --config /etc/tayga.conf
ExecStartPre=/usr/bin/ip link set nat64 up
ExecStartPre=/usr/bin/ip addr replace {{ tayga_ipv4 }}/32 dev nat64
ExecStartPre=/usr/bin/ip addr replace 2a03:2260:1016::64/128 dev nat64
ExecStartPre=/usr/bin/ip route replace {{ tayga_pool }} dev nat64 proto static table ffmyk
ExecStartPre=/usr/bin/ip -6 route replace 64:ff9b::/96 dev nat64 proto static table ffmyk
ExecStart=/usr/bin/tayga --nodetach --config /etc/tayga.conf
Restart=always
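The 64:ff9b::/96 route above is the well-known NAT64 prefix: the translator embeds the IPv4 address in the low 32 bits of the IPv6 address. A sketch of that mapping, assuming the well-known prefix:

```python
import ipaddress

# Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix.
def to_nat64(v4, prefix="64:ff9b::"):
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base + int(ipaddress.IPv4Address(v4)))
```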
