Steps to install node_exporter
------------------------------

Add a user for node_exporter:

```bash
sudo useradd --no-create-home --shell /bin/false node_exporter
```
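The service account above is typically paired with a systemd unit so node_exporter starts at boot and runs as that user. A minimal unit sketch — the binary path (`/usr/local/bin/node_exporter`) and unit file location are assumptions, not part of the original steps:

```ini
# /etc/systemd/system/node_exporter.service (hypothetical path)
[Unit]
Description=Prometheus Node Exporter
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```

After `sudo systemctl enable --now node_exporter`, metrics should be available on the default port 9100.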
| """Code to read .mca region files | |
| I modified the javascript library mca-js to create this file. | |
| mca-js: https://github.com/thejonwithnoh/mca-js | |
| This is largely just a python interpretation of that script. | |
| ----------- | |
| MIT License | |
| Copyright (c) 2019 Nicholas Westerhausen |
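The region-file layout such a reader deals with can be sketched in a few lines. This is not the file's actual code, just a minimal illustration of the standard Anvil layout: a 4 KiB location table of 1024 big-endian entries (3-byte sector offset, 1-byte sector count), followed by chunk payloads prefixed with a 4-byte length and a 1-byte compression type (2 = zlib).

```python
import struct
import zlib


def read_chunk(path, cx, cz):
    """Return the raw (decompressed) NBT bytes for chunk (cx, cz),
    or None if that chunk has not been generated."""
    with open(path, "rb") as f:
        # Each location entry is 4 bytes; chunks are indexed by (x, z) mod 32.
        f.seek(4 * ((cx & 31) + (cz & 31) * 32))
        entry = struct.unpack(">I", f.read(4))[0]
        offset, sector_count = entry >> 8, entry & 0xFF
        if offset == 0:
            return None  # chunk not present in this region file
        # Chunk data starts at its sector offset: 4-byte length, 1-byte type.
        f.seek(offset * 4096)
        length, compression = struct.unpack(">IB", f.read(5))
        data = f.read(length - 1)
        return zlib.decompress(data) if compression == 2 else data
```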
```bash
#!/bin/bash
###
# This script utilizes apprise (https://github.com/caronc/apprise) to send notifications.
# You need to have apprise available for the user sonarr operates under.
# I installed it via `sudo pip install --system apprise`, although the python
# community really dislikes it when you do that. Recommended installation would be
# something along the lines of:
#
#   sudo su sonarr -s /bin/bash
#   pip install --user apprise
```
```bash
#!/bin/bash
# Short script using the Digital Ocean API to update an A record with an IPv4 address
# from a specified device.
#
# I simply have cron run this every 30m. Ideally I wanted to have it run when openvpn
# had to reconnect and either (a) change device or (b) change IP.
#
# Pre-requisites:
#   1. DNS set up through Digital Ocean for this domain
#   2. API token created with write access
```
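The update itself boils down to a single authenticated PUT against the v2 API's domain-records endpoint. A Python sketch of the same idea (the original script does this in bash; the domain, record id, token, and IP below are placeholder values):

```python
import json
import urllib.request

API = "https://api.digitalocean.com/v2"


def build_update(domain, record_id, token, ip):
    """Build the authenticated PUT request that points an A record at `ip`."""
    return urllib.request.Request(
        f"{API}/domains/{domain}/records/{record_id}",
        data=json.dumps({"data": ip}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )


# urllib.request.urlopen(build_update("example.com", "123456", "TOKEN", "203.0.113.7"))
# would perform the actual update.
```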
```yaml
name: Deploy
on:
  push:
    branches:
      - main
jobs:
  build:
    name: Deploy docs to GitHub Pages
    runs-on: ubuntu-latest
    steps:
```
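The steps list above was cut off in this copy. For reference, a docs-to-Pages job commonly continues along the lines below; the action versions, build command, and output directory here are assumptions, not the original workflow:

```yaml
      - uses: actions/checkout@v4
      - name: Build docs
        run: make docs  # placeholder build command
      - name: Deploy to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./docs/_build/html  # placeholder output dir
```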
This was a quick and dirty solution for grabbing creature names and descriptions from my raw files so I could look up creature information while carefully preparing for my embark. It's redundant for the vanilla raws, since those are covered pretty well on the wiki, but I was playing with a few mods that didn't include a bestiary, so I kept running into unfamiliar creature names with no idea what they were or how valuable they might be. The script creates a JSON file containing an array of objects that describe the raws.

I took this idea, rewrote it in Rust, and have continued work on it over at df-raw-lookup.
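The token format being scraped is simple: a creature raw opens with a `[CREATURE:...]` tag, followed by `[NAME:...]` and `[DESCRIPTION:...]` tokens. A minimal illustration of the idea (not the actual script; the output keys are my own naming):

```python
import json
import re

# Match only the three token kinds we care about.
TOKEN = re.compile(r"\[(CREATURE|NAME|DESCRIPTION):([^\]]*)\]")


def parse_raws(text):
    """Collect creatures from raw text as a list of dicts."""
    creatures = []
    for kind, value in TOKEN.findall(text):
        if kind == "CREATURE":
            creatures.append({"identifier": value})
        elif creatures and kind == "NAME":
            # NAME carries singular:plural:adjective; keep the singular.
            creatures[-1]["name"] = value.split(":")[0]
        elif creatures and kind == "DESCRIPTION":
            creatures[-1]["description"] = value
    return creatures


sample = "[CREATURE:TOAD][NAME:toad:toads:toad][DESCRIPTION:A small amphibian.]"
print(json.dumps(parse_raws(sample), indent=2))
```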
```gitignore
# Ignore All
*

# Allow Some
!bin/www
!db/**
!public/**
!routes/**
!util/**
!views/**
```