Madalin Ignisca (@madalinignisca), GitHub gists
madalinignisca / nextjs-deploy.md
Created November 9, 2023 20:06 — forked from jjcodes78/nextjs-deploy.md
Deploying a Next.js site with nginx + pm2

How to set up a Next.js app on nginx with Let's Encrypt

next.js, nginx, reverse-proxy, ssl

1. Install nginx and letsencrypt

$ sudo apt-get update
$ sudo apt-get install nginx letsencrypt

Also enable nginx in ufw
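The steps above can be sketched end to end. This is a minimal sketch, not the gist's full config: example.com and port 3000 are placeholders, and on newer Ubuntu releases the Let's Encrypt client is certbot (package certbot plus python3-certbot-nginx) rather than the old letsencrypt package.

```shell
# Allow HTTP/HTTPS through ufw (profile name from Ubuntu's nginx package).
sudo ufw allow 'Nginx Full'

# Minimal reverse-proxy server block; domain and upstream port are placeholders.
sudo tee /etc/nginx/sites-available/example.com > /dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # the Next.js app started by pm2
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# Obtain and install the certificate (certbot's nginx plugin edits the block above).
sudo certbot --nginx -d example.com
```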


SELECT CEILING(Total_InnoDB_Bytes*1.6/POWER(1024,3)) RIBPS FROM
(SELECT SUM(data_length+index_length) Total_InnoDB_Bytes
FROM information_schema.tables WHERE engine='InnoDB') A;

Take the value and set it as innodb_buffer_pool_size, suffixed with G (for example, 20G).
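The same rounding the query performs can be reproduced outside MySQL. A small sketch, assuming the total InnoDB bytes have already been fetched from information_schema (the 12 GiB figure below is made up):

```shell
# CEILING(bytes * 1.6 / 1024^3), mirroring the RIBPS query above.
total_innodb_bytes=12884901888   # hypothetical value of SUM(data_length+index_length)
awk -v b="$total_innodb_bytes" 'BEGIN {
    g = b * 1.6 / (1024 ^ 3)     # GiB with ~60% headroom
    printf "innodb_buffer_pool_size = %dG\n", (g == int(g)) ? g : int(g) + 1
}'
# Prints: innodb_buffer_pool_size = 20G
```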

madalinignisca / 00-README.md
Last active March 3, 2025 16:19
Minimal cloud init suitable for any public cloud server instance

Cloud init snippets

Mostly used by me for Hetzner cloud, but should work with all Public Cloud server providers.

  • cloud-init.yml
    • please make sure to set the user and group for the normal user you want to use
    • ssh_import_id can be used with GitHub usernames instead of ssh_authorized_keys
    • disable_root: true ensures root can't ssh in
  • optionally add an extra sshd config that hardens things further; tweak AllowTcpForwarding and PermitTunnel to match your requirements!
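The extra sshd config from the last bullet can be dropped in alongside the main config. The file name and the exact directive set below are illustrative, not the gist's actual snippet; adjust the two forwarding options to your needs:

```shell
# Hypothetical hardening drop-in; sshd reads /etc/ssh/sshd_config.d/*.conf on modern OpenSSH.
sudo tee /etc/ssh/sshd_config.d/90-hardening.conf > /dev/null <<'EOF'
PermitRootLogin no
PasswordAuthentication no
AllowTcpForwarding no   # set to yes if you tunnel through this host
PermitTunnel no         # set to yes if you need VPN-over-ssh
X11Forwarding no
EOF

# Validate before reloading (service may be named sshd on non-Debian distros).
sudo sshd -t && sudo systemctl reload ssh
```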
madalinignisca / README.md
Last active June 19, 2024 22:46
Kubernetes 1.25 on Fedora 37 @ Hetzner the right way

Kubernetes 1.25 on Fedora 37 @ Hetzner the right way

Using only kubeadm and helm from Fedora's repository. Instructions are for Fedora 37+ only, using the distribution's repository for installing Kubernetes.

Note: avoid the kubernetes and kubernetes-master packages, as they are deprecated and the setup is more complex.

Note: I have not yet tinkered with IPv6, but it is in my plans to do so.

The result should be a decent Kubernetes cluster, with the Cloud Controller Manager installed and controlling layer 2 private networking, allowing pods to talk at the fastest speed possible. Storage will default to Hetzner's cloud volumes.
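Installing the tooling from Fedora's own repos might look like this. The package names here are assumptions on my part, not confirmed by the gist; verify what your release actually ships before running it:

```shell
# Package names are guesses; confirm with `dnf search kubernetes` on your Fedora release.
sudo dnf install -y kubernetes-kubeadm kubernetes-node kubernetes-client helm

# kubelet must be enabled before kubeadm init can bring the node up.
sudo systemctl enable --now kubelet
```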

madalinignisca / kubeone-kubernetes-control-plane-resize.md
Created November 7, 2022 15:24
How to change KubeOne Kubernetes Control Planes size

Original

Scaling up the control plane nodes is a bit more complicated. That's because the worker nodes can be easily replaced and/or powered off, but that's not the case for the control plane nodes.

In this case, you can't rely on Terraform to scale up the control plane nodes for you, because Terraform would power off all control plane nodes at the same time, which could make etcd lose the quorum (in that case your cluster would be corrupt/lost).

I haven't tested it, but I think you might have success if you do something like this:

Drain one of the control plane nodes (kubectl drain <node>), then SSH to that node and power it off (sudo poweroff).
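Spelled out as commands, one node at a time so etcd never loses quorum. The node name cp-1 is a placeholder, and the resize itself happens in the provider's panel or API while the server is off:

```shell
# Drain workloads off the node (name is a placeholder).
kubectl drain cp-1 --ignore-daemonsets --delete-emptydir-data

# Power the node off cleanly so its etcd member steps down gracefully.
ssh cp-1 sudo poweroff

# ...resize the server via the provider's panel/API, then boot it again...

# Let the node take workloads again, then repeat for the next control plane node.
kubectl uncordon cp-1
```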

madalinignisca / hetzner-ccm-with-network.md
Created November 4, 2022 15:12
Hetzner CCM with most Kubernetes distributions
  1. Set up a network 10.0.0.0/8 with defaults (like the Hetzner panel does).
  2. Set up a few servers (I start with 1 control plane and 2 nodes), attaching them to the network (I set up a snapshot image with a common OS setup beforehand, so I usually skip the next steps).
  3. Set up containerd (I do the manual setup, plus all container network plugins and the rest of the dependencies).
  4. Set up kubeadm, kubelet and kubectl on the latest or a maintained version as per the kubernetes.io docs (again, the manual setup).
  5. Edit /etc/hosts on all servers and add all nodes -- make sure to match the names in Hetzner! (Not necessary if you are going to run a DNS server accessible in your private network.)
  6. I add a load balancer IP identical to the first control plane's, to be able to run the init; later I add the real load balancer and point to the new IP. Add it to hosts on all nodes.
  7. Example of initializing the cluster: kubeadm init --control-plane-endpoint=cplb.dev.saasified.dev --upload-certs --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-ad
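Once the cluster is initialized, the Hetzner CCM itself needs an API token secret and a deployment. A sketch using Hetzner's Helm chart; the token, network name, and chart values are placeholders and assumptions, so check the chart's documentation before relying on them:

```shell
# API token and network name are placeholders; the CCM reads both from this secret.
kubectl -n kube-system create secret generic hcloud \
  --from-literal=token=<HCLOUD_API_TOKEN> \
  --from-literal=network=<NETWORK_NAME>

# Install the CCM with private-networking support enabled.
helm repo add hcloud https://charts.hetzner.cloud
helm repo update
helm install hccm hcloud/hcloud-cloud-controller-manager \
  -n kube-system --set networking.enabled=true
```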
madalinignisca / hddperftest
Last active January 26, 2022 05:00 — forked from frozenice/hddperftest
Storage performance tests on Hetzner
# hel1, CX51, Ubuntu 20.04, 10 GB EXT4 Volume, 240 GB EXT4 Volume
# local NVME
root@ubuntu-32gb-hel1-1:~# hdparm -Tt /dev/sda
/dev/sda:
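The truncated output above comes from runs like the following. A sketch of the measurements, with the device and mount point as examples; the dd write test overwrites a 1 GiB scratch file, so point it at a disposable path:

```shell
# Cached vs. buffered read throughput (device path is an example).
sudo hdparm -Tt /dev/sda

# Rough sequential write throughput; conv=fdatasync forces the data to disk
# so the cache does not inflate the number.
dd if=/dev/zero of=/mnt/volume/testfile bs=1M count=1024 conv=fdatasync status=progress
rm /mnt/volume/testfile
```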
madalinignisca / phpmd.ruleset.xml
Created January 14, 2022 20:41
phpmd ruleset for Laravel
<?xml version="1.0"?>
<ruleset name="Sane Laravel ruleset"
         xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0
                             http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>
        This enables everything and sets some exceptions
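Once saved as phpmd.ruleset.xml, the file is passed to phpmd as its third argument. The app/ path assumes a Laravel project layout with phpmd installed via Composer:

```shell
# phpmd <path> <report-format> <ruleset>
vendor/bin/phpmd app/ text phpmd.ruleset.xml

# Or produce an HTML report instead of terminal output:
vendor/bin/phpmd app/ html phpmd.ruleset.xml > phpmd-report.html
```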
madalinignisca / ubuntu-dev-box.yml
Last active December 19, 2021 18:44
cloud init examples
#cloud-config
groups:
  - docker
users:
  - default
  - name: ubuntu
    groups:
      - docker
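A config like this can be sanity-checked before launching an instance; cloud-init ships a schema validator (the flag spelling varies slightly between cloud-init versions, and the file name here is just the gist's):

```shell
# Validate the user-data YAML against cloud-init's schema without booting anything.
cloud-init schema --config-file ubuntu-dev-box.yml
```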
madalinignisca / pritunlMigration.md
Created December 15, 2021 04:42 — forked from makenova/pritunlMigration.md
move pritunl between servers

Migrating your pritunl install between servers

This is a small write up about how to migrate your pritunl install between servers. It's not especially detailed because I'm lazy and your migration story will most likely be different. All this can be avoided by using a remote/hosted mongo instance (compose.io, mongolab, etc.) and simply pointing your pritunl instance at that. If you want more details ask, and I'll do my best to answer and update this write-up accordingly. Also, feel free to criticize my grammar and spelling.
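Since pritunl keeps all of its state in MongoDB, the move is essentially a dump-and-restore exercise. A sketch, assuming pritunl's default local MongoDB, the database name pritunl, and a reachable host new-server; all three are assumptions, so check /etc/pritunl.conf for the real connection URI:

```shell
# On the old server: stop pritunl so the data is consistent, then dump the database.
sudo systemctl stop pritunl
mongodump --db pritunl --out /tmp/pritunl-backup

# Copy the dump over (host name is a placeholder), then restore and start pritunl there.
scp -r /tmp/pritunl-backup new-server:/tmp/
ssh new-server 'mongorestore --db pritunl /tmp/pritunl-backup/pritunl && sudo systemctl restart pritunl'
```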