git flow init -d
(Omit -d if you want to select values other than the defaults.)
This initializes git-flow with its default branch names, creating the develop branch and switching to it.
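If the git-flow plugin is not installed, the same default branch layout can be reproduced with plain git. This is a sketch: the repository path, identity, and commit message are placeholders, not part of the original.

```shell
# Reproduce `git flow init -d`'s default branch layout with plain git.
# The /tmp path and user identity below are placeholders.
git init --quiet /tmp/gitflow-demo
cd /tmp/gitflow-demo
git config user.email dev@example.com
git config user.name Dev
git commit --quiet --allow-empty -m "initial commit"
git branch -M master                  # git-flow's default production branch
git checkout --quiet -b develop       # created from master and checked out
git branch --show-current             # prints: develop
```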
#!/bin/bash
#### NOTE: This is a combined image file for MAAS images, LXD images, and Juju streams.
#### The server name defaults to "images".
#### If creating an SSL cert, the following Subject Alternative Names should be used:
# DNS:$(hostname -f), DNS:$(hostname -s), DNS:maas-images, DNS:lxd-images, DNS:cloud-images, DNS:juju-images, DNS:images.linuxcontainers.org, DNS:us.images.containers.org, DNS:uk.images.containers.org, DNS:canonical.imagecontainers.org, DNS:cloud-images.ubuntu.com, DNS:streams.canonical.com
#### If using DNS poisoning/hijacking, ensure all the above domains resolve to the images container's IP address.
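For the SSL case, a self-signed certificate carrying such Subject Alternative Names can be produced along these lines. This is a sketch: the key size, validity, output paths, and the shortened SAN list are all arbitrary choices, not taken from the note above.

```shell
# Self-signed cert carrying a few of the SANs from the note above; extend
# the subjectAltName list to cover all of them. Paths/validity are arbitrary.
# -addext requires OpenSSL 1.1.1 or newer.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/images.key -out /tmp/images.crt \
  -subj "/CN=images" \
  -addext "subjectAltName=DNS:images,DNS:maas-images,DNS:lxd-images,DNS:cloud-images"
```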
-- Query the database to calculate a recommended innodb_buffer_pool_size
-- and get the currently configured value.
-- The ROLLUP row at the bottom gives the total for all DBs on the server;
-- each other row is the recommendation for a single DB.
SELECT
    TABLE_SCHEMA,
    CONCAT(CEILING(RIBPS/POWER(1024,pw)),SUBSTR(' KMGT',pw+1,1))
        Recommended_InnoDB_Buffer_Pool_Size,
    (
        SELECT CONCAT(CEILING(variable_value/POWER(1024,FLOOR(LOG(variable_value)/LOG(1024)))),SUBSTR(' KMGT',FLOOR(LOG(variable_value)/LOG(1024))+1,1))
        FROM performance_schema.global_variables  -- information_schema on MySQL < 5.7
        WHERE variable_name = 'innodb_buffer_pool_size'
    ) Currently_Configured_Size
FROM (SELECT TABLE_SCHEMA, RIBPS, FLOOR(LOG(RIBPS)/LOG(1024)) pw
      FROM (SELECT IFNULL(TABLE_SCHEMA,'TOTAL') TABLE_SCHEMA,
                   SUM(data_length+index_length)*1.1 RIBPS
            FROM information_schema.tables
            WHERE ENGINE = 'InnoDB'
            GROUP BY TABLE_SCHEMA WITH ROLLUP) AA) A;
For this configuration you can use any web server you like; I decided to use nginx because it is what I work with most.
Generally, a properly configured nginx cluster can handle 400K to 500K requests per second; the most I have seen from a single node is 50K to 80K requests per second at about 30% CPU load. Granted, that was on 2x Intel Xeon with Hyper-Threading enabled, but it will work without problems on slower machines.
Keep in mind that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
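As a starting point, a minimal tuning sketch along those lines might look like the following. The values are illustrative only, not the config used in the tests above; worker and connection counts must be sized for your own hardware, and the server name and root path are placeholders.

```nginx
# Illustrative tuning values only; size these for your own hardware.
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 4096;    # per-worker connection limit
    multi_accept on;
}

http {
    sendfile on;                # kernel-side file transfers
    tcp_nopush on;
    keepalive_timeout 15;

    server {
        listen 80;
        server_name example.com;    # placeholder name
        root /var/www/html;         # placeholder path
    }
}
```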
Here are the simple steps needed to create a deployment from your local Git repository to a server, based on this in-depth tutorial.
You are developing in a working copy on your local machine, let's say on the master branch. Most of the time, people push code to a remote service like github.com or gitlab.com and pull or export it to a production server. Alternatively, you can use a service like deepl.io to act upon a web-hook that is triggered by that push.
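The push-to-deploy pattern described above can be sketched with a bare repository and a `post-receive` hook. This is a minimal local simulation, not the tutorial's actual setup: all paths under `/tmp/deploy-demo` and the identity/branch names are placeholders.

```shell
# Sketch: push-to-deploy via a bare repo + post-receive hook.
# Everything runs locally under /tmp/deploy-demo (placeholder paths).
set -e
DEPLOY_ROOT=/tmp/deploy-demo
rm -rf "$DEPLOY_ROOT"; mkdir -p "$DEPLOY_ROOT/www"

# "Server" side: a bare repository we can push to.
git init --quiet --bare "$DEPLOY_ROOT/site.git"
cat > "$DEPLOY_ROOT/site.git/hooks/post-receive" <<EOF
#!/bin/sh
# Check the pushed master branch out into the web root.
GIT_WORK_TREE=$DEPLOY_ROOT/www git checkout -f master
EOF
chmod +x "$DEPLOY_ROOT/site.git/hooks/post-receive"

# "Local" side: commit and push to the bare repo.
git init --quiet "$DEPLOY_ROOT/work"
cd "$DEPLOY_ROOT/work"
git config user.email dev@example.com
git config user.name Dev
echo "hello" > index.html
git add index.html
git commit --quiet -m "first deploy"
git branch -M master
git push --quiet "$DEPLOY_ROOT/site.git" master

# The hook has now checked the commit out into the web root.
ls "$DEPLOY_ROOT/www"
```

On a real server you would add the bare repo as a remote (`git remote add production ssh://user@server/path/site.git`) and deploy with `git push production master`.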
parted /dev/nvme0n1 unit s p free
parted /dev/nvme0n1 print free
parted /dev/nvme0n1 mkpart primary zfs $START 100%
parted /dev/nvme0n1 print free
---
# https://gist.github.com/Miouge1/4ecced3a2dcc825bb4b8efcf84e4b17b
#!/usr/bin/perl
use strict;
use warnings;

# Auto-generate DBIx::Class result classes from the live schema
package World::Schema;
use base qw/DBIx::Class::Schema::Loader/;

package main;
use DBIx::Connector;
use DBIx::Struct;

# Keep Schema::Loader's pre-0.05 naming behaviour
$ENV{SCHEMA_LOADER_BACKCOMPAT} = 1;

# Connect to the sample "world" database with UTF-8 enabled
my $schema = World::Schema->connect("DBI:mysql:database=world", "world", "world1",
    {PrintError => 1, RaiseError => 1, mysql_enable_utf8 => 1});
box-shadow: inset 0 1px 0 rgba(255,255,255,.6), 0 22px 70px 4px rgba(0,0,0,.56), 0 0 0 1px rgba(0,0,0,.3);
Hello world:
perl -e 'print "hello world!\n"'
A simple filter:
perl -ne 'print if /REGEX/'
Filter out blank lines (in place):
perl -i -ne 'print if /\S/' file.txt