Nginx configuration: this uses the ngx_http_v2_module module; HTTPS is required, since browsers only speak HTTP/2 over TLS.
server {
    listen 443 ssl http2;
    ssl_certificate /www/web/blog/ssl/cert1_cert.pem;
    ssl_certificate_key /www/web/blog/ssl/cert1_key.pem;
}
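After reloading Nginx, HTTP/2 negotiation can be checked from the command line; a sketch assuming a curl build with HTTP/2 support (the hostname is a placeholder):

```shell
# Prints the negotiated protocol version, e.g. "2" when HTTP/2 is in use.
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/
```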
Initialize after installation. (Note: the often-quoted chmod 7777 would make dumpcap world-writable and setuid, which is insecure; restrict it to the wireshark group and grant capture capabilities instead.)
sudo groupadd wireshark
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap
sudo usermod -a -G wireshark suhua
Filter
Hosting multiple websites on a single public IP address on the standard HTTP(S) ports is relatively easy with popular web servers like Apache, Nginx and lighttpd all supporting Virtual Hosts.
For Web Services which bundle their own HTTP server, things get more complicated, unless their HTTP stack can be shared somehow. More often than not, the application's HTTP stack listens directly on a dedicated TCP port.
Hosting multiple services on a single IP then requires using a fronting server listening on the standard HTTP port, and routing to the right backend service based on the host name or the path sent by the client.
Path-based routing is cumbersome, usually requiring either the service to be aware of the path prefix, or a rewrite by the HTTP fronting server of all absolute URLs in the requests and responses.
Hostname-based routing is more straightforward. The fronting server can just look at the [HTTP/1.1 Host header](https://tools
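The Host-header mechanism can be exercised from the client side with curl; a sketch assuming a fronting server at 203.0.113.10 (a documentation address) hosting two hypothetical sites:

```shell
# Both requests hit the same IP and port; only the Host header differs,
# and the fronting server routes on it to pick the backend.
curl -H 'Host: blog.example.com' http://203.0.113.10/
curl -H 'Host: wiki.example.com' http://203.0.113.10/

# For HTTPS, --resolve pins the name to the IP so that SNI and the Host
# header both stay correct, instead of faking the header by hand:
curl --resolve blog.example.com:443:203.0.113.10 https://blog.example.com/
```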
#!/bin/bash
readonly DB_FILE="$(pwd)/images.db"
readonly IMG_DIR="$(pwd)/images"
save-images() {
    echo "Create ${DB_FILE}"
    docker images | grep -v 'IMAGE ID' | awk '{printf("%s %s %s\n", $1, $2, $3)}' | column -t > "${DB_FILE}"
    echo "Read ${DB_FILE}"
#!/bin/sh
# Use socat to proxy git through an HTTP CONNECT firewall.
# Useful if you are trying to clone git:// from inside a company.
# Requires that the proxy allows CONNECT to port 9418.
#
# Save this file as gitproxy somewhere in your path (e.g., ~/bin) and then run
#   chmod +x gitproxy
#   git config --global core.gitproxy gitproxy
#
# More details at http://tinyurl.com/8xvpny
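The script body those comments describe is not shown above; it typically boils down to a single socat invocation (the proxy host and port here are placeholders for your company's proxy):

```shell
#!/bin/sh
# git invokes core.gitproxy as: gitproxy <host> <port>,
# so $1/$2 are the git daemon's host and port (normally 9418).
# socat opens a CONNECT tunnel through the HTTP proxy and relays stdio.
exec socat STDIO "PROXY:proxy.example.com:$1:$2,proxyport=3128"
```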
#!/bin/bash
# `gitea dump` doesn't currently back up LFS data as well, only git repos.
# It primarily backs up the SQL DB, plus the config / logs.
# We'll back up like this:
#   * "gitea dump" to back up the DB, config, etc.
#   * tar / bzip all the repos, since the dump skips them
#     * Not rotated, because git data is (normally) immutable, so each archive has all data
#   * rsync LFS data directly from /volume/docker/gitea/git/lfs
#     * No need for rotation since all files are immutable
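Assembled into a script, the plan above might look roughly like this sketch (the container name, volume paths, and backup destination are assumptions echoing the comments, not tested values):

```shell
#!/bin/bash
set -euo pipefail
backup_root="/volume/backup/gitea"   # assumed destination
gitea_home="/volume/docker/gitea"    # assumed volume layout from the comments
stamp=$(date +%Y%m%d-%H%M%S)

# 1. "gitea dump" for the SQL DB, config and logs (run inside the container;
#    container name and in-container paths are assumptions).
docker exec -u git gitea /usr/local/bin/gitea dump \
    -c /data/gitea/conf/app.ini -f "/data/dump-${stamp}.zip"

# 2. tar/bzip the bare repos that the dump skips; not rotated.
tar -cjf "${backup_root}/repos-${stamp}.tar.bz2" \
    -C "${gitea_home}/git" repositories

# 3. rsync the LFS objects; no rotation, the files are immutable.
rsync -a "${gitea_home}/git/lfs/" "${backup_root}/lfs/"
```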
#!/usr/bin/php
<?php
// Generates a strong password of N length containing at least one lower case letter,
// one uppercase letter, one digit, and one special character. The remaining characters
// in the password are chosen at random from those four sets.
//
// The available characters in each set are user friendly - there are no ambiguous
// characters such as i, l, 1, o, 0, etc. This, coupled with the $add_dashes option,
// makes it much easier for users to manually type or speak their passwords.
//
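A hedged sketch of the same idea in bash: every character class is guaranteed at least once, ambiguous glyphs (i, l, 1, o, 0, I, L, O) are excluded, and the result is shuffled so the guaranteed characters are not always first. Assumes bash and coreutils (fold, shuf); the length must be at least 4.

```shell
gen_password() {
    local lower='abcdefghjkmnpqrstuvwxyz'   # no i, l, o
    local upper='ABCDEFGHJKMNPQRSTUVWXYZ'   # no I, L, O
    local digits='23456789'                 # no 0, 1
    local special='!@#$%^&*'
    local all="${lower}${upper}${digits}${special}"
    local len="${1:-12}" pw='' i
    # One guaranteed character from each set...
    pw+="${lower:$((RANDOM % ${#lower})):1}"
    pw+="${upper:$((RANDOM % ${#upper})):1}"
    pw+="${digits:$((RANDOM % ${#digits})):1}"
    pw+="${special:$((RANDOM % ${#special})):1}"
    # ...then fill the rest from the combined pool.
    for ((i = 4; i < len; i++)); do
        pw+="${all:$((RANDOM % ${#all})):1}"
    done
    # Shuffle so the guaranteed characters aren't always in front.
    printf '%s' "$pw" | fold -w1 | shuf | tr -d '\n'
}
```

`gen_password 16` writes a 16-character password to stdout (no trailing newline).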
# From http://stackoverflow.com/a/11158224
# Solution A - If the script importing the module is in a package
from .. import mymodule
# Solution B - If the script importing the module is not in a package
import os, sys, inspect
current_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parent_dir = os.path.dirname(current_dir)
sys.path.insert(0, parent_dir)
import mymodule  # now importable from the parent directory
#!/bin/bash
export LC_ALL=C
archive="$1"
container_name="root_gitea_1"
now=$(date +"%Y%m%d-%H%M%S")
# gitea_dir: the directory in the volume attached to the container that contains gitea's data directory
gitea_dir="/data/containers/gitea"
gitea_data_dir="${gitea_dir}/data"
restore_dir="/tmp/gitea-restore-${now}"
Microsoft Active Directory servers default to offering LDAP over unencrypted connections (boo!).
The steps below create a new self-signed certificate suitable for, and thus enabling, LDAPS on an AD server. Of course, the "self-signed" portion of this guide can be swapped out for a real vendor-purchased certificate if required.
The steps have been tested successfully with Windows Server 2012R2, and should work with Windows Server 2008 without modification. Requires a working OpenSSL install (ideally Linux/OSX) and (obviously) a Windows Active Directory server.
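On the OpenSSL side, the self-signed step boils down to something like the following sketch (FQDN, filenames, and export password are placeholders; a production LDAPS certificate would also carry the Server Authentication EKU and a SAN matching the server name):

```shell
# Self-signed certificate, valid 10 years; CN must be the AD server's FQDN.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj '/CN=dc1.example.com' \
    -keyout ldaps.key -out ldaps.crt

# Windows imports private keys via PKCS#12, so bundle the pair into a .pfx
# for the server's certificate store.
openssl pkcs12 -export -inkey ldaps.key -in ldaps.crt \
    -passout pass:changeit -out ldaps.pfx
```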