_config.yml
Jekyll's global configuration file. It holds site-wide settings such as the site's name, its domain, and its permalink (URL) format.
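A minimal sketch of such a `_config.yml`, covering just the settings mentioned above; the values are placeholders:

```yaml
title: My Site                     # the site's name
url: "https://example.com"         # the site's domain
permalink: /:year/:month/:title/   # link format for posts
```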
1. my+       1001. se+       2001. thedaily+   3001. empire+    4001. herb+
2. +online   1002. test+     2002. giant+      3002. +cook      4002. +teen
3. the+      1003. fish+     2003. survey+     3003. +deluxe    4003. affordable+
4. +web      1004. hk+       2004. +conference 3004. +crunch    4004. proto+
5. +media    1005. florida+  2005. twit+       3005. michigan+  4005. +ity
6. web+      1006. fine+     2006. pick+       3006. cars+      4006. myhome+
7. +world    1007. loan+     2007. +dvd        3007. +forest    4007. plastic+
8. +net      1008. page+     2008. cinema+     3008. yacht+     4008. +kc
9. go+       1009. fox+      2009. desi+       3009. +wallet    4009. +foot
10. +group   1010. +gift     2010. act+        3010. +contest   4010. +sup
package ciphers

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha512"
	"crypto/x509"
	"encoding/pem"
	"log"
)
The standard way of understanding the HTTP protocol is via the request-reply pattern. Each HTTP transaction consists of a finitely bounded HTTP request and a finitely bounded HTTP response.
However, it is also possible for both parts of an HTTP 1.1 transaction to stream their possibly unbounded data. The advantage is that the sender can send data that exceeds its own memory limit, and the receiver can act on the data as it arrives, before the transmission is complete.
const SPEEEEEED: bool = 1 == 1;

pub mod reg {
    #[derive(Clone, Copy, Debug)]
    #[repr(align(4))]
    pub enum Instruction {
        LoadInt { dst: u8, value: i16 },
    }
}
# source: https://numbers.brighterplanet.com/2012/08/21/how-to-parse-quotes-in-ragel/
examples = [
  %{''},
  %{""},
  %{' '},
  %{" "},
  %{'A'},
  %{'ABC'},
  %{'AB C'},
]
Syntax: cat <filename> | jq -c '.[] | select( .<key> | contains("<value>"))'
Example: to get the JSON record whose _id equals 611:
cat my.json | jq -c '.[] | select( ._id | contains(611))'
Remember: if the JSON value is unquoted (e.g. numeric), do not quote it in the filter either, i.e. write contains(611), not contains("611").
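A quick way to try this out, using a hypothetical my.json created just for illustration:

```shell
# Hypothetical sample data -- two records, only one with _id 611.
printf '[{"_id": 611, "name": "a"}, {"_id": 612, "name": "b"}]' > my.json

# _id is numeric, so the value in contains() is unquoted.
cat my.json | jq -c '.[] | select( ._id | contains(611))'
```

Only the record with _id 611 is printed.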
version: '2.4'
services:
  kafka:
    image: bitnami/kafka:3.1.0
    container_name: kafka
    command:
      - 'sh'
      - '-c'
      - '/opt/bitnami/scripts/kafka/setup.sh && kafka-storage.sh format --config "$${KAFKA_CONF_FILE}" --cluster-id "lkorDA4qT6W1K_dk0LHvtg" --ignore-formatted && /opt/bitnami/scripts/kafka/run.sh' # KRaft-specific initialisation
    environment:
#!/bin/sh
#
# CloudFlare Dynamic DNS
#
# Updates CloudFlare records with the current public IP address
#
# Takes the same basic arguments as A/CNAME updates in the CloudFlare v4 API
# https://www.cloudflare.com/docs/client-api.html#s5.2
#
# Use with cron jobs etc.
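The core of such a script is a single update call against the v4 API. A minimal sketch, assuming placeholder zone/record IDs, an API token, and one common public-IP lookup service (all of these are illustrative, not values from the script itself):

```shell
# Hypothetical IDs and token -- replace with real values.
ZONE_ID="your-zone-id"
RECORD_ID="your-record-id"
AUTH_TOKEN="your-api-token"
RECORD_NAME="example.com"

# Discover the current public IP (api.ipify.org is one such service).
IP=$(curl -s https://api.ipify.org)

# Build the JSON payload for an A-record update.
DATA="{\"type\":\"A\",\"name\":\"${RECORD_NAME}\",\"content\":\"${IP}\"}"

# PUT the new address to the CloudFlare v4 API.
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${AUTH_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "${DATA}"
```

Run from cron, this keeps the A record pointed at whatever address the host currently has.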