Note: This guide assumes that Azure CLI 2.0 is installed and that you are familiar with Azure concepts.
The APP_ID_URI needs to match the value expected in client request calls.
$ az ad sp create-for-rbac --name [APP_ID_URI] --password [PASSWORD]
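For example, if clients request tokens for the identifier https://myapp.example.com (a hypothetical URI used here purely for illustration), the service principal would be created with that same value:
$ az ad sp create-for-rbac --name https://myapp.example.com --password [PASSWORD]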
Sergej Müller has announced his departure from WordPress. So that his highly regarded free plugins do not disappear, we have formed a team for this purpose. It is important to us to preserve the plugins' data-protection compliance and to prevent commercial use of the domains.
Together with several people from the German-speaking community, we are continuing the 11 free plugins. They are, in detail:
events {
    worker_connections 1024;
}

http {
    default_type text/html;
    access_log /dev/stdout;
    sendfile on;
    keepalive_timeout 65;

    # Note (November 2016):
    # This config is rather outdated and left here for historical reasons, please refer to prerender.io for the latest setup information
    # Serving static html to Googlebot is now considered bad practice as you should be using the escaped fragment crawling protocol
    server {
        listen 80;
        listen [::]:80;
        server_name yourserver.com;
        root /path/to/your/htdocs;
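        # The rest of the original server block is not shown above. As a rough
        # sketch of the _escaped_fragment_ handling that the note refers to
        # (the /snapshots path and the /index.html fallback are assumptions,
        # not part of the original config), requests carrying that query
        # parameter could be rewritten to prerendered static pages:
        location / {
            if ($args ~ "_escaped_fragment_") {
                # serve the prerendered static copy of the requested page
                rewrite ^(.*)$ /snapshots$1 break;
            }
            try_files $uri $uri/ /index.html;
        }
    }
}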
# Using Scrapy with Selenium to scrape a rendered page [Updated]
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from selenium import selenium
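The spider itself is not shown above. As a minimal sketch of how these pieces fit together (using the same pre-1.0 Scrapy and Selenium RC APIs as the imports, with the spider name, URLs, and XPath as placeholder values, and assuming a Selenium RC server listening on localhost:4444), the browser renders the page and the rendered HTML is handed to Scrapy's selector:

class RenderedPageSpider(BaseSpider):
    name = 'rendered_page'
    start_urls = ['http://example.com/']

    def __init__(self, *args, **kwargs):
        BaseSpider.__init__(self, *args, **kwargs)
        # Selenium RC client: host, port, browser launch command, base URL
        self.browser = selenium('localhost', 4444, '*firefox', 'http://example.com/')
        self.browser.start()

    def parse(self, response):
        # Let the browser render the page, then parse the rendered source
        self.browser.open(response.url)
        self.browser.wait_for_page_to_load('10000')
        hxs = HtmlXPathSelector(text=self.browser.get_html_source())
        titles = hxs.select('//h1/text()').extract()
        self.log('Extracted titles: %s' % titles)
        # call self.browser.stop() when the spider is closed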
The list will not be updated for now. Please don't write comments.
The count of contributions (the sum of pull requests, opened issues, and commits) to public repositories on GitHub.com from Wed, 21 Sep 2022 to Thu, 21 Sep 2023.
Because of GitHub search limitations, only the first 1,000 users by number of followers are included. If you are not on the list, you don't have enough followers. See the raw data and source code. Algorithm in pseudocode:
githubUsers
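The pseudocode is cut off above. A rough Python sketch of the ranking rule described in the text might look like the following, where fetch_top_users_by_followers and count_contributions are hypothetical helpers standing in for the actual GitHub API calls:

from datetime import date

def rank_contributors():
    # GitHub search caps results, so only the 1000 most-followed users are considered
    github_users = fetch_top_users_by_followers(limit=1000)       # hypothetical helper
    since, until = date(2022, 9, 21), date(2023, 9, 21)
    ranked = []
    for user in github_users:
        # contributions = pull requests + opened issues + commits to public repos
        contributions = count_contributions(user, since, until)   # hypothetical helper
        ranked.append((user, contributions))
    # order users by total contributions, highest first
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked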
#!/bin/bash
#[email protected]
USER='uname'                      # MySQL user name
PASS='password'                   # MySQL password (passed on the command line, so visible in the process list)
DBNAME='dbname'                   # database to back up
DATE=$(date +%y%m%d.%H%M)         # timestamp used in the dump file name

# back up the database as a gzipped SQL dump
mysqldump --default-character-set=utf8 --skip-extended-insert -u"$USER" -p"$PASS" "$DBNAME" | gzip > "$DATE.stage.sql.gz"