One thing I have to mention: this Gist is a personal test environment. Some things may be configured in a way you don't want them to be.
Add the host entries to your system's hosts file (or set up DNS) so the service hostnames resolve.
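For example, if everything runs on your local machine, the hosts entries might look like the following (the hostnames come from this walkthrough; mapping them all to 127.0.0.1 is an assumption for a local setup):

```
127.0.0.1  grafana_http
127.0.0.1  jaeger_http
127.0.0.1  ratel_http
127.0.0.1  zero_grpc
```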
Go to http://grafana_http:3000/ and import grafana_dgraph.json.
You have HTTP and gRPC endpoints. You can remove zero_grpc, zero1, and alpha1 if you don't need them. You also have jaeger_http and ratel_http.
You can run
docker-compose logs -f alpha1
in a new terminal to follow a specific container's logs. Note that you can't get logs from every container, because logging is deactivated for some of them.
Use docker container prune
to delete all stopped containers in your Docker.
Use docker volume prune
to delete all unused volumes.
Download this gist's ZIP file and extract it to a directory called dgraph-nginx.
mkdir dgraph-nginx
cd dgraph-nginx
wget -O dgraph-nginx.zip https://gist.github.com/danielmai/0cf7647b27c7626ad8944c4245a9981e/archive/5a2f1a49ca2f77bc39981749e4783e3443eb3ad9.zip
unzip -j dgraph-nginx.zip
This creates two files: docker-compose.yml and nginx.conf.
Start the 6-node Dgraph cluster (3 Dgraph Zero nodes, 3 Dgraph Alpha nodes, with a replication setting) by starting the Docker Compose config:
docker-compose up
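The real docker-compose.yml ships with the gist; abridged, its shape is roughly the following (the exact flags and image tags here are an assumption based on this walkthrough, not a copy of the file):

```yaml
version: "3"
services:
  zero1:
    image: dgraph/dgraph:latest
    command: dgraph zero --my=zero1:5080 --replicas 3 --idx 1
  # zero2 and zero3 follow the same pattern with --idx 2 and --idx 3

  alpha1:
    image: dgraph/dgraph:latest
    command: dgraph alpha --my=alpha1:7080 --zero=zero1:5080
  # alpha2 and alpha3 follow the same pattern

  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "9080:9080"
```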
In a different shell, run the dgraph increment (docs) tool against the Nginx gRPC load balancer (nginx:9080):
docker-compose exec alpha1 dgraph increment --alpha nginx:9080 --num=10
If you have dgraph installed on your host machine, then you can also run this from the host:
dgraph increment --alpha localhost:9080 --num=10
The increment tool uses the Dgraph Go client to establish a gRPC connection to the address given by the --alpha flag and transactionally increments a counter predicate --num times.
In the Nginx access logs (in the docker-compose up shell window), you'll see entries like the following:
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.7:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.008 msec 1579057922.135 request_time 0.009
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.2:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.012 msec 1579057922.149 request_time 0.013
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.5:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.008 msec 1579057922.162 request_time 0.012
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.7:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.012 msec 1579057922.176 request_time 0.013
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.2:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.012 msec 1579057922.188 request_time 0.011
nginx_1 | [15/Jan/2020:03:12:02 +0000] 172.20.0.9 - - - nginx to: 172.20.0.5:9080: POST /api.Dgraph/Query HTTP/2.0 200 upstream_response_time 0.016 msec 1579057922.202 request_time 0.013
The logs show that Nginx load-balanced the traffic across the upstream addresses defined in alpha_grpc in nginx.conf:
nginx to: 172.20.0.7
nginx to: 172.20.0.2
nginx to: 172.20.0.5
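The alpha_grpc upstream in nginx.conf (the real file is in the gist) looks roughly like this, using Nginx's grpc_pass to proxy gRPC over HTTP/2; the exact listen port and server names here are an assumption based on this walkthrough:

```nginx
upstream alpha_grpc {
    server alpha1:9080;
    server alpha2:9080;
    server alpha3:9080;
}

server {
    listen 9080 http2;
    location / {
        grpc_pass grpc://alpha_grpc;
    }
}
```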
By default, Nginx load balancing is done round-robin.
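Round-robin simply hands each request to the next server in the upstream list, wrapping around at the end, which matches the .7 → .2 → .5 rotation in the logs above. A minimal sketch of the idea (this illustrates the concept only, not Nginx's actual implementation):

```python
from itertools import cycle

# The three upstream addresses seen in the access logs above.
upstreams = cycle(["172.20.0.7:9080", "172.20.0.2:9080", "172.20.0.5:9080"])

# Each incoming request goes to the next server in the cycle.
for _ in range(6):
    print("nginx to:", next(upstreams))
```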
1. Add a data source and select Prometheus.
2. In the "HTTP" section, set the URL to http://prometheus:9090.
3. Go to "Import" and paste the JSON file grafana_dgraph.json.
Done.
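Instead of clicking through the UI, the Prometheus data source can also be provisioned from a file. A sketch, assuming Grafana's standard provisioning format (the file path is an assumption):

```yaml
# provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```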