@aKamrani
Last active September 17, 2024 15:32
HashiCorp Boundary - Docker & Configs - Master/Worker Architecture (further notes on the Telegram DevOps channel)
This includes the Boundary configs.
Setting up Boundary:
Place the Docker Compose file and the other files (keygen.py, etc.) in /opt/boundary.
Place the boundary.hcl file at /opt/boundary/boundary-data/boundary.hcl.
Open the Docker Compose file and uncomment the db-init block, along with the depends_on block in the boundary service itself.
In boundary.hcl, adjust the addresses and the other settings as needed.
Run docker-compose up -d to bring the service up.
In docker-compose.yml, re-comment the db-init block and the boundary service's depends_on block so that later runs of Docker Compose do not reinitialize the database.
Your Boundary service is now up; log into its admin panel on port 9200 to create the admin account and complete the initial configuration.
Note: boundary.hcl also contains a worker block that runs a worker locally alongside the Boundary server. If you don't need it, you can comment it out.
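The steps above can be sketched as a shell transcript (paths are the ones used in this guide; the copy steps are placeholders for however you transfer the files):

```shell
# Create the layout described above:
mkdir -p /opt/boundary/boundary-data
# ...copy docker-compose.yml and keygen.py into /opt/boundary,
# ...copy boundary.hcl into /opt/boundary/boundary-data/

cd /opt/boundary
python3 keygen.py          # generate the base64 keys for the kms blocks

# First run only: db-init and depends_on uncommented, so the DB is initialized
docker-compose up -d

# Afterwards, re-comment db-init/depends_on in docker-compose.yml
# so later runs do not reinitialize the database.
```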
Add Worker:
Steps to add a worker to Boundary:
Place the Docker Compose file in /opt/boundary-worker/ and the worker.hcl file in /opt/boundary-worker/boundary-data/.
In worker.hcl, make the necessary changes, including the IP addresses.
Run docker-compose up -d to start the service.
Once the service is running, use docker-compose logs to copy the registration request token from the Boundary logs, paste it into the add-worker section of the management panel on the Boundary controller we set up in the first step, and register the worker. (See the picture.)
Note: In the tags section of worker.hcl, put the values you will need to identify the worker later.
For example, I use location to record where the worker is deployed:
tags {
  location = ["parsonline"]
}
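As an alternative to the admin panel, the registration step can also be done from the CLI. A hedged sketch (the log-grep string and the worker-led registration flow are assumptions based on recent Boundary versions; verify against your release):

```shell
cd /opt/boundary-worker
docker-compose up -d

# Find the registration request token in the worker logs:
docker-compose logs | grep -i -A 2 "registration request"

# From a shell authenticated against the controller, register the worker:
boundary workers create worker-led \
  -worker-generated-auth-token=<TOKEN-FROM-LOGS>
```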
Adding Target:
To add a target, go to the Boundary master management panel and log in as a system administrator.
Create an organization and a project (if they do not already exist for your desired target).
Then, from the Targets tab, create a target and enter its IP address and port number. (The port can belong to any TCP-based service, such as SSH, RDP, etc.)
To restrict access to this target to a specific worker, enable the Egress Worker Filter toggle in the Workers section and filter for your worker by the tags you assigned to it. (See the picture.)
For example, since I had given the worker the location tag with the value parsonline, I set the filter like this:
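The screenshot is not reproduced here; based on Boundary's worker-filter expression syntax, a filter matching that tag would look like:

```
"parsonline" in "/tags/location"
```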
Then we can access this target using the Boundary desktop client or the CLI client.
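A sketch of the CLI path (the auth-method and target IDs are placeholders you would read from the admin panel):

```shell
# Authenticate against the controller's API listener (port 9200):
boundary authenticate password \
  -addr=http://<MASTER-IP>:9200 \
  -auth-method-id=<AUTH_METHOD_ID> \
  -login-name=admin

# Proxy a session to the target (e.g. an SSH target):
boundary connect ssh -target-id=<TARGET_ID>
```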
# keygen.py
import base64
import os

def generate_encryption_key(name):
    key = os.urandom(32)
    encoded_key = base64.b64encode(key).decode("utf-8")
    print("Base64-encoded encryption key for {}: {}".format(name, encoded_key))

keys = ["global", "worker", "recovery"]
for key in keys:
    generate_encryption_key(key)
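A quick sanity check for the generated keys: Boundary's aes-gcm KMS blocks expect a base64 string that decodes to exactly 32 raw bytes (256 bits). A minimal check using the same stdlib calls as keygen.py:

```python
import base64
import os

def generate_encryption_key() -> str:
    """Return a base64-encoded 256-bit key, as keygen.py does."""
    return base64.b64encode(os.urandom(32)).decode("utf-8")

key = generate_encryption_key()
decoded = base64.b64decode(key)
print(len(decoded))  # 32 -- a valid aes-gcm key decodes to 32 bytes
```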
# docker-compose.yml (controller)
version: "3.8"
services:
  db:
    image: postgres
    container_name: db
    restart: always
    # ports:
    #   - 5432:5432
    environment:
      - POSTGRES_DB=boundary
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=<DB-PASSWORD>
    network_mode: "host"
    volumes:
      - ./db-data:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres" ]
      interval: 3s
      timeout: 20s
      retries: 15

  # Uncomment for the first run only (database initialization):
  # db-init:
  #   image: hashicorp/boundary
  #   container_name: db-init
  #   command: [ "database", "init", "-config", "/boundary/boundary.hcl" ]
  #   volumes:
  #     - "./boundary-data/:/boundary:ro,z"
  #   environment:
  #     - BOUNDARY_POSTGRES_URL=postgresql://postgres:<DB-PASSWORD>@db/boundary?sslmode=disable
  #   cap_add:
  #     - IPC_LOCK
  #   network_mode: "host"
  #   depends_on:
  #     db:
  #       condition: service_healthy

  boundary:
    image: hashicorp/boundary
    restart: always
    container_name: boundary
    command: [ "server", "-config", "/boundary/boundary.hcl", "--log-level=trace" ]
    volumes:
      - "./boundary-data/:/boundary/"
    # ports:
    #   - "9200:9200"
    #   - "9201:9201"
    #   - "9202:9202"
    #   - "5000:5000"
    environment:
      - BOUNDARY_POSTGRES_URL=postgresql://postgres:<DB-PASSWORD>@db/boundary?sslmode=disable
      - HOSTNAME=boundary
      - BOUNDARY_LOG_LEVEL=trace
    cap_add:
      - IPC_LOCK
    network_mode: "host"
    # depends_on:
    #   db-init:
    #     condition: service_completed_successfully
    healthcheck:
      test: [ "CMD", "wget", "-O-", "http://boundary:9200" ]
      interval: 3s
      timeout: 20s
      retries: 15
# boundary.hcl
disable_mlock = true

controller {
  name        = "docker-controller"
  description = "Docker-Controller"
  database {
    url = "env://BOUNDARY_POSTGRES_URL"
  }
}

# If you don't want a local worker on the server, comment out this block:
worker {
  name        = "local-worker"
  description = "Local-Worker"
  public_addr = "<IP-ADDRESS>" # address which clients connect to (if behind NAT)
}

# port 9200 - API
listener "tcp" {
  address     = "<IP-ADDRESS>" # real address of the server (the Docker host, not the container)
  purpose     = "api"
  tls_disable = true
}

# port 9201 - worker/server (cluster)
listener "tcp" {
  address     = "<IP-ADDRESS>"
  purpose     = "cluster"
  tls_disable = true
}

# port 9202 - clients connect to this
listener "tcp" {
  address     = "<IP-ADDRESS>"
  purpose     = "proxy"
  tls_disable = true
}

// You can generate the keys with `python3 keygen.py`
// Ref: https://www.boundaryproject.io/docs/configuration/kms/aead
kms "aead" {
  purpose   = "root"
  aead_type = "aes-gcm"
  key       = "IVDvkRcDLv7xS4rlQaJfTHGPw63LYkz9Ouj5471Am6M="
  key_id    = "global_root"
}

kms "aead" {
  purpose   = "worker-auth"
  aead_type = "aes-gcm"
  key       = "p5vSHEYcGWyVIxnNPOP3EUf+HnI8YkhGsfqJ3PBOpHo="
  key_id    = "global_worker-auth"
}

kms "aead" {
  purpose   = "recovery"
  aead_type = "aes-gcm"
  key       = "U3CQB6sOfW33zZIpcrAF4ZwZsbLpVe+X1M7kZag9DIs="
  key_id    = "global_recovery"
}
# docker-compose.yml (worker)
version: "3.8"
services:
  boundary-worker:
    image: hashicorp/boundary
    restart: always
    container_name: boundary-worker
    command: [ "server", "-config", "/boundary/worker.hcl", "--log-level=trace" ]
    volumes:
      - "./boundary-data/:/boundary/"
      - "./boundary-data/config:/boundary-hcp-worker/config"
      - "./boundary-data/logs:/var/log/boundary"
      - "./boundary-data/file:/boundary-hcp-worker/file"
    environment:
      - HOSTNAME=boundary
      - BOUNDARY_LOG_LEVEL=trace
    cap_add:
      - IPC_LOCK
    ports:
      - "9202:9202"
    healthcheck:
      test: [ "CMD", "wget", "-O-", "http://boundary-worker:9200" ]
      interval: 3s
      timeout: 20s
      retries: 15
# worker.hcl
# disable memory from being swapped to disk
disable_mlock = true

# listener denoting this is a worker proxy
listener "tcp" {
  address     = "0.0.0.0:9202"
  purpose     = "proxy"
  tls_disable = true
}

# worker block for configuring the specifics of the worker service
worker {
  public_addr       = "172.30.10.2"
  initial_upstreams = ["<MASTER-CONTROLLER-IP-ADDRESS>:9201"]
  auth_storage_path = "/boundary-hcp-worker/file/auth-storage"
  tags {
    location = ["parsonline"]
  }
}

## Events (logging) configuration. This
## configures logging for ALL events to both
## stderr and a file at /var/log/boundary/<boundary_use>.log
#events {
#  audit_enabled        = true
#  sysevents_enabled    = true
#  observations_enabled = true
#  sink "stderr" {
#    name        = "all-events"
#    description = "All events sent to stderr"
#    event_types = ["*"]
#    format      = "cloudevents-json"
#  }
#  sink {
#    name        = "file-sink"
#    description = "All events sent to a file"
#    event_types = ["*"]
#    format      = "cloudevents-json"
#    file {
#      path      = "/var/log/boundary"
#      file_name = "egress-worker.log"
#    }
#    audit_config {
#      audit_filter_overrides {
#        sensitive = "redact"
#        secret    = "redact"
#      }
#    }
#  }
#}

# kms block for encrypting the authentication PKI material
kms "aead" {
  purpose   = "worker-auth-storage"
  aead_type = "aes-gcm"
  key       = "p5vSHEYcGWyVIxnNPOP3EUf+HnI8YkhGsfqJ3PBOpHo="
  key_id    = "global_worker-auth"
}
[Image: architecture diagram "Arch" (photo_2024-09-10_12-54-35)]

aKamrani commented Sep 10, 2024

iptables for NAT

CHAIN = FILTER
### NAT to Boundary Worker Parsonline ###
-A FORWARD -s 172.30.10.1/24 -d 172.16.30.3/32 -j ACCEPT
-A FORWARD -s 172.16.30.3/32 -d 172.30.10.1/24 -j ACCEPT
-A FORWARD -p tcp -s 172.30.11.2 -d 172.16.30.3 --dport 9202 -j ACCEPT
-A FORWARD -p tcp -d 172.30.11.2 -s 172.16.30.3 --sport 9202 -m state --state ESTABLISHED,RELATED -j ACCEPT
#########################################
 
CHAIN = NAT
### NAT to Boundary Worker Parsonline ###
-A POSTROUTING -p tcp -d 172.16.30.3 --dport 9202 -j MASQUERADE
-A PREROUTING -p tcp --dport 9202 -j DNAT --to-destination 172.16.30.3:9202
#########################################

In this scenario, the node IP addresses are as follows:

Master, client-facing interface (the one clients use to reach the master) = 172.30.10.1
Master, worker-facing interface (the one the master uses to reach the worker) = 172.30.11.2
Worker = 172.16.30.3
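Two details the rules above rely on (the persistence path assumes the iptables-persistent package; your distro may differ):

```shell
# IP forwarding must be enabled on the NAT host, or the FORWARD rules never match:
sysctl -w net.ipv4.ip_forward=1

# Persist the rules across reboots (iptables-persistent layout assumed):
iptables-save > /etc/iptables/rules.v4
```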

aKamrani commented Sep 17, 2024

Boundary Role

To allow a user to access certain targets:

After creating the role in the admin panel in the global scope, run these commands to make the role effective.
Add the grant scope named "descendants":
boundary roles read -id <ROLE_ID>
boundary roles set-grant-scopes -grant-scope-id=descendants -id=<ROLE_ID>
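To actually let a user open sessions, the role also needs principals and grants. A hedged sketch with placeholder IDs (grant syntax per recent Boundary versions; older releases use id= instead of ids=):

```shell
# Attach the user to the role:
boundary roles add-principals -id=<ROLE_ID> -principal=<USER_ID>

# Grant session authorization on a specific target:
boundary roles add-grants -id=<ROLE_ID> \
  -grant='ids=<TARGET_ID>;actions=authorize-session'
```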

For adding Vault as a credential broker, see:

https://security.theodo.com/en/blog/vault-credential-broker