Master-minion architecture
- ZeroMQ based transport
  - network topology
  - concurrency
- MessagePack based serialization/compression - JSON compatible
- ports: 4505 (publish port, master -> all minions), 4506 (dedicated two-way request/reply channel)
Other architectures
- masterless (salt-call --local, no master needed)
- multi-master (HA)
- Syndic (hierarchy of masters), allows logical grouping, allows scale
- salt-ssh (agentless, over SSH); can be mixed with ZeroMQ minions (some managed via ZeroMQ, some via SSH)
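A masterless minion resolves its files locally instead of asking a master; a minimal config sketch (paths are the common defaults, values illustrative):

```yaml
# /etc/salt/minion  (masterless sketch)
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
```

With this in place, states are applied via `salt-call --local state.sls <name>`.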
Transport
- ZeroMQ
- raw tcp with tornado
- HTTPS via salt-api
Config files: /etc/salt/{master,minion}
/etc/salt/minion_id
  => cached minion_id, derived from the FQDN
/etc/salt/{master.d,minion.d}/*.conf
  => configuration overrides; the name of the file does not matter
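As a sketch, an override dropped into one of the .d directories (file name is arbitrary, values illustrative):

```yaml
# /etc/salt/minion.d/99-local.conf  (any *.conf name works)
log_level: debug
```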
Logs:
/var/log/salt/minion
Salt can also pull config data dynamically and securely from outside sources, like etcd, Vault, etc.
/etc/salt/pki
/etc/salt/pki/master
/etc/salt/pki/master/minions
  => accepted minion keys
The shared AES key is rotated every 24 hours, or whenever a minion key is deleted/rejected.
salt-key
  == tool for salt key operations (--help to see all options)
salt-key -F                : show key fingerprints
salt-key -a <id> | -A      : accept one minion key / all pending keys
salt-key --gen-keys=<name> : generate a keypair
- Targeting info is received by all minions.
- Target matching is determined at each minion (allows scale).
salt <target> <function> [args]
Check different target grouping options
salt -L jerry,stuart test.ping
-L : list
-C : compound [and, or, not, etc.]
-N : nodegroups (a saved list of targets)
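Nodegroups are defined in the master config; a sketch (group names and targets are illustrative):

```yaml
# /etc/salt/master.d/nodegroups.conf  (illustrative file name)
nodegroups:
  web: 'L@jerry,stuart'                  # list target
  debs: 'G@os_family:Debian and web*'    # compound target
```

Then target the group with: salt -N web test.ping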
Grains
- Static information about the minion
- generated at startup by running python functions that return dicts, which are merged into one larger dict
- cached on the master
- custom grain values or modules can be written
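A custom grain module is a Python file on the master's file server; a minimal sketch (the path follows Salt's `_grains` convention, the module name and naming-scheme logic are illustrative):

```python
# /srv/salt/_grains/site.py
# Each public function returns a dict; Salt merges them all into the
# minion's grains at startup (or after saltutil.sync_grains).

import socket

def site_info():
    """Derive a 'site' grain from the hostname prefix (assumed naming scheme)."""
    host = socket.gethostname()
    return {"site": host.split("-")[0]}
```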
[Min] salt-call --local grains.items
salt-call --local grains.setval foo Foo
[Mas]
salt -G os_family:Debian test.ping
salt -C 'G@os_family:RedHat and stu*' test.ping
Salt command structure
salt target module.function args
flow vs state
- flow: something that happens once (run something)
- state: a condition to maintain and enforce over time
execution modules: live in salt.modules
  return JSON-serializable data
The Salt loader loads the modules into memory; which modules/functions are available depends on the OS, installed software, etc.
Salt aliases
- pkg
- cmd - [ cmdmod ]
- user
salt jerry sys.doc test.ping
salt jerry sys.doc test | less
sys.list_modules
sys.list_functions
pkg.list_pkgs : queries the OS for which packages are installed
salt -L jerry,stuart pkg.list_pkgs --out=txt| grep wget | cut -c 20
pkg.install
user.list_users
sys.doc user | less
- init system operations
- System information/performance
test.versions_report
cmd.run cmd.run_all
cmd.script salt://myscript.sh
grains.items
grains.setval <key> <value>
grains.get os_family
grains.get selinux:enforced
Salt internals as execution modules
cp.list_master [useful with states]
match.* : used for testing target matching
salt -L stuart,jerry match.list 'stuart,jerry'
salt jerry network.ip_addrs
- Salt is both push (salt, from the master) and pull (salt-call, from the minion)
salt-call
  => runs execution modules directly on the minion (does not go through the salt-minion daemon)
  => talks to the master unless --local is used
Debug:
salt-call cmd.run 'ls /etc/salt' -l debug
salt-call network.netstat -l debug
flow:  - one-off  - ephemeral
state: - consistent over time  - enforcing
Ordering is consistent; the state tree is compiled locally on the minion.
/srv/salt
directory for states
YAML format (superset of JSON)
apache.sls:
install_apache:          # Human-friendly name (state ID)
  pkg.installed:
    - name: apache2
    - version: 1.3.3
start_apache:
  service.running:
    - name: apache2
    - enable: True
welcome_page:
  file.managed:
    - name: /var/www/html/index.html
    - contents: |
        <!doctype html>
        <body><h1>Hello world</h1></body>
run:
salt jerry state.sls apache
state.show_sls  state.show_low_sls  (inspect the compiled state data)
State function return structure:
- result
- changes
- comment
Dry run: set test=true
salt jerry state.sls apache test=true
To develop and run the state file locally: salt-call --local state.sls apache (alternatively set file_client: local in the minion config)
Debug: salt-call --local state.sls apache -l debug
Renderers (shebang on the first line of the .sls file):
#!jinja|yaml   (the default)
#!py
- Advice: keep Jinja in states limited to simple variable use
Custom exec modules: /srv/salt/_modules/myutil.py
salt '*' saltutil.sync_modules
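A minimal sketch of such a custom module (function name and behavior are illustrative); after syncing, every public function here becomes callable as `myutil.<function>`:

```python
# /srv/salt/_modules/myutil.py  (path per Salt convention)
# Public functions become available on minions after:
#   salt '*' saltutil.sync_modules

def hello(name="world"):
    """Illustrative function: salt '*' myutil.hello name=Jerry"""
    return "Hello, {}!".format(name)
```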
Pillar
- Arbitrary minion-specific data
- secure data (like a private key / password)
- data from external places (external pillar modules)
- pillar is kept in memory on the minion
- pillar is cached on the master
- pillar must have a top file
/srv/pillar/top.sls:
base:
  '*':
    - name
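The top file above pulls in a `name` pillar file; a minimal sketch (the value is illustrative):

```yaml
# /srv/pillar/name.sls
name: Jerry
```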
salt '*' saltutil.refresh_pillar
salt '*' state.sls apache pillar='{"name": "Override"}'
Note: this is seen by all minions, so be careful about sensitive data.
Note: Take a look at the pillar templating example via lookup table in video 4_5
name: {{ name | json() }}
# Jinja to YAML: call json() to avoid any strange quoting issues.
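Why json() helps: JSON is a subset of YAML, so a JSON-encoded scalar is always a safe YAML value. A quick illustration in plain Python (a stand-in for what the Jinja filter does):

```python
import json

# A value that would break naive YAML interpolation (colon + quote)
name = 'O"Brien: ops'

# Unsafe: raw interpolation yields ambiguous/invalid YAML
print("name: " + name)

# Safe: JSON-encode the scalar first (what the json() Jinja filter does)
print("name: " + json.dumps(name))
```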
salt.states.test
(In the state file)
check_pillar_values:
  test.check_pillar:
    - present:
      - name
    - failhard: True   # Stop on failure
gitfs_remotes:
  - https://<>
salt jerry cp.list_master
  => lists all the files served by the master (good for checking new gitfs/pillar/state files are available)
salt jerry cp.list_states
salt-call state.show_sls <state> -l debug
Inside the sls files, the Jinja context is: {{ show_full_context().keys() }}
output_jinja:
  file.managed:
    - name: /tmp/jinja_output
    - contents: |
        My Jinja variable 'foo' is :
        {{ foo | json() }}
To run other sls files as part of this one, use an include statement at the beginning:
include:
  - apache   # apache/init.sls is used
Note: don't depend on includes for ordering.
If you add a top.sls file, you can apply a highstate (the complete state of how things should be):
base:
  '*':
    - apache
    - apache.welcome
  'db*':
    - postgres
Then run: salt jerry state.highstate
Top files are processed on the minions.
Run an alternate top file with state.top <top file name>