terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
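With the provider version pinned this way, terraform init (run in the same directory) downloads a matching hashicorp/aws provider before plan or apply:

terraform init
terraform plan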
I know C#, but I never managed to spend the time to learn a full web framework and all the internals of building a web site. That didn't stop me from having ideas for sites, even simple ones.
This write-up collects my notes extending the MSFT Blazor tutorial: https://dotnet.microsoft.com/learn/aspnet/blazor-tutorial/run
dotnet run
Running the Blazor site, I discovered Kestrel (ASP.NET Core's built-in web server, which was new to me) wouldn't bind to TCP port 5000, the port noted in ./BlazorApp/Properties/launchSettings.json.
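If something else already owns port 5000, one way around it is changing the profile's applicationUrl in that file. A minimal sketch of the relevant part of launchSettings.json, assuming a profile named BlazorApp (the profile name and port 5050 are placeholders here):

{
  "profiles": {
    "BlazorApp": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5050",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}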
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
wsl --set-default-version 1
$body = @{
    "username" = "[email protected]"
    "password" = "password"
}
$LoginResponse = Invoke-WebRequest 'https://api.splunk.com/2.0/rest/login/splunk' -SessionVariable 'Session' -Body $body -Method 'POST'
$Session
$headers = @{
yum -y update
reboot
yum -y install wget
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install virtualenv
mkdir ~/virtualenvs
cd ~/virtualenvs
virtualenv slim
{"phases": [], "container": {"node_guid": null, "in_case": false, "sensitivity": "amber", "create_time": "2019-06-13T18:28:36.836633Z", "tenant_id": 0, "role_id": null, "id": 105, "custom_fields": {}, "asset_id": null, "close_time": null, "open_time": "2019-06-13T18:30:53.840601Z", "status_id": 2, "container_type": "default", "closing_owner_id": null, "current_phase_id": null, "due_time": "2019-06-14T06:27:45.276000Z", "version": 1, "workflow_name": "", "owner_id": 1, "status": "open", "owner_name": null, "hash": "9e4458b9791d28101e5b3c1788fce582", "description": "A file download has been detected by network scan", "tags": [], "start_time": "2019-06-13T18:28:36.846066Z", "severity_id": "medium", "kill_chain": null, "artifact_update_time": "2019-06-13T18:32:15.408501Z", "artifact_count": 5, "parent_container_id": null, "data": {}, "name": "File Downloaded by HTTP", "ingest_app_id": null, "label": "events", "source_data_identifier": "e76431b6-c725-4981-9703-d27e0374693c", "end_time": null, "closing_rule_run_id" |
export SPLUNK_HOME=/opt/home/matt_b/splunkforwarder
$SPLUNK_HOME/bin/splunk stop
rm -rf $SPLUNK_HOME/etc/passwd
echo '[user_info]' > $SPLUNK_HOME/etc/system/local/user-seed.conf
echo 'USERNAME = admin' >> $SPLUNK_HOME/etc/system/local/user-seed.conf
echo 'PASSWORD = NEW_PASSWORD' >> $SPLUNK_HOME/etc/system/local/user-seed.conf
$SPLUNK_HOME/bin/splunk start
# reset password:
$SPLUNK_HOME/bin/splunk edit user admin -password 'splunk3du' -auth admin:NEW_PASSWORD
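For reference, the user-seed.conf produced by the echo lines above looks like this; with etc/passwd removed, Splunk reads it on the next start to recreate the admin credentials (NEW_PASSWORD stays a placeholder):

[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD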
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Protectagainstlossofin-flightdata
How can data loss be avoided?
The architecture requires that UDP data sources be converted to TCP, which gives reliable delivery. Additionally, the forwarders and indexers may be configured so that indexers send application-level ACKs back to the sending forwarders.
With acknowledgment enabled, splunkd delivers data roughly as follows: the forwarder sends a block of events and keeps a copy in a wait queue; the indexer parses the block, writes it out, and returns an ACK; the forwarder then drops its copy, and resends the block if no ACK arrives.
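A minimal outputs.conf sketch for the forwarder side of that setup, assuming a single indexer at idx.example.com:9997 (the group name, host, and port are placeholders):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx.example.com:9997
# request indexer acknowledgment so blocks are resent if no ACK comes back
useACK = true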
<?XML version="1.0" standalone="yes" ?> | |
<job id="GetBacklog"> | |
<runtime> | |
<description> | |
This script uses the DFSR WMI provider to obtain | |
replication backlog information between two servers. | |
</description> | |
<named | |
name="ReplicationGroupName" |