- Triple-check your GPOs.
- Run Resultant Set of Policy (RSoP) to make sure an upstream GPO isn't doing something you don't expect.
- Shadow the RDP session to see what TestExecute is doing.
- If you don't see TestExecute start in a session, double-check the username variable in your pipeline.
- Run the Agent Node as a Windows service.
- Let the service interact with the desktop.
import dpath.util

def dpath_null(data: dict, path: str, default_return=None):
    '''Trap any KeyError from dpath and return an acceptable 'null' value when dpath can't find a path.

    Example 1
    ---------
    # Will return None if /some/path/to/an/attribute can not be found
    var = dpath_null(my_dictionary, '/some/path/to/an/attribute')
    '''
    try:
        return dpath.util.get(data, path)
    except KeyError:
        return default_return
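A quick usage sketch, with a made-up dictionary just for illustration:

nested = {'a': {'b': {'c': 42}}}

print(dpath_null(nested, '/a/b/c'))                          # 42
print(dpath_null(nested, '/a/b/missing'))                    # None
print(dpath_null(nested, '/a/b/missing', default_return=0))  # 0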
/**
 * Get System Information in json format. Gets Run Queue, Memory and Swap Info.
 */
var os = require('os');
var fs = require('fs');

var sysinfo = {};
sysinfo.hostname = os.hostname();
sysinfo.loadavg = os.loadavg();   // 1, 5 and 15 minute run-queue averages
sysinfo.freemem = os.freemem();   // free system memory, in bytes
sysinfo.totalmem = os.totalmem(); // total system memory, in bytes
// The os module doesn't expose swap; fs would be used to read /proc/meminfo for that.
console.log(JSON.stringify(sysinfo));
So. I ran into a great deal of stress around ElasticSearch/Logstash performance lately. These are just a few lessons learned, documented so I have a chance of finding them again.
Both ElasticSearch and Logstash produce logs. On my RHEL install they're located in /var/log/elasticsearch and /var/log/logstash. These will give you some idea of problems when things go really wrong. For example, in my case ElasticSearch got so slow that Logstash would time out sending it logs, and those issues showed up in the logs. ElasticSearch would also start logging problems when JVM garbage collection took longer than 30 seconds, which is a good indicator of memory pressure on ElasticSearch.
ElasticSearch (and Logstash, when it's joined to an ES cluster) processes tasks in a queue that you can peek into. Before realizing this, I didn't have any way to understand what was happening in ElasticSearch besides the logs. You can look at the pending tasks queue through the _cluster/pending_tasks API.
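A minimal sketch of peeking at that queue from Python, assuming the cluster answers HTTP on localhost:9200 (substitute your own node; a plain curl against the same URL returns the same JSON):

import json
import urllib.request

# Ask the cluster for tasks that are queued but not yet executing.
# An empty "tasks" list means the cluster is keeping up with its workload.
with urllib.request.urlopen('http://localhost:9200/_cluster/pending_tasks') as resp:
    pending = json.load(resp)

for task in pending.get('tasks', []):
    print(task['priority'], task['time_in_queue'], task['source'])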
If you're looking for something in depth, I suggest http://www.adequatelygood.com/JavaScript-Module-Pattern-In-Depth.html
var SomeMarionetteApp = (function(my, $, _, backbone, Marionette, bootstrap, common) {
my.App = Marionette.Application.extend({
initialize: function(options){
var self = this;
- Copy compressed log files to a work area.
- Uncompress them and remove the date part of the file name.
- Copy /etc/logstash/conf.d/*.conf to a work location.
- Modify the conf files to change the output to stdout { codec => "rubydebug" }.
- You want to do this to make sure things are working before you push logs into ElasticSearch.
- Modify the conf files to change the path in the input/file section so it points at the work area (see the sketch after this list).
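Here's a rough sketch of what a modified conf file might end up looking like. The /tmp/logstash-replay path is made up for illustration; point it at wherever you copied the uncompressed logs. The start_position and sincedb_path settings just force the file input to read the copied files from the beginning instead of tailing them.

input {
  file {
    path => "/tmp/logstash-replay/*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  # keep your existing filters here
}

output {
  stdout { codec => "rubydebug" }
}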
The InfluxDB Docs give you a very brief overview of installing InfluxDB on a host. It boils down to 'here's the RPM, install it.' That's fine for looking at the software, but you'll probably want to adjust the configuration a bit for a production environment.
https://influxdb.com/docs/v0.9/introduction/installation.html
Modify /etc/opt/influxdb/influxdb.conf
Rather than run a log shipper on each host, we use syslog to ship logs out of Monolog. This works great for single-line logs, but it breaks when a log message gets split up by syslog. When syslog does this, it duplicates the line header, like so:
2015-06-09T05:39:31.457042-05:00 host.example.edu : This is a really really really
2015-06-09T05:39:31.475414-05:00 host.example.edu : really long message
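One way to cope downstream is to stitch the pieces back together before indexing them. The sketch below is only a heuristic, not part of this setup: it assumes a continuation arrives from the same host within half a second of the first piece, so it can glue together unrelated messages that happen to land that close together. If the splitting is caused by a message size limit, raising that limit on the syslog side is the cleaner fix.

import re
from datetime import datetime

# Matches the example lines above: "<ISO timestamp> <host> : <message>"
LINE_RE = re.compile(r'^(\S+) (\S+) : (.*)$')

def merge_split_messages(lines, window=0.5):
    '''Rejoin syslog-split messages: if the next line comes from the same host
    within `window` seconds, treat it as a continuation of the previous one.'''
    merged = []
    for line in lines:
        match = LINE_RE.match(line.rstrip('\n'))
        if not match:
            continue
        stamp, host, msg = match.groups()
        ts = datetime.fromisoformat(stamp)  # handles the -05:00 offset on Python 3.7+
        if merged:
            prev_ts, prev_host, prev_msg = merged[-1]
            if host == prev_host and (ts - prev_ts).total_seconds() < window:
                merged[-1] = (prev_ts, prev_host, prev_msg + ' ' + msg)
                continue
        merged.append((ts, host, msg))
    return ['%s %s : %s' % (t.isoformat(), h, m) for t, h, m in merged]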