
@ahpook
Created October 21, 2013 17:52
09:17 <_rc> using the yaml one is sufficient.
09:18 <_rc> just populate it in a non-dumb manner, profit.
09:18 <code-cat> ugh, puppet is not stellar when it comes to managing cron jobs though
09:18 <tremble> code-cat: What's important is not having to wait for the process to run. Making it totally asynchronous from the mcollective process.
09:19 <code-cat> k
18:00 <jaschal> If puppetcommander is raising an exception "execution expired", is there a setting somewhere I can tweak to increase this timeout? https://gist.github.com/jascha/6894461
18:06 <jaschal> Hmnn... I think the underlying problem is with ActiveMQ: https://issues.apache.org/jira/browse/AMQ-3131
16:58 <Zal> Hi all, woke up today to find that "mco find" on my puppet master suddenly only sees half of our nodes. Any suggestions as to what to check initially?
16:59 <Zal> puppet cert list --all shows all the nodes still active and managed by puppet, so I assume this is something specific to mco
17:08 <Zal> I've also tried restarting the mcollective daemon on the "missing" nodes, to no avail
17:11 <ramindk> Zal: I've seen something similar when the time was off on my local VM. Might check the time on the master vs the time on your middleware vs time on the agents.
17:12 <Zal> thanks, will do
17:14 <Zal> ramindk, if that turns out to be the problem, what do I need to do after synchronizing time?
17:19 <Zal> never mind, it seems the nodes showed right up when I fixed the time synchronization. Thanks again.
17:21 <ramindk> Zal: IIRC, my scenario is laptop comes out of sleep, local VM a few minutes or hours off, mco find is missing some number of servers, I ntpdate, and then it starts working correctly. I don't believe I had to do anything, but I do have
17:21 <ramindk> no problem.
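A quick way to check ramindk's theory, as a generic sketch (hostnames are placeholders, not from the conversation): compare clocks on the master, the middleware host and an agent, then resync whichever one drifted.

    # compare UTC epoch seconds across the three tiers
    for h in puppetmaster.example.com activemq.example.com agent01.example.com; do
      echo -n "$h: "; ssh "$h" date -u +%s
    done
    # resync any box that is off, then restart ntpd
    ssh agent01.example.com 'ntpdate -u pool.ntp.org && service ntpd start'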
06:04 <jamido> Hi all, when trying to trigger a puppet run with mcollective I get the following error from puppet-agent: "can't be called from trap context". Using Puppet 3.2.4, Ruby 2.0 and mc 2.2.3. I know there is an open bug, but does someone know a workaround?
06:05 <_rc> what version of the puppet agent, and can you point me at the open bug
06:05 <jamido> Here is the corresponding bug: https://projects.puppetlabs.com/issues/22008
06:05 <jamido> i use puppet 3.2.4
06:06 <_rc> ok, so can you run this under ruby 1.9.3. that's your workaround.
06:06 <Volcane> or stop the puppet daemon and let mco start it on demand
06:06 <Volcane> whats probably happening is you're already running puppet
06:06 <Volcane> so mco tries to signal that one to wake up and run
06:07 <jamido> ok, thank you all for the fast help!
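Volcane's workaround, stopping the always-on puppet daemon and letting mcollective kick off runs on demand, looks roughly like this; a sketch, assuming the service and puppet agent plugins are installed (the other option, per _rc, is simply running the agent under ruby 1.9.3).

    # stop the standing puppet daemon so mco never signals an already-running agent
    mco rpc service stop service=puppet
    # then trigger runs on demand instead of relying on the daemon
    mco puppet runonce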
07:26 <GitHub114> [mcollective-puppet-agent] asedge closed pull request #5: Add method last_run_logs to pull the puppet logs from last_run_report.yaml and include that in last_run_summary. (master...master) http://git.io/FZeaDg
07:26 <gepetto> GitHub114: #5 is http://projects.puppetlabs.com/issues/5 "Feature #5: Allow short names for instance parameters - Puppet. It has a status of Closed and is assigned to Luke Kanies"
10:54 <GitHub66> [mcollective-puppet-agent] asedge opened pull request #7: Grabbing puppet run logs and adding to agent's "last_run_summary" report. (master...master) http://git.io/RWAyEA
10:54 <gepetto> GitHub66: #7 is http://projects.puppetlabs.com/issues/7 "Feature #7: Support arrays as arguments to PFile params - Puppet. It has a status of Closed and is assigned to Luke Kanies"
10:56 <eric0> stupid bot
07:34 <GitHub82> [marionette-collective] richardc pushed 2 new commits to master: http://git.io/XtFniQ
07:34 <GitHub82> marionette-collective/master 2c6d1f7 Tomas Doran: 22061 - stdin discovery plugin + use in rpcutil...
07:34 <GitHub82> marionette-collective/master 4c977b9 Richard Clamp: Merge pull request #96 from bobtfish/stdin_discovery_plugin...
07:34 <gepetto> GitHub82: #96 is http://projects.puppetlabs.com/issues/96 "Bug #96: Defaults get set for every set object, which causes failures - Puppet. It has a status of Closed and is assigned to Luke Kanies"
07:38 <GitHub136> [marionette-collective] richardc pushed 1 new commit to master: http://git.io/bPoseg
07:38 <GitHub136> marionette-collective/master 1346400 Richard Clamp: Update changelog for #22061
07:38 <gepetto> GitHub136: #22061 is http://projects.puppetlabs.com/issues/22061 "Feature #22061: stdin discovery plugin - MCollective. It has a status of Merged - Pending Release and is assigned to -"
07:44 <FriedBob> Can gepetto be configured to ignore the github messages?
07:44 <_rc> gepetto: who owns you?
07:44 <gepetto> _rc: incorrect usage, ask for help using 'gepetto: help who'
07:44 <Volcane> jamesturnbull
08:58 <kindjal> How can I issue a command via mco to run puppet once on all nodes where last run was over an hour ago?
09:00 <kindjal> got it
09:00 <kindjal> mco rpc puppetd runonce -S "resource().since_lastrun > 7200"
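For reference, the newer mcollective-puppet-agent plugin exposes a similar data source; a sketch with the threshold actually set to one hour (3600 seconds). The data function name is an assumption, check the output of 'mco plugin doc' for what is installed locally.

    # run puppet once on nodes whose last run finished more than an hour ago
    mco puppet runonce -S "puppet().since_lastrun > 3600"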
10:03 <sputnik13> :)
10:05 <sputnik13> noob question… does mcollective require a central server or is everything distributed?
10:06 <Volcane> middleware i guess is a central component, but thats like your switches are a central component
10:16 <sputnik13> so is "middleware" running on a dedicated server?
10:17 <sputnik13> I'm guessing this is where the rabbitmq is hosted
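Typically yes: the middleware (ActiveMQ or RabbitMQ) runs on one or more dedicated hosts and every mcollective server and client just points its connector at it. A minimal sketch of the connector settings involved, assuming the ActiveMQ connector (the rabbitmq connector uses the same pattern under plugin.rabbitmq.*); hostnames and credentials are placeholders.

    cat >> /etc/mcollective/server.cfg <<'EOF'
    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = middleware.example.com
    plugin.activemq.pool.1.port = 61613
    plugin.activemq.pool.1.user = mcollective
    plugin.activemq.pool.1.password = changeme
    EOF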
04:22 <GitHub57> [marionette-collective] richardc pushed 1 new commit to master: http://git.io/KYYRLw
04:22 <GitHub57> marionette-collective/master a1de29e Richard Clamp: maint - correct a typo. plugi -> plugin
05:39 <frankS2_1> I see that the mcollective puppet module has been worked on, great work :)
05:51 <frankS2> Is there a diagram or something on how mcollective works?
05:52 <GitHub80> [mcollective-puppet-agent] richardc opened pull request #8: 22860 - --force should set --no-splay (master...bug/master/22860) http://git.io/g6gcXg
05:52 <gepetto> GitHub80: #8 is http://projects.puppetlabs.com/issues/8 "Feature #8: Add 'ignore' to :file - Puppet. It has a status of Closed and is assigned to Luke Kanies"
05:56 <GitHub45> [mcollective-puppet-agent] ploubser pushed 2 new commits to master: http://git.io/oDz7Wg
05:56 <GitHub45> mcollective-puppet-agent/master eb7ae95 Richard Clamp: 22860 - --force should set --no-splay...
05:56 <GitHub45> mcollective-puppet-agent/master b348577 Pieter Loubser: Merge pull request #8 from richardc/bug/master/22860...
05:56 <gepetto> GitHub45: #8 is http://projects.puppetlabs.com/issues/8 "Feature #8: Add 'ignore' to :file - Puppet. It has a status of Closed and is assigned to Luke Kanies"
05:56 <GitHub195> [mcollective-puppet-agent] ploubser closed pull request #8: 22860 - --force should set --no-splay (master...bug/master/22860) http://git.io/g6gcXg
05:56 <gepetto> GitHub195: #8 is http://projects.puppetlabs.com/issues/8 "Feature #8: Add 'ignore' to :file - Puppet. It has a status of Closed and is assigned to Luke Kanies"
05:59 <GitHub182> [mcollective-puppet-agent] richardc pushed 1 new commit to master: http://git.io/KMncIg
05:59 <GitHub182> mcollective-puppet-agent/master 921b1ca Richard Clamp: release version 1.6.1
05:59 <GitHub34> [mcollective-puppet-agent] richardc tagged 1.6.1 at master: http://git.io/drUj3w
07:51 <GitHub180> [mcollective-puppet-agent] richardc pushed 1 new commit to master: http://git.io/ZmsRow
07:51 <GitHub180> mcollective-puppet-agent/master 3737103 Richard Clamp: maint - run the spec tests only once...
14:38 <code-cat> has anyone run into the situation where running puppet via the mcollective puppet plugin causes the puppet run to register failure with "Caught TERM; calling stop" as its error message?
14:39 <code-cat> and if you have, how did you fix it?
18:33 <Zal> hi there, trying to create a filter with -W, using one class and one fact filter. I appear to be getting all nodes of the specified class, regardless of the truth of the fact.
18:34 <Zal> Are filters AND'd or OR'd?
18:34 <Zal> I guess I'm seeing what looks like either OR'd filters, or my fact being ignored
18:35 <Zal> i.e., -W www environment=eng <-- giving me all www's, both eng and prod
18:36 <Zal> can anyone explain to me what I'm seeing (or likely seeing)?
18:39 <Zal> ah, found it. I need to quote the params as a single value: -W "www environment=eng"
18:40 <ddevon1> Zal: you could also use -S for more complex filters
18:41 <Zal> ah cool, I'll check out -S, thanks!
18:44 <ddevon1> huh...doesn't look like -S is covered in the main docs: http://docs.puppetlabs.com/mcollective/reference/ui/filters.html
18:44 <ddevon1> but here's a post from R.I. that covers it pretty well: http://www.devco.net/archives/2012/06/23/mcollective-2-0-complex-discovery-statements.php
18:44 <Zal> thank you
18:48 <Zal> fantastic stuff
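For the record: filter terms are ANDed, but each -W takes a single argument, so the unquoted form was being parsed as two separate options. A sketch of both working forms, reusing the class and fact names from the conversation.

    # -W: class plus fact filter, ANDed
    mco find -W "www environment=eng"
    # -S: compound filter syntax for richer expressions
    mco find -S "www and environment=eng and not environment=prod"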
08:50 <GitHub123> [marionette-collective] richardc pushed 2 new commits to master: http://git.io/mokL6g
08:50 <GitHub123> marionette-collective/master 96a1761 Pieter Loubser: 21910 - Publishing time should not be part of the request time...
08:50 <GitHub123> marionette-collective/master 82e2ff8 Richard Clamp: Merge pull request #121 from ploubser/bug/master/21910...
08:50 <gepetto> GitHub123: #121 is http://projects.puppetlabs.com/issues/121 "Bug #121: yumrepo bug - Puppet. It has a status of Closed and is assigned to David Lutterkort"
08:56 <GitHub75> [marionette-collective] ploubser pushed 1 new commit to master: http://git.io/UvNRpQ
08:56 <GitHub75> marionette-collective/master 216c86a Pieter Loubser: 21910 - Publishing time should not be part of the request time...
09:44 <Zal> trying to craft a complex query, per http://www.devco.net/archives/2012/06/23/mcollective-2-0-complex-discovery-statements.php
09:45 <Zal> Is there a parameter I can use to specify a single node in such a query? I've tried certname= and clientcert=, neither appears to work for me.
09:46 <Volcane> then you probably dont have those facts
09:46 <Zal> I certainly do
09:46 <Zal> well, we use clientcert everywhere else in our code
09:47 <Volcane> mco inventory some.node
09:48 <Zal> so if it's not showing up as a fact, what provides "$clientcert" to our manifests?
09:48 <Volcane> u need to configure mcollective to know your facts
09:48 <Zal> hm, it seems to know every other fact
09:49 <Volcane> you're probably using the facter fact plugin which wouldnt know about clientcert
09:49 <Zal> looks like I can use hostname= for my case (thanks for the "inventory" hint). Still confused about what exposes clientcert though, if it's not a fact.
09:50 <Zal> Volcane, is the "facter fact plugin" what ships with PE 3.0.1?
09:50 <Volcane> where if you set facts up in mco using the yaml method it would be there, since clientcert isnt a fact
09:50 <Volcane> ah PE? no idea, it probably runs facter -py
09:50 <Volcane> and clientcert isnt a fact, its a special variable the master sets
09:50 <Zal> ok, if clientcert isn't a fact, what is it? I guess that's a better question for #puppet
09:50 <Zal> ok thanks
09:54 <Zal> hm, runall doesn't take -S anyway apparently. Ah well.
09:54 <Volcane> what are you trying to do?
09:54 <Zal> can I negate a filter parameter with -W?
09:55 <Zal> Volcane, I'm trying to invoke a puppet run on a specific set of machines, avoiding one machine from that set.
09:55 <Volcane> disable it
09:55 <Zal> disable what?
09:55 <Volcane> that machine
09:55 <Zal> oh, this is a scripted function that will be run repeatedly, not a one-off. Disabling the machine isn't practical for this.
09:56 <Zal> and the machine to "disable" will be arbitrary
09:56 <Volcane> -W supports != iirc
09:56 <Zal> ah, excellent
09:58 <Zal> works beautifully. I'll need to make two calls to "mco puppet runall" to get all my filters in, but that's a minor inconvenience. Thanks for the help Volcane
09:58 <Volcane> np
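As noted, -W accepts != on facts, so excluding one arbitrary machine from a scripted runall can look like this (a sketch; class, fact and host names are placeholders).

    # run across the eng www machines two at a time, skipping one host
    mco puppet runall 2 -W "role::www environment=eng fqdn!=skipme.example.com"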
14:58 <sputnik13> does anyone use the puppetlabs/mcollective module to deploy mcollective
14:58 <sputnik13> ?
03:15 <ajf_> is there a reliable way to check if a puppet run has succeeded using the puppet agent? I have a custom application similar to the "puppet runall" one, except in its loop it goes back and checks if the last_run increases since it started the run, and then checks if failed_resources > 0
03:15 <ajf_> it generally works except for when a catalogue doesn't evaluate
03:15 <ajf_> because the last_run attribute updates, but failed_resources is 0
03:16 <Volcane> unfortunately there still isnt a way for something like mco to assign a unique identifier to a run :(
03:16 <ajf_> can't see any attribute that would tell me about catalogue compile errors :(
03:16 <Volcane> which would really help this along, alas
03:17 <ajf_> yeah... that would be ideal. by checking the status and last_run just before I kick off the puppet run, I'm kinda "guessing" when it increases that it was because of my runonce action
03:18 <ajf_> if I could just detect these catalogue compile errors I think it would be sufficiently reliable though
03:19 <ajf_> might have a look at adapting the agent :)
03:19 <Volcane> total resources should be something like 7 in that case
03:20 <Volcane> and events total should be 0
03:20 <Volcane> that should be unique to those failures
03:21 <Volcane> bah events would be 0 when nothing changes too
03:21 <Volcane> resources total is about the only thing :(
03:25 <ajf_> interesting, will look at that, thanks. it seems to be on 170 total resources for a manifest that I just deliberately broke
03:25 <Volcane> hmm, how on earth? compile failure == no catalog
03:25 <Volcane> you should only get the scheduled stuff etc
03:25 <ajf_> yeah, was wondering if it's somehow saved from the last successful?
03:25 <Volcane> that would be lame
03:39 <ajf_> looks like it uses resources.txt which doesn't get updated in that case, the only sign in /var/lib/puppet/state something has failed seems to be the err message in the last_run_report.yaml
03:39 <ajf_> so I'll have a go at sending up those in the agent reply :)
03:40 <Volcane> last_run_status action
03:40 <Volcane> last_run_summary sorry
03:45 <ajf_> yeah, that's what I've been using in my app and will try sending the messages from last_run_report.yaml up in... in our case it would be useful to show client errors in the mco app
03:45 <ajf_> just noticed it has a "status: failed" attribute in that file too, would also be useful to send up
04:28 <ajf_> sweet, it works
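A sketch of the kind of check being described, assuming the open-source paths; the point is that a catalog compile failure leaves last_run_summary looking clean while last_run_report.yaml still records status: failed.

    # ask the agent for the summary it normally reports
    mco rpc puppet last_run_summary -I web01.example.com
    # and fall back to the report status on the node itself for compile failures
    ssh web01.example.com \
      "grep -m1 '^status:' /var/lib/puppet/state/last_run_report.yaml"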
07:06 <mjblack> anyone doing mesh activemq instead of hub spoke?
07:22 <t0m> not yet, but it's very much something I'd like to explore
07:23 <Volcane> meshes tend to be a bit of a fuckup
07:23 <Volcane> though the decrease priority thing might fix that now
07:23 <Volcane> not tested
07:23 <mjblack> last I tried it I kept getting dup messages
07:24 <Volcane> thats cos the ttl was wrong then
07:24 <t0m> yeah, AIUI it's gonna be 'challenging', but it'd sure be handy for the times when MPLS = May Perhaps Lose Service
07:24 <Volcane> and probably because your persistence setup wasnt right
07:24 <Volcane> it uses the message persistence store to do dupe tracking
07:24 <mjblack> huh, interesting do you know what settings that was?
07:25 <Volcane> ttl is on the connectors, persistence is in the opening xml stanza of the broker and then you need a persistence store configured
07:25 <Volcane> i forget the exact directives now
07:26 <mjblack> well it uses that kahadb or whatever
07:26 <Volcane> nods
07:29 <mjblack> so the networkTTL setting was incorrect?
07:29 <Volcane> probably
07:29 <mjblack> looks like they added more ttl settings in 5.9
07:29 <Volcane> yeah, wish someone would update the PL packages
07:30 <Volcane> the webconsole got a complete redo etc
07:30 <_rc> in 5.9 yeah
07:30 <mjblack> dont see why the srpm couldnt be used against 5.9
07:30 <Volcane> probably quite a few changes to it required now
07:31 <_rc> they need to ship it though; or someone needs to package a nightly
07:31 <AbhayC> I'm unable to use flatfile discovery for puppet runall. Is there another mechanism to do this? Trigger a runonce based on a list of nodes with a batchsize
07:31 <_rc> we do have 5.8.0 now at least
07:31 <Volcane> _rc: oh kewl, should notify the list about that
07:32 <_rc> yeah, I only found out by mistake, and 5.8.0-1.el6 was broken, 5.8.0-2.el6 is a goer though
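For context on the TTL/persistence point above: a meshed ActiveMQ network leans on the persistence store for duplicate tracking and on networkTTL to bound how many broker hops a message may take. A rough, untested sketch of the activemq.xml pieces being discussed, written to a scratch file purely for illustration.

    cat > /tmp/activemq-mesh-fragment.xml <<'XML'
    <broker xmlns="http://activemq.apache.org/schema/core" persistent="true">
      <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
      </persistenceAdapter>
      <networkConnectors>
        <!-- per the discussion above, a wrong networkTTL plus a missing
             persistence store is a common source of duplicated messages -->
        <networkConnector uri="static:(tcp://other-broker.example.com:61616)"
                          networkTTL="3" duplex="true"/>
      </networkConnectors>
    </broker>
    XML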
07:32 <t0m> AbhayC: --dm stdin help? (if you grab the stdin discovery plugin from master)
07:34 <_rc> AbhayC: when you say you're unable what does that mean? it breaks?
07:35 <t0m> AbhayC: and if not, what's the problem which is stopping you from supplying a list / why do you need to supply a list? I feel somewhat like you've asked half a question :)
07:45 <AbhayC> says the action is not declared in the ddl when I use #mco rpc puppet runall 3 --nodes somefile, and ignores --nodes completely for #mco puppet runall 3 --nodes somefile
07:48 <Volcane> it says more than that, make a pastebin. and thats not the command to use
07:48 <Volcane> mco puppet runall....
08:29 <sputnik13> anyone use puppetlabs/mcollective module?
08:30 <igalic> sputnik13: it's such a busy day..
08:30 <sputnik13> igalic: :)
08:30 <sputnik13> it's just the start of the day!
08:30 <sputnik13> well, for me anyway
08:31 <sputnik13> I asked my questions yesterday and no response :(
08:31 <sputnik13> basically I install the mcollective module, and puppet breaks
08:31 <sputnik13> running centos 6.4
08:31 * igalic already had a full workday and will now remotely attend the traffic server summit in a different time zone.
08:31 <igalic> sputnik13: where or how does it break?
08:32 <sputnik13> I can't even do puppet module list
08:32 <sputnik13> the install goes through fine
08:32 <sputnik13> but it complains about mcollective not having something or other… I have to install again to get the specific error message
08:33 <sputnik13> I can't even uninstall the module at that point… fortunately I'm running in VMs
08:33 <sputnik13> "no source module metadata provided for mcollective"
08:33 <igalic> yeah, specifics are always more helpful.
08:33 <igalic> What's the error that puppet module list delivers?
08:33 <sputnik13> I googled for that and the hits I saw were pretty old as I recall
08:34 <sputnik13> "no source module metadata provided for mcollective"
08:36 <sputnik13> http://projects.puppetlabs.com/issues/4142
08:36 <sputnik13> I'm guessing it's related to this
08:36 <sputnik13> once I remove metadata.json from the mcollective module directory puppet commands work
08:42 <sputnik13> it looks like it's a problem with the latest 1.1.0 release of the module
08:42 <Volcane> sputnik13: ask ashp about that
08:42 <sputnik13> I tried installing 1.0.1 and it installed… however, it didn't install the dependencies
08:42 <sputnik13> Volcane: ashp?
08:43 <Volcane> sputnik13: yes, a nick ashp
08:43 <Volcane> or just file a ticket since you seem to know what the problem is
08:43 <sputnik13> Volcane: oh, I'm guessing ashp is the maintainer then
08:43 <Volcane> ticket is best
08:43 <sputnik13> Volcane: actually, no I don't know what the problem is, I'm just poking around and trying to find answers :) I'm too new to puppet to pretend to understand what's actually broken
08:44 <Volcane> well you have errors and a way to remove the errors and you isolated it to a version
08:44 <Volcane> thats enough info for a ticket, just capture detailed information about what you know
08:44 <Volcane> and make sure you run the latest puppet
08:44 <sputnik13> Volcane: forgive me for the noob questions, but where do I submit the ticket?
08:44 <Volcane> the forge page should show you
08:45 <sputnik13> Volcane: doh, I kept looking for "ticket", it's at the top as "bugs" :)
08:46 <sputnik13> Volcane: one more question if you don't mind… when I install a module I was (perhaps naively) expecting that dependencies would be installed as well… but I've had instances where that happened and instances where that did not, with just the mcollective module
08:47 <sputnik13> Volcane: is the standard expected behavior that a 'puppet module install <module>' will install all dependencies automagically?
08:48 <Volcane> i think it should install the dependencies yeah
08:48 <sputnik13> I also concede I could be making things up in my own head as to what I saw with this in my rush to get stuff done :)
08:48 <Volcane> but it depends how good the module is at declaring its dependencies
08:48 <sputnik13> Volcane: i c
08:49 <sputnik13> Volcane: well, thanks for your time, I'll get a bug submitted against the module
08:49 <Volcane> kewl
08:49 <Volcane> i use the latest one fwiw and it works fine
08:49 <sputnik13> igalic: thank you as well
08:49 <Volcane> on latest puppet
08:49 <sputnik13> Volcane: well, I didn't update puppet but I installed it only last week
08:50 <Volcane> what version?
08:50 <sputnik13> Volcane: it was 3.x.x… I'm re-initializing the VM, I'll know in a second
08:50 <Volcane> ok should be fine then
08:50 <Volcane> gtg
08:50 <sputnik13> Volcane: thanks much
09:29 <GitHub3> [marionette-collective] richardc pushed 2 new commits to master: http://git.io/4dx_yA
09:29 <GitHub3> marionette-collective/master 3f63f07 Pieter Loubser: 20467 - mcollective service does not gracefully exit on windows...
09:29 <GitHub3> marionette-collective/master 154f837 Richard Clamp: Merge pull request #122 from ploubser/bug/master/20467...
09:29 <gepetto> GitHub3: #122 is http://projects.puppetlabs.com/issues/122 "Bug #122: Small correction of puppet spec file - Puppet. It has a status of Closed and is assigned to Luke Kanies"
09:32 <GitHub175> [marionette-collective] ploubser pushed 1 new commit to master: http://git.io/cyWPrQ
09:32 <GitHub175> marionette-collective/master 5f29f21 Pieter Loubser: 20467 - mcollective service does not gracefully exit on windows...
11:22 <tmclaugh[work]> Hi, i'm using the template off of here to generate facts.yaml for mcollective
11:22 <tmclaugh[work]> http://projects.puppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
11:23 <tmclaugh[work]> while the top level elements are sorted I'm having an issue where child elements are not. This is causing nodes to show state changes when nothing actually changed but the order of some elements in the file.
11:25 <Volcane> what child elements?
11:29 <Volcane> (facts are all strings)
11:35 <tmclaugh[work]> Volcane: here's an example
11:35 <tmclaugh[work]> https://gist.github.com/tmclaugh/7029953
11:35 <Volcane> yeah those arent strings
11:35 <Volcane> just filter out non string things
11:37 <ramindk> tmclaugh[work]: I believe this template should do that for you, https://github.com/puppetlabs/puppetlabs-mcollective/blob/master/templates/facts.yaml.erb
11:39 <tmclaugh[work]> ramindk: thanks for that!
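The template ramindk links boils down to writing only string-valued facts, sorted, so the file content is stable between puppet runs. A minimal sketch of the same idea done outside the template (paths and approach are illustrative, not the module's facts.yaml.erb itself).

    # regenerate facts.yaml with only string-valued facts, sorted by name
    ruby -rfacter -ryaml -e '
      facts = Facter.to_hash.reject { |_, v| !v.is_a?(String) }
      File.open("/etc/mcollective/facts.yaml", "w") do |f|
        f.write(Hash[facts.sort].to_yaml)
      end
    '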
01:09 <beddari> Volcane: did you read the state of data in modules post on puppet-dev?
01:10 <beddari> Volcane: my reaction was "how on earth did that happen?" ;-)
01:25 <beddari> Volcane: so I'm reading twitter backlog and saw that I already favorited your response on that hehe
01:49 <Volcane> yeah, its shocking
01:49 <Volcane> hows it such a mental barrier to consider data without logic as a good thing?
01:49 <Volcane> hasnt every sane person come to realise that stored procedures in SQL databases are bad?
01:50 <beddari> I don't understand why everyone thinks about a module differently from site-wide
01:50 <Volcane> yet no, somehow its gonna be unusable to create a data solution without embedded logic
01:50 <beddari> where Hiera already is proven
01:50 <beddari> yeah .. *laughs*
01:50 <ajf_> i came across an innovative antipattern the other day from a colleague... a bash script in backticks inside a hiera map
01:50 <Volcane> heh
01:51 <beddari> :(
01:51 <ajf_> separating data from the code... but putting code in the data :(
01:51 <Volcane> and the type thing in the data modules thing in 3.3.x? omfg
01:52 <beddari> ehmm yeah I wasn't going to mention all that .. ..
01:52 <beddari> on #puppet-dev, zipkid is here too so:
01:52 <beddari> WTF... back to params.pp ?????!!!!!!
01:53 <beddari> ;-)
01:53 <Volcane> heh
01:54 <beddari> but we can write a backend thanks to that being easy :P
01:54 <Volcane> you obviously didnt look at the backend situation with hiera 2 :P
01:54 <beddari> hehehe
01:54 <beddari> no
02:01 * zipkid is going into depression....
02:02 <beddari> solutions, not depressions, aren't we in this for that
02:02 <beddari> we could try Ansible ? ;)
02:03 <zipkid> yea, that's where all the cool kids are .... Ansible.
02:03 <zipkid> Maybe Chef would be a sound choice after all....
02:03 <Volcane> remember all the talk of bmc and how you need a flock of consultants to install it on site, need to buy appliances dedicated to it, need training etc and then its a huge cobbled together mess of stuff
02:03 <beddari> *laughs* yeah with all the beauty of ease that was Hiera fast disappearing
02:03 <Volcane> and how thats why luke created puppet etc
02:04 <Volcane> oh the irony
03:58 <igalic> Can I get some context for the data in modules talk? (it sounds interesting or wrong.. I can't quite decide.)
03:59 <Volcane> igalic: replace https://github.com/puppetlabs/puppetlabs-ntp/blob/master/manifests/params.pp with AIX.json, Debian.json in the module
04:00 <Volcane> igalic: with the module declaring that it wants a hierarchy of $osfamily, default
04:00 <Volcane> its important that the module declares the hierarchy else when you install it in your site your hierarchy might make this data inaccessible
04:00 <Volcane> so this way you get site hierarchy that overlays module hierarchy
04:01 <Volcane> and thus your code becomes a shitload easier, adding FooOS becomes dropping in a json file
04:01 <Volcane> which wont break anything but FooOS machines because the data will only be read on FooOS machines etc
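A concrete sketch of the layout being described, using ntp as the example; file names, keys and the hierarchy syntax are illustrative of the idea, not the actual ARM-9 or 3.3.x implementation.

    # per-osfamily data files replace params.pp, and the module declares its own hierarchy;
    # supporting FooOS then means dropping in FooOS.json without touching any logic
    mkdir -p ntp/data
    cat > ntp/data/hiera.yaml <<'EOF'
    :hierarchy:
      - "%{osfamily}"
      - default
    EOF
    cat > ntp/data/Debian.json <<'EOF'
    { "ntp::servers": ["0.debian.pool.ntp.org", "1.debian.pool.ntp.org"] }
    EOF
    cat > ntp/data/default.json <<'EOF'
    { "ntp::servers": ["0.pool.ntp.org", "1.pool.ntp.org"] }
    EOF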
04:04 <igalic> Volcane: I've read the ARM-9 proposal, and I loved the idea. I was just wondering how far it's progressed.
04:05 <Volcane> the idea seems fine, implementation is terrible
04:05 <Volcane> and brings in a bunch of ill conceived things like a type system thats confined to this data rather than actually thinking how a type system in puppet should behave etc
04:06 <Volcane> arm9 is _10000_ lines of code.
04:06 <Volcane> and creates a horrible world where logic is embedded in data and the logic is only parsable with puppets parser etc
04:07 <Volcane> which removes any hope of data portability and reuse
04:08 <Volcane> so then that got converted into lets data in modules only read variables from params.pp
04:08 <Volcane> which forces everyone to write code like https://github.com/puppetlabs/puppetlabs-ntp/blob/master/manifests/params.pp
04:08 <igalic> hrmm.. much of the data I currently have in hiera is very specific to puppet, but others could just as easily be converted to, or used by different systems. mm.
04:09 <igalic> Volcane: it's sad that ntp is /the/ example module used through-out puppet's documentation, when it's /this/ horrible.
04:09 <Volcane> yes, thats why hiera has a cli and a reusable ruby gem
04:09 <Volcane> and why its a pluggable backend system - because data has to be portable
04:10 <Volcane> ntp module is the model module now
04:10 <beddari> this is currently my best Puppet-only feature .. a function of ease of integration and portability through Hiera v1
04:10 <Volcane> the recommended way to write modules is that
04:11 <Volcane> with data in modules you throw away params.pp in the ntp module, make init.pp this http://p.devco.net/437 and add a few data files
04:11 <Volcane> job done
04:12 <Volcane> massive reduction in complexity, no duplication of variables in 2 classes etc
04:12 <beddari> and not only that, once you got data separately it becomes a LOT easier to teach people about how to think correctly .. data is SEPARATED
04:13 <Volcane> yes see https://groups.google.com/d/msg/puppet-users/JrDNGiUyKD4/Ww7GDCMAtQAJ thats why hiera exists
04:13 <beddari> then you go from the separated data in the module to your site-wide hiera config, on-disk yaml defaults, then extend to dbs, external systems etc
04:13 <igalic> now that data is read from hiera (as it would have before), but it's first read from the module-provided hierarchy.
04:14 <beddari> that was the elegant original idea yes
04:14 <beddari> quite simple to grasp isn't it ;)
04:14 <Volcane> you'd think so
04:14 <igalic> 13:12:30 < beddari> and not only that, once you got data separately it becomes a LOT easier to teach people about how to think correctly .. data is SEPARATED <<< That's a massive improvement, yes. This was the hard bit about both, learning puppet first, then re-learning it with hiera, and then teaching it.
04:15 <beddari> I know .. I think lots of this proposal has gone wrong just because people didn't even know what Hiera provided already
04:15 <Volcane> beddari: there's a ticket with a full implementation in like 150 lines of code
04:16 <beddari> :-)
04:16 <igalic> """"There's nothing wrong with a hybrid model where you have data - pure data -
04:16 <igalic> in a data file and then a class similar in spirit to params.pp to take that data
04:16 <Volcane> beddari: but a bunch of people who refuse to learn from others experience thought they can do better - which got us to this
04:16 <igalic> and massage it and create derived data." <<< I do that right now, but those are very site specific modules
04:16 <igalic> Nothing to share on forge.
04:16 <Volcane> igalic: thats basically what http://p.devco.net/437/ is
04:17 <Volcane> igalic: though ntp doesnt derive data from its own data - but you can see there, a place for deriving data, a place for validating data, a place for using data
04:17 <Volcane> igalic: and pure data coming into the params from hiera
04:18 <igalic> line 25 (derive), 31+ (validate), 54+ use ..
04:18 <Volcane> yeah - you could go further and make some class as it mentions there and delegate all this into that class
04:19 <Volcane> this would combat the bullshit $real_foo pattern but would have some duplication
04:19 <Volcane> but you'd end up with a model to access data in - *exactly* like in other languages where you map your database to a object
04:19 <Volcane> and that object is responsible for loading/validation/deriving
04:20 <Volcane> this is nothing new or special, but omfg saying data is only useful if you can embed logic in it? the 90s called they want their stored procs back.
04:20 <igalic> Okay, never having been burned by stored procedures, can you explain to me why this is bad?
04:20 <beddari> :-)
04:20 <Volcane> non portable
04:21 <Volcane> LOTS of wisdom about this - just google, lots been written much better than i could on irc
04:22 <beddari> igalic: any recent system architecture books, really
04:22 <igalic> I might be confuzzling this with pl/SQL.
04:24 <igalic> beddari: any concrete recommendations?
04:24 <beddari> hmm hoping you wouldn't ask as for me it is just second nature, but I'll see
04:24 <beddari> ;-)
04:26 <Volcane> http://p.devco.net/438
04:26 <beddari> just as a general thing I can say I think a lot of what people earlier did with stored procedures are now done in separate, domain-specific dbs
04:26 <Volcane> there an example of a model class that validates / derives etc
04:26 <Volcane> the alternative we do today would be to do that and set $real_servers = ....
04:27 <Volcane> and using that in the template, total bollocks
04:28 <Volcane> this is exactly what you'd do in any programming language - this can probably be streamlined a bit but the idea is
04:28 <Volcane> you get one place for pure data, one place for validation/modifying and its a delegated data access for the model
04:29 <Volcane> thats roughly what i mean with the class similar in spirit to params.pp
04:30 <Volcane> but without the data embedded
04:31 <Volcane> though that should be elseif on line 6, meh
04:31 <beddari> but nooooboody would ever understand that ntp::model concept ;)
04:31 <beddari> hehe
04:31 <Volcane> but they understand params.pp? lol
04:32 <beddari> I'll rant about this with beer to anyone listening at Devopsdays London
04:32 <beddari> ;-)
04:33 <Volcane> updated with elsif :)
04:33 <beddari> lol
04:34 <Volcane> but yeah, not needed at all, first example does this fine - except you'd need some $real_servers var or something
04:34 <beddari> if people can see where the $panic is at ..
04:34 <beddari> ;-)
04:34 <beddari> no need to panic
04:35 <Volcane> yeah, well i kept the current dumb thing where you cant force panic to some value in the module
04:37 <beddari> its 13:37 on a friday, more coffee
04:37 <Volcane> :)
04:37 <Volcane> time for me to just ship my design as a gem and get everyone to use it :P
04:37 <beddari> thnx for Snipper, neat
04:38 <zipkid> Volcane: that would be the best!
04:43 <igalic> Volcane: I was asking yesterday in #puppet how to distribute files on a per-environment base, and you said to keep them in the module. I'm trying to figure out how that applies to some of my data:
04:43 <igalic> Namely, certificates (and their keys).
04:44 <beddari> .. ..
04:44 <beddari> igalic: so why .. are you asking?
04:44 <Volcane> why do you need them per env?
04:44 <beddari> igalic: hopefully not security :P
04:45 <Volcane> whats different between environments? surely you dont have the same fqdn in 2 environments?
04:45 <Volcane> so from a security perspective the certname based private mounts would do what u need?
04:45 <Volcane> though I'd be *very* surprised if that stuff is actually taken from the cert
04:46 <igalic> Volcane: okay, wait.. this is the wrong angle. The main reason is this:
04:47 <igalic> So far I've put them in /etc/puppet/files, but I've moved everything (modules, manifests, hieradata) to be managed by r10k, except files.
04:47 <igalic> Now I'm starting to think that *actually* this stuff isn't files, it's data. I should put it in hiera.
04:49 <beddari> certs and keys? :) yes
04:49 <beddari> I do, with a encrypted backend using the gpgme gem
04:50 <igalic> Yeeeeaaah.. I was just looking into how to put that stuff into yaml as a lazy first attempt.
04:51 <beddari> I also keep a manual human-only step for some that are more sensitive
04:51 <Volcane> oh yeah, you dont want to be using this %H, %h and %d stuff from fileserver.conf
04:52 <Volcane> http://docs.puppetlabs.com/guides/file_serving.html#file-server-configuration stuff mentioned here
04:52 <Volcane> which are apparently taken from the client SSL cert
04:52 <Volcane> lies lies
04:52 <Volcane> taken from the facts
04:52 <Volcane> not verified.
04:52 <beddari> $trusted on master ;)
04:53 <beddari> *laughs* but yeah, masterless
04:53 <Volcane> then the files are everywhere :P
04:54 <igalic> I see.
04:55 <Volcane> unless you have strict file naming conventions everywhere and build per-node payloads
04:55 <igalic> ?
04:55 <Volcane> for masterless
04:56 <Volcane> instead of copying all the files, pick out of your manifests ones for the node in question
04:56 <beddari> like for example SSLVerifyClient in apache for yum repos, self signed certs, build rpms per host
04:56 <Volcane> you know you name your certs on disk $fqdn.cert, so make a tarball that only include those
05:00 <Volcane> i have this for my code - well partially, i can parse a commit and figure out exactly which nodes it will touch. just need to do the same to produce the per-node tarballs i use with puppet apply
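A sketch of the per-node payload idea: certificates are named <fqdn>.cert/<fqdn>.key on disk, so the tarball shipped to a masterless node carries only its own credentials (the directory layout here is a placeholder).

    node=web01.example.com
    tar czf "payload-${node}.tar.gz" \
        manifests/ modules/ hieradata/ \
        "certs/${node}.cert" "keys/${node}.key"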
05:01 <beddari> I do the same but only for hieradata, all puppet-code is everywhere
05:02 <beddari> (which isn't the best approach clearly)
05:02 <beddari> which has me thinking about Ansible
05:02 <beddari> *laughs*
05:02 <Volcane> unconvinced its model will scale to large code bases :(
05:06 <beddari> so is there a bug for the fileserver vars if you know for a fact they are not derived correctly?
05:07 <Volcane> cant be bothered, last time i tried to convince them just trusting the client data is bad it was a waste of time
05:08 <beddari> .. gap .. chasm .. between that and what users expect to be true
05:10 <beddari> the $trusted band-aid fix went pretty quickly thanks to fiddyspence
05:10 <beddari> http://projects.puppetlabs.com/issues/19514
05:11 <beddari> but humm last comments on that bug puts what you just said in perspective doesn't it: One further comment I have on this approach is about where it touches hiera. We don’t currently interpolate hashes in hiera.yaml (or at least my testing didn’t make this work):
05:11 <beddari> :hierarchy:
05:11 <beddari> - %{trusted['clientcert']}
05:12 <beddari> "So the user needs to munge
05:12 <beddari> $trusted[‘clientcert’] to ${avariablenameofyourchoice} and then use that in the hierarchy"
06:23 <frankS2> Hi, I have a question regarding SSL keys and mcollective, ref: https://github.com/puppetlabs/puppetlabs-mcollective - why copy the certs to the puppet server and ship them from there, instead of making the module point to the puppet agent's keys on the local file system?
06:27 <_rc> because you might want to use a completely different ca chain for it, as you do for rabbitmq
06:28 <_rc> also for the server key for the ssl securityprovider it needs to be a common key for all the servers, so the puppet agent ones won't fit there
06:29 <_rc> there's nothing stopping you giving it the path to the puppet certs though, if you only want TLS
06:32 <frankS2> Ok, I have not deployed mcollective and rabbitmq yet, still reading up
06:32 <_rc> http://docs.puppetlabs.com/mcollective/deploy/standard.html#step-1-create-and-collect-credentials explains in detail what the credentials are used for
06:33 <AbhayC> http://pastebin.com/BYj6t3Fr
06:33 <AbhayC> unable to use runall with flatfile based discovery.
06:34 <AbhayC> Any other technique (other than fact filters etc) that i can use to target a list of nodes
06:35 <Volcane> it has to use normal discovery
06:36 <Volcane> if u want to limit it to nodes you might have to pass in multiple -I arguments
06:36 <_rc> AbhayC: so use runonce instead of runall?
06:37 <Volcane> you could probably work around that with a bit of custom code but its how its written as it uses the compound filters quite heavily and in that case you need to have actual filters supplied
06:39 <AbhayC> hmm. so simply supply hostname based fact filter?
06:39 <Volcane> sure any filter, as long as its a filter and not a -S filter
06:40 <AbhayC> it will accept a regex like "host1|host2"?
06:40 <Volcane> -I /host1|host2/
06:41 <AbhayC> cool tested. thanks
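So the working pattern for pointing runall at an explicit node list is identity filters rather than flatfile or -S discovery; a sketch (hostnames are placeholders).

    # concurrency of 3, limited to specific nodes via an identity regex
    mco puppet runall 3 -I /^(host1|host2|host3)\./
    # or repeat -I with exact certnames
    mco puppet runall 3 -I host1.example.com -I host2.example.com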
06:43 <frankS2> _rc: so do I have to create new unique certs for the servers and ship them through mcollective?
06:43 <frankS2> when using ssl
06:44 <frankS2> well not new, but copy from each server
06:44 <_rc> frankS2: you wouldn't ship them through mcollective, chicken and egg.
06:44 <frankS2> s/mcollective/puppet
06:45 <AbhayC> _rc: The stupidest excuse for using runall is that it hangs around for a while unless at max $concurrency nodes are done applying. This is a great source of relief for ops who cant handle the "Ok thats done. Whats next? We have to wait??"
06:45 <_rc> well you read the guide about which certs you need?
06:45 <frankS2> _rc: yes, twice
06:45 <frankS2> :p
06:46 <_rc> frankS2: so which set are you talking about?
06:46 <AbhayC> runonce *snap fingers. makes it intimidating for a lot of our users
06:47 <_rc> frankS2: the shared server cert used by the ssl plugin, or the one per server used for connecting to the middleware
06:47 <_rc> frankS2: and also what middleware?
06:47 <frankS2> _rc: im aiming for amq, as it seems to be the default in the module
06:48 <_rc> frankS2: ok, so still, which one are you talking about; the middleware server cert or the securityprovider server cert
06:48 <frankS2> server
06:48 <frankS2> securityprovider
06:49 <_rc> ok; yes you need to generate and ship that one, because it's *shared*
06:49 <frankS2> yes, i understand that now, thanks
06:49 <frankS2> but for example ssl_server_public => 'puppet:///modules/site_mcollective/certs/server.pem',
06:50 <frankS2> ssl_server_private => 'puppet:///modules/site_mcollective/private_keys/server.pem', this one i mean
06:50 <frankS2> to make things simpler the puppet agent server key could be used?
06:50 <_rc> yes that one.
06:50 <_rc> no
06:51 <_rc> see how it's called server.pem not foo.bar.com.pem; that's how you know it's the shared one.
06:51 <frankS2> so both private and public are shared
06:51 <frankS2> server keys
06:52 <_rc> they're two halves of the same key.
06:52 <frankS2> and the ssl_ca_cert is the one from the puppet ca
06:52 <frankS2> and the ssl_client_certs are the one to identify 'users'
06:53 <frankS2> but where does the amq cert come in?
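To make the shared server.pem point concrete, here is one way the shared securityprovider keypair can be generated and staged where the puppet:/// URLs above expect it; a sketch assuming the default open-source ssldir and a module named site_mcollective. The middleware TLS certificates from the deploy guide are a separate, additional set.

    # one keypair, shared by every mcollective server
    puppet cert generate mcollective-servers
    cp /var/lib/puppet/ssl/public_keys/mcollective-servers.pem \
       site_mcollective/files/certs/server.pem
    cp /var/lib/puppet/ssl/private_keys/mcollective-servers.pem \
       site_mcollective/files/private_keys/server.pem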
11:22 <frankS2> Hi, im getting a 400 error while trying to define the class ::mcollective, this is my output, node def, and module layout: http://pastie.org/8414783
11:22 <frankS2> Asked in puppet too, but i guess this is a better place :))
11:25 <frankS2> hm, when trying to install new modules i get:
11:25 <frankS2> Notice: Preparing to install into /etc/puppet/modules ...
11:25 <frankS2> Error: No source module metadata provided for mcollective
11:25 <frankS2> Error: Try 'puppet help module install' for usage
11:25 <frankS2> related, maybe?
11:29 <frankS2> http://projects.puppetlabs.com/issues/22902 oh, there it is
04:22 <frankS2> http://pastie.org/8416203 Hi im getting this error when trying to deploy mcollective with SSL, anyone know whats wrong?
06:36 <frankS2> Hi ive downloaded the nrpe plugin from github and renamed it to mrpe, included it like this: and i get the error that the DDL is not found: http://pastie.org/8416415
06:51 <Volcane> you renamed it
06:51 <Volcane> so why do you call it nrpe?
06:51 <frankS2> Volcane: i renamed it, i added package => true, and it started working
07:05 <frankS2> Im having a new problem with the plugin now, all plugins return UNKNOWN
07:06 <frankS2> from the audit log: 2013-10-20T16:04:57.076567+0200: reqid=48247a60cddb5a19887605824a5835f3: reqtime=1382277897 caller=cert=frank-admin@mclient agent=nrpe action=runcommand data={:process_results=>true, :command=>"check_load"}
07:07 <frankS2> Volcane: i renamed it back now
07:36 <frankS2> now i install like: mcollective::plugin { ['nrpe' package => true
07:46 <frankS2> when i use inventory on a node nrpe appears in both agent and data-plugins
09:30 <neoice> I want to get started writing my own mcollective plugins but it seems like the docs online aren't up to date/functional. does anyone have recommended references or reading material
09:43 <_rc> neoice: these docs? http://docs.puppetlabs.com/mcollective/simplerpc/agents.html
09:49 <neoice> _rc: that worked for me, the example here throws DDL errors: http://docs.puppetlabs.com/mcollective/reference/plugins/application.html
09:53 <_rc> neoice: well can you show the version of the code you ended up at that did that, and the error.
09:57 <neoice> _rc: I see that I may have overlooked "a simple application that speaks to a hypothetical echo action of a helloworld agent". do all applications require agents?
09:58 <neoice> yeah, adding the "helloworld.{rb,ddl}" files to agent solved my problem.
09:59 <_rc> so if you want to call on an agent then yes, but your application could interact with other existing agents
10:03 <neoice> _rc: can an application do useful work without calling into an agent? or is the application really just a wrapper around agent(s) with some extra data processing?
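For anyone following along: the "hypothetical helloworld agent" the application docs assume is just a SimpleRPC agent plus its DDL. A minimal sketch, abbreviated and illustrative rather than the official example verbatim; the plugin directory varies by platform.

    cd /usr/libexec/mcollective/mcollective/agent   # adjust to your libdir

    cat > helloworld.rb <<'RUBY'
    module MCollective
      module Agent
        class Helloworld < RPC::Agent
          # echo the :msg input straight back to the caller
          action "echo" do
            reply[:msg] = request[:msg]
          end
        end
      end
    end
    RUBY

    cat > helloworld.ddl <<'RUBY'
    metadata :name        => "helloworld",
             :description => "Echo example agent",
             :author      => "example",
             :license     => "ASL 2.0",
             :version     => "1.0",
             :url         => "http://example.com",
             :timeout     => 10

    action "echo", :description => "Echo a message back" do
      input :msg,
            :prompt      => "Message",
            :description => "Message to echo",
            :type        => :string,
            :validation  => '^.+$',
            :optional    => false,
            :maxlength   => 90

      output :msg,
            :description => "The echoed message",
            :display_as  => "Message"
    end
    RUBY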
10:26 <eric0> Volcane beddari zipkid thx for data-in-modules commentary (from scrollback a few days ago), i'm heading back into the mailing list thread w/this in mind