@hub-cap
Created July 11, 2013 21:32
1:00 <konetzed> imsplitbit: you could do it from your car
1:00 <konetzed> you could do it from a boat
1:00 <!> jrodom [email protected] has joined #openstack-trove
1:00 <!> jrodom [email protected] has quit Remote host closed the connection
1:00 <konetzed> even sitting with a goat
1:00 <hub_cap> oh god
1:01 <!> jrodom jrodom@nat/rackspace/x-odjcovnzqqhqzpxn has joined #openstack-trove
1:01 <konetzed> yes?
1:01 <imsplitbit> well I decided about 2:30 that I really needed to ice my knee cause it was really hurting
1:01 <hub_cap> hey jrodom
1:01 <jrodom> hi hub_cap
1:01 <hub_cap> SlickNik: vipul-away around?
1:01 <hub_cap> grapex: around?
1:01 <imsplitbit> and also realized that I didn't have time to make it home before the meeting
1:01 <imsplitbit> so I hung around
1:01 <konetzed> so is jrodom like a mini odom?
1:01 <hub_cap> HA
1:01 <jrodom> konetzed: yes.
1:01 <imsplitbit> demorris: yt?
1:02 <demorris> yes
1:02 <hub_cap> so i got to thinking, one of the vocal people in the last talk is in orientation today
1:02 <grapex> hub_cap: Yep
1:02 <konetzed> hub_cap: who was that?
1:02 <imsplitbit> it's ok
1:02 <hub_cap> conway
1:02 <hub_cap> twitty
1:02 <imsplitbit> 2 of the people that wanted to be vocal didn't make last meeting
1:02 <imsplitbit> but they're here now
1:02 <hub_cap> true imsplitbit
1:02 <imsplitbit> or at least jrodom is
1:02 <hub_cap> we will probably have to summarize and agree during the next wed meeting, lets get this party started
1:02 <imsplitbit> demorris...
1:03 <hub_cap> broke knee burn
1:03 <imsplitbit> damn dude
1:03 <imsplitbit> thats harsh
1:03 <hub_cap> lulz
1:03 <!> xs2praveen_ b6400baf@gateway/web/freenode/ip.182.64.11.175 has joined #openstack-trove
1:03 <hub_cap> go imsplitbit
1:03 <hub_cap> summarize what we have so far
1:04 <imsplitbit> ok so one thing we discussed at length last time was making instances that are a part of something bigger, i.e. a cluster or replication set, invisible in the /instance api so they can only be accessed through the /cluster api
1:04 <imsplitbit> I know jrodom and demorris had some specific points of discussion on this
1:04 <demorris> yes
1:04 <imsplitbit> can we start there and knock that you?
1:04 <demorris> so where do we start
1:04 <hub_cap> sure
1:04 <imsplitbit> /you/out/
1:04 <hub_cap> so i think the contention is this
1:04 <jrodom> demorris, why dont you go ahead and go first.
1:04 <hub_cap> should /clusters be all inclusive, or a supporting api to manipulate instances
1:05 <hub_cap> right?
1:05 <demorris> okay, so one of the things that changed from the original spec was that everything moved under clusters once a cluster resource is created
1:05 <demorris> I challenged that a bit and said that we should still maintain the ability to directly manipulate the individual resources that make up a cluster
1:06 <demorris> in addition to allowing specific operations directly on a cluster resource
1:06 <jrodom> hub_cap: that feels like an accurate very high level summary. at least today when i look at the api where /clusters is the control point, the overall usability of the api feels like it takes a hit
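The disagreement so far, as a minimal Python sketch. The field names and the idea of a cluster_id marker are assumptions made for illustration; they are not from any agreed spec.

    # Illustrative only: models the two listing behaviors being debated for GET /instances.
    # "cluster_id" is a hypothetical field marking membership in a cluster.

    instances = [
        {"id": "inst-1", "cluster_id": None},     # standalone instance
        {"id": "inst-2", "cluster_id": "clu-9"},  # member of a cluster
        {"id": "inst-3", "cluster_id": "clu-9"},
    ]

    def list_instances(hide_cluster_members):
        """GET /instances under the two proposals.

        hide_cluster_members=True  -> current spec: members only reachable via /clusters
        hide_cluster_members=False -> demorris/jrodom: members stay directly addressable
        """
        if hide_cluster_members:
            return [i for i in instances if i["cluster_id"] is None]
        return list(instances)

    print(list_instances(True))   # only inst-1: clustered members are hidden
    print(list_instances(False))  # all three instances listed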
1:06 <hub_cap> westmaas!!!!!!
1:07 <jrodom> heh
1:07 <hub_cap> give me ~1min to reread why we made that decision
1:07 <jrodom> poor gabe
1:07 <!> SlickNik [email protected] has left #openstack-trove
1:07 <demorris> imsplitbit: can you enumerate the 4/5 scenarios you mentioned this morning for replication / clustering?
1:07 <demorris> i think that is pertinent to this discussion
1:08 <imsplitbit> ok use cases that we see for replication/clustering are:
1:08 <imsplitbit> 1. for backups
1:08 <konetzed> demorris: are you thinking of a flexible system where one type of application might work in a cluster and independently at the same time? Also could this be disabled for applications that only work in a cluster and not as individual apps?
1:08 <imsplitbit> 2. for reporting
1:08 <imsplitbit> 3. for redundancy of data
1:08 <imsplitbit> 4. for improvement of app performance (i.e. scaling reads or writes or both)
1:09 <!> vipul-away is now known as vipul
1:09 <konetzed> imsplitbit: monitoring of a just an instance?
1:09 <imsplitbit> konetzed: can you expand?
1:10 <demorris> konetzed: there will be end users that want to directly spin up a cluster from scratch and only interact with a single endpoint for the cluster to for example grow or shrink the cluster, however, there will also be customers who start with a single instance and individually add read replicas to the primary instance
1:10 <vipul> o/
1:10 <hub_cap> heyo vipul
1:10 <vipul> sorry, late lunch
1:10 <imsplitbit> yay
1:10 <imsplitbit> I'll say, it's 3pm!
1:10 <imsplitbit> :)
1:10 <hub_cap> catchup will be fast ;)
1:10 <konetzed> imsplitbit: to make it simple lets use the mysql use case, i might just want to know about a single read only instance's load
1:10 <demorris> in the scenarios imsplitbit mentions, if they just want to run a backup off a secondary in the cluster, they could feasibly do that against the individual isntance
1:11 <konetzed> imsplitbit: for say is my lb working correctly across them or is my reporting slave under too much load and thats why jobs are taking too long
1:12 <imsplitbit> ok I think I see what you're asking but we're not talking monitoring right now
1:12 <konetzed> imsplitbit: ok sorry
1:12 <imsplitbit> unless I completely missed your point
1:12 <hub_cap> ok so back to the /instances vs /clusters
1:12 <imsplitbit> yes pls
1:12 <hub_cap> we made the decision to help prevent users from shooting themselves in the foot
1:13 <hub_cap> ie, deleting a master, resizing down something that they shouldnt etc
1:13 <jrodom> hub_cap - can you elaborate on the point about resizing something down that they shouldnt?
1:13 <demorris> hub_cap: right but couldn't that be controlled with policies that the operator / provider decides is appropriate
1:13 <demorris> trying to separate out that vs. what the API supports
1:13 <hub_cap> demorris: not easily
1:14 <demorris> why not?
1:14 <jrodom> wrt deleting a master, isnt that just validation that could enforce a desired behavior
1:14 <hub_cap> itd be much more work to make things like that configurable
1:14 <hub_cap> rather than just making the decisions
1:14 <hub_cap> u _do_ want something done eh?
1:14 <demorris> flags?
1:14 <hub_cap> demorris: i understand your point
1:14 <hub_cap> jrodom: yes i was in the middle of saying
1:14 <hub_cap> going back to the conversation, im not sure it matters that much _where_ it lives
1:14 <vipul> We shouldn't make a decision on an instance operation based on whether that instance belongs to some other thing
1:15 <imsplitbit> sweet jesus someone kick westmaas :)
1:15 <hub_cap> i am now
1:15 <hub_cap> im banning him temp
1:15 <imsplitbit> kk
1:16 <vipul> I think for simplicity though, it does make sense to have cluster-only operations and instance only...
1:16 <!> mode/#openstack-trove +b westmaas!*@* by hub_cap
1:16 <konetzed> with that would an instance joined to a cluster get a subset of normal commands?
1:16 <vipul> a Node in a cluster does not necessarily mean it's an Instance
1:17 <hub_cap> ok hes done
1:17 <jrodom> vipul: can you elaborate?
1:17 <hub_cap> someone remind me to unban him
1:17 <imsplitbit> vipul but which is more simple? /instance/{instance_id}/resize or /cluster/{cluster_id}/node/{node_id}/resize ?
1:17 <hub_cap> when we are done
1:17 <jrodom> if we're arguing for simplicity, i think the current spec is definitely more complicated from a user pov
1:17 <imsplitbit> when you want to resize an instance.
1:17 <vipul> imsplitbit: agreed the API is simpler if we use the first
1:18 <vipul> but it's not just about that
1:18 <hub_cap> well there is a 3rd option
1:18 <demorris> couldn't the cluster types supported have policies that govern what is allowed / isn't?
1:18 <!> The server 108.166.86.202 does not understand 'CLUSTER/{CLUSTER_ID}'
1:18 <vipul> if we want the API to be consistent, then we'll have the case where some operations are not permitted because an instance belongs to something bigger
1:18 <hub_cap> /cluster/{cluster_id} POST {instanceX-ramX}
1:18 <hub_cap> we dont have to have node/node_id stuff for this
1:19 <demorris> hub_cap: i don't get the first one
1:19 <hub_cap> we can specify 1) an instance to resize, or 2) if no instance is specified resize the cluster
1:19 <hub_cap> the first one? i only gave 1 option
1:19 <hub_cap> demorris: ^ ^
1:20 <hub_cap> basically make all operations happen on /cluster/cluster_id
1:20 <vipul> I think its important to separate the Nodes from Instances... Instances can be 'converted' to nodes, but once that's done, it is no longer an Instance
1:20 <hub_cap> /cluster/cluster_id/actions {resize: {master: new_flavor}}
1:20 <vipul> and no Instance operation applies to it
1:20 <jrodom> what is a node?
1:20 <demorris> a node is an instance and instance is a node
1:20 <demorris> that is just terminology
1:20 <imsplitbit> hub_cap: I thought we nix'd actions
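For reference, the three resize shapes floated above, written out as a small Python sketch. The paths and body fields follow what was typed in the channel; everything else (method names, flavor values) is an illustrative assumption, not a settled API.

    # Illustrative request shapes for resizing one member of a cluster.
    instance_id, cluster_id, node_id, new_flavor = "inst-2", "clu-9", "node-1", 6

    proposals = {
        # 1. instance stays a first-class resource (imsplitbit's first form)
        "instance_centric":   ("POST", f"/instances/{instance_id}/resize",
                               {"flavorRef": new_flavor}),
        # 2. node addressed underneath the cluster (imsplitbit's second form)
        "node_under_cluster": ("POST", f"/clusters/{cluster_id}/nodes/{node_id}/resize",
                               {"flavorRef": new_flavor}),
        # 3. hub_cap's variant: act on the cluster, name the target in the body
        "cluster_action":     ("POST", f"/clusters/{cluster_id}/actions",
                               {"resize": {"master": new_flavor}}),
    }

    for name, (method, path, body) in proposals.items():
        print(name, method, path, body)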
1:20 <!> jasonb365 jasonb365@nat/rackspace/x-acjvdmkseqohdjgv has joined #openstack-trove
1:20 <demorris> they NEVER stop being instances
1:21 <demorris> thy are just part of something bigger
1:21 <hub_cap> woah morris ;)
1:21 <demorris> :)
1:21 <vipul> That's where i think the disagreement is
1:21 <hub_cap> correct
1:21 <demorris> sorry I don't know irc etiquette
1:21 <demorris> i was just emphasizing :)
1:21 <vipul> The API will become really confusing IMO to the end user, when they can't do certain things because that instance somehow belongs to something bigger
1:21 <hub_cap> _emphasize_
1:21 <!> The server 108.166.86.202 does not understand 'EMPHASIZE/'
1:22 <hub_cap> /emphasize/
1:22 <hub_cap> SCREAM
1:22 <demorris> vipul: I think that protects the user though
1:22 <demorris> provided you have proper error messages
1:22 <jrodom> i think the current api proposal is very confusing as is, not sure we're successful if thats what were trying to prevent
1:22 <hub_cap> yes but u will get a list of instances and have to make a decision on what u can do to what
1:22 <!> xs2praveen_ b6400baf@gateway/web/freenode/ip.182.64.11.175 has quit Quit: Page closed
1:23 <hub_cap> i have 3 instances, 1s a real instance, 1s a master and i can do X, 1s a slave to that master and i can do Y
1:23 <jrodom> as trove supports different engines, etc. there clearly will be some systems that behave differently where different validation has to be applied
1:23 <hub_cap> OR
1:23 <hub_cap> i have 1 instance and 1 cluster i can do X on the instance and Y on the cluster
1:23 <hub_cap> to me the latter seems simpler honestly
1:23 <hub_cap> u will end up with like 20 cases of what u can and cant do on /instances
1:23 <konetzed> hub_cap: i agree but I think they should always be called instances
1:24 <vipul> exactly, and that grows with different engines
1:24 <hub_cap> im still not sold on /cluster/id/nodes/node resize tho
1:24 <hub_cap> but this also means we have 2 diff ways of resize if we dont do it this way
1:24 <!> Riddhi [email protected] has quit Quit: Riddhi
1:25 <hub_cap> if we do /clusters/id resize {master->new memory}
1:25 <hub_cap> some might say its not restful, but
1:25 <hub_cap> it kinda is, if the cluster is the mutable resource
1:25 <hub_cap> and the "instances/nodes" whatever they are, are just pieces of a cluster
1:26 <imsplitbit> I don't know that it is tho
1:26 <hub_cap> we own that decision
1:26 <imsplitbit> restful that is
1:26 <hub_cap> sure u are modifying the cluster ya?
1:26 <!> The server 108.166.86.202 does not understand 'CLUSTERS/ID'
1:26 <demorris> i have a question
1:26 <hub_cap> /clusters/id
1:26 <imsplitbit> but you aren't necessarily
1:26 <demorris> do you think we have maybe overcomplicated this by trying to jam replication and clustering into the same thing?
1:27 <hub_cap> nope
1:27 <demorris> I am wondering if we need to make a hard split here
1:27 <hub_cap> nope
1:27 <hub_cap> we will still have this conversation
1:27 <vipul> i think it fits well actually
1:27 <demorris> where we define a cluster as a homogenous concept
1:27 <vipul> just a type of cluster
1:27 <hub_cap> thats a great idea demorris
1:27 <hub_cap> master-slave is a homogenous concept
1:27 <hub_cap> ;)
1:27 <demorris> so when you have a cluster, nodes stay the same size, same attributes, etc.
1:27 <hub_cap> not gonna always be the case tho
1:28 <demorris> and replication is a slight different concept, where maybe it does not roll into a cluster because you would add secondaries of different sizes / attributes
1:28 <vipul> besides disk, i don't think that clusters need to be homogenous
1:28 <hub_cap> im not sure we can say that all clusters of all types should be homogeneous
1:29 <hub_cap> im not an authority to say that, at least
1:30 <imsplitbit> vipul: I agree. storage is kee
1:30 <imsplitbit> key
1:30 <demorris> yeah, I am not 100% either, just throwing it out for discussion
1:30 <hub_cap> for instance demorris hadoop
1:30 <jrodom> a true "cluster" is most likely going to be homogenous (think multi-master type replication use case) - where as the replication use case is likely to be different
1:30 <imsplitbit> but otherwise they *have* to be allowed to be different flavors
1:30 <hub_cap> the namenode in the cluster is going to be larger than the task trackers
1:30 <jrodom> imsplitbit: +1
1:30 <hub_cap> and need more memory
1:30 <hub_cap> we cant think just mysql
1:30 <jrodom> hub_cap: ok, i can see that.
1:31 <hub_cap> we have to allow flexibility in the cluster
1:31 <demorris> yeah that makes sense...
1:31 <konetzed> hub_cap:
1:31 <konetzed> +1
1:31 <hub_cap> hell w/ cassandra u can have different disk tech on each node
1:31 <imsplitbit> I just don't know that hub_cap's proposed resize makes the most sense
1:31 <hub_cap> imsplitbit: ya lets not go there _yet_
1:31 <hub_cap> lets get everyone on the same page wrt instances vs clusters
1:31 <hub_cap> to summarize
1:32 <imsplitbit> I think we can now all agree that we *have* to support different flavors
1:32 <hub_cap> def
1:32 <demorris> yes
1:32 <jrodom> +1
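To make the agreement concrete, here is a sketch of what a create request for a non-homogeneous cluster could look like. The field names are illustrative assumptions; the Hadoop shape follows hub_cap's namenode/tasktracker point, and the uniform volume sizes follow vipul's "besides disk" caveat.

    # Illustrative create-cluster payload allowing mixed flavors per member.
    create_cluster_request = {
        "cluster": {
            "name": "analytics",
            "cluster_type": "hadoop",   # the type would drive what the policy allows
            "nodes": [
                {"role": "namenode",    "flavorRef": 8, "volume": {"size": 100}},
                {"role": "tasktracker", "flavorRef": 4, "volume": {"size": 100}},
                {"role": "tasktracker", "flavorRef": 4, "volume": {"size": 100}},
            ],
        }
    }

    # konetzed's counterpoint: some cluster types may still *require* uniform flavors,
    # which would be enforced by the per-type policy rather than by the API shape.
    flavors = sorted({n["flavorRef"] for n in create_cluster_request["cluster"]["nodes"]})
    print("distinct flavors in this cluster:", flavors)   # [4, 8] -> heterogeneous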
1:32 <imsplitbit> ok. have we yet agreed on instances vs clusters?
1:32 <hub_cap> do we 1) have conditionals on types in /instances that allow for a difference in how each instance is handled, or 2) have conditionals on a per-cluster type
1:32 <hub_cap> we have not imsplitbit
1:33 <imsplitbit> ok just checking
1:33 <imsplitbit> and pushing for a decision :)
1:33 <hub_cap> so back to my point, i have 3 instances, 1 master, 1 slave, 1 regular instance, i can do X on master, Y on slave and Z on regular instance
1:33 <hub_cap> or i have 1 instance and 1 cluster. the instance i can do X and teh cluster i can do Y where Y is defined in terms of the cluster-type
1:33 <hub_cap> and scale that to 300 instances
1:33 <hub_cap> in your mind
1:33 <!> Riddhi [email protected] has joined #openstack-trove
1:34 <hub_cap> with master slave, multi master, galera/tungsten
1:34 <hub_cap> and percona
1:34 <imsplitbit> and redis
1:34 <imsplitbit> and mongo
1:34 <hub_cap> exactly
1:35 <konetzed> to me it seems easier if instances joined to a cluster have their operations listed there
1:35 <jrodom> the problem with that is every one of those technologies are going to have different rules that still have to be enforced within a cluster anyway
1:35 <imsplitbit> jrodom: +1
1:36 <demorris> +1, a cluster needs a policy that defines what can be done on what resources (clusters vs. ind. instances)
1:36 <hub_cap> right josh
1:36 <demorris> so the redis cluster type policy would differ from the MySQL cluster policy, etc.
1:36 <hub_cap> so do we make /instances have all the policies
1:36 <konetzed> demorris: i hope so
1:36 <hub_cap> or do we have /instances be uniform, guaranteed to do X-Y-Z on anything in /instances
1:36 <hub_cap> and /clusters YMMV
1:36 <demorris> policies would be associated with supported cluster types
1:37 <imsplitbit> and that the rules live where? defined where? /clustertypes?
1:37 <demorris> allowing them to vary across dbengine's
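One way to read the "policies per cluster type" idea, as a Python sketch: a table keyed by cluster type that says which member-level operations are permitted, consulted before the operation runs. The cluster types, operation names, and rules below are examples only; nothing here is agreed behavior.

    # Illustrative per-cluster-type policy table (demorris's suggestion).
    CLUSTER_POLICIES = {
        "mysql_master_slave": {
            "resize_member": True,    # per imsplitbit: resizing one member is fine here
            "delete_master": False,   # hub_cap's foot-gun case
        },
        "redis_master_slave": {
            "resize_member": False,   # hypothetical: downsizing a slave may hurt memory
            "delete_master": False,
        },
    }

    def is_allowed(cluster_type, operation):
        """Return True if the cluster type's policy permits the operation."""
        return CLUSTER_POLICIES.get(cluster_type, {}).get(operation, False)

    print(is_allowed("mysql_master_slave", "resize_member"))  # True
    print(is_allowed("redis_master_slave", "resize_member"))  # False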
1:37 <hub_cap> we all understand that we will have different policies
1:37 <hub_cap> its about how the user interacts w/ teh api
1:37 <hub_cap> imsplitbit: shhhhhh no implementation
1:37 <imsplitbit> gotcha
1:37 <hub_cap> oh thats api
1:37 <hub_cap> its kinda both
1:37 <imsplitbit> https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
1:37 <jrodom> i guess wrt /instances i see most of the operations today being valid against an instance that belongs to a cluster.
1:37 <konetzed> dont you think having different subsets of what a user can do through the api depending on an instance's state in the cluster is confusing to an end user?
1:37 <imsplitbit> refresher
1:38 <hub_cap> jrodom: should u be able to downsize a master's memory
1:38 <jrodom> hub_cap: sure, why not
1:38 <imsplitbit> hub_cap: absolutely
1:38 <konetzed> hub_cap: i dont see why not
1:38 <hub_cap> w/o mucking w/ the slaves?
1:38 <imsplitbit> yep
1:38 <imsplitbit> absolutely
1:38 <jrodom> yeah
1:39 <imsplitbit> depending on the cluster type this may not be a bad operation
1:39 <hub_cap> sure well now multi master
1:39 <hub_cap> i wonder if its ok to downsize 1 of teh instances
1:39 <hub_cap> right now we are talking only master-slave
1:39 <imsplitbit> right but resizing the master or allowing that should be a part of the rules that are per cluster type no?
1:39 <hub_cap> imsplitbit: yes
1:39 <imsplitbit> and yes
1:39 <imsplitbit> it still is ok
1:39 <konetzed> hub_cap: and only the mysql use case
1:39 <imsplitbit> in a mysql use case
1:40 <hub_cap> even more to the point konetzed
1:40 <imsplitbit> I don't know about mongodb
1:40 <hub_cap> well how bout redis
1:40 <hub_cap> i assume that downgrading a slave would be not-so-good for memory
1:40 <imsplitbit> but fundamentally the ability to resize an instance flavor *should* be allowed
1:40 <konetzed> thats one reason i like having everything under /cluster wouldnt it make it easier to enforce rules
1:40 <konetzed> ?
1:40 <imsplitbit> and only turned off by rules
1:40 <hub_cap> konetzed: correct thats what im getting at
1:41 <konetzed> ok so what is the big worry
1:41 <konetzed> customer ease
1:41 <konetzed> or code simplisity?
1:41 <demorris> hub_cap: but if the instances know they are part of a cluster they can get the rules as well
1:41 <hub_cap> customer ease
1:41 <konetzed> and i am making words up with my polish spelling
1:41 <imsplitbit> ease of use, and intuitiveness I suppose
1:41 <hub_cap> demorris: well ya, but its not uniform from a exception perspective i think
1:41 <jrodom> i see this as an api usability issue, konetzed
1:42 <hub_cap> instance 42 i cant downgrade
1:42 <hub_cap> instance 45 i cant modify disk
1:42 <hub_cap> instance 99 i cant mod vcpus
1:42 <konetzed> i think its very confusing saying i can do x or y or z on an instance depending on its role in a cluster resource that i might not be aware of when i just look at it as an instance
1:42 <hub_cap> instance 22 i cant downgrade w/o first downgrading instance 45
1:42 <hub_cap> or
1:42 <vipul> konetzed: agreed
1:42 <hub_cap> instance 1, 2, 3, 45, 99 ic an do whatever i want
1:42 <jrodom> cluster 42 i cant downgrade
1:43 <jrodom> cluster 45 i cant modify disk
1:43 <hub_cap> and cluster 1, i cant downgrade the slave w/o first downgrading the master
1:43 <jrodom> cluster 99 i cant mod vcpus
1:43 <jrodom> isnt it not that different?
1:43 <hub_cap> its not really that different no, but do we want to restrict the funk to /clusters
1:44 <hub_cap> so lets do this, lets switch this convo up
1:44 <konetzed> stupid question is there a use case where something wouldnt have a stand alone instance but only be part of a cluster, redis mongo?
1:44 <hub_cap> and go pro /instances
1:44 <imsplitbit> is there a case where we would need to exercise rules on /instances outside of the cluster context?
1:44 <hub_cap> so what does /clusters buy us
1:44 <konetzed> if we have a use case where something wouldnt be stand alone wouldnt that drive us to put things under a cluster
1:45 <hub_cap> well hold up lets take this conversation and switch it
1:45 <hub_cap> lets see if its as bad as we think
1:45 <hub_cap> cuz it might not be
1:45 <imsplitbit> hub_cap: /cluster I think gets us helper methods like make me a cluster of 5 nodes
1:45 <hub_cap> ok what else
1:45 <demorris> imsplitbit: +1
1:45 <hub_cap> no resizes
1:45 <hub_cap> create/delete?
1:45 <vipul> it's a logical grouping of a Resftul resource
1:45 <demorris> it makes it easier to grow / shrink a cluster
1:45 <imsplitbit> a way to do bulk/mass operations on an entity made up of instances
1:45 <jrodom> clusters could be a helper resource (E.g. single create for multi-node), etc
1:45 <hub_cap> so youre saying u have resize in clusters demorris?
1:46 <demorris> yeah, for a homogeneous cluster that makes sense
1:46 <hub_cap> right which we already know we cant guarantee
1:46 <imsplitbit> I would contend that resize in cluster works on all nodes
1:46 <hub_cap> right and we cant do that
1:46 <konetzed> why not?
1:46 <imsplitbit> why not?
1:46 <hub_cap> cuz we cant guarantee homogeneous clusters
1:46 <imsplitbit> you don't need to
1:46 <demorris> hub_cap: why not?
1:46 <konetzed> why not?
1:46 <hub_cap> wtf guys
1:46 <imsplitbit> but you can *allow* it
1:47 <hub_cap> hadoop namenode
1:47 <demorris> you can have the policy restrict that
1:47 <konetzed> what about other clusters where you want everything to be the same size
1:47 <demorris> konetzed: +1
1:47 <hub_cap> ok so lets say we can do resize, create
1:47 <demorris> policies!
1:47 <hub_cap> anything else
1:48 <demorris> i want to separate out what the API allows vs. what an operator / provider chooses to enforce
1:48 <hub_cap> i guess list instances
1:48 <imsplitbit> I've got 3 nodes on a cluster. all 3 diff sizes. a master 16G and 2 slaves 4G. I need to ramp up the slaves cause I'm about to do alot of business. /cluster/{cluster_id}/resize PUT {flavor: 6}
1:48 <imsplitbit> that makes all nodes flavor 6
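imsplitbit's example as a runnable sketch: a cluster-level resize that fans out one resize per member, whatever their current flavors. Flavor ids and field names are made up for illustration; whether this fan-out is even permitted would itself be a per-cluster-type policy decision.

    # Illustrative fan-out for /cluster/{cluster_id}/resize PUT {flavor: 6}.
    cluster_nodes = [
        {"id": "node-1", "role": "master", "flavorRef": 5},  # the 16G master
        {"id": "node-2", "role": "slave",  "flavorRef": 2},  # 4G slave
        {"id": "node-3", "role": "slave",  "flavorRef": 2},  # 4G slave
    ]

    def cluster_resize(nodes, new_flavor):
        """Return the per-member resize operations a cluster-level resize would issue."""
        return [
            {"member": n["id"], "resize": {"flavorRef": new_flavor}}
            for n in nodes
            if n["flavorRef"] != new_flavor   # skip members already at the target flavor
        ]

    for op in cluster_resize(cluster_nodes, 6):
        print(op)   # three operations, one per member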
1:48 <hub_cap> sure thats fine
1:48 <hub_cap> lets try to focus on what it can do and not how it does it. im trying to get a feel for what clusters will look like if /instances still controls most of the logic/actions
1:48 <hub_cap> list a cluster, create a cluster, and resize a cluster
1:48 <hub_cap> it also makes sense to delete a cluster ya?
1:49 <konetzed> i would think so
1:49 <jrodom> yes
1:49 <imsplitbit> yah
1:49 <vipul> resize_volume?
1:49 <hub_cap> sure both resize
1:49 <imsplitbit> yep
1:49 <hub_cap> cept public trove cant resize volumes
1:49 <hub_cap> CUZ IT DOESNT EXIST IN CINDER !!!!
1:49 <hub_cap> :o
1:49 <imsplitbit> :)
1:49 <konetzed> no screaming
1:49 <hub_cap> i digress
1:49 <imsplitbit> we'll get there
1:49 <vipul> lol
1:49 <vipul> what about creating a User or Database
1:49 <vipul> which instance would you target
1:50 <vipul> if that wasn't part of cluster
1:50 <hub_cap> you would have to know to target your master
1:50 <vipul> exactly
1:50 <konetzed> see thats confusing to me
1:50 <hub_cap> or we will have error messages saying "sry use the master XXX"
1:50 <imsplitbit> that was one of the confusing things that led us down favoring /cluster
1:50 <hub_cap> and if u delete a master
1:50 <imsplitbit> because the knowledge of where to actually send the create user lies in the cluster type
1:50 <hub_cap> we will have error message "sorry u cant delete a master w/ slaves attached"
1:51 <hub_cap> and eventually, maybe, we will have issues w/ resizing
1:51 <hub_cap> _maaaaaybe_
1:51 <konetzed> so if you try an operation on a slave and not a master then you hand back a referral url saying use this instance?
1:51 <hub_cap> or issues w/ other resource creation for other techs
1:51 <hub_cap> konetzed: possibly
1:51 <hub_cap> or just error
1:51 <imsplitbit> or a referral
1:51 <hub_cap> or we make sure that all slaves are RO
1:51 <hub_cap> ;)
1:51 <konetzed> yea one of them
1:51 <hub_cap> and silently fail
1:51 <imsplitbit> :)
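A sketch of the validation/referral behavior being described for the model where /instances stays primary: an operation is checked against the member's role and either rejected or redirected to the master. The error wording comes from the lines above; the roles, operation names, and the rules themselves are assumptions for illustration.

    # Illustrative role-aware validation for instance operations in a replication set.
    class ClusterRuleError(Exception):
        pass

    def check_operation(instance, operation):
        """Raise ClusterRuleError if the operation is not valid for this member's role."""
        role = instance.get("role")
        if operation == "create_user" and role == "slave":
            # konetzed's referral idea: point the caller at the right member
            raise ClusterRuleError(
                f"sorry, use the master {instance['master_id']} for create_user")
        if operation == "delete" and role == "master":
            # sketch assumes slaves are attached; a real check would verify that
            raise ClusterRuleError("sorry u cant delete a master w/ slaves attached")

    master = {"id": "inst-1", "role": "master"}
    slave = {"id": "inst-2", "role": "slave", "master_id": "inst-1"}

    for inst, op in [(slave, "create_user"), (master, "delete")]:
        try:
            check_operation(inst, op)
        except ClusterRuleError as err:
            print(err)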
1:52 <hub_cap> oh and what about /configurations on a instance in a cluster
1:52 <konetzed> well if we are going to make sure a user cant shoot themself in the foot we shouldnt silently fail
1:52 <hub_cap> we havent even touched that yet
1:52 <hub_cap> konetzed: i know i was jokin ;)
1:52 <jrodom> i think there are use cases where you'd want users unique to a slave for a dedicated reporting or backup usre
1:52 <imsplitbit> hub_cap: good point
1:52 <imsplitbit> configs get tricky
1:52 <hub_cap> im sure there are some things u _shouldnt_ be allowed to execute on a slave eh?
1:52 <hub_cap> is not a mysql guy
1:52 <hub_cap> Bender: Call me old fashioned but I like a dump to be as memorable as it is devastating.
1:53 <hub_cap> mysql dump of course
1:53 <konetzed> oy
1:53 <imsplitbit> because, for the mysql use case, you have to have a unique server-id in the conf file
1:53 <imsplitbit> and you can't screw that up and come back from it easily
1:53 <imsplitbit> i.e. changing the id of an existing node would be bad news
1:53 <hub_cap> did jrodom drop?
1:53 <konetzed> hub_cap: no message about that
1:53 <imsplitbit> he just doesn't like you
1:53 <jrodom> sry, my irc client got hung up
1:53 <hub_cap> LOL
1:54 <hub_cap> there u are
1:54 <konetzed> imsplitbit: so your for configs under /cluster
1:54 <imsplitbit> I'm not sure to be honest
1:54 <hub_cap> we are talking about /configurations editing for an instance in a cluster
1:54 <imsplitbit> I think so
1:54 <konetzed> hub_cap: do centralized configs allow us to set options for an instance
1:54 <hub_cap> jrodom: demorris woudl know better
1:54 <konetzed> hub_cap: like can we set and store this id that imsplitbit is talking about?
1:55 <hub_cap> oh central configs
1:55 <hub_cap> sry
1:55 <konetzed> jrodom: ^^^^^^^^
1:55 <hub_cap> no no i was thinking config editing
1:55 <hub_cap> it doesnt yet but there is no reason why it cant be extended
1:55 <vipul> i don't think server-id would be something user would control
1:55 <konetzed> they shouldnt be allowed to edit the id
1:55 <konetzed> vipul: +1
1:55 <hub_cap> for sure
1:55 <imsplitbit> as far as config editing you can def have different configs
1:55 <imsplitbit> but there is also a case to allow for identical
1:55 <imsplitbit> vipul: +10000
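A sketch of the configuration point: per-member overrides are allowed (e.g. tuning a reporting slave differently), but system-managed keys like MySQL's server-id are assigned by the service and ignored if a user tries to set them. Everything except the server-id rule is an illustrative assumption.

    # Illustrative per-member config assembly: user overrides merged over defaults,
    # with system-managed keys stripped from user input.
    SYSTEM_MANAGED_KEYS = {"server-id"}   # per vipul/konetzed: never user-editable

    DEFAULTS = {"max_connections": 200, "query_cache_size": 0}

    def build_member_config(member_index, user_overrides):
        """Return the effective config for one member of a replication set."""
        overrides = {k: v for k, v in user_overrides.items()
                     if k not in SYSTEM_MANAGED_KEYS}
        config = dict(DEFAULTS, **overrides)
        config["server-id"] = member_index + 1   # unique per member, assigned by the service
        return config

    print(build_member_config(0, {}))                                 # master, defaults
    print(build_member_config(2, {"max_connections": 50,
                                  "server-id": 999}))                 # user's server-id ignored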
1:56 <hub_cap> ok so we have 5 min and imsplitbit has a hard stop
1:56 <jrodom> a read slave, for instance, may be optimized differently
1:56 <jrodom> vipul: +1
1:56 <konetzed> imsplitbit: yea you might wana tune a reporting slave differently or something right?
1:56 <imsplitbit> yep
1:56 <hub_cap> i think weve enumerated the issue
1:56 <hub_cap> but have not come to a conclusion
1:56 <konetzed> hub_cap: yes but have we come up with a solution
1:56 <hub_cap> konetzed: ?
1:56 <vipul> if you have different flavors for the slaves, then the config becomes tricky, but otherwise most of the config will be the same across cluster
1:56 <!> demorris_ [email protected] has joined #openstack-trove
1:56 <konetzed> hub_cap: solution = conclusion
1:56 <imsplitbit> vipul: right
1:56 <hub_cap> lol konetzed
1:57 <hub_cap> so the main contention point is do we restrict logic/errors/confusion to /clusters and put all operations on it
1:57 <vipul> i think we still need agreement on Instance != Node
1:57 <hub_cap> or leave logic/errors/confusion in /instances and make /clusters 3 or 4 helpers
1:57 <konetzed> i can think of use cases where slaves could have different configs
1:58 <imsplitbit> hub_cap: I lean toward the latter
1:58 <hub_cap> vipul: i dunno if we do... thatll come out of the decision of what those are
1:58 <konetzed> i think instance is instance we never speak of node
1:58 <jrodom> i would propose that we model it out as an api spec and use that to have a better conversation about the usability.
1:58 <hub_cap> *what the questions are*
1:58 <hub_cap> jrodom: i think thats a great idea
1:58 <konetzed> vipul: i dont like two names for what are the same thing in reality
1:58 <jrodom> both certainly have pros/cons
1:58 <imsplitbit> well we *had* a part of the spec done with instances and clusters separate
1:58 <!> demorris [email protected] has quit Ping timeout: 264 seconds
1:58 <!> demorris_ is now known as demorris
1:58 <imsplitbit> and it's been changed to put everything under /cluster
1:59 <hub_cap> lol demorris
1:59 <vipul> konetzed: the issue is with the API, if it is an instance, then i should be able to operate via /instances
1:59 <hub_cap> imsplitbit: lets chat tomorrow
1:59 <hub_cap> about how to model this on wiki.openstack
1:59 <vipul> if we consider it as a separate sub-resource, then /instance operations do not apply
1:59 <konetzed> vipul: im confused on why /instance thats stand alone and /cluster/instance cant be used
1:59 <hub_cap> ill help u out w/ it
1:59 <imsplitbit> hub_cap: def
1:59 <imsplitbit> thank you
1:59 <imsplitbit> is new at api stuffs
1:59 <hub_cap> ok next wednesday meeting
1:59 <imsplitbit> relatively that is
1:59 <hub_cap> IS ALL ABOUT this
1:59 <vipul> konetzed: I guess you could, but not very clear to the end user
1:59 <hub_cap> sound good vipul?
2:00 <imsplitbit> I'd like to not wait 2 weeks to meet again
2:00 <vipul> ok
2:00 <hub_cap> itll be a day after h2 is cut
2:00 <vipul> good with me
2:00 <imsplitbit> barring any knee surgeries
2:00 <imsplitbit> :)
2:00 <hub_cap> so we wont have to really talk about h2 anymore
2:00 <hub_cap> lol imsplitbit
2:00 <imsplitbit> well thats why we skipped discussing it last week
2:00 <hub_cap> imsplitbit: we are going to revisit this during our weekly
2:00 <imsplitbit> but I'm back and partially fixed
2:00 <imsplitbit> ok
2:00 <hub_cap> imsplitbit: not really
2:00 <hub_cap> we <3 u but thre was a holidy
2:00 <hub_cap> *holidy
2:00 <hub_cap> DAMMIT
2:00 <imsplitbit> as long as we can dedicate some real time on it
2:00 <hub_cap> holiday
2:01 <konetzed> hub_cap: you spell like me
2:01 <hub_cap> truu
2:01 <imsplitbit> yes but thursday isn't a set day
2:01 <demorris> is lost as usual
2:01 <hub_cap> yes it is dammit dammmmmit dmammmmmmmit
2:01 <imsplitbit> we *could* have done it wednesday or friday
2:01 <demorris> what did we just decide
2:01 <konetzed> someone give demorris a map
2:01 <hub_cap> demorris: to do the opposite of what u think is right
2:01 <jrodom> ill take care of demorris
2:01 <imsplitbit> demorris: hub_cap is gonna help me refine the spec as it is
2:01 <konetzed> lol
2:01 <hub_cap> lol jrodom
2:01 <imsplitbit> wednesday we're going to dedicate most of the meeting to continuing the discussion
2:01 <demorris> I know we like to do the opposite of what I want
2:02 <demorris> what else is new :p
2:02 <konetzed> imsplitbit: +1
2:02 <konetzed> my fav kinda meeting
2:02 <hub_cap> heh demorris
2:02 <!> djohnstone [email protected] has quit Ping timeout: 245 seconds
2:02 <hub_cap> ok go home imsplitbit
2:02 <imsplitbit> I will email out the link to the revised spec after hub_cap and I look at it
2:02 <hub_cap> lets summarize this and spec it out tomorrow
2:02 <imsplitbit> oh hub_cap I'm out tomorrow
2:02 <imsplitbit> I'll get on irc and discuss anyway
2:02 <hub_cap> no no
2:02 <imsplitbit> I just have a ton of dr apts tomorrow
2:02 <hub_cap> i can use teh day to finish my work w/ configuration editing
2:02 <imsplitbit> so I took the day off
2:03 <hub_cap> we can do monday
2:03 <imsplitbit> ok monday first thing?
2:03 <hub_cap> ya
2:03 <!> vipul is now known as vipul-away
2:03 <!> vipul-away is now known as vipul
2:03 <hub_cap> my first thing != your first thing ;)
2:03 <imsplitbit> sutures come out tomorrow yay!
2:03 <jrodom> im happy to volunteer some time too
2:03 <imsplitbit> I know
2:03 <imsplitbit> my 9am yes?
2:03 <imsplitbit> lol
2:03 <hub_cap> sure
2:03 <hub_cap> wait w bated breath
2:03 <imsplitbit> your first thing monday you need to hit me up
2:03 <imsplitbit> whenever that is
2:03 <imsplitbit> just dom'
2:04 <imsplitbit> don't make it 5pm my time
2:04 <imsplitbit> I get here at 7am cst
2:04 <imsplitbit> I can text you when I get in hub_cap
2:04 <imsplitbit> :)
2:04 <imsplitbit> or when I get up
2:04 <hub_cap> go ahead u know thatll help
2:04 <imsplitbit> I would *never* do that :)
2:05 <imsplitbit> *have* never done that
2:05 <hub_cap> *again* maybe
2:05 <imsplitbit> ;-)
2:05 <hub_cap> jrodom: that woudl be nice too
2:05 <hub_cap> u can meet w/ us on monday if u have time
2:06 <!> jmontemayor jmontemayo@nat/rackspace/x-vnaikmzsxdwyzkzi has quit Quit: My MacBook Pro has gone to sleep. ZZZzzz…
2:07 <imsplitbit> he will
2:07 <imsplitbit> or he will pay
2:07 <imsplitbit> muahahahaha
2:07 <jrodom> err, well im going to be on ETO monday...
2:07 <jrodom> just looked
2:07 <imsplitbit> ok I'm going to go home and ice my knee. see you guys on monday
2:07 <jrodom> we'll figure something out
2:07 <jrodom> cya
2:07 <imsplitbit> lol
2:07 <imsplitbit> bye
2:08 <konetzed> latz
2:08 <!> KennethWilke kwilke@nat/rackspace/x-esuhjfijztpyqfwp has quit Remote host closed the connection
2:10 <hub_cap> LOL jrodom
2:10 <hub_cap> ill hit u up tomorrow so we can chat about it jrodom
2:13 <!> jmontemayor [email protected] has joined #openstack-trove
2:18 <!> vipul is now known as vipul-away
2:18 <!> zacksh_ is now known as zacksh
2:19 <!> SlickNik [email protected] has joined #openstack-trove
2:19 <!> SlickNik [email protected] has left #openstack-trove
2:21 <!> SlickNik [email protected] has joined #openstack-trove
2:21 <SlickNik> crap
2:22 <SlickNik> There was supposed to be a meeting this afternoon, wasn't there?
2:22 <demorris> SlickNik: you missed the party
2:23 <SlickNik> yeah, I was looking into something and it completely slipped my mind.
2:24 <SlickNik> Oh well, any one got the notes?
2:24 <demorris> we decided to rename Trove again to "Kibbles N' Bits"