mindscratch: now that some people are here, here's an earlier question: when performing a rolling update (https://github.com/GoogleCloudPlatform/kubernet...) ...how does kubernetes know which image to update if a pod has multiple containers with different images 2:23 PM
jbeda: mindscratch: Right now, it runs client-side and is a little too simple. 2:24 PM It captures the list of pods and just kills them one by one. It assumes there is a replication controller that will 'heal' them with the new version 2:24 PM
jbeda: Ideally, the 'upgrader' would make sure that the new pods come up okay, do some health checking, etc. 2:25 PM Or you can do a blue/green type thing where you bring up a new set (with new labels) and switch the service over. 2:25 PM
mindscratch: jbeda: for the rolling update. if the pod has two containers "foo" and "bar", when I do the rolling update and specify -i "foo:latest" ...it's smart enough to match the "foo" container and update it? 2:28 PM
jbeda: mindscratch: It isn't based on image at all -- you might have 2 services that use the same image. The way it works is like this -- you have a replication controller that will launch, say, 10 replicas of a pod. You specify the image that you want as part of that. 2:29 PM
The replication controller is where the 'healing' happens. If a pod or node fails, the replication controller will spin up a new one 2:29 PM
if there are too many pods it'll kill some 2:30 PM
The replication controller keeps track of the set of pods it is managing via a label query. So it'd be something like 'service=foo, tier=prod' 2:30 PM
Up to you to figure out how you want to name/organize stuff 2:30 PM
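The healing behavior jbeda describes can be sketched in a few lines. This is an illustrative toy, not the actual Kubernetes controller code: `matches`, `reconcile`, and the dict-based pod shape are all invented for the example.

```python
# Toy sketch of a replication controller's reconcile step: it tracks pods
# via a label query and adds or kills pods to hit the desired replica count.
# (Names and data shapes are illustrative, not the real Kubernetes API.)

def matches(pod_labels, selector):
    """A pod satisfies the label query if every selector key/value is present."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

def reconcile(pods, selector, desired, make_pod):
    """Return a new pod list with exactly `desired` pods matching `selector`."""
    managed = [p for p in pods if matches(p["labels"], selector)]
    others = [p for p in pods if not matches(p["labels"], selector)]
    if len(managed) < desired:
        # heal: a pod or node failed, so spin up replacements from the template
        managed += [make_pod() for _ in range(desired - len(managed))]
    elif len(managed) > desired:
        # too many pods: kill the excess
        managed = managed[:desired]
    return others + managed
```

Pods outside the label query (say, `service=bar`) are untouched; only the set matching `service=foo, tier=prod` is reconciled up or down.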
When doing an update, it essentially does this: 2:30 PM
- Update the template for the replication controller so that new pods will use the new image 2:30 PM
- Capture the list of pods currently satisfying the label query the replication controller is using 2:31 PM
- Kill those pods one by one and leave it to the replication controller to spin up a new pod using the new updated template. 2:31 PM
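The three steps above can be sketched as a short function. This is a hypothetical simulation, not kubecfg's actual implementation; `delete_pod` and `wait_for_heal` are stand-ins for the real API calls and health polling.

```python
# Illustrative sketch of the client-side rolling update described above:
# 1. update the controller's template, 2. capture the matching pods,
# 3. kill them one by one and let the controller heal with the new template.
# (Function names and data shapes are invented for the example.)

def rolling_update(controller, pods, new_image, delete_pod, wait_for_heal):
    # 1. New pods created from the template will now use the new image
    controller["template"]["image"] = new_image
    # 2. Snapshot the pods currently satisfying the controller's label query
    selector = controller["selector"]
    old = [p for p in pods
           if all(p["labels"].get(k) == v for k, v in selector.items())]
    # 3. Kill each old pod; the replication controller replaces it
    for pod in old:
        delete_pod(pod)
        wait_for_heal()  # e.g. poll until the replica count is restored
```

Because the snapshot is taken before any pods are killed, pods the controller creates during the update (which already run the new image) are never deleted.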
Now -- which image gets used? This is a little bit of a bug in docker that we've been talking to them about 2:31 PM
Ideally, we would resolve my-image:latest to some identifier that was unique in time so that we could synchronize the version of an image across a cluster 2:32 PM
mindscratch: jbeda: ok, so can you do a rollingupdate of a replicationController whose pods have more than one container? 2:32 PM
jbeda: my-image:latest is a little bit of a 'soft link' and there is no way to make sure it resolves to the same thing across all machines 2:32 PM
jbeda: mindscratch: yes -- it is just a way to kill things so that you get new pods based on the updated template. 2:32 PM
actually -- kind of 2:33 PM
the concept of a rolling update will work but the current implementation makes some simplifying assumptions 2:33 PM
as we move from kubecfg to kubectl we are making that stuff more general 2:33 PM
as for image -- I'd suggest you name your images something stable that you never change. Instead of using my-image:latest, use my-image:v1.2.3.4 or my-image:git-abcedf 2:34 PM
That way you can be 100% sure which version you are deploying 2:34 PM
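The 'soft link' problem with mutable tags can be shown with a toy model. The digests and cache shapes below are invented for illustration: each node resolves a tag against its own local image cache, so `:latest` can point at different images on different machines.

```python
# Toy illustration (invented digests) of why a mutable tag like
# my-image:latest can diverge across a cluster while a stable,
# never-reused tag cannot.

node_a = {"my-image:latest": "digest-aaa",   # pulled last week
          "my-image:git-abcedf": "digest-ccc"}
node_b = {"my-image:latest": "digest-bbb",   # pulled after a new push
          "my-image:git-abcedf": "digest-ccc"}

def resolve(node_cache, image):
    """Each node resolves a tag against its own cache."""
    return node_cache[image]

# The mutable tag resolves differently on the two nodes...
latest_diverges = resolve(node_a, "my-image:latest") != resolve(node_b, "my-image:latest")
# ...while the stable tag resolves identically everywhere.
stable_agrees = resolve(node_a, "my-image:git-abcedf") == resolve(node_b, "my-image:git-abcedf")
```

This is why pinning to a version or git SHA makes a rolling update deterministic: every node that pulls the tag gets the same image.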
mindscratch: jbeda: that makes sense. i have to think through this a bit more to understand how a rolling update works for a pod that has multiple containers, since on the command-line only a single image is specified ($KUBECFG -image $DOCKER_HUB_USER/update-demo:$NEW_IMAGE -u $TIMING rollingupdate update-demo) 2:35 PM
jbeda: yeah -- the kubecfg stuff is a little too simple -- demos well but we want more options/checking/flexibility for real prod 2:35 PM
this is the type of thing that Brian Grant is working on (don't think he is on IRC right now) 2:35 PM