```nginx
# required to run in a container
daemon off;
user nginx;
worker_processes {{ or (getv "/deis/router/workerProcesses") "auto" }};
pid /run/nginx.pid;

events {
    worker_connections {{ or (getv "/deis/router/maxWorkerConnections") "768" }};
    # multi_accept on;
}
```
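The `{{ or (getv ...) "default" }}` pattern falls back to the literal when the etcd key is unset, so the defaults can be overridden by writing the keys confd watches. A minimal sketch using the python-etcd client (the etcd address here is an assumption):

```python
import etcd  # python-etcd; the etcd address below is an assumption

client = etcd.Client(host='127.0.0.1', port=4001)

# confd watches these keys and re-renders the router template on change
client.write('/deis/router/workerProcesses', '4')
client.write('/deis/router/maxWorkerConnections', '1024')
```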
```python
class App:
    def _scheduler(self):
        # resolve the scheduler backend (module, auth, ApiEndpoint, options, key)
        ...

    def init(self):
        # initialize the name, scheduler, and version; bootstrap the App
        self._scheduler().init()

    def start(self):
        ...
```
:title: Choosing a Scheduler
:description: How to choose a scheduler backend for Deis.

Choosing a Scheduler
====================

The :ref:`scheduler` creates, starts, stops, and destroys each :ref:`container` of your app.
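Concretely, each backend exposes the same small lifecycle surface; a minimal sketch in Python (the method names here are illustrative, not Deis's exact API):

```python
from abc import ABC, abstractmethod

class SchedulerClient(ABC):
    """Illustrative lifecycle surface for a scheduler backend."""

    @abstractmethod
    def create(self, name, image, command):
        """Create a container for the app without starting it."""

    @abstractmethod
    def start(self, name):
        """Start the named container."""

    @abstractmethod
    def stop(self, name):
        """Stop the named container."""

    @abstractmethod
    def destroy(self, name):
        """Stop and remove the named container."""
```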

    ##start k8sapps
    {{range $kapp := lsdir "/registry/services/specs/default"}}
    upstream {{base $kapp}} {
        {{$appdir := printf "/registry/services/specs/default/%s" $kapp}}{{range gets $appdir}}
        server {{$data := json .Value}}{{$data.spec.portalIP}}:80;
        {{end}}
    }

    server {
        server_name ~^{{ $kapp }}\.(?<domain>.+)$;
        # assumed completion: route matching vhosts to the app's upstream
        location / {
            proxy_pass http://{{base $kapp}};
        }
    }
    {{end}}
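The template reads the Kubernetes service specs that the kube registry stores in etcd. A rough Python equivalent of what it renders, using the python-etcd client (the etcd address is an assumption):

```python
import json
import etcd  # python-etcd; the etcd address below is an assumption

client = etcd.Client(host='127.0.0.1', port=4001)

# Mirror the template: one nginx upstream per service dir, with one
# server line per JSON spec stored under that dir.
for app_dir in client.read('/registry/services/specs/default').children:
    name = app_dir.key.rsplit('/', 1)[-1]
    print('upstream %s {' % name)
    for node in client.read(app_dir.key, recursive=True).leaves:
        spec = json.loads(node.value)
        print('    server %s:80;' % spec['spec']['portalIP'])
    print('}')
```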
smothiki / gist:a86eae8aec67aff3c287
Last active August 29, 2015 14:22
Proposal: integrating Kubernetes with Deis

Once we start a CoreOS machine with flannel, containers are placed on an overlay network and every container gets a unique IP.

  1. Container ports shouldn't be bind-mounted.
  2. The publisher should publish each container's IP and port, since every app container is unique (see the sketch after this list).
  3. Only the Jobstate.Up, Destroyed, Crashed, and Error states are needed.
  4. Move the scale logic from models.py into the schedulers accordingly.
  5. Allow custom interfaces and IPs while creating user-data.
  6. Make the registry URL-based, or run a registry on every host.
  7. Create a service for each app.
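As a rough illustration of point 2, the publisher could write each container's overlay IP and port under a per-app etcd key with a TTL; the key layout, etcd address, and sample values below are assumptions, using the python-etcd client:

```python
import etcd  # python-etcd; the etcd address and key layout are assumptions

client = etcd.Client(host='127.0.0.1', port=4001)

def publish(app, container, ip, port, ttl=30):
    # each app container gets its own key; the router watches the app dir,
    # and the TTL expires entries for containers that stop heartbeating
    client.write('/deis/services/%s/%s' % (app, container),
                 '%s:%s' % (ip, port), ttl=ttl)

# hypothetical container on the flannel overlay network
publish('sample', 'sample_v2.web.1', '10.244.1.17', 5000)
```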
smothiki / gist:bad29e5798e7ff79a73e
Created June 2, 2015 21:51
Mesos Marathon logs: sample app
```
Jun 02 21:49:24 deis-01 sh[1868]: [2015-06-02 21:49:24,172] INFO Upgrade id:/ version:2015-06-02T21:49:24.166Z with force:false (mesosphere.marathon.state.GroupManager:124)
Jun 02 21:49:24 deis-01 sh[1868]: [2015-06-02 21:49:24,173] INFO Take next configured free port: 10000 (mesosphere.marathon.state.GroupManager:186)
Jun 02 21:49:24 deis-01 sh[1868]: [2015-06-02 21:49:24,175] INFO Compute DeploymentPlan from Group(/,Set(),Set(),Set(),2015-06-02T21:46:53.044Z) to Group(/,Set(AppDefinition(/sample,Some(sleep 600),None,None,Map(),1,0.1,16.0,0.0,,Set(),List(),List(),List(10000),false,1 second,1.15,3600 seconds,None,Set(),Set(),UpgradeStrategy(1.0,1.0),Map(),2015-06-02T21:49:24.166Z)),Set(),Set(),2015-06-02T21:49:24.166Z) (mesosphere.marathon.upgrade.DeploymentPlan$:211)
Jun 02 21:49:24 deis-01 sh[1868]: [2015-06-02 21:49:24,177] INFO Computed new deployment plan: DeploymentPlan(2015-06-02T21:49:24.166Z, (Step(Vector(Start(App(/sample, Some(sleep 600))), 0))), Step(Vector(Scale(App(/sample, Some(sleep 600))), 1)
Jun 02 20:20:58 deis-03 sh[3772]: [2015-06-02 20:20:58,200] INFO Compute DeploymentPlan from Group(/,Set(),Set(),Set(),2015-06-02T20:19:56.936Z) to Group(/,Set(AppDefinition(/ramui,Some(while [ true ] ; do echo 'Hello Marathon' ; sleep 5 ; done),None,None,Map(),1,0.1,16.0,0.0,,Set(),List(),List(),List(10000),false,1 second,1.15,3600 seconds,None,Set(),Set(),UpgradeStrategy(1.0,1.0),Map(),2015-06-02T20:20:58.188Z)),Set(),Set(),2015-06-02T20:20:58.188Z) (mesosphere.marathon.upgrade.DeploymentPlan$:211)
Jun 02 20:20:58 deis-03 sh[3772]: [2015-06-02 20:20:58,202] INFO Computed new deployment plan: DeploymentPlan(2015-06-02T20:20:58.188Z, (Step(Vector(Start(App(/ramui, Some(while [ true ] ; do echo 'Hello Marathon' ; sleep 5 ; done))), 0))), Step(Vector(Scale(App(/ramui, Some(while [ true ] ; do echo 'Hello Marathon' ; sleep 5 ; done))), 1))))) (mesosphere.marathon.upgrade.DeploymentPlan$:265)
Jun 02 20:20:58 deis-03 sh[3772]: [2015-06-02 20:20:58,203] INFO Deploy plan:DeploymentPlan(2015-06-02T20:20:58.188Z, (Ste
```
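For reference, the AppDefinition in these logs corresponds to a request like the following against Marathon's REST API (the Marathon address is an assumption; the cpu, memory, and instance values match the log above):

```python
import requests  # Marathon's address below is an assumption

app = {
    "id": "/sample",
    "cmd": "sleep 600",
    "instances": 1,
    "cpus": 0.1,
    "mem": 16.0,
}
# POSTing to /v2/apps triggers the GroupManager upgrade and the
# DeploymentPlan computation shown in the log lines above
resp = requests.post("http://deis-01:8080/v2/apps", json=app)
resp.raise_for_status()
```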
```
[DEBUG] - initializing zookeeper cluster
May 27 23:38:26 deis-04 sh[1368]: [DEBUG] - adding node %v to zookeeper cluster172.17.8.103
May 27 23:38:26 deis-04 sh[1368]: [DEBUG] - set /zookeeper/nodes/172.17.8.103/id -> 1
May 27 23:38:26 deis-04 sh[1368]: [DEBUG] - adding node %v to zookeeper cluster172.17.8.101
May 27 23:38:26 deis-04 sh[1368]: [DEBUG] - set /zookeeper/nodes/172.17.8.101/id -> 2
May 27 23:38:27 deis-04 sh[1368]: [DEBUG] - adding node %v to zookeeper cluster172.17.8.102
May 27 23:38:27 deis-04 sh[1368]: [DEBUG] - set /zookeeper/nodes/172.17.8.102/id -> 3
May 27 23:38:27 deis-04 sh[1368]: [DEBUG] - adding node %v to zookeeper cluster172.17.8.100
May 27 23:38:27 deis-04 sh[1368]: [DEBUG] - set /zookeeper/nodes/172.17.8.100/id -> 4
May 27 23:38:29 deis-04 sh[1368]: [INFO] - waiting for confd to write initial templates...
```
smothiki / zookeeper
Created May 25, 2015 19:17
ZooKeeper failure
```
Started Zookeeper.
May 25 19:16:17 deis-01 docker[3543]: [DEBUG] - returning default value "/zookeeper/nodes" for key "ETCD_PATH"
May 25 19:16:17 deis-01 docker[3543]: [DEBUG] - returning default value "127.0.0.1" for key "ETCDCTL_PEERS"
May 25 19:16:17 deis-01 docker[3543]: [DEBUG] - starting pprof http server in port 6060
May 25 19:16:17 deis-01 docker[3543]: [DEBUG] - 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
May 25 19:16:17 deis-01 docker[3543]: [INFO] - zookeeper: starting...
May 25 19:16:17 deis-01 docker[3543]: panic: 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]
May 25 19:16:17 deis-01 docker[3543]: goroutine 1 [running]:
May 25 19:16:17 deis-01 docker[3543]: github.com/aledbf/coreos-mesos-zookeeper/pkg/boot/zookeeper.CheckZkMappingInFleet(0x7cc470, 0x10, 0xc208078080)
```