@fapestniegd
Created February 5, 2012 18:39
Every Core node is assigned a netblock within the /24
(up to)   node_0           node_1           node_2           node_3           node_4           node_5           node_6           node_7
          ---------------  ---------------  ---------------  ---------------  ---------------  ---------------  ---------------  ---------------
1 node:   172.16.0.0/24
2 nodes:  172.16.0.0/25                                                       172.16.0.128/25
4 nodes:  172.16.0.0/26                     172.16.0.64/26                    172.16.0.128/26                   172.16.0.192/26
8 nodes:  172.16.0.0/27    172.16.0.32/27   172.16.0.64/27   172.16.0.96/27   172.16.0.128/27  172.16.0.160/27  172.16.0.192/27  172.16.0.224/27
After 8 would be 16 nodes, which would require 16*15 = 240 IPs to be dedicated to intra-node communication (for full connectivity),
leaving no network or broadcast addresses in the "Class C". Basically, over 8 nodes is where Amdahl's law kicks in,
unless a larger netblock is used.
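The split above can be sanity-checked with a few lines of Python (a sketch using the stdlib ipaddress module; node_blocks is a hypothetical helper, not part of any tooling here):

```python
import ipaddress

def node_blocks(nodes, base="172.16.0.0/24"):
    """Split the base /24 into one equal power-of-two block per node."""
    net = ipaddress.ip_network(base)
    # 1 node -> /24, 2 -> /25, 4 -> /26, 8 -> /27
    prefix = net.prefixlen + (nodes.bit_length() - 1)
    return list(net.subnets(new_prefix=prefix))

for n in (1, 2, 4, 8):
    print(n, "nodes:", ", ".join(map(str, node_blocks(n))))
```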
From the block of IPs dedicated to the node, there will be 3 "pools"
1) a pool consisting of one IP that is always the first (non-network) IP of the netblock to be used as the "primary" IP for the node.
2) a pool of intra-node point-to point tunnels. There will be (n-1) of these (so 7 for an 8 node network) immediately after the "primary" IP.
3) a pool of external node IPs for road warriors and other "Class C" or smaller networks.
When nodes use these, their routes (and routes to networks behind them based on certificates) will be propagated throughout the core.
So let's say we have 5 nodes. This is fewer than 8 but more than 4. We will still lose 10 IPs per 32-IP netblock.
On node_0:
0) 172.16.0.0 : the network
1) 172.16.0.1 : the IP services bind to
2) 172.16.0.2 : the IP of the local end of the PtP to node_1 (this node +1, modulo n) (loop to 0 on 8)
3) 172.16.0.3 : the IP of the local end of the PtP to node_2 (this node +2, modulo n)
4) 172.16.0.4 : the IP of the local end of the PtP to node_3 (this node +3, modulo n)
5) 172.16.0.5 : the IP of the local end of the PtP to node_4 (this node +4, modulo n)
6) 172.16.0.6 : the IP of the local end of the PtP to node_5 (this node +5, modulo n) (reserved for future growth in this case)
7) 172.16.0.7 : the IP of the local end of the PtP to node_6 (this node +6, modulo n) (reserved for future growth in this case)
8) 172.16.0.8 : the IP of the local end of the PtP to node_7 (this node +7, modulo n) (reserved for future growth in this case)
9) 172.16.0.31: the broadcast
So 22 IPs (9-30, inclusive) are available for road-warrior connections and are placed in the road-warrior connection pool.
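The per-node layout can be sketched in Python. node_layout is a hypothetical helper, but the offsets follow the scheme above (primary at offset 1, PtP local ends at offsets 2 through 8, road-warrior pool at 9 through 30):

```python
import ipaddress

def node_layout(node, n_max=8, base="172.16.0.0/24"):
    """Return (primary, ptp, pool) for one node's /27 under the scheme above."""
    block = list(ipaddress.ip_network(base).subnets(new_prefix=27))[node]
    hosts = list(block.hosts())       # offsets .1 .. .30 of the block
    primary = hosts[0]                # the IP services bind to
    # local ends of the PtP tunnels to node+1 .. node+7 (modulo n_max)
    ptp = {(node + k) % n_max: hosts[k] for k in range(1, n_max)}
    pool = hosts[n_max:]              # the remaining 22 road-warrior addresses
    return primary, ptp, pool
```

For node_0 this yields 172.16.0.1 as the primary, .2 through .8 as tunnel ends, and a 22-address pool.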
Certificates and LDAP auth will be used for phase-1. If a certificate for a host is presented, its IP/subnet will be looked
up in LDAP (every core node is an LDAP server), and if the host has a network behind it, a route to that network will be added via the
assigned IP. This route should propagate to the other core nodes, so a packet from an IP on that network
should be routable throughout the core without any kind of NAT being necessary, passing through at most 2 core nodes
to reach the external network IP.
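Roughly, the phase-1 route injection would look like the sketch below. The LDAP lookup is stubbed out, and the emitted iproute2 command line is illustrative of the idea rather than the actual implementation:

```python
# Hedged sketch: given the pool IP assigned to a client and the networks LDAP
# says sit behind it, emit the routes a core node would need to add.
def routes_for_client(assigned_ip, networks_behind):
    """Build one 'ip route add' command per network behind the client."""
    return [f"ip route add {net} via {assigned_ip}" for net in networks_behind]

# e.g. a road warrior given 172.16.0.9 with a LAN behind it:
for cmd in routes_for_client("172.16.0.9", ["192.168.10.0/24"]):
    print(cmd)
```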
22 * 5 => 110 road warriors and/or external networks may "hang off" the core. If more are needed, add core nodes up to 8, and
22 * 8 => 176 road warriors and/or external networks may be attached (fewer if there are redundant connections). These connections
may also be to other "core networks", allowing cores to be chained. If cores are chained, SNAT must be considered
when both core networks have overlapping network address space.
Note: An external network should be able to connect to several core nodes (or even all) and use one IP on each, and the routes
to the networks behind it will be added to each IP it is assigned. This should not cause a problem as OSPF should handle the route
ordering. We may look at BGP for this as well/instead.
A note on quorums: since the networks are broken down by powers of two, we will end up with an even number of core nodes.
In order to get a quorum, an odd number is needed, so no service accepting writes may run on more than n-1 nodes.
Note: these nodes do not have to be sequential, nor even in the same "Class C". If we keep this convention, we can scale to 8 nodes
per /24, each node delivering 22 dynamic end-points. And the 172.16.0.0/12 address space is 16 class B's, each of which is 256 class C's,
each of which is 8 of these network segments, so (16*256*8 => 32,768) nodes; even if we charged $20/node (which would be ripping ourselves off at cost)
we'd be pulling in $655,360/month.
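The arithmetic checks out as powers of two (the $20/node figure is the one assumed above):

```python
# 172.16.0.0/12 carved all the way down to /27 node segments:
class_bs = 2 ** (16 - 12)        # 16 class B's (/16s) in the /12
class_cs = 2 ** (24 - 16)        # 256 class C's (/24s) per class B
segments = 2 ** (27 - 24)        # 8 /27 node segments per class C
nodes = class_bs * class_cs * segments
print(nodes, nodes * 20)         # 32768 nodes, $655,360/month at $20/node
```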
Websages Core:
5 nodes: /27 per node
172.16.0.0 node_0 network
172.16.0.1 freyr
172.16.0.2 freyr<->odin
172.16.0.3 freyr<->thor
172.16.0.4 freyr<->loki
172.16.0.5 freyr<->vili
172.16.0.6 <future expansion>
172.16.0.7 <future expansion>
172.16.0.8 <future expansion>
172.16.0.9 [dynamic pool]
..
172.16.0.30 [dynamic pool]
172.16.0.31 node_0 broadcast
#--------------------------------------#
172.16.0.32 node_1 network
172.16.0.33 odin
172.16.0.34 odin<->thor
172.16.0.35 odin<->loki
172.16.0.36 odin<->vili
172.16.0.37 <future expansion>
172.16.0.38 <future expansion>
172.16.0.39 <future expansion>
172.16.0.40 odin<->freyr
172.16.0.41 [dynamic pool]
..
172.16.0.62 [dynamic pool]
172.16.0.63 node_1 broadcast
#--------------------------------------#
172.16.0.64 node_2 network
172.16.0.65 thor
172.16.0.66 thor<->loki
172.16.0.67 thor<->vili
172.16.0.68 <future expansion>
172.16.0.69 <future expansion>
172.16.0.70 <future expansion>
172.16.0.71 thor<->freyr
172.16.0.72 thor<->odin
172.16.0.73 [dynamic pool]
..
172.16.0.94 [dynamic pool]
172.16.0.95 node_2 broadcast
#--------------------------------------#
172.16.0.96 node_3 network
172.16.0.97 loki
172.16.0.98 loki<->vili
172.16.0.99 <future expansion>
172.16.0.100 <future expansion>
172.16.0.101 <future expansion>
172.16.0.102 loki<->freyr
172.16.0.103 loki<->odin
172.16.0.104 loki<->thor
172.16.0.105 [dynamic pool]
..
172.16.0.126 [dynamic pool]
172.16.0.127 node_3 broadcast
#--------------------------------------#
172.16.0.128 node_4 network
172.16.0.129 vili
172.16.0.130 <future expansion>
172.16.0.131 <future expansion>
172.16.0.132 <future expansion>
172.16.0.133 vili<->freyr
172.16.0.134 vili<->odin
172.16.0.135 vili<->thor
172.16.0.136 vili<->loki
172.16.0.137 [dynamic pool]
..
172.16.0.158 [dynamic pool]
172.16.0.159 node_4 broadcast
#--------------------------------------#
172.16.0.160 node_5 network
172.16.0.161 <future expansion>
..
172.16.0.190 <future expansion>
172.16.0.191 node_5 broadcast
#--------------------------------------#
172.16.0.192 node_6 network
172.16.0.193 <future expansion>
..
172.16.0.222 <future expansion>
172.16.0.223 node_6 broadcast
#--------------------------------------#
172.16.0.224 node_7 network
172.16.0.225 <future expansion>
..
172.16.0.254 <future expansion>
172.16.0.255 node_7 broadcast
#--------------------------------------#
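The whole table above can be regenerated from the node names and the "+k, modulo 8" rule. This is a sketch, with NAMES assumed to be the node order (table() is a hypothetical helper):

```python
import ipaddress

NAMES = ["freyr", "odin", "thor", "loki", "vili"]  # node_0 .. node_4

def table():
    """Regenerate (address, label) rows for all 8 /27 node segments."""
    rows = []
    blocks = list(ipaddress.ip_network("172.16.0.0/24").subnets(new_prefix=27))
    for i, block in enumerate(blocks):
        addrs = [block.network_address + off for off in range(32)]
        rows.append((str(addrs[0]), f"node_{i} network"))
        if i < len(NAMES):
            rows.append((str(addrs[1]), NAMES[i]))            # primary IP
            for k in range(1, 8):                              # PtP local ends
                peer = (i + k) % 8
                label = (f"{NAMES[i]}<->{NAMES[peer]}" if peer < len(NAMES)
                         else "<future expansion>")
                rows.append((str(addrs[1 + k]), label))
        rows.append((str(addrs[31]), f"node_{i} broadcast"))
    return rows

for addr, label in table():
    print(addr, label)
```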