Description
Anyone joining the wonderful world of Kubernetes networking from OpenStack will confront new terminology and networking/policy concepts that differ from OpenStack's equivalents. Terms like services, pods, CNI, Ingress, internal vs. external load balancing, service proxies, namespaces, labels, selectors, and policy can be distinctly different from Neutron artifacts like ML2, the L3 agent, security groups, the metadata proxy, and integration bridges.
So, good afternoon, guys. It's interesting: you have now back-to-back Indian speakers, and it's that time of the afternoon where you start dozing off a little bit if the coffee hasn't kicked in yet. Trust me, I don't want to be dreaming with an Indian accent, right? So go get your coffee, make sure you wake up. Networking is exciting. Yeah, I see a few smiles, that's good. So, my name is Karthik Prabhakar. I work for a company called Tigera; Tigera is fairly active in the networking community within Kubernetes networking.
So today, what I wanted to kind of walk through with you is a little bit of Kubernetes networking and give you some of the concepts in Kubernetes networking. But before I do that, I want to give you some of the fundamental design thinking that went into building the foundations of Kubernetes networking, and I want to contrast that a little bit with some of the design thinking that went into Neutron back in the early days.
Like every Kubernetes presentation, if you don't do a live demo, you're not worth your salt, right? So you've got to do a live demo. What I'm going to do, assuming I can get to another network, is deploy a Kubernetes cluster: multiple nodes, with networking, launching applications. I've set myself a target of five minutes to do that; I'm hoping I can do it in two, right? And what that means is that, for those of you who haven't deployed Kubernetes before, I want you to walk out and try this live demo yourselves.
Maybe you can even do it with me. I want you to walk out of this room and try deploying Kubernetes: it's super easy, there are lots of deployment tool options, many of them are extremely easy to use, and this is something that everyone can do. Literally everyone in this room should be able to do it. Oh, and a little bit of a plug for tomorrow.
So eventually it got to a point where you get this complex mess of overlays, and then you have to deal with SDN controllers to kind of force overlays onto the physical network infrastructure. Very often with Neutron, the way many people have implemented it today, you have a mess of bridges, vSwitches, policy enforcement points, security enforcement points, virtual routers, and backhauls. And the case in point: this is a picture from the OpenStack documentation of what Neutron with Open vSwitch looks like, standard Open vSwitch, for those of you who have had this introduction.
My heart goes out to you. I've spent numerous nights and weekends over the last three years, and I think a number of my former colleagues, here in the audience, can attest to this: when things break, and the things that break could be in servers right next to each other, troubleshooting this complex mess of overlays can be a real pain. And this gets even worse when you add on things like layer 3, DVR, and VRRP.
So here we are. We are here to talk about microservices, microservices in a cloud-native world, right? Here's a picture of what Netflix's application flows used to look like, going back a few years. It's obviously gotten more complex than this, and when you have this large number of instances, microservices collaborating with each other, this traditional model of creating overlays between pairs of instances or groups of instances that need to collaborate with each other does not scale. And increasingly, because microservices are very dynamic, the flows tend to vary.
Things can tend to come up and down fairly dynamically. We really have to look at a new approach because, increasingly, the world we are moving to is sort of a serverless, function-as-a-service world, with functions running individually in different parts of the infrastructure, and this concept of building overlays for everything is fundamentally an approach that has its roots in the early ideas of VLAN networking. We have to move past that, right?
So that said, Kubernetes, which is focused on how you run containers at scale and how you provide an infrastructure for running microservices at scale, took a different set of assumptions. First of all, the world is IP, right? How many of you have an application that does not use IP? So Kubernetes started with the assumption that every node in the cluster has an IP address. It also made the assumption that every pod has an IP address that's unique within the cluster, right? And so when pods communicate, the other pods that they communicate with know that IP address and the fact that it's unique.
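To make that concrete, here is a quick way to see those assumptions in a running cluster (a minimal sketch; this is standard kubectl, nothing specific to the talk):

```bash
# The wide output lists each pod's cluster-unique IP address and the
# node that hosts it, across every namespace.
kubectl get pods --all-namespaces -o wide
```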
So that's a fundamental assumption, and by the way, that's a little bit more evolved than some of the early design thinking that went into Neutron. Kubernetes then adopted CNI, the Container Network Interface, as a networking abstraction to allow different vendors to plug in their different networking plugins to provide that connectivity between pods. OK, and today there are numerous plugins, each with their own characteristics and the different market segments that they go after.
So we, my company, work with Calico and flannel, which are fairly popular plugins, but there are plenty of other plugins. The big difference that Kubernetes introduces, and in fact other container orchestrators have also adopted this abstraction, is that it decouples isolation from this concept of using network topology for isolation.
What Kubernetes says is: I'm using an IP address; it's up to the network plugin to decide how to connect the instances to each other, but I'm now going to give users a new abstraction, a declarative model, by which they can declare which pods need to talk to which other pods. So users can declare, in a YAML syntax, using concepts like labels and selectors, how they want pods to communicate with each other, and which pods they want isolated from each other.
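As a minimal sketch of that declarative YAML (the role labels and the namespace here are hypothetical, not from the talk), a NetworkPolicy selecting pods by label might look like this:

```bash
# Allow only pods labeled role=frontend to reach pods labeled
# role=backend on TCP port 80; other ingress to those pods is denied.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: project-red
spec:
  podSelector:
    matchLabels:
      role: backend          # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # the only pods allowed to connect
    ports:
    - protocol: TCP
      port: 80
EOF
```

Note that nothing here says which node, subnet, or VLAN anything lives on; the plugin enforces the intent wherever the pods happen to be.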
So that's declared, as opposed to enforced in the network using network topology, and what that allows you to do is build networking plugins that keep the network simple. You can absolutely build networking plugins that keep the network complex; you can do it, but you don't have to, right? And this is a fundamental design choice that Kubernetes made. In a Kubernetes environment, each application or microservice is served by dozens of pods or hundreds of pods; if you have a web server, it could be running as tens of web server instances.
You need a concept to abstract away this fleeting, dynamic environment, and that concept is services. So Kubernetes has the concept of services, and we'll talk about how that's implemented in the network; a service refers to a collection of pods at the back end. We also have a way of discovering services, using a variety of options: you can use DNS, but there are other options you can use too.
Kubernetes basically presents all of these to you. In addition to services, very often users and microservices want to do higher-level sorts of traffic redirection based on layer 5 to 7 decisions, like HTTP headers or application-specific headers, and for that purpose, Kubernetes introduces the concept of Ingress, which is something that confuses a lot of newcomers.
So let's start with a little bit of a dive into what the networking landscape looks like. We'll start first with simple east-west traffic, the pod-to-pod traffic between pods in Kubernetes, and then we'll go on to the more abstract concepts like services, Ingress, and so on. Right, so to begin with: a Kubernetes cluster is a collection of nodes. Nodes could be bare-metal nodes, virtual machines running on OpenStack, or instances running on your favorite public cloud. It doesn't matter; it's a bunch of nodes.
Some nodes are designated as master nodes, which run the Kubernetes control services, and some nodes are designated worker nodes, which actually run the pods, the applications. The main services running on the master nodes are things like the API server, which is how users interact with Kubernetes, and the scheduler, which helps schedule pods.
OK, and then you have the kubelet agent, which runs on every worker node. Within that worker node, I've just called out this concept of the host network namespace; that's typically the Linux networking stack as you and I know it. As you all know, it's possible to create multiple namespaces in Linux, so we'll get to that concept next.

So, when Kubernetes launches a pod: a pod in Kubernetes is essentially a collection of containers with a shared network namespace, so typically what that means is that the collection of containers shares an IP address.
It shares a routing table; it shares some basic network constructs. So it has a separate namespace, and when the kubelet launches a pod, which is a collection of containers, what it does is it now says: I need to network this pod and get it connected to the network, so it can talk to other pods and talk to the rest of the world. The way it does that in Kubernetes is using CNI, and CNI is really simple.
What happens is the plugin says: OK, I need to give this pod an IP address, and I'm creating a virtual Ethernet pair that connects the pod namespace to the host's namespace. So that's sort of the first step, and then it's really up to the plugin as to how it connects that virtual Ethernet to the rest of the network. Some plugins use bridges or vSwitches and create overlays.
Some plugins, like Calico, use simple IP routing. Flannel gives you a choice; you can actually use both approaches. Just walking through one example of what Calico does: Calico basically writes this new workload into the shared etcd datastore that's available in Kubernetes. That etcd update gets picked up by another Calico agent called Felix, running on the node, and what Felix does is insert routes for that pod's IP addresses into the host routing table, saying: if you need to reach that pod, send traffic into this virtual Ethernet. So Calico does not use virtual switches.

It does not use bridges. It simply sends traffic to that virtual Ethernet, OK? There's another agent running on the node called BIRD, which is a BGP agent, and all the Calico nodes within the cluster peer together using standard BGP. So what that means is that when BIRD detects these new routes, it advertises them to the other nodes in the cluster as an aggregated route.
And so, within a matter of milliseconds, every node in the cluster knows that these new pods are reachable through this node, so any traffic destined to that pod is sent to that node without any encapsulation, without any overlays, right? For those of you coming from a Neutron OVS world, this might be a little bit of a foreign concept, but your networking is really simple. It really is; I see a few people laughing. Networking should be simple if you're looking at scale.
So that's basically what Calico does; I have another example too. Calico runs as a pod, so it's fairly non-intrusive in the host stack, and your actual data traffic simply flows with normal Linux routing. In this case, Calico is not in the data path; it's simple IP forwarding, right? No real magic to it.
Flannel is another example. This is one of the early plugins available for Kubernetes, and flannel has a variety of ways it can provide the actual networking, to give you an example of a different networking plugin. Again, flannel is called using CNI, so the kubelet calls CNI; in this case it gives flannel some configuration parameters, such as which bridge to use. Flannel actually creates a bridge connecting the virtual Ethernets, and the configuration tells it how to assign IP addresses; it says: use host-local.
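As a rough sketch of what the kubelet hands to flannel (the file name and options here are typical defaults, an assumption rather than the demo's actual config):

```bash
# Classic flannel CNI config: flannel delegates to the bridge plugin
# (creating the cbr0 bridge) and uses host-local IPAM for pod IPs.
cat <<'EOF' > /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF
```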
The way flannel works is that it assigns a /24 to every node in the cluster, and each node assigns IP addresses from that /24 to individual pods. From that point on, flannel has a variety of backends to actually connect the different nodes to each other and to exchange routes. The most commonly used backends for flannel are either host gateway (host-gw) or VXLAN.
Another common mode of operation for flannel is VXLAN, and this is a standard overlay that many of you are used to. So if you want to use an overlay, for whatever reason, you can absolutely do so with something like flannel. And when you use VXLAN, obviously you pay a little bit of a performance penalty for doing the VXLAN encapsulation; depending on your NIC and depending on how it's implemented, there might be ways to offset some of that performance overhead. But again, these are standard VXLAN overlays.
So that's a couple of examples of how Calico and flannel work; those are two of the network plugins in the market today. There are increasing numbers of network plugins, but generally what tends to happen is that a lot of the plugins coming in tend to target specific markets, so they tend to have specific features that they claim.
I don't want to be in the business of comparing for you which is the best plugin; I think you should really make that call for yourself based on the individual plugin's merits, and if you're interested, I would certainly encourage you to talk to more folks who build Kubernetes deployments at scale, and I'm sure they will guide you in terms of some of the plugin choices. But that's one part of the Kubernetes networking story, which is: how do you connect pods to each other? Again, keep in mind:
Pods are just dynamic instances that can be spun up and down; these are typically fleeting instances in Kubernetes, right? And typically, the way you would have multiple pods for an application is you would configure a ReplicaSet, or a ReplicationController, which is the older concept, which is to say: I'm running an nginx application and I want ten replicas, and Kubernetes spins up ten instances of nginx, each as a pod.
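In YAML, that request is about as short as it sounds (a sketch; the app=nginx label is illustrative):

```bash
# Ask for ten nginx replicas; Kubernetes spins up ten pods, each with
# its own cluster-unique IP, and replaces any that die.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```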
The next concept, which is the one I referred to as a very powerful concept in Kubernetes, is this concept of namespaces, labels, selectors, and network policy, which are all instruments to help you declare, in YAML syntax, how you want to isolate different objects from each other. Typically, objects refers to pods; it could refer to other instances too. So, first of all, the concept of namespaces is the capability where you can take a Kubernetes cluster and start to logically partition it into virtual clusters, so that you can isolate different projects from each other.
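A quick sketch of that partitioning (the namespace and label names are hypothetical):

```bash
# Carve out a virtual cluster for one project, then run a labeled
# workload inside it; selectors and policy can match on these labels.
kubectl create namespace project-red
kubectl run backend --image=nginx --namespace=project-red --labels=role=backend
```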
This enforcement happens dynamically. OK, so I've given this example here with Calico: the way Calico does that is by taking those policies and, on each node independently, evaluating whether objects matching that policy exist on that node. If they do, then it creates iptables rules, actually ipset entries, dynamically, and these are enforced at the virtual Ethernet connecting into the pod. So it's enforced at the very endpoint, and similarly at the other end: if there's a node hosting the client, it will similarly create ipsets independently on that end. So it's sort of a decentralized architecture, and different plugins take different approaches.
So how does Kubernetes implement the service abstraction? First of all, it launches a little daemon called kube-proxy, and the kube-proxy daemon runs on every node in the cluster. Kube-proxy essentially operates by setting up iptables rules in the node that map the service to the backend pods. To give you an example of the different ways you can implement the service abstraction: one of the common ways is what's called a cluster IP. So say you now want a service for nginx that's served by 10 different pods on the backend.
Essentially, this nginx service will receive a cluster IP, which is a well-known IP within the Kubernetes cluster, and kube-proxy's iptables rules take care of doing a DNAT on any traffic going to that cluster IP, translating it to an actual pod's IP address. OK, so basically it's DNAT that helps you translate from the service's cluster IP to the actual pod IP address, and kube-proxy and the iptables rules run on every node in the cluster.
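A minimal sketch of such a service (the selector and ports are illustrative):

```bash
# A ClusterIP service: one stable virtual IP in front of every pod
# labeled app=nginx; kube-proxy programs the DNAT rules on each node.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```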
This takes care of the case where you have services within the cluster that need to use other services. So if you have a Redis application that needs to talk to an nginx, Redis can say: here's my nginx well-known cluster IP, and so that traffic can flow east-west. But sometimes you need traffic from outside the cluster to come into the cluster, and the way to do that is something called node ports.
What node ports means is that Kubernetes, in addition to having a cluster IP, now assigns a port from a well-known port range (by default, 30000 to 32767) and gives that well-known port to the service. Now traffic coming to any node in the cluster, destined to that port, essentially gets translated using DNAT rules and sent to the actual pod IP address.

Another way you can do services is using a service of type LoadBalancer. In this case, for certain well-known load balancers, like Google's load balancer, Amazon's ELB, and a handful of other well-known load balancers, Kubernetes basically creates the load balancer rules to translate services to pod IPs dynamically. So it's a way to do service mappings from services to the backend pods. Right, so far so good.
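Exposing the same backend outside the cluster is mostly a change of the type field (a sketch; the nodePort value is illustrative):

```bash
# NodePort: reachable on every node's IP at a well-known port.
# On supported clouds, type: LoadBalancer provisions the external LB too.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
spec:
  type: NodePort           # or LoadBalancer on GCP, AWS, etc.
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080        # must fall inside the configured NodePort range
EOF
```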
So now you've done this mapping of services to pods. The next step is: how do you actually find what the service's IP address is? Sometimes you may want to find the actual pods' IP addresses, because as a client you might think: OK, I can do better load balancing than having kube-proxy do it for me. So sometimes that might be preferred, and there's a variety of ways you can do that. You can use any of the classic service discovery mechanisms, and Kubernetes works perfectly fine with those.
One of the default ones that Kubernetes provides, which you're welcome to use, is called kube-dns. Today there are different DNS servers; there's CoreDNS, and a number of other implementations that can be plugged in as well. But the way kube-dns works is that, first of all, it creates a DNS client resolver mapping within every pod.
That mapping says: when the pod does a DNS lookup, it gets sent to kube-dns. So kube-dns essentially becomes a DNS server resolving client queries, and when a new service is created within a namespace, kube-dns essentially creates a DNS name that maps to the name of the service, and creates a domain. So if you have a service named webserver in a namespace called project-red, it creates a domain called webserver.project-red.svc.cluster.local within the Kubernetes domain. And so when a client does a lookup for, say, webserver, it'll get pointed to the default webserver in that same namespace; but if the client wants to pick a different namespace, it's absolutely free to do so.
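From inside any pod, that looks like ordinary DNS (a sketch following the webserver/project-red example above):

```bash
# Short names resolve within the pod's own namespace; fully qualified
# names reach services in any namespace.
nslookup webserver
nslookup webserver.project-red.svc.cluster.local
```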
OK, simple DNS. And DNS runs in the form of pods, so the actual DNS implementation runs as a pod within the cluster. Fairly simple. Ingress resources: again, Ingress runs as an add-on in Kubernetes, just like DNS.
The Ingress resource essentially allows you to define an arbitrary set of layer 5 through 7 mappings that define what needs to happen when traffic matching that pattern shows up. So in this case, you can now define Ingress controllers that can process that incoming traffic, and these Ingress controllers run as pods within your Kubernetes cluster.
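A minimal sketch of an Ingress resource (the host, path, and backend names are hypothetical, the apiVersion matches Kubernetes releases of the talk's era, and an Ingress controller such as the nginx one must be running for it to take effect):

```bash
# Route HTTP requests for shop.example.com/ to the webserver service,
# using layer-7 information (the Host header and URL path).
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webserver
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webserver
          servicePort: 80
EOF
```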
So let's start here. I'm using kubeadm, which is one of the simpler ways to deploy Kubernetes; there are dozens of deployment tools in Kubernetes today, and many of them come with networking by default, something like flannel or Calico or other plugins. But if you don't have one, kubeadm is a fairly simple way to get started. There are lots of other tools; I'm using kubeadm here because it's simple. So here's what you would do.
To start, on the master you would run something called kubeadm init. Let's hope the demo gods are kind to us. Essentially, what kubeadm init is doing is launching all the key Kubernetes daemons on the master and configuring them. So if things go well, pretty soon it should come up and say: all right, things are looking good so far. Voila.
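The whole master-side step reduces to roughly this (a sketch; real runs may need flags such as a pod CIDR, depending on the network plugin):

```bash
# Bootstrap the control plane on the master...
kubeadm init

# ...then watch the control-plane pods come up.
kubectl get pods --namespace kube-system --watch
```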
All right, so there it's showing you what's currently running, and basically you have a bunch of Kubernetes processes running, all in the kube-system namespace, which is sort of the namespace for all of the Kubernetes-specific stuff. Now, before we do anything else, we'll also have to do this: we'll have to connect some sort of networking, because without networking, Kubernetes doesn't really start.
That's pretty much it, and now I'm installing Calico. Calico is packaged as a self-hosted networking solution, so you actually launch Calico by running it as a pod, a DaemonSet, on top of Kubernetes. That means this Calico pod gets launched on every node in the cluster, OK? And so when I do this, you'll see the calico-node pods get launched, and if you look at the watch, you see calico-etcd come up.
You see your calico-node launch, and pretty soon, I think, pretty much most of the pods that are pending will go into a running state. DNS will not yet, because there are no schedulable nodes. So now that that is done, you have networking going. What we'll do is run this command on the worker nodes; let's start by doing it on worker one, and when I do that, you'll see additional calico-node processes launched in here. In fact, you can also do it from here.
So we can launch both workers at the same time, and hopefully, if things go well... you see, OK, the second one's shown up, and pretty soon you'll see a third one as well. It tells you that these are running in the host namespace. And so, guess what: suddenly you have Calico nodes being created, and now everything in the cluster is running. Was anyone timing me? Was that two minutes? Five minutes?
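Joining the workers is one command per node (a sketch; the token and master address are placeholders that kubeadm init prints, not values from the demo):

```bash
# On each worker node: join the cluster using the bootstrap token.
kubeadm join --token <token> <master-ip>:6443
# The Calico DaemonSet then starts a calico-node pod on the new node
# automatically; no per-node network configuration is needed.
```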
Now let's launch an application. How many replicas do we want? Let's say 10, right? All right, and so now you see these nginx pods being spun up, and in the network you should see all of them receive an IP address through Calico, and voila, we're up and running. So now you want to look at what your actual network infrastructure looks like. Let's go to the worker node and do an ip address show, and you see all of these interfaces named cali-something; those are essentially the virtual Ethernets connecting to the individual pods.
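In command form, the launch and inspection look roughly like this (a sketch; caliXXXX is how Calico names its virtual Ethernets, but the exact suffixes vary):

```bash
# Launch ten nginx replicas.
kubectl run nginx --image=nginx --replicas=10

# On a worker node: one caliXXXX virtual Ethernet per local pod...
ip addr show

# ...and the corresponding routes in the plain Linux routing table.
ip route show
```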
Each of them has an IP address, and each of them connects to one of the nginx pods. If you do an ip route show, there are the routes: for all the pods running on that particular node, there are /32 routes pointing to the virtual Ethernets, and for the nginx pods running on a different node, there are the routes that have been advertised by BGP as an aggregated /26 route.
So if for whatever reason your connectivity fails, guess what: you just look at your routing table. Is the route there? If not, go troubleshoot standard BGP. If the route exists but you don't have connectivity, you would do things like look at the iptables rules to see if there's any policy that's stopping traffic. But networking in Kubernetes, that's all it is.
You have really powerful networking, and this networking scales to the current scale targets for Kubernetes: up to 5,000 nodes, right? It used to be a thousand nodes, or 100,000 containers; now it's 5,000 nodes and, I believe, some millions of containers, I forget what the exact container number was. Why does it work? It's simple, right? And that's sort of an illustration of why Kubernetes networking is the way it is.
In the first scenario, you might have an LDAP server on one side and LDAP clients on the other, right? And there are different ways to do that; there are different approaches. The Calico approach is to do simple networking, essentially BGP peering, and to use labels and policy to define what can talk to what. The second sort of scenario is when you're running OpenStack on bare metal and you have individual virtual machines running Kubernetes nodes, so Kubernetes is in effect running inside of OpenStack. And again, in the case of Calico, which we'll talk about tomorrow:
It's simple BGP peering, simple networking, and you use policy. And the third scenario is actually a really interesting one, which you're seeing more and more in the industry now, which is: Kubernetes runs on bare metal and takes care of things like the auto-scaling, the provisioning, the upgrades, the lifecycle management, and OpenStack runs as a containerized application on top of Kubernetes. In this case, the benefit is that the OpenStack control plane is containerized, and so as you need more capacity, you can auto-scale.
Also, Kubernetes can take care of things like rolling upgrades as you need to move from one OpenStack version to another. This is the demonstration that AT&T has been showing, which uses OpenStack-Helm as the OpenStack project to do this, and they actually use Calico for the networking fabric.
What they do is peer the inner Calico BGP with the outer Calico BGP, so it's simple IP routing, right, and they use policy to isolate between applications. So that's something we'll talk about tomorrow.

So, to wrap up the session here and take any questions: the things I want to emphasize are that when you start looking at Neutron and Kubernetes, these are different abstractions, and they had different targets in mind when they were designed.
Neutron was sort of designed for virtual machine scale, and so it can handle 5 to 10 VMs a second, depending on how complex your Neutron environment is. Kubernetes is designed for container scale; we're talking about launching hundreds, potentially thousands, of pods a second, right? So it's a difference in scale targets. So when you start connecting Neutron and Kubernetes, you have to give a little bit of thought to what you're connecting.
The second thing is that Kubernetes' networking abstractions are different from Neutron's networking abstractions, and that's intentionally so, because Kubernetes is focused on this world of microservices that we live in, which is a much more rapidly changing world than the world that OpenStack and Neutron were originally designed for, which was virtual machine networking, right? So the abstractions are quite different. Certainly Neutron has slowly evolved.
But I think as it's evolved, it's also picked up some complexity, and there needs to be some focus, especially for people deploying and operating networking at scale, on the operations part of it, because there are key elements there from an operational-scale perspective. And finally, there is a key concept in Kubernetes networking, which is that it gives you the option of keeping networking simple and using policy for isolation. It does not mean you have to follow that principle; it gives you the option.
Generally, that's the namespace that's created by the kubelet. I couldn't speak to all of the different namespaces, but at least as far as the network namespace goes, which I'm pretty familiar with, that's what happens. And if you actually look into how it's created, there's something called the pause container that gets launched, and the only job of the pause container is to keep that namespace alive. So it's a very simple way to keep a namespace alive.
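On a Docker-based node you can see those namespace-holding containers directly (a sketch; the names vary by runtime and version):

```bash
# One pause container per pod; its only job is to keep the pod's
# shared network namespace alive.
docker ps | grep pause
```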
On the management plane, obviously, all the nodes are connected. But at this point the master doesn't know about any of the nodes yet. All you're doing when you do a kubectl apply of the calico.yaml is creating a DaemonSet that says: anytime a new node comes up, launch the calico-node pod on that node. So it essentially prepares your Kubernetes cluster for networking, and the first time a worker node joins the cluster, the calico-node daemon launches on that node and you have networking automatically. OK.
Good question. So today, policy is enforced typically at the ingress point or the egress point (depending on which way you look at it) into the pods, and it's really a function of the network plugin to implement policy. So Calico has sort of a policy engine; the team behind Calico helped develop network policy in Kubernetes, so the Kubernetes policy is sort of a subset of Calico's. That said, there are different policy implementations; some of them are closed source, and some of them take learning-based approaches.
The thing I will point out is that in the world of microservices, especially as you move towards things like functions-as-a-service and serverless, the model where you try to learn something in the network doesn't necessarily scale, because things are too transient. So most of the container orchestrators are going to have this model of a declarative syntax, where you actually declare what you want the application to do and then have the network enforce that policy, and things like Calico follow that model. Thank you. You're welcome.
So let me distinguish between policy and Ingress. Ingress: when I used the nginx example there, nginx is the Ingress controller, and you have the Ingress resource defined that calls out: hey, for this mapping, you need to apply this function. So that's the concept of Ingress. Now, network policy is a distinct concept from Ingress.
In network policy today, Kubernetes supports just ingress filtering. What that means is that you can express how a pod is protected from the rest of the world, but you cannot express in Kubernetes today how to protect the rest of the world from the pod; there is no egress filtering yet in Kubernetes. If you look at specific network policy implementations like Calico, though, Calico does have egress filtering.
It also has more operationally focused policy. But Kubernetes itself is sort of a developer-focused platform, and the community decided that since most developers care about ingress, about protecting themselves from the world, let's do ingress filtering first; egress filtering has been deferred for later. It's possible it might come; there have been discussions within the Kubernetes SIG Network community about potentially looking at egress filtering as well. OK, good question, by the way. Great, thank you so much for your questions.
I know the next discussion is going to be interesting as well: it's going to be talking about combining Kubernetes and OpenStack, and the half a dozen ways you can do that; hopefully a couple of simple ones, but many of them also tend to be very complex. And I'll also give a plug for joining the session tomorrow that I'll be doing jointly with Canonical and AT&T, which is sort of talking about the same concept but purely from a networking perspective. OK, thank you so much, Chas.