From YouTube: OpenShift Commons Briefing #65: Simplify and Secure your OpenShift Network with Project Calico
Description
OpenShift networking works great out of the box, right? So why would you consider anything else? This briefing examines an alternative approach that has benefits for many scenarios – from tightly securing a few high-value AWS instances to scaling a large private cloud deployment.
Come learn how the Calico solution differs from traditional solutions like OpenShift SDN, and how Calico has now been integrated with Kubernetes and OpenShift to provide a smooth deployment experience, and lessons learned across hundreds of enterprise users.
Guest Speaker: Andrew Randall, Tigera
A (Diane): Let me introduce Andy. The way this session works is that we have a chat channel in the background, and if you have questions, please pop them in there; there are a few other people from Project Calico on the line listening in, and they can answer during the talk. Then, at the end of the presentation, we'll open it up for Q&A. So with that said, I'll let Andy take it away.
B (Andy): Thanks Diane, and welcome everyone; thanks for your interest in Project Calico. Just by way of introduction: I'm CEO at Tigera, and Tigera is the company behind Project Calico. We work with a lot of companies across the ecosystem, integrating Calico into various environments and supporting deployments as well. So with that, I'm going to talk about simplifying and securing your OpenShift network. I know that OpenShift is a widely deployed platform, and networking seems to just work, so the very first question you may have, and it's not unreasonable, is: isn't virtual networking a solved problem?

B: We've been doing this for years; virtual machines have been networked for years. Containers just look like mini virtual machines, so can't we just focus on developing and deploying our apps? I think that's a reasonable question, one Mr. Bean would probably be happy to ask, with that kind of expression. As you can hear, I'm originally from England, so I thought you might like him.
B: Containers carry far less per-workload OS overhead, so you have one significant order-of-magnitude impact in terms of the number of workloads. But more importantly, and this is something that we see in surveys and also just in the nature of how these applications are being deployed: with a dynamic orchestrator, the whole point is that you can bring up a container and take it down very rapidly. If you put those together, think about the churn on the network: how fast are individual endpoints, application workloads, containers, pods, coming and going?
B: It's probably at least a couple of orders of magnitude faster, and if you think about an architecture built for one scale, then increase that by a couple of orders of magnitude, it's very hard for that same architecture to keep up. So what I call the first generation of SDN, which was built around virtual machines, was based on a single, centralized controller that was the brain of the network, and it wasn't really expecting a whole lot of events to happen all at once.
B: We tended to see those start to reach their capacity when you really put them into production environments in a cloud-native kind of architecture. The other thing that's happening, and I'll say more about this as well: if you try to use a traditional virtual firewall, routing traffic through it to enforce east-west rules, then the amount of traffic, and the number of different connections within a cluster once you start to build a cloud-native application, tends to mean that you're not going to want to take that approach.
B: Now, there are other aspects of security I want to dive into a little bit more here as well. The first is to think about how security has traditionally been done. I talk to a lot of customers, and they'll say things like: yeah, I have this subnet, which is where I assign all of this class of workload.
B
You
know
the
server's
aren't
special
snowflakes,
you
just
treat
them
all
as
fungible
resources,
so
contain
interfere
anywhere
with
any
IP
address.
So
that
does
mean
that
your
subnet
rules
and
tissue,
your
VLAN
rules
as
well
ways
and
you
feel
ends
in
the
past
no
longer
have
have
meaning
when
you
start
thinking
this
much
more
and
fluid
environment.
B
The
next
thing
that's
happening
as
well.
Is
you
know
the
the
introduction
of
micro
services
means
means
that
you're
breaking
down
applications
into
many
many
smaller
components
and
those
components
communicate
typically
over
skp
is
a
network
interfaces.
So
the
exposure
of
your
application
to
the
network
is
that
much
greater
yeah
attackers
already
jumping
behind
firewalls,
getting
from
one
layer
of
services
to
the
next.
If
you're
now
creating
you
know,
potentially
thousands
of
potential
attack
points
across
your
application.
B
That's
a
real
security
risk,
so
you
can't
just
rely
on
a
perimeter
firewall
to
protect
yourself
and
you
need
to
think
about.
How
is
that
intra
cluster
and
communication
going
to
be
protected
and
those
those
might
seem
a
little
bit
of
a
kind
of
negative
point,
but
I
just
want
to
make
a
very
positive
point
here
as
well
right,
which
is
that
you
know
there's
an
Orchestrator
involved
here.
B
You
have
in
the
indicators
kubernetes,
which
you
know
open,
soapy
ship
is
built
on,
but
there's
an
Orchestrator
which
is
making
scheduling
decisions
about
where
workloads
are
placed,
and
it
knows
things
about
those
workloads
and
zeref.
There
are
schemas
and
there
are
ways
for
developers
to
attach
metadata
onto
themselves
with
labels,
for
example.
B: But there's also this huge opportunity to automate things, because you're operating in an automated environment. That whole problem you used to have, of rules hanging around in a firewall where no one knew why they were still there, what we call firewall cruft, things that hang around for years, can go away, because you know dynamically where every workload is, and exactly what should therefore flow through it, and you can automate the security rules. So that's kind of the background. Given those challenges, and the opportunity, what is it we need to do?
B: What do we need to do to solve this problem? At a very high level, I like to keep it simple. Firstly, I want to simplify the network: take out unnecessary layers of complexity, because that's what causes challenges as we scale up those multiple orders of magnitude. Secondly, I want to secure the workload: take these fine-grained rules that say who can talk to whom, and integrate that with the orchestrator, which is the whole point of this automation piece.
B: And thirdly, I want to do these things tightly knitted together, in an architecture which is really tied to the way that we're building applications today, and not bolted on the side. So this is how we think about things, and that's essentially what we do with Calico, and what I'm going to talk to you about for the next few minutes here.
B: So, how we're addressing these challenges, and what Project Calico does. Just by way of background: Calico is a project that's been going for a couple of years now. It's open source, Apache licensed, and it's a pretty active community now: we have about 100 community contributors, a lot of those from outside Tigera, so there's a very broad community of folks contributing to the project, and we're starting to see pretty large-scale deployments now.
B: Some very large names have talked about how they're deploying Calico in various different environments: a lot of Kubernetes, but also people working with OpenStack, Mesos, and Docker, lots of different environments. So it's pretty field-tested now, and it's a pretty solid basis for us to keep building the network on and taking the technology forward. So let's step back and think about this simplification of the network; it's really this kind of a checklist.
B: The first thing that we do, and this is very much in line with the Kubernetes philosophy: Kubernetes was actually the first of the orchestrators that really said we're going to take a new approach to how we think about networking. There are these pods, and every pod gets an IP address. The way I like to think about it is: pods are endpoints; pods are just things that should be on the network, and they have an IP address.
B: We want to flatten the network, get rid of any intermediary layers, and give pods a real IP address. What this means is that by default we don't need an overlay network; we don't want an overlay network, in fact. Packets coming out of a pod just go onto the underlying network without any encapsulation, without any additional overhead, and therefore you're going to get good performance.
B: The other approach that we have here is that we believe in IP routing. We believe IP routing is the way to get to scale, and we try to remove layer 2 concepts. So it's a routed model: a packet comes out of the pod, is routed onto the underlying infrastructure, and across to the other pod. That's a very simple model to understand: whether you're going to a remote node or another local pod, it's just a single routed hop.
B: And we believe in Linux. I think our friends at Red Hat would agree that Linux is a pretty good basis for building a product on: it's got a very good, very efficient network stack, a lot of technology there that's very proven and solid, and we want to leverage what's there. So the upshot of it is that we want to get the maximum performance while making it really simple to troubleshoot.
B
And
we
think
the
tools
are
there
and
the
way
that
we
put
those
together
that
makes
makes
calcio
the
high
performance
emphasis
troubleshoot
solutions.
So
let's
look
at
into
the
next
level
of
detail
in
terms
of
architecture
and
think
about.
First
of
all,
you
know,
as
with
most
networking
solutions,
there's
kind
of
two
pieces
to
it:
there's
a
control
plane
and
then
there's
how
do
your
packets
actually
get
around
so
looking,
first
as
a
control
plane?
B
Our
thinking
was,
if
it's
that
scaling
up
for
the
underlying
Orchestrator,
then
our
space
distribution
scales
up
in
exactly
the
same
way
as
the
overall
posture
that
we're
we're
integrated
with
there's
no
kind
of
question
as
to
whether
you're
going
to
get
out
of
step
in
terms
of
the
level
of
scale
you
can
reach,
but
we
don't
distribute
all
state
by
APD
there's
a
another
way.
We
communicate
between
the
nodes
as
well
and
that's
to
communicate
where
IP
addresses
are
located,
and
this
is
kind
of
a
special
bit
of
state
distribution.
B
If
you
like,
because
this
is
a
well
known
problem-
that's
been
around
for
several
decades.
How
do
I,
if
I,
have
a
set
of
nodes
on
it
on
a
network
and
I
want
to
communicate
to
them?
How
I
can
get
to
ultimate
I
P
address
endpoints,
and
so
you
know
we,
we
don't
believe
in
reinventing
the
wheel
if
there's
something
there
to
be
to
be
used
already.
B: Just peering with the other compute nodes, a Calico node looks exactly the same as any other router. So we run an agent on each compute node, and that agent programs routes into the Linux kernel: routes to remote pods based on what it learns from the other compute nodes via BGP, and routes to its own local pods based on what its local workloads are. Then, once those routes are set up, we get out of the way, and the data plane goes from one pod straight through the kernel.
B
The
virtual
Ethernet
interface
is
hooked
up
into
the
Linux.
Kernel
goes
through
the
limits
tunnel
wrapping
table
over
the
physical
interface
is
rapid.
Back
on
the
other
side,
no
encapsulation
required
the
routing
just
work,
because
that's
what
wrapping
does
it's
just
IP
and-
and
you
know
it's
I-
think
if
you
can
think
about
how
Sdn
is
traditionally
tried
to
sell
you
on
on
the
benefits
and
the
virtualization
and
obstruction
layers,
and
you
have
to
say
if
this
just
works
with
your
basic
model.
You
know
why
not
just
do
that.
B
There's
one
other
piece
as
well
that
we're
going
to
talk
a
little
bit
more
about
today,
and
that
is
policy
enforcement,
because,
if
I,
just
let
all
pods
all
other
pods,
as
in
the
previous
picture,
I,
potentially
let
in
malicious
traffic
I.
Let
a
pod
sort
for
someone
that
it's
not
meant
to,
and
here
again
the
Linux
kernel
has
all
the
tools
that
we
need
has
very
highly
scalable
and
sports
limiters
and
access
control
lists
so
that
same
agent,
which
is
programming
routes
to
the
local
pods.
B
Also
programs,
the
rules
into
the
kernel
access
control
list,
which
is
iptables
function,
and
we
have
a
very
tuned
way
of
doing
this.
That
manages
to
get
very,
very
high
performance
out
of
that
and,
and
so
the
traffic
that
goes
out
of
the
pod.
Actually,
the
data
plane
isn't
is
that
it
goes
through
through
the
route,
routing
tables
that
also
have
to
pass.
The
IP
tables
checks,
and
this
will
also,
for
example,
we
can.
B
The
other
point
to
make
here-
and
this
is
you
know,
pipes-
can
both
control
plane
and
the
base
plane
as
I've
shown.
So
you
know
physical
fabric
or
public
cloud
underneath
so
obviously,
in
the
case
you're
just
running
on,
say
on
bare
metal
on
a
physical
fabric,
you
have
complete
control
over
what
the
underlying
network
is.
B
In
many
cases,
you're
going
to
be
deploying
within
virtual
machines
with
an
set
either
in
your
own
data
center
or
in
a
public
cloud
such
as
Amazon
or
GTE
or
agar,
and
you
know,
and
then
you
don't
have
visibility
or
control
over
the
underlying
network.
Calico
works
just
as
well
in
that
kind
of
environment
as
well
slightly
different
recipes,
depending
on
whether
you're
in
Google
or
Amazon
etc,
but
it'll
work
across
both
of
those
those
environments.
B
So
I've
kind
of
talked
a
little
bit
there
about
the
architecture
and
how
these
pieces
fit
together,
hopefully
that
that
all
makes
sense.
But
you
know
I
come
back
to
mr.
bean
with
his
grumpy
questions.
You
know:
I've
got
a
firewall
at
the
edge
of
the
datacenter.
Why
why
on
earth
do
I
want
network
policies
as
well
what
it?
What
does
network
policy
do
for
me
as
a
developer
as
an
operator,
you
know
it
exactly.
B
So
you
know
that
you're
not
expecting
your
back-end
database
to
have
an
inbound
connection
directly
from
your
front-end
load.
Balancer,
but
if
you
leave
that
of
the
possibility
open
of
allowing
that
connection,
then
you've
created
an
opening
for
an
attacker
to
get
in
and
compromises
one
of
those
one
of
those
application
components
get
to
somewhere
else.
B
So
the
goal
here
is
to
identify
that
subset
of
the
N
squared
connections
and
reduce
connectivity
to
that
and
have
the
Caliphate
Calico
agent
on
each
node
program,
the
ATL
to
enforce
just
those
allow
just
those
connections
which
should
deal
out
and
deny
everything
else,
and
that
in
essence,
is
what
we
do
and
the
way
this
is
this
is
done.
Is
here
a
something
should
be
pretty
familiar
to
you
through
the
net,
either
a
llamo
resource
file,
and
it
looks
something
like
this.
So
you'll
have
up
to
the
API
version.
It
is
a
policy
file.
B
Each
policy
can
have
it
have
a
name
is
the
key
bits
are
where
you
get
to
this
respect,
piece
where
we
can
use
an
arbitrary
selector
expression.
So,
in
this
case,
for
example,
this
policy
says
says
that
it
applies
to
everything
where
you've
got
the
label
role,
equals
database
and
I'm,
going
to
specify
who
I'm
going
to
allow
to
make
inbound
connections
into
that
into
those
pod
style.
Allow
TCP
connections
on
port
63
79
from
anything
that
has
a
label
what
we
for
front-end
and
because
I
want
to
do.
B
Keep
this
simple
I'll
allow
any
egress
traffic
out
of
this
deep
database
pods,
so
that
I
could
make
that
you
know
much
more
complex,
egress
rule
I
could
include
you
know
more
more
complex
expressions
in
terms
of
you
know,
sources
and
destinations
specifying
IP
addresses
as
well
as
roles
I
could
be
using
namespace
selectors
as
well.
So
there's
a
lot
of
flexibility
and
power
within
this
network
policy.
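Rendered as YAML, the policy described above might look something like the following. This is a minimal sketch only: the apiVersion, exact field names, and the policy name are illustrative, since Calico's policy schema has changed across releases.

```yaml
# Sketch of the policy described above: applies to pods labeled
# role == 'database', allows inbound TCP 6379 only from pods labeled
# role == 'frontend', and allows all egress traffic out.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-database
spec:
  selector: role == 'database'
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'frontend'
      destination:
        ports: [6379]
  egress:
    - action: Allow
```

Anything not matched by an Allow rule is denied for the pods the policy selects.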
B: Once you have it, you just apply it with the calicoctl command line: apply the policy file. Now, those of you who have seen any of these webinars where people have talked about Kubernetes network policy will recognize it, because it looks very, very close; in fact, the Kubernetes network policy was based pretty closely on what we built for Calico, although what's actually in the Kubernetes API is a subset of this. If you want to just use the Kubernetes API, you can.
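For comparison, the Kubernetes-native subset of the same policy looks very similar; again a sketch, written in the networking.k8s.io/v1 form of the API (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-database
spec:
  podSelector:
    matchLabels:
      role: database
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
```

Note that the Kubernetes API expresses selectors as label maps rather than arbitrary selector expressions, which is part of what is meant by "a subset" here.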
B: You want a pod to come up, start, and have network connectivity straight away. If it takes many seconds to get the policy to the pod, or if I enforce policy reactively, where the first time I see a packet I have to go and ask some central controller whether I'm allowed to send it, then you're going to have delays on the network and delays scheduling pods. That's why we distribute this.
B: So that's the architectural approach, and we test this at scale. That key metric I mentioned: when you create a new pod, you've got to set up the network and apply policies to it, and that's typically sub-10 milliseconds. And when I say at scale, I'm talking hundreds of thousands of pods. So this is a pretty efficient, proven-at-scale kind of system for implementing these policies.
B
Calico
has
much
more
dynamic,
IP
address
management,
so
we
could
we'll
take
a
smaller
range
of
IP
addresses,
initially,
typically
a
slash
twenty
six,
so
you'll
get
64
addresses
and
then,
if
you
schedule,
the
sixty-fifth
pod
will
call
another
address
range.
We
can
do
this
because
we
have
C
the
routing
protocol
where
we
can
dynamically
communicate
around
when
when
address
is
going
to
move
or
you
know,
allocated
community.
B
So
that
means
that
we
get
a
lot
more
efficient
use
of
the
IP
address
space,
so
you're
not
wasting
address
is
having
256
assign
to
a
house
when
you're
only
running
10
containers
on
it,
but
at
the
same
time
you're
not
imposing
enough
a
limit
on
how
many
containers
you
can
schedule
you
can
get.
You
know
for
2,000
on
there.
If
you
want
you'll,
just
pull
down
more
more
I
keep
cool.
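The address-block arithmetic here is easy to check with Python's standard ipaddress module; the example prefixes below are made up, and only the /26 block size comes from the talk:

```python
import ipaddress

# A /26 block, the typical initial per-host allocation mentioned above,
# holds 64 addresses.
block = ipaddress.ip_network("10.65.0.0/26")
print(block.num_addresses)  # 64

# Versus statically handing each host a /24: 256 addresses, mostly
# wasted on a node running only ten containers.
print(ipaddress.ip_network("10.65.1.0/24").num_addresses)  # 256

# Scheduling a 65th pod just means pulling down a second /26 block.
pods = 65
blocks_needed = -(-pods // block.num_addresses)  # ceiling division
print(blocks_needed)  # 2
```
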
B: So that's one architectural comparison; the next one is the use of bridges. A lot of networking solutions, starting I guess from Docker, connect all of the local containers or local pods onto a bridge: in OpenShift's case the OVS bridge, in Docker's case the docker bridge. The idea there is essentially that it looks like all of the local pods have a layer 2 connection.
B
We
take
a
slightly
different
philosophical
approach
and
say
we
do
IP
wrapping
everywhere,
so
whether
whether
you
are
whether
you're
going
from
bottom
one
one
host
to
another
host
across
a
network
or
you're
going
from
you
know
between
two
pods
on
the
same
machine,
it's
a
single
router
top
it's
the
same
path
and
there's
there's
no
bridge
involved.
So
it's
a
it's
a
rapid
connection,
the
the
next
thing
you
know
it
took
bit
about
this
earlier,
no
overlays.
B
If
you,
when
you
connect
with
a
traditional
Sdn
to
remote,
pods,
you're,
typically
doing
that
via
a
tunnel
interface
at
the
bottom
of
the
kernel.
You'll
set
up
a
vehicle
and
tunnel
between
two
hosts.
Now
calicoes
can
do
tunneling
because
sometimes
it
is
required
because
you
have
a
network
topology
underneath
you
where
you
can't
wrap
across
it,
but
it's
not
required
it's
not
kind
of
the
basic
way
of
getting
packets
out
of
a
out
of
a
host,
so
the
product
as
a
real
IP.
B
That's
wrapped,
belong
the
underlying
network,
so
we
just
send
it
out
of
the
f0
interfaces
at
the
bottom
of
the
stack
see
one
nice
side
effect
of
this
and
didn't
mention
this
earlier.
But
you
know
when
you're
running
on
a
public
cloud
environment
or
sometimes,
if
you're
running
on
an
existing
open.
Second
environment,
where
you
have
here's,
maybe
using
neutral
networking
or
you
using
some
other
Sdn
and
in
public
value.
Don't
know
you
don't
know
what
Xen
is.
Underneath
you
see
anything.
B
Those
underlying
virtual
machines
are
are
going
to
be
having
some
kind
of
encapsulation
of
packets
coming
out
of
them.
So,
if
you're,
if
you,
if
you
use
vehicle
on-
or
you
know
some
other
encapsulation
from
the
container
down
to
the
VM,
now
you're
going
to
get
double
encapsulation
and
that
that
can
be
a
performance
problem,
particularly
when
you
start
hitting
the
threshold
of
packet
sizes,
so
the
MTU
limits
on
a
given
cloud.
Yes,
in
some
cases,
are
quite
low
and
you're,
adding
dozens
of
pipe
from
the
front.
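A rough back-of-envelope for that overhead, assuming classic VXLAN framing (50 bytes per encapsulation layer: outer Ethernet 14, IPv4 20, UDP 8, VXLAN 8); the 1500-byte MTU is just an example figure, not a limit quoted in the talk:

```python
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # outer Ethernet + IPv4 + UDP + VXLAN headers
mtu = 1500                         # example link MTU

payload_unencapsulated = mtu                       # Calico default: plain routed IP
payload_single_overlay = mtu - VXLAN_OVERHEAD      # one tunnel layer
payload_double_overlay = mtu - 2 * VXLAN_OVERHEAD  # tunnel inside a cloud tunnel

print(payload_unencapsulated, payload_single_overlay, payload_double_overlay)
# 1500 1450 1400
```
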
B: The next point is: how do you get outside of the cluster? Well, if you are doing everything in an overlay network, then you always have to go out via NAT. Now, Calico can define NAT rules, because sometimes, for example, you're using addresses internal to the cluster and you want to be able to connect to external addresses, but it's not required by default.
B
You
could
have
a
set
of
external
IP
addresses
that
you
assign
to
some
pods,
and
then
they
just
have
a
real
external
IP
address,
no
net
required
and
the
the
next
one.
This
is
maybe
a
little
bit
more
internships.
Ftm
specific
is
you
know
a
couple
of
different
methods:
Network,
isolation,
isin,
multi-tenant,
plug-in
or
utilities
network
policy.
You
know
we
talk
about
how
we
do
Network
isolations,
it's
just
ingress
and
egress
policy
rules
in
IP
tables
ruled
in
the
Linux
kernel,
and
that
allows
me
to
be
my
lieutenant
effect.
B
It's
a
yeah,
it's
an
awesome
piece
of
software,
but
it
is.
It's
a
big
BMI
and
you
know
a
lot
of
people
say
you
know
what
I'd
rather
have
a
know
that
my
data
plane
is
running
through
students
channel
it
and
just
the
traditional
layer.
Three
data
passes
in
the
kernel
and
we're
very
happy
with
that
and
our
code
and
such
is
only
control
and
control
plate.
B: Now, what we do hear from some people is that they really like the way we do policy, but for some reason, and there are some good reasons in some cases, they want to use a VXLAN overlay; in particular, a lot of folks use flannel, a project we're involved with as well. And so people have been saying: you probably can't do that with Calico.
B
Actually
we
can-
and
this
is
a
project
that
we
launched
last
year
and
together
with
chorus
to
allow
you
to
take
if
you
like,
best
of
both
worlds,
to
take
the
simplicity
of
a
flannel
overlay
which
allows
you
just
to
allocate
a
single
subnet
and
have
the
a
clamp
to
each
house
and
he
Atlanta
milling
between
them
and
take
all
of
those
policy
rules
that
we
talked
about
and
apply
them
on
top
of
that
set
tunnels
network.
So
you
know
this.
B
This
works
pretty
well
I've
heard
of
quite
a
few
people
using
this
now
and
you
know
it's
a
it's
actually
very
straightforward
I
mean
even
though
it's
got
its
own
project.
We
throw
on
the
project
calico
that
the
key
piece
is
really
just.
How
do
you
get
so
to
see
and
I
plug
in
two
to
work
together
and
see
ni
as
this,
as
is
nice
attribute
of
being
able
to?
B: The way it's set up on each node: there's a single container that we have, called calico/node, which actually contains a couple of processes: one is Felix, which does the local routing and policy calculation, and another is BIRD, which provides the BGP control plane. Then we need a single instance of the policy controller, and again that's just stood up as a Kubernetes pod and plugged into the policy API.
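On Kubernetes, the calico/node piece typically runs as a DaemonSet so that one copy lands on every node; a heavily trimmed sketch follows (image tag, environment, volumes, and RBAC omitted; the namespace and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true       # the agent programs routes on the host itself
      containers:
        - name: calico-node   # bundles Felix (routing/policy) and BIRD (BGP)
          image: calico/node
```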
B
If
you
can
see
a
negative
Network
policy
converts
humanity's
network
policy
API
into
and
Calico
datastore,
so
that
steps
have
a
kind
of
architecture
map
in
terms
of
how
we,
where
we're
at
with
that
with
integration,
you
know
if
you,
if
you
want
to
go
out
and
try
calico
with
kubernetes,
there
are
tons
of
different
ways
to
do
it.
You
know
some
of
the
easiest
are
actually
yesterday
heck,
CEO
and
Ameth,
and
Amazon
announced
a
QuickStart
or
kubernetes
on
AWS
which
which
includes
calico
by
default.
B
So
you
just
run
that
vertical
settings
and
there's
a
bunch
of
CloudFormation
template
which
will
get
your
AWS
cloud
instance
set
up
with
these
communities
of
calico.
Another
really
nice
way
of
doing
it
step.
One
cloud
has
a
pointed
click
interface
for
creating
virtual
machines,
pre-configured
with
kubernetes
and
calico
uses,
quick
CFO
onto
calico
box
and,
and
that
comes
in
and
then
listening
like
cops
and
to
burdening
all
come
with
the
ability
to
configure
calico
networking
as
well.
B
So
we
actually
got
together
with
suppose
that
red
had
a
couple
of
months
ago
and
said:
let's,
let's
solve
this
problem,
let's
work
together
and
get
getting
integration
at
a
point
where
it's
actually
properly
supported
and
certified,
and
all
of
that
we
did
that
on
Sophie,
3.3
and
working
and
then
I
speak
Rita
poking
at
it
and
broke
it.
So
we're
working
on
fixing
those
those
issues,
but
that
should
be
up
soon
so
encourage
you
watch
this
space.
B: We want to get resources behind it, and if you want to contribute and be part of that as well, we're always open to that; it's an open project, and there are some very straightforward things we have to do to get this working. So that's it in terms of my prepared remarks. Here's how to get hold of us and how to find out more about the project: we're on GitHub, that's projectcalico.
B
If,
if
you
like
this,
then
tweet
Andrew
Randall,
if
you
didn't
my
name's
Christopher
Lillian
salty,
you
can
tweet
to
protect
calico
and
I,
say
you
know
the
most
the
place
where
the
community,
what
it
means
to
meet,
is
flat
top
and
then
I'll
go
to
org
and
get
into
the
black
group.
And
so
that's
it
and
I'll
open
up
questions.
Now,
all.
A: Right, well, thank you for the overview; I'd like to watch this space, and I'm loving the Mr. Bean references; coming from Canada, we got overloaded on Mr. Bean. So Peter Larson is asking: what role does the namespace play with Calico? Are there any default policies that implement isolation between namespaces?
B
That's
the
three
questions,
so
the
main
spaces
I
think
are
often
misunderstood
by
people
who
think
they
give
them
a
lot
more
protection
than
they.
So
they
do
so.
You
know
namespaces
by
default.
Don't
give
you
any
isolation
between
at
the
network
level,
you
can
specify
namespaces
in
a
policy
collector,
so
you
can
say
this
namespace
consul
to
this
other
main
space.
B
Unless
one
of
the
engineers
who
knows
better
than
me
is
on
the
quarter,
pipe
up,
I
think
right
now,
there's
no
kind
of
automates
way.
Just
to
say
all
namespaces
should
be.
We
isolated,
I
think
you
need
to
write
policy
for
that,
but
you
know
I
think
I
think
take
these
on
the
tour
looking
to
make
cleaners.
C: Yeah, I think that's pretty accurate regarding default namespace isolation, but the policy model makes it configurable. As I said in the comments, it also lets you do things that are more or less fine-grained than that: say you wanted to give a group of namespaces to a tenant; you could configure policy that affects that entire group.
A
And
the
other
question
actually
was
a
question
that
that
I
had
because
I
was
rifling
through
the
documentation
for
calico
looking
for
documentation
on
the
specifics
around
deploying
open
ship,
a
calico,
an
island
shift,
and
you
kind
of
cover
that
off
and
maybe
previous
slide,
that
it's
a
work
in
progress.
There
is
a
little
bit
of
documentation
out
there
and
some
of
the
chatter
on
the
college
that
this
is
something
that
is
coming
in
a
in
a
very
short
term
mark
Curry's,
one
of
our
product
managers
in
free
zones
and
call
right.
Now.
D (Mark Curry): Thank you, Diane. So we are very close to a good relationship and agreement with Calico, and we are revamping our documentation for our SDN, and we would like very much to highlight Calico within that documentation. So I'm sure we will not only be sharing links back and forth specific to some of the questions in the chat, but I think we will probably also add something to our reference architectures to that effect as well.
A: Andy's here with me, and he will talk about Calico on a panel at the OpenShift Commons gathering on March 28th, which is the day before KubeCon. So if any of you are coming for KubeCon, please come a day earlier and join us; there'll be some amazing speakers, including Andy, and you won't have to listen to me blabbing on, because there's a great lineup, not just on the panels and among the speakers, but in the audience as well.
A
So
I
really
highly
encourage
you
to
come
a
full
opportunity
just
to
meet
some
of
the
project
leads
on
open
source
projects
like
taco
and
kubernetes,
and
docker
and
core
OS
will
be
in
the
audience
and
all
kinds
of
good
people.
Well,
if
you're
around
please
join
us
and
get
me
there,
one
way
or
the
other
in
them
and
and
a
beer
will
be
good
upon
it.
A
Let's
see,
maybe
one
more
question
here:
all
right,
yes,
Jeff.
There
will
always
be
a
link
to
the
presentation
that
Andy
is
just
given.
We
post
the
it's
usually
on
Mondays
after
the
following
week.
This
week
we
have
three
briefings
going
on
so
there'd
be
a
lot
I'm
being
pushed
out
on
Monday
a
glove
that
OpenShift
calm,
and
it
will
also
be
on
our
YouTube
channel,
so
you
can
find
it
there
and
I
think
finally
update
the
comment
page
and
get
it
up
there
as
well.
A: So I think that's it. Andy, I really want to thank you for taking time out of your schedule to do this, and I'm looking forward to March, and to everybody getting the updated documentation out there, with links back and forth between the Calico documentation and the OpenShift documentation and reference architectures. We'll try to keep those links associated with the video of this, so if you're watching this at a later point in time, check the comments on the YouTube channel and on the blog post, and I'll try to keep those updated as well. All right.
B: Diane, I'd like to thank you for inviting us here, and for the great work you're doing on the community. I think one of the things that we've really enjoyed about working with Red Hat on this is just how open and inclusive everyone has been; there's a really good sense of community that you guys are building around OpenShift, and that's fantastic.
A: I will thank you for that, and probably quote you on it. So take care and have a great weekend, everybody, and we'll see you soon. Take care.