From YouTube: Webinar: Introduction to Cloud Native Networking
A
Okay, so it seems like there are quite a lot of people on the call now, so we're going to go ahead and get started. Welcome to this, the second in a new series of webinars from the Cloud Native Computing Foundation. Today we'll be talking about cloud native networking, and we're very fortunate to have two very knowledgeable speakers with us: Christopher Liljenstolpe, who's the CTO and co-founder of Tigera.io, the people behind Project Calico, which some of you will probably know; and Bryan Boreham, who is a director of engineering at Weaveworks, who are currently working on Weave Cloud, which allows teams to connect, monitor and deploy containers and microservices. Now, cloud native networking is a space that appeals to both network engineers and a growing number of developers who find that networking is becoming a part of their daily work. We've deliberately avoided heavy networking jargon to make sure that everyone can follow.
C
We are so sorry about that; a little bit of pointer problems on my side. So, what we're talking about here is networking for containers and microservices, and I thought it might make sense to put it a bit in context within the CNCF reference architecture. The CNCF reference architecture, for those of you who aren't familiar with it, basically divides the landscape into layers.
C
You have the runtime layer, and that's where things like resource management, container management, managing compute resources and scheduling live, as well as things such as networking and storage. The networking in particular is based on a plug-in model, where different providers of that technology can plug into the infrastructure. Above that are the orchestration and management platforms, and then the actual application delivery and development. So what we're talking about today sits in that runtime layer of the CNCF reference architecture.
B
Way back, about three years ago; we've got a picture on the slide here, and this kind of indicates one of the key problems that people faced. When you run up a Docker container, it gets a network interface; it got one three years ago too. But those were connected to a bridge which only worked on one machine, the machine that those containers were on, and you could map a port out to the machine's network interface.
B
So that's the interface we show in the middle of the picture. But then you had this problem: what if two containers wanted to listen on the same port number? By port number we mean the number that indicates what kind of service you're going to offer. So a web endpoint is port number 80; a DNS endpoint is port number 53. There are all these well-known numbers, so clients know which to connect to, but you can only listen on one port from one place on a particular network interface.
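The port clash Bryan describes here can be reproduced in a few lines of Python; this is an illustrative sketch, not something shown in the webinar:

```python
import socket

# Why pre-container-network Docker had to remap ports: two sockets cannot
# both listen on the same (address, port) pair of one network interface.
# We bind to port 0 first so the OS picks a free port for the demo.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
s1.listen()
port = s1.getsockname()[1]

s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))  # same address and port -> conflict
    conflict = False
except OSError:                   # "Address already in use"
    conflict = True

print(conflict)  # True
s1.close()
s2.close()
```

The second `bind` fails with `EADDRINUSE`, which is exactly why two containers sharing one host interface could not both offer port 80.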
B
So this is the problem that this slide is trying to illustrate: if you happened to have two containers that both wanted to listen on port 80, then what you ended up doing was remapping those port numbers. So in this example, one comes out on port 2268, and the other one comes out on 32769. Those are not well-known numbers; those are numbers the client is going to have to find out.
B
So now you have this additional bookkeeping. You have to keep track of those numbers, you have to be able to look them up; maybe there's some extra service required to do that service discovery. So this is kind of where we were at three years ago: people were doing this port mapping, and it was kind of ugly.
B
Yeah, so this picture just shows the point: clients connect direct to port 80, because there's a container network now that connects those two endpoints, and every container has its own IP address. You can listen on the same port, on any port you want to, on that container network, and they're all connected up.
C
So there are a number of advantages to this approach. As Bryan said, the port clash disappears; now each container can expose a port 80 service. Because of that, discovery of the service being provided by that container is now just a DNS lookup. And we know, for example, that this kind of architecture works.
C
It works at scale: it's just the infrastructure of the public internet. Your services on the public internet all have their own IP addresses, and then there's a whole bunch of mechanisms that allow you to discover those services and connect to them, which have already been developed to work at very large scale. The interesting thing is that Kubernetes, as a container platform, took this approach from the outset; they were the first ones really to say, from the get-go:
C
let's just give every container, every pod in the Kubernetes case, an IP address. Now, a lot of the other container platforms have done that as well. But this gives you some advantages. DNS, we know, works at scale, and DNS is natively supported by Kubernetes and some other platforms. This just basically gives you a nice, scalable way, with lots of tools already existing, to connect to these services.
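The "discovery is now a DNS lookup" point can be shown in one call; this is an editorial sketch, and `localhost` here stands in for a real service name like `web.default.svc`, which is an assumed example and not from the talk:

```python
import socket

# With one IP per container, service discovery collapses to a DNS lookup:
# the client resolves a stable name instead of tracking remapped ports.
# "localhost" stands in for a cluster service name such as "web.default.svc".
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1
```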
B
How, indeed? Well, it turns out there's a huge array of facilities in the Linux kernel dedicated to networking. People have been building routers and firewalls and all kinds of network devices on Linux for decades, and it's immensely powerful, with a lot of facilities. So basically, from the point of view of someone wanting to invent a container network, it's really a matter of putting together facilities that are already there in the Linux kernel.
B
That's great news: these are very high quality features, available to anyone running on Linux. The bad news is that there's a lot of choice, and you have to pick one. If you've ever surveyed the market for container networking, you will find quite an array of offerings out there, and that can be quite bewildering. We're going to get into some things to look out for as you decide which one to pick later on, but yeah.
B
That's okay, yeah. So, as I was saying, the control plane does the coordination. Things like the IP addresses: there's probably a pool of IP addresses that you're using for your container network, and those are different from the IP addresses on your regular network, so the containers can happily come and go and talk amongst each other. Then there's the routing information.
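The IP-pool bookkeeping just mentioned can be sketched with the standard library; the `10.244.0.0/16` pool and the per-node `/24` block size are illustrative assumptions (they happen to match common Kubernetes defaults), not figures from the talk:

```python
import ipaddress

# One common control-plane job: carve a cluster-wide container address pool
# (deliberately separate from the host network) into per-node blocks.
pool = ipaddress.ip_network("10.244.0.0/16")
node_blocks = list(pool.subnets(new_prefix=24))

print(len(node_blocks))              # 256 /24 blocks, one per node
print(node_blocks[0])                # 10.244.0.0/24
print(node_blocks[0].num_addresses)  # 256 addresses per node
```

Handing each host a whole block like this is also why address allocation stays fast: most assignments never need cluster-wide coordination.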
B
How am I going to get to a particular container? And also maybe policy, which is another thing we're going to get into in detail later; a lot of people are pretty keen on having control over who can talk to whom on the container network. So that's the control plane. The data plane, on the right-hand side of the picture, is sending the packets: everything on your network is broken down into packets, and each one of those has a source and a destination address.
C
Now, to go a little bit more into some of the specifics of how control planes are built, and the kinds of technologies that you commonly see within container networking platforms: you might be using a distributed key-value store. This is what's used by flannel or Calico; etcd is very common. There are other solutions out there as well, but etcd is the most common key-value store that's used. You may use routing protocols; again, Calico uses BGP.
C
There are other routing protocols out there, and they're used to distribute, as Bryan said, reachability information. A routing protocol is also good for talking to classical networking hardware that might also exist, and at least in private clouds very often does exist, within the infrastructure; this allows the cloud platform to talk to that infrastructure if it needs to. There's also the use of gossip protocols, which are completely serverless, peer-to-peer state-sharing mechanisms, used by some platforms.
So those are some of the control plane options that you see in these environments, and there is no one thing that's best for every use case. We'll talk a little bit about some of the things you need to think about as you're selecting your network solution for your container environment; all of these things have trade-offs.
C
How are we going to implement that? There are really two aspects to transport. One is the actual forwarding engine: what's going to be making the decision about sending the packet, actually putting the packet onto the right path to get to the destination, and also applying the policy filter, if you're using policy, to decide whether A should be able to talk to B. So it makes a decision on how to get traffic from A to B, and then whether that connection should be allowed. That normally takes place either using classical Linux kernel networking functions, at the Ethernet or IP layer, or potentially using a user-space agent that sits outside of the kernel and makes that forwarding decision without involving the kernel at all.
C
So OVS can be used here; there are kernel capabilities as well, or user-space routers. Then there's the transport mechanism, and there are some differences here as well. You may decide that you want to run just on the underlying network, so those containers or pods are actually part of the underlying infrastructure; they're actual nodes on the underlying infrastructure. We might call that native networking, and that might have some advantages if you're already in an encapsulated network, or you've got a network that will allow you to do that.
C
The other approach is encapsulation. Basically, what that is doing is taking the packet that came out of the workload and encapsulating it in another packet, as you can see in the diagram here, to get it closer to the final destination, where it will get unwrapped and presented to the destination as the packet that was originally sent. And again, there are trade-offs to both of these, and in some cases you might want to pick one or the other. But that's really what we mean when we talk about transport.
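The wrap-and-unwrap idea described here can be reduced to a toy; the 8-byte outer header below is a made-up sketch for illustration, not a real VXLAN or IP-in-IP format:

```python
import struct

# Encapsulation in miniature: the workload's packet becomes the payload of an
# outer packet addressed between *hosts*; the receiving host strips the outer
# header and delivers the inner packet unchanged.
def encapsulate(outer_src: int, outer_dst: int, inner: bytes) -> bytes:
    return struct.pack("!II", outer_src, outer_dst) + inner

def decapsulate(wire: bytes) -> bytes:
    return wire[8:]  # drop the outer header, recover the original packet

inner = b"original container packet"
wire = encapsulate(0x0A000001, 0x0A000002, inner)  # 10.0.0.1 -> 10.0.0.2
print(decapsulate(wire) == inner)  # True
```

The cost of the real thing is the same shape as here: every packet carries extra header bytes, and every hop spends cycles wrapping and unwrapping, which is one of the trade-offs against native networking.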
B
So let's talk about those. On the left is Docker's model, which for most of its life people used to talk about as libnetwork, and more recently they've branded as the CNM, the Container Network Model; and on the right, the CNI, the Container Network Interface. These two different models do have a different philosophy base. Libnetwork is very tied into exactly what Docker does, and there are advantages to that: it fits that model really, really well.
B
CNI is almost just a single instruction from the infrastructure to the network: add this container to the network. How the plugin goes about it, it can do anything it needs to do. As you can see from the picture, a lot more people, notably Kubernetes and Mesos, have gotten on board with the CNI, with that model, but not Docker.
B
That's the subject of ongoing discussion, but that's where we are today with plugins. You've probably already picked your preferred orchestrator or platform, so from there, that tells you what kind of plugin you need; then go look at the provider of network technology and make sure they have the right kind of plugin.
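To make the "single instruction" contrast concrete: a CNI plugin is just an executable that the runtime hands network config as JSON (on stdin, with `CNI_COMMAND` and similar variables in the environment) and that prints a JSON result. This sketch covers only the shape of an ADD result; the hard-coded address stands in for what a real IPAM plugin would allocate, and the names are illustrative assumptions:

```python
import json

def cni_add(netconf_json: str) -> str:
    # Parse the network config the runtime supplied and return a result
    # describing the addressing the plugin set up for the container.
    conf = json.loads(netconf_json)
    result = {
        "cniVersion": conf["cniVersion"],
        "ips": [{"version": "4", "address": "10.244.0.5/24"}],
    }
    return json.dumps(result)

netconf = '{"cniVersion": "0.3.1", "name": "demo-net", "type": "demo-plugin"}'
print(cni_add(netconf))
```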
B
What impact is a failure going to have on your overall system? First of all, if the software crashes; secondly, if the machine it's running on crashes; and lastly, if the network link that it is running on top of gets lost, what effect does that have? That's what we call a network partition, and whether everything stops in that event, or whether we can carry on on one side of the partition, those are interesting questions to look at with your choice of network.
C
And again, sort of dovetailing into resilience, is monitoring and troubleshooting. Are you going to be able to troubleshoot, monitor and understand what's going on in the infrastructure with the skills that you have? If you pick a platform, for example, that leverages basic Linux networking concepts, you've got a whole lot of tools, and a lot of folks who run Linux can troubleshoot those kinds of infrastructures reasonably well.
C
If you take a more complex infrastructure that, say, uses MPLS encapsulation or something along those lines, then you're going to need different types of skill sets, more hardcore networking skill sets, in order to monitor and troubleshoot. So the test I usually use there is the three o'clock in the morning test: if it breaks, will the person who's on call at 3 o'clock in the morning
C
be able to figure it out, or will they have to wake someone else up to figure it out, or call in a team? That's something to think about as well, because things never break at 10:00 a.m.; they always break at 3:00 a.m. Then: what kind of security do you need? I think one of the things you need to think about here, and we'll be talking about this more with policy, is that this is a much more dynamic environment.
C
So if you think that you're going to be doing security in a very static way, just putting some basic perimeters around things, it may not work in a cloud native environment. So know what your security needs are, and whether the platforms you're looking at meet your security requirements. And then scale and performance: a lot of folks don't need the highest-performance network around, but you do need to think about scale.
C
The whole idea of cloud native container environments is that they scale up and they scale down; they're very dynamic. So what's that going to do to your performance? As you add nodes, as nodes are very ephemeral and come and go, what kind of performance impacts is that going to have? Are you going to be able to meet your SLAs? You need to take a rational look at that: what are your actual workloads?
B
So many people pick some micro-benchmark that does something extremely unrealistic, like one that sends packets as fast as it can. That isn't going to tell you how your real system is going to behave under load, so use a realistic benchmark if you're going to measure the thing. And also beware of cloud when benchmarking: if you think about it, you're renting space on someone else's computer. The benchmarks you get one week might be different the next week, because your VMs have landed in a different rack, or, you know,
B
maybe a different segment of the network; maybe the segment you were using was very congested last week and it's less congested this week. These are real difficulties with trying to get accurate information, unless you put a huge amount of effort into trying to control for all the variability. So, bottom line: do some testing with a realistic workload, and I think you'll find almost any of the options out there will work.
C
You know, Bryan and I both have operational experience as well, and I think, and Bryan can say if I'm wrong here, if you are going to go down the testing path, one thing to keep in mind, and this is one thing we always said when we were building production infrastructure, is that there are actually two values to testing, two key benefits to testing and getting that information back. One is: is it fit for purpose?
C
We used to test things not necessarily to validate them, but to test them until they broke, quote unquote, so that you actually understand as an operator: okay, when I lose 50% of my infrastructure, or it scales up on a Mother's Day event to five times what I thought, is it going to degrade gracefully, or is it going to go boom in the most amusing manner? You do need to understand how this thing is going to behave under stress situations, so you have something to look at as well.
C
Yeah, so we hinted a little bit about security already.
C
We started doing this in VMs, but now, really, in the microservices environment, that application very likely is spending more effort servicing other applications than actual end-users, and there's very much more of a mesh in your data center. And because one of the advantages of cloud native is that you can place a workload anywhere, you don't have to start thinking about physical adjacencies, and so on.
C
The underlying infrastructure is one big fungible fabric, and you allow the scheduler to be as efficient as possible in placing workloads wherever they go, and workloads can independently scale, etc. If you think about that direction we're going, one of the problems it creates is that the endpoints that you want to protect are now also very dynamic: they change in number, they change in location, et cetera.
C
People who have wanted to try and use their existing security infrastructure to protect these platforms have realized that the only way to really do that is to just put a perimeter around the whole cloud infrastructure, because the legacy security kit they're using can't talk to these orchestrators.
C
When, and it's not if but when, you have a malicious actor that gets past that firewall, either by spear-phishing, social engineering, or incorporation of bad code that somebody pulled from a repo they maybe shouldn't have, by accident, you now have an actor within your infrastructure, behind that perimeter, who now has basically access to everything. This is the way a number of the very large attacks over the last number of years have happened.
C
I know, it's a wonderful mean-looking shark, I think. The thing you have to be aware of is that people will get into your infrastructure, and as your infrastructure gets very dynamic and very large, it's not going to be a matter of if; it's going to be a matter of when, or how many bad actors there are going to be in your infrastructure, either intentional or unintentional.
C
So what you could do is use the information that's in the orchestrator to actually state what things need to be able to talk to what things, and where they are in the infrastructure, and then start applying policy such that only connections that the developers and the operations team have decided make sense are allowed. So, you know, an LDAP server maybe only really needs to allow traffic from things with an LDAP client, on the LDAP port, and doesn't need to receive traffic on all different kinds of other ports.
C
If you do that, then you end up at a point where, even if a bad actor gets in, his blast radius, or that of a piece of bad code that overruns the infrastructure, is now constrained in the damage it can do. One way of looking at that is a very simple case: a three-tier architecture, blue, yellow, green.
C
Those workloads are showing up anywhere in the infrastructure and, depending on scaling, they could be changing on a second-by-second basis. How do I provide, for example, the classical three-tier architecture protections in this cloud environment? Notice I'm running this, for example, in Kubernetes, so I can use Kubernetes policy. We won't go into the details here, but you can basically attach a policy to each of those things. The blue is the front-end policy, and it basically allows traffic from the public internet.
C
The green boxes are the middle-tier policy: they will only accept traffic coming in from the front end, so they can't be connected to by anything else. And the database tier, the yellow boxes, will only receive traffic on database ports from the middle tier. So this is very simple policy.
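The middle-tier rule described here can be written down as a Kubernetes NetworkPolicy. This is a hedged sketch: the `tier: middle` and `tier: frontend` labels and the policy name are assumptions for illustration, not taken from the slides.

```yaml
# Sketch of the middle-tier rule: only pods labelled as front end may connect.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: middle-tier-policy
spec:
  podSelector:
    matchLabels:
      tier: middle        # the policy attaches to every middle-tier pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend   # ...and admits traffic only from the front end
```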
C
You attach them to the pod specs in Kubernetes, through the policy API in the Kubernetes infrastructure, and then whenever a blue, green or yellow workload shows up, it gets this policy automatically attached to it. What you end up with is something that looks like the security policies we used to do in classical siloed infrastructure, but now in a cloud native manner. In this case, the only things that can talk to the database are the middle tiers, for example.
C
So these are the benefits of policy, and of linking policy into the orchestration system in real time. This is, I think, a differentiator in some of the cloud networking platforms, and this is where you can get some real additional value in cloud networking beyond just simple scale: now you can scale your policies as well as your actual network connectivity. Yeah.
B
So in that world we may need some new networking technology. We're definitely going to need some networking technology, but the new technology of container networks is completely implemented in software. That software is heavily reusing modules which are in the Linux kernel, and that's the great thing; but it's all done in software, and that gives you a whole new flexibility and a whole new power. At base it's a very simple model: each container has its own IP address, and they're all on one network; they talk to each other, and that's the way everyone does it.
C
Let me take a first shot, and then Bryan. I think the issue the question might be referring to is IP address exhaustion. We know IP networking works at very large scale; we have a very large proof case out there, and we're using it right now as we speak. There may be issues in certain organizations with IP address exhaustion, but there's a fairly large space in RFC 1918 space, and some other spaces, that can be used. But if you're talking about very, very large organizations, well.
B
Maybe it's important to stress that when we go to the container network, we generally are in a private space, yeah. It gets a bit difficult without knowing exactly where the questioner was coming from, but within the IPv4 space, where there are only a few billion possible addresses, there are these large segments carved out as private spaces, and we tend to use one of those.
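The RFC 1918 private ranges mentioned here are large enough for most clusters, which can be checked directly; this is an illustrative aside, not part of the talk:

```python
import ipaddress

# The three RFC 1918 private ranges: even the smallest gives 65,536
# addresses, and 10.0.0.0/8 gives roughly 16.7 million.
for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(cidr, net.num_addresses, net.is_private)

# 10.0.0.0/8 16777216 True
# 172.16.0.0/12 1048576 True
# 192.168.0.0/16 65536 True
```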
B
Much of the container networking space does not do IPv6, so that's certainly a reality. Kubernetes, for instance, doesn't handle pods with IPv6 addresses, but it is something being worked on, and I expect that if anyone actually had a real problem there, people would fix that fast, I mean.
C
I think there is some activity right now in the community to discuss IPv6 more seriously. And the thing to keep in mind is that, since folks have leveraged the existing infrastructure and tooling for networking, based on Linux and on standard protocols, etc., all of those things support v6. So it's not like there is a technical hurdle in actually implementing v6 in a lot of these platforms; it's really just waiting for the orchestration platforms, etc., to catch up. So I'd echo what Bryan said.
C
What we see is that most folks deploy their containers using private addresses, or, you know, a very large pool of private addresses and a smaller pool of public addresses that are used for front-end load balancers or other things that would be connected to public infrastructure. We've had one or two folks who have honestly said, yeah, eventually we're going to exhaust that very large private space, but I haven't seen anyone to date come a cropper against not being able to deploy because they don't have enough private address space, yeah.
B
So that's the quantity angle on that question. The questioner may have been thinking from a speed point of view as well, because there's going to be some coordination amongst all the hosts as to who gets which address, and I think most of the implementations work by kind of grabbing a bunch of addresses at once.
C
Yeah. We know how to manage very large IP networks; we've been doing it for a long time. There are all sorts of technologies, like Bryan said: pre-allocation, potentially route summarization, et cetera. The nice thing about taking this IP-endpoint-based approach is that we get to leverage all the tools that have been developed over a very long time to manage these very large IP networks; that's sort of standing on the shoulders of giants.
A
Thanks for that. So there are quite a few questions coming in now; we're going to jump into one that's come up twice: will the recording be posted and, if so, where? Yes, a recording will be posted. I'm just putting the link to the CNCF YouTube channel into the chat now; you will see in there the recording of the first webinar we did, with Jimmy Duncan, about cloud native strategy. This one will be posted there, I guess within a couple of days. Someone also asked whether or not we would be giving the slides away.
C
I tend to agree: there is no central controller. There are ways you can build relay nodes within BGP to reduce the number of things you have to peer to, but BGP as a protocol, and gossip protocols, pretty much flood the information out on a graph that you discover, and eventually everyone converges; everyone agrees on the data that's there.
C
I think there's a larger question here. From a networking standpoint, again, because we're leveraging this sort of IP-centric approach, there are lots of tools and mechanisms that exist to allow you to link disparate or disjoint infrastructures together and allow them to talk, be that VPNs, be that a tunneling infrastructure, etc. We've got lots of folks who are doing hybrid public/private clouds, for example, or multiple public clouds, and that sort of pretty much works.
C
I think there's a bigger question right now, which is the orchestrators, and the control plane for the orchestrators, and how they'll work over disjoint infrastructure. So Kubernetes is having a large discussion right now in development around what's called federation, and it's really not just a networking thing: the entire cloud infrastructure has to be able to federate, or link together semi-autonomous regions, and as the orchestrators provide that capability, the networking will come along for the ride as well. Anything to add, Bryan? Oh yeah.
B
As you look at a bigger and bigger set of infrastructure, it becomes more and more likely that some piece of it is going to fail. So a setup where there are thousands of things all over the globe, and in order for any of them to work they all have to contact one central database in one place, is not a good model. That's what leads us to talk about things like federation: how do you split that decision process?
B
There are different ways to attack that problem, but the fundamental thing to think about is: what's going to happen if I lose this transatlantic network link? What's going to happen if some fishing boat trawls up the link between this country and that country? If I'm operating globally, these are the things you've got to worry about, and you want to build a system that doesn't have the behavior that the whole thing fails when one piece fails.
A
I'm going to skip past a question to the last one I want to get to, as we've only got several minutes left; it seems related to the point we were just making. And you're getting a thank-you from, well, it's supposed to be anonymous. So the anonymous viewer asked: I would expect that any SDN continues to work in the presence of an underlying network partition; I'm very surprised to hear that some SDN solutions rely on CP systems like etcd to make forward progress.
C
So yeah, it is an interesting question, and I think this also gets back, in the case of raft-based consensus algorithms, to the concept of federation as a way to address this. But let's take, for example, the platform I'm familiar with, Calico: if etcd ends up being partitioned, if you end up, for example, with a split across two data centers and one can't contact the other, the part that has quorum will still continue to function.
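The quorum rule behind that behavior fits in one line; this is an editorial sketch of the general majority rule used by raft/etcd-style systems, not code from any of the platforms discussed:

```python
# A raft/etcd-style cluster only keeps accepting writes on the side of a
# partition that can still reach a majority (quorum) of the members.
def has_quorum(reachable: int, cluster_size: int) -> bool:
    return reachable > cluster_size // 2

# A 5-node etcd cluster split 3/2 across two data centers:
print(has_quorum(3, 5))  # True  - the 3-node side keeps functioning
print(has_quorum(2, 5))  # False - the 2-node side stops taking updates
```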
C
It just won't get updates from the other side, i.e. it's going to be running on the data it last had, as far as policy and networking, etc. Since it can't talk to the other side anyway, that's probably a reasonable failure scenario right now; and then, when the network partition resolves, it comes back into the quorum. But this isn't just a networking problem; this is a problem for any orchestration system that's based on a raft consensus algorithm.
C
That's why the Kubernetes community, for example, is working on federation, such that you can have multiple Kubernetes clusters that can be disconnected and reconnected. Obviously, once you've solved that problem for the orchestrator, you've solved it for the network as well; all the network does is take input from the orchestrator. So this is the problem the community has as we go to multi data center, where those data centers potentially are disconnectable and, as Bryan said, things happen in the network. But again, this is not just a networking thing.
B
And I should just add that some systems choose to have eventual consistency instead, and that tends to be a lot more work to get right, but it does have the benefit that both sides of the partition can continue making updates.
True.
B
Seriously, if you're renting space: I think if you look in detail into the different cloud providers, and I guess I've spent more time reading up on Amazon, but I'm sure they all have more or less the same idea, you can specify a bit about the placement of your machines. You can create sub-networks and ask that two machines go on the same subnet.
B
And rent bigger machines: again, the cloud providers all have different instance sizes, and if you rent those kind of tiny ones that only cost $1 a month, and I know that's not realistic, but I mean, if you rent the cheapest machine, you're going to get the worst variability, because that's the way they make the economics work. If you rent a much bigger model, then the providers are not packing them so tightly into the racks, and you're going to get a bit of a better experience that way.
C
You know, I think that's not really just a networking question, right? The network doesn't really dictate where a workload gets placed; that's an orchestrator, a scheduler, question. And we do hear this, not just from people in the public cloud but from folks with on-premise clouds: they want to be able to land certain workloads in certain racks, and make sure certain workloads end up in the same racks, et cetera.
C
Once that kind of capability, which is capability that would eventually get added to the orchestrators, to the schedulers, is there, it's something that the network providers could leverage, or could even feed back into, at some point in time. But I think that's not a network-specific thing: the scheduler is the thing that decides where workloads go and what resources get attached to them, and it really comes down to how rich, how expressive, your scheduler is.
B
I should just say, we got a comment on the chat: somebody's saying we can dictate placement in AWS. So go read up on whatever your cloud provider is, and other cloud providers are available: whatever your provider is, go ask them, go check out their docs and see what they offer in terms of requesting particular placement of your VMs.
A
Right, thank you very much. That was the last question. The final thing, well, actually not the final thing, the penultimate thing I'd like to say, is that we have another cloud native webinar coming up on February 23rd. It's going to be led by Alexis Richardson, who is the chairperson of the Technical Oversight Committee at the CNCF, and he's going to be giving a talk whose title says it all: what is cloud native, and why does it exist?