Cloud Native Live: Leveling Up Kubernetes with kube-vip
A: Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, director of open source at Aqua Security; I'm also a CNCF Cloud Native Ambassador, and I'll be hosting today's show. So this is Cloud Native Live. It's a weekly show: every week we bring a new set of presenters to showcase how to work with cloud native technologies.
They will build things, they will break things, and they will answer your questions. It's every Wednesday at 11:00 a.m. Eastern time, and this week we have Dan Finneran, who's going to talk to us about kube-vip; we'll hear about it shortly. Just a quick reminder that KubeCon + CloudNativeCon Europe has just ended, and the videos are up on YouTube for your on-demand consumption, so you can go ahead and binge on that. And a quick reminder before we get started: this is an official livestream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, just be respectful of each other and let's have fun. And with that, I'll hand it over to Dan to introduce himself.
B: I'm part of the DevRel engineering team at Equinix Metal, and I spend a lot of my time mainly focused on things like Kubernetes on bare metal, working on projects to facilitate getting operating systems and Kubernetes onto physical machines, and also on what I'm here to talk about today: HA control planes and service load balancers inside Kubernetes, in your on-prem environments, bare metal, edge, and things like that. The kube-vip project has come out of the experiences that I've had and the problems that I've faced. Prior to Equinix Metal I was at Heptio, acquired by VMware, where I was helping...
A: Nice, sounds like fun, absolutely. So why don't you tell us a little bit about what kube-vip is?
B: Yeah, absolutely. Kube-vip is a project that has evolved to fill some of the areas that I found lacking, to a certain degree. I'll cover this in a little bit of detail later on, but it came out of working with customers to get Kubernetes clusters deployed.
There is a lot of tooling out there that people can use for that. However, when we start looking at lifecycle management, automation, and things like that, that's where I've generally found a number of issues that I think there are probably ways to improve upon.
So, kube-vip: I think I open sourced it about a year or so ago.
Initially it was just a few lines of Go to make my life easier, and as the months have gone by and people have found it more useful, it's made its way into things like some of the Cluster API providers, and there are quite a number of end users who have used it to stand up Kubernetes clusters with that kind of functionality.
That means their clusters are highly available in the event of node failures and things like that. But as time has gone on, I've realized that the technologies I implemented for that particular use case could be used for other scenarios as well. So, not pivoting, but extending that functionality to things like service type load balancers inside Kubernetes clusters; and with that, there's a bunch of additional technologies and techniques that we needed to apply in order to do that.
So I guess with that I can start stepping through what I'm going to talk to everybody about today, if that's okay.

A: Yeah, please do.

B: All right, excellent. So, as mentioned, today we're going to be talking about kube-vip. It's a bit of a two-phase overview: it will step through how I got to kube-vip, and it will talk about the technologies that underpin kube-vip itself.
So this is the agenda for what I'm going to be covering today. I've also got a Kubernetes cluster stood up in the background, so we can quickly expose some things and see VIPs appear and things like that. We're going to touch on the inception of kube-vip: what I was doing when I suddenly realized that maybe there's a better way of doing these sorts of things.
The architecture section will touch on some of the core bits that power kube-vip. We'll have a quick overview of how it provides the highly available control plane, we'll talk about how it's used to provide service type load balancers within Kubernetes, and then we'll briefly cover the roadmap: where people in the community want to take it next, and some of the functionality that end users have been asking for.
So if anybody has any questions around kube-vip or anything I talk about as I'm going through, please raise them and we can delve into it. So: the inception of kube-vip. As I mentioned earlier, I was doing a lot of work around bare metal Kubernetes clusters. There are a lot of different ways that one can get a bare metal Kubernetes cluster deployed, and a lot of tooling is normally involved in order to do that: DHCP, TFTP, etc.
So I spent a lot of time in the periphery, writing software to automate bare metal provisioning. This was, or rather still is, a project called Plunder, and the idea behind it was to simplify getting bare metal Kubernetes clusters deployed.
It was focused on automating all of the steps of getting an OS deployed and then getting Kubernetes stood up on top of all of that. Where I was at the time, there was a lot of conversation around Cluster API. For those who don't know, Cluster API is a project to effectively standardize the technique of getting Kubernetes clusters deployed.
So you have things like a Cluster API provider for AWS, a Cluster API provider for Google Cloud, one for vSphere, et cetera. I had most of the bits in place to get Kubernetes deployed on bare metal, so the next step, I thought, was that maybe I could write a Cluster API provider for this project I'd written. That's largely where I started to hit a number of problems, in that it's not very easy to automate.
The Cluster API provider would stand up the nodes to a certain degree, but I started to realize that a lot of bits were missing, and a lot of bits were hard to automate. So what am I talking about? Well, typically, from a thousand-yard view, if you look at a Kubernetes cluster, you interact with the control plane, but effectively...
...you don't tend to care too much about the control plane. It's mainly about firing things into it and having the workers that sit underneath manage all of those workloads. Now, if you lose that control plane under any sort of circumstances, your workloads may continue to run, but at that point you can no longer do any work with that cluster. You can't make any changes to it until either that control plane is fixed or you end up having to rebuild your entire cluster.
So to get around that, obviously people want highly available control planes, so that in the event you lose nodes, or you want to do lifecycle management and upgrades of various bits of the control plane, you have that capability without downtime, and without losing the ability to interact with your worker nodes. Typically, most architectures would look like this:
you would have three control plane nodes as part of your highly available Kubernetes cluster, and then a number of other nodes that sit atop that cluster, whose role is to provide highly available access to the control plane nodes that sit beneath them. Now this is, as I mentioned, where I really started to realize...
...that there are probably better ways of doing this. If we just look at this quick architecture diagram, we already have two additional nodes that are required just to sit there and provide that functionality. In a physical environment, two physical servers can be quite expensive, and in theory...
...these additional nodes are costing money and burning electricity while not really doing a great deal of work to provide that functionality. Furthermore, if we start to look at what's inside this layer that provides the highly available access to the control plane, we need two things. We need a clustering technology that provides the highly available control plane address and can move that IP address around in the event that this layer changes for whatever reason; and underneath that, we need the capability of load balancing traffic to the control plane nodes that sit beneath it.
So if we start to think about that, there are two layers of additional tooling and additional infrastructure actually required. That's a large amount of operational overhead, and it's not just the operational overhead of these machines: it's the operational overhead of their operating systems, and then of the technologies that need to sit within that layer as well.
You need all the operational knowledge of how the tooling works, how to architect it, install it, and design it, and that goes for each of those layers. There's separate configuration as well: different configuration files for the clustering part that moves an IP address around, another set of configuration for the load balancing part, and so on. That incurs all of that sort of debt, and then there's also the lifecycle management of it all.
If I want to upgrade those various pieces or move things around, there's just a lot of surface area at this point, which was exactly the thing I was hitting.
So this is where the genesis of kube-vip comes from. It became apparent that there must be an easier way: that I could perhaps reimplement all of this in a much simpler, more cloud-native way that sits more nicely with the Kubernetes cluster I'm trying to provide this functionality to. And then, taking it a little bit further...
...as I mentioned, once I'd implemented some of these technologies, it also became apparent that the same sort of thing could be exposed to other areas of the Kubernetes cluster. In typical on-premises environments, a lot of the technologies and functionality don't come out of the box. So as I deploy my pods and things like that...
...additional technologies are required to expose those pods to the outside world, typically, as I mentioned, through the Kubernetes service of type LoadBalancer. It became apparent that kube-vip already had those technologies in place; it was just a case of marrying up the capability of speaking Kubernetes services with the technologies kube-vip already had, at which point I had all the bits there to do that.
A: So cloud users are usually accustomed to being able to easily provision a Kubernetes service of type LoadBalancer and make it external, and the cloud machinery takes care of provisioning an actual load balancer in the cloud provider and redirecting the traffic and everything. So your goal is basically to bring this to the people who don't use a cloud provider. Is that fair?
B: Absolutely, yes. A lot of this was really down to the sorts of people I was fortunate to work with. They were looking to deploy Kubernetes in data centers that had no internet access; they wanted full management of everything, so running things in public clouds was not really an option for them. I went to work with this customer, and they sat me down and basically said: we want this...
B
We
kind
of
want
it
by
the
end
of
the
week
and
then
they
just
kind
of
disappeared.
So
I
effectively
kind
of
was
left
with
you
know,
kind
of
a
week
to
to
build
them
a
kubernetes
cluster
and
kind
of
start
to
implement
all
this
functionality.
And from that, I realized I could automate various bits of it, but there were probably better ways of doing it, and in on-prem environments a lot of the tooling that you get in the cloud isn't there by default. I'm going to talk a little bit about what a CCM is later on, but that is the secret sauce that makes a Kubernetes cluster able to speak to the infrastructure it's running on.
In AWS, when you request a load balancer, it's the CCM, which is AWS-specific, that does the magic, and you get an IP address from somewhere; you don't need to care where, things are just exposed to you. It's a little bit more difficult when you want to do those things yourself. So I'm hoping this project will be, well, it already seems to be, making people's lives a little bit easier from that perspective.
B: It's an entirely separate project. It was just that, when I was doing all the Plunder work, kube-vip and what it needed to do were all part of that. As of last week, kube-vip has been moved to its own organization, so there's now github.com/kube-vip, and the kube-vip project is in there; the kube-vip cloud controller lives in there as well. So there's a clear demarcation, and the two are not dependent on each other at all.
A: Thanks for clarifying.

B: No problem. So, what are the technologies that actually power kube-vip, so that it can do HA and expose things to the outside world? We'll quickly step through these. Originally, the grand plan was for kube-vip to use a technology called Raft.
Raft is a clustering technology that powers things inside Kubernetes: the etcd data store, which holds all the persistent data about what lives inside your Kubernetes cluster, uses this algorithm. The main function of the algorithm is to select, or vote for, one of the members of the cluster to be the leader. So effectively, with the Raft algorithm, you join your nodes together and they start voting against one another.
One of them will be elected leader, and that leader can provide services or do whatever it needs to do as the leader. This algorithm was needed because, when I want to do HA, something needs to be in charge of providing the HA control plane address to the outside world; so I originally went with Raft. Unfortunately, as we started to work with some of the Cluster API providers, the lifecycle management just didn't really work when we removed members from the cluster.
Often the member being removed would be the leader, at which point there was no virtual IP, no cluster address, and you couldn't connect to the cluster. Ultimately, it seemed like a good idea at the time, but it was a bad design decision. However, luckily, the Kubernetes API provides an alternative technology, and, surprisingly, it's called leader election. We can actually ask the Kubernetes API to choose a leader for us.
Effectively, the way it works is that we can have this code running in a number of different pods, or in a number of different areas, and through the API they can all say: I want to hold this lease. Whoever holds the lease is the leader.
The Kubernetes API will only allow one of those requests to succeed. What that means is that all of your nodes will try to get access to the lease, but only one of them can actually hold it at any one time. Once the leader election has occurred and one of the bits of code has acquired the lease, it is now the leader, and from kube-vip's perspective that leader now exposes the control plane address and ensures that traffic can hit the Kubernetes cluster
that's up and running. The lease can be lost in a number of ways: if that node were to go away, or if that node were to hit issues, say high throughput making things quite laggy, then the lease can time out. One of the other nodes will acquire the lease in the meantime and take over the duties of being the leader in the cluster. So that technology was just sitting there for us to make use of.
A: That's super cool. I mean, just learning from this stream that Kubernetes has a little election system built in.
B: The code behind it is all available in the client-go library, and it's beautifully simple, in that it effectively just requires a Kubernetes token to speak to the API endpoint. You can have this code running three times, and each copy will try to get access to that lease; the one that gets the lease is then told it can do something at that point. So it's just there, fantastically, for you to make use of.
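(To make that concrete, here is a minimal sketch of Kubernetes leader election with client-go, in the spirit of what's described above; the lock name, namespace, and timings are illustrative assumptions, not kube-vip's actual configuration.)

```go
// Minimal leader election via a Kubernetes Lease, using client-go.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // uses the pod's service-account token
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // each candidate needs a unique identity

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "vip-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // how long a held lease stays valid
		RenewDeadline:   10 * time.Second, // leader must renew within this window
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// We hold the lease: this is where a VIP would be bound and advertised.
				log.Println("became leader, advertising VIP")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Lease lost or expired: withdraw the VIP so another node takes over.
				log.Println("lost leadership, withdrawing VIP")
			},
		},
	})
}
```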
A: I just want to add another comment for the people asking questions in the chat: we see your questions. If you could just clarify them a little bit, then I can pitch them to Dan. So, just a comment to the people online. Thank you very much.
B: So, a couple of networking technologies. This might sound like we're going completely off radar, but you need to understand these technologies to follow what happens. There are two technologies we're going to focus on: one is called ARP, and one is called BGP. The reason these two are important is that when the node running the kube-vip components is elected leader, it needs to inform the network that it is the node that has the cluster IP address.
There are two ways we can go about updating a network. The first is ARP, the Address Resolution Protocol. Effectively, ARP is used to map an IP address to a physical piece of infrastructure. So, for instance, in the diagram on the left...
...if I want to send traffic between two physical machines, traffic doesn't necessarily go IP address to IP address; it needs to traverse a different layer. It's actually going to go through the layer of the networking cards, the Ethernet that sits underneath, so ARP effectively allows us to look up the hardware address that is linked to an IP address.
As mentioned, if I wanted to send traffic from .22 to .33, I would need to know the physical address of .33 in order for the two networking cards to send that traffic to one another. Now, why is this important? This is how we can effectively let a network know that, if traffic needs to be sent to an IP address, this is the machine it needs to go to.
So if we backtrack a little: when a node is elected leader, that node takes on the cluster IP address. It will then do an ARP broadcast, which tells the network that if it needs to send traffic to this particular Kubernetes control plane IP address, it should send it to this physical piece of infrastructure.
This is what's called layer 2, but it's effectively the linking of a logical IP address to the physical machine that traffic should actually be sent to, and it's the most common way, I would say, that you can let a network know where traffic should now be sent.
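(For illustration only, here is a rough sketch of that gratuitous ARP broadcast in Go, using the third-party github.com/mdlayher/arp package. kube-vip ships its own ARP handling internally; the interface name and VIP below are assumptions, and sending raw ARP frames needs elevated privileges.)

```go
// Broadcast a gratuitous ARP: an unsolicited reply claiming the VIP.
package main

import (
	"log"
	"net"
	"net/netip"

	"github.com/mdlayher/arp"
)

func main() {
	ifi, err := net.InterfaceByName("eth0") // assumed interface name
	if err != nil {
		log.Fatal(err)
	}
	c, err := arp.Dial(ifi) // raw ARP socket on that interface
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	vip := netip.MustParseAddr("10.0.2.5") // the control-plane VIP from the slides
	broadcast := net.HardwareAddr{0xff, 0xff, 0xff, 0xff, 0xff, 0xff}

	// In a gratuitous ARP, sender and target IP are both the VIP and the
	// sender hardware address is this machine's NIC; hosts and switches
	// update their ARP caches to point the VIP at us.
	pkt, err := arp.NewPacket(arp.OperationReply, ifi.HardwareAddr, vip, broadcast, vip)
	if err != nil {
		log.Fatal(err)
	}
	if err := c.WriteTo(pkt, broadcast); err != nil {
		log.Fatal(err)
	}
}
```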
The alternative, which we see in slightly larger networks, is a technology called BGP. BGP allows a device to publish to the network that traffic should be sent to it. Devices participate in a thing called peering. If you look at the diagram on the right, the machine at .21 also has a secondary IP address, 10.0.2.5, and it will peer with either a router or a top-of-rack switch in the network using BGP.
It will tell that piece of infrastructure that, if somebody wants to get to this IP address, they should go through me: I am the route to that traffic. So we can see now that the laptop that wants to get to that IP address has been given that route, and it knows to route traffic through the machine.
We also get some load balancing for free using BGP. So this is the other technology we can use so that the Kubernetes cluster can make the rest of the network aware of where to send traffic when you want to hit the control plane IP address. As a quick comparison: ARP is very simplistic and doesn't require anything special; BGP, however, requires specific infrastructure that supports BGP.
ARP can be dangerous, in that a malicious person could start sending updates that black-hole traffic. For instance, I could send an ARP update that says the gateway is actually this MAC address, which means all of a sudden traffic is going to black-hole, and things like that. BGP, however, can have authentication, and you can impose restrictions on who can do what within that network. Some virtualization software can also restrict gratuitous ARP.
So, in the event that the IP address moves to a different host and we need to tell the network where to send traffic, on things like VMware vSphere the vSwitches would need something like promiscuous mode enabled in order for that to work. That's a quick comparison between the two.
So those are the two technologies we typically use to provide that highly available functionality: either ARP, to say we have moved our highly available IP address to this particular node, or BGP, to advertise where that address now lives.
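(A sketch of the BGP side: advertising a /32 VIP with the gobgp library, github.com/osrg/gobgp, which kube-vip builds on. The ASN, router ID, and peer address are placeholders, and the exact API differs between gobgp major versions; this follows the v3 shape.)

```go
// Announce "to reach 10.0.2.5/32, route through me" to a BGP peer.
package main

import (
	"context"
	"log"

	api "github.com/osrg/gobgp/v3/api"
	"github.com/osrg/gobgp/v3/pkg/server"
	apb "google.golang.org/protobuf/types/known/anypb"
)

func main() {
	ctx := context.Background()
	s := server.NewBgpServer()
	go s.Serve()

	// Start our local BGP speaker (ASN and router ID are made up).
	if err := s.StartBgp(ctx, &api.StartBgpRequest{
		Global: &api.Global{Asn: 64512, RouterId: "10.0.0.21"},
	}); err != nil {
		log.Fatal(err)
	}

	// Peer with the top-of-rack switch / router.
	if err := s.AddPeer(ctx, &api.AddPeerRequest{
		Peer: &api.Peer{Conf: &api.PeerConf{NeighborAddress: "10.0.0.1", PeerAsn: 64512}},
	}); err != nil {
		log.Fatal(err)
	}

	// Advertise the VIP as a /32 with ourselves as next hop.
	nlri, _ := apb.New(&api.IPAddressPrefix{Prefix: "10.0.2.5", PrefixLen: 32})
	origin, _ := apb.New(&api.OriginAttribute{Origin: 0})
	nextHop, _ := apb.New(&api.NextHopAttribute{NextHop: "10.0.0.21"})
	if _, err := s.AddPath(ctx, &api.AddPathRequest{
		Path: &api.Path{Nlri: nlri, Pattrs: []*apb.Any{origin, nextHop}},
	}); err != nil {
		log.Fatal(err)
	}
	select {} // keep advertising until the process exits
}
```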
There are two methods we can use to get kube-vip deployed inside a Kubernetes cluster: either using static pods, or through DaemonSets. Both of them come with quirks you need to be aware of in terms of how best to get it deployed, so I'll quickly step through them, and we'll see if any questions have popped up.
This is where I hit upon another kind of strange scenario, in that I wanted to use kubeadm to stand up my Kubernetes cluster, and I wanted to use kubeadm to say to the Kubernetes cluster: this is the control plane IP address, this is the highly available IP address you should use.
However, there's an issue there: in order to get kube-vip deployed, I need a cluster running, so that I can do a kubectl apply, stand the kube-vip pods up, and have them do leader election and advertise that address to the outside world. But how can I deploy to a cluster before there is actually a cluster in place? So I'm in a scenario where I can't get a highly available IP address, because I can't stand the cluster up, because the highly available IP address doesn't exist yet. It turns out there's a way around this. This is how kubeadm init works: effectively, kubeadm init will generate a bunch of static manifests inside /etc/kubernetes/manifests, and then the kubelet, the process that manages pods on a host, will start up all of those components.
It turns out the solution was relatively straightforward: kube-vip can actually generate that manifest for us and put it in the /etc/kubernetes/manifests directory. So now, with that manifest already there, when I do a kubeadm init with that control plane IP address, the kubelet will stand everything up for us, and it will also stand kube-vip up at the same time, which means all the control plane components will start, and kube-vip will start right next to them.
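(A hedged sketch of that static-pod trick in Go: write a kube-vip pod manifest into /etc/kubernetes/manifests before running kubeadm init, so the kubelet starts kube-vip alongside the control plane components. In practice kube-vip itself can generate this manifest for you; the image tag, interface, VIP, and environment variables below are illustrative, not a definitive configuration.)

```go
// Generate a static pod manifest for kube-vip and drop it where the kubelet
// will find it before kubeadm init runs.
package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // the VIP must live on the node's real interface
			Containers: []corev1.Container{{
				Name:  "kube-vip",
				Image: "ghcr.io/kube-vip/kube-vip:v0.6.0", // placeholder tag
				Args:  []string{"manager"},
				Env: []corev1.EnvVar{ // illustrative settings
					{Name: "vip_interface", Value: "eth0"},
					{Name: "vip_address", Value: "10.0.2.5"},
					{Name: "vip_arp", Value: "true"},
					{Name: "cp_enable", Value: "true"},
				},
			}},
		},
	}
	b, err := yaml.Marshal(pod)
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet watches this directory and starts any pod manifest it finds.
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", b, 0o600); err != nil {
		log.Fatal(err)
	}
}
```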
Kube-vip starts, that highly available IP address appears, kubeadm init can see it, and everything completes correctly. So we've stood up a Kubernetes cluster with a highly available endpoint actually up and running. The next steps are to add in your additional control plane nodes and add in those static pod manifests, at which point you have a highly available Kubernetes control plane. With a DaemonSet we have a much simpler deployment method; however, it isn't possible with kubeadm.
A DaemonSet is more of a deployment method you can use with K3s. K3s allows us to stand up a cluster without an endpoint to begin with. If we look at the second line of the command I'm showing there, we have the TLS SAN: that's our endpoint, and what it actually means is that K3s will come up with that IP address as part of its certificates. So when we stand up our HA control plane, we won't get any certificate errors moving forward. So we start our first node using K3s.
That will stand up everything we need it to. We can then apply our DaemonSet with kube-vip in it; kube-vip will start, do that leader election, and start advertising that 10.0.2.5 address. Then, as we join our additional control plane nodes, because it's a DaemonSet, kube-vip will just move and grow and deploy itself automatically to those nodes.
So as we change our control plane, maybe delete node one and upgrade it with a newer version, kube-vip will just keep moving around and providing that functionality for us. That's how you actually deploy it, in either DaemonSet mode or static pod manifest mode. So what does it actually look like? How does it actually work?
At this point the leader has that 10.0.2.5 IP address. End users connect to it with kubectl, do kubectl applies, etc., and deploy things on those worker nodes. In the event that that node is removed, or has issues, or crashes, or whatever, the remaining two nodes will suddenly start doing leader election. They will speak to the local API server, and one of those other two nodes will be given the lease. When it has the lease, it will then do that gratuitous ARP...
...letting the rest of the network know: if you want to get to this IP address, send your traffic to this particular node, in this case the hardware address of node three. That's effectively how it moves around. When we bring node one back up into service, it will do a leader election, find that node three is already the leader, and basically just sit and wait until there are new leader election events. With BGP, however, it looks a little bit different.
We have our three nodes up and running, and they will all be peering with a top-of-rack switch. You can see on this diagram they all have that 10.0.2.5 address, which is our control plane address, but they don't expose it to the outside world. In order for the BGP technology to work, we bind that IP address to the local host, an internal address that isn't accessible on the network. But effectively, when all of these nodes are peering...
...if any device wants to connect to that control plane IP address, the traffic goes through that top-of-rack switch or router, which takes care of sending it to any of the nodes that are part of the peering group. So all three control plane nodes are peering with the top-of-rack switch, and they are all saying to it:
if you want to get to this IP address, send the traffic through me. That's effectively how it works with BGP. In the event that we lose any of those nodes, the BGP peering stops, at which point the path no longer exists at the top-of-rack switch, and traffic will just be sent to the remaining peers that are advertising that address.
So that's how the control plane looks to the networking topology. We don't necessarily need to use leader election with BGP: all nodes can keep advertising that IP address. It's only with ARP that just one of the nodes can say "send traffic to me". If you had all of the nodes saying "send traffic to me", you would end up in a position where things break.
A: On that last point you mentioned: with ARP it was required to do leader election, but with BGP it wasn't. Can you just explain that again? What's the difference?
B: Sure, absolutely. With ARP, we are effectively telling the network: to send traffic to this IP address, send it to this physical piece of hardware. If we had all three nodes advertising the same IP address but to different hardware addresses, then for that MAC-to-IP mapping we'd start sending traffic to node one, but then node two or node three would have told the network that they should be getting that traffic, at which point you're going to get broken connections and things like that. However, with BGP, once a connection is established, it lasts the lifetime of the connection, handled by the router or the top-of-rack switch that supports BGP.
A: Okay, so it's because of the connection semantics, which don't exist with ARP?
B: Yeah, ARP identifies things almost on the physical layer, to a certain degree. We have a quick question: is a proxy running on all control nodes? It is not, no; we don't need to do that. The kube-vip pods can either send traffic directly to the local API server, so if you're the leader you can send traffic straight to the API server running on that node, or, since kube-vip also supports HTTP round-robin load balancing, effectively send traffic to one of the other nodes.
Cool. So we've talked a little bit about the HA part of it; that was the original goal for kube-vip. It became apparent that once I had those technologies implemented for the control plane, I could also use the same thing for Kubernetes services. One thing to be aware of: there are two components actually required for that functionality. You mentioned it before: the CCM, or cloud controller manager.
The CCM is normally specific to an infrastructure provider, and then, once the CCM has done its magic, we need something to provide the networking magic; in this instance that's kube-vip. So, the CCM. The CCM is the secret sauce when running a Kubernetes cluster on, effectively, other people's infrastructure, also known as the public cloud: for instance, the CCM for AWS, or Google Cloud, etc.
It is effectively a translation layer between a Kubernetes object and its counterpart within that infrastructure. So if I do a kubectl expose, it is the role of the CCM, for instance in AWS, to request an Elastic IP address for you and update the Kubernetes object with that information; same with Google Cloud, or Azure, or wherever you're doing those sorts of things.
A CCM for your own infrastructure, however, needs to be very flexible, because most people's infrastructures are completely different. It needs to be quite configurable for different types of networking design and networking ranges.
One other thing I'm looking at doing is being able to plug into things like existing IPAM or other infrastructure management tooling, so that effectively, when we request a load balancer service IP address, we can speak to other things in a person's infrastructure to get that information for us.
So how does it all hang together? Well, the CCM typically has one main role. For instance, I do a kubectl expose of a deployment called nginx, and what we actually get from that, to begin with, is a Kubernetes service, and it would look like this. We can see that one part of it hasn't really been filled in yet: the load balancer IP address is going to be blank to begin with, and effectively...
...it is the CCM's role to update that object with information specific to the infrastructure. So again, in AWS, the spec.loadBalancerIP would be updated to an EIP address; the CCM's role there is to speak to the AWS API and populate the information required for that service to make sense.
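(A toy sketch of that CCM role: hand out an address from a range and write it back onto any pending LoadBalancer Service. Real CCMs do far more; the namespace, address range, and one-shot allocation strategy here are made up for illustration.)

```go
// Fill in spec.loadBalancerIP for LoadBalancer services that don't have one.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	svcs, err := client.CoreV1().Services("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	next := 200 // e.g. hand out 192.168.1.200 through 192.168.1.220
	for i := range svcs.Items {
		svc := &svcs.Items[i]
		if svc.Spec.Type != corev1.ServiceTypeLoadBalancer || svc.Spec.LoadBalancerIP != "" {
			continue // not ours, or already allocated
		}
		svc.Spec.LoadBalancerIP = fmt.Sprintf("192.168.1.%d", next)
		next++
		if _, err := client.CoreV1().Services(svc.Namespace).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Printf("assigned %s to %s", svc.Spec.LoadBalancerIP, svc.Name)
	}
}
```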
If we think about our own infrastructure, kube-vip has its own CCM that we can give network ranges to. We can give it either CIDR ranges or a start range and an end range, and things like that, and the CCM will use that to populate this spec for our environment. For instance, at home I have my CCM configured to give out IP addresses from .200 to .220.
So it has 20 addresses that I can use, and if I do a kubectl expose, my local CCM will keep track of those and update the spec. One thing to be aware of here is that we're not using ConfigMaps or anything like that; we're sticking with Kubernetes objects directly, so we don't need anything special here: it's tightly coupled to Kubernetes objects. The good thing is that any CCM can replace the one I have written.
So, for instance, if we look at something like Equinix Metal: their CCM needs to speak to the Equinix Metal API and get me an elastic IP address, and then it only needs to populate this Kubernetes object with that information.
There are two ways you may want to get kube-vip deployed on your worker fleet. You can either deploy it as a DaemonSet, so it will be everywhere, or alternatively you may want it to be a ReplicaSet, or tied to specific nodes.
B
The
reason
being
you
may
want
to
ensure
that
traffic
is
only
coming
into
certain
parts
of
your
infrastructure
and
things
like
that.
However,
once
you
have
some
cube
vip
pods
deployed,
they
will
effectively
then
take
care
of
advertising
these
services.
So
how
does
it?
How
does
it
actually
work?
Well,
once
you
have
your
cube
vip
pods
deployed,
they
will
watch
services
of
type
load
balancer.
So we can see here, we've just done that kubectl expose. The kube-vip pods are all watching, and they've seen that a new LoadBalancer service has been created. Once the CCM, whichever CCM it is, has updated that load balancer IP address...
...it's at that point that the kube-vip pod can go ahead and advertise that IP address to the network, to the outside world, at which point any end user coming in can send traffic into the Kubernetes cluster, and kube-proxy will take care of passing that traffic to the pods in that service. That's effectively the crux of it: it's the same technologies that provide the control plane HA, just turned on their head a little bit.
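(A minimal sketch of that watch loop with a client-go informer: react when a Service of type LoadBalancer has an address, then advertise it. The advertise step here is just a log line standing in for the ARP or BGP announcement.)

```go
// Watch Services and react when a LoadBalancer address is populated.
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	informer := factory.Core().V1().Services().Informer()

	handle := func(obj interface{}) {
		svc, ok := obj.(*corev1.Service)
		if !ok || svc.Spec.Type != corev1.ServiceTypeLoadBalancer {
			return // only services of type LoadBalancer are interesting
		}
		if ip := svc.Spec.LoadBalancerIP; ip != "" {
			// In kube-vip this is where the VIP would be bound to an interface
			// and announced via gratuitous ARP or a BGP path.
			log.Printf("service %s/%s: advertising %s", svc.Namespace, svc.Name, ip)
		}
	}
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    handle,
		UpdateFunc: func(_, newObj interface{}) { handle(newObj) },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching
}
```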
One other thing is that it can also work in a kind of hybrid mode. We saw a number of end users who wanted to have small Kubernetes clusters, or just to have traffic come in through their control plane nodes. So what we can actually do is have the control plane and service type load balancers all sitting together, and in the event that we expose something, it does exactly the same thing.
The pods sitting on the control plane nodes advertise this service IP address to the network as well, either through BGP or ARP: I'm also exposing this, send that traffic to me, and I will then send it through kube-proxy internally to the services that are actually running. Another thing that has been added recently, mainly for edge environments, is the capability...
...for when I don't want to have IPAM and don't want to worry about IP address ranges. What we can do here, if we look in the left-hand corner: I'm doing a kubectl expose and specifying that the load balancer IP address is 0.0.0.0, which isn't really a properly valid IP address.
However, what actually happens here is that when kube-vip sees that that is the IP address given to that particular service, the kube-vip pod itself (and this will only work with ARP, because it needs one node to be the actual leader) will do a DHCP request on the network it's on. It will get an IP address, and, as I mentioned, this is normally for an edge environment, and that IP address will then be used as the service IP address for the service I'm exposing. So, coming to the end...
The roadmap: there's a lot being added recently. As somebody asked, there are no proxies required; as I mentioned, kube-vip collapses all of those different technologies into a single place, making it easy to manage. We're looking at improved control plane load balancing through things like IPVS, and at distributed load balancing: as shown on some previous slides, only one node, chosen through leader election, is allowed to do ARP broadcasts to the network.
B
We're
looking
at
having
effectively
a
leader
election
per
load
per
service
ip
address,
so
that
would
start
to
distribute
those
across
all
of
the
cubic
pods
that
you've
deployed
and
then
enhancing
bgp
so
being
able
to
share
those
routes
further
afield
in
the
network,
a
lot
of
improvements
in
observability
and
monitoring
and
then
vastly
improved
documentation.
So, fingers crossed, it gets accepted there. I'm just grateful for all the support that I've had so far. So yeah, thank you very much. That was a quick overview of all of the different technologies. I know there's a lot to cover: there are networking technologies, Kubernetes technologies, clustering; there's a lot covered there, but thank you very much to everybody who's stuck with me through that.
B: Yes, so Raft requires an odd number of members due to the voting algorithm. With leader election, however, there is no odd-or-even requirement: it's effectively whichever member has managed to get the lease from the Kubernetes API.
A: Thanks. Another question about what kind of resources this requires from the control plane servers, or, I guess the question is more about the controller that is kube-vip.
B: Sure. Yeah, kube-vip is actually very small; it requires barely any resources. I think the general cap is something like 100 MB on the kube-vip pod, but it tends to use way less than that.
As I mentioned, there are a lot of technologies involved, but it's very simple in terms of how it all hangs together and the technologies it actually uses: it sits and watches, and it reacts accordingly. It's quite lightweight, and multi-architecture as well: if you want to run it on Raspberry Pis on Arm, or on big metal x86 servers, the choice is yours.
A: Great. And yes, the recording is available on the CNCF YouTube channel. I see that there are no more questions, so go check out kube-vip at github.com/kube-vip; this was in the previous slide. Thank you, Dan, so much: it was a fascinating deep dive into distributed computing, networking, and Kubernetes. I'll see you again next week, Wednesday at 11 Eastern time, every week, on CNCF Cloud Native Live.