From YouTube: Cloud Tech Thursdays: MetalLB
Description
Cloud Tech Thursday explores the full modern open source cloud stack, from hardware to serverless. Learn about new ideas, projects, and releases around Kubernetes, OpenStack, hybrid cloud enablement, and many other topics.
B: Good morning, good afternoon, good evening, wherever you're hailing from, in the words of Chris Short. I'm not Chris Short; I'm Bobby Kessler, Chris's intern, filling in today. Chris is having internet issues, but he will be back later on OpenShift TV. This is another edition of Cloud Tech Thursdays, so without further ado, I'm just going to pass it over to Josh and Amy to introduce our guest today.
C: Today we are very excited to welcome two of the people behind the MetalLB project, who are going to introduce you to the project and explain what it does: Mark Curry and Russell Bryant. Before we get started, though, a couple of things. One is, if you have questions for either of our presenters, feel free to ask them in chat, and that would be YouTube chat or Twitch chat, depending on how you are tuning in.
C
We
will
see
those
questions
and
we
will
share
them
with
the
presenters.
So,
with
that
mark
russell,
you
want
to
get
started.
D: Sure, I'll go first. My name is Mark Curry and I'm an OpenShift product manager responsible for networking, and part of that is where this comes from. So maybe the first question that could be asked is: why MetalLB? Why are we here talking about this? How is this important or relevant for OpenShift?
E: Yeah, cool, thanks a lot, Mark, and thank you all for having me. My name is Russell Bryant. I work for Red Hat in engineering, where I serve as an architect, and some of my big interest areas are bare metal, or on-premise environments, and also networking. I've worked with Mark for a number of years on different things, and the intersection of those two areas is what brought me to MetalLB.
E
It
brought
me
to
looking
at
the
problem
space
and
evaluating
choices
and
then
deciding
you
know
what
metal
of
me
looks
like
a
great
solution
to
the
to
some
of
our
to
the
problem
I
wanted
to
solve
and
getting
involved.
E: So if you're using OpenShift or Kubernetes, one of the things you can create is a service of type LoadBalancer, a key, fundamental feature of Kubernetes. Now, in a cloud environment, what that's going to do behind the scenes is create a load balancer through a cloud API. Mark mentioned this before, but what about bare metal? There's no cloud API. There's nothing you can do to magically bring up a load balancer.
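For reference, the kind of service being described is just a standard Kubernetes Service with type LoadBalancer. A minimal sketch follows; the names, labels, and ports are illustrative, not taken from the talk.

```yaml
# A minimal Service of type LoadBalancer (names and ports are illustrative).
# On a cloud, the cloud provider fills in the load balancer address behind
# the scenes; on bare metal, something like MetalLB has to do that instead.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```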
E: So how can we replicate that experience, that sort of functionality, in a bare metal cluster? There are different ways we could do it. MetalLB is one solution, and one that I think is a pretty good solution. So how does it do it? I'd say one of the key architectural principles of MetalLB is that it's a network-based solution. It's not about spinning up a bunch of new software, and it's not spinning up a software load balancer; it's about how we can provide the right behavior by actually interacting with the network itself, and I'll talk a little bit more in a minute about what that means. It operates in two modes: it has a layer 2 mode and a BGP mode.
E: Now, before I get into those two modes, there's the architecture of MetalLB. It has two components. One is the controller, and it is just a singleton component; one of these runs in a cluster. What this controller does is watch for when a user of the cluster creates a service of type LoadBalancer, and when that happens, it allocates an IP address to it. That's really the main thing it does: it's IPAM, or IP address management. So the first thing it does for the service is assign an IP address. Now, this IP address has to come from somewhere: part of how you set up MetalLB is that you allocate a set of IP addresses that are usable on the network that the cluster operates on.
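As an illustration of that setup step, here is a rough sketch of a layer 2 address pool in the legacy ConfigMap format the upstream MetalLB documentation used around this time. The pool name and the address range are placeholders; the exact schema for the version you run should come from the upstream docs.

```yaml
# Illustrative example: a MetalLB address pool in the legacy ConfigMap format.
# The range below must be usable on the network the cluster sits on.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250
```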
E: And then the real core of what MetalLB does is the other component, which is called the speaker. The speaker is a component that runs on every node of the cluster, so it runs as a DaemonSet, in Kubernetes terminology. This is what, on a per-node basis, is responsible for speaking to the network, telling the network about these IP addresses and where those IP addresses can be reached in the cluster.
E
So,
let's
to
dig
another
layer
deeper,
so
I've
created
a
service
of
type
load
balancer.
If
this
is,
if
you're
running
metal
will
be
in
layer
two
mode,
an
iep
address
will
be
allocated
to
that
load,
balancer
and
then
the
speaker
will
use
either
arp
if
this
is
an
ipv,
ib,
ipv4
network
or
ndp.
So
that's
address,
resolution
protocol
or
neighbor
discovery
protocol,
we're
kind
of
getting
into
the
network
protocol
level.
E
Details
here
to
announce
the
location
of
that
ip
address,
so
we've
got
the
load
balancer
ip
address,
and
now
we're
going
to
we're
going
to
metal
lb,
decides
which
node
should
receive
the
traffic
for
that
load
balancer
and
it's
going
to
announce
its
location
to
the
network
using
one
of
those
those
mechanisms
these
are.
This
is
core
to
how
networks
work
these
these
these
protocols,
and
so
that's
what
metal
b
does
just
it
just
issues,
gratuitous
arp
or
neighbor
discovery
messages,
and
then
the
network
is
able
to
reach
that
load.
E
Balancer
ip
address
on
the
node,
where
metal
ib
has
announced
its
location.
Now,
one
of
the
so
there's
there's
some
good
things
and
bad
things
about
this
mode,
the
good
one
is
it
it
it's
using
a
network
mode
that
exists
like
on
virtually
every
network
like
this,
it's
very
compatible.
It
works
almost
anywhere.
E
So
that's
that's
kind
of
that's
kind
of
the
good
thing,
and
it's
it's
good
enough
for
some
clusters
or
a
lot
of
clusters.
I
would
say
especially
ones
on
the
smaller
side.
E: Yeah, I speak of bare metal because that's where my head goes, but it's really applicable to any on-premise environment. I would say any environment where you have the same problem, where you're really responsible for the infrastructure the cluster is running on, anything that's not cloud, effectively, it's appropriate.
E
I
would
say
that
where
I'm,
where
we
see
the
most
demand,
for
it
happens
to
be
bare
metal,
but
it
actually
is
applicable
to,
and
I
expect
that
we'll
be
expanding
our
support
for
it
to
other
environments,
so
a
vm
environment
would
absolutely
be
appropriate
it,
I
would
say
maybe
it's
some
vm
environments.
I
mean
give
an
example
of
like
where
it
may
or
may
not
apply
interview.
Environment
like
openstack,
is
a
good
example
where
it's
very
applicable
to
openstack
in
some
scenarios
and
then
not
others.
E
So
if
it's
openstack
and
you're
using
openstacks
like
virtual
networks,
so
the
cluster
runs
on
a
virtual
network
very
similar
to
the
type
of
networking
it
would
be
in
a
cloud.
E
Then
you
know
it
that
it
doesn't
make
as
much
sense,
or
you
know,
maybe
and
and
that
sort
of
environment
you're
going
all
in
on
openstack
itself.
It's
cloud
networking
you'd,
really
look
toward
more,
like
its
load.
Balancer
service
would
be
a
better
architecture.
In
my
opinion
now
openstack
can
also
be
used
in
a
way
where
in
openstack
terminology
they
have
thing
called
provider
networks,
provider
networks
means
you're,
taking
the
vms
and
they're
and
you're
attaching
them
directly
to
let's
say
directly.
But
you
know
from
a
network
connectivity
standpoint.
E
The
vms
are
on
a
physical
network
that
the
environment
has
and
in
that
environment
it's
absolutely
applicable
here
like
then,
the
metal
b
makes
a
lot
of
sense.
So
you
know
it's
it's
like
absolute.
So
to
summarize,
it's
absolutely
applicable
to
bare
metal
clusters.
Vm
clusters
may
be
slash.
Probably
it
kind
of
depends
on
networking
details
whether
this
is
an
appropriate
solution,
but.
E
Well-
and
I
guess
I
in
another
and
I'm
I
spent
a
lot
of-
I
spent
several
years
working
on
openstack
in
the
past,
so
I
have
a
lot
of
background
there.
So
it's
always
very
top
of
mind
for
me.
So.
E
Yeah
and
we'll
come
back
to
that
too,
where,
like
it's
not
so
red
hat,
did
not
start
this
project.
This
is
something
we
got
involved
with,
because
we
saw
the
value
at
it
and
just
sort
of
at
our
core.
E
You
know
we
look
at
what
what
open
source
solutions
solve
the
problems
and
a
lot,
and
quite
often
they
didn't
originate
from
us,
but
we
want
to
join
with
other
people
that
are
trying
to
work
together
to
solve
the
same
problems,
and
this
is
a
case
of
that
where
we've
now
joined
a
community
of
people
that
are
that
that
are
collaborating
on
solving
this
problem.
So
we
didn't
originate
it.
E
We
now
contribute
to
it
and
help
maintain
it
and
it
we
are
going
to
include
it
in
a
red
hat
product
as
a
part
of
open
shift-
and
you
know
mark
alluded
to
that
earlier
and
I
think
that
in
a
bit
we'll
come
back
to
that
and
talk
about
the
sort
of
detailed
road
map
of
where
the
different
features
are
falling
on
our
release
schedule.
But.
D: Yeah, I'd like to jump in on that real quick and just clarify that one of the things we do at Red Hat is, of course, support anything that we ship, and as part of that support we fully test and vet and enterprise-harden and do all the things with an upstream project that we invest a lot in. I've heard the question: will it work with other things? Sure it will, as Russell articulated, but keep in mind that the part we're supporting, at least initially, is going to be for OpenShift bare metal deployments, and with the success of that I'm sure we could grow that footprint of support to other implementations.
C: Okay, now we're getting into some detailed technical questions, so decide whether or not this is going to be covered later. The person who was asking about VMware, Fahad, said, to follow up on that: are you planning to replace the use of keepalived on IPI clusters on-prem, or is the plan for this to coexist with that?
E
It's
a
fantastic
question,
and
so
I
was
actually
talking
to
someone
about
this
this
morning.
So
let
me
answer
it
which
so
the
the
first
answer
is
it's
going
to
coexist
at
first.
Okay,
so
you
know
where
we
use
people
id
is
is
solving
some
very
specific
use
cases
that
are
absolutely
required
for
these
ipi
clusters
to
get
them
to
bring
up
and
function
and
we're
not
going
to
rip
that
out
immediately
now,
as
you
can,
you
can
actually
you
can
tell
by.
E
I
can
tell
that
you
that
you
know
this,
because
you
basically
like
the
question
like
metal
lb
is
performing.
You
know
conceptually
the
same
thing
that
we
were
doing
with
people
id
there.
So
will
we
replace
it,
I
mean,
and
we
could
now
we
haven't
like
we
haven't,
put
that
on
the
roadmap
yet
but
like
we
absolutely
could,
and
it
actually
makes
sense.
There's
some
there's
some
catches
to
that
as
we
get
into
the
details
like
one
of
them
is
the
way
we're
using
people.
E: We would have to make MetalLB present there by default in those scenarios if we're going to use it to replace some of our keepalived usage. So it's a maybe, slash, probably, but we haven't planned exactly when we would do it yet, because the way we're using keepalived works okay for now, so it's not an urgent thing.
E
That
wouldn't
be
considering
it.
So
sorry,
that's
sort
of
a
non-answer
but
great
question
and
absolutely
does
make
technical
sense.
I
just
don't
know
when
we'll
do
it,
video.
E
Yeah
yeah
yeah.
Let
me
tell
you
about
the
second
mode.
The
second
one
is
pretty
cool.
If
you
get,
if
you're
into
this
sort
of
thing,.
E: The second mode is the BGP mode. This one is, I think, really the star of MetalLB. Both modes are effective, but BGP is pretty cool. This is where, again, the speaker runs on every node, and every node then acts as a BGP speaker. So every node is peering with the routing infrastructure in your environment. This can only work if your network environment can do BGP: you have routers that your nodes are connected to that can speak BGP, and the speaker on every node will connect to them. When you create a service of type LoadBalancer, an IP address is allocated to it, and MetalLB will figure out which nodes a route to that IP address should be advertised from.
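To make that peering setup concrete, here is a rough sketch of a BGP-mode configuration in the same legacy ConfigMap format as the earlier example. The peer address, AS numbers, and prefix are placeholders, not values from the talk.

```yaml
# Illustrative example: MetalLB BGP mode in the legacy ConfigMap format.
# Each speaker peers with the router below and advertises routes for
# load balancer IPs drawn from the pool.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1    # the router the nodes are connected to
      peer-asn: 64512
      my-asn: 64513
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 172.16.100.0/24
```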
E
This
is
that
it
will
advertise
a
route
to
that
ip
address
from
multiple
nodes,
maybe
even
all
of
them
kind
of
depends
on
scenarios
I'll
pay
for
that
for
a
second,
but
in
almost
all
cases,
we'll
advertise
routes
for
multiple
nodes
and,
what's
cool
about
that,
is
that
like
then,
you
you?
Could
the
routing
infrastructure
itself
can
provide
load
balancing.
E
So
to
contrast,
this
with
the
layer
two
mode
in
all
the
traffic
is
all
going
to
come
through
one
node
in
the
layer,
two
mode,
because
we
can
only
announce
via
arp
or
ndp
that
ip
address
from
one
node.
So
all
traffic
comes
to
one
node
and
then
it
can
be
load
balanced
within
the
cluster.
E
It
can
sort
of
you
know,
go
through
that
one
node
and
then
be
be
load
balanced
from
there,
but
with
bgp
the
the
bgp
router,
since
it
has
routes
to
this
to
that
load,
balancer
ip
address
on
multiple
nodes
of
the
cluster.
It
can
use
ecmp
or
equal
cost
multipath
routing
to
descend
different
connections
to
different
nodes
in
the
cluster.
So,
like,
what's
cool
is
metal
I'll,
be
it
actually
isn't
doing
a
lot
like?
It's
what
it
does
is.
E
It
connects
to
a
router
and
announces
routes,
but
it,
but
it's
now
enabling
the
network
infrastructure
itself
to
provide
load
balancing
across
the
cluster.
So
it's
it's
quite
powerful
and
again
it's
you
know
just
it's
relying
on
just
good
use
of
existing
network
technologies
to
to
achieve
the
behavior
we
want
and
and
it's
and
it
solves
the
problem
quite
well
without
having
to
create,
maybe
any
not
running
additional.
Well,
not
additional
software.
E
Beyond
the
bgp
like
we're,
not
adding
additional
software,
that's
doing,
processing
of
the
traffic
itself
like
we're,
not
you
know,
adding
a
a
box
somewhere
that
runs
load
balancing.
You
know
we're
using
the
network.
So
that's
really
the
that's
the
second
mode,
and
that's
if
for
the
environments
that
that
can
use
bgp.
This
is
this
quite
powerful
mode.
So
that's
my
that's.
The
second
mode.
F: All right, two questions. This first one might be Mark-related: is there any integration with the OKD project? Is it going to have it initially, or should it be added after setup? So that's kind of a workflow, release-process type question. And the other one, which is most likely more for you: does OpenShift support Kubernetes 1.22.x?
C: That second one might actually be a more general OpenShift question, which is what I'm getting out of it, which means it's not really for the show. I actually answered it in text with the standard answer.
D: Yeah, but I think we could probably say there's a good chance this will track Kubernetes: 4.8 is 1.21, so it'd be 1.22 and then 1.23.
E: Yeah, and I guess one thing I can mention from an open source perspective: MetalLB is not going to be installed by default. So if you install OKD, you will not get MetalLB just out of the box, but you can install it afterwards, and like many things we do in OpenShift or OKD, we use operators, or the operator marketplace, to choose and install additional components.
E
We've
created
a
metal
lb
operator
that
is
now
on
operator
hub
as
of
like
within
the
last
few
weeks.
So
it's
like
very,
very
brand
new
from
an
upstream
perspective.
So,
like
you
can
you
can
actually
go
ahead
and
try
this
out
in
an
unsupported
fashion
and
you
can
use
an
operator
to
get
it
deployed
in
your
cluster.
So
hopefully
that
answers
the.
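For anyone who wants to try that operator path, the flow is roughly: install the MetalLB Operator from OperatorHub, for example via an OLM Subscription, and then create a MetalLB resource so the operator deploys the controller and speakers. The sketch below is hypothetical; the exact channel, catalog source, namespace, and API versions should be taken from the operator's OperatorHub listing rather than from here.

```yaml
# Hypothetical sketch of deploying MetalLB via its operator with OLM.
# Channel/source names are assumptions; namespace and OperatorGroup
# creation are omitted for brevity.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  name: metallb-operator
  channel: stable
  source: community-operators
  sourceNamespace: openshift-marketplace
---
# Once the operator is running, ask it to deploy MetalLB itself.
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
```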
E: Yeah, so I think I completed my high-level overview of the different modes. Mark, since you started talking about the roadmap a little bit, do you want to cover where we officially support the different modes and which releases cover that?
D: So hopefully you get the big picture there. The idea is that today the current default version of OpenShift is 4.7.
D: Very soon we will be releasing OpenShift 4.8, and then OpenShift 4.9 is targeting this fall, and that's when the first mode that Russell discussed, the MetalLB layer 2 mode, will be fully supported.
D
That's
also
when
the
upstream
fr
support
for
bgp
capability
will
be
will
be
resolved
in
preparation
for
the
next
release,
openshift
4.10,
which
is
targeting
currently
the
the
december
january
time
frame
of
this
calendar
year,
and
that
will
that
will
provide
metal
lb
with
bgp
support
and
also
targeting
bgp,
with
dual
stack,
ipv6
capability
support
and
then
building
upon
the
success
of
those
two
primary
use
cases
that
our
customers
have
asked
for
in
the
future.
D
We
will
enhance
that,
and
you
can
see
a
few
of
those
topics
listed
here
and
currently
we'd
be
targeting
openshift
412
onward,
which
would
be
the
latter
half
of
calendar
year
2022.
So
the
latter
half
to
the
next
calendar
year
keep
in
mind.
These
are
target
dates,
but
so
far
everything
is
looking
very
good.
E
Thanks
mark-
and
you
reminded
me
of
some
some
of
the
the
fun
stuff
that
we
are
working
on
the
upstream
project.
So
I
mentioned
you
know,
I'm
a
I'm
a
maintainer,
but
there's
other
people
who
might
have
contributing
to
metal
lb
and
one
of
the
first
things
that
we're
doing
from
a
feature
perspective
in
the
project.
Is
this
fr
support.
E: So FRR is an open source project that implements a BGP daemon, and it's something we already use at Red Hat, so we're interested in it and confident in it, and we want to apply it to this use case. What MetalLB did originally was implement basically its own BGP stack: if you go look at the MetalLB code on GitHub, it has an implementation of the BGP protocol, or at least a minimal implementation of just enough of BGP to perform its use case. That has carried the project well so far, but as we look into the future and the additional use cases, we'd really like to support more BGP features, and we thought it would be advantageous, instead of MetalLB implementing BGP itself, to switch to an existing, more featureful, mature implementation of BGP. So that's what we're doing: we're adding support for MetalLB, instead of speaking BGP itself, to manage, configure, and control an instance of the FRR daemon running alongside it. We feel that's a better base for us to use as we move on with BGP support. Some features we'll get right away just by using FRR, and it gives us a much more flexible base for future features.
E
So
that's
that's
going
pretty
well
and
that's
yeah
I
mean
that's,
that's
really
a
pretty
crucial
base
for
all
the
other
bgp
features
that
you
talked
about
wanted
to
mention
it.
E
A
lot
of
the
other
stuff
we're
doing
is
like
we've
been
working
on
ci
coverage
ci,
you
know
improving
ci,
adding
ci
tests
and
new
ci
jobs
in
the
upstream
project
and
lots
of
general
sort
of
code
reviews
bug
fixing
that
sort
of
thing.
E: You can also just go to the MetalLB upstream website, and it includes instructions for how to deploy it and sample manifests. It also has some OpenShift-specific notes, including the security context settings that are required for the MetalLB namespace.
E
That's
I
don't
remember
the
details
off
top
of
my
head
that
I
know
that
we
have
documented
for
openshift
on
the
website,
and
so
you
know,
as
the
roadmap
sort
of
implied
with
4.8,
we
don't
have
a
an
official
way
to
try
it
like
it's,
not
it's
not
included
in
openshift
officially,
but
you
know
it's
it's
it's
open
source,
it's
a
component
that
you
can
run
under
clusters.
You
know
not
supported
by
by
red
hat,
but
that's
the
way
you
could
do
it.
E: Yeah, FRR is an existing project; the website is frrouting.org, and it's a Linux Foundation project that lots of companies have contributed to. It's actually not new at all.
E
If
you're
in
the
networking
space
you
may
have
heard
of
another
project
called
quagga
and
I
think
in
case
fr
is
originally
based
on
that
from
the
past.
It's
yeah,
it's
it's
a
it's
a
very
mature
existing
project,
but
fr
routing.org,
where
you
can
go,
learn
more
about
that.
It
is
a
very,
very
nice
feature
for
routing
damon.
E
Thanks
and
it's
some
of
the
other
things
that
like
are
really
interesting
about
it,
this
is
not
stuff,
we
have
on
our
road
or
we
have
on
a
detailed
roadmap
yet,
but
just
kind
of
imagining
the
future
of
an
another
example
of
why
it's
powerful.
E
So
I
talked
about
bgp
and
I
talked
about
bgp
because
that's
what
well
that's
where
we
see
the
most
demand
for
from
a
routing
protocols,
first
perspective
and
it's
what
meta-lb
already
supports
you
know
using
its
custom
implementation,
but
there's
also
been
interest,
at
least
in
the
upstream
community
about.
Can
we
support
other
routing
protocols
like
ospf,
for.
E: Absolutely, yeah, that's another thing. Pitching the project a bit: the development community is certainly on the smaller side, which honestly I find incredibly fun; it's a sort of tight-knit small group, so it's really fun to work on. So if you're interested, there's plenty of opportunity for contribution, for sure.
E
More
there's
more
work
to
do
than
than
we
have
maintainers
and
contributors
right
now,
which
is
a
common
story
on
any
open
source
project.
Really,
that's
that
is
seeing
successful
uptick
so.
D
Yeah
and
if
you're
an
openshift
customer
today
and
if
you
have
have
feedback
for
how
it
is
that
we're
implementing
metal
lb,
if
you
have
something
that's
about
it,
that's
very
important
to
you
in
your
use
cases.
Please
communicate
that
to
us.
This
is
the
right
time
to
get
it
into
the
queue
and
we
can
prioritize.
C: Let's wait for the audience to catch up on the first one. So imagine that I actually have a real bare metal situation, right? I have a cage somewhere, and I actually have the money to buy a dedicated piece of network hardware with an API, like a Cisco box or something.
E: For me it's a couple of things. One, it's not necessary: you probably have enough in your network as it is, with your existing router, to do the type of load balancing you need here. And another thing is that load balancing is a little bit of a loaded term. When we talk about load balancing, you can think of it conceptually as trying to balance load across some environment, but it's used at different layers of the networking stack.
E
Three,
only
in
metal
lb's
case-
and
you
know,
a
load
balancing
product
is
probably
providing
a
lot
of
features
that
are
higher
level,
particularly
the
layer,
7
level.
You're
gonna,
maybe
do
some
fancy
load
balancing
based
on
you,
know,
http
like
destinations
that
sort
of
stuff,
so
it's
sort
of
like
a
different
problem
space
and
another
one
is
the
sort
of
the
beauty
of
of
metal
b,
particularly
when
we're
talking
about
the
way
we
do
it
with
bgp.
E
Is
that
we're
not
putting
traffic
through
a
box?
Like
that's
the
point?
You
know
we're
not
we're
not
we're
not
funneling
traffic
through
any
box
or
or
a
couple
of
boxes,
even
we're
we're
using
the
network
infrastructure
to
to
provide
sort
of
ultimate
scale
where
we
can
spread
the
load
directly
from
the
where
traffic
is
coming
in
the
routing
infrastructure
to
across
the
entire
cluster.
You
know
it
doesn't
we're
not
introducing
a
point
where
traffic
goes
through.
E
So
I
guess
summarize
my
answer
in
two
parts,
one
is
a
load
balancing
appliance
of
some
sort-
it's
probably
related
to,
in
most
cases,
related
features
that
aren't
relevant
to
the
use
case
here
and
the
other
is
by
design.
We
don't
want
to
to
push
the
traffic
through
any
sort
of
centralized
points
at
all
we
want
to.
D: Some have pretty extreme configuration requirements, and I would say that if you do have the luxury of having, let's say, a hardware solution at your disposal, and if that solves the gap, then by all means use that. But for all the reasons Russell articulated, the overwhelming majority of our customers are satisfied with the MetalLB solution.
C
Okay,
we
have
a
couple
of
questions
on
chat,
so
in
a
multi-tenant
open
shift
environment
will
there
be
a
way
for
tenants
to
manage
bgp
local
prefs
local
preferences
in
metal,
lb.
E
Good
question:
no:
this
is
a
cluster-wide
thing,
so,
like
the
at
least,
the
way
this
is
set
up
today
is
that
this
is
a
you
know.
This
is
a
cluster
administrative
level
setup
thing
and
you
wouldn't
you
wouldn't
be
able
to
have
multiple
tenants
with
separate
configurations.
E: So yeah, that's a really good question, and actually I'm going to answer it with more than you asked, because this is another place where we can talk about the difference between the two modes and some of the benefits. So in layer 2 mode, first, failover is of course a thing. If a node fails where an IP address was resident, then we have to move it to another node. So there are two parts: how quickly can we detect the failure, and then how quickly can we bring it up on another node once we've detected the failure?
E
It's
pretty
quick
right
right,
because
what
happened
we
have
to
do
is
decide.
Okay,
what's
the
new
node
that
owns
it
and
then
do
what
it
did
before,
which
is
issue
those
gratuitous
arp
messages
out
to
the
network
or
in
dp
for
ipp6
to
say,
hey
network
ip
address
is
over
here
now
from
the
node
so
but
the
trick
is,
of
course
detecting
when
a
failure
has
happened
and
all
of
my
testing
so
far
it's
been
at
worst.
E
You
know
under
10
seconds
the
way
it
is
it's
which
is,
you
know
not
it's
not
the
fastest.
Now
in
a
previous
iteration
previous
version
of
mlb,
it
could
be
minutes
which
was
was
definitely
not
good.
So
it's
improved
to
seconds
the
way
it's
implemented
it's
and
it's
using
a
it's
using
a
library
under
the
hood
called
member
list,
which
is
a
implementation
of
this
gossip
protocol.
E
It's
it's
doing
its
own
sort
of
cluster
membership
protocol
to
for
all
the
speakers
to
to
watch
out
for
each
other
to
be
able
to
detect
node
failure
faster
and
so
at
worst
we're
talking
seconds.
So
it's
absolutely
not
the
right.
If
you
need
sort
of
sub
second
failover
times
layer,
two
will
not
provide
that
and
we
might
be
able
to
tune
the
parameters
of
our
of
our
failure,
detection
to
speed
it
up
but,
like
I
don't
think,
it'll
reach
sub
second
failover
times.
E
Yeah,
so
bgp
is
a
better
start,
a
better,
a
better
will
be
better
for
this.
So
what
with
bgp?
First
of
all,
we
already
have
ip
addresses
like
actively
functional
on
multiple
nodes
in
the
clusters
to
the
nature
of
how
this
operates,
traffic
can
already
be
sent
to
multiple
nodes,
which
first
of
all,
that
means,
if
something
fails,
you're
not
immediately
impacting
all
traffic
to
begin
with
right,
so
it's
you're
impacting
connect.
You
know,
connections
that
happen
to
hit
the
node.
That
fails.
E: The question then becomes: when does the BGP router no longer consider a given node a valid route? And that's one of the features we're going to get out of using FRR. There's another protocol (I'm talking networking here, it's just acronym soup, there are protocols all over the place) called BFD.
E
Okay,
so
bfd
is
a
a
protocol
used
to
help
determine
length
failures
fairly
quickly,
relatively
quickly
than
what
you
would
have
otherwise,
and
so
we'll
use
bfd
on
these
on
these
bgp
connections
to
more
rapidly
determine
when,
when
the
connection
between
our
bgp
speaker
and
a
node
and
the
bgp
speaker
dies
so
that
the
the
bgp
router
will
say,
oh
well,
that
you
know
I
don't.
I
don't
see
that
that
that
node
anymore,
so
it'll
no
longer
route
traffic
there.
So
that
will
be.
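As a hypothetical illustration of how BFD plugs into that, here is a sketch in the same legacy ConfigMap style as the earlier examples: a BFD profile with timing parameters, referenced from a BGP peer. The key names and values are assumptions for illustration; the actual schema should be taken from the MetalLB documentation for the FRR-mode release in use.

```yaml
# Hypothetical sketch only: a BFD profile attached to a BGP peer so the
# router can detect a dead speaker much faster than the default BGP timers.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    bfd-profiles:
    - name: fast-failover
      receive-interval: 300    # milliseconds (illustrative)
      transmit-interval: 300
      detect-multiplier: 3
    peers:
    - peer-address: 10.0.0.1
      peer-asn: 64512
      my-asn: 64513
      bfd-profile: fast-failover
```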
E
That
would
be
quick
and-
and
I
say
quick
because
that
kind
of
depends
on
depends
on
specifics
of
your
routing
infrastructure
and
and
how
we
tuned
vfd.
We
haven't
like
turned
that
on
and
tested
it.
Yet
so
I
can't
quote
specifics,
but
you
know
this
is
sort
of
a.
I
would
say
a
standard
way
of
doing
this
with
bgp
and
that's
what
we'll
do
and
so
that'll
provide
much
better
failover
behavior
in
the
bgp
mode,
but
we
do
you
know
we'll
do
our
best
with
with
layer.
E: The cat, yep, the kitty cat. I have a dog hiding in the corner.
F: You know, it's been so long since I've used BGP. We had a data center and we were processing credit cards, so it's actually kind of nice to hear that it's still out there, because getting two connections into the data center was always a challenge financially, and just knowing that your two connections were being routed in different ways. So it's nice to see that it's still out there and people are using it.
E
Yeah,
so
bgp
is
definitely
alive
and
well,
it
is
you
know
one
of
the
things
that
runs
the
internet.
It
tends
to
pop
up
in
the
news
every
now
and
then
when,
when
a
mistake
is
made-
and
you
know
part
of
the
internet
was
taken
out
by
accident-
I
think
that
you
know
those
things
to
have
been
improved.
I
think
over
the
years,
but
it's
still
crucial
to
how
the
internet
operates.
E
And
you
know
your
comment
about
the
the
providers
thing
you
know,
bgp
as
a
protocol
can
be
used
in
a
couple
different
modes.
You
know,
there's
there's
the
there's.
The
case
kind
of
talked
about,
you
know
runs
the
internet
where
how
all
the
different
providers
peering
with
each
other
and
then
there's
as
a
protocol.
E
And
so
you
know
upstream
from
that,
whether
from
those
routers
or
upstream
routers
from
that
in
your
organization,
then
you
might
get
involved
into
video
peering
with
with
other
providers
and
so
forth,
but
we're
kind
of
making
use
of
bgp
as
a
technology
just
within
your
data
center
within
your
environment,
for
this
middle
of
the
use
case,
and
it
may
be
that
your
bd
infrastructure
beyond
that
involves
providers,
but
not
directly
in
the
context
of
nlp.
C
Is
has
there
been
any
integration
with
tools
for
managing
kubernetes
clusters,
so
things
like
ocm
and
kube
admin
and
cluster
api,
because
cluster
api
is
pretty
much
public
cloud
only
right
now,
isn't
it.
E
Yeah
well,
actually.
Another
thing
I
was
involved
in
the
last
few
years
was
starting
a
project
where
we
implemented
cluster
api
support
for
bare
metal
where
we
automated
the
provisioning
of
bare
metal.
We
do
cluster
api,
so
that's
metal,
cubed
or
metal,
three
dot,
io.
E
Yeah
absolutely
there's
not
direct
integration,
though
there's
there's
a
you
know:
it's
like
you
can
use
that
to
bring
up
your
cluster
and
then
once
you
have
a
cluster,
then
you
install
metal
b
but
like
it's,
we
haven't
done
anything
anywhere
where
it's
just
automatic.
So
as
a
part
of
I've
installed
my
cluster
and
it
includes
metal
on
the
out
of
the
box,
it's
just
it's
always
sort
of
you're
going
to
take
a
next
step
and
the
two
things
that
I
that
exists
in
terms
of
doing
that.
E
There's
there's
the
operator,
so
you
can,
you
can
add
metal
b
by
that
operator
or
you
or
there's
the
upstream
project.
We
we
have
a
helm
chart.
So
if
you
want
to
use
helm,
you
can
use
that
to
deploy
metal
lv.
We
also
just
have
like
sample,
manifests
for
like
quick,
quick
hack
of
a
deployment
sort
of
thing.
E
If
you
want
to
just
you
can
grab
our
manifests
as
is
or
with
slight
modifications,
if
necessary
and
use
those
to
to
install
it
on
the
cluster,
but
no
direct
integration
integration
without
any
of
those
things
today,.
E: Yeah, great question. To start, it's just going to be for user workloads, so it's not going to be used by default for any of our own stuff, like our cluster ingress controller. We kind of talked about this a little bit earlier: MetalLB, technically and conceptually, is applicable to that, and we very well could migrate.
E: But that's a natural sort of evolution, where we may then replace some of the stuff we use today. This is what we were talking about earlier with keepalived, where we have a more manual, very focused use of keepalived for a couple of specific use cases; we could then go back and use MetalLB for those in the future.
E: Good question, and I don't know offhand; I'm trying to see how quickly I can pull up the MetalLB website to help me remember. Part of the MetalLB website has a list of different environments where it's been used, but I don't know about distributions.
D: There are, for example, some bespoke custom deployments out there where MetalLB has been integrated to some extent, but what our customers are really asking for, the bulk of the customers out there, is FRR-based BGP MetalLB, and that does not exist anywhere yet, so we're basically working with upstream to create it first.
E
Yeah,
that's
a
good
point
like
when
we
when
we
were
looking
at
this,
you
know
we
have
problems.
We
want
to
solve
what
are
our
choices?
What
we
chose
was
we
want
to
use
metal
lv
as
the
community
to
and
help
you
know,
jump
in
and
help
evolve
it
to
solve
what
we
want
to
solve.
Another
option
we
had
was
to
do
something
more.
You
know
basically
more
custom.
You
know
start
something
brand
new
around
fr,
but
that
just
didn't
seem
to
be
the
right
choice.
E
This
was
a
a
very
active
community,
well
very
actively
used
project
that
was
like,
like
lots
of
things
in
open
source
right.
It's
like
that's,
really
cool.
It
does
a
lot
of
like
it's
almost
what
I
need
right.
So,
let's
jump
in
and
collaborate
with
others
to
make
it
what
we
you
know
exactly
what
we
need.
That's
what
we're
trying
to
do.
C
Cool,
so
anybody
in
in
is
anybody
in
the
project
talking
about
joining
the
cncf.
E
Oh,
yes,
in
fact,
so
some
some
upstream
project
history.
So
this
is
the
the
this
pro
the
metal
will
be.
E
The
project
was
started
by
one
guy
originally
and
and
then,
but
he
has
since
moved
on
and
and
now
there's
a
team
of
maintainers,
including
myself,
that
take
care
of
the
project,
and
you
know
he
still
owns
the
domains
and
you
know
there's
some
sort
of
like
what,
let's,
let's
finalize
the
sort
of
handoff
of
the
project,
to
long-term
ownership
discussion
and
as
a
part
of
that,
we
did
apply
for
the
cncf
sandbox
and
it
is
in
the
queue
for
a
review
at
this
stage,
so
that
that's
the
discussion
that
that
I
helped
drive
and
put
in
a
proposal
to
the
other
maintainers
to
say
I
you
know
this
is
what
I
think
the
right
thing
is
for
the
project
and
we
reached
consensus,
and
I
felt
I
did
the
application
yeah.
E
We
are
in
the
cube
for
review
so
cool.
That's
the
that's
the
future
I
expect
or
that
I
hope
for.
I
think
it's
a
you
know
that
seems
like
a
natural
home
for
it's
a
it's
a
kubernetes
specific
project.
So,
hopefully,
that
that
goes
through
and
yeah
and
it's
a
healthy,
deaf
community.
I
work
with
people
from
so
there's
like
there.
What
well
there
was
a
company
named
kinfolk
who
has
now
been
acquired
by
microsoft,
that
there's
a
couple
maintainers
from
there
and
then
another
maintainer
from
from
who
works
for
rancher.
E
And
then
other
contributors
from
elsewhere,
but
yeah,
so
I
think
you
know
cncf
is
it
would
be
a
great
home
for
us.
E: The future of load balancing? No, I don't think so. Just, you know, thanks for having us. This has been a very fun project. Of course I do this as part of my job, but it's been one of those things that's been super fun for me, because it's a nice intersection of things I'm interested in. So thanks for the opportunity to come talk about it a bit. That's been fun.
D
Yeah,
I'd
also
like
to
thank
everybody
for
their
time
and
and
their
ability
to
speak
about
it
today
and
look
forward
to
hearing
from
all
the
customers,
with
their
requests
for
enhancement
and
so
forth,
about
middle
lb
and
and
love
to
have
those
discussions.
E
Yes,
so
the
upstream
project,
metal
I'll
be
there
is
a
metal
lb
user's
mailing
list.
It's
a
google
group
and
you
can
find
the
link
on
the
metal
lb
website
for
that.
The
project
also
has
two
slack
channels
on
the
kubernetes
slack,
so
there's
metal,
lb
and
metal
lb
dash
dev.
Those
two
channels
are
active
on
the
kubernetes
slack,
so
those
are
the
two
places
from
the
upstream
product
perspective
and
then
mark.
How
do
people
reach
you
if
they
have
any
product
questions
requests
that
sort
of
thing?
What's
the
best.
D
Basically,
through
their
account
team,
so
if
they're
openshift
customer
they,
they
should
have
a
connection
point
there
and
then,
depending
on
the
level
of
support,
they
have
that
that
is
different,
but
we
that
is
definitely
the
the
way
to
approach
it.
Some
customers
can't
submit
requests
for
future
enhancements
directly,
but
the
account
team
does
it
on
their
behalf,
and
that
gets
it
into
our
queue
and
at
that
point
at
which
point
we
evaluate
it
and
if
it
has
merit-
and
we
will-
we
will
build
it
into
the
future
of
the
product.
C
Cool
awesome:
well,
thank
you!
So
much
for
attending
and
letting
us
know
all
about
metal
ob,
the
you
know
and
how
to
use
it
and
what's
going
on
with
it,
for
anybody
tuning
in
who
say,
missed
the
beginning
of
this.
C
This
broadcast
will
be
archived
in
youtube,
so
you
can
go
back
and
watch
the
beginning
of
it
and
of
course
we
have
lots
of
different
shows
here
on
red
hat
and
openshift
streaming.
C: So thank you very much, and we will see you online.