From YouTube: Ask an OpenShift Admin (Ep 24): CNI plugins and Multus
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
C
I would be happy to. I have to say that today's topic, and the one from a couple of weeks ago with etcd, are two of the most exciting things, the things that are most interesting to me from an OpenShift perspective, because they poke my buttons of being super technical and super critical about the things that are happening inside of the cluster. So I'm super excited, you know, to just have everybody on today. So, same, yeah.
C
So before I get started, I want to just have a reminder that this is one of the office hours series of live streams that we have here on openshift.tv.
C
What that really means is that we are here to answer your questions. So for anybody who is watching at home, watching from the office, watching from the train, wherever you happen to be, feel free to ask us any questions that are on your mind at any point in time during the stream. Ideally admin questions, because most of us are experts on the admin side, not on the development side; we'll entertain those developer questions, but we may not be able to help as easily.
C
So that being said, we do have a topic for every one of our shows, and today I am, as I said, super excited to talk about CNI and SDN, CNI being the container network interface, and I'm going to rely on our guests here to correct me and make sure that I'm speaking correctly. So first of all, I would like to introduce and welcome Marc Curry, who is a consulting product manager with the Cloud Platforms business unit.
D
Thanks, Andrew. Yes, so again, my name is Marc Curry, and I'm responsible for networking with OpenShift. So today we're going to talk all about, hopefully, answering your questions about OpenShift networking and CNIs and the plugins and Multus, and how all of those tie together and why it is that it works that way.
D
I'm joined by two of our top networking engineers, Doug and Tomo. If you guys could please introduce yourselves.
E
Yourselves
hi
I'm
doug
smith
and
yeah,
I'm
a
member
of
the
openshift
networking
team
and
I
work
on
a
team
that
we
call
the
openshift
plumbing
team
and
we're
interested
in
getting
your
workloads
all
plumbed
up
to
the
networks
and
the
hardware
to
enable
advanced
networking
use
cases
and
I'm
joined
by
the
guy.
Who
does
all
the
real
work
toa
go
ahead,
though.
B
Thank you. Hi guys, I'm Tomofumi Hayashi, and I've been working on Multus for four years or so, together with Doug. I'm happy to answer your questions about the Multus CNI, and about OpenShift as well. Thank you.
C
Yeah, one of the things I love about Red Hat is that all of you are, I think, legitimately the smartest people I know, and yet you keep passing the buck to the next person. You know, Marc is the consulting product manager who hands it over and says, "I know nothing, it's completely up to Doug," who hands it over to Tomo and says, "I know nothing, it's completely up to Tomo."
C
I love that aspect of Red Hat. So before we launch into that, for our audience, for the regular attendees: you know that I like to cover a couple of things that are top of mind, the things that have come up internally and externally regularly over the last week, or sometimes the last couple of weeks.
C
Let's see, let's share this window. So the first thing is just a reminder that every week we publish a summary blog post from the previous week, or the current week, I guess, is the way it works. So you can see here we have our blog post from last week, where we had Christian Hernandez on to talk about DNS and all the things going on inside of there. I'll go ahead and link that into the chat.
C
Those come out Friday morning. I'll have one this Friday that recaps everything that happened here; we'll share all of the links, as well as the questions and things like that that we discussed, to help make it all more discoverable for everybody.
C
The second thing I want to talk about: for those of you who are currently deployed, or have currently deployed, 4.6 clusters, you've noticed that it's been a little over a month now since 4.7 was released and you still don't have a release in the stable update channel. So yes, we know.
C
Unfortunately, there have been a number of bugs that have been found. The most critical one, the biggest one that I'm aware of, is one that we've actually talked about before: when we update to 4.7 with vSphere clusters that are running VM hardware versions 14, 15, and 16, we begin to see some sporadic, inconsistent packet loss, which of course is a bad thing. I know they're working diligently on that; I know there's a ton of work and a ton of discussion happening internally.
C
I will offer that, if you're curious about that whole process, what it looks like, how it works, and all of the effort and all the decisions that go into it, Rob Szumski published a great blog post five months ago or so that talks about this entire process and what goes into all of those things. One of the things I thought was particularly interesting, if we scroll down here all the way to, where is it, this chart...
C
So this chart talks about the different channels that are in here. I think last week I talked about how there are both the fast and the latest channels and they're effectively the same. One of the things that I thought was interesting, that Rob mentions in this post, is that we suggest you have at least one of your clusters running the fast channel, so that you know if these types of things are going to affect you before you roll an update out to all of your other stable clusters.
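For reference, one way to move a single cluster onto the fast channel is to patch its ClusterVersion resource; this is a rough sketch (the channel name depends on your minor version), not the only way to do it:

    oc patch clusterversion version --type merge -p '{"spec":{"channel":"fast-4.7"}}'
    oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'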
C
That also helps us, particularly if you have the telemetry collection turned on, because that is how we help to determine when there are issues that might mean we should wait before rolling a particular release into stable. So it's a great blog post, super timely information.
C
Considering that it is a little bit longer a period of time than we normally expect to go from a new y-release, 4.7, to having those upgrades available in the stable channel, do give it a read if you haven't seen it already. All right, the next thing that I want to talk about here is one of those items that constantly comes up and that I like to put out reminders for, much like "DHCP is required for IPI."
C
So
this
is
sometimes
a
little
bit
confusing
one
of
the
things
that
the
docs
team
is
working
on,
so
you
can
see
I'm
in
the
4.7
documentation
and
I've
selected
the
installing
on
any
platform.
This
is
the
platform
agnostic
and
you
can
still
see.
We
still
have
this
configuring
a
three
node
cluster
option
here,
so
it
doesn't
have
to
be
bare
metal.
It
doesn't
have
to
be
physical
server.
Specific
can
be
virtual
servers
so
long
as
they
meet
the
resource
requirements
that
are
installed
with
that
non-integrated
platform.
C
So
just
keep
that
in
mind-
and
the
last
thing
I
have
before
we
kick
over
to
mark
is
a
question
that
got
asked
internally
earlier
this
week,
which
was:
can
I
convert
an
ipi
or
upi
cluster
into
bare
metal
right?
Can
I
remove
the
cloud
provider
integration
and
this
question
sprang
from?
I
don't
remember
if
it
was
a
vsphere
cluster
or
rev
cluster,
but
they
had
deployed
using.
I
think
it
was
ipi
and
basically
they
wanted
to
begin
adding
physical
nodes.
C
So it's a virtual cluster, I'm going to say it's a vSphere IPI virtual cluster, and they want to start adding physical nodes into that cluster, and you can't do that. You can't mix the platforms unless it's a non-integrated cluster. So the question was, well, can I just turn it into a non-integrated cluster? And unfortunately the answer is no, and vice versa: you can't change a non-integrated cluster into a cloud provider one.
A
So we've already got a question, but I'm gonna save it; I'll just put it out there, and feel no obligation to answer this yet, team, because we're gonna kind of talk about it. One: real use cases of Multus, people want to see that. And second: what kind of CNIs have better performance for offloading, switching, filtering, and so on.
D
Yeah, we'll definitely cover that, Chris, and remind me if I forget, but I'm pretty sure I will remember those.
C
I haven't read the questions from this morning either. So, enough of me rambling, I will hand over to you, Marc, and let you take it; I know you had a couple of things that you wanted to get started with.
D
Great, thank you, Andrew and Chris. So to kind of kick the conversation off, I don't want to presume or assume too much about what people understand, so I want to talk about some fundamentals. The first thing I want to do is make sure that people have a clear understanding of what exactly a CNI, or a CNI plug-in, is and how it gets used. Simply put, as you heard Andrew say earlier, CNI is the Kubernetes container network interface.
D
It's really just a specification and a set of libraries for writing plugins to configure network interfaces in both Linux containers and pods. So when a Kubernetes pod is spun up, it needs networking information for its interface, and it gets that from the CNI plug-in. We have a default, out-of-the-box CNI with OpenShift; as you might imagine, our current default is based on OVS, and we call it OpenShift SDN.
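For reference, a CNI plugin is driven by a small JSON configuration that the container runtime reads from /etc/cni/net.d on each node. The snippet below is a generic, hand-written illustration of that format using the upstream bridge and host-local plugins, not the configuration OpenShift itself lays down:

    {
      "cniVersion": "0.3.1",
      "name": "example-net",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }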
D
So one of the first things I want you to take away from that information is that our own networking for OpenShift is itself a plug-in, which implies the possibility of swapping it out for another plug-in.
D
To that end, we also support a special CNI plug-in named Kuryr-Kubernetes.
D
We worked with our OpenStack team on that, for those customers that are running OpenShift on OpenStack and prefer to avoid the double encapsulation that happens when you stack one overlay network on top of another, OpenShift on top of OpenStack's. Kuryr, distilled down to the simplest explanation, is really just a way to collapse the two networking stacks down into one, that one being OpenStack's Neutron networking plug-in, whatever that might happen to be, so that when a pod is spun up, it reaches down to OpenStack, gets networking information, and assigns it appropriately.
D
So, in addition to those that we fully support within OpenShift, there are other primary networking plugins that we also work with through the partnerships that we've developed. We fully support several of these, and each has its own market differentiators; sometimes they fill a gap in our own default capabilities that might solve a specific, critical problem for a customer that maybe we otherwise either could not solve or just simply couldn't solve in a timely manner.
D
For each one of those third-party CNIs, we don't just say "yeah, have at it, swap it out and we'll support it." Each one of those has to go through a pretty rigorous certification process.
D
The
goal
of
that
certification
process
is,
you
know
there,
there's
sort
of
a
predetermination
of
the
lines
of
support
between
the
two
organizations,
red
hat
and
whomever
that
vendor
is
maybe
it's
cisco,
juniper
tigera
whomever,
and
so
this
way
the
customer
can
just
simply
call
red
hat.
Both
organizations
get
woken
up
to
whatever
problem
or
issue
they
have
and
then
we've
already
predetermined,
whose
responsibility
that
particular
problem
is
and
then
we
get
to
work
on
it
without
the
customer
worrying
about
having
you
know
who,
should
I
call
red
hat
or
vendor
x?
D
D
I'll
talk
a
little
bit
about
that
ensure
the
plug-in
runs
workloads
that
customers
expect
around
openshift
it
should.
It
should
be.
You
know
normal
quote-unquote,
kubernetes
networking
and
also
to
prevent
plug-ins
from
doing
things
that
decrease
security,
the
classic
scenario
and
I'm
I'm
sure
if
you've
been
doing
administration
for
a
while
you've.
Seen
this,
you
visit
a
page
that
says:
here's
how
to
do
this
in
openshift
or
or
redhead
in
general
and
step
number
one
is
disable
se
linux
right.
So
we
want
to
avoid.
A
D
So yeah, those are some of the goals of why we force certification on the things that we support. So what is the certification process for these vendors? They have to, first of all, make sure that all the containers that make up their solution are themselves certified; we don't want somebody to create a solution based on a container image they pulled from who knows where. We want them to create a Kubernetes operator, and we want that to be certified. The minimum functionality of that operator is that it simply must be able to manage the life cycle of the CNI plug-in; they can make the operator as advanced as they'd like, and do things like add the ability to ensure the health of their particular plug-in across upgrades of the plug-in itself, or even of OpenShift. Another part of the certification is that they have to pass the same Kubernetes networking conformance tests that we ourselves pass every time we make changes to our own OpenShift plugins to validate the SDN.
C
To be clear, when we say it's a certified, you know, SDN plug-in... and I just realized you keep saying CNI and I often refer to it as an SDN, so maybe after you're done here you should explain the difference to me. I think I said when we were staging this that I'm going to play the role of dumb guy, which is the role I was born to play. But just to be clear, with the certification, we aren't testing and certifying and validating that third party's functionality, right?
D
Correct, and thanks for pointing that out, Andrew. The clarification there is that the CNI plug-in is the thing that plugs into that spot in Kubernetes that can be called to get networking information.
D
That's right, exactly. And what I'm talking about here, and we haven't gotten into secondary interfaces and secondary plug-ins yet, but I'll talk about those in a second, let me just complete this first conversation by saying that what I'm really talking about here are the primary networking plugins that are part of OpenShift. So I've highlighted our current default, OpenShift SDN, and our next generation that will become our default in 4.9.
D
It's
currently
g8,
however,
which
is
ovn
courier
kubernetes,
and
then
we
have
a
a
bunch
of
third-party
ones.
So
what
are
the
third-party
ones
in
no
specific
order?
We
we,
we
value
all
of
our
vendor
third-party
networking
solutions,
so
in
no
particular
order.
There
are
ones
from
I'll
try
to
try
to
do
this
mentally
alphabetically,
but
there
is.
There
is
calico
from
tigera,
so
we
have
a
great
relationship
with
tigera
and
support
there
plug
in
some
of
the
key
reasons
why
somebody
might
choose
calico.
Is
they
they
demand?
D
Maybe
bgb?
Maybe
they
like
some
of
the
advanced
security
features,
so
it
was
tigera
that
really
upstream
some
of
the
network
policy
features
in
the
very
beginning
and
they've.
They
have
some
proprietary
add-ons
to
that
that
some
customers
might
appreciate.
So
those
are
those
are
a
couple
of
big
things
why
somebody
might
choose
calico
another
another
one,
cisco
aci,
so
cisco
aci,
obviously
is
is
supported
fully
by
cisco
and
it
it
really
is.
I
hear
a
lot
from
customers.
Hey
I've
got
aci
deployed
throughout
my
data
center.
D
Maybe
it
makes
sense
for
me
to
use
cisco
aci
as
the
plug-in,
and
maybe
it
does,
maybe,
because
that
cisco,
aci
cni
plug-in
might
interact
with
the
rest
of
the
ecosystem
of
aci
throughout
the
rest
of
their
data
center
and
provide
some
advantage,
and
that's
really
for
the
customer
to
to
determine
another
another
one
that
we
support
is
vmware,
so
vmware
has
what
they
refer
to.
They
actually
have
two
plugins.
The
first
plugin
vmware
has
something
they
refer
to
as
the
nsx
container
plug-in
or
ncp.
D
More
specifically,
traditionally,
nsx
has
referred
to
more
of
the
one
that's
associated
with
esx
hosts,
whereas
nsxt
is
the
one
that's
been
more
associated
with
kubernetes
and
the
container
plug-in,
but
more
generally,
it's
just
called
ncp
nowadays
and
we
do
have
a
certified
solution
with
them.
So,
of
course,
if
you're
deploying
on
top
of
a
vsphere
environment
or
ecosystem,
there
may
be
some
benefit
to
your
using
that
one.
Now
one
of
the
you
know
so
there,
the
other
they
have
a
second
plug-in.
D
So
they
have
just
started
the
certification
of
that
plug-in
and
I'm
expecting
that
to
complete
sometime
in
the
second
quarter
of
this
calendar
year,
and
then
we
have
others
so
juniper
has
their
contrail
plug-in,
which
we're
actually
working
with
them
right
now,
that's
also
in
progress
to
get
certified
and
I
think
I've.
I
think
I've
remembered
everybody
apologies
if
I've
forgotten
someone,
but
those
those
are
the
big
ones
that
we're
working
with
today.
C
Yeah, and again, we don't performance test each one of those. I think one of the questions was which one has the best performance. Well, we don't know, certainly not for all the partners, because we don't test them for their performance or validate their performance claims or anything like that.
C
And I'll also say I have the same stance with CNI and SDN that I do with CSI and storage provisioners: they're all my favorite children, and my favorite one is the one that works for you and meets all of your needs. We really have no preference outside of that.
D
And I would like to add to that: nowadays we also require validation with some of our layered products, like, for example, Service Mesh. So if a customer uses, let's say, Calico with OpenShift Service Mesh, can we guarantee to the customer that that's going to behave properly? We've added additional layers to that certification to do additional validation that they're not going to break some of the basic features and functions.
A
That's a good point, yeah, certifying those things. Question, though: is there any hardware networking that works with OpenShift and Kubernetes right now, or is all of it SDN?
D
Well, there actually is hardware involved in some of the decision process of ACI, right? You can literally run Perl scripts on Cisco routers that affect the hardware networking and influence the ACI ecosystem that's involved. But I think it's really limited to that extent.
C
So there have been a couple of tangentially related questions that I'm gonna address real quick. One from Usame: any plan to add the vSphere CSI driver on OperatorHub? I don't believe that their CSI driver is an operator.
C
So the first step would be for them to create an operator out of it, which we can then certify and put onto OperatorHub; that would be the first step. And then the other one... where did it go?
D
So, CNI operators: actually, I don't know what the current situation is, but I believe the operators do show up on our OperatorHub. As you might imagine, though, today that may be more for information purposes, because CNI plugins are actually set up at install time.
C
Got it. Yeah, and that's another question that I had for you at some point down the line, which was, you know, we see, or I see, I wouldn't say frequently, but it comes up: can I change my SDN? Can I change the CNI plug-in that I'm using?
D
Yeah, and actually, before I do that, let me qualify my previous comment, which is that you can change CNI plugins, but let me be careful in my description. We do have a mechanism today, actually created by the engineers who have joined me on this call, Tomo and Doug, which is the ability to flip from one primary CNI plug-in to another using Multus, and I'll explain Multus in a moment.
D
Also, when it comes to secondary plug-ins, you can add them along the way; they would show up in any newly created pods from that point onward.
D
We can address some of the details of that in more depth later, but before we get to that, let me jump into Multus to kind of get that conversation going. Back in OpenShift 3, the telecommunications industry was really the catalyst for this, but it's definitely not just telco; it really was anybody who was doing any kind of network function virtualization, or NFV, when they started to transform their virtual network functions, or VNFs, into cloud-native, container-based functions, or CNFs. A very early gap that was identified was the inability to have more than one network interface on a Kubernetes pod.
D
The primary functional gaps, then, were where people wanted to have these additional interfaces for purposes of network segregation, for both functional purposes like performance and non-functional purposes like security, but also the ability to do things like link aggregation and bonding for network interface redundancy.
D
So
to
solve
this
problem.
Red
hat's,
openshift,
sdn,
engineering
and
and
doug
and
tomu
were
were
big
leaders
in
that
and
nfv
partner
engineering
teams,
also
dan
williams,
funk
pond.
These
guys
were
big
players
in
this.
They
formed
a
networking
plumbing
group,
which
you
heard
doug
described
earlier,
that
he's
he's
responsible
still
for
the
networking
plumbing
team
within
within
red
hat,
but
they
formed
a
network
plumbing
working
group
as
part
of
the
kubernetes
network
sig.
D
This
was
done
during
kubecon
2017
to
address
some
of
these
lower
level
networking
issues
in
kubernetes,
so
it
was
chaired
by
red
hat,
but
it
was
broadly
attended
and
with
many
representatives
across
the
industry,
I
think
we
had
somewhere
upwards
of
about
17
different
members
that
were
all
contributing
input
somewhere
along
that
lines,
all
with
the
common
goal
of
achieving
consensus
on
a
de
facto
standard
for
implementing
multiple
network
attachments
in
an
out
of
tree
solution.
D
There were a number of use cases that we gathered, a standard specification was proposed, and what we collectively agreed to build, and what we did build, was a reference implementation of the solution using an upstream project initiated by Intel named Multus.
D
Multus CNI is a meta plug-in for Kubernetes CNI which enables the ability to create multiple network interfaces per pod and assign a CNI plug-in to each one of those that has been created. So, fundamentally, there is a static CNI configuration that points to Multus, and then every subsequent CNI plug-in called by Multus gets its configuration defined in a custom resource definition object. I like to imagine Multus like a power strip.
D
In our first release of OpenShift 4, we built Multus into OpenShift as a default meta plug-in, whether you were using secondary plug-ins or not. So the option is always there; if you're not using additional ones, fine, it's just basically a pass-through. If you do want to use additional network plugins, then with Multus you don't have to do anything other than, you know, define the CRD and plug it in.
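As a concrete illustration of "define the CRD and plug it in," here is a hand-written NetworkAttachmentDefinition for a macvlan secondary network; the name, namespace, master interface, and subnet are all placeholders, and the master interface must actually exist on the node:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-example
      namespace: my-project
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "ens5",
        "mode": "bridge",
        "ipam": {
          "type": "host-local",
          "subnet": "192.168.10.0/24"
        }
      }'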
C
So, a couple of questions there. You used the terms primary CNI and secondary CNI plugins, and I would like to ask what the difference is between those. And then my follow-on is: I've also heard CNI, sometimes in reference to Multus, being referred to as a pipeline; it pipelines those plug-ins together. I usually hear this from the CNV, or OpenShift Virtualization, team, because they have their cnv-bridge and cnv-tuning plug-ins, and my understanding is that they feed from one to the next.
D
Yeah, on the second one I may defer to the engineering folks on this call to talk more about how that works functionally, but there is an ordering in which the CRDs are parsed and the configuration is done; that may be more what they're referring to by pipeline. Maybe Doug or Tomo can jump in on that one. But to your first question about the types of CNI plug-ins: there are broadly two different types.
D
There are what I call the primary CNI plug-ins, and then what we loosely call secondary plug-ins. So you heard that the primary plug-ins are the ones that basically define the primary interface on every pod in the cluster, and Kubernetes itself doesn't fully understand, or doesn't treat as a first-class citizen, any of the additional interfaces; Kubernetes end-to-end is primarily focused on that primary interface, and so is all control plane traffic.
D
Let's leave multiple interfaces out of the picture for a moment: all control plane Kubernetes traffic and data plane traffic traditionally would flow in and out of eth0 on every pod in the cluster. When you start to add secondary interfaces, Kubernetes remains on that eth0, and with those CRDs and additional plugins you can define additional network interfaces that are separated from it.
D
The good thing about that is that it helped to solve some things that customers were asking for. I'll talk more about some of the different plugins, but let's choose one to discuss, and that is SR-IOV. SR-IOV is a plug-in that basically allows your traffic to bypass even the Linux kernel networking stack and go directly to the NIC, so as long as you have an SR-IOV capable NIC, you can communicate that way.
D
That's
that's
the
fastest
possible
way
to
communicate
from
the
pod
to
the
core
network
of
the
cluster,
and
so
users
were
saying.
Look.
I
don't
want
that
to
be
encumbered
by
all
the
other
kubernetes
traffic
for
purposes
of
maybe
performance,
maybe
for
purposes
of
security
or
whatever.
So
what
they
said
was
that
one
of
the
immediate
and
first
use
cases
was
to
use
that
secondary
plug-in
to
separate
some
of
that
data
plane
traffic
from
the
primary
interface
you
asked
before.
D
We
were
one
of
the
first
industry
to
enable
high
performance
multicast
streaming
because
of
this
ability
for
us
to
plug
in
that
secondary
interface,
sri
ov,
cni,
plugin,
and
so
customers
could
redirect
their
traffic
out
there
and
and
achieve
basically
host
line
rate
or
their
nic
line
rate
for
their
traffic,
leaving
the
leaving
the
pod
and
going
on
to
stepping
onto
the
the
cluster
network.
C
So we have a question, and Marc, I don't know if this is a question for you or Doug or Tomo. hc631 asks: you've talked about CRDs related to Multus, so which CRDs relate to Multus and what do they configure? And if we want, I do have a cluster that I can walk through some things on, if you want to go that far, or if you just want to use words to paint a picture, that works too.
E
Sure,
let
me
let
me
give
a
quick
overview.
Typically
when
you're
administrating
openshift
cluster,
what
you're
going
to
look
at
is
your
networks
object
and
that's
where
you
configure
your
networks
as
a
whole
and
also
it's
probably
a
good
place
to
kind
of
memorialize
the
configurations
you
have
so,
if
you
have.
E
Parameters
for
openshift
sdn
for
ovn
you're,
probably
going
to
have
them
there.
You
can
also
configure
your
additional
networks
there
in
a
field
that
I
believe
is
called
additional
networks.
That
is
then
an
abstraction
from
the
custom
resource
that
multis
use.
That's
called
a
network
attachment
definition
and
it's
that
particular
custom
resource.
It's
very
simple:
it's
basically
a
field,
that's
a
blob
of
json!
That's
a
cni
configuration
itself,
so
it
allows
you
to
say
this.
Is
the
cni
configuration
for
this
additional
interface.
C
So I'm going to share a screen here real quick so that we can walk through a couple of things, and I'll ask you to perhaps explain what we're seeing. Oh, I don't want to show you that.
C
So here I have an Azure cluster that I provisioned this morning; it's running 4.7.2. So Doug, you were just saying that we want to look at the network CRD.
C
So
if
I
do
an
oc
get
network
which
is
a
global
object,
we
see
that
we
have
the
cluster
and
if
I
do
a
dash
o
yaml
we'll
see
the
contents
of
that,
and
this
should
look
kind
of
familiar,
because
this
is
more
or
less
the
same
stuff
that
you
saw
in
your
install
config.cmr.yaml
when
you
were
setting
up
the
cluster-
and
that
includes
you
know
you
can
see
here-
it's
openshift
sdn.
If
I
were
to
substitute
that
for
ovn
kubernetes.
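For reference, the cluster-scoped Network configuration object being shown here looks roughly like the sketch below; the values are illustrative defaults rather than the exact cluster on screen:

    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16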
E
Got it. And what you would do is just add a line here under the spec for additionalNetworks, and the OpenShift docs detail all the particular parameters and goodies that you can configure there.
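A sketch of what that looks like, assuming you edit the Cluster Network Operator configuration (for example with oc edit networks.operator.openshift.io cluster); the network name, namespace, and raw CNI config here are made up:

    spec:
      additionalNetworks:
      - name: macvlan-example
        namespace: my-project
        type: Raw
        rawCNIConfig: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" } }'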
E
You
also
could
potentially
do
an
oc,
explain,
network
attachment
definition,
hey.
B
E
And
kind
of
two
things
to
point
out
here
that
are
really
important
is
the
name,
the
metadata
name
and
you're,
going
to
use
that
you're
going
to
refer
to
that
in
pods
as
an
annotation
to
say
hey.
E
This
is
the
additional
network
that
I
want,
or
one
of
many
additional
networks
that
I
want
and
then
in
the
spec,
the
config,
that's
what
c
and
I
can
fake,
which
are
in
json
and
that's
how
you
configure
an
individual,
cni
plugin
and
some
of
those
fields
are
static
and
required
for
everyone,
so
name
type
or
plugins
cni
version
those
are
generally
required
and
then
the
rest
is
free
form.
So
it
could
be
specific
to
the
cnn
coding.
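Putting those pieces together, a pod requests the additional network by name through an annotation; a minimal sketch, with made-up pod, image, and network names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-example
    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest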
B
And the pod should be in the same namespace. So if you create the network attachment definition in one namespace, and then you create the pod that uses it in another namespace, that may cause an error. So please create the network attachment definition and the pod in the same namespace.
C
Got it.
E
And so, generally, they're required to be in the same namespace. However, the default namespace is special: you can refer to a network attachment definition in the default namespace from any namespace.
E
So say, for example, you've got a hundred namespaces; do you really need a hundred copies of the same network attachment definition just to use it in a hundred namespaces? We said no, we've got to have a way so you can share it in that case.
E
In that case you'd create the network attachment definition in the default namespace, and then, when you annotate it, you use a format with a namespace, slash, name. So you would say default/foo, for example, and that would allow you to do it. So yeah, default is special. Also keep that in mind for security considerations, if you don't want it to be used across namespaces as well.
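For the cross-namespace case described here, the same annotation takes a namespace/name form, for example (illustrative name):

    annotations:
      k8s.v1.cni.cncf.io/networks: default/foo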
C
So it's funny, because Chris and I chat about these shows, and I usually try to have subjects and things that I like to think I'm at least knowledgeable on, so that I can hold the conversation, and I feel like I have learned just a tremendous amount from you already. So thank you.
C
All right, so Chris, do we have any questions that have come in?
C
Cool, okay. So I have a question, and in particular, and I think, Marc, you referred to these as the secondary network plugins: if we were to look, for example, in the OpenShift GitHub, and I'm gonna share my screen again, I think that's the right screen to share, it is. So if I go to github.com/openshift, and if I can type and talk, and I just do a very cursory search, typing "cni" in the search field...
C
...right, we get a bunch of CNI plug-ins, and I think these are what you're referring to as the secondary plug-ins, like this whereabouts one, and, you know, here's Multus, which is not a secondary one, route-override-cni, SR-IOV, and so on and so forth. So can you kind of describe what some of these are, and maybe when I should or shouldn't use them?
D
Well, let me address it at a higher level, and then, given the fact that Doug and Tomo are authors of some of these, I'll redirect the conversation to them pretty quickly. But the premise is that these secondary CNIs enable our customers to immediately take advantage of some benefit that's afforded by some other CNI plug-in implementation.
D
You didn't have to wait for this to show up in Kubernetes proper; this is something we could implement as a secondary CNI plug-in faster. It may end up evolving into something that's a fundamental feature of Kubernetes, but today it is something that would be enabled as a secondary function on a secondary interface. So there's a number of different plugins that we support.
D
And then there are other ones here that we created out of need, where there was some functionality that was not there that our customers were asking us for.
D
So, for example, there is ipvlan, and this is really about assigning sub-interfaces their own unique IP address while they all share a common MAC address. An example would be something like AWS, which uses ipvlan instead of VXLAN overlays for their VPC offerings, and so you would see lower latency and improved throughput with that.
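As a rough sketch, an ipvlan attachment expressed as a CNI configuration (for example inside a network attachment definition) looks something like this; the master interface and subnet are placeholders:

    {
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "ens5",
      "mode": "l2",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
      }
    }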
D
Another
use
case
for
something
like
ipvlan
or
customers
with
pods
that
are
in
vms,
that
want
to
use
the
vm's
mac
for
egress
traffic
and
and
not
that
of
the
host.
So
there's
a
number
of
use
cases
to
each
one
of
these,
and
you
know
at
this
point,
I
think
what
I'll
do
is
I'll
hand
it
to
doug
and
tomo.
You
know
doug,
you
know,
there's
another
one.
You
have
in
here
that
I
know
for
sure
that
you
were
the
one
who
who
created
it,
and
that
was
the
whereabouts.
E
Yeah, absolutely. So whereabouts is a special kind of CNI plug-in. Even as we were talking about what the difference is between an SDN and a CNI, there are actually a number of different kinds of CNI plugins, and one of those is an IPAM CNI plug-in. It's a specialized type of CNI plug-in that works with other CNI plug-ins and provides them with IP address information. In the case of whereabouts, we discovered that in some scenarios users were having trouble getting IP addresses to all of their workloads across the cluster. We had a lot of examples that used the host-local IPAM CNI plugin, and that IP address information is stored locally on each host. So you go to assign IP addresses on two hosts and you get the same IP address on both, which is of course a disappointment, because guess what, in the real world everyone has a cluster that's bigger than one node; maybe in a dev environment...
E
You
just
have
the
one
host,
so
we
realized,
you
know
what
it's
not
always
easy
to
get
that
across
hosts.
So
what
whereabouts
does
it
is
assigns
ip
addresses
to
other
cni
plugins
and
therefore
interfaces
to
pods
and
kubernetes
using
custom
resources.
So
you
just
give
it
an
ip
address
range
and
it
says:
okay,
I
know
how
to
figure
out
which
ip
addresses
are
allocated
or
not,
and
it
assigns
those
for
you
and
one
place
where
we've
seen
this
to
be
particularly
useful
is
in
isolated,
srlv
networks.
E
So
let's
say
you've
got
this.
You
know,
you've
got
a
media
streaming
platform
and
you've
got
all
your
high
performance.
Video
audio
going
out
this
one
interface,
but
what,
if
you
don't
have
a
dhcp
access
on
that?
Well,
you
don't
necessarily
need
to
go
and
then
set
up
a
dhcp
server.
You
can
just
throw
in
a
couple
lines
of
config
and
say:
hey.
Can
I
get
an
ip
address
from
this
range?
Please,
and
and
that's
what
what
whereabouts
does.
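A minimal sketch of what "a couple lines of config" means here: the ipam section of the CNI configuration points at whereabouts and gives it a range (the range itself is just an example):

    "ipam": {
      "type": "whereabouts",
      "range": "192.168.2.0/24"
    }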
C
Very
cool-
and
so
it's
not
not
like
assigning
ip
addresses
to
pods
that
are
connected
to
openshift
sdn,
because
it
has
its
own
mechanism
to
do
it.
It's
rather,
if
it's
connected
to
some
other,
you
know
sdn
for
lack
of
a
better
term
or
some
other
network
that
is
cni
controlled
and
it
needs
to
determine
an
ip
address
for
that.
So
I
I'll
pick
on
like
openshift
virtualization
right,
you
said
you
know
hey.
E
Yeah, absolutely. And it has a number of features, like excluding ranges. So say, for example, you're trying to play nice with other existing infrastructure in your network, and you want to say, hey, I know I have this /24, but we have these legacy ten IP addresses that I don't want to collide with; you can specify stuff like that.
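And the exclusions Doug mentions look roughly like this; the excluded CIDRs are illustrative:

    "ipam": {
      "type": "whereabouts",
      "range": "192.168.2.0/24",
      "exclude": [
        "192.168.2.0/28",
        "192.168.2.229/30"
      ]
    }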
C
Very cool. So I'm going to change directions ever so slightly and ask you guys what is a bit of a scary question, so I hope I'm not alarming you, and that is: what can go wrong? When we think about, and I know, like, SDNs...
C
...and I think it's one of those where a lot of times we think about the SDN not being able to instantiate itself, right, the nodes can't talk to each other, the VXLAN tunnels can't be created, whatever that happens to be. So, are there common things? Let me narrow this down: are there things that commonly go wrong, and do you have any suggestions for troubleshooting, or for how to identify those?
E
Mark
do
you
mind
if
I
take
it
away,
please
please
do
all
right
cool.
So,
as
you
know,
I
mean
networking.
I
wish
it
was
just
as
easy
as
plugging
ethernet
or
optical
cable
in
and
everything
worked.
That
would
be
great.
C
E
Meanwhile, a lot can go wrong. So let me talk about a few of the things that we've done to help mitigate that for you in OpenShift, the number one of which, in terms of additional networks, is that we have an admission controller, so that when you go and create these custom resources, it does some cursory checks on them to make sure that things are formatted the right way. For some simple mistakes it can stop you and say, hey, can you double check this before I go and instantiate it? So that's one thing. The next thing that you might see happen is, if you've made a mistake that our admission controller couldn't pick up on...
E
Multis
may
pick
up
on
it
and
tomo
created
a
functionality
for
multis
that
uses
kubernetes
events.
So
if
multis
finds
a
problem,
it's
going
to
create
a
kubernetes
event
and
you're
going
to
see
those
in
a
oc
describe
pod
foo,
and
hopefully
you
see
an
event
in
there
and
it
says,
like
here's,
a
here's,
a
like
a
common
mistake
that
we
see
all
the
time
is
you
go
to
use
mac,
vlan,
cni
and
you're
like
hey.
E
I
want
a
mac,
vlan
interface
on
this,
and
you
need
to
specify
a
interface
that
exists
on
your
host
system
if
you've
specified
the
wrong
interface
name
when
multis
goes
and
runs
mac,
vlan,
cni,
mac,
vlan
c
and
is
going
to
return
to
malta
and
say:
hey,
didn't,
find
east
fu
and
multis
will
then
create
a
kubernetes
event.
That's
going
to
say,
I
couldn't
create
a
mac
vlan,
because
there's
no
interface
on
the
host
called
ethu.
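As a quick pointer for where to look for those events (pod and namespace names are placeholders):

    oc describe pod sample-pod -n my-project          # events are listed at the bottom
    oc get events -n my-project --sort-by=.lastTimestamp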
E
So
that's
going
to
be
one
of
the
first
places
you're
going
to
get
your
hints
and
in
terms
of
cni
cni
is
generally
a
one-shot
event-driven
api
and
your
sdn
itself
may
be
doing
things
that
are
more
complicated,
has
more
fault.
Tolerance
has
more
of
a
state
to
it,
whereas
there's
going
to
be
cni
events,
the
most
the
ones
you're
the
most
concerned
about
as
an
administrator,
is
on
cni
ad
and
cni
delete.
So
that's
when
your
pods
created
and
your
pods
torn
down.
E
There's
a
few
other
places
that
you
might
want
to
look,
one
of
which,
I
would
say,
is
the
cubelet
logs,
so
the
cubelet
libsy
and
I
cryo
there's
like
a
kind
of
constellation
here
that
all
works
together.
A
lot
of
those
logs
are
going
to
be
trapped
in
the
cubelet
and
the
cni
api
itself
is
all
standard
in
standard
out,
it's
fairly
basic
and
that
so,
when
you
have
an
error,
it's
in
standard
air
and
that
standard
error
is
going
to
be
picked
up
by
the
cubelet
itself.
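If you need to go a level deeper, the node logs are reachable without SSH; a sketch, with the node name as a placeholder:

    oc adm node-logs <node-name> -u kubelet    # kubelet logs, including the CNI stderr it captured
    oc adm node-logs <node-name> -u crio       # CRI-O runtime logs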
C
And we have a question; I'll go ahead and read it out loud just so that we can address it verbally. A statement from hc631: trailing commas, which are normally accepted as functional JSON, don't get accepted in the network object if they aren't followed by another key-value pair in the JSON. So I think that was just a statement; it seems like there might be some non-traditional parser behavior there.
E
I
would
love
to
have
an
upstream
issue
created
for
that,
because
that's
actually
news
to
me
about
the
gsat
spec.
I
honestly
thought-
and
maybe
it's
from
cni
specific
bias.
But
I
I
didn't
know
that
those
trailing
commas
were
okay.
So
certainly
the
if
you
look
for
the
container
networking
github
org
upstream,
that
is
probably
a
good
place
to
start
to
file
an
issue
and
that'll.
C
Be
cool
to
fix,
awesome
and
then
waleed
asks.
So
we
can
change
the
primary
cni
after
install
so,
for
example,
move
from
openshift
sdn
to
ovn
kubernetes.
What
happens
to
the
pod
and
service
configurations?
Do
they
need
to
be
restarted?
Does
the
node
need
to
be
rebooted.
E
That
is
handled
for
you
by
the
operator
and
what
I
believe
that
the
the
operator
does
is
that
it
gracefully
drains
the
nodes
and
reboots
them
when
it
needs
to.
C
Okay-
and
I
think
that
that
address
that
also
answers,
one
of
the
kind
of
very
first
questions
we
had,
which
was:
can
you
replace
the
sdn?
You
know
and-
and
you
know
substitute
one
for
the
other,
and
I
think
the
answer
to
that
is
yes,
although,
as
christian
pointed
out
in
the
chat
earlier,
sometimes
the
options
that
you
want
to
use
so,
for
example,
ov
and
kubernetes,
with
hybrid
networking,
have
to
be
decided
upon
before
the
install
happens.
C
D
We have our first real challenge in this space with moving from our current, legacy default of OpenShift SDN to OVN. We need a way to get our customers onto our next-generation OVN networking, where all of our new development is going, without asking them to greenfield redeploy their clusters from scratch again. So we looked at a lot of different possibilities for moving the primary CNI plug-in from one to another, and it turns out that Multus is a great tool to facilitate that move.
D
And essentially, you know, the process... actually, Doug, do you want to address some of the specifics of how that works?
E
Sure. Essentially, we bring it in with your alternative SDN, and then we switch the traffic over that way. That's actually how it goes.
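For what it's worth, in the releases where the OpenShift SDN to OVN-Kubernetes migration is supported, the documented entry point is to set a migration field on the operator's Network configuration and let the operator handle the draining and rebooting; treat this as a sketch of the first step only, not the full procedure:

    oc patch Network.operator.openshift.io cluster --type=merge \
      --patch '{"spec":{"migration":{"networkType":"OVNKubernetes"}}}'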
C
Nice
yeah,
that's
malta
seems
like
it
was,
I'm
not
sure
if
it
was
super
smart
for
thought
or
accidental
genius,
but
it
seems
like
it
solves
a
lot
of
problems.
C
Come
on,
of
course,
of
course
that
was
that
was
what
I
was
going
with.
So,
as
I
said,
we
we
do
only
have
about
a
minute
left
now
before
we
go
over
to
openshift
commons
briefing,
so
I
want
to
take
the
opportunity
to
thank
you
mark.
Thank
you,
doug.
Thank
you,
tomo,
for
coming
on
today.
C
This
has
been
really
phenomenal.
Thank
you.
So
much
for
all
of
the
information
that
you've
shared
with
us,
so
I
saw
ht631
I
saw
you
said
you
have
another
question.
Please
feel
free
to
reach
out
to
me,
or
chris
yep,
so
andrew.sullivan
redhat.com
or
on
social
media.
Practical
andrew
at
twitter.
C
You're,
welcome
to
reach
out
to
us
anytime,
we'll
make
sure
that
the
team
here
gets
those
questions
and
we
can
get
good
answers
for
you
absolutely
yeah,
and
so
thank
you
again
for
everybody
who
who
is
watching.
Please
keep
an
eye
out
monday
for
the
excuse
me
friday
for
the
follow-up
blog
post
that
has
all
of
the
details
and
stuff
that
we
shared
inside
of
this
session.
Thank
you
again,
mark
doug
and
tomo
and
have
a
great
rest
of
your
day.
Everyone.