From YouTube: Summit 2022: Network Interface Hotplug
A
Okay, so folks, next up we have Miguel Barroso, who's going to be talking about one of the major features for a recent release of KubeVirt: network hotplug. So welcome, Miguel.
B
Hello. I'd first like to thank you all for joining, especially those of you for whom it is already late. Thanks for joining this talk about network interface hotplug for KubeVirt, here at the KubeVirt Summit. The first thing I think it's important to say is a short disclaimer: none of this code has been merged. This is a feature that is still work in progress, but I nevertheless think it's a very interesting one.
B
The good thing is that you're still in time to make your voice heard and state your opinion on pretty much everything I'll be showing here. You can find my email in the bottom left corner if you want to reach me personally. And with that, let's move to the agenda of the presentation.
B
In order for us all to follow the things I'm going to be speaking about, I think this deserves a short introduction, first about CNI and then about Multus: I'll introduce the Multus project and also give a brief overview of how KubeVirt sets up the networking for VMs.
B
Once that's clear, we can move into the motivation, problem, and goals, and with that we'll move into the implementation parts, both for the Multus project and for KubeVirt. I'll then show the demo of this proof of concept and finish with the conclusions and the next steps for all this work.
B
Okay, let's move to the introduction section, in which we first have to address the Kubernetes networking model. The Kubernetes networking model is quite simple, and it states that all pods can communicate with all pods across different nodes, even if these pods are using the host network.
B
The second statement it makes is that an agent on a node can communicate with all pods on the node that the agent manages. In order to implement this networking model, Kubernetes uses CNI, which stands for Container Network Interface. CNI is a CNCF project as well, and it is a plugin-based networking solution for Kubernetes. It is also container orchestration engine agnostic.
B
This means that, as far as CNI is concerned, Kubernetes is just another runtime; plenty of runtimes could make use of CNI to implement their networking. Okay, so the way CNI works: there are two events that are very interesting to CNI, pod creation and pod deletion. When a pod gets created, the CNI plugin will be invoked by the runtime, and it will create and configure a network interface in the pod and connect it to the cluster-wide network.
B
Now, how does CNI work? A CNI plugin is in fact just a binary executable located on the host file system, and it is invoked by the runtime as a new process. The configuration of the plugin is passed via standard input: it's a JSON-encoded string with the full configuration in it.
B
The parameters, one of which is the command (ADD or DEL), are passed via environment variables to the plugin. The plugin then creates and configures things, and the result is written to the plugin's standard output and cached on the host file system.
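To make that concrete, here is a minimal sketch of how a runtime invokes a CNI plugin, following the conventions of the CNI spec (the bridge plugin and the file paths are just illustrative examples):

    # Parameters go in environment variables, configuration goes on stdin.
    CNI_COMMAND=ADD \
    CNI_CONTAINERID=example-pod-id \
    CNI_NETNS=/var/run/netns/example-ns \
    CNI_IFNAME=eth0 \
    CNI_PATH=/opt/cni/bin \
    /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
    # The plugin writes its JSON-encoded result to standard output.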
B
It's interesting to say that Kubernetes chose to use CNI in a very simple way: it just creates a single pod interface on all pods, and it will use the same CNI plugin the entire time. What it in fact implements is a single cluster-wide network that interconnects all the pods in the cluster.
B
This means that if, for whatever reason, you need or want more than one networking interface in the pod, you need to search for answers outside the realm of Kubernetes. Kubernetes will not do that; it's outside its responsibility. And this is why we're speaking about Multus right now: Multus' goal is quite literally just that, to enable multiple interfaces on a Kubernetes pod.
B
It is a meta CNI plugin, and the word meta here is to denote, or to highlight, that it is a CNI plugin just like the other ones, but it will in turn invoke other CNI plugins. So it needs to figure out which plugin to invoke, invoke it, and then proxy the result back.
B
Okay, so let's discuss and show a little bit how Multus is used. If you have a pod for which you want multiple interfaces, you need to specify a list of attachments using a special annotation on the pod. The annotation name is k8s.v1.cni.cncf.io/networks.
B
Its value is also a JSON-encoded string where you pass a list of what the Multus spec calls network selection elements. The examples we're looking at here are quite simple, each just featuring a name, but in reality this allows for more complex configurations: you can specify a specific MAC address for a particular attachment, a specific IP address, these sorts of more complex things. And what does Multus do with this?
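As a sketch, a pod requesting one extra attachment would carry an annotation like this (the names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      annotations:
        # List of network selection elements; an entry can also carry
        # fields such as "mac" or "ips" for more complex setups.
        k8s.v1.cni.cncf.io/networks: '[{"name": "dataplane"}]'
    spec:
      containers:
      - name: app
        image: registry.example/app:latest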
B
So, in essence, Multus will look for the configuration of the CNI plugin in the Kubernetes data store: the attachment must match a NetworkAttachmentDefinition object, and it must match by name. So here we have the dataplane attachment, and you must have a matching dataplane NetworkAttachmentDefinition. In the NetworkAttachmentDefinition's spec.config you'll have a JSON-encoded string, which will in essence be the configuration that you pass to the delegate plugin.
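A matching NetworkAttachmentDefinition might look like this minimal sketch (the bridge CNI plugin and the names are illustrative):

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: dataplane
    spec:
      # JSON-encoded CNI configuration handed to the delegate plugin.
      config: '{
        "cniVersion": "0.3.1",
        "type": "bridge",
        "bridge": "br10"
      }'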
B
The delegate plugin is the plugin that will in fact create the extra interface and connect it to your secondary network. In this diagram we have two scenarios. The left diagram represents a vanilla Kubernetes deployment, with just a single cluster-wide CNI plugin that implements a single cluster-wide network; Flannel, as an example.
B
Multus will figure out the type of plugin to invoke, invoke that plugin, passing it the configuration as we've shown before, and create one interface per each of the additional network selections. Now that this is clear, let's go and mention a few things about KubeVirt networking. So far we've covered CNI, which basically configures the networking within the pod; now we're talking about KubeVirt, and we want to have the VMs networked.
B
So we want to have networking in the VMs, not only in the pod, and the question is: how can we extend the networking from the pod interface into the VM? There are quite a few ways to do this, and I'll just use one as an example, something that is called bridge binding. For it, KubeVirt will need to perform two tasks, the first of which is to create auxiliary networking infrastructure in the pod; in this case, as I've said, a Linux bridge will be created.
B
The second KubeVirt task is to instruct libvirt to point at this already configured tap device. For that we need to specify an interface of type ethernet, point at the tap by name, and be sure to disable the managed flag. If we don't, libvirt will attempt to reconfigure the tap device with a MAC address and an MTU, which will probably cause plenty of trouble, because the pods where libvirt runs do not have the capabilities to do so.
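In libvirt domain XML terms, that interface definition looks roughly like this sketch (the tap device name is illustrative):

    <interface type='ethernet'>
      <!-- Point at the pre-configured tap by name; managed='no' keeps
           libvirt from touching its MAC address or MTU. -->
      <target dev='tap0' managed='no'/>
      <model type='virtio'/>
    </interface>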
B
Okay, and once this happens, finally, when QEMU boots up the VM, an emulated network device will be created in the virtual machine from the previously created tap.
B
Now for the motivation: we have some virtual machines that run critical workloads that simply cannot tolerate a restart without impacting service. A common scenario is, for instance, a VM that is created before the network exists; for whatever reason, an organization's network topology changes, and the workload must connect to this newly created network.
B
Okay. Given this, we can now define the problem as providing dynamic attachment of L2 networks without restarting the workload, whether it is a pod or a virtual machine. And now we can clearly list our goals, which are: to add network interfaces to running VMs; to remove network interfaces from running VMs; and, the third one, that a VM can have multiple interfaces connected to the same secondary network.
B
Finally, one last thing: whenever I said VMs before, you can replace that with pods; we want to do the exact same things for pods. With this we can move into the implementation section, where we'll start with the changes required in Multus. Let's start with how this feature will be used in Multus. Remember that the way the user indicates secondary networks is via a special annotation on the pod, so it's only natural that the way to plug new pod interfaces is also by updating this annotation.
B
So what we want to do is this: if you want to add a new interface to the pod, you just add a new attachment to this list. In the example below we've added a new attachment named new-shiny-network. On the other hand, if you want to remove an attachment from the pod, all you have to do is go ahead and delete that attachment from the annotation list.
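In other words, hot plugging is just an annotation update. A sketch, reusing the example names from above:

    # Before: one attachment.
    k8s.v1.cni.cncf.io/networks: '[{"name": "dataplane"}]'
    # After: appending an entry hot plugs an interface;
    # deleting an entry unplugs it.
    k8s.v1.cni.cncf.io/networks: '[{"name": "dataplane"}, {"name": "new-shiny-network"}]'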
B
An interesting thing to say here is that we do not plan at all to allow updating an already existing attachment; we are not thinking of leaving the door open for that. Let's say you want to update the IP address of a particular attachment, for instance, by just updating its value in the annotation: that is not where we're trying to get at. Updates to existing networks are out of scope.
B
We just care about adding new attachments or removing old attachments. A question that is probably coming to you right now is: okay, but this is reactive, so you need to be looking at the pod, and CNI is simply a binary executable located on the host file system that is triggered by the runtime on a certain set of events, whenever a pod gets added or deleted. So where will we put this code, and what is it that we want it to do?
B
As I've said before, we want to have something watching the pods, tracking their annotations, and acting whenever an annotation changes. For instance, whenever new attachments are added to this annotation, we should trigger CNI ADD for these new attachments. Whenever a network selection element, on the other hand, is removed from the annotation of a given pod, we should trigger the corresponding CNI DEL for that particular attachment. For that, Kubernetes gives us the control loop, the controller pattern, and that is exactly what we want to use.
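A minimal sketch of such a control loop in Go, using client-go's informer machinery (the diffing and CNI invocation are elided; only the annotation key comes from the Multus spec):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    const networksAnnotation = "k8s.v1.cni.cncf.io/networks"

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 0)
        podInformer := factory.Core().V1().Pods().Informer()

        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            UpdateFunc: func(oldObj, newObj interface{}) {
                oldPod := oldObj.(*corev1.Pod)
                newPod := newObj.(*corev1.Pod)
                if oldPod.Annotations[networksAnnotation] == newPod.Annotations[networksAnnotation] {
                    return
                }
                // Diff the network selection elements here: trigger CNI ADD
                // for added attachments and CNI DEL for removed ones.
                fmt.Printf("networks changed on %s/%s\n", newPod.Namespace, newPod.Name)
            },
        })

        stopCh := make(chan struct{})
        factory.Start(stopCh)
        <-stopCh
    }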
B
I think that a RESTful API that speaks JSON is more than enough, and this thing is listening on a Unix domain socket that is bind-mounted onto the host, so that the client can communicate, send requests, and get the replies from it. This Multus controller will do all the heavy lifting: figure out the configuration, invoke the delegate, which will in turn create the interfaces in the pods, grab the response, and send the response back to the Multus shim.
B
The shim that has the response will afterwards echo the result to standard out, and it will also cache it on the host file system. This allows the Multus controller that we see here, a daemon that is listening and exposes a RESTful API, to also host the controller. So it will in turn do two things: it exposes a RESTful API through which the runtime will say that a new pod got added or an old pod was deleted.
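Here is a sketch of the shim side in Go: forwarding a request to the daemon over the Unix domain socket (the socket path, endpoint, and payload shape are assumptions made for illustration):

    package main

    import (
        "bytes"
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // HTTP client whose transport dials a Unix domain socket instead of TCP.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/multus/multus.sock")
                },
            },
        }

        // Forward the CNI request (stdin configuration plus environment
        // parameters, serialized as JSON) and print the result to stdout,
        // just as a regular CNI plugin would.
        payload := []byte(`{"command": "ADD", "config": {}}`)
        resp, err := client.Post("http://multus/cni", "application/json", bytes.NewReader(payload))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        result, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(result))
    }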
B
Okay, and this takes us to the changes in KubeVirt. I think it's easier to start with an example and show a diagram of the networking within a pod and a virtual machine. As I've shown before, we have our in-pod bridge, our pod interface, and the running interface in the VM, and everything is networked: our VM has one interface and is connected to one network.
B
What we want here is a good API for interface hotplug for KubeVirt virtual machines, and it would follow the same approach we described for the pods previously: you update the virtual machine specification, you add or remove interfaces from it, and this should trigger the interface hot plug or hot unplug.
B
The thing is, this is not possible in KubeVirt, because users cannot mutate the VM specification. The only entities that are able to mutate it are the KubeVirt control plane entities, for instance the API server, virt-handler, virt-controller, those entities.
B
So for this we are proposing to mimic the hot plug disk feature: we add a new command to the KubeVirt CLI. You use this command, addinterface, and it will send a PUT request to a new subresource exposed on the API server and have the API server mutate the virtual machine specification for us. From there, the virt-controller takes over.
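As used in the demo later, the invocation looks roughly like this sketch (the verb follows the proof of concept as spoken in the talk; the flag names are hypothetical):

    # Hot plug an interface into a running VM via the new subresource.
    virtctl addinterface my-vm --network-attachment-definition-name dataplane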
B
Once this is done, the controller will look at the pods, and eventually it will see a new interface in the pod's network status. Once it sees the new interface in the pod's network status, it will patch the virtual machine status, saying that there's a new hot plugged interface; this is what's happening in this part of the diagram. Once the KubeVirt agent, virt-handler, sees this update, it will do the first thing that we've seen previously: it will create all sorts of networking infrastructure, in this case the bridge.
B
It will create the tap device, configure the tap device, and connect the pod interface and the tap. What we're left to do then, and it's also the agent that must do it, is to converge the virtual machine status, pushing the changes from the spec into the status. For that it will eventually call attach interface. I'm showing the virsh attach-device and detach-device commands here because they're easier to show.
B
Our proof of concept is using the Go binding, libvirt's Go binding, to do so, but this is the idea behind the payload. Once this command is executed, it will hot plug the interface into our virtual machine, which finally leads to the virtual machine having an emulated network device that is interconnected via a bridge to the pod interface, whose networking was already configured via CNI.
B
Now, this last one, the most modern one, q35, has a limitation in its definition: by default, it supports a single hot plug operation. Its API even says that when users require more than one hot plug, they must prepare in advance for it by requesting an appropriate number of PCI Express root port controllers.
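In libvirt terms, preparing in advance means declaring spare root ports in the domain definition, roughly like this sketch (one spare pcie-root-port per interface you plan to hot plug):

    <controller type='pci' model='pcie-root-port'/>
    <controller type='pci' model='pcie-root-port'/>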
B
Other platforms expose this as well, but they do it cluster-wide, so this number will be a cluster-wide setting. The curious fact here is that their defaults are totally different: OpenStack Nova defaults to zero, so no hot plug by default (it's configurable, you can change it), while oVirt chose something totally different: it defaults to 16, which means that you'll surely be able to hot plug more than one. Okay, let's move into our demos. I'm not sure if I'm very safe on time or not, so let's jump directly into a hot plug operation on a q35 machine type.
B
Okay, so here on the left I'm going to be provisioning the scenario. The first thing: this feature is protected behind a feature gate, so we needed to activate it; it's named "hot plug". The second thing I did was to provision a NetworkAttachmentDefinition; as you can see, the configuration is quite minimal. And finally, I provisioned a virtual machine.
B
So, excuse me, and see that there's a single networking interface in the VM. Now, here in the top right corner we're going to be watching the network status of the interfaces, and in the bottom right corner I'm going to be showing, in real time, the networks annotation list on the pod. For that we're going to need the name of the pod, which should appear any moment, and, as you see, we have an empty list of network selections. This means that there are no additional networks.
B
Now we're going to request an additional interface for our virtual machine by using the KubeVirt CLI.
B
Okay, as we see here in the bottom right corner, a new interface was listed on the Multus annotation, and now we see a new hot plug interface request in the top right corner. The phase "pod interface ready" means that the interface is available within the pod, and eventually the state converged and the interface got plugged into our virtual machine. Let's now... I'm not sure if this is... oh, it's running.
B
Let's now use the console, and as we see, we have a new extra interface here; we can see that the MAC address matches. And this is it. We're now going to show the thing I've spoken about before regarding the q35 machine type.
B
And as we see, we get an error here that no more PCI slots are available for this VM. So I'm now going to show the second demo, which is the exact same thing, but this time we're going to define the exposed knob that I've mentioned previously. Here's the network interface status list in real time, and, as you can see, here's our new attribute: I'm defining, for this particular VM, that I want 24 PCI ports in it.
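A sketch of the VM spec carrying that knob (the field name follows the proof of concept as spoken in the talk and may change):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            devices:
              # Pre-allocate PCIe root ports so a q35 machine can
              # hot plug more than one interface.
              numberPCIPorts: 24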
B
Let's now provision... oh, it's already provisioned. As we can see, .spec.domain.devices.numberPCIPorts replies with 24. Let's show the status of the VM: we see that we have a single interface. Let us now request one interface, and here we have it. And now let's request another interface.
B
And here's our second interface. So when we want to hot plug more than one interface, we need to tune up the virtual machine specification a little bit. Josh, how am I on time? Do I have time for another?
B
I've got one minute, so I do not have time for the hot unplug demo; let's jump to the conclusions. So, obviously, one conclusion of this work is that if you want to hot plug or unplug an interface on the VM, you first need to do it on the pod. And the second obvious thing is that we've chosen to implement this also for pods and to put that implementation in Multus, so Multus is a requirement to use this feature.
B
Another interesting thing to say is that the cluster default network is entirely off limits: we're not going to be touching that, nor do we want to. Lastly, some machine types require more changes, and that is the number of PCI root port controllers that I've mentioned and shown in the previous demo. The next steps are basically to productize it. We have open PRs for all the work; the one that's most advanced is the refactor into a thick plugin, but work has started on everything else. And I thank you for your time.
B
Thank you for listening to me, and, well, if you want to, you can reach out to me and give feedback about the feature. That's pretty much it; thanks for your time again. Okay, time for questions.
A
So, let's see what we've got here. The question is: can this add/remove network be done on a Windows VM?
B
I have not tried that at all, and I am honestly unsure whether this requires virtio drivers or not, so I do not have the answer to that right now. I'll try to come up with a better reply for it later on.
A
Okay, Petter wants to know: can we hot plug and unplug SR-IOV interfaces?
B
Oh, that's a good question. No, no; we're not planning on doing anything there. We're considering only hot plugging or unplugging bridge type interfaces. SR-IOV is especially tricky because it involves the device plugin, and nothing in my proposal involves contacting or using that framework. It would require... I mean, it's a different feature, actually.
A
Okay, well, so next up we're gonna have one of the folks from NVIDIA talking about...