From YouTube: KubeVirt Community Meeting 2022-07-13
Description
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/
A: My name is Daniel. Hello, everyone. First of all, we have the introductions coming up, so now's the time for everyone who is a first-time visitor of the community meeting to introduce yourself. Do we have anyone who is new?

A: So, okay, as always, I would like to point out that if you want to join the community, we also have links from Twitter and from the community page, and also how to join as a GitHub project member.
B: What did I put down here? Okay, bridge mode for migration. So I just caught up on this PR; hopefully some people from the networking team can weigh in here. My concern, and what I'm trying to understand, is: if we allow migration with bridge mode, how dependent on this working is the guest operating system itself?

B: Does anyone on the call feel confident to answer any of these questions? I'm not a networking guy.

C: I mean, I don't have a strong answer to what you asked. I think it needs to be raised in the proposal, and then we can try to answer it there. Generally it looks a bit dangerous; there are some assumptions there, and we need to make sure we are okay with those assumptions.

B: Okay, and do we have a... we do have a proposal, I see that now, so it's actually in the community directory. Okay.

B: I will follow up on that proposal with some of my questions, then. Maybe I'll just move on to my next topic, since they're kind of related: this macvtap network binding that the same contributor is trying to get in. This was a long time ago, when we were looking at how we were going to tie the pod network into the VM; I felt like we looked at macvtap directly, without a bridge, at one point, and I can't remember why we decided not to take that approach.

B: What I'm trying to determine here is, even though I don't like the idea of having multiple binding modes that are really similar: is macvtap actually more performant than bridge, and are there really no drawbacks? I guess I'm trying to understand what the cons of this macvtap binding are, where we are going to get ourselves into trouble, and where the complexity lies compared to bridge. Does anyone have thoughts on that?
C: So I guess the most appropriate person to answer is probably Miguel, but in general... I'm not sure which proposal you are talking about; I think there are actually two today. There is one that Miguel did maybe two years ago, something like that, which is a CNI, a CNI that creates a macvtap.

C: With that, you don't have the bridge and you don't have the veth between the pod and the host. You have only one hop, I would say, or two hops, directly from the VM to the host NIC that this macvtap is connected to. The disadvantage of it is that you cannot inject traffic or read the traffic that is going on there; it's directly connected to the node and you have no way to get in there. You cannot put a DHCP server in and stuff like that, but it is much faster, and I guess that's okay. But there is another proposal; I think maybe that is the one you're talking about.
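For anyone unfamiliar with what that one-hop setup looks like, here is a minimal sketch, assuming placeholder interface names ("eth0", "macvtap0") and not the CNI's actual code, of creating a macvtap device directly on a host NIC with iproute2:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run invokes the ip command with the given arguments and surfaces its output on error.
func run(args ...string) error {
	out, err := exec.Command("ip", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ip %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// One hop: the macvtap sits directly on the host NIC; no bridge or veth pair in between.
	if err := run("link", "add", "link", "eth0", "name", "macvtap0", "type", "macvtap", "mode", "bridge"); err != nil {
		panic(err)
	}
	// Bring it up; the matching character device appears as /dev/tapN, which is what a VM would consume.
	if err := run("link", "set", "macvtap0", "up"); err != nil {
		panic(err)
	}
}
```

Because the tap sits straight on the physical NIC, there is no point in the path where something like a DHCP server or traffic inspection could be inserted, which is the trade-off described above.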
B: So maybe just to talk about the first one before the second one: with the macvtap CNI, how would that work with Istio, or would it work with Istio at all?

C: It can work if you connect it as a secondary network, and then it doesn't interfere. But for the primary network, if you put it on the primary, it will not work. I don't think it is used for the primary network, though.

B: Andre, cool. Yeah, I'm trying to understand the second approach, and I'm talking specifically about this PR, which I'll post to the chat. How does it differ from the CNI plug-in, and why would we keep the bridge binding if this is more performant? What's the drawback of this macvtap approach compared to bridge, or is there one?

D: Sorry, I just joined a little bit later, so I skipped part of the conversation. I checked the performance and I found that macvtap is working better. The only problem I see here is that it's probably only supported from some specific kernel version, so it might not work on some old kernels, I guess.
E: Hey, this is Fabian. I've got one question: wasn't there a problem with macvtap that you were not able to speak to the same host when macvtap is being used?

D: There were some problems, but the macvtap proposal I did works this way: the macvtap binds directly to the pod's interface, so it's working exactly the same way as having the pod network without the virtual machine. So there are no issues there, to answer your question. The thing you're talking about is the macvtap CNI, when it is bound to some physical interface; there is this problem there, but it can also be solved by adding a macvlan interface on the node. For now we don't have anything for that, but I guess you can do that with this helper, I forgot what it's called.

C: Why do we need the macvtap... I mean, a different binding? Essentially this PR, if I remember correctly, replaces the bridge in the pod with a macvtap. So instead of a bridge, you have a macvtap there, which for sure gives better performance. So this...

D: That is a good question, and I can actually answer it. The macvtap CNI provides you an interface for building physical networks, but in case you want to use the standard CNIs, which we're trying to reuse, we have these options: you can use bridge, you can use macvtap, you can use masquerade and so on. I would say these are two different modes. In the first case, it will just pass the physical device through into the network namespace and give it to the virtual machine. In case you are using any other CNI, it checks: if there is no macvtap interface inside the pod network, it will just bind to the veth interface, and it will work the same way as bridge does right now.
C: So, but this is... I don't know if this is the question or not, but generally speaking, it's not necessary that you have a really different binding. You want to change the implementation by replacing the bridge inside the pod with a macvtap; that's a different story. Anyway, that can also be considered.

C: Only a single VLAN... in general, I think it will not work if, inside the virtual machine, the guest, you have, for example, multiple VLANs with different MACs or stuff like that; it will not pass them. It can only handle one MAC, which I think is an okay limitation in most cases, but I think that's the main difference.

B: Where you would need multiple... well, I'm sorry. Associated with a bridge: do we have that today?

B: So Edward was just saying that one of the differences between this macvtap and the bridge mode is that with the bridge mode you could potentially, I guess, associate multiple interfaces with it. I don't think we ever do that.

A: Multiple interfaces there.
D: If someone uses it like that, this is a good thing to think about. I would say that, anyway, there should be support from the CNI itself. If you're using classic CNIs like Cilium, Flannel or whatever, they actually have this check on the CNI side, so if your virtual machine sends packets with the wrong MAC addresses, they will not be routed anywhere. So the thing you're talking about, if you want to have KubeVirt running inside KubeVirt, is possible, but in that case you still need to use a CNI that actually supports this.

B: One thing that might help me here: let me try to repeat back what I think I understood, and maybe you can tell me if my understanding is correct. We're saying that it's possible, within the guest, so within that virtual machine, to be using the network devices provided to it in a way that would not work. For example, if we create a bridge with multiple MAC addresses within the guest, because of that nested use case we talked about or something like that, that traffic would not get passed with the macvtap.
D: Yeah, that's true, but I would say let's stop talking about that, because this thing I did, building the pod networking using macvtap, is exactly the same thing; it just replaces bridge mode. So if you use some standard CNI and you bind it using the bridge binding, you will have the same issues, because the CNI will block the traffic with the wrong MAC addresses and with the wrong IP addresses.

B: That was my next question: does masquerade allow for that? Would masquerade, with the NAT behind it and everything, allow passing that traffic?

D: I would not consider using pod networking for that. I think pod networking is nice when you need to route some external traffic into the cluster, but if you want to have, say, KubeVirt inside KubeVirt, it's better to use CNIs that provide you layer 2 connectivity between the virtual machines.

D: Yeah, but the only problem with this is that you can't reach some other virtual machine on another node, because you have to have a route inside the original one.
C: We are using it by default because this is the only binding; the masquerade binding is the only binding that supports migration of the VMI. So if you don't use this binding, you cannot migrate the VMI. That's how the pod network works today, with masquerade, but I don't know if this is...
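For context, this is roughly how the binding is selected per interface in a VMI spec. The sketch below uses the kubevirt.io/api/core/v1 Go types as I understand them; field names may differ between KubeVirt versions, so treat it as an illustration rather than authoritative API documentation:

```go
package main

import (
	"fmt"

	kvv1 "kubevirt.io/api/core/v1"
)

func main() {
	vmi := &kvv1.VirtualMachineInstance{}

	// Masquerade on the pod network: per the discussion above, the binding that
	// supports live migration of the pod network today.
	vmi.Spec.Domain.Devices.Interfaces = []kvv1.Interface{{
		Name: "default",
		InterfaceBindingMethod: kvv1.InterfaceBindingMethod{
			Masquerade: &kvv1.InterfaceMasquerade{},
			// Bridge:  &kvv1.InterfaceBridge{},  // the bridge binding discussed here
			// Macvtap: &kvv1.InterfaceMacvtap{}, // the macvtap binding discussed here
		},
	}}
	vmi.Spec.Networks = []kvv1.Network{{
		Name:          "default",
		NetworkSource: kvv1.NetworkSource{Pod: &kvv1.PodNetwork{}},
	}}

	fmt.Printf("interfaces: %+v\n", vmi.Spec.Domain.Devices.Interfaces)
}
```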
C: I think the only thing that can be said here is that if you have an existing deployment with masquerade, and you have multiple MAC addresses inside your guest that are coming from your guest, it will work with masquerade. It doesn't matter which CNI it is. If you...

B: It might have had to do with privileges, and we didn't have an understanding of how to get virt-handler to do things on our behalf.

D: Like the macvtap CNI is doing now.

C: There is a limitation with the macvtap CNI: when you do a migration, then on the target node you will create a new macvtap device, and you will have to set it with the specific MAC address that you are going to have inside your guest. Because during the migration both exist at the same time, you have the same MAC twice, so you may have a problem there, but it's a temporary problem. So there is a problem with that. I don't know; when you move it to a different node, I think it's less of a problem. If you have it on the same node, which is not supposed to work like that... for example, in our tests with kind it doesn't work. So there was some complication there, but I don't see how it can work.
D: To answer you: I tested this case, and I used this MAC... I don't know what it's called; it's like IPAM, but for MAC addresses, the thing which assigns MAC addresses for all your virtual machine specs. In this way, live migration with the macvtap CNI is working fine. And you asked about this pull request for using macvtap for building the pod network: the machine, after the live migration is done, will change the network card if the MAC address is different, and if it's not, it will just do a link down and link up to renew the DHCP lease.
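To make the link down/link up idea concrete, here is a rough sketch of what the guest effectively experiences, assuming a placeholder interface name ("eth0") and root privileges; it is not the actual KubeVirt implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// bounce takes the guest link down, waits, and brings it back up so the guest's
// DHCP client notices the carrier change and renews its lease.
func bounce(iface string, downFor time.Duration) error {
	if out, err := exec.Command("ip", "link", "set", iface, "down").CombinedOutput(); err != nil {
		return fmt.Errorf("link down: %v: %s", err, out)
	}
	time.Sleep(downFor)
	if out, err := exec.Command("ip", "link", "set", iface, "up").CombinedOutput(); err != nil {
		return fmt.Errorf("link up: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Some guests (Windows comes up later in the discussion) need a longer
	// down time before their DHCP client refreshes.
	if err := bounce("eth0", 15*time.Second); err != nil {
		panic(err)
	}
}
```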
D: Yes, okay. Then you perform the live migration, and during the live migration the old pod is still handling the connections. After the live migration is done, there is a specific handler which removes the virtual network card from the virtual machine and attaches the new one with the correct MAC address.

B: We talked about that earlier, maybe before you joined. Since we're on the topic, I was trying to understand: that detach and reattach, is how well it is tolerated going to depend on the guest operating system?

D: I tested it with a Fedora VM with cloud-init inside, and it worked perfectly. I don't remember, actually, about Windows VMs, but I do remember that when I had a Windows virtual machine, it was always trying to get a new IP address from DHCP when the new adapter got injected.

B: A process, let's say we have an HTTP server in there, and it's bound to that network device that we detach, and then we reattach a different-looking but similar one. What happens to that process?

D: It depends on which IP address it is listening on. If it's listening on 0.0.0.0, nothing will happen; it will continue handling the connections. But if it is bound to a specific IP address, it will probably not work anymore.
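As a small illustration of that point (not KubeVirt code; the address and ports are placeholders), a listener bound to 0.0.0.0 versus one pinned to a specific IP:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Bound to all addresses: keeps accepting connections even if the NIC
	// underneath is detached and a similar one is reattached.
	wildcard, err := net.Listen("tcp", "0.0.0.0:8080")
	if err != nil {
		log.Fatal(err)
	}
	defer wildcard.Close()

	// Bound to one concrete IP: tied to that address, so if the address goes
	// away with the old NIC, this listener stops being reachable.
	pinned, err := net.Listen("tcp", "10.0.2.2:9090")
	if err != nil {
		log.Printf("binding to the specific IP failed: %v", err)
	} else {
		defer pinned.Close()
	}

	log.Println("wildcard listener on", wildcard.Addr())
}
```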
B: Okay, that's what I was trying to get at. So there are some guest considerations and application considerations for this mode; it could work if people really know what they're doing and can tolerate that.

D: Well, we're developing our cloud platform, and we're trying to reuse the things we have as much as possible. For now we have implemented Cilium, and we like all the things it brings us; the policies are amazing, and we want to provide the opportunity of live migration for the virtual machines. We also patched Cilium to have the ability to specify the MAC address and the IP address for the pod. I actually wrote a thing called vmi-router; it just adds the routes. For example, if you have a virtual machine, you can specify some IP address for it, and when you're live migrating, the new pod gets the same IP with the same MAC address. So when the virtual machine is live migrated, we just swap the routes, and the virtual machine continues working with this IP address with no changes.
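A loose sketch of that route-swap idea, with placeholder names and addresses rather than the actual tool's code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// pointRouteAt adds or updates a /32 route so traffic for the VM's stable IP
// follows it to whichever pod interface it currently lives behind.
func pointRouteAt(vmIP, dev string) error {
	out, err := exec.Command("ip", "route", "replace", vmIP+"/32", "dev", dev).CombinedOutput()
	if err != nil {
		return fmt.Errorf("route replace: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Placeholder values: the VM's assigned IP and the host-side interface of
	// the target virt-launcher pod after migration.
	if err := pointRouteAt("10.10.10.5", "lxc12345"); err != nil {
		panic(err)
	}
	fmt.Println("route updated")
}
```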
D: For now, that's the only way to do it through the standard CNI, or you can use some layer 2 network, but that is not a universal solution. Anyway, if you want to communicate with the Kubernetes cluster, you always need to be in the pod network, and we found that masquerade mode is actually not so performant, but bridge does not allow the migration. It would be nice to have this feature, at least for creating some routers on top, which would allow you to pass external traffic through into the virtual machines, and then use some macvtap CNI, for example, to route it inside, or bridges.

B: Okay, I think... yeah, thanks for answering a lot of these questions. I think the last thing, if we could maybe focus it down a little bit: I'm still trying to understand the macvtap binding versus the current bridge binding, the pros and cons of these. Is there any con, any disadvantage, to using the macvtap binding that you're proposing versus the current bridge binding that we have?

C: Having to unplug and re-plug the interface, or take its link down like that, when you do a migration can be a killer for some applications.
D: For now, live migration for bridge networking is not allowed either, and if we are going to accept my PR, which will allow doing this, it's actually the same thing that should be done for macvtap and for the bridge as well: you should set the correct MAC address, or, in case the MAC address is the same, just do a link down and link up to renew the routes and IP addresses.

B: Yeah, it seems like the migration case is very, very similar. The big thing is, we have to force that DHCP renewal even if the MAC address is the same; a new DHCP request, whatever it is. Okay, I think I have the information I need.
B: And the device... I saw one comment about using a device plug-in rather than having, it looks like, virt-handler reaching in to do some device setup. Did you have any thoughts on the device plug-in comments?

B: Sorry, I didn't get that across. I believe you are using virt-handler for that, this process that can reach into the pod to create devices, or really just do privileged actions. It looks like you're using that to create the device inside the compute container. There was a comment about using a device plug-in, which is a Kubernetes concept, to handle that device creation instead. Did you have any thoughts on that? Would that be feasible, or what is that, even?

D: Hey, virt-handler is using a binary which is called the tap device maker, and the creation of the macvtap is actually done by the same binary. The only thing it requires is to set specific permissions on the cgroup to allow using this tap device. It works a little bit differently than tap, but the common logic is the same, and it's still virt-handler doing it, as before in bridge mode.

B: Yep, okay, interesting. Well, thanks for the explanations here. I feel bad that I haven't seen this earlier; this is all good stuff. Okay, do you feel...
B: Yeah, are there any blockers? What are the points of discussion that are preventing you from being able to move forward here?

D: For now I'm working on this first pull request, which allows you to live migrate with bridge mode, and later, after it is merged, I will continue working on macvtap for building the pod networking. The only problem I see is that I'm not very familiar with the testing suite; I've already been working on it for a second week and there are still some issues. That's actually my question for the open floor: if anybody can help me, I think it's the last thing that needs to be done to make it work.

B: You should be fine, that should work. So can you create the cluster using, like, make cluster-up within our source tree?
B: I'd really encourage you to find a Linux machine to do local development on; otherwise it's going to be a fight that's impossible to win. I already...

B: Go ahead, Fabian.

E: Yeah, just another thing you could do is get a big bare-metal machine on AWS or Packet to drive it there; then you can install Linux, with CentOS or whatever, to bring up this thing there.
D: That's also possible, yeah. But the question I have is a really simple one: if you just open the link from the open floor, it mentions that tests which change the KubeVirt configuration shouldn't be running in parallel, and my question is: where should I put it, or should I enable some feature so that it doesn't run in parallel? Because I just got confused. Maybe you can quickly...

B: I'll put it in the chat; thank you. Just put that in the name, and that will make it run in serial, yeah.
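For context, a sketch of what putting the marker in the test name looks like in a Ginkgo test; the exact "[Serial]" spelling is an assumption here, since the real tag was shared in the meeting chat:

```go
package tests_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// The tag in the description is what the serial lane filters on, so tests that
// mutate the shared KubeVirt configuration are kept out of the parallel run.
var _ = Describe("[Serial] changing the KubeVirt configuration", func() {
	It("runs apart from the parallel tests that touch the same config", func() {
		Expect(true).To(BeTrue()) // placeholder assertion
	})
})
```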
E: I just want to say kudos; I really liked seeing that PR about disconnecting the NIC and reconnecting. And just a note: I think for Windows you need to do it for at least 13 seconds or so to get the DHCP lease to refresh. That might also be interesting, because it doesn't take that DHCP option to request a refresh from the guest.

D: There are two different cases, but yeah, you're right; in the main case it will usually be reconnecting the device, at least when the CNI does not support specifying the MAC for the pods. I was thinking of writing a KEP for Kubernetes which would allow doing that, because there is support on the CNI side, but there is no support on the Kubernetes side, and everyone implements it a different way.
D: So you can add some annotation; I know a few CNIs that support this. I think there should be some common annotation which would allow you to specify the MAC address for the pod, and in that case there would just be a link down and a link up. For the link-down/link-up case, we were considering a few options for how we could do it another way. There were options to set a really short DHCP lease, but if I remember well, I saw some DHCP clients that ignore this option; they just set the address for an infinite lifetime and nothing changes. Another option is to send... I don't remember what the packet is called, but there is some RFC for DHCP which would allow you to force a renewal of the DHCP lease.
E: Yeah, that's why I was mentioning that maybe, you know, you make the timeout or the switch time effectively configurable, because for Windows, if the link-down time is long enough, then it's actually refreshing, right? So if you take the link down for, I think it was 13 seconds, but it could be 20, and you bring it up again, then Windows refreshes, so it doesn't wait for the DHCP lease.

D: There are two options. If the MAC address of the target pod is the same as the source one, so the MAC address is not changed, we just do a link down and a link up. In case the MAC address is changed, as usually happens with any CNI if you don't specify a MAC address annotation for the pod, it will reattach the whole device, but the PCI address doesn't change, right? You plug it into the same place; actually, I don't know, I just do a detach and attach of the device on the same address. And the only thing I want to mention: if you do a link down and link up, it is actually visible from the virtual machine side, but I think it will not affect applications in most cases, and I think this is a less destructive method than hot-plugging the network card.
D: For that reason, we are considering using it: attach the MAC address to every virtual machine and do this operation just to update the routes. Because, as I said before, we have a virtual machine with an assigned IP address and MAC address, and when it's live migrated to another node it still has the same IP address, but it needs to update the routes inside the virtual machine, so we still need to do a link down and link up even if the IP has not changed.

E: Yeah, no, sorry. I mean, when a VM is using the bridge binding, will any VM with the bridge binding be using this functionality if the feature gate is enabled, or do you need to specify it on the API to get that behavior?

D: So even if you use bridge to bind a CNI through Multus, this behavior will also affect it, because, for example, you can bind Flannel, which does not allow you to migrate the IP addresses from one node to another.
A: Okay, so I think that was a good wrap-up of the discussion. Thanks, everyone. I think we are nearly out of time, so I will skip the pull requests needing attention, because we don't have any, and the mailing list review, and give everyone five minutes back so that we can conclude a little bit earlier today. Yeah, thanks everyone for your attendance, thanks for your participation, and have a nice day, everyone. Thank you.