From YouTube: KubeVirt Community Meeting 2019-08-14
C
But upgrades like this should be pretty seamless, like existing workloads and VMs remain running, all this stuff.
D
No, let me start differently. So we have the pod networking and the pod network, and we have interfaces inside the VM, and those interfaces can be bound to different networks. The networks can be provided by Kubernetes (that is the pod network), or they can be provided by other networking plugins like Multus or the SR-IOV plugin, or even Genie or Calico, which we don't support yet.
D
All of these meshed together means that a VM can have zero or more network interfaces, and each of these network interfaces can then be mapped to one of the networks available to the VM at the pod. Now, that is a little bit complicated, which is annoying, and we want to clean that up down the road, but it may take some time.
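To make that model concrete, here is a minimal sketch of a VirtualMachineInstance manifest with that interface-to-network mapping; the Multus network name and the API version are assumptions for illustration, not something stated in the meeting:

```python
# Minimal sketch of the interface/network model described above: each entry
# under domain.devices.interfaces is mapped by name to one of the networks
# listed under spec.networks. The Multus network name "my-vlan" is a
# hypothetical placeholder.
import json

vmi = {
    "apiVersion": "kubevirt.io/v1alpha3",  # assumed API version for 2019-era KubeVirt
    "kind": "VirtualMachineInstance",
    "metadata": {"name": "example-vmi"},
    "spec": {
        "domain": {
            "devices": {
                "interfaces": [
                    {"name": "default", "bridge": {}},    # bound to the pod network
                    {"name": "secondary", "bridge": {}},  # bound to a Multus network
                ],
            },
        },
        "networks": [
            {"name": "default", "pod": {}},  # the Kubernetes pod network
            {"name": "secondary", "multus": {"networkName": "my-vlan"}},
        ],
    },
}

print(json.dumps(vmi, indent=2))
```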
D
One thing, however, we wanted to fix, because we had trouble with it in live migration, was the pod network. The pod network is provided by Kubernetes: when you start a pod, you usually get a single interface, which is provided by Kubernetes. Because it's the pod network, it has certain constraints, and that is that this pod network has to behave the way Kubernetes defines the pod network.
D
Now, this behavior is problematic for live migration; that's actually where the problem came up. So what happens when you live-migrate a VM? It starts in one pod, right, and it's getting an IP there. When you live-migrate, the VM goes to a new node: a new pod is scheduled, the VM is live-migrated over to that new pod, the old pod is taken away, and then the VM is living on the new node in the new pod.
D
That also means that, after the live migration, the guest inside that VM is not reachable anymore through the pod network, because the VM is listening on one IP, that of the old pod, but the IP it has when looking from the outside is that of the new pod, which is a different one. So there's a disconnect between the inside and the outside. In theory, we could solve that problem by telling the VM: please refresh your IP and get the new one from the outside.
D
We could do that because we're using DHCP to provide the IP from the outside to the inside. The problem is that the DHCP clients we looked at don't support a forced DHCP refresh, and that's the problem: we can't reliably resolve that situation for every guest to get the new IP, and that's bad.
D
So there was some discussion saying: all right, if we cannot enforce a new IP on the guest after live migration, we need to live with that. But that would lead to a connection loss after live migration, and live migrations can happen transparently. So what does that mean for a user? It could mean that right now your VM is connected, and some minutes later it just disconnects, and you've got no idea why, and that's bad.
D
So what we were thinking for the pod network is actually a different way to connect the VM to the pod network, and that is using SLIRP or masquerade. What is the difference to the previous method I was describing? When we're connecting a VM to the pod network using SLIRP or masquerading, both of them are NAT-based methods, right, doing network address translation. Then the VM will have one link-local IP, which is only local to the VM and which is not the IP it has when looking from the outside.
D
The benefit is that after live migration the VM keeps the same IP and it's still reachable, because network address translation is happening when the traffic is coming into the pod and going to the VM, and vice versa when it's going out of the VM into the pod network. The biggest drawback, however, is inside the guest.
D
You do not see straight away what IP you have on the pod network, and that's problematic. For example, and this was the use case which actually raised awareness that there's a strong problem with that approach: if you use kubeadm inside the VM to join an existing Kubernetes cluster, then this will fail with the new approach.
D
That is because kubeadm only sees the link-local IP address and not the public address of the VM anymore, so that is problematic in that use case. So we have two bad situations: in the one case we break the connectivity of the VM, it can't use the network at all anymore after migration, and it comes as a surprise; in the other case migrations are covered, but certain use cases become more difficult to enable, like kubeadm in this case.
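For comparison, a short sketch of how the same pod-network interface would be declared with the bridge binding versus the NAT-based masquerade binding; the forwarded port is only an illustrative assumption:

```python
# Sketch of the two pod-network bindings discussed: bridge hands the pod's
# IP straight to the guest (which breaks across live migration), masquerade
# NATs traffic so the externally visible IP survives migration while the
# guest sees only a local address. Port 80 below is just an example of an
# explicitly forwarded port.
import json

bridge_interface = {"name": "default", "bridge": {}}

masquerade_interface = {
    "name": "default",
    "masquerade": {},
    "ports": [{"name": "http", "port": 80}],
}

for iface in (bridge_interface, masquerade_interface):
    print(json.dumps(iface, indent=2))
```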
E
Sure, some thoughts: that's a great explanation of the problem we have. I think there's a simple solution for this: we just disable migrations for the pod network. I'm not sure why that is such a big deal. There are definitely use cases where migration isn't even that critical; it's something that would be nice, perhaps, but there are also tons of use cases where it's OK just to shut down the virtual machine and bring up another one somewhere.
E
So why can't we just disable migrations for the pod network, because we know this is just not going to work? It's very niche; I can't actually think of live migration on the pod network ever being useful. So disable it entirely: if somebody tries to do a migration and the VMI has a pod network attached in bridge mode, just say no, you can't do that.
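A rough sketch of the check being proposed here, not KubeVirt's actual admission code, assuming the manifest layout from the earlier sketch:

```python
# Sketch of the proposed rule: refuse live migration when any interface
# bound to the pod network uses the bridge binding.

def uses_bridge_on_pod_network(vmi_spec: dict) -> bool:
    """True if an interface bound to the pod network uses bridge mode."""
    pod_nets = {n["name"] for n in vmi_spec.get("networks", []) if "pod" in n}
    ifaces = vmi_spec.get("domain", {}).get("devices", {}).get("interfaces", [])
    return any("bridge" in i and i["name"] in pod_nets for i in ifaces)

def validate_migration(vmi_spec: dict) -> None:
    """Raise if the 'no migration with bridge on pod network' rule applies."""
    if uses_bridge_on_pod_network(vmi_spec):
        raise ValueError("cannot migrate: VMI uses bridge binding on the pod network")
```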
D
We can do that; we can block migration when bridge mode is used. Although I wonder if we need to, because if the guest is aware that this can happen, then they can take countermeasures. My point is, I would like to switch the default, right: for the pod network I would like to switch the default to a NAT-based method, but leave the bridge as an option. The problem with bridge, besides the outlined problems (and there your opinion, David, and mine differ, right), is this:
D
It's problematic to make that work for every network plugin that exists for Kubernetes, and for every kernel and for every operating system, because so many things are involved. Network plugins are different, and it depends on the implementation we end up with for the bridging, because that can be done with tc, or with iptables, or just with routing, right. So bridging depends on so many factors that I'm not positive we will reach a stable solution which is working everywhere.
E
By that logic, we wouldn't be changing the default because of live migration; we'd be changing the default because the bridge mode is difficult to support. Because let's say the bridge mode was completely supportable and worked 100% of the time across all hosts and operating systems and hardware or whatever: would this discussion be happening?
A
I asked this. So, to translate David's proposal: basically he's saying we should not be using the pod network, because we can't control the IP. I think the problem is that there may be other solutions where we also can't control what IP the new pod is going to get. So just saying we can't use the pod network doesn't necessarily completely solve the problem, because there can still be other cases where we can't control what IP the target of the migration is going to have, right?
F
I just wanted to say that, as it is right now, we don't support live migration with bridge on the pod network, so this is disabled already. And we also have an option to specify which default interface to use. I'm not entirely sure that we need to change the default; we can just let people select which interface they want as a default.
D
Yeah, I think we're speaking about the same thing: the default we want to provide out of the box, right, and the default binding mechanism we want. I mean, we need to understand: a single NIC attached to the pod network is the default we provide when we're starting a VM, when there's no further configuration done and you don't opt out. And that is the case I want to cover.
D
You know, what is our safest configuration for that default case? And I'm proposing to use a NAT-based solution for this case, because it's most portable, it's most compatible with Kubernetes, and we would even have live migration there. The drawback, as said, is that it's not straightforward to get the public IP from within the VM, although we can make that easily work.
E
I really like that. Now, the thing is, I can't think of a single VM management platform that does this, and that's the surprise I would have if I'm porting an existing workload over to KubeVirt: my application boots up, it detects eth0 and its IP address, and it registers it with whatever service it needs for things to contact it back. That stuff just wouldn't work, and I would be surprised to learn that.
E
I would actually say that I would prefer to have that consistency of seeing the actual IP address within the guest OS over live migration, based on my previous experience with using this stuff in the wild. But that's just, you know, my personal use cases that I've encountered. I know that there's a much broader set of use cases that do care about live migration, but I can speak for myself: I would prefer IP consistency within the OS versus live migration support.
G
About that, David, I have a question. So let's say we leave it like that: I am a developer, I import my VM workload and I don't change anything, I use the regular bridge, my VM is running and I'm happy. And now the cluster admin needs to put the host in maintenance mode. The only way they can do it is by killing my machine, and I'm not even going to know about it: my VM is just going to be killed and started in another place.
E
That's one use case. There's also, and it's actually more common I would say, people that are familiar with AWS or some other public cloud providers, where your VM can go away at any time. It could go away and it can come back, and there are no guarantees with respect to being live-migrated. So people are also familiar with the opposite of that. I'm saying that both are valid, both are important; some people actually care exactly about what you're talking about.
D
Maybe one note on that: when we started KubeVirt, right, we knew that we would encounter situations where we have constraints around what Kubernetes can do, and back then we said we favor Kubernetes compatibility over our virtualization features, with the assumption that we'll find workarounds to provide the virtualization features in some other way. And I think that's the case here: for the feature of having a stable IP along the lifetime of the VM, we can use additional interfaces provided by Multus. So we have...
D
Just let me close the sentence. Following that, even if I understand you, David, I would still say I will go with the link-local approach, because there are other solutions if you need the classical virtualization behavior, and we cannot reliably provide a stable... no: we just can't provide the same functionality on the pod network as people are used to from the virtualization world. We cannot provide both a stable IP and live migration.
D
Both are common features, but on the pod network we cannot achieve both, and we need to make a call. I would rather say we break with that experience in general and say: the pod network only works according to the Kubernetes rules, and the full virtualization experience can only be delivered using additional NICs.
F
Is this whole discussion about defaults? I mean, is this the only remaining item? Because there was this pull request, the one that I shared, that basically completely disables the bridge interface for certain workloads, and I thought that was the center of the discussion, but we're focusing on the defaults. Is that right?
A
Yes, it comes down to that: the pull request you've posted is the thing that catalyzed this larger discussion. On the one hand, the point of view is: why are we supporting something we don't intend to use? And the pushback is: but we do use it. So that's how the discussion changed to what the default should be.
D
It's about the choice, and people being educated, right? We need to give educated people the choice, given that they know what the constraints of their choice are. So I would rather be in favor of discussing the default and leaving the bridge mode as an option, just discussing the default. I think by now I would not be in favor of dropping the bridge mode in general.
G
Can we... one point, just one quick moment about dropping. The PR is not dropping it; it's only adding a flag with which you can drop it. Because, again, I think the one who needs to make the decision whether the cluster will allow using bridge or not should be the cluster admin. So you need to have a way to block your users from configuring it, because the user doesn't know the underlying cluster and what the requirements are there.
G
So let me try again. Say we want to have a flag so that the cluster admin can configure KubeVirt (it's not something any user can do), and we want to give the administrator of the cluster the option to disable bridge on the pod network. Let's say they want to be able to live-migrate the VMs, because of the SLA that the admin of the cluster has with the developers: you won't have a reboot of your virtual machine without knowing it, or something like that.
G
So if we don't introduce this flag, the developer can go and create, not as a default but explicitly, a bridge interface: just say, I want to create a bridge on the pod network, and then the VM will get this type of connection. With the flag, the admin of the cluster can provide the SLA that says: I can live-migrate your virtual machine if I need to maintain the node.
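A sketch of what such a cluster-level switch could look like. Later KubeVirt versions expose network configuration fields along these lines on the KubeVirt CR; the exact field names used here are an assumption relative to the version under discussion:

```python
# Sketch of the cluster-admin flag being asked for. Later KubeVirt versions
# have network configuration on the KubeVirt CR that matches this idea
# (permitBridgeInterfaceOnPodNetwork, defaultNetworkInterface); treat the
# exact names as assumptions for the 2019-era version discussed here.
import json

kubevirt_cr = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "KubeVirt",
    "metadata": {"name": "kubevirt", "namespace": "kubevirt"},
    "spec": {
        "configuration": {
            "network": {
                # The admin forbids bridge on the pod network, so every VMI
                # stays live-migratable when a node goes into maintenance.
                "permitBridgeInterfaceOnPodNetwork": False,
                # And makes the NAT-based binding the out-of-the-box default.
                "defaultNetworkInterface": "masquerade",
            },
        },
    },
}

print(json.dumps(kubevirt_cr, indent=2))
```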
A
I don't think we're going to reach a consensus or a decision point here, just judging by the way the conversation is going, so I think we should table it and move back to the mailing list. But at least we all have the greater perspective of having talked about it, at least, you know, face to face. So the next topic is GPU or vGPU passthrough, and I'll turn it over to Fabian. Yeah.
D
I don't think that Vish is around, but I still wanted to mention it in this forum. Vish from NVIDIA was actually doing some design work around NVIDIA GPU and vGPU passthrough, and he was also sharing that on the mailing list, and he shared a PR. I think the broad design looks good, to me at least, and there's a discussion going on about it there. To me, the one remaining question really is how we handle... yeah.
I
My major points, other than feedback on the document: any kind of design questions or assistance would be greatly appreciated, and then submitting PRs. And, you know, as far as VirtualBMC goes, I'm on a fork, because the changes that are needed, you know, to add an option to talk to KubeVirt instead of just libvirt, may not be acceptable to the maintainers.
A
I did see the document and unfortunately haven't responded on it yet, but I did see that one of the options being discussed surrounded the question of whether it should be a daemon per node or a daemon per cluster. And, you know, with that, if you had only a centralized daemon to do all this coordination, that would then require some sort of way to reach each node and enact the IPMI actions. Is that correct?
I
Well, that can be a centralized daemon, of course, because this IPMI listener never talks to the VMs. It would receive the IPMI command and turn around and say: KubeVirt, go do this. You know, the advantage of a centralized daemon is that any kind of IPMI sender only needs one IP, and needs to know the port of the thing it wants to talk to, and no matter where that thing ends up in the overall cluster, it can still send IPMI requests.
D
Okay. Hey Keith, good to see you, good to see you in action; it's Fabian, yeah. One thing that prevented me from participating in the discussion was: is the IPMI protocol actually a per-node protocol, or is IPMI allowed to manage multiple nodes from the same BMC? And I think the answer is no, right? So you have a single IPMI instance for every node you want to manage. Is that true?
I
Well...
D
Yeah, that is... that's okay, yeah. And I got that from the discussion, so that helped me, because to me one of the key questions really is whether this IPMI service could be an add-on service. In my opinion it can, because, you know, it can be a standalone Python script which is just mapping the IPMI calls to the relevant KubeVirt calls.
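A minimal sketch of that mapping idea, assuming KubeVirt's subresources API group for start/stop/restart; the namespace and VM name are placeholders, and a real listener would sit behind an actual IPMI protocol implementation:

```python
# Sketch of a standalone script mapping IPMI power commands to KubeVirt
# calls, as suggested above. The paths follow KubeVirt's subresources API
# (subresources.kubevirt.io); namespace and VM name are placeholders, and a
# real vBMC would implement the IPMI wire protocol in front of this.

SUBRESOURCES = "/apis/subresources.kubevirt.io/v1alpha3"

# IPMI chassis control command -> KubeVirt VirtualMachine subresource.
IPMI_TO_KUBEVIRT = {
    "power on": "start",
    "power off": "stop",
    "power cycle": "restart",
}

def kubevirt_action_url(namespace: str, vm: str, ipmi_command: str) -> str:
    """Return the KubeVirt subresource URL for an incoming IPMI power command."""
    action = IPMI_TO_KUBEVIRT[ipmi_command]
    return f"{SUBRESOURCES}/namespaces/{namespace}/virtualmachines/{vm}/{action}"

# Example: an IPMI "power on" aimed at the VM backing node "node-7".
print(kubevirt_action_url("default", "node-7", "power on"))
```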
D
On that same matter, maybe it's helpful: I know that we currently don't have a forced shutdown for VMs, but we're actually looking to add that, and I think IPMI might be the first consumer for that feature, because ultimately IPMI is about, you know, enforcing certain states. Cool, thanks.
D
Yes, it is. So the submission is still open, and the process is that we need two sponsors to get KubeVirt sponsored into the CNCF Sandbox. Liz volunteered to be one sponsor; I hope that we get a second one soon as well. If you're working for a company, I mean, one thing that's helpful for that submission, and which was requested when we were presenting to the TOC, was that companies... I mean, it differs.
D
I think the KVM Forum is co-located with the Open Source Summit, and yeah, it just adds up; it's traditionally a place where a lot of the virtualization crowd is around. And actually one talk got accepted already: the KubeVirt project status update. The schedule is not out yet, so this is just a heads-up.
D
Maybe just one thing: a while back there was a ppc64le PR with fixes, and I've been in touch with the authors, and we would really like, I mean, they're working to see if we can actually add ppc64le support to our releases. That requires that we have CI for it, but resources were provided. I'm just mentioning it here so other people who have an interest in that topic can pick this up as well.