From YouTube: KubeVirt Community Meeting 2022-02-23
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/
A: Okay, everyone, I've just seen that.

B: Yeah, I just wanted to say hi, because it's the first time I'm attending this meeting. My name is Harvard. I just recently joined the KubeVirt storage team and wanted to say hi.
A: Okay, since we don't have any agenda and notes items, I think we can just move on directly to the open floor. The first item is the CI outage that Roman wants to point out, I guess.
D: Yeah, you've probably noticed that we had an outage which started yesterday at about 8:40 UTC and went on until six UTC, so almost a whole working day for many of us. It turns out that it was sadly outside of our control, or it depends on how you see it: we didn't have an issue in our platform, but there seems to have been an edge or network outage in parts of the US yesterday. That's what we got back from GitHub after we raised an issue with them.
A: Okay, thank you. So I think the next one is also you: the sandbox graduation.
D: Yeah, so we didn't yet get an official press release from the CNCF, but KubeVirt passed the incubation process. We are now out of sandbox and in incubation mode. It would probably even have been something for the agenda, but no big fuss about it yet. There will be an official announcement, but it's just nice to see.
A: So I guess we are going to forward this to kubevirt-dev, so that everyone is aware of this, right? Somehow.
A: Okay, okay! No, by the way, my audio was also having trouble, so it's fine. Okay, great! I was just asking whether, I guess, we are announcing this on kubevirt-dev itself, right, so that everyone knows that we are now in incubation and so on. I'm not sure.
A: Okay, great. Hopefully. I don't know why Zoom is playing tricks on me, but okay. So, another heads-up is from me. We have been looking at updating the SR-IOV nodes which we have in our CI cluster, and the plan is to update them on Friday. We are doing this one by one, so we don't expect any complete outage for the SR-IOV jobs, but at least a bit of queuing up of some jobs, I guess. Just so everyone is aware: on Friday, the SR-IOV jobs on a PR might be piling up.
A: Okay, so I am just asking once again: maybe someone else has anything to talk about in the open floor that they didn't yet get the opportunity to put in.
A: Okay, so I think then we can move right away to the next section, which is pull requests that need attention. I think Shelly has put something up here. If you want to start explaining what this is all about? Yeah, you can.
G: Great. So I started working on my next epic, for 4.11, starting with the design. I wrote a design proposal and posted it in kubevirt/community, so I wanted to get some attention on it and get some comments, maybe suggestions and improvements.
G: Yes. So the proposal is about doing VM memory snapshots, or memory dumps. Currently the purpose is analysis, with the thought that in the future it could be used for hibernation, or for snapshots that restore back to the same memory.
G: So the basic design is attaching a PVC to the virt-launcher. It doesn't have a disk, so it doesn't need to be attached to the VMI, only to the virt-launcher, so we need some adjustments in that part. After this attachment, the proposal says that you could either run a virtctl command that will do a simple memory dump to that PVC, and then you can decide what to do with it, or add some parameter to the snapshot YAML that will trigger this memory dump before taking the volume snapshots.
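Sketched concretely, the two triggers described here might look as follows. This is a hypothetical illustration of the proposal, not a shipped interface: the `memory-dump` subcommand, its flags, and the `memoryDump` field are assumptions based on the discussion; only the `VirtualMachineSnapshot` resource itself already exists.

```shell
# Hypothetical: one-off memory dump of a running VM into a PVC that gets
# attached to the virt-launcher pod (command name and flag are assumed).
virtctl memory-dump get my-vm --claim-name=my-memory-dump-pvc

# Hypothetical: a field in the snapshot YAML that triggers a memory dump
# before the volume snapshots are taken ("memoryDump" is assumed).
kubectl apply -f - <<EOF
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vm-snapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm
  memoryDump: true
EOF
```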
G: So currently it will be part of the VM snapshot, but we won't currently restore the actual memory. In the future you will probably have hibernation, which will do this dump, then shut down the VM and restore the VM from that memory, but that's not something I have investigated that deeply yet. For the restore it's the same: there should be an option to use this file to restore the memory from it, but it's not part of the design currently.
I: Got it, okay, all right. What would be the technical mechanics for this? Is the future for this to be an investigation or guest-introspection sort of feature, like take a snapshot of the memory and introspect whatever was going on? Or is the future also that we want to be able to snapshot a virtual machine live and then restore it? We're just trying to understand where the future of this is going. The design I've just briefly looked at makes sense, as far as what we're doing and the mechanics behind how we do it; I'm just trying to understand how users would interact with this feature in the future.
G: Yes. So, as Michael commented, it should be part of an option to also export a VM including the memory; or hibernate, which is only doing the memory snapshot, shutting down the VM, and continuing from that memory; or doing an online snapshot and continuing to run, with the possibility of returning to the same point with the disks and the memory.
J: Yeah, part of it, at least with the snapshot and restore stuff, is that when VMs are restored now, they're powered off, so we'd have to change some of the restore mechanics too, yeah.
J: You know, maybe an actual first step there would be to support snapshotting VMIs and restoring VMIs, and then we can add memory support to that.
I: So I think my comment here would be that it's fine to approach this in phases, like just do the snapshot first, then we can export it, whatever. I'd like to see it fleshed out where this is going, with multiple implementation phases, even if it's just a rough understanding at this point, just so we can have a road map for this feature.
I: I think for a design proposal it would be nice to have both the snapshot and the restore, or whatever that means, just so we can have that discussion in case it for some reason influences how the snapshot is performed. I don't think it will, but at least we could have that discussion up front. I'll make a comment on the community PR here about what I would be interested in seeing. I don't think it influences your initial implementation phase, though, so it looks good.
D: Picking that up, what I also always love to see on proposals is really end-to-end thoughts. Not technical ones, as David said, but about how users are using it: just making sure that everyone can see, behind the proposal, the thought put into the full flow, how people are getting this at the end, how you would envision it, and how easy it should be for users to get something out.
G: Yeah, so there's the way of getting this memory dump, but there is currently no explanation of how you will use it.
F: I'm saying that it's really interesting in the context of collecting memory dumps when a panic occurs. We have this pvpanic device we could use, and then there are the panic extended events that we could, I guess, listen to, and then we would have a place to store those dumps when something like this happens, and then export it. I mean, for me, it sounds like a really interesting path forward.
J: Yeah, I mean, we're definitely thinking about that. We're building this export mechanism to do kind of an offline migration, and moving the VM to another cluster is one of the use cases.
J: I'm not sure about that, though. Again, until we have the ability to restore from the memory dump, or to start with it, I think it's going to be more for offline debugging purposes at this point. But yeah, eventually it will be part of that migration.
A: I don't get any feedback, so, okay, good.
A: That's great to hear, thanks David. Okay, so I think the CI outage got handled already. We have the meeting troubles item that was sent by Catherine, so I think this one is also handled already. There is a discussion going on about the SSH failure, this one also. So I'm not sure; I don't see anything here that is still unhandled.
A: Oh yeah, one more thing I see: I just forgot to have this updated in the miscellaneous stuff, or in the open floor. Over the weekend there must somehow have been some update on Quay, or probably on quay.io, because suddenly the unknown blob issue has magically disappeared, which is great to see. But it would still have been interesting to know what the problem actually was. So, just a heads-up for everyone: if things still do not work, please get back to me.

A: Yeah, it can be, because I think this thing suddenly appeared a couple of weeks ago when they did another maintenance, and now it suddenly disappeared again, so I guess they fixed something with their maintenance, which is great to see. But still, if anyone still needs this workaround: I can just say from experience that docker push can be well replaced with skopeo copy. But yeah, that's just a side note.
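The workaround mentioned here, replacing docker push with skopeo copy, could look like the sketch below (image and registry names are placeholders):

```shell
# Instead of pushing through the Docker daemon:
#   docker push quay.io/myorg/myimage:latest
# copy the image out of the local daemon directly to the registry with
# skopeo, which talks to the registry API itself:
skopeo copy \
    docker-daemon:myorg/myimage:latest \
    docker://quay.io/myorg/myimage:latest
```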
K: Can I raise something? There was that discussion about the SSH issue, I think. If I'm not mistaken, and I'm not sure if this is the case or not, from a small discussion on the network side we figured out (it's a wild guess now, it's not 100% confirmed) that it looks like it is related to this command that restarts a VM, right?
A: I don't exactly follow. Are you referring to this SSH email? Ah, that I understand.
K: So I'm asking: we think that the problem is the way that the restart command is done on a VM. I mean, we are doing the restart by deleting the VMI and then letting, I guess, the VM create another VMI instead. If this is correct... is this correct or not? That's my question.
I: Well, there are two different types of restart, or reboot. These are the terms I think we've landed on for KubeVirt, and I don't know if this is consistent across the industry. A restart for us means we're going to cycle the pod, so there's going to be a new pod and, unfortunately, a new IP address. We can keep the MAC address stable if we need to, I believe; maybe, yeah, I believe we can. But there's also a command called reboot. I think we have it implemented now. It does a soft reboot of the guest within that pod, so it's just going to cycle the guest within that same pod, and that's a separate command. Let me see if we have actually implemented it. It's something we've talked about for years.
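For reference, the two operations being distinguished here map onto separate virtctl subcommands. virtctl restart exists today; whether the in-pod soft reboot has landed is exactly what is being checked in the meeting, so treat the second command, and its name, as tentative.

```shell
# "Restart" in KubeVirt terms: the VMI is deleted and the VM controller
# creates a new one, i.e. a new virt-launcher pod with a new pod IP.
virtctl restart my-vm

# "Reboot": a soft reboot of the guest inside the same virt-launcher pod,
# without cycling the pod. Tentative at the time of this meeting.
virtctl soft-reboot my-vm
```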
K: This is like you're confirming our suspicions, what we expected here, and I think there is a problem then. I guess, I think, it's a systematic problem. A restart on a domain is usually not supposed to change the configuration of that domain.
K
It's
like
at
least
from
from
other
systems
that
I
I
know
so
it's
like
I'm
doing
a
hard
risk,
restart,
as
you
say,
but
the
the
domain
configuration
is
not
supposed
to
change,
and
the
problem
here
is
that
for
what
happened,
we
use
leave
it
right
under
the
hood
so
leave
it
will
auto,
generate
a
mac
address
and
probably
also
pci
addresses
in
the
guest.
D: If the VM goes down, whether that's a restart this way or any other shutdown, it will change anyway. I understand what you're trying to get at, but this is a limitation which we have right now. We can probably talk about persistent MAC addresses, and this is an issue. I'm surprised that it works in Fedora, actually, because from my experience it also does not work on Fedora once you configure NetworkManager, because NetworkManager, or some parts of it, also thinks that a device is different if the MAC address changes.
D: But it's very common to have setups where things are bound to the MAC address, and where the configuration thinks it's a different interface when the MAC address changes. So, what I wanted to say: I understand what you want to get at with the restart, but I don't think that the restart as such is the problem. It's that, in general, we don't keep any MAC addresses. If we could do that, we could give the restart command a different notion and do the in-pod restart. But then users would just be confused later on when they shut it down and start it again. Yeah, so you're saying...
K: So let me ask this differently. Do you think it makes sense, in an architectural sense, I guess, that on a restart, as part of the logic of the restart, we read the existing MAC and explicitly set it on the new VMI? Does that make sense, or is it totally breaking something?
I: That's the mechanism that we have to provide consistent MAC addresses. Let me just...
K: But when you... I understand the restart itself is coming from the subresource on the virt-api, right? So at that stage, before...
D: Right. Again, we can talk about what you expect to happen with a restart, like whether it stays in the pod or a new pod gets created. But the thing is, in order to solve this consistently, you would want to keep the MAC address, especially with masquerade, where it's easy, also between shutdowns and starts, because the issue described on the mailing list will also happen if you shut the VM down and start it again. You know what I mean?
I: So what would happen, the mechanics behind this, is that the virtual machine controller would start the VMI and watch the VMI. As soon as it detects a MAC address, if we didn't explicitly set a MAC address on the VMI spec, we can cache that on the VMI, or excuse me, on the VM object, in its status. Any time afterwards, when it starts a new VMI for that VM, we would explicitly set the MAC address to the first one we detected. Exactly. That would be okay for masquerade, yeah.
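For comparison, pinning a MAC address explicitly is already possible on the interface in the VM template; the mechanism sketched here would effectively do this automatically from the VM status. A minimal sketch, with made-up names and MAC value:

```shell
# A masquerade interface with an explicitly pinned MAC address. The idea
# discussed is for the VM controller to cache the first generated MAC in
# the VM status and re-apply it like this on every new VMI.
kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            macAddress: "02:00:00:aa:bb:cc"  # survives pod cycling
      networks:
      - name: default
        pod: {}
EOF
```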
D: In theory, we can then also talk about the pod network, because with many CNI network providers it's okay if we keep the MAC address behind the bridge as well. But yeah.
I: I see, okay. And for the bridge binding to the pod network, I think that's what you're talking about here: are we generating this MAC address ourselves again for that, or are we using the one from the CNI plugin?
A: Okay, so I think we can move on to the next topic. Let me share my screen again. Let me see.
A: Okay, I think this one is from Lubo. I'm going to look at this, okay. This is an announcement, so I think we can skip it.
A
This
is
just
for
tracking
work,
I
guess
or
checking
in
the
enhancement
itself.
So
let's
see
what
the
next
one
is
use
a
multi
gpu
card,
interesting.
A
So
I'm
just
going
to
read
this
question
aloud,
so
maybe
people
can
chime
in
who
wants
to
maybe
answer
that
and
I'll
then
ping
people
on
that
issue.
So
the
question
is,
I
deployed
keyboard
and
kubernetes,
and
each
node
has
eight
nvidia
gpu
cards.
I
used
cuber
gpu
device
plugin
to
use
gpu
for
virtual
machines.
I
want
four
cards
for
virtual
machines
and
four
cards
for
other
business:
non
virtual
machine,
kubernetes
parts.
However,
if
if
cupid
gpu
device
plugin
is
used,
the
kubernetes
port
cannot
use
gpu
cards.
A: It's fine, I'm just going to ping you on the issue, if you want.
A
I
think
someone
already
commented
on
that.
I
think
this
was
also
taken
to
the
mailing
list
right.
A: Okay, can I ping you on the issue then, just so that we have it for reference?
A: I don't know. I think I'm just going to leave it like that. In general, I just want to make sure that we don't go over this issue again at the next bug scrub, so I would really like to triage all these issues.
D: Yeah, so we've got a PR ready for that as well, so the...
D: The thing is, in general, this sysprep volume source which we have would work with a ConfigMap as well. It is just a kind of syntactic sugar around providing these two XML files, where a ConfigMap would do, but this syntactic sugar is right now ensuring that both entries, autounattend.xml and unattend.xml, exist in the ConfigMap. There are use cases where you only want to provide one or the other, and that's what we're not allowing. That's all.
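For context, the sysprep volume source referred to here feeds Windows answer files to the VM from a ConfigMap, roughly as below; the constraint under discussion is that validation currently requires both keys. Names are placeholders.

```shell
# A ConfigMap holding both Windows answer files; current validation
# requires both entries, which is the limitation being discussed.
kubectl create configmap my-sysprep-config \
    --from-file=autounattend.xml \
    --from-file=unattend.xml

# Referenced from the VM spec as a sysprep volume (fragment):
#   volumes:
#   - name: sysprep
#     sysprep:
#       configMap:
#         name: my-sysprep-config
```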
D: It depends on how they are grouped. If you have, I don't know, 50 vGPUs under the same label, you can just assign one. If they have different names, like really different resource names, then it's not possible, because then you would have to specify them all on the template.
A: Okay, so, okay. I think we have three minutes left. I'm not sure; I don't want to overrun this meeting.