From YouTube: KubeVirt Community Meeting 2022-01-12
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.u74oyrl72es0
A
Okay, we are already four minutes into the hour, so I would say let's start. Welcome, everyone, to the KubeVirt community meeting, happening this time on the 12th of January 2022.
A
Please, everyone, share your attendance in the document link that I posted into the chat. The first point would be whether we have any new people.
A
So yeah, if anyone wants to introduce themselves to the community, now is your chance.
A
Okay, so I think we still need some people to post their user stories. If you want to join that process, you could just put a reminder into the KubeVirt community document so that we can get back to you, because I think we would be really interested in hearing your stories on how it goes.
A
Okay, I was just thinking about that. Maybe if you haven't, then I would remind you again, but if you have done that already, you should be good. Thanks, great. Okay, so can I ask something: would you mind if we go through the agenda first and then get to that?
A
Okay, great. So if anyone has any agenda items or notes that they want to fill in, that would also be great; just add them to the document.
B
Virtual machine pools: I would like to ask about those, because we were talking last year about them being released, that they were going to be part of the next release. Does version 0.49 already include that?
C
It does. There's a bug in it that I'm working on right now; we talked about this in the performance meeting last week. So yes, it's available; go ahead and feel comfortable getting some preliminary experience with it.
C
It's going to be fleshed out with more features and things like that, which we have documented in our design proposal for it. But also be aware that it's not perfect yet: we've already identified a bug. I need to create a GitHub issue for it, and I'm already working to resolve it, but I would not put it in production yet.
C
I wrote the feature, so, this is David Vossel. Yes, I wouldn't say anyone's responsible, necessarily; it's kind of a community project. Anyone can contribute these additional features, bug fixes, things like that. Moving forward, I'm probably, I guess, the primary contributor to that feature at the moment.
C
It's in the PR, so if you look at the virtual machine pools PR, it's in the chat. My name is David Vossel.
B
We plan to contribute not only ideas but actual work, and that's why I would like to talk to you later. Okay.
C
Yeah, sounds great. Be aware, if you aren't already, that we have a Slack channel as well for that kind of discussion.
B
I forget the name of the chat, but I know the Slack; we'll find it. Okay.
B
We are working hard to make our solution scale on top of KubeVirt, and in my mind this could become one of the largest KubeVirt deployments in the world, if not the largest, because this is for moving our current user base only, and we plan to grow on top of that. Okay.
B
There are a lot of features that are missing for our needs, but the major components are there, and it's just a matter of changing things a little bit for our needs. That's at least my sentiment right now. Okay.
B
During the scalability project, the missing feature is to have a single pool with multiple flavors, like two virtual CPUs and four virtual CPUs. You mentioned, the team mentioned, a term for that; I didn't take notes. Can you repeat it?
B
So maybe: create a pool with several, let's say, flavors: two virtual CPUs, four virtual CPUs, eight virtual CPUs, sixteen virtual CPUs, all in the same pool. No? That's not it?
C
Yeah, so I think we touched briefly on this in the performance meeting as well. A pool is for identical replicas, so a pool is only identical, right? One flavor, for example, would be assigned to a pool, simply because the pool only has one virtual machine instance specification in it. So there's only one way to describe, for example, what a VM and a VMI will look like inside of a pool.
C
If
you
want
multiple
cpu,
if
you
want
to
express
the
ability
to
have
multiple
replicas
of
different
size,
cpus
and
things
like
that,
that
would
be
multiple
pools,
so
it
would
be
a
pool
per
a
vm,
instant,
spec,
similar
think
of
this
as
a
deployment
like
a
pod
deployment,
it's
similar
in
concept
that
we're
only
replicating
identical
virtual
machines,
just
like
a
deployment,
only
replicates
identical,
pods
right.
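The pool-per-flavor model described above can be sketched as follows: since a pool only replicates one VM spec, each flavor gets its own pool. The manifest field names below loosely follow the VirtualMachinePool design proposal and are illustrative, not a definitive API layout.

```python
# Sketch: one VirtualMachinePool per flavor, since a pool only replicates
# identical VMs (just like a Deployment only replicates identical Pods).
# Field names are illustrative and may differ from the released API.

def pool_manifest(name, cpu_cores, replicas):
    """Build a minimal VirtualMachinePool manifest for a single flavor."""
    return {
        "apiVersion": "pool.kubevirt.io/v1alpha1",
        "kind": "VirtualMachinePool",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "virtualMachineTemplate": {
                "spec": {
                    "template": {
                        "spec": {"domain": {"cpu": {"cores": cpu_cores}}}
                    }
                }
            },
        },
    }

# One pool per flavor: 2, 4, 8, and 16 vCPUs.
pools = [pool_manifest(f"workers-{c}cpu", c, replicas=10) for c in (2, 4, 8, 16)]
```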
C
So, like a resource request, then; it's all dependent on the Kubernetes scheduler to assign that virtual machine's pod to a node that has access to a GPU. That should work.
B
I would like to bring live migration to the table, because this is something that we are testing right now, and when I attach a GPU to the VM, live migration doesn't work. We need to work on that too, to fix it and make it possible.
D
With KubeVirt, it's that the underlying stack right now cannot do that; it's not just here. So I'm not sure what to expect there from us in this case. One thing that is done, for instance, with SR-IOV is that the device is first detached, live-unplugged; then you migrate, and then you plug it in again. Of course you have downtime for the device during that time, but, I mean, in theory that would probably be possible for GPUs too, in some way.
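The unplug/migrate/replug workaround described here can be sketched as a small orchestration. The function names are hypothetical and only illustrate the ordering of the steps, not real KubeVirt or virtctl calls.

```python
# Sketch of the detach -> migrate -> reattach workaround for live-migrating
# a VM with a host device (done today for SR-IOV; in theory applicable to
# GPUs). The callables are placeholders, not real KubeVirt APIs.

def migrate_with_hostdev(vm, detach, migrate, attach):
    detach(vm)   # hot-unplug the device (the guest loses it: device downtime)
    migrate(vm)  # live-migrate the VM without the host device attached
    attach(vm)   # hot-plug an equivalent device on the target node

# Record the order of operations with stub callbacks.
steps = []
migrate_with_hostdev(
    "vm-a",
    detach=lambda vm: steps.append(("detach", vm)),
    migrate=lambda vm: steps.append(("migrate", vm)),
    attach=lambda vm: steps.append(("attach", vm)),
)
```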
A
Okay. I think, Stu, you added something about the KubeVirt Summit. Do you want to go on with that?
H
Sure. I was trying to be anonymous because I might have to drop soon, yeah. Just a reminder: the KubeVirt Summit is coming up, just over a month away; that'll be the 16th and 17th of February. Be there or be square, all the cool kids are coming. If you have an idea for a topic or anything you want to propose, a link is provided to figure out how to submit it. Please do.
A
Okay, so the next one would be Edward, with a question about the compatibility of commands. Do you want to go on with that? Yeah.
G
I think I asked something last week, or the week before, I don't remember, but something else came to mind, and I was wondering how it is solved if we introduce a new VMI specification, or we change the spec in one way or the other: how does it work when you send it? Because usually the VMI spec, I mean the structure itself, is sent between the handler and the launcher.
D
So normally it works like this: when you're updating KubeVirt to the latest version... So, so that you understand it from the beginning, we have to think about the update process as a whole.
D
So at this stage it would be like this: you can only post all the VMI fields; they get validated, and they get saved. They go through virt-controller and virt-handler, and because there is no API breakage they can understand it, although the newer versions would understand more fields, and then it goes to virt-launcher. So this is, in general, how we ensure compatibility. And then, on the serialization to virt-launcher, after the update has succeeded, you can have two possibilities.
D
One is that virt-launcher already understands the new field, or it does not understand the new field. If it's not there, this basically means that this launcher can't use it, but this is normally not relevant, because the API spec, the VMI spec, is not mutable. So all virt-launchers that are running against the new virt-handler should never see this new field, except if it's a changeable field, which we normally don't do. Sorry, it got a little bit more confusing than I thought it would be.
G
Mainly looking at the case where you already have the new virt-handler, and it already has the VMI spec which has more fields, let's say, and then it sends it to the other side; so the other side will deserialize it, and it will be okay, because the extra fields are ignored. That's it? Yeah, exactly, yeah.
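The forward-compatibility behavior described above (an older virt-launcher simply dropping fields it does not know about during deserialization) can be illustrated with a small sketch. The type and field names here are made up for illustration; they are not the actual KubeVirt types.

```python
import json
from dataclasses import dataclass, fields

# Why an older virt-launcher keeps working when a newer virt-handler
# serializes a VMI spec with extra fields: deserialization drops keys
# the old struct does not know about. (Illustrative names only.)

@dataclass
class OldVMISpec:  # what the old virt-launcher understands
    cores: int
    memory: str

def deserialize(raw, cls):
    data = json.loads(raw)
    known = {f.name for f in fields(cls)}
    # Unknown fields (e.g. a feature added in the update) are ignored.
    return cls(**{k: v for k, v in data.items() if k in known})

# The new virt-handler sends a spec with a field the old launcher
# has never seen; deserialization still succeeds.
wire = json.dumps({"cores": 2, "memory": "2Gi", "newFeature": True})
spec = deserialize(wire, OldVMISpec)
```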
A
Okay, so then, if there's nothing more for the open floor, we should probably have a look at the pull requests that need attention.
G
Right, I think that section was only if someone wanted to ask for attention on a PR, but if it's empty, then yeah, I think we can move on.
A
I was just having the idea of having a quick look, probably, in case something was wrong, but yeah, we can just go on anyway. Okay, so the next one will be the mailing list.
A
Oops. This one, I'm not sure. I think the only emails that we had in recent days were the announcement of the new release and my pending-jobs email, and I think the rest has already gotten handled somehow. And this one is about not being able to upload an image. Okay, but I think there is already some reply. Okay, so this is asking for more information, I guess.
A
Okay, so I think this looks quite okay. Okay, the next one would be... one second, I lost my place somehow. Okay, the next one would be the bug scrub.
A
Okay, I see one PR already here. I'm not sure if anyone entered that, or whether it should have been handled in the upper section with the pull requests. I'm just going to have a look at it, probably.
I
Yeah, this is something that I just wanted to mention, something that we saw internally that was interesting, and we're still kind of working on deciphering what's going on, but at a high level...
I
What we've noticed is that, because we do a lot of deleting of VMs, or sorry, VMIs, we've seen some strange behavior in day-to-day operations. We sometimes reboot nodes, and obviously the pods, like the handlers and so on, restart, and we've noticed that sometimes, when we do a lot of deletes and we restart nodes, we run into situations where this ghost record stays around.
I
It's caused a few issues where we've seen the VMIs hang around in a state, like here where it's Scheduled, and it's unclear what's going on, and we can't really get rid of them. Then we get forced to delete them and whatnot, but the normal delete path doesn't really work, and there are a few things that show up in the logs.
I
...that you can see there. But I guess the way I'd summarize it is: if you do a lot of deletes and you restart a node, some things don't seem to get cleaned up, or something doesn't quite happen right, and the ghost record hangs around. It doesn't happen all the time, it happens very infrequently, but it's something that can cause VMs to enter this state.
C
Yeah, okay, that's interesting. So the purpose of the ghost record is to keep bookkeeping on the node's persistent disk, on the actual local storage, recording that a virtual machine existed at one point, and that maybe we created some local ephemeral data associated with it that needs to get cleaned up. So even if virt-handler, for example, restarts, when it comes back online a VMI may no longer be present in etcd, so we don't see it anymore.
C
We have a ghost record saying: hey, this thing used to exist, so clean up these mounts and whatever else. Yeah, the fact that you're seeing this is curious to me. My expectation for how this would work in this exact scenario, where we have a ghost record for a VMI which has a different UID than the VMI with the same name that's actually in the cluster right now, is that the previous VMI's local data would get cleaned up, the ghost record would get removed, and then the new VMI would be processed.
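The ghost-record bookkeeping described here can be sketched as a reconciliation over UIDs: records whose UID no longer matches any VMI in the cluster are stale and their local data should be cleaned up. Names and data shapes are illustrative, not the actual virt-handler code.

```python
# Sketch: virt-handler persists a record per VMI (keyed by UID) so that
# after a restart it can clean up local ephemeral data (mounts, etc.) for
# VMIs that no longer exist in the cluster. Illustrative names only.

def reconcile_ghost_records(ghost_records, cluster_vmis):
    """Return (stale records to clean up, records still backed by a live VMI)."""
    live_uids = {vmi["uid"] for vmi in cluster_vmis}
    stale = [r for r in ghost_records if r["uid"] not in live_uids]
    live = [r for r in ghost_records if r["uid"] in live_uids]
    return stale, live

# A VM was deleted and recreated with the same name but a new UID; the
# old ghost record should be cleaned up before the new VMI is processed.
records = [{"name": "vm-a", "uid": "old-123"}]
cluster = [{"name": "vm-a", "uid": "new-456"}]
stale, live = reconcile_ghost_records(records, cluster)
```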
C
Are you saying that the new VMI never moves past this stage and is essentially stuck? Does it ever eventually get started?
C
I can post some comments on it. I don't know yet exactly what I'd like to look at, but I can at least comment on what my expectation would be there.
I
Yeah, that would help a lot, just to make it clear, because yeah, I think that's basically the context I'm missing. We're looking at trying to fix this, but I just want to make sure we're not going down the wrong path here.
A
Thanks, David. Okay, so then I would say, since we are at half of the hour, we could just do a normal bug scrub, if anyone is okay with that.
A
Okay, okay, this first one is about a CNI plugin.
A
Okay, the next one is something with SR-IOV, from Eddie, I guess. Okay, let's see: did you just open this as a tracker, or do you actually need some more information from someone else here?
G
No, it's that we identified a problem when you have SR-IOV and no guest agent, so we will take care of it, probably.
D
I saw something similar. Maybe this is also regarding that: if a guest agent is detected but the communication is not working, I think I've seen that the pod IPs are also not reported correctly, like you get no IPs reported.
G
There is work being done now; at least I started to redo, or refactor, the reporting in general, but maybe that's... there is a problem there. It's a bit too complicated at the moment, but we hope we will simplify it. And, I don't know, I don't think it's related to this one. This one is specific to the way the status is currently reported.
G
With SR-IOV specifically, we don't take it from the domain; we don't read the domain information, we just go to the guest agent, and that's it, and that's obviously wrong. So when you don't have a guest agent, you will just not see the interface at all in the status.
G
Yeah, but currently the logic is more or less: we read what's on the domain and we overwrite it with what we see from the guest agent. It's more or less like this, but that's oversimplified; it's much more complicated, and we hope to make it much clearer.
G
Yeah, actually this is correct only for masquerade. This is actually a topic that, I think, was raised here also. It's a bit odd that in most of the cases we report in the status what we hope is in the guest itself, but for masquerade it's different.
G
I think that, if you consider it a problem, what you just said is how the pod network with masquerade binding is supposed to work: you only read what's on the pod, and you report only that. But for the bridge binding, that's not how it works. With the bridge binding we report either what's in the guest, via the guest agent, or what we read that is there. So that's where we are.
D
The thing for me was that the QEMU guest agent was running; it just could not read networking information, due to a bug in the cloud image. So it was reporting no network, and then we saw no network too, while the VM was happily there. And I guess, if you have, for instance, HTTP readiness probes and they're passing, everything would still look fine, right? Yeah, it's a tricky topic.
A
Okay, so then, I think there is progress going on there.
A
Okay, okay, so I can just answer that: we do not support that.
D
Imagine that sometimes it would be nice for two admins in different locations who, for whatever reason, can't share the screen, to still be able to connect to the same VM and see the screen or something. But I'm not sure if it's really needed; it depends. So I'm also curious what the person would say.
A
Yeah, okay, yeah. I was thinking that we want him to tell us about the use case, so that we have more information; not about how to implement it, I think that should be clear, but at least so that we know there is a specific, decisive use case. Yeah.
A
Yeah, yeah, that's better, so that we can understand whether the need is a good one. That's fine; that works.
D
We just have minimum requirements on the pods, which work to a certain degree, I would say, but we have no documentation which would, for instance, outline the growth: what happens, and how many resources do you need, if you run on a cluster with 500 nodes and 10,000 VMs? I mean, that's not even possible right now.
D
And it probably makes sense to note that when the cluster scales you may need more than that for good performance, and that it will also take as much as is available. But we have no documentation or tests which indicate what would be needed at bigger scale.
A
Okay, so the next one: this is the ghost-record one.
E
Yeah, there's a limit, I think, so this has to be a block migration.
A
Okay, Vladimir, would you be able to have a look at that, probably, and tell us what it is?
A
Thank you for that, okay. So this one, I guess that's just a tracker from Howard, right?
E
Yeah, so in general this should work; at least it was working in the past. I think Alicia was just looking into this, and she found why it isn't possible, but I think there can be an easy fix for it. Although here the device is the same, so I'm not sure why the PCI vendor selector is the same number.
A
Yeah, so yeah, I think then we should be through, at least with the current list of issues.
A
Okay, that's a good idea. That's a good thing! So, okay, then I would say: if anyone else has something to announce, say it here.