From YouTube: KubeVirt Community Meeting 2021-06-30
Description
Meeting notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.6m0oge593tiy
A: Hello everyone, welcome to the weekly KubeVirt meeting. I'm your host, Chris Callegari, and this is where we talk about KubeVirt issues and topics, and meet and greet with our developers.
A: I'm posting our meeting notes to chat, so you're welcome to follow along. Please add your name to the attendance bullet point there, so we can track who's been attending, and then I will share my...
A: Okay, I think we should begin. Alice has the first agenda item, with the PVC locking proposal.
C: Hi everybody. Can you hear me?
C: Oh okay, thank you, yeah. I sent this proposal two days ago, also on the mailing list.
C: I just would like to mention it, and if you have any comments, feel free to add them. Just to summarize the issue briefly: what I'm proposing is a new CRD with controllers, and I would like to introduce a new mechanism to protect PVCs.
E: What I wonder about the PVC proposal is: the Kubernetes community is discussing the ReadWriteOncePod policy. Oh yeah, the screen is here.
C: No, I haven't. I just wanted to discuss with you whether you think it's something that has potential, but I can of course also try to propose it in SIG Storage in Kubernetes.
F: I don't know if it's possible. I do think that there are — this is one of those things where we're getting into the realm of, you know, advisory locking. There are kind of a lot of different ways to skin this cat, but I definitely think that it is a problem that everyone has, and yeah, it may be worth discussing.

E: Yeah. Okay, do you have any suggestions for her on how to move forward with the discussion on this?
C: I can — I mean, the other day I was already trying in Kubernetes. I can ask; maybe people have ideas. I personally don't have a use case outside our work.
C: So I had the chance to attend KubeCon in May, and there was a SIG Storage talk, and I asked about concurrent access to PVCs, and they pointed me to this ReadWriteOncePod, but that doesn't apply to KubeVirt because of migration. So maybe there are other use cases outside that have the same problem.
E: Yeah, I guess it's just hard. For scalable applications it may be nice to have a faster way to get access to the PVC when it's still bound to another node in case of issues, but then you have the locking issue again, basically, right? You don't know when there are issues on that node, and if it's still bound there, you don't know if you can already use it.
E: For migrations we can also — with ReadWriteOncePod, maybe we can find another way there with the migration, like delaying attaching the disk on the other node with post-copy migration, or first pausing the VM, moving the PVC over, and then... for pre-copy migration, I don't know. Maybe we can also explore this.
F: But Roman, so at least for our use case — I guess what I'm getting from you is that you think we can come up with something that is perhaps, I don't know, simpler? Yeah.
C: Yeah, exactly. I mean, this is, I think, pretty simple. It's just a controller that watches the PVC and also watches the owner, and then it just labels the PVC and removes the label. I'm using a label because an annotation you can overwrite; with a label, if the PVC is already labeled and somebody tries to label it again, it just fails because the label already exists. So that's a nice mechanism to ensure that there is just a single owner.
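The label-as-lock mechanism described above can be sketched as follows. This is a hypothetical illustration, not the proposal's actual code: the label key and owner names are made up, and a Python dict stands in for the PVC's metadata. The key idea it models is that adding a label fails when the key is already present (as with `kubectl label` without `--overwrite`), so only one owner can ever claim the PVC.

```python
# Hypothetical sketch of single-owner PVC locking via a label.
# Assumptions: LOCK_LABEL is an invented key; real controllers would use
# the Kubernetes API with optimistic concurrency, not an in-memory dict.

class AlreadyLockedError(Exception):
    """Raised when another owner already holds the lock label."""

LOCK_LABEL = "example.kubevirt.io/pvc-in-use"  # illustrative label key

def acquire(pvc_labels: dict, owner: str) -> None:
    """Claim the PVC for `owner`; fail if the lock label already exists."""
    if LOCK_LABEL in pvc_labels:
        raise AlreadyLockedError(
            f"PVC already in use by {pvc_labels[LOCK_LABEL]}")
    pvc_labels[LOCK_LABEL] = owner

def release(pvc_labels: dict, owner: str) -> None:
    """Remove the lock label, but only if `owner` currently holds it."""
    if pvc_labels.get(LOCK_LABEL) == owner:
        del pvc_labels[LOCK_LABEL]

labels = {}
acquire(labels, "vmi-a")        # first claim succeeds
try:
    acquire(labels, "vmi-b")    # second claim fails: label already exists
except AlreadyLockedError:
    pass
release(labels, "vmi-a")        # the owner releases; the PVC is free again
```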
F: So Roman, do you think it would be worth coming up with and discussing a kind of more simple and direct solution for us?
E: Yeah, as I said, I don't have enough insight into the storage discussions going on in Kubernetes at the moment. Well, I was hoping that you had it.
E: Yeah, I mean, definitely — and I think for us, Alice, I think you're the first one hitting this with your guestfs integration. But I mean, we have, for data volumes for instance, a similar mechanism where we also do some checks in our controllers.
E: If we can use something... yeah, maybe you need, in your case, something more distributed. But maybe another option would, for instance, be to move the libguestfs part more server-side, away from virtctl, so that it can just do similar things as with the data volumes and PVCs. But in general, the rest of Kubernetes is pretty weak regarding ensuring that you're not overwriting your stuff. So right now you can just bind one PVC to multiple VMIs and...
E: Maybe — I think one thing that Alice is hitting with the guestfs support is that she's basically — and Alice, correct me if I'm wrong — she's basically creating a pod completely out of bounds of the rest of KubeVirt. So...
C: Yeah. Anyway, I will try to bring it to the people in Kubernetes storage and also get feedback from them. Maybe we can find some additional users.
A: Okay, thank you — that was a great discussion, everyone. Daniel Hiller has the next couple of bullet points.
B: So, just so you know, another thing that we have done is that we have merged the Rook Ceph and SIG Storage lanes, so they all now run together with the Rook Ceph default provider from kubevirtci.
B: What we discovered when we were running the periodics was that whenever the parallel tests were failing, the serial tests were not executed. We have fixed that with another PR, but we're still waiting for confirmation on whether that works exactly as we want. What we also saw was that some tests were missing the labels for Rook Ceph, because that was required on some tests. I've attached the PRs to the agenda in the community document, if anyone is interested in looking into that. That's all from me, thanks.
A: Thank you, Daniel. And speaking of CI: Daniel and I moved CI for the website over to Prow on Sunday evening at midnight.
A: For my time it was quite a sprint, and we finished up successfully. Surprisingly, our first pull request went through successfully, and that was pretty awesome to see. Thank you, Daniel, for that help. I see [unclear] is with us today.
G: Hey Chris, I'll just add one thing — this is Ryan. The post is also on the mailing list, but we changed SIG Scale to go from bi-monthly to weekly, at the same time. We'll be doing everything the same; we're just going to do it more often now. So if folks want to attend, there are going to be two additional meetings per month.
A: Hey Ryan, did you see that thread in Slack regarding that fellow who had the memory issue on a high-density virtual host?
A: He definitely has something going on there. He's not running KSM, so there's probably an opportunity to reclaim some used memory pages, or common memory pages.
A
Do
you
know
of
anybody
who's
running
ksm,
along
with
cooper.
G: No, I don't. Like I said, I think it would be worth having a discussion on the mailing list. It's strange — it seems to happen, like he was saying, when he was plugging in his devices; we would sort of learn a little bit more there.
G: It was just difficult to follow it like this. I think knowing more about this would be helpful.
A: That sounds good. Okay — pull requests.
B: Oh, one second, sorry — I forgot something. I had a request from Lubo, whether we could have a look, or whether we could get onto our CI nodes. Federico, I see you are in the meeting.
B: Do you have access to the nodes where the jobs are running, and could you provide some logs for Lubo?
H: I think it can be a little bit difficult, but maybe possible — I'm not sure what he requires. Because if it is something related to logs or artifacts, I think it would be better to just modify the test code to write that into the logs. But yeah, I'm currently in touch with him. So yeah.
H: We can continue with that.
H: No, I'm not aware of one — maybe Lubo knows better, but no, not at the moment. I think I'll ask him and create one if it is needed. But yeah, hopefully we can come up with something that is reusable by everyone in the jobs, to get more information about what's going on inside the pods. Something like that.
A: That sounds good. In the meantime, I'll tag you on this note right here.
A: Here — okay, I thought this was a new pull request, so we'll continue the conversation... and on to the next pull request.
A: And I was just wondering if we should add the diagram that you guys came up with in the performance and scale meetings.
G: Yeah, I'd like to. I mean, for that diagram there's an open pull request for it — I just don't know where it is, but yeah. I would like to add that, if possible. But yeah, I'll review the rest of what's here.
A: I'll just tag you here — just take a look at it and add some notes if you think anything is missing, or tag it with a "looks good to me", if you don't mind.
A: And then I'll do the approval, so we can get it merged.
B: This looks related to an email that Alexander Wels already answered. I'm not sure if it is just that exact email, which I also saw today, but I think that if you are using the OpenShift console — he suggests that, if you are, you should rather use... no, I don't think that this — I don't see it. Yes, but yeah. I think he suggests that you use the Hyperconverged Cluster Operator for that, because it interacts better. But then, yeah, I'm not sure.
A: I'm just — I don't know. Every time a Red Hat product comes up, it really should be run through the product support channels and then through the community.
A: Yes, everything is handled, so let's move on to the bug scrub, which we have not done enough of the past couple of weeks.
I: All right, can you see the screen — the right screen? Perfect. Okay, I went through some, so I'd say let's just go, I don't know, 14 to 20 days into the past. So, the first one: paused VMIs shouldn't be marked as ready.
J: Readiness probes on the pods get reflected, and we have the controller updating the VMI with the respective readiness state — whether the pod readiness probe succeeds or fails — as far as I know. And it might just default to true. I don't know why it would be true if it's paused; probably because it doesn't receive an update.
G: What's the use case for this? Like, why would we want this?
J: I think it comes — I just assume it comes from this: the context is that this person is adding the readiness status to the kubectl output for VMs, and it shows true for paused VMs, and I think that's how it came up.
G: Yeah — why, I guess, do they have to be related? I mean, paused is its own condition. Is there any relationship between the two? I don't know, I almost don't see there being one.
G: Yeah, I mean, like you guys are saying, I wonder about the other consequences, because when we do the phases, looking at ready is important — like when we actually look at whether the pod is ready. And I wonder how virt-handler will look at this if it seems unready.
G: I mean, ready — like, running and ready go together. Yeah, I don't know, it's just kind of the relationship between all these things. What do we gain if we say, okay, the pod shouldn't receive traffic if it's paused?
J: Yeah, as I said, one positive consequence would be that the user might not get an error if there are multiple pods. I don't know what happens during migration, but the pod status supporting a readiness state is something different. I think this is just an oversight — that we don't update the VM status when pausing, because the pod is still running and might or might not be reporting its readiness status.
G: Yeah, I understand that, but as part of reconciling the VM, we make the judgment of: okay, the pod is ready, move the phase to running. So I'm wondering what the other consequences are, if there are any, around this.
G: Okay, because the pod is ready — we're sort of separating them, the pod and the VMI. The pod is ready and it can receive traffic, and that's no problem; it's just the VMI that is paused. So maybe this is just one of those situations where KubeVirt is an abstraction, and it's not perfect.
G: I don't know — it would be good to hear what others think. You know, I think there's a lot of testing that would need to be done to make sure that this doesn't cause any adverse consequences.
G: Yeah, because it's not really clear, and the gain to me is not... like, can we say affirmatively, okay, this is going to get us this much of an improvement, or something else? That's also not clear. I understand the argument that a paused VM shouldn't be receiving traffic, but what are we getting?
J: ...be receiving traffic, but for the person querying virtual machines, they shouldn't have to know that there is a pod in the background. They query virtual machines, they know it's paused, and they want to know if the virtual machine is ready for them. The pod is an implementation detail, you know.
J: Readiness, to me, is the readiness of the workload. That's why we have readiness probes that can do network checks, and now exec checks, and in the future an exec guest-ping probe, to see if the workload inside the VM is ready to receive traffic. The same way the readiness probe on the pod does not check whether the pod itself is ready, but whether the check you execute on the pod — your application inside — succeeds, ideally. So in our case, the readiness should also indicate not whether the pod itself is ready, but whether your workload is.
J: Yeah, with the exec probe we can — or, we already have network probes: you can set a readiness or a liveness probe that checks some port on your virtual machine, sends TCP or HTTP traffic to it and gets a response, and that sets the readiness status of the pod, and then of the VMI, if the pod sets its readiness status...
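The two positions in this discussion can be captured in a small decision function. This is a minimal sketch, not KubeVirt's actual controller code: the function name and the `treat_paused_as_unready` flag are invented for illustration. It shows the current behavior (the VMI's Ready condition simply mirrors the pod's probe result, so a paused VMI can still show ready) versus the proposed behavior (pausing forces not-ready).

```python
# Sketch of the VMI readiness semantics under discussion.
# Assumptions: `pod_ready` is the pod probe result, `paused` is the VMI
# pause state; both behaviors are reduced to one boolean flag.

def vmi_ready(pod_ready: bool, paused: bool,
              treat_paused_as_unready: bool) -> bool:
    """Compute the VMI Ready condition from the pod probe and pause state."""
    if treat_paused_as_unready and paused:
        return False      # proposed behavior: paused implies not ready
    return pod_ready      # current behavior: just mirror the pod probe

# Current behavior: a paused VMI whose pod probe still passes shows ready.
assert vmi_ready(pod_ready=True, paused=True, treat_paused_as_unready=False)
# Proposed behavior: pausing forces not-ready regardless of the pod probe.
assert not vmi_ready(pod_ready=True, paused=True, treat_paused_as_unready=True)
```

The open question raised in the meeting is exactly the flag above: whether flipping it has adverse consequences elsewhere in reconciliation.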
G: Yeah, I mean, I guess for me, when we see paused — I think I've seen this patch — if we see that the VM is paused, that means something as well. Yeah, I don't know. I could see the argument being: yeah, it should really be false, because now we shouldn't receive traffic. I guess it really is just a concern of whether there are any adverse consequences to doing this. I think this just needs a lot of testing.
I: Okay, let's move on — we have five minutes and one more. We are at the end of the week.
I: And this seems... is there any reason not to do this, or do we have any workarounds that could be used without any changes to the controller?
I: Okay, this doesn't look good, and it sounds like we would need it anyway, I think. So let me just mark this as accepted, unless anybody objects.
D: It's — it looks like those asterisks are what he's trying to build out, yeah.
A: Yeah, yeah — let's do that. The guys are saying they're bored, and that's always a good indicator of saying "I need work", and this would be a good thing for them.
A: Sure. Well, thank you for handling that bug scrub. It is 7:57, and that concludes our weekly KubeVirt meeting. Thank you everyone — we'll see you next week, or we'll see you on the mailing list or Slack.