From YouTube: KubeVirt Community Meeting 2022-01-26
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/
B: Great. To be honest, I'm not exactly sure where I should have put this, or whether it should rather have gone to the open floor, but it's under general notes anyway. What I wanted to mention is that we have seen an increase in unit-test failures recently, which hurts not only the post-submit job that runs on every merge to the main branch, but also the PRs where it is failing. You can retest easily, of course, but what I've noticed, at least from ci-search, is that around 20% of all builds are currently failing.
B: This is an older issue, but I just wanted to point out that this is happening. To be honest, I'm not working on it. I think we should just discuss how we want to fix it, or whether we have plans to fix it.
B: That was actually a question I had, because in general, for functional tests, we have this recommendation, or this threshold of, I think, 10, below which the CI is considered stable. When one test fails around 20% of the time, that's quite an issue, I would say, and I would ask whether we should maybe apply the same thing for such tests.
B: The same thing that we do for flaky functional tests: quarantine it and remove it for the time being, or something like that.
E: From my perspective it probably makes sense. But I guess we just have two or three flaky unit tests, something like that, and it didn't really grow; they have just been with us for a very long time already.
E: I'm not sure we should add all the overhead to it, like also running the unit tests periodically and so on; I don't know. On the other hand, if we introduce this flow, we could probably also optimize the unit tests to be faster and get more data points, because we still have issues with these flaky tests when running tests more in parallel, and so on, in the unit tests.
F: Right, the only input I can think of is that when you skip unit tests, versus end-to-end tests, you introduce a higher risk, I guess, because unit tests are supposed to be focused on a specific area, and if one is flaky, then maybe the production code is too; whereas in end-to-end tests it's sometimes just load or the CI.
B: Yeah, and Eddie, I agree with you that the risk is increased when you start skipping unit tests, but nevertheless it is still very annoying to have such flaky tests in there. To be honest, I just wanted to mention it and raise awareness, and hopefully get someone to volunteer to fix it.
E: Yeah, maybe also just tag a few of the maintainers on this issue and ask them for their opinion, and then we can probably agree on what to do with it, because a 20 percent failure rate is really too much.
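The 20% figure discussed above comes from eyeballing CI results. As an aside, a failure rate like that could be computed from a list of build outcomes along these lines; this is an illustrative sketch only, with hypothetical names, not KubeVirt's actual CI tooling:

```python
# Illustrative sketch: compute per-test failure rates from CI build
# results and flag tests at or above a quarantine threshold.
from collections import defaultdict

def failure_rates(results):
    """results: iterable of (test_name, passed) tuples -> {name: rate}."""
    runs = defaultdict(lambda: [0, 0])  # name -> [failures, total]
    for name, passed in results:
        runs[name][1] += 1
        if not passed:
            runs[name][0] += 1
    return {name: fails / total for name, (fails, total) in runs.items()}

def quarantine_candidates(results, threshold=0.20):
    """Names of tests failing at or above the threshold (20% here)."""
    return sorted(name for name, rate in failure_rates(results).items()
                  if rate >= threshold)

results = ([("TestA", True)] * 8 + [("TestA", False)] * 2
           + [("TestB", True)] * 10)
print(quarantine_candidates(results))  # ['TestA']  (fails 2/10 = 20%)
```

In practice the threshold and the list of quarantined tests would live in the CI configuration rather than in code like this.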
A: I think, Stu, you have the first item: a follow-up to All Things Open.
G: Hey guys, can you hear me? Yes? Fantastic. I was just messing with my microphones; I realized one wasn't actually attached. So this is just a follow-up: the All Things Open conference actually happened mid-October, but I realized we never shared the recording. So there you go: proof, in video form, that KubeVirt does run on ARM. If you're interested, check it out. Cool.
E: Yeah, I just saw that we have had a few attempts to update Go and the Kubernetes dependencies. I think the networking team started on it, and I've seen a second PR which seems to pick up that work. I just wanted to ask where we stand there. I could also have pinged people individually, but since I think a lot of those people are here, I wanted to ask directly.
H: There is maybe some problem with the test, because sometimes the test fails because of the timeout. For some reason it seems that resources take more time to go into the running state, and tests fail because of the timeout. When I try to reproduce it in a local environment, everything seems to go well; I really don't know what is happening.
H: What is happening behind the scenes is maybe something related to parallelism, but I really don't know.
H: Yes, because it includes the other changes pushed by... I don't remember the name. Perfect.
J: It's not fixed yet; we have a PR, but it hasn't been merged, so I'll link the PR real quick.
H: Can you put the PR here?
J: I will link it; let me find it real quick.
H: Thank you so much.
J: Here, I will put it in the document. I tried to describe the bug in the PR description. Ultimately, without this bug fix we run the risk that, when we update KubeVirt, it will restart all of the virtual machines in a pool. With this fix that shouldn't occur. That's about it.
E: Yeah, so this was basically why I asked whether you two have an agreement, because there are two PRs where one includes the change of the other. I just wanted to make sure we don't overlook something, and that you had agreed on which PR to proceed with. I don't have any preferences.
J: Yes, I will just briefly do a plug for the mailing-list post I sent yesterday about defining API graduation guidelines. I've created a kind of rough guideline document for how we should progress our APIs across the KubeVirt organization. It's not meant to be a really harsh guideline or anything like that, as much as to provide expectations for the timelines we should be targeting for the graduation of our APIs, and the criteria involved.
J: Sometimes our alpha APIs are effectively GA at this point, because we support them, maybe in downstream products; they have live production experience and they're stable. It's just that we haven't gone through the task of actually incrementing those APIs.
J: So this is a rough policy, something we can all point to and agree on: this is the criteria that needs to be met before graduation. And we should start making an effort to progress our APIs through this graduation, because they mean something to the people who are consuming them.
J: If somebody sees an alpha API in their manifests, that leads to questioning the supportability of that API: will it get deprecated in the future, and things like that. Often that's not going to be the case, so we should go ahead and graduate that API; we just haven't done it yet.
J: So, all that said, that's the document. Take a look at the pull request and provide your feedback. We're just trying to give some rough guidelines here, nothing terribly harsh or demanding; it's more of a nice, friendly "here's how."
L: Yeah, David, thanks for sending that out. I'm going to send a message to kubevirt-dev sometime today, probably, about getting feedback on the snapshot API, which has been in alpha for a while. I'll send the mail out, and maybe we can discuss it next week in the community call.
J: Yeah, that's actually the one I had in mind when I was writing this document, because, I think it's worth saying, the alpha API for the snapshot got delivered in phases. It's a really big project, and we had to do things like add the snapshot portion in one phase of implementation and the restore of the snapshot in another.
J: So it came in layers as we were implementing that feature, and it stayed in alpha for a while. Eventually it got adopted downstream by Red Hat, and at that point it probably should have moved to beta; we just haven't done that yet. I would be in full support of it being beta at this point.
L: Yeah, it just moved slowly, and we didn't do a good job of publicizing it, because who uses snapshot without restore? And then it was like, well, no one really wants to do offline snapshots. So now we've got a pretty full-featured snapshot implementation: we can do online snapshots with guest-agent fs-freeze integration, and we'd like to hear from people who are using it and get feedback.
A: If not, let's say we have time to move on to a bug scrub. Would somebody like to help lead that?
E: While we were in the meeting: yes, it seems like they are creating their own containers. Okay, I will follow up on the conversation there, but it seems like they're doing something wrong. I'm not sure; we have to ask what they're doing, whether we can help them, and add the triage/needs-information label there. Then we have an issue from Vasiliy, 21 hours ago: legacy container image schema v1 manifests break migration.
E
Then
this
image
idea
reported
here
looks
like
the
digest,
but
it's
just
an
id
and
not
really
the
digest
like
what
vasily
says.
Yeah
seems
like
we
have
to
is
vasily
on
the
call.
M: This schema 1 manifest is kind of legacy stuff. I asked a question in the containerd GitHub, and since they are not going to support it, they do an internal conversion from schema 1 to a newer format, and therefore they do not expose the digest.
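The distinction Vasiliy ran into is that a registry content digest reference ("repo@sha256:...") pins a pullable image, while a bare image ID ("sha256:...") is only a local identifier. A rough sketch of telling the two reference forms apart (illustrative only, not KubeVirt or containerd code):

```python
# Illustrative: distinguish a digest *reference* (content-addressed and
# pullable) from a bare image ID (a local "sha256:..." string with no
# repository attached, which cannot be used as a pull reference).
def classify_ref(ref):
    if "@sha256:" in ref:
        return "digest-reference"  # e.g. repo@sha256:<64 hex chars>
    if ref.startswith("sha256:"):
        return "image-id"          # local ID only, not pullable
    return "tag-or-name"           # e.g. repo:tag

print(classify_ref("quay.io/kubevirt/demo@sha256:" + "ab" * 32))
print(classify_ref("sha256:" + "ab" * 32))
print(classify_ref("quay.io/kubevirt/demo:latest"))
```

For schema 1 images handled by containerd's internal conversion, only the ID form ends up in the reported status, which is why migration code expecting a digest reference breaks.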
M: Other container runtimes kind of handle this normally. I tested with Docker and it works; also CRI-O, which is used in KubeVirt's CI, as far as I know.
E: I'm not sure we can decide it on this call. I think it makes sense to investigate a little how common it is that people are using containerd with old images.
M: Yeah, I just wanted to have it documented somewhere, so that if someone else faces these issues, it can at least be discussed here.
E: Yeah, the next one is about a flaky test where we are using GPUs on different test lanes in parallel. It seems there is nothing we have to go into here; it's already being worked on: "when a VirtualMachineInstanceReplicaSet replica service connects, clients cannot connect to one VMI replica." Okay.
J: I don't think this will work, because if the selector is across all five replicas, we're just going to get a random replica when we try to connect over that service; it won't be the same one consistently.
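The selector behavior J describes can be sketched with plain label matching: a Service whose selector uses a label shared by all replicas balances across all of them, while pinning one replica requires a label unique to that pod. A minimal sketch with made-up labels, not Kubernetes or KubeVirt code:

```python
# Sketch of Kubernetes-style label selection: a pod matches a selector
# when its labels contain every key/value pair in the selector.
def select(pods, selector):
    """pods: {name: labels}. Return names matching every selector pair."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

# Five hypothetical VMI pods sharing one label, plus a unique per-pod label.
pods = {f"vmi-{i}": {"app": "replicaset", "pod-name": f"vmi-{i}"}
        for i in range(5)}

print(select(pods, {"app": "replicaset"}))  # matches all five replicas
print(select(pods, {"pod-name": "vmi-3"}))  # matches exactly one replica
```

With the shared-label selector, each connection over the Service may land on a different backend, which is exactly why the test cannot expect to reach the same VMI consistently.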
A: Well, on a side note, would that be a useful good first issue for somebody, to add the documentation to the user guide?
E: This issue here, I think Vladik is working on it already.
C: Okay, I think, yeah, that's Vladik.
O: We've been trying to figure out some of our thresholds, and one of the problems is figuring out how to read our metrics. I have a long explanation in that comment there, but yeah, we've been talking about this.
M: But I faced it with a local-path storage provider, I think, which I used for testing.
M: Yeah, this is actually related to the TPM implementation. If you run libvirt with TPM support, then libvirt will start the software TPM (swtpm) binary and will use the UUID of the VM to create a unique path for the state of this TPM device. This is reflected in the process command line, which is basically what is used for searching for the VM process. But in the pull request which adds TPM support I think it's already handled; I tagged this issue there, and it should probably be fixed.
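The clash M describes is that the VM's UUID now appears in more than one process command line, so a naive search matches the swtpm helper as well as QEMU. A sketch of the idea over a pid-to-cmdline mapping; the function and data are hypothetical, not virt-launcher's actual code:

```python
# Sketch of the problem described above: searching process command lines
# for the VM's UUID matches both the QEMU process and the swtpm helper
# (whose state path embeds the same UUID), so the lookup must also check
# which executable is running.
def find_vm_pid(cmdlines, uuid):
    """cmdlines: {pid: [argv0, arg1, ...]}. Return the QEMU pid, or None."""
    for pid, argv in cmdlines.items():
        if any(uuid in arg for arg in argv) and "qemu" in argv[0]:
            return pid
    return None

procs = {
    101: ["/usr/bin/qemu-kvm", "-uuid", "1234-abcd"],
    102: ["/usr/bin/swtpm", "socket", "--tpmstate",
          "dir=/var/run/swtpm/1234-abcd"],  # also contains the UUID
}
print(find_vm_pid(procs, "1234-abcd"))  # 101, not the swtpm helper
```

On a real system the mapping would come from reading /proc/&lt;pid&gt;/cmdline; the point is only that matching on the UUID alone is no longer unique once swtpm is in the picture.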
E: I think one other way to detect the main process would be reading the libvirt state file, where the main PID of QEMU is also stored; so instead of detecting it this way, we could probably...
E: It could get stopped before QEMU, and we just want to ensure that all our shutdown processes and so on really exit with the launcher, if possible, after QEMU. Okay, yeah. But we can also discuss maybe detecting it over the state file from libvirt, where the process PID should be written.
A: The last one actually looked like a good first issue, because the PR was a single-word change in the network docs, and then another comment said: this is a good single-word change, but the whole section needs to be updated.
E: And you thought it might be a good first task, or something? What did you mean?
F: If we have a secondary network with bridge binding, and the guest agent reported at least one interface, then for the secondary network there is no IP, and it will override anything that is on the pod. Which makes sense, because it means there is nothing inside your guest. I know that's it, but in general...
F: For the pod network, I think it behaves the same, because...
E: This has to happen; but just because it's not reported by the guest agent, the field is empty, and then a lot of things break. And it's sometimes not so easy to fix: for instance, in the Fedora case there was an SELinux issue, where SELinux, at a certain point after an update, started blocking the guest agent, and then it's not so easy to get out of that. And for the pod network...
F: So I just wanted to say that this is a bad thing for the bridge-binding case; if it is masquerade, then this is not a problem. Sorry, with masquerade it's not a problem, right. And in the scenario you describe, if there is a problem with the guest agent reporting the network stuff in general, then I would expect it not to report any interface, and because it will not report any interface...
F: Then this scenario will not happen, because the only information will be taken from the pod, as you expected. So it should not happen that, if nothing is reported from the guest in terms of networking, then yeah...
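The merge behavior being debated here can be sketched as a simple precedence rule: if the guest agent reports any interfaces, its data replaces the pod-derived view (so an agent that sees no IP on the secondary network blanks that field), and only when the agent reports nothing at all does the pod information survive. This is hypothetical logic for illustration, not KubeVirt's actual status-reporting code:

```python
# Sketch of the interface-status merge discussed above (illustrative):
# guest-agent data, when present, takes precedence over the pod view.
def merge_interface_status(pod_ifaces, agent_ifaces):
    """Each arg: {interface_name: ip_or_None}. Agent data wins if any."""
    if not agent_ifaces:           # agent reported nothing: keep pod view
        return dict(pod_ifaces)
    merged = dict(pod_ifaces)
    merged.update(agent_ifaces)    # agent entries override, even with no IP
    return merged

pod = {"default": "10.0.2.2", "secondary": "192.168.1.5"}
agent = {"default": "10.0.2.2", "secondary": None}  # no IP seen in guest

print(merge_interface_status(pod, agent))  # secondary IP gets blanked
print(merge_interface_status(pod, {}))     # agent silent: pod view kept
```

Under this rule, a completely silent guest agent is the harmless case F expects, while a partially reporting agent is the one that wipes out the pod-derived IP.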
F: Well, let's check it out then, because the code says one thing, but sometimes, yeah...