From YouTube: KubeVirt Community Meeting 2022-06-01
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/
A
And while everyone logs in and fills that out, if anyone wants to introduce themselves? I know we have Alexander, new on the call with us today.
B
Hey everyone, I hope you can hear me okay? Yeah, okay, awesome. So my name's Alex; I'm a consultant with Red Hat, based in Canberra, Australia. I've been moving, I guess, more into the OpenShift and Kubernetes virtualization side of the house, so I thought I'd pop in and say hello.
A
Okay, Daniel, do you want to go ahead and speak to the PR item you have on the agenda?
A
Okay, it's a bit quiet, but we'll do our best to hear you; I'm trying to increase the volume.
D
Okay, I just wanted to say that we had something strange going on. Catherine, would you click the link that I inserted, the job load dashboard? What do you see there?
D
It should hopefully be visible soon. Yeah, that's exactly the thing. What you see there is the sudden impact that a lot of queued PRs had on our CI infrastructure.
D
So what you see there, for example, is that there are lots of jobs in the queue. Just to explain this a little bit: what happened was that there were around 10 PRs that weren't okay-to-test, which suddenly became okay-to-test, and as we have around 40 jobs per PR, we had around 400 jobs.
D
That would be running suddenly, and this explains the spike. So, just so that everyone knows: if someone is suddenly getting accepted into the community (which, just to be clear on this, I really love people getting accepted into the community), we should probably look at their PR status, and maybe, if they have lots of open PRs, we should be a little bit more careful next time. Does that make sense, what I'm saying?
D
Actually, the good thing is that it coped pretty okay; it got into a normal state a couple of hours later. But yeah, just please be aware that if you are adding someone to the community and they have open PRs that are not yet okay-to-test, this might flood the CI system, since jobs will be triggered for all of those PRs.
D
Yeah, there were 10 PRs open. The thing was that the member that just got accepted had 10 PRs open that were marked okay-to-test.
D
To be honest, I could think of a couple of things. We might not accept a person if they have so many PRs open that aren't okay-to-test, and maybe first clear out the situation somehow: look at the PRs and post an okay-to-test one by one. I think that, for example, the PRs in question were refactoring PRs.
D
There is nothing wrong with refactoring PRs, to be clear on that; I think that cleaning up the code is a good thing. But the thing is: if you are doing lots of commits in separate PRs, you increase the load, and I think the better thing probably would have been to have a lot of commits in one PR.
D
Then we would not have this situation. I don't want to go into more detail on that, because I think the point I'm trying to make is clear. I'm just trying to raise awareness: you should probably sometimes look at how many PRs are open by someone who is about to become a community member, and maybe sort out those issues beforehand. We didn't get into a bad situation, because we didn't have that much load on our CI, but we could have, right?
F
So I think the issue wasn't adding the community member to the organization; it's just that the PRs were manually marked okay-to-test, ten of them at once. The member was only added this morning, but the issue happened yesterday. So it was really just a lot of PRs being marked okay-to-test at once.
D
So, okay, then it's even worse. I'd say that the folks that added the okay-to-test should probably have had a look at the PRs and suggested that they should be one PR, because I think that if you are moving one function around, that is not really a thing that should have its own PR. If you are moving several functions around, though...
A
So just take it, go for it. Well, I was just thinking: we're not expecting to see this exact scenario extremely frequently, so I imagine if we watch for it in any future community joining approvals, it should be fairly reasonable to manage, right? I don't think we have to get too far into the weeds with that.
D
Yeah, actually, that's the thing that I'm trying to point out; sorry for not being clear enough. What I was trying to say is that if you are issuing an okay-to-test on several PRs from the same person, then there might be something wrong. That's all I'm trying to say: maybe the structure is not that good. Am I clear enough, or...?
G
Yeah, so this actually happens quite frequently: if it's a member that submits the PR, it doesn't even need an okay-to-test. So whenever someone is doing code cleaning and creating 10 PRs instead of one, we get into that scenario.
E
I understand the problem, but you are asking me to take into account, in how I work in a regular way, that there is a CI infrastructure behind it. If I'm doing 10 refactorings and they are not dependent on each other at all, I prefer to send 10 PRs, because then each one can be independent and can get merged quicker. If I send 10 commits on a single PR for 10 different things, even if it's the same refactoring, there is a good chance that I will wait on every single thing; there will be 20, 30, 40 comments on this PR. So there is an advantage to a small PR.
C
I would add to that, Edward. I would say that we're not unique as an open source project; I think the assumption would automatically be that there is a CI system behind any open source project that you're committing to, and so I would always be cognizant of the load that is being placed on the CI system.
E
What I'm trying to say is that I don't think we can control it, in the sense of "please don't do it". Or do you want to put on the maintainer, or whoever approves it, the understanding that they need to care about this? I mean, they're supposed to care about being effective in taking stuff in. This is what I'm trying to get at.
J
We're getting pretty far into the weeds on this. Let's just try to make fewer PRs if we can; when we're making judgment calls about refactoring or whatever we're doing, favor fewer PRs. I think we can just leave it at that for now.
D
Exactly, that's my point. And besides that, don't get me wrong on this: I have nothing against cleaning up the code base, which is a great thing to do. But in general I understand refactoring as something that you do while you're trying to implement a feature, and refactoring on its own, I think, doesn't have as much value.
A
Awesome, okay, all right! In that case, do we want to go ahead and jump into the GDB port discussion?
B
Sure thing. So yeah, at the moment we're with a client, or a customer, and one of their use cases for virtualization is kernel module development. Traditionally they were using just KVM on top of RHEL or whatever, and so they were able to expose the GDB port quite easily, whereas now they're making the shift over to KubeVirt and OpenShift Virtualization, and it's not something that they can just natively do. So, as an intermediary solution...
B
We're looking at using a sidecar; we've based the current iteration off of the example hook sidecar, and we'll just modify that to change the domain XML and expose the GDB stuff. So, going forward, what we're hoping to help implement is something more native that doesn't require the use of the sidecar, whether that be exposing the port via annotations on the VM that you want to have that functionality on, or whatnot; that is, I guess, part of the discussion that I'm wanting to have.
B
So I guess what I'm trying to get out of this at the moment is: I've submitted a few things around this, so you might have seen a couple of things from me on the mailing list. I'm just trying to work out what the next steps are to get this moving, and then I want to be able to put some resources behind it from our end as well.
K
Can this GDB support be enabled dynamically, through a monitor command, and not statically via the command line in QEMU?
B
So, truthfully, how it's being used is either just by using that -s switch with QEMU, or, in KVM, adding to the domain XML those qemu command line arguments where you can specify -gdb and then the port as well. But ideally it would be something that we could turn off and on dynamically, rather than at the definition of the VM at the very start. Being able to turn it on when the VM has already been deployed would be fantastic.
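A rough sketch of that static domain XML approach, assuming libvirt's documented qemu command-line passthrough namespace; the port number here is just an illustrative choice (QEMU's -s switch is shorthand for -gdb tcp::1234):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... the rest of the domain definition ... -->
  <qemu:commandline>
    <!-- equivalent to starting QEMU with "-gdb tcp::1234" -->
    <qemu:arg value='-gdb'/>
    <qemu:arg value='tcp::1234'/>
  </qemu:commandline>
</domain>
```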
K
So, in general, libvirt supports sending QEMU monitor commands to running VMs; it's just a matter of exposing this in KubeVirt. I guess it could be something like a subresource or something of that nature.
K
We could do something similar to this with virtctl debug or something like that, or as a subresource. But then it would dynamically send a monitor command, and then that would...
K
It would be streamed out through a certain socket.
J
With your issue that I'm looking at now, with the sidecar: how are you connecting to the GDB port?
B
Yeah, so I'm still working through it, because at the moment it seems that the sidecar feature gate is disabled by default in OpenShift, through KubeVirt. So that's the hurdle we're currently working through. We believe it works; I built the image and it passed all the build tests, so I guess it's at least syntactically correct, but we haven't actually been able to deploy the sidecar in an OpenShift environment yet to make sure it functions as expected.
J
So when you do, and that libvirt domain is modified, I don't know how you'd connect to that. I think you'd have to kubectl exec into that virt-launcher pod and connect there.
B
So yeah, we will probably create a Service and expose that Service in order to be able to use GDB from our physical workstations and connect to that port.
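A minimal sketch of such a Service, purely as an assumption: the name, namespace, label selector, and port below are hypothetical placeholders, and the selector would need to match labels actually present on the virt-launcher pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-vm-gdb            # hypothetical name
  namespace: default
spec:
  selector:
    kubevirt.io/domain: my-vm   # assumed label on the virt-launcher pod
  ports:
    - name: gdb
      protocol: TCP
      port: 1234               # assumed to match the port QEMU's gdbserver listens on
      targetPort: 1234
```

From a workstation that can reach the Service, GDB would then attach with something like `target remote <service-address>:1234`.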
B
Yeah, that makes a lot of sense, and that's something that we'll need to investigate as well.

J
Okay. What I'm trying to determine is how user-friendly this needs to be for you all. Is it as simple as: you enable this QEMU arg value on the domain XML, and then you'd be comfortable with kubectl exec into the pod and doing stuff there? Or do you need it to be super user-friendly, where there's just a command line tool that connects to it directly?
B
No, it definitely doesn't need to be that user-friendly. I mean, the people that are using this do kernel development and whatnot, and a lot of them do a lot of other OpenShift or Kubernetes related development as well. So it doesn't need to be super user-friendly. Yeah, okay.
G
You can also connect from the outside to virt-launcher. I've been using pprof to debug virt-launcher, and I was able to connect from the host to the compute container running QEMU in virt-launcher.
G
Yeah, QEMU is running in the same container as virt-launcher, right?
A
All right, so it sounds like we have a general direction, at least, to investigate. Thanks for bringing that up; that was a neat conversation, and it will be interesting to see how that develops for you guys. Thank you, and thanks for everyone's input, I really appreciate it. Just out of idle curiosity: is the customer that you're working with potentially valid for adding to the KubeVirt adopters list?
B
I'm talking to them at the moment, and I'm really hoping that they'll be okay with it, but yeah, I'm in discussions with them at the moment.
E
I just have a question; I don't know if someone here can answer it, but I will try. So, in the end-to-end tests...
E
We have some waiting functionality; for example, we wait for the VMI to be ready or running, and as part of this waiting, in some cases (not all of them), a watch is created inside that watches for Event objects.
E
I mean, I'm talking about the Event object, not the event command. So, sorry: not the events that the watcher receives, but the Event objects. I hope I'm being clear; it's usually confusing. So can someone please help me understand?
J
Yeah. So there are transient reconcile errors that can occur, and an event would be fired when that happened. We'd want to know about that.
J
Now, if we were just doing time-based waiting (waiting for a VMI, for example, to reach a running state within 30 seconds), that error might get covered up, because it was transient. If we look at the event log, we can determine that something unexpected did occur, something that's new, and we can go back and investigate that. But that's kind of our only evidence right now. There are some other ways we can look at that, but that's kind of the way it's been done historically in KubeVirt.
E
So
so,
but
what
you
want
to
understand
is
in
the
middle
of
the
test,
so
you
want
to
fade
the
test
if
this
is
happening.
This
is
what
you're
saying,
because
in
the
sense
of
collecting
the
information,
the
information
I
think
it's
collected
anyway
in
the
end,
but
you
want
it,
you
want
to
have
an
action
on
it
in
the
middle.
This
is
if
this
is
what
I
understand.
E
But let's say that it happened: you have a transient error, and then the next retry comes and it succeeds. So why is that a problem? Because at the moment the end-to-end tests have parallel access to the API server, and some load in some cases, so I guess it can happen all the time.
E
We'll have transient errors like this, so I'm trying to understand whether this is intentional, or whether we can say that it's not acceptable for it to do one more reconcile. This is what I'm trying to understand.
J
It's one of those things we want to understand. So, if an error occurs, and it's well understood why it occurs, and it's necessary as part of the test, then that's why we have an ignore filter at times, where you can say: wait for this condition, and ignore this specific error in the event log. It's for those kinds of exceptions. So yeah, there are times when, you're right, we'll understand why something occurs and we'll want to consider ignoring it part of the test.
J
Primarily, it's to catch unexpected reconcile events, because all these things add up. Just to give a little bit more context: say we have a new error condition in our VMI start flow that we weren't aware of, and it causes possibly 20 more reconciles every time a VMI starts. Then, when you start 1,000 VMIs, that's 20,000 more reconciles of API load, and things like that. So that's why we want to be aware of these things.
E
Okay, okay. So, my last question regarding this: if we had to watch the object itself (let's say we watch the VMI itself, the watch events from the VMI, not the Event object), wouldn't we get some update on each reconcile about something happening? Or are we not updating the status all the time, like if there was an error, for example?
J
I think that might not be possible. I'd have to go back and look at how the reconcile loop is handled and how errors propagate from the update-status function, but things like that can occur, so you wouldn't always see it if you're watching the object. I'm also not sure that we're guaranteed to see every update; we get the latest update, but I don't know if updates are ever munged together if they happen really quickly.
E
You do get all the events from a specific time, but you need to handle it correctly; it's, I guess, more or less like an informer, but without a cache. But if you cannot trust that you get an event when there is an error, then yes.
I
So this is just a four-week warning, I guess, so that no one gets any surprises, in case they have a particular preference for one of the, I think, three entries in our calendar. Also, while we're here: I know there's one created by kubevirt at cncf.io, which should be the official one; I know there's one created by Fabian Deutsch; and I think there's a third one, but I deleted it from my personal calendar. Can someone help me out and tell me who the owner of that third calendar invite is?
I
Okay, excellent, all right. So I will contact Fabian and Adam to delete those. Over the next four weeks I'll give a t-minus warning, and then at the end of June I'll have those removed, and we should just have the one. It seems superfluous, but it does mean that when daylight savings changes, we won't have that confusion, because the different calendar invites are set to different times.
I
I also haven't put this in the agenda, so sorry about that, but I'll also use this as an opportunity to remind people that the KVM Forum and KubeCon US call-for-proposals submission dates are coming up on Friday. If anyone's still thinking about stuff, or isn't sure about how to commence with submitting those things, please hit me up; I'm happy to talk through things, and I've learned a little bit about the different submission requirements for those two conferences. So please ping me or send me an email, or raise it here by all means, but that's it for me. Thank you.
A
Awesome. Thanks, Andrew. We don't have anything specifically on open floor.
D
I didn't look too closely at the PR, but I think that the intent is quite the opposite: I guess that they are trying to parallelize the tests somehow, and I think that is a good thing, because overall, if tests are running in parallel, it should lead to a shorter execution time for the overall lane. But yeah, this is just my guessing, so I'll just look at this PR and probably clear this out.
A
Interesting; I have not experienced that one myself. Does anyone else have experience with this possible scenario? It looks like an asynchronous network performance observation.
A
And in that case, thank you all for all of the participation this morning. I'm sorry, it's not morning for everybody; I actually do know that. Anyway, have a great week, and I will see you same time, same place, next week. Thanks, everyone.