From YouTube: KubeVirt Community Meeting 2021-07-14
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.cteh78cpgkrp
A: Welcome everybody: this is July 14th, 2021, and this is the KubeVirt community meeting. Chris is not able to run the meeting this week, so I am standing in for him. My name is Stu Gott, and we normally start out with introductions. So I would open it up to anybody who is new to the community or wants to introduce themselves, just to say hi.
A: All right, it doesn't look like we have any takers this week. If there is anybody, you're welcome to speak up at any point. So looking at the agenda, the first thing we have is a topic by Itamar, and so I will let you introduce that, sir.
B: Hi everyone, so this is about the fact that we are using an old stress binary in our Fedora with-tooling image.
B: So basically, this binary is dead. It hasn't been maintained for a long time, and it has been rewritten and is now called stress-ng. This binary is fully compatible with the old one. I've done some local testing to verify that, and I've sent a mail about it to kubevirt-dev. So I just wanted to give you a heads up and to know if there are any objections or thoughts or questions about it.
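As a hedged illustration (the exact flags the tests use aren't stated in the meeting), stress-ng accepts the classic stress flags unchanged for the basic stressors, so an invocation can typically be swapped one-for-one by replacing only the binary name:

```shell
# Hypothetical example: an old `stress` invocation and its drop-in
# stress-ng equivalent. Only the binary name changes; every flag
# (--cpu, --io, --vm, --vm-bytes, --timeout) is kept as-is.
old="stress --cpu 2 --io 1 --vm 1 --vm-bytes 128M --timeout 10s"
# Strip the leading "stress " and prepend "stress-ng".
new="stress-ng ${old#stress }"
echo "$new"
```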
B: Right, so as far as I understand, we tried to do that in the code and for some reason it didn't really work. I know that Roman issued a PR about that recently that's supposed to fix it, but that's all I know; I don't have more details.
E: Yeah, exactly, we need some kind of changes that will occur in the VM. If this is completely static, I mean, the migration can finish before we actually have a chance to switch to post-copy.
F: Yes, you'll have to have a guest agent installed and running, and we would try to run the guest-ping command from virt-launcher, and we'll see: if the guest replies with an error, it's not ready. If it replies with a nil error, we consider the guest is up and we update the readiness status.
A: Okay, sounds pretty harmless. And so the guest agent in this case would probably be up more or less when the system is already running at that point. So it's a decent indicator that the system is actually ready to receive traffic.
F: Yep, it's actually a more precise indicator compared to other ones.
A: Okay, that would be the last topic that was proposed. Is there anything on anybody's mind that they would like to bring up this week?
C: Yeah, actually, I had just one question: this is the readiness on the pod, right?
F: Yes, yeah. We set the readiness, the guest agent ping, on the spec of the VM or VMI, and that would be translated to a virt-probe-based probe. We already have a virt-probe binary inside virt-launcher, and it would mean that it would be translated to a virt-probe-based exec probe on the VMI pod. Yes, in effect. Okay, did I answer it? Does it confuse you?
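A sketch of what that could look like on the VMI spec (hedged: the field name guestAgentPing and the exact shape are assumptions based on this discussion, not confirmed in the transcript); KubeVirt would rewrite this into the virt-probe exec probe on the virt-launcher pod described above:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: testvmi-guest-agent   # illustrative name
spec:
  readinessProbe:
    # Assumed field: succeeds once the qemu-guest-agent inside the VM
    # answers a guest-ping issued from within virt-launcher.
    guestAgentPing: {}
    initialDelaySeconds: 120
    periodSeconds: 10
  domain:
    devices: {}
    resources:
      requests:
        memory: 512Mi
```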
C: No, the reason I was asking this is because we talked about this two weeks ago, when this was brought up for pause, right, where we wanted to express readiness, or we wanted to...
C: We wanted to disable readiness when we pause a VM, and I guess one of the summaries of that discussion is that it makes sense that when we pause a VM the pod should not be ready, because it's not able to receive traffic anyway. So for this flow, what it sounds like is: you have a guest agent, you're starting a virtual machine...
C: You have a ping that goes through, and it's going to make sure that the VM that is launched is actually able to receive traffic. So normally, in the case of there not being a guest agent, when does the pod go ready? I think it's just when it starts, right? Is there anything else that currently sets that?
C: Yeah, so that would mean, just thinking of the phase transitions, Roman, that the controller doesn't hand off to the handler until we see the pod is ready, right? So our phase transition is going to change a little bit.
G: That's independent. We are not waiting for the pod to become ready, so we can't... I mean, we can't use the readiness probe on the pod, because a readiness probe on the pod means ready to receive network traffic, right?
A: It's a KubeVirt-internal readiness, I would call it. I just have to... I have to open the code to check.
G: Yeah, so I just wanted to highlight that we have different ways for that too. It's a little bit annoying, as you said, that when it's paused we would still try to route traffic there, and one way is also to use the... it's a little bit tricky to let virt-handler do stuff there.
A: Okay, so it sounds like we've wrapped up there. Is there anything else that we'd like to raise before moving on to the open floor?
I: Just, what is it? Just, you know, checking that the application is running?
G: That's the guest agent readiness, where we assume that as soon as the operating system is booted and guest agent pings are coming through, it's ready to receive some kind of traffic through services. But for the first two, there's always the assumption that we actually have no clue what you're doing with your VM.
G: But if you know which application you want to run in it, and you want the services to only route traffic to the VM when your application, or applications, inside the VM are ready, you can really define a readiness probe on ports, which does a real HTTP or TCP probe on the VM before going to ready. Those are the different grades, let me put it that way.
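For contrast with the guest agent ping, the HTTP/TCP variant G describes probes the application inside the VM over the network; a minimal sketch (port and path are illustrative):

```yaml
# Sketch: an HTTP readiness probe on a VMI. The pod only goes Ready,
# and Services only forward traffic, once the app inside the VM answers.
spec:
  readinessProbe:
    httpGet:
      port: 1500
      path: /healthz
    initialDelaySeconds: 120
    periodSeconds: 20
```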
I: Yeah, so maybe my question is too low-level, but I was wondering if this guest ping is actually checking something at the IP level, or is it just checking that the guest agent service inside the guest, or the guest agent itself, is running, not related to networking? Because the guest may have no IP, even. This is the part I'm missing, yeah.
G: But this can very well be the case: you might not even have run DHCP, but you would still see it ready when the guest agent responds.
E: Yeah, it sounds like it's not really related to traffic. For example, in the tests we wait in some way for the guest agent to be alive before we check, for example, if the IP addresses are there, because if we don't even know whether the guest agent is running, then the information from the status about the details of the interfaces is not relevant, because it was not gathered yet.
E: Don't we already have a condition for the guest agent? I thought we do, and we set it dynamically.
G: Something like that, yeah, that was always there. This has been there for a long time, but the guest agent probe is an extra way which is designed to tie into traffic, into application readiness. It's not perfect or anything; it's like the initial delay, where we don't know anything about the operating system and just delay it for some time.
I: Maybe it's important to explain in the PR why this is needed compared to the old one: what's the difference, and when to use which one. It will be more clear then. Which PR, I mean? The PR that is now adding the readiness probe for the guest agent ping, to say why and when to use that one.
E: So now I'm curious, I mean: what's the difference between this PR and what we already have in terms of conditions? I mean, we go ahead and remove the condition when the guest agent is not responsive.
G: What you get with the exec probe is that without it passing, the pod doesn't get to Ready, and the services are not forwarding traffic to the pod. That's still the whole purpose in life of readiness probes compared to conditions: services and Kubernetes don't know anything about conditions which you add on our VMs.
A: A stronger... the exact thing is: the pod is ready, but when you start using a guest agent ping, now we're actually saying, as far as we can tell, you know, systemd has come up to the point where the guest agent has been booted inside the VM. So it's a stronger statement of readiness than just "hey, the pod is running", yeah.
H: That's what I mean, yep. I don't know... I'm not familiar enough with QEMU and the guest agent stuff to say if that's an optimization worth it, because the ping seems very simple and small, yeah. If the additional thing is a problem.
H: I think the current ping condition is on the VMI, no?
I: Sorry for asking, but something is odd here. You are saying that this is really a readiness from Kubernetes that the kubelet is checking, but something is not making sense, because in the beginning this will fail. So what are the implications? It will take time until it will be working. Yeah, yeah.
G: Just like some probes, yeah, and you have different configuration options for that. I didn't look at the PR, so I can't say that this is all there, but normally on all the readiness probes you have a startup delay time, or a delay time until you do the first probe, and you can specify retry timeouts and everything on the probe, yeah.
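Those knobs are the standard Kubernetes probe timing fields; a sketch of the ones G alludes to (all values illustrative, and whether the PR wires up every field is not stated in the meeting):

```yaml
readinessProbe:
  tcpSocket:
    port: 8080             # illustrative port
  initialDelaySeconds: 90  # wait before the first probe (covers boot time)
  periodSeconds: 10        # how often to probe
  timeoutSeconds: 5        # per-probe timeout
  failureThreshold: 3      # consecutive failures before NotReady
  successThreshold: 1      # consecutive successes before Ready
```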
G: So let's put it this way: we should probably discuss the implementation in the PR, but I think having the readiness probe in general makes sense.
A: Okay, I think at this point we're diving into the weeds of the implementation, and as I just said, maybe this is something we do want to capture in the PR itself, so that we don't lose the train of thought. Wow, I wouldn't have expected this to be quite so controversial. Sorry about that.
A: All right, moving on. Roman, I think it's time to turn it over to you to talk about vendor and go.mod.
G: Yeah, just to follow up: a few of you probably remember that on one of the latest libvirt Go bindings updates we had the issue that the vendor folder and go.mod were not completely in sync, and some code changes were checked in and got reverted later on by the next update, which caused some issues. We should just have had a CI job verifying that, and I just wanted to say that we have one now.
J: Yeah, I mean, just looking at the title I understand what's happening. So, well, we're terminating the pod.
J: If there's an Istio sidecar container, we're calling an HTTP request to that container to tell it to shut down quickly, and we just want to eventually time that out, I guess. Okay, we're going to sort of back off. That makes sense.
D: What I see there, what it will do with the new changes, is that if I start a migration and it disconnects SR-IOV devices while the VMI is in the migration phase, we update the domain migration metadata with completed true and failed true.
D: I managed to figure out which PR it was; let me send the link. It was the PR that reduced the VMI update collisions... event collision, something like that.
E: Well, the way it works is that when we update the metadata in the domain, virt-handler fetches it and then updates the status according to that. But if this doesn't happen, then...
D: Yeah, it's hard to get it at first, because it should work as you would expect. It can be deep.
G: There may be a bug in the PR, and then the status may not be updated. That could really be it; it's possible. I just thought that it's good to, I guess, yeah, file an extra PR or some extra issue or something and ping David on it again.
G: Then, yeah, I just checked the PR again; I don't see immediately why the issue could happen. But what can happen is that our expectation logic is not right: we think that we should get another VMI update soon, so we're not updating because of that, but this update never happens, and then we never clear the expectation, and it can be delayed for a few minutes until the next update comes in.
A: All right, looking at the mailing list for the last seven days: I think there have only been a couple of emails in the last few days. We have the release 43 and VM status definitions threads; Chandler had a question. I don't believe he's here, so I don't know if we've answered it, but I gave him a response and there's been no traffic since then. Jay, did we... I think I saw, looking at the AWS EKS support thread, that the conversation wrapped up successfully from his point of view. Are we good there?
A: I assume we're good; it seemed to have resolved. We can revisit if not. The next thing we had was stress versus stress-ng, which we've discussed today, and the SIG-scale meeting notes. Is there anything, Ryan, worth bringing up in this forum?