From YouTube: KubeVirt Community Meeting 2021-08-11
Description
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.qqwn7jvxag

B
I should probably introduce myself: I'm Josh Berkus. I work for the Red Hat Open Source Program Office, and I've been helping out the KubeVirt project because it's time for KubeVirt to advance within the CNCF hierarchy, from a sandbox project to an incubating project.

B
As part of becoming a mature CNCF project, one of the things the CNCF wants to see us demonstrate is that the KubeVirt project has its own independent existence, since it started as a Red Hat project, and that we have plans. We have both leadership now and plans for leadership in the future, which means adopting some form of formal governance. We've had informal governance up until now, with Fabian and Roman and David and the other project founders leading the project, but that doesn't offer a lot of direct opportunities for new people to move up into those leadership positions, which we absolutely want.

B
So, at Chris and Fabian and David's request, I drafted a sort of simple but complete governance for the project. The idea is that we have a number of people who are project leaders in various areas: the people who are approvers on kubevirt/kubevirt, and the people who lead different areas, like the couple of SIGs that actually have their own contributors and own certain code areas, etc. Among those people, we have some who actually want to lead the project not just technically but also as an organization: communicating with the CNCF, thinking about the future of the project, troubleshooting any social problems that come up, things like code of conduct violations, et cetera. Because the CNCF uses this term, that group of people, the subset of our actual leaders who also want to deal with governance issues, will become the maintainers. And then that group will add people to itself as folks step up or leave.

B
So take a look at the draft PR; it's there. As far as I can tell, the one open question from the comments is that we need to start out with an initial group of maintainers. The current list that I have there is the list that the CNCF has. I've been told that it's kind of dated, because it was basically our list of approvers for kubevirt/kubevirt from a year and a half or two years ago, something like that.

A
I definitely second the motion to review the list of maintainers because, as you just said, maybe 30 percent of this list are people who are not actively involved in KubeVirt on a day-to-day basis. So I guess a follow-on question as part of this is: is there a procedure for us to communicate changes back to the CNCF? In terms of, how do we keep what we think our maintainers are in sync with what they think they are?

B
And that's one of the reasons to have formal governance, because otherwise we never do this kind of review, and then we actually need something from the CNCF and they say, well, you can't have that because you're not a maintainer, and we're going to be like, well, none of these people are involved anymore. So the way to communicate it is: we need to actually finalize this list in the PR and merge the PR, and then somebody who is recognized at the CNCF as a maintainer, such as you, Stu, needs to communicate the new list to them, which you do by PR. You actually file a PR against the cncf/foundation repo.

E
So I've been looking through this document and I haven't made a lot of comments, because I wasn't really sure what the next step is. It sounds like, if we agree on just the general guidance that this document provides, then the next step, correct me if I'm wrong, is that we need to create an initial seed list of maintainers that actually makes sense, and then we've kind of done it.

E
At least, as a follow-up to this, it sounds like there need to be regular meetings of some sort. What do we want to do with that?

B
Well, first of all, there are both irregular and regular things. The irregular thing is, believe it or not, you actually already have a maintainer mailing list that you're not using, so you'll potentially want to start using that. The main reasons to use that mailing list would be for, say, discussions of promoting new maintainers, things like CoC violations, and also security reports; those all get directed to the maintainer mailing list, because it's a closed list. And then you should decide whether or not you need to actually have separate maintainer meetings.

B
I would say you probably should have a meeting for that, but like twice a year, right? Because the current maintainer list is out of date, but it's out of date by like two years. It's not like this gets out of date within a month.

A
Okay, this is really interesting; I didn't know that we had a mailing list for maintainers. Do we actually have published somewhere in the community documentation procedures for applying to be a community member, or how to move into maintainership or leadership? I guess, as a flippant thought, we should make sure that what we're putting forth as a governance doc is in agreement with what we said there. Not to say either one is right; we just need to make sure they agree.

A
Right, and one of the things we specifically have outlined in that document, if I recall, was that we were using just the general KubeVirt mailing list for nominations for becoming a maintainer or an administrator of the project.

B
Yeah. Now, one of the things that's not really outlined in the membership policy, and that you might want to add, is how we decide to advance people, particularly to approver. You might decide later on that you want the maintainer group to do that, because they're the ones who actually have a mailing list and meetings. But then you might not; you might decide that, hey, you want the approvers of the individual repo to do that, particularly if, for example, SIG storage or a few other SIGs spin off their own subrepos.

E
Makes sense. So, to move forward with this governance document, we need to come up with this list. Where do you propose that we have this discussion? Do you want to just have it in the PR, where we nominate people? Do you want to have a separate meeting where a few of us get together? I'd like to have representation across multiple companies, so we need to...

A
I imagine this may be a two-step process, David. We probably first want to cull the list of the personnel that we would by now call more emeritus members of the project, not really actively contributing, and then yes, absolutely, as a second step, because otherwise, if you're going to look at the current group of maintainers and ask to increase those existing members, by rights we should be asking them their opinion on the matter.

E
Right, and what's the goal here? Ultimately we're trying to become an incubating project, so do we need to have cross-company representation to help us? It's not right to do this unless we have multiple companies involved.

B
Yeah, and when I look at the last three other projects that got accepted to incubating, they all had some form of written governance at the time they were accepted.

B
So, even though it's not technically a requirement until the graduated level, without it we will stand out. And then, more importantly, part of the process of going from CNCF sandbox to incubating is that you have a sponsor who is on the CNCF Technical Oversight Committee, and our sponsor, Alena, has specifically asked about the governance. The sponsor has sort of a lot of discretion, because the way it generally works is, once you convince the sponsor that you're ready to move up, the TOC then approves it. So the hard part is convincing your sponsor.

B
There's also a second reason for this, right, which is that we have people who have joined the project since it started, who have been stepping up to do a lot of things, and there hasn't really been a formal way to allow those people to step all the way up to project leadership if they want. That's the other reason to do this, the reason why it's a good idea and not just a CNCF requirement.

B
I wouldn't necessarily say there's a fixed number. Obviously, if every contributor to the project is a maintainer, you're probably doing it wrong, because not every contributor to the project is going to have time to think about the project and the roadmap and all of the other things that you want at the maintainer level.

B
Beyond that, I would say the test for the list of people we want to have as maintainers is really: how likely are you to need this person's opinion when we're doing something major?

B
We've got a bunch of people in the meeting, and it's been mostly me and Stu and David talking. Anyone else have thoughts or opinions?

A
Okay, so the general consensus is that the big-ticket item to readdress on the governance document is to review the maintainers, and I agree. I think somebody had mentioned earlier potentially just making proposals or having the discussion directly on the PR. I think that's a good approach; then it's up front and public what we're up to.

A
Okay, and then, Josh, once this PR is merged, we've then adopted that as our governance document, in your mind, right?

B
Yes. We merge the PR, and then you or someone else who is on the current CNCF list of maintainers submits a PR to cncf/foundation that updates the list of maintainers there, and also notifies the CNCF staff that we have this governance document and that's where they should get their list of maintainers from. And then it's a thing.

I
Yes, thanks. I'd like to enroll KubeVirt as a project in the Outreachy internship program, which, for those of you who don't know, is a program for interns from underrepresented groups in the industry. I'm hoping that this could help us get exposure for the project and also get some new people involved in developing it.

I
I think I have one mentor ready already, but we will need two of them, so I'm looking for volunteers in this forum. What this would require from you is five to ten hours of your time a week for circa four months, I believe, plus preparing a project for the intern to work on: something that can be done in a half-time or part-time job for three months, something interesting on KubeVirt.

A
Okay, well, not everybody all at once. But seriously, this is a great opportunity for helping out the community and for sharing your knowledge with somebody. So if you're interested, please reach out to Petr.

A
Next up, Roman. I think we've talked about this before, but CentOS 8 Stream; you're revisiting this.

H
Yeah, I'm not sure when we talked about this the last time.

H
I think we mostly talked about it in the context of CI already using it, but Andrea from libvirt engineering has been preparing a switch for our project to CentOS Stream as the base image for all our images for some time, and it's pretty close now. I basically just wanted to give a heads-up, and we'll also write about this on the mailing list.

H
I think it's beneficial for the project to go to CentOS Stream here, because we can very easily get up-to-date libvirt and QEMU changes without having to rely on Fedora or COPR Fedora repos. But in case you have any concerns, I'm happy if you bring them up here or later on the mailing list.

A
I do have a question. Back to the reference of when we talked about this before, I want to say it was a year ago, maybe. At the time, when we were starting to approach this, we realized that there was an issue with machine types changing, and it was going to disrupt migrations, and that caused us to pause and reassess this move.

J
Are we talking about the images of each pod, or only the VMs?

H
We're talking about the images for virt-handler and virt-launcher. They use Fedora as a base right now but, as some of you know, to get the libvirt and QEMU versions we want, libvirt is providing us some COPR repos which they're maintaining extra for us. They now have CentOS 8 Stream repos just for libvirt and QEMU, and it's much easier for them to provide us the things from there.

H
Regarding the machine types, I could not remember that we had issues back then. My suspicion would be that in the CentOS Stream repos for QEMU and libvirt we get the same thing, the same settings as we got from Fedora, but maybe that's wrong. It's a good point; if I have to check something there, I'll check it.

H
In the past, a change first went through Red Hat internal processes, it was at some point released somewhere, and then CentOS was assembled out of that. Now it's the other way around: if Red Hat engineers are changing something, it goes first into CentOS Stream before it then goes to RHEL. But it's not upstream of Fedora.

D
Yes, I had this issue, or I don't know if it's an issue. I was working with NetworkManager as an example, and there it was odd when RHEL started releasing every six months, because it caused the odd situation that on RHEL you could get...

D
Let's talk about CentOS 8 Stream. You will get there something that was worked on very recently by the developers internally at Red Hat, and it was not in Fedora because it was not pushed or released there yet. It may even have already been released in RHEL; when RHEL releases, let's say, 8.4 or whatever version, something can land there and not be in Fedora for a few months. That is true.

H
CentOS Stream can have newer versions, and also RHEL can have newer versions nowadays compared to Fedora. That was not so common in the past, but it can happen in the new model.

A
So I had to just look it up, Edward, just to see. I believe in this case that Roman is right, that CentOS Stream sits in between Fedora and RHEL, according to their own documentation.

H
It's something which can happen, yes. We have a different case also where, for cgroups v2, we need a change in container-selinux, and the SELinux policy change will be first in CentOS Stream and RHEL, because it's not a significant enough change to make it to Fedora before the next major release.

A
So, with CentOS Stream, it's a rolling release. With the images we were shipping in the past, there was at least something relatively static that we could define as the base image, because it would be whatever Fedora version we were on, plus security patches as of that day; that would be the image we were building on. With CentOS Stream, of course, everything is changing all the time, because that's the point of a rolling-release distro.

A
There's no such thing as eight or nine or whatever; it's just a rolling release. So how are we going to canonize what the bits are that we're going to be baking in when we build an image?

D
I think you just described it. What will happen is that if you decide to update the image, then you perform an update of the image and you will get all the updates up until that time. But I think what maybe Stu meant is that with Fedora, at least, you have a fixed version, and some of the things will not get in because they want to keep it a little bit stable.

A
Let me paint a picture here of what I mean, just so that it's clear. At the beginning of this month we released 0.44 and, at some point, we're probably going to have a patch release where we do 0.44.1 or .2, and, according to what I just heard, on whatever day we happen to build that image we're going to get whatever the latest CentOS Stream is for the base image. That's fine, but next month we're going to release 0.45, and so what I'm hearing is that if we were to build release 0.44.5 sometime in the middle of September, it's potentially going to have a newer base image than the one we used for release 0.45.0.

H
Yeah, and regarding Edward's comment: yes, since it's a rolling release, there is potentially a chance for more breaking changes. On the other hand, CentOS Stream version X, like version 8, is supposed to be pretty stable too, because it's part of the RHEL 8 flow, and CentOS 9 Stream will be around RHEL 9, so I suppose a lot of breaking changes should not happen.

H
I have no evidence yet to say whether CentOS Stream is more buggy or less. It should be less buggy, but it could also be the other case. In general, though, one of the purposes for having it is to get fixes early for CentOS, which in the past you sometimes had to wait a year or longer for, or you had to live with the issue for the whole CentOS X life cycle. Now you can get things fixed pretty fast.

L
Yeah, I mean, that whole reorg will really streamline how we get stuff in. I think this is also, to underline it, a collaboration with the libvirt team, right? There were many discussions over the past years about how we get fixes that the libvirt or QEMU team made into KubeVirt, and this required custom COPR repos and other stuff. By moving to this new setup, that whole procedure to get stuff from the source into KubeVirt builds will be streamlined.

L
So I think that's a great effort, actually, also by the libvirt team; kudos to them as well, besides you, Roman.

H
And let me also add: I don't know what, for instance, Ryan is using as a base, but I know that, for instance, SUSE is using their own distribution as a base for KubeVirt. That will for sure not change; you can still exchange the base with whatever you want. It's mostly about what we use for our releases and for testing.

D
That was my next question: I guess it's kind of a downstream thing, but do we know if someone uses different images, built on different base OSes, in some downstreams?

H
Definitely SUSE does that. They use our build flow and use their SUSE operating system as a base.

A
I've seen a lot of chat discussion on the side. I haven't read it because I was participating in the current discussion. Is there anything that needs to be brought back to the main forum there?

L
Hey, this is Fabian; sorry for being late, there was a conflict on my side. I just wanted to circle back to the first topic, the proposed new governance model. So first, Josh:

L
Thank you very much for bringing up the PR. I was wondering whether there was any consensus, and I'm looking at the people who are currently mentioned in the original list, like David, Vladik, Roman, Stu: what do you think about this proposal? And also Ryan, even if he's not mentioned; Ryan, I'm eyeing you. What do you guys think about that proposal?

A
Sorry. Yeah, one of the things that Josh had pointed out to us is that we are not using the KubeVirt maintainers list from the CNCF, and that would be a great way to periodically, maybe twice a year, review who's actually really being an active maintainer of the project and update that list. Because, by the way, that's something that the CNCF is tracking separately from us, so we do need to keep them apprised if the list changes.

B
One of the other important reasons why you're going to have to start using that maintainers list is that we had some aliases, like security@kubevirt.io, that were being directed to places in Red Hat, which is not really good for a CNCF project, and so those aliases are going to get redirected to that maintainers list.

B
Except for updating the maintainers list, I think we have it. A lot of people have commented on it already, and more people in this meeting are welcome to comment, but that was why I wanted to bring it to the community meeting: so we can get kind of a final approval from the community that this is an okay starting governance.

L
Yeah, if you don't mind, I would actually directly propose something for the maintainers list, because I think we have some of the existing maintainers here. First, I propose to drop some of the inactive maintainers. All of them are great, right; it's not about them personally, but they have simply not been active: so, like, Mark, Mars, Arthur, and Sebastian.

L
I think Ryan, both while being at Red Hat and now being at NVIDIA, has shown the interest to drive KubeVirt forward, and the same is true for Marcus, who's been with KubeVirt for even longer, so I would grant them my trust. They have shown direction; they have brought input to it. To me, they're already meeting that criteria, and therefore I think it's just fair to include them in that original list.

L
I think that's also something, right? One goal is to really show that this is not something we as Red Hat want to push. But that is not my motivation for proposing these additional maintainers; it's mostly to say they have shown genuine interest in pushing KubeVirt forward. Ryan has initiated, like, the SIG scale and performance that we have now, and Marcus, yes, Roman, has not had many contributions, but I think they have been around.

L
I think, to...

H
To move this forward, I think we need to talk to them, especially Michael, out of band first. Because if they are on that list, there are possibly some votes to take where we need a majority and so on, so they need to be available. It's not about saying no, but about saying that they need to agree to be present when needed and that they can do that.

A
Okay, I have a topic for the open floor: looking to see if there's any interest or volunteers for somebody who would like to help moderate this meeting each week.

A
The thoughts behind that are that, of course, with Chris out for the indefinite future, I've always been acting as a backup, but in his wake I have taken over kind of the primary position here as the meeting moderator. First off, it would be great to have two people doing this, at least just so that I have a backup, and it would also be great to have somebody who's focused technically, if you will, with my recent role change to being a manager.

A
Technically, I'm not of no use for technical things, but I'm not a technical contributor in that sense anymore. I'm happy to keep leading this meeting, but it would be great to have a broader focus if anybody were interested or willing.

E
I don't care about leading it; like, that sounds weird. I don't mind helping lead it any time. It's not a big deal; I'm already here.

E
We did get feedback: Ryan and I made a proposal to talk about some of the stuff we've been working on in SIG scale, and we got waitlisted, so we might make it, we might not. I asked them when we would know for sure, and they said by September 1st. So maybe we get to talk about KubeVirt; we'll see.

E
I have a theory about KubeCon submissions, that Red Hat employees have a very low chance of making it in, if they aren't...

B
Everyone has a low chance. Everyone has a low chance of making it in. This KubeCon actually had lower numbers of submissions than we've had in the past, because people are confused about the whole hybrid thing, and even so, seven proposals were rejected for every one that was accepted.

B
The acceptance rate for KubeCon is one out of 15, so you don't really need to look for a reason why your talk wasn't accepted; the odds are that it's not going to be accepted.

E
Do you have any indication that they're throttling, though? Because I know the CNCF, for example, throttles some discussions that occur when lots and lots of people from one company try to bombard them.

B
Yeah, that happens; it tends to happen towards the end of choosing the talks. What happens is that all of the track leads, and I'm a track lead for this one, submit our final list of recommendations. So, for instance, I was a storage track lead, and we recommended five storage submissions for the final conference. Then the chairs put all of those together and look across them.

B
They have multiple criteria, but one of several things they look for is: hey, if they have that final list of recommendations, but 40 percent of the presenters are from one company, whether that's Red Hat or anyone else, then yeah, they're going to bump some talks based on who the presenter works for, because it's bad for KubeCon overall to appear to be dominated by a single company.

B
That's why I would have liked to actually get us into incubating by the end of June, which was the deadline for this KubeCon; but there were just too many things to do around the project to make that happen. This will mean that if we make it in by KubeCon spring in Europe, which is honestly the better KubeCon for us anyway, we should be entitled to a maintainer session.

J
Yes, this is for the design discussion. The question is where to put the logic for the re-hot-plug once a migration fails. We're talking about SR-IOV devices not being re-plugged when a migration fails, so can we get...

H
So I think, in general, whether you retry or you fail the migration is independent of where the code needs to reside and how it needs to react. I think right now you have a code path where this is not picked up in the eventually-consistent controller loop, due to the way it is done, and I think that needs to be changed so that you can just retry in the first iteration, for instance, once you see a failed migration.

E
I think I'm not understanding why this is difficult to just put in the regular reconcile. We're talking about a failed hot plug. Why couldn't the VMI sync in, for example, virt-launcher see: hey, we're not doing a migration, and this thing isn't hot-plugged; let's try to re-hot-plug it?

D
I mean, I found this very interesting; the topic is really interesting. But I don't know if we have other things that reconcile the domain, or the guest in this case, because hot-plugging a device is mutating the guest; a device appears inside of it or disappears, which is pretty drastic. And usually this workaround of unplugging and plugging back in order to do the migration is very, very specific to the migration, and making it more generic in the sense that you reconcile...

K
From my point of view, when a migration unplugged something and the migration ended in a certain way, this has to be re-plugged. So if you have a condition on the VMI, for example, you already know that this has to happen. It doesn't matter whether the migration succeeded and you need to plug on the target, or the migration didn't succeed and you plug it back on the source.

D
I think this is because of how you are looking at it. Even if there was no migration, you are saying it should happen, right? It's like you're saying: I'm supposed to have two devices plugged here, and now I have only one, so I need to plug the second one, right? This is what you're saying, and it makes sense.

K
At the beginning of the migration, devices are getting unplugged, and then you can add a condition or something in the VMI status saying that this needs to be plugged back. And no, we are not...

K
Okay, so this is, I think, what we were discussing about what David suggested, I guess.

E
Well, a condition would be needed if we don't have a way to detect that it needs to be re-hot-plugged.

E
Yeah, if it's possible for virt-launcher, when it goes into the reconcile function, to just say, hey, this thing needs to be re-hot-plugged, because we know that it's supposed to be on the domain, then maybe you wouldn't have to have a condition. I'm not sure.

H
I don't think it's necessary right now to retry; you can just decide to try once or twice, because it will fail anyway, and it gives the admins the chance to go into the VM and fix it somehow, or you can retry with a backoff or something. But I think, independent of that, it needs to be at the places in the logic where it can be reconciled, because only that way is everything reliably picked up.

E
I would rather it keep retrying and make lots and lots of noise, then. Yeah.

D
So maybe we should... I don't know how it works elsewhere. This PR is an enhancement to the existing migration, and I just want to raise that I don't know how it works in OpenStack now, but in, for example, RHV, even if the migration fails, it will not connect it back at all.

D
That's just to put all the options on the table, and I do think exploring this is worth it. Currently, on the target, we plug the devices in as well. So if we put it in the VMI sync, we can even remove the code on the target: we can say, well, on the target, I'm supposed to have these devices, so just connect them if I see they are not there. The only problem with that is that we cannot declare that the migration is over in a nice way.

A
Can I break in just for a second here, just to remind us that we are over time? There's an active discussion here, so I won't cut it off, but we do need to wrap it up soon.

K
Yeah, I think the main difference between RHV or oVirt or anything else is that in oVirt you can dynamically attach these devices from outside, whereas here in KubeVirt we cannot do this, so we need some kind of something on the VMI that will indicate that these devices need to be re-plugged.

K
Again, just hear me out: this information may already exist, I don't know where it exists, but there has to be something. When you start a migration, you need to record that these devices have been unplugged, and then, when these devices are plugged back, this condition or status can be cleared.

K
We don't even need to act on this; but when we do have this condition, it doesn't matter whether the migration is over and we need to plug on the destination, or the migration failed and we need to re-plug on the source: there will be something on the VMI that indicates that this operation needs to happen.

D
Yes, this is what I'm saying: when you go over the loop, you know what the domain has, what the devices are, whether the device is plugged or not, and you know what devices you need to plug in. So this information exists. We could call it every time in the loop, and it will try to reach the end result of having all the devices that are supposed to be there plugged.

E
I get what you're saying now. In the reconcile loop, we have both the domain and the VMI, and we know if there's a mismatch, and the virt-launcher sync, the thing that's syncing the VMI, can do this comparison and say, hey, we need to re-hot-plug this thing, and re-hot-plug it. Then we could also, in the update status for the VMI, have a condition, if we wanted to give user feedback, to say: we've noticed this mismatch exists, this device is not plugged into the domain. That would be more informational.

D
Okay, so if I understand what you're saying: one action is in the update, because we see that the desired state is not the current state, and the status shows the same thing. If we have another check in the status and we see they are also different there, then we can also put it in a condition status that says that explicitly to the API.

E
Yes, and the update status is going to be an observation of the collective state, the VMI, what's happening on the domain and all that: I'm reporting that this state exists, something's not hot-plugged. And then the actual sync is doing that same comparison and deciding how to act on it. Yeah.
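
The observe/act split described here can be made concrete with a minimal Go sketch. All names in it (Device, missingDevices, hotplugDevice, syncDevices, updateStatus) are hypothetical illustrations, not KubeVirt's actual types or functions: the status-update path only reports the spec/domain mismatch, while the sync path acts on it.

```go
package main

import "fmt"

// Device stands in for a hot-pluggable host device (e.g. an SR-IOV NIC).
type Device struct{ Name string }

// missingDevices returns the devices the VMI spec expects but the live
// domain does not currently have attached.
func missingDevices(spec, attached []Device) []Device {
	present := make(map[string]bool, len(attached))
	for _, d := range attached {
		present[d.Name] = true
	}
	var out []Device
	for _, d := range spec {
		if !present[d.Name] {
			out = append(out, d)
		}
	}
	return out
}

// hotplugDevice stands in for the (asynchronous) libvirt attach call.
func hotplugDevice(d Device) error {
	fmt.Println("re-hot-plugging", d.Name)
	return nil
}

// syncDevices is the "act on it" half: run from the regular sync loop,
// whether or not a migration just failed.
func syncDevices(spec, attached []Device) error {
	for _, d := range missingDevices(spec, attached) {
		if err := hotplugDevice(d); err != nil {
			return fmt.Errorf("hotplug %s: %w", d.Name, err)
		}
	}
	return nil
}

// updateStatus is the "observe it" half: it only reports the mismatch,
// e.g. as a VMI condition, to give the user feedback.
func updateStatus(spec, attached []Device) string {
	if m := missingDevices(spec, attached); len(m) > 0 {
		return fmt.Sprintf("DevicesNotHotplugged: %d device(s) missing from domain", len(m))
	}
	return ""
}

func main() {
	spec := []Device{{"sriov-nic-1"}, {"sriov-nic-2"}}
	attached := []Device{{"sriov-nic-1"}} // e.g. state after a failed migration
	if cond := updateStatus(spec, attached); cond != "" {
		fmt.Println("condition:", cond)
	}
	if err := syncDevices(spec, attached); err != nil {
		fmt.Println("sync error:", err)
	}
}
```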

D
Okay, it sounds very good to me. The only problem, or my only question, is that, unlike the current state, where it's a one-time attempt, this will cause the reconcile to retry it all the time, and I'm really not sure if this is good or bad, because we don't have anyone else doing that; I don't know any other project that works with libvirt that tries to do this.

H
Yeah, but the others are not eventually consistent; they are imperative systems altogether. I think one important thing here is that retries should back off; it's of course important not to overload the system.

D
That is a good point; what you raised is a very good point. The only problem is that, for example, in RHV they don't even try it once, so it's up to the operator, up to the owner of this VM, to do something if they want to. What I mean is that, if we use the backoff here, the VMI is degraded in the meantime, and that may not be what the user wants.

H
The second thing: if you are in a backoff loop, that normally means that you will not retry until the backoff period is over, except if other updates are coming in. If other updates are coming in, you can still process everything, which means that the VM as such can still react pretty snappily to other operations; but if no other operation is occurring at all, this operation will not bring down the system by retrying too fast. Does that explain a few things?
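
A rough sketch of that backoff behavior in plain Go, under the assumption that retries are capped rather than abandoned and that incoming events preempt the wait. In a real controller the same effect usually comes from a rate-limited work queue; none of the names below are KubeVirt's.

```go
package main

import (
	"fmt"
	"time"
)

// tryHotplug stands in for the asynchronous re-attach call; hypothetical.
func tryHotplug() error { return fmt.Errorf("device attach failed") }

func main() {
	const (
		base = 500 * time.Millisecond
		cap  = 4 * time.Second // kept small for the demo; think minutes in practice
	)
	backoff := base

	events := make(chan string, 1) // stands in for other domain/VMI updates

	// The demo stops after a few attempts; the behavior described in the
	// meeting would keep retrying at the capped interval indefinitely.
	for attempt := 1; attempt <= 5; attempt++ {
		if err := tryHotplug(); err == nil {
			fmt.Println("hotplug succeeded")
			return
		}
		fmt.Printf("attempt %d failed, next retry in %v\n", attempt, backoff)
		select {
		case ev := <-events:
			// Another update arrived: process it immediately instead of
			// sleeping out the rest of the backoff period, so the VM
			// stays responsive to other operations.
			fmt.Println("processing event during backoff:", ev)
		case <-time.After(backoff):
		}
		backoff *= 2
		if backoff > cap {
			backoff = cap // never retry faster than the cap allows
		}
	}
	fmt.Println("demo ended; a controller would keep retrying with backoff")
}
```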

D
I understand the backoff. What I don't understand is what we expect to happen. Let's say it tries to plug in the devices that it could not plug in, okay, so we retry. What do you expect to happen after, let's say, it fails again? What now? It would just retry, and if it was...

H
If other updates are coming in, they would still be processed, and you would then see: oh, the device is still missing, and you can even start the asynchronous hot-replug again. If it can block, you just have to ensure that it's started asynchronously so that it doesn't block the control loop.

H
I would expect that you start the hot-plug process but don't wait for it to succeed, and that after that you have an asynchronous information channel, which, for instance, goes through the domain notifier, where you can see whether it succeeded or not, and then you retry it next time.

D
If it can block, I don't know; and based on what you are saying, we should put a backoff on that operation as well, if I understand correctly. Yeah, and this specific operation was hot-plugging it back, yep. Okay, and what happens if you have a backoff and it reaches the maximum time? Do you want it to give up at some point?

D
It should reach a maximum time and then keep retrying at the maximum interval, okay. And if we implement this one, this means that this PR needs to be put on hold or just cancelled, and we need to move and change the existing migration code, because on the target we do the same thing, we plug it in. So what you are saying is that we can do it like in...

E
Yes, keep trying. I mean, we don't really have a better option than to try to be eventually consistent here. What would you do if you wanted to offload it? Let's say, for example, the hot plug fails, and we say: all right, this is what the admin needs to do to restore this virtual machine. What would that thing be? What would they have to do?

E
Nobody. There's no way to fix it, practically, unless we say: exec into this pod, execute this virsh command, mutating your domain XML, and then you're competing with the mutations coming from virt-launcher. Maybe it works, I don't know.

D
Retrying is one option. The other option is to say that... I mean, let's say that someone comes now and tells you: okay, if it failed once, there is no chance it will work. I don't know why, but let's say.

E
...is the right thing to do, but some action has to be taken; we can't ignore it. So either continue to retry the hot plug or, if somebody has an argument for why that would never succeed, then failing the VMI would be the correct thing to do, rather than leaving it in a state that can't be recovered. I guess, I mean, or reporting that it's in an unrecoverable state, I have no idea; but it can't be ignored, that's all I'm saying.

E
Pause it, possibly? No.

D
No, I mean in terms of affecting the guest, okay.

D
I know it's not just about the domain XML, I'm saying, because it's not only the domain XML that can change; libvirt configuration can change even on the host side or the pod side. But this one is intrusive in terms of the guest, because, just to give you an example, let's say you have 10 devices and you start the hot-plugging: it connects the first one, then it fails on the others; then the second reconcile will connect the second one. It's intrusive in the sense of the guest.

H
That's fine. I mean, someone tried to migrate with that device and, of course, expects that it's still working afterwards. Or let me rephrase it: the person is not even trying to migrate it; it is we who migrate, because we don't have users who do migrations; migration is not something which users do. So they have a VM which is running perfectly fine.

D
More or less only on the target, and when it fails, we just need to trust the reconcile.

J
Okay, I got you. And what about the problems that I saw with the expectations and that stuff?

E
So what I think should happen here is: if a migration fails or if a migration succeeds, there should be one more reconcile loop, occurring either on the source or the destination, which would cause a hot plug. The problem that you're having with the VMI expectations is strange; I don't really understand it yet.

H
So it may have worked by accident before you did your optimization, David; you did the change where you wait for the VM status update to come in again before you do the reconcile. I think, before your change, the situation was that reconcile was just triggered so often anyway that it happened, just by accident, to pick up the change.

D
No, sorry. Let's say there is a failure of the migration, and in this case his case was that it tries to connect the devices, and the devices got connected, right? This will trigger, in the background, the domain sending a "device connected" event, which is picked up by the listener, I don't remember which one, and then the event will be sent to virt-handler, and it's supposed to handle it. This is why it's strange: it's supposed to be caught somehow.

D
Just consider that it connects the devices back to the domain and that succeeds, because we know it succeeds. So the fact that it was plugged in, the event, should propagate everywhere. That's the theory, I mean.

J
I have the insights; I can even show the logs. Basically, what I am seeing is that the initial migration metadata does get reflected on the VMI, but once the completed and failed attributes are added to the migration metadata, they are not reflected back to the VMI. So it means that at this point the events are not handled anymore; but keep in mind that the migration metadata keeps updating from this point on.

H
There are two ways the expectations can time out. One is to get an actual update on the status; then it should be immediately timed out, and the sync should happen. The other way, there is a timeout of a few minutes, and then, on the next notify event or whatever, it would work again.
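
For readers unfamiliar with the mechanism, here is a simplified, single-key Go sketch of that expectations pattern, modeled on the Kubernetes controller style; the names and the five-minute TTL are illustrative assumptions, not KubeVirt's actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

// Expectations is a simplified, single-key version of the controller
// expectations pattern; real implementations track counts per object key.
type Expectations struct {
	pending  int       // updates we caused but have not yet observed
	deadline time.Time // safety valve: stop waiting after a TTL
}

const expectationTTL = 5 * time.Minute // illustrative value

// Expect is called right before the controller posts an update.
func (e *Expectations) Expect(n int) {
	e.pending += n
	e.deadline = time.Now().Add(expectationTTL)
}

// Observe is called from the informer when our own update comes back in.
func (e *Expectations) Observe() {
	if e.pending > 0 {
		e.pending--
	}
}

// SatisfiedOrExpired gates the sync loop: true means "run the sync with
// the latest cached state", false means "return early and wait".
func (e *Expectations) SatisfiedOrExpired() bool {
	return e.pending <= 0 || time.Now().After(e.deadline)
}

func main() {
	var exp Expectations
	exp.Expect(1)                                           // we posted a VMI status update
	fmt.Println("sync allowed?", exp.SatisfiedOrExpired())  // false: wait for it
	exp.Observe()                                           // the informer delivered our update
	fmt.Println("sync allowed?", exp.SatisfiedOrExpired())  // true: sync now
}
```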

E
But that's really curious to me. So you're saying that the expectation isn't satisfied. So the thing is... exactly.

H
You could lose, on the domain side, the information from the failed attempt: you would get an update in this case, say that it got hot-plugged, but you're not processing it, and then on the next domain notify event this information is not sent again.

J
I couldn't find any more debugging that I could do. What I did, basically, is take the test that keeps failing, which is where I inject a failure while the migration is set up, so it should fail immediately and re-plug the device. Everything works as I expect, other than the migration metadata that I want to propagate.

E
You know the expectation is not satisfied, so you've done some sort of debugging to see that the sync here returns early because the expectation is not satisfied. Then do some more debugging: maybe put a timestamp on when it actually does make it past that. If you can see that it wasn't satisfied at this timestamp, and then it gets executed, like, sub-seconds after that, then we know that the expectations are working like we want them to, but something else is preventing us from doing the action.

E
If that expectation is doing what we wanted it to do, meaning it's blocking the execution of that loop, but then, very quickly right after that, it's executing again, then that means that maybe information is lost, like Roman pointed out; maybe something else is occurring that isn't doing the thing we want it to do. It's important that we figure that out, because there's no guarantee that it won't happen anyway; we might have just gotten lucky that, when we removed the expectation, things happened to fall within the timing threshold.

H
For instance, you can just print the domain information you get in the notifier callback, and it would be interesting to see there... so, the notifier is called anyway, independent of expectations. So you will always get the state which virt-launcher sent, and you can just print what is coming in there, and it will then be interesting to see if you're losing that information, in the sense that you first see...

J
And I also wanted to mention that, from what I saw when I debugged it, the VM controller gets a drastically lower number of events to handle. For example, on that test, with expectations it reconciled like 17 events, and when I tried to stop adding the expectations...

H
That's perfect; that's the point: to only reconcile when it makes sense. Before that PR, we were not waiting for the VM status update to be propagated back, so you could have five domain events in the meantime, and they were all trying to update the VM status again, which of course failed, because we didn't even get the latest status back. So David added the expectation so that we wait for that.

H
The key is that the expectation is only delaying the sync loop until that update comes in, and once this update is there, it takes the latest VM spec it got, and the latest notifier spec it got, into the controller, and you should still see it there.

H
There is no queue there. I mean, there is a work queue, but it works with a cache in the backend, so, based on the key, like namespace/name, it has a single spot which gets overwritten with the latest state all the time. So there's no queue filling up.

H
What I'm not sure about is: can it be that the hot plug in the guest looked right, but the command still somehow failed? For instance, what happens if you do the hot plug but it fails: is there then a change in the domain XML?

H
You do the hot plug in this asynchronous part where you do it right now; is that always triggering an event, also when the hot plug fails? Or is there also, in this, how is it called, hot-plug host device function...

J
From what I saw until now, there shouldn't be a problem: even if a VM failed to hot-plug a device back, the XML is updated as usual, and it should work.

D
Right, these are two different things, I think. One is: if there is a success, then the domain XML is updated with the device that you plugged in. But not only is the domain updated; you will also get an event that says device plugged, or something attached, so it sends an event that we register for.

D
Our listeners get the event, I don't remember what name it's registered under, so it should trigger a domain-change event just because of that, regardless of the domain XML change; but the domain XML will change too, obviously. If there is a failure, I don't think we'll get any event, and you will not see...

H
Yeah, even if you are pretty confident, I would still print the domain XML specification out again and try to ensure that you really see the succeeded or failed attempt in the latest cache state first, and then also make sure that you clearly see that the expectation is never fulfilled, like by just printing timestamps or so when it gets fulfilled. Something like that.