From YouTube: KubeVirt Community Meeting 2022-11-09
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/
B: You know, I'm not sure what's on for today, so it's my first time through the meeting.
A: Fantastic. Anything specific bringing you in today?
A: Okay, so let's see, going ahead and kicking things off. If you haven't added what you need to the agenda or to attendees, feel free to do that anytime throughout the meeting and I will try to circle back. Otherwise, let's go ahead.
A: Kicking off with the first agenda item, you guys brought that to us. If you want to go ahead and introduce that, take the mic.
B: Thank you very much. Basically, we're busy developing infrastructure for the CNCF where people can spin up clusters and underlying infrastructure to quickly contribute to open source. We're doing several iterations of that, and we ran into one of the issues when using KubeVirt via CAPI. We've got feedback from David, so it seems there is a solution that can be done, so we just want to check on it.
B: Is there anything that we can do to help move this forward? Would it be a matter of putting in a few hours, or how does KubeVirt work with issues like this?
A: So I guess my first question would be whether you have seen in the code where a PR would need to be submitted, or if you need help scoping that out; whether you have anyone who would be up to the task of getting a PR started, or if it's something that we need to triage and figure out where it could be put on the roadmap for someone else to develop.
B: [inaudible]

A: Of course, that's some helpful context. I don't have all the answers; maybe someone else on the call is able to speak up and give some pointers.
B: Basically, the explanation by David Vossel: that's what it seems to be, that would solve our problem. So I'm not sure... was that better, or am I at least breaking up?
B: Let's try that. Is this better? So far, so good. All right, so basically, the feedback that we got from David Vossel seems to be that that's the thing that we want to do. So if there's somebody that can point us to where we should do it, or if we can get some feedback about how it would be triaged and whether we can support it in some way, and whether this would be entertained as a possibility.
F: I'm not exactly sure, but what I understand from David Vossel's comment is that he is saying that we should probably change KubeVirt to allow an empty NoCloud or ConfigDrive drive. But to be honest, I'm not at all an expert in that matter. What I'm seeing here is that there is something missing, like a NoCloud or ConfigDrive volume which is empty.
A: KubeVirt allows for us to switch between the two; it just looks like that's not exposed at the CAPI layer.
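For reference, a minimal sketch of what switching between the two cloud-init sources looks like on a KubeVirt VirtualMachineInstance (names and image are illustrative, not from the discussion):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: example-vmi               # illustrative name
spec:
  domain:
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
        - name: cloudinitdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
    - name: rootdisk
      containerDisk:
        image: quay.io/kubevirt/cirros-container-disk-demo   # demo image
    - name: cloudinitdisk
      # NoCloud data source; swapping this key for cloudInitConfigDrive
      # serves the same data as a ConfigDrive instead.
      cloudInitNoCloud:
        userData: |
          #cloud-config
          hostname: example
```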
A: Usually that's my job, so I'll send you a bill for the encroachment on my business. Okay, all right, cool!
A: All right, so it sounds like the general recommendation is that the code comment that's linked is not a bad place to start, and it sounds like a PR to the KubeVirt Cluster API provider repo would be welcome. I'm guessing that any follow-up questions while developing that would be fantastic to bring to the kubevirt-cluster-api (or cluster-api-kubevirt) Slack channel, or otherwise.
E: And yeah, if you do encounter any difficulties, the person who would be the expert on the matter is David Vossel, who's already commented.
A: Should be a fun one. All right, so moving on, let's see.
H: Yes, I would like to talk to that; this is Andre from The Desk.
H: This is the goal that I'm planning to do for ourselves, and later on we're gonna publish it to the KubeVirt community as part of our development. I'm just saying here: any ideas on how this must be made are very welcome. We'll probably push a PR toward it when we have some code done already and something to show you guys, but GVM itself is a very good and stable solution.
H: So far it's slicing the GPU in a proper way, without an Nvidia license server, and is able to expose it from the host to the guest KVM VMs very smoothly.
H: We have clients in production, but not over KubeVirt; we are adding KubeVirt between the host and the guest to be able to scale, and scale big.
I: And if I understood it well, this technology allows you to do mediated devices, and there is nothing which you need to change on the KubeVirt side, so you can use these PCI devices, and the logic for passing the PCI device through has already been contributed.
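As a rough sketch of the existing mechanism referred to here: a cluster admin can permit mediated devices in the KubeVirt CR and a VM can then request them, with no code change needed (the selector and resource names below are illustrative):

```yaml
# KubeVirt CR: permit a mediated device type cluster-wide.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    permittedHostDevices:
      mediatedDevices:
        - mdevNameSelector: "GRID T4-1Q"          # illustrative mdev type
          resourceName: "nvidia.com/GRID_T4-1Q"   # illustrative resource name
# A VirtualMachineInstance then requests it under spec.domain.devices:
#   gpus:
#     - deviceName: nvidia.com/GRID_T4-1Q
#       name: gpu1
```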
H: Migration is part of the code already, and we're gonna integrate that, because they have a way, for instance, that is different from Nvidia. They have, say, two GPU boards with 16 gigs each; I can grab one gig from one board and one gig from the other board and put two gigs in a single VM. They are able to do that magic part, did you understand?
H: Migration without breaking the user's usage, you understand this?
H: They can do offline and online migrations, you know? Interesting, interesting. This is what they created, and there is a video I put here; you can see it, about the technology. Okay.
I: Okay, I just wanted to say that I think, from their side (I haven't tested that yet), I'm sure that this technology should already work. Haven't you tried that already?
A: All right, thank you for bringing that. And let's see, we have an item noted by Ben Coxford, if you want to go ahead and speak to that.
J: Hi, yeah, so we've just started looking into KubeVirt and sort of whether we can use it for a project, and there are sort of three things I've raised. I think one of the first ones, which was mentioned last week, was the hotplug NICs. I know there's a design spec set up for that; I just wanted to sort of get an idea of how that's going and whether there is a time frame for it.
J: Anyone on the call could answer that, or I may just create a discussion, or if there is someone I could email about that to get an update. But the other two things I mentioned: one was SPICE support. I saw a recent discussion about supporting SPICE and wondered whether this was going to go ahead, and whether that is just exposing the VM on a cluster IP service and the actual port; I'm not 100% sure how much work's involved in that.
H: I can say something. We were trying to use that a long time ago, and I know that Red Hat has completely dropped the effort to continue developing the SPICE protocol; that's why they removed it from OKD, which is the open-source OpenShift, and now it is not part of any of Red Hat's efforts anymore. But if you are interested in adding that back again, we are able to contribute, and we want to: we have it on our roadmap to remove RDP (Remote Desktop Protocol) and use SPICE instead.
J: We have set up RDP and we'd probably like to keep that support there, instead of just, you know, removing RDP, and there are some other sort of edge cases for it. So I think having, like, noVNC or just VNC in general, RDP, and SPICE, and sort of allowing the end user to be able to enable and configure them, would be quite nice, instead of just removing support for that, because I can probably guarantee someone out there uses it.
J: And then the last one: I couldn't really find too much documentation about it. There is a lot about updating and patching the KubeVirt control plane and workloads. If we wanted to, say, give a VM another image, we'd have to bring it down and start it again; but I think it was more on the process of, if we change things: if we wanted to add an interface, or we decided we want to add a new hotplug volume, whether there's a way to do that without the virtctl tool, whether, you know, we could do it through a CRD to attach a new volume to the VM; and also the upgrade process of how that works, like, in theory, live migrating the VM into another pod.
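On the CRD question: KubeVirt does support declarative volume hotplug without virtctl. A sketch, assuming a pre-existing PVC (names illustrative); note that hotplugged disks attach on the scsi bus, and `virtctl addvolume` is the imperative equivalent:

```yaml
# Fragment of a VirtualMachine's spec.template.spec: adding this disk/volume
# pair to a running VM hot-plugs the PVC into it.
domain:
  devices:
    disks:
      - name: extra-disk
        disk:
          bus: scsi                 # hotplug volumes use the scsi bus
volumes:
  - name: extra-disk
    persistentVolumeClaim:
      claimName: extra-data-pvc     # assumed pre-existing PVC
      hotpluggable: true            # marks the volume as hot-pluggable
```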
H: I can contribute also on that matter. The Desk updates the guest operating system every week with patches and everything, and this is completely automated over our CI/CD pipeline; we didn't find any other way other than to create a golden image every week.
H: Clone a golden image, and always update the golden image on a weekly basis for patches and everything, security issues; and we do it for Windows 10, Windows 11, Linux, Mac, you cannot imagine, yeah.
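A sketch of the golden-image pattern described here, using a CDI DataVolume to clone a weekly-refreshed source PVC into a fresh VM disk (all names and sizes illustrative):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: win10-disk-clone          # illustrative
spec:
  source:
    pvc:
      namespace: golden-images    # namespace holding the golden images
      name: win10-golden          # PVC rebuilt weekly by the CI pipeline
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 64Gi
```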
J: No, that makes sense, and yeah, definitely, from what we've done we expect, you know, to bring the VM down and start it with the new image; that's fine. But I think for us it's the hotplug volumes and interfaces: how we actually, you know, can attach them to a VM whilst it's still running, or, you know, live migrate it into another pod and attach those new volumes or interfaces, without actually having to bring the VM down.
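On the live-migration part: triggering a migration of a running VMI into a new pod is itself just a custom resource. A minimal sketch (names illustrative; the VMI must be live-migratable):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-example-vmi       # illustrative
spec:
  vmiName: example-vmi            # the running VMI to move to another pod
```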
A: All right then, moving on to open floor. It looks like Andrew is reminding us about KubeCon EU and the CFP deadline for that. If anyone wants to speak, or is considering writing a proposal, or is interested in having some review done on a proposal that you want us all to look at, we're happy to help provide constructive criticism and helpful feedback to get your proposal in as good of a state as possible. Andrew, do you have anything to add?
A: Good deal. Of course, that's going to be in Amsterdam, so if anyone wants an excuse to go to Amsterdam, it sounds like a great opportunity. All right, and let's see: next we have the separate section for API review. Do you want to speak to that?
K: Yeah, hi, I'm Alay; I work along with Ryan Hallisey. We work on KubeVirt and we have to deal with upgrading production workloads, and sometimes find that APIs, as they evolve, can lead to breakages. So I'm just curious whether we can add a separate section for reviewing the API changes, just so we can prevent backward-compatibility breakages.
K: In this call; I think that would be great. What do folks think?
A: I might not have paid as good attention as I should have. Can you describe it again?
K: Sure. So there is a bot that adds labels to every PR that comes in, and a specific label, kind/api-change, is attached to any PR that has API changes in it. So I was wondering, in order to make sure our v1 API, as it evolves, remains backward compatible as we upgrade: can we add a separate section to go through these PR changes and make sure that, you know, the API-change PRs get the attention, and make sure that backward compatibility is maintained?
C: Okay, so what is the ask, I guess? Do you want to be, or let me ask you specifically: do you want to be specified on the list of the people who should review the API changes, or is the ask different? No?
K: I was wondering, like how we have different sections in this community call, right, whether there can be a separate section where we just go through the review here, just so, you know, we can give the attention that is required to the API-review PRs.
A: Yeah, so if you have PRs that you're specifically interested in discussing or having extra eyes on, you can drop the PRs that you're interested in having reviewed into the "PRs that need attention" section, and that would be a great time to call attention to PRs that you're specifically wanting to talk about, sure.
K: I think actually I'm asking if we can do it the other way around. So what I'm asking for is whether we can have people in the call, like, do a triage review or something for kind/api-change PRs in the call; that would really help in, you know, making sure that those changes are not breaking any already existing upgrades, or that they are backward-compatible changes, if that makes sense.
G: I don't know if you're sharing, but you can share your screen and show a little bit, like some examples of these changes, and maybe we can look at one of them, and maybe that would better demonstrate kind of what the ask is.
G: So, while Alay works on sharing, I can maybe help answer some of the questions. Our goal is, you know: KubeVirt has a v1 release of its API and has a stable release, and the goal is to maintain that stability, to not have any backwards-incompatible changes that could merge, to not change default behaviors or things like that; or, if there are changes to any of those things, then we're aware of them.
G: The community is aware of them, it's in the release notes, and it gets kind of blasted everywhere: it gets on the mailing list, whatever; we want to call attention to those kinds of changes. So the ask is: Alay has a bot that identifies when there are changes being made, as PRs, that could be affecting the API, and the ask is to look...
G: ...to spend a few minutes just looking at these in this call, to see if there's a backwards-incompatible change that someone is making, and then we need to raise awareness of it.
C: I'm not sure if this solution would scale, but, like, currently the responsibility lies on the approvers. So if there is a potential breaking change, I guess the approver would need to spot it, and this kind of change should not be merged, of course. Does it make sense? Yeah.
K: Yeah, regarding the scale part: this process is already established in Kubernetes; there is a working group called SIG API Review.
K: Its responsibility is to make sure that all the API changes that come into Kubernetes are backward compatible, and it does a periodic triage and has an entire process. But for our community, to make sure what process works best, our thought was to start this kind of process here and, you know, identify those challenges; and if it goes well, eventually we might have something that scales better. But this is just a starting point.
C: Yeah, now I understand your ask.
C: So a note which is probably important: Fabian Deutsch is working on a way to scale our approvers in the repository, and what he's trying to suggest for the community is to have, like, SMEs, so areas of domain knowledge into which the code would be split, and there could be people in these groups who review the code changes; and then we could possibly treat them as SIG groups, right, and have possibly dedicated meetings.
K: Absolutely, this works very much in conjunction with that proposal. I went through that proposal, and one of the groups was API review and controllers, so I can see that, as we get into the habit of reviewing these changes, people who are involved in reviewing them here can easily be spun off into a separate call. But, you know, just to get an idea of how the process works...
K: We were wondering if we can just get started with a separate section for that in this call, and then maybe, if we need more time, a separate group or a separate call can be arranged, you know, from there. Does that make sense?
A: Is this an opportunity to add a field to the pull request template where a PR submitter can specifically call out whether there's a breaking API change, or an API change that would require specific review?
K: I'm not sure; there was a bot that got broken a while back, but I fixed it, and it looks at the specific files which change the API and automatically adds this kind/api-change label. So we should be able to filter through all those PRs automatically and, you know, have them added to a section and maybe even go through a couple.
K: So, as an example, this PR right here is something that we looked at, and we could, you know, discuss it here, just as an example, and see how that process goes, and follow it up with other PR reviews in the next call. So if everyone's okay with it, I can probably walk through a couple of PRs from here; and then, if that helps, and if folks think that this is something we should do on a regular basis, we can, you know, do that in the upcoming calls as well.
K: I think this proposal suggests just going through this list as a triage and, you know, spending a few minutes on it, rather than, like, identifying one or two and adding them to the agenda notes. Okay.
A: Because anything that, I'd say, has been in, like, the last two days: those two are pretty obvious, add them to the PR review, and, you know, we will see if there's engagement on the call for those. But for the items that are, you know, significantly older, there's no obvious trigger for me to add them to the agenda without somebody else contributing that agenda item themselves.
K: For example, I can volunteer to go through these PRs and see, or bring up, PRs that are potentially breaking backward compatibility and add them to that list. So I think we might have to do some pre-community-call work to triage those or, you know, have a place to filter through this. But yeah, I don't think there is an easy solution for things like "oh, this is breaking backward compatibility" until we have an end-to-end upgrade test, which...
C: We should already have a few of those tests, but what I would suggest is to bring this topic to the mailing list, and here is why I think it's important: to get all the approvers and maintainers on the same board, so they will not merge anything and will wait for these reviews to happen.
G: Yeah, yeah, no, let's start a discussion and let's see where it goes.
K: Yeah, I have a document which might have some guidelines for that, largely taken from the Kubernetes API review. So I will add that and start a discussion on the mailing list.
C: And if you have encountered any breaking change, or anybody else has in their environment, I guess just bring it up. I think breaking changes can be reverted, and we can find a way to make it right.
K: Sure, yeah, we did run into a couple of changes. I will file issues for those and add those to that mailing list, just as an example, so I can set the context for the discussion.
E: Just to ask an ignorant question before we wrap up on that thread.
E: So there are five kind/api-change PRs in that list that don't have, you know, a do-not-merge/hold mark on them, two of them from this week and the rest from, you know, the previous month. Once we've gone through all of those, will it be a matter of looking at the new ones, or will we have to systematically keep returning to those older ones?
K: So you're asking how the process would work out for the reviews?
E: Yeah, so if we've looked at that one you've pointed at, what was it...
E: Sure, yeah, the hotplug disk container. So if we review that, let's just say, at the end of this meeting, and we talk about it and whatever; in the subsequent meeting, when we look at this list again, if we're doing a kind/api-change triage, will we need to talk about that again, or is that going to be on a case-by-case basis? Or will we only be triaging the PRs with an API change that have come up in, you know, the week between meetings?
K: I think, yeah, I think that depends on how the initial review went. So, for example, I could see two or three paths evolving. One of the paths is that we do, you know, identify that this is indeed breaking backward compatibility; then we might want to, like, add a label ourselves or something, so that we know, if there is an update on that PR, that on the next triage we can look at it more carefully and make sure that that breaking change has been resolved.
K: If there are no changes, like, if everything looks good and there are no breaking changes identified, then I think it's good to go; we don't need to look at it again.
A: Cool. Would that communication loop all be solved if we had a label specifically for bringing things to the community meeting's attention? Because that would allow us to help make sure that we call those things out.
A: Maybe that's just something to include in the conversation then. Right then, thank you for bringing that up, and I look forward to seeing how that goes on the mailing list.
A: All right, so let's go ahead; it looks like maybe you're satisfied with the state of PRs right now, so we can skip to bug scrub.
E: It seems so, yeah. I mean, okay, thanks to everyone that, yeah, looks at those PRs and makes sure they're up to date with reviews.
A: All right. If we don't have some activity on that, we will have it tabled to bring up next week as well.
C: Here, I guess the recommendation is that an admin or some automation needs to say that the node is gone; otherwise, we cannot really migrate.
C: When your node is not healthy, I guess you cannot migrate the VMs, and the thing you can do is, like, shut down, or make sure that the VMIs on the node are shut down, and spin them up on another node.
C: So my recommendation would be to look for a solution that would say with certainty that the VMIs are not running on the node; then these VMIs could be deleted from the Kubernetes cluster, and KubeVirt should spin up new ones on another node.
A: All right, and with that, I don't see anything new added to agenda items or open floor. We're also approaching time, so thank you all for joining and contributing to the call.