From YouTube: KubeVirt community meeting 2021-03-10
A
Okay, good morning, community. Welcome to the KubeVirt weekly community meeting. I'm your host, Chris Caligari, and my co-host is Stu Gott.
A
Let's begin. I've posted the meeting notes into chat for easy linking and access.
A
Again, I have the first agenda item, and I wanted to give everybody a notice regarding daylight saving time. Daylight saving time begins March 14th, so next week's meeting will start one hour later. That's great for me, because I don't have to wake up at five a.m.
C
Just to clarify, that's one hour later for countries that follow the U.S. time zones, but Europe switches on a different date, so for anybody there it stays the same.
A
Okay, it looks like we have Ryan Hallisey as the first agenda item. Ryan, before you start, I'm going to put you on a 10-minute fence this time. Since we've talked about your new feature a lot, I ask that you take any detailed discussion to the email list.
D
Yeah, that's fine! I've been time-boxing it every time in this meeting, but I think we've gone over every single time, right? So yeah, that's fine.
D
Yeah, so there were two things I wanted to bring up. One was the change that I talked about on the mailing list: when talking about the virtual machine pool, the idea is to consider everything to be a pet instead of everything being cattle, and that change is reflected here in the prefix naming. Like I said, it was discussed on the mailing list.
D
This does change a few things. The idea is that now we have naming, so we do the association between objects, so that things can be overridden and attached as part of the pool. That kind of makes sense when we think about doing these object overrides, like cloud-init data. PVCs would be another one: if you want to attach disks to individual VMs, or you want each individual VM to have its own disk, that would be another example.
D
It kind of makes sense to consider them as pets, so naming makes sense; the naming is actually what does that attachment. With that being said, that brings up the second thing I raised on the mailing list, which is a field I named "threads".
D
I don't really know if that's the right name for it, but the idea here is that now that we have names for these, like gs012, we have this increasing postfix. How should it scale up? Think about a StatefulSet: if you want three things in a StatefulSet, we create the first item,
D
the second item, the third item. It's done sequentially, with an order in mind. So we could do the same thing here, or not. The use cases, I guess, would be like:
D
Let's say I have this field "threads" and I set it to one. It would imply that we create the first VM in our pool and wait until it's running before we create the second one, the third one, and so on. But it doesn't have to be this way: it could be that we do multiple threads, with multiple API calls, and we don't do any waiting.
D
So those are the kinds of use cases that I see. Do folks have any thoughts on that?
E
Which I'm fine with. Yeah, so we have this idea in the VMI ReplicaSet of kind of batch operations: how many do we want to create at one time? I don't even know if that's configurable right now; I need to go back and look.
E
The same sort of issue is hit with Deployments and pods, I would imagine, and I don't know how that is handled. If somebody says they want 10,000 pods, I don't think it creates ten thousand pods all at once. When I looked at it, ultimately the ReplicaSet was doing it, but I'm pretty sure it's batched, and I don't know if the user has control over how those are batched. But I'd say that would be important.
D
Yeah, in Jobs there are two fields, so it's configurable: you can do parallelism or you can do completions. Parallelism is non-blocking, and you just kind of go until you get to five; with completions it's like, okay, get to five and then we're done.
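The two Job fields mentioned here are real Kubernetes batch API fields, `spec.parallelism` and `spec.completions`. A minimal sketch of how they combine (the Job name and image are placeholders):

```yaml
# Run 5 pods to completion ("get to five and then we're done"),
# with at most 2 pods in flight at any time (non-blocking batches).
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo          # hypothetical name
spec:
  completions: 5            # total successful pods required
  parallelism: 2            # concurrent pods allowed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox    # placeholder image
          command: ["sh", "-c", "echo done"]
```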
E
So with the StatefulSet they're trying to create deterministic ordering of how things are started. I don't know if we need that with the virtual machine pool; I'm kind of hesitant to start with that. We could add it on later if we needed it, but I'd have to see a really strong use case for why that's necessary. Hold on, somebody's unmuted. Can we mute whoever that is, so they don't accidentally say something?
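The deterministic StatefulSet ordering being discussed is controlled by the real `spec.podManagementPolicy` field; a minimal sketch (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ordered-demo        # hypothetical name
spec:
  serviceName: ordered-demo
  replicas: 3
  # OrderedReady (the default): create pod 0, wait until it is
  # Running and Ready, then create pod 1, and so on. "Parallel"
  # instead launches all replicas at once with no waiting.
  podManagementPolicy: OrderedReady
  selector:
    matchLabels:
      app: ordered-demo
  template:
    metadata:
      labels:
        app: ordered-demo
    spec:
      containers:
        - name: app
          image: busybox    # placeholder image
          command: ["sleep", "infinity"]
```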
E
So I would look at prior art: understand what's happened with the Deployment, basically enumerate what already exists and what the precedent in Kubernetes is today, and decide whether any of it makes sense.
E
Rather than us coming up with our own. And the precedent we have today within KubeVirt is this idea of batch operations, like a batch count. For example, we have that for how many migrations and VM starts we'll allow during workload updates, and things like that; it's something I worked on recently. So, to be consistent, I would probably call it something like a batch size or a batch count, because "threads" might be confusing in the context of actual CPU threads or something like that.
D
So it's something that I think makes sense to be configurable and something that people may want to use. Okay, so then let's just say it's one: what's our behavior then? So, non-blocking, do you think? Like, what is one, and what's the difference between one and two, if it's not blocking?
E
Yeah, I guess you'd have to wait for the virtual machine to hit a running state.
E
This would represent the number of parallel virtual machines you can have that are between a stopped and a started state, somewhere in that pending state: how many you allow to be pending, not yet running, at a time. So it's the in-flight ones that aren't running.
D
I don't know if I understood that correctly, so let me try to read it back. This would be like: we made the API request, and this would be based on phase. So which phase is it that you consider the point where we start the next one? When we've reached running?
D
Okay, so then that would make sense for one. We'd follow this ordering with one, and then with anything above one we're doing the batching. So it's kind of like a bool... no, it's actually more than that, because we need to control how much we want to batch.
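At the time of this meeting the pool API was still a proposal being sketched on the mailing list. The following is a purely hypothetical illustration of the semantics being debated; the kind, apiVersion, and `batchSize` field are illustrative assumptions (`batchSize` follows the batch-count naming suggested above), not a confirmed API:

```yaml
# Hypothetical sketch only; illustrates the proposed rollout
# semantics, not a real API at the time of this discussion.
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: gs                  # pool prefix; VMs get an increasing postfix
spec:
  replicas: 5
  # batchSize: 1 -> create the first VM, wait until it reaches the
  #   Running phase, then create the next, StatefulSet-style.
  # batchSize: N -> allow up to N in-flight VMs (created but not
  #   yet Running) at a time, non-blocking within the batch.
  batchSize: 1
```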
D
Okay, that's mainly all I wanted to bring up. I think we can talk about the rest on the mailing list, but those were mainly the changes: the idea of pets kind of changes the dynamic. I think the other question was: if we delete a virtual machine, say this one, should it be replaced with the same name or not?
E
So you could have some sort of policy: if the persistent storage or secret isn't owned by that virtual machine, it would stick around; if it was owned, maybe it's cleaned up if you delete it and then recreate it. I don't know; there would be some details to work through. But if we're going to use prefixes, and we're going to allow that matching, for pre-population of secrets to match something that's going to be created by this controller, then I don't know what else we would do.
D
Yeah, the only other thing I could think of is to just increment on top, but that doesn't quite get us there; it doesn't allow us to match the PVCs and the secrets. Okay, all right, I'll take another look at this and kind of incorporate the idea of pets all the way through again, because it also affects detach. That was the other thing I mentioned.
D
It definitely does, and it affects what we can do with the pool. So yeah, then I'm probably going to change this name, right? Yeah, "threads" doesn't make sense. Okay, we can follow up on the rest on the mailing list.
A
Cool, yeah, thanks. Sounds good, Ryan. That feature looks so awesome. Like I said at our last meeting, my NASA colleague is definitely interested in that feature.
D
And I'm curious: I want to hear as much feedback as possible on the doc, so I'm trying to hear lots of different opinions, and if they've got a use case, I definitely want to hear it.
A
Yeah, he actually wants us to come and present to him, and all of a sudden we have four different cons that we have to present to, so my next three months' worth of work has gone ballistic.
A
So, speaking of cons, we have KubeCon and Red Hat Summit coming up, and we need videos. If anybody has a great video that they want the community to present at these cons, please let me know; otherwise, I'm going to go through our catalog and see what we have.
A
We also need volunteers to attend the booth and answer questions. So there's lots to talk about. Our team went through a transition in the last two weeks, and I've been very busy trying to move our work blocks around and get organized, so that email should be coming within the next day or two.
D
Hey Chris, I had a curiosity. I know KubeCon is virtual, and you're saying "attend the booth". What exactly does that mean?
A
Yeah, there's going to be a conference platform. If you've been to a Red Hat Summit before, it's basically a dashboard, a virtual summit, where you can wander around, enter various video conferences, and participate in chat. And the new platform is going to have one-on-one chat, so a group of people can have something like a Zoom meeting.
A
That's built into the platform; it should be pretty cool. Attending the booth means we need people, at certain time frames, to sit there and handle questions as they come in over chat and in video conferences.
A
We really have no idea what the customer response is going to be. At the last virtual summit we had maybe three questions in chat, but we're now a year into a pandemic and a massive work-from-home change in mentality.
A
Sorry, I was distracted; what's the question? Ryan asked what the details of the virtual booth would be, and I mentioned that there's a chat feature in the platform, and volunteers just have to be available during blocks of time to handle those incoming questions.
A
Yes, exactly. And this year the platform has changed, and now there's a video conferencing breakout session. So if a customer really wants to hear information from the horse's mouth, they can request a video conference, similar to what we're doing here on Zoom.
A
And that goes for KubeCon as well.
A
And then the Linux Foundation has KubeCon as well, and for that con we're just going to do office hours, a virtual office hour. So we'll just have to be available to handle customers that are interested in the project and answer their questions.
A
We're not going to do any kind of detailed presentation or keynote; it's just too much work. It's March 10th and that is May 5th, so we've got, what, 45 days to get our material together.
A
Anybody want to talk about anything at all? Any pull requests that are worthy of discussion?
A
You sound really hollow to me. Can anybody else hear him more clearly?
B
...higher consumption of RAM. And I also saw that the RAM peaks to almost 140 gigs, which I guess is the RAM capacity of each node. So maybe we should reduce the load on the nodes, and we could reduce the flakes that we see in the tests, because I saw this behavior when I was working on the SR-IOV lane stabilization: when the node was loaded, the memory peaked to the top and there was a huge load on the CPU; everything slowed down, and I had these connection issues.
E
I think Roman introduced something like that fairly recently, where there was a request to request more memory for each test lane, or something like that, which would in turn reduce the number of parallel test lanes that could execute. So, by requesting more memory, it would reduce some of our pressure there.
F
As you said, this increased memory should make things better. I think it landed at the end of last week, so we should start seeing this improvement now. And also, now that we don't have the 1.17 lanes running, the pressure is much lower. So yeah, hopefully; during all of this week we haven't seen this test failing.
F
Hopefully. Let's keep it quarantined for a while longer; we're trying to establish some kind of procedural policy for taking tests out of quarantine, but it is not established yet. We should see the test passing consistently for a while. But yeah, this sounds good to me, the explanation about what happened with this test. So let's keep it there for a while and then bring it back to the stable suite.
F
I think there's also a PR in the repo for increasing it, and I'm not sure what the current amount is. I don't remember exactly how much it is, but in the jobs we are now requesting, if I recall correctly, 34 gigs for each job, across the whole CI cluster.
F
This is what the notes have: we are running now in parallel, at the peak, like 10 jobs per node, and yeah, the nodes have more than 300 gigs. Previously, before this increase, it was more than 12 jobs or something like that, and now the concurrency on each of the nodes is about 10 jobs.
B
Does anybody know how much RAM we have on the nodes? Is it 300?
A
And we can also create an issue in the KubeVirt repo and really get into the details of the problem, probably a lot more so than in the weekly meeting.
A
So could I ask you to create an issue and then post that issue in the meeting notes?
A
Thank you for that bit of information, Federico. I copied it into the meeting notes, but please do create an issue, and then we can come back.
A
We can come back and talk about it in greater detail next week, and between now and then you guys can work out whether there is a problem and find a good solution.
A
Okay, moving on. Any more items?
A
Going once, going twice. Okay, as I said last week, there's a pull request for establishing our membership policy: how to become a member and how to get upgraded to a repo maintainer and owner.
A
Please take a look at community issue number 75 and the pull request associated with it. I greatly appreciate community comments. So far, the majority of comments have come from the Red Hat office of open source, and I'd really like to hear what the community says.
A
I can share, but I don't know how you guys do the filtering.
E
Yeah, so does anyone have any bugs they'd like to bring up for discussion? If so, we can do that sort of thing now. I think for the bug scrub we might want to skip it, or at least be a little more proactive next time about assigning somebody to do it before the meeting starts; otherwise, we kind of fumble around.
A
Usually it's Peter, and he says that he can't attend this time; he requested that somebody else drive that portion.
E
Nope? Okay, yeah, I'm in favor of skipping it this time, and if this becomes a pattern, we'll figure out a way to pick it back up.
A
Yeah, sounds good. We definitely want to be doing that. I'm not sure about everybody else, but I find it very valuable to get everybody connected over audio and video and talking about bugs, rather than just having them linger in the background and not get addressed.