From YouTube: KubeVirt Community Meeting 2021-07-21
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.y4kzkq4flc0x
A: And welcome, everybody. This is the KubeVirt community meeting, July 21st, 2021, and on the agenda we've got a couple of items from Daniel Hiller, so I'll turn it over to you, sir.
B: Hi everyone. We had an issue last weekend with tests suddenly not being executed, due to a PR that got merged. Therefore we added a small sanity check, in which we verify that tests are actually executed in the e2e lanes. Just a heads-up: if you see some error that you can't directly make sense of, this could probably be it.
B: On to the next item: we wanted to have better security for the GitHub webhooks that we need to install for each repo or organization that is onboarded onto Prow. This is now managed by our deployment; we reorganized the onboarding for every repo that works with Prow so that it is managed by us, and people who are onboarding don't have to bother with manually updating GitHub webhooks, repo secrets and so on.
A: Wow, quick meeting. All right, Alex, you have a note, so I'll turn it over to you.
A: All right, well, while Alex sorts out his audio, I will move on to the next item, which is just a quick note from myself. I'm sure people saw Chris's note yesterday that he's going to be out for a bit, which will probably include the All Things Open conference. I can give that presentation myself; that's not a problem.
C: I don't know, I was using the wrong device. So basically, my issue, slash question, is: in CDI we got a PR for adding ARM builds to CDI, and, you know, the PR looks fine; I can merge it, but I don't have any lanes to actually make builds or run tests or anything like that. So I'm not quite sure what to do with it. I noticed KubeVirt has something there, and I just need a little bit of help to get it over the finish line, because...
C: Where it's running a single test on some node somewhere, I'm not quite understanding what's going on. So that's why I need help.
E: Yeah, yeah, of course. This was a node provided by ARM themselves; the hardware came from ARM. They had this node, and we have some code that executes Kubespray and creates a single-node cluster on this ARM node. And then, instead of adding it to the Prow workloads cluster so that it would be connected to Prow the same way as we do with our workloads, given that this is a single-node cluster...
E: Maybe it is not always available, and this can create a problem with the Prow controller manager and Plank and so on. Instead of that, we are using the kubevirtci external provider, and we are executing the periodic jobs that Alexander mentioned there. There is a single periodic lane that currently just deploys KubeVirt and executes one test. There's work in progress to extend the coverage, but yeah, this is what we have now.
E: Yeah, we have, in the secrets available to all the builds, the kubeconfig for this ARM cluster, and I think it could be used. We should ask beforehand, because this is something provided by ARM, but I think they would be very, very happy for it to be used for this extended testing. We should ask them. And as for setting up the periodic, it should be pretty similar to what we have. I can...
C: So I just wanted to, you know, figure out what I should do to extend our CI to also use it, and who I should talk to. That's basically, you know, what I'm asking here.
E: Yeah; I don't know how testing is done in CDI, but if you are using something with kubevirtci, then it should only be a matter of using the kubevirtci external provider and pointing it at this ARM cluster.
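For context, pointing a test run at a pre-existing cluster with the kubevirtci external provider is mostly environment configuration. A minimal sketch, assuming the common `KUBEVIRT_PROVIDER`/`KUBECONFIG` conventions and make targets used in the KubeVirt repos; the exact path and target names here are illustrative assumptions, not taken from this meeting:

```shell
# Sketch: run against an already-provisioned (here: ARM) cluster instead
# of having kubevirtci spin one up. "external" tells the tooling to skip
# node provisioning and use the cluster behind KUBECONFIG.
export KUBEVIRT_PROVIDER=external
export KUBECONFIG=/path/to/arm-cluster.kubeconfig  # e.g. from the shared CI secrets

make cluster-sync   # deploy the components under test to that cluster
make functest       # run the functional test suite against it
```

This is the same mechanism E describes for the periodic ARM lane: the job does not manage nodes itself, it just consumes the kubeconfig stored in the build secrets.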
F: Yeah, I think it's possible. I guess we just have to keep in mind that we can only run one job at a time there.
F: So, exactly: this is a general limitation which we have right now, but yeah, technically there is no issue to expect.
A: We'll take this offline, but I'm curious what the specs of this ARM node are that we have. But again, that's my own selfish agenda, so no need to discuss it here.
G: Hello, this is Fabian. I still need to open the shared doc; let me do that for a second. So, one thing that I wanted to bring up is... let me find the community meeting doc. Oh, there it is. It's our v1 release, right? So let me get to the agenda.
G: Sorry, I was also surprised that we got to the end of the agenda so quickly. V1: so, two things, v1 and actually the incubator. On the one hand side...
G: On the one hand side, right, we would like to move forward to get to the incubating level in the CNCF, and we need to meet some criteria there; I know that a few folks are working on this. And why does v1 relate to it? Because once KubeVirt gets to the incubator, then we can benefit from the marketing support of the CNCF, right, and I would like to piggyback on this support. So that means I would like to go to v1...
G: After we go to the incubator, there's now a cycle, because then, right, if we are in the incubator and we go to v1, then the CNCF will help us: you know, provide marketing material, tweet about it, and, I don't know, possibly even help with a press release and such. So that's something that can help us. Why is v1 relevant here? Because if we want to move to the incubator, then we need to show the TOC, the CNCF TOC...
G: Our roadmap is interesting, and that is where v1 comes in. So I would actually like to pick up the discussion that David started in February of this year. He started a doc, a roadmap, and he did a KubeVirt Summit session on it. "KubeVirt: roadmap to v1", was it called something like that?
G: Let me see if I can find it; I have it open, let me see. And so, what do I need... let me see. Is David actually here? He is. So I want to start tracking the things we want for v1 a little bit more actively. I did something today, and I also wanted to look at the doc. So let me put it up. Where is it... "version one planning", not "v1" but "version one planning", this doc here, and I'm putting it into the shared doc.
G: That's good, now I can ship it; that's funny. So we've got the v1 planning, and what I did earlier today is: I started to create a milestone in GitHub, and issues for everything that we've so far planned for version one according to this document. And the milestone that I created is this one, the v1 milestone, this one here, and...
G: What feedback I want to see right now is: when I look through the doc, I see that some of this stuff is done. Like, we moved our API to v1 already; that's great. Non-root VMI pods I need to pick up at some point later. But, for example, for persistent container disk volumes...
G: I know that we discussed it, but I'm not seeing that we made any significant progress in order to meet that requirement. Yeah, that one.
H: That one specifically got kind of NACKed, I would say. So yeah, there's been a discussion; we'd have to link to that discussion and say why it's not on the list anymore.
A: Hey Fabian... actually, this is directed to David, you own the document. The document that we just shared in the shared doc does not appear to be shared with the Google group, so I don't know if it's going to be available to everybody.
G: It gets me; that is why Google has three billion users, right? Yeah, all right. So yeah, I would like to see if we can move it forward or, you know, work out the path for this requirement specifically. And by the way, David, we also had the discussion...
G: Oh, it was with, I think, Alicia, about NBD. Yeah, maybe we can drive that forward, and maybe we can come up with some ideas about, like you said, part of a topic... but then, yeah, I didn't want to derail. Actually, okay, I'm holding myself back.
G: Okay, got it, yeah. Sorry, I'm thinking out loud. The other thing is the "establish predictable community releases and support patterns" item, and I think there are some sinkholes or traps here. Like, for example, stable branches and such, right: what do we call stable? I think we adopted the Kubernetes pattern here, which is saying: we support the last two releases.
G: We see in the community, like specifically Red Hat, where I'm coming from, that they would very much like to support all releases, and we need to come up with a process, right, or find a way of how we define stable branches, what that actually means, and who owns responsibilities.
H: Today it's more a question of: do we want to revise it? I think so. It's been explicit; we're talking about the release branching and what's supported, and we state that very clearly.
G: Well, yeah, okay; fair point, David. I think we need to revise it a little bit more. Like, you know, I think the example from a few weeks ago, when Ryan backported what I think was a performance fix to, I don't know, 10 branches or so: we should see if that's feasible, because, right, if we take this example, Ryan backported it one by one, in reverse order, to the other branches.
G: So I would like to revise that a little bit more, to have a crystal-clear statement about this, right. Yeah. But launcher updates: we did that. And networking: that was also done. And "review and revise the user guide": I know that something happened there; I still think it needs some attention. Then the last one is the templating mechanism for VMs. First, I wonder if it should be called "templating mechanism", but in general we are also making progress here.
G: The area of most discussion, where consensus might be difficult, but which I would like to bring up because it is important to me: I want to understand why we settled with the non-root VMI pods item as it currently is. My problem with the state that is linked here right now is that it's linking to a PR which is only making it optional, right, but I'm not seeing it captured here that we say: by default, we should be running VMs non-root and totally unprivileged.
G: So, you know, not privileged, and no additional capabilities, which would be important to me, because I think in the end one important aspect of KubeVirt to me is: we are just a pod, right, and pods by default don't have any of these features unless the workload really needs them. And I wonder where we want to set the bar in that area. I'm asking because it requires some foundational work to get us there, which has an impact on the timeline.
H: We're pretty close. I think it would also be wise for this milestone to add the things that we've completed, if they aren't already there, just to show the progress we've made, if that's important to the incubator thing. And also I think that going to v1 after incubation is kind of neat, because then it gives us a path, I think, to GA as well, like the next milestone being GA.
G: Yeah. So I'll be gone for two weeks, but what I would propose is actually to take each of the items which are currently open on the milestone (milestone 20, which is now v1) and put them on the agenda for the upcoming four or six calls, right, on consecutive calls, to look at them one by one, and to communicate that up front so that interested community members can join. And maybe we take like 30 minutes on every call to just discuss these topics, to simply make some progress there.
G: Yeah, we can do it at the end, so that anybody who is not interested can drop, but I would take the opportunity to start working through it. And what I would encourage everybody to do as well is to think about: are there other things that we really need for v1? Or, you know, why do we need these elements for v1? To think about that as well.
A: Okay, and with that, then: is there anybody that has any pull requests that they are focused on and need help or attention on for some reason?
A: All right, so I did not hear a call for pull requests. Anybody? Quiet week, man, crazy. All right, looking back at the mailing list, then: we had a few topics over the course of the last week. Let's see. We know about the CI issue that came up late last week; there did not appear to be any fallout from that, thankfully, because we did do a quick review. Has anybody seen any test instability as a result of that? That's probably directed mostly at Federico.
A: Great, we really lucked out with that one; that could have... Congratulations to the reviewers, then, for doing their due diligence and not letting bogus code through. So, great.
A: Give credit to Jen on that one, actually; I put that pull request through by proxy for him, because he was the one that caught it, but he was not at a keyboard, so I took the initiative there. All right. We have VM status definitions; there was a bit of a conversation there, I think that's resolved. VM pools and I/O: anything to bring up on either of those topics?
A: Okay, I think we're good there. And we normally do a bug scrub; I see that Fabian has dropped, so he will not be here to respond to the seven or so, five, five that he put in this morning. We have... and I apologize, I don't believe I'm able to share my screen. Is anybody able to spearhead this, if they are able to share their screen?
A: That's quite a wall of text; let's see. So: "An 'image not found' error occurred when I use data volume cloning with smart cloning. My environment is as follows: we have volume snapshot classes, we have Rook Ceph, and a data volume with a test dv1."
A: Is this something that's... I think we might have some people who could look at this in the room. Let me see.
A: "Review and revise the user guide"; all right, yeah, that's true. I have personally noticed that the information in the user guide has been slowly drifting, so it might be worth a... oh, and I see this is part of this v1 milestone. That's why it's important. So.
D: Everything Fabian created in the last three hours with a milestone, like the four or so issues, should be the milestone issues.
A: Awesome. I would imagine, then, that we'll be spending far more time and focus on those in the upcoming weeks, so yeah. I assume that they're going to be more forward-looking, feature sorts of issues, so I will skip past them for now. We have "user cannot provide service account for a running VM". Oh no, sorry, that's one more for the CNCF there.
A: So then we have Marcelo, six hours ago: "virt-operator: configure virt-handler to run more than 110 VMs per node". Yeah, okay. So there's some history here. When we first set this up, I believe 110 is the default for...
A: For the number of pods that a node can run, the Kubernetes default, and so we had picked that as the number of VMs as well. And we do have a response, and so yes, this is something you can do from the custom resource, I believe.
A: Okay, so back in the day: the virt-launcher binary itself does have a flag that you can add to the runtime that will configure that number. I think, and this is where I'm a little fuzzy, that we do expose that through the KubeVirt custom resource. So it is something that you could configure for your cluster. You can do it; it's...
H: Yeah, maybe we should increase the default and also expose this as an explicit tunable on the KubeVirt CR as well. So it's a property; it's like a CLI argument to virt-handler for how many KVM devices it exposes as resources, and that limits how many virtual machines you can run. So we would want to expose something on the KubeVirt CR saying "VMs per node", something like that, and then have virt-handler automatically pick that up.
H: But today the escape hatch is that somebody can create a patch which modifies that CLI argument.
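As a sketch of that escape hatch: one way to carry such a patch declaratively is through the KubeVirt CR's customizeComponents.patches field, which can modify the virt-handler DaemonSet that virt-operator manages. The flag name (--max-devices) and the value below are assumptions for illustration; check virt-handler's actual CLI help before relying on them:

```yaml
# Hedged sketch: raise the number of KVM devices virt-handler advertises,
# and therefore the VMs-per-node ceiling, by patching its DaemonSet.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  customizeComponents:
    patches:
      - resourceType: DaemonSet
        resourceName: virt-handler
        type: json
        # Append an extra CLI argument to the first container of the pod template.
        patch: '[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--max-devices=200"}]'
```

The explicit "VMs per node" tunable H proposes would make this patching unnecessary; until then, a patch like the above is the workaround discussed for the issue.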
A: Agreed, I believe. Yeah, as far as changing the default: I would stick with whatever Kubernetes is using for its default number of pods.
A: Got it, okay. Yeah, I do see that, perfect. Yeah, so it's exactly what David said: we just simply need to do the legwork to make this not so painful to configure. In the meantime, there is hopefully a workaround for him; I think, as David said, we could probably patch. So I think we should be good from a response standpoint.
L: I'm sorry, I didn't catch what you said. I said that I didn't know that we can patch whatever the device plugin exposes.
A: I think, because it's actually a command-line argument, if we patched the command that we actually run, then that would do it.
A: It does not have a milestone. I think what he's pointing out... I think this is actually a request that it should be unprivileged, right? Or is it? I know that Lubo had just gotten a PR merged to make this the case.
D: Yeah, we mentioned that; we discovered it in Marcelo's scale test and just talked about it in the last scale meeting. The picture is very small, but if you zoom in there on the goroutines, you see that he scales VMs up and scales them down again, but the number of goroutines still increases over time, which is a bad sign.
A: And I lost track of where we are. "Work queue performance", also by Ryan; so it sounds like this is another scale-test issue.