From YouTube: KubeVirt Community Meeting 2021-05-12
Description
Meeting Notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls
A
Hello everybody, I am Chris Caligari and I am your host for this week's weekly meeting for the KubeVirt community. Let me share my screen so you can all follow along with the notes.
A
Yes, okay, good deal. I never know if that button works right on my Mac, so we'll go down the list here. If you can add your name to the attendees list, the community always appreciates seeing who has attended every week.
A
Do we have any new people joining us this week? I would like to say hello and make an introduction.
B
Sure, hi, my name is Mark DeNive. I've recently been working on some stuff with KubeVirt and saw this, so I just thought I'd join up and see what the weekly community meeting was all about.
A
Great, welcome Mark. I think I just peer reviewed one of your pull requests.
B
Yep, yeah. I just put up one of the more recent blog posts on the KubeVirt website, around the vGPU and the Intel GPU stuff. So I just thought I'd join up and see what this is all about. Yup, I thought that was the one. Thank you so much for that. Definitely.
A
Okay, looking down the list, a few more people are rolling in here.
A
Hello, I think everybody is here. Okay, let's get into the agenda then. Daniel has the first item. Go ahead, Daniel.
C
Okay, great. Just a heads up: as a couple of you guys probably have noticed, we lost the coverage lane a while ago, which is very sad, because no one knows how much test coverage we have. We finally managed to re-enable this, and from now on it's enabled on all PRs again. So everyone should see a test that is called kubevirt-coveralls, where you can at least see how the lane is run, and there should also be a coveralls.io test result on the PR which tells you how much coverage you have. And yeah, any questions on that?
D
Yeah, hi. We had a few back and forths about this PR, and on my end I have one, I think hopefully final, disagreement about some of the proposal, where it seems like the storage team and I might be disagreeing, and David suggested we bring it up here, because everybody might be here.
D
I don't know if the involved people read my comments. It's mainly about, Chris, if you scroll up a bit maybe, or let people read that first. So the proposal aims to add, next to the source field, a field that is, I think, called dataSource, and to me it doesn't sound like it would be clear to the user why there should be different source fields. It seems a bit unintuitive. My suggestion would have been that we put the reference we create into the source field.
E
Hi, this is Mike Hendrickson. So yeah, I think having one field called source and one field called dataSource is maybe not the best naming, and we can talk about that in a sec. But I think the idea of having a separate field, rather than embedding it in the existing source, is, well, for the reasons David mentioned in there, as far as code goes, but also as far as the user is concerned.
E
We also just recently added storage profiles in DataVolumes, so we did a very similar thing. For those of you that are familiar with DataVolumes, there is a spec.pvc section, in which you give the specification of the PVC, and we wanted to make it easier for users to not have to supply every single PVC field for every DataVolume.
E
So we have these storage profiles where, given the storage class, there are certain defaults, and we went back and forth for a while on how to extend the DataVolume for that. We ended up going this way, rather than having some fields of the PVC be optional and filled in and some fields mandatory: we felt that it was better for the user to just have a separate field.
E
I forget what it's called, actually, but it's where they fill in what's needed and the storage profiles will be utilized. This is following in that pattern as well, and that was another reason: it's consistent with keeping the old way of doing things and the new way of doing things separate, in a new field.
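For readers unfamiliar with the API being discussed, here is a minimal sketch of the two DataVolume styles described above. This is the editor's illustration, not something shown in the meeting: the explicit `spec.pvc` form spells out every PVC field, while the newer separate field (which the editor believes is `spec.storage` in CDI, since the name is not recalled in the conversation) lets storage-profile defaults fill in omitted fields.

```yaml
# Illustrative sketch only; field names per the CDI DataVolume API as the
# editor understands it. The image URL is hypothetical.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-explicit
spec:
  source:
    http:
      url: "http://example.com/disk.img"
  pvc:                    # old style: the user supplies every PVC field
    accessModes: ["ReadWriteOnce"]
    volumeMode: Filesystem
    resources:
      requests:
        storage: 5Gi
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-profile
spec:
  source:
    http:
      url: "http://example.com/disk.img"
  storage:                # new style: omitted fields come from the StorageProfile
    resources:
      requests:
        storage: 5Gi
```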
D
The reasoning David sent in the comment was that, for you, the source field holds embedded sources, and this one would be a reference. One example that kind of collided for me with that is the PVC, like source.pvc, because that is a reference that works the same way. It looks the same to the user as the reference we would be adding: just a name and the namespace, or not even the namespace, depending. And yeah, it seems weird to have the PVC reference in the source field but not whatever we call this reference.
E
Yeah, yeah.
E
Yes, so I'm just seeing this for the first time, and I think my inclination is to stick with what we've got but change the naming, but I will talk to other folks on CDI and see what their thoughts are. I definitely see your point.
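The disagreement above is about where a reference-style source should live in the API. A hedged sketch of the two shapes under discussion follows; the field names are assumptions by the editor (the conversation calls the proposed field "dataSource", and the exact design was clearly still unsettled at the time), not the final API.

```yaml
# Hypothetical sketch of the two API shapes being debated; names are
# assumptions, not quotes from the proposal.
# Option 1 (D's suggestion): the reference lives inside spec.source,
# next to the embedded sources such as http or pvc.
spec:
  source:
    dataSource:           # assumed name
      name: golden-image
      namespace: images
---
# Option 2 (storage team's proposal): a separate top-level reference field,
# parallel to spec.source.
spec:
  sourceRef:              # assumed name; called "dataSource" in the discussion
    kind: DataSource
    name: golden-image
    namespace: images
```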
A
Let me know if you guys find it convenient to open up the CNCF Zoom meeting. I can handle that for you, like we're doing with the performance and scale SIG. Yeah.
A
Okay, thank you, Kevin. Boy, just 10 minutes in and we're at the end of the agenda, with an empty open floor. So, anybody have anything else that you want to talk about?
G
We found some documentation about the integration of oVirt and KubeVirt, and this is not working as expected regarding scalability.
F
Do you have a link or anything that you can write down?
G
The same thing. I can send these later, but let me find it here. Just one second.
G
Sorry, did I not answer you? Well, then let me see here. I have it, if I have it.
G
What is the name? OpenShift Virtualization, something like that, and this was described on the official site here.
G
We saw some code on the OpenShift Virtualization side, you understand? Okay.
G
But we are having some trouble making all the pieces work in a proper way. Especially since 100 of our VMs have GPUs on them, and the pieces are not working so well. Perhaps we need to fix the code. Someone has written the code; we would like to know who wrote it, to talk to them. Okay.
A
Andre, this is Chris. You're in luck, because I'm actually on the oVirt Red Hat Virtualization team, and I have two guys on my back end that can help you with this. Wonderful.
A
And I'm also on Slack, so if you want to just... I'll put it here.
G
Your name and your email, then I'll find you on Slack. Okay.
A
Yeah, I have guys working on this right now, actually, so this is very convenient. Minus the GPU part: we are very, very thin on GPU hardware, but hopefully we can collaborate and get you through this piece.
G
Yeah, let me tell you about the scalability: we plan to reach 150,000 bare metals, one million concurrent users.
A
They're just pinging me about use cases for the CNCF incubation and graduation.
A
Yeah, yeah, that's on me and the Red Hat Office of the Open Source Program. It's really boring paperwork that nobody wants to deal with.
G
I can give you a link regarding the GPU, just one second, so you have the reference of what we are using. Not the exact model, but how we are doing the integration with NVIDIA.
G
We are using 44s; that's the GPU model.
A
Yes, definitely, we're happy to help you in real time on this. Is this, whatever, 7 a.m.... this is UTC... 2 p.m. UTC? Is this a convenient time for you? I'm on Eastern time. Oh, okay, great, I'm in Pacific, so...
A
And my two fellows are in Central time; they're in Texas, so it's convenient for all of us.
A
Okay, the next agenda item is wide open.
A
Good question: we're in progress of petitioning All Things Open in Raleigh. That event is scheduled for October 15th, I believe. Stu and I are still... yeah, yeah, we have some time. The worldwide COVID pandemic is throwing things for a loop; we're all pretty nervous to travel. That's got the summer and fall events in limbo on whether or not they're going to be in person or virtual.
A
We just finished up Red Hat Summit, and yeah. I saw that; that event was very good, yeah. Unfortunately, we had almost no traffic for KubeVirt.
A
So it is what it is; the whole virtual thing is really weird anyways.
G
The KubeVirt blog for the Intel vGPU, can you send me the link? Yeah.
A
So Stu and I are going to put together an internet-wide k3s cluster based on Raspberry Pi 4Bs. Both of us have stars in our eyes about how awesome this thing is going to sound. We've got about half a dozen people in the community wanting to participate and throw hardware at the demo.
A
Besides that, we're going to look at the Supercomputing conference in November; that will be HPC and GPU oriented. "If I can help there..." Well, that would be fantastic.
A
Yeah, that would be great. My two guys do feature validation for KubeVirt and OpenShift Virtualization, so these guys get in there and kick the tires of the car before it goes out to the dealership.
A
Yeah, so not a problem. My team standup is actually right after this meeting, so we'll get right into it.
G
I appreciate that. If you have any other questions regarding GPUs, we are heavy users of those. We are planning to move to AMD GPUs, but Intel is another alternative, as they became something. The problem is the performance against the price. Now I know, you know, AMD is one third of the price for the same performance on NVIDIA, you know, on our project, and we are buying half a million of them. This is huge.
A
Oh yeah, oh yeah. I don't recall seeing any work being done on the AMD GPUs, so someone else will have to chime in on that. Yeah, we need to have...
G
They have worked on it with OpenShift; it's there already, but for KubeVirt... great, so downstream.
G
Well, that's mostly what I would like to talk to you guys about. I would appreciate it if we can contribute to each other to make this a success, because we came to stay, not to play around. Okay, yeah. Absolutely, now I have your email, so, on KubeVirt, to make our solution work. Let's say that. Okay.
A
Okay, well, we talked a little bit about events. To complete on All Things Open:
A
We would love to have you on board. You will be required to provide your own hardware if you want to participate. I just bought a Pi 4B; I think it cost me 160 dollars with board, case and storage.
A
So we have a link to a couple of hardware options here, and it's going to be multi-architecture. So if you have a NUC or any other kind of PC that you can use, then that will work too. We want at least eight gigs of memory, to run a virtual machine alongside a container.
A
And again, my objective for Supercomputing is to go fishing for NASA. That's where those guys hang out, and they run large HPC clusters. They're stuck between a rock and a hard place dealing with their legacy virtual machine workloads and their new containerized workloads, and they don't want to run two APIs and orchestration engines.
A
So they're counting on KubeVirt to fill that gap.
F
Does anyone have any issues or PRs that they would like to discuss, things that they're working on?
A
Yeah, sounds like it. And we did a really big bug scrub last week. So do we want to do a bug scrub, or do we want to skip that?
A
But Rook is just... is Rook just a front end for Ceph, right? It's an operator, yeah, an integration.
A
Does it do Gluster also? I didn't think so. No. Yeah, so in Red Hat land there's a big debate going on between Gluster and Ceph. Red Hat's been into Gluster for a very long time, so it's a very mature project, and Ceph is... I wouldn't say it's up and coming, because it's been out for like 10 years now, but Gluster has been out for like 16.
A
I've interviewed at jobs that have had 20 different Ceph clusters across the world, managing petabytes of data, and so when I look at the technology of Ceph versus Gluster, it's no question.
A
Ceph is the better product, and Rook is just the Kubernetes integration for Ceph. Even in one of Red Hat's products, we investigated using Ceph as the underlying storage for large Galera clusters.
A
And the performance... that was in AWS; that was before Ceph was even allowed to run in cloud. Prior to a couple of years ago, it was bare metal only.
A
Yes, let me tell you how we are gonna... I don't wanna step on anybody's feet who are big Gluster fans.
G
We have 600 gigabytes of RAM on each server. We use 256 for the VMs, and we are grabbing 300 gigs (it's more than 600, 600 plus). With that we grab 300 gigs and we create a RAM disk, and then every host offers this RAM disk to Rook. This is the best I/O path possible. Then Rook spans the Kubernetes cluster, and we are reaching the best IOPS per hour possible to run the VMs we are using.
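The per-host RAM disk setup described above can be sketched roughly as follows. This is the editor's reconstruction under stated assumptions, not the speaker's actual tooling: the Linux `brd` kernel module exposes a RAM-backed block device (`/dev/ram0`) that a Rook/Ceph OSD could then consume.

```shell
# Sketch only: carve a 300 GiB RAM-backed block device on a host.
# brd's rd_size parameter is in KiB.
RAMDISK_GIB=300
RD_SIZE_KIB=$((RAMDISK_GIB * 1024 * 1024))
# Printed rather than executed here, since loading the module needs root:
echo "modprobe brd rd_nr=1 rd_size=${RD_SIZE_KIB}"   # creates /dev/ram0
```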
G
You understand, there is also the deduplication; the deduplication makes everything happen. Also, on Gluster we have VDO, but we are fine with what we have on Rook. Okay, this means that we are reaching the best disk I/O possible for the VMs the users are running on.
G
I don't have it here, but I can show you the actual session working, if you would like to see it. "How about we meet offline, and you can show us?" Wonderful. We are reaching between one and two million times a regular SSD's I/O, you understand, and this is amazing for Windows. The worst problem we have with Microsoft Windows is writes to the disk, okay, and we have solved it that way. That's the solution, okay.
A
Still questioning Rook versus Gluster, then, what we are using...
A
Your big gain with Rook and Ceph is going to be the efficiency of disk usage and the efficiency of the I/O path.
A
Gluster still has the double I/O from using file-based storage, and Ceph got away from that with RocksDB. Have you gotten deep into the underlying Ceph cluster yet? "Not me, my guys." For me, I don't hear much talk in the community about Ceph.
A
Yeah, and it's funny, because you have hyperscale, and I have a couple of projects that I do outside of work that are micro-scale, and so I have different problems than the big fish, as a little minnow.
A
I have different demands, and I want memory to be used efficiently. For instance, I find that Ceph is too memory hungry. "And so you have plenty of memory to run everything?" Yeah, but then you hide problems.
A
My systems only have eight gigabytes of memory, but I have to run six OSDs, and this is for a box that does home video stuff, for when you just have a small box in your house and you have video cameras around. Video surveillance.
A
That's the word I'm looking for. And I'm sure I could do a RAID solution, but then you get into custom firmwares, and, I think hardware is moving so fast these days, I don't want to get pinned to a custom firmware that may not be available in a year or so.
A
So Ceph is the best software solution out there.
G
It does something when... when there is some... when we need to do maintenance and things, we can. There is a way to do a command before, because this is actually a VM on top of Google.
A
Yeah, I'm lucky in that aspect. My data is mostly write once, read many, so I don't have to worry about flushing to disk.
A
In fact, I have my deep scrub set to 90 days. Pretty, pretty wild.
A
Because, darn it, I don't want my disks thrashing all the time. The default settings on Ceph just constantly thrash your disks, and the more OSDs you have, the worse the problem becomes. You end up drawing a ton of power and wearing your disks abnormally.
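Stretching the deep-scrub cadence as described above maps to Ceph's `osd_deep_scrub_interval` option, which takes a value in seconds. A hedged sketch (the speaker's exact commands are not shown in the meeting):

```shell
# Sketch: deep scrub every ~90 days instead of Ceph's weekly default.
DAYS=90
INTERVAL_SECS=$((DAYS * 24 * 3600))   # 7776000 seconds
# Printed rather than executed, since it needs a live Ceph cluster:
echo "ceph config set osd osd_deep_scrub_interval ${INTERVAL_SECS}"
```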
I
Chris, would you mind going to the bug scrub section? I don't want to leave the people on the issues hanging for two weeks.
A
Okay, yeah, no problem. We're at 7:45, so let's go.
I
Yeah, there's enough to be managed real quick.
I
You can just ask about the...
H
There is nothing related to KubeVirt here, right? I mean, this should be just CDI.
I
I believe it. I see you on the thread, I mean. Do you have anything to comment here? He's not doing that anymore? That sucks.
C
Yeah, the problem is that we have different release outputs when you are looking at the tagged release versus the nightly release. Also, the URL on the tagged release, which you see at the lower end of this output, has "devel" inside, which is not really nice. I think this issue should be fixed somehow, so this is just something we need to tackle.
I
I see, okay. So let's keep it as a tracker, and unless anybody protests, let's mark it so we don't need to go over it again.
I
So the issue is that we use structured logs, I believe, but we don't really mention the VM in these logs, so it's hard to find the corresponding logs.
I
And the last one: an inconsistency for node-labeller.sh and the virt-handler image.
A
That takes us to the end of the agenda, and right on time. Thank you, Peter, and thank you everybody for attending this week's meeting. Have a good week, and we'll see you all next week.