From YouTube: KubeVirt Community Meeting 2021-05-05
Description
Meeting notes: https://groups.google.com/g/kubevirt-dev/c/jKv5EuHNB4U
A
The recording is started. Hi everybody, I'm Chris Caligari, and this is the KubeVirt weekly meeting, where we discuss issues and topics related to KubeVirt.
All right, and if you could add your name to the attendees list, I will be grateful. We usually take a minute or two at the very beginning for an introduction. So if anybody new has joined us and would like to say hello, now is your time.
Most of us are Red Hat, but I still like to have an introduction so we can all get to know each other a little better. There are a few who are non-Red Hat, and so it just has a more welcoming feel.
D
I just wanted to do a brief follow-up here in this meeting on last week's result from the first scalability SIG meeting. An outcome there was that a group from IBM is working on collecting the metrics with Prometheus in our kubevirtci clusters, and then we were discussing how to get the metrics out of the kubevirtci clusters into the central cluster metrics, so that they can be accessed for dashboards and so on. Federico is also here, and I was just wondering how far away we are from that; it actually shouldn't be too hard to deploy the operators, the Prometheus operators, and provide a federation setup.
E
Yeah, no progress so far on this. I think we should start by modifying, or, I'm not completely sure if gocli has the proper flags in place to provide access to this Prometheus operator running inside the kubevirtci cluster. But yeah, other than that.
D
And what do you think about then creating a public route where people can basically just scrape the collected metrics for the performance runs, so that it's easy to visualize your own stuff and do the development on dashboards? Does that sound reasonable to you?
E
Absolutely, yeah. Having separate dashboards for this makes total sense, with maybe even a separate Grafana instance, or we can use the common grafana.ci.kubevirt.io instance.
D
As I understand, what we agreed is that you and your team are doing the kubevirtci changes so that Prometheus in there is collecting the data, and Federico would then, and I'm happy to help too, ensure that the kubevirtci clusters can be properly scraped by the central Prometheus, so that we can visualize the complete collected data properly. Is that right?
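For context, the federation setup discussed here is typically a scrape job on the central Prometheus that pulls from each cluster's `/federate` endpoint. A minimal sketch, with a hypothetical target address standing in for a kubevirtci cluster's Prometheus:

```yaml
# Sketch of a federation scrape job on the central Prometheus.
# The target address below is a hypothetical placeholder.
scrape_configs:
  - job_name: 'federate-kubevirtci'
    scrape_interval: 60s
    honor_labels: true           # keep the original job/instance labels
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'          # pull every series the CI Prometheus holds
    static_configs:
      - targets:
          - 'prometheus.ci.example.org:9090'
```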
F
Yeah, it's just a matter of, you know, where is the border? What I meant is that I'm not sure you want the user to actually control how you collect the data. You only want to allow the possibility to visualize it.
D
At the end, the purpose is definitely to have a centralized dashboard. But it's more like, you know, when you're developing anything; I was just thinking about giving public access, or limited but generous access, to some people who are working on that.
G
Hey Roman, can you tag me on the issues? I kind of want to follow along, because I think I could learn a little bit about how to contribute some of this stuff to CI as we go through more things from sig-scale. I expect there'll be more things, and I'm not very familiar with it, so I think I'll probably learn a few things if I follow along on the issue.
A
Okay, thank you, Roman. Daniel, you're next.
H
Hey, I just wanted to give a quick heads-up that we have integrated the Kubernetes 1.21 provider into the KubeVirt codebase, so that people can start their tests at least locally. We are still working on the lanes, so that we have the testing layers for Kubernetes 1.21 as well. That is still work in progress. I think we are a little bit late to the game for integrating 1.21, but it is what it is.
I
Hi, this is Vishal here. If you don't mind, may I introduce myself? It's my very first call, or I can wait till the end. (Oh, go ahead!) Okay, well, thank you very much for accepting me into this community. I work for IBM; my name is Vishal.
I
So I have submitted one request: if you all like, I can show the auto-import functionality, but that's on my OpenShift cluster, not a custom-built upstream project. As we all know, under the hood it is KubeVirt, so let me know if you all want to see it, and I'm happy to share. This auto-import functionality was really cool: I was able to bring my entire virtualized application from the source environment to my cluster in 23 minutes with a 10 gig image, and my application came up super cool on my cluster.
A
Oh, that's awesome to hear, Vishal. Welcome to the group; I hope you stick around and join us for a while. Did your demo have big lengths of time where we have to wait, or is it a nice, seamless, end-to-end demo?
A
Let's take a few minutes, and I'm happy to pass the screen share over to you, but let's skip the image import part so we're not waiting on it. Perfect, okay, I'm gonna stop sharing; go ahead and share.
I
This is where my application is installed now. From here, I actually did that last night. For auto-import, we have to actually shut down this VM on the VMware side, an easy click here really: you go to Actions and just power off your VM. Once that's powered off, you are back on your OpenShift cluster. So what I did for testing purposes: I created my namespace, and let me actually create a namespace here and show you what exactly I did, although I'll not import it; say, "test".
Next, it's really that simple: just select your options with auto-import. Here you can change from bridge to masquerade if you like; bridge doesn't allow you to change the MAC address here, by the way, which was an interesting observation. I wanted to remove that, so I did it in the YAML file later on. Once that VM came out, it then shows the disk options.
So I want the ReadWriteMany option, because I might later on choose to live migrate within my cluster. Good, it's taken the ReadWriteMany option. Next, for now I don't need this, so I'm going to use the same credentials which I have on the source side, and I'm fine with all these details here. Import. It won't import, because the VM is actually running there, right? So I have to power off this here: power off, okay, so now it will, okay.
I got this import error because I forgot to power down that VM, so I'll have to actually do that again, but this was the process I followed. I'm sorry, I forgot to shut down this VM earlier; that's why I got the import error. But when I did this yesterday in another namespace, that's exactly what I did, and in 22 minutes my VM came up here, and I didn't make many changes apart from just editing the MAC address and rebooting that VM.
A
That's fantastic! It's really nice to see such a seamless workflow.
I
Still, previously it took a few hours and you had to wait, and if any connection was interrupted, you had to start your upload again, losing a lot of hours. Here, all that was done literally in 22 minutes: 100 percent import done, one extra minute to create the service and route, and I was up and running.

A
That's awesome.
I
Then the backend application was using DICOM services for imaging purposes, which also use PostgreSQL and LDAP. Of these six components, I moved three of them as containers and three of them as part of the VM running on Windows 2016, including that frontend .NET application. I brought all those six pieces together in the same namespace, and I still have that application up and running on my cluster. But all that I did manually, I think.
A
That's amazing. If you would like to blog about that, I would love to have that content. There's great interest in the Windows world in consolidating into one API, into Kubernetes, and running hybrid clouds.
A
Okay, Ezra, you're next.
F
Yeah, this is just a simple point. Once again, if it's covered, or if people do not think it's important, we can skip this. But for the last few months already, at least personally, I got the impression that it would probably be helpful if we could have, for developers, a few paragraphs that give, you know, the wisdom of what we expect of a review process on a PR, because it's kind of new for people coming to the project.
I saw completely different processes: the reaction, the length of the review, the depth, the type of comments, and so on. Of course, there is some individuality in this process, but I think some high-level guidelines would be good.
At least this is my impression. If you already have that, okay, I withdraw my request, but I think it would be good. I bring this up just because I know this is a very delicate issue; a lot of people say, look, the free spirit, everyone can do whatever they like, and so on. So this is why I'm bringing it up: to see if anyone at all thinks we need such a thing.
A
That's a big topic. Who do we have that's senior?
C
I don't think we already have any sort of canonical, or canonized, rule set about what constitutes a good review. I mean, I guess it's so subjective that it's a hard thing; I don't know where to begin, because it does depend on whether it's small, medium, or large in terms of the content. But it also has a lot to do with: is it an architecture change? Is there something profoundly different about the new rules, or did we change something? Are we introducing something new?
I don't know if there's a cookie-cutter set of rules overall. I mean, there could be guidelines in terms of what needs to be done. For instance, you know, it's good to make multiple passes over a PR: one pass just to get the general gist, then reread it to really understand it, and then dig into it. To do it right, it does take multiple passes, kind of like proofreading a paper of your own authorship.
A
Yeah, I'm the same way. I feel like some pull requests are created very well, in that they give guidance on exactly what's changed and how to test the changes, and then there are some large and extra-large pull requests that are just thrown over the fence, and you have to interpret what's happening. Of course, when the pull request is not created nicely, it takes a lot of time to test. And if you find one problem, then it's going to make you a lot more critical further along in a diff.
A
How about I take this one? As a community organizer, I will reach out to CNCF and see if they have any material.

F
Yeah, excellent, absolutely.
F
By the way, to go back to the last comment: I don't have statistics, but at least in the area that I was working on, many times the actual reviewer is not the person that was assigned by the system. It's either a volunteer or probably someone approached directly, which means that, you know, something is kind of not working very well with the assignment.
D
Yeah, in order to improve that, what we would have to do is restructure the code to reflect ownership, on the one hand. So we work with OWNERS files, but we basically just have a top-level OWNERS file; that's it. The system then just tries to assign people who have fewer reviews open to something out of that list.
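For reference, a top-level OWNERS file in the Prow convention is just a short YAML list from which the automation picks reviewers and approvers; a sketch with hypothetical usernames:

```yaml
# OWNERS (repository root) -- Prow convention; usernames are placeholders.
reviewers:
  - alice
  - bob
  - carol
approvers:
  - alice
  - bob
```

Finer-grained ownership comes from adding OWNERS files deeper in the tree, which is what the restructuring mentioned above would enable.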
D
Yeah, another issue that we've been experiencing for some time is that most features, any feature which you add, basically cut across all components most of the time; it's very seldom otherwise.
A
Okay, we're done with the open floor, I'm pretty sure. Just a reminder: tomorrow, same time, we have the performance and scale SIG meeting; Ryan Kelsey will be running that meeting. See our community calendar, which probably doesn't have the event yet, because I'm still working on moving that calendar from Red Hat land to CNCF land. Ryan, if you could send a reminder to the email list; I think you're here.
A
You know, I'll be reminding people about that meeting for a couple weeks, since it's brand new.
A
And then, let's see, let me just give a quick note about events. Red Hat Summit was a humongous success; there was tons of activity at the community KubeVirt booth.
I think we had one question, and it was just a simple one, but I hope everybody who went had a good time. I had a good time, and the keynote speeches were really awesome.
Stu and I met last week regarding All Things Open, and we got our call-for-papers form submitted, and now we get to wait and twiddle our thumbs for two weeks while they make a decision on whether or not they accept us.
Hopefully they do, because we all bought Raspberry Pi 4Bs for this demo.
Participants are going to have to purchase their own Pi 4B with eight gigs of memory. It's not going to be cheap; it's going to be around 100 USD. And then Stu will be orchestrating a demo, and probably doing some bitcoin mining on your time.
But we're both really excited about this demo, so hopefully you guys want to participate. Stu, what do you think about another meeting, like next week or so, to start getting things going?
C
Yeah, we can do that. I think we're in kind of a dead time where we have to wait for All Things Open to actually, you know, approve the talk, but of course we don't want to wait too long, because there's some ground to cover; there are some things to discover here. Yeah, definitely.
A
And I think, even if we don't get accepted, it would be really awesome if we did it anyway and produced a video or a blog post.
Yeah, our big content is getting thin and long in the tooth, so we need to get some fresh stuff published.
L
David, do you want to give a heads-up about this one? Or I can, I guess. So: there is an issue that prevents live migrations, and the issue could potentially be solved by bumping the libvirt version. Comment: is anybody aware of this one specifically? Are we still seeing it? Have we figured out a solution for it?
Okay, I don't know, so let's leave it for now.
M
That had been an issue before, but I thought it was fixed. I can look into this bug.
L
Okay, that's awesome, thanks.
P
Yeah, I have replied to him, because I think he was trying to configure it in the incorrect way.
L
Okay, so is it a bug or an RFE? It includes some selector. Is anybody familiar with this API, not only migrating but migrating to a specific node? That's what it is: a VM with a node selector.
P
Yeah, that one, I was talking about something else. The real issue, I think, is that we don't have proper documentation on the subject of our infra and workloads placement.
L
Shall we turn it into an RFE, or do we want to just leave it as is?
L
Does anybody speak Mandarin on the call?
A
I got an interpretation on that.
A
There is, but I think we lost them. They were sending us messages around December, and then they stopped.
So I don't know if something political happened, or if that meeting dissolved, or what; we were never able to reach out to them.
K
Just ask if they can translate it. Peter, I think it's your keyboard.
A
Yeah, I created this issue because I onboarded three people, and miraculously, right at the same time, quay.io went down, and so we twiddled our thumbs all day. We weren't able to do anything with KubeVirt, because all the images come from quay.io.
L
But I mean, if we do so, we would need some, I don't know, mirroring or whatever to balance between those, right? Because if Quay goes down, we would need to switch all our docs to something else for a while.
H
I mean, maybe we could also create two manifests with two different backend repos, right? Some Docker manifests and some Quay manifests. But this would also be quite a bit of work, and I'm not sure; keeping all this in sync would be even more of a hassle, right?
L
Yes, yeah. What you suggest sounds easiest to do, but maybe we should invest energy into sending, like, angry messages to quay.io, to make sure.
A
It's not going to happen. They've had multiple major outages, and it's only a matter of time before there's a data loss.
H
Yeah, but on the other hand, you have the Docker rate limit, right? So this is also not ideal.
A
Yeah, Docker Hub is going to charge on network throughput, so everybody's abandoning that registry.
H
Yeah, that was the primary reason why we switched over from Docker to Quay.
A
Well, a lot of people are using the GitHub registry.
If we do that, though, we have some flexibility with the CNCF; we may be able to get under their account.
Yeah, they give us all sorts of neat things. They actually pay for the Zoom conference, so if a normal person was running a Zoom conference like this, they would get charged.
Up until about six months ago, we were using Red Hat's account, and then we transitioned over to the CNCF account.
H
I was aware that there are a couple of resources provided by CNCF. For example, the FOSSA check comes from CNCF, and there are a couple of other things, but yeah, there's a GitHub repository.
A
It's
not
moved,
it's
just
provided
an
additional
registry
so
that
we
don't
shut
customers
down.
I
had
three
community
members
that
I
was
on
boarding
on
that
monday.
That
quay
went
down
and
we
got
our
kubernetes
instances
online
and
time
to
install
cooper
and
we
can't
install
because
so
we.
R
Yeah, so we're talking about another registry to, you know, go to when things go wrong in Quay. Exactly. Okay, thanks.
A
Okay, guys, we're at eight a.m. Let's do a really fast goodbye. Have a good week, and we'll see you next week.