From YouTube: Office Hours: Q&A with KubeVirt maintainers
Description
Our final session is an opportunity for you to ask all your KubeVirt questions, whether they're about the project or about using KubeVirt in production. Maintainers and experts will be on hand.
Panelists:
- David Vossel, Senior Principal Software Engineer, Red Hat
- Adam Litke, Engineering Manager, Red Hat
- Petr Horacek, Engineering Manager, Red Hat
A
This session is meant for attendees to ask any questions that you might have. You can use the chat here, or, even better, let us know in the chat that you want to speak and we will enable your microphone, so you can ask questions directly. You can, of course, also raise questions through Slack and we will read them here. In any case, the main driver for this session is anyone attending who has questions that we can answer.

A
But while we wait for those questions to come in, I'll hand over to Petr, Adam and David. The first question is for each of you: can you give us a quick summary of your specific area, and maybe some recent highlights, let's say over the last year to put a frame around it, of what's new in your respective areas that you would like to call out? I'll save my other question for after this one. Just a recap of the recent KubeVirt improvements in your area.
B
So I've been talking a lot; I've already introduced myself in previous presentations. I'm David Vossel, one of the core contributors to the KubeVirt repo, so I primarily focus on the virtualization side. A lot of what's floating around in my mind right now has to do with moving towards what we need for KubeVirt version 1.0, and one of the big topics there is that I want to figure out ways to reduce the complexity of our virtual machine API.

B
So I've been kind of pondering that problem for the past few weeks, and I've come up with a flavor concept.
D
Yeah, sure, I can go ahead. So I'm Adam Litke and I'm focused mostly on the KubeVirt storage area, with a lot of attention on CDI and how it interfaces with KubeVirt. Some of the big things we've worked on, which we've seen some presentations about during the summit, are hot plug and VM snapshots: trying to think about moving beyond the moment when you first create a virtual machine and import the data, and looking into how storage comes into play with managing it going forward.

D
Some other things we're looking at: one of the biggest things that we hear about is complexity, or a bit of friction, with trying to work really well on all kinds of storage, and users running into problems where their VMs don't import properly because they're not exactly sure which PVC parameters to set for their VM, things like that. So we're working on making all of that a little bit more seamless and automatic.

D
And the previous presentation was really interesting to us because we're looking to make cross-namespace cloning with CSI work really well, so we're excited to work with the community to get that delivered.
E
And for the last year... you got me a bit unprepared here, but I'll try to figure out something. For kubevirt/kubevirt, our main focus was on IPv6 and dual-stack support, which is currently being implemented and stabilized in Kubernetes, and we wanted to follow that and extend it to users of our VMs as well. Another huge effort that we invested in was support for SR-IOV; we had been working on that for quite some time, but we finally made it stable, useful and covered by CI this year.

E
I also work on other components that are around KubeVirt and bring advanced networking features to Kubernetes and to KubeVirt. It spans all the way from whole-node network configuration using kubernetes-nmstate, which you might have heard of, to deployment of Multus and CNI plugins, management of MAC addresses, and all this stuff. So that should cover it.
A
Thanks, thanks. I'm sorry to put you on the spot; I should have prepared you better. Actually, following up on what you mentioned about the previous session, I think it's Alan, who is also here as an attendee. Yes, it is; it's kind of a follow-up to the day-one session, but it is relevant to this one. His question was: when do we expect 1.0 to be released, or is there any timeline?

A
Maybe this was not clear, so I think it's worth qualifying.
B
Nope, we haven't set a date, so "when it's done" is the best answer, I think. We are certainly making progress quickly, as I outlined in the presentation, just showing how much we've gotten done of the list that we've defined, and really the last big items are... the one that I'm most intimidated by, I will say, is removing root from our VMI pod, and if we consider...
B
I want to follow up one thing with that real quick: version 1 isn't magical, exactly. You know, we just saw in our previous presentation that people are using KubeVirt in production today, and that's not the only person using it in production; we've seen other presentations today and yesterday. There's not necessarily a reason to have to wait on KubeVirt reaching version 1 before adopting it. I want to make sure that's clear, and there's commitment to our APIs already.
A
Okay, and that actually partially explains the question that was just raised, but maybe we have to emphasize it. George in the chat was asking: are we waiting for 1.0 before looking at moving to the CNCF incubating stage? Right now KubeVirt is a CNCF sandbox project, and the next step would be incubating. So is version 1.0 related to that milestone in any way, or not? I guess you partially answered, but just to be clear.
B
I don't think that is being tracked as part of it; I think these are independent things. I have not been a part of the discussion on transitioning to the incubation milestone, so I don't think I'm going to be able to accurately answer that.
A
On version 1.0 as a name or a label: as you mentioned, the API is stable, etc. I think the major focus of the project towards the next stage is adoption. I mean, there is adoption, as we have seen in several of the sessions during this summit, but it's about documenting all of this, and some of the milestones towards v1 are actually related to that.
A
I see some entries in the chat, and actually I would encourage people to jump in. Joshy has some entries, and Alan also notes that the CNCF website has KubeVirt as incubating. Does it?
F
Yeah, this is Marcus. I just had something I wanted to mention. I ran into something relatively recently that I thought would be good to share, and honestly it dovetails. I did a presentation about six months ago at KubeCon about scaling, and this would have been great content for that.

F
Unfortunately it kind of happened out of order, but we found, and this may be documented somewhere and I just missed it, that when you start talking about thousands of Kubernetes nodes, the virt-handler DaemonSet needs some special handling. I think by default every virt-handler will sync on, like, a five-minute interval, which means that it basically fetches the whole state of everything it's subscribed to every five minutes.

F
So if you think about 300 seconds and thousands of nodes, at any given moment you could have 10 or 20 virt-handlers pulling down, you know, a giant state of the system, syncing up their controllers. Fortunately, the virt-handler actually has configuration built into it where you can increase the sync interval, so you may want to tweak that sync interval so that they're not checking in and syncing up as often if you have a large number of systems. The other thing is that that sync can take some time.
F
It can take, you know, 30 seconds if your environment is really large, or busy, or whatever. And what we found was that these virt-handlers were coming up, they were syncing, and they might take 36 seconds to sync, and I think it was the liveness probe that was failing, so it would restart the virt-handler, and it became this storm of restarting virt-handlers. Honestly, I think it was triggered because we were trying to roll out the virt-handlers faster: we changed the rolling strategy to allow multiple handlers to be down at the same time.

F
Yeah, there are a couple of things there where you may want to extend the liveness probe, so that the virt-handler has time to do its initial sync and come up to speed, and also maybe spread out those sync intervals. And there may be other things we can do in the code to improve that, or maybe minimize the amount of information that's synced. But yeah, I just thought that would be useful as people scale their systems.
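For readers following along, the probe-side tweak Marcus describes can be sketched with client-go roughly as below. The namespace ("kubevirt"), container name and timing values are assumptions, virt-operator may reconcile manual edits to this DaemonSet away, and the sync-interval knob itself is release-specific, so treat this as a sketch of the idea only, not a recommended procedure.

```go
// Illustrative only: relaxing the virt-handler liveness probe with client-go,
// along the lines discussed above. All names and values are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Give virt-handler more time to finish its initial sync before the
	// liveness probe can restart it (example values, not recommendations).
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"virt-handler",` +
		`"livenessProbe":{"initialDelaySeconds":120,"timeoutSeconds":60}}]}}}}`)

	_, err = client.AppsV1().DaemonSets("kubevirt").Patch(
		context.TODO(), "virt-handler", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched virt-handler liveness probe")
}
```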
B
Yeah, thanks. One of the things we tried to do in virt-handler was ensure that we only pull down the state necessary to process what's running on that node. So the number of objects that it should be pulling and syncing from the API server should only pertain to that specific node, which should be a limited subset. So if you're running a thousand nodes, we're still hopefully only pulling what's necessary for each node. I'm curious whether we have done something wrong, whether we've messed something up, if you're having this problem, because certainly that was the thing we designed to avoid, I guess is the best way to describe it.
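As a side note for readers, the node-scoped watch David describes is, in client-go terms, an informer whose list/watch is filtered to a single node. The sketch below uses plain pods and a spec.nodeName field selector purely as an illustration; it is not virt-handler's actual informer code, and KubeVirt's own objects are selected by their own mechanisms.

```go
// Illustrative only: a node-scoped informer with client-go, mirroring the idea
// that an agent on a node should watch only the objects bound to that node.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "node-01" // the node this agent runs on (assumed)

	// Scope the list/watch to this node only, so a thousand-node cluster does
	// not mean a thousand agents each pulling the full cluster state.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client,
		5*time.Minute, // resync period, comparable to the interval discussed above
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = fields.OneTermEqualSelector("spec.nodeName", nodeName).String()
		}),
	)

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added on this node:", obj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // run until killed
}
```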
F
And it could be that there's a bug, or maybe it's something in our environment, but we did find that the virt-handlers in the smaller environments we were testing in were fine. We did our qualification and everything through several different clusters of maybe 100 nodes or so, just for testing, and then we went out to a larger system and suddenly we started seeing this. It could be something environmental, something not necessarily related to the node counts themselves; whatever the virt-handler was pulling down, it was slower than usual, so the virt-handler was taking too long to come up, and things of that nature. So it's just something to be aware of, and definitely we can dig into it. We weren't in a position to roll out a patch fix immediately, so we were trying to find a workaround, and we found that tweaking those values helped a lot.
B
I'd like to track this. Would you mind creating a bug report on GitHub for us, and attaching me to the issue? I want to investigate what informers we are using in virt-handler and see if we accidentally leaked a globally scoped informer rather than one that's narrowly scoped to just the objects that are supposed to be running on that node.

B
We might have had a regression here, so we need to give this a little bit of attention. That's some great feedback. I suspect that you're one of the few people running, like, over a hundred nodes with KubeVirt right now, so yeah, you're going to see some things that maybe we don't.
F
And unfortunately we don't have giant development environments, or giant throwaway environments, which might be useful to virtualize or create some sort of synthetic large environment for this kind of testing. But yeah, I'll create an issue for that.
G
Yeah, Marcus, I'd definitely be interested in getting that too. I know from our end we've been looking at similar things; just from our testing, we look at 500 nodes for our scale, and just from some initial work we've seen some similar things as well. Yeah, I would definitely want to participate in the discussion there.
B
In the previous presentation, that's actually the kind of information I was trying to pull out of them: whether they had encountered anything like that. So it's great feedback that you all have, and I think the first step for us to improve the situation is to understand it. So yeah.
A
Okay, so Samia wants to speak, so let's learn something there as well. In the meanwhile, just a clarification on the topic of incubation, since there was apparently some mistake: to confirm, KubeVirt is still a sandbox project, and we will think about incubation as the next step. So, Samia, welcome. You wanted to bring up some topics.
C
Yeah, so on the topic that Marcus and Ryan were discussing: I had an issue where I didn't throttle my API requests, and suddenly there were thousands of pending VMs in my cluster. What that actually caused was virt-handler sending list and watch calls to my API server, and it really started to overwhelm the API server; the API server was not able to pull in the data from etcd within the default timeout, and it just sort of brought down my whole cluster.

C
I mean, API throttling is one aspect of it, but I think there could be improvements made in the frequency of the list and watch calls that virt-handler sends, because we were just running some edge-case experiments and this totally brought down the cluster, and it was painful to have to revive the API servers after this.
B
So you said that API limits or throttling were disabled; was that a solution for you, to help avoid this?
C
Yeah, so I believe in Kubernetes 1.20 there is API priority and fairness introduced, so that could really help. But even otherwise, let's say I have a really big cluster, say a thousand-node cluster, right? The virt-handler on every node is going to issue requests to the API server, and that could potentially really overwhelm the API server, just because of the frequency of the calls and the number of objects it's having to retrieve from the data store, right?

C
So I haven't looked at it deeply, but I definitely feel like there could be some optimizations around this in virt-handler.
B
Yeah, I think you're right. I think that's consistent with what Marcus was saying as well, and just poking around at the code a little bit, I have found a few instances where we're making API calls that perhaps we can avoid, and the frequency of those is unclear to me. Depending on how often, for example, a virtual machine loop gets triggered, it's unclear if we are making multiple calls that should only happen once, things like that. We need to give more attention to that. This is good feedback.
A
Great. So there is a comment here from Kevin in the chat suggesting, back to the topic of scaling, that it may be worth creating a mailing list, or a special SIG or working group, about scaling. Well, I don't know. David, thoughts? Just a reminder that there is a kubevirt-dev mailing list; that could be a starting point, I guess. But I don't know, do you guys think there's enough volume to create a specific topic for that? Let's bring it to the discussion.
D
No, I was actually preparing to discuss Alan's question, but my thought about Kevin's suggestion would be to suggest just taking up the conversation on the kubevirt-dev mailing list, and if the volume of the topic becomes such that it makes sense to break it off into a different forum, then it would be pretty easy to do that.
B
Yeah, I would agree. There's a scaling point where we tip over and the noise just gets too high for a single mailing list, the signal-to-noise ratio. So let's see when we get there; I'm not sure if we're there yet, but certainly it makes sense in the future, especially with the growth that we've seen recently in the project.
D
Yeah, so I could read the question out, if that's helpful. It says: "I have a large-scale, shared-everything relational database that requires the storage for the database to be shared to multiple VMs, one for each multiplex node. Since I can't use a PVC (as the worker node, as I understand it, will not have local storage to support the disk.img file), how should I share read/write storage to the virtual machine?"
D
So I guess the follow-up question I would have there is: I assume that the fundamental storage we have available in the system is locally attached to the individual Kubernetes nodes, which would have the limitation of not being ReadWriteMany shareable, obviously, of only being locally accessible. If that's the case, I think you can't get away from finding a way to share the storage, for example by using something that consumes the local storage and presents it as shared storage, something like Ceph. That may not be what you're looking for; maybe it could be accomplished with NFS or something. Otherwise I wouldn't see how you could actually share that storage to other nodes.
B
If we want to share storage between two pods locally, the only option I can come up with in my mind is a host path on the node of some sort. I don't know any other way of doing that in a ReadWriteMany fashion for what's the equivalent of local storage. I could be wrong.
D
I mean, if you're okay having the multiple VMs that are accessing the storage all running on the same cluster node... but that seems to defeat the purpose; I'm assuming they want them to be able to spread throughout the cluster. So...
A
Let's bring Alan into the discussion, please, one second; that would be much faster. We're already over time, but it sounds like a very interesting topic and we will try to address it quickly. So, Alan, you should be able to enable your microphone now.
H
Hi, so yeah, we have SAP IQ; it's a shared-everything database, so it uses basically one main store, and that system storage is shared to each multiplex node, with each node being a separate VM in this world. On our physical servers, or on VMs, we would just use a SAN or something to share that out, either as a raw block device or as a filesystem that the DB files can be written down to, and then each multiplex node sees the same copy of that file. IQ handles the actual writing down into the individual blocks and the DB files, so it handles the communication.

H
So when I tried to do this with KubeVirt, I originally used a PVC, not realizing that the disk image, when it brings it up in the pod itself, has to be, I think, 90 percent of the total storage you need, so I ended up in a situation where I can't use a PVC to do that.
H
So what is my way of providing the storage up to the VM, so I can share it across? I have looked at using an NFS server, but a lot of our clients (we're an analytics tool for telecoms) want rid of NFS; they don't want to see NFS, they don't want to hear of it. So I had originally thought of using an NFS server, putting it in place and then sharing that to each VM, but that becomes something you mount in after the VM comes up; it's not ideal, and it's not as performant as I'd like. Ideally I'd like to keep it generic, because we have multiple customers. At the moment we're talking about a thousand deployments of our product across 300 customers.
H
We are going to have multiple deployments scaling anywhere from two terabytes to 60 terabytes at the moment, probably up to 100 terabytes of database size. So we would like to keep it as close to Kubernetes, and as close to a generic storage provider, as we can. We just expect a storage provider to come in across our customers, give us storage which has ReadWriteMany access, and then we share that up into the VM for usage and we will create the DB file on top of it. That's my problem statement at the moment; I'm trying to figure out how I can get around it. Yeah, virtiofs actually piqued my interest today and I must do some reading on it.
D
Sure, yeah. It just seems to me, you know, we can't really get around the general statement that you do need to have that shared storage in the environment, whatever that ends up being, and it looks like we've got a suggestion for CephFS in the chat. Also, I'm not really sure how virtiofs necessarily plays into it; to me that's just more about how you expose the storage into the VM.

D
I mean, yeah, that's how it appears in the VM, but in terms of something high performance, you just really need that shared storage first, and then whether you attach it as a VM disk or you attach it as virtiofs, I don't know if that would matter.
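As an aside, the generic ReadWriteMany storage Alan describes boils down to a standard RWX claim against whatever CSI provisioner a customer brings. A minimal client-go sketch follows; the namespace, size, storage class and the choice of raw block mode are all assumptions, and whether RWX and block mode are supported depends entirely on the provisioner.

```go
// Illustrative only: requesting ReadWriteMany storage from a CSI provisioner
// with client-go. Names and sizes are assumptions for the example.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	storageClass := "shared-rwx"              // assumed storage class name
	blockMode := corev1.PersistentVolumeBlock // raw block, one of the options discussed

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "iq-shared-store"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			VolumeMode:       &blockMode,
			StorageClassName: &storageClass,
			Resources: corev1.ResourceRequirements{ // VolumeResourceRequirements in newer client-go releases
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("2Ti"),
				},
			},
		},
	}

	created, err := client.CoreV1().PersistentVolumeClaims("demo").
		Create(context.TODO(), pvc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PVC", created.Name)
}
```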
H
Okay, yeah, that's sort of my problem: it's the solution that I need to come up with, something that allows me to do it. Realistically, I could look at providing something through whatever is providing the storage; the storage provider will have something exposed that I can mount separately into the VM, but I'd like to tie it to Kubernetes if I could, so it simplifies the solution and it's not a manual step, once the VM is up, to mount it. It all becomes part of the one flow.
D
Yeah, I would definitely be interested in following up with you in more detail, just to understand the details of the storage provider and other things like that, in a forum where we have more time to look at specifics.
A
There is already a thread on this on Slack; I encourage you to follow up there and take it from there, because we are already a bit over time. But we have more questions, if you don't mind and you have a few more minutes, Petr, Adam, David. So, Fan, jump in here and ask... sorry, where's the question? "I wanted to see if someone has seen an OOM happen on the Kubernetes API server."
B
I haven't seen that specifically... I've definitely seen an OOM occur on an API server, and on pretty much every component at some point. You know, increasing the memory requests and limits should help; if it doesn't, then there's potentially a leak in that component that would need to be figured out.

B
The other thing you could do, if you really need to throttle virt-handler, is to rate-limit virt-handler's ability to talk to the API server.
B
I really think rate limiting is something that we should all be doing in our clusters to protect our components from each other, so that bad actors making too many requests can't take things down, things like that. We have to be defensive, so I would be defensive in what you allow virt-handler to do with your API server.
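For reference, client-go supports this kind of client-side rate limiting directly; a minimal sketch follows, with placeholder QPS and burst numbers. How virt-handler itself exposes such settings depends on the KubeVirt release.

```go
// Illustrative only: client-side rate limiting with client-go. The QPS and
// burst values are placeholders; pick them based on your cluster's capacity.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Cap how fast this client may talk to the API server: at most 5 requests
	// per second sustained, with bursts of up to 10.
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(5, 10)
	// Equivalently: cfg.QPS = 5; cfg.Burst = 10

	client := kubernetes.NewForConfigOrDie(cfg)

	// Every call made through this client now goes through the rate limiter.
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```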
A
Okay, and I see more questions coming, actually in Slack as well, but I'm kind of worried about the time. So you guys tell me when you have to go, don't be shy; meanwhile, we'll keep reading. There is one here from Shang: do we have a plan to introduce some kind of metrics for lifecycle events in VM creation? For example, the time it takes waiting for a domain, or the time it takes to create a connection. I'm not sure if those metrics already exist or if we have any plan to add them.
B
Good question. We certainly have metrics; today we expose our metrics through a Prometheus endpoint in all of our components. That specific metric we're talking about, the time between, I guess, when a virtual machine pod starts and the domain getting created within that pod by virt-...

B
I don't know if we track that or not. I think we'd be interested in exposing anything that the community found useful there. Metrics come in all shapes and sizes, and people are interested in different aspects, whatever shapes their use case. So certainly talk to us about that. Cool.
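For context, a metric like the one Shang asks about would, on the exporter side, be an ordinary Prometheus histogram. The sketch below is purely illustrative; the metric name and label are made up for the example, not existing KubeVirt metrics.

```go
// Illustrative only: a Prometheus histogram for timing a lifecycle phase,
// e.g. from pod start to the libvirt domain being defined.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var domainCreationSeconds = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "example_vmi_domain_creation_seconds", // hypothetical name
		Help:    "Time from virt-launcher pod start to domain creation.",
		Buckets: prometheus.ExponentialBuckets(0.5, 2, 10), // 0.5s .. ~256s
	},
	[]string{"namespace"},
)

func main() {
	prometheus.MustRegister(domainCreationSeconds)

	// Wherever the lifecycle event is observed, record the elapsed time.
	podStarted := time.Now()
	// ... domain gets created ...
	domainCreationSeconds.WithLabelValues("default").
		Observe(time.Since(podStarted).Seconds())

	// Expose the metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```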
D
Yeah, so this would be basically just the standard libvirt behavior, which is that when you have multiple disks you need to specify the boot order if you want to select one of them in particular; there's no special logic about what's installed there. Now, I know that with virt-install and virt-manager they do some stuff underneath the covers to make it nice for you, but right now we don't have any of that extra logic built on top that I'm aware of.
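For readers, the per-disk selection David mentions is expressed through the disks' bootOrder field in the VMI spec. A minimal sketch with the KubeVirt Go types follows; the disk names are made up, the volumes and the rest of the spec are omitted for brevity, and older releases expose the same types under kubevirt.io/client-go/api/v1.

```go
// Illustrative only: selecting which disk a KubeVirt VM boots from by setting
// bootOrder on the disks.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

func main() {
	first, second := uint(1), uint(2)

	vmi := kubevirtv1.VirtualMachineInstance{
		ObjectMeta: metav1.ObjectMeta{Name: "boot-order-demo"},
		Spec: kubevirtv1.VirtualMachineInstanceSpec{
			Domain: kubevirtv1.DomainSpec{
				Devices: kubevirtv1.Devices{
					Disks: []kubevirtv1.Disk{
						{
							Name:      "installer-cdrom",
							BootOrder: &first, // try the installer media first
						},
						{
							Name:      "os-disk",
							BootOrder: &second, // fall back to the OS disk
						},
					},
				},
			},
		},
	}

	fmt.Printf("disk %q boots first\n", vmi.Spec.Domain.Devices.Disks[0].Name)
}
```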