From YouTube: Kubernetes SIG Apps 20190211
A
[inaudible] and we'll be hosting. So the only announcement that we have at the moment is: we've decided for KubeCon that we are going to do a joint session for both the intro and the deep dive. The thinking behind this is that we've done the split sessions before, and we wanted to get data on how well a joint session would run. There didn't seem to be too many objections to this, and all the chairs kind of agreed that we'd like to try it this time.
A
There are some issues with readiness, in terms of the KEP as it is right now, that probably need to be addressed. As in: right now it doesn't talk about, when the pods are running, do we respect the readiness of a sidecar? How does that fit into the lifecycle of a sidecar container and its interaction with the other containers?
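For reference, in today's API a "sidecar" is just another container in the pod, and its readinessProbe already feeds into pod readiness; the open question above is how the proposed sidecar lifecycle should interact with that. A minimal sketch (image names and the probe endpoint are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: example/app:1.0           # illustrative
    - name: proxy                      # the "sidecar": today, just another container
      image: envoyproxy/envoy:v1.9.0
      readinessProbe:                  # today this gates pod readiness like any
        httpGet:                       # other container's probe would
          path: /ready
          port: 9901
```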
E
Yep, that sounds reasonable. I think there are quite a few stages of how we could implement it as well; not everything needs to be done all at once, so to speak. It's kind of disconnected in terms of, like, shutdown ordering, and keeping the preStop hooks isolated from the signal side. Derek seemed fairly supportive when we were talking about how we would go about implementing this this week, in terms of getting, like, a POC branch going and iterating on that. I can link you to that.
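The "preStop hooks isolated from the signal side" remark refers to the existing termination lifecycle, where a container's preStop hook runs before the TERM signal is sent; sidecar-aware shutdown ordering would be layered on separately from that. A minimal example of the existing hook (a pod-spec fragment; image and sleep duration are illustrative):

```yaml
# fragment of a pod spec
containers:
  - name: proxy
    image: envoyproxy/envoy:v1.9.0
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is delivered, giving in-flight
          # traffic time to drain.
          command: ["sh", "-c", "sleep 10"]
```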
E
The KEP, we're just waiting for Dawn to kind of give us the go-ahead. We haven't managed to catch a SIG Node meeting in a while, and we would just like her to kind of visibly stamp her seal of approval before we start going.
A
I think the feeling, and that was Tim's as well, is that the sidecar approach covers the Job and DaemonSet use cases pretty well without having to add multiple levels or tiers. Things like "start my sidecars first, and wait for them to be ready before starting the containers that depend on them" make sense; going arbitrarily deep, the additional complexity probably outweighs the benefit. I kind of agree with that too.
F
So one thing: when I see "sidecar", I don't really know what's going on; is it about ordering, or something else? This would be, like, the first time we would have officially introduced this term and the kind of expectations that come with it, and I feel like different users may read it differently.
A
The proposal is to allow more than one pod to be down during the rolling update, right. So we have the mode in the StatefulSet spec: there is a podManagementPolicy that says Parallel, which basically means "I really don't care about ordering, I just want unique identities". That allows you to roll things over in terms of starting the StatefulSet up fast or tearing it down fast, but we still provide the ordering guarantees when we go to update the StatefulSet: we basically go in a straight line, one pod at a time.
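A minimal sketch of the existing knob being referenced: podManagementPolicy: Parallel relaxes start and teardown ordering, but, as noted above, rolling updates still proceed one pod at a time (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 5
  # Relaxes scale-up/scale-down ordering; pods still get stable,
  # unique identities (web-0 .. web-4). Updates remain one-by-one.
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15
```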
A
This would allow you to take down more than one pod simultaneously, and the thought behind it is that it allows you to do the same thing that you do with a DaemonSet or a Deployment in terms of trading disruption for velocity, right: "I'm willing to have a greater amount of disruption in my StatefulSet to improve the velocity of my rollouts." I think that there is definitely a real use case for it.
A
In terms of, you know, people do have large StatefulSets where the restart times and container image pull times make it very difficult to do a rollout in a timely manner. So providing the ability to surge, or to tolerate more unavailable pods in the StatefulSet, would probably allow you to go faster than you could previously; a lot faster, if you can tolerate that amount of disruption.
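A hypothetical shape for the knob under discussion; this was only a proposal at the time, so the field name and placement here are assumptions, mirroring the existing Deployment and DaemonSet rolling-update options:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  replicas: 5
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Proposed: allow up to 2 pods to be unavailable at once
      # during an update, trading disruption for rollout velocity.
      maxUnavailable: 2
  # (selector/template omitted for brevity)
```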
B
I think, my thought is, something like etcd forces you to go one by one, and they do it to maintain quorum, right? So if you really do need more than one down, you would have to have enough replicas to keep quorum anyway. So this is commensurate, right: if you start rolling more than one at a time, you run into quorum issues if that's how you use your StatefulSets, so this would be a really easy way to get it wrong, right?
A
Agreed. So if you restricted it, I guess it's kind of a conceptual question: what do we want StatefulSet to be, from that perspective, right? Do we want to say that it really is for distributed systems where there is a relatively low tolerance for disruption, and that we expect that when you're doing a rolling update you definitely want one-by-one semantics?
A
Then you wouldn't be able to be more disruptive and trade for velocity. Leaving quorum aside, there are other use cases where you're not going to lose quorum, where you're not going to lose consensus: where you're using StatefulSet not for consensus mechanics but primarily just to make use of the stable network identity and the storage provisioning. That's actually a large portion of use cases; it's non-trivial. So if we want to support it, we might want to think about it.
A
In addition to the KEP, there are at least two or three issues open in the community that point to it, and based on where it's coming from, I'm fairly sure this is something that users need for production systems, based on the prior requests they've had for the workloads APIs. Again, I haven't seen somebody saying it's just a nice-to-have, or something where we want to polish off the API for consistency.
A
We considered that before; with v1 we said it's more important to have the semantics of ordered rollouts than to have consistency and potentially introduce dangerous semantics. But now we're in v1. If we added this, it would be sort of an opt-in: the field would be there by default, but the default behavior would still be what it is today, and you would have to set additional fields in order to get the new behavior.
G
So KUDO is an operator that we're building that's specifically designed to help provide operations to operators; that might be a cheesy way to say it, but we want to try to capture the actions that are required for managing applications in production as part of the deployment of the applications themselves, and be able to embed those best practices in the operator's CRD definition.
G
So KUDO is the operator that handles these specs, which are provided as declarative CRDs; we'll go through a couple of examples of those. From the SRE book: you really want to have, this is like a page for DevOps, right, your developers in lock-step with operations. But at some point that doesn't scale, when you're trying to release these applications to larger and larger customer bases. So the more you can provide in the binary, quote-unquote, for release, the better the application operations. And there are other use cases.
G
People want to be able to execute jobs on demand and have those jobs be the same as the cron job, so being able to provide those as one unit and execute them on demand. Canary or blue/green deployments defined by the application developers, who know the right way to go about coordinating those updates. And then being able to do those same sorts of updates on ConfigMap changes, so you can put a process in place to test out config changes. Those are all issues.
G
So when would someone want to use this? When kubectl apply isn't quite enough to manage your application. If you're just applying the same YAML on updates, you can probably continue to do that and not worry about it, and just manage it with a GitOps-type flow. But say you've got specific data-intensive operations that you want to handle: a lot of the applications where we see value here interact with external data sources, and you want to be able to back up and restore.
G
So the vision is that there are going to be CRDs that are defined in the KUDO spec for your application. You've got a high-level Framework object, and then you've got particular implementations of that framework in versions. Together, those are going to dynamically create a CRD in your cluster with a particular version. So you'd have a ZooKeeper framework, you'd have a couple of versions, and they need to be able to get particular API versions of that CRD object dynamically generated; then you can instantiate custom resources of that type.
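A rough sketch of the two objects being described, assuming the kudo.k8s.io/v1alpha1 API group KUDO used around this time; the field names are illustrative, not authoritative:

```yaml
apiVersion: kudo.k8s.io/v1alpha1
kind: Framework
metadata:
  name: zookeeper          # the high-level framework object
---
apiVersion: kudo.k8s.io/v1alpha1
kind: FrameworkVersion
metadata:
  name: zookeeper-3.4
spec:
  framework:
    name: zookeeper        # ties this implementation to the framework
  version: "3.4.10"
```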
G
Eventually; it's not quite there yet. So right now we have a wrapper, a linker object called an Instance CRD, that kind of ties the creation of an application instance to a FrameworkVersion. We'd like to consolidate that and make specific CRDs instead of having this indirection through the FrameworkVersion spec. So this idea of plans is really at the core of what KUDO wants to do.
G
So in this particular example, you do a deploy, and you wait for that deployment, whatever it is, to get healthy before you run a task called init, and you can go and define that later. You can define separate backup and restore strategies, so that you can embed how to do those processes inside your FrameworkVersion. Tasks are, right now, just references to YAML; Kustomize patches are on the roadmap. (See the sketch below.)
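A sketch of the plan structure being walked through; the exact schema is assumed from the description (a serial deploy plan whose first phase must get healthy before the init task runs):

```yaml
# Illustrative fragment of a FrameworkVersion spec.
tasks:
  deploy:
    resources:
      - deployment.yaml    # tasks are plain YAML references today
  init:
    resources:
      - init-job.yaml
plans:
  deploy:
    strategy: serial       # phases run one after another
    phases:
      - name: app
        steps:
          - name: deploy
            tasks: [deploy]
      - name: init         # runs only once the app phase is healthy
        steps:
          - name: bootstrap
            tasks: [init]
  # backup and restore plans can be embedded the same way
```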
G
So these would be some examples of tasks that get defined. When you instantiate your instance, you're going to be setting these parameters. You define a parameter spec: default values; descriptions that could be used in some UI components; required, meaning whether it has to be provided on your instance or whether it can be set to a default or dynamically generated; and then a trigger: when I update this value in an instance, what plan should I run? And this lets you customize.
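A sketch of such a parameter spec, with field names assumed from the description above (the parameter names themselves are illustrative):

```yaml
# Illustrative parameter spec inside a FrameworkVersion.
parameters:
  - name: BACKUP_FILE
    description: "Object name to write backups to"  # usable by UIs
    default: backup.sql
    required: false
  - name: MYSQL_PASSWORD
    required: true      # must be provided on the Instance
    trigger: deploy     # which plan runs when this value changes
```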
G
Well, if I'm just scaling up my instance, I just roll a deploy out there. If I want to update the image tag or change some big value, I want to execute a different plan that does things a little more smoothly and with a few more checks along the way. And then you reference those, right now, with Mustache. We've had some requests for Go templating instead, since the power of that seems a little stronger, so that's been talked about. And then, under the hood, the operator just parses it.
G
So right now our Instance object is this linker: you've got a reference to the FrameworkVersion that it implements, rather than having that be the API version. You provide some parameters; in the future we want those embedded as part of the object. So you can see the definition of the FrameworkVersion it's specific to up there, and then you provide the parameters as part of the spec.
G
So this is going to create three objects, three custom resources. One is a Framework for MySQL; one is the version, so that's 5.7, and we added some of those plans in there for backup and restore, plus how to deploy MySQL with the service and all the other things that would be required; and then an Instance.
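The Instance linker object described above might look roughly like this; field names are assumed from the description, and the password value is a placeholder:

```yaml
apiVersion: kudo.k8s.io/v1alpha1
kind: Instance
metadata:
  name: mysql
spec:
  frameworkVersion:
    name: mysql-57           # the linker reference to the version
  parameters:
    MYSQL_PASSWORD: password # placeholder; changes can trigger plans
```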
G
So for the actual deployment, if you look in the repo, there's an initialization step that actually creates a database inside of the MySQL instance. We could have done that by embedding a startup script in the MySQL init folder; this way lets you reuse the common MySQL image and still provide that capability. So we didn't have to recompile anything specific to get that default schema in place. So it ran that job.
G
So we did the backup. Now, just as a sanity check that the data is still there... and I'm going to delete all the data and schema in our table, so there's nothing there anymore. So now we've done a backup and we want to do a restore from that backup, so we can start the restore plan. Again, that was provided as part of the definition of the FrameworkVersion.
G
Links to the GitHub; those are the people that are working on it, Toby and Jerry, I think they're both on the call. We're definitely looking for help, for different use cases, and for helping out with the different actions that people are looking for. We don't have canaries fully implemented yet, or an example of that, but we'd like to; there are one or two features that we need to add along the way to get those, but we're close on that.
G
Dependencies, I think, is going to be one where we'd like a lot of community feedback on the right way to reference another instance or CRD and have the connection string come out coherent: when I have a Kafka instance, how do I extract that broker string? You know, when I have a MySQL, how do I know what the connection URL should be? So, along those lines: generating connection strings from an instance or an application.
G
Actually creating new CRDs, that's the eventual goal. We want each FrameworkVersion, so one of those versions of a framework, to be a different API version of a common CRD that gets mapped to the Framework. So a Framework creates a CRD, and then the versions of it are defined by the FrameworkVersions.
A
So from an RBAC perspective, how would that work? Being able to create CRDs is a pretty heavy administrator privilege, because effectively you're modifying the set of resources the API machinery knows about, right? And CRDs are cluster-scoped, right, they're not namespaced, so anything you apply there, if you're using the namespace model of tenancy, propagates across the entire cluster. And the API machinery thinking is: that's what was designed, that's what we've got now.
G
But then it requires these admin capabilities to install. I think we want to go dynamic long-term, because that seems cleaner. But the permissions issue that you're bringing up is definitely one of the advantages of our current implementation, and we're not really sure of the overall trade space for everybody's use cases, what works and what people like.
G
But we're definitely interested in trying to figure out the right trade-off there, between the flexibility of having application operators deploy their own versions into their own namespaces to manage, versus having this broad capability at the cluster level that easily defines these framework versions. The latter gives you the ability to provide full operators to everyone on the cluster, and to hand those application-management pieces to the application operators as well, without having to provide support for each one of them.
A
The typical model right now is that the custom resource definition that describes the workload is installed by an administrator, and potentially the orchestration component, the controller itself, would be administrator-installed as well. Then RBAC is granted, to individual namespaces or cluster-wide, basically to whatever principals you want, to be able to create the custom resources that trigger the automation that turns the workload up. How is this differing? Are you turning up an individual controller per workload? (A sketch of that model follows.)
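A minimal sketch of that typical model, using a hypothetical mysql.example.com CRD group: the CRD and controller are admin-installed, and namespaced RBAC lets users create the custom resources that drive the automation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mysql-user
  namespace: team-a
rules:
  # Principals in team-a may manage the custom resources; the CRD
  # itself stays admin-installed and cluster-scoped.
  - apiGroups: ["mysql.example.com"]
    resources: ["mysqls"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mysql-user
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: mysql-user
  apiGroup: rbac.authorization.k8s.io
```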
G
It's more the model where you're going to have one controller that handles all of the CRDs of this type, so they're all going to be fed through the same state machine and watched by the single operator. And right now it's really clear, because there's only an Instance CRD, so that object is clearly owned by the one operator. But once we start doing dynamic CRDs, it's going to have to register for all of the new ones that come out along the way.
A
But there are kind of two ways that we could actually go about this. So right now, the only way you'd actually be able to consume a larger volume is by restarting the pod: you basically have to reconstruct the file system, for most implementations, in order to make use of a larger volume size. Online file system resizing isn't widely supported across most cloud providers, as far as I know, but from SIG Storage it seems that's coming. (See the sketch below for how expansion is expressed today.)
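For context, volume expansion itself is driven through the PersistentVolumeClaim: with a StorageClass that allows expansion, raising the claim's storage request asks the provider to resize the disk (a minimal sketch; class name, provisioner, and sizes are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true    # required for PVC expansion
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-db-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: resizable
  resources:
    requests:
      storage: 20Gi   # was 10Gi; raising this requests a resize
```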
A
The direction they're going is to support online file system resizing, which would mean that, effectively, from the perspective of a StatefulSet user, when you update the volume claims template we would have to update the persistent volume claims, which would eventually cause the persistent volumes to get updated. But with online file system resizing...
A
...what that would end up looking like is that it would work in place, without having to restart the pod, which is actually a very different thing from the lifecycle of any of the controllers that we have right now. The only thing that we really let you update in place is the image, and even that the workload controllers handle by recreating pods; but as far as the capabilities of the kubelet go, you can do an in-place image update.
A
You can't do in-place container resizing or any of that stuff; that doesn't work. So for the resources that we do provisioning for, we would end up with kind of a difference if we go that route: we'd have this one thing that we let you change online without actually restarting the pod, and it's not necessarily clear...
A
...that many applications would actually benefit from that approach anyway, or be able to make use of it. Like, pivoting: if you're doing something like MySQL or PostgreSQL, generally the amount of shared buffers that you use is a configuration parameter of the application that's independent of the underlying file system size, right? So you'd have to increase your MySQL or your Postgres shared buffers in order to make use of that increased memory or file system size. And, you know, databases tend to manage their own storage, maybe with log-structured B-trees, and not really utilize...
A
...the file system anyway. Like, the Linux kernel maintainers yell at database developers all the time: don't use the file system, write directly to a block device and implement your own page cache, because you have special needs and we're not going to modify the Linux page cache to support the workload use cases that you have. So I'm just kind of looking for people to give feedback: do we think we want to do restart-based resizing, which would be more consistent with kind of what we do for everything else?
A
Once the validation update goes into place, then we could roll out the actual logic as an alpha, where we implement the volume resize via updating the PVCs, and then we can implement on-restart semantics thereafter. So I mean, we're talking about something that would probably take three releases, at minimum, to get to, like, the beta state; or we can just wait for volume resizing for a year. So...
A
It is consistent. The storage folks seem very motivated to do online file system resizing, but I don't know when that's actually going to roll out. And if we do it, it does introduce sort of inconsistent semantics relative to all the other mutations that we do, and I'm not sure that it's desirable. Not one hundred percent clear.
B
Let's just discuss the point: why is it incompatible with what we do with the other controllers? Like, right now we say a request for storage means "at least, say, one gigabyte". If the online resizing makes it bigger, I don't care, right; it doesn't change the spec.
A
But it does, right? It changes the volume claims template, so we would update all the PVCs. So right now, like, step one: we could relax validation on the volume claims template. And step two, once the validation is relaxed: you can just implement the PVC resizing in place, by adjusting the persistent volume claims that are generated by the claims template, right, and then do nothing else. (The sketch below makes this concrete.)
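To make that concrete: the field below is immutable after creation today, so "step one" is relaxing that validation and "step two" is propagating an edited request down to the generated PVCs (sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  # (other fields omitted)
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: resizable
        resources:
          requests:
            storage: 20Gi  # was 10Gi; today validation rejects this edit
```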
B
You change this field and we update the persistent volume claims in a rolling-update fashion, right? Yeah, that's pretty simple. I'm not sure how the online resizing goes exactly, and why it would be incompatible: would you actually have to change the StatefulSet spec to achieve the online resizing, or would that just happen in the background, without restarting the pod?
A
It would just happen in the background, right. So if we start with "we're going to restart everywhere", and then you get online file system resizing, you would have to add maybe another option that says "do not restart the pods". That would be the concern if you go one way and then we pivot and say: okay, we're going to support online and in-place too.
A
Well, so it's not clear to me how online file system resizing would actually work, right. Would that be the default? Is it a global setting? Would you set it per workload, like, would your volume claim have "allow online resizing" on it? I'm not sure what the API would look like, or whether it's even at the storage-provider level; if the CSI implementation, for instance, supported multiple modes like that, you'd only do file system resizing for some volume sets and not for others.
A
Well, I don't think it'll change the StatefulSet v1 API, right? Like, I think ultimately whatever they do is going to have to go into PersistentVolumeClaim and PersistentVolume at the storage layer. But that volume claims template is basically a PVC embedded in our spec, so if they're changing that v1 API, it kind of de facto modifies the StatefulSet API, right? Yeah.
B
If we do a reconciliation run and something changed, right, and we see a persistent volume claim that doesn't match what we have, probably, I don't know, we wouldn't throw it out, because we can't recreate it to match the StatefulSet spec. So to me it depends: will the online resizing actually modify the volume claims, or could they just do something like "you have a file system that's 90% full, we will just make it bigger", without changing the persistent volume claim? In that case...
A
That's not the direction they're going; they're not going to try to do storage resizing in place based on disk pressure. Maybe one day someone will implement something like that, but that's not what they're trying to do right now. It's more along the lines of: right now you can actually request a larger volume, and I think it requires a container restart to consume it, but they can resize your disk in place. Well, "in place"...
A
...they can resize your disk for you, and that's valuable, right, because a lot of people start with an example, or start with something where they've provisioned far too little storage. And the story right now, for most cloud providers and most on-premise setups, is that, sure, you can go ahead and resize the volume manually under the hood, but it's not reflected in the Kubernetes API objects. So allowing the mutation to occur via the API objects, and the re-provisioning to occur...
A
...via CSI and the PVC controller, has generally been looked at, across the community, as something that's helpful for people that are using the workloads, right. So I mean, I'm fairly supportive of: we need to integrate well with this feature, to give our users a good experience when they're trying to resize their disk resources, in the same way we try to when they're trying to resize memory or CPU.
A
...but the first thing we would have to do against that one is lift the validation, right, and that would have to happen prior to rolling out any new feature that consumed it, to ensure consistency with a version skew of one for API servers in multi-API-server clusters. So that's pretty much step one.