From YouTube: SIG Architecture 06142018
Description
Volume snapshots discussion part 2
A
Okay, welcome everybody. Today is Thursday, June 14, 2018, and you are in the community SIG Architecture meeting. I am your co-host, Jaice Singer DuMars, and we're going to go through the agenda, which is available after the fact at community/sig-architecture; that's where you can pull it up. The first item on the agenda is to further continue the conversation around volume snapshots, so I'm going to go ahead and turn that over to... I'm not sure which of you is taking it. Okay, all right.
B
If I could give a little bit of context: two weeks ago we started talking about volume snapshots in SIG Architecture. The purpose of this was mostly to get approval to pull the snapshots design, which is currently an external controller with CRDs, into the core of Kubernetes. As a reminder, the purpose of this is not to do a deep-dive API review; that can be done offline.
C
The main goal for this project is to provide a standardized central API for the basic functions, including creating, listing, and deleting snapshots, and also, more importantly, restoring a snapshot to a volume; on top of that we can build up more high-level functionality using snapshots. So, just to give a quick refresh of what we talked about before: a snapshot and a volume have a very close relationship, as you can see from the picture on the right side.
C
For a volume in Kubernetes, we represent it with a PVC (PersistentVolumeClaim) and a PV (PersistentVolume), two API objects. For snapshots, you can create a snapshot from a volume and then you can restore the snapshot to a volume. So what we propose is using two API objects: one is called VolumeSnapshot, the other is called VolumeSnapshotData, to represent volume snapshots. You can see it's very similar to the PVC/PV model.
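The proposed pair, sketched as API objects (the group, version, and field names here follow the out-of-tree prototype and are illustrative only, not a finalized API):

```yaml
# Sketch of the proposed pair; names and fields approximate the
# out-of-tree prototype and were still subject to API review.
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot            # user-facing, namespaced (analogous to a PVC)
metadata:
  name: my-snapshot
  namespace: default
spec:
  persistentVolumeClaimName: data-pvc
---
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshotData        # cluster-scoped, backed by real storage (analogous to a PV)
metadata:
  name: my-snapshot-data
spec:
  volumeSnapshotRef:
    name: default/my-snapshot
  hostPath:                     # provider-specific source; hostPath shown as an example
    path: /tmp/snapshots/snap1.tar.gz
```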
C
This is the first slide. Okay, so, let me know if you have problems seeing it. So for snapshots there are a variety of use cases beyond traditional data protection. You can use them for backup and recovering data, and they're also used for things like replicating data, distribution, and migration. Also, we often see people who need to change their volume for different reasons, so they can just take a snapshot and recover the data from the snapshot.
C
So, for example, if we put the snapshot API out of tree while the in-tree PVC and PV represent volumes, users will get confused about how to use them together, and it will also definitely generate some fragmentation and portability issues. So it's hard to provide an out-of-tree API and the in-tree API together. Also, more importantly, there are some fundamental functions we want to provide, including in-place restore, which we talked about last time; although the API itself has not been finalized yet, the workflow is what we provided in the proposal.
C
Some of these cannot be built without in-tree API extensions, for example the pre/post preparation before taking snapshots. In order to keep data consistency, you have to prepare your file system, like freezing the file system or unmounting, and those kinds of functions have to be handled in kubelet, so an out-of-tree API won't be able to support this. We would like to provide this automation, and also hook functions for preparing the application or file system before taking snapshots.
F
The first ones never seemed to me like things that many types of out-of-tree extensions haven't solved, frankly, so that's not what I want to focus on today. The last one seemed unique, but I didn't understand it. Can you talk a little bit more about the last bullet? What is hard to do?
C
Right, so this applies both to in-place restore and to the pre/post hooks. Here is what I presented last time as a simple snapshot example workflow. What people need to do manually right now is, first, say you have a database application: you have to run some commands to, let's say, have the local database flush data and freeze the file system, then you can use some commands to create the snapshot, and after that you need to unlock, unfreeze the file system. So this is the manual way of taking a snapshot.
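The manual sequence just described, sketched in shell for a hypothetical MySQL workload with a volume mounted at /mnt/data on GCE. The flush command varies by database and the snapshot command by cloud, and this needs root and a real disk, so it is illustrative only:

```shell
# 1. Ask the application to flush its data (example: MySQL)
mysql -e "FLUSH TABLES WITH READ LOCK;"

# 2. Freeze the file system so no writes land mid-snapshot
fsfreeze --freeze /mnt/data

# 3. Snapshot the backing disk (cloud-specific; GCE shown, disk name hypothetical)
gcloud compute disks snapshot my-data-disk --snapshot-names=my-snap

# 4. Unfreeze the file system and release the application lock
fsfreeze --unfreeze /mnt/data
mysql -e "UNLOCK TABLES;"
```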
B
We haven't decided on what the details are, but the important point is that, in order to enable this, we kind of need kubelet to be aware that there is going to be a special step before a snapshot is taken and a special step after a snapshot is taken. We need to hook into that lifecycle.
G
You have two screens with Zoom, you can do it. So my comment here is that, if you go back to that slide where you had those five steps: I think one of the things that we've done a lot with Kubernetes is that we've broken things down into granular features, and then we've found that those features can be used in other ways, right. So in my mind, there are going to be situations where people are going to have different strategies for doing snapshots.
G
You know, bunches of components: doing a snapshot, depending on the cloud, and actually verifying it, is going to take a bunch of time, and as we look past the big three clouds there's going to be a whole host of ways that you're going to want to capture data within the volume, with different amounts of coordination. So this is not sort of a local type of thing; it's already exposed.
C
That makes sense, but I think from the user's point of view, to minimize the delay it would be better if possible. But definitely I think it's an interesting idea to provide, like, a separate operation to freeze the file system, so that other use cases besides snapshots can use this operation.
B
Ultimately, I think with the pre/post hooks, what we're looking for is some way to be able to signal to the application that's running: hey, we're about to take a snapshot, please prepare for that; do whatever you need to do in order to make that happen. If you're a database, that could mean doing all the flushing that you need to do. If we don't have a hook to do that, we could go ahead and take the snapshot without that happening, but we could take an inconsistent snapshot.
B
I think Jing took a look at that approach, and it looked like you guys were doing kubectl exec into the container to signal that a snapshot is going to happen. Is that correct? (Yeah, using SPDY for it, so not shelling out, but the same protocol, right.) So I think the problem with that approach, as we saw it, was that it was not necessarily applicable in all cases: you kind of have to be aware of what the application is that you're exec'ing into and what its capabilities are.
G
We have all sorts of contracts in Kubernetes right now that don't involve, you know, extending our core API objects. And again, especially for something that's still very early like this, where it's still being developed, I think codifying it into something that we're going to live with forever seems early.
G
So, yeah, again trying to think of more generic features that we can extract out of this: we already have this idea with probes, readiness and liveness probes, as a domain-specific way of communicating something to the application. Do we want to actually have a generic set of "here are different types of hooks"? We could end up looking at this as generic lifecycle hooks, with an extensible namespace for naming those hooks and then a way to trigger them, right.
G
Because I think we've seen this pattern: we're going to see an explosion of new ways that things outside of the workload want to start interacting with and hooking into the workload, and it's not going to end with just the volume management stuff. So maybe take a step back and say: okay, can we do volume snapshotting by breaking this down into a bunch of different features that have wider applicability? So, speaking of which...
A
My impression is that those aren't nearly as widely used as, in containers, just receiving SIGTERM, for example. They have a bunch of quirks and problems: you generally have to put extra stuff in your image for those to be useful, for example. So I don't think those have worked out super well, but if we had...
D
In the past we tried to provide different types of pre-start hooks. We had those requests, but they didn't solve all the problems, because basically we didn't understand how users were really using them; we may understand one or two use cases, but every time we look into it in earnest, new use cases come to us. So that's basically why we are keen to propose a new way to achieve this, and we don't yet have agreement on the design.
D
We need more use cases before we can provide a general mechanism that fixes all of those. Folks come to the SIG almost every month with a new hook they want to add. Most recently, somebody came up with a new hook proposal, but that hook is specifically for GPU attachment, so it's not general enough. That's why we try to gather all this information, yeah.
B
If we take a step back, I think a couple of other issues are the fact that if these API objects live out of tree and you have an external controller doing operations like an in-place restore, where we're basically swapping out the PV from underneath the PVC, it requires coordination with the in-tree PV/PVC controller. The in-tree PV/PVC controller doesn't know that there is an external controller trying to operate on these objects.
E
I had a question a while ago, can I ask it? I wanted to just understand what the thinking is around conformance. If we put this in tree, and we say these are the standard ways that you make snapshots, restore snapshots, and so on, is the expectation that every volume provider has to provide all of these functions in a consistent way, or are there different levels of conformance? Could I build a volume controller which, for example, doesn't support snapshots?
B
Like any other volume plug-in, you can define what the set of capabilities is for your volume plugin. At a very minimum we require mount; beyond that you could do provisioning, and snapshot would just be another optional feature. We already do resizing, for example, and that's in tree, so this would just be another optional capability that a volume plug-in can add on.
A
And if they set up the default storage class for that provider, then that can be kind of a generic, portable set of tests, and we need to figure out whether it belongs in a special profile or not in conformance. I think we're going to need some kind of cloud-provider profile, or some common sort of profile, that's not a hundred percent required for everybody, but so that people who implement that additional functionality have consistent behavior. Yeah, I mean.
H
Well, from a higher level, from an app perspective, policy-based data-integrity orchestration is something that's desirable in general, right? People want to be able to, say, use a cron job or some other intermediate orchestration in order to periodically snapshot their data, and some people are even advanced to the degree that they want to be able to bring those snapshots up and verify that the snapshots actually exist. If we do this in tree, we provide a primitive that allows for portable, policy-based orchestration across clouds.
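The cron-job style policy mentioned here could be sketched like this, assuming the out-of-tree snapshot CRD is installed in the cluster; the image, names, and the CRD group/version are from the external prototype and are illustrative only:

```yaml
# Illustrative only: a CronJob that periodically creates a snapshot object
# with kubectl; VolumeSnapshot here is the proposed/external CRD, and the
# PVC name and schedule are hypothetical.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-snapshot
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: snapshot
            image: bitnami/kubectl    # any image that contains kubectl
            command: ["sh", "-c"]
            args:
            - |
              cat <<EOF | kubectl create -f -
              apiVersion: volumesnapshot.external-storage.k8s.io/v1
              kind: VolumeSnapshot
              metadata:
                generateName: data-snap-
              spec:
                persistentVolumeClaimName: data-pvc
              EOF
```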
H
Moreover, from SIG Storage: I think pretty much all of the members of SIG Storage are in general supportive of implementing this feature, because the majority, aside from the big three clouds and the other two clouds which also implement snapshots, the on-prem storage providers, all implement this feature to some degree or another; well, the ones that participate in SIG Storage. Yeah.
G
But what I'm saying is that there are a lot of people running on-prem that don't have some sort of, you know, Hitachi or NetApp or anything like that, right? Those people aren't going to show up in SIG Storage; I think there's a sampling bias, in that the people who show up are the ones that care. (Of course, we do have a bunch of Lustre and stuff and a bunch of on-premise deployments.)
G
What
I'm
saying
is
that
there's
a
lot
of
folks
that
run
without
any
of
that
stuff.
Also,
in
the
end,
there's
the
big
question.
This
is
what
Justin's
bringing
up
like
of
what
do
you
do?
You
know,
after
that,
so
I,
you
know,
and
Gluster
and
stuff
are
both
right
had
also
just
you
know,
in
terms
of
you
know
the
vendors
that
are
showing
up
here
so
folks,
like
port
work,
there
are
other
saying
that
there
aren't
on-prem
vendors
right,
actually
work
here
on
this
stuff.
G
I think we're going to see that there's going to be a large amount of variability here. I think there's something instructive in the stuff that we did early on with Ingress. Ingress is not universally supported; there's a lot of variability there. The goal of Ingress was to have something that people could rely on.
G
Trying to over-generalize this, I think, will probably lead us in the wrong direction. We should aim to find as much commonality as possible here. I think all this stuff can be done out of tree; if somebody wants to have these lifecycles, all those patterns can be done using our extensibility mechanisms.
G
I don't think we necessarily need this stuff to be in tree to get there. And then, adding on top of that as we expand the idea of conformance: I could see at some point having a conformance profile that involves dynamic volume provisioning, that involves snapshots. That's a separable issue; predictability is separable from what's in tree and what's not in tree.
M
So there's kind of a practical thing here, right? If we're going to support this out of tree, then the tools that have to support it now have to go around and support it: you've got to build external tools for app operation that deal with all of this and know how to talk to all the varying cloud providers, right? It pushes some of the work into that app-automation tool. At the same time, a whole lot of storage vendors do support it.
G
I agree with you: build it, test it out, get feedback, iterate on it. All that stuff can be done out of tree, right? We're trying so hard across the architecture to limit the number of features that we build in, and to build extensibility points that are widely usable. Like, I would totally be supportive of generic hooks; I'd be supportive of "let's find ways to freeze file systems", right?
A
I just have a question before you respond, sure, which is: what do you mean by out-of-tree versus in-tree? Do you mean the kubernetes/kubernetes master branch by in-tree? (By out-of-tree I mean external CRD versus internal API.) Okay, so I disagree with that distinction. We are moving towards CRDs even for all new things, and unless it's required for performance reasons to be built in, we are going to be using CRDs.
M
That then makes the argument for all of these other things, like the application controller and all the other things that we're working on, right, that they all need to be in tree so that they're exposed. If we're going to tell some people to be out of tree, or out of the core, and it needs to be installed, we probably need to figure out what the line is for what's in versus out.
B
So I think the challenge is: if this were a completely isolated feature, we would have no issue having it out of tree. The problem is that the entire volume subsystem is already part of the core Kubernetes API; those controllers are shipped within Kubernetes, and this is the one piece that is not. So it becomes really challenging to integrate it all together when part of it is in tree and part of it is not.
M
I think that speaks to the user experience and to consistency. If part of a feature is in tree and part of it is out, that is difficult for people to navigate and understand: let's say storage is in here, why do I need to go elsewhere for just a piece of it, right? That's the cognitive figuring-stuff-out burden; it's not like finding storage versus not finding it. So I can understand that. Okay.
N
Daniel here. So there are sort of two questions here with in-tree versus out. One is where you put the controller code; the other is the API. Literally the only reason for adding this to the built-in API that nobody here has mentioned yet is that the built-in API is in a particular group version, and you want this to also be in that same group, right, because it's a coherent feature together.
A
Yeah, the other thing I was going to suggest is to just move all that code out of the kubernetes/kubernetes master branch, to make it easier to do these kinds of extensions and feature-gate them or whatever. Definitely the highest-touch part of this is the pod changes, and those we need to be very, very careful about. For the other things, if we can find a way to, say, as in the scheduler context:
A
For example, SIG Scheduling met earlier today, and they're looking at turning the scheduler code into a framework so people can build their own schedulers. What you would do is either turn off the default scheduler, or, a general pattern that I have an issue open to actually make more concrete, specify the controller that you actually want to take over a particular resource, right. So for the scheduler you can specify the scheduler name; for Deployments we have a hack for using your own rollout controller.
N
That's what it is. The last thing I wanted to say is, I agree with Joe that if you want to change the default behavior Kubernetes forces on a pod, whether by adding lifecycle hooks or some other, maybe not that exact thing, but by adding some extension point to the existing type, I would rather go that route than add a special thing just for this.
B
That's fair. I think the biggest argument that we have left is just consistency of the API: having snapshots separate, and having, for example, the PVC object be able to say "I want to create a new PVC from this snapshot", and then having that be a CRD and not being able to do that. And again, just to be clear, you could.
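For illustration, the kind of PVC-side restore being argued for might look like the following; the dataSource stanza follows the shape that was later adopted, but at the time of this discussion it was still a proposal, and the names are hypothetical:

```yaml
# Sketch of restoring a snapshot by creating a new PVC from it; the
# dataSource field was a proposal at the time of this meeting.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
```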
B
But to be clear, what we're asking for in tree is just the control logic, not the implementation of calling plugins; that would be external with CSI. So what we want in tree is just the common controller logic that implements the lifecycle. When it decides to do the actual snapshot, the logic to do that for any given storage system would be completely external, because of CSI.
H
My only thing was, basically, I was going to say pretty much what Daniel said: for API consistency it doesn't make sense to put it in another place. As for the point that you could move the whole thing out, whether that's realistic or not, yes, in theory you could do that. From a higher level, though, it feels like there's a missing part of the storage story, and not only with respect to snapshots. Well, it's not a thing everywhere, but for most of our users
H
it is a thing. It is part of CSI, and it is something that people are trying to use now and want to use now, and we don't have an API object that represents snapshots and allows users to interact with them in a reasonable way. The interaction between persistent volumes, persistent volume claims, and snapshots is so tightly coupled that I can't imagine having this object live outside of the same group; that's what I can't wrap my head around.
G
(You were next in line.) Yeah, I mean, just wanting to have this stuff line up in the same API group seems like a silly reason to move it in tree. We have places all over where we have something in one API group managing and orchestrating objects in another API group, right? ReplicaSet versus Pod is a great example. There are other places where we have these connections all the time.
G
So I'm not sure that carries any water with me. I think some of the question again is: does snapshot need to be a core feature for it to work? I think we can get a long way, and we have existence proofs, both in the prototypes that SIG Storage folks have done and in work that's been done outside in other projects, in terms of actually
G
being able to add these features. The fact that you're having a hard time getting people to install and test this stuff and move it forward, maybe to me that sends a signal that this shouldn't be part of the core, right? If the only reason we want to move it into core is that that's the only way we can get people to actually notice our feature, maybe the feature is not that useful, or at least not that widely; maybe the gestalt hasn't moved to the point where everybody realizes they need it.
G
Yet. That seems to be the wrong way to do it. If we get to the point where we say, like, every single time people install Kubernetes they have to install this other thing, then maybe we should move it inside of the core; that is an argument that carries more weight with me, right? Look at DNS: DNS started as this add-on, and we've gotten to the point where it's really kind of necessary. It's really part of Kubernetes, even though it's not officially part of Kubernetes.
H
I would like to point out, for people who are wondering whether this is useful: go and look at the backup instructions for popular workloads being run on Kubernetes right now, including stateful workloads, and look at what people are doing on the largest public clouds to back these things up. Now, I personally don't agree with this particular model; this is not how I would back up my databases in production. But this is how users are doing it.
H
These instructions come with the application, and on IaaS and cloud providers they're using volume snapshots: on AWS, GCP, and Azure, the backup instructions include snapshots as a primitive to use for popular workloads. So just the fact that we don't support that in Kubernetes, to me, is a gap.
F
You're saying, like: if this were in core, or shipped with Kubernetes, you would have StatefulSet be snapshot-aware and have a snapshotting feature in core. So let's say, okay, just say yes for hypothetical purposes. As SIG Architecture, would we allow a dependency from a core controller out to an optional thing? I don't know if you have any thoughts.
G
Wait, okay, but let's take that example. If you want to back up a StatefulSet, would we build that into the StatefulSet controller and add more features into something that's already pretty chunky? Or would we instead build another controller that actually pairs with the backup system, reads the state out of the StatefulSet, and coordinates it that way?
J
I think Joe raises a really good point; that's the reason I asked whether this is the right thing in the long run. You know, we could spend a lot of time debating whether there's a better abstraction in place of snapshots. I'd respond to Joe's concern that we'd be rushing into something we have to live with forever: having things in the same storage group, in the same API group, is not enough reason to do this in and of itself.
J
There
is
nothing
that
necessarily
prevents
us
from
doing
a
snapshot
in
an
alpha
state
and
make
a
grouping
and
moving
it
into
an
API
group
once
there's
enough
consensus,
I
think
this
is
the
challenge.
This
is
the
third
or
fourth
thing
where
we
have
run
up
against.
We
want
to
make
a
change
that
makes
users
lives
better
every
chance.
J
Every
choice
we
make
has
a
lot
more
impact
than
it
did
early
in
the
project,
and
so
it
feels
like
the
right
thing
to
do
is
err
on
the
side
of
being
a
little
bit
less
perfect
about
the
idea
design
in
favor
of
all
of
the
things
Jo
is
bringing
up,
because
we
have
to
make
these
choices
in
this
trade
offs.
At
some
point,
we
can't
make
it
now
we're
not
connected
the
next
time.
You
just
keep
kicking
it
down.
The
road
I
am.
G
I
would
totally
love
to
see
let's
break
this
down
in
more
granular
features
that
are
targeted
at
the
things
that
you
just
found.
We
cannot
do
outside
of
having
stuff
entry
things
like
file
system,
freezing,
dynamic,
mount
unmount,
a
generic
hooking
mechanism
for
in
you
know,
describing
and
injecting
hooks
that
all
seems
like
things
that
that
this
stuff
would
take
advantage
of,
but
you
know
but
but,
and
they
could
be
useful
in
a
whole
bunch
of
other
different
ways
so,
like
with
the
hooking,
you
could
actually
say:
hey
application.
Do
your
generic
domain
specific!
F
I'm not excited about a Kubernetes where the user thinks on a regular basis about hooks. It seems not declarative, and it seems to not work well with the level-triggered and declarative model. So the hooks, to me, if we do that, need to be more like an infrastructure plug-in than an API. It's just that things like snapshots
G
are point in time: send this signal, you know, pre-stop, post-start. Snapshots are temporal. We're going to have things that are point in time, things that don't map cleanly into the declarative model. I mean, we don't say "I wish that there would be a snapshot of this thing", right? That's just not the way it works.
D
This is also making me a little bit concerned. I think I understand it now: the problem we have here is who will own the whole feature story, the best practices, for the different types of enterprise users.
D
All the things we need to solve for snapshots are actual features, and we need to solve them. I keep getting email about this: even now, after we stand up a cluster, people still ask me what the right way is to back up a Kubernetes cluster, even on GCE.
D
People don't know, because it's difficult to find the feature, and even where there is a feature there are so many solutions; what the best practice is for the end user of a service is the harder part. So I still think it's a good idea to start from a community-provided solution that SIG Storage owns.
D
So
you
could
have
your
own
six
storage
best
practice
and
for
all
the
sin
were,
and
then
that
is
the
way
we
can
start
from
and
if
we
simple
this
is
the
best
practice
and
in
the
future
we
single.
This
is
communities
how
handle
store
energy
for
the
different
work,
not
a
specialist
in
full.
What
kind
of
new
we
could
come
back
and
talk
about?
This
is
entry
on
our
entry,
so
that's
kind
of
the
best
of
which
I.
N
If that's not the desired behavior for that group, I don't know; I don't have a super strong opinion that that's actually the way we should do it. So if that doesn't work for people, you should tell us now, because at the moment you cannot turn individual resources in a group off and on independently.
F
Okay, so API Machinery talks about APIs being off when, like, the discovery data doesn't show them and you sort of can't request them; but you could have an API that is there but doesn't do anything. Would you agree that's cheating, if it's on the...
G
But look, if we do go the route of adding another binary, adding another extension mechanism, for something as core as volumes itself, I think that will have to force the discussion in SIG Cluster Lifecycle about how we actually start expanding the set of things that we expose: how do we set those things up, and how do we actually enable folks to install and use that stuff? I think we should start that conversation, though. Yeah.
C
We already did this out of tree, and we do have those difficulties; that's why we propose moving it in tree, and as an alpha feature, right. So I just don't see the big disadvantage. We already presented the reasons we need it in tree, and there is some functionality required by many users that is almost impossible to do out of tree.
G
Know
and
I
I
totally
hear
you
that
there
are
a
bunch
of
capabilities
that
these
systems
depend
upon
like
file
system
using
dynamic,
mount
unmount
things
like
that.
My
suggestion
is
that
we
start
looking
at
those
as
more
targeted
granular
features
that
can
be
composed
to
do
things
beyond
just
snapshotting.
G
That's
not
snapshots,
that's
that
what
I'm
saying
is
take
the
features
that
you
need
in
core
that
make
the
out
of
tree
stuff
not
be
less
than
ideal.
Let's
come
up
with
more
granular
features
that
impact
the
rest
of
the
system
instead
of
threading
through
snapshots
in
a
very
sort
of
you
know,
monolithic
way
through
all
those
things
so
like.
If
we
need
to
do
hooks,
if
we.
H
Especially for the lifecycle hooks, that seems like a generic component that would be more generically useful, in that it would have utility across the board. But if you read the rest of the proposal in detail, I'm not sure that a lot of the rest of the functionality has much use outside of snapshots; that's what it was designed for.
A
Well, maybe. I think more generally we should just revisit a number of assumptions: should we move the in-core stuff out of core? Should we move a smaller set of things into core? What actually needs to be in the same API group? Is there a different way to address the pod-level changes that need to be made? I think the pod changes are the only things that are actually outside of the storage system.
A
Yeah, is that correct? Right. So, if not for the pod proposals, SIG Storage probably would have gone forward with this before we put the API review process in place, right. Speaking of which, next week I want to talk about the API review process. Maybe we should start a thread on that on the SIG Architecture mailing list, so people can give thoughts or input.
A
We should start that discussion there. Yeah, I'll kick that off; I'm looking at the prior art around that, and we'll send something out so you'll notice. So let's go ahead and wrap it up. Everybody, thank you very much for your time as always, and we'll see you in week four for the regular meeting, and thanks for the new meeting time. Yes, thank you so much, Joe. Thanks.