From YouTube: SIG Architecture 20180531
B: I'd like the topology one to go first; there are some pending PRs that would go in before code freeze, depending on this conversation. It's mostly informative, so I don't expect any big major issues to come out of it, and then we can do snapshots, if snapshots happens to leak into the next week. I'm okay with that. Snapshots is probably going to be the more contentious one; that's a proposal to pull snapshots in-tree. Okay.
D: I'd love to cover the meeting time stuff real quick. I want to know what the criteria is here for how we're picking this. It feels like there is a list of folks that you think are important to get into this meeting; who's on that list? How are we deciding? Because, you know, I objected to Monday, but Monday is still on the list, and so I want to understand what the process and the methodology there is.
E: Explain, please.

A: So I thought I would reach out to technical leaders in other SIGs and ask whether they could attend Architecture, and pretty much all the people I asked said they could not attend at the current time. So I also looked at what other SIGs were overlapping with possible meeting times, because if the goal is to get technical leadership from other SIGs to come, overlapping with their meetings is a problem; it's an obstacle to that. There are some meetings which I occasionally attend, so I knew when those were, but I also looked at the community calendar and other sources, and I got feedback from several of the people I asked about what it collided with. And Monday, I know you objected to it, but you were, I would say, the only, or at least the strongest, objector, and literally that time is the only time that I could find that does not actually overlap with any other SIGs. So I felt it was important to put that time on, for that reason. Now, I'm still collecting results. We have over a dozen responses and the Thursday time is currently winning, but it's about 50/50. I just asked for email addresses, so, you know, if it's close I can ask people who responded whether they actually have a strong opinion or not, because maybe there are others, like Clinton, who didn't actually have a strong opinion. I will try to choose a time that works for people, but I have strong feedback that the current time does not work for people that we would like to attend.
D: And I think it's worth also including a wider set of times than that. I mean, you know, other SIGs have used Doodles to actually try and find times that work, which can look at a wide variety of times. It may mean that there are going to be conflicts here, but if our goal is to look for broad participation, having a wider set of options and collecting more data is going to be a good way to do that. I'm sorry.
D: I'm not a SIG chair, but sure, I will send out a Doodle... I will send out a Doodle. But, you know, also, if there are a set of folks that you consider to be must-haves for this meeting, let's be clear and open about who those folks are. I assume you put yourself on that list.

A: I did.
A: So I reached out to a number of the other SIGs; so, for example, Ken öhlins, who's... you know, I don't actually see him on, you know, from sagat's, okay.
D: Right, so, I think this overlaps with our missing charter, around: do we actually have a set of named technical leaders across the project? Right? Because if you have a set of folks that you're optimizing the meeting times for, those are the de facto leaders of the project, and we should make that list and that structure explicit instead of implicit, right?
A: So anyway, I'm pretty convinced that the times I chose actually achieve the goal of minimizing conflicts with other SIG meetings, but I'm willing to send out a Doodle with a broader set; it may end up not providing clear data, but we can send it out and see. So, yeah. Let's move on to the actual topics of the day.
B: So first up is topology; that's gonna be presented by Michelle. The idea here, very broadly speaking, is that so far the idea that volumes are not equally accessible throughout a cluster has kind of been hacked into Kubernetes. Both the AWS and GCE PD plugins have had hacks that allow zones and regions to work, through annotations and special logic in the scheduler, and we've been investigating ways to make this more generic without having to actually encode any, you know, cloud-provider-specific logic into the scheduler. So, Michelle, do you want to add?
G: So what people have had to do today to work around this problem is to basically just manually provision the volumes first, in the correct zones that they want, and this manual process basically makes it harder to use a lot of the primitives and stuff that we've already built into Kubernetes to automate a lot of this. The whole stateful story is pretty rough when you also have to include dynamic provisioning across multiple zones.
G: In addition, we are also currently working on enabling local persistent volumes, and they also have a similar topology constraint, except it's bound to nodes instead of zones. So one of the major goals here is to try to define topology in a way that can support all these different types of volumes, and not have to hard-code logic specific to zones or specific to nodes to be able to handle these different volume types.
G: So the basic idea behind the solution that we're working on is to delay volume provisioning until a pod is scheduled. That way, the scheduler can signal to the volume controller which node it has chosen, and then the provisioners can use that node information to find the topology of that node that's relevant to them. So in the case of zonal volumes, they can pull out the zone of that node and then provision in that zone, and for local volumes they can just use that node itself.
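For reference, a minimal sketch of how delayed binding was eventually expressed on a StorageClass; the exact field name was still being settled at the time of this discussion, so treat it as illustrative:

```yaml
# StorageClass that delays binding and provisioning until a pod using the
# claim is scheduled, so the provisioner can see the chosen node's topology.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer   # default is Immediate
```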
G: So what I wanted to discuss here today was just one API proposal from that design. Currently, in storage classes, there are plugin-specific, opaque parameters that encode topology information. In the example of the GCE PD provisioner, we have this zones parameter where an administrator can go and restrict what zones the GCE PD can be provisioned in. Once we introduce this delayed provisioning behavior, we need a way for the scheduler to be able to take into account these admin overrides, but not have to encode specific plug-in logic to handle plug-in-specific parameters. So, as part of this design proposal, we are proposing to put a first-class field in the storage class for this topology restriction.
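As a concrete sketch, here is the existing plugin-specific parameter next to what the proposed first-class restriction could look like (it later landed as allowedTopologies; the field name was still under review at this point):

```yaml
# Today: an opaque, plugin-specific parameter that only the GCE PD
# provisioner understands and the scheduler cannot interpret.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-zones-param
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a,us-central1-b
---
# Proposed: a first-class field the scheduler can reason about generically.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-topology-restricted
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
```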
G: Eventually it will. So the setting in the storage class says something like: I'm allowing GCE PDs to be provisioned in zones A or B or C, and then, when the provisioner creates the PD for it, the PV node affinity is gonna have only one of those values. So it's not quite exactly a copy from the storage class to the persistent volume.
G: So, on the PV itself, we have the node affinity, which is basically a node selector. In the case of GCE PD it will have one zone in it, but for other storage systems you could potentially have multiple zones or multiple nodes specified in that selector. And then this storage class override is basically a superset of that, because you might have more zones specified in the storage class than the actual volume that got provisioned has.
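A minimal sketch of the node affinity on a provisioned PV as described here, using the form the field eventually took and the zone label key of the time:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pd-pv
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  gcePersistentDisk:
    pdName: example-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-central1-a   # a single zone for GCE PD; other systems may list several
```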
E: There were two things called out here: one was lack of resources, and then the other was kind of the topology affinity selector type of thing. Yeah, specifically on lack of resources: I know the one resource we've talked about is, like, how many attached volumes a particular node can have. If the scheduler selects a target node and then hands off provisioning to the provisioner, doesn't that imply that the scheduler has to be sure that that node has resource capacity for the type of volume that will be provisioned?
G: Yes, sort of. So this kind of gets into the details of the design, but the scheduling and provisioning is sort of a two-step process. When the scheduler signals to the provisioner which node it picked, the scheduler actually hasn't officially picked that node yet; it's kind of like a pre-selection step. So the provisioning will happen, and then, once the provisioning succeeds, we run through the scheduler again, and at that time it's going to take into account the attachable volume counts.
G: So this has a corner case that can't be handled, which is: if all the nodes in a zone are basically full at their attachable count, then, when we select the zone that this should be provisioned in, we're not taking that into account. But the idea here is that there's also a priority function that tries to balance the attachable volumes across nodes.
E: It sounds like the scheduler is annotating a PVC with a hint node, and then the provisioner is reverse-engineering the zone of that node, or the topology that that node is in, and then maybe or maybe not picking that correct node. Okay, so it seems like either the scheduler should be indicating topology information, or the scheduler should also have the resource information to decide if that's an appropriate node, yeah.
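In other words, the hinting being described amounts to something like the following on the PVC; the annotation key shown is the one the eventual implementation used, and the node name is a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  annotations:
    # Set by the scheduler as a pre-selection hint; the provisioner derives the
    # relevant topology (zone, hostname, ...) from this node rather than being
    # told the topology directly.
    volume.kubernetes.io/selected-node: example-node-1
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: topology-aware-standard
  resources:
    requests:
      storage: 100Gi
```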
I: Yes, that is another perfectly good example, and it's actually much more challenging when you talk about host names, because it's pretty typical to have maybe a three-zone cluster in something like Google or Amazon or Azure, and so worst case you've got a 2/3 chance of getting it wrong. If I have a 100-node cluster and I'm doing local storage, there's a 99% chance I'm going to get it wrong. So it's much more important, but I think focusing on zone is something that's really tangible for people. But, Michelle?
G: Yeah, so currently, today, the scheduler actually hard-codes the zone label and makes its selection logic based on that specific zone label. We want to replace that and have the scheduler be able to, you know, use arbitrary topology labels. The second...
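For context, the hard-coded label in question is the standard zone label on the node object; supporting arbitrary topology keys means treating these uniformly instead of special-casing one of them (label keys shown are the 2018-era names):

```yaml
# Typical topology labels present on a node object at the time.
apiVersion: v1
kind: Node
metadata:
  name: example-node-1
  labels:
    kubernetes.io/hostname: example-node-1
    failure-domain.beta.kubernetes.io/zone: us-central1-a
    failure-domain.beta.kubernetes.io/region: us-central1
```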
C: The question I have: yes, arbitrary topology; I mean, there are already arbitrary topological keys that you can use for affinity and anti-affinity, eventually, but I'll just leave that as a separate piece of conversation. The other piece that I care about is the logistics. This is a specialized use case, and the scheduler right now already suffers from a complexity and logic problem, and, as ex-scheduler chair or whatever lead, I would highly recommend doing this as a plugin that's external to the core and eventually graduating it back into the main line, because there already exist a number of edge and corner cases in the existing scheduler, and adding this type of potential complexity could run into other use cases in ways that we've found to be worse.
G: So I've been talking with Bobby a lot about this too. When I was originally working on this, I found that the current scheduler extension mechanisms, using the scheduler extender, didn't have all the interfaces that I needed, so I think Bobby has been working on this new scheduler framework design that I think should be able to address the modularity part of this.
J: But the thought I've had here is: I wonder if, instead of solving this particular problem just for storage, we should focus on creating a topology announcement system. It does seem a little strange to have the scheduler set this with, like, labels and annotations. So...
B: We actually did exactly that for storage. I think a part of the design that's not mentioned here is integration with the storage system, and specifically CSI, and how the labels actually get populated. Today, with the old design, they were basically hacked in: the cloud providers would apply their own set of labels and the scheduler was hard-coded to look at those labels. With the new design...
G: The challenge there is that we have these separate objects for storage already in the core that are different from other resources. Other resources are sort of like opaque resources on the node, but we have these separate PVCs and PVs for storage that need to be handled differently.
B: That, and then the other argument is that the specific availability topology for a given storage system is not necessarily going to apply to other systems, right? The goal here is for the storage system to be able to describe where, within a cluster, specific volumes are going to be available. So in reality the source of truth there is the storage system; it's describing the cluster subdivisions as it sees them, and then how volumes...
I: So I think we perceived the use of labels and label selectors and node affinity as a net positive in terms of adding fewer new concepts. Certainly, through this process we have discussed whether topology should be even more first-class than that. The storage perspective on topology is interesting, and I'm not comfortable saying that that won't happen for other subsystems too; specifically, you know, can we be wired up to a storage network that is separate from your data network, where those topologies can be different? I don't know if that's gonna replicate in other parts of the system. But for sure, one thing that is not present in the system is any way to indicate capacity other than on nodes, right? So there's at least conceptually a quota of volumes per zone, and we have just no way to represent that right now. So one idea that was thrown about was to actually introduce a resource that represents topological capacity and use that as a place to decorate.
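Purely to illustrate the idea being floated, and not an existing or proposed API, a hypothetical per-topology capacity object might look roughly like this (every name here is made up):

```yaml
# Hypothetical sketch only: a capacity record scoped to a topology segment
# that controllers could populate and the scheduler could count against.
apiVersion: example.k8s.io/v1alpha1     # hypothetical group/version
kind: TopologyCapacity                  # hypothetical kind
metadata:
  name: us-central1-a-attachable-volumes
spec:
  topology:
    failure-domain.beta.kubernetes.io/zone: us-central1-a
  resource: attachable-volumes          # hypothetical resource name
  capacity: 5000
```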
A: Wouldn't that be similar to... Because these volumes are being dynamically provisioned, if nodes are also dynamically provisioned, presumably there's a maximum amount of resources that can actually be used, if there are virtual machines or something, so it seems like a cluster-level resource quota would cover both of those cases.
K: Wait, hold it; by local storage, you mean on-node storage, right? Yes. So that seems like one of the easiest of the range of problems we're trying to solve here, and it seems like that could be solved, I won't say completely independently, but we don't have to try and boil the ocean to solve that specific problem.
K: Well, I know it's a problem worth solving, because it's been explicitly called out as a shortcoming that some customers have, and a similar problem applies to things like GPUs, where you can have remote GPUs, or GPUs that are remote from nodes but are not universally accessible across an entire cluster. I think that whole area is worth solving more generally than for volumes, but I don't think that we need to boil that ocean in order to solve node-local volumes right now.
C: You could solve this today; that statement, I just want to make sure we're clear about what we're saying. I don't want to rathole on this argument, but that is definitely a conflation of concerns. You can solve this problem today with other means, external to the core; it simplifies the problem, makes it easier to solve, right? That is the approach that I'm seeing that folks are trying to optimize for, right? And Saad even said, and Michelle, I believe that's her name, had said, that people have problems with this, right?
E: So, just to come back to the kind of communication mechanism, the signal mechanism, that this is talking about: I think I agree that limiting ourselves to the resources associated with nodes and the topology associated with nodes can get us a long way and not cause us problems if we introduce kind of a topology object in the future. I still think that the way that this is proposing having the scheduler signal to volume provisioning is missing resource information at the point where it selects a node.

I: Agree.
E: This is saying we want to put topology information on the storage class so that the scheduler can be informed about what node to select, and that makes sense. But if the scheduler is going to select a particular node and then hint that to the volume provisioning, I think it also needs to know what resources that node has available; otherwise it can select a node that literally can't have the volume.
I: Yeah, I think that we have generally fallen down on the idea of sort of opaque countable things that other things can decorate onto things. There's a lot in that statement, because of exactly what that means, but we've talked about this for IP addresses, we've talked about this for arbitrary things that you need one of every time you schedule a pod. Volume attachments are another example of this, I think.
E: Persistent volumes have the ability to express topology; we're wanting to add that to the storage class, because that's what generates persistent volumes, and that makes sense. I think having the ability for those to express resource constraints as well kind of gets parity with the pod resources, and that way the decision can take the topology restrictions from the pod and its associated PVCs or storage classes, and the resource constraints from the pod and the associated PVCs or storage classes, and use that to actually select a node that can satisfy it. Sure.
G: So I think we can split this up into two phases. At least in the initial phase, we're just aiming to handle the topology constraints right now, and then in a second phase we will look into handling the resource constraints, such as the capacity and also the attachable limits.
I: Yeah, I'm very eager to have the general topology question answered, and I would love to talk about it; Jordan, I share your passion for it. I also want to be able to deliver the features that we've been pent up on for many, let's call it years, because that's basically what it is, without blocking on that. So I would try to find some balance there. Okay.
M: Can I just say: the design is not great today, and I'm guilty of that, but what it does achieve is that stateful sets spread out over zones by default, and it sounds like we will now need a pod anti-affinity constraint on every stateful set when we add this, if you want to guarantee spreading across zones. Yes.
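The kind of constraint being referred to, added to a StatefulSet's pod template to keep replicas spread across zones, is roughly the following; the app label is a placeholder and the zone key is the one in use at the time:

```yaml
# Pod template snippet: prefer not to co-locate replicas of this StatefulSet
# in the same zone.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: failure-domain.beta.kubernetes.io/zone
        labelSelector:
          matchLabels:
            app: my-stateful-app
```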
H: It does give you a reasonably high probability of a good spread, provided things like new zones aren't added to the cluster and so forth, right? Like, if you were in two AZs and all of a sudden you've gone to three AZs, or three zones, yeah, you're now not balanced correctly. So there's a lot of limitations.
B: We've exclusively worked on it as an external controller with CRDs, but now we're running into issues adding features, where we're realizing that it would be really beneficial to be able to move this controller in-tree along with the rest of the volume subsystem, instead of having it as a one-off, and we'd like to get permission from SIG Architecture for that. So with that, I'll hand it over to Jing for the presentation.
N: Great, thank you for joining me. So I will present the proposal for the volume snapshot in-tree API. For this project, the goal is basically, first, a snapshot API for the basic functionality, which is creating, listing, and restoring snapshots of volumes, so users can build very high-level things on top of these building blocks. Our current status is: we started the initial design last year, we have a proof-of-concept out-of-tree implementation, and based on feedback we have continued to revise our design, and now we also have an in-place restore workflow design, among others. Now, as Saad mentioned, we realized that our design needs to move in-tree for a number of reasons, so I will describe that in more detail in the coming slides. For this project we plan to have several stages; the first stage is to propose the high-level workflow design.
N: This has been reviewed by SIG Storage and SIG Apps, and now we are trying to get to an API proposal, and this is the main goal of this meeting. To my mind, this is not a detailed API design review; after we get a high-level overview and approval of the design, then we can get into the more detailed review in other meetings, and then implement it as an alpha feature, and then beta, GA.
N: Because of the time limit I may skip some details; please let me know if I go too fast. This is the outline for the rest of the slides: first, some background on snapshots and the concept of Kubernetes volumes, then I'll talk about our volume snapshot API design and the workflow to create snapshots and create a volume from a snapshot, and also the in-place restore workflow, and then I'll talk about why we need an in-tree API proposal instead of out-of-tree.
N: As we mentioned, the cloud providers have snapshots for their volumes, and they normally have incremental snapshots, so they can be created very fast with very low cost. Another interesting thing in the cloud is that they also transfer the snapshot data to the cloud asynchronously, so that means the data will be available in the cloud, either in a zone or globally; for GCE right now, a snapshot is globally available, so a user can access it anywhere.
N: I'll just give a walkthrough of how a user normally takes a snapshot and uses a snapshot. Suppose, in a GCP environment, you have a StatefulSet and you have a number of PDs, one used by each pod, and the user wants to take snapshots. Depending on your application, it could take a snapshot of just one of the volumes, like a master copy, or it may need to take a snapshot of each individual volume, and you need to ensure the consistency, the data consistency.
N: Normally we recommend the user prepare the application before taking the snapshot, which means you need to flush and lock the database and freeze your file system, and then you can use the command line to create the snapshot, and after the snapshot is cut you can unlock, resume the application, and unfreeze the file system. When the snapshot is ready, you can use the snapshot to create new volumes, and in Kubernetes, after this volume is ready, we basically need to create PVC and PV objects to represent this volume, and then the pod can start using this new volume.
N: This part is quite challenging and it's really application-dependent, so we are not trying to provide, like, a standard way to pause the application. But for the file system it's more standard, so if you have a file system like ext4 then you can use a standard command to freeze the file system. And, as we will mention a little bit more, we plan to provide a way for the user to prepare the application, but it's definitely application-dependent; it's not standard.
I: For the sake of time, we have less than 10 minutes left. I would like to just assert that taking snapshots is the done thing in many applications, and we shouldn't be overly focused on whether they're doing it right or wrong; I think we should be focused on the mechanism for enabling them, or telling them that they're just not okay.
H: Necessarily, people want to be able to use file-system-level snapshots or block-level snapshots and to restore them. They're doing it on pretty much every major cloud provider right now, one way or another, and they're doing it in bespoke ways that aren't necessarily providing the best data integrity and durability for their applications. So one thing this provides is a standard way to do it, and a higher probability of being able to get it right, because they're already doing it.
N: So I think you probably all know about the volume concept: we use two API objects to represent the volume; one is in the user namespace, and the other is non-namespaced, so only the system admin can access it, for security reasons, and then the pod references the PVC name to use the volume. For snapshots we use a very similar way to represent the snapshot object. So this is a simple example of a snapshot YAML file: you only need to put the PVC in there, to indicate what volume you want to take the snapshot from, and the controller creates the volume snapshot API objects. One is in the user namespace, and it's like a request for a snapshot, and the other is non-namespaced, and it will contain all the detailed information about the volume snapshot; it can also hide sensitive information, like credentials or keys, from the user. And for taking snapshots, as we mentioned, the application may be reading and writing to the volume, so we need to ensure data consistency in the pod's path.
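A minimal sketch of the pair of objects just described, modeled loosely on the out-of-tree prototype; since the in-tree API shape is exactly what is up for discussion, the group, kind, and field names here are illustrative only:

```yaml
# Namespaced request for a snapshot of an existing PVC (what the user creates).
apiVersion: snapshot.storage.k8s.io/v1alpha1   # illustrative group/version
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot-1
  namespace: default
spec:
  source:
    kind: PersistentVolumeClaim
    name: mysql-data                            # the PVC to snapshot
---
# Non-namespaced object bound to the request, holding the detailed snapshot
# information (and anything the user should not see, such as credentials).
apiVersion: snapshot.storage.k8s.io/v1alpha1   # illustrative
kind: VolumeSnapshotData
metadata:
  name: snapdata-mysql-snapshot-1
spec:
  volumeSnapshotRef:
    name: mysql-snapshot-1
    namespace: default
  gcePersistentDiskSnapshot:                    # provider-specific section, illustrative
    snapshotName: pd-snapshot-1234
```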
N: So you see it requires coordination between the snapshot controller on the master and, on the node side, the kubelet volume manager. When you take a snapshot, the kubelet volume manager can signal the pod to prepare the application, and when that's done, the snapshot controller can start taking the snapshot, and after it has finished, the post hook can resume the application. Some other functions, like delete and list snapshot, are very straightforward; they're like other API objects. And now for restoring a snapshot.
N: One simple example here: we still use the PVC YAML file, and we add a new field called snapshot source, and with this new field the provisioner can provision the volume from the snapshot instead of an empty volume. This is for the create-single-volume case; in the case of StatefulSet, similarly, StatefulSets have a volume claim template and you can put the snapshot source information there, and in this case all the volumes for the StatefulSet will be created from the same snapshot.
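A sketch of what that new field could look like on a PVC; the restore-from-snapshot source eventually landed as dataSource, but at the time of this discussion the exact name and shape were still open:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-restored
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  resources:
    requests:
      storage: 100Gi
  dataSource:                          # "snapshot source" in the proposal being presented
    apiGroup: snapshot.storage.k8s.io  # illustrative
    kind: VolumeSnapshot
    name: mysql-snapshot-1
```

The same block can sit inside a StatefulSet's volumeClaimTemplates, in which case every replica's volume is provisioned from that one snapshot.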
N: So this, you see, is mainly very easy to use to, like, replicate data to different other volumes. But for the in-use volume use case, that means currently there is some pod referencing the PVC and using this volume, and you want to roll back your volume to a previous point in time, and we have some difficulty with the most basic API design we just mentioned.
N: There are some manual steps to do this, such as: you need to delete the pod first, delete the PVC, modify the PVC YAML file, create a new PVC, and recreate the pod and start the pod again. But this means you have a long application downtime. Another way is you create a new set of PVC and PV with a different name, and then you have to modify your pod spec to change to the new PVC name and restart the pod; but in cases like StatefulSet that might not be possible, because the PVC name is prefixed.
N: So, to solve this restore scenario, we propose a different way. Assume we have kind of a restore-volume request, with some volume snapshot available; the PV controller can provision a new volume, and then, when it's ready, the PVC controller can switch the pointer from the old volume to the new volume, and then, when the pod is restarted, it will start using the new volume. This way the user doesn't need to worry about modifying the PVC or the pod spec, etc.
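Purely to illustrate the flow as described, a hypothetical in-place restore request might look like the following; no such object exists, and all names here are made up:

```yaml
# Hypothetical sketch only: roll an existing, in-use PVC back to a snapshot.
apiVersion: example.k8s.io/v1alpha1    # hypothetical group/version
kind: VolumeRestore                    # hypothetical kind
metadata:
  name: restore-mysql-data
  namespace: default
spec:
  persistentVolumeClaimName: mysql-data   # the existing claim to roll back
  snapshotName: mysql-snapshot-1          # the point in time to roll back to
# Described behavior: a new volume is provisioned from the snapshot, the PVC is
# re-pointed from the old PV to the new one, and the pod uses it after restart.
```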
I: So, for the group: I've looked at this several times over the last year. I have become okay with the idea of adding the snapshot source to the PVC, because it is sort of a declarative statement of fact: this PVC did in fact come from a snapshot. I find it weird that it's an artifact, sort of a procedural artifact, that gets left in the YAML, so I find that to be a little bit awkward, but it seems that it's not a big deal for real app users. I find this to be the most challenging part of the overall design, because I think it's the first really sort of imperative thing that we're trying to really support. We've talked about and danced around a bunch of other imperative concepts, but we've never actually supported them. Is it... did we go back? Now that's gone now. Okay.
N: Yeah, but actually the focus of this meeting is more like trying to see whether we can move the volume snapshot API in-tree, not so much focusing on just the restore workflow; we can discuss the restore workflow in a separate meeting. But before we leave, I just want to maybe emphasize why we want to go in-tree. So this is what we propose to add.
N: So, from the user's point of view, it causes confusion and some problems when they use something where part is out-of-tree and part in-tree, and it's also hard for the system admin to set up such an environment; now everyone will start to have a different installation. These differences in the API raise the possibility of compatibility issues, and also some of the functions that we mentioned, like the pre/post pod hooks, which are in-tree, and also the restore workflow, will be very hard or impossible to do.
I: That's really the question here: are there major objections to moving the snapshot resource in-tree? Not necessarily the restore-in-place part of it; we can defer that, I think. But the snapshot part, I think, does benefit from being in-tree, if for no other reason than there's already a storage group, and it would be sort of weird to have snapshots in a different group.
L: So I'm really torn on getting this in or out. Like, we, working on Ark, have done the bulk of the work presented here out of tree, except for the in-use swap, and, like I said, I'm torn. I feel like, with PVs being provisioned as part of core, snapshots maybe feel natural, but I don't know that the arguments for why out-of-tree is bad are necessarily all true.
L: It actually doesn't, and I know we're out of time, so I'm happy to talk about this. Oh yeah.
I: Topology question: I asked Michelle to see if she could pull together a follow-up meeting with a smaller group ASAP, specifically people who are willing to devote some amount of time to the topology concept. I don't want, if I can avoid it, drive-by naysayers; I'd like people who are actually willing to invest in the idea space.
B: If you are, you know, one of those people, please reach out to Michelle and she'll include you in that meeting. And for snapshots, I guess we'll just delay to the next meeting. I think, let's use this as setting the context; everyone can go and think on it a little bit, and we'll come back with more information.