From YouTube: Kubernetes Data Protection WG Bi-Weekly Meeting 20210602
Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 02 June 2021
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xiangqian Yu (Google)
A
Recording
Okay, I started. So, yesterday I believe most of you have also received this email about the passing away of Payus Gupta. It is pretty sad news. I personally met with him only once, but from the email it seems that it was because of COVID, so this is still impacting all of us. I hope everybody here, you know, stays safe and healthy at home, and I hope he rests in peace.
A
There are some updates on the data protection white paper, and Tom is actually off today, so his part will not be covered. Lastly, we had the CNCF CloudNativeCon conference, I think two or three weeks back, and there are some questions from there today. I think it's Xing... the person who should talk about mounting snapshots as read-only volumes. Xing, do you want to give a short description or introduction?
B
Yeah, sorry, this is actually not from KubeCon; the one from KubeCon we talked about last time, because you couldn't attend last time, and it was last minute. This one was just today on our Slack channel. There is a question... I think it's Robert... oh yes, okay, yeah. He asked a question this morning on the Slack channel about the option of mounting a snapshot as a read-only volume. So we can talk about that at the end.
A
Got it? Okay, let's get started, starting from the KEP updates. Container notifier: we are fairly close to merging it. We just need one more LGTM and the approval from a SIG Node chair, either Dawn or Derek; I'm following up with Dawn. Xing has made another round of updates to meet the alpha requirements for release, and another person from the release team, the production readiness team, is going to need to sign off and approve it. I think that person is John. Xing, is my understanding correct?
correct.
B
Yeah, we listed his name there, so yeah, I think we can ping him after we get approval from SIG Node. For the alpha, I think it should be straightforward.
C
B
So, you know that we had the previous proposal as a quiesce hook, right? That one we actually had an implementation for. Then we were asked to go this completely different route, so this time we're going to get the full approval before we start coding.
B
D
B
What we had last time can still be used; it's just that right now the whole structure is different, right? So we need to go to SIG Node to apply for a repo. It's going to be a repo, not kubelet: in the first alpha release it's not going to be in Kubernetes. We're going to have an external controller in a separate repo, but it's going to be managed by SIG Node, so we're going to add the logic there.
B
In-tree... well, yeah, so the API will be in-tree, the APIs are in-tree, and then the part that is running the command, that part is going to be an external controller for now. And then after the first alpha... I'm not sure if it's going to be another alpha; I thought when we move this to kubelet it's going to be beta, right? We hope the second phase is beta.
A
Right, the first phase is to write a community-maintained, out-of-tree controller in its own repo that responds to the in-tree APIs, and once this is verified, the PoC works fine and there aren't a lot of concerns from the community in terms of performance, etc., then this logic will be moved into kubelet. I actually foresee this taking more than one alpha, because the API itself has to be staged.
A
C
Okay, and I haven't checked on the KEP, the specification for the APIs, recently, but I'm just curious: the idea is that this will underlie some mechanism for providing a standard way for applications to quiesce themselves, one that can be invoked from the outside by, like, a backup application, right? That's still the high-level goal?
A
An orchestrator or controller can trigger those container notifiers by creating a pod notification resource. The orchestration is still controlled by upper-level controllers, but the scope is limited: whoever defines the Deployment or StatefulSet understands the application; they define those notifiers, and they open that interface to backup vendors or application snapshotters or whatever.
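As a rough illustration of the flow being described, and with the caveat that the container-notifier KEP is still pre-release, so every kind, group, and field name below is an assumption rather than the real API, a backup controller triggering a workload-defined quiesce notifier might look something like:

```yaml
# Hypothetical sketch only: the actual API group, kind, and fields in the
# container notifier KEP may differ from what is shown here.
apiVersion: notification.x-k8s.io/v1alpha1   # assumed group/version
kind: PodNotification                        # assumed kind
metadata:
  name: quiesce-mysql-before-snapshot
spec:
  # Select the pods whose containers should receive the notification.
  podSelector:
    matchLabels:
      app: mysql
  # Name of a notifier the workload author declared on the container;
  # "quiesce" is a purely illustrative name.
  notifier: quiesce
```

The point of the design is the division of labor: the workload author defines what "quiesce" means for their application, and the backup tool only needs to create a resource like this one.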
C
And that's why we care about this particular mechanism, but the mechanism has other uses too.
A
Signals as well, yes; like, a signal is another use case, and that's in phase two as well; we are not going to focus on that in the alpha stage. Another use case is SIG Node: nodes do have this requirement when tainting and draining a node, where they want to notify all the pods that are up and running on that particular node so that they can exit gracefully, instead of just going ahead and shutting them down.
A
Yep, if anyone is interested, please take a look at the KEP; it's in a merge-ready state. Xing and I will follow up on this, and hopefully we'll get this done by this week. All right, big props to Ben as well: I think he cancelled all the data populator API discussion meetings, which means the KEP is officially there. I will leave it to Ben to give an update on this.
C
Oh, I mean, yeah, there's not much of an announcement here. I gave the update, I think two weeks ago, on what was actually agreed to when we merged the KEP.
C
So I won't go over that again, but yeah, the weekly meetings we've been holding were really about design, and now that the design is settled they've just kind of turned into status meetings, and we don't need a whole hour just to talk about status and have no one show up. So I'm going to do my status updates in the Kubernetes CSI meetings, and we can use those if we do have any more design issues come up related to data or volume populators.
C
Well, so yeah, the PoC for how populators should work has remained unchanged this whole time; everyone is pretty happy with that. All of the disagreement was really about the Kubernetes API, and how you should express, when you create your PVC, that you want to have the volume created from something.
C
It turns out that, because of various design choices, or mistakes, depending on how you look at it, when that API moved from alpha to beta to GA, it was not possible to open it up to arbitrary CRs without breaking backwards compatibility in some ways. So that was where the API reviewers had the most qualms with the design, and we went around and around and around and finally reached a design that appears to make everyone happy.
C
But it involves replacing the dataSource field on PVCs with a new field called dataSourceRef, which will have to go through alpha, beta and GA, and then we'll have to deprecate the existing field and do a whole process to gradually move people to the new way of specifying data sources.
C
But the new mechanism will continue to support PVCs and snapshots. It will be 100% backwards compatible with the current implementation, and it will allow all of the other things that we want to allow. So it's that API work that I'm primarily focused on in this release: getting the Kubernetes API updated with this new field and all the backwards compatibility stuff. Everything else is sort of in the same state.
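Concretely, the change being described means a PVC can reference an arbitrary non-core custom resource as its source. A minimal sketch, where the `Foo` CR, its group, and the instance name are purely illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # The new field discussed here: dataSourceRef replaces dataSource and may
  # point at any non-core object, in this case a hypothetical Foo CR that
  # some out-of-tree populator knows how to turn into volume contents.
  dataSourceRef:
    apiGroup: example.com
    kind: Foo
    name: my-foo-instance
```

For backwards compatibility, references to a `VolumeSnapshot` or a `PersistentVolumeClaim` keep working the same way the existing dataSource field does.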
C
C
C
D
C
Yeah, but to be clear: if I define, say, a Foo CRD and then I instantiate a PVC with a dataSourceRef that points to an instance of a Foo, the API validation is going to say, sure, that looks fine. It's not going to reject it; it's just going to accept anything that you put in the data source field as long as it's not a core object. Core objects are special and we will reject those, but anything that's not a core object is accepted.
C
The PVC will be created, and then it just won't bind unless there is a populator that understands that particular object and knows how to populate it, and everything else lines up so that it can be populated and the volume can bind. So all the other controllers are about providing user feedback, helping users understand why their PVC hasn't bound yet. It'll pass API validation, and you'll just get these events that say no populator is registered for type Foo.
A
Nice work. We all know how challenging it is whenever you want to make a field change in core APIs, and hopefully we'll hear back soon.
D
The implementation... okay. And the API reviewers have been incredibly helpful.
D
A
A
B
So, moving on to the COSI KEP: I think Sid is still working on updating the KEP, because there are some concerns from the API reviewers. Once that's ready, we can get another round of review from the API reviewers. In addition to that, I think the team has been pretty busy also looking at the downward APIs; this is, like, the harder part, the driver, how that part of the API is shaping up. So yeah.
C
The downward API for COSI is really about how the workload will consume buckets; that's what they're working on there. I'm kind of curious, just from the perspective of this group, what the... how...
B
C
B
So I think we are thinking this is a part of the backup repository. So this is also one of the missing building blocks, but I think something is still missing, right? This will have our APIs to provision the bucket, but then, I think, it seems like we still need to have another API object.
G
A
Yeah, all of these, including volume backup, the data populator, container notifiers and COSI, those are just pieces that backup vendors can utilize to achieve data protection in a Kubernetes context. How they're going to use those APIs is really the upper-level controllers' decision, right?
A
C
A
That's a great question. I think the whole purpose of the white paper is exactly to achieve that. We don't necessarily call it guidance; it's just our own experience and understanding. Does that make sense to you, Ben? Because apparently different backup vendors have their own implementations, right; they have their own ways of doing orchestration.
A
Putting in a generic controller like that, I'm a little concerned we don't have the necessary knowledge to put a very generic controller over all these building blocks. But instead we can share our understanding of currently widely deployed stateful workloads in Kubernetes clusters and how we look at things: how do we do a backup of a MySQL database, etc.?
A
And then it's up to the specific backup vendors to use those APIs to achieve their goals.
C
Yeah, and that's totally worth doing. I guess I was wondering if we have an even loftier goal in the long run of developing some sort of API which is in fact standardized, right? Because the problem with what you propose is: yeah, we have all these tools out there, and backup vendors can use them to deliver really good backup implementations, but those backup implementations themselves are proprietary and non-portable, and we'll never get to the place where there is, like, the Kubernetes backup API that you can just rely on no matter which cluster you're running against. It's a much harder goal to get there.
B
A
C
So, similar to how you can take a snapshot, and there are a million different CSI drivers that do different things when you take a snapshot, but at the end of the day the user knows what he can get from a snapshot, and so he can use this API and expect it to be portable across clusters, which has value; you could aim for the same kind of portability for backup APIs.
C
B
Hey Ben, since you are working on the volume populator and you're also familiar with COSI, it might be good for you to think about how these tools can work together, you know, using the volume populator at restore time.
B
We need to make sure it's extendable; it can't be just for this one implementation, right? So yeah, I mean, I think, if you have some ideas... because this is something that I was talking about earlier. I don't think we have all the dots connected for, like, how to use the volume populator with COSI.
H
B
If you already have something, I think that might be a good topic when you're ready to talk about it.
B
A
Yeah, let me circle back on that, maybe in the next meeting.
B
Yeah, maybe we can ask them to give an update at some point.
C
B
Yeah, and there's also another purpose, actually: somehow thinking about reusing the volume snapshot. But I think for that kind of reuse the volume snapshot would need to specify a backup location or something, so we probably need to meet offline just to sort out that part, and then we can come back to this group and talk more.
A
The initial goals in the roadmap are actually to have the building blocks ready before we can proceed to that, because otherwise it's a chicken-and-egg problem. First of all, we need to identify what the building blocks are, and then we go ahead and, you know, try to build some generic APIs around these building blocks to offer to the community, to achieve something portable.
C
A
Xing and I did give some... we have a slide from before, right, that workflow. I think it was also reviewed by the community, by this group over here; we drew a rough workflow there. Does that sound like a good starting point?
C
B
Right here, somewhere, yeah; maybe just the meeting before the...
A
It's in the April 21st meeting agenda, in the data protection white paper updates: the missing building blocks diagram. It is there, actually.
E
A
Okay, I guess that's all for the KEP updates.
A
Okay, let's go with the white paper, so this is work in progress. Phuong, are you there? I think I saw you.
A
Great, do you want to give an update on the CBT stuff?
E
B
E
A
E
E
Yeah, and so far we have been receiving feedback from Xing, from Microsoft, and others.
C
E
E
Yeah, you can see that the document is not really long. It describes the motivations and an example workflow of how someone can use the differential snapshot service to efficiently back up their PVC. I also received some feedback.
E
I think earlier, when we talked about this, Ben introduced another workflow that's similar to this one, based on the NetApp SnapDiff, and I've been reading it, and it looks very close to the flow that we have here. Xing Yang also recommended looking at restic.
E
I think the Velero backup also uses restic for backing up and restoring. I have looked at it before, but I haven't yet looked at how we would use this differential snapshot to enhance restic to make it more efficient. I will try to take a look at it, maybe this week or next week, to enhance this white paper segment with some workflow, maybe either from NetApp SnapDiff or from restic.
E
One of the things that I saw SnapDiff has is that they track the whole chain between snapshots and then populate it into catalogs, tracking the files of the chain, and that seemed to be like a backbone behind the restore mechanism when we do restores. Is that the main idea behind generating the catalog for the entire backup?
C
So, as far as I know, SnapDiff is for doing efficient incrementals. It has nothing to do with restores; it's purely a backup technology.
E
Well, I understand that; I was asking how it helps in the restore, because one of the things that SnapDiff is also providing service on is that, when we restore, you can do the restore very efficiently as well. That is what I am looking at, to see if we can somehow leverage this differential snapshot service on the restore side.
C
Okay, you might be confusing that technology with something else. I mean, NetApp has a bunch of different APIs that do different things, and there are ways to, you know, rapidly restore certain snapshots, but SnapDiff in particular is about this: if you have two snapshots you've already taken, it will tell you what the difference is between them.
C
So, you know, if you're trying to build a backup API on top of snapshots, you can take your first snapshot and back up the whole thing as your full; that's your base. And then for your second snapshot you can say, well, give me a list of what changed, and if it's only a few things, then you can do an incremental backup that only captures the diff between the two snapshots, or, if it's really really large, you can decide to do another full. It's just a tool to help an application figure that out.
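The decision Ben sketches here, incremental when the diff is small and a fresh full when it is large, can be written down in a few lines. The threshold and the shape of the diff list are illustrative assumptions, not part of SnapDiff itself:

```python
def choose_backup_type(changed_files, total_files, full_threshold=0.5):
    """Decide between an incremental and a full backup.

    changed_files: paths reported changed between two snapshots, e.g. by a
    diff API such as SnapDiff; total_files: file count in the newer
    snapshot; full_threshold: illustrative fraction of changed files above
    which a new full backup is cheaper than an incremental.
    """
    if total_files == 0:
        return "full"  # nothing to compare against; take a new base
    change_ratio = len(changed_files) / total_files
    return "full" if change_ratio > full_threshold else "incremental"

# Only a few files changed: capture just the diff.
print(choose_backup_type(["/db/wal-0007"], total_files=1000))    # incremental
# Most of the volume changed: a fresh full is simpler and cheaper.
print(choose_backup_type([f"/f{i}" for i in range(900)], 1000))  # full
```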
E
A
Well, I can share my limited knowledge around that as well. I read about the SnapDiff API in detail from the publicly available blogs that are posted. There are two pieces. One is, as Ben was mentioning: given two snapshots, calculate the differences, and the difference is actually at the file level. And then, using those differences, the SnapDiff API offers the catalog you mentioned; that catalog is just a list of the changed files.
A
E
C
C
E
Coming back to the white paper on this differential snapshot: I think that is what we have got so far, and hopefully I can extract a little bit more information from the SnapDiff.
E
I mean the SnapDiff from NetApp, and also put it into this sample backup workflow as another example of how this differential snapshot would be used. But, as you can see, the similarity between the differential snapshot service that we are proposing and the SnapDiff service from NetApp is pretty close.
A
That sounds great, yeah. I think it's looking good; there are a couple more comments to address. Other than that it is looking good, and if anyone from this community is interested, please take a look. All right, let me share my screen back. Xing, do we have other updates on the white paper? Tom is off today.
B
Yeah, other than that, I think we're fine. I see that Xiang has made some edits to the first couple of sections; thank you. So I think now we can move on to the open issues.
D
E
F
So my question was about volume backup, and maybe this is too early to consider various use cases for it. But anyway, my main assumption about the backup process is that the volume would first be snapshotted, and then, in order to get the snapshot contents to the object store...
F
The component that handles volume backup would be able to say that the volume that I'm creating will be read-only, and then the CSI driver can pick the optimal path to populate that volume with the snapshot data.
B
Yeah, so I think it really depends on the storage system. On some storage systems, as you said, you can actually just mount a snapshot; there is backup software that already does that, so you just read the snapshot directly. But right now we don't have this option in CSI, because in CSI you can only mount a volume; you can't really mount a snapshot.
B
I don't know if it would be good to do what he was talking about, almost like converting a snapshot into a volume.
H
B
I actually know a few vendors that can do that as well.
B
F
F
I think so; I haven't used it, but I think they do.
C
F
B
F
The way the snapshots are exposed in CephFS is that you have basically a hidden directory inside the directory that you have taken a snapshot of, and then you can access it through the normal POSIX file interface. So it's basically just a directory that is read-only.
B
The whole directory, basically; when you take a snapshot, you snapshot, let's say, the file system, and then you'll be mounting the whole thing. Okay, nice, that makes sense.
F
C
So I have no doubt that there are many vendors who could implement something like this. I think the challenge is what the user interface is going to look like, because we've deliberately designed both the CSI layer and the Kubernetes interface to say that the only thing you can do with a snapshot is create a volume from it. If we try to say, well, sometimes you can also just directly mount it, that's going to involve changes to the pod spec and all kinds of other things you have to think through.
B
Yeah, of course. This actually came up a while back; we didn't really do anything, but I think it's worth at least thinking it through, definitely. Once you get mount involved, it gets very, very complicated very quickly, right? I definitely agree with that, but...
I
C
I'm going to make a proposal, I think. What I was going to say is: while the existing Kubernetes API really doesn't lend itself to doing this, and it would be hard to change it if we wanted to, what you can do is, in your CSI driver, have some sort of a hack that says: if the user tries to create a read-only copy of a snapshot...
C
Then attach that read-only volume from the snapshot to a pod. You have complete control on the CSI side to play whatever games you need to directly attach the snapshot to the pod without ever telling anybody; the user just has to go through the step of creating a read-only volume in the middle.
A
B
B
C
B
C
C
Of that snapshot: instead of creating a new volume from the snapshot, attaching a pod to that new volume and then reading it, you could create that new volume as read-only (ROX), and the driver, instead of actually copying the snapshot to another volume, could say: oh, this is a read-only volume, I don't need to copy it; when someone wants to attach to this new volume, I'll just attach them directly to the snapshot, which is read-only anyway. And then, on the driver's side, you have all of the control.
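With today's API, the pattern Ben describes would look to the user like an ordinary restore, except the new PVC is requested read-only; whether the driver then skips the clone and attaches pods straight to the snapshot is driver-internal behavior, not something the API guarantees. A sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snapshot-reader
spec:
  # ReadOnlyMany signals that nothing will ever write to this volume, so a
  # cooperating driver could attach pods directly to the underlying
  # snapshot instead of cloning it first.
  accessModes: ["ReadOnlyMany"]
  resources:
    requests:
      storage: 200Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
```

A backup pod would then mount `snapshot-reader` like any other PVC and stream its contents to the object store.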
B
Okay, so when your driver gets a CSI request saying create a volume from a snapshot, maybe you have to pass some special flags saying, okay, I want this to be a read-only snapshot, so you basically kind of somehow point into your...
C
H
C
H
C
D
C
H
So, creating a PVC from the snapshot: many storage drivers do have this space check, right, to make sure that there is enough free space available. So in those scenarios this is also a problem: if you are creating a volume from a 200 GB snapshot, even though we intend it to be read-only, we don't really need an available-space check, and there's no way to indicate that.
C
B
B
Yeah, but then you've got this inconsistency between the storage accounting in Kubernetes and whatever is really used on your storage system, so there is definitely some problem there.
B
I'm just saying that in Kubernetes, when I say I want to create a new PVC, it doesn't matter whether it has a data source or not; once you do that, I think the quota will be taken. But then you're saying that we are actually not taking any space on the storage system, because there is already a read-only snapshot there; we just use it.
B
I don't think you can, because the quota, I think, is based on the size you request when you request the PVC. So yeah, I don't know what we can do with the quota yet, but my understanding is that it's actually taken when you request that much capacity for your PVC.
B
H
There can be cases with a separate storage class where you could specify these kinds of indications, and then, when we create the PVC from the snapshot, instead of using the original storage class of the source PVC, we use this storage class, which is almost the same as the original one but may have these extra flags to indicate it.
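Robert's suggestion could be expressed as a near-duplicate StorageClass carrying a driver-specific parameter; the provisioner name and the parameter below are purely hypothetical, not part of any real driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-readonly-restore
provisioner: example.csi.vendor.com        # hypothetical driver
parameters:
  # Hypothetical flag telling the driver to expose the snapshot directly
  # instead of cloning it, and to skip the free-space check.
  readOnlySnapshotMount: "true"
volumeBindingMode: WaitForFirstConsumer
```

StorageClass `parameters` are opaque to Kubernetes and passed through to the CSI driver, which is why this kind of vendor-specific hint fits there without any API change.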
B
Okay, yeah, I think maybe that's something we could explore. So, Robert, I don't know if that answers your question. Do you work on this CephFS CSI driver yourself, or are you just using it?
H
B
Well, maybe that's something you can go do; they probably have a channel for that. Do we have anyone here working on that driver? Maybe not, yeah. So maybe you can bring that up with some of them.
G
B
On that CSI driver, yeah.
H
B
B
I think I was talking about maybe, like, suggesting this as a standard workaround.
I
B
That would be optional, just like we have a lot of optional things in a CSI driver. I know we have a lot of optional CSI driver capabilities; how many capabilities are really required?
C
B
Okay, so that's something we should probably try for now. Yeah, I can definitely see it's very difficult to do.
B
What do you mean, sufficient? What do you mean, sufficient use case? There are definitely vendors who would like to do this, because it makes a big difference.
C
So, like, every time you want to do a backup, you have volume-to-snapshot and snapshot-to-volume: two clones have to happen, just as part of your regular backup workflow, and if one of those steps is inefficient, then your whole backup mechanism is grossly inefficient, and nobody wants that.
A
C
B
Quick question: this is not... this is create volume from snapshot; this is actually at backup time, right? Okay, just to be sure.
A
Right, but after that, the data path piece: could you elaborate on what that would mean? The data path is not there.
H
C
A
Blocks, that's the key, right? Look, here's what restic does: it runs a DaemonSet on every single node in Kubernetes, and when the DaemonSet is up and running it goes ahead, and it has the privilege to read all the mounted volumes on that node. In this case, you effectively have read access to everything.
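This daemonset approach is, for example, how Velero's restic integration works: a privileged pod on each node reads volume data out of the kubelet's pods directory via a hostPath mount. A trimmed sketch, where the image tag is illustrative and only the hostPath follows the actual convention:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: restic-agent
spec:
  selector:
    matchLabels: {app: restic-agent}
  template:
    metadata:
      labels: {app: restic-agent}
    spec:
      containers:
        - name: restic
          image: velero/velero:v1.6.0      # illustrative
          volumeMounts:
            - name: host-pods
              mountPath: /host_pods
      volumes:
        # A hostPath into the kubelet's pods directory gives this pod read
        # access to every volume mounted on the node, which is what the
        # daemonset backup model relies on.
        - name: host-pods
          hostPath:
            path: /var/lib/kubelet/pods
```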
C
Right, but to get a point-in-time backup you have to take a snapshot first and then read from the snapshot, not from, like, the live file system. And in order to read from the snapshot, you have to take the snapshot that you just took, turn it back into a volume, and then mount and attach that to a pod.
A
C
C
C
I
F
Yeah, sure; I mean, this backup use case was exactly the thing I was trying to present, so yeah.