Description
Meeting of Kubernetes Data Protection WG - 14 June 2023
Meeting Notes/Agenda: -
Find out more about the DP WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
A: Hello everyone, today is June 14, 2023. This is the Kubernetes Data Protection Working Group meeting. I think today we want to continue with our CBT update. Yvonne, do you want to give an update here?
B: Yeah, I moved the old pull request to a new one. The older one was still created from a fork where we no longer have permissions to change the collaborator settings and write permissions.
B: So I just created a fork from my GitHub account, and then I granted, like, Carl and Prasad push permission.
B: And in terms of the KEP document itself, I copied all the metadata, I guess, from the previous KEP to this one. I added some really high-level test plans in the test plan section.
B: I think that part will continue to evolve as more review feedback is captured. I also updated the Alternatives section with all the previous prototypes that we have done, like the aggregated API server and the volume populator, and also the initial REST-endpoint proposal, and provided a bit of information on why they were previously rejected.
B: I didn't go into too much detail; where relevant, I just provided links, because there are so many discussions from those previous proposals. I also updated the Production Readiness Review sections, at least those subsections that are required for the alpha release.
B: The other, more involved checklist items, like the metrics, are not required until beta. Also, parts like the test plan are never quite clear to me in terms of how much detail we need to provide, so I just looked back at some of the previous SIG Storage KEPs.
B: I think most of them are pretty high level, without being boiled down into the details. So if you have any thoughts, just let me know how much detail to include there.
A: Yeah, I can review that. Also, go ahead.
B: I didn't include any of the proposal slides. I'll leave it up to Carl and Prasad to determine when it's the appropriate time to include those, and how to convey and communicate those details.
B: Okay, yeah, under Design Details there's supposed to be a big chunk.
B: Yep, and then also the KEP .md is included as well; the production readiness file is included in the PR as well.
B: The PRR reviewer might change; it used to be John, but I think now they have more people, at least shadowing, in the group.
A: Yeah, we can ping them later, I think, after the design is done.
B: Yeah, I think we chatted about this on Slack. I think the next step would be for us to share this with the CSI community at the CSI community meeting next week.
A: Yeah, all right, let's see if I have a link. Okay, I put a link here. You actually should add a link there right now, right? You attended that meeting before, right?
A: Yes, I was talking about this one, because next Wednesday there is a CSI community meeting. It only happens once a month, so you should have this discussed over there. So add it, similar to this one, at your name; add your CSI spec PR here.
B: Okay, okay. I think it would make sense, yeah. We can include that for sure.
B: I think, you know, right now our current thinking is: we will include the slide deck and the CSI spec PR. Our current thinking is to at least go through the slide deck that Carl has first, because if that changes, the CSI spec will also change, right?
B: That's right, yes; the spec has the lower-level details.
D: I feel like every time there's a CSI spec proposal, people ask: well, how is this going to be used? What is the KEP, so we can go see what the higher-level interface is going to be? But then we say, well, we can't merge the KEP until the CSI spec change is done, and so we have this circular dependency.
A: I think provisional should be fine, if we just merge the KEP as provisional. But if it says implementable, then we need to get the CSI-side PR merged first, right?
B: Yeah, actually, anyway, I'll update the CSI community meeting agenda right after this call. But I want to confirm one thing, Carl and Prasad: will you folks be able to attend next week? Timing-wise, I'll be out of office next week. Would you folks be able to attend?
C: Is it okay, though, that we present what we've gone over, the design, as opposed to the KEP update? That's the main thing.
D: Yeah, I was going to say, for the CSI meeting you really want to focus on the CSI spec changes and try to view it through a CO-neutral lens. It helps to be able to point to Kubernetes as an example of one CO that will implement this, but you want to present it in a way that's not Kubernetes-specific.
D: There will be many people in that meeting who will not have seen this presentation, and in that meeting what we're really concerned about is just: what are the implications for CSI, at that level?
A: So maybe just focus on... basically, shorten those slides, right? Maybe just the...
B: Sorry, hang on a sec. We have a PR in the CSI repo, but it is still based on the former solution.
C: Actually, you know, one thing we have not talked about at this meeting is the proposal for the RPC. Essentially, there were two things which were left dangling in the original PR, and that is: do we do fixed-block versus extent-based, and all that type of stuff. The gRPC proposal I put out in this series of slides has support for that.
C: From the last meeting, none of the slide links have changed yet.
C: So I think it's the RPC proposal; slide number six onwards.
A: We'll do that. Maybe... Ben, you were asking about changing from provisional to implementable in the middle, after the KEP deadline, right? If we do that, I think we have to go through the release team's process for requesting an exception. If that's the case, that's fine; we could give that a try.
C: Okay, so this is the CSI part of the spec, right?
C: Yep, okay, great. So in this one we had proposed adding a SnapshotMetadata service to the spec.md, and in that SnapshotMetadata service there would be two RPCs, GetAllocated and GetDelta: the first to return metadata on the allocated blocks of a snapshot, and the second to get the changes between two snapshots. The first would typically be used for a full backup, and the second for incrementals.
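The two request shapes described above can be sketched roughly as follows. This is an illustrative Go mirror of the in-flight proposal, not the merged CSI spec: the names (GetAllocated/GetDelta, the session token, offset, and page-size fields) are taken from this discussion and may differ in the final .proto.

```go
package main

import "fmt"

// Illustrative mirror of the proposed request messages; field names follow
// the meeting discussion and are assumptions, not the final spec.

// GetAllocatedRequest asks for metadata on the allocated blocks of a
// snapshot (typically used for a full backup).
type GetAllocatedRequest struct {
	SessionToken       string // opaque session token
	SnapshotID         string
	StartingByteOffset uint64 // resume point within the volume
	MaxResults         uint32 // page size per stream message
}

// GetDeltaRequest asks for the changed blocks between two snapshots
// (typically used for an incremental backup).
type GetDeltaRequest struct {
	SessionToken       string
	BaseSnapshotID     string
	TargetSnapshotID   string
	StartingByteOffset uint64
	MaxResults         uint32
}

func main() {
	req := GetDeltaRequest{
		BaseSnapshotID:   "snap-1",
		TargetSnapshotID: "snap-2",
		MaxResults:       256,
	}
	fmt.Printf("delta of %s -> %s, page size %d\n",
		req.BaseSnapshotID, req.TargetSnapshotID, req.MaxResults)
}
```

Both RPCs return a stream of response messages rather than a single reply, which is why the page-size and offset fields live on the request.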
C: These are, I think, relatively straightforward. It's this one where we start getting into... where we have a lot of stuff. This is trying to call out... sorry, let me go back once. This one takes a GetAllocated request, the first one, or a GetDelta request, the second one, and both return streams of data: a stream of GetAllocated responses or GetDelta responses. This is the gRPC stream stuff we talked about. So the GetDelta request...
C: We know these arguments from the discussions: opaque session tokens, identities of volumes and snapshots, etc., a starting byte offset, and max results. It's the returned stuff that I think we need to talk about. So this returns a stream of GetDelta responses, and that's the last thing down here.
C: No, no, sorry. The pagination mechanism is basically this: it returns a stream, and one stream message can contain up to N results, so max results would be the pagination knob. The starting byte offset is so that, if you want to, you can start recovering stuff only after a particular point in the object you're targeting.
D: Well, yeah, that's what I'm getting at. Suppose there's half a million blocks that have changed, so you're slapping that over the gRPC connection, and someone reboots you after a hundred thousand of them. Now you want to pick up where you left off. Are you meant to use the starting byte offset to figure out where you were, so you can continue where you left off? Yes.
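The resume behavior described above can be sketched like this: the client remembers the end of the last block it processed and, after a crash, reissues the request with that position as the starting byte offset. A minimal illustrative helper (the message type is a stand-in, not the real generated gRPC code):

```go
package main

import "fmt"

// BlockMetadata is one changed range: offset and size within the volume.
type BlockMetadata struct {
	ByteOffset uint64
	SizeBytes  uint64
}

// nextStartingOffset returns the offset a client should resume from after
// having processed the given blocks: just past the end of the last one.
// Streams are assumed to deliver blocks in ascending offset order.
func nextStartingOffset(processed []BlockMetadata) uint64 {
	if len(processed) == 0 {
		return 0
	}
	last := processed[len(processed)-1]
	return last.ByteOffset + last.SizeBytes
}

func main() {
	// Suppose the client processed these blocks before being rebooted.
	done := []BlockMetadata{
		{ByteOffset: 0, SizeBytes: 4096},
		{ByteOffset: 8192, SizeBytes: 4096},
	}
	// On restart, reissue GetDelta with this as StartingByteOffset.
	fmt.Println(nextStartingOffset(done)) // 12288
}
```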
C: Of course, the server side will put a cap on max results if it wants to. So what you get back in the callback is the GetDelta response, which is the last structure down here. In this one, I was looking at some of the original documents, and it wasn't very clear whether we settled on extent-based or block-based, but I think the original document had both, so I kind of copied the data types from there. So the data type, essentially, is...
C: I think what we see in a lot of the low-level vendor stuff is that, you know, AWS EBS has got block-based and VMware has got extent-based. So anyway, it would come back with the metadata type, and then we need to know the volume size in bytes; you'd either have to put that into every one of these things or have a separate RPC, which makes no sense, but it's only one integer here. And then there's the repeated block metadata, and the block metadata would have this data structure here, which is a byte offset for this piece of metadata, the size of the metadata, and some vendor-specific field.
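The response shape just described, a per-stream metadata type, the volume size, and a repeated list of offset/size entries, could look roughly like this in Go. Names are illustrative, including the vendor-specific field whose semantics are debated later in this meeting:

```go
package main

import "fmt"

// BlockMetadataType says whether entries are fixed-size blocks or
// variable-length extents.
type BlockMetadataType int

const (
	FixedLength BlockMetadataType = iota
	VariableLength
)

// BlockMetadata is one entry: a byte offset, the size of the range, and
// the vendor-specific field under discussion (opaque; e.g. a per-block
// token on some back ends).
type BlockMetadata struct {
	ByteOffset     uint64
	SizeBytes      uint64
	VendorSpecific []byte
}

// GetDeltaResponse is one message in the returned stream. VolumeSizeBytes
// rides along in every message because, as noted later in the discussion,
// a gRPC method cannot return fixed properties plus a stream.
type GetDeltaResponse struct {
	BlockMetadataType BlockMetadataType
	VolumeSizeBytes   uint64
	BlockMetadata     []BlockMetadata
}

func main() {
	resp := GetDeltaResponse{
		BlockMetadataType: VariableLength,
		VolumeSizeBytes:   1 << 30, // a 1 GiB volume
		BlockMetadata: []BlockMetadata{
			{ByteOffset: 0, SizeBytes: 4096},
			{ByteOffset: 1 << 20, SizeBytes: 8192},
		},
	}
	fmt.Println(len(resp.BlockMetadata)) // 2
}
```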
C: Then that's fine, that's fine. But let's say I wanted to use this API to get stuff back; we don't have a companion API to go with this that would recover the blocks. In the Kubernetes space, we don't have a problem.
C: We would create a volume from the snapshot, mount it, and read the data back, whether or not that's sufficient for the infrastructure. So we could see the blocks, but otherwise we have no companion API to get them. If I look at the spectrum of APIs: the EBS ones, for example, give a block token, and I need that block token to recover the block through the network API.
D: I want to understand this. You're saying that in at least one implementation there's an optimization that can be performed if you have per-block, I guess, vendor-specific metadata, and then a different API where you can just get the actual block without having to mount the volume and read the blocks, and that's somehow faster. Is that what I'm hearing? That's right.
C: I mean, it's not just mount the volume; it's create a volume from the snapshot and then mount that volume. Creating a volume from a snapshot, depending on the infrastructure, can be a cheap or an expensive operation. And beyond that, depending on the API, there's the context of that call. On the EBS API, for example, and I don't know what's behind that token, they return a token; I'm assuming it's got security and other connotations; it expires, and you've got to give that token in the call to get the block.
D: Well, I guess my thinking is: if we think that is a problem worth solving, then we should also solve it in a portable manner, like having a data-plane-level version of this, because everything we've talked about so far has been metadata. Yes.
D: I don't think so, because here, with the storage class parameters, it's the SP that has the magic, and there are many, many different SPs with different magic, with different parameters that mean things, but the clients are all the same. The client doesn't need to know what's going on there. The client of this API would have to know... oh, it would have to know two things.
D: No, because the direction is backwards. Snapshot-class stuff goes from the CO into the SP; this is stuff coming from the SP back to the CO. Now the CO is processing vendor-specific stuff, and we can't write down what it is. That's the fundamental problem.
D: The problem is, if other vendors start doing this too, you're just going to have a giant if-then-else tree somewhere in the code that says: if Amazon, do this; if Azure, do that; if NetApp, do the third thing. That's exactly what we're trying to avoid by having a standard, so you can write it once and it just works.
C: Yeah, so the thing would be: if there were a companion service to get blocks, then the idea was that this BlockMetadata message could be given to it to get that block; you just take this data structure, treat it as opaque, call that thing, and get the block back.
B: Carl, I think even if we have a companion API, the back end would still need to convey this information back to the client first, right, before the client can call the companion.
D: Right, yes. So I guess what I would say is that there's nothing preventing someone from writing an EBS-specific backup tool that talks to Amazon, gets the right information, talks directly to the appropriate APIs, and does a specific implementation for Amazon. You can write that today if you want to; that's great. But CSI is about writing the API that everyone can use. That's why we have it.
C: That's fine, that's fine. Then, I think, we can let the AWS guys come back with some proposal. Let's say we remove this vendor-specific field. Correct, yes. That means, essentially, from a Kubernetes perspective, we know what we have to do: create a volume from a snapshot and then seek and get the data. If there were a future API, then that future API would have to address all the aspects of how you get the context back.
D: Right, you're talking about a data-plane API to actually get the bytes of the diff. That's right. Yeah, and we should build that if there's enough interest, on both the storage vendor side and the backup vendor side, in having a standard way of doing that. That would be great.
D: Okay, yeah, well, we'll have to think about that then. Now, I don't know... I wouldn't have tied the two proposals together like this, because so much time and effort has gone into this proposal, and it feels like we are at a point where it was at least something everyone could agree to: that you could, in principle, write a portable implementation of it. It might not be as good as directly pulling the blocks out of the API, but it would still work.
B: Yeah, just tacking on to that very quickly: people just keep asking us, what can I do with all this metadata if I can't use CSI to pull the data? So there might be a case that we can make there, to say: hey, this KEP is half of the story; we still somehow need to have a story for the other half, the data path.
D: Well, it might not be gRPC, right? I mean, CSI is going to be gRPC, but for this standardized way of actually getting the blocks, maybe we would look at it and say gRPC is the wrong transport, or maybe not. I think that's a decision we would have to look at when we were designing a data API: what is the right transport for moving potentially large amounts of data?
B: Yeah, so for what it's worth, in the current CSI spec PR, under the equivalent of BlockMetadata, we specifically added a token property there to capture that kind of data movement, where the back end gives you back a token and you use it to fetch the data blocks. The assumption we make there is that we try to limit the data movement to, I guess, two strategies. One is that you restore the PVC...
B: ...you mount it on some sort of data mover pod, and you read the offsets and bytes from there. And the other one is what we are trying to solve here: the back end gives us a token, and it's a more efficient way to download those blocks.
B: So we just say: hey, here's a token field; if you have it, you put it in there. I don't know if it is any better, but at least the consumer of it will say, oh yeah, it is a token of a type that I know I can use for data movement. That's kind of what we're talking about here.
C: The semantics are horrible. I mean, I agree with Ben; the semantics become horrible with this vendor-specific field, because not only do we have this vendor-specific field, we're also saying we have support for fixed versus variable. So, for example, if it's an extent-based thing, there's no way to specify a per-block token, etc., that might be required by an API. So it's just weird semantics. So maybe we pull it.
C: That's why I said we need to talk, because the original proposals have this type of data structure, fixed and variable. There are two streams of thought here. If there's a lot of change, for example if three-fourths of the volume changed and it's contiguous, an extent-based representation is amazingly efficient in expressing that change. On the flip side, block-based says: okay, these are literally the blocks I want.
C: So the question is: should we put a stake in the ground and say no, everyone returns blocks, or everyone returns extents? That's what I want to ask here.
D: I think if neither one is clearly better, and it's going to be circumstantial, then it's not a big deal to avoid forcing one, because by including both of them you're saying every backup software must support both, but each SP can pick and choose whether to support one or the other or both. And that's not terrible from where I'm sitting, because the SP might say: oh, I have an extent-based system, I'm always going to do extents.
B: Yes, I think, historically, when this was first brought up, the fixed- versus variable-length proposal, if I remember correctly, the conclusion was that we decided to park it aside for now. It's like an optimization, but there are some underlying assumptions, as we already surfaced here; back then we just decided that it's something that could be added later if needed.
C: Okay, so on this vendor-specific field from the proposal: just to note that all the parameters here are going to be expressed... when the client makes a call to a server, the client has to provide all the parameters in the namespace, in the nomenclature, of the server. So, for example, in the Kubernetes case, the backup application is talking in terms of Kubernetes VolumeSnapshots and Kubernetes volume IDs, etc., and a session token.
C: But when the sidecar talks to the SP, the sidecar has to translate to the SP's language, which is snapshot handles, etc. I don't know if you recall that from the proposal, but that was part of what that meant.
D: Okay, okay, that all makes sense; that's what we always do. I just wanted to understand the structure of that message. Can you go back one more time? So you get a response...
D: It has a single metadata type for the whole response. The volume size bytes is the size of the whole volume, right? And then you have an array of these actual metadata blocks, each of which is just two uint64s, an offset and a size. Correct. Oh, and I guess the thing that stinks is: if it's fixed length, the size will always be the same; it'll be wasting... yeah.
C: ...half your bandwidth. I know, I agonized over that, and then I said, you know, screw it; it's a few bytes wasted per block, and this is all metadata; never mind. You can't have your cake and eat it too. Okay, so the thing is this: the way gRPC works, it doesn't allow us to return fixed properties plus a stream. Either you return a stream, or you return a set of properties. So I'm forced to return everything in that stream.
D: I was going to say, if you did want to agonize over it and optimize the heck out of it, what you would end up with, I think, is two completely separate RPCs, one for fixed length and one for variable length, and then a capabilities API where the client could query which one is supported, so it knows which one to call.
D: It's not that bad. And I'm just trying to think through: if you always have to specify the block size when you're doing fixed length, you lose the whole benefit of doing fixed length, and you may as well just always do variable length. That's the downside of doing it this way: the only benefit that fixed length could possibly have over variable length is the ability to omit the size from every single block.
C: That poses a problem for API consumers, a client, because we don't want it changing midstream. If we just let it be... I mean, is there a stated understanding or specification saying that you can't change it from one block metadata entry to the next?
C: It could cause an issue, but it certainly is smoother; it's easier on the client if one knows up front what it is. But no, it's not that much of an issue if you're going to treat everything as extent-based, and clearly fixed length is just a special case of extent-based; it's easy. In that case, I would assume the client would maintain something like a bitmap or whatever.
D: Okay, so I'm beginning to see now. So what we're saying is: it really is an array of variable-length blocks, but if you specify that the type is fixed length in the response, that's just a hint that basically says: hey, your block length is always going to be the same, and I'm going to send it to you over and over. All right.
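The conclusion above, that a fixed-length response is just variable-length entries with a constant size, can be checked mechanically. A small illustrative helper (the type and constant names are stand-ins, not the spec's):

```go
package main

import "fmt"

// BlockMetadata is one offset/size entry from the metadata stream.
type BlockMetadata struct {
	ByteOffset uint64
	SizeBytes  uint64
}

// honorsFixedLengthHint reports whether every entry has the same size,
// which is all the fixed-length type hint promises: the wire format is
// the same array of (offset, size) pairs either way.
func honorsFixedLengthHint(blocks []BlockMetadata) bool {
	for _, b := range blocks {
		if b.SizeBytes != blocks[0].SizeBytes {
			return false
		}
	}
	return true
}

func main() {
	fixed := []BlockMetadata{{0, 4096}, {8192, 4096}, {40960, 4096}}
	extents := []BlockMetadata{{0, 4096}, {8192, 65536}}
	fmt.Println(honorsFixedLengthHint(fixed), honorsFixedLengthHint(extents)) // true false
}
```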
B: There was, in one of our really old prototypes, where we tried to do an end-to-end flow from the CBT metadata all the way down to reading and seeking the blocks, we used this as the input parameters to a tool like dd and said: hey, now that you have the metadata, go find this offset, it is this long, fetch it, and do something with it. So I think that prototype had some influence.
B: Essentially, we used this to stitch the blocks back together when we restored the volume. So I think that prototype influenced, at least previously, I don't know about this case, but previously it influenced our thinking, based on how the data was being pulled or seeked out of the volume.
B: Sorry, go ahead. So, essentially, based on our current design, there are two gRPC calls: one between the backup software and the CSI driver, and one between the CSI driver and, I guess, the CSI plugin or whatever. Are we expecting to use the same gRPC spec for both calls?
C: Absolutely. If you recall the picture: this API between the client and the sidecar is the RPC, and between the sidecar and the SP it's the same gRPC; it's just that it's now over the Unix domain socket.
B: Okay. I think, previously, again, we don't have to solve or address this right now, but one of the pieces of feedback that I received from James is that the current CSI spec is versioned in such a way that it doesn't expect an external consumer. Not saying that we can't, but he just said that when the discussion comes up, the versioning question will be how we're going to version this gRPC.
B: It will be something that requires quite a bit of discussion, how we version it, because currently we position it in a Unix-socket-call paradigm, and that is going to be different when you expose it to an external client. So again, just a heads-up, I think, for all of us here.
D: I'm confused, because we have versioning pretty much solved for CSI and the way we handle gRPC. We have a mechanism for introducing new fields and new methods, and we have an alpha mechanism so that if we introduce something that we later decide we don't like, we can get rid of it; but once it's approved, you can never get rid of it.
D: So I'm not sure what the versioning concern is. This proposal, if accepted, there's no intent for it to immediately change, right? You could implement it, and it could stay the same forever, in principle.
C: Yeah, my understanding of versioning in CSI is that you can add additional RPCs to a service and advertise them with a capability saying these new RPCs exist, gated by that capability bit, and that's the extent of change. These get cast in concrete once they're accepted.
B: Right, I think, yeah, that versioning...
C: With one caveat: these data structures here can change. I think if we do something correctly up front, say, put some version-type field at the top, then you can add additional fields below, at the end, but the order and existence of these properties cannot change. So let's say I wanted to add another thing: I could put a field number seven over here, after max results, and extend the request.
D: The requirement is that an SP that ignored parameter 7 would still have to function correctly, even if parameter 7 was specified.
B: Right. I think the discussion around the gRPC version was brought up in the context of what alpha means. Right now, the definition of alpha communicates a certain semantic between the CSI driver and the SP. But now, if you expose it to an external client, does that alpha, does the semantic of alpha and beta or whatever, still carry the same meaning behind it?
D: That's all it means. It's just a mechanical way to make sure that if we do yank something and then later introduce a new feature, it won't cause horrible things to happen. And there is no concept of beta; it goes alpha and GA. That's how it works in the CSI spec. I wanted to get back to the slide you were showing about GetAllocated. I just want to understand: this is basically the equivalent of GetDelta compared to an all-zero snapshot?
C: So, I mean, I'll confess to some extent: I was looking at vendor APIs, and the APIs have this type of variance, because sometimes, in essence, the get-allocated request is not necessarily talking about a snapshot; it could be a snapshot, and here it actually is a snapshot. That's true. The APIs I looked at were the VMware APIs, and I think even the EBS APIs.
C: I don't think it will help anybody if you do one and not the other, because, seriously, as a backup vendor, how in the world am I supposed to do an optimized anything if, for the first full backup, I'm going to have to read the entire volume? I mean, that's crazy. I need to know only what blocks were written to the volume.
C: Actually, do we have... let's see, the link to Prasad's code. We had... this is the code we showed last time.
C: And Prasad, follow me here: do you want to see the client or the server? Okay, yeah, the client. Let's start with the client, which is this one. Yeah, in cmd there should be... okay.
C: Over the wire, I believe it uses the same transport; whatever calls are made over here, it uses them to get the data back.
C: Right, so, about the gRPC spec, there was a lot of discussion about streaming. Let's see... I think that closed a long time ago. Yeah, they had examples of streaming. I don't know if there's anything special that I can recall; it's like any protocol where you're reading blocks through a stream: the client has its own state, the server has its own state, and the client will have to retry or something based on its state.
C: So now, you notice, when it's doing... let's go to the client side for this, because on the server side, when the server is sending data, the server has a loop itself.
D: If there was a way to get the same effect without leveraging the stream semantics, maybe we would consider doing that instead.
C: Actually, I think it would be a different proposal. The whole reason we were able to do what we did was because of the stream, so it would be a totally different type of proposal, I think.
A: Thank you. So, okay, we'll try to bring this up at the next CSI community meeting. But if not, that's fine. I think... we're not ready. Are we ready?