Description
Meeting of Kubernetes Data Protection WG - 09 August 2023
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xiangqian Yu (Google)
A
Hello everyone, today is August 9th, 2023. This is the Kubernetes Data Protection meeting. I think today we don't have other things on the agenda; we probably just want to give an update on the CBT KEP. So, Yvonne or Prasad, if you guys want to talk about it.
B
Yeah, we've updated the KEP with the feedback from the SIG Auth team. I believe Prasad has updated his demonstration prototype too.
C
Cool, yeah. Based on the previous discussion I have updated the prototype, and today I would like to show the demo and walk through the sample implementation.
C
Oh yeah, I have shared my screen. Is it visible? Yes? Cool. At a very high level I can just recap what we discussed last time. So this is the new approach; I don't know how many iterations we had, but I guess it's the eighth or ninth. As per the new proposal, what we are doing is this.
C
We are making use of the TokenRequest and TokenReview APIs to generate the token as well as validate the token, and to achieve authentication and authorization without the extra controller we had in the previous proposals.
C
So if I talk about what the end-to-end flow looks like: the backup client would discover the SnapshotMetadataService CR for the driver, and that CR contains the required information for calling the gRPC endpoint. It has the audience string, the CA certificate, and the address of the sidecar service to which we'll be making the gRPC call to get the changed-block metadata. Yeah, it could be any random string, or...
D
Yeah, it can't just be a random one, right? If it matched another CSI driver somehow by coincidence, then it would mess up the authentication flow, so it has to be unique to the CSI driver. One of the recommendations is something like a DNS name. But if you imagine a user installing with Helm, it can be something that the user provides as an override; by default we can somehow extract it from the DNS name or something.
C
Yeah, it has to be unique per driver, that's right. We can decide on the format; it could be a DNS name as well, to make sure that it's unique for each driver. So once the client gets the required information, then for the audience string...
C
...it generates the token. The backup client can use the TokenRequest API to generate a token for the given audience; we can also specify the expiry time and a bound object reference. Once it gets the token, the next step follows.
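For illustration, the TokenRequest call described here can be sketched as a request body posted to the service account's token subresource. The audience value below is an assumption for this example and must match whatever the driver publishes:

```yaml
# Illustrative only: the audience must match the one published in the
# driver's SnapshotMetadataService CR.
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
  - snapshot-metadata.hostpath.csi.k8s.io   # assumed audience value
  expirationSeconds: 600                    # optional expiry
  boundObjectRef:                           # optional: invalidate with the pod
    apiVersion: v1
    kind: Pod
    name: backup-client
```

The equivalent can also be obtained with `kubectl create token <serviceaccount> --audience=<audience> --duration=10m`, as mentioned later in the meeting.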
C
The next step would be calling the gRPC endpoint with the given parameters. We have to specify the audience-scoped token we generated in the first step, as well as the other information like the namespace, base snapshot, and target snapshot, to the external-snapshot-metadata service, which is deployed as a sidecar to the CSI driver and where we have the service-provider implementation for getting the changed-block metadata. The sidecar basically performs the authentication using the TokenReview API.
C
It also validates whether the client identity that we fetched using the TokenReview is allowed to access the VolumeSnapshot resources; that authorization check is done using the SubjectAccessReview API. So basically TokenReview provides the user identity, and we use that user identity to perform the authorization check using SubjectAccessReview: whether the user is allowed to access the VolumeSnapshot resources. And then on the client side...
C
This communication is obviously over TLS, so the CA certificate we got from the SnapshotMetadataService is used to validate the server on the client side. That way we have TLS, and we have also validated the client using the TokenReview and SubjectAccessReview APIs.
C
The sidecar calls the service-provider implementation of CBT and streams the response back to the client. So that's the end-to-end workflow I tried to summarize. If you have any questions around this, we can talk; otherwise I can jump into the demo.
C
So this is where I have added all the prototype source code. In the README I have stated how to deploy this end-to-end prototype, and given an example of how a sample client can be used to get the changed-block metadata. I'll talk briefly about the services: we have the external-snapshot-metadata service, which is deployed as a sidecar to the service provider.
C
For the snapshots, we are just mocking the GetDelta handler; basically we implemented the GetDelta gRPC. Okay.
C
Right, so as we have already seen in the design, these two containers are the two services we see in the diagram. The external-snapshot-metadata service is the sidecar to the service-provider implementation, which is the sample CSI CBT service that mocks the GetDelta gRPC handler; the external-snapshot-metadata service is responsible for proxying the call to the service-provider implementation sidecar.
C
It returns the response back, and it also performs the authorization and authentication checks using the TokenReview and SubjectAccessReview APIs. So I'll briefly talk about the deployments.
C
So I have a namespace, csi-driver, where I have installed the hostpath CSI driver as well as the external-snapshot-metadata service, which has two containers: one is the external-snapshot-metadata sidecar, and the other is the GetDelta gRPC handler — or, we can call it, the service-provider implementation — to get the changed-block metadata. And there is one more namespace, cbt-client.
C
We have created a PVC there; we can assume this is an application namespace where we have some application installed — some database. I have also created a couple of snapshots on the same PVC; you can see there are, I guess, around four snapshots created on the CSI PVC. And I have created the SnapshotMetadataService resource for this driver, the hostpath driver.
C
Right. If you view the YAML manifest for this resource, we can see it has an address and an audience. Obviously I have used a standard string here, but we'll have to decide on the specific format — we could use the DNS name itself — and it also has the CA certificate. All right, so we can move on to the client: what specific things the client does in order to get the changed-block metadata.
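For illustration, a SnapshotMetadataService manifest along these lines might look like the following. The API group, version, and exact field names here are assumptions based on the prototype discussion, not the final KEP API:

```yaml
apiVersion: cbt.storage.k8s.io/v1alpha1            # assumed group/version
kind: SnapshotMetadataService
metadata:
  name: hostpath.csi.k8s.io                        # one CR per CSI driver
spec:
  address: snapshot-metadata.csi-driver:6443       # sidecar gRPC endpoint
  audience: snapshot-metadata.hostpath.csi.k8s.io  # must be unique per driver
  caCert: <base64-encoded CA bundle>               # clients verify TLS with this
```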
C
I'll first run the demo; then we can go through the source code and figure out what the implementation would look like.
C
All right, so what we'll do is run it for the snapshots that we have just listed. On the client part there is a CLI, I would say, which we have implemented. It takes the base snapshot and target snapshot as parameters, along with other information like the client namespace and the application namespace; about the token mount, we will come to that in a bit.
C
We will try to get the changed-block metadata. The base snapshot is this one, there is a target snapshot, the application namespace is the database one, and the backup client runs the CBT client; we have also mounted a service account into the pod, which is the cbt-client service account. If you execute this command, what it does is first discover the driver name from the snapshot. Once it gets the driver name...
C
It searches for the SnapshotMetadataService resource for that driver. Once it finds the metadata service for this driver, it gets the information: the CA certificate, as well as the audience and the address. For the audience it creates the token via the TokenRequest API, and in the response we get the token along with the expiry time.
C
Obviously, we then use all these parameters to call the gRPC endpoint, and this is the sample mock changed-block metadata information that we have implemented in this sample CSI driver, which is streamed back to the backup client, basically.
C
Right, so there are two containers; we'll go through them one by one. This is the external-snapshot-metadata service, which acts as a proxy. If you see the flow here, it first gets the request with the token and the other information: the base snapshot name, the target snapshot name, and the max-results parameter for pagination.
C
So what the proxy service — the external-snapshot-metadata sidecar — does is first discover the SnapshotMetadataService for the driver to fetch the audience string. Once it gets the audience string, it calls the TokenReview API with the token it received in the request as a parameter, plus the audience we fetched from the SnapshotMetadataService.
C
So TokenReview basically validates the token: whether this token is valid and targeted for this audience. In the response we get authenticated: true; we also get the user information — the username, the groups it belongs to, the UID, and all that. It also returns the audiences, which we can again validate against the one we received in the SnapshotMetadataService object, I guess.
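A sketch of the TokenReview exchange the sidecar performs; the audience value is the assumed one from the CR example, and the status fields shown in the comment are the standard ones the API server returns:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <bearer token received in the gRPC request>
  audiences:
  - snapshot-metadata.hostpath.csi.k8s.io   # assumed audience string
# On success the API server fills in status, e.g.:
# status:
#   authenticated: true
#   audiences: ["snapshot-metadata.hostpath.csi.k8s.io"]
#   user:
#     username: system:serviceaccount:cbt-client:cbt-client
#     groups: ["system:serviceaccounts", ...]
```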
C
This returns the whole list of audiences for which the token was generated, and if it was generated for multiple audiences, we can match the audience passed in this request against that list — if there are multiple, it just has to match any one of the audiences for which the token was generated. We don't have to go into that, but I just wanted to share it.
C
So once it gets the user identity, it calls the SubjectAccessReview API to validate whether the user can access VolumeSnapshot resources. Once that is validated, it tries to fetch the snapshot handles, or snapshot IDs, for the snapshot names: it looks up the VolumeSnapshot and VolumeSnapshotContent resources, finds out the IDs, and then uses those IDs for the request.
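The authorization check might look like the following SubjectAccessReview, built from the identity returned by TokenReview; the user and namespace values are assumptions taken from this demo:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:serviceaccount:cbt-client:cbt-client   # from TokenReview status
  groups:
  - system:serviceaccounts
  resourceAttributes:
    group: snapshot.storage.k8s.io
    resource: volumesnapshots
    verb: get
    namespace: database        # the application namespace in this demo
# status.allowed in the response reports whether access is granted
```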
C
After converting the parameters — after mapping the snapshot names to the IDs — it calls the CSI driver's gRPC endpoint to get the actual changed-block metadata, and proxies the response back to the client.
C
And if you look at the CSI driver's logs, we can see here what the request looks like on the CSI driver side; it sends the response back to the service, and then the response is streamed back to the client, right.
C
So this is one way the client can generate a token. There is another way as well: using projected volumes.
C
So the client can either use the TokenRequest API to generate a token, or it can use a projected volume to mount the service account token into the pod. The token can be generated by the kubelet and mounted into the pod when we create the pod.
C
Exactly — so if the pod is deleted, the token becomes invalid automatically, and the client doesn't have to call all these APIs to discover the audience and create the token with the TokenRequest API; you can just read the token from the mounted path and directly call the gRPC endpoint. So we'll quickly try that as well: I'll create this backup pod in the same namespace. I guess we have already seen the spec of the pod.
C
We are using the same sample client, in which we have a CLI gRPC client. It mounts the cbt-client service account, and we are using a projected volume here to mount the service account token, with this audience, to this path, basically. So let's see what happens when we create this pod.
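A minimal sketch of such a pod spec, using the standard projected serviceAccountToken volume source; the image name and the audience string are assumptions for this example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: backup-client
  namespace: cbt-client
spec:
  serviceAccountName: cbt-client
  containers:
  - name: client
    image: example.io/cbt-client:latest        # hypothetical client image
    volumeMounts:
    - name: snapshot-metadata-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: snapshot-metadata-token
    projected:
      sources:
      - serviceAccountToken:
          audience: snapshot-metadata.hostpath.csi.k8s.io  # assumed audience
          expirationSeconds: 600
          path: snapshot-metadata-token        # kubelet refreshes this file
```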
C
Right, so we can see the token is generated and mounted here. The same CLI that I had created for this demo also has these two flags, which tell it to use the projected token from this path. And the parameters are the same: the base snapshot name, the target snapshot name, the application namespace, the client namespace — the list of parameters is the same.
C
If you call this and you go through the workflow: the initial steps are the same — it gets the service address as well as the CA certificate from the SnapshotMetadataService, and we discover that based on the driver name. The difference here is that instead of creating the service account token using the TokenRequest API, we are directly reading it from this path and using that token to call the gRPC endpoint at this address. So these are the two ways the client can use these APIs. All the source code is here in this repo, external-snapshot-metadata; we have the server implementation as well as the client implementation. We can quickly go over it.
D
Sorry, just one quick thing. For what it's worth, one of the good things with the projected service account token is that the backup driver doesn't specifically need an RBAC permission to call the token review — sorry, the TokenRequest — API. Whereas if you invoke the API directly, you need the RBAC permission; but with the projected service account token defined in the pod spec, I appreciate that they don't need to specifically have that RBAC permission.
B
If you have a backup taking a long time — let's say something breaks and you restart using the starting offset, etc. — you just read the fresh token and you're good to go.
D
One thing we got from the SIG Auth group is that there are multiple ways to generate this token; you can even use kubectl, which has a create-token command that a user can use. So basically the message is that there are many options to request a token from the API server, in comparison to before, when we would have had to roll our own token management service.
D
If the backup server called the TokenRequest API directly, the backup server is responsible for refreshing the token. But with the projected service account token, the kubelet will refresh it for the service account, so it will be seamless to the backup software. Okay.
D
But again, these are just multiple options for the backup software; it doesn't affect the CSI driver. It's just many options for the backup client, which is the world that I live in, so this is really nice.
C
Cool, yeah. So, as Evan mentioned, if you want to create a token using the TokenRequest API, we'll have to add these rules and bind them to the service account of the backup client — it needs access to the serviceaccounts/token subresource. But with the projected token, we don't need these specific permissions.
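The rules being shown are of roughly this shape — a namespaced Role granting create on the token subresource; the names here are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cbt-token-requester      # illustrative name
  namespace: cbt-client
rules:
- apiGroups: [""]
  resources: ["serviceaccounts/token"]   # the token subresource
  verbs: ["create"]
```

This Role would then be bound to the backup client's service account with a RoleBinding.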
D
So before that, does anyone have any questions about the demo and the design so far, before we dive deeper into the code?
D
I think — fingers crossed — I feel like we are at a point where we can really scope this to be the alpha release. I know there are still a few requirements and what-ifs here and there, but if possible, if we can just say, hey, this is going to be alpha and anything else will be beyond alpha, that would be great. But anyway — you want to proceed?
F
Well, I wanted to ask: how many organizations do we know of that are likely to implement clients and/or servers of this API, just as proofs of concept — independent implementations?
F
Because that will be the really positive signal: when other people start using it and saying, yeah, this works. So if we identify who those parties are, that will help us get to the next step, I think.
D
Yeah, I can't speak for other organizations, but at least for us at Dell — I live on the client side, so for us there's definitely the need for it — and Tom, who's on the call, is on the CSI driver server side; I think he has some interest in CBT as well. Right, Tom?
F
Yeah, what I'm thinking is: once you have two different client implementations and two different server implementations, and you show that they all interoperate, that's the gold standard. Once you get to that point, you'd say, let's pour the concrete on this and call it done. So I look forward to that — but please continue.
C
Yeah — since you mentioned two different server implementations: the sidecar implementation, the proxy service that we'll deploy as a sidecar, will be common for all; that will be maintained by the community.
F
Yeah, I'm not talking about the community code — yes, we want one copy of that — but you want a couple of different vendors implementing storage implementations of this, and a couple of different backup providers implementing client implementations, and to show that it all works. That's your proof by example that it's the right answer.
C
All right, so I just wanted to quickly walk through the RBAC configuration for the external-snapshot-metadata service. To call the TokenReview and SubjectAccessReview APIs, the service account that this sidecar uses needs rules bound for the tokenreviews as well as the subjectaccessreviews resources. Along with that, I guess almost all the CSI drivers already have permissions for volumesnapshots and volumesnapshotcontents; we can assume it will already have that access. Additionally, we also need to add rules to provide access to the SnapshotMetadataService resource.
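Put together, the sidecar's permissions would be something like the following ClusterRole; the API group for the SnapshotMetadataService resource is an assumption from the prototype, and the name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-snapshot-metadata-runner   # illustrative name
rules:
- apiGroups: ["authentication.k8s.io"]
  resources: ["tokenreviews"]
  verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
  resources: ["subjectaccessreviews"]
  verbs: ["create"]
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotcontents"]
  verbs: ["get", "list"]
- apiGroups: ["cbt.storage.k8s.io"]        # assumed group for the CR
  resources: ["snapshotmetadataservices"]
  verbs: ["get", "list", "watch"]
```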
C
Yeah, we can quickly go over the code. I guess the flow is pretty clear from the logs and the demo, but I'll just walk through the code to reiterate. When the client calls the GetDelta RPC of this proxy service, we have a GetDelta handler where we first perform the authentication checks. For authenticating the request, we first discover the SnapshotMetadataService — we first discover the driver name for this service.
C
It already has the driver name set as a parameter or env variable, since it's deployed as a sidecar to the CSI driver itself.
C
From that we get the audience string, and then we call the authenticator to authenticate this request. In the authenticator — to jump to this — we are calling the TokenReview API with the token and the audiences as parameters, and based on the response we return whether the request with the given token is valid or not. In the TokenReview response we also get the user identity, which we return.
C
Sorry about that. So we got the user info from the TokenReview API, and we use this user info to perform the authorization checks. In the authorization checks, what we do is call the SubjectAccessReview API to check whether the user has access to VolumeSnapshot resources.
C
So you pass the verbs — whether it has access to get the snapshots — the snapshot namespace, the GVR information, and the user info. In the response to this call we get whether the user is allowed to access the given resources in the given namespace with the given verb or not. Once that passes, the next step is straightforward: we initialize the gRPC client.
C
We call the driver's GetDelta RPC and stream the response back to the client. So we are basically receiving a stream from the CSI driver and sending it back to the CBT client; it's kind of proxying the response from the CSI driver to the backup client.
C
I hope that makes sense. Now, if we look at the client...
C
Yeah, so based on the parameters, we call setup-security-access. What we do here is first find the driver name for the snapshot that was passed; then we get the SnapshotMetadataService object for the driver, from which we get the audience; and then we create the security token. If we have passed flags like use-mounted-token, we read it from the file.
C
Otherwise we call the create-service-account-token method, where we are basically calling the TokenRequest API with the given audience and the given expiry; in the response we get the token, and we then use that token to call the gRPC endpoint to get the changed-block metadata. Here we populate all the parameters — the security token, the namespace, the base and target snapshot names — and we make the gRPC call and read from the stream until we get end-of-file. So this is the high-level workflow. As I said, we have mocked the GetDelta RPC for the driver, and if you look at the implementation, again, we are doing nothing special.
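In Go, the client's token-selection step just described might be sketched like this; `securityToken` and the flag semantics are hypothetical stand-ins for the prototype's actual helpers:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// securityToken is a hypothetical helper mirroring the logic described above:
// when a --use-mounted-token style flag points at a kubelet-projected token
// file, read the token from disk; otherwise fall back to a token obtained
// some other way (e.g. via the TokenRequest API).
func securityToken(mountedPath, requestedToken string) (string, error) {
	if mountedPath != "" {
		b, err := os.ReadFile(mountedPath)
		if err != nil {
			return "", err
		}
		// Projected token files may carry a trailing newline.
		return strings.TrimSpace(string(b)), nil
	}
	return requestedToken, nil
}

func main() {
	// Simulate a projected token file for the sake of the example.
	f, err := os.CreateTemp("", "sa-token")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.WriteString("projected-token\n")
	f.Close()

	tok, _ := securityToken(f.Name(), "")
	fmt.Println(tok) // prints "projected-token"

	tok, _ = securityToken("", "requested-token")
	fmt.Println(tok) // prints "requested-token"
}
```

Either way, the resulting bearer token is what the CLI attaches to the GetDelta gRPC call.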
C
We have just implemented a handler for GetDelta that logs the parameters, and this is the mock response, basically, that we return for each call.
C
This is where the service providers will have to implement their own logic to get the changed-block metadata and stream the response back to the proxy service.
G
Hey Prasad — this is my first meeting, so I apologize if any of this has already been discussed, but I've just been looking at the design so far and the code as well. For background, I'm working on backup and restore of VMs, which are surfaced in Kubernetes as custom resources.
G
A lot of this could also be generalized for other resources, but it looks like this is really specific to persistent volumes; there are a few references to volume snapshots, for example, in the code. So I guess my question is: early on, did we discuss making it generic enough that any backup server or backup provider can implement, for example, the GetDelta API and support CBT for other resources?
B
Yeah, that's actually awesome — that's a very interesting point. I think it's possible; we can explore it in future meetings. But essentially, you saw Prasad's sidecar: the sidecar is built over a library and a main program. We can always expose every piece of the code, and all the rest of the stuff is gRPC. So if you have a service which is gRPC, as long as you can connect to it, there's nothing in this design that stops you — nothing that says it has to be a specific thing like a snapshot or something. It's still abstract enough that maybe the discovery mechanism — how you discover the gRPC service — can be tailored differently for, say, your use case.
G
I'll definitely go through the KEP in detail to see, or try to spot, anything here that is specific to volumes. But, for example, I saw a few references to things like volume snapshots, and a volume doesn't really map one-to-one to a VM; VMs have either disks, or I'm sure they can have volumes surfaced as well. So those are the references I'm talking about.
D
I think so far we haven't discussed that, but I think some part of it is generic, and some part will still be specific to snapshots.
D
As you know, getting the deltas between snapshots versus the deltas of, say, a VM — the parameters, everything would be different. At this point, instead of trying to shoehorn it to fit every single potential storage entity out there, I think the focus will be on snapshots. But if you can help us flesh out what it would take to get the deltas between, say, VM images — what the input parameters or requests would look like — then I think that would definitely be interesting. Still, I think it's unlikely that we'll fold that in, and it's unlikely that we will make this KEP a be-all-and-end-all type of GetDelta for anything you can think of.
F
I think this is the right primitive: just focus on how you get diffs of individual volumes and individual snapshots, because everything else is built on top of that, right? VMs can have multiple volumes and multiple snapshots, but a VM snapshot at the end of the day consists of volume snapshots and other stuff, and if you're trying to efficiently back that up, being able to go down to the individual volumes and get the diffs is extremely valuable.
G
So, I mean, there are a few different types of disks available out there in the industry. One of them is first-class, or individual, disks, which can be managed or surfaced in Kubernetes as well, and whose lifecycle is not tied to the lifecycle of a VM. And when Shing says "static disk", that means a disk that comes with the VM and whose lifecycle is tied to the VM.
G
So I guess what I was referring to is: you mentioned snapshots, right? Yes, snapshots work for VMs as well, and if this were generic enough that it was calling a snapshot API on a resource — which could be a volume or a VM or a pod or anything (sorry, snapshots don't make sense for pods, but you get the idea: any runtime engine) — then an implementation could be calling the volume snapshot or the VM snapshot.
G
I'm just trying to see if this was brought up early on. All of that exists at the hypervisor level; it's just a matter of whether somebody tries to surface more — different types of runtimes — in Kubernetes, VMs being one of them, and how data protection would work for them. And I think it's a valid statement that this proposal focuses on just volumes that are managed by CSI — and not every volume in a VM is managed by CSI.
F
It creates an incentive: if you do want to be able to take advantage of this kind of backup technology, you make all of your volumes CSI volumes, right? It creates a path for someone who's sufficiently motivated to go build the right system to take advantage of this, and anyone who doesn't implement it might get left out. That's always the challenge.
D
Kind of, possibly, yeah. So I think the use case you describe is valid, in a sense — again, there are folks on my team at Dell who live in your world — but I think, for all intents and purposes, that is outside the scope of this KEP.
D
The scope is CSI PVCs and CSI snapshots. When you said a snapshot backup of the entire VM, I was thinking along the lines of what you said: it was beyond just all the CSI PVCs that are attached — it's the actual entire image, the entire state of the VM. So that is completely beyond the scope of this KEP, as far as I'm concerned, at this point. I think it's a good story and there's a valuable use case, because we have people on my team trying to solve those kinds of issues. Okay.
G
Understood. So let me just go through the KEP first; I'm okay with the fact that this is outside the scope of this KEP, and we'll see how we can address it. Yeah.
D
Yeah, and for what it's worth, even if your pod or containers are backed by, say, Kata Containers or gVisor or Firecracker — if there is a way to make all the volumes attached to that pod CSI volumes, then the implementation is up to you, over that interface that Prasad showed, the GetDelta. At the end of the day you have to write something over that: gRPC from the client to the CSI driver, and then another gRPC call from the CSI driver over a Unix socket to the plugin.
D
Yeah, okay — I just want to be aware of time. It seems like there's no glaring red flag so far. I think we're going to keep the KEP up to date, and then hopefully we can — whatever the right terminology is — opt in for 1.29. I think 1.28 is coming out in a week or two or three, yeah.
A
What about the CSI spec part — do you need to do an update of that PR?
D
Yeah, I think it has been very detailed — it's all the gRPC spec — so I think we can hopefully copy and paste most of it from the KEP into the CSI spec PR, with maybe a little bit more detail.
A
Yeah, I think the KEP needs to go in first, because the KEP has a deadline that's written down, right? It's not like the CSI spec one, where we have more time. So yeah, make sure the KEP gets in before the deadline.
F
We don't have to block it, but we have to approve it with an exception: if something at the CSI level changes, we have to update the KEP slightly.
A
Right — as long as we know roughly how the CSI spec would look, if you are okay, we can merge the KEP first and go from there. That's what we did for modify volume: that KEP was merged a long time ago, and the CSI spec PR was just merged, like a week ago or something.