Kubernetes Data Protection WG - Bi-Weekly Meeting - 16 November 2022
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
A: Hello everyone, today is November 16, 2022. This is the Kubernetes Data Protection Working Group meeting. Today we just have some updates on the KEP status. We don't have a lot, but of course, if you have any other topic, always feel free to add it to the agenda.
B: Again, they were the folks who have been providing feedback on the CBT KEP, as well as on the CSI API PR that's associated with this KEP. The TL;DR there is that James liked that we included the API server flow control mechanism as a mitigation strategy within the KEP, to regulate the amount of requests and responses.
B: That is, the traffic flowing back and forth between the Kubernetes API server and the aggregated API server. But it is hard to argue with the concern around the sheer number of requests and responses that happen. I don't know if you folks have had a chance to see the most recent threads in the working group channel.
B
So,
even
if
we
take
like
a
volume
of
one
terabyte
which
is
not
uncommon
and
then
we
stay
like
the
block
size
is
about
512
kilowatt
bytes,
you
know
I
know
we
make
assumptions
around
like
okay.
B
Maybe
one
response
will
return
like
10
000,
you
know
CBT
entries,
which
is
what
AWS
EBS
do
you
know
just
plug
in
random
numbers
that
we
can
find
what
is
out
there,
we're
still
talking
about,
like
thousands
of
requests
and
responses
right
flowing
between
the
backup
software
through
the
kubernetes
API
server
and
then
find
finally
to
the
aggregated,
API
server,
and
it's
likely
that
you
know
this
request
and
response.
You
know
whether
you
know
we
reuse
the
existing
TCP
connections
and
do
some
streaming
there.
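To make that back-of-the-envelope estimate concrete, here is a minimal sketch using exactly the assumptions floated above (1 TiB volume, 512 KiB blocks, 10,000 entries per response); these are illustrative guesses, not measured numbers:

```go
package main

import "fmt"

func main() {
	const (
		volumeBytes    = 1 << 40    // 1 TiB volume
		blockBytes     = 512 * 1024 // 512 KiB CBT granularity (the EBS-like guess)
		entriesPerPage = 10_000     // assumed changed-block entries per response
	)

	blocks := volumeBytes / blockBytes // 2,097,152 possible changed-block entries
	pages := (blocks + entriesPerPage - 1) / entriesPerPage

	// Worst case (every block changed): about 210 request/response round
	// trips per volume per backup, each relayed through the Kubernetes API
	// server to the aggregated API server. Across hundreds of volumes in a
	// backup window, that multiplies into the thousands discussed here.
	fmt.Println(blocks, pages) // 2097152 210
}
```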
B: So yeah, we explored, we chatted, I'm floating ideas around, to see how we can handle this. One thing that Yang was talking about was: is it possible to expose the CSI RPC directly to the user, bypassing the Kubernetes API server? So it just ends up being a gRPC/protobuf request and response between the backup server and the CSI driver.
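As a rough illustration of that direct path, the service below is purely hypothetical; the names and messages are invented for this sketch and are not the actual CSI spec PR:

```go
// Hypothetical sketch of a driver-served changed-block RPC. In the real
// proposal this would be a protobuf-defined gRPC service; here it is
// shown as a plain Go interface for brevity.
package cbt

import "context"

// ChangedBlockEntry describes one changed extent between two snapshots.
type ChangedBlockEntry struct {
	Offset uint64 // byte offset of the extent on the volume
	Size   uint64 // length of the extent in bytes
}

// GetChangedBlocksRequest asks for the delta between two snapshots,
// paged via StartOffset and MaxEntries.
type GetChangedBlocksRequest struct {
	BaseSnapshotID   string
	TargetSnapshotID string
	StartOffset      uint64
	MaxEntries       uint32
}

type GetChangedBlocksResponse struct {
	Entries    []ChangedBlockEntry
	NextOffset uint64 // zero when there are no more pages
}

// ChangedBlockService is what the backup server would call on the CSI
// driver's endpoint directly, with no Kubernetes API server in the path.
type ChangedBlockService interface {
	GetChangedBlocks(ctx context.Context, req *GetChangedBlocksRequest) (*GetChangedBlocksResponse, error)
}
```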
B: The main concern there is real: to date, CSI hasn't been used that way, and it has a very different versioning scheme.
B: Unlike the Kubernetes API, where there's alpha, beta, GA, the user has certain expectations about directly accessible endpoints and APIs, how they are versioned, and how they're deployed. And when we look around, besides CBT, or maybe, potentially, changed file tracking that follows, we just don't see any other CSI feature that would require exposing a subset of the CSI RPCs directly.
B
So
there's
one
you
know
alternative
that
we
just
talked
about
and
then
James
also
brought
up
the
idea
of
what
is
like
we
use
like
you
know
we
used
to
argue.
We
still
deploy
an
aggregate
API
server,
but
we
Implement
some
sort
of
permanent,
redirect
request
response,
some
type
of
exchange,
Seneca,
300
kind
of
back
to
the
client
and
then
redirect
the
all
the
subsequent
requests
to
another
endpoint
directly
out
of
that
endpoint
directly.
B: That didn't work because, as you folks might remember, the newer Kubernetes API server now rejects all redirect responses from aggregated API servers, in response to a CVE that was filed back in September. So yeah, that's kind of the recent chat around that.
C: Yeah, the records are supposed to be extent-based, right? So if multiple contiguous blocks get changed, you should get one record back for that extent. Your worst case is that every other block got touched, and that's your absolute worst, because otherwise, if every block on the disk gets touched, you should get one extent back that covers the entire disk.
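A minimal sketch of that coalescing, assuming the driver sees the changed blocks as a sorted list of block indices (the types here are invented for illustration):

```go
package main

import "fmt"

// Extent is a run of contiguous changed blocks: [Start, Start+Count).
type Extent struct {
	Start uint64 // first changed block index
	Count uint64 // number of contiguous changed blocks
}

// coalesce merges a sorted list of changed block indices into extents.
// Contiguous blocks collapse into one record; alternating blocks (the
// worst case discussed above) yield one extent per changed block.
func coalesce(blocks []uint64) []Extent {
	var extents []Extent
	for _, b := range blocks {
		if n := len(extents); n > 0 && extents[n-1].Start+extents[n-1].Count == b {
			extents[n-1].Count++ // extend the current run
			continue
		}
		extents = append(extents, Extent{Start: b, Count: 1})
	}
	return extents
}

func main() {
	// Every block touched: one extent covers everything.
	fmt.Println(coalesce([]uint64{0, 1, 2, 3})) // [{0 4}]
	// Every other block touched: worst case, one extent per block.
	fmt.Println(coalesce([]uint64{0, 2, 4, 6})) // [{0 1} {2 1} {4 1} {6 1}]
}
```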
B: Would that be provider-specific? I don't know. I mean, we can update the CSI sidecar to be more intelligent that way.
B: Yeah, I think at one point we did bring this up in the very early design draft, and I think that would mitigate part of it. I mean, again, I think it's something that we definitely want to account for.
B: Like you said, in the worst-case scenario it will still be every other block, and then maybe from 2,000 request-responses we go down to, we halve that to 1,000, because...
C: Well, that's your absolute worst, and very few applications are going to touch every other block. It also depends on how much time there was between snapshots. But you can take the worst case; that's the boundary, yeah.
B: I think the worst case is the key concern here, because, and this is what I brought up in the Slack thread, do we, at least for alpha, want to specify a threshold on the volume size, like one terabyte? Because if you have, say, a 10 or 100 terabyte volume, it will still continue to grow in an unbounded fashion, right?
B: And the 512-kilobyte block is also arbitrary, right? 512 kilobytes is what AWS EBS uses; I don't know about, for example, VMware.
C: It's actually a lot smaller in VMware. But you're really looking at worst-case scenarios, and you get more benefit on larger volumes, because you just can't write 100 terabytes in a reasonable amount of time. So if your backup is taking place, say, once a day, you usually don't expect to write more than 10% of the volume.
C: So the worst case is you manage to write every other block between snapshots, or, you know, before the first snapshot.
D: Yep, I have a quick question. Regardless of whether, at the end of the day, we choose to implement a Kubernetes-native API or not, CSI itself is agnostic to Kubernetes. Should we take the route of having the API defined in CSI first, and then think about the Kubernetes implementation? Of course this is a very important use case, but it looks like these can well be two things that serve the same purpose while facing different platforms.
A: I think when you submitted the PR in the CSI spec repo, the reviewers there wanted to see the end-to-end workflow. They want to understand that.
A: Yeah, we have seen one of those cases, right? There are some PRs submitted without a KEP to start with, and then they're asked for one.
D
Is
just
to
me
that
there
are
restrictions
in
kubernetes
right,
whether.
F
A
D: So there are restrictions over there, things we sometimes cannot bypass, and we will be facing pushback from the Kubernetes community: concerns over security, concerns over load, no matter how you do paging, etc. Now, the issue is that CSI is supposed to be platform-agnostic, and this is a very legit use case. Otherwise vendors will be forced to fall back to vendor-specific APIs for the backup workflow. So yeah, I was just thinking, now, if...
A
You
yeah
I,
understand
I,
think
definitely
what
you're
saying
makes
sense,
but
I
actually
have
a
a
very
recent
case.
Well,
I
was
trying
to
promote
what
a
very
small
feature
TGA
in
CSI
spark.
It
got
blocked
just
because
we
have
not
figured
out
how
to
use
that
field
in
kubernetes
as
a
first
class
field.
It's
not
like
it's
actually
used
in
kubernetes.
It's
just
not
not
a
direct
field
in
TVC
itself,
then.
For
that
reason
it
got
blocked.
B
Yeah
we
have
for
what
is
worth
like.
We
already
have,
like
you,
know,
a
PR
into
the
the
CSI
spec
on
on
the
front.
So
yeah,
like
you
know,
if,
at
the
end
of
the
it's
the
end
of
the
day
like
are
we
just
gonna,
get
asked
like
okay?
How
do
we
expect
the
users
to
use
it
if
they
use
it
directly?
I
think
that's
like
I
wanted
to
cons
the
main
concerns
that
James
pointed
out.
B: If they use it directly, how do we version it? Currently the RPC spec is versioned differently from a typical Kubernetes API. But yeah, for what it's worth, the PR is there and we've addressed all the feedback so far. And I guess, Dave, to your point, having some sort of extent mechanism to combine blocks into one or fewer records...
B
The
I
guess
like
yeah
the
the
follow-up
question
there
is
like
because,
like
we
have
to
make
assumptions
right
or
like
the
volume
size
and
like
the
block
size,
and
there
is
just
a
little
bit
of
it-
feels
like
there's
no
predictability
there
like
once
you.
We
said
this
okay,
even
though
it's
Alpha
once
you
push
it
out
there
and
release
it,
and
people
put.
C: I mean, but we can collapse that. The CSI driver can certainly collapse things together; it can look for extents even if it's getting back a stream of bits. I'm writing a note for the channel, but I just thought I'd toss it out here: the only actual data that we need is one bit per block, written or not written. That's all we need.
C: Nope, because you give back the entire bitmap, the entire bitmap for the disk, right? Every 512-kilobyte block is one bit. So it's one bitmap for the entire disk saying which blocks are written, and there's no addressing information, because you get the address from the offset in the bitmap.
C: I understand, but I'm just saying: start from what our absolute minimum required data is. It's one bit per block; that's what we need. And so we could, for example, chunk that: chunk the disk into multiple regions and give back a bitmap per region. Like, if we gave back, say, 1,024 records, that gives us, well, 256 bytes per record, or 256 records with 1,024 bytes each.
B: So what does... I guess I'm not able to envision what that bitmap looks like. Is it like... so, in my mind...
C: Yeah, it's a bitmap. So say, for example, you have one byte and you have eight blocks, right? Bit zero says whether block zero got written or not, bit one whether block one got written or not, etc.
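A minimal sketch of that addressing scheme, mapping a block index to a (byte, bit) pair, with the 512 KiB block size assumed earlier (names invented for illustration):

```go
package main

import "fmt"

const blockSize = 512 * 1024 // 512 KiB per block, one bit per block

// markChanged sets the bit for the block containing byte offset off.
func markChanged(bitmap []byte, off uint64) {
	block := off / blockSize
	bitmap[block/8] |= 1 << (block % 8)
}

// changed reports whether block index b is marked as changed.
func changed(bitmap []byte, b uint64) bool {
	return bitmap[b/8]&(1<<(b%8)) != 0
}

func main() {
	// One byte covers eight blocks, exactly as described above.
	bitmap := make([]byte, 1)
	markChanged(bitmap, 0)           // block 0
	markChanged(bitmap, 3*blockSize) // block 3
	fmt.Println(changed(bitmap, 0), changed(bitmap, 1), changed(bitmap, 3)) // true false true
}
```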
D: Sorry, so this saves space, right? If you think about it, what he was trying to say is that you don't carry any block location information. You just query it based on indexing and decide whether this block has been marked dirty or not in between two snapshots. Right?
C: So, for example, we can break this down: each record could cover, like, a gigabyte, or whatever, pick a number. Say we did a record per gigabyte: that's 1,024 records per terabyte disk, maximum, and you just skip records where there were no changes. And if there's at least one change in that segment, then you get back the 1,024-bit bitmap, the bitmask, for that segment.
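A sketch of that sparse, per-region encoding. The region and block sizes here are assumptions for illustration (1 GiB regions with 512 KiB blocks give 2,048 bits, i.e. 256 bytes, per region bitmap; the exact widths depend on the granularity chosen):

```go
package main

import "fmt"

const (
	blockSize       = 512 * 1024             // bytes per block
	regionSize      = 1 << 30                // 1 GiB per record/region
	blocksPerRegion = regionSize / blockSize // 2,048 blocks -> 256-byte bitmap
)

// RegionRecord is one entry in the sparse list. Regions with no changes
// are simply omitted, so an idle volume returns zero records.
type RegionRecord struct {
	RegionIndex uint64 // which 1 GiB region of the volume this covers
	Bitmap      []byte // one bit per block within the region
}

// sparseRecords keeps only regions containing at least one changed block.
func sparseRecords(regions [][]byte) []RegionRecord {
	var out []RegionRecord
	for i, bm := range regions {
		for _, b := range bm {
			if b != 0 {
				out = append(out, RegionRecord{RegionIndex: uint64(i), Bitmap: bm})
				break
			}
		}
	}
	return out
}

func main() {
	// Three regions; only region 1 has a change, so only it is returned.
	regions := make([][]byte, 3)
	for i := range regions {
		regions[i] = make([]byte, blocksPerRegion/8)
	}
	regions[1][0] = 0x01 // first block of region 1 changed
	fmt.Println(len(sparseRecords(regions))) // 1
}
```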
D: If they can conform with that format... I mean, it's...
C: Yeah. I mean, if the concern is going to be the amount of records returned, that's really the worst case, but we can deal with the worst case. If we just went to returning a potentially sparse list of bitmaps, it'll be considerably smaller. It's still a large number of records; that's why, even with a thousand records, we don't want to put a thousand records into the API server per snapshot. But bandwidth-wise it's pretty tractable.
B: Right, I think, yeah.
D: Well, if you can reduce the payload, then the pagination can in effect be much larger, right? So in this case it actually will reduce the number of requests as well.
C: Yeah, so I'm thinking, in this case it'd be like a thousand records. We could just pick... well, there are 256,000-odd 512k blocks, so chunk it however you want to split the disk up. You could split that one terabyte into 256 records, right? That's a thousand blocks per record...
C: So it's 256.
C: So if you have a terabyte volume, you have a billion-ish blocks, right? And then if you go one bit per block, you divide that by eight, which is 256... does the math work? Or no, it's 64. Wait. Damn it.
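The live math is easier on paper. A quick sketch of the bitmap sizes, using the 1 TiB volume and 512 KiB blocks assumed earlier, plus 512-byte sectors for contrast (the granularity EBS advertises to the OS, mentioned later in this discussion):

```go
package main

import "fmt"

func main() {
	const TiB = 1 << 40

	// At the 512 KiB CBT granularity discussed above:
	blocks := TiB / (512 * 1024)  // 2,097,152 blocks (~2 million)
	fmt.Println(blocks, blocks/8) // bitmap: 262,144 bytes = 256 KiB

	// At 512-byte sectors:
	sectors := TiB / 512            // 2,147,483,648 sectors (~2 billion)
	fmt.Println(sectors, sectors/8) // bitmap: 268,435,456 bytes = 256 MiB
}
```

Either way the whole-disk bitmap is compact: 256 KiB at 512 KiB blocks, or 256 MiB at 512-byte sectors.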
C: It works better in the worst case, and in the best cases you return more data, but it may not matter that much. You're more likely to get the full amount of data, because it's more likely that you would write one block in every segment across the entire disk, so you hit every big segment. But there's probably something clever we could do in there too; we could do extents or something.
C: Yeah, because it's cheaper to do the bit offsets, the implied offset, than it is to actually write out all the offsets. It's optimizing for the worst case rather than the best case, because in the best case you have one block that changed somewhere on the disk, and you can represent that with one address and length.
B: So once the backup software or the user gets the bitmaps back, it would have to deduce the location of the blocks based on the bits in the map, right?
B: Right, then the data mover pod would be the one trying to, yeah, locate them.
B: Yeah, I think that sounds feasible. The only thing, and I don't know how common or how corner-case that particular case is, is that, as you might recall, some APIs rely on things like a data token to get the actual data back.
B
It
would
be
refined
to
one
particular
type
of
this
is
exactly
how
you're
going
to
use
the
the.
B
Point
yeah
I,
guess
it's
really
depending
on
like
which
one
is
the
common,
which
one
is
the
corner
case
really
so
I'm,
okay
with
like
hey,
you
know
like
for
phase
one.
We
don't
support
their
token.
It's
a
future
problem,
so
I
yeah
I,
don't
have
for
me
like
the
bitmaps
approach,
seems
like
reasonable
to
me
at
least
that's
my
perspective.
D: At the end of the day it's the storage vendors that implement this, so whether their internal systems can represent things like that, I'm not sure; that's the major concern, right? This actually requires them to label, or have an internal, kind of sequential identity for each of the blocks in the system.
C: Well, I know vSphere; we wound up with a bitmap down there, because it's compact.
B: So Dave, when you say vSphere, are you referring to stuff like VADP and so on?
C: So under VADP, right down at the bottom level, changed block tracking is implemented inside of vSphere, and that's for the on-disk, you know, the quote-unquote standard vSphere snapshot, not the ones that the arrays do directly. But yeah, there's basically a bitmap down there at the bottom, because it needs to be compact; otherwise you wind up taking as much space for the changed block list as for the actual disk.
B: Depending on which storage providers you ask. I guess, to Shang Chen's point also, I think AWS EBS uses their own logical index, which may or may not necessarily be the index of the block on the disk. But I wonder if we're at a place where we can just say, hey, we want to focus on this particular kind of pattern that we know will work with enough providers' implementations, or at least one, I guess. Personally I...
B: ...just don't have the data to say, but I think this bitmap approach is definitely very simple. Yes, it's easy to reason about from an API consumption perspective as well as an implementation perspective.
B
You
folks
have
been
on
a
call
I've,
any
other
thoughts,
no
feedback
on
this.
E
So
it
will
be
CBT
service
responsibility
to
calculate
that
bitmap
right.
C: Yeah, it's basically the format we're returning. The initial proposal was a list of extents, which is pretty good in the average case, honestly, but there are some concerns about the worst case.
A: I'm just wondering how many CSI drivers can implement this.
C
Actually,
we
could
even
well
you
know,
honestly,
given
a
list
of
extents
I
can
build
a
bitmap
pretty
easily.
We
could
even
implement
this
at
the
at
the
you
know
in
in
the
kubernetes
side,
it's
really
not
hard
to
take
a
list
of
extents
and
build
a
bit
now
as
long
as
they're,
in
sequential
order.
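A minimal sketch of that conversion, reusing the hypothetical Extent type from the earlier sketch:

```go
package main

import "fmt"

// Extent is a run of contiguous changed blocks: [Start, Start+Count).
type Extent struct {
	Start, Count uint64
}

// extentsToBitmap builds a one-bit-per-block bitmap for a volume of
// totalBlocks blocks from a list of extents sorted by Start.
func extentsToBitmap(extents []Extent, totalBlocks uint64) []byte {
	bitmap := make([]byte, (totalBlocks+7)/8)
	for _, e := range extents {
		for b := e.Start; b < e.Start+e.Count; b++ {
			bitmap[b/8] |= 1 << (b % 8)
		}
	}
	return bitmap
}

func main() {
	bm := extentsToBitmap([]Extent{{Start: 0, Count: 3}, {Start: 6, Count: 2}}, 16)
	fmt.Printf("%08b\n", bm) // [11000111 00000000]: blocks 0, 1, 2, 6, 7 are set
}
```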
D: Okay, then there shouldn't be any parallelization per se; you have to call this one by one.
C
Right,
well,
you
can
call
in
you
can
call
with
an
offset.
D: If it is, you know, sequential, that's great.
B
Yeah
in
the
current,
like
in
The
Listener
prototype
that
we
did
like
we
didn't
make
any
assumptions
about
whether
they
are
random
or
sequential
order.
We,
just
whatever,
like
the
store
storage
on
layers,
give
feedback
to
us.
We
just
pass
it
back
to
kubernetes.
F
So
I
think
there's
two
two
ways.
This
is
typically
implemented.
One
is
a
journaling
technique
and
then
then
it
might
be
hard
to
to
get
them
in
order
without
some
post-processing
saying
this
TSI
driver,
the
other
way
is
copy
on
right
technology.
Basically,
where
you
maintain
a
bit
per
block
anyway,
and
when
you
duplicate
the
block
you
set
the
bit
so
then
they're
stored
directly
as
bits.
D: I think we should definitely try Dave's method, right? There are some details that need to be fleshed out, but this sounds pretty promising.
D
From
my
perspective,
if
the
only
the
main
concern
is
really
about
payload
plus
number
of
requests,
if
we
can
compact
this
in
a
way
to
extreme
where
we
think
we
get,
we
can
get
agreement
from
like
many
storage,
Windows,
that's
kind
of
doable
or
maybe
we
can
either
to
Davis
Point,
implement
this
by
ourselves
via
some
library
that
can
they
can
just
code
this
Library
by
themselves
and
exposing
CSI
I
I
guess
this
is
going
to
greatly
reduce
the
number
of
requests
as
well
as
the
payload.
D
So
now
it's
it's
really
up
to
us
to
make
flushing
out
how
to
capture
all
these
cases.
The
token
one
I
have
no
idea
that
one's
a
real
one
yeah.
Is
that
a
token
per
block
or
is
a
token
per
request?
It's
token.
D: Actually, EBS advertises 512-byte sectors to the operating system, so yeah, that's where the 512 comes from.
B
Yeah
I
think
like
if,
like
say
for
like
one
terabyte
volumes
and
then
we
can
fit
like
potentially
fit
entire
bitmap
into
like
one
response:
I
think
that
would
be
like
gold.
That
would
be
great
and
then
yeah
I.
Think,
like
you
know
now
it
becomes
that
I.
Think,
like
the
rest,
you
know
potentially
like
the
CPT
cycle,
can
be
the
one,
the
Stitch
about
the
raw
response
together
and
present
a
bitmap,
possibly
like.
C: But if I'm reading from a block device, there's no way to pass the token down, right? So I just need to know the offset to read, sure.
D
Oh
I
see
I,
see
you
you
just
you
know
just
completely
kind
of
make
a
new
volume
out
of
it.
Okay,.
B: I feel like that is a generic enough approach that you can just tell users: hey, this is how it's going to work. You have to restore the snapshot, and then use those bitmaps, the information that we give you, to complete the rest of their path.
B
Yeah
I
think,
like
data
token,
is
like
future
things:
anyways,
cool,
I,
okay,
I
think,
like
I,
think
we
have
enough
information.
B
What
the
next
Step
prototyping
is
going
to
look
like.
D
Sorry,
sorry,
unfortunately,
how
open
source
Community
works,
everybody
gotta
say
no,
it's
good!
It's
one
good
discussion,
yeah.
B
No
I
think
it's
great,
you
know
I
think
that's
part
of
what
make
kubernetes
great
right,
like
everyone
get
to
share
ideas,
and
at
least
you
know
we
choose
and
pick
what
is
best
for
the
community
at
Large,
okay,
I
think
like
that's
it
for
now.
B
You
know,
and
then
I'll
just
maybe
chat
with
some
of
the
folks
to
see
what
the
potential
next
prototype
is.
Gonna
look
like
and
so
stay
tuned.
Okay
back
to
you.
A: This is provisioning a volume from a cross-namespace snapshot. As you all know, when you create a PVC from a VolumeSnapshot right now, it has to be in the same namespace. But this new KEP actually allows you to provision a PVC in a different namespace from the original golden snapshot. The actual implementation should change a little bit after this KEP is merged, but at a high level, this will require the ReferenceGrant CRD. Currently this is an alpha CRD under Gateway networking.
A
So
so
there
is
some
discussions
moving
this
one
to
seek
off,
because
it's
a
bit
weird
for
for
us
to
depend
on
networking
crd,
but
this
one
basically
allow
you
to
say
hey.
Let
me
transfer
this
snapshot
to
be
used
by
this
PVC,
which
is
in
a
different
namespace,
but
as
you
see
this
one
race
and
there's
a
grand,
it's
a
reference
Grant.
So
from
from
this
Precision
volume
claim
to
this
volume
snapshot
so
there
are
there
are
there
are
two
names
in
the
production
in
space.
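For reference, the pairing looks roughly like the following (the names and sizes are illustrative; `dataSourceRef.namespace` is the alpha cross-namespace data source field, and the ReferenceGrant lives in the namespace that owns the snapshot):

```yaml
# In the namespace that owns the snapshot: grant PVCs in "dev"
# permission to reference VolumeSnapshots here.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-dev-pvcs
  namespace: prod
spec:
  from:
  - group: ""
    kind: PersistentVolumeClaim
    namespace: dev
  to:
  - group: snapshot.storage.k8s.io
    kind: VolumeSnapshot
---
# In the consuming namespace: a PVC whose dataSourceRef points across
# namespaces at the golden snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
  namespace: dev
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: golden-snapshot
    namespace: prod
```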
A
And
I
think
this
is
the
the
place.
Well,
it's
actually
got
changed,
so
the
if
you,
if
you
remember
that
I
I,
believe
books
here
are
familiar
with
the.
How
how
you
provision
a
PVC
file
on
a
snapshot
know
the
data
source
field.
This
is
for
one
on
Snapshot,
and
previously
there
is
a
there
is
a
data
source
graph
field
that
is
added
for
the
volume
populator.
A: Okay, yeah. So I think that was the use case we looked at when we moved that field to beta, but there is not a lot of adoption yet. So when this new proposal for cross-namespace provisioning came in, initially they were proposing to add another field called dataSourceRef2.
A
Then
that
means
there
are
like
three
data
source
fields
in
PVC
stack,
so
when
when
they
submitted
this
TR
invitation,
there
are
some
new
comments
asking
them
to
actually
just
to
change
the
existing
data
source
graph
to
a
so
from
a
local
object
reference
to
a
object
reference,
the
local
object.
Reference
does
not
have
a
namespace
field,
but
object.
Reference
has
a
name
space
field
so
because
this
is
your
data,
and
also
adoption
is
not
that
much
so.
E
G: I wanted to ask: do we plan to use the same infrastructure, or maybe a different infrastructure, to also allow multi-namespace attachment of PVCs, not just from a snapshot data source? For example, a shared file system. If a user has a PVC pointing to a PV backed by, like, NFS or something like that, and then wants PVCs or pods to be able to access the same data source, like a read-write-many volume, from another namespace. Today it's sort of...
A: You're saying, like, bind to... how do you say that? Are you talking about multiple PVCs to the same PV, or like a...
G: Yeah, I know, but we see customers... I'm from NetApp, and we've...
G: For some of our customers, in the latest release of our CSI driver, we sort of worked our way around that. But since the PVC is a namespaced object, and since there's a one-to-one binding, if you have a shared file system like NFS, sometimes customers or users would like to mount the same data...
G: ...since it's a shared file system, from several namespaces in that cluster. And today it's impossible. You can do that with static provisioning, but they like to work with the CSI features and they don't like to use static provisioning. So...
G: Could this infrastructure be used to point not just to a snapshot, but maybe to a different PVC in another namespace or something?
A
Yeah
I,
that's
probably
possible
I.
Actually
don't
I
forgot
how
if
this
would
work
for
another
PVC
I
need
to
I
need
to
check
that
part.
So
I
will
add
a
note
there.
A: That part was kind of not in the original proposal, but there were a lot of changes when this one got implemented, so I think some restrictions got removed. Initially, this dataSourceRef field only allowed certain CRDs as a source; you couldn't use just anything, like another PVC. But I believe that got removed. Well, let me double-check on that. So basically, what you want is... let's see, I'll just add a question here.
A
I
need
to
check
this
one
I
do
not
remember
if
the.
If,
because
I
initially
was
saying
that
it
will
be
too
complicated,
it
could
involve
secrets,
and
things
like
that.
So
we
decide
to
just
limit
this
the
snapshot
case,
but
then
you
know
kind
of
now
combine
these
two
together.
I
actually
need
to
I
need
to
double
check
on
this
one
I
I
do
not
know
if
there's
any
like
restrictions
in
the
implementation
part
so
yeah.
Let
me
double
check
on
that
and
get
back
to
you
right.
A
So
now
this
one
right
now
it's
not
going
to
be
affecting
the
the
status
was
filled
right.
I
know
a
lot
of
you
already
have
used
this
field
in
production.
For
the
you
know,
bottom
snapshot
for
PVC
front
room
snapshot,
so
this
will
always
be
there
because
you
know
this
is
all
the
ga
field.
A
It's
it's
going
to
so
like
it's
going
to
do
some.
You
know
some
logic
to
copying
from
this
field,
to
this
new
field
and
also
from
the
new
field
back
to
that
field.
If
the
new
field
does
not
have
namespace
specified,
but
this
this
field
is,
will
always
be
there
so
right
now,
you
know
you
don't
have
to
make
any
changes
to
your
to
your
code,
but
I
just
want
to
bring
this
to
your
attention,
because
this
might
be
useful.
A
A: Now, this is still alpha. We're actually still implementing this in the sidecars; it can't provision yet, that's still ongoing. But once it's there, you can try it out and see how it works.
A
You
know
eventually,
when
this
goes
GA,
then
you
know
you
could
also
use
this
instead
of
inside
this
exists
build
because
it's
going
to
be
equivalent
right
once
it's
a
once.
This
one
is
ready,
but
but
if
you
want
to,
you
know,
keep
this,
you
can
always
keep
this.
This
is
this
will
always
be
there.
A
A: Is that clear? Any other questions? I just thought this might be interesting, because I think there are a lot of use cases for this cross-namespace transfer.
A: Also, this is a CRD in networking. There are some discussions with SIG Auth about moving this to SIG Auth, so that it looks more like a common API. It doesn't make sense for storage APIs to depend on a networking API. And also, this is not even a core API, right? If it were a core API, that would be fine, but this is a CRD, and sometimes you don't know if they are making changes, and suddenly it's not working anymore. So yeah.
A
So
we
want
this
one
to
be
moved
to
some
common
place
before
we
move
this
feature
to
Beta
yeah.
So
but
right
now
this
is
just
for
for
Ava.
This
is
this
is
well
it's
coming
from.
This
is
a
alpha,
Gateway,
API
right,
so
user
will
have
to
do
this
reference
Grant,
it's
not
like
you-
can
just
use
this
directly
like
their
provision
directly
provision
a
PVC
if
I
want
to
snapshot
in
the
same
namespace.
So
I
don't
have
this
extra
step
this
one.
A
We
do
require
user
to
do
this
extra
step,
so
this
is
just
for
the
security
reason
right
that
we
don't
want
this
one
to
be
so
easy,
so
they
have
to
know.
This
is
a
you
need
to
do
this
Grand
and
accept
then
allow
this
to
happen.
B
Hey
I
I
need
to
drop
yeah.
A
I
think
so,
oh
sorry,
actually
we
already
passed
them
actually,
thanks
for
reminding
I
need
to
join
another
meeting,
all
right
great
discussion.
Thank.