From YouTube: Kubernetes Data Protection WG Bi-Weekly Meeting 20210825
Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 25 August 2021
Meeting Notes/Agenda: -
Find out more about the DP WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
A: Hello everyone, today is August 25th, 2021. This is the Kubernetes Data Protection Working Group meeting. I think today we do have something on the agenda, but I do not see it.
A: So Sushanti, are you all set with this section about the data protection definition? Is this all set, or are you still working on it?
D: I do have some diagrams that I want to ask about before I insert them: do we have some kind of common theme, or common color theme? Because my favorite color is blue, so I do a lot of blue. Do we have any restriction, or should we just pick a style per diagram only? What do you guys think?
B: Yeah, I already have one; I just didn't get a chance to insert it into this white paper yet. But anyway, this is a moving target, right? We have the first version, and then we will own that repository, this group as a whole. We own that repository, right, should there be any updates.
D: Okay, in that case I'm going to start inserting the diagrams that I have, and if you guys have any feedback, if you think the colors are offensive or look ugly, just let me know, okay?
B: Yeah, just a reminder: this file will be translated into a Markdown file, and if you have any diagram it has to be in a JPEG format or GIF format, whatever format you have; it cannot be a Google drawing.
B: Let's just make sure that you have a JPEG prepared somewhere, so that when we submit it, we can include that JPEG.
A: So maybe we can see how everyone comes up with their drawings first, and then, if we want to have them in a unified style, maybe we can talk to her about it. Yeah, sure.
A: Okay, thank you for offering that. Okay, so I think this section is also ready for review, and then the next section is okay; this section is also ready. I think we actually went over this one, but if you have more comments about the use case section, please add your comments there. And this is also Sean Chang's. You are pretty much done with these two sections, right?
B: That is correct, and I have also done those sections. Again, I just need some eyes to help me find out whether there are any issues.
A: Okay, I see that Humble added this one, so I'm not sure... I believe those are kind of out of scope. He's talking about application rollback and data replication out of snapshots, so I think those are kind of out of scope for this white paper, for now, I believe. Do you want...
A: All right, yeah, I will do that.
A: All right, and then volume populators. Okay, I think Ben added that section, yeah; please review this and this section. So this setting... I think Tom, you're still updating this, right? Okay, tell us more.
E: Yeah, so Steven and I met. I think he had another action item on the first section, and then I took the container notifier discussion.
A: Do you want to take a quick pass at the container notifier section, to make sure that you're okay with that?
E: Yeah, my opinion here is that this is kind of a reference to the KEP, and it gives enough of an overview that, when things change in the KEP, it doesn't lock down an opinion here, basically.
A: Sean Chang as well: let's just add yours, so we both take a look at this and see. Yeah, probably just a brief description of that. Okay, we'll take a look. And I think this is actually ready for review, if you guys think this is too much or too little.
A
Let
me
know
this
is
just
to
add
in
the
motivation
and
some
some
goals
that
didn't
really
add
any
details,
because
you
know
the
cap
itself
has
some
details,
so
this
is
ready
for
review
as
well
and
the
section
oh
okay,
I
see
prashantha
actually
made
some
changes
here.
A: Let's see... oh okay, so he's about to change that, all right. So probably we should take a look and see. Is Prashantha on the call, by any chance?
A: Okay, I'm not sure if it's done yet; it's like we need to make a lot of changes. Okay, let's see: if it's done, then we can review it. Okay, then we come to this section. I think, Sean Chang, you're going to take a look at this one, right? So let's just...
A: Which was the section that was added by Phong? Is that... okay, this is the one, right: backup application. Is this section added by Phong? I forgot which section it is. I think this is the...
B: The last section is from Phong: basically the pre-CBT work.
A: We actually went through that one before, okay, so take a look. And then we come to the appendix. I think we were saying that we still want to keep those, so we can just go over this one and see if there's anything that needs to be cleaned up, but it's kind of useful.
A: I think I actually have a few things that maybe I want to add here, so yeah. This section is also ready for review, so please add comments. Then to the appendix section: there are a lot of examples there of how different database applications do quiesce and provide their native tools to back up and restore. Okay, so...
B: Right, Xing, have we had our repo set up? I remember you had a lot of...
A: Still waiting, yeah. I think maybe people are on vacation or something. I opened an issue, because somebody else will have to create an alias; I don't know how to create those, and usually the Contributor Experience people do that. I will ping someone. So this was a few... I think.
A: I think I submitted that last week; I assigned it to Paris and Nikita, but I haven't heard from them yet. It's still there.
A: Okay, okay, so I think that's that. Anything else for this white paper? If not, we can go back here. Let me see: is Eric on the call yet? No... oh, there he is, okay. Hey, Eric! Yes.
A: Oh, that's fine. So yeah, do you want to go ahead and talk about this? Do you have anything to share, or should I just go to this? We have the notes there. Do you have anything else you want to share, or should I just go here?
C: I don't have a formal proposal for this built out, although it's something that I'm thinking about, but what I wanted was to get some feedback from the Data Protection Working Group. Basically, we at Infinidat have immutable snapshots implemented within our storage layer, and, for clarity...
C: To us it basically means that, at the time of creation, or potentially in a future policy update, the snapshots can have a specific time before which you cannot edit the snapshot; you can't delete it.
C: ...to do whatever they wish with it, but that's the critical aspect that I mean by immutability. We are seeing customers who want to deploy these immutable snapshots as part of their overall data protection workflows, with an eye toward ransomware prevention and ransomware recovery: basically, take immutable snapshots regularly.
C: Maybe even every couple of minutes in some cases, and then delete them after 24 hours or whatever: basically, cycle through continual immutable snapshot copies of their most critical data sets. And ideally they would like to have some awareness of, or ability to specify, this on a policy basis, with regard to the time element of immutability at the...
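[Editor's note] Eric's proposal boils down to a snapshot carrying a time before which it cannot be deleted. A minimal sketch of that rule, assuming hypothetical `created_at`/`lock_duration` fields (this is not an existing Kubernetes API):

```python
from datetime import datetime, timedelta, timezone

def deletion_allowed(created_at, lock_duration, now=None):
    """True once the snapshot's hypothetical immutability window has
    expired; until then, delete requests must be refused.

    created_at and now are timezone-aware datetimes; lock_duration is
    the 'cannot delete before' window described in the discussion."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now >= created_at + lock_duration
```

For the "delete after 24 hours" workflow above, a snapshot one hour old is still locked, while one 25 hours old may be cycled out.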
C: And it would be good to have one standard approach, as opposed to having each vendor roll their own kind of ways to expose this policy. So that's the genesis of this, and I'd love to hear feedback from folks in this working group as to whether that makes sense.
E: Thanks, Eric. I can tell you what we do for immutable backups, because this is a really important feature, as you point out: we actually rely on the backup repository to do this. The canonical example is the Amazon S3 API, and we use their features there: they have a feature called object locking, with a governance mode, that lets you manage that. We typically don't... you know, because we consume a lot of things through CSI right now.
E: ...we don't actually require immutable snapshots from the storage provider. Having said that, if it were available, we definitely would use it, but I think for us the more important side is actually the backup repository side.
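[Editor's note] The S3 object locking mentioned here is exposed through real `PutObject` parameters (`ObjectLockMode`, `ObjectLockRetainUntilDate`). A sketch of building such a request, without actually calling AWS; the bucket and key names are illustrative:

```python
from datetime import datetime, timedelta, timezone

def locked_put_request(bucket, key, body, retain_days, now=None):
    """Build PutObject keyword arguments that write an object locked in
    GOVERNANCE mode until now + retain_days. The result could be passed
    to a boto3 client as s3.put_object(**params), against a bucket that
    was created with object lock enabled."""
    if now is None:
        now = datetime.now(timezone.utc)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "GOVERNANCE",
        "ObjectLockRetainUntilDate": now + timedelta(days=retain_days),
    }
```

GOVERNANCE mode matches the transcript's "object locking and governance"; S3 also offers a stricter COMPLIANCE mode that not even privileged users can shorten.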
A: What is the name of the policy we're talking about? This is on the CSI side, right?
C: I think it's a very interesting perspective on both angles: both from the backup target, as well as from the source, or potentially as an intermediary step. So I think there should absolutely be room to consider integrations in both of those areas.
C: So one fallback or failover mechanism would be if Kubernetes enforced that time frame, or if CSI enforced that time frame for drivers that don't implement it in the backend; but that's obviously much less secure than having it implemented all the way through to the backend storage snapshot mechanism.
A: Hey, I do have a question. So that means your CSI driver will actually provision a snapshot, and that actually is your backup, right? So it's like AWS EBS snapshots, those types of snapshots, right? You're... correct, okay. So then, that only applies to some cloud providers, basically, right?
C: Not necessarily cloud providers, but pretty much all primary storage appliances have snapshot mechanisms, and most have some kind of equivalent of this immutability feature that I just described.
A: Right, but normally a snapshot, a local snapshot, is not really considered a backup, right? Unless you move it to a third location, away from your primary storage, so that you are protected if your primary storage is down, right? So I'm just trying to see: what you're talking about here is, for example, for Infinidat...
A
You
have
your
snapshot
stored
on
your
primary
storage.
Is
this
actually
already
uploaded
to
some
object,
storage
location?
When
you
take
a
snapshot.
C
No
you're
correcting
it's
it's
and
again
similar
story
for
other
vendors.
Maybe
ben
can
weigh
in
as
well
from.
I
know
he
was
involved
in
the
previous
discussion
from
another
primary
storage
vendor.
This
does
not
replace
the
need
for
immutability
in
the
long
term,
backup
repositories
as
well,
and
that
would
be
the
object,
storage
mechanism
or
some
other.
You
know,
implementation
depending
on
what
the
backup
vendor
workflow
is
set
up.
J: Yeah, you absolutely want this capability at each level, so you can have it at either end, or both. And I guess the main reason it would be interesting to backup vendors is just the knowledge that snapshots might not be deletable, because some workflows involve: take a snapshot, back it up, delete the snapshot.
E: And so would that mean that you take a snapshot, you delete it, and essentially it wouldn't get deleted in the underlying system? Is that what you're talking about, then?
A: We should have a way to configure this in the snapshot class, I mean. But normally, I think, when backup vendors do their backups, do they use existing snapshots, or do they use their own? That's a question for the backup vendors here.
A: I mean, the backup vendor probably wouldn't talk to the customer about those, right? I think what we are talking about here is trying to see if we can introduce this as a CRD itself. But did we say to set this in the snapshot class? Where do we set up this policy?
K: Let me talk about that part too. As far as deletion, some vendors do have some limits, especially the ones that take snapshots locally within a volume; but if you're talking about backup, those limitations go away, because you can write as many snapshots to an object store as possible. So I'm not too concerned about that problem. I think the main problem we haven't really talked about is, assuming...
K: So if you have a mechanism that periodically takes snapshots according to the schedule, and according to the retention policy that the customer wants: now that those snapshots are generated, how are we going to consume them? Because in the Kubernetes model, so far we're only talking about user-driven, user-initiated snapshots; all the VolumeSnapshot objects are user-triggered. In this case, users are not triggering any snapshot objects, so the question becomes: how are we exposing these schedule-driven, backup-generated snapshots to users, so they can, for example, do restores?
C: I would assume that would be coordinated by the backup software, working with the underlying storage scheduling mechanism, or somewhere in the intersection there; but I view that as potentially a separate issue from the more narrow thing which I was discussing, which, to boil it down, is basically knowing that a snapshot cannot be deleted until a specific time. That narrower scope is what I'm trying to solve here; what I'm trying to gauge is whether there's interest in community standardization.
K: I mean, I think backup vendors have already tackled that problem, especially if these options are not exposed to users through VolumeSnapshot objects: then there is no way for customers to delete them. The only way they can get deleted is if someone goes out of band and deletes stuff, which is not really a concern here.
K
I
I
think
the
main
issue
is
really,
if
you
know
for
these
scheduled
snapshots,
either
taken
by
backup
when
they're
or
taken
by
the
storage
system
behind
the
scene,
many
storage,
vendors,
you
know
like,
for
example,
netapp.
You
can
specify
financial
policy
and
the
you
know,
netapp
storage
would
take
snapshots
for
you.
K
You
know
on
schedule,
you
know,
assuming
we
have
a
mechanism
to
take
your
scheduled
snapshots.
How
are
we
going
to
expose
those
to
customers?
So,
let's
say
for
ransomware
attack
happens
and
they
want
to
restore
their
data
to
data.
You
know
snapshot
from
a
day
ago.
They
can-
and
I
think
that
that's
what's
lacking
really.
K
So
we
kind
of
you
know
have
solved
that
problem.
You
know
internally
for
our
systems,
but
basically
we
periodically,
you
know,
generate
these
snapshot
objects,
we
pull
the
back
end
and
generate
them
and
well
you
manage
your
life
cycle,
but
that's
not
very
you
know
handy
you
know,
plus
there
can
be
some
it's
period.
You
know
for
for
the
objects
to
sync
up
with
the
state
on
the
back
end.
So,
but
that's
how
we
went
about
tackling
this
problem
internally.
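[Editor's note] The polling approach described here, surfacing backend-scheduled snapshots as Kubernetes objects, can be sketched as one reconcile step; the names and shapes are hypothetical, not any vendor's actual controller:

```python
def reconcile(backend_snapshot_ids, cluster_object_names):
    """One poll cycle: compare what the storage backend reports against
    the VolumeSnapshot-like objects already in the cluster. Returns
    (ids needing a cluster object created,
     cluster objects stale because the backend snapshot is gone)."""
    backend = set(backend_snapshot_ids)
    cluster = set(cluster_object_names)
    to_create = sorted(backend - cluster)
    to_delete = sorted(cluster - backend)
    return to_create, to_delete
```

The sync lag K mentions is exactly the window between two such poll cycles, during which the cluster objects and the backend state disagree.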
M: But there is a limit. When you have a policy that determines that any snapshot taken cannot be deleted for, let's say, seven days, or N days, and the frequency of taking snapshots for the customer is high, then maybe within one or two days they will reach the service's quota limit, the maximum number of snapshots allowed. Now they are in a state where they cannot delete because of the immutability policy, and, at the same time, they have already consumed the maximum snapshot limit.
M
They
have
to
wait
for
three
to
four
days
until
the
seventh
day
appears
so
that
they
can
have
the
older
snapshots
deleted.
We
just
have
to
be
careful
about
that
aspect,
and
how
do
we
model
that
so
that
customers
don't
shoot
on
their
feet?
That's
point
number
one
point
number
two:
we
have
this
concept
of
storage
classes
where
we
have
seen
swmr
mwmr
single,
read,
multi-read,
multi-right
single
right,
all
those
right.
M
Maybe
we
can
use
and
expand
that
concept
to
bring
in
warm
compliance
also
in
there
and
so
that
it
becomes
a
qualifier
in
the
storage
class.
With
a
v
storage
attribute,
I
would
think
of
those
two
as
aspects.
However,
not
dismissing
what
you're
bringing
to
the
table.
There
is
some
merit
in
exploring
it.
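[Editor's note] The quota concern raised above is easy to quantify: with a fixed snapshot interval and an N-day lock, the steady-state number of undeletable snapshots is the per-day rate times the lock length, and it must stay under the backend's quota. A small sketch with illustrative numbers:

```python
def peak_locked(interval_minutes, lock_days):
    """Steady-state count of snapshots still inside the immutability
    window, for a fixed-interval schedule."""
    per_day = (24 * 60) // interval_minutes
    return per_day * lock_days

def violates_quota(interval_minutes, lock_days, quota):
    """True when the schedule will eventually pin more locked snapshots
    than the backend's quota allows, leaving nothing deletable."""
    return peak_locked(interval_minutes, lock_days) > quota
```

For example, hourly snapshots with a 7-day lock pin 168 snapshots at steady state; a 100-snapshot quota would be exhausted in roughly four days, reproducing the stuck state described.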
C: Yeah, it's a great point, Krishna, and I would say, at least on the first perspective: obviously, different backends may have different limitations with regard to storage snapshots. Again, I think the periodic policy piece is a much more complicated issue, so I'm not trying to tackle that part; but, at least with regard to the guaranteed inability to delete a snapshot, I would hope that that would be enforced by the backend.
C: Ultimately, if you were going to run into an issue where you've accumulated too many snapshots, you wouldn't be able to create more snapshots; so it doesn't really affect your ability to delete them. Does that make sense?
E: Eric, you asked if this would be useful for backup vendors: I think absolutely. We do have customers whose backup repositories don't give us primitives that we can use for immutability, so I think having it at the storage, the snapshot, layer would be really useful for us to integrate with. My preference would actually be to not have it done through a policy like this: we actually try to keep any snapshot we reference immutable for as long as it lives, meaning, for us...
E: ...we would actually need an API that would extend the retention, or immutability duration, of an existing snapshot. So you can imagine essentially CRUD-type APIs on top of the immutability, the locking, on each of the snapshots, rather than putting it into the storage class or something like that.
E: ...skipping that layer completely, if we're going to consume it. I'm talking from my own opinion here, so take it with a grain of salt; but if we were to consume immutability of the underlying storage, we would want an API that would manage the retention directly, so we could say: take the snapshot, make it immutable.
C: Okay, so you would want a Kubernetes API mechanism to adjust the protected time, or whatever we end up calling this.
D: Yeah, I have a headset here; I'm a backup vendor here at Dell EMC. What we're concerned about, from a backup point of view, is that when we create a snapshot, we then...
B: Phong, I think this is fundamentally different, right? This is the ability: you can do that, but it doesn't necessarily mean that you can only do that. Eric, correct me if I'm wrong.
K: Like for CBT, we said this is going to be a service where you just talk to, let's say, a REST server, and it gives you all the changed blocks; there is no Kubernetes API involved there. I think we have to make a decision whether these backup/restore workflows are user-driven through Kubernetes APIs, or whether they're driven through proprietary APIs by backup vendors, because that can affect all these discussions.
K: If backup vendors are responsible for snapshot schedules and their retention policies, that's like a black box: those snapshots are not exposed to customers through Kubernetes APIs at all, so we don't have any problem with somebody going out and deleting stuff, because those snapshots are not even exposed to customers, who are not even aware of them. So I think it kind of goes back to my earlier point: we have to decide.
A: Anyway, okay, I think we should talk about that later. But for this one, in the CSI community meeting, at least the thought at that time was to actually even have a Kubernetes controller that would be there to prevent the deletion from happening.
C: But I think, Tom, sorry, your point is that if a driver doesn't implement this functionality, and it's just provided at the CSI level, then you can compromise the entire system in a true attack scenario; whereas if it's implemented all the way back into the storage, then it addresses the issue.
A
But
I
think
I
don't
think
you
can
actually
prevent
that
from
happening,
because
if
you,
because
the
this,
this
type
of
like
the
web
hook
right
that
normally
does
not
talk
to
the
csi
driver
directly.
So
it's
like
we're
talking
about
two
levels.
A
So
if
if
we
are
doing
this
at
a
as
a
webhook
like
in
the
kubernetes
layer,
it's
only
looking
at
api
side.
So
as
long
as
you
have,
this
policy
will
stop
you
from
deleting
it,
and
then
there
is
another
layer
which
is
on
the
csi
side.
That's
different!
That's
those
two
are
not
together.
That's
what
my
understanding
from
last
week's
meeting,
but
I
definitely
can
see
the
problem.
A
So
there
is
a
nourish
w.
You
could
have
this.
You
know
out
of
sync
right,
because
your
driver
doesn't
support
this,
but
then
you
know
the,
but
then
you
use
the
policy
that
is
out
of
sync.
That
definitely
could
happen.
So
I
think
that
was
what
we
talked
about
last
time
ben.
Do
you
remember?
We
actually
talked
about
that
even
for
like
one
expansion,
we
have
something
similar
because
your
your
story
class
says
I
support
one
expansion,
but
your
driver
does
not
support
it
right.
They
are
automatically
yeah.
A: Right, but if you look at how that is implemented: normally, people are talking about a webhook; I remember talking about having something like an admission controller, so that if the policy is set, we will prevent snapshots from being deleted, right? But the admission controller normally does not talk to your CSI driver directly.
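[Editor's note] The admission-controller idea can be sketched as a validating check on DELETE requests. The `retainUntil` field is hypothetical, and, as the discussion notes, this guards only the Kubernetes API layer, not the storage backend:

```python
from datetime import datetime, timezone

def review_snapshot_delete(snapshot, now=None):
    """Validating-webhook style decision for a DELETE on a snapshot.

    snapshot is a dict with an optional timezone-aware 'retainUntil'
    (a hypothetical immutability field, not a real VolumeSnapshot one).
    Returns an allow/deny verdict like an AdmissionReview response."""
    if now is None:
        now = datetime.now(timezone.utc)
    until = snapshot.get("retainUntil")
    if until is not None and now < until:
        return {"allowed": False,
                "reason": f"immutable until {until.isoformat()}"}
    return {"allowed": True}
```

A snapshot still inside its window is denied; one without the field, or past its window, is allowed, which is exactly the out-of-sync risk raised above when the driver itself enforces nothing.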
A: Right, so the same thing as we do with some other features, basically. But of course nobody supports this by default, right? If you want to support this, the driver has to declare that it supports it, and then that's it; that's like a one-shot thing. You can't opt out: basically, once you support it, you support it.
E: ...a feature where, to be honest, the majority of deleted snapshots are deleted because of operational failures you didn't intend. I think putting this at the API layer would be useful to prevent the operational failures, but for ransomware attacks you really have to push it down to the storage layer. So they could be separate features; maybe there are separate use cases that we should solve separately.
C: We did not implement this via our CSI driver; we were hoping for some kind of community consensus building before we did that. But we do have the APIs available in our system, so...
N: I think it helps build the community consensus to have a working model of it. Maybe nobody could play with it if it is totally proprietary, but just to see how the upper layers can be built on top of it might still be helpful.
E: Yeah, Eric, it's actually a pretty common pattern. For example, with CBT, DP vendors can essentially detect that you're using, say, vSphere, and then directly integrate with the CBT APIs, even though it's not exposed through CSI or any Kubernetes API. So...
E: It's actually a common pattern for us to see a customer request for consuming a specific feature from a storage vendor, implement the integration, and then that kind of becomes a model; and then we like to bring it back to the group to see if it worked, whether there were issues with it, and whether it makes sense to put it into Kubernetes.
C: I'm new to this whole community process, so I appreciate everyone's patience as I figure it out. Tom, if you're willing: I think we're actually already engaged at some level with your organization; maybe you and I can have some time to discuss a preferred implementation, because, ideally, the reference implementation would be something that both the backup vendor and the storage vendor can agree upon. Does that make sense?
E: Yeah, that does, and we could talk to shared customers, to see if they're interested in this too, right? That'd be good; we like to be customer-driven here, so that'll be good, of course.
C: Okay, so just to close off the topic, at least from my perspective: I appreciate all the discussion here. I'll work with Tom, probably get some input from broader stakeholders, and be thinking about a reference implementation of sorts. It will not be immediate on our side, so, as there are further thoughts, we'll continue to consolidate them and, of course, come back to this group once we have something to share; unless anyone has other ideas on how to proceed.
E: That sounds great. I'd also like to see maybe the NetApp immutability APIs; I don't know if folks can share those, maybe in the document, so we can go through them and just make sure we're building a general model, if possible.
K: Yeah, so as far as the APIs we built: these are mainly for internal consumption; they are not planned to be consumed by customers or exposed, for now, but I can talk about the workflows and how they work. There's nothing really proprietary about the way the APIs work. I still think we should discuss...
E: Yeah, definitely. We want to build the most useful things, hit the most useful things, and if customers say they don't need this at this moment, then okay, I'm okay with that. And Eric, I guess...
K: And yeah, I think there's probably some conflict of interest here too, because obviously backup vendors want to be first in the game to support a particular feature, so most likely they're going to have proprietary APIs; the same goes for storage vendors, who want to be the first to support certain workflows. But...
K
I
think,
as
a
community
so
putting
us
at
our
you
know.
Company
has
for
a
second.
I
think
you
know
if
some
workflows
can't
be
standardized,
you
know
that
can
benefit
customers,
because
you
don't
want
110
different
ways
for
customers
to
you
know
let's
say,
do
backup,
restore
or
to
handle.
You
know
run
somewhere.
K: Internal, yes; they're internal to NetApp, and we have no plans to expose them outside, but I can talk about how they work; there's nothing really secret about them. They must be exposed at least for configuration, though, right? So, on the configuration part: with snapshot policies, you can define them on the backend, or, for example, for a cloud backend, you say "I want so many hourly snapshots, I want so many daily snapshots," and the policy itself...
K
You
know
implements
the
retention
policy,
so
you
have
so
many
daily
snapshots.
You
have
so
many
you
know
hourly
snaps
or
so
many
weekly
snapshots.
So
that's
how
we
do
it.
You
know
outside
of,
but
just
just
the
store
just
netapp
itself,
but
obviously
netapp
also
integrates
with
you
know,
backup,
vendors
so
and
there
they
manage
their
schedules.
They
may
rely
on
netapp
for
snapchat
generation,
but
they
manage
the
retention
policy
or
you
know.
So.
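[Editor's note] The count-based retention scheme described here (keep N hourly, N daily, N weekly snapshots) can be sketched as a pruning pass. This illustrates the scheme as described in the meeting, not NetApp's actual implementation:

```python
def prune(snapshots, keep):
    """snapshots: (timestamp, tier) pairs ordered newest first, with
    tier in e.g. {'hourly', 'daily', 'weekly'}. keep: per-tier retention
    counts. Returns the timestamps that exceed their tier's quota and
    should be deleted."""
    seen = {}
    doomed = []
    for ts, tier in snapshots:
        seen[tier] = seen.get(tier, 0) + 1
        if seen[tier] > keep.get(tier, 0):
            doomed.append(ts)
    return doomed
```

Because the list is ordered newest first, the oldest snapshots in each tier are the ones pruned once the per-tier count is exceeded.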
K: And, you know, NetApp also has some plans, like the Astra project, to handle some of these workflows. So I think, basically, we need to discuss maybe the workflow. One of the things this white paper is really trying to do is at least explain the problems.
K: For example, going back to the CBT discussions we were having: one of the things we discussed was the churn. If you use Kubernetes APIs for CBT, the churn would be too high, and the objects are too large to be stored in etcd, so that kind of ruled out Kubernetes integrations. Xing said this is not finalized, but at least we had that discussion around it, right?
E: Yeah, I mean, I think it is worth describing the CBT problem in the white paper. Do we have a section for that in there? I don't recall seeing one.
A: No, we don't have anything for this, because... well, we can see if we can add it there, I mean.
A
I
think
we
are
basically
talking
about
what
is
missing,
because
this
one
well,
it's
not
really
completely
missing,
like
I
mean
in
my
view,
but
what
what
do
you
think
do
we
want
to
talk
about
this?
One.
A
Yeah,
so
let's
yeah,
because
we
have
a
lot
in
the
in
there
already.
So,
let's
see
if
we
should
add
this
one
or
not,.
A
Yeah
yeah,
yeah,
and
also
you
know,
as
soon
as
I
was
saying,
you
know
we
actually
can.
Even
after
we
have
this
first
version
of
the
white
paper
submitted,
we
can
still
go
update
it.
It's
going
to
be
our
ripple
right,
so
yeah,
I'm
just
saying
that
we
don't
have
to
rush,
because
this
one
just
came
out
right.
This
issue
just
was
brought
up
last
week,
not
like
other
other
things.
We
have
been
talking
about
those
other
issues
for
quite
a
while
right.
A: Yeah, thank you. Is there something not found, or did you find it, Senchen? Is the problem resolved?
A
Oh
okay,
sure,
okay,
thank
you
and
then
I
think
I
think
that's
it.
I
just
okay.
We
still
have
this
here
so
kukan
is
coming.
I'm
not
sure
if
anyone
has
the
immediate
decision
to
attain
person
yet
but
anyway,
so.
C: Hey, has anyone heard whether it's actually going to be face to face, or if they're waffling on that now, given Delta?
G: It will be face to face. I guess I'll drop the link into the chat here, if we have time, but there are some specific requirements: you must demonstrate vaccination in order to register, you will have to wear masks on site, and they are following LA County guidelines, so it is subject to change; but it's estimated approximately a quarter of attendees may be in person.
G
No,
I
don't,
I
think
I
think
they.
I
think
there
was
an
assumption
at
the
outset
that
no
one
would
be
in
person
and
folks
have
modified
their
plans
to
potentially
be
there.
G
You
know
I
wouldn't
place
a
bet
on
on
whether
or
not
the
situation
now
remains,
but
I'm
not
aware
of
anyone
having
pulled
out
entirely.
I
know
there's
some
folks
that
had
assumed
at
the
outset
they
wouldn't
be
there
and
haven't
changed
that.
G: Yeah, it's the same thing. It's a companion event to KubeCon; it's not exactly co-located, but it's located in near proximity, using the same sort of guidelines for attendance. Similarly, they'll have an online, quote-unquote "virtual" presence, but then they're also being in person. I believe there's a museum...