From YouTube: WG data protection bi-weekly meeting for Feb. 7, 2020
A: Okay, hello again, everyone. Today is January 29th, and this is the second Kubernetes Data Protection Working Group meeting. This meeting will be recorded and will be available on YouTube. I'll be hosting today's meeting, co-hosted by Xing as well. Today we're going to go through three major topics based on the last meeting.
A: The second thing is the data populator; I think it has been successfully cut and merged into 1.18 for the KEP, even though the scope has shrunk a little bit. If you would like to, you know, I plan to go through the KEP and what will be included. And the third will be the workflows in data protection; that means Xing, Tom, and Nora. There's another person whose name I forgot to add over here, I'm sorry, please add it yourself. We went through some of the topics. There's a...
B: All right, well, we'll go with this and then we'll see. So in general, I know probably most folks have read it, but let me just give an overview of what I was trying to do with it, because that's going to be part of the substance of my response to some of these comments. I did a background section here, which was really just to make this document stand alone. I didn't really expect that the stuff in this background would be new to anybody who's been tapped into the working group.
B: Really, any better framing of this background is what I would expect in the stuff that Dave and Shawn are working on, right, which is the overall workflow. So I appreciate all the comments here, but I wasn't really trying to do more than just basic framing. If anyone has any issue with the basic framing, I'd be happy to address that, but the main purpose of this section wasn't to be an exhaustive list of what we're trying to do or anything like that.
B: And so one of the things that I've been trying to deal with in the discussion here is the difference between the things that we need to specify, as in we want the users to be able to say such-and-such no matter who the backup supplier is, versus the things that can be hidden inside the volume backup implementation itself.
G: That sounds pretty good to me. This is Tom from Kasten. I did have kind of a higher-level question or point: I think it might be worth calling out the different roles here compared to taking volume snapshots. In the backup community, I think Kasten's point of view is that there will be multiple parties, multiple roles involved. If you look at CSI, CSI has an interface defined by Kubernetes and implemented by different storage providers. In the backup case, what I was envisioning was that it would be backup providers like Kasten that consume the interface, much like the snapshot controller consumes the CSI interface, and the storage providers could implement whatever that API was. What's different here is that rather than having the community develop a snapshot controller, there would be third-party vendors, the backup vendors, right, the data protection vendors. Does that kind of match what your understanding is? Is it worth calling out all those roles explicitly?
B: This is exactly, I think, one of the topics that I wanted to talk about. So let me make sure that I'm understanding your view, and let me do it by way of explaining how I've looked at this. (Yeah, please.) Part of this is that there are two different problems that kind of intersect here, and maybe I didn't do a great job of talking about the two of them.
B
One
of
them
is
the
degree
to
which
the
general
backup
problem
is
distinct
from
the
volume
backup
problem
right
and,
and
several
of
the
comments
on
this
doc
were
really
referring
to
the
general
backup
problem
things
like
application,
quiesce
things
like
capturing.
You
know
the
config,
in
addition
to
the
volume
data
and
I've,
been
explicitly
viewing
volume
backup
as
a
layer
on
which
any
kind
of
sort
of
workload
or
cluster
or
application
backup
would
be
based.
B
So
it
could
use
that
as
a
as
an
underlying
facility,
so
it
would
have
a
well-defined
API
for
how
to
do
that.
So
in
in
with
that
in
mind,
I
actually
imagined
that
we'd
have
the
following
roles,
which
is
underlying
primary
storage,
including
snapshot
support
right
all
in
one
one
component,
yeah
a
volume
backup
component,
which
might
be
provided
by
primary
storage
or
could
be
provided
by
a
different
plugin
that
relied
on
something
from
the
primary
storage.
Either
snapshot
or
we've
talked
about
this
other
things.
B
It
would
have
been
called
incremental
snapshots
the
sort
of
feeding
of
Delta's
into
a
system,
but
then
also
a
third
layer,
which
is
the
layer
that
orchestrates
the
overall
backup
process
and
the
rationale
for
the
difference
between
that
second
and
third
layer,
which
will,
by
the
way,
of
course,
all
of
those
layers.
All
of
those
roles,
could
collapse
and
be
provided
by
a
single
vendor
right.
So
you
could.
B: So a storage provider might want to be the storage, the volume backup, and the overall backup, but you can also imagine that the problems involved with managing cross-cluster backups are a bit of a different problem than backing up volumes in particular, since I know at least several of the storage vendors actually have the capability to do volume backups. (Absolutely.)
G: On the characterization, you know, you talk about the three layers, so I can give you my opinion on the three layers and then we can vet it as a group, right. If I'm at the lowest-level layer, the first one you talked about, where you have a storage provider implementing a backup solution, I don't know if there's too much work that we would have to define, as a group, for that, I think.
B: And the reason why, let me just go a little bit further into that: if there is a layer that wants to be able to invoke such a thing, there's value in having that thing be invokable in the same way, whether it's provided by primary storage, or by different primary storage vendors, or whether that volume backup layer itself is provided by another component. (Sure.)
D: The difference between the invocation and the configuration: right now, CSI, for example, gives you the opportunity to say, hey, I want a durable snapshot; I think that's what we started calling them. And then it's up to the storage system to figure out how to actually migrate that out to secondary storage. We already have an API for that, which is relatively simple; it may need some tweaking here.
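For readers following along, here is a minimal sketch of the snapshot API being referenced, assuming the snapshot.storage.k8s.io/v1beta1 API that shipped around Kubernetes 1.17. The class name, driver name, and the "durable" parameter are hypothetical illustrations of how a vendor might surface a durable snapshot; they are not part of the upstream API.

```python
import json

# Minimal sketch of the snapshot API mentioned above (v1beta1 era).
# The "durable" parameter is purely hypothetical: class parameters are
# driver-specific, and this only illustrates how a vendor might flag
# an off-array (durable) copy.
snapshot_class = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshotClass",
    "metadata": {"name": "example-durable-snapclass"},   # hypothetical name
    "driver": "example.csi.vendor.com",                   # hypothetical driver
    "deletionPolicy": "Retain",
    "parameters": {"durable": "true"},                    # hypothetical, vendor-defined
}

# The user-facing request: take a snapshot of this PVC with that class.
volume_snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap", "namespace": "default"},
    "spec": {
        "volumeSnapshotClassName": "example-durable-snapclass",
        "source": {"persistentVolumeClaimName": "app-data"},  # hypothetical PVC
    },
}

if __name__ == "__main__":
    # Print the manifests; in practice they would be applied to the cluster.
    print(json.dumps([snapshot_class, volume_snapshot], indent=2))
```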
D: I know that's exactly true, but I've been working on separating these two into two different silos. So we've got stuff that happens under the covers, essentially like the durable snapshots, and I think that's going to be fine: we have a way to trigger that, and we may need to change the way we trigger it, but a lot of the internals of that are hidden, like, for example, in EBS, or whatever the Google Cloud equivalent is called.
D: That is coming in from the top. So, right, we've got two different modes: one is, hey, storage system, go take care of this for me; the other is, hey, backup system, you need access in order to get this data out, in order to move it, to put it somewhere else, either secondary storage, a destination target, whatever it is. So that's how I'm seeing these two silos, but...
G: So, Andrew, I see that as kind of your third layer, the characterization that you defined over the top, right. So maybe I misunderstood your original characterization, but it sounded like you defined kind of three layers, where the storage layer itself isn't meant to do that backup; I think storage vendors will have their own opinion there. The middle layer, I think, is what we've been thinking about a lot at Kasten, which is: how do we get the actual blocks themselves and put them into our backup storage system?
B: I don't think we're actually disagreeing at a top level on what the characterization of the layers is. I think that maybe, when we start getting into API discussions, we'll flesh out what differences we might have. I'm thinking the difference is probably just in the nuance of what the API to the second layer is versus the first layer, and, you know, I was envisioning that the API to the first layer would be storage and snapshots, not backups.
G: True, but there's some nuance there, right. The current snapshot APIs, for example, don't have any data path components, so things like changed block tracking and, in fact, just general data extraction. I'll give you an example: what Kasten does right now, if we're using CSI, is we take a backup, restore it, and then extract the data out of it. (Yeah, exactly, yep.)
B
Take
a
backup,
you
take
a
snapshot;
I'm,
sorry,
a
snapshot,
yes
right,
yeah,
and
that
that
is
the
de-facto
way
to
do
that
now.
But
there
is
on
the
table
for
this
working
group,
some
enhancement
at
that
level
right:
ask
primary
storage
to
expose
additional
what
I
will
call
snapshot
related
api's.
The
fact
that
they
exist
for
the
purpose
of
backup
may
be
the
nuance,
but.
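The flow Tom describes (snapshot, restore, then read the data out) maps onto objects that already exist; here is a minimal sketch, assuming the v1beta1 snapshot API and illustrative names. The last step, actually streaming the data out of the restored clone, has no upstream data-path API, which is the gap under discussion.

```python
import json

# Step 1: snapshot the application PVC.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap", "namespace": "default"},
    "spec": {
        "volumeSnapshotClassName": "example-snapclass",        # illustrative
        "source": {"persistentVolumeClaimName": "app-data"},   # illustrative
    },
}

# Step 2: restore the snapshot into a scratch PVC. A backup tool then mounts
# this clone in a pod and copies the blocks/files into its own repository.
restored_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data-for-export", "namespace": "default"},
    "spec": {
        "storageClassName": "example-sc",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "apiGroup": "snapshot.storage.k8s.io",
            "kind": "VolumeSnapshot",
            "name": "app-data-snap",
        },
    },
}

if __name__ == "__main__":
    print(json.dumps([snapshot, restored_pvc], indent=2))
```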
H: Hi, I was going to say, I had exactly the same idea back in November, and I followed exactly the same path that you did. I also have a proof of concept that I don't have permission to release, but it sounds like we're on the same page, and so I am very curious about the exact implementation decisions you made, because I do think that is a promising path.
B: The existing snapshot IDs only have to be unique within a cluster, and the idea is that these would have to be effectively globally unique. It also has to be explicitly possible to import and export backups into different clusters, right. And then, of course, we get into the nuance of different kinds of backups, because we can imagine backups that are completely portable between storage vendors (i.e., a restic-class backup), or backups that, while they're independent of a particular storage pool, might not be independent of a particular storage technology.
B
So
hey
I
have
my
file
system
built
in
the
following
way
and
so
I'm
the
only
one
who's
going
to
be
able
to
interpret
my
backups
right.
That
doesn't
mean
I
have
to
come
back
into
this
same
appliance,
or
this
same
you
know
distributed
file
system,
but
it
would
need
to
be
back
to
my
product.
You
know
something
like
so
I
guess.
The
key
thing,
then,
is
that
you
should
be
able
to
nuke
the
original
volume
and
the
backup
should
should
be
just
as
useful.
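As a purely hypothetical illustration of the design space just described (a backup object with its own lifecycle, a globally unique handle, and a stated degree of portability), something along these lines could be imagined. None of this is an existing Kubernetes API; every name here is made up for the sketch.

```python
import json
import uuid

# Hypothetical, illustration only: a cluster-independent volume backup object
# reflecting the requirements above. This is NOT an existing Kubernetes API.
volume_backup = {
    "apiVersion": "backup.example.io/v1alpha1",        # hypothetical group
    "kind": "VolumeBackup",
    "metadata": {"name": "app-data-backup-20200207"},
    "spec": {
        "source": {"persistentVolumeClaimName": "app-data"},
        # Globally unique, so the backup stays addressable after the source
        # volume is gone or the object is exported to another cluster.
        "backupHandle": str(uuid.uuid4()),
        # "portable" could mean restorable by any vendor; "technology-bound"
        # only by the storage technology that produced it.
        "portability": "technology-bound",
        "target": {"objectStoreURL": "s3://example-bucket/backups"},  # hypothetical
    },
}

if __name__ == "__main__":
    print(json.dumps(volume_backup, indent=2))
```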
B
Of
course,
there's
a
bunch
of
other
considerations
here
too,
like
backup
life
time
once
you've
separated
the
life
cycle
from
the
individual
volume.
You
start
getting
into
interesting
questions
about
versioning,
and
you
know:
how
long
can
you
keep
a
backup
and
be
able
to
expect
to
be
able
to
recover
from
it?
And
what
do
we
want
to
say
about
that?.
A: Yeah, that is definitely one of the things, and there's an interesting issue on top of what you just described. As everybody knows, right now in the snapshot API we have this interesting retention policy, and it does apply to cloud providers like PD, EBS, and so on; however, it may not necessarily apply to local storage systems, if you read the wording.
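The "retention policy" mentioned here presumably refers to the snapshot API's deletionPolicy, which controls whether the storage-side snapshot is kept or removed when the Kubernetes objects go away. A minimal sketch, assuming the v1beta1 API and an illustrative driver name:

```python
import json

# Two VolumeSnapshotClasses differing only in deletionPolicy. "Retain" keeps
# the storage-side snapshot even after the Kubernetes objects are deleted,
# which is exactly the behavior that may not be meaningful for a local
# storage system, as raised above.
retain_class = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshotClass",
    "metadata": {"name": "snapclass-retain"},
    "driver": "example.csi.vendor.com",    # illustrative driver
    "deletionPolicy": "Retain",
}

delete_class = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshotClass",
    "metadata": {"name": "snapclass-delete"},
    "driver": "example.csi.vendor.com",
    "deletionPolicy": "Delete",
}

if __name__ == "__main__":
    print(json.dumps([retain_class, delete_class], indent=2))
```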
A: And this also applies to backup as well. That's the reason why I want to toss this problem out, this kind of interesting point. The reason why I'm asking is that lifecycle management of a backup seems to be a mix: within the cluster you can manage it, and outside the cluster you can also manage it. That really leads to a very interesting question, which is a dangling reference from an existing backup object.
D: There's also the flip side of that, which is: you have a cluster, you back it up, you surface some snapshot IDs there, and you're managing them in that cluster. Now you restore to another cluster; you don't delete the first cluster, but you keep the second cluster. Now those same snapshot IDs have surfaced in the second cluster. Who gets to delete those? (Yes, exactly, that's exactly it.) And your point is a very good one as well, which is, you know, what if the external storage deleted them?
B: You know, with some sort of lifecycle management of the things that are not in the clusters themselves, right. That is an entire system of stuff that you have to worry about, which could be managed from Kubernetes or could be managed outside of Kubernetes as well. That's what I kind of see as a natural problem for that third layer.
G: I had a question for this group. I think part of your document is kind of proposing an opinion on what these things are, and I think it would be reasonable to be relatively opinionated. Maybe one of the things some of you were getting at was that for backups we perhaps enforce a kind of independence, right: when we talk about a backup, it would not be tied to the original volume, for example. We could say that the lifecycle would be independent, and maybe that then has implications on...
C: Yes. I don't know what storage systems you have been using, but for a lot of systems, if you just look at the numbers, there is a dependency between a snapshot and its source volume, so I actually see a lot of people complaining about that, because right now the API assumes that they are independent. So some CSI drivers actually have to implement a lot of logic to deal with this.
C: You actually have to manage this at the driver layer: the API assumes that the volume can be deleted successfully, but actually it cannot, and we have to keep a reference count internally within the driver. It's definitely a problem, and I actually want to make this experience better for those drivers. So I don't think they should be coupled; right now they actually already have different lifecycles in the API.
B: I didn't understand; are you saying no? Because I thought the sort of initial proposal there was: maybe we should revisit that requirement of volume snapshots to make it easier for those drivers, because we're now going to provide that lifecycle independence with volume backups. (Well, yeah.)
I: Yes, if we're stuck with implementing a feature or a change that is going to break backwards compatibility, I think history teaches us that we should do that sooner rather than later, because, with the rapid adoption cycle of Kubernetes at the moment, we are going to see a wider sphere of impact the longer we wait.
H: Yeah, I don't have anything to share per se, and this shouldn't take too long. I just wanted to mention that, while developing my prototype for a way to do backup and trying to answer some of the same questions that Andrew is trying to answer, I ran into the problem of: if you create a CRD that represents a backup, you want to be able to restore from it.
H
That
is,
if
you
have
like
a
dynamic,
provisioner
running
it
as
a
CSI
driver
if
it
sees
a
data
source
that
it
doesn't
know.
If
it's
not
a
volume,
if
it's
not
a
snapshot,
it'll
just
ignore
it,
and
so
in
the
short
term.
If
you
turn
on
this
feature,
gate
and
create
your
own
CR
D
and
then
use
that
as
the
data
source
for
a
PVC,
the
system
just
ignores
it.
It'll.
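A sketch of the situation being described, assuming this is the AnyVolumeDataSource alpha gate that shipped in 1.18, which relaxes validation so a PVC dataSource may reference an arbitrary custom resource. All names below are hypothetical; until a populator that understands the CRD is installed, nothing acts on the claim.

```python
import json

# Hypothetical CRD instance representing a backup (illustration only).
backup_cr = {
    "apiVersion": "backup.example.io/v1alpha1",
    "kind": "VolumeBackup",
    "metadata": {"name": "app-data-backup", "namespace": "default"},
}

# A PVC whose dataSource is neither a PVC nor a VolumeSnapshot. With the
# feature gate enabled the object is accepted, but the CSI provisioner
# doesn't recognize the source, so the claim simply stays Pending until a
# populator controller claims it.
pvc_from_backup = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data-restored", "namespace": "default"},
    "spec": {
        "storageClassName": "example-sc",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "apiGroup": "backup.example.io",
            "kind": "VolumeBackup",
            "name": "app-data-backup",
        },
    },
}

if __name__ == "__main__":
    print(json.dumps([backup_cr, pvc_from_backup], indent=2))
```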
H: Yeah, it says, I don't know this, and it just waits, which is perfect, because the implementation of data populators that I propose involves a separate controller also watching PVCs, seeing that request, saying, oh, I know what to do here, and then creating a volume, populating it, and then binding it to that PVC. (That's exactly the way ours works.) Oh, wonderful.
H: But here's the thing, and this was Tim's point about the API validation: we don't want to have a situation where the user creates a CRD, or creates a CR, and then uses it as the data source for a PVC, and then nothing happens because, of course, they haven't installed a data populator that knows what to do with that CRD, and the system just sits there and they get no feedback. That is undesirable.
H: That is one of the TBD items. I envision a future where you can have any CRD you want be the source for a data populator, and you could have lots of different controllers that know how to handle different CRDs, and over time this could become a very feature-full area where you can pre-populate your volumes with whatever you want, from whatever source, and people could do really interesting things, conceivably.
D: That's what I'm thinking. So, like with pods, for example: if the pods are not coming up, you go to the pod, you say describe pod, and you'll get some error messages, and you can usually work from that. Can we do the same thing on the PV, and say, hey, the PV is not being instantiated?
A: We've got to figure that out, yeah. And the other point, which I also commented on in the doc, is that when we design this there's an interesting question: how do you envision the external populator working? At the end of it, whatever requires it will still need to cause some CSI driver to provision a volume, right? (Okay.)
H: My approach was: the populator controller is watching the PVC, and it sees a data source of some CRD that it knows how to handle. What it will do then is create a second PVC in a different namespace with no data source, but all the other details exactly the same, so that the CSI driver will see that request and say, oh yeah, I know how to handle that, and go create an empty volume. The populator will then wait for that to happen.
H: Then it will attach a pod to that empty volume, do whatever it needs to do to make the data appear, and then it's just a matter of deleting that PVC and rebinding the now-populated PV back to the original PVC that the user created, which is still sitting there waiting for a PV to get bound to it.
H: Yeah, yeah, so there's the trick that when you create that second PVC with no data source, you had better have all of the same details exactly, so that it ends up exactly where it's supposed to be, and after you perform the rebind you don't have a useless PV, you have what you expected to get.
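As a concrete sketch of the two claims involved in this flow: the user-facing PVC carries the custom dataSource, while the populator's staging PVC, created in the populator's own namespace, omits the dataSource but copies every other detail so the ordinary CSI provisioner fills it with an empty volume of the right class and size. All names are illustrative, not from any released populator implementation.

```python
import copy
import json

# The user's claim, pointing at a hypothetical populator CRD as its data source.
user_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data-restored", "namespace": "default"},
    "spec": {
        "storageClassName": "example-sc",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "apiGroup": "backup.example.io",
            "kind": "VolumeBackup",
            "name": "app-data-backup",
        },
    },
}

# The staging claim: same spec minus the dataSource, in the populator's own
# namespace, so the CSI provisioner happily creates an empty volume for it.
# After filling that volume, the populator deletes this staging PVC and
# rebinds the PV to the user's PVC above.
staging_pvc = copy.deepcopy(user_pvc)
staging_pvc["metadata"] = {
    "name": "populate-app-data-restored",
    "namespace": "populator-system",
}
del staging_pvc["spec"]["dataSource"]

if __name__ == "__main__":
    print(json.dumps([user_pvc, staging_pvc], indent=2))
```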
H: Okay, so the intent is to get this alpha feature gate in so that we can use CRs when you enable this feature, and then we'll propose many different ways of doing this, and hopefully, as a community, we'll settle on the one that we like. That's a prerequisite for moving this feature to beta: we had better agree on something that we're all happy with.
H: So the way that we've done data sources so far is that we have two of them: we have volumes and we have snapshots, and each one has its own feature gate that's gone through alpha and beta and is not yet GA. We could do the exact same thing for backup, so you could have a third feature gate go through alpha and beta, but this is never going to end, right.
A: Actually, I slightly disagree on that, because the whole point of backup, and what the data populator proposal is getting at, is pluggability. It is not that reasonable to expect the external provisioner to understand all of these backup mechanics, and there would be no room for backup vendors to plug in, right. (Okay.)
H
So
so
and
again
I
we
could
schedule
another
meeting
and
I
could
go
over
the
details
of
so
so.
My
proposal
did
envision
backups
that
could
be
implemented
either
in
a
generic
way
that
was
vendor
agnostic
or
down
inside
the
CSI
driver
and
in
fact,
I
even
envisioned
a
way
that
maybe
they
could
interact
with
each
other
so
that
you
could
do
the
backup
with
the
vendor
CSI
driver
and
do
the
restore
with
a
generic
thing.
Conceivably.
D: Yeah, so, data protection workflows. We did have a very good meeting last week, we had a very good discussion, and I think we need to start getting a little more disciplined about working through exactly what we're doing and who's doing what. So I'll schedule another meeting and we'll do that together. At the high level we have a document. Where's the document? I just had the document.
D: Yeah, that's it. So I wanted to start with defining the scenarios that we're trying to defend against, and this is something I really encourage everybody to add to: whatever it is you think we need to be doing for backup/restore. I'd really like input from the existing data protection vendors, who've got the background on what really happens out in the real world, and things that may not be captured here. So we've got a set of those started. Then we'll define what objects would be protected, and from there we're going to try and define what the actual workflow would be in terms of our existing APIs. That way we should be able to see where we have gaps; for example, people are talking about, hey, at some point you've got to restore the snapshot ID into Kubernetes. Where does that get stored? How do you store it?
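One existing answer to part of that last question, where a storage-side snapshot ID lives once it is brought into Kubernetes, is the pre-provisioned VolumeSnapshotContent path. A minimal sketch, assuming the v1beta1 snapshot API, with an illustrative driver and handle:

```python
import json

# A pre-provisioned VolumeSnapshotContent carries the storage-side snapshot
# handle and is bound to a namespaced VolumeSnapshot, which workloads can
# then use as a PVC dataSource. Names and the handle are illustrative.
imported_content = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshotContent",
    "metadata": {"name": "imported-snap-content"},
    "spec": {
        "driver": "example.csi.vendor.com",                      # illustrative
        "deletionPolicy": "Retain",
        "source": {"snapshotHandle": "snap-0123456789abcdef"},   # storage-side ID
        "volumeSnapshotRef": {"name": "imported-snap", "namespace": "default"},
    },
}

imported_snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "imported-snap", "namespace": "default"},
    "spec": {"source": {"volumeSnapshotContentName": "imported-snap-content"}},
}

if __name__ == "__main__":
    print(json.dumps([imported_content, imported_snapshot], indent=2))
```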
A: Based on those scenarios, the idea is that we come out with a white paper or something like that, which describes the whole data protection problem for this working group, and hopefully we come up with something that is reasonable to everyone at the different layers. It's not supposed to be a big detailed design doc like what Andrew is writing right now, or what Ben is doing right now; it's really to describe things in general.
D
No,
it
sounds
really
good,
so
so
we'll
schedule,
another
meeting,
I'll
send
it
out
to
the
list
and
in
the
meantime
the
documents
here
you
know
feel
free
to
either
add
things
in
comment
or
you
know.
If
you
just
want
to
see
us
something
via
email,
you
know
either
you
know
one
on
one
slack
or
the
mailing
list.
I.
C: Yeah, I think the scope is very broad, so we may not be able to cover all of those, but at least we can have a high-level outline, and then we'll just decide how to approach each of them. For some of them we can come up with more detailed workflows, but others will probably be more like a stretch goal or something, right. (Yeah.)
G: Yeah, I think non-goals are very powerful, you know, so we figure out the things that... yeah.
C: Yeah, so, just at the end of the workflow section, right, there are several levels of workflows: the bottom level, which we can look at in terms of what Andrew is doing and come up with that; then the application level, which will probably be our focus as well; then the next is the cluster level; and then the data center level, which is probably out of scope for a while. I think we need to tackle the easiest ones first. Yes.