From YouTube: Data Protection WG's bi-weekly meeting for 20200923
A
Today we have a few things on the agenda. First, I want to let you know that we have submitted a KEP on ContainerNotifier. We can go over that quickly, and then we have been having discussions on the data protection workflow white paper. There are several topics being discussed; we can go over those.

A
Okay, so it's here. So Shenzhen and I went to the SIG Node meeting, just to tell them that we are ready to submit the KEP, and they identified a reviewer and an approver from SIG Node. And this is based on the Google doc that we shared earlier, just to gather some other requirements for this KEP.

A
This is only for issuing the commands: an API object that will request this ContainerNotifier to trigger those commands. And then, for the controller logic, it's going to be a separate controller. It will be a separate repo under kubernetes. So that's our plan, but then that repo needs to be sponsored by a SIG.

A
So that's phase one. And then SIG Node also asks us to write e2e tests, even for phase one. Normally for alpha, e2e tests are not required, but I think this is because they want to get some sense of how this is working, whether there are any issues, so as to find out early. And then, if phase one goes well, we go to phase two.

A
In that case, we'll move the control logic from the separate controller into the kubelet, and then we would also add support for signals. And then phase three — right now we have not really decided whether that's really needed or not, so we can decide after phase one and phase two. But we need to have both phase one and phase two done before we can move to beta.

A
So that's the general plan. There are still quite a few things we need to figure out, some details. We're getting some comments on the status update: there are some concerns there, because there could be performance issues if we're sending too many requests when updating the status, since it's like we have something for each container in all the pods that we are selecting. So those are the things we still need to sort out, but that's the general plan.
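As a rough illustration of the status-size concern raised above — one status entry per container across all selected pods grows linearly with pods × containers. All the numbers below are assumptions for the sketch, not figures from the KEP:

```python
# Back-of-the-envelope sketch of the status fan-out concern: an object
# that records one status entry per selected container can grow linearly
# with pods x containers and eventually approach etcd's object size limit.

ETCD_OBJECT_LIMIT_BYTES = 1_500_000  # default etcd request limit (~1.5 MiB)

def estimated_status_bytes(num_pods: int, containers_per_pod: int,
                           bytes_per_entry: int = 300) -> int:
    """Rough size of a status list with one entry per container.

    bytes_per_entry is an assumed average covering pod name, container
    name, timestamps, and an error message.
    """
    return num_pods * containers_per_pod * bytes_per_entry

def fits_in_one_object(num_pods: int, containers_per_pod: int) -> bool:
    """Would the status list still fit in a single API object?"""
    return estimated_status_bytes(num_pods, containers_per_pod) < ETCD_OBJECT_LIMIT_BYTES

# A selector matching 100 two-container pods stays comfortably small...
print(fits_in_one_object(100, 2))
# ...but a broad selector over thousands of pods may not.
print(fits_in_one_object(5000, 2))
```

With the assumed entry size, the concern only bites for very broad selectors, which is exactly the "too many pods selected" case discussed later in the meeting.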
B
Not too much. I just want to, you know, bring this to the community's attention. Please help us out: if you do see something in this proposal that doesn't fit into a particular domain, please let us know. There are a lot of discussions.

B
At least, I foresee a lot of discussions around this KEP, as you guys may have already noticed. This is a fairly big change to the core, and again, because we need to support signals, the API looks a little bit different than a regular Kubernetes core API. This is the whole imperative-versus-declarative thing, and there are many, many corner cases, and then, as she mentioned before, the scalability issues, etc.

B
You know, we are trying to make the KEP as thorough as possible, but we cannot do that without everyone's input. So please take a look. If you have anything, either comment on this KEP in GitHub or talk to us directly; that'd be much appreciated.
A
Well, I think there's this particular comment I'd like to bring to everyone's attention, because I think it is relevant. The SIG Node team has some concerns about whether we need to keep asking the kubelet to do retries, making sure the command is run successfully.

A
So we have decided that this is the responsibility of the higher-level external controller — whoever is requesting this command — to make sure to retry. So basically that means the kubelet will be running this only once, I think. So Tin actually brought up some good points: like, if we are using a selector, say, over all the pods, then what is the —

A
What is the, like, cutting point? Because what if you send those, and then there are new pods being created after that? What do you do? So there's some discussion there; you may want to take a look. I think, because this is a time-sensitive thing, when users want to do a backup, they should have all the pods up and running already.

A
So when this request reaches the kubelet, all the pods should be there, so I think we shouldn't keep coming back and, you know, trying to see if there are any new ones — because then you don't have them all at the same point in time, right? So this is something that we need to clarify in the KEP itself. Yeah, it's like the team was saying: this is like a pseudo-imperative.
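The "cutting point" question above is essentially whether selection is a one-time snapshot. A minimal sketch of that snapshot semantics (assumed behavior for illustration, not language from the KEP):

```python
# Minimal sketch: evaluate a pod label selector exactly once, at request
# time. Pods created afterwards are deliberately NOT picked up -- the
# quiesce/backup command targets a fixed, point-in-time set of pods.

def select_pods_once(pods: list[dict], selector: dict) -> list[str]:
    """Return the names of pods whose labels match the selector right now."""
    return [
        p["name"]
        for p in pods
        if all(p.get("labels", {}).get(k) == v for k, v in selector.items())
    ]

pods = [
    {"name": "db-0", "labels": {"app": "db"}},
    {"name": "db-1", "labels": {"app": "db"}},
    {"name": "web-0", "labels": {"app": "web"}},
]
targets = select_pods_once(pods, {"app": "db"})
print(targets)

# A pod created after the request does not join the already-selected set:
pods.append({"name": "db-2", "labels": {"app": "db"}})
print("db-2" in targets)
```

This matches the position taken in the discussion: the set is fixed when the request is issued, and anything created later belongs to the next backup.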
B
Yeah, one thing — I just want to add to what she was saying. One thing is also related to this: the problem of having too many pods selected for a particular notifier. Because the status, at this moment, holds all the statuses from each container, that brings the concern around: will the API object be big enough, size-wise, to hold all these statuses?
A
Okay, yeah, please take a look over the KEP. Okay, so next we'll continue to talk about the data protection workflow. I don't know if Stephen is here today? I have not seen him. — Oh, I can ping Stephen to join in a moment. — Okay, sure, yeah. So, yeah, just wanting to continue with this. So I think for the use case section, we already had a discussion last time.

A
I believe there was a recovery section that we did not get time to cover; it was the second item in the last meeting, and we didn't get a chance to go there. So let me see — in this case, I think we went through the first section, which is more about backup, and then the second section —

A
Okay, so let's give him a minute. And then we have also started to talk about the application workflow, so we can — I think this is still a work in progress. We can go to the next one; let's wait for Stephen.
A
Okay, so I see Tom is there. Tom, do you have anything to add for this use case section?

C
No, I think it's pretty well covered here. We can start walking through before Stephen gets here, if we'd like.
D
Sure — you're already running it, I say: let's go with stuff that works. So, yeah, just wanted to — I think where we left off last time was recovery. Wow, you have a lot of tabs. So anyway, we broke it down into a few chunks. The first one is, you know, the full application recovery, and this — you know, again, the assumption here is, you know, probably someone either deleted it —

D
— probably this was a full, sort of complete wipeout of the application, because the next one is rollback. And again, our assumption is this is something that either an app admin will want to run him- or herself, or maybe go to a central admin who can help coordinate for them.

D
This is one of the pieces where we talked last time, I think, about: ideally, application owners are sort of detailing what the resources are that comprise an application, but you can't necessarily assume they've done that correctly, or that they've done that work at all. And so they need to be able to either specify recovery based on the resources that were already defined during the backup, or — when it comes to the restore, and you've done, say, a namespace backup — they kind of go, yeah —

D
— what are the pieces that made up my application? So, some sort of mechanism, obviously, that allows them to, sort of, you know, look through what's there and pull out what they need. And really, the big thing here, I think — especially as a newer vendor that's built in this space — certainly guidance from the community, for anyone that's building this new: that order of operations is interesting, right?

D
What needs to be restored before what; the implications of restoring PVCs as they get recreated and assigned to nodes and whatnot. And I know this has been hit on the PV side at times, where you could potentially get a pod spun up in a place where it can't get access to its storage, or something. So those are —

D
— those are the pieces that I know, at least from when we were working on this: sort of, the proper ordering of things would be nice, and then, as things evolve, you know, some sort of guidance on that proper ordering would be a good thing.
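For context, existing tools handle the "what before what" problem with a fixed priority list — Velero, for instance, restores resource types in a configurable order. A hedged sketch of the idea (the priority list below is illustrative, not any tool's exact default):

```python
# Illustrative restore-ordering sketch: sort backed-up resources by a
# kind-priority list so dependencies (namespaces, CRDs, volumes, config)
# come back before the workloads that need them. The list is an example.

RESTORE_PRIORITY = [
    "CustomResourceDefinition",
    "Namespace",
    "StorageClass",
    "PersistentVolume",
    "PersistentVolumeClaim",
    "Secret",
    "ConfigMap",
    "ServiceAccount",
    "Service",
    "Deployment",
    "StatefulSet",
]

def restore_order(resources: list[dict]) -> list[dict]:
    """Order resources by kind priority; unknown kinds restore last."""
    rank = {kind: i for i, kind in enumerate(RESTORE_PRIORITY)}
    return sorted(resources, key=lambda r: rank.get(r["kind"], len(RESTORE_PRIORITY)))

backup = [
    {"kind": "Deployment", "name": "web"},
    {"kind": "PersistentVolumeClaim", "name": "web-data"},
    {"kind": "Namespace", "name": "shop"},
    {"kind": "MyCustomThing", "name": "cr-1"},  # unknown kind -> restored last
]
print([r["kind"] for r in restore_order(backup)])
```

A static list like this is exactly what the speakers call insufficient for custom resources, whose ordering relative to their controllers is unknown — which is why community guidance on ordering keeps coming up.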
C
Yeah, you know, I think there are a lot of vendors in the space now, and not that it's secret sauce by any means, but that really is kind of a core part of people implementing their own kind of high-level backup and restore controllers, right? You know, I can give an example of what we do: for example, there's a lot of coordination around things like scaling down workloads.

C
I think there's really a lot there, right? It's a pretty complex state machine to get that right, especially given the different types of workloads, given, you know, things like OpenShift. There are just many, many different coordination points you have to work through.
D
As
we
mature
right,
one
of
the
one
of
the
analogies
people
often
drew
as
we
as
we
worked
through.
This
was
you
know.
Vmware
makes
it
really
easy.
You
know
it's
a
very
clean
sort
of
you
know,
there's
a
there's
a
protocol,
and
you
do
this
this
and
this,
and
so
there
was
some
amount
of
lamenting
of
it'd,
be
really
nice.
If,
if
that
worked.
C
Yeah
I
mean
yeah,
I
think,
there's
a
lot
there.
Hopefully,
hopefully
we
don't
get
into
those
details
in
this
document.
No.
D
D
C
Oh
yeah,
no,
I
don't
think
it
should
be
super
soft
side.
I
mean
it's
just
there's
just
a
lot
there.
You
know,
I
think,
there's
a
lot
you
have
to
coordinate
if
you
want
to
bring
back
an
application,
you
know
there's
different,
even
philosophies
here
right.
C
One
philosophy
is
kind
of
git
ops
where
you
know,
maybe
maybe
the
depression
should
coordinate
with
with,
like
you
know,
argo
or
cd
or
something
that
actually
deploys
the
application
separately
from
restoring
the
data
and
that's
you
know,
that's
a
completely
viable
workflow,
and
so
is
the
data,
protection,
controller
or
product,
restoring
the
the
application
components
and
configuration
directly.
C
That's
also
very
common.
I
think.
A
So
so
what
do
you
mean
so
that
would
be
like
they?
Those
controllers
wouldn't
be
running
in
the
same
cluster,
I
mean
I
didn't
that
what
you
were
trying
to
say.
C
Well:
okay,
so
at
this
point,
there's
a
ton
of
a
ton
of
vendors
here
and
I
think,
there's
a
lot
of
different,
viable
architectures
for
those
vendors
we
run.
You
know,
cast
and
castings
data
protection
controller
runs
inside
a
cluster
and
coordinates
from
there.
Other
vendors
can
run
outside
the
cluster
and
that's
completely
viable
and
handling
handling.
C
The
recovery
of
the
actual
configuration
of
an
app
is
is
pretty
complex,
right,
essentially
yeah,
it's
the
same
thing
as
a
deployment
pipeline,
and
you
can
handle
that
directly
where
you
can
handle
that
through
your
normal
deployment
workflow
and
for
if
you're,
fully
recovering
application.
I
think
both
of
those
things
are
fully
viable
workflows.
D
Yeah
so
then,
so
then,
if
you
thought
that
one
was
tricky
now,
let's
talk
about
rolling
back
an
application
and-
and
and
this
is
this
is
basically
almost
more
of
a
traditional
use
case.
You
know
that
was
that
was
generated
years
ago,
which
is
someone
someone
so
there's
layers
to
it
right.
If
you
just
fat
finger,
you
know
sort
of
a
schema
in
a
database
rolling
back.
D
The
data
is
actually
relatively
easy
because
it's
mostly
just
the
snapshot
restore,
but
if
you,
if
you
do
something
more
destructive
at
an
app
level,
but
you
don't
you
know
you
don't
destroy
the
application.
You
don't
want
to
necessarily
redeploy
it
for
whatever
reason,
but
you
just
want
to
be
able
to
to
sort
of
go
back
to
that
previous
point
in
time
you
again
most
of
this
is
really
really
twofold.
D
D
Okay,
that's
that's
the
thing
I
want
and
then
the
second
one
is
again
in
terms
of
ordering
and
handling
all
of
this
again
make
sure
that
you're
overwriting
the
stuff
you
should
overwrite,
not
overwriting
the
stuff
you
shouldn't
overwrite
and
and
manage
all
that
for
me.
D
I mean, at least when we talk to people, we tell them, you know: doing a rollback live is never a good idea, because, you know, you're ripping out the world from underneath yourself — but what you're also not trying to do is lay down a full redeployment. And I think in the Kubernetes space this should be less common than it was in the sort of virtual-machine or physical-server cases. But we've had —

D
Yeah, so we've got sort of something beta-ish here, where you can basically think of it as: for a given resource, you know, you can see, sort of, what the resource looks like now and what the resource looked like before, and then you can choose that you want to move that definition of the resource back to a previous point in time.
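The "what it looks like now versus before" idea described here can be sketched as a simple manifest diff (a toy illustration of the concept, not any vendor's implementation):

```python
# Toy sketch of diffing a live resource against its backed-up version, so
# a user can review exactly what a rollback would change before applying it.

def diff_manifest(current: dict, backup: dict) -> dict:
    """Return {field: (current_value, backup_value)} for fields that differ."""
    changes = {}
    for key in sorted(set(current) | set(backup)):
        if current.get(key) != backup.get(key):
            changes[key] = (current.get(key), backup.get(key))
    return changes

live = {"image": "mysql:8.0.21", "replicas": 3, "app": "billing"}
backed_up = {"image": "mysql:8.0.19", "replicas": 3, "app": "billing"}
print(diff_manifest(live, backed_up))
```

A real implementation would diff nested specs recursively and ignore server-populated fields (status, resourceVersion), but the user-facing idea — show the delta, then let the user choose to roll the definition back — is the same.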
B
Stephen, I have a question here. You know, one of the typical things for rollback is: how do you make sure the namespace, or cluster, or whatever it is you're carrying out a rollback in, is clean for your rollback? I'll give a more concrete example: say you have a Deployment or a StatefulSet that has been upgraded to a new image, and maybe they just start crashing or whatever, and then you want to roll back, right?

D
Yeah, totally agree, and that's one reason why I think a lot of this tends to be a bit more sort of handheld and almost GUI-driven. Like I said, that's why these users almost want to see the diff, right? It's like: show me what's there now versus what was there before — they kind of diff it, if that makes sense.
F
We may say, you know what, you're never going to be able to do that — but it's worthwhile laying out the big picture of what we want to be able to do, and maybe we'll come up with something clever. But I don't think we should be worrying about "well, how do you do this?" at this point. This is more: what do we want to do? Yep.

D
Fair point. And yeah, like I said, we've heard this quite a bit, you know, and we've pointed out to these customers we've talked to, as well, that this is fairly reminiscent of, basically, you know, change-control kinds of procedures, right? And so it's a little weird that they're looking to their backup vendor for change control, but I suspect it's an indication that they just don't have it anywhere, so they're asking everyone —
D
They
talk
to
we'd
like
to
be
able
to
do
this
because
we
because
again
druva
is
a
company
we've
seen
people
ask
us
for
this
and
like
salesforce,
but
you
know
we
we
back
up
like
salesforce
too,
and
we
get
the
same
question
there
of
show
me
the
before
and
after
of
my
sales
force.
So
it
is
sort
of
a
change
control
use
case
in
a
lot
of
ways.
But,
like
you
said
dave,
it
doesn't
mean
that
we
have
to
do
it.
B
It
makes
sense
right.
Yes,
we
want
to
find
what
we
want
to
do,
but
we
want
also
at
least
have
a
rough
idea,
whether
there's
something
should
be
phrased
out
in
stages,
because
some
of
the
problems
is
really
hard
to
to
to
solve.
I'm
not
sure
whether
there's
even
a
solution,
I'm
not
sure
in
the
white
paper.
If
we
list
things
that
we
cannot
solve
in
the
short
term
or
even
long
term
wise,
I
I
don't
know,
what's
the
meaning,
there.
F
It would be a matter of "here's why it doesn't make sense" or "here's why it doesn't work." Customers have asked for this, but, you know, it doesn't make sense — because people keep asking the same questions and you have to come up with an explanation. So if you just have an explanation — "yeah, I understand you might want to do that, but here's why it doesn't work" — it's really helpful.
C
So, Sean, it's kind of a question of scope for whom. Because I think that use case is 100% a requirement for data protection in Kubernetes — and, I guess, in general — but the question is: who builds that, right? Whose domain is that? I think that's going to be the domain of the full backup and data protection controller, which is going to have probably multiple implementations, multiple user interfaces, and, you know, those controllers can make decisions themselves on what kind of user interface there is.

D
But yeah, I agree — later. Yeah, the goal, I think — you know, I'll just walk through the use cases we hear, and then, yeah, I don't want to try to solve them, because they're hard. So the next one, I think, was just resource-level recovery, and this is pretty simple. We hear this mostly for testing, where you get the customer who says — and I think "resource" here, the most common one we get, is: I'm running this —
D
— you know, MySQL, right? And I want to dork around with a version of MySQL just separately. I don't need to clone the whole app; I don't need to restore. I just want to be able to get a copy of this MySQL somewhere else, so I can test around with it a little bit — maybe upgrade tests, maybe config tests.

D
Cool. And then resource rollback — again, kind of like the one before. Again, this is the "I didn't break my whole application, but I did break my database, or my data store — again, just roll me back to a previous point in time." And again, the assumption here is: you are taking downtime when you roll this back, because you're generally crazy if you try to roll back, like, a live database — that's never a good idea. But again, the thought process is still the same as the application rollback; it's just not as comprehensive, right? This applies to a specific resource, or maybe, like, a resource pair of a pod plus its PVC, but usually not broader than that.
A
So
the
resource
recovery
that
they're
not
going
to
be
able
to
just
to
use
like
a
part
part
of
the
application.
It's
still
actually
the
whole
application.
It's
just.
They
don't
touch
some
part,
but
just
only
replace
the
specific
part.
D
If
you,
if
you
think
of
it,
so
this
depends
how
you
define
applications.
So
we
we
we,
you
know,
we've
been
trying
to
push
away
from
what
was
kind
of
an
old
backup
thing
which
was
oracle's
an
application
and,
and
you
would
usually
meet
people
and
they'd,
be
like
no
or
it's
a
database.
D
The
application
is
billing
or
the
application
is,
you
know
some
some
hr
system,
and
so
so,
if
you
think
of
it
that
way,
the
application
may
again
be
designed
by
guitar
and-
and
that
involves
you
know,
sort
of
a
web
server.
Plus
it
involves
some
file,
storage
plus
it
involves
a
database
plus.
It
involves
a
bunch
of
code
right,
a
bunch
of
sort
of
you
know,
stateless
pods,
this
customer
said
yeah.
D
I
just
want
to
test
out
like
a
database
upgrade
so
I
want
to-
or
I
think
in
particular,
they
wanted
to
go
from
standalone
to
replicated
database,
so
they
just
wanted
to
be
able
to
clone
their
my
sequel
so
in
their
mind
that
wasn't
the
app
right.
It's
a
re.
To
their
mind,
the
the
resource
is
really
the
database,
which
is
the
pod
in
its
pvc
or
it's
people
whatever
in
its
volume.
D
They
just
wanted
to
basically
restore
that
to
another
location,
so
they
could
do
some
things
with
it
and
so
in
their
mind,
that's
a
resource
as
opposed
to
the
application,
because
the
application
is
the
actual
business
app.
So
so
that's
that's
why
yeah
I
mean
if,
if
restore
resource
was
like
restore
config
map,
I
agree.
That's
that's
weird.
So
resource
here
was
was
meant
more
in.
In
that
sense,
the
way
at
least
the
way
the
customers
were
thinking
about.
G
— it. Yeah, I agree with that explanation, because, you know, it all kind of depends upon the definition of the application. You know, your application definition could be one app or multiple apps, and you could, you know, then do a granular, selective restore to get, you know, just pieces out of it to test them, and all those other kinds of use cases.

G
So, you know, completely agree with that kind of flow.
C
Are
we
talking
how
granted
are
we
talking
here?
Are
we
talking
about
like
at
the
kubernetes
api
resource,
like
the
api
object
level,
or
are
we
talking
about?
You
know,
for
example,
tables
or
files
within
so.
G
Yeah,
so
I
think
right
now,
two
parts
to
it
right.
I
think
the
selective
restore
will
be
from
the
api
pieces
and
then
the
granular
restore
would
be
something
on
the
lines
of
you
know,
cherry
picking,
the
items
from
the
data
volumes
itself.
G
Correct
correct
yeah,
so
I
mean
I
think
that
there's
a
bifurcation
here
right,
the
first
one
is
where
you
know
storm
brought
up.
Is
you
know
just
taking
all
the
api
pieces
and
trying
to
recreate
whatever
kubernetes
gives
you
and
the
second
portion
of
it
is
where
you
are
actually
introspecting
a
data
volume
and
trying
to
pick
out
those
items
from
that
volume
to
you
know
just
see
how
do
some
analysis,
or
you
know,
run
it
in
a
test
environment
or
do
some
other
things
on
it?
G
So
I
think
you
know
the
use
case
for
both
kind
of
becomes
the
same
either
you're
working
with
the
kubernetes
objects
or
you're
working
with
your
data
objects,
but
I
think
the
bifurcation
is
as
to
what
level
do
you
allow
that
user
to
do
it?
Is
it
you
know,
at
least
on
our
side,
the
trillion
side?
We
do
it
on
we
kind
of
provide
that
segmentation.
One
is
selective,
easter
and
the
other
one
is
granularity.
C
Yeah — and, you know, it's interesting here, from the context of the primitives that we're trying to build in Kubernetes to support these workflows: you probably don't need to support the very fine-grained, you know, segmentation of volumes or databases in these higher-level workflows. You know, I think most data protection vendors provide that kind of mechanism, right? Yep.
D
So then, next we have the namespace one, which — that one scares the heck out of me. And, you know — I didn't put it in here, but, you know, basically, this is, again, in my mind: the user basically just said, look, yeah, I want to be able to restore this namespace. Again, I'm assuming not in place, because we were talking before — man —

D
— that's a lot of — just, wow. So, assuming it's to an alternate namespace, the one challenge we have as we look at this is: how do I know that the namespace is empty — and do I need to worry about that? And, you know, that's the only part I struggle with sometimes: you get an alternate namespace to recover into — how do you know you're still not stomping on something, you know?

D
There's, you know, the old NetApp in me that says: oh, it'd be really nice to have sort of a fencing mechanism that says "this is an empty namespace, and until you release it, no one else can modify it except you, the restore process." But that was an old SnapMirror thing we used to have.

D
But, you know, that's the only part that sort of gives me the willies here: is my restore going to stomp on a bunch of stuff? But maybe that's just a buyer-beware kind of thing.
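A pre-flight "is the target namespace empty" check like the one wished for here could be sketched as follows. This is illustrative only — a real check would query the API server across every served resource type, which is itself part of the difficulty:

```python
# Illustrative pre-flight check before restoring into a namespace: refuse
# to proceed if the target already contains anything beyond objects the
# control plane auto-creates (e.g. the default ServiceAccount).

def namespace_is_clean(existing_objects: list[dict], namespace: str,
                       ignore_kinds: frozenset = frozenset({"ServiceAccount"})) -> bool:
    """True if the namespace holds nothing but ignorable default objects."""
    return not any(
        o["namespace"] == namespace and o["kind"] not in ignore_kinds
        for o in existing_objects
    )

cluster = [
    {"kind": "ServiceAccount", "name": "default", "namespace": "restore-target"},
    {"kind": "Deployment", "name": "web", "namespace": "prod"},
]
print(namespace_is_clean(cluster, "restore-target"))
print(namespace_is_clean(cluster, "prod"))
```

Even with such a check, nothing fences the namespace between the check and the restore — which is exactly the gap the SnapMirror-style locking mechanism described above would close.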
F
Well, we can also look at other ways of doing restores, right? So, for example, we've put together the data populator for volumes. Some of the app operators are getting to the point where they say "clone from a backup." There's no reason why we can't define things like "create a namespace from a backup."
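The volume-populator direction mentioned here already has a declarative shape in Kubernetes: a PVC whose `dataSource` points at a snapshot, so the new volume is pre-populated instead of starting empty. A sketch of that manifest, built as a plain dict (the field names follow the standard PVC API; the resource names are made up):

```python
# Sketch of the "clone from a backup" pattern: a PersistentVolumeClaim
# whose dataSource references a VolumeSnapshot, so the provisioned volume
# starts out populated from backup data. All names here are hypothetical.

def pvc_from_snapshot(name: str, namespace: str, snapshot: str,
                      size: str = "10Gi") -> dict:
    """Build a PVC manifest that clones its contents from a VolumeSnapshot."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": size}},
            "dataSource": {
                "apiGroup": "snapshot.storage.k8s.io",
                "kind": "VolumeSnapshot",
                "name": snapshot,
            },
        },
    }

manifest = pvc_from_snapshot("db-data-restored", "restore-target", "db-data-snap-20200923")
print(manifest["spec"]["dataSource"]["kind"])
```

"Create a namespace from a backup" would be the same idea lifted one level up: a declarative object whose data source is a backup, rather than an imperative restore command.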
F
Oh, okay — I'll give you my address and you can let me check. But, seriously, I mean, that's where I think we should be going — but the workflows, right, are what drive why we want to go there.
D
And-
and
here
I
mean
this
is
very
much-
the
customer
is
just
coming
to
us
and
saying,
and
especially
as
as
they're
seeing
things
like
the
hierarchical
name,
space
is
coming
out.
They're
like
yeah.
We
we
can
easily
see
this
world
where
we
just
restore
this
name
space,
because,
as
the
central
administrator,
I
don't
necessarily
have
visibility
into
the
apps
inside
it,
but
they're
telling
me
they
want
this
and
again,
as
we
get
to
the
the
sub
name
space,
you
know
almost
like
on
a
per
user
basis.
F
Yeah
and
we're
actually
doing
like
you
know,
we've
we've
got
the
new
project,
pacific
stuff
from
from
vmware
right
and
in
supervisor
cluster.
There's
resources
that,
if
you
restore
them,
you
break
the
cluster,
so
we've
locked
down
access
to
it.
We've
blocked
restore
on
that
via
valero,
but
we
want
to
move
towards
having
mediated
apis
where
the
cluster
defends
itself
and
says.
No,
I'm
not
going
to
let
you
write
that,
even
though
you're
the
backup
restore
utility.
A
Yeah,
this
is
actually
very,
very
tricky
because
there
are
so
many
different
resources
and
you
don't
even
know
what
they
are
almost
like.
You
have
to
deal
with
them
individually.
For
you
know,
whatever
application
is
running
there,
you
have
to
have
to
know
what
are
there
and
what
are
the
things
you
can
restore.
What
are
the
things
you
cannot
touch
just
I
think,
similar
to
what
you
have
talked
about
earlier.
A
As
soon
as
you
try
to
bring
up
those
resources
right
by
creating
them,
then
that's
going
to
trigger
something
else
to
happen
right.
You
may
not
want
that
to
happen
at
the
restore
time
so
so
yeah
it.
A
Yeah
and
then,
if
it's
like
some
other
some
other,
you
know
some
other
cr,
then
you
don't
really
know
how
does
control
behave
right
when
you
try
to
restore
them?
You
know
what
they're
supposed
to
do.
So,
it's
almost
like
you
have
to
know
exactly
what
you're
trying
to
restore
so
to
order
to
handle
them
properly,
because
not
all
the
controllers
are
written.
The
same
way
right.
A
Some
probably
just
can't
handle
that
when
you
try
to
you,
try
to
restore
them,
but
then,
when,
when
you
restore
those
yamo,
when
you
do
a
cubecard
or
grid
those
yamaha
again,
then
they
probably
try
to
create
those
same
resources
again
right.
So
how
do
you
reconcile
those?
The
controller
needs
to
do
that,
but
not
sure
if
every
controller
is
written
that
way
that
they
can
actually
reconsider
those
so
very,
very
tricky.
F
But as we start to build out, like, "these are best practices; this is how to build your application," you can start to say: well, your application isn't doing the right thing — that's why restore doesn't work. Once you've got this firm base — it's like with a database right now, right? Databases know they need to sync everything to disk, and if you write an application that can't recover from a crash, it's no longer a case of blaming the system for crashing, right?

F
You say: no, that application needs to be able to recover. And I think that's what we'll get as we figure out what the best practices are for, you know, the applications that are running in Kubernetes: no, you need to do these things so that you can be backed up and restored, and if you don't do that, it's not going to work — and that's not our fault, that's your fault!
C
Yeah, there's always this trade-off between — you know, what do we push to the responsibility of the user versus the controller itself? And there are a lot of things that you just can't do in the controller.
D
So
the
next
use
case
is
just
the
even
even
worse,
which
is
the
one,
the
customers
that
say
I
want
to
recover
all
the
name
spaces
on
my
cluster
and
and
some
of
them,
though,
though
this
is
dying
off,
at
least
in
the
last
few
months,
and
I
don't
know
if
you
guys
are
hearing
the
same,
it
used
to
be
well
and
we
want
you
to
sort
of
you
know.
D
Whatever
cluster
state
needs
to
be,
you
know,
sort
of
recreated
that
at
least
on
our
world
that
seems
to
have
diminished,
and
so
I'm
not
sure
this
one
matters
as
much
anymore,
but
but
I
was
I
was
gonna
leave
it
open
for
for
for
discussion.
G
I
think
yeah,
I
think
I'm
on
that
on
the
same
page,
you
know
the
cluster
recovery,
like
you
know,
I'm
not
able
to
kind
of
figure
out
the
difference
between
doing
you
know,
mapping
all
your
name
spaces
together
versus
doing
a
full
cluster
recovery
if
there
is
any
added
benefit
or
value
that
is
coming
in
from
the
customer
point
of
view.
So
from
our
end
as
well,
you
know
it
is
becoming
more
of
a
diminishing
use
case,
but
we
haven't
really
eliminated
it
completely.
G
You
know
still
kind
of
trying
to
see
you
know
if
there
are
additional
areas
or
things
that
we
haven't
really
thought
about,
but
other
than
you
know
a
full
namespace
remapping,
all
your
namespaces
and
cluster
recovery.
I'm
not
really
seeing
much
of
a
difference
in
general.
B
A
key
thing
over
here,
yeah,
just
don't
forget,
we
also
have
now
namespace
resources,
yeah
the
names
miss
recovery
piece
may
not
necessarily
cover
non-namespace
resources.
B
As
long
as
I'm
fine,
I
think
I
agree
with
you
guys,
because
I
don't
know
how
to
define
cluster
recovery
over
here
like
cube
system
right.
If
you
already
have
a
cluster
exactly
right,
you
can
actually
replace
all
the
controllers
over
there.
I'm
not
sure,
because
this
is
normally
tied
to
a
specific
vendor
to
a
specific
kubernetes
version
right,
like
cube
public,
you
probably
don't
want
to
touch
that
namespace
as
well.
E
But
bear
in
mind
that
we
need
to
cover
non-namespace
resources.
Yep,
that's
perfect!.
I
Right — then, yeah, so, yeah, we ran into that as well. By default, Velero has this feature: if you have it set to null, then it will back up any cluster-scoped resources that are mapped to the namespace.

A
Yeah, it's like you have to know the relationship between them, right? Otherwise, how do you know how to map them? PV to PVC — that's well known; but if it's, like, some custom resource, then you don't really know what that is. There's some relationship between namespaced and non-namespaced objects that we may not really know.
D
Yeah, so I'd say — I mean, the general view they have ended up with — like I said, the cluster one's kind of faded, for all the reasons, but on the others: basically, what marks it as done, in their mind, is — they view it as, you know, basically, the resources have been put in the state that was in the backup; the PVs have been sort of put back into the state —

D
— at the point in time, and then, you know, sort of, things are running now. That doesn't necessarily mean, in their mind, that everything is magically fixed, right — because, again, they understand they may need to do a little, sort of, hand-massaging afterwards. But they view it as: if we have put the resources that they have specified to be restored back in the state that they were in during the point in time of the backup, they view that as a successful recovery.
D
Yeah, so, for example, if we go to, like, the resource one — the MySQL database — the assumption there is: if they're restoring that MySQL database, again, to, let's say, an alternate location, they would expect that they have a MySQL up and running using a clone of the point-in-time version of the data that they took the backup from. But because they only recovered the database —

D
— they don't necessarily expect that there's external access to it, or any of the other parts, right? They're just saying: my database is up and running. One of the things they've asked for as well — again, you know, that we're not doing at this point — would be things like: I'd like to be able to change the secret on that database before I do the restore. But for now, things like that they can manipulate afterwards; we are not doing it. You know, we restore it back to what it was at that point in time, along with the resources associated with it.
B
If
I
hear
you
correctly,
that
means
that
other
than
just
simply
recreate
resources
in
the
target
cluster
or
in
the
target
namespace
the
they
are
also
expecting.
Basically,
the
applications
up
and
running
that
have
some,
maybe
kind
of
endpoint
exposed,
et
cetera,
et
cetera.
B
B
We
probably
need
to
also
think
a
little
bit
on
that,
given
all
these
use
cases
as
well,
how
to
send
the
signal
so
that
the
restoration
we
can
call
it
a
success
or
coil
it
down
right.
G
I think, if we are allowing, like, you know, selective restores and everything, then, you know, "when is the restore successful?" becomes even more challenging to answer natively, right? So I think, to your point: providing a user variable, or a user kind of, like, liveness probe or something around the restore, would probably be the implementation. But, again, that would be specific to how the vendors do it.
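The "liveness probe for a restore" idea could look something like the sketch below. This is a hedged illustration: the probe callable and polling interface are assumptions, not an existing Kubernetes or vendor API:

```python
# Sketch of a user-supplied restore-success probe: the restore controller
# polls a caller-provided check until it passes or attempts run out, and
# only then declares the restore successful.

from typing import Callable

def wait_for_restore(probe: Callable[[], bool], attempts: int = 30) -> bool:
    """Poll the user's probe; a real controller would sleep/backoff between
    attempts. Returns True as soon as the probe passes."""
    for _ in range(attempts):
        if probe():
            return True
    return False

# Example probe: pretend the restored database reports ready on the 3rd poll.
state = {"polls": 0}
def mysql_ready() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_for_restore(mysql_ready)
print(ok, state["polls"])
```

This mirrors the point made next in the meeting: for well-known resources a generic probe is feasible, but for arbitrary CRs only the user can say what "up" means, so the check has to be user-supplied.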
A
Yeah, I think that also depends on, like, what other resources you are restoring, right? If it's well-known ones — Kubernetes built-ins — certainly you can go try to check if that's up; but with some other resources, some CRs, you don't even know how to check those, exactly, yeah.
D
So then, probably the last one we have time for today — and one that will come up more, I think — is just this question of: I'm going to keep this backup copy for some period of time, right? Three months, six months, nine months — God help us, seven years, twenty years, someday. And just that: how do we maintain, sort of, the versioning, right? So, on the data side, fine, right: there are snapshots, and backup vendors have ways of storing data, and deep down inside —

D
— we know no one's really ever going to get it back anyway — I'm kidding; I think this is recording. But the metadata part is a bit scarier, given how quickly things rev. And so, again, maybe it's just an exercise left to the backup vendor: how to, sort of, map from resource definitions and, sort of, protocols from six years ago to whatever it is today. But that is one that, you know —
J
From what I remember, when that was discussed or shown, I think that was more of a case that there are different ways to do an export. And I think the problem here is that, if there's a period of time, or drift over a few years, you probably need some kind of mechanism — more on the restore side — that can do the migration.
F
I think the closest to a good answer I've seen so far has been, like, VMware VMs, right — because we have a relatively small surface for a VM, and we're able to emulate the hardware going back quite a ways, so you can actually take a relatively old VM and boot it and run it —

F
— as-is. The concept — that's pretty cool. I'm not quite sure how we can apply that in the Kubernetes space, but expecting forwards compatibility or backwards compatibility is kind of hard.
D
So I think, with that, I will confess that — one, we're almost out of time, and two, as we get into mobility, I'm a lot more uncomfortable on those use cases — not because I think they're bad; it's because I personally don't have exposure to them. So I kind of — I think I might have copied those off of the work that the other gentleman did.
A
Okay, anything else? Any questions you guys want to cover? We have one minute left.