From YouTube: WG Data Protection 20201021
A
Great, yes, it's now 9:05. Hello folks. Again, today is October 21st, Wednesday, and this is the Kubernetes Data Protection Working Group meeting. This meeting will be recorded. We have two items to discuss today. Historically, we might have put too many items on the agenda and not had enough time to go through all of them, so Xing and I discussed it; maybe it's time to, you know, reduce the load a little bit and better amortize those detailed discussions into smaller groups. Yeah. First of all, we will go through some data protection workflows. Phuong will focus on the application backup and restore workflow, from his experience, and then we have a couple of updates on KEPs, including snapshot and container notifier. I think Ben has done a lot of work on the generic data populator.
B
Yeah, I think so. Just let me open it.
D
So we did talk about backup and restore before. Today we'll talk mainly about the backup and restore workflow.
So, let's start with the backup first, right. When we back up an application, in general we want to back up first all of the resources needed for that application to run, and then we go to the data of that application. So the general workflow will be like this: first, we will back up, for example, the namespace, the cluster role, or anything the application needs even before the application starts.
D
Before we back up the data, we might need to do additional steps that put the application into a specific condition for backup. For example, in MongoDB we might want to disable the load balancer between the nodes before we go into backing up the data; that can be done in the second step here. And then the third step, which is almost the main step of the backup workflow, is where we back up the data, and during this step there are two sub-workflows.
D
One of them is the logical dump. If you look into MySQL, you can see applications like mysqldump do that; other database applications also have similar tools. I think in a previous meeting somebody also raised etcd, the etcd for Kubernetes; we can also use the etcd dump, and that can be done in this step too. And then the second workflow related to backing up the data is the snapshot.
D
In a previous meeting, I mean maybe two or three meetings ago, we already talked about how to use quiesce and unquiesce, using the container hook to quiesce the application, then take a snapshot of the PVC, and then unquiesce, and so on and so forth. So those are the steps involved in this second workflow, for snapshots.
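The quiesce, snapshot, unquiesce sequence described here centers on creating a VolumeSnapshot per PVC. A minimal sketch of such an object; the namespace, snapshot class, and claim name below are hypothetical, not from the meeting:

```yaml
# Hypothetical VolumeSnapshot taken while the application is quiesced.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongodb-data-snap                  # assumed name
  namespace: my-app                        # assumed namespace
spec:
  volumeSnapshotClassName: csi-snapclass   # depends on the CSI driver in use
  source:
    persistentVolumeClaimName: mongodb-data
```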
D
I want to talk a little bit about this second workflow here. For some applications, you might see that all pods are not equal, right: some pods are primaries, the others are secondaries, and among the data in these pods, even though they are somewhat replications of each other, there is a relationship between them.
D
For example, if we want to back up, say, those three pods, then you have to back up the secondaries first before you back up the primary, because the primary contains the new data coming in. So during this snapshot and backup of the data, we have to somehow serialize them into a specific order, so they are backed up in the right order.
D
But for some other applications, there's no need to back up the pods and their PVCs in a specific order; they might have a different requirement.
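The ordering constraint just described, secondaries before the primary, can be sketched as a small helper. This is a toy model of the rule, not a real Kubernetes API call; the pod names and roles are hypothetical:

```python
# Toy sketch of serializing the snapshot order for a replicated application:
# secondaries are snapshotted before the primary, because the primary keeps
# receiving new writes.
def snapshot_order(pods):
    """Return pods in backup order: all secondaries first, the primary last."""
    # sorted() is stable; False (secondary) sorts before True (primary).
    return sorted(pods, key=lambda p: p["role"] == "primary")

pods = [
    {"name": "mongo-0", "role": "primary"},
    {"name": "mongo-1", "role": "secondary"},
    {"name": "mongo-2", "role": "secondary"},
]
order = [p["name"] for p in snapshot_order(pods)]
```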
D
For example, you might want these snapshots to be taken very close to each other, within a few seconds, and that's where, I think, the design from Xing Yang comes in, to snapshot the volume group, snapshotting the entire group of volumes together. That is the kind of consideration we need to take into account during the snapshot step.
D
After the snapshot action, after the PVCs have been snapshotted, they will be backed up into the repository, or the backend backup storage device, later. So that is the third step. The fourth step is the opposite of the second step, as you see in the workflow, right: if we disabled the load balancer, for example in MongoDB, then in this fourth step you have to revert that back; you have to put it back into action, so you have to enable the load balancer at this point.
D
So those are the general four steps in the application backup workflow. It covers both the logical dump and the snapshot; the only difference is at the third step, where we either take a logical dump or take the PVC snapshots. So that is the application backup workflow. Could you move to the next screen, so we can talk about restore?
D
For restore, we also have to consider two workflows, one for the logical dump and the other for the snapshot. However, in restore there are two scenarios that we kind of have to consider separately. One of them is restoring to an entirely new application; there you don't even need to consider many aspects of the application that is currently running. The second scenario is restoring to a system where the application is currently running; we restore to production.
D
There is a slight difference between those two, right. So let's talk about the first scenario, where we want to restore using the PVC snapshot and we restore to an entirely new namespace. The first step would be to use Velero or a similar tool to restore the entire namespace with the application in it. It might also restore the cluster-wide resources used by the application, as I mentioned above, like the cluster role.
D
All of that, you know, all the secrets and the few other things being used by this application, should also be restored in this step. The second step is to restore the data. This is very important: you have to restore the data first, before you restore the application, because if you restore the pod but you haven't restored the PVC, then the pod coming up will fail. So what we want to do is restore the PVC, the data, first.
D
Then, when we restore the pods, they will automatically hook up with the PVCs. That is the third step. So let's talk about the second step, restoring the PVCs, in a little more detail. When we restore the PVCs, first we will create empty PVCs, one or multiple depending on whatever you have in the backup to restore, and then we will have to mount these empty PVCs into some data mover pods.
D
At that point, we can pull data from the backup storage and put it into these PVCs. Currently we have two types of PVCs, right: one of them is raw block, which can be mounted as a raw block device on the data mover, and the other one is the filesystem PVC. Then, after we restore the data into the PVC, which is either a mounted directory on the data mover or actually a raw block device, we will terminate the data mover.
D
We delete the data mover. Remember that when we delete the data mover, the PVC is not deleted; it's just the data mover pod that is deleted. At that point the data mover pod can be removed, and then, in the next step, we restore the pods, the StatefulSets, the services, etcetera. At that point the data is already available in the PVCs.
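The data mover described above can be sketched as a short-lived pod that mounts the freshly restored, still-empty PVC. A hypothetical sketch; the image name, claim name, and mount path are assumptions:

```yaml
# Hypothetical data mover pod: mounts the empty PVC so data can be pulled
# from backup storage into it, then is deleted to release the volume
# for the application pod (ReadWriteOnce allows only one consumer).
apiVersion: v1
kind: Pod
metadata:
  name: data-mover
  namespace: my-app
spec:
  restartPolicy: Never
  containers:
  - name: mover
    image: example.com/data-mover:latest   # assumed image
    volumeMounts:                          # filesystem-mode PVC
    - name: restore-target
      mountPath: /restore
    # For a raw-block PVC, use volumeDevices/devicePath instead.
  volumes:
  - name: restore-target
    persistentVolumeClaim:
      claimName: mongodb-data              # assumed claim name
```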
D
Then the pods will automatically hook up with these PVCs and start running the application again. I want to talk a little bit about the data mover here. I think it's a general idea in Kubernetes, but we have to use the data mover on demand, because currently Kubernetes has a restriction that we cannot mount a device on a pod at runtime; we have to mount it when we specify the configuration to create the pod. So that is the reason.
D
When we have a new PVC, we have to create a new data mover pod with that PVC mounted on it, and then, after we're done with it, we have to delete the data mover pod to unmount the PVC. Another thing I want to point out here is that most of these PVCs cannot be written to at the same time by more than one pod. That's why we have to delete the pod after we are done.
D
You know, after restoring the data, so that other pods can access it. So that is the workflow for restore-to-new with the PVC snapshot. For restore-to-production, it's a little bit more complicated. Could you scroll the screen up a little bit more?
D
It will be a little bit more complicated, because the namespace and the application are still running, the namespace is still there, and so on and so forth, and we still have the same restriction that I just mentioned earlier: if an application pod is using that PVC, we cannot have another pod jump in, mount that same PVC, and start writing data to it. That is the current problem, right.
D
Unless the PVC has a different mode: there is the ReadWriteMany mode, but the most common application setup uses the ReadWriteOnce mode, so the volume can only be written and read by one pod at a time. If that is the case, then the first step we need to do to restore an application in production is to scale the deployment of that application down to zero.
D
So, say we have a deployment, or a deployment config, that has five pods; then we have to scale all of that down to zero, which effectively means we delete all the pods currently using this PVC. After we scale down to zero, none of the PVCs have anything mounted on them anymore, so then we will create a data mover with these PVCs mounted on it.
D
After that, we will delete this data mover, then scale the application back to the previous number of replicas it had before we scaled it down. So these are the steps that need to be done for the ReadWriteOnce mode. If you have ReadWriteMany... I don't know how many storage vendors out there actually support ReadWriteMany, because that means two pods can mount the same PVC at the same time and both start writing.
D
I'm
not
sure
how
much
problem
you
are
buying
into
such
a
time.
Okay,
could
you
move
down
to
the
next
page,
so
that
was
for
the
snapshot
way
of
backup
and
restore.
Now
we
were
talking
about
restore
to
the
using
the
logical
down.
The
logic
of
them
is
is
a
little
bit
less
complicated
than
the
previous
one,
even
though
we
still
have
to
do
some
of
the
restores
of
the
namespace
and
so
on
and
so
forth.
It's
pretty
similar
to
the
previous
way.
D
But when we restore the data, we just run the reverse of the logical dump operation. For example, if you're using mysqldump, you just run it again, but going in the other direction, and everything else should be as normal. We could talk a little bit about the logical dump, where the data from the logical dump goes, how that differs from the PVC approach, and what the pros and cons are.
D
But I think we can talk about that in a different meeting, because this meeting, I believe, is only meant to introduce the workflow, right. So I think that's all I have.
E
So, sorry to interrupt; I was on double mute earlier and I couldn't get off it. When you're talking about ReadWriteOnce PVCs: you can have multiple pods attached to a ReadWriteOnce PVC; they just have to be on the same node.
D
That's the thing: yes, that can be the case, but my concern was that some of the PVCs have a requirement of affinity to a specific node. Then, if we want to do that, we have to create the data mover pod on that node itself. Okay.
E
No,
I
it
sounded
like
you
wanted
to
use,
read,
write
once
as
a
protection
against
a
restore
happening.
While
a
workload
was
still
running,
and
I
was
going
to
say
that
that
that
one
you
won't
actually
get
that
protection
unless
you
can
find
a
way
to
ensure
that,
like
you,
have
different
nodes
for
these
things
and
furthermore,
like
read,
write
many
volumes
are
actually
quite
common.
At
least
where
I
come
from.
You
know
we
we
have
nfs
volumes
and
they're
all
read,
write
many.
D
I
I
at
least
there
for
my
that's
that's
this
but
yes
like,
like
I
said
in
this
in
this
flow
here,
that
we
consider
both.
If
we
have
a.
If
you
have
the
redry
many,
then
all
you
don't
have
to
concern
about
having
two
part
writing
to
the
same
ppc
at
the
same
time,
because
there
might
be
some
access
violation
there.
You
know
there,
so
the
application
has
to
take
care
of
that.
D
I also explicitly spelled out which steps to skip if you're using ReadWriteMany. In the first step of restore-to-production, if you don't have a concern about multiple writers, that is, if you don't have a concern that the data mover will write into your restored PVC at the same time as the application, then you don't need to scale the application down to zero, and you don't have to scale back up in step number three. I did spell it out there in the document; you can see it.
E
Okay,
I'm
still
having
a
hard
time
wrapping
my
mind
around
the
context
for
this
whole
discussion,
because
because
you're
talking
about
some
very
specific
management
of
approaches
for
doing
restores
like
with
various
applications-
and
I
I
my
mind-
is-
is
more
in
the
place
of
how
do
we
develop
something
that
works?
No
matter
what
you're
doing
I
I
don't.
E
It
doesn't
seem
to
me
that
a
backup
and
destroy
application
should
like
that
the
layer
that
does
the
restoring
should
know
anything
about
the
application
itself.
Like
that,
you
know,
you
need
a
facility
to
restore
a
backup
to
a
volume
that
is
agnostic
to
what
the
application
is
and
then,
if
there's
any
application,
specific
requirements
for
like
what
you
do
after
the
data
is
back
on
the
volume
but
like
that
should
be
handled
by
a
higher
layer.
B
Right, but if you are actually backing up and restoring a StatefulSet, you do have to understand how that works, actually. So I think, yeah, maybe we went a little bit too fast on this one. So this one, basically, I think he talked about how you also need to back up the other Kubernetes resources, right, the Kubernetes metadata.
B
Okay, so this part, actually... yeah, normally, you know, I'm more familiar with the restore-to-new scenario. So, for production...
D
It depends; it depends on how you restore this.
B
I have a question here, because, yeah, maybe we can actually focus on the new scenario first, but I just have a question for Phuong here. When you scale down to zero, all the PVs and PVCs are still there; they're not deleted when you scale down, right? Is that... yeah, that's what it means? Oh right, actually, there is already something upstream...
D
So this actually, yeah, this actually interferes with that. Like I said, if we have to scale it down because we're concerned about having two pods accessing the data at the same time, then we scale it down. But when you scale it down, you interrupt the workflow for the user, right? The pods disappear; the user cannot use the application during that time.
D
Where we allow multiple pods to write into it at the same time, or also multiple pods... I mean, in all those cases you don't need it. But I think I understand the point: if you want to provide a workflow at a very low level that ignores what application is running, that is a very primitive, low-level workflow, and it works for some cases and might not work for others. I think what we are concerned with here is the app-consistency level of it, right.
E
Yeah, you need that higher level, but you want to be able to layer the solution, so there's something that takes backups and turns them into volumes, and then there's also something that takes applications and restored volumes and rebuilds your application out of them. You need both things. It feels like this is a very strange blurring of the lines.
D
We did talk, in previous weeks, about what needs to be done to back up and restore an individual pod, and this one is about how we back up an application that involves more than one pod, like multiple pods or multiple StatefulSets and the services associated with them, and so on. So, from my point of view, this one is almost one layer up compared to the previous one, where we focused on a single pod and its PVC.
D
This is actually in production... oh, okay, yeah, but not all of it, not the details. Some of these, for example ReadWriteMany, my product did not support. Okay.
A
Yeah, I've got a couple of questions over here. For restore to production: that sounds to me more or less like a rollback scenario.
D
This one actually allows you to do CBT, where we only restore a few blocks, right, if you really want to go with CBT. Say you want to go back to two days ago: if I restore all of the data, it might take three hours, whereas if I go back this way, using a differential snapshot for example, then I only need to go back a few blocks.
H
The point here is, we are not creating the whole volume again; we are just, like you said, rolling the volume back, or rolling it forward, whatever.
A
It would also be interesting to see whether we can better utilize the quiesce mechanism here, because at the end of the day you only want to freeze your application. If the quiesce work is already doing this, then maybe it is okay to just restore the volume and not touch the application at all, or the Kubernetes resources at all.
E
If we were to build this on top of what exists today: say I have a snapshot mechanism, I'm going to take a snapshot of my application using Kubernetes snapshots, and I want to restore those snapshots. The way that works today is you're going to create new PVCs, right, because that's how snapshot restore works. It doesn't seem like such a big deal to me to tear down all your pods and rebuild them pointed at the new PVCs.
B
There might be a performance issue; maybe Phuong, you guys can chime in on that.
E
I mean, you need the fastest possible way of reconstituting a PVC from a backup, one that's efficient.
E
That's what I'm getting at: if there was a way to just say, hey, I took the snapshot from this volume and I would like to roll this volume back to the snapshot that I took. That would be exactly like what you're describing here: I want to roll the volume back to the backup that I took. So if we had the same functionality, it would just be, oh yeah, roll back.
B
I mean, yeah, unless your snapshot is actually on a remote array, like AWS EBS, right, which automatically uploads it somewhere, or some driver can also do that, where they upload a snapshot somewhere. But then they're...
E
Or, you know, the API object that you use, right. Right now, the way that you effect a restore of a snapshot is you create a new PVC, you give it a data source, you point it at the snapshot, and boom, your volume pops in. I'm envisioning that there will be something similar for backups, and whether it is efficient or not is an implementation detail.
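The restore path E describes, creating a new PVC whose data source points at the snapshot, looks roughly like this; the names, storage class, and size are hypothetical:

```yaml
# New PVC populated from an existing VolumeSnapshot by the CSI driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data-restored
  namespace: my-app
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-sc                 # assumed storage class
  resources:
    requests:
      storage: 10Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: mongodb-data-snap                # the snapshot to restore from
```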
E
I'd like to understand what low-level Kubernetes API operation you would undertake to say: okay, here's my backup; I don't want a new volume, I want the backup to just go back to the volume from whence it came, and somehow magic will happen, maybe changed block tracking is used, you know, and the data will get there. Whatever we do there, we should also do for snapshots, to maintain the symmetry. That's all I'm getting at.
D
I think that's a good idea. Basically, at an abstract level, a snapshot, or the data from the backup device, is just a bunch of blocks, the data at a specific point in time. You can think of it that way. But whether it can map nicely into a symmetric way in Kubernetes, that is the question, right.
B
Should we actually look at Ben's proposal on the external data populator? Because that would be a good one, right: you can just create a PVC from some other data source, which could be a backup repository.
E
The core API, I think. But this other thing of basically overwriting an existing volume: I find that very interesting, but I want to understand what the user does at the Kubernetes layer to make that happen, because you don't create a new PVC, and it's not obvious what you would send to the kube-apiserver that would say, hey...
A
We're at the 45-minute mark. Xing, I know there are still a couple of other items on the agenda; maybe we can postpone them, if everybody agrees.
B
Yeah,
that's,
okay!
Actually
I
think,
because
those
are
we
can
that
you
go
very
quick.
So
so
do
we
want
to
take
a
look
of
this
and
then
so
ben?
Is
this
this
workflow?
A
I've got a couple of questions, but in general it looks fine to me. I think we need to resolve those questions ahead of time. First of all, there's the sequencing of the namespace and cluster-wide resources.
A
PV restoration might need to happen after the PVC has been restored, because there's a strong dependency there. Imagine you have a StatefulSet: if you restore the StatefulSet first and then go ahead and restore the PVCs, those PVCs might already be in use by the pods created by the StatefulSet.
B
I think those should come later. Yeah, do you have those here? Okay, maybe Phuong should add those, yeah. Those resources should be restored later, after the data.
D
Yeah, yeah, because the PVC is actually namespace-scoped. So if you want to restore the PVC, you have to restore the namespace first, yeah.
A
I saw it, but yeah, the thing is that "namespace and cluster-wide resources" is a very blurred term right there. Many namespaces, like kube-system and kube-public, are defaults; those are not restorable or deletable, right. Cluster-wide, same thing: you definitely, probably, don't want to restore a VolumeSnapshot, for example, right. So I'm not sure whether we should be explicitly calling this out at the beginning. That's the only thing I was thinking.
B
So far, I think it may be helpful if you just take one simple StatefulSet example, put the YAML file there, and see what the resources are that we're seeing at this step, and what we already see at the end. Maybe that will be helpful, because it's very general when you just say cluster-wide resources.
A
And another thing: the data mover pod is basically, effectively, very similar to Ben's data populator.
A
So I want to hear his opinion on this as well, but one thing I was thinking originally is about creating empty PVCs, right: these PVCs don't necessarily need to be the PVCs the application will be using in the future. They can be temporary PVCs, and then you transfer the volume, the underlying PV, after the population is done. I think that's how Ben did it. Is that right, Ben? Yes.
B
Xing, do we actually have two PVCs here as well? I think there's a target PVC and then you create a...
E
The whole point of the data populator work is to create a framework where you can create CRDs that represent something that has data, that can become the data source of a new PVC, and have Kubernetes basically treat that like the existing cases of data source snapshot or data source PVC, which implement a clone of a snapshot or a clone of a PVC; you can basically do a clone of something else. And I'm fairly agnostic about what actually happens after that. Now, I do have a specific prototype of a way to do it.
E
So I'm not saying it's a bad idea to overwrite existing PVCs, but I really want to understand the user interface for that: if I have something, and I want to tell Kubernetes to put that something in this existing PVC, how do you do that at the Kubernetes API layer, such that a controller could see that request and do the right thing at a very low level? Because once you have that, I understand you can build all the upper layers and build a really fancy application-aware backup and restore system. But just the low-level nuts and bolts of "here's my data, here's my PVC, make the data go into the PVC" is kind of a hard thing to do, especially with the way the Kubernetes API is kind of declarative. Where would you declare your intention to make that happen?
B
Phuong, can you talk about what the advantage of the second approach is? Why do you need to do the second approach; why does the first one not work for production?
D
To restore to the existing namespace: if you instead have to create a new namespace, right, some of the elements tied to the namespace are affected. If you look into, like, a Helm chart or something, they embed the namespace name into the addresses being used by the pods that are running.
H
No, so I guess the real question is: why not tear down the contents of the existing namespace and restore it back? I think, since here we are just addressing the workflow, what I guess Phuong is trying to convey is: say you have an application, it has multiple filesystem volumes, and say one file got corrupted. So now you want to restore that file.
H
So
it's
not
necessarily
tied
into
kubernetes
api,
but
a
workflow
could
be
you
go
into
that
backup
application
select
the
file
you
want
to
restore
from
the
copy
that
was
created
on
the
secondary
storage
and
use,
select,
restore
and
then
the
backup
application
will
scale
down
the
parts
applications
not
running.
It
will
connect
to
that
pvc,
the
existing
pvc
restore
that
file
and
then
scale
back
up
the
port.
So
it's
nothing
to
do
with
kubernetes
api.
It's
just.
E
I guess my mind is on: how do we provide a Kubernetes API that just lets anybody do this stuff? And those are two entirely different questions. I mean, that's why it's not making any sense to me. Okay.
A
I think that's too application-oriented; maybe it's going to be, at least to me, very challenging to achieve that with some generic mechanism.
D
I kind of lean toward that too. Maybe restore-to-production is too specific for the generic workflow, but we can use the restore-to-new way; completely restoring to new, I think that's okay. Restore to production is maybe for a specific application, or a specific vendor, to implement.
E
We want to create an interface that consumers can consume and providers can provide, where there are multiple implementations on both sides that work together, because that has value as a standard, right. That was sort of the point of the data protection working group: we're trying to develop a standard where consumers of backup APIs and producers of backup APIs can all implement the same thing, and then we can have an ecosystem that actually works, rather than having one tool that just does all the work.
E
Yeah, so that's an argument for: we should define a changed block tracking API and then get everyone to implement it, and then, once we do that, people who want to consume it to build things like that are free to do so. But I think the working group's responsibility would end at: here's the API, it works, build what you want on top of it, right.
E
I thought that we also wanted to go down the path of not only having a changed block tracking API, which I think has pros and cons, but also having a formalized backup object that means something the Kubernetes API understands, so you can say, hey, turn this backup into a volume for me, with whatever your CSI driver is or whatever distro of Kubernetes...
B
These are just some very general steps, so yeah, maybe we can focus on this restore-to-new for now. Ben, do you want to go over the data populator again, maybe in the next meeting? I think that will be helpful to refresh everyone's memory.
B
...meeting, so we can forward that meeting to this mailing list as well. So, if people are interested, they can join that one, yeah.
A
I think we should do this, right: smaller groups. If you are running one, please share the meeting details in this document, in the working group document, so that anyone who is interested might choose to join, right. That's probably a better way of organizing all this.
B
Yeah, in the general doc, the agenda doc.
A
All right, we're at the 10 o'clock mark. Thank you, Phuong, it has been really helpful, and thank you, everyone. I'm stopping the recording; enjoy the rest of the day.