From YouTube: Kubernetes Data Protection WG Bi-Weekly Meeting 20211103
Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 03 November 2021
Meeting Notes/Agenda: -
Find out more about the DP WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
A: Hello, everyone. Today is November 3rd, 2021. This is the Kubernetes Data Protection Working Group meeting. Today we have two topics, actually mainly one topic, from Ronak, to talk about the volume mode conversion.
B: Yeah, okay, cool. Thanks, Xing. Hi, everybody. My name is Ronak, and I'm going to continue a conversation that we had a little while ago about a snapshot security issue that we didn't exactly run into per se, but identified early on.
B: So, the current issue that exists: I'll just run through the basic problem, so that everyone can get on the same page, and then where we're at with the proposal and how it's changed in the last week. The issue that was identified is basically that it's currently possible to restore a PVC from a snapshot but actually modify the volume mode of the new volume that you're creating. The steps are pretty simple, right?
B: You create a PVC with a volume mode of Block and write some malformed data to it, and then a user takes a snapshot of this volume, right? And then, when you go ahead and try to restore that snapshot as a PVC, you can actually change the volume mode to Filesystem.
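For illustration, here is a minimal sketch (in Go, using client-go types; names such as "block-pvc-snap" are made up) of the restore request being described: the dataSource points at a snapshot taken from a Block-mode PVC, while the new claim asks for Filesystem.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	snapshotAPIGroup := "snapshot.storage.k8s.io"
	fsMode := corev1.PersistentVolumeFilesystem // the source volume was Block

	// Restored PVC: the dataSource points at a snapshot of a Block-mode PVC,
	// but the requested volumeMode here is Filesystem.
	// (storage request and other required fields omitted for brevity)
	restored := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "restored-pvc", Namespace: "demo"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &fsMode,
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &snapshotAPIGroup,
				Kind:     "VolumeSnapshot",
				Name:     "block-pvc-snap", // snapshot taken from a Block-mode PVC
			},
		},
	}

	fmt.Printf("restore %s with volumeMode=%s from snapshot %s\n",
		restored.Name, *restored.Spec.VolumeMode, restored.Spec.DataSource.Name)
}
```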
B: So, the reason we don't want to block this completely is that this is actually a valid use case for some backup vendors; it's actually, I think, a more efficient way that some backup vendors use to restore volumes. So there is a valid use case, but at the same time there's the potential for a malicious user to take advantage of it.
B: So we spoke a little while ago about introducing something we called volume security standards, which was really similar to the pod security standards that were introduced recently into Kubernetes, right? The pod security standards replace the old pod security policies, or PSPs, and they define some security standards, what it means to violate them, in what ways you can violate them, and so on.
B: So this is the original proposal that we went with, and we started involving other SIGs just to see what they think about it. Pod security standards were actually introduced by SIG Auth, and we ran this by them last week, and one of the key takeaways from that meeting was that it seems like we're doing a lot of work for one use case at this point in time.
B: It probably makes sense to just make it less generic and more specific to the issue at hand, and one of the reasons they said that makes sense is that, right now, there's only one parameter that we would have introduced to the volume security standards, as opposed to the pod security standards, which have a whole bunch, right?
B: You can define a bunch of configuration as part of the pod security standard. So that was the feedback that we got from them, and they actually gave us some good ideas as well about how we could use annotations or some sort of authorization mechanism. In this meeting I just want to go through this annotation mechanism that we can potentially use to solve this issue.
B: Cool. So, originally, the goal of this document was to develop or design something that's generic and can be extended to other storage-related security aspects, but now that has moved into a non-goals section. So the only goal of this document is to design a solution that's going to solve our problem at hand, right? Basically, what we're going to talk about is three main points.
B: Currently that doesn't exist, and this sourceVolumeMode is going to be basically a copy of the PVC's volume mode when it's created; we'll go into details of how that's going to work. We're also thinking of introducing a new annotation on the VolumeSnapshotClass, which would basically be a comma-separated list that specifies the users that are allowed to convert the volume mode of a snapshot, or of a PVC, right?
C: Can I ask...
B: Yeah, I mean, not upgrades in particular, so let me just answer that right now. Basically, we're just going to say that if it's empty (it's an optional field, right), so if it's empty, we're just gonna...
B: ...then we just fall back to the existing behavior. So, yeah. And the third main thing that we're introducing here would be an admission controller, which basically intercepts PVC creation calls.
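As a rough sketch of the shape of those changes (not a final API; the field and annotation names here are placeholders that would be settled during review), the proposal boils down to one optional field on the VolumeSnapshotContent spec plus an annotation key recognized on the VolumeSnapshotClass:

```go
package proposal

import corev1 "k8s.io/api/core/v1"

// Placeholder annotation key: a comma-separated list of users allowed to
// change the volume mode when restoring from snapshots of this class.
// The real key and name would be settled in the KEP review.
const allowedVolumeModeConversionUsers = "snapshot.storage.kubernetes.io/allowed-volume-mode-conversion-users"

// Sketch of the proposed addition to the VolumeSnapshotContent spec.
// A nil SourceVolumeMode means the source mode is unknown, which preserves
// existing behavior for contents created before the change.
type VolumeSnapshotContentSpecAddition struct {
	// SourceVolumeMode is copied from the source PVC/PV volume mode
	// (Filesystem or Block) when the snapshot content is created.
	SourceVolumeMode *corev1.PersistentVolumeMode `json:"sourceVolumeMode,omitempty"`
}
```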
B: So let me go through that in a little more detail over the next couple of pages. Firstly, the change to the VolumeSnapshotContent API: again, it's going to be an optional field, and if it's left empty it will be treated as unknown; otherwise it's either Filesystem or Block. Secondly, we're going to introduce this annotation on the VolumeSnapshotClass.
B: That's going to be something like a comma-separated list of users that are allowed to convert the volume mode of the volume. So, basically, the volume snapshot content... sorry, the VolumeSnapshotClass object is a cluster-scoped object, right? So the assumption here is that only the admin, or someone with access to a cluster-scoped object, and therefore with privileges already, would be able to add this annotation.
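Concretely, the idea is that the cluster-scoped, admin-managed VolumeSnapshotClass carries that list. A hedged sketch, assuming a hypothetical annotation key and user names (the external-snapshotter client import path depends on the version in use):

```go
package main

import (
	"fmt"

	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	class := snapshotv1.VolumeSnapshotClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "csi-hostpath-snapclass",
			Annotations: map[string]string{
				// Hypothetical key: only these identities may change the
				// volume mode when restoring snapshots of this class.
				"snapshot.storage.kubernetes.io/allowed-volume-mode-conversion-users": "backup-sa,velero-operator",
			},
		},
		Driver:         "hostpath.csi.k8s.io",
		DeletionPolicy: snapshotv1.VolumeSnapshotContentDelete,
	}

	fmt.Println(class.Name, class.Annotations)
}
```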
B: It wouldn't be a user who's in charge of a particular namespace or anything like that. That's the assumption that I've gone with here, and if there are any thoughts about that, I'm definitely open to hearing them.
B: Why users? Because originally we went with something along the lines of namespaces, but on thinking about it a little more, it turns out that in the same namespace a backup vendor can attempt to restore a volume from a snapshot, and so can a regular user in that same namespace.
B: Yeah, sure, sure. So, basically, this annotation will be copied from the volume snapshot... sorry, from the VolumeSnapshotClass to the VolumeSnapshotContent when it's being created, and in the case of dynamic provisioning that will be done by the snapshot controller. The reason I've gone with that mechanism, instead of doing it directly on the VolumeSnapshotContent, is basically that it's sort of cumbersome...
B: ...I guess, to actually go to each snapshot content object and add this annotation manually, right? So the hope is that you add it to the snapshot class, and then every snapshot created from that snapshot class gets that annotation as well. Now, moving on: those are the changes to, I guess, the API and the annotation, and then we're also going to make changes to the snapshot controller, right? Because in the case of dynamic provisioning, the snapshot controller actually has... with dynamic snapshotting,
B: I guess, if that's the right word, the snapshot controller has a bigger role to play, right? So, in the regular case, what happens is that the snapshot is created by the user, with the snapshot class option specified, and then the VolumeSnapshotContent is created by the snapshot controller, and it's the snapshot controller's responsibility to update the spec of the VolumeSnapshotContent object.
B: So with this design we'll be introducing two new changes to the snapshot controller. The first one would be to populate the sourceVolumeMode of the snapshot content: it will fetch the volume mode of the PV, and if you actually go into these links, the snapshot controller already gets the PV object; we just don't use its volume mode. So it's not that big of a change, and we'll use that volume mode to populate the snapshot content's new sourceVolumeMode field. The second change would be that it would look for this annotation on the snapshot class, which again is sort of already part of the code; the snapshot controller already has access to, and gets, the snapshot class. So it'll look for this annotation and, if it exists, it copies that into the snapshot content as well. And, of course, for static provisioning...
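A rough sketch of what that could look like on the snapshot-controller side, assuming the hypothetical annotation key from above; the helper names are made up, and this is only meant to show that the controller already has the PV and the class in hand during dynamic provisioning:

```go
package controller

import (
	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	corev1 "k8s.io/api/core/v1"
)

// Hypothetical annotation key; the real name would come out of the KEP review.
const allowedUsersAnnotation = "snapshot.storage.kubernetes.io/allowed-volume-mode-conversion-users"

// sourceVolumeModeFor returns the volume mode to record on the new snapshot
// content. The snapshot controller already fetches the source PV during
// dynamic provisioning; it just doesn't use its volume mode today. A nil
// result means "unknown", which preserves existing behavior.
func sourceVolumeModeFor(pv *corev1.PersistentVolume) *corev1.PersistentVolumeMode {
	if pv == nil {
		return nil
	}
	return pv.Spec.VolumeMode
}

// copyAllowedUsersAnnotation copies the allowed-users annotation, if present,
// from the VolumeSnapshotClass onto the VolumeSnapshotContent, so admins only
// maintain the list in one cluster-scoped place.
func copyAllowedUsersAnnotation(class *snapshotv1.VolumeSnapshotClass, content *snapshotv1.VolumeSnapshotContent) {
	if class == nil || content == nil {
		return
	}
	users, ok := class.Annotations[allowedUsersAnnotation]
	if !ok {
		return
	}
	if content.Annotations == nil {
		content.Annotations = map[string]string{}
	}
	content.Annotations[allowedUsersAnnotation] = users
}
```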
B: Obviously, for static provisioning the admin has a bigger role to play, so populating these fields is the admin's responsibility. So I guess this design mainly deals with the dynamic case. Again, if the field is left nil, then the unknown mode will be assumed, and that's done to preserve existing behavior.
F: Oh, excuse me, I have some questions about, like, when this list will be updated. I guess my concern here is the way this works: first of all, the list is different per snapshot class, and then also, when you create a snapshot, the list is copied from the snapshot class to the volume snapshot contents, and, as far as I can tell, it would likely be static from then on.
A: That's a good point. So, actually, I was wondering: do we really need to do this at the user level? Because the backup software will be the one adding this annotation, and it has the admin privilege, right, since that is a non-namespaced object?
A: If that's the case, then why do we still need to check the user? Just wondering: would it be enough to just use RBAC? Because the backup vendors can, you know, modify the volume snapshot contents, so you can do this.
A: Oh, I'm just not sure whether we actually need to check the user. Because if some user does not have permission to modify the content anyway... and if someone has the permission to change the content, then there's no way you can actually protect anything, because...
B: Basically, you validate whether the PVC is being restored from a snapshot, and then, if it is, you go ahead and compare the two volume modes. If they don't match, you get the list of users that we've populated on the snapshot content, and you validate whether the user requesting this volume is present on that list or not; and if they're not present, then you reject the PVC creation.
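A hedged sketch of that admission check (the annotation key and the way the allowed list and requesting user are passed in are placeholders for the design being discussed, not a settled interface):

```go
package webhook

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// Hypothetical annotation key carrying the allowed-users list.
const allowedUsersAnnotation = "snapshot.storage.kubernetes.io/allowed-volume-mode-conversion-users"

// validateVolumeModeChange implements the check being described: if the PVC
// is restored from a snapshot and its volume mode differs from the recorded
// source volume mode, the requesting user must appear in the allowed-users
// list carried on the VolumeSnapshotContent.
func validateVolumeModeChange(pvc *corev1.PersistentVolumeClaim,
	sourceVolumeMode *corev1.PersistentVolumeMode,
	contentAnnotations map[string]string,
	requestingUser string) error {

	// Only PVCs restored from a VolumeSnapshot are in scope.
	ds := pvc.Spec.DataSource
	if ds == nil || ds.Kind != "VolumeSnapshot" {
		return nil
	}
	// Unknown source mode (for example, pre-existing contents): keep existing behavior.
	if sourceVolumeMode == nil || pvc.Spec.VolumeMode == nil {
		return nil
	}
	if *pvc.Spec.VolumeMode == *sourceVolumeMode {
		return nil
	}
	// Modes differ: the requesting user must be on the allowed list.
	for _, u := range strings.Split(contentAnnotations[allowedUsersAnnotation], ",") {
		if strings.TrimSpace(u) == requestingUser {
			return nil
		}
	}
	return fmt.Errorf("user %q is not allowed to change volume mode from %s to %s on restore",
		requestingUser, *sourceVolumeMode, *pvc.Spec.VolumeMode)
}
```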
J: So, you mentioned users; does this apply to service accounts as well?
C: A user can create a PVC that points at a snapshot and gives it a different volume mode, right? And today we just run with that; we say that's great. This admission controller is going to start looking at that request and saying: no, you can't do it. But then we need an escape hatch that says: well, some people can do it, if they're given special permission.
B: Yeah, so basically, I think that's well summarized. It's: how do you define what the special permission is, and where?
F: I mean, so I think I agree with the cluster-scope thing. I wanted to say a couple of things. First of all, if you're a large enterprise that has multiple clusters, you probably don't really want to maintain this in your clusters at all. What you probably want to do is have some sort of central list of all of your users who are on, like, the admin team or the backup team or whatever, and then sync it to all of your clusters automatically.
F: I think probably the CRD is the best we can do, just because we don't take any dependencies outside of the cluster, but it's actually not ideal. And to Xing's point: if we can do this by installing a special cluster role and using Kubernetes' normal RBAC permissions, that'd be good. That would be significantly simpler, and I haven't really heard a reason we can't do that.
N: I think the admission plugin could query RBAC; there is some special object it could create, I don't remember what it's called, unfortunately, like "can this user do this verb on this object", and the API server returns true or false.
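The object being half-remembered here is SubjectAccessReview in the authorization.k8s.io API group: a component can ask the API server whether a given user may perform a verb on a resource, and the API server evaluates RBAC and answers allowed or not. A minimal sketch of that call; the verb shown is hypothetical for this proposal:

```go
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the API server: can this user perform the (illustrative) verb on
	// volumesnapshotcontents? The API server evaluates normal RBAC rules
	// and answers true or false.
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "backup-sa",
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:    "snapshot.storage.k8s.io",
				Resource: "volumesnapshotcontents",
				Verb:     "convert-volume-mode", // hypothetical verb for this proposal
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
```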
O: Yeah, so maybe as a concrete example we could look at OpenShift's SecurityContextConstraints objects. That restricts the settings for things like the user ID that a pod can run as, and so that's similar to restricting the field that you're setting in your PVC. In that case, normal RBAC can be used to grant a user or service account those privileges, by giving them the use verb against a particular SCC that has that permission, and I think that's kind of what we're trying to do here.
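In that SCC analogy, "who has this power" becomes ordinary RBAC: a ClusterRole granting the use verb on a named object, bound to the backup service account. A sketch of what the analogous role could look like here; note that a use verb on a VolumeSnapshotClass is hypothetical for this proposal (today only OpenShift SCCs are granted that way):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ClusterRole granting a hypothetical "use" permission against one
	// VolumeSnapshotClass, mirroring how OpenShift grants use of an SCC.
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-volume-mode-conversion"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups:     []string{"snapshot.storage.k8s.io"},
			Resources:     []string{"volumesnapshotclasses"},
			ResourceNames: []string{"csi-hostpath-snapclass"},
			Verbs:         []string{"use"}, // hypothetical verb for this proposal
		}},
	}

	// Bind the role to the backup vendor's service account.
	binding := rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-volume-mode-conversion-backup"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     role.Name,
		},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      "backup-sa",
			Namespace: "backup-system",
		}},
	}

	fmt.Println(role.Name, binding.Name)
}
```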
H: No, and the really good thing about this, real quick, is also that just having a list of users is sometimes not sufficient, because you can have a user granted, like, a scoped-down permission... and an access review is what you'd use when checking whether a user can do something in a Kubernetes cluster.
B: So, hey, I think I would need to look up this whole SecurityContextConstraints stuff, because I think my mindset was sort of similar to what Michelle was saying, which is: it's just a field that you potentially may not want updated, versus something you definitely don't want updated, right? Only if certain conditions match do you not want it to go through.
B: So that's something I would have to go back and look at, and see how you can add a new verb and stuff like that, but definitely something like that.
C: The only question is: what is the user interface for deciding who has this power or not? I think everything else about it makes perfect sense. The question is just: how does the admin say these are the people that can do it? And our RBAC system does seem to have advantages over sticking annotations on a snapshot class and then having to copy those around and consult them.
B: Yeah, I think the mutability point was also something I hadn't thought about earlier, so yeah, that definitely makes sense.
B: Cool. So maybe, Xing, let me take a look at that. I think two or three really good points have come out of this, so let me take a look, update the page, and obviously keep getting feedback and running it through everyone over here.
A: Yeah, but if it's... if it's alpha, I mean, if it's off... I know, I mean.
A: Okay, what are the other changes that we are making in the snapshot controller? Or is everything in the admission controller?
B: No, so in the snapshot controller the main changes would be updating the volume mode, or sorry, copying the volume mode, as well as copying the annotation from the snapshot class to the snapshot content.
B: We're gonna need that, right, right, yeah. I guess the first part is definitely necessary, irrespective of how we're gonna enforce the rules.
H: I really think that we should explore the RBAC route before creating a KEP, or at least add that as an alternative in the KEP, I think.
A: Yeah, we have not... okay, yeah, maybe that's something that, yeah, maybe Ronak can take a look at. I didn't know that; I thought those are pretty restricted, but if it's actually customizable, maybe it'll be much easier.
A: This is the one, yeah... but those are the well-known ones. Is there another way to create your own?
M: It's pretty involved to create a new verb, sub-resource and... wait, are we talking sub-resource or verb?
A: It's definitely very... oh, a CRD? Then that's different, yeah. I thought it's just, like, you can just do something.
M: Yeah, yeah, yeah, it's possible. I just reviewed... someone added a new verb to the pod.
M: Yeah, you can look it up. I just reviewed the pod ephemeral containers KEP review; you can look at that feature.
M: I'm not clear on verbs. I think in CRDs the sub-resource is just like another... there's like a sub-resources array, but I'm not clear on verbs.
J: In the setup of the problem, it talks about how this workflow is required for data protection vendors. Another alternative could be to actually make that a more explicit workflow and not have to, kind of, you know... this is a bit of a hack, right, to change the volume mode. Potentially, maybe we can also go with another workflow that lets you do that, and then lock down this workflow here.
A: But this workflow is used, right? It's used by backup vendors already. Are you saying they don't need this?
A: Not accidental, I think; like, a vendor, they did this on purpose, right? They just didn't know there was any potential problem, the potential kernel issue; that one we didn't realize, right. So what I'm... so what is your suggestion, how to...
J: Did we support this in the volume controller on purpose, I mean? It seems like it, you know, happened as kind of somewhat of an oversight, and then backup vendors picked up on it and started using it; or was this kind of a supported workflow?
A: I don't think this is, like, an accident. It's just that backup vendors... we don't record the original volume mode in the snapshot, right, so there's no way for the provisioner to check this. And this is the type of workflow that's used by many backup vendors. Maybe you don't use it, but I know other vendors use this to retrieve changed blocks.
A: You'd mount it... otherwise, how about: you then have to have, like, a spare node or something, just to do something there, and that's just too much, right. You'd have a testing node only for this purpose, and you make sure everything is just mounted there and then test it, but that's just too much, too much overhead.
A: Okay, because, yeah, because I think normally they have some tests, some validation tests; if we make those changes, maybe they, you know, get rejected somewhere.
A: Okay, do we have anything else about this topic?
A: All right, then, yeah; then we'll just, you know, Ronak will do some investigation and come back, and we can talk about this again.
A: So then we have another topic, from Shyam.
P: I can't see your name... you logged in as yourself, right?

I: Yes, my name is... oh.
I: I did, yeah. All right, so this is about disaster recovery, and some thoughts around what we want with respect to disaster recovery related to storage. It's not an introduction to disaster recovery as such.
I: It's more about disaster recovery in Kube. Just some basic terminology, which most of you are probably aware of: recovery point and recovery time objectives are what's typically measured for disaster recovery. In this case we're talking about a disaster that takes out an entire Kube cluster, and the workload needs to shift to another Kube cluster and reattach to its storage,
I: ...storage that's been synchronously or asynchronously replicated, and continue to function from that point on. So the landscape, hence, looks this way; these slides are borrowed from a KubeCon presentation that talks about Ceph, but it's immaterial what storage is used, as long as the storage supports data replication.
I: It's considered standby on the alternate cluster, and different applications can be active on either of these clusters; it's not necessary that all applications have to be active on one of these clusters and standby on the other. The use case we're not looking at is an application being active-active in both clusters, which kind of comes down to storage being used synchronously across these clusters; we're not looking at shared storage systems in this case.
I: So what we really want to do here, broadening out the scope a little bit to other generic storage systems: usually storage systems have concepts of pairing volumes.
I: So this presents us with quite a few challenges, actually, one of the first being that we want to manage the storage replication lifecycle alongside the volumes... that's not required, and also, typically, storage systems, because this is asynchronous replication, have a concept of which end is the primary for the replication and which is the target, or secondary, end for the storage endpoint. So we'd like to manage those relationships. And yes, because this is asynchronously replicated, there would be a point in time where there is a split brain, or data has to be healed or re-synced back to one of the endpoints; what I've put here is abstract.
I: So this is something that we're looking at, and based on which we're talking here, so that we can have CSI extensions, or add-ons, whatever we want to call them, that can actually help us manage this for individual PVCs for a given application.
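To make the idea concrete, here is one hedged sketch of what such a per-PVC replication extension could look like as a custom resource type. This is purely illustrative and not something the group has agreed on; several vendors ship their own variants of this shape:

```go
package replication

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ReplicationState marks which end of an asynchronous relationship a volume is.
type ReplicationState string

const (
	Primary   ReplicationState = "primary"   // writable end
	Secondary ReplicationState = "secondary" // replication target
	Resync    ReplicationState = "resync"    // heal after split brain / failback
)

// VolumeReplicationSpec pairs one PVC with a replicated counterpart and
// declares the desired role of the local end.
type VolumeReplicationSpec struct {
	// Name of the PVC in this cluster whose backing volume is replicated.
	PVCName string `json:"pvcName"`
	// Desired state of the local end (primary, secondary, or resync).
	ReplicationState ReplicationState `json:"replicationState"`
	// Opaque, driver-specific parameters (schedule, remote pool, and so on).
	Parameters map[string]string `json:"parameters,omitempty"`
}

// VolumeReplication is the illustrative object a CSI add-on could reconcile
// to manage the storage-level relationship per PVC.
type VolumeReplication struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              VolumeReplicationSpec `json:"spec"`
}
```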
I: The next challenge that we're really looking at is that, because these are applications deployed across storage clusters and across Kubernetes clusters, the usual dynamic provisioning of PVCs cannot take over, because that will just give it a new storage endpoint; we really need to reattach the PVC request to the replicated storage endpoint on the remote cluster.
I: So applications need a way to express this desire, that's first, and then they need some form of ability to reattach to the remote replicated endpoint on the other cluster. That's another challenge, which, for example, you could do with any data source, things like that that have been added recently, but it is something that needs to be factored in.
I: If you want to pull off disaster recovery across Kubernetes clusters, the replication policy management is not so much a challenge; it's kind of an orthogonal need, because once you have some form of replication management across clusters, you would need some form of policy control to schedule an application, to manage the identities of both Kube clusters and the storage clusters, and to ensure they are in a mirroring or replication relationship, and so on.
I: Other challenges are: how do you really get the application resources across the two Kube clusters? Not necessarily a storage challenge, but it is something that needs to be considered. There are, again, a couple of ways to do this.
I: One is you can have them deployed and passive on one end, which comes with its own sort of challenges, actually; or they can remain undeployed and be deployed on demand on the target cluster from some declarative source (hand-wavy GitOps, something like that). This is, again, outside of storage, but still required for the solution overall in this disaster recovery landscape. Some other challenges are custom operators and how they deal with resources; that again causes some level of problems.
I: Because even if we look at the PVC level, where we attach semantics to a replicated endpoint, custom operators that are designed to create PVCs that are arbitrarily named, or without the form of data sources that we probably want to use in these scenarios... they sometimes even create very arbitrary PVCs without giving the user the ability to define names and semantics for the PVC, other than basic information.
I: So those are another set of challenges that we need to look at here. And the last one that I put in this slide is that it's not enough if only the application moves across Kube clusters; traffic routing to these applications also has to be shifted across these clusters once the application moves. Again, not a storage concern, but an orchestration concern from a bigger standpoint.
I: So I just put down this introduction and some of these challenges here. A couple of weeks later, in the meeting, we'll probably expand on this and talk more about use cases, but I wanted to understand if there are others who are interested in something in terms of disaster recovery and storage replication across Kube clusters, and how to reuse these volume endpoints. What do people think about this?
E: We have replication that works with two of our storage arrays, and we'll come with more; the code's all open source, and we'd be happy to work with you towards standardizing it. We've worked on some of the challenges that you described; like, when we set up the replication, we actually create PVs remotely, or have a facility for creating PVs in the remote...
E: ...the target cluster, annotated with, like, the namespace and the names from the original cluster, and the data. And we have a repctl command that allows you to essentially create duplicate PVCs in the target cluster.
E: Additionally, it supports the basic failover actions, you know, like failover, failback, swap.
I: Right, so I had been following that... I'm not gonna say I have been following it closely, but I did read about the intention for that project to enable replication, yeah.
I: Okay, we... sorry.
C: I was gonna say, this is Ben Swartzlander. I won't go into any detail, but I'll just throw out there that NetApp has done the same thing in our Trident driver. It's open source, but the implementation of it is based on proprietary software; but we've gone down the same path, and we have a working implementation that does a lot of this kind of stuff.
G: This is Infinidat; we agree, and we would also like to participate in this from a replication implementation standpoint. We do all the typical varieties, including three-site, which is an interesting topic as well.
I: Thanks, Eric. Did you mention a specific company name or storage system there? I missed that.
I: Okay, thank you. Okay, so one good thing is that it looks like there are multiple people doing this, so there's probably a good need to standardize on the interface, so that CSI vendors can do what they need, the orchestration can do what it needs, and users know what they need to do. So we'll definitely... yeah. We do want to; that's why I'm not throwing out what we did right now here.
I: I do have this last slide, which has a bunch of what we had: the talk at KubeCon, which talks about what we did, or what we are doing right now, to solve this particular problem. But we would want to look at standardization efforts, so that it makes it easier for users to use as well.
A: So I think, yeah, I think we only have, like, one minute left, so let's see; a lot of people are interested in this, which is good. So how do we proceed further? I wonder, maybe it still makes sense for you to present what you have done next meeting, and then we can see the comments, and then we can see what others have done differently. Or, if you want to do it differently, that's up to you.
I: It really depends, yeah. We can definitely talk about what we did for Ceph and run through the spec, and there are other components in play, like the open cluster management multi-cluster managers; we can talk about that. Or, because there are quite a few people who are interested, we can also talk about what we want to do, and how we want to do it, and come back. People who showed interest: how would you see this moving forward?
A: I think probably there are a lot of people interested in this, so I think it's good that we, maybe, talk about it again in the next meeting, unless you guys can't wait and want to meet before that. In any case, I think we can maybe coordinate offline; there is a Data Protection WG Slack channel, and emails work too, and then we can decide what we want to do next.
A: All right, thank you; yeah, thank you for sharing this. I think, yeah, that's it for today. Thanks, everyone. Bye.