From YouTube: Kubernetes Data Protection WG Bi-Weekly Meeting 20220223
Description
Kubernetes Data Protection WG Bi-Weekly Meeting - 23 February 2022
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xing Yang (VMware)
A
Okay, so today is February 23rd, 2022. This is the Kubernetes Data Protection meeting. I think today maybe we'll just give a high-level overview of NetApp's volume replication, and then in the next meeting he will have some slides and go through that in detail.
B
Yeah, hopefully it'll be at the next meeting, because I have so many different things going on. The high-level sketch of how Trident does volume replication is: you can indicate, when you're provisioning a PVC, that you want it to be mirrorable, and that will basically prevent it from ending up on a system where we can't perform mirroring. But other than that,
B
it's an ordinary PVC. You can take snapshots of it, you can do all the normal things, and by default it's not actually mirrored. Then, when you want to establish a mirroring relationship, what you do is create a special CRD that points at that PVC in the same namespace. So your admin has to give you RBAC access to create TridentMirrorRelationship CRs.
B
So there's access control around who can do this. But you create a TridentMirrorRelationship CR on the original PVC, and it dumps back some specific details that are needed to create the secondary, because there are some internal names that are necessary on the other side to establish the mirror relationship. And so the controller will populate those names into the CR.
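As a rough illustration, a primary-side TridentMirrorRelationship of the kind described might look like the sketch below. The field names and values here are illustrative only (the real schema is in the Trident documentation), and `data-pvc` is a hypothetical PVC name:

```yaml
# Hypothetical sketch of a primary-side TridentMirrorRelationship.
# Field names are illustrative; consult the Trident CRD reference for the real schema.
apiVersion: trident.netapp.io/v1
kind: TridentMirrorRelationship
metadata:
  name: data-pvc-mirror
  namespace: app-ns
spec:
  state: promoted            # this side is the read/write source
  volumeMappings:
    - localPVCName: data-pvc # the ordinary PVC in the same namespace
# After reconciliation, the controller populates the status with the internal
# backend names (e.g. SVM and volume handle) needed on the secondary side.
```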
B
It's
quite
flexible,
and
there
is
a
prerequisite
that
if
it's
a
separate
storage
controller,
that
a
peering
relationship
has
been
established
out
of
band
between
the
two
storage
controllers.
So
they
can
talk
to
each
other
and
they
know
about
the
existence
of
each
other
and
all
the
networking
is
set
up.
B
But
then
you
can
just
create
a
second
pvc
on
a
second
system
with
another
trident
mirror,
relationship,
cr
and
trident
will
create
the
secondary
volume
and
it
will
establish
a
mere
relationship
and
the
volume
will
then
just
be
mirrored
so
you'll
have
an
existing
volume.
That
is
a
copy
of
the
primary
volume
and
it
will.
It
will
be
nominally
writable.
So,
like
you
know
the
pvc
can
be,
you
know
rwo
or
wx,
whatever
you
set,
but
in
actuality.
B
So
so
that's
a
little
bit
of
a
lie,
but
it's
important
because
then
later,
if
there's
an
actual
disaster-
and
you
want
to
start
using
the
secondary
volume,
then
you
just
mutate
the
the
try
to
mirror
relationships.
You
are
to
break
the
relationship
and
you
can
attach
a
pod
to
the
secondary
pvc
and
it
will
become
immediately
usable
as
soon
as
the
relationship
is
broken
and
then,
if
later
on,
you
decide
you
want
to
re-establish
the
relationship
in
the
reverse
direction.
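The failover step just described, mutating the destination-side CR to break the relationship, would then be a small edit along these lines. Again, this is only a sketch; the `state` values and the handle format are illustrative:

```yaml
# On the secondary cluster: promote the destination so its PVC becomes writable.
apiVersion: trident.netapp.io/v1
kind: TridentMirrorRelationship
metadata:
  name: data-pvc-mirror      # hypothetical name
  namespace: app-ns
spec:
  state: promoted            # was "established" while acting as a mirror destination
  volumeMappings:
    - localPVCName: data-pvc
      remoteVolumeHandle: "svm1:vol_abc123"  # illustrative handle copied from the primary's status
```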
B
Again,
that's
just
you
make
another
change
to
the
treadmill
relationship
cr
and
we
will
re-establish
the
relationship
in
the
opposite
direction
or
you
can.
You
know
if
the
primary
was
is
gone.
B
You
know
there
was
a
disaster,
but
when
you're
testing
your
your
failover,
it's
important
that
you
can,
you
know,
switch
from
one
site
to
another
without
actually
losing
any
data
so
that
you
can
so
it's
the
kind
of
thing
that
you're
encouraged
to
do
frequently.
You
know
like
as
a
monthly
test,
so
what
we
do
there
is,
you
can
establish
a
snapshot
and
then
there's
a
special
I'm
trying
to
remember
the
name
of
our
special
snapshot.
Cr.
B
I
think
it's
called
like
snapshot
info,
it's
a
weird
name,
but
you
can
take
a
snapshot
of
your
primary
volume.
You
can
create
a
snapshot
info
cr
and
it
will
give
you
the
internal
name
of
the
snapshot,
and
then
you
can
break
the
relationship
on
the
secondary
side
with
an
indication
to
wait
for
that
snapshot
to
be
transferred.
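As a sketch, the lookup half of that workflow might use an object like the one below. The kind name `TridentSnapshotInfo` and its fields are recalled from memory and should be checked against the Trident documentation; the object names are hypothetical:

```yaml
# On the primary: look up the backend-internal name of a Kubernetes snapshot.
apiVersion: trident.netapp.io/v1
kind: TridentSnapshotInfo
metadata:
  name: app-consistent-snap-info
  namespace: app-ns
spec:
  snapshotName: app-consistent-snap  # the VolumeSnapshot taken after quiescing the app
# status.snapshotHandle is then expected to report the internal snapshot name,
# which you pass to the secondary side to gate the break of the relationship.
```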
B
So
you
know
you
can
shut
down
your
primary
application
when
it's
shut
down
or
quiet.
You
know,
however,
you,
however,
you
do
your
your
consistent
snapshots.
You
can
take
the
snapshot
in
the
primary.
You
can
obtain
the
name
of
it
and
you
can
tell
the
secondary
as
soon
as
you
have.
This
snapshot
break
the
relationship
and
bring
up
the
volume.
And
then
you
can
start
your
application
running
on
the
secondary
side
and
it's
guaranteed
to
have
a
consistent
view
of
the
volume.
B
As
of
the
snapshot
that
you
took
on
the
primary
so
that
allows.
B
Yeah
so
so
netapp
has
multiple
mirroring
technologies
down
at
the
software
layer
or
the
storage
controller
layer.
We
have
asynchronous
flavors
of
mirroring,
where
basically,
you
just
take
a
snapshot,
every
few
minutes
and
then
transfer
the
snapshot.
We
have
synchronous
forms
of
mirroring
where
every
single
write
is
is
mirrored
and
acknowledged
on
the
secondary.
Before
we
acknowledge
back
to
the
client
and
in
both
cases
you
know
you
end
up
all
of
the
snapshots
that
get
taken
on
the
primary
also
appear
on
the
secondary
and
they
don't
show
up
as.
B
No, yeah, I mean the snapshots get replicated from one volume to another volume, and they don't show up in Kubernetes, because we don't have a way to import existing snapshots. But they're there on the storage controller, and you can tell the storage device it needs to only break the relationship after it has a particular snapshot, so that you can effect a lossless failover from your primary to your secondary and then re-establish the relationship backwards, which is what we would encourage
B
someone who wanted to test their DR workflow to actually do on a regular basis, right? Because if you don't test your DR workflow, it's never going to work in an actual disaster. So the lossless failover is something that we felt was very important there.
B
There are other flavors of this, so you can control whether it's asynchronous or synchronous, and we have various flavors of synchronous. You know, synchronous mirroring gets very complicated, because you pay a performance penalty when you enable it, right? Because the client application that's actually writing data can't see an acknowledgement to its write until it has been replicated on both sides.
B
Otherwise,
it's
not
really
synchronous,
so
there's
various
games
that
we
play
to
try
to
make
it
as
performant
as
possible,
while
maintaining
the
well
there's
different
levels
of
synchronous
guarantees
in
terms
of
what
you're
going
to
get
in
the
case
of
a
of
a
failure.
But-
and
I
won't
go
into
all
the
different
flavors
of
synchronous.
A
B
Not with Trident, no, we don't allow that. Yeah, someday we...
A
B
It's because Trident has snapshot metadata internally that, if you don't recreate it, you won't get a... you can't just create a VolumeSnapshotContent object and a VolumeSnapshot object and point it at it.
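For reference, the stock Kubernetes pre-provisioned (static) snapshot pattern being discussed, which per this conversation Trident does not yet support importing, looks like this. The object names, driver, and backend handle here are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-snap-content
spec:
  deletionPolicy: Retain
  driver: csi.trident.netapp.io
  source:
    snapshotHandle: "svm1:vol_abc123:snap1"  # placeholder for a pre-existing backend snapshot
  volumeSnapshotRef:
    name: imported-snap
    namespace: app-ns
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: imported-snap
  namespace: app-ns
spec:
  source:
    volumeSnapshotContentName: imported-snap-content
```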
A
How
do
you
prevent
people
from
using
that?
Because,
if
someone
just
created
volume,
sample
content
and
just
point
your
existing
snapshot
handle?
Does
that
automatically.
B
I think we only support the singleton version of ListSnapshots. I don't think we...
A
B
We totally plan on adding snapshot import; it just hasn't been a priority because...
A
B
I don't want to say orthogonal, because there's significant overlap, but there are two different use cases, right? Backing up is typically about the ability to go back in time to an earlier version of your data; that's usually why you have backups, unless you're also using backups as a disaster recovery mechanism. For disaster recovery, what you really want is the ability to have the most recent possible copy of your data, no matter what happens. So a backup solution can offer you both of those features, but we typically view them as separate.
B
But this mirroring solution is particularly designed for going cross-Kubernetes-cluster. So you have Trident running on cluster A at one site, Trident running on cluster B at another site. The storage controllers underneath know about each other, but the Kubernetes clusters don't necessarily know about each other, right?
B
We
kubernetes
clusters
and
kubernetes
federation
is
still
sort
of
nascent
and
we're
not
relying
on
it,
but
we
we
merely
rely
on
the
storage
controllers,
knowing
about
each
other
and
some
somebody
transferring
those
internal
handles
from
one
cluster
to
the
other,
just
a
string,
so
you
create
a
volume
on
one
side
you
get
the
volume
name
and
the
svm
name,
and
then
you
go
to
the
other
side
and
you
plug
those
two
values
back
in
and
you
create
another
pvc
and
then
magic
happens
and
you
get
a
mirror,
but
but
the
two
tridents
never
talk
directly
to
each
other
trident
is
try
to
never
knows
about
anything
outside
the
cluster.
C
All right, so Trident's restricted to a Kubernetes control plane, in terms of, let's...
B
I wish I could walk through some more detailed examples and show you exactly how it works, or even demo it. Like I said, that's going to require prep time and setting up a demo environment, and I don't have those available today.
C
B
It just has to become writable. And so, you know, you trigger the failover by mutating the TridentMirrorRelationship object, and then the status gets updated once the relationship is broken, and at that point any pod that's attached to the volume will be able to start writing.
C
Okay, so broadly: PVCs exist on primary and secondary, coupled with a Trident mirror resource object on primary and secondary, and the secondary is readable only until the Trident mirror resource object is updated as primary, or however the spec is...
C
Independently, on primary and secondary. And the Trident mirror resource requires the handle of the PVC from the primary on the secondary, and vice versa.
B
Yeah, in order to establish the relationship on the secondary with information gleaned from the primary. And Trident can re-establish a previously broken relationship without transferring all the data: if it can find a common snapshot that both volumes have, then it will resync from that point, and that can dramatically speed up a resync.
B
Yes.
Yes,
it
does,
but
but
this
particular
trident
mirroring
feature
doesn't
doesn't
apply
to
that
case,
because
because
in
that
case
it
just
looks
like
one
cluster
to
us
and
the
volumes
are
just
ordinary
volumes.
F
Correct. So in that case, how do you do replication between the two clusters when Kubernetes is classified as failing over? I mean, I'm failing over an app from one Kubernetes cluster to another Kubernetes cluster.
B
You
mean
two
different
clusters
where
they're
actually
pointed
at
the
same
storage
yeah,
it's
like
yeah,
so
so
that
that
would
just
be
like
an
ordinary
volume
import
case
right
because
the
volume
is
already
there.
It
already
has
your
data,
it's
just
the
other
trading
doesn't
know
about
it,
so
you
just
do
a
volume
import
and
then
the
trading
knows
about
it
and
then
you
can
use
it.
B
Yeah, well, there's no reason that you can't do that. It's just we decided that would be too hard to maintain, too hard to support, and we'd rather have a modular architecture where Trident is the thing that doesn't know what's going on outside the cluster, and there's another layer above it that does know what's going on across the clusters.
E
...the admin to, you know, watch the status on one side and do the right action on the other side. Like, do we report the status of mirroring, let's say, on the source, so that the admin knows the mirroring is done and would take the right actions, for example to make the secondary a primary, or do something on the destination? Like, do we have enough...?
B
So
I
I
can't
answer
it
off
top
of
my
head.
I
suspect
that
no,
that
the
source
side
probably
doesn't
receive
any
status
updates
because
again
trident
doesn't
trading
is
not
a
monitoring
tool.
It's
not
watching.
What's
going
on,
it's
only
responding
to
requests,
so
we
have
kubernetes
api
watchers.
So
whenever
any
kubernetes
api
object
changes,
people
try
to
reconcile
its
state,
but
we're
not
going
back
to
the
storage
controller
and
like
checking
what's
going
on
on
any
kind
of
regular
basis.
E
B
When something has been requested, then Trident will... so when you create the secondary mirror object, we will keep updating the status until we see that the new relationship is established. Once it's been established, we stop paying attention, because we said: well, we did what we were supposed to do. And then only when someone says "I want to break it" does Trident wake up and start doing things again, and everything else just happens in the background.
E
Basically, as far as writing is concerned, Trident doesn't really track individual transfers. It basically sets up the relationship and then, you know, forgets about it; it doesn't really monitor the transfer status. It just sets up the relationship, the replication schedule does its job, and then it's really up to the admin to track when the last transfer happened from primary to secondary, and manipulate the CRs on both sides accordingly, right?
B
Both sides, right, yes. And the reason we don't do anything in between those two times is for scaling reasons, right? We expect people to have hundreds or thousands of volumes, and potentially hundreds or thousands of mirror relationships, and keeping an eye on all of those would just be brutal from a CPU and network usage perspective.
A
So someone has to help you, right?
B
At
the
trident
layer
it's
manual
now
now,
the
layer
above
trident
is
able
to
orchestrate
all
of
this,
for
you
and
you
know,
set
up
a
volume
on
one
side
set
up
a
mirror
relationship
on
the
other
side.
Pay
attention
to
them
automatically
fail
them
over
like
that.
That
can
be
automated
at
a
higher
layer,
but
if
you're
just
using
trident
yes,
it
would
be
manual.
B
Astra
will
do
that.
I
don't
know
if
it
does
it
today,
I'm
not
a
I'm,
not
an
expert
on
exactly
what
astra
does
and
doesn't
do,
but
it
is,
it
is
the
place
where
that
kind
of
functionality
would
land.
Yes,.
C
B
So you need to create it on the primary to obtain the internal name, because that is something that an ordinary Kubernetes user wouldn't know, and so you wouldn't be able to tell the secondary what the actual name of the volume is, unless you have a TridentMirrorRelationship, where Trident will then put it in the status. You can see it, and it's not a particularly secret detail; it's just the kind of detail that an ordinary Kubernetes user doesn't need to know, so it's not exposed.
B
So
you
create
the
try
to
mirror
relationship
on
the
primary.
If
you
fill
in
the
status
with
those
details,
you're
going
to
need
on
the
secondary
side
and
then
you
you
fill
them
in
on
the
secondary
side
and
then
and
then
the
benefit
is
then,
if
you
do
ever
experience
a
failover
and
you
want
to
reverse
the
relationship
and
go
point
it
back
at
the
original
one,
like
you
already
have
a
tried,
a
mirror
relationship
object
bound
to
your
pvc,
and
you
just
have
to
fill
in
that.
C
I did say that right now we do static PVs, and we don't want to do that. The idea is to fill the volume replication object on the secondary from a status report of the primary, which is equivalent to the mirror object that you're talking about, and then use those objects to flip state, resync, do what you want with the volume. Yeah.
C
Broadly
similar,
except
that
ceph,
we
have
the
notion
that
when
you
establish
mirroring
on
one
end,
we
actually
get
the
same,
underline
we.
The
target
doesn't
need
an
image
created
to
actually
replicate
to
it.
Auto
creates
the
image
based
on
what
the
primary
is.
So
so
we
don't
have
to
hydrate
a
pvc
on
the
secondary.
Before
we
establish
replication,
we
can.
H
B
I'll tell you one of the reasons we decided to have an actual PVC on the secondary: it's because a given Trident can manage multiple storage devices, and multiple sort of... we have...
B
We
call
them
svm,
storage,
virtual
machines
within
the
storage
device
and
when
you're
setting
up
a
mirror
relationship,
there
is
a
scheduling
step,
because
the
mirror
relationship
could
go
to
any
one
of
multiple
devices,
and
so
by
creating
a
pvc
it
allows
the
trident
scheduler
to
pick
a
place
to
put
your
destination
volume
out
of
the
many
potential
places.
B
To
put
it
so,
and
it
can
look
at
you
know
you
can
look
at
size,
it
can
look
at
performance
characteristics,
it
can
look
at
you
know
which,
which
devices
have
been
peered
to
the
primary.
That's.
Actually,
it's
able
to
figure
that
out
automatically.
So
if
you
have
two
different
potential
devices,
but
only
one
of
them
has
a
peer
relationship
to
the
source,
it
will
pick
the
one
that
has
the
pier
you
know
you
might
want
to.
B
You
might
want
to
be
able
to
control
whether
you're
mirroring
too
slow,
spinning
discs
or
fast,
flash
or
really
fast
nvme.
You
know
those
those
types
of
considerations
all
go
through
the
trident
scheduler
we're
still
trying
to
picks
a
place
to
put
your
secondary,
and
so
it
was
really
easy
to
just
say
you
know
what
just
create
a
pvc.
It's
going
to
be
your
volume,
let
it
run
through
the
scheduler.
C
In Ceph we do the same thing, actually, RBD etcetera, either block or file system. There are scheduled internal snapshots, nothing to do with Kubernetes; those are mirrored, and snapshots are mirrored as well. So it's pretty much a similar scheme.
C
B
Well, and the other place where that came in handy in our design was when you break a relationship: the primary still exists, right? It doesn't go away; it's still writable, it's still a volume. And the secondary becomes writable when you've broken a relationship. And then, when you want to resync a mirror relationship in the opposite direction, because you already have a PVC on both sides, it makes it really easy to say: okay, we'll just point them at each other, and boom, they...
B
You
know
the
relationship
gets
re-established,
so
you're
never
having
to
delete
and
recreate
pvcs.
In
this
model,
the
pvcs
are
just
every
volume
that
exists
has
a
pvc
associated
with
it.
Some
of
the
pvcs
are
special
because
they're
destinations,
but
but
that
that
fact
can
change.
You
know,
depending
on
the
state
of
the
volume
and
kubernetes,
doesn't
need
to
know
about
those
particular
changes.
So
we
just
say
everything
is
a
pvc.
B
There
was
an
earlier
version
of
this
design
where
we
tried
very
hard
to
avoid
that
and
but
then
it
ended
up
in
needing
to
create
and
delete.
Pvcs
all
the
time-
and
it
got
to
be
very-
it
got
to
be
very
hard
to
manage,
because
there
were
certain
workflows
where
you
needed
to
delete
a
pvc
that
contained
valuable
data,
so
that
kubernetes
would
just
forget
about
it.
B
But
you
know,
but
internally
we
didn't
actually
delete
it
and
we
decided
that
was
too
scary
for
real
users
to
just
delete
their
pvcs,
knowing
that
we're
not
actually
going
to
delete
them,
because
they're
always
going
to
wonder
well
what?
If
we,
what
if
we
screw
up
and
do
delete
it.
So
we're
like
you,
know
what
we're
never
going
to
delete
a
pvc
that
you
don't
actually
want
to
delete.
We're
just
going
to
everything
is
going
to
be
a
pvc
and
then
you'll
only
delete
them
when
you're
really
done
with
them.
C
The
the
the
one
thing
about
not
requiring
target
pvcs
from
from
the
design
that
we
were
looking
at
was
we
could
just
work
on
the
primary
and
the
new
pvc
comes
into
existence
on
the
primary
everything
set
up
on
the
primary
and
only
on
a
failover
or
a
relocation.
C
Sailback
or
relocate
targeted
failover
requires
the
pvc
to
appear
on
the
target
secondary
instance
right.
B
Yeah,
you
don't
need
it
until
until
you
actually
have
a
failed
over
situation,
but
then
then
the
question
is:
what
do
you
want
to
do
when
you
want
to
fail
back
and
there's
the
question
of?
How
do
you
control
the
scheduling
of
the
secondaries
like
we?
We
could
have
had
a
special
scheduling
engine
that
didn't.
You
know
that
wasn't
working
on
pvcs,
but
just
working
on
some
other
sort
of
concept
of
a
secondary
volume,
but
we,
after
we
looked
at
that.
We
we
realized
it's
it's
exactly
the
same.
B
Well,
pvcs
are
always
you
know.
The
name
only
appears
in
kubernetes
right.
The
name
never
gets
reflected
into
the
cluster.
The
the
actual
object
inside
the
storage
device
is
some
uuid
name.
You
know
chosen
by
kubernetes,
so
yeah
the
names
are
all
you
know:
kubernetes
only
labels
for
for
objects
with
random
names,
and
we
don't
care.
A
What about the application running there, like on the secondary cluster? Do you have pods, StatefulSets running there, or is it only the PVCs?
B
If
you
want
to
replicate
your
application,
then
you're
that's
becomes
a
user
responsibility
or
the
the
responsibility
of
a
higher
level.
Software
like
an
astra
trident,
does
not
know
anything
about
your
pods
and
and
again
that's
an
intentional
design
decision
that
tried.
It
just
does
the
storage
in
one
cluster
and
anything
else
is
someone
else's
responsibility.
F
Any
issues
with
you
know
attaching
parts
on
the
other
side,
you
know
user.
You
know
you
have
a
pvc
on
the
secondary
side
which
looks
like
it
can
be
read
and
written
and
parts
start
but
parts.
You
know
the
rights
actually
fail.
B
The
way
you
deal
with
it
is
you,
don't
start
the
pod
until
the
until
the
volume
is
writable
and
then
again
that's
a
that's
the
responsibility
of
the
end
user
or
the
higher
level
software.
That's
automating
the
process
to
basically
watch
the
mirror
relationship
observe
when
it
becomes
broken
and
then
create
the
pod
so
that
that
is
an
undesirable
detail.
F
Right
so
in
the
model
that
sean
described
right,
you
know
if
I
have
a
controller,
a
side
card
that
can
automatically
protect
pvcs
that
are
labeled
with
a
certain.
You
know
label,
then
without
me,
creating
without
any
orchestration
software
that
is
required
to
copy
this
cookie
from
one
side
to
another
side.
F
That
model
is
able
to
enable
replication
just
on
the
primary,
without
going
and
doing
anything
on
the
secondary
right,
whereas
in
the
model
that
you
described
ben
to
do
anything
to
get
the
replication
initiated,
we
need
to
go
to
across
the
two
clusters,
and
so
I'm
worried
about
you
know
how
does
that
work?
When
you
know
in
a
stateful
set,
I
scale
up
the
stateful
set
right
then
I
need
to
immediately
go
to
scale
up
on
the
other
side
to
create
the
second
pvc
on
the
other
on
the
remote
cluster.
B
Yeah
yeah
you're
correct.
If,
if
you're
using
a
stateful
set,
you
could
create
treadmire
relationships
for
every
pvc
in
your
stateful
set,
but
as
new
ones
were
created
like
something
would
have
to
notice
that,
and
someone
would
have
to
respond
to
it
and
create
the
treadmill
relationship
and
then
something
would
have
to
create
a
secondary
volume
and
and
yeah.
It
would
have
to
be
other
controllers
that
are
watching
for
those
events.
B
You
would
have
to
have
additional
controllers
and
additional
automation
to
help
in
that
situation,
but
but
something
is
always
going
to
have
to
do
the
work
of
scheduling
the
secondary
volume
and
establishing
the
relationship,
and
you
know
in
the
I
think,
the
model
that
cheyenne
described
you
know
they're
the
primaries
doing
all
the
work
we
decided
to
only
have
the
primary.
Do.
You
know
10
percent
of
the
work
and
have
the
secondary
to
the
other
90.
B
Just
as
a
you
know,
I
guess
that
was
our
design
decision.
So
yeah
there
are
certain
downsides
to
that
for
sure
that
requires.
E
B
We
could
have
made
trident
reach
out
and
talk
to
the
secondary,
but
it
just
would
have
made
the
architecture
dramatically
more
complicated
in
a
multi-cluster
environment
where
you
have
10
different
kubernetes
clusters
and
10
different
storage
controllers
and
they're
all
replicating
to
each
other.
Like
you
begin
to
have
a
question
of
like
well,
who
is
responsible
for
the
storage
right?
If,
if
each
trident
can
talk
to
all
of
the
storages
of
all
of
the
clusters
and
and
create
volumes
on,
all
of
them
then
like?
B
Where
is
the
brain
we
didn't
want
to
have
that
problem?
We
wanted
to
say
you
know
trident,
is
responsible
for
the
storage
that
you
delegate
to
it
and
if
you
want
to
mirror
to
some
other
storage,
someone
else
is
responsible
for
that,
and
if
you
want
to
automate
all
of
that,
you
need
something
higher
right.
You
need
to
have
one
brain
somewhere
that
that
is
consistent
with
itself,
because
10
different
tridents
will
never
be
consistent
with
each
other
just
because
of
the
nature
of
distributed
systems.
C
That's correct. The thing, though, is: not all storage is equal. There are storage systems out there, without naming them, which require the target volume to be set up on the secondary controller, and then the mirror is established from the primary controller. Now, the deal there was: yes, the drivers would hence communicate across each other and set this up.
C
The
control
plane
on
the
secondary
is
reachable
from
the
primary
and
so
it'll
set
it
up,
which,
which
is
exactly
what
ben's
also
talking
about
and
scaling
it
out
and
saying
hey.
If
I
have
10
cube
clusters
and
10
different
storage
control,
planes
or
storage
clusters,
then
yeah
it
gets
a
little
messy.
A
Oh,
maybe
I
missed
this
one
from
ben's
model.
You
don't
have
to
create
a
warning
on
the
secondary
side.
You
I
thought
it's
or
you
just
need
the
snapshot
to
be
there
and
but
then
what
does
that
pvc
point
to?
Does
it
point
to
a
some
warning
handle?
That's
the
pv
on
the
secondary
side.
Point
your
as
well.
A
The
motto
I
was
just
I
thought
I
I
thought
yours
is
similar,
but
shannon
was
saying
it's
different,
so
you
also
not
really.
B
Yeah, so you would create the TridentMirrorRelationship, you give it a name, you create your PVC, and you would annotate the PVC with the TMR name. And then Trident would see that at provisioning time and say: this is not actually an empty volume; this is a volume that's going to be the destination of a mirror relationship.
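A destination-side provisioning request of the kind just described might look roughly like this. The annotation key, storage class, and object names are all illustrative assumptions, not confirmed schema:

```yaml
# Hypothetical secondary-side PVC bound to a mirror relationship at provisioning time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  namespace: app-ns
  annotations:
    trident.netapp.io/mirrorRelationship: data-pvc-mirror  # illustrative key; names the TMR
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: ontap-mirrorable  # hypothetical class backed by a peered system
```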
B
No,
no
it
it
runs
through
the
scheduler.
It
finds
a
place
to
put
it
that
has
the
right
right.
Space,
the
right
performance
characteristics,
the
right
peer
relationships,
all
of
that
stuff,
and
it
does
create
an
empty
volume,
but
then
it
immediately
makes
it
read
only
and
establishes
a
mere
relationship
to
that
volume.
C
So the way these steps work is kind of at the storage layer. Ceph works a little differently: the moment you enable mirroring on an image or volume, the two Ceph clusters are kind of talking to each other. I mean, there are mirror processes running in these storage layers, that kind of thing, right?
A
But
I
mean
the
I
mean
the
wall:
okay,
I'm
just
wondering
like:
when
do
you
create
the
volume
get?
I
guess
the?
When
does
the
second
volume
get
created
and
get
a
mirror.
A
I think I'm talking about the storage volume, so kind of forget what Ben was saying, because the one that I'm familiar with is: normally you have to create a volume on one side and on the secondary side, and then you establish the relationship. So I was just wondering about the slight difference; I mean, the three systems are different, so I'm just wondering, in Shyam's case...
C
No, it's not. You create the storage volume on the primary, and then you have to mirror-enable it; I mean, that's step two. You say: okay, mirror-enable this volume, and the volume on the secondary is auto-created.
B
A
Yeah, I was just trying to understand the underlying APIs, like how many APIs you need to call to get there. So maybe that's too much detail to talk about here.
C
At the storage layer, the mirroring daemon processes are the ones that are creating the volumes; the secondary CSI is not involved.
A
See, the driver will be making one call, right? Like one call to your storage system.
C
You have to communicate data somehow, so that's fine. Yeah, since we have dataSource, I was more tempted to say: hey, the secondary could be a PVC with a dataSource of the Trident mirror object or volume replication object, and that carries the cookie information, so that we don't look back at the annotation. But that's again...
B
I
hate
using
annotations
for
for
real
real
data,
but
it
it's
a
common
pattern
in
kubernetes.
Unfortunately,
because
we
can't
just
add
a
new
field
to
the
pvc,
which
is
what
we
would
prefer.
We'd
prefer
to
have
another
spec
field
that
we
could
just
fill
in
with
this
information.
C
B
Oh yeah, as far as volume populators are concerned: yes, you can create new volumes from things, and we certainly have thought through various types of cloning use cases. But it turns out that, because we already have a PVC that's just sitting there, that is the destination,
B
We
don't
need
to
do
anything
special
right
like
we
already
pvc.
Cloning
is
already
a
thing
in
kubernetes,
so
you
can
just
you
can
create
a
volume
that
is
a
clone
of
the
destination
of
the
mirror
relationship
by
just
doing
ordinary
pvc
cloning,
and
it
just
works
because
trident
understands
what
you
meant
when
you
pointed
a
new
pvc
at
the
old
pvc-
and
it
says
it
doesn't
care
that
it's
the
destination
of
a
mirror
relationship,
it's
just
something
that
is
going
to
clone.
So
it
just
does
it
and
then
you
get
your
new
volume.
B
But the weird thing that we observed, after playing with this for a while, was: these mirror objects sort of are permanently related to the volume. As long as the volume is involved in any mirror relationship with any other volume, it's going to have this sort of sidecar CR sitting there, expressing the mirroring state of that volume. And you could detach a volume from one mirror relationship and attach it to a different mirror relationship, and it can switch back and forth between being a destination and being a source.
B
You don't have to. So that's the fun part: you can create an ordinary PVC with no TMR and start using it for a while, and then later you can decide you want to mirror that volume. At that time you can go create your TMR and bind it to the PVC by creating an annotation, and at that point it becomes mirrorable; as long as it was on a system that could be mirrored, then you can just start mirroring it.
B
Of
course
you
could
get
unlucky
and
it
could
have
gotten
scheduled
to
a
system
that
doesn't
support,
mirroring
or
isn't
peered
to
the
system
you
want
to
use
or
whatever,
in
which
case
you're
out
of
luck.
But
but
if
you
got
lucky
and
the
original
volume
ended
up
in
a
place
that
can
be
mirrored
from
then
yes,
you
can
decide
to
mirror
it
long
after
it
was
originally
created
by
just
creating
the
tmr
object.
At
that
time.
A
Yeah, that will only give you a deadline. Okay, so I will check with you, maybe by the end of the week, or maybe next week. Well, I'll check with you in a few days, yeah.
B
A
That's fine. I think it's helpful. I think this is a complicated topic, so it's good that we started talking about it.