From YouTube: WG Data Protection Bi-Weekly Meeting for 20201006
A
So I just want to quickly go over some of the KEPs that are in review or being merged currently for the 1.20 release, and then we can talk about the data protection workflow white paper. There are a couple of work streams going on, so we'll see how much time we have and cover as much as we can. So, for 1.20:
A
The enhancement freeze, which is the KEP merge date, was yesterday, and code freeze is November 12th, which is one month away. There are quite a few KEPs that are relevant to this data protection group, so that's why I listed them here. Snapshot: we are trying to bring snapshots to GA in 1.20. The KEP got merged, so that's good, but we actually still have a lot of work to do to meet the code freeze deadline, which is one month away.
A
We need to write quite a few e2e tests, so I listed the e2e tests here. I'm actually looking for some help right now; those tests don't have any owners. So if you are interested, and if you have some time to work on them in the 1.20 release, please let me know. And then the next one is the ContainerNotifier. We were trying to bring it to alpha, but we didn't make it; there are some issues that we still need to get resolved in the design.
A
I listed a couple of them here. I just want to talk about them briefly and see if there are any comments from you. So for the ContainerNotifier, we are trying to bring this in as an inline definition in the pod spec, first to define what kind of command we can run in the container; for our use case that is quiesce and unquiesce. And then also a notification API object, which is for an external controller to request the container notifier, so basically you request what command to run in the pod. There are a couple of things that we had some discussions about in the KEP review.
A
If you have a selector, it's going to select the pods that meet the criteria, but I think if it's a selector, that means the controller, the backup controller that is actually making this request, wouldn't know ahead of time which pods are selected, because it is going to be the kubelet, or in the first phase this central controller, the ContainerNotifier controller, that decides which pods are selected. I think that's really not going to be practical, especially for the quiesce use case. So I'm thinking that we can use a pod list instead, which is more deterministic. Basically, the backup controller would be the one that can use a pod selector, or some other way, to determine which pods should get this quiesce command, and then in this notification object we would just have a pod list instead of a pod selector.

And then there is another concern, which is the status update. Initially we said we want to have a pod selector, and now if we say we have a pod list, you're still getting status updates for all the pods and all the containers. That's a lot, so there are some concerns about scalability.
A
So
for
that,
if
we're
using
a
pod
list,
I'm
thinking,
maybe
we
need
to
do
a
update
using
a
slice
of
pot
so
that
so
the
details.
I
don't
really
have
any
details
on
that
and
there's
some
some
similar
proposal
for
the
end
points
the
endpoint
slides.
So
it's
going
to
be
similar
to
that.
A
This way we'll simplify the status update part. So those are the things that we still need to sort out. I don't know if there are any comments on this.
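To make the alternative being discussed concrete, here is a rough Go sketch of a notification object that carries an explicit pod list and reports status through separate slice objects (analogous to EndpointSlice). The type and field names are made up for illustration and are not taken from the ContainerNotifier KEP.

```go
// Hypothetical sketch only: illustrative names, not the ContainerNotifier KEP API.
package notifier

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ContainerNotification asks that a named notifier (e.g. "quiesce") be run in
// an explicit list of pods chosen by the requesting (backup) controller.
type ContainerNotification struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ContainerNotificationSpec `json:"spec"`
}

type ContainerNotificationSpec struct {
	// NotifierName refers to a notifier defined inline in the pod spec,
	// for example the quiesce or unquiesce command for an application.
	NotifierName string `json:"notifierName"`
	// PodNames is the deterministic pod list discussed above; the backup
	// controller resolves its own selector ahead of time and writes the
	// result here.
	PodNames []string `json:"podNames"`
}

// ContainerNotificationSlice carries status for a bounded subset of the pods,
// so status updates do not all land on one large object.
type ContainerNotificationSlice struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// NotificationName links the slice back to its ContainerNotification.
	NotificationName string              `json:"notificationName"`
	Pods             []PodNotifierStatus `json:"pods"`
}

type PodNotifierStatus struct {
	PodName   string `json:"podName"`
	Container string `json:"container"`
	Succeeded bool   `json:"succeeded"`
	Message   string `json:"message,omitempty"`
}
```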
A
Okay, so I listed the KEP there, so you can take a look at it. I need to update it with these proposals. Right now we have a pod selector, and I'm thinking that I will just write down all the alternatives and the corresponding status updates we would need: if we use a single pod it would be straightforward, but with a pod list maybe we need to have a slice of pods, a pod slice or something, which might mean that we need another API object for that.
A
Okay, so that's that. If you don't have any questions on the first two KEPs, I'll move on to the generic data populator. I wonder if Ben is here today; it doesn't look like he's here. So Ben is actually working on this generic data populator, and initially we were thinking about bringing it to beta, but we're going through the production readiness review now; KEPs have this new requirement for production readiness reviews.
A
Actually the snapshot KEP went through that, so Ben realized, okay, he's going to introduce this validating webhook for the data populator, to make sure the data source is a registered external data source rather than, you know, anything. For that we would actually need one more release, so I think right now he wants to keep this in alpha in 1.20. That's the status for this one.
A
If not, I'll just quickly talk about it. This one is to have a Kubernetes API to be able to provision object buckets, and there are actually weekly meetings on that.
A
So the plan is to get the KEP merged as provisional, so that they can start to do POCs, have code ready, and then get ready for the next phase, which is to bring it to alpha. So in 1.20 this will stay as provisional with POCs.
A
I think the KEP is not merged yet, but because it is provisional, it does not actually have to be merged strictly by this deadline.
A
Okay, if not, then we're moving on to the data protection workflows. Is Phong here today?
A
Okay, I don't see him here. I actually would like him to go over the application workflow, because he added some updates for the backup part of this workflow there.
A
All right, so this is one of the things that we are talking about. We had one meeting to discuss this; these are really just pretty early discussions, but I want to get more feedback from this group here.
A
So we talked about why we need CBT in Kubernetes, and we listed a few use cases. The first one is backup. For backup, before we talk about CBT, we actually need to talk about backup in general.
A
Currently we do have APIs in Kubernetes, the snapshot APIs, but those APIs don't really have any definitions to say whether we want to back the data up to a different location or not. That's not there. Anju actually had a Google doc on this.
A
He actually talked about this in one of the early meetings: we need to have backup APIs to handle this. So the first point is that in order for backup vendors to support this, we need to be able to back up data to a different, secondary storage that is not your primary storage.
A
That's the first one. So what do we do if we don't have this? What do we currently recommend doing? Currently we can have a data mover pod: basically, after taking a snapshot, we can create a PVC from the snapshot, mount that to the data mover pod, and then move the data to a secondary device away from your primary storage.
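As a concrete illustration of the first step of that workflow, here is a minimal Go sketch that creates a PVC from an existing VolumeSnapshot via the core/v1 DataSource field (as it exists around the 1.20 timeframe). The namespace, snapshot name, storage class, and size are placeholders, and the data mover pod that would mount this PVC is not shown.

```go
// Minimal sketch: provision a new PVC pre-populated from a VolumeSnapshot,
// which a data mover pod can then mount and copy to secondary storage.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	snapshotAPIGroup := "snapshot.storage.k8s.io"
	storageClass := "standard" // placeholder

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "restore-from-snap", Namespace: "backup"},
		Spec: corev1.PersistentVolumeClaimSpec{
			StorageClassName: &storageClass,
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// DataSource points at an existing VolumeSnapshot; the CSI driver
			// provisions the new volume pre-populated with the snapshot data.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &snapshotAPIGroup,
				Kind:     "VolumeSnapshot",
				Name:     "my-app-snap", // placeholder snapshot name
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}

	if _, err := client.CoreV1().PersistentVolumeClaims("backup").Create(
		context.TODO(), pvc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```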
A
And then, also for backup, if we are always taking a full backup, the concern is of course that it is not space efficient.
A
It takes longer, and even though there are cloud providers that provide ways to do incremental snapshots, that only works if it's on the same storage, right? So if it's all AWS, going from EBS to S3, it's handled underneath.
A
So we haven't gotten to that point; right now I'm talking about the white paper part. In the white paper we talk about why we need it. I think we need to list out the reasons why we need it, and without it, what the workaround is, what we are doing today.
A
So that's why we have this section here saying, okay, without that, what do we do today? I think Phong actually added this one, saying either we have to always do a full backup, or...
A
Yeah, so otherwise we would have to directly use the storage API to get that information. So I think the problem is that the backup vendor would then need to be dealing with different APIs.
A
So basically that's the workaround: if we don't have this, then you need to deal with them individually.
A
Yeah, so we haven't gotten to that point. Basically, here is the thing: if you have everything in, for example, AWS, then incremental snapshots are already taken automatically. In that case it's basically all AWS, so AWS takes care of this internally, and that's fine. But what if we want to back up this data to a different storage device from a different vendor, a third-party backup vendor, not AWS? So that's...
A
What do we want to do here? If you want to support a variety of storage, you want to be able to back up from different storage vendors and back it up to a storage device which is not the same as the primary storage. What do you do? I think that's the question here. If you handle everything underneath, then you don't need it, because you have your own API underneath handling this. So here we are talking about...
A
You may have your own common API to look at the diffs; that's also possible too.
A
We actually have not really gotten to the details of the API part yet. Right now we are just looking at the reasons why we need to do this and what the workaround is if we don't have it. I think when we talked about this last time, someone was saying, yeah...
D
Yeah, so for Azure, for Azure Disk, those APIs are exposed.
A
So basically, if we look at the overall support, we have a section to say who supports this. This is a section where I need input from all of you. We have AWS; AWS actually exposes this, and Azure has this, but I see Lexi saying that the links listed here don't really point to the changed block API; they just introduce the support.
A
So
if
anyone
knows
some
api,
is
there
a
link
to
the
api
that
we
can
add
here?
Yeah.
Please
just
add
that
information
here,
if
you
know,
and
for
google,
I
think
the
this
is
a
work
in
progress.
They
they
don't
have
it,
but
they
are
working
on
it
right.
So
we
do
know.
At
least
you
know
this
is
three
cloud
providers
actually
either
have
it
or
working
on
it
yeah.
A
Cinder is a little different; it doesn't have CBT, but it actually has a full backup service that does incremental backups automatically. And then VMware: we have those for vVols. Is Dave here? Okay, yeah. So this is actually an API, and every storage vendor that supports vVols has already implemented those APIs, so that's already a common API across the different storage vendors that support vVols. And then we have other vendors who support this, right?
A
So
if
you
know
any
other
storage
vendor
that
supports
this,
please
go
ahead
and
add
them
here
yeah.
I
think
there
are
probably
a
lot.
C
Okay, can I ask a clarifying question? These vendors that support incremental snapshots, are these snapshots portable, such that, for example, a vVol snapshot can be converted into a non-vVol volume? Or does it just let you store a vVol snapshot on an arbitrary storage backend? Or is that part of the...
A
That's the next thing that we want to discuss. Right now we just want to say who actually has this, and then of course I think it's going to be a challenge if we want to say, okay, here is the API in Kubernetes for this and what it looks like. But I think vVols is one example, because it actually has multiple storage vendors supporting it. So that's one example; I'm not saying, of course, that it's going to be exactly the same as that.
E
Right, I can certainly add a little bit on that one. I did do research on these CBT APIs, and generally the components are similar: for each of the changed blocks, each of these descriptions will have an offset and the length of the data that has changed in that block, and then they specify multiple of those structs in an array. For files it's a short list, and each item in the list will specify the file path and the stat of the file, like the file mode, what type of file it is, access time, and so on and so forth. So in general they have quite a lot of similarity between them. I can't say I have looked at all of them.
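As a rough illustration of the common shape just described, here are two illustrative Go structs, one for a changed block extent and one for a changed file entry. These are not any vendor's actual CBT API; the field names are made up.

```go
// Illustrative only: the common shape of CBT results described above.
package cbt

import "time"

// ChangedBlock describes one changed extent on a block volume between two
// snapshots: where it starts and how many bytes changed.
type ChangedBlock struct {
	Offset int64 // byte offset of the changed extent within the volume
	Length int64 // number of bytes that changed starting at Offset
}

// BlockDiff is the array of changed extents a CBT query typically returns.
type BlockDiff struct {
	FromSnapshot string
	ToSnapshot   string
	Blocks       []ChangedBlock
}

// ChangedFile describes one changed entry for a file volume: the path plus
// stat-like metadata (mode, type, size, timestamps).
type ChangedFile struct {
	Path       string
	Mode       uint32
	FileType   string // e.g. regular file, directory, symlink
	Size       int64
	AccessTime time.Time
	ModTime    time.Time
}
```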
E
I looked at some of the APIs that are provided, and some of the internal OMC APIs that are not being exposed outside yet, and I see a lot in common. So I think as our next step, when we have a meeting again, we will dig down into how we can extract these into a common API to use. But that is a topic for maybe the next meeting, I guess.
A
Okay, yeah. If you can also just add some of that in the doc after the meeting, that'll be good.
E
Yes, please send it to me. As a matter of fact, I already had a document ready to be shared in our last meeting, and then we started to say, hey, let's just talk about the reasons for now. But I do have the info.

A
Okay, sorry, what is your email? H-o-a-n-g?
E
The last name is h-u-o. It's okay, I can talk.
A
So yeah, please add to the list if you know of other storage that has this, for the CBT list. Okay, so going back to here: does that address your concerns and questions?
C
So it seems basically the goal, one of the end goals, is to have a portable snapshot format, such that if a storage backend takes incremental snapshots, those snapshots can be backed up to any arbitrary backend, not necessarily of the same type as the primary, but to any of them.
E
I think we already have some of this portability when we create the PV in CSI, the PV and PVC, right? When we back up a PV, we can back it up from any storage vendor.
A
Right, and then basically you can back it up and then store it on the same storage device, basically, right?
A
Okay, and then continuing with restore. Basically, I think this really depends on the backup part. At restore time we would still need to do a full restore first and then layer incremental restores on top of that from the incremental snapshots, and then there is also partial restore.
C
Because the way most implementations do incremental restore, partial restore, is based on data access patterns. So this is something in the data plane; CSI or the control plane is completely out of the loop, and I don't know exactly how CSI is going to do a smart incremental restore given that it's just a control plane component.
A
Yeah, no, actually, the thing is, when you think about this, I don't want CSI to do all of that work. This should be handled by the backup vendors. But when you think about how to do that, I haven't really thought about this restore case yet; I think it's the backup vendor's job.
C
This is a little tricky, yeah. Even if we supply some block range to CSI, let's say blocks 1 to 100 get prefetched by CSI, what if the application tries to access block 200? So there is a disconnect between what CSI may do and what the application needs as far as data access.
E
Yeah, the restore is a little bit more complicated. It depends on whether you restore to something completely new or you restore onto something that is running right now. When we restore to a deployment in a production environment that has some pod running and using that persistent volume, things are much more complicated than just that, and I don't think it's even allowed to write to a PVC that is currently being used by some pod.
E
The reason is that when you create a PVC, you have to specify the access mode, read-write-many or read-write-once. If you have a read-write-once permission there, then only the pod that is currently using that PVC can access it, and in that case the backup software cannot even do anything with the PVC that is currently being used.
E
But it still has some use. For example, if the user agrees, say I deploy a StatefulSet, and even though it's in production, if the user allows me to scale the StatefulSet down to zero, which effectively removes the pods while the PVCs are still there, then I can do a partial or, you know, differential restore of a few blocks there quickly, and then scale the StatefulSet back to whatever the previous number was. That can still be a good production restore with the incremental restore, but we have to accept that the pods have to go away for whatever period of time; it's still very quick compared to restoring to an entirely new namespace with a new PVC. So that is something that is possible, but I agree that we need to think about this a little bit more.
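Here is a rough Go sketch of that scale-down, restore-in-place, scale-back-up sequence, using client-go's scale subresource. The namespace and StatefulSet name are placeholders, and the actual differential restore of the volume (done by the backup vendor's data mover) is elided.

```go
// Sketch: scale a StatefulSet to zero so no pod holds its ReadWriteOnce PVCs,
// let the data mover restore changed blocks in place, then scale back up.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	sts := client.AppsV1().StatefulSets("prod") // placeholder namespace

	// Remember the current replica count, then scale to zero.
	scale, err := sts.GetScale(ctx, "my-app", metav1.GetOptions{}) // placeholder name
	if err != nil {
		panic(err)
	}
	previous := scale.Spec.Replicas
	scale.Spec.Replicas = 0
	if _, err := sts.UpdateScale(ctx, "my-app", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// ... wait for the pods to terminate, then have the data mover apply the
	// changed blocks onto the existing PVC (not shown) ...

	// Scale back to the previous replica count once the restore completes.
	scale, err = sts.GetScale(ctx, "my-app", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = previous
	if _, err := sts.UpdateScale(ctx, "my-app", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled back to", previous)
}
```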
A
So yeah, this one, I forgot who mentioned it; it does not look like this is directly related to CBT, the partial restore... oh, you're saying that it's still related because they probably already have something there just to do the incremental part.
E
Yeah, I think David or someone in our previous meeting mentioned that for incremental backup, the efficiency of having CBT is mainly on the backup side; I'm not sure it brings as much benefit on the restore side. It's mainly efficiency in the backup, because we do backups much more frequently, like every day or every hour, and restores not that much. So I'm not sure, yeah, but...
E
The backup image is actually a synthesized image of the full backup, right? It looks like a full backup all the time. Even though we only get the changes, in the backup storage it should be a full backup, so you can restore every single time using that synthesized full backup image.
A
All right, yeah, that's what it's going to do. If it can be completely handled by the backup vendor, then that's not a problem. I was thinking that if CSI needs to do something, then that's really weird, yeah.
H
So you're then getting an efficient change list, basically between two snapshots from your source, and then applying that on the remote site. This obviously isn't useful for synchronous replication, but async replication is a commonly used thing, and this would allow vendors to build this facility fairly easily.
H
So Ceph, yeah, I mean Ceph: the RBD driver does do snapshot-based async replication, but obviously the changed block stuff is not exposed at the Kubernetes level. It's also limited to mirroring Ceph RBD to Ceph RBD. So for a lot of what we're talking about here, if we want to allow these diffs to be portable across storage backends, we need to standardize that interface and expose it.
H
So by having the changes exposed directly by the storage system, it would, say in the case of file volumes, allow us to skip the metadata scan before sending. That's for a file volume; for a block volume, there's really no good way to do it at a higher level.
H
Does that make sense? I guess I think there are two parts to that. One is the interface of what gets exposed at the Kubernetes level, and yes, I think it would be great for us to have some sort of interface, probably not something standardized in the SIGs, but one that would allow a vendor to produce an async replication controller that lets you configure async replication cross-cluster, within a cluster, or cross storage system, whatever the user wants to do to meet their data protection requirements, and do that in a standardized way. Now, the other side, with respect to the efficiency of it: there's certainly nothing standing in the way today, for a file volume, of just using rsync to implement that feature, but it would be made even more efficient by making it so that you didn't have to do the metadata scan that rsync is doing, if you could get the change list directly from the storage system.
H
If you're doing it manually today, yes, you do need to take the snapshot, turn it back into a volume, attach it to a pod, and go from there. So yes, that is kind of inefficient today. Even if we get the change tracking, I don't think we've gotten to the point where we figure out how we actually get the data out, right?
E
That's a very good point: whether the CBT that we are talking about contains only the description of the change, like the offset and the length of the change, or whether it actually contains the data that changed. For example, if the change is 200 megabytes, does the CBT provide the actual 200 megabytes of changed data itself, or does it just say the offset is 200 and the length is 200 megabytes?
F
You could implement just snapshot reading and leave the change block tracking up to some other, higher-level application like rsync to figure out, or have a fallback mode for drivers that can't actually provide the changed blocks but can provide read access to the snapshots.
E
That's actually some of the ideas we have thought about, but we haven't gotten to the point of sharing it with the community yet. We'll have to discuss a lot in our next meeting, I guess.
A
Well, actually, not necessarily, right? Once you have pods involved, then that's actually very complicated. I think so far snapshots have been simple because they don't have to deal with attaching that to your pod.
E
So the common APIs currently provided by all the storage vendors whose CBT I looked at do not contain the data. They only contain a description of the change, like which blocks have changed and so on; they do not include the data. Whether it is possible to do that, I think it's totally possible that we could enhance it.
F
I don't know if you want to design an interface that's actually moving a lot of data around based on gRPC, because that might just be...
E
Yeah, well, one more thing is that whether we include the data or not, it's up to the storage vendor to support it, right? And currently all the vendors that support this type of CBT that is exposed to us do not include the data, and...
E
The API also provides a way to get the data. For example, with VMware, you first get the changed blocks, and then you actually get the blocks in a different call. It's two different calls; I think it's the same API, but it just does two separate calls, right?
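A hypothetical Go sketch of that two-call pattern, listing the changed extents first and reading the data in a separate call. The interface and method names are made up for illustration; this is not VMware's API or a proposed CSI RPC.

```go
// Hypothetical two-call CBT pattern: metadata first, data on demand.
package cbt

import "context"

// BlockExtent is one changed region: byte offset and length within the volume.
type BlockExtent struct {
	Offset int64
	Length int64
}

// ChangedBlockReader separates the change description from the data movement.
type ChangedBlockReader interface {
	// GetChangedBlocks returns only descriptions of what changed between two
	// snapshots (a control-path-sized response, no payload).
	GetChangedBlocks(ctx context.Context, baseSnapshot, targetSnapshot string) ([]BlockExtent, error)

	// ReadBlocks returns the actual bytes for one extent of the target
	// snapshot; callers loop over the extents returned by GetChangedBlocks.
	ReadBlocks(ctx context.Context, targetSnapshot string, extent BlockExtent) ([]byte, error)
}
```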
E
Yeah, and this is not really required for all of them, but at least it is something that is currently in many of the APIs that...
E
Let's just say I already have the PV, my PVC, mounted on my data mover pod. Then I only need to know which blocks changed, because all the data is already mounted there; in that case we don't need to get it directly from the API. I only need it to say, you know, these five blocks changed. I have all the data mounted as a raw block device on my pod already, so I just copy those five blocks.
E
Well, like I said, there are two possibilities, two ways to do that. I think it's a good point that the data should be available there too, yeah.
A
Yeah, that really gets into the data path. That's one reason I'm saying we might not support that in CSI, but we can take a look and see exactly what is needed and how to get there, because I thought in the last discussion, I remember, we talked about just providing the API, which is actually just the control path. So yeah, we will need to talk more about this, but I think this is good. Two more minutes left.
A
And auditing: I'm just not sure how auditing is related to CBT. Does anyone want to add something for this use case here? I actually forgot what this auditing piece is.
H
In the auditing case, you could just be looking at the stream of files that have changed and basically getting a report, for security purposes and that kind of thing.
A
Okay, all right, I think we have one minute left, so I think we can wrap this up, and then we can continue to talk about this in another meeting. So going back to this: Phong, can you talk about this application workflow in the next meeting? I was initially going to ask you to talk about it today, but you joined a little late. So yes, I...