Description
Kubernetes Data Protection WG - Bi-Weekly Meeting - 02 November 2022
Meeting Notes/Agenda: -
Find out more about the DP WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xiangqian Yu (Google)
A: Hello, today is Wednesday, November 2nd. This is the Kubernetes Data Protection Working Group meeting. This meeting is recorded. We don't have too many agenda items today, given that the vast majority of us probably just came back from KubeCon, or attended remotely. So we have a couple of items: updates from KubeCon North America. Both Xing and I were there. I got to meet a couple of people, but I don't think I met anyone from this group over there.
A: Then at Kasten's booth, I didn't... I didn't see Dave, okay.
B: But this time is actually the first time both of us were at KubeCon in person doing a presentation for this Data Protection Working Group. This was the first in-person presentation since this group was formed, right? It was formed in early 2020, after the San Diego KubeCon.
D: I joined remotely, and yeah, thank you to the both of you for providing the updates.
B
You
know,
unfortunately,
I
actually
should
have
called
that
out
in
during
the
session.
I
just
realized
that
later
I
should,
because
you
actually
did
that
nice
recording
right.
But
then
we
just
could
not
play
that,
because
that
was
the
talking
detail
about
your
current
design,
which
we
know
that
we're
not
going
to
go
with
that
design.
B: I think they were just asking about the aggregated API server part or something; I forget exactly what it was. It's not that we didn't go into details. Since we're not going with that design, it was just a question of why we can't go with it.
B: Oh yeah, Tom was there, and I think Mark as well.
E: Kwan, I didn't... I just fist-bumped you or said hi, but that was about it. So sorry I didn't make more of a contact.
B: You were next to Tom, right? You were standing next to each other, so I think we saw you there.
A: Hopefully next year we'll have more people there, so we can have a small group meeting or something like that.
B: And I see Sukana is not here; he was also there.
A: Yeah, cool, all right, I'll make this short. Basically, we had a couple of sessions over there. I think Xing did a couple of them and then I did one. The first one was the Data on Kubernetes panel; I think Xing was one of the hosts there. Regarding that session, here are a couple of takeaways I got from it.
B: Remember, there was actually a survey. If you guys are interested, we can add a link here. Yeah.
B: Right, right, it doesn't feel fully consistent, but this one is more focused on people who are running databases on Kubernetes.
B: I think they have a slight focus on that. The people they surveyed... I think they mentioned somewhere, I remember reading, that the results could be a little skewed, because they were probably just talking to people who are already running databases on Kubernetes. So you could see a higher percentage on that part.
A: The trend is pretty obvious, but another thing that's also interesting to me is the usage of object storage. It seems to be growing faster, and some even say: why do we even need block devices?
B: I don't think that's actually new, to be honest. But if you double-click, right, this is from that panel. Patrick works at DataStax; basically, they have an operator for Cassandra. They want to separate their compute from their storage, because otherwise they have this problem that they have to scale everything at the same time.
B: That's just not good for running on Kubernetes. So what they do is basically use the object store as their persistent storage, so all the data is there. But it's not like they can completely get away from block storage: if you listen closely, they use block storage as a cache layer. Obviously object storage alone is too slow; you can't possibly do completely without block, right?
A: Understood, but I know it's restricted to certain types of applications. Cassandra itself, for example, is a gigantic key-value store plus indexing, right? So it's not a surprise. And it's actually not quite new, not quite old in the Kubernetes community; it's been around for a while in non-Kubernetes environments.
A: But I brought this up because it's relevant to this group, right? Yeah.
B: Yeah, definitely, I think this is interesting. It's of course something that caught my attention, because I was standing right there when he was saying "block storage is dead" or something. Like, what? No, that's not possible.
B: But I know he wanted to make his point, right; he wanted people to hear about it. Definitely, I can see there is a trend here, and if you go talk to people at MinIO, they'd probably want to say the same thing, but they also have something they... Actually, there was a session on the... I don't know, Dave.
B: Actually, that's what I'm saying. I was thinking about this: okay, do we also need to back up our MinIO buckets if we have a data store there? But then the feedback I'm getting is: hey, MinIO is already running distributed.
F: Yeah, so replication is definitely part of your disaster recovery, but it's not a complete answer, and we've seen this before. We've been at this for a long time with block devices, right? We have replicated block devices: yay, now we don't need to back them up anymore... oops, that's not true, right?
B: Do you need to back that up to, let's say, Google? Is that necessary? I guess that's my question, because this is already distributed; you already have several copies. Well...
D: In other use cases, object storage is not just limited to cloud providers, right? There are on-prem object storage solutions as well.
D: Compared to your cloud provider at large, let's say. So I'm curious: speaking of object storage, are folks that are using, or going to use, object storage utilizing COSI or something else?
B: Not yet, right? So does anyone actually... because I think COSI just... it's basically just...
B: COSI is still kind of warming up. I was actually talking to people about this one, and they're like: it's actually not really changing the data path, so what does this bring, right? So I think people are still trying to figure out how to use it.
A: The key thing is that Ceph, from Red Hat, has Rook, right? The Rook operator uses its own API.
B: Permissions, yeah. Basically it just gives you... In COSI there's something called a BucketAccess, right? That holds the credentials, and your pod basically just uses those credentials to access the bucket. It's not the same. Someone else was asking me about this; he was trying to wrap his head around it and saying: hey, with CSI you're of course provided a mount path so that your pod can use that, but with COSI it sounds like it's... you...
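The contrast being drawn here, an app consuming credentials provisioned for a COSI BucketAccess rather than a CSI mount path, can be sketched roughly as follows. This is a hedged illustration only: the JSON field names and the idea of a projected credentials document below are assumptions for the sketch, not the real COSI BucketInfo schema.

```python
import json

# Hypothetical shape of the credentials a COSI BucketAccess might project
# into the pod (the actual schema may differ; this is illustrative only).
SAMPLE_BUCKET_INFO = """
{
  "bucketName": "wg-demo-bucket",
  "endpoint": "https://objectstore.example.internal",
  "accessKeyID": "DEMO_ACCESS_KEY",
  "accessSecretKey": "DEMO_SECRET_KEY"
}
"""

def load_bucket_credentials(raw: str) -> dict:
    """Parse the projected credentials into what an S3-style client needs."""
    info = json.loads(raw)
    return {
        "endpoint_url": info["endpoint"],
        "bucket": info["bucketName"],
        "access_key": info["accessKeyID"],
        "secret_key": info["accessSecretKey"],
    }

creds = load_bucket_credentials(SAMPLE_BUCKET_INFO)
# The application hands these to its object-store client directly;
# unlike a CSI volume, no filesystem mount path is involved.
print(creds["bucket"])
```

The point of the sketch is the difference in consumption model: CSI surfaces a mounted filesystem, while COSI surfaces credentials plus an endpoint that the app's own client library uses.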
D: Abhishek said in the chat that they're using COSI. So Abhishek, what are you guys using COSI for? Is it mainly just bucket administration, or is there any data protection story there?
G: We are basically using it to provide storage. We have the Nutanix object store, and then, using the COSI interface, we are providing the object store to the customers.
G: Interesting, yeah. Actually, I've been a part of this group for about four to five months now, and...
G: That I'm not sure about, but we are currently working on backup and restore services for cloud-native applications. I'm pretty new to the organization and don't know all the details yet, but I'm part of the India team here, which basically works on data services for cloud-native applications.
G: I am responsible for the backup and recovery processes. Those follow just the normal flow, where we take a snapshot, back it up to particular storage, and then recover from it. So it's just the basic stuff.
A: Not rare at all. At Google we also need that for the newly announced public sector product, Google Distributed Cloud Hosted. This is basically something that deploys to a single data center or a single region, and the durability of the object storage system locally in one cell, one zone, is not sufficient, so we do need backup systems to replicate from one cell to another.
A
It's
at
that
level
at
this
at
this
moment,
but
it's
definitely
some
of
the
use
cases.
Some
of
the
interesting
use
cases
you
need
it
to
it
was
that
you
want
to
un's
point
when
you're,
not
in
now,
you're,
not
you're,
not
using
Cloud
vendors
by
using
on-prem
Solutions.
F: So we had that request in Velero to back up S3 buckets, and the way I look at COSI, it's a way to manage buckets from inside Kubernetes. So it's great for apps that are keeping their own private data in an S3 bucket, like the Harbor container registry.
A: Yeah, absolutely right: once the Harbor container registry is backed by S3 or GCS or any object storage... And the other one is actually Prometheus and the like, right? The logs and metrics generated within the cluster can also use object storage as the storage underneath. One strong requirement I heard is that audit logs stored in object storage should be kept, first of all, in a WORM bucket, write once, read many, which requires versioning, object lock, etc. And the second thing is the ability to retrieve those logs for an almost arbitrarily long period of time.
A: So in the case of DR, there are compliance requirements to back up these storage systems, regardless of whether you're using PVs/PVCs or object storage to serve as the backend for audit log storage. The use case is there; it's just not as rich as file and block at this moment.
B: So right now COSI has one official driver, which is from the Azure side; they wrote a COSI driver. If any of you actually need a driver, or have objects that need to be backed up through a COSI driver, feel free to join the COSI meeting, which is weekly.
A: Cool, good discussion. The next item: we delivered the Data Protection Working Group update, appreciating all the work this group has produced. A lot of those people are not here today, so really, thanks for your contributions. The main things are the white paper, where I still remember chasing people to write the sections, and also the yearly report. Please take a look and enjoy your own contributions and your own words there. And then I think Xing did some SIG Storage updates.
A
It's
pretty
I
think
she
has
been
bringing
in
her
expertise
as
well
as
knowledge
in
sick
storage
from
the
beginning.
So
Jin
do
you
want
to
add
up
anything
there.
B
So
yeah
so
six
words
meet.
Actually
we
had
a
good,
a
good
turnout.
It's
a
Friday
afternoon
I'll
be
there,
but
actually
we
were
not
bad
actually,
so
we
just
did
an
update
of
you
know
one
1.25
and
1.26
release
what
we
did
you
1.25
and
what
we
are
currently
working
on
for
1.26
release,
mainly
that's
the
the
update.
Normally
that's
what
we
do
in
the
six
storage
session.
We
do
an
update
to
talk
about
what
we
are
doing
there
and
then
also
I
did
a
session
with
Mauricio.
A: Cool, that's all from me today. Any items anyone wants to bring up? Otherwise we'll return 30 minutes back to your calendars.
D: I want to do a quick update on CBT. I think I already shared this with some of you, but just for the sake of updating everyone else: right before KubeCon we received some feedback from folks from SIG Architecture, basically rejecting, or highly not recommending, that we go down the aggregated API server route.
D
The
main
argument
is
that
there
is
that,
like
you
know,
as
we
have
discussed
over
the
past
couple
of
months
like
when
we
stream,
like
change,
block
data
back
to
the
backup
software,
you
know
it
still
proxy
to
everything.
Is
your
proxy
through
the
kubernetes
API
server
right?
Essentially,
we
put
the
aggregate.
Api
I
mean
sorry
kubernetes,
API
server
back
into
the
request
response
path.
D
The
whole
argument
around
like
using
oh,
we
do
that
for
logs
and
metrics
that
didn't
stand
as
well,
because
yeah,
let's
see
architecture
argument,
is
that,
like
those
are
like
systems
data
you
know
in
so
they
have
more
control
over
it,
like,
as
opposed
to
user
data,
which
that
classify
what
I
wish
like
the
change
block
data
that
we
found
there
so
yeah
it's
so
now,
it's
I'll
I
think
the
last
comment:
I
left
there
was
okay,
that
we
tried
four
or
five
prototypes
already,
like
my
question
to
see.
D
Architecture
was
like
where
you
know
where
which
way
and
which
would
be
the
right
Forum
to
bring
these
up
again.
Surely
we
have
exhausted
I
think
in
my
opinion,
most
of
the
of
the
Alternatives
and
prototypes
we
are
considered
and
that's
they
got
to
be
a
trade-off
somewhere
and
so
like.
Do
we
bring
this
up
to
architecture
meeting?
Do
we
bring
it
out
to
the
stick
apps?
D
Do
we
bring
it
up
to
some
storage?
You
know
like
just
yeah.
D: Yeah... give me a minute.
B: I'm just thinking maybe it's good to actually talk to SIG Architecture, because they...
A: I saw one comment talking about a limitation on the size of each call: if it's beyond a couple of megabytes, then it's not going to be supportable, or whatever it was. Yeah, but...
D: Yeah, so I brought those up too, right: there's a way to rate limit on our side, and there's a way to rate limit on the Kubernetes API server side. I think fundamentally the main point that they bring up is: hey, this...
D: Basically, we are putting all of this... we're in a way hoping that the user will do the right thing, because these are not control-plane data, right? So we're essentially telling the user: hey, you're responsible for not taking down the Kubernetes API server.
D: I think, fundamentally, at least that's where things are at. If we want to go to the SIG meeting and make a case for it, I think we can definitely schedule something with them and get together as a group. Maybe I can just send you all the link.
B: Yeah, I think we need to compile, or maybe I'll say we need to prepare, a separate document, just to write down how much traffic we think would be going through if, let's say, we go down this aggregated API server path. And we should also have an alternative, right? So...
D: Yeah, so you're right. I think the main thing there is that there has to be some rate limiting and throttling happening on the aggregated API server and the Kubernetes API server; otherwise the amount of data will be unbounded, right? I think Jan called it out; he did something...
D: He also showed some math on the Data Protection Working Group channel: say you assume a volume of 10 terabytes, and in the worst-case scenario the entire thing, all the blocks in the volume, is returned; then according to that math we're talking about gigabytes of just metadata, right? I think others found that math there as well. I guess the assumption made there was: okay...
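That back-of-the-envelope math can be reproduced with assumed numbers. The 10 TiB volume size comes from the discussion, but the tracking block size and per-entry byte cost below are illustrative guesses, not figures from the KEP:

```python
# Worst-case size of changed-block metadata for one volume. The volume
# size matches the discussion; block size and per-entry cost are assumed.
VOLUME_BYTES = 10 * 2**40   # 10 TiB volume
BLOCK_BYTES = 4 * 2**10     # assume 4 KiB tracking granularity
ENTRY_BYTES = 24            # assume ~24 bytes per entry (offset + length + flags)

blocks = VOLUME_BYTES // BLOCK_BYTES        # number of trackable blocks
worst_case_metadata = blocks * ENTRY_BYTES  # every block reported as changed

print(f"{blocks:,} blocks -> {worst_case_metadata / 2**30:.1f} GiB of metadata")
# prints: 2,684,354,560 blocks -> 60.0 GiB of metadata
```

Under these assumptions a single worst-case listing is tens of gigabytes of metadata alone, which is why proxying it through the API server was the sticking point.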
D: The amount of data is unbounded. Now, if we talk about throttling and rate limiting, what values, what thresholds are we going to put there? If they're configurable, then the user-data owners would be responsible for making sure things don't break, right? Again, I'm just relaying information; I'm not defending that position right now, I'm just relaying it to this group. So I think it would be helpful for folks to chime in on the KEP as well, so they don't always just hear from me.
D
You
know
folks
can
pick
sometimes
look
at
it
and
share
some
comments.
Then
it
would
be
helpful
because,
right
now
it
has
always
been
me
like
I'm,
responding
to
like
four
of
them.
So.
D: I'm going to share the latest discussion thread in the Zoom chat, so that is the latest one.
B: This was the discussion about the data, how much is going through, right? So this is it.
D: Yeah, and also, if we do want to go into a SIG Architecture meeting, I want to make sure, obviously, that we go in there prepared and educated. So if there's anything in the KEP that cannot be resolved via the PR, then we should bring it up; but if we can resolve it over the PR, we should try to resolve it on the PR. So yeah, I think that's where we are, that's where the KEP is currently at. At the same time, switching gears a little bit, there are two questions that have been brought up that I want to share with this group, that might be worth thinking about. The first one again boils down to the actual data path, right, the blocks of the data.
D: It seems to me that if CSI is going to continue to deem the data path out of scope, how useful is the CBT feature as a whole? People keep asking me: okay, what do I do as a user after I get back the CBT metadata? It sounds like the second half of the story is: now I need to go and find the actual data blocks, but...
D: So yeah, if some folks can help me come up with a stronger story there, that would be great. Xing, you mentioned COSI; maybe you can point me to the right documentation on COSI so I can take a look, or we can... oh.
D: With COSI, at least, it feels like the story is more or less complete, right? The scope is: we will give you APIs to manage object storage buckets; yes, we don't deal with data. But the problem CBT is trying to solve is efficient incremental backups, right? With CBT it's: okay, I get back the metadata, but my backup story is not complete. COSI is just: yeah, here's a collection of APIs.
B: I don't think we can even make them the same, yeah.
D: I think it has come up many times in this working group; even in the CSI API PR, people asked about it. And then the second thing. I don't expect answers here, but I just want to share the questions I've been receiving. The second question is: why do we need to go through the VolumeSnapshot API? Looking back at it, earlier today I was comparing it with the EBS direct APIs.
D: It feels like we're still heavily skewed towards the EBS model, right? So typically, if you want to do an incremental backup, is it common to always have to create a snapshot first? AWS picked it that way because they want users to use EBS snapshots so they can charge for them. I just noticed that, for...
D: So I think: can we just do the backup directly, like from a clone? I think...
F: So we have the option in K10 to not take a snapshot, but we really discourage it, because in order to do that you have to quiesce the application independently. So typically K10 will snapshot the volume, and then do a Kopia backup of the volume, either by cloning it or attaching it somehow.
D: Right, so yeah, okay, that makes sense to me: ensuring either application-level or crash-consistent backups via snapshots. Whether it's snapshotting or cloning, there are different mechanisms to go about it, right? So right now we are under the assumption that there is some sort of snapshot API that we have to utilize, in this case VolumeSnapshot, but eventually it...
F: Well, most backup utilities use snapshots because it's quick. If the storage provider does a snapshot for you, it happens very quickly, milliseconds ideally, and that means you don't have to pause the application while you're copying. For example, if you want a consistent backup of something, you can pause the application and copy all the data, you can clone it, but that means the application is down for a long period of time.
F: Usually the storage systems have this block-level snapshotting that happens very quickly: you pause the application momentarily while it snapshots the volume, and whatever it does underneath, maybe copy-on-write, right. Some systems actually do a full clone; I think I was hearing that Ceph actually does a full clone under the covers, which takes forever, but they have some techniques for making that go faster. It's basically getting that consistent point in time handled by the storage system, and ideally that's very fast.
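The snapshot-then-incremental flow being discussed can be simulated in miniature. This is a toy model, not any vendor's implementation: a "volume" is a byte array, a "snapshot" is a frozen point-in-time copy, and changed-block tracking is simply a block-wise diff between two snapshots.

```python
# Toy simulation of snapshot-based incremental backup.
BLOCK = 4  # tiny block size, just for the demo

def snapshot(volume: bytearray) -> bytes:
    """Consistent point-in-time copy of the volume."""
    return bytes(volume)

def changed_blocks(old: bytes, new: bytes) -> dict:
    """Return {block_index: new_data} for blocks that differ: the 'CBT' diff."""
    return {
        i: new[i * BLOCK:(i + 1) * BLOCK]
        for i in range(len(new) // BLOCK)
        if new[i * BLOCK:(i + 1) * BLOCK] != old[i * BLOCK:(i + 1) * BLOCK]
    }

def apply_incremental(base: bytes, delta: dict) -> bytes:
    """Restore by replaying only the changed blocks onto the base backup."""
    restored = bytearray(base)
    for i, data in delta.items():
        restored[i * BLOCK:(i + 1) * BLOCK] = data
    return bytes(restored)

volume = bytearray(b"aaaabbbbccccdddd")
snap1 = snapshot(volume)              # full backup
volume[4:8] = b"XXXX"                 # application writes while running
snap2 = snapshot(volume)
delta = changed_blocks(snap1, snap2)  # only one block changed
restored = apply_incremental(snap1, delta)
print(len(delta), restored == snap2)  # prints: 1 True
```

Only the changed block travels in the incremental, which is the efficiency argument for CBT on top of snapshots: without the diff, a backup tool must copy (or at least read) every block of the snapshot.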
D: So I guess the main difference between taking a snapshot of the volume versus cloning the volume is that snapshots in general are a lot more efficient and faster.
D: Yeah, okay, that sounds good. I was just looking back at some of the earlier discussions and comparing them with the EBS direct APIs and all those things, and okay, that makes sense. Anyway, those are the updates from CBT at this point. I think the next...
D
Maybe
the
follow-up
action
is
like
seems
like
we
do
want
to
bring
it
up
to
six
storage,
but
before
that,
I
wanted
to
really
encourage
us
to
take
a
look
at
the
cap
and
add
your
comments
and
feedback
there.
So
it
yeah
it
sounds.
It
doesn't
sound
like
it's
coming
from
me
and
also
it
also
give
us
a
chance
to
if
we
do
bring
this
up
to
secure
architecture.
Sorry,
not
sorry
see.
Architecture
then
like
we
can
go
in
more
prepared
and
more
educated
and
we're
all.
In
the
same
sum.
C: Well, regarding the first part, where you want to convince folks how this response would be used: I want to use CBT, so what is the use of it?
C: If you remember, we had created a prototype of how we consume that in Kanister and how we basically do incremental backups. So would it help if we refreshed that prototype and used it to convince folks?
D: I mean, I think it's definitely doable; it's feasible. But from a CSI user and consumer perspective, the CBT API only gets them halfway there, halfway to the problem they're trying to solve, which is backup. And then we have to keep telling users: hey, no, you only get back the metadata; for the second half you have to pick Kanister to actually get back the block data.
B: Yeah, same thing here, right: this gives you the changes, then you go retrieve them. Same thing. This actually gets you one step further on top of the snapshot. A snapshot is just a full snapshot, but this allows you to do incrementals. Unless, of course, some systems do incremental snapshots, so they always do incrementals; then this is probably not that useful, right?
B: Fundamentally that's better, but for backup vendors you need this, right? That's just for the storage system; there's a difference between being a storage vendor and being a backup vendor. A backup vendor always needs this: if you just get the snapshot, then you definitely need to get the blocks to do...
D
It
right
I
think,
ultimately,
it's
just
that
a
bit
of
a
user
expectation,
management
or.
D: He was one of the proponents for the aggregated API server, actually, but anyway, he changed his mind after he talked to David. Yes.
B
He
is
doing
the
he's.
Is
he
doing
the
the
Readiness
production
Readiness
review?
Is?
This?
Is
the
Dave
who
did
the
who
do
the.
B: But that's the thing: this is a pretty large KEP, very complicated, so people will always have different opinions back and forth. That's why it's very important to document everything. This proposal is rejected, and we have a section explaining it and why it's rejected, for when people ask, "hey, why don't you do this?" I'm pretty sure once you have another KEP written using the other approach, there will be several people asking, "why don't you use that?", and your alternatives section says: this is why we don't use it. Yeah.
D: I mean, yeah, you're right. So, to backtrack a little bit, these are two separate discussions: the one we are looking at right now versus the earlier one about the API not dealing with the data path. For the latter, the API does give you a data path; it's just more about the questions that have been raised.
D: Hang on a second, I think we're talking over each other, so I find it hard to follow. I think we agree that this is the main issue here. So I think the one thing is to be more concrete, right?
D: Going back to what Chang was saying: this will only fly, potentially take off, if we can convince SIG Architecture that, yes, we need some rate limiting and throttling implementation, and it has to be concrete; there have to be specific numbers in there. So right now what I'm saying is, I'm hoping folks can chime in here with some of these concrete numbers.
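For what "concrete numbers" might look like, one common shape for the requested throttling is a token bucket. This is a generic sketch, not anything from the KEP; the rate and burst values below are placeholders, not proposed thresholds.

```python
# Minimal token-bucket rate limiter, the kind of throttle being discussed
# for an aggregated API server. Rate and burst numbers are placeholders.
class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec   # tokens added per second
        self.capacity = burst      # maximum tokens held
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        """Refill based on elapsed time, then spend `cost` tokens if available."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # caller should back off (e.g. respond HTTP 429)

bucket = TokenBucket(rate_per_sec=2, burst=5)
results = [bucket.allow(now=0.0) for _ in range(7)]  # burst of 7 requests at t=0
print(results.count(True), bucket.allow(now=1.0))    # prints: 5 True
```

The `cost` parameter is where a size-based threshold would plug in: a large changed-block listing could cost more tokens than a small one, which is exactly the kind of number the KEP would need to pin down.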
D: Yeah, I appreciate it, thanks. That sounds really good to me. I think eventually we will put together a one-pager summary, but at the same time, for the sake of visibility, we should keep adding to the discussion thread there, because there are more people than just David, John, and Clayton looking at this, but those folks are not commenting.
D: If you look at all these discussions, it has been me responding, which is fine, but right now I'm running out of words and ideas to add. So once folks chime in, then people like John, David, and Clayton can see that, okay, there are other people, other stakeholders, other users. Yeah.
B: So let's... okay, everybody, maybe just take a look at the KEP and go through the document, and then let's get back to this.
D: Okay, we don't have to look through the whole KEP; at one point or another you have all listened to the content of this KEP over the past, what, six months now, and nothing has changed much since. I think this particular discussion is the one where we want to come up with more concrete numbers: if we do rate limiting and throttling, you know, for a 10-terabyte volume, what do we set? Then we can put together a Google Doc, bring it to SIG Architecture, and ask them to officially comment on that Google Doc. But we still need some data, yeah.
D: Yeah, thanks. I appreciate all the feedback and input. So, cool, that's all from me.
B: Yeah, thank you, Ivan, for all the hard work. I know it's hard going back and forth with those comments.
D: It's good. I mean, I learned a lot, discovered a lot. So yeah, I appreciate getting all the feedback so far.
B: All right, I think we are at the top of the hour. We will meet again in two weeks. Thank you, everyone.