From YouTube: [WG Data Protection] Meeting 20201104
A: So today we are going to go over some KEP status, and after that we have one topic about our data protection workflow white paper. This one is about backup repositories, so I'll have Preshanto and Dave talk about it.
A: Let's first go over the KEPs. We are trying to bring snapshot to GA. The KEP is already merged, and the API review is also done. Right now there are PRs adding the metrics support to the snapshot controller that are being reviewed, and we also have several e2e test PRs that are in progress. I just want to note here that we used to have a monthly meeting on volume snapshot.
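For reference, taking and restoring a CSI snapshot at the beta API level discussed here looks like the sketch below. All names are illustrative; `snapshot.storage.k8s.io/v1beta1` is the API version that was current before the GA promotion.

```yaml
# Snapshot an existing PVC via a VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: example-snapclass   # illustrative class name
  source:
    persistentVolumeClaimName: my-pvc
---
# Restore by creating a new PVC with the snapshot as its dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: example-sc                 # illustrative class name
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```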
A: That meeting is now cancelled, so if you have any questions related to snapshot, you can bring them to this meeting. I also want to briefly go over the graduation criteria in the volume snapshot KEP. Because we're trying to bring this to GA, we actually need to go through the production readiness review. So let's go over this. I think this is new in Kubernetes.
A: I think we are the first in SIG Storage to go through it, so it's not even clear what has to be done, but at least there are documents; we do have documents for it. We just haven't done anything like this before for any other feature. So for bringing volume snapshot to GA, we basically just listed the things we are working on: the metrics support that I mentioned earlier, which is work in progress, and then also adding the CRD schema validation.
A: We have a PR there that's been reviewed. And then there's the production readiness review questionnaire. This is actually very lengthy, with a lot of detailed questions. The first one is about feature enablement and rollback. I think this is mostly for an alpha feature; in our case we're already beta and trying to go GA. We did try to answer those questions, but I think this one is less important now, because when your feature gate turns beta, it's already on by default.
A: This one basically asks what will happen when you enable the feature gate and what will happen when you disable it. But in our case, going from beta to GA, there's no feature gate change; it's already on by default, so it passes through this one. But then there are also questions about the rollout, upgrade, and rollback plan. If you go from beta to GA, you're supposed to preserve backward compatibility, so: what type of tests have you done for upgrade?
A: Does it really support backward compatibility? For example, a beta version of a volume snapshot should still work after the upgrade, things like that. And then there are also questions on metrics. This is something that we are currently working on; it needs to have some metrics.
A: There are questions on whether you have tested upgrade and rollback, but in our case, since we are still trying to bring this to GA, we don't really have that yet. Right now we have done some manual testing. Since it's not released yet, it's not really possible for others to do this testing, so for now it's just manual testing on our side.
A: That part is about automation, so we don't have that yet. The monitoring requirements section is mostly about the metrics. We do have metrics to measure how long a snapshot operation takes, and also error indications: if an error happens, we track how many times we have failures and how many times we have successes. So we have those types of metrics.
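The kind of metrics described here, per-operation latency plus success/failure counts, can be sketched as a toy in Python. This is illustrative only, not the actual snapshot-controller instrumentation (which exposes Prometheus metrics); all names here are made up:

```python
from collections import defaultdict


class OperationMetrics:
    """Toy illustration: track per-operation latencies and
    success/failure counts, like the snapshot metrics described above."""

    def __init__(self):
        self.counts = defaultdict(int)       # (operation, outcome) -> count
        self.latencies = defaultdict(list)   # operation -> [seconds, ...]

    def record(self, operation: str, seconds: float, ok: bool) -> None:
        """Record one completed operation with its duration and outcome."""
        outcome = "success" if ok else "failure"
        self.counts[(operation, outcome)] += 1
        self.latencies[operation].append(seconds)

    def average_latency(self, operation: str) -> float:
        """Mean duration in seconds across all recorded samples."""
        samples = self.latencies[operation]
        return sum(samples) / len(samples) if samples else 0.0
```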
A: And then, I forgot where it is, there's also a section that talks about the size of the change: you're adding this API, basically adding more API objects to the Kubernetes API server, so the size of those API objects matters. There are questions like that as well.
A: I'm not sure where that is; I remember seeing it. And then there are the SLOs, service level objectives.
A: This one is kind of hard to answer: what exactly is a reasonable SLO?
A: I think in our case, because snapshot is a CRD and is out of tree, we depend on the distros to package and deploy the CRDs and the snapshot controller. We also have the snapshot validation webhook; those should be deployed by the distro. And then the sidecars and the CSI driver are the responsibility of the storage vendors. Okay, I think the scalability section is the one that was asking what you are adding, like the size of those API objects.
A: What API calls are you adding? Because you are basically adding more calls, you basically increase the load on the API server. So it's those kinds of questions.
A: Okay, and then it basically talks about troubleshooting: how do you troubleshoot this?
A: I think we can look at the metrics, and then we can also look at the logs. So that's for that.
A: So I think mainly right now we need to make sure that we get all the e2e tests in.
A: I think we have a test plan somewhere. We have to have tests for the finalizers; we actually already have an e2e test for that which has been reviewed. We need to add tests for the secrets, and we also have a stress test that's being reviewed. And then of course, once we add the metrics support, we'll also need e2e tests for those. Okay, all right.
A: So if there are no questions, I will go back here. This is a question for you all: I'd like to know who is building products that are using CSI snapshots, or who has already released products that are using CSI snapshots. It would be really helpful if you can share; you could put your company name, or, even better, a link, if you have some page or website that shows it.
A: "Okay, we have this product using CSI snapshot," just to help us know who is actually using this and who would like this feature to go GA. So if you want to enter that information here, that'll be awesome.
A: All right, okay. So the next one is ContainerNotifier. We have that KEP, which is still being reviewed. I think there are still a couple of questions that we need to get resolved, so we will need to schedule a meeting with the API reviewers to figure that out. The next one is the generic data populator. Ben, do you want to give an update on this?
C: Yeah, so we've been having weekly meetings on Tuesday mornings, Pacific time, to go into detail on this. We've had three meetings so far and we've made some progress. The proposal going into 1.20 had been to implement a validating webhook to notice when PVCs had data sources that were never going to bind; they would result in a PVC that never bound. But we didn't like that.
C: We decided we didn't like that design, and decided to go instead with a controller model where we basically just post events to PVCs that look like they're never going to bind, because it's possible that the populator that will bind them is going to get installed later. So it's actually okay to have those PVCs sitting around waiting for a populator to get installed. That was the first decision we made that deviated from the original plan.
C: The populator creates a PVC that mirrors what the original PVC looked like, runs either a pod or some other process on the empty PVC to populate it, and then rebinds the PV that got bound to that PVC back to the original PVC that the user asked for, so that at the moment their PVC binds, it already has the data they were expecting, and any pods waiting on it can immediately start up. We did some investigation about:
C: Can we support modes like WaitForFirstConsumer with data populators? And we believe the answer is yes, with a little bit of extra work. We also spent some time talking about the fact that there's going to be some fairly complicated code to implement the mechanics of the watching and the rebinding and respecting all of the Kubernetes rules.
C: So, code that you can import: for the sake of expedience, we're going to start off with a library, but try to keep our options open so that if we can turn it into a sidecar later on, we'll try to do that. But basically we're making some changes to the KEP in 1.20.
C: The feature will remain alpha in 1.20, and we're going to see how far we can get towards actually implementing the controller that posts events, and samples of the populator mechanism and reusable code, so that we can get things rolling and maybe get to beta in 1.21.
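The controller model Ben describes builds on the `AnyVolumeDataSource` feature, where a PVC's `dataSource` can reference a custom resource that a populator understands. A minimal sketch, assuming a hypothetical `FileImport` custom resource and API group (both made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
  dataSource:                         # requires the AnyVolumeDataSource feature gate
    apiGroup: example.populators.io   # hypothetical populator API group
    kind: FileImport                  # hypothetical custom resource kind
    name: my-import
```

Under the design discussed above, a PVC like this simply stays Pending until a populator for that kind is installed; the proposed controller posts events on such PVCs rather than rejecting them.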
A: I wonder if maybe not everyone understands the importance of WaitForFirstConsumer yet. I think normally, at restore time, people probably just do static provisioning, I guess.
A: I'm just guessing, because without this, I mean, you can do dynamic provisioning, but you can't really support WaitForFirstConsumer without this feature, I don't think.
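For context, WaitForFirstConsumer is the StorageClass volume binding mode that delays provisioning until a pod using the claim is scheduled, so the volume can be placed in the right topology. A minimal sketch (the driver name is illustrative, not a real driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: csi.example.com          # illustrative CSI driver name
volumeBindingMode: WaitForFirstConsumer
```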
A: I think backup vendors mostly probably can't really use that feature directly at restore time, because the data, if there is data, is not really managed by the CSI driver, etc.
C: Yeah, if you have a backup scheme that involves creating a pod and attaching it to the volume that the user will be using, then you can't have a situation where you specify what the application should be and, at the same time, specify what you want to restore it from. You'd have to sort of go through a whole process to do a restore and then tell Kubernetes, "I want to run this application."
A: Yeah, I'm actually not sure. At restore time, I think people normally just assume that the original data source is still there. I mean, I could be wrong; maybe it's not true, maybe they've already solved that problem.
A: Oh, I'm just saying that normally, if you do static provisioning, then you kind of assume that it's the same storage at restore time; you're not really changing anything. It's static provisioning, not really dynamically selecting anything. So you don't really know, at restore time, if that storage location still has enough capacity or anything. You wouldn't really know, because you're not:
C: The volume that has this data. We just want to basically break out of the shackles of the snapshot, the limitations of what you can and can't do in that model, and come up with something more flexible, but still have it be very much like a Kubernetes data source.
D: Okay, in this case the external provisioner is out of the picture, right?
C: In the prototype, yes. So sure, I'm happy to go into more detail about how that all works, but I'm hoping that:
A: Three weeks ago, yeah. Actually, we could also, yeah, but I don't know if everyone in this meeting also has time to join that meeting, right? So that's why I would like, I think, but I think that actually:
C: It's been a fairly high-bandwidth meeting so far, but we're slowing down now. We sort of had a bunch of stuff to plow through at the beginning, and now it's just a matter of coding. There aren't that many open questions, so I'm not sure what we'll discuss next week. We might want to reduce the frequency to every other week, potentially. But I can post a link to the agenda doc, which has the Zoom link and all the other information, in this meeting's agenda doc, if you want.
A: I'm thinking maybe it would be good for you to do an overview and just explain why we need this. I mean, I know, because currently I think people already have their own solutions that do not rely on it. But if you can explain why we need it, what the advantage of using the feature is, that'll be very good. So maybe, like:
A: Right, right, but when you introduce the feature, when you write the KEP, you have a section explaining the motivation, right? In my case, I didn't realize this was that important until I realized that I need WaitForFirstConsumer. In general, of course, we know this is useful, but for me that was the one that I think is really:
C: In this meeting, or we can, I encourage:
A: We could do that too, yeah. So I can just forward that to this mailing list as well, so people can:
C: That's one way to do it, sure, but maybe also just post a copy to this meeting's agenda doc.
A: Okay, all right, thanks. Okay, so then the next one, COSI: it has actually been very active. There are still weekly meetings and PoC progress, and the KEP is merged; it's provisional. There's a PoC implementation for this release, trying to go alpha in the next release, 1.21. And volume group: there's a KEP there that has been reviewed. I think I need to schedule another review meeting; there are some questions that were raised in the previous meetings.
A: That's why I have not scheduled a meeting for it; I was trying to see how to solve that problem. So I updated the KEP. Take a look if you are interested, and then I will also try to schedule a review meeting for that. Okay. Next, we have a topic on backup repositories.
A: So, let's see. Okay, I see Preshanto there. Dave, I'm not sure if Dave is here. Preshanto, this link you added here, it looks like there's only half of it; it doesn't have the full link.
E: Cool, thank you so much, Xing. So hey everyone, Dave, myself, and Xing have been working on the backup repositories, or the backup target, piece for the overall data protection white paper. I'm going to go over the high-level requirements of what we need for a target, but first, just kind of level-setting: why are we doing this?
E: When we do backups or capture data from a Kubernetes environment, we need a location to put those backups into. We obviously have snapshots, and snapshot technology is good, but most times (obviously not for the cloud providers, but most times) snapshots are a single point of failure. So you do need a centralized location where you would want to move your backups, especially from a migration perspective.
E: And again, data has been growing at a very exponential pace today, and we need to ensure that whatever data we are storing or capturing within the repository can handle the scalability and performance aspects and can be stored in a much more efficient fashion.
E: So the first item that we have is supporting multiple protocols. From a target perspective, meaning where you store your backup data, a customer would probably want to use object storage for scalability, but there could also be existing file servers: NetApp file servers, or EMC or other storage systems providing NFS storage, or something along those lines, which are already available in the customer environment and which they would want to connect to.
E: Along with that, you may have Data Domain-type appliances, which may be a use case again. This is about looking at it from a prioritized use-case perspective, by what we expect to happen most: obviously object storage first, then some kind of file storage, using technologies like Data Domain and so on. And then the other point was, we were also looking at whether customers want to store their:
E: Backup data on direct cloud storage like EBS or Azure disk storage, instead of using Azure Blob or S3, for example. We weren't 100% sure on this, because of the cost reasons and the usability aspect of it, unless customers are doing a lot of on-demand and continuous migrations and want to use that EBS volume:
E: As the central connection point between two clusters. That is a use case, but in our opinion it's not that well defined yet. Suggested improvements: we've been talking about COSI. COSI has a lot of APIs to create the object storage pieces, so that can be something that can be leveraged. There could be certain improvements that could be made to CSI as well, but because I believe COSI is already down that path pretty much:
E: We probably can just leverage COSI to do all that. Obviously, certain things need to be kept in mind: object storage for one entity may be different from another entity in terms of how it's implemented underneath the covers, and from that angle, migration use cases (migration from one target into another target) probably need to be thought through: what would be the underlying changes in terms of storage capacity, how it is laid out, performance deltas, and so on.
E: So after defining or listing the protocols, we would want to make sure that we are able to do this on-prem and in cloud-based environments, whether it's Azure Blob, AWS S3, or Google Cloud Storage. That would be the next kind of requirement from the target perspective: one is obviously supporting those protocols, and the other is supporting those protocols in different environments.
E: Again, COSI provides the APIs to do this, and it can be achieved pretty seamlessly through that. From a redundancy perspective, customers do want to ensure that the target has some high availability. This could be geo-based high availability: if you're using cloud-based technologies (AWS, Azure, Google), you can have the geo availability from a specific cloud provider, or you can also have customers looking for cross-cloud redundancy.
E: The next item here is long-term archival. Once you start storing large amounts of data into your different S3 buckets or Azure Blob storage containers or something like that, you may want to have a tiering policy that moves the data from S3 into even colder, even lower-cost storage than S3, something like Glacier. I forget what it's called on the Azure side, but they also have something similar.
E: So this point is basically about taking the data and, based on retention policies or archival policies, automatically moving it into lower-cost storage if it's available, whether it's the target storage automatically moving it or the backup vendor doing it; but that would become a requirement underneath the covers.
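On AWS, for example, this kind of tiering can be expressed as an S3 bucket lifecycle configuration (applied with `aws s3api put-bucket-lifecycle-configuration`); the prefix and day counts below are illustrative:

```json
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```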
F: Okay, you mentioned CBT. Does that usually require some kind of storage integration as well, in some way? Right, correct.
E: So I was hoping Dave was on the call, because Dave has some information about different kinds of target repositories, which he's defined as raw, passive, and active. The raw target repositories are basically just kind of dumb storage where you put all that data into it.
E: Okay, so moving along, the next one is permissions. This probably all falls into the COSI aspect of it: when you're creating the target, you want to be able to create different kinds of object stores in different locations and apply the right permissions around them as well.
E: So we probably would want to move this one up here, because it's all kind of provided by the same set of APIs that COSI presents. So overall it seems like COSI would be a pretty innate way of hooking into targets and doing pretty much all the provisioning and handling of all the permissions for creating a target within Kubernetes.
E: The next set of items is around encryption. Encryption could obviously mean multiple things: it can be provided by the backup vendor, or it could be provided by the storage vendor, that is, at rest. One of the biggest things with encryption is obviously integration with:
E: Different key management software, and being able to interact with that key management software to get the right keys to decrypt the data and move it into the target storage. Or, if it's not encrypted at source, taking that source information, encrypting it, and sending it over the wire. Or it could mean just having the target do it at its level with data-at-rest encryption, which would be agnostic to the backup vendor but handled by the target provider itself.
E: I think the net-net of the way we look at it is that it should be kind of open: if the storage provider has the capability, the backup vendor should enable the storage provider to leverage it, because from a speed-of-backups perspective that may be faster; and obviously, if that does not exist, then the backup vendor would want to provide it as an added feature as well.
E: Next items: deduplication and compression. Similar to encryption, you would want the ability to enable or disable deduplication and compression. Again, this can be provided by the storage target directly, or it can be something the backup vendor handles before sending data to the target. Again, this should be a choice available to the end user or end customer. Deduplication and compression will obviously speed up the transfers.
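As a toy illustration of why deduplication shrinks repeated backup data, here is a minimal content-addressed block store sketch in Python. This is not any vendor's implementation, just a sketch of the idea: identical blocks are stored once, keyed by hash, and a manifest of hashes reconstructs the stream.

```python
import hashlib


def store_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once,
    keyed by its SHA-256 digest. Returns (block store, ordered digest list)."""
    store, manifest = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # identical blocks are stored once
        manifest.append(digest)
    return store, manifest


def restore(store: dict, manifest: list) -> bytes:
    """Reassemble the original byte stream from the manifest."""
    return b"".join(store[d] for d in manifest)
```

Real deduplicating backup tools typically use variable-size, content-defined chunking rather than fixed blocks, but the store/manifest split is the same idea.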
E: How much data you store on the target will also be reduced, so it provides not only performance savings but cost savings as well. And then finally, the last one was target management by multiple clusters.
E: Making sure the data is not incorrectly deleted, and is only accessed and deleted by the right source. That's especially useful in migration scenarios; something that needs to be taken care of as well. From the backup notes perspective, I provided some information on COSI; I believe this information is probably better provided in what Xing is sharing in her main document.
E: Along with that, there is Astrolabe. This is another open source project which focuses on kind of standardizing the target spec that we've been speaking about so far, so it would be interesting to take a look at that, and at the different backup repository types they've come up with, which I agree with as well.
E: Initially, right now, we are working with all raw kinds of storage: nothing, no intelligence on top of it; everything is handled by the backup vendor, including full synthetic backups and merging your incrementals into the base snapshots. So that is the world we are living in. But as things improve and as we abstract more layers for application deployment and delivery:
E: We will see more of the passive and the active targets as well, where a lot of the handling of the incremental snapshots and merging them into full synthetics would be handled at the target level, without really having to worry about it from a backup perspective. So just some points to think about and chew on as we kind of grow through this data protection realm, and as products get a bit more mature.
A: So I think this part, the raw, passive, active, overlaps with what we talked about in the CBT meeting. I think we talked a little bit about: if we support CBT, should we have the backup repository managing those?
A: And I think at least at that meeting last time we agreed that we should leave that to the backup vendors and not really introduce it in our Kubernetes API. I think that's it.
E: That was probably where he brought it up as well. So yeah, that's right; it's only provided as suggestions.
E: At a high level, this is kind of the crux of what we think. And again, the inference right now, based on talking to customers and so on: we see 80% are kind of aiming for cloud-based targets. Obviously there are a lot of different reasons for which you'd want to do on-prem, or a combination of both, but leaning towards cloud mostly.
F: You mentioned backup repository Kubernetes APIs. Is that COSI, or is that additional stuff as well? Which point are you exactly referring to? Well, so this is kind of expanding on Xing's question: what APIs do we want to make to support different backup repositories, if any? Because I think COSI is a pretty low-level API that gives you an interface to object storage, right? So maybe the raw tiers, the APIs:
A: Yeah, cool. So I don't know if COSI is raw or passive. Passive meaning what? Meaning that we provide a common API, right? What's the difference between raw and passive? Passive meaning we:
E: That was the same question I actually had for Dave. The way he explained it to me was more around having an API to talk to the passive piece, but for raw there would probably be no API; you would have to go directly, you know.
A: But if I look at what they were saying here, passive meaning this API can actually provide, you can actually get, the snapshots, not just, because I think COSI only provisions the bucket; it doesn't, you can't:
A: Oh no, that's the backup repository; that's not really a backup API yet. The backup repository is just the object representing your object store or NFS, or a backup device, you know, an on-prem backup device. That's the backup repository; that's not the backup API. The backup API, yeah, we don't have that yet. We have Andrew's doc.
A: He was just talking about, I think I remember he talked about: okay, this API should be very similar to snapshot, except that it has a pretty clear definition of what backup means. Backup meaning it's a different device, not your primary storage; snapshot does not have that differentiation, basically.
A: I think someone, is it Alexis? He's working on this, right? I think Alexis said he's working on the backup APIs.
A: Yeah, so maybe we can hear about that, yeah. So that's the backup API; that's still not the backup repository yet, yeah.
F: Then the fallout of that will be the components in Kubernetes that we can build that will be useful for it. So it seems like the motivation for the backup repository is: maybe we expand COSI to include the actual object interactions, or maybe different interfaces. For me, that's kind of what falls out of this section: the need for an interface. And in fact, there already are common interfaces out there that you can use, right?
F: There are libraries that kind of abstract away cloud provider interactions, and so the question is: do we need to add that? Do we need to define one of those in Kubernetes? Is there something in Kubernetes that would have to directly interact with objects, or can it just use kind of off-the-shelf interactions, right, using SDKs or other existing interfaces?
A: I think those are open questions, right? I thought, oh, I think someone, maybe Alexis and others, are also working on this. Is that right, or no? A couple of repositories.
A: Right, so I think this is still, Tom, I think that's still like an open question. That's probably something that we just still need to discuss, and clarify what else we need to do. So now we do have COSI.
A: So that's the plug-in. The Velero plug-in for vSphere uses that Astrolabe layer to do those, yeah. That's probably what he means, I guess. And then I think active is more, yeah, another layer where the backup repository is actually also kind of managing all the things that are stored there. Because if you're just using, let's say, S3, it doesn't really manage those: you have your changed blocks, you have your, you know, it doesn't really manage them for you, right?
F: There are kind of two sides, right: there's the storage side and then the repository side of this. Especially with CBT; CBT is pretty hard to come up with.
F: My view of that repository is that it's able to handle consuming, like, changed blocks.
F: To help make it efficient, but that may not be, that may be like a higher-level layer, right? I think it's actually talking about the primitives, like the storage primitives, that backup repositories consume. And so there are kind of two sides, right: what does the repository need, even a higher-level repository layer, and then what do the volume storage providers provide?
A: Okay, yeah. Maybe we can talk about CBT more and then come back to this one.
F: Preshanta, where do you think restic would fit in this framework? That's interesting.
E: I think restic would probably be another, I think my knowledge of restic is not that great, but what is the underlying storage that restic requires? It can be anything, right? It's an object store, typically. Okay, so it would probably be something much like one of these, I guess.
C: But yeah, I don't know if it could also use, like, an NFS store.
A: It's not really, it's not like an S3 object-store type of thing, right? So I think that should be something else, I'm not sure what. It's more the way to do the backup for you, right? It's not really:
C: An object that represents what you can restore from, and a repository is a nice way to conceptualize what that is, but it's kind of rigid and not as flexible as you might want. Or you might want the ability to move backups between repositories, or do other things, like have meta-repositories; there are all kinds of other constructs you could imagine. What matters is that when you take the backup, you know where it's going to go, and when you're doing a restore, you know where it's coming from, yeah.
E: I'm just looking at the restic documentation right now, and it talks about preparing a new repository. It can be local; it can be S3, MinIO, or Wasabi, Azure, or Google Cloud Storage. Different ones, yeah.
C: But the way snapshots work, right, you don't have the option of "where do I want my snapshot to go?" You just say, "I want a snapshot," and it goes where it's going to go. You get zero choices, or you get one choice: the snapshot destination.
F: But it's on both sides too, right, because the things you consume are also very different. You might want to consume, for example, output from a mysqldump; you might want to consume a filesystem directly; whereas with volume snapshots, you know the thing you're taking the backup of, yeah. So with that background, I think there are a lot of layers and backends that make it a little more complex than the snapshots.
A: I think we are running out of time, yeah. It looks like we maybe need to talk about this more and just define the layers: what should be handled by the backup repository and what is not.
A: All right, I think that's it for today. Is there anything else? Otherwise, we'll meet in two weeks. Thanks, everyone.