From YouTube: WG Data Protection 20200812
Description
No description was provided for this meeting.
A
The first agenda item is that last meeting we discussed the data protection workflows, as well as a detailed breakdown of tasks and assignments among the different participants in this working group, who come from different companies. So we want to follow up on those items. That's one, and the second thing is that Xing and I wanted to give the community an update regarding the progress we are making on the container notifier.

There were some more discussions between our working group, SIG Node, SIG Storage, and some other Kubernetes groups regarding the design. This time we have focused mostly on security-related topics, and I want to give you an update on that, and there are some miscellaneous items as well. At least we can go through all these three topics and then open the discussion to the public. Xing, do you want to — or Tom, do you want to get started on the workflow doc?
B
So yeah, there are like two sections of the doc that we have started working on. The first one is the section "Why do we need data protection in Kubernetes?" So, Prashanth, are you there? Actually, Prashanth has put this together, and then we also had some discussion about the use case section, so maybe, Tom, we will ask you to go over that next.
C
Yeah, sure. If you could open the link that we sent out for that, maybe that's better. I can also share with you.
D
And Xing, I'm on the call as well, if you need me to kind of go over something.
D
Great. So hey guys, this is Prashanth Kochavara from Trilio. We've been working on a data protection solution for Kubernetes, and we've been, you know, articulating the reasons and the needs for data protection in general. So I've taken some of that content, massaged it, and added it in here.
D
So, at a high level, why do we need data protection in Kubernetes? We're breaking this down into three different buckets. The first bucket talks about cloud native applications: how they are built, and how the application DNA, or the architecture, has changed over time. We initially had mainframes, where the application and the data were, you know, kind of together. Then we had the client-server architecture — client-server models, a one-to-one mapping between a server and an application.
D
So
looking
at
how
the
applications
have
evolved.
Definitely
you
know
it's
a
new
kind
of
architecture
that
needs
to
be
adhered
to,
a
new
kind
of
architecture
that
needs
to
be
supported,
and
when
we
look
at
traditional
data
protection
solutions,
we
see
that
the
focus
of
traditional
data
protection
solutions
is
completely
different
from
how
cloud
native
applications
have
been
developed.
D
They
are
generally
focused
on
monolithic
applications.
They
are
focused
on
more
of
the
data
volumes,
some
of
the
items
and
pieces
which
we
have
been
taking
for
granted
with
legacy
solutions.
You
know
things
around.
Networking
and
security
and
pieces
now
are
also
broken
down
into
additional
components.
So
these
traditional
data
protection
solutions
are
not
able
to
keep
up
or
are
not
able
to
match
up
to
how
cloud
native
applications
are
built,
are
deployed
and
are
you
know
growing
in
a
way?
D
So
that's
the
first
level
of
discussion
point
that
we've
added
here.
The
second
point
is
more
around
in
terms
of
stateful
versus
stateless
we've
been
kind
of
talking
about
stateful,
stateless
around
with
virtual
machines
and
so
on,
but
looking
at
data
stateful
applications
are
the
kind
of
primary
kind
of
applications
that
are
run
within
kubernetes.
D
There
are
some
links
that
I've
added
from
different
analysts
what
we
are
saying
here
that,
in
order
to
protect
applications
successfully,
even
if
obviously,
if
it's
a
stateful
application,
you
need
to
protect
the
container
image
and
the
metadata
and
the
persistent
volumes.
D
But
even
if
it's
a
stateless
application,
you
know
there
are
certain
metadata
variables
and
items
that
may
be
needed
for
the
application
to
run
successfully
in
an
environment.
While
we
can
definitely
do
that
via
automation,
tools
like
get
off
sensible
teleform
to
spin
that
out,
when
those
statements
are
applications
up
again
it.
It
will
take
time.
You
know
it
will
take
time,
and
data
protection
generally
is
measured
in
two
terms
or
two
variables:
rpo
and
rto.
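To make the two metrics concrete, here is a minimal sketch (not from the meeting; the timestamps are made up): RPO bounds how much data you can lose — the gap between the last usable backup and the failure — while RTO bounds how long recovery takes.

```python
from datetime import datetime, timedelta

def rpo(last_backup: datetime, failure: datetime) -> timedelta:
    """Recovery Point Objective: data-loss window ending at the failure."""
    return failure - last_backup

def rto(failure: datetime, restored: datetime) -> timedelta:
    """Recovery Time Objective: downtime between failure and restored service."""
    return restored - failure

failure = datetime(2020, 8, 12, 12, 0)
print(rpo(datetime(2020, 8, 12, 11, 0), failure))   # up to 1:00:00 of data lost
print(rto(failure, datetime(2020, 8, 12, 12, 30)))  # 0:30:00 to recover
```

Backing up the stateless application's metadata shrinks the RTO term: you restore a saved definition instead of rebuilding it.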
D
Protecting
these
stateless
application
metadata
is
going
to
help
reduce
that
rto.
I'm
not
saying
that
you
know
it's.
It's
completely
impossible
to
recreate
that
application,
a
stateless
application
without
backing
it
up.
You
still
can,
but
if
we
look
at
it
from
an
rpo
rto
perspective,
you
will
still
want
to
capture
that
metadata,
because
your
application
dna
is
changing,
the
application
is
growing
and
you
would
want
to
capture
that
to
recreate
that
application
in
a
you
know,
secondary
environment
efficiently
and
successfully.
D
So
that's
what
the
second
point
focuses
on
and
then
the
third
point
is
focusing
more
on
the
ide
roles
and
personas
right.
So
one
of
the
things
that
we
all
have
been
kind
of
using
kubernetes
for
today
is
for
application,
faster
application
development
and
faster
application
delivery.
But
now
this
can
happen
only
when
you
know
the
same
kind
of
tools
and
technologies
can
be
used
across.
D
You
know
your
id
domain
and
your
developer
domain,
so
you
need
to
have
a
technology
that
is
centralized
which
can
be
you
know
which
can
be
delegated
based
on
rules,
and
you
know
different
permissions
to
operate
that
same
data
protection
tooling.
D
So
the
third
point
basically
focuses
on
the
fact
that
we
need
a
we'd
actually
need
a
cloud
native
application
that
can
align
to
the
principles
of
kubernetes
to
support
data
protection
as
well
for
both
the
id
rows.
D
Okay, I think the screen here just stopped.
A
Yeah, I can share. Okay, okay, yeah.
D
The last point — what I was saying is, you know, basically we need a cloud native solution that can address the needs of both the developers and the IT administrators. So it has to be a unified cloud native data protection solution that gets deployed in a Kubernetes environment.
A
Could you elaborate a little bit? Why would a cloud native data protection methodology reduce RTO and RPO compared to GitOps, where the intent is really to just recreate all these resources?

D
Correct, correct. So you want me to add that into this section?

A
Yes, it would be really helpful if you can.
A
So
I'm
I'm
not
sure
how
to
what
is
all
position
over
here
compared
to
github's
operation
for
stateless
workload,.
D
You know, that's one aspect of it. And also, when we think about stateless applications: stateless applications generally work within the realm of other stateful applications. So you would have an application with other stateful components — you could have multiple stateful components talking to stateless components to do the overall application's job.
D
So
so
from
that
angle,
I
think
you
know
you
you
kind
of
look
at
it
from
a
day
zero
day.
One
perspective
yes
get
ops
and
everything
can
help
it
from
that
angle.
But
when,
once
your
application
starts
growing
the
metadata
pieces
keep
changing.
Unless
and
until
you
keep
track
of
those
metadata
pieces,
you
would
want
to.
You
know,
have
a
backup
available
so
that
you
can
recreate
it
swiftly
and
much
much
more
efficiently.
A
Yeah,
I'm
not
disagreeing
just
just
don't
get
me
wrong
yeah.
We
want
it's
better,
that
we
have
some
clarifications
over
here,
so
that
there's
no
confliction,
let's
say
definitely
perfect,
yeah,
yeah
and
another
aspect.
You
might
also
want
to
think
about
talk
about
that.
A
little
bit
more
is
the
whole
effort
of
xcd
backup
right
right
now.
A
Essentially,
the
metadata
piece
we're
talking
about
kubernetes
resource
definitions
right,
and
they
are
effort
right
now
in
open
source
in
kubernetes,
open
source
and
actually
it
is
on
beta
right
now,
a
code
to
protect
the
xcd
right,
basically
xcd
duplicated
copy
from
each
other
having
a
redundancy
in
xd
level.
B
Yeah, so this doc — we have added this link. Can you go back to the bigger doc, just to show everyone how to get to this? Yes, we added the doc here in the first section, "Why do we need data protection?" So yeah, please go take a look, and you can add your comments there.
A
Just
to
be
clear,
these
dogs
will
eventually
come
together
and
build
the
white
paper
for
this
working
group.
So
it's
really
highly
appreciated.
If
anyone
can,
you
know,
share
your
opponents
over
their
opinions
over
there
and
it's
already
also
highly
appreciated
for
whoever
wrote
this
dog
is
really
nicely
wrote.
The
first
finalist
thanks
all.
B
Access them? No, I think this is all, yeah. This one is already public; you can add the comments there. Yeah.
B
Yeah, let's make Tom — oh sorry, I almost clicked the wrong button and stopped the recording. Sorry. Let me — actually, can you stop sharing, and then maybe we'll just make Tom the presenter? Let's see.
C
Okay, cool. So this is much more of a work in progress. I think we got together and put together an outline here — thanks, Steven — and, yeah, so let's go through this. This is the use case section, so this is kind of the outline for that. We started with kind of the high level of what we need data protection for, so that will probably overlap with the previous section.
C
So
we
should
probably
maybe
merge
that
a
little
bit,
but
we
talked
about
the
kind
of
highly
used
cases
and
really
it
comes
down
to
operational
recovery
and
disaster
recovery.
So
the
two
they
differentiate,
because
operational
recovery
is
what
you
do
through
your
normal
operations.
C
So
I
can,
you
know,
restore
something
if
I'm
doing
an
upgrade
of,
let's
say
like
a
database
version.
Maybe
I
have
to
back
up
and
restore
instances
of
that
database
as
part
of
an
operational
workflow
for
disaster
recovery.
We
talked
about
kind
of
obviously
big,
larger
scale
outages
and
we
can
talk
about
kind
of
the
the
failure
domains
there
as
well.
C
I
think
a
second
order
from
there
come
is
you
know
portability
so
doing
application
migrations
between
different
florida
domains
as
well,
so
that
can
be
between
different
clusters
between
different
name
spaces
between
maybe
different
cloud
providers,
and
you
know
that.
Will
that
also
ties
into
copy
data
management
so
moving
data
between
test
instances
and
production?
For
example?
C
You
know
you
don't
wanna
test
in
prod,
but
maybe
you
you
wanna,
replicate
your
your
production
setup
in
your
test
clusters.
We
also
went
into
scheduling
a
little
bit.
I
think
this
is
going
to
end
up
being
a
non-goal
so
for
scheduling
we
talked
about.
You
know
having
running
running
back
of
jobs
on
a
schedule
how
this
goes
into
rpo
and
we
kind
of
talked
about
this
is
mostly
already
supported.
So,
for
example,
castin
has
a
scheduling
mechanism.
C
Kubernetes
itself,
you
can,
you
can
do
things
with
cron
jobs
and
I
think
most
data
protection,
vendors
kind
of
have
this
that's
already
built
in
and
probably
the
same,
is
true
for
compliance
as
well.
So
you
can
kind
of
you
can
check
to
see
how
compliant
you
are
with
the
desired
schedule.
You
can
check
to
see
if
you're
you're,
you're
compliant
with
whatever
regulations
the
the
company
is
imposing,
for
example,
yeah.
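As an illustration of the "Kubernetes itself" option mentioned above, a plain CronJob can drive a scheduled logical backup. This manifest is a hypothetical sketch — the name, image, schedule, and command are assumptions, not anything prescribed by the working group:

```yaml
apiVersion: batch/v1beta1   # the CronJob API version current in 2020
kind: CronJob
metadata:
  name: nightly-logical-backup
spec:
  schedule: "0 2 * * *"     # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: dump
            image: mysql:8.0
            # Illustrative logical dump to a mounted backup volume.
            command: ["sh", "-c",
                      "mysqldump --all-databases -h db > /backup/dump.sql"]
```

A vendor scheduler adds retention, compliance reporting, and RPO tracking on top of this kind of primitive.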
C
We did differentiate between namespaces and applications a little bit, which we can talk about here, as well as individual PVs — so we talked about that there. We had this open question: can apps span multiple namespaces? You know, I think it's probably okay to take an opinion here. My opinion is that we shouldn't allow this, right. If you're defining, for example, a consistency group, maybe that should be limited to a specific namespace.
C
You
know,
I
think
you
should
be
able
to
certainly
back
up
multiple
namespaces
on
the
same
schedule,
but
if
it's
easier
from
you
know
our
implementation
side,
I
think
it's
probably
okay
to
you
know
not
consider
applications
that
span
multiple
namespaces.
In
that
sense,
we
also
went
through
some
examples
here.
You
know
to
help
answer
that
question,
so
stefan
came
up
with
a
good
example
where
he
saw
a
customer
who
had
a
single
database
that
was
in
use
by
multiple
namespaces,
whether
that
was
you
know.
C
You have a single physical database with multiple logical instances, where each instance might be used by a namespace, or even one database where multiple namespaces use different tables within a single instance. There are obviously much more complex applications. SAP is a good example, right: it has a ton of components, it may span multiple namespaces, has Oracle databases, multiple other components, etc.
C
We'll also probably go into depth on some — we'll just mention consistency groups. I think, you know, this is kind of the purview of some other work that Jing is doing, and maybe outside the scope of this working group, but it's worth mentioning. I think certainly there's a primitive we'll want, you know.
C
I think it was Ashish who gave the talk on versioning a few weeks ago, and I think API resource versioning was mentioned. Certainly, when talking about use cases, I think any kind of migration or upgrade is definitely worth mentioning, so we will include that here as well.
C
We also want to reason about backing up from older versions and restoring into newer versions of Kubernetes clusters, and, you know, I think that's certainly a valuable use case. You know, I have a backup from two years ago; over those two years I've upgraded my cluster several times — how am I able to restore that older backup? And we can talk about, you know, direct support from some primitives.
C
We can talk about direct support from vendors, or we can talk about just having, you know, a manual process, right — where you maybe do upgrade steps: you first upgrade to an intermediate version and then kind of upgrade to the final version.
C
You know, I think we can go through this really quickly, but what we've personally seen at Kasten is that there are multiple types of ways that people want to back up their data, and I think in Kubernetes we kind of want to have the primitives that support them all.
C
So, like logical dumps, for example — doing a mysqldump, pg_dump, extracting the data. This is especially important if you're using something outside the cluster, like RDS, you know, a managed service where you maybe don't have access to the underlying infrastructure, like the storage or the compute. You just have kind of an API call that you can make, and if you want to back that up, you have to do things at the logical level.
C
The second is what we call application consistent. So, for example, you can combine volume snapshots with application-level hooks: you could run the quiesce commands — which I think will be another section — at the application level, at the database level, so like a FLUSH TABLES WITH READ LOCK, you know, an fsyncLock in MongoDB, that kind of thing, and then use the volume primitives that we have in the community to take volume snapshots.
C
You could also rely only on volume snapshots and just let the application handle cleaning up — so crash consistent, you know, just take the volume-level snapshots and back those up. And we also talked about dirty reads, so essentially the worst case: if you have no other option, you can just do a slurp of the file system and grab all the data from there. Of course, you have very low consistency guarantees, but if you're able to orchestrate things maybe at a higher level — if you can say, you know, pause my application.
C
While this happens — or if you have, like, a copy of your application — then maybe that will be okay. Or, if the window is small enough, maybe your application can handle a startup just using its normal crash-consistent semantics.
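The three backup styles discussed above can be summarized side by side. This is a hypothetical sketch — the command strings are illustrative examples (MySQL/Postgres flavored), not a prescribed implementation:

```python
# Three ways to back up data in Kubernetes, as discussed in the meeting.
# Angle-bracketed entries stand for CSI snapshot operations, not shell commands.
BACKUP_STYLES = {
    "logical dump": [
        "mysqldump --all-databases",   # or pg_dump; works even for managed
    ],                                 # services like RDS (API-only access)
    "application consistent": [
        "mysql -e 'FLUSH TABLES WITH READ LOCK'",  # quiesce hook before...
        "<take CSI volume snapshot>",              # ...the volume snapshot
        "mysql -e 'UNLOCK TABLES'",                # unquiesce hook after
    ],
    "crash consistent": [
        "<take CSI volume snapshot>",  # rely on the app's crash recovery
    ],
}
```

The file-system "slurp" mentioned above is the degenerate crash-consistent case, with the weakest guarantees of all.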
C
I think there are quiescing use cases we should talk about here. We probably won't go into detail, because there's another section later in the white paper that goes into quiescing in more detail — which I think is still pending — but, you know, we'll talk about the use cases for quiescing. Certainly, in this part of the doc we want to talk about the use cases for backup and data protection in normal operations.
C
So things around how you orchestrate, like, database upgrades, maybe schema changes, that kind of thing, with data protection primitives.
C
We want to talk about the data protection components, so I think we kind of scoped in application consistency, volume backups, and metadata snapshots — we kind of talked a little bit about that before. I definitely agree that, you know, one use case is GitOps, so that's a great point, Sean.
C
We should probably mention that here as well, and talk about how you can still recover data if you're using GitOps to recover the actual metadata of an application. That's, I think, certainly worth mentioning here.
C
We scoped out cluster infrastructure, so this includes things like nodes, for example, maybe node AMIs. Typically, when we see people handling disaster recovery, we don't see them restoring the actual node specs themselves — you kind of rebuild the cluster and then restore the data into it.
C
Even when doing cluster recovery, we don't see that too commonly, but we probably should dig into this a little bit and talk about what parts of that we as a community think are important to support backing up, and which parts we think are kind of handled just by initializing clusters, right.
C
Something that we said was a non-goal was things that are really specific to applications. Replication is a good example where, you know, I don't think it's within the purview of the data protection group to handle replication directly. You know, I think replication should be handled either by the application — so, you know, like MySQL binlogs, for example — or potentially by the storage layer: a lot of SDS systems will replicate, for example, the blocks directly, or NFS.
C
You know, you get that kind of built in. High availability, I think, falls into a similar camp, right. This is talking about, for example, if a single replica goes down, how do you handle that? I think that still falls into the application domain, and maybe not the data protection domain, but I think it's worth having that discussion.
C
When you mention use cases, it's probably important to mention verticals. So we just mentioned big data, maybe ML. There's also kind of brownfield — so taking things that already exist and migrating them into Kubernetes.
C
You know, I think a lot of people are just kind of copying their application stack, putting it in Kubernetes, and then maybe breaking it up into microservices from there. There is something worth talking about with data protection there, where, any time you talk about data portability or migrations, it's certainly worth mentioning these types of migrations.
C
We also want to elaborate on these failure domains. I think failure domains are very interesting and broad. In the space, there's already support in Kubernetes, at least for labeling failure domains — you know, I think PVCs, for example, do have some support for this. They used to use these tags via annotations; excuse me — now there are explicit fields.
C
I think it's worth mentioning here in the context of describing, if you have, like, multi-zone clusters, or if you want to talk about regional disaster recovery — we should talk about that. We're also trying to figure out — I think there's another section in this white paper, which I think Jing is working on, on the primitives we need in Kubernetes — so it's worth, I think, figuring out what failure domain parameters work well, and what maybe we're missing there.
C
Something that comes up a lot in data protection is what an application is, and I think that discussion always comes up, right. Sean presented the application CRD, for example, which is one definition of an application. Another definition of an application could, for example, just be a namespace, you know; or maybe a Helm chart could be a definition of an application. That kind of ties into the granularity — figuring out exactly what the things you want to back up are.
C
But that's definitely a discussion, and I think it's worth elaborating on in the use case part of this, right — talking about the different types of application definitions we want to support. And I think it's possible to kind of get a flexible definition that supports all these things, right. You know, everything is namespaced, for example, even the application CRD. So we can talk about that.
C
I think it's worth elaborating on the specific databases in the use case section as well — talking about the different patterns that exist in databases: how you can perform, like, application consistent backups of a database, what the logical primitives you need are, what the flow of a backup is. For example, some databases, when taking a backup, will directly push to object storage; some will write to a local volume; some will kind of stream data out that you can —
C
— put wherever you want. Talking about that, I think, will be in this section as well.
C
I think Steven brought up this point around being self-describing. You know, it's important that, when you take a backup of something, you're able to cut references to any local state. That's obviously important for disaster recovery — you want a backup to be kind of self-contained and self-describing.
C
This, I think, we repeated — I mentioned RDS and managed services. I think it's worth touching on them, but really, full support for things like VM backups is, I think, kind of out of scope, or maybe future work.
C
We probably don't want to — you know, that gets pretty complex, right, because there are a lot of different types of external infrastructure. My view here is that we maybe start with logical backups of things like external databases, at least in the scope of this document, and punt on things like backing up, for example, external VMs — backing up, you know, other, more complex applications that exist outside of the cluster.
C
So that's kind of the outline. I think there's a lot of detail that we have to fill in, certainly, but we have a few people working on this. Is there anything that I missed?
B
Yeah, so I think we probably need to sort it out a little bit and maybe finish a few sections first, and those overlapping sections maybe we can do later. I mean, of course, when we combine everything together, we can move things around — that's fine. Right now I just feel this one is covering a lot of things in this use case section.
C
So maybe, to move forward there, we should start with — well, because this is still just an outline, right. So maybe we can —
C
— pull out sections for specific people to work on, because I know we have a lot of people kind of working on some of it in parallel. So, as we pull them out, maybe we'll sync with you, because I know you're taking kind of the high-level view of the document as well — just to see if we think something should go here, or if it would fit better in another section.
C
You know, because I know there's, for example, a quiescing section as well, and I think in the use cases you might want to mention it, but we kind of go into detail in the actual section of the document.
A
With that said, Tom, I think we have to add another item here, which is encryption.
A
Yeah, the encryption piece is missing. I think it has to be one of the workflow use cases.
B
Should
we
add
that
in
the
so
the
general
dog
should
add
that
as
well,
because
we
don't
have
that
in
what
is
missing?
Are
you
talking
about
encryption?
You
want
this
one
to
be
like
a
separate
like
first
class
feature
or
because
right
now,
I
think
no.
A
How encryption is done is probably out of scope of this working group. However, whatever fundamental building blocks we are looking to provide should be capable of taking in any arbitrary encryption mechanism. That's what I was trying to say, and I think this is one of the use cases that the backup needs to have.
H
Right, you are right. Obviously I agree that this is not a —

A
By itself alone it's not a use case, because the requirement needs to be there for the use cases. That's one thing. Okay, "requirement" is a fair term.
A
Good, all right. We'll talk about this, yeah.
A
Yeah, we haven't started with this one, and my team member is still on vacation. Yeah, I'm here, but Alex is still on vacation — he's going to be back next week. He will probably start putting some time together with Elliot to work on this topic.
B
Okay, I was thinking that Angel actually shared a doc some time ago talking about this, maybe.
B
I think so. Dave has commented — he's not here, but he's very interested in working on this. So yeah, Prashanth, maybe I can connect you with Dave, and you can start talking about this section. Okay.
A
Then, regarding the latest updates — Xing, anything on the volume group snapshot work?
B
Yeah, I haven't updated any doc, but there is a KEP that I'm going to update, and I'll set up another meeting to review the KEP.
A
Okay, this one, okay. Then the application snapshot, backup, and restore — this is basically the application definition piece, right?
B
Yeah, so for this one we have something already, right — you put it together some time ago. Yeah, we should get this one started too, and then, I think —
B
Will
be
like
later,
let's
finish,
the
the
missing
pieces
and
the
use
case
section
first
before
yeah.
A
I need to ask everybody a favor over here: whoever has your name over there, please attach your email after your name, so that, for example, if we want to organize some kind of conversation or meeting, we can reach out to you — or a Slack ID. Can everybody do so, please, if you see your name over there? All right, then, next item: let's move on to the container notifier.
B
Yeah,
so
we
had
a
meeting
again
with
signaled,
and
this
time
we
also
invited
the
jordan,
who
is,
I'm
not
sure,
he's
she's
part
of
everything
he's
but
mainly
security
side
right
from
because
he
has
some
concerns
over
our
original
crd
approach.
So
we
imagine
jordan.
B
Yeah
right
right
yeah,
so
he
yeah,
I
think
so
so.
Overall,
I
think
the
media
she
went
went
pretty
well
right.
So
at
the
end,
but
from
sick
note
side,
the
thing
is
from
signals
that
we
only
have
that
just
have
one
person
we,
the
both
sig
chairs
front
signal
there.
They
have
conflict,
so
they
delegate
to
yourself.
B
Actually,
at
the
end
of
the
meeting,
we
actually
had
an
agreement
that
we're
going
to
kind
of
reduce
the
scope.
Basically,
we'll
have
this
doing
this
phase,
one
phase
pure
protein
phase,
one
we
are
focusing
only
on
the
content.
Notifier
action
probe
will
be
a
future
phase
if
needed,
and
then
even
for
the
first
phase,
we
are
not
going
to
have
a
cubelet
do
rechoice,
because
that's
a
concern
like
how
long
it
should
be.
Try.
You
know
what
is
the
cubele's
responsibility?
B
It
fails
right
so
that
that
cut
on
some
of
the
the
responsibility
of
cubelet,
and
then
it
will
be
the
external
controller.
Who
is
making
the
request
for
containerifier
to
do
retry?
If
that
fails
and
then
the
other
piece
is
the
status,
I
think
there
are
some
concerns
on
the
status
field.
If
we
are
making
like
a
status
for
each
container,
then
I
think
there
are
some
concerns.
There
will
be
too
many
updates.
B
So
I
think
I
think
the
conclusion
there
is
to
have
like
one
one
status
per
pod
kind
of
consolidate
those
and
then
by
the
still,
we
will
have
like
an
array
of
status
for
each
continuous
inside
that
pod
at
high
level
that
that's,
then
they
were
supposed
to
oh
and
then
there
was
also
a
there.
Are
some
ask
in
the
beginning
of
this
talking?
You
can
see
the
staff
edit
some
comments.
They
want
us
to
write
what
other
impact
on
kublet
if
we're
making
this
change.
B
So that's something that we need to do. But then, I think after the meeting, we got some comments from Derek, who is one of the SIG chairs from SIG Node. He is still concerned; he was asking us to do some prototyping outside of the kubelet — you can see the comments here. So here's where we are right now, trying to decide, you know, what the next step is.
A
One thing is that at least we got agreement from the participants of that meeting that we're going to do this in Kubernetes, and the second thing is that we're going to do this in a very restricted way. First of all, we're not going to have the kubelet doing retries, as Xing was saying; we're going to have limitations on timeout — very strict limitations, right. The kubelet basically just executes the command, and whatever the output is, it's going to be stuck into the status object.
A
Controller
whoever's
used
this
feature
to
decide
what
are
the
next
actions,
so
you
can
do
retry,
you
can
just
give
it
up
or
if
it's
a
signal,
etc,
etc,
and
does
that-
and
the
third
thing
is
that
we
are
also
looking
to
limit
the
number
of
actions
you
can
define
per
container
and
that
way
the
basically
people
are
having
signals.
People
are
having
concerns
around
scalability
of
kubernetes.
A
Basically,
they
are
concerned
about.
If
I
am,
some
random
controller
sends
out
a
cube
exit
or
a
notification
object
to
thousands
of
parts
in
the
system,
and
they
they
it's
gonna,
bring
a
serious
pressure
to
the
couplet,
so
those
are
the
basic
stuffs
we
are.
A
You
know
where
the
basic
feedbacks
received
from
the
meeting
yeah,
despite
the
data
piece
where
she
was
mentioning
derek,
has
some
more
comments
but,
but
so
far
I
think
fundamentally
himself.
We
make
made
good
progress
and
hopefully
this
is
going
to
be
ended
soon.
B
Oh, and then there's one question, Xiangqian, that we want to ask this group, which is about that requirement: we want to put a limitation on how many actions can be defined for each container.
B
Each
each
part
right
so
will
that
be
a
problem?
If,
let's
say
if
we
are
doing
choirs,
if
let's
say
if
you
need
to
actually
run
not
only
application
quiz,
you
also
need
to
go,
do
file,
fs,
freeze
and
all
those
volumes.
A
Sure, yeah, let me try. So, in order to take an application consistent snapshot, you might need to execute a couple of steps. For example, the first step is: exec into the application container and stop accepting any new writes for the SQL database. The second step is: flush my memory to the disk, right. The third step is maybe, potentially, fsfreeze, and the fourth step is taking a volume snapshot.
A
So
those
can
you
sometimes
it
can
be
executed
in
one
command,
but
sometimes
it
requires
a
series
of
commands
and
it
has
to
be
coordinated.
A
So one concern, I think, Xing had about limiting the number of notifiers here is for those who cannot combine those steps together into one single command, one single action. If we, let's say, limit the notifiers to three, and you need four notifiers to do your application-level consistent snapshot, then basically this feature is useless to those applications. And what Xing was trying to understand is: are those use cases rare, or rather common? I think in the chat, Ben's got a point — Ben, I think you're from Rubrik?
J
So, hopefully you guys can hear me. Yes, it's Ben from Rubrik. Yeah — for example, on the VM backup, you know, for more complex applications, we'll run a script before. Basically, we have pre-snapshot, post-snapshot, and post-successful-backup — kind of like three injection points we have on a VM — and, you know, for some applications we'll use one or two of those, some all three. Say, like, a pre-snapshot would be: hey, getting the application into a quiesced state.
J
You know, flushing everything to disk, etc., and pausing new transactions, and then the snapshot occurs. Now, once we've successfully taken the snapshot, we would then go and do the post-snapshot, which could be, you know, getting the application back into an operational state. So they're two separate actions that need to be on either side of the snapshot, and they're done — you know, the post-snapshot is just done when the snapshot's cleared up. Then there's the post-successful-backup, you know.
J
That's
the
third
injection
point
that
we
have
and
that's
just
useful.
We
have
some
customers
using
that
just
for
doing
cleanups
log
truncation
things
like
that,
you
know
actions
that
they
wanna
to
do
after
they're
assured
that
the
backup
process
itself
has
been
successfully
completed
and
yeah,
it's
kind
of
like
a
third
injection
point
and
they're
reasonably
common,
not
every
application.
Obviously,
but
you
know
a
decent
number
of
them,
you
know
would
use
at
least
two
or
three
of
those
three.
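Ben's three injection points can be sketched as an orchestration skeleton. This is a hypothetical illustration, not Rubrik's actual implementation — the hook script paths and the `run` indirection are assumptions made for the example:

```python
import subprocess

def backup_with_hooks(take_snapshot, upload_backup, run=subprocess.run):
    """Run a backup with pre-snapshot, post-snapshot, and post-backup hooks."""
    run(["/hooks/pre-snapshot.sh"])       # quiesce the app, flush to disk
    try:
        snap = take_snapshot()            # the actual volume/VM snapshot
    finally:
        run(["/hooks/post-snapshot.sh"])  # unquiesce as soon as the snapshot exists
    upload_backup(snap)                   # long transfer runs while unquiesced
    run(["/hooks/post-backup.sh"])        # cleanups, log truncation, etc.
    return snap
```

Note the asymmetry: the post-snapshot hook fires once the snapshot is cut, while the post-backup hook waits for the whole backup to be confirmed — which is why they are two distinct injection points.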
B
So that's actually reasonable. We are actually talking about, like, everything you do for the quiesce pre-snapshot — combine those into one — and then everything you do after, combine into another one. But if you need another one — so let's say if we limit it to four, would that be enough? Four injection points — would that be enough?
B
Yeah, inside each one you should be able to combine them. Actually, you can either put them in a script — then, you know, you just run that one script — or there's also a way to combine multiple commands, and then that's also possible.
C
I was trying to think about whether this makes it less modular, right. If I'm able to define kind of more common injection points — for example, like per volume — and if I'm forced to do multiple actions within a single notifier, that would mean that I would need kind of specific notifiers based on the number of volumes, maybe based on specific pod definitions.
B
Yes, because I'm thinking you would actually probably need some sequencing. Let's say if you wanna both flush the database and also run fsfreeze — you probably have to enforce some order: you do the application quiesce first and then do the fsfreeze next. There's some sequencing there too, right.
B
It's probably better actually just to put them in one script, and then you can do that in order, because if you're running them separately, sometimes, you know, Kubernetes can't really guarantee the sequencing.
A
All right, we've got one minute left — right on time. So if you guys are interested in this, please go ahead and check out this doc, and we have a working group update at KubeCon Europe — you can find the details over here. Xing and I recorded it, but the time is not friendly at all: it's 5:30 a.m. PST and 8:30 a.m. EST. So, if you're interested, please go ahead and listen; both Xing and I will be online to answer any questions.
A
Unfortunately, we don't have too much time, but I still want to open it up: is there anything anyone wants to discuss, or at least bring up over here? Or shoot me or Xing an email, and then we can put your item onto the agenda.