From YouTube: WG Data Protection Bi-Weekly Meeting 20200213
A

B
Yeah, I was going to say, I wish I had had time to cook up some slides with some pictures of how this works. Since I don't have that yet, we can talk through the code and the implementation, and hopefully there will be questions about the details, so that we can actually make some progress on maybe coming to an agreement about how data populators should work, or at the very least get people thinking about the details so that we can have that discussion later.
B
But I think it's highly likely that we will end up needing some new CRD for a backup, and we will need a way to specify the data source in Kubernetes. So I have the KEP to basically just change the Kubernetes API to allow arbitrary objects to be data sources. What we didn't get into during the meeting two weeks ago was exactly how this might work at an implementation level. So I'm trying to think what I have, other than that code, that I could show you.
A
B
A
C

B
So the Kubernetes change is very simple. It will be a new alpha feature gate; I'm not sure exactly what it will be called, I'll have to decide that in the coming weeks. But when you enable this alpha feature gate, it will basically stop deleting CRD references on PVCs that are not snapshots or volumes. It'll just leave whatever object type you put in the dataSource field of your PVC, so that has two implications.
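As a rough illustration of the API change described above, here is a minimal Python sketch of the decision (pseudocode for the behavior, not the real Kubernetes validation code; the gate flag, group/kind keys, and function name are all placeholders):

```python
# Illustrative sketch of the dataSource handling described above.
# Names and shapes are hypothetical, not the real Kubernetes source.

CORE_SOURCES = {
    ("", "PersistentVolumeClaim"),                   # core API group
    ("snapshot.storage.k8s.io", "VolumeSnapshot"),
}

def sanitize_data_source(data_source, gate_enabled):
    """Return the dataSource reference the API server keeps on the PVC.

    Without the gate, anything that is not a snapshot or a volume is
    dropped; with the gate enabled, arbitrary object references are
    left in place for out-of-tree populators to act on.
    """
    if data_source is None:
        return None
    key = (data_source.get("apiGroup") or "", data_source["kind"])
    if key in CORE_SOURCES or gate_enabled:
        return data_source
    return None  # old behavior: silently cleared
```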
B
Fortunately, if you put something in there that is not one of those two things, the sidecar looks at it and says: I don't know what that is, and it just doesn't do anything; it just waits, which is the correct behavior from my perspective. What you want is to be able to add new things, and if there's not a feature in CSI that knows what to do with that, then the external provisioner sidecar shouldn't do anything with it.
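The sidecar behavior just described can be sketched as a tiny decision function (Python used purely as illustration; the real external-provisioner is a Go program, and these names are hypothetical):

```python
# Sketch of the external-provisioner sidecar's decision described
# above: provision only for data sources it knows, otherwise wait.

KNOWN_KINDS = {"VolumeSnapshot", "PersistentVolumeClaim"}

def sidecar_action(data_source_kind):
    """None (no data source) or a known kind -> provision;
    anything else -> do nothing and wait."""
    if data_source_kind is None or data_source_kind in KNOWN_KINDS:
        return "provision"
    return "wait"  # leave the PVC for an out-of-tree populator
```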
E

B
It's only about 750 lines; it's not terrible, and this is heavily based on the sample controller that exists in the Kubernetes tree today. So if you're familiar with how the sample controller works, this is not a huge deviation from that. What this controller does is basically follow the same basic logic as the external provisioner sidecar: it watches for PVCs, and it sees when a new PVC gets created.
B
It goes and figures out whether there is a bound PV, and if not, and if the data source is something that this populator understands, it says: okay, it's my job to create the PV that this PVC will bind to and then perform the binding. Of course, this thing doesn't have a CSI driver; it doesn't have access to any real storage, so it can't create a PV on its own. So here is the approach that we use in this example populator.
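The reconcile decision just described might be sketched like this (hypothetical object shapes; the real controller is a Go program built on client-go informers):

```python
# Sketch of the populator controller's "should I act?" decision
# described above. PVCs are modeled as plain dicts for illustration.

def needs_population(pvc, understood_kind="Hello"):
    """True when this populator should act on the PVC: it is not
    yet bound to a PV, and its dataSource is a kind we understand."""
    if pvc.get("volumeName"):          # already bound to a PV
        return False
    source = pvc.get("dataSource")
    return bool(source) and source["kind"] == understood_kind
```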
B
You create a second PVC with all of the same parameters as the one that the user created, except that the data source field is empty. Then the normal CSI controller for that storage class will see that second PVC and say: oh, I know how to address this one, because there's no data source, so I'm going to create an empty one. And it will provision an empty PV with all of the same parameters that the original PVC asked for, just with no data in it.
B
This populator does that in a separate namespace, because I don't think it's a good idea to have extra PVCs popping up in the user's namespace; the user could see those and wonder what they're for. So this has its own namespace where it does all of its work: it creates a second PVC that's exactly like the first one, except with no data source, in its own namespace.
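The shadow-PVC trick described above can be sketched as a simple transform (illustrative only; the working namespace and naming scheme here are placeholders, not the actual populator's conventions):

```python
import copy

# Sketch of creating the shadow PVC described above: the same spec
# as the user's PVC, but with no dataSource, in the populator's own
# working namespace so the user never sees it.

def make_shadow_pvc(user_pvc, work_namespace="populator-work"):
    shadow = copy.deepcopy(user_pvc)   # don't mutate the user's object
    shadow["metadata"]["namespace"] = work_namespace
    shadow["metadata"]["name"] = "prime-" + user_pvc["metadata"]["name"]
    shadow["spec"].pop("dataSource", None)  # let the normal CSI provisioner handle it
    return shadow
```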
B
As an example of, you know, a way to populate some data into a volume. And the implementation in the hello world populator is a naked pod that's managed by this controller, so if there are failures or anything, you'd have to delete the pod and create a new pod, and repeat that process until it eventually succeeds. You could also use a Kubernetes Job, or other higher-level abstractions.
B
All we really need on the Kubernetes side is to change the API, because all the machinery for the data population is out of tree. So rather than define a solution in the Kubernetes KEP, just limit the KEP to the Kubernetes API change, and then figure out the machinery to make it happen out of tree, in this working group or SIG Storage, or however we want to. I mean, there are a few different ways to actually make this happen, and so, yeah.
F
G

B
So as we add new objects to CSI, or new ways to restore data for CSI, we'll have to evolve the provisioner sidecar to know what to do with those. Right now it's simple: if it's a volume, it always sends a clone request to the CSI driver; if it's a snapshot, it always sends a restore-from-snapshot request. But you could have more complicated logic to say: well, if it's a CRD and this field is set to true, then do it through the CRD; if it's set to false, don't do anything; or something more complicated. So if...
E

H
How about as we evolve CSI? As we add additional data populators into CSI, how do we signal from the CSI driver to the external populators? Because there are probably going to be a bunch of them, and they may not all be in sync that, hey, I actually grabbed this. Or in fact, if you have multiple populators...
G

B
So for some types of things, I think it's safe to assume they will never be built into CSI, like this hello world thing, right. This is a CRD I made up; it does something silly, you know. But something like this could be of use to somebody, right. Somebody could design their own CRD and say: I'm going to use this to populate my data. And as long as it's not like a standard that's been agreed to by SIG Storage...
B
F

B
That could probably be handled through versioning. The first versions of the implementation might be alpha, and you could teach the external provisioner never to deal with the alpha versions of those objects; then, by the time it got to beta, you could say: well, you'd better have solved the problem of how the things cooperate.
B
So let me tell you what I did for my prototype with backup and restore, and see if that sounds appealing to you at all, before I go on to cover the details of how this actually works. My backup CRD had a restorer field, which is similar to the CSI driver field, so that if you have multiple CSI sidecars for different CSI plugins, they know which one should handle it. So they already look at that, right.
B
If you try to restore from a snapshot, the provisioner sidecar will look at the driver field of the snapshot object and say: if that's not me, I'm not going to handle it, right, because presumably someone else knows how to do that. So the sidecar already knows which driver it's dealing with, and will filter on that driver when it's looking at these objects.
B
So what the provisioner would do is talk to the CSI driver and say: what formats do you understand? It would return a list, and then, when the provisioner saw a request to restore a backup, it would filter on the restorer field, so if the restorer doesn't match the driver, I'm going to drop it. It would also look at the format and say: if this format is not one of the supported formats, I'm also going to drop it and let someone else deal with it. And in that way...
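The double filter described above, restorer field plus supported formats, might be sketched as follows (the field names come from the prototype CRD being described and are illustrative, not a published API):

```python
# Sketch of the two-level filter described above: a restore request
# is handled only if the backup's restorer matches this driver AND
# the backup's format is one the driver reported it supports.

def should_restore(backup, my_driver, supported_formats):
    if backup["restorer"] != my_driver:
        return False               # some other sidecar owns this backup
    return backup["format"] in supported_formats
```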
F

H
The correct thing is, the proposal, the KEP currently is very narrowly scoped: it's "get rid of the restriction", yeah, and then we're going to have to do a second one that's actually how this is implemented. So we can probably table, or, you know, open up these things later.
B
Well, so, to be clear: I think this KEP is accepted, and we're going to implement it and put it in alpha, and then part of the graduation criteria for the KEP is that before we move to beta, we have to figure all this stuff out. But I don't think there will be another KEP; it'll just be that we'll have to figure it out. It's going to become a much bigger KEP somewhere.
H

B
But all of this is out of the Kubernetes tree. We need to have a solution for SIG Storage that's implemented in the sidecars and that the storage group agrees to, but Kubernetes doesn't care about any of that. As long as we're happy with it, we move it on to beta and then it becomes something that's enabled by default. I don't think there will be another KEP, and I don't think there will be a lot of changes to the KEP.
B

I
If the data source is set to something it doesn't understand, it doesn't move forward. The data populator then, in this proposal, in this example, creates another PVC which doesn't have any populator set, and kind of goes from there. I'm wondering if data population actually needs to work in concert with provisioning, where you don't actually need to be creating any dummy PVCs or separate PVs.
F
H
I
H
I
A
H

B
So let me describe the rest of the process, and there's some subtlety here, because you're right, we had some earlier proposals. But the issue that we always came to was that typically, after you have your empty volume and you're ready to populate it, you want the thing that does that work to be a Kubernetes pod, right. It's...
E
B
E

B
We're probably going to want to implement those as Kubernetes pods that know how to do the population, and there are many, many use cases of reusable populators that are not going to be CSI specific. I mean, for all the CSI-specific stuff, I'm perfectly happy to say: yep, that's a CSI thing, add new CSI or modify...
G

B
...RPCs to do it. But I'm interested in the generic side, and we specifically call these generic populators because they're not platform specific; that's very important. There's a huge class of non-storage-specific populators: I just want to have data on my volume, and I don't care how. I just want it to work on 100% of CSI implementations.
B
So the issue is, if you just create one PVC and create a PV, and you allow it to bind: at that point, if the user had created a pod attached to that PVC, the pod will say, I can run now, right, because the PVC is bound, but there's no data at that point.
B
H
C
H
F
B

F
It doesn't have to be, as long as it sets it up so that it follows the rules necessary; the storage controller will do the binding. Well, then the storage controller needs to know when it's done, right. It just needs to have the binding logic be valid. Yes, you need some status to reflect that.
I
B
H
B
Okay, yes. So maybe it would also help to cover some of the other ideas that we had initially, and why we didn't end up implementing those. We did consider an idea of keeping the one-PVC model, but putting a taint on the PV, and implementing that so it would inhibit the original user's pod from binding to that PVC until it was populated.
B
There
are
no
taints
on
TVs
in
Coober
days
today,
so
we'd
be
considered,
adding
that
as
a
new
feature
that
will
be
generically
useful
that
you
could
taint
your
Peavey's,
so
that
ones
that
were
waiting
population
would
have
a
taint
on
them.
That
would
inhibit
everything,
except
for
the
populated
pod
from
attaching
to
them.
But
I'm
implementing
taints
as
a
top-level
kubernetes
feature
on
volumes,
is
a
massive
undertaking
for
somebody.
B
You
know
to
define
a
whole
new
taint,
Toleration
API
for
pods
and
PDC's,
and
so
nobody
wanted
to
take
on
that
that
large
amount
of
effort,
although
I,
think
it
would
have
it-
would
have
solved
the
problem.
The
other.
The
other
thing
that
was
considered
and
rejected
over
a
year
ago
was
some
sort
of
a
PVC
swap
API
that
that
you
could
call
where
you
could
have
two
different
PVCs
into
different
Peavey's,
and
you
would
you
would
invoke
some
operation
and
we
never
defined
exactly
what
it
would
look
like.
Cuz
we
couldn't
figure
out.
B
The problems with that approach were: one, we couldn't figure out what the API should look like, because Kubernetes is declarative and this is a very imperative type of request; and two, there are all kinds of scary things that happen when you rebind PVs and PVCs if they're attached to a pod at the moment that happens.
B
H
B
D
H
B
H
D
H
Have it where the driver actually drives it: the driver says, yes, there's a PV that I need to create; I don't know how to fill it in, so I'll go ahead and do the logic of creating the PV, and if there's an error, like I'm out of space or something, I can then report it. And then, after I've created the PV, I can tell the data source to go and fill it in.
B
So you're right: we could enhance the interaction between the external provisioner sidecar and the CSI driver to have a new kind of creation request that says, please create an empty volume, but don't just make it ready to use; we're going to hold it after it's created and go do something else. All the regular errors would still flow back as errors, but success would be delayed until we did the other thing. The problem is...
G
B
H
B
H
B
Okay, well, the way it works today, if you look at the hello populator, let me scroll down to the interesting part. We have all these informers; we have a run-worker sync. Okay, interesting PVC: we check what the kind of the data source is. It only deals with things called Hello; it gets that object, it looks for the pod, looks for the second PVC, it runs the pod, and then, where's the success case here... oh yeah, it checks if the pod has succeeded.
D

B
Okay, so basically we have this control loop, where it's going to keep watching, you know, the original PVC, the temporary PVC we created; it's going to watch for the PV that the sidecar creates; it's going to watch for the pod that it creates. And then, when the pod eventually succeeds, so the second PVC has been created, it's been bound to a PV, and the pod did its job, meaning the data is now there, at this point we need to rebind the PV back to the original PVC.
B
Now, this particular implementation actually creates a second PV and binds that back to the first one. This is not necessary: you can just take the original PV and rebind it directly back to the original PVC, and Kubernetes is perfectly happy with that, I discovered. So there are actually a few different tricks you can play to do the rebinding.
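The rebinding trick amounts to rewriting the PV's claimRef so it points at the user's original PVC. A minimal sketch (the dict keys mirror the PersistentVolume claimRef shape, but this is an illustration, not the populator's actual code):

```python
# Sketch of the rebind described above: point the populated PV's
# claimRef back at the user's original PVC, so the normal binding
# machinery completes the bind to the right claim.

def rebind_pv(pv, user_pvc):
    pv["spec"]["claimRef"] = {
        "namespace": user_pvc["metadata"]["namespace"],
        "name": user_pvc["metadata"]["name"],
        "uid": user_pvc["metadata"]["uid"],
    }
    return pv
```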
B
Yeah, so at the moment that we set the back reference correctly, we can at that point go delete our temporary PVC and our temporary pod, remove all the finalizers, and walk away, and let the ordinary volume bind controller do the binding and make that user happy. So yeah, as soon as we update the PV to point back to the user's PVC, this controller just needs to clean up, and it's done.
A
B
F
G
F

B
Somebody raised a point about better managing, you know, when this is done and doing the rebind, and it's possible that some of that can be standardized; I don't know if it needs to be represented in a library or an API. But yes, those are the details that we totally want to iron out and agree to before we move forward with making this beta, and we have time to do that, right.
B
It's not even alpha yet, so we have at least three months to come up with a good design for finishing off the workflow and handling all the weird error cases. What if the provisioning fails? What if there is no populator? You've got to return something. Those details need to be worked out to have a very smooth user experience.
H

A
So the purpose of this ExecutionHook is, yes, to provide a way to do that, so that we can have application consistency. As for the status of this work: we do have a KEP, and I have a link here. The KEP was actually merged, and the implementation is in progress. Ashish has been working on this, but then we got some comments from the API reviewers.
A
They gave some suggestions, but that actually means we'll have to go in a very different direction. During the design phase of the KEP we actually looked at several different approaches; there were, of course, many different variations. But maybe there are two directions. One is the current proposal: have a CRD that defines the hook, and also an external controller to manage the lifecycle of that hook. And then there are the alternatives.
A
E

A
One, I think, is possible. So here in this diagram we have two controllers: one is the ExecutionHook controller, and the other is the application snapshot controller, which is actually another KEP that Iman has been working on; we should actually ask him to talk about that in one of our meetings. The application snapshot controller will be taking a snapshot of a whole application. So here we have an ExecutionHook
A
CRD, and that's this green box here, and I also have this HookAction. Users will be creating hook actions so that they can be reused; in those hook actions you define the type of command to run. The application snapshot controller would be the one that creates the ExecutionHook, and then the ExecutionHook controller would be the one that manages its lifecycle: basically, it will trigger the execution of those commands based on whatever is in the hook.
I
A
Yeah, basically, there are different proposals: one way, you can just reference it, but you can also copy the whole content, and then basically they'll be the same. So whatever is in the HookAction, the application snapshot controller will know what the commands are that are needed.
K
A
The HookAction, and yeah, I think initially it was together, but I think it's clearly better separated. So, for example, if you look at this HookAction example here, right: I said, okay, this is like a script that I run for MySQL. But for some other database, like MongoDB, there are certain commands it requires, so those types of commands can be shared. That's why we thought, okay, maybe it's better to separate them.
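A hypothetical sketch of the two objects being discussed, a reusable HookAction holding the command and an ExecutionHook that references it, written here as Python dictionaries (the API group, version, and field names are invented for illustration and are not the merged KEP's final API):

```python
# Hypothetical shapes for the two CRDs described above. A HookAction
# captures a reusable command (e.g. a MySQL freeze script); an
# ExecutionHook references it (or copies its content) and targets pods.

hook_action = {
    "apiVersion": "hooks.example.com/v1alpha1",   # invented group/version
    "kind": "HookAction",
    "metadata": {"name": "mysql-freeze"},
    "action": {"exec": {"command": ["/bin/sh", "-c", "mysql-freeze.sh"]}},
}

execution_hook = {
    "apiVersion": "hooks.example.com/v1alpha1",
    "kind": "ExecutionHook",
    "metadata": {"name": "freeze-before-snapshot"},
    "spec": {
        "podSelector": {"app": "mysql"},
        # reference by name; an alternative design copies the content
        "actionName": hook_action["metadata"]["name"],
    },
}
```

Separating the two means the same HookAction can be reused by many hooks, which is the rationale given above.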
A
L
A

H
Well, no, but say I quiesce something, mm-hmm, and I'm waiting ten seconds, and it doesn't happen; then I say okay, that was my wait time. So with the controller, then: say the controller had quiesced five out of ten pods. Would the controller then be responsible for rolling that back when the timeout expires?
A

H
The other problem that we've run into is that we have a similar API in vSphere, and it's possible for you to basically lock something down but then never unlock it. So do we have a concept of a lease here? You know, you're going to quiesce; it's not really quite a lease, it's an execution hook, which is what gets a little weird. But in terms of leases, you could see yourself having one, like: I'm going to quiesce, and after 30 seconds, a minute, an hour, whatever...
E
A possibility: different actions. So you have your regular hook action, which we use to initiate the quiesce, and then we might need a successful-completion action and an unsuccessful-completion action. One you would use where, you know, the backup completed normally and now we want to unquiesce; the other is where we timed out and didn't actually complete the workflow we were going to do. You know, what's the unquiesce then? Because there might be some different actions; I'm thinking of something like a database.
G
A
B
There's another subtle failure mode, which is: if you set a timeout on how long it can be quiesced, and the snapshot itself takes about that amount of time, knowing that the snapshot completed before the unquiesce can be tricky in a system like Kubernetes that has no concept of a clock. So you need a way to detect when all the snapshots have definitely been taken, and know that nothing had been unquiesced at that point, so you can declare success on that consistent snapshot.
H
B
J
A
Right now I'm just talking about the current proposal, which is that an external controller, the ExecutionHook controller, will be handling that. But then there is another alternative proposal, which I haven't sketched here, where that would be the kubelet. So there's the current proposal, yes, I guess.
A
A
Yeah, so I was actually trying to show that. Basically, yeah, that is in the KEP. The unquiesce will always have to happen, so we actually mention here that the controller will always have to create the execution hooks for unquiescing the applications, no matter what, whether it succeeds or fails, because we can't leave it frozen forever, yeah.
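The invariant just stated, that unquiesce must always run no matter how the snapshot attempt ends, is the classic try/finally shape. A minimal sketch (function names are placeholders for whatever the hook controller invokes):

```python
# Sketch of the invariant described above: whether the snapshot
# succeeds, fails, or times out, the unquiesce hook must always run.

def snapshot_with_hooks(quiesce, snapshot, unquiesce):
    quiesce()
    try:
        return snapshot()
    finally:
        unquiesce()   # runs on success, error, and timeout alike
```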
A
G
Is there going to be a provision for... I mean, we've talked about auditing and timing out as if this is all a transaction: either it all succeeds or it all fails. Is there also a mode where it doesn't matter? Like, you know, you go and tell everybody to snapshot, and then you do backups and you're good, as long as, you know, you're...
G
A
M
G
A
L
A
M

A
We can have some other... yeah, finish, yeah. So let's do that. Take a look: all of the links are in the agenda doc. Take a look, and we can have another meeting to talk about this. So, going back to... okay, KubeCon Amsterdam: we are going to have a meet and greet at the Kubernetes contributors summit at KubeCon Amsterdam.
A
SIG Storage already has this registered, so we will just do it under SIG Storage. So if you are going to KubeCon Amsterdam and you want to be there: this is more like a casual event; as the name suggests, it's a meet and greet. It's going to be in the afternoon of the contributors summit, and there will be a sign saying, okay, this is SIG Storage.
A
So if you are interested in going, register for the contributors summit. This is on the Monday, I think, the day before the regular sessions start. Also, if anyone who is not a contributor wants to attend this event, let me know, because I think that to register for the contributors summit you actually have to be a contributor, but we can sponsor you if you are interested in going but are not a contributor yet.
A
C
A
C
A
G
C
We had talked, when we were talking about the data protection scenarios, and I've also had discussions with Nina about the application snapshot controller: we wanted to do a quick run-through of Kanister, which is a project we've looked at that actually implements something very similar to our execution hooks and actions. I wanted to see if we could get the group familiar with it, so we can see if we can merge those efforts, because there's a large body of work already out there.