From YouTube: Kubernetes Data Protection Workgroup Meeting 20210324
Description
Kubernetes Data Protection Workgroup Meeting - 24 March 2021
Meeting Notes/Agenda: -
Find out more about the Data Protection WG here: https://github.com/kubernetes/community/tree/master/wg-data-protection
Moderator: Xiangqian Yu (Google)
A: Morning again. Today is Wednesday, March 24th. This is the Kubernetes Data Protection Working Group meeting. Let me share my screen. First, apologies; today we have two major agenda items, and then we'll leave the rest to an open discussion.
A: The first thing is that Xing and I have been working on this container notifier KEP for a little while. Today I'm going to give the group a look back, plus current status updates. This will probably take 30 minutes of the session, depending on how many questions we have. The second agenda item is that we will talk a little bit about the white paper status updates, and then we will be opening the session to the audience.
A: So you probably have been hearing about this topic for a little while. Just to give those who are not very familiar with it a little background: there have been, in total, two attempts so far to define a mechanism to send a command to a container in the Kubernetes world. The first attempt (the KEP is listed over there) was CRD-based: a controller would reconcile on the CRD, and it needed to be a privileged pod because it needed to run arbitrary kubectl exec commands against arbitrary pods and containers. It was mainly targeting and solving the application-consistent snapshot problem. That was the first attempt, and there were a couple of concerns around it. The biggest one is security, because the controller needs to be privileged. There was a good amount of pushback from SIG Node and also from API reviewers, talking about the possibility of moving this in a different direction. The other thing is that there are other use cases which can fit into this model, and people wanted to extend the scope a little bit to cover use cases like sending signals to a set of containers or pods in the Kubernetes cluster.
A: So those are the cases. Then I went ahead and proposed this container notifier. It was kubelet-based: the kubelet runs the command against the pod and the container. The main purpose is not to widen the scope of privileged controllers offered by the community, since the kubelet already has all the necessary access to run those commands against any arbitrary pod and container on that particular node, and also to be more generic, to support other use cases. This is the second attempt, but we received a good amount of feedback on it.
A: How do we make sure we don't send too much traffic to the API server from the kubelet, and how do we ensure the kubelet can actually handle so many notifications, when pods can be in the order of thousands in the cluster?
A
The
second
thing
that
drove
a
lot
of
discussion
is
the
whole
imperative
versus
declarative
concept.
Kubernetes
apis
are
mostly,
if
not
all,
declarative
right.
This
is
this
whole
base
level
triggering
versus
age
of
triggering
right.
Everything
is
about
level
triggering
kubernetes
work
and
signal
execution
hooks.
I
tend
to
be
more
imperial.
Basically,
you
send
something
and
you
leave
it
there
that
you're
wrong.
A
There's not much need for the kind of retry logic a regular Kubernetes controller would apply on failure. Taking the execution hook for application consistency as an example: even if we want to retry, we would bound the retries, or maybe we don't even want to retry at all, in order to limit the quiesce period of a certain application. During the application-consistent snapshot process, if we introduce retry logic, maybe we can still bound it, but it will extend the period, or the SLA the quiesce requires.
A: Yeah, an application quiesce and unquiesce, for sure. And when you send a signal, for example to temporarily adjust the log level to debug, you don't want to leave it there forever, because that will probably write too many logs into your system and make your disk full very soon.
A
So
how
to
deal
with
the
newly
created
or
newly
deleted
part
in
a
kubernetes
world,
it
remains
a
problem
to
be
solved.
Basically,
pods
can
go
and
get
the
again
go
and
come
at
any
time
in
this
in
the
kubernetes
environment,
even
nodes,
so
how
we
process
that
is
another
major
discussion,
topic
question
so
far.
A
No,
I
then
I
will
move
on
so
so
we
have
two
attempts
in
the
second
attempt
that
this
is
the
first
attempt
in
the
second
attempt
the
call
thing,
so
the
idea
is:
have
a
inlined
container
loaded,
notifier,
a
list
of
actually
container
notifiers
in
the
container
core
type,
and
this
is
not.
This
is
not
an
api
object,
it's
just
a
united
type,
so
the
structure
of
the
notifier
is
pretty
straightforward.
A: It's basically just a handler, which holds the command, and a name. The name uniquely identifies a specific handler to run in a container, and you can have multiple notifiers.
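The inlined shape being described can be sketched like this; the type and field names here are illustrative assumptions based on the discussion, not the final KEP API, and the fsfreeze commands are only an example:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExecAction:
    # Command run inside the container, argv-style.
    command: List[str]

@dataclass
class ContainerNotifier:
    # Inlined in the Container spec; NOT a standalone API object.
    # `name` uniquely identifies this handler within the container.
    name: str
    handler: ExecAction

# A container may carry several notifiers, e.g. a quiesce/unquiesce pair:
notifiers = [
    ContainerNotifier("quiesce", ExecAction(["fsfreeze", "--freeze", "/data"])),
    ContainerNotifier("unquiesce", ExecAction(["fsfreeze", "--unfreeze", "/data"])),
]

# Names must be unique so a caller can address one handler specifically.
assert len({n.name for n in notifiers}) == len(notifiers)
```

The name is the only thing a caller needs to trigger a handler, which is why its uniqueness within the container matters.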
A: So there's a pod selector as the first field. It groups all the pods that have the label, finds the corresponding notifier within those pod definitions, and then sends the kubectl exec command.
A
So
you
can
imagine
you
have
let's
say
my
sequel
deployment
somewhere
with
the
leader
in
the
forward
enabled
so
when
you
do
an
application
requires
logic
and
application
consistency
snapshot.
A
What
are
you
going
to
do
is
that,
okay,
you
select
other
parts,
that
label
with,
let's
say,
mysql
and
then
there's
a
notifier
defined
in
the
container
say
acquires
me
whatever
it
is,
and
then
you
send
the
signal
to
that
corresponding
notifier
and
the
results
will
be
recorded
in
the
notification
status.
A
You
can
take,
you
can
see
that
the
notification
status-
the
first
thing
it
has-
it
is
a
list
of
pod
notification
status
and
each
part
notification
status
contains
potentially
multiple
container
notification
status,
because
each
part
can
have
multiple
containers
and
all
these
containers
can
have
that
notifier
defined
this
making
sense.
So
far.
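The nesting just described can be sketched as follows; again, the type and field names are assumptions for illustration rather than the real KEP types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContainerNotificationStatus:
    container_name: str
    succeeded: bool

@dataclass
class PodNotificationStatus:
    pod_name: str
    # One entry per container in the pod that defines the notifier.
    containers: List[ContainerNotificationStatus] = field(default_factory=list)

@dataclass
class NotificationStatus:
    # One entry per selected pod. This is the list that can grow to
    # thousands of entries when the pod selector matches many pods.
    pods: List[PodNotificationStatus] = field(default_factory=list)

status = NotificationStatus(pods=[
    PodNotificationStatus("mysql-0", [ContainerNotificationStatus("mysql", True)]),
    PodNotificationStatus("mysql-1", [ContainerNotificationStatus("mysql", True),
                                      ContainerNotificationStatus("sidecar", False)]),
])
total_results = sum(len(p.containers) for p in status.pods)
```

The fan-out is pods times containers, which is exactly where the scaling concern raised next comes from.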
A: How do you scale your notification status? Because your pod selector can be arbitrary, right? It can be anything, and then you may end up with thousands of pod notification statuses nested underneath the notification status.
B: I'm wondering: kubectl exec lets you run commands, and the reason we don't like it is that, if you don't put any limit on the commands you can run, people can use it to perform attacks. So this mechanism lets the person who defines the pod enumerate a set of commands that can be run from, you know, other controllers.
B: For scaling, do we have a solution to the kubectl exec issue? Like, what if I wanted to run kubectl exec on a thousand pods?
D: But you don't really... where do you save the results? You're not really saving those in your API objects, right? When you run that, where are you saving them? You need to have your own controller take care of those. Here, we are talking about saving that in an API object status. So that's the concern.
D: Then your controller needs to save it somewhere, right? So that's what I'm asking: what are you comparing with? With the status update we're talking about here, if this is part of a status in an API object, then there are concerns from the API update point of view.
G: Hey, just want to intervene. This is Ali from Red Hat. I did a little deep dive into how kubectl implements log and exec, and from my study...
G: kubectl exec opens a bi-directional connection from the API server into the kubelet and into the container, so the API server just proxies that connection.
G: Yeah, so that's a valid question. All I'm saying is, to your point, it just opens a bi-directional connection, and it does not store any of that data.
D: So if you're using kubectl... what I'm saying is, we're talking about a scaling issue, right? When you're running that kubectl command, you're running one command, so you are logged into one container at a time. Here, we're talking about doing this programmatically: when we select a lot of pods with the selector, it could be thousands of pods, and we're running that all at one time. That's the concern.
D
We
need
to
yeah,
but
we
need
to
get
the
results
right,
but
you
can
yeah,
so
you
can
achieve
all
of
this
by
using
cube
cuddle.
Yes,
but
then
you
still
have
to
figure
out
like
how
to
store
that
yourself
right
so
right
today,
I
think,
like
the
backup
vendors,
some
some
of
them
they
well,
I
think
most
of
them
should
be
already
doing
this,
so
they
have
to
handle
that
themselves.
Maybe
actually
we
should
probably
ask
someone
to
answer
this
one
other
than
we
talk
about
it.
D: I'm just trying to understand. What I'm trying to say is that the problem is that potentially you could have thousands of pods, so I think the concern is scalability. I think that problem probably already exists today: if you're doing a quiesce and you actually have to quiesce thousands of pods, you also have that problem.
D: So maybe, Tom, since you guys are already doing this, I think it might be better for you to actually give an example, because otherwise maybe we're going in different directions when it's actually the same, similar problem.
H: Yeah, so the way we tackle this is: the kubectl exec libraries return three values, stdout, stderr, and an error code. If there's an error code, we treat it...
H: ...then we update the status in whatever CRD or API object we return, and the actual stderr and stdout we log in the controller, because it doesn't actually show up in the pod log itself.
D: Yeah, so...
A: If we're making this generic rather than only solving the application-consistency problem... I don't think scalability is really a concern there, because, as Tom was suggesting, normally you won't have over, let's say, a dozen pods for a specific application that we need to quiesce. But if we're talking about signals, then there is a good chance we will reach the thousands; we're getting into the thousands kind of domain.
H: For the signal case, you won't have that much output being sent back, though, right? I mean, you're essentially just running a single command that won't have too much output. Unless... what output are you thinking you'll have to collect from sending a signal to a thousand pods?
A: And I think this is a valid concern in many cases, so let's see whether we have some kind of solution and whether we can solve this problem. But I want to discuss with Ben offline a little bit to understand where his idea is, because I'm interested to learn about that. Ben, I'll ping you later on.
B: I was trying to figure out if there was something else that we could leverage, and why this feels like a new problem when, fundamentally, what we're doing is something that people already do. I think we understand now: it's because we're not in a position to directly stream the results of these commands, so someone has to buffer them somewhere, and that's where the problem comes from. Yeah.
A: Okay, that's the first attempt, so we collected this feedback. Did I talk about the other feedback we have seen? I think so, right.
I: Back to the previous discussion, I think basically the problem here is that this has the potential for people to use it however they want. So outside of, you know, the snapshot use case and quiescing, people can use this like Ansible, just to run commands on a bunch of pods, which can be more frequent than running application-consistent snapshots. As far as scalability, I think it's the same as the Kubernetes model, where people just write custom controllers in their pods, and they're watching and updating CRs.
I: So I think what this enables is that people don't have to write custom controllers. They can just abuse this mechanism to run commands across a bunch of pods, kind of like Ansible, where you just unleash a command on a bunch of nodes. This is where scalability can get affected.
A: They will also be the ones who label the pods the way they want, so that the correct pods are selected, and they will also be the ones defining the execution hook command. So if this were purely an application-consistent snapshot use case, then I would be less concerned about the scalability, but extending this in a more generic way opens the door, at least.
A
Oh,
can
you
repeat
the
question?
Okay
in.
A
This
proposal
brings
that
concern
because
kubernetes
is
already
kind
of
busy,
so
it
is
possible
that
many
parts
are
scheduled
on
a
note
so
having
coupled
to
run
all
these
notifications
in
once,
you
may
may
I'm
saying
not
necessarily
for
sure
but
may
bring
them.
You
know
too
much
burden
to
the
cuprate.
A
So
we
discussed
it
and
shin,
and
I
we
had
a
couple
of
rounds
of
discussion
with
the
api,
reviewers,
etc,
and
then
we
come
up
with
the
second
attempt,
so
the
container
notify
paste
doesn't
change,
so
it's
still
gonna
be
in
line,
but
instead
we
introduce
this
part
notifications
back
as
ben
was
suggesting
just
one
to
one
mapping
right
and
this
part
notification
status
will
contain
container
notification
status
again.
A: This is for user-friendly cases, especially for signals: you don't want to manually send every single signal to every single pod using a PodNotification. Defining it at this level means it can actually scale, and high-level controllers may utilize this one. But on top of that, we add a couple of things, one or two things mainly. One is this whole policy field: it defines how the pod selector should cut the scope. Right now there are two possibilities. One is all pods.
A
Basically,
the
the
controller
will
keep
on
reconcile
on
this
notification
object
if
a
new
part
get
added,
it
will
be
claimed
by
the
controller
under
the
old
parts
policy
and
getting
a
part
notification
will
be
created
for
that
part,
and
the
other
policy,
for
example,
is
called
pre-existing
parts
only.
A
It
means
that
at
the
creation
time
of
your
notification,
this
is
the
time
that
you
market
in
the
api
server
for
your
notification
object,
use
that
timestamp
to
find
any
existing
parts
that
is
up
and
running
which
have
which
have
a
smaller
timestamp
than
the
notification
creation.
Time
way,
any
newly
created
parts
after
the
creation
of
the
notification
will
not
have
a
pod
notification
created.
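The two policies can be sketched minimally like this; the policy names follow the discussion, and the comparison logic is an assumption about the intended semantics:

```python
from datetime import datetime, timedelta

def select_pods(pods, notification_created_at, policy):
    """pods: list of (name, creation_timestamp) tuples."""
    if policy == "AllPods":
        # Every matching pod, including pods created later, gets claimed
        # as the controller keeps reconciling.
        return [name for name, _ in pods]
    if policy == "PreExistingPodsOnly":
        # Only pods whose creation timestamp is smaller than the
        # notification's own creation timestamp are notified.
        return [name for name, ts in pods if ts < notification_created_at]
    raise ValueError(f"unknown policy: {policy}")

now = datetime(2021, 3, 24, 10, 0, 0)
pods = [("old-pod", now - timedelta(minutes=5)),
        ("new-pod", now + timedelta(minutes=5))]
# Under PreExistingPodsOnly, the pod created after the notification is skipped.
```

This also explains the completion-time point made later: AllPods never terminates, so only PreExistingPodsOnly has a finite set of pods to finish.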
A: The other field we add is called parallelism. If you're familiar with the Jobs or batch Jobs API, you will find a similar parallelism mechanism there. This is mainly targeted at preventing users from creating a bunch of jobs, let's say a thousand jobs, to be kicked off at the same time, which would crash the system.
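The effect of a parallelism bound can be sketched with a semaphore: at most `parallelism` notifications are in flight at once. This is only a sketch of the idea, not the kubelet or controller implementation:

```python
import threading

def notify_all(pods, send, parallelism):
    """Fan a notification out to `pods`, with at most `parallelism` in flight."""
    sem = threading.Semaphore(parallelism)
    results = {}
    lock = threading.Lock()

    def worker(pod):
        with sem:  # blocks while `parallelism` sends are already running
            r = send(pod)
        with lock:
            results[pod] = r

    threads = [threading.Thread(target=worker, args=(p,)) for p in pods]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# e.g. notify 100 pods but never run more than 10 sends at once:
out = notify_all([f"pod-{i}" for i in range(100)], lambda p: "ok", parallelism=10)
```

As discussed below, this bounds the load only if the user picks a sensible value; nothing in the mechanism itself stops an arbitrarily large setting.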
A: Okay, a question about deleted pods: there are two things to think about.
A
One
thing
is
that,
if
you
think
about
in
this
mechanic,
then
the
that,
if
a
part
has
been
deleted
before
the
notification
is
created,
so
it's
not
up
and
running
there's
no
problem.
A: Either the pod notification will fail or it will succeed; the result should be deterministic. But the problem is in the paired-execution case, where the first pod notification, the quiesce, actually succeeded, let's say using fsfreeze, and before the second one is sent, the pod gets deleted. Then your file system may be left frozen. In this case, I have to be honest with you: we don't have a solution for that.
D: Yeah, so that's easy then: we should just fail. Well, depending on what we're doing; if we are doing a quiesce, then we should just fail that quiesce, right? Because otherwise, what's the point?
A: No, it is possible that at quiesce time the pod is still up and running, right? And then imagine a higher-level controller says, "Okay, now quiesce my application." "Okay, I executed it; you go ahead and do your volume snapshot or volume backup or your application backup, whatever it is." That takes some time, right? Before you execute the unquiesce against the pod, it could be deleted.
F: I think what I was getting at is: are we going to take ownership of the pod, then, when we start running notifiers?
A
It's
not
only
overhead,
but
also,
how
do
we
know
there's
an
upcoming
comment
right,
we're
talking
about
the
commands
in
pairs
right,
which
will
have
this
kind
of
you
know,
split
brain
problem
where
the
universe
is
not
is
different.
A
In
that
case,
I
can
imagine
the
the
the
way
the
a
viable
approach,
upper
level
controller
creates
and
notification
sees
a
notification
object
and
it
creates
pod
notifications
right
and
a
part
notification
to
parts
are
one-to-one
mapping
they're
one-to-one
mapping.
So
that's
can.
Let's
say
this
is
a
choice
and
when
you
do
an
inquiries,
you
should
be
expecting
exactly
the
same
universe
of
your
quiet
comment.
A: Did I answer your question, Shirley? I think so, yeah. Okay, the last piece is that the notification status will no longer hold a giant list of pod notification and container notification results; rather, it offers high-level aggregates. The completion time and start time are both kind of optional in this case: the completion time will only be marked if the policy is pre-existing pods only. Imagine if it's all pods: you will never have a completion time, basically.
A
Okay,
that's
basically
the
current
ap
as
the
second
attempt
of
this.
The
second
attempt.
C: We never record a completion time then. But does that mean that, in effect, it is a notification policy that's in effect forever, so that for any new pods that come in, that notification spec will immediately take effect?
A: That's correct. By saying all pods, until the notification is deleted, any newly added pods which match the label selector will be notified.
I: So one question: in phase one, we talked about the pod notification object, which is used to basically specify the pods we want to notify, and then, in this phase, we're adding a new abstraction called notification, on top of pod notification, to cover different pods, right?
A: No, there's a key difference here. The first difference is this whole notification...
The controller logic will not be in the kubelet, right? So in this case, the kubelet will not be the one selecting the pods to be executed against; that's what the other one did. This at least somehow shifts the concern from SIG Node to, more or less, an upper-level controller, wherever that lives; I don't know yet. That's one. The second one is that aggregated results are recorded at the upper level instead of the detailed ones.
A: You know, whether users shoot themselves in the foot, or not so much hurt themselves as intentionally try to break the system, versus whether we provide a mechanism to prevent that from happening, right? The parallelism mechanism is there to prevent it from happening.
A: You can always find a way to just screw it up, I guess.
I: I mean, I guess the way we currently control that is that we have quotas per namespace, so we can, for example, prevent the user from creating too many pods or too many PVCs, or from launching a denial-of-service attack. But here, if users can set the parallelism field to arbitrarily large values, then nothing can stop them.
I: I mean, basically we're just hoping the other quotas would kick in; for example, the pod quota would kick in and prevent it, you know. Okay, that's...
A: But fundamentally you're right: yes, parallelism only gives them the mechanism; it doesn't prevent abuse.
A: You are absolutely right. So we imagine this to have multiple use cases. For the application controller, most likely it is reasonable to use the pod notification directly, so that they can control the universe of pods: you have this universe for the quiesce, and the corresponding universe for the unquiesce.
A
They
don't
necessarily
need
to
rely
on
the
notification
to
achieve
that,
and
the
notification,
then,
is
a
more
suitable
way
of
kind
of
selecting
large,
relatively
large
scope
of
parts
to
run,
let's
say
a
command
or
signal,
as
in
the
signal
et
cetera-
and
this
is
more
like
a
user-friendly
thing
again
to
to
the
the
point
that
previously
brought
about
by
shirley.
A
This
is
this
is
basically
for
for
the
application,
consistent
snapshot,
use
cases.
I
think
it's
still
better,
that
the
application
controller
can
have
fun
granted
control
over
which
parts
to
notify
and
which
part
to
not
nullify.
B: I had a question: obviously the notifiers have to be at the container level, because different containers might want to run different commands in response to a notification.
B: Have we thought through whether notifications should always be at the pod level, or whether it should be valid to notify just one container of a pod? How do we decide that granularity, whether to do it at the pod or the container level?
A: First of all, the command has to be executed against a specific container, right? That's one. So defining the notifiers at the container level makes sense, right? Yes, yes. So your question is...
A: Okay, let me give you an example why. First of all, there could be multiple containers within the pod, right? They do different things.
D: In the KEP for the container notifier, we actually have that explained: the notifier is triggered by the name of the container notifier. So it's basically just based on that name. If you...
D: Just go to the KEP itself, go to your files, go to the KEP itself, where it explains what is in the container notifier. That part over there is the API definition; actually, we have some explanation there.
D: Right, so this is a container, and we're adding this kind of notifier. In each container notifier we have this name, so basically these container notifiers in a container must have a unique name. And then you move down a little bit; I think it's...
D: Okay, so maybe you can propose or suggest how you would want to differentiate that.
C: ...the two different containers, from just using two different notification names. If the intent is for a notification to be delivered to a specific container for a specific reason, does it make sense that both containers would use the same name for the same reason?
B: Right, there are both use cases. You may want the ability to just broadcast a message to all of your containers and all of your pods, in which case the convenient thing to do is define your notification with the same name on all of your containers. But then you may have a different use case where you want to just do a one-off, and the fact that you can't target one container forces you to make a second notification that runs the same command with a different name, or something lame like that.
F: Yeah, I also have another question while we're talking about this. If I have three containers, and each container has a notifier with the same name, I would create a pod notification spec that says "go run this notifier in that pod", and I would just get a success or failure.
A: No problem: you can see here that the status is a container notification status. But Ben's point, I think, is a very, very important one. The typical case I can think of is that you have multiple containers in the pod; maybe one of them is the main container you don't want to mess with, but you want to adjust log levels in the other containers.
A: All right, we are close to the 58-minute mark, so we probably won't have enough time to discuss the white paper item. I guess we'll just skip it for now.
A: All right, any questions?
A: That's a good question. From my side, I'm pushing API reviewers to take a look at it, because this involves a lot of change. Xing and I already got at least a bare-minimum consensus from most of the major reviewers.
A: So this is slightly off the schedule.
A: Yeah, I have to repeat what I was saying: you know, this KEP has been there since January or something, right? Yeah.