From YouTube: WG-Data Protection 20200408
A
So today we have a couple of things on the agenda. Rafael will talk about backup with the different Kubernetes API versions; this is something he started to talk about in the last meeting, and today he has some slides to share. Then next, Ashish will talk about some issues that he ran into while trying to use CSI snapshots to do backups, and then I will give an update on the execution hook design. All right, I'll stop sharing. Rafael, do you want to share yours? Absolutely.
A
Hello. I don't see an option to hand it over to you, but I think you should be able to share once I stop sharing. All right, see if you can. Well, maybe they did something to make this tricky, but I do not see an option to enable sharing. Hold on, why don't you try it? If you cannot, then I will see.
B
So, first of all, thank you, everyone. Very quickly: my name is Rafael Brito, and I work for the VMware Office of the CTO. Recently I've been working closely with the Velero team (you know, Carlisia, Nolan, Steve, Ashish), and I've been involved in this project to make Kubernetes migration easier; that's why we are here today, to discuss API groups. In the beginning I'm going to go through a little bit of basic information, and apologies for that; it's just to make sure everybody is on the same page in terms of knowledge.
B
The autoscaler is a great example for API groups. Each item or object is a kind. Next, please. The collection of a kind is a resource list; so, you know, kubectl get pods is going to get all the pods, which all have the type Pod. And collections of resource lists are grouped into an API group, which is group/version. An example is autoscaling: it is an API group which the HorizontalPodAutoscaler is part of. Next, please. So, to make a very long story short, the dependency tree is like this.
B
A HorizontalPodAutoscaler belongs to the resource list horizontalpodautoscalers, which belongs to autoscaling/v1; that's one example. Next, please. Continuing on: with the introduction of API groups, Kubernetes is evolving very rapidly, right, and as new object schemas are rolled out, they are grouped and rolled out as new API groups. Next, please.
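As a rough sketch of that kind, resource list, and group/version chain (these commands are illustrative; the output depends on the cluster):

```
kubectl get pods                          # the resource list "pods" collects objects of kind Pod (core group)
kubectl get horizontalpodautoscalers      # a resource list in the autoscaling API group
kubectl api-resources | grep autoscaling  # shows the group/version each resource belongs to
```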
B
Next. So there are two types of API groups. The native Kubernetes ones are baked into the API server, and they are the usual suspects: autoscaling, policy, scheduling, apps, and so on. I took a quick look at the 1.18 API server source code, and there are around 20 API groups. There are some special cases among API groups. Next, please. The core API group holds the fundamental objects of Kubernetes, there since version 1 or even before: Pod, ServiceAccount, PersistentVolume. And the funny thing is...
B
...this group does not have an actual name: when you invoke or request API information about this group, it is referred to as core. Next, please. There are other situations where a resource belongs to multiple API groups, to be backward compatible; Deployments, for example, belong to both apps and extensions. Next, please. And finally there are the non-native Kubernetes API groups, the ones that are extensible by the user using CRDs; those users can create their own versions and roll out their own versions. Next, please.
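A minimal sketch of such a user-defined API group, declared through a CRD; the group example.com and the kind Widget are made up for illustration:

```
kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com              # a brand-new, non-native API group
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
  - name: v1alpha1                # the user rolls out their own versions...
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object}
  - name: v1                      # ...and chooses which one is stored
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object}
EOF
```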
B
Thanks. Okay, this is a very simple picture of what we are talking about. So autoscaling is a native Kubernetes group. This is a 1.16 cluster that I took a snapshot of; I ran this command line, and you can see that there are three versions for this API group, v1, v2beta1 and v2beta2, and the preferred version is v1. Okay.
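Roughly the command-line check being described (output abridged; the exact list depends on the cluster and version):

```
kubectl api-versions | grep autoscaling
# autoscaling/v1
# autoscaling/v2beta1
# autoscaling/v2beta2
```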
B
This shows the changes of preferred version across major Kubernetes versions. I'm going to apologize, because the columns go from the most recent to the oldest, but you can see that between 1.15 and 1.16 we got a change on apiextensions, and scheduling changed between 1.13 and 1.14. And every time the preferred version changes, there are some slight changes in the object schema; we're going to talk about this next. Thanks.
B
So that means that if you try to create an object using the old schema, the API server is going to reject it. That's another difference we ran into across the versions. Next, please. So here we go: why do we need to care about this, right? That's the problem statement. You know, different versions of Kubernetes support different API groups, and we remove them (when I say we, I mean the community).
B
This is very important when you start to think about how to migrate workloads from one major Kubernetes version to another while skipping multiple releases. That's very common. I came from the enterprise world, and you'd be surprised how old the Kubernetes clusters out there in production are. So if you need to migrate to a new version, you can end up skipping multiple releases. Next, please. So project Velero does migration; to be very clear, there are pages of documentation on how to do it. But today Velero only backs up the preferred version of each API group on the cluster.
B
Okay, next, please. So the motivation for us, for my team, is really to develop a mechanism where you first take a backup on any cluster and restore it on any other cluster, regardless of the Kubernetes version, transparently, without the user being punished. Next, please. So, very quickly, I want to show how the preferred version works.
B
HPA v1, because it is the preferred version, is what is stored in etcd. Next. So when the user requests v2beta2, there is a webhook (next, please), and there is a conversion from v1 to v2beta2 (next), and finally it gets back to the user. So that's how the magic happens, the conversion between the versions. And I've got to tell you, to be very honest, it was very difficult to understand how this happens.
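A sketch of asking the API server for two versions of the same object; the object name my-hpa is made up, and the server converts from the stored version on the fly:

```
kubectl get horizontalpodautoscalers.v1.autoscaling my-hpa -o yaml
kubectl get horizontalpodautoscalers.v2beta2.autoscaling my-hpa -o yaml
```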
B
Here's the diff that I created, you know, for people that are used to HPA: v2beta2 has much more instrumentation for how you do autoscaling, such as packets per second, and the v1 version doesn't know what that metric is about. Next, please. And the conversion from one to the other happens through the webhooks in the API server. Next, please. So how are we doing in terms of the Velero backup? Next. So today, in version 1.3, we take a backup of the preferred version.
B
So we continue to have the same directory structure, to be compatible, and now we have different directories, one for each version of the object; and it is the same object. For the version that is preferred during backup time, we add the string "-preferredversion". So basically it is the same object, with different schemas, in the different versions.
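An illustrative layout (not verbatim from the Velero docs) of what that can look like inside a backup:

```
# resources/horizontalpodautoscalers.autoscaling/
#   v1-preferredversion/namespaces/default/my-hpa.json
#   v2beta1/namespaces/default/my-hpa.json
#   v2beta2/namespaces/default/my-hpa.json
```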
B
We are in discussions about how we're going to make the decision on which version we'll use on restore. Next. It's up for discussion in the community; if you're a Velero user, I really recommend you join the community calls on Tuesdays, I think at 11:00 Pacific time. That's where we discuss all these kinds of good stuff. So one of the proposals is to really detect that the restore is being done on a different version of Kubernetes, and then use the discovery API to find out.
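One way to ask the discovery API which versions a group serves and which one is preferred (output abridged):

```
kubectl get --raw /apis/autoscaling
# {"kind":"APIGroup","name":"autoscaling",
#  "versions":[{"groupVersion":"autoscaling/v1","version":"v1"}, ...],
#  "preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}}
```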
B
For that, we've got to remember that this whole point is to really prepare for the future. Right, today we have 20 API groups as part of the API server; if you install any other application that extends Kubernetes, we are quickly talking about dozens of API groups per user, and we've got to be prepared to have all these multiple versions across clusters, and, of course, to enable the possibility to really do skip-level Kubernetes migrations. Next. And that's pretty much it.
B
I'm not aware of any particular problem like this, because luckily the preferred version doesn't change a lot across releases, but I think the tricky part is when you start to really handle CRDs, where you don't have tight control over what the user is installing. That's my take. I don't know if Ashish or anyone else from the Velero team wants to jump in and give their opinion.
D
They were using v2beta2, so I think that might be a situation where Velero's not doing what the user wanted, where it needs those fields that are only present on v2beta2. But I'm not aware of other situations like that; that's not to say that they aren't out there.
B
The one example that we saw was someone who took a backup on Kubernetes 1.7.1 and restored it on 1.17, and the API group version was not a match. You can argue that 1.7 is very old, but basically I think, with more and more users of Kubernetes, we've got to cater for this kind of situation.
F
Yeah, I think, you know, you're looking right now at people migrating, but there's also the archive case, and we might want to start thinking about what is a reasonable thing to do for archival. You know, are you expecting that your archive is actually going to restore on a Kubernetes cluster, or do you want to start saying, hey, your data's going to be archived, but other stuff has to be upgraded? I think that's something we should probably think about.
B
I agree with what Nolan said: it might happen some, but to our knowledge we didn't see this happening, right. And what's missing for you today is being handled by the API server; the API server is lifting the weight here, converting all those versions. Even though, at a given point, the API server does this conversion on the source cluster, Velero then takes a snapshot of that version, right.
B
So if you get the v1 version of the same object, all the new fields of the new schema are baked in as annotations; the API server does this magic. So there's nothing that Velero can do to tell the API server how to convert back and forth. What Velero can do is really take one copy of the same object in each different format.
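A hedged illustration of that round-tripping: when an HPA configured through v2beta2 is read back as v1, the fields that v1 has no schema for show up under an annotation (name and contents abridged):

```
kubectl get horizontalpodautoscalers.v1.autoscaling my-hpa -o yaml
# apiVersion: autoscaling/v1
# kind: HorizontalPodAutoscaler
# metadata:
#   name: my-hpa
#   annotations:
#     autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Pods",...}]'
```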
D
So, for the longer-term view: for native Kubernetes objects, when these webhooks start going away, for example with apps, I think, Rafael, you talked about having a few folks on your team working on plugins for Velero that would essentially do the same work as these webhooks.
D
So I don't know exactly how you're going to do that, and I think, like this slide that's on screen right now, a lot of the restore logic is up in the air at the moment. The work you've done so far is to make sure we have the information that's in the cluster right now in the Velero backup, and then we're going to figure out what we do with that on restore: whether we make a bunch of plugins for certain things to allow it to do the skip versions, or, for close enough versions, we might be able to just say, hey, upgrade the version and here you go. But, keying off of what Dave said, if we start doing archival, or on a long enough time horizon, Velero may have to carry plugins that do some of the webhook logic, to say: hey, I see you have this version, and therefore I need to do this projection to insert these fields and make the assumptions that the webhooks were making.
D
I know early on in this project, Rafael, you talked about having a group of people looking at those webhooks, maybe porting some of that logic to happen elsewhere, so that we could go with it. But I think some of these questions are still to be answered; I don't know that we have all those answers yet.
D
What I mean by that is: take apps/v1beta1. In Kubernetes 1.16, that version was removed from the API group, so you have to use apps/v1 for your Deployments. Eventually, Kubernetes will remove versions from API groups, because they just don't want to maintain them anymore.
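Sketching the forced bump (a detail worth knowing: apps/v1 also makes spec.selector mandatory, which the beta versions used to default):

```
# rejected on Kubernetes 1.16+:      what you must submit instead:
#   apiVersion: apps/v1beta1           apiVersion: apps/v1
#   kind: Deployment                   kind: Deployment
```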
D
Right: it's to give the backups that Velero generates a longer window, to allow people to jump versions. If, for example, you had an apps/v1beta1 Deployment, you could deploy it to, as an example, Kubernetes 1.22, by having some sort of way of translating that without the webhook being inside the Kubernetes API server.
B
From my observation, that's why, when you upgrade Kubernetes, you usually want to do incremental upgrades: the next minor version (when I say minor, 1.x.x is considered minor, right, and 1 is considered major) will support all the previous preferred versions when you do these upgrades. Okay, the complication starts to happen when you're running so many releases behind that the preferred version on that old release is no longer supported on whatever cluster you are going to.
B
And in reality, as I came from the enterprise world: when you have something in production, you cannot just upgrade every three months, right. And that's a challenge that the Office of the CTO, the group that I work for at VMware, is strategically trying to solve: users should be able to move those workloads without the operational process where, every three months, you have to stop your workloads and upgrade. It's just not feasible for many, many workloads out there.
J
For the APIs, there's actually an internal schema that is not versioned; that is what's actually stored in etcd. And so that could differ (I don't know of an example where it differs from preferred), but they do have the flexibility of adding things in there that are not in the preferred version. For example, if they're supporting, say, v1 as well as v2beta-something, then there could be extra information in there that would support both the v1 and the v2beta-something.
B
I'll be honest: taking the multiple copies of the same object was almost like one API change on the discovery side. The real work is when you make the decision about which one to use, okay. And that's why I cannot stress enough: if you guys are interested, join the community call where we discuss all of this.
E
What we are trying to do is build capabilities into Velero to perform backup and restore of CSI-backed volumes. So, leveraging the CSI APIs and types, we want to perform volume snapshot operations and volume restore operations for volumes backed by CSI drivers; that's the use case that we're trying to solve. In doing that, there were a couple of issues that we ran into, around the relationship between the VolumeSnapshotContent object and the VolumeSnapshot object.
E
The relationship between the VolumeSnapshot and the VolumeSnapshotContent is one-to-one, and there are two kinds of scenarios for volume snapshotting. One is dynamically created volume snapshots, which represents the scenario where there's a CSI-backed volume and you are trying to take a snapshot, at which point you create a VolumeSnapshot object, which in turn creates a VolumeSnapshotContent object.
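A minimal sketch of that dynamic case, assuming the snapshot.storage.k8s.io/v1beta1 API that was current at the time (the class and PVC names are made up):

```
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snap
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-pvc   # snapshot an existing CSI-backed PVC
EOF
# The snapshot controller then creates a VolumeSnapshotContent object bound
# one-to-one to this VolumeSnapshot.
```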
E
So that's the dynamically created volume snapshot. The other scenario is where you want to create a volume from a volume snapshot that already exists. That is where you would statically bind a VolumeSnapshot to a VolumeSnapshotContent and say, I'll give it to the CSI API to create a volume off of that snapshot. So one of the issues that I ran into was that I wanted to reuse the dynamically created VolumeSnapshotContent for provisioning and pre-populating data into a new volume, and it turns out that doing that violates some of the assumptions that were baked into the design of the volume snapshot objects. The right way to do it would be to explicitly create a VolumeSnapshotContent, to statically bind the VolumeSnapshot and the VolumeSnapshotContent. I'm sorry, I should have drawn pictures, because that would make this a little easier to follow, so I apologize for that.
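In lieu of a picture, a hedged sketch of that static binding; the driver name and snapshot handle are made up:

```
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: my-content
spec:
  driver: csi.example.com
  deletionPolicy: Retain
  source:
    snapshotHandle: snap-0123        # ID of an existing snapshot in the backend
  volumeSnapshotRef:                 # the static one-to-one binding
    name: my-restored-snap
    namespace: default
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-restored-snap
  namespace: default
spec:
  source:
    volumeSnapshotContentName: my-content
EOF
```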
E
So that was one of the issues, and that was the workaround for that problem. The other problem was the relationship between the VolumeSnapshot and the VolumeSnapshotContent, which is one-to-one. I wanted to know if that could be one-to-many, with many VolumeSnapshots referring to the same VolumeSnapshotContent; that way you could have multiple volumes provisioned using the same VolumeSnapshotContent. That also was a non-supported scenario, and the workaround, or the right way to do it, would be to create multiple VolumeSnapshotContents using the same snapshot handle. So you don't need multiple snapshots at the storage-provider level; you just need multiple VolumeSnapshotContents at the Kubernetes level, and then, instead of having one-to-many, you have many one-to-one mappings. So that was the workaround. The other issue that I opened was around the lifecycle of the VolumeSnapshotContents.
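The shape of that workaround, sketched as comments (the handle is reused from the example above):

```
# VolumeSnapshotContent "content-a"   source.snapshotHandle: snap-0123
# VolumeSnapshotContent "content-b"   source.snapshotHandle: snap-0123
# Each content is bound to its own VolumeSnapshot: one physical snapshot in
# the backend, many one-to-one mappings in Kubernetes.
```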
E
So for volume snapshots there are two kinds of retention policies; yeah, they're called deletion policies. What the policy basically defines is whether or not you want to retain the VolumeSnapshotContent when the VolumeSnapshot goes away. So the issue that I opened was: if somebody accidentally deletes their VolumeSnapshot object, the corresponding VolumeSnapshotContent that was created for that VolumeSnapshot object also goes away on deletion, which means the snapshot that was taken is not useful beyond that point. You cannot restore using a previously taken volume snapshot if you accidentally delete the VolumeSnapshot object. So I wanted to decouple the lifecycle of the snapshot in the storage provider from that of those Kubernetes objects. That's the issue that is still open, and there has been some discussion on it. I'm trying to pull up the issue number. Can you...?
F
I mean, there's a problem that I think exists. Say, for example, you back up cluster A, and that took a set of snapshots, and it continues to have the Kubernetes snapshot resources there, right. You make a new cluster and you restore into that, and now you create a new set of snapshot resources that point to the same snapshots.
F
Say you're just doing this for testing, and you delete your snapshot resources in cluster B. If you didn't change your policy, if your policy was Delete in cluster A and you set the same policy in cluster B, cluster B is going to delete the snapshots, even though cluster A is still referencing them, right?
E
That's the problem that I was trying to surface. The proposal was: if we decouple the lifecycle management of the volume snapshot in the storage provider from that of the Kubernetes objects, you could have a policy saying, hey, if there's a volume snapshot that is not referenced by, or not being used by, anything, keep it around for, say, 10 days (pulling a number out of a hat), and delete it afterwards, or something like that, instead of immediately deleting it. Yeah.
K
Like, so if I had that problem, my recommendation would be to just always use the Retain deletion policy on all of my Kubernetes clusters and make deletion somebody else's problem, right. That way you never have an issue where Kubernetes is going to delete a snapshot; instead, something else has to be responsible for monitoring all the snapshots, figuring out who has references to them, and then actually deleting them at the appropriate times.
K
It depends what you're restoring. If you're restoring a backup of your Kubernetes cluster, then you'll get copies of the Kubernetes objects with deletion policies of Delete, and then, in the new Kubernetes cluster, you now have two references to the same snapshot. That's the risk he's describing, yeah.
I
I fully agree with what Pam was saying, right. This is not something with an easy API-level solution, and even the proposal to introduce a TTL into the CSI driver is not going to solve the fundamental problem of multi-referencing, right. The same thing still stands.
F
Yeah, and you know that's just going to create more orphans, because the TTL is dependent on the CSI driver continuing to run. So if you want to take a cluster down and get rid of all of its stuff, you delete everything, you delete all the snapshots, and then you take the cluster down; the CSI driver now no longer actually does the TTL work, right.
A
Yeah, so then you have to go delete it physically from your storage system, right. I don't know if you have tried this, but after you delete a VolumeSnapshot with a Retain policy, the VolumeSnapshotContent and the physical resource still exist; and then you go ahead and delete the content, and the content object is gone, but the physical snapshot still exists. So...
A
To give a quick update on the execution hook design: we basically talked to Tim about this and got some feedback from him. He was suggesting that we should let the kubelet handle the execution of the hook. I think that's actually in sync with what we have in the main proposal, but he was suggesting to have an inline hook definition in the pod and then have a separate hook-action API object, so to have those separate; in our main proposal, I think, everything's together. And then the external controller would trigger it by modifying the hook action to tell the kubelet to go run a command inside a container. And he also wants this to be usable not just for quiesce and unquiesce, but also for some other use cases where you need to send a signal to the pod to do something, to run some command. So we need to come up with an updated proposal.
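Purely as a hypothetical sketch of the separate hook-action object being discussed (this API does not exist yet; every name below is invented to illustrate the shape of the idea):

```
# apiVersion: hooks.example.k8s.io/v1alpha1
# kind: HookAction
# metadata:
#   name: freeze-db
# spec:
#   podName: my-db-0            # which pod the kubelet should exec into
#   container: db
#   command: ["/bin/sh", "-c", "fsfreeze --freeze /data"]
#
# An external controller would create or modify such an object to tell the
# kubelet to run the command inside the container, e.g. quiesce before a
# volume snapshot and unquiesce after.
```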