From YouTube: Kubernetes SIG Service Catalog 2018-07-16
Description
- F2F action item review
- Investigate changes to the chart process
- Prow: review of TODO issue and call to action
- PRs must have docs
- API compatibility requirements
- Staged delete explanation and demonstration
B: Welcome to the July 16th Service Catalog SIG meeting. If you could put your name into our agenda (I'll paste it in chat), it lets everyone know that you were here, even if you're not an official member; you're totally welcome. If there's something you'd like to talk about, everyone should be able to edit the agenda: just put your name with a new bullet point and whatever you'd like to chat about, and we will try to get to everything today. Otherwise, if you have a question...
B: I can't use words today; help people get a word in edgewise, okay. So, the first thing on our agenda: last week we all met in Sunnyvale, California, hung out at Google (and made Google very mad at Scott), and came up with a couple of decisions and action items based on things that we'd like to address over the next three months. I want to just roll through that quickly.
B: The first thing we talked about was the charter, our SIG charter for SIG Service Catalog. It's in review by the steering committee; hopefully they'll make a decision about it this Wednesday. I had a couple of action items that I'm going to knock off today, related to tweaking it so that any maintainer can nominate another maintainer, instead of requiring that only chairs can nominate maintainers. That should make it easier for more people to help bring in new contributors.
B: I'm also going to add a little bit of text to explain how to go from being a contributor to a maintainer. Maybe it's just linking people to existing docs, or just making it a little more clear, so that if someone comes in and didn't get the Carolyn welcome wagon, they still know how to jump in.
B: Everyone should have reviewed the current charter by now. If you have feelings about it or need something changed, please say so soon, before the steering committee does their final "looks good to me", hopefully this Wednesday. It's been hard to get their attention, so let's not waste the slot.
B: We will still be able to change it afterwards, yeah; it's just that they're reviewing a lot of charters at the moment and it took a while to get on their schedule. So if we immediately want to turn something around and change it, like, a week later, they may not be able to do that quickly, because they're super busy looking at charters.
B: Yeah, thanks; as Jonathan pointed out, they can't force us, they're not our dad. That's totally true, and that's what we decided: we're going to just move forward as though the charter has been accepted by them upstream, and follow the processes that we put out in the charter, so that we're not blocked at all. So that's what we're doing. Looking at more action items: I was going to look into whether there was still a problem...
B: ...where, like you said, you create an instance and you screw up the provisioning parameters, and then you turn around and fix it, like, five or ten minutes later. That's the reason why we have the update command. That may have been fixed a couple of releases ago, so I'm going to just make sure: do we still need that at all? We also had a wonderful discussion about zombies, which is still going on, by the way.
B: We discussed using CRDs, now that they've gotten a little further along. To be clear, what we want to do is prototype it, not immediately turn around and submit a PR and change everything right before 1.0. We'd like to look at it and figure out: have all of our roadblocks been removed, and do we have a clear path to moving to CRDs, off of having our own etcd cluster just for Service Catalog? I was going to make a couple of issues for that, essentially so that we can track that work.
B: One piece that was kind of interesting: Doug suggested that we could do a little refactoring right now, before 1.0, so that we have an easier migration going to CRDs and we don't have any disruption or more difficult things that we need to do in order to switch over to CRDs. That's something that I'm going to add to our roadmap, if possible. A little bit more on the 1.0 plan: Paul gave up and stopped saying that we had to have human-readable class and plan names for 1.0, so that got dropped, and Jonathan...
E: It wasn't really related to the release; I just noticed it because I started using the Helm chart to install the broker and use namespaced resources. As we were going through and implementing all this stuff, we added the feature gate to enable you, via Helm, to pass that flag to the API server, but we never added the necessary RBAC resources.
E: So, if you were deploying to an RBAC cluster, it wouldn't work, because you were trying to, you know, get things and create things, and there was no permission that allowed the service account that Service Catalog used to do those things. You ended up in a bad state where things weren't working. So I basically just added those new resources, and they're behind a conditional in the Helm chart.
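A minimal sketch of why the missing RBAC rules broke things (illustrative only, not the actual chart or API-server code; the rule contents here are hypothetical): RBAC is an allow-list, so any verb/resource pair that no rule covers is simply denied.

```python
# Illustrative RBAC check: a request is allowed only if some rule covers
# both its verb and its resource. Rule contents here are hypothetical.
def is_allowed(rules, verb, resource):
    return any(
        (verb in r["verbs"] or "*" in r["verbs"])
        and (resource in r["resources"] or "*" in r["resources"])
        for r in rules
    )

# Rules the chart originally granted: cluster-scoped resources only.
base_rules = [
    {"verbs": ["get", "list", "watch", "create"],
     "resources": ["clusterservicebrokers", "clusterserviceclasses"]},
]

# The fix: extra rules, added behind a conditional in the Helm chart,
# covering the namespaced variants of the resources.
namespaced_rules = base_rules + [
    {"verbs": ["get", "list", "watch", "create"],
     "resources": ["servicebrokers", "serviceclasses"]},
]
```

With only `base_rules`, `is_allowed(base_rules, "create", "servicebrokers")` is `False`, which is exactly the "no permission" failure described above; the conditional rules make it pass.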
E: They really should be separated, right? We should be able to fix issues with the Helm chart and use semantic versioning to do new releases of the Helm chart without having to do releases of Service Catalog itself. But our build process doesn't really allow that right now, and then there's kind of this ongoing problem where we don't really have permission to maintain where that Helm chart is published ourselves. So I thought it would be good for us to start discussing some alternatives.
E: Make sure that that process is healthy, and move to something more like what we're doing in publishing svcat binaries. We (Microsoft) can provide a storage account for that stuff, and we can integrate it into the build process the same way we are now, and then look at moving the Helm chart to be independently buildable and releasable, so we can avoid these kinds of situations in the future.
B: ...more active, like Scott or kibbles. Maybe it makes sense to have whoever's managing one of our storage accounts be active, because it really sucks that we can't publish fixes because of something like this. And then one suggestion I had, completely separate from whether we change anything: right now, the Helm repository URL that we tell people to use is specific to a hosting provider and a location. We have a website... sorry, we have a domain.
B: We have a domain, so it may be nice to have a vanity domain for our repositories so that, regardless of where it is, how it's implemented, or whether it moves into a different blob storage container or anything like that, it doesn't break people. As a 1.0 item, it'd be really nice to not have the exact storage path in whatever everyone is using, so that if we change it in the future at all, we're not breaking people, yeah.
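A toy sketch of the indirection being proposed (illustrative; the domain and backend URLs are made up): users write down one stable vanity URL, and only the mapping from that URL to the current backing store changes when hosting moves.

```python
# Illustrative vanity-domain indirection: users pin the stable URL; only
# the vanity -> backend mapping changes when the hosting provider does.
VANITY = "https://charts.example.org"  # hypothetical stable domain
backend = {VANITY: "https://storage-provider-a/charts"}

def resolve(url):
    """What a CNAME/redirect does conceptually: vanity -> current backend."""
    return backend.get(url, url)

user_config = VANITY  # what users configure once and never touch again

# Hosting moves to a different provider; users are unaffected.
backend[VANITY] = "https://storage-provider-b/charts"
```

After the move, `resolve(user_config)` already points at the new provider without any user changing their configuration, which is the whole point of the vanity domain.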
E: I think for 1.0 it's good to have that, plus, more generally, make CI better before GA, so that we can respond more quickly if we need to fix things. It's kind of okay now, because we're not 1.0; we're, like, an alpha/beta release, whatever, so people don't expect a lot of that. But I think once it becomes a 1.0 thing, we have some more expectations, like how quickly we'll be able to turn around fixes and fix critical things.
F: No, I'm just saying: whoever has that svcat domain, can we get, you know, a subdomain, or slash charts, or whatever the appropriate thing is (I don't know what the appropriate thing is), do a web redirect right now, and start advertising it that way, as soon as possible? It's the only thing I can think of that we can sort of start to do, and then, when it eventually redirects somewhere else, it's not really a problem, because we've already got that set up.
B: I can add an issue so we can talk about what that should be, unless we just want to decide that right now. Maybe like charts.svc-cat.io; there'd be a little bit more trailing after that, but the domain could point to Google storage.
B: Okay. Is it terribly distasteful to think about, in the future, either moving it to a storage account that you have control over, or a storage account on Azure that me and Jeremy have control over, or something like that? It's a little worrisome that it's hard to get releases out consistently; right now we're pretty much stuck on this. It happened for, what, two...?
B: Yeah. Is it the way we make the chart that's the problem, or is it just pushing it somewhere that's the problem?
B: Because I manage the chart for some other things: there's an index file, and then, like, a zip file basically, or tar gzip, and those need to go to some static hosting, wherever that is. Every time you publish a new version of a chart, that index file is updated, and then a new zip file is added for the version to this big, just, folder. Okay.
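A rough sketch of that publish step (illustrative; a real Helm repository keeps an `index.yaml` regenerated with `helm repo index`, and the chart name and filenames here are hypothetical): releasing version N means adding one new tarball to a flat folder and rewriting the single index file.

```python
# Illustrative model of a chart repository: one index mapping chart name
# to its released versions, plus one .tgz archive per version.
def publish(repo, name, version):
    """Add a new chart version: update the index, add the tarball."""
    repo["index"].setdefault(name, []).append(version)
    repo["files"].append(f"{name}-{version}.tgz")
    return repo

# Hypothetical repository state before and after a release.
repo = {"index": {"catalog": ["0.1.0"]}, "files": ["catalog-0.1.0.tgz"]}
publish(repo, "catalog", "0.1.1")
```

The key property, as described above, is that every release mutates exactly one shared file (the index) and appends one immutable archive, which is why write access to the hosting location is the bottleneck being discussed.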
B: Yeah, so I'll make that DNS change this afternoon, and then maybe Jeremy and I can take a look at Morgan's PR and see if anything jumps out at us on how to get that working. You know, we can try it out on a branch and see if we can get something to push.
F: The unit tests that I showed you last week... I'll see if I can find it and pull it up, but basically, you know, this is only the first step. I've got a jillion questions that I've written down in here about what we currently do and what else we need to do. The first round of this was trying to basically say: what do we want, what do we already have, and try to connect up what needs to connect to what.
F: That's the breakdown on that. What we have left is basically converting each of our Travis jobs into a Prow job, if that's the appropriate thing to do. I'm not a hundred percent sure whether that's the appropriate thing to do, or if we should have one giant Prow job; that was a discussion I was going to have at some point in the future.
B: So, Morgan, I was looking at it. One of your items was to keep using the built image throughout the pipeline; I'm working on that right now, and I'm going to make sure that there's an issue for it so we can track it against the Prow milestone. Are there other things in here that we can split out so that other people can help?
F: I mean, I think we can split a lot of that out and, like, do stuff, but I think it would be good to have one consistent conversation with somebody, so that we're not, you know, swarming them with a bunch of stuff, and so that we're all on the same page at the same time. Take a look through the questions, and, you know, maybe we can have a conversation where we discuss what our Prow jobs should look like.
B: I'm just trying to make sure that I totally get your point that we don't want all of us to individually hit test-infra. I was just wondering: is there any work that you would like to farm out? Are there things that we can do independent of communicating with test-infra that other people can help with, or is it all test-infra coordination?
F: I think it's mainly test-infra coordination. I think that, to prepare us, sort of figuring out a way to make sure everything is doable without Docker, because apparently they have Docker-in-Docker issues, or they see issues with Docker-in-Docker frequently. I think trying to make sure our build works without Docker would be a good start, as well as figuring out an image...
F: ...maybe the first job is, like, "set up the image", and then the rest of the jobs just use that image. That's my thinking. We don't change the build image very often, you know, so maybe we can get away with just manually pushing it every once in a while, but I would prefer to have as much automated as possible. Yeah.
D: Looking good, good job on it; let's get going on documenting it. And, I think, agreeing with Carolyn: you know, when you start getting frustrated here and you need some help, speak up, and I'm sure we can find someone, myself maybe, or somebody else, to help out. I appreciate what you do, and speak up when you're getting frustrated, yeah.
B: You know, Morgan, you made a great point at the face-to-face that really stuck with me, which is that our SIG has a bus-factor problem at times. In my opinion, if we switch to Prow, a lot of these things that are bus factors for us right now would be lessened, because we're not having to ask a certain person to handle Jenkins for us, for example; it will be managed infrastructure, not just a random person that we've kind of tricked into supporting it forever.
B: So I think that raises the importance of getting it done before 1.0, because honestly, I do expect that after 1.0 some people will, having accomplished their goal, switch to other things, and we may lose active people who may be in charge of some of our build bottlenecks. The more that's in Prow at that point, the better off our project will be.
B: Next up, I just wanted to put out a call to anyone who is a reviewer. We have managed to get ourselves into a situation where a number of features have gone in over the past few months and they have no documentation at all. No one knows how to use them; people aren't even aware that they're in Service Catalog anymore. Jeremy made a list of a couple, one second.
B: He sent it to me, and he's starting to make issues right now to break that out: secret transformations, catalog restrictions, and namespaced resources for broker, class, and plan. He is making issues for those. If you have information about these, you know how they work, you've used them before, we really need help with people writing the docs for this, so that we're not pulling it out of somewhere in the cloud. We need help writing those docs.
B: In the future, if you're looking at a feature PR and you're thinking "should I put an LGTM on it?", and you don't see user-facing docs explaining what the feature is and how to use it, please don't merge it. Ask people to add that as part of the PR, not as a follow-on, because it isn't happening at all otherwise. So, yeah.
B: I would actually put a hold on it or something; don't merge it, right? If you put "needs-docs"... we use that label at the moment to indicate that it's related to docs, so maybe the name of that label is unclear, and we can rework it. I really want to say, and I'm encouraging other people to say: if you're a reviewer, please don't merge until you see docs in that PR, yeah.
F: Since we have this ability (sorry, I'm interrupting), since we have this ability to put template things in the PR... I don't know if anybody's going to read it. I read the one on the issues; people seem to read that one. But, you know, I'm not sure the standard contributors are going to read a PR template. Still, I think we should write up a, hey, checklist.
B: Otherwise, moving on to the next item on the agenda: there was talk in Slack about, as we look towards v1... At the face-to-face last week we were having a lot of discussions about things that would effectively change our API and how you use it, and it was mentioned by Paul that, because our version is v1beta1, there's a bit of a commitment attached to using that version, and I linked to the deprecation policy in there.
B: I don't want us to decide it right now, but let's start thinking about what we can change without breaking that commitment and what is going to need to stay the same. Some of this is around deletes. I don't know... are you raising your hand, Jonathan? Oh, okay, cool. I don't know if people have experience with it, but I get the impression that it's not like somewhere where we're v0 and we can change anything before 1.0; I'm getting the impression that we actually can't change as much as we may want to, potentially.
B: So I'm just making people aware of that, and if you have feelings about it, we can chat about it next time. I don't know whether that makes us change some of the things we've been talking about, or whether we potentially ask sig-architecture, because that would be a big change, for example, no longer doing deletes or something like that. Yeah.
F: I have long-standing feelings. But what are your feelings? No, we can take it another time. Okay, I just want to make people aware so they can think about it. I've been aware of this document, the deprecation policy guide, for a couple of years; it's never been in a great state, and yes, it's changed numerous times. So anyway, moving on. Okay, I covered this one.
F: Okay, so during the meeting I feel like I may not have appropriately communicated how this works, why it works, what's going on. I felt, from conversations with some people after the meeting, that the understanding of how the Kubernetes API machinery works is not intuitive to people used to REST and whatnot, and I kind of want to clarify this, and, you know, maybe show that all of this does work in the way that is expected from the end user.
F: Things that look at the resource, you know... I have not gone through the event stream, but things that look at the resource get the event stream of the resource, and eventually they see the delete go into storage, and the watchers look at storage. So there's... I don't know how to draw; is there a whiteboard on this thing?
F: Basically, a watch looks at the storage layer through the API server. The API server puts up a watch, and what that's looking at is not the things that are coming into the API server, but the things that are coming into storage. So when you issue a delete on an object, what happens is that the delete comes in, it's changed to an update on storage, and then an update event goes out, and that update contains the deletion timestamp. Does that make sense?
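A toy model of that conversion (illustrative only, not the real apiserver code; object and finalizer names are made up): while an object still has finalizers, a delete request is stored as an update that sets `deletionTimestamp`, so watchers first see a MODIFIED event rather than a DELETED one.

```python
import datetime

def handle_delete(store, events, name):
    """Toy API-server delete: with finalizers present, the delete becomes
    an update setting deletionTimestamp (watchers see MODIFIED); only a
    finalizer-free object is actually removed (watchers see DELETED)."""
    obj = store[name]
    if obj["finalizers"]:
        obj["deletionTimestamp"] = datetime.datetime.utcnow().isoformat()
        events.append(("MODIFIED", name))
    else:
        del store[name]
        events.append(("DELETED", name))

# An instance with a (hypothetical) finalizer still attached.
store = {"instance-1": {"finalizers": ["service-catalog"],
                        "deletionTimestamp": None}}
events = []
handle_delete(store, events, "instance-1")
```

This is why, as the speaker says next, a controller that removes the finalizer and a watcher that waits for the DELETED event are looking at two different moments in the same operation.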
F: So, following that, eventually stuff happens: the finalizers are removed, blah blah blah, and the delete either happens a second time, or updates come in and the API server sees that the finalizers have been removed, or that the deletion grace period has passed (because it's graceful, there's a period). Once the API server sees that, it will issue the actual delete to the storage.
F: Again, sorry, the comments are on the top, but it issues the actual delete to the storage, at which point the delete-object event goes through the watch to the watchers, everybody watching it. So my position here is that nobody should be relying on the deleted thing being gone until the deletion event from storage has been seen, which allows us to sort of insert steps into this operation.
F: Okay, and so what I'm saying is, instead of the update here being "go set the deletion timestamp", it should be "go set our custom field". Then the controller can reconcile that until it's either succeeded or failed, and roll it back if needed. If it succeeds, it does the standard delete; in this case I'm saying it goes directly to the final delete, as if there were no other finalizers. What would actually happen in this case is that we would do the standard...
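A sketch of that staged-delete idea (an illustration of the proposal, not the actual service-catalog code; the field and state names are made up): the incoming delete only sets a custom field, the controller reconciles deprovisioning against that field, and only on success does the real delete, the one that sets `deletionTimestamp`, go through; on failure the staged delete is rolled back and the object stays live.

```python
def request_delete(obj):
    """Stage 1: instead of setting deletionTimestamp, a delete request
    only marks the object with a custom field for the controller."""
    obj["deleteRequested"] = True  # hypothetical custom field

def reconcile(obj, broker_ok):
    """Stage 2: the controller deprovisions at the broker; only on
    success does the real delete proceed and set deletionTimestamp.
    On failure the staged delete is rolled back, keeping the object."""
    if not obj.get("deleteRequested"):
        return "no-op"
    if broker_ok:
        obj["deletionTimestamp"] = "2018-07-16T00:00:00Z"  # real delete path
        return "deleted"
    obj["deleteRequested"] = False  # roll back; user can see and retry
    return "rolled-back"

inst = {"name": "instance-1"}
request_delete(inst)
```

The design point being argued for: because watchers only trust the storage-level DELETED event anyway, staging the delete behind a custom field is invisible to well-behaved clients, while a broker failure leaves the instance intact instead of stuck half-deleted.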
F: You know, the delete changes to an update of the deletion timestamp, back and forth, back and forth, and eventually the API server says "actually delete the thing", and then storage sends the event through the watch to the watchers at the end. I have code that does this, and it does this in, as far as I'm concerned, the correct way, so I can show you the code and the demo. And I guess it is Carolyn's turn to talk; I don't know when that was. I just...
B: I want to clarify a couple of things, because there's a little bit of context from the face-to-face and that thread that Doug just referred to. What you're suggesting here is that it would be a field that is not deletionTimestamp, right? We redirect the delete to an update, and we're not setting deletionTimestamp; we'd be setting our own field. Is that right?
B: Yeah, that's what I wanted to get at: what we have here would work for 1.0, and then, when we move to CRDs, we could switch to having those... I think we were calling them shadow resources, sure, yep; some type of shadowing resource, where one represents what the broker is really doing and one represents our desired state, right.
B: We keep using delete, so it wouldn't change the user flow; do I have that right? Right, okay. So, ignoring my argument about sig-arch: if we just did that, we do your thing in 1.0, and then later on we use a backing or shadow resource and we keep using delete, do we even need to get sig-arch to agree with us, or can we just do that? Well...
F
Once
it's
under
wants
to
see
I
mean
in
my
opinion,
we
don't
need
to
see
how
I
should
agree
to
any
of
this,
but
to
be
to
play
nice
once
it's.
What's
once
is
the
CR
DS
I?
Think
again,
there's
been
discussion
of
CR
DS
for
all
and
whatnot
I
think
that
we
could
do
the
same
thing
in
the
existing
API
server.
The
problem
is
again:
it
changes
the
API
so
that
once
we
are
moving
to
CR
DS
we're
basically
changing
the
API.
Just
because
that's
it's
a
hard
cutoff
anyway.
D: If we stick with the current model, when we encounter a non-happy path and you basically wind up with a bad, failed deletion scenario, we could create a new object which is meant only for interaction with admins. You're not moving the instance, but you're creating, you know, a shadow copy of the instance and moving it onto an admin queue, basically, so the admin can deal with it. He says: oh, I see this instance, the deletion failed.
D: Here are the details on the specific instance or binding or whatever it was that failed, and I need to go and clean it up manually. It kind of gets it out of the area of the user trying to deal with it, and helps us keep track of it. I guess I'm a little concerned about the approach Morgan was describing, but I'm certainly up for listening and, you know, discussing it more; this was just another option that it sounded like we had.
A: Yeah, sorry for interrupting. I don't think the suggestion Paul gave actually solves the problem; it's more about how to clean up a leaky model, and that's not what we're talking about here. We're talking about a situation where you want to basically undo the delete in Kubernetes once you've started down the process, and that varies across cases. So I don't think Paul's solution is it.
B: One question on that, before Morgan's hand: when you say it doesn't solve the problem, do you mean creating the shadow resource unconditionally at all times, like what we were suggesting for CRDs, or do you mean using it in the case of recovering from being stuck in the delete state?
A: Definitely, for the stuck case, it doesn't solve the problem. And actually it's kind of weird to have the shadow resource for all cases; it's on us to think more about it. I don't know whether that solves it; my analysis is it introduces an extra layer that may be overwork, and you're going to get into situations where someone watches the wrong resource, or cleans up the wrong resource, and that's incredibly risky, and you could just say, oh, we're not paying attention.
F: I'd have to think... I think it is a concept that can work; again, I think we need to go over it again. I've got a presentation from someone who's basically implemented these kinds of situations with CRDs, not for this issue, but for other issues, and, yeah, I guess we could try to schedule some time to talk about it.
B: Are you cool with maybe putting up a Doodle or something so we can meet and chat?
F: Anything else on this? Oh yeah, you actually want to see the demo. So just let me run through it real quick, because it's really unimpressive: basically, the test works, and it's the standard test that goes through and adds a broker, you know, the broker does the class thing, creates an instance, creates a binding, then does unbind, deprovisions the instance, etcetera, etcetera. So, you know, it just is the test.
F: So there's nothing super impressive to see there, but the key thing is: how does this still work? I didn't make any changes to the test; this is just the test working. I did make changes to a different test to explicitly show that this works, because it goes through all the stages, and so that might be more interesting to look at.
F: And then here's our extension that calls our special delete, which is "self resource delete". Otherwise it's literally an exact copy of delete, and I added this line; that's exactly what this is. I made a change to the controller, which you can see here, which says: has the secret key, as the secret name, been set to my special...
F: You know, this is our tombstone flag; we would make an actual field for this, but since I hijacked that one, I checked that it's there and then set "reconcile delete" as the state that we're in. Then, when we actually enter the reconcile-delete state, if the timestamp's not set, we know that we haven't gone down the real-timestamp path, and we do our "special". "Special" is literally nothing else right now, but this is where we would do all the broker stuff.
F: This is where we would say, you know, broker, reconcile: did the thing succeed? We would loop in here, or, you know, enter this and exit it until it actually succeeds or fails, at which point the last step is to call the special delete which, as you remember... well, we don't remember, we didn't cover it yet, but "special delete" is literally a resource delete that has only the standard delete implemented, and this is the delete that sets the deletion timestamp.