From YouTube: Kubernetes SIG Service Catalog 20170912
Description
- Demo of PodPreset from service-catalog API server
- ServiceClass and ServicePlan should have spec
- Orphan mitigation
- Demo of OpenSDS from service-catalog
A
Welcome, everybody, to the Tuesday, September 12, 2017 meeting of Kubernetes SIG Service Catalog. We usually have Aaron do moderation; since he's not here today, does anybody want to raise their hand to do that? "I'll do it." Ah, what I expected: Doug would volunteer, and I was not disappointed. So, Doug, you have the helm of the meeting, so to speak. All right.
C
Yes. Two months back I started off this task of moving PodPresets to beta, and as part of deciding the roadmap, it was decided that PodPreset is one of the functionalities that should be developed using initializers, outside the core code. So that's where this all started.

To give you a little bit of background about initializers: in core Kubernetes you have admission controllers. Admission controllers basically get executed before any resource, any object, gets created in the system, and they are used to enforce policies, whatever admission control policy you want to apply. For example, you may want to enforce that every pod that gets created should use an image pull policy of Always, things like that; those are examples of admission security policies. PodPreset in its alpha is actually implemented as an admission control plugin too.

So what is the disadvantage of admission control plugins in the core? They need to be compiled into the core, and they can be configured only at API server startup time. So if you want to extend the functionality, if you want to implement some other policy to enforce as part of object creation, then you need that to be built into the core, which is becoming harder and harder, as you can see. That's why initializers were introduced in 1.7. So what are initializers?
C
Basically, you can think of them as a set of pending tasks that get executed as a resource gets created in the Kubernetes system. Only when all these initializer tasks are finished do we say the object is initialized; then it becomes active in the system and visible to the rest of the system to act on.
C
So, for example, if an initializer is configured on a pod and a pod request comes in, then until all the initializers for the pod are executed, this pod will not become active in the system. The kubelet, or any other component in Kubernetes, will not be able to act on it. It will not even be visible by default, unless you specify a special parameter when you're querying pods saying, you know, give me the uninitialized pods too.
C
Yes, it is there. So what you're looking at on the screen right now is an initializer configuration, where I'm specifying that this initializer's name is podpreset.initializer.k8s.io and that it needs to act on these objects. The objects are basically a GVK: you have API groups, you have API versions, you have resources, in this case pods. So you can create a bunch of initializer configurations like this and submit them.
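The manifest on screen isn't legible in the recording, but an InitializerConfiguration of the kind described, based on the v1alpha1 admissionregistration API of that era, would look roughly like this sketch (the initializer name and the rule values are assumptions reconstructed from the discussion):

```yaml
# Sketch of the InitializerConfiguration described above (v1alpha1-era API).
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: podpreset
initializers:
  # The name is how a controller recognizes its own pending entry later.
  - name: podpreset.initializer.k8s.io
    rules:
      # The GVK-style selection mentioned: API groups, versions, resources.
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
```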
C
Let's say there are two initializers which you have registered for pods. As they get executed, they can mark the result in the object metadata. There are two fields that were introduced in the object metadata: there is a field called initializers, which has the list of pending tasks, and at the same time there is something called result, which an initializer controller can write back, basically in case of a failure. The moment the core API server sees that there is a result, which you would write in case of a failure...
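Concretely, an uninitialized object carries something like the following in its metadata; this is a sketch, and the names and values are illustrative:

```yaml
# Sketch: what metadata.initializers looks like on a freshly created pod.
metadata:
  name: my-pod
  initializers:
    pending:
      - name: podpreset.initializer.k8s.io    # first in line acts next
      - name: other.initializer.example.com
    # "result" is only written by an initializer on failure; the API
    # server then rejects admission of the object.
```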
B
So now I've got a question for you, and this is more of a sort of best-practice kind of thing with initializers; it came up recently. I assume you're familiar with Istio, right? "Yes, I am."
B
Yeah. So, obviously, if you're deploying a pod and you want Istio to add a sidecar, you're going to modify the pod spec to add the additional containers for the proxy stuff. But let's say you're actually deploying a Deployment, as an example. Is it best practice to have the initializer modify the Deployment, so that inside the Deployment, where you define your containers, you add the Istio one? Or do you wait until the Deployment itself deploys the pods, and then you modify that pod?
C
Yeah, so you're asking whether to write an initializer for Deployments or an initializer for pods, basically. I guess it depends on the functionality and where it belongs. In the case of PodPreset, we think this is a functionality which belongs at the pod level, so you would write a pod initializer; and if it belongs at the Deployment level... because, see, if you are writing for Istio, not every application deployed is using a Deployment.
C
While writing this, I realized that if you are writing a pod initializer, it is much more fundamental to the Kubernetes system, and you can actually end up in a deadlock scenario. Let's say you have an initializer up and running, it's a pod initializer, and somehow it got stuck, or let's say it is down. Now you pretty much lose your capability to launch any new pod in the system.
C
It's even impossible to launch the initializer controller itself, because that depends on pod initializers being present. So I think it's a very powerful mechanism, and it needs to be used with a lot more responsibility: you can only allow trusted initializers to be present. As for best practices, I think people are still playing with it, so those have not come out that well yet; it needs more baking time. "Okay, thank you." Sure, yeah.
C
So yeah, this is an example of an initializer configuration that we are looking at here. In this case, I am saying that I want to intercept all pods, and this is my initializer's name. This is how the linking happens, and this is very important. Now, once this initializer configuration is submitted and a new request for a pod comes in, the Kubernetes API server will look at the object and see what initializer configurations, basically what initializer registrations, are already there.
C
It will then create, in the object metadata, a list of pending tasks for initializers, and each pending task will carry the name of the initializer that needs to be executed. And then you have the initializer controller on the other hand, which is listening for these objects that it is interested in, and it will check whether it's actually its turn to act.
C
I was thinking I can quickly show the example. So yeah, this is the code that I have; it's a controller, and what the controller is doing is listening for all pods and checking whether each pod needs initialization or not. The way it does that is by looking at the pod object metadata.
C
It fetches the initializers from the metadata and looks at initializers.pending, the list of pending tasks, checking whether its own name is at the top of the queue. This here is the PodPreset initializer name, and if you look, it matches the one that we created in the initializer configuration. So if it's its turn to execute, it says: okay, I need to initialize this pod.
C
This pod needs to be initialized, and then the logic for initialization actually gets triggered here. Up here you can see the admit function, which executes the admission control logic, applying the PodPreset in this case; that could be, say, mutating the pod spec. Once it is done applying the logic, it can say "I'm done with it," and the way it says "I'm done with it" is to remove itself from the list of pending tasks.
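The turn-taking and removal logic just described can be sketched in Go. This is a simplified stand-in that works on plain name lists rather than the real client-go types; the initializer name is the one assumed earlier in the discussion:

```go
package main

import "fmt"

// initializerName is the name registered in the InitializerConfiguration
// (illustrative value, reconstructed from the discussion).
const initializerName = "podpreset.initializer.k8s.io"

// isMyTurn reports whether this controller should act now. Initializers run
// in order, so we only act when our name is FIRST in the pending list.
func isMyTurn(pending []string) bool {
	return len(pending) > 0 && pending[0] == initializerName
}

// markInitialized removes our entry from the pending list. When we were the
// only pending initializer, it returns nil, mirroring the behavior described
// of setting metadata.initializers to nil so the object becomes visible.
func markInitialized(pending []string) []string {
	if !isMyTurn(pending) {
		return pending
	}
	rest := pending[1:]
	if len(rest) == 0 {
		return nil
	}
	return rest
}

func main() {
	pending := []string{initializerName, "other.initializer.example.com"}
	fmt.Println(isMyTurn(pending))        // true: we are first in line
	fmt.Println(markInitialized(pending)) // only the other initializer remains
	fmt.Println(markInitialized([]string{initializerName}) == nil) // true
}
```

In the real controller these lists live under `metadata.initializers.pending`, and the update is written back to the API server after mutation.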
C
So here is how it does that: if it was the only one in the list, then it will set the initializers object to nil; otherwise it is going to pop itself out and leave the rest of the pending initializers in place. So, if you think about it, when there are multiple initializers, they cooperatively figure out whether it's their turn to execute, and then they execute. Now, bringing it back to Paul's point: let's say admit fails here. In case of failure...
C
It can write the result of the failure. In our case, for PodPreset, since it's not an enforcement policy, we don't reject if we fail to apply the PodPreset. If, let's say, it were a critical initializer and an enforcement plugin, then we would write the result back here, and that would reject the admission of the object that we are trying to initialize.
C
So how do you listen for uninitialized pods? This is a quick example I wanted to show here. ("Paul, when you get a chance: you have a hand up, just let us know." "No, we're good.") Yeah, so I'll just quickly finish this, then we can go to the question. Here, if you look at how I'm listening for the uninitialized pods, there is a new option in the client-go library called includeUninitialized.
C
So whenever you are listing or watching any object, when you are creating these shared informers, you can set includeUninitialized to true, and you will start receiving these uninitialized objects. Everywhere else in the system this is false by default; that's why you don't get to see uninitialized objects, and the system doesn't act on them. Okay, so now I'm ready to take the question. "All right, Mr. Paul, go ahead."
A
I asked because the elevator pitch for initializers is that you can do admission outside of the API server. And, this is not a criticism, and I can't actually imagine how to make it work, but one thing that you can do with admission controllers that you can't do with initializers right now is reject an update, or mutate an object when it's being updated. "I see, yeah."
C
Got it, yeah; that path will not be covered by these. I think that would probably come after initializers are successful. In fact, I was following the discussion, and there are a lot of unanswered questions for initializers, like about the ordering of initializers, and, if one initializer depends on another one, how that should be done. So I think those questions are still being figured out.
C
So I have a PodPreset named db-config. The first thing it has is which pods this preset should be injected into, so I'm saying all the pods with a label matching app equal to command should get this PodPreset injected. And the PodPreset definition here says to inject just these two environment variables, DB_HOST and DB_PORT. I took this example for Redis, because if you have a Redis application it will need these two things: a host and a port.
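The manifest isn't shown in full, but a PodPreset along the lines described, using the settings.k8s.io/v1alpha1 API of that release, would look roughly like this sketch (the label value, host, and port are assumptions taken from the Redis example):

```yaml
# Sketch of the db-config PodPreset described in the demo.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: db-config
spec:
  selector:
    matchLabels:
      app: command        # pods with this label get the preset injected
  env:
    - name: DB_HOST
      value: localhost
    - name: DB_PORT
      value: "6379"       # default Redis port, assumed for illustration
```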
C
Some of these environment variables, KUBERNETES_SERVICE_HOST and the like, are made available by the kubelet itself. And if you look at these two environment variables, DB_HOST and DB_PORT, they are not listed in the containers; they will be injected by the PodPreset at runtime, while the other three are made available by the kubelet. Okay, one more thing to show here is the label; you see app is command. So now we will create this pod, and we should see the environment variables. Okay, so this got created.
C
If we look at what the spec looks like after the creation, the first thing we see is that the PodPreset was applied: we add an annotation carrying the name of the PodPreset, db-config, saying what version of the PodPreset was actually applied. And then, if you look, there are two new environment variables that show up which were not part of the original spec: DB_HOST and DB_PORT.
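The resulting pod would carry, roughly, an annotation and the injected variables like the sketch below; the annotation key follows the documented `podpreset.admission.kubernetes.io/podpreset-<name>` pattern, and the values here are illustrative:

```yaml
# Sketch: pod spec after the PodPreset initializer has run.
metadata:
  annotations:
    # Records which resource version of the preset was applied.
    podpreset.admission.kubernetes.io/podpreset-db-config: "12345"
spec:
  containers:
    - name: app
      env:
        - name: DB_HOST   # injected by the preset, not in the original spec
          value: localhost
        - name: DB_PORT
          value: "6379"
```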
C
"That is awesome, thank you." So yeah, I have these two PRs; they are mentioned in the agenda doc. Right now, in these two PRs, I have migrated the alpha APIs from core to Service Catalog, and I'm in the process of writing a walkthrough. For the first PR, Morgan and Paul have reviewed it; I would encourage everyone to review it and give me feedback. Once these two are in, I plan to add two small features which are required for the beta APIs, and I plan to do that next. That's kind of the roadmap.
C
I wanted to figure out two things. One was this namespace question. And second, when we, let's say, announce publicly that this is available to be used, what is the policy for deprecating the PodPreset in the core? Because when people are running core with alpha support of PodPreset, and Service Catalog as well, how do you want people to migrate?
A
It would probably be a better idea to ask sig-architecture, because I know we haven't had this specific thing happen before. But as for the backward-compatibility guarantees: I think it's alpha, and so there are no such guarantees, so I'm not sure that we are obligated to keep anything around in core. But we certainly should be as friendly as possible to users, so that we don't break them if we can avoid it. Yes.
A
I think it is likely that we will have some kind of status information that we'll want to show for service classes and plans, and so I created a few issues. I created one to introduce spec, which, in the sake of preserving young Morgan's sanity, I would suggest that we do after his pull request merges: to introduce spec to ServiceClass and ServicePlan. And then I also created issues to have controllers that maintain a count of the number of instances that are on a particular service class or plan.
A
What I meant by that, and I was thinking about this in terms of, like, ReplicaSet: in the status of a ReplicaSet you have the number of replicas that currently exist. So what I mean by controllers that maintain those statuses is this: as opposed to maintaining it when you make a new instance or when you update an instance, as part of that "transaction script," to quote the Martin Fowler Patterns of Enterprise Application Architecture book, which I'm sure everybody has next to them on their desk. Instead of maintaining that...
A
...instead of maintaining that as part of the instance reconciliation, we should have controllers that periodically do a field selection on service instances, by service class name or by plan name, and maintain the number of instances that are on a service class or on a plan. Does that make sense?
A
So, orphan mitigation is conceptually much simpler than I had feared. To level-set about what that is: there's a part of the spec, the Open Service Broker API spec, that is meant to prevent resources associated with instances or bindings from leaking. For example, if you have a synchronous provision call and it times out, the idea of orphan mitigation, the "orphan" in this, is the instance that you might have created, or that you tried to provision and which might have been created.
A
Okay, so the proposal is basically to add a field to the status for service instances; service instance credentials can follow the same pattern, slightly simplified, because there are currently no async operations for them. Once the controller detects that orphan mitigation has to be performed, it should set this orphan-mitigation-in-progress boolean, set the conditions accordingly, like the readiness condition, and start doing the deprovision. The broker might handle a deprovision due to orphan mitigation asynchronously, just like any other deprovision.
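Sketched as status fields on a ServiceInstance, the proposal might look like this; the field names here are approximations of what's being discussed, not a final API:

```yaml
# Sketch: possible status shape while orphan mitigation is underway.
status:
  orphanMitigationInProgress: true
  conditions:
    - type: Ready
      status: "False"
      reason: OrphanMitigation
      message: Deprovisioning resources left by a failed provision call
```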
A
If that happens, it should finish doing the deprovision, and then it should set the ready condition to false and set the failure condition. So, for clarity: once you have to do this for a service instance or a service instance credential, that thing is considered to be failed, which is similar to how they're treated in Cloud Foundry. When this kind of thing happens in Cloud Foundry, they consider the instance or the binding to be failed.
A
I believe that if you want to do something with it, you delete it and you make another one with the same name. So it's actually fairly simple. I had feared that there were really complicated behaviors, like having to retry the provision after you deprovision, but, lucky for us all, all that Cloud Foundry does is just say: I had to orphan-mitigate this; it might not have worked.
B
Yeah, I could hear a buzzing. Is that you, Michael? "Paul, sounds like him." Okay. So, Michael, while you work on your mic, let's move on in the queue; we'll come back around to you. I think I'm next. So, quick question, Paul: once orphan mitigation is done, you said you unset the boolean and everything's fine, and, sorry, it cut out there for a sec, but you started talking about something being set to false.
A
After you're done, you should unset the orphan-mitigation-in-progress boolean, set the ready condition to false, because it's not ready, and set the failed condition to true, and give the user some information saying: we're done with this one, the provision didn't work correctly, and we had to do orphan mitigation. Or maybe something a user is more likely to understand, saying that we've cleaned up to make sure that you're not being charged for any of the resources that you might have created, but it didn't work.
G
Yeah, so I'm just a little bit concerned that this behavior breaks the reconciliation loop. You're saying that in case of a failure we just clean up and stop retrying. I think that it should be possible to retry after you finish the cleanup. So why do we stop retrying? Unless we think that the spec is incorrect, which isn't necessarily true.
A
So I think that we could eventually do that, but what I think we should shoot for initially is something that is simple and easy to understand, that works to ensure that we don't leak resources. And if we want to get fancy and do something after we initially support this, that's forward compatible: if you bumped the spec again, we'd try to provision again. I think that we could do that, but I would prefer to have something implemented in the short term.
A
Nothing here paints us into a corner, so to speak. If, in the future, we thought it would be good to allow a retry after you had to do orphan mitigation, I think we could use something like: if you do that, you bump the spec, like you would for an out-of-band change to a secret parameter, and the controller does the whole thing again. Maybe we could do that, and if we did something like that, I don't think we would have any API breaks.
A
Now let's put on our programmer hats and split a hair. If you can't dial the broker, this behavior doesn't get initiated. It only gets initiated when the provision call times out, which could be due to a network blip, right? Like someone unplugged the F5 that runs the edge and your call timed out.
A
It could happen for that reason. You don't know, though, why it timed out; anything could have happened. And this also says that you should do this when a provision call gives you back a 408, like the server actually says that something timed out, or any 200-series response other than 200, 201, or 202, or a 500.
A
I agree with you, and that is what I would do in the absence of something in the spec that told me not to do it. Or rather, experienced users will probably come to us with muscle memory, or expectations, that nothing else will happen on this thing. But we can definitely change that in the future, I think, in a forward-compatible way.
H
Can you hear me now? "Yes, we can; go ahead." Cool. I'm wondering how we deal with internal failures in the catalog. Say, for instance, during the reconciliation something goes wrong: we can't connect to the API server, or the controller just goes down. Then we're kind of in a state where we don't know what to do, right? So how do we...
A
The intention of writing into the status which operation is currently happening, before you start working, is to prevent situations like that. And, just for the record, folks, we do have another issue for this that we've already gotten consensus on. The consensus was: before you start doing work, record the operation that you're going to do; also capture "these are the parameters that I'm sending"; and then update the status when you've completed doing the work.
A
So say the status update fails: well, then you didn't do anything to the broker, and you can just return that error and retry with the latest copy. Or say you update the status and it succeeds, you start doing work, and then somebody sends the controller SIGKILL and it dies. Well, when the controller comes back, it can pick up where it left off.
A
I think that the class of failures Michael is wondering about is things like: the controller fell over, or someone fat-fingered the kill command on the machine and killed the controller process, or something else happened. Is that right? Does that address your question?
B
I think I'm next in the queue. So, Paul, let me make sure I have the summary of that conversation: basically, we should be able to handle retries later if we want, because everything you've done here is forward compatible, right? "Yeah." All right, cool, thanks. Any other questions or comments? I don't think there are any other hands up.
B
Okay, do people need more time to think about this, in which case we're revisiting it tomorrow to make sure there's consensus, or have people had enough time already? I think, Michael, based on your question and the other issues, you may need a little more time just to think about it some more. Is that true? "Yeah."
E
Hello, this is Leon, from the OpenSDS community, and today I'm here to show you some of our latest work on how OpenSDS can enable a storage service in Service Catalog. We have prepared a demo about how OpenSDS can provide storage to Service Catalog. Let me see if I can share my screen.