From YouTube: Kubernetes SIG Service Catalog 20170130
Description
SIG Service Catalog meets to discuss prior week's F2F meeting and the basics of writing Kubernetes controllers.
B
So, for everyone who came, thanks for coming. For everyone who didn't, we'll have a pretty brief summary in here, along with supporting documentation over the next week or so, to describe what we did. In summary, I was really happy that we all came away writing code that took us much, much closer to MVP 1. So thank you to everyone who participated.
A
Yeah, I'll just echo those statements. I think it was really good to get together, and I felt like I could really, personally, perceive that we're much more on the same page in this meeting than we were in the first one. So that's progress. Now, the agenda we had has me talking about how to write a Kubernetes controller and then you talking about architecture evolution, but I think we should swap those. Are you comfortable showing what you have now, and then I can talk about Kubernetes controllers?
B
Okay, can people all see this? Speak up if you cannot see this. No? Nobody has an issue seeing this. So, on the morning of the first day, Wednesday, at the face-to-face meeting, we discussed at length the current design, and the design and architecture of Service Catalog going forward, to get some feedback. Can everybody still hear me? Okay, all right. So, yeah, we talked about the current architecture and the architecture going forward, to get us to MVP 1 and beyond while still achieving the individual goals of all the participants.
B
So, as you might expect, we started with the current architecture as we have it now. It is disjoint, as you can see, and obviously doesn't work from kubectl all the way down to the controller, and the primary reason why it doesn't work is that the API server and the controller can't communicate with each other. So you can imagine how that would cause some problems.
B
So the remainder of the Wednesday-morning discussion was how to get from here to our end state — and this is our MVP 1 and beyond state. What we want is an API server that kubectl can speak to, so that the API server can respond to kubectl commands, and then we want the API server to be able, depending on a command-line switch, to talk to TPRs or etcd. You can see, up in the current state, we currently have an etcd integration. It works great, and Morgan—
B
Thanks to Morgan, who gave us a demo of all the API server functionality talking to etcd. So what we need to get to here, the long-term architecture, is to hook the controller up to the API server, and we're planning on doing that via a generated client-go binding to the API server. Then the second to-do here — possibly a slightly bigger to-do than the one I just said — is to hook the API server up to TPRs as well, and then turn on that toggle functionality.
B
So, in doing this, we'll get to a pretty similar workflow to the one we'll eventually have, and we can plug the API server into aggregated APIs once those land in Kubernetes. Then, when running the API server, through an environment variable or a similar command-line flag, we can just decide at start time whether we want our Service Catalog and all the related resources to be stored inside of etcd — that would be either an etcd run in-cluster or the same etcd that the Kubernetes core API server uses, which is something that Deis prefers right now — or whether the API server will use third-party resources, as the controller currently does: as a database, not as a data access mechanism, just as a database. So that's really the high-level summary. We wrote some code to this end and, as I said just a few seconds ago, Morgan graciously showed us a demo of kubectl talking to the API server and the API server persisting inside of etcd.
B
We have a recording of that demo if you scroll up in the Slack history. So the next steps, as I said, are just to hook up the controller and then hook the API server up to TPRs — call it a database, for now. We also have some design documents to this effect as well and, of course, we'll be expanding on this kind of work to get to this end state over the next week.
B
That is because, at least in the short term, it's kind of difficult for us to go to a customer and either ask them to hook up an API server that we, the SIG, wrote to the etcd cluster that Kubernetes core is using, or to install an etcd operator into their cluster and have the API server speak to that — and then, of course, the implication there is dealing with all the backup and disaster-recovery concerns that an in-cluster etcd (an etcd cluster inside the Kubernetes cluster) would come with.
B
So having TPRs as the backing database effectively uses the kind of arbitrary database that Kubernetes has built in. And again, this is for the time being; there may be a solution that comes along with aggregated API support, or shortly thereafter, that directly addresses this state-storage concern. But for now we feel that using TPRs as a database for aggregated API servers is a reasonable enough ask for the customers that we bring Service Catalog to.
B
So there is a question in chat from Kats that says: wasn't there an open question of whether it would be possible for the API server to talk to TPRs — something about limitations of the REST storage interface? Yes, there was an open question, and there is still some research to be done to answer that question of whether the API server can talk to TPRs. Additionally, the mechanism that the API server would use to talk to TPRs is an implementation of a Go interface called rest.Storage. But yeah.
B
There is some research to be done there. I think there was some concern that it would be impossible and, of course, if there is indeed some small chance that it is actually impossible, we'll have to revisit. But based on the small bit of research — about an hour's worth — that I did on that interface, it doesn't look like it's going to be impossible.
B
It does look like it's going to be an implementation of that storage interface that basically calls through to the Kubernetes core Go client, which in turn, of course, talks to the Kubernetes core API to access TPRs. That's about as far as I personally have gotten.
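The shape of that approach — an API server storage implementation that delegates to the core client instead of etcd — can be sketched in a few lines. This is a toy, stdlib-only sketch: the `Storage` interface, `coreClient`, and method names here are simplified stand-ins for the real `rest.Storage` interfaces in k8s.io/apiserver and for client-go, not their actual signatures.

```go
package main

import (
	"errors"
	"fmt"
)

// Storage is a drastically simplified stand-in for the rest.Storage
// family of interfaces: the API server calls Get/Create, and the
// implementation decides where the data actually lives.
type Storage interface {
	Get(name string) (map[string]string, error)
	Create(name string, obj map[string]string) error
}

// coreClient stands in for the Kubernetes core Go client that a
// TPR-backed implementation would call through to. Here it is just
// an in-memory map instead of a real API round-trip.
type coreClient struct {
	tprs map[string]map[string]string
}

func (c *coreClient) getTPR(name string) (map[string]string, bool) {
	obj, ok := c.tprs[name]
	return obj, ok
}

func (c *coreClient) createTPR(name string, obj map[string]string) {
	c.tprs[name] = obj
}

// tprStorage implements Storage by delegating every call to the core
// client — i.e., using third-party resources purely as a database.
type tprStorage struct {
	client *coreClient
}

func (s *tprStorage) Get(name string) (map[string]string, error) {
	obj, ok := s.client.getTPR(name)
	if !ok {
		return nil, errors.New("not found: " + name)
	}
	return obj, nil
}

func (s *tprStorage) Create(name string, obj map[string]string) error {
	s.client.createTPR(name, obj)
	return nil
}

func main() {
	var store Storage = &tprStorage{client: &coreClient{tprs: map[string]map[string]string{}}}
	store.Create("broker-1", map[string]string{"url": "http://broker.example.com"})
	obj, err := store.Get("broker-1")
	fmt.Println(obj["url"], err)
}
```

The open research question mentioned above is whether the real interface's semantics (watches, resource versions, etc.) can be satisfied this way; this sketch only shows the delegation idea.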
It looks like there might be some other questions in chat, but I can't quite see — there's one from Phil. Phil, can you expand on your question about whether we've gotten any feedback on the strategy? Yeah.
C
I think it's just — you were talking about doing more research in that area, and since we're basically extending the API machinery, it might be worth just running this plan by them. Yeah.
D
For the most part it's unrelated, but it's reasonable to bear in mind. We can talk about the difference between our desired state, which you were just showing here, and the current state — we talked a little bit about that there — and I'm going to be submitting a PR based on the branch, hopefully by the end of the day, so hopefully people will feel free to comment on it where I got it wrong. Awesome.
A
While we were at the face-to-face last week, one of the things that came out of it is that it seemed people would appreciate some kind of overview of how Kubernetes controllers are written, so I'm going to give a little bit of that now. I actually spent some time this weekend, and earlier today, going back over some of the controller infrastructure, but I'm not going to get into that here. I want to just talk about concepts and then, if folks would like to learn more—
A
Obviously the code is all on GitHub, so you can go check that out for yourself, and if there's a strong desire for a tour of a particular controller, I'm happy to do that. So one note before I start: I'm going to intentionally gloss over discussing specific types because, like many other things in Kubernetes, there are multiple mechanisms in use. Some of them are kind of a synthesis — generation n is the synthesis of the experience of generation n−1 — but there are controllers that use different generations of infrastructure.
A
The nice thing about Kubernetes controllers is that, though it may seem serious, the pattern is very, very simple to grasp. The pattern that controllers in Kubernetes follow is called state reconciliation, and the mechanism for state reconciliation is basically: look at what I'm supposed to be, look at where I am, and then reduce the delta between those two.
So the canonical example that we come back to oftentimes is that of the thermostat, where the user of the thermostat says: I want this room to be at 75 degrees. That is the desired state.
A
Materials expand and contract at different rates, and that is how the thermostat reads what the actual temperature is. So you can think of it like this: the user has input 75, the thermostat reads 73, and so the thermostat has to activate the furnace and raise the temperature in the room by 2 degrees, to get alignment between what's expected and what reality tells us.
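The thermostat version of the reconciliation loop fits in a few lines of Go. This is just the concept from the talk, not any real controller code; the function and action names are made up for illustration.

```go
package main

import "fmt"

// reconcile is state reconciliation in miniature: compare the desired
// state with the observed state and return the action that reduces
// the delta between the two.
func reconcile(desired, actual int) string {
	switch {
	case actual < desired:
		return "heat" // activate the furnace
	case actual > desired:
		return "cool"
	default:
		return "idle" // desired and actual already match
	}
}

func main() {
	// User sets 75, thermostat reads 73: the furnace must run.
	fmt.Println(reconcile(75, 73))
	fmt.Println(reconcile(75, 75))
}
```

A real controller runs this comparison continuously, re-evaluating every time either the desired state (spec) or the observed state (status) changes.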
A
So those of you who are familiar with the details of many of the core Kubernetes resources, like pods, may say: aha, I've seen those things before. And what you will find, if you go and look at our resources in Service Catalog, is that three of the four of them have spec and status, and those model generalized concerns. Say, for example, the broker resource: part of the broker resource's spec is "what is the URL for this thing", and part of the broker resource's status is "what's happening to it".
A
Is it ready? Was there a failure talking to it? Etc., etc. So, going back to controllers: the essence of a controller's job is to look at the spec for the thing that is being managed and the status of it, and take some steps to make them match. To do this, there are a few concerns that are kind of universal to any controller. You need a way to find out what the spec is — say this is some hypothetical controller that manages a single resource.
A
Well, the controller needs a way to understand what the spec is for the resources it manages, and then the controller has to react to changes in those specs. As an example: you've got a new resource, right? The controller needs to say, aha, I've got this new one, I need to do some work.
A
And when we say an API client, we basically mean an abstraction that lets the controller talk to the API server and get information about the expected state of the resources that it manages. So that's all fine and good, but we have two very important mechanisms to keep a controller doing this. One of those is list — when I say a list, I mean: give me all of the resources of this type that you currently have. And then the other mechanism is called watch, and watch is: give me each—
A
I'm sorry, let me take that sentence fragment back. So we do a list and then we do a watch on some resource, right? Well, in order to keep programmers from being lulled into a sense of security — thinking that they are going to have some perfect event stream where events never get dropped, where you always get guaranteed delivery, where you never have a network partition, etc. — watches in the Kubernetes API server terminate automatically after some amount of time.
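That client-side rhythm — list for a full snapshot, consume the watch until it terminates, then list again — can be sketched with a fake server. This is a toy illustration of the pattern only; `fakeServer`, `syncOnce`, and the event strings are invented for this sketch and have nothing to do with the real client-go machinery.

```go
package main

import "fmt"

// fakeServer stands in for the API server: List returns everything it
// has, and Watch returns a channel of events that the server closes
// after a while, just as real Kubernetes watches terminate.
type fakeServer struct {
	items  []string
	events []string
}

func (s *fakeServer) List() []string { return append([]string{}, s.items...) }

func (s *fakeServer) Watch() <-chan string {
	ch := make(chan string)
	go func() {
		for _, e := range s.events {
			ch <- e
		}
		close(ch) // the watch terminates; the client must re-list
	}()
	return ch
}

// syncOnce is one turn of the client loop: list to get a snapshot,
// then consume watch events until the watch closes. A real controller
// repeats this forever, re-listing each time the watch terminates.
func syncOnce(s *fakeServer) (snapshot, observed []string) {
	snapshot = s.List()
	for e := range s.Watch() {
		observed = append(observed, e)
	}
	return snapshot, observed
}

func main() {
	s := &fakeServer{
		items:  []string{"broker-1"},
		events: []string{"ADDED broker-2", "DELETED broker-1"},
	}
	snap, obs := syncOnce(s)
	fmt.Println(snap, obs)
}
```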
A
That brings us to the informer, by the way. If there are any questions as I'm doing this, please don't be afraid to speak up, anybody. It's become very clear to me that, after two-plus years of swimming in Kubernetes things, there are things that I take for granted that are not obvious to other people, so in the event that I'm doing that, don't hesitate to speak up.
So an informer is basically a thing that knows how to list and knows how to watch, and then there's another thing that it gives us, which is a live cache that is populated by list and watch. That's handy if you have a need like, say, you're watching multiple resources and you need to react to some change in one resource in a way—
A
That means you need to get the state of another one. And the final thing that an informer gives us is a way to register handlers for events that happen to resources. So, just to make sure this all comes through on YouTube, I'll just make a note here: informers list and watch, and they have event handlers.
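Those three informer jobs — list/watch feeding a live cache, plus registered event handlers — can be shown together in a toy. This sketch is not the client-go `SharedInformer` API; the `Informer`, `Event`, and method names here are simplified inventions that only illustrate the shape.

```go
package main

import "fmt"

// Event is a change to a resource, as delivered by list/watch.
type Event struct {
	Kind string // "add", "update", or "delete"
	Name string
	Obj  string
}

// Informer is a toy informer: a live cache populated by list/watch
// events, fanning each event out to every registered handler.
type Informer struct {
	cache    map[string]string
	handlers []func(Event)
}

func NewInformer() *Informer { return &Informer{cache: map[string]string{}} }

// AddHandler registers an event handler, just as a controller does.
func (i *Informer) AddHandler(h func(Event)) { i.handlers = append(i.handlers, h) }

// Deliver is what the list/watch machinery would call per event:
// update the cache first, then notify the handlers.
func (i *Informer) Deliver(e Event) {
	if e.Kind == "delete" {
		delete(i.cache, e.Name)
	} else {
		i.cache[e.Name] = e.Obj
	}
	for _, h := range i.handlers {
		h(e)
	}
}

// Get serves reads from the live cache — no round-trip to the server.
func (i *Informer) Get(name string) (string, bool) {
	obj, ok := i.cache[name]
	return obj, ok
}

func main() {
	inf := NewInformer()
	inf.AddHandler(func(e Event) { fmt.Println("handled", e.Kind, e.Name) })
	inf.Deliver(Event{Kind: "add", Name: "broker-1", Obj: "url=http://example.com"})
	obj, _ := inf.Get("broker-1")
	fmt.Println("cached:", obj)
}
```

The cache is what lets a handler for one resource cheaply look up the current state of another, as described above.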
A
So, for example, there is a controller in Kubernetes that manages attach and detach of persistent disks that come from cloud providers, called — amazingly — the attach-detach controller, and it has an informer for nodes, for persistent volume claims, for persistent volumes, and for pods. It is very common to have informers for multiple resources in a single controller. Does that answer your question, Doug? Yeah.
A
All right. So we've definitely got a client, we've got an informer, and the informer gives us list and watch, gives us caching, gives us event handlers. So the event handlers are where controllers do their work, or enqueue work to be done in another goroutine. So let's talk about two different flavors of that. Say we have a really simple arrangement and we can do our work—
A
Why? If it takes some very low amount of time, it's probably fine to just register event handlers with the informer that run our transaction scripts directly, for whatever thing it is that we have to do. Another pattern that controllers use is to use event handlers to enqueue work that another goroutine takes off the queue.
A
If that makes sense, let me say a few more words about what the long-term picture that Aaron showed earlier looks like in terms of this.
A
So, as an example, we've talked a little bit about controllers and watching; I can walk through a quick example. I don't want to spend more than five or ten minutes on it, but I'll do another quick one right now. One of the resources that we have is the broker resource, which represents the service broker. The broker resource has a spec field, which is a type that contains the URL field.
A
A condition is like an attribute of this thing. A status condition has a name, like a condition "FailedToGetCatalog" — a completely made-up example — and conditions are true or false. So when you think about controllers, you can think about the programming model as being: you have an API client for Service Catalog that gives you a way to do CRUD, and, like, watches on brokers.
A
The informer sees that broker that I added, and it's going to run the broker-added event handler. And maybe we forget about enqueuing work — maybe we say we're just going to handle that work in the handler — and so that handler method that we register can implement the transaction script: you go out to the broker, you get its catalog endpoint, you fetch the service classes that the broker offers, take those services, and transform them into our ServiceClass API resource.
A
So, for those of you who might not be familiar with how persistent volumes in Kubernetes work: there are basically two pieces to using a persistent volume. There's the actual PersistentVolume, which represents the storage asset, and then there's another resource called a PersistentVolumeClaim that says "I want to use something." So there's a controller in Kubernetes whose job is to watch for claims that are made but don't have a volume associated with them, and to be the matchmaker that says: aha, I see this claim.
A
I've got this volume that isn't being used, that isn't claimed yet; I'm going to do the transformations on those objects and indicate that they're bound to one another, and then, after that, the claim is ready. So one of the things that the persistent volume controller does is maintain a cache of persistent volumes, so that when it gets a persistent volume claim that is to be matched, it can just look at its cached values without having to re-list from the API server.
D
Yeah, so at the face-to-face last week we talked about making sure we keep our PR velocity up, and we talked about different ways we could possibly do that; we definitely didn't come to any firm conclusions or anything. But one thing I did want to sort of request of the group: we're very much company-based when it comes to PR reviews, meaning there are pretty much four main companies here, and we ask that three of the four companies approve a PR before it gets merged, in most cases anyway.
D
What I'd like to do is request that, on at least a near-daily basis, we have at least one person from each company do a quick scan through all the PRs and, you know, comment on them or give a healthy LGTM, just to keep things moving forward. What I'd like to avoid is having two or more companies not review PRs for, like, four or five days; that's just going to cause bottlenecks.
D
I think, because we have at least two people from each company in the group who are maintainers, it's not a big burden for each company to try to look at all PRs at least once a day and give some sort of feedback or comments, or something, to keep the bottlenecks moving forward. I just want to put that idea out there; I don't want to take up people's time.
D
Just
an
ask
if
I'm
not
looking
to
make
any
formal
change
just
the
same,
look
guys
we
want
to
do
our
PR
towns
us,
but
that's
not
going
to
happen
to
people
are
reviewing.
So
it's
just
an
ask
for
each
company.
It
could
talk
much
themselves
and
say
how
we
going
to
try
to
get
at
least
a
look
at
the
PRS
on
a
daily
basis.
Cuz
I
know
we're
all
busy
hey.
C
I don't know if you guys know about Gubernator — does that even work for repos like ours? Maybe we could look at putting it to use for this.
D
If there's a PR out there that you made a comment about, or somebody made comments on, and it can't go forward anymore until the PR author makes some changes, then obviously this doesn't apply to that one. But if the PR author believes it's ready for review, everybody on the team, within a day or so, should give it a quick look over and say yea or nay.
C
Is it okay for someone to, like, recuse himself? Let's say you reviewed it and you're like, okay, this looks good, and some other folks have comments; it probably makes sense to just say, well, you don't need my approval anymore — and if you want to be in the whole chain, you re-state it. So one thing I've experienced on PRs is—
D
And like I said, the whole company-based reviews idea is just that at least one person from each company looks at the PRs; that's all I was looking for, because the list always gets a little long and, as I said before, I get a little antsy when the count of open PRs is above 10 — and we're above that now.
B
One thing I want to mention: there are work-in-progress PRs — people can just label them in whatever way they want, that's fine — and I skip those if they're work in progress, unless I'm asked, "can you look at this, am I on the right track?" or whatever. So I'm going to continue skipping those during my daily walkthrough. Yeah.
A
Alright, so there's about 10 minutes left. Any questions, or remaining topics folks would like to discuss?
E
I
guess
I
haven't
introduced
myself
and
you
I'm
from
IBM
Simon
when
I'm
putting
two
communities
and
just
thought
team
keeping
around
with
some
things
like
building
the
binaries
and
things
like
that,
I'm
wondering
if
there's
any
architecture
track
for
the
Service
Catalog
I
can
look
at
or
anything
I
can
like
so
much
I
think
I.
Well,
my
question
is:
where
is
the
subject?
Service
Catalog
within
the
entire
system?
I
know
whether
a
TI
server
but
and
it
seems
like
we
are
right
and
I'm
not
too
sure.
D
So
first
of
all,
I
thought
should
have
been
I
should
have
introduced
imut
earlier
other.
My
a
team
is
coming
through
as
from
the
craft
out
of
the
world,
so
it
has
a
lot
of
experience
with
the
service
broker,
touch
stuff
and
he's
worked
out.
A
lot
of
feel
like
type
of
stuff
too
and
Simon
Lee
be
missed
by
mystic,
because
you
join
the
chat
late,
but
there
is
a
link
and
I
just
pasted
it
to
you
in
private.
D
Go
to
the
presentation
are
working
that
helps
that's
trying
to
flush
out
our
architectural
picture
for
our
stuff.
Anyway,
they
won't
answer
your
other
question
of
how
does
this
fit
in
with
Malik
uber
Nettie's,
bigger
perspectives?
Don't
think
it
does
yet,
but
it's
at
least
a
start
over
of
the
chef
america,
our
architecture,
yeah.
A
So
Simon
one
welcome
I
q
I
can
I
can
try
to
answer
some
of
that.
Some
of
your
questions
now
I
think
that
you'll
find
that
the
is
information
about
this
subject.
That's
in
videos
but
I
would
not
say
yeah
that
its
present
in
github
I
made
an
empty
document.
That's
supposed
to
hold
it
and
dug
it
has.
As
you
said,
it
has
a
four
requests
but
dispel
some
of
the
south,
but
basically
like
you,
can
you
can
think
of
everything
in
service
catalog
as
like
in
a
vacuum
like
there
might
be
many
deployments.
A
Apologies,
but
probably
the
way
to
think
about
this
is
it's
an
extension
that
will
be
hosted
in
Cooper
Nettie's
in
a
pot,
so
we'll
probably
have
a
deployment
apology.
That's
like
you
had
a
kipper
native
cluster
and
you
deploy
catalogue,
API,
server
and
controller
in
a
pod
that
runs
in
Coober
Nettie's
and
at
some
time
in
the
future.
After
16
we'll
have
a
mechanism
that
is
called
API
registration,
which
is
basically
a
proxy
that
fronts
the
whole
cluster
and
II.
A
This is important because what it will allow us to do is have extensions to Kubernetes that present as part of Kubernetes from an API endpoint. The API aggregator has the Kubernetes core API registered with it and, like the other API groups that are part of mainline Kubernetes, it will have our API registered with it. Then a very basic client, instead of talking directly to any particular API server, talks to the aggregator, and the aggregator can just pass along the request to the right place — or, for advanced clients...