From YouTube: Kubernetes SIG Service Catalog 20171107
Description
0.1.0 Retrospective, see goo.gl/5YQkgx for notes
B
We completed discussion of what we did well (of course, we're gonna keep trying to do those things), but we didn't complete discussion of what we don't think went well and what we think could have gone better, so I'm gonna paste the link to the summary of that discussion. From the voting yesterday, we had four things we're gonna focus on: one item in first place and three tied for second place.

You can read the three that tied for second. What I'd like to do is have a five-to-seven-minute conversation about each of these items. We'll do it speaker-queue style; for those who don't know, speaker queue just means that if you've got something you'd like to add to the conversation, type "+hand" into the chat and I will add you to the speaker queue here.
A
This is the one that I was laughing about at the end of the meeting yesterday. I agree that the code is long and duplicated; however, I think the premise is completely fallacious: the idea that if we had continually refactored, we would have been able to deliver when we did and the code would have been higher quality. My own personal, subjective opinion is that premature refactoring is on the same level as premature optimization, and frequently gets you into more trouble than it gets you out of. So, well, I do agree about the code that we have, though I would not say it's overly complex; it's just unfactored. I think we are actually in a good spot in terms of being able to refactor to something that actually captures all of the requirements. It would have been very hard if we had gotten, say, 50% of the way functionally and then refactored; we might have found ourselves fighting our factoring after that. So I agree, but I also disagree.
B
So we've got no other hands. Our goal here today is to decide whether we should change something for the 0.2 development cycle and, if we should, what it should be. Does anybody have other comments on whether or not they think we should change something, and/or what that should be, over the next few months or so?
D
So, well, Paul sort of alluded to this, which is testing. If we're looking at things that might change, I think it's really about focusing on making the code more testable, so we can write tests that are much, much smaller and much more focused, and can actually start testing some of these corner cases that are very tricky to exercise right now. There's a lot of things that you can't really test today, with the exception of the huge e2e runs. That's all.
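To make the testability point concrete, here is a minimal sketch in Go (Service Catalog's language) of the pattern being described. The shouldRetryProvision helper and its inputs are hypothetical, not code from the repository; the point is that once decision logic is extracted from a long reconcile loop into a pure function, each tricky corner case becomes one table entry in a small unit test rather than a scenario for a huge e2e run.

```go
package controller

import "testing"

// shouldRetryProvision is a hypothetical helper extracted from a long
// reconcile loop: given the broker's last response and the retry budget,
// decide whether the controller should retry the provision call.
func shouldRetryProvision(statusCode, retries, maxRetries int) bool {
	if retries >= maxRetries {
		return false // retry budget exhausted; give up
	}
	// Treat 5xx responses as transient and 4xx responses as terminal.
	return statusCode >= 500
}

// In a real repository this test would live in its own _test.go file.
func TestShouldRetryProvision(t *testing.T) {
	cases := []struct {
		name         string
		status       int
		retries, max int
		want         bool
	}{
		{"transient server error retries", 500, 0, 5, true},
		{"terminal client error does not retry", 400, 0, 5, false},
		{"retry budget exhausted", 503, 5, 5, false},
	}
	for _, c := range cases {
		if got := shouldRetryProvision(c.status, c.retries, c.max); got != c.want {
			t.Errorf("%s: got %v, want %v", c.name, got, c.want)
		}
	}
}
```

Covering the same three cases end-to-end would need a running API server, controller, and broker; as a pure function, they run in milliseconds under go test.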
B
All right, any other comments? It sounds like, so far, we've said the refactoring that should go forward is, of course, better testing: maybe do some refactoring to make the code easier to test, and improve the in-code comments. Have I missed anything, Doug, or, sorry, Paul?
A
I think that covers what we've said.
D
I think this item was mine. It seemed to me like there was a period of time where a bunch of things were not captured as, like, P0s for the beta, but they were deemed to be high-priority items, and they kind of sometimes might have taken precedence over something else. It was also sometimes unclear why some PRs were held back due to rebase syncs, and things sat for a while.
E
So I think that, with big features like that, the TPR is a good example: when I worked on a feature based on the TPR support, I saw some things that I think needed improvement before getting merged, and there were also some issues mentioned on GitHub, about code style or testing. In general, I think, even on the TPR itself, I started digging into the history of the TPR pull request to understand why some things were implemented the way they were.
A
I think you've got it. We're kind of having a retro over a very large span of time here, but I think that the TPR work, for example, was done before we went through the exercise of scoping what 0.1 was gonna be, and I think we got much better as we approached beta in terms of very clearly stating what the priorities for different things were.
E
The PodPreset work is probably another example of this. I think it was fine to strive to release beta as soon as possible and to try different things, but once we start releasing stuff, once we've stabilized our API, I think we need to pay more attention; it will not be as easy to just rename things, for example.
A
So I think that if we continue to define the group's priorities together for the next major release, like we did, that will stay very effective at focusing the work that the group is doing. I also think that, since the pool of contributors has grown, and hopefully will continue to grow, we'll have more review bandwidth and so more overall merge bandwidth. Time will tell, I think, whether we continue to work in the way that we have since we scoped 0.1.
A
If we continue working like that, I think it will probably be very effective, but the only way to find out for sure is to do it, and also to ensure that people maintain focus on the project. If we have folks that get pulled onto other things, and we don't have a lot of people paying full-time attention to the project, it will slow down and it will be less manageable to keep everybody on track. That's all.
E
So I think that, for some features, writing proposals and discussing them in advance would also be beneficial. Just from my experience with parameter support in instances and bindings, for example: we wasted a lot of time discussing which approach was the right one, and then finally I just wrote a documentation page describing it in detail and proposing it, and I think we sorted it out in just several days. So I think this is a good way of not wasting time, on the one hand, and of agreeing on things, on the other.
B
All right, we're at over seven minutes now. I have heard a few things here. I've heard to keep doing the planning process that we did just prior to the actual 0.1 release. I've heard that having more sort of one-on-one-ish discussions helps us not waste time. And I also heard that, as we add more full-time contributors, and thus more pull request reviewers, this problem may go away, specifically as it relates to getting clear criteria and getting PRs merged according to those criteria.
B
Anything I've missed from anyone who talked during this item? Going once... all right, let's move on to the third and second-to-last item. This is another one that was tied for second place, with three votes: more interop testing before beta with existing brokers. So who brought this one to the forefront yesterday?
E
So my concern is that, while I agree we should probably test against existing brokers, every time we raise this question of whether existing brokers will be broken by our changes or not, I'm unaware of what the pool of existing brokers is.
E
So it's about speculating and trying to ask some people what they think, and I think that's not very effective. If we had an actual list of brokers against which we can test, for example, or could even write some integration tests or something like that, or at least had a list of brokers which we should make sure are not broken, I think it would be beneficial. Whether they're accessible, I don't know; some brokers might not be public. That's all.
C
So obviously, as I said, from my side I let the ball drop, because I didn't push harder to get this stuff tested inside Bluemix with our existing brokers. That's one thing I'm gonna try to keep pushing on. But as a group, to Paul's specific question, I think maybe what we should do is reach out to the Cloud Foundry folks, because they do have a community of service brokers that they regularly ping to ask questions.
F
So I don't know about established actual brokers, but I do know, from my time working as a Cloud Foundry developer, that they have a fake broker for testing purposes: a Ruby app that adheres to the spec, which they use for basically the same thing, testing Cloud Foundry. So we might want to look at using that, maybe.
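For a sense of scale, a fake broker of this sort needs very little. The sketch below (in Go rather than the Ruby app mentioned, with made-up service and plan IDs) serves only the Open Service Broker API catalog endpoint, GET /v2/catalog, and rejects requests that omit the X-Broker-API-Version header the spec requires. A usable test broker would also stub the provision, bind, and deprovision endpoints.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// catalogJSON is a minimal, made-up OSB catalog: one bindable service
// with a single plan. The IDs here are placeholders for testing only.
const catalogJSON = `{
  "services": [{
    "id": "fake-service-id",
    "name": "fake-service",
    "description": "a fake service for integration testing",
    "bindable": true,
    "plans": [{
      "id": "fake-plan-id",
      "name": "default",
      "description": "the only plan"
    }]
  }]
}`

func main() {
	http.HandleFunc("/v2/catalog", func(w http.ResponseWriter, r *http.Request) {
		// The OSB spec requires clients to declare their API version;
		// a broker may reject requests without it using 412.
		if r.Header.Get("X-Broker-API-Version") == "" {
			http.Error(w, "missing X-Broker-API-Version header", http.StatusPreconditionFailed)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprint(w, catalogJSON)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```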
A
That's a good point. I've done a little bit of looking into the Cloud Foundry test broker that you referred to, and I think it would be a good project for somebody to figure out exactly what test suite they use to test that broker. In general, we've also talked about potentially having a conformance suite for brokers and platforms, or some kind of list of conformance tests, and I think it would probably be profitable to look at that.
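One check in such a conformance suite might look like the following sketch. The Catalog struct and the assertions cover only the broadly required OSB catalog fields; CheckCatalog, its broker URL parameter, and the pinned API version are illustrative, not an existing suite's API.

```go
package conformance

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Catalog mirrors the minimal shape of an OSB /v2/catalog response.
type Catalog struct {
	Services []struct {
		ID    string `json:"id"`
		Name  string `json:"name"`
		Plans []struct {
			ID   string `json:"id"`
			Name string `json:"name"`
		} `json:"plans"`
	} `json:"services"`
}

// CheckCatalog fetches /v2/catalog from the broker at brokerURL and
// verifies that the response parses and carries the required identifiers.
// A real suite would run many such checks, including provision and bind.
func CheckCatalog(brokerURL string) error {
	req, err := http.NewRequest(http.MethodGet, brokerURL+"/v2/catalog", nil)
	if err != nil {
		return err
	}
	req.Header.Set("X-Broker-API-Version", "2.13")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("expected 200 from /v2/catalog, got %d", resp.StatusCode)
	}
	var cat Catalog
	if err := json.NewDecoder(resp.Body).Decode(&cat); err != nil {
		return fmt.Errorf("catalog response is not valid JSON: %v", err)
	}
	for _, s := range cat.Services {
		if s.ID == "" || s.Name == "" {
			return fmt.Errorf("a service is missing its required id or name")
		}
		if len(s.Plans) == 0 {
			return fmt.Errorf("service %q has no plans", s.Name)
		}
	}
	return nil
}
```

Run against the fake broker sketched earlier, this should pass; run against a real broker, a failure raises exactly the question discussed next: is the broker wrong, is the test wrong, or is there a gap in the spec?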
A
In
a
vacuum,
considering
things
that
existing
brokers
do
to
be
correct
is
twofold,
so
the
first
one
is
that
there
are
a
number
of
gaps
in
open
service
broker,
API
aspect
that
we've
been
working
to
close
we've
closed
a
number
of
them
and
the
further
we
get
the
more
seem
to
pop
up.
So
I
am
Not
sure
that
in
a
vacuum
we
can
necessarily
say
that
a
particular
X
or
Y
brokers,
behavior,
is
necessarily
correct.
A
However, we tested with a number of brokers at Red Hat that we developed, and that was really productive in finding bugs in Service Catalog. So my own personal feeling is that it's probably most actionable for individual vendors to test with the brokers that matter to them. And then there's always the question that has to be asked when some behavior isn't what's expected: is it correct?
A
Is the platform's behavior correct and the broker's wrong? Is there a gap in the spec? So I definitely applaud anyone's efforts to test with real brokers; we just have to be careful and thoughtful about whether a particular perceived breakage is actually a gap in the spec, a bug in Service Catalog, a bug in the broker, bad tests, or whatever. That's all from me.
B
We may want to plug in some tests and invest in the test infrastructure to test against reference brokers, but it may be a pitfall for us, for a few different reasons, to actually install and run conformance tests against real brokers; that may be best left to the individual vendors. Have I missed anything that anyone has said here?
B
All
right:
let's
move
on
to
the
last
item:
I
had
incorrectly
written
this
last
item
in
the
agenda.
I
fixed
it
just
now,
it's
a
final
tied
for
second
item
with
three
votes.
Again,
it's
reads:
focus
on
code
quality
and
testability.
We
per
we
purposefully
left
this
one
separate
from
controller
code
is
very
long
and
so
on.
So
whoever
wrote
this.
Can
you
give
an
overview
here?
Please.
B
Alright, so with that, thank you, Doug, for writing all these notes down. I will go through these, write up a summary of everything, and send it out to the Google Group. We can take it from there and decide what we want to do going forward and what we don't. I think it's been fairly clear from the discussion on each of the three items that we talked about in depth.
A
I think that's a good point. I was actually hoping to be able to get some technical writers who work at Red Hat to help improve the documentation quality in the repository. I haven't been successful at that so far, but I think the solution here is probably that we need some folks working in the SIG who actively contribute to documentation, just like we have folks in Kubernetes who are primarily focused on documentation. So that's my own two cents.