From YouTube: Kubernetes SIG Service Catalog 2019-7-1
A: They added a bot to the repository that, when issues haven't been responded to or touched or anything for more than 90 days, marks them stale; then 30 days later it marks them rotten, and 30 days after that it deletes them. I don't know how useful a feature this is given the way we use our repo, mainly because a lot of our issues are stuff that's very long-lived, like planned features all the way out to GA, stuff like that.
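The lifecycle described above (90 days untouched to stale, 30 more to rotten, 30 more to removal) can be sketched as a small classifier. This is only an illustrative sketch of the timing rules as stated in the meeting, not the bot's actual implementation; the function and constant names are made up.

```python
from datetime import date, timedelta

# Timing rules as described in the meeting: untouched 90 days -> stale,
# 30 more days -> rotten, 30 more days -> removed.
STALE_AFTER = timedelta(days=90)
ROTTEN_AFTER = STALE_AFTER + timedelta(days=30)
REMOVE_AFTER = ROTTEN_AFTER + timedelta(days=30)

def issue_state(last_touched: date, today: date) -> str:
    """Classify an issue by how long it has gone untouched."""
    idle = today - last_touched
    if idle >= REMOVE_AFTER:
        return "removed"
    if idle >= ROTTEN_AFTER:
        return "rotten"
    if idle >= STALE_AFTER:
        return "stale"
    return "active"

# A long-lived planned feature untouched for 100 days is already stale,
# which is the concern raised about long-lived GA-tracking issues.
print(issue_state(date(2019, 1, 1), date(2019, 4, 11)))  # -> "stale"
```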
A: Another thing to be aware of: when it culled the rotten ones, I imagine it probably deleted that stuff into the ether, which I suppose isn't too relevant. You know, if it was important, we probably would have been keeping up with it. But just something to keep in mind as we move forward.
A: Now, this was proposed by some Cloud Foundry people. For those of you unfamiliar with Cloud Foundry, the thing that issues the requests to the service brokers is the generic API server, which is also where users send API requests. So it has a single address that's, you know, published external to the cluster; this is how you talk to Cloud Foundry.
A: We don't really even necessarily know the location of the actual API server, and even if we did, that's not the thing talking to the service broker, because that's the controller. So, like, I don't know what would be appropriate here or how I would even get this piece of information, and I just wanted to know if anybody else has any thoughts.
A: Yeah, so over in Cloud Foundry the API server has this endpoint called info, which, like, has some basic information published about the thing. Again, I don't know, I don't even know what I would put in there for Kubernetes, and then, even if I did know, I don't know how I would fetch it from the controller.
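For comparison, Cloud Foundry's API does expose an unauthenticated info endpoint (GET /v2/info) that returns a small JSON document with things like the API version and the authorization endpoint. A minimal sketch of parsing that kind of response follows; the payload below is a hypothetical example, not taken from a real deployment.

```python
import json

# Hypothetical example of the kind of JSON an /info-style endpoint returns;
# the field values here are illustrative, not an exact CF response.
sample = """
{
  "name": "example-cloud-controller",
  "api_version": "2.139.0",
  "authorization_endpoint": "https://login.example.com"
}
"""

def parse_info(payload: str) -> dict:
    """Parse the info JSON and pull out the fields a client cares about."""
    doc = json.loads(payload)
    return {
        "name": doc.get("name"),
        "api_version": doc.get("api_version"),
        "auth": doc.get("authorization_endpoint"),
    }

info = parse_info(sample)
print(info["api_version"])  # -> 2.139.0
```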
B: Yeah, but it's just, it was just a question, right, what we want to do with it. Maybe it's not easy to do that right now, but the question is what to do with them.
B: Okay, I sent it on the Slack that the PRs need to be reviewed, because it's a blocker for merging the CRDs implementation, and also because those tests will be rewritten as unit tests. Thanks to that we can get rid of the integration tests of the API server, right, because when we merge these CRDs, the API server will be removed. That's the one thing.
B: The second one is to check the release process, right, so how we can do that with the master branch and a dedicated branch for the API server. And we are also right now finishing the story about the migration process. We already have all the tools set up and implemented; we are finishing the story about writing the documentation for it, so probably in the next few days for that story, with the migration from the API server to the CRDs.
B: This should be already done, and this is on our platform, so I think that only these PRs, which I sent on the Slack, should be reviewed and merged. After that, you can just test the CRDs implementation. If you want to have some deep-dive session about that, it's also not a problem.
B: So basically, we also created a script that is doing that stuff, like installing the Service Catalog in a given version and performing some upgrades. Before performing the upgrade there is some kind of setup that's creating a sample broker, stuff like that; then it performs the upgrade and one more time tests if it's still complete, if the whole configuration is still valid.
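The upgrade-test flow just described (install a given version, set up a sample broker, upgrade, then re-verify the configuration) can be sketched as plain steps. Everything below is a hypothetical skeleton with stubbed steps standing in for the real kubectl/helm commands; the function names are invented for illustration.

```python
# Hypothetical skeleton of the upgrade-test flow described above;
# each step is a stub, and `state` stands in for the cluster's state.
def install(version: str, state: dict) -> None:
    state["version"] = version

def setup_sample_broker(state: dict) -> None:
    state["broker"] = "sample-broker"
    state["instances"] = ["test-instance"]

def upgrade(version: str, state: dict) -> None:
    # Existing resources must survive the version bump.
    state["version"] = version

def verify(state: dict) -> bool:
    # After the upgrade, the whole configuration should still be valid.
    return state.get("broker") == "sample-broker" and bool(state.get("instances"))

def run_upgrade_test(old: str, new: str) -> bool:
    state: dict = {}
    install(old, state)
    setup_sample_broker(state)
    upgrade(new, state)
    return verify(state)

print(run_upgrade_test("v0.2.0", "v0.3.0"))  # -> True
```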
B: It should be done before the, before the migration release. I think that maybe next week; we will have those tickets on our sprint, right, so it should be next week. Done. And I have a question about the CI itself, because I want to finish the story about splitting the job for building images, right. We have that one main job which is building all that stuff, and I have a question about the platforms that we are supporting right now.
B: Okay, yeah, you have it here. So, for example, we have five platforms, and the question is if we really want to build the Service Catalog for five different platforms. Basically we're using only the image for the amd64 platform, and if we build the image only for that one platform, then it takes around three minutes; the maximum is around five minutes. If you want to build it for all five, then it's around 20-30 minutes, depending on the latency, right.
B: Because when you're releasing the Service Catalog, then it's built against five different platforms, right, so it really takes five times more than usual. So basically right now it takes around one hour, and this is because in one pipeline, in one build, you can cache images. So if you have downloaded the images for all those platforms already, then you don't do that anymore for the next builds, for the test broker, for the ups-broker, and for svcat, right.
B: Because I, and I understand that we are building svcat for five different platforms, because the binary itself can be run on different platforms, right. But the Service Catalog part is the Docker image, which is executed on Kubernetes, and the platform is quite agnostic, right, because it's inside the Docker image. So the question is: yeah, leave it for svcat only, and get rid of it for the Service Catalog, for the test broker, for the ups-broker, and for the healthcheck also.
B: Intentional, but yeah, that is saying about Kubernetes itself, not about the Service Catalog. Kubernetes as a binary is run, it could be run directly on the VM, right; so it could be, like svcat, executed on some host. That is the first different platform. But we are talking about something that is executed inside the cluster, which means always inside Docker, inside the Docker image, and it should be platform agnostic, right. And, and...
B: So I would just leave it only for svcat, as it is a binary that can be executed on different platforms, but for the Service Catalog, for the test broker, for the ups-broker, and for the healthcheck, which are run directly on Kubernetes, then we can have only the one that we are really, really releasing, right. Sure, okay, so I think that on Lucas's side it is already prepared.
B: All of them are built and pushed to the Docker Hub, for something different, but basically no one is using that, and I don't know why we are doing that for the binaries that are executed inside the cluster. I know why we are doing it for svcat, that seems reasonable, but for the rest it's extra work.
B: So if we get rid of them, then those builds should be more stable also. And I tried to find out something on the Internet about what the problem is, why we get some kind of "exec format" error, but I cannot find anything. I also asked the test-infra team how it is testing for them, and they also don't have any idea what the problem is, and it's...
A: ...seeing that. I'm just trying to remember exactly what we were getting with that error. I mean, I imagine it's something to do with the crazy, weird architectures that aren't, you know, amd64 Linux, sort of thing, which everything uses. I can't imagine too many people from test-infra actually use that junk in testing for us, so I imagine there might be a little bit of a bug on their end rather than a bug on our end, and that they...
B: It's fine, but it's just failing from time to time. It's, it's non-deterministic stuff, right. I also sent a message on the Slack; it was something like that, but after updating the base images there was the exec format error, and I couldn't find out what is wrong with it. Okay, so basically I will submit the request, and thank you, yeah.
B: You remember that one, what's it called, make a timeout of any requests? You already have my comment, yeah, exactly, that one; I already added it there, and probably you can check if you agree with it or not. It's the same that Florian already asked to do, so it's partially finished, because we have the global timeout, and after some time we can also add a timeout per broker.
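The timeout layering mentioned here (a global timeout now, per-broker overrides later) can be sketched as a lookup with a fallback. The names below are hypothetical illustrations, not Service Catalog's actual configuration keys.

```python
# Hypothetical sketch: a global request timeout with optional
# per-broker overrides layered on top of it.
GLOBAL_TIMEOUT_SECONDS = 60

def timeout_for(broker: str, overrides: dict) -> int:
    """Per-broker override wins; otherwise fall back to the global value."""
    return overrides.get(broker, GLOBAL_TIMEOUT_SECONDS)

overrides = {"slow-broker": 300}
print(timeout_for("slow-broker", overrides))  # -> 300
print(timeout_for("ups-broker", overrides))   # -> 60
```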