From YouTube: Kubernetes SIG API Machinery 20170830
The other one that I put on here is larger, and to my knowledge it hasn't been thoroughly reviewed yet. It's the one for making shared informers include uninitialized objects, and I think it needs attention. It's Chao's, and I'm gonna guess Chao agrees it needs attention.

Yes, I need some of your reviews.
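For context, the issue under discussion is that shared informers drop objects that still have pending initializers. A minimal sketch of that filtering behavior (illustrative names only, not the actual client-go code):

```python
# Sketch of informer-side filtering of uninitialized objects.
# Function and field names are illustrative; the real logic lives in client-go.

def is_initialized(obj):
    """An object is initialized once no initializers remain pending."""
    pending = obj.get("metadata", {}).get("initializers", {}).get("pending", [])
    return len(pending) == 0

def informer_visible(objects, include_uninitialized=False):
    """Return the objects a shared informer would surface to its consumers."""
    if include_uninitialized:
        return list(objects)
    return [o for o in objects if is_initialized(o)]

pods = [
    {"metadata": {"name": "ready-pod"}},
    {"metadata": {"name": "held-pod",
                  "initializers": {"pending": [{"name": "example.com/init"}]}}},
]
```

With the default behavior, controllers built on informers never see `held-pod` until its initializers clear, which is exactly the interaction being flagged for review.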
So there was some question about this pattern of object references in the API a while back, and we decided that the design here is not quite right. We assumed that for each pod there will be a metrics resource which has exactly the same name and exactly the same namespace, and that it will represent the metrics for the original pod. So the reference was done in a kind of indirect way. The question is: is this approach okay, or should we change it? Any comments?
I mean, I left an answer: it's reasonable to not have duplicate information. If you want, there's actually more information in the email thread; whoever scrolls through there, there's a specific example about having a pod metric whose mapping is based on the name. You could add redundant information if you wanted to. What API machinery is about is giving you the tools to do what you want in your API; the shape of the API is yours. Yeah.
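The indirect reference pattern being discussed can be sketched like this: the metrics object is found by convention (same name and namespace as the pod), not by an explicit reference field. Names here are illustrative:

```python
# Sketch: resolving a pod's metrics object by the name/namespace convention
# instead of an explicit object reference.

def metrics_for_pod(pod, all_metrics):
    """Find the metrics object that shares the pod's name and namespace."""
    key = (pod["metadata"]["namespace"], pod["metadata"]["name"])
    for m in all_metrics:
        if (m["metadata"]["namespace"], m["metadata"]["name"]) == key:
            return m
    return None  # no metrics published for this pod (yet)

pod = {"metadata": {"namespace": "default", "name": "web-1"}}
metrics = [{"metadata": {"namespace": "default", "name": "web-1"},
            "usage": {"cpu": "100m"}}]
```

The trade-off raised in the thread is visible here: there is no duplicated reference field to keep in sync, but the link exists only by convention.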
There are other lists in this bug; this was my list. These are the things I think absolutely have to have happened if we're gonna call it beta: all the API changes need to happen, and beta means we're turning it on by default, so we can't have any bad interactions between the controllers and the initialization mechanism.
So I think what I'm hearing here is that there's actually quite a lot to do in this feature before it can be legitimately called beta. Do you disagree, or do you agree?

I agree.

Yeah, okay. Anybody feel like it's worth burning the candle at both ends to try and get this to beta in this release?

I don't think it's practical, and I don't think it's right.
I think also the goal was set before we had a list of what was required to go to beta. It actually turns out, I think, that this is a really major feature that has a lot of little loose ends that need to be tied up.

Yeah, I think all the right things are happening; it's just that it may not make this release. Fine, so yeah, okay.
So, I talked about the publishing robot, which has been down for four weeks, and there's quite some work to update it in a way so it doesn't break on those events which we had, like merges of branches and so on. We have a couple of people who are blocked at the moment because we don't have published repos, so everything is four weeks old.
Basically, the proposal is: we have the fix implemented, but not in a way that we can merge it into test-infra yet, so we have manually reviewed exports and pull requests at the moment. The proposal is to push that manually, as we have done before in the past, to update our staging repos. This includes, of course, client-go, but also metrics; metrics is quite critical for some people, so maybe that's important.
We had a bug in the publishing robot that caused it to fail like three or four weeks ago, and there are people that have been blocked by this situation. sttts has fixed the robot. The algorithm is quite complex, so we don't plan to review the algorithm this week, but we do want to push the changes generated by the algorithm manually onto all those repos.
That's me. So I just wanted to bring it up for visibility that I'm working on this, and it seems like the general consensus-ish is to use the OpenShift lease reconciler. I have some code around it already; I'm working out some of the testing issues currently, locally. But is there anyone that doesn't like that fix for this issue, or wants something else?
Does that mean that you're gonna talk directly to etcd? It's gonna use the storage API layer to do that?

Yes.

Okay, and it's not going to be using an existing API.
So I think the eventual issue is that the API server is reimplementing a feature that we supply to other things that run in the cluster. Right now, you start a service, there's a controller that finds the pods that run in the service, and it updates the endpoints record, and people can see all the things that are running in the cluster. The API server has reimplemented this in a very, like, hacky way. So I think, ultimately, what we would want to do is leverage the existing controller.
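The endpoints-controller behavior described here, finding the pods behind a service and publishing their addresses, can be sketched as follows (illustrative only, not the real controller code):

```python
# Sketch: the core of an endpoints-style reconciliation loop body.
# Field names mirror the Kubernetes API shapes; logic is simplified.

def reconcile_endpoints(service, pods):
    """Select running pods matching the service's selector; emit their IPs."""
    selector = service["spec"]["selector"]
    ips = [p["status"]["podIP"] for p in pods
           if p["status"]["phase"] == "Running"
           and all(p["metadata"]["labels"].get(k) == v
                   for k, v in selector.items())]
    return {"metadata": {"name": service["metadata"]["name"]},
            "addresses": sorted(ips)}

svc = {"metadata": {"name": "web"}, "spec": {"selector": {"app": "web"}}}
pods = [
    {"metadata": {"labels": {"app": "web"}},
     "status": {"phase": "Running", "podIP": "10.0.0.2"}},
    {"metadata": {"labels": {"app": "db"}},
     "status": {"phase": "Running", "podIP": "10.0.0.3"}},
]
```

The point being made in the meeting is that the API server maintains its own endpoints record outside this generic loop, which is the "hacky" duplication.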
Is it possible... I mean, I don't know everything about the issue we're trying to solve here, but one thought is, in the long run, that we could run the endpoint controller in a special standalone mode: essentially reusing as much of the code as we can, running it in a special mode in which its only job is to make sure that this bootstrap thing works, right? Kind of the way you run a kubelet in standalone mode to get a machine off the ground.
IP allocation objects are also a little scary. I would like fewer things like this in the future rather than more, because imagine some poor cluster administrator: something breaks in the cluster, and then you go dig around in etcd to fix it, and there's all this stuff in etcd that they're not familiar with.
There are issues open for all of the allocators; there are issues open for that. It's really that, in this particular case, we don't know enough about how to really solve this for everyone the right way. Why are we trying to rush something in, versus keeping the details under cover? Yeah, it's one slightly ugly thing, but, like, we've had zero problems with this, and there's a lot of OpenShift clusters; they've been running like this for at least two... for at least a year.
That's a great point, and the things that don't show up in APIs are things that we can swap out the implementation for later. This does cross one layer, which is the API server storage layer, so it's not completely innocent, but it's certainly way better than having the hack built into the public API.
And, like, the other core controllers that run on the API servers, that do the other monkey bits that we would prefer not to have to exist, will eventually go away, and the master should be mostly generic. Even at the end of the day, two years from now, I'm pretty sure we'll still have a little bit of ugly in the core API server, because it is different than all the other API servers.
Yes, a service IP has to be atomically allocated before it becomes visible to end users, because that's the API contract we have. You know, a controller could do it better; my only worry with that was if there are core components that depend on services having an IP. The masters write their own, right? Hard-coded IPs into that map on startup. So like the kubernetes service and the insecure kubernetes service, you know, hard-code which IP they take.
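The contract described here, an IP atomically reserved before the service is published, with well-known IPs pre-reserved at startup, can be sketched as follows (illustrative only; the real allocator lives in the API server's service IP allocation code):

```python
# Sketch: a service IP allocator that reserves before publish and refuses
# duplicates, with pre-reserved well-known addresses (e.g. the default
# kubernetes service IP that the masters hard-code on startup).

class ServiceIPAllocator:
    def __init__(self, cidr_ips, reserved):
        self._free = [ip for ip in cidr_ips if ip not in reserved]
        self._used = set(reserved)

    def allocate(self, requested=None):
        """Atomically reserve an IP; fail rather than hand out a duplicate."""
        if requested is not None:
            if requested in self._used:
                raise ValueError("already allocated: " + requested)
            self._free.remove(requested)
        else:
            requested = self._free.pop(0)
        self._used.add(requested)
        return requested

pool = ["10.0.0.%d" % i for i in range(1, 10)]
# "10.0.0.1" stands in for the hard-coded kubernetes service IP.
alloc = ServiceIPAllocator(pool, reserved={"10.0.0.1"})
```

The reservation happens before the service object would be written, which is why moving this into an asynchronous controller changes the user-visible contract.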
Want to read it? Well, where's the proposal? Because enough people have said it looks good to them. So at this point, Daniel, if you want to stop it before Friday, now would be a good time to do a quick review, but Jordan and I are on track to merge, given that enough people have weighed in on the various trade-offs. A good chunk of the SIG has reviewed it. Okay.
Let it soak in master, because it's really difficult to get speculative code into all of the sub-branches. Put it in, and we'll just disable the client-side aspect of it before we ship, probably next week, once we get enough signal. The feature on the server side is disabled by default; the client is much, much more difficult to globally disable or enable. So what I was gonna do is: in master, for a period of time, the code will opt into paging, and then we'll turn that off before we ship, I think.
The server just ignores it, but I don't want a 1.8 client going out the door that talks parameters that aren't respected yet, and then we realize we need to change them; then we have to deal with old clients. That would be the crap.

Yeah, yeah, I agree.

So I want some time in master, because we have no evidence that this causes any issues, but given the masses of testing that aren't covered by PR tests, especially Kubemark and the higher-scale density tests, there's no easy way to do this otherwise.
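The paging mechanics under discussion, a server-side limit plus an opaque continue token that clients opt into, can be sketched as follows (illustrative only; the real proposal defines the token and semantics in the API machinery):

```python
import base64

# Sketch: chunked LIST with an opaque continue token. The token here is
# just a base64-encoded offset; the real design carries resource-version
# information so a chunked list stays consistent.

def list_chunk(items, limit, continue_token=None):
    """Return up to `limit` items plus a token for fetching the next chunk."""
    start = 0
    if continue_token:
        start = int(base64.b64decode(continue_token).decode())
    chunk = items[start:start + limit]
    next_start = start + len(chunk)
    token = None
    if next_start < len(items):
        token = base64.b64encode(str(next_start).encode()).decode()
    return chunk, token

names = ["pod-%d" % i for i in range(5)]
```

Because the token is opaque, an old server can simply ignore the parameters and return everything, which is exactly why shipping a client that sends them before the semantics are settled is the risk being weighed.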