From YouTube: Kubernetes SIG API Machinery 2019-09-25
A
We don't have a super packed agenda, but thanks to Jordan we have a number of items to discuss. I'm guessing that will probably expand a bit further than his prediction, but we will see. I was hoping that we could get some people from etcd; I think they are coming. But we can start if you guys are ready. Jordan, do you want to go?
B
The shape of the pull request looks okay. The syntax on the overridden-servers flag is pretty hokey anyway, so it's deterministic, it's ugly, but livable. The question I wanted to raise here was: what are the use cases that back this, and do we want to do things to encourage people sharing an etcd cluster or etcd server between clusters, or is simple parity with the existing primary cluster sufficient to add this? So I wanted to get people's thoughts on this and get more eyes on this proposal.
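For reference, the flag under discussion appears to be kube-apiserver's --etcd-servers-overrides; the "hokey" syntax is a comma-separated list of group/resource#servers pairs, with the server URLs themselves semicolon-separated. A hypothetical invocation routing events to a dedicated etcd (hostnames made up):

```shell
# Keep core resources in the primary etcd, but store events
# in a separate etcd cluster via a per-resource override.
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides="/events#https://etcd-events-0:2379;https://etcd-events-1:2379"
```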
C
Yeah, I glanced through the bug, and it looks like they want to share an etcd specifically for events among many clusters. At first I was like: that seems like not a very good idea, 'cause that's like taking all our fires and putting them in one dumpster, one big dumpster fire. But then you can see the query power there; that's actually kind of a cool idea. They claimed that the events are bursty, so if you put enough clusters on the same thing, they won't all create a ton of events at the same time.
C
I guess maybe that would work out in some environments, so maybe it makes sense. The thing I'm more likely to want, like personally, in the next six months to a year: I suspect we're gonna have to do something to help folks migrate built-in APIs to CRDs, and/or vice versa, and/or move CRDs between groups. But that affects the etcd path, not necessarily the prefix, I guess.
C
I've got to say, that doesn't seem great, especially in this scenario; like, imagine configuring that. Yeah, it seems like a bad security practice to have all of your etcds, including the one shared among all your clusters, use the same credentials, and sharing is forcing all of them to use the same one, which, yeah.
C
I guess if somebody sent me a change to split that out, assuming it doesn't break existing stuff, it'd be hard for me to argue against it, as much as I hate adding knobs to our API. One thing we could request, maybe: this is kind of a ridiculous thing to configure in a flag, so maybe it's time to ask for a file-based config, I think.
C
Then that has no impact on my life. But if we add an overarching config file to include everything, that has a large impact on my life, so I have selfish reasons for saying: let's just continue the existing pattern. Also, it'll be a bad experience for the contributor to be told, actually, you have to regularize our entire config to add this thing, I think.
D
So, as a concrete thing, Daniel: you could have something where your initial etcd, the one you do have a connection to, holds, say, pods, services, namespaces, and then references another one for events. Oh yeah, we could do fun things like that, because I could see myself wanting to.
B
So, a year and a half ago, apparently we all talked and decided this was something that we should do, and that we were okay changing the signature of client methods to thread context through to requests. We talked about it on the mailing lists; I don't think it made it out to kubernetes-dev, but maybe I'm wrong. Then a proposal got opened, back when we opened proposals against the community repo, and a couple PRs got opened and languished, and nothing much happened.
B
Then, like three to six months ago, we started plumbing context to more places to make some of our internal handling of webhooks and admission work better. Recently, we plumbed context to the aggregator so that it can cancel calls to the backend, and a pull request just merged today that makes our authentication and authorization webhook calls honor context timeouts. So there's been more attention on this recently, and I would like to see the client-side context plumbing move forward. I think we still agree that we want context and timeout and cancel support.
B
The main question I have is whether we are still happy with breaking client-go signatures on a release boundary, or if we want to generate new methods, deprecate the old methods, and then, after one to two releases, remove the old methods. The reason I can see in favor of that is that it eases upgrade: you can bump library versions without changing all of your client calls, then over a period of time migrate all your client calls, and then, before you bump to the next level of libraries, you have to have completed that migration.
G
The thing we're talking about is plumbing through context, and there are some ugly things that we might want to actually address. Adding new methods I don't like, straight up, because that impacts everyone who has extensions; it means those methods are there forever. I don't see a lot of benefit to that. I am much more in favor of doing some small things to improve usability of the client: add a new package.
G
That requires you to do an extension method, yeah, and there's a bunch of other things with the REST client. So I guess the question I had is: is context more important than us going and cleaning everything up? In which case, you know, plumbing context, as ugly as it is, is fine. I was thinking more of: do you break everyone in one release, versus giving them a way to control it over several?
B
The signatures of the client-go clients have not changed very much. GetOptions and ListOptions, those have changed a couple of times, but most releases, if you just construct a clientset from a kubeconfig and call those methods, you can upgrade without much difficulty at all, or any changes.
G
I mean, if you break everybody in one release, that means that for someone to pick up new fields, they have to refactor their whole code base. That's the part of it where I could feel some empathy for the community that still needs to change these things, because context is the right thing to do for long-running stuff, and some of these calls do have that.
G
It's not gonna work for a lot of call sites, though; there's a lot of places where you don't pass in a modifier. So basically we're saying we'll make some people's lives easier, but they could just ignore it; but those people's lives were already easy. I mean, do we want to support this long term? We kind of say no. Do we want to put context in? We're kind of saying yes. So how do we do that in a way
G
that's minimally invasive to the community? Even the people who are using just the interfaces, who probably want the interfaces, still need to be able to switch, right? If you're passing a namespace getter, for instance, you're not going to have the WithContext method unless, I guess, we add that to all of those, which I'd prefer not to.
C
As a user of the client libraries, I think I would probably prefer that they actually did include context, and I would rather write my script or find-and-replace to add my existing context, or context.TODO(), to every call site, than later have to switch every call site once I realize: oh, I can do that now.
G
And also, the problem with the wrapper is: there are lots of bugs that exist in big code bases because people added context structs. Context is the first parameter for a reason, and you mess with that at your peril. So I think the wrapper, as appealing as it might be as a workaround, just creates more bugs down the road. Is this a pull-the-band-aid-off kind of problem?
G
Thank you; that's what I was going to add. I was just afraid of the people with pitchforks, because the people with pitchforks have been particularly loud and vocal recently. It was more of a: we know that this will break people; can we do a really good job of communicating it up front? Maybe 1.17 is too aggressive, but we could give people a release to get used to the idea and say in 1.18: this is coming. Yeah, I'm fine with that.
G
But this will probably be the most impactful client change: we will create an enormous amount of work for an enormous amount of people in our ecosystem. Listing out what we are doing: we are literally telling every single person who's invested in Kube, here's an hour or two's worth of work, across thousands of people. I don't disagree that it's important.
B
When you have things that are generated, or things where you have dependencies, it's not always, you know, write a script, do a find-and-replace, and it takes you an hour. It's: go open issues and pull requests for your eight dependencies to ask them to upgrade to 1.17 libraries, regenerate, and re-vendor, and then, you know, that's the real impact. Okay, that's true. To be clear, though, the new methods don't help in that case either, because your existing dependencies aren't fulfilling those.
C
There's an intermediate option, which is: suppose we make a v2, and this is the only thing we do, and we take all of our much-needed improvements for the future and say, okay, once we have practice making a v2, we can make a v3 that fixes all that stuff. But I don't know how I feel about making this the only change we do.
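For Go modules, "making a v2" means semantic import versioning: the module path, and therefore every import of it, gains a /v2 suffix, so consumers opt in by rewriting import paths, and the v1 and v2 trees can coexist in one build. A hypothetical sketch (client-go has not actually done this):

```
// go.mod of a hypothetical client-go v2
module k8s.io/client-go/v2

go 1.13
```

Consumers would then import, for example, k8s.io/client-go/v2/kubernetes instead of k8s.io/client-go/kubernetes.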
G
Certainly, the things I was thinking about were things like slightly cleaner package names and breaking up the mega-package. The thing about the mega-package is that it pulls in every API group, so most people who are pulling in client-go's kubernetes package get every API group and so forth. There is an opportunity to at least cut that octopus that touches every bit of code that we ship in our APIs.
G
This is one of those cases where we're gonna cause enough work for everybody else in the world that maybe taking on a tiny bit of work ourselves is worth it. That was kind of the big client rethink versus the smaller one in my head: the smaller client rethink is, we just go change the generator to generate two different packages at two different locations with an if block, and that's the only change we do. Maybe it's a very small change.
C
I mean, I kind of feel like all the places that need to be updated are actually bugs, so it's okay to ask people to update them. But I do take the point that it's especially painful to update this if you have a bunch of dependencies that haven't yet. So, for that reason, maybe it is reasonable for us to add a v2 that only fixes this; we can fix the other things later if we like. But that doesn't seem nearly as important to me as the context thing.
C
You add a parallel set of packages, or however you choose to do that. If you add one and just leave the existing generators alone, it's fairly cheap for us to maintain. Eventually it doesn't make sense to maintain, like for the next decade, but for the next year, which is our deprecation period for APIs, maybe it makes sense.
G
We could propose it, and then, the moment the pitchforks come out, we just claim it was an April Fool's. But I don't know. I'm trying to think of an older example in the last couple of years that we've hit; the closest one I can think of is the go mod stuff, and we were able to deflect a lot of that onto the Go team.
C
You'd need to fix it in both; the implementation is buggy, but the way Go was supposed to work, like, I don't know how we would actually construct it, but the way you're supposed to do this in Go is to leave one package as a forwarding interface that imports the other one and calls it.
G
That would change things that are likewise almost public but have lower impact, usually on people who are either pretty deep in the guts or are doing clever and unusual things, which is a smaller set. So I think, for this one: try to minimize the amount of pain caused to the maximum number of people is a reasonably good rule of thumb for how we do it.
F
Fair, but I mean, I think if we were to do it in another language, we should just be aware: I think we'd want it to be compatible with the Go context, and then we're gonna have to build serializers, deserializers, and other things to make that work. What kind of serializers? Well, the context actually gets serialized into the request, right?
K
So I would like to do some research about CBOR and dynamic protobuf, have some proof of concept, and measure it. Also, we will loop in sig-scalability, I think, when we have a KEP. And the other one is: we cache the webhook conversion results for CRD conversion. This is more straightforward, but I still need to think through some edge cases.
C
I'm interested in the binary encoding, because if you add a new binary encoding alongside protobuf, we need to support it for all the different resources, not just inside of your CRD data. We can't add a binary encoding that works only for CRDs; I don't think that makes very much sense. David, do you agree with that? I agree.
D
I also have a significant interest in whether we can make something dynamic. I think that having some way to handle this dynamically, instead of trying to force protoc generation, would be really nice as an improvement. When we try to talk across kube API servers from a client, it gives us, I think, some hope of generically doing the right thing in the future. Yeah.
L
It's in the family of the binary JSONs, like BSON, but it's got a full spec and it's pretty sane. But yeah, I think the idea here is just to do that investigation and help get numbers, so we can see how size and latency compare. Okay, so you're just investigating; this is the problem you've come to investigate? Yeah.
L
The KEPs for 1.17 are due on October 15th, which is actually coming up pretty quick. So there were a couple of things I was interested in keeping track of around type improvements for CRDs: immutability was one, and then the idea of adding references to CRDs so that we can keep their size down and leverage other types. I just wanted to mention that we're thinking about both of those here. With immutability, I wasn't totally clear on
H
whether it's tied to the second beta. I'm not sure; was this blocked on apply? It was blocked in 1.16 on having agreement on what equality is for native types: whether it's semantic, reflection-based equality, what we have already, or something else. For CRDs it's pretty clear that we have an agreement, so the CRD part of the thing is not blocked; we're pretty clear on what to do. Okay.
D
I had concerns about this one; I think we talked about it last time as well. Yeah, I left kind of a brain dump at the end of this. So, Jordan, my concerns, briefly, would be things like: you reference something that's not there; you reference something and it changes (are you sure you really want that?); something disappears and we're using it for validation and pruning: what do you actually do then?
C
Additionally, what if two things refer to each other? Fine, I hadn't thought about that one. I think, in general, the approach is: when you refer to something, you're referring to a specific instantiation of that thing, so that if that thing changes, yeah. This is really hard, because if the thing that you depend on changes, that's really also a new version of your CRD as well, and we need to, like,
C
have something that does the conversion on the other thing that changed, but we'd need to run the storage migrator or whatever to propagate that change to all the places that refer to it, not just for built-ins, but also for CRDs that reference each other, right? It's pretty complicated to make all that work.
L
So maybe a KEP to start with that enumerates the problems and starts to look at some of the prerequisites and the various milestones we could do to make progress on this over a couple of releases? Yeah. Okay, that sounds good to me. Great. Okay, I think I know some people to talk to about this as follow-ups. Yeah.