From YouTube: Kubernetes SIG API Machinery 20180103
Description
For more information on this public meeting see this page: https://github.com/kubernetes/community/tree/master/sig-api-machinery
B
It seems to be working okay; the use case is for running them off-cluster. It's still possible to do, though it's not easy, but since it's only used by the kube-aggregator, which already has access to look up services and endpoints, I'm not concerned about it from a permissions perspective. Okay.
A
Yeah, I'd like more stuff to go from beta to GA in 2018. One of my reasons is that I feel like the extensibility stuff isn't really very useful unless it's universal across all Kubernetes clusters, so I want it to be GA so that there's no reason not to put it in conformance tests and such. I really want it to be universally consumable on any conformant Kubernetes cluster, and right now it's kind of hard to tell people that.
A
So we ship a deliverable that then goes into the Kubernetes project, rather than the current state where everything is sort of mushed together. That's something to think about. I don't think there are any actionable items on this at the moment, but just be aware that we're thinking about it.
B
We have owned it, right? We did the informers, we did the initial caches, we did the DeltaFIFO, we did the actual command that gives it the structure of controller context and initialization. We own naming them, providing config for them, and managing their flags, so we've been pretty deeply involved there.
E
The cloud controller manager, when it split out, basically had parity with the controller manager in a lot of ways, good and bad, and a lot of the improvements that we've made to the controller manager, in terms of standardizing the startup flow and the context and isolating the controller loops and things like that, did not keep pace in the cloud controller manager. So Walter, were you talking about bringing the cloud controller manager up to speed with the lessons we've learned?
D
What I'm envisioning is abstracting some of those very significant improvements you're talking about into sort of a generic controller manager, so that they can then be brought into the cloud controller manager and we won't have this problem where the cloud controller manager is a copy. It should actually be using the same core library to do a lot of its work.
A
We haven't exactly owned it alone, but we've done a lot of work, and I don't see anybody else stepping up. I don't even know where else it could logically belong, so I'm fine with annexing it, but I don't know that it really should fall under our jurisdiction in perpetuity. At the moment, though, there's stuff that we would like to change there, and I don't know who else is going to do it.
D
But yeah, that is definitely where I'm thinking of starting, and then there are a few related items where I want to make some improvements on both sides. The first thing I'm envisioning is bringing the cloud controller manager up to parity by abstracting out what should be common code and making it actually common code, I guess.
B
Right, the default would be: sure, our controllers work fine and we use them. But do we really want to encourage the pattern that says you have more than one controller per process? Or do we want to say that you have a logical unit of function and it runs in its own controller, and once you get past the core controllers that Kube provides by default, you should be trying to run them in pods on the platform in as small a unit as possible?
A
I think: why not both? That's my answer. But why do we fundamentally run multiple controllers in one process? There are a lot of problems with that; if one controller crashes, it breaks the whole cluster. The only legitimate reason I'm aware of is efficiency, right?
D
What I will say is that I think your concern is a good one, but if we can get a generic sort of controller manager logic that can then be reused, it becomes much easier to work that out, when we can start splitting the controller manager into logical pieces that are affiliated along the lines that Daniel is suggesting.
B
The single controller per binary: there is some significant conceptual purity to it. It makes it really easy to understand what's happening with your system. There's never a discussion of, well, if this controller dies, should I kill the process, or am I better off keeping the other four that are running here? It's just straight up: yeah, it died, it crash-looped.
B
I guess what I'm saying is not that I don't want you to do that refactor, or that we shouldn't do that refactor. What I'm saying is that before we try to publish a generic controller library, we should consider what we think our guidance should be, and whether what we really want is to provide something simple that'll give people standardized flags and a standardized little server that will let them look at metrics, pprof, what's currently active, and health. I mean, I don't think we have to.
A
Is this something that we should maybe have a more in-depth conversation on? Because right now field selectors are not implemented in a very efficient manner, so given the way they're implemented right now, I don't know if we want to encourage people to make heavy use of them. Hopefully name and namespace are supported; I think we support those for all resources.
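To illustrate why the current field selector implementation is considered inefficient, here is a toy Go sketch of the approach being described: the server effectively scans every object and tests each one, with no index, so every query costs O(all objects). The `Pod` fields and the supported selector names here are simplified assumptions, not the real apiserver code:

```go
package main

import "fmt"

// Pod is a tiny stand-in for a Kubernetes object; only the fields
// relevant to the illustration are included.
type Pod struct {
	Name, Namespace, NodeName string
}

// matchFieldSelector filters the way the apiserver effectively does for
// most field selectors today: iterate over every object and compare the
// requested field. Nothing is indexed.
func matchFieldSelector(pods []Pod, field, value string) []Pod {
	var out []Pod
	for _, p := range pods {
		var v string
		switch field {
		case "metadata.name":
			v = p.Name
		case "metadata.namespace":
			v = p.Namespace
		case "spec.nodeName":
			v = p.NodeName
		}
		if v == value {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []Pod{
		{"a", "default", "node1"},
		{"b", "kube-system", "node2"},
		{"c", "default", "node1"},
	}
	fmt.Println(len(matchFieldSelector(pods, "spec.nodeName", "node1"))) // 2
}
```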
F
We have not received as much pushback on the limited set of field selectors for core resources as I would have anticipated three years ago. There are definitely use cases that don't work as well, but the majority of people who are dealing with resources do tend to use controller caches, which is the biggest mitigation.
B
So, in order for your custom resource definition to be useful, you end up creating a role to go with it; the vast majority do that, right? But in order to be able to create the role, you have to pass an escalation check, and to be able to pass the escalation check means that you're able to create essentially an arbitrary RBAC role. And if you're able to create an arbitrary RBAC role, don't you already have superpowers?
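The escalation check being described can be sketched roughly like this in Go. Real RBAC also handles wildcards, API groups, and role binding paths, so this is only a simplified illustration of the rule that you cannot grant permissions you do not already hold:

```go
package main

import "fmt"

// rule is a simplified RBAC permission: a verb on a resource.
type rule struct{ Verb, Resource string }

// covers reports whether the caller's existing permissions include every
// rule in the role being created. This mirrors the spirit of the RBAC
// escalation check: creating a role is only allowed when the caller
// already holds everything the role would grant. (Exact match only;
// wildcard expansion is deliberately omitted.)
func covers(have, want []rule) bool {
	held := map[rule]bool{}
	for _, r := range have {
		held[r] = true
	}
	for _, r := range want {
		if !held[r] {
			return false
		}
	}
	return true
}

func main() {
	mine := []rule{{"get", "widgets"}, {"list", "widgets"}}
	fmt.Println(covers(mine, []rule{{"get", "widgets"}}))    // true
	fmt.Println(covers(mine, []rule{{"delete", "widgets"}})) // false
}
```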
E
The person loading the custom resource definition is presumably the same one that wants to load something that will define the webhook validator for it, and for that person to create the definition and create the accompanying roles and permissions that let people use those custom resources implies high levels of permission, correct, but...
K
I thought one of the only reasons we're even considering doing webhook validation in the custom resource spec is because you can do the exact same thing with the admission controller webhooks, but the reason we did it is that we wanted a way for people to do it that didn't require cluster-admin privileges.
E
Right, that's the point. This would be something inside the CRD that would drive, under the covers, a validating webhook config tied to the CRD, okay. So the question is: if we go do this, have we actually made it possible for someone who is not a cluster admin to define CRDs and the associated permissions and validation in a useful way? And I think there's probably a lot more that would need to be done in addition to this before we could actually get there, and so we just need to design.
E
We have to do this work to enable things inside the CRD spec and then hook up controllers to turn those into validating admission webhooks. So it's work that we're signing up to do, or someone is signing up to do, and the question is: is that work worth doing if it doesn't actually provide...
F
...what the resolution for that is. So we had to meet when he gets back and then do some discussion in SIG CLI, but just so folks are aware: if we can reach closure, then we would like to move to beta in 1.10. And there's another issue for it, which is the partial object metadata stuff for garbage collection; there was a discussion, and I'm looking at beta in 1.10 for that as well. We actually talked about it in 1.9, but we didn't reach a conclusion on it.
F
Phil and I are going through a lot of it. I think it would be interesting to many of the folks on this call, because it's about what the minimum bar is for generic things on a client, and then the trade-off with user experience: you can't necessarily make good user experiences just by using JSONPath and trying to sort through that.
F
And we've been coming back and forth with ideas like virtual fields, and how we communicate more complex, abstract concepts around the API to clients; subresources are one answer for that, and subresources are limited. So I think we'll probably pull up a lot of topics that have been discussed. I will try to make sure that everyone here gets a chance to weigh in.
E
Conversations around defaulting are actually, on a small scale, touching on the same issues. The way defaults get applied to objects today, on normal API objects, is that they're done at decode time. So when you submit an object to the server, if you omit a field that has a default, that field gets defaulted on its way into the server, and it gets persisted in etcd. And then in the next release, a new field shows up and gets defaulted the next time...
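A toy Go sketch of the decode-time defaulting behavior just described; the map-of-strings representation is an assumption for illustration, not the real codec machinery:

```go
package main

import "fmt"

// defaultOnDecode mimics how defaulting works for built-in API objects:
// when an object is decoded, any field missing from the input is filled
// in with the current default, and the result is what gets persisted.
func defaultOnDecode(obj, defaults map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range obj {
		out[k] = v
	}
	for k, v := range defaults {
		if _, ok := out[k]; !ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	// Release N: "replicas" defaults to "1"; the stored object has it.
	stored := defaultOnDecode(
		map[string]string{"image": "nginx"},
		map[string]string{"replicas": "1"})
	fmt.Println(stored["replicas"]) // 1

	// Release N+1 adds a default for "strategy". The already-persisted
	// object only picks it up the next time it is decoded.
	reread := defaultOnDecode(stored,
		map[string]string{"replicas": "1", "strategy": "RollingUpdate"})
	fmt.Println(reread["strategy"]) // RollingUpdate
}
```

The second call shows the wrinkle under discussion: objects already sitting in etcd don't see a newly introduced default until they are read and decoded again.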
E
A lot of similar issues come up with conversion, so if you're interested, I would actually suggest jumping into the defaulting-for-CRDs pull request and seeing what questions are being raised there: how we can do it efficiently, whether we have to start modifying things as we read them out of etcd and persisting those changes back, and modify-on-read starts getting really weird. A lot of those things are going to be similar with conversion, and we ended up putting all of those on hold.
A
Yeah, we should also think about how much we protect people from themselves with CRDs. Clearly, if you have a webhook that does defaulting, I think that's a good idea, but if you had a webhook that did the defaulting and then you go and change that webhook, you've probably broken yourself.
F
The other topic is server-side work. There were some discussions with SIG CLI, and I don't think there's anybody from SIG CLI on this call to represent them, on server-side validation: being able to avoid having clients implement a full OpenAPI-syntax validating engine in order to verify that objects are correct, as well as server-side apply, which would want a longer discussion. I think at the next SIG CLI meeting I'll bring it up and make sure that they're at least represented. But those are two server-side things that we've discussed.
E
That happens in the decode chain, and it actually has implications for backwards compatibility as well, like submitting objects with new fields to a one-version-old server: we can't just reject those requests. So it's unclear to me what you would do with that. Would you have to opt in to...
E
Phil and I had talked about a three-way-merge type of API where you submit the old and the new and ask the server to fetch the current and do the merge for you. So rather than trying to compute a patch format, you give the server your old and new. That way the server can convert; the server has the ability to convert those to a coherent...
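A minimal Go sketch of the three-way merge idea under discussion, over flat string maps. The real proposal operates on structured objects with typed merge semantics, so this only shows the shape of the logic: the client's delta wins, and fields the client never touched (for example, fields set by other controllers) survive:

```go
package main

import "fmt"

// threeWayMerge applies the delta between lastApplied and desired on top
// of live: keys the client added or changed win; keys the client removed
// are deleted; everything else is left as the live object has it.
func threeWayMerge(lastApplied, desired, live map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range live {
		out[k] = v
	}
	// Deletions: present in the old desired state, absent in the new one.
	for k := range lastApplied {
		if _, ok := desired[k]; !ok {
			delete(out, k)
		}
	}
	// Additions and updates from the new desired state.
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	lastApplied := map[string]string{"replicas": "1", "debug": "true"}
	desired := map[string]string{"replicas": "3"} // debug dropped, replicas changed
	live := map[string]string{"replicas": "1", "debug": "true", "owner": "controller-x"}
	merged := threeWayMerge(lastApplied, desired, live)
	fmt.Println(merged["replicas"], merged["owner"]) // 3 controller-x
}
```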
F
Both of the proposals for server-side apply involved not dealing with patches. You say: this is what I think should be the truth; or, as Jordan pointed out: this is what I think should be the truth, and this is what I think the previous version was. And it becomes a server-version-specific problem.
L
I do have one question here; this is Matt Frey. Sorry about my voice, it's why I haven't spoken up earlier; it's the thing that I lost over the holiday. One of the things that came up at KubeCon was the idea of a separate Go client from client-go.
L
It was discussed in a few circles, and I wanted to formally bring it up here because y'all own client-go. You know, I understand that it's part of Kubernetes; it's had to evolve a lot, and it regularly changes interfaces. A lot of this isn't documented well, because there is so much change and churn going on, because that's needed inside of Kubernetes. But for those who are building applications on the outside to interact with the API...
L
You know, get it rounded out with fewer or almost no API changes, other than adding new objects and object versions, just kind of maturing it and then getting the documentation and the experience and all of that into place. I was kind of told it's not high enough on the radar that it's going to happen, and so this conversation has come up in a few venues where I know people are really interested in it, and it'll probably have an outcome in the ecosystem somewhere.
L
Actually, it's already starting to happen there, and so now it becomes: does our stuff still need to use client-go? Can we use an ecosystem thing? Like, with Helm version 3 coming out, should they switch to something that's not part of Kubernetes, because there's a bunch of pain points? You know, how do we approach this? So I wanted to raise the question here, because it's already being talked about elsewhere.
B
In your idea of a separate Go client, are you talking about a simple Go client that does basic CRUD and REST operations, or are you talking about the other pieces of client-go: things like the reflectors and queueing mechanisms and the informers and the listers and all the others that are required to make, say, a client like a controller? Or are you just talking about basic CRUD?
B
I guess what I would say is that our client-go is actually a lot like our basic controller toolkit, right? This is the client that you need to be able to write a solid controller, and that's what our client-go provides. If there is instead a request for just a typed, generated client, I don't know that it would actually bother me to have the community build their own; if they build it based on OpenAPI, why not?
E
Something like kubectl: a lot of the push there is to reduce its dependence on types and make it deal with generic things generically, and so it's actually switching more and more to using unstructured objects and pulling data down sort of opaquely from the server. For a lot of the things, we're trying to move logic to the server, like server-side printing and server-side apply, and so it is dealing less and less with types and more and more with unstructured data.
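A toy version of the kind of helper that makes working with unstructured data practical; this function is an illustrative stand-in, not the actual API of the nested-field helpers in client-go's unstructured package:

```go
package main

import "fmt"

// nestedString walks a decoded-JSON object (map[string]interface{})
// along the given path, the way generic tooling reads fields without
// compiled-in types.
func nestedString(obj map[string]interface{}, path ...string) (string, bool) {
	var cur interface{} = obj
	for _, p := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		cur, ok = m[p]
		if !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	// A pod as it might look after generic JSON decoding: no Go types,
	// just nested maps.
	pod := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "web-0", "namespace": "default"},
	}
	name, _ := nestedString(pod, "metadata", "name")
	fmt.Println(name) // web-0
}
```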
E
There are also two types of changes: there are type changes and API changes from the Kubernetes API, but then there are the actual Go changes, like how you call the client methods and things like that. What I hear is more concern about the latter, and we've talked about that; it's kind of a function of the generators that we have. Even the client-go stuff that's there now is generated, and so every time we change the generators, those change the Go APIs that people have to write to. So we've talked about regenerating client-go against updated schemas, using the various versions of the generators that we have, and that would let people pick up new fields and new API objects but keep using the same Go APIs.
L
Because right now the issue is, when a new version of Kubernetes comes out, somebody's going to spend days figuring out what happened in client-go, because the changes aren't well documented; a lot of the documentation is targeted towards an end user. So they're going to take days to figure out what changed and then how they need to rewrite their programs. And it's not just client-go: they end up pulling in API machinery, because client-go ends up leaking API machinery types.
E
One thing is that the actual API types themselves, as of 1.8, are in the place where they're going to be. You can use those and decode things directly into them, and as much as you limit your applications to dealing with those types, you are insulated from changes around API machinery and things like that. New fields come into those objects, and occasionally there's a change that affects them, but those are very, very slow-moving. To the extent that you are using the other helpers that David was talking about, informers and things like that...
E
Yes, if those change, you have to react to it, and that's why being able to republish previously generated versions of client-go, with only schema and field changes, would keep people from having to touch anything in their application. That's what I'm saying would be a reasonable place to explore.
A
Yeah, I think you'd be very close to that, and if you look at our old proposal documents in the client, they call for splitting the Go client into a go-base, which is like the dynamic client, the unstructured type, the REST layer, the informer library, and a set of generated repositories. And if we had that, then you could generate a client like...
B
I'm not completely sure about that, so I actually experimented with it in OpenShift a while back, to see if I was going to be able to do it. A lot of it comes down to whether the generator tags have changed, the annotations that you use, whether new things are present, and whether the API machinery has changed enough that the tags are no longer valid. In the previous releases we've had, something generally changes between releases and it breaks.
B
What I was trying to explore with my question to Matt is exactly what he needs: if he only needs CRUD, if you need something that'll work for a CLI or for something that runs at small scale and not like a full controller, then generate basic CRUD with your own types. They don't have to match the Kubernetes types; there's no reason for them to.
L
This is what somebody expects in a client, right? Whoever is going to consume it, whether they're working on kubectl or something out in the ecosystem, they don't want to go take a bunch of files and generate their own client. They just want to say, in Go, import this package, and it works, and it's easily versioned, and I do minor version updates and I get new features such as new types.
L
The kind of direction I've heard more than once is that the clients should come from the Kubernetes organization, not happen in the ecosystem. So if Helm, which is a Kubernetes project, say, doesn't use client-go and goes and uses something that's not part of the Kubernetes ecosystem, what does that say? Is that kosher? Is it not? How do we approach this, if the general guidance is saying no, you should use this, and if there's a problem with client-go, we...
E
The issue here is that by making client-go behave that way, we are breaking the use cases of having it update and react to changes in the types and the API machinery that is developing, and so the cost of that is breaking the current use cases that client-go is meeting. Okay, I think it's all about what you value: if you want to write a point-in-time application and never touch it again, then you probably don't want to be using the client-go that is keeping up to date with changes in API machinery.
E
If you want to write something that is able to keep up to date, and you want to make it work against newer releases: if you write your application, it will continue working against the Kubernetes API even on a newer server, because our APIs accept requests from older clients. But what's being requested is both: I want to take advantage of new features and not have to react to the changes that come along with those releases, and right now that's not realistic.
L
That's not exactly what I mean: they want to be able to take advantage of new features and new properties and things like that, but they want it in an additive manner, not a breaking-change manner. Instead, when client-go comes out for every new version of Kubernetes, it's a major API breaking change; in semver, you've got the first number changing.
E
And I mean, just in terms of a Go API, every change is a breaking API change. If you add a field to a public struct, it's a breaking API change, depending on how people constructed their struct; technically, every change is a breaking API change. And again, many changes that go in are additive. You know, we have a new option in ListOptions that says includeUninitialized; does that break people? No, it was additive. Other changes are not.
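The additive-change point can be shown in plain Go: adding a field to a public struct keeps keyed-literal callers compiling, and the zero value preserves the old behavior, while positional-literal callers would break, which is the "depending on how people constructed their struct" caveat. The struct here is a simplified stand-in, not the real ListOptions:

```go
package main

import "fmt"

// ListOptions as a library might ship it after an additive change.
// Callers that construct it with keyed fields keep compiling; the new
// field's zero value keeps the old behavior.
type ListOptions struct {
	LabelSelector string
	FieldSelector string
	// IncludeUninitialized was added in a later release; older code
	// simply never sets it and gets the zero value, false.
	IncludeUninitialized bool
}

func main() {
	// Code written before the new field existed, using keyed fields:
	opts := ListOptions{LabelSelector: "app=web"}
	fmt.Println(opts.IncludeUninitialized) // false
}
```

A positional literal like `ListOptions{"app=web", ""}` written against the two-field version would fail to compile once the third field exists, which is why strictly speaking even this change is breaking.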
E
You know, it's a new parameter to a function, and doing that in a way that would let people keep calling the old function with the old signature makes the client-go API sort of nonsensical. So our priority has been to build a coherent API, and if people want to keep using the old versions of client-go, they can; if they want the new features, sometimes that comes with signature changes, not always. And like we said, the more you're doing just standard CRUD stuff, the less likely you are to be impacted.
A
I have one comment on the generated OpenAPI machinery. The OpenAPI-generated client-go, as Jordan said, is not always backward compatible when fields are added, but the changes would definitely be fewer than with the client-go that we have right now, and more mechanical, when you can generate a new client. My stand on this is that I'm not sure we're ready to support an OpenAPI client-go right now, but we should support a client-go base repo and make it very easy for people to generate their own client code, with simple things like a kubeconfig loader, using whatever generator tool they like. I'm not sure we're ready to support that as an actual client-go.
E
What is being generated now is done in a much more principled way than it was previously. As that gets finalized, I expect the rate of change in the API types and annotations to slow, and as that slows, we will be able to continue generating clients with updated schemas and types back several versions more easily.
L
Because once you have that, then we can start layering on documentation to actually explain how to use things, because one of the most frustrating things that I also hear is: I have to go read the code to know how to do stuff. That's a really hard thing for people who just want to use the API; they don't want to learn how it all works. So the next step is to really detail and explain how everything works, and you've got to have stability to be able to do that well.
A
The people here on this call are mostly back-end-type engineers, and I think we're probably not the best people to make super user-friendly client libraries. We'd get there eventually, but if history is any guide, we've got a lot more rearranging to do, like adding wheels to the bus and taking them off while the bus is moving. I'd rather let a thousand flowers bloom and have other people make clients. I think in the process of making a client...
L
And I think people understand, and they respect that. Their problem is just the practicalities of using it: the release cycle as often as it comes out, all of the changes to the Go API that they interact with, the difficulty navigating the documentation, and the changes and the upgrades they have to do. That becomes the burden.
L
They understand that there's a lot of complexity here, and they appreciate what's been created, but now it's just a regular quarterly burden that they're having to deal with, and the frustration builds over time: every quarter they have to go do this. I mean, if things settled down and documentation came into place, I'm sure they'd be much happier. It's that external user experience that they're interested in.