From YouTube: WG API Expression Bi-Weekly Meeting for 20200915
A: All right, welcome to Working Group API Expression. It's September the 15th of 2020, and we've got an item on our agenda today.

C: Yeah, sure. So, if anybody's willing to pop open the first one, the atomic tags problem: I just wanted to bring these up because they're involved and more eyes could help. The first one is something we discovered from a bug. This is the atomic tags problem.
C: The bug was that in Kubernetes, the label selector is marked as granular, or separable, which means that each part of it is considered its own thing. You can own just a little bit of the label selector, just one field or something, which doesn't really make any sense, and it gets really confusing. There's a whole bug about all the confusion that can happen, so it needs to be atomic, which is conceptually really easy to do: you can just add atomic to it.
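(Illustration, not shown in the meeting: in the Kubernetes codebase, merge strategy is declared with comment markers on the API types, so making a struct atomic is roughly a one-line change. A minimal sketch, assuming the `+structType=atomic` marker and a trimmed-down selector type; the real `matchExpressions` is a list of requirement structs, simplified here to keep the sketch self-contained.)

```go
// Sketch of a Kubernetes-style API type opting into atomic merge
// semantics. The marker tells the server-side-apply machinery to treat
// the whole struct as a single unit: an applier owns all of it or none.
//
// +structType=atomic
type LabelSelector struct {
	// matchLabels is a map of {key,value} label pairs.
	MatchLabels map[string]string `json:"matchLabels,omitempty"`
	// matchExpressions is a list of selector requirements
	// (simplified to strings for this sketch).
	MatchExpressions []string `json:"matchExpressions,omitempty"`
}
```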
C: It'll do the right thing, but that creates a new problem, because you have pre-existing data, field sets that show ownership of pieces of this struct, written before you made it atomic. And this exists in the wild today, right, so it's a real problem we have to deal with if we want to fix this label selector problem.
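(To picture the orphaned data; my own hedged illustration. The `f:` path encoding follows the managedFields fieldsV1 format, but the specific paths are made up. A field set recorded while the selector was granular claims ownership inside the selector, which stops making sense once the schema calls the selector atomic.)

```go
package main

import "fmt"

func main() {
	// Written while spec.selector was granular: the manager owns one
	// label inside the selector.
	granular := `{"f:spec":{"f:selector":{"f:matchLabels":{"f:app":{}}}}}`

	// Under the atomic schema, ownership should only ever be recorded
	// at the selector itself, as a single unit.
	atomic := `{"f:spec":{"f:selector":{}}}`

	fmt.Println("stale, pre-atomic field set: ", granular)
	fmt.Println("what the atomic schema expects:", atomic)
}
```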
C: We have to do something about it, but the consequences are that you wouldn't get a conflict when you should, and there'd be orphaned managed-field data lying around, with pretty hard-to-define consequences. Did I not share that doc?

C: All right, if you refresh, it should show now. So this exists in two different ways. One way you can run into it is the way we're running into it in Kubernetes: you basically don't want to introduce a v2 of every type in Kubernetes to fix these problems, because server-side apply is still in beta. It's an intractable thing to do, right, nobody's going to go for that, so we have to do it in an unversioned way.

C: We have to say we're going to fix the mistake that we made and make label selectors atomic.

C: So we have to somehow detect that the problem has happened and then fix it, and that's actually doable. I have code that does that: it can detect anywhere that you own the subfields of something that's atomic. Clearly there was a change there; even though we don't know what the schema was before, you can infer it in a very safe way.
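(A rough sketch of that detection; my own hedged illustration, not the actual Kubernetes code. The flattened dotted-path representation and the `isAtomic` callback are simplifications I'm assuming for brevity. The idea: walk each owned path and flag any that descends below a field the current schema marks atomic, since that ownership can only have been written under an older, granular schema.)

```go
package main

import (
	"fmt"
	"strings"
)

// detectOrphanedOwnership returns the owned paths that sit below a field
// the current schema considers atomic. ownedPaths maps a dotted field
// path to its manager; isAtomic consults the current schema.
func detectOrphanedOwnership(ownedPaths map[string]string, isAtomic func(string) bool) []string {
	var stale []string
	for path, manager := range ownedPaths {
		// Check every proper prefix of the path: owning anything below
		// an atomic field implies the entry predates the schema change.
		for i, r := range path {
			if r == '.' && isAtomic(path[:i]) {
				stale = append(stale, fmt.Sprintf("%s owns %s", manager, path))
				break
			}
		}
	}
	return stale
}

func main() {
	owned := map[string]string{
		"spec.selector.matchLabels.app": "kubectl", // written pre-change
		"spec.replicas":                 "kubectl",
	}
	isAtomic := func(p string) bool { return strings.HasPrefix(p, "spec.selector") }
	fmt.Println(detectOrphanedOwnership(owned, isAtomic))
}
```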
C: But it's not right. You should be able to say "I'm introducing a v2 of something" and just be able to change anything about the schema, and people expect that to be fine. It turns out that's actually a problem too, so this exists as a problem in both of those cases. And I've figured out how to do one of the changes, going from granular to atomic. As for going from atomic to granular...

C: It turns out it is much harder to detect, and I don't think I'm going to spend the whole time talking about it here. But it basically involves, you kind of have to try and infer it, and you need a bunch of information about all the field managers, the entire schema, all of the live objects. It's really complicated to do.

C: It's like a three-way merge of data, right? And I actually almost have it working, but it's going to be really expensive to do, and we're potentially going to have to do it on every apply, because you don't know if the schema has changed. So, anyway, this is turning into a little bit of a mess.
C: So one thing I was thinking of doing was solving the granular-to-atomic one now, which gets us out of the immediate problem with the label selector field and closes a bug, and then continuing to build out documentation about why going from atomic to granular is so hard. But I just wanted to surface this to people and explain the problem in broader scope, if anybody has any thoughts. This document covers it pretty well; there are some alternatives we considered down below.

C: One of them is, instead of doing the fix-up (I'll call it the fix-up: when you detect somehow that there has been a schema change) as part of the write-serving path, you could try to do it as some kind of background migration task. But that runs into a correctness problem, right: you can't safely handle applies on objects whose schema has changed until you fix them up.

C: So if you do it as a background thing, then basically once the schema has changed, you have to stop the world for applies, or any writes actually, until you've fixed up. You could still mitigate that by going through and saying, well, we know which ones have been migrated and which haven't, and if one hasn't been migrated, the online path could proactively do it. But then you're getting into a really complicated implementation.
A: We discussed that last time, with the idea of having a schema version hash, a checksum or something, in the field manager, to...

C: Yeah, that's in the alternatives, and what it does is help reduce the cost when nothing's changed. So, for example, if we built this and then I performance-tested it, and we found that it was super slow, unacceptably slow, and there was nothing we could do to do it better, we could bring up that idea as a way to optimize it. I was holding off on that until we saw what the performance was. But even that doesn't change...

C: That doesn't make the conversion from atomic to granular any easier. It allows you to avoid running any code when nothing's changed, and that's it.
B: Yeah, I mean, you can have an arbitrarily large number of people who own granular fields. So what happens when it becomes atomic?

B: My problem with the fact that it's not going back and forth is: what happens when we're doing it during an upgrade? Because one server is going to go from granular to atomic, but from the other's perspective, it's going to look like it's going from atomic to granular.
B: So I'm assuming this is true for built-in types and CRDs, right? (Yeah.) So a built-in type is built inside the API server. Let's say you have a new version of the API server where it went from granular to atomic, and now you update to this version of the API server, but because it's maybe an HA cluster or whatever, the upgrade is not atomic.

B: One server says it's granular, and so it's going to work one way, but some may see the fields as being atomic, even though... see what I'm saying? You're going to transform some object from granular to atomic, and then another API server is maybe going to want to go from atomic back to granular. It could totally happen.
C: I think, yeah, I think it's pretty gross while it's happening; I think you're just going to get a ton of conflicts. Let me look at that case too. It's also...

C: We kind of have to deal with that anyway, because say you have a v1 and a v2, and in v2 you make something granular while in v1 it's still atomic. Then for an undefined period of time, you have some clients working against something that thinks it's atomic and some working against something that thinks it's granular.
C: That's the ideal, yes. But unfortunately, the way that we work with field ownership, we do a lot of set operations to figure out how the field ownership was supposed to work in a different version and then transform that into the other version, and that doesn't really consider the granular/atomic change.

C: Interesting. So there are going to be problems. I think the short of this is that this is actually a really nasty problem: hidden underneath it are a lot of things that make it very hard to solve. Even if we had the previous schema, some things would get a lot better, but some of the things we mentioned aren't solved by that. If we had the previous schema, then you would actually know exactly what the change was, and you'd be like...
C: I think, yeah, I think we could do the one we would need now, which is going from granular to atomic, and solve the label selector problem that actually exists in Kubernetes right now. Sure, we can prove that the HA state might be conflict-heavy briefly, but it will coalesce to something sane.

C: Yeah, we need to see what's happening there. If we don't implement that and we only implement granular to atomic, then you'll have some clients that are still writing it as granular again, but the upgraded ones will keep trying to transition to atomic. Once everybody's upgraded, they'll eventually agree that it's atomic, and during that period, I think you might...
B: Let's... I don't...

C: The problem is, once you go to atomic, you end up owning just the top level of the record, and so anybody that owns granular fields doesn't necessarily see that as a conflict. Okay, yeah: if they just own part of the field, they don't conflict with somebody who owns just the top of it. They would...
C: If it's atomic, because atomic means you own everything; it's all one and the same. So that's still fine. But what I can do is grid out exactly what will happen, and we can look at the cases and see if there's anything that's still a real big problem, and at least try to get past this label selector upgrade.

C: Then, if you know they intended the ownership to be atomic, you could merge in an intelligent way, even if you still saw it as granular.
C: All right, okay. Let's see how we're doing on time; I've used 13 minutes. Yeah, so: tombstones. I didn't want to show everybody everything; everybody remembers the previous doc I had. But if you go down to "alternatives considered," I built out two examples, and I thought they were kind of interesting. Just as context, I know Antoine asked me this: the reason we're looking at tombstones now is not to do an implementation.

C: It's so that if we go to GA, we can say with authority that this could be a post-GA task, and we know that because we know how we could implement it, which I think is something that's going to come up. So one of the alternatives was: instead of modifying the way we represent the actual applied configuration,
C: we allow appliers to write the managed fields. By doing that, for any field they claim they own here that doesn't exist in their object, they're effectively saying they tombstoned it. The really nice property of this is that when you tombstone objects, this is the resulting field set anyway, right? This is how we're going to represent that you own something you don't have in your object, so you're actually just doing apply on the managed fields.
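(To make that concrete; my own hedged sketch. `ManagedFieldsEntry` and its fields are the real managedFields API in `k8s.io/apimachinery`, but the specific paths and values here are made up. Claiming `f:spec.f:replicas` in the field set while leaving `.spec.replicas` out of the applied object body is what effectively tombstones the field.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical applier-supplied managedFields entry: the applier
	// claims spec.replicas without setting it in the object body.
	entry := metav1.ManagedFieldsEntry{
		Manager:    "kubectl", // required key; the downside discussed below
		Operation:  metav1.ManagedFieldsOperationApply,
		APIVersion: "apps/v1",
		FieldsType: "FieldsV1",
		FieldsV1:   &metav1.FieldsV1{Raw: []byte(`{"f:spec":{"f:replicas":{}}}`)},
	}
	fmt.Printf("%+v\n", entry)
}
```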
C: It's very declarative: you're actually changing the state it's going to be in, and you're not doing anything magical. It has a couple of nice properties. It's very declarative, right, you're actually setting the state to what you want to see, and if you read it back again, you'll actually see what you wrote, which is really nice. You're not changing the apply configuration at all, so we're not breaking the schema rules; we're not changing anything like that.

C: The main downside is that it's arguably less ergonomic. If you're a developer and you do have to tombstone something, you have to figure out how to write this, and it may not be super obvious how to do it. And a point that Antoine made, that I agree with, is that you also have to put the manager in here, because it's required as a key to actually be able to write the field set; so here the manager is kubectl.
C: That's not super great, because you could imagine that if you're writing these files somewhere, you might not even know what your manager is, and the file could be applied by different managers into different systems. So that's probably the biggest practical downside to this. If you write language bindings, like a Go client that helps you do tombstones, you would never even have to worry about where this stuff goes; see the sketch below.
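(Along those lines, a hypothetical binding, entirely my invention, just to illustrate the ergonomic point: callers name the fields to tombstone, and the helper builds the fieldsV1 payload and fills in the manager key, so they never handle managedFields placement themselves.)

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tombstoneEntry builds a managedFields entry claiming the given field
// paths for manager. Each path is a slice of field names from the root.
func tombstoneEntry(manager string, paths ...[]string) (metav1.ManagedFieldsEntry, error) {
	set := map[string]interface{}{}
	for _, path := range paths {
		node := set
		for _, field := range path {
			child, ok := node["f:"+field].(map[string]interface{})
			if !ok {
				child = map[string]interface{}{}
				node["f:"+field] = child
			}
			node = child
		}
	}
	raw, err := json.Marshal(set)
	if err != nil {
		return metav1.ManagedFieldsEntry{}, err
	}
	return metav1.ManagedFieldsEntry{
		Manager:    manager, // filled in by the binding, not the caller
		Operation:  metav1.ManagedFieldsOperationApply,
		FieldsType: "FieldsV1",
		FieldsV1:   &metav1.FieldsV1{Raw: raw},
	}, nil
}

func main() {
	entry, _ := tombstoneEntry("kubectl", []string{"spec", "replicas"})
	fmt.Println(string(entry.FieldsV1.Raw)) // {"f:spec":{"f:replicas":{}}}
}
```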
C: The next alternative is just like that one, except that instead of actually writing to the managed fields stanza, we create a new stanza. It doesn't have to be called "tombstone"; that's just an example, a straw man. And you don't put the manager's name in it; you just put the fields that you want tombstoned. That's the difference between this and the previous one, so it's just a variant on that. And then the last alternative was one that had been mentioned

C: a couple of different ways. Antoine mentioned it, but in a slightly different way, and I think Daniel mentioned it exactly this way: in YAML you can have different parts separated by dash dash dash, so the idea was that you allow a second body in the apply request, which is the things you want tombstoned. The practical problem with this is that there's a lot of machinery already in Kubernetes that probably doesn't understand that you're allowed to have a dash dash dash there.
C: It's also a little easier to write, because you're kind of using the object structure. I'm not a huge fan of this approach; it feels a little bit magical. There are some variants on it. One would be that instead of being a separate thing, it could be a subresource or something, but then you lose atomicity.

C: Okay, so what I think I'm going to do with this is: I'm definitely going to comment on the ones you like, in addition to what you say in this meeting. I'm probably going to swap out the leading design for one of these recommendations, and then we'll probably shelve this design. I think what we'll do at that point is, when we talk about GA, use this as a reference for why we think we have, you know... because this one, for example, you could implement post-GA, right?
B: You forgot about me, so sorry. Okay, so we need to see: SIG API Machinery is trying to decide what's going to happen soon in API machinery, and obviously the question of server-side apply is coming up, and we need to know. Daniel was suggesting, or asking, whether we were planning on making server-side apply GA for 1.21.

B: I think the next release is going to be 1.20, if I'm...
B: You definitely need some time to soak the features if needed, so that's something to remember, and eventually we want to land in GA. I don't think we want server-side apply to not be GA for too long. Joe has gone through this process of going to GA in the past. It's not like I haven't been through it, with Julian, but I'm expecting some inputs from Joe, because the CRDs were significantly more difficult. (Yeah, happy to do that.) But so, what do you...
C: Yeah, so I had actually asked Daniel this exact same question when I started, and he said the first precondition to going to GA is figuring out what the requirements are to go to GA, and that's actually really hard.

C: Some of the things I didn't expect, that ended up being really important: you have to get conformance to a certain level. You're going to dedicate quite a bit of manpower to defining, agreeing on, and getting written, approved, and committed a bunch of conformance tests around your feature, to make sure that anybody who says they're a compliant, certified Kubernetes distribution actually has the functionality that you expect. That took a lot of time.
C: So we had to be fairly crisp on how we thought about scalability and performance, how we mapped that back to the way the rest of, say, SIG Scalability thinks about things. We talked to Scalability for quite a bit of time, made sure we understood what their guarantees were, and mapped them back to CRDs; everything ended up being, you could have so many CRDs of different things and so forth. That was a fairly involved project.

C: I don't think we're going to have as much of a scale concern, but we'll probably want to resurrect some of our performance numbers. There was also quite a bit of work on bug burn-down to get things to where they needed to be.
C: We did some user surveys around certain things. We did some around scalability, basically asking people how many of various CRD objects they have, per namespace and all this stuff, as a way of defending that.

C: I think part of going to GA is that you have to prove to the right stakeholders that this is not just GA in terms of implementation quality, but GA in terms of usage, right? Are there enough people using this thing that it should be part of... so to speak. Those were some of the big ones.
B: Did you have any feature requirements, like features that were missing and very important?

C: Yeah. For example, for CRDs going to GA, versioning had to be fully fleshed out. CRD versioning was a big, partially incomplete thing, so we had to change the way the CustomResourceDefinition objects were structured so you could have multiple versions and conversion webhooks. The conversion webhooks were all set up and secured, TLS, all the things you'd expect from a webhook. And there was a whole push around metrics from Instrumentation.
C: So we added a bunch of Prometheus metrics to CRDs to get them to the level of quality we needed. We had to work with SIG Instrumentation on that, make sure they were okay with it and felt it was complete.

C: There was a fairly substantial burn-down. One of the hardest things was getting agreement on what the remaining bug and feature gaps were; those two lists were pretty serious, and there was a lot of back and forth with SIG API Machinery. You know, there was a major complaint about webhook ordering, and so...
C: We ended up with a solution that nobody loved, but that in practice actually solved a lot of people's problems, which was that we would just run them all twice, because we didn't know what the order was going to be. So you said webhooks have to be idempotent, and we're just going to run them twice. There were some trade-offs we had to find, but identifying those, and convincing everybody that we had the right list, was a big part, and we kind of had to champion it; you couldn't just ask people what your list had to be.
C: What we would do is call out all the things that, yes, we agreed need to be done at some point, but don't necessarily need to be part of GA. Our criteria for GA were: it's needed for the feature to be useful, or it's something we can't change without breaking backwards compatibility in some way. Because once you go to GA, you really lock down your API; that's now a forever API. So being able to show that, yeah, that's important.
C: "It's a really nice feature, but people are using CRDs today and they would really benefit from having a v1, and if we don't give them a v1 because of that, that's actually not in their service. We can add it post-GA and it will still be just as good as it would be now; there's no reason it can't be added post-GA." So building that list, and then advocating that a bunch of stuff belonged in...
C: Yeah, and getting the bug burn-down going also starts to surface things. I think what you want to do is get a document that is a skeleton of what you want, and then just let the community help you fill it out as much as you can.
B: Okay, cool. I suggest we go through the status update, but keep an eye on these specifically, so for each item we discuss the GA aspect of it. (Sounds good.) And if anything's missing...

B: Ideally nothing is missing from the items, but if we can think of things that need to be done that we don't have in the list, then maybe we should add them to the list. So let's do that at the end.
A: Okay, serial value normalization. Antoine, that's on you.

B: I'm still working on that. The more I work on it, the more difficult it gets; I'm literally working on it right now. The problem is still that built-ins and CRDs are not the same, and if we want to have something that makes sense, we want them to be as similar as possible. But I don't know if we even understand properly what the differences are, so I'm trying to investigate and work on that.
B: The defaults should be very easy if we solve the other one; the two are very tightly related. I'm trying to not make them be the same, so that we can simplify the process, but at some point we were almost considering merging these two together because they're so tightly coupled. I'd rather not do that, right, yeah. We simply need defaulting to do that.
A: Yeah, I think a week ago I had some short time to push something, but remember, my last two weeks were way too full. There's still stuff waiting for review, you can look at it, and I still need to do the pod util stuff, sadly, but it should be mostly finished.
C: Jenny and I worked on an early prototype of what the generator code would look like, so I need to keep working with Jenny on that. One question I had, though, because we're talking about GA and we haven't made too much progress on the actual thing: I'm not clear whether we want this for GA or not.
C: One of the reasons I say that is that when we went to SIG API Machinery, the guidance was: don't build something in k8s, go experiment elsewhere and see if you can prove value with the developer community. Which made me wonder whether SIG API Machinery really wants this as part of a GA prerequisite.
B: I mean, when I look at it, I can hear you say, like before: we shouldn't block GA because we don't have that. It's good to have, and we think we should have it, but there's no reason to block the feature because we don't have it, yeah.
C: No, it wouldn't. And it's not clear to us yet that the majority of developers getting value out of it need it right now. We might get more developers later from it, maybe, but we don't know. Yeah, there you go. Okay, so we can mark that one as post-GA, and I'll talk to SIG API Machinery and let them know we're thinking that way.
A: Yeah, for information: we now have Java client support, support in the Java client. I saw that merged, so the Java client can now support server-side apply, I think by just sending YAML. I looked at it for a bit.

C: Yeah, and since we're probably going to get it anyway, why not take the win?
B: Not absolutely required for GA, but it's important to discuss, because I believe we need them in GA and they'll probably soak up a little bit of time, probably not too much. So I'd say if we manage to make this in 1.20, I don't think it would be blocking for making it to GA in 1.21.
A: Yeah, 1.21 sounds like a good goal.

A: I think it's a little ambitious, but I think we should go for it. Yeah, it might be, but it's still almost a year.
B: Yeah, this is API conventions, so it's not strictly server-side apply. I agree; let's talk about the topology one instead. I suspect the topology documentation is easily acquired; the other one is not necessarily specific.
B: Let's look at these bugs. Miscellaneous is mostly bugs, so that's good.

B: No, but this issue in general is blocking, because it's literally keeping people from using server-side apply in some cases.
B: I don't even know if we install the... that needs to be investigated. Okay, that's a good one.
B: Oh, yuck, yeah. If you send an int, it's going to be canonicalized to a string, and we're going to think there's a change every single time. I see. So we need to be able to canonicalize it in server-side apply, or somewhere before that, and ideally that's part of the OpenAPI, so we know how. But it's kind of difficult, because we would need to know how the canonicalization function works from inside the OpenAPI definition, and, I mean, we don't have any such thing.
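(For illustration; my own example, not shown in the meeting. The int-versus-string mismatch being described is visible with `resource.Quantity`, whose JSON decoding accepts both forms and canonicalizes them to the same value; the comparison problem arises because the applied bytes are diffed before any such canonicalization.)

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Quantity accepts a bare JSON number or a JSON string; both decode
	// to the same canonical value.
	var fromInt, fromString resource.Quantity
	_ = fromInt.UnmarshalJSON([]byte(`1`))      // what a client might send
	_ = fromString.UnmarshalJSON([]byte(`"1"`)) // canonical form on the server
	fmt.Println(fromInt.String(), fromString.String(), fromInt.Cmp(fromString) == 0) // 1 1 true
}
```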
B: Ideally, we don't have too many places where we do canonicalization of this sort, because they are bad: any time we change the object after it's been sent, we run into this problem. Yeah.

B: Also, I don't see how this is different from a mutating webhook. If the canonicalization were done by a mutating webhook, we would always have this problem, wouldn't we? Yeah. So, I mean, technically anyone can build a system that is going to be broken in such a way, and there's nothing we can do about it.
B: Yeah, so we compare the new object against the live object, and let's say the live object has a string and the new applied object has an int. Even though the int would be canonicalized to the same string, we go "oh, you're trying to change the object," and so we record that, and the timestamp gets updated.

B: So because we change the managed fields state, we do persist the object. I see. So ideally, maybe we could improve the way we compute this, the date, because I feel like, if we're not there...
C: Okay, I think this one stays in for now. Yeah, I think it's in until we can at least figure out what parts of it need to be solved.
B: This one absolutely needs to be fixed, yeah: quantity fails type checking, because if you receive an int, I think it completely fails on server-side apply. You send an int, and it needs to be a... yeah, that's just a string. And I'm pretty sure we had solved that problem initially; I can definitely remember working with Daniel on this specifically, so I'm very surprised.
B: And Andrea sent an update in the Slack channel about that; I'm going to give the update for him. Jordan found out that this bug was not because of server-side apply (oh, cool), but it's still confusing for many reasons, so we need to update something so that it's not that confusing, and the documentation.
B: I'm not going to go into the details of what's going on, because I can't remember them all, but it's nothing bad, and I'm going to say it's not even necessary...
B: This is: do we want to track managed fields for admission controllers? And this is relevant to the quantity thing. Do we want to track what changes are made by admission controllers? Yeah, I see what you're saying.
B: Got it, yeah, because back then, tracking the fields was very expensive, so we couldn't just run the tracking for every single mutating webhook. There's also the possibility of running it once and saying "these are changes that admission controllers in general made." Having the specific details, like, hey, who changed this field? oh, it's actually this mutating webhook: that would actually be very useful, I think.
B: Yeah, I thought about that a little bit when the question of GA came up. Do we need or want to do any sort of cleanups of flags in kubectl?
B: I think, if I had to give a quick opinion on that, how do I put it: Julian and I have done a ton of work on this already, and Julian has been really on top of everything there, so any experience glitch we've seen on server-side apply so far has been tackled extremely quickly. So I wouldn't be surprised if this is actually fine.
B: I mean, besides maybe the fact that people are frustrated with the managed fields being too verbose in gets and stuff, I think the experience has been somewhat polished already, so I wouldn't be surprised if we don't have anything. I'd be curious about going over the flags again, because we have many flags and they conflict with the client-side apply flags, but I wouldn't be surprised if this is mostly good.
C: That's great. I remember when we were talking with SIG API Machinery, one of the questions we had was: should we be converting controllers to be using apply? And the answer I seemed to get there was that most of the built-in controllers were not going to change. So I think if anybody asks during GA whether we should be migrating controllers or things like that, I think the answer should be neutral. Does everybody agree with that?