From YouTube: 20230126 SIG Architecture Community Meeting
A: Hello everybody, today is January 26, 2023. This is the Kubernetes architecture meeting, and I guess it's time to get started. So thanks a lot for coming today. The first item on our agenda is Rob.
C: Hey everyone, yeah, I've got a doc that's talking about a proposal, at least, to phase out beta from Gateway API. We've discussed this on the SIG Network and sig-arch mailing lists previously, as well as in Gateway API community meetings.
C: It seems like there has been support, but I really wanted to get SIG Arch's perspective here, because I recognize that if we were to do this, it would be, I think, the first time a Kubernetes API had existed without that intermediate API version. I'll try and explain the reasoning. I don't know, maybe... can I share?
C: Hopefully everyone can see that, yeah. So, you know, as we've been thinking about beta and API versioning in general in Gateway API, we've been trying to make sense of how we go GA. We have beta; we have met our graduation criteria to become a GA API, but we have been questioning: do we really need these layers, all three of these API versions, as I think one of the first major Kubernetes APIs that is CRD-based and not in-tree?
C: We have some somewhat unique release, versioning, and other processes, and one of our considerations is that every API version we introduce incurs significant cost. So when we introduce a new API version, in this case beta, we're maintaining separate type definitions for every piece of generated code. As far as I can tell, the best way to introduce an API version involves four releases, where we're going through different stages of served, storage, and deprecated states for each API version, and then, you know, at some point...
C: ...we need to go through the pain of deprecating API versions, which is especially painful, as we've all experienced. Gateway API being kind of the natural successor of the Ingress API before it, we're very familiar with the pain caused by deprecating the Ingress API. And I know these are not parallel things, not exactly the same thing, but all of these implementations... we have more than 15 implementations of this API.
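For readers unfamiliar with the mechanics Rob is referring to, the served/storage/deprecated states are per-version flags on the CRD manifest itself, and rolling storage from one version to the next takes several releases of flipping them. A rough, hypothetical sketch (the group and kind below are made up for illustration, not Gateway API's actual types):

```yaml
# Illustrative CRD fragment: a migration in progress from v1alpha2 to v1beta1.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.io          # hypothetical group/kind
spec:
  group: example.io
  names: {kind: Widget, plural: widgets, singular: widget}
  scope: Namespaced
  versions:
  - name: v1alpha2
    served: true                    # still readable/writable by clients
    storage: false                  # no longer the version persisted in etcd
    deprecated: true                # API server returns a deprecation warning
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v1beta1
    served: true
    storage: true                   # exactly one version may be the storage version
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
```

Each API version a project carries means maintaining one of these version entries, its types, and its conversion path until the deprecation cycle completes.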
C: So the argument, or proposal, that I'm making is that instead of this we should simply offer two states: "not enabled by default and not stable," and "enabled by default and stable," and not have an intermediate state that is effectively on by default (you know, I know that's changing upstream, or has changed upstream) but not completely stable.
C: That's a fair point. This does feel a bit unique, because this is a set of APIs that users are opting into, and in most cases, because they're CRDs, users have some level of control that they don't have with built-in Kubernetes APIs. One example that I highlighted, I think later on in the doc, is: if we were to remove a CRD, or deprecate a CRD entirely, users can still install it. Implementations can... we can't prevent an API from existing.
F: So, I mean, Rob, we've talked about this, and I published my own thing around about the same time proposing this same idea, sort of writ large. I have some follow-up work to do to see if I want to push forward with that and try to justify it, but you all over in Gateway land have been so gracious about being our guinea pigs for so many things, I for one am perfectly happy to let you be the guinea pig for this too.
C: I appreciate that support, yeah. You know, especially it feels like with CRDs there are these additional costs. I was trying to draw a parallel with upstream APIs, the on-by-default and off-by-default analogies, but it's a little trickier with CRDs, because it's not really clearly on and off by default; we're just providing different stability levels and letting users choose what they use.
G: Yeah, I was just going to sort of echo what you just said: the fact that someone can take an alpha CRD and layer it onto an existing cluster, and then try using it, and then delete it and clean up all the objects. It's sort of, not no-risk, but it's a very lightweight way to try something out; it's much easier to try a CRD while it's in an alpha state. To me, the question seems to be about how willing the project, I guess Gateway in this case, would be to disrupt things that had integrated with the beta API, and whether there would be a lot of reluctance to do that. It's sort of something for Gateway, where the value is in the integrations.
G: It seems like opposing goals to say: we want to get lots and lots of feedback, so we're going to encourage people to integrate with this; but then we're also going to say everything you integrate with in a beta API is not long-term stable. That seems very difficult, to say both of those things with a straight face. And so, if the posture of the Gateway project is that, once we've gotten enough experience to think that we can support this API...
G: ...the shape of the API is going to work long term, and people start integrating with it, and we're going to be very reluctant to break them: that sounds like stable. That sounds like v1. Like, we're encouraging integrations at this point, we're not planning to break those integrations, and if we made changes in the future, we would keep supporting these current integrations. That sounds like stable to me.
C: Yeah, those are really helpful clarifications, and I agree. I mean, this is an API that is very broadly implemented, and we've already built community consensus, and documented community consensus, that the changes from beta to GA are very, very minimal, and we will make every intent to not make any significant changes between those API versions. So...
A: Yeah, well, a preface is to say that I was out for a while, so I haven't actually read all the documents, so I'm going off what the discussion is here. But I think, a little bit echoing what Jordan said: if you think about it, there are a couple of different stakeholders. There's the users... the idea of the versioning is to protect a couple of different stakeholders. One is to protect the users.
A: One is to protect the maintainers, wanting to protect the people who support clusters that are deployed out there, and maybe, in the case of an integration like this, it's to support the implementers of those specific APIs. So that's kind of the maintainers, but in this case they're not the same people: the ones maintaining the API are not the ones implementing it. So I think it makes sense to me that that's why the versioning scheme is there, but in this case the CRD ability to add and remove CRDs protects the user already.
A: So the versioning isn't as critical there. And the implementers, it's clear that they don't see value in the beta, right, for the reasons you just said, so it's not really protecting them to have a three-tier structure. Similarly, I think, from a cloud provider perspective, or a cluster administrator perspective...
A: I think that likely the fact that it's a CRD, and you can add and remove it, is probably somewhat protective there as well, although I haven't thought that through. But generally speaking, I think that if you go back to why we have the versioning scheme, and who it's protecting from what, the CRD mechanism is sufficient in this case with the two stages. So to me, it makes sense, what you're saying. That's the short of it.
I: So I was going to ask: today you have the alpha, beta, GA phases. Do you feel like, if you had gone back in time and not had the beta phase, you would still have gotten the adoption and feedback?
C: I think so. It's a good question. We stayed in alpha for a long time with this API, I want to say almost two years. We had a v1alpha1; we got a lot of feedback; we made significant changes; had a v1alpha2. v1alpha2 changed to v1beta1 with no changes, and we expect v1beta1 to change to v1 with no changes again. I mean, there have been some additions to the API, but no changes beyond that.
C: So, you know, I think we're very informed that this community is largely made up of implementers of the API at this point; the people designing the API are also implementing it. And most of them are coming from the Ingress perspective, of having dealt with version compatibility and pain points along that upgrade path, and so there's a strong preference to avoid that going forward.
I: Are there aspects of the Kubernetes process, such as the PRR review, or a subset of it, that could be applied here to help? Maybe purely as a way of helping answer questions, so that when it comes time for some future alpha API to go straight to GA, those things have been thought through.
C: Yeah, I think that is a good idea. You know, much of the PRR does not seem to apply here, but I think there are at least pieces that do, and we should incorporate that. And for context, this is a bit of a discussion on graduation criteria: we currently have defined graduation criteria for beta and GA in the API, and this doc proposes merging those requirements together.
C: I think one of the most significant bits of criteria to leave alpha is that we have complete conformance test coverage and multiple implementations that are conformant. So, in my opinion, that helps alleviate concerns about production readiness: if we have multiple implementations that are passing conformance tests, they have clearly tested this out. But I agree that there are probably some aspects of PRR that we can pull in as well.
A: ...about relaxing graduation criteria for GA, right. So in that sense, if you're making the jump, or a future thing is making a jump, from alpha to GA, they still have to meet all the same graduation requirements that they would meet in beta and GA. So that's not something that would, on the face of it...
A: ...concern me. As far as PRR, I mean, I think that reviewing those questions is always helpful. But given that the implementations are all different, it's a little bit more challenging to apply them on the API. They don't really apply to the API; they apply to the implementations.
I: A closing thought on what I was asking: I feel like a lot of the confidence, or maybe lack of hesitation, on some of this is because we're talking about something that's a CRD. And maybe what that means is that in core we should think about more CRD-based things. Because if the reason why we really want alphas and betas, and everything being enabled by default or not enabled by default, is because it's so tightly controlled and coupled with a particular Kubernetes version and all that stuff, maybe there's some other discussion or thoughts there. But I think I've had my fill. I think, David, you...
E: Right, yeah. I was going to say, you know, I asked the question, and I also do see a significant distinction between a CRD that you can add to any supported Kubernetes level, and k/k, the k/k repo itself. But I want to look back at maybe a couple of APIs that we developed that did have significant changes beta to GA. One of them was the admission webhook configuration API, where we actually changed default values significantly; I think we actually changed one from false to true, which was a significant change that bit some people. And then the CRD API itself, which actually changed the way the entire object was shaped as it evolved from beta to GA.
C: A fair question, a very good question. You know, with Gateway API we've been fortunate to get reasonably good feedback and usage with our alpha APIs. One of the things I didn't really cover here is that we have a concept of release channels with our CRDs, so you can install the experimental release of any CRD, or the stable release. And so, because again these are CRDs...
C: ...it is relatively low cost to experiment with an experimental feature by installing experimental CRDs, and that has helped get feedback on alpha API features and versions. It's certainly not perfect, and I would agree that I don't know how this translates to upstream APIs, but I think for CRDs it is easier to get feedback at alpha versions.
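For context on the release channels Rob mentions: Gateway API publishes two CRD bundles per release, and a user picks a channel simply by choosing which manifest to apply. A rough sketch of what that looks like (the release tag below is illustrative; check the project's install documentation for current URLs):

```
# Standard channel: only resources and fields that have graduated past alpha.
kubectl apply -f "https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.6.0/standard-install.yaml"

# Experimental channel: additionally serves alpha resources and fields,
# letting a user opt into experimental APIs on any supported cluster.
kubectl apply -f "https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.6.0/experimental-install.yaml"
```

This is the opt-in mechanism that makes alpha feedback cheap for a CRD-based project: trying the experimental channel is one apply, and backing out is one delete.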
E: I agree with that, and I think I'd like to put the same question I just asked to Tim; I heard him on the call. Because, Tim, I think you want a similar track for k/k, based on the email I read a while back.
F: Yes. Cool. Jordan, do you mind if I jump the queue and answer? ("No, go for it.") All right. Yes, so I started writing about this; I'll paste the link to the doc in the chat in just a minute. I'm not ready to really have the argument yet about k/k, because there's a lot of good feedback that I need to incorporate and synthesize. I agree fundamentally with the proposition that Rob is making here, which is: beta, or at least my interpretation of it...
F: ...my extension of it, is that beta provides us relatively little value, provides our users relatively little value. It's effectively GA with lower quality. We should just accept that, call it that, and adjust our product development life cycle around that. The questions you're asking, David, are really good questions: how do we get feedback? I don't think the existence or non-existence of beta actually makes a difference there, by the time we put something out in beta and turn it on by default.
F: I don't know the details of that particular change. We do still support API versions, right; we do support, in general, the ability to change defaults across API versions. So you could have just gone to v1 with a poorer product and then gone to v2 straight away with a "got it right this time" thing. The distinction being: once we put it out there in beta and we get people using it, it's really hard and impactful to pull the rug out from underneath them.
E: Okay. Can I request that, as you explore the idea, you look at those two APIs in particular, validating admission webhooks and CRDs? Both had significant changes beta to GA, and they were painful. You were correct: they were painful.
F: Yep. And, I'll make this my last word on it, the short answer is: we should have found those in what I would call "preview," not in beta when it was on by default. The question then becomes: how do we get people to use previews when they're off by default? Right, we've got managed cloud providers; we've got representatives from them here. How would I convince GKE to enable preview gates, which are effectively sort of a mix of alpha and beta, on their users' clusters?
F: That's a tall order, right, it's a big ask. But if we don't get feedback, then those sorts of things will never be found, so we've got to find an answer. But I don't think that's a problem we're avoiding today anyway; a lot of beta stuff just isn't getting feedback. Beta APIs being off by default now means beta and alpha are effectively the same thing.
F: Thanks. I posted a link to the doc in the comments here. Anybody who hasn't read it, please go through and read it and shoot holes in it. I will circle back to it sort of after KEP time, when I have some more brain power.
G: Beta sort of being this weird "GA but lower quality" thing, I kind of agree with, and if we want to get rid of beta, you can push that one way or the other. You can say: oh, that lower quality, we're not quite sure about this aspect...
G: ...that's going to live in alpha, and we're just going to stay alpha and iterate until we can show feedback. And I think that's what Gateway API did, and that's encouraging to see: the idea that the alpha would iterate and get feedback and change, and wait until it could be promoted without changes.
G: The other thing, to David's point: I think CRD actually started in beta, and a lot of its initial beta shape was informed by third-party resource compatibility. Like, third-party resources didn't have a schema, and so it was, "well, I guess we can't require a schema."
E: It was a tricky situation. There were two aspects to it: it did start there in beta in part because of that, and also in part because of the GKE requirement of no alpha APIs. So it was, "well, I've got to have a replacement for this thing." So it came in as beta, yeah.
G: So yeah, it's good to be informed by history. I think some of the things that we caught in CRD between beta and GA... like, the first beta of CRDs was probably actually alpha. So hopefully we wouldn't repeat that mistake. I don't know. All right, we can move on.
C: I am going to take this as agreement to move forward, but if I should not, let me know. I'll send something like that on the mailing list, just so anyone who wants to voice any hesitation can say it, but otherwise I think we'll move forward with this.
A: Let me see, I guess I'm the one. Let's check the agenda. Next: Liggitt, discussion of the Go update KEP.
G: Funny, apparently that's not going to work. If you don't mind pulling up the link from the agenda... and there's the link. Sorry about that. Yeah, one sec, yeah.
G: So, just to give a little bit of background: this has been a topic we've talked about literally for years, under many different contexts. It came up when we were talking about the annual support cycle for Kubernetes releases, extending support from nine months to 12 months. It came up when we were talking about long-term support: the idea that it felt disingenuous, or silly, to say, "oh yeah..."
G: "...we can support a Kubernetes version for a year, or two years, or five years," when it was built on top of a Go version that was out of support after eight months. And if you switch to the markdown view, this will be way easier.
A
Yep
I
gotta
remember
how
to
do
that
in
here
yeah.
But
why
did
it
do
that.
G
There
we
go,
and
so
you
jump
to
the
motivation
for
those
who
don't
know
to
go
project
releases,
a
new
version
every
six
months.
They
support
the
last
two
minor
versions,
let's
security
fixes,
so
that
gives
a
year
of
support
for
any
given
go
Miner
version.
G
We
pick
up
that
as
quickly
as
we
can
on
kubernetes,
usually
within
a
month
or
two
or
three,
depending
on
alignment
with
kubernetes
release
cadences
and
if
there
were
any
regressions
in
the
Go
version
that
we
find
and
have
to
get
fixes
for
which
means
that
by
the
time
we
actually
ship
a
kubernetes
version,
there
might
be
like
eight
months
of
support,
left
on
that
go
minor
version
and
we
now
support
a
given
kubernetes
minor
version
for
12
to
14
months.
G
So
that
leaves
a
gap
at
the
end
where,
if
there's
a
go
vulnerability,
it's
not
going
to
be
available
on
the
go
Miner
version
that
we're
building
our
patch
releases
from.
So
the
obvious
solution
would
just
be
to
update
our
release
branches
to
new
go
Miner
versions,
but
that
has
usually
not
been
possible
because
of
behavior
changes
in
new
go
minor
versions.
So
if
you
scroll
down
I
listed
a
few
of
them,
sometimes
they're
Behavior
changes,
sometimes
they're.
G
They
just
require
a
lot
of
build
changes
or
pretty
disruptive
changes
anyway.
There's
a
variety
of
examples,
but
the
short
version
was:
we
couldn't
consistently
update
our
release
branches
to
new
Go
versions
and
keep
our
release
branches,
Behavior,
compatible
and
low
risk,
and
so
over
the
last
few
years
we've
been
talking
to
the
go
team
about
this
saying
we
would
love
to
stay
up
to
date,
but
we
can't,
for
these
reasons,
can
you
maybe
support
Go
versions
longer
or
like
stop
making
behavioral
changes?
G
Or
can
you
help
us
out
here
and-
and
they
really
took
that
to
heart
and
so
I
had
linked
to
him
discussion
in
the
go
project
and
a
proposal
which
was
just
accepted
this
last
week,
which
is
to
have
a
compatibility
mode
in
go
releases
so
that
if
you
build
a
a
module
like
kubernetes
and
your
module
says,
I
build
with
go
119..
G: So there's a commitment to put compatibility switches around changes in new Go minor versions, and to honor the behavior of the module that you're building. Yeah. Okay.
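The mechanism Jordan describes keys off the module's declared Go version. As a hedged sketch of the idea (the module path is Kubernetes' real one; the behavior described is my reading of the accepted proposal, and exact switch names vary by Go release):

```
// go.mod (fragment): under the accepted Go compatibility proposal, the
// "go" directive below determines default behavior. Building this module
// with a newer toolchain keeps Go 1.19 semantics for guarded changes,
// rather than silently adopting the new release's semantics.
module k8s.io/kubernetes

go 1.19
```

Individual switches remain overridable via GODEBUG settings (for example, `GODEBUG=x509sha1=1` re-enabled SHA-1 certificate support in Go releases of this era), and a behavior change that slips through despite the declared version becomes, per the proposal, a reportable Go bug.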
F
In
sorry,
my
question
actually
isn't
answered
by
the
screen.
I
was
going
to
ask:
are
we
going
to
go
back
and
do
like
add
back
compatibility
flags
for
all
the
examples
that
you
listed
here
or
are
we
saying
it
starts
today
and
sorry
about
the
past.
G
So
some
of
them
already
had
compatibility
switches,
so
the
sha-1
support
and
the
look
path.
Behavior
already
had
compatibility
switches.
Some
did
not
like
the
parse
IP
changes.
G: It wouldn't be quite two years, because we pick things up a few months later, but that would easily cover the gap for the open source support. And from what I can see, it would cover the support windows of the downstreams that are picking up Kubernetes versions and might have a few more months of support past open source end of life. Yeah.
F
Tim
do
I
understand
correctly.
This
is
sort
of
Monolithic
like
we
get
one
choice.
We
don't
get
to
choose
on
a
case-by-case
basis
like
if
I
choose.
If
I
say
I
wanted
the
parse
IP
changes
to
not
be
changed,
we
would
pin
to
117,
which
means
we
get
117
semantics
for
everything.
G: No, it's awesome: the version in your go.mod determines the defaults. So if you say my module is "go 1.17", then by default you get Go 1.17 behavior across the board. You can have fine-grained control in your main module to say, I want to twiddle this compatibility switch or that compatibility switch.
G
So
it
it's
exactly
what
I
would
expect
you
get
sane
defaults,
but
you
also
have
fine
grain
control
if
you
want
it
anyway,
with
with
that
on
the
horizon,
it
seems
like
a
good
chance
to
revisit
like
now
that
it
seems
like
we
actually
could
pick
up
go.
Miner
version
updates
on
our
release
branches.
What
would
our
requirements
be
for
doing
that,
and
so
the
The
Proposal
is
basically
three
parts.
One
is
to
track
the
changes
that
we
made
to
kubernetes
in
order
to
pick
up
a
new
go
Miner
version.
G
Historically,
when
we
bump
go
Miner
versions,
we
sort
of
just
put
everything
in
one
big,
PR
and
say
like
we
made
this
change,
we
made
this
change,
made
this
change
and
bump
To
Go
versions,
and
so
this
is
proposing
to
make
the
changes
required
to
adopt
the
new
Go
version
as
prereq
PRS.
So
we
can
make
sure
that
they
work
with
the
old
or
existing
Go
version
and
the
upcoming
Go
version,
and
so
there
are
examples
of
the
types
of
changes
we
normally
make.
G
Tooling
changes,
Vet
or
lint
check,
fixes
things
like
that
so
track
those
changes
and
merge
them
into
Master.
Before
we
pick
up
the
new
Go
version,
then
the
second
step
is
to
backport
those
prereq
changes
to
to
our
release
branches
and
those
should
match
our
risk
requirements
like
we
shouldn't
backport,
high-risk
things
or
really
disruptive
things.
G
If
any
of
those
prereq
changes
look
to
be
risky
or
disruptive,
we
should
try
to
make
them
minimal
or
find
Alternatives
that
let
us
export
them
and
then
the
third
step
is
the
big
one
actually
update
the
release
branches
to
the
new
go
Miner
versions
and
I
I
took
a
stab
at
sort
of
what
our
requirements
would
be.
We
want
to
avoid
regressions.
We
want
to
avoid
Behavior
changes.
G
We
want
to
avoid
people
who
are
using
our
libraries
having
to
update
their
Go
version
to
keep
using
the
libraries
on
that
kubernetes
minor
release,
and
so
this
was
just
sort
of
a
initial
stab
at
requirements.
We
want
to
allow
some
time
of
the
new
Go
version
to
get
picked
up
by
like
a
pretty
broad
community
and
get
reports
of
regressions
and
give
time
for
fixes.
G
I
think
we
should
have
a
kubernetes
release
on
the
new
Go
version
before
we
start
taking
it
back
to
older
kubernetes
releases.
So
that
gives
time
for
people
to
consume
our
release,
candidates
and
and
early
adopters,
to
pick
up
a
new
like
actual
dot,
Zero
kubernetes
release
and
and
run
it
through
qualification.
Things
like
that.
I
put
a
month
there
I
know
some
providers,
I
can
speak
for
gke
like
we
have
our
pre-prod
tests
passing
and
like
we've,
been
getting
new
versions
to
production
in
a
month.
G
So
it's
not
necessarily
super
wide
adoption,
but
it
does
mean
that
a
lot
of
pre-prod
qualification
has
been
done
in
downstreams
and
then
the
other
requirements
are
are
to
make
sure
that
there's
no
user-facing
Behavior
changes.
There's
no
action
required
items.
People
using
our
libraries
don't
have
to
update
Go
versions.
G
So
that's
an
overview
of
The
Proposal
I
wanted
feedback
on
sort
of
this
these
requirements.
If
they
were
requirements,
people
thought
we
were
missing.
People
thought
these
were
too
strict,
too
loose
yeah,
so
I'll
open
it
up
for
a
question.
A
Well,
a
minor
question
which,
knowing
you
you've,
probably
taken
care
of
this,
but
so
you've
got
you've,
got
a
number
of
little
timing.
Cycles
in
here
you've
got
the
the
go
supported,
release
versions.
You
got
the
kubernetes
first
release
versions
and
then,
in
this
requirements
that
I'm
showing
on
the
screen
now
you've
got.
You
know
three
months
before
this
and
three
months
after
this
like
are
we
we
have
the
backward
compatibility
Flags,
but
the
overall
goal
release
is
still
only
have
the
year
support
as
I
understand
it.
A
So
are
we
sure
that
the
timing
works
between
getting
a
kubernetes
release
out
that
has
this
Go
version?
You
know
that's
months
that
we
do
that
every
four
months.
You
know
you
already
said
we're
eight
to
nine
months
behind
and
then
you
go
more
three
more
versions.
Are
we
going
to
be
updating
it
to
a
version?
That's
that's
going
out
of
support
anyway.
Have
you
done
the
math
on
that
yeah.
G
We're
not
eight
to
nine
months
behind.
We
are
usually
we
usually
release
them
pretty
quickly
on
a
Go
version.
That's
like
one
to
three
months
old,
so
so
it
actually
aligns
decently.
Well,
like
the
requiring
a
kubernetes
release
and
a
month
for
feedback,
might
push
it
us
to
a
four-ish
months.
G: This ends up getting us, assuming no regressions that we discover and then have to push through a Go report, fix, and release cycle, this gets release branches onto a new Go version while it is still the newest Go version. Yeah. Other questions?
G: I actually talked through this with the SIG Release folks earlier this week. Overall, I mean, it's updating Go on older release branches, so it is some more work. But if we can actually keep all of our release branches on the same Go minor version generally, I think that overall is less work. The main addition would be the setup of the two CI jobs to make sure unit and integration tests work on the original Go version for a release branch.
G
So,
in
the
step
three,
that's
that's
the
main
change.
That's
like
a
net
new
thing,
Go
Go
version
bumps
across
release
branches
are
just
a
thing
that
we
already
do
so
doing
the
same
bump
across
two
more
release.
Branches.
Isn't
isn't
that
big
of
a
deal
so
when
the
feedback
from
Thing
released
was
to
make
sure
this
is
integrated
into
the
tooling
that
you
know
copies
out
the
CI
jobs,
okay,
but
they
weren't
particular
concerns
from
cigarettes.
B
And
then
the
next
question
is
also
people
oriented
other
than
you
and
me
do
we
have
people
signed
up
to
help?
Do
this.
G: My expectation is that, as the Go proposal lands in 1.21, it will become more and more mechanical to do this. When we made the update to Go 1.19 before the holidays, we were still in sort of one-off mitigation land, where we had to go and fiddle with GODEBUG environment variables and things like that, right? I think this is only reasonable to do in an ongoing way because Go will support compatibility automatically.
G
The
expectation
is
you
build
an
old
version
with
a
new
go
release
and
it
should
be
the
same,
and
if
it
doesn't,
you
can
open
a
bug
against
the
go
team
and
say
hey
this
started
behaving
differently.
Please
add
a
compatibility
switch
for
this.
B
Then
the
other
question
I
had
was
I
know.
We
were
looking
at
like
how
to
make
the
canneries
better
so
that
we
could
test
changes
like
this.
You
know
with
a
newer
Go
version
like
you
know,
right
now,
it's
cumbersome
to
update
the
tooling
to
newer
versions
of
go
and
then
try
this
out
so.
G: Yeah. In terms of making this a mechanical process, I would expect to be pretty involved in the next one or two of these, to Go 1.20 and Go 1.21, and iron out the special cases, make sure that the new Go approach is actually working as expected, and then make it a process that anyone can follow.
B
That
sounds
good
overall,
you
know
plus
one
from
me.
Let's
do
it,
let's
try
it
out,
and
you
know
this
will
help
everybody
for
sure.
It's
just
a
question
of
like.
Can
we
get
more
Hands
on
the
Wheel
here
from
the
people
in
the
room
that
that's
the
reason
for
asking
the
question?
G: Okay, well, we can leave it there, and if you have more comments, feel free to jump in on the KEP and I'll...
A: Next item: Rihanna.
D: Yes, I'll be short. Oh, thank you very much. Just to bring to your attention that we have brought another conformance test for the API service endpoints. I have already asked David and Jordan to take a look; let's put some eyes on that, and when that merges in, we'll only have a few left before we start digging out things from the...
F: We have some docs, which are good but incomplete, and often forgotten, and exclusively human-oriented. And so I started slapping together a mini KEP to propose that we create a repo and put some structured metadata into that repo, so that at least when we see a PR or a KEP that's adding a new annotation, we can say, "go put it in the database," and we can then crowdsource over time.
A: I mean, it seems reasonable to me. I'm a little skeptical that it will... I think we have, internally, something like that for CRD names, and it's very sparsely populated relative to the actual usage. But maybe, if KEP reviewers see it and can enforce it, and we can educate the same leads who are doing those reviews, a lot of them, that would be doable. I don't have an issue with it; I'm just...
E
So
just
a
question:
I
think
this
is
a
proposal
to
reflect
the
world,
as
it
is
not
to
pass
a
judgment
on
whether
a
name
is
good
or
is
allowed,
or
anything
like
that.
So
this
would
be
a
case
of
somebody
provides
here
is
a
name
that
is
in
use,
and
we
say
yes.
F: We can find a bunch of them just by grepping around, and by taking the human-oriented docs and importing those. Once we reach a sort of stasis, then, if you as a reviewer see somebody adding a new name to a new context or something, you can say: hey, make sure you register this over in the database over there. Where we can use the directory structure and OWNERS to say, you know what, all of the API machinery names are approved by David, so you can go and approve that PR yourself. It's not to impose central governance, or an architecture naming committee; it's more just to have it in a place where humans can go look for it, find some documentation on it, find which SIG owns it, and maybe understand it.
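A registry entry of the kind Tim sketches might look something like the following. Everything here, the file path, the field names, the layout, is a hypothetical illustration of the idea, not an agreed format:

```yaml
# registry/service.kubernetes.io/app-protocol.yaml  (hypothetical path and schema)
name: appProtocol                 # the well-known name being registered
owner: sig-network                # OWNERS in this directory gate approval
description: >
  Hints the application-layer protocol for a Service port so that
  implementations (service meshes, cloud load balancers) can configure
  themselves accordingly.
documentation: https://kubernetes.io/docs/concepts/services-networking/service/
kep: "TODO"                       # placeholder: link the introducing KEP here
```

The OWNERS-per-directory layout is what lets approval be delegated to the owning SIG rather than centralized, which matches the "reflect the world, don't govern it" framing above.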
E: That sounds good. I like the idea. Thank you.
B: Yeah, I'll go first, it's okay, yeah. This is one of those, right: we want to do this, it's going to be incredibly useful for people, but then we can't keep it up to date and, like John was saying, at some point it will fizzle out. So anything we can do to automate this, or anything we can do to put it in front of people... like, it could be a step in the KEP process, saying: hey, go update things in this repository.
B
Depending, yeah, so go do it there, so then it becomes like an automatic checkbox that people will have to tick: "hey, I need to go do a PR there before this gets approved, before the KEP gets approved," or something like that.
F
F
We have a doc, we have a page in our docs that has dozens of these things already registered. I know sftim, Tim, I don't know if he's here today, but he has taken it upon himself to remind people that they're supposed to be putting things into this doc.
F
You know, he's scanned through KEPs and PRs and stuff, so I've seen him do this enough times that I sort of preemptively volunteered him as a co-author or as an approver.
F
Difficult, that's the question, right. I would hope that we could generate the docs from this, or cross-reference them some other way. It seems like having a short, low-impact place for people to register these things is important. The context that it came up in is there's a KEP around appProtocol, Service appProtocol, which, you know, if you don't dig into that space, you never even deal with.
F
B
Is this, does this have a lower threshold? Instead of two people, LGTM plus approve, can we just do one, you know?
F
If we want to use it to generate docs, then we'd probably want somebody from SIG Docs to make sure the description is good. If we're just going to use this as a sort of database that has, you know, a short description and then a link to something, then it can just be delegated to SIG owners.
B
H
My question is mostly about downstream components, and a couple of examples here: like, CRI-O is doing an image annotation for pinned images, images that wouldn't be garbage collected. This is kind of a component-specific downstream thing that will be used, and do we want to control those as well? By "control" I mean, like, know about and spread the knowledge about. And this can turn runtime-specific, and another runtime doesn't have the same annotation. So yeah, probably a good example of the point.
F
It's a good question. My initial answer is no: this is exclusively for things that the Kubernetes project lays claim to. If this is successful and we want to extend it to sister projects, I wouldn't be against that, but that's not where I'm starting, yeah.
H
And the opposite question, about upstream: like, today we have the CLI, which knows about a few conditions a node may have, but GKE, for instance, has more conditions that a node may have. Do you want to record those conditions somewhere in a central place as well? Or, again, will it be: we know about these names, and everything else beyond that is somebody else's responsibility, you cannot look it up here, even though it's Kubernetes?
F
It's also a good question, and I think it's the same answer. If this is successful, then we could make this available for other places to register their stuff here, but our efficacy there will be much lower, because we can't require them to, and so, you know, it's just not obvious that that's useful. I would hope every provider... actually, I hope that it would have happened by now.
F
I would hope that every provider has a page that you can go to to find all of their provider-specific annotations and what they're used for and what they mean. But, you know, that hasn't really worked out, and nor has ours grown organically. So this is my attempt to inorganically bootstrap that.
F
So, you know, whenever we have a scope of stuff like labels, right, we say anything that starts with kubernetes.io as a label is ours and nobody else should use it. That doesn't stop them from doing that; they have done that, and, you know, whenever we find them, we should smack them and tell them not to do that.
F
Likewise, we say, you know, app protocols that are not prefixed should be IANA standard names, and yet I know that at least two projects, GKE and Istio, happen to use non-prefixed names that use the same value and mean different things, and both of them are wrong. Neither of them should have used a non-prefixed name, and at least now we can go to the database and say kubernetes.io/h2 means this.
H
F
Yes, all we can do is say that it's here and it has meaning, and if you use it, then other components will be able to use it, or be able to recognize it. We can't force people to use it, and I don't plan to inject myself into, like, the code review process; like, I'm not going to add a CI hook that greps for strings in Kubernetes PRs. At least not yet.