From YouTube: Kubernetes SIG Release for 20230124
A
Hey everybody, welcome to the Tuesday, January 24th SIG Release weekly meeting. My name is Jeremy and I'll be the host today. I'm gonna put the agenda into the chat. Oh, it looks like Marco already did it, thank you Marco, and with that we can go ahead and kick it off. We generally start this meeting off by allowing folks to introduce themselves if they're new. Before we do that, I'll just give a quick reminder that this meeting is covered by the CNCF code of conduct, so be mindful of the things you say on the meeting, and this will be recorded and be available on YouTube later today. So if you don't want to be on camera, feel free to turn your camera off. And with that I'll open it up: if anybody would like to introduce themselves, whether they're new to the meeting or coming back after a long break, go ahead and introduce yourselves. I'll just pause there for a minute.
B
I'm new, so I'll introduce myself before we move on: I'm at AWS on EKS, and I'm also on the SRC, the Security Response Committee.
A
Oh, and would anybody be willing to take notes today? I can do it if there's nobody else. Okay, I'll take notes while we go. All right, if anybody else wants to say hello, this is your last shot; otherwise you can say hi in the chat or something and go from there. All right, we'll jump over to subproject updates, and first up we have Muhammad: freezing the k8s.gcr.io image registry. Go for it.
D
Hi there. So this is something that we talked about two weeks ago. It's in a KEP now, so I need the SIG Release leads to approve it and then we can get going with it. It would be nice if we did this as soon as possible, because that gives us two months for the message to settle and for everybody to get an idea that that's what we're doing. And that's it: I need some SIG Release approvers to approve the KEP, and then there's some technical work for me to do around March, but I'm going to get it started now. That's it.
A
Cool
I
took
a
preliminary
pass
through
that
and
it
looked
I
had
a
couple
of
small
questions,
but
I
think
you
addressed
them.
Does
anybody
have
any
questions
about
this?
Have
you
had
a
chance
to
look
at
it
or
not?
Otherwise
we
can
take
some
actions
to
go
ahead
and
give
it
some
reviews
that
I
agree
getting
it
done.
Asap
is
a
really
good
good
thing
to
allow
some
comms
time.
This
is
a
fairly
major
change
for
folks
to
accommodate.
D
A
All
right,
moving
on
Marco
January
patch
releases,
yeah.
F
I
left
some
notes
there
about
the
January
batch
releases.
Last
week
we
had
126
1
125
6,
124,
10
and
123
16..
Those
Specialists
are
now
available.
Everything
went
most
defined,
we
had
the
hiccup
in
the
promotion
process
and
that
required
us
to
delay
12410
for
one
day,
mostly
because
it
got
late
in
the
day
and
we
needed
to
have
to
have
everyone
stay
up,
late,
ready
for
promotional
process
released
and
so
on.
F
So
we
decided
to
be
on
the
safe
side
to
move
it
for
one
day
and
then
to
continue
from
day
next
day.
Everything
went
fine
and
we
have
the
releases.
However,
one
bad
thing
that
we
have
is
that
we
are
off
the
hitting
signing
flakes
I
left
an
issue
in
the
meeting
minutes
about
this.
It
sort
of
happens
that,
when
trying
to
sign
something,
we
get
an
error
that
the
file
doesn't
exist
and
the
signature
fat
does
it
exist
and
that
breaks
the
wall
release
process.
F
So
it
is
nothing
too
bad.
We
can
just
retry
and
it
eventually
works,
but
it
is
very
annoying
sometimes
like
for
134.
We
had
to
really
try
three
or
four
times
and
like
this
is
a
waste
of
time
and
I
think
that
we
should
try
to
get
it
fixed
as
soon
as
possible.
That
said,
I
increase
the
priority
of
this
ticket
to
critical,
because
it
would
be
nice
to
get
it
fixed
in
Thai
for
February
releases.
If
that's
okay
for
everyone.
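The release tooling's actual fix is tracked in the issue mentioned above; purely as an illustrative sketch of the manual workaround described here (a bounded retry around a transiently flaky signing step, with all names hypothetical), it could look like:

```shell
#!/bin/sh
# Bounded retry for a flaky step such as artifact signing.
# retry <attempts> <cmd...>: rerun cmd until it succeeds or attempts run out.
retry() {
  attempts=$1
  shift
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      echo "failed after $attempts attempts" >&2
      return 1
    fi
    i=$((i + 1))
    sleep 1   # real tooling might back off and log here
  done
  echo "succeeded on attempt $i"
}

# Hypothetical stand-in for the signing step: fails twice, then succeeds.
flaky_sign() {
  N=$((N + 1))
  [ "$N" -ge 3 ]
}

N=0
retry 4 flaky_sign   # prints "succeeded on attempt 3"
```

A bounded attempt count keeps a genuinely broken step from hanging the release forever, which matches the behavior described: retries usually succeed, but the process still fails loudly if they don't.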
F
Okay, I don't think there's anything else there. So, the Open Build Service, or OBS, proof of concept is in progress again. I finally came up with a KEP, so I added it as an agenda point later on to quickly introduce it and to ask the questions that I have; I believe that's for the later discussion. Also, I would like to highlight that release branches are updated to Go 1.19.5 and master is getting updated to Go 1.20 RC3.
G
I was just going to note that the tests are actually totally green. There's one test job that's failing, but it's a job that doesn't normally run and doesn't actually have an image that can run. All of the e2e is passing on the new Go version, and all the canaries are passing on the new Go version, which is awesome.
A
That's
super
cool
all
right,
any
questions
about
the
go,
update
or
OBS
before
we
move
on
okay,
next
up
CC
planned
Alpha
One
cut
today.
C
Yeah,
just
as
we
scheduled
earlier
today
will
be
the
offer
2
card
for
1.20
M7
release
and
I've
learned
to
do
it
today
after
dropping
off
case.
So
we
we
do
had
a
blogger
issue.
Yesterday
pop
me
up
and
thanks
for
niheta
and
others,
we
get
episode
so
I
guess
on.
Currently
we
don't
have
any
blogging
for
the
alpha
card
yeah.
If
people
do
have
confidence,
please
feel
free
to.
Let
me
know.
A
All
right,
thank
you.
If
you
have
any
questions
or
any
running
issues
during
that
feel
free
to
reach
out
and
we'll
jump
in,
to
help
you
out
all
right
release
team,
any
updates
for
the
release
team,
Xander.
F
Yeah, sorry again for a little bit of my items, but the first one is updating KEP 1731 for building packages. I finally created that KEP, and I'm also sorry that it took so long, but we had holidays and I had many other things to take care of. It is finally there. I wanted to quickly highlight what I did there so that when you're reviewing it you have some idea: I described why we wanted to go with OBS, and I described it.
F
How this is going to look: what architectures we are going to go with, what packages, and stuff like that. I tried to keep it as close as possible to what we have right now, so same architectures, same packages. What is different is what distributions we are building for. Now, for example, we are building packages for Ubuntu 20.04, for CentOS Stream 8, and we are also building, for some architectures, for some openSUSE operating systems. So this is something to pay attention to. So far...
F
It turns out to work pretty well. A little problem is that, while for architectures like PowerPC or s390x packages are building, we don't really have a way to test them. This is something that I'm not super happy about, but I don't think there's much we can do about it until we find someone who actually uses those architectures, if there's anyone using them. Also, I did some tasks to, let's say, come close to wrapping up the proof of concept.
F
I finally got all packages building. I migrated to debbuild, so we don't have Debian specs anymore; those are not a thing. We only have RPM specs. Those specs are being converted to Debian packaging by a tool called debbuild, and then we are using the required tooling to build the packages.
F
That is working fantastically well, and I'm very happy about it. I also came up with some changes to the structure of our project in OBS, like coming up with a building or staging project, and coming up with a publishing project where we are going to publish packages, solving the problem that old packages were being deleted. So we now keep all packages and all versions. So that's some pretty important progress we've made on that.
F
What I would also like to bring up, let's say highlight, is that I'm probably going to remove everything from our OBS project this week; I mentioned that in the Slack channel. I wanted to start fresh so that I can better test it. Also, I'm going to need support from the leads: first of all to help with the KEP; the second thing is that we will need to set up an account and bring it into GCP so that we can use it with krel and stuff like that later.
F
This is something we mentioned before I actually implement it, but I hope that I will soon be able to integrate it with krel. Also, one thing is that we need to decide on the domain that we are going to use: is it going to be something like obs.kubernetes.io, is it going to be packages.kubernetes.io, or something like that? We can't keep apt.kubernetes.io and yum.kubernetes.io, because the URL structure is a little bit different (or maybe we can, I don't know), but maybe we can just go with a packages domain.
B
At
least
for
the
the
naming
we
can
I
I
think
we
can
bike
shut
down.
The
naming,
but
I
would
I
would
say
that
for
the
existing
names
we
just
make
a
clean
break.
We've
got
what
I
said:
we've
got
apt
and
and
say:
yum
right
are
the
are
the
ones
that
are
currently
being
used.
I
would
say
we
just
make
a
clean
break.
That
way,
it's
easier
to
announce
when
you
actually
do
turn
the
thing
on.
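For reference, the existing repos being discussed are consumed roughly like this (the lines below reflect the long-standing community packages of the time; any future packages domain and its layout were still undecided at this point):

```
# /etc/apt/sources.list.d/kubernetes.list (the apt.kubernetes.io repo)
deb https://apt.kubernetes.io/ kubernetes-xenial main

# /etc/yum.repos.d/kubernetes.repo (the yum repo)
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
```

The flat, per-distro-suite layout above is what makes a clean break simpler than trying to mirror the old URL structure on a new domain.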
F
Yeah, I'm going to double check, because I'm speaking just off the top of my head right now, but I've got an idea of how we can keep apt and yum. So I'm going to check, folks, if that is doable from the infra and testing front, and we'll report back on that.
B
Yeah, I would agree. So there's a supported-platforms KEP somewhere that existed kind of across releases, with at least a discussion of architecture, and really I think that support predated maybe everyone that's on this call outside of Jordan, for, you know, PowerPC as well as s390x.
F
Yeah, I think I almost brought up that we have actually considered deprecating it, like getting rid of those packages, but I think the consensus is that we don't want to do it yet; that would be a little bit bigger change. So we keep them, we build them, and hope for the best, I guess. This is the solution that we can do for now.
B
Yeah, we build them, and we mention, or continue to mention, that they are really best effort; we're providing them as best effort. I think also, you know, if you look at our builds, the go-runner images for example, where they flake out it's pretty much always on s390x builds. So that's something to keep in mind: if there is an opportunity further down the line (it doesn't need to be tied to this project's delivery), we should reassess what we're producing.
B
It's
not
a
question,
it's
more
of
a
comment,
great
work.
Thank
you
for
thank
you
for
working
on
this.
Thank
you
for
all
the
updates
that
you
provided.
Thank
you
to
the
entire
team.
Who's
been
kind
of
cranking
away
at
on
the
OBS
stuff.
I
know
that
not
all
of
us
have
been
able
to
direct
attention
at
it,
but
the
the
work
is
appreciated,
recognized.
F
I believe that information is not discoverable and accessible enough: you can find it, it is actually in a YAML file, you can find it in the sig-release repo, but if you look at the releases page you can't easily find it, nor at the release information page. So I proposed two approaches, and I would like, when you have some time, for you to please take a look and let me know which approach you like better, and whether you like this approach at all. In that case I will make the PR ready and then we can merge it.
F
If
that's
okay,
of
course,
and
the
second
PR
is
something
about
Cherry
picks
I
actually
wanted
to
discuss
it
on
two
meetings,
but
we
once
didn't
have
time.
Second
time
we
canceled
it
is
about
leaving
a
note
in
the
cherry
pick,
unapproved
message
that
if
you
did
a
cherry
pick
the
change
to
all
support
the
release
branches
to
leave
a
comment
about
that.
F
So
this
is
something
that,
while
reveling
Cherry
picks
I
encounters
pretty
often,
and
it
is
often
solid
that
the
process
making
PRS,
Mr,
Cherry,
Picked
deadline
and
I
think
it's
nice
to
highlight
that
if
you
didn't
do
so
like,
if
you
did
a
cherry
pick
to
all
supported
branches,
that
you
should
never
covered
describing,
why
you
didn't
also-
and
those
are
the
two
PRS
that
I
created-
that
I
am
looking
for
feedback.
So
please
take
a
look.
If
this
was
okay
to
you,
we
can
merge
them
if
not
yeah.
G
Yeah, so I emailed the mailing list last week, I guess, as we talked about last time, or as was talked about last time; sorry, I missed the meeting. There was an update to Go 1.19 on the 1.23 and 1.24 release branches, and that's not something we've traditionally done.
G
We've
traditionally
kept
release
branches
on
the
go,
Miner
version
they
started
with,
and
that
was
largely
due
to
not
being
able
to
consistently
update,
go
Miner
versions
and
I
talked
through
a
ton
of
the
ton
of
the
history.
G
Lots
of
examples
of
things
go
did
on
minor
versions
that
kept
us
from
updating
release
branches
to
those
new
minor
versions,
but
talking
with
the
go
team
over
the
past
two
years
and
sort
of
raising
the
issues
that
we
were
having
trying
to
keep
our
release
branches
on
supported,
Go
versions
prompted
the
go
team
to
take
a
hard
look
at
what
they
were
doing
in
minor
versions
and
adjust
what
they're
doing.
B
Yeah
and
I
just
interject
for
a
bit,
so
so
one
thing
to
note
for
people
who
are
on
the
call
is
that
go,
doesn't
quite
follow,
sember
and
so
go
minor
versions
are.
Are
things
that
if
we
were,
if
we're
looking
at
it
from
the
lens
of
semver
versioning
right,
go,
Miner
versions
are
actually
made
go.
Those
are
go
major
versions.
G
I'm sorry, the chat was for the agenda. Sometimes Go would release a new minor version and there'd be nothing problematic in it; we could update, but because we couldn't do it consistently, it was just more confusing to sometimes update Go minor versions and sometimes not. Anyway, we've sort of solidified how long we support Kubernetes minor versions, so the annual support KEP landed a couple of years ago.
G
The
mismatch
between
how
long
kubernetes
supports
patch
releases
on
a
given
minor
stream
and
how
long
the
Go
version
we
were
using
was
supported,
has
sort
of
come
up
over
and
over
again,
and
the
result
of
that
mismatch
is
that
patch
releases
for
kubernetes,
older,
supported,
kubernetes
versions
have
been
built
on
Go
versions
that
are
out
of
support
and
in
some
cases
have
you
know,
denial
of
service,
cves
or
cves
that
get
get
flagged
by
scanners
and
so
we're
in
the
not
great
position
of
having
kubernetes
versions
that
we're
actively
cutting
and
releasing
like
showing
up
flag.
G
Does.
Oh,
this
image
has
vulnerabilities
you
shouldn't
use
this
anyway.
I
opened
a
cap
kind
of
laying
out
the
history
and
some
of
the
changes
go
has
made
and
some
of
the
reasons
kubernetes
couldn't
update
in
the
past
and
with
a
proposal
for
ways
that
we
could
cautiously
update,
release
branches
to
new
Go
versions.
Once
a
set
of
conditions
were
met,
I
don't
know
if
we
want
to
open
it
now.
G
If
there's
more
topics,
we
can
sort
of
leave
it
at
that
and
ask
for
a
review
on
the
cap
or
if
we
want
to
dive
in
or
get
other
people's
thoughts.
G
B
A
Do
you
want,
do
you
want
to
bring
it
up?
I
can
give
you
presenter
permission
if
you
want
to
walk
through
it
or
I
can
bring
it
up,
and
we.
G
Can
see
prior
comments
about
not
using
Zoom
recently
yeah,
if
you
have
screen
share
how
about,
if
you
try,
it
might
be
easier.
Okay,.
A
Sounds
good,
let
me
let
me
do
that.
G
Go,
let's
start
with
the
motivation
and
goals.
That's
a
nice
place
to
start
so
this
kind
of
talks
through
what
I
was
mentioning,
where
there's
a
mismatch
between
how
long
ago
version
supported
and
how
long
the
kubernetes
version
built
on
it
is
supported.
G
I
looked
at
the
history
of
go
patch
releases
and
set
aside
my
OCD
major
about
exactly
50
of
them
mentioned
security
content.
So
obviously
that
will
change
in
the
future,
but
it's
not
rare
for
a
Go
version,
a
go
patch
release
to
say:
oh,
we
got
a
security
issue
and
fixed
it,
obviously
not
all
of
those
affect
kubernetes,
but
some
of
them
do
like.
G
We
actually
use
a
lot
of
the
standard
library
and
so
stuff
under
HTTP
servers
stuff
under
encryption
stuff
under
like
parsing
Json
like
we
actually
use
a
lot
of
a
surface
area,
so
I
would
I
would
estimate.
Maybe
half
of
the
goes
security
issues.
You
could
make
a
plausible
case
that
someone
using
kubernetes
would
be
impacted
in
some
way
and
so
then
I
talk
about
like
obviously
the
what
we
would
like
to
do
is
just
like.
G
Actually
going
to
land
and
go
121,
which
is
awesome,
I'm
super
happy
to
to
see
that,
and
so
that's
sort
of
the
background
like
it.
We
want
to
stay
up
to
date,
but
it
was
hard
and
we
couldn't
consistently
in
the
past,
and
so
now
that
there's
a
chance
that
the
problems
that
we
encounter
in
the
past
might
not
be
problems.
G
I
wanted
to
sort
of
lay
out
like
what
would
what
would
our
requirements
be
for
updating
release
branches
to
avoid
disruption,
avoid
risk,
while
staying
secure,
I
think
folks
had
already
taken
a
quick
look.
There
were
some
questions
about
if
this
was
wanting
to
change
our
approach
for
updating,
Go
versions
for
the
development
branch
and
I
I,
wouldn't
change
anything
we're
doing
there.
I
think
where
we're
at
there
is
a
pretty
healthy
place
where
we
adopt
early
and
give
feedback
and
identify
issues
early.
G
So
I
I've
been
pretty
happy
with
what
we've
been
doing
for
the
last
few
years
on
the
development
Branch.
So
this
is
really
just
talking
about
like
what.
What
would
our
requirements
be
for
taking
those
updates
back
to
older
release
branches?
G
The
main
two
things
I
think
we
want
to
avoid,
are
regressions
and
requiring
user
action
on
patch
upgrades
and
so
pretty
much.
The
rest
of
this
is
just
talking
about
how.
How
can
we
build
confidence
that
we're
not
regressing,
and
how
can
we
ensure
that,
when
we
update
a
Go
version
on
a
release,
Branch
we're
not
you're
pushing
required
actions
to
users
either
in
Behavior
changes
or
in
requiring
them
to
update
their
Go
version
when
they
build
our
libraries
yeah?
G
Okay, hearing none, I will move on. So, jumping to the proposal: there are basically three steps to what I'm proposing. The first is that we do a better job of tracking what we had to do in Kubernetes as part of updating to a new Go version.
G
In
the
past,
we've
kind
of
you've
only
been
working
on
the
development
branch
and
so
we've
sort
of
sometimes
just
smooshed
together
like
oh,
we
need
to
update
this
dependency.
We
need
to
fix
this
like
lent
check,
and
we
also
need
to
update
this
tool,
and
we
also
need
to
like
tweak
how
we're
you
know,
calling
the
standard
Library
here
and
then
we'll
also
bump
the
Go
version,
and
so
we'll
merge
this
sort
of
Mega
PR.
That's
like
a
stack
of
I,
don't
know
a
bunch
of
commits
it's
like.
G
Oh,
we
are
now
on
the
next
version
of
go
and
it
makes
it
hard
to
tell
if
any
of
those
changes
only
work
with
the
next
version
of
go
instead
of
like.
G
Ideally,
we
wouldn't
make
any
changes
that
would
break
the
previous
version
of
go
and,
and
so
the
proposal
is
to
track
what
we're
changing
as
we're
updating
the
development
Branch
to
a
new
Go
version
and
try
to
do
that
independently
and
as
a
as
a
prereq
merge,
and
that
gives
us
confidence
that
we
actually
pass
all
our
tests
and
password
submits
both
on
the
current
version
and
the
next
version
of
go
and
I
linked
to
an
example
of
how
we
were
doing
this
as
we're
actually
making
the
go.
120
update.
G
So
we
we
haven't
actually
updated
the
master
Branch
to
go
120
yet,
but
we've
merged
four
updates
to
turn
tests
green
on,
go
120.
and
because
they
merged
and
passed
and
post
submits
are
still
green.
G
We
also
know
they
still
work
on
go
119.,
so
this
is
sort
of
I
mean
the
mechanism
could
be
an
issue
or
a
project
board
or,
like
the
mechanism,
isn't
as
important,
but
this
is
sort
of
what
I'm
envisioning,
where
we
say
like
here
are
the
exact
changes
we
had
to
make
so
that
kubernetes
would
build
and
pass
tests
with
go
120
and
we
know
none
of
them
require
go
120.
They
also
work
with
go
119.,
so
that's
the
first
step,
just
tracking
that
and
trying
to
do
it
independent
of
the
Go
Bump.
G
The
second
step
is
to
take
those
changes
back
to
our
supported
release
branches,
ideally
not
preemptively.
What's
the
word,
I
lost
my
vocabulary
proactively,
and
obviously
this
should
stay
within
the
guidelines
for
what
we
accept
in
release
branches
in
terms
of
risk
and
size
and
disruption
like
none
of
these
should
be
disruptive.
None
of
these
should
be
risky
if
they
are
risky
or
disruptive.
G
We
should
look
at
reworking
them
to
be
less
risky
and
less
disruptive
and
I
kind
of
stepped
through
the
typical
changes
we
see
so
tooling
and
test
changes
we're
not
actually
affecting
what
we
ship
dependency
updates,
usually
they're
small,
sometimes
we'll
see
one
that
come
in
that's
big,
and
so
what
we've
done
in
the
past
is
go
to
that
dependency
author
and
say
like
hey.
Can
you
cut
a
patch
release
of
like
your
older
version
with
just
this
minimal
change,
so
we
can
bump
to
that
on
our
release.
G
Branches,
vet
and
lent
fixes
usually
are
real
fixes
that
we
would
want
to
backboard
and
then
yeah.
There
are
examples
of
working
with
both
go
one
and
the
previous
version
of
go
where
usually
it's
like
a
one-line
tweak,
and
it's
pretty
easy
to
reason
about
the
risk
of
that.
G
How do we know when we're comfortable taking a new Go version back to release branches? These requirements were sort of what I came up with, but I would love to hear other people's perspectives, so I'll walk through them real quick and then let people weigh in. I think we should wait until the new Go version has been around at least a few months: Go versions don't get instant adoption, so that gives time for it to percolate through the community, and for reports of regressions to come in and get investigated.
G
I
think
we
should
probably
wait
at
least
a
few
months
timing
wise
if
we
start
looking
at
making
the
update
a
few
months
in
that
sort
of
splits,
the
difference
between
staying
on
the
latest
version,
but
also
giving
time
for
feedback
so
I.
This
was
sort
of
a
shot
in
the
dark,
but
it
seemed
maybe
reasonable.
The
second
one
was
I.
Don't
think
we
should
update
older
release
branches
to
a
new
Go
version
until
we
have
an
actual
kubernetes
release
on
that
Go
version.
G
That's
out
until
we've
released
release
candidates
and
maybe
even
in
a
DOT
zero
dot,
One
release
it's
hard
to
be
confident
that
there's
not
some
education,
kubernetes
use
that
would
be
impacted,
and
so,
if
we
adopt
a
new
Go
version
or
Master
release,
you
know
the
next
kubernetes
release
on
it,
and
then
you
know
give
time
for
feedback
on
that
as
a
precondition
to
taking
that
back
further.
That
seems
reasonable
to
me.
G
I
picked
a
month
because
at
least
some
major
providers
are
get
new
minor
versions
to
production
in
a
month
and
again
kind
of
a
just
a
random
choice,
but
combined
with
waiting
for
at
least
three
months
on
the
new
Go
version.
G
I
think
that
if
we
wait
for
both
of
those
to
be
true,
that
seems
good
and
then
the
rest
of
these
are
around
like
making
sure
that
we're
not
requiring
users
to
take
action,
we're
not
requiring
users
to
bump
and
Go
versions
and
we're
not
releasing
anything
that
we
know
has
behavioral
changes.
So
we
can
mitigate
any
behavioral
changes,
and
so
the
mechanism
we
use
to
mitigate
those
changes
could
vary
the
one
we
just
did
with
the
go:
119
updates
and
123
and
124.
G
We
used
like
some
environment
variable
twiddling
internally
with
what
go
is
doing.
That
should
be
a
lot
easier
going
forward,
but
the
exact
mechanism
I'm
not
as
concerned
about
as
just
the
guarantee
that
we're
not
exposing
required
actions
or
behavior
changes
to
to
end
users
anyway.
Thoughts
on
that
are
there
things
that
we're
missing
from
here.
Do
you
see
anyone
look
at
these
and
think?
Oh,
that's!
That's
way
too
long
or
that's
way
too
short
or
yeah
be
curious
to
hear
feedback.
B
So yeah, I think generally the idea that, you know, the new version and n minus one are sane for some period of time makes sense.
B
I
do
I,
do
think
that
like
if
we
can
get
if
we
can
get
a
bit
more
precise
on
the
I,
I,
say
precise
and
then
I'm
about
to
say
edge
cases,
but
the
the
interesting
things
that
pop
up
in,
like
you
know
the
117
to
118
right,
I.
Think
the
you
know.
What
we
had
done
in
the
past
is
basically
like
you
or
I
or
dims,
or
something
would
open
this
PR
and
then
just
play
whack-a-mole
until
everything
was
happy
and
then
like.
B
That
would
essentially
be
the
tracking
PR
issue
and
I
think
that
we've
gotten
better
in
terms
of
like
documenting
what
it
looks
like
to
do.
A
pre-release,
at
least
within
the
the
scope
of
the
the
issue
template
for
k
release.
But
the
I
think
the
the
actions
to
be
taken
by
a
developer,
maybe
across
Olympics,
is
across
the
across
some
of
these
environment
variables
that
need
tweaking
I.
Think,
like
the
the
the
the
the
x509
one
was,
was
fun
because
you're
very
certificates
have
to
be
regenerated.
B
Tests
needed
to
be
updated,
et
cetera,
so
yeah
I
do
I
I
overall
agree
with
the
the
tracking
of
Behavior
changes,
I
think
if
we
can
get
a
bit
more
concrete
on
them,
based
on
what
we've
seen
in
the
past,
how
they
might
be
happening
as
well
as
having
that
contract,
that
n
and
n
minus
one
need
to
be
good
before
we
go
further
back
I
think
makes
sense.
Marco.
F
Yeah
I
would
first
like
to
comment
on
the
first
two
points,
because
I
would
I
think
that
we
can
merge
them.
No,
no,
sorry,
not
not
about
that
one
about
three
and
one
months:
okay,.
D
F
We can probably merge point one and point two into one point: if you have a Kubernetes release on that version that has been out for, like, three months, I think that it should be safe to go. I am a little bit worried about this one-month policy because, from my personal experience, adopting releases, especially new minors, takes much more than one month, both for providers and for on-prem and other users, so we wouldn't get as much of a signal.
F
For example, we released 1.26 mid-December or something like that; I doubt that anyone started using it until recently. That's the first thing. The second thing, relevant to point five in some way: one of the criteria I would think about is, should we allow updating to a newer Go version if that's going to break backwards compatibility in a way that, for libraries and other Kubernetes dependencies, forces you to use the new Go version? For example, something that I brought up...
F
Last
time
when
we
updated
the
release,
branches
is
like,
can
you
use,
client
go
and
other
dependencies
will
go
voltage,
18
dub
that
we
updated
to
1.19
I
think
this
is
one
that
is
pretty
important,
because
we
don't
really
want
to
break
that
compatibility,
because
we
can't
know
our
users
of
those
libraries
able
to
easily
update
to
newer
conversions,
so
something
that
I
would
pay
attention
to
I.
Think
that
can
be
pretty
important
as
well.
G
Yeah, big plus-one to not breaking use of the libraries on the previous Go version, and I think someone else had asked about that, so I tried to call it out there. This proposes creating unit and integration test jobs that use the original Go version for that release branch, so that we don't merge things to the branch that keep that version of Go from being able to run all the library unit and integration tests.
G
I
left
off
e
to
E
because
it
was
hard
and
because
I
think
we're
expecting
people
to
run
the
binaries
we
release,
and
so
I
would
probably
lean
on
unit
and
integration
tests.
Integration
tests
actually
spin
up
servers
in
process
and
like
exercise
them
a
lot.
So
integration
tests
cover
a
ton
of
use.
I
would
probably
lean
on
unit
and
integration
tests
passing
on
the
previous
code
version
as
an
indicator
that
the
libraries
are
usable
on
that
version,
but
be
curious
about
feedback.
B
I think we will need to get tighter on... it's going to be worth a look at the tooling that we use for creating release branches and for creating the tests for release branches, because we're going to have this kind of situation where, the first time we cut the release branch...
B
The first time we cut the release branch, we get all the tests that are assumed for the Go version that's currently out for that branch. The second go-around means that we're going to be doing a test regeneration, or rotation, whatever you want to call it, for an additional suite of tests.
B
So
one
we've
got
tests
for
the
old
Go
version,
which
are
fine
because
we
created
them
at
the
time
that
the
branch,
the
branch
and
the
release
was
going
out.
The
second
Suite
of
tests
is
going
to
be
tests
for
Go
version,
n,
plus
one
right
or,
and
at
the
time
of
release,
cut
plus
one.
So
there's
going
to
be
some
tooling
work
involved
as
well,
at
least
on
the
the
test
info
Rowland
side.
G
Oh yeah, the testing issue linked there: for a while we've wanted to make it easier to switch the Go version that tests run with, and so we started looking at doing it in a way similar to how kind does it and other Kubernetes projects do it.
G
Where
there's
a
Go
version
file
in
the
repo
that
says,
I
want
to
build
and
run
tests
with
this
version,
and
so
it's
possible,
as
that
gets
looked
at
and
and
improved,
might
be
easier
to
sort
of
make
the
the
automatic
test
jobs
just
use
the
version
specified
in
the
repo
and
if
we
need
to
create
additional
test
jobs
at
the
point
where
we
bump
Go
versions
to
say,
Force
tests
using
the
original
Go
version,
we
could
but
I
agree.
We'd
want
to
make
sure
it
was
coherent
with
our
tooling
and
documented
process.
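As a rough sketch of the pattern being described (assuming a version file like `.go-version` at the repo root; the exact filename and wiring vary by project, and the paths here are illustrative), a test job could resolve the pinned toolchain like this:

```shell
#!/bin/sh
# Resolve the Go version a branch wants its tests run with, from a file
# committed in the repo (as kind and some other Kubernetes projects do).
pinned_go_version() {
  # $1: path to the version file, e.g. ".go-version" containing "1.19.5"
  printf 'go%s' "$(cat "$1")"
}

# Demo with a throwaway pin file (path hypothetical).
printf '1.19.5\n' > /tmp/demo.go-version
pinned_go_version /tmp/demo.go-version   # prints "go1.19.5"

# A job would then fetch and invoke that toolchain, for example via the
# official per-version wrappers:
#   go install "golang.org/dl/$(pinned_go_version .go-version)@latest"
```

Keeping the version in the branch itself means bumping Go on a release branch is an ordinary reviewed PR, and the original-version jobs G mentions would simply pin to the file's value at branch-cut time.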
G
I
did
want
to
mention
one
thing
about
what
Marco
said:
I
agree
that
adoption
of
the
new
kubernetes
version
takes
longer,
but
I
think
we
actually
do
see
a
lot
of
good
reports
and
qualification
of
new
versions
during
that
one
month,
Post
Release
period,
even
if
it's
not
getting
adopted
by
end
users
during
that
month
after
release,
a
lot
of
people
are
picking
it
up
and
running
it
through
their
own
qualification
passes.
G
So
like
I
I
can
speak
for
gke
like
we,
we've
been
getting
new
versions
to
production
within
a
month,
which
means
that
they're
already
passing
all
of
our
pre-prod
qualifications,
so
scale
issues
and
test
issues
and
integration
issues.
Turning
up
in
our
own
internal
tests,
we
would
be
reporting
that
and
catching
issues
there.
So it's not perfect, and I agree it's not going to be widely adopted after a month, but I think it does get us something.
B
A regression report is likely going to come in under that one-month time limit, so I think the way we're getting feedback today is okay, as long as we continue to message whatever contract we have with the community about when things are happening, how they're happening, how you can provide feedback, et cetera.
A
All
right,
we
are
at
time
now
that
was
a
really
good
overview
Jordan.
Thank
you
doing
that.
So
it's
on
the
recording
and
other
folks
can
kind
of
come
back
and
watch
it
later.
That'll
be
really
useful
to
share
out.
If
anybody
has
any
more
questions
on
the
topic
or
any
other
topic
feel
free
to
dive
into
the
Caps
that
were
referenced.
Give
your
reviews
there
provide
feedback,
then
I
think
that'll
be
a
a
great
Improvement
for
us
going
forward.