Kubernetes SIG Release Team Bi-Weekly Meeting for 20220407
A
Hello everyone, this is April 5th, 2022, and this is the SIG Release bi-weekly meeting. My name is Adolfo García and I'm going to be your host today.

As a reminder, this meeting is governed by the Kubernetes code of conduct, which means: be awesome to each other. And remember that this meeting will be recorded to the public internet, so be mindful of what you say and do. With that, let's kick off this meeting for today. So the first thing we'd like to do is give an opportunity to any new members who would like to say hello or introduce themselves.
A
Okay, I see only familiar faces on the attendees list, so I think we can move to the next topic. Okay, the first one on the agenda is the subproject updates for release engineering. I don't know if anyone would like to say a few words.
A
Okay, well, yeah, just a quick update there. The first thing to say is that we have established a canary release and testing process for the image promoter. So we now have validation and ongoing releases of the image promoter that are constantly testing the new image signing code. They use the main end-to-end test for the promoter, but we established that because we have been doing big refactors on that code, and we wanted to make sure it's running as it's supposed to.
A
This means that we have already produced a couple of signed images, the first ones for the Kubernetes project, and it seems to be running well. There's still a bug, well, not on our side, it's rather on the Sigstore side.
A
That means that when we do the release candidate, depending on the time frames (because we now have a few more days to get that merged), the identity that will sign the releases will not be the service account that we established to identify all of the Kubernetes organization signatures.
A
We don't consider this to be a blocker, because it's still a pre-release and the upstream code in Sigstore is already fixed, but we have to wait for a new release of cosign to incorporate that change into our code.
A
That is one, and I think that's the big one from our side. I don't know if anyone has any questions around that, or we can move to the next topic.
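As context for the signing work described above, checking one of these keyless-signed images with cosign looks roughly like this. This is a sketch only: the image reference below is a placeholder, not one of the actual signed images mentioned in the meeting, and the exact invocation depends on the cosign version in use.

```shell
# Sketch: verify a keyless (Sigstore) signature on a container image.
# The image reference is a placeholder; at the time of this meeting,
# keyless verification in cosign 1.x was gated behind COSIGN_EXPERIMENTAL.
COSIGN_EXPERIMENTAL=1 cosign verify registry.example.com/releng/some-image:canary
```

On success cosign prints the certificate identity that produced the signature, which is how the service-account identity issue discussed above would show up.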
B
Sure, so code freeze has passed. Test freeze is today, well, today for US people, tomorrow for more European people, in just under 12 hours.
B
The big news, of course, as you may have seen in the email update, is that the RC-0 cut has been delayed from today until next Monday. The reason for that is that we have a blocking issue in Golang. The fix for that is known; it is on Golang's main development branch and has at the moment been cherry-picked to their 1.18 release branch. There is a new version of Go 1.18, Go 1.18.1, due for release this Thursday.
B
However, our understanding is that's a security release, with some security fix that I don't know the details of, and we don't know yet whether or not our fix will be in that. So the upshot of all this is that RC-0 is definitely delayed. There is a possibility that the release as a whole will be delayed if our fix is not in, and that's made more likely if our fix is not in Go 1.18.1, the one that comes out on Thursday.
B
So when that happens on Thursday we'll have a little bit more information, but for now we're just kind of waiting and hoping it makes it in there, and we'll see. Other than that, things are going all right. There are a few exception requests still knocking about. We've been asked to re-evaluate one, so we're looking at that.
B
There's one that came in on Friday, which I missed and am looking at now, and we're probably going to have a response on that out after this meeting. And there are, I think, five or six PRs still tagged ready to go in on k/k with a milestone. My hope is we'll get that down as test freeze passes; we'll kick everything out and then hopefully we'll be down to zero PRs, which would be nice.
A
So my understanding of the blocking bug is that later releases of Go 1.18 started rejecting the CSRs that we're generating and signing?
B
The fix was done on the main branch, which is, let me just pull it out. So this is, I believe, this is the fix that has now merged onto Go's main development branch, and it looks like, yeah, they still want to be able to do SHA-1 on OCSP responses, I'm not sure what that is, I'm not fully familiar, and CRLs, certificate revocation lists. So it looks like, yeah, that's what they needed.
A
So once we have the security release, we'll know if that fix is included, and that will determine the next steps, right?
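For context on the Go change being discussed: Go 1.18's crypto/x509 started rejecting certificates signed with SHA-1, which is what blocked the release candidate. The documented temporary escape hatch at the time was a GODEBUG setting, sketched below; the binary name is a placeholder, and this knob was always advertised as temporary, so check the Go release notes before relying on it.

```shell
# Sketch: re-enable SHA-1 certificate verification for a Go 1.18 binary.
# GODEBUG=x509sha1=1 was the temporary opt-out documented in the
# Go 1.18 release notes; ./some-go-binary is a placeholder name.
GODEBUG=x509sha1=1 ./some-go-binary
```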
A
Okay, awesome, perfect. So thank you for that update, James. Okay, so let's move to the open discussion topics. I see Jeremy has a follow-up from the retro; he had to drop from the call, but maybe we can wait on that and see if he comes back. Okay, so the next one is from Leo, about the CI Signal board and managing the team's reporting. Yeah.
C
Right, so last release we created a new project board, the GitHub beta project board, to track the CI Signal issues, and we still have one Excel sheet left, which we manage kind of manually, I would say. So every release we have to kind of transfer this knowledge, and the next lead takes this template and manages it.
C
So I would just give it a shot and just see how it turns out in 1.25.
C
What is everybody thinking about this? I think, in general, just moving away from those magic files that are getting shared by someone and then, I don't know, floating around somewhere. I think this is in general a good approach, if we can move towards GitHub. So this was my thinking.
C
So in that spreadsheet, I can, one second, copy... so this is the link to the spreadsheet, right. No, this is not the link, but there's the link. So we use it to basically kind of plan who is reporting in the team, for the release team meeting and also for the release cut, and then observing Testgrid. We have this Excel file, which is kind of not very, not very crazy. It's not like the enhancements sheet, which is crazy optimized. This is just a simple spreadsheet, and yeah.
A
Okay, so it's like, like a calendar, like the rotation schedule.
A
All right, so any questions around that, around automating the reporting roster for CI Signal?
A
Okay, so the next one is also from you, Leo: CI Signal.
C
Yeah, so the next two are more or less from this reliability KEP. So on the reliability KEP there's been a lot of discussion, also in the community meeting, and I think also last week we discussed something about this; I think we talked about Sippy. And so two more action items came from this discussion. The first one is that at the moment the CI Signal team kind of misses the powers to more or less enforce, beyond, like, just pinging people. Sometimes they don't respond and, for example, there's this one issue that I linked.
C
If you basically go to the comments, you can see we commented more or less, like, four or five times: hello, what's the status? And so on. So it's, yeah.
C
Maybe there is, like, some... I don't know. I don't have something in mind right now, not a solution to this problem, but maybe, like, as a brainstorm or open discussion: is there anything that we can do, more than just pinging people and writing in the chat, to basically raise awareness or to, yeah, get those flakes fixed?
A
So the problem is that you sometimes feel the owners of these problems do not respond, and you are powerless to do anything about it, right? I mean, this...
C
This is also, like, feedback from the discussion, so it's not just from me, but also from the authors of the KEP. They were thinking: if we have to move more into reliability and so on, then there would be the need of some, like, superpowers or whatever for the CI Signal team.
A
Well, okay.
E
Have you DMed the leads of the SIG yet?
C
Yes, we wrote them over the Kubernetes Slack, in the chat.
B
I think what we have hit here is really a common theme, or feeling, with the release team: as members of the release team, we are responsible for a whole bunch of these milestones and other things, but we neither decide on these milestones, because often SIG Architecture supervises that, generally, nor do we actually have the power to fully enforce them, because that lies with the SIGs.
B
I think that's true across a lot of areas in general. I don't necessarily have a solution. As Eddie says, DMing leads; maybe we need to make it more explicit in the handbook, you know, how leads expect you to deal with them and things to do there. But yeah, I don't necessarily have a solution, just an observation: this is something we've struggled with.
E
For a while, I mean, to cut you off, James, sorry: we used to have a leads spreadsheet with contact preferences. Does anyone remember this? Do you remember this as well?
E
Yeah, I just keep coming back to, like, the blocker that I fixed a few weeks ago. I got pinged on public channels on Slack, I got tagged on GitHub issues, and I missed all of them, and it wasn't until I got DM'd by, I think, Leonard or James, one of you DM'd me, like, hey, did you see this? And, like, it got fixed in a few days. So I know that everyone just has different preferences for being contacted, and yeah, I want to believe they're not maliciously ignoring you.
A
Could be. I mean, even between you and me, Leo, you've seen it: sometimes you've had to come and ping me directly. I don't think it's the best way to do it, number one because we should always be paying attention to notifications, and the second is that communication should be as open as possible. But I wanted to differ a little bit on what you were saying.
A
So I think some more discussion should happen around that, because, in the end, it's not the release team's responsibility. I mean, it's their responsibility to obviously bring attention, to raise the issues, and to ensure that the whole release and release engineering teams are informed about the flaking tests. But it's not our responsibility to get those fixed, and so, if the owners of the problem do not fix those in time, I think it should be on them.
C
Right, maybe it would be interesting to have, like, an escalation path or something like this. So first priority, like important-soon, then milestone or whatever, then, I don't know, three pings in the chat, so there's even more awareness. Or, I don't know, maybe it would be interesting to identify what is basically the trigger to get a notification or, like, to raise awareness. So is it, like, the priority label? Is it something like that? Yeah.
E
A few releases ago we held off on code thaw until a bunch of flakes and blockers were fixed. Is that something we want to talk about again?
D
I wanted to say that I think this is obviously a people problem rather than a technology problem, but I feel it's specifically unfair to single out CI Signal, because as a team, historically, they, we, everyone, have been contributing to better reporting tools over the years and over the cycles. And, this is my opinion, I don't think that this is a problem that can be solved with more processes.
D
It's more about, from my experience, and I also don't take things personally: what I wanted to share was that the same experience goes the other way too. So I have been on the receiving end as a release manager. I usually don't have the discipline to go through issues or things that need to be approved, and people usually have to ping me, or the release managers, and I don't get offended.
D
I actually prefer that. So there have been, like, a couple of people, release after release, like, hey, release manager, almost in a very... they seem irritated, right? And I don't know, that's the way it is, but we get it done. So I think that is mostly about preferences, but there are so many people working here, in different teams, on different parts of it, that we won't be able to keep everyone happy. Probably I'm too old for this.
D
So if someone is not responding, then I'll ping someone else, or whatever. That can be tiring, but at the same time I don't think that there is, like, a silver bullet for everyone.
A
No, it is, but I feel that, since different people have different preferences for their communications and things, maybe having, like, an established way of contacting people. So, simply to answer "why aren't you pinging me directly?": well, because I did A, B, C and you did not acknowledge my messages. I mean, just to not put that burden onto the CI Signal teams, it would be a good thing to have.
A
All right, yeah. I propose we bring it to the next chairs and leads discussion and hear what others think about it. All right. Oh, you just put it on the agenda for us, thank you, okay. So that was the last topic on the agenda. I don't know if anyone would like to mention something else or bring another topic.
A
Okay, is it the reliability monitoring?
C
So, yeah, basically two action items came out of this discussion. The first one was how to raise awareness, or, like, those superpowers and so on, and the second one was...
C
They were thinking about adding reliability more to the CI Signal team: that we, as the CI Signal team, should look out for reliability issues. And if this is the case, basically in this KEP there's a lot of, like, points on how to basically monitor reliability and so on, and I feel like this might be a lot of extra work, a lot of extra responsibility for the team, and if this gets added to the team, there's probably not enough capacity to basically fulfill this goal.
C
So it's basically to monitor, and then also to enforce, or not enforce. There was also a discussion about how to basically carry out these reliability issues: it should be done by some team, probably by the SIG Release team, and then by CI Signal, and there's, yeah, more detail in there.
C
I have to walk through the steps again, but this seems like a bit out of scope for the current team, because there's currently still a lot of things that you have to do. So, just to get this on the radar: if, like, new topics, new tasks get added to the handbook, the handbook is already 20 pages long.
C
We should consider changing something in the team. So if we have a new focus, so for flaky tests and also for failing tests and then reporting and so on, but then also to track reliability long term, and then, I don't know, get into this discussion with the other teams, this might be a bit extensive. I don't know.
A
Yeah, okay. I'm not aware of what actually monitoring reliability would involve, but I would certainly like to hear. I mean, yeah, I agree that it's a lot for that team, but we can, I mean, if this is something the project needs, and we determine at some point that it may fall within our realm of responsibility...
A
Yeah, exactly. Because, I mean, if it's monitoring more boards... because reliability, I think, is an expression of the internal processes and the state of the code base of a certain area, which I think could blend with CI Signal. But if it involves a lot more, maybe we should rethink it. I'm not sure what they're proposing here, but if anyone has more context, I would love to hear about it.
A
But yeah, I mean, we should discuss it, but with more, I don't know, with more knowledge of the matter.
A
Yeah, so let's, let's try to... I'm going to try to read a little bit more, and we can keep the discussion going, because, to be honest, I've been skimming the KEP and the discussion, but I didn't notice the part about CI Signal that you were mentioning. But I'll try to understand it.
A
Okay, so to the last point, I just have an invitation: if anyone wants to discuss something else or bring another topic to the table, now's the time.
D
I would like to have a little bit of context, like, to learn and read about all the canary image promoter signing thing that you have been working on. I'm also happy to wait and catch up in another call if someone else has a more pressing issue.
A
I mean, yeah, it's basically the update that I gave recently. So the idea is to have a promoter cut whenever... so we will now constantly be running and releasing new canary images for the image promoter, and those images will be promoting some of the images of the release engineering projects. So right now it's just the bill of materials tool, so we will be... okay.
D
Okay, yeah. Sorry, I had more questions about... I saw that you did something, like you specifically, Adolfo, you did something that was later dismissed. Like, it was okay, but it was later dismissed in favor of something else that Stephen did. Oh...
A
Yeah, yeah, another approach. So the idea was that we were trying to set up the canary builds inside... so what we tried to do during the weekend was run the canary builds in the bom repository. And so what Stephen did was, he took those canary builds and moved them to the promoter repository, and then, instead of just doing the canaries on one of the projects, we now have them inside of the promoter, and we can add more images to those if we need to. Okay.
A
Okay, so any other questions? Oh, I see, yeah, I know, yeah.
A
Yeah, I saw the issue, he opened the issue. So the idea is, just as super quick context: when we do the promotion of the Kubernetes images, we are currently pushing them to the Google Cloud registries that we have used, like, forever.
A
This project also involves making copies of those to be served from AWS locally, to basically not pay for all the traffic from GCP to AWS, and Jay Pipes kindly offered to take on that effort. And he, like, I don't know if I mentioned, yeah, he opened the issue and tagged me and Veronica in it, and yeah. As far as I saw yesterday, this thing has moved; I haven't heard from him on Slack recently.
A
Exactly. There are some particular things inside of the promoter that we need to discuss, of how we do some of the copies inside, and also the promoter is limited in some aspects of which media types it will recognize. So we should talk about that, but yeah, I'll follow up with Jay when he reaches that point.
D
Is that something that we want to discuss, like, in the future, once we are balancing with them, here in this meeting? Or should one of us be embedded into their team meeting, or...?
F
That's basically exactly where the complexity is: supporting multiple providers, because we will at some point have at least GCS and S3, and we can basically stick with that for a very long time. But unless there is something else we need to implement, I don't see real complexity. So it's going to feel kind of straightforward apart from that, but I can be the liaison between the teams.
A
Exactly. Most of the discussions around the image promoter should happen in our release engineering meeting, because the promoter is a project from release engineering, yep. But it touches on that boundary with SIG K8s Infra, but any meetings are fine, and if anyone is interested in that topic, I recommend attending both meetings. So...
D
Yeah, I'll try to attend the other one also. Okay.
A
Yeah, that said, there is, like, a time pressure on this project, so if anyone wants to help around that, help is very much welcome.
A
Okay, perfect! So we have four minutes left, and if no one has any other topics, I think we can have those minutes back and continue the discussion async.