From YouTube: Kubernetes Release Engineering 20200721
A: Hello, hello folks. Today is July 21st. This is the Release Engineering subproject of SIG Release. This is a meeting that is recorded and available on the internet, so please be mindful of what you say and do. Please be sure to adhere to the Kubernetes Code of Conduct and, overall, just be awesome people.
A: So we had a great discussion yesterday in the 1.19 release team meeting, or one of the 1.19 release team meetings, and I thought: what a great idea to say exactly all the things that we said there. A lot of the content was release engineering related, and some of the topics that we got into overlapped with wider release engineering efforts, as opposed to just 1.19, so I figured it would be good to rehash that stuff here.
A: You'll see that I've copied in notes from yesterday's meeting; we will clean those notes up as we go. If anyone is interested in acting as a note taker, that would be greatly appreciated, and if anyone has topics that they want to add, feel free to slot them in ahead of any of the things that are listed already.
A: So, first off: the RC2, which is planned for today. Do we want to get a status check on the RC2? We've got... I see Daniel and Sasha here. Yeah, it would be really helpful, I think.
A: Okay, all right.
B: There's a thread going right now in release management about some of those issues that Liggitt brought up yesterday, but I don't think those are intended to block RC2. It looks like Tim mentioned this also; that was more about reopening master.
A: Exactly right. So there were several issues that were brought up yesterday, and a lot of them are related to... if anyone was checking out our CI posture over the last week or so, you would have noticed quite a few failures across a variety of different areas.
A: Some in Windows; there were the GPU plug-in tests, which were kind of perma-failing; I think some of the other end-to-end tests; and the node end-to-end test was showing about a five percent flake rate as well. So today is a day where we are doing a few things, or there are a few milestones that we're hitting, and they're kind of bundled up into the same thing. Right, one is the RC2 itself.
A
The
rc2
is
also
supposed
to
newly
in
the
cycle
will
mark
the
start
of
our
code.
Thaw
right,
there's
not
really
a
start
of
it.
It
just
is,
but
we
during
that
meeting
we
kind
of
looked
around
and
said
this
is
fairly
bad.
Maybe
maybe
we
don't
immediately
open
so
for
code
thaw
for
those
who
are
not
familiar
with,
though,
basically
says
like
hey
we're,
we're
just
about
wrapping
up
this
cycle
and
we
feel
comfortable
with
opening
reopening
the
master
branch
for
development
again
for
the
next
cycle.
A
What
we're
saying
overall
is
that
we
are
not
ready
to
do
that
just
yet.
There's
some
there's
some
various
flakes
that
we
want
to
burn
down
and
as
well
as
so
some
of
those
are
infrastructure
level.
Some
of
those
are
code
level.
Some
of
the
prs
within
the
milestone
are
not
necessarily
pr,
so
we
want
to
continue
being
in
the
milestone,
so
this
will
give
us
an
opportunity
to
kind
of
gut
check
the
the
milestone.
B: Yeah, so you mentioned a couple of things there that we're having issues with; most of them are resolved at this point. The first thing you mentioned, I think, was the device plug-in issue. That one was kind of weird, and actually, if you tune into the virtual summit China, I will walk through how we fixed it; we recorded that yesterday. Anyway, that was just pulling a manifest from another repo, so that is up to date and has passed consistently, so we're good there.
B: Unfortunately, there's the mirror pod with grace period issue. However, it's not a new thing, but it has increased in the consistency of the flaking, and that's the same story with the scalability test that Liggitt mentioned yesterday.
B: Basically, the scheduling throughput has been consistently falling. There's a gate for ninety percent, and it would flake every once in a while at around 89 or something like that, and now we're getting some results that are more like 80. So, like I said, these things are not new. In terms of cutting a new release, I'm hesitant to do anything that would block on those, especially since they're just flaking, but they do need to be addressed.
A
Thank
you.
Thank
you,
so
yeah,
one
of
the
one
of
the
decisions
that
we
had
made
yesterday
and
I
think
this
was
off
the
call
and
probably
scattered
between
chats
with
the
various
leads,
is
that
we
can
my
my
words,
maybe
not
exactly
what
we're
going
to
do,
but
we
can
cut
an
infinite
amount
of
rcs
if
we
needed
to
right,
and
I
think
the
rc's
give
us
an
opportunity
to
actually
test
the
content
and
give
the
people
who
actually
consume
rcs
the
opportunity
to
test
that
content.
A
So
we
don't
think
it's
a
good
idea
to
necessarily
delay
the
release,
but
we
do
think
it's
a
good
idea
to
delay
the
start
of
code
thaw
or
the
reopen
of
the
master
branch.
So
there's
going
to
be
a
note
that
is
sent
out
later
today
by
taylor.
That
will
give
some
details
about
where
we're
where
we
currently
stand,
and
basically
reiterate
some
of
the
stuff
that
we
mentioned
just
now,
hey
jordan.
Do
you
want
to
chime
in
this?
A
Is
we're
essentially
doing
a
rehash
of
the
meeting
you
were
in
yesterday?
So
we're
talking
about
the
flakes
right
now
and
what
we
are
and
aren't
going
to
do
for
today.
So.
C
So
it
looks
like
several
of
the
things
from
yesterday
did
merge,
which
is
great
two
new
windows.
Sorry,
so
there's
one
panic
fix
that
is
still
making
its
way
through
the
merge
queue.
The
scale
issue
has
not
gotten
any
comments
from
sig
scheduling
or
six
scalability
are
those
the
two
that.
B: The panic issue is less critical, because it's typically going to return an error there anyway for the test. I mean, I think it would be good to get that in, but it's not... oh.
C: I missed that that was the test situation. Okay, so yeah, the kubelet, or the node e2e, that was failing on the mirror pod test. Seth, I think, has a proposed fix for that. He and I were about to jump onto a call when I got tagged in here.
C
So
if,
if
that
fix
is
legitimate,
then
I
think
we'll
be
good
to
go
like
by
midday
or
right
afternoon,
like
once
that
emerges
yeah,
I
I
would
like
sig
scheduling
and
sig
scalability
to
at
least
weigh
in
on
the
on
their
issue.
I
don't
know
that
that
needs
to
block
rc2
so.
A
Yeah
yeah,
so
so
what
I
was
mentioning
in
the
right
before
you
joined
is
that
they're,
the
upside
is
that
there
are
two
discrete
actions
doing
the
rc
doesn't
necessarily
mean
that
we
need
to
doing
the
rca
doesn't
necessarily
mean
that
we're
going
to
do
code.
Thaw,
though
that's
something
that.
A
Usually
do
so
we
are
going
to
go
forward
with
the
rc
and
continue
to
watch
master
about
where
we
stand
for,
but.
A
Yeah,
but
I
would
say
more
importantly,
if
you've
got
a
jump
to
work
on
a
fix
for
one
of
these
things,
I
would
say:
go
for
it.
C
I
would
I
would
move
the
one
that
seth
is
working
on
down
to
the
like
release,
blocking
and
master
reopening
blocking
for
now,
just
because,
based
on
his
investigation,
it
is
an
issue
that
did
exist
before
and
this
new
test
exposed-
oh
nice,
so
yay
for
tests,
and
it
makes
me
bad
when
I
find
out
we
had
bugs
for
a
long
time.
A: Okay, so let me check the contents... or actually, release managers, if you want to check the content: I know we did a fast-forward earlier and we'll want to check that. Okay, that was at 6:35.
C: And I just updated the list of issues and pull requests, categorized into RC2-blocking things and master-reopen-blocking things. Okay, cool, and I linked to that. So, all right, I'm going to jump off the call and try to get that mirror pod fix nailed down.
A: All right, so we'll be watching that throughout the day. Daniel and Sasha, if you can poke around and make sure that the content that we need is in the 1.19 branch, and if not, we'll plan to do another fast-forward to get us there.
A
So
one
of
the
things
that
jordan
had
mentioned
scalability
scheduling-
and
this
is
something
that
I
mentioned
yesterday-
communication
wise.
So
one
thing
I
was
worried
about
and
have
confirmed
just
now-
is
that
the
scalability
ping
groups
on
on
github
api
review,
bugs
feature,
requests,
miscellaneous
pr
reviews,
proposals
and
test
failures.
A
Those
are
all
pretty
stale,
so
I'm
gonna
update
those
later
probably
right
after
this
call
to
include
the
the
current
tech
leads
and
chairs
for
sig
scalability
and
then
for
some
of
those
ping
groups.
A
I'm
going
to
consolidate
some
of
these,
like
some
of
these,
have
jobita
on
it
and
he's
definitely
not
doing
scalability
reviews
right
now,
so
I'll
I'll
get
these
updated
and
what
I'll
do
is
I'll
also
include,
so
we're
moving
to
kind
of
a
not
new
system,
but
consolidated
teams
for
github,
so
the
expectations
that
we
would
see
at
a
bare
minimum,
the
a
team
that
is
sig's
name,
a
team
that
is
a
six
six
leads
and
then
a
team
for
pr
reviews.
A
So
it
would
be
like
sig
release
pr
review.
Sig
release
and
sig
release
leads
right,
so
I'll
move
scalability
to
that
format
for
those
teams,
I'll
also
add
the
release.
Team
leads,
or
rather
the
sig
release,
leads
overall
right
so
and
then
maybe
someone
from
ci
signal
to
to
get
that
overlap.
A: I guess that's step one of what we talked about yesterday, and coming up with something more comprehensive. Yeah.

E: Okay, I added the note to the agenda under SIG Scalability that that's an action item for you.
A
All
right,
so
I'll
drop,
my
note
cool
all
right.
So
next
up
is
the
vdf.
The
vdf
is
in
progress
for
those
who
are
not
familiar.
Pdf
is
vanity
domain
flip.
It
is
when
we
will
change
the
backing
container
registry
of
kates.gcr.io
from
google
owned
to
community
owned.
A
So
if
you're
familiar
with
gcr.io
slash,
google
containers,
that
is
where
most
of,
if
not
all,
of
the
prod
community
images
are
hosted,
moving
forward,
that
will
be
kate's,
artifacts
prod
and
the
various
geo
geolocated
endpoints,
so
us.gcr
slash
case
artifacts,
proud,
eu
dot,
dot,
gcr,
dot,
io
and
asia.gcr.io
kate
gates,
artifacts
prod.
A: So we are in a good spot to do that. I also tested the promotion process. So, release managers who are cutting releases moving forward: you'll have to make sure that, after the mock stage and mock release, we're issuing a promotion PR for the mock release.
A
The
the
container
images
that
are
pushed
as
a
result
of
the
mock
release
to
carry
them
into
production
right
so
that
pr
would
be
approved
by
a
release,
manager
and
then
promoted
into
production.
A
So
that
should
not,
when
you
issue
one
of
those
pr's
make
sure
that
you
open
the
open,
the
pr
you
add
a
hold
to
it
and
you
don't
release
that
hold
until
the
until
right
as
we're
successful
with
the
official
release
right.
So
it
should
be
mock
stage.
A
Success
mock
release,
success,
create
the
promotion,
pr
hold
the
promotion.
Pr
start,
the
official
stage
start
the
official
release
and
then,
as
the
official
release,
is
completing
we're,
releasing
the
promotion
pr
so
tiny,
a
tiny
bit
of
gymnastics
to
be
aware
of,
but
I
think
it's
minimal
in
the
overall
process.
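Written out, the ordering above is a fixed checklist. A minimal sketch that just encodes the sequence; the step names come from the meeting, the comments reference Prow's hold convention, and none of this is the actual release tooling:

```shell
#!/usr/bin/env bash
# Sketch of the release-cut ordering described above. Each step here is a
# placeholder label; in practice the stage/release steps are driven by the
# release tooling, and the hold is a Prow '/hold' comment on the PR.
set -euo pipefail

steps=()
step() { steps+=("$1"); echo "==> $1"; }

step "mock stage succeeds"
step "mock release succeeds"
step "open image promotion PR"
step "hold promotion PR"                       # e.g. comment '/hold'
step "official stage"
step "official release"
step "release hold as official release completes"  # e.g. '/hold cancel'
```

The point being that the promotion PR is opened and held early, and only unheld once the official release is succeeding.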
A
I
linked
a
I
linked
a
kind
of
demo
example
pr
for
for
how
to
do
that,
and
that
should
be
linked
to
some,
where
I
can
dig
it
up.
If
it's
not
so,
questions
on.
A: Okay, great news. So, time for my favorite topic in all the world: Go updates. We've been doing a lot of Go updates recently, if you've been watching the streams; we are at Go 1.14.
A
So
the
first
ones
that
we
did
as
the
cycle
was
going
on
was
to
go
114
4
for
the
master
branch.
Then
we
moved
quickly
into
go.
114.5
recently
updated
to
go
114.6.
A
And
the
for
the
release
branches
we
were
on
for
the
previous
release
branches,
so
118,
117
and
116..
We
were
on
go
113
9
and
we
are
now
on
go
113
14..
A
So
I
think
that
throughout
this
quarter,
or
so
we've
kind
of
proven
that
it
is
possible
for
us
to
kind
of
crank
out
go
releases
fairly
quickly,
depending
on
the
complexity
of
the
release.
Let
me
not
jinx
us
for
the
rest
of
go
release
history,
but
but
I
think
that
you
know
we're
starting
to
pro
we're
starting
to
tighten
the
way
that
we
do
the
go
updates.
A
I've
recorded
a
maybe
three
hours
or
so
of
content
on
how
to
do
some
of
the
stuff,
which
covers
the
patch
release
kind
of
the
go
patch
release
updates,
but
not
the
not
the
minor
updates
minor
updates,
get
kind
of
sticky,
depending
on
the
the
various
issues
that
you
might
run
into,
and
I'm
wondering
how
much
people
are
interested
in
actually
seeing
what
I
was
working
on
yesterday
last
night
later
taylor.
A
It
is
the
update,
for
I
think
it's
something
that
we
haven't
done
quite
yet,
which
is
an
update
for
go
115
0..
So
the
part
that
we
haven't
done
yet
is
is
try
to
update
to
a
version
of
go.
That's
that
doesn't
exist
yet
so
we're
working
with
a
pre-release
version
of
go
and-
and
you
know
what
I'll
I
don't.
I
don't
really
think
it's
showing
off,
because
it's
not
done
yet,
but
I
will
go
into
this
pr
just
a
little
bit.
So
let
me
stop
my
slack
all
right.
A
So
what
did
we
do
here
at
first?
There
are
some
interesting
things
going.
A
On
and
I
think
we
can
stop
there
or
so,
okay,
so
first
one
up,
there's
kind
of
some
version,
swizzling
that
you
need
to
do
within
the
build
directory
thedependencies.yaml
because
we're
trying
to
consume
a
pre-release.
Some
of
these
version
numbers
that
we
are
depending
on
they're
slightly
different
and
they
don't
necessarily
respect
the
same.
The
same
regex
match
patterns
that
we
were
using
before
so
one
example
is
this:
where
the
golang
upstream
container
image
is
actually
it's
not
115..
A
You
know
115.0
dash,
beta.1,
it's
115
beta1,
so
we
need
to
account
for
that
and
then
the
cubecross
image
is
also
has
has
what
would
be
the
december
expected
string
dash
one
there's
a
repo
infra
configuration
here
that
we
have
to
set.
So
I
initially
had
that
set
to
december
compliance
version,
some
swizzling
of
the
the
test
images
make
file
and
then
referencing
the
new
cube
cross
version
for
under
test.
A
Then
we
can
see
that
this
starts
to
go
off
the
rails.
The
go
version
is
updated
to.
This
is
one
of
the
two
minus
a
little
bit
of
stuff.
This
is
one
of
the
two
accepted
regexes
for
december,
and
you
can
see
that
that
is
a
bit
long.
A
I
did
not
come
up
with
this.
This
was
already
listed
in
december
and
december.org
close
to
the
bottom,
so
you
can
feel
free
to
to
use
that
if
you
ever
need
to
capture
something
that
is
semper
compliant
and
then
we
do
some
more
stuff,
because
the
go
version
that
they're
targeting
are
that
they've
listed
is
not
quite
december.
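For reference, the long regex being described is the suggested one listed near the bottom of semver.org. A minimal sketch of using it to validate version strings; this assumes GNU grep, since the pattern's non-capturing groups need `-P` (PCRE):

```shell
#!/usr/bin/env bash
# Validate version strings against the suggested regex from semver.org.
SEMVER_RE='^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$'

is_semver() { printf '%s\n' "$1" | grep -qP "$SEMVER_RE"; }

# The upstream Go image tag style is NOT semver-compliant, which is exactly
# the mismatch discussed above:
is_semver "1.15.0-beta.1" && echo "1.15.0-beta.1 is semver"
is_semver "1.15beta1" || echo "1.15beta1 is not semver"
```

So a tag like `1.15beta1` has to be special-cased rather than matched with the semver pattern.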
A: Here we remove the dependency checker; we comment out the dependency version check for repo-infra, and then we also comment out where we're pulling in repo-infra from. So it's basically saying: this is an http_archive, I want you to use this version of repo-infra, zero-zero, and this is the expected SHA of that tarball.
A
So
repo
infra
contains
a
variety
of
things,
but
one
of
the
primary
things
that
it
contains
is
a
is
a
bunch
of
the
rules
that
we
define
for
for
go
so
what's
cool
about
it
is
you
are
then
able
to
define
your
rules,
define
the
rules
that
you
care
about
in
repo
infra
and
then
ingest
them
in
your
repo
by
by
loading
the
repository
through
various
mechanisms
right,
so
the
one
that
you
saw,
which
is
I'm
lost
now?
A
Okay,
the
one
that
you
saw
was
an
http
archive,
but
we
can
also
do
it
via
git
repository.
So
you
can
see
here
I'm
instead
opting
to
use
the
git
repository
mechanism
right
and
I'm
naming
it
the
same
as
we
would
for
the
http
archive
io
case.
So
it's
so.
It's
essentially
like
flipping
the
flipping,
the
the
repo
name
right.
So
the
the
end
point
is
this
case
that
I
o
and
then
the
the
repo
name
is.
Is
repo
infra
right?
So
that's
just
that's
it
represented
in
basilise.
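As a rough sketch of the two WORKSPACE mechanisms being described here; the workspace name, versions, and SHAs below are placeholders for illustration, not the actual kubernetes/kubernetes values:

```starlark
# Pinned release tarball of an external repo, via http_archive:
http_archive(
    name = "io_k8s_repo_infra",
    sha256 = "<tarball-sha256>",
    strip_prefix = "repo-infra-<version>",
    urls = ["https://github.com/kubernetes/repo-infra/archive/v<version>.tar.gz"],
)

# The same external repo backed by a git commit instead, which is handy
# while testing unreleased repo-infra changes:
git_repository(
    name = "io_k8s_repo_infra",
    commit = "<commit-sha>",
    remote = "https://github.com/kubernetes/repo-infra.git",
)
```

Because both rules bind the same `name`, everything else in the build keeps referring to `@io_k8s_repo_infra//...` regardless of which mechanism is in use.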
A: We don't have a representation of pre-release versions, so we have to directly point at tarballs for those pre-release versions. So you can see here that we're adding this Go SDK to the deps, loading those deps at the Bazel level, and defining a set of SDKs for all of the various architectures, and these are possible to get if you just do...
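A rough sketch of what pinning an explicit SDK tarball looks like with rules_go's `go_download_sdk` rule; the filenames and SHAs here are placeholders, and the exact values are not taken from the PR being shown:

```starlark
load("@io_bazel_rules_go//go:deps.bzl", "go_download_sdk")

# Pin a pre-release Go SDK by explicit tarball + sha256, since there is no
# released version string for rules_go to resolve on its own. One entry per
# host platform that needs to build.
go_download_sdk(
    name = "go_sdk",
    sdks = {
        "linux_amd64": ("go1.15beta1.linux-amd64.tar.gz", "<sha256>"),
        "darwin_amd64": ("go1.15beta1.darwin-amd64.tar.gz", "<sha256>"),
    },
)
```

Once a real release exists, the `sdks` dict can be dropped in favor of a plain `version` attribute.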
A: I tried a few different things. It appears that there may be an issue with the rules that we may not be able to fix just yet. Because we're trying something new, it looks like it's new enough that we're running into an issue that does not have a solution just yet on the Gazelle side. So, Gazelle... okay, so we've got a response from Jay Conrod, who is one of the people who works on various Bazel things, so Gazelle, and I think he also works on rules_go, and probably somewhere around Bazel proper.
A
So
I'm
going
to
look
into
what
he
said
and
see
if
I
can
clean
up
some
things
here,
but
it
looks
like
there
are
two
active
issues
that
are
open
right
now:
slinker
conflict
when
external
tests.
Basically
it
says
that
you
know
we've
compiled
from
some
place,
but
we've
linked
from
someplace
different
and
the
places
that
are
this
tends
to
happen
when
it's
for
external
some
external
component.
So
in
this
case
the
external
component
would
be.
I
think
where
we're
seeing
a
lot
of
the
errors
is
the
staging
directories.
A
So,
if
you're
familiar
with
staging
some
weird
things
happen,
the
way
it's
built
kind
of
weird
things
happen,
and
we
have
a
set
of
scripts
to
make
sure
that
we
can
update
the
go
mod
and
go
sums
in
in
various
places
in
the
staging
directory
and
yeah.
So
so
I
had
played
around
with
a
bunch
of
different
things
like
updating
the
various
the
various
tools.
A
So
this
one
basically
bumped
repo
infra
to
the
latest
version
and
then
bumped.
I
basically
dropped
the
the
depths
from
here,
only
bumped
repo
infra,
and
then
we
ran
a
make
depths
within
that
directory,
which
pulled
in
the
repo
infra
versions
of
these
tools.
The
versions
of
these
tools
that
are
compatible
with
repo
indra.
Rather
so
that
did
not
work,
and
so
afterwards
I
ran
an
update
vendor
to
see
what
it
would
fix
so
update
vendor.
A
Also
updates,
update
bazel,
also
runs
update,
bazel,
so
you'll
see
like
a
few
build
file
fixes
that
will
come
in.
As
a
result
of
that,
we
can
see
that
we've
got
double
slashes
on
these
conformance
tags.
A
And
then
I
tried
it
again,
but
this
time
I
bumped
the
git
repository
that
we're
referencing.
So
I
referenced
the
new
sha
on
my
repo
infra
pr,
our
my
repo
info
branch,
and
then
I
tried
running
that
update
vendor
again
and
we
can
see
that
it
pulled
in
some.
I
pulled
in
some
references
for
new
platforms,
aix
and
js,
but
overall
those
are
not
useful
to
us
and
don't
fix
errors
like
this
right.
So
this
is
what
we
were
seeing
right.
A
So
it
says:
yada
yada
yada
was
compiled
with
go
default
library,
but
was
linked
with
go
default
test.
This
happens
when
an
external
test
package
ending
in
underscore
test
imports
a
package
that
imports
the
library
being
tested.
This
is
not
supported
and
we
can
see
that
there
are
a
variety
of
these
errors
throughout
these
failing
tests.
A
So
the
current
status
is
that
some
of
these
things
are
passing
right.
We've
got
some
intent
tests
passing
dependency
checks,
passing
files,
remake
node
and
end
is
passing,
but
the
ones
that
I
would
watch
to
see
if
they
were
failing
during
a
go
update
would
be.
I
want
to
make
sure
that
bazel,
build
and
test
are
good.
I
want
to
make
sure
that
the
cross
looks
good.
I
make
sure
that
the
primary
ede
for
gce
looks
good
and,
of
course,
the
integration
and
verify
tests.
A
So
we've
got
a
bit
to
go.
We've
got
time
to
do
this
and
I
think
it's
it's
good
that
we're
starting
early
the
the
goal
overall
because
of
some
hairy
issues
in
go
114
is
to
move
to
move
to
go
115
across
all
release
branches.
A
So
we
want
to
vet
that
this
looks
good
for
the
for
master
branch
in
119
before
starting
to
carry
this
into
the
the
various
release
branches.
So
so
we
obviously
don't
want
to
do
that
until
there
is
an
actual
version
of
go
to
test
against,
but
this
is
this.
Pr
is
meant
to
get.
Some
soak
get
some
signal
before
we
ingest
the
final
version
of
go.
115
are
the
first
miner
version,
our
rather
yeah
the
first
fire
version.
First
patch
version.
A
Yes,
the
minor
version,
which
is
also
technically
the
first
patch
of
go
115.,
I
should
just
say,
go
115
0
is
what
we're
waiting
for
and
that
should
be
available
sometime
in
early
august.
So
we've
got.
You
know.
We've
got
about
two
weeks
in
change
to
to
figure
out
some
of
the
things
that
are
going
on,
hopefully
knock
them
down
before
we
get
the
full
version
of
go
and
then
it
will
be
a
race
to
make
sure
that
go.
115
is
on
for
kubernetes,
119.
D
This
so
would
you
say
if,
like
the
the
rules
were
in
place
which,
by
the
way,
that
is
always
the
issue
that
I
found
like
not
not
in
this
update,
not
even
with
the
go
upgrade
but
like
every
single
time
that
I
have
to
fix
something
it's
like.
Oh,
the
basil
rules
are
not
suitable.
Yet
for
this
specific
thing
that
I
want
to
do,
but
that
said,
would
you
say
that
if
these
rules
were
were
not
a
problem
with
your
job
have
been
way.
A
You
know
it
was.
It
was
annoying
I
poked
at
this
for
quite
a
few
hours
yesterday.
A
No,
not
necessarily,
I
think
that
what
we
uncovered
to
yes-
and
no,
I
guess,
if
there
were
rules
available
for
pre-releases,
are
if
we
knew
exactly
which
ones
to
consume
that
would
be.
That
would
be
useful.
A
We
the
way
it's
been
going
for
the
last
few
updates.
I
have
basically
assumed
that
new
rules
were
going
to
be.
There
found
the
rules,
updated,
repo
infra
cut
a
new
tag,
targeted
targeted
that
new
tag
on
kubernetes
kubernetes
and
then
proceeded
with
the
the
the
patch
up
update
the
what
we
need
to
yeah.
I
think
what
we
need
a
better
understanding
of
is
what
the
process
looks
like
when
we're
dealing
with
pre-releases
right.
A
This
is
an
opportunity
to
kind
of
like
play
whack-a-mole
with
some
of
the
errors
that
we
might
see,
and
I
think
that
overall,
like
we're
going
to
take
some
pain.
Initially,
I'm
trying
to
figure
out
some
of
the
stuff,
but
it
will
allow
us
to
consume,
go
a
lot
faster
if
we're
able
to
start
consuming
it
closer
to
the
edge
right.
A: But yes, that is basically the thing that I've been told to find and update. It's probably rules_go, or maybe something in the toolchain updates.
A
So
when
we
do
an
update
for
pre-go
release,
we're
going
to
make
sure
that
the
bazel
tool
chain,
repo
is
is
up
to
date
in
our
repositories
and
then
also
making
sure
that
the
rules
not
go
rules
underscore
go
is
up
to
date,
and
then
you,
you
do
the
dance
that
you
do
with
all
of
the
the
other
go
related
things
so
cube
cross
q
cross,
adding
cube
graph,
promoting
cube
cross,
adding
cube
graphs
to
the
pr
rebuilding
the
kate's
cloud
builder,
image,
setting
up
the
cubican's,
ede
images
to
and
and
then
starting
to
play
whack-a-mole
with
the
various
issues
that
you
run
into
on
the
on
the
kubernetes
kubernetes
pr.
A
For
that
once
that
merges
then
handling
the
cherry
picks
for
each
of
those
things
merging
so.
A
Yeah
exactly
yeah,
there
are
a
few
things
to
add.
I
guess
now
that
now
that
I
understand
more
of
the
bazel
stuff,
but
it's
fairly
minor,
I
think,
but
maybe
maybe
one
day
another
another.
A: All right, so the ETA on that is unknown at this point. Hopefully we have forward progress ahead of the end of this week, and hopefully we're closer to merge by the end of next week.
E: So if you click the link and take a look, it covers the to-dos that came up yesterday through the conversation, and then I've just added one, the item from Veronica, actually just now; that's a plan to combine her notes and your videos into some materials around Go updates. But two things on the spreadsheet: I don't want it to compete with the agenda notes, because there's no sense in duplicating, and also it shouldn't subsume the agenda.
E: So I'm wondering (I posted this in the Slack channel) if this is helpful to people, if you all have feedback.
E
My
other
note
on
this
and
the
chat
was
that
I
had
considered
making
a
github
issue.
That
would
just
be
a
checklist
of
things,
but
this
spreadsheet.
E
First,
music
just
started.
I,
like
my
computer,
loves
to
prank
me,
but
at
least
it's
good
music,
but
anyway,
the
advantage
of
a
spreadsheet
is
that
you
just
have
more
information
fields,
but
maybe
maybe
we
don't
need
them
all.
Maybe
we
just
need
to
have
the
to-do
list,
in
which
case
a
github
issue
might
be
simple,
an
umbrella
issue.
A
So
I
was
going
to
say:
let's
do
a
github
issue
simply
because,
like
that's
the
way
that
most
people
work.
A: Maybe a separate GitHub issue per event, or per subproject-y thing? Then we can go from the meeting to that issue and ask: all right, did we do the things?
A: Did we do the things, all right? Are there more things to link? Cool. And do kind of weekly updates there, right? So we can break it into dates. I'll leave it to you to create those, so that you have access to edit the descriptions as well, and then we can just do it by date and checklist.
E
Yeah,
okay,
so,
and
I
think
we
can
use
the
apps,
we
can
use
people's
handles
to
have
the
owners
and
the
proposers
bolded
dates.
If
there's
a
timeline
for
any
of
these
items
or
urgency
that
we
need
to
check
up
on
the
following
meeting
like
from
one
meeting
to
the
next,
you
do
it.
We
can
have
that
information.
So
all
right
that
makes
sense.
A
Cool
cool,
yeah
and
yeah
so
similar
to
the
retrospective
things
that
we
had.
We
had
done
but
like
much
shorter
and
I
think
completable.
A: Should we... I mean, another idea is we could have a project for it, right? It could be separate GitHub issues, and we could burn down that project board at the beginning of a meeting.
E: Then, so basically, we could just have that issue stored in the project board, and then it's always there. And then my question... actually, I did have the question of how the sub-teams are managing their own project boards; like, Release Engineering has theirs, Bug Triage...
A: So initially, what we were doing... you'll note that we have not done a scrub for this meeting, but what we were doing is, towards the end of the meeting, we would go through and walk the board a little bit, and that was kind of the intent. I think that we held off on that because these backlogs have gotten a bit ominous, and the hope is that, now that we have Triage Party, we'll be able to jam on that a little faster.
A
So
it's
it's.
I
think
we
have
it's
less
about
new
issues
and
more
about
more
about
making
sure
that
we're
we're
turning
the
crank
on
the
various
pr's
that
are
out
for
review.
So
I
know.
A: I have a few open from, like, Sasha and Carlos that have not been reviewed yet, and then also turning the crank on older issues. So some of the older issues, probably from some of the consolidation that you've done... you know that some of the issues are kind of like these big blobs of problems that are just like...
A
This
is
a
problem
with
the
community,
like
you
know,
like
storing
google
images
like
storing
container
images
in
google
right
and
they're,
just
like
multi-cycle
efforts
right
that,
like.
A
Get
an
update
and
so
getting
better
about
turning
around
updates
for
those
like
large,
vacuous
projects
and
and
then
you
know,
driving
down
the
time
to
response
for
for
the
things
that
are
active
right.
A: The way I work is kind of: if it's in front of me, I'll do it. I think this is common, right?
A: That's cool, yeah, okay. So what I'll usually do is I'll go to pulls, and I'll hit pulls first. And the way I work is, you know, we already know that there are the created ones; there is assigned. I will do assigned, and I will sort by recently updated, the idea there being that if it's more recently updated, it's likely that it's more important. Or maybe, maybe... yeah.
A: But right, not always. Exactly, so I'll try to take an action on the most recent things first, which of course means that, you know, we have to review Sasha's update to this KEP. And then I'll hit... so this is like, if you want to get in touch with me, ways to do it: I will review the things that I'm assigned to first, then things where I've been requested as a reviewer.
A
Reviews
on
those
will
take
longer,
but
again
they
will
be
sorted
by
updated
and
I
will
turn
through
them
that
way
and
then,
from
there
you
know
from
there.
Maybe
maybe
we
get
to
mentioned
right
and
mentioned
is
a
lot.
Hairier
mentioned
is
actually
not
as
bad
for
for
this
overall,
but
then
I'll
go
to
issues
I'll
kind
of
do
the
same
thing
I'll
hit
assigned,
and
I
will
sort
by
recently.
A: Yeah, right. And, you know, at any one time I have about seven, eight plus items in my assigned list, and then mentioned. Mentioned, again, is much hairier, and I will try to do something like "mentioned and not assigned and not the author," or something like that, and then get a better idea of what I have to look at next, like the CI Signal thing; I need to give that some love. So yeah, that's my very vague, basic triage process, but that's how I do it on a personal level, and not necessarily for the team. But yeah.
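For reference, the triage flow described above maps onto GitHub search filters along these lines; this is a sketch of the idea, since the exact queries are an assumption rather than something stated in the meeting:

```text
# PRs assigned to me, most recently updated first:
is:open is:pr assignee:@me sort:updated-desc

# PRs where my review was requested:
is:open is:pr review-requested:@me sort:updated-desc

# Mentions, excluding things I'm already assigned to or authored:
is:open mentions:@me -assignee:@me -author:@me

# Same pass again for issues:
is:open is:issue assignee:@me sort:updated-desc
```

Each of these works in the GitHub search bar or in the Pull Requests / Issues dashboard filters.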
A
We
need
to,
I
think,
the
expectation
moving
forward.
Now
that
we
have
technical
leads
the
I
would
expect
the
technical
leads
to
handle
more
of
the
day-to-day,
so
that
tim
and
I
can
take
more
of
the
you
know
so-
they're
focusing
on
the
tactical
where
tim
and
I
can
focus
more
on
the
strategy
level
right
so
understanding
what
those
nebulous
issues
that
we
have
open
for
for
years.
A
What
a
project
plan
looks
like
for
those
right
and
turning
those
into
chunkable
tasks
for
the
teams,
and
then
things
like
running
the
running
the
the
day-to-day
board.
Those
would
go
to
jorge
and
sasha
for
the
various
sub
projects.
E
So
what
I
see
unfolding
is
once
we
actually
have
this
looking
at
the
old
stuff
session
that
we've
been
doodling
about,
then
we'll
have
less
noise
and
some
notes
around
delegating
certain
items
to
certain
parties
to
take
up
and
own,
and
we
may
close
some
items
as
well
and
then
what
that
would
leave
would
be
a
more
refined
set
of
tasks
for
the
release,
team,
backlog
or
and
then
upon
to
some
of
the
sig
release.
E: ...members who want to get more engaged as we head into a new cycle, like some of those maybe process-improvement-oriented items that they might be able to take on, because they're past release team members. And then all of the items that they might do would be in that project board, and we could walk that board every time that we meet in the SIG, and then also having the umbrella issue for items that are specific and urgent around 1.19.
A: So that's kind of the reason that we don't walk the board anymore. It's kind of predicated on: one, getting Jorge and Sasha in as technical leads, getting you in as a program manager to actually look at some of this stuff, and then taking the time to actually chunk up... to better understand the backlog before we...
A: ...throw people at, like, "here's this big thing." And yeah, so that was kind of the idea behind it, and I think we've got the team now. You've got the Doodle up for that thing, so I've got to answer that Doodle. But yes, yes, good things coming with prioritization.
A
We
are
over
time
everyone.
So
thank
you
for
taking
the
time
to
hang
out
with
us.
I
hope
the
meeting
was
informative
and
useful
and
all
those
good
things-
and
I
hope
you
have
awesome
days-
I
will
catch
you
at
the
next
one
of
the
next
release
team
meetings,
if
you're
there-
and
if
not
for
next
week,
the
sig
release
meeting,
take
it
easy.