From YouTube: Kubernetes 1.19 Release Team Meeting 20200720
A
All right, welcome everyone to another edition of our 1.19 release team meeting. I'm Taylor Dolezal, the release lead, and I hope y'all had a fantastic weekend. Let's get started with enhancements. Neverend, what's going on?
B
A
Excellent, thank you, Neverend. I think I was a little bit quiet, so can you all hear me, for those on video? If you give me a thumbs up: is that a good volume for you? Okay, fantastic. Thank you again, Neverend, and thanks everyone; we can follow up. I know we've got that thread about some of the issues that we're looking at, but yeah, if you need anything on that front, obviously we can continue talking in there. Moving along to CI signal with Dan.
C
Yeah, so we are red, but probably not for long. We have PRs open to fix both of the failing jobs on master-blocking. The first one there is for the IPv6 endpoint slices; that's been approved but is currently being retested. And then the NVIDIA plugin PR has been merged and is running right now, so expect that to be green before we get into any of the other stuff.
C
Yeah, the main ones were the mirror pod with grace period test on node-kubelet-master, which is flaking about five percent of the time. And then there was a general comment that there are quite a few runs of jobs that either just have no build log or are timing out.
D
All right. Seth Jennings from SIG Node is investigating the mirror pod issue. It looks like he found some potential causes, and he thought we would have either an approach or agreement to roll back the change by end of day today. The third issue that was mentioned was something about excessive logging. That was a regression in k8s.io/utils that got reverted; there's a PR with the rollback, which should relieve load across the board for our integration and CI jobs.
D
That leaves the few weird ones emerging, like the pod timeouts or failures with no logs. We don't have a lot of information about that yet.
D
Once we get the load issues handled, we'll probably start digging into that one and seeing if we can figure out what the cause is there. I would still say that is pretty concerning, because it masks a lot of signal: we wouldn't know if we were getting widespread failures, or whether we're getting widespread failures because of infra reasons. There was one other thing that I found over the weekend, which was a scheduling throughput failure in one of the scalability jobs.
D
So I'll put that in the notes.
C
Cool, thanks, Jordan. So yeah, those are mentioned throughout here. Other than that, we're looking pretty good on master-informing.
C
There were some failed attempts by me to get the Windows jobs running successfully again, but I think they are being investigated. About 10 days ago there was a PR that added skips for some of the jobs, because they apparently weren't supposed to be running from the SIG Windows perspective, and it looks like the individual who opened that one is investigating some of the other ones as well. So hopefully we can get some green Windows runs, which would be the first time in quite a while.
C
The other thing to note here is that one of the kubeadm jobs started failing, and they have identified the issue: it had to do with something with the etcd version. I believe there's a PR open on the kubeadm side to get that fixed up; I just saw a comment there an hour ago, so we should be looking pretty good here, probably by the afternoon.
G
I did want to take some time at the end of the call to try to figure out what all we need to line up for the release starting tomorrow, or once this happens, so we can defer that for later. Thank you.
A
Absolutely, that sounds good to me. Thank you, Dan; thank you, Jordan; thank you, Dims. Any questions for CI signal or for Dan?
F
Good morning, everyone. Our status is yellow. For the most part we do see a downward trend, which is a pretty good thing, but we do have a few items whose counts have increased. I think this is probably just due to newly found issues that are being worked through, so we in bug triage will continue to follow up and track these items.
A
Thank you, Jen. Let's move on to docs.
H
Hi, everyone, hope everyone's doing good. Docs is green. We have two pending merges, and I am going to reach out to Tim Bannister to see if we can get those merged; they have all the edits and review comments addressed, and it looks like they're ready for a final review. I'm being overly optimistic here, but they are low risk, and I'm hoping to get them merged today or tomorrow. Branch 1.19 is healthy.
H
Also, there was an early merge conflict fix that was pushed yesterday; someone was online when I wasn't expecting them to be, and it got merged, so I'm happy. Any questions?
A
Thank you so much; fantastic job getting all of those closed up last week. And yeah, if you need any more help on those, please let us know; we can engage SIG Docs or anyone else on that front. So thank you very much, Savitha. Release notes with Adolfo.
I
Hi everyone. First of all, there are no changes for us since the last RC was cut, and we've started contacting the SIGs to capture the changes they want to reflect in the final document. We are starting to receive the first data from them, and we'll get back to you as soon as we have everything ready. So we're green.
A
Fantastic. Green is go, or go faster, which we've seen with the Go versions. Fantastic. Thank you, Adolfo. Any questions for release notes?
A
Fantastic. Comms update with Max.
J
Hey everyone, hope you're all doing well. So far we are still green. Thank you for your input on the release blog, Taylor; it looks good. We're going to finalize it this week and the upcoming week, doing some checks on the enhancements to make sure we didn't mismatch something or write the wrong enhancements into it (hopefully not), and yeah, then we'll get it in line.
J
For the content for the CNCF takeover of the whole community input, I already talked to Eli, and he said they have a huge list of stuff which they would like to throw in, so I'm looking forward to getting the input from them. And yeah, that's basically it.
A
All right, release branch management.
A
I didn't see Carlos on the line, but I can go through his updates, if I can.
K
So yeah, rc.2 is planned for tomorrow; from that perspective, we should be good to go. We've been doing branch fast forwards since we cut rc.1, or since we cut rc.0 actually, and the branch fast forwards will stop upon the successful release of rc.2.
K
Something to note there: at that point, rc.2 will also mark what we've previously referred to as code thaw, which means master is again open for development for 1.20, starting after the successful release of rc.2. Afterwards we'll be expecting cherry picks to the release-1.19 branch, if anyone expects to get things in for 1.19.
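(As a hedged aside for readers, not said in the meeting: a 1.19 cherry pick is typically opened with the hack/cherry_pick_pull.sh script in kubernetes/kubernetes. The remote names, GitHub user, and PR number below are illustrative placeholders.)

    # Sketch: open a cherry-pick PR of merged PR #12345 against release-1.19.
    cd kubernetes
    GITHUB_USER=<your-github-user> \
    UPSTREAM_REMOTE=upstream FORK_REMOTE=origin \
      hack/cherry_pick_pull.sh upstream/release-1.19 12345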
K
At that point, some interesting things are happening for rc.2, outside of all of that good stuff: the vanity domain flip, which switches the underlying backing GCR repository from Google's gcr.io/google-containers over to the k8s-infra gcr.io/k8s-artifacts-prod, or the geo-located endpoints us.gcr.io, eu.gcr.io, and asia.gcr.io, slash k8s-artifacts-prod.
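(To make the flip concrete, a hedged sketch that is not from the meeting; the image name and tag are illustrative. The same image is reachable through the old and new backing repositories, and the k8s.gcr.io vanity domain keeps working, re-pointed at the new backend.)

    # Old backing repository (pre-flip):
    docker pull gcr.io/google-containers/pause:3.2
    # New k8s-infra backing repository, plus a geo-located endpoint:
    docker pull gcr.io/k8s-artifacts-prod/pause:3.2
    docker pull us.gcr.io/k8s-artifacts-prod/pause:3.2
    # Vanity domain, unchanged for consumers:
    docker pull k8s.gcr.io/pause:3.2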
K
That is going live right now, which means, as a result, we need to switch publishing our images from the old location to the new one. So I've got PRs up; that's in progress, I'm doing testing on that right now, and we're looking pretty good.
K
What that means for release managers, as something to be aware of, is that you'll need to promote the staging images to prod after the mock stage and mock releases are successful. I've opened an example PR of how to do that, with instructions included. I'll be integrating the instructions into the branch management handbook, but no promises that that will happen before you need to actually start the work; I'll be online and all that good stuff.
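(For context, a hedged sketch of what such a promotion can look like; the file path, image name, and digest are illustrative placeholders, not the real 1.19 values. Promotions are pull requests against the image-promoter manifests in the kubernetes/k8s.io repo, mapping a staging digest to production tags.)

    # Illustrative promoter-manifest entry, appended as YAML via a heredoc:
    cat <<'EOF' >> k8s.gcr.io/images/k8s-staging-kubernetes/images.yaml
    - name: kube-apiserver
      dmap:
        "sha256:<digest-from-staging>": ["v1.19.0-rc.2"]
    EOF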
D
I had a question about reopening master for 1.20. We have several issues or pull requests that are fairly large or require bumping dependencies to pick up fixes. Are we comfortable reopening master before those issues and PRs are fixed?
K
Fair; let's async it on the SIG Release channel, and if we need to bump the code thaw, we can do that.
A
Awesome, thank you, Stephen; thank you, Jordan. Any questions for release branch management? Action-packed today, I love it. No, we don't want action.
G
Okay, so when do we want to talk about Go 1.14, or the older branches?
M
I don't have anything too specific to add. I think the general theme, like Jordan and Dims and Stephen have mentioned, is establishing these specific, concrete action lists of things to do day to day, making sure that those are documented and shared explicitly so everyone understands, and making sure we're really communicating and driving the action to resolve each of those, because things end up dependent and stacked up, and we have choices that we have to make.
A
Absolutely agree, harkening back to the boring, boring, boring comment that was made a little bit earlier.
A
Awesome. So, updates that I have for you: tomorrow, end of day Pacific, we have code thaw. Like Stephen said, that's going to go live, and code thaw is going to synchronize with rc.2. Then, August 3rd, on that Monday, we're going to pick up daily burndown meetings to go through statuses.
A
I've talked with some of the leads on that front, and I imagine that with more meetings we don't have to pack as much content into each one. We're definitely going to call those off the closer we get to KubeCon; we don't want to be demanding too much of anyone leading up to the release, and hopefully most things are done by that point in time.
A
After that we have August 6th, which is actually two milestones: the cherry pick deadline, end of day Pacific on August 6th, and test freeze, end of day Pacific on August 6th. Any questions on that front? Then I'm going to go to SIG Scalability and then to the parking lot, where we can chat, Dims.
A
Awesome. SIG Scalability update: as of today, scalability jobs look good, with no release blockers. Moving into open discussion, as always.
A
That I'm not sure on; that's just the note within the release agenda, but it sounds like it might not be accurate. All right.
A
Yeah, we can take a look at that one. Please put any items in the retro that you would like to discuss. I know I've been pinged a couple of times by the CNCF this week asking for clarification on the release schedule and why some things got moved and pushed out. I know that Max and I covered that within the 1.19 release blog post, so that will be going in there as well once this does release.
A
So if there are any questions on that front, obviously please direct them to me, any of the shadows, or the SIG Release leads and co-chairs. And let's go to Stephen; let's talk about some Triage Party.
K
Hey everyone. So, we've discussed Triage Party in the past as a cool tool to potentially use for SIG Release, to help with the bug triage workload and to overall defray some of the cost of understanding what's going on for releases, milestones, and all that good stuff in kubernetes/kubernetes.
K
There's a link in the notes for that. What that means is we don't necessarily have a process for this; it's more for y'all to get a pulse point on what you do and don't need out of this tool, and we can tweak the configs as necessary. For now this is kind of an initial baseline config of Triage Party for bug triage. Please check it out and let us know what you think; every other team, feel free to check it out.
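(For anyone who hasn't used the tool, a minimal hedged sketch of what a Triage Party configuration can look like; the collection and rule below are illustrative, not the actual SIG Release config, and the exact schema should be checked against the google/triage-party README.)

    # Illustrative triage-party config, written out as YAML via a heredoc:
    cat <<'EOF' > config.yaml
    settings:
      name: sig-release
      repos:
        - https://github.com/kubernetes/kubernetes
    collections:
      - id: milestone-bugs
        name: v1.19 milestone bugs
        rules:
          - milestone-open-bugs
    rules:
      milestone-open-bugs:
        name: "Open kind/bug issues in the v1.19 milestone"
        type: issue
        filters:
          - milestone: v1.19
          - label: kind/bug
    EOF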
K
Let us know what you think. Lori, do you want to drop some comments too?
O
We want to give a closer look to some of those items, as well as a few others that have popped up, and basically clear out noise, so that when we get running with Triage Party there's a good, solid foundation to get going with. There might be some things in our backlogs to delegate to other SIGs; there might be things that have been usurped by bigger and better issues over time.
O
We just want to take a look at that, so that will happen this week, and then, as part of that conversation, we'll think about how to activate some more engagement from all of you on what the process should look like going forward for using Triage Party.
A
Thank you very much. I'm excited to use it; that's going to be quite a fun tool, and I'm already getting use out of it. Fantastic. Stephen, Go 1.15.
K
Yay. Okay, so a few things are happening this release cycle outside of all the infrastructure changes. As we've been working through the updates for Go 1.14, there were a variety of problems uncovered, making Go 1.14 unusable for us, or, I guess, not a preferred Go version. So we're waiting out Go 1.15, which is currently in beta 1; we're expecting an RC for that soon. I am hitting the button on creating a PR for building the 1.15 beta images, the kube-cross image for that.
K
There's a tricky thing that we have to deal with on the previous release branches: the fact that we're currently at Go 1.13 there. Right now the release branches are on Go 1.13.14, as of earlier today, upgraded from 1.13.9.
K
I had opened PRs for updating those to Go 1.14.6 to get some feedback, and the feedback was, generally: let's not do that. So the tricky thing that we're running into is based on our support cycle, overlaid with Go's support cycle.
K
We are probably, actually definitely, going to need to move to Go 1.15 in the previous release branches as well, so we're figuring out how to tackle that. That means it needs to land on master, then be cherry picked over to 1.19, and then we need to handle any oddities that pop up within the 1.18, 1.17, and potentially 1.16 branches. I say potentially for the 1.16 branch because, by the time we release Kubernetes 1.19, that will just about be the cutoff for Kubernetes 1.16 to be out of support. So we can maybe opt to not take the hit on 1.16, skip that upgrade, and let 1.13.14 be the last Go version update for the 1.16 branch. So, thoughts on that? I know Jordan has some.
D
So, where to start. When we moved to Go 1.14, that brought along a fair number of changes; the change set to pull in all the dependency updates for Go 1.14 was very large. So if we updated our release branches to Go 1.14 or newer, we would also have to pull in all of those same updates. That includes things like golang.org dependencies, updating the etcd client, and protobuf updates. It's a fairly large number of changes, much larger than I would expect us to make on a release branch in a patch release. So it's not just a matter of updating the Go runtime; it also sort of shifts the most important 20% of our dependencies up several versions.
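(A hedged sketch of why such bumps are heavyweight, not from the meeting; the module and version are illustrative, not the actual change set. Each pinned dependency in kubernetes/kubernetes goes through the repo's vendoring scripts, and a Go bump multiplies this across many modules.)

    # Pin one module to a specific version, then regenerate the vendor tree.
    hack/pin-dependency.sh go.etcd.io/etcd v3.4.9
    hack/update-vendor.sh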
D
So that's one thing to consider, and it would apply whether we updated those old release branches to Go 1.14 or Go 1.15. The second thing to consider is that there's a known issue in Go 1.14; I'd call it a potential issue. It's the kind of thing that could have been encountered before, but Go 1.14 makes it more likely, so it's sort of unclear how big of a deal it is, but some people have reported seeing it. I linked to it from the notes there.
D
In my mind, that makes Go 1.14 something we would not want to release a patch release on. I would really rather wait until Go 1.15 is available, which resolves this, and then go to Go 1.15.
D
That's always true. The scalability team has been running on the 1.15 betas, which is good; we got engaged there earlier, and we actually reported some problems, which they resolved. So that's great. It means that, hopefully, as far as the scalability perspective is concerned, the 1.15.0 release should be usable by us, which is great. But sure, once we update to 1.15.0 and soak for a few weeks, we might find something else that makes us wait till 1.15.1 or so.
G
If you ask me right now, I would say don't do it. But if we can move the decision for the older releases out a few weeks, until we get our hands on 1.15, try it on master, and see if 1.19 is releasable and release 1.19, at that time we can make a decision on the older branches.
K
The reason I put this in the parking lot is that this leans less into the release team and more into release engineering overall, scope-wise. I would raise the concern once again: this is a thing that we have encountered for several release cycles. We need tighter interlock with SIG Scalability.
K
They have been testing this stuff already, and I haven't seen any notifications out to SIG Release regarding that. So what we're trying to do, at least on the SIG Release side: I've been working on a variety of images to pull in canary versions of Go, or pre-release versions of Go, so that we can start to build and test against newer Go versions and start to get some earlier signal there. Once this PR merges, I'll start promoting stuff and get a branch up to start soaking some of this. But yeah, we do need tighter interlock; this is a continuing thing that we've been working through.
K
So yeah, it's less for the release team; this information should be disseminated from the release engineering subproject to the release team, because this is a SIG-wide concern, not just a quarterly concern. One: release managers, if you're not signed up for the golang-announce mailing list, please sign up for the golang-announce mailing list, so you're getting notifications about pre-announcements; we should all be signed up for that list if we're helping to manage Go updates. Two: I'm working on a doc about what Go updates look like overall, and once we know we're going to do an update, which we know is going to happen quarterly, we should signal intent to SIG Scalability.
K
I guess the hope is that if the notification is coming from us, then hopefully we're getting faster feedback from them. I don't know what this looks like, honestly, and Lori, we can sit down and talk about it, because it has been a multi-cycle thing.
A
Even if it's an informing thing, I think that, personally, I'd love to see something on the SIG Release mailing list or anything like that, like "hey, we're testing this out", just kind of an info-log type of update. Even visibility at that level would, I think, be helpful, to see that that's getting kicked off.
K
Yeah, because honestly I would like to get us to the point where we're doing pre-announces for our releases as well: "hey, there's a new RC coming in a few days; if you're keen to test that out, jump on this channel, and here's the issue for tracking", that kind of thing. And that would include thinking about what the new versions of our various dependencies are.
G
So can we switch mode now to the 1.19 release? In that sense, I have a bunch of things that I wanted to talk about. Oh yeah, so the elephant in the room we already talked about, which was Go 1.15, and then the second one that we need to get going on, or wrap up, is the VDF. When we switch the images, we have references to us.gcr.io images and things like that in our repository, so we need to clean them up. That would be the second thing that we need to tackle.
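(A hedged sketch of that cleanup step, not from the meeting; the exact strings to swizzle would come from the in-flight PR discussed below. Finding the leftover registry references is a repo-wide search, for example:)

    # List files still referencing the old registry locations.
    git grep -l 'gcr.io/google-containers'
    git grep -l 'us.gcr.io'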
And then there was an etcd update that was needed for one of the CI jobs. I don't know if that needed a new bump in the etcd version that we are using. Jordan, do you remember that one?
D
It's not required.
G
Okay, good; that takes one off my list. The other one is that there was a series of issues around huge page support in cAdvisor, and kind was hitting it first. I think we nailed down the changes in containerd as well as in the kubelet.
G
I have to cross-check whether everything has landed, so that's a to-do on my list: to make sure that we have all the changes for huge pages.
K
So, to comment on the swizzles for the VDF: that shouldn't be too much of a concern. There is a PR open to swizzle some of that stuff; I'm not sure who the contributor was that opened it, but I can track that one down. I was intending on picking up that work. But with the references to us.gcr.io, regardless of whether or not the VDF succeeds, those references will continue to work, so we're fine there with regards to core images.
G
That's good. So when is the current date for the VDF?
K
It's happening right now; over the course of the next four days it should be complete.
G
Okay, so when is the earliest we could switch to the newer ones? Next Monday? And the newer ones for which, exactly?
K
Yep, yes; I want it clean.
K
There is already a clean-up PR in flight, so I'll chase that down. If that is stalled, I will pick it up and carry it forward, because there are probably one or two more references to add from that PR.
G
Okay, will the PR succeed even before the VDF is done?
K
It should already be succeeding, yeah, and if it's not, I'll come in and clean it up. We won't release any holds, PR-wise, until we get the all-clear from Linus that the VDF is successful, so that should hopefully cover it. Anything that's published within the RC is going to have the new name: this is going to be the first RC where we use the promotion process within the release process, so the conformance and kube-* images, and hyperkube for older branches when those get released, will all be under the new name.
G
Okay, the only other thing that we haven't touched yet: from your experience, even with the Go 1.13 and 1.14 Bazel stuff, do you see any problems with the Bazel stuff in 1.15?
K
It's timing, I guess. The Go boundaries we usually care about are the rules_go updates and the Bazel toolchains updates. I talked with Eric last week, and I now have admin access to repo-infra; repo-infra is what handles most, if not all, of the Go rules for kubernetes/kubernetes, so I now have access to cut releases there when the Go ones come out.
G
Then the last question is: if all the stars line up that we talked about, how much soak time will we have before we can release, based on the current dates, with 1.15, with the VDF, and with all the other things? Go is supposed to come out August 3rd or something.
K
Yeah, that's the target, so that means we have a few weeks of soak. Hopefully, once this image is promoted, I'll have a branch up running the 1.15 beta, so as the RC comes out, and as the actual release comes out, we'll get more signal there. We should have an additional 10 days or so of it soaking on that branch to get an idea.
G
You know what we are up against, right? Yes, this is almost like an interrogation, so everybody knows that.
A
All right, thank you, everyone; any more questions or items to address? Thank you. I know we're running a little bit longer than normal, but, like Jim said, this is important information that needs to be addressed.
A
We're about five weeks out from a release, for those of you participating for the first time, so this is all normal. There are no flashing lights; or rather, we know about the flashing lights, and there's a GitHub issue open for it.
D
There we go: under release branch management in today's agenda, I will build the list.
O
So just a real quick question for the leads and shadows: would it help if we created a small template in the agenda to track such action items? As we go through these meetings we have things we should follow up on, and in the past I've created a very simple template for just tracking who owns the task, what the question is, and when the expected turnaround date is.
A
That would definitely help, if we could have that added underneath either the "how this meeting works" section or something like that; then we can quickly link to it and get to it.
O
Okay, it would be like a running list of action items. We could either collect those items per meeting and then just follow up in the next meeting, or we could have a running tab. So why don't we take this offline: whoever has ideas and thoughts about how they would like this to look, let's talk about it in the SIG Release channel.
K
Just a general great job, everyone; I see people are working super, super hard on nailing the last bits of this release. It's a marathon. We've still got a lot of time left in the release, and we've got COVID and just general priorities skewed all over the place, plus KubeCon coming up, so speakers are getting ready for that, plus internal conferences and all that good stuff and job updates. So, kudos to everyone.
A
Awesome. Any more comments, questions, concerns?
N
A
Thank you, Jeremy. Wonderful. Have a fantastic Monday, everyone; I'll see you on Wednesday. Don't be a stranger for anything during the week, and we'll see y'all later. Take care!