From YouTube: Kubernetes SIG Release 20200714
A: Actual cloud recording. Hello, hello everyone. Today is July 14th. This is a SIG Release meeting. It is a meeting that is recorded and available on the internet for viewing later, so please be mindful of what you say and do. Please be sure to adhere to the Kubernetes code of conduct and, in general, just be awesome people. So I've got a few things on the agenda. Tim and I do a sync every Monday, and this is kind of a reflection of the things that we discussed yesterday.
A: And we should be back later in the day with an all-clear, but the mock stage and mock release are done, so within the next two hours or so we should have the all-clear.
All right, so the next one up: there is a Go security bump. I believe there are new patch releases for Go — 1.14 and 1.13 — that are supposed to be landing today. As soon as they do, we will be consuming them, so stay tuned for that. There's going to be some PR work around that soon afterwards.
So I think one of the things that we were trying to understand is what we wanted to do: do we attempt to roll the new versions of Go into these patch releases as well, or delay the patch releases slightly so that we can do that? As you know, trying to merge things on Kubernetes can be a journey sometimes, so coordinating multiple PRs — let's say 15-plus PRs to do these various bumps — I'm not sure that we will be able to do that by the time we want to do patch releases. Tim, do you want to drop some of your thoughts on this as well?
B: I can hear him — can you?
B: Okay, so my thinking: this depends on the severity of the CVEs. My worry scenario is: whatever comes out of Go today comes out — and maybe they're delayed until tomorrow, I don't know, who knows; they're saying something's coming today. Tomorrow we release; we've got our content, we know what our content is, we're shipping our stuff for our reasons, and our community likes it and is wanting to upgrade. But then we start getting into questions like: oh, that Go stuff that's coming through — what if it's serious? And then maybe our community is also saying: oh, this is serious, I want that. So the problem scenario is if next week we then have to put out another release because there's something important for our community to have. That this is a security thing from the get-go is what turns that scenario up a bit in my mind. They didn't describe yet what it is; we don't seem to have any signal.

Is there embargoed info out there on this? We don't know. I've been watching their repo — one of the things that I like to do, sort of a nerdy hobby, is to watch people's repos for signs of CVEs that aren't actually announced; that level of git archaeology and snooping is interesting to me — and they landed a patch yesterday, and looking at the code, I could kind of imagine that this might be a high-sev issue. It's a network-serving-code thing, the type of thing that could potentially impact us. But we've got to get their write-up, get the details, do an analysis, have our PSC involved.
A: Now that was great — I can actually hear you now; my headphones are working. So yeah, more later. We'll keep a tight feedback loop with the rest of the release managers. We've got a chat already rolling with the PSC, so stay tuned, everyone.
A: Alrighty, so next up: there are patch releases tomorrow. Just an additional note about that — Patrick [unclear] — for people who are still consuming hyperkube, there are some fixes for the hyperkube kubelet in there. In the previous patch release cycle we did some fixes on the hyperkube kube-proxy; these regressions cropped up as a result of updating the underlying hyperkube base image.
A: So the Debian hyperkube base image was based off of the Debian base image, and the Debian base image does not include some of the same magic that the debian-iptables image does. There's basically an iptables wrapper script in there to recognize what version of iptables you're running and to force the various proxy rules to work with that version.
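The decision that kind of wrapper makes can be sketched roughly like this — a toy illustration of the idea, not the actual script shipped in the debian-iptables image (the function name and the rule-counting commands in the comments are assumptions):

```shell
# Toy sketch: pick the iptables backend (legacy vs nft) that already
# holds more rules, on the theory that it's the one the host is using.
choose_iptables_mode() {
    legacy_rules="$1"  # e.g. from: iptables-legacy-save 2>/dev/null | grep -c '^-'
    nft_rules="$2"     # e.g. from: iptables-nft-save 2>/dev/null | grep -c '^-'
    if [ "$legacy_rules" -gt "$nft_rules" ]; then
        echo legacy
    else
        echo nft
    fi
}

choose_iptables_mode 12 3   # prints "legacy"
```

The real wrapper then execs the matching `iptables-legacy`/`iptables-nft` binary so kube-proxy's rules land in the table the host actually consults.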
A: So we fixed that, and then realized that for people who were using a hyperkube or virtualized kubelet there were still issues lingering, as confirmed by multiple people. So there's a fix for that in the next release.
A: Anything else interesting on that one, Tim? Yeah — never mind. So those will come out tomorrow. There should be some additional notes on some other fixes across branches that you'll see. But again, this will be dependent on the information that we get from the PSC regarding the golang update, so stay tuned again. All right — any questions on patches?
A: Okay, all right, so next up: branch fast-forwards. Branch fast-forwards have been happening since we cut the RC 0 — so 1.19 RC 0. Branch fast-forwards will end on the 21st, or whichever date we end up doing the 1.19 RC 2; currently that's scheduled for July 21st. Nothing too exciting there — it's more of what we've done over the last few cycles; it's just that the cadence has changed somewhat now that we have moved the branch cut to code freeze. So nothing to worry about there. Release managers — anyone — any questions about that?
B: Not a question, but two comments. The difference, just for folks listening: in the past, the cherry-pick period between the cherry-pick deadline and the release was usually on the order of days; now we're going to have more on the order of weeks. So that's a slight difference. But what we're trying to do — going back to the spring, all of the COVID stuff starting, and the stress people had — is just dial things back and move slower. That's kind of what's going on there, just in the timing change.
The other thing to maybe reiterate for folks is that, on branch management for the standard patch release branches, the branch managers are sort of the last-level reviewer — to say, is this a valid cherry pick? After the appropriate SIG or working group (whoever happens to have ownership of the thing) has given the review and approve/LGTM, branch management makes the final call: does it go into the release branch for a pending .0 release like this?
B: Branch management has broken out a bit more relative to the release team. We don't talk quite as much, maybe, in the way that we used to, where the release team was really making the calls — because branch management hadn't established itself as a thing in quite the same way; branch management was only the later patches, so after dot-one.
B: On the final call for most of these: CI signal would feed in, branch management would feed in, along with the other folks involved with risk management on the release team — and ultimately Taylor sort of saying, "I'm really worried about this one, let's not pull it into the release branch just yet," or things like that, are usually conversations to have. And normally there's nothing going on for those three days or whatever.
B: We're down to the end, so this doesn't usually come up; but with the elongation in time there's a little more chance for something weird to happen. Rather than somebody just saying, "okay, this is done, I'm going to wait, I'm going to have a quiet couple of weeks," somebody may try to force something in at the last minute that's maybe not ready — or that did get ready — and we may have some more of those judgment calls to make. To really manage the risks of destabilizing something, it needs to just be a broad conversation.
A: Okay, so VDF is the next one up. VDF, our vanity domain flip, is the plan to cut over from the underlying gcr.io/google-containers over to community-hosted infra. So k8s-artifacts-prod will be the new underlying endpoint — or rather, the geo-pinned us.gcr.io, eu.gcr.io, and asia.gcr.io k8s-artifacts-prod.
A: You might have seen on the mailing list that there was discussion about doing this starting this week — starting yesterday. This process takes approximately four days for everything to settle out, because they roll out the images in stages.
A: Part of the trick is that they have to do it for three geographic locations, so it happens in phases, and that takes a little time — about four days. So before we embark on that, we have to make sure that our house is in order for release engineering: making sure that we're able to actually continue to do releases — patch releases or otherwise — and that there are no important infra upgrades happening at the same time.
A: So, given everything that we've said, there are quite a few infra-related things happening this week — the Go patch releases, the RCs — all of which make it more risky to consider doing the VDF this week. So tentatively we have the VDF planned for next week. (VDF equals vanity domain flip, for anyone who is not familiar.)
A: We have that work planned for next week, starting on the 20th, and hopefully there's nothing to worry about. We've had a few starts and stops — we'd initially planned this work for April — but as you're dealing with legacy endpoints, there are always fun things that you encounter.
A: What this will mean for release managers specifically is that we'll be moving to promotion of our images — the release images specifically. So anything that is a core server image — think kube-apiserver, controller manager, proxy, scheduler — the conformance images, and, for the branches that still support it, hyperkube.
A: Previously we've been able to push those images directly to the staging repo during mock stages, and directly to the production repo in the instance of official stages and releases.
A: With promotion, it's essentially: you aggregate a set of digests for the container images, you plop that into a YAML file that lives in the kubernetes/k8s.io repo, and then wait for that PR to be approved. After that PR is approved, there's a postsubmit that will carry the images from staging into production. So that's how it works moving forward.
A: So, something to be aware of: there is a tool called cip-mm, or container image promoter merge manifest, which was created by Linus, and that lives in the Kubernetes container image promoter repo. What that tool does is essentially merge manifests. So we have manifests that are—
A: Let me see if I can just show this to you really quickly, for people who are not familiar — and I'll probably quit Slack while I'm at it. So: k8s.io.
A: Right, so there's a folder called k8s.gcr.io, and the folder is split into images and manifests. The manifests folder is a set of instructions for the various staging repositories about where they should promote their content. In our case we work with a few, but primarily, for the release process, we care about the k8s-staging-kubernetes manifest.
A: This is the manifest that handles root-level k8s.gcr.io images. The reason we're handling root level here is that these images have been root-level since epoch, and it's important that we don't all of a sudden break a bunch of users who have been consuming Kubernetes images by pointing them to a new location.
A: The reason I mention locations: each of these manifests corresponds to, essentially, a staging subdirectory of the new image repository. So if I have k8s-staging-build-image — which is another one that we use, for the kube-cross images — and you remove the "k8s-staging" prefix, you get what the subdirectory will be.
A: So, say for the US location, it would be us.gcr.io/k8s-artifacts-prod/build-image/&lt;the name of your image&gt;. In the instance of the k8s-staging-kubernetes ones — let me just make that a little bigger — we're saying: we want k8s-artifacts-prod, but we also want k8s-artifacts-prod/kubernetes. So this is just making sure that it lands in that subdirectory.
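As a rough sketch, a promoter manifest of the kind being described looks something like this — the registry names follow what's said above, but the exact fields in kubernetes/k8s.io may differ, and service-account details are omitted:

```yaml
# manifests/k8s-staging-kubernetes/... (illustrative shape only)
registries:
- name: gcr.io/k8s-staging-kubernetes            # source repo releases push to
  src: true
- name: us.gcr.io/k8s-artifacts-prod             # root-level destination
- name: us.gcr.io/k8s-artifacts-prod/kubernetes  # subdirectory destination
```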
A: If we remove the "k8s-staging" piece, it lands in that subdirectory, but it also lands in the root. Again, most images will not ever be promoted to root; the only exceptions are the images that land in this staging and subsequent prod project. So the images — cloud controller manager, conformance, kube-apiserver, controller manager, kube-proxy, kube-scheduler, the pause image, and then also the etcd images — will get promoted into that root location.
A: So this is basically a map of the digests. This is a tag for an image, and this is its container image digest. So it's the name of the image, one of the digests, and then an associated tag with that digest — and then we can see that we have this image for multiple architectures.
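The digest map being walked through here has roughly this shape — image name, digest, tags; the digests below are invented for illustration:

```yaml
# images/k8s-staging-kubernetes/images.yaml (illustrative entries)
- name: kube-apiserver
  dmap:
    "sha256:02c1...": ["v1.19.0-rc.0"]   # manifest list covering all architectures
- name: kube-apiserver-arm64
  dmap:
    "sha256:88f0...": ["v1.19.0-rc.0"]   # architecture-specific image
```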
A: You can see that this is a slightly different format — okay, okay, all right, I'm on a call — and so again we have the image based on the architecture, the architecture-specific images. And then, say, for kube-cross: kube-cross does not have multiple architectures right now, so we can see those images listed here.
A
Usually
the
next
question
will
be
well
these
ima.
These
tags
seem
to
be
out
of
order
right.
The
tags
are
actually
not
out
of
order
this.
This
is
sorted
by
the
digest
number,
so
you
can
see
zero.
Two
eight
zero,
eight
eight,
which
is
confusing,
but
it
makes
sense
the
way
it
is
organized
and
it's
primarily
consumed
by
machines
right.
So.
A: Right — so, the tool that I had mentioned before, cip-mm.
A: In the instance of this one, it will move these into this format: it'll remove the individual list and turn it into — it'll still be a list, it'll just move onto a single line — and then it will merge in the additional image manifests. So I had requested this from Linus a little while back, and he totally crushed it. I used it.
A: I use it for most of my image promotions now, just to make sure that everything is well formed. He also uses it for the image backfills — which is hundreds, sometimes thousands, of images — to make sure that k8s.gcr.io stays in sync with k8s-artifacts-prod, or vice versa, depending on how you look at it. So the tool is pretty cool and it does exactly what we needed it to. My feature request was essentially: hey, I need to make this dead simple for someone who is doing a release.
A: It'll get to the point where we need to promote a large number of images at the same time, and I don't want there to be human error around that. If you consider all promotions, the ones that we have to do are, I guess, the largest set of promotions happening at the same time outside of the backfills, so making sure that goes off without a hitch is super important — especially given that these are images that we produce for the public. So, any questions on that?
B: And I'll try to make this brief, because I know we've got a couple of the Arm folks on the call, and I want to make sure we get an update on where that stuff went — I hadn't managed to make it to that SIG Architecture meeting. So: patch releases. Kubernetes originally having been monolithic, with the cloud providers in it, there's a lot of cloud provider code that is a potential candidate for patch releases, and cloud providers tend to view it as a notable bug—
B
If
current
kubernetes
doesn't
run
on
their
platform,
where
current
kubernetes
could
be
like
the
latest,
116,
not
119.,
so
that
drives
a
desire
to
have
the
things
working
and
integrated.
But
what
happens
is
the
way
the
code
is.
What
happens?
Is
it
attacks
me
while
I'm
trying
to
talk
or
sig
kitten?
I
think
so.
B
Cloud
provider
comes
up
with
a
new
instance
type,
for
example,
so
vm
has
more
or
less
cpu
a
different
way
of
attaching
disk
different
things
like
that,
and
somebody
runs
code
from
kubernetes
community
and
discovers
it
doesn't
work
right
and
they're
not
happy.
So
the
cloud
provider
existence
is
about
making
their
users
happy.
They
want
to
get
the
fix,
but
the
way
I
sort
of
view
this
is
as
we're
looking
at
patches
that
are
critical,
urgent
patches.
B: Well, if the answer is "it's related to a VM type that's brand new on Azure or GCP, or VMware changed something in vSphere," or whoever it is — to me, that starts to sound like a feature: the cloud provider has introduced something that, in the past, we had no way of designing for. And some of this comes back to design decisions, how the things are coupled: if you have a giant YAML file that lists all the VM types, obviously you're always going to want to be appending new VM types to that.
A: Lilo, do you have opinions on cloud provider cherry picks? Okay — nothing, all right. So yeah, I agree that this is something that comes up pretty frequently when we're reviewing cherry picks, and I think it's pretty much always Tim who goes: what should we do here? Is it a feature? Is it a bug? And, wearing my emeritus SIG Azure chair hat — I think that was the right combination of words — it is hard to catch everything.
A
We
don't
have
perfect
parity
between
between
instances
of
say,
arm
new
feature
sets
within
azure
or
and
and
kubernetes
right
where
there
are
also.
There
are
also
efforts
to
don't
eat
me.
Stop
it.
There
are
also
efforts
to
move
to
out
of
tree
providers
right.
So
in
instances
where
we
have
where
we
have
code
that
may
be
functionally
glue
code
to
allow
movement
we
have
to
consider.
Do
we
want
to
be
able
to
merge
that
stuff
too,
so
it
depends
for
me.
A
I
think,
as
the
easiest
way
to
say
this,
the
ways
we
can
head
that
off
is
tests
within
our
you
know
within
test
coverage.
It's
you
know,
curious
to
ask.
Well,
you
know
is
this:
is
this
something
that's
covered
by
the
tests
on
this
branch
right?
Why
you
know
when
we
talk
about
misses
right?
Is
it
because
it's
not
in
the
test
suite?
Is
it
because
it's
a
net
new
feature
right?
A
Did
we
introduce
a
bug
and
and
have
you
know
and
have,
as
a
result,
added
a
new
new
piece
of
a
new
piece
within
the
test
suite
or
is
it?
Is
it
a
net
new
feature?
Is
it
kind
of
on
the
cusp
where
it's
a
feature
that
was
introduced
to
fix
bug
that
we
should
take
advantage
of
from
the
cloud
of
a
feature
that
was
introduced
to
fix
a
bug
from
the
cloud
provider
that
we
should
take
advantage
of
within
kubernetes?
So
I
think
it.
I
think
it's
it's
case
by
case.
A
I
think
it
would
be
nice
to
have
some
rough
heuristics
around
when
it
is
right
and
wrong
to
do
these
patches
and
and
yeah,
maybe
we'll.
Maybe
we
can
start
chatting
about
this
more
in
public
than
we
we
normally
do.
A
I
think
that
that
might
be
cool
for
the
release
engineering
meetings
to
to
bring
a
set
of
of
patches
for
the
community
to
discuss
as
well,
because
when
we're,
because
when
we're
unsure,
these
conversations
usually
happen
on
our
private
release,
managers
chat
right,
so
so
I
think
bringing
those
bringing
those
out
in
the
public
would
be
at
least
one
step
in
the
process.
A
Stuff
alrighty
well
we're
going
to
speed
through
the
rest,
because
we
do
want
to
get
to
the
the
arch
stuff
really
quick.
We
have
a
golang
115
release
date
trending
towards,
I
believe
the
last
we
heard
was
early
august
depending
on
when
the
security
stuff
is
planned
to
land
should
be
today
for
113
and
114.
A: Go 1.15: again, tentatively early August. But it does seem — maybe we were expecting an RC, or maybe their betas are considered to be RCs for them — it looks like we were expecting an RC earlier for Go 1.15, because the intent was to start consuming that RC on our master branch. So there could be a delay, and the delay could be related to the fact that the 1.13 and 1.14 patches are security releases; it could have taken some extra time to get that information out.
A: Bazel — that's ominous, just left up there! That's a longer conversation. I don't really know what I want to say there, outside of: there are some efforts — or there has been some discussion in the past — to remove Bazel from the Kubernetes build infrastructure. That is on hold, at least for now, I think, until we can get to a place where we can actually do it.
A: So, if a CI job passes, that result is usually cached for similar CI jobs — similar CI jobs running in the same batch, whether that's per commit or within a certain period of time. If we've noticed that a specific job has passed within a suite, and the content that would cause the job to run has not changed, then Bazel will essentially skip it.
A: Bazel will find the cached result and skip it, so we need a similar mechanism for running jobs without Bazel. Catherine was working on something like this — like a Go build cache — but Catherine has moved teams on the Google side, so someone would have to get a better understanding of that work and then pick it up and drive it to completion.
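The caching behavior being described — run a job only when its inputs change — can be sketched with a content hash. This is a toy illustration of the idea, not Bazel's or Prow's actual mechanism (all names here are made up):

```shell
# Toy content-addressed skip: re-run a job only when its input files change.
run_if_changed() {
    job="$1"; shift
    stamp="${TMPDIR:-/tmp}/ci-stamp-$job"
    # hash all input files together; any content change alters the hash
    hash=$(cat "$@" | sha256sum | cut -d' ' -f1)
    if [ -f "$stamp" ] && [ "$(cat "$stamp")" = "$hash" ]; then
        echo "skipped $job (inputs unchanged)"
    else
        echo "$hash" > "$stamp"
        echo "ran $job"
    fi
}
```

Calling it twice on the same inputs runs the job once and skips the second time; touching an input re-triggers it.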
A: So, just something to keep in mind: if there is interest in helping to rip out Bazel — if you enjoy the idea of a project like that — let us know and we'll link you up with SIG Testing to chat more about it. But I think it's early days; we might hear something closer to end of August or early September about where we're moving there. So, any questions on that?
Okay, last one up — this is again related to Go updates. We've been in discussion in various ways — kind of offhand, but also with the release managers, and then offhand again — about what it would look like to produce our own golang images. This would give us an opportunity to essentially consume golang closer to tip — and to the tip of the various release branches — and get that signal a little faster.
A: I don't believe the scalability team produces their own images, but they do consume earlier releases of Go than we — the community and SIG Release — do, so it'd be nice to move to that. We're kind of dependent on the kubekins-e2e images as well as the kube-cross images, and both of those require a golang image of some sort to be based off of — or rather, a Go version to reference to pull into the image.
So there are a few things to think about there. But if we're able to do it, it means we can push on kube-cross images faster, which means we can turn around Go image bumps faster. So: something to think about, and something I'm going to be starting to poke at — if any of the release managers are interested in potentially taking a look at this as well with me.
A: Let me know. Cool, all right — and the last thing on the agenda, but certainly not least: we are talking about support for various hardware architectures and operating systems. We've had discussions with — the illumos folks have dropped by our calls before, and Arm more recently. I believe we have some Arm folks on the call right now, so if you all want to say hi and maybe give any updates from the SIG Architecture meetings — I believe there was a separate discussion there.
A: Yeah, I didn't get to pop in there. All right — well, in that case, let's put this back in the holster for future topics.
B: In the SIG Architecture channel there was chatter that makes it sound like the meeting did happen, so the upload probably just got snagged up somewhere, yeah.
A: There are also issues on and off with Splain, which handles automatic uploads to YouTube, so it's possible this just hasn't been updated yet. Yeah — it should be recorded, but probably not uploaded. It's probably recorded, and Splain dropped the webhook, and it has yet to be manually updated.
A: Okay, all right — so I think we're wrapped, unless anyone else has stuff to chat about. Alrighty — well, wait, one more: Arno, do you want to give an update on Triage Party?
C: What can I say? So, basically, Triage Party is up and running on a GKE cluster, but we face a technical issue with Triage Party. To summarize the issue: the ingress controller on the GKE cluster checks the availability of the service, which means you need to be sure your service returns a 200 HTTP code — but Triage Party returns a 300, I think. So the ingress controller considers that Triage Party is not running and can't expose the service through the ingress.
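For context, the GKE ingress health check expects an HTTP 200, and when an app answers its checked path with a redirect, one common workaround is pointing the check at a path that does return 200. A hypothetical sketch — the /healthz path, names, and ports here are assumptions, not Triage Party's actual configuration:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: triage-party-hc
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz        # assumed path that answers 200 instead of 3xx
---
apiVersion: v1
kind: Service
metadata:
  name: triage-party
  annotations:
    cloud.google.com/backend-config: '{"default": "triage-party-hc"}'
spec:
  selector:
    app: triage-party
  ports:
  - port: 80
    targetPort: 8080
```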
A: Okay, that's awesome. I mean, the upside of that is we're not blocked on us anymore; we're blocked on Triage Party upstream, and Thomas has been kicking out releases pretty consistently for Triage Party, so we should see that soon. Everyone who's been working on this: thank you.
Thank you — once this is out, it's going to be a big improvement to the way we do business. There's going to be a lot more scrutiny on triaging the backlog, and I think that our response time will improve across the board. So thank you again for working on this and carrying this ball; it's been a bit since we've been chatting about it, so thanks, everyone, for sticking with it.
A: And now I think we're out of things to chat about, so I'm going to try to give you a quick kitty — hey, kitty — all right, she doesn't like to be on camera unless it's of her own volition. I will catch y'all later!