From YouTube: Kubernetes SIG Testing - 2021-01-26
A: Hi everybody, my name is Aaron Crickenberger, also known as spiffxp on all the things, and you are at the Kubernetes SIG Testing bi-weekly meeting. Today is Tuesday, January 26th.

A: This meeting is being publicly recorded, so we can all go to YouTube later and watch ourselves adhere to the Kubernetes code of conduct, which basically means we be our very best selves and not be jerks to each other. If you find you have a problem with our conduct during this meeting, please feel free to email conduct@kubernetes.io, or you are welcome to reach out to me privately as well.

A: On today's agenda, we're going to start with a demo from Sean Chase about a new TestGrid rendering feature for Kubernetes-adjacent projects; then I have a couple of project updates I wanted to discuss; and then we'll talk about the work I think SIG Testing is committing to for the 1.21 Kubernetes release lifecycle, and where we could use your help in accomplishing that.

A: So I will hand it over to Mr. DEMO TIME in all caps, Sean Chase, after I make him a co-host so he can share his screen. Thank you.
B: I was just about to ask about that, as I slowly panic about whether I have all the technical bits together that I need. Let me see if I can present my screen.

B: Hi everyone, my name is Sean, I'm on GitHub as chases2, and I do a lot of work on TestGrid.
B: We've recently gotten to the point with TestGrid where we have enough of it open sourced that users can start creating their own TestGrid, and I wanted to give a quick demonstration of how that's done. More specific information is given in a README, and there are some links at the end of the presentation, because I don't want to take everyone's time. This is TestGrid — for those of you who are just joining us, TestGrid is the tool that the Kubernetes community uses to put their tests in a grid.

B: Here is, technically, a grid with some tests. They're all green; it's very good. But to get into how you can leverage this yourself, I would like to get into its architecture a little bit. TestGrid has two halves: a front end and a back end. The back end is, to put it simply, a number of Kubernetes processes that are updating state and information in cloud storage, and the front end reads those.
B: To be a bit more specific — and I believe Alvaro has said this — the cloud storage in question is GCS, and in particular TestGrid needs three things to function. It needs a configuration proto that tells it where everything in a TestGrid is; it needs a summary for each dashboard, for now named summary-<dashboard name>; and it needs a grid for each test group, to show what to display.

B: We're working on other ways to generate those configurations. State is maintained by the Updater, and the summary is maintained by the Summarizer, and as long as all of these controllers are continuously running and updating protos, the information gets updated and put into place so that the front end can display it.
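A minimal sketch of checking a bucket for that layout, assuming the objects live at the bucket root (as the API currently expects) with names like config and summary-<dashboard>; the exact object names here are assumptions, and the GoogleCloudPlatform/testgrid README is authoritative:

```python
# Rough sketch: verify a GCS bucket holds the three things TestGrid needs,
# per Sean's description. Object names below are assumptions (the
# GoogleCloudPlatform/testgrid README is authoritative).
from google.cloud import storage

BUCKET = "my-testgrid-bucket"     # hypothetical bucket
DASHBOARDS = ["my-dashboard"]     # hypothetical dashboard names
TEST_GROUPS = ["my-test-group"]   # hypothetical test group names

client = storage.Client()
bucket = client.bucket(BUCKET)

objects = ["config"]                             # the configuration proto
objects += [f"summary-{d}" for d in DASHBOARDS]  # one summary per dashboard
objects += TEST_GROUPS                           # one grid per test group

for name in objects:
    present = bucket.blob(name).exists()
    print(f"{name}: {'ok' if present else 'MISSING'}")
```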
B: The front end currently isn't open source, but it has been extended to work with any GCS bucket that the App Engine account has permission to access. Going to a particular render endpoint that contains your bucket name will display the TestGrid located in that GCS bucket. So if you have your own GCS bucket, and you would like to display the TestGrid contents of that bucket, you can go to testgrid.k8s.io/r/ followed by the name of your bucket.
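For example, a quick check against that render endpoint might look like this (the bucket name is a hypothetical placeholder):

```python
# Minimal sketch: render a TestGrid stored in your own GCS bucket via the
# /r/<bucket> endpoint Sean describes. The bucket name is a placeholder.
import urllib.request

bucket = "my-testgrid-bucket"  # hypothetical bucket name
url = f"https://testgrid.k8s.io/r/{bucket}/"
with urllib.request.urlopen(url) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```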
B
This
api
is
currently
a
work
in
progress.
It's
liable
to
change
right
now.
It
expects
everything
to
be
in
the
root
of
your
bucket,
and
we
don't
like
that
and
changing
that
will
probably
change.
The
api
updates
will
be
on
google
cloud
platform
test
grid
in
the
readme
there.
B: Your bucket does need to have this user account able to read from it, because if we can't read your information, we can't display it. And if you have your own bucket, the data in it is maintained by — presumably — you, running these controllers.
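A minimal sketch of granting that read access with gsutil; the service account below is a made-up placeholder, not the real frontend account — use the one given in the testgrid docs:

```python
# Minimal sketch: grant TestGrid's frontend account read access to your
# bucket. Both names are placeholders; substitute the account documented in
# the GoogleCloudPlatform/testgrid README.
import subprocess

BUCKET = "gs://my-testgrid-bucket"  # hypothetical bucket
READER = "serviceAccount:testgrid-frontend@example.iam.gserviceaccount.com"  # placeholder

subprocess.run(
    ["gsutil", "iam", "ch", f"{READER}:objectViewer", BUCKET],
    check=True,
)
```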
B: That is it — assuming that I'm displaying a screen and not a tab, here's a demonstration. This is TestGrid as we know it, the current k8s instance; it shows everything. But I wanted to create my own bucket that showed just the jobs that I cared quite a lot about. So at testgrid.k8s.io/r/ plus my bucket, we have a version of the config...

B: ...that's been pared down to show just what I care about, and a script that's doing effectively what the Summarizer and Updater do — it's a little simpler for this mock — but the dashboard only shows what the configuration tells it to show.

B: These results are accurate, though they're very old, because I haven't run the Updater in about two weeks. So this shows that it is indeed reading my config, and it is reading my current state, with my artificially bad controller.

B: But that's what this does, and me continuing to click around is just demoing TestGrid at this point. That is it — I think I was a little bit over eight minutes, but I have some links, like GoogleCloudPlatform/testgrid; there's a walkthrough there, and if there are any questions, please contact me.
A: Sean has frozen for me, but — yes, let me get a link.

A: I assigned a thing to you in the meeting notes, Sean, so I'll follow up with you afterwards to make sure that's done.

A: I guess my high-level question is: you mentioned the API is subject to change at any time, and this is very alpha — would you recommend the Kubernetes community start trying to use this?
B: If you are perhaps a Kubernetes-adjacent team and have a lot of interest in setting up your own TestGrid, this is kind of what that's targeted at. I've heard a lot of interest, like "we're glad to see this open sourced, we'd like to try it for ourselves," and this is currently where we are with that project.
A: I'll ask it another way, maybe. There are a lot of items on the Kubernetes TestGrid instance right now related to Istio or Knative or company-specific groups. Do we think we could go tell those people to move their stuff to different buckets, or should we wait on that?
A: Cool, that sounds cool. Yeah, the number of SIG groups versus the number of non-SIG groups seems to be about equal right now. I'd love for us to get back to the point where it's only, like, SIG dashboard groups, so that SIGs know what they're looking for.
A: Awesome, yeah. All right — I tried to drop links to the code for the Updater and the Summarizer, which live in the GoogleCloudPlatform/testgrid repo.

A: The Configurator, I believe, is currently still in kubernetes/test-infra, because it's tied to some Kubernetes-isms, and also it's the thing that allows us to magically take some information out of prow job definitions and generate some further TestGrid config from that.

A: Okay, awesome — thanks so much for your time, Sean. Super excited about this.
A: Okay, moving on to a couple of subproject updates. First, to talk about kind — Ben, do you want to talk about what's going on with kind these days?

A: Okay, well, the only key thing that I was aware of is that amwat and Antonio, who's on the call here, were just recently promoted to approvers at the root of the kind repo, which to me means they are now also subproject owners for kind.
A: That's really great news. Okay, the next update is the Testing Commons subproject. Testing Commons was a subproject that was created a couple of years ago by Tim St. Clair to try and rally a group of people who were interested in how we could define best practices for all of the tests that live in the kubernetes/kubernetes repo, and over the years it's kind of had people come and go.

A: It's sort of been a volunteer-led, best-effort subproject, and so after talking with the existing leads of the subproject, Andrew Kim and Jorge Alarcon, we've decided to move it to a more formal best-effort basis, since there's not often a lot of people showing up to the meeting and there's not often a lot of chatter related to it.

A: Instead, you can raise these sorts of questions in this meeting or in the #sig-testing channel. Since this is a change in the subproject, I believe it needs to go through lazy consensus, so I'll send out an email related to this later today. Any questions or comments there?
A: Okay. The other thing I wanted to mention real quick, since Grant is here, is to give Grant a bunch of kudos for the work that he's done on Kettle, and on improving the flakiness queries that we run against the data that Kettle ingests into BigQuery.

A: This is probably most interesting to people who are on the CI signal team. For a while now we had a bunch of jobs that used this legacy thing called bootstrap to run, and those jobs were getting picked up by our flakiness queries — but the jobs that did not use bootstrap and only used pod-utils, which is the recommended, supported way of writing and running your prow jobs, were not getting picked up. They are now. And then, further —
A: Right, so today, if you go to test-infra and you click on the flakes-latest.json file, you will see, over the past week, all of the jobs that we run in our infrastructure — which is a bunch, so not just the presubmits but also the CI jobs — sorted by which ones are the flakiest, and then, within those jobs, which tests are flaking. And we're no longer filtering out flakes that don't occur very often; we're tracking all the flakes.

A: The reason I say this is relevant to CI signal is that it means we're now showing tests that flake in release-blocking jobs, and we're also showing tests that flake in — oh boy — integration, there we go. The integration tests used to be something you could only see failing if you went to go.k8s.io/triage, which is sort of an alternate way of viewing test failures across the project.

A: But if you just want to know which flakes are most impacting the project — and we really want to get people working on these — you can use this file. So hopefully this is going to help us out as we look to the reliability working group to start tracking down which SIGs have the flakiest tests, and making sure that they're addressing those problems. So kudos to Grant for putting that together.
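A rough sketch of pulling that report yourself; the exact public path is an assumption here (the report comes from the metrics jobs in kubernetes/test-infra), so adjust if it lives elsewhere:

```python
# Rough sketch: fetch the flakes report and list its top entries. The URL is
# an assumed path for the metrics output; the report's exact shape isn't
# specified in the meeting, so this just prints the top-level entries.
import json
import urllib.request

URL = "https://storage.googleapis.com/k8s-metrics/flakes-latest.json"  # assumed

with urllib.request.urlopen(URL) as resp:
    report = json.load(resp)

for entry in list(report)[:10]:
    print(entry)
```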
A: You can do this today using, like, a free-tier Google Cloud account — you can query up to a terabyte worth of data — but we're going to try and make it a little easier, and I think Grant showed last week how we're going to try and put all of this, visualized, into Data Studio, to empower community members to create their own dashboards or their own reports in Data Studio.
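A minimal sketch of such a query from your own account, using the BigQuery client library; the table name (k8s-gubernator.build.all) and field names are taken from Kettle's docs and should be treated as assumptions:

```python
# Minimal sketch: query the Kettle data from a free-tier account, as Aaron
# suggests. Table and field names are assumed from Kettle's documented
# schema; adjust if they differ.
from google.cloud import bigquery

client = bigquery.Client()  # uses your application-default credentials
sql = """
SELECT
  job,
  COUNTIF(result = 'FAILURE') AS failures,
  COUNT(*) AS runs
FROM `k8s-gubernator.build.all`
WHERE started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY job
ORDER BY failures DESC
LIMIT 10
"""
for row in client.query(sql).result():
    print(row.job, row.failures, row.runs)
```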
D: If I can just stack onto that real quick: I did create an issue, if anybody wants to help out. The GCS bucket where all the Kettle metrics are stored — the metrics from metrics/ in the repo — allows for static site hosting, so if anybody's interested in making those JSON metrics a little bit more human-readable, and wants to build some static sites around them, adding search or anything to those.
A: Awesome, thanks, Grant. Yeah, I think it'd be cool if we could have, like, a go.k8s.io/flakiest or whatever that just took you straight to a dashboard that was helpful. As far as the other subprojects go, I don't have any specific updates on them, but I did just want to point out the list of all the subprojects that fall under this SIG. We're going to be reviewing all of these as we go through our annual report to the steering committee, and I plan on adding a section to future SIG Testing meetings.

A: So, as with Testing Commons, for those subprojects that don't have meetings or Slack channels of their own, we can sort of revisit them and see if there's anything new worth talking about — things like updates to Boskos, or updates to Prow, that are of interest. Okay, so I'll move on from there.
A: The next thing I wanted to talk about is the stuff I feel is important for us to accomplish in the 1.21 release lifecycle, which I'm roughly thinking of as this quarter — the release may move beyond that; I think that's still being sorted out — but I just sort of wanted to say we should get these things accomplished at the same cadence as the release, because I think they are impactful to the release.

A: So, Steven created an issue that says registries used in Kubernetes CI should be on Kubernetes community infra. We identified this as a pain point last release cycle, when one of the registries that was Google-owned accidentally disappeared because it was not appropriately tagged as in use by the project.
A
We've
taken
care
of
that,
but
to
mitigate
that
risk
going
forward.
We've
also
tried
to
identify
all
of
the
other
google-owned
projects
that
host
images
that
are
used
by
kubernetes
ci,
so
some
of
the
I'll
just
walk
through
each
of
these.
Some
of
these
are
of
the
form
or
like.
There
is
a
test
that
uses
an
image
that
is
hosted
by
google.com
project,
so
authenticated
image
polling
is
one
of
them.
A
Basically,
we
will
need
help
from
the
sig
that
owns
the
test
that
uses
this,
which
in
this
case
is
sig
node
to
decide
whether
or
not
this
test
is
still
relevant
as
written,
and
if
it
is,
you
know
how
the
credentials
and
stuff
are
set
up
and
then
migrate
this
over
to
kates.gcr
dot
io.
A: Okay, I'm not taking that as "no questions" — I'm taking that as "we're all looking at this for the first time." So, another example of a project: apparently we have a number of images that are used in the gke-release project. I identified this by using our Kubernetes code search tool.

A: Okay, awesome. If you could comment on this gke-release issue, that would be cool too, because I feel like, mechanically, this should be pretty easy; it's just a matter of making sure tests still pass.
A
Kate's
authenticated
test
is
another
example,
so
this
was
the
one
that
sort
of
kicked
all
this
off.
This
is
an
example
of
two
tests
that
pull
an
image
and
we
should
decide
if
the
test,
as
it's
written
makes
sense,
and
if
so,
we
should
migrate
the
image
over
to
kate's
dot
gcr
dot
io.
A
So
I
tried
to
sort
of
spec
out
what
the
credentials
are
for
the
image
and
stuff,
and
I
feel
like
release.
Engineering
can
work
in
release
engineering
or
the
release.
Team
can
work
in
context
in
concert
with
sig
apps
to
decide
like
does
this
test
still
make
sense,
and
if
so,
let's
get
it
using
an
image
hosted
on
community
infrastructure.
A: Lubomir is here — maybe you want to speak more to it — but I feel like Lubomir pretty quickly identified what needs to change in kubeadm for this to happen, and if you haven't already made the changes to kubeadm, you're well on the way to making this happen for 1.21.

H: I think more of a problem was the GCS bucket, but we already have a PR for that.
A: That sounds great. So most likely the help that will be needed from release engineering is to make sure that, once we are happy kubeadm is no longer using images from this project, we don't need to worry about keeping the images here up to date — and that should be one of the last remaining reasons why we're not fully relying on the build job that runs on community infrastructure and publishes to k8s-release-dev instead of kubernetes-release-dev.

A: Let me see if I have a link to the relevant issue here — not at the moment; I'll get there — but the bucket that Lubomir mentioned, kubernetes-release: that's going to be a broader effort that we need help with. Sticking just with container images, though: another thing that Claudiu has helped with a bunch, but that I still think could use some more help — or where we need some help pushing it over the line — is the e2e test images that are hosted by Google.
C: Yeah, we have built and promoted, I think, all of the images, and I've also been testing them out with the presubmits to see if they were passing. We caught a couple of issues — for example, with the AppArmor loader image — and there's a pull request, yeah, that one, which basically will update the registry for those images to the promoted one. There are still a couple of images that will have to be added to that pull request.

C: For some reason we use two versions of that image in testing, and there's even a test case which spawns a pod with two containers, one with each version, for some reason — and I'm not particularly sure why, or what's special about that. That's the only image that remains from that registry.
A: Okay. I am very hopeful that that test is tagged with a SIG name, because then I would suggest it's that SIG's responsibility. So what I would probably do is go ping the SIG in their Slack channel, or notify the SIG leads — assign the SIG leads to an issue asking "hey, what's the deal with this?" — and if neither of those gets their attention, you can always send something to the mailing list.
C: Okay, yeah. It was punted a couple of times because it is on the larger side, but we have been using this image, with those updates, for a couple of releases already in our testing. We've also run the HPA tests and they've been passing, and I've also pasted output from runs with that image.

C: After that, we can also promote that image for Windows as well, and I think with that we'll be finished with this registry.
A
Awesome,
however,
as
much
as
I
appreciate
everything
that
cloudy
has
done-
and
this
should
hopefully
mostly
be
done-
and
I
didn't
actually
write
this
in
this
issue-
I'm
so
sorry-
I
feel
like
with
pretty
much
every
change.
The
name
of
the
image
change,
the
location
of
the
image
task
that
I've
been
talking
about
thus
far,
we
need
sig
releases
input
on
whether
it
is
appropriate
to
cherry
pick
back
all
of
these
image
renames
to
previous
patches
or
patch
releases
of
kubernetes.
A
However,
I
could
see
the
case
being
made
that
there
are
some
people
who,
like
set
up
for
air
gap
testing
of
their
clusters.
You
feel
like
the
change
in
image.
Names
means
a
change
in
which
images
they
needed
to
pre-poll
to
set
up
their
cluster
for
air
gaps,
testing
and
whatnot.
A
So
I
could
anticipate
some
pushback
based
on
that,
so
I
feel
like
that's
that's
something
for
sig
release
to
to
mull
over.
Like
generally,
I
do
feel
like
we
pick
back
test
changes.
If
the
test
is
a
like,
a
bug
fix
or
if
it
happens,
to
be
picking
up
a
security
change
and
in
some
sense
I
could
view
the
images
that
are
used
under
test
just
being
another
test
change.
A
But
we
should.
We
should
consider
that
in
the
context
of
like
what
kind
of
deprecation
window
do
we
want
for
all
of
these,
because
we're
going
to
try
really
hard
to
keep
all
these
projects
around
for
as
long
as
is
needed.
A
Let
me
see
here
same
thing
for
node
etv
images
do
tests
use
this.
If
so,
we
should
migrate
the
images
that
are
used
if
they
don't.
We
can
probably
delete
this.
A
That's
everything
I
have
linked
off
this
issue
sort
of
related.
We
also
want
to
stop
using
images
that
come
from
docker.io.
A
I
think
the
kubernetes
project
ci
has
been
sufficiently
protected
from
rate
limiting,
but
I
feel
like
there
are
other
sub
projects
or
other
downstream
consumers
who
occasionally
hit
rate
limiting,
and
so
I
believe
we
have
a
staging
mirror
project
set
up
for
all
these
images
and
cloud
you,
I
think,
you've
also
been
involved
in
trying
to
migrate
these
over,
in
the
same
way
that
we're
migrating
the
e
to
b,
test
image.
It's
over.
C: Yeah, basically it's using the same image building and promoting process as the other e2e test images that we've previously discussed. I already have a pull request sent for that; it just adds some Dockerfiles into the Kubernetes test images, and they would then just need to be approved, then simply promoted, and then replaced in the Kubernetes test manifests.
A: Okay. I had a lot of concern before about whether or not we should be mirroring images; I don't really think I have that concern as much anymore. I think we should just go ahead and mirror, because mirroring images means our downstream users and consumers of Kubernetes would also benefit from this.

A: If there are other issues with other images from other subprojects, yeah, we should chat about that. I view that as less critical to accomplish within the 1.21 milestone, but please send them over to k8s-infra — or, sorry, send them to SIG Testing — if they're having problems.
A: Okay, I'm just going to pause there, because I've been talking a lot and I want to hear from folks who showed up wanting to help out. I see a hand from Hippie.
I: One of the things on our commitments for Q1 is the migrating of the images, and I've reached out to a couple of folks, but I wanted to ask in this wider group of humans: as we do all of this work to migrate to k8s.gcr.io, we're noticing rate limits and somewhat uneven building of the images that we're mirroring over. A lot of our infrastructure costs coming over from the k8s-infra working group are related to the pulling of those images — some of it the storage, but more heavily the actual continual pulling.

I: So we as a community can totally understand why Docker said "yeah, we need to rate limit," and our costs are going to start soaring as we continue to migrate all of the remaining pieces of Kubernetes. Focusing just on this image-pulling issue, and with the choice of using k8s.gcr.io — that piece of componentry, as far as using that domain name and pointing it at Google —

I: is there anybody within Google thinking about ways that we could distribute that, to push it more towards where our consumers are — similar to a CDN, or some type of distributed caching — that would help us alleviate the heavy, heavy hits we have towards the non-rate-limited, but heavily funded, distribution of all the beautiful artifacts that our community produces?
A: In the short term, I was sort of really concerned about bad actors abusing k8s.gcr.io and pulling way more than they should; I'm less concerned about that at the moment.

A: But I do believe that we will want to get reporting in place to understand which images are causing the most traffic, and thus costing the most to host, and from there we can decide who it is most appropriate to seek out for additional funding, or whatever. So I think we can look at it —

A: — both in terms of who's pulling these images, and also in terms of which images are getting pulled the most. But I feel like that is a concern that can be solved orthogonally.
I
Okay,
if
there's
anyone,
that's
also
google,
here
I've
reached
out
to
a
couple
of
folks
for
the
logging
and
I've
not
had
any
success
so
far.
A: Okay, so here's a silly question: when I talk about moving images over to community-owned infrastructure, do you all know what that means and how to do it?

A: Sure, I can repeat that: when I talk about moving images over to, like, k8s.gcr.io, do people here understand the steps involved in doing that? I'm basically trying to understand — for all the issues that I walked through, I can tag them all as help-wanted, but I could use others' perspective on whether there are sufficient instructions in those issues to make you feel comfortable enough to take them on.
E: For me, I think the main question is how to migrate existing images from one side to another. For new ones I know the process, but for existing ones I have no idea.
A
So
ben
linked
the
cage,
gcrio
docs,
let's
see
if
that's
helpful
yeah,
I
totally
hear
you.
I
feel
like
the
way
I've
solved
that
in
the
past.
A
Let's
see
if
the
word
migrate
shows
up
here,
it
does
not
yeah.
I
think
this
is
really
good
about
describing
how
to
set
up
a
new
repo
for
migrating
things
over
the
way
I've
done.
It
is
kind
of
as
a
human
being
I've
docker
pulled
from
the
old
place
and
then
docker
pushed
to
the
new
place.
A
That's
very
easily
scriptable.
I
don't
have
a
huge
problem
with
a
human
being
doing
that.
I
think
that's
sort
of
why.
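A minimal sketch of that pull-then-push loop; the registries and image list are hypothetical placeholders (the real sources are in the issues walked through above):

```python
# Minimal sketch of the "docker pull from the old place, docker push to the
# new place" migration Aaron describes. Registry names and the image list
# are hypothetical placeholders.
import subprocess

OLD = "gcr.io/old-google-owned-project"  # placeholder source registry
NEW = "gcr.io/k8s-staging-example"       # placeholder staging registry

IMAGES = ["some-test-image:1.0"]         # hypothetical image:tag list

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for image in IMAGES:
    run("docker", "pull", f"{OLD}/{image}")
    run("docker", "tag", f"{OLD}/{image}", f"{NEW}/{image}")
    run("docker", "push", f"{NEW}/{image}")
```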
A: So I feel like this empowers you to write a script to push from one place to the other, but it is a common problem, and I know that release engineering is kind of taking ownership of the container image promoter.

A: That could be a cool way of doing some reuse, but I don't know if the cost of figuring that out, versus the benefit it provides, is less than "I'll just quickly script this one time."
B
But
when
you
first
mentioned
migrating
images
from
the
old
repository
to
the
new
repository,
I
thought
that
would
involve
like
either
going
into
the
bazel
target
or
something
and
instead
of
having
the
infrastructure
push
to
this
repository.
We
are
pushing
to
this
other
repository
and
now
that
other
repository
is
up
to
date
and
then
we
move
the
references
from
the
old
one
to
the
new
one.
A
So,
like
I
guess,
the
algorithm
is
like
if
there
doesn't
already
exist
a
staging
project
or
these
images,
and
there
isn't
already
a
ci
job
that
automatically
builds
and
pushes
these
images
to
a
staging
project.
There
should
be.
A
If
there
already
is,
but
older
images
haven't
made
it
over.
Let's,
let's
do
the
backflip
and
then
update
code
and
tests
to
use
the
new
images
and
make
sure
that
the
tests
still
work.
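A minimal sketch of the "have older images made it over?" check in that algorithm, using gcloud to diff tags between the old and new locations; both image paths are hypothetical placeholders:

```python
# Minimal sketch of the backfill check: list tags in the old and new
# locations of an image and report which tags still need copying over.
# Image paths are hypothetical placeholders.
import json
import subprocess

OLD = "gcr.io/old-google-owned-project/some-test-image"  # placeholder
NEW = "gcr.io/k8s-staging-example/some-test-image"       # placeholder

def tags(image: str) -> set:
    out = subprocess.run(
        ["gcloud", "container", "images", "list-tags", image, "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return {tag for entry in json.loads(out) for tag in entry.get("tags", [])}

missing = tags(OLD) - tags(NEW)
print("tags still to backfill:", sorted(missing))
```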
A: I will try and link it to the relevant thing — but Claudiu, since you brought it up, on migrating all the e2e test images over:

A: We ended up creating a new GCB job for each test image, so that when you merge a pull request to kubernetes/kubernetes, only the image that you updated will get built and pushed. But to kick this process off, you and a couple of other folks submitted PRs that changed all of them at once, which caused us to try and trigger a bunch of GCB jobs at once, and we hit quota for GCB. So I did some digging, and the quota limits that we hit are internal service quotas.
A
So
they're,
not
things
that
I
can
change.
I
can
only
lower
the
quota
and
I
don't
think
that's
what
you
want
me
to
do.
A
So
I
linked
dashboards
that
should
be
visible
to
anybody
who's
in
the
caten
for
prow
viewers
group
that
show
the
quota
requests
that
the
that
prow
is
hitting.
So
this
shows
sort
of
the
trend
of
get
traffic
over
time,
and
then
this
bottom
graph
here
shows
every
time
that
we
bump
into
quota
for
that.
A
So
what
this
tells
me
is
today.
I
think
we
use
one
central
service
account
called
gcb
builder,
and
I
think
that
service
count
lives
inside
of
the
like
prows
project,
and
perhaps
what
we
want
to
consider
doing
is
switching
to
a
service
account
for
each
staging
project,
because
the
staging
projects
are
where
the
builds
actually
run,
and
then
the
people
who
have
ownership
of
those
staging
projects,
the
human
beings
in
those
staging
groups
can
view
things
like
this.
A: I'm proving that this works by clicking on this from my personal non-Google laptop, and you can see the concurrent builds and API requests for Cloud Build, or any other service that your project happens to use. And so, yeah, we did hit our quota of 10 concurrent builds when we tried to build all the e2e test images at once on Wednesday the 20th. There's not a whole lot —

A: — I can do about concurrent builds, other than that they get queued up and throttled and should eventually execute; and if the image builder tool doesn't support that, or is breaking for some reason because of that, we should look at how to make the image builder tool more robust.
C
Yeah,
I
had
no
idea
about
this
sorry
for
what
it's
worth.
We
won't
have
to
do
this
anytime
soon,
because
most
of
the
images
are
rarely
built.
A
I
agree
I
feel
like
like
this.
I
don't
use
so
much
as
a
as
a
pain
point,
because
it
was
a
one-time
thing
and
that
was
kind
of
annoying.
Let's
see
if
I
can
see
it,
we're
probably
trusted.
A
Quotas
and
if
I
look
over
the
last
seven
days,
14
days
30
days,
this
I
see
is
more
of
a
problem,
because
this
traffic's
not
really
going
away
and
we
seem
to
be
having
it
more
frequently,
so
we'll
look
into
that.
I've
basically
run
out
of
time.
I
apologize
for
that.
What
I
was
going
to
say
is
for
sort
of
the
remaining
health
wanted
stuff
that
happens
to
be
relevant
to
release
engineering
when
it
comes
to
migrating
stuff
for
kate's
infra.
There's
a
wg,
kate's
impro
project
board.
A
I've
got
things
split
up
into
two
backlogs.
One
is
for
things.
We
need
to
do
to
our
existing
info.
That
we've
already
migrated
over
to
better
support
the
community,
so
like
make
sure
that
the
container
image
promoter,
pre-submit
job,
validates
that
people
don't
try
to
move
tags
or
whatever,
and
then
all
of
this
stuff
around
migrating
stuff
over.
So
all
the
projects
that
I
walked
through
just
now
are
in
the
infrared
migrate
column.
A
Oh
here's
another
one,
real,
quick,
kate's,
test
images.
This
project
is
where
random
images
that
are
built
for
tools
and
components
and
testing
relief.
It
is
also
where
all
the
images
that
are
used
in
our
projobslim
so
so
koopkins,
for
example,
lives
in
kate's
test
images.
I
would
really
like
to
see
us
move.
H
A
Over
to
the
to
a
community-owned
sub-project,
I
believe
with
kubkins,
especially
this
is
going
to
raise
the
question
of
the
general
push
the
staging
and
then
promote
to
kate's
dot
gcr
to
io
workflow.
A: I'm not sure we're going to want to do that for kubekins, and we'll want to consider how moving these images over works with the autobumper job that automatically updates, like, every job in test-infra to use the latest version of whatever images are used there.
A
All
right,
I'm
sorry
y'all,
I
feel
like
I
did
a
lot
of
talking,
so
I
don't
know
how
useful
this
was
for
everybody.
My
goal
was
to
get
a
sense
for
who
wants
to
work
on
what
and
if
you
feel
like
you
know
what
to
do.
A
So,
as
you
have
questions,
please
reach
out
on
sig
testing
or
ktempra,
or
hang
me
on
slack
and
I'll
work
through
all
that
any
other
last
remaining
thoughts
or
questions.
A
Okay,
thank
you
all
for
your
time.
It's
been
great
to
see
you
all.
I
hope
you
have
a
happy
tuesday
and
a
great
two
weeks
see
you
again.