From YouTube: Kubernetes SIG Testing - 2020-11-10
Description
A
Okay, hi everybody. Today is Tuesday, November 10th. This is the SIG Testing bi-weekly meeting, at which we adhere to the Kubernetes code of conduct by being our very best selves. I am your host, Aaron Crickenberger.
A
You can see me at spiffxp on all the things. If you have a problem with us adhering to the code of conduct, please reach out to conduct@kubernetes.io; you're also welcome to reach out to me personally on Slack, email, GitHub, whatever. Our agenda today is pretty light; I only see two things on it, and the first one is mine, so I'll just roll right onto that.
A
Basically, I just wanted to check in and see how folks think we are doing with mitigating Docker Hub's rate limiting on image pulls. I'll be honest, I have not been keeping super tight track of things, in part because we just don't have a way to instrument the various ways that a Docker image can be pulled.
A
It's difficult to have a single source of truth on whether or not image pulls are being impacted. Those ways include the kubelet trying to pull down an image to run a pod.
A
It could be Docker inside of that image attempting to pull things down. It could be clusters that we have stood up elsewhere that are trying to pull images for pods, also using Docker and Docker-in-Docker. All of those have different ways of logging, and none of them log authoritatively to the same common place.
A
They came back with a bunch of marketing requests that didn't seem fully reasonable, but more importantly, the exemption was only for egress of any images we happen to publish to Docker Hub. That is to say, if we were publishing images to Docker Hub, other users could download them as many times as they want without hitting the rate limit, which is nice for open source projects; but we made a decision a long time ago not to publish anything to Docker Hub, so that really gets us no benefit.
A
I know that Docker has not just flipped a switch and immediately moved us over to the new rate limit. I have a link in the meeting notes to their increased rate limits page, which you can check periodically if you're curious, where they say what they've currently lowered the rate limit to: it's about 1,000 images every six hours right now, rather than the full 100 images every six hours. They occasionally turn on the full 100-image limit for certain windows, and we are in the middle of one of those windows right now, from 3 a.m. to 9 a.m. Pacific today. The full rate limit is being applied, so if that stuff were hitting us, we should be seeing it by now.
A
So, okay, not hearing anything else on that...
C
I want to add that we seem, so far, not to have heard any pain from our users yet. I think, outside of a couple of sub-projects, we don't really have images posted on Docker Hub ourselves, and from an ingress point of view, projects also have the mitigation of just not choosing to use Docker Hub images from whatever downstream project they're consuming them from. I'm potentially going to follow up on the open source project egress bit for the kind subproject.
A
Okay, that sounds cool. I can help you with that if you want; the form is pretty easy to fill out, and I have experience filling it out for Kubernetes. Thanks.
A
I mean, I think this is a great conversation to have, so here's the state of things that I'm aware of. Claudiu has been making a bunch of changes to how images are built, trying to get us to use docker buildx and making sure the right flags are set for that, and it's been bug fix after bug fix trying to get that to work, but I think he has the image build working.
A
Here's a build, for example. If I click the magnifying glass, I get (this is probably completely illegible to you) this big old pile of YAML, which is the actual ProwJob CRD. It describes what should be run: it's got the extra refs, it's got the container arguments, and all that stuff. So rerunning a job is basically taking this YAML and just re-applying it in the cluster, making another CRD resource with everything the same.
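(A minimal sketch of what "re-applying the YAML" looks like in practice; the job ID is hypothetical, and Deck's rerun button effectively does this for you:)

    # Dump the existing ProwJob object (the CRD is prowjobs.prow.k8s.io);
    # "1a2b3c4d" stands in for a real ProwJob name.
    kubectl get prowjob 1a2b3c4d -o yaml > pj.yaml
    # Edit pj.yaml to drop metadata.name, metadata.resourceVersion and the
    # status block, then create it again. This reruns the job with the SAME
    # spec stored in the cluster, not with any new config you have merged.
    kubectl create -f pj.yaml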
A
That is different from re-triggering a job. If you've made changes to a job's config, clicking the rerun button (it's this little guy here) won't pick up your new changes. That is the problem I think we were having with the servercore image, so we need to either introduce a dummy commit that would cause it to run; I think it's a postsubmit job in this case that we're having problems with.
A
I think I could even go to all job types, postsubmit, so we could see all of those. Yeah, and the problem is that that job doesn't show up in this list, so I can't click the rerun button to rerun it that way. Further, because we're trying to test out new job config changes, we either need to create a brand new CRD of the job, or we need to re-trigger it. The only other way to do that is for somebody with write access to the Prow service cluster, which would be test-infra on-call, to use a command called mkpj (make ProwJob).
A
If I go to prow/cmd, I think it even shows up in here. Yeah, dev tools.
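(A minimal sketch of how test-infra on-call might use mkpj, assuming a checkout of kubernetes/test-infra; the job name and paths are illustrative:)

    # Generate a fresh ProwJob object from the *current* job config.
    bazel run //prow/cmd/mkpj -- \
      --config-path=config/prow/config.yaml \
      --job-config-path=config/jobs \
      --job=post-kubernetes-push-e2e-test-images > pj.yaml

    # Someone with write access to the Prow service cluster then creates it:
    kubectl create -f pj.yaml

Because mkpj reads the checked-in config, this picks up new job config changes, unlike re-applying an existing ProwJob.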
D
No, I think Claudiu had pushback on the dummy commit from sangha.
E
I'll take a look. I think that's a different dummy commit you're talking about.
E
You're talking about the dummy commit you made on the kk registry, and Aaron was talking about the servercore periodic job, right?
A
Yeah, I do know what Antonio's talking about; I just don't have it in front of me on this computer. I did see pushback on another PR where there was a dummy commit to trigger something. Same situation there: if it's just a matter of re-triggering testing, on-call should be able to do that. But if we're urgently blocked by this, I can have a conversation with whoever was holding up the PR to see if we can work around it.
E
Yeah, well, to be fair, if it's urgent, I think the dummy PR can just be merged, especially since the freeze is coming so soon.
A
So I share that concern. I will say that there's a separate part in the release schedule called test freeze.
A
Test freeze is the date after which no more tests may be added. Before this date, it's totally fine to fix tests and add new tests and such; after this date, the goal is to fix, revert, or remove whatever tests are failing.
A
For this cycle, test freeze is November 23rd, which is, I don't know, a week and a half after code freeze, so we'll still have plenty of time to merge test-related fixes. But I do agree with you that it would be helpful to land test things earlier, so that any bug fixes that need to be done as a result of those tests also have the opportunity to land earlier. So I'll work on that.
A
The other thing I think you wanted to discuss was that you have this PR to automatically build modified e2e images, because it seems like you're saying that right now the postsubmit job, this job here, only promotes the conformance images. Is that correct, or does it only push that image?
E
Yeah. Currently, whenever changes are made to the Kubernetes test images, the cloudbuild.yaml file specifies that it's going to run the all-conformance-images make target, which builds a list of images that, from what I saw, were commonly used in conformance tests, and only those images. Those images are the busybox image, the agnhost image, kitten and nautilus, resource-consumer, sample-apiserver, and a few other images which escape my mind right now.
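(A minimal sketch, not the actual file, of the kind of cloudbuild.yaml step being described; the builder image, registry, and paths are assumptions:)

    # Google Cloud Build config that runs the conformance-images make target.
    timeout: 3600s                       # roughly the one-hour cap mentioned below
    steps:
    - name: 'gcr.io/cloud-builders/docker'             # placeholder builder image
      entrypoint: make
      dir: 'test/images'
      args: ['all-conformance-images']
      env:
      - 'REGISTRY=gcr.io/k8s-staging-e2e-test-images'  # hypothetical staging registry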
E
I didn't include all the images, because some of them take an egregious amount of time to build. I don't remember exactly which ones, but especially the pets images, which, to be honest, I never saw being used anywhere in e2e tests.
E
Those take a lot of time to build, and we are currently limiting the job to about one hour. With all the images, when I tried to build everything at once, it was taking me something like three hours.
E
Yeah, the idea was to just have a subset of images which covers 99% of all the tests and cases; obviously it doesn't cover 100%. I was planning, sometime in the future when I finish all my work items, to have something like an all-images target, which would be an expanding list of images that we would build. But indeed, it would be even better to build only the images that have to be rebuilt; that question is just a bit more difficult to answer.
E
That was just an example. Another example would be the fact that we should have had s390x support for all the tests.
E
It was never working, and we fixed that in image-util.sh and the Makefile or something like that; we didn't really touch the images themselves or their Dockerfiles. But in this scenario we would have to rebuild all the images, right?
E
Yeah, just some examples that are coming to my mind right now.
D
So then it makes sense to keep doing the conformance images. But for me to be able to promote my image, how should I do it? Should I add the image to the conformance list, submit the PR, wait until it is promoted, and then submit another PR to remove it from there so it doesn't keep getting rebuilt?
E
I have a question, Aaron: is it possible to have the image builder also act on a trigger? Then you would basically say /build-image plus the image name, or something like that, pass that image name to the image builder job that gets triggered, and that would basically cover any kind of different image that would have to be built.
A
Right, that would be neat. I feel like that has many more implications that need to be thought through, though. The idea is basically to trigger a trusted type of job as a presubmit. Right now, all the image building jobs have to run as postsubmits; we don't want them to run as presubmits, because we don't want them to build untrusted code.
A
So it sounded like Claudiu was asking if we could just do something like a /build command plus an image name, and have that build an image.
A
Or, you know, offering the ability to manually trigger postsubmits through the UI is another option. So I'm not sure, off the top of my head, what the best way is to do that sort of thing. Another suggestion I had, just for the moment: we do have this run_if_changed field here on Prow jobs, and right now we only have one job, and it runs if anything in test/images changes, so, as a result, yeah, that's going to have to build everything.
A
What we may actually want to do is change this up to have a job per subdirectory in test/images. Then we could have a job that pushes just the agnhost image, or a job that just pushes the nginx images, or whatever image Antonio is trying to promote. That feels like it fits with the system we have in place right now. It would be a lot more copy-paste, or whatever.
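(A minimal sketch of what one such per-image postsubmit might look like in the Prow job config; the job name, cluster, and builder image are hypothetical:)

    postsubmits:
      kubernetes/kubernetes:
      - name: post-kubernetes-push-agnhost-test-image   # hypothetical name
        cluster: test-infra-trusted                      # trusted build cluster (assumed)
        decorate: true
        run_if_changed: '^test/images/agnhost/'          # only this image's subdirectory
        spec:
          containers:
          - image: gcr.io/k8s-testimages/image-builder   # placeholder image-builder
            command: ['/run.sh']
            args: ['--project=k8s-staging-e2e-test-images', 'test/images/agnhost']

Each image subdirectory would get its own copy of this job with a different name and run_if_changed pattern.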
D
But I think that's okay, because I only plan to do this once. I tried to put this image into agnhost, but it was more complicated than I thought, so I decided to just promote the current image. I don't think anybody else is going to have this problem, and if this system is working, I think that's the best idea. I will create a new job and then we can do it.
E
Well, to be fair, with Aaron's suggestion the image building would be more atomic, or more granular, which would also mean that, sure, it failed for one image, but it worked for the other images, and you can still promote those successful images, which would be great.
E
The whole job wouldn't fail just because one image failed, which would be a nice addition. Also, a question: you said that you would have a lot of duplicated config files. Can't that be solved by presets?
A
I wish I could say the answer was yes, but I believe the answer is that presets are very limited in what they support. I don't know if I can get to it really quickly. They were intended to kind of mimic the PodPreset resource, which I don't think has ever left alpha; it was intended to be a Kubernetes feature, and I don't know if that's actually ever going to get promoted to beta or GA.
A
I think presets only apply to volumes and environment variables, and maybe one other thing; volume mounts is the third thing. So we could consider augmenting Prow's preset functionality to start patching in more stuff, but I'm not sure if that's the direction people will want to go.
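(For reference, a minimal sketch of a Prow preset, showing the three things it can inject: env vars, volumes, and volume mounts. The label and values are made up.)

    presets:
    - labels:
        preset-example-credentials: "true"   # jobs opt in by setting this label
      env:
      - name: EXAMPLE_TOKEN_PATH
        value: /etc/example/token
      volumes:
      - name: example-credentials
        secret:
          secretName: example-credentials
      volumeMounts:
      - name: example-credentials
        mountPath: /etc/example
        readOnly: true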
Another option: I think some folks, the SIG Storage folks, have basically created a script that automatically generates a bunch of YAML files. I don't think that script is rerun by CI or anything. I mean, it's a lot of copy-paste and there is some manual toil involved, but I feel like it's the quickest approach for now. Does anybody have any suggestions for a longer-term investment we could make that would better solve this problem?
A
Okay, it looks like we lost Claudiu, so let's say this is the plan of record: I'll open up an issue and assign you and Claudiu, just to get comments on there, if that sounds good. Sorry, I was pointing at the screen. Let me see, Antonio, I will assign it to you.
A
So, Claudiu, I didn't quite notice when you dropped, but what I think we agreed to is that maybe it's quicker to just go ahead and have Antonio work on a script that will generate jobs for each of the images. That way we'll have a post-submit job per e2e test image, and we'll try that going forward.
A
It's all good, so I'll open an issue and tag Claudiu on it. Any other questions, comments, or concerns on this?
A
Okay, let's see: flake tracker progress. I'm guessing, Rob, this is your item?
B
Yeah, so if you click on that HackMD link, that part of the report has been generated by the flake tracker. The code that generated that report I've not yet pushed up to my repo, so I'll do that this week, and then I think, over the next few days, I'll get the last part working.
B
So, what I do is I'm still scraping from TestGrid, but it's nicely componentized out, and one of the to-dos is to interact with BigQuery in the future. I scrape the summary screen on TestGrid, and then, for anything that's flaking, having walked TestGrid, I walk the CI Signal project board, which isn't all of the reports, but in there we have a database of reported flakes. What I'm attempting to do, slightly naively, is just to use the test names on the flaking jobs to find corresponding reported issues in GitHub, and that's not quite there yet.
B
I have a little defect to eradicate, but I'd expect to get that done in the next week, so hopefully by next Monday each flaking test will have a list of associated issues with it.
B
I'm scraping the front-end JSON, and what I'm attempting to do is this: the front-end JSON has an array of linked bugs, and that's presently empty, so that's what I'm trying to decorate and flesh out.
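(A minimal sketch of scraping the TestGrid summary JSON the way the flake tracker is described as doing; the dashboard name is an example, and the field names are assumptions based on what the endpoint served at the time, not a stable API.)

    import json
    import urllib.request

    DASHBOARD = "sig-release-master-blocking"  # example dashboard
    url = f"https://testgrid.k8s.io/{DASHBOARD}/summary"

    with urllib.request.urlopen(url) as resp:
        summary = json.load(resp)

    # Print tabs whose overall status is not passing.
    for tab, info in summary.items():
        status = info.get("overall_status", "UNKNOWN")
        if status in ("FLAKY", "FAILING"):
            print(f"{tab}: {status}")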
A
I think, but I don't know for certain, that this code, the GoogleCloudPlatform/testgrid summarizer command, is the thing that ends up writing the object that gets served as that JSON.
B
I suppose, eventually, what I'd like to see is that we semi-automatically report issues around flakes and failures in CI. What I would propose is that we automatically fill out the issue in GitHub, but not automatically create the issue, if that makes sense; we fill out the form in GitHub but have humans review it. I'm just afraid of a Skynet scenario where we end up automatically creating a load of reported issues and it gets out of control. But I like the idea of gathering, as automatically as possible, all of the text that goes into a reported flake or failure in GitHub.
A
The flakes are not really easy and obvious; of course they're not. I think we had something at some point that created issues in GitHub automatically based on the top flakes in triage, so the code to actually create issues still lives in test-infra, the robots issue-creator. Okay, yeah, that might be something you could use to help with the issue creation part.
B
Yeah, I think, from a philosophical standpoint, there's an issue with issues. What we want to do, and the part that's hard to automate, is to use best judgment when collating information about tests that are failing or flaking.
B
It's flaking in these jobs; there's a certain amount of judgment that has to be used in pulling that together, and one of the frustrations that I think developers and test maintainers have is if too many issues are being logged and their attention is being spread out across multiple issues. So that's the background thought I have on, well, how do we automate this?
B
Should we automate it, and to what extent should we automate it? Really, what I want to do is just improve the workflow so that somebody who's very interested in CI signal, or works on the CI Signal team, can do this in a coherent and efficient manner. And then, ultimately...
B
It's not clear to me yet how a casual passerby, or somebody who expresses a casual interest in reporting noise in the signal, would do this. Ultimately that's the goal: that a casual user could report noise in the signal in a simple and easy way. That's where I'm struggling a little bit; I might need to be here for another few months before I can make strong pronouncements on that. You know, it's a tricky one.
A
Yeah, well, there's certainly lots of data to work with. So what I'm hearing you say is: you feel like the noise level on test failures is really high, and you think there's a limited number of people who can actually troubleshoot test failures, so you want to raise the noise floor, as it were, so that anything that peaks above a really high noise floor is what we pay attention to.
B
How would I put this: the current set of powers afforded to a TestGrid user are vast and numerous, and if you sit with TestGrid regularly (I think I've said this before), when you sit with TestGrid on a daily basis you can use all of the features it has there for you in order to collate the evidence and go: here's a flake, it's flaking on this job, it's flaking on this job, but not this job.
B
I want to do that in conjunction with triage, and there's a bit of wrangling that needs to be done there to collate the evidence. I'd like to automate that as much as possible, while acknowledging that it's not an easy thing to automate. Currently, what I'm thinking is that when you do that job day in, day out as part of CI Signal, you get a feel for it and you figure out the workflow for gathering all of that evidence.
A
No, no, that makes sense. But what I'm trying to get at is using humans as filters versus using the tooling that already exists.
B
Yeah, and using their judgment. Yeah, there's a certain amount of stuff that we can automate here.
B
I could talk about this for ages, and I don't want to get caught in the weeds, but essentially it's exercising good judgment in (a) gathering all the evidence, and (b) gathering both the positive evidence and the negative evidence; that's the tricky part to automate, I think: where something is working and where something is not working. There's a lot of searching to be done in CI in order to help set that out in issue form.
A
Okay. Do you feel like the process by which somebody would go gather all of that evidence is documented somewhere?
B
I don't know, to be honest. I mean, it is to a certain degree, but I kind of feel that the more we automate this, the less need we'll have for documentation. For me, where the documentation comes in now is as the specification for how to automatically lift and collate that evidence.
A
Not to hold the meeting hostage; it's just that, having done a fair amount of time in CI Signal myself, the problem I wanted to see us collectively avoid was more people having to learn how to read the tea leaves, by building tools that either automatically read the tea leaves, or (it sounds like maybe where we're at) automatically collate sort-of-related metrics and artifacts.
B
It needs to be fairly white-boxy, if that makes sense. I don't want to obscure the evidence gathering; I want to automatically gather evidence, but in a way where everybody can still see the evidence and where it came from.
F
Potentially low-hanging fruit, just looking at TestGrid: I know if we look at a cell or a row, you can do "file a bug" or "attach to bug". It might be nice just to have a "see related failures", and if we have some standard where, say, tab plus test name links to a search in GitHub, we could do that, maybe for tabs as well.
F
That way, if you see flakes in the tab, you can say: let me see everything potentially related. It might make it a little easier for the lay TestGrid user.
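(For illustration, such a link could simply be a GitHub issue search pre-filled with the test name; the repo and test name here are placeholders:)

    https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+%22%5Bsig-example%5D+Some+test+name%22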
B
Yeah, that's certainly a good idea. And then, operating at the test level, that would be useful, but there's also operating at the release level.
B
We do kind of need to see a list of things that are failing and flaking, and then the corresponding list of GitHub issues logged or not, because from the use-case point of view, I want to be able to see what has been reported and what has not been reported.
B
Okay, but we can probably leave this here for now, come back to it in six weeks, and see where we're at. Then I can lay it all out and we'll go: okay, what can we do here, how can we make things better?
A
That sounds good to me. I was trying to quickly surf around and find some other priorities. We had somebody work on something called entomologist a while ago; you know how release notes for Kubernetes are sort of scraped from a well-known markdown block? I think we were thinking of something similar for test issues. That way, you could link an issue to a known test failure, which TestGrid could then link to, something like that.
A
Okay, any other questions, comments, or concerns? Would you all like 10 minutes back?
E
Just a minor question: is there an easier or better way to view the actual logs from a Prow job? For example, I've been digging a lot in the e2e test-images Prow job logs, but I basically have to jump through a couple of hoops to get the actual logs from the make command that was run.
A
That is a bug, in my opinion. I'll paste the link in chat here. I'm of the opinion that at one point in time, we exported all logs from Google Cloud Build back into the artifacts directory for whichever Prow job ended up triggering Cloud Build.
E
Yeah, I remember when those logs were being exported as artifacts, as you say, and I thought that maybe someone decided that shouldn't be the case. I don't think it's a bug, but it is a bit difficult, especially for newcomers, to get the actual logs from this.
E
The e2e test-images one; well, postsubmit-something, I don't know the exact string. Okay, but...
E
Yeah, I think that applies to almost all Prow jobs.
A
I pasted a link in chat to an example of a failure, but if you take a look at the issue, I found links to examples of Prow jobs that use the image builder to kick off Google Cloud Build and do successfully pull logs back, and then I have links to jobs that don't. That should be enough for us to identify the common condition that causes this bug to happen and go fix it. It could be a bug in the image builder.
A
It could be that permissions have changed somehow in a way that we're not aware of, but I just haven't had time to go look at this. I haven't tagged it as help wanted, because it's not clear to me that it is great for a new contributor, but if somebody wants to put the time in, it'd be really appreciated.
F
I guess, really quick, while we're talking about logs, just for my own sake: is there a quicker way to view the logs beyond the lenses for Prow jobs? I've had times where I'm looking through Prow logs and the lens just takes forever to render, because there are so many lines.
A
Okay, if there's nothing else, thank you all for showing up. Just to remind everybody, we're not going to have another SIG Testing meeting until December, because our regularly scheduled meeting would fall during the week of KubeCon, and if we were to go two weeks after now, that would be the week of Thanksgiving in the U.S., for which a number of people are going to be absent. So we'll see you all at the beginning of December, and happy...