From YouTube: Kubernetes SIG Testing - 2021-07-13
A
Okay, hi everybody, welcome to the Kubernetes SIG Testing bi-weekly meeting. I am your host for today, Aaron Crickenberger, also known as Aaron of SIG Beard, or @spiffxp at all of the places. We're going to adhere to the Kubernetes code of conduct during this publicly recorded meeting, so you can go to YouTube later and watch all of yourselves; be your very best selves and don't be jerks to people. On today's agenda, we're going to have Mitchell talk to us about the design for Prow multi-tenancy, especially when it comes to private repos.
B
Okay, today I'm going to be discussing the design that I'm working on for OSS Prow private repo multi-tenancy. This is a design that is still a work in progress, but I think it's important to get a discussion on it going sooner rather than later. Currently we have two Deck instances for OSS Prow: one of them is being used for a private repo, and the other one is public. The issue is if we wanted to add more private Deck instances that are separate from each other.
B
We can't do this with the current architecture. Currently, either organizations and repos or individual ProwJobs can be marked as hidden, and then Deck instances can be configured to show all of the private jobs. What we want is a way to separate ProwJobs into different private buckets, and have Deck instances for each of the individual buckets, so that we can have different private repos, each with their own Deck instance, running on one instance of Prow: something that we've discussed but that is not currently possible.
B
The goal for this design is having one instance of Deck that dynamically shows different buckets of information based on different logins. The solution that we are currently working on is one that uses RBAC to designate a namespace for each of these private buckets of ProwJobs, and assigns different roles to each Deck instance, so that they only have access to the individual namespaces where these private jobs live.
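To make the per-namespace RBAC idea above concrete, here is a minimal sketch of a Role and RoleBinding for one Deck instance. The namespace and service account names (tenant-a, deck-tenant-a) are made up for illustration; `prowjobs` in the `prow.k8s.io` API group is Prow's actual custom resource.

```yaml
# Sketch only: tenant-a and deck-tenant-a are hypothetical names.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deck
  namespace: tenant-a            # the private ProwJob namespace
rules:
- apiGroups: ["prow.k8s.io"]
  resources: ["prowjobs"]
  verbs: ["get", "list", "watch"]  # read-only; Deck only displays jobs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deck
  namespace: tenant-a
subjects:
- kind: ServiceAccount
  name: deck-tenant-a            # the service account this Deck runs as
  namespace: default
roleRef:
  kind: Role
  name: deck
  apiGroup: rbac.authorization.k8s.io
```

Because this is a namespaced Role rather than a ClusterRole, this Deck instance cannot list ProwJobs in any other tenant's namespace.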
B
This would require some changes to how we set up OSS Prow, because currently that is using a ClusterRole instead of individual Roles. It would also require all of the other components of Prow to use ClusterRoles instead of individual Roles, so that they have access to all the ProwJobs across all of these private namespaces when they're required on the private side.
B
This is going to require a couple of changes to show all of these different namespaces. The original design that is here is just for matching namespaces, but there are things that we can match other than namespaces, so it would work similarly to how the default decoration config works, where we can match different orgs, repos, or clusters to a variety of different fields instead of just namespace. But the general idea is that in the Prow config, you would list every namespace, and list the orgs, repos, and clusters that you want to match to it, and when creating the ProwJob, it would create the ProwJob resource in that specific namespace. It would also require changes to all of the components.
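As a sketch of the config shape being described, loosely modeled on how Prow's default decoration configs match on org/repo/cluster: every field name below is hypothetical, since the design is explicitly still a work in progress.

```yaml
# Hypothetical field names; the actual design is still being discussed.
prowjob_namespaces:
- namespace: tenant-a            # ProwJob CRs for these matches land here
  orgs:
  - private-org-a
  repos:
  - private-org-b/private-repo
  clusters:
  - build-cluster-a
- namespace: default             # fallback for everything else
```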
B
If anybody wants, we can do questions right now; I can answer questions at the end too.
B
Is
the
ability
to
add
the
name
space
as
an
override
in
the
inward
book
config?
B
This
would
make
it
so
that,
if
anybody
messes
or
mistakes,
something
in
the
the
crowd
confide
anything
any
jobs
listed
in
the
in
rebooting
fit
would
still
use
the
correct
name
space
yeah
before
I
move
on
to
some
of
the
other
like
alternatives
we
consider.
Does
anybody
have
any
questions
about
that
design?.
A
Questions, I guess. I know we talked about the idea of segmenting ProwJobs up by namespace earlier for some other reason; I think it was multi-tenancy on the same cluster, where right now, in order to have multi-tenancy or resource isolation, we kind of suggest that each team or each group has their own build cluster.
A
What was unclear to me was whether that played well with the ProwJob controller, and I felt like that maybe has changed. But the way of hooking up different build clusters to Prow involved handing over credentials or config that effectively had admin access to all namespaces, instead of just the relevant namespace.
A
This seems like it's moving in the correct direction. For probably more of a motivational reason, I was just asking whether the build cluster as the unit of isolation still matters, or whether, once this is done, we can assume that namespace is a pretty rock-solid unit of isolation.
A
It
wasn't
about
two
proud
control
plants.
It
was
about
one
crowd,
control,
plane,
talking
to
different,
build
clusters
or
the
same
build
cluster
with
different
name
spaces
right.
So
I
could
have
like
the
name
space
a.
I
could
like
point
to
the
same
cluster,
but
I
could
hand
two
different
cube.
Configs
one
called
namespace,
a
one
called
namespace
b
and
the
same
singular
pro
control
plane
could
end
up
scheduling
jobs
to
two
different
name
spaces.
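The two-kubeconfig scenario just described can be sketched as a single kubeconfig with two contexts against the same cluster, differing only in namespace. The server address, user, and names are placeholders, and this assumes the controller honors the context's default namespace:

```yaml
# Placeholder names and address; two contexts, one cluster, two namespaces.
apiVersion: v1
kind: Config
clusters:
- name: build-cluster
  cluster:
    server: https://build-cluster.example.com
contexts:
- name: namespace-a
  context:
    cluster: build-cluster
    user: prow
    namespace: namespace-a       # ProwJobs scheduled here via this context
- name: namespace-b
  context:
    cluster: build-cluster
    user: prow
    namespace: namespace-b
users:
- name: prow
  user: {}
current-context: namespace-a
```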
A
Okay, all right, thank you for clarifying that.
E
And the motivation for that is that then we can limit things. You know, currently our frontend is listing all of the ProwJobs; we could actually say this frontend only has access to these ProwJobs in this namespace, and we can enforce that with RBAC, so that we have more security over what we're doing.
C
Yeah,
how
strong
of
a
requirement
do
you
need
for
that,
because
I
do
think
like
alvar,
and
I
made
some
comments
on
the
stock
label.
Selectors
get
you
99.9995
of
everything
you
need.
The
filtering
is
done
server
side,
the
deck
instances
that
don't
need
access,
don't
have
or
don't
need
to
display
a
job.
Don't
have
it.
E
I think I see your point, right, because we're kind of getting security through either mechanism in some sense. Yeah, RBAC certainly seems like a little bit more of a sure bet to make sure that we're covering our bases.
B
Yeah, I think the benefit is really that the Deck instances wouldn't have access to any of those ProwJobs at all. So if there's something that goes wrong, like a bug gets introduced in the code that affects the filtering specifically, there's no possibility of exposure.
A
Like,
for
example,
I
don't
actually
know
what
the
state
is
of
kate's,
proud
and
ci
testing
of
security,
vulnerability
related
fixes.
I
I
vaguely
feel
like
we
just
kind
of
dropped
it,
because
there
wasn't
sufficient
trust
in
pow's
ability
to
isolate
stuff,
and
I
feel
like
it
would
probably
be
a
much
easier
sell.
A
That's what we have to do today. Like, let's just say, for example, that OpenShift ran all their CI jobs off prow.k8s.io, right? It'd be kind of overwhelming for me, a very Kubernetes-focused person, to see all these OpenShift jobs; it's really weird. I actually added the ability to isolate by build cluster, so I can see all the jobs that are relevant to the build cluster.
C
And sure, and I think label selectors get you there. I guess my biggest comment is that I think there's a difference between hand-wavy security answers and fairly concrete ones. As a point of reference, in the conversations that we had many years ago about security isolation, vis-a-vis running trusted jobs on a separate cluster, at that point namespace isolation was viewed as insufficient for security isolation between jobs and credentials, and that's why those trusted jobs are running on an entirely separate cluster.
E
Was
viewed
as
insufficient,
though,
because
that
was
related
to
untrusted
workloads
right,
like
we
have
untrusted
workloads
in
the
build
clusters,
where
that's
not
the
case
in
our
service
clusters.
E
D
One, I think, the main reason why I dislike this idea of using namespaces here is that we get this kind of security that is supposed to protect us against bugs we have in our code. However, what in an ideal world we would at some point get to is one Deck instance that can, based on which user you are, show you different jobs, and at the point we get to that, we won't have this namespace-based security anymore anyways.
E
That part I'd like to clarify on: what's the perceived incredible amount of change compared to using label selectors for this? Is that just because label selectors only have to be changed on Deck? Because the change that I'm perceiving, like, this does touch a lot of components, but it is just making the namespaces that we're listing over configurable, right? So shouldn't it be just a few-line change for our components?
A
So
maybe
that's
fair,
but
my
my
silly
question
is:
I
thought:
namespaces
were
the
correct
unit
of
isolation
for
kubernetes
kubernetes-based
resources.
I
don't.
I
guess
I
don't
understand
or
maybe
help
me
understand
like
what
is
the
far
simpler
scenario.
You
are
envisioning
that
keys
off
of
what
somebody
is
authorized
to
to
see,
because
somehow,
I'm
just
implicitly
assuming
the
authorization
is
tied
to
which
namespaces
they
are
allowed
to
see
via
our
back.
C
It
could
be
github,
oh
that's
right,
like
which
repos
can
you
see?
I
I
think
my
biggest
question,
though,
was
just
like
the
the
doc
didn't
lay
out
like
the
goal
for
security
and
thereby
it
was
a
little
bit
hard
to
understand
the
exact
trade-off
of
cost
benefit
for
so.
A
I will definitely... like, I'm the person who gave the security thing as an example, and I'm not trying to blow up the scope, so maybe that's not relevant, I don't know, but I'll call it out anyway.
G
That is almost the ultimate goal that we can get to. For example, we can store ProwJobs, not just display the short-lived ProwJob custom resource: we can save them in, like, a datastore or database, and users can use whatever cloud credential to authenticate, and they can only see the jobs they are allowed to see.
G
But that, I agree, and I do love the idea Aaron just proposed; I think that's the ultimate goal and the long-term destination we want to get to. To be honest, though, we have a hard deadline to get this stuff done in whatever way or approach.
D
Just, if we ever in the future add that single Deck instance, we lose the security benefit again, because this one Deck instance needs to have RBAC to see everything. That is why I find the security argument not convincing: we have to add quite a bit of complexity in all of Prow for this, and I don't think that's worth it. We could just use label selectors or something like that.
E
What if we did the label selector solution, since that's the simplest and least invasive, as a temporary solution, and then switched to our more robust solution when we had time to implement it?
C
I didn't fully understand the specifics of what was required from a security perspective, and given that, I do think the label selector would just be simpler to do, more straightforward, and easier to implement if you're going to have multiple Deck instances. From a time perspective, I think that might be a more advantageous approach.
B
I can go over, you know, what's listed in the alternatives considered: the non-RBAC, more label-style type of change, if you guys would like to hear that as well.
B
I'm
not
hearing
anything
question,
so
I
guess
I
can
go
over
it
really
quickly.
It's
simple
the
simpler
solution
that
we
discussed
would
just
be
to
add
a
hidden
id
alongside
hidden
to
the
crowd
job
set,
and
this
would
be
used.
Inside
of
that,
for
example,
are
someone
sorry?
B
It's
been
awesome
for
this
in
addition
and
the
debt
can
fit
alongside
hidden
repos,
which
is
currently
what's
there,
the
begin
repo
matches
which
matches
this
hidden
id
to
org
repos
as
well,
and
then
in
the
deck
options.
When
you're
setting
up
the
deck
instance,
you
would
specify
had
an
id,
and
the
idea
here
would
be
that
each
deck
instance
would
only
show.
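A rough sketch of the hidden-ID alternative just described; the field and flag names here are made up to illustrate the shape of the change, not actual Prow configuration:

```yaml
# Hypothetical sketch; field names are invented for illustration.
deck:
  hidden_repos:              # what exists today: repos hidden from public Deck
  - private-org/private-repo
  hidden_repo_matches:       # proposed: map a hidden ID to orgs/repos
  - hidden_id: tenant-a
    repos:
    - private-org/private-repo
# Each private Deck instance would then be started with its own ID,
# e.g. something like a --hidden-id=tenant-a option, and would show
# only the ProwJobs stamped with that ID.
```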
E
If we're going to go with the label selectors as a temporary solution, I think we would want that to actually be temporary, so we probably want to be pushing for the more complete solution within a reasonable time frame. But that definitely would let us get to our deadline pretty easily.
E
Yeah, by temporary I didn't mean, like, I don't feel an urge to remove it once we're done with it. I just mean more that I hope we can switch to our more robust solution within a reasonable time frame.
B
I mean, doing an easier change, I think, is something that I'm looking forward to, but also, you know, I think there definitely are some security benefits from the first proposal. But if it's going to be temporary anyways, having to undo all of it would be irritating, so I think, if we're looking into the future, I like the second solution as well.
D
Well, actually, I think the follow-up change to get a single Deck instance is not that hard, because we could put some kind of proxy in front of Deck that exposes, say, a header that, I don't know, gives us the user's group or name or whatever, and then Deck would only need a mapping from header to these tenant IDs, or hidden IDs, or however we call it, and filter by that. Those are both not very hard changes, so that then actually seems somewhat close after we have this done.
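The proxy idea above boils down to a small mapping and a filter. A minimal sketch in Python, where all the header values, tenant IDs, and job records are made up for illustration:

```python
# Minimal sketch of the header-to-tenant mapping Deck would need.
# All names (headers, tenant IDs, jobs) are invented for illustration.

# Mapping from the identity header the auth proxy sets to tenant IDs.
HEADER_TO_TENANTS = {
    "team-a": {"tenant-a"},
    "team-b": {"tenant-b"},
    "admins": {"tenant-a", "tenant-b"},
}

def visible_jobs(identity_header, all_jobs):
    """Return names of jobs whose tenant ID the caller may see."""
    allowed = HEADER_TO_TENANTS.get(identity_header, set())
    return [job["name"] for job in all_jobs if job["tenant_id"] in allowed]

jobs = [
    {"name": "pull-repo-a-unit", "tenant_id": "tenant-a"},
    {"name": "pull-repo-b-e2e", "tenant_id": "tenant-b"},
]

print(visible_jobs("team-a", jobs))   # only tenant-a's jobs
print(visible_jobs("unknown", jobs))  # no mapping, nothing shown
```

An unknown header falls through to an empty set, so the default is to show nothing rather than everything.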
A
Yes, okay, yeah, it sounds cool to me. I just wanted to make sure we've all set that expectation.
A
Okay, Mitchell, do you have any more questions, or do you feel like you've gotten what you need from this?
A
I
tried
to
take
notes
of
what
sounded
like
our
salient
consensus,
but
if
anybody
looks
at
the
meeting
notes,
it
was
like
no,
that's
not
what
we
said
at
all.
Please
feel
free
to
correct
me
also
mentioning
note
taking,
because
I'm
about
to
start
talking
and
I'm
really
bad
at
talking
and
typing.
At
the
same
time,
if
anybody
feels
compelled
to
kind
of
summarize
what
we
talk
about
next,
I
appreciate
it.
A
So,
let's
see
here
I
will
share
my
screen.
Just
so
y'all
don't
have
to
look
at
my
face.
The
entire
time
will
I
find
the
right
window.
A
I think I did okay, so.
A
Maybe a little late, but: on fully moving the project away from google.com-owned assets. One is a GCS bucket called kubernetes-release-dev; one is a GCR repo called kubernetes-ci-images. These are both used to host builds of Kubernetes that are built via postsubmit or periodic jobs from the kubernetes/kubernetes repo.
A
These
names
are
hard-coded
and
we're
hard-coded
in
a
lot
more
places
when
we
started
talking
about
this
about
a
year
ago,
so
the
timeline
is
as
of
kubernetes
117,
which
I
think
is
roughly
over.
Like
a
year
and
a
half
ago
we
started
dumping
kubernetes
assets
into
a
community
owned
bucket,
called
kate's
release
dev.
A
So there's a wonderful code search thing; let's see if I can drag this tab up. Cool, I think that worked. I specifically have excluded vendor here.
A
These are the deprecated build jobs that push to the old place; they're supposed to keep pushing to the old place. But everything else is a bunch of subprojects that need this updated, and there's also a bunch of random documentation that also needs this updated.
A
So
my
intent
is
to,
I
think
the
reason
I'm
bringing
this
to
the
group
is
hey.
If
anybody
wants
to
help
out
with
cleaning
up
these
other
repos.
I
would
super
appreciate
the
help
and
b.
If
nobody
has
any
objections,
I'm
gonna
like
send
out
this
deprecation
notice
and
I'm
gonna
make
it
part
of
our
like
release.
A
notes, action required, whatever. This is all a rehearsal to do the same thing for release artifacts that are hosted in the kubernetes-release bucket. I'm currently in the process of prototyping something that uses Google Cloud Storage Transfer Service to sync from one bucket to the other, and eventually we're going to do that flip.
A
There's
a
redirector
called
dl.kates.io
that
many
many
many
people
use
to
get
like
the
latest
version
of
kubernetes
or
a
lot
of
people
download
coop
cuddle
just
bear
crew
pedal
from
it.
Apparently
so.
A
We're
gonna
flip
that
over
as
well
and
it'll
be
interesting
to
see
how
much
traffic
that
points
away
versus
how
much
traffic
we're
gonna
have
to
redirect
by
changing
a
bunch
of
hard
codes
across
the
project,
and
I
suspect
that
there
are
far
fewer
people
out
there
who
download
ci
assets.
Then
you
download
actual
kubernetes
release
assets.
A
We do not redirect for this; we will when it comes to the actual release artifacts. At the moment, just for context for folks here: over in the K8s Infra working group, we're talking about setting up something called, maybe, registry.k8s.io, that is going to serve as sort of a cross-cloud redirector, such that when somebody hits it and asks for a container image or a random binary,
A
They're
going
to
get
redirected
to
whatever
place
makes
the
most
sense
for
them
most
sense
for
them.
Ideally,
this
will
be
used
within
the
major
clouds
to
redirect
people
to
the
cloud
local
mirror
of
those
artifacts
such
that
the
project
is
not
paying
hundreds
of
thousands
of
dollars
in
network
egress
to
send
artifacts
from
google
cloud
to
other
clouds
or
other
random
data
centers.
A
The
focus
has
mostly
been
on
container
images,
but
random,
tar,
balls
and
binaries
is
part
of
that
effort.
I
just
don't
think
that
as
much
attention
has
been
paid
to
that,
I
thought
we
didn't
even
have
support
for
that,
but
I
vaguely
have
the
notion
that
justin
santa
barbara
hacked
that
in
and
didn't
tell
anybody
and
is
using
it
for
cops
artifacts.
A
So
I
kind
of
need
to
verify
that
and
sync
up
with
the
release
team
about
possibly
moving
the
intended
community
hosted
bucket
over
the
same
project
that
we're
hosting
all
of
the
other
production
artifacts.
A
Jeffrey... gfree, gp, I'm going to call you all kinds of names, sorry; you're welcome to correct me. Gfree? I like g3, okay. If people are cool with it, I thought we could spend the rest of the time walking the board and just seeing what's out there.
A
I
thought
it
might
be
cool
to
go
specifically
through
the
health
wanted
issues
just
to
give
folks
an
idea
of
where
we
could
use
some
help.
A
We
try
to
groom
these
and
make
sure
that
they're
still
fresh
and
relevant
they'll
either
have
a
life
cycle
frozen
label
on
them,
which
means
yes,
they
are
actually
still
open
issues
and,
yes,
we
could
use
the
help
if
they
have
anything
else
like
a
life
cycle
stale
or
something
give
it
give
me
some
time
I'll
go
see
if
it's
still
like
a
relevant
thing,
so
the
board
linked
it
in
the
meeting
notes
I'll
toss
it
up
in
chat
as
well.
If
anybody
wants
it.
A
Fun fact about project boards: they only work for a given organization, so we do actually have a sister board over in the kubernetes org. I don't have a quick handy-dandy link to it, but somebody was kind enough to post a link to one issue over in kubernetes, which is about adding a version command to kubetest2. The idea here:
A
it's the default job that runs against clusters set up in GCE using everybody's favorite bash script, kube-up.sh, and if TestGrid loads, I'll be able to point to a column that shows what is called the infra commit.
A
It's
this
column
here
which
not
quite
nelson
okay,
so
the
infra
commit
is
something
that
comes
up
by
the
way.
Basically,
is
the
commit
of
the
test
infrarepo
at
the
time
that
the
job
ran.
A
A
A
The
problem
is
that
this
is
all
kind
of
it
comes
from
the
legacy
bootstrap
scripts,
which
were
temporary
about
three
years
ago
and
bootstrap
should
no
longer
be
used.
There's
another
issue
on
the
board
that
I
can
go
to
for
that,
and
instead
you
should
be
using
pod
details.
You
know
you
should
be
marking
a.
A
And
then
you
magically
get
a
whole
bunch
of
stuff
for
free,
which
is
really
cool,
but
what
you
don't
get
is
the
infra
commit
and
so
really
the
thing
that
is
relevant
when
it
comes
to
info.
A
it's outside of the test-infra repo, it's over in kubernetes-sigs, and, you know, commits to this aren't just about commits to the whole test-infra repo.
A
It's currently being used, as far as I know, by the kOps folks; it's being used by a number of SIG Scalability folks to stand up their clusters; and it's being used by other people internally within my employer. We do have a KEP somewhere to migrate everything to using this, but everybody who's on the CI signal team, or anybody who's involved in the release team, might wonder: where's this extra piece of information that will help me troubleshoot when something changed, and what changed, and that sort of stuff? So, a long-winded way of saying: if you want to add a version to kubetest2, that would be really cool, because then we can dump that version someplace.
F
Hey, hi Aaron. So I had actually... I think this came up a couple of meetings ago, and I asked Supriya about, you know, helping around with this. What I see is that the PRs to add version to kubetest and kubetest2 have been merged, and I'd also started a thread in SIG Testing. So if that has been done, then we just have to add the column header in our default yaml, right? Somewhere here.
A
Maybe let me find the actual issue that's about the infra commit real quick. The default yaml is what would be used for literally every single job, and not literally every single job uses kubetest or kubetest2; there are many jobs that run unit tests, for example, for which that information would not be relevant.
A
Let me just find that issue and I can point you at it. I don't actually know the full chain of connections necessary to get the data in the right place so that TestGrid can read it, and I'm slightly unfamiliar with that, so I'll show you how other stuff gets loaded up.
A
So
I'm
clicking
through
to
a
crowd
job,
I'm
going
to
it's
artifacts,
I'm
looking
at
finished.json,
so
you
can
see
it
inside
of
finish.json.
Is
this
thing
called
metadata
and
some
some
part
of
our
scripting
or
machinery?
I
just
don't
know
which
is
responsible
for
looking
for
a
file
called
metadata.json.
A
That
expects
it
to
be
a
key
value
file
and
then
it
will
read
stuff
from
that
file
and
put
it
into
this
finished
json
under
the
key
metadata.
And
then
that
is
what
test
grid
reads
and
when
you
specify
things
in
default.gamble
or
like
whichever
test
grid
config
is
specific
to
your
jobs
and
you
want
to
add
custom
column,
headers
you're,
referring
to
these
things.
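To illustrate the chain just described, here is a hedged sketch; the version string is invented, and the exact TestGrid config shape should be checked against the TestGrid docs, but I believe the column header is declared along these lines:

```yaml
# Hypothetical values illustrating the flow described above.
# 1. The test harness writes artifacts/metadata.json as flat key/values:
#      {"kubetest-version": "v1.22.0-alpha.3+abc1234"}
# 2. The machinery folds that into finished.json under "metadata":
#      {"timestamp": 1626213600, "passed": true,
#       "metadata": {"kubetest-version": "v1.22.0-alpha.3+abc1234"}}
# 3. A TestGrid test group config can then surface the key as a column:
column_header:
- configuration_value: kubetest-version
```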
A
So
apparently
we
do
have
a
coupe
test
version
thing,
but
it's
blank.
So
I'm
guessing
the
decision
that
we
need
to
iterate
on
this
issue
over
whether
we
want
to
have
a
q
test,
2
version
and
then
have
like
the
jobs
that
used
to
test
2
have
a
column
specifically
for
that.
If
we
want
to
talk
about
having
like
a
version,
a
generic
like
test,
harness
version
do
it
for
that.
A
So ping me in Slack offline and I'll find the issue I'm talking about. Does that make sense?
A
The
job
config
shouldn't
have
to
change
at
all.
As
far
as
I
know
or
like
we
might
need
test
two.
I
think
basically
would
need
to
write
its
version
into
a
metadata
file
and
I'm
not
sure
if
there
is
something
in
the
pod,
utils
or
bootstrap
or
kettle
realm.
That
needs
to
like
munch
the
metadata
into
finished.json
for
testgrid
to
pick
it
up
as.
E
Yeah, the one piece of information I know that can help with that is that the Pod Utilities will look for a metadata.json file in the artifacts directory, and if that exists, that is what populates that section of finished.json right there.
E
It might be as simple as just doing that. I think it might get more challenging for kubetest2 things, because something might already write metadata.json, so you might need to figure out if you need to merge it, or get the two tools to collaborate. But it might just be as simple as that for kubetest2.
A
Okay, I appreciate that additional info. That's really helpful.
A
We got through one help wanted issue, and we've got five minutes left. The test-infra architecture diagram is out of date. Oh no.
A
So I think somebody actually redid the architecture diagram a while ago; let's see. But I still think it's... yeah, so it's a little bit newer. It no longer has mungegithub in it, hooray, and it does have...
A
I
think
basically,
what
I
feel
like
is
now
missing
from
this
diagram
is
something
that
talks
about
sort
of
prow
and
its
various
build
clusters,
and
I
want
to
start
talking
about
segmenting
this
into
stuff
that
lives
over
in
the
community-owned
google
cloud
organization
versus
stuff
that
lives
in
the
google.com
owned
community
organization
to
be
able
to
give
folks
a
really
clear
map
of
what
infrastructure
has
and
has
not
been
migrated.
A
So basically we have, I want to say, ballpark, about 200 CI jobs that still use bootstrap instead of kubetest or kubetest2 or the Pod Utilities, and it is relatively straightforward to change from one to the other.
A
If somebody wanted to assemble this into a document that just says, step by step, how to migrate: so, Dims put together a gist, and then Ricardo Katz went through and did a migration from bootstrap to Pod Utilities for some of the, I believe, node jobs. So if you need code to look at, for example, there are pull requests, and you can see it took him a couple of attempts to get through it.
A
So what I would be looking for is somebody who could kind of consolidate that down into a single PR. Here's Dims's gist as well, where he kind of posts the after and the before, and some notes on how to do it, and Ricardo talking about what worked for him and what didn't. So this is relatively... "simple" is the wrong word, but, like: write some docs, synthesize this into a doc, and let's link it someplace that contributors can read. And we've got tests that measure how many jobs are outstanding.
A
Okay, if none of those sounded appealing, there are other issues in the help wanted column, and there are also other issues we could use your help on; they're all on the board. If you're not sure what to work on, or you're not sure what you want to work on, feel free to reach out to the SIG Testing channel on Slack. And thank you all for your time.