From YouTube: Kubernetes SIG Testing - 2021-03-09
You: You seem to have some bandwidth issues, but it's on.
F: That would be the web client's fault. If it's too much I can switch over to the other laptop. You know what, I'm just going to go ahead and do that, and I'm going to move my agenda item down to the very bottom, if that's cool with you. Thanks, Aaron.
D: So now, first up we have Steve's topic on built-in censoring.
A: Thanks, Ben. So we landed a bunch of changes over the last week and a half or so.
A: The basic idea is that we push logs and artifacts and whatnot into public buckets so that we can all collaborate on the output of jobs. Unfortunately, that means that if there are secrets in those artifacts, they get put, generally in plain text, on the internet, available to everyone.
A: So sidecar now has an algorithm which will censor secret data out of those. I've written it to be memory-bounded so that we don't explode the footprint of the sidecar if someone is uploading enormous logs. We censor secret data that's in plain text as well as base64-encoded, since sometimes people just have Kubernetes Secrets that they're passing around. We'll also transparently open archives, so for tarballs and zip files we'll censor the stuff inside of them and then zip them back up before we upload.
A: By default this is off. You can turn it on in the Prow config; it's part of the decoration config, so you can turn it on for a repo, for an org, or for everyone.
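For reference, a minimal sketch of what opting a single repo in might look like, assuming the censoring toggle lives in the decoration config as described here; the field name and the org/repo are placeholders for illustration, not the live configuration:

```yaml
# Sketch only, not the live config: opting one repo (placeholder name)
# into artifact censoring via the decoration config.
plank:
  default_decoration_configs:
    "kubernetes/test-infra":
      censor_secrets: true   # assumed field name for the new censoring behavior
```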
A: I didn't know exactly which was the best way to get this to work for everyone, so the default implementation today will look at any volume mounted onto the pod that comes from a secret source. We sort of infer that these are secrets that contain something special that the test actually has access to, so we'll go ahead and mount all of those into sidecar as well and then censor them. I'm totally okay with changing that; it's a fungible implementation, but that's at least what we have today that you can opt into.
A: I guess my two questions are, first: is that a reasonable thing to be doing by default? And then second, Aaron and Ben: how do we roll this out? I think maybe rolling it out to test-infra as a repo first might make sense. We're running it on our end and it seems to be useful, but I didn't know how we wanted to do that, or if we wanted to at all.
D: Yeah, that sounds good to me. One question I had was: how do you handle the live logs endpoint? Is anything being done about that?
A: No, I wasn't super certain how to do that, but I guess the underlying mechanism could be used there as well. Let me write that down to follow up.
D: Yeah, I think that would work fine. You probably just need to make sure that whatever the implementation is, it's reusable enough to apply to that endpoint as well.
A: I think the challenge is that Deck is running on the core cluster, right, and the live log endpoint is just going through the Kubernetes API to grab the pod logs. So if the job has access to a secret, we don't have access to the secrets that the job has access to, because that's on a different cluster.
D: We could... oh yeah, I guess maybe we could run something in the cluster with the secrets, but...
F: I guess I have two questions. One: any sense of the performance impact of this?
A: So the way it's written, it will be bounded in memory consumption. If you have a lot of enormous tarballs and stuff, it'll take time and CPU. I also recently realized that when we were setting the... or, I guess, there are two cases where the time it takes to do this is important, right? The first case is when we get interrupted by a timeout in the middle of doing this, and the second case is when we've already gotten the timeout and we're in the grace period.
A: Recently I noticed that we were actually setting the pod's grace period to be exactly equal to the grace period of the test process, which meant that there would never be any time to upload anything after it was done if the test process actually took the entire grace period. I've recently changed that, and it was super naive: I just assumed an 80/20 distribution, where it takes a long time to generate the logs and a shorter amount of time to upload them.
A: That seemed to be appropriate based on what I was seeing, but it's obviously fungible. I guess the point being, it was fairly minimal processing time on our end. We now extend the pod's grace period to also give extra time for upload on top of the grace period that the process has itself, and I guess we could make that longer if we saw that this was having an impact.
D: Seems like it might make sense to have some way to either opt out certain artifacts, or opt in the ones that we think are likely to have these secrets, so that we're not processing things like binary test build artifacts, which are pretty unlikely to contain these and are unnecessarily expensive to process.
F: Yeah, my question about the performance impact comes from optimistically assuming you've tested kind of a common case; the fun corner cases we hit are from the scalability team and the sheer volume of logs and stuff that they generate.
F: Another comment I had: I'm wondering if the k8s.io repo would be a better pilot to start with. I know there are some secrets used there, and I know we also have jobs like the container image promoter which dump a decently sized log. That way, if we run into anything that breaks or is blocking, we haven't blocked all of test-infra, and then after that I think test-infra might be a reasonable pilot.
A: Ben, do you think it would be better to have opt-out matching rather...
D: ...than opt-in? I'm not sure which makes more sense, though I think we usually kind of offer both, and that's probably okay. We can do both, because you can always have, you know, opt-in star, opt-out star, and merge them.
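There wasn't a per-path mechanism at the time of this discussion; purely as a hypothetical sketch, the opt-in/opt-out globbing being described could look something like this, with every field name below invented for illustration:

```yaml
# Hypothetical field names, for illustration only.
censor_secrets: true
censoring_options:
  include_paths:              # opt-in: only these artifacts are scanned
    - "artifacts/**"
  exclude_paths:              # opt-out: skip expensive binary build output
    - "artifacts/_output/**"
```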
A: Okay, cool. So I'll open a pull request for the k8s.io repo, and I'll try to implement the path matching, and then I'll think about the live logs. As it stands, the live logs are at least a much smaller attack surface and they're around for a very short period of time.
A: Even if the pod logs are available, we don't serve them if the job is finished, I believe; we can only serve them when it's running.
D: Anything else? Okay. So next up, we have progress on our SIG's KEPs for this cycle. First up is actually me: the KEP is reducing Kubernetes build maintenance, or, as some people might call it, deleting Bazel. We've all but removed Bazel from the kubernetes/kubernetes repo, which is the only one we're discussing for the purposes of this KEP. There have been some more lingering traces of it cleaned up since the last go-round. We still need to update developer docs that are not in the kubernetes repo, and we need to fix...
D: So I intend to migrate that. I believe that is fine pending test freeze, because it's only a test configuration change and not any actual source code changes. So we're completely on track for this, and we have pretty much all of that shifted.
D: It also looks like, for the most part, the impact is pretty negligible, as expected. For some of the jobs doing builds we're spending a bit more time building, but I've also been working on a follow-up to allow e2e jobs to only build the artifacts they need, instead of more or less a full release for one platform, which is what most of them are doing currently, and I think we can cut time there.
D: Okay, so the kubetest2 CI migration. Do you want to talk about that?
I: Yeah, so let's see. We were able to completely migrate over one of the jobs, which is a good start. We migrated over the GCE master conformance job, and we've changed the relevant Testgrid links to point to the kubetest2 job instead of the kubetest1 job, and I think it has the same level of signal.
I: I don't see any extra failures or any false results there, so that's a good start. Aaron was able to find a couple of differences in how the logs are generated, but we are working through those for the node and scale tests.
I: I think what we are targeting for 1.21 is to at least get them to functional parity with kubetest1, so in the next release we are at least unblocked to start migrating jobs one after the other. I think we can start with some of the GCE jobs, since most of that functionality should be in. And yeah, if any folks are interested in helping out with this effort or in contributing in general, there's a lot of stuff to do, mostly in kubetest2.
I: Yeah, I think I've mostly been tracking most of the progress in the KEP itself, but I can probably pull out one specific issue for the GCE migration.
D: I think once you're confident that a class of jobs can be migrated, we should do something similar to the Kubernetes CI policy effort and have a tracking issue and then a number of individual ones, so that we can let people shard out the migration.
F: Yeah, I completely agree. I guess my question is: how are you planning to get as many negative lines committed as Ben did with his KEP?
F: Yeah, it sounds good. I'm super excited about this. It'd be great to use something more easily supportable, and it's in its own repo, so it's a lot easier to keep track of, a lot easier to develop, all that fun stuff.
D: I will try to go follow that after this video. Okay, so next up, Chao has a topic about continuously deploying Prow. Chao?
F: I don't see Chao around. I added it, but then I neglected to go ping him, so I'll just give an update on his behalf. We talked last meeting about the idea of sort of having Prow deploy continuously: every time there's an auto-bump PR, have it merge automatically.
F: We were kind of freaked out about the idea of doing this on an hourly basis, so the proposal is to do it every three hours. Then we started talking about how we can reduce the churn and noise, and how we can make it easier for people to troubleshoot what changed at what time.
F: So, some things that Chao and some of the other test-infra contributors have done to mitigate this: the prow-deploy job now posts an alert to the prow-alerts channel. The thinking is we don't want the testing-ops channel to be spammed every time something routine happens, so we're going to try and repurpose the prow-alerts channel for this. On a related note, the prow-alerts channel is spammed quite a lot with alerts about ghproxy API calls having some status code.
F: So Chao ended up really pruning those rules and tightening those thresholds, but it seems like they didn't actually take effect, and we're trying to troubleshoot why. The goal is to have the prow-alerts channel be more of an informational channel about routine changes, so contributors could lurk in that channel to get an idea of when Prow changed. This is basically just an alternate view of looking at the appropriate Testgrid, but it's more immediate.
F: The other thing that we've done, thanks to work from Mitchell Hermann, is we've really improved the auto-bumper functionality to the point where you can define different config files to have different auto-bump jobs bump different things. So we talked about splitting up bumping Prow images from bumping all of the other images involved in jobs, and where we're at today is, I think that now happens; there are separate PRs that get opened up right now.
F: They both get routed to the test-infra on-call, but the goal is to, as quickly as possible, hand off the bumping of job images, possibly to the CI signal team or some community team that is more appropriate, because bumping job images doesn't really require special super-secret access to k8s Prow. It's also pretty easy to revert, and the people who are impacted by it are more likely to be watching the results of jobs than the test-infra on-call rotation.
F: So I think we settled on three hours, and it's still going to be during on-call working hours. The idea being that three hours is about as long as most of our periodics and presubmits should be lasting, so we feel like that would give us enough time to notice that something had changed in between job runs.
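Purely as an illustration of the cadence being described here, a deploy periodic gated to roughly every three hours during weekday working hours might be expressed like this; the job name, image, command, cron window, and time zone are all placeholders, not the real configuration:

```yaml
# Illustrative sketch only: placeholders, not the real prow-deploy job.
periodics:
  - name: ci-test-infra-continuous-prow-deploy     # placeholder name
    cron: "0 9-17/3 * * 1-5"   # 09:00, 12:00, 15:00 on weekdays (assumed window)
    decorate: true
    spec:
      containers:
        - image: example-deployer:latest                # placeholder image
          command: ["make", "-C", "prow", "deploy"]     # placeholder command
```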
F: Issues? Yeah, I think we're totally fine with that. I'd love to basically do the same thing for the job image updates; I think it's just getting people comfortable with that. The super ideal place I'd like to get us to, the world I'd love to live in, is where there are a bunch of columns in Testgrid for all of the sundry things that change, like the image.
F: What changed when, because I would love to be able to change things as quickly and as rapidly as possible, but right now humans kind of have to know how everything is wired together.
D: Thanks, Aaron. Now we have Arnaud for a discussion about the Prow migration.
C: Yeah, so last meeting we decided to open a discussion on the sig-testing repo about the Prow migration to the community infra. I dumped some questions there, really one specific question related to the kubernetes-jenkins bucket. My main question is: do we want to take care of this before we migrate Prow?
F: I've got to think about it. I know I owe you time on this. I've been giving priority to your help with the audit stuff in k8s-infra, I've been giving priority to some internal stuff, and I'm giving priority right now to test freeze. But I will come back around and iterate on this a bunch.
F: My hope is... I think the scalability team may have already started migrating at least their artifacts to a different bucket, such that Prow could read from that bucket for artifacts.
F: But I might be totally misunderstanding the scope of that. I sort of agree with Cole's suggestion that it would be ideal if we could make sure that Prow can read from multiple buckets. I think he's setting up a configuration for default decoration that can be sharded by cluster; I know Alvaro and Steve have been looking at that pull request. So, ideally, we could leverage that to say that anything that's running over in k8s-infra dumps its logs into a k8s-infra-owned bucket.
D: I think it's a good idea to migrate the bucket. I'm not sure if it's a hard blocker or not, and I would point at Aaron for that one, since he's been kind of managing the security of the existing resources on our end for the moment.
F: The main thing for me is that I'm going to defer to the expertise of folks like Alvaro and Steve and Cole and Erick Fejta, who are more actively involved in Prow's development and Prow's day-to-day, because they'll be able to help us understand and articulate where having multiple Prows pointed at the same thing might be weird, and where it's kind of a prerequisite or blocker to separate things out.
D: Well, we've definitely had multiple Prows pointed at the same GCS bucket, and that's not really problematic. I think the more interesting thing would be stuff like the service account credentials to write to the bucket; that's why I bring up that angle. I don't think we have any real issues with sharing a bucket, since everything's name-based.
F: So, I don't know. I promise I'll participate in that discussion soon; you've just got to give me a couple more days.
D: Also, thank you for working on this. I think it's been a critically understaffed effort; it's really good to see anyone working on this. Currently I feel the same as Aaron: code freeze has eaten almost all my time.
F: Sure. Wow, we're just whipping through all these things. You should run meetings more often, they're so much quicker. So, for those of you who don't know, every SIG has to write an annual report.
F: We had to get a draft open in pull request form by yesterday, and they're going to be due by the end of this month. So, my plan is... I started on it.
F: It's out there in public if people want to take a look at it and have comments, but I kind of feel like I want Steve, Ben, and myself, as the SIG Testing leads, to schedule some time to populate more of that content and then bring it back to the group for review. But if you have any questions, suggestions, comments on things you think we are doing well, or comments on things you think we could improve on...
F: ...we would absolutely welcome your feedback. It's probable that we're going to reach out to subproject owners to comment more specifically on the status of their subprojects, for things like kubetest2 and Boskos and the e2e-framework repo that was created as part of the testing-commons subproject.
F: Yup, I agree. Okay, so I'll take an AI to go open up a HackMD about that. Steve and Ben, I am coming for you with a calendar invite so we can have a meeting to discuss what we talked about at this meeting, to document the process for how we have other meetings.
F: What I'm trying to accomplish here: the GitHub management subproject, you know, is looking to help the Kubernetes project move all their repos' default branch names from master to main.
F: I've done this for a couple of repos, and there are three things that I identified as kind of blockers, or at least things that would really help smooth the experience for the majority of our repos, but especially our big repos. The one that Steve and I are working on at the moment is, I'm trying to think, all the bootstrap jobs and all the pod-utils jobs that are periodics right now.
F: They have the branch name master hard-coded in them, and I would like the ability to remove that hard-coding so that these jobs can be tolerant to the branch being renamed out from underneath them and will just follow the rename. Today I have to change the job config, then go rename the thing; basically I have to do it in lockstep, and that's a lot of coordination. I want to federate this out to all of our repo owners and subproject owners as much as possible.
F: An example of where we can do this today is with our presubmits and postsubmits: many of them explicitly target master as the branch they're going to trigger for. You just change the regex to say target master or main, and they'll just do the right thing. It's awesome. I want the ability to allow our periodics to do the same thing, and anything that clones additional repos as part of its presubmits or postsubmits.
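Concretely, the regex change being described looks like this in a job config; the org, repo, and job name below are placeholders:

```yaml
presubmits:
  kubernetes/example-repo:          # placeholder org/repo
    - name: pull-example-verify     # placeholder job name
      always_run: true
      branches:
        - ^(master|main)$           # was ^master$; now tolerant of the rename
```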
F: So for me, I feel like the pod utilities right now require that you specify a branch; you specify org, repo, and branch, and I have to have all three of those things. I'd like the ability to either not specify the branch, or to specify the HEAD of the repo that I'm checking out, so that these jobs can clone whatever the repo's default branch is.
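For reference, this is roughly the shape being described: a periodic has to pin base_ref in extra_refs today, which is the hard-coded branch the proposal would like to be able to drop or point at HEAD. The job name, image, and command below are placeholders:

```yaml
periodics:
  - name: ci-example-periodic        # placeholder name
    interval: 2h
    decorate: true
    extra_refs:
      - org: kubernetes
        repo: test-infra
        base_ref: master             # hard-coded today; the ask is to let this
                                     # default to whatever the repo's HEAD is
    spec:
      containers:
        - image: example-image:latest        # placeholder image
          command: ["./hack/example.sh"]     # placeholder command
```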
A: Yeah, and I think my only comment to you is: adding the concept of jobs that run on default, implicit branches is totally reasonable, but it might have a larger overhead compared to just extending what we already do. As a user of Prow, I'm comfortable with and I know the current approach, so if I see that approach also in use in periodics...
A: ...it's not going to be confusing, there's not going to be any more learning to do, and I think on the coding side it's going to be a little bit easier. I don't think there's anything wrong with having a job type, or a job triggering type, that goes off of the default branch, but I do think it'll be a little bit more complicated, especially because you'll have to answer questions like: I've loaded my config, this branch declares that it runs on the default... or, sorry,
A: ...this job declares that it runs on the default branch for this repo. I need to go check what that is, such that when a PR comes in for that repo I know whether or not to trigger, and if the default branch changes I need to be periodically checking for that as well, or maybe make some assumptions about how often it might change. I don't think it's a bad feature.
A: I just think it might be more complicated than expanding the set of periodics. I don't think it's been a pain point for anyone, which is, I think, why we've ignored it, but in practice we do have periodics that are very tightly bound to repos, and I'm not really sure why we don't have another type of job with these explicit semantics that we see. But yeah, I think either approach is fine.
F: Okay, I'm not going to lie, I've looked at implementing this, and I think some of the CI signal people could maybe empathize with this: it's really annoying that for presubmit and postsubmit jobs I can assume they have a repo for free, but with periodics I have to go do some extra digging to see if they have extra_refs and then pull the first one out.
F: It would be nice to be able to make that assumption. At the moment I've been looking at tackling that by maybe having some Prow job utilities automatically do that for me. This is handy for auto-populating stuff for Testgrid; this is handy for enforcing policies on jobs at presubmit time and so on. My concern is just with adding another job type that is a periodic.
F: So I kind of feel like solving this at the clonerefs level, the thing we use to express where you want to clone from, will neatly solve this for all of the existing job types that we have. The thing that I'm really sticking on, or don't quite know how to answer, is when it's appropriate to resolve what I'm asking for to an actual SHA.
A: Right, and I think the one thing that would be a detraction from that approach is you can't cleanly sequester it to that one part, right, because we filter the set of all registered jobs down to the set of jobs that we actually want to trigger based on a GitHub event, at the time that we get that event, and one of the criteria today is: are we triggering this job because it's bound to this branch?
A: So you can't just have this happen in clonerefs, because you do need to have that information available to you: what branch does this actually need to run on, what is the default branch? And you need to do that at config resolution, or in trigger, somewhere.
F: It's a git-native thing. The way I would solve this via a git-native implementation, assuming I had a remote, is I would do `git symbolic-ref refs/remotes/origin/HEAD`, and what that gives me back is the name of the branch that the remote has as HEAD. So I can use that today to ask: is this repo using master
F: ...or main. So that's me resolving it at fetch time or at clone time. The other option: it feels like what we do today is we make Plank in charge of resolving a spec to a SHA, and it does that via querying GitHub's API, I think, so there's some potential token-based consumption there.
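For reference, the git invocation being dictated above, with illustrative output for a repo whose default branch is main; the second command is an alternative that asks the remote directly instead of relying on the ref recorded at clone time:

```sh
# Which branch does origin's HEAD point at? (recorded at clone time)
$ git symbolic-ref refs/remotes/origin/HEAD
refs/remotes/origin/main

# Or query the remote directly (trailing SHA line omitted here):
$ git ls-remote --symref origin HEAD
ref: refs/heads/main	HEAD
```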
H: I have a suggestion that might hide too much stuff and might not be a solution here, but I know that you can add a label that will apply config. So if we added a label like kubernetes-default, removed the ref from the jobs that actually use the default, and it all came off of a singular config, then you'd only have to switch the branch from master to main in one place, and that label would apply the extra ref to all default Kubernetes clone-ref jobs in periodics.
F: I might be wrong, but I feel like you're talking about if we were able to change all 175 repos from master to main at once, whereas I think I'm asking for a solution where we allow the individual repo admins and subproject owners to make this change when they and their contributors are ready for it. Got it, okay.
A: Yeah, so I looked it up; I think it's a symbolic ref that's given by the server, and it does seem like Gerrit and GitLab and GitHub all do something like this, but it's not actually built into git itself. But in any case, I think...
A: I think it's a perfectly reasonable thing to do, but I do think you open up a can of worms. A pull request comes in targeting a specific branch; I need to know which jobs to trigger on that pull request based on the branch it's targeting, and for that I need to know what the default branch is if any jobs are configured to run on the default branch. It's not an intractable problem, but I do think it can't happen only in clonerefs.
F: I hear you. I'm not proposing that we change the current implementation of which branch we choose to trigger presubmits and postsubmits on; the mechanism we have today where you specify a regex stays. I only want this to be available for periodics, and for the extra_refs part of periodics, presubmits, and postsubmits.
F: So, for example, today there are some jobs... I'm trying to think... the job that does automated audit PRs for k8s-infra: it is a periodic for the k8s.io repo. I could do it as a postsubmit for the k8s.io repo, but I would also, at least right now, the way things are implemented today...
F: ...I'd also want to clone the test-infra repo, so that I could use pr-creator from the test-infra repo. I'd like the ability to say: just get me the test-infra repo at HEAD, I don't care what the branch is. So I think this is strictly constrained to the extra_refs thing and not to the branch-trigger thing. Yeah, I think that's fairly reasonable.
A: However, changing a periodic to have git content associated with it, unless we do that in a clever way where you can associate it with more than one repo... I think you'd still have the problem, right, because if you had multiple extra_refs and you wanted to have the implicit thing on all of them, then you'd still need what you're talking about. Yeah, that seems reasonable to do.
D: Yeah, for context: we also have things like, in the kind repo, we're running equivalents of the Kubernetes presubmits that use kind, so we can make sure that we don't break those. So we need to clone the different branches in each job and, of course, we're not explicit, or ideally wouldn't be explicit, about the main development branch as opposed to the release branches, and that's an extra ref right now.
A: Yeah, we probably want to fork the type so that extra_refs and refs are actually different types, since we wouldn't support this on presubmit or postsubmit events, but yeah, that seems reasonable.
F: Okay, I'll take a crack at it. If I find out I'm wading in way too deep, I'll pull back and do a proposal.
A: Like, when we're in clonerefs, what are we doing? Because we have just a base ref we're cloning, right?
F: I'm going to try that first, but yeah, I think I got what I need. Thank you all for letting me talk a whole bunch. If I had actually prepared this, I would have at least gotten some sample code to illustrate what I'm doing, but I hope that made some sense and maybe gave you some insight into the design constraints we're trying to think about when we talk about modifications to Prow.
E: One last comment with regards to pr-creator and that auto audit job: it would be nice to have pr-creator as a binary to download, or as part of the images that we use, so that we don't need to build pr-creator any time we need to use it; that flow is a little more difficult. And then, as you're wrapping your head around this...
E: ...looking into it, I didn't see a clean way to use pr-creator to have decent logic in the job to add extra commits when things have changed further, while the PR is already open. Currently I just force-push everything as one big commit.
F: Yeah, I feel that pain for sure, having to review those a lot. I guess I would ask: if there aren't issues for these already in the test-infra repo, please open them up, and we'll make sure the right folks on the test-infra team can take a look into those things.
F: I just have one other thing I'll say, but I'm totally happy to give you all ten minutes back. I feel like we talked last week about how my concern is that I take up a lot of air time in the meeting talking, and one of the main reasons I'm here is because I get to hang out with everybody else.
D: Well, that's a little awkward... Thanks, everyone, for coming; it's been nice. I think we've gotten a bit more productive with these meetings this year, lots of good discussion, and I'll try to follow up with your issues when you file them. I think we're just about at time now, so thanks everyone for coming, and have a happy Tuesday.