From YouTube: Kubernetes SIG Testing - 2021-08-10
A: For this meeting, you are free to reach out to me on Slack or all the usual places, or you can reach out to conduct@kubernetes.io. This meeting is being recorded and will be posted publicly to YouTube later, so you can watch us be our very best selves. On today's agenda, I wanted to talk a little bit about stuff we've accomplished during the last release cycle. After that, we're going to have Yuvaraj talk to us about granular approval support for the approve plugin, and then I thought we could use the remainder of the time to talk about what plans we have, as a SIG or as individual contributors, for the 1.23 release cycle of Kubernetes. So with that, let me find the doc where I was scraping random stuff, so I can put it all in here.
A: I'm bad at celebrating accomplishments, I'll just say that up front. I think one of the wonderful things about the Kubernetes community is that we have the shout-outs channel, where people can shout out folks who do awesome stuff or go above and beyond, and I aspire to get back to that level. But it's been difficult for me to do so, because in the old days I used to have visibility, kind of, on all of the PRs that were flowing into test-infra.

A: So if you all notice stuff that seems really cool, or something you're really excited about has landed, please feel free to shout it out. Call it out in the #sig-testing channel, call it out in the shout-outs channel, bring it up in this meeting here. Because it takes me a long time to sift through what is a job config change, what is a prow change that's really helpful to somebody who runs prow on a project that isn't Kubernetes, and what is something that's actually been done to improve the experience of contributing to the Kubernetes project, especially when it comes to making sense of our tests, running our tests better, or making the test results more actionable.

A: So I'll give you my hot take, which I just dumped in the meeting notes, but I can share my screen so it's not just my face the whole time that I'm talking.
A: Let's see. So I tried to just bucket things that were in the 1.22 milestone, and I did this by taking a look at the SIG Testing board. As a refresher, this SIG Testing board covers all of the issues that exist over in the kubernetes organization, so we are missing things from kubernetes-sigs here unless they're added, as Amit did here for some kubetest2 stuff. As a note, we do have a separate board for kubernetes-sigs.

A: I just don't have that link quite as available. And then in this board we try to keep a healthy column of issues that anybody can work on — they're help wanted, and we try to tee them up. We try to make sure they're relevant: things that we plan on working on, things that are in flight, things that are blocked, and then things that are done. And so then I went and clicked on the filter box, I filtered to everything that was in the 1.22 milestone, and I came up with... there are only 14 issues on this board for the 1.22 milestone.
A: ...the k8s-infra working group, which is all about migrating the project's infrastructure over to a place that the community is empowered to manage on its own, such that the community is self-sustaining. And probably the biggest lift that we pulled off this release cycle was to make sure that all of the CI builds of Kubernetes live someplace that is community managed. So no longer do we rely on google.com to post either the binaries or the container images that are produced by Kubernetes, which is huge.

A: I sent out a mailing list message to kubernetes-dev trying to thank everybody who was involved there. So thank you to everybody who helped with that.
A: I've started looking into migrating our metrics pipeline over to k8s-infra. So I click on this issue — let's see, I'll do it the way that doesn't leave me on the board. Basically, this file. This file is a super useful file if you want to figure out what the flakiest jobs are that exist in Kubernetes, and then, within those, what tests are flaking the most. And the way we generate this file is we run a gigantic SQL query against the BigQuery data set; that BigQuery data set is populated by a tool called Kettle; and then from those queries we get a bunch of JSON, and then we pass that through jq, and you end up with something that looks like this. I know a bunch of JSON in text format is not necessarily the most friendly way of doing things.
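(For illustration, a rough sketch of the kind of pipeline being described — the query, table, and field names below are placeholders, not the actual metrics configuration in kubernetes/test-infra:)

    # Sketch: rank tests by failure count per job from a Kettle-populated
    # BigQuery dataset, then shape the rows with jq.
    bq query --format=json --use_legacy_sql=false '
      SELECT job, test_name,
             COUNTIF(result = "FAILED") AS failures,
             COUNT(*) AS runs
      FROM `k8s-gubernator.build.week`   -- placeholder table name
      GROUP BY job, test_name' \
      | jq 'group_by(.job)
            | map({job: .[0].job,
                   flakiest: (sort_by(-(.failures | tonumber)) | .[0:5])})' \
      > flakes.json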
A: I think there's an issue somewhere if somebody's interested in pulling this into more of a friendly front end that could be filtered or drilled down into or clicked, but this is really useful. So I can say, for example, the integration tests seem to be not super consistent. Consistency here is basically a proxy for flakiness — it's how often a test passes and fails for the same commit — and apparently the "exec plugin rotation via informer" test is the flakiest by far out of all of these tests in the integration test job.

A: So if somebody wanted to go fix this test and figure out why it's flaking, they would have the highest impact for the jobs that we care about, the ones that are merge blocking for Kubernetes. But this file is kind of large, because it's flakiness for every single job that we run — not just the ones that are merge blocking, not just the ones that are release blocking, but all of them.

A: So it could be useful for the whole project. But when it comes time to burn down for the release — and you know, we've got those dedicated flake hunters who really want to figure out why things are moving so slowly — it's always about figuring out what's in here, figuring out why it's flaking, and getting rid of it. So this now lives in a bucket called k8s-metrics, and this bucket now lives in the kubernetes.io GCP organization instead of the google.com organization.

A: I just wanted to call attention to that, because it's the first time we tried migrating something without migrating to a brand new bucket name. This was in the interest of: people toss this URL around all the time and it's linked in a number of issues, and I wanted to make sure it wasn't a dead link. And it is a bucket that is not written to a lot, and it's not necessarily a super high impact, super critical bucket.
A: So I wanted to try playing the dance of: you can delete the bucket and then immediately create a bucket of the same name someplace else. And so we tried that dance out, and it seems to have worked just fine.
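(Roughly, that dance looks something like the following — a sketch with a placeholder destination project, not the exact commands that were run:)

    # Sketch: move a GCS bucket to another project while keeping its name.
    # Bucket names are globally unique, so the old bucket has to be deleted
    # and then recreated under the new project before anyone else claims it.
    gsutil -m cp -r gs://k8s-metrics/ ./backup/              # snapshot the contents
    gsutil rm -r gs://k8s-metrics                            # delete bucket in the old project
    gsutil mb -p example-community-project gs://k8s-metrics  # recreate it in the new project
    gsutil -m cp -r ./backup/* gs://k8s-metrics/             # restore the contents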
A: Other parts of this pipeline would involve Kettle, the thing that actually writes to BigQuery; the BigQuery data itself, which lives in a project called k8s-gubernator; and triage, another tool, which ends up querying that same data set. And I think that's everything.
A: That's a remaining step for 1.22, and maybe we can do it in 1.23. Let's see. So, I think this is something I set up, but Arnaud definitely helped extend it. As you're familiar, prow has autobump jobs that just sort of automatically bump all of the prow component images to the latest image that's available in the staging repos.

A: The same now happens for the infrastructure area, so instances of boskos and ghproxy are automatically bumped via PR. We currently don't have it doing the same thing that we do for prow.k8s.io these days — prow.k8s.io now has PRs that are automatically opened and basically automatically merged, with no human review in that loop — we're still being a little cautious and having humans review these PRs. And then Arnaud recently extended this autobump job to not just bump the build clusters, but also the k8s-infra prow instance that he's been standing up over in the "aaa" cluster over in k8s-infra.

A: I'm talking a lot, I'm going to move a little more quickly. Let's see. There were a number of generic prow quality-of-life things that happened, which I thought were really cool. They were kind of unsolicited and unprompted, which is nice. One of them really only matters to folks like myself — folks who are interested in trying to script or generate reports about what's going on with our prow jobs.
A: So, you know, GKE ends up shoving stuff over to the Google Cloud Monitoring thing, and so I get all sorts of nifty metrics about pods, like the CPU they requested and how much they're actually using and so on, and I can start to slice and dice by things like pod name and labels.

A: So I could use that to start figuring out what all the jobs are that are running, how many pods they're creating, and what the usage of that stuff is. But I could not do that before this. Let's see if this works real quick... cool. So this query here will hopefully pop up anything that has the prow.k8s.io/job label, and it's looking at the integration job. And I don't... oh hey, look, there we go. There are metrics, so I'm seeing metrics for the integration tests.

A: It looks like what I'm seeing is the utilization of the CPU limit, so most of these things peak up super quickly — here, let's change the time scale a bit — to use most of the CPU that they are asking for (go faster, Google), and then they scale back down. So it's kind of a good way to understand: are we actually asking for enough CPU, and then are we taking advantage of the CPU that we're asking for?

A: But what I couldn't do is understand whether things are improving over time on the release that we're currently working on. I should be able to do this now by, let's see, using the base ref label, and then saying that it equals, let's just say, master, for grins. And now I have filtered this down to all of the integration test jobs that are running against master. So, as somebody who likes to create dashboards and pretty graphs and stuff...
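(The labels in play here are the ones prow attaches to the test pods it schedules; a quick sketch of filtering on them with kubectl, assuming the standard prow pod decoration and an illustrative job name — the Cloud Monitoring query in the demo filters on the same label keys:)

    # Sketch: list prow-created pods for one job, restricted to runs against master.
    kubectl get pods -n test-pods \
      -l 'created-by-prow=true,prow.k8s.io/job=pull-kubernetes-integration,prow.k8s.io/refs.base_ref=master'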
A: I don't have a live demo of this, but one of the things that bugs me the most is when I go to Testgrid and I try to figure out... oh, let's see, I'll go to the k8s-infra dashboards and pick a random build job that's failed. So like this canary CSI job or whatever — it appears to have failed.

A: I wonder why. I'll click on it, and let's just pretend that the failure wasn't "the job blatantly doesn't work because there is no cloud build," and that it was something more transient, such that if I just re-ran this it could work. Yes, this demo is not working for me live, on the fly. What I have to do in order to rerun this job is: I have to look at the job name.

A: I have to copy-paste it, I'm going to click on that, I'm going to click on the prow logo to go to prow.k8s.io, I'm going to click on this box that says "search job name," I'm going to paste the job name in there, I'm going to hit enter, and... wow, I don't even see the job in here, so I can't even do this the way I normally would. Let's see if there's another postsubmit I can look at. How about this?

A: I would then come to a job like this, and I would click this rerun button, and that's cool, but I've had to click around like five times and do a bunch of copy-paste and whatnot. And I think it would be so much cooler if I could have done that directly from the page that says "something failed and I don't know why, I would like to try again." Sorry I couldn't get to that one live, but I thought that was really cool.

A: The other two there's no real good demo for, but essentially there are integration tests that run against prow now, that spin up a whole prow deployment on a kind cluster, I think, and then run some integration tests against it, live, and we're testing deck out as part of that now.
A: So we can start to make sure that things show up in deck's UI as they should. And we encountered some sort of racy conditions during failures or interrupts on certain test pods, and so the pod utilities thing — the thing that just sort of magically makes files upload to GCS when your job finishes, fails, breaks, reports, whatever — does so a little more reliably and consistently. And then the last two things: this is mostly Ben, and this is mostly follow-up to our KEP.

A: Just plain make test: we managed to make sure that you no longer have to run unit tests as root in order to get them to pass, which seems like an appropriate thing to do for unit tests, and we also made sure that the unit tests show up on the release-blocking dashboard.

A: We kind of lost them ever so briefly — let's see if I can go find them — because when we transitioned from Bazel to make, we also transitioned from seeing unit test results per package to seeing results per test, and it turns out there are, like, I don't know — what's the number, Ben? It's like 20,000 unit tests, something like that.
D: I think it might have even been higher than that. There are...

A: A lot. Anyway, so if I zoom out at all and show this — no, Testgrid does not take well to zooming out — anyway, there are a bunch of columns and stuff now, which Safari is rendering poorly for you all. But I can actually...
D: Yeah, you probably want to close that. This is actually a good future thing for us to work on at some point. I'm not sure if this is the case now, but when Testgrid is fully migrated to the open source updater, one of us can send a pull request to rewrite the Testgrid updater to map test cases to JUnit appropriately. Testgrid understands hierarchical test cases, just like JUnit test suites, but right now each case is a suite.

D: So instead of collapsing — like, each package gets a suite and then these are the tests within it — you get a row for every single test case, which is a lot.
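(For context, JUnit XML already carries the package-to-test-case hierarchy being described; a small illustrative example, not output from the actual jobs — a viewer that understands the hierarchy could collapse each package into a single row:)

    <testsuites>
      <testsuite name="k8s.io/kubernetes/pkg/kubelet" tests="2" failures="1">
        <testcase name="TestSyncPod" time="0.42"/>
        <testcase name="TestHandlePodCleanups" time="0.10">
          <failure message="unexpected error">...</failure>
        </testcase>
      </testsuite>
    </testsuites>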
A: That would be awesome. So, since I know I didn't have time to cover everything — I know I missed a lot of stuff that happened — I did not want to name names as far as who landed everything here, even though I did call some folks out myself. But I just wanted to take a moment to thank everybody who has contributed to making Kubernetes easier to contribute to, and to making it easier to run and write tests and understand what they're doing. So thank you so much for all that you've done.

D: I'd like to call out somebody specifically, though — since you did call out one person, since you called out me, I'm gonna turn around and call out Claudio, who is in this call, who did a lot of work on migrating our images, which is in the first point; and Eddie, who helped with getting all the tests to actually pass when you run make test. Sadly, we have one more to fix now, I swear. Soon you will be able to run make test on your machine and all the unit tests will actually just pass.

D: You won't need to be root, nothing like that. Eddie fixed the last one for us and then another one snuck in, so thanks for that.

A: Yes, thank you for shouting that out, you're right. I keep forgetting — we're so close to finishing migrating all the CI images, and I can't even begin to fathom the amount of heavy lifting that you have done in getting us to this point.

A: So thanks a bunch, and Eddie, thanks so much for the unit test work. It baffles my mind that we've called them unit tests for so long, given the way they've behaved and what they have required. So it's good to make them unit tests again.
B: Yeah! In one of the previous meetings, I gave a small demo on support for granular approval within the approve plugin, and... okay, yeah, I can share.

B: Yeah, as I was just saying, in one of the previous SIG Testing meetings I gave a demo on an approve plugin that has granular approval support. What that means is, instead of doing blanket approvals, you can choose to approve individual files, or a group of files, and so on.

B: There were two feedback items that came out of that demo. One was that the original demo had used a different plugin altogether, called approve2, instead of having that logic behind the same approve plugin.

B: So one of the feedback items was to not have a different plugin called approve2 and to just have it behind approve, maybe behind a feature flag, so that it's easier to transition. That is now addressed: we now have a granular approval option in the approve plugin's configuration.
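(A minimal sketch of what opting a single repo into that might look like in the plugin configuration — the option key below is a placeholder, since the exact name wasn't spelled out here:)

    # plugins.yaml (sketch): enable granular approval for one repo first,
    # leaving other repos on the standard approve behavior.
    approve:
      - repos:
          - kubernetes/test-infra
        granular_approval: true   # hypothetical option name
      - repos:
          - kubernetes
        # default (blanket) approval behavior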
B: So if that is set to true, your approve plugin's behavior will switch to acting as granular approval instead of the standard approve plugin. And the second feedback item that was mentioned was about the approval notifier message: we used to only see how many files are approved, how many are still pending approval, and then just the owners of the associated unapproved files.

B: One of the feedback items was that it would be nice if you could actually see which files are approved and which files are not. So this is an example screenshot of how the new approval message looks, and as you can see, it shows you collapsible sections for each of the folders associated with the changeset, and you can expand those sections and then see which files are approved and which files are not approved.

B: In the comment you can see that the folder that's unapproved has no files that are struck through, or whatever. So, with those two feedback items addressed, I think the PR for the approval plugin is probably ready for review.

B: It was a huge PR, like 7,000 lines or something, but now, with one of the plugin configurations gone, it came down to like 5,000 lines. It's still huge, yeah. So any feedback on the PR, and PR reviews, would be awesome, and I'll start working on it as I receive feedback. And the next item for this is definitely: after all the items are addressed and the PR is reviewed, what would the rollout strategy look like?

B: How should we roll out this new configuration, this new granular approval for the approve plugin? I'm looking for suggestions, but I was thinking maybe enable it on one repo right now — maybe the test-infra repo — and then look at actual feedback on how people are using it, or any other items that need to be addressed, and then start rolling it out to more repos.
D: I will opt kind into this immediately, as soon as possible. I guess I need to go review this; I apologize, I'm definitely a bit behind on that of late.

D: I don't know if there's any other grander plan we should have there; test-infra sounds reasonable. Maybe — because we've mostly moved to automated deployments — I might be somewhat inclined to suggest that we toggle it first on another repo, so that we can continue to avoid backing ourselves into a corner.

D: We'd have to use, like, admin powers to fix the repo that runs the infra itself, because we dogfood like that. But I think that, maybe after just seeing it work at all anywhere else, it probably makes sense to test it on test-infra. That is still where we usually test things like this, and where the right set of people, the people likely to fix it, will notice how it behaves.
A: I think I largely agree with that. I think I personally am going to feel a lot more comfortable if Alvaro or Cole, or somebody who's deeper into prow than I am, can review it from kind of a token consumption perspective, a performance perspective — they understand the operational characteristics of prow a lot better than I do. I can definitely help with the rollout side.

A: I think what Ben is talking about sounds great: kind as a repo sounds cool, test-infra as a high-traffic repo that has a variety of SIGs contributing sounds cool. I think kubernetes/community would be the final repo I would want to see before I considered throwing it at kubernetes/kubernetes or enabling it org-wide.

A: I saw Eddie's hand go up — so, I was just saying, in terms of rolling out to repos: kind first sounds cool, test-infra second sounds cool, and kubernetes/community third, which sounds like a useful cross-section to me. And then, I guess, I'm uncertain whether I would want to look at enabling it org-wide or just enabling it specifically for kubernetes/kubernetes before rolling it out further.

A: Having said those words, Eddie has his hand up.
F: Yuvaraj, is it okay if I start asking you a bunch of questions about your development process and cycle for working on this? Because I just started working on two different prow plugins, and I've been wondering how I'm going to test them and get them to work like you did.

B: Yeah, sure.

F: Awesome, thank you. So I have finally finished adding mutation support to that client, and I added my mutation API request, and so I have a client now that can transfer GitHub issues, which makes me very happy. And I have just started trying to look at the prow plugin SDK infrastructure. So, yeah, I guess the question is: do I need to spin up my own prow cluster to dev against? I imagine I can get kind of far with unit tests, but I'd love to know more about your experience there.
B: Yeah, so unit tests have definitely helped for small things, but I did actually set up a dev prow instance, and a test org and a test repo, and I did all my testing against that. For example, I have a prow-testing org with a test repo in it, and I have a prow cluster running on GCP that has been configured to interact with that org, and I use this as my playground to test all the work I did.

B: There is a doc somewhere in GitHub — I can find that for you — which has step-by-step instructions for how you can set up your own prow instance on GCP, and I just followed that. After that, it's just configuring the deployments to use your own source code, so that it's built from that, and from there I was able to test it.

B: I don't know how beneficial it would be if multiple people were working on the same plugin against the same prow; that could potentially have issues, so it depends.
A: What Yuvaraj is describing is what most of the folks I know who work on prow use. I think there might be a shared org that some of them use — I know Erick Fejta has the fejtaverse org and does a bunch of stuff against that — but, like, as much as you can, integration test and unit test, and run prow on kind and do stuff that way.

A: Eventually you run into the delta between what the GitHub docs say the API does and what the API actually does live, and there's just... I have seen no effective substitute for having a GitHub org and some repos, and a sock-puppet user or two, or some friends who are willing to help you out, and pointing a prow at that to play against.
D: I do think we can actually improve this a bit for folks here, but I think that a totally shared one has other problems: you need to give it a GitHub token, and we probably don't want to have an official GitHub token that people can effectively take arbitrary actions with, or that sort of thing — you could run pretty much all of this yourself.

D: I think probably the biggest blocker for people running this themselves is that you do need a public endpoint, because plugins in particular are webhook based. I think we can either improve on helping people replay webhooks that we know are real, or point to one of the options like inlets that lets you have a public endpoint, an ingress, on a cluster that isn't necessarily cloud hosted in public.
D: I also think Aaron knows about this: we should probably surface, to people working on things like prow, the gubernator test instance endpoint. There's an extremely useful side tool there that allows you to browse the webhooks coming into our projects. It summarizes them quite a bit, but it also gives you the raw webhooks, so you can go back and see historical webhooks from our real instance. But it's kind of hidden.

A: It is kind of hidden, and I want to kill that App Engine app with fire. I would... if we could, maybe we just keep that part of the app.
A: The thing it is convenient for is... I forget what level of auth we have the webhook stuff behind, is my thing. I don't know if it's a public endpoint that literally anybody can hit and see all the webhooks that have been sent to us, or if there is some level of auth against your email address or which team you're a member of. Ben's shaking his head — no, no, no — so it's public. I just... I would rather get out of the business of managing stuff that's not running on Kubernetes.

D: Well, I think that's an ask we can put out, for someone to... I don't know that you would even really need, for example, App Engine access to do this. The APIs for the data store are really, really simple. It would probably mostly be writing the code to put it into some other, probably more complex, data store across projects.
A: Yeah, so I'm going to answer some of your questions. I also kicked the idea around a little bit with my GitHub management team hat on — like, I don't know if it has to be an official Kubernetes thing or anything — but I think, since there are a number of...

A: We have a variety of playground orgs. I don't know if there's benefit in just having one playground for prow developers to play against, so that we're all trying stuff in one place, or if it would turn into a crazy bot fight with multiple different prow bots trying to manage multiple different repos at the same time.

D: Also, for a lot of the plugins it would be a bit of a permissions issue — that's fair — because they take direct action on the repo with write permission, which we probably don't want to be granting all over the place, even in the playground, just because of the official association and people on the internet misbehaving.
A: It's gotten a lot better, especially with your own dummy org and stuff — prow can sort of automatically manage its webhooks and its tokens a lot more nicely than it used to be able to. I don't know, maybe Yuvaraj has a more recent onboarding experience with standing one up.

B: Yeah, it is documented, and I followed that doc just blindly and it helped, although I would say that the doc has two sections in it. One is called the tackle deployment, which is supposed to do it all for you, and the other is the manual one. The tackle one didn't work for me; I had to do everything manually, one by one.
D: Yeah, and the other thing is that the tackle one is GCP specific: it takes advantage of things like "just create a GKE cluster" and then grabbing the kubeconfigs from the clusters, that sort of thing, to set up the pod — the job scheduling credentials and so on. Tackle is not generalized, and I'm not sure how doable that is.

F: Do you have a link to that App Engine app with the webhook debug info and all that?
A: I'll dig it up, and I will post it in Slack — or maybe Ben can come up with it faster than I can.

A: Okay, with our remaining 15 minutes, I thought I'd... try it then. Sorry, okay. Also, hopefully the noise is okay — there's a lawn mower or some kind of industrial-strength shredder going on outside. I rambled a bunch last time.
A: So, some of the potential things we could work on for 1.23 — I kind of wanted to open the floor and ask y'all: what are you interested in accomplishing over the next four months?

A: Three months, ish. You know, historically we used to think in terms of quarters, because Kubernetes releases were released quarterly, and for some of us that lines up well with internal planning and scheduling and whatnot. But I've found it easier to tie it to the Kubernetes release lifecycle, because that helps us plan around what changes we can do right now, before the release cycle really starts up, and then what changes we really want to land before code freeze starts to roll around — things of that nature.
F: Well, the two plans that I have: I have the one plugin I'm working on, to transfer issues, and then I'd like to do one for... I want to change the release-notes plugin to be able to modify the top-level comment, so I can add a release note myself instead of having to ask someone before I approve, like, "hey, could you please add a release note?"

A: Yeah, that sounds super cool. The release-notes plugin one is kind of near and dear to my heart, just because the ability to edit issue descriptions for the express purpose of release notes is kind of the remaining major reason that we still have so many people with direct write access to the kubernetes repo, and I wish it weren't that way.
A: Well, like — yes, today I didn't have to do it against kubernetes, I did it against test-infra, but today, when GitHub was not being very happy, it was really convenient that I could manually add the labels when the slash commands didn't work. Generally, though, the slash commands are great for the labels.

A: The labels that we specifically don't have command support for have that extra barrier of "only certain people should be authorized to add or remove these labels," and Cole Wagner had an idea kicking around a year or two ago — maybe it was longer than that — about creating a generalized plugin framework where we could tie authorization for different commands to the team membership of people.
A: So, that exists today for the milestone plugin, which is how the whole milestone maintainers team came to be, and there are configuration things where other repos have other teams hard-coded by their numeric ID that are authorized to use it. But what would be super cool is to kind of pull that back and turn it into: this is a plugin, and these are the commands that are authorized to this GitHub team — and do it by, I think it's called the slug or something, where it's the org name and the team name. That's a far more readable way of doing it, and it would be cool if that was just something freely available to all plugins, versus hard-coded in one. But now I also want a pony and a rainbow too, so...
D: I'd just add, super quickly, that as opposed to the release notes one, I actually think the issue migration one is also going to be very well received. Beyond something like moving issues from one repo to the other because you're moving repos, I see it pretty often with documentation issues that should have been in the website repo, and right now very few people can actually move them, and there's a lot of "close the original issue, file the exact same issue over there."

A: So, prow migration. I'm not sure that I have anything to say that I haven't said before, because I haven't written anything new down. But basically, if I want to dream real big, I have this open dream of having prow.k8s.io point to a prow instance that's running over in k8s-infra land, and I want it to be the prow instance that Arnaud has been working to stand up, which we're calling the k8s-infra prow right now.
A: Our plan looks like this: stand up this k8s-infra prow thing, and use it to fully manage the kubernetes/k8s.io repo. And then, when we feel confident that we've got it operational and it's able to handle all the things, then we could look at expanding it to other repos potentially, or we could continue to iterate on it, or, like, hey — let's get more people than Arnaud and myself to understand what this thing is, how it works, how it can break, how to fix it when it breaks, and, you know, create something that looks a little more like a playbook for people to support it best-effort via some kind of on-call alias or something.

A: The other option — so then it kind of looks like: do we look at migrating things over to this new instance, or do we attempt to shut it all down, pick up the prow instance that's currently running over in google.com, and redeploy it?
A: That's maybe the riskier approach. I think it's also the approach that people will look to and say: why are we spending all this time doing migration and zero-downtime stuff, we could just flip a switch. And I think somebody from OpenShift said they tried this and it was about a two or three day tire fire.

A: But if I just bottom-line it, I think it's somewhere in the middle ground there. I would love to say that kubernetes/kubernetes — the repo that receives by far and away the largest amount of traffic, triggers the largest number of jobs, and is the thing that accounts for the most spend — that ought to be... those jobs ought to be handled by a control plane that is under community management.
D: I think the odds are pretty high that in any migration where you stop using one prow to control a repo and you move to another prow, there are going to be some hiccups. So I think it's better to just go ahead and plan it as a hard cutover, with a known window where maybe things are interrupted.

D: Let's try to avoid, like, code freeze. But before that, I don't think we should be talking about doing that when we haven't moved the things that don't remotely require it: all of the periodics and postsubmits for nearly all repos — unless we have reporting postsubmits enabled somewhere I don't know about — can be moved between prows with no real interruption, and that will help us make sure that we've built up capacity and shaken out the remaining quirks around permissions and such.
A: We've got to find some way of actually budgeting what we're spending on CI — in terms of CI costs, we need to better understand what we're spending our money on and why. Because right now, money that is spent for CI is spent in two places: it's either spent running pods on a Kubernetes cluster, or it is spent spinning up external clusters.

A: We got a real quick taste of this with scalability costs, because right now the 5k-node scalability jobs are pinned to a single project, and so, as we've been canarying jobs over to k8s-infra, we noticed, hey, there's this one GCP project that has a lot more spend than usual — so we were able to identify that cost.

A: So it could be that we do something like, you know, make a bunch of boskos pools and shard out projects that way, but we kind of need that broader policy discussion of: how are we going to budget, and enforce budgeting?

A: If we're going to run out of money before we've finished migrating, I need to prioritize spending money on the things that are most important to the project, and right now I just have this sneaking suspicion that if I were to copy-paste all of the jobs that run on prow.k8s.io over, not...
E: I just want to quickly step in before we finish and bring some attention to this information for Ben. Basically, I added that item because I don't think we should try to figure out now how we want to do the migration, but rather identify what the blockers are for the migration. Because I know some pieces basically integrate with internal Google tools — recently I realized, I think, that it needs to talk to... basically, it creates a bucket inside Google. So why I'm adding this is:

E: Basically, I would like to get a list of blockers from the Google side, so we can identify, one, what the maintenance window would be if we decide to do a flip-the-switch, or how we do the transition. The problem is that neither me nor Aaron can easily identify what the blockers are for that migration, because we are not on the operations side of prow.k8s.io.
A: Yeah, I'm super on board with that. I think the staging instance that you stood up — the k8s-infra prow instance — has been all about a kind of fact-finding mission for: what are all of the credentials and dependencies that are necessary to get something stood up, which of these can we, you know, copy-paste or replicate over to community infrastructure, and what stuff can we not do that for, so we can identify those dependencies.

A: I think — maybe, yeah, I'm leapfrogging ahead — once we've got all that worked out, then, policy-wise, are we okay? I think there's also the question of: do we have people who can run and support it the way that we currently have people who run and support prow.k8s.io? Right now there is a team of Googlers.

A: They run it a certain way that has parity with a number of other prow instances that they manage. Do we maintain that parity? Do we change it at all? So my ideal vision, I guess, is that at a future meeting we get representatives from the test-infra on-call here, and we walk through a document that describes, you know, what on-call is and how it is done, and we make sure that it's still doable with prow over on community infra. So it's a similar list of blockers to the one I think you're talking about.
A: And I would totally push hard for that to be next week — I'm happy for it to be next, sorry, next meeting — but I am going to be on a beach, definitely not in this meeting, two weeks from now. So if you all want to have that conversation without me, that would be super cool. If it's a conversation that I need to drive, it'll have to be at least two meetings from now.

A: I super want to understand this, and I wanted to have it happen well before code freeze. I want us to have enough time to flip, have some time with it operationally, and then, if we need to flip back, do that well in advance of all of the usual capacity fun that we encounter as we get closer to code freeze and stuff. I don't enjoy having to chase our tails over too many things changing at once.

A: So, okay, thank you all for showing up today. It's super cool to see everybody; thanks a bunch for all that you do. Thank you for your time, and I look forward to seeing a recording of you all at this meeting in two weeks' time, and seeing you all online. All right, happy Tuesday everybody, bye-bye, cheers.