From YouTube: Kubernetes SIG Testing - 2021-02-23
A
Okay, hello everyone, welcome to the Kubernetes SIG Testing bi-weekly meeting. This meeting is covered by the CNCF code of conduct; the very short version of that is: be good to each other. This meeting will be recorded and uploaded to YouTube publicly.
B
I click share screen and it says: host disabled attendee screen sharing. Yeah, Ben, you gotta...
B
Okay, so hi guys, this is Chao from Google. I believe, yeah, this is my first time presenting here; nice to meet you guys. So the topic I'm going to present today is a proposal to keep continuously deploying the k8s Prow instead of manually deploying every day by the Prow on-call. The doc is here, the link is here; I already opened it on this side.
B
Can you guys see my screen? Yeah, cool, yeah. So I will try to be quick, because there's really not too much technical difficulty here; it's more like a decision. Basically, the motivation for doing this is that currently there are like six steps for the on-call to go through manually every working day to deploy a Prow image, I mean a set of new Prow images, and this is pretty time consuming. And it has been proven in the past several months that Prow is pretty reliable, in the sense that, as Alvaro pointed out in the design doc, most of the errors were caught by OpenShift's Prow, because they are already doing this continuous deployment of Prow.
B
This is nice but not necessary. I think the ideal model for Prow should be: since it's producing k8s, it should be continuously deployed, and our monitoring and alerting system should be good enough to capture the errors, so humans don't need to intervene.
B
That's basically the motivation for doing this, and the impact of doing this is: it can save the on-call half an hour every day, which aggregates to three hours per week. What we get is a more actively updating Prow, so we can always get the latest Prow and the latest Prow features and bug fixes in k8s. What we're gonna lose is probably a little bit of stability, because we are not canaried by OpenShift Prow, and the way we are doing it...
B
What we need to be careful about is like two things. One is rollback, so the rollback is also pretty easy: we just need to put a hold on the auto-bump PR, so future auto-bumps will not get merged automatically by Tide, and the on-call will go over the manually created rollback PR. And for the rollback, if Prow is badly affected such that it will not propagate the changes from the rollback PR itself,
B
the on-call needs to go over and manually apply the rollback PR. And there is another thing we need to be cautious about, which is breaking changes in Prow. So several things we need to care about: first of all, we should be disciplined about breaking changes going into Prow; like, if somebody's PR is going to break Prow right away, that's probably not very acceptable, or, if it's not avoidable, then we should at least inform every possible stakeholder to make sure they're aware of the change.
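For reference, the hold described here is Prow's ordinary hold plugin; a minimal sketch of the rollback flow, left as comments on the open auto-bump PR:

```
# Comment on the open auto-bump PR to apply the do-not-merge/hold label,
# so Tide stops merging further bumps automatically:
/hold

# Merge (or manually apply) the separately created rollback PR, then release
# the hold once Prow is healthy again:
/hold cancel
```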
A
Sorry, I was just going to say, I think the versioning discussion is maybe a little bit of a separate topic, and there's probably actually quite a bit to talk about already with, like, should we be continuously deploying. One thing I'd... sorry, I had a comment earlier; I need to scroll back in the doc.
A
So if Tide is auto-merging PRs that bump Prow on one-hour intervals, how is the on-call going to know what a known-good version is? Are they just going to roll back like an entire day? One of the things about the current model is that, since we don't know when breaking changes are landing and that sort of thing, somebody's bumping once a day; if it breaks, we have an entire day, at least, of running the previous version, so we know that one's fine and we can roll back.
D
Yeah, so in OpenShift we currently have a six-hour interval, I believe, so I can't give a blanket answer on how, in the past, we found out the last good version. But, generally speaking, I think the number of changes in Prow at this point is so little that usually it's pretty easy to figure out which bump touched the area that's now not working anymore.
C
I guess I have a similar comment. Thinking about it right now, the auto-bump PR also bumps all of the job images as well. This could avoid some confusion we ran into, for example, where people didn't realize that changes to kubetest don't take effect until the next bump PR merges; so it could cause their changes to go into effect faster, which might be good. But many of the jobs that the Kubernetes community cares about take anywhere from, you know, 30 minutes to two hours to run to completion, so in the event that either a job change or a Prow image change caused those things to stop behaving correctly, I feel like it would be harder to track it down; having a day as a boundary is useful.
C
But so I guess the compromise I would propose is: do the auto-merging stuff, but kind of keep it at the current cadence, and then identify how we can push forward or make more visible the versions that have changed and when. Like, I'm thinking you could have Prow attach a label to the jobs that it's scheduling to say it was created by this version of Prow, and we could augment pod-utils and stuff to make sure that, like, which version of Prow shows up in TestGrid, or stuff like that.
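A hedged sketch of what that could look like on a scheduled test pod; the version label here is hypothetical (it is the proposal being discussed, not something Prow sets today), while the other two labels are ones Prow already applies:

```yaml
metadata:
  labels:
    created-by-prow: "true"                     # existing label set by Prow
    prow.k8s.io/job: pull-test-infra-unit-test  # existing label: job name (example job)
    prow.k8s.io/prow-version: v20210223-abc123  # hypothetical: Prow version that scheduled this job
```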
A
I would also suggest that, possibly, instead, before doing this, it might make sense to split out the auto-bump process between the test images and Prow, and we can hand over control over approving test image changes to the broader SIG Testing community and let the Prow on-call group control just the Prow bumps, and put those on different update cycles.
A
I think coupling them was just convenient, and also came from a time where people worked on Prow and those images, like kubekins, which is not very frequent today. It probably makes more sense to just go ahead and set up a totally separate auto-update process for the test images.
A
Well, and it'll make reverting a little bit easier, because then we can still just revert like the merge commit for the auto-bump. Yes, I think, without the...
B
And, to be honest, in my opinion, the test images can be auto-bumped on a different cadence, and they actually can be merged 24/7, in my opinion, as well.
C
So while this is all true, I still personally would feel better if we could get some kind of annotation or label on ProwJobs, so we knew which version of Prow was responsible for creating them, before we went straight to an hourly, or as-quickly-as-possible, cadence.
C
So, like again, I see nothing wrong with automating this at some kind of slower cadence to start with and then gradually turning it up, because I just have concerns that if we go straight to as fast as possible, it might be too much instability.
C
Yeah, that's good to know.
C
I think it should be... I think it should be somebody like the SIG Testing chairs. I kind of feel like somebody from SIG Release needs to feel comfortable too; I think the risk we're expressing about troubleshooting, instability and stuff would come from them as well. So...
A
I'd also like to see an explicit approval from someone that is in prow-approvers, or whoever it is; I don't actually recall who all is currently in that, but I know, like, for example, Alvaro or Cole, yeah.
C
Yeah, Arno, maybe you'd be an appropriate person from SIG Release, as one of the tech leads, if you want.
C
Okay, well, like, if you are uncomfortable with it, go chat amongst your fellow chairs and tech leads, but I feel like you've gotten comfortable enough with some of the mechanics of deploying Prow over in k8s-infra; it would be good to get your opinion at least.
E
Okay, basically, I think we should involve the tech leads from SIG Release, like Sascha and Dan, and also...
C
Okay, if I think of it in terms of, like, a KEP, we typically want to pin it down to a name. We don't have to decide the name here; I'm just suggesting, like, talk amongst yourselves. I think comments from all of you are good, but you should choose one person to be designated an approver for this.
C
Sorry, we kind of jumped in in the middle of you presenting this, Chao. Did you have any more to say?
C
Good, stop sharing. Thank you.
C
I guess one other comment I have would be: I'm not sure if the Slack reporter supports this level of granularity today, but it would be cool to continue to get pings in #testing-ops when auto-bumps have deployed, so that... let's see, I don't think it...
C
It might not be like a link to the... it'd be cool if it was a link to the PR, but I think at least setting it up so that the deploy job postsubmit posts to #testing-ops could give people who are already used to checking that Slack channel some visibility into when things have changed. I know, ideally, we just point them at TestGrid and say that that's what happened, but since people already check Slack, I think it would be cool to keep that.
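For reference, Prow's Crier Slack reporter can be configured on an individual job, which is roughly the granularity being asked about here; a hedged sketch (the job name and channel are assumptions):

```yaml
postsubmits:
  kubernetes/test-infra:
  - name: post-test-infra-deploy-prow   # assumed name for the Prow deploy postsubmit
    reporter_config:
      slack:
        channel: testing-ops            # ping #testing-ops when the deploy job finishes
        job_states_to_report:
        - success
        - failure
    # spec: ... (the actual deploy job spec would be unchanged; elided here)
```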
C
Welcome. Thanks a bunch for presenting; I'm pretty excited by it. Back to you, Ben.
A
Yeah, I also want to add that I think the versioning discussion is also something worth having, but we have a number of topics today; we might want to circle back to that. It also, I think, has had a little bit less thought and comments so far; the rest of the doc is quite detailed. Okay, so next up, let's try something different. We have, just this morning, gotten GitHub Discussions enabled. I'm going to start sharing my screen for this.
A
Or not. Okay, I'd rather not quit and reopen Zoom. Aaron, do you think you can screen share this? Yeah.
C
Sure, what do you want? Just tell me where to go; I'll be here.
A
This is in the chat, I guess. Okay, there's not a whole lot to show; I just wanted to have it actually in the video while we were talking about it. Hey, can you make me co-host so I can share?
A
So, okay, so GitHub now has a Discussions feature. It's not on by default in repos; it's something you have to enable, and they're just starting to roll this out more. But we've been talking about sort of differentiating the meeting for our social and triage and that sort of thing, versus just decision making. Right now, in theory, that goes to the mailing list, or a KEP, or issues, and so on.
A
I think we don't get a lot of interaction on the mailing list for various reasons, and not everything needs to go all the way to being a KEP; that's a bit overkill for a lot of stuff, like when we're discussing what we should do about the images on Docker Hub. So we've been talking, or I in particular have been talking, to, say, Contributor Experience about what we can do with this, and also in the SIG chairs and tech leads monthly meeting.
A
One of the suggestions we got is to try GitHub Discussions. So we've just requested this repo, kubernetes sig-testing; it went through this morning and I enabled Discussions on it. We've got a test discussion here. I believe this has threading; it has comment voting; it has reactions, so you can just plus-one something instead of emailing everyone with your plus one. I'm hoping we can give this a shot for things like, do we want to auto-bump Prow. They're still useful to talk about in the meeting, perhaps, but for anywhere that you want to record a permanent discussion that doesn't rise to the level of being a KEP, I'd like to give this a shot.
A
It's also in contrast to Slack, where we might be doing real-time debugging or something, or "hey, can someone take a look at this?". This is a good place for something that you want people in all time zones to see, to have a permanent record that you can link to, instead of telling people "oh, go scroll back through Slack looking for this" or trying to get everyone on the mailing list.
A
So please take a look at this issue, try it out. We are pretty new to even knowing what the full feature set of this tool is. We may find that this is actually not great and just decide, this is fine, we'll go back to just using the mailing list and Slack and so on.
A
Via a pull request? I don't think so; I don't think they close. I think if you have like a tracking bug, you're still going to want to use an issue, but...
A
So we have a Slack channel, and the Slack channel is great for real-time discussions.
A
Like, say we have a bug happening with our infrastructure right now and we need to figure out what's going on. It's not very friendly for capturing things across time zones, or for anything more than that. You can technically link to things in Slack, and we do actually have history unlocked, but it's not public, so you have to actually join the Slack to see any of it, and it's easy to get lost in all of the ongoing real-time chatter. So part of the hope here is...
A
This is a totally separate repo. If we think this tool works well, you should be able to subscribe to this repo however you prefer to handle GitHub's notifications, email or so on, and this can just be a place where we're having sort of discussions that are not purely in the moment; things that might have gone to the mailing list before, I think, is probably the closest analogue, but it has a couple of features that may make it a little bit more inviting. Everyone with GitHub has access to this, and you kind of by definition do to work in Kubernetes, and it has things like, you know, plus-one-ing a comment without notifying everyone, which is often pretty nice.
H
I think I missed the entry point to get here. I clicked on the links, but how do you initiate a discussion, and what's the index of them?
A
Yeah, that's definitely one of the interesting features; we'll have to see if this actually works well or not, but I'm hoping we can try it. One of the things we're doing is we're just trying it in our SIG and reporting back; I think there may be one or two other SIGs looking to try this as well. If this sort of thing is good, I believe Steering even is looking to kind of recommend more asynchronous communication within SIGs, so this may be one way that we do it.
A
We don't know yet; we literally just got this this morning, and you need to have a repo that you can edit the settings on to even start toying with this. So this is kind of our sandbox for this; we have, like, just the sig-testing repo. We'll probably also potentially use this to source some other things, but for now this should just be a place where we can experiment with how well this works for us.
A
I think the people with repo admin have to create the topics.
A
Yeah, I think we'll need to put thought into that. I didn't get a chance to do any of that, because, I mean, I enabled the Discussions feature before this call; the repo was approved like last night slash this morning, so we just barely squeaked it in in time to bring it up for this meeting.
A
This is definitely not looking to immediately shift anything serious here right now. I'd like everybody to take a look, and, I think, get some feedback on what we can do, like setting up these categories, before we maybe try having some useful discussion here. Like the Prow auto-bump would probably have been a pretty good one to capture here, if we don't feel we need a KEP for that. But yeah, I definitely think it's going to be very much on a trial basis.
A
It's entirely possible that this doesn't actually pan out and that we need to find a different solution. Kubernetes also has a Discourse board and, I think, Stack Overflow and some other things, so we have some options to look into, but I think so far they haven't panned out that well for upstream developer discussion. And I think one of the things that may make this one possible to work is that it has a little bit lower barrier to entry, because it's just your GitHub account.
A
This is good, though; this is exactly what we're looking for. Go mess around with this; we'll figure it out as we go.
C
Oh yeah, since I'm here, let me find the board. Do I have it handily linked? I do not. Okay, you have it.
C
So yeah, I guess I love this, just under the general "let's try something different", to kind of carry forward what we talked about last week, where we want this to be kind of more about optimizing the high-bandwidth time that we have here and less about being like broadcasts or whatever. So one way that other SIGs have done that is to sort of set up a specific meeting for issue triage, but I thought it might be handy to try doing that within this meeting.
C
Basically, Ben and I are kind of manually throwing issues on here if we feel like it's kind of relevant to what we're talking about or what we're doing. Usually that means the issues are going to be milestoned in the current release milestone, 1.21, and it's just issues, not pull requests, because if you're doing a pull request, ideally you've linked to it in the issue, and we can actually see that from here, right, to see what pull requests have linked to a given issue.
C
They have the help-wanted label too, and I think they are good for, maybe not your first contribution to the project, but I think certainly anybody showing up to this meeting could help out with these issues, and I think even your second or third contribution could probably be closing out or helping with one of these issues.
C
They all have a... well, sorry, most of them have a priority label assigned, so it's not like we're trying to put up unimportant things here; these are important and would provide real value to us. The backlog, I can't necessarily say that it's sorted, but this would be like, we have agreed that we want to work on this for sure, and so if you are a more knowledgeable contributor, this would be a really great thing to take on. In progress would be, somebody's actually taken on the issue and is working it. We could try playing around with using the /triage accepted command and say that, like, everything... or we could try using the /lifecycle active command and then say that everything in this column has to be active. I don't want to impose too much strict process here, but sometimes making things more consistent might lead us down a path of more automation to automatically populate these columns for us. Blocked: you're stuck, you need help. And then Done: it's stuff that we have put on this board and closed out.
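For reference, the two commands mentioned are the usual Prow label commands, left as comments on the issue:

```
/triage accepted    # adds the triage/accepted label
/lifecycle active   # adds the lifecycle/active label (someone is actively working on it)
```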
C
So the board, if I remember correctly, has its own sorting based on how humans manually drag cards around. An alternative way to view what's going on is to use a GitHub issue query: you can use the project field or whatever to query for everything that's in a given project, or not in a given project, and then I have this sorted by most recently updated.
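A hedged sketch of that kind of GitHub issue search (the label, milestone, and project number are illustrative):

```
org:kubernetes is:issue is:open label:sig/testing milestone:v1.21 sort:updated-desc
org:kubernetes is:issue is:open label:sig/testing -project:kubernetes/2 sort:updated-desc
```

The first lists milestoned SIG Testing issues by most recent activity; the second adds a project qualifier to find issues that are not yet on a given board.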
C
So we could also sort of go through and take a look at everything that's kind of been active recently and make sure that we've got the questions answered that we need to, you know; or we could sort it the other way and say like, hey, this stuff hasn't been updated in a while, is it still relevant? Does it need to be bumped up in freshness, whatever. So I don't...
C
I sometimes feel like people are adding stuff to the agenda to talk about a specific item. You know, one way we could sort of save you having to jump into a Google doc or whatever is to make sure that it's on this board, and then make sure, you know, it gets thrown in Blocked,
C
for example, if you feel like you're blocked, and we could start with everything that people are blocked on and see if we can unblock you. This project board, like all project boards, or almost all project boards, within the Kubernetes project, any org member can write to, which means you can move issues around, you can add issues to the board, you can take issues off the board. What you can't do is change the columns or rearrange the columns; that is restricted to the SIG Testing leads.
H
We also have a board that we've used in Conformance; I just dropped a link. We use the generation of the tickets to have both the pre-approval for the work that we're doing and discussion; a slightly different flow. But we have found the board useful, though not so much for the actual meeting as for preparing for the meeting and having the short list of things we'd like to get our eyes on.
A
Yeah, that sounds like an interesting approach. I do think there's quite a bit of variation in how SIGs do this, but it seems like a potentially valuable use of high-bandwidth time. One of the things that I've been trying to think about is how we draw attention to things that might be interesting for people to work on. We do have the help-wanted labels and so on, but I've noticed that other SIGs kind of do a bit more of this in their meetings, bringing up like, looks like we're, you know, missing getting to these issues.
A
I'm pretty plus-one on going through triage in this meeting, but I think we might need to think a little bit about how much we time-box it and whether it happens at the beginning or at the end, and how we reconcile that with many of the issues that people are adding to the agenda being essentially zeroing in on specific ones already.
C
So how about this, and we can actually answer this in a discussion, I guess, but just to throw it out here: SIG Architecture, a while ago, sort of set the precedent that you can't show up to SIG Architecture and start talking about a given issue
C
if you haven't tried to talk about it asynchronously first, which for them meant you have to send something to the mailing list, and only then is it appropriate to take up high-bandwidth time, because it's been proven not to be resolvable otherwise. So we could say something similar, like, hey, start a GitHub discussion about this; or we could say, like, hey, it has to be an issue and it has to be on our board.
H
I think this is good in several ways. One, the recording of the discussion during the meeting can happen on that targeted asynchronous discussion thread, rather than taking notes in the Google doc.
A
Yeah, I think it's a good idea to have some policy around trying to take things asynchronous before raising them to the limited high-bandwidth synchronous time we have.
A
Speaking of which, since we don't have that rule yet, and we do have a few more...
A
Okay, sounds good. It looks like Hippie's up next with some pr-creator help.
H
I was looking for the mute, yeah. I've got an issue there and I've been looking at it for a while. We have, funny enough, part of the auto-bump script, which calls a golang program, which in turn goes around and calls the script, which calls another golang binary, and I've tracked it down to just the smallest component that I'm looking for help with, for the WG K8s Infra audit scripts.
H
I am unable to get that binary, in a standalone way, to take the contents of my local changes; and we're not even using Prow, I'm just trying to use it in my local shell to get it to successfully create a pull request. I worked with Chao for a bit, we debugged, and I've put a few requests out there, but it might be, if anybody has some time, to actually just try to look at it.
H
We don't have a lot of docs, but I must be doing something wrong, and the only error that I get is that the JSON isn't valid, which may mean maybe my URLs are wrong or something, but I don't yet have the skill set to get past this. So I'm kind of putting out the plea.
A
Yeah, I'm not sure if anyone super familiar with this is active in this meeting right now, but we'll definitely try to raise that for follow-up. Thanks heaps.
H
It may be an argument error. I've tried to look at all the configured jobs and mirror them. I did update the error string so that it was more accurate, but even with updated error codes, it's the JSON just not being valid as it's consumed by the library. Yeah, it may be a user error, but it's beyond the user's ability to figure out.
A
Understandable.
G
Yeah, hi. This was brought up to me as well, like one hour ago or something like that. Apparently, someone hit a Docker Hub rate limit building some images, in this particular scenario the pause image, and we are basically using a Docker Hub image to compile some binaries which we need for Windows.
A
Yeah, so pause seems like something that I would expect to wind up in kubernetes/kubernetes, adjacent to the existing pause image, and be hosted alongside it. For the build images, the sources are usually in kubernetes/release now, for the base images that we use for these things, and they are built and published to GCR with Cloud Build and so on.
A
It could still maybe make sense to have an image in the kubernetes/release build images, and just have that be, like, FROM that image, and push it. Okay, so, we can avoid rate limiting; we are trying to make sure that the things we depend on in CI, we know are not going to be rate limited, which hasn't been a drastic problem for upstream, I think, but I know it's been problematic for various downstreams trying to mirror what we're testing, in particular.
A
I think upstream CI has less of an issue, with each node having an IP and not pulling as many things directly, but it seems to be a pretty frequent problem with some of our downstreams as well, and since we have the GCR, I think it's worthwhile to remove that pain.
A
It's also been good... one other benefit I've found from us moving this is, we also gain the benefit that, even if we're using a tag, we can trust the tag. We don't have to worry about someone pushing a nasty replacement image, because we know that's controlled by the image promotion process and we don't have mutable tags.
A
Whereas when we've been consuming images directly from Docker Hub in our builds and so on, we've wound up with things like: oh, we're gonna register qemu for multi-arch, but we're just pulling a floating tag that someone could push over. From a build provenance perspective, even if we're going to use third party images, it kind of makes sense to make sure that we get to a place where we can pin them easily, and putting things in the GCR also helps with that, because of the image promoter.
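A minimal sketch of the pinning point being made; the registry path and digest below are placeholders, not real values:

```
# Floating tag on Docker Hub: whoever can push the tag controls what we build on.
# FROM docker.io/library/golang:1.15

# Promoted image pinned by immutable digest (path and digest are illustrative placeholders):
FROM k8s.gcr.io/build-image/example-base@sha256:0000000000000000000000000000000000000000000000000000000000000000
```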
C
Yeah, that sounds good. Why aren't any of our other Windows-based images running into this problem, and is this a CI job?
G
Oh, no, someone building their own images ran into this. They were running on, you know, shared cloud or something, and most other clients on that host were basically pulling images from Docker Hub, so collectively they all hit the rate limit, basically.
C
Okay, that makes sense, but I guess, just to make sure I understand: we do build a bunch of other Windows-based images. How have we not hit this before?
C
Or, decided like, is this specific to pause, or is this something that's affecting all of our Windows images?
G
I thought it's Windows specific; it's just that we require some binaries to be built for the Windows image, for the pause image, sorry. But for the image builder itself, we don't really run that often; I think we run just a couple of jobs the entire week.
C
Yeah, yep, I totally agree with Ben's take that this is a release engineering thing, and yeah.
A
Yeah, and we avoid this because pause in kubernetes/kubernetes is not frequently built, and I think the same goes for most of the other ones; the images that actually need to do this stuff tend to be base images that aren't built as frequently.
A
Okay, well, we have just a couple minutes left. I see we have one more topic. Do you want to talk about that?
E
Yeah, so basically, a while ago, I think in April of last year, Aaron opened an issue about migrating prow.k8s.io to the community infra; we are looking to run Prow in the community cluster. So I started to set up a staging instance for Prow first, but I think Aaron made a comment about whether we can run two Prow instances against the same build cluster.
A
I think, to get a staging instance, we're probably just going to want to stand up, in parallel, anything that we'd want to use to test this, like the build clusters, and not touch the ones that are being used by the primary Prow. That shouldn't be a huge blocker.
A
I think we have scripts to just create projects and that sort of thing now. But in terms of migrating, like, prow.k8s.io, I think there hasn't been as much focus on hashing that plan out currently, because it's blocked on all of the resources being used in CI that depend on using google.com credentials to access other legacy infrastructure, like the CI GCS buckets, for example; and until we finish migrating those things, I don't know if we can safely wind down and fully migrate
A
that instance. There is, I think, quite a bit of CI still depending on these things.
A
Yeah, so in k8s.io all of the resource management is automated and we have scripts for all the GCP resources. I can...
C
So, I mean, sorry, I feel bad because I don't know if I'm the blocker here, but your lack of connection with somebody who both has Prow expertise and also the privileges to stand stuff up in k8s-infra is blocking you.
C
But I'm kind of... I don't know if I agree with Ben's suggestion of standing up parallel staging clusters, or parallel staging build clusters, and hooking it up to that. As far as I'm aware, things start to get really tricky when we have multiple instances of Prow attempting to manage the same repo, or attempting to schedule jobs to the same build cluster, and so I feel like we need...
A
Well, can we step back a little bit further? What is the goal of this staging cluster? Because, depending on that...
A
Okay, honestly, I think, given how many issues Prow has with sharing resources and so on, and how much state is in the cluster, I'm kind of suspecting that what we're going to wind up doing is: once we finish getting all of the non-Prow dependencies migrated, like the Boskos pools and the GCS buckets, to be CNCF-based, then I think we're gonna need to have a day where we just kind of spin down the old one and spin up the new one with the same config but different clusters. Or, like, maybe stop scheduling jobs and copy over the CRs, the custom resources, with all the data about what jobs they've been running, or something like that. Just because, besides the can't-schedule-together problem, you have things like jobs that are only supposed to run so frequently, like scale testing, and they're...
A
I just don't think there is a good way to trivially shift these things. As Aaron mentioned, Prow's not really built to have multiple instances running against a repo; I think there'd be quite a bit of work to make that happen, I suspect. I think the most viable option may just be a hard cutover, and not actually having a staging one running against anything.
A
But if we think that we should stage it, I think then stand up totally parallel infra and just migrate everything over to it. I don't think we can share repos or build clusters without a lot of work.
B
So, sorry, I'm chiming in a little bit, because I'm currently in the process of having Kubeflow migrate from the k8s Prow to another Prow, and the thing I observe is that it's possible to use the same build cluster, but the cutoff date, the cutoff time, should be very well aligned, because what we can do is, there's a deny list. So basically the most concerning part is sinker: sinker would come over and delete pods that don't belong to this Prow.
B
So once you have two Prows managing the same build cluster, they will start to delete pods from the other Prow, and they will keep on failing jobs on the other Prow. There is a deny list in sinker where you can say, I don't want to delete things from this build cluster, and we can configure that on the cutoff day: on the k8s Prow, say, we don't want to manage this build cluster at this point, and we can migrate to the other Prow on that day, yeah.
B
Yeah, so basically what you can do is you can set up a bunch of jobs that are optional and not reporting on the other Prow instance, and the other Prow can start managing the repo, but they are not going to cause any noise. And once you feel comfortable with the new build cluster, you can start migrating to the new build cluster; without deny-listing anything, you can just do the transition, even without downtime.
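A hedged sketch of that approach, using Prow's existing job fields (the job name and cluster alias are assumptions): the new instance carries copies of jobs marked optional and non-reporting, so it exercises the new build cluster without adding noise:

```yaml
presubmits:
  kubernetes/test-infra:
  - name: pull-test-infra-unit-test-canary   # assumed copy of an existing job
    cluster: k8s-infra-prow-build            # assumed alias for the new build cluster
    always_run: true
    optional: true      # does not block merges
    skip_report: true   # does not report status back to GitHub, so no noise
    # spec: ... (same pod spec as the original job; elided)
```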
C
Yeah, that's why I think Chao's a great person to talk to. I guess my short version is: I think this requires a little bit more of a working session to sort of figure out the plan, but I think we could start with...
C
maybe what we start with is: you get a staging Prow set up, and we point it at its own GCS bucket for logs and stuff like that, and then we could hook it up to either a single repo within kubernetes or kubernetes-sigs, or we could even try hooking it up to a separate org, just so we know we have a Prow cluster up and running.
C
I still personally would really like to see work put into sinker, and maybe trigger, so that we could have Prow apply a label to the jobs to say which Prow instance scheduled this job, so that sinker could then go through and only delete the ProwJobs that it created, or something like that. I mean, maybe the deny-list logic of doing it by job name is a good start, but yeah.
C
I would, like, try to set it up, see if we can get webhooks from the kubernetes org, but be very, very quiet in terms of which plugins and stuff we turn on; like, experiment with a single repo to figure out what multiple Prows trying to do that, or, you know, migrating plugins from one Prow instance to another, looks like.
A
Open question around that: are we able to make Prow restrict a build cluster's usage to a namespace? I know you can set a namespace when using a build cluster; I don't know if ProwJobs can still, like, set the namespace on the pod. If they can't, then couldn't we just make sinker only pay attention to, like, test-pods for one Prow and, like, test-pods-2 for another Prow?
D
The problem is, so first, no, it's not possible to have one Prow instance with multiple namespaces; it's one global setting, the namespace in which you create the pods. And the problem is that sinker has logic to delete all pods that were created by Prow and do not have an associated ProwJob anymore, and the "created by Prow" is determined by a label, created-by-prow equals true or something like that. It will just delete everything there. That's the part, Chao, I was referring to before, I believe, yeah. Okay.
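For reference, the namespace being discussed is a single global setting in each Prow instance's config.yaml; a hedged sketch of the two-namespace idea, where each instance gets its own pod namespace in a shared build cluster (the staging namespace name is an assumption):

```yaml
# prow.k8s.io config.yaml (existing instance)
prowjob_namespace: default
pod_namespace: test-pods

# staging instance config.yaml (hypothetical), pointed at the same build cluster
prowjob_namespace: default
pod_namespace: test-pods-staging
```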
A
I think it would also affect things that are not using fully qualified access to in-cluster services, but I believe the main in-cluster services we're using today are Boskos and the build cache, and I think we switched those to fully qualified names to reduce DNS load anyhow.
C
I feel like there's a path forward to muddle through this, and, as much as I want to work with you directly to help figure out what that path is, I just don't have the time for the next couple of days, certainly. How could we help you move forward? Like, I know the thing I blocked you on before was setting up a GCS bucket for the staging Prow instance to log to; I could do that, so we could start getting a Prow instance
C
that's stood up, maybe even without a build cluster to start with, just so you can make sure you can get that running in the k8s-infra cluster.
C
Sorry for taking us long on that, but I know you've been asking repeatedly for a while now, so thanks for sticking with us.
E
Okay, I just want to clarify: basically, the idea about this was just to define a transition plan without cutting the traffic of prow.k8s.io. I mean, that's the ultimate goal, if at some point this year or next year we want to shift the Prow; and since we want to identify what the blockers are, we're doing this without the traffic, yeah.
C
Yeah, we totally want to do it this year.
A
Thanks everyone, we went a bit over time, but I think this was a very productive meeting today. I'll follow up with everyone in Slack and so on about what to do.