From YouTube: Kubernetes WG K8s Infra BI-Weekly Meeting for 20201014
Description
Kubernetes WG K8s Infra BI-Weekly Meeting for 20201014
A
Okay, happy Wednesday, everybody. Today is Wednesday, October 14th. You are at the Kubernetes K8s Infra working group bi-weekly meeting. I am your host today, Aaron Crickenberger, also spiffxp in all the places.
A
Okay, so, like I was sort of discussing before we pressed record, there was nothing really on the agenda for today, so I expect we'll keep it light, but I will sort of run through two things. And I turned my screens off; I'll be right back with you.
A
So I'll paste the link to the agenda in chat if folks want to put their name in as attendees, or if they have anything else they want to add to the agenda. And, let's see, so we kind of did it off camera, but, as usual: are there any new members or attendees here who would like to introduce themselves?
B
Sure, I can go. Hi, I'm Eddie Zaneski. I work for AWS as a developer advocate. I've been spending most of my time working full-time on Kubernetes for the past few months. I'm on the 1.20 release team as a CI signal shadow, and I spend most of my time under SIG CLI, where I'm realizing I have to go lead a meeting in 20 minutes, so I may have to drop off early. But thanks for having me, and I'm just here to learn all I can.
A
Cool, welcome, Eddie. Anybody else? Okay. So I'm going to start by looking at our billing report and seeing if I can share my screen with everybody, once I find the right window.
A
Okay, share screen, that one, okay. Can everybody see that? It looks like we've spent a little over a hundred grand, $100,000, in the last 28 days. Let's take a look at our daily spend.
A
This is where we sort of break it down by project and then by cloud service as well. So, as expected, we're seeing really periodic bumps during the weekdays versus the weekend.
A
And so that's roughly how much our CI and staging builds and public-facing infrastructure, like Slack and Triage Party and all that, are costing us.
A
None of this seems too wildly surprising to me, so unless there are any questions, I think I'll move forward.
A
Okay, so next up, let's talk briefly about where we are with migrating Prow.
A
So briefly, I think we may all be aware of this, but just to refresh: Prow is the thing that runs all the CI for Kubernetes. It runs on the order of ten thousand jobs a day. We think of Prow architecturally in terms of a service cluster, which is what you talk to when you talk to prow.k8s.io, and then a bunch of build clusters, so Prow can schedule specific jobs as pods on different build clusters.
A
So the initial spike... let me see if I can navigate to where those are in the readme, actually, that might help.
A
In order to stand this up, Terraform can be a little racy or wonky, so you kind of have to do some things manually and run Terraform a couple of times, but otherwise it is basically stood up. And then within this build cluster we have things like Greenhouse, which is our Bazel remote cache.
A
You
have
a
couple
things
to
tune
the
cluster
itself
so
like
setting
up
a
default
limit
range
for
memory
and
cpu
doing
things
like
for
the
benefit
of
kind.
We
make
a
bunch
of
loopback
devices
on
each
of
the
nodes
tuning
some
of
the
high
notify
settings
on
all
of
the
notes,
via
payment
set
so
on
and
so
forth,
and
then
we
also
have
our
own
boscos
instance.
So
boscos
is
the
thing
that
is
responsible
for
handing
out
or
managing
pools
of
resources.
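
(For reference, a minimal sketch of the kind of default limit range mentioned above, using the official Kubernetes Python client. The namespace name and the CPU/memory values are illustrative assumptions, not the build cluster's actual settings.)

```python
# Illustrative only: applies a default CPU/memory LimitRange to a namespace,
# roughly the kind of build-cluster tuning described above. The values and
# namespace name are assumptions, not the real configuration.
from kubernetes import client, config

def apply_default_limits(namespace="test-pods"):
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    limit_range = client.V1LimitRange(
        metadata=client.V1ObjectMeta(name="default-limits", namespace=namespace),
        spec=client.V1LimitRangeSpec(limits=[
            client.V1LimitRangeItem(
                type="Container",
                default={"cpu": "2", "memory": "4Gi"},          # limit applied when unset
                default_request={"cpu": "1", "memory": "2Gi"},  # request applied when unset
            )
        ]),
    )
    client.CoreV1Api().create_namespaced_limit_range(namespace=namespace, body=limit_range)

if __name__ == "__main__":
    apply_default_limits()
```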
A
What we most often use it for here in Kubernetes is to manage pools of GCP projects, so these are just a bunch of GCP project names. When an end-to-end job wants to stand up a Kubernetes cluster somewhere, it's going to check a project out of this pool, stand the cluster up, and then, when it's done, it'll check the project back in, and Boskos has a janitor that's responsible for cleaning it all up.
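
(This is not the actual Boskos client or API, just a toy sketch of the check-out/check-in lifecycle described above: a job acquires a project, marks it busy, returns it, and a janitor cleans it before it goes back in the pool.)

```python
# Toy illustration of the Boskos-style lifecycle described above; the real
# Boskos is a separate service with its own API, this only models the idea.
import queue

class ProjectPool:
    def __init__(self, project_names):
        self._free = queue.Queue()
        for name in project_names:
            self._free.put(name)
        self._dirty = queue.Queue()

    def acquire(self, timeout=300):
        # An e2e job checks a GCP project out of the pool before standing up a cluster.
        return self._free.get(timeout=timeout)

    def release(self, name):
        # When the job is done, the project goes back in as "dirty".
        self._dirty.put(name)

    def janitor_pass(self, cleanup):
        # The janitor cleans dirty projects and returns them to the free pool.
        while not self._dirty.empty():
            name = self._dirty.get()
            cleanup(name)
            self._free.put(name)

# Example usage with made-up project names:
pool = ProjectPool(["k8s-e2e-proj-01", "k8s-e2e-proj-02"])
project = pool.acquire()
# ... run the e2e test against `project` ...
pool.release(project)
pool.janitor_pass(cleanup=lambda name: print(f"cleaning {name}"))
```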
A
There are roughly 1700-and-something jobs in total, but not all of these jobs relate to Kubernetes the project; many of them relate to Kubernetes subprojects and so on, which we want to support for sure, but I think the priority would be the Kubernetes-specific jobs first. And then, thanks to the sort of CI policy effort that we had between July, August and September, we sort of pushed on migrating all of those jobs to this build cluster.
A
So
it's
really
possible
for
jobs
to
land
on
nodes
with
noisy
neighbors,
which
is
probably
what
has
been
happening
lately
with
the
build
jobs
which
are
the
remaining
jobs
of
the
release,
blocking
jobs
that
need
to
be
migrated,
so
build
master,
build
master
fast,
I'll,
just
choose
fast
as
an
example.
A
The
reason
these
jobs
still
have
to
live
inside
of
the
default
cluster
is
because,
right
now,
all
of
the
jobs
are
set
up
to
write
to
this
google
cloud
storage,
bucket,
called
kubernetes
released
app
and
it
lives
inside
of
a
google.com
owned
project
and
the
policies
on
that
project
do
not
allow
non-google.com
accounts
to
write
to
it.
That's
just
that's
just
the
way
that
is
so.
As
long
as
we
run
inside
of
a
google.com
build
cluster,
we
can
write
to
the
google.com,
but
we
want
to
run
in
a
non
google.com
cluster.
A
So we need to find a non-google.com bucket to write to. It doesn't appear that we have any way to transfer a bucket name from one project to another, so we worked through proposing different bucket names to use. I opted for k8s-release instead of kubernetes-release, and I believe those buckets now exist and are writable by the Prow build cluster in the community. And, I think, cpanato (I forget his full name, I really should get better about this) created a job that migrates over, yeah.
A
He created a canary job that should run in the build cluster over in the community, and I sort of tried to spell out at the bottom here, step by step, what I would recommend the community do: create duplicate jobs that write to the new bucket, and then alter existing jobs that pull from the old bucket to pull from the new bucket. And as long as all that works, we just gradually shift stuff over to using the new buckets and stop using the old buckets.
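
(A rough sketch of the mechanical part of that step: scan the Prow job configs for references to the old bucket and rewrite them to the new one. The config directory and the exact bucket names below are assumptions for illustration; the real change would be made and reviewed job by job.)

```python
# Illustration only: rewrite references to the old GCS bucket in job config
# YAML files so the duplicated/altered jobs point at the new community bucket.
# The paths and bucket names are assumptions, not the exact values used.
import pathlib

OLD_BUCKET = "gs://kubernetes-release-dev"
NEW_BUCKET = "gs://k8s-release-dev"
JOB_CONFIG_DIR = pathlib.Path("config/jobs")

def rewrite_bucket_refs(dry_run=True):
    for path in JOB_CONFIG_DIR.rglob("*.yaml"):
        text = path.read_text()
        if OLD_BUCKET not in text:
            continue
        print(f"{path}: {text.count(OLD_BUCKET)} reference(s)")
        if not dry_run:
            path.write_text(text.replace(OLD_BUCKET, NEW_BUCKET))

if __name__ == "__main__":
    rewrite_bucket_refs(dry_run=True)  # report first, rewrite once reviewed
```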
A
What to do about the remaining jobs, we're gonna have to sort of figure that out. I think part of the reason I say we're gonna have to figure that out is, from a billing perspective right now, I have no way to describe... I'll stop sharing my screen.
A
I
have
no
way
to
describe,
like
I
think,
in
an
ideal
world,
we'd
love
to
be
able
to
give
each
sig
sort
of
their
own
budget
and
say
here's
your
budget
and
you
decide
like
which
projects
need,
which
jobs,
which
tests
to
spend
that
budget
and
right
now
we
don't
have
a
way
to
assign
costs
to
a
specific
sig
and
the
end
tests
end
up
consuming
resources,
not
just
on
the
build
cluster
themselves,
but
they
also
consume
resources
by
spinning
up
clusters
in
the
cloud
and
so
getting
like
all
of
that
assigned
to
a
single
bucket.
A
We,
if
that's
the
approach
we
want
to
take,
we
need
to
figure
out
how
to
engineer
that
sort
of
system.
A
My
rough
thought
was
like,
since
we've
taken
care
of
the
mostly
taking
care
of
the
release,
blocking
and
merge
blocking
the
next
logical
thing
might
be
the
release
and
forming
jobs
which
are
not
supposed
to
be
hard
blocks
on
the
release,
but
are
useful
for
informing
us
about
whether
or
not
the
release
is
good
to
go
from
sort
of
a
broader
perspective.
So
I
know
that's
kind
of
where
the
5
000
node
scalability
tests
live,
and
that
may
be
where
openstack
and
a
couple
other
cloud
providers
have
their
tests
at
the
moment.
A
So
that's
that
is
roughly
that
at
the
moment
did
that
answer
your
your
questions
out
here.
Do
you
have
more
specifics.
B
A
Yeah, essentially we want to... at least in Google Cloud, a project is kind of like, almost like, logical namespacing. I mean, quotas and IAM stuff are also tied to projects, but that gives us the freedom to have a job create whatever cloud resources it wants, and it won't interrupt or stomp over top of other jobs.
A
This way we can ensure that... like, the simplest case will be, we have a bunch of jobs that create a bunch of clusters, and if we were to point too many of them at a single GCP project, we might find that they stomp over top of each other naming-wise, or they eventually hit that project's quota for nodes or networks or IPs or whatever. So it really simplifies management to have just a pool of generic projects we can check out.
A
I don't know if it's this way anymore, but at least when I was using AWS a lot more actively, I sort of thought, like, I'm gonna check out a VPC and I'll create whatever.
A
D
A
So "I don't know" is the short answer. The slightly longer answer is: I'm gonna trust, or ask, that the release engineering team kind of look into that. I trust that they are the team that's in charge of the jobs that use those scenarios most often.
A
So I would trust that they know what needs to be changed. I'm trying to talk just sort of specifics about the buckets and stuff; it seems like all that needs to happen is changing some flags in a job config. I don't think any code has to be changed, just making sure that the flags tell the scenarios which buckets to publish to, or, vice versa, tell the end-to-end test jobs which bucket to consume builds from. Yeah, in an ideal world the scenarios are actually... we shouldn't be using them.
A
We
shouldn't
be
using
them,
because
technically
scenarios
are
related
to
bootstrap,
which
is
a
legacy
thing
that
we
used
prior
to
the
existence
of
cube
test
and
also
prior
to
the
existence
of
proud
job
decoration
or
also.
We
also
call
that
pod
utils,
which
are
the
things
that
are
responsible
for
like
clone
this
repo
and
put
it
in
a
known
space
and
then,
when
you're
done
upload,
the
artifacts
from
this
well-known
location
to
a
well-known
bucket
in
the
cloud
and
stuff
pod
details
are
much
more
actively
supported
than
bootstrap
in
all
of
the
scenarios.
A
Okay,
cool,
so
next
up
claudio
had
a
question
about
docker
hub.
So
let
me
share.
A
So this is... I created this issue in the test-infra repo titled "mitigate Docker Hub changes rolling out November 1st". The tl;dr is Docker Hub is going to rate limit pulls of images from Docker Hub. If we are anonymous, we'll be rate limited to 100 pulls over six hours per IP. If we're authenticated, we're limited to 200, regardless of what IP address we're coming from, and if we use a paid plan, we get unlimited pulls.
A
And
so,
even
if
we
were
worried
that,
even
if
we
fixed
the
really
common
case
with
the
images
that
we
care
about
the
long
tail
of
images
that
come
from
dr
hub,
it
may
result
in
random
nodes
on
our
build
clusters
hitting
that
rate
limit
and
then
not
working
for
a
bunch
of
different
jobs
and
because
it's
by
ip
it
might
start
looking
like
jobs,
might
randomly
start
failing,
depending
upon
which
node
and
which
build
cluster
they
are
scheduled
to.
So
it
might
look
really
confusing
and
annoying.
A
So
one
option
is
we
just
fork
over
the
cash
for
a
paid
plan
and
then
figure
out
how
to
pass
the
credentials
for
that
all
across
our
build
cluster
nodes,
which
you
know
the
things
responsible
for
like
running
unit
tests
and
running
integration
tests,
the
things
responsible
for
building
things
that
use
docker
images
so
like
the
kubernetes
build
job,
pulls
down
some
docker
images
during
its
build
process
or
the
various
google
cloud
build
jobs
that
are
responsible
for
building
images
that
are
hosted
in
the
kates.gcr.io
image
repository.
A
So
that's
one
option.
We
pass
the
credentials
through
all
that
another
option
is
we
look
at
setting
up
a
pull
through
cache
for
a
prowl
and
all
of
its
build
clusters,
and
then,
ideally,
people
wouldn't
need
to
change
what
images
they're
using
at
all
another
option
is
we
we
rely
on.
Google
provides
some
level
of
mirroring.
I
don't
know
if
I
have
it
linked
here:
okay,
yeah
google
provides
mirror.gcr.io
which
caches
a
number
of
commonly
used
docker
images.
B
A
Your busyboxes, your golangs, your pythons, stuff like that, are available at this repo, and we would not be rate limited in pulling from it. And, I believe, by default all GKE clusters are configured to have this as a mirror, so when we pull from Docker Hub, we actually look to see if we can pull from this mirror first.
A
I have not found a way to prove that that is what is actually happening from the logs available on the build cluster nodes, but this may already be happening and may actually take care of us in most of the common cases.
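
(One way to check that on a node, assuming the build cluster nodes run Docker and keep the daemon config at the usual /etc/docker/daemon.json; both of those are assumptions, and nodes using containerd would need a different check against the containerd registry config.)

```python
# Checks whether a node's Docker daemon is configured with a registry mirror
# such as https://mirror.gcr.io. Assumes the standard daemon.json location.
import json
import pathlib

def registry_mirrors(daemon_json="/etc/docker/daemon.json"):
    path = pathlib.Path(daemon_json)
    if not path.exists():
        return []
    return json.loads(path.read_text()).get("registry-mirrors", [])

if __name__ == "__main__":
    mirrors = registry_mirrors()
    if any("mirror.gcr.io" in m for m in mirrors):
        print("mirror.gcr.io is configured as a registry mirror:", mirrors)
    else:
        print("no mirror.gcr.io mirror configured:", mirrors or "none")
```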
A
So that's the stuff at a high level, focusing specifically on Kubernetes, since that's our highest-traffic thing. Antonio pointed out that it really looks like these are the only remaining images that come from Docker Hub and are used in e2e tests. So what we could do is look at pulling these; first we either determine whether or not we're transparently using mirror.gcr.io.
A
If we're not, we could explicitly reference that, and just update the docker library registry to mirror.gcr.io/docker/library. I think the alternative is we could try mirroring this, try setting up jobs that mirror these images into k8s.gcr.io and pull from there. So that was just looking at the code. Some other approaches I've tried to take are looking at the kubelet logs from all of our build clusters and looking at what they pull down; this lines up pretty well with what Antonio found for Kubernetes.
A
We don't seem to be pulling down a lot on the community-owned build cluster, and I'm still working on getting this data from the default google.com build cluster, because it was stood up with logs in a different format a long, long time ago. So I'm trying to see if I can parse out something similar there.
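
(The kind of parsing being described, sketched in Python: scan kubelet log lines for image pulls and tally them by registry. The "Pulling image" wording and the log path vary between kubelet versions and log formats, so both are assumptions here.)

```python
# Rough tally of which registries the kubelet pulled images from, based on
# log lines mentioning pulled images. The message format and log path are
# assumptions and differ across kubelet versions / logging setups.
import collections
import re
import sys

IMAGE_RE = re.compile(r'[Pp]ulling image "([^"]+)"')

def registry_of(image):
    first = image.split("/", 1)[0]
    # A registry host contains a dot or a port; otherwise it's an implicit Docker Hub image.
    return first if ("." in first or ":" in first) else "docker.io"

def tally(log_path):
    counts = collections.Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            m = IMAGE_RE.search(line)
            if m:
                counts[registry_of(m.group(1))] += 1
    return counts

if __name__ == "__main__":
    for registry, n in tally(sys.argv[1]).most_common():
        print(f"{n:6d}  {registry}")
```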
A
So we may be in the clear; it's the long tail of images that I'm less certain about what to do with. We could recommend that people push them into staging. I think my concern, and probably Tim's concern, might be: we want to make sure that these mirrors are used by the Kubernetes project and not necessarily by the world, just because of the way the funding works. No problem using mirror.gcr.io, because that's something that Google on the whole provides, but I think Kubernetes the project should not be funding artifact mirroring for the world.
A
We may not be able to move that quickly. It is something I can certainly take a look at, but I would say, if we feel like there's a long tail of images that are not available there, we should look more at either using different images or attempting to set up a pull-through cache, would be my guess. Like, with busybox specifically, I think I'd rather suggest alpine, so Claudiu can direct me here: I feel like alpine is better cross-platform, or is it?
C
Technically
busybox
because
we
also
use
busybox
on
windows
tests
as
well.
There
was
a
pull
request
some
time
ago
that
was
replacing
the
busy
box
image
usage
to
agnost,
yes,
which
is
a
typical
image
for
pretty
much
most
conformance
tests,
but
it
hasn't
been
updated
in
a
couple
of
months.
C
A
Maybe the reason I'm kind of a little down on that is just that... I don't know, maybe you can correct me. I feel like busybox may often be used by... I mean, I can't see where it's showing up here, but I could see that there might be projects that try to use busybox as a shorthand for "I just want bash, and I want the tiniest image possible that has bash," and agnhost is larger than that, because it's...
A
Yeah,
so
it's
certainly
it's
an
option,
and
then
I
think
I
don't
think
I
said
this
on
camera
like
I
would
like
to
send.
Generally
speaking,
we
should
encourage
people
not
to
use
docker.
A
Speaking with my SIG Testing hat on, I care most about the kubernetes/kubernetes jobs, and then my secondary concern is that long tail of images affecting the long tail of jobs.
A
Yeah, so you mentioned you were interested in helping out. Which of those approaches sound best to you, or where do you think you could contribute?
C
I've been mostly working on the Kubernetes e2e tests and the images that are being used, so anything in that direction I can definitely help with. Okay, as I mentioned, I could take a look at that, but, as I said, it won't really help for other tests outside of Kubernetes.
A
Okay, yeah, I feel like I just really need to answer the question of whether mirror.gcr.io is actually being used today, and then I think Antonio took the next logical step of figuring out which images are not covered by mirror.gcr.io. So we'll have to figure out something for those images, and we can survey what the scope of that looks like. And then, if we're not getting the benefits of mirror.gcr.io for free, we're gonna have to figure out how to do the same thing ourselves.
C
I'm wondering if we can host the busybox image in k8s.gcr.io as well, basically using the same image builder job to mirror the image itself. That would just mean adding a couple of lines in the kubernetes test images for a busybox base image, and an almost empty Dockerfile for that. It's something that I've been doing to also include the Linux images in my own manifest lists.
C
I was wondering about this: once we can confirm that mirroring works and that it's perfectly fine, I'm wondering if we can unpromote images, so this can also be like a temporary fix until we have the mirror ready for the busybox image.
A
I
cannot
recall
whether
sorry
words,
we
cannot
unpromote,
but
we
could
just
not
promote
anything
out
of
staging,
like
I
feel
like
once.
We
promote
out
of
the
staging
location
into
kates.gcr
like
that.
We
can't
go
back,
but
we
could
not
promote
because
I
don't
actually
think
we
have
promoted
a
gm
host
or
anything.
C
Yeah,
we
have
promoted
the
agnost
image
a
couple
of
times,
yeah.
Okay,
I
think
the
staging
hosting
the
image
on
the
staging
registry
sounds
like
a
pretty
good
temporary
solution.
A
C
I'll just send the pull request for that; sure, it should be like a 15-minute thing to do. And then, when this is decided on, you can just approve the pull request, and then the image builder job will build that image, we'll have it in the staging registry, and we can see if everything is working perfectly afterwards.
E
D
A
So, yes, I agree, that's an option for us. It is less preferred for me than using mirror.gcr.io, and that's solely for the reason of funding. Just to restate it, my concern is: we're not the only open source project out there, we're not the only people out there who are like, "oh no, I don't want to pay Docker Hub money," and I just want to make sure that, if we end up, you know, putting a really popular image in one of our staging repos...
A
The
people
don't
organically,
just
start
using
that
instead
of
docker
hub.
So
if
we
use
mirror.jcr,
that's
a,
I
am
fine.
If
more
people
use
that
that's
kind
of
a
more
appropriate
place,
because
it's
funded
by
google
and
I
don't
care
if
google
wants
to
pay
money
for
that
or
not.
But
I'm
thinking
about
this
with
my
kubernetes
project
and
I
don't
think
the
kubernetes
project
should
be
paying.
You
know
the
bandwidth
costs
for
a
busy
box
for
the
entire
world.
A
We
should
be
paying
for
the
bandwidth
of
hosting
busybox
for
our
project
and
for
rci,
but
everybody
else
yeah.
So
maybe
maybe
I'm
being
overly
paranoid
about
that,
because
it
would
be
the
same.
I
mean
it's
the
same
concern
with
any
of
our
staging
images.
I
get
it,
but
that
just
seems
like
something
popular
like
that
around
an
event
that
motivates
everybody
to
choose
alternatives.
It
could
lead
to
more
traffic
there,
but
if
we
can't
use
mirror.gcr.io,
I
think
that
is
the
next
logical
step
like
hosting
our
stuff
in
staging.
C
Okay, yes, I have one last thing regarding the e2e test images building job; I have a pull request for that.
C
Basically, each time apk was being run to install packages, it would always fail with a "bad address" or something like that. You can actually see the original request linked in the special notes; it details why it was happening, what was fixing it, and so on and so forth.
C
Even the agnhost image was installing packages through apk, but it was silently failing and continuing for s390x images, and there was actually someone who was building clusters and trying to test images with that architecture, and it was failing because of that. In the agnhost image we are installing bind-tools, which basically gives you dig, a common tool used for testing DNS names and so on and so forth, and that apk command failed.
C
Of course, the s390x image didn't have that program, so of course it was failing for him. After the original request merged, it was building fine for me, the images, everything went fine; I also tested in my own environment, with some emulation, to make sure that it's fine. But there's an issue when trying to build images inside another container, which is what the image builder does, and I replicated that in my environment as well, and of course it was failing for me as well. And the particular fix I've sent is basically a one-line code change.
C
It's building for me again; I'm currently rebuilding all the images for myself, including Windows images, and I'll post the logs as a comment as well.
A
I'd like to get Jeff, or at the very least Ben the Elder, to get eyes on that, but that looks valid to me.
A
D
C
D
Another question about Prow migration... sure... so now ghProxy is merged and deployed; what is the next step? Because I'm kind of lost about what the next steps related to Prow migration are.
A
Okay, so, the use of the GitHub proxy...
A
There are likely jobs in that cluster that we would like to migrate to the community's trusted cluster, I'm thinking, and jobs that interact with GitHub's API a lot. So the biggest candidates for me would be the peribolos jobs, and maybe the job that runs label_sync, which is responsible for setting up GitHub labels on all the repos.
A
What is less certain to me is whether we are comfortable having the k8s-ci-robot token put into the community-owned build cluster, or whether we're going to feel like we need a different bot token put into the community-owned cluster.
A
The reason I say that is because k8s-ci-robot, for better or for worse, has access to a lot of GitHub organizations and repos that are not Kubernetes, and I kind of feel like they just shouldn't be there in the first place, but they are. So, from a security perspective, it's unclear to me whether we want to expose access to all of those to the community-owned build cluster, so we might want to look at using...
A
There was an old GitHub account called k8s-merge-robot; it was used by the mungegithub piece of infrastructure long ago, when we used that instead of Tide, and I renamed that account to k8s-github-robot, and I believe that robot has admin access to GitHub for the Kubernetes orgs. So you could use that token instead.
A
The other thing is, I didn't actually check that ghProxy was working in the trusted cluster. I'm not sure that I'll have time to, but my thought would be trying to get...
A
I guess the way I would look at it for next steps is: I might take something small like label_sync, and I wouldn't run it for all of the orgs; I would configure the job to run for a small org or orgs, like kubernetes-client or kubernetes-csi, for example, and I would see if that job is successfully able to talk to ghProxy or not. And we might also need some way to hook that up to a monitoring dashboard.
A
So we can actually see the stats that show whether or not the proxy was hit. And, I'm sorry, I'm just making this up off the top of my head; I don't know whether it would be better to try and hook that up to the monitoring.prow.k8s.io dashboard that we currently use, or whether we want to consider setting up our own copy of that monitoring stack.
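
(A rough way to spot-check whether requests are actually going through the proxy, assuming the proxy forwards arbitrary GitHub API paths and is reachable at some in-cluster URL; that URL, and using /rate_limit as the probe, are assumptions for illustration. The real visibility would presumably come from scraping the proxy's metrics into the monitoring stack discussed here.)

```python
# Illustration: query GitHub API rate-limit usage directly versus through a
# ghProxy-style caching proxy. The proxy URL below is a made-up in-cluster
# address, not the real deployment's.
import os
import requests

GITHUB_API = "https://api.github.com"
GHPROXY_URL = os.environ.get("GHPROXY_URL", "http://ghproxy.default.svc.cluster.local")
TOKEN = os.environ.get("GITHUB_TOKEN", "")

def remaining(base_url):
    headers = {"Authorization": f"token {TOKEN}"} if TOKEN else {}
    resp = requests.get(f"{base_url}/rate_limit", headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()["resources"]["core"]["remaining"]

if __name__ == "__main__":
    # Comparing "remaining" over time while jobs run through the proxy versus
    # directly gives a rough sense of whether the cache is saving API quota.
    print("direct   :", remaining(GITHUB_API))
    print("via proxy:", remaining(GHPROXY_URL))
```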
D
Yeah, because I wanted to suggest that we deploy your Prow stack somewhere along with the monitoring stack, so we can see where the blocking steps are. Because right now, ghProxy is also used by hook and crier, so if we want to talk about migration, the question is, we also need to know where Prow itself will run.
A
Yeah, it is. So that's... I mean, maybe that's a larger question.
A
I thought that it might be a cleaner separation of concerns if we don't ever allow jobs to run on the service cluster, and we instead only run jobs in a separate cluster that is trusted. But that means that, right now, ghProxy is not intended for access outside of a given cluster, kind of the same way Boskos is, right? Like, you talk to the instance that lives in the cluster, so both the Prow cluster and the trusted cluster might need their own instance of ghProxy.
A
We might end up seeing less benefit from the caching as a result of that. So maybe, I don't know, maybe we do want to follow the model where all the Prow components are hosted on the same thing, but I feel like I'd like to better understand the pros and cons of that decision.
A
But I like your idea of setting up a sort of staging Prow instance or something, where we could just see what access it needs. That sort of gets into the larger story of migrating from the existing Prow service cluster to a brand new Prow service cluster, and it's less clear to me what a seamless migration path for that will look like. But if you want to get started on sort of describing the steps there, that would be awesome.
A
Okay, that sounds good. You know, reach out for help, ask questions; yeah, iterating on stuff is always great. Okay.
A
Well, cool. I expected this meeting to take five to ten minutes, so I guess I'll give you five to ten minutes back. Thank you, everybody, for showing up, and I hope you have a happy Wednesday.