From YouTube: Kubernetes WG K8s Infra - 2021-08-18
A: Hi everybody, you are at the Kubernetes WG K8s Infra bi-weekly meeting. It is currently 1 p.m. on Wednesday in my time zone, but happy current time and current day, wherever you may be.
A: On today's agenda, after we go through the regular stuff, I thought we could talk a little bit more about this group's plans for the 1.23 release cycle, and then Arnaud has a number of specific items on the agenda around migrating all Prow jobs, Dependabot, and so on and so forth.
A: I think I'm just going to get right into it. Before I share my screen, I'm going to give a massive shout out to Arnaud for actually putting these slides together; I asked if he was interested in speaking to them and throwing to me, or if he would rather I just try rambling over them. Arnaud totally saved my bacon.
A: Yeah, actually you're right, I jumped right into the new topics and neglected the recurring topics. So here are the meeting notes, if people want to sign themselves in as attendees, and I will go ahead and fire up the Data Studio report and share my screen, or rather share my window.
C: Okay, so I think two meetings ago we noticed an increase in the cost of a specific project, the scalability 5k project. This project is supposed to basically host all the SIG Scalability jobs, from presubmits to periodics, and I created some canary jobs that basically reflect what's running inside Google.
A: Yeah, I super appreciate you going through and tracking all of that down. For what it's worth, I did a little bit of inspecting into it, like, hey, it seems like maybe this is the sort of thing we should have a janitor automatically clean up for us in case things get stuck, and at least as far as the code I can find in test-infra, there are actually multiple janitor scripts.
A
It's
unclear
to
me,
which
of
them
still
run,
but
the
scalability
project
was
excluded
from
both
of
them
and
I
think
that's
because
it
runs
in
a
very
fine-tuned
manually
done
schedule
and
because
the
job
takes
so
long
to
run
the
completion,
it
would
be
very
frustrating
if
an
actively
running
job
was
killed
off
in
the
middle
of
running.
A: Maybe just in this specific project, if something has been running for some extended period of time, there's some... I have a sneaking suspicion we could put together a Google Cloud Monitoring alert for something like this.
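[For illustration, a minimal sketch of the kind of check that could back such an alert: list Compute Engine instances in the pinned scalability project and flag any running longer than a threshold. The project ID and threshold below are placeholders, not the actual k8s-infra tooling, and the sketch assumes Application Default Credentials and the google-api-python-client package.]

```python
# Sketch: flag long-running VMs in a pinned scalability project.
from datetime import datetime, timedelta, timezone

from googleapiclient import discovery

PROJECT = "k8s-infra-e2e-scale-project"  # hypothetical project ID
MAX_AGE = timedelta(hours=12)            # assumed threshold for "stuck"

compute = discovery.build("compute", "v1")
now = datetime.now(timezone.utc)

request = compute.instances().aggregatedList(project=PROJECT)
while request is not None:
    response = request.execute()
    for zone, scoped in response.get("items", {}).items():
        for instance in scoped.get("instances", []):
            # creationTimestamp is RFC 3339 with a UTC offset, e.g.
            # "2021-08-18T10:15:00.000-07:00".
            created = datetime.fromisoformat(instance["creationTimestamp"])
            age = now - created
            if age > MAX_AGE:
                # A real janitor would page someone or delete the VM;
                # here we just print what a monitoring alert would fire on.
                print(f"{instance['name']} in {zone} has been up for {age}")
    request = compute.instances().aggregatedList_next(
        previous_request=request, previous_response=response
    )
```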
C: If possible, I just want to add that most of the jobs use the test-infra scenarios for shutting themselves down; even if the job is failing, it succeeds in uploading its logs and shuts itself down. But I think you're right, we should basically add an alert so that, like every week, we check the number of leftover resources for a project, so we can quickly identify if something is wrong.
A: Yeah, and it's also worth noting that this is a special case where the job is pinned to a specific project. For all of the projects that get leased out of a pool managed by Boskos, whether or not the job cleans up after itself, and whether or not it actually returns the project to Boskos, eventually Boskos will decide that project is its again, and it cleans the project by running a janitor script before reintroducing it back into the pool.
A
So
the
majority
of
jobs
don't
have
this
problem,
but
it
is
something
we'll
want
to
keep
track
of
for
special
cases
like
scalability,
so
yeah.
If
I
look
at
sort
of
this
graph
down
here,
it's
you
know,
we
do
have
sort
of
some
amount
of
overhead,
but
it
is
significantly
better.
It
needs
to
be
so
really
appreciate
you
tracking
that
technology.
A
I
will
speak
closer
to
the
laptop,
so
that
wasn't
just
a
big
fat
mumble.
Okay,
so
stop
sharing
this
window
unless
there
are
any
other
questions
about
the
billing
report.
A: At our current run rate right now, beyond that, I basically will need to do some planning, or we collectively will need to do some planning, as we look to migrate over hosting of Kubernetes release artifacts. I have a sneaking suspicion that if we were just to flip a switch right away, that would probably put us over our budget, but we can probably migrate over hosting of a fair number of release artifacts. We'll need to sort of flip a little bit over, see how much traffic that is and then how much cost that is, and then I think from there we can decide whether we need to hold off and make arbitrary binaries a blocker to doing cross-cloud mirroring, or if we're comfortable proceeding with cross-cloud mirroring for container artifacts only.
A: All credit to Arnaud for putting this together. I kind of rambled about this earlier, but now there's a slide that has it: essentially the first big ticket item for this working group is for this working group to become a SIG.
A: Becoming a SIG explicitly answers that question. It also provides us the opportunity to break up our organizer role into chair and tech lead roles, and so I proposed that Arnaud and Dims would be the chairs of this SIG, and then myself and Tim Hockin would be the tech leads of this SIG.
A: Last I checked earlier this morning, Steering has raised a couple of questions around the ambiguity of shared ownership of things.
A: So an example of that last one would be the groups reconciler. You know, there are certain @kubernetes.io email addresses that present a level of formalism, as if speaking for the project, and so that is something that falls within the bounds of Contributor Experience.
A: However, because the groups reconciliation controls permissions and access to different parts of the infrastructure, the new SIG would also own, and/or have the ability to block, policies or changes that were insecure or troublesome. I see this evolving the same way that SIG Testing owned the running of Prow, but all of the individual SIGs were responsible for their jobs, their job configuration, and whatever plugins were needed for their repos and their subprojects to do their thing.
A: But SIG Testing still reserved the right to block any change if it was unhealthy for the good of the project, if it was causing things to crash, or if it was causing rate limits or quotas to be hit. You know, somebody needs to be there to pull the fire alarm and do what is needed to make sure we recover back to a functional state. So I see something similar happening with this SIG, where, in general, if you want to run the code that you wrote, you should be responsible for supporting that code, because you have the domain expertise for it, and in general we would love to own and craft less policy.
A: Okay, as far as what I think we want migrated by the time Kubernetes 1.23 goes out the door: this right here is our uber issue.
A: Next, you have Google Cloud projects that are used to host images that are either run as part of CI or involved in end-to-end testing in some way. Then you have a whole laundry list of projects that are pinned to specific jobs; much like the scalability job had its own project, there are a number of other projects that seem to be pinned to particular jobs.
A: So going from the bottom up, I think it's the responsibility of the SIGs that own the jobs that use these special purpose projects to figure out if they still need them: do they still care about this job? If they do, what is special about the project that the job is using? And then let's get appropriately configured special purpose projects provisioned over in k8s-infra land. But this is kind of the long tail of things; we want to go for the higher impact things.
A: So google-containers is the project that hosts all of the release artifacts; it also used to host all of the CI artifacts, but we completed migration of all the CI artifacts last release cycle.
A: So, as I mentioned earlier, I anticipate that hosting of the release artifacts is going to be a pretty big bang-for-buck type of thing, but it's also going to result in a lot of spend.
A: So roughly, what we need to figure out here is how exactly we want to sync between the community-hosted location for artifacts and the Google-hosted location for artifacts, and determine whether we want to do the same sort of geo-replication across different buckets in different regions.
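[For illustration, a minimal sketch of a one-way sync between a Google-hosted bucket and a community-hosted one. The destination bucket name is a placeholder, and a real sync would more likely use gsutil rsync or a transfer service; this only shows the shape of the problem.]

```python
# Sketch: server-side copy of release artifacts from the existing
# Google-hosted bucket to a hypothetical community-hosted bucket.
from google.cloud import storage

SOURCE_BUCKET = "kubernetes-release"     # existing Google-hosted bucket
DEST_BUCKET = "k8s-community-release"    # hypothetical community bucket

client = storage.Client()
src = client.bucket(SOURCE_BUCKET)
dst = client.bucket(DEST_BUCKET)

existing = {blob.name for blob in dst.list_blobs()}

for blob in src.list_blobs(prefix="release/"):
    if blob.name not in existing:
        # copy_blob is a server-side copy; no artifact bytes pass
        # through the machine running this script.
        src.copy_blob(blob, dst, blob.name)
        print(f"copied gs://{SOURCE_BUCKET}/{blob.name}")
```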
A: For those of you who aren't familiar, there's a bucket in Europe and a bucket in Asia that are mirrored to right now for the google.com-hosted artifacts, but by far the majority of the traffic is being served out of the US bucket. So for me, I'm kind of questioning whether that additional complexity is actually providing us cost savings or not. And then, similar to how we had to go make a bunch of changes across the project to get rid of the hard-coded kubernetes-release-dev bucket...
A: ...we want to do the same thing for the kubernetes-release bucket, and as much as possible, if subprojects or scripts or whatever are fetching release artifacts via an HTTP-based tool like curl or wget, change them to use dl.k8s.io, which is then something where we can gradually flip pattern-matched URIs over to the community-hosted bucket.
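[To make the dl.k8s.io point concrete, a small sketch of fetching a release artifact through the redirector instead of a hard-coded bucket URL. dl.k8s.io/release/stable.txt and the kubectl path are the documented download URLs; which patterns get flipped to the community bucket, and when, is exactly the open question above.]

```python
# Sketch: fetch kubectl via dl.k8s.io rather than a hard-coded GCS bucket URL,
# so the backing bucket can change without touching every consumer.
import urllib.request

# Resolve the latest stable version, e.g. "v1.22.0".
with urllib.request.urlopen("https://dl.k8s.io/release/stable.txt") as resp:
    version = resp.read().decode().strip()

url = f"https://dl.k8s.io/release/{version}/bin/linux/amd64/kubectl"
with urllib.request.urlopen(url) as resp, open("kubectl", "wb") as out:
    out.write(resp.read())

print(f"downloaded kubectl {version} from {url}")
```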
A: Then there are all of the other artifacts that live in this bucket: things like CSI, CNI, and a number of other random things that appear to be critical to the release of Kubernetes.
C: I basically just have a small adjustment about everything you've said until now. The first thing about migrating the Google projects: I don't think we need to migrate everything, we basically need to identify what really needs to be migrated, because I see a lot of jobs using special projects that have been failing for like three months. So I think we should not migrate those, but I'm not sure if we have the ultimate power to make that call, because I think that should be SIG Testing's call.
A: I agree with the first part. Whether or not it is SIG Testing's call, I'm less sure, because, speaking with my SIG Testing chair hat on, we don't write your jobs, so we don't necessarily define... we don't know what your thing is, we don't know how it works, so we don't necessarily know what the best set of tests are to ensure it's well tested.
A: But yes, I would say either SIG Testing or this group, in the name of resource preservation and noise reduction: if your job has been continuously failing for over 90 days, it sure seems like you don't have the people around to pay attention to it, and a noisy or failing signal, at least in my opinion, is worse than no signal whatsoever.
A: We can find ways to jail those jobs, or we could just delete them, because that's what source control is for. By jailing I mean running them at a significantly reduced frequency and moving them to some different dashboard or different place that gets less attention. But by and large, for the decision of what the set of jobs critical for you, SIG Foo, to do your job is, I'd rather SIG Foo make that decision. I definitely want the first question to be:
A: Do you still need this job? It seems like you haven't been paying attention to it. Because, yes, I'd rather avoid migrating things we don't have to migrate in the first place.
A: Okay, the next line on here is about migrating what I'll basically call the Prow data pipeline. So we have this data set, and it contains all of the historical data about all of the test results that land in GCS buckets, which are paid attention to by a tool called Kettle.
A: We have other friends of Kubernetes who write things there, or write things to buckets that they own which have been configured for us to pick up; for example, OpenShift is a big user of TestGrid, and Jetstack also uses TestGrid.
A: We can measure all of that. We, as a community, have not gotten as far as automatically having a bot enforce presence in release-blocking or not based on these metrics, but that's the dream, and we basically should start doing the same thing for merge-blocking jobs too, because I personally find it a little ridiculous that opening a pull request means you get to wait 90 minutes to two hours for the tests.
A: And so, in order to do that, we need to make sure that that data pipeline continues to stay up and running, and, as some of you are aware, it's not up and running right now, because the thing that's responsible for scraping GCS and dumping the results into BigQuery, called Kettle, is down. The problem is that because the BigQuery data set it writes to lives within Google, there's really nobody but a few overburdened Googlers who have the time or capacity to troubleshoot it. I've heard community members say they would love to help out.
A: We just need to get to a point where they can help out. So what remains to be done to get to that point is to migrate the BigQuery data set into which all this data is written, and then to get Kettle, the tool, to run in a community-owned cluster.
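[For the data set migration piece, a server-side table copy is roughly what is involved; a minimal sketch follows, where the source and destination table IDs are assumptions for illustration rather than the actual ones.]

```python
# Sketch: copy the historical test-result table from a Google-owned project
# into a community-owned project. Table IDs below are placeholders, and the
# destination dataset must already exist.
from google.cloud import bigquery

SOURCE_TABLE = "k8s-gubernator.build.all"        # assumed current location
DEST_TABLE = "k8s-infra-community.build.all"     # hypothetical new home

# Billed against (and authenticated as) the destination project.
client = bigquery.Client(project="k8s-infra-community")

job = client.copy_table(SOURCE_TABLE, DEST_TABLE)
job.result()  # wait for the copy job to finish
print(f"copied {SOURCE_TABLE} -> {DEST_TABLE}")
```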
F: I do really think we should move this, and that it will help, but if someone wants to look at Kettle being down right now, the relevant logs are in the issue, and so is the approach; I just haven't had time to implement anything or attempt deploying it, because something else keeps being more important.
F: The change that needs to be made should be relatively straightforward, and it's in public source code. So this is important and we should move it, but also, if you'd like to see that data flowing again sooner than that, there shouldn't be any real blockers: it's deployed by automation and the source code's public, obviously.
A: So, by moving the whole data pipeline over to community-owned infrastructure, we can also do fun stuff like say that if you are part of the Prow viewers group or the Prow on-call group, and you are interested in writing your own queries to run against this data set, to maybe come up with your own version of what project health looks like or what CI signal looks like...
A: ...you are empowered to do so. You're actually empowered to do so today, because this data set is publicly accessible and publicly readable. You just have to bill the compute time you use to run that query against your own Google Cloud project, which I do with my free tier project on my personal account, but that seems to weird some people out.
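[As an illustration of running your own query against the public data set while billing your own project, here is a minimal sketch; the dataset, table, and column names are assumptions based on how Kettle's data has historically been laid out, so treat them as placeholders.]

```python
# Sketch: query the publicly readable test-result data set, billed to your
# own GCP project. Dataset, table, and column names are assumptions.
from google.cloud import bigquery

# The project you own and are willing to be billed against.
client = bigquery.Client(project="my-personal-project")

query = """
    SELECT job, COUNTIF(result = 'FAILURE') AS failures, COUNT(*) AS runs
    FROM `k8s-gubernator.build.all`
    WHERE started > UNIX_SECONDS(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY))
    GROUP BY job
    ORDER BY failures DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(f"{row.job}: {row.failures}/{row.runs} failed in the last week")
```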
A: I think I still agree with that. The big picture thing is that prow.k8s.io becomes fully owned by the community, so it runs on community hardware or infrastructure, and community members are on call for it. On-call is a scary word, so to be clear, what on-call is right now is best effort, during working hours only, in the Pacific time zone, since that's where all of the current Googlers who have access to this live.
A: Not that we necessarily want to increase the rigor or level of support that we expect, but theoretically there are more community members in a wide variety of time zones, as this meeting demonstrates, and we could get closer to having community members on best effort during their local time zones. I see Arnaud has his hand up.
A
Yes,
that,
and
so
yes,
we're
in
violent
agreement
there
like
that's
what
the
big
picture
is,
but
these
are
the
steps
that
help
us
get
there.
So
we've
like
the
staging
pro
instance
that
arno
has
been
putting
up
and
I've
been
helping
review
like
is.
A: ...which raises a lot of the questions about what permissions are required to view all the same things and access all the same things that prow.k8s.io does, and what it would look like to have this run jobs on the community build clusters. And then, I think, we'll basically need to come to a fork in the road.
A
We
need
to
decide
as
a
community
whether
we
are
willing
to
accept
full
downtime
for
a
little
while,
as
we
shut,
proud
our
case.io
down
and
then
bring
everything
up
on
the
community
owned
prow
instance
and
make
the
the
staging
pro
instance
the
real,
proud
instance.
A: Because I don't think we want to pay money for all those jobs until we've decided which ones are actually important and worthwhile. Right now that's been really easy, because it's very clear which jobs are merge-blocking and release-blocking for Kubernetes, but as we look at the wider set of jobs, it becomes a fuzzier, murkier question.
C: I'm sorry, I just want to say I need help with getting the staging Prow going, because everything is set up, but the jobs aren't running on the build cluster and I don't know why. I basically have an issue... oh sorry, I thought I copied the issue link, but I just copied the text.
A: That's good to know. I'm, excuse me, kind of heads down elsewhere, but you may be able to get somebody from the SIG Testing on-call, I think that's cool right now, to maybe help you troubleshoot a little bit.
A: Okay, I'm just doing a time check. I know we have a bunch more stuff on the agenda, so I think I'm gonna zoom forward and say I already talked about migrating release artifacts. On billing ownership: right now, the data source that drives this report... like, I can edit this report, that's super cool, but I can't click through to the data source that drives it. I can't see it, I can't migrate it, I can't change it, and I want to be able to do that, I want a community member to be able to do that. So we've got to work on that a little bit, because then we can start talking about making this billing report more granular.
A: Looking ahead to 1.24 and next year, we need to make more effective use of the credits that we have: look at getting discounts for committed use, and look at starting to set up budgeting and alerting. My dream vision is that each SIG gets a budget and they can do with that budget what they will. Who's responsible for setting up that budget is probably a big unknown question, and I would say we need input from Steering and the rest of the project in that process.
A: I think that's fair. I think it would still be worthwhile to at least be able to bucket these, like shard costs out to different buckets, so that before we start enforcing things, we notice if one of these buckets seems to have an unusual amount of usage, or its historical usage suddenly changes over time. We already did this with the scalability job, but...
A: ...it was real obvious which thing that was, because that project had scale in the name. If a SIG were to set up a new job that runs super frequently and consumes a ton of resources, it might be slightly more difficult for us to find that if it happened to spread its cost around.
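[One way to get that per-bucket visibility is to group the billing export data by project; a minimal sketch follows, assuming the standard BigQuery billing export schema and a placeholder export table name.]

```python
# Sketch: break recent spend down by project using the BigQuery billing export.
# The export table name is a placeholder; column names follow the standard
# GCP billing export schema.
from google.cloud import bigquery

BILLING_TABLE = "k8s-infra-billing.billing_export.gcp_billing_export_v1"  # placeholder

client = bigquery.Client()

query = f"""
    SELECT project.id AS project_id, ROUND(SUM(cost), 2) AS total_cost
    FROM `{BILLING_TABLE}`
    WHERE usage_start_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY project_id
    ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    print(f"{row.project_id}: ${row.total_cost}")
```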
C: I just put that in 1.24 because of the bandwidth we have right now. But before we split, I have one request on slide three. For the first item there's one requirement: we need to make sure the credits we have are eligible for committed use discounts, because the credits we have are a donation to the CNCF, and I don't know how GCP handles that. So I can ping our liaison to GCP and see if he can check internally.
A: Then, looking ahead sort of to the end of next year: by that time, Prow is all community owned, so all the...
A: ...really great progress in terms of maintaining all of the infrastructure that we have right now. I'm gonna kind of skip over this, other than to say I think a lot of it will make things a lot more self-service. Anything we can do to reduce the toil and empower more people to automatically manage our infrastructure will be good for everybody.
C: Before we take questions, I just want to say that anyone can work on the last slide, especially the validation of configuration changes. I would flag them as help wanted, but some of them are really straightforward, so if anyone's interested in working on this, I can basically guide them or walk them through those issues.
A
I'm
gonna
hunt
the
question
on
flipping
all
proud
jobs.
Do
we
need
a
deadline
just
because
I
think
we
need
to
save
that
for
a
larger
discussion
about
migration
and,
let's
see
so
arnold,
you
have
a
question
about
enabling
the
pentabot
for
groups.
Do
you
want
to
talk
to
that.
C: Yes, this is a practice done by SIG Release and by other open source projects I saw on GitHub. Basically, Dependabot is a set of tools that automatically bump the Go modules and all the dependencies you have. I see that for groups we have very old Go dependencies, so I don't know if we can basically have a weekly job that checks and upgrades those dependencies, and I'm aware we need more testing and tuning around this before we do that.
F: Honestly, it's kind of anti-Go. Go's whole thing with minimum version selection is sort of: use the version you need, not just the latest shiny thing for no reason. It also creates a lot of noise because it pushes directly to the repo, which I'm not a big fan of, and it enables itself on forks, which can be problematic.
F: So I don't know that I would rush right into Dependabot; I've definitely had some issues with it. It seems like a cool tool, and I think there are languages where it makes sense, but I'm not sure the Go ecosystem is one. The kubernetes/kubernetes repo is an example too: we don't just upgrade every dependency constantly, usually there's some justification, like it fixes a bug or a security issue.
A: Okay, I don't know, it could be cool. I think you hit the nail on the head, though: I would feel way more comfortable if we had more test coverage on it. Right now we have mostly policy-type tests, we don't really have a lot of great unit tests, and I have this sneaking suspicion the code needs to be refactored a little bit to allow more mocking of some of the services, and I kind of suspect we may have some of that ahead of us anyway. I also question whether this is scalable and reliable enough to handle syncing, like, all of the Kubernetes mailing lists. For example, there's an open dream to make this manage everything that we currently use Google Groups for, so we don't accidentally lose ownership of groups and we can keep better track, since we have API access to this stuff.
A: I don't know, I'm open to an experiment, but I want to see more test coverage first; that's my TL;DR.
C: Just one last thing I want to add: I'm acting as liaison with the election committee, because they want to run Elekto, which is basically an application built to run elections. So I will open an issue about this today.
C: Sorry, could you say that last part again? I'm just saying I'm gonna run Elekto on a dedicated project, because I don't think we should run it where it is right now.
A: I know, I'm a little uncomfortable with trying this thing out for the first time ever for the community steering election, but we'll see what happens. It'll be an interesting, bumpy ride.
H: So I'm in need of a role binding for a service account in the ii sandbox project to have access to a bucket in the public-pii project in GCP. This will allow us to get the up-to-date logs from all of the traffic coming in to grab artifacts from the infra, and eventually get a dashboard with up-to-date data. But I'm in need of that role binding so I can run the pipeline once more.
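[For reference, granting that kind of access amounts to a single IAM binding on the bucket; a minimal sketch follows, with placeholder bucket and service account names. In practice a change like this would go through the declarative config in the k8s.io repo rather than an ad hoc script.]

```python
# Sketch: grant a service account read access to a GCS bucket by adding an
# IAM binding on the bucket. Names below are placeholders.
from google.cloud import storage

BUCKET = "k8s-infra-public-pii-logs"  # hypothetical bucket name
MEMBER = "serviceAccount:pipeline@ii-sandbox.iam.gserviceaccount.com"  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET)

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {MEMBER}}
)
bucket.set_iam_policy(policy)

print(f"granted objectViewer on gs://{BUCKET} to {MEMBER}")
```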
A: You're not the only person asking for something, and, like, I don't want people to think I'm special. I tend to watch that channel when there's traffic on it, so if somebody's got something that's basically ready to go, I will get to it as I am able. My bandwidth is limited right now and will be for a while, which is why it's taken me a little bit, but it's on my radar and I will get to it soon. Also, people totally don't have to wait for this meeting to ask for stuff like this; ping in the channel, and there are more people than Arnaud and myself who can look at this stuff.
A: Okay, yeah, that is good. So, okay, Jim, you have a question about code search and hosting it on aaa.
G: Yeah, could you just educate me on who else would need input on whether that was okay? I started the work to move it, but it also kind of seems like that's maybe lower priority based on the 1.23 work.
A: While you are technically correct, I still really want it to not be a special thing that just you are running. So either there's some way you can take what you are running and open it up to more people, so they know how to run it, or, if it's not that much effort, I think having it run on aaa like everything else does would provide more people the opportunity.
F: Yeah, I think, instead of trying to guess what's better, we should just say that whatever Arnaud, or whoever else has experience with this, thinks will make it the easiest to run, automate, and open up to more people.
F: Yeah, I'm still not clear on exactly where the overlap lies between these, but in the past I would have said that SIG Testing is basically doing sort of the equivalent of Engineering Productivity at Google, that's where a lot of our staffing comes from, plus some similar teams at Red Hat, and having something like code search is a thing that that group would very much want to see.
F
So
if
we
can't
find
a
home
for
it
I'll
own
it.
I
I
want
that
to
exist
and
be
a
thing
that
we
run.
G
I
mean
it's
pretty
simple.
I
would
even
be
willing
to
help
with
that.
I
mean
what
I've
figured
out
from
trying
to
migrate.
It's
not
hard.
The
one
argument,
I
would
also
say,
is:
if
we
did
move
it,
we'll
have
the
luxury
of
it
being
similar
to
the
other
apps
under
our
repo,
with
the
brow,
jobs
that
deploy
and
everything.
A: sttts stood up the first incarnation of publishing-bot, and it was running on an OpenShift cluster for a while before Nikita and Dims ended up taking ownership of it, and it's now running on aaa, though I still need to go poke them about the manifests and stuff for that living inside of the k8s.io repo the same as everything else.
F: I think it's a problem for us to clarify where this fits in the charter, but whether it's not fitting in somewhere or it makes sense there, I would be super happy with that suggestion, just because I don't think it's totally unreasonable, and I really want to see this continue to exist.
G: Yeah, I have one PR open, and I can work on the Docker container part. That's the part I haven't done yet; hopefully Friday during my free time.
C: So, just to clarify on the question about funding: there's no issue, the funding will still exist until basically the CNCF decides otherwise.
A
Okay,
okay,
we
are
at
time
I
saw
jing
added
something
at
the
bottom
of
the
agenda
about
csi.
B: I don't know. I tried pinging Justin, he seems to be the lead for kOps, but he's not responding and I don't know.
A: Yes, I have the same problem. I saw that there's something in the kubernetes/release repository that seems to use a very similar file format, but it is not clear to me whether that is actually used, or whether there's something else that actually uses this.
F: Super quick note: if you're trying to get Justin's help and you're pinging him on GitHub or Slack, and you're a Googler or you use the Google Chat tool, you'll have a much easier time pinging his work chat. He's very responsive there, just not quite so much on Slack and GitHub, in my experience.
A: We just need to figure out, between Release Engineering or Justin, what the tool is that consumes this, who runs it, how they run it, and go forward from there. I just have not gotten responses from the people I reached out to, and, as I said earlier, my bandwidth is not great.
B: Sure, and if I find out some information I'll also share it here. All right, awesome.
A
Okay,
thanks
to
everybody
who
stuck
around
long,
I
apologize
we
went
well,
but
it
was
great
to
see
you
all
and
I
look
forward
to
seeing
you
all
again
in
two
weeks
time
at
wg
or
sig,
something
kate's
infra,
five
weekly
eating
sounds
good.
Thank
you.
Happy
wednesday,
happy
with.