From YouTube: k8s-infra-team's Bi-Weekly Meeting for 20210317
A
Okay, hi everybody. Today is Wednesday, March 17th, at least in my time zone, and you are at the Kubernetes WG K8s Infra bi-weekly meeting. I am your host, Aaron Crickenberger, also known as spiffxp in all the places, occasionally known as Aaron of SIG Beard. This meeting is being publicly recorded and will be posted to YouTube later, so we can all watch ourselves adhere to the Kubernetes code of conduct, which essentially is: be your very best self to you and others.
A
Okay, thank you all for showing up. Sorry for the confusion over what time it is and how time zones work — but, you know, it's daylight saving time. I'm going to paste the link to the agenda in chat if you want to put yourself down as an attendee, or add items that we may be able to get to if we have time. Let's see. So, let's welcome any new members or attendees — Moshe?
B
So I've actually joined one or two sessions before, okay, so I'm a bit around the place. I mainly spend my time in SIG Cluster Lifecycle, on Image Builder, and a little bit on Cluster API. Yeah, that's kind of me.
A
Okay, welcome, and good to have you around again — even if I'm really terrible at remembering names and faces. Okay, so, anybody else want to say hi or hello?
A
So that's fine, I'm gonna move on to the next part that we usually do, which is we look at our billing report. Let's pull up the doc.
A
Sorry, give me a moment: I'm going to share my screen, and maybe the illustrious Tim Hockin will take a look at billing from his side and see if it lines up. That's our spend over the last 28 days. Let's see what that looks like as a graph.
A
While I'm waiting: I know we felt like, last meeting, we saw kind of a big uptick in cost — not big, but it was the first time I feel like there was a noticeable bump — and thank you, Arnaud, for opening an issue on that. I have not had time to go drill into it. Boy, I wish this report would actually load.
A
There we go. So it kind of looks like we definitely had two maybe peakier days in terms of compute spend here, but that maybe we're kind of settling back down. This looks an awful lot to me like we had a code freeze happen, where everybody was desperate to land their pull requests — and every time they open or push or update a pull request...
A
Next up on the agenda, I wanted to go through some administrivia — which makes it sound really small and tiny; I don't know how else to say this, the other word I had was "meta". So first I want to congratulate Arnaud, our newest k8s-infra member.
A
He has consistently shown up and done great work, and has been pushing Tim and myself and Dims to remind us that other people want to help out. So I'm really looking forward to continuing to work with Arnaud. And, you know, Bart reached out and said that, as much as he really wants to help out with this group, his availability is kind of diminished right now, so he has to be moved to emeritus.
A
So thanks to Bart. So, the next item Riaan put on the agenda — but I had also meant to add this agenda item last week, I just didn't get to it in time — which is, basically: time zones are a man-made construct that are horrible, but we all live in them. I feel like maybe we want to look at a better meeting time that accommodates the fact that we have people in as many time zones as we do. So —
A
— ...in South Africa right now, so I don't even know what time zone that is, but I know we at least have people who span GMT-7 to GMT+13 right now. So I asked around, and I think what I heard was that maybe 7 p.m. to 11 p.m. GMT is people's preference.
A
Another thing to keep in mind is that by March 28th, folks in CET are going to jump forward an hour, and then by April 4th, folks in New Zealand are going to drop back an hour. So whatever time we would choose today would turn into one hour earlier for people in New Zealand and one hour later for people in Europe by mid-April.
A
So I will share my screen, because I tried to find sort of a tool to help me figure this out. Ignore the place names — I don't know where we actually live, I just wanted to get the time zones. I felt like, if I picked... first off, this is limited to what it thinks our local office times are.
A
By this time, CET would be at 10 p.m., and then Auckland will eventually drop back to 8 a.m. So that's my suggestion; that's what I was thinking.
A
If you want to talk about it now, we can. I'm also gonna throw out a Doodle with sort of some time ranges here, and I'm also open to the idea of shifting days, if we need to do that to accommodate people's schedules. Thoughts?
D
Yeah — for New Zealand time, because most of our team is in New Zealand, the 9 a.m. works, or even if we move to 8 a.m.; or, if we have to, 7 a.m. to accommodate would also work. So yeah, anything from seven is reasonable in New Zealand.
A
Okay, that sounds good.
A
Okay, that's everything. Let's see... oh, so, some alternate things we could try doing — I'll just throw them out there. One: I've seen Contributor Experience do kind of an asynchronous meeting approach, where they basically open up a bunch of threads in a Slack channel dedicated to this.
A
Another thought I had was: I could try to do kind of like a board triage session, because sometimes I feel like the point of this meeting is to force everybody to show up and talk about what we're not doing, and who's blocked, and why. Consistently going through the board and keeping issues up to date with status — is this still important? Is somebody working on it? Why isn't it done yet? — could provide that same sort of cadence, and it could be that, like...
A
Maybe
not,
everybody
needs
to
be
present,
for
that.
You
know
update,
make
sure
that
all
the
relevant
information
lands
in
the
issue.
So
anybody
working
asynchronously
sees
that
new
information,
and
we
can
also
record
it
and
sort
of
post
the
discussion
about
how
and
why
we're
prioritizing
the
way
we
are.
A
Anyway, the last thing is: I apologize for the recordings not getting up in time. There's an issue with the automation between Zoom and YouTube, and I'm going to have to create a new meeting anyway to try and get that hooked back up.
A
So, in the meantime, I'm uploading things manually to my own personal YouTube channel and then adding them into the k8s-infra playlist. So if you're, like, looking for new meetings by following the Kubernetes channel on YouTube and you're wondering where they are: they're on my channel, where you can also see SIG Testing meetings and random recordings of me playing drums. But if you just want the k8s-infra meetings, there's a playlist that I believe is linked at the top of the meeting notes. Yeah.
C
[inaudible]

A
Are the drums going? It's like — oh, those are public, and in the same place as all the SIG Testing meetings, and YouTube is suggesting the drumming videos? That's cool. Okay, yeah, that's out there. I'm gonna hand it over to Riaan to talk to us about sort of the plan to maybe move k8s.gcr.io to something like registry.k8s.io.
D
Thank you. I'm speaking on behalf of Hippie, as he is hopefully still sleeping — it was only five in the morning here. Excuse the rain on the roof, on the side, if it's a little noisy. So, we at the moment are trying to access the logs; we're working with CNCF — between Hippie and Ihor and Priyanka — because he's sorting out the last bit.
D
We are ready for the launch, but we don't have access yet. So as soon as we have access, we will be able to start evaluating what's inside. Aaron, is that from your side, that you want to see the policy discussed? Yeah.
A
I think I was going to ask real quick — because I mentioned it during... I assume this is what Moshe is here to talk about. Like, at some point, we need to make sure that people who have access to things that potentially contain PII are actually following what the CNCF's rules are for handling PII. And so I know that —
D
And I'm sure that's not a problem. As soon as it's available, I'll ask AP to share it with you, and, yeah, it would make sense to carry the same across the different projects.
D
Then, the next two topics — I do not know the content in depth, but I can just mention that we are busy having conversations with Docker Distribution and with Harbor, so you can have a look at the issues there. There is some conversation around how to migrate, and what the options are with Harbor and Docker Distribution. And then we did create an issue for getting a playground where we can start testing our infrastructure movements. So yeah, that is in progress; I will get back to you.
A
That sounds good to me. And I was going to start with whomever has signed this document — so I'm assuming this is Riaan and Hippie and Caleb and folks from that team. That's a great start, but I feel like we should ensure that that sort of process — "oh, there's PII here" — we should make sure that we know what the rules are, and we should make sure we've got people who have agreed to those rules.
A
I think so, and I'm happy to go do the poking — you tell me. Did I miss the part where we're, like... I mean, do you feel like we should? I feel like I missed the part that was like: "So, CNCF, what exactly are your expectations for PII?" I feel like what it was, was: CNCF, go make some people sign a document and then tell us that they signed it.
C
Yes — it's not even that transparent. Mostly, I think what I saw was Priyanka saying: yes, the ii crew is cleared for this, and we'll have the rest in place eventually. I don't know when "eventually" is; I don't know what the agreement is or what the rules are. I'm not sure that I need to, honestly. Like, I mean, it feels like, for the purposes of transparency, it should probably be a public statement, but I'm not sure I care that much.
A
Yeah — sorry! Yes, I'm okay with that, but I feel like this is a very specific set of data; there are other sources of data that we should also look at.
C
Okay. From a tactical point of view: Riaan, what do you think is the right approach? Is it: give you a playground and let you make a mess, then figure out what we're doing, and then blow the whole thing up and do it for real? Or should we turn on real logs and grant you real access to real logs, so that you can actually work with real, substantial data?
D
I think it's two separate issues. One is: we need real logs, and access to real logs, because there are a lot of questions regarding which images are most important and carry the most data, and which providers are using them, to determine the flow of data. So that is the one very important reason why we need the real logs. And then, secondly, the playground — exactly: we totally want to mess it up, and see where it goes, and develop with that before we go real. So actually, both.
A
Please put that information in that issue, because I agree — both. I think we're on the same page, but let's make sure the requirements are written down, so that it's something we can collaborate on asynchronously when I get time, or when somebody who has access to create projects and set all this stuff up gets time.
C
Okay — I don't care, as long as it's fairly obvious, like, you know, "pii-access" or something, which we can then use to add... Like, we could go make the script changes to turn on access logs for all of our buckets and start with that, right? That'll start collecting data immediately, and within a day or two you'll have a body of really decent data to work with.
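(For reference: turning on access logs for a GCS bucket is a single bucket-metadata update. Below is a minimal sketch using the Go client library; the bucket names are hypothetical placeholders, not the real k8s-infra buckets.)

```go
// Sketch: enable GCS access logging for an artifact bucket, writing the
// logs to a separate log bucket. Bucket names here are made up.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Point the bucket's usage/access logs at a dedicated log bucket.
	_, err = client.Bucket("example-artifacts").Update(ctx, storage.BucketAttrsToUpdate{
		Logging: &storage.BucketLogging{
			LogBucket:       "example-artifacts-access-logs",
			LogObjectPrefix: "access",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```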
A
Make it a group that's targeted at access to just these specific logs. I feel like a group like "pii-access" is a really broad term.

C
It is, that's right — scope it however tight you want.
C
Probably "pii" should be in there. Okay — there's one more question: we have GCS access logs, and we have this load balancer that fronts the dl stuff — or, I don't know, Justin set up a load balancer that fronts some of the artifacts — so that will be a separate source of logs.
C
At least the GCS logs are where the bulk of our money is going right now, so that's probably the most useful stuff. When we figure out if there's going to be sort of a unified access or not, and how we're going to do mirroring or not, we may have to shift what we're doing, but I think the analysis won't fundamentally change.
C
Okay. Basically, one option that was discussed was sticking an HTTP load balancer in front of GCR as part of the mirroring protocol, and if we did that, we would be using the HTTP logs instead of the GCS logs. But hopefully the analysis will be the same; just the source of data will be different.
A
I agree. I don't know — I guess my other random comment here is, I want to make sure... I feel like you guys are, like, looking — hey guys, sorry — I feel like y'all are going real hard at, like, Harbor.
A
...what's in scope and out of scope for this problem, and a couple of alternatives we're looking at. And I say that in terms of — I feel like Ricardo — not to put you on the spot or anything — but I feel like Ricardo has also been exploring some ideas that don't necessarily involve Harbor, and, you know, it's also unclear whether they solve the whole problem or different parts of the problem, that sort of thing. So I would encourage collaboration in exploring alternatives.
E
We are going to bootstrap a machine and then have to maintain a Harbor — I'm saying that as a first concern — and it's not easy to maintain a Harbor registry when you have, like, a lot of repos and a lot of things going on, and to update that. So this is probably going to be a maintenance issue for us. That's just my concern about moving to something not managed by Google or by Amazon or something else. So I have just started to look.
A
Yeah, let me give you co-host access so you can share that diagram. I promise I won't [unclear].
E
That said, that was really fast as well — and some CNCF ambassadors as well. So this is actually what's going on: we have, like, this anycast that I'd be passing through the CDN, and I'm forcing the request header to point to k8s.gcr.io — even if I am pointing to here — so the registry answers well. So this was the CDN configuration that I used, forcing cache.
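(What E describes — a front end that forwards registry requests but rewrites the Host header so the origin answers as if it were addressed directly — looks roughly like this minimal reverse-proxy sketch in Go. The hostname is the real registry under discussion, but the code is purely illustrative, not the actual CDN configuration.)

```go
// Illustrative only: a tiny reverse proxy that fronts a registry and
// forces the upstream Host header to k8s.gcr.io, the way the CDN
// configuration described above does.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	origin, err := url.Parse("https://k8s.gcr.io")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(origin)

	// Force the Host header so the upstream registry serves the request
	// even though the client addressed a different hostname.
	director := proxy.Director
	proxy.Director = func(req *http.Request) {
		director(req)
		req.Host = origin.Host
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```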
C
So, Ricardo, let me quick give you a peek at what's happening behind the scenes — can you go back to that? Yes, yeah. When you hit k8s.gcr.io, that is anycast — well, not quite anycast, but whatever Google's doing for their stuff — that will enter the Google network at whatever point of presence is closest to the client, and it will decide, based on that, whether to route you to the US replica, the EU replica, or the Asia replica; and those replicas are multi-regional within the continent.
C
So, like, there's two or three in the US, and two or three in Europe, and two or three in Asia, so it should be fast. The CDN front end here doesn't add much, except possibly distributing the download bandwidth even further, pushing it out towards the edge more. What I'm really interested in — maybe you'll get to this — what I'm really interested in is: how do we expand that back-end set, so that it's not just k8s.gcr.io but also something-dot...?
E
A good question — and to point that through to you: I have this back end that I'm pointing, actually, to — let me see if I can edit here — to an external origin, right? So I could use the CDN and point to, as an example... we had another — sorry, that's in Portuguese because of my browser — but I can add another back-end service, pointing maybe to Amazon or to docker.io or anything else, because all of them follow the same —
E
— I guess, the same schema, right? So you might have, like, the name of the registry, slash path, colon version, or something like that. So we could use the load balancer in front also to rewrite those, right? So I can make some rules and say: okay, I'm going to change this to my back end here; but if I have, like, a specific path, I can send that to another back end. The problem here — the biggest problem here —
E
— is that even if I ask here, through my CDN, when I ask for the blob of the image, it redirects to GCS. So I cannot rewrite the GCS — the 302 Found response to GCS — here, and this is a problem. I'm getting cache hits, and I'm going through the CDN for the manifests, but I was thinking: oh, those answers are really small — what's going on? So I saw that this is —
E
— this is going through storage.googleapis.com, which I saw — in my case — was entering through Argentina. But yeah, this is the main problem. So I can take a look at whether I can use, like, the balancer here; or — I know that I can also use CloudFront the same way, but I was trying to make this work in Google CDN. I know that I can.
A
Yeah — thank you for walking us through this. I think this is exactly why I want to make sure that you and Hippie's team are talking through possible paths to implementation here, and I really appreciate your help.
G
So, to add a bit of context: with the CDN, for example, it would by default be a proxy to different back ends, right — the back ends being AWS, or different other clouds, or GCR. Then everything would be proxied, which would end up — or, the bandwidth would end up — in our CDN and on our bill, basically. So, basically, the issue that came up — that, besides the image manifest, the actual data is redirected to GCS — is kind of what we want, right?
G
So we want to have the main image manifests be delivered centrally — either by a proxy with GCR as a backing store, or whatever custom implementation — and then everything else that is bandwidth-heavy should be redirected with a 302, with either split-horizon DNS or something else, to specific endpoints in Azure, in AWS, or GCR; so that, basically, the blobs — the data blobs that are costing us the most — are delivered locally, even though the docker pull is officially pulling from the registry.
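(The shape G sketches here — answer the small manifest requests centrally, but hand the bandwidth-heavy blob requests a 302 to a region-local mirror — could look roughly like the following. The URL paths follow the Docker Registry HTTP API v2 layout; the mirror hostname and region logic are hypothetical placeholders, not a real design.)

```go
// Illustrative sketch of the redirect scheme described above: manifests
// are handled centrally, blob downloads get a 302 to a nearby mirror.
package main

import (
	"log"
	"net/http"
	"strings"
)

// pickMirror stands in for whatever geo/IP logic a real service would
// use; the hostname here is a made-up placeholder.
func pickMirror(r *http.Request) string {
	return "https://us-east.blobs.example.org"
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Registry API v2: blobs are /v2/<name>/blobs/<digest>,
	// manifests are /v2/<name>/manifests/<reference>.
	if strings.Contains(r.URL.Path, "/blobs/") {
		// Bandwidth-heavy: redirect the client to a region-local copy.
		http.Redirect(w, r, pickMirror(r)+r.URL.Path, http.StatusFound) // 302
		return
	}
	// Small manifest/API traffic: serve or proxy centrally here.
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```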
A
That kind of — yeah. Let's help define the problem away from this meeting, and let's come back to this meeting with sort of a proposal of what we think the problem is, what we think the requirements are, and so on and so forth. Not to cut you short, but I did want to respect the rest of the stuff we have on the agenda — and I will do so by kicking my own thing off the agenda. Let's just have Moshe talk about the agenda item he added.
B
So, that's just a comment there in terms of on-call. So, I understand —
B
Okay, so I've started working on a working document, really to define what the constraints are, and what the paths and items are that need to be completed before, for example, a request for volunteers goes out. I know, like, that PII group would probably be one of those things; I know the security model is another one that needs to be addressed.
B
I think there are ways to solve all of these things. So, one of the things on access is that we're actually working on a break-glass controller for other clients: you link that up to an on-call schedule, and only when an incident is triggered, you hit the button, and then that button grants you access and then sends out notifications to everybody. So you can trust-but-verify — or trust, but not give blanket access.
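(A break-glass flow like the one described — access that can only be granted while an incident is open, is time-boxed, and always notifies everyone — might be sketched like this. Everything here is hypothetical — the function names, the incident check, the notification target — and stands in for whatever controller is actually being built.)

```go
// Hypothetical sketch of a break-glass grant: time-boxed access that can
// only be triggered during an open incident, with loud notification.
package main

import (
	"errors"
	"fmt"
	"time"
)

type Grant struct {
	Member  string
	Role    string
	Expires time.Time
}

// incidentOpen would be wired to the paging / on-call system.
func incidentOpen() bool { return true }

// notifyAll would post to a mailing list or Slack channel, so the grant
// is trust-but-verify rather than silent blanket access.
func notifyAll(g Grant) { fmt.Printf("break-glass grant: %+v\n", g) }

func breakGlass(member, role string, ttl time.Duration) (Grant, error) {
	if !incidentOpen() {
		return Grant{}, errors.New("no open incident: access denied")
	}
	g := Grant{Member: member, Role: role, Expires: time.Now().Add(ttl)}
	// A real controller would add a temporary IAM binding here and let a
	// reconciler revoke it once Expires has passed.
	notifyAll(g)
	return g, nil
}

func main() {
	if g, err := breakGlass("oncall@example.org", "roles/viewer", 2*time.Hour); err == nil {
		fmt.Println("granted until", g.Expires)
	}
}
```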
B
One gives you rights to commit code, and one gives you rights to fix things that are broken — that may need code changes after the fact, but one is really about applying band-aids when you're bleeding; the other is: how do we improve the health in total and not reduce that health. So that's kind of my thoughts on the matter. I don't know what everybody else thinks.
C
So I'm curious — like, I think it's great that we would try to formalize this. We're in a... not a great spot for anybody new who wants to try to pick up; in fact, we're not even in a great spot for somebody who knows what they're doing to try to pick up stuff. Like, I don't know anything about some of the dimensions that you've listed here. Like, I couldn't —
C
— I couldn't help you if Prow went down, despite holding all the keys. So the question I guess I have is two parts. One: can we do this before we've really established runbooks? And: is there some underlying syndrome that you're trying to address through this — like, something urgent?
B
No — so, I think, well, part and parcel of sending a call out would be to build those runbooks; I don't think you can do this without those runbooks. So there's, I think, a set of criteria that we need to meet before you can say: okay, let's start requesting people to be on call, or asking people to start applying to be on call. So I understand there's a lot of legwork to be done before that, and that's something that I can allocate some bandwidth to.
B
But
it's
it's
if
you
if
we
set
the
goal
and
we
work
towards
a
goal,
it's
much
easier
than
we
have
this
big
problem
and
we
need
to
solve
it
and
we
don't
know
where
to
start,
and
we
don't
know
what
the
milestones
are
and
so
like.
I
don't
want
to
put
in
a
whole
bunch
of
effort
and
then
three
months,
six
months,
12
months
down
the
line
like
yeah,
but
we
we
we
still
need
to
do
x,
y
and
z
before
we
can
enable
this.
B
So
if
we,
if
we
put
a
plan
together
to
say
this,
is
what
we
need.
This
is
what
we
all
agree,
that
we
need
and,
and
then
it
becomes
an
objective
measure
as
to
when
that
call
goes
out
and
and
when
how
you
process
applications
for
on-call
or
how
you
communicate
this
intent.
So
that
that's
the
one
aspect
in
terms
of
is
it?
Is
there
something
urgent?
I
don't
think
there's
something
urgent
per
se.
So
this
this
came
out
of
the
the
certificate
expiry
that
that
occurred
the
other
day
during.
B
I
think
it
was
around
2
a.m
at
night,
and
I
I
picked
that
up
like
12
hours
before
the
certificate
was
due
to
be
expired,
and
I
try
to
get
hold
of
people
and
like
there's.
There
was
nobody
available
who
had
access
to
do
anything
about
it,
and-
and
that
was
like
a
really
simple
thing
that
that
could
have
been
fixed
like
it
doesn't
require
yeah.
So
that
was
an
example
of
the
type
of
of
on
call
that
I
think,
is
necessary.
B
I think those are things where we need to understand how we protect those clusters from on-call — so that on-call can't impact them or break them in any way — and have a separate Prow on-call. And I don't think that really needs to be 24/7/365, because the impact of that is much lower than if core code serving goes down: so, packages, registries, DNS.
C
I'm
glad
you
said
the
last
part,
because
I
I
agree
that
something
like
prow
like
demands,
dedicated
ownership
right,
like
there's,
no
way
that
we
can
enlist
a
volunteer
army
to,
I
think,
get
that
level
of
depth
and
do
other
stuff
like
it's
not.
It
doesn't
seem
like
a
reasonable
requirement,
at
least
off
the
bat.
C
The
rest-
I
guess
I
I
don't
disagree
with,
except
that
I
wouldn't
characterize
the
cert
stuff
as
normal
or
simple
like
it
was
actually
it's
kind
of
an
extraordinarily
weird
situation
that
certain
manager
doesn't
handle
it
properly
and
we've
painted
ourselves
a
bit
into
a
corner,
and
we
know
we.
Six
months
ago
we
set
a
clock,
and
six
months
later
we
had
to
reset
the
clock
and
six
months
from
now,
we
will
have
to
either
have
fixed
it
or
set
the
clock
again,
but
eventually
that
needs
to
stop.
C
That said — I'm sorry, I'm just talking to hear myself; I don't know what I'm saying. I think —
A
I guess I would love for somebody to articulate what the plan should be — and I think this is a fantastic start, like, for real. I can't even tell you how long it took me to try and articulate what the plan should be for: what even is the infrastructure that we are trying to migrate out of the project; who [knows] about it; who are the stakeholders; how does it work; why does it work that way; should it work that way? And, like, that's kind of the prerequisite to a whole bunch more on-call responsibilities showing up. But I think, like, you should definitely think about: with what we have, do we have enough to do on-call for certain pieces or portions of this project?
A
How do we do better? I would love your help in continuing to kind of spec this out, lay this out, and see if we can put together milestones and encourage people to show up.
A
I feel like one of the models we sort of have here is that there are specific services that run on, like, the aaa cluster. I think it'd be neat to figure out how we make sure those people are capable of running their stuff and fixing those problems; and then you can kind of go back the other layer of, like: this is super-critical infrastructure, and we need to make sure that you understand how that works. To your specific problem of, like, nobody being awake at that time who had access — this is kind of part —
A
— part of why Arnaud is a new member. Arnaud and Bart were — are — both in this time zone, which is not gonna provide us full follow-the-sun coverage, but it does guarantee we have a better shot at somebody being awake in a time zone where other contributors are also awake.
A
I've also seen a bunch of activity from Nikita, who contributes code and automation and testing to this project, and she is in the Indian time zone — I forget which one. I just haven't really put her on the spot —
A
— yet, given that adding India to the mix, in terms of scheduling this meeting and whatnot, would be even crazier. But I think, you know, it's sort of that organic, bootstrappy thing where, like, the people who are interested in showing up and doing the work and helping out — we're gradually trying to build up that pool of people and make sure that they don't all live in a single time zone.
A
So if you are interested in joining that pool of people — or in describing what you think the formal process should be: the requirements to join it, what infrastructure we need to build or stand up to support it, and then who we need to maintain it — like, I think you're proposing a very reasonable thing.
C
And just to be clear: there isn't, as far as I know, a single list that we've written down anywhere that is maybe as clear as what you've written in this doc, in terms of what the things are that we own, right? And your list certainly isn't complete, but, you know... The problem is that it spans levels, right? There's Google-service-oriented stuff, like DNS.
C
...what's going wrong — and I don't want to wear that crown; I'll happily leave that to Aaron and the other SIG Testing folks. And there's stuff in the middle, like running the clusters, and the things that run in the clusters, right? We cross that boundary: as infrastructure, we own the clusters, clearly, and so if something went wrong with the cluster we'd want to know. But we also run some number of those smaller services on there — like cert-manager, a great example, right? It's kind of infrastructure for the cluster.
B
So I think that we should split what needs to run inside the cluster versus actually running — or fixing — what should be inside the cluster. So you can have a contributor ladder for changing stuff inside the cluster; but I think, once something is in GitHub and this is what should be running, that type of ladder should be different from somebody who's just tasked and enabled to make sure that what should be running is really running.
C
Sure — generally, we have different Google Groups for each of those roles, right? And though we haven't consistently named them in terms of spanning the in-cluster/out-of-cluster divide, it's pretty obvious: we have cluster-admins, and rbac-cert-manager, and those sorts of groups. So we have some place to designate ownership, but I don't feel like we're actually demonstrating it.
A
I would love your help in kind of trying to put together that taxonomy, even just in, like, label form. And maybe we can converge to the same label, the same Google Group for privileges and roles, and the same set of owners living in the same location for this service, this app, this piece of infrastructure — and then it becomes really clear who owns what.
B
So, also coming back — I agree with you — so, coming back to the DNS thing: if DNS goes down, maybe the on-call person can't fix anything, but if they have rights to log a support ticket for it, and it can go through to the Google DNS team like any DNS customer, then I think that meets the on-call requirement. So it's not necessarily that they need to be able to fix it, but they need to be able to take it to the next step, the next escalation point.
G
Which probably is a bit harder, because the groups are scoped more to the specific items, and support is taken out of that, if I'm not mistaken — because filing an issue with support is a really high-up-the-ladder kind of thing. So that might —
G
So the escalation path could be — correct me if I'm wrong — that, basically, per service, we should try to have coverage in terms of different time zones for that specific service. Say, DNS: people specific to the DNS thing who can debug it, or something like that, but who can't create the support ticket — they can analyze that it's not on us but on Google, for example, and then they would escalate that to, say, currently, the admin team, which has access to everything, including maybe creating the support tickets. Whereas now, with on-call, we have the ability to be a bit more time-zone-spanning. Is that right?
B
So what is the security risk — or the risk — of somebody creating a support ticket? I just —
A
I want to call out that we're at time right now. And so I feel like we are very receptive to your input, Moshe — and Michael. Again, I'm very receptive to the idea of y'all hashing this out offline and kind of helping iterate on this, until we get to the point where there's a more fully formed proposal that we can kind of agree on, and then break out and figure out who's going to work on this and get it moving forward.
A
Okay — I guess, like, I kind of want to caution that a problem I have run into in the past is: I spend a whole bunch of time creating this very crystalline, perfect structure of the work that needs to be done, and then it never gets done, because I haven't really thought much about, like: how do I incentivize people to do this? Where am I getting the people from for this? So on and so forth.
A
Now, I think what I got out of the issue you filed to start this conversation was, like: yeah, but you still need to be able to call for volunteers. And so this is about kind of figuring out when we are ready to make that call, and for which parts of the infrastructure — and I think that's cool, but —
B
So that's why the distinction between the on-call versus the contributor ladder. If you put out a call for volunteers to follow the contributor ladder, that ladder is a lot of chopping wood and carrying water, and a lot less reward.
B
So
it's
difficult
to
motivate
people
for
that,
whereas
if
you
calling
people
to
be
on
call,
but
they
don't
have
to
chop
the
wood,
because
at
the
end
of
the
day,
what
we're
looking
for
is
not
the
people
to
drop
the
wood.
Yes,
we
need
that,
but
that
problem
is
orthogonal
to.
We
need
the
people
to
have
access
which.
B
100%. So I think that I'm happy to do a lot of that chopping wood to get this thing off the ground, and to that point where we can put out a call for actual on-call people. So I'm prepared to commit time to this, and to take it and follow it through.
C
So maybe, concretely: first of all, this list is a good start, but incomplete. Maybe we can flesh out the list and sort it two different ways. One: by, like, ease of approach — DNS is a relatively easy thing to say "hey, I'm on call for", because there's really nothing to do, right, except to say "oh my god, it's not working, you're right" — versus, like, Prow, which, if it's not working, you're actually expected to do stuff, right?
C
So there's a spectrum there. And then there's a spectrum of, like, importance: what are the things that we think are uncovered that are most likely to explode in our face? Yeah.
A
And to the concrete thing Moshe talked about that he said motivated him to sort of start this conversation — I've said it in Slack: like, Munnelly has been a hero multiple times for us now. We really need to stop doing that to him; we really need to prioritize making that something that people can be on call for — not just... totally.
C
Right. And, honestly, in terms of demonstrating the ownership that I think will make this successful, it's not just learning to do what James has done three times for us, but actually pushing on cert-manager and saying: hey guys, this is an actual problem that we need to fix in cert-manager upstream — or looking for an alternative solution, if cert-manager doesn't care to fix the sort of operating mode that we find ourselves in. Either we change our operating mode, or we change our certificate management system.
A
I think the issue that Moshe opened is probably a great place to, like, post a link to this doc. Yeah, send something out to the mailing list; yeah, post it to the channel. Let's get some eyes on it. Cool — I'll do that.
A
Yeah, thank you — thank you for pushing us. Cool, all right. Thank you, thank you everybody for your time. It's great to see you all, as always, and I look forward to seeing you online and back here — maybe a different day, maybe a different time — in about three weeks. So take it easy.