From YouTube: Kubernetes WG K8s Infra 2019-02-06
A: All right, good morning, evening, afternoon, or dark time, everybody. Today is Wednesday, February 6th. I am Aaron of SIG Beard. You are at the K8s Infra working group's weekly meeting, Wednesdays, Pacific time. This is a publicly recorded meeting and will be posted to YouTube, so we all get to adhere to the Kubernetes code of conduct, which boils down to: don't be a jerk. I posted the agenda in chat, and I... do we have anybody new here? I feel like I know all of you jokers.
A: So, just to hopefully connect the dots: you're saying that in the month of January we spent seventeen dollars and sixty cents worth of credits. And I think what we were doing in the month of January was running DNS for the community, and Justin was also running some kind of utility cluster, and that activity in total cost us $17.60.
B: What I can do — I can add a PDF version of the report every second week, attached to the notes, if that would be good for you. Yeah, I can do that. That is, let's say, in progress: printing a PDF of the report is, like, two clicks for me, and I can do it on a regular basis, like every second week. Even if I will not be able to join the meeting — if we don't meet — you'll still have your report.
A: That's fine! Yes, this is supposed to be the slow, painful cadence, to encourage you to automate yourself out of this manual solution. And when we get to a point where we feel like we can just go click a button and see a daily, updated report, maybe we can consider being a little more flexible with what projects we take on. Absolutely.
E: So for anybody who actually cares, I've got the January numbers — I hope they're correct — in front of me, and it shows: VM cost was $27.19; DNS queries for January — 42 million queries since the turn-on — was 16 bucks; some more VMs, another 10 bucks (not sure why those are different); oh, this is our storage, $2.00; load balancing, 84 cents; DNS zone, 8 cents; interzone traffic, two cents; and we've hit the bottom of anything that's not zero. So we actually have a report, sort of by SKU, that we can look at.
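A minimal sketch of how the per-SKU report above could be pulled automatically rather than by hand, assuming billing export into BigQuery is enabled; the project, dataset, and table names here are placeholders, not the working group's real ones:

```python
# Sketch: per-service/per-SKU cost report from a GCP billing export table.
from google.cloud import bigquery

client = bigquery.Client(project="k8s-infra-billing")  # hypothetical project

QUERY = """
SELECT
  service.description AS service,
  sku.description AS sku,
  ROUND(SUM(cost), 2) AS total_cost
FROM `k8s-infra-billing.billing.gcp_billing_export_v1_XXXXXX`  -- placeholder
WHERE invoice.month = '201901'
GROUP BY service, sku
HAVING SUM(cost) > 0        -- "the bottom of anything that's not zero"
ORDER BY total_cost DESC
"""

for row in client.query(QUERY).result():
    print(f"{row.service:30} {row.sku:55} ${row.total_cost}")
```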
A: I see Justin's hand. Okay, Justin.

D: I can give a quick update on — there's sort of an action item thread; I don't know that we have an actual action item — but the to-do of serving HTTP redirects for artifacts.k8s.io. Brendan Burns, who I don't see here, put up a KEP; I believe there was a first draft that was approved, and some additional modifications — anyway, there's some progress being made on that. I put up — I personally put up an MVP strawman of some very basic code.
F: A carry-over from last meeting — I had to leave early, so I don't know if it actually got discussed or not — but the basic idea is that, for Let's Encrypt, we're currently — so this is for the TLS certs that we use for the redirector, so not the artifacts one that Justin was talking about, but, like, all the other various subdomains that go to k8s.io and things like that.
F: So currently we're using DNS challenges for that, and that was fine as long as I had access to DNS directly, which, you know, I don't anymore, now that we've moved DNS. And also, like, maybe we want to change that. You know, it's kind of — it was only semi-automated, i.e. I ran a script, as opposed to, like, actually automated, the way Let's Encrypt should be set up. So I think, you know, we might be able to use cert-manager or something like that.
F: Which doesn't help us, since we need like 50. So I might look into that. If anyone else is interested in this, ping me; we can figure it out. I mean, it seems like it just needs to — I don't think it's a lot of work. I think we just need to experiment and make sure we do this carefully. But if anyone is interested or has thoughts, let me know.
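One possible shape of the cert-manager route mentioned above, sketched against the early-2019 (certmanager.k8s.io/v1alpha1) API: a ClusterIssuer that answers Let's Encrypt DNS-01 challenges through Cloud DNS. The contact address, DNS project, and secret names are assumptions:

```python
# Apply a hypothetical cert-manager ClusterIssuer for ACME DNS-01.
import subprocess

ISSUER = """\
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: wg-k8s-infra@example.com        # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key
    dns01:
      providers:
      - name: clouddns
        clouddns:
          project: k8s-infra-dns           # hypothetical DNS project
          serviceAccountSecretRef:
            name: clouddns-sa
            key: key.json
"""

subprocess.run(["kubectl", "apply", "-f", "-"], input=ISSUER.encode(), check=True)
```

With an issuer like this in place, each of the ~50 certs becomes a Certificate object that cert-manager renews on its own, instead of a script someone has to remember to run.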
F: Yes, we have until April 11th, I believe, which is when the cert expires. And, you know, if we get there and somehow I haven't figured out automation, we can, you know, figure out some other way. But I think, you know, this shouldn't take more than a day's work, so we should just plan to do that sometime in the next month.
D: What I want to say is, like: does that mean that we should assume that we are not gonna get credits from AWS-type things, and, like, lean more on Google products or other donations to the CNCF, rather than assuming that we will have an S3 bucket? Or is the CNCF likely to fund an AWS account out of their general fund, I guess?
B: Understand: if we rely on the same infrastructure, in the same environment, as we have the tests in — if the billing issues continue, if the credits issues continue — we will also not be able to use it for artifacts, because it's a single account, a single billing account, with, like, all the assets connected to it.
A: Okay, yeah. I had started a discussion offline with Bob around the testing stuff, but the desired end state is something that would meet your needs, Justin, where I really feel like AWS should be crediting the CNCF, and the CNCF should be using that to run project infrastructure in an agreed-upon manner as defined by this group.
E: I presume, Justin, you're talking about the mirroring-and-stuff process. So, yes, you get that design, and if we're all in agreement on the design, then we should share it with Bob and say: look, the implications of this are, if we have no money, then there will be no mirror in Amazon. Yes — and there is a KEP that is the beginning of that. So.
B: The bigger question here, that we started to discuss in the post-meeting two weeks ago: are we interested in, like, speaking about the AWS account, and speaking about AWS items that we have — given to us as a service, given to us as a community — specifically for testing, but probably not only for testing? And who is a clear owner, who is the technical owner, of all this stuff?
A: At present, like, I'm actually not clear, because I feel like there are a couple different accounts that are floating around to be able to run, like, EKS stuff. So I feel like the people who run SIG AWS are the points of contact for that now — and, probably more specifically (I saw Justin perk up), probably the AWS people who are in SIG AWS are the points of contact. What I want the answer to be is that you are the point of contact, because you would have sufficient access to billing, and you would be able to run the same kinds of regular reports that we just did for this group, to be able to say ahead of time: oh no, it looks like our burn rate is too high, and we should do something about this. It feels like whatever mechanics are in place to turn the crank to refill the account that you shared with the project to run tests aren't really hooked up correctly.
B: I can do the billing part as well, but I don't have capacity to manage this account as a technical person. I mean, like, I don't have time to just look at what we run there, and in which amounts, and so on. So I'm not concerned about the GCP account, because we have a group of people right here in this meeting — and it is this group — who understand what we run, how much, when, and so on.
A: The AWS account — so, like, we could, I think, sift through and find the jobs that use the account that you provided to us. I suspect the majority of the jobs have the word kops in them, and so I would then ask the kops project, like, whether they feel they have sufficient visibility into the jobs — whether they really need them all.
D: AWS does have reasonable billing reports, and we can certainly, like, start tagging our jobs to produce that technical stuff, if that is an ask. But I feel like the provision of funds — of any funds — is right now the blocker, and if there's a question about utilization of funds, we can do the technical things to make that more visible, quickly.
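A sketch of the "tag the jobs, then report per tag" idea: once jobs carry a cost-allocation tag (the tag key here, prow-job, is an invented example, and a tag has to be activated in the billing console before it shows up in reports), last month's spend can be grouped by it:

```python
# Group last month's AWS spend by a hypothetical "prow-job" tag.
import boto3

ce = boto3.client("ce")  # Cost Explorer
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "prow-job"}],  # assumed tag key
)
for group in resp["ResultsByTime"][0]["Groups"]:
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{group['Keys'][0]:60} ${cost:.2f}")
```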
B: From Reuben, who was working on the cluster API provider, as far as I remember — and there was a request. Yes, the cluster API provider, under the kubernetes-sigs org. So he filed the request to create accounts for them, to use this infrastructure.
A: So — because I just feel like I heard an ask there, and I heard an ask from AWS as well, to get some kind of, like, "what's the run rate", so that we can forecast for the next ten months: we'll take that run rate, we'll multiply by the number of months, and that's how much money will be dumped into the account. And I feel like that is insufficient, because, at least for all of the tests that run on the GCE side of things, we're very welcoming. We say: hey, if you're a part of our community, if you're part of our SIGs, please come use our test infrastructure to run your tests. We have not yet gotten into that sticky conversation of: are your tests worthwhile enough for us to spend the community's money on? And, like, implicitly, a lot of the AWS jobs have already jumped the queue directly into that discussion. So, like, should we say that we'll take the current set of jobs, and that's it?
D: I was gonna say: what if we look at our current run rate and try to, like, get approval for that, plus a little bit? And then we can try to fit into that, and have that conversation. And, yes, we can, like, eliminate a lot of kops jobs; we can trim the fat and budget appropriately to a budget of — I think it looked like three months was ten thousand on the tests.
A: This is where — I don't know that I'm asking, on a technical level, for you to provide a breakdown of all of the assets. I just think that if we have a high-frequency-enough total burn rate, then we can start asking "wait, who changed something?" if we see a sudden uptick in usage. But the trick is, we have to have somebody who has access to the billing information to give us that information soon enough.
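A sketch of the early-warning half of that: an AWS budget that notifies this group before the burn rate becomes a surprise. The account ID, monthly cap, and address are all placeholders:

```python
# Create a monthly cost budget that emails at 80% of the cap.
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",                      # placeholder account
    Budget={
        "BudgetName": "wg-k8s-infra-monthly",
        "BudgetLimit": {"Amount": "3000", "Unit": "USD"},  # assumed cap
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,                     # percent of the limit
        },
        "Subscribers": [{"SubscriptionType": "EMAIL",
                         "Address": "wg-k8s-infra@example.com"}],
    }],
)
```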
A: I agree. I feel like you should talk to somebody at AWS to help you with that — your incentives are aligned with our incentives now; you want to get this off of your manual human plate. (That's correct, yeah.) And then, I feel like, in terms of the escalation path, this group could be notified that, hey, AWS usage seems a little off the wall, and we can go find the appropriate people. I still feel like it's a little too early to have, like, the appropriate cabal of people and a set of documented policies.
H: I think the underlying thing here has been that no one is monitoring basically any of this at all — and not just, like, the billing; like, you know, maintaining this stuff, making sure that it keeps running properly. It's kind of just been punted off to testing, and most of the people working on it don't really have the familiarity with AWS.
B: Also, I'm concerned about possible security leaks. Like — we have to define a security policy, for AWS specifically, and for GCP as well. Because, like, if someone leaks the private and public keys for accessing the infrastructure, and bad people run, like, high-cost machines there, that will burn out our credits in, like, a few hours. And unfortunately, I'm not sure that the current state of our infrastructure, from the process perspective, is…
A: I feel like we're far afield here, but I just want to use this to illustrate — this is why it's really important to have people who care about the credentials not being leaked, and are paranoid about infrastructure being misused, reviewing the jobs that use those credentials. So, there are some people who would really wish we lived in a world where, like, the repos could just write their own job definitions, and they could be just, like, a Travis-YAML-style file inside the repo, and then, like, some secret group of people wouldn't have to read through them and approve them — and I would love to live in that world too. But how can we trust that, like, the job that's written inside of that repo is not just "echo the AWS credentials into the console", right? But—
D: I mean, it sounds like they would like to contribute credits for the particular fire right now, and it sounds like we have a broad outline of an ask that we can go to AWS with and say: can you fund at this level? And then we can review the spend on that somehow, and track that back to cut the fat. Is that the general— (Oh, yeah.)
E: If somebody files an issue, I — or one of the other people; I forget who is in the OWNERS file exactly — can open up a PR, review it, push. And once the PR is merged, we just run one command: there's a script in the repo which will push to the canary, run a test, push to prod, run a test, and then fail along the way if it needs to.
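The script itself isn't shown in the meeting, but the canary-then-prod shape it describes looks roughly like the sketch below, assuming an OctoDNS-style setup; the config file names and probe domains are invented:

```python
# Push DNS to canary, smoke-test, then push to prod and smoke-test again.
import subprocess, sys

def push(config):
    # octodns-sync applies the zone data described by the given config file.
    subprocess.run(["octodns-sync", f"--config-file={config}", "--doit"],
                   check=True)

def smoke_test(domain):
    # Trivial probe: fail the run if the name no longer resolves.
    out = subprocess.run(["dig", "+short", domain],
                         capture_output=True, text=True)
    if not out.stdout.strip():
        sys.exit(f"smoke test failed for {domain}")

push("octodns-config.canary.yaml")   # hypothetical canary config
smoke_test("canary.k8s.io")          # hypothetical canary zone
push("octodns-config.prod.yaml")     # hypothetical prod config
smoke_test("k8s.io")
```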
A: No, it's fine; I know who's in the OWNERS file, and I know it's not just you. I think the — so, the ideal state is something like the automation in the kubernetes/org repo, where once a PR gets merged, there's a postsubmit job that runs it. And you're saying, right now, a human being has to go run the command — correct?
E: For the record, there have been a grand total of zero such requests this month. This is not a high-bandwidth effort, so there's not a ton of urgency on doing the full automation, so I'm perfectly happy to wait. If we think that Prow is the right way to do it, then I'm happy to wait for Prow; if we want to try something else, I'm okay with that too. But it doesn't seem urgent — like, I'd rather spend my time getting the cluster up, given the Slack debacle this weekend. (Yes, agreed, yeah.)
A: Yeah, I'm just interested in, like, what are the dangly bits that remain — how long till we can say, yeah, it's totally done, and we don't even have to think about it anymore. Understood, understood.
A: Right, correct. You were gonna say — this was something we needed to take down, and a very limited number of people had access to that service. More people have access to that service now, but they're still all Googlers. Would this service be a candidate to migrate to a cluster that this group owns, with a wider pool of people? Yeah.
C: So the idea here was: there are at least two sets of people who are currently looking for a place to stage their images. One is the cluster API AWS folks; the other one is SIG Storage, for the CSI stuff. So this is even before we use Linus's stuff, right? So there has to be a place where we can publish the artifacts, and Linus's—
E: In fact, we're already using it internally to manage k8s.gcr.io, and it works great. What we still have to answer is: what is the staging for each of the subprojects? Is it one shared staging, or is it a bunch of distinct stagings? I think the promoter is designed around the idea that there's one, but I don't see why it couldn't be adapted to work with many. And then the follow-up question is: what if those stagings are not GCR — what if they are Docker Hub or something? Correct.
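For reference, the "many distinct stagings" variant being discussed might look like the sketch below: one promoter manifest per staging repo, each promoting into the same production registry. The layout mirrors the promoter's registries/images schema, but every name, account, and digest here is invented:

```python
# A hypothetical promoter manifest for one project's staging repo.
MANIFEST = """\
registries:
- name: gcr.io/k8s-staging-csi              # hypothetical per-project staging
  src: true
- name: gcr.io/k8s-artifacts-prod           # hypothetical production registry
  service-account: promoter@example.iam.gserviceaccount.com
images:
- name: csi-node-driver
  dmap:
    "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v0.1.0"]
"""

# One such file per staging repo; the promoter runs over each manifest and
# only ever copies the listed digests into prod.
with open("csi-promoter-manifest.yaml", "w") as f:
    f.write(MANIFEST)
```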
E: Yes — well, I don't know. The k8s.gcr.io repo right now is under google.com, so we could try it with a different name. The big moment is going to be when we've synced all of the gigabytes of images from k8s.gcr.io to some new backing store and then flip that DNS name, because that's an internal alias, right? That's the big moment. But we could totally set everything up except for that, and then prove it out.
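The bulk-sync step before any DNS flip could start as something like this sketch: enumerate every tagged image in the existing registry and copy it to the new backing store. The destination name is an assumption, and a real migration would also need to carry untagged digests that manifest lists reference:

```python
# Copy all tagged images from the old registry to a new backing store.
import json, subprocess

SRC = "gcr.io/google-containers"     # current backing of k8s.gcr.io
DST = "gcr.io/k8s-artifacts-prod"    # hypothetical new backing store

repos = json.loads(subprocess.check_output(
    ["gcloud", "container", "images", "list",
     f"--repository={SRC}", "--format=json"]))

for repo in repos:
    name = repo["name"].split("/")[-1]
    tags = json.loads(subprocess.check_output(
        ["gcloud", "container", "images", "list-tags",
         f"{SRC}/{name}", "--format=json"]))
    for entry in tags:
        for tag in entry.get("tags", []):
            src, dst = f"{SRC}/{name}:{tag}", f"{DST}/{name}:{tag}"
            subprocess.run(["docker", "pull", src], check=True)
            subprocess.run(["docker", "tag", src, dst], check=True)
            subprocess.run(["docker", "push", dst], check=True)
```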
C: So there is one more thing, the item which is: the different projects, right — they need a staging repository from where, you know, we can run the promoter stuff. So we need a staging repository which can be cleaned up periodically — I mean, which is not to be, you know, given out to everybody, but it's used—
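"Cleaned up periodically" has a blunt, low-effort option: GCR stores layers in a GCS bucket (artifacts.<project>.appspot.com), so a lifecycle rule can age everything out. That deletes layers wholesale, so it only suits a true scratch staging repo; the project name below is invented:

```python
# Age out a scratch staging GCR by putting a lifecycle rule on its bucket.
from google.cloud import storage

client = storage.Client(project="k8s-staging-scratch")     # hypothetical
bucket = client.get_bucket("artifacts.k8s-staging-scratch.appspot.com")
bucket.add_lifecycle_delete_rule(age=30)   # delete objects older than 30 days
bucket.patch()
```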
A: Yeah — what I think I'm trying to drive at is: I feel like we keep bumping into this particular design problem — whether or not you should have a bunch of things for different projects, or one über area — and I feel like we need somebody to own that and put some dedicated time into proposing: here are the alternatives, here are the pros and cons, here's what we should choose.
A: Again, it sounds like this is a super big-design-up-front thing, and I'm a real big fan of trying to find a way to just iterate. But this feels like something where — I don't know — maybe just using a scratch bucket, and just cleaning it out, and figuring out this whole policy later, is the way to go. Yeah.
E: I mean, the scratch bucket has to be pushed to by an unknown set of people — that's the concern, right? And we end up paying for it, so, you know, it's a vector for abuse unless we're careful in who we give access to. And if we're gonna put the thought into who we give access to, that's 75 percent of the work. "Can we promote from Prow?"
E: It would be great if every project that we were talking about built through that, but that's not the case, right? Like — k8s.gcr.io is all of the images that we consider part of the family, which includes mirrored stuff from CoreDNS, and test images, and stuff that is hand-built, like git-sync and other things. The ideal end state — the ideal end state is: CI with a privileged token should be the only thing that can push to the staging repo. But we're not there yet.
C: Right, so how do we get there is the question, right? Can we do a scratch repository with some scripts that — just like the DNS we run by hand — can a few of us on this call be the initial people who can populate the scratch repository? Just like you do for the CoreDNS, right, Tim — similar to that, I see what you're saying. Yeah, probably, I guess so, because that is not too much, right? Yeah.
E: You know — if you want to chat, let's talk on Slack, either later today or maybe Friday, and work through the details. We'll set up a Google Groups account for creating GCR repos with the minimal privilege and work through it; if we can prototype it, I'll see if we can get Linus on the line too. (Okay, sounds good.) So—
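The minimal-privilege wiring for that Google Group could be as small as one IAM binding on the bucket backing the staging GCR; the group and project names are placeholders:

```python
# Grant a Google Group push access to a staging GCR's backing bucket.
import subprocess

GROUP = "k8s-infra-staging-writers@googlegroups.com"          # hypothetical
BUCKET = "gs://artifacts.k8s-staging-scratch.appspot.com"     # hypothetical

subprocess.run(
    ["gsutil", "iam", "ch", f"group:{GROUP}:objectAdmin", BUCKET],
    check=True)
```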
A: I guess I personally would want to see at least one more person, so it's a group of three people — so it could maybe be… I'm also wary of the fact that the people on this call tend to be the same people on a bunch of critical paths, and if there's a way we could find somebody we trust who's maybe not on so many critical paths, that would be super ideal. But this is — I also… So.
A: So this is, much like the Slack-inviter service, a guinea pig service that we want to move to the cluster that Justin has stood up — which is not an alpha cluster, but will be blown up and recreated as an alpha cluster in the next two weeks. This one might be trickier than the Slack-inviter service, I don't know, because it maybe involves credentials to actually do the publishing to repos. You tell me.
C: Yeah, the publishing bot, yeah — so it's in the queue for moving, yeah. So the update on the publishing bot is that over the last week we had, like, three or four times that the bot failed — ran into trouble for different reasons. You know, people changed the ordering of the repositories — you know, what depends on what — and things like that.
C: So the first thing that I want to do, before we start running this publishing bot in the cluster, is a verify job which validates, you know, that when the bot runs, it has enough information to do all the right things. Right now, that's the problem: people make changes, and that broke the bot, and they didn't know that they were going to break the bot when they made the changes. So that's the first problem to fix there.
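The verify job being described boils down to: load the bot's rules, and fail the presubmit if the declared publishing order would build a repo before its dependencies. A sketch, with a stand-in rules dict instead of parsing the bot's real rules file:

```python
# Fail if the publishing order is inconsistent with declared dependencies.
RULES = {  # hypothetical map: repo -> repos it depends on
    "api": [],
    "apimachinery": [],
    "client-go": ["api", "apimachinery"],
    "apiserver": ["api", "apimachinery", "client-go"],
}
ORDER = ["api", "apimachinery", "client-go", "apiserver"]  # declared order

def verify(rules, order):
    published = set()
    for repo in order:
        missing = [dep for dep in rules[repo] if dep not in published]
        if missing:
            raise SystemExit(f"{repo} is ordered before its deps: {missing}")
        published.add(repo)
    leftover = set(rules) - published
    if leftover:
        raise SystemExit(f"repos missing from the order: {leftover}")

verify(RULES, ORDER)
print("publishing order is consistent")
```

Run as a presubmit on the rules repo, a check like this would have flagged the reordering changes before they broke the bot.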