From YouTube: Kubernetes WG K8s Infra 20181031
Description
k8s-infra-team meeting on Oct 31
A: Okay, so hi everybody. Today is Wednesday, October 31st. Welcome to the K8s Infra working group meeting at our new time, which is 8:30 Pacific, 1530 UTC; thanks to Tim for putting that Doodle together. On the agenda today, I just wanted to welcome any new members or attendees, but I feel like I'm seeing a lot of familiar faces here. Then I wanted to cover action item review from our last meeting, followed by any open discussion topics that we have. So I can go first in terms of AI review.
A: I did one of the things, and I didn't do another one of the things. The thing that I didn't do was get together with Ben Elder and write a one-pager on how we're running our testing for our clusters today. We really would like to actually document what is running where, and what practices we're using that we like and don't like, as advice for this group to use. I don't have it. I also tried working on a charter for this thing.
A: I'm going to go ask if they want to own this thing, but we do feel like we're responsible for staffing up the team and the processes to actually run these things. Ultimately, after this iteration, I'm just gonna push through and suggest that, like, if you don't like it, we can turn it into a SIG later — but let's just get moving.
A: So — thank you for taking notes, Nadir — I'm gonna follow up on that before the end of the week and try and push that through. The next outstanding item: Christoph is not here, but Tim, did he ever get in touch with you about writing some sort of skeleton doc on how we might want to administer the clusters going forward?
C: Hello, yes. I did go through and do some initial dry-run efforts. I was trying to verify the deploys work, and I was trying to find a way to do some testing on that — maybe going through and looking at the data and doing queries while we're doing some stuff in batch, working its way through programmatically. I'll look at it again next week.
B: I apologize, I didn't realize that you had that in flight. I have a PR open that adds a shell script — actually, I think I assigned it to you — that does basically the shell loop for failure detection: if a push failed, don't proceed, but if it didn't fail, push to the canary zone. I saw you had created the canary zone, I think. And so I wrote the script to wrap that up, but I left basically an empty test.sh, which the PR basically says as much.
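The failure-detection loop just described might be sketched like this. This is a guess at the shape, not the contents of the actual PR: the `push_zone` and `run_tests` helpers and the zone names are placeholders for the real tooling.

```shell
#!/usr/bin/env bash
# Illustrative sketch only: push the canary zone first, run the
# (currently empty) tests, and only push the real zone if both succeed.
set -o errexit -o nounset -o pipefail

push_zone() {
  # Placeholder: the real script runs the OctoDNS container against
  # the named zone.
  echo "pushing to $1"
}

run_tests() {
  # Placeholder for the empty test.sh mentioned above.
  true
}

main() {
  push_zone "canary-zone" || { echo "canary push failed; stopping" >&2; return 1; }
  run_tests || { echo "canary tests failed; stopping" >&2; return 1; }
  push_zone "production-zone"
}

main
```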
B: Here's my feeling: we have basically three things that I think we're missing. One is the tests, so that we can do the canary push and make sure that everything passes. We could conceivably move forward without it, but I'd rather not. Two is a SWAG at a billing report. It's gonna be hard, since we have no data — I looked at the data and it's, you know, zero dollars so far, because we've served 58 DNS queries — but I would like to know how we're going to do transparency here.
B: To know where that money's going — I want to be really clear that we can show people where it's going. And the third thing was, I guess, alerts, but alerts don't matter for this because it's a hosted service. So I think it's really only those two things. Is there anything else that people feel like we need to do before we actually flip it over to this service?
B: That's happening manually. There's a script that is checked into that directory — the DNS directory — or at least will be in my PR, that runs against a locally compiled Docker image of OctoDNS. Basically, you can run `docker build .` and you get your image, and then you can run that to push from your local client up into the real DNS. It's not ideal, but until we actually have a cluster up that's running on a regular basis, I think it's fine. It's what we're doing for everything else.
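The manual flow just described, sketched as commands. `octodns-sync` is OctoDNS's real entry point, but the paths and invocation here are illustrative; `docker` is stubbed with a function so the sketch runs anywhere as a dry run — delete the stub to issue the real commands.

```shell
# Sketch of the manual push flow: build the OctoDNS image locally, then
# run it to push the checked-in zone config to the live provider.
docker() { echo "docker $*"; }   # dry-run stub; remove for real use

docker build -t octodns .        # locally compiled OctoDNS image
docker run --rm -v "$PWD:/dns" octodns \
  octodns-sync --config-file=/dns/config.yaml --doit
```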
A: So the idea would be, instead of manually editing some fields in a console somewhere, the person now responsible for updating DNS would run this. Would they make PRs, or would they just run this script after somebody has PR'd in changes — like, "I want to add a new name," somebody PRs an entry?
B: Exactly, you're right. Somebody sends a PR that says "I want to add a new name" or "I want to update a TXT record" or whatever. We'd have the small list of people who have push approval to the actual site — to the GCP project, sorry, still early. They would approve the PR, sync it, and go ahead and run the command against it.
A: That's all the mechanics. What I guess I was trying to articulate is: I think Tim's roadmap is that we need these three things in order to actually, like, be able to physically run the command, but I feel like there should be a little bit more in terms of docs/process, for people to know how to poke somebody to update DNS. I can do a PR for a doc on the process.
H: Awesome, yes. I mean, I have done a little prototyping. The work I'm trying to do is figuring out how we can do downloads of binary artifacts from, ideally, multiple locations — multiple mirrors — and basically have some sort of redirector service, or some sort of way to enable people to download from them without having to be aware of all the mirrors. I've done a very small prototype, which basically is a trivial, Go-based 302 redirector — super MVP — but I figure...
H: It would make sense to actually get it put somewhere and start iterating on it, and start moving towards where we want to go, not least because I think it will drive a bunch of other things. Like, we are a long way away from this; the instructions are not going to be "drop it into a git repo and off it goes," right? We have to set up DNS integration.
H: We have to figure out how we're going to do TLS; we're going to figure out how we're gonna, like, run it in a Kubernetes cluster or wherever we want to run it. So there is a sort of question of where the code lives. I want to sort of start the ball rolling with a more complicated process, I guess, than the ones that we're tackling so far; I hope it will open up a bunch of avenues for progress.
H: Having a bucket somewhere would be great as well, plus some sort of DNS integration — sounds like that's happening — so we can have a DNS record that points to a Kubernetes cluster, which means we need a Kubernetes cluster to run services. We can just use Let's Encrypt, but, like, some sort of acknowledgment that we want to use Let's Encrypt and not something else. And then some sort of: where do we put code, and how do we promote the code?
A: My feeling is this sounds an awful lot like a SIG Release sub-project, in terms of: if there's new code that needs to be written, then a SIG needs to own it, and I think SIG Release is all about where the bits land. And then, if you need, like, an umbrella issue to track this, I feel like we're free to use the k8s.io repo as our umbrella issue repo. I guess the thing that I'm curious about is how much progress you can make before we get blocked yet again.
A: It would sure be nice if we had a cluster where we could run things, because I now feel like myself, Ben, Tim and Christoph are all like, "we'll totally document the process, or the policy, for creating these clusters" — but it's been like four weeks, and this is like the third or fourth "gee, it would be nice if we had a cluster to run stuff" thing. Yeah.
B: Yeah, and I just — I know that I have a tendency to get spread too thin if I let it, so I'm just trying not to get distracted from other things than DNS. If we can get DNS up in this cycle — the next two weeks — then I'm happy to look at turning up a cluster and actually starting stuff there in the cycle after that. Yeah.
H: Preferred pricing? Okay, I looked at it. It was a little fuzzy in terms of Amazon's — it's called CloudFront; I always get the CF names wrong — CloudFront, and I don't think they have preferred pricing. I don't know about Google's offering, and there was this new thing that was announced, which I think Google has signed up to, with some sort of, like, preferred tiering or preferred peering.
H: So maybe a CDN would work there. But also, the other thing is, we can start to get stats. So if we want to know how many users are on a particular Kubernetes version, we can do that. It opens up a can of worms, I guess, in terms of, like, who can see those stats, right? But it is also an opportunity to get stats. Yeah.
F: No intention of rebuilding the database itself, yeah. Okay — I guess, let's just keep that in mind.
B: Help me get DNS up and out of the way. I mean, really, I don't mind if we want to turn up a cluster to start doing testing stuff right now. I just want to be wary that we don't burn away money that we want to spend on real things, and I want to be careful that we don't accidentally set up things that we end up keeping forever without having put some thought into them. That's what we ended up with before, I mean.
H: We know it's like $200 or whatever — like, we can say it's $200, we don't necessarily need to worry about it; you just want the cluster to play with your redirector. I'm just saying, basically it's the same logic as the alpha clusters. We know that we're gonna want a cluster at some stage, but we also know that we're not fully baked on how it's gonna work — we want to do these billing things and all these other things. Set it up; it'll last for 30 days, and we'll tear it down.
D: ...see what is wrong and, you know, fix whatever is wrong, and then — the bot runs periodically, so the next time it runs, it picks up the changes and does the same thing all over again. And that cycle is pretty much, you know, between Nikhita, Stefan and occasionally me.
B: So if we want to turn on a cluster, it's sort of the same things that you dealt with at the beginning of DNS: who has access to it, how do you govern who has access to it, and what RBAC and whatever else do we need to set up to make it work? If somebody wants to start down that road, I guess I'm happy to unblock.
A: We still have the authority — the ability, whatever — to say, like: okay, it's cool that we figured out how to run publishing bots with a single Google group on a single cluster, but now that we've spent some time exploring, we've made up our mind that we'd rather have this cluster over here run the publishing bot and a couple other pieces of infrastructure, so we're tearing it all down and bringing it back up — like, accepting that there's gonna be some pain of iteration.
B: So my takeaway is: I get a doc on how to administer or set up these clusters; we can maybe turn that into a script that helps us make sure that we've got the same clusters. In the meantime, Justin, if you want to play with a cluster, we can set up an alpha cluster, which will self-implode after whatever it is — 30 days or something — I mean, if...
B: Mostly I'm specifically concerned about how we make sure that the right people have the right access to the right things, especially if we're going to trend towards a cluster that runs more than one service. Then I just want to make sure that it's, you know: "add yourself to this Google group, which gets you access to this namespace; these are the RBAC rules; and here's the script that will turn these things up in a consistent way." Okay, yep.
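The kind of wiring being described could look roughly like this — a RoleBinding granting a Google group edit access in one namespace. The group name, namespace, and role here are placeholders, not a real proposal.

```yaml
# Hypothetical RoleBinding: members of a Google group get edit access
# in one namespace only. All names below are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: redirector-devs
  namespace: redirector
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: k8s-infra-redirector-devs@googlegroups.com
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: edit
```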
D: I added this — it showed up in the email from Manjunath a few days ago; I'd forgotten to add it to the agenda, sorry about that. So, basically, there are at least five, six requests that have come in so far about uploading images, and I think we can start down that road as well. So Manjunath wrote up what the image should look like — you know, it needs manifests, it needs, like, a structure in the name, and things like that. So that's a document.
D: And I was also thinking about one more idea here. For example, if we take how Tim is doing the CoreDNS mirroring, right — basically, what they're doing is they're publishing to quay.io, and then they give you a script, and then Tim basically runs the script, which pulls the images into our repositories. So if we do something like that, where a bot does that work, then we don't even have to give permissions to anyone. That's right.
D: So if we can drive it off a YAML file, and people create a PR against the YAML file saying "this is the source, this is the destination, and these are the list of images" — then, when the PR gets merged, the bot automatically goes and syncs it all up. This way we know who asked for it, and where the source of the image is. And, you know, in the future we could even add a capability where, if you remove something from the file, it goes and cleans up the image repository too.
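A sketch of the kind of YAML file being described. The field names and registry paths are invented for illustration; the real promoter's manifest format may differ.

```yaml
# Hypothetical promotion manifest: one entry per request, reviewed via
# PR; a bot syncs merged entries. All names below are placeholders.
- source: quay.io/example/coredns        # where the images live today
  destination: gcr.io/k8s-example-prod   # placeholder target registry
  images:
  - name: coredns
    tags: ["1.2.2", "1.2.6"]
```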
B: I haven't looked at this doc — it came through, and I apologize, I didn't see it before. I will take a look at it and throw my notes in there about sort of where we are today. The open question that I have is about the pre-production staging area. I had been working on the assumption that we would want one pre-production staging area for all of k8s, and we would have to give trusted people from each sub-project the ability to push to that repository.
B: I haven't really dug into the access controls that GCR offers, but my understanding is it's not very fine-grained, and so maybe we actually do want to be able to pull from arbitrary staging areas and synchronize those. So I think that's really the discussion there. And then it comes back to the same thing of, like: let's get a billing report set up — and obviously, I think this one will be easy; like, this was the motivator for a lot of this work.
B: So then, getting to that: we have a promoter internally that we need to just clean up to be able to publish, and we have a YAML file that has a format that already works. Also, I was kind of waiting for Linus, who wrote our internal promoter, to get back from vacation. He's back as of this week, so I can actually drag him, maybe, to the next meeting here.
A: ...our access right now: "come on, guys, give me an account." And what we're saying is: we want to give you an account to a staging area, but we need to make sure that random developer X from some other sub-project doesn't squash your images. All right, so yeah, 'cause that's what I feel like the clamoring at the gates is all about: "look, guys, I just want to push my images someplace."
B: Yes. We did all the pre-work for this: most of the repositories now — all their Makefiles or whatever — push to staging-k8s.gcr.io. The assumption was that we would have sort of subdirectories under there, and people would be able to push to project-specific subdirectories, but it turns out that I don't think the GCR permissions are able to represent that.
B: Unfortunately. So we have to decide whether that's sort of good enough by convention — because, you know, we're giving you a directory and you shouldn't be pushing anything outside of your own staging directory, and at the end of the day it's just staging, it's not that huge of a deal — or do we actually create separate GCR repos for each sub-project, or do we just say we'll pull it from wherever you're hosting, whether that's Docker Hub or Quay or whatever? I don't...
B: I mean, nothing should auto-flip from staging to production, ever. The end result is: there's a git file that drives what is in the production repo, and that's it — and it's by hash. So I'm not worried about staging-to-production. I'm only worried about people intentionally or unintentionally removing or overwriting an image tag, or something that they didn't intend to, in staging — which would potentially impact tests, but shouldn't impact production at all, because all the copies are going to be by hash.
A: I don't see anything else on the agenda, and I'm happy to call it here unless anybody else has anything pressing. Cool. So, assuming I do get the charter merged, I kind of want to rename some things to match the fact that this is a working group instead of a team — instead of k8s-infra-team, wg-k8s-infra, all that — but I'll email out about that when we get there. Okay.