From YouTube: Kubernetes WG K8s Infra 2019-01-23
A: Okay, hi everybody. Today is Wednesday, January 23rd, and you are at the bi-weekly WG K8s Infra — the K8s Infra working group — meeting. I think it's the inaugural meeting where we're actually officially a working group; hooray, the paperwork is done. So keep in mind that this meeting is being recorded and will be posted to a publicly accessible YouTube channel. Everything that you say will be archived in stone for all eternity, and you should obey our code of conduct, which basically boils down to: don't be a jerk.
A: So, motivated partially by that fact, I opened up a pull request about maybe double-checking the owners and security contacts we have in the k8s.io repo, just to see if they're still sort of relevant. I forgot to put a hold on it, so Tim was a little eager to maybe get himself out of the root OWNERS file. The changes I made to the OWNERS file were by no means an attempt to grab power or to voluntold anybody into doing things.
A: It was more an attempt to formalize the fact that the working group seems to be doing a bunch of stuff with this repo, so we — Dims and I — will be owners at the root. This is a SIG Contributor Experience sub-project repo, so I feel like the tech leads from SIG ContribEx should be owners at the root, and then the idea is to actually push all the real power into OWNERS files that are in subdirectories.
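The delegation pattern described here — a thin root OWNERS file, with the real approval power pushed down into subdirectory OWNERS files — might look roughly like the following sketch in the Prow OWNERS format. The handles and paths are placeholders, not the actual contents of the k8s.io repo:

```yaml
# <repo root>/OWNERS — sketch; handles are hypothetical
approvers:
  - wg-chair             # e.g. a WG K8s Infra chair
  - contribex-tech-lead  # SIG ContribEx tech lead
reviewers:
  - wg-chair
  - contribex-tech-lead

# <repo root>/dns/OWNERS — a community-managed subdirectory
approvers:
  - community-dns-admin-1
  - community-dns-admin-2
```

With a layout like this, approval for changes under `dns/` comes from the subdirectory's own file, while the root file stays small.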
A: So the DNS-related subdirectory can be managed by the community — there are a bunch of community people in there. Since the other two subdirectories currently relate to infrastructure that's deployed on Google hardware, I have ixdy — I mean, Jeff Grafton — listed there, kind of to motivate you to get yourselves out of there by turning those things into stuff that the community can manage. If I have done bad, please comment on that PR and I'm happy to revert whatever. I went and talked to the other individuals that I dropped, and they're okay with it.
C: Sure, hi everybody. So I created this KEP — it was basically inspired by an issue from Tim St. Clair — to try to understand all the bits and pieces of the whole packaging story in general, from creating packages to publishing them. This KEP, specifically, only talks about the tooling and stuff related to creating the packages. Right now it does not talk about how to publish them — into which repos or buckets or whatever — but it came up quite often that that is more or less the more important part.
C: It seems the publishing part — and the publishing power — is actually the thing this working group is probably most interested in. I don't think the working group is that interested in creating the packages. So anyway, first off I'd like to get as many comments and as much review on the KEP itself as possible, and maybe I'd like your opinion on: should this be one KEP for the whole process, meaning also handling the publishing part and whatever is involved there — creating the buckets and whatnot — or should we keep it separate for now?
C: Maybe one additional thing: in half an hour I have a meeting with some people who are currently doing the publishing part. Right now I don't have much insight into what's going on there and how this could be solved, but that's what I'm trying to discover, and my plan is — if I get an overview of that — to create another KEP which handles the publishing part, or, if people prefer, to fold that into the current KEP.
D: I apologize for being late in doing this, but I captured some of the thoughts on publishing — from Santa Barbara, and the ones I had at KubeCon — into a PR. Actually, literally, you can still smell the fresh ink; it's in the k8s.io repo. So that gives a sketch of what we were thinking about in terms of image promotion and stuff like that.
E: I'll take the opposite position, as somebody who used to say that KEPs were overkill. The consistency of having future work in KEPs is so much nicer in terms of the cognitive burden of what's done and what's not done. Like, I saw your doc, Brendan, and the doc itself is good, but it's all future tense, which means, to me, I really want to see it in a KEP — even though two-thirds of the KEP template doesn't apply, I think a KEP is the right place for it. Somebody argue with me.
B: This is not working fine, and there are duplicates here and there. So Hans actually went through all those issues and tried to come up with something that will at least say something about each of those issues and how we should move forward from where we are today. And if you're not able to do that, then we can't move forward — essentially that's the problem.
D: Effectively, what the doc says — I mean, that's my goal — is basically: to the extent that there's something there, there's going to be a YAML file. I think this KEP has to say there's a YAML file checked in to k8s.io somewhere, and that drives whatever promotion process we build, be it for images or artifacts, and then everything else in the KEP is about the redirection.
D: So, effectively, what I see this KEP as being is the combination of the mirroring infrastructure — not even the mirroring infrastructure, just the redirection part of the mirroring infrastructure, the vanity-URL part — and the fact that the overall flow we expect, similar to DNS, is: there'll be a YAML file checked in to k8s.io somewhere, and that's how you promote stuff. Whether it's an RPM, whether it's a bucket, whether it's an image — it doesn't matter, and we use different tools for all of those, but that's the basic flow, right?
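The flow described above — a checked-in manifest as the single source of truth, with a promotion tool acting on the diff — can be sketched in a few lines. This is a toy illustration, not the real promoter: the data shapes, names, and digests here are made up.

```python
# Sketch of the promotion flow: a checked-in manifest is the source of
# truth, and a promoter tool diffs it against the destination registry
# to decide what still needs to be copied.

def plan_promotions(manifest, destination):
    """Return the (image, digest) pairs that still need to be promoted.

    manifest:    dict mapping image name -> approved digest (from the YAML file)
    destination: dict mapping image name -> digest currently in the registry
    """
    plan = []
    for image, digest in sorted(manifest.items()):
        if destination.get(image) != digest:
            plan.append((image, digest))
    return plan

manifest = {
    "pause": "sha256:aaa",
    "coredns": "sha256:bbb",
}
destination = {"pause": "sha256:aaa"}  # coredns not promoted yet
print(plan_promotions(manifest, destination))  # -> [('coredns', 'sha256:bbb')]
```

The same diff-and-copy shape works whether the artifact is an image, an RPM, or a bucket object — only the copying tool changes.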
B: The feedback here is: let's keep the scope of what you're doing to just this one thing. If we try to boil the ocean, then you're gonna get way more sidetracked. So let's stick to the one thing that we need to deal with in this KEP itself: currently, the people who cut the deb and RPM packages push the debs and RPMs to a specific GCS repository.
B
So
and
one
thing
that
Nikita
and
Stefan
have
been
doing
for
the
publishing
board
is
using
git
crypt
and
that
seems
to
be
working
fine,
so
the
way
the
get
crypt
stuff
works.
Is
they
just
added
me
as
my
GPG
key
into
the
publishing
bot
repository
and
you
know,
and
then
was
able
to
use
the
token.
So
basically,
what
we
would
do
here
is
the
release
manager
who
has
to
who
needs
the
dev
on
the
RPM
signing
key.
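The git-crypt workflow being described has roughly the following shape. The repo layout and key ID here are hypothetical — this is the general flow of the tool, not the publishing-bot's exact setup:

```shell
# In the repo holding the encrypted token/signing material (one-time setup):
git-crypt init                                     # set up encryption for this repo
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes

# Grant a release manager access via their GPG key:
git-crypt add-gpg-user RELEASE_MANAGER_GPG_KEY_ID

# Later, the release manager clones the repo and decrypts with their own key:
git-crypt unlock
```

Files matching the `.gitattributes` patterns are encrypted at rest in git; only holders of an added GPG key can unlock them locally.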
B: I just want to say, like, thank you to Brendan and Hans for doing those docs. I do hope that technically we come up with the same solution at the end for all three, even though they might look different on the surface — with, like, minor modifications for signing versus Docker or whatever it is. And I also hope that signing is not controlled by an individual, where you never see the key. It might be — I have one of those keys, and it makes me uncomfortable to have that.
B: Exactly. So in the OpenStack infrastructure, the way we did that was: there's a YAML file and you put in a SHA, and then the bot goes and takes the key from someplace which is not accessible by any individuals — except for the admins, of course, the root admins of the box — and then it actually does the signing and publishes the images. So we'll get there, but this is like the first step in that process, and I don't want to lay down the whole path, because I don't know what all changes will come in the future. So let's start with at least giving some way to replicate what we are doing today in a public way, and then go on from there. Yeah, I agree, Aaron — I tried to write down at least the initial steps.
C: Sure. So, correct me if I heard something wrong, but what I've heard is: let's keep the KEP as it is and hash out the details there; figure out what's the current state in regards to publishing; and I'll take the other part and probably create an initial KEP there when I have some overview, or check on Brendan's stuff. And yeah, let's see how that goes. Okay.
J: With a little work it seems fine — it's the same. We wrote down everything in the issue that we really wanted, at least from a packaging perspective. There are two parts: there's packaging and publishing, right? From a packaging perspective, we know exactly what we want — you wrote it all down. From a publishing perspective, that means it's a handoff, and that part is... I don't know if I'm ever gonna get what I want. But from a packaging perspective, I think everything is written down.
J: It's fine. I don't think we even needed a KEP for it, because it's not ambiguous, but I'm happy to see it written up, so I'm fine. I haven't been around for the last week because I've been off-site, so I'm trying to spin back up on things. But I took a look at the KEP and it sounds good — it's great, glad he wrote it up. I think the hard part will be publishing and signing. Okay, yes.
B: Okay, so let's go to the next one. There is a request from — okay, the way this started was somebody needed... was it Hippie Hacker who needed access to the Kubernetes charts information? And then it turned into Iglesias requesting the GCP project for the Kubernetes charts — whether they should go into CNCF as a whole.
H: So I can give some background on what we were using and what it's been used for in the past. Essentially, the charts repo and Helm had some GCP projects that had buckets for both the Helm binaries and for the charts themselves — the tarballs that make up the charts — and from there you can pull data.
H: You know, quite a bit of data from the audit logs and/or download logs of those objects — and that's the data that Hippie Hacker would like access to, and to kind of be able to control a little bit. The projects we've been using — there are actually three: one for Helm, one for charts, and one for chart CI testing — have all been just Google-owned projects. We started them and they've just kind of grown.
H: At this point I've been one of the owners, and probably what I would call a primary at this point, just because I've been the most accessible — but, as the record will tell you, maybe not as accessible as I should be. And I would like to move that into a CNCF-owned project, or transfer ownership of that project, and then have some more structure around the ownership. Right now it's just kind of a free-for-all.
K: So — got it. Can you please drop me a brief email with the details? On the technical side it shouldn't be a big issue. Another possible challenge is how we sort it out from the legal standpoint, because, you know, we have an agreement with Google: how do we receive all this donation, and how can we consume all this stuff for Kubernetes — and, for example, all this donation cannot be used for ancillary projects.
K
Yeah
I'm
responsible
for
like
I'm
a
person
who
can
help
you
with
dead,
also
on
a
ham
side,
so
I'm
also
managing
our
aza
CN,
CF
r,
CN
CF
project
project
you
can't
and
so
on.
So
we
can
sync
with
you
outside
of
this
group
on
how
to
transfer
all
this
essence
under
CN
CF,
but
we
have
to
fill
it
out
if,
if
this
proposal
lands
under
communities,
infrastructure
or
not,.
M: There are still some things as far as configuration on the, uh, helm-charts project. We have audit logs, which seem to have some roll-up of about every 30 minutes, but that doesn't give us the granularity we want — so I could only tell which charts were downloaded at least once every 30 minutes, and that's most of them. What I need to know is every download, and to also try to distinguish different IPs and different user agents.
M: I've been able to recreate that in my own org, but we need to make some changes to the logging sinks for what's available for that Helm project. I'll probably just work directly with them — maybe today, if that's possible — or, if I can get granted enough privileges to do it myself, that would be great.
A: Let's capture that as an action item. And for what it's worth, this was me trying to go back and look at what we had as action items last time. So, for DNS to be well and truly done-done, I feel like there should be graduation criteria or something for the DNS thing. Right now I've just pointed people at filing issues, so I'm curious what the whole workflow will look like with filing an issue — somebody else is gonna have to, like, write…
A: Okay, I'm gonna take a stab at fleshing out that umbrella DNS issue with whatever we need as follow-up. Hell, maybe I'll even try writing a KEP, and we can have graduation criteria — I don't know, but it sounds like this is one of those things. Maybe I'm missing a couple of other things.
A: Part of my hesitancy with doing that is I have no idea how we're spending our money. If we could get, like, a lot of transparency on where the money's going and why, then I'm a lot more comfortable being a little looser and a little more flexible. So I think we had talked last time about dumping the billing data into, like, a BigQuery table. My goal here is to get to the point where we can start each meeting off with you folks telling us, like: hey, in the past two weeks…
A: …you all have spent your money on X, Y, and Z — and then eventually get to the point where those sorts of updates can be more frequent, or even self-service, rather than having to wait for this meeting to get that information. Because that will allow us to adapt if it's like, whoa, wait, what are these people doing with that money over there — and we can maybe be a little looser. So that's just my two cents on that matter. But so, where are we on billing? Tim and Ihor — I think you guys were owning this.
K: The billing reports are set up, and we have them automatically dumped into BigQuery. What is not set up at the moment is Data Studio, to easily consume all that stuff and visualize it — but that depends on, like, what we want to see there, and so on.
K: So at the current moment I can tell you what our actual reports show. If you have any more specific requests on our billing, you should probably start with a GitHub issue with what specifically you'd like to see, and I'll work on it. At the current moment we just have the full dump of all our billing expenses every day — expenses that are dumped into the BigQuery tables.
E: I've never personally played with Data Studio for the billing export. I think, honestly, the first step is for somebody — maybe it's you, or maybe it's somebody else — to just play with it and see what sort of reports are interesting and useful and easy to produce. You know, a report with a week-by-week breakdown per product or something like that would be really interesting: how much did we spend on VMs this week?
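The kind of report being asked for — a week-by-week cost breakdown per product out of the billing export — can be sketched with plain Python over export-shaped rows. The field names here are simplified placeholders; the real BigQuery billing export schema differs.

```python
# Sketch: aggregate cost by (ISO week, service) from billing-export-like rows.
from collections import defaultdict
from datetime import date

def weekly_breakdown(rows):
    """Sum cost per (ISO week, service) over billing-export-shaped rows."""
    totals = defaultdict(float)
    for row in rows:
        year, week, _ = row["usage_date"].isocalendar()
        totals[(f"{year}-W{week:02d}", row["service"])] += row["cost"]
    return dict(totals)

rows = [
    {"usage_date": date(2019, 1, 21), "service": "Compute Engine", "cost": 12.5},
    {"usage_date": date(2019, 1, 23), "service": "Compute Engine", "cost": 7.5},
    {"usage_date": date(2019, 1, 23), "service": "Cloud Storage", "cost": 3.0},
]
print(weekly_breakdown(rows))
# -> {('2019-W04', 'Compute Engine'): 20.0, ('2019-W04', 'Cloud Storage'): 3.0}
```

In practice this aggregation would just be a `GROUP BY` query over the export table, but the shape of the result is the same.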
E: I mean, honestly, we sit and play with the alpha clusters and test migrating over various services — like, you know, GCS web or the k8s.io redirector or whatever — but we just can't go live with them until one of us makes enough time to sit and play with Data Studio and understand it properly.
A: For what it's worth, I took it and shopped it around, and everybody was like, yeah, that looks good. I had a question about it: it looks like it only takes promotions from GCR repos right now. We had thoughts of: maybe, what if there was a manifest that pulled down images from any potential Docker registry, not just the GCR ones? Neat — but let's get rolling on the GCR thing first, right?
E: That's where we want to go. Let's write down what the plan is going to be, with the convention for how to name those things and how to grant access to whom — and probably those want to be checked in as a YAML file somewhere, with GitHub apps or the GCP bots or somebody automating that creation, etc. I just want to get all the t's crossed and the i's dotted. Okay.
I: Then I'd just say, for 4151 — which is the one about automating DNS for Kubernetes — we do have a cluster; we could do half of it, which is sort of checking the DNS records. Our cluster is not where I would want it to be as the sole source of truth, but for verification we can do that. And that's new functionality, so we're not gonna risk breaking GCS web or something. Tim, maybe we could start with that.
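The "verification half" mentioned here — checking that live DNS matches the desired records without being the source of truth — could look something like this sketch. The zone-data shape and the lookup hook are illustrative; a real check would query an actual resolver.

```python
# Sketch: compare desired records (the checked-in zone data) against
# what DNS actually serves, via an injectable lookup function.

def check_zone(desired, lookup):
    """Return a list of mismatches between desired records and live DNS.

    desired: dict mapping (name, type) -> set of expected record values
    lookup:  function (name, type) -> set of observed values
    """
    problems = []
    for (name, rtype), expected in sorted(desired.items()):
        observed = lookup(name, rtype)
        if observed != expected:
            problems.append((name, rtype, expected, observed))
    return problems

desired = {("k8s.io", "A"): {"1.2.3.4"}}

def fake_lookup(name, rtype):          # stand-in for a real resolver query
    return {"1.2.3.4"} if (name, rtype) == ("k8s.io", "A") else set()

print(check_zone(desired, fake_lookup))  # -> [] (everything matches)
```

Because the resolver is injected, the same check can run manually at first and later be wired into a periodic CI job.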
L: All right — but do we need it in community-owned space just yet to run it? Like, it's all going to eventually move to community-owned space when Prow jobs do, but I would suggest that, you know, we have this very fancy CI automation system all set up, and where it runs probably isn't critical to actually doing the DNS checks — and it'd have all the same logging mechanisms and output; like, you'd see this on Gubernator, it'd be visible in Testgrid, all that kind of stuff.
E: I don't think that's super urgent right now. I'd actually be okay if we didn't do that just yet and ran it manually, just to make sure that nothing explodes while humans are paying attention to the script. I mean, between the tests and the script itself, I think it'll be fine, but I would love to get a little bit of miles on it while humans are paying attention. Sorry.
E: And from our end, on the cluster infrastructure side, Justin and I have charted a course through the various GCP APIs to find out how to create a cluster with minimal access required. We have not scripted that, though — so eventually, when we want to burn the thing down and start it over from a script, we will have to put those pieces back together.