From YouTube: Kubernetes WG K8s Infra 2019-03-06
A: So hi everybody, today is Wednesday March 6th, I am [inaudible], and you are at the Kubernetes k8s-infra working group meeting. You're all being publicly recorded, you're gonna be posted to YouTube, and you all get to be that way — you can watch yourselves. Follow the Kubernetes code of conduct, which basically means you're all not going to be a bunch of jerks. I pasted a link to the agenda in chat; I'll do it again for those who just joined. If you want to add your names to it, it'd be very helpful.
E: We need somebody to make the time to go look at Data Studio and try to make it more useful, which I think is going to be challenging until we have more than literally two billable items, where one is disposable. So I'm okay to let this sit — once we turn on the cluster for real, or once we turn on the staging stuff, which I think we'll talk about in a little bit, then we'll have something more tangible to actually produce a report on. Yep.
H: This is — like, apparently there's one trusted cluster, sorry, called the prow trusted cluster, and that's the one that has secrets and stuff. The reason we want to run it there — well, we don't want to run it in some other random cluster. Oh sorry — with the other jobs that are not in the trusted cluster, because then they are probably more susceptible to, I guess, malicious PRs and stuff like that.
H: So my understanding is there's already a presubmit check for any jobs that try to use the trusted cluster, already in prow somewhere, so that'll hopefully prevent people from adding random jobs there using the secret that we will add — that's the secret to push to the k8s GCR prod registry.
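The presubmit guard described here could look roughly like the following — a minimal Python sketch, not the actual Prow validation code; the cluster name, the job-config shape, and the allowlist are all assumptions for illustration:

```python
# Hypothetical sketch of the presubmit-style guard discussed above: flag any
# job that targets the trusted cluster unless it is explicitly allow-listed.
# The real check lives in Prow's config validation; names here are assumed.

TRUSTED_CLUSTER = "test-infra-trusted"  # assumed cluster name

def find_unapproved_trusted_jobs(jobs, allowlist):
    """Return names of jobs that request the trusted cluster without approval.

    `jobs` is a list of dicts loosely shaped like Prow job configs, e.g.
    {"name": "post-push-image", "cluster": "test-infra-trusted"}.
    """
    violations = []
    for job in jobs:
        if job.get("cluster") == TRUSTED_CLUSTER and job["name"] not in allowlist:
            violations.append(job["name"])
    return violations
```

Run as a presubmit, a non-empty result would fail the PR, which is the behavior being relied on in the discussion above.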
A: This came up briefly in the SIG Testing meeting yesterday, so I apologize if I'm missing context — I should be responding to email threads; I know you pinged me on a couple of them that I didn't get to. So essentially, the trusted prow cluster is where all of the prow components run. That's the cluster where there's very little else running, so we have a couple of secrets there, and so we're thinking: oh great, we'll just put it there, since that's where secrets already happen. But I was trying to understand — are there any technical reasons? Are there any technical limitations preventing us from running the container image promoter on the same CNCF-owned cluster that we're running the publishing bot on? The publishing bot is an example of something that runs in the CNCF cluster and uses some credentials to push some stuff out there. Wait.
E: It's not important yet, because it's not actually doing anything for real. Yeah, right — I want to get it up somewhere like this week. Although — this week I'm at Google, so maybe this week's a bad idea — but I want to get it up this week or next week, so that we can say: hey group, look, I made a git change here, prow kicked in, and it moved the image from this repo to that repo, and all the plumbing works. Okay.
A: So in the interest of moving forward, it sounds like this group is okay if we run it in the same cluster we run the rest of our test infrastructure in. I just had concerns that that was gonna let us accumulate more Google-specific debt, because it's possible there and it's not possible in the CNCF. But I agree — in the interest of expediency, making sure we get our test clusters straightened out, we'll move forward that way.
E: It just proves that it works, exactly. I think we can move prow towards the front of the list of things that we try to move into the real cluster, and then we have to have the discussion about the secure prow cluster versus the non-secure prow cluster, and what we're going to share and how we manage those secrets — which is good, because we need to have that conversation anyway.
E: We have a short list right now for the purposes of testing. At some point — I mean, it doesn't have to be serialized, but at some point — we need to decide what the policy is by which we create new ones, and the policies by which we govern who gets access to push to those staging repos. I think we can be pretty liberal, in that they cost us very little and the risk is fairly low, especially if we're actually paying attention to billing.
H: I think that's reasonable. I mean, Tim already gave me the secrets to the registries for the three staging repos that he set up — that he created — so I just need to insert them into, I guess, the trusted cluster later, when the PR is merged. I just need to make some changes. I'm already working with fejta on getting the details right for this, so I don't think I need anything else from this group.
H: Yeah, I mean — the implementation details on that are, I guess, still kind of an open question. The reason why I say that is — is it John Johnson, or somebody named John-something at Google — he brought up gcrane for copying just images, like between GCRs, and it can do that now. So there's multiple ways to do this.
E: I mean, that's more or less what we do now — Zac Laughlin wrote a script that does it, and I'm guessing they'd probably end up looking a lot alike. I think I agree with Javier: it makes sense to me to have the promoter be the thing that pushes to the prod repos, and the promoter has no access to the backup repo — that's done through a separate mechanism, so that we can make sure that even if the promoter goes haywire, it can't nuke the backup. It can have read-only access. Okay.
D: I think I circulated a doc two weeks ago, and I'm not entirely sure what the process is, but I put that up as a PR against the KEP, as the implementation details for milestone zero. If people generally agree with that, I would like to proceed and start making progress on this. I don't know — Brendan, I haven't heard from you about how you feel about it in particular, and how others feel. — Yeah, I did — I didn't, although.
E: So we have these scripts to make the GCR staging repos; you can probably copy one and just change a few of the details. If you actually look at the commit history of those scripts, they have some lifecycle-management stuff, which doesn't really work for GCR, but it does work for GCS. So you can actually apply some lifecycle stuff to the staging repos — say, you know, after 90 days whatever is there is going to get deleted, those sorts of things — and it's a good idea to start with that.
E: Actually, the GCS policy language is pretty cool: you can say, after 90 days, move it to the cheap storage, and then after 180 days — or after six months — delete it at that point, right. So we have all sorts of options here; we should just look at it and think about it. Unfortunately we can't do it for GCR yet, but hopefully we'll be able to do that eventually. I mean.
E: I specifically think we should apply lifecycle policy in one direction to the staging repos, so they don't end up wasting money — anything that's not promoted from staging to prod within half a year probably won't be — and prod should probably have lifecycle in the other direction, to do delete inhibition and those sorts of things, to make sure that nobody can delete anything that's been in for less than a certain amount of time, something like that. So we should think about the policies along those lines. Okay, okay.
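For reference, the GCS lifecycle policy language being described supports exactly this kind of tiering. A sketch of such a policy, in the JSON format accepted by `gsutil lifecycle set` — the ages and storage class here are illustrative, not a decided policy:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 90}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 180}
      }
    ]
  }
}
```

Applied to a staging bucket, this would demote objects to cheap storage at 90 days and delete them at 180; as noted above, GCR did not support this at the time, only GCS.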
K: We're just looking for somebody with some spare time to go through and figure out where — how to read those permissions, for auditing in general, I think, and specifically being able to dump the IAM roles, so that we can — I don't know, on an ongoing basis, when there's a change, we do an IAM dump and update, probably, the k8s.io repo. Okay.
A: That was there for us, yeah — yeah, I got it. So I know you have a proposal here for us to discuss; is there something we should run through now? Sure.
A: I think my main question here is — or, I hear you saying: yes, let's see if the CNCF already has something going on with Netlify. What's unclear to me is whether we're asking the CNCF to support this directly, or somehow the GCP funds are being used for this, or — what story is it? It doesn't cost anything more? Okay.
I: Well, it seems like this is kind of the clearinghouse for the community agreeing to host things for people. So it is not an expense, but, you know, there's still some semblance of that. And to go with this, we'll most likely want to set up the sites with some of our subdomains, like something.k8s.io.
I: So that's the thing that also needs to be covered, that I didn't finish: we really should have some kind of Netlify administration team, in the sense that we have the GitHub team or the DNS team. Netlify has teams, but for DNS-ownership reasons we pretty much just need to use a team anyhow, so we'll have to have a team — which the docs team on Netlify will need to fulfill — actually creating the sites.
I: And Zachary Sarah from SIG Docs has been doing this as well, and I've been in discussion with them about this. We've also clarified now, as well, that the CNCF is funding this account going forwards, and you basically pay by how many team members you want to be able to have access to sites — we pay for unlimited currently. So even if this team were to grow, because we wanted it to, it doesn't change the price, and we don't use any other features that cost anything. So to me that sounds fine.
E: We have — Jeff spun it up for gcsweb this week, so we're now serving gcsweb over SSL, and the bump in the road for the main site is to move it from a Service load balancer to an Ingress, which isn't really that complicated, but takes a little bit of effort and will require a new IP address, which requires a cascade of changes. We have to move it. It's not in the CNCF's infrastructure though, is it? What?
E: For all the staging stuff, it's just: destroy the project and start over, and recreate the whole world from scratch, and make sure that the scripts are sound. So yeah — being able to just burn down the cluster and create a new one from scripts, and make sure that it comes up with all of the appropriate infrastructure that's needed — then cool. And that, at this point, probably includes cert-manager; I mean, we'll see about that shortly.
E: Honestly, I hold no opinion on that. As part of the original setup of that, the idea was to not make it critical-path on anything — it was really just for governance work. If steering wants to turn around on that, I don't care, but I don't think it actually matters here, because at the end of the day, what we're developing is a set of permissions that are granted to a group. I don't care what the name of the group is, right — but I think multiple groups with the same set of permissions.
K: About that — if we use the public Google Groups for this, if you need to get a message out, or somebody's unavailable, you have this way to email that group, or we have a way to kind of publicly see it, in the same way that we're doing our audits. It'd also be nice to see who is in these groups, and I don't know if that's easier to do through G Suite or a public Google Group. That's, I —
E: I don't think that's any different. The thing that I know off the top of my head that's easier in — or only possible in — G Suite is: you can have groups be part of groups, which you cannot do through the public Groups API. I totally agree with you on transparency. I would love to have the source of truth for membership in the groups be a file in GitHub, which a bot would sync into Google Groups. I've never played with the Google Groups API either, so I have no idea how complicated what I'm asking for there is.
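The GitOps-style sync being wished for here — a file in GitHub as the source of truth, a bot reconciling Google Groups to match — could start with a planning step like this sketch. The Google Groups API calls themselves are not shown, and the group and member addresses are made up:

```python
# Hedged sketch of the git-to-Google-Groups sync discussed above. A file in
# git declares desired membership per group; the bot diffs it against the
# live state and emits add/remove sets. Actually applying the plan would use
# the Groups API, which is intentionally omitted here.

def plan_group_sync(desired, actual):
    """Return per-group (to_add, to_remove) member sets.

    `desired` and `actual` map group address -> set of member addresses.
    Groups present only in `desired` are treated as empty on the live side.
    """
    plan = {}
    for group, want in desired.items():
        have = actual.get(group, set())
        plan[group] = (want - have, have - want)
    return plan
```

Keeping the plan separate from the apply step also gives the transparency asked for above: the diff can be posted on the PR before anything changes.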
A: Christoph, your comment as of January 15th was: I'd like to consider — for moving, anyways — to use our G Suite; the auditing of permissions, administrator recovery, etc. are way stronger there as opposed to public groups. It appears as though using groups in G Suite costs us nothing extra — is that wrong?
A: I'd like to get to the GitOps model, but I think you should explore whether or not the APIs exist to allow you to do that with public Google Groups. And if we find they don't — I think we have another item open about using G Suite anyway, for SIGs and working groups to have a place for their docs to live, so they don't disappear when random people delete their personal G Drive folders and stuff. Oh, and —
E: I'm down for all that; I'm not gonna block on it, though. I think we need a volunteer to step up and drive that stuff, because I think it's a significant effort, honestly — like, it's a whole new API to learn, to do git syncing from. So maybe it's not huge, but it's not like I'm gonna do it this afternoon. Justin — trying not to volunteer? I'm trying to volunteer. I see him — over.
E: I don't know if two weeks from this is gonna be enough time to have the cluster up. Really, once we get this staging stuff up, I'd like to step on the gas on the cluster and make sure that we've got it actually working; it seems like that's really the next big milestone. So I'm not gonna say two weeks from today, but maybe four weeks from today I'd like to have the cluster up. Does that seem fair, Justin? The — the real cluster, yeah.
E: So, an interesting concern of trust domains here. I'd like to be able to — I want to assume for now that we can get a cluster in a trusted-enough place, that's sort of within the family. We trust our own projects, which is all trusted code, to not destroy each other; I do not trust arbitrary PRs from random people — who managed to get somebody to say "okay to test" — to not try to break out of their container.
A: Just another thought off the top of my head: I feel like we don't start talking about repeatability until n is greater than or equal to three. Like, if this is our second cluster, I want to make sure we're not falling prey to the second-system effect. I'd feel a little bit more confident if we burned this next cluster down and created it from scratch again, to know, yeah, for really reals. We've got options, but I'm with you, I'd —
E: .dev seems to be the new cool kid; I didn't — I've resisted the urge to go off and register one.
A: But just so — like, all of Linus's work to get the container image promoter properly running, I would view that as a single card, and we can just sort of track the progress and what's going on there. Looks like the board is set up so any org member can write to it or make changes. So yeah — if you are saying you're volunteering for this, you are my bestie.