From YouTube: Kubernetes SIG K8s Infra - 2021-10-13
A: Hi everybody, today is Wednesday, October 13th. This is the Kubernetes SIG K8s Infra bi-weekly meeting. I am your host, Aaron Crickenberger, also known as Aaron of SIG Beard, or spiffxp at all the places. We are going to adhere to the Kubernetes code of conduct during this meeting, which basically means we're going to be our very best selves. You can watch yourselves do that when this meeting is posted publicly to YouTube later.

A: Well, why don't you tell us a little bit about... wow. What's your name? Tell us a little bit about yourself. What brings you here? What are you interested in?
C: My name's Ben, and I work as a software engineer at a company that I started with some friends of mine, sort of in the fintech space. We use Kubernetes and really like it. I think someone posted this group as a sort of place to join and help, and it felt like a nice place to join and help. I guess I might be entirely out of my depth and completely wrong, but fingers crossed.
A: All right, well then, everybody feel free to speak up at any time. So the next thing we do regularly is go through our billing report, so I'm gonna click on the Data Studio report link in the meeting agenda and I'm gonna share my screen.
A: Yeah, so this is probably really tiny. I wonder if I can blow it up. Oh cool, I can. So Cloud Storage is the bulk of the cost for our k8s artifacts project, which is apparently... and some binaries. Next up is the GCP project that's used for the 5000-node scalability tests. Next up is Compute Engine, which I think is actually network egress costs for the artifacts that are hosted in Cloud Storage, and then fourth place is the Prow build cluster, which I could go back and take a look at how it's changed. I don't think it is that; if anything, I thought I had helped make the costs go down over time.
A
Let's
filter
down
the
search,
because
by
moving
to
local
ssds,
it's
actually
slightly
cheaper
and
slightly
more
performant
to
run
the
builds.
I
thought
builds
might
run
a
little
bit
faster,
but
I
suspect
we're
just
going
to
see
that
the
volume
of
prs
went
up
over
time.
A: So the TL;DR is, I think, that we still feel like reducing the cost of our artifact hosting is priority number one. Are there any other questions about this?
A: This, I think, might actually be a rate limit being applied to me as a viewer of this report, because of the way the billing data sources are set up. I don't know. Okay.
A
To
to
maybe
make
this
clear
on
video
go
august,
15th,
so
we're
sort
of
looking
at
the
last
few
months.
I
feel
like
there
is
slightly
more
usage
in
august.
I
can't
explain
entirely
where
that
usage
is
coming
from
the
first
place.
I
would
start,
though,
might
be
the
slight
difference
in
maybe
the
number
of
jobs
that
were
kicked
off,
and
so
I
would
probably
want
to
go.
Take
a
look
at
like.
A
At
this
point,
I
would
either
take
a
look
at
the
builds
data
set.
That's
in
bigquery
or
I
might
scrape
the
kubernetes
jenkins
gcs
buckets
to
figure
out
or
or
I
would
probably
use
spyglass
to
go.
Take
a
look
at
the
build
history
for
the
jobs
that
we
know
would
use
this
project
and
see
if,
like
more
of
them,
have
been
kicked
off
than
usual
or
something
okay.
A: The 80k in August is still the leftover bump from that cluster that I'm keeping around, because we're...
D
Yeah
kind
of-
and
I
think
if,
when
you
finish
the
microsoft
of
the
scalability
jobs
we
we
are
supposed
to
even
get
less
than
that,
because
we
use
now
the
scalability
pool
for
yeah
for
the
preregic.
A: I'm gonna stop my share. Thank you for asking that question. Or no, I guess I should say up front: my plan is for us to run a little bit longer than usual. So if you need to hop off at the usual time, that's super cool, but I figured, since KubeCon is happening, I'm willing to take some extra time to go down tangents and answer questions and generally ramble, and watch Hippie cook himself breakfast. Gosh, that looks so delicious. Okay, okay, wait!
A: Action item review would be the next thing we do, but I see that's on the agenda, so I'll jump to that: any news from CNCF legal on GDPR compliance? I currently have no news.
E: Right, I have CNCF legal... it's not even CNCF, it's LF legal overall, it's not a...
A: Broadly speaking, I think I wanted to see if Amazon was willing to participate in the cloud native credits program and, if so, try to get an account provisioned that way. Failing that, I wanted to try and unpack the access that we currently have: see who has the credentials to that (I think it's probably just in Santa Barbara, and one or two other people, likely) and expand that to the same set of people who have access to the GCP admin credentials.
A: And it seems like the Cluster API folks managed to unblock themselves a little bit.
A: Okay, jumping around on the agenda, I wanted to jump up to a progress report on 1.23, and I know we just spent a bunch of time talking about the 5000-node scalability job. Do you want to update us on the rest of the migration of the scalability jobs?
D: Yeah, basically, I'm still working on migrating the scalability jobs, and in the last five days we had an issue with quota on some project. It was not really very well defined, so I started to raise all the quotas for the third project dedicated to the scalability jobs.
A: Okay. The other thing that I put up there... I wish I had more to report, and I'll just sort of run a little bit more through it.
A: Cool. Where we stand: some of these are the trickier ones. The real easy one, that we finished already, was all the images that are scheduled for end-to-end tests inside of Kubernetes clusters; those have since been moved. This isn't even the issue for that; this is all of the images that run CI builds of Kubernetes, that are generated by CI.
A: These are now community-hosted. Cool, so we're standing up clusters for e2e tests using community-owned images. Basically, what I'm trying to get at is: we have a lot of corner cases. This is the one that's all of the images that are scheduled to clusters as part of our end-to-end tests.
A: I think we're basically at the point where we need to announce a deprecation window for this. However, there are a number of repos throughout the Kubernetes project that aren't kubernetes/kubernetes that still rely on the google.com-hosted images, and we need to tell them to stop doing that.
A: Claudiu had done a good job of opening up a couple of piecemeal PRs. I suspect this is probably too small for screen sharing, but since the word "csi-driver" is in a lot of these, I think that the Kubernetes CSI folks, or the SIG Storage folks, tend to have one repo that has the scripts and release tools and stuff that are used by all of the CSI repos.
A
We
gotta
figure
out
which
repo
that
is
and
change
change
things
there
and
that
will
probably
change
what's
used
in
a
bunch
of
these
other
repos.
Failing
that,
somebody
needs
to
go,
make
a
bunch
of
pull
requests
to
each
of
these
repos
to
get
them
to
use
the
correct
registry.
It's
basically
just
a
search,
replace
said,
regular
expression,
whatever
thing
so
this
one's
a
pretty
easy
issue
for
new
contributors
to
work
on.
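Since the fix really is just a search-and-replace, a minimal sketch of what it could look like inside a repo checkout; the registry names here are illustrative placeholders, not the exact registries discussed in the meeting:

```shell
#!/usr/bin/env bash
# Sketch: rewrite references to an old, Google-hosted registry to the
# community-hosted one across a checkout. Registry names are illustrative.
set -u

migrate_registry_refs() {
  local dir="$1"
  local old="gcr.io/google-containers"  # assumed old registry
  local new="k8s.gcr.io"                # assumed community registry
  # Rewrite every file under $dir that mentions the old registry.
  grep -rl --exclude-dir=.git "$old" "$dir" | while read -r f; do
    sed -i "s|$old|$new|g" "$f"
  done
}
```

Run inside each affected repo, review the diff, then open a PR per repo.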
A: There's a GCP project that hosts a number of these, called k8s-test-images, which is owned by Google, and I've got here a list of every single image that lives there and needs to be migrated, along with links to, or brief notes about, what seems to use each image and how to move it. About half the images we managed to already take care of, or they were unused, which is super cool.
A: The biggest thing that I can't really take care of super quickly, that I could use a lot of help with, is to bump the version, to change the image that is used by the majority of our Cloud Build jobs. So we have jobs all across the project that use Google Cloud Build in order to securely build Docker images and push those to container registries, which ultimately get promoted to our production artifact hosting. And the image that they use within Cloud Build right now... the majority of them use an image that comes from google.com.
A: The main thing that's blocking progress on this is just that there are a lot of repos, and so I think anybody who is handy with a scripting language and the GitHub API could probably script opening up a ton of pull requests against all of these repos at once. Or, it's a really easy thing to open up a pull request one repo at a time. It's a good way to make sure that you can go through our contributor workflow: getting the CLA signed, seeing how tests run against PRs, all that sort of stuff.
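For the scripted route, a hedged sketch using the GitHub CLI; the repo names are hypothetical, and the commands are only echoed so they can be reviewed (a dry run) before actually running them:

```shell
#!/usr/bin/env bash
# Sketch: open the same PR against many repos with the GitHub CLI.
# Repo names here are hypothetical; the real run would first clone, edit,
# commit, and push a branch, then run the printed command without "echo".
set -u

open_prs() {
  local repos=("$@")
  for repo in "${repos[@]}"; do
    # Dry run: print the command instead of executing it.
    echo gh pr create --repo "$repo" \
      --title "Use community-hosted registry" \
      --body "Part of the registry migration umbrella issue."
  done
}
```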
A: For anybody who does this, I'd recommend using a PR description that's kind of like this, if not exactly like this, where it links back to the umbrella issue, the reason we're doing this work (like, why is this needed), and a link to the previous pull requests, to help people sort of follow the trail of bits: what was the previous thing that happened that's causing you to do this? Once we take care of all of these...
A: There are all of the other images that don't have their checkboxes checked up above, which are kind of going to need to be figured out on a case-by-case basis. Some of these are real easy search-and-replaces; for some of them, it's figuring out where or how the image is even used, things of that nature.
A: The remaining things are the trickier corner cases in our end-to-end tests, like this project, k8s-authenticated-test, which, if I remember right, is about making sure that we can pull from an image registry that requires authentication during end-to-end tests. And this is something that Hippie and his team, I think, are going to try taking to SIG Node to look into, instead of using a Google Cloud registry with a hard-coded password.
A: Basically: why not try standing up a registry inside of the cluster and then set up the e2e tests to pull from that registry? This would be a great way of reimplementing this test, because it's more portable, and it also wouldn't have to hit Google Cloud all the time, and it would work in air-gapped environments... where I legitimately don't know how they pass this test in air-gapped environments right now.
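As a rough illustration of that idea, a minimal in-cluster registry could look something like the following; the names are assumptions, `registry:2` is the stock Docker registry image, and the authentication setup the test would actually exercise is omitted:

```yaml
# Sketch: a minimal in-cluster registry the e2e test could pull from,
# instead of an external registry with a hard-coded password.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-registry
spec:
  replicas: 1
  selector:
    matchLabels: {app: test-registry}
  template:
    metadata:
      labels: {app: test-registry}
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: test-registry
spec:
  selector: {app: test-registry}
  ports:
  - port: 5000
```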
A: So there's kind of a long tail of stuff to do here. I still think it is reasonable to get all of the end-to-end test related stuff done by the time the 1.23 code freeze comes into effect, which I believe is November 16th, about a month from now. Some of the other stuff, like the job images and the Google Cloud Build stuff, might take longer than that, but it would be really cool to get the big checkbox (I've lost it on this issue) checked by then.
A: Let me stop my share. I'm gonna hand it over to Arnaud, because I've run out of breath.
F: Sorry, oh no, sorry! I moved it down because I presented it last week, my mistake. So if you want to discuss it, we can quickly go through it. I thought it somehow got accidentally copied over; my mistake.
F: Thanks for asking that question. Today at KubeCon, in the keynote, Priyanka actually launched the cloud native credits, and there's a page (I'll look for it just now and I'll share it) where you actually apply for credits.
E: Okay, so it'd be best if we went ahead and applied through the CNCF credits program. And, with my CNCF hat on, I'm gonna help be the humans behind the scenes, advocating and working with Lachlan for the credits that are donated, making sure they're stewarded the same way, so that we're gonna have very similar access. But yes, please, please apply through the credits program, as chairs of SIG K8s Infra.
E: With Lachie. And the next step is... it's kind of funny, let's close both of them, I'd say: open the ticket, open the thing, and refer to this, so that we have the... I'm sorry, I'm in Zoom and I'm not used to it. How do I get back? I'm on my phone; this is new for me.
D: Okay. I'm asking also because I felt like we might claim those credits and move the Cluster API jobs to AKS, so we lift some costs off the GCP credits.
A: Well, so, to be clear: I feel like what is happening right now is that Microsoft employees are contributing to the Cluster API Azure thing, and they're using a Microsoft-owned set of credentials, and so Microsoft is...
A: ...paying the bill for the Cluster API Azure project. Okay, and this is an invisible cost of the project. What's not happening is the use of those credentials for, like, other arbitrary projects, as far as I know today. Like, whether or not Kubernetes works successfully on Azure is not something that can block the release of Kubernetes.
E: I think the whole Cluster API ecosystem could benefit from a cost analysis, and from knowing where the resources, the human resources, are going, because those are things that the vendors themselves are not going to tend to advocate for or give a lot of priority, whereas the health of our ecosystem as a whole depends heavily on that actually being a high priority for us as a group.
A
Like
we
need
help,
this
is
kind
of
like,
unfortunately,
like
boring
work.
I
guess,
but
like
I
I
don't
have
the
time
or
bandwidth
to
oversee.
Do
we
actually
have
the
credits
hooked
up?
I
certainly
have
the
I
look
at
this
as
like.
Well,
I
have
that
this
billing
report
that
we
just
clicked
through
for
gcp,
where
we
saw
all
the
costs-
and
I
know
for
for
a
fact
like
how
we're
using
all
of
that.
A
I
don't
have
that
today
for
amazon-
and
I
don't
have
that
today
from
microsoft
and
we
have
kind
of
two
open
issues
about
like
do.
We
know
how
to
like
make
sure
we
have
enough
of
that
and
do
we
know
how
much
of
that
we're
using.
I
just
need
somebody
to
like
take
that
on.
I
don't
know
anything
about
how
azure
does
its
building
stuff.
I
sure
hope
that,
like
we
don't
have
to
have
another
custom.
Crafted
database
thing
that
scrapes
from
to
create
a
billing
report,
or
maybe
maybe
we
do.
A: Just to recap what I heard you say, Hippie: you need either... you need one of these SIG leads to go through the cloud...
E: ...our repo, yep. And we'll eventually have a similar thing, I assume, for Amazon as well, because then I'll have a formal... it's an informal relationship at this point; this formalizes it, or I can actually step in and go: okay, I've been told, and this is how you're formally telling. And it's something that will work for other projects as well, and I agree absolutely, and I think it's on ii to help get those reports going for Amazon, and for... I mean, it may not be us that actually does the thing.
E: But it's on us to try to make sure that, because we have that access, we try to model it. It's probably partly a permissions problem as well, so going through the whole PII thing... and, like, we can just go through it, because it's ii, so internally we can get that done. But I'd love to have that similar model of figuring out what the PII things are for each of those orgs, so that there's that transparency involved as well, with our reporting.
E: Oh, you're right, sorry. Well, it's the logs; as long as our reports are high-level.
E: You're right. I just mean it's on us to try to get those reports going, the first one being a cost analysis per project. And I've tried to engage with each of the cloud providers: I just need a way to isolate each project, so that I can stop the cost if it gets too much, and be able to know how much is being spent, so that we can see, over this month's period, what the granularity is, so that we can have that visibility, particularly for transparency to the budget, like we have for Kubernetes, but also so that, you know, it's not starving out the other projects as well.
D: Okay, I think I will leave the amazing task of finding the form for the credits to Riaan.
D: Cool. Oh, okay, so the next item is: enable auto-upgrade of the Terraform provider through Dependabot.
D: We can define the schedule, and if we set it to weekly, we will get an upgrade of the Terraform provider every week. There is no impact on the state, except we just need to be careful about the changes introduced in every upgrade, which means we need to have people reviewing the pull requests.
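A minimal sketch of what such a Dependabot configuration might look like, assuming the `terraform` package ecosystem; the directory path is illustrative, not the repo's exact layout:

```yaml
# Sketch of a .github/dependabot.yml entry for weekly Terraform provider bumps.
version: 2
updates:
  - package-ecosystem: "terraform"
    directory: "/infra/gcp/terraform/kubernetes-public"  # assumed path
    schedule:
      interval: "weekly"
```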
A: My problem is this: it looks like a really low-effort review, because it's just changing some numbers, but really, what I need to do in order to review a PR like that, based on our current limited understanding of how to work with Terraform, is read the changelogs for that thing, which I don't think are automatically linked. Maybe I'm wrong.
A: For sure. My concern is also (and again, correct me; this is me expressing my limited understanding of working with Terraform effectively) that I don't know how to know for sure that nothing's gonna break, unless I am at least capable of doing a terraform plan. And I also kind of feel like I'm signing up to be prepared to mitigate breaking changes, if I do a terraform apply and something goes wrong.
D: So, the one thing I want to do with this is, I want people volunteering to do the review. Like, if we can have people reviewing... I'm looking at Jim, and I say: hey, if Jim wants to review Terraform code... we don't have a lot of Terraform lines of code, but if people are interested in reviewing that, we can ask some reviewers and say: okay, every change, someone looks at the changed lines and says, oh, this is something deprecated, we should upgrade.
A: I'm wondering something. I'm super in favor of it. I think this is another example of where I need help: I need help from somebody who has used Terraform, to look at the way we're doing stuff and say whether we're doing it right. I'll even take...
A: ...that type of criticism. But I recognize that we don't really have staging environments with copies of our infrastructure that we can automatically run live tests against, to make sure that we're not going to mess anything up when we deploy these changes to production. For better or worse, we're running terraform apply against the one production that we have and hoping that it doesn't break. And so, when we start to increase the number of Terraform changes that we do, even though we're doing the responsible thing of making them smaller and more granular...
A
If
the
amount
of
manual
toil
and
human
review
that
we
have
to
do,
stays
large
and
does
not
go
down
and
is
not
automated
away
by
tooling,
we
trap
ourselves
in
doing
a
lot
more
review
where
I'm
not,
but
that
that's
just
that's
me
extrapolating
way
too
far
ahead.
I
agree
that,
like
just
bumping,
the
terraform
dependency
is
a
good
prompt
for
us.
We
already
have
other
things
that
bump
dependencies,
so
I'm
super
in
favor
of
that.
A
I
think
I
was
just
using
it
as
a
lens
or
an
illustration
for,
like
we
don't
the
most
testing.
We
do
of
our
terraform
is
terraform
validate,
which
I
think
is
like.
Is
it
syntactically
valid?
I
don't
think
we
have
anything.
That's
like
did
you
know
you
are
using
a
deprecated
field,
or
did
you
know
this
feature
goes
away
in
the
latest
version
of
the
terraform
google
provider
stuff,
like
that,
the.
D
Other
parameter
form
is
even
when
you
use
the
duplicating
is
not
failing
the
terraform
plan.
It's
just
telling
you
it's
a
program.
So
if
you
enter
those,
if
you
use
that
in
in
pro
job,
you
may
see
things
pass
and
maybe
break
things
over
time.
So
that's
what
I
want.
Basically,
because
we,
we
are
very
simple
about
the
usage
of
terraform
right
now.
I
want
just
a
simple
bump.
H: Yes, I'm interested to learn more about how it's implemented, with what you guys are doing, just to be able to help out a little bit more.
A: I'll walk through that. It may run long, but if you need to leave, we'll have it on the recording.
A: We don't trust Prow enough yet to have it do anything; it's still Arnaud or myself, or, yeah, technically Dims can, but I think it's basically just Arnaud and myself.
A: I can just start running through it now, but I would like to give time to Riaan's agenda item as well. Okay.
F: All right, mine is real short. We made a really simple issue just to get feedback from the group. At the moment, the APISnoop website is updated manually, by running a script every so often, and we are thinking that it might be useful to have a prow job automate that. Just to get your feedback: would that be available for the...
A: ...resource. I want to throw a little policy work at y'all about APISnoop, and that is that it is currently living in a Kubernetes organization... or, sorry, it's living in a GitHub organization that is not part of the Kubernetes project. I think you all need to decide if you want APISnoop to be a CNCF project, or if you want it to be part of the Kubernetes project.
A: If you want it to be part of the Kubernetes project, where I believe it is principally used, I'm super cool with us managing infrastructure to make APISnoop run better, and all that stuff. But if APISnoop is intended to be sort of a general, agnostic thing that's used for more than Kubernetes, then it's not part of the Kubernetes project, and therefore we should not be footing the bill for its infrastructure.
E: I'm in a cafe, so it'll be a little bit noisy. I think, initially, we had thought about trying ways to generalize it to be used elsewhere, but it is definitely going to be Kubernetes-only, because it's a database logging the calls to the API server. It's all k8s, and I don't see it even being able to support our other conformance efforts, like within Envoy; it's just not the right fit. So I think the best place for it is inside the Kubernetes organization, as a project there.
E: I just... I don't know the steps for that. I guess it'd probably be good to research what it would look like... whether it would be the CNCF donating APISnoop to the Kubernetes project, I don't know.
A: Yes. The way I would start this conversation is by opening up an issue in the kubernetes/org repository. There's an article about transferring repositories from elsewhere to the Kubernetes project, and that's...
A: We just... we don't own that k8s-conformance repo, that's all I'm saying. So, okay, so there's that. As far as your pattern... I've lost the issue now, hang on, let me find it.
D
So
I
have
a
question:
is
it
which
they
need
to
own
this?
Because
I
feel
like
api
missionary
is
the
best
sig
to
own.
A
Because
this
is
used
for
the
conformance
sub
project
in
the
same
architecture
as
the
conformance
project.
E: I mean, I think we don't need a conformance call for it; I think it's fine there. I think it's actually just going to SIG Arch, yes, on the mailing list, and saying the CNCF would like to donate the APISnoop project to, like I said, just to the Kubernetes project. What is the...
F: ...once that's over, we can then look at the terraforming and automation of the update.
A: It would be Terraform in this case. I want to share my screen real quick and say: it seems like what you're asking for is, we have a bunch of files that live inside of a GitHub repo, and what we would instead like to have is a website that people could visit, and something would be in charge of keeping that website up to date, and that website could be based on files that live inside of that git repo.
A: So now we're talking about automation to keep a git repo up to date. Or, that could be a set of files that live inside of, like, a Google Cloud bucket or something, and a prow job would be responsible for regenerating what lives in there based on the contents of various git repos. This is very much like a request to try and keep up to date a website that is based on the metadata about various KEPs. So I would recommend that you take a look at this issue.
A: This also has a link to pull requests that implement what's proposed here.
A: Essentially, I described how there are a number of different patterns we could implement. But if you truly want two different sources of truth, where you've got some source files in git, but then you have, like, a Google Cloud bucket somewhere that just serves a website, either directly or through, like, nginx redirects or what have you, I've got a PR that is an example of putting the GCS bucket in the kubernetes-public project and then tying together service accounts to give Prow the appropriate permissions to write to that bucket.
A: Can we do that? So, there are some suggested patterns, and there's a pull request linked off of that which implements one of those patterns, and I'm waiting to hear back from the enhancements subproject on who wants to own that and manage that infrastructure.
A: So, with whatever time you have remaining, I wanted to try and walk through, I guess, the way our Terraform is set up, a little. And, like I said, it's gonna run long, and if you gotta bounce, that's totally fine; the recording will be extra long.
A: Inside of here, we have a bash directory, which contains a ton of bash scripts that manage the bulk of our infrastructure. We have a terraform subdirectory that contains most of our Terraform, and then we have a static directory for static files that can be copied around for our infrastructure. This feels like a wart, and it's weird, but it's here. So I'm gonna start by going into our bash directory.
A: Like I said, bash came first; it's the thing that Tim Hockin and myself are most comfortable with, and I'm gonna go to the ensure-main-project file. So, just a brief description of what is in here... where's the README? Maybe it's in here. Hey, look: further down in the README, inside of the infra/gcp directory, I sort of tried to describe the way the bash is laid out. The idea is, there are a bunch of scripts that start with the word "ensure"; those are in charge of managing a set of infrastructure.
A: So, in the case of ensure-main-project, it's going to manage everything related to the main project, which is kubernetes-public. We call this, I guess, the main project because it has a lot of resources, including the aaa cluster, where most of our infrastructure runs. It has a number of critical GCS buckets; it's where we have the dataset that contains most of our billing data.
A: It's the main project; it's where we kind of dump everything. Sometimes I wonder if we should segregate things more, but this is the pattern that we started with. For all of our bash scripts, to try and keep them minimal, we have a set of reusable functions that live inside of lib files. There are other things in here; I'm not going to read them all and narrate them all.
A: There was a thought I had at one point in time that maybe YAML could be the wonderful database bridge between our bash and our Terraform. If we could get both of them to read configuration data from YAML, we could ensure that there's sort of one source of truth for things to change. So, for example, today, if you want to add a new staging project, you add it to this list, and then this script file here, ensure-staging-storage, is the thing that's in charge of actuating this list, or making this set of projects real. This is totally a case of a human wrote some YAML; it's not super consumed by our automation, other than that, right now, a project has to be inside of the YAML file for it to be usable by our bash or our Terraform, I think.
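As a sketch of that single-source-of-truth idea (the file name and project names here are made up, not the repo's actual contents; only the `k8s-staging-` prefix follows the naming convention mentioned in the meeting):

```yaml
# Sketch: one human-maintained list that both the bash "ensure" scripts
# and Terraform could read as their source of truth.
staging_projects:
  - k8s-staging-foo
  - k8s-staging-bar
```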
A: So, ensure-main-project: most of these things are written such that they source in the libraries up top. They have a bunch of global variables. The variable that I'm interested in is this Terraform state bucket entry. So, the idea is: since Terraform stores its state in a file, we want that file to be someplace remote, so that multiple contributors can update that state, rather than it living on my laptop.
A
We
are
not
using
hashicorps
commercial,
offering
where,
like
we
check
in
and
lock,
and
all
that
stuff
with
state,
we
just
have
a
copy
of
the
state.
It
lives
in
gcs
so
as
to
ensure
that
like
or
maybe
back
up.
One
of
the
things
about
terraform
is
that
it
stores
everything
in
the
state,
including
sensitive
values
like
keys
and
passwords
and
credentials.
A
And
so
we
have
this
fancy
naming
scheme
here
where
buckets
are
all
prefixed
with
the
word
caten
for
tf
is
a
hint
to
us
humans
that
it's
cades
infra
manage
gcs
buckets
it's
for
terraform,
and
then
we
try
to
have
like
a
name
that
fits
this
naming
scheme
here.
A
I
hope
one
day
to
have
all
of
our
different
kinds
of
projects
live
in
folders
so
that,
instead
of
managing
iem
credentials
and
all
that
sort
of
stuff
on
a
per
project
basis,
we
could
manage
it
on
a
folder
and
have
that
hierarchically
descend
down
so
kate's
infra
tf
public
clusters
bucket
is
intended
for
a
public
project
in
this
case
kubernetes
public,
and
it's
intended
mostly
to
work
on
clusters
within
the
public
project.
A: We want org admins to be able to swoop in and manage our infrastructure with Terraform if needed, but the idea is to try and scope the set of resources inside of the Google Cloud bucket... or, to try and scope the set of resources that Terraform is managing, to something that logically makes sense for, like, just the Prow people to have access to.
A
What
I'm
trying
to
imply
here
is
that,
if
you
have
access
to
this
bucket,
you
should
theoretically
have
access
to
be
able
to
create
and
destroy
all
of
the
resources
that
the
terraform
state
in
this
bucket
describes
projects.
Google
compute
instances,
gke
clusters,
gcs
buckets
all
that
sort
of
stuff.
A: It reads that array, it splits things up, and it uses some of our library functions: to ensure there is a GCS bucket that is private, not public; to ensure that a group of people can admin that GCS bucket; to ensure that the org admins can also admin that bucket; and to ensure that... let's see... oh god, GCS permissions are weird. We need to ensure that the people who can admin the bucket also have the ability to list the bucket, for reasons that don't make any sense to me. Anyway.
A
Okay,
having
set
all
of
that
up,
that
was
really
in
service
of
enabling
me
to
show
you
the
kubernetes
public,
terraform
module
and
specifically
this
file
here,
which
sets
up
our
terraform
provider.
A: This is what Dependabot would bump for us. And then this thing here uses the buckets that I just showed you bash creating: so we're using the bucket named k8s-infra-tf-public-clusters, and then, within that bucket, here is the path to the state file that we're using, and we try to describe all this in the comments above. So this might be the worst... this might not be the best one to walk through, because it's also kind of the biggest grab bag of things.
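The remote-state arrangement described here corresponds to a Terraform `gcs` backend block roughly like the following; the bucket name is the one mentioned above, but the prefix is an assumed path, not the repo's exact value:

```hcl
# Sketch: Terraform state kept in a shared GCS bucket instead of on a laptop.
terraform {
  backend "gcs" {
    bucket = "k8s-infra-tf-public-clusters"
    prefix = "kubernetes-public" # path to the state within the bucket (assumed)
  }
}
```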
A: Let me think; I'm trying to find where the project is defined. So, notice I said there was a bash script called ensure-main-project, which is actually responsible for creating a GCP project called kubernetes-public and for creating some of the things in it. And this Terraform started out as something that just managed a GKE cluster within the project, and it did that by doing things like creating service accounts, setting up some IAM bindings specific to this cluster, and then actually defining a cluster using a Terraform resource.
A: The level of comfort that we have with bash versus Terraform: bash is freely available everywhere, and we were more comfortable scripting our invocations of gcloud than we were working with Terraform's various resources.
A: Others, like myself, have had Terraform experience and have had a bad time with it accidentally deleting things. It could be my own inexperience with Terraform, and Terraform has moved on quite a bit since I last used it, but in the interest of using what we know, we started with bash. And I think where we're at with Terraform is a good medium place, but, like I described, we don't really have effective testing of our Terraform.
A: I guess it's not that we really have testing of our bash either, but it's a little bit easier to sort of think our way through a bash problem, I think. Not too many people are currently contributing to this.
E
D
There's a function for that, but I never really — there is. I, like — I personally —
A
There's this thing where sometimes GKE will do things automatically behind the scenes, and terraform, in the early days, was not very friendly to things automatically updating the state of resources in a way that did not match up with the state file that terraform maintained. It would go back and reconcile to the — like, "no, stop shuffling things around in the real world, it should look exactly like this in my static state file" — and it would go undo changes that we wanted to stay out there.
A
That's how it can accidentally — that's how it has, for me, in the past, accidentally deleted things. But I think it's gotten a lot better and a lot more cooperative, certainly with GKE. I know this is in part because Google helps fund the development of the Google provider. I think most of the —
A
Cloud providers do this for their terraform providers. Anyway — for example, we have this lifecycle prevent_destroy thing, which didn't exist when I last used terraform, which makes sure that even if you run terraform destroy to delete all the terraform resources, it won't actually delete the live GKE cluster. Oh, you have a hand up.
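The guard being described looks roughly like this sketch — the resource name and settings are illustrative, not the repo's actual configuration:

```hcl
# Hypothetical sketch of the lifecycle prevent_destroy guard.
resource "google_container_cluster" "example" {
  name     = "example-cluster"
  location = "us-central1"

  lifecycle {
    # A plan that would delete this resource (including terraform destroy)
    # fails with an error instead of removing the live cluster.
    prevent_destroy = true
  }
}
```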
D
I just want to say that we are very open about how we want to manage k8s-infra right now, because there are some aspects — like the staging projects — that I think we don't need to migrate to terraform. So basically, until we finish the migration from Google over to the community organization, we'll maintain both bash and terraform, because basically, right now, as Aaron said, we have over —
D
A
Yeah, I would agree. I mean, the way I would expect to implement this in terraform would probably be to create a staging-project terraform module that I would then reuse, or I might instantiate it using a for_each block that reads in all the projects from YAML. The tricky case that — oh my god, this is so bad — the tricky thing that we do right now in our bash is that we have special cases for some of the staging projects.
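The "for_each block that reads in all the projects from YAML" idea might look like this sketch — the module path, file name, and fields are all assumptions for illustration, not existing code:

```hcl
# Hypothetical sketch: instantiate a reusable staging-project module once
# per project listed in a YAML file.
locals {
  staging_projects = yamldecode(file("${path.module}/staging-projects.yaml"))
}

module "staging_project" {
  source   = "./modules/staging-project" # assumed reusable module
  for_each = { for p in local.staging_projects : p.name => p }

  name   = each.value.name
  owners = each.value.owners
}
```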
A
We have these functions that have a double underscore in the middle of them, and they start with the words "staging special case" — so, for example, we make sure that the cluster-api GCP project has it —
A
It gives the instance admin role to a specific service account, and it specifically enables a prow build cluster to use this project. In retrospect, I should probably have a link to the issue that describes why this gets a special case and why we're not doing this for all staging projects in general. But our bash basically looks at the name of the project that it's working on, and then, if these functions exist, it will magically do the special case.
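The dispatch pattern being described can be sketched in bash like this — the function and project names are illustrative, not the real ones in the k8s.io repo:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "staging special case" dispatch pattern.

# Special-case hook, only defined for projects that need extra setup.
staging_special_case__cluster_api_gcp() {
  echo "granting instance admin role, enabling prow build cluster"
}

ensure_staging_project() {
  local project="$1"
  echo "ensuring staging project: ${project}"
  # If a staging_special_case__<project> function exists, call it.
  local special="staging_special_case__${project//-/_}"
  if [ "$(type -t "${special}")" = "function" ]; then
    "${special}"
  fi
}

ensure_staging_project "cluster-api-gcp"  # runs the special case
ensure_staging_project "another-project"  # no special case defined, no-op
```

The trick is that `type -t` reports "function" only when the derived name is actually defined, so projects without a hook just fall through.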
A
For that — I'm sure there's a way to do this with terraform too, but yeah, I kind of would agree with the "if it ain't broke, don't fix it" sort of thing. It does annoy me sometimes when I have to change what we're doing with the staging projects, and then I basically have to rerun all this bash, which takes a non-trivial amount of time to pull off — but it works okay.
A
But okay — so that was an example of terraform managing a cluster. But, so that we don't have one gigantic multi-thousand-line terraform file, we started sort of breaking it up into different terraform files that describe, like, all of the resources that are going to be used for this thing called kettle. Some of these resources are reusable modules defined elsewhere in this repo, which I can get to, as well as, like, IAM policies and so on and so forth.
A
D
A
It really depends on what you're doing: GKE-related operations take a while; other stuff can take a minute.
D
A
So this is an example of me — I run these meetings from my personal computer, because my employer does not like Zoom running on my work laptop, so I'm using my personal account here; I cannot use my work account. And lo and behold, my personal account does not have the ability to update the main project.
A
I did this so that the more secure thing is the thing that's allowed to have access to this, but I also kind of use my personal account as a sock puppet that has less access, so I can make sure that people who do not have org admin privileges are capable of doing stuff. One place I do have access is one of the prow build sets of resources — so we're doing vi now, apparently. Again, it's a set of resources that basically describe how to set up a cluster.
A
E
A
So what I define here looks, hopefully, a little more straightforward about what we want to change and configure. I'm saying that I want a node pool that has two ephemeral local SSDs; I'm also giving it a hundred gigs of standard persistent disk. It's an n1-highmem-8 instance, and it runs the Ubuntu containerd image, and that is hooked up to this cluster, which is defined above and which is sitting on the regular release channel.
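A node pool with those properties could be sketched roughly like this — the names are illustrative, the machine type is an assumption, and `ephemeral_storage_config` may require the google-beta provider depending on the provider version:

```hcl
# Hypothetical sketch of the node pool described above.
resource "google_container_node_pool" "example" {
  name    = "example-pool"
  cluster = google_container_cluster.example.name

  node_config {
    machine_type = "n1-highmem-8"
    image_type   = "UBUNTU_CONTAINERD"
    disk_type    = "pd-standard"
    disk_size_gb = 100

    ephemeral_storage_config {
      local_ssd_count = 2 # two ephemeral local SSDs backing pod storage
    }
  }

  management {
    auto_upgrade = true # follows the cluster's release channel
    auto_repair  = true # node auto-healing
  }
}
```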
A
So it's updated regularly — regular is more stable than the rapid release channel but less stable than the stable channel. What this module ends up expanding to — the set of raw resources — looks very similar to what I was showing earlier: there's a defined maintenance window, and we have auto-upgrade and node auto-healing turned on.
A
So this just sort of magically updates every so often to the next known good version.
D
Yeah, so we don't really touch a lot of the terraform, because there's the auto-upgrade system. What we change, most of the time, is that we basically activate features on the cluster or we integrate a new feature — for example, the local SSDs — but most of the time we don't really touch those clusters. Yeah.
H
So my question probably falls under the "if it's working, don't touch it," but have you looked at the Google-provided module for GKE? And the reason I ask —
B
H
Okay — we switched to it. I mean, it took away a lot of the issues that we had with, like, having to keep our self-written modules up to date with the new features; you just kind of point to the new version, and they do a good job of maintaining support.
D
I'm holding off on this, because I've spent a lot of time looking at the evolution of those modules. It takes a lot of time to understand them; there's a lot of hidden abstraction in those modules. So, personally, I prefer that we have our own modules, because they're easy to maintain and we don't want to expand the usage on the GKE cluster. We could use the official modules because, on the terraform side, there's a dedicated team inside Google maintaining the provider and those modules.
H
D
Yeah, but you have a dedicated team of people looking at those modules, and you have, I'm guessing, 10 GKE clusters to manage, or operate like 60, see — we have just three clusters, and that's it. So I feel like I kind of want to even chop the modules from the GKE cluster and just use raw resources, because we don't have a lot of things.
D
A
Like — I mean, visually you can see there are lots of patterns, and that's really cool, but it's also pretty dense stuff, and I just — whatever, maybe I should just plus-one everything. Like I said, I don't want to have to read through all of this to understand what's happening. I feel like — and I'm happy to be corrected or proven otherwise — our modules are simpler.
A
E
A
These have opinionated defaults.
H
A
— are awesome and super robust. Some of the modules that are provided by Google are far more opinionated, and I don't know that those either align with our opinions, or — like, they might be really good opinions, but we would have to rework the way we have everything laid out in order to fit with those opinions, and it is not clear. I —
A
G
A
Sorry, I was just gonna say: if you have some that you do prefer to use, or that have, like, useful defaults or something, we could take a look at it — because you're right, we could import those modules into our modules and then apply the same set of defaults. It's just that knowing where everything comes from, and having to manage fewer things for just our limited set of use cases, feels slightly like the right trade-off to make right now.
D
Also, another aspect I forgot to mention is onboarding. If we want to onboard people in terraform, I prefer we use our modules, because they are very simple to understand, versus asking someone to go understand the official GKE modules, which is like weeks of reading and understanding what's done inside those modules.
A
Yes — I was going to show some of our modules, but I think I'll come here.
A
I am trying to think in terms of: what does the YAML data structure look like that is more intuitive or understandable for describing some of the common patterns we're seeing in our infrastructure? So that when I point contributors to "hey, do you want a thing? Write some YAML," maybe that's easier and more approachable than "hey, do you want a thing? Write some terraform."
A
See, like this — this feels like, even if I didn't know terraform at all, I could sort of understand that: oh, I want an IP address, and I want it to be named infra-prod, and I want it to be a v6 IP address. I'll just copy-paste this thing and rename it for my application, because I want an IP for my application — and this looks an awful lot like YAML.
A
E
A
Like — so, we wanted to try and have a terraform module just for, like, a project that's going to host a GKE cluster. Surely we have an opinionated set of defaults that we want to encode for that, and we use that everywhere, and, like, a lot of it is —
E
A
A certain set of services enabled — and I just copy-pasted the google_project_service resource over and over and over, and changed the name for each of the different services. I'm like, this is a for_each; you could absolutely redo this as a for_each, I know that now. Anybody who knows their terraform could totally open a pull request to do that for us. Anyway, I think I'm going to kind of —
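Collapsing those repeated google_project_service blocks into one for_each would look something like this sketch — the service list and project id are illustrative, not the real configuration:

```hcl
# Hypothetical sketch: one for_each instead of N copy-pasted resources.
resource "google_project_service" "enabled" {
  for_each = toset([
    "compute.googleapis.com",
    "container.googleapis.com",
    "storage-api.googleapis.com",
  ])

  project = "example-project" # assumed project id
  service = each.value        # one resource instance per service
}
```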
E
A
My — wait, wait, wait. So, just to bring home the — the set of resources in these modules, we're trying to scope them so that, logically, you can own these. So, aside from the modules directory, everything else in here describes a set of resources that live inside of a GCP project. The idea is: if I have owner privileges on the k8s-infra-sandbox-capg project, I can use this terraform to manage everything that lives inside of this project.
A
So, unfortunately, for the kubernetes-public thing, we don't have a lot of people who have owner access to everything in here. But the idea —
E
A
For people who need our infrastructure, we can scope you to a project that is more just about the infrastructure that you need for your SIG or your subproject or whatever. Like, we've got all the prow stuff in its own set of projects, and so people in the k8s-infra prow oncall group can manage everything in here. It's another example of, like, defining secrets in a YAML-ish type of way, and this is automatically set up. Anyway —
A
I'll stop talking now — I feel like we've run a half an hour over. Maybe a more rehearsed version of that, with some slides as guideposts, might be useful, but I don't know. AMA — any, any questions?
D
So we want to focus on simplicity and, let's say, automation — that's the main focus. It's not about saying we want to use terraform; the main question is how we use a tool to simplify the automation process for us. Because we are not a company, we are an open source project: Aaron is doing this on a best-effort basis, I'm doing this on a best-effort basis, and all of you — I can't ask you to work on this full-time, because you need to pay the bills.
D
A
About a month ago — this is probably out of date now — I tried to write down roughly what I think the state of our infrastructure is: where it all lives and what it all is. So I've zoomed way —
A
I drew this using a tool that allowed me to specify some of this in, like, sort of a JSON-like, tree-like syntax, but it is not an open source tool. So it's not clear to me whether I can show the rest — like, if I can show the syntax. And I really wish I could lay this out more intelligently: you can see there are lines going all over the place, and what do they even mean, and can't I annotate them?
A
I have tried to look for other open source diagramming tools that could help me build this. I haven't found one that quite meets the bill just yet, but the idea is that with this we can visualize how we have, like, the k8s-infra-prow-build project in yellow here, and then inside of that we have a GKE cluster called prow-build, and then inside of that we have a couple of sets of kubernetes resources that sort of cluster together into single groups.
A
So, like, the kubernetes-external-secrets app that's responsible for making sure we have secrets in these clusters, or the Boskos rental service — oops — that makes sure that, like, we have end-to-end projects available for each of our e2e tests. We also have the k8s-infra-prow-build-trusted project, which has a cluster that looks similar; and then, like, our "aaa" cluster is somewhere over here — recall I titled it the community infra cluster. Then you can see, like, I've got little boxes in here for all of the apps that are currently running in there.
A
The source is — it is closed source at the moment, because it was drawn using a Google-proprietary tool. So it's not clear to me whether I can share the source for this or not.
E
It looks super cool. I kind of want to study it a bit and see what — if we can't do that, then let's find some other diagramming thing. So I need your help.
A
I want to use an open source tool with open source "source" so that people can edit the diagram, or maybe we could even do something wilder, like generate the diagram. Yeah — I feel like generating the diagram doesn't produce a lot of value... I bet there's something that can automatically generate graphs from all the terraform resources that we have, but I don't think it would be quite as meaningful without a human applying layout and stuff, so we might have to —
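For what it's worth, terraform itself can emit a raw dependency graph in DOT format, which Graphviz can render — a minimal sketch, assuming terraform and Graphviz are installed and using an illustrative module path:

```shell
# Hypothetical sketch: auto-generate a resource graph from a terraform module.
cd infra/gcp/example-module        # illustrative path, not the real layout
terraform init -backend=false      # initialize without touching remote state
terraform graph > resources.dot    # DOT description of the resource graph
dot -Tsvg resources.dot -o resources.svg  # render with Graphviz
```

As noted above, the result is machine layout — every resource and edge, with no human-applied grouping — so it tends to be less meaningful than a hand-drawn diagram.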
A
E
A
So I think where I landed in this investigation was that maybe something using PlantUML could be neat. Alternatively, Ilograph could be really neat, but we need to work out whether or not we want to try using a paid tool, but —
G
A
E
I'm not able to hold the tools as often as I'd love to, but Steven and Caleb are hands-on and know how to hold the violin now. Yeah — let me go grab Stephen real quick, because we've done some really cool stuff, both on the presentation-style side and with the areas. It's a small office.
E
This is super cool — Aaron's got an SVG that gets generated that has all of the layout of the diagram. And I know that we've got some cool stuff that we've been doing inside of our org, where you have code blocks that are a diagram. I want to see if you can help me pull that up, and then also the presentation-style stuff, because of this other thing he was just showing around — Ilograph.
E
A
So, like, one of the more tangled bits that Arnaud has been working on a little bit — and I keep mentioning — is this BigQuery dataset where, like, all of the information about every single prow job that's ever been run for the past four or five years lives. It gets there by way of a tool called Kettle, which runs inside of a GKE cluster inside of a GCP project called k8s-gubernator.
A
Kettle gets its data both from a set of GCS buckets that live inside of the kubernetes-jenkins project and via notifications that are sent via Pub/Sub. And then prow.k8s.io is the thing that is responsible for writing results into this results bucket, as well as the other build clusters — I don't know if I have a diagram, I mean a line, showing that, but, like, build clusters also end up writing their results into this bucket.
So, when I talk about, like, getting this thing — this kettle builds dataset —
A
So, like, this is — and then we have an issue that represents migrating over the infrastructure in the k8s-gubernator GCP project, and that issue basically has a checklist for each of the boxes in here: these are all the things that need to be migrated over. We have a GitHub issue for each of these things.
D
A
Okay — I can probably help with this. But anyway, this was — yeah, this was me spending a couple hours, or a good chunk of a day, around the beginning of September, end of August.
A
Hopefully I can go through and refresh this in, I don't know, a month or so, and see where we're at. I will stop there.
E
A
I think I'm gonna go ahead and stop the recording here — it's been an hour and 45 minutes — and I...