From YouTube: Kubernetes WG K8s Infra Bi-Weekly Meeting for 20200122
A: Justin is in town; I don't know where he goes. Hi everyone, I'm Bart Smith, and I will be hosting our bi-weekly community meeting. At the beginning I want to remind everybody about the code of conduct, which we can summarize as "be excellent to each other." Is there someone new who wants to introduce themselves?
A: No? So let's move on. Please add your name to our agenda document, and I think that this week we should start with the billing review.
C: Yes, sorry, give me one moment while I load it up; I didn't get ahead of you. Yeah, of course.
C: So I can look at the Cloud Console. I don't have the billing report PDF in front of me, though.
C: It's a bit different from my report. I see the lion's share goes to DNS queries: a billion and a half queries.
C: Yes, yes. Compute Engine in the Americas is a distant second. That doesn't seem right: over a billion and a half DNS queries. Maybe we have a really low TTL or something; a billion and a half queries in a month... I mean, if we had a one-second TTL... I don't know.
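For scale, a rough back-of-the-envelope (assuming Cloud DNS's published pricing at the time, roughly $0.40 per million queries for the first billion per month and $0.20 per million beyond; treat the exact rates as an assumption):

    1.5e9 queries / (30 days * 86400 s)  ~  580 queries/second sustained
    1000M * $0.40/M + 500M * $0.20/M     ~  $500 for the month

That is enough to dominate a modest bill, which is why the TTL question matters.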
C: Okay. Well, to-do item: figure out if I can dig into that further. Oh, there was a massive spike on one day.
E: Not a release? First day back for everybody?
G: So, until Christoph has checked it: the default TTL is 300 seconds, which is pretty low for DNS in general. So that could be why; anything going viral or something just shows up heavily in our DNS records. A safe thing would be to bump the default to something like 3600; that's usually the lowest we would go on anything that is not the main thing.
G: It's only on 300 because that's the OctoDNS default TTL; we don't specify anything explicitly. I think I only did a rough overview.
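For context, OctoDNS takes per-record TTLs in the zone YAML; a minimal sketch of the kind of override being discussed (the file path, record name, and target are illustrative, not the real kubernetes.io data):

    # zone-configs/kubernetes.io.yaml (hypothetical path)
    www:
      ttl: 3600            # raise from the 300-second default discussed above
      type: CNAME
      value: example-target.netlify.com.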
G: Okay, yeah, I agree. One thing that probably doesn't need to switch, because it's behind an anycast IP, is the one from Netlify; it's also a CNAME to Netlify, I think. So that would probably be the safest one, and it's probably the one that gets the most hits overall, guessing. Yeah.
C: I mean, I'm still eager to hear what Christoph thinks this spike might be, but actually I see three separate spikes that each reach into the 50-or-so range.
F: Okay, so those ones I have a good guess at: the Kubernetes HackerOne bug bounty launched on January 14th.
F: And we have noticed, system-wide, across lots of different parts of our infrastructure, people poking it with sharp sticks to try and see what they can get at.
F: In lots of different ways that I can't say on a public call, but people have definitely been jabbing different parts of the infrastructure with sharp sticks, and that would be my guess for the 16th and 17th, like a day or two after the bug bounty launched: it's probably somebody going "I'm gonna run random characters and try and find whatever DNS entries exist." Yeah.
C: Well, there's a ramp-up from the 15th to the 17th and then a tail-off. I guess that's the 17th; it's hard to tell, there's no data line between the 15th and 17th. There was a sharp uptick, but that's not the biggest one. The biggest one is between the second and the fifth of January.
A: True, that's true. Okay, so let's move on to the review of action items from the last week. Linus Arver, can you tell us a little bit about the current status of the image promoter?
I: So we just merged the testing code for the auditor yesterday. I'm trying to get the Prow job submitted, but I'm also working with Erick Fejta to use Workload Identity instead of service account keys. It's a minor detail.
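For context, the standard GKE Workload Identity wiring looks roughly like this (a sketch; the project, namespace, and account names are placeholders, not the real promoter configuration):

    # allow the Kubernetes service account to impersonate the GCP service account
    gcloud iam service-accounts add-iam-policy-binding \
      --role roles/iam.workloadIdentityUser \
      --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]" \
      promoter@my-project.iam.gserviceaccount.com

    # annotate the Kubernetes service account with its GCP identity
    kubectl annotate serviceaccount my-ksa --namespace my-namespace \
      iam.gke.io/gcp-service-account=promoter@my-project.iam.gserviceaccount.com

The point is to avoid minting and mounting long-lived service account keys.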
I: Rather, I guess, today, because the code is already there; it's really just a matter of creating the official promoter images. Because when you run Cloud Run you have to say "use this container," and we don't want to just use some random containers. So I'll do a build of that, but that can all happen today.
I: Yeah, and then I guess the main remaining thing is doing the migration of images from google-containers to the new k8s-artifacts-prod GCP project. That's just a massive copy.
I: You know, a cp -r of all the images that we have, and they should all just live in the same namespace, at the top level, not under a sub-project, for legacy non-breakage reasons. So I can do that today.
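One way to do that kind of recursive registry copy is gcrane from go-containerregistry (a sketch; the promoter has its own tooling, so take the exact command as an assumption):

    # recursively copy every repository and tag to the new registry
    gcrane cp -r gcr.io/google-containers gcr.io/k8s-artifacts-prod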
I: And after that, I think, from the Google side we have to take care of an internal security issue. We are being told by Google security to make sure that... so, just for more context: there's an internal promoter that we have, and we need to essentially turn that off and make sure that GKE uses private images.
I: That's tied into this process that we're trying to do of shifting the k8s.gcr.io vanity domain from google-containers to k8s-artifacts-prod. It's kind of messy, but we need to do that, because my hands are sort of tied at the moment until I fix that.
I: So that is the current status. Hopefully we can get it done within the next, I don't know, two weeks; I don't know how long.
A: ...it will take. But apart from this issue which you have internally, is there anything we can help you with, or are you blocked by something we can help with?
I: So it's an internal thing, so... unless you're a Googler.
I: Right, so yeah, essentially we need to send a communication email to everybody who relies on google-containers today; anybody who puts new images there. For example, CoreDNS: they put stuff there every once in a while, so they should know about this.
I: So I can create, I guess, a list to start with, because I do have the history of all the new images that got into google-containers for the past, I guess, year. Every one of those people will need to have a sub-project and everything set up, so I can create that list.
C: You need to have a sub-project, yeah. But it would be interesting to have a list; less for that and more for the people who are consuming images in bulk, right? If you are concerned about who has access to images, or if you want to do better auditing on your own corporate side, for whatever distribution, for whatever use cases you have: now is the time to either get on board with the idea that this is going broader, or insulate yourself.
F: So, just to be explicitly clear: number one, existing images on k8s.gcr.io, or existing images in gcr.io/google-containers, are not being touched; the old image references will still exist. (Yes, correct.) Right, so it's only new images going forward that are going to need to follow the new process.
F: So, that being the case, I'd say we should fire something out to k-dev, maybe possibly even to end users, just saying: hey, if you are unable to pull an image after this date that you used to be able to pull before this date, let us know. We'll probably hear about it anyway, but it'd just be providing folks notice; because, ultimately, anybody who wants to publish new images, we're going to hear about that, because they'll come and they'll go through the process.
C: Great point. So how about this, then, as a plan: let's talk to somebody on ProdSec to see if they think it's worth sending a notice. Anybody here on ProdSec? Are you, Christoph?
C: No? We can talk to Tim Allclair or someone on ProdSec, see if they feel like it's worth sending a sort of two- or three-week notice. We'll pick a date; we'll pick a target date.
C: You know, maybe mid-Feb or something. We'll send a notification at T minus two weeks to k-dev that this is happening; we'll send a notification at T minus one week to k-announce; and then on the day itself we'll send another reminder to k-announce, and we'll send it to Slack. And then we'll have everybody who has a Twitter account retweet it, so that the news that this is happening today gets spread far and wide, so that people know where to reach out, and then we'll go from there.
C: Anyway, that's the theory, right? But if any little minor thing goes wrong... This is going to be largely scripted; we're going to run tests against it to verify that the conversion happened properly. But this is where the devil lives, and if we missed one and people start experiencing problems and they don't know where to reach out, they'll be super frustrated. So I'd rather they say: oh, I saw this thing on Twitter where Christoph was saying that this move thing was happening, something something GCR; let me poke him.
C: Okay, so we'll start thinking about the timeline. Yeah, yeah, right.
C: I have totally ignored it through the holidays, and I just came back to it yesterday and realized that there are actually a bunch of PRs open that I need to shift attention back to. So I apologize to the folks who sent PRs; I will make that my focus for all my infra work over the next week.
C: No updates; I haven't touched it. Someone pinged me over the holidays to say they were interested in looking at it. I pointed them at it and basically said: go to town. Look at the data; there's some in there that's really interesting, and there's some that's totally useless. You know, hack away. It's a big gnarly... actually it's not even that big. It's kind of a gnarly script that pokes at every API it can find to try to get information and sort it in various ways. That's it!
J: I've been looking into it, so I had a few questions about that. (Sure.) So the objective of the audit tool is to generate text files that we can compare against the previous versions and see if anything has changed, and so I've sent some up.
J: I wanted to see if anyone had any ideas on what sort of APIs we should query to find the differences.
C: I think those are the most important, and then it's a matter of... it's all wrapped around gcloud. If you poke at the various gcloud commands and see what they dump out, and if it's interesting and might be auditable, then I think it's fair game. Like, before I touched it, it was dumping the contents of all the storage buckets; not a useful thing to audit, because those are going to change all the time, right?
C: One thing that might be good here would be to actually dump it out in a more tabular format. gcloud has some formatting options, or we can run stuff through jq or whatever, to produce more auditable output.
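For example (project name illustrative), something along these lines:

    # flatten IAM bindings into a stable, diff-friendly table
    gcloud projects get-iam-policy k8s-artifacts-prod \
      --flatten='bindings[].members' \
      --format='table(bindings.role, bindings.members)'

    # or sort JSON keys with jq so successive dumps diff cleanly
    gcloud projects get-iam-policy k8s-artifacts-prod --format=json | jq -S .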
J: Yeah, and I was also looking into making it run maybe when IAM permissions change, but I don't know if that would be an option.
C: Yes. I think the end state of this, once we get to a place where the delta report is relatively sparse, is that we should run it every hour or day or something and get a report sent out, as an automatic PR or something, that says: hey, the audit results have changed, a human had better go look at this, right? What it really means is somebody was granted permissions to something, and the textual representation doesn't match the in-cloud representation.
J: And I could run it right after the permissions change, but I don't know if we can get access to that, because I would need to create a sink on the Stackdriver logs. I don't know if that's possible.
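For reference, the kind of sink being described would look roughly like this (a sketch with placeholder project and topic names; whether that access gets granted is exactly the open question):

    # route IAM policy-change audit log entries to a Pub/Sub topic
    gcloud logging sinks create iam-change-sink \
      pubsub.googleapis.com/projects/my-project/topics/iam-changes \
      --log-filter='protoPayload.methodName="SetIamPolicy"'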
C: Yeah, I mean, we can iterate on that. It's interesting, but I'm not sure it's the most urgent thing; if it literally ran once an hour or once a day or something, that's probably fine. And we could run that as a job in aaa once we have a comprehension of what the log should actually look like. (Okay, perfect.)
A: Okay, so I think that was a very smooth transition to the open discussion, because this was, I think, the first topic; am I right? Yeah, okay. So let's switch to another one, from Jason, about guidance for building automated release tooling for a SIG subproject. Jason, can you?
K: Yes. So, just to give some background: we've been working on trying to automate our processes for Cluster API, to make doing things like releases a lot easier. Right now we are building staging images whenever we cut a new tag. We would like to take that to the next level now, and basically get to the point where we can draft a GitHub release with the proper attached binaries.
K: We already have the process; we can already kind of automate those within Prow, and we can automate building the release notes. The one thing that we can't do today is create a draft GitHub release, since that would require GitHub credentials for us to be able to interact with the GitHub API to create that draft release.
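For reference, the GitHub call in question is a single authenticated POST; a sketch (the tag and name are placeholders):

    # create a draft release via the GitHub REST API; the token needs repo scope
    curl -X POST -H "Authorization: token $GITHUB_TOKEN" \
      https://api.github.com/repos/kubernetes-sigs/cluster-api/releases \
      -d '{"tag_name": "v0.x.y", "name": "v0.x.y", "draft": true}'

The hard part is not the call; it's who holds the token, which is the rest of this discussion.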
K: So what I was looking to this group for was some guidance on what's the proper avenue for us to automate that, because ideally we'd like to avoid having to use something like GitHub Actions to do this one step of the release process when everything else is automated through Prow.
L: So far, the pattern for a secure-ish Prow job is to allow arbitrary actions to happen over in GCB, where we no longer care and everything is disposable and one-off.
L: You can have a Prow job configured to create the GCB build from the trusted cluster, so that the chain of credentials is secure, and then you can put whatever credentials and steps you want in there. You'll get a decent-sized VM underneath, you can run containers, and you can actually do a sequence of steps and things. It's a little awkward needing to use both; I'm not personally thrilled by it either, but that's the pattern so far.
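A minimal sketch of that pattern as a Prow postsubmit (the job name, cluster alias, tag regex, and build config path are illustrative, not real config):

    postsubmits:
      kubernetes-sigs/cluster-api:
      - name: post-cluster-api-release-build
        cluster: test-infra-trusted        # run in the trusted Prow cluster
        decorate: true
        branches:
        - ^v\d+\.\d+\.\d+$                 # fire on release tags
        spec:
          containers:
          - image: gcr.io/google.com/cloudsdktool/cloud-sdk
            command: ["gcloud"]
            args: ["builds", "submit", "--config=cloudbuild.yaml", "."]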
K: So that's definitely a route that we can look at. The challenges there would be, one, getting a GitHub token that isn't associated with an individual contributor's account, and the other would obviously be getting those credentials injected properly into the trusted cluster so that we can use them.
L: We don't have a pattern for that yet; we only have a handful of accounts. I was playing around with this with kind and then had to drop it. So far the answer there was to just create a bot account amongst the maintainers, but I'm not sure that's a pattern we want to replicate, or how we want to handle this officially as a project, and we're not actually using that account yet.
F: The closest comparison I can make is either the Kubernetes release tooling, so like anago and stuff, or the publishing bot. The staging-repo publishing bot is a unique account and a unique set of credentials for a specific bot and a specific process.
F: We would want to scope the particular account and the particular credentials to say: okay, you have just as much permission as you need and no more. What we don't really have is a standardized process around how we do that, because right now, the few places where we needed to create a bot account, we've kind of just done it ad hoc, and it's a group of people who we trust that holds it; who we know are the humans behind the bot actor.
F: But that's something that we should probably do on the GitHub admin side. I don't know if you have a ticket open for this already in the k8s.io repo for the infra work, but definitely let's get an issue opened under the kubernetes/org repo, which is where all the GitHub admin stuff happens, and describe it out: basically, you're looking for a GitHub service account to do this particular action.
F: You want it to be able to cut releases on a particular repo, and then we can kind of look at that and see what the best way to handle it is, as far as a process we can replicate if there are other folks that need to do a similar thing, I guess.
L: Yeah, what I'm saying is that if you take, say, bentheelder+nonsense and register a new account with it, there's a high probability that it just gets auto-flagged, and getting it unflagged is not... I actually had to ask a friend for help. I don't think that's going to be a usable pattern. What I'm really getting at is: sure, we can probably make a bunch of Kubernetes mailing lists; like, fejta-bot is backed by a Google group, but we're...
F: ...a password manager; that's where it'll loop back to this group, as far as: hey, we need to manage password-manager and two-factor credentials and that kind of stuff. But that's not necessarily the starting point. The starting point is talking with the GitHub admin team, talking with GitHub itself, and figuring that problem out.
K: So our goal is that the main human interaction should be to push the tag. Once the tag is pushed, then, obviously, we already have the staging images being built; we'd have the release notes generated, the artifacts generated, and the draft GitHub release published, so that pushing the tag basically initiates everything. Then it's just a matter of the image promotion, modifying the release notes, and publishing the release notes; you know, actually making the release public.
C: Sure. I mean, I'm thinking out loud, and we should probably time-box this, but, thinking out loud: if the build process produced everything a human would need, but the human had to go to the UI and say "create the tag," "now create the draft release," would that be an appreciable step forward, or is that just not a good stopping point?
K: So, right now we're at the point where we basically have to push a tag, which triggers the automated image builds to staging. At that point, we manually run a release-notes generation tool, generate the artifacts, and then we basically copy and paste those into a draft GitHub release.
F: The other option here, and maybe, Jason, we can sync up after the meeting, because the other option that I've used before with decent success is actually GitHub Actions, to do the things that you were literally stating, as far as release-notes generation and drafting the release. Then it would just be a matter of connecting together all the artifacts stuff that's happening in Prow with the actual GitHub stuff for publishing the actual release through the GitHub Releases API.
K: Yeah, that's definitely our alternative if we can't automate this through Prow somehow. However, that basically requires, if we want additional community contributors to be able to participate in this process, managing permissions in two different places for different parts of the release process: the permissions for Prow versus the permissions for GitHub Actions and all of that. Not to mention two different systems to learn. The ideal thing would be to just point people to one place for the automation rather than two, but...
F: Doing it over in Prow isn't necessarily easier, even if it's centralizing it, because you're not technically running it on Prow; you're using Prow to generate a GCB job, which might be even more complicated and more to learn otherwise. Because the thing is, the permissions to see GitHub Actions are the same permissions that you need to push a tag. So if somebody has permissions to push a tag, they'll have a permission set for GitHub Actions too.
A: Okay, so I think we can close this topic for now. There's only the last one, which I created, about the subgroups, because I want to get back to this idea and move things forward. The thing is that, after my long...
A: I think that we no longer have the subgroup for the image promoter, because it's almost at the end. What do you think we should do to get these people, who had the initiative to help, back to us, and get this help from them?
A: If that's the case, which part? Because we divided this into the first, second, and third stage of the subgroups, and we are in the middle of moving the small projects to the new cluster. Where do we need help, and where can we actually drive these people to?
F: So, the biggest thing here, and this is an opinion from me, but the problem is I don't have the cycles to actually drive this to completion right now. The thing that we could really use help on, and that I see biting us in the end, is that we have too much bash in the k8s.io repo: all of these "ensure" scripts and stuff that are great, but it's going to bite us at some point, and the way that we're growing it doesn't feel maintainable.
F: My personal gut feeling is to move some of this stuff over to Terraform, so that we have a Terraform config that says: okay, here's what things look like; we have these projects, and they have these resources in them. And templating some of that stuff out, so that it's like: okay, we need a new project with a staging bucket.
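A minimal sketch of what that could look like (the project and bucket names are made up for illustration):

    # staging.tf -- one GCP project plus its staging GCS bucket
    resource "google_project" "staging" {
      name       = "k8s-staging-example"
      project_id = "k8s-staging-example"
      org_id     = var.org_id
    }

    resource "google_storage_bucket" "staging" {
      name     = "k8s-staging-example-artifacts"
      project  = google_project.staging.project_id
      location = "US"
    }

Templating then reduces adding a new staging project to filling in one name.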
C: So I think somebody sent a PR or something around this over the holidays; I just haven't gotten a chance to look at it, part of it because I'm afraid of it, because I don't know Terraform that well, so I don't know what best practices are in this case. But, as much as I want to chide you for being prejudiced against bash...
C: ...I agree, actually. I have spent time myself looking at how to refactor this bash, and it ends up looking a lot more like Terraform, where there's some basic level of inheritance and some template, and you fill in the blanks for a particular project; and as I started writing that, I went: I really ought not do this. So I'm with you. It needs an owner who knows Terraform, who knows where the pitfalls are, who can say: don't do this, because that way lies madness; do it this way instead.
L: The other downside to bash is that it's really expensive to review and catch all the bugs in, and with Terraform we'll have the ability to compare the actual state against the desired state, which will be pretty handy, I think.
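That comparison is the core Terraform workflow (sketch):

    terraform plan     # diff the desired state (config) against the actual state
    terraform apply    # reconcile, once the plan output has been reviewed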
H: We at Jetstack have all of our infrastructure, and Google Groups and stuff like that, managed via Terraform at the minute, as well as GitHub membership and so on. So we've also started to run into things like individual subdivisions within the company...
H: ...wanting to manage their own infrastructure and roll that forwards and kind of keep it all in sync. Some of the places where we've run into issues with Terraform are specifically around state management when you've got multiple people working on things, because you have your state repository, which can be a GCS bucket or something, and it has locking and whatever else.
H: But you end up either with people applying that manually from their machines, or there's a project called Atlantis, which we've just started evaluating, which is GitHub automation to do Terraform plans and Terraform applies and so on. There's also the new Terraform Cloud service, which actually looks pretty interesting too, where it will actually deal with generating those. I think you can do audits as well, and show your plans and your applies as they go along, and it definitely has tiers for open source.
H: But I can try and either gather some feedback there, and/or write up some details on what we've been doing to make that better. We've been moving things into Terraform modules to make it easier to distribute and to have different people working on that repository. So I'll try and write something up, or get someone within Jetstack to get involved with this, because I think we might have some insight to share; at the least, we've been burnt a few times.
H: I don't know if we've been burned the most times, but we're still being burned, so, yeah. But I would also say that, despite that, it is definitely better than bash, having gone through the similar pains of: oh, I just wanna do this one thing, so I write some bash, and then I spend two hours reconciling state and going mad building auditors.
F: Right now, our current state is: we have some Terraform. We use Terraform for the clusters; aaa is currently spun up by Terraform. And we have, at least sort of, solved state: we have a bucket that we're using for it. So we've got the very basics needed for Terraform infrastructure already in place; it's just a matter of moving and importing what our current stuff is into something. But right now, a lot of these privileged actions...
F: ...we have kind of small trusted teams who are actually running things. And for me, as one of those people who has credentials and has permissions to go and hit buttons on a keyboard, to go and run things after somebody PRs something in: there are certain scripts right now where I'm like, okay, I know everything that that script does; I feel really comfortable running that script. And then there are other scripts where I'm like, I am terrified about what's going on in that script; I don't even want to touch it.
F
I
don't
even
want
to
run
it.
I'm
going
to
leave
that
one
for
tip
so
like
if
we,
if
we
move
to
something
where
we
had
a
lot
of
this
stuff
done
via
terraform,
at
least
for,
like
the
google
infrastructure
side,
the
groups
stuff
the
tool
that
we've
written
in
go
to
handle
our
google
groups.
I
actually
have
a
great
deal
of
confidence
in
because
I've
like
rewritten
it
three
times
so
the
tool
that
we
have
right
now.
C: I agree, although I feel more comfortable with the shell scripts than you do, having rewritten them three times. The problem that I'm facing is that I'm on the cusp of rewriting them a fourth time, and at this point they're delicate enough that if I do that, I will almost certainly get something wrong; and if I'm going to get something wrong, I'd rather get it wrong in the right direction than get it wrong in the wrong direction.
C: So, yes. I'm happy to talk with you about sort of my thoughts on the state of where that shell script stuff is, and how I would think a reasonable minimal set of Terraform would model it. I don't know the Terraform syntax for how to spell it; that's where I'd really need the help.
H: Yeah, that sounds good. Yeah, cool.
A: Let's set that up. Okay, so that sounds like a plan. Is there any other topic we want to discuss today? We have about six minutes more.