From YouTube: gitlab-com-infrastructure planning
Description
https://docs.google.com/document/d/1heTNQrF7Jy4J_nYleo2zqBSgmiQ2Wy55NBZJrJrPWYE/edit (internal)
A
Okay, this is the gitlab.com infrastructure planning meeting. We're going to discuss the future of the gitlab-com-infrastructure project and also how it relates to the repeatable database work.
A
So I thought it would be good to talk about this a bit, to see whether that's a good idea, and if it isn't, why not. I think there are a lot of merits to having a monorepo for terraform, and possibly a monorepo for terraform and ansible together. So maybe we can discuss what it would look like to get there.
A
I put some notes here about what I just discussed, and in number two I proposed the scope of what we're talking about. I think we don't have to do all of this now, although it would be nice, maybe, to consider some of the other items that go beyond just moving the db provisioning stuff into gitlab-com-infrastructure. For example, I would really like to move the project itself under the gl-infra group.
A
I guess on the scope, I just wanted to go through these one by one. For 2a, I think everyone probably agrees that that's a good thing for us to do eventually; there's no controversy there. Can anyone think of a reason why we couldn't just do that now and move it on ops? We would just move it to a new location on ops.
A
Think
I
should
do
it
then,
okay,
so
that
could
be
like
an
easy
thing
to
do
at
least
on
ops
and
then
for
2b.
This
I
would
love
to
do
as
well
and
we
did
it.
You
know
I
did
it.
A
I did this for the chef-repo and it wasn't that big of a deal, but the one difference here is that gitlab-com-infrastructure uses `changes`, and we all know that when you use `changes` on a mirror it's just a big problem: whenever you create a branch, it can't determine what the changes are when you use branch pipelines, so `changes` becomes useless. That's a big disadvantage. Does anyone have any ideas for how to overcome that?
C
Maybe what we can try and do, and this is just a half-baked idea I just thought of right now: when we trigger a pipeline on the mirror we use variables, so we could try to mimic `changes` with variables, passing the list of files that changed as CI/CD variables. But again, this is just a half-baked idea I just thought of, so I'm not sure.
A
Do you know, steve, whether when we merge... I know that when we initially create the branch we can't determine what changed, but I think that's generally okay, because the only thing we're doing there is looking at plans. It's when we merge to master that the changes become really important, because this is where we're actually applying them. When you merge to master, do we also have the same problem, or is it only when you initially create the branch?
C
I think it's only when you initially create a branch, because on master, if we use the merge commit, we know the parent and child, so we can do the diff there and find it. But I think confirming that through a small project might be best. As far as I know, on master we do know the diff, because we have the merge commits.
D
Similarly, related to variables, there's a way we can do dynamic pipelines. Could we even have a job that just does a manual git diff between master and the current commit, finds the files that changed between master and the current commit, and then creates a dynamic child pipeline based off that? I mean, it's hairy; it's basically recreating the `changes` functionality in a bash script ourselves, but we've seen...
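The job sketched above could look something like this; a hedged sketch only, where the `environments/<env>/` layout, the `tf` wrapper command, and the job names are all invented for illustration, not the real repo structure:

```shell
# Hand-rolled stand-in for `rules:changes` on a mirror: diff against the
# default branch, map changed files to environments, and emit a dynamic
# child pipeline. All paths and names here are illustrative.

# Environments whose files differ between a base ref and HEAD.
changed_envs() {
  base=${1:-origin/master}
  git diff --name-only "$base...HEAD" \
    | awk -F/ '$1 == "environments" { print $2 }' \
    | sort -u
}

# One plan job per changed environment, written as child pipeline YAML;
# a `trigger: include: artifact:` job would then run the result.
emit_child_pipeline() {
  for env in $(changed_envs "$1"); do
    printf 'plan-%s:\n  script:\n    - tf plan environments/%s\n' "$env" "$env"
  done
}

# Usage in CI (not run here): emit_child_pipeline > child-pipeline.yml
```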
B
This is exactly the same thing: we're just building this contraption, you know. We've got woodhouse and we just keep adding onto it, and my personal experience with it is that it can be pretty rough around the edges. If you just want to do something, sometimes it's a lot of extra overhead. And you know, I've been around a long time; I know what's going on.
B
It must be ten times worse for new employees, and for people from outside the infrastructure department who are trying to contribute. And I agree, maybe we should be trying to. This is not something we would block on, but we should be putting more pressure on the products, like distributed gitlab, whatever you want to call it, distributed repos, something like that. You know, we've built a product on top of a distributed version management system; maybe we can have distributed MRs.
D
Yeah, and I think the big thing with the issue I'm thinking of was that it wasn't necessarily that it couldn't be done. It was that, oh, if we change this, it will break existing users. So it's like, well, can't we just add a flag? It was really kind of, yeah.
D
This is something we could probably do, but there was never any push to do it and it just got put on the backlog for the product. I don't know if there's a way we can engage and say: no, we really want this, it would make our lives a lot easier. But even if we do get that, it's not going to fix the problem for us right now.
A
I think the problem also is that merge requests... sorry, branch pipelines are really unique to us, because we're using mirroring, and this is not something that other people are going to find too useful, perhaps. Although I know that there were other people commenting on that issue outside of gitlab who said it was a problem. But I don't know. So then, to move on, should we just say we skip 2b for now and shelve it?
F
I had one question first: how are we doing that right now with kubernetes workloads? Because we have a similar case there, right? We don't do it. We...
F
Cross-notify on the ones that we detect a dirty plan on, potentially.
A
Yeah, I think it's a symptom of... but yeah. I think the unfortunate thing would be that you make, I don't know, a benchmarking environment change, and now you have to wait for the production dry run, which has a thousand resources in it, which is a pain.
C
But you can skip that using DAG pipelines: have the production pipeline only depend on production jobs, and another environment, for example dns, only depend on dns jobs, with the `needs` keyword. And gitlab CI now supports stageless pipelines, so you can just use the `needs` keyword and that's it.
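The DAG shape described there might look roughly like this in `.gitlab-ci.yml`; the job names, environments, and `tf` commands are placeholders, not the real pipeline:

```yaml
# Stageless DAG: each apply needs only its own environment's plan, so a
# dns-only change is never stuck behind the large production dry run.
plan-dns:
  script:
    - tf plan environments/dns

apply-dns:
  needs: [plan-dns]          # waits only on the dns plan
  script:
    - tf apply environments/dns

plan-production:
  script:
    - tf plan environments/production

apply-production:
  needs: [plan-production]   # independent of the dns jobs
  script:
    - tf apply environments/production
```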
A
We would just have that spinner, like we have now for the db bridge job, which waits for the mirror to complete. I would say, given this, maybe we should shelve 2b for now; it's something we can come back to. And maybe say 2a is in scope here, just to be prepared for when we move it on the dot-com side. And then 2c: steve, you're the closest to this, I think, because you did the recent jsonnet move. Do you think this is something you can take on?
C
Yeah, I can probably work on it this week or next, because the other jobs, the ones for the environments, are already migrated to rules; I did that as part of the json generation. So there are only five jobs left, and I could try and do that fairly quickly. The `only`/`except` rules on those jobs are fairly simple, to run only on pipelines for merge requests, so it should be a small change.
A
And then 2d is an ask from me, because I'm using this approach for db provisioning. I can just show you what it looks like, if you haven't seen it. I like the approach because it's a lot cleaner than using the acyclic graph approach.
A
I just have a pipeline here, and we have a child pipeline that's generated for each environment. I click on the environment and that generates a local child pipeline, and then from there, in the case of db provisioning, we have database shards. These are my ansible plays, which all allow failure, so they skip right to the terraform stuff, and then terraform runs prepare, deploy, and optionally destroy.
A
If that makes sense, right? You always want to run the prepare job to see what changes are pending, but I also may just want to run the ansible plays right away. And what I envision is that, for our vms, we could even add maintenance plays and other things here, you know, any kind of operational task.
A
What do you guys think about this? Is this something we may want to consider doing? I guess one difference here is also that the environment doesn't run until you hit the play button. And I don't know if you can use `changes` on child pipeline triggers; if you could, then maybe this could work for that as well. I just don't know.
C
I looked at the documentation there and it does support rules, so since it supports rules, I would expect it to support `changes` as well.
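If `changes` does work under `rules` on trigger jobs (worth confirming on a small test project, as suggested earlier), a per-environment child pipeline trigger might look like this; the paths are illustrative:

```yaml
# Trigger the dns child pipeline only when dns files changed.
dns:
  trigger:
    include: environments/dns/pipeline.yml
  rules:
    - changes:
        - environments/dns/**/*
```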
D
I think, yeah, look, the dynamic pipeline stuff looks good to me. I may not be across all the issues and the problems, I'll freely admit that. But part of me, and maybe this is just instinct, I don't know, maybe irrational is the word, is still wary of mixing terraform and ansible in the same repo.
D
Is the reason we really want to do that so you can get that full, just one end-to-end pipeline, to say create a server and then run ansible on it? Is that really what we're trying to do, to be able to do everything in one pipeline? Because to me they're totally two separate tools, with different languages, that need different linting and different testing, and trying to put them in...
A
I can give you an example. Sure, we're using gsm, google secrets manager, for db provisioning, and there are two things I need to do. One is to create secrets.
A
The bootstrap secrets, where I create dummy values in terraform. The second thing I need to do is to update my ansible config so that these secrets are exposed as variables.
A
Then, when I do the make generate, that creates a variable for terraform that has a list of secrets, and it also creates the ansible config that exposes those secrets as variables. This is something that's super nice, right? If I didn't have them in the same project, I'd have to have that json...
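A minimal sketch of that generate step, assuming a plain-text list of secret names as the single source of truth; the file formats, the `gsm` Ansible lookup, and all names are invented stand-ins for the real `make generate`:

```shell
# One list of secret names drives both outputs, so the Terraform side
# (create the secrets) and the Ansible side (expose them as variables)
# can't drift apart. Everything here is hypothetical.
generate_secret_config() {
  secrets_file=$1   # one secret name per line
  tfvars_out=$2     # Terraform variable: list of secrets to bootstrap
  ansible_out=$3    # Ansible vars file exposing each secret

  # Terraform: a list variable naming every secret to create in GSM.
  {
    echo 'bootstrap_secrets = ['
    sed 's/.*/  "&",/' "$secrets_file"
    echo ']'
  } > "$tfvars_out"

  # Ansible: expose each secret as a variable via a (hypothetical) lookup.
  while read -r name; do
    printf "%s: \"{{ lookup('gsm', '%s') }}\"\n" "$name" "$name"
  done < "$secrets_file" > "$ansible_out"
}
```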
B
And lots of other languages. And obviously certs are another example, if you want to use that.
D
Yeah, no, I'll freely admit, maybe it's, as I said, just an outdated traditional feeling. Certainly what you're saying makes a lot of sense to me.
F
I think there are also kind of two aspects to this. One of them is the monorepo: how do we organize the code. The other one is the workflow side of things, right? We could treat them as two separate discussions. I'm personally in favor of both of those being more unified.
F
I mean, I'm generally a fan of monorepos in general. But I think, if we put the monorepo thing to the side and just think about the workflow, it seems really powerful to me to be able to, on a higher level, have a pipeline that is not constrained to a single tool, so that we really get that end-to-end single pipeline.
D
Yeah, look, fair point. As I said, I freely admit it was more just a gut reaction than anything actually sensible. Would we consider, then, and I don't want to muddy the waters with this now, but could we consider the kubernetes stuff potentially becoming part of that monorepo in the future as well? Would there be value in doing that? Because there are a lot of bits we do in terraform that we then have to match into the kubernetes piece.
B
All the arguments for having the other two together make sense for kubernetes as well, yeah.
A
Yeah, I brought that up in point eight. I think it makes sense. The only worry I have is keeping our jsonnet sanity, you know, keeping it sane, because if we have a lot of different ways of generating different types of config in jsonnet, I think it could get hairy.
D
I was going to say a monorepo should help. I agree: we're all using jsonnet right now, which is great, and we're using it a lot. My worry, especially as I'm starting to use it on the kubernetes side, is that we're all doing it slightly differently. So unification into a repo, unification of the versions of the tools we're using, and maybe even a better understanding, a unified way of using this, I'd be all for.
B
Just also, to your point about generating lots of stuff: I think that's one of the reasons why, in our previous discussion, I was saying that if you can have most of the terraform still in tf files, and it's just the deltas between environments that are jsonnet, but not everything, then people who understand terraform can go look in a file and see terraform, and it's not just reams and reams of jsonnet.
B
There's still your terraform, and the way we've done it with horse is we've still got the helm files that look like helm files, but the values are generated, right? So it's not like all the yaml comes from jsonnet, which kind of gets a little bit overbearing sometimes. The deltas between the environments are generated by jsonnet, not everything, and I think it makes it more manageable.
B
And then the same with the terraform itself: we don't generate the resources in jsonnet, because I think that would get quite hairy quite quickly. We keep the resources as terraform, and then we just use variables and counts to modify things, to say this environment has got this resource, so we give it a count of one, and this environment doesn't. Then you can read the terraform very easily and see what the differences are.
B
You know, it's not... but yeah.
A
Yeah, okay. I mean, I took that a step further and I'm generating some resources with jsonnet too, but I think andrew's right: it starts to get hairy, especially since looking at terraform config as json means you have to look at it sideways, because you're not used to it. I don't like it that much. So yeah, I think andrew's point is a good one: maybe we should just avoid that unless we absolutely need to.
D
At the risk of sidetracking this discussion, and I'm going to really try not to: does this problem also get easier the more stuff we migrate to kubernetes? Our terraform should get smaller. And perhaps part of the problem, not necessarily the problem, but we've got things like dns and metrics entry management via terraform; does that stuff even make sense?
B
So there's a similar argument with what we do in the runbooks repository, where we've got one bunch of scripts that update pingdom, of all things, then another bunch of scripts that update grafana dashboards, and another... and half of them are bash, the other half are go, you know.
B
I think it was actually originally igor's idea, but if we took all of that and put it into terraform, it would be much less overhead. So I think almost having everything in terraform, rather than having ten different ways of doing it, would be better, and that might apply here as well.
A
Yeah, I don't know, I don't think I have a strong opinion on this, except that I think a monorepo would be nice. I mean, I'm feeling the pain of having recording rules done by runbooks while I'm trying to create new environments that have prometheus within the kubernetes cluster, and now what do I have to do?
A
I have to apply the recording rules from runbooks, and I'm thinking, okay, maybe I'll just check out the runbooks project into my project, but then it's like, oh, it's terrible, right? So yeah, it sounds like everyone here is on the same page: a monorepo is good and we should at least try to make it work.
A
Okay, is there anything left that we haven't touched on that we should talk about before I come up with the plan? I think what I'll do is come up with a plan that does 2a, 2c, and 2d through 2f, so basically everything except the move off ops. And steve, I would love it if you could pair with me on this, if you have time.
C
Yeah, I think I should have time. Right now my priorities are the rollout of osquery to production, because there needs to be some movement there for compliance reasons and that kind of stuff. But I can try and sync up with scan on that and see what can be done. Feel free to ping me at any time and I'll try to help out as much as possible.
A
And andrew, I mean, we're already using jsonnet-tool for gitlab-com-infrastructure.
B
So one thing that we did as a hack, which I actually really like, and I think this is probably controversial, so feel free to disagree: on the runbooks repository we check the generated yaml in. I've caught so many mistakes because I've run the delta and seen, oh, I didn't expect that change, and I think that has saved so many production outages, at least for myself. I've gone, oh gee.
B
I didn't expect that. Your test suite would have to be massive to cover all of the things that you can spot by doing that. So that's the one thing. But then, and this is also something I'd been thinking about generally, maybe we should be building our own ci templates that we include, so we could have one that does that.
B
One that does a check of the generate, right, and runs through all the files and makes sure that generate has been done, and, I don't know, a bunch of things like that. We could include that in the jsonnet-tool project, perhaps.
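As a sketch, that shared check could be as small as this; `make generate` is the convention from the discussion, and everything else is illustrative:

```shell
# Re-run the generator, then fail the job if the checked-in output differs
# from what it produces, i.e. someone edited a source file without
# committing the regenerated output.
check_generate() {
  make generate >/dev/null
  if [ -n "$(git status --porcelain)" ]; then
    echo "generated files are out of date; run 'make generate' and commit:" >&2
    git status --porcelain >&2
    return 1
  fi
}
```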
B
Yeah, I totally agree.
B
That made sense, but the overhead in figuring out what's going to happen is really high; it's not something you want to do unless you absolutely have to. But also, we can generate a second yaml file, like you say, with make generate, and actually check that in, and then it just becomes so much easier to understand what's going to happen when you push that to a gitlab server. I mean, the amount of time I spent...
B
While I was doing that, dealing especially with the dag stuff: some of the error messages that you get with dag are super cryptic, that needs a lot of work, and you're sitting there trying to figure out what the hell has gone wrong and it's a nightmare. So I would definitely say, if we can check that in, it would be a better practice.
D
Yeah, I was going to say: I think also the whole make generate thing, checking in the source and the destination file with jsonnet, I found really useful as well. Obviously we're talking about `changes`, but you can use the ci `changes` keyword to do things on that, and you can do code owners, you can make better code owners usage. One thing...
D
I've learned from using gitlab: all of the features work really well off files in git, but not off dynamic stuff running in ci jobs. If you can commit something to git, gitlab functionality makes perfect sense: what file matches what environment, who owns it, whether it's a binary, all of this stuff works if you commit the output as well as the input. So we should try and push for that as much as possible.
B
I agree. Another thing to think about is that jsonnet is really there to help us be more productive as infrastructure engineers, and what we were doing before is we were checking in those config files and doing those as part of the mr. So we're just kind of extending that, helping increase our own productivity, but still checking that config in and getting the same benefits that we would have had if we were hand-coding those files.
B
The one thing I would say is: with jsonnet-tool we started off with one subcommand called yaml, and now it's got a second one called render, I think, and that one kind of mandates that all the jsonnet files are basically multi-file output, so each has got the file name that it's going to generate and then the contents. I think we should just standardize on that, because it gives you flexibility.
B
I think it's much easier, and in future we could even build tooling to figure out dependencies and stuff like that, because the jsonnet file has got the output file in the declaration. So yeah, I think we should just standardize on that: jsonnet render.
A
Okay, so then I'm going to propose: I'll take the infrastructure project template and update it to use jsonnet. There isn't much to the ci, because it just uses woodhouse for the mirroring and stuff like that, but I'll update it to use jsonnet-tool and establish where the jsonnet files are located. I know some people put them in .gitlab-ci, other people put them in different places; I don't know where we want to put them, but we can use that project as the base template and then, you know, follow it.
C
I wonder if we should just start thinking about project scaffolds, so you do jsonnet-tool init and it will create the Makefile for you and the main definitions for you, so that you don't have to think or read about what the best practices are. You just do it like with tanka, right? You do tk init and it just creates a scaffold for you, and you already have that convention and you follow that convention blindly. So maybe we...
C
Yeah, yeah. For the terraform repo I added the make part because I enjoy using a Makefile, but it can be a bash script or whatever script you want. As long as we have one way to do it, and we don't have to think about it and can do it in a reproducible way, I think that will create the convention out of laziness.
A
I mean, I think that's a great idea. I just don't know whether we should do that now; it seems like it might be pretty simple to do, but maybe let's first figure out what we want the directory structure to look like, and what the best project to follow is. Like, should I use horse, andrew, or is horse too complex for this?
B
It's different, but the render is the same. One idea that I had was that if we had some prefix dot jsonnet on the files, so we had something-dot... I don't know, that's a bad example, but I'll just use it: `something.jsonnet`. Then we could have a command which basically goes and finds all the files in all the subdirectories that match that, and just renders them.
B
And then you almost don't need a Makefile, because it's super simple; it's just the same as `go build ./...`, and it just does the thing. But I think if you had that on everything, `*.jsonnet`, it would probably match too much stuff, so it would have to be some prefix. And then it just becomes really simple to use, and in future you can start doing clever things with dependencies and stuff.
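That convention could be as small as this sketch; the `.render.jsonnet` suffix is a made-up placeholder (the real prefix wasn't settled), and the render command is stubbed out:

```shell
# Find every renderable jsonnet file under the current directory and
# render it, `go build ./...`-style. The suffix is an invented example;
# the echo stands in for the real `jsonnet-tool render` invocation.
render_all() {
  find . -name '*.render.jsonnet' -print | sort | while read -r f; do
    echo "render: $f"   # stand-in for: jsonnet-tool render "$f"
  done
}
```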
B
To that point, we're actually doing that on a branch that cindy's been working on, for the metrics catalogs: they've got a single entry point exactly like that, and it just includes all the files that it needs, so there's only one. And the other thing is it speeds up the build, because it only parses once. We'll probably start hitting memory problems at some point, but it definitely speeds it up.
A
Okay, I think that's pretty much it, guys. I'm going to create an epic for this that's going to link to my project as a dependency for doing the staging infrastructure build-out. It's going to include the items I just talked about, and steve, I'm hoping you can help with this too, and that we can bang this out fairly quickly.
A
You know, I don't see anything major here, especially if we keep it on ops. So thanks everyone for spending the time this morning to talk, and I'll see you later. Thanks, bye.