From YouTube: DNA - Demo/Conversation of GET
Description
The Design and Automation group views a demo and has a conversation regarding the GitLab Environment Toolkit.
https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit
3:00 - Meeting begins
3:50 - What is GET?
5:30 - Conversation
20:05 - Demo Start, and continued conversation
A
Yeah, I think, yeah, it's a pretty informal meeting, to be honest. Okay, great.
B
Yeah, so it depends on the group, really, how far you want me to go with the toolkit. I know I'm pretty much preaching to the choir here. Most of my materials are for people who don't even know what Ansible is, so it depends on what form you want to go in. I can do a quick show of how, you know, Terraform and Ansible work, but again, you probably have a general gist of how that works.
A
I guess I'll start to explain what the... because, maybe start with the real fundamentals of what the GitLab Environment Toolkit is, and who it's intended for, and who's using it the most, and who you intend to have using it in future.
B
Yeah, so the toolkit was born out of a need in Quality Enablement to build out real environments of GitLab, essentially, to performance test. So it was originally called the Performance Environment Builder, a very apt name, and it was at a point where we were actually trying to design the reference architectures as well, the ones we recommend to self-managed people, and so this was kind of a synced effort where we need...
B
We
need
to
build
environment
performance
tests
to
test
the
performance
of
git
lab,
but
we
also
need
to
know
what
a
good
and
forget
lab
environment
would
look
like
and
then
in
turn,
obviously
that
if,
if
we
design
one,
then
that's
the
same,
one
we'd
recommend
to
customers
so
all
kind
of
synced
into
each
other
in
a
cycle.
So
we
needed
a
way
to
build
a
test.
The
environments
just
actually
find
that
they're
viable.
So
we
immediately
looked
around.
B
We had one environment at the time, which was built out manually, and we knew that wasn't going to be durable or scalable for us moving forward, because we need different sizes and updates. So we started to build out what was meant to be, and it still is, a very simple, or boring, I guess, is the right term...
B
Environment
builder:
that's
meant
to
be
as
basic
as
possible,
essentially
at
its
heart
as
simple
as
possible
that
we'll
provision
the
machines
we
need
and
then
deploy
gitlab
on
the
bus
in
the
right
way
in
a
multi-node
setup
and
for
gitlab
environment
that
performance
scale,
and
that
is
essentially
what
the
toolkit
is.
Today.
It's
had
a
little
tweaks
and
polish
and
various
other
things,
but
it's
hard.
That's
all
it
does
it
provisions
machines
it
takes
to
get
them
on
the
bus
and
pops
it
on
a
bunch
of
machines
with
the
right
config.
B
It does do other things to support that, such as setting up load balancing and a few extra monitoring pieces that we needed for our testing, but that is essentially what it is at its core.
A
B
Yeah, so this is something we're exploring now that we're expanding the toolkit. Today it's used mainly by Quality, and the Geo team are using it, quite a lot actually, to help with their stuff, because obviously Geo deployments are quite complicated, so they'll take anything they can get that makes that easier, and we've been working with them quite closely, Nick has, to try and simplify that, and we're continuing to add more functionality. They're quite happy with it, to answer that question, yeah.
B
That is a question we're expecting to get a lot more of. The goal of the toolkit, like I said, was to keep it as simple as possible, and adding customization adds complexity, so that's something we're very keen to try and manage and do in a considered way. That said, we have been asked about switching out components wholesale, essentially, and we don't support it today, but we will, hopefully soon. It's something I'm doing a...
C
B
More polish, like adding docs for Azure and AWS and a few little bits, and then the next big thing will be: okay, how can we add, I'm calling them, customization hooks, for people to do reasonable modifications to the process in a reasonable location. One, I think, would be switching out components: don't use Postgres via Omnibus, switch it out for AWS RDS or GCP's variant, etc. That should be possible, hopefully quite soon, but for you guys, if...
D
A
B
We wanted to roll it out internally first and get feedback on that and go through that kind of development loop, and then we'd open it up to customers. Obviously, going from developing for just the quality team, which is who the toolkit was for, to then opening it up to internal teams has taken a few months to get in the right place, because we had to move a lot of baked-in, you know, config.
B
You know how it goes: you build your own little tool, you build it for yourself, and then someone else wants to come along and you're like, oh, that doesn't really make sense anymore. We've been doing that for the last few months, and the same process needs to happen for customers. For example, Ubuntu is the only OS supported today, and for most internal teams that's probably fine; for customers, probably not, they probably want CentOS or another Red Hat variant. So we're going to go through that process.
B
For the teams that actually deploy for customers, I've had a quick look at their tooling, and it's a bit of a Frankenstein, which is obviously what they've had to do. This is the bigger problem, and this is potentially the same for infrastructure as well: obviously, every customer has different requirements and different modifications and wants and needs and security considerations and everything else, and then, basically, each tool just kind of went down that route of, well...
B
This is for this customer, this is for that customer, and it's become, obviously, quite a bit of a hodgepodge. So the toolkit is meant to be just one tool, and we want to try and support general environment building, but we do want to add in those hooks, to try and add in the ability to customize. But we're still feeling that out, and how far we can support people with it while keeping the toolkit maintainable, but we hope to.
A
Yeah, because it's sort of like what we have with Omnibus: for all its warts, it is kind of great, because everyone's kind of pulling the effort in the same direction, and it's really nice to have something like, you know, a wider multi-machine deployment that's standard, and obviously based on all the excellent work you did with the reference architecture, and everyone kind of... But obviously I understand it's also super difficult, because the customization is the really hard thing to get right there.
B
It's really difficult, yeah. We're going to try and be proactive, and cut things off, and try and cover bases, but obviously, once it's in the hands of customers, we'll probably get a lot of feedback about wanting to do this particular thing and that. So the general gist today, and like I'm saying, we're still feeling this out, is that anything that makes sense, that can be applied to multiple customers or deployments, we'll look to try and add into the toolkit, as long as it's not, like...
B
I mean, a maintenance nightmare, or comes with a lot of debt. So, for example, switching out Postgres to use AWS RDS, or ElastiCache, or something else like that: that's okay, that's a wholesale replacement of a whole component. Maintenance-wise, that's really a conditional in the end, so that's okay, we can do that. Doing other complex things, then, yeah...
B
That's when it gets a little bit question mark. But then, I'd just say, the big benefit of the toolkit, which I've always believed in with these tools, because I've done this in the past as well, and so has Nick, is that when we have one centralized tool like this, we do, unfortunately, have to say no sometimes to people.
B
But the point is that we have one centralized tool that everyone should be able to use, at least for a base environment, and that means they feed back into that toolkit, and then the things that we find are hard to do in Omnibus, in terms of automation, for example, we can feed back to the teams, and we can improve Omnibus, improve the toolkit, and then improve the experience for customers. So we're kind of wanting to do that same thing of having one tool.
A
B
It would be a lot better, yeah, for sure, yeah. Customers are definitely building their own tools, especially if they're building a large reference-architecture-like environment. They are absolutely building, or we're building, the tooling for them at the moment, and so this...
C
E
B
I mean, if we have one tool, then we can have more of a shared space for discussions, and for finding and exposing problems with the setup, and then we can get that back to the teams, and if we improve it, that improvement flows all the way down to the customer. So that's the goal, our goal.
B
We hope to see that continue, but yeah, that's why the toolkit... that's why we're doing this effort. It's going to be difficult at times, I'm sure, but the effort is to try and give people one tool that they can all start to contribute to.
A
C
I had a question about scope. I'm kind of looking at this mostly from the infrastructure and GitLab.com perspective, because I'm seeing a lot of overlap between what we do and what this tool is doing, and it looks like, or at least it sounds to me like, the tool is sort of aimed towards building out, like, creating a new environment, and a lot of the challenges that we face on a daily basis in infra are related to change: basically making changes, deploying new versions.
C
Is that something that is in scope for this tool, or might get into scope for the tool in the future? What's your view on that?
B
So, the underpinning of the toolkit is Omnibus. That's important for numerous reasons: like Andrew was saying, having that pin on Omnibus means that we're not going off-piste and doing weird things, we're staying on track, we're staying in our lane.
B
So to speak. With Omnibus, we do support full upgrades: anything that's updated in Omnibus, you just run the toolkit again and it will automatically upgrade the environment to the latest Omnibus version, with everything that comes with Omnibus, and that's one of the big reasons why we're based on Omnibus. There's no point recreating what Omnibus already does for us. So for infra and .com: I didn't expect everyone to knock on our doors, so to speak, so quickly.
B
I thought that was always going to be a bit of a... There's a lot of stuff we'll be doing that's similar, but obviously .com is a bespoke, unique environment with a lot of very customized things. So it's interesting to see that you guys are already one of the first to come to us. I know that you guys want to use it for, maybe, potentially, staging or secondary environments and stuff like that, and if you're wanting to deploy a reference-architecture-like environment, then the toolkit can be used today. That's...
B
That's absolutely no problem. So yeah, we do support upgrades. For example, in the quality team, we use the toolkit every day, we have it in our pipelines. For performance testing it updates numerous environments every day with the latest Omnibus, and then we run the performance tests against them, and we've been doing that now for quite a while.
B
It doesn't support upgrades with zero downtime yet; that is, obviously, quite an involved and more difficult effort. We do want to look at it eventually, but yeah, zero-downtime upgrades are also quite difficult, especially automation-wise, because there's a lot of spinning plates in that kind of scenario. We need to keep things up and down, and that's a very involved process, but hopefully one day we will be able to support it.
E
I mean, this is what we do for upgrades now. We use Ansible, and we do pre and post tasks for, like, taking instances out of the load balancer and putting them back in. This allows us to do no-downtime upgrades, but we're finding that this is really not a nice way to manage a fleet. I think, you know, we're all looking forward to moving these services, or the front end, to Kubernetes, so we don't have to do this anymore.
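[The pre/post task pattern described here could be sketched, very roughly, as a rolling-update play like the following. This is an illustrative sketch, not taken from either team's actual playbooks; the `lb-drain`/`lb-enable` commands and the `gitlab_rails` group name are placeholders.]

```yaml
# Hypothetical rolling upgrade: one node at a time, drained from the LB first.
- hosts: gitlab_rails
  serial: 1                       # keep the rest of the fleet serving traffic
  pre_tasks:
    - name: Drain this node from the load balancer (placeholder command)
      command: /usr/local/bin/lb-drain {{ inventory_hostname }}
  tasks:
    - name: Upgrade the GitLab package
      apt:
        name: gitlab-ee
        state: latest
  post_tasks:
    - name: Put the node back into the load balancer (placeholder command)
      command: /usr/local/bin/lb-enable {{ inventory_hostname }}
```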
B
Yeah, it is, it's difficult. My confidence about actually supporting zero-downtime upgrades fully is not super high. We may be able to do it the way that's expressly detailed in the docs, but then you have the customers who go off-piste. Then, I mean, obviously, it's open source, they could go in and crack open the toolkit and change it to what they need.
B
Hopefully, we'll be able to add in hooks to be able to do things, but, like, zero-downtime upgrades have a lot of kind of core movements that are required to make them happen. As you say, I mean, developing against a load balancer, or creating some kind of crazy Ansible playbook that only hits different nodes at different times.
B
Yeah, it's a big challenge, just like it is to do manually, so we will look at it eventually and see what we can do, and try to make it as easy as possible. But I'd just say, every customer environment is different, and sometimes...
E
B
Very notably so. If a customer just wants to deploy a standard reference architecture and then walk away and be happy with it, then they will probably, hopefully, be able to continue to just upgrade, and then maybe we'll be able to support zero-downtime upgrades for that. But if they're going to go off-piste, they can use the toolkit to build a standard environment, and maybe, with some of the hooks, they can add some good customization in there, but then, yeah, it really depends on how far they want to go.
A
B
A
B
That's fine. Can you see my screen? Yes. So, very much by design, we don't have any... well, we do have one little script that can make the Ansible process a little bit more threaded, to try to make it faster, but except for that, we use Terraform and Ansible commands. We don't do anything crazy or custom here. Again, that goes back to the boring design; we're not trying to do anything fancy here.
B
It's just the basics: get Omnibus on in the right way and let Omnibus do its magic, because Omnibus is the real star here. So for Terraform, if you look at the docs, you know, use the docs, you'll see it there, but essentially it's: set the config vars and run it; set the config vars and we'll run it. That's essentially the overall plan there.
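[At a high level, the workflow described here boils down to two commands, something like the sketch below. The directory and file names are assumptions for illustration; the toolkit's own docs are the authoritative reference.]

```shell
# Provision the machines for an environment (run from its Terraform config dir).
cd terraform/environments/my-env
terraform apply

# Then configure the machines and deploy GitLab via Omnibus.
cd ../../../ansible
ansible-playbook -i environments/my-env/inventory playbooks/all.yml
```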
A
Zoom's been doing that for me as well. How's...
B
Better? Okay, good. So yeah, it's just the basic Terraform and Ansible commands, I guess. For the toolkit, we have some convenience modules in Terraform to try and, you know, just make it easier to configure and to run them. I'll show you what our files look like now. So we tried to keep the Terraform side lean, because Terraform, as you probably know, can get quite verbose quite quickly, and have a lot of files and different configs.
B
So we've created modules to kind of take that away. In terms of provisioning, we don't actually expect many users long term to actually be using the Terraform piece. Most customers will have their own cloud hardware and stuff like that, but we still provide it, obviously, for our benefit. But this is probably the main file here, and it's designed to be quite readable. We also have, behind the scenes, a module; this is a GCP module.
B
So behind this we'll have a GCP module that sets up the instances and firewall rules and everything else. That's all put into its own module that you can see on the repo, if you go and have a look, and the config for it is meant to be quite straightforward: it just literally goes through and says we want X Consul nodes, and these are the machine types. So this is replicating the 10k architecture, and yeah, so that's pretty straightforward.
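[As a rough illustration of what such a module call might look like, the module path, variable names, and machine types below are assumptions for the sketch, not copied from the repo:]

```hcl
module "gitlab_ref_arch_gcp" {
  source  = "../../modules/gitlab_ref_arch_gcp"   # hypothetical module path
  prefix  = "my-10k"
  project = "my-gcp-project"

  # Node counts and machine types mirroring a 10k-style layout
  consul_node_count   = 3
  consul_machine_type = "n1-highcpu-2"

  postgres_node_count   = 3
  postgres_machine_type = "n1-standard-4"

  gitlab_rails_node_count   = 3
  gitlab_rails_machine_type = "n1-highcpu-32"
}
```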
B
If you do a terraform plan, it will just, obviously, go through those and set up everything it needs. It sets up object storage as well: object storage, firewall rules, and IPs. The AWS and Azure modules also exist. Azure needs a little bit extra, and AWS does a bit extra there too, completely dependent on what the cloud provider needs; like, with Azure we just needed extra networking, for example, because that's just the way it works. So these modules are just meant to take all that away.
B
So that may not be so attractive to you guys, because you may want to do a bit more customization, but that's what we're looking for feedback on. I could do a terraform apply, but this won't change anything; you'll see we have outputs at the end, mainly just the IP addresses of the boxes, and it goes through that. By default, we usually recommend just setting the state onto the cloud provider.
B
We were going to look at adding in support for, or at least switching people to, using GitLab, because GitLab can store Terraform state, but it gets a bit self-referential at that point: you have to use GitLab to deploy GitLab, and it gets a bit weird, but it works, I guess. It's just a thing you have to think about at least a couple of times to think it through. So at this point it would have set up the boxes, and the boxes are also just standard...
B
Ubuntu boxes, and they're just ready to go. The main, probably the more magic part, I guess, for lack of a better term, is Ansible. We use dynamic inventories for Ansible: the Terraform modules will set a bunch of tags on each VM, based on what VM it is, so then Ansible is able to come in and identify that this is a Postgres node. It even goes in and says it's actually the Postgres primary, for example, because sometimes we might need to do extra things there.
B
So let me get the... well, this is actually the inventory here. You see things like postgres primary; this is actually an overhang from the past. So, in the past, we had Redis, and we had repmgr, sorry, for Postgres. With repmgr you can tell it to always make one node the primary, so that made things a lot easier in life. That's not the case now, so in Ansible, if you go through the scripts, you'll see places where we'll actually poll Patroni now and ask it, and then...
E
Since Consul is a service, I think that's in Omnibus, did you consider using, like, a Consul inventory? I'm sure there's a plugin for Ansible that can do discovery using Consul, and that would also allow us to, like, you know, dynamically update the inventory for your primary or secondary database. It's probably a lot more flexible than using GCP.
B
And update PgBouncer's config and point PgBouncer at the new primary when that occurs; that's Consul's kind of main thing, and it monitors Postgres to see when it goes down. Consul is also used, though, for monitoring, and that's the other thing we use it for quite a bit: to give Prometheus automated targets to pull.
E
Yeah, so I think it would be a good fit for the Ansible inventory, because I think you could enable the agent on all of the boxes, and then you can, you know, use Consul tags.
B
Which we do. The only issue is that it's a cart-before-horse issue: if you're building a new environment, you won't have Consul.
E
B
E
Interesting, because, yeah, sorry, I was gonna say: you could do something where you discover the Consul host, see if you can fetch the inventory, and then, if you can't, you fall back to GCP. Like, you could write a custom inventory script to do something like this.
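[The fallback idea could be sketched roughly like this; `build_inventory` and the two fetcher callables are hypothetical names for illustration, not part of the toolkit or of any existing Ansible plugin.]

```python
def build_inventory(fetch_from_consul, fetch_from_gcp):
    """Prefer Consul's live view of the cluster; fall back to cloud-provider
    tags when Consul isn't reachable, e.g. on a brand-new environment (the
    cart-before-horse case mentioned above)."""
    try:
        return fetch_from_consul()
    except (ConnectionError, OSError):
        # No Consul agents yet, so derive the inventory from GCP labels instead.
        return fetch_from_gcp()
```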
B
Yeah, you could do. I mean, ultimately, hopefully, when we start going down that path of supporting releasing the toolkit to customers, and we will go down it, we will need to explore those things and pass off, essentially: your inventory can be whatever you want, as long as it meets these X requirements. For now, tags, I think, will probably need to continue, and the whole toolkit is...
B
Based on tags. That's the way that people usually run Ansible, and that's what we've done. But in the future, for example, and this is the point for most large customers, they probably won't be using cloud providers; so far we've seen that they've been using their own hardware. So they'll need a static inventory, and in a static inventory you can add as many tags as you want, so you can mimic what we want in the toolkit.
B
So that's where we're going to go with that. Consul in this context is something to maybe explore, because we do need to know the primary; sometimes we do need to know the primary node during the Ansible run, like...
D
B
What is the primary. And the main reason for that would be during the migrations: when Rails comes along, Rails will expect... we get Rails node one, the primary, to do the migrations, but it has to do them directly against the Postgres primary, which is a very painful part of the process to automate. With repmgr it was easier, because we could always set Postgres node one as the primary via priority; with Patroni, obviously, that's not the case. So we've got it working now, and...
B
We actually are feeding back to Distribution, to make it easier, to have gitlab-ctl commands or other ways to try and pull and find out what the primary is, and we'll continue to do that, because, like I say, this is where it comes in: the things that we find hard will be things that customers find hard, and we want to continue to improve it. Yeah.
B
And yeah, so with Ansible, again, it's pretty standard stuff. So yeah, we focus on dynamic inventories, but, like I said, static inventories should be supported today. We haven't documented what tags are needed on each box, but essentially it's what you see up here, and it's what's in the key group. So yeah, we have tags called gitlab_node_type and gitlab_node_level, where node type would be, say, postgres, and level would be primary or secondary. That's essentially how it would work, and then we have...
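[A static inventory carrying the same tags might look roughly like this: an illustrative sketch in Ansible's YAML inventory format, where the group names and host variables are assumptions based on the tag scheme just described.]

```yaml
all:
  children:
    postgres:
      hosts:
        10.1.0.10:
          gitlab_node_type: postgres
          gitlab_node_level: primary
        10.1.0.11:
          gitlab_node_type: postgres
          gitlab_node_level: secondary
    gitlab_rails:
      hosts:
        10.1.0.20:
          gitlab_node_type: gitlab_rails
```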
B
We build all these groups in Ansible, and that's how the playbooks will go through and run, based on what's available and what's not. This is actually quite powerful, because we can base a lot of logic on simple conditions. So, for example, the toolkit can build a 1k environment, which is essentially a single node, and then we're able to update the config files to say: if postgres doesn't exist in the groups list, then obviously skip all this...
B
Don't do this, and add in some basic config in the single-node context to make sure that the database is ready and running. So we are quite heavily dependent on this. I think someone did say that customers are a bit funny about using tags, so that's something we'll need to think about moving forward, but at the moment we're using tags.
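[That group-driven logic can be pictured as a simple conditional on the inventory groups; a minimal sketch, where the task and template names are illustrative rather than the toolkit's actual ones:]

```yaml
# Only configure an external Postgres connection when dedicated postgres
# nodes exist in the inventory; a single-node 1k build skips this entirely.
- name: Point GitLab at the external Postgres primary
  template:
    src: gitlab.rb.j2
    dest: /etc/gitlab/gitlab.rb
  when: "'postgres' in groups and (groups['postgres'] | length) > 0"
```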
B
For playbooks, we have a simple playbook that's called "all". I'll actually be able to show you here, it should actually just be here, and that just runs through it, as you see, in the standard way, as defined in our docs, our reference architecture docs. "common", to explain some of the weird ones: common is just the standard one that goes and prepares Ubuntu, installs basic packages, and it actually installs the GitLab EE package, but without the environment URL.
B
So the package is ready, but it's not actually configured, and similar stuff for Python and other things. And then it goes through each node type: haproxy, which is the proxy, obviously, and the rest are just standard stuff. Elastic is something that's not in Omnibus; that's something we have added in the toolkit, because that's just the way we recommend doing it. We're actually figuring out if we can continue to do that with the license changes.
B
But for now it deploys Elasticsearch via Docker images, and then post-configure is that last piece, where it will come in and do a few things: it will apply a license, if one's available; it will apply some configuration changes that can only be done via the API, which is a bugbear of mine, but that's where we are; and it will set up Elasticsearch as well, Advanced Search.
B
So that's what it would do. I can literally show you, I can set it playing now. So this is with the 10k environment that we have, which is up right now, and obviously, because I'm doing it live, I've forgotten everything, as is always the way.
B
There we go. We've designed it so that, if the environment is already running, and it's already been run at least once, you can go and run each playbook individually, which in turn is a role. We've got some tags to ignore things, and we've got some tags to just run things. So, for example, we'll have a tag, generally, like reconfigure, and we haven't documented this.
B
We don't want to call this out yet; we probably will do eventually, for where people need to come in and do more admin tasks. So, for example, if they just want to change the config of a box, and not do a full update or anything else, there are already tags in the toolkit that we can use, such as reconfigure, the tag, which obviously goes in and just does a reconfigure, and maybe restarts some things, because we found that some things need to be restarted as well as reconfigured, and stuff like that.
B
So yeah, pretty boring, I'm afraid, but this is it running now, and it'll go off and... yeah, I'm trying to think if there's anything else to call out. Again, I'd say it's meant to be boring, it's meant to be pretty straightforward. We do, like I say, have to do some more advanced things, like asking Patroni to find the Postgres primary. I think that's the only primary now that we need to poll directly; Gitaly Cluster obviously has a floating primary as well.
B
Thankfully,
we
don't
need
to
do
anything
there,
so
that
makes
that
process
quite
off
hands,
which
is
good.
So
that's
that's
good
in
terms
of
automation,
gala
braille's
primary.
We
we
deemed
the
first
on
the
prairie,
but
that's
just
because
we
just
need
to
pick
one.
We
also
know
to
do
a
few
things
like
migrations
and
some
other
stuff.
A
Can I ask some questions while it's running?
B
It is... this was recommended to us at the time of building the reference architecture, I think; it was a discussion.
B
Redis is actually complicated, yeah. You're up against it there, yeah. So Redis is actually, annoyingly, one of the more complicated parts of the setup. You may be surprised when you go in and see, for example, the 1k situation, or 2k, or other environment nodes, so it depends on the size of the architecture. For 10k and up, we have separate Sentinels and separate Redis nodes, those with different queues; for lower environments, it's a combined Redis.
E
A
Getting it to configure can also be a bit of a pain. The second question I had was, and this is kind of very much GitLab.com-specific, or at least our use cases, but we have these... we call them shards, or sometimes services, but basically sort of bulkheads between different workloads.
A
So for Sidekiq, we break the traffic up so that, you know, urgent traffic goes to one set of nodes, and we have CPU-bound, and then we obviously split our web traffic between web, API, Git, and now WebSockets; all of WebSockets is on Kubernetes. Like, do you think it would be really hard to kind of specialize certain nodes? Like, to basically deal with web traffic, or, you know, one set of Sidekiq nodes needs to be configured in this way and another set needs a slightly different configuration?
B
It does add complexity, but it's not out of the question. We just, essentially, had to do something similar there, yeah. So, like, for 1k, Redis is obviously on the same node; 2k, off the top of my head, I forget; but 3k is all one Redis: three Redis nodes and Sentinels, and the Sentinels in the 3k, I think, actually get put on the same nodes as Consul, just to reduce node count. So it's not out of the question; it does add a lot of complexity. I think when we designed the reference architectures, there was discussion about the separate Sidekiq queues.
B
There was discussion about separating web and API traffic as well in Rails, because that's actually what was recommended as the big environment design before the reference architectures. I think, on the whole, it was decided that that complexity is just not really needed for customers to worry about. It obviously is for .com, but for any other self-managed install, it's not, really. For traffic, that's a load balancer configuration, so that shouldn't be too bad. For Sidekiq...
B
The way that we tackled Redis is that we actually just have different Redis types via tags and nodes. So, actually, right from Terraform, we'll say deploy a redis-cache node instead of a redis, so to speak, and we just treat them completely differently, because that was the only way we could really have that kind of clean separation of concerns. So we can deploy a 3k, which only has one Redis cluster, and a 10k, which has two, essentially. So it is complicated.
B
As I say, Redis is actually one of the more complicated places in the toolkit, but we could do the same; if there was a desire for Sidekiq, we could look at that as well.
A
B
C
B
Real-life, you know, situation. We try and estimate, but, you know, that can only go so far. On Sidekiq, yeah, I mean, we've just not... we've not had any feedback, we've not seen any need for it yet. Usually, if a customer's usage shape was to be quite queue-heavy and quite Sidekiq-heavy, Quality, at the moment, would just suggest more Sidekiq nodes. We probably wouldn't go down the route of separate queues yet, unless there was a really clear benefit.
B
The only benefit I can think of is, if we see strong evidence that usage patterns are such that these particular Sidekiq queues are always very heavy, that's maybe when we'd look at separating them out and say, okay, we'll create a Sidekiq set for imports, or merge requests, or handling pipeline running. But Sidekiq, for now, in our testing, seems to be fine with a kind of vertical and horizontal scaling, so to speak. But it's not off the table; we always have to look at it.
B
There has been a little bit of talk of a 100k environment. I think one customer is potentially, maybe, going to go down that route, but we expect very few customers at that size of environment, and at the moment it would just be the 50k scaled up, and we would then see how it would perform, and maybe at that point we may need to start looking at breaking things apart, but...
A
Yeah, I think, as well, when you get to that size, it's very difficult to guess exactly what the workload's going to be like, because it's dependent on what the customers are doing, and, you know, whether they've got some crazy monorepo that has 10,000 CI jobs running, or, you know... yeah, exactly. It becomes very difficult to second-guess.
B
It's
yeah,
so
the
reference
architecture
is
a
reference.
It's
meant
to
be
a
general
architecture,
it's
always
we'll
we'll,
try
and
as
for
the
process,
this
hopefully
requires
for
customers
might
come
out,
but
these
kind
of
things
you
just
don't
know,
especially
when
it's
like
a
large
environment,
the
bigger
the
environment,
the
bigger
the
number
of
users,
the
the
more
varied
the
shape
of
usage
will
be,
and
one
customer
can
be
very
heavy.
B
Registry-heavy, very heavy CI usage, and another customer might be completely just source control and issues and merge requests. It's so hard to guess these things, so we don't; I mean, it's pointless to go on a futile exercise. We just try and create a good, solid base, and then for each customer we'll point you to adjust as you go along, and through that process we will look at: can we bring that back into the toolkit? Can we add that customization? Does that make sense?
B
So for migrations, we follow the docs pretty much as those go. Let me see if I can actually get you to the actual scripts, and I'll show you what happens there.
E
In other words, do you let Omnibus run them, like when you do a package upgrade, for example? I think Omnibus will automatically do migrations unless you set a flag to skip; I think you drop a file or something to skip migrations, right?
B
Yeah, we handle it directly. Well, it's a mix of both. We do expressly configure everywhere to say: don't even think about migrations, because obviously the rule for migrations is that it has to be one node that does it, at one time.
B
Against the Postgres node. That's what the GitLab Rails playbook actually handles. What it does is that everything's configured not to do it, but then, during the actual run of the Ansible playbook, it will come in and take the primary, which is always node one in the GitLab Rails role, and it will reconfigure it with a different config file. It's essentially the same config file, but with those extra little flags to say: this time, perform migrations. And as you see here, we'll actually go.
B
It first goes off to Patroni and asks it for the primary Postgres node at the time, its IP essentially, and then it will take that IP, put it into the GitLab Rails node one config, and then we run a reconfigure, during which it will do the migrations. This is very much a downtime upgrade. Of course, we don't do zero-downtime yet, but we hope to in the future.
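The split described here, where every Rails node disables automatic migrations and only the designated primary re-enables them during the upgrade run, can be sketched in `gitlab.rb` terms. This is a hedged sketch: `auto_migrate` is the real Omnibus setting, but the surrounding values are illustrative, not GET's actual templates.

```ruby
# /etc/gitlab/gitlab.rb on every Rails node: never run migrations on reconfigure
gitlab_rails['auto_migrate'] = false

# Variant pushed to the primary (node one) only for the migration step:
# point directly at the current Postgres leader and allow migrations for
# this one reconfigure
gitlab_rails['auto_migrate'] = true
gitlab_rails['db_host'] = '10.0.0.5'  # illustrative leader IP looked up from Patroni
```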
E
I see, so you get the IP of the primary, you configure a leader in your Rails fleet to use that as the database IP, and then you run the rake script to run migrations on that node. Okay.
B
It does the primary first. So in the run order, Postgres is already upgraded and everything else is already upgraded; Rails comes along and gets upgraded as well, the package having already been updated. Then it comes in and does the reconfigures: Postgres first, then the GitLab Rails primary, which in turn will also do the migrations, and then the rest.
B
That's something we can look at for sure as well. Yeah, that's a good point.
E
Whether you want to support that sort of thing or not; it makes things very complicated. For us, you know, I put some notes here on how we use Ansible, if you're curious. We also do post-deploy migrations: you pass a flag to your rake command when running migrations to say skip post-deploy migrations, and those are the migrations that destroy data, and we run those at the end.
E
So we have, like, two migration steps: one before the fleet upgrade, on what we call a deploy node, whose only purpose is to run migrations; it doesn't receive any Rails traffic. And then we run post-deploy migrations at the very end.
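The two-phase flow E describes maps onto the `SKIP_POST_DEPLOYMENT_MIGRATIONS` flag that GitLab's migration Rake task honours. A hedged command sketch, requiring an actual GitLab node to run:

```
# Phase 1, on the deploy node, before upgrading the Rails fleet:
# run regular migrations but hold back the destructive post-deploy ones.
sudo SKIP_POST_DEPLOYMENT_MIGRATIONS=true gitlab-rake db:migrate

# ... upgrade the rest of the fleet ...

# Phase 2, at the very end: run the remaining post-deploy migrations.
sudo gitlab-rake db:migrate
```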
B
Yeah, I've come across that design with customers, who recommend having a node that purely does the migrations. We designed the toolkit to be in lockstep with the reference architectures, and I always wanted to avoid a node that just sits there most of the time doing nothing, but I understand.
B
Of
management
that
has
benefits
so
it
it's
difficult,
we're
hoping
for
this.
We
can
look
at
the
future,
trying
to
drive
an
improvement
to
how
migrations
are
done,
but
the
design,
how
we
had
it,
how
he.
B
Initially we did the post-deployment migrations and things like that, but we found that a few times we actually got it wrong and migrations weren't happening the right way. So that's when we moved to this kind of design: let's just make it as simple as possible, downtime-wise, make sure they run, and try to keep it, yeah, as boring as possible. But we'll probably need to look at this again in the future and look at a more specific approach. So, yeah.
B
We worked with Nick on this. What we did is we went to the distribution team and said: you need to give us a way to get the Postgres leader. Please tell us a way; how can we ask Patroni, or something like that, just to tell us what the leader is? Then at least we can go off and use Ansible to get the IP. So it's actually there now.
B
There's a command recently added, a gitlab-ctl command called get-postgresql-primary, and we use that to get the IP address, then put it into the config file and reconfigure. Then we reconfigure again, because we also need to re-point the primary node to use PgBouncer again, since for the migration it needs to point at Postgres directly. Which is why, I understand, having the deployment node separate does make sense for migrations.
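The lookup step can be sketched as a small shell fragment. Hedged: `gitlab-ctl get-postgresql-primary` is the real Omnibus command, but the output format shown here is an assumption, and the sample value stands in for a live node:

```shell
# On a GitLab node you would ask Patroni's view of the leader with:
#   PRIMARY=$(sudo gitlab-ctl get-postgresql-primary)
# Output is assumed to look like "10.0.0.5:5432".
PRIMARY="10.0.0.5:5432"   # sample value standing in for the command output

# Strip the port, keeping just the IP to template into the Rails config
# before running "sudo gitlab-ctl reconfigure" for the migration step.
PRIMARY_IP=${PRIMARY%%:*}
echo "$PRIMARY_IP"        # prints 10.0.0.5
```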
E
Yeah, that's the advantage of having it separate. We have our deploy node, which has the console address for the primary, connecting to it directly, and then we never have to change it. So that's just convenient for us, but yeah.
B
Cool, yeah, something I'll reflect on; I will look at it. I just thought it was wasteful to have a separate node, but yeah. It's a really complicated area as well. It was one of the first big problems I had to solve with the toolkit, and I'd say I got it wrong a few times, because migration is just such an involved, brittle process. For example, we had it going through PgBouncer a few times.
B
Obviously that started to fail, because PgBouncer is not good for that. So it's been that learning process. So yeah, we'll probably need to look at this again.
E
Cool. Before we continue, can we just go through the questions in order? Because I'm not sure which ones we've covered and which ones we didn't. Maybe you have the beginning of the questions, can you...
B
Expressly? No, not right now.
B
Well, for now, today it's a hard no, just to be clear. The distribution team and the Helm team make the charts for Kubernetes; we're not going to go and recreate what they're already doing. Kubernetes, as you all will know, is not an easy platform to master in any way. It's a very complicated platform, and yep.
E
I was going to say, but you could still have Terraform support for using cloud Kubernetes services.
B
Right, so yeah, we could maybe eventually add Terraform provisioning support for provisioning a cluster and things like that. But for now, the clean kind of separation of concerns is: you can use the toolkit to set up all the Omnibus-based components.
B
So we have recently, and we have always, looked at supporting it in such a way that you can use the toolkit to build a Postgres node, a Gitaly cluster and the other stateful components, and then you can come with Helm, with our charts, set up the front end and point it at the back end. And we actually do support that; it was literally just added to the reference architecture docs this week.
B
So we do support that, but yeah, we're not going to try and recreate what the Helm folks do. If you look at what they do, they have a whole team dedicated to that effort, because it's just a complicated situation, and the toolkit is not going to replicate that. The toolkit is here just to deploy Omnibus; Helm is for Kubernetes, and if anyone wants to use Kubernetes, we point people to Helm.
B
Yeah, this was quite an effort in the past to try and figure out. So initially the toolkit was just for quality, so we had a private project; that's how we did it. We encrypted the files that had the passwords in them, the Ansible config files.
B
Of course, but then, when starting to think about opening it up to all the teams, and that's literally all I've been working on the last few months, it was: okay, how can we make this work in at least a usable, friendly way for people? So how we do it is via inventory variables; that's essentially the bottom line. At the moment, if you go through the docs, what we'll say is you should put your passwords in the inventory.
B
So today, what we have, and it is a basic approach and we're more than happy to look at improving it, is that the toolkit's project and repo don't have any of this in them, which is good. Our quality environment configs are in a different project, and we copy that in in CI. In that project we have a dynamic inventory, and with the dynamic inventory you can add in other files, such as classes of inventory variables. At the moment we just put the passwords in there and keep that file away, either encrypted or in a private project, and keep that file safe. And we could expand.
B
You
can
actually
take
that
the
inventory
file
is
completely
up
to
you.
How
you
write
it.
We
obviously
put
the
conflict
that
needs
to
be
in
there,
but
you
could
change
that
to
be
an
environment
variable
if
you
so
wish.
Yeah
all
that
matters
is
that
the
variable
exists,
an
answer
when
it's
run.
How
that
answers
the
fight
is
completely
up
to
you
sure.
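The pattern described, secrets living in an inventory vars file kept out of the toolkit repo, might look like this. A hedged sketch: the variable names are illustrative, not GET's actual ones.

```yaml
# vars.yml, kept in a private project or encrypted with ansible-vault
all:
  vars:
    postgres_password: "example-password"      # illustrative name
    gitlab_root_password: "example-password"   # illustrative name
```

Encrypting the file with `ansible-vault encrypt vars.yml`, or sourcing the values from environment variables, are equally valid, as B notes; Ansible only needs the variable to resolve at run time.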
E
The next question is mine as well, and I think it's already been answered, which is whether we could use this for gitlab.com. Since there's no immediate plan to support Kubernetes, it sounds like it's probably not suitable for gitlab.com; what about the Ansible in general, and the Terraform in general?
E
I'm
I'm
not
sure
right
now,
whether
there's
any
use
here
but
like
I
think
one
of
the
part
of
the
problem
with
with
like
adopting
your
terraform
is
that
we
just
have
like
a
lot
of
custom
stuff.
That's
probably
unique
to
gitlab.com
that
other
customers
don't
do
even
for
our
gke
terraform
config
for
kubernetes,
it's
sort
of
like
we
divide
into
all
these
different
node
pools,
and
maybe
we
could
share
some
like
our
gke
module
with
customers.
We
have
a
compute
module
for
terraform
that
I
don't
know.
B
Yeah, it's difficult. We have tried to keep a clear focus for the toolkit throughout, and we're going to try and continue to do that, because there's no point making a tool that can do everything; then it'll become, you know, a mess. It will become a master of nothing, that kind of situation. Whereas the toolkit is specifically to deploy compute VMs with Omnibus; that's its purpose.
B
We've
got
helm
which
just
is
especially
designed
to
deploy
kubernetes.
So
we
want
to
keep
that
situation
here,
we're
not
going
to
try
and
do
one
tool
that
does
other
tools
purposes
when
that
tool
already
does
it,
especially
when
it
comes
to
such
a
dedicated
and
bespoke
platform
like
kubernetes
and
using
from
what
in
our
experience,
you
know
that's
what
makes
sense
for
configuring
static
vms.
That
makes
sense,
therefore
makes
sense
to
to
deploy
the
provision
static.
B
Vms
hell
makes
sense
to
and
and
charts
make
sense
to
deploy
to
kubernetes,
because
those
tools
are
dedicated
to
those
equivalent
platforms,
so
to
speak
can
do
everything,
of
course,
but
then
that's
that's.
That's.
Actually,
someone
consider
that
flaw
of
ansible
that
could
be
so
widely
used
because
then
you're
using
asphalt
to
run
charts,
then
you
get
into
weird
you're
getting
into
a
less
support
pathway
to
speak,
whereas
we
try
and
keep
it
basic.
B
We
use
the
tools
that
are
designed
for
each
platform
to
to
to
do
the
things
that
they're
designed
to
do
and
they're
good
at
doing
so
yeah.
It
is
difficult
and
yeah
for
dot
com
yeah.
I
yeah.
I
don't
know
how
far
we
can
go
to
support
that
because,
as
you
say,
com
is
a
it's
a
very
unique
environment.
There's
going
to
be
there's
no
environment
like
it
and
you
will
need
all
these
bespoke
and
little
bits
to
support
it.
B
But,
as
I
say,
we're
always
we're
definitely
happy
to
look
at
adding
customizable,
hooks
and
other
ways
to
try
and
make
the
process
customizable.
As
far
as
we
can
go,
but
the
tool
is,
the
toolkit
has
a
purpose
and
that's
what
its
purpose
is.
I
want
to.
We
always
want
to
make
sure
the
focus
is
clear.
The
tool
kit
is
generally
quite
lean.
It
does
what
it
does
helm
does.
What
it
does
and
yeah
that's,
that's
the
that's.
The
current
goal.
B
It was, very initially. The first thing I think everyone does when they first start with Terraform is ask: can Terraform do configuration as well? And the answer is no, it's not designed for that; Terraform themselves say don't do this. There is the exec provisioner in Terraform, and this may have changed, but not to my knowledge, but the big thing there is that it only gets run once, on provision.
B
If
the
environment
already
exit
no
doubt
exists,
it
won't
run
the
script
again,
and
it
was
just
a
very
clear
thing
that
you
just
got
the
message
quite
quickly.
Your
terraform
is
that
it's
not
designed.
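The run-once behaviour described here is how Terraform's `remote-exec` provisioner works: provisioners attach to resource creation, so a later `terraform apply` against an existing node will not re-run the script. A hedged sketch; the resource name and inline script are illustrative:

```hcl
resource "google_compute_instance" "gitlab_rails" {
  name         = "gitlab-rails-1"    # illustrative
  machine_type = "n1-standard-4"
  # ... image, disk, network config elided ...

  # Runs ONLY when this instance is first created, never on later applies.
  provisioner "remote-exec" {
    inline = ["sudo apt-get update"]
  }
}
```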
B
It's
like
what
I
said
earlier:
terraform
is
designed
for
provisioning.
That's
his
purpose.
That's
his
goal,
trying
to
bend
it
into
their
things.
I've
just
found
over
time
as
I've
designed
these
things,
because
I've
worked
with
chef
in
the
past
as
well.
You
just
you
just
learn
it's
like
just
don't
bend
it,
don't
don't
try
and
make
it
do
something.
That's
not
designed
to
do
because
then
it
just
it
just
gives
you
pain
at
beginning
and
then
forever
more.
It
will
always
remain
its
problem,
always
be
it's
just.
B
It
just
causes
problems,
so
we
we
we
will
because
terraform
doesn't
really
support
it.
Really.
To
be
blunt,.
E
Yeah
I
mean
where
I've
seen
this
work
is,
if
you're
creating
an
image
pipeline,
where
you
bring
up
an
instance
once
and
you
configure
it
once
and
then
you
create
an
ami
in
aws,
or
you
know
an
image
in
gcp
and
then
use
that
in
an
auto
scaling
group
or
something
like
that.
But
I
guess
then
it's
like
getting
too
complicated
or,
like
you
just
don't,
have
this
demand
from
customers
for
this
sort
of
like
setup.
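The bake-once, boot-many pipeline E outlines is typically done with an image builder such as Packer rather than Terraform provisioners. A hedged sketch only; the project, image names and install script are illustrative, not anything GET or gitlab.com actually uses:

```hcl
# Packer template: build a GCP image with GitLab preinstalled, once,
# then let an instance group boot copies of it.
source "googlecompute" "gitlab_base" {
  project_id          = "my-project"        # illustrative
  source_image_family = "ubuntu-2004-lts"
  zone                = "us-central1-a"
  image_name          = "gitlab-rails-{{timestamp}}"
  ssh_username        = "packer"
}

build {
  sources = ["source.googlecompute.gitlab_base"]
  provisioner "shell" {
    inline = ["echo 'install and configure GitLab here'"]
  }
}
```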
B
Obviously, as you say, it's a completely different approach: we'd need to set up the image, and then we'd also need to figure out how that works in terms of migrations and other things. So we don't support that today, and it probably won't be supported for quite a bit as we figure out all the rest of it, but it's something we might need to consider in the future.
E
The next one is mine as well: I just wanted to give you kind of a quick rundown of how we use Ansible for gitlab.com. If you weren't aware, you should just read that, and ask if you have any questions. The next one was for the console, which we already answered, and then migrations, and that's the end of the questions.
A
And really appreciate you coming on and talking to us.
B
Yeah, it's no problem at all; more than happy to discuss all these kinds of things. I definitely want to keep talking with the infrastructure and dot com teams and see what we can do to support you. But, like I say, we want to keep the toolkit in its lane, and that means sometimes I have to say no, which is not a nice thing to hear.
B
But
like
say,
we
want
to
keep
it
focused
into
what
we
can
do,
but
we
do
want
to
add
that
you
know
if
you
want
to
cuss.
If
you
want
to
reference
our
edge
with,
like
I
say,
different
provider
or
you
want
it
with
you,
you
want
to
do
something
which
is
quite
you
know
would
be
considered
quite
common.
We
want
to
support
that
kind
of
thing,
and
maybe
eventually
we
have
to
talk
in
a
place
where
you
can.
B
You
maybe
wanted
to
look
at
to
deploy
like
a
staging
secondary
or
something
like
that.
The
going
down
different
kind
of
paths
is
yeah.
We
just
need
to
value
each
one
and
make
sure
that
we
make
sense
for
what
the
toolkit
is
and
then
have
that
supported
and
then
support
helm
and
then
maybe
I
know
open
shift
is
being
developed
right
now.
Maybe
we'll
need
to
open
shift
operator
support
as
well
again.
B
Same
kind
of
situation
obviously
is
similar,
but
it
can
also
get
completely
different
to
kubernetes
and
it
needs
his
own
tooling,
because
it's
just
such
a
vertical
platform
that
needs
its
own
dedicated
stuff,
so
yeah,
but
I'm
more
than
happy
to
come
to
jan.
Hopefully,
this
has
been
helpful
for
you.
Any
questions
stuff
I've
linked
some
stuff
in
the
in
the
agenda.
We
have
got
one
issue
right
now
where
people
were
asking
people
to
come
in
and
suggest
things
about
where
we
could.
Where
would
you
want
the
toolkit
to
be
customizable?
B
We've
already
had
some
suggestions,
such
as
elastic
rds,
different
os
support.
That's
something
we
know
we
do
need
to
support
eventually
and
things
like
that.
Switching
out,
load,
balancers,
wasn't
the
request
and
that's
something
we
probably
would
look
at
doing.
Eventually,
there
is
a
proxy
in
the
future.
We
could
probably
bring
in
gcp
it'll
balance.
Aws
is
your.
B
That's another question, then. That's it for me, I guess.
C
Yeah,
thank
you.
It
was
a
very
great
demo
and
I
think
I
just
want
to
add
one
thing
by
thinking
about
testing
geo
environments
and
seeing
how
we
can
test
things
in
staging.
I
think
one
big
topic
I
found
is
that
it's
very
hard
to
stand
up
something
like
our
gitlab.com
production
or
staging
environment,
because
getting
up
a
new
environment
is
very
fiddly
right
now.
It's
not
nice
to
do
that
and
and
get
would
be
a
nice
tool
to
stand
up
new
environments
very
fast
right.
C
The
question
is:
how
can
we
met
in
the
middle
somehow
to
make
it
easier
to
stand
up
new
environments
by
getting
still
the
features
that
we
need
and
have
in
github.com
right?
So
I
think
that
would
be
the
interesting
future
question
which
isn't
easy
to
answer,
but
maybe
think
about
kubernetes.
This
would
be
a
way
to
you
know
reach
this
one.
So
I
think
that's
the
thing
to
think
about
in
the
future.
B
Yeah,
I
mean
absolutely
get,
doesn't
support
all
the
features
of
gitlab
yeah.
Some.
This
reasons
for
this,
like
pages,
is
obviously
quite
difficult
to
set
up.
Although
that's
that's
improving
all
the
time,
and
we
will
look
at
that
eventually,
registering
again
is
quite
difficult.
Essentially
it's
part
git
lab
it's
actually
a
separate
deployment
really
and
has
its
own
address
and
other
things
to
do
runners.
Actually,
we
can't
do
because
runners
aren't
automatable
today
they
you
need
the
unique
id
for
the
environment
and
other
things.
B
It's
a
four-year-old
issue
that
we
hope
to
be
able
to
push
and
get
that
sorted
out.
So
there's
things
we
don't
support
yet
today
and
we're
always
happy
to
hear
requests
for
that.
You
know
come
into
our
project
coming
to
our
channel
and
and
ask
about
it
raise
the
issue.
We
can
get
up,
thoughts
on
that
and
try
and
see
if
we
can
get
those
in
eventually
geo.
B
Yeah
nick
has
been
working
amazingly
with
the
geo
team
to
to
add
in
that
support,
and
you
can
still
you
can
stand
up
a
geo
environment
with
get
today
as
well,
and
it
can
support
upgrades,
although
we
want
to
improve
that
a
little
bit
as
well.
B
So,
if
you're
looking
for
standard
environments
that
aren't
like
a
dot
com
because
dot
com's
purpose
isn't
to
run
git
level,
application
per
se
get
live,
the
columns
purposes
to
serve
git
lab
as
a
service
to
crazy
amount
of
users
all
at
the
same
time,
which
obviously
is
never
going
to
be
the
same
for
self
managed.
So
there's
always
going
to
be
that
divide
where
we're
going
to
have
to
try
and
figure.
Can
we?
Where
can
we
bridge
and
what
would
make
sense
for
us
and
then
what's
makes
sense.com?
So
I
always
we
will.
B
The
goal
I
think
of
toolkit
is
to
try
and
reduce
like
internal
teams,
deploying
development
and
test
environments
with
all
the
old
little
tooling,
and
for
customers
as
well,
where
they
only
want
a
standard
environment
for
cob.
I
always
expect
that
to
probably
have
its
own
specific
and
bespoke
tooling,
you
guys
have
a
lot
more
things
to
consider
and
like
slas
and
other
things
like
that.
That
are
obviously
what
were
important.
C
I
see
definitely
use
that
just
for
get
like.
We
could,
for
instance,
use
get
to
ins
in
such
a
new
environment
for
a
disaster
recovery
site
of
staging,
which
doesn't
need
to
look
like
staging
exactly,
but
it's
capable
enough
to
now
run
queries
and
keep
all
the
data,
and
then
we
can
use
it
to
testing
failover
from
one
staging
site
to
the
other
staging
site
and
get
and
failing
back
to
to
you
know,
test
geo,
for
instance.
That
would
be
one
way
to
use
it.
I
think.
B
Yeah,
that
should
be,
that
should
be,
if
you're
looking
for
just
a
standard
environment
for
that
kind
of
stuff.
That
should
be
that
should
be
doable
today
and
the
docs
go
into
how
to
use
geo
and
stuff
feel
free
to
read
for
that
and,
as
I
say,
raise
issues
or
come
to
come
through
this,
the
slight
channel
to
ask
about
a
bit
that
should
be
doable
today.
If
you're,
just
looking
for
a
standard,
secular
environment
like
that.
D
You know, we're reliant, when we're setting up the secondary, on knowing that the primary site's IP address is this. So I think all we would really need for something like that is to allow those things to be overridden, so that you could say: okay, you don't know anything about the primary, but here are the IP addresses that you need. If we allow that kind of hook, then GET could probably quite easily work to spin up a secondary without knowing about the primary.
D
The only things we couldn't do are things like how we currently add the secondary as a secondary to the primary. You know, through the UI you have the Geo nodes and you can add a new one; we do it with a rake task on the primary site. We couldn't do that, because we'd need access to the primary, but other than that, with just a few IP addresses, we could probably hook up quite easily.
B
Yeah
that'd
be
that'd,
be
absolutely
fine
yeah,
we
encourage
everyone
to
just.
You
should
be
able
to
use
it
today
and
that's
that's
the
goal
now.
As
I
say
for
like
secrets
of
stuff,
we
have
separate,
we
have
a
separate
project
now
for
our
config,
and
that
would
be
the
kind
of
the
same
design
for
everyone
else
to
use.
You
should
check
out.
I
didn't
know
config.
A
I don't think we can really talk about Craig's agenda item now, but it is interesting that it's also about Ansible. I'm not sure if you're planning on joining the later call, but there might be some discussion there.
B
Unfortunately,
I
won't
be
able
to
yes,
it's
way
out
my
my
working
hours,
but
I
am
interested
in
seeing
that
effort
a
higher
level.
I
highly
say
I've
worked
with
chef
in
the
past
and
for
this
tool,
I've
I
dabbled
defensible.
I
hadn't
actually
properly
deep
dived
into
it,
because
I
I
worked
a
chef
quite
a
bit
in
the
last
roll
and
then
right
at
the
end,
they
did
the
license
change
bonanza.
B
If you want to move to Ansible, I found it to be quite comparable. Chef has a bit more depth to it, I guess, in terms of being able to just, you know, run Ruby scripts and stuff, but with Ansible you should be able to replicate that. Yeah, I imagine that'll be quite a bit of work to do, so I'm interested in how that goes.
A
Okay, I think that's it for today, then.