From YouTube: GET-20220119
Description
The GitLab Environment Toolkit (GET) is a provisioning and configuration toolkit for deploying GitLab's Reference Architectures with Terraform and Ansible. It is built and maintained by the Quality Enablement team.
A
Good morning, everyone, and welcome to this session for 2022, the first session of the year. Today we're going to hear about the GitLab Environment Toolkit, commonly referred to as GET, and we have Grant Young, a Staff Software Engineer, who's going to lead us through that discussion. So, Grant, without much further ado, I'll hand it over to you.
B
Okay, thanks. So yeah, we've been working away on GET for a while now, and the idea is for it to be able to deploy GitLab at scale and take away all the pain points that that kind of setup brings.
B
Obviously, that means it's got quite a wide gamut, so I'm happy to go through a demo of how GET runs today: just a quick demo against, I guess, one of our Google environments. Usually in these sessions I like to open up and see what particular areas people want to ask about, but if there's nothing initially, I'll start by showing you a quick demo now. This will be against a standard environment that we have right now. Let me get the right screen up.
B
Okay, so the toolkit is boring by design. It's not meant to be anything too special, although we think it's a bit special. By that I mean it's actually just Terraform and Ansible. That's all it is: we've added a bunch of Terraform modules and Ansible roles and playbooks that will configure GitLab when you run them in the right way.
B
But essentially, there are various permutations in how you want to do things, whether it's on Amazon, Google, or Microsoft, and whether you want to deploy a normal Reference Architecture, which is what GET is actually meant to do. I've got to call that out: GET is primarily designed to deploy the various Reference Architectures.
B
We have those now for GitLab, so you can deploy a full Omnibus Reference Architecture, where it just installs GitLab Omnibus on VMs, or you can deploy what we call a Cloud Native Hybrid architecture, which is a mix of Omnibus backends, where the state is stored, and certain stateless components of GitLab, such as Rails, Sidekiq, and a few other things, running in Kubernetes. So there's quite a wide gamut there, and then there are variations on each cloud provider.
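For context, the Kubernetes half of a Cloud Native Hybrid deployment uses the official GitLab Helm chart; deploying that chart by hand looks roughly like this (GET automates this step, and the release name and values file here are illustrative):

```shell
# Add the official GitLab chart repository
helm repo add gitlab https://charts.gitlab.io
helm repo update

# Install the stateless components (Webservice/Rails, Sidekiq, etc.),
# with a values file pointing at the external Omnibus backends
helm upgrade --install gitlab gitlab/gitlab -f hybrid-values.yml
```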
B
You
can
there's
several
options
you
can
do
in
each
depending
on
such
things,
such
as
network
yeah.
You
know
you
could
have
get
create
a
network
or
you
could
use
the
default
or
you
could
provide
one
stuff
like
that.
So
there's
a
lot
of
a
wide
and
deep
range
of
gets.
So
that's
why
I
say
at
the
top.
Usually
people
have
particular
questions.
They
want
to
explore
specific
areas,
but
we'll
do
a
quick
demo
first.
B
So
the
first
thing
you
always
do
is,
after
you
set
up
your
config,
which
is
you'll,
be
guided
through
that
in
the
docs
you
need
to
run
terraform.
So
this
is
going
to
be
running
against
the
current
environment,
but-
and
I
should
probably
shouldn't
have
any
changes
because
we've
not
made
any
changes
recently,
but
what
I'll
do
is
I'll
go
through
and
check
every
vm
and
every
kind
of
dependencies.
There's
networking
object,
storage,
other
bits
that
is
needed
for
to
run
and
I'll
go
for
each
and
check.
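The Terraform step described here maps to the standard workflow commands; a minimal sketch (the directory path is illustrative, not GET's exact layout):

```shell
# Illustrative path -- GET's actual directory layout may differ
cd terraform/environments/my-env

# Download the providers and modules the config references
terraform init

# Preview changes; an up-to-date environment should report none
terraform plan

# Create or update the VMs, networking, object storage, etc.
terraform apply
```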
B
So this one, as you can see, has just come back and given us outputs of all the VMs created; there's nothing to be changed. But that's pretty much it: you just run terraform apply once your config's in, and GET should handle everything for you, once you've configured it for what you want to do. And on the Ansible piece, Ansible comes in once the VMs are running, or the Kubernetes cluster.
B
If
you're
doing
that
kind
of
setup
and
it
will
go
through
each
node,
it
I'll
go
for
the
nodes
in
order
and
start
employ
installing
github
on
the
bus
or
deploying
the
helm
chart.
If
it's
kubernetes
and
configure
gitlab
with
the
right
config
and
the
right,
I'm
hooking
everything
up
for
making
everything
work
together
across
a
multi-node
setup,
so
I'll
run
as
well
now
and
then
I'll
open
the
floor
to
any
questions.
So
again,
it's
just
a
standard
answer
command.
B
We
have
an
all
playbook,
which
will
now
go
through
all
the
nodes
install
with
this
case
I'll
go
through
and
update.
Gitlab
apply,
apply
any
new
config
that
that's
been
added,
etc.
So
yeah,
that's
a
very
high
level
overview
of
gets,
but
like
say,
I'm
happy
to
open
floor
up
and
answer
any
more
specific
questions,
but
particular
areas
that
people
want
to
ask
about.
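The "all" playbook run mentioned here is a standard ansible-playbook invocation; roughly like the following (the inventory path is illustrative):

```shell
# Illustrative invocation -- inventory path depends on your GET setup
cd ansible

# Run every role against every node in the inventory:
# installs or updates GitLab and applies any new config
ansible-playbook -i environments/my-env/inventory all.yml
```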
C
B
No, it's a fair question. So what you do is, in Terraform, you actually configure the file. Let me see if I can get one up; I usually don't show config, because I'll have some sensitive stuff in there, and I try to be careful. But let me see... Terraform I should be able to show, so give me a second and I'll show you a 10k one.
B
Okay,
so
this
is
a
a
terraform
config
file
that
we
have
to
configure
the
module
that
we
provide
and
we
find
modules
for
gsb
aws
azure
those
modules
go
off
and
they
actually
employ
other
modules
to
go
off
and
create
vms
networking
et
cetera.
B
So
we
try
and
keep
it
all
in
one
file.
There's
a
few
other
files
needed
in
terraform
world
is
two
of
our
files,
but
they're
just
for
me
configuration
against
authentication
against
the
cloud
fighter
and
some
variables,
but
this
is
the
main
file
where
you
actually
say:
here's
all
the
vms.
B
I
want
or
here's
all
the
services
I
want,
depending
on
on
the
environment
type
you
want,
and
in
this
you'll
see
we
actually
have
the
machine
types,
the
machine
counts
and
that's
where
you
kind
of
the
fact
will
define
the
size
of
the
reference
architecture.
This
could
be
tweaked
too.
So
if
you
want
5k
you'd
set
smaller
machine
sizes,
smaller
machine
counts,
etc.
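As a sketch of the kind of file being shown (a single Terraform config setting machine types and counts against a provided module), something like the following; the module source, variable names, and sizes are illustrative and won't match GET's exact interface:

```hcl
# Illustrative only -- names and sizes do not match GET's real module interface
module "gitlab_ref_arch_gcp" {
  source = "../../modules/gitlab_ref_arch_gcp"

  prefix  = "my-gitlab-env"
  project = "my-gcp-project"

  # Machine counts and types are what effectively define
  # the Reference Architecture size (smaller values for e.g. 5k)
  gitlab_rails_node_count   = 3
  gitlab_rails_machine_type = "n1-highcpu-32"

  postgres_node_count   = 3
  postgres_machine_type = "n1-standard-8"
}
```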
B
We've got a list of all the Reference Architecture configs for internal use. We don't have an externally available list yet, because we've been looking for the right way to maintain a big list of example configs in a maintainable way, but we're going to do this soon. We need to get it done, because people keep asking, so we will have a list soon that's externally available as well.
D
So my first question is: I know how GET was created. It was created for us to quickly roll out environments to test and do performance testing. Would you consider GET also a tool to deploy production installations, and if not, are there any future plans of getting there? I'm asking because currently, of course, rolling out bigger Reference Architectures is a bit of a pain; the documentation has a lot of steps to follow, and GET seems to be a much better way of doing this.
B
So
the
goal
is
to
make
your
production
ready
we're
not
there
quite
yet,
but
we're
there's
a
lot
of
functionality
in
get
so
what
we
say
today
to
customers
and
to
yourselves
and
others
is
that
you
know
get
on
a
journey
and
it
can
configure
a
base
environment
today
for
a
customer
to
use,
and
then
you
can
build
upon
that
accordingly
and
that
will
always
be
the
case
in
some
ways,
because
customers
will
have
very
specific
security
requirements
or
other
kind
of
requirements
they
need.
B
So
there
always
will
be
some
kind
of
additional
day
two
kind
of
a
task
there.
So
what
we
see
today
is
very
welcome
to
trial
and
evaluate
get
if,
if
they
think
it
meets
the
requirements,
they're
one
worth
to
give
our
goal.
Support
will
support.
You
know
questions
around
environments
built
with
get
because
once
gets
done,
it's
just
a
standard,
gitlab
fire
and
gets
not
doing
anything
special
here.
So
support
builds
to
debug
it
like
normal,
but
there
are
some
limitations.
We
call
those
out
the
docks
and
yeah.
D
Okay, cool. And I also have a second question. Basically, I looked at GET a few weeks ago, and I think the documentation is stellar. I loved it: super easy, super well documented. It is currently very well documented on how to use it to spin up your GitLab installation on one of the cloud providers, so either AWS or Google Cloud. However, the Ansible playbooks that are there could, I believe, also be used for an on-premise installation.
B
For Ansible, let me get the page up. We added this a few weeks ago. It was something we knew we needed to do, because there are obviously quite a lot of customers out there on-prem. We don't support the Terraform piece, because the variations there would be unmanageable, but if the customer has the VMs ready to go, that's a much flatter thing that we can support.
B
So
we
do
support
static
environments,
essentially,
as
we
call
them,
and
it's
just
a
few
configuration
changes,
you
can
get
in
the
asphalt
side
to
make
it
work
and
you
need
to
have
a
static
inventory,
obviously
as
well
fanciful.
So
it
is
supported
awesome!
Please!
Let
me
know
how
you
get
on
with
it,
because,
obviously
it's
a
bit
of
a
tricky
one,
because
we
don't
actually
have
an
on-prem
setup
to
test
against,
but
we
have
tested
the
best
as
we
can
and
we
know
I
think,
a
few
other
people
trying
to
use
it.
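For a static (on-prem) environment, the static inventory mentioned here is a plain Ansible inventory file; a rough sketch, with hypothetical host and group names that won't match GET's exact schema:

```yaml
# Hypothetical static inventory -- group and host names are illustrative
all:
  children:
    gitlab_rails:
      hosts:
        rails-1.example.com:
        rails-2.example.com:
    postgres:
      hosts:
        postgres-1.example.com:
    gitaly:
      hosts:
        gitaly-1.example.com:
```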
E
Yeah, hi Grant, just wondering: before you made this publicly available, we were seeing a lot of interest, particularly from partners. I just wonder if you've got any feel for how widely it's being used, maybe based on, you know, questions you're being asked and so on.
B
Recently we're starting to see more interest, and I'm talking quite recently, over the last month or so: we're starting to see a spike in interest in it. This is anecdotal, of course, but I'm seeing more questions coming from more users, more issues being raised, more support tickets being raised, and we are starting to see that increase, which is expected. We expected it to be a bit of a slow start, but yeah, it's starting to go.
B
What
we
think
is-
probably
it's
probably
an
exponential
curve
at
this
point
and
that's
both
exciting
and
scary
at
the
same
time,
but
you
know,
as
always,
with
these
cases,
but
it's
good
to
see
it
starting
to
go.
B
We've
got
like,
for
example,
we've
had
an
mr
literally
today
from
an
external
from
a
customer,
just
fixing
defend
one
tiny
little
bug
in
the
code
and
and
they've
submitted
them
or
to
fix
it,
which
is
great
and
we're
getting
more
and
more
of
that
so
yeah
it's
starting
to
increase,
but
yeah
I'd
like
to
say
we
keep
keep.
The
message
clear
of
you
know
is
we're
still
building
this.
This
is
still
on
a
journey.
We've
done
a
lot
of
work,
we're
very
proud
of
it.
B
So the toolkit's genesis was actually performance testing. We needed a way in Quality to build out and test essentially real environments, and this all kind of fed into itself in a loop. We needed to define what those environments would be, you know, real-world targets that we could test against, and that's now called the Reference Architectures.
B
So
we
had
to
build
those
first
and
then
we
needed
a
way
to
build
those
architectures
out.
So
we
actually
built
something
called
the
performance
environment
building,
which
is
what
became
get
and
we
built
that
at
first
to
actually
build
out
the
environment,
and
then
we
built
gpt,
which
you
might
know
of
which
is
our
performance
testing
tool.
So
it's
kind
of
a
self-feeding
cycle,
the
office
architectures
are
defined,
get
bills,
reference,
architectures
and
gpt
tests
against
them.
B
We
test
these
range
between
daily
and
weekly,
depending
on
what
we
think
are
the
most
popular
reference
architectures.
We
have
those
available
on
our
wiki
and
I
can
link
that
to
you,
where
you
can
see
literally
daily
updates
of
the
latest
performance
metrics.
B
Yeah, the wiki is completely open, as are GPT, GET, and the Reference Architectures. One thing we didn't actually expect was that customers actually use GPT themselves during the build process, and in some cases after. It's kind of like a health check, just to see: is the performance still good, or is the performance of the new environment we've just built actually up to par? So it's all very open in that regard. Excuse me.
F
Yes, Grant, thank you for doing this presentation. This is excellent. This has been a very interesting thing that we've been looking at, and thank you again for putting this together.
F
As far as my question goes: does GET work with the GitLab Agent for Kubernetes? Does it do the installation? Does it do validation? What's the scope of how it interacts with it, if it does?
F
So there is an agent that the ops team has put together, which essentially does a GitOps model, where the agent lives in the Kubernetes cluster and interacts with GitLab. So rather than doing a push through a runner to install the application you're developing in GitLab, you use the Kubernetes agent to do a pull. There's an agent, and then there's a server that goes along with it, and so on and so forth.
B
It
doesn't
do
anything
directly
with
the
gitlab
agent
functionality
directly
right
now,.
B
They
just
I'm
just
trying
to
remember
how
that
works,
exactly
that
when
it
comes
to
runner
stuff.
Get
doesn't
do
that
right
now,
because
runners
are
actually
a
little
bit
unautomatable
in
various
little
ways,
but
what
gate
will
do
is
give
you
a
base
environment
and
then
you're
more
than
welcome
to
configure
it
after
the
fact,
of
course,
to
to
add
in
the
runners
or
the
kubernetes
agent.
In
this
case,.
B
But
if
you
think
there's
a
if
you
think
there
is
scope
to
automate
that
process
feel
free
to
is
an
issue,
and
we
can
take
a
look
here.
F
Yeah,
so
what
I
just
did
is
I
posted
the
documents
link
for
what
the
kubernetes
agent
is.
As
for
everybody
to
kind
of
know,
what
that
what
I
was
referring
to,
I
think
that
there
was
a
name
change
recently
and
that's.
Why
might
be
some
confusion
around
that?
F
In
any
case,
I
think
there
is
some
scope
for
it
to
be
automated
in
the
sense
that
it
is
a
difficult
component
to
get
installed
and
working,
and
if
there
is
any
automation
that
can
be
applied
to
it,
I'm
sure
it
would
go
a
long
way.
Thank
you.
B
Yeah,
it's
an
interesting
one.
It
would,
I
assume
the
agent
would
be
in
his
own
kubernetes
cluster.
Is
that
the
the
right
take.
F
Doesn't
have
to
be
from
what
I've
heard
it
can
be,
it
could
be
within
the
cluster
that
you're
deploying
the
apps
to
or
it
could
be
in
its
own.
I
think
for
security
purposes.
They
propose
to
do
it
in
its
own,
so
that
it's
you
know
it's
isolated.
B
Yeah
that
make
that
makes
a
lot
of
sense,
yeah
so
get
the
get
supports,
deploying
git
lab
itself,
the
actual
application
into
his
own
cluster,
and
that
has
been
a
tremendous
effort
to
get
that
to
work
to
where
we
need
to
be
at
the
moment.
They
probably
would
be
before
we
be
reticent
to
introduce
a
second
cluster
to
apply
an
agent
into,
because
there's
various
reasons
for
that.
But
the
other
reason
is
that
we're
getting
into
opinionated
territory.
B
How
that
looks
for
different
customers
is
different
and
it
would
be
difficult
to
try
and
get
a
one
solution
that
fits
all
but
we'll
take
a
look
and
see
if
there's
any
any
scope
there
for
sure,
but
yeah
yeah
when
it
comes
to
like
say
that
deployments
kind
of
piece
when
customers
are
one
to
use
runner
or
kubernetes,
we
usually
will
empower
them
to
do
it,
but
we'll
take
a
look
and
see
if
there's
anything
that
that
makes
sense.
There.
B
That's
fine!
So
what
I'll
say
is
that
usually
these
presentations?
I
said
I
usually
try
and
keep
it
light,
because
it
can't
it's
a
very
quick
like
even
that
in
that
conversation
get,
can
easily
deep
dive
and
become
very
technical
very
quickly,
and
I
present
to
different
audiences
all
the
time.
So
it's
always
hard
to
gauge
where
you
know
the
level
that
we
wanted
to
kind
of
present
on
so
the
the
high
level.
The
high
level
is
very
high
level.
B
Today,
I'm
happy
to
to
go
over
any
any
other
pieces.
If
anyone
people
just
say,
can
you
go
over
a
bit
of
kubernetes
cloud
native
or
stuff
like
that
feel
free
to
shout
out
and
I'll
I'll?
Do
that
for
you,
but
but
yeah
any
other
questions
you
have
or
anything
else.
You
want
me
to
cover
if
they
get
kind
of
design
happy
to
discuss
that
now
or
even
the
reference
architectures,
I'm
heavily
involved
in
those
as
well.
So
I
can
discuss
those
two.
B
Silence
always
a
worrying
or
a
good
thing
hard
to
tell
between
the
two
okay,
so
let
me
run
off
some
some
features
again
and
see.
If
that
see,
that
brings
out
any
interest
and
people
to
discuss
so
currently
get
supports.
We
can
deploy
all
the
refs
architectures
from
1k
to
50k,
including
the
cloud
native
hybrid
variants.
We
have
support
for
amazon,
azure
and
git
lab
github,
amazon,
azure
and
google
for
deploying
the
omnibus
environments.
We
have
support
for
aws
and
gcp
for
the
cloud
native
hybrid
environments.
B
There
is
no
plans
for
that.
Azure
kubernetes
service
at
the
moment,
we're
discussing
that
on
a
larger
scale
about
azure
services,
because
we've
continuously
had
problems
with
azure's
offerings
and
so
nothing
to
to
discuss
on
that
point.
For
now
the
get
supports
upgrades
it
supports
being
able
to
it
supports
upgrades.
We
do
support
elasticsearch
at
the
moment
for
advanced
search,
we're
re
evaluating
that
with
the
license
change.
But
at
the
moment
you
can't
deploy
an
elasticsearch
version
that
that
is
the
version
that
doesn't
have
the
the
license
change.
B
We
want
to
look
at
other
stuff
in
the
future,
so
it's
just
the
open
search
stuff,
but
that's
not
that's,
not
ready
quite
ready.
Yet
we
do
support
geo.
We
support
zero
downtime
upgrades
and
get
will
also
support
some
other
additional
things
such
as
for
adding
and
load
balancing.
We
do.
We
do
support,
exploring
up
setting
up
the
monitor
piece
and
on
the
bus,
so
that's
promising
prerequisites
and
kirfana
that
gitlab
provides.
B
We
support
external
ssl
termination
and
we
support
certain
services,
mainly
aws
right
now.
We
support
rds
and
elasticash
and
on
our
on
avs
as
well.
We
do
support
the
internal
bouncer
being
a
cloud
aws,
nlb
load
balancer,
that's
those
are
the
areas
very
still
expanding,
so
hopefully
we'll
have
more
on
that
piece
of
google
and
external
about
stuff
in
the
future.
B
The
get
project
has
a
whole
bunch
of
issues.
You'll
see
what
we're
trying
to
work
on
we're
working
on
the
2.0
release
right
now,
which
is
going
to
have
a
whole
a
whole
bunch
of
changes
and
big
improvements
coming.
That's
hopefully
going
to
be
you'll,
see
it
better,
probably
the
end
of
next
week
and
then
probably
the
main
release
would
be
in
the
middle
of
february
and
then
we
just
continue
2.1,
we'll
we'll
have
additional,
we'll
be
continuing
to
add
new
features,
those
missing
areas.
B
One
of
the
big
questions,
I
guess
about
red
hat.
We
we're
planning
to
the
2.1.
We
might
actually
be
doing
a
2.0
because
someone's
actually
made
an
mr
that
actually
seems
to
be
working,
which
is
good.
We
found
it
to
be
a
lot
harder,
but
actually
seems
like
we've
made
a
lot
of
sensible
choices
in
the
past.
So
actually
the
switch
to
our
sport
rail
seems
to
be
a
lighter
touch,
so
watch
the
space.
F
Grant, this is Samir again. I apologize, you ran through the list quickly: did you mention Red Hat OpenShift on there?
B
A
little
stuff
there
is
the
red
hat
operator,
which
I
know
the
distribution
team
are
working
on.
We
don't
support
that.
Yet
we
support
helm,
we're
going
to
evaluate
that
in
the
future
when
the
time
is
right
to
see
if
you
want
to
either
switch
from
operator
from
hell
to
operator
or
backwards
or
whatever
we
want
to
do.
But
yeah
there's
nothing.
Nothing
to
announce
in
that
that
for
now.
G
Grant
thanks
for
doing
that,
grunk
here,
a
pretty
generic
question.
What
would
be
reasons
where
we
would
not
recommend
using
get?
What
would
what
would
trigger
a
recommendation
against
it
other
than
the
obvious
certain
platforms
not
supported.
B
That's
stuff
we're
still
trying
to
figure
out
and
we're
still
we're
excited
to
see
more
customers
use
it,
because
we
want
to
get
that
feedback
more
and
what
will
probably
be
the
case
is
that
customers
need
bespoke
designs
around
the
networking
or
the
security.
That's
the
two
big
areas,
probably
you'll,
start
to
see.
Maybe
some
friction
where
they
have
like
if
they
have
very
strict
security
crimes
that
they
can't
change.
B
They
must
have
this
this
and
that
we
might
not
have
that
in
place
or
the
networking
needs
to
be
in
a
certain
way.
Again.
We
have
generic
networking
kind
of
designs,
although
we
do
allow
customers
to
to
literally
stick
in
their
own
networking
if
needed.
So
it
really
depends
on
the
customer's
requirements.
B
So if you hear anything, please let us know; we're certainly keen to hear those areas that would prevent customers from using GET. Previously the reasons have been things such as RHEL support, or private subnet support, which was one we got a few times: we're working on that literally as we speak, on AWS at least. I can't remember anything else off the top of my head; some customers just wanted to do it themselves.
B
If
I
hear
anything
from
customers
I'll,
let
you
know
but
yeah,
please
let
me
know
if
you
hear
anything
either.
B
So
I'll
call
it
some
things.
We
don't
plan
to
include
and
there's
various
reasons
for
that,
so
we
generally
won't
handle
cloud
accounts
and
gets,
and
that's
a
very
murky
area
depends
completely
usually
has
various
considerations
for
customers
and
security
passwords.
All
that,
so
get
won't,
usually
generate
accounts.
They
might
sometimes
be
small.
Some
like
automated
accounting
technically
would
be
accounts
for
just
pieces
and
the
cloud
providers
to
make
things
work
like
service
accounts
and
gcp.
That
kind
of
stuff,
but
generally
won't
do
that.
B
We
won't
support
anything
outside
of
promises
and
grafana
directly,
and
by
that
I
mean
we're
not.
We
know
that
customers
usually
will
bring
their
own
monitoring
stack
and
that
will
be
very
different
depending
on
each
customer.
We
will
obviously
try
and
support.
Let
customers
plug
that
in
to
get
into
their
gitlab
environment.
Sorry,
but
yeah.
We
only.
We
only
automate
the
the
main
prometheus
and
gravana
on
the
bus
piece,
but
yeah.
B
We
don't
do
anything
more
than
that.
We
also
don't
support
directly
on
the
buff
and
on
the
off
and
email
support,
because,
again
the
variations
that
each
customer
have
will
just
be.
It's
just
massive
this.
If
you
look
at
email
lists,
particularly
there's
numerous
different
email
files
that
you
could
use,
but
what
we
do
do
is
we
do
support
custom
config.
So
this
is
a
concept
where
a
customer
can
set
up
get
and
also
then
literally
paste
in
like
a
ruby
file,
config
file.
B
That
then,
would
be
added
to
the
different
components
in
gitlab.
So
you
can
add
in
your
own
specific,
on
the
off
or
email
support
with
your
own
file
and
get
we'll
take
that
and
add
it
in
the
right
place
and
merge
it
in,
and
you
know,
apply
that
config
for
you,
so
we're
confident,
hopefully,
they'll
cover
cover
most
people's
bases,
but
it's
good
to
call
that
as
well.
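As an illustration of the custom-config idea, the pasted-in Ruby file is an Omnibus gitlab.rb fragment; a hypothetical SMTP example follows (the values are placeholders, and the exact file name GET expects is documented in the project):

```ruby
# Hypothetical gitlab.rb fragment a customer might supply for GET
# to merge into the generated Omnibus config -- values are placeholders
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.example.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "notify@example.com"
gitlab_rails['smtp_authentication'] = "login"
```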
B
Enough
enough
watering
for
me
any
other
questions,
any
other
discussion
points
but
gets
referenced.
Architectures
gitlab
at
scale
happy
to
discuss
that.
But
if
not,
I
think
that's
enough.
You've
heard
enough
from
me
today
so.
A
B
Yeah, it's certainly a lot to take in, take it from me. So what I'll say is: we've got the GitLab Environment Toolkit Slack channel, and you're very much welcome to come over there and ask any question or discuss anything; like I say, there are no stupid questions. We also have the GitLab Environment Toolkit project's issues. When it comes to MRs, because GET is quite specialized, what we say is raise an issue first, or speak to us on the channel.
B
If
you
want
to
do
an
mr
and
we
could
discuss
together
about
potential
ramifications
and
stuff
and
the
correct,
maybe
the
correct
way
to
do
it.
What
will
we?
We
will
rarely
say
no
to
anything,
there'll
be
sometimes
where
things
can't
work,
but
we
will
try
and
find
a
compromise,
but
we
certainly
want
to
encourage
any
contribution
and
anybody
wants
to
come
in
and
help
us
on
the
journey,
so
yeah
feel
free
to
reach
out
and
hopefully
that's
been.
This
has
been
helpful.