From YouTube: GitLab as Cloud Native: Complex Suite Made Simple with Helm. Jason Plum, Sr. Distribution Engineer, GitLab.

Okay, I'll try not to yell at it; if I do, somebody just tell me to lower it down a little bit. I'm used to conference speaking in much larger rooms, so I tend to project. So first off, thanks, everybody, for actually coming out, and anybody who actually works on the project or in the community: thank you for your work.

I've been doing this for some time now, and without some of that I wouldn't be able to do what I do. So I both get to keep and lose my sanity because of all of you. I want to say thank you to some of the people that are actually crazy enough to be here, and prepare yourselves for what I would refer to as some Pastafarian nightmare spaghetti monster of what I do now. This is all made possible thanks to the Helm project and several of the partners that we have throughout the project and community.

So let me just go ahead and get started. Let's set some of the groundwork here: I need you to understand the scale that we're working with. All right, the size and complexity of the application that I'm trying to deploy might look like I'm lying to you at first, until you actually dig in a little bit and see what I'm actually talking about. If you don't understand the suite, you don't understand the challenge that we're actually trying to address.

So let's look at scale. We have five external dependencies; that's it, right? Well, some of them are actually forked, because we needed to make changes that weren't ready for being upstreamed, or we knew that upstream wouldn't necessarily accept them, or we just had to integrate so tightly with them that we didn't have a choice.

So we have a few of them that are actually forked from upstream, and we have a few of them that are developed completely in parallel, because the upstreams didn't exist when we started and we didn't think ours was quality enough at the time. But then we also have ten (and counting) internal charts that we have to keep in line, orchestrated and interwoven. Fully enabled right now, excluding our operator, that's approximately 6,000 lines (and counting) of output.

There are a lot of charts that actually do spit out several hundred lines; somewhere between three and six hundred is the average chart. So to give you the visibility in scale, that's relatively one-tenth our size. Now, to be fair, that's also because we're basically spitting out 19 charts in one. Understand that this goes to show the level of complexity that we have to deal with. So when I tell you that we have to deal with an entire application suite, I am not joking.

This is the public architecture documentation that we have, which actually outlines all the components and the ways that things interact. Our chart isn't even all of this yet, okay? When people think of GitLab, I think, it's an application; it's what they interface with. In reality, there's an entire suite of applications behind it that we have to keep very tightly integrated, interwoven and orchestrated, no matter what the platform is. Now, our chart can't actually be that simple.

If our edict is to be self-contained, manageable and scalable, we have to actually do some complex things in the background. We can't make a great new deployment method and then immediately tell our customers, "Hey, here's our awesome new way to do things; by the way, it's harder." That matters if you're coming from Omnibus GitLab land, so operating in standard VM deployments.

There, it really is as simple as apt-get install gitlab, so I have to make the Helm charts, and the use of commodities, as easy as helm install gitlab, or at least try really hard to do that. And I need you to remember, though: not everybody understands what we're doing, okay? The charts have to be very, very simple to use while being extremely complex in the background, but at the same time manageable from a development perspective.

Those of us working on complex charts of any kind, working with suites of charts put together, know that orchestrating all of that gets hard, especially when you have competing styles or maintainability issues, and you're keeping all of those things straight. And then we have to remember that our target consumer may not understand the depths of Helm, so they may not be able to make small modifications to make it work.

It's possible they don't really understand Kubernetes at all, when you stop and think about it. They could be just an application developer that needs to install a tool that they can use, or they could be just a Kubernetes administrator who's never seen an application this complex before. This is not a knock on anybody who might want to use the chart; we're talking about new technologies. Kubernetes itself, as a project, is barely five years old.

Helm as a project is not even half that. We're doing things with technology that were not imagined 15 years ago, when the cloud got big and people started understanding virtual machines and how to work with them. We are changing the way people think about application deployment. So you need to understand: not everybody has caught up yet, and thus I need to do things in a couple of interesting ways.

So what's my actual challenge with these charts? It's pretty simple: I've got to be able to install the charts on any cloud with as few commands as possible. Any cloud, one command: that is the challenge itself. We do our darndest, to say the least, but not everything is completely perfect. The idea is that you can simply add the repo, pull your updates, and actually run the install command with as little configuration as possible.

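That ideal flow, sketched as commands (a minimal example; the domain and email below are placeholders, and the exact flags should be checked against the chart's documentation):

```shell
# Add the GitLab chart repository and refresh the local index
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Install the whole suite with as little configuration as possible;
# both values below are placeholders for your environment
helm upgrade --install gitlab gitlab/gitlab \
  --set global.hosts.domain=example.com \
  --set certmanager-issuer.email=admin@example.com
```
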
Let me explain why we needed to make the move to cloud native. We've been developing Omnibus GitLab for years. It's great! No, really, it is; it is actually awesome to use. It's also inspiringly large, complex and heavy. That thing is now over a gig and a half compressed, okay? But needless to say, a monolith only gets you so far. At KubeCon in Seattle I gave a presentation outlining our work and progress at the time, but I'm just going to summarize that here: the Omnibus is massive.

When you multiply that size and complexity by the size and complexity of deploying a multi-million-user SaaS, you learn the meaning of growing pains, and Pyrrhically so. We got to the point where we were on our nth iteration of the in-house development and deployment tools, and we realized we really needed to make progress on something that's more maintainable for the future, and designed for the future.

So let's set some grounding here, right? This particular meme came up just the other day, and it was so timely I had to put it in. Why is this thing so massive? Our goal is to actually encompass the entire development lifecycle as a tool, dogfooding it all while we improve it. As a result, that means we're incorporating all the tools that we use, if not making new ones to fill in the gaps. And I know some people are going to ask: why not do this piece by piece?

Why not chart by chart, Unix-style, "one thing does one thing well"? Okay, cool. If I told you to deploy that, would you think, "Hey, this is easy"? Really? Which one of us is lying to ourselves? Because I just showed you that architecture graph. If you had to deploy every single component and keep track of all the interconnections, the secrets, the TLS for every single component, how long would that take you to do manually?

Then we have to worry about the basic things: what are our targets in the project? Right: it's got to be maintainable. That's the number one absolute. With something this large, you've got to be able to maintain it as it grows, even if it mutates. If you lose this fight, the game is over, because you won't be able to manage it, which means you can't use it, which means your consumers can't use it; whether that's in-house or out of house doesn't matter. It's also got to be flexible, and I mean really flexible.

This is the close second; it's really hard for me to separate it from maintainability, but if I can't use it, it doesn't matter, right? The chart will be deployed as a whole, with very little modification, by most people. But for others, say customers running several thousand users, or tens of thousands of users, or hundreds of thousands of users, it's got to be able to be swapped component by component within the existing patterns.

I also have to be able to roll from an architecture that's been in deployment for five to ten years and then slowly migrate all the components over. Because if you have a global customer that's in, you know, every single region, they can't have the downtime: if it happens while the Americans are asleep, well, that's going to mess with everybody in APAC and probably make for a bad morning over here in EMEA, right? So I have to be able to install all, or part, or just the minimum requirements.

For us, downtime is insanely costly. I can't make the kind of mistake where I break things and force you to do a pump-and-dump, right? Back it up, drop it, throw it into production again: can't do that. It's a bad thing for us, it's bad PR, and it's really bad for any of our customers, and we have a lot. The last thing in this project is that it has to be industry accepted. This new method has to be able to make use of industry-recognized tools with a level of maturity.

And it has to be broadly enough accepted throughout all my customers that they'll actually make use of the thing I'm doing. We had choices when we started this project a little over two years ago, and Helm was the best choice that was available at the time; personally, I think that's the best decision we could have made. We definitely think it has the highest traction at this point, and soon, with more flexibility coming, it'll gain even more. So let's get to the gory details, because, let's face it, that's why y'all came, right?

We make extremely heavy use of templating, and by "extremely" I mean that list. The project's open; you can see this. This is 40-plus of our templates, plus a couple of the templates that are forked. It's insane, the amount of templating we make use of, but we do it for very good, very solid reasons, and sometimes people don't believe why we're doing it the way we do it. Some people don't believe that it's required, and you know what, when you see something this big, it's not hard to see why.

Yes, I wish that we could condense this, and maybe even separate out our charts so that we don't have one big monolithic chart. With as many templates as we have here, if we could do that, we would do that. But remember that library charts aren't here yet, so I have to make do as best I can with the tools that I have.

So we have to worry about the values that we're templating, the sections of config maps, the repeated chunks of code, right? This is what library charts will eventually do. And we also have to worry about our globals. One of the things we do, as you saw in the previous slides, is have all of these individual sections of primary helpers. These are actually whole sections of configuration, or even individual values or composed values, so that I actually have a way to say: I don't have to create the URL for the API server 15 times.

I can set the template once and then just include the template inline, and it'll actually work. This is one such example. What we're doing here is literally just "give me the hostname for the actual GitLab instance." That'll be reused later as part of the URL, it'll be reused for API endpoint values, it'll be reused for any number of things.

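As a rough sketch of that pattern (the helper and value names below mirror the shape of the real chart but are illustrative rather than copied from it):

```
{{/* Illustrative helper: resolve the GitLab hostname once, then include it
     anywhere it is needed instead of rebuilding it in every template. */}}
{{- define "gitlab.hostname" -}}
{{- if .Values.global.hosts.gitlab.name -}}
{{- .Values.global.hosts.gitlab.name -}}
{{- else -}}
{{- printf "gitlab.%s" .Values.global.hosts.domain -}}
{{- end -}}
{{- end -}}

{{/* Usage, inline in any template that needs the URL: */}}
url: https://{{ include "gitlab.hostname" . }}
```
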
But you'll notice we have the internal template for assembling the host, which actually pulls together some of these pieces so that you don't have to set as many values, but also the specific value provided through our global setup (I'll cover globals in a second). We have our global hosts, and gitlab, so for the actual GitLab instance you can specify its name directly, outside of our defined patterns.

If you choose otherwise, we'll actually go and assemble that for you. So we actually have chart values, plus global values, plus the assembled value, and then the ability to override that if you choose. Yes, that's a stack on a stack on a stack of things. But remember, Helm 2 only has Go's text templating with Sprig and a couple of extra functions, right? We don't have Lua; we don't have the ability to do this programmatically. We had to do this by dealing with the output of templates stacked upon each other, and remember that that output is always text, right?

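Expressed as values, that stack looks something like this (illustrative keys, resolved in the order just described):

```yaml
global:
  hosts:
    domain: example.com      # lowest layer: assembled into gitlab.example.com
    gitlab:
      name: git.example.com  # explicit override: wins over the assembled value
```
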
There we go. Let's set a few ground rules on what we're actually doing when it comes to our templates. We have a document out there in our documentation that specifically covers architecture decisions; I'm going to cover a few of those and how they specifically relate to the depth and use of our templates. Because library charts don't exist yet, we have to make sure that the template partials, wherever we use them, are as close as possible to the chart that needs them, but not all the way up in the global space.

Imagine if I had one templates folder at the top that had all 45 template files in it, or worse, one template file that was all 45 in one. If you opened a 2,600-line template file, wouldn't you go, "Where the heck did I put stuff in here?" Now imagine, instead of 2,600 lines, it's closer to a hundred thousand lines.

You don't do that. We have to be able to structure these deliberately. Now, I've actually dropped a few things in here right out of our documentation, but I want to point out that the reason we're doing it this way is that we needed to be able to use DRY patterns. You know: we don't keep repeating ourselves all the time if we can write something once and do it through a template.

Do it that way, and remember that templates are actually global, so you can stack them, and the first one read in through the hierarchy is the one that actually wins by that name. It's kind of a dynamic JIT, but not quite. That comes into play in the way we use things because, for example, we actually override the PostgreSQL service name, so that we can deploy PostgreSQL but also know exactly what its name is going to be, and have that be controlled by our chart instead of the downstream chart.

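A minimal sketch of that trick, assuming the PostgreSQL subchart names its Service through a helper called postgresql.fullname (the helper name is illustrative; the real one depends on the subchart):

```
{{/* Placed in the parent chart's templates/. Because template names are
     global and the parent's definition takes precedence, this replaces the
     subchart's helper and pins the Service name our chart expects. */}}
{{- define "postgresql.fullname" -}}
{{- printf "%s-postgresql" .Release.Name -}}
{{- end -}}
```
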
That has actually come into play throughout the entire chart, in dealing with some of the forks that we've had to deal with, but it's a particular case that I have to point out. So here's another one; this one is unique to us, as far as I've found. We actually have a pattern in play that makes use of Helm templates to specifically detect a configuration that was passed which will be deprecated in a future release.

So if we know that we need to move configuration values, we intentionally make use of the fail logic that's available, to do the closest thing we can get to programmatic checks on the values and tell you: "Okay, this was moved to a secret. Please create that and move the value there. Here's the link to the documentation." Or: "Hey, we moved this entire configuration for SAML auth from the individual chart up to a global property, because we realized that it needed to be set in six different charts, not one."

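The shape of such a check, as a hedged sketch (the value path and message are invented for illustration):

```
{{/* Illustrative deprecation guard: abort rendering with a migration hint
     if a value we relocated is still set in its old location. */}}
{{- if .Values.someComponent.password }}
{{- fail "someComponent.password was moved to a Kubernetes Secret; create the Secret and see the chart documentation for migration steps." }}
{{- end }}
```
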
This allows us to actually have rolling changes in minor releases without going, "Oops, we just broke 15 people in production," because that would be bad, to say the least. And let's face it, not everybody has full change management in place. So if we don't try to protect them, even just a little bit, we'll end up breaking a lot of people in bad ways, and that's a nasty support call for us and not great PR either, as a product.

Here's another one: helm upgrade with --reuse-values does not take in new default values. So if you deploy the chart at one revision, and then I go and make a new revision in which I've added, say, a new dependency that requires defaults, and you say helm upgrade --reuse-values, there are whole sections of value defaults that are no longer present.

The template will fail, and fail horribly, and not explain to you why. So we actually ask people to use helm get values and then pass that in to helm upgrade. The reason being, we then don't have to worry about new default values not coming through: they'll get rolled in, and Tiller will actually behave appropriately.

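In command form, the recommended flow looks roughly like this (release and chart names are illustrative):

```shell
# Fragile: --reuse-values silently drops defaults added by the new chart version
helm upgrade gitlab gitlab/gitlab --reuse-values

# Safer: export only the values you customized, then feed them back in,
# letting the new chart version's defaults fill in everything else
helm get values gitlab > gitlab.yaml
helm upgrade gitlab gitlab/gitlab -f gitlab.yaml
```
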
The next thing I have, based on that same kind of logic, is the ability to actually check for configuration that will be bad. An example: we have both Redis and Redis HA. We have them as separate forks because of another choice that I'll talk about in a minute. But if you tried to deploy Redis and Redis HA together, which one am I going to hook the application to? Why am I going to consume both sets of resources? Which one is going to win the service-name war? It's like: why would you do that?

It's a mistake, but it's a mistake that more than one person has made or could make. So we use the same kind of logic as in the deprecations, which I can show you at the end if you choose: if we detect that you've set two values that are competing, or that you've set a value but not set a dependent value, we will actually stop you from deploying the chart into a known broken state. When we can find and detect those cases, we prevent people from doing that.

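Applied to the Redis example, such a guard might look like this (the value names are illustrative):

```
{{/* Illustrative sanity check: refuse to render a known-broken combination
     where both the single-node and HA Redis subcharts are enabled. */}}
{{- if and .Values.redis.enabled (index .Values "redis-ha" "enabled") }}
{{- fail "redis.enabled and redis-ha.enabled are mutually exclusive; enable only one." }}
{{- end }}
```
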
Then we have the very pervasive use of values.global, and when I say pervasive, I mean: if you do a helm install --dry-run --debug and look at the output, you will see what you put in, which is anywhere between one and sixty values, depending on how tweaky you get with the settings. However, when you go to the computed values, you're going to see the same list of global default behaviors a lot. That's because we don't want to have to configure everything ten times, and we don't want you to have to configure everything ten times.

Say you want to use an existing, production-grade, HA, you know, region-resilient Postgres, right? Whether that's using Cloud SQL or the updated charts from Bitnami, you need to be able to configure that. Now imagine that I have to tell every single individual component: "Hey, Postgres is over there. Hey, Postgres is over there. Hey, here's the Postgres secret, over here." Imagine I had to do that for every component, out of the eleven, that needs that information.

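As a sketch of what the globals buy you (the key names are modeled on the chart's global Postgres block but should be verified against the actual documentation):

```yaml
# Configure the external Postgres once; every component that needs it
# reads these globals instead of being configured individually.
global:
  psql:
    host: postgres.internal.example.com   # placeholder hostname
    port: 5432
    password:
      secret: gitlab-postgres             # Kubernetes Secret holding the password
      key: psql-password
postgresql:
  install: false                          # skip the bundled PostgreSQL subchart
```
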
Would that drive you insane, having to copy and paste the same values into the settings all the time? I mean, cool, you could do it, and with Helmfile you start to get partials and things like that, and you can kind of use Kustomize. But if you use Kustomize, then it's more "helm template, pass through kustomize, pass through again," right? I can do it this way: it's usable, it's manageable, I have a secured template set that I know is going to behave the right way, and I can put those templates in place.

On top of that, I can check one place for a broken configuration. If you tell me, "Hey, here's the external Postgres," and you didn't bother to tell me whether or not you want SSL, let alone whether there's a password on it, I'm probably going to tell you: you're using external Postgres; why are you not using a password, right?

This comes into play specifically when it comes to Redis and Postgres. I approached Bitnami at KubeCon in Seattle, then I met with them again in Barcelona, and said: look, it's nice that I can configure it this way, but there's a lot of flak I have to deal with, because if I need to set up PostgreSQL, I have to override this and I have to override that. But I don't want to keep maintaining my own fork. First off, I have an old fork.

Second off, I'd rather we each make use of the community resources if at all possible. So I've worked with them, and you will actually see certain things from the global patterns begin to come up in their charts, because they've recognized that some of these patterns are useful. So hopefully you'll see that as well. I'm not saying everything should be global. I am saying: if I can configure one set of values once and we can all abide by them, it will save everybody a lot of maintenance time.

This stems from the twelve-factor approach. Everybody familiar with containerization, the twelve-factor app and all this stuff, right? I'm getting a bunch of nods and a couple of head tilts, okay. We actually make use of an init container that populates all secret content. Yes, this does mean that when the secrets change, some of our pods have to be fully restarted, instead of allowing them to just be hooked and pick up the new values. Right now, the reason is that one of the things our customers often do is make use of GitLab Runner in privileged mode.

What does privileged do? It gives you the ability to actually query the underlying runtime. This is a problem. Why? Because docker info is something that will get run, and you'll see it run in the API logs all the time, and that output then contains a huge amount of actual secret data if you put secrets in the environment. So, in order to protect our users and prevent bad habits, we don't allow secrets in the environment, ever.

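A minimal sketch of that pattern (all names are illustrative): secrets arrive as mounted files, and an init container assembles the runtime configuration, so nothing sensitive ever appears in env:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webservice-example
spec:
  initContainers:
    - name: configure                 # assembles config from mounted secrets
      image: busybox
      command: ["sh", "-c", "cp -r /init-secrets/. /init-config/"]
      volumeMounts:
        - name: init-secrets
          mountPath: /init-secrets
          readOnly: true
        - name: init-config
          mountPath: /init-config
  containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
        - name: init-config
          mountPath: /etc/app         # the app reads secrets from files here
  volumes:
    - name: init-secrets
      secret:
        secretName: app-secrets       # illustrative Secret name
    - name: init-config
      emptyDir:
        medium: Memory                # keep assembled secrets off disk
```
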
That allows us to be a little agnostic about where that content comes from, so we can use KMS, we can use Vault, we can use AWS; it doesn't really matter. But our application then also doesn't have to change or be knowledgeable about those secret backends; we can just, bam, it works. So, I'm almost done.

I'll let people ask me all kinds of questions for the rest of the day, but I want to say thank you to the maintainers of the other projects that we make heavy use of: that being NGINX ingress, and specifically their reluctant support of TCP connections. We really appreciate that you listened to us when we said we really would like to not break all of our users that are using SSH. And please: I will actually be putting forward a patch to make it not reset all TCP balancing when you need to make a change to the ports.

I want to thank Bitnami for the work that they've done in the major changes to PostgreSQL, Redis and the upcoming MinIO. That's going to help me significantly, because I can finally stop forking these things; I'd rather stay closer to mainstream than anything else. And especially Jetstack: if you aren't familiar with Jetstack and the things that they do within the community, both Kubernetes and Helm, look specifically at their work with cert-manager.

The question was: ever since our public GA, how has the adoption of the Helm charts been by our GitLab users? The adoption is definitely slow, but it's not due to the fact that the Helm charts are new per se. Obviously we have things we need to add; we need to be better at a certain number of things, and we have an open tracker. But we have large customers, anywhere between a thousand and fifty thousand users, that are actually making use of the charts.

The question is: with all of those charts, how painful is it to go through all of that YAML? As a developer, I'm quite familiar with it. The easiest way is honestly to run helm template and then search the output for the known path of where the thing is coming from. But if you output the whole thing and actually try to manually read 6,000-plus lines of YAML, yeah, your eyes are going to hurt after a little bit. You need to know which section you're looking for, a hundred percent.

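For reference, that search-instead-of-read workflow is roughly the following (Helm 2 syntax, matching the Tiller-era Helm discussed above; the names are illustrative):

```shell
# Render everything locally, then search instead of reading 6,000+ lines
helm template --name gitlab -f gitlab.yaml . > rendered.yaml
grep -n "kind: Deployment" rendered.yaml
```
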