From YouTube: 2020-09-03 GitLab.com k8s migration APAC
B: Mapping services to web, api, and git: this is going to be problematic for us, and we still don't really have a good plan for how to deal with it. It's not going to block the git-https migration, since that uses a different pod, but when we move to api and web it's going to be a blocker.
B: It has only priority three, I guess because it's not an immediate blocker, but it's going to be a blocker for us very soon.
B: I think what may throw a wrench into this is that, you know, we have another blocker, which is the cross-AZ traffic, so that could extend things for git-https anyway. But yeah, I think maybe we need to circle around and talk about this some more with the distribution team, to see what the plan is.
B: Proxy request buffering: I guess you could say this is mitigated, because I'm just disabling it globally on the webservice pod for now, since we're only using git, but we have to decide whether we want to enable it for web. There's been some discussion; for gitlab.com we already have Cloudflare in front, so maybe we can just leave this disabled globally.
B: I don't know; I think it remains to be seen. Currently proxy request buffering is disabled for certain paths, for git-https specifically, and we're discussing whether we could just disable it for all paths.
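(For anyone following along, this is roughly the knob being discussed; a minimal sketch using the standard ingress-nginx annotation, with the ingress name and namespace assumed rather than taken from the meeting:)

    # Turn off request buffering for everything behind this ingress.
    # ingress-nginx applies this annotation to the whole ingress, not per path.
    kubectl annotate ingress gitlab-webservice \
      --namespace gitlab \
      nginx.ingress.kubernetes.io/proxy-request-buffering=off \
      --overwrite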
A: All right, in that case let's leave it be. We have other things to work on for now.
B: Yeah, for sure. The next one we've talked about at length already: this is the cross-AZ network traffic. I still think this really hasn't been fleshed out completely yet, so we need to spend some more time to see what the plan is.
B: Unencrypted network connection between nginx and workhorse; this came out of the readiness review. I don't know; I've talked to a bunch of people about this, and it doesn't feel like it's any worse than having a unix socket. I think what could be bad here is for compliance checkboxes, where you potentially have unencrypted traffic traveling between nodes in our current configuration, and obviously you need to have root. And, I don't know, maybe Graham or Craig, maybe you guys know if you have an...
D: The answer may have changed over the years as well, because I think the pcap capabilities, or whatever the syscall is that you need to actually tap traffic, vary widely with what version of docker and other bits and pieces you're using. My thought with gke, and what they have, is that yes, you would have the ability to do that within your container only, but I would have to go and actually test, and I could very well be wrong.
B: Okay, just curious. I think even with the unix socket, if you have root access it's pretty much game over anyway, right? So I don't know if it's that big of an issue, except possibly for compliance. And I think it's exacerbated by the fact that previously we had nginx and workhorse on the same vm; now we have nginx and workhorse on potentially two different nodes, with traffic traveling across availability zones. Even so, it's a bit different.
A: Yeah, but another thing, like two things, Jarv. First of all, does any of you know whether we added TLS to Gitaly traffic? We've...
A: So if that is already not encrypted, then I don't think we necessarily need to discuss this right now, because that is definitely a bigger issue than just this. And, Jarv, for compliance: compliance only really cares if there is traffic across the network that is unencrypted; who has access to what is important for them. But if you already have root access based on certain rules, then, right, you have that privilege anyway, so you can do quite a lot of damage anyway.
A: Just to be clear, because this is obviously being recorded: I'm not trying to brush this off as if it's not a real problem. It is a problem; I'm just managing expectations a bit, because I'm not really sure whether this is something we need to resolve immediately.
B: Okay, that sounds good. I think that's pretty much it for blockers. We can move forward to the demo.
B: So the demo is going to be to enable git-https traffic in canary. We already have a change issue, and it's been approved by the on-call, so we're good to execute. One thing I did notice earlier today was that we're missing some canary logs; the fix should be applying now, and we can verify that first.
D: I know, Craig, you've seen this, but just so everyone else is aware: over the weekend, on call, I noticed some strange behaviors with fluentd, elasticsearch, and some of the logging settings, and I've got a corrective action to tweak some of our settings relating to what happens when logs can't be delivered.
D: I'm not saying this is necessarily related, but we do want to make sure that we're not just going to fill up all of fluentd's buffer with failed error logs.
B: So, if you don't know how this is set up: we have fluentd...
B: ...set up so that there's fluentd running on every single node. We can take a look at a webservice pod and at one of the nodes that it's on. So what I usually do is find the fluentd that's running on that same node, for example, so I'll just do something like...
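(A sketch of the lookup being described here; the namespaces and label selectors are assumptions about this cluster's layout, not taken from the demo:)

    # Find the node a webservice pod is scheduled on...
    NODE=$(kubectl --namespace gitlab get pods -l app=webservice \
      -o jsonpath='{.items[0].spec.nodeName}')
    # ...then the fluentd pod (from the per-node DaemonSet) on that node.
    kubectl --namespace logging get pods -l app=fluentd \
      --field-selector spec.nodeName="$NODE"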
B: I mean, there's really just not a lot going on right now, because there's no traffic being taken, so I think we're just seeing these GET requests for metrics from the web exporter log, which are not structured. So, yeah, nothing really interesting here, but at least they're coming through now. What I don't understand is why we're not seeing them for workhorse, because I would expect to see the logs for workhorse, of course...
B: Okay, well, what...

B: We'll need to do something for git-https; for that we'll do something like a git upload.
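(One plausible way to exercise git-https traffic for a demo like this, as a sketch; the repo URL and loop count are illustrative, and the gitlab_canary cookie is the mechanism gitlab.com uses to route a request to canary:)

    # Shallow-clone in a loop, pinning each request to canary via the cookie.
    for i in $(seq 1 20); do
      git -c http.extraHeader='Cookie: gitlab_canary=true' \
        clone --depth 1 https://gitlab.com/gitlab-org/gitlab.git "/tmp/clone-$i"
      rm -rf "/tmp/clone-$i"
    done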
B: So this is good: we got 26 HTTP 200 responses. This is the latency distribution. Let's run this for a bit.
B: For the correlation id, well, I think I'm going to dig into this a bit more before we start the change issue, and I don't know whether troubleshooting this live is a good use of everyone's time.
D: Yeah. So, for those of you who haven't read through the issues or anything yet: earlier today I put a proposal, or a point of discussion, or something, to paper. The issue itself is quite long, and I don't want to tie up too much of people's time here, as we only have 10 minutes left, so I'll try and put together a very quick and brief summary. But basically, you know, I...
D: I talk a lot with people in the community, I'm looking at what we're doing, and I'm always thinking of ways we can improve what we do. I work very closely with the configure team as part of the gitlab product as well; I try and provide a lot of feedback to them on how we do things, how they can make it better to help us, and things like that.
D: One of the things that kind of jumped out at me across all of this is the way we use... I'm going to talk about helm and helmfile, but really it doesn't matter about the deployment tool per se. One of the interesting things in the way we do use helm and helmfile, how we deploy our kubernetes stuff at the moment, is that we use helm and helmfile to do three actually distinct phases, and the reason we do...
D: ...that is because that's the way everyone typically starts doing things with helm and kubernetes: you adopt helm using these three phases. Those are: helm gives us a templating engine, so it allows us to, you know, take upstream helm charts and a whole bunch of YAML manifests; it allows us to inject some values per environment; it allows us to do things like grab values out of chef. It allows us to do all of this stuff, and the output is a set of YAML manifests, the final result, that we're actually interested in.
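(A minimal sketch of that first, templating-only phase; the environment name here is an assumption:)

    # Render the charts with the per-environment values injected,
    # without touching the cluster at all.
    helmfile --environment gprd template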
D: The second part that helm and helmfile give us is then that kind of diff or apply: I want to diff this against what's actually currently there and see what's changed, or I actually now want to apply this to my cluster. And then the final part that we use helm and helmfile for is that validation: how do we get something back that says...
D: ...yes, what you did was successful, or no, what you did failed, and, you know, maybe even roll that thing back for me automatically. One of the things I commonly see is that as people grow with doing deployments on kubernetes, as their tooling grows, as their complexity grows, having one tool to do all those three stages might not be enough, and I'm certainly starting to see that now a little bit, I think, with where we're at in our deployment journey.
D: There's a talk about GitOps in there, and, like, the kubernetes-trademark version of GitOps, I say, because they have a specific way of how they talk about it, which I thought was fairly unimportant, but I've come to realize some of the value in the way they try to describe how to do things. What I'm trying to push for in the proposal is: at the moment we just go helmfile diff, helmfile apply. We just use helm for templating; we use helmfile to diff.
D: We use helmfile to apply things. We use helmfile to roll it back. It's all just in one kind of tool, and we bump against the pros and cons of that: we hit problems, and we're basically locked into what that thing can offer us. But if we think about it in an abstract sense and start to pull this apart, I feel there's some opportunity for us to innovate in each particular area, and it leaves us open to actually changing tools or changing processes at each individual level.
D: So, for an example, and I won't get too into the nitty-gritty details, sorry, my brain's mush at the moment, but one of the key factors is: every kubernetes deployment tool is able to be used in a mode where it will output the final manifests. So you can run helmfile template; you could run, I don't know, kapp, or another one, kustomize; all of the tools have a way to say...
D: ...I still want to use this tool to template, but I just want the output to be a final set of real YAML kubernetes manifests, into a directory or a folder structure or however you want to set it up. So if we actually look at the templating step, we can say that we don't really care too much about what tool we use at that step; we can use helmfile or whatever, but what we really want...
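(A sketch of capturing that output: helmfile's template command can write the rendered YAML to disk, and the directory name here is arbitrary:)

    # Same templating step, but keep the final manifests as plain files
    # that any other diff/apply process can consume.
    helmfile --environment gprd template --output-dir ./rendered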
D: ...is for the output of that to be the actual raw manifests. Then, if we have a process of passing those raw manifests into the input of some different repo, tool, or process, we can actually have a different way of using diff and apply, and actually control how we apply those manifests to a cluster. In the issue itself I've talked a bit about the GitOps controllers, I've talked a bit about CI/CD, and I've talked a bit about how kubectl itself, and the kube...
D: ...apiserver, have now become one of, if not the best... a first-class citizen for actually doing diffs and applies. They've made a lot of effort to make it a really solid engine now for actually applying and deploying kubernetes manifests. And then, finally, that third step, the validation step, is something I think we can explore and start to explode out as well, because we've talked about this a little bit before: okay, I've applied my manifests, and whatever has changed...
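(The middle phase with no extra tooling, along the lines being suggested: plain kubectl against the rendered manifests. The path carries over from the sketch above:)

    kubectl diff -R -f ./rendered    # what would change in the cluster?
    kubectl apply -R -f ./rendered   # actually apply it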
D: So let's say my deployment has changed. What is the extra validation I can do to make sure that my deployment is successful? Am I watching metrics? Am I watching logs? Is there a test suite I could actually run, like curl some commands or something? So, that's very rambly and I do apologize, but that's kind of what I'm trying to look at: by starting with pulling that GitOps part in the middle out, we move away from helmfile and helm doing everything to them just doing templating.
D: You know, there's potential for deviations to happen, although it hasn't really hurt us so far, whereas if you're using a kind of true GitOps approach, where your git repo contains the rendered manifests right there, then it becomes very easy for anyone to look at those to see what is in the cluster. You can start to do static code analysis against those: audit all my YAML files, see if anyone's running pods as root, make sure people are using good service accounts.
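(Trivial examples of that kind of audit; once the rendered YAML is checked in, even grep works, though a proper policy tool would do this more robustly:)

    # Anything asking to run as root?
    grep -rn 'runAsUser: 0' ./rendered
    # Which manifests never set a service account?
    grep -rL 'serviceAccountName' ./rendered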
D: Things like that, which is a lot easier to do than currently trying to integrate with the tools we have. And then, finally, that validation step: kubectl has got some first-class operators, sorry, first-class command-line options now, to actually do in-depth monitoring of resources, like: watch this rollout; or, for every resource that has a specific label, can you tell me when that resource is in a ready state? And from there also we can start to explore...
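(The first-class kubectl options being referred to; the resource names and timeouts are placeholders:)

    # Watch a single rollout to completion (exits non-zero on failure).
    kubectl rollout status deployment/webservice --timeout=5m
    # Or, for every resource carrying a given label, wait for readiness.
    kubectl wait pods -l app=webservice --for=condition=Ready --timeout=5m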
D: ...okay, when you actually apply the manifests, now please run this to make sure something is useful. So I think not only does this have the potential to help us straight away, but, as a small side effect, this kind of experience will allow us to feed back into the product, because this is the area, in Auto DevOps and all of the kubernetes integration for GitLab the product, that they're struggling with; they're trying to figure it out.
A: Fingers crossed, right? Yeah, sorry, yeah: like, run helm apply and fingers crossed, basically.
A: ...it out yourself. I'll admit I didn't read the whole thing; you popped it up during your work time and I didn't have the time to read it, but I find it an interesting idea.
A
Will
also
allow
the
official
helm
charts
to
figure
out
one
of
the
bigger
problems
they
have,
and
that
is
the
whole
life
cycle,
right
management,
which
is
really
difficult,
and
I
think
you'll
be
surprised
to
hear
that
we
we
have
been
thinking
about-
or
rather
I
have
been
thinking
about.
How
are
we
going
to
even
get
there
because
right
now,
our
primary
concern
is
migration.
Right,
like
just
do,
lift
and
shift,
but
we
have
another
requirement,
which
is
we
need
to
replicate
multiple
gitlab.coms.
A: I think this is the path we'll go, and I think you're giving us a good set of steps that we could take to go in that direction, right: you isolate only templating first, and then leave the rest for someone else to figure out. So I...
A: ...probably a future proposal, something that we should definitely continue talking about. I wouldn't focus too much on the GitOps aspect and all of that; I would try to keep an open mind there, and maybe bring in those requirements of us running multiple gitlab.coms. This is really important because, whatever small decision we make in our world, we need to be able to replicate it to others, and that will also help self-managed customers, right, if we manage to package it up for them as well.
A
It
would
be
very
useful
and
we
don't
want
to
probably
make
too
many
rapid
decisions
at
the
moment
of
what
tool
do
we
use
for
that
sure.
D: And I think, yeah, for the quote-unquote GitOps part, all I'm really suggesting there is actually just using kubectl. That part is just kubectl apply or kubectl diff, no other tool, which I think is good, and it's kind of simplicity as well: we're not trying to pick some other tool or anything too crazy, we just stick with the basic kubectl part. And you're right...
D
The
the
large-scale
working
group
now
means
I
mean,
if
we're
going
down
the
path
of
extra
clusters
for
pride,
we're
going
to
have
three
plot
cross
clusters
for
prod
three
for
stage
three
for
pre.
I
assume,
then,
if
we
ever
decide
to
go
multi
region
for
gitlab.com,
that's
three
per
region,
so
it
comes
nine
for
one
region.
You
know,
then,
if
we
do
multiple
gitlab.com.eu,
we
get
like
the
amount
of
environments
we
have
just
starts
to
multiply
exponentially
as
well,
which
is
another
drive
for
this.
D
Obviously,
everyone
here
is
focused
on
the
gitlab.com,
which
is
the
most
important
part,
but
we're
starting
to
get
more
little
bits
and
pieces
from
the
other
teams
as
well,
who
are
deploying
with
kubernetes
as
well.
So
I
think
there's
benefits
to
some
of
this,
like
the
validation
stuff,
I
think,
is
really
interesting,
not
just
for
gitlab.com.
But
how
do
we
tell
anyone,
hey
look.
I
want
to
run
elasticsearch.
I
want
to
run
kabat.
You
know
all
of
our
team's
tools
on
top
of
kubernetes,
giving
them
a
framework
to
say.
D
Not
only
do
you
just
deploy
it
now,
you
can
think
about
how
you
want
to
test
that
and
drive
that
level
of
maturity
up,
and
certainly
even
when
we're
migrating
our
pieces
for
gitlab.com.
You
know
how
do
we
test
every
time
we
do
redeploy
sidekick
that
it's
still
working
like
how
do
we
make
sure?
How
do
we
make
sure
workhorse?
You
know
when
we're
doing
this,
this
migration
of
the
web
tier?
How
can
we
define
a
small
set
of
tests?
It's
not
just
the
deployment
has
succeeded,
but
we're
also
confident
that
nothing
is
wrong.
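(A minimal example of a post-deploy check in that spirit: not just "the rollout finished" but an actual request against the service. The hostname is illustrative; /-/readiness is GitLab's readiness probe path:)

    kubectl rollout status deployment/webservice --timeout=5m \
      && curl --fail --silent https://gitlab.example.com/-/readiness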
A: So, in order for this not to be just dead letters on paper, I'd encourage you, if you get a chance, to think about... like, again, I didn't read everything you wrote, so maybe you already wrote this down, but I would encourage you to find one of the smaller steps that we could do right now, to not derail the migration itself, but get us into a direction where we are going to have clearer separation between templating and applying.
A: You know, whatever you can do to isolate those things, and do some verification on those steps as well, right? That could be like... we don't want a big bang migration again, right? We want smaller...
D: ...moves. No, I agree, and I think even when we did the migration to helmfile, we kind of did gitlab.com first, because I think we got the most value out of that. But I agree, I think this time I would try and see if I could demo, or come up with an example for, just one of the many... we've got a few very small, simple apps now, where I could just say: hey, this is what it looked like before...
D
This
is
what
it
looks
like
now,
and
that
could
be
a
good
second
step.
I
I
haven't
figured
out
all
the
problems
are
everything
and
I
would
want
to
think
about
it
a
little
bit
more
but,
as
I
just
said,
it
was
just
an
interesting
idea
that
you
know
I've
been
thinking
about
and.
D: ...it gets us away from worrying about helm and helm releases; we don't have to care what cluster it is, whether it uses helm3 or helm2. If it's all just kubernetes manifests, we draw a nice big line in the sand around what we're using everything for. And for engineers on call, I think it'll be easier for them: while engineers on call probably aren't too worried about the templating part, they're more just interested in "I'm on call: what could be broken, what configuration is set where?" So I think it has potential for engineers on call who aren't as involved in the kubernetes work; it'd be easy for them to get into it and understand, just for their on-call, what they need to see and know.
A: Yeah, I'm excited about this; we should talk about it a bit and then see what we can do, and I'm looking forward to reading it all.
B: Yeah, I read over the issue this morning, and I really liked it as well. I put a couple of concerns in the issue, like having two different pipelines and being able to track who changed what, especially when the pipeline that runs kubectl is generated from a different pipeline.
B: In that way it might be a step backwards a bit, because right now it's just one single pipeline and it's easy to determine who made the changes. But I like having the diffs, like having the changes exposed, or checked into git, for the raw manifests; this is really nice.
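(A sketch of that rendered-manifests-in-git workflow; the environment and paths are assumed, as before:)

    # Re-render, stage the result, and let the merge request show the diff.
    helmfile --environment gprd template --output-dir ./rendered
    git add ./rendered
    git diff --staged --stat   # the report reviewers would see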
D: I think the whole preserving-that-workflow point... the workflow we have right now is really nice: I put an MR up, I get a nice report of the diffs and everything, and I would not want to go backwards from that. From talking to people about how they solve it, there's a lot of just custom code; people write bots, automation, what have you. I'm not...
D
We
need
to
go.
I
would
like
to
start
jumping
into
that,
especially
because
we're
not
at
the
scale
of
some
of
these
other
companies.
To
that
part,
I
would
need
to
understand
what
gitlab
ci
can
give
us
and
and
how
that
would
work,
and
I
think,
to
be
honest,
a
bulk
of
the
work
around
this
probably
wouldn't
be
so
much
changing
things
like
helm,
file
or
whatever,
because
I
think
that
they
basically
give
us
what
we
want
right
now.
A: So, Jarv, I see you've written down in your comments that this is working outside of the official chart. I am willing to find a way to support this working outside of the official charts, as long as we also have a path for how we are going to bring this back in. And I think it shouldn't be a surprise to anyone that the helm charts need to change significantly, because... just look at our application.
A
Complexity
like
it's
just
unusable
in
some
cases
right,
it's
really
difficult
to
use
it,
so
everyone
is
prepared
for
it.
As
long
as
we
set
the
trend,
we
move
out
of
the
official
charts
and
we
also
describe
about
how
are
we
going
to
bring
this
thing
back
into
something
that
people
could
use?
It's
fine.
We
can
diverge
and
one
way
of
doing
that
is
always
thinking
about.
How
are
we
going
to
do
the
multi-large
like
multiple
gitlab.coms,
because
that
actually
puts
us
on
the
path
of
self-managed
customers?
Basically
right.
A
So
as
long
as
that
is
the
case,
we
can
diverge.
A: So we can't... we can't say that that is going to be the case, because we don't know. Well, for CN most likely it's not going to happen, but think about federal instances as well. We can't... as well, maybe we...
A: What is more important is that the operational experience, the running experience that we have at gitlab.com, can be relatively simply replicated across those big instances, right? Yeah, okay, so that's where we need to think more, and that's why I'm saying let's not get too locked down in what gitlab.com can do, because that's where we don't want to be too prescriptive. What we want to do is, like, the general concept of, you know...
B: Well, I guess, I think, by doing this we'll be tempted to kind of go off on our own path, without necessarily a lot of motivation to get things back into the chart, especially for features that are only really needed by us. Of course there are some very large deployments out there, but I think it's just something we need to be careful about, because right now we are fairly limited.
B: We don't have any manifests that are applied outside of the chart, and everything that we're doing has to feed back into the chart, and we're talking about things that really only apply to us in some ways, right? And for these sorts of things, I think, if we had this setup, then we would probably just work around the charts for expediency.
D: Yeah, we could still use the charts as the template; helmfile with the upstream gitlab chart can still be the first input to the process. I think we can still push everything up to the chart as much as possible, if we want. Okay, I guess that question is up to us.
A: So let me just try to put this a tiny bit differently, for both of you actually. The whole point of dogfooding the helm charts, the whole point of us not drifting from that, is not because we just don't want to; it is so that we don't have to repeat the same thing over and over again, and also so that our customers don't have to repeat the same thing over and over again. Because we already experienced it, why not share it with them and charge for it, right?
A
So
the
fact
that
gitlab.com
might
have
some
special
requirements.
We
have
special
requirement
because
we
are
a
very
large
instance
of
gitlab
and
even
if
we
talk
about
orders
of
magnitude
less
of
a
size
for
customers,
if
they
can
use
this
and
leverage
the
same
tool
for
us
from
us,
that
is
also
already
valuable.
A
Maybe
it's
not
valuable
for
people
with
a
single
node
cluster
installation
that
are
running
this
on
a
raspberry,
pi
cluster
sure
I
I
think,
you'll
see
that
in
there
we
don't
have
to
necessarily
justify
that
diff,
because
we
are
running
a
different
thing,
but
as
long
as
we
say
that
any
large
instance
of
gitlab
can
use
this
same
approach
and
switch
configuration
only
right
like
ip
addresses
and
so
on,
to
use
this-
and
this
is
how
you
do
it.
There
is
already
value
in
this
and
jarvis
you
you're
already
seeing
some
problems.
A
We
are
running
into
with
the
charts
right
like
the,
for
example,
the
the
the
labeling
right,
the
api
split.
All
of
that
splits
that
the
chart
doesn't
necessarily
need
what
we
need
as
gitlab.com.
That's
already
a
good
enough
reason
for
me
to
say
you
know
what
we
really
need.
It
charts
really
doesn't,
let's
see
if
we
need
to
fork
if
we
need
to
do
something
outside
and
then,
if
we
do
something
outside
it
is
up
to
all
of
us
to
discuss.
Why
did
we
do
this
outside
and
how
do
we
bring
this
back?
A
So
the
point
here
is
as
long
as
there
is
a
real
value
that
this
company
and
our
work
can
actually
contribute
to
the
company
across
larger
instances,
larger
customers
and
so
on.
We
don't
have
to
necessarily
immediately
care
about
every
single
installation.
B
Yeah,
no,
it
makes
sense
to
me.
I
think
I
think
this
is.
I
mean
I
think
we
could
there's
nothing
preventing
us
from
diverging
from
the
chart.
Now,
even
you
know,
we
can
patch
things
in
it's
just
that
we
haven't
yet
and
yeah.
So
I
think
I
think
this
is
a
good
approach
for
us
and
it'd
be
interesting
to
do
a
pfc.
D: I think, once we see the real pipeline, right, and what each step is, how it looks, and what the feedback is from the user experience, then we can understand whether this is something we want to do, or if this is too difficult. And, yeah, I don't have the full answer to everything yet, so I would like to try and kind of...
A: Graham, I would even be more conservative than that, but let me put it this way: I know some things that are on your plate, and I'm not too excited to see what is on...
A
But
then,
if
you
do
this,
like
you
said
on
a
non-gitlab.com
specific
item
right,
like
I
don't
know
elastic
or
whatever
we
end
up
doing,
I
don't
I
don't
really
know
like
that-
might
be
a
good
way
of
doing
it,
and
then
you
can
do
it
over
a
lower
period
of
time.
So
the
priority
is
like
it's
it's
it's
funny
to
say
this,
but
like
it's
not
like
it's
a
low
priority
or
high
priority,
it's
just
on
a
completely
different
path.
D
Parallel
yeah,
no,
I
agree
I
would
want
if,
if
we,
if
we
think
this
works,
I
would
want
to
do
the
git
lab
chart
last,
because
I
think
that
we
will
find
a
lot
of
questions
we'll
have
along
the
way
that
we
can
answer
them
safely
with
like
elastic
or
prometheus,
all
the
other
little
bits
and
pieces.
We
have
around
first
before
we
finally
tackle
the
biggest
part.
So
I
I
wouldn't
yeah-
I
wouldn't
put
this
as
on
any
part
of
the
critical
path
for
migration
or
anything
like
that.
D
As
it's
it's
more
interest,
it's
probably
more
interesting
to
those
people
who
are
working
on
the
smaller
pieces
because
they're
the
people
who
work
on
the
smaller
pieces.
You
know
they
spend
less
time
with
the
kubernetes
staff,
so
you
know
and
yeah,
and
they
have
trickier
things
like.
We've
got
very
good
monitoring
around
gitlab.com.
We
don't
have
quite
as
good
monitoring
around
some
of
our
little
applications
that
we
have.
A
Awesome
cool
well
green
thanks
for
sharing
jarv
thanks
for
demoing.
I
hope
you
get
to
find
what's
what's
happening
and
why
the
logs
are
not
showing
up.
Do.
Let
us
know
once.
A: I'm curious as well to see what's happening there. And, yeah, thanks.