Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/143
A
So Marin won't be able to join; he asked me to drive the demo. Unfortunately, I don't think we have anything on the agenda to demo, so I would like to take the opportunity to go through the epic and see whether everything in it absolutely needs to get done before we go to production for project export, and whether there's anything that doesn't. Skarbek, maybe you can talk a little bit about the deployment work you've been doing.
A
I can talk a little bit about the logging and even demonstrate it, though there isn't a whole lot to show. Is there anything you'd like to see, or that anyone here would like to see demoed?
C
The general consensus is that if it's something being emitted by the application, it should have the secrets redacted before it is logged, because obviously not everyone is using it. But I don't know what the other sources are; there's a proxy at the edge, I'm guessing.
A
It's not just HAProxy, it's also nginx, because, you know, we have personal access tokens passed as query strings. The application can't do anything about that, so we have to redact them there, but that's not a problem here. I don't know if there are any other things that we need to worry about.
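The kind of query-string redaction being discussed can be sketched in Python. This is only an illustration, not the proxies' actual redaction rules, and the parameter names below are an assumed list:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that may carry credentials. Illustrative list only;
# the real HAProxy/nginx log-format rules are defined elsewhere.
SENSITIVE_PARAMS = {"private_token", "personal_access_token", "feed_token"}

def redact_query(url: str) -> str:
    """Replace sensitive query-string values with [FILTERED] before logging."""
    parts = urlsplit(url)
    query = [
        (k, "[FILTERED]" if k.lower() in SENSITIVE_PARAMS else v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
    ]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For example, `redact_query("/api/v4/projects?private_token=abc123&page=2")` keeps `page=2` intact but scrubs the token value, which is the behavior the application cannot provide for itself when the token arrives in the URL.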
A
Okay, in that case, I think I'm going to remove this from the epic; I don't see a reason to keep it. On the secrets as a single source of truth between Chef and Kubernetes: I think this is a hard requirement. Skarbek, could you just give a quick high-level update on where this is and whether all these issues need to be here?
B
These issues do need to be here; they're just the stepping stones necessary to make it work. Graham has stepped in and has been providing some work on this, which is super helpful since I'm concentrating on an RCA currently, but we've got the necessary bits inside of our tooling such that it's available to us, and Graham is currently working on a merge request for adding the secrets, in a manual fashion, using helmfile, to our clusters. He's got a merge request that's open and currently in review. Currently I'm working on the automation piece, which unfortunately collides with the RCA that I'm working on; that will automate the necessary stuff to make helmfile usable inside of k-ctl, our current script that we use in CI. I think next week Graham will pick up the work to do the non-secret configurations, which will ensure that we keep configurations between Chef and Kubernetes the same, or at least similar. Then, lastly, it's just the documentation around this entire process.
A
Andrew or Sean, maybe you guys would know: is it a problem for GitLab project export if we don't have database read replicas configured on the application? I assume project export probably doesn't hit the database that hard. The issue here is that on GitLab.com, for database load balancing in the application, we use Consul DNS; we use Consul DNS to get the list of hosts, and there are eight or so replicas. We don't have the Consul agent currently running; it's not yet supported by the chart.
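For illustration, the service-discovery side of this can be sketched with a plain DNS lookup: Consul exposes registered services over DNS, so an ordinary A-record query returns one address per healthy replica. The service name in the usage comment is made up, not the real GitLab.com name:

```python
import socket

def resolve_replicas(service_name: str, port: int = 5432) -> list:
    """Resolve a DNS name to the sorted set of host IPs behind it.

    With Consul DNS, a name like <service>.service.consul resolves to
    every healthy registered instance, which is how the application
    discovers its read replicas without a static host list.
    """
    infos = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Hypothetical usage against a Consul DNS endpoint:
#   resolve_replicas("db-replica.service.consul")
```

Without the Consul agent in the pod (as noted above, not yet supported by the chart), there is nothing local to answer these queries, which is the gap being discussed.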
D
Load balancing is not enabled for Sidekiq, because this would lead to consistency problems, and Sidekiq jobs perform writes anyway; that's the way it is, as I remember. The way it works for web requests is that as soon as the web request performs a write, it sticks to the primary, and it starts using the primary from then on.
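The stickiness behavior described here can be sketched as a toy router. This is a deliberate simplification, assuming a per-session flag set on the first write; the real load balancer also tracks replica replication lag:

```python
import itertools

class ConnectionRouter:
    """Toy model of read/write splitting with primary stickiness.

    Reads round-robin across replicas until the session performs a
    write; after that, the session sticks to the primary so it always
    sees its own writes.
    """
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)
        self.stuck_to_primary = False

    def host_for(self, is_write: bool) -> str:
        if is_write:
            self.stuck_to_primary = True  # first write pins the session
        if self.stuck_to_primary:
            return self.primary
        return next(self._replicas)
```

This also shows why enabling it for Sidekiq would be awkward: jobs that write early pin themselves to the primary anyway, so little read traffic would actually be offloaded.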
C
It does both, to some degree, but I mean specifically the agent, yeah.
A
This is enabled right now on staging; we're kind of validating it, and hopefully everything will go well. On the production database configuration audit: I picked this up, and I think there isn't a whole lot more that we need to do here. I think when Skarbek was looking at this, we decided there wasn't really much to do for prepared statements with load balancing. We have an issue for SSL compression, and then we need to set the statement timeout.
A
So here, this was just a comparison of what we have set in the VM configuration for the database versus what was set in the pod. You can see, like, the statement timeout is set to a thousand in the VM, and in the pods... I thought I had it here as well, but I guess I don't. Anyway, it's not configurable, I think, so this is something that we need to fix.
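A comparison like the one described can be sketched as a simple diff over two settings maps. The values below are placeholders, except for the statement timeout of a thousand mentioned above:

```python
def diff_settings(vm: dict, pod: dict) -> dict:
    """Return the settings whose values differ between VM and pod configs,
    mapped to a (vm_value, pod_value) pair; None means the key is unset."""
    keys = vm.keys() | pod.keys()
    return {
        k: (vm.get(k), pod.get(k))
        for k in sorted(keys)
        if vm.get(k) != pod.get(k)
    }

# Illustrative: statement_timeout is set on the VMs but absent in the pods.
vm_config = {"statement_timeout": "1000", "ssl": "on"}
pod_config = {"ssl": "on"}
```

Running `diff_settings(vm_config, pod_config)` surfaces exactly the drift being audited here: settings present on the VMs that the pod configuration does not expose.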
D
Try it without that in the URL; I wonder if it's because of the routing. Oh no, that wasn't changed recently; I saw it as a merge request, so that wouldn't apply.
B
I can speak to this epic overall. Currently I'm trying to work on adding the capabilities such that release-tools is able to pull all of the necessary versions and send them off to CNG for building, via a commit followed by a tag. We also still need to create the unstable location where the Helm charts will live, and then after that we'll also start sending the same tag to the Helm chart so that it builds something.
B
There's only one issue in there currently, because it was just stood up quickly; more triage work may be needed to determine what else needs to be done with the deployer, which may create more issues outside of this. Okay, we can adjust that later; I don't think we need to handle it now, though. We need to get the work necessary to make the deployer work before we get a chance to touch deploys anyway.
A
I think one of the things we need to decide is whether running one Sidekiq process per pod is going to be right for production, whether we want to keep some pods warm, and we also wanted to fix the start time of the pod to make it a little bit shorter. So we should probably have something to track that, specifically the pod's dependencies.
A
So one thing that's interesting with regard to logging is that the way we run Sidekiq on the VMs is we have separate log files for all the different log types, and we handle them separately. When we run Sidekiq in a pod, all the logs are jumbled together: we have the Sidekiq log, the sidekiq-exporter log, and the production log, all jumbled together, and only some of it is structured.
A
So what we have is the Sidekiq standard out, which is JSON if you configure it for JSON, I think. You have the sidekiq-exporter log, which we don't send to central logging at all right now, but that's also in the output of the pod, and it's not structured. And then we have the production log, and this is any place in Rails where you have just Rails.logger.
C
We do something clever with tail there, where we only ship the structured logs. So there's kind of a related thing which is worth bringing up at this point. Obviously, for Workhorse, for Gitaly, and the Go services, we generally have a single log that gets used for everything: access logging as well as things like "hey, that took a long time to acquire this lock". All the different events are in the same log. For Rails...
C
You can never... you should never, I think, assume a schema; it's basically whatever is emitted, because otherwise it makes changing things really hard, right? Obviously, some things move slower than others, like the access logs, but even with those, the Workhorse ones, we renamed a whole bunch of fields recently and it wasn't very disruptive, and we certainly didn't get a lot of feedback from customers about it.
C
That was what I did last time: I kind of put it in, and no one challenged me on the fact that we were changing the schema, and I kind of wanted to see, and it seemed to be totally fine. We didn't get a lot of feedback, as far as I know, from customers about it, and I think we should definitely say that this is not something that has a schema.
A
Here's an example of an export that I just did. You see this nice JSON log here, and then you have some crap you don't care about. Then you have, like, this "saved project export"; this is coming from the production log. So it tells me, you know, where the file is on disk, and then it says okay, I successfully exported, and then you have more lines. This is for the mail; you know, it says that it's queued, so that's good to know. And then you have this Sidekiq log here, so, like, I would say...
D
Just throw away everything that we don't like, you know: the mail's queued, the "scope order is ignored", whatever that means, and then the "saved project export" is just forced to be in a JSON format, because we're manually putting that in somewhere, so we can do that. I think they're both reasonable options; I'm just suggesting that as an alternative, because these "creating scope" messages, I don't care about them. If we need to know about them, we can load the app locally.
D
It's vanishingly unlikely that they're going to be different in production from someone's local environment, and I can't think of an issue that would be caused by these. So yeah, I think either wrap it all in JSON, like Andrew said, or replace it so that we just ignore non-JSON, and we make it so that anything that's manually logged from the app has to be in JSON format, yeah.
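Both options mentioned here, wrapping stray lines into JSON or dropping non-JSON outright, can be sketched as a small stdout filter. This is only an illustration of the two behaviors, not the tooling actually in use:

```python
import json

def normalize_log_line(line: str, wrap: bool = True):
    """Pass structured (JSON) log lines through untouched.

    Non-JSON lines are either wrapped into a JSON envelope (wrap=True,
    the "wrap it all in JSON" option) or dropped entirely (wrap=False,
    the "ignore non-JSON" option).
    """
    line = line.rstrip("\n")
    try:
        json.loads(line)
        return line  # already structured, emit as-is
    except ValueError:
        if wrap:
            return json.dumps({"message": line, "unstructured": True})
        return None  # drop plain-text noise like "Creating scope ..."
```

With `wrap=False`, lines like the "creating scope" messages simply disappear from the shipped stream; with `wrap=True` they survive, but tagged so a downstream query can exclude them.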
C
I mean, I wouldn't be very surprised if there were actually a little binary that does exactly this, and so we should look out for that; you know, a little Rust binary or something like that. But then the second thing is: if we did that, that would be part of Helm, right? Well, of the charts, yeah.
C
What about, like, rendering views, you know, the ones that are rendered?
A
It's this, and then what we actually have for the log name is like "rails.api", you know, whatever, but this search captures all of that, yeah. So it goes to the topic, and then it goes over to Elasticsearch, in theory. It seems to be working for preprod; I just enabled it for staging, and it's not working yet, because I had to recreate all of the index mappings, because our time field changed from "time" to "timestamp"; that's what Stackdriver uses, I know.
A
You have to use the new Kibana, which is like... I don't know what these icons are, so I always have to do this extra click. I just really hate the new Kibana.
A
Nothing is showing up yet, so I'll have to see whether these logs are definitely going through. The way I usually troubleshoot this is that first I stop the pubsubbeat, and then I query the topic to see if the logs are actually going there. If it looks like they're going there, then I can run the beat with debug in the foreground, with extra logging, to kind of see it pick the messages off of the topic and feed them into Elasticsearch. So I'll have to troubleshoot this a bit more.