Description
Part of https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/143
A
B
We could create a place where, hopefully, we don't bleed over stuff that we don't want consumers to consume for their own charts. I started on that work late yesterday, so I want to continue it today, figure out what needs to be done, and hopefully have a plan of action as a start. Then next week we can come in nice and fresh and get it going.
C
I think we should time-box the discussion, because we just need to get going with this, and to me it seems like we need to have something that we're going to be able to do in a few weeks, not a few months. Finally, I like Skarbek's pipeline approach the best so far, but maybe by Monday we can at least give each other votes on what to do and just go forward with it.
D
C
There are just some database defaults — I mean, this is fine; we're doing this through issues. We just need to change some of the Helm defaults and make some things configurable that aren't configurable right now, but it doesn't block us from at least testing on staging. So it's not that big of a deal for now.
C
If we go to the environment configuration file, we see that Sidekiq is enabled here and we're using this image tag. It's a bit old, but it checks out; it's fine. I did a quick validation today that we don't need to build any new images. Images are kind of a pain right now, so I'm just going to use the same image, which was on the deploy branch from last week.
C
The rest of the configuration is in the common values file here; this is the common config for all environments. These are things like the secret for the Postgres database and the secret for Redis — I'm trying to remember what else there is. Of course we have the queue configuration for Sidekiq, like the cron jobs, all of that. And you can see here that Sidekiq is not enabled by default, metrics are enabled here, and then here are the pod settings.
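As a rough illustration, the common values being described might look like the sketch below. This is a hypothetical shape using GitLab-chart-style keys, not the actual file; the key names and secret names are assumptions.

```yaml
# Hypothetical sketch of the common values file described above.
# Key and secret names are assumptions, not copied from the real file.
global:
  psql:
    password:
      secret: gitlab-postgres-password   # secret for the Postgres database
  redis:
    password:
      secret: gitlab-redis-password      # secret for Redis
gitlab:
  sidekiq:
    enabled: false        # Sidekiq is not enabled by default
    metrics:
      enabled: true       # metrics are enabled
    pods:
      - name: cronjobs    # queue configuration, e.g. for cron jobs
        queues: cronjob
```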
C
C
To do that — I'm on the console server, which is the server you need to be on in order to do kubectl — I get the pods in the staging cluster. You can see we have mailroom and we have the registry pods; we don't have Sidekiq. I can do kubectl get deployments, and we do have the deployment.
C
It's just that it's not ready. Basically, after we applied the configuration, staging created the deployment, and the deployment created the pod. For now, the way that we're not keeping it on all the time is by scaling the deployment down to 0. So in order to bring this pod back to life, all we need to do is scale it.
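The scale-to-zero workflow just described can be sketched as below. The namespace and deployment names are assumptions (not taken from the session), and the kubectl calls are shown as comments since they need cluster access:

```shell
# Assumed names; the real namespace/deployment will differ.
NS="gitlab"
DEPLOY="gitlab-sidekiq"

# Inspect what's running (run from the console server):
#   kubectl -n "$NS" get pods
#   kubectl -n "$NS" get deployments
#
# Bring the Sidekiq pod back to life:
#   kubectl -n "$NS" scale deployment "$DEPLOY" --replicas=1
#
# Park it again by scaling back down to zero:
#   kubectl -n "$NS" scale deployment "$DEPLOY" --replicas=0

echo "scale target: $NS/$DEPLOY"
```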
C
C
So it's in the init phase — the initialization for the Sidekiq pod takes a long time; the init containers take a while. The init containers are actually doing a full check — for example, connecting to the database — so it's bringing up Rails, and that takes a long time. I think there was a discussion about making that a little bit more efficient.
B
As far as I know, it's loading the entire Rails application and checking the schema version to make sure it's in a legit state that it's expecting to be in, so I don't think Sidekiq should rely on this type of thing. I think that should be reserved for some other container that starts up before the rest of the application does. But yes, there's an issue somewhere; I'll try to link it in here just so that we have it logged.
C
Yeah, I guess so — we'd run migrations first, and then pods wouldn't be able to come up because the schema version would be different. That would be bad. I don't know exactly what the comparison is for the schema version — I don't know — but if the number is higher, yeah.
C
What we're thinking currently is that the way we're going to get these logs into Elasticsearch is that we will use log sinks. That's what we're doing now: we have all of the logs going to a single index — like a GKE index — on Elasticsearch, and this is not going to work long term, because we're just going to be adding more and more logs. So my current thought is that, based on the log name, we will just forward these logs to specific Pub/Sub topics, which will then in turn send them to specific Elasticsearch indexes.
C
So the way that I've restructured the sinks — and I have a Terraform MR for this; it's not yet merged and still needs to be reviewed, but I don't think this will change if it works okay — is like before: you provide Terraform just an array of log filenames. In this case, we just provided an array of nginx and sidekiq, and then Terraform makes these three sinks. The first one is the default sink; if we view the filter...
C
It's basically creating a filter for all logs that aren't in the list, and this will be the catch-all for logs. So by default, all of our GKE logs will go to the existing GKE Pub/Sub topic and on to the GKE index on Elasticsearch, and then we have individual application logs — for example, for Sidekiq.
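The sink filters being described can be sketched as simple string generation. This assumes the filters key off the container name of `k8s_container` log entries; the exact field the Terraform MR matches on (log name vs. container label) is an assumption here:

```shell
# Components that get their own sink; mirrors the nginx/sidekiq example above.
COMPONENTS="nginx sidekiq"

# Catch-all (default) sink: GKE container logs NOT belonging to any listed
# component keep flowing to the existing gke Pub/Sub topic and index.
DEFAULT_FILTER='resource.type="k8s_container"'
for c in $COMPONENTS; do
  DEFAULT_FILTER="$DEFAULT_FILTER AND NOT resource.labels.container_name=\"$c\""
done
echo "default sink filter: $DEFAULT_FILTER"

# Per-component sinks: one filter each, routing to a dedicated Pub/Sub topic
# (and from there to a dedicated Elasticsearch index).
for c in $COMPONENTS; do
  echo "$c sink filter: resource.type=\"k8s_container\" AND resource.labels.container_name=\"$c\""
done
```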
C
If I look at this filter, it's just looking for any log that looks like a GKE log message where the log name matches sidekiq. The idea here is that once we have structured logging, we'll have both of these — Sidekiq logs from the VMs and Sidekiq logs from GKE — going into the same index. I think that's a good thing to do. The other way we could do this is to create unique indexes for all of the different components, but I think it's probably better to put them both in the same one.
C
I'm sorry — I was thinking of my logging stuff, which I did on pre-prod, but you're right. Let's do the same.
C
So, the same as I described before on pre-prod, which I did export earlier — that's why we had a log message — but now that we've exported it again on staging, we see it here, and yeah, basically it's exactly the same. We don't have structured logs. I don't have these log sinks configured yet on staging, because I'm testing that out on pre-prod. But yeah — is there anything else anyone would like to see?
D
C
I think we can just, you know, hammer the API with the export requests. Maybe we can just do this — I mean, there's no reason why we can't just do this on staging, and do this on pre-prod as well, just to get started. And we have an issue open for load testing, so maybe this is something that we can start on and do a demo, maybe.
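A crude version of that load test could be a loop firing project-export requests at the API. The base URL, token, and project IDs below are placeholders; the `POST /projects/:id/export` endpoint itself is a real GitLab API call:

```shell
# Placeholders: point at staging and use a real token before running.
BASE="https://staging.example.com/api/v4"
TOKEN="REDACTED"

COUNT=0
for id in 1 2 3 4 5; do
  COUNT=$((COUNT + 1))
  echo "POST $BASE/projects/$id/export"
  # Fire the real request in the background (commented out in this sketch):
  #   curl -s -X POST -H "PRIVATE-TOKEN: $TOKEN" "$BASE/projects/$id/export" &
done
# wait   # then poll GET /projects/:id/export for export_status
echo "queued $COUNT export requests"
```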
A
There is something to be said about testing some of the edge cases — like, for example, we scale the deployment up manually and fire off and see what happens, or we just hit all the limits that we set and see how the deployment scales itself up. A couple of things like that might be interesting to poke at, and that would be good to do for next week, if possible. Nothing super fancy — just, yeah.
A
They will pop up in the next month, after the next KubeCon or something, you know. So should we consider maybe roping in Rain and seeing whether he can — I don't know how; I'll talk with his manager — seeing whether he can do a POC, a POC that will replace one item from the list that we select here, and see how much time it would take? And say that we need a POC in, like, a week. We can offer our help, like, sit down and...
A
...prototype rapidly. So if that means that every day we prototype together, great — and see whether this will take us in the direction that we expect it to take us, and then port things over one by one as we need it, rather than doing a multi-month project; I'm not interested in that whatsoever. We have our MOOC images already, yeah.
C
I haven't really looked into using Helmfile — like, I don't know if you've looked into this. I didn't think that Helm really dealt with secrets at all, and I guess there's a plugin for letting Helm also manage secrets. Is this something that we want to use Helmfile for? What we don't want to do, obviously, is just put secrets in there as-is; we need to store secrets encrypted, and we want to use Kubernetes secrets for now, if we're not going to put them in Vault.
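The "Kubernetes secrets, stored encrypted" requirement can be sketched as follows — create the secret out-of-band and have the chart reference it by name, so the value never lands in a values file. The names below are placeholders:

```shell
# Placeholder names; real secrets must never be committed in plain text.
SECRET_NAME="gitlab-postgres-password"
PASSWORD="$(openssl rand -hex 16)"   # throwaway value for illustration

# Create the secret in the cluster (commented out; needs cluster access):
#   kubectl -n gitlab create secret generic "$SECRET_NAME" \
#     --from-literal=password="$PASSWORD"
#
# The chart then references only the secret's *name* in values, so the
# encrypted-at-rest store (or Vault later) remains the source of the value.

echo "secret reference: $SECRET_NAME"
```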
B
A
So, Skarbek, how about writing up that requirement? Right? Like, you want to figure that one out. We don't need to replace the things that are already working; we need to solve this problem, and if this problem can be solved with Helmfile, we can talk about porting the other things that we currently have over a period of time.
A
But it's two of you doing this full-time, with a little help here and there. And if that means that we want to move in this direction, I want to ask for more people then — and if I ask and get a no, then it means we can't do it; we do it with what we have, basically. So that's basically a requirement from my side: you write up what is needed and what is expected — can these two get the thing done?
A
If yes, go do a quick POC to see how we can plug this in, and only then can we see whether it even makes sense to look into all the details of what these tools can do compared to what ours can do at the moment. If we can do this with Helmfile, great, because then we can port it over and use what the product development side has moved to for our integration as well.
B
We should try to finish up the database defaults. I think I left a comment, so we should try to follow up with each other and knock that out, just to close out that issue, because there isn't much more work to accomplish with it. Take that, and then hopefully by the end of next week we'll have a proposal written up, so we know precisely what we want to accomplish with the single source of truth. How about that? Whether that be a proposal for our POC — and hopefully we can get Rain involved — or if it's just a...
F
B
A lot of the work that we're trying to accomplish, especially with the single source of truth, is probably going to be helpful for anyone that's running a hybrid infrastructure — like the Scalability group that some of us are participating in — and this will be tremendously useful in that situation.
C
There are a couple of open issues here, but most of them are just waiting for charts. So, like, enable JSON logging is done. The Sidekiq logs from GKE pods are routed into a new index — I have the MRs out for that, so that's in good shape. I think the network policy work is waiting for our charts update, so — I think, maybe, I don't know — maybe we should update charts for next week, Skarbek?