Description
GitLab APM weekly issue - https://gitlab.com/gitlab-org/incubation-engineering/apm/apm/-/issues/30
Hello, Joshua here, full stack engineer in Incubation Engineering. I'm looking at application performance monitoring, management, and observability, and how we integrate that with the GitLab DevOps platform. A short update this week: I've mostly been working on various elements of getting a testing or staging environment set up and automated in the background, which has meant doing a bit of learning around some of the tools I'm not as familiar with, for example Terraform, which I don't have very much experience with.
A
So
that's
taking
up
a
bit
of
my
time
at
the
moment
in
terms
of
a
merge
request
that
we
got
in
this
week.
I wanted to have the Helm charts pushed to a registry on merge requests, and when we merge into the main branch as well. It took me a little bit of time to get this worked out. I initially was using the Packages feature in GitLab: under CI/CD you've got Packages and Registries, and there's a Helm package registry in there where you can see more information. There are various package formats supported, and for some reason I thought that was the equivalent of a container image, an OCI image, and it's not; it's the standard way of packaging a Helm chart and saving it as part of a Helm repository.
So, as well as having standard container image pushes as part of our builds for this APM solution, we also push the Helm charts. As part of the pipeline process, we've added a mechanism to update the Helm charts using yq, which is a fairly standard YAML CLI tool for updating parts of YAML files: we add a Helm tag in there and update the Chart.yaml, and we also go into the values file and update the images in that values file.
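That update step might look roughly like this in CI; a minimal sketch, assuming the chart lives in `chart/` and the values file has an `image.tag` field (the paths, keys, and job name are placeholders, not the actual pipeline):

```yaml
# Hypothetical .gitlab-ci.yml job (yq v4 syntax)
update-chart:
  stage: build
  image: alpine:latest
  before_script:
    - apk add --no-cache yq
  script:
    # Stamp the chart version with this commit's short SHA
    - yq -i '.version = "0.1.0-" + strenv(CI_COMMIT_SHORT_SHA)' chart/Chart.yaml
    # Point the values file at the image built for this commit
    - yq -i '.image.tag = strenv(CI_COMMIT_SHORT_SHA)' chart/values.yaml
  artifacts:
    paths:
      - chart/
```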
So it reflects the commit SHA of the images that are built as part of the commit pushed in the merge request, or by the merge train to the main branch.
So that creates, or updates, the chart, and then we lint it. Once everything else in the pipeline is finished, once the container image is built, for example, we push the Helm chart to the registry as well, and we set HELM_EXPERIMENTAL_OCI=1 there as part of Helm, to tell it we can use this feature.
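As a sketch, the push job might look something like this; `helm chart save` and `helm chart push` are the experimental OCI commands that flag enables in Helm 3.0 through 3.6 (Helm 3.7+ replaced them with `helm push … oci://…`), and the registry path and chart location here are placeholders:

```yaml
push-chart:
  stage: release
  image: alpine/helm:3.6.3
  variables:
    HELM_EXPERIMENTAL_OCI: "1"
  script:
    - helm registry login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - helm chart save chart/ "$CI_REGISTRY_IMAGE/chart:$CI_COMMIT_SHORT_SHA"
    - helm chart push "$CI_REGISTRY_IMAGE/chart:$CI_COMMIT_SHORT_SHA"
```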
A
We
can
use
that,
but
also
if
we
decided,
for
example,
in
the
staging
or
production
environment,
we
wanted
to
take
a
say,
git
ops
approach.
It's
much
easier
to
do
so
now
that
we
have
those
artifacts
in
a
registry,
because
we
can
have
say
a
a
helm
chart
operator
in
the
environment
that
checks
for
new
versions
and
automatically
pulls
them
and
rolls
them
out
things
like
that.
So
that
in
theory,
should
make
that
a
lot
easier
to
work.
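For example, with Flux as the operator (purely illustrative; the group/project path and chart name are placeholders), pulling the chart from the OCI registry could look like:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: apm-charts
  namespace: flux-system
spec:
  type: oci
  url: oci://registry.gitlab.com/my-group/my-apm-project
  interval: 5m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: apm
  namespace: flux-system
spec:
  interval: 5m  # check for new chart versions every 5 minutes
  chart:
    spec:
      chart: apm
      sourceRef:
        kind: HelmRepository
        name: apm-charts
```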
What else have we been working on? One of the things, alongside automating the deployment and setting up the staging environment: I realized that for the data coming in, we don't currently have a way of integrating it with the GitLab project and the user's environment that they're using the project with. And that's quite crucial, really, because at the moment it's just storing a bucket of all the data, so it's not particularly useful in that respect. So we've created an issue here, and I think this needs to be done before we put the project into staging. Otherwise, anyone that uses it, their data is just going to exist alongside everyone else's data, which would make it pretty much useless if we then provide any other functionality on top of that. So here is my proposal, and I do need to work this out a little bit more:
A
Is
that
with
any
agent
that
that
we
use
with
the
apm
solution,
which
is
currently
the
data
log
agent.
We
can
set
some
environment
variables,
so
the
user
can
set
product
id
and
optionally
an
environment
id,
and
that
could
either
be
the
path
in
the
gitlab
instance.
A
So
it
might
be
something
like
the
path
that
we
see
in
the
url
here
and
it'll
be
able
to
work
that
out
or
it
could
in
fact
be
the
actual
integer
id
there
same
goes
for
the
environment
id.
You
can
see
how
you
do
that
with
the
datadog
agent
there,
using
these
fields
or
these
environment
variables
or
the
tags
attribute
in
the
helm
chart.
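A sketch of what that might look like in the Datadog agent's Helm values; the `datadog.tags` list is a real field in that chart, but the tag keys `gitlab_project_id` and `gitlab_environment_id` are placeholder names, not a settled convention:

```yaml
datadog:
  tags:
    - "gitlab_project_id:my-group/my-apm-project"  # path or integer ID
    - "gitlab_environment_id:42"                   # optional
```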
A
So
the
project
id
would
be
a
requirement
for
the
user,
and
that
would
be
checked
against
the
gitlab
api.
Using
the
api
token
user
provides
that's
effectively
a
way
of
doing
all
authentication
and
authorization
against
that
project.
We
need
to
probably
make
sure
that
you
that
that
account,
that
is
linked
to
is
a
minimum
of
a
sort
of
developer
level
level
of
user.
So
they
have
the
correct
credentials
to
be
storing
this
data
against
the
project
or
the
environment.
A
It
might
be
that
you
know
you
have
public
projects
like
a
lot
of
the
ones
that
get
the
posts.
You
don't
want
people
just
to
be
able
to
submit
metrics
that
are
attached
to
that
project.
So, you know, you need to put some checks in there, and I've put an initial sequence chart in here: if the agent passes an invalid key when it requests the project, it'll get a 401, and that'll be sent back; if we pass a valid key and project, we'll get the project with the key, we'll get the JSON, we'll optionally get the environments, and we'll send a 200 back to the agent.
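The check described above could be sketched as follows; this is a hypothetical illustration, not the actual implementation (the access-level constants match the GitLab API, but the function and the shape of its wiring are assumptions):

```python
# Sketch: decide what status the APM backend returns to the agent after
# looking the project up via GET /api/v4/projects/:id with the user's token.
# GitLab access levels: 10=Guest, 20=Reporter, 30=Developer, 40=Maintainer.
DEVELOPER_ACCESS = 30

def authorize_agent(project_json):
    """Return 200 if the token grants at least Developer access, else 401.

    project_json is the decoded API response, or None if the lookup itself
    failed with 401 (invalid token, or no access to the project).
    """
    if project_json is None:
        return 401
    permissions = project_json.get("permissions") or {}
    access = (permissions.get("project_access") or {}).get("access_level", 0)
    if access < DEVELOPER_ACCESS:
        return 401  # authenticated, but below Developer level
    return 200      # OK: metrics may be stored against this project

# A token with Developer access on the project is accepted:
print(authorize_agent({"id": 42, "permissions": {"project_access": {"access_level": 30}}}))  # prints 200
```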
But what we then need to do is store, against any series data or any other data we start to store alongside APM, that project ID and the environment ID; then we can link between the two systems, and that would be part of the sorting key as well. So that's quite important, I think, before we make this available anyway; otherwise it just doesn't make much sense.
So that's the update from me at the moment.