From YouTube: Observability Staging Demo
Description
Tracing, Error Tracking and a little bit about the UI.
Hi, thank you very much for watching my video. This is going to be an update on observability at GitLab: what we're building into the product and a little bit of what's coming in 15.3. So basically, I've shown a few parts of these demos before, but I've always shown them in a setup where things are developed and run locally.
This time everything is running in our staging environment, a staging environment that will eventually receive tracing data from GitLab.com staging. While we're working on this, I'll be showing a few other ways you can use the platform: how to send your own traces to it, for example, and how to send your own errors. I've shown that too, but I'll show it quickly again here. After that, I'll also show a quick preview of what we're doing with our future UI.
So here we're still in this UI that still uses Grafana. We are going to look at a few traces that I'm going to send from this computer. We have this running at this URL, and we also have a GitLab instance that we deployed ourselves at skitlab.staging.com.
So now, basically, what I'm doing is configuring environment variables for the OpenTelemetry exporter. I'm going to be sending it to the staging endpoint, and I've set it up for the project, the group number two, and I'm using an API key. That API key I created by going here: there's a way in Grafana to create multiple API keys, so once you have access to that UI, you can create authentication tokens.
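As a sketch of what that environment setup can look like (the endpoint URL, service name, and token below are placeholders, not the real staging values), the standard OpenTelemetry exporter environment variables are:

```shell
# Placeholder values; substitute your own collector endpoint and API key.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://observe.example.com"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <your-api-key>"
export OTEL_SERVICE_NAME="demo-app"
```

Any SDK or collector that follows the OpenTelemetry specification reads these variables automatically, so the application code itself doesn't have to hard-code where traces go.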
There's a data source that I still had to configure manually. This is going to be automatic in 15.3, hopefully, and on GitLab.com it'll of course be automatic out of the box.
The URL you can see here is an internal service that we're running to connect to the Jaeger instances that we deploy on GCP.
So let's go, let's start sending some data. I'm going to launch this docker compose, and the docker compose simulates what would be happening in some cluster: tracing data leaving the application and then being sent to our OpenTelemetry APIs. For this we go back to this UI, in which you can start pressing buttons, and by pressing buttons you generate traces.
I don't think you need to reload the page; I just did something to test. Let's just choose anything. We've done only the same HTTP request so far, but we can also inspect it from other angles: for example, what generated a SQL query, or what generated a call to Redis. In this case, let's enter the trace from where it starts, which is the HTTP request.
So here it is; that's the ID of the trace. For those not familiar: a trace basically records what happens in a program from the moment a call (a function call) starts all the way to when everything finishes, and it goes through the entire infrastructure, if possible, by keeping an ID all along the way. That way you can see how much time a function spent calling different parts of your infrastructure, and how much time is spent in some sub-functions.
So in this case, one HTTP call becomes a call to select from a database and a call to find something in the Redis cache, some of them failing for some reason; you can then dig into what happened and which part failed, and then of course other calls to other parts of the HTTP stack. From the beginning to the end, you see that this call, this entire HTTP request, took 761 milliseconds.
But if you look at, for example, just the SQL query, that took 300 milliseconds. So that's how you use traces, just at a high level; while we're here I might as well explain for those who don't have the context. So that's what you can do right now with this, and that's what you will be able to do in 15.3.
Oh, there it is, sorry: Monitor. Here we've basically already enabled it for this project. We get this to put into the Sentry SDK that we use to send errors, and then, in this Go code that we use as an example, I just pasted it here in the init function, where you basically init the Sentry client. So it's Sentry-compatible and all integrated within GitLab. And then you just run the program a couple of times.
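The init code in question can be sketched roughly like this, assuming the sentry-go SDK; the DSN string is a placeholder for the value shown on the project's Error Tracking settings page, not a real one:

```go
package main

import (
	"errors"
	"log"
	"time"

	sentry "github.com/getsentry/sentry-go"
)

func init() {
	// Placeholder DSN; paste the one from your project's settings here.
	if err := sentry.Init(sentry.ClientOptions{
		Dsn: "https://<key>@<host>/<project-id>",
	}); err != nil {
		log.Fatalf("sentry.Init: %v", err)
	}
}

func main() {
	// Flush before exiting so queued events actually reach the server.
	defer sentry.Flush(2 * time.Second)
	sentry.CaptureException(errors.New("deliberate demo error"))
}
```

Because the endpoint speaks the Sentry protocol, the standard SDK works unchanged; only the DSN points it at GitLab's error tracking instead of sentry.io.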
So this one here, once, and two more; every time there's an error, which is on purpose. Then we just go back to the GitLab UI, to Error Tracking. I've shown this before, but basically you can see that it happens, and how many times it happened: five times, and it's always the same one, I keep clicking it. You can resolve it once you're done, if it doesn't happen again; and then, if it happens again, it should theoretically unresolve it.
Let's run it twice more. Refresh: it gets unresolved, and now it's been around seven times, two more. So yeah, and then you can inspect exactly where within the code the error happens. All right, that's it. And then the last piece: our observability UI. We're currently working on our Grafana fork, transforming it into our own: in our base we're removing all the functionality we don't want, we are rebranding it with our branding, and we're also going to make a couple of things more prominent.
Eventually, on this front page, we removed a bunch of calls to action, and we'll replace them with default dashboards over time, but that's not what we're doing for 15.3. For 15.3 we'll just do the branding, and you'll be able to view it in an iframe within the GitLab UI, so that it looks more or less integrated. And then what you'll be able to do is see your traces, and also connect to an existing Prometheus that somebody might be running, or existing Elasticsearch instances that might already be running somewhere.
So yeah, that's it! Thank you very much, I hope you enjoyed this. If you have any questions, feel free to come to the observability channel, or also simply create issues and ask questions; we're always available and happy to help. Thank you very much.