From YouTube: GitLab Observability (Opstrace) local tracing demo
Description
This is a demo of deploying GitLab Observability (Opstrace) locally and using a test application to send and query traces. It uses our new Golang operator code, GitLab auth, a ClickHouse backend, and an early version of our Observability UI (a Grafana fork).
All right, so I'm going to show how we're going to deploy an instance of the current GitLab Observability tooling. To do that, I've checked out a version of the upstream repository. I did a git pull to bring in the latest, and yes, I'm up to date.
First things first: we need a local cluster. Let me see, I'm going to go to the quickstart. I know these commands by heart, so I'll do them from memory, but there's a quickstart that lets anyone do the same. So basically, first... sorry, Mac.
We're going to run make kind, because we're deploying this on a local cluster. I've already set up my Mac and given it enough RAM and enough CPUs for this cluster to work; that's also described in the quickstart.
One other thing: I think the whole end-to-end setup will be a bit quicker on my machine than if somebody tries to replicate this, because I don't have to pull any images anymore, or anything like that.
So right now it's starting the kind cluster. Once the kind cluster is started, I'll have a local Kubernetes cluster that I can build and deploy to.
I cleared my screen, and now I'm going to run make deploy. make deploy is going to build and deploy the whole system. What's going to happen is basically this: we're building an operator; that operator, once built, gets deployed to the Kubernetes cluster; and that operator will then deploy the rest of the infrastructure.
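To make the terminal steps so far easy to replicate, here they are as a short sketch. The make targets (make kind, make deploy) are the ones used in the demo; anything beyond that would come from the quickstart:

```shell
# Create the local kind (Kubernetes-in-Docker) cluster.
# Docker needs enough RAM and CPUs allocated first, per the quickstart.
make kind

# Build the operator and deploy it to the cluster; the operator
# then deploys the rest of the infrastructure on its own.
make deploy
```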
So everything that's getting created right now, what you can see especially in this part here, is the custom resources for this cluster. We can now deploy tenants, we can now deploy dashboards; there's a bunch of different systems that we can deploy to the cluster. Now that all of this is done and deployed, we can move to the next step.
A
The
next
step
will
be
to
create
a
config
file
that
the
config
file
is
here.
I've
already
created,
it's
called
cluster
yaml.
I've
also
created
inside
of
my
org
inside
of
my
own.
So
now
I'm
sharing
my
secret,
but
that's
fine,
I'll
change
it
right
after
this
video,
but
basically
inside
of
my
profile
here,
I
inside
of
api,
I've
created
an
api
where's
api.
Okay, so I've created an application. It's down here: an application called opstrace. I've added a callback; everything is in the quickstart, like I explained. That gives me a client ID and a client secret to put in there. The shared secret is just a random string right now; it will be used later, so don't worry about that one. So now I have this cluster.yaml, which is basically the definition of a cluster.
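What goes into that file looks roughly like this. This is only a sketch: the field names are assumptions (the real schema comes from the quickstart and the operator's custom resource definition), and the shared secret is, as said, just a random string:

```shell
# The shared secret is just a random string for now (used later).
SHARED_SECRET="$(openssl rand -hex 32)"

# Write a placeholder cluster definition. The field names here are
# illustrative; the real ones are defined in the quickstart.
cat > cluster.yaml <<EOF
name: dev-cluster
gitlab:
  client_id: "<client ID from the GitLab application>"
  client_secret: "<client secret from the GitLab application>"
  shared_secret: "${SHARED_SECRET}"
EOF
```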
We're going to kubectl apply this YAML file. As soon as we've done that... see, it's created one thing, something called the dev cluster, which is what I've currently deployed. Now we can wait: we're going to basically wait until this cluster is ready. That's one of the nice things about custom resource definitions and operators: the operator tells you when everything is ready, so you don't have to go hunting for things.
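As a sketch of that step: the resource kind, name, and readiness condition below are assumptions based on what's visible in the demo (a dev cluster object that reports when its conditions are met):

```shell
# Hand the cluster definition to the operator.
kubectl apply -f cluster.yaml

# Let the operator tell us when everything is ready instead of
# hunting for individual components ourselves.
kubectl wait --for=condition=Ready cluster/dev-cluster --timeout=30m
```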
Because we're curious, we can take a look: we can do a get pods and continuously watch the output of what's happening. You can see that the cluster is getting created. All of this is the base infrastructure that was installed in the cluster when we did make deploy: our operators and a few other things, like a development Postgres.
So yeah, it's going to be a few minutes.
C: Yeah, I was trying to think if I have any questions right now. So, if this was not local, how would it be? I guess, how would this work?
A: Exactly the same thing. We used a kind cluster because that's easier for developing locally, but if you deploy any Kubernetes cluster in the right configuration, which is what our Terraform code does, then you can deploy the same system and code to that cluster and run it there. There's not a big difference.
More and more things are getting deployed. There are a lot of dependencies: Redis is used, and other things like that. This could get much faster, because right now things are spun up in sequence, which is easier for debugging and easier for development in the beginning. One day, when we're more confident, we can start spinning up a couple of things in parallel.
In theory you could spin up everything in parallel as it starts; this is how our previous code worked. But the sequential approach is better for development right now: we're going to find bugs faster this way.
So yeah, this is clearly separated into two parts. This part of the demo is the part where the operator, the person who is going to deploy the system, deploys it. We want to make it much, much easier than all of these commands. I've run make commands and kubectl commands, and that's not what our customers are going to do in the end. That's important to understand.
We could kick off these deployments from GitLab itself; I'm pretty sure the engineers are thinking about this quite a bit already. Kicking off the deploy from GitLab, or from Terraform, that's the main thing.
And this thing that I went to, when I set up the OAuth secrets and everything, could also be done behind the scenes. There's no reason somebody who's installing GitLab shouldn't be able to click a few buttons to say 'I want this deployed', and then fill out a form saying 'this is where my Kubernetes cluster is, here are some credentials to go there'. Or, if they don't even have a Kubernetes cluster, kick it all off with our integrated Terraform deploy. There's no reason somebody should have to click through anything or do anything manually all the way to deployment. And we know it works, because we've done it before: the previous demos of upstream, and the code pre-acquisition, were like that. End to end, no human involved, all the way to just clicking into login.
That's the experience we want to bring back, but from GitLab. Okay, where are we? I think we're pretty good. Let's see: yeah, we're pretty good. Look, the conditions are met, the cluster's up. That's it; it's that simple. Now we have a running cluster, but we don't have any Observability UI to go to yet; that is, we don't have an instance of the Observability UI running for our group.
So the loading sequence is mainly for demo purposes.
While we're being patient, what's happening behind the scenes can, as usual, be seen by looking at which pods are getting deployed to Kubernetes. These pods are specific to that group: they exist only for that group, and the traffic for that group goes through them. So there will be, for example, a pod deployed with the GitLab Observability UI (not just one pod), and then, as you can see, an operator for the tenant.
A
So
what
happens?
Basically
is
we
have
an
operator
that
then
launches
another
one?
The
tenant
operator
and
the
tenant
operators
is
responsible
for
one
group,
one
tenant
and
that
deploys
the
rest
of
the
stack,
an
open,
telemetry
endpoint.
You
can
see
it
here
as
well
right,
like
jaeger,
all
the
pieces
that
are
specific
to
a
user
or
not
so
much
a
user,
but
a
a
group
are
getting
deployed
and
that's
what
we're
waiting
on
we're
waiting
on
these
things
to
run,
and
they
should
be
up
in
a
second
like
yeah.
Let's
just
wait.
A
As soon as we have all the pods running, we'll be happy, and I think we're almost there. The argus deployment is our UI, jaeger is the Jaeger endpoint, and so on. Right, okay.
A
Not
yet
soon
soon,
soon
there-
oh,
let's
see
like
once
you've
done
it
a
few
times.
You
know
the
timings.
It's
also
you
can
look
at
the
their
hints
right
when
you
look
and
you
know
what
to
look
for
here.
Okay,
now
we
have.
This
is
our
grafana
fork.
This
is
still
says
grafana,
but
that's
gonna.
Do
people
are
going
to
be
work
work
this
over
now
we're
going
to
add.
First,
we
need
to
add
an
api
key.
All this proxying and forwarding I'm doing right now is just because I'm running a test where I deploy a website that emits traces, and then I want to be able to go in and see those traces. So the part I'm doing right now is deploying a docker compose setup that's going to do that.
This token that I just got is needed by our demo application. Oh, sorry; I'm still not used to all the Mac keyboard shortcuts. I'm still doing all my Linux things, and then it blows up in my hands when I use them here. Okay, and there we go, we just start this up.
So when we do this docker compose up, you can see that we've given it the namespace and the token, and once this docker compose is up, it's going to create traces when I visit the website.
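Roughly what that launch looks like; the environment variable names below are made up for illustration, since the demo only says the compose file is given the namespace and the token:

```shell
# The API token created in the UI and the group namespace are passed
# to the demo application, which will emit traces for every page visit.
export DEMO_TOKEN="<API key from the UI>"
export DEMO_NAMESPACE="<group namespace>"

# Bring up the trace-emitting demo website.
docker compose up
```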
Adding a data source is something we're doing here during the demo, but our system will do it automatically. No user should have to add a data source for something that is basically already there; it's deployed by us. Sorry, why is this being annoying? All right, that's fine. Okay, the URL for the data source... oh, that one is a complicated one that I need to copy-paste.
There... why... there it is.
Okay, green is good. Explore, Jaeger, traces, customer GET: this is the click that I did a few minutes ago. So that's a trace for the last click I did on the system. You can see the traces inside of the UI; they're stored in ClickHouse, and so on. So, what happened: once you click, a trace gets created, sent through OpenTelemetry, and then stored in ClickHouse.
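That pipeline (a click creates a trace, it's sent through OpenTelemetry, then stored in ClickHouse) can also be exercised by hand: anything that speaks OTLP over HTTP can post a span to the tenant's endpoint. A sketch with curl, where the endpoint host and the auth header are assumptions, since the real ones are tenant-specific:

```shell
# A minimal OTLP/HTTP JSON payload: one trace containing a single span.
PAYLOAD='{
  "resourceSpans": [{
    "resource": {
      "attributes": [{
        "key": "service.name",
        "value": { "stringValue": "demo-website" }
      }]
    },
    "scopeSpans": [{
      "spans": [{
        "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
        "spanId": "051581bf3cb55c13",
        "name": "customer-get",
        "kind": 2,
        "startTimeUnixNano": "1700000000000000000",
        "endTimeUnixNano": "1700000000500000000"
      }]
    }]
  }]
}'

# Post it to the tenant's OpenTelemetry endpoint. Host and header are
# placeholders; set OTEL_HOST to a real endpoint to actually send it.
if [ -n "${OTEL_HOST:-}" ]; then
  curl -sS -X POST "https://${OTEL_HOST}/v1/traces" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer <API token>" \
    -d "$PAYLOAD"
fi
```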