From YouTube: Scalability team demo - 2020-10-28
A: Okay, I've got the first item on the agenda, which is just about feature categories and metrics on logs. This is actually stuff I did a while ago; I just forgot that I'd done it and didn't demo it. I was like: oh, I've done nothing, I haven't got anything to demo. But I did, so this is a retrospective demo. We added the feature category to structured logs and to our metrics.
A: So I'll start off with the structured logs. So yeah, here you can see I've just filtered by when we've got caller_id, because that's typically when we'll have a feature category. We've got a bunch of stuff here from the Git service, so I might want to exclude that, just to make it a bit more interesting.
A: Well, exactly. And then you can see here, like, we've got a gap for the API, which Bob is working on right now. So some API routes do have feature categories and some don't at the moment, so we're filling those in, and the same for the Rails routes as well, to be fair. Like, all of the...
A: All of what we know about in Rails has a feature category. But if I look for "does not exist", maybe that'll be a bit clearer... type, not api.
A: Yeah, so, for instance, when you use the Devise unlocks controller: because that's not our controller, we haven't defined a feature category on that. We probably should, but it's not a huge deal, because it's not super common. When there's a 404, we don't have an explicit owner set, which is probably fair enough, so stuff like that. Yeah, and the Peek results controller shows up there as well. So basically, what we think is missing at the moment is from...
A: Second of all, anything that is not in a controller that inherits from our application controller, because that's where we define the feature category stuff. We can add it to other controllers; we just haven't. And then "not found" is the other one. So that's the logs.
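For context, the declaration being described looks roughly like this. A minimal sketch, assuming GitLab's class-level feature_category helper; the category name is illustrative:

```ruby
# Controllers inheriting from ApplicationController can declare their
# feature category at the class level; anything outside that hierarchy
# (or with no declaration) shows up as unknown/not owned in the logs.
class ProjectsController < ApplicationController
  feature_category :source_code_management # illustrative category
end
```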
B: What's the meta.caller_id "exists" filter? What is that?
A: Oh, that was just because, like... if I exclude this... see, if I just get rid of all my filters, it'll probably be clearer. We get log lines like this, which is just... I don't know. I don't really know why we...
A: Oh, so caller_id is the route that we have handles for from the application side. So it's part of that context stuff that Bob added. So it's like the API route, or the controller-action pair.
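Hypothetical examples of the two shapes caller_id takes, per the description above:

```ruby
# Rails request: caller_id is the controller-action pair.
rails_entry = { "meta.caller_id" => "ProjectsController#show" }

# API (Grape) request: caller_id is the route itself.
api_entry = { "meta.caller_id" => "GET /api/:version/projects" }
```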
B: That aren't really relevant to this. I was just asking because I don't know everything that's in this log, and some of these things, I think: oh, caller_id, that looks useful, but where does it even come from, or what does it mean?
A: Yeah, the other big use for caller_id is for aggregations. So for the API you can aggregate by the route field: if you want to know the top 10 API routes, you can use... I think it's called "route". But for Rails stuff, that's in two fields: it's in "controller" and in "action".
A: So it's not very convenient to aggregate by the two of those here. But you can just use the caller_id, and that would be controller and action for Rails, or the API route for the API, which is typically what you want most of the time. So...
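A small sketch of why the single field is handier for aggregation; the entries and field names are illustrative:

```ruby
entries = [
  { "controller" => "ProjectsController", "action" => "show",
    "meta.caller_id" => "ProjectsController#show" },
  { "meta.caller_id" => "GET /api/:version/projects" }, # API: no controller/action
]

# Rails-only aggregation needs a composite key, and misses API entries:
entries.group_by { |e| [e["controller"], e["action"]] }

# caller_id covers both kinds of entry with one key:
top_callers = entries.group_by { |e| e["meta.caller_id"] }
                     .transform_values(&:count)
                     .max_by(10) { |_caller, count| count } # top 10 callers
```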
C: Yeah, thanks. For Sidekiq logs it's also useful to know which controller action would have scheduled the Sidekiq job, yes, or exactly which job would have scheduled another job.
D: Is that why... what were we looking at yesterday? Oh, external jobs, what do they call it, external dependency, reactive caching. Yeah, reactive caching. There was something that was going awry on those yesterday, and Craig Furman was asking for help, and I said: well, it's kind of difficult to apply attribution, because they're unowned. But actually... it's not the caller_id, but the feature category on those seems to be the feature category of the caller, rather than unowned, which made it quite easy.
D: So there's different feature categories on that worker, which kind of caught me by surprise, because that wasn't how I understood it. So do we inherit the feature category from the caller now, on some of the unowned jobs? Is that a bug or a feature?
A: Okay, we'll talk about metrics. So what am I doing? Let's just have a quick look at this. So it's also in the metrics: it's on this http_requests_total metric. I probably should have typed this up before the call.
A: So if we look at this, we'll see what the distribution looks like over time for web. It's also mostly set for git, because most of the git Rails traffic is Rails controllers, so that's kind of like a happy accident. These are pretty much all polling, ETag caching. So if we...
A: ...just do 2xx codes, those will drop quite a bit. The ETag caching endpoints do now have the feature category, but yeah, they show up a lot because we get a lot of 3xx responses.
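The label itself looks roughly like this with the prometheus-client gem; a sketch, with the registry wiring and label values invented for illustration:

```ruby
require "prometheus/client"

# Counter carrying a feature_category label, as discussed above.
registry = Prometheus::Client.registry
http_requests_total = Prometheus::Client::Counter.new(
  :http_requests_total,
  docstring: "Total HTTP requests",
  labels: [:status, :feature_category]
)
registry.register(http_requests_total)

# Each request increments with its category, so the distribution can
# be split by feature_category, or filtered by status (e.g. only 2xx).
http_requests_total.increment(
  labels: { status: "200", feature_category: "source_code_management" }
)
```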
A: "Not owned"... we've still got quite a bit of "unknown" somewhere. It was in the top 10 yesterday, at least when I looked. Oh, yeah, it's sort of down there. That mostly seemed to be, like, the redirects that we do in the Rails routing: if you clone just with the repository URL, we redirect you to the .git version. So that all works, but that doesn't set a feature category. And then for the API...
A: We will also have some feature categories set, and Bob's adding more.
A: And we've got some issue tracking and news and stuff, and Bob's just added some more. All the ones beginning with "a", Bob's just added, to match the theme from before.
A: So that's that. The only other thing I wanted to mention there was the way I did this, because I wasn't really sure of the best way to, like, exfiltrate what the controller thinks into this Rack middleware. So it's actually a header that is just in the response. So if I...
A: Yeah, so that's not super useful most of the time, but it's kind of handy if you're testing that it works, basically. But yeah, the other nice thing about that is that, because Rack has access to the headers, we can just grab it out of the headers. It means that it was very easy to implement that for the API, which uses Grape, after we'd implemented it for Rails, because all Rails and Grape need to do is set the header, and then the middleware will just deal with that.
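A sketch of that header hand-off; the header name and middleware class are invented for illustration, and the counter is assumed to be the labeled counter from earlier:

```ruby
# Rails/Grape side: record the category in a response header, e.g.
#   response.headers["X-Gitlab-Feature-Category"] = "source_code_management"

# Rack side: the middleware reads the header back out and labels the
# metric. The header is deliberately left in the response, which is
# what makes it handy for testing.
class FeatureCategoryMetrics
  HEADER = "X-Gitlab-Feature-Category".freeze

  def initialize(app, counter:)
    @app = app
    @counter = counter
  end

  def call(env)
    status, headers, body = @app.call(env)
    category = headers[HEADER] || "unknown"
    @counter.increment(labels: { status: status.to_s, feature_category: category })
    [status, headers, body]
  end
end
```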
A: Exactly, yeah. I just figured, like, you know, it might be useful. It's not really a secret; the source code's public anyway, right? So yeah, it's more about, like, header bloat slash waste. But yeah, any questions on that?
A: To the extent that I asked you that question earlier, that should tell you where I am with that project. So I added the recording rule for this, and the significant labels for feature category. So now we generate a recording... or we will, once the MR is done: we will generate a recording rule that includes feature_category for this http_requests_total metric that we were just using, and then we can use that recording rule metric on the error budget page to split this out by feature category, like we do for Sidekiq already. Yeah.
A: For the web, for the Rails side of things, it's just the http_requests_total at the moment that has the feature_category label on it, because I think it's...
D: The other thing that this kind of brings to mind is that that error budget page is, like, not that great, and I say that as the author of it. One of the things that we could consider doing, and this also ties in with teams coming along and wanting to put their charts into Grafana and them not really fitting on the service overview page...
D: Some of it's generated, and then some of it's like that, and they can go there and say: this is your dashboard. You know, this is where you can see your kind of view of the world, and if you want to add things, you can add a chart. And at least...
A: Yeah, I think now is the point where we can definitely start doing that. Because, yeah, I think what's nice about having this in metrics specifically is that it's really easy to say: okay, I think "not owned" or "unknown" or whatever is too high, let's squash some of those. And then maybe be like: it's kind of okay for now, we've got enough to give that to teams. So yeah, that sounds interesting.
A: Oh, and I've got the next thing. So this is potentially for Craig, because Craig asked for some Rack Attack logging, and I was just looking at that just before the demo, so I thought I might just talk about this. I'm just going to link to the issue in the doc as well, because I didn't actually put that there. So we already have some logging that was added before, which is pretty good. So what we want to do is extend that to... well.
A: Yeah, I think they are. So, like, you know, it makes it easier to find that. And then what was the other thing? Oh, there's one field that's only added when there's a user, but it could be added when there's not a user as well, for consistency. And then the other thing Craig was asking for is logging the meta.project and meta.namespace fields, like we have in our structured logs.
A: However, that requires a database lookup, because we're not in... Like, when we add this to our structured logs normally, the way Bob's implemented it is that we've already loaded a project, if there's a project that's relevant to this page, right? If we haven't loaded a project, then we're probably not on a page that's related to a project. So we just use that project that we've already loaded from the database. But in this case...
A: Yeah, yeah. And I was looking at a thing... so there's this... because we actually have to load the project at some point during the routing as well. So this is sort of getting a bit complicated, with, like, Rails routing and Rack middlewares and stuff, and I'm not 100% sure of what the outcome is. But, like, because of our routes being ambiguous...
A: So now a bunch of them have got this hyphen segment in them. Because, like, you could go to gitlab-org/gitlab/blob/master/blah blah blah: how do we know that there's not a project called gitlab-org/gitlab/blob, for instance? Like, you know, how do we know which part of...
A: Where is that regex... yeah. So it looks it up from the database, which obviously Workhorse can't do.
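To make the routing ambiguity concrete (the paths are hypothetical):

```ruby
# Project paths can be nested, so without a separator the project path
# has no clear end:
#   gitlab-org/gitlab/blob/master/README.md
#     -> project "gitlab-org/gitlab" plus a blob view of master? or
#     -> a project at the path "gitlab-org/gitlab/blob"?
# Rails resolves this with a database lookup. The "-" segment removes
# the ambiguity without one, by marking where the project path stops:
#   gitlab-org/gitlab/-/blob/master/README.md
```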
C: Which is kind of interesting, maybe, but...
A: Yeah. No, if you look at line 26 of the Rack Attack logging file, it's just...
B: That... of the logging file? Line 26?
A: Yeah. But what I'm also curious about, and I haven't looked it up yet because I only started working on this, like, now, is that we get the user ID from the Rack Attack match discriminator field. But to know what the user ID is, we have to have done something with their session or their personal access token in the first place, which also implies that there might be a lookup at that point.
B: Yeah, and it seems like it will happen a lot. Well, it's a throttle with user information, but...
A: So yeah, the problem here is that, conceptually (and I know this is what you're thinking of, I'm just sort of explaining it for the call), if we are throttling something, then that means it's been happening a lot. And if it's happening a lot, and we're adding cost to the logging of that thing that's happening a lot, that seems bad, because it's not like we're only adding this in the rare case.
A: That's just the general Rack Attack initializer, yeah.
A: So, yeah, right. So if we've already done that work, then I think... basically, that says we can log one of them for free, like the ID or the username, because we can use one of them as a discriminator. I would be tempted to use the username as the discriminator, because that's more useful in logging, and from Rack Attack's perspective it doesn't matter. Oh, Bob's linked to something that might be useful.
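A sketch of the discriminator idea with Rack::Attack; the throttle name, limits, and lookup helper are invented for illustration:

```ruby
# config/initializers/rack_attack.rb (illustrative)
# The block's return value is the discriminator: it keys the throttle
# counter and is exposed as rack.attack.match_discriminator, so
# whichever of ID or username we return gets logged "for free".
Rack::Attack.throttle("authenticated_api", limit: 300, period: 60) do |req|
  user = lookup_user(req) # hypothetical helper; may itself hit the database
  user&.username          # username as discriminator: more useful in logs
end

# The discriminator then shows up in the instrumentation payload:
ActiveSupport::Notifications.subscribe("throttle.rack_attack") do |_name, _start, _finish, _id, payload|
  req = payload[:request]
  Rails.logger.info(
    message: "Rack_Attack",
    matched: req.env["rack.attack.matched"],
    discriminator: req.env["rack.attack.match_discriminator"]
  )
end
```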
B: No, actually, there's a file called auth_finders, and that's just... but that may do a database lookup.
A: Yeah, I'll make a note of this in the thingy anyway, and I'll see where I go from this, because...
A: Yeah, figuring out what order all this stuff happens in, and what database queries happen in middleware and stuff, is always kind of a hassle to figure out. So yeah, I need to look into it a bit more. But I can now send this section to Craig after the call and say, like: here's why we might not be able to do projects.
C: Just wondering, like... I think it's more about not introducing anything like that, like not making it worse. So based on that, yeah, we could do it, because it looks like we might have probably loaded it anyway. But if Craig asks, or, like, if we want to add rate limiting based on project or namespace, we don't want to query for the project or namespace. Maybe we can do it.
A: Yeah, I'll take a look. But yeah, so my goal here is to, if possible, remove database lookups from the logging, but certainly not add them, and to log the most useful things given those constraints.
A: Basically. So, Jacob, did you want to ask your related... oh.
B: Something else I want to log, for the sake of Rack Attack: in the case of the safelist we're adding, I don't want it to go into this log, but I want to add a field to the Lograge log, like the general JSON log for the request, where I want to be able to say: by the way, this request was safelisted because of the bypass header.
B: Because that way you can see how much that even happens in the first place. And what is not quite clear to me is how that really works, because we have this transaction class, and it wraps Prometheus, but it also wraps Lograge, and I don't really know what happens... whether I can have one without the other.
C: You can get at Transaction.current and put anything you want in there. I would, like... I created an issue to get rid of the metrics from there, because they're kind of a "lump everything together" kind of thing, and it's not super useful.
A: Context... that's what we do with the context, isn't it, Bob? We add the... that's the payload in the application controller. But I don't know if you can do that outside of a controller.
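A sketch of how a custom field could ride along in the Lograge payload; the field name and its source are invented for illustration:

```ruby
# Controller side: Rails merges extra keys into the instrumentation
# payload that Lograge reads.
class ApplicationController < ActionController::Base
  def append_info_to_payload(payload)
    super
    # hypothetical flag for the safelist case discussed above
    payload[:throttle_safelist] = request.env["gitlab.throttle_safelist"]
  end
end

# config/initializers/lograge.rb (illustrative)
Rails.application.configure do
  config.lograge.custom_options = lambda do |event|
    { throttle_safelist: event.payload[:throttle_safelist] }.compact
  end
end
```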
D: Cool. I thought this would be like a nice filler if we had extra time, so we got a little bit. This week I've been looking at OpenTelemetry a little bit, and so I just wanted to kind of circulate some of the things that I found. So, basically...
D: So OpenTelemetry is pretty wide: it's got traces, it's got metrics, and they talk about logs, and they talk about extensions for things like profiles and health checks.
D: So if I tell you all the things that they are going after, it starts sounding remarkably like Labkit. Yes. And that's a really good thing, because I would much rather we were using something that was an open API rather than our own. Like, the reason we built our own was because we kind of had to, we were forced to; actually, the way we built Labkit was because of the politics of this.
D: There were certainly some people who, you know, weren't very keen on it, but actually, now it seems to have kind of gained that traction. And so there is, like, a kind of path forward, where we kind of transition Open... sorry, Labkit to OpenTelemetry, and a lot of the stuff that we do in Labkit we basically stop doing, and we just tell people: use the OpenTelemetry API. And then, if you look at OpenTelemetry, it's kind of broken into two parts: there's the API...
D: So if you are building a library, and you want to instrument your library, you use the API. And then there's a different part called the SDK, and that's if you're building an application, and you want to take that API and tell it to do something with the information, like "I'm using InfluxDB with a Zipkin tracing system": you configure that through the SDK. And so, really, what we've had up to now is we've got Labkit, and Labkit has been the API for GitLab, and it's also the SDK, going forward.
D: You know, for, like, HTTP access loggers, stuff like that. But if you want your own traces, do them using the OpenTelemetry API, and we will give you code that will configure the OpenTelemetry SDK, which is basically what we use the GITLAB_TRACING environment variable for at the moment. And so we'll set up, like, all the sensible defaults, and we'll allow interoperability and do all the fancy stuff, and then people can just use the standard API.
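A sketch of that API/SDK split in the Ruby client (opentelemetry-ruby); the service name and span names are made up:

```ruby
# SDK side: the application decides what happens to the data.
require "opentelemetry/sdk"

OpenTelemetry::SDK.configure do |c|
  c.service_name = "gitlab-rails" # illustrative
end

# API side: library code only touches the API to create spans, and
# stays agnostic about where they end up.
tracer = OpenTelemetry.tracer_provider.tracer("my_library", "0.1.0")

tracer.in_span("expensive_work") do |span|
  span.set_attribute("feature_category", "source_code_management")
  # ... do the work ...
end
```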
D: So I'll give you, like, one example of this. So OpenTelemetry has this extensible trace propagation thing. Trace propagation is: if you've got one process, and it wants to talk to another process, it needs to take some context and pass it between those processes, and that's propagation. And so the obvious one is your trace ID and your span ID, but you can also have things called baggage, and at the moment we stuff correlation IDs into baggage. And going forward...
D: ...we'll probably stick Cloudflare Ray IDs in there as well, so that we just have that everywhere. And the way that that's done... if you look at Jaeger, you might be very surprised to see that when one of our processes talks to another one of our processes, there will be headers between those two processes that are x-uber-something-something-id, and you'll be like: wait, why is there something that says Uber in here? And that's because Jaeger came out of Uber, and they have these Uber IDs, and so that's what Jaeger uses for propagation. Since then, there's now a standard W3C trace propagation standard, and what's quite nice about OpenTelemetry is you can basically say: I want to use both of these standards. So some clients might be talking Jaeger, and some might be talking W3C, and, you know, we'll just kind of mash those together. And so you can kind of upgrade clients sort of gracefully, without having to upgrade your entire cluster and all the machines at once.
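A sketch of running both propagation formats side by side in the Ruby client; this assumes the opentelemetry-propagator-jaeger gem, and the exact configuration hooks may differ by SDK version:

```ruby
require "opentelemetry/sdk"
require "opentelemetry/propagator/jaeger" # assumed gem

# Composite propagator: inject/extract both the legacy Jaeger format
# (uber-trace-id headers) and the W3C traceparent format, so old and
# new clients interoperate during a gradual upgrade.
OpenTelemetry.propagation = OpenTelemetry::Context::Propagation::CompositeTextMapPropagator.compose_propagators(
  [
    OpenTelemetry::Trace::Propagation::TraceContext.text_map_propagator,
    OpenTelemetry::Propagator::Jaeger.text_map_propagator
  ]
)
```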
D: And so, where was I going with this? So we can have that as a default, right? So when we start this, you know, some of our processes will still be talking with x-uber IDs, and others will be talking with X-Trace-ID, or whatever the W3C standard is called, and we'll be able to do that interoperability. And we'll control that in Labkit, without having to push that into the application and make it really complicated.
D: And the other reason why... like, from early on using OpenTracing, I just really felt like this is not something you want to push onto developers to use, because it was a horrible, horrible, horrible API. It was like ten vendors got in a room and they were like: our customers want us to have a standard, so let's have a standard. And everyone just bashed whatever they wanted into the standard, and it was like one of the worst APIs, really difficult.
D: The only way you could really use it was by cutting and pasting code, because none of it made any sense. And the OpenTelemetry APIs are definitely built with developers in mind, and not vendors, and so the interfaces are really nice and clean and useful. And, you know, if you open a span, you don't have to worry about dealing with the error, because, you know, that's not something... like, when you want to do some tracing, you don't want to have to, like, then handle an error.
D: You want it to just work, and if it doesn't work, it should fail silently. And so the API is much cleaner. And the other thing that I noticed is that the Ruby client seems to be in a much better state than the Jaeger Ruby client that we're using at the moment, which looks like a side project of somebody inside an organization: it's got like 50 commits, and it's not, you know, something you want to be building production code on top of, or our metrics strategy going forward. And so, moving across...
D: Yeah, the other thing that's really nice is that the way that we built Labkit, especially on the... actually, on the Ruby side and the Go side, is we compile in, like, all of these different clients. So we've got, like, LightStep in there, we've got Datadog, and every different one you want to use...
D: ...we have to compile it in, and make the binary bigger and bigger and bigger. And with OpenTelemetry there's quite a simple abstraction where your process will send to an agent, and this is something that happens in tracing already: the process is sending to an agent, and the agent's doing some work and then sending it off to be persisted. And one of the steps that they've made in OpenTelemetry is that, instead of that agent being, like, a Jaeger agent, an Elasticsearch agent, a LightStep agent...
D: In order to talk to Datadog, LightStep, and the others, we just talk... it's a very confusing name, OTLP, so not OLTP: OTLP, which is a protocol. And so I just started taking a look at it, because the interface that we've got for tracing at the moment is quite limited, and people are starting to say: hey...
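A sketch of pointing the Ruby SDK at an OTLP endpoint; this assumes the opentelemetry-exporter-otlp gem, and the endpoint and service name are made up:

```ruby
require "opentelemetry/sdk"
require "opentelemetry/exporter/otlp" # assumed gem

# The process speaks one protocol (OTLP) to a local agent/collector;
# the collector, not the application binary, knows how to fan out to
# Jaeger, Datadog, LightStep, and so on.
OpenTelemetry::SDK.configure do |c|
  c.service_name = "gitlab-workhorse" # illustrative
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
      OpenTelemetry::Exporter::OTLP::Exporter.new(
        endpoint: "http://localhost:4318/v1/traces" # hypothetical collector address
      )
    )
  )
end
```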
D: ..."I want to do this, and I want to do that." And they're starting to add OpenTracing code into the application, because Labkit doesn't have... you know, Labkit's got very limited tracing abilities: it's basically on things going into a process and things leaving a process that we do the tracing, pretty much. And so people are starting to pollute their code with OpenTracing code, which is going to kind of compound over time and make it much worse to get rid of that.
D: All of that. And so, like, Gitaly has got some OpenTracing code in it now, unfortunately, that we're going to have to get rid of, and there's more pressure: more people want to do this, and so it's going to start happening more and more. So I think that's why I started considering this now. But it's... it's...
D: It's been really positive, because we can upgrade Labkit to go from OpenTracing to OpenTelemetry without changing the clients at all, except for the clients that have already started using OpenTracing directly, and those we'll have to fix. But as far as I know, there's only one case of that at present. But that's going to get worse, especially as Jaeger comes online and people see the advantages of this. I...
B: I was about to ask about the last thing you just said. If people are adding OpenTracing code, that means they expect to see the data somewhere, and where are we with that? You just said that it's not online yet.
D: I don't know... another reason: it was added because there was a gap, and I probably would have, like, stopped... I probably would have stopped it. What it is, basically, is: when Gitaly forks, or, you know, execs a git process, we don't have tracing around that, and it's a nice thing to trace, right? It's, like, a really nice piece of information. And if I'd been involved in it, I would have said: please do this through Labkit rather than directly. But I wasn't involved in it.
D: At the moment... well, you know, obviously, when you're working on GDK, you can click on any link, you know, it's got tracing up there. And then in staging we've got it in there as well; it goes to the staging Jaeger instance, and I put the link into the... I can give you... I'll give you a quick demo of it.
B: I understand why it's useful. What I'm curious about is why somebody already went through the work of building this when it's not on in production yet.
D: Building what, the trace...?
B: This custom OpenTracing hook around git processes.
D: Well, everything else, like literally everything else, is instrumented, right? I did all of that work about two years ago, and I did it in the hope that it would shortly be followed by the production Jaeger instance. It's taken two years for that production Jaeger instance to come through, but the tracing is still super useful, right? Can you see my screen? Yes? Yeah. So, like, you know, here you can see what's going on here. I think we might have to trim it down a little bit.
D: It's a little bit noisy at the moment. And also, the other thing is, we're definitely still losing some spans. And so when you see this kind of weird stuff, where, you know, you've got a call and then it's kind of, like, disconnected: what's happened there is that that information has been dropped. And so that's one of the things that we need to fix, and I think that going away from that, like, home-brewed Jaeger client to the new one is going to help with that as well.
D: We started the connection; after 500 microseconds the connection to the socket succeeded, blah blah. You know, there's so much information in here that is really useful. So, you know, 2.8 seconds in, we got the first response from the server, and then we started doing a bunch of other stuff. And, you know, being able to trace across... it's weird, the UI has lost... they used to have something where they could show everything connecting together, but it seems to have disappeared.
D: You know, and, like, you can mark traces as being an error. So here you can see, you know: just give me the errors. And you can see here that this request, IP restriction load, seems to have failed for some reason. But, you know, you can see the queries that went through.
D: You can see here, there was obviously, like, something just dropped, right? And it just goes from there. Sorry.
D: Well, there's something... yeah, there's stuff missing here at the moment, you know. The other thing we've got to do is tune this, because you can see it's, like, every single cache hit is on here. You know, this is making hundreds of Redis calls, and, I mean, it is pretty useful, because you can see what it's actually going and fetching from Redis. You know, you can see it was a GET. Oh yeah.
D: But, you know, mostly I think it's going to be really useful for incidents and, you know, understanding what's happening there. So anyway: part of that will probably move over to OpenTelemetry and kind of adopt a lot of the OpenTelemetry stuff, and for the first step of that I've got a merge request, and it compiles in Workhorse. It doesn't compile in Gitaly, but the reason it's not compiling in Gitaly is because there's a gRPC upgrade that's broken in Gitaly, something to do with the load balancer.
B: Well, no, I'm being unfair. But it is... it's a really weird thing that I left behind there for other people to work on.
D: Well, we do have gitaly-ruby in our... you know, we can see gitaly-ruby traces, so...
D: I find that hard to believe. Here's a trace from Gitaly to gitaly-ruby that failed. And so, you know, we've been talking about adding those logs for knowing which of the processes it is, but on the tracing side we've already got that, right? Which is really cool. That's one of the reasons why I want tracing: because, you know, here you can see that this failed on... you know.
B: Pushing on the gRPC upgrade... or, if it's really blocked on that, then if I look into that, that might... I don't know if it's good. I mean, I probably... I know what the load balancer is supposed to do, so from that point of view I'm probably able to fix the problem. But it's not good if this is in Gitaly and the Gitaly team can't fix this problem.
D: The big problem there is that the OpenTelemetry APIs are not super stable either. But from what I've seen, the part of the API that you'd be using inside the application, like to open and close a span, is very stable. It's just the SDK part that's not stable. And so, you know, if you want to send traces to, like, a new endpoint, you know, those things break. But that would be on us.
D: We wouldn't have to kind of change a hundred places inside Workhorse every time we upgraded, and that will be pinned, yeah, yeah. So I think that this is a good way to go, and it will hopefully arrest the "okay, we're just going to start using OpenTracing in the application because we want more traces" problem that I'm seeing coming along.