From YouTube: 2022-12-08 meeting
Description
OpenTelemetry Prometheus WG
E
Cool, well, we've got a good crew showing up. The way I was thinking of doing this: we have questions and things we'd like to ask, but you all have a particular subject that you want to talk about, mainly RUM and client-side, and I believe you have a presentation you'd like to give us, which is great. So I think the way I'd like to do it is maybe we'll just do introductions first. Everyone can introduce themselves.
B
That sounds good. One thing we're going to need to be careful about is not taking up too much time in the presentation, because there is the primary topic that we talked about at the OpenTelemetry Unplugged day related to RUM, but I know Rich also has a couple of other things. Time permitting, maybe we can also get into those a little bit, or they could be saved for another time in the future.
E
Okay, we're really excited to hear your presentation, and at the end of the day, the purpose of these interviews is to receive what you feel is your most pressing feedback for OpenTelemetry. Of course we love compliments, but really we want to hear your feedback on what was frustrating or what's missing; to us that is the actionable information.
B
I think that makes a lot of sense. So maybe I'll just go ahead and introduce myself, and Rich, maybe I'll let you introduce yourself after that, unless you want me to talk about you, I don't know. So I'm Vic Thomas, I work for Medallia. I could go into detail about what Medallia does, but we don't have a whole lot of time; I'm happy to answer questions about what we do if you're interested.
B
My role is performance and observability architect within the engineering team. And Rich, do you want me to sort of introduce you a little, or do you want to talk about yourself?
B
So Rich has recently joined the performance engineering team within engineering, but prior to that he spent many years as one of our lead front-end developers. He's actually been building out some of these observability tools that allow both developers and our tech support team to investigate and troubleshoot our application. Rich, fill in any details here. I don't wanna, no.
B
All right, good. So hopefully that helps everybody understand where we're coming from on this. Rich is going to have a very dev-heavy perspective, which is great; he's going to be able to go into a lot more lower-level details. For me, I'll be able to maybe provide a little bit more context on how it relates. I'm more familiar with OpenTelemetry; we've been using it for the past year and a half to two years, in varying degrees and varying pieces of it.
B
We don't want to keep switching screen sharing, so I'll ask him to switch slides here from time to time. I think I mentioned we do have three different use cases, but the one I talked to you about, Ted, at the unconference, the Unplugged event, was from a macro perspective. So basically, the thing that is really important to our developers, like when Rich was in that role, and our tech support people, and actually management, is to understand how our application is performing from the end-user perspective. Up to this point, a lot of what we had built out was really more from the server side and behind.
B
But I say this from a macro perspective. A lot of talk has been more at the micro level, like troubleshooting and investigation, but what's really been important, and what Rich has been working on, is being able to have awareness of change and trend for load testing.
B
So the common theme here at the end is that metrics are really great for coarse-grained analysis and alerting, but for fine-grained ad hoc analysis that gets really expensive because the cardinality explodes. So metrics maybe aren't the best answer for us in that situation. Maybe, Rich, you can go to the next slide.
B
Okay, so our current solution, which Rich had a big part in building and will talk more about in a little bit, is manual instrumentation that's capturing data in a hierarchical structure within the user agent. I'm going to probably use user agent and browser interchangeably, but we also have a mobile application, a mobile front end to our system, as well. The hierarchy is page view, modules and API calls, so we're capturing the timings for those three levels.
B
That data is then accumulated and sent along to a vendor who provides an SQL-like language for us to do analysis. Okay, Rich, go to the next one. So an ideal solution for us, I think, would be to leverage OpenTelemetry in some way to collect that data in a vendor-agnostic manner, so we're not locked in with that particular vendor.
B
We're not locked in in terms of the payload that needs to be produced to send to them, nor in terms of the storage and query solution. And a couple of thoughts related to this: trace storage is maybe not ideal for slice-and-dice analysis. I hope I'm not being controversial in saying that. It's great for diving into a very specific incident, but...
B
When you want to slice and dice over a large period of time, it's maybe not so great for that. But also, similarly, our data is hierarchical in the way it's captured in the browser.
B
So maybe there's a solution that's kind of like the span-to-metrics transformations, but that could be like a span-to-event-stream kind of thing. I'm not sure exactly what that might look like, but that's something I just wanted to toss at you. Okay, Rich, I'll let you take it away; I think the next set of slides are yours.
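A minimal sketch of what the span-to-event-stream idea floated above could look like: each span in a hierarchical trace (page, module, API call) is flattened into a standalone event record suitable for slice-and-dice analysis. The `Span` and `FlatEvent` shapes here are illustrative assumptions, not OpenTelemetry SDK types.

```typescript
// Flatten a hierarchical span tree into standalone events, carrying the
// hierarchy in a slash-separated path so events remain queryable on their own.

interface Span {
  name: string;
  startMs: number;
  endMs: number;
  attributes: Record<string, string>;
  children: Span[];
}

interface FlatEvent {
  name: string;
  durationMs: number;
  path: string; // e.g. "dashboard/moduleA/getUsers"
  attributes: Record<string, string>;
}

// Walk the span tree depth-first, emitting one flat event per span.
function spansToEvents(root: Span, prefix = ""): FlatEvent[] {
  const path = prefix ? `${prefix}/${root.name}` : root.name;
  const self: FlatEvent = {
    name: root.name,
    durationMs: root.endMs - root.startMs,
    path,
    attributes: root.attributes,
  };
  return [self, ...root.children.flatMap((c) => spansToEvents(c, path))];
}
```

Each emitted event is self-describing, so an event store can aggregate by `path` without needing trace storage at all.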
C
All right, I'm just going to preface this by saying I'm just getting over COVID, so I kind of have that COVID brain going on, so you may see some of that, yeah.
C
Yeah, so I'm going to walk you through kind of how we do this. Just to make you aware up front: our stack is React and Redux, very much a single-page web app. We do some server-side rendering, we do some client-side rendering, we have different architectures, it's definitely distributed. We have Node, we have Java, we have a whole bunch of different stuff.
C
What I'm going to show you is the front-end part of it. For most of our backend stuff, Vic's team does a really good job of using OpenTelemetry, but we struggle, and we've had to build these solutions for the front end because of issues we've had with implementing OpenTelemetry tracing, with finding a good solution. So this is basically what our dashboard looks like; you can see that there are different modules on it.
C
It's the simple architecture like Vic said: this is broken up into a hierarchy. In our applications we have middleware, and that middleware basically grabs events that are happening. They're either user events, where they click on something, or side-effect events from loading things in, and the job of the middleware is to figure out how these events fit in the hierarchical structure and record them with the vendors. So the top level is whatever application there is; then there's a page, or any type of container, whether it's a dashboard or a page (we're just using a page as an example here); and then a module level; and then an API level. As those events feed in, like I said before, the middleware figures out where they're supposed to go, and then it goes to the vendor. The hard part with this is what the JSON looks like. It's all been obfuscated, it's been made...
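The middleware described above can be sketched roughly like this: an interceptor classifies each timing event into the page/module/API hierarchy, then hands it to a pluggable recorder. All names (action shape, record shape) are illustrative assumptions, not the actual implementation.

```typescript
// Sketch: classify timing events into a page -> module -> API hierarchy and
// hand the result to a vendor-agnostic recorder.

type Level = "page" | "module" | "api";

interface TimingAction {
  type: string;       // e.g. "PAGE_LOADED", "MODULE_RENDERED", "API_DONE"
  durationMs: number;
  page: string;
  module?: string;
  api?: string;
}

interface TimingRecord {
  level: Level;
  key: string;        // hierarchical key, e.g. "dashboard/chart/getData"
  durationMs: number;
}

// Decide where in the hierarchy this event belongs.
function classify(a: TimingAction): TimingRecord {
  if (a.module && a.api) {
    return { level: "api", key: `${a.page}/${a.module}/${a.api}`, durationMs: a.durationMs };
  }
  if (a.module) {
    return { level: "module", key: `${a.page}/${a.module}`, durationMs: a.durationMs };
  }
  return { level: "page", key: a.page, durationMs: a.durationMs };
}

// The middleware only decides *where* the event fits; the injected recorder
// decides where it is sent (console, Prometheus, a vendor, ...).
function makeTimingMiddleware(record: (r: TimingRecord) => void) {
  return (action: TimingAction) => record(classify(action));
}
```

Keeping the recorder injected is what makes the output destination swappable, which is the vendor-agnostic property the speakers want from OpenTelemetry.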
C
So it's very compact, it's vendor-specific and it's non-discoverable. If we showed this to an engineer and said, hey, go figure out what's being sent to the vendor, they're like, we have no idea what MX means, or RT, or PL. They're sitting there trying to figure out what's going on. So we do send a lot of metrics, and when they go to the vendor, one of the nice things about this is we can have a high degree of cardinality.
C
We could have a lot of really cool things that we could track here. So, for instance, we could look at the numbers. This is an example of what a lot of our customers would see: okay, in this time frame we made 52 million GraphQL calls, we had this much page load time, this is the p75 and p95, bootstrap time, number of users.
C
It also allows us to really slice and dice the numbers. For instance, in this query performance view here, we can tell you each individual query, what page it was executed on, what modules called it, because different modules would call the same queries. I had to black out the active role, but it would show you the role of the user that was there, the number of counts, the number of errors.
C
It really allows us to slice and dice this, and you could actually even visualize it as well. We had this really cool query language where we could say, hey, give us the performance of this. This again is GraphQL calls, but we can do it at a module level, page level.
C
We can do some pretty amazing stuff, like looking at it by ISP, by region. As a provider for customers all over the world, in different regions, if a customer says to us, hey, we're having a problem, we could say, oh actually, no, your only problem is with Indonesia; this is clearly a problem in Indonesia. But we can go even deeper.
C
We could say it's actually in this city, with a specific ISP, and we've actually had that happen, where we've had customers come to us and say we're having a problem, and we're like, actually, no, it's these stores. We've had funny cases where they're like, yeah, that store is so remote it can only use dial-up, so they literally just don't have the bandwidth to handle the application.
C
So the really cool thing we can do with all this data, which is what Vic was talking about, is use it for other tooling. This is actually a snapshot of the tooling we have for performance testing. Inside of here, we will run a performance test for every release, special branches, special feature sets; we have a platform that will do this. This is an example of a GraphQL call, but it is actually stored in this reporting mechanism hierarchically.
C
So if you see a problem on a page, you can click on that page and it will show you every module loaded on that page. You can find the one that's wrong, then click on that, and it'll show you all the APIs that are part of it. You can see what APIs are causing the problem. But more importantly, you'll notice here it's actually keeping track in the metadata of what version, like up here at the top.
C
You can see the version of it, so you're comparing previous versions, or previous runs of those versions, and then the coloring...
C
This is what's been set to baseline, but the coloring is: the best run of what we're tracking right now is green, and red is the worst, so you can compare its performance against other versions that are there. So the nice thing about having this sort of...
C
We call it the signal stream: we have all this stuff coming in, and we can basically look at it and do a whole bunch of slicing and dicing on it. But because it's in that hierarchical system, very similar to tracing, we have a way of understanding, oh, well, this is failing in this sort of critical path, and sort of going from there. So from a very high level, it allows us to keep track of stuff at that level.
B
Well, I was going to say this may be a good place for us to pause and take a breath for a moment, in case people have questions. We could then continue with the other two use cases, which are related. I don't think we have quite as many slides for the others, right? Yeah. So we could continue; I feel like we're going at a pretty fast pace.
E
Does anyone have a question that's going to slip out of their brain if they don't ask it right now?
B
And definitely, at any time, if anyone needs a follow-up or thinks of a question later, we're absolutely happy to answer it anytime. All right, so let's talk a little bit about the second use case. These are all related.
B
This one is more of a micro perspective, and to a large extent this is probably going to be very similar to what you would hear from other end users of OpenTelemetry. But I think there are a couple of things we're doing that might be interesting, and might give you some ideas of things to include in the future, maybe.
B
We'll give it a shot here. So, like I said, it's more of a micro perspective, and the reason for this use case is that we need end-to-end observability signals for investigations, and I said parenthetically, like everyone else, right? This is probably not something very different, because server-side signals alone are not going to tell the whole story. Rich, if you could go to the next slide, yeah.
B
It's tricky. Our tracing doesn't initiate in the browser. We know that in OpenTelemetry there is the ability to do that; we've experimented with it a little bit, but for us it was really painful because we are using React, and I know there's a React module, I think, in the contrib repository.
B
This is going to take me way too long, so anyway, our tracing today starts on the back end for that reason, but we do want to get it onto the client side and the user agent. Rich will show you some of the things they've built to sort of bridge the observability gaps; he'll do that in a moment. First, though, I wanted to put forth maybe an ideal solution.
B
Something that would play well with our page/module/API hierarchy that we have implemented within React, but also something that could integrate well, perhaps, with browser dev tools, because I think that's what Rich will show you; I think that's where they've spent some time.
C
Yeah, so, like Vic said, we actually took several attempts to get OpenTelemetry on the front end and to work in a satisfactory fashion.
C
Once I show you this, I can tell you we've come up with a theory that we think might work. We haven't had a chance to implement it yet, but it is somewhat promising from what we've looked at. So in order to be able to do this stuff correctly, what we've done is added in a whole bunch of Chrome extensions and a bunch of other things that give us tools, power tools essentially, on the front end. And this one right here...
C
What you're seeing is that the middleware we have here can dump to almost anything: it could adapt to Prometheus, it could dump to other vendors, but it also dumps to the console. If you look very carefully here, you'll see this is a mark event, and that allows us to actually keep track of what's happening on the page. You can say, oh well, for instance, right here...
C
It also will create these cool little tables that give you a breakdown of every single module, their timings, everything it took to get there, and basically just page data, like page TTI and everything like that. The really interesting thing, the reason why in our tools we actually give you the unique hash for the module, is that one of the things we've done on our front end is, as you get information, we allow you to turn off things that are not relevant.
C
So, for instance, you could say, isolate this dashboard to only have these modules. You could essentially say, I only want this module that's on there, and you can go down to one or many, however you want. That allows us not only to make it easier to debug instances, but when we're looking at observability, or at other things that are recording this information, we can make sure there's no noise: we can basically reduce the app down to exactly what the problem is and cut out all other code that's executing. So it makes it easier to find issues that are in there. And to that end, we also hacked the flame charts and we put custom events in there.
C
So you can see, this is how long things take, like page TTI. These are the individual modules, and you can also see when these modules are loaded. Again, we actually had a layout issue, so we could say, hey, it's these modules, and what we could do is individually isolate those modules and see who's causing the layout issue and sort of go from there. So you could go in and see the actual performance; you could line this up to...
C
...all sorts of other things, all of the different things that are on the event, and have a good idea of what's happening at that point in time.
C
Right. Just to let you know, because I did promise to tell you our theory here on the front end: one of the things we have a lot of hope for, but which would require our app to get re-architected, is that we're big fans of XState and Harel statecharts. The idea we've been throwing around is this: the biggest problem with using OpenTelemetry on the front end is the asynchronous nature of JavaScript, where things can get executed out of order.
C
If you put everything inside a Harel statechart, actions are synchronous, so essentially you could fire an action and create a context wrapper in that action. Any GraphQL call or anything being done on the front end would actually execute inside that context, and you could use that as your initial trace that's interacting, and then everything else would use that context to get the trace ID, and so it would all work correctly. We've got a couple of experiments, and they do appear to work.
C
It would just require a large architectural change on our end to get that to work. But because it is synchronous, and because you can't just use a context wrapper in React like you can in a plain JavaScript solution, that's probably the best solution we've found so far. And with that, I'll let us move on to our last point here.
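The statechart theory can be sketched in a few lines: because statechart actions run synchronously, an action can establish a trace context that every call made during that action reads, sidestepping the async context-loss problem. This uses a tiny hand-rolled dispatcher rather than XState itself, and all names are assumptions.

```typescript
// Sketch: a synchronous action establishes a trace context; everything it
// calls during that action sees the same context, so trace IDs propagate
// without relying on async context tracking.

interface TraceContext {
  traceId: string;
}

let activeContext: TraceContext | null = null;
let nextId = 0;

// Run an "action" inside a fresh trace context. The body is synchronous,
// so the context is guaranteed to still be active for every nested call.
function withTrace<T>(action: () => T): T {
  activeContext = { traceId: `trace-${nextId++}` };
  try {
    return action();
  } finally {
    activeContext = null;
  }
}

// Stand-in for a GraphQL call: it reads the active context for its trace ID
// instead of trying to recover it after an async boundary.
function fakeGraphqlCall(query: string): string {
  const id = activeContext ? activeContext.traceId : "no-trace";
  return `${query}@${id}`;
}
```

In the real proposal, `withTrace` would be an XState action wrapper and `fakeGraphqlCall` would be the actual data-fetching layer; the point is only that synchronous execution makes the active context unambiguous.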
B
Yeah, okay, thanks Rich. So maybe we should give one more opportunity to take a breath, if anyone has a question about what Rich just showed, or should we just do it at the end? I always like to give people the opportunity to jump in when it's top of mind, but if not, I can go ahead.
E
I have plenty of comments, but let's wait till the end.
B
Very good, okay. So the third and final use case I want to talk a little bit about is one I hadn't even thought of until recently, when Rich brought it up to me. I think he described, for example, the case in Indonesia where there was dial-up, so they're having very slow performance, and well, sure, because you're using a dial-up connection. Or it could be a corporate user going through a VPN with a lot of different hardware and software.
B
That's not our customer's code; it's not something the customer has coded, it's not something we've coded, but there can be missing time spent there, and it's difficult to pinpoint where that is. Okay, so this use case is just an idea, maybe for the future. Maybe this is something that's already been considered in the OpenTelemetry world, and we're probably not the first ones to say this or bring it forth.
B
But maybe we are, I don't know. So we're a SaaS provider, with a lot of corporate users interacting with the system via their corporate network, usually a VPN. So there are a lot of bottlenecks between the user agent and our system, and having a holistic end-to-end trace that includes those missing pieces would be really helpful. Rich, can you go to the next one? Okay, so this is maybe sort of aspirational, but an ideal solution would be, hey...
B
OpenTelemetry just becomes so ubiquitous that it's embedded in firmware and software, and any respectable provider of infrastructure would provide it. Rich, did you skip over a slide? I was expecting one more, maybe.
B
No, you know what, I think it was missing the current-solution slide, which is basically: we don't have a current solution, right? Rich kind of goes crazy and talks to the customer and their IT team, right, man? I'll let you talk about that, Rich, yeah.
C
I honestly could say this is probably the biggest obstacle for B2B SaaS. My team spends a huge amount of time actually dealing with this. It's one of the big problems, and I'll give you an example.
C
When you have a lot of Fortune 500 companies, their network infrastructure and everything is budget-conscious, so essentially they're not used to a lot of bandwidth. We've had this happen, where customers come in and they're like, okay, we're going to give all of our front-end people and a bunch of other people the ability to use the application, and that application could have a hundred thousand users coming online. So now, all of a sudden, you're not thinking just about the scale of your infrastructure, you're thinking about...
C
Are we going to blow up their proxy server, because we make 200 calls on this page, times a hundred thousand, and they all log in Saturday morning? This has actually happened to us, where 100,000 people all log in on Saturday morning. Well, how does that affect their infrastructure? Can their ISP handle this? Are we in China? Because in China, when we hit the firewall, it adds time for every call that goes in there. And it's not just their infrastructure.
C
We also had a very big issue with a big Fortune 500 company, where we turned our system on and they were like, you're breaking our system, and we're like, there's no way we're breaking your system. But they weren't aware that there was a vendor that was already using 95% of their bandwidth, and just having us come on with a few people was the straw that broke the camel's back. So what we find with a lot of people is that not only do we need tracing as software engineers and as SaaS providers, but the actual organizations themselves need to be able to use it to understand what's moving through their networks and how their networks are performing.
C
This is a huge problem. I would say I spend the majority of my year working with customers, helping them look at their infrastructure and understand what's happening inside it, and there are all sorts of different things. A great example with COVID was the VPN infrastructure, where they're like, okay, this works great, it's been in our stores; now everybody has to go home, and now everybody's going to use the software through our VPN, which tunnels to one location in the middle of nowhere, and all of a sudden nothing works.
C
So if we did have some capability of basically saying, okay, we're starting a trace here, and it's going to go through, say, company A's infrastructure, and we just basically tell it where to report, it could send us sort of public data, like how long this is taking, how taxed they are, all this other stuff, as it goes through the whole system. Then we get a good idea of the tracing, but it also allows us to see what other vendors are possibly doing, and it allows them to understand what's going through their network.
C
So everyone benefits. But I think as a B2B company in SaaS, this is a huge, huge issue, especially when you're selling to some of the biggest customers out there, which are multinational. There's just so much going on, and we have no idea; we have to do things like start looking at the region or location.
E
It's really great to see how you're really trying to get a holistic view from the end user's perspective, and all the little details there. I really appreciated the way you hacked the flame graph, by the way; that's definitely something, once...
E
...with our client overhaul, which I'll cover in a second, because I feel like we owe you a bit of a roadmap about what we're currently doing. But once that's further along, being able to work some dev tools into that package would be really great, and I'm sure we would come calling to see if you could donate some of that to OpenTelemetry.
E
But yeah, so as the moderator I can definitely go into a back-and-forth with you all, but before I do that, I want to open it up to questions from the audience, if anyone has any questions or details they want to dig into.
D
Just a small question. Hi, this is Shubanchu, I'm from Adobe, and thank you for sharing all the information; really appreciate that. As I understand it, you are currently using OpenTelemetry in your backend infrastructure, and for your front-end infrastructure you're using a custom setup for a vendor, which is what allows you to generate all the metrics and data you use for all the calculations. Is that correct?
B
That is correct, yeah. I want to be careful when we use the term metrics sometimes, like Rich does, because I think from a developer's perspective, what we might think of as observability signals, the developer's tendency is to call metrics; I just want to be clear on that. But I'm sorry, your question was also about the scope of where we're using OpenTelemetry?
D
Yeah, I just wanted to understand: on the front-end side, as you were talking about, you are using those observability signals, and on the back end, if I understood correctly, you are using distributed tracing and probably OpenTelemetry metrics to get the metrics out. Is that right?
B
Yeah, more or less, it's pretty close. So, okay, we have, I don't know, Rich, how many development teams do we have now? Maybe 25 to 30. We've had a lot of acquisitions in recent years, so we probably have 10 to 12 different products, and so we have code in Java, JavaScript, Python...
B
PHP, right, yeah, Ruby, yeah, I guess we do, don't we. So we have a lot of different languages, and honestly, because a lot of these acquisitions came on board within the last couple of years, they're still not fully onboarded with all of our processes. So one of the challenges within the observability engineering team is outreach to those teams, to ask them, hey, what are you doing for instrumentation? Most of the time they're not doing tracing at all.
B
They maybe haven't even heard of tracing; they probably know about metrics, and certainly they know about logs. But we're trying to encourage those teams to use the various OpenTelemetry libraries for metrics and for traces. Honestly, though, we're really starting with Java, because that's where we originated: our core product, our flagship product, is Java.
B
It also happens to be where most of our team's knowledge is up to this point. So we are using OpenTelemetry, the Java agent and the Java library, in our flagship core product. We're not yet using metrics; part of that is because metrics only recently became GA, generally available, I think, right? But also because we have a lot of metrics manually instrumented in that application using a homegrown fork of the old Dropwizard metrics library.
B
So it's going to be a pretty big lift for us to shift that over into using OpenTelemetry's metrics, but it's definitely the direction we want to go. Personally, I'm really excited about the multi-language support and the multi-signal support within OpenTelemetry.
B
The libraries slash agents, as well as the auto-instrumentation. I know there are varying degrees of maturity across different languages, but that's really exciting for a lot of reasons; I could talk about that for a long time. I don't want to monopolize.
B
At the moment we use it strictly for tracing. We have multiple data centers around the world, and we have collectors in each of those data centers, but we also ship to a central set of collectors as well.
B
Now, we recently moved over to using Tempo from Grafana Labs, so that all fits together nicely. I haven't looked in the code base, but the receiver configuration for Tempo looks oddly familiar, so I think they're probably borrowing some of the code there. It makes it nice because it's all the same sort of ecosystem and configuration; it's very familiar and makes it really easy to deal with. For metrics, we are Prometheus- and Thanos-based.
B
You know, we have a cluster, or multiple shards, of Prometheuses that we're sort of managing manually, which scrape mutually exclusive, collectively exhaustive sets of metrics, and then we provide the unified view of that in Thanos. But yeah, we probably will at some point move that into OpenTelemetry as well. I was at KubeCon...
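The manual sharding described here, multiple Prometheuses scraping mutually exclusive, collectively exhaustive target sets, is commonly done with `hashmod` relabelling; a sketch (job and label names are assumptions, not this team's actual config):

```yaml
# Sketch: each Prometheus shard keeps only the targets whose address hashes
# to its own shard number, so the shards together cover all targets exactly
# once. Set `regex` to this shard's index (0..modulus-1).
scrape_configs:
  - job_name: app-metrics
    relabel_configs:
      - source_labels: [__address__]
        modulus: 3              # total number of shards
        target_label: __tmp_shard
        action: hashmod
      - source_labels: [__tmp_shard]
        regex: "0"              # this shard keeps bucket 0
        action: keep
```

Thanos (or any global query layer) then deduplicates and unifies the per-shard views, which matches the setup described in this turn.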
B
In the next couple of days after I met you, Ted, I saw a couple of really good presentations by maintainers from Prometheus about the contributions they've done in OpenTelemetry. So I know that there's a Prometheus receiver and also a Prometheus remote-write exporter, which sounds like it could fit in really nicely with the way we want to do things. But that'll probably be a little while, because the current setup we're running is working reasonably well; it's just a little bit more laborious to maintain than we would like.
E
Yeah, I would say, when you do start looking at that part of your infrastructure, that's a place where we would love feedback. The Collector, one of the ways we want it to work, and we've been working with the Prometheus people on this, is to have it be a drop-in replacement for Prometheus servers for that pipelining and scraping duty, just so that when you're also trying to deal with the other things...
E
...that the Collector can do, you don't end up with this double-decker stack of agents and pipelines that you have to be running.
B
Absolutely. I mean, the way we do it right now, we're consuming a lot of resources to keep those Prometheuses running, and they're stateful, but they don't really need to be, especially for remote writing into...
E
So yeah, when you want to leverage the Collector for other reasons, if you end up in a situation where you're now starting to consume even more resources because you're deploying two sets of things, you should be able to combine that into just the Collector. But that's a place where we would love feedback from people.
E
Yeah, let me see, while we're talking about this part of your stack...
E
Also, just before I forget, I definitely want to say: reach out when you're looking to shift to OpenTelemetry metrics from Dropwizard, because I think there are several strategies you could use there to avoid having some kind of hard jump, or having to rewrite all the things in the process. There should hopefully be a way to leave what you currently have in place while progressively adding in OTel metrics as you want to start leveraging them.
B
Of an adapter, or sort of... like, we have a thin layer of abstraction that we could probably plug the OpenTelemetry metrics library in behind, and swap out the Dropwizard. And I hesitate to even call it that; it is based on Dropwizard, but it was severely forked, so it's Dropwizard plus other stuff. But ultimately we probably would like our developers to use the OpenTelemetry classes for that instrumentation, because I think there are going to be benefits for them to do so.
E
And, yeah, one final comment before we turn back to the...
B
Good to know. I'll reach out to our dev team. That's...
E
Yes, using that, yeah. MailChimp currently holds that working group down, but they're always asking for more PHP people to come and help them. Great, okay. So, getting back to the front end really quick: I would love to just apprise you and everyone else of the current state of RUM in OpenTelemetry. So basically, what we have had in the past was really kind of experimental, right? There was some trace-based instrumentation for React and browser events and things like that, but it was very rudimentary.
E
That's been working on that for some time now, consisting of experts in that domain, and where they are currently at is overhauling our semantic conventions and kind of defining how to actually capture all of these events. We're also interested in iOS and Android, but we're sort of starting with the browser, and if we can get through the browser, then look at engaging...
E
You know, Apple and other people, for what we can do with the other mobile clients. But the approach we're looking at for browser is really using tracing when the situation calls for it, right? When it's possible to automatically wrap things in a trace, especially anything that's an event handler leading towards, you know, HTTP requests or other application code.
E
We definitely want to wrap the major frameworks; I think that would be a stage two. So if you have experience there with what you want, it would be great to have you all join that group, you know, to test out what we're currently doing, but also, you know, when it gets to, like, React, understanding what you want. We would also love feedback on...
E
You know, context in the browser, as you mentioned: that's a really painful subject, and often the good solutions for it involve totally rewriting what you've already done, because it wasn't done with those solutions in mind. But that's definitely an area where we would want help. The main approach we're taking is an events-based approach, so we've added a new Events API to OpenTelemetry's logging infrastructure, and we'll be primarily using that to egress data from OpenTelemetry clients, rather than metrics, the idea being to try to conserve resources.
E
You would egress events connected to spans when spans are available, and then that information would be hitting a Collector, or another pipeline piece on infrastructure you control, and in that place you would be generating your metrics out of the events and spans, rather than having the metrics baked in. And that was mainly for performance reasons.
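[Editor's note: the split described here, emit raw events from the client and derive metrics later on infrastructure you control, can be sketched in plain JavaScript. This is illustrative only; the event shape and the `deriveMetrics` helper are invented for this example, and it is not the OpenTelemetry Events API.]

```javascript
// Pipeline-side sketch: derive simple metrics from a batch of client events
// instead of computing them in the browser. The event shape is hypothetical.
function deriveMetrics(events) {
  const metrics = {};
  for (const e of events) {
    const m = metrics[e.name] ?? (metrics[e.name] = { count: 0, totalMs: 0, maxMs: 0 });
    m.count += 1;
    m.totalMs += e.durationMs;
    m.maxMs = Math.max(m.maxMs, e.durationMs);
  }
  return metrics;
}
```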
E
So we'd be interested to hear about your requirements on that front, which I think is my first question here. You mentioned, you know, that resources are really scarce in these environments, and I'm curious about that when it comes to package size, loading this stuff up, the resources you're consuming in these clients, and then also egressing the data.
C
So I would say one of the things that we found initially was egressing the data, because it can actually affect load performance, right? You know, a lot of the APM solutions that we tested in the past, unfortunately, especially in older browsers (it's not a big issue now), with some of them that had the pre-HTTP/2 protocol, some of the ones that are still working on 1.1...
C
We do have people throughout the world that are doing, like, HTTP 1.1, and the egressing of the data would actually use up all the possible threads and basically lock up the application, right? So what we ended up doing in this system is we actually store the data until the page indicates it's loaded; then, when it's loaded, we actually wait for three CPU cycles to actually post that data. So essentially we're trying to base it on when it's idle, the functionality...
C
If the browser has it, to wait for idle frames, it'll start sending that data forward. That can be toggled on and off; for some people we do turn that off, mainly because they have so much going on on their page that it would just accumulate a huge amount of data before we could egress it out.
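[Editor's note: the buffering scheme described, holding telemetry until load and then flushing during idle time, might look roughly like this. The class, queue, and `send` callback are invented for illustration; `requestIdleCallback` is used only when the browser provides it.]

```javascript
// Buffer telemetry and flush it only when the browser is idle,
// falling back to a timer where requestIdleCallback is unavailable.
class IdleTelemetryBuffer {
  constructor(send) {
    this.send = send; // e.g. (batch) => navigator.sendBeacon(url, JSON.stringify(batch))
    this.queue = [];
  }
  add(event) {
    this.queue.push(event);
  }
  scheduleFlush() {
    const run = () => this.flushNow();
    if (typeof requestIdleCallback === 'function') {
      requestIdleCallback(run, { timeout: 2000 }); // don't wait forever for idle time
    } else {
      setTimeout(run, 0);
    }
  }
  flushNow() {
    if (this.queue.length === 0) return;
    this.send(this.queue.splice(0)); // drain the queue in one batch
  }
}
```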
C
So that's there. Other things, as far as, like, loading: another issue that we have is priority, right? Like, what is the actual load priority of assets, and go from there. And I would say that's definitely been a challenge, and we've actually had to go in and manage priority. So, for instance, our APM solutions, the things that we have to load in, have to go in as the highest priority, and then we actually have to somewhat tweak the priority of different assets loading in, to make sure that they're not blocking anything, especially if we have, like, a giant JavaScript block. And I would say the thing that we're seeing a lot of people doing is sort of not going to the progressive web app solution that a lot of people have been talking about in the industry...
C
For years, where it's sort of like, you know, split your code base, have hundreds of dependencies that get loaded in as you need them. That works fine in places like the US that have great bandwidth and a lot of stuff like that. But in other places you really need to have a model where you're loading in a very small amount of chrome, a very small shell, server-side rendering as much as you can, and serving that up and just doing, like, DOM replacement very quickly.
C
And so we do do that for certain things. That becomes tricky, because how do you keep track of tracing of things that are happening server-side rendered versus what's happening on the front end, and where there's a flip? Essentially you have to kind of be like, okay, I'm starting the trace on the front end.
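[Editor's note: one common way to keep the server-rendered portion and the front end on the same trace is for the server to embed a W3C trace-context `traceparent` value in the rendered HTML, which the client parses and continues from. The meta-tag convention below is an assumption for illustration, not something the speakers describe; the `traceparent` format itself is from the W3C Trace Context spec.]

```javascript
// Parse a W3C trace-context "traceparent" value: version-traceid-spanid-flags.
function parseTraceparent(value) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(value);
  if (!m) return null;
  return { version: m[1], traceId: m[2], spanId: m[3], flags: m[4] };
}

// In the browser, a server-rendered page could carry it in a (hypothetical) meta tag:
//   <meta name="traceparent" content="00-...-...-01">
// which the client reads via:
//   document.querySelector('meta[name="traceparent"]').content
```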
C
So, yeah, there's a lot of stuff that goes into that interplay between server-side rendering and not server-side rendering, and having the ability to turn that off, right? So a lot of people are trying to do this sort of, like, isomorphic JavaScript, where you can run it in different locations, and that really messes up how you track stuff sometimes, right? Right, yeah.
C
The wheel... we've had to do all the strategy around that, and I'll give you an example. So, like, module loading, right? We tried to basically just create a module wrapper in React, and it's basically just like, hey... because, you know, in React you have different events, and one is "when I was rendered to the page," right? So you could...
C
When I start running, that's the mark start, and when I render fully to the page, that's the mark finish, right? And we were like, okay, that's fine. But what we found out very quickly is that the business logic inside the module doesn't necessarily mean that it's done when it's rendered to the page. It may render to the page, and then it may go to a bunch of different GraphQL calls that have other dependencies.
C
The module metrics wrapper, that's there, and then we will give you hooks that are basically like "mark time to interact," so they can interact even though it isn't completely done, and then "mark module done" is when we consider it to actually be completely done. And so that's the only way we've been able to get around it, because there's no way to instrument events correctly out of the box; you don't actually know when they're done. That's a business logic decision, right?
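[Editor's note: the module-metrics wrapper described, with explicit hooks because only business logic knows when a module is truly done, could be sketched like this. The hook names mirror the ones mentioned in conversation; the recorder itself and its API are invented for illustration.]

```javascript
// Wrap a module with lifecycle marks. Render start/finish can be recorded
// automatically; "time to interact" and "done" are hooks the module's own
// business logic must call, since only it knows when its work is complete.
function createModuleMarks(moduleName, now = Date.now) {
  const marks = {};
  const mark = (name) => { marks[name] = now(); };
  mark('start');
  return {
    moduleName,
    marks,
    markRendered: () => mark('rendered'),          // painted to the page
    markTimeToInteract: () => mark('interactive'), // usable, maybe still loading data
    markModuleDone: () => mark('done'),            // e.g. after GraphQL calls settle
    durations() {
      // Report each mark as an offset from the start mark.
      const out = {};
      for (const [name, t] of Object.entries(marks)) {
        if (name !== 'start') out[name] = t - marks.start;
      }
      return out;
    },
  };
}
```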
E
And are you using this marking system because actually using something like a span, you know, requires you to line the context up? Is that one of the problems, being able to pass the context around properly? Yes...
C
Exactly. So the thing is, like, with your JavaScript, your generic JavaScript (and I believe you can do this in Angular as well), you can wrap it in sort of an execution context, like the whole thing, and you basically can say: okay, this is my global variable, which is the trace ID, and whenever you're looking for it, basically any JavaScript executed in that context has access to it. It's kind of like a wrapper, or, you know, like you're doing currying or something like that, right?
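[Editor's note: the execution-context trick described, where any code run inside the wrapper can see the active trace ID without it being threaded through arguments, can be sketched for synchronous code like this. It is a toy stand-in for what libraries such as Zone.js do far more completely; as the next exchange notes, a naive stack like this breaks across async boundaries.]

```javascript
// A tiny synchronous execution-context wrapper: code run via withTraceContext
// can look up the active trace ID from anywhere in its call stack.
const contextStack = [];

function withTraceContext(traceId, fn) {
  contextStack.push(traceId);
  try {
    return fn();
  } finally {
    contextStack.pop(); // always restore, even if fn throws
  }
}

function activeTraceId() {
  return contextStack[contextStack.length - 1] ?? null;
}
```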
C
So that works. You cannot do that in React, and so that's the problem. Like, we tried doing it in React, and the problem is, as things are being called, because it's all, like, virtual events and it's all stuff happening in different areas, it loses your context, you know, as things are being called asynchronously, right? So that's why we decided to try and mess around with doing state charts: essentially, when an action, which is basically an event that happens on the nested finite state machines...
C
When that happens, it's synchronous, so you can basically say: at this moment, this is the trace that's doing everything, right? Because currently we have to kind of build the trace up in a data structure on our end and then push it all out to our vendor, but it's not as a trace; it's just sort of interrelated data that looks like garbage.
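[Editor's note: because state-chart actions fire synchronously, the active trace can be captured reliably at the instant each action runs. A minimal sketch of that idea follows; this is not XState, and the machine shape and API are invented for illustration.]

```javascript
// A toy state machine whose action handlers run synchronously, so each
// recorded event can be tagged with the trace that is active at that instant.
function createMachine(transitions, onEvent) {
  let state = 'idle';
  let currentTraceId = null;
  return {
    setTrace(traceId) { currentTraceId = traceId; },
    send(action) {
      const next = transitions[state] && transitions[state][action];
      if (!next) return state; // no transition defined; ignore the action
      // Synchronous: currentTraceId is reliable right now, unlike in async
      // React event handling where the execution context has been lost.
      onEvent({ action, from: state, to: next, traceId: currentTraceId });
      state = next;
      return state;
    },
    get state() { return state; },
  };
}
```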
E
Yeah, so it would definitely be great to work with you all, one, to get your feedback on the browser instrumentation that we're actively building, but then, yeah, the next step... I think React is probably the first of, I mean, there's, you know, an endless number of front-end frameworks, but...
E
As you say, it's not just a simple matter of instrumenting React: with these frameworks, you also end up having to potentially create a context management system specifically for that framework, because internally that framework has some ability to keep track of its state, and we would love to work with you on building out that system. Our approach tends to be, like, figure out a way to generically keep track of a bag of context within React.
C
The way we had some success, for what you guys have: what we did is we actually created a provider in React, and that provider would manage the context. So you know how, like, in your code, there's "create this context and then execute inside this context," you know, your traces or your spans or whatever, and it kind of keeps track of where you are in the context manager that you guys built? We would do that inside of our provider, and we had some success with it. It's just, as different events were happening...
C
You kind of have to, like, create providers on the fly and sort of, like, wrap them, and sometimes they didn't work. So, yeah, we understood what you guys were trying to do, and it works 100% in vanilla JavaScript; it's just the way that React works, it keeps sort of unpacking the work, and, you know, it can get harder to make it work right. So, yeah.
E
Yeah, totally, yeah. Well, I think a lot of those details would probably be best worked out in that RUM or client SIG. I'll follow up with you all.
E
Hopefully, if you can join that SIG, or at least if I can introduce you to them, so that when they're hitting various milestones, they could have you all on board as, like, you know, alpha or beta testers, just to give us feedback; not necessarily in production, but just being able to get feedback on whether the solution we're building would actually work for you. I think that would be a really great actionable item to come out of this session.
E
I think, since we have all the client stuff, I think, well covered, I would like to ask our most important but most generic question of all: of the OpenTelemetry you use today, what improvements are your top priority? I mean, I know client RUM, obviously, but for the rest of the stack, is there anything... if you could change anything about OpenTelemetry today, what are your top requests?
C
Right here in Network: I would love it if I could just click right here, and it would show me all the traces associated with this call, inside of, you know, whatever I'm using, whether it's Chrome, Brave, whatever, and it would just be in there. We actually have a Chrome extension that we built as a hack to do just this, and it shows the actual call, and then, when you click on it, it'll show you, like, the stuff underneath it, like...
C
Basically, the tracing information that's underneath it. And this would be huge, because this would allow us... if you can imagine, instead of seeing this data, you would just see the trace that's there. This would be huge for our customer support people, for engineering, for PS, because they could click on it and say...
C
Let me go down this list: okay, what's red, what is, like, super long? And then the second thing that I'd add to this is to have the ability to go into our tracing and set baseline metrics on it, and then show the trace. We actually did a little proof of concept for this, where, like, you'll see the trace bar, and then, if it's bigger than the baseline, it's red for the rest of the time, and if it's less than it, it'll be, like, green, and then this is, like, gray, like, right there. So...
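[Editor's note: the baseline proof of concept described, coloring each trace bar by comparing its duration against a recorded baseline, reduces to a simple classification. The function, field names, and color scheme below are invented for illustration.]

```javascript
// Classify span durations against per-operation baselines:
// red = slower than baseline, green = at or under it, gray = no baseline recorded.
function classifySpans(spans, baselines) {
  return spans.map(span => {
    const baseline = baselines[span.name];
    let color = 'gray';
    if (baseline !== undefined) {
      color = span.durationMs > baseline ? 'red' : 'green';
    }
    return { ...span, color };
  });
}
```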
C
Who has no idea about how this application works can open up tooling, look at that, and be like: wait a minute, these three things are 50% slower; I'm gonna take a screenshot and I'm gonna send this to engineering. And that would save hundreds of hours of people trying to figure out what's going on. So that's my number one thing I'd love to see. Awesome.
E
Awesome, so, yeah. And moving that information into browser dev tools specifically, because you...
C
Right, but moving it into DevTools, so that, like, essentially, anybody can look at it and understand what's going on. And I would love to be a part of any team that did that, because that would be awesome. So...
C
It understands, like, what's going on and what's wrong, yeah. And even if you want to get crazy with that, you could even, like, start pre-packaging, like, who it belongs to. So you could...
B
Boy, I don't have anything that would be as specific and actionable and exciting as what Rich just provided, but, I mean, it would just be more of a general sort of thing. I mean, it's not even a critique or criticism. I think, actually, the maturity of the project has been impressive; like, the growth in the maturity of the project has been impressive over the past year or two years.
B
The one thing that's been a little difficult for me, historically, sometimes, is finding the right documentation in the right place, right? Like, sometimes there are three copies of the documentation, and I don't know which one is the actual source of truth and up to date, and that sort of thing. You know, honestly, I end up finding myself spending a lot of time just in the GitHub repository. And so maybe even if just that becomes sort of... I mean, I don't know; it's like, you've got to have the sort of storefront kind of page too, but typically then I end up in GitHub.
E
Is always helpful, and, yes, our goal is to make the website, the OpenTelemetry website, be that dashboard, so that, yeah, eventually you should be able to just go to the website and from there easily find the canonical documentation for whatever it is you're looking for. That website is itself a repo in OpenTelemetry's GitHub org, and we're always interested in issues. So, yeah, one thing I will say is, going forward: anytime you have problems with documentation, whether it's onboarding teams or anything...
E
We would love to hear about it. You can always just reach out to me on Slack, but also just, like, opening an issue there and being like...
E
Yes, we would love active feedback from you as you're onboarding teams, because the area we're trying to improve the most right now is that initial installation and setup, being able to bootstrap applications and service teams onto OpenTelemetry, and we know that can be, you know...
E
Tracing-based instrumentation can be tricky and painful, and, you know, we really want to improve the experience on that front. So, anytime while you're doing that work, which is when people have it fresh in their mind, if you want to just be sending us a constant stream of complaints about, you know, anything that was hard to find or difficult, we're actively trying to improve that right now, and we would really appreciate it. Oh...
B
That's great to hear, yeah, because that is one of the challenges I face sometimes: when we're trying to onboard somebody or a team, putting them in the right place. And then, inevitably, what we've ended up doing is, you know, creating our own pages on our Confluence site, and then, you know, it's like, you know, that could go stale. You know, yeah.
E
Generic feedback, right? Like, we're generally aware that, yes, we need to improve installation and documentation, but it's much more helpful when we can get specific examples, and people tend to only hold those in their minds when they're actively...
E
Feel free to channel all of that information and frustration directly to us, because we're really...
E
That is an interesting question. I think, for a user feedback channel, just Slacking me is always fine; I think we're happy to help channel that information to the correct GitHub repo. Ultimately, we want it recorded as issues. But, Rhys, maybe this is the thing we can discuss in our next user meeting: where is the best place to store all of this data, so we can keep track of it?
E
Cool, and I believe we are out of time, yep. So, yeah, unless anyone has any final questions or comments, I'm just gonna say thank you very much for a very informative feedback session. Great.