From YouTube: 2022-09-29 meeting
Description
OpenTelemetry Prometheus WG
A: A couple of minutes to join in here.
B: Henrik, I don't have anyone to take notes. Hunter, could you — yeah — take notes?
B: Okay, that seems like a nice cross-section of members from the community, so welcome, everyone. Today we have a special agenda: we are excited to invite an end user of OpenTelemetry to come and talk with us about their journey and their experiences. This is our third community-led interview, so if you haven't attended one of these before, please know that you're welcome to jump in and ask questions.
B: We're still figuring out the right organization of these meetings, but generally speaking there's usually some time at the end to jump in if there isn't a natural jumping-in point for questions as we talk with our end user. So welcome to the Relativity team. Before we get started, I just want to confirm that you don't have any concerns about today's session being recorded and uploaded to the OpenTelemetry YouTube playlist. Is that correct?
B: Awesome, thanks for the confirmation. For anyone who is in the audience or watching the recording, just a quick reminder to respect the vendor neutrality of these community interviews, and please don't use any content from today's session for commercial purposes. And so with that, Cal: you and I were able to catch up a little bit beforehand, so I know you've got some great content to lay the foundation for our discussion today.
C: Okay, thank you very much. Actually, I have a couple of slides, just to keep my own thoughts in order, so I'll go through those first; it should be really quick. I don't want to bore anyone with PowerPoint, but let me see if I can share the right screen first. Hopefully that's the right one.
C: So you should see slides, hopefully — a big orange slide. Yep, I see it — okay, cool. So this is a talk about Relativity's use of OpenTelemetry. A little bit about me: I've done lots of different things. I started as a physicist, but since 2020 I've been an architect at Relativity. I was brought on basically to redo — you know, sort of level up — our observability for our core products, and that's what I've been doing since then.
C: It's been slow progress, but we're actually making good progress right now, and it's been speeding up; our adoption of OpenTelemetry helps a lot. A little bit about Relativity itself: Relativity provides products and solutions that automate the legal e-discovery process, so we're in the legal field.
C: So that's our logo and our motto: organize data, discover the truth, and act on it. Our customers are things like large law firms and large multinationals, and we have a large partner network that integrates their own software with our platform. We have many products.
C: The main one we're talking about today is RelativityOne, which is our SaaS product. It's the one we operate ourselves. It's a very complicated platform that does lots of different things, but basically this is the thing we need observability into, so that's where we sort of deal with everything. Just a bit about RelativityOne itself: it's a complicated platform, but it's also largely distributed. We run three separate environments in around 20 regions globally, so it's a very, very large platform and very, very distributed as well.
C: We want to monitor this whole thing. The layout of what we do with OpenTelemetry is actually simple if you look at it from the high-level view: developers — basically, their services — send telemetry into the system, which is based on OpenTelemetry. From OpenTelemetry we send our metrics to New Relic, and that's basically for our operators to understand how things are operating in more or less quasi-real time. We also have a secondary store for everything: we store everything in Azure Blob as well.
C: Some of that is taken out and put into Snowflake and Tableau, where a set of our managers actually look at it and use it for reporting purposes and also long-term data analysis, so that's sort of an unusual use case there. And we actually have an internal AI team now, which is starting to look at our telemetry and looking to use it to make predictive analyses of various things. That's also a long-term data-storage type of thing that we're doing there.
C: The big thing here, obviously, is the OpenTelemetry part of this — the part circled — and it looks really simple from there. The reality is a lot more complicated, so this is a view of our data flow through things. The core part of the system is actually quite simple, but we have a whole bunch of collectors which run as agents, which pull in particular types of data from the different sources, and that's where most of the complexity of the system comes in.
C: But nonetheless, this is sort of the way the system looks at the moment — a little complicated. And then, to give you an idea of the scale of things: the core collector instances — those are the ingress ones, which are on the right — we're running about 450 different instances of those throughout our infrastructure. The agents — we have a lot more of them, just because they're a lot more varied — there are about 700 of them. In terms of load through the system: for logs —
C: — what we call events, about 13k per minute, and about one million per minute for metrics, and traces as well. So that's sort of the load we have overall. The end-to-end latency at the 95th percentile is somewhere around 200 milliseconds, so it's actually quite a fast system. And the overall volume — there's an asterisk here, because it's the volume of everything we've collected, and we have two monitoring systems: the one based on OpenTelemetry and the older one.
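The agent-plus-core-collector layout Cal describes (roughly 700 agents feeding about 450 ingress collectors) is the standard OpenTelemetry agent/gateway pattern. A minimal sketch of the gateway side might look like this — the endpoints and exporter names are illustrative assumptions, not Relativity's actual configuration:

```yaml
# Hypothetical gateway ("core collector") config: receive OTLP from the
# agent tier, batch, and fan out to the metrics backend.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 5s

exporters:
  otlp/newrelic:
    endpoint: otlp.nr-data.net:4317   # illustrative backend endpoint
  # A private Azure Blob exporter, like the one Relativity built, would
  # be registered here under its own name.

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/newrelic]
```

The secondary store would simply be a second exporter listed in the same pipeline.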
B: Yeah, awesome, thanks. I'm personally really excited about some of the more advanced use cases that Relativity has built their system around, especially getting some of that telemetry data into Tableau and Snowflake for data analytics purposes — that's not something I've traditionally seen too commonly. But before we jump into some of my questions: for anyone in the audience, is there anything that you wanted to jump in and ask at this point, after seeing the systems that Relativity has up there?
D: Yeah, I've got a quick question. You mentioned that operators are using this telemetry data, but you also listed developers on there. Can you talk a little bit about the different types of operators you have that are reviewing the telemetry information?
C: Yeah, so it's kind of a complicated internal situation at the moment. Originally, when we moved from a server product to a SaaS product, there was a separate service delivery team that dealt with operations, and so there was a clean separation between the developers and the operators. Over the last couple of years we've been pushing much more toward a DevOps model, so a lot of those operators are traditional developers who now wear two hats: the programming part of it and also the operating part of it.
C: We still have a service delivery department to some extent, which handles some of the typical operations that we do, so there's a whole set of people in that group as well who act as operators and look at the data that's there. They tend to look more at a higher level — looking at the alerting and things like that — and then, you know, send out alerts or tickets or whatever for individual issues.
C: Yeah, so the logging situation is a bit complicated at the moment. The initial work that we did on the observability part was really geared toward metrics and traces, plus some logging of what we would call events — basically, state changes for services, which we've modeled as OpenTelemetry logs. Those were the core things that we started with, and our entire logging system was actually separate.
C: So there's an entire parallel logging system that actually logs to Splunk, and we have those two things running at the same time. This year, actually, we're planning on converging everything into our monitoring system based on OpenTelemetry, so we're expecting those logs to start flowing through OpenTelemetry as well.
C: At the moment, some subset of them are actually flowing from Fluent Bit in Kubernetes, to OpenTelemetry and into New Relic, where we store the operational data. That's a very small fraction of what we have, but the hope is that over the next year or so, essentially our full logging telemetry will also end up in New Relic, also going through OpenTelemetry.
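The Fluent Bit path described here can be wired with stock components: Fluent Bit forwards Kubernetes container logs to a collector, which exports OTLP. One option is the collector's `fluentforward` receiver, which speaks Fluent Bit's native forward protocol — a minimal sketch, with hypothetical endpoints:

```yaml
# Hypothetical collector config: accept logs via the Fluent Forward
# protocol from Fluent Bit and ship them onward as OTLP.
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006

processors:
  batch:

exporters:
  otlp:
    endpoint: backend.example.com:4317   # illustrative OTLP destination

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      processors: [batch]
      exporters: [otlp]
```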
C: Did that answer your question?
B: Okay, excellent. So, Cal, I understand that you have a centralized observability team within Relativity that's driving the adoption of OpenTelemetry. What are some of the challenges that you have, being that centralized team, in encouraging broad adoption of OpenTelemetry across the organization?
C: Yeah, so I kind of mentioned at the beginning that it's been sort of slow progress trying to get people to adopt this. I think it's mainly for two reasons. One is that developers are really busy.
C
They
have
a
lot
of
features
that
they
want
to
get
out
and
all
that
kind
of
stuff
and
they
kind
of
view
monitoring
as
as
I
wouldn't
say,
not
important,
but
they
already
have
a
lot
of
monitoring
from
our
older
system
that
works
for
their
purposes,
so
sort
of
the
transition
into
the
newer
system
and
the
translation.
C
The
transition
of
telemetry
isn't
viewed
as
a
high
high
priority
thing,
so
I
think
that's
one
impediment
that
we've
had
in
sort
of
evangelizing,
open,
Telemetry
and
trying
to
get
it
more
adopted
across
the
across
the
organization.
C: The other thing we've seen, which I think is the bigger impediment, actually, is that monitoring up until this point, at least in our organization, has really been focused on debugging a particular service for a particular problem. So teams have a lot of custom metrics which are really geared toward their own platform, which only they understand, whereas we really see the value in being able to look at data across the entire platform, across all services, and being able to have a high-level view that other people can actually understand.
C: So one challenge is basic adoption, but we've also been pushing toward a tracing-first mentality, and getting teams to adopt tracing has been very difficult, just because of getting over that hump. And it is one of the things which is problematic, because you get the most value from tracing once it is broadly adopted — because then you see the flows through the entire system — whereas if you're only tracing an individual service, it's a very, very limited view that doesn't give you much value beyond what you would get from the metrics.
B: Similar challenges. And that kind of brings up the question: you've got systems in place already — some of that legacy monitoring. What were the decisions around standardizing on OpenTelemetry? Can you tell us a little bit about why you wanted to bring OpenTelemetry into the organization to replace those legacy options?
C: Yeah, so a couple of years ago, when I came on board — basically to do this work — we actually did a POC with a lot of different vendors and a lot of different solutions.
C: In the end, even though OpenTelemetry was really, really young at the time, it turned out that the OpenTelemetry POC went very well. Even back then the collector was super stable and you could actually use it — and in fact we had to extend it: during the POC we actually extended it to do some of the things we needed. So I think that part of it was an extremely positive experience, being able to use it as a whole.
C: The other sort of high-level thing we wanted was that we didn't want to be vendor-specific, at least in our code base, anymore.
C: So OpenTelemetry gave us a way of making a vendor-neutral code base on our side, and of doing things in a way which allowed us to switch vendors at some point if we needed to — so, less vendor lock-in for the instrumented code. I think the other thing that pushed us this way was that, because we wanted to be vendor neutral, we were looking out there for other standards, and it looked like OpenTelemetry —
C: — at the time was getting a little traction, and it was sort of looking to be the solution going forward. So even though it was a bit risky at the time, we kind of jumped on it and said: you know, this is the direction we want to go. So it was a bit of a bet on where the community was going at the time, but it seemed reasonable for us, and it also seemed to be stable enough that we could actually adopt it at that point.
E: You mentioned a desire to use OpenTelemetry logging. What functionality are you primarily waiting on for OTel logging?
C: For OTel logging, at this point I don't think we're really waiting on any functionality. I mean, it would really be nice if, you know, Fluentd and Fluent Bit would support OpenTelemetry directly — I know that's on the roadmap, in their next release, to sort of simplify things — but at this point we actually have a path into OpenTelemetry: we take things through Fluent Bit and Fluent Forward and get them into the system.
C: Because of the philosophy which is there — use your native logging and then just have it sent to OpenTelemetry at the end — it's a bit more difficult, because it makes it difficult to write those low-level sinks that you actually need to send things to OpenTelemetry. So for the logging in particular, it'd be very nice if the SDKs would sort of have a lower-level API where you could actually generate those things.
E: I was actually about to ask that. So if OTel added to the SDKs — and this is in the plan, eventually — the ability to author logs: I think our long-term vision is that you would author them such that they're not just written to disk as, like, JSON or human-readable text, but written in a strongly typed binary format. Would that be appealing to you? I ask this just because we've heard mixed things.
E
There
are
some
end
users
who
say
like
absolutely:
the
performance
will
be
better
it'll
be
more
reliable,
the
logs
would
be
a
structured
type
and
there's
others
who
say
no,
no,
no,
no,
no
I've
got
so
much
text-based
logging
going
on.
Like
don't
don't
distract
me
with
this.
Please
I'm
curious.
What
your
opinion
is.
C: Yeah, so my opinion there is that we're pushing very strongly to have structured logs. We want structured logs everywhere, so as part of that, having something which is more strongly typed — being able to actually understand what logging is coming through — would be very beneficial for us. We're very much a .NET shop, so we use Serilog, basically, for our logging, and eventually, if it doesn't show up —
C: — we would love to just write a Serilog sink that logs to OpenTelemetry. We're not going to change the code — we're still going to use Serilog — but we'd really like to send it out directly to OpenTelemetry and not have to deal with intermediate protocols. So for us, having that low-level API in the SDK, or having a separate one just for logging, would be useful, because we could write those things ourselves.
C: Obviously, if those things existed, we wouldn't care, but someone's going to have to write them, and having some support from OpenTelemetry to make that easier would, I think, be helpful.
D: I'm actually curious: Serilog has support for Microsoft's ILogger abstraction. Would leveraging that help with getting your logs flowing through the current SDK solution that's provided?
C: It might, in some cases. We haven't looked directly at what that path would be for some of our code, because we're doing many transitions all at the same time. So — airing dirty laundry here — but, you know, we're also moving from our older infrastructure to Kubernetes as well.
C: So a lot of the things we're logging will actually be logging to Kubernetes instead, and that path for us is pretty clear: we'll do the standard stderr/stdout logging in Kubernetes, we capture that with Fluent Bit, and Fluent Bit, ideally, will send it directly to OpenTelemetry. That'll be our path for most things. For things which will continue to use Serilog, and use a sort of separate code path for their logging, it's less clear what protocols we want to use there.
C: ILogger is obviously a core part of that, so Serilog to ILogger, ILogger to OpenTelemetry, would probably work. I'd just start to worry a bit about the number of translations that happen in that path — we obviously do want this to be as performant as possible. So it depends on the performance, I guess, in the end.
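The ILogger-to-OpenTelemetry half of that chain can be sketched with the OpenTelemetry .NET logging SDK, which plugs into `Microsoft.Extensions.Logging`; Serilog's ILogger bridge would then sit in front of this. A minimal sketch — the console exporter and logger name are illustrative choices, not Relativity's setup:

```csharp
using Microsoft.Extensions.Logging;
using OpenTelemetry.Logs;

// Route Microsoft.Extensions.Logging calls through the OpenTelemetry
// logging SDK; swap the console exporter for OTLP in a real pipeline.
using var loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddOpenTelemetry(options =>
    {
        options.IncludeFormattedMessage = true;
        options.AddConsoleExporter();
    });
});

ILogger logger = loggerFactory.CreateLogger("Demo");
logger.LogInformation("Workspace {WorkspaceId} indexed", 1234);
```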
B: And I see a question from the audience: adopting tracing is often a challenge, irrespective of OpenTelemetry. Is there anything that the OpenTelemetry community can help with in that regard?
C: Yeah, I think that is a general problem all in all. I guess probably the best thing to have would be sort of testimonials — you know, companies or organizations that have done it and have seen benefits in doing it — because that seems to be the main problem we have: motivating the developers, motivating the organizations, to see the value in it. And it's not entirely that we don't see the value.
C: There are groups within the organization which do, and certainly we do as an observability team, and we push along those lines, but the average developer doesn't see that. Having sexy videos out there, or, you know, being able to really just show the value of this —
C: — to a given developer would be helpful. In terms of technologies, I'm not sure there's a lot of technological stuff that will help with that, other than, you know, just motivating people to do it.
B: I like this concept of testimonials. There is an adopters-of-OpenTelemetry list — it's not exactly the same, but inside the community we've got a list of organizations who have adopted OpenTelemetry and want to publicly claim that it works for them, and I can point you to that if you wanted to add Relativity onto it at some point. So, as an end user, what has been your experience with OpenTelemetry? You've said that it —
C: In our data flow, we used to have an Event Hub in the middle, basically as a safeguard against losing data, should our storage services, you know, go down for some reason.
C: We looked into it a bit, and it looked like there was an issue with the library which was being used by those components.
C: In the end, we ended up taking the Event Hub entirely out of our system, so we just don't use those components anymore — a head-in-the-sand kind of thing, but it actually worked for us, and in fact it lowered our latencies quite a bit in terms of the overall system. So we've taken on some additional risk, in terms of back-end services going down, to simplify the system overall and to actually get better latency through it. That was one issue we ran into with the collectors.
C: The other one we're running into now — and I've just put some issues in the chat — is that we're occasionally getting dropped telemetry from time to time, and it's been really hard to debug the cause of that. In fact, we're looking at it in more detail now. My suspicion is that it's actually the service mesh that we're using — we're using Istio for our service connections.
C: We need to understand whether that's a collector problem or an infrastructure problem. But other than that, in terms of the transport of things across the system, OpenTelemetry has been great. In terms of functionality, the fact that we can actually extend the collector itself has been very, very helpful for us, and it's helped us patch holes, essentially, in the functionality of the telemetry system — for instance, sending to specific back ends: we have to send all of our data to Azure Blob.
C: We have our own private exporter for that, essentially, and we have quite a bit of experience now in generating those things, so it's not a problem for us to actually build those components and do those things. So, you know, being able to add exporters has been very helpful, and we've filled a few gaps there that we've had. The other gap that we saw, and plugged ourselves, is going from our old system to our new system.
C: Our old system was a free-for-all in terms of the metadata on the telemetry, which makes it very difficult to use across teams. So that is one thing we wanted to avoid from the start in going to the new system, and in the new system we actually have a strict schema. Developers chafe at the schema, but so far they've been going along with it.
C: We actually enforce that schema, and we drop telemetry that doesn't match it. Those are sort of components that we've added to our own collector to do that kind of stuff. In terms of feeding those back: some of those are general, some of those are specific to Relativity. We have a desire to actually feed those back to the community.
C: However — as you all can probably imagine — we are a legal organization with lots of lawyers, so getting the permissions to feed things back is sometimes difficult. We do have permission now to contribute on the telemetry stuff, so hopefully we can start feeding some of these things back to the community, if it's of interest. But those are sort of the areas where we've seen holes, and we've been able to fix them ourselves.
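Relativity's schema enforcement is a private collector component, but the shape of the idea — drop telemetry that is missing a required attribute — can be approximated with stock components. A hypothetical sketch using the contrib `filter` processor with an OTTL condition (the attribute name here is made up for illustration):

```yaml
# Hypothetical approximation of schema enforcement: drop metric data
# points whose resource lacks a required attribute. The real component
# described above also validates value types, regexes, and ranges.
processors:
  filter/require-team:
    error_mode: ignore
    metrics:
      datapoint:
        - 'resource.attributes["team.name"] == nil'
```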
E: Just following up on that, because you mentioned you have a fairly strict schema internally: is that something that you would want to maintain? Or, if OpenTelemetry were to have — I mean, this isn't in the works, by the way, but if it were to have a very formal logging schema — is that something you'd prefer to snap to and just use instead?
C: Yeah, so for us this whole thing with the schema and the community was sort of awkward, because we actually started with the schema first — we were using the schema from the beginning — and now, you know, with the semantic conventions and all that, there's a bit of tension between our internal schema and what's happening with the semantic conventions. Our policy, basically, is that we want to follow the semantic conventions as much as possible, so we're doing that.
C: I don't know how to say this in the right way: I don't see that we would ever not have our own internal schema. A lot of the things that we have as attributes are things like options on API calls and things like that, so we will always have an addition to whatever formal schema is out there. But we would love to have a formal schema which is out there and be able to easily add our own things on top of it — that would be fantastic in terms of the semantic conventions.
C: It's interesting, you know, and it has been evolving over time. We actually have a schema not only on the attribute names; we have a schema on the values as well, so we actually enforce types of values as well as particular values. For instance, we have regexes for some of our things, and we have, you know, constraints on the integers and things like that. It ends up being much closer to JSON Schema in terms of capability than what's currently available with the semantic conventions.
B: I see a question from Austin in chat regarding schema enforcement.
B: Go for it.

G: So you mentioned that you were doing schema enforcement and that you were using OpenTelemetry for, like, metadata consistency across things — which is great, by the way; love it. But have you experienced any problems with adoption around the fact that a lot of the semantic conventions are not 1.0? Is that blocking, or is that just kind of seen as: well, a lot of the individual OTel parts aren't 1.0 yet either?
C: Yeah, so the 1.0 versus not 1.0 frankly has never been an issue for us philosophically, because we adopted OpenTelemetry two years ago, when everything was in alpha, essentially.
C: We have our own internal schema, so we pull those in as we can, to make them available to our internal developers, and we sort of manage any breakage that happens there, which is good. Internally, we actually never remove attributes, so our internal schema is always strictly backward compatible, and we ensure that that's always the case. Now, I say that and I add sort of air quotes to it.
G: When you say you don't use the semantic conventions — is my interpretation of that right, that for things that have auto-instrumentation, either (a) you aren't using the auto-instrumentation, or (b) you are using the auto-instrumentation but you're adding your own schema on top of it?
C: Yeah, so the auto-instrumentation is something that we've actually avoided for a long time, because it wasn't very complete, so for our use cases it wasn't terribly useful in most cases. That's changing now, so we're seeing more and more of it being useful.
C: The part of the auto-instrumentation — well, it's not quite auto-instrumentation, but, for instance, the host metrics receiver — that's something we're using more and more, and in fact there is a class of internal workloads for which that's very well adopted, and we are doing that. For that type of stuff, the attributes which are, you know, coming from those things, we've actually added to our schema to make sure that we capture them. In terms of the other auto-instrumentation, we're not using it very much.
C: So, you know, we will do more and more of that, and obviously the tension between the semantic conventions being used there and our internal schema will become more difficult, but hopefully we'll be able to manage that. Cool, thank you. Yeah — and Dan put some details in the chat on the types of auto-instrumentation that we're probably using.
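The host metrics receiver mentioned above is configured on the collector rather than in application code. A minimal sketch — the interval and scraper set are illustrative choices, not Relativity's actual settings:

```yaml
# Hypothetical host metrics receiver config: scrape basic system
# metrics from the node the collector agent runs on.
receivers:
  hostmetrics:
    collection_interval: 60s
    scrapers:
      cpu:
      memory:
      disk:
      network:
```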
F: The processors that do type conversion on the collector themselves are a sort of useful tool for some of those deprecations — like for type changes and stuff. Just some more context.
B: I see that you asked a question in chat around the community's thoughts on rigid schema versus Wild West. Do you feel like you got enough of an answer to that, or do you want to ask it?
B: Okay, yeah — go for it.
C: Dan just mentioned it in the chat, but managing our schema is a difficult process, so we're paying a price for having that schema — your mileage will vary, kind of thing, I guess.
B: Excellent. So, yeah, Cal, you talked a little bit about some of the challenges that you've had as an end user. Is there anything that you want to ask the community for — things that either need attention or could be improved to help with your OpenTelemetry experience or adoption?
C: Yeah, so I think one thing would be very useful — I mean, up until this point, to be really honest, most of the effort that we've been putting in, and most of the changes, have really been on the infrastructure side of things: putting our monitoring system in place, and OpenTelemetry. A lot of that hasn't necessarily been directly visible to the developers themselves.
C: In fact, we actually translate a lot of our old metrics to OpenTelemetry and ingest them with our own custom receiver to do that kind of stuff.
C: One of the things I think has been difficult with that is that the documentation is often not great for a lot of those things. So, you know — I know everyone hates to do documentation, but that's one area which would benefit a lot.
C: The other thing I think would be extremely beneficial: we're a .NET shop mostly, but as we get the AI group in, and a few others, we're starting to get more and more different languages coming in. One thing that would be very helpful is if there were sort of a standard OpenTelemetry demo for a service — basically, show how to instrument a service, but have that same demo instrumented for Python or .NET or Java or whatever.
C: Oh, it exists? Great — I haven't seen it.
G: Yep, it's trying to be extremely comprehensive, and it'll also have some metrics in there too — OTel metrics and traces, with logs coming soon.
B: Excellent. And so — go for it, Chris, yeah.
D: So you mentioned that you have a lot of .NET services, and .NET has two projects out there for OpenTelemetry: you have the OpenTelemetry SDK and you have the auto-instrumentation project. One requires you to make some modifications to your code, usually minimally, and for the other, the goal is that you can configure some environment variables, just have the thing installed, and extract the same type of telemetry. Do you see benefits to using one versus the other in your environment?
C: Yeah, I'll let Dan answer that question in more detail in a second, but yes, I think it's useful to have both, actually, and I think we actually use both in the current system.
C: One of the issues that we've had is that one part of our system is actually a platform that other developers use to run their own applications within the platform itself, and we've run into issues with dependency version clashes with the .NET library, for instance. It's not necessarily bad one way or the other, but OpenTelemetry brings in a huge number of dependencies, so the likelihood of a clash —
C: — there is very high, and that's one of the things we've had to manage sort of carefully in that part of the system. I think both are useful, but Dan is much more versed on the .NET part of things and has opinions there.
F: Yeah, so — and I don't actually know how to differentiate these auto-instrumentations — we use the auto-instrumentation that's part of the SDK: when I wire up my provider, I add, you know, the optional things like AddAspNetCoreInstrumentation, etc., etc.
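The in-process wiring Dan describes — registering instrumentation libraries when building the provider — looks roughly like this with the OpenTelemetry .NET SDK; the service name and exporter choice are illustrative, not Relativity's actual setup:

```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Build a tracer provider with instrumentation libraries registered
// explicitly in code, as opposed to the separate auto-instrumentation
// project that attaches via environment variables.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .ConfigureResource(r => r.AddService("demo-service"))
    .AddAspNetCoreInstrumentation()
    .AddHttpClientInstrumentation()
    .AddOtlpExporter()
    .Build();
```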
F: I think we've briefly looked at the other repository — the auto-instrumentation repository — but it was, like, alpha; I'm not sure if it's even stable yet. I was a little bit hesitant to use that alpha feature and roll it out everywhere. It's easier, sort of, to manage a singular entry point for the code, I think, than ensuring the environment variables — if, say, they got renamed — were consistent across every environment or something. So that was kind of a barrier there.
F: But one of the reasons why I think we prefer the SDK is that often we need that custom instrumentation — probably not for spans as much, but for the metrics that teams are producing. Some of those — not all of those — are operational; some are more reporting kinds of metrics, and, you know, teams need to have that control.
A
Yeah,
so
thank
you.
I
can
just
tell
again
for
the
auto
stimulation
like
developers
and
that
still
is
possible
to
create
manual
like
spans,
metrics
and
logs
Delta.
Instrumentation
is
mainly
built
to
make
easier
setting
up
like
the
whole
bootstrapping
of
the
SDK,
to
make
it.
You
know
using
a
material
business
instead
of
code
and
also
to
automatically
apply
the
instrumentation
libraries,
but
still
you,
the
the
users,
can
create
their
own
custom,
metrics
and
spans,
and
we
encourage
to
do
it.
A
I
think
Mike
wants
to
ask
something
or
tell
something
you
have
a
couple
of
very.net
specific
questions
for
you.
Cal
I
might
have
a
follow-up
depending
on
your
answers,
but
high
level
across
your
organization,
what.net
Frameworks.
What
run
times
are
you
targeting?
Are
you
do
you
have
some
Legacy
like
dot,
NET,
Framework
and
Dominic,
or
how
do
you
manage
that.
F
So,
basically,
all
all
of
the
code
that,
like
we
used
to
run
in
an
access
platform,
is
still
running.net
framework,
like
everything
is
essentially
part
of
our
core
project.
That's
done
at
462
and
higher,
so
it
is
supported
by
you,
know,
the.net
stuff,
that
open
Telemetry
produces
any
any
kind
of
like
modern,
like
new
service.
I
would
say:
that's
created,
you
know,
like
a
web
API
that
we
just
hosted
directly
in
kubernetes.
That's
targeting
like
some.
You
know,
later.net
version
things
that
have
been
developed
over
the
past
few
years.
might be targeting .NET Core 3.1, but, you know, I think that's coming up on end of life here, so those are actively being upgraded to current versions. If a team was going to produce new code today, they would be doing it in .NET 6. So it's really a large mix: stuff that runs in the platform, which has that .NET Framework requirement, and then kind of newer tech that's typically hosted on Linux running .NET 6.
F
I talked about what we're going to do; I've talked with Alan West sort of about that. We don't distribute any of the OpenTelemetry...
F
Oh
Allen's,
here,
hi
Alan,
we
don't
distribute
any
of
the
like
open,
Telemetry
live.
We
like,
we
don't
wrap
that
internally
in
any
sort
of
like
you
know,
like
I.
Don't
know
thing
like
that.
So
like
from
that
perspective,
it's
it's
not
really
concerned,
but
it's
it's
kind
of
hard
for
me
to
like
be.
F
You
know,
tell
you
like
how
I
think
that
could
affect
us
or
might
affect
us
for
the.net
framework
stuff
I
want
to
say
that's
unaffected,
so
shouldn't
have
any
impact.
Because.Net
Frameworks.
It
still
doesn't
support
for
the
newer
stuff.
Our
security
posture
is
generally
like
if
something's
end
of
life,
you
need
to
upgrade
anyway,
so
I
think.
From
that
perspective
it
puts
us
in
a
better
spot,
but
you
know
it'd
be
hard
for
me
to
speak
on
I.
Think
more
specifics
than
that.
F
So I'm curious if anyone is successfully doing tail-based sampling. We have this hope to do tail-based sampling, but I really don't see how it's possible at a high volume. Like, if I'm producing millions of spans per minute, what type of configuration can I run that would support that? I would be super curious to know that
F
If
you
run,
you
know
a
deployment
in
kubernetes
with
these
specifications,
it
would
support
whatever
that
would
help
me
solve
so
many
problems,
because
today
we
we
really
struggle
against
like
doing
head-based
sampling,
but
then,
like
wanting
to
say,
collect
you
know,
anomalous
spans
and,
like
things
are
longer
than
a
time
period
or
think
like
I,
would
want
to
capture
every
error,
so
any
like
guidance
or
anything
that,
like
the
community,
could
do
to
like
promote
solving
those
problems
to
me
would
be
like
a
huge,
huge
win
foreign.
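As an aside for readers: the upstream OpenTelemetry Collector does ship a tail_sampling processor. A minimal sketch of its configuration (the values here are illustrative, not a recommendation from this discussion) looks like:

```yaml
processors:
  tail_sampling:
    # How long spans are buffered before a per-trace decision is made.
    decision_wait: 10s
    # Upper bound on the number of traces held in memory while waiting.
    num_traces: 50000
    policies:
      # Always keep traces that contain an error.
      - name: errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      # Keep traces slower than 5 seconds.
      - name: slow
        type: latency
        latency:
          threshold_ms: 5000
      # Sample a fixed percentage of everything else.
      - name: baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```

The difficulty raised in the question still applies: all spans of an in-flight trace have to be buffered on a single collector instance for the full decision_wait, which is where the memory cost comes from at millions of spans per minute.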
G
Same
one,
the
unfortunate
answer
is
this
is
why
people
pay
the
people
that
pay
us
a
lot
of
money,
because
it's
a
very
hard
problem.
I
would.
G
Theoretically,
there
is
some
math
you
could
do
about
throughput
and
memory
and
the
real
hard
one
here
is
actually
like
calculating
the
average
length
of
time
that
a
trace
needs
to
that.
Basically,.
G
How
long
did
you
sit
right
and
and
that,
like
lights
up,
did
a
lot
of
this
math
years
ago,
because
we
had
when
we
did
our
original
satellite
implementation,
because
this
is
a
because
that's
how
we
did
tail
based
sampling
I
will
tell
you.
This
is
not
current
Tech.
This
is
not
what
we
sell
to
people
now,
but
I
can
just
give
you
a
pre.
G
You
know
tell
you
what
it
was
like
for
people
that
were
doing
tail-based
sampling,
then
with
a
different,
slightly
different
approach
than
the
hotel
collector
does,
but
we
had
people
running
like
M2
like
M2
to
M2
like
2XL
memory,
optimized
AWS,
nodes,
literally
hundreds
of
them
in
order
to
have
like
15
minutes
of
look
back
in
order
to
do
like
effective
tail-based
sampling.
G
It
is
extremely
at
scale,
and
this
this
is
like
a
lift
style
thing
right,
where
you're
talking
about
like
millions
of
QPS
or
whatever
it
it's
really
hard
to
do
at
scale,
and
so
the
the
options
are
talk
to
a
commercial
provider
that
has
like
figured
out
various
ways
to
do
this,
for
you
or
hope
that
or
Advocate
I
guess,
for
those
commercial
vendors
to
move
more
intelligent
sampling,
algorithms,
Upstream
and
I
do
believe
that
some
of
that
is
coming
like
the
sampling.
G
The
intelligent
sampling
proposals
that
I've
seen
from
like
the
spec
side
do
help
with
this
by
at
least
giving
you
more
durable
tail
sampling,
but
you're
always
gonna
have
you're,
always
gonna
run
to
the
problem
of.
If
you
want
to
do
like
really
100
accurate
tail
sampling,
then
you
need
enough
memory
and
CPU
to.
G
If
you
have
a
gossip
protocol
between
your
collectors
and
again
we're
talking
about
like
custom
stuff
here
or
things
that
don't
exist
today,
then
you
can
have
some
other
service
make
those
sampling
decisions
for
you
or
use
heuristics,
like
basically
probability
based
plus
some
like
attribute
filtering
and
bucketing
to
kind
of
figure
out
like
oh,
this
is
what
we
should
save
and
then
send
that
out
to
and
then
have
a
processor.
That's
basically
saying
like
Okay
other
thing
that
makes
this
sampling
decision.
G
Please
tell
every
other
collector
to
do
it,
but
now
you
have
other
fun
distributed
systems.
Problems
about
you
know,
cap,
theorem,
BS,
so
I'm.
Sorry
that
that
probably
wasn't
a
super
helpful
answer,
but
this
is
also
like
you
are
not
wrong
to
feel
like
this
is
insurmountable
because
it
it
kind
of
is
in.
E
Trying to rebuild it yourself, at least. As a community, we did a lot of demos of tail-based sampling relatively early on, because it was added to the Collector as a feature. I work at Splunk currently; I wasn't working at Splunk at the time, but at Omnition, which then merged into Splunk. I think at the time they were trying to really push that, and my impression is that they discovered it was easier just to build a bigger trace database on the back end and just tell people not to sample.
G
People probably sample too aggressively just as a starting point, because they keep data too long. The way to think about this, I think, is less that traces are durable, and more... here's a good example: it's really hard to do a very consistent, accurate tail-based sampling implementation.
G
However,
it's
very
it's
relatively
much
easier
to
have
all
a
bunch
of
traces
go
into
blob
storage
and
then
have
a
Cron
job
looking
at
like
last
access
date
and
then
every
day
reaping
that
blob,
storage
and
saying
like
well,
nobody
looked
at
this,
so
it's
gone
right
and
you're
you're
always
going
to
be
paying
for
something,
but
it
usually
makes
more
sense
to
kind
of
bring
in
a
lot
of
data
and
then
start
cutting
it
Down,
based
on
like
like
an
lru,
you
know
like
a
lru
catch.
G
Basically,
right
like
you
can
easily
see
like
what
traces
people
saw
and
you
can
also
start
doing
like
more
processing,
then
right.
So
if
you
have
you're
like
well,
we
want
to
make
sure
that
all
errors
get
processed
cool.
That's
an
easy
thing
to
do
like
pretty
cheaply
when
you
already
have
all
the
traces.
You
know
you
just
go,
iterate
start
iterating
through
things
or
when
you
are
ingesting
them.
You
store
them
in
such
a
way
that
it's
easy
to
kind
of
do.
G
Lookups,
based
on
things
you
care
about
right,
like
error,
error,
attributes
or
or
whatever
else.
D
While
we
have
the
the
Greater
Community
here,
I
think
this
is
a
good
opportunity.
One
thing
that
I
think
I
understood
about-
maybe
that's
maybe
unique
to
your
use
case-
is
that
again,
if
I
understood
you
correctly,
there's
you
have
long-running
bash
jobs,
and
it's
not
so
much
that
you
have
something
that's
distributed
in
nature,
but
that
it's
doing
a
lot
of
work
in
process
and
your
desire.
There
was
to
do
something
like
tail
based
sampling,
but
that
might
actually
be
able
to
be
done
by
the
application
itself.
F
Yeah, we do have, like, API-driven workloads that are kind of short-lived, and that works great. For some of our long-running batch workloads, we don't always see the same kind of behavior or the same use case. I don't know; Austin mentioned some more things coming in regards to head-based sampling, so maybe this fits in there. But right now, a common way that we would configure tracing in our .NET app is to use
F
The
parent
base,
sampler
combined
with
the
trace
ID
ratio.
Sampler
so
basically
use
the
parent
sampling
decision
and,
if
that
doesn't
exist,
fall
back
to
some.
You
know
percentage
based
thing.
F
What they want to sample, and what the right way to do that configuration is. I don't know if that answers your question, but that's kind of where I would want to leave that: like, "yeah, here's my default, but when these situations arise," like, "oh, I encountered a span that's longer than five seconds; I know in my service that would always be a bad thing, and I would always want that data."
G
If
we
know
a
trace
or
a
spans
will
be
sample,
we
don't
want
to
do
the
work
of
actually
allocating
all
the
stuff
that
needs
to
go
in
it.
So
from
a
performance
perspective
like
that's
the
thing,
which
is
why
you
need
to
push
those
decisions
off
later
down
the
ingest
pipeline
right
yeah
like
that's,
that's
one
of
those
things
that
I
don't
think,
there's
anything
I
can't
imagine
I,
can't
imagine
a
sampler
that
would
I
guess
it
could,
like.
You,
could
sort
of
think
of
like
a
late
sample.
G
But
the
problem
is
when
you
talk
when
you
say
like
parent-based:
it's
like
okay!
Well,
what?
If
the
parent
was
sampled
and
then
my
child
has
a
situation
where
it
shouldn't
be,
sampled,
I
can't
go
back
and
I
can't
rewrite
history
on
the
parent
and
if
I
did
export
that
span,
then
I
would
now
have
a
broken
Trace
like
because
a
lot
of
this
is
sort
of
encoded
at
the
spec
level,
and
it's
gonna
like
is
it's.
It's
intended
Behavior
right,
yeah.
F
F
E
G
Would that work? Because, again, the sampling decision is put into the span context, which... oh.
D
Yeah, Austin, the thing that was on my mind in this respect was: you could do it, say, in one of the language SDKs, but it would have to be, you know, maybe a custom exporter or something like that. Or, yeah, like, you know that there was an error at this point, or you know the entire duration of that span, right? So you're effectively doing tail-based sampling, not at the trace level, but...
G
Well,
part
of
the
thing,
though,
is
that,
if
you've
made
a
sampling
decision,
when
request,
you
know,
request
comes
in
my
you
know.
Let's
talk
in.net
terms,
my
action
filter
or
whatever,
whatever
Interceptor
is
there
that
is
pulling
that
out
of
the
head
request.
Headers
looks
at
it
says:
parent-based
or
ratio
based,
okay,
parent-based
sample
true
or
is
sampled
false,
so
this
is
sampled
out
by
Design
we
shouldn't
even
like
the
trace
the
span
shouldn't
open.
It
should
just
like
sit
there
and
there's
no
allocations.
G
I don't think that's weird; I think that's a really interesting idea. I don't know if the actual thing that should happen there is a trace; I think there should be some sort of telemetry. But I see where you're going, though, and it's interesting.
G
A log fallback would basically be what I would think, right? It's like, hey, create a log message or create a log event. Or another way to think about it is: buffer that notion somehow on the OTel side, and then on the next actual trace, pop in a message, like, "hey, there was an elided error here." I don't know.
B
Just reminding folks that we can always connect on CNCF Slack, potentially, to follow up on this. We are at time, and I want to thank the Relativity team for coming in and talking with the community this morning, or this afternoon for you all. And you can find, I think, almost everyone who's in the community on CNCF Slack.
B
So
if
there
are
follow-ups
that
is
great,
are
there
any
closing
remarks
from
ucal.
C
Nope,
just
thanks
for
thanks
for
having
us.