From YouTube: 2021-11-02 meeting
B
I do realize that we're intermixed with all the other SIGs in the same YouTube channel, so please, you know, make sure that we get the most thumbs up that way. It helps us with our revenue.
H
Excellent. Yeah, let's get into it. I think there were some questions from the folks trying to work on real user monitoring.
H
Kind of browser... browser tracing. I know this group exists. I haven't totally, like, dug into the details of what their path, I guess, to RUM is. I think, you know, early on in OTel, the assumption was that JavaScript is JavaScript, and OTel JS will work just as well for the browser as it does for Node apps, and in some ways I think that might still be relevant, but I know there was at least a point in time where a group was spinning up on RUM.
H
They kind of wanted a different data model, and wanted kind of a different direction altogether, so, yeah, it would be good, at least for me, to check in and see what is brewing in those camps. But there is, at least... yeah, I think there's some debate as to whether span is the right thing to represent stuff coming from a browser. They kind of wanted something span-like, but more... I don't know, I feel like it was more along the lines of an event.
H
That's
part
of
it,
I
think,
there's
like
a
whole
other
series
of
just
like
performance
characteristics
when
you're
in
in
somebody's
browser
as
well
and
there
there
is
like
a
slack
channel.
I
know,
and
occasionally
people
say
something's
there,
keeping
on
top
of
all
these
things
is
more
than
a
full-time
job.
I
think.
H
But at any rate, I think, specifically, the thing that was being talked about was the device semantic conventions from OTel, and whether they were the right thing to represent a browser.
H
The next thing that was up was that Tigran is asking why trace flags are recorded in log records. So there's a trace flag specifying that, yeah, it was sampled, not sampled, or unset. Possibly... maybe I might not be remembering all the flags exactly correctly, but I don't think that's super important. I think this did turn into a fairly long, drawn-out conversation, but, like...
H
The
reasoning
there
was
a
reason
why
trace
flags
were
in
logs
and
they
were
supposed
to
kind
of
be
like
a
hint
to
the
logging
back
end
that
you
can
correlate
this
with
the
trace.
If
you
know
the
flags
indicate
sampled,
I
think
the
conversation
that
kind
of
spun
off
of
this
is
like.
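The "hint to the logging back end" idea could be sketched like this. This is a hypothetical illustration, not the actual OTel log data model types: a log record carries the active trace context, and the sampled bit in its trace flags tells the back end whether a matching trace will actually exist to correlate with.

```ruby
# Illustrative only: field names mirror the concept, not a real SDK API.
SAMPLED_FLAG = 0x01

LogRecord = Struct.new(:body, :trace_id, :span_id, :trace_flags, keyword_init: true) do
  # A back end can use this to decide whether trace correlation is worth attempting.
  def correlatable?
    !trace_id.nil? && (trace_flags & SAMPLED_FLAG) != 0
  end
end

def emit_log(body, trace_id:, span_id:, trace_flags:)
  LogRecord.new(body: body, trace_id: trace_id, span_id: span_id, trace_flags: trace_flags)
end
```

A record emitted under an unsampled context still carries the IDs, but the cleared flag signals that no stored trace will be found.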
H
I think that was one thing that kind of came up, and there were a handful of other, I guess, issues with this, possibly in practice, but...
D
It'd be nice if it was standardized. From a user perspective, yeah: if there's an easy way for me to have recorded spans show up somewhere where a developer can snag them, and to search them in my telemetry tool, that's a really common workflow. I just got after my telemetry vendor for that. I was like, you know, it's too much work to take a printed trace ID or traceparent and get to seeing my trace, because that's what my developers want all the time. They're like, "hey..."
H
I stopped using this analogy (there's probably a better one), but the grail of grails would be to be able to do that even between vendors. But I think the stuff that's being worked on here, and the stuff that's being worked on in the W3C, will unlock that somewhere.
H
You might also need trace state, and inner knowledge of how people are using the trace state, to make it all work. But...
H
We
have
mentioned
the
probability
sampling
spec,
probably
every
week,
for
like
the
last
three
months
at
this
point
or
more,
but
the
the
spec
pr
is
there
and
the
new
bit
of
information
is
that
there's
a
go
implementation
of
yeah.
I
think
we
talked
about
that
pr
a
little
bit
last
week.
It
introduces
this
consistent
probability,
sampler
and
consistent
parent
probability
sampler,
so
this
will
have
implementations
of
of
those.
G
I have started attending that sampling SIG, to understand all the mathy maths, to help contribute to that sampling.
G
I can't, in this venue, represent what happened last week, because I'm still understanding all the mathy math, but I'm doing what I can to help that out.
F
If we handle this fine in Ruby... I think we're in line with the spec already, if I had to guess off the top of my head. So, anyway, that's all.
H
Cool. This guy, Josh Suereth, is working through this State of OpenTelemetry document.
H
So we do have events by way of span events, so I think maybe we're just calling them out. Oh, okay, yeah, as a sub-telemetry of traces, which, you know, are really just logs if you think about it, and...
H
Cool. So this doc, I think we talked about a little bit last week. It's really for all...
H
Probably, yeah. There were definitely some interesting cases around, like, metric labeling that were discussed. I don't know how much we should kind of delve into those, as we have yet to break ground on the metrics implementation.
C
Yeah, I will.
H
I'll try to time-box this to one minute, but basically: if you have a metric like process CPU time, you kind of have two options for how you might want to record this thing. You might just, like, record a total and be done with it, but you might also record it as kind of the, you know, system, user, whatever the other versions of those all are, and end up with kind of fractional numbers that add up to be the whole... that add up to be the total.
H
I guess, in that case, you should do exclusively one or the other, and not mix these. Ultimately, if you had the fractional parts and a total, I think one of the rules is you should be able to add up all of the subcomponents and get to a total.
H
And
if
you,
if
the
number
that
you
sum
these
all
up
to
is
is
not
some
kind
of
number
that
makes
sense,
then
that's
probably
going
to
be
a
problem
ultimately
for
your
your
metrics
back
end,
and
I
think
one
reason
behind
this
is
that.
H
Backhands
want
to
be
able
to
kind
of
do
like
cardinality
reduction,
where
you
do
kind
of
like
drop
labels
and
then,
if
you
drop
labels,
yeah
kind
of
be
able
to
apply
like
an
aggregation
to
the
remaining
data.
And
it
should
still
make
sense.
And
if
you
get
into
a
situation
where
this
doesn't
hold,
then
that
whole
the
cardinality
reduction
kind
of
begins
to
fail.
D
But, like, I just... this one's kind of funny, because Linux just kind of doesn't give you anything remotely resembling metrics that add up to some rational number, yeah. So, like...
H
Yeah, like, I definitely understand the theoretical arguments. I think, as you start to get more and more attributes on a metric, and you kind of, like, start thinking about how they could be rolled up, I feel like it's hard to understand if you're doing things properly, I guess, and probably very easy to break these relationships.
H
Yeah, so I think, ultimately, this will lead to maybe some spec clarification language, but that's maybe most of what we will get.
H
Oh, I'm just noticing the chat over here: Francis broke ground on the metrics API/SDK this morning. So...
G
Can I ask a question of the brain trust here? It comes up in the context of metrics, where the precision of the timestamp of the recording will, in some back ends, make for a lot of events (we call them events), a lot of individual metrics. If you want to correlate, we have this technique where we can truncate the timestamp: instead of getting, like, "your CPU at this nanosecond", we go "within this..."
G
Second,
your
cpu
was
about
this
range,
which
means
within
that
second,
all
the
other
metrics
could
get
correlated
too
like
within
this.
Second,
this
is
what
all
the
numbers
look
like.
Does
the
brain
trust
like
that
idea
or
dislike
that
idea?
Thinking
that
say,
maybe
nanosecond
precision
on
your
measurements
are
important.
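The truncation technique G describes could be sketched as follows. This is a generic illustration (not a vendor or SDK API): nanosecond timestamps are collapsed into fixed-width buckets, so measurements taken within the same bucket correlate.

```ruby
NANOS_PER_SECOND = 1_000_000_000

# Truncate a nanosecond-precision timestamp down to the start of its bucket.
# Integer division discards the sub-bucket remainder.
def bucket(timestamp_ns, width_seconds: 1)
  width_ns = width_seconds * NANOS_PER_SECOND
  (timestamp_ns / width_ns) * width_ns
end
```

Two measurements 400 ms apart land in the same one-second bucket, while a measurement in the next second does not, which is the correlation payoff (and the cost-savings point raised later, since fewer distinct timestamps mean fewer distinct events).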
D
Sometimes, right, yeah. But I think... so, the way I've been explaining this for years is: as you get closer to the metal, you want more precision, but, as you get further out, you want less, right, so that you can do all those things you mentioned, like correlation and stuff. So you actually want to quantize the values as the timeline, the kind of span of time you're looking at, gets bigger. And it's tricky, because it changes as you go through the spectrum, when I'm looking overall.
G
A lot of times, in our thinking, we're like: usually your metrics back end has been aggregating these anyway, so you're getting, like, an average over five seconds, ten seconds, a minute. We're like, so maybe we can truncate. Maybe you don't need nanoseconds for this stuff to get the payoff of all this stuff now being correlated easily.
B
I was going to say, I'm back... I can't imagine us, or any system, being able to process that volume. That would cause more problems, trying to measure this stuff, right, if we were taking nanosecond measurements. I think that's why most vendors are doing, like, some interval. It's...
D
Yeah, your copy_to_user call in the kernel just blew a random number of nanoseconds.
F
You know, for what it's worth, they've sort of refined the definition of what OpenTelemetry is supposed to cover recently, to be very explicit that it's about, like, data collection for observability. So, not to say I disagree, but more to say it's not our problem: if that precision is to get lost, it should be up to an arbitrary back end; that falls outside our scope of discussion. Not to say I don't like discussing it.
F
Destroyed
like
I
would
think,
dropping
it
at
the
instrumentation
layer
is
an
anti-pattern
not
that
it's
necessarily
that
bad,
but
but
because
open
telemetry
has
made
a
conscious
decision
to
be
really
explicit
around
saying
that
it's
we
want
to.
You
know
like
we
want
to
exclude
observer.
You
know
back-end
use
cases
anyway.
That's.
G
Yeah, it was not... I was raising this as a, "oh..."
G
Here, yeah... well, while we're here, I have some people who I know are smart, and we're talking.
D
So, take a look at digital audio workstations and quantization, the way that they do it, right, because that's actually what it feels like you're talking about: you kind of want to take things that are kind of jittered across different time buckets and align them in the same seconds.
G
Compute... it actually boils out to cost savings, because we charge per event you send us. So, if you send us fewer, wider events, you save money, and also you get a payoff in correlating metrics within a certain time bucket, exactly, yeah. But, again, I agree, Eric: it doesn't necessarily get implemented in the instrumentation; it could be in a collector, yeah.
F
I
think
it's
software,
I
think
it's
super
thoughtful.
I
agree
cool
or
we're
on
the
same
bridge.
I
I
will.
H
Spec,
stick
is
almost
done.
The
last
thing
that
happened
is
josh.
Siroth
mentioned
trying
to
work
with
the
http
conventions
in
java,
and
it
was
horrible
and
you
wanted
to
make
some
updates
to
them.
He
didn't
really
specify
exactly
what,
but
I
was
sad.
I
don't
think
he
used
the
word
horrible,
but
there
was
so.
H
So
yeah,
that's
that's
what
I
have
for
the
spec
sig.
Any
questions
comments
concerns
that
we
didn't
get
to.
H
Glad
I'm
glad
people
get
something
out
of
it.
I
often
question
if
it
is
useful
at
all
but
yeah
it
seems
like
we
have
good
conversations
so.
H
Excellent,
so
what
should
we
head
on
to
next,
I
think
sometimes
robert
has
an
update
from
the
instrumentation
sig.
Sometimes
we
have
some
pressing
things
from
our
from
our
seg.
E
Wonderful,
the
big
takeaway
from
the
last
instrumentation
segment.
Let
me
take
away
from
me
is
nothing
too
dramatic.
I'll
just
read
my
notes,
but
basically
I
was
talking
about
formalizing,
the
the
http
spec
of
it,
and
this
was
really
the
note,
the
bulk
of
the
notes
I
took
because
I
thought
this.
This
point
was
interesting.
They
originally
explored
having
a
parents
band
to
wrap
retries,
so
you
could
see
kind
of
a
way
of
grouping
multiple
retries.
E
The
last
thing
they
presented
was
showing
is
just
having
quite
simply
links
to
previous
retry
attempts
and
an
attribute
that
represents
how
many
times
it's
retried.
So
you
can
see
retries
or
redirects
if
it
follows
multiple
redirects
it'll
link
back
to
the
request
that
followed
the
the
redirect
and
the
same
is
true
for
retries.
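The shape being proposed could be sketched roughly like this. This is a simplified stand-in, not the real SDK span API, and the `http.resend_count` attribute name is illustrative, not confirmed spec: each attempt is its own span carrying a link back to the previous attempt plus an attempt-count attribute, rather than a wrapping parent span.

```ruby
# Minimal stand-in for a span: a name, attributes, and links to other spans.
Span = Struct.new(:name, :attributes, :links, keyword_init: true)

# Record one HTTP attempt; retries/redirects link back to the prior attempt.
def record_attempt(name, previous: nil, attempt: 1)
  Span.new(
    name: name,
    attributes: { "http.resend_count" => attempt }, # illustrative attribute name
    links: previous ? [previous] : []
  )
end

first  = record_attempt("HTTP GET")
second = record_attempt("HTTP GET", previous: first, attempt: 2)
```

Following the chain of links recovers the full retry history without imposing an artificial parent span over the attempts.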
E
It was temporary, right, it's transient, so I didn't really have a strong opinion on the introduction of a new error code; I just thought it was an interesting discussion. I think, for us, implementing this... if this gets formalized and makes it into a spec saying that HTTP instrumentation should behave a certain way, this is one of the behaviors that we're forced to implement.
E
So, I think it's interesting. I think it makes sense conceptually: having a bunch of retried network attempts and then just seeing them all linked together like that, I think it's really nice. So, I don't know, that was the really big takeaway. There were some other portions talking about timelines and expectations of these semantic convention meetings and what they should deliver, and some discussion around, like, December...
E
Everything
shuts
down
so
like
don't
ever
try
to
aim
for
anything
for
at
least
around
then
so
I
kind
of
phased
out
a
little
bit
at
that
part.
To
be
honest,.
E
Yeah, I thought it was really cool. I think a lot of the stuff that's happening there seems to make sense to me. Again, we'd have to look at how we'd implement this exactly for all the various HTTP libraries; I don't know how.
E
But
I'd
say
that's
like
the
bulk
takeaway,
that's
like
what
I
pulled
out
of
the
meeting
anyways.
So
that's
not
a
time
to
share.
I
don't
know
if
anybody
has
any
discussion
or
anything
that
they
think
is
kind
of
weird
about
this
approach.
I
thought
it
made
sense.
I
thought
that
having
the
higher
level
span
to
group
retry
seemed
a
little
bit
weird.
So
this
is
a
nice
solution
that
fits
well
into
what's
already
supported
by
hotel
in
the
framework.
So.
F
Yeah,
I
agree
with
everything
I
think
the
devil's
in
the
details
of
some
of
these
libraries
we
use
in
ruby.
Like
I
don't
know,
if
http.rb
the
gym
gives
us
like
nice
wrappers,
where
we
can
crack
catch,
three
tries
and,
like
you
know,
annotate
the
span
I
almost
the
only
so
it's
like,
but
I
don't
want
that
to
get
in
the
way
I
almost
wish
there
was
like.
Maybe
some
field
denoting
like
this
is
a
non.
You
know
this
specific
instrumentation.
F
Yeah,
like
I
don't
know
it's
just
like
it's
starting
to
be
yeah,
I
wish
we
could
get
like
it's
like.
I
don't
want
this
to
block
getting
to
1.0
on
a
particular
instrumentation.
If
that's
like
the
goal
here,
but
at
the
same
time
like
we
can
only
support
what
the
library
provides
us
without.
Like
writing.
Some
really
really
really
hacky
instrumentation.
That's
like
re-implementing
half
the
library,
so
I
yeah,
I
don't
but
yeah
yeah,
like
robert
said.
I
think
it's
a
really
good
idea.
H
Yeah, I basically share all of those thoughts that Eric was just saying, mainly about how accessible some of this stuff is. So it should be pretty clear that there's, like, the basic instrumentation, where it should look like this, and, if you can get the extra detail, structure it this way, but not, like, a hard requirement on it.
H
I
suspect
a
lot
of
this
logic
is
like
very
deep
in
some
methods
and
there's
not
like
an
easy
yeah
there's
not
like
an
easy
monkey
patch
that
will
get
some
of
these
things
and
some
of
them,
like
I,
don't
know
if
you're
trying
to
use
like
ethon
or
something
where
it's
using
lip
curl.
I
think
a
lot
of
it's
happening
like
outside
of
of
the
at
least
the
ruby
part
of
the.
H
Yeah. Any more discussion on this? Anything else we should move on to?
F
I
can
give
an
update
on.
I
have
two
open
pr's,
I'm
happy
to
update
on
those
one's
the
schema
url
stuff,
francis.
Thank
you
for
the
feedback.
I've
implemented
some
things
in
properly
off
spec,
so
I
just
need
to
update
the
implementation.
F
The
implementation,
I
think,
also
schema
urls,
like
use
cases
expanding.
It
looks
like
they
want
individual
libraries
sdks
to
handle
merging
of
schemas
as
well
or
they're,
working
on
like
adding
that
so
like
handling
the
actual
translation
in
the
sdk
of,
like
you
know,
let's
say,
convention
changes
so
that'll
be
interesting,
but
I
think
I
can
just
do
the
feedback
francis
pointed
out
and
merge
this
like
incremental
improvement
and
then,
when
that
upstream
stuff
gets
finalized
around
how
they
want
to
do
schema,
url
translations
in
the
you
know
different
languages.
F
We
can
add
that,
but
that's
still
on
my
plate,
I
have
that
on
the
schedule.
There
are
all
my
you
know
on
my
list
of
priorities
and
the
other
one
is
environment,
variable
configuration
for
instrumentation
options,
so
that
is
so
yeah.
Thank
you,
everyone,
robin
robert
robert
and
ariel
and
stuff
for
the
feedback.
Yeah.
It's
it's
changed.
A
little
the
one
thing,
I've
added
kind
of
snuck
in
here
is
I've
added.
F
No,
not
that
one,
the
the
top
one
top
person,
I've
added
a
new
validation
type
for
innumerables
enums,
because
what
I
found
is
that
when
you're,
trying
to
set
and
via
environment
variable
like
a
callable,
you
know
option
it's
incredibly
hacky,
because
some
callables
are
like
a
proc
itself,
which
you
can't
really
set
via
environment
variable
and
then
some
callables
are
like
a
custom
proc
that
accepts
an
a
you
know,
argument
and
it's
just
really
hard
to
differentiate
and
you're,
not
sure
like
what
to
what's.
F
It
called
like
what
to
you
know
like
what
type
the
argument
should
be.
So
I've
taken
the
approach
of
saying
if
it's
a
option
of
a
callable
type,
you
can't
set
it
via
environment
variable
it
just
doesn't
let
you
and
then
I've
added
an
enumerable
type,
which
is
in
practice
how
we're
using
callable
types
in
every
single
one
of
our
callables,
which
is
just
a
list
of
you,
know
possible
options.
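The enum-validation idea could be sketched like this. The function and environment variable names here are hypothetical, not the actual gem API: an option whose valid values are a fixed list can safely come from an environment variable by checking the string against the allowed set, while a callable option simply can't be expressed that way.

```ruby
# Read an enum-valued option from an env-style hash; fall back to the
# default when the variable is unset or holds a value outside the list.
def enum_from_env(env, name, allowed:, default:)
  raw = env[name]
  return default if raw.nil? || raw.empty?

  value = raw.to_sym
  allowed.include?(value) ? value : default
end
```

Invalid or missing values degrade to the default instead of raising, which mirrors how lenient env-based configuration usually behaves.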
F
So
I
don't
know
the
implementation's
like
a
little
gross,
because
you
it's
like
yeah.
I
don't
know
it's
so
gross,
but
that's
that's
where
that
stands
so
like.
I
think
we
all
probably
want
some
sort
of
a
numeral
type,
if
you,
if
anyone
has
like
advice
or
opinions
on
like
how
to
implement
it,
a
little
better
that
it's
implemented
here
like
I'd,
be
open
to
it.
I
wasn't
quite
sure
how
to
like,
I
kind
of
had
to
like
the
pattern
we
exposed
for
like
adding.
F
These
is
a
little
bit
gnarly,
so
I
had
to
like
kind
of
shove
it
in
there.
But
anyway,
though,
that
those
are
those
are
I'm
working
on
those
two
things
and
so
yeah
there's
one
long-standing
pr.
We've
had
open
from
from
the
f
person
who
wanted
to
add
sorry,
I'm
having
a
mind
a
moment.
They
wanted
to
add
to
http
instrumentations
a
what
was
it
like?
F
F
So it got added as part of, like, that instrumentation-helpers gem, but then, like, that instrumentation-helper stuff had a bunch of, like, question marks around it, so we really ought to get this one thing just, like, merged, for the sake of this person who's been politely waiting, like, five months for it, and then worry about all the stuff we tacked on top. So, anyway.
F
I mean, I think at this point we should probably just get in the PR to add a little option that redacts the query string. There was some small feedback on the details of that, like, "we can do this a little more performantly," so, yeah, the discussion kind of moved all over the place, and moved to this other PR I opened, and then that PR got stuck.
F
But
I
think
at
this
point
it's
like
let's
just
have
a
good
implementation
of
how
to
redact
the
query
string
and,
let's
add
it
as
an
option
to
the
relevant
instrumentations
and
let's
get
it
out,
and
then
we
can
worry
about
all
these
like
thoughtful
abstractions
about
how
to
you
know,
do
this
stuff
like,
but
at
this
point
like
it
is
useful
and
people
want
it
and
I
don't
think
anyone's
opposed
it's
just
a
matter
of
like
you
know
it's
kind
of
gotten
like
90
percent.
F
You
know
whatever
the
last
10
percent
is
kind
of
like
bogustome,
so
that's
my
opinion,
but
I'm
open
to
if
people
feel
strongly
or
have
other
ideas
or
you
know
like
I'm,
not
saying
that's
that's
just
where
I
stand
on
the
matter.
B
Anything
we
can
do
incrementally
is
great.
Anything
we
can
do
provide
value
early
is
great.
The
only
questions
I
have
is
for
consistency.
B
Do
you
happen
to
know
if
the
spec
has
any
specification,
any
inkling
at
all,
with
respect
to
redaction
rules
and
or
if
they've
got
or
like
even
the
collector
has
like
a
set
of
like
well-known
reduction,
regular
expressions
for
the
attribute
processor
that
we
could
reuse
here.
F
Yeah,
that's
a
good
question.
I
don't
know,
I
don't
think,
there's
much.
I
know
there
was
some
otep
out
there
floating
around
around
like
marking
sensitive
fields
like
and
that
feels
like.
It
will
take
a
long
time.
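The query-string redaction being discussed could be sketched without building a full URI object each time (the performance concern raised about the original PR). This is a hedged illustration, not the gem's actual implementation: operate on the string directly and replace everything after the `?`.

```ruby
# Replace the query string of a URL with a fixed placeholder.
# Avoids allocating and tearing apart a URI object per span.
def redact_query_string(url)
  q = url.index("?")
  q ? "#{url[0...q]}?REDACTED" : url
end
```

URLs without a query string pass through untouched, so the helper can be applied unconditionally to an `http.url`-style attribute.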
F
There
might
be
leverage
I
need
to
do
it
better,
you're
right,
I
should
figure
out.
We
should
try
to
stay
as
close
to
whatever
the
spec
looks
like
it
could
be.
So
maybe
that's
my
next
year's
like,
let's
make
sure
at
least
like
we're,
staying
close
to
spec
and
then,
let's
make
sure
the
implementation
is
like,
like
this
original
pr.
It's
like
the
implementation
is
local,
so
it
like
creates
like
a
uri
class
like
and
then
rips
it
apart.
Every
time
like
it's,
not
super
performant.
So
like
there's
some
anyway.
G
I'd
be
curious
about
how
re-reviewing
this
pr,
in
light
of
what
was
coming
out
of
that,
for
instance,
the
otep
that
robert
linked
to
in
the
chat
with
the
http
semantic
convention
movements
about
not
necessarily
to
block
this
pr
but
re-review
it
and
see.
If
does
this
move
us
towards
or
away
from.
F
So, if anyone wants to sort of, like, run with those things, for sure, I mean, I'm on board, and there's, like, an improved implementation floating around in a related PR, but it's all just kind of a mess at this point, and I think it just needs someone to grab it, own it, and, like, bring it across the finish line.
G
In
general
is
redaction,
something
that
we
would
put
in
the
instrumentation
or
in
something
that
we
would
like.
You
could
implement
as
a
span
processor.
B
Like
that
given
given
this
conversation
about
the
schema
transformations
that
need
to
happen
as
part
of
like
future
versions
of
the
sdk,
I'm
wondering
to
my
like
again
it's
one
of
these
things
where
it's
like
we're
we're
we're
having
the
processor
pipeline
be
more
sophisticated
and
it
might
be
interesting
again
if
the
same
concept,
which
is
the
attribute
processor,
that's
in
the
collector,
gets
ported
over
to
the
client
sdks,
because
using
the
same
concept
right,
it
uses
a
yaml
file
for
describing
its
rules
or
you
could
write
a
custom
attribute.
B
F
F
G
G
F
F
G
As
long
as
it
shows
up
like
before,
it
leaves
your
infrastructure
by
the
time
it
shows
up
in
a
back
end,
it
should
get
redacted
and
whether
it
happens
at
like
by
nerfing
the
instrumentation,
with
the
span
that
it
emits
versus
a
processor
that
runs
yeah
in
in
the
sdk
versus
a
spam
versus
a
process,
a
spam
processor
running
in
a
collector,
it's
yeah.
I
think
I.
F
So the schema translation work is really... I think maybe what's getting lost here is: what's allowed in a schema translation is incredibly basic, so you can't do arbitrary regexes on values; it's just, like, one-to-one string mappings of names, essentially. So that might be why they're saying that it's okay to do this, because it's low overhead. But, you know, if you would expand the scope to say translations can be, you know, some complex regex or whatever, that's not... So, you know, adding schema...
F
...translation is pretty basic in terms of what it does. For Ruby, it would mean we would still have to copy the span data stuff, you know, because of the way we implement... or, I guess, the way processors implement span data. But I think the scope is changing slightly around what's expected, just because they're introducing schema translation into an SDK. I don't think they're saying, like, the gloves are off, go do whatever, you know, like, "we encourage you to go..."
F
Yeah, no, I mean, I don't disagree. I mean, it's unclear how this... Like, I just know they're working on that in Go. I don't know whether they've made something in the spec, or whether they're saying "let's try some stuff," or what, you know, but I know that's the goal: they've, like, decided that that is going to be part of what schema translation means. At least, that's what I've been told in the collector calls.
F
But
I
don't
disagree
like
I
agree
with
you.
It
doesn't
seem
to
fit
neatly
into
like
the
processor
model,
because
you're
having
to
copy
spams.
C
Anyway,
copy
doesn't
have
to
be
that
expensive.
I
mean
ultimately
you're,
depending
on
the
modifications
you're
making
like
you
have
to
copy
the
span
data
object,
which
is
one
object
and
everything
else
if
you're
not
mutating,
it
can
just
be
slammed
into
the
copy
and
if
you're
modifying
some
attributes
well,
you
know
you're
copying
the
hash
which
is
going
to
be.
You
know
marginally
more
expensive,
but
not
crazy.
C
So, I mean, it doesn't have to be terrible, and you can be reasonably clever about, like, what you mutate.
C
And
yeah
and
you
don't
necessarily
need
to
freeze
everything
that
you
copy
right.
It's
like
you,
you
copy
it
to
make
it
mutable
and
you
don't
necessarily
have
to
re-freeze
it
before
sending
it
further
down
the
pipeline.
The
key
point
is
that
you
have
this
like
read-only
thing
and
you're,
not
modifying
the
original
spam.
G
I
could
be
reading
a
lot
of
into
the
spirit
I
need
to
go.
Read
the
spec,
which
I
will
take
that
as
homework,
but
there's
a
spirit
there
that
I
figured
the
auto
instrumentation
in
the
sdk
should
emit
a
span
and
not
have
tinkered
with
it
and
at
a
point
where
it's
getting
sent
into
a
processor
pipeline.
That
processor
pipeline
is
not
is
no
longer
like
us
as
hotel
maintainers.
G
It's
now
the
processor
pipelines
configured
by
the
end
user
and
if,
in
the
processor
pipeline
the
end
user
has
said,
I
don't
want
this
field
to
leave
the
exporter
like
they've
chosen
to
mutate
it,
and
it's
not.
I
think
the
spirit
of
the
spans
are
immutable
or
not
to
prevent
an
end
user
from
choosing
to
redact
their
stuff
before
it
leaves
an
exporter.
It's
more.
C
I was asking Francis, yeah. Yes, you're free to make changes afterwards; it's just that you can't modify the original span. So you need to create a copy in order to mutate anything, and then you would subsequently pass that copy down the pipeline, but the actual span that was passed to on_finish, or on_end (I can't remember what the method is called), you can't mutate. So...
B
Right, which is kind of what we do, right? If you're using the batch span processor, you take a batch of spans and you give it to an exporter, and that exporter takes that data and converts it to the protobufs to send it out. So, if there was some way to hook into the middle of that, right... Francis described the to_span_data, where we're taking the original immutable span, extracting attributes out of it, converting it into a struct.
C
So
I
mean
you
could
get
more
sophisticated
where
like
or
could
you
get
more
sophisticated?
That's
a
good
question.
I
was
going
to
say
that
you
could
do
the
conversion
to
span
data
yourself,
but
I
I'm
not
sure
that's
true,
so
you
probably
have
to
do
the
two
spam
data
and
then
do
some
modifications
by
copying
it
into
a
mutable
struct
with
the
same
signature
and
then
like
mutating
and
sending
it
on
right
because.
F
"It depends" is the answer to everything. But, anyway, practically speaking, all I'm saying, just to bring it kind of back, is: that PR needs some love, and it kind of got lost in the shuffle, and I just wanted to remind folks of it. And, ideally, I would say I would do it, but I know, in practice...
F
I
won't
because
there's
already
something
I
just
said
I
would
do
last
week,
which
was
to
read
the
respect
thing,
and
I
haven't
done
that
so
anyway,
I'm
right
with
you,
man,
all
right
with
you,
sorry
that
I
took.
H
I'd like to derail it for one more moment, just to say that, like, idealistically, I really like the approach Rob was describing, where you make the instrumentation as simple as possible and defer things as much into the processor pipeline as possible, but realize that the processor pipeline is kind of...
H
I
don't
know
it
feels
very
unusable
to
me
at
this
point
in
time
and
that's
that's
like
one
of
the
big
blockers
there,
but
I
don't
know
having
worked
on
mature
instrumentation
systems
in
the
past,
I
just
feel
like
they
just
become
littered
with
options
and
weird
things
to
like
switches
to
like
record
this.
Don't
record
this
record
this
like
this
and
it
becomes.
H
You
can
optimize
a
lot
of
stuff
if
you
can
simplify
that
that
instrumentation
as
soon
as
you
start
building
all
these
switches
into
the
instrumentation,
it's
like
there's.
No,
it's
like
you
are.
You
are
stuck
with
that
code.
I
guess,
even
if
you
kind
of
know
op
on
all
of
them,
you
still
have
to
like
kind
of
go
through
them.
H
So
I
don't
know,
I
don't
know
what
the
answer
is,
and
I
don't
know
exactly
the
direction
that
hotel
is
going,
but
it
would
be
awesome
if
there
was
some
kind
of
like
symmetry
between
the
collector
and
the
and
the
language
sdks
on
how
they
handle
all
this
stuff,
and
if
we
could
just
kind
of
like
do
the
same
thing
where
it's
like.
If
you
want
to
do
it
in
process,
do
it
in
process.
H
But
if
you
want
to
do
it,
if
these
things
can
leave
the
process
you
want
to
do
in
the
collector.
Do
you
want
the
collector,
but
the
mechanism
is
kind
of
like
the
same
in
both
places.
H
B
Before
we
go,
we've
got
like
one
minute
right
and
I
saw
a
handle.
I
don't
recognize.
That's
mdc
hello,
mdc,
welcome
to
the
sig
meeting.
B
D
F
No,
it's
it's
because
they're
not
funny.
It's
probably
why
okay,
yeah,
yeah
cool,
no
worries
if
people
want
to
observe
is,
did
anyone
want
to
cover
anything
in
the
manner
too?
I
know.
E
There's
a
couple
things
I'm
on
the
hook
to
do-
or
at
least
one
thing
I
have
to
do
some
stuff
around
the
the
fire.
There's
a
getter
there's
a
custom
getter.
Basically,
it
takes
a
string,
converts
it
to
a
symbol
so
that
you
can
retrieve
values
from
hashes
that
are
keyed
with
symbols
instead
of
strings.
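The getter E describes could be sketched like this. The class name is illustrative, not the gem's actual one: propagation getters typically look up string keys, but some payload hashes are keyed with symbols, so the getter converts the key before the lookup.

```ruby
# A carrier getter that symbolizes the requested key, so hashes keyed
# with symbols (instead of strings) can still be read.
class SymbolKeyGetter
  def get(carrier, key)
    carrier[key.to_sym]
  end
end
```

A string-keyed lookup like `get(payload, "traceparent")` then works against `{ traceparent: "..." }` carriers.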
B
Okay, well, that's fantastic. For those of you who could stay, a couple of other things: the ActiveSupport notifications. So, I ran into a problem: we upgraded to Rails 7 alpha 2, and the ActiveSupport notifications started reporting numbers with nanosecond precision, so, when it was trying to, like, convert the timestamps to, like, rational numbers, it was overflowing the protobufs, and I thought that was kind of interesting, that that would be something that was...
B
That
was
changed
right.
So
I
am
not
a
date
time
genius
and
as
a
simple
hack,
I
kind
of
just
divided
the
number
by
a
thousand
which
leads
to
like.
Also
to
you
know
who
knows
what
floating
point
problems
I'm
giving
myself
as
a
result
of
that?
So
does
it?
Is
there
any
like
date,
time
wizards
here
that
are
good
with
converting
these
numbers
or
shifting
these
numbers
so
that
they
make
sense
for
time
stamps
that
can
be
transferred
over
the
protos.
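One hedged alternative to dividing the float by a thousand, offered as a sketch rather than a known fix for this specific Rails change: keep the arithmetic in Rationals and integers, so no precision is lost before the value is written into a protobuf nanosecond field.

```ruby
NANOS_PER_SECOND = 1_000_000_000

# Convert a Float of epoch seconds (as a notification might report it)
# to integer unix nanoseconds. `to_r` captures the Float's exact value,
# so the multiply happens without binary-float rounding.
def epoch_float_to_unix_nanos(seconds)
  (seconds.to_r * NANOS_PER_SECOND).to_i
end
```

The result is a plain Integer, which fits a fixed64/uint64 nanosecond timestamp field without the overflow risk of shipping a Rational.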
E
I
don't
want
to
volunteer
because
I
don't
think
I
would
provide
much
help.
I
at
one
point
was
a
time
zone
wizard.
It
was
a
very
it
was
a
title
I
didn't
want,
but
I
got
delegated
a
lot
of
time
zone
issues
for
my
first
two
months
at
shopify
and
I'm
hoping
to
never
go
back
there
yeah.
So
I
am
interested
to
see
what
you
come
up
with
and
what
you
determine
is
like
the
appropriate
solution,
because
that
stuff
is
kind
of
miserable,
but
it
is
interesting
at
the
same
time,.
B
Yeah
I
was
wondering
about
how
that's
going
to
impact
like
the
action
view
subscriber
code
that
y'all
are
working
on
right
now,
because
are
you
using
the
timestamps
to
determine
the
start
and
end
times
that
are
published
by
the
notification
and
have
you
seen
any
issues
with
rail
7
alpha
2.
E
Issues
in
a
very
quiet
way,
which
is
frightening
so
once
tim
is
kind
of
like
we
say,
tim
is
done
with
working
on
this
stuff,
we'll
turn
it
back
on
and
re-evaluate
it
again
and
see
what
the
like.
However,.
B
So we've really not adopted anything outside of, like, Faraday, Rack, and, like, you know, SQL stuff, so we're trying to figure out how to make all this stuff work.
E
We do want to bring it back; it's just not until it feels a bit more stable, and I think that's, like, after Tim wraps up this work, and hopefully we get some feedback from you, and see, from what you're seeing, if we can just make this a bit more stable for everyone. Yeah.
B
I
don't
you
know
we're,
I
don't
know,
I'm
gonna
meet
with
the
team
that
is
interested
in
adding
this
instrumentation
tomorrow
and
we'll
see
where
that
goes.
But,
lastly,
is
this:
the
the.
C
B
Provide
some
more
feedback
after
that.
The
last
thing
I
want
to
mention
is
the
gzipcon
compression
by
default
flag.
B
So
you
you
put
that
fix
out
already
eric.
So
should
we
revisit.
I
I
B
F
What SIG is it? I'm just curious.
B
What
what
meeting
are.
I
I'm
sorry,
I
still,
I
still
miss
sorry.
Let
me
let
me
turn
on
the
mic.
That's
my
mic
problem.
Okay,
go
ahead,
please.
What
meeting
are
you
here
for?
Oh
I'm
here
for
the
one
telemetry
is
that
the
media
open
yeah.
I
Yeah, because this is my second time joining a meeting like this of the OpenTelemetry team, and I really want to do something for this open project, because this is really awesome, and, you know, I believe it has a very great future, because, after microservices, everything, like, you know, everything is going distributed, and I believe, you know, the centralized manipulation of the logs and everything will be, like, an amazing part of the entire picture, things like that. Fantastic.
B
Okay, well, so, normally we start an hour earlier than this time; this is usually, like, our end. So, I think Eric already posted that in there, but, if you're interested in getting started, I'm going to post a link to issues that are labeled as "good first issue," where we can use some help.
B
If
you
have
availability
to
look
at
any
of
these
and
and
try
to
help
us
implement
prs
for
those
that'd
be
great
or
if
you
are
using
the
system
now
and
running
into
bugs.
You
know
report
those
if
you
can
file
prs
to
try
to
fix
those
too.
Those
are
welcome.
So
thank
you
very
much.
Oh.
I
Thank you, yeah, I'll try my best. You know, yeah, I'll try my best to join you guys more, and try to get myself familiar with all the processes and everything. And thank you; you guys are so welcoming, and I feel I just came back home. Thank you so much. "Wonderful."