From YouTube: 2020-07-30 Ruby SIG
Description
No description was provided for this meeting.
B
Sorry, it's... I actually had to find a meeting room for the first time in forever.
B
Yeah, so, I think we're around 200 here, and I think they put a cap and said there's only a hundred max allowed because of the way spacing has to work, but it's only around 40 or 50 people right now. It was 30 last week, so it's been growing slightly. People are sort of waiting it out, I think. Also, a lot of folks plan their summers by, you know, going to Bordeaux or wherever, and so now they're...
A
I also have the usual agenda. I'm not at all confident this is the best agenda, but it's the best thing that I come up with at 9am for me on Thursdays, so always feel free to add things, and feel free to change this format up if it's not working for folks.
C
In my mind, this generally works for me. We should probably discuss milestones and any particularly pressing issues. The main one that I want to mention is that I've cleaned up the OTLP exporter PR, so getting more people looking at that would be good.
A
Cool, yeah, let's go really quickly through the triaging of current spec issues. So, I guess, GA spec burndown, part one: there are 96 open and 11 done. There were some talks about trying to define GA for different signals, so kind of a rolling GA where you could GA tracing potentially ahead of metrics and keep those experimental.
A
At least we had that discussion at the maintainers meeting. These are starting to blur together for me now; they both happened earlier in the week. But Morgan was asking, and it would be useful to have some guidance around that, and I think it would.
A
I know Daniel had some good questions about what that means from, like, a support perspective on our side. Once we release, you know, a GA tracing, are we on the hook for fixing bugs in a certain amount of time, etc.? I don't think that much thought has gone into it yet, but I think at some point those things probably will be defined.
A
It looks like it's an issue, it's a discussion; this has been going on for some time. I don't know if we want them, like... It seems like this is mostly a tracing back-end concern. It is somewhat of a concern in-process too, I guess, but maybe a little bit less. So if you set an in-process limit, you do kind of, you know, help.
C
Yeah, the practical problem, though, is that you may lose the data permanently; you may lose an entire span. In production we sometimes see people occasionally putting multi-megabyte values in attributes, and because our particular export pipeline has a limit of one meg for an export packet (basically an export bundle), we sometimes end up just dropping spans because they're too big.
C
Having unlimited sizes can also cause other export problems. Even if you don't have a limitation on bundle sizes, you may just end up with more errors during export, and it causes flakiness, dropped connections, or whatever. The other thing that can occur is that that data ends up causing problems downstream: the collectors may start to have memory issues if you get a lot of these large spans being exported and having to be held in memory by the collectors.
C
So
there's
a
lot
of
problems
that
happen,
if
you
just
say
well,
like
no
limits
having
some
kind
of
limit
is
useful.
We
certainly
have
limits
already
for
the
number
of
attributes
per
span,
and
things
like
that,
so
actually,
limiting
particularly
string,
valued
attributes
could
be
a
useful
thing.
A
Yeah, all right, so I think this discussion is useful. I feel like you definitely have some context here, so if you want to comment on this issue, I encourage you to. It sounds like the best approach, and probably the one that, if anybody asked me for my opinion (which nobody really does), I would suggest, is that we should have them, but they should be configurable, because they're probably not going to be one-size-fits-all.
A
So this is probably, like, an SDK configuration for max size, and that can be infinity if you want it to be, by setting the max to infinity or by not setting it, or something along those lines.
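For illustration, a minimal Ruby sketch of the kind of configurable attribute-size limit being discussed, where leaving the limit unset means no truncation; the names here are hypothetical, not the actual OpenTelemetry Ruby API:

```ruby
# Hypothetical sketch: truncate string-valued attributes to a
# configurable byte limit before export. A nil limit stands in for
# "infinity", i.e. no truncation at all.
DEFAULT_MAX_ATTRIBUTE_LENGTH = nil # unlimited unless configured

def limit_attribute(value, max_length = DEFAULT_MAX_ATTRIBUTE_LENGTH)
  return value unless max_length && value.is_a?(String)

  value.bytesize > max_length ? value.byteslice(0, max_length) : value
end

attrs = { 'http.url' => 'x' * 2_000_000, 'retry.count' => 3 }
attrs.transform_values { |v| limit_attribute(v, 1024) }
# => 'http.url' truncated to 1024 bytes; the integer passes through
```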
C
Yeah, actually, related to this: sorry, I realized this issue was about the number of attributes, links, and events, and then there's a link to another issue lower down about the actual value size limits. Sorry, the last one here, yeah.
C
We already have the limits; I think we copied them from Java. We have the limits on the number of attributes, the number of links, and so forth. One thing we're not doing (and I don't know whether anybody else is doing this either) is actually tracking how many attributes or links or whatever we dropped.
C
So, I opened an issue for this yesterday, but it's something we should consider: should we actually be tracking this? OTLP at least has fields for dropped attribute count, dropped link count, and dropped event count, and it would be nice to actually be able to populate those with something meaningful.
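A rough sketch of what populating those fields could look like, assuming a hypothetical helper that enforces an attribute-count limit and reports how many entries were dropped:

```ruby
# Hypothetical sketch: keep at most `limit` attributes and count the
# rest, so the dropped count can be copied into OTLP's
# dropped_attributes_count field on the exported span.
def apply_attribute_limit(attributes, limit)
  kept = attributes.first(limit).to_h
  [kept, attributes.size - kept.size]
end

attributes = (1..50).to_h { |i| ["key#{i}", i] }
kept, dropped_attributes_count = apply_attribute_limit(attributes, 32)
dropped_attributes_count # => 18, a meaningful value to report
```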
A
Yeah, I think we'll definitely talk about the new issues that have been opened, and that sounds reasonable to me. The collector exporter, or at least OTLP, has this in the protocol, so having that data on a span makes a lot of sense for anybody who wants to make use of it, in OTLP or any other pipelines.
A
It seems like you want a place to surface errors too, so that you can, you know, log them, debug them, whatever. But there was, like, a strong opinion that we're building this great telemetry system and we're reverting to, like, logging to solve it. And then people said, well, you can record metrics, you can send this other stuff, but, like, well, what if the error is about metrics? It just kind of turned into this conversation that seemed, I don't know, from my viewpoint, it was like, all right:
A
"We can't build a perfect system, so you can't get visibility into your errors at all." That was just my, I don't know, maybe pessimistic viewpoint on that conversation, but that's where I felt it kind of ended. So I do think it's important that we have a way to surface errors somewhere, somehow, even if it's not perfect; and if we come up with the perfect solution, we should replace it.
C
I opened an issue about this one as well. When I was writing the OTLP exporter PR... I had written an OTLP exporter for Shopify's internal instrumentation package as well, because we're trying to shift to OTLP internally.
C
We have a whole bunch of metrics in that exporter that allow us to track things like dropped spans, successfully exported span counts, successful export requests, unsuccessful export requests, the modes of failure, all that sort of thing. We don't have any of those capabilities right now in the exporter that I wrote that's in the PR, and we also don't have them in things like the batch span processor. So it would be useful to have some mechanism: either some established conventions for metrics...
C
...that exporters should report, or some mechanism for hooking into those failures so that we can report them meaningfully. So that kind of fits this global error handler idea. You know, we have a logger right now, but, as you said, that doesn't really seem like the perfect way to report those sorts of things.
A
You know, just the OTLP exporter, or exporters in general, kind of like an interface to either surface data or, you know, handle retry and failures in sensible ways. I think people are open to these suggestions. I think it is one of these things where nobody really knows what to do, but it sounds like you have a lot of experience with this at Shopify, and you have found some things that really work for you there.
A
The solutions that you come up with in a mature production environment are usually pretty good, and the result of a lot of things that didn't work. I mean, this is good: having real-world experience with them.
A
You kind of know what's going to work and what's not going to work, and if you've found things that really are going to work, I think OpenTelemetry can benefit from that experience. I think people are more than willing to accept and listen to proposals that come from this.
A
I think where we get into trouble is where we try to, like, imagine the best solution, because our imaginations are not good enough to handle all of the real-world scenarios, edge cases, and everything else.
B
For Datadog, we encountered a similar issue. It actually was mostly coming from customer complaints, so we implemented what's like diagnostic or health metrics, I think we call it. It's just statsd.
B
It is a little easier because we already had, you know, a statsd client as part of the tracer, so we're just emitting a bunch of statsd metrics over UDP. One nice thing is you can't get errors there, and they're pretty helpful. There's a bunch of stuff that's just hard to diagnose. I would say, similar to what Francis was saying (I actually shared the link in the chat), it's really hard to know...
B
...what's going wrong without metrics, for stuff like the queue filling up or retries, because it's just hard to reproduce the behavior of a production-load system, and especially asking a user to do so is basically impossible. So, yeah, it seems like maybe a hook, where people can add arbitrary behavior, might be an easy middle ground. If they just want to log stuff, they can log it; if they want to fire off metrics, they can do that. But yeah, feel free to, you know.
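A minimal sketch of the hook idea floated here: one registerable handler that defaults to logging but can be swapped for metrics or anything else. All names are made up for illustration, not an agreed-on API:

```ruby
require 'logger'

# Hypothetical error hook: the SDK funnels internal failures through a
# single replaceable callable instead of hard-coding a logger.
module Telemetry
  @error_handler = lambda do |exception:, message:|
    Logger.new($stderr).error("#{message}: #{exception.class}: #{exception.message}")
  end

  def self.error_handler=(handler)
    @error_handler = handler
  end

  def self.handle_error(exception:, message:)
    @error_handler.call(exception: exception, message: message)
  end
end

# A user who prefers metrics over logs registers their own handler,
# e.g. incrementing a counter in their (assumed) statsd client:
Telemetry.error_handler = lambda do |exception:, message:|
  # statsd.increment('otel.errors', tags: ["class:#{exception.class}"])
end
```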
A
Cool, yeah, I think this will be a continued topic as people start using these SDKs: how to get health information and error information and other things out of them.
A
So, I think the default sampler is always-on, and it sounded like they were talking about possibly changing this to parent-or-else always-on. The difference between always-on and parent-or-else always-on is, like...
A
...you would respect the parent's sampling decision if there was one; otherwise, always-on is the default. It seemed pretty reasonable; I don't think it was super controversial, and you can always change the default sampler.
C
If you don't respect the parent sampler's decision, then you can't interpose a span that is not traced, or rather, not sampled, not recorded; and trying to say, you know, "this chunk of code should not be traced" becomes much, much more difficult.
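A toy sketch of the decision rule being described (parent-or-else with an always-on fallback); the real SDK samplers carry more state, so this is only illustrative:

```ruby
# Hypothetical parent-or-else(always_on) decision: inherit the
# parent's sampled flag when a parent exists, otherwise sample.
ParentSpanContext = Struct.new(:sampled) do
  def sampled?
    sampled
  end
end

def sample?(parent = nil)
  return true if parent.nil? # root span: always-on fallback

  parent.sampled? # child span: respect the parent's decision
end

sample?                               # => true
sample?(ParentSpanContext.new(false)) # => false, so an unsampled parent
                                      #    can switch off a whole subtree
```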
A
Cool. The next thing was having the global defaults default to no-op.
A
Whether or not you should propagate context even if you're using the minimal implementation. I think this has been controversial from the beginning, but it does seem like people are, well, divided, but more and more open to just having them be a no-op. You can always register again; this is just the default, and it can be overridden. But it was kind of like: why, in the minimal implementation, is everything else a no-op except for context propagation? Why is this?
C
The argument for not propagating by default is that in a production deployment, you're probably going to need to decide what things you want to propagate and what format you want to use for propagation. And then you probably want to write libraries that will have instrumentation, but it's up to applications whether they actually want to configure OpenTelemetry and have it working in production, versus just using a library that happens to be instrumented. They may decide that they don't actually want to have any propagation by default.
C
Just requiring a gem that requires OpenTelemetry shouldn't automatically turn on context propagation for them, because that's kind of surprising behavior: you suddenly get these headers appearing just because you happened to require a gem that itself required OpenTelemetry.
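A sketch of the opt-in behavior being argued for, with made-up propagator classes standing in for the real ones: the default injects nothing, and headers only appear once the application explicitly swaps in a real propagator:

```ruby
# Hypothetical propagators; the class and method names are
# illustrative, not the OpenTelemetry Ruby API.
class NoopPropagator
  def inject(headers, context)
    headers # leaves outgoing headers untouched
  end
end

class TraceContextPropagator
  def inject(headers, context)
    headers.merge('traceparent' => context.fetch(:traceparent))
  end
end

propagator = NoopPropagator.new # library default: requiring a gem adds no headers
propagator = TraceContextPropagator.new if ENV['TRACING_ENABLED'] # explicit app opt-in

propagator.inject({}, { traceparent: '00-<trace-id>-<span-id>-01' })
```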
A
The TL;DR I was getting out of this was that trace context and the W3C-based propagators should come as part of the API, but everything else should be independent packages. We are lucky in that we only have the W3C formats right now, but if we were to add B3 or other things, they should probably just be separate gems that people can pull in, I think.
A
It does seem like there is some gray area. Definitely people are thinking, you know, proprietary protocols should definitely be a gem, but B3 is close enough to a standard that having it as part of the SDK, or maybe even the API packages, wouldn't be terrible.
A
There's no consensus on any of these things, and one thing that I've noticed is there's not even a spec to say how B3 should be handled in OTel. Not only do you have, like, the two header formats (from what I know, the single header is newer and less used, but it's kind of inspired by the traceparent header from W3C, or they have very similar qualities), but just looking through OpenTelemetry...
A
...some of the implementations only support the multi-header format. And, I think, to complicate things even more...
A
...there are some optional fields. I believe B3 has span ID, which is kind of your parent's ID ultimately, and then it also has parent ID, which, from the perspective of the receiving app, is kind of like your grandparent. And it's unclear how much of that you need to propagate, how much of it you should propagate, and what the consequences are for picking and choosing some of that stuff. TL;DR, I think there should be, like...
A
Ideally, there should be a B3-for-OTel spec that lays out exactly how stuff should work, because otherwise each implementation is going to do it differently, and that's definitely going to make for some future mysteries for some users.
C
Yeah, I also feel that B3 should live in a separate gem; it shouldn't be part of the API gem. Trace context, or traceparent, should be part of the API, because that's really what we want to encourage people to use going forward. So I think that's a reasonable default, yeah.
A
There was some discussion about removing "HTTP" from HTTPTextFormat. I would like to note that we were pioneers in this area and dropped the HTTP a long time ago. I think Francis was a pretty big advocate for that, so I hope this happens, just so that we're not a special snowflake; but I do think...
A
...the HTTP is a little redundant. In the discussions, for the people fighting to keep it, the argument there is, like, the character set choices somehow justify the HTTP prefix in some...
A
...oh, you know, given some lines of reasoning. I think they could be adjusted, though.
A
The benchmarks are back. I haven't had a chance to look at it, but if you're curious... yeah, there's not much there.
C
There's very little here, and the goals seem somewhat arbitrary.
B
This is such a black hole. It's... I don't know, sorry.
A
Yeah, no, it's a total can of worms. But if I look at the headings, I'm like, yeah, okay, that makes sense. If I start reading the text, I'm probably going to be disappointed, but as a work in progress, it's probably fine for them.
C
Yeah, probably the weird thing here is the throughput thing in particular: "number of outstanding events in local cache" is actually specced here as one megabyte. I don't imagine most exporters are actually going to be measuring or buffering things in terms of megabytes; it's going to be in terms of the number of events, that is, the number of spans. So yeah, it's just a bit weird the way it's specced right now.
C
I don't know what that phrase means, to be honest. It's heavily dependent on how fine-grained your instrumentation is and what kind of operations you're tracing. You know, we had somebody come to us recently complaining that tracing looked really expensive when they were tracing their health check endpoint.
C
I have an app that's tracing Monitor#enter, so, the cost of actually obtaining a mutex in Ruby.
A
Yeah, and then there's this last one about changing the OTLP exporter to export both spans and metrics by default. You look at this and, well, we're not exporting metrics. So, who knows what the ask will be by the time we get around to that, but we'll figure it out then.
A
Shall we move on to our repo?
C
The OTLP exporter PR is pretty big. I just really wanted to flag that I think it's mostly review-ready. There are two things that I need to polish for it. If you scroll down a little bit, there are two comments here.
C
One is that array support is now implemented upstream: it's implemented in the proto and in the collector, so I need to actually implement it here. The other side of it is handling redirects.
C
I copied and pasted some code for handling redirects from our implementation of the OTLP exporter, but the structure of our exporter is a little bit different, so I just need to refactor this a little bit to match what we're doing here.
B
Yeah, I can definitely review this, probably over the weekend. Thank you for tagging me on it. I looked at an earlier iteration, and it looked pretty strong, so I'll try to give some...
C
...thoughtful comments. Cool, thanks. One thing to note: there is an integration test here, a pretty minimal integration test, but you can actually run it locally if you've got the OpenTelemetry collector up and running, and verify that data is actually sent to the collector.
A
Awesome, this looks great; thank you for doing this. I think this is going to be something that a lot of people coming to OpenTelemetry Ruby are going to want.
C
One question I have, which I didn't actually put in here: this implementation is OTLP over HTTP, specifically protobuf over HTTP. There's a recommendation...
C
Oh, sorry, another recommendation: there's an open issue in the spec repo, I think, about a kind of plain JSON representation over HTTP, and then there's also the gRPC version. At Shopify, we're just planning to do OTLP protobuf over HTTP, but I imagine that a lot of people like us may not want to take a dependency on gRPC.
C
So lumping the gRPC and HTTP exporters in the same gem doesn't seem ideal. I don't know if everyone agrees with that or not. Should we rename this gem to, you know, opentelemetry-exporter-otlp-http, to allow us to then have an otlp-grpc exporter?
A
I think I'm in favor of that, to be honest. There is this open spec issue, I don't know if it's the same one that you were just referencing, where I think the ask was really: we would like OTLP HTTP, we want the option...
A
...and we also really want to not have to pull in gRPC as a dependency. This is something that I'm aware of from users. I haven't encountered, in the wild, protobuf itself being a portability problem, but the gRPC dependency is an issue for them.
C
Yeah. So, proto over HTTP is specced at the moment; there's an OTEP for it, and it's implemented in the collector.
C
I think the JavaScript SDK does, yeah, but there isn't any corresponding implementation in the collector, so they may be able to send it, but nobody can receive it at the moment. You're saying JSON over HTTP?
A
But my understanding was that that's necessary for browsers in general, so it should be coming to a collector near you. Protobuf from a browser is problematic. It's not impossible, but I think all of the libraries use eval, which is a security concern when coming from browsers, usually. And, I don't know, browsers are another can of worms that I could know a lot more about, but...
C
Yeah, I do know there are some issues there. Okay, there's a related consistency issue, which is that all the implementations are inconsistent in their use of ports and paths for the different formats and so forth. The collector at some point made the decision to split the ports, so they use separate ports for gRPC and HTTP.
C
Lightstep, I know, is strongly objecting to that, so they may get unified at some point. But, I think, the port changed at some point; we had been using a different port internally, and then, when I was implementing this, I realized that the collector had changed the default ports for these. But the default ports don't actually appear anywhere in any spec.
C
So it's not clear what the right thing to do is, other than just copy what the collector did; and unfortunately people seem to have copied that differently in different SIGs.
A
Okay, I think this should be the place where that stuff lives, yeah. And I think these configuration options are also a can of worms, because, as they're being implemented, around endpoint, I think "endpoint" is being reused for gRPC and for OTLP HTTP and interpreted slightly differently for each one. For gRPC...
A
It's
like
the
endpoint
is
just
your
host
and
port
and,
like
your
secureness,
comes
from
this
insecure
flag,
whereas
for
otlp
http
the
endpoint
can
be
just
like
a
full
http
endpoint.
You
know
with
path
and
with
passport
and
scheme,
to
kind
of
figure
out
these
secure
security.
So
I
think
I
think
that's
okay,
like
I'm
not
opposed
to
that.
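For concreteness, a sketch of the two interpretations just described, assuming a gRPC-style host:port plus insecure flag versus an HTTP-style full URL; none of this is the specced configuration:

```ruby
require 'uri'

# HTTP style: scheme, host, port and path all come from one URL.
def parse_http_endpoint(endpoint)
  uri = URI.parse(endpoint)
  { host: uri.host, port: uri.port, path: uri.path, secure: uri.scheme == 'https' }
end

# gRPC style: endpoint is host:port; security comes from a flag.
def parse_grpc_endpoint(endpoint, insecure: false)
  host, port = endpoint.split(':')
  { host: host, port: Integer(port), secure: !insecure }
end

parse_http_endpoint('https://collector.example.com:55681/v1/trace')
# => {host: "collector.example.com", port: 55681, path: "/v1/trace", secure: true}
parse_grpc_endpoint('collector.example.com:55680', insecure: true)
# => {host: "collector.example.com", port: 55680, secure: false}
```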
C
Yeah, the default for gRPC in the collector is 55680.
C
It used to be 55679, because the OpenCensus port was 55678, so they just incremented by one; but at some point they seemed to switch to 55680, and then HTTP is 55681, and then you've got this path thing as well.
C
So, yeah, it's just interesting, because a lot of the implementations done by different SIGs use different port numbers; in some cases they're using 55680 for both HTTP and gRPC, and there's this confusion around the path as well.
A
I think this is fine. I like that we have our PR right now, and that this PR is also here for configuration, and I think we can participate in helping make sure that both of these do something reasonable, that we have reasonable recommendations, and that we're clear enough about the configuration.
A
And
then
our
implementation
can
be
a
user
of
that
and
just
make
sure
that
the
things
in
the
spec
are
same.
You
know
we
can
complete
that
circle.
Sometimes
you
write
a
spec
with
configuration
and
realize
that
it
didn't
check
all
the
boxes,
and
sometimes
you
write
an
exporter.
That's
missing
a
lot
of
configuration
so.
C
Yeah, I certainly don't have all this configuration, so that's something that I'll need to address in the PR. I wasn't aware of this PR.
A
Cool, yeah, and those are things that we can choose to do as part of your PR or as follow-ups. I think we're all flexible on how to actually get this...
C
...done. A small thing, also: compression is not currently implemented in the collector, so this fails in surprising ways. Also, the errors reported back to the client by the collector are not what is currently specced.
C
The OTEP for this says that for certain errors, this protobuf should be coming back encoded, and the implementation in the collector is incorrect. So right now all we can really do is take the body and drop it, because we don't know how to decode it.
A
Okay, Lightstep will take gzip.
C
No. Cool, anything else we should talk about?
C
While I was working on this, a bunch of things occurred to me, so I've opened a few issues.
B
I probably do, but, yeah, thank you for the issue. Someone had asked if the JS one is singular, and I was like, nope, that's the way it works, apparently. Anyway, cool, yeah.
A
Shall we move on to another one?
C
Sure. Any concerns with that? Not really, no; I just wanted to know whether we think we should go ahead with that or not. This one is about disabling tracing in a block. I've implemented this in the exporter, because in general we, and I think other SDK implementations, are doing this as well.
C
Yeah, the way I was proposing doing this is by creating a span that is not sampled, not recorded, and that acts as a parent for everything below it, right? Okay, so that's this one as well.
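A stripped-down sketch of that approach: an untraced block installs a non-sampled, non-recording parent, so a parent-respecting sampler drops everything created inside it. The thread-local storage and names are illustrative, not the real SDK API:

```ruby
# Hypothetical non-recording parent span: sampled? is always false.
NonRecordingSpan = Class.new do
  def sampled?
    false
  end
end

def untraced
  previous = Thread.current[:current_span]
  Thread.current[:current_span] = NonRecordingSpan.new
  yield
ensure
  Thread.current[:current_span] = previous
end

untraced do
  # Spans started here see a parent whose sampled? is false, so a
  # parent-respecting sampler will not record this chunk of code.
end
```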
C
Cool. I think OpenCensus previously did this; all the instrumentations in OpenCensus did this with this kind of unsampled span, which they then throw away. But yeah, I think context is the right way to do it; we just need kind of a standard approach to this. Cool, I'll...
A
I'll update this ticket to mention that context is something we should explore, and just reference the JS-related issues around it for inspiration. Cool.
C
So, probably having some kind of hooks would be handy, if people have ideas of what that should look like. It would probably be good to add a link to the Datadog metrics for this as certainly one source of inspiration.
A
So would you see these as, like, OpenTelemetry metrics of some sort, that we would use the... the met...
C
To be written, yeah, exactly. So there's kind of a...
C
...chicken-and-egg thing there, right? We'd like to have these metrics, but we don't have any metrics SDK implementation yet, so they wouldn't do anything useful.
A
I think this is good. I do kind of think that there would be some interest, and it's possibly something to bring up at more of a spec level, I guess, because this would possibly apply to all exporters and span processors; other languages might have opinions and want to adopt this. I know it's always a tough sell up at that level, but...
A
Yeah, I'm going to be off next week, so I will not be at the spec SIG. But if anybody is going to be at the spec SIG and wants to put this on the agenda as a thing to talk about, to see if other SIGs are interested or have had discussions about this, and if it makes sense to try to standardize...
A
...things, that might make sense. And I can always just throw this on the agenda with a note or something, and at least somebody will say something about it, I think.
C
The last issue I had here was fork safety in the batch span processor. Right now we don't actually have any fork safety: if you start a batch span processor and then fork, the child is not going to have the thread running, so that's problematic; we need to restart the thread.
C
The requirement is certainly not controversial; we agreed pretty early on that we need fork safety, but we don't have it yet. So I don't know whether this approach makes sense to people, or if people have seen other approaches that may be more suitable.
B
Datadog's tracer addresses this in some way, and I totally blanked on it when writing the Datadog processor, so I can look at how it's being addressed. I have to kind of catch up on it, because I didn't write it; I joined the team later. But yeah, I know this came up as a bug for a number of people before we realized it was an issue.
A
Cool, thanks. Is this something that requires, like, a top-level API method, or some API method that the forking process needs to be aware of? Or is this something we can always handle...
C
...automatically? So, I mean, that's certainly one way. The other way is just to track the PID and, if the PID changes, assume that you've been forked and you need to restart the thread, basically, and then re-record the PID.
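A minimal sketch of that PID-tracking idea for the batch span processor; the class shape is invented for illustration, and the worker loop is elided:

```ruby
# Hypothetical batch processor that detects forks by comparing PIDs:
# when the PID has changed, the child restarts the worker thread and
# re-records the PID.
class BatchSpanProcessorSketch
  def initialize
    @pid = nil
    @worker = nil
    restart_after_fork
  end

  def on_finish(span)
    restart_after_fork # cheap Process.pid check on the hot path
    # ... enqueue span for the worker thread ...
  end

  private

  def restart_after_fork
    return if @pid == Process.pid

    @pid = Process.pid
    @worker = Thread.new { sleep } # real worker loop elided
  end
end
```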
A
That's probably better, if you can detect the situation.
A
We have reached time. I did want to quickly mention: I think Daniel opened up a couple of issues. We talked about these a little bit last time, but...
C
Yeah, and generally I think this is a good idea. I just want to point out that we don't want to accidentally go 1.0; we want that to be a fairly intentional thing.
A
Yeah, I think that makes sense. So I think we're thumbs-up on these; if you ever do want to start working on these, Daniel, just assign it, so nobody...
D
Yeah, I'll probably have some cycles to start looking at these in the next, maybe, week or two.
A
Cool, nothing more from me. This has been a super productive meeting; I thought we had some good conversations. I'm not going to be around next week, FYI, so anybody should feel free to run this meeting. I don't know, you've seen how I run this meeting, so feel free; if people are interested in running this meeting, you can share it.
C
Yeah, I mean, I think you're doing a great job, and you have the context from the spec SIG meeting that maybe others here are not attending, so, yeah, that recap and context is useful. Cool, yeah, I'm flexible on all these things. What...