From YouTube: 2021-02-09 meeting
A
I'm doing okay, hanging in there, living that pandemic life. You know. What's up? I'm familiar with where you work... Datadog, is that right?

B
Do I? They haven't fired me yet.

B
Man, quite an achievement. It's the small wins; you gotta find the small ones.

A
Thanks so much. This is like my Austin, Texas WeWork, with my remote middle schoolers, in my study here in my house. Oh man.

A
So I've mostly been a fly on the wall in these meetings, but is Matt usually the one that runs these? Yeah.
B
Matt, Daniel, and Francis are the maintainers and tend to run them, and Matt is sort of the greatest showman; he's often the one doing the talking. I tend to talk a lot, but I don't really contribute that much, or as much as I'd like to recently. And Daniel is quiet, but when he has something to say it's usually right, and he contributes the most useful code.

D
I worked with a guy for about five years on the same team called Scott Francis; that was enormously confusing.

B
You know, before we get started, has anyone figured out how to do tracing in Ruby 3 yet?

B
It feels like it won't be an issue until libraries support it, but we've got some folks working on it, and their first feedback was: oh, this is really hard.

F
I suspect it'll be a little while before Ractors are working, before many libraries really support Ractors. I was just playing with them last night and already found several bugs.
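The core constraint behind the difficulty mentioned here can be illustrated with plain Ruby 3 (this is a minimal sketch, not code from the project): Ractors may only share deeply frozen, "shareable" objects, so mutable process-wide state, such as a tracer registry, has to be reworked before it can cross Ractor boundaries.

```ruby
# Minimal illustration of Ractor shareability (Ruby 3+).
# A mutable hash standing in for process-wide tracing state:
registry = { service_name: +"my-service" } # +"..." makes the string mutable

before = Ractor.shareable?(registry)  # mutable hash: not shareable
Ractor.make_shareable(registry)       # deep-freezes the hash and its values
after = Ractor.shareable?(registry)   # now shareable across Ractors

puts "before=#{before} after=#{after}"
```

Anything a library wants visible from inside a Ractor has to be made shareable like this (or passed via messages), which is why instrumentation that leans on mutable globals needs rework.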
B
For concurrency, that's, I suppose, par for the course, but...

B
Normal with any new cool feature, but it will be interesting to see. There's always one adventurous company or team or something who insists on, you know, trying it. I think Meteor on Node uses fibers still, which are a completely deprecated Node concept. So you always see people latching onto things, even if they're terrible ideas.

E
So yeah, it does look like the usual crew is around. So I guess we'll do the usual: go through the spec SIG, try to make it shorter than the spec SIG, and then start going through issues in the repo.
E
It appears as though the metrics SIG has kind of split into two and a half SIGs, which I think is really three SIGs: there's one group who's going to work on the data model, there's another group who's going to work on the API and SDK, and then there's another group who will work on Prometheus compatibility.

E
I don't know what this really means in practice for a number of things. I think for us it just means that we have an API and SDK that haven't gotten any use, so probably it does not affect us all that much, but other SIGs do have APIs and SDKs and stuff built upon those. So I don't imagine the stuff built upon them will go away right away, but that's kind of the state of OTel metrics, I guess.

E
There's also a mention that for big features, and to validate big features, the OTEP process should be used, and an ideal way to make sure that all bases are covered is to have a prototype in at least two languages, preferably one strongly typed and one dynamically typed.
E
And then there was the question of whether the logging spec should put the same stability and maturity levels on it as metrics.

E
The release of the 1.0 of the spec, which would be a stable tracing spec, has been imminent, but there have been some blockers.

E
There was one issue dealing with span limits and environment variables. I guess it's resolved and merged; I'm not sure exactly what the deal was there, but probably just some naming bikeshedding. We can dig up the PR.

E
There are a couple PRs and an issue around the TraceIdRatioBased sampler, and I think there's an issue.

E
And basically, there is a TODO in the spec that says that the algorithm for...

E
So it's not a great way to have a 1.0 for the spec. I think the ask was: we either need to fill in the TODO before it comes to 1.0, or label this as an experimental feature.
E
Yeah, so it looks like the release date for a 1.0 trace spec... I don't know, there's talk of dates. I heard things mentioned as soon as today, but it looks like there was a 5/16 written down, so maybe we have a range of dates.

E
They would like to have two language RCs available soonish to vet the 1.0 spec, and then there was a question of instrumentation: will that be 1.0? And I think the answer is no, that's still kind of unstable.

E
And then there was talk about trying to have a monthly release cadence for the spec, and there's also talk about whether language SIGs will have to have a release that matches the spec each time.

E
Yeah, I think the idea is that a lot of things kind of go into main; you don't necessarily have to try to release those until they make it into a spec release.

B
Oh good, it's a...

B
It's okay, except the mystery. I'll look into it.
E
So I think the process is, you know, once there's a 1.0 of the spec, language SIGs should start getting RCs ready.

E
So my sense is that we're a little bit before that; my sense is there are a few more things on our checklist before we get to RC. Just offhand, I think having a Zipkin exporter is one thing that we're supposed to have that I think we don't have.
D
Yeah, so the Jaeger propagator is one of the things that's in flight right now, and that's required, so I imagine that one's pretty close. We have a few small environment variable things that we need to implement, so the trace exporter and propagators are the main ones.

D
So those are trivial and shouldn't take much effort. I'm just trying to parse what's optional versus not optional.

D
The OTLP exporter is supposed to support... well, sorry, it's not marked as optional to do concurrent sending, but if you read the spec it's "may", not "must", which implies it is optional, and I think only two languages have actually implemented concurrent sending. There's also multi-destination spec compliance, which is optional and which we haven't done. So the Zipkin exporter: we need to do something there.
E
So yeah, it seems like we will probably have an RC sooner than later.

D
Yeah, I don't know; the biggest contribution we'd be looking for is the Zipkin exporter. That's probably the most challenging piece that we're missing.

D
Yeah, Shopify is using the Jaeger exporter and the OTLP exporter internally. We haven't had a need for a Zipkin exporter, so we haven't put effort into that.

G
Is it working now? Yep, yeah. I'm just in the process, or I just finished, deploying our first production service with OpenTelemetry, so I'm just closely watching that right now, and it's looking like everything's working great.

D
Yeah, so that's with opentelemetry-ruby exporting via OTLP to the OpenTelemetry Collector.
B
I have some familiarity with the Zipkin export format, because they handle errors very poorly and it caused an issue with Datadog, so I had to chip in a little. But I don't have a ton of... as you've noticed, because I've just been totally absent in terms of actually contributing to this project, I don't have a ton of bandwidth right now, but it's something that I probably can grab. I just don't know when.

E
Our Jaeger exporter, we borrowed heavily: there was a Ruby Jaeger client, this guy Indrek made it, and his company also has a Zipkin client.
A
How does anybody feel about sort of broadcasting calls for help on social media, like Twitter and stuff like that, specifically linking to these issues? Like, "anybody out there using Ruby tracing with Zipkin? We could use your help to build this exporter out," and see if that gets any bites.

B
Yeah, I mean, most of my followers are my grandparents, so feel free. Do you know anyone popular online?

E
So yeah, I don't know; our Jaeger exporter came largely as a lift-and-shift from the salemove Jaeger exporter. I know Indrek used to come to these meetings, and he was really gracious in allowing us to borrow heavily from that prior art.
E
We could ping him again and see what he thinks about borrowing heavily from the Zipkin Ruby OpenTracing library. From what I've seen, his work is very high quality for the most part, so if you are looking for prior art, his stuff is great. I was just taking a quick glance, and it looks like they are using the JSON format, which is one of the allowed formats.

E
I would say, preferably, if we're going to go the JSON route, that we use JSON v2; or there's a v2 protobuf. It seems like the v1 stuff...
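For reference, the v2 JSON shape being discussed looks roughly like this (a hand-written example with invented values; field names follow the Zipkin v2 API, where a POST body to `/api/v2/spans` is a JSON array of span objects with timestamps and durations in epoch microseconds):

```ruby
require "json"

# One span in Zipkin v2 JSON format (illustrative values).
span = {
  "traceId"       => "4bf92f3577b34da6a3ce929d0e0e4736",
  "id"            => "00f067aa0ba902b7",
  "name"          => "GET /users",
  "kind"          => "SERVER",
  "timestamp"     => 1_612_886_400_000_000, # epoch microseconds
  "duration"      => 1_431,                 # microseconds
  "localEndpoint" => { "serviceName" => "my-ruby-app" },
  "tags"          => { "http.method" => "GET", "http.status_code" => "200" }
}

payload = JSON.generate([span]) # the POST body is an array of spans
puts payload
```

Note that v2 tag values are strings, which is one of the ways it differs from the v1 binary-annotation format.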
B
You know, I don't use Zipkin internally, so I don't have much familiarity, except for a little bit of understanding of v2, just from one or two small PRs I made to the collector, but it's very small. I'm still looking around to see if there are any other vendors out there exporting Zipkin.

E
I know there's this... I haven't had much time to look at it. I'm not sure if this is something we wanted to talk about at all, Robert. I know we talked about this a little bit last time, and we talked about how our config is basically just a hash and there's no real validation that the keys make sense or the values make sense, and at least it was pointed out that we were looking at ddtrace.
B
Yeah, we have very little. I think, generally, things will not fail if you pass in garbage in the config, and in ddtrace it'd be hidden until, you know, until it's invoked, which is bad. There are some exceptions, because there's some work done at initialization, but yeah, a lot of that was done before my time. I would say it might be nice to have... I don't know, I think it's good to have validation be required.

B
I don't know... I think one of my early points of feedback, or one of my early areas of uncertainty, is that I don't know the right behavior: do you want to have an error raised, essentially, or do you want to just have it logged quietly? Should we raise an error or swallow an error, basically? And I don't know the right answer. There's a part of me...

B
...that would say you want to raise an error, because it's incorrect, and that's better than having behavior quietly not working, or just being quietly logged. But then sort of the first principle of observability and APM is that you don't want to mess with the customer's, or sorry, a user's application, so, you know, just quietly logging would be the appropriate avenue. I don't know.
G
Yeah, so the way I have it right now... and I don't really have a strong standpoint on that one. The way the code is, as-is: if you pass an incorrect value that doesn't fit the expectations that are defined in the validator, we're going to just fall back to the default value, and it'll log.

G
It'll say, you know, this supplied value didn't work; we're going to go back to the default. If the validator, for whatever reason, raises, it'll go back to the defaults. So it's very much swallowing the exceptions, but it is recording that it happened.
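The behavior described here, fall back to the default and log rather than raise into the host application, can be sketched in plain Ruby (illustrative names and options, not the actual otel-ruby code):

```ruby
require "logger"

# Each option has a default and a validator. A value that fails
# validation, or whose validator raises, is logged and replaced
# by the default instead of raising into the host application.
DEFAULTS = { enabled: true, span_limit: 128 }.freeze
VALIDATORS = {
  enabled:    ->(v) { [true, false].include?(v) },
  span_limit: ->(v) { v.is_a?(Integer) && v.positive? }
}.freeze

def resolved_config(user_config, logger: Logger.new($stderr))
  DEFAULTS.each_with_object({}) do |(key, default), config|
    value = user_config.fetch(key, default)
    valid = begin
      VALIDATORS[key].call(value)
    rescue StandardError
      false # a raising validator also falls back to the default
    end
    unless valid
      logger.warn("config #{key}=#{value.inspect} invalid; using default #{default.inspect}")
      value = default
    end
    config[key] = value
  end
end
```

So `resolved_config(span_limit: "lots")` warns and keeps `span_limit: 128`, while valid values pass through untouched; the misconfiguration is recorded but never blows up the application.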
G
So I don't know what necessarily is the right approach, or I think it's really just a decision to be made. I personally kind of like the fall back to the default: like, oh, you put something in wrong, here's some information. It lets...

G
...you know that you did something wrong, instead of just blowing up on initialization or just not instrumenting anything altogether. Because I think if the fallback is, oh, this instrumentation failed, or we failed to configure it, so we're going to shut it off, that seems wrong to me, because now you make an error, or someone passes an incorrect configuration value, and none of that code path is instrumented anymore. I think you're trading a problem for a bigger problem, right? You're losing significantly more visibility.
C
Okay, can I throw a thought in too? Which is that, you know, I think some of the implementation here seems to be raising an error sort of when the given class is being instantiated, which is not necessarily the same as the application being initialized, right? You could make the argument that there are actually two different steps: the actual predicate method...

C
...is this valid or not, true or false, and the actual step after it, which is: raise an exception or don't. If, in fact, you had a bunch of classes that were able to answer questions like "true or false, this config is valid", then, say in the configurator, you could say: I'm going to do a first pass, validate as much config as I have, which I guess is most of it (though I guess there's some dynamic context, right?), and raise an exception before you even instantiate, you know, turn on your Ruby application.

C
I don't know if that's a useful lever to imagine pulling, but it kind of gives us the principle, right: we definitely don't want to be raising the exception in the middle of the lifetime of the application that we're instrumenting, but it's useful to have that exception that says, hey, developer, you've misconfigured something in this hash, right? That's why it's gonna be...
G
Yeah, yeah, that was actually the motivation; that last point there was the motivation for adding this. Because we're just passing in config hashes, and in some places we were doing runtime checks, and some places just kind of coerced a truthy value, it seemed a little too haphazard. So this is trying to add that layer of structure to it.

G
So when we're actually installing the instrumentation, we're going through and kind of loosely type checking. Right now the examples I've added are pretty shallow, like: did you pass in an array? I'm not checking the contents of the array, but it's just making sure that the very base level of the config hash that you pass in actually works with...

G
...what's expected for that value. And it also, I think, has the side effect of adding documentation in code: if you're looking at adding this instrumentation to your library or to your application, you can look at this install class and you can see what the compatible versions are.
B
And the other thing to just point out is that introducing this feature also means that there's work necessary to then go back and implement it for all our... not just future contributors and their integrations or instrumentations or plugins or whatever, but our previous ones. You know, we want to go back to Rack and add it, or whatever it is.

G
No, no, I actually asked you to look at it before it was really done. I just wanted to get your thoughts on the approach I was taking, but I've since then added it to all the...

G
...instrumentation that we currently have, also because I wanted to really make it obvious, like, what does this look like when we use it? Francis left some comments yesterday that I have yet to address, like some of the validations that I used, and...
B
Yeah, yeah, that's awesome. Thank you, that's amazing; thank you so much for addressing my nitpicking. I'll try to review the implementation here of the instrumentation-specific validations, but this is, yeah... that's like 99% of the work to me. So thank you.

G
I'm welcoming different approaches; I just think we need something like this. I don't necessarily care too much about the fine-grained implementation details. I just think having this layer of validation, or structure, around the instrumentation libraries is really, really important for anyone who consumes this code.
E
Cool, yeah, at a glance this looks pretty good. I don't know; the only thing that's going through my mind is it would be really awesome to have some standard helpers for, like, you know, validate that something's an array, validate that something is a string, validate that something is a boolean, rather than trying to write the same thing over and over again.

E
So I don't know if it could just be a symbol, where you know how to pass the argument to the right method to validate it, or if... I don't know, sometimes that can be limiting. But if you just have a few out-of-the-box validations for the common cases...
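The symbol idea suggested above could look something like this (a sketch with illustrative names, not the merged design): a config option names a built-in validation with a symbol instead of repeating a lambda, while still accepting a callable for custom cases.

```ruby
# Out-of-the-box validations keyed by symbol for the common cases.
BUILT_IN_VALIDATORS = {
  array:   ->(v) { v.is_a?(Array) },
  string:  ->(v) { v.is_a?(String) },
  boolean: ->(v) { [true, false].include?(v) }
}.freeze

# Accepts either a symbol naming a built-in validator or a callable
# for instrumentation-specific validation.
def valid_option?(value, validate)
  validator = validate.is_a?(Symbol) ? BUILT_IN_VALIDATORS.fetch(validate) : validate
  validator.call(value)
end
```

With this, an instrumentation author writes something like `validate: :array` for the shallow checks and only reaches for a lambda when a more comprehensive check is genuinely needed.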
G
Yeah, I totally took some inspiration from working with Active Record associations. I've even removed it since, because I don't think it's needed yet, but I had, like, an "allow nil", you know, kind of Rails-style, that everyone's seen. But I did kind of foresee this evolving, because it would make sense to just have, like, a boolean check, right?

G
So you don't have to have this ultra-verbose thing and have to pass in a lambda. But that's just the general idea I'm trying to convey here. So I think some basic helpers are needed, and I do expect, potentially, in some instrumentation you might want to have a more comprehensive validation, and in that case it would exist in the instrumentation library and get passed in. Like GraphQL: I don't think it's necessary right now, considering how loosey-goosey it was before.

G
But when you pass in schemas, we don't check that you're actually passing in schemas to be instrumented; you could just pass in an array of integers, and it's going to blow up on install, but still.

E
Yeah, so that all makes sense. It sounds like the big outstanding question is whether or not this should raise exceptions when the validation fails, or whether it should log.

D
So this goes through our error handling right now, which is configurable.
G
I did have another... well, I'll just wrap this one up. So yeah, anybody who has an interest in this, feel free to jump in and just leave a comment or nitpick; I'm looking for, hopefully, the finishing comments to address before we can get this in. And then I do have another PR that, if you are interested, we could quickly look at. I think it's pretty small. It's pretty important to Shopify; I'm not sure how much use everyone will get out of it.

G
We want to trace our export pipeline. So it's this one here, yeah, and the idea is we just need a means of either tracing or not tracing the exporter.

G
We also have some other security stuff that needs to be added so that we can actually export traces. So this is introducing some concept of a hook, being able to get into this. And once again, Francis left some comments that I have not even looked at yet; sorry, Francis. But that's the general idea of what I'm trying to accomplish here.
B
When
you
say
send,
when
you
say
trace
the
export
pipeline,
you
want
to
create
a
http
request
span
for
okay
yeah.
I
mean
I
think
it's
a
certainly
for
like
and
most
end
users
will
probably
not
want
it.
So
it's
good
as
a
hook
but
yeah
I
can
see
the
value
and,
and
is
the
value,
what's
the
value
for
you
guys
is
it
you
want
to
see
how
long.
D
It
takes
so
we
we
actually
want
to
trace
our
pipeline,
including,
like
we,
our
pipeline
kind
of
bounces
through
envoy,
to.
B
D
Bunch
of
collectors-
and
normally
you
know
you're
using
metrics
for
that,
but
it
we're
all
about
tracing.
So
why
aren't
we
using
tracing
right?
It
just
seems
silly
to
be
building
all
this
infrastructure
and
then
not
actually
use
it
for
monitoring
or
observing
our
our
systems.
D
So
that's
that's
one
thing
and
then
the
the
other
issue
here
is
that
we
need.
We
need
to
actually
allow
this
request.
So
we
have
an
allow
list
of
requests
that
are
are
permitted
to
be
made
from.
B
D
Yeah
I
looking
at
my
comments
here
from
last
night.
I
realized
that
maybe
we
need
to
explicitly
lay
out
the
use
cases
for
this
robert,
because
I
forgot
the
security
aspect
as
well
that
we
needed
the
allow
list.
So
my
first
comment:
oh
sorry,
second
comment
there
about
adding
a
boolean
flag
is
obviously
useless
in
this
context.
So.
G
The
requirements
I'll
update
the
pr
it's
very
it
was
very
much
draft
end
of
day
it
kind
of
sent
to
francis
to
look
at,
but
I
thought
on.
The
call
would
be
good
to
talk.
So
if
anybody
like
the
use
cases
are
just
being
able
like
to
simplify
it
as
being
able
to
allow
some,
like
a
user
to
wrap
the
basically
the
send
bytes
call
and
the
exporter
with
any
sort
of
block,
like
kind
of
like
it's
untraced.
G
But
we
want
to
be
able
to
define
and
many
different
things
to
wrap
around
so
yeah.
If
anyone
has
any
thoughts
on
how
that
should
be
accomplished,
feel
free
to
chime
in
while
it's
in
draft.
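The hook being described, letting a user wrap the exporter's outbound call with an arbitrary block, can be sketched like this (class, method, and option names are invented for illustration; this is not the draft PR's API):

```ruby
# The exporter yields its outbound call to a user-supplied wrapper, so one
# deployment can trace the HTTP send, another can tag it for an allow list,
# and the default behavior is a plain, unwrapped call.
class SketchExporter
  def initialize(around_request: ->(&block) { block.call })
    @around_request = around_request
  end

  def export(spans)
    @around_request.call { send_bytes(spans) }
  end

  private

  def send_bytes(spans)
    "sent #{spans.size} spans" # stand-in for the real HTTP request
  end
end
```

A tracing deployment would pass something like `around_request: ->(&block) { tracer.in_span("export") { block.call } }` in its own terms, while everyone else gets the untraced default.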
G
Yeah, so the way it kind of looks right now, we've started doing it in our internal library: it'll say, like, "our OTLP exporter export", and then it traces the span, like the HTTP span, and it's kind of like a worker loop, so we get a lot of them. We still need to finish connecting it through the entire pipeline; finishing that has kind of fallen on the back burner a little bit, but the idea is we'd be able to see the overall latency of going through.

D
...the implementation, yeah: you'd have an Envoy child span, you'd have a child span on the collector as well, and then, if you're doing something on the other end of the collector, potentially another. And if it's the same bundle flowing through and everything propagates correctly, then you're potentially getting another span there.

D
I mean, we may be pushing for it. You know, we use Go heavily as well, so as we start using OpenTelemetry Go in production, we may push for similar functionality, or we will push for similar functionality, in the OpenTelemetry Go SDK as well. Along the same lines, we have metrics, or we have the capability of recording metrics, that we've added in the SDK for Ruby.

D
It would be useful to have the same metrics reported from OpenTelemetry Go, and probably Rust as well; that's another one that we're using.
E
Yeah, that all sounds good. It sounds like these are kind of somewhat experimental features that we're incubating here in Ruby, with maybe the hopes of trying to standardize them at a wider level. And yeah, I think that all sounds good; as long as it's off by default, I don't see any problems with it.

D
Yeah, I had committed a while back to updating the spec for the metrics so that we could get standard metric names everywhere. I've not done that yet.

E
All right, I was going to see... Francis Wong, since you're here today, any updates on this?

C
This is ready for review. You know, you had added, what was it, a sort of more standardized way to handle the transformations of Rack key names, basically, so I updated this PR to incorporate that, yeah.
C
The one thing that's a little tricky, that I'll just flag: for the testing, I did bring in the SDK as a dependency, as a development dependency, primarily because I wanted to write automated tests on baggage, and the API baggage implementation is just a no-op. So you can't use API baggage to test, you know, "when I've run this code...

C
...does the baggage have the values that I wanted it to have?" And then I went down a path of trying to mock out the baggage calls, but it ends up being pretty crazy, and then I was like, well, I could write my own little mini baggage implementation... and we already have one within the SDK.

E
Cool, I will have another look over this. I know it was looking like it was on the right track. It got complicated because we didn't have the Rack env getter, so you kind of had to invent this way to pass in a proc to handle the wrapper's non-Rack case, but now that we have this, I think, yeah.
E
All right, yeah, I'll take a look through this. I think when I glanced through it last time, the only other question I had was with Jaeger: I don't know, I think I was poking at the Java implementation, and it appeared to me that that implementation will propagate baggage even if there's not a valid trace context, and I was wondering if that was just a Java thing or if this is something that is widely supported in Jaeger, but I'll...

E
...take a few more looks at other things, to kind of see what they're doing.

G
Cool, I just wanted to quickly call out... I feel bad, because I've been also neglecting... there's an instrumentation PR for RabbitMQ. Francis has done some reviewing on it; the person re-requested a review.
G
I have been neglecting them greatly. I haven't actually worked with RabbitMQ; I just want to see if anyone on the call has experience with it and would like to maybe take a look at it, just to weigh in on it a bit. But I will; I have every intention of jumping in and trying to get into it a bit. I just find it sometimes a little bit trickier to give a good review if you're not familiar with the tech they're using.

E
I have some experience, just in instrumenting it.

E
I haven't really used it heavily myself, but I can try to take a look through this as well. I think a lot of what is needed to review this PR is a pretty good understanding of the semantic conventions for OTel, and then the other component would be just...

E
...understanding when to propagate context, and when to, I guess, add things as a link, when to continue the trace, all of that.

E
So yeah, I feel like there's a bit of understanding RabbitMQ, but there's also a whole lot of having a good understanding of the OTel semantic conventions around all of this.
D
Yeah, I provided a bunch of feedback there. I started looking at this again this morning, so I'll try to finish my review today. We have one more PR that is stuck on the CLA: 573.

D
So this is approved, but we can't merge it until the CLA is signed. Yeah, we haven't had an update in a couple of weeks, so I'm not sure what we can do with this. It's actually a bug, and it'd be nice to get it fixed, but it was written by somebody who hadn't signed the CLA.

D
Should we try pinging them again?

E
Cool, yeah, so I will try to get through some of these PRs this week.

D
Yeah, I guess just one last-minute thing: because we do have the push towards 1.0, or getting a release candidate out, it would be really useful if anybody has time to start going through the spec and our implementation with a fine-tooth comb, and if anything jumps out at you as not spec-compliant, then just open an issue so we can start tracking some of this. Sounds good.