From YouTube: 2021-03-02 meeting
B
Along with... yeah, I'm definitely not just a fanboy. I hang out and try to be helpful.
A
Yeah, I guess we're all the fans, the people in the cheap seats like us. We do pretty all right for ourselves, then.
A
He said he won't make it today. Eric is saying that he has a conflict, so he'll be here after the conflict. I haven't heard from Francis at Shopify.
D
Okay... oh, okay, right. I wasn't paying attention to the Slack; it's my fault, okay, yeah! So I guess Matt's out. I don't really have any updates on the spec SIG, so yeah. Maybe we can dive into open issues and open PRs.
D
So I guess let's start with: is there anything anyone wants to chat about today? I just want to make sure that we don't miss anything.
C
We've started to work through our internal implementations, and we had some questions about background jobs and things like that, but they're not super urgent. We can open that as a discussion; it's more of a best-practices sort of question, and we're writing some instrumentation for Active Job, things like that. So if that's well suited for this format, I can talk about it here. If it's best on the issue as a discussion, I can do it there as well.
C
I haven't done Resque yet, okay. I suppose that, basically, what's on our mind is context propagation, looking through the specifications.
C
The one sort of use case that I don't see covered very well is the very typical Rails case of "I want to do a job asynchronously", but it's not really producer/consumer, where many things are subscribing to this. It's just a background job, so we're kind of seeking best practices on that, and specifically on linking a job back to the request that originated it.
C
We sort of have a hack in place for Active Job that will actually serialize the parent span context and pass it through to a job via the job arguments, but that's, admittedly, a hack, and we're trying to find out if people have experience there. Is that just something that people don't typically propagate?
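
For illustration, a minimal sketch of the kind of hack being described, assuming the standard OpenTelemetry Ruby propagation API; the `_otel` key and helper names here are hypothetical, not from any gem:

```ruby
require 'opentelemetry/sdk'

# Enqueue side: capture the current trace context into a hash that the
# job backend can serialize alongside the normal arguments.
def arguments_with_trace_context(args)
  carrier = {}
  OpenTelemetry.propagation.inject(carrier) # adds e.g. a 'traceparent' entry
  args.merge('_otel' => carrier)
end

# Worker side: restore the producing request's context so the job's
# spans join the same trace.
def perform_with_trace_context(args)
  parent = OpenTelemetry.propagation.extract(args.fetch('_otel', {}))
  OpenTelemetry::Context.with_current(parent) do
    OpenTelemetry.tracer_provider.tracer('jobs').in_span('job.perform') do
      yield args
    end
  end
end
```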
C
The sort of, you know, originating web request context. Or is that where the conversation-id semantic parameter comes in? So we're kind of just seeking community best practices; we're not really sure what you should do there.
D
This links... okay, so there's an issue in the spec repo that was opened a little while ago for similar sorts of things, at least in Java; so, batch jobs in Java. Not quite the same, but close.
D
At the moment there are no semantic conventions around background jobs, and so we've been using the messaging semantic conventions for background jobs. But yeah, there is this issue to try to actually create semantic conventions around this, because it is a different use case, and it's worth having something specified just for it, as opposed to the more general kind of messaging system. In terms of context propagation, you can see what we've done in Sidekiq, for example; not sure if you've looked at that instrumentation.
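
A sketch in the spirit of the Sidekiq instrumentation mentioned here, not the gem's exact code: a client middleware injects the current context into the job payload hash, and a server middleware extracts it so the worker's span continues the same trace.

```ruby
require 'opentelemetry/sdk'

class TraceClientMiddleware
  def call(_worker_class, job, _queue, _redis_pool)
    OpenTelemetry.propagation.inject(job) # job is a plain Hash payload
    yield
  end
end

class TraceServerMiddleware
  def call(_worker, job, _queue)
    parent = OpenTelemetry.propagation.extract(job)
    OpenTelemetry::Context.with_current(parent) do
      tracer = OpenTelemetry.tracer_provider.tracer('sidekiq-sketch')
      tracer.in_span(job['class'].to_s) { yield }
    end
  end
end
```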
C
I'm muted; that doesn't help. I can take a look at that. I had seen the Sidekiq implementation before and was struggling to adapt it to Active Job, because it's not quite as flexible, but okay.
D
Yeah, that would be great. I think it's really helpful to have some input into that spec issue from people who have experience running this kind of background job system. So whatever input you have there would be really useful. Having support for that issue is also really helpful, so that we can move things along a little bit rather than just leaving it in limbo as a "you know, it'd be nice to have", but okay.
E
No, I'd just take this opportunity to maybe quickly mention: I put up a pull request today for patching the Google APIs core gem. Oh.
F
Yeah, actually, I was just looking at that. It might be a good thing to discuss at this point: I effectively own Google APIs core myself, so instead of going through and doing monkey patches from a separate gem, I would prefer us either building whatever hooks we need for this into Google APIs core explicitly, or even hard-coding the instrumentation into Google APIs core itself.
F
It currently does that for OpenCensus, and we can just rip that out and replace it with something similar for OpenTelemetry, just to avoid monkey patches. Because, from Google's perspective, OpenTelemetry is it: this is the canonical way to instrument these libraries. So I'm happy to just privilege OpenTelemetry and build it right directly into the clients.
E
That would be great. Is that something... do you accept PRs to it? Can I open a PR against it? Because this is something that our team, Shopify, really wants to have with OpenTelemetry, and it's something that's slowing me down from migrating more services, because I know there are a few teams that depend on it. We have an internal monkey patch, but like you said, if we can actually have first-party support, that would be great.
F
Yes, absolutely. For OpenTelemetry, yeah: just open the pull request, we'll review it, get it submitted, and get it released.
E
Yeah, yeah... sorry, go ahead. No.
C
I was going to say, if it helps, the thing that I'm wondering about, specifically with the status of it all, is whether you're planning on implementing a sort of generic framework for Active Support Notifications and hooking into those, sort of how some of the other tracing gems have done it, like the Datadog gems in the past; they all hook into Active Support Notifications and build off of that.
C
Sorry, the reason I'm asking about it is because we're starting to build out some of our own internal stuff, and we want to make sure we align with what you're doing and are able to open source and contribute back as much as possible.
E
So the honest truth is, I just haven't started yet; it is one of the next things to do. As mentioned, I got the Google thing done for the API gem, and then I was going to start working on LMDB, which should be a quick one, and then it's going to be Active Support and Active Record.
E
More urgent, I guess, for our personal needs is Active Record; that's something that we really want. Or, and it gets a little bit, maybe not contentious: we (not the royal "we", but maybe Francis and I) aren't particularly fond of Active Support Notifications in terms of dependability for what we're doing and how we use tracing.
E
So I don't really have a concrete answer; I'm going to look at it. But if I can't feel confident in the reliability... like, say you've got three layers of Active Support Notifications and one of them raises: it's not going to interrupt the rest? I have to confirm that that's not the behavior, and I believe it still is, but I actually have to test that. Otherwise, I think we might go the monkey patch route, yeah.
D
So I can chime in there. In general we prefer, if possible, to use other hooks or monkey-patch instrumentation rather than relying on Active Support Notifications, but we recognize that people use Active Support Notifications for a lot of different things.
D
So that's problematic when you're basically hooking into the start and finish events: you create a span in start, and then you want to finish it in finish. But if something raises before your notifier gets the start event, then you end up getting a finish for a span that hasn't been created yet; or, if something raises during finish before you're notified, then you end up with a span that is never finished. That's the main problem with Active Support Notifications that we've encountered.
D
As Robert says, we haven't gone back to verify that that's still the case, but it was the case a few years ago when we did our own internal instrumentation, and we've had to do some horrible hacks to work around it: basically maintaining a separate stack of spans that we've created, and then matching based on the name of the span.
D
When we're trying to finish, we end up popping the stack: if we get a finish notification for something that's not at the top of the stack, we unwind the stack. So it's just kind of a messy thing to deal with.
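
A sketch of the span-stack workaround just described, with illustrative names rather than the actual internal code: start and finish events push and pop a stack, and an unmatched finish unwinds the stack by name so spans aren't leaked when an intervening event was lost to an exception.

```ruby
require 'opentelemetry/sdk'
require 'active_support/notifications'

class StackedSpanSubscriber
  def initialize(tracer)
    @tracer = tracer
    @stack = []
  end

  def start(name, _id, _payload)
    @stack.push([name, @tracer.start_span(name)])
  end

  def finish(name, _id, _payload)
    # Pop until we find the span this finish belongs to, finishing
    # everything on the way so nothing is left dangling.
    until @stack.empty?
      popped_name, span = @stack.pop
      span.finish
      break if popped_name == name
    end
  end
end

tracer = OpenTelemetry.tracer_provider.tracer('asn-sketch')
ActiveSupport::Notifications.subscribe('process_action.action_controller',
                                       StackedSpanSubscriber.new(tracer))
```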
We spoke to the Rails team when we were writing our instrumentation, and they said that that behavior was basically baked into Active Support Notifications and was not going to change. But they may have... do you...
C
...think that we should? We could ask them about that again. Internally, I was seeking advice from Eileen (I can never remember her last name) on the Rails team about instrumentation things, just seeking advice from her about things, and she brought up Active Support Notifications. And I said, you know, I guess we'll take another look at it; but she did seem open to feedback and improving anything that might not meet our needs, because I brought up the idea...
C
...like, why don't we instrument Rails core itself with OpenTelemetry, because of yadda yadda, and she's like, ah, Active Support Notifications. So if that's what actually prevents it from being reliable, the fact that, you know, any notification subscriber could raise and blow everything up, maybe we could bring that back up internally and say: hey, here's some of the feedback. I'd be happy to pass that along, certainly.
D
Yeah, we had brought it up with Rafael. Okay, I can't pronounce it.
D
Yeah, so we had brought it up with him previously, and he was the one who said this behavior wouldn't change. But yeah, who knows what's happened in the intervening years; it's possible that they're more open to this. We had also planned internally at Shopify to approach Rafael once OpenTelemetry became the default instrumentation for our services, and suggest that we start integrating it into Rails. If we can approach the Rails core contributors from two different directions, that's probably helpful.
C
Yeah, I'll certainly bring up the feedback about the notifications again and see what we can do there. And then, instrumenting Rails core itself: just from an adoption standpoint, Rails core having default instrumentation for OpenTelemetry is huge. So yeah, cool, thanks for the context on that. That's really, really helpful. I appreciate it. Yeah.
A
You already started giving me feedback on that one draft PR that I was putting up. So, as far as things like... if we identify things like structural duplication or any issues, obviously we'll be opening those in separate PRs. For refactorings that try to eliminate things like structural duplication in the code: is that stuff that's welcome at this point, or do we want to wait for a few more implementations, like the X-Ray implementation, to come out before we start identifying patterns to extract?
A
Well, I mean, looking at it very closely, you know, one example of this was that the B3 injectors and extractors had a bug in them where they wouldn't return the... or, inconsistent behavior; I was going to say bug: an inconsistent implementation relative to the trace context propagator, where that one would return the carrier unmodified, right? If there was no state, this one was returning nil, just exiting early in one case. So, to avoid situations like that, where some functionality should remain the same...
A
...extracting out the commonalities in the code blocks and the implementations. Another thing that I had thought of: Eric pointed out the truncation of the trace ID when injecting or extracting, the trace ID being, I want to say, a 64-bit value? 128 versus 64, something like that.
A
It seems like there are multiple implementations of that as well, so that's something that I was looking at, saying: I'm not sure how to share these across the gems. It looks like we have a common library as well; that might be a good candidate for that.
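
A sketch of the kind of shared helper being suggested here; nothing like this exists in common yet, and the module name is hypothetical. Peers that only understand 64-bit trace IDs (some B3 setups) conventionally keep the low 64 bits of the 128-bit ID on inject.

```ruby
module TraceIdHelpers
  # trace_id_hex is the 32-character lowercase hex form (128 bits)
  def self.to_64_bit_hex(trace_id_hex)
    trace_id_hex[16, 16] # rightmost 16 hex chars = low 64 bits
  end
end

TraceIdHelpers.to_64_bit_hex('0af7651916cd43dd8448eb211c80319c')
# => "8448eb211c80319c"
```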
D
Yeah, so right now common is intended as that kind of grab bag of utility functionality that people might want if they're building instrumentation, or for test helpers and things like that. So yeah, right now, if you're looking to extract some of that functionality, common would be the place to put it. Common... I don't think it has an API dependency at the moment. Because it's this grab bag of things, it's not clear whether it should depend on the API or the API should depend on it.
D
Okay, and then things like the SDK and other components can depend on common as the location for that removal of duplication. So, for the most part, common functionality like that, that isn't specified by the OpenTelemetry specification, shouldn't be in either the API or the SDK.
D
It needs to be factored out somewhere else, and I think a lot of languages are taking that approach. And in the, what is it, modularity and versioning spec, they also encourage that.
D
Thank you so much, cool. So yeah, to answer your question: yes, we're certainly open to refactoring things to factor out duplication. Cool. Anything else, or I can jump into milestones and PRs?
D
I will jump into... master... into PRs. Okay. In terms of milestones, we have the tracing v1 RC. There's work going on right now; this is not linked at the moment, but we do have work going on to implement the Jaeger propagator and the Zipkin exporter.
D
I still need to start my PR for implementing fields in propagators. I will endeavor to do that tomorrow; I'll try to get a draft PR up. And then there's this PR to add the Zipkin exporter... actually, that's the thing for that, okay. So, in general, we're progressing reasonably well here, with the exception of me, so I will try to step up my game in a bit.
A
Like, are y'all able to see... are you all seeing his screen share, with updated info on the project? Or are we just... like, I just see the main landing page of the repo.
D
That's a good question. So, which view do you... let me close a few things and see if we get something useful. Cool, okay, so let me go and look at milestones again. Cool. These are the issues that we have open. Unfortunately, this one is tagged spec compliance. So this Zipkin exporter issue and this Zipkin exporter PR are the same thing. I need to work on fields in propagators.
D
Yes, I was asking: is there anything else people feel is required for the RC? And I was pointing out that we had a couple of PRs closed... oh, that merged recently.
D
So, let's see, I guess the main one is adding force flush to the SDK's tracer provider. I think there's another spec change that's going to follow up on this one, so when that lands I'll also provide the implementation. This is just really wiring everything through, so that force_flush is available on the tracer provider, on the SDK's tracer provider, and then is wired through to the span processor that's registered, and then wired up to the span exporter as well.
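
A rough sketch of the wiring being described, with simplified stand-in classes rather than the actual PR: force_flush on the tracer provider delegates to the registered span processor, which drains its buffer to the exporter and flushes it in turn.

```ruby
class SketchExporter
  def export(_spans, timeout: nil)
    :success
  end

  def force_flush(timeout: nil)
    :success
  end
end

class SketchBatchProcessor
  def initialize(exporter)
    @exporter = exporter
    @buffer = []
  end

  def on_finish(span_data)
    @buffer << span_data
  end

  def force_flush(timeout: nil)
    @exporter.export(@buffer.shift(@buffer.size), timeout: timeout)
    @exporter.force_flush(timeout: timeout)
  end
end

class SketchTracerProvider
  def initialize(processor)
    @processor = processor
  end

  # Spec-named operation: flush buffered spans without shutting down,
  # e.g. before a FaaS process is suspended between requests.
  def force_flush(timeout: nil)
    @processor.force_flush(timeout: timeout)
  end
end

provider = SketchTracerProvider.new(SketchBatchProcessor.new(SketchExporter.new))
provider.force_flush
```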
D
"So this is in lieu of a shutdown, for example?" This is a little bit different. There is shutdown; shutdown is meant to be used when you're terminating the process. Force flush is intended to flush any buffered spans, and it's particularly intended for functions-as-a-service implementations, where the process may be suspended (and you don't know it's suspended) between requests, and you don't have a chance to actually flush the spans from the old request until the process is woken up again.
D
So that was milestones, tracing v1: mostly documentation issues here, and configurator improvements. I think Robert mentioned he was interested in taking a look at that.
D
Along those lines: I should have reviewed it and presented it here, but there is a proposal for, or there are proposals for, a number of projects over the coming months to make some improvements, some ergonomics improvements, primarily to the tracing functionality in OpenTelemetry; in particular, providing some kind of convenience API. A lot of the API right now is very, very much focused on core functionality, with less thought for how convenient it is to actually use for common cases.
D
I will attempt to find the links for that project and share them, probably in Slack, possibly also in GitHub discussions, so that everyone knows the project to track. And if you have some thoughts around having a more convenient API, or what that API might look like, it would be really good to chime in on that project. It's a cross-language effort; it's not a Ruby-specific effort. It's generally considering all languages and all implementations.
D
Okay, back to pull requests. Yeah, so we've got a bunch of things in flight right now, and some of these are probably not getting much active work. I'm going to work bottom-up here, because maybe we can start thinking about cleaning out some old things that are not going to be updated. So, the OpenTracing bridge: the requirements have been, or, the specification for this has been added. It's probably quite different from what was implemented here, and our underlying API has evolved significantly.
D
Since Rhys did the initial work on the OpenTracing bridge, I think we're probably best off adding a spec compliance issue for OpenTracing support; it's not actually required for 1.0, but it should be added shortly thereafter.
D
So I think I'll create a new issue and close Rhys's PR here, and we can start fresh there. Likewise for OpenCensus, although we had discussed earlier that OpenCensus was only ever released as an alpha for Ruby, and it doesn't necessarily make sense for us to support OpenCensus compatibility. If we do want to support OpenCensus compatibility, the approach that other languages are taking right now is to provide that shim, I believe, in the OpenCensus project, not in the OpenTelemetry project. Daniel probably has more thoughts around that.
D
I'm not sure of the state of this PR to add design documentation. This was an early effort (it's over a year old now) to add some documentation around how the API and SDK are designed and how to go about contributing. It's probably also pretty out of date at this point, and we may just want to close it and open issues for any holes in the documentation that it would have covered. Ruby on Rails tracer support: this was an initial effort that Martin from Datadog pushed up quite some time ago. Here's Eric... Eric.
D
...can... so yeah, so I think we can close that and move on. gRPC example: this is effectively a documentation effort; I mean, it's adding a code sample for how you'd go about instrumenting gRPC.
D
We should take another look at it, but again, at this point, given the age of this PR, it's probably better to close it and open a new one to replace it. Jaeger exporter with nil values: this was attempting to fix an issue that was reported; I believe that's since been fixed a different way. I'll review it to see if that's still a problem, and if it's been fixed, I'll just close this PR and move on. Koala instrumentation: this is still work in progress from somebody at Shopify.
D
I don't know... I do know that his team is currently working on adding, I think, OpenTelemetry support to some other projects. I will poke him to see whether or not he wants to keep contributing here; otherwise, Robert or I will probably pick this up and get it down to the finish line. Okay, this PR was a draft PR for something that I thought we should not do.
D
I just wanted to put it up there as an example for fixing something that, once I dug into it, I thought didn't need to be fixed. So it was mostly for Matt to take a look at and comment on. I think at this point I'll close it and...
D
Jaeger propagator: Francis has been working on this. I think he's just dropped off the call. Francis has been working on this, and I think it's pretty close to getting done, this one.
B
I'm still here. I just pushed up a new round of changes this morning, so hopefully, hopefully they are almost there.
D
Cool, I will take a look. I've kind of been deferring to Matt on this one, but I will take a look and see if we can get that one merged sooner rather than later. So yeah, thanks for your work on that; that's really good! Is there anything in particular you want to call out there that we should take a look at now, or...
B
No, I mean, we're just sort of dialing in on really small questions about edge cases of the spec that drifted a bit between the two, but yeah, I think we're good. Okay, cool.
D
If anybody does have any experience with Bunny and RabbitMQ, and any thoughts here, it would be helpful to take a look. But there are... I think I had some feedback on this one that probably still needs to be addressed. Let's see.
D
It's probably on me to re-review this. There were some updates four days ago; I will get back to this. But again, if anybody else has some time to take a look at that, that would be really helpful.
E
So, if any of you end up working on a PR that takes a while to get through, try not to get discouraged, especially around the message queues. Just because, like Francis was saying, the semantic conventions are a little bit tricky, especially when the implementations don't really follow the conventions, so it's hard to kind of make it fit, right? So I think Bunny is experiencing some of the same pains that I did with ruby-kafka. Like, I opened it on November 18th, and when did it come...
D
Yeah, cool. This PR I'm trying to get over the finish line. For some reason we have the CLA check, which passed previously for everybody that contributed here, but is failing right now; or, not failing, but we're waiting for a status to be reported.
D
This has happened a few times in the past, and pushing up an empty change has been enough to kick it and fix the problem, so we'll see whether that happens, and then I can merge this one. But this is pretty much done at this point. It's really helpful, and we have people asking for this in other contexts as well.
D
New Relic has some functionality in their instrumentation, which is thankfully license-compatible: they have some functionality to obfuscate SQL queries. This PR is just adding the MySQL implementation of that; they've implemented it for a bunch of other databases as well, or a bunch of other query languages. Once we have the instrumentation for things like Postgres, we can start to extract this into a more general form and probably stick it into the common, like, the opentelemetry-common, gem.
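
A toy illustration of the obfuscation idea (replace literal values in db.statement with a placeholder). This is not the ported New Relic implementation, which handles per-dialect quoting rules and many edge cases this regex does not.

```ruby
def obfuscate_sql(sql)
  sql
    .gsub(/'(?:[^']|'')*'/, '?') # single-quoted string literals
    .gsub(/\b\d+\b/, '?')        # bare numeric literals
end

obfuscate_sql("SELECT * FROM users WHERE email = 'a@b.com' AND id = 42")
# => "SELECT * FROM users WHERE email = ? AND id = ?"
```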
G
There is, I guess... one, two, three... like: is there a spec around SQL obfuscation in the, you know, in the clients? Is there any real... I feel like it's a cross-language thing to some extent. Definitely a good... you know, we can add whatever, yeah.
D
It is a... it is a cross-language thing. I don't remember whether there are any conventions here. Okay.
G
Do you think they would want to just donate it and say, like, hey, look, here's our big... here's a press release, we did a bunch of stuff? Like, I don't know, it would be nice to have, like, some... it's just... I see it...
G
You know, it just feels like a common issue that everyone's going to ask for, and it'll get implemented a million different ways. And so, yeah, I don't know. I know Datadog doesn't have anything like this; all we have is in our agent. We have obfuscation there, but that's just in Go.
D
Yeah, so... I know when this was being discussed at the spec level previously, it was suggested that this should generally be done at the collector level.
G
Maybe we need to... I don't know, just kind of spitballing while you're doing all of these. I wish Michael... if Michael were here, I would be curious what his opinion was, what New Relic feels about, like, maybe including this more generally.
D
Yeah, yeah, that's a good point. We should see if we can encourage him to join next week's meeting and maybe discuss it there. There is this point... sorry, I just want to mention there is this point here: in the spec, under core-level attributes, db.statement has a side note that the value may be sanitized to exclude sensitive information. There are no actual details about how that should be accomplished, or what kinds of things are permitted or not permitted there. Cool; that's as specific as the specification gets.
C
I wanted to just chime in and say that, more broadly (and I know we interacted on Gitter about this), sanitizing outgoing values without going through a collector is extremely important for us, and as the API is written, it makes that extremely hard. You know, we've got a proof of concept that does some dodgy things, setting instance variables on things that are supposed to be frozen, and it works. But, you know, if instrumentation doesn't necessarily sanitize something before it gets... before it's ready to be emitted to a collector...
C
Okay, yeah, that actually... that wasn't clear to me. I thought that was sort of a "both of these approaches are not great, and you can pick the lesser of the evils" situation. But actually, that's really helpful to know that that's what's supposed to be done, because we can go and do it in a span processor instead. We didn't know if that was encouraged, necessarily, or if it was sort of... yeah.
D
I believe it is encouraged; it is the, or at least it's the recommended, place to do this. So there are built-in span processors that do batching and conversion, but...
D
So it means that you should be copying: if you need to modify something, you basically have to create a new span data instance, or a new span, and copy all the data into it, removing or filtering whatever you need, and then pass the copy along.
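
One way to realize the copy-and-filter advice, sketched here at the exporter boundary (the same idea works in a span processor). It assumes the SDK's SpanData behaves like a Struct, so #dup yields an unfrozen shallow copy whose members can be reassigned; the class and key names are illustrative, not from any released gem.

```ruby
class SanitizingExporter
  SENSITIVE_KEYS = %w[db.statement http.url].freeze

  def initialize(exporter)
    @exporter = exporter
  end

  def export(span_datas, timeout: nil)
    sanitized = span_datas.map do |span_data|
      copy = span_data.dup # never mutate the original; it may be shared
      copy.attributes = copy.attributes&.reject { |k, _| SENSITIVE_KEYS.include?(k) }
      copy
    end
    @exporter.export(sanitized, timeout: timeout)
  end

  def force_flush(timeout: nil)
    @exporter.force_flush(timeout: timeout)
  end

  def shutdown(timeout: nil)
    @exporter.shutdown(timeout: timeout)
  end
end
```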
D
So, I mean, it's not terribly efficient from a memory point of view, but it is the supported way of doing it. Part of the reason for this is that there's a possibility that you have multiple span processor pipelines running in parallel, and there's no defined way of synchronizing between those. So, in general, the data is expected to be read-only.
C
That makes sense. Would people be interested in a contribution of a filtering span processor? We're going to have to run a generic one that sanitizes all sorts of things in a configurable fashion; if that's useful to people, I'll make sure it's compatible and contribute it back.
D
Yeah, I think that would be really useful. One thing to think about is whether you want to work at the span...
C
Okay, yeah, I'll work on ours a little bit more internally, and if it's cleaned up I'll PR it and see what people think, and then, if that's useful...
D
Yeah, if you wanted to discuss it in Slack as well, that's awesome; we can get some quick feedback on that. Or GitHub discussions, whichever you prefer. I think that's really useful functionality; whether you do it as a separate gem or whether it goes into common, we can discuss that. Yeah, yeah, cool, yeah.
D
I think we would... we would probably like to see that functionality. We do have some stuff internally that's using span processors for a couple of weird things, so it's not an unreasonable thing to do.
D
Yeah, sorry, Robert, I don't have the code in front of me. Do you happen to remember what we're doing with the span processor? I pushed up that weird PR that...
G
I mean, the Datadog one, for example, which is actually a couple of versions behind now, because we don't want people really using it, because the collector's better, is, like, batching traces by trace... like, batching spans by trace, and doing some really non-performant things. So, like, you can do all sorts...
D
Yeah, actually, that's a... that's an interesting one. I'm curious whether that's something you would consider extracting into its own processor and pushing up into common. The reason I ask is that we have one service that hacked together something exactly like that.
G
Oh, cool, yeah, I mean, if you send those traces to Datadog, yeah, sure. Yeah, I mean, I think so; I don't know, to be honest. We're trying to move away... like, for context on where we're at: from a vendor perspective, we're trying to move away from having to maintain a bunch of crappy gems and, you know, npm packages or whatever in different languages, and we just want to accept OTLP in our agent. Allow people to do batch span export to an OTLP endpoint in our agent.
G
Probably. So, I don't know how actively... I can probably generalize that. All it is is batching by trace ID instead of by, you know, span count. And then, unfortunately, what ends up happening is you have to also implement a drop mechanism, because you can't really flush; and the drop mechanism is, like, you start randomly dropping traces instead of dropping the last trace. But it's tacky in its own ways, yeah.
D
Yeah, okay, yeah. I understand the problems, and we want to move away from that too, but we don't really have a general tail-sampling solution at the moment, so that's why we've been kind of forced to do that. Sorry, I want to mention the two use cases that we came up with for a span processor. One is the ability to override the environment tag on a span.
D
The reason we do this is that we're using the SignalFx UI for looking at the trace data. So, looking at the service graph, there's an environment selector. Our team... or, we have two teams that do interesting things with tracing. One is our team, which is trying to trace the trace export pipeline, and for services that have very little traffic...
D
...you know, somebody stands up a service, they add tracing, they're not really receiving any traffic yet: the only spans they see when they look at the UI are the spans from the export pipeline.
D
So that's one use. Another team is doing something interesting where they trace the job worker loop; Resque workers, for example. Once you get into complex scenarios, Resque workers can actually do a whole bunch of work just to figure out what job to work on next, so they're hitting a bunch of external data stores.
D
We have weird things like checking replication lag on MySQL, and blah blah blah. But that team wants to be able to trace their worker loop so that they can fix performance problems in the work loop. Again, that's kind of noisy, so, having the ability to tag with a different environment.
D
That means getting past head sampling, and getting past any uniform downsampling that might occur in the collector. The way we're doing that is by setting the sampling priority and then also using baggage: we set a value in baggage that says force tracing true, and then that propagates along with any requests that are made within that span lifetime.
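
A sketch of the force-tracing flag described here. The 'force.tracing' key is an internal naming convention, not an OpenTelemetry standard, and the Baggage call follows the current Ruby API shape (older releases exposed it as OpenTelemetry.baggage).

```ruby
require 'opentelemetry/sdk'

ctx = OpenTelemetry::Baggage.set_value('force.tracing', 'true')
OpenTelemetry::Context.with_current(ctx) do
  # Any instrumented outbound request made here carries the entry in
  # its `baggage` header, so downstream samplers can honor it.
end
```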
D
So that looks like an effective solution. It's actually a really small amount of code, and that's probably something that we should push up into common as well.
D
Cool, we've got like four minutes left, so I'm just going to pop back to the other issues we had here.
D
Sorry, the other pull requests. Zipkin exporter: some dude is working on that, and, you know, it should be done soon here.
G
Yeah, it's mostly... there are a few updates from your last round of feedback, and then I think it's mostly... I think it is good after that. Any other reviews, Robert, Francis, or anyone, would be appreciated. I didn't have any familiarity with it until last week, so it's possible I missed something.
D
Okay, no worries, I'll take another look at that. There was that code that you copied from the Jaeger exporter, which makes me realize that I probably should have fixed it when I copied it from wherever I copied it from.
G
There was... I do have some questions around... I wasn't totally sure how to do some of the retry stuff. I just sort of looked at prior art for some of our retry behavior and stuff like that, kind of looked at what we were doing for OTLP in terms of retrying, but I'm not sure if any of that was OTLP-specific. So it's possible I've implemented some stuff that is maybe not appropriate for the Zipkin exporter. It seems fine to me.
G
Okay. Zipkin didn't mention a ton, but it just implemented a bunch of retry stuff anyway, so I guess it can't hurt. But cool, okay. Yeah, I'll ping again when I have another round of feedback ready, I guess.
D
Okay. Have you... have you actually tested this against a Zipkin collector, or...
G
No, not at all... okay, I should probably do that; it's important. Yeah, I noticed we used to have something running in, I guess, CircleCI, or whatever we were using before GitHub, that was running a live collector. If I recall, there was, like, an environment variable; it doesn't look like we do that anymore.
D
Or you were just doing that locally? "I was just doing it locally; we did have something set up with Docker. I'll test it... I'll test it, it's fine, I'll just spin up some Docker containers." Yeah, if you can just do it locally, that's fine: using an environment variable to run the extra tests, and skipping them otherwise, is totally fine, and then I can test it locally as well. Just provide some instructions on how to configure the collector.
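
A sketch of the env-guarded pattern being agreed on: run the live-collector tests only when, say, ZIPKIN_ENDPOINT is set (the variable name is illustrative), and skip them otherwise.

```ruby
require 'minitest/autorun'

class ZipkinExporterIntegrationTest < Minitest::Test
  def setup
    @endpoint = ENV['ZIPKIN_ENDPOINT'] # e.g. http://localhost:9411/api/v2/spans
    skip 'set ZIPKIN_ENDPOINT to run against a live collector' unless @endpoint
  end

  def test_collector_endpoint_looks_sane
    assert @endpoint.start_with?('http')
  end
end
```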
D
Sorry, let me just jump back quickly here: there's an X-Ray propagator implementation that's in progress. It might be good if...
D
It might be good if anybody who is... I guess, Ariel, if you're doing the OT trace propagator, if you could take a look at the X-Ray propagator as well, since you're gaining familiarity with that part of the codebase. "Yeah, sure, I'd love to." Yeah, cool, yeah. We talked about the Google API core and the OT trace propagator, so that's it. Any other questions? Anything people want to discuss?
A
Just to give some feedback to the group: thank you very much for being welcoming and letting us come in here and tamper.
D
Cool, yeah. Just a reminder to everyone that Slack is the place we're chatting now, rather than Gitter. I will close the Gitter channel, probably today or tomorrow, and yeah, let's move everything to Slack or GitHub discussions. I know Francis started one in GitHub discussions. So cool, cool, okay.