From YouTube: 2020-11-24 meeting
C: I think so; no, it hasn't been too bad. Overall, we've had a few things go in, so.
A: So yeah, I was saying we had a pretty short meeting, but it was, I thought, useful, and a handful of things that have been somewhat troubling have come up, and we at least got a chance to discuss them.
A: OpenTelemetry has this notion of resources, and resource detection, and kind of the merging process, is the core of this problem. So I think ultimately the discussion was: what should you do if there is no service name? Should we end up with some sort of fallback? Should there be multiple levels? There can be a fallback, and there can be, you know, kind of a first-class option for service name, which I think is what we actually have for Ruby, and which I think works well. And then, if a later resource was detected with a different service name, it could always merge over that.
A: There's a pro and con there, in that your service name can technically change as more resources resolve.
A: I think this definitely highlights the problem of having asynchronous resource detection, but the result of this discussion, I think, was that there would at least be some kind of a fallback, so that you would always end up with a service name. It might not be a great service name, and you might end up with similarly named services if you didn't provide a better one, but it's a better experience than no service name, or than failing to start up. Does that make sense?
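The fallback-plus-merge behavior being described can be sketched in plain Ruby. The function, constant, and fallback value here are illustrative stand-ins, not the actual OpenTelemetry SDK API:

```ruby
# Illustrative fallback value; not the real SDK constant.
FALLBACK_SERVICE_NAME = 'unknown_service'

# Merge attribute hashes in detection order: later detectors win on
# conflicts, and if nothing supplied service.name we fall back to a
# default rather than failing to start up.
def merge_resources(*attribute_sets)
  merged = attribute_sets.reduce({}) { |acc, attrs| acc.merge(attrs) }
  merged['service.name'] ||= FALLBACK_SERVICE_NAME
  merged
end
```

Because later detections win on conflicts, a better service name detected asynchronously can still merge over the fallback, which is exactly the pro and con discussed next.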
A: Cool, so the next thing was the request that we store the sampling rate and other sampler decisions on the span. I'm not sure exactly what we're doing right now; I think we're probably not storing these, due to the spec saying that we shouldn't. I know that a sampler can return attributes that should be added to the span, so that was kind of where this discussion was going: you can always have a custom sampler, or you could subclass a sampler, if this was something that you needed, or if there were attributes that you wanted added to the span that weren't already.
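A plain-Ruby sketch of that wrap-or-subclass idea, assuming a simplified sampler interface. The Result struct, the sampler names, and the 'sampler.rate' attribute key are illustrative, not the real OpenTelemetry Ruby API:

```ruby
# Simplified stand-in for a sampling result carrying attributes.
Result = Struct.new(:decision, :attributes, keyword_init: true)

# Stand-in delegate that samples everything.
class AlwaysOnSampler
  def should_sample?(**_args)
    Result.new(decision: :record_and_sample, attributes: {})
  end
end

class RateRecordingSampler
  def initialize(delegate, rate)
    @delegate = delegate
    @rate = rate
  end

  # Delegate the decision, then attach the sampling rate via the
  # returned attributes so it ends up recorded on the span.
  def should_sample?(**args)
    result = @delegate.should_sample?(**args)
    Result.new(
      decision: result.decision,
      attributes: result.attributes.merge('sampler.rate' => @rate)
    )
  end
end
```

Wrapping (rather than subclassing) keeps the underlying decision logic untouched while adding whatever decision metadata the user wants on the span.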
A: Then we briefly looked at the GA burndown. I think this resource stuff is in that list somewhere, although officially the trace spec has been frozen, which probably should have been the thing I started this meeting off talking about. So yeah, I guess resources are not technically part of trace; they are part of the SDK. So we should expect, anyway, that as far as the core trace components go, there should be no new changes, which is good news.
A: There is a pull request for adding provider-specific resource fields to resource detectors. I think most of the resource attributes are kind of generic, and I know, in writing some of these resource detectors and reviewing some of them, that in a lot of instances I feel like you have to find the closest attribute to the thing that you detect.
A: Attributes are being pulled in here, but I think it definitely makes sense, for writing these detectors and then trying to understand the information that you pull off, that you use as close a language as the provider actually uses to identify those things.
A: The idea is to not break traces as they go through possibly multiple tracing providers, and even possibly multiple organizations, because you don't actually know what's going to happen. Maybe service X is going to call back to service A, or something in organization one, and if you have continued to pass context, you have ways to figure this back out.
A: And by using trace state, you have a lot more ways, I guess, of handling these situations and making some decisions about them. So I know, for example, that it doesn't even have to be a multiple tracing provider situation.
A: You would probably want to keep this trace going, but you would probably not want to show org two org one's trace; this would be kind of a slight permissions problem, unless there was some explicit consent that, you know, you can see this tracing data.
A: But again, you still might have the situation where the trace comes back to org one in some way, and you would at least like to show to org one that, you know, a trace originated in service A, left service B, and went into some black box, but came back to one of your services. And using trace state, you can do this. You can have tenant entries in there, where you have a tenant ID that kind of prefixes your entry, so service X would not accidentally look at this information to try to link up the trace, but there would be some tenant information in trace state for service B when it comes back, and organization two can stick their tenant information in trace state, so it's there. So, TL;DR there.
A: There might be things that this other service can do with that, and a trace could come back to you as well.
C: We had traces where we'd called out to some third party, and then that third party had, like, enqueued jobs or something, and way later worked off those jobs and made a bunch of requests back into us. So we have these traces with gigantic gaps: kind of a partial trace for that request back into us, then a huge gap, and another partial trace for a request back into us.
C: And this happened because this third party was also using X-Cloud-Trace-Context, and presumably tracing on their side. They were just accepting our trace ID and passing it back into us, and it was quite confusing. So we used kind of two of these approaches to deal with that. One is that we had domain-based filtering, so an allow list on the way out: we'd only add our headers, our trace context headers, on the way out if we were talking to a service in one of our domains. And then we also had a second header that we used just to validate that this was actually coming from a Shopify service, not from somebody else who happened to be using X-Cloud-Trace-Context as their context propagation format.
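The domain-based allow list on the way out might look roughly like this; the domain suffixes are hypothetical, and a real setup would gate the propagator's inject call on this check:

```ruby
require 'uri'

# Hypothetical example domains; a real list would be maintained config.
ALLOWED_DOMAIN_SUFFIXES = ['shopify.internal', 'example.com'].freeze

# Only inject trace context headers when the destination host is in
# one of our own domains.
def inject_trace_context?(url)
  host = URI.parse(url).host
  return false unless host

  ALLOWED_DOMAIN_SUFFIXES.any? do |suffix|
    host == suffix || host.end_with?(".#{suffix}")
  end
end
```

The drawback raised later in the discussion applies here: the list has to be kept up to date and distributed to every service, so it can go stale.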
C: If I got a request from, you know, some third party, and this was their trace ID, I'm not going to use their trace ID, because I don't trust it; I'm going to start a new trace. But I'm going to record the fact that that was their trace ID, so that if I want to debug things later on, and I want to have some discussion with, you know, support people on their side, I can give them their trace ID and say, hey...
C: ...this is the corresponding trace on your side. So we haven't made the jump to traceparent and tracestate yet, but when we do that, we're probably going to be doing that kind of validation on the load balancers.
A: A lot of the things that you just said were some of the things that we discussed. So I think there probably does need to be this option for, like, an allow or deny list on the way out, for things where you definitely know you don't want to send context. But in general I feel like that should probably be used very, very lightly, and for the most part, hopefully, people are adopting W3C and making use of tracestate.
A: You could go so far as to encode all of the tracing data you need in a Shopify entry and just kind of stick it in there, to the extent where you could flat out ignore traceparent, if you wanted to duplicate some of this data down there, for example the last known trace ID as it passed through, like, a Shopify system, and pull that back out. I feel like that's a little extreme, but there are actually some use cases where doing that actually makes sense. We're getting a little bit off topic, but...
A: No, no guidelines. So I think this is something that you would completely use trace state for, and you would have to implement some sort of, like, custom propagation logic. So, one thing: I was trying to find an example of trace state with a tenant ID, and I should just search for it, but...
A: So it's not really, like, a beautiful example, but your trace state will be key-value pairs, and this is one example of a tenant ID, where this "fw", blah blah blah, everything up to the "@", is like a tenant ID, and then after that is kind of your key.
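For illustration, here is a small parser for tracestate entries of that shape. Per the W3C Trace Context spec, a multi-tenant key takes the form `<tenant-id>@<system-id>` (e.g. `fw529a3039@dt=...` in the spec's Dynatrace example); the header values used below are made up:

```ruby
# Split a tracestate header into its key/value entries.
def parse_tracestate(header)
  header.split(',').map(&:strip).to_h { |entry| entry.split('=', 2) }
end

# Return only the entries belonging to a given system, keyed by the
# tenant ID that prefixes the "@" in each multi-tenant key.
def tenant_entries(header, system_id)
  parse_tracestate(header).filter_map do |key, value|
    tenant, system = key.split('@', 2)
    [tenant, value] if system == system_id
  end.to_h
end
```

A service that doesn't recognize the system ID simply never matches these entries, which is the point: service X won't accidentally try to link up a trace using another tenant's state.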
A: So trust rules would be something that you would have to, or, I think the intent is that the owner of, I guess, the propagator would implement this. So on extract, you could imagine that you only want to try to link up this trace if you found that it has come from a Shopify service. Or, if you know there was no traceparent, for example, then you're obviously going to start the trace and add in, like, your own Shopify entry; but your Shopify trace state can kind of contain all the information that is needed for the next time.
C: Okay, so you don't need to do anything complex, really. You just need to look at these two headers, and that should be enough to make the decision.
A: Yeah, look at these headers, and if you need to make a decision, have a custom trace state entry. So this tenant ID is, like, one way that you could implement multi-tenancy within, like, a single tracing vendor, for example. So this is the Dynatrace...
A: You could just have, like, a Dynatrace entry that everybody knows how to read, and stick, like, an account ID in there, and then only link up those traces where the account ID matches. Although I do think that the tenant ID is a more elegant solution to this, because then you're only looking for a trace state entry for you, and it also makes it easier if a trace actually passes through multiple tenants of this vendor and returns. Getting slightly off topic, but...
C: So sorry, just one more thing: this is sort of assuming that you're using a third-party trace vendor, so, you know, whether that's a Datadog or a Lightstep or a SignalFx or whatever it is.
C: If you want flexibility in changing vendors, or working across different vendors, then I guess this becomes a little bit more challenging, and maybe you just want a single trace state entry, like "shopify=", to indicate that this is stuff that's relevant for us. Yeah.
A: So it's kind of, it's written from that perspective, and I don't know how much it considered the case where you also have an organization on top of it that has stuff that they would like to pass in trace state. But I don't think that...
A: So I think the conclusion was that, in some ways, this is a feature and not a problem; sometimes you would like to leak. But I think the other thing was, sometimes, you know, you don't want to pass context, or you just need to turn this off for some reason, and there should probably be an allow/deny list for outbound requests. But I think...
A: I think, well, I agree that it is a feature, and you probably do want to propagate this. You may want to have, like, a way to prevent this as well.
C: Okay. Based on the structure of the API, is it likely that that allow/deny list would be implemented at the trace propagation layer, or would it be something that you'd have to build into every bit of instrumentation that needs to pull something off the wire?
C: So, yeah, I mean, I guess this hasn't been worked through the spec committee, or whatever, yet, but at some point we're going to need to figure out how to structure those allow and deny lists for all our instrumentation, at HTTP and so forth.
C: No, I mean, that feels right, yeah. Unless, you know, we define another layer here, which is like the common helpers, right?
C: If that makes sense. I'm not sure that's true, in the same way that we configure, you know, the resource and the export pipeline and all the rest of it in the SDK, and those are not API concerns. Configuration of the allow list or deny list, or anything like that, could happen there; that could be a concern of the SDK. The mechanisms for doing that don't necessarily belong in the API.
A: Yeah, I think that's what I was trying to say: the list management stuff can live elsewhere; it doesn't need to be in the API. But for the actual list itself, there needs to be an accessor on, you know, the OpenTelemetry module to get at this list, basically at the API level, and there needs to be, you know, an agreed-upon structure for this list, so you can at least read the entries.
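One possible shape for that, purely as a sketch. None of these names are spec'd OpenTelemetry API; the point is just that the accessor and the agreed-upon entry structure sit at the API level, while how the list gets populated lives elsewhere (SDK, vendor code, configuration):

```ruby
# Hypothetical stand-in for the OpenTelemetry module; illustrative only.
module OpenTelemetryish
  # Agreed-upon entry structure: a host pattern plus an allow/deny flag.
  PropagationRule = Struct.new(:pattern, :allow, keyword_init: true)

  @propagation_rules = []

  class << self
    # API-level accessor; list management (populating, refreshing)
    # would happen elsewhere.
    attr_accessor :propagation_rules

    # First matching rule wins; default to propagating when no rule matches.
    def propagate_to?(host)
      rule = propagation_rules.find { |r| File.fnmatch(r.pattern, host) }
      rule.nil? || rule.allow
    end
  end
end
```

Instrumentation (or a propagator) would consult `propagate_to?` before injecting headers, so each HTTP instrumentation doesn't have to invent its own list format.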
A: So it sounds like, I don't know, it sounds like Shopify has this need, and you may have just run into it.
C: A couple days ago, it was more that somebody was writing instrumentation for an HTTP client. You know, we already had instrumentation for Net::HTTP, and the Net::HTTP instrumentation was written a long time ago.
A: The thought that was just going through my mind is that, when you pull context off the wire, at least in Ruby, the carrier is usually the Rack env, and you might have some sort of header, like, available to you to make this decision. I don't know if you can always rely on such a thing, but...
A: I feel like, in the event that you're starting a server span, you have a lot more information than when you're starting a client span, maybe I'm missing something, to kind of make these decisions, if you wanted to make it based on, you know, where the thing is coming from, or possibly where it's going to.
A: If you think that we do need this list, or if we need to talk about the list, it might be useful to comment on this issue, since it was already here and it has this nice diagram, and, I think, acknowledging that in many cases you do want to pass this, but in some cases you do need to have, like, a strong-armed ability to turn this stuff off.
C: Yeah, from my perspective, I feel like the choices we made weren't the greatest. They worked, but they just weren't the greatest, not least because we need to maintain an allow list of some kind, and I actually don't know if ours is still up to date. And then you need to propagate that allow list to all your services, and, you know, of course, your services update their dependencies at different times, so this can easily become stale.
A: But yeah, I mean, this allow list mechanism is something that I've seen end up in a lot of tracing clients. So if we ultimately need to have it in OpenTelemetry, I'm not going to be overly surprised, but it would be really awesome if there were ways to be a little bit more clever about things and do this in the propagation layer.
A: No, it's no problem. Yes, it was a shorter spec SIG because of the Thanksgiving holiday here; it seems like a lot of people have kind of taken the week off. So we had some more in-depth conversations about a handful of things. I'm sure that meeting is recorded if you want to, like, go for a deeper dive, but...
B: I have a Kafka PR I would love some eyes on at some point. It's pretty simple. I consider myself really, really green with Kafka in general, so instrumenting it was interesting, because I had to learn what it does and how it does it. I pulled a lot of inspiration from the dd-trace gem.
B: I made it painfully repetitive and simple for the notifications. I don't know, I'm sure there's lots of room for improvement; I just tried really hard to get it to a point where people could kind of pick it apart and help make it better. So this is, I think, a decent starting point, but I don't really consider it to be, like, done.
B: It's kind of tricky: in Kafka, the producer, or the consumer, has this method for each message, or each batch, and using the ActiveSupport notifications I can create a span after the fact for when a message is processed. But if we wanted to propagate, like, if you want to actually create a span such that any call that's wrapped in this each_message would, you know, continue that chain, that's not how it is right now, and I'm not sure how to get around that.
D: Yeah. I can thank you for bringing it up; I'm sorry, I'm so behind. I don't think, at least in Datadog's vendor instrumentation, we handle any queue propagation well at all. I don't know if there's a single queue that we propagate; maybe, like, RabbitMQ in Java or something, there's a way to do it, but otherwise I'm pretty sure we don't handle propagation among queues in the least bit. So I don't think you missed something; at least, it's a huge shortcoming.
B: Yeah, I just found it tricky, because I think it's process_message where, like, it just could be a lot better. I know that, like, internally, our instrumentation of Kafka for Shopify, we recently ran into some gaps.
D: Do you guys maintain a single trace between, like, producers and consumers, or whatever they are, I don't even know, or is it two different traces?
C: We have a specific use case for tracing Kafka at the moment, and that is monitoring Kafka itself.
C: So, you know, we have a team that's building reports about things that appear to be consumed that were never produced, things that were produced and never consumed, that sort of thing. And so that team only cares about tracing the actual produce and consume, and not anything that happens downstream.
B: There are message headers, so that's what I'm injecting into here. But in this, like, process_message, if you actually click on the link that I have in my comment there, that'll bring you to, like, the underlying implementation there.
B: So, because you have the instrumentation hook, you can do it, but any code that executes within there doesn't have that context available to it. So, like Francis was saying, it's reliant on whoever's using, say, this method, the each_message from the consumer; they would have to be extracting the context from the message headers. So this, like, instrumentation only goes so far, so to speak.
D: Okay, I'll have to review it. Thank you for bringing it up. I definitely haven't looked, but I appreciate you doing the work.
B: And then all of these, like, notification subscriptions I did: it's brutally repetitive, but for me, I just found it, like, simple to read when I was implementing it, because this is the first time I've even used ActiveSupport notifications, and I don't know anything about Kafka, so I was just like, I'll keep it really simple. It probably could produce a lot of duplication, but, I don't know, I'll leave that to the crowd.
A: All right, so two questions. I just wanted to clarify: are you saying that you are injecting context, but it is up to the user to actually extract it in this each_message method?
B: So yeah, if you go, sorry, to go back to my instrumentation, there should be a patch folder with, for the producer. So we should have, because in the patch I am injecting the headers, and then in that file that you were just on, I'm extracting it, so, like, it will create a span for it. So I think, no, it should.
C: Yeah, sorry, the spec has some recommendations for distinguishing between batch receiving and batch processing, and how spans or traces should be linked in each case.
C: The problem is that it assumes that APIs are going to be structured in a way that allows you to do both, and in practice the ruby-kafka APIs are not structured in a way that allows you to do this without sort of imposing on the clients of ruby-kafka.
C: So part of the problem here is that it's hard to provide instrumentation out of the box that does the right thing for everybody.
C: You kind of need to configure it depending on your use case, or expose the headers, effectively, to clients so that they can figure out what they want to do. And then it becomes, like, it's sort of problematic for clients as well, because the way they instrument their code is going to have to depend on what kind of analysis they want to do with it.
D: Yeah, I don't have enough context here around Kafka. I just know, from other sort of batch instrumentations like this, that it can be very problematic and the solutions are really disparate; it's like, you know, SQS and SNS have their own pitfalls, RabbitMQ has its own. So anyway, thanks for writing it; thanks for taking a shot here and doing something.
B: Yeah, sorry to circle back, and I can't answer your question. So there's a patch for the producer. The client stuff, like here, where I left the comment: they don't actually recommend using any of this stuff out of the client; they're just like, this is for demonstration purposes. So the patch for the producer here: it's just wrapping the call, and we're injecting the propagation context into the headers here.
B: So on that line, it's just letting it do its thing, and then, if we go to the event for the consumer, process_messages or process_message, I'm extracting that context, and we're just creating a span after the fact that reflects it.
B: So there, yeah. Basically, we're taking the notification, we're grabbing from the payload, setting a bunch of tags. Most of the stuff is similar to Datadog, just because that's something easy to lean on. We create a new span with the start time, and then I explicitly finish it with the end timestamp that comes out of ActiveSupport notifications. But I do pull the parent context out of the headers, and when I create the span, I make that the parent context of the span I'm creating.
A: Yeah, because you're kind of creating these spans after the fact, if something did happen during process_message, it would not be parented in the right place.
B: Unless, like, ActiveSupport has something that I'm not aware of that we could leverage; that's possible too, that ActiveSupport might have, like, a start and end that we can hook into to do this, but I'm not familiar with it enough to, like, know. I could dig into that, but...
A: So I think there are a variety of ways that you can wire up a subscriber to an ActiveSupport notification, and my memory may be failing me, but I believe that you can subscribe an object; then, yes, you get a certain interface, and start and finish would be methods on that interface, and I think that this block form will trigger the right events at the right time.
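A minimal sketch of that object form: when you pass `ActiveSupport::Notifications.subscribe` an object instead of a block, the object's `#start` and `#finish` methods are called around the instrumented section, so a span could be opened before the user's code runs and child spans would parent correctly. The span bookkeeping below is a plain-Ruby stand-in for a real tracer, and the event name in the commented subscribe call is hypothetical:

```ruby
class SpanSubscriber
  attr_reader :finished

  def initialize
    @stack = []
    @finished = []
  end

  # Called by ActiveSupport before the instrumented block runs;
  # stand-in for tracer.start_span(name).
  def start(name, _id, _payload)
    @stack.push({ name: name, started_at: Time.now })
  end

  # Called after the instrumented block completes; stand-in for
  # span.finish.
  def finish(_name, _id, _payload)
    span = @stack.pop
    @finished << span[:name]
  end
end

# With ActiveSupport loaded, wiring it up would look like
# (hypothetical event name):
# ActiveSupport::Notifications.subscribe('each_message.ruby_kafka', SpanSubscriber.new)
```

This is what would let the span be open while the message is processed, instead of created after the fact from the payload timestamps.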
C: Does the notification include the message here? Yes, message headers, so you do have that data. So yeah, if you take a look, sorry, I know this is meaningless for everybody else, but if you take a look at our shopify-tracing Rails instrumentation, there's a subscriber class there, and that implements the expected interface, so that you get the start and finish.
C: If you're not, yeah, there's a pitfall where, if you've got a chain of notifiers, or sorry, a chain of subscribers, and one of them raises, then the rest of the subscribers won't be notified, and that could happen for either the start or the finish. So it's possible to actually get mismatched calls to start and finish.
C: Oh, so we have a bunch, like, if you're looking at the code, you'll see a bunch of complexity there to deal with that specific thing; that's why we have, like, the lookups that we're trying. Exactly, exactly, yeah. So, effectively, your context stack, if you want to view it that way, your context stack can get out of sync if you're using ActiveSupport notifications.
A
Yeah
so
coming
back
to
this,
does
it
make
sense
to
explore
trying
to
wire
up
a
a
subscriber
with
start
and
finish
methods
so
that
you
can
start
the
span
and
potentially
any
work
that
happens
within
the
message
would
be
parented
there
properly
and
as
a
result
of
this
exploration,
kind
of
see
see
what
happens
a
if
something
in
a
chain
of
subscribers
does
raise
like
have
more
current
versions
of
rails.
B: Yeah, I think it makes sense to, like, make an effort here, and, like, the test that I have for this notification is pretty simple right now; it's pretty, like, thin. So I could add a case where there are multiple subscribers and one of them happens to, like, raise in the middle, and see what the behavior there is. And then, once we see what it is, we can kind of iterate on that and see how we want to proceed forward. I'm going to lean a little bit on the crowd, just to get kind of a group opinion, because, like, I don't have a lot of experience with it.
C: Yeah, I think, worst case, you could monkey patch. I know I'm not that fond of using monkey patches for this stuff, but you could monkey patch this method, right, this being, what is it, each_message.
C: Right, yeah, so the yield is going to happen for each message. So what you would do in your instrumentation is: you're calling super with a block; the first thing you do in the block is trace, which yields, and then you yield, right? So it effectively becomes an around for the yield that's happening here. All right.
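That super-with-a-block wrapping could be sketched with Module#prepend, roughly like this (FakeConsumer and the trace helper are stand-ins for ruby-kafka's consumer and a real tracer):

```ruby
# Stand-in for ruby-kafka's consumer: yields each message to the block.
class FakeConsumer
  def each_message
    %w[msg-1 msg-2].each { |m| yield m }
  end
end

module TracedEachMessage
  def traced
    @traced ||= []
  end

  # Stand-in for starting/finishing a span around the yielded work.
  def trace(name)
    traced << name
    yield
  end

  # Call super with our own block, so each yielded message is wrapped
  # in a trace before the caller's original block runs.
  def each_message(&block)
    super() do |message|
      trace("process #{message}") { block.call(message) }
    end
  end
end

FakeConsumer.prepend(TracedEachMessage)
```

Because the span is open while the caller's block runs, anything instrumented inside the block parents under the processing span, which is exactly what the after-the-fact notification approach couldn't do.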
C: And actually, given that this supports each_message and each_batch, you can distinguish the two cases that I linked to, the batch processing versus batch receiving, and I think you should be able to do the right thing for each of these cases. It may be wrong, but it would be worth trying to match the semantic conventions here.
C: We're over time; I just wanted to point out that we actually have had decent progress over the last couple of weeks: we've had 10 PRs merged. You know, some are kind of minor clean-up things, but we've had, I think, a few significant pieces of instrumentation merged in the last little while. Sorry, this should be in the opentelemetry-ruby stuff.
C: Yeah, we should, actually. I can't remember when our last release was, but we probably have had one breaking change since then.
D: Okay, did we have to change anything? I know, I apologize for talking over. Did we have to change anything with the span status stuff that went out? You know, OTLP, they did all these changes to span status. Is that something that only happens with OTLP, like in the collector?
D: And yeah, I think we should cut a release. I know I've intended a thousand times to get the GraphQL thing done before then; I can try to get it done by the end of the week. If it doesn't get done, feel free to cut a release without me. It's not, you know, it's not a big deal to me, but yeah.
B: I'm trying to make a push for us to, like, start migrating stuff internally at Shopify to be using this gem only, as soon as possible, and I think this one's required for us. Hopefully Francis is invited; I'm just going to jump over to that PR and start adding comments to it.
C: No, go ahead, that'd be great. Yeah, context with this: I think I put this PR up, and then internally we implemented the same thing, with tests, in our shopify-tracing gem. So really we just need to backport any fixes and the tests into this PR, and then we should be good to go.
A: Yeah, same; I've been feeling especially swamped, so don't let us hold up progress, and always flag me down if you need something from me. But I'll have my eyes out for this, and anything coming for the release.
D: Cool. I have one thing; I'm actually not sure if it's appropriate for the meeting, so I apologize, I'm not sure what the code of conduct is here. Datadog's looking for, like, a senior hire here in Ruby. I won't try to recruit you people, because you all seem happy and it's super rude to do that, but if you know someone looking, feel free to let me know. I just put down the only person I knew, which was, like, you know, DHH.
B: Do you have, like, a posting? I have a community from a bunch of people at a previous company, and if it's remote, yeah.
B: Oh, it's all, like, my previous company; there's about 30 people in there.
C: Yeah, and, you know, if you get anybody applying for it: we're looking for people at Shopify as well, so, you know, maybe we can compete for this pipeline.
A: Okay, so we are well over time. It sounds like action items are: keeping an eye out for the release; this PR we would like to go in, and anything else is probably optional; and then reviews on Kafka, but it sounds like there's going to be a little bit of work there, to see if we can hook into the start and finish events and what that might look like.
B: I don't want to let that block anyone from reviewing, because I think most of the change will be around one event; because a lot of the other events, it's process_message and process_batch where we're going to make these changes, and I think the other stuff, the way it is, is fine. So yeah, don't let that hold anyone back from reviewing, or even just, like, looking at the comments I left, because I tried to highlight the parts where I was most unsure, or just points of interest.
A: Great, yeah. I'll keep an eye out for release-related PRs, I'll try to get some eyes on this Kafka PR, and see you all next week.