From YouTube: 2021-09-21 meeting
B
How about yourself? Pretty busy, though a different kind of busy than I'm used to. Basically, the work I've been doing with Francis: instead of it just being me, there's now a team being built. It went from me, with Francis overseeing, to me, Tim (you've met Tim), Eric's there, and then we have a new manager as well.
B
That's managing this team. So I'm kind of working on getting everyone up to speed and setting out a bit of a roadmap for the team, things that we should be working on. I'm more used to tinkering with lines of code than, you know, Word documents.
A
B
Yeah, lots of plans. I'm fortunate in the regard that I know this is a transition phase, so I get to be involved in steering it, and that feels good. And I know I'm going to get back to focusing on code. It'll probably be a bit more of an emphasis on doing the code stuff, but also, you know, working with the wonderful Tim, and also proposing projects when something comes up that I think is interesting.
A
Yeah, there are definitely some pros to that role. And it helps when you have an awesome team, which, as far as I know, you do.
A
No, it's great for you. I bought one several years ago; I invested in a fairly decent model, and back then I wasn't remote. I was actually going into the office, and I thought it was a total waste of money, but I still managed to use it enough then. And I was remote before covid, and I use it extensively.
B
I'm an AeroPress kind of person. I've been really happy with that: just using an AeroPress, getting a good burr grinder, and taking the time to find coffee beans. I've been super happy with that. I don't actually drink that much coffee anymore, but when I was having a cup a day, that was perfect.
A
Yeah, I've gone through all the things. AeroPress and pour-overs are kind of in the same vein.
B
But we had machines at Shopify, and I tried finding the price tag on these machines, because they were commercial units. It wasn't just an espresso machine: it had a display, and you'd pick, do I want a shot? Do I want a coffee? What do I want? And you could see the hopper on the top, and it would spit out these perfectly compacted ground cubes, or cylinders, afterwards. Honestly, I think that was probably the best coffee I've ever had.
B
But from my searching, you don't ever find websites that actually list the prices of these units, because normal humans don't buy them. I found one thing that mentioned a price, and I don't know the accuracy, but I think it was like 20 grand or something. You know what I mean? People take coffee seriously, like really seriously.
B
I don't think they bought them. There's probably a coffee distributor company in Ottawa that supplies these machines to all these companies, which would hopefully make sense. But I have a friend who I was asking for recommendations for a burr grinder, and he's like, oh, you don't want my recommendation. I was like, okay, well, what do you have? He linked it to me, and I think he spent 900 on it.
B
A
They had a physical shop in Portland, and basically, I think, most of the stuff that they have, even their lowest-end stuff, is still pretty good. Oh yeah.
B
A
Okay, I'm really going to try to get through these things much faster than the actual SIG itself goes. There was a metrics update. It seems like the metrics SDK spec has been added as an experimental release, and they're focusing on getting into a feature freeze. Feature freeze, I think, means feature freeze: they really want to get it to a point where they do not need to add anything new to it. So I think the key takeaway is that language clients are encouraged to start working on an implementation.
A
Yeah, I think this one went through pretty quickly, but I think because a lot of this stuff is in trace state, it simplifies where things actually need to be stored, especially getting this data onto a span.
A
D
I guess, because trace state is already implicitly stored in a field in OTLP, we don't need to assign a special field for this.
A
Yeah, instrumentation update. I think Robert will actually fill us in on this a little bit more, but for this purpose, they were just mentioning the channels and the various meetings that are happening, and that generally there seems to be a lot of interest in these meetings; so much so that the people interested are in many time zones, and they were trying to figure out how to accommodate all the interest.
E
A
There's interest, definitely, in the European time zones, and also in APAC as well.
A
So basically, as I understand it, you have your histogram, you have your bounds, and you have your counts in your bounds. And because it's easy enough, while you're adding things to your buckets, to compute a min and max, this is part of the histogram, and there are some uses for it. But there are both, you know, cumulative histograms and delta histograms, and it seems like the min and max would work differently between those, and might not... yeah.
A
It would have different semantics in the cumulative situation, and I think this becomes a problem when you want to merge histograms and do operations that involve more than one histogram. That was the takeaway I was getting. So it was kind of like: we can add this, but it doesn't really work properly for cumulative, or you need to interpret it differently for cumulative. Yeah.
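The merge problem being described can be sketched in plain Ruby (illustrative names only, not the OTel metrics SDK): for two delta points the combined min/max is well-defined, while a cumulative point's min/max already covers its whole lifetime, so the same rule doesn't carry over.

```ruby
# Illustrative only: a delta histogram point covering one collection
# interval, with the min and max observed during that interval.
DeltaPoint = Struct.new(:count, :min, :max)

# Merging two *delta* points is unambiguous: take the min of the mins
# and the max of the maxes across the two intervals.
def merge_delta(a, b)
  DeltaPoint.new(a.count + b.count, [a.min, b.min].min, [a.max, b.max].max)
end

merged = merge_delta(DeltaPoint.new(3, 1.0, 9.0), DeltaPoint.new(2, 0.5, 4.0))
# merged.count == 5, merged.min == 0.5, merged.max == 9.0
```

For cumulative points the later point already subsumes the earlier one's observations, so applying min-of-mins would double-count history; that is the "different interpretation" being discussed.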
A
I think the discussions were about whether or not there was a good way to make that work, and then there were other discussions about why there is even a min and max on the histogram to begin with: why don't you just make these separate metrics that you track, if you actually care about those things, and remove it from the histogram itself? But the end result was, I think, after half an hour of talking about this, they decided to.
A
This is somewhat of a formality around gzip, but basically the spec is somewhat strict, in that the default compression should be none for an exporter. But for, say, OTel JavaScript, especially in the browser, the default is, and should be, gzip. So I think they just want to loosen this restriction so that you can change the default to make the most sense for your language.
A
And also, gzip should be an option. There has been some debate around what compression algorithms should be supported.
A
D
Yeah, reading through these issues, it looks like they want to remove any suggestion of alternatives other than none or gzip. So those are the only two that are supported, in part because the addition of any other compression algorithms would require support in the collector as well.
D
The original proposal was from somebody associated with rue. Somebody wanted this in our SDK, and he opened the spec issue basically saying gzip should be the default. So instead of moving ahead with that, they want to make it a choice between gzip or none as the default.
D
Does anybody know of the rationale for not having compression? I think we're compressing, right? We're using gzip by default at Shopify.
E
Interesting. Did you hear audio come in when he took his mic off? I heard white noise, but then he just disappeared. Or is he returning? Yep, that's better.
D
B
I don't think we have it enabled by default. That's a good point. I don't know the rationale for having it off, but I'm also just not familiar enough with the implications of it: what are the consequences of having this on by default? Who or what could potentially be harmed by doing this?
A
I think I have heard a little bit. I've always been under the impression that you probably want to gzip: the benefits far outweigh the disadvantages for network transport. But I think the reason I've heard for not doing it is, if you are sending your spans to a remote collector, then you would like to use it, but if you're sending them on the same machine, or with less network traversal, then not gzipping could be better for you.
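The network-versus-CPU trade-off can be seen with Ruby's stdlib Zlib (the payload here is a made-up stand-in for an OTLP request body, not real exporter output):

```ruby
require 'zlib'

# A repetitive JSON-ish payload standing in for a batch of spans;
# telemetry bodies compress well because they repeat keys and values.
payload = '{"name":"HTTP GET","attributes":{"http.method":"GET"}}' * 100

compressed = Zlib.gzip(payload)

# Far fewer bytes on the wire, at the cost of CPU on both ends, which
# is why it pays off for a remote collector but not for localhost.
puts "#{payload.bytesize} -> #{compressed.bytesize} bytes"
```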
D
Yeah, that's fair. So I think with the typical agent, either the sidecar or the agent DaemonSet approach, it probably makes sense to not gzip, because you're going to pay the compression cost, the decompression cost, the serialization and re-serialization.
D
So you probably just want to pay the compression cost when you're leaving the node. Yeah, that's reasonable. Our deployment topology is not like that at Shopify, so I thought we were gzipping; but maybe that's in the old instrumentation libraries that we have, so we might want to revisit that.
D
All right, you could delegate it. I think you've got a couple of folks here who could take that one.
A
Cool. These last two issues are about the default trace id generator, I believe.
A
Bargain is suggesting that the most significant 32 bits of the trace id, so the left-hand side, should be a timestamp.
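That layout can be sketched in plain Ruby (illustrative, not the proposal's actual code): the top 32 bits hold the Unix timestamp and the remaining 96 bits stay random.

```ruby
require 'securerandom'

# Illustrative generator: 8 hex chars (32 bits) of Unix timestamp,
# followed by 24 random hex chars (96 bits), for a 128-bit trace id.
def timestamped_trace_id(now = Time.now.to_i)
  format('%08x', now & 0xffffffff) + SecureRandom.hex(12)
end

timestamped_trace_id # 32 hex chars; the first 8 encode the current time
```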
A
This has some advantages, I guess, if your backend wants to roughly know the ordering of spans as they come in.
A
I'm not sure what all the implications are for this. I don't know. One thing that comes to mind is that there are things like B3 that kind of have this hazy:
A
"We support 64- or 128-bit trace ids," but it's not super clear what happens when you're interoperating between those systems. So I think OTel kind of made some decisions there, and basically made the decision to just left-pad with enough zeroes to make the id 128 bits long.
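The left-padding decision amounts to something like this (a sketch, not the SDK's actual propagator code):

```ruby
# Pad a shorter id (e.g. a 64-bit B3 trace id, 16 hex chars) out to
# 128 bits / 32 hex chars by prefixing zeroes; full-length ids pass
# through unchanged.
def pad_trace_id(hex_id)
  hex_id.rjust(32, '0')
end

pad_trace_id('abcdef0123456789')
# => "0000000000000000abcdef0123456789"
```

Note the interaction with the timestamp proposal: a padded B3 id would carry zeroes, not a timestamp, in its most significant bits.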
A
Yeah, that's one thing that came to mind. I don't really know what other implications there are.
F
It might be good to get AWS's input on how that approach has worked out in practice with the X-Ray format. Maybe that's on the sampling PR, but I know there are some folks; technically X-Ray is maybe a different team than the folks who work on OpenTelemetry at AWS, but that might be relevant. They could maybe mention any pitfalls or benefits.
D
Yeah, my understanding is that they use the timestamp in the trace id as a way to quickly decide whether to drop a span, because it arrived more than 10 minutes after it was generated. Yeah, the timestamp primarily exists as a very quick filter for them: they won't accept spans that are older than 10 minutes.
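A sketch of that filter, assuming the timestamp-prefixed id layout discussed above (this reconstructs the described behavior; it is not X-Ray's actual code):

```ruby
MAX_AGE_SECONDS = 10 * 60

# Read the Unix timestamp back out of the leading 32 bits (8 hex
# chars) of the trace id and reject spans whose trace started more
# than 10 minutes ago.
def too_old?(trace_id_hex, now = Time.now.to_i)
  started_at = trace_id_hex[0, 8].to_i(16)
  (now - started_at) > MAX_AGE_SECONDS
end
```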
F
D
Yeah, I mean, there's UUID generation, or there was another one mentioned in this issue. I think there are standardized id-generation schemes that do include some kind of timestamp in them, so there's prior art for this.
D
Yeah, practically speaking, the issue that's called out here is that there might be sampling schemes that this affects, but the sampler in the collector hashes the bits anyway, so the randomness ends up distributed fairly evenly. It's not going to be broken by this.
A
Cool. I don't know, Robert, do you want to talk about the instrumentation SIG? Is there an update?
B
Yeah, sure. Just making sure: can you guys hear me? Yes. So it was a lot of continuation of what we talked about last time, talking about suppressing spans. So if you have, like, a single client, and you have nested client spans, which one should remain? A big part of that call was just demonstrating how it would actually look in code, so that took up a good bulk of it.
B
But one of the, to me, more interesting parts was a small example that they surfaced after that. Well, I'm trying to share it. I literally just took a screenshot of the example that they had written, so I remember.
B
Okay. So the idea was, the talk was about their proposal. Say they come up with this mechanism for sampling out multiple server spans; for us that would be like having a Rack span and then having a Rails controller span. Right now we don't do that, but imagine a world, if you will, where we do. So they're saying: what if you have some means of sampling out, say, the controller span, and somewhere further down the line...
B
Someone says, give me the current span and set this attribute, but that span has potentially been sampled out. So in their proposal they're saying: well, maybe we just don't create a span. I think in that case this would work, but what if it does create it, and it decides not to record it? So in this case, in the bottom example, or sorry, the top example, they're using a sampler to reduce the verbosity of a trace.
B
What should that do? Effectively, they'd be running into the same concept with the suppression that they propose: if you want to set an attribute on the current span, but the parent span was suppressed in some capacity, whether it's not recording or otherwise, depending on how they actually implement it, what should the behavior be?
B
I don't know if that's too naive of a mindset, but it seems like you're pushing a lot of decision making onto the application, instead of just taking the time to be intentional and configuring your instrumentation so that you only actually instrument the stuff you care about. This seems like a very hands-off approach, which maybe makes sense from a vendor perspective: I have a bunch of customers, and they want to be able to tune it.
B
So we want to give them this use-all package and then give them a mechanism to fine-tune it, instead of just saying: don't use use-all, only include the things you care about. And maybe this is a lot more relevant in the land of Java, and I'm not aware of some of the constraints there; there are considerations to be made around how it is working with that language. But I think of Ruby: if we had a bunch of nested client spans and we only cared about one,
B
I would be pushing my team to just drop the instrumentation we don't care about: not include it and not configure it, so you don't have to deal with any of this. But maybe, I don't know, I feel like I might be missing something big here that makes the reason, the motivation, for this completely obvious.
A
E
B
That's correct. So you have this really noisy trace, and you identify that a bunch of spans are being duplicated by multiple levels of instrumentation, but they're effectively doing the same thing. So imagine, if you will, you have your Rack span and then 10 middleware spans, because you're using these other middlewares that happen to instrument themselves.
E
B
That's the idea, so I think we talked a bit about it last week.
B
It's like: is this the right way to approach this at all? Shouldn't we just... there must be something I'm missing. I'll try to bring it up today, if it's on topic: why does this duplicate instrumentation exist? Why are you including it if you don't want it? Isn't that just a waste? But looking at it, you see these servlets, and maybe it's the Java ecosystem: everyone's been really keen on adding first-party instrumentation, but without any ability to just opt out.
B
I'm imagining there is, but I think they're trying to come up with a unified approach to controlling the verbosity of these traces. But again, it feels like it's an application owner's responsibility to be deliberate about what they include, not just throwing in everything and the kitchen sink and then, after the fact, asking: how do I trim this down?
E
Gotcha. It's sort of like configuring log levels for different facilities at different levels, because the parallel would be something like Log4j or Logback, which lets you say everything in this specific package will be at log level debug, but by default everything else will be on info.
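The Log4j/Logback behavior being described, where the most specific configured prefix wins and everything else falls back to a root default, can be sketched like this (all names are illustrative):

```ruby
# Per-package levels; the empty prefix acts as the root/default level.
LEVELS = { 'com.example.db' => :debug, '' => :info }.freeze

# Resolve a logger name to the longest configured prefix that matches.
def level_for(name)
  best = LEVELS.keys.select { |prefix| name.start_with?(prefix) }.max_by(&:length)
  LEVELS[best]
end

level_for('com.example.db.Query') # => :debug (specific override)
level_for('com.example.web')      # => :info  (root default)
```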
B
Don't use it, right? But it feels naive saying it out loud, because it's so obvious: just don't include the thing you don't want. So there might be constraints I'm just not aware of, but these calls seem to be a lot more, and I don't mean this in a weird, mean way, more vendor-specific. So maybe there are these problems that they, as a vendor distributing these packages, need to solve.
F
Yeah, I guess I can see why it's not so easy as just saying "we'll just not include it." HTTP servers are probably not the case, that seems more reasonable, but clients, I don't know. You could have certain execution paths where you would want, you know, the Net::HTTP instrumentation, but then others, where maybe there's a Faraday wrapper or something, where you wouldn't. But at the application level, it's hard; you don't have the granularity to say, on this specific...
F
You know, on some circuitous route, don't include Net::HTTP, but everywhere else, do. So I can understand how there are cases that pop up that are not so easy to control by simply unplugging or plugging in the instrumentation. But I do think, from an SDK maintenance perspective, as maintainers of the SDK, we have, you know, sort of places where folks can provide their own configurability here.
F
You know, the export pipeline, or a processor, or... and it feels like the number of edge cases where this would go wrong, or the maintenance burden of handling all these cases, would just start to become... it doesn't seem super robust. It just feels like it could be difficult to maintain. So I'm not, yeah.
F
I'm not super convinced that we want to take on that responsibility of having this all kind of abstracted out. And even for vendors: you can make a distribution of the SDK, and you can do what you want.
E
Interesting. So it's very much like granular head-based sampling versus controlling that with tail-based sampling, where potentially, say, the collector is the thing that's going to have the capability to say: I'm going to sample out 1 in 10 or something.
F
Yeah, I just think there are hooks where, if you want to group a trace by trace id and then make a decision around what you want to keep in that trace, you can do that in places within the current OpenTelemetry ecosystem. And yeah, I think I generally agree with Matt. I did think maybe one thing you could do is, if you're passing down... yeah, it doesn't really work, because there could be multiple clients within this. Never mind. I just generally agree with Matt.
F
You know, I just don't know if... and also, it's still unclear to me why we're adding all this complexity. It's still not clear to me the benefit that this really provides. No one's given me the real, what's it called, the killer use case that makes it meaningful to add all this cruft. So I don't know; I'm open to seeing that.
B
Yeah, I was gonna say, I think the intention here is providing an API for instrumentation authors, first-party or otherwise, that they can use: just hook into this, like, "I'm generating a client HTTP span," and you have this helpful method, kind of like the work you're starting on there, and you can just trust it to do the right thing. I think that's what it is.
B
But again, I'm kind of hung up on the "just don't include the instrumentation you don't want" theme. Hopefully this continues tonight at the SIG, and I'll just ask that question; maybe there's a really good reason.
E
The only use case that I could think of is when there's a specific part of the code that you want the verbosity for, right? So if we turn on auto-instrumentation for Faraday, some middlewares, and Net::HTTP, you generate all these spans, but in most cases you don't need all of those. It's really this one slow interaction that's happening, and you only want to turn the volume up for that particular interaction, but leave everything else alone and keep everything at its own verbosity level.
E
That's the only use case I can think of right now.
D
So, in a really large project like Shopify's monolith, different teams can use different HTTP libraries, as an example, and some of those may include, effectively, wrappers like Faraday. And so different parts of your code base, or different code paths, might be using multiple layers of instrumented libraries, and others might be using just one.
D
So I can see some value in kind of dynamically merging, say, HTTP client spans, but verbosity is maybe not what first comes to mind for me. It's more these cases, like when we were doing the Ruby, sorry, the Rails instrumentation, Action Controller and the Rack instrumentation: there was that weird thing, I think there's a weird case where you check to see...
D
So I think it would be useful to think about those use cases where you're combining libraries that may have their own instrumentation, and you might want to just create one span, and if there's already a span in your context, you just want to augment it instead. If we can think through those use cases and think about, you know, whether they can be generalized into something that's either like this or an alternative proposal, that might be a useful path forward.
F
Not to ramble about this the whole meeting, because we kind of did that last meeting too, but one thing we had talked about was: I think for synchronous stuff it's pretty effective; you can design a system like this. What I was thinking, what I mentioned, is what if you pass down... so instead of, like with Rack right now, where we pass down, we maintain this key that's just like, "hey, there's a Rack span here," and we look for it; instead of it being a Rack span, we just have a hash.
F
That's like: has a client been... you know, almost like we debounce, is that what it's called? Has a client span been generated for this particular trace? A server? A consumer? A producer? And if so, then you don't generate a new span; you just start grabbing it, you know, and you're just passing it down the tree like that. And so I think, technically, in a synchronous environment, that would be fine, because then you could augment it.
augment.
F
Oh,
the
client,
the
server
span,
already
exists
whatever
augmented,
but
I
don't
know
if
that
works
in
asynchronous
situations,
because
you
you
would
not,
you
would
have
what's
robert
pointed
out
in
the
original
snapshot,
is
like
you're,
essentially
you're,
adding
attributes
to
nothing
because
it's
already
been
flushed
and
like
maybe
that's
fine,
if,
if
it's
just
like
this
works,
but
only
for
synchronous,
libraries
and
I
don't
know,
but
that
was
the
instead
of
looking
up
the
tree-
you
just
passed
on
a
map
and
that
would
be
maybe
a
generalized
way
to
do
it.
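The "pass down a hash instead of looking up the tree" idea can be sketched in plain Ruby (all names are made up for illustration; this is not the Rack instrumentation's actual code):

```ruby
# One hash travels down the call tree per request. The first client
# instrumentation to fire records itself; any nested client
# instrumentation sees the flag and skips creating a duplicate span.
def start_client_span(seen)
  return :skipped if seen[:client]

  seen[:client] = true
  :started
end

seen = {}
start_client_span(seen) # e.g. Faraday's client span  => :started
start_client_span(seen) # nested Net::HTTP span       => :skipped
```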
A
Yeah, I think that's my main concern: these things all break down as soon as things are async, because anything further up the tree could have finished, become a span data, and been passed on the wire.
A
So anything that you try to add or augment is not going to go anywhere, which makes me think this is all stuff that should happen further down in the pipeline: once you know that these things have finished, you can go ahead and combine stuff, if necessary, and you could specify how things should merge. But yeah, Rob was... it sounds like this is actually a thing, a concept that exists, and I'm going off memory here.
D
Yeah, Rob was saying that Honeycomb had proposed something like this, or they were starting to build something like this. I think they were waiting for, maybe, the sampling OTEPs to settle down before making this proposal. But yeah, I think people have had this notion of merging spans in different ways in the past, and I kind of suspect...
D
The sticking point is that it's hard, right? That kind of graph manipulation, merging nodes in a graph, is not an easy problem, especially in some kind of streaming pipeline. And because it's hard, you get proposals like this, which are trying to solve it on the client side instead, in the instrumentation.
B
I think we can try to bring some of the questions back tonight. We can maybe change the subject a little bit. I think it might be worth bringing up, well, Andrew's here: we've been working on the ActiveSupport::Notifications stuff. It has been awful.
G
Oh no. So these were all...
B
Yeah, so the original implementation you did actually did work, but it ran into one big issue: it was dependent on the initialization life cycle. So if something subscribed to the notifier before we configured the SDK and replaced the notifier with ours, their subscription would not exist, so you'd lose anything that relied on that. So that became a big issue. Tim implemented a change that moved to a delegate; we spent a lot of time trying to debug that one, and it was a lot of work.
B
We're exploring something a little bit different at the moment. So the big issue that we're trying to work around, which the Fanout implementation was solving, was: if we're always the first subscriber, it works, because finish and end will always get called first, and if something blows up down the line, it doesn't matter, because we've already done our thing. Which is great, but there's no guarantee that we're going to be first.
B
So that's what the fan-out thing was doing with the sorting: trying to make sure that our stuff gets called first. So I did a really hacky proof of concept with Tim; we haven't pushed it up anywhere because it's gross. We got rid of the fan-out approach, so instead of trying to sort every time it goes to fire these notifications.
B
It has some really ugly code that implements our own subscribe method, which just jams us to the front of the array, because the official API will only ever append to the array of listeners, the list of who should respond to this notification.
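Stripped of the ActiveSupport specifics, the ordering hack being described is just unshift-versus-append on the listener array. A toy model (not the actual ActiveSupport::Notifications::Fanout code):

```ruby
# Toy notifier: the official subscribe appends, so listeners run in
# registration order; subscribe_first jams a listener to the front so
# it is guaranteed to run before everything registered earlier.
class ToyNotifier
  def initialize
    @listeners = []
  end

  # The official API only ever appends to the array...
  def subscribe(listener)
    @listeners << listener
  end

  # ...so the hack is a method that unshifts us to the front instead.
  def subscribe_first(listener)
    @listeners.unshift(listener)
  end

  def publish(event)
    @listeners.each { |l| l.call(event) }
  end
end

order = []
n = ToyNotifier.new
n.subscribe(->(_e) { order << :app })        # registered before us
n.subscribe_first(->(_e) { order << :otel }) # our hacky subscription
n.publish(:finish)
order # => [:otel, :app]
```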
G
I don't know if that's actually any hackier than what I originally did, because the things that I did in the original commit are not necessarily guaranteed. They're sort of guaranteed in the Rails API, in that I looked and they hadn't changed it in a very long time, but that's not a guarantee. So, I mean, yeah.
B
So that's kind of a summary of a lot of pain and tears on my part. I cried a lot. Tim talked me through it, which was nice. But that's kind of the direction, yeah, that's the direction I'm hoping for. I don't know if Tim has anything he wants to add that I omitted.
C
I think, after digging into the Rails code, the reason why the delegate doesn't work is because the delegate doesn't have a mutex. The original Fanout class has a mutex in it, so once we changed from classical inheritance to the delegate class, we lost the mutex.
B
I was trying to just start over from scratch and take the idea of: let's just try to force ourselves to be first; let's not play with the notifier; let's just use as much of the vanilla subscribers as we can. So the one hacky bit is now potentially going to be isolated to just having our own subscription method.
G
Right. I mean, I don't think that approach is necessarily bad, and I honestly, I truly and honestly, don't think it's actually any hackier than what we implemented originally; there's good and bad to both of those. I don't know if Eric has any thoughts, but I'm pretty sure the Datadog tracing stuff wrapped the whole life cycle in order to get around this kind of problem, kind of the same way we did originally.
F
It was implemented before my time, and no one complained; I don't think we ever got complaints around the implementation or bugginess for it, so no one ever really looked again, and that's the way enterprise software goes.
G
To look at it: if you want to push up a branch or something, I'm sure it will be fine if it solves additional problems. I don't have super strong feelings on it one way or the other.
B
G
B
A
Yeah, I'm gonna have to drop as well. One thing that I was just thinking during this conversation, though, when you were asking about the Datadog implementation:
A
It might make sense to cross-reference the Datadog implementation. One thing that I'm curious about is: people in the vendor space will have customers, and customers will try to use multiple tracing clients and expect stuff to work. I'm just curious, if you look at the Datadog implementation, and if you were to have Datadog and OTel in the same process, whether they would clash. I suspect this probably happens, and it's happening a lot.
D
G
D
Yeah, and really the solution is that all these vendor-specific instrumentation libraries should just be turned off, and everybody should just instrument using OpenTelemetry, and the problem is solved. I like your vision.
E
I'm also going to drop off for snacks. Before you all go: I'd kindly ask you to participate in a conversation about version compatibility that we're having in the concurrent-ruby PR, if you have a minute, and specifically, I'd like to produce some sort of guidance for instrumentation authors out of that. And I think I fixed my RuboCop issues and merged branches up, so let me know if there's anything you need from me to merge the documentation or this ruby-kafka header thing, version thing.
D
G
Shipping that, I'm gonna say: if there's nothing else internally that we know about, and if they don't respond to the prodding after a couple of times, I would personally feel comfortable saying it's GA. Clearly nothing was bad enough to block it. So...