From YouTube: 2021-01-19 meeting
B
Yeah, it's about five after. I think this is mostly the usual crowd. Welcome back, Francis; long time, no see. Hi.
B
Yeah, so we'll probably do the usual: go recap the spec SIG and then start looking at issues and PRs in our repo. That sounds…
B
Cool. Right, so the first issue, and the first couple of issues, kind of start off as being environment variable issues, more or less, but I think they extend from there a little bit. This PR is suggesting that everything that has millis in the name be removed.
B
Yeah, the reason why this is a little bit complicated, and I think it's less complicated for us given that we have no Zipkin exporter, is… but yeah, it's twofold. Like, many languages do not support multiple transport formats, so that led to a discussion of, well, maybe we should just align on protobuf as being the default, and then that kind of led to… we had a similar discussion with Jaeger last week.
B
We're like: what if your implementation is currently JSON, is that okay? Do you have to make a new implementation if we start specifying that? And then the third one kind of started to dovetail in with the OTLP exporters, and there's an environment variable there. I forget how we're naming these things, because they're so complicated, but I think it's like OTLP exporter protocol or something like that, and it can be gRPC, protobuf, or JSON, and that is problematic, at least from what I know of the OpenTelemetry ecosystem.
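As a sketch of what such an environment variable implies (the variable name and values below follow the OTLP exporter spec as I recall it; the returned exporter names are invented placeholders, not a real package), the process would have to pick an implementation at startup:

```ruby
# Sketch: runtime protocol selection driven by an env var. The
# variable name and values are assumed from the spec discussion;
# the returned symbols stand in for real exporter classes.
def build_otlp_exporter(env = ENV)
  case env.fetch('OTEL_EXPORTER_OTLP_PROTOCOL', 'grpc')
  when 'grpc'          then :grpc_exporter
  when 'http/protobuf' then :http_protobuf_exporter
  when 'http/json'     then :http_json_exporter
  else
    raise ArgumentError, 'unsupported OTLP protocol'
  end
end
```

This illustrates the catch raised above: if the gRPC and HTTP exporters ship as separate packages, this case statement cannot even be written in one place, which is why a runtime switch doesn't fit well.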
B
A lot of the languages that do support multiple export formats or transport protocols tend to ship these as different packages, and there are a lot of reasons to do that. So having an environment variable to try to switch these things doesn't really make sense. It's not really a runtime decision; it's more of a compile-time decision in that situation.
B
I think, for us, this is not super relevant; we kind of dodge both of these questions right now.
B
All right, service name handling. We've talked about this for at least three weeks straight now. I think the thing that we talked about last time is something that's going to happen: resource merge is going to work like a normal merge, where the last one wins, and you'll have a default resource that has a service name in it.
A
Pun intended: we've already implemented the resource merge change. I'm anticipating that this is going to be approved and that this is the way we're going to go, since everybody's in favor of switching the order of the merge. So, yeah, anyway, I've gone ahead and done that and we've already merged that in our repo.
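The merge semantics being described can be sketched with plain Ruby hashes standing in for the real Resource class (the `merge_resources` helper and the default resource here are illustrative, not the actual SDK API): the later resource wins on conflicting keys, and a default supplies `service.name`.

```ruby
# Illustrative only: models resource merge as last-one-wins over
# attribute hashes, with a default resource supplying service.name.
DEFAULT_RESOURCE = { 'service.name' => 'unknown_service' }.freeze

def merge_resources(*resources)
  # Later resources win on key conflicts, mirroring Hash#merge.
  resources.reduce({}) { |acc, r| acc.merge(r) }
end

# The user resource is merged on top of the default, so a
# user-supplied service.name overrides the placeholder.
user = { 'service.name' => 'checkout', 'service.version' => '1.2.3' }
merged = merge_resources(DEFAULT_RESOURCE, user)
```

Switching the order of the merge, as discussed above, is exactly the difference between `merge_resources(DEFAULT_RESOURCE, user)` and `merge_resources(user, DEFAULT_RESOURCE)`: only the first keeps the user's `service.name`.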
B
So this issue, I think it came up last week, or at least came up in the triage. I don't know if you talked about it, though; I don't think it had its own bullet point on the agenda.
B
Yeah, the idea is to have some way to register the exporter without exposing the batch span processor as a type, or as surface area, to the user. And in trying to make configuration easier, I feel like this is one of the hard things to do: it's easy to add a span processor, but a span processor usually references an exporter, and the exporter interface is non-standard.
B
So it's hard to improve that situation a whole lot. You kind of do have to go through this situation where you at least have to initialize an exporter, and then probably a span processor to use that exporter with, and then…
B
Set those all up for your SDK. I think there are some improvements that can be made around here, maybe by just choosing a default span processor if somebody does hand you an exporter; you can ease that a little bit. But I understand at least the underlying desire here is to just be able to…
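The ceremony being discussed can be sketched with duck-typed stand-ins (these class names are invented for illustration and are not the real OTel Ruby API): the user builds an exporter, wraps it in a processor, and registers the processor, which is exactly the surface area the issue wants to hide.

```ruby
# Toy pipeline: a processor buffers finished spans and forwards
# them to an exporter. Class names are illustrative, not the SDK's.
class InMemoryExporter
  attr_reader :exported

  def initialize
    @exported = []
  end

  def export(spans)
    @exported.concat(spans)
  end
end

class BatchProcessor
  def initialize(exporter, max_batch: 2)
    @exporter = exporter
    @max_batch = max_batch
    @buffer = []
  end

  def on_finish(span)
    @buffer << span
    flush if @buffer.size >= @max_batch
  end

  def flush
    @exporter.export(@buffer.dup)
    @buffer.clear
  end
end

# Today's ceremony: build the exporter, then the processor, then
# hand the processor to the SDK. A friendlier API could accept the
# exporter alone and pick a default processor.
exporter  = InMemoryExporter.new
processor = BatchProcessor.new(exporter)
%w[span1 span2 span3].each { |s| processor.on_finish(s) }
processor.flush
```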
B
Yeah, and I think part of the reason why all this is a little bit hard and confusing is that, yes, there are, I think, two standard span processors that you will find in SDK implementations, and probably a handful of standard exporters, but these are extension points. So it's hard to know what the whole universe is going to look like.
A
Yeah, and then there's the interesting case that the Jaeger SDK actually has the batch span processor, which was originally specced, but then they also have this disruptor-based batching mechanism, which is actually the one that they recommend people use. So the recommended thing isn't actually in the spec. Anyway, the spec says that you need to have a simple span processor and a batch span processor, and it needs to be configurable in this way with these defaults.
B
Yeah, so I feel like this is interesting, because I feel that this whole setting-up-a-pipeline thing could be easier, but the fact that there are no standard interfaces for anything in the pipeline makes it kind of hard. And given that it's an extension point… yeah, I don't know how you make this easy, to be honest, but there are probably ways.
A
Yeah, there's also… I think Eric was suggesting that Datadog can't actually use a batch span processor. They need something that's kind of like a batch span processor but does batching based on trace ID, not just batching unrelated spans, so they have that added complication. And originally, the batch span processor and the simple span processor were just intended to be building blocks that exporters often needed, but I think they ended up becoming this separate thing.
A
The problem with them being a separate thing is that most of the interesting things that you would want to do with span processors, other than just batching, involve mutating the span data or mutating the spans; sorry, whether it's a readable span or a writable span, whatever, it's hard to say. But the difficulty is that the spans that are handed to them are meant to be read-only, which means that if you want to change anything, you actually have to copy everything.
A
So you have to copy all the data from the span in order to mutate it. And if you want to be able to compose these, you need to pass something through that is explicitly read-only. So every time you do a mutation, you have to copy everything. The spec actually ends up making these way less useful than they could be.
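The copy-on-mutate cost can be illustrated with a frozen struct standing in for the read-only span data (the `SpanData` shape and `scrub_attribute` helper here are invented for illustration; the real spec types differ): any "mutation" has to rebuild the whole record.

```ruby
# Illustrative: a read-only span record, as handed to a processor.
SpanData = Struct.new(:name, :attributes, keyword_init: true)

def scrub_attribute(span_data, key)
  # span_data is frozen, so we cannot edit it in place; we must
  # copy every field into a fresh record just to drop one key.
  SpanData.new(
    name: span_data.name,
    attributes: span_data.attributes.reject { |k, _| k == key }
  ).freeze
end

original = SpanData.new(
  name: 'GET /users',
  attributes: { 'http.method' => 'GET', 'user.email' => 'a@b.c' }
).freeze

scrubbed = scrub_attribute(original, 'user.email')
```

With many attributes, events, and links on a real span, this full copy per mutation is the inefficiency being described.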
A
Like, I haven't seen that discussion, but it came up for us. I think Eric brought it up maybe a couple of weeks ago, where he asked about configurations or deployments where people were exporting directly to a backend, or to a service provider, rather than running the collector themselves. In that case they may want to have some of the facilities that are provided by the collector, such as filtering out data or mutating attributes or span names.
A
They may want to have that capability in the export pipeline running on their services, the instrumented services. And in that case, you kind of look at it and say: well, wouldn't you use the span processor for that? But you can't, because the data you're passing around is read-only; or you can, but it's going to be incredibly inefficient.
A
So, yeah. I don't think there's an ongoing discussion for this, or a discussion anywhere in the specification related to this, but it's a problem. It may have been discussed earlier, as in a year ago or something, and the assumption at that time was that most people would be using the collector, and this was functionality that belonged in the collector.
A
Probably, yeah. I think it's more… I mean, it's not relevant to Shopify, because Shopify is running the collector, but it's probably relevant to some vendors, and certainly relevant to some vendor customers, particularly if you start thinking about functions as a service. Where is the collector running for functions as a service, right?
A
If you're running, you know, Google Cloud Functions or something, you're potentially just sending to some backend, maybe Stackdriver; or if you're running on AWS, you're sending directly to X-Ray.
A
So in that case, how do you configure some of the filtering functionality? If you wanted to do PII redaction or something like that, how would you accomplish that? But yeah, I think it's more relevant to vendors, and, you know, if Eric has questions around this, maybe it's something he wants to raise. I don't know if you have customers that don't use collectors.
B
Yeah, I think this is a potential use case for a lot of people, so I would agree that this is probably a good thing to bring up at the spec level.
B
I don't know what the appetite is for talking about these things at this stage in the game, but I think this at least should be brought up.
B
Yeah, I will put it on my list to at least ask around and possibly see what we can do here; at least get it as, like, a blurb or bullet point at a future spec SIG. Cool, thanks, that'd be…
B
It was interesting. I don't know if anybody from this call showed up there. I know Eric was there for a bit, or, I think, for the whole thing, actually. So yeah, a lot of it was discussion around OpenTelemetry.
B
Apparently that's a fairly big question mark, but, I don't know, there was a huge turnout. And in many ways I feel like there has been a lack of turnout in a lot of the metrics discussions, at least from what I know, and so the people who have been working on it have kind of built this system that they think will serve us best. I got the impression that a lot more people showed up on Friday than have ever showed up to any of these previous discussions, and… I don't know, I think there was…
B
So I don't know what's going on there. My impression, walking away from this, is that it felt like the OTel metrics kickoff meeting, which, you know, probably should have happened a year ago; it would have helped a lot with the last year's worth of work on that stuff.
B
But I don't know; that could totally be a wrong impression. At any rate, there seems to be a lot of interest, and…
B
I guess it will remain… well, yeah, I guess we'll see how this all kind of plays out. But part of all this…
B
Was that this guy Josh…
B
He just kind of had a proposal to try to help get metrics over the line, and made this… what's this thing called? A slide, a Google Slide; they were calling it a sheet, which I knew it definitely wasn't. But basically the idea was to try to get metrics over the line by doing it an instrument at a time and having each instrument reach 1.0 individually.
B
If the planning around any of the individual instruments is too short-sighted, you might end up with more instruments than you need, or just bigger changes than you would like after the fact.
B
There was this question I saw on the list. I don't know that I took away the answer, but I think it should be marked as stable.
B
However, we still continue to bikeshed a handful of things, so it might be kind of dependent on the versioning-and-stability PR merging, on which, I think Ted was saying, the final comments have been resolved. I believe they are really ready to merge this.
A
Portion… yeah, sorry, I was just looking at the tracking issues for GA. An interesting thing is that, while we have two items marked done (tracing API 1.0 LTS for GA, and likewise for metrics), the latter seems incorrect, given we haven't actually specced the API yet. And there's nothing there about the SDK at all; the higher-level items are, you know, performance testing and OpenTracing compatibility and things like that, but there's nothing here about the SDK.
B
Yeah, my guess is this is somewhat intentional, in that the spec SIG is largely interested in defining the APIs. I guess we do define the specification for the SDKs, so there's some dovetailing there, but the actual SDK releases and work are kind of delegated to the SIGs.
A
Yeah, I mean, we need an issue for the SDK specification; we need to know that the spec is actually stable. If you have a look at the API one, there's the high-level tracking issue that says, you know, the definition of done is release of the languages and spec targeted for GA, and documentation of LTS support for these releases. But I don't know.
A
I imagine we should at least have something similar for the SDK that tells us, you know, what done means for the SDK: for metrics, for the API… sorry, for tracing.
B
This is something that I can pass along and get some sort of clarification on: either a new issue for the SDK, or some kind of addition to some of these existing ones, that mentions how that should work. But yeah, I do agree. There is kind of… I don't know if they're calling it GA any longer; I feel like they're not calling this thing GA, they're calling it, you know, an SDK with stable signals.
B
I guess so, given that kind of all of the different signals are decoupled, at least in versioning, as we build them anyhow.
B
So knowing exactly what we're calling things, and what we're working towards, and when we can, you know, advertise that this or that is 1.0 or stable, is…
B
It's necessary. There's been a lot of build-up to whatever this is, but we don't know what it is, to see if we can declare mission accomplished. Yeah, so…
B
Yeah, so I will pass that along. I don't know, is there anything else specific? Should we move on? Oh, I think we can move on. Cool.
A
The main, like the primary, new issue here of interest is, I guess, the top two here: MySQL instrumentation creates one trace per SQL statement. This was something that somebody new to the project opened; Robert tried to reproduce it and wasn't able to, so yeah, we're not too sure what's going on there.
D
The kind of side theory that, talking to Francis, I think might be happening is that they have some other operation that's creating SQL queries, and they're just getting confused by the output. So, like, they do their request…
D
It's like, I should clarify with the poster, but they're seeing some of these SQL statements as, like, root traces outside of the request of the Rails app, and it could be just some other background operation, because they are using Delayed Job, which I believe uses SQL, right? It's where you store your job queue, in your database, right? I think; I haven't used it.
B
Yeah, I feel like it's a lot like Sidekiq or Resque, but instead of using Redis it is polling the database, I believe.
D
It very well could be the case there. I offered more help, but they seem to want to look a little bit on their own, so I'll leave them to their devices and be ready if they come back with more information or more questions.
A
Yeah, yeah, we've certainly experienced this, and it's something Robert's looking into right now. We've experienced apps that have a sidekiq-scheduler, where we don't have instrumentation for sidekiq-scheduler, but sidekiq-scheduler is talking to Redis. So you get all these kind of orphaned Redis spans that are just coming from that kind of background process. It's not request-driven tracing, but we've got the always-on sampler turned on, and we only have instrumentation for Redis on that particular code path.
B
I know of these situations, and I know there were kind of some implicit rules that some tracing vendors may apply internally, I guess, to kind of, I don't know, just omit these orphaned database spans: things that happen outside of either a web request or a background-job processing framework.
B
I know there are things that some vendors will do to make sure that those are the traces that you're capturing, but I feel like those are kind of arbitrary decisions, and maybe not universal ones.
A
You know, maybe you don't want to get all these orphaned spans. On the other hand, they do represent load on the database, in this case, or in our case load on Redis, and that matters if you're computing SLIs, for example, from client spans.
A
So, I mean, certainly one option would be to say we only generate a span if there's already a span in the context, for this kind of instrumentation. But I feel like there are circumstances where you would still want to capture spans even if you didn't have a parent, just because you actually want to represent the true SLIs.
C
And can I chime in for a second? I've had a number of consulting clients who've used… first of all, Delayed Job is crazy, because you shouldn't store a queue in a relational database table. But I've actually known clients who, not that it's recommended, actually rely on the transactional guarantees of Delayed Job, because I think what will happen is Delayed Job will use a transaction to pull a job off of a database table, within a transaction, and then execute the work of that job.
C
And if there's a rollback, it will actually roll back the dequeue of the job and put it back into the queue, effectively. Which, I've always been like, that's a bad idea; but if you're going to do it, and apparently people do it, then you can do it. You do actually want some instrumentation around it, so you can sort of see the rollbacks and the load. That's Francis.
B
Yeah, so, all right. So I guess, longer term, if these are noisy, what would be the best routes, or what would be ways, to kind of mitigate this in OpenTelemetry? Like, would it be configuration for database adapters, or…
A
That's actually a good question. I thought we did have Delayed Job…
D
So it's interesting: in the case of sidekiq-scheduler, what we're planning to do right now, just because it's kind of impeding me deploying OpenTelemetry at Shopify…
D
We don't want to create all those noisy spans, at least at this point. So I'm going to start working on internal instrumentation, which I'm not going to open-source just yet, and it's going to squelch those spans; it'll untrace them, so we won't produce any noise from it. But I think further down the road I'll probably be pushing that upstream, and then that'll be a configuration option, so you can decide whether or not you want to trace that background work, or, in the case of our use case…
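The squelching idea can be sketched with a thread-local context flag (a toy, not the real implementation; OTel Ruby's common utilities have an `untraced` helper roughly along these lines, but the names below are invented): instrumentation checks the flag and skips span creation inside the block.

```ruby
# Toy "untraced" context: a thread-local flag that instrumentation
# consults before creating spans. Names are illustrative only.
module ToyTrace
  def self.untraced
    prev = Thread.current[:toy_untraced]
    Thread.current[:toy_untraced] = true
    yield
  ensure
    Thread.current[:toy_untraced] = prev
  end

  def self.untraced?
    !!Thread.current[:toy_untraced]
  end
end

SPANS = []

# Stand-in for Redis instrumentation: only record a span when
# tracing is active; the underlying call still runs either way.
def traced_redis_call(command)
  SPANS << command unless ToyTrace.untraced?
  "#{command}-result"
end

traced_redis_call('GET')                        # recorded
ToyTrace.untraced { traced_redis_call('POLL') } # squelched
```

Wrapping the scheduler's polling loop in `untraced` is the kind of thing that would suppress the orphaned spans without touching the Redis instrumentation itself.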
B
Well, all right. Well, yeah, it sounds like this issue, at least as it exists now, is not… like, MySQL is not tragically broken for everybody. There is likely a use case where SQL statements are being executed outside of a request or background job of some kind, and that's causing this. But we'll keep an eye on it, and I guess the other thing is…
B
I don't know, at some point in time we should probably have some sort of policy around triaging these things and getting as much information up front, so we don't always have to just guess the repro, because I have spent a lot of time in my career doing that, and it takes forever and is only sometimes successful.
A
A lot of this kind of report… so it's hard to generalize from this one to figure out exactly what you should be asking for. But yeah, I think once we get a few of these and we have some kind of pattern, then we can have kind of a template for bug reports. The second one here, actually, we can probably come back to; we have a PR that's open related to this as well, so maybe we look at PRs instead.
A
Is this… oh, I know this one. It's kind of both of them, but we can look at this one first. So we were instrumenting according to the spec; we're trying to follow the examples in the spec, and it's a little difficult. We discussed this last time: Ruby… oh sorry, Kafka is hard, and instrumenting it in a consistent fashion is hard.
A
In both cases, though, we should have links back to the enqueuing span, or the sender, I guess they call them, simply because that allows you to consistently process it on the backend. You know you're always looking at the link. For rendering purposes, for a single message at a time, you get…
A
You know, a nice waterfall view of the entire trace by default. But if you're processing things and you're just looking at, like, enqueue-and-process spans, or send-and-process spans, and you want to know how they relate to one another, you can process them in the same way regardless of whether you're doing batch or single-message processing. So yeah, there was a bunch of discussion around this until we got to…
A
I think, a reasonable point. If you'd like to review this and make sure that it lines up with your views on Kafka tracing, that'd be cool; if you don't have opinions on Kafka tracing, that's also fine.
B
If you're processing a single message, you continue the trace; if you're processing a batch, you will have the enqueuers as links.
A
Yeah; in both cases you'll have the enqueuers as links, okay, just so it's consistent. But in the single-message case you'll continue the trace, and in the other one you won't continue the trace, you'll start a new one, yeah. And then the other PR… we had discussed some interesting things around…
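The convention just summarized can be sketched with toy span records (the struct and function names are invented for illustration, not the actual instrumentation): both paths attach links back to the enqueuing spans, but only single-message processing reuses the producer's trace ID.

```ruby
# Toy spans: just enough structure to show parent/link handling.
Span = Struct.new(:name, :trace_id, :links, keyword_init: true)

def process_single(producer_span)
  # Single message: continue the producer's trace AND link back.
  Span.new(name: 'process', trace_id: producer_span.trace_id,
           links: [producer_span])
end

def process_batch(producer_spans)
  # Batch: start a fresh trace, linking every enqueuer, since the
  # batch has no single parent trace to continue.
  Span.new(name: 'process-batch', trace_id: 'new-trace',
           links: producer_spans)
end

p1 = Span.new(name: 'send', trace_id: 't1', links: [])
p2 = Span.new(name: 'send', trace_id: 't2', links: [])

single = process_single(p1)
batch  = process_batch([p1, p2])
```

Because the links are present in both cases, a backend can relate send and process spans the same way whether or not the trace was continued, which is the consistency argument made above.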
A
Yeah, set context in in_span. We had discussed this last time, and I sketched out… I dug into the spec further, and I dug into what we'd implemented further, and it feels like what we implemented is the correct behavior, because the spec says nothing about what you're supposed to do with the context other than extracting the span that's in the context and using that as a parent.
B
We are not setting the context passed in to be the current context, and that's… this was the proposal: that in_span should basically activate the with_parent, yeah, the context passed as with_parent, yeah.
A
Now, the reason I don't think it should do that is that the spec is really just talking about setting the currently active span. The with_parent argument is the same argument for in_span or start_span, and start_span doesn't make the new span the current span, right? If we change the behavior of in_span to set the context passed in as with_parent, that behavior is distinctly different from start_span.
B
It is. I feel like I have to think about this just a little bit, but yep. That's why I said it's not immediately intuitive, yeah. A lot of it is because start_span, or in_span, is not actually part of the spec, but that's…
A
Right. It might help if you actually look at the code for this. So just ignore the comments from Johnny here; I think he didn't realize that I was saying we shouldn't merge this, so he's…
A
So, the original behavior: we had some complexity here because of what happens if exceptions are raised, but the original behavior is just that we call start_span and then we call with_span. So it's purely a convenience for that fairly common combination of things.
A
If we start adding this special behavior of "if you pass in a non-nil with_parent, you set that as the current one", now we have a considerably more complex, sorry, a more complex thing. It's not just a simple convenience; it's actually trying to do something quite different from, you know, starting the span and then doing a with_span.
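The point about in_span being a thin convenience can be sketched with a toy tracer. The method names mirror the real OTel Ruby API (`start_span`, `with_span`, `in_span`), but the bodies here are simplified stand-ins: `in_span` just composes the other two, plus error handling, and `with_parent` supplies only the parent span, without being activated as the current context.

```ruby
# Toy tracer: in_span is start_span + with_span plus error
# handling, nothing more. Simplified stand-in, not the real SDK.
class ToyTracer
  Span = Struct.new(:name, :parent, :status)

  def current_span
    Thread.current[:toy_span]
  end

  # with_parent supplies only the parent; it is NOT made current.
  def start_span(name, with_parent: nil)
    Span.new(name, with_parent || current_span, :unset)
  end

  def with_span(span)
    prev = Thread.current[:toy_span]
    Thread.current[:toy_span] = span
    yield span
  ensure
    Thread.current[:toy_span] = prev
  end

  def in_span(name, with_parent: nil)
    span = start_span(name, with_parent: with_parent)
    with_span(span) do |s|
      begin
        yield s
      rescue StandardError
        s.status = :error # record the failure before re-raising
        raise
      end
    end
  end
end

tracer = ToyTracer.new
root = tracer.start_span('root')
child_parent = nil
tracer.in_span('child', with_parent: root) { |s| child_parent = s.parent }
```

Making in_span activate the whole passed-in context, as the PR proposed, would add a behavior that the start_span-plus-with_span composition above does not have, which is the divergence being objected to.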
A
Yeah, part of the problem is that baggage isn't really a trace concept; it's a context propagation concept. And the with_parent argument to start_span is just supposed to extract the span, or the span context, the parent span context, from that kind of general context.
B
It's not merged for now. I do feel like I have to kind of think through the implications here, but if your opinion is that we should not merge this, that we should kind of keep the current behavior, I think odds are that's probably a good thing to do.
A
Yeah. It's a little bit difficult, because in_span is not spec'd, so… I mean, one thing you could argue for is actually removing in_span. But in_span is actually fairly convenient, because it's also doing the error handling: recording the exception and setting the status on the span.
B
I think Ruby's not the only language to have made this; I'm pretty sure Python has an equivalent, and at least JS, I think, does not. But we could see if other helpers exist in other languages, and what exactly they've done there, and what their reasoning was behind it. But…
A
Cool. I realize we've got four minutes left, and I want to make sure that if Francis has questions, we have time to address them.
B
Yeah, I did reply to your comment on Gitter this morning about the parent span ID.
B
It could get complicated; like, I feel like it got complicated with B3, but we were able to at least just ignore it, at least that's what we ended up having in the spec. I would say, again, this is something where my next step would be to look at some of the Jaeger propagators from some of the other languages and see… yeah, yeah.
B
All right, so then we do have three minutes now. Anything we can squeeze in here?
D
I'm just going to mention, I'm looking at… this came up over the break when I was off, but in ruby-kafka, one of the tests is failing on Ruby 3, and I've just started trying to dig into that, after trying to get Ruby 3 even to install last week. I have no idea what's going on there, but I am working on it.
A
We have a couple of open PRs from Johnny that need review. I haven't looked at either of these yet; I expect they're probably pretty simple, but we should try to triage them pretty quickly.
B
That sounds good; I'll try to get eyes on these coming from Johnny. And then, yeah, I feel like I'm taking away some action items; I definitely need to think a little bit about this set-context-in-in_span, and…
A
We did; coincidentally, there was a spec issue opened for batch jobs. I may be mistaken; I imagine a batch job is the same as a background job, but perhaps not. So, somebody from the Java community opened a spec issue for batch-job semantic conventions.
A
There was a bit of pushback from Dynatrace, actually, about, like, the need for this kind of thing. I haven't gone back to see if there's been any more discussion there, but yeah, I think there is a need for it. I don't think it's a matter of trying to cobble together the messaging semantic conventions with some other semantic conventions; I think we do need distinct semantic conventions here.
A
Yeah, I mean, we think we may, as in Shopify may, push for merging this, just because we need it for parity with our old instrumentation libraries; we moved our old instrumentation libraries to match the messaging semantic conventions, and we want to get OpenTelemetry rolled out internally, so we may push for this. But then subsequently, if we get new semantic conventions for background jobs, we can certainly modify the Sidekiq middleware to match those semantic conventions instead. Okay.
B
We are kind of at time, so if anything else needs eyes… I think I need to come back and look at whether Johnny has updated this one, yeah. But if anything else needs eyes or attention, let me know, either now or over Gitter. Cool, sounds good.