From YouTube: 2021-01-12 meeting
A
I think someone from Skylight or one of the folks started. Let me try to find it.
D
What would you say, like, what is the bulk of people? Or is it a lot of smaller groups?
A
It's Go, and Go and Python is the vast majority of things. The agent, sort of the core thing, is mostly written in Go, with some of the integrations being Python. And then the back end is a little bit of a Frankenstein because of acquisitions, but there's a lot of Go in the back end, with, I think, some of the web backend being Python. And then the logs company that is now part of Datadog is Java. So it sort of ends up being a little Frankensteiny, yeah.
D
I'd say the two, or I guess three, main languages you'll see are Ruby, Go, and then JavaScript/TypeScript, whatever you want to call it, for content stuff. Yeah, makes sense.
E
At some point I started recommending: why don't we just have a default resource that has default values, and then, if you don't like those, you can supply some other ones somewhere along in the merging process of the resource? And it became slightly clearer that there's some weirdness with resource merge.
E
Actually, that made that not a great idea. But I think people apparently like "the first value wins" in resource merging, and I'm not sure if that's how our implementation works or not, but that makes that situation difficult unless the default resource ends up being the last resource that you merge. But I think the discussion moved to, like, well, why does merge work that way? No other merges preserve the initial value. So I think the consensus was: we should be able to change that to make more sense.
C
Yeah, that's always been a weirdness in the resource merge. I honestly don't remember why it was built that way or designed that way, but yeah, it's the opposite of every other merge.
D
I think for the helper we exposed in the configuration, we set it up so the last in wins. So it goes against the actual merge; it just flips them, so it makes it work, but yeah, sorry.
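The two merge orders under discussion can be sketched with plain hashes. This is an illustration of the semantics only, not the actual OpenTelemetry Resource implementation; the attribute values are hypothetical:

```ruby
# Illustration only: "last value wins" vs the surprising "first value wins"
# resource-merge behavior discussed above, using plain Ruby hashes.

# "Last value wins": the resource merged in later overrides earlier values.
def merge_last_wins(earlier, later)
  earlier.merge(later)
end

# "First value wins": once an attribute is set, later merges can't change it.
def merge_first_wins(earlier, later)
  later.merge(earlier)
end

defaults = { 'service.name' => 'unknown_service' } # hypothetical default resource
user     = { 'service.name' => 'checkout' }

merge_last_wins(defaults, user)  #=> {"service.name"=>"checkout"}
merge_first_wins(defaults, user) #=> {"service.name"=>"unknown_service"}
```

Under first-value-wins, a default resource only works if it is the last one merged, which is exactly the difficulty raised above.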
E
But yeah, at any rate, it seemed like it was a useful discussion and that we will fix this. I was already under the assumption that it works this way, so hopefully I won't have to change my mental model.
E
This led slightly into a discussion of, like: is anybody concerned that tracing is supposed to be stable, but there isn't a Jaeger spec? And people said yes, and then they were asking for a champion, and I spoke up saying that I was an anti-champion, but that there are a bajillion different export formats from Jaeger.
E
"Well, we should just pick a format." And then I kind of just mentioned that if you do that, there's a good chance SIGs will have to rewrite their Jaeger exporter from the ground up, because they implemented a different format. So I think that's where that landed. Anyway, there will probably be some spec that comes along. Hopefully it will be inclusive of many formats: as long as you can get your spans into Jaeger, I feel like you have completed that task in some way.
E
And then kind of the last thing is that there is supposed to be an all-day workshop for metrics on Friday, so if you're interested.
E
I think to some degree, yes. I think maybe the other point, and I'm hoping... I'm very much considering coming to this if I can find time in my schedule. But I think we have been really tracing-focused, at least out of the gate, for OpenTelemetry, and there's been a subset of those people, or maybe even a disjoint set of people, that have been a little bit more interested in metrics.
E
But it seems like it's been a smaller group, so I believe some of the desire is to bring everybody from the OTel community into the metrics discussion and just kind of get more interest there. And I think the hope is that that will help hammer out the details and just get more participation, because I think that's one of the problems. I think we even see this in tracing, where you have people floating ideas and really soliciting feedback, and, you know, the feedback will help drive the final decision.
A
Okay, yeah. You know, there are some folks from my end who are more involved in metrics than me, and there is a big push on DDSketch as an aggregation algorithm. They really want that, and I know there was some contention there. So I will maybe have one of them attend, or I will attend and just sit quietly, not knowing what I'm doing. But that's good to know; I'll share this. Thank you.
E
Yeah, I think math is complicated. I think there are several sketches that are well regarded; I know DDSketch is one of them. I think at one point the question was, like: would OTel have permission to use DDSketch? So, yeah, useful to know.
E
I guess maybe there's... I'll speak up: there's one thing I didn't cover which wasn't from the spec SIG. It was from the maintainers meeting, and it was really the question I asked internally last week, and that was, like: what are we even calling these things now? Because we had this versioning...
E
The versioning and compatibility OTEP. Actually, just two things; there's one more thing I'll mention after this. But we've had that OTEP, and I think we were calling all this work that we were doing GA, and with the way the versioning scheme came out, I was just wondering: what is the thing that we're working on right now? And when tracing is 1.0, do we call this GA? And I think that is not the case.
E
I think we'll just call it tracing 1.0. GA is more nebulously going to be announced when tracing and metrics are 1.0 for languages, and, you know, I guess once, like, four core languages or something reach that point, they will make some buzz about OpenTelemetry being GA.
C
I mean, it's useful to know in terms of the milestones that we're working towards. I deleted or closed our existing zero-point-whatever milestones, because they're all out of date. So right now we have an RC and, I can't remember if it's a 1.0 or a GA milestone, but the latter isn't really relevant. The RC seems to be the relevant one, and I think that's the one that we want to be working towards, although maybe we need to clarify that that's RC for tracing.
E
So Tigran is floating this idea of a telemetry schema. I think we talked about this a little bit last week as being one of the potential ways forward to solve this. But basically the idea is, like: what if a semantic convention changes format? What if we add one? What if we remove one?
E
What does this mean for us, and for, like, a version number that we assign to subsequent releases? So I have not read through this, but I know we talked about this last time. We were a little bit concerned about how restrictive something like this will be, and just how it will interact with trying to version different packages. But whatever the case, this is the beginning of that discussion.
E
But yeah, I need to give this a read-through and think about it a little bit before I have any critiques or much to say about it.
C
There's a bunch of examples in the semantic conventions for messaging around Kafka, and it's difficult to implement some of those things in the ruby-kafka aspect. But Robert and I have been over this many, many times, and it looks like linking is more important than parent information in the Kafka processing spans.
E
True. I guess, just thinking this through, one potential thing that I could see being a hidden problem for something like Ruby is that the data type, or even just the format of a value, could change with, you know, a release of Rails, for example. We're just kind of pulling an attribute off and adding it under a certain label, and I know I've seen it in the past where things change, or things change names.
E
Sometimes you just lose the attribute altogether, because it wasn't where you were pulling it from any longer, or there was a refactor in Rails that changed something to some degree. So I feel like there can be some unknown changes...
E
...that, I guess, have nothing to do with your instrumentation and kind of everything to do with the thing that's being instrumented. And these are probably more likely to occur in something like Ruby, where typing is a little more amorphous.
A
Problem solved, yeah. I think it's a good idea. I don't know about the addition of a span attribute constituting a breaking change; I don't understand why that's breaking. But, you know, never mind. I think this is important to have. I think vendors will need it, because if you dashboard around it and you do alerting around it, something like a span name, if it changes, really shoots people in the foot. So I think it's good. I'll try to comment; probably won't, yeah.
E
All right, so, for real, I think the spec SIG and other extra-SIG portion is over for now.
D
A light one that we could maybe talk over quickly: Johnny put up that "add minimum gem version for ruby-kafka". Now, that's a comment, just to see if the idea there kind of makes sense. So, basically, he used a version that wasn't covered by an appraisal, and it broke.
D
I gave him a sad face, but I was wondering, for the install flow, if we should start setting a minimum version check that is at parity with the lowest appraisal we've set. Because it's not stopping anyone from using anything outside of what we've defined in the appraisal.
E
If that is there... I added that with this, as, like... That was one of the things that I thought would probably happen there: you would do a version comparison. I think the...
E
At least, yeah, that was for, like, at least the instrumentation descriptor file, because I believe we try not to depend on the underlying library; that was kind of the other thing. So you wouldn't use...
E
...necessarily, yeah. So I think there's actually one more... I think there's one more hook, called "compatible", if I'm not mistaken.
E
Yeah, so it kind of mentions this in the comment. Even so, there's definitely...
E
There's definitely a mechanism to make this check, and this is one thing that I thought we would probably do. I think it makes sense to have a minimum version, to prevent people from figuring out stuff the hard way. And in the event that somebody figures out that it's a little bit too restrictive and would like to loosen...
E
...the restriction, I think they can do the due diligence to make sure that it works and open up a PR suggesting the change.
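As a rough sketch, the minimum-version gate being discussed could be a simple comparison against the lowest appraised version. The floor value below is hypothetical, and in the instrumentation gems this check would presumably live in the `compatible` hook mentioned above:

```ruby
# Hypothetical minimum-version gate; '0.7.0' is an illustrative floor, not
# the real lowest appraisal for ruby-kafka. Gem::Version is Ruby stdlib.
MINIMUM_VERSION = Gem::Version.new('0.7.0')

# Returns true when the installed gem version meets the floor; an
# instrumentation's `compatible` block could return exactly this.
def compatible?(installed_version)
  Gem::Version.new(installed_version) >= MINIMUM_VERSION
end

compatible?('0.7.10') #=> true
compatible?('0.5.0')  #=> false
```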
E
I don't know... I'm not sure if you have this at Datadog, if you have, like, a table that kind of lays out minimum versions of things supported. I know a lot of vendors often do, and these end up being questions that customers have, and it's usually a conscious effort to remove older versions.
A
Yeah, we do; it's in our docs. Let me share it. I think the difference is... it sucks maintaining old stuff, and so, if you can avoid it... The problem is, all it takes is, like, one sort of enterprise person to sneak in with a super old version or something, and you're sort of peer-pressured into maintaining it forever, at least on the vendor side, which will, you know, whatever.
A
This is an open source project, but it'll end up trickling down into peer pressure on this project. So the sooner you can nudge people away from that, the better; otherwise you're stuck maintaining Node 6, for example. But yeah, I do think having, like, a compatibility thing like this is probably helpful.
E
One of the ideas, possibly: some kind of policy, so we don't end up getting strong-armed into maintaining something incredibly old for eternity. Some kind of policy around...
C
...maintenance. So our policy so far has generally been: if it's not supported by the upstream project, then we don't support it. So we've used that to kind of age out versions of Ruby that we support. I can't remember if that's true for Rails as well, but certainly for Ruby.
E
Yeah, I personally like that policy. I feel like, as we get more and more users and, God forbid, an enterprise user, these things could become a little more contentious. But I think starting out this way is the right way to do it. If we need to expand those things a little bit, I think that's fine.
E
Something where the goalposts are pretty close together and not super far apart. So starting strict, and maybe being able to relax that stuff a little bit if we have to, is the right idea. Don't start relaxed, because people will want to relax that even more, and it will get out of hand pretty quickly.
E
Cool. Anything else to talk about on this?
D
I just... I saw, while I was off, there were some issues with Ruby 3 and some of the Kafka tests. I'll just mention that I'm going to look at this this week and see if I can get that sorted out, because it's kind of blocking for that PR, even just adding additional appraisals. So I'll be looking into that this week. But that's all I've got for reporting.
C
I had a few things. One is, I had a PR up to release 0.12.1. I just realized that it wasn't approved and was not merged. I just released...
C
Oh, sorry, we just merged a new PR, so I'm thinking I'll close this one, close this release, and start a new release. And it'd be cool if I could get a reasonably quick turnaround on approving that release.
C
I'll make sure... if I see these two together... yeah, this one was for real. This was a couple of the small changes that I think Johnny Shields had asked for. Yeah, there are a bunch of small things in here.
C
Oh wait, so I'll pick the easy one first: Sidekiq semantic conventions. There's a PR up to fix this; Blake is one of the guys on our team as well at Shopify.
C
This is... and the interesting bit of this, sorry, is that the semantic conventions for messaging state that the message queue should be part of the span name. So you get, like, the queue name plus "send" for enqueuing things, and the queue name plus "process" for actually processing things. In our experience at Shopify, generally, the queue name is not that interesting; what tends to be interesting is the job class name.
C
I've proposed just adding the job class name as another attribute, so that we stick with the semantic conventions, but we also have the job class name in case people want to add a processor to rename this, to basically use the class name, because that's what they care about for their analysis.
C
So, yeah, you probably want to look at the comments in the main part of the...
E
...issue. Yeah, I agree that the job is probably the most useful thing, I think, for Sidekiq and generally these background job processors in Ruby. So...
C
So, yeah, the queue name is already an attribute. The job class name is not part of the semantic conventions, so I propose just using `messaging.sidekiq.job_class`, which seems to match how we add queue-specific, or whatever-the-messaging-system-specific, attributes.
C
With those two in place, you can use a processor to just rename the span to whatever you need, or to do all your processing just on attribute names and ignore the span name altogether. So that's one thing. We had a discussion last week about what happens if people don't want to use the OpenTelemetry Collector, and they're just instrumenting their app and sending directly; so this also kind of plays into that.
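A minimal sketch of the rename-via-processor idea discussed above. The `Span` struct and the `on_start` hook shape are simplified stand-ins for the SDK's span processor interface, and the attribute key follows the proposal above:

```ruby
# Toy model: a span processor that renames background-job spans to the job
# class, keeping the queue as an attribute. Span is an illustrative stand-in,
# not the OpenTelemetry Ruby SDK class.
Span = Struct.new(:name, :attributes)
JOB_CLASS_KEY = 'messaging.sidekiq.job_class' # proposed attribute key

class JobClassRenamingProcessor
  # In the real SDK this would be on_start(span, parent_context).
  def on_start(span)
    job_class = span.attributes[JOB_CLASS_KEY]
    span.name = "#{job_class} process" if job_class
  end
end

span = Span.new('default process', JOB_CLASS_KEY => 'HardJob')
JobClassRenamingProcessor.new.on_start(span)
span.name #=> "HardJob process"
```

With both attributes present, consumers who prefer queue-based grouping can simply skip the processor.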
D
I don't necessarily care about the queue. I like that the information would be associated with the span, but I think the really interesting part is what the job is. I don't want to see all my jobs grouped under a single span name, because, to me, they're fundamentally different, right? So having it be each unique name, or whatever, instead of, like, you know, "default process", it's just not very informative.
D
I just think the user experience is kind of poor there. So I think, like, our team at Shopify specifically, we could do the span processor, do the rename; I think that's something that would make sense for us and just be beneficial to the people who are consuming this data. But for a group, like you said, that doesn't have all that infrastructure set up, being able to easily set a default, I think, is pretty important.
C
We had a discussion last week about why that's challenging, the way span processors are defined kind of on the server side, not in the Collector but on the server side. But yeah, I don't think we need to rehash this. I can bring you up to speed if you want, or you can listen to last week's recording, either way.
E
It uses a messaging system, and, in the case of Sidekiq, the messaging system, to me, is Redis, and it's the enqueuing and dequeuing of a message on Redis. That, again, is a thing that falls into this gray area: is it a data store? Is it a queue? But the background job processor is something that's built on top of this, and I can't imagine any user wanting the name of your Redis queue over the name of your background job for something like this.
E
So I don't know that the semantic conventions or the specification is aware of this kind of ecosystem and has really considered it in the way that maybe it should.
C
Yeah, I mean, we had some discussion along those lines early on and concluded that the messaging semantic conventions were the closest to job processing, and so we would overlay it in that way. But I agree that, realistically, it would be better to have job-processing semantic conventions, or a background-job mechanism, maybe "batch job", I don't know what the terminology would be, but something that is a little bit closer to that.
C
Yeah, yeah. I mean, the short form of this is that the span names, as defined by the messaging semantic conventions, aren't particularly relevant here, other than in the way you view the trace.
C
So if you look at a trace waterfall, then, sure, maybe it's interesting. But the availability of both the queue name and the background job class name, I think, is important, so that people can do analysis on either one of those. We've had some debates internally about this, and basically, depending on what your role is, you may want SLIs defined in terms of the job queue, and in other cases you may want SLIs defined in terms of the job class name, for example. Or, rather than SLIs, you may view it as wanting to be able to break down by the job class name for analysis purposes.
E
Yeah, and I can see that going either way. If a job is slow, or if it takes a while for a job to process, if there's latency in your job, it might be that the queue is overloaded, and that's the thing that is more important to you; or it could be that the job itself is inefficient or doing more than it should...
C
...it's going to be more helpful, yeah. The "send" and "process" names are a little bit strange as suffixes on the span name, but that's an artifact of the fact that we're using the messaging semantic conventions.
E
At the bare minimum, I think, some sort of spec language that says: if you're using a background job processor, some of the messaging semantic conventions will apply and should be used, but for the span name, the span name should be the name of the job, and the queue should be an attribute, or something like that.
C
Cool. The other issue I wanted to discuss is Kafka-related again. It's this PR and the associated issue. There are two things here, but let's start with the associated issue, which is this difference in logic between these two things. Basically, in both cases, you're extracting the parent context, which is a full context...
C
...setting both the parent but also propagating baggage and any other context that comes with it. In the first case, you're extracting the parent context, but then you're passing the parent context into `Tracer#in_span`, which only extracts the span context, so it doesn't actually propagate, effectively, any of the other contexts, so baggage in this case, into the current context.
C
So the question is really: is this intentional? I had a cursory look at the spec, and I didn't see any language around this kind of helper. Like, `in_span` is really a helper that we've defined; it's not defined in the spec.
C
I think the spec alludes to it, that this kind of helper might be useful, but it doesn't really define what it should do. And the difference in behavior is a bit weird, because we discard everything except the span context in the first case.
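A toy model of the discrepancy, with illustrative names rather than the real API: `extract` yields a full context, and a helper that keeps only the span context silently drops baggage:

```ruby
# Toy model only. In the real SDK, `extract` returns a Context carrying the
# parent span context, baggage, and anything else that was propagated; a
# helper that keeps only the span context loses the rest.
def extract(carrier)
  { span_context: carrier['traceparent'], baggage: carrier['baggage'] }
end

# in_span-style behavior: only the span context survives for parenting.
def context_from_span_context_only(ctx)
  { span_context: ctx[:span_context] } # baggage is discarded here
end

# with_current-style behavior: the whole extracted context is restored.
def restore_full_context(ctx)
  ctx.dup
end

carrier = { 'traceparent' => '00-abc-def-01', 'baggage' => 'tenant=acme' }
ctx = extract(carrier)

context_from_span_context_only(ctx)[:baggage] #=> nil
restore_full_context(ctx)[:baggage]           #=> "tenant=acme"
```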
C
Okay, so I can PR that fix. It brings up a second issue that is more Kafka-specific: there's this weirdness around, like, baggage propagation, context propagation, parenting, and so forth. So in Kafka, in the messaging semantic conventions, there are these three examples presented, and the crux of it is really that the parenting information should effectively be discarded in most cases. The parent should either be... sorry, here...
C
I'm talking about the process span, so some "{queue name} process" as the span. The parent, in some cases, is going to be the receive span; in other cases, there's no parent. But in all cases, there's a link from the process span to the send span.
C
So only the link is really important, and the parenting information is not. It's really challenging to actually do that properly in...
C
...with the APIs that we've defined, it's just really hard to do, because what this person wants to do is actually propagate the baggage context, but not necessarily propagate the parent, and instead have either a nil parent or the parent come from the current context before you do the extract; but you do want to get the linkage information. So it's, like: how do you propagate everything except the parent? Or maybe we're just using extract wrong.
C
Maybe that's my problem, that I'm misunderstanding the way extract works. But it's just a little bit complex right now to reason about propagating some, but not all, of the context; not using the parent from extract; using the parent from whatever your current context has set. All that stuff. Sorry, I know that's vague and hand-wavy, but it's a really vague and hand-wavy part of the spec, yeah.
D
I looked at that issue, the PR that he put up. It seems that, basically, when we process the span after it was produced, we just want to create a link; we don't actually want to pull the producer's context in. But by doing so, when we just create the link, the baggage isn't being propagated. So the change that was proposed now sets the parent context, while they're just trying to get the baggage. So how do we...?
C
Yeah, it's kind of everything. Like, I assume context could be more than just baggage and the trace parent, right? Somebody could define some other context that they want to propagate, and some mechanism for propagating that.
E
Yeah, so, to try to navigate through this, I think there are...
E
There are a few things at play here. One is that we have extract. Extract will get you a context, and the idea is that this is the exact same context that you had in your process on the injecting side. And then, I think, from there, you kind of have two options once you've extracted your context: you can restore that context, or you can just... if that's not what you want to do...
E
Yeah, you should be able to make a new context that doesn't contain a parent and pick and pull things into it, if you needed to do that. I'm not sure what all the APIs look like, though; there might be some room for API improvement, if this is a thing that we need to do.
C
Yeah, I think, in this case, you can probably resolve it by grabbing your parent context before you do the extract, or at least grabbing the span context from that in some way, and then passing that in to `with_parent`; or, if you really want a root span, then you do a `with_parent`...
C
...trace context or span context, I forget which. Yeah, so that part of the API is a little bit awkward. We have this thing that does a `start_root_span`, but we don't have an easy way to explicitly start a root span using, like, `in_span` or, I think, `with_span`. Maybe it's just `in_span`.
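To make the desired Kafka "process" shape concrete, here is a toy sketch with illustrative structs rather than the real API: the producer's span context is kept only as a link, baggage is carried over, and the parent comes from the current consumer-side context or is nil:

```ruby
# Toy model of the link-only "process" span discussed above. ProcessSpan and
# Link are illustrative stand-ins, not the OpenTelemetry Ruby API.
Link = Struct.new(:linked_span_context)
ProcessSpan = Struct.new(:name, :parent, :links, :baggage)

def start_process_span(name, extracted:, current_parent: nil)
  ProcessSpan.new(
    name,
    current_parent,                       # parent from current context, or nil
    [Link.new(extracted[:span_context])], # producer kept only as a link
    extracted[:baggage]                   # baggage still propagated
  )
end

extracted = { span_context: 'producer-span-ctx', baggage: 'tenant=acme' }
span = start_process_span('my_queue process', extracted: extracted)
span.parent                          #=> nil
span.links.first.linked_span_context #=> "producer-span-ctx"
span.baggage                         #=> "tenant=acme"
```

The awkwardness raised above is that the current `extract` + `in_span` APIs make this "everything except the parent" combination hard to express directly.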
E
Yeah, I think, if it can be mucked around with, and we can kind of see what this actually looks like in code, I think that would be useful. And then I would say the next step would be to just kind of understand if this is something unique to the design in Ruby, or just kind of see what's available in some of the other languages, and see how awkward it is, and see...
E
...if it's something that we're inducing on ourselves, or if we are kind of following, generally, the spec API and it's a larger issue. And then, depending on what we figure out there, we can either try to make some improvements, either to Ruby or to things at the spec level, or... I feel like there is maybe this situation that Kafka is just hard, and...
C
Yes, I wholeheartedly agree with that. Kafka is very hard, especially from an instrumentation perspective, and not just in Ruby. You know, the design of Sarama, for example, which is one of the popular Go packages for interacting with Kafka, makes a lot of context propagation effectively impossible. So, yeah, Kafka's hard.
E
Yeah, and if it just ends up being that Kafka is hard, but we can still instrument it with some less-than-ideal code, and this is the only place it's biting us, I don't know, maybe we can just heavily comment the code, try not to look at it too much, and move on with our lives. But if it is indicative of some kind of bigger problem, I think we should try to find some solutions to it.
C
Okay, yeah, I haven't seen any other examples; Kafka just seems to come up as the problem child all the time.
C
I will PR a fix to `in_span` so that we can pick up the context, and then I'll at least sketch out what I think the code is to fix this current problem for Kafka. And, yeah.
E
Cool, yeah, that sounds good. And, yeah, there will be another release coming through soon as well.
E
Great. Right, well, it's over. Any last-minute concerns?
C
So if you can just take a look at those and close the ones that should be closed, that would be great. Okay, will do.
E
All right, well, I guess I'll see everybody online.