From YouTube: 2020-11-12 meeting
C
I think we could actually jump in; we're only a minute late, and that's kind of a rare one for us. Let me figure out how to start sharing my screen.
C
Awesome, welcome everyone. Please add yourself to the attendees list, and if you have anything you want to talk about, please be sure to add it to the agenda. I think Johannes is joining us in a little bit, or maybe he's already here. We can jump into this. I wanted to talk a little bit about the status of the API refactor. Just to pause for a second: I can't see the participants list there.
C
That's okay. I don't see Johannes signed on yet, but for Krzesimir and Johannes, maybe I'll just give a little introduction for people that aren't familiar. Krzesimir is returning to the project from a bit of a hiatus. He's a former major contributor, I would say a former approver as well, and he works for Kinvolk, and Kinvolk has contracted with New Relic recently. So we're happy to bring him back onto the project.
C
Maybe not, but that's just a bit of an introduction. The idea is that he's been very actively working on this ticket to re-encapsulate the OTel API, the thing that's blocking the next release, and he's making some great progress on that.
C
I think the current status is that the open PR to change the propagators and baggage package is active. This has already been moved; I'm one behind on my links here, but that is the next one, the next big one. Krzesimir, did you want to talk a little bit about progress?
D
It's not a big one, just getting propagation in; not a lot of code. There's one more PR to go, I think, which is where I move the global package to the top level and update the documentation, and that should be it. I decided to split them in two because there is a bunch of code moved from global/internal into internal/global, so there are more code changes there.
D
That's why I split them. Other than that, it's just the same thing: move the code, rename some imports. That's it. There was also a question about the metric test package that was merged into oteltest.
C
Yeah, that was kind of my assumption as well. When I was looking at this, I was also punting on it and wondering if we could figure it out at a later point. My goal in this conversation is to get the community's pulse on this one, to see what everyone else thinks about oteltest. I've got some guesses, but I'd love to hear from Steve, Anthony, Evan, and Johannes as well.
C
Yeah, I think if there's any contention, I absolutely want to open an issue for it. But if there's also just no desire to really split it up again, then I don't want to waste the effort. So I was wondering if anybody else thought that we should be doing the split again and moving things back into the metric and trace test packages.
A
I'm kind of ambivalent about it. I think it's convenient to have it all together in one package, but it maybe makes sense to have them in separate packages, since we've kept traces and metrics separate in the restructuring. I don't know; I'm also not sure it matters a whole lot.
C
You just took the words right out of my mouth; I'm kind of in the same boat. I don't really have a strong feeling. I think what I'll do is open an issue, and we can close it if there's really no desire, but yeah, I'm in the same boat.
C
I think it kind of makes sense for symmetry with what we used to do, but it's also really nice because it's cohesively in a single package: if you're testing, it's really nice to just have it all there. It's a really small API, and as was pointed out, it seemed fine. But I'll capture this into a ticket, and we definitely don't need to make a decision; I'm just looking for opinions.
C
Well, cool, that sounds great. Since Andrew's on as well, maybe we can talk a little high-level status. We do have the issue project board tracking our progress going forward. We have made some decent progress in the past week: there are 152 done issues, and our to-dos have dropped by 36.
C
Last week we took a look at this with Andrew as well, and we had identified the fact that there was a... I think I can actually do this; I've done this a few times now.
C
Oh cool, yeah, coming from you that means a lot. So this might or might not have filtered through, but we identified and looked through all of the open to-do issues last time, I think specifically a lot of the open trace issues, and this, I don't think, is as well defined.
C
Yeah, and these are directly related to the trace; there are a few more that are obviously related to the propagation and the context that are not going to be captured here. But I think these are the bigger-ticket ones, and we have, I think, a handle on this. We had made some really rough, pie-in-the-sky estimates that a two-to-four-week time period to get these done seemed reasonable.
C
As long as we have some progress going forward, and based on the work from Krzesimir and Johannes in the past week, I think it is reasonable to assume that that progress is going to be achievable. So that's just a little bit of a status update on that one, if that helps.
E
C
That's a good question. My editor is actually open with the spec compliance matrix right now; I'm trying to resolve some conflicts and fill it out again. I think it's pretty close. There might be some gaps and some things that we need to shore up, but I think all of the gaps that currently exist there, the things that we haven't filled out,
C
are mostly captured in issues, so I think it's mostly reflective of what we already have on the project board. The spec compliance matrix needs to be updated with those issues to link them correctly. That being said, I'm glad Johannes is on, because I wanted to talk a little bit about what he put in.
C
Also, on what I put down for another issue: there is a bit of an SDK issue that I'm wanting to dive into as a technical one, so I think that might take a little bit of time. But for what we're trying to freeze for the API, the context API, the package API, and the trace API, I think we have all of those issues either opened or actively in progress right now.
C
My goal is to have that spec compliance matrix completed by Friday at end of day. So hold me to it; I'll try to be there for you on that one.
E
Okay. I also want to offer, if there's anything that I can do: I was going to start naively opening some issues in the Go repo, but I looked at it and you've got it organized, so I felt it would have just been duplicate work or whatnot, but I didn't know which one to link up. If there's anything you think I could help with, let me know, and I'd be happy to help with doing some of the connection and association with the compliance matrix.
C
Cool, thanks everyone. We kind of bumped Andrew up a little higher, but I don't think he wants to stay for the full meeting, so I just wanted to prioritize him a little bit. The next thing: Krzesimir, you had the OTLP exporter. I haven't looked too deeply into this, but maybe you can just start talking and we'll follow along.
D
So, the work on the OTLP exporter is to more or less conform to the specification. As you said, it was blocking two issues. The first one was that the specification says you should be able to configure two separate endpoints, one for tracing and one for metrics, and the other one was supporting some protocol other than gRPC, so HTTP with JSON, or protobuf over HTTP.
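The per-signal endpoint requirement could be sketched with Go's functional-options pattern. This is only a hypothetical shape; the option names, the `config` fields, and the default endpoint below are illustrative and not the actual exporter API.

```go
package main

import "fmt"

// Hypothetical per-signal endpoint configuration; all names are illustrative.
type config struct {
	tracesEndpoint  string
	metricsEndpoint string
}

type Option func(*config)

// WithTracesEndpoint overrides the endpoint used for the trace signal only.
func WithTracesEndpoint(ep string) Option {
	return func(c *config) { c.tracesEndpoint = ep }
}

// WithMetricsEndpoint overrides the endpoint used for the metric signal only.
func WithMetricsEndpoint(ep string) Option {
	return func(c *config) { c.metricsEndpoint = ep }
}

func newConfig(opts ...Option) config {
	// By default both signals share one collector endpoint (illustrative value).
	c := config{tracesEndpoint: "localhost:4317", metricsEndpoint: "localhost:4317"}
	for _, opt := range opts {
		opt(&c)
	}
	return c
}

func main() {
	c := newConfig(
		WithTracesEndpoint("traces.example.com:4317"),
		WithMetricsEndpoint("metrics.example.com:4317"),
	)
	fmt.Println(c.tracesEndpoint, c.metricsEndpoint)
}
```

With this shape, a user who passes neither option still gets a single shared endpoint, matching the common single-collector setup.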
D
I think those two issues are really related to each other; you could possibly work on both of them at the same time, because they are touching the same code, which is the connection handling. So I started: I picked up this pull request that Stefan was working on before, and I got a bit carried away with it, so I started laying some groundwork for splitting.
D
You know, splitting out the protocols, let's say. But I had some issues with doing that, especially with this option for specifying a number of workers.
D
I'm not sure whether this should be part of the exporter or part of, let's say, the protocol connection or something, because basically this number of workers is something we pass when we want to transform the exported metrics into the protobuf format. I suppose that in the future, when we want to support JSON over HTTP, this is not something that's going to be used, but it is something that probably will be used if you want to send protobuf over HTTP.
C
So, with the worker number: sorry, that, if I remember correctly, was in the transform pipeline, and that is essentially like...
D
What I did is introduce, let's say, a connection manager. So you create a connection manager and you pass it to the exporter, and that connection manager is what the protocols implement. For now we have the gRPC one, and then later we will probably have the HTTP-protobuf and HTTP-JSON connection managers.
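The connection-manager idea described above could be sketched as a small interface that the exporter core depends on, with one implementation per wire protocol. This is a minimal sketch under the assumptions stated in the discussion; the interface name, method set, and the toy implementations are all hypothetical, not the real exporter code.

```go
package main

import "fmt"

// ConnectionManager is the hypothetical seam between the exporter core and
// the wire protocol (gRPC, HTTP/protobuf, HTTP/JSON).
type ConnectionManager interface {
	// Export ships an already-encoded batch to the collector.
	Export(payload []byte) error
	Protocol() string
}

type grpcManager struct{}

func (grpcManager) Export(p []byte) error {
	fmt.Printf("grpc: sending %d bytes\n", len(p))
	return nil
}
func (grpcManager) Protocol() string { return "grpc" }

type httpJSONManager struct{}

func (httpJSONManager) Export(p []byte) error {
	fmt.Printf("http/json: sending %d bytes\n", len(p))
	return nil
}
func (httpJSONManager) Protocol() string { return "http/json" }

// Exporter stays protocol-agnostic; the user picks the manager at
// construction time.
type Exporter struct{ conn ConnectionManager }

func NewExporter(cm ConnectionManager) *Exporter { return &Exporter{conn: cm} }

func main() {
	e := NewExporter(grpcManager{})
	fmt.Println("exporting over", e.conn.Protocol())
	_ = e.conn.Export([]byte("batch"))
}
```

The payoff is that protocol-specific concerns (like a worker count that only matters for protobuf encoding) can live inside the manager that needs them.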
C
My guess is to just remove that worker number. I had originally written it so that this thing could become a little more high-powered when concurrency was needed, but if it's getting in the way of implementing the specification, or of any design choices like that, it was a feature that was, I guess, prematurely optimized.
D
You don't know what kind of number of goroutines you will need. Probably, if at some point this becomes a bottleneck, then maybe we can do something smart and basically adapt the number of goroutines to use. Okay, so I don't know, maybe I'll just remove it, and then we can reintroduce it in some clever way.
D
Other than that, when I was working on this, as I said, it's very related to the other issue of supporting other protocols. It will probably be useful to put those protocols into separate modules, so you can use the OTLP exporter with, for example, the HTTP-JSON protocol without pulling in the gRPC dependency. But right now...
C
I think your design goal is a really good idea, trying to reduce the dependency overhead if you're not going to use gRPC, because we've had other people run into this: that's a problematic library, as I'm sure you know, as is the protobuf library. People are not really excited when that thing gets included in their code base. So I think that's a really good design choice, and I'm hearing what you're saying.
A
Okay, yeah. Keeping them as separate modules is a good place to start, even if they'll end up pulling in gRPC, because if at a later date we're able to eliminate that dependency in some manner, either by using a different struct generator or by deciding to duplicate some of those structs, then we don't have to ask people to change which modules they're importing. You just update it, and they get that benefit.
C
Yeah, I think that's really good advice. Okay, I look forward to looking at that PR, Krzesimir. I'm excited; you've got me thinking about OTLP again. Cool. I think the next issue I have on the agenda is the same thing Johannes has on here, so we'll just jump straight to his link, to be honest.
C
I gave your introduction a little bit before you jumped on the call; I was just letting everyone know who you were, and that you're working on this span processor OnStart stuff. Maybe you want to give a little background on it. I've read through it and I've got some suggestions, but maybe you can just give a little introduction.
H
So, if you need to grab data from the span's internal state, for example to export things, then you kind of cannot do it from the API, because, for example, you cannot read attributes; you can only set them. Basically, up to now, before we started trying to align these methods with the spec, what we had is: instead of passing a span struct to OnStart and OnEnd, we would pass a SpanData struct, and that doesn't follow the spec.
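The two processor shapes being contrasted here could be sketched as follows. The types are trimmed, illustrative stand-ins (the real SDK structs carry many more fields), and the `logger` processor is a toy added only to make the spec-aligned shape concrete.

```go
package main

import "fmt"

// Trimmed, illustrative types; not the real SDK structs.
type SpanData struct{ Name string } // read-only snapshot handed to processors/exporters
type Span struct{ data SpanData }   // live SDK span

// Snapshot returns a by-value copy, so processors cannot mutate the span.
func (s *Span) Snapshot() SpanData { return s.data }

// Pre-change shape described above: both hooks received the snapshot.
type SnapshotProcessor interface {
	OnStart(sd SpanData)
	OnEnd(sd SpanData)
}

// Spec-aligned shape: OnStart sees the live, still-mutable span, while
// OnEnd sees an ended, effectively read-only view.
type SpecProcessor interface {
	OnStart(s *Span)
	OnEnd(sd SpanData)
}

// logger is a toy processor recording the hook calls it receives.
type logger struct{ events []string }

func (l *logger) OnStart(s *Span)   { l.events = append(l.events, "start:"+s.data.Name) }
func (l *logger) OnEnd(sd SpanData) { l.events = append(l.events, "end:"+sd.Name) }

func main() {
	var p SpecProcessor = &logger{}
	s := &Span{data: SpanData{Name: "op"}}
	p.OnStart(s)
	p.OnEnd(s.Snapshot())
	fmt.Println(p.(*logger).events) // [start:op end:op]
}
```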
H
Now
right
now,
I'm
working
on
the
change
to
actually
pass
a
span
to
these
methods,
but
the
problem
is
that
the
moment
we
do
that,
then
we
need
to
do
some
tricks
before
we
can
actually
operate
on
the
contents
of
a
spam
so
yeah.
I
basically
like.
I
have
a
kind
of
a
concrete
question
there
which
is
yeah.
Are
we
intentionally
copying
expand
span
data
in
the
sdk,
for
example,
inside
inside
spam.end,
and
I
try
to
like
provide
relevant
snippets
in
my
question
there,
because
right
now
what
happens
is
the
moment?
H
I changed the function signature of the interface, I broke some tests, and then I started digging in to figure out exactly what broke. I realized that, basically, we end the span, but the end time of the span is not reflected in the original span, because at some point we copied this span data struct and made the modification there. Then we pass that span data on to the exporter, but the original span is unmodified.
H
I don't have enough background on the Go library to know if this copying is intentional and by design, or if it was a bug that was never caught. I think it is very likely a bug that was never caught, because, again, we didn't pass the original span object to the exporter; we passed this SpanData, which basically copies more or less everything from the span's internal state into a struct that the exporter can use.
C
Yeah, I have some context.
C
Krzesimir was also here when this was put in place, so he probably has some thoughts on it too, but I do think it was intentional that a copy of the span data was passed, specifically for the idea that once it's sent down the export and span processor pipeline, you don't ever want there to be any confusion about the state that was actually sent; it needs to actually go that way. That eventually became termed in the specification as a read span, or a read-only span, which is what our span data, I think, kind of morphed into, terminology-wise.
C
So I think, if that's the case... I linked the specification.
H
Exactly. So, first of all, I didn't know if it makes sense at the conceptual level that, after a span has ended, anything other than the exporter would need to modify it. But right now the reality is that we're definitely setting the end timestamp only in the copy, in span.End in the SDK, and to me that doesn't make sense, because if anything then tries to read the span,
H
this state is not reflected there anymore. I think the only reason it hasn't been caught by unit tests until now is that we've been relying in the unit tests on the modified copy, and now that I've made this change to actually pass a span to OnStart and OnEnd, suddenly the original struct is what gets evaluated, and this potential bug got uncovered.
C
So yeah, I think that has some partial truth to it, but I think there's a little bit of confusion. I think the OnEnd method should, in reality, actually take the span data, not the API span. I think that part was actually correct.
C
There's a caveat on that, but we'll get to it. And then OnStart, I think, should take the API span, because that is supposed to be the modifiable span, I guess is how I would say it, and OnEnd shouldn't actually have that.
C
Another part of this, which we talked about last time, is the span itself. When you actually end the span, which is the thing that actually sends it off to the OnEnd side of things, all of the other methods on that span after that fact should essentially be no-ops, and they shouldn't actually change the internal state of the span. You shouldn't have this ambiguity where you end the span, then go to set the status afterwards, and you're in this gray area: does that actually change the status of the thing that's getting exported, or has it already been exported? It's not a catch-22, just a race condition. So I think that part was actually intentional. And then, with the end time, I don't know; I would probably say that it was intentional because they knew you would never get it, but it may not be the best idea to just leave that unset at the span level. That being said, I think, at the actual interface for OnEnd... sorry, I lost my place. We may need to rethink how we structured our readable-span and read-write-span objects, because the readable span is like the SpanData that comes from the export package.
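The "methods become no-ops after End" behavior described above can be sketched with an ended flag guarding each mutator. This is a minimal illustration; the field names and the single-string status are stand-ins, not the real SDK span.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Minimal sketch: once End has run, the span's mutators silently do nothing.
// Field names are illustrative.
type span struct {
	mu      sync.Mutex
	ended   bool
	status  string
	endTime time.Time
}

func (s *span) End() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.ended {
		return // calling End twice is also a no-op
	}
	s.ended = true
	s.endTime = time.Now()
	// ...here the snapshot would be handed to OnEnd and the exporter...
}

// SetStatus is ignored after End, so the exported snapshot can never race
// with later mutations.
func (s *span) SetStatus(st string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.ended {
		return
	}
	s.status = st
}

func main() {
	s := &span{}
	s.SetStatus("ok")
	s.End()
	s.SetStatus("error") // ignored: the span has already ended
	fmt.Println(s.status)
}
```

This removes the gray area discussed: whatever the exporter received at End is final, because no later call can change the span's state.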
H
Which, in this case, for example, are inaccessible via the API. So basically, if we do that, then this sort of solves my problem, because then I don't have to make the change, the breaking change... well, that's not the right term, but I don't have to break the unit tests anymore, because I'm not changing OnEnd.
H
But then we have kind of a disparity, because OnStart accepts a span and OnEnd accepts a span data, where, no matter how we call it in the Go library, the spec explicitly says that it's the same thing that is passed to both methods, right? It doesn't...
H
Oh sorry, yeah, I misinterpreted that. I saw the word span, and since the spec doesn't dictate the details of the library, I assumed there is a notion called span and it's the same thing in both cases. But yeah, I guess you're right.
C
Yeah, so that's actually, I think, how the Go implementation was originally written: there's a span, and then we have SpanData, which is a different conceptual thing. People took that, and I saw that in the Java world there are actually two different objects for a span, probably because they have generics there and are able to just implement this idea of a span, but it's a readable span and a read-write span, I think, is what they call it in the Java world.
C
They likely need to be different types at this point, but I'm not sure whether our code structure is fulfilling the specification, or whether it's serving us well at this point. Your questions are starting to make me wonder if we need to rethink what a read-write span is in the SDK, and if there's maybe some sort of additional interface it could implement, something that says you can actually get data from this. That might be a useful thing.
A
Yeah, and I think the question you asked, Johannes, about whether we take that makeSpanData function and export it might help with that. It would allow you to take an SDK span and turn it into a readable span.
A
You wouldn't be able to make changes to that and have them reflected in the API span that you've got, so I don't know if that fully satisfies the spec's new idea of a read-write span, or if we want to actually put accessors on the API for API spans. That section of the spec is a bit weird, because it seems to think that the API should be write-only, but that the SDK should allow you to access the data.
C
Yeah, I think that has to do with Java's interface system and how you can implement a partial interface, or explicitly implement an interface, versus Go, where it's implicit. So I think that's an option: having this makeSpanData as just a way to take a writable span and get from it the readable side of things. Another option... I thought about this for 20 minutes.
C
I don't have the best ideas on this one yet, I don't think, but: in the SDK package itself, we could add an interface, a read-write span interface, and then you can check to see if the span implements it. If it does, it would also implement these additional accessor methods, and then you could directly pass those in the SDK, as long as it doesn't bleed into the API, I guess, which I don't think it would. That's kind of another idea; I thought about it but haven't thought it completely through. I think there are maybe some ways to explore here. I do think there's a gap, and I think we need to address it before we release this, because this is, I think, a problem.
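The optional-interface idea floated here could look like the sketch below: the API surface stays write-only, and code that needs read access type-asserts to an extra interface that only the SDK's concrete span satisfies. All names are hypothetical.

```go
package main

import "fmt"

// Span is the hypothetical write-only API surface.
type Span interface {
	SetName(name string)
}

// ReadWriteSpan layers accessors on top of the write-only surface.
type ReadWriteSpan interface {
	Span
	Name() string
}

type sdkSpan struct{ name string }

func (s *sdkSpan) SetName(n string) { s.name = n }
func (s *sdkSpan) Name() string     { return s.name }

// writeOnlySpan implements only the write-only API surface.
type writeOnlySpan struct{}

func (writeOnlySpan) SetName(string) {}

// describe reads the name only when the concrete span opts in.
func describe(s Span) string {
	if rw, ok := s.(ReadWriteSpan); ok {
		return "span: " + rw.Name()
	}
	return "span: <write-only>"
}

func main() {
	s := &sdkSpan{}
	s.SetName("operation")
	fmt.Println(describe(s))               // span: operation
	fmt.Println(describe(writeOnlySpan{})) // span: <write-only>
}
```

Because Go interfaces are satisfied implicitly, the extra interface lives in the SDK package and nothing in the API package needs to change.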
H
D
Maybe we can somehow document it and say: okay, a read-only span is basically a copy of the span data, because whatever changes you make, you make them on the copy and they basically get lost, so the original is not modified. And a read-write span could be just a pointer to the original span data, so you can actually modify it the way you want. Wouldn't that be a satisfying solution?
F
C
So the problem there is that the span data doesn't include the links, the attributes, or the events; we could solve that, but it doesn't currently exist there. And then you have a concurrency problem, because the span data in the span has a concurrency guarantee: multiple methods can be updating it without causing race conditions. If you're passing the pointer outside of the function, there's no such guarantee at that point. Okay, yeah.
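The copy-versus-pointer distinction at the heart of this exchange is plain Go value semantics, and can be demonstrated in a few lines. `spanData` here is a stand-in, not the real SDK type.

```go
package main

import "fmt"

// Mutating a struct copy leaves the original untouched, while a pointer
// aliases it; this is the mechanism behind the end-time bug discussed above.
type spanData struct{ EndTime int64 }

// endCopy mutates a by-value copy; the caller's struct never changes.
func endCopy(sd spanData) spanData {
	sd.EndTime = 42
	return sd
}

// endPtr mutates the original through the pointer.
func endPtr(sd *spanData) {
	sd.EndTime = 42
}

func main() {
	orig := spanData{}
	snap := endCopy(orig)
	fmt.Println(orig.EndTime, snap.EndTime) // 0 42

	endPtr(&orig)
	fmt.Println(orig.EndTime) // 42
}
```

This is exactly why setting the end time on a copied snapshot never shows up on the original span, and why a pointer-backed read-write span would behave differently.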
H
Yeah, I'm getting the impression that the separation of some of the span's fields into this separate struct maybe has to do with code reuse, because the export package is the thing that is mainly going to require reading that, and this is why people didn't want to copy things from the export side to the...
C
I don't know if I'm the best one to talk about that design choice; I didn't make it. But I've also looked at it recently and thought: this is kind of weird; how come half of the fields are here and half the fields are there? I think it might also have to do with performance optimization, because then you're just passing a pointer around instead of multiple fields of data, but that's a wild guess.
C
Yeah, 100%. And the locking as well is inclusive of the entire span, which is something I wanted to take issue with, because it doesn't need to be. A lot of the methods can be adjusted asynchronously on their own; they don't need to lock the entire data set. But it's really hard to do that if the span data is encapsulating all of this other stuff.
C
So I don't think this is the most optimal format right now. I've spent like two hours looking at the actual span data format itself, and I think it can be improved, though I don't have a suggestion or proposal at this time. So maybe a good way to put it is: if you wanted to take a little bit of a deeper look at this, Johannes, after we... I mean, I think 1304
C
we could probably merge with something that you've already implemented, where maybe for OnStart we keep it with a readable span, just pass a copy of this span data and include the parent context, like the context.Context, and say that's fine for this issue, and then open up another issue to address the whole span data question with the readable and writable span, and scope it that way. Would that help? Yeah.
H
I think that sounds easy enough. Okay. I'll probably need to consult with some folks regarding the design aspects of the span data refactoring, because I'm not experienced enough with the Go library, I guess, but yeah, I can tag people, and we usually discuss things in the next meetings, I guess.
C
Yeah, I'm definitely happy to collaborate and work on this with you as well, because...
C
The trace side is actually trying to GA in like a week or two as well, so I'd like to get this usable and conforming to the specification; this is a good top priority for me as well. So yeah, let's do that. I'm going to try to capture what we've just talked about in the comments afterwards, or if you wanted to put a comment on this issue, let's try to capture some of the stuff behind us. That'd be cool.
H
Yeah, I can do that. So, just to summarize: we're basically not changing the first argument to OnStart and OnEnd right now; the only thing we're changing is the parent context to a context.Context for now, and then we'll open a follow-up issue to do the bigger rethinking. Is that correct?
C
Yeah, that sounds good to me. Let's see if Anthony or anybody else has any ideas.
C
Awesome, cool. I think that's our agenda. Just to kind of open it up, I'll stop sharing so you can see those faces: does anybody else have anything they wanted to talk about during the meeting, outside the agenda?
C
Oh, David, sorry, bringing you back in. I think you're the one that put in the OpenTracing bridge PR... OpenCensus. Oh.
C
So I feel bad; I just wanted to say sorry to you as well, but I don't know if you saw my latest comment. I merged Krzesimir's tracing package change, so there just need to be some package updates on that, and then we can get that merged. I just wanted to make sure; I think we're going to sync on that one. Yep.
C
You're faster than I can read my notifications. Okay, cool, then we're all good. Sorry about that again. Cool.
C
Let's just do that, because yeah, I don't even know if they were asking whether they could work on it. I think they were just asking, or maybe it could be interpreted that they were just asking, when it is...
D
So he contributed something, or she, I don't know; that's why I had my doubts, but anyway.
C
Yep, okay, it's clear. We'll have you working on that, so, okay.
C
I think that's it for the agenda, and I don't see anybody else raising their hands, so I think we can give everyone back another 23 minutes, which I'm all about. Cool, thanks everyone for joining in; great conversation. See you all online and see you virtually. Thanks so much, until next time.