From YouTube: 2021-01-05 meeting
A: All good, yeah. I've been in the middle of moving, so I've kind of been, yeah, not super active on the open source stuff, apart from checking email, but I'm now moved, so I should be back to normal.
B: On that, that's really interesting. Sorry, do you know who provided this one, who implemented this?
C: Yeah, we've actually... I don't know whether Ian is back yet. Sorry, Ian is one of my colleagues; Ian implemented a Pub/Sub exporter long, long ago. We've been running it in production for well over six months now, and we learned a lot in the process.
C: We had been planning to polish it and release it upstream, but we haven't done that yet. We actually had a new person join at the start of December who was going to take on the job of polishing it and uploading, or merging, it upstream. Anyway, I will see whether I can get Ian and/or Tanner to take a look at this, because, as I say, we learned a lot, and we found that the higher-level APIs for Pub/Sub did not work effectively.
C: So we ended up having to drop to the lower-level APIs in order to actually get it working effectively, with backoffs and retries and batching.

C: Just trying to think... and providing backpressure to clients and things like that. So yeah, we'll definitely take a look at this.
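The lower-level approach described above (backoffs, retries, batching, backpressure) can be sketched roughly as follows; the method names and defaults are illustrative, not the actual exporter code:

```ruby
# Exponential backoff with optional jitter: the kind of retry policy a
# Pub/Sub publish loop needs when the high-level client doesn't provide
# one. All names and defaults here are illustrative.
def backoff_intervals(base: 0.5, cap: 30.0, attempts: 5, jitter: 0.0)
  (0...attempts).map do |n|
    [base * (2**n), cap].min + rand * jitter
  end
end

# Retry a publish block, sleeping between attempts; the final failure
# propagates to the caller, which is where backpressure would kick in.
def publish_with_retry(attempts: 3, base: 0.5, cap: 30.0)
  backoff_intervals(base: base, cap: cap, attempts: attempts - 1).each do |wait|
    begin
      return yield
    rescue StandardError
      sleep(wait)
    end
  end
  yield
end
```

A caller would wrap each publish call in `publish_with_retry { client.publish(batch) }` and surface the final error to whatever is producing messages.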
B: Cool, yeah. So it sounds like you all are good people to get some eyes on this and provide some feedback.
B: We are constantly refining and perfecting things, and we'll never get there. But it's a term that I hear thrown around a lot still.
C: Yeah, so the compliance matrix is, I think, pretty much up to date for us at the moment. I made a couple of updates, I think, just before the holidays, so I'm pretty sure that captures the current state of things for us, and I opened some issues to track things that we need to work on to be fully compliant.
C: Yeah, it looks like GitHub ran some vulnerability checks or something on our repo, and, you know, we had that one issue that was open before the holidays. It seemed to be out of date by the time it was opened, but yeah.
B: Yeah, that was a weird one, in that it looked like the problem code had been removed from the repo at least a month before the issue was opened. So I really don't know all the behind-the-scenes on that, whether there was some vulnerability report sitting in somebody's inbox forever before they acknowledged it, or something.
C: Yeah. Also, I guess that was a little bit different, because it was pointing out a vulnerability in the CI scripts, not in the actual framework code. I assume that these CodeQL security scans are actually scanning the code rather than scanning the build scripts.
B: It's possible, yeah. So I guess maybe one will show up for us at some point. Maybe we should reach out; it looks like it supports Ruby.
B: So I think we kind of rambled about this one last time (by "we" I mean I), and it's just about: can you rely on a service name being in a resource?
B: So I think this is still ongoing.
A: The PR is "attributes with SDK"... what awesome titles.
C: So this just basically says you must have a resource, and the service name must be defined, and if one is not provided by the user, then you have some default. At least, that was the original text for this: the default was supposed to be, I think, the process name and possibly something else.
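A rough sketch of that defaulting rule; the attribute key and the process-name fallback follow the discussion above, but the code itself is illustrative, not the actual SDK:

```ruby
# If the user doesn't supply service.name, fall back to the process name,
# as the original spec text described. Keys and fallback are illustrative.
def resolve_resource(user_resource = {})
  defaults = { 'service.name' => File.basename($PROGRAM_NAME) }
  defaults.merge(user_resource) # user-supplied attributes win
end
```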
A: Okay, I think that makes sense. It would be nice to see more required, you know, than just service, but yeah. I think...
C: So the question: the initial implementation that most people had was ignoring the resource service name and just passing the service name in to your exporter, right, as config. And that's not the way that we're supposed to be doing it, because you want to be able to swap out the exporter, possibly through environment variables, and not actually have to configure it in code.
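The environment-variable approach can be sketched like this; `OTEL_TRACES_EXPORTER` is the spec-defined variable name, but the registry and the exporter classes here are stand-ins:

```ruby
# Pick an exporter by name from the environment, so application code never
# hard-codes one. ConsoleExporter/OtlpExporter are illustrative stand-ins.
class ConsoleExporter; end
class OtlpExporter; end

EXPORTER_REGISTRY = {
  'console' => ConsoleExporter,
  'otlp'    => OtlpExporter
}.freeze

def exporter_from_env(env = ENV)
  EXPORTER_REGISTRY.fetch(env.fetch('OTEL_TRACES_EXPORTER', 'otlp')).new
end
```

Swapping exporters then means changing an environment variable at deploy time, not editing application code.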
A: Yeah, I think that's right. One of the onboarding things we've noticed, for what it's worth, is that vendors have their conventions around: here's what you set as a service name, here's what you set for your environment, here's what you set for, I don't know, some other stuff, your version. And then getting people to, nudging them into, the semantically correct resource attribute versions of those has been painful, so it would be nice to get some more documentation.
B: I guess the other thing that is here, and that kind of came up at the maintainers meeting on Monday: I think this is kind of split out of the versioning and stability OTEP and its telemetry stability guarantees. So this mainly comes up around the instrumentation and the semantic conventions in use, and just how to handle stability guarantees and changes there.
B: I guess this seems a little bit tricky. I'm not sure to what degree we're overdoing things here, but it seems like there are at least requests by some participants that there be some sort of guarantees around this. And I guess the underlying principle, or, I think, the underlying fear, is that if somebody is creating some sort of alert on some random attribute name or something, and that name changes or disappears, the alert is now broken in a tracing backend.

B: They want to prevent that situation from happening.
A: ...in that things will change, and that's just the nature of developing software.
A: And you try to minimize it, but, you know, sometimes something needs to be fixed. I don't know, yeah. I think it's going to come up in two ways. One is, if there's a change, alerting might break. But two, if you have metrics, or some sort of generated metric, you know, whatever, a Prometheus metric based on span name or span kind or something, and then the span kind changes for some reason, then your metric...
A: You can't compare the history of that metric over time, because the metric is now a different metric. Okay, cool. I don't know; it would suck. I think it would be bad if this becomes too strict, because then correct changes can't be made, I guess. But I don't know.
B: Yeah, I think... I don't know. I know we have the instrumentation library, and the instrumentation library version at least says this.
B: It seems like there is some wiggle room in the semantic conventions. There is at least some set of required attributes, some level of optional attributes that can be added, and some things that seem to get added regardless of whether they're in the spec.
C: So yeah, there's always been discussion on every issue where we're porting stuff, where it's like: well, this isn't in the semantic conventions. Are we supposed to be adding this? Is this correct to add? What should we do here?
C: Yeah, I feel like it's important to at least adhere to the semantic conventions in terms of the stuff that we provide. So if it says it's required in the semantic conventions, we should try really, really hard to make sure that's present. But we shouldn't be prohibited from adding spans, events, and attributes that make sense, because, you know, somebody found value in them.
C: So I think locking it down too much is kind of a silly thing to do. For people who are using the OpenTelemetry Collector, there are also ways of mapping attributes, or remapping span names based on attributes, and things like that, and we certainly take advantage of those at Shopify.
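The collector-side remapping mentioned here looks roughly like the snippet below; exact processor keys vary by collector version, so treat this as a sketch to check against the collector documentation, not a verified config:

```yaml
# Sketch of collector-side remapping (verify keys against your collector
# version's docs before use).
processors:
  attributes:
    actions:
      - key: http.status_code        # new name an alert expects
        from_attribute: status_code  # old name the SDK emitted
        action: insert
  span:
    name:
      from_attributes: [db.system, db.name]  # rebuild span names
```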
C: I could imagine, you know, if there was a bug in some instrumentation and we weren't adhering to the semantic conventions, and somebody set up an alert based on that bug: when we release a new version that fixes that bug, we may also want to make a recommendation to support remapping in the collector. So if somebody has these old alerts set up, they can still support those alerts just by remapping the thing in the collector.
B: Yeah, these all sound reasonable. I think I'm in the same boat, you know, in terms of hoping that whatever comes out of this is not too restrictive. But yeah, I don't have a really good feeling for what direction this might go, so I will keep you all up to date.
A: One thing I'm always a little unsure about (I apologize for going off on a bit of a tangent here) is: to what extent are we assuming that a collector exists in a tracing pipeline? It seems like many vendors ingest OTLP natively at their API endpoint, and people are opting to essentially just say: export from your client, skip the collector. And so it feels like there's some functionality that's intentionally, you know... we don't really have trace processing...
A: Do we? We don't really have trace processing in the client, with the assumption that, well, you just do all that in the collector, and the collector is really good at it. So I'm always confused on whether the collector is just a happy path, or, you know, if you are exporting directly from a client, does that come with the caveat that you lose a lot of functionality?
C: I don't think so. I mean, it's certainly recommended and helps a lot if you do, but the full recommended package includes an agent, typically running as a DaemonSet if you're using Kubernetes, and our experience with that has not been good. Consequently, we've removed the agent from our architecture and we just export from the clients. Well, we originally exported to a collector pool running in the same cluster; with the number of clusters we now have...
C: ...that also became problematic, and now we actually export to an Envoy cluster (or an Envoy deployment, service, whatever) running in the same cluster, which then forwards to a large cluster of collectors. That just makes the management way easier for us. So we've departed significantly from kind of the default happy path, but we are still using the OpenTelemetry Collector.
C: I think you're right that vendors, at least to support customers who don't really want to run all that, or learn how to run that infrastructure themselves, may just want to export directly from their applications to an endpoint that the service provider provides. I think that's going to be increasingly common. Now, in that case, if the vendor is providing an OpenTelemetry Collector behind the scenes, then maybe they're offering some configuration mechanisms for remapping, whatever.
C: I'm not sure that the current design is actually optimal for that. The current design tends to assume that you have this read/write span that becomes read-only as soon as you've finished it.
C: You can't then make any changes to it, certainly through the API, and technically through the SDK either; you're not supposed to be able to. It's just a read-only span at that point, right? And then your only choice seems to be that you either copy the span in order to modify it, or you turn it into span data, which you send to an exporter.
C: That span data can then be modified; it's owned by the exporter, and it can be modified in place, I guess, sure.
C: But then you're going to have to write your span processor in terms of the span data instead, and then it just becomes like an exporter-prefix type thing, so it's just really weird. There were interesting reasons for having the span processor originally; I don't think most of the use cases are really supported by it anymore.
C: Yeah, and we're likely to need something similar. We have an application that does a similar sort of batching by trace ID, so it is a useful thing to do, but that's a limited set of functionality, right?
C: That's like three things that you have span processors for, and it seems like most of the interesting things that you're talking about, you'd really have to do in the exporter. Which means you need kind of a batch span processor in front of a mutating-exporter thingy that sits in front of a real OTLP exporter, right? Yeah, I don't know.
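That "mutating exporter in front of a real exporter" shape can be sketched in plain Ruby; the class names are made up, and span data is modeled as hashes purely for illustration:

```ruby
# A wrapper that rewrites finished span data before delegating to the
# real exporter behind it. Names are illustrative; span data is a Hash.
class MutatingExporter
  def initialize(delegate, &transform)
    @delegate = delegate
    @transform = transform
  end

  def export(spans)
    @delegate.export(spans.map { |s| @transform.call(s) })
  end
end

# Stand-in for a real OTLP exporter, recording what it receives.
class RecordingExporter
  attr_reader :received

  def initialize
    @received = []
  end

  def export(spans)
    @received.concat(spans)
  end
end
```

A batch span processor would sit in front of `MutatingExporter`, which in turn hands the rewritten batches to the real exporter, matching the stack described above.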
A: Yeah, I'm just thinking a lot about the remapping stuff. As stuff changes (just bringing it back), "here's the suggested remapping you want to do in the collector" is a good thing to put in release notes as we make those changes. Yeah, I just wonder...
C: But I think you're right, and I think that going forward we should think about: do we want to provide a convenience thing for people, to support this remapping, so that, you know, as people make changes to auto-instrumentation to fix bugs related to semantic conventions...
B: People did seem like they were willing to do this. I think a lot of people have already been kind of deep into the metrics work, but I think January 12th was taken. Wait, the 12th? Oh no, it was actually the 6th; the 6th was taken away, or it was something this week that was taken away. But these are still kind of on the table: the 12th and the 14th.
B: So yeah, I'm not sure if you're interested in this. I'm not sure; you can find out whether the thing has been scheduled or not, as it would definitely probably end up on the public calendar.
B: But I know this is also something that we would like to spend a little bit more time talking about at the spec meeting as well, but we definitely did not have 30 minutes for metrics topics this morning.
B: Yeah, so I saw a handful of things have come in over the holidays, and some things went out. I think we had a Christmas Day release, in the spirit of Ruby.
C: I think we can maybe talk about some of the things that went out. So we had the new release; I'm trying to remember off the top of my head what actually went into that release.
C: Yeah, so these were a couple of things. I mean, GraphQL is the significant one, pluggable ID generation was somewhat minor, and there were just a few little fix-ups, really.
C: We've then had a few small things that have gone out since then: the ruby-kafka one, a protobuf version dependency, Ruby 3.0, a Random default, and batch span processor errors for fork safety. So we probably want to cut a release, at least for the API and SDK.

C: Annoyingly, the way things are set up right now, if we do a release of the API and SDK, we basically have to release all the instrumentation as well. So our versioning story is not awesome right now, and that's one problem.
C: The release process is also not awesome. It's pretty excruciating if something goes wrong, and it usually does, and it usually happens with the Jaeger exporter. The Jaeger exporter is currently like the fourth thing that gets released, and everything afterwards has to be released manually, one at a time.

C: So it's horrible, and this happens quite a lot.
C: One quick fix I want to do is actually just move the Jaeger exporter to be the last thing in the list of gems to build, just because it has the highest probability of failing. So that will help a little bit, but we should also think about fixing the versioning compatibility between the instrumentation and the SDK and the API.
B: Is that mainly because... this is for, like...
B: I feel like we're allowing the patch to float, so you can make a bug-fix release, but anything that's a minor bump usually requires everything to go, I think.

C: Yeah, I think that's where we're at right now, and often it's not necessary.
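The "patch floats, minor doesn't" behavior comes from RubyGems' pessimistic version constraint; the version numbers below are just examples:

```ruby
# A "~> 0.13.0" dependency lets the patch version float, so a bug-fix
# release of the SDK doesn't force re-releasing every instrumentation gem,
# while a minor bump does. Versions here are examples.
require 'rubygems'

floating_patch = Gem::Requirement.new('~> 0.13.0')

floating_patch.satisfied_by?(Gem::Version.new('0.13.1')) # patch release: satisfied
floating_patch.satisfied_by?(Gem::Version.new('0.14.0')) # minor bump: needs new constraint
```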
B: I think part of the reason for that is that we are in this more volatile stage where no signals have hit 1.0 yet. But yeah, anything that we can do to improve the situation, I think we'd all be on board with.
C: What's that using for its encoding, Thrift? Yeah, so the thrift gem is often not found, and it's just random; it just seems to be...
C: It happens to fail during the build and isn't retried. A retry, like a manual rebuild, will fix it, but yeah, this always ends up blocking the release of all 26 gems that we're up to now.
C: I don't really know how to fix the Jaeger one specifically, but a quick thing that fixes the majority of the pain is to move the Jaeger exporter to the end of the list, which is pretty easy to do. That just means the only one that you typically need to build and push manually is the Jaeger exporter.
B: It seems like, in an ideal world, a multi-gem release would be like a transaction where, if it failed, nothing actually ended up being published; basically, just roll back and try again.
C: Yeah, maybe. I don't know if that's necessary. If you have the dependencies set up correctly, then as long as you release the API, and then the SDK, and then all the instrumentation and exporters afterwards, I think it's okay to have kind of this partial release that you then fix up afterwards.

C: It would be nice if the release process was tolerant of partial releases, so that it would figure out where it was up to and then continue the release process from there. As it stands, if there's any failure, then it becomes a gem-by-gem manual process.
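A resume-tolerant release loop is simple to sketch; the gem names and both helper lambdas here are stand-ins, not the real release tooling:

```ruby
# Re-runnable release loop: skip gems whose target version is already
# published, so a re-run after a partial failure resumes where it stopped
# instead of forcing a gem-by-gem manual push.
def release_all(gems, version, published:, push:)
  gems.each do |name|
    next if published.call(name, version) # already out from the last run
    push.call(name, version)              # may raise; a re-run resumes here
  end
end
```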
B: Yeah. At a bare minimum, it would be nice if, when something does fail, it tries the next one, so that you only have a handful of manual releases. I haven't done a release in a while; I know I used the new process once, but I can't remember whether I released everything or just released a single gem. I will be happy to poke you next time.
C: Yeah, weirdly, I felt like it was better when we were on CircleCI rather than GitHub Actions, but that means...
B: Do we have, or should we have, an issue where we can document at least the struggles, and maybe some wish-list type items, because I feel like...
C: ...it could be improved. I don't see an issue open for that. I had some idea that Robert had noted some stuff down; I'll check our internal repo. Maybe he's got some additional issues open there.
B: Yeah, either way, I think the easiest and best way out of this would probably be to start documenting it and getting some advice from Daniel, because I fear that if any of us jump in and try to fix things, we'll just make it worse.
C: Yeah, I guess the related issue is that, for a major version release, the build scripts right now don't actually bump all the dependencies.
C: So when you do... I guess it's a minor bump, but if we're going from 0.12.0 to 0.13.0 and you need to adjust all the dependencies in all the gems, then the scripts will do the version.rb updates, so you get all those done. But you then manually need to do another push with all the dependencies bumped as well, which is typically just a search-and-replace, and, you know, identifying the things where the version of some random gem happened to be the same as the thing that you're bumping.

C: So that's my usual experience, and that's a little bit annoying. It would be nice if that was also automated.
B: Yeah, I agree. I think having issues for these things will help make them a reality. Yep, yeah, I'll try to get that done.
C: Yeah, I'm hoping that release is just a patch-level release; I don't think we have any actual API changes.
C: And would that address Ruby 3.0? It would add Ruby 3.0 to CI. There are some cleanup items: if you go to the issues list, there's a couple related to Ruby 3.0. One is that ruby-kafka has some CI failures; these seem to be timeouts waiting for some export to happen, I think. Anyway, it's been assigned to Robert, and Robert is away this week, so hopefully he'll get a chance to take a look at it when he's back next week. And then the remaining one is OTLP.
C: Actually, I thought there were more than this, so we need to create some additional issues. But there are OTLP instrumentation tests that are disabled because the protobuf gem is not compatible with Ruby 3.0.
C: So that's an upstream dependency that needs to be fixed. I don't think the dependency bump for protobuf that's just been done is enough for Ruby 3.0 compatibility. And then the other issue, which isn't filed here, is that there's a handful of... there was a change in the behavior around...
C: ...blocks on MRI used to pick up the block arguments of the containing method. This is just weird behavior, and that behavior has changed, and it affects a bunch of gems that we have provided instrumentation for. So those things are not usable with Ruby 3.0 at the moment, and I think that may include Rails as well, so Rails is not compatible with Ruby 3.0 yet. So anyway, yeah, we need another issue opened up for that.
C: The ones that aren't compatible are incompatible because the upstream gem is incompatible, and then we have the OTLP protobuf thing, which again is out of our hands. ruby-kafka is the only one that's a bit weird and is in our hands, or at least in Robert's hands.
B: Yeah, this actually sounds slightly tricky. I mean, it's not all that tricky, but I think what we're going to have to do is add Ruby 3 and make it an allowed failure for most things for now.
C: That's what's happened, yeah; some of the appraisals specifically exclude Ruby 3.
B
And
then,
when,
like
a
new
compatible
version
of
the
gem
does
come
out,
you're
gonna
have
to
have
some
conditionals
in
your
appraisal
file
to
possibly
at
least
around
ruby
3,
to
only
run
on
those
most
current
versions,
and
then,
when
everything
is
passing,
we
can
kind
of
call
it
ruby,
3.0,
compliant
yeah.
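The conditional Appraisals setup might look something like this; the gem names and version bounds are illustrative, and which versions are actually 3.0-compatible is an assumption:

```ruby
# Appraisals file sketch: only exercise gem versions on the Rubies that
# support them. Names and version bounds are illustrative.
if RUBY_VERSION >= '3.0'
  appraise 'ruby-kafka-latest' do
    gem 'ruby-kafka', '~> 1.4' # assumed to be the 3.0-compatible line
  end
else
  appraise 'ruby-kafka-1.3' do
    gem 'ruby-kafka', '~> 1.3'
  end
end
```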
C: What haven't we covered that we should before next time? Maybe some of the open pull requests might be worth taking a quick look at. Yeah, the top one is Koala instrumentation; that's happening, somebody at Shopify is working on that. The http client gem one needs a review; I think it was specifically asking for...
C: Yeah, I provided feedback. I think Robert hasn't had a chance to address this yet. This general block of comments is...
A: I don't think Datadog's implementation of Net::HTTP instruments the connect events, so I would lean toward: if we include them, include them as events. Okay, but I don't have strong opinions, and I don't use this frequently enough; you know, there could be value to it.
C: Yeah, there is certainly value to it; we should be tracking it... oh, we should be instrumenting connect. My argument for events rather than a span is that there is typically no corresponding server span for a connect span, right?
C: You typically don't instrument servers at that level, so you'd end up with a client span with another client span nested underneath it, no corresponding server span for that leaf client span, and then a separate server span.
C: I don't know whether that's an okay thing or not, but anyway, it's just a thing. Instrumenting connect is really important, because it often highlights a performance problem in client code where clients are not reusing connections across requests, and we've seen that crop up again and again in a bunch of places. So our current instrumentation, and I think possibly the Net::HTTP instrumentation, has a separate span for connects, but I just don't think it's the right thing to do.
C: I think it's better as events, and jbd has an excellent blog post where she talks about the use of events in OpenCensus in the Go instrumentation, and her examples there instrument connect as events, not as spans.
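The event-based option can be illustrated with a toy span class; this is not the real opentelemetry-ruby API, just a sketch of the shape:

```ruby
# Toy illustration of recording a TCP connect as an event on the HTTP
# client span, instead of as a nested child span with no server-side twin.
Event = Struct.new(:name, :attributes)

class ToySpan
  attr_reader :name, :events

  def initialize(name)
    @name = name
    @events = []
  end

  def add_event(name, attributes: {})
    @events << Event.new(name, attributes)
  end
end

span = ToySpan.new('HTTP GET')
span.add_event('connect',
               attributes: { 'peer.hostname' => 'example.com', 'peer.port' => 443 })
```

The connect timing stays attached to the request span, so a missing-connection-reuse problem still shows up without producing a dangling leaf client span.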
C: So those are the big ones. There is the Jaeger exporter with nil values thing; I can't remember, but I think that's unnecessary now, because...
C: I released something that actually disallowed nil values, so I think this one can probably just be closed, but I'll confirm that before asking drogus to close it. Yeah, did we dust off this work, by chance? I had asked... I think I've got a comment here suggesting that this needs...
C: ...both the Jaeger exporter with nil values and the deferred attribute validation, all...
B: Ideally... or, is it true that, with the currently released versions of the Jaeger exporter and the API/SDK, drogus could potentially see if he's still getting a nil value? We might be able to say that this is likely released: try it, report back, and we'll close it if so.
C: I believe... or, sorry, we report them through the error handler and then drop them, which sounds like the right thing to do.
A: We can close that one. I don't think Aaron's working on it, and Robert's work mostly does everything that was existing in there. I just closed my old GraphQL one also, the one that Robert helped drag kicking and screaming across the finish line.
C: Yeah, and he's opened an issue for kind of the GraphQL error handling, so...
A: A key one that's missing, and isn't mentioned here, is Zipkin export.
A: I did some work recently in the collector with Zipkin. I'll have to see what my priorities look like this quarter, but I might be able to jump in.
C: Cool. I did have one open issue here about spec compliance for the Jaeger exporter, and we don't need to cover it now, but it is something that we should chat about next time. It's just basically that we're doing compact Thrift, so we're using a different port as our default.
C: The port that the spec talks about is for the binary Thrift encoding, which we're not doing. So the question there is really: should we be using the binary Thrift encoding rather than the compact Thrift encoding, and should we change our port to match the spec? And then there's a similar question for the Jaeger collector exporter, where we're doing Thrift over HTTP (binary Thrift over HTTP), while the spec seems to imply that gRPC is required.
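For orientation, each Jaeger encoding/transport pair conventionally has its own port, which is why "compact Thrift" implies a different default than the binary-Thrift port the spec mentions. These are standard Jaeger defaults recalled from memory, so verify them against the Jaeger documentation:

```ruby
# Conventional Jaeger default ports per encoding/transport (from memory;
# verify against the Jaeger docs before relying on them).
JAEGER_DEFAULT_PORTS = {
  'agent thrift-compact/udp' => 6831,
  'agent thrift-binary/udp'  => 6832,
  'collector thrift/http'    => 14_268,
  'collector grpc'           => 14_250
}.freeze
```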
B: So I think a lot of this can probably be solved with some spec adjustments. I feel like, in general, there are a lot of export options for Jaeger, and I don't think everybody is supposed to support all of them. So my guess is that the spec authors kind of documented what they did for their thing, and I think if we just try to relax this, that would probably be successful. At least, okay, it's something to try first.
B: All right, great. Yeah, I guess we can organize some of this stuff a little bit more next week. Otherwise, I'll see you online. Yep, see you.