From YouTube: 2022-01-05 meeting
A
I joined — well, I started to attend about a month ago, and I was out for two weeks over Christmas and New Year. Yes, I'm fairly new.
C
Okay, cool — who are you with? Which company?
A
Sorry, I have some post-nasal stuff. I'm with New Relic — I'll add myself before I forget. I joined their open source team and I chose to work on the OTel JS project.
A
I have a question about the agenda. If you have something that you want to talk about with the team, do you just add it to the list, or do you bring it up in Slack first?
D
I don't think that legendicus, even though he's a maintainer, will join most of the SIG meetings, because he's in — I don't remember exactly where, but Asia — so his time zone is not very friendly to this meeting. As the first order of business, I want to say welcome to the new maintainers. For those that aren't aware or didn't notice, we added Amir, Qingzhong, and Rauno as maintainers of OpenTelemetry JS. So thank you to all of you for joining.
D
All right, so the first item I have here was just a check-in on the stale bot. I'm sure most people have noticed: we have a GitHub Action that runs periodically, checks issues and PRs for activity, and has been closing them.
D
I have noticed that most of the issues that get marked as stale either end up reopened or get the never-stale label applied, and I was wondering: do people think the stale bot is useful, or is this something we should think about disabling? Most of the time it seems like we're not actually letting it close issues.
C
So I think it's useful — or will be more useful over time — once we start getting more inbound issues, especially ones that are sometimes awkward to close: the issue is there, but no one is probably ever going to deal with that thing, etc.
D
One issue that I know we have is that sometimes things become a month or two old and everybody just forgets they're there. This gives us a chance to go back, look, and say "this is an old issue, someone created it, nobody ever replied" — to make sure we're actually replying to all these issues and closing the ones that make sense to close. Just from that perspective, I think it makes sense to keep it.
D
I don't think it's causing problems yet. I have had a couple of people complain that their issues have been closed, but it's easy to reopen them, so it hasn't been a big issue so far. Personally, I would say we should keep it.
C
Guys, just for the record, a quick comment. People might not know it, but there is a limit on the number of operations the stale bot performs per run — it's 30 or something — where an operation means a label added or an issue closed.
C
We are actually hitting that limit on every run, so we haven't even gotten to the point where all the issues are in the state they're supposed to be. On one hand, that kind of helps us deal with those issues at a slower pace; on the other, it might make it feel like we are constantly fighting stale. Alternatively, we could just get it over with for once.
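For context, the cap being described corresponds to the `operations-per-run` option of the `actions/stale` GitHub Action, which defaults to 30. A minimal sketch of such a workflow — the schedule, day counts, and label names here are illustrative assumptions, not necessarily what this repo's workflow uses:

```yaml
# .github/workflows/stale.yml — hypothetical sketch
name: Mark stale issues and PRs
on:
  schedule:
    - cron: '30 1 * * *'      # once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v4
        with:
          days-before-stale: 60
          days-before-close: 14
          exempt-issue-labels: 'never-stale'
          exempt-pr-labels: 'never-stale'
          # the limit discussed above: labels added + issues/PRs closed per run
          operations-per-run: 30
```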
E
Yeah, I just wanted to add one thing. For feature requests, it's mostly not really useful to check whether they are stale or not, because there are literally feature requests that need to be done at some point. It's mostly useful for bugs — I think, and even more so in the future, we'll have people opening issues like "this thing doesn't work the way I want it to."
E
Generally — hopefully, generally — it's because of a misconfiguration, but more and more we will have bug fixing happening across the whole repo that fixes issues, yet people will not actually close the issue, because they either found out later that it was fixed and didn't bother, or didn't think about closing the issue initially. So I think it's more interesting for bug issues than for feature requests or others that aren't labeled.
D
Yeah, I agree. There are also a lot of people that open issues as questions or discussion topics, and after a while the discussion dies, but nobody ever closes the issue. That's the other case that I think is a good use case for it. For now, let's keep it. I did not realize, Rauno — you said we're running into the limit with no more operations left?
D
Yeah, it's just so it doesn't take the backlog of like 300 issues and close them all at once, which I think makes sense. I actually like having it only apply to so many issues per day, so we can deal with it more easily — because in the core repo we have literally 300... yeah, 250 issues.
D
I
think,
if
it
applied
stale
to
like
200
issues
all
at
once,
nobody
would
have
time
to
go
through
and
look
through
them
all
so
just
doing
30
per
day.
It
will
eventually
get
there.
So
I
think
letting
it
just
continue
to
run,
at
least
until
it's
tagged
all
of
the
issues
and
closed
the
ones
that
it
thinks
are
stale.
D
You know, I agree with you, Rauno, that we won't really see the value of it until we end up in the stable state, which we're not at yet. So for now let's just keep it. Mostly I just wanted to check in and make sure nobody had any major complaints about it, which it sounds like nobody does so far. Okay, moving on. This item is kind of a problem that I guess I caused — just to give a little bit of history.
D
But when I went to update it, I ran into some permission issues, because actions v2 doesn't play nicely with the official Node containers.
D
The integration tests fail, and the reason I didn't notice at the time is that, when I updated the testing infrastructure, it didn't run the tests for all of the packages, because the packages had not been changed. When we actually made changes, it was discovered that some of the tests were failing, because MySQL and Cassandra and the other things that need to start in order to run the tests were not properly being found.
D
So Rauno made a PR to make them listen on localhost, which makes those tests work. But I guess the test-all-versions script is still having an issue, and I'm not entirely sure what's going on there. Rauno, are you able to explain that?
C
Yeah, it actually all makes total sense now, in hindsight. Since we are not running the tests in Docker, the host names do not resolve — they are controlled by the DNS of the Docker network, and the bridge network doesn't reach the host, basically. But the test-all-versions workflow was never actually tested in that PR, because none of the packages were changed during it.
C
Once it compiles, you guess it works, right? And that was the issue, actually. Node wasn't set up properly in the workflow — or at least it didn't seem to be — and that's why, at the moment of fixing the main workflow, I didn't bother going through the test-all-versions workflow: I anticipated a lot of unknowns there, like more issues that would need to be fixed. So I just disabled it for now, to come back to it.
C
I
don't
think
it's
actually
like
a
huge
issue
in
terms
of
like
the
setup,
node
or
or
whatnot
there
it
it's
probably
easily
fixable,
actually
to
make
the
desktop
versions.
Workflow
also
run.
D
No, similar timing — but I think the container change was in the last week of December, and the test-all-versions change was just before Christmas.
B
Okay, I want to add something: I added the feature to run tests on CI only for packages that had changes, and it's working well, but it doesn't account for dependencies. So if one package was changed and another package depends on it, the tests for the dependent package don't run. I was trying to squeeze the CI time down as low as possible, but if it doesn't cost a lot of time, maybe we can consider reverting it and testing everything on each PR.
C
It was failing with what seemed to be unrelated issues in the Cassandra tests, so that made me hesitate, or question the reliability of the change detection that Lerna offers.
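As a point of reference, Lerna's change detection can be asked to also include packages that depend on the changed ones, which would address the dependency gap mentioned above; a sketch of the kind of invocation involved (exact flags depend on the Lerna version in use):

```shell
# run tests for packages changed since main, plus any packages
# that depend on them
npx lerna run test --since origin/main --include-dependents
```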
D
So I guess for now there isn't. I guess I misunderstood a comment that you made on a PR — I thought test-all-versions was not going to work with setup-node, but what you were saying was that it just needs to be done in a separate PR, because we might run into additional issues and you were trying to just get things working.
D
Okay,
so
I
guess
this
afternoon
I
will
try
to
fix
the
the
test,
all
versions,
workflow,
and
I
will
try
to
find
a
way
to
trigger
it
for
all
packages,
just
to
make
sure
that
that
they're,
all
working
before
we
merge
it.
C
Cool — if you want to do that, you're welcome to. I'm kind of working on the CI area right now as well, so I can take it too.
C
And that's why it kind of makes sense to me to not run it for every PR at all, but to do it scheduled and on the release PRs, perhaps. That makes sure that on every release we guarantee that it should work.
D
Is it possible to run two separate — I mean, I know it's definitely possible, but would it be too much work to make two separate test-all-versions workflows: one that tests only versions we've already claimed support for, which would run on all of the PRs, and one with an open-ended version range that would run nightly?
C
I would guess that whenever a developer introduces changes to a package, they would want to at least run those test-all-versions locally. Of course, this is something we can never guarantee, but at least it would come out in the next scheduled run or the next release.
D
At
that
point,
we
have
to
just
trust
that
people
are
actually
running
it
before
they
check
the
box,
but
if
we
run
it
at
least
for
release
prs,
that
would,
in
the
worst
case,
that
that
we
do
merge
something
that
breaks
an
old
version.
We
would
catch
it
before
release
at
least
right.
D
I
mean,
I
think,
that's
a
reasonable
way
to
go.
It
does
also
prevent
if
we
have
open-ended
version
ranges-
and
you
know
some
packages-
that
we
don't
control.
You
know
have
a
few
releases
and
suddenly
we're
testing
a
lot
more
versions
than
we
expected.
We
could
be
slowing
down
our
ci
too,
and
we
don't
really
want
that.
D
Seems like a no. I guess for now let's change it to only run on the release PRs, and then we should also probably remove the change detection when we do that, so it just runs all of them all the time, for stability reasons.
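The plan just described — scheduled runs plus release PRs only — could be expressed as a workflow trigger roughly like the following; the cron time, branch naming convention, and script name are all assumptions for illustration:

```yaml
# hypothetical trigger for the test-all-versions workflow
on:
  schedule:
    - cron: '0 4 * * *'        # nightly run
  pull_request:
    branches: [main]
jobs:
  test-all-versions:
    # for PR events, run only on release PRs (head-branch naming is an assumption)
    if: github.event_name == 'schedule' || startsWith(github.head_ref, 'release/')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm run test-all-versions   # script name is an assumption
```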
D
And
rano
you
said
you're
working
on
ci
stuff.
Are
you
gonna
handle
that
as
well.
D
For those that didn't notice, I created a new package called opentelemetry-proto, and the idea — we've talked about this a few times — is to have a package that only handles the transformations from our internal representation to the protobuf.
D
The code is automatically generated from the proto definitions — or at least the serialization code is — and then we have custom code that does the actual translation.
D
It's still marked as a draft, because there are quite a few housekeeping things that still need to be done, in terms of the README, the package.json, and some stuff like that. But I just wanted everybody to be aware that it's there, for people that aren't aware — and please take a look at it, because if there are any major problems, I'd like to catch them as early as possible in the process.
D
So yeah, please take a look at it and make sure the direction we're taking seems reasonable. It does pull in the protobufjs dependency, but I think that's not a problem, because our exporters all already depend on it anyway. So I don't think that's an issue.
D
Also, we can get rid of almost all of the submodules in the other exporter packages, but one would need to be in this package. Having one is better than having — I think — six right now. Ideally, instead of using a submodule, we could copy all of the proto files in, but I don't think having a single submodule is necessarily a problem.
D
One thing we have right now is that in the current exporters, when you run compile, there's a post-compile step that copies all of the proto definitions into the dist folder, because they're actually required to be in the final tarball. This removes that: we won't ship the proto files anymore, we will only ship code that was statically generated from them. So it should hopefully reduce the size of those packages, and tree shakers should also be able to shake out a lot of the code that's not being used, for web browsers and stuff like that.
D
Whereas right now we actually have the proto files being loaded by web browsers, which is not ideal.
D
So that's where we're at now. I don't have a ton more to say about it, but please take a look and make sure I haven't done anything horribly stupid. Other than that, I'll be cleaning up the README, the package.json, testing, and some stuff like that, and then marking it as ready for review fairly soon.
D
Probably, yes — because if we don't change the port now, it would be a breaking change to do it later. So we probably should. I think there are two different PRs open to change that port right now: there's one in the exporter, and there's actually a PR in the contrib repo too, but I think that one just affects the examples. I think the PR that changes the port was recently rebased, so it should be fairly easy to merge.
D
Do
I
have
it
in
my
list
here
yeah
I
do
that's
this
one
it
looks
like
rona
is
the
only
person
that
has
approved
it
so
far,
so
yeah
people-
please.
Please
approve
this
and
we'll
get
that
merged
before
we
release
the
the
trace
exports
and
that's
the
only
the
other
item
that
I
had
on
the
agenda.
I
I
went
through
all
of
the
open
prs
in
the
core
repo.
D
I
did
not
look
through
the
contributor,
but
these
are
all
the
pr's
that
I
found
that
are,
I
guess,
real,
like
they're,
not
drafts
and
they're
not
created
by
the
dependency
bot,
and
things
like
that.
So
this
is
our
current
sort
of
list
of
prs
to
review.
D
The
metrics
are
high
priority,
because
we're
pushing
on
metrics
right
now
and
bug
fixes
are
obviously
always
high
priority,
and
in
this
case
the
otlp
http
port
should
definitely
be
reviewed
as
quickly
as
possible,
but
yeah
just
a
list
of
of
pr's
to
reviews.
If,
if
I
missed
anything,
people
could
feel
free
to
add
prs
in
here.
D
So
if
anyone
else
has
anything
that
they
would
like
to
add,
I
know
svetlana
you
mentioned,
or
was
that
just
a
general
question.
A
I guess it's more of a general question. I saw this issue that I wanted to work on — it's actually something my team has been talking about — and it's the retry logic, issue 1233. It was opened in 2020, so I just wanted to see if others have any tips.
A
For me, this is my first big issue, and I just wanted to see whether I should try adding custom retry logic or whether I should use some kind of package for it.
D
So
in
general
we
try
not
to
depend
on
external
packages
when
at
all
possible,
some
things
turn
out
to
be
very
complex,
like
obviously,
we
have
a
protobuf
dependency
and
some
things
like
that,
but
it
makes
sense
to
use
external
dependencies,
but
for
the
most
part
we
try
to
bring
in
as
few
as
possible,
especially
for
core
components.
D
So
I
would
say,
if
at
all
possible,
please
try
to
implement
it.
You
know
directly
in
the
repo
if
it
turns
out
to
be
more
complex
for
some
reason
or
something
like
that,
we
can
talk
about
adding
a
dependency,
we're,
not
100.
A
Okay,
awesome
and
just
to
double
check
this
logic
would
have
to
be
implemented
in
the
exporter.
Trace,
otlp
http
package
is
that
correct.
D
Yes, it probably would have to be. I know the OTLP exporters depend on each other to some extent — I don't remember which one actually has the base implementation, but I think it's the HTTP trace one. There is one of these exporters that has most of the logic in it already, and that's where I would recommend starting, at least. Okay, perfect.
D
I remember the span exporter spec at one point had some wording about retries, but I think it was removed — or maybe it was a PR that never even got merged. As far as I remember, the complaint was essentially that each exporter would have different retry handling based on what the backend requirements were, and nobody could really agree on a single retry implementation at the time, and then I think it just... yeah.
D
It
became
too
much
of
an
argument
and
everybody
dropped
it
and
walked
away,
and
nothing
ever
got
done.
As
far
as
I
remember,
which
happens
a
lot
in
specification.
E
Yeah, I just wanted to say that there was this mechanism inside the SDK. I just checked a little bit more: we resolve the promise when the result is success, but if it's not, we just throw an error. I'm not sure if we need to keep that or not, because it's not actually used.
E
Yeah, if you check — you can check this one, but that's on the batch exporter — the batch processor — but it's pretty much the same. It's just to know whether we need to throw back there or not. I just sent another link into the Zoom call, if you want to look.
D
But
currently
we
don't
have
any
retry
logic
in
here
right,
yeah,
yeah,
okay,
so
you
can
you
can
look
at
the
old
retry
logic.
I
you
you're
saying
look
at
the
at
the
old
discussion
that
to
determine
why
we
don't
have
it
in
the
batch
processor
or
I
I
guess,
I'm
unclear
what
the.
E
Yeah
yeah,
I'm
yeah
for
sure
I'm.
I
was
more
meant
to
say
that
there
was
some
background
around
this,
but
I'm
not
sure
I'm
really
not
sure.
If
it's,
if
it's
interesting
to
to
check
it,
I
was
just
throwing
information
that
it
was.
That
was
the
discus
a
long
time
ago,
but
yeah,
I
don't
think
it's
really
relevant
and
even
though
I
think
we
should
put
this
logic
inside
each
exporter,
so
I
I
think
we
can
in
your
actually
what
I
said.
D
Okay,
so
I
I
think
that
the
retry
should
be
in
the
exporter.
For
now
I,
as
far
as
I
know,
the
specification
does
not
have
any
retries
within
the
batch
processor,
and
I
I
could
be
wrong
about
that,
but
I
think
it
doesn't-
and
I
would
I
would
prefer-
to
have
the
export
logic
in
the
exporters,
even
though
that
is
a
little
bit
more,
you
know
duplication
if
we
do
it
for,
for
multiple
exporters,.
D
E
And just to add something: I am currently using and deploying the collector, and they had the same issues. At the beginning they were using a processor to retry the sending of logs, traces, and metrics, and now I think they've moved on to having a shared package that hosts all the basic retry stuff for the HTTP-based protocols.
E
Ladies,
so
I'm
not
sure
if
it's
interesting
to
to
have
it
somewhere
in
the
core,
but
in
case
like,
for
example,
we
have
also
the
the
zip
king
exporter,
not
sure,
if
the,
if
we
meant
to
to
to
to
keep
the
jaeger
exporter,
not
that
it's
sensating
the
the
exporter
but
yeah.
Maybe
it's
really
interesting
to
have
the
logic
to
retry
any
http
stuff
inside
the
car
package
or
some
something
you
just
I'm
not
sure.
E
Yeah
yeah,
I
mean
I
mean
they
have
the
issues
in
the
collector
because
they
have
maybe
like
20
or
more
exporters
that
use
http.
But
I'm
not
sure
if
it's
worth
doing
this
for
like
two
different
or
three
different
exporter.
D
Yeah, I don't think we have enough to worry about it right now. When we switch the exporters to use the new proto library, we may want to create a transport library or something like that as well, and then it would maybe make sense to live there — but for now, I think we just put it in the exporters.
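Since the consensus here is to implement retry directly in the exporters, a minimal sketch of what exponential-backoff retry could look like follows; every name in it (`backoffDelays`, `sendWithRetry`, the parameter defaults) is an illustrative assumption, not the repo's actual API:

```javascript
// Hypothetical sketch of retry with exponential backoff for an exporter's
// send path; none of these names come from the opentelemetry-js codebase.
function backoffDelays({ retries = 5, baseMs = 1000, maxMs = 32000 } = {}) {
  // delay before retry n is min(base * 2^n, max); jitter omitted for clarity
  const delays = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    delays.push(Math.min(baseMs * 2 ** attempt, maxMs));
  }
  return delays;
}

async function sendWithRetry(send, opts) {
  // `send` is an async function performing one export attempt
  for (const delay of backoffDelays(opts)) {
    try {
      return await send();
    } catch (err) {
      // a real implementation would retry only on retryable errors
      // (e.g. HTTP 429/503 or network failures) and honor Retry-After
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  return send(); // final attempt; let any error propagate to the caller
}
```

With these defaults the delays double from 1 s up to a 32 s cap; a production version would add jitter and distinguish retryable from permanent failures.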
D
Thank you — yeah, thank you. And I guess I will assign that issue to you.
F
I was just going to ask, because I actually opened this issue, I guess — I don't remember, it was so long ago. That's why I went digging into the spec, to see what I could find, and I'm not really seeing any definition around the OTLP exporter. So I don't know if it's been removed, or if it's just in a place I haven't found yet.
F
So
if,
if
anybody
knows
that
might
be
useful
information
there,
there
is
a
folder
for
sdk
exporters
that
I
have
pasted
in
the
chat
that
talks
about.
Is
it
kind
of
jager
and
non-otlp?
D
Yeah, it might be in the proto. I wouldn't expect that, though. I'm in this non-OTel one...
F
Yeah, my vague recollection around opening that ticket is that at some point there was some — I don't know — at least nebulous wording around retries in OTLP somewhere.
F
So it would be nice to figure out whether that wording still exists, or what happened to it.
D
And
if
it
doesn't
exist,
it
probably
should
so
it
might
be
worth
making
a
pr
against
the
specification
around
retry
logic
but
anytime.
I
know
I
suggest
a
specification
pr.
I
know
how
much
work
that
can
be
so
I'm
usually
hesitant
to
volunteer
other
people
to
do
that
type
of
work.
Svelana.
Can
you
comment
on
that
issue
so
that
I
can
assign
it
to
you.
F
Yeah, I'll do a quick look and see if I can find what happened to it — or verify that it's been removed — and just comment. But I feel like that's one of the big obstacles to working on this: figuring out exactly what the work should be. So there might be a little bit of a research project there.
D
So I guess, Svetlana, before you do start implementing this, please take a look at the specification and at least see if you can find any wording about retries in there. If you can't, I won't say that you have to make a specification issue first, because that tends to be a lot of work — but if you feel like you're up for it, feel free to do that as well.
D
Okay then — thanks, everybody, for your time, and I will talk to you all next week.