From YouTube: 2023-04-05 meeting
Description: Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
F
Yeah, hi folks. I just want to talk about releasing some sets of modules with no changes. Recently we did that several times for RC releases, and I see Juraci replied that some dependencies can contain security fixes, but I would still like to discuss whether we want to do that. I think dependency updates may be important, but usually the consumers of our packages define the dependencies themselves, right?
F
They don't have to use our dependency versions; they can upgrade the dependency themselves and that's it. Also, if our beta modules depend on stable modules that are not getting any updates, anyone who depends on the beta modules will get the new dependencies anyway. So yeah, that's my take. Let me know what you think.
G
If everything was, you know, stable, like 1.0, would we keep releasing every two weeks, or would that slow down? And if it slowed down and we only released once a month or once a quarter or whatever, and all we had was dependency updates, would those be released, or would we just not? I guess what I'm asking is: once we're not releasing so fast, is it still the same question?
H
I mean, it maybe takes a little bit of analysis, like the Go release this week: you know, is it vulnerable to it or not? Sometimes it's "oh yeah, burn everything down and fix it," and sometimes it's questionable how reasonable it is. I don't know that there's a blanket policy that necessarily fits everything.
A
For example, for the Go issue, the 1.20.3 and 1.19.8 releases, we don't have to update dependencies; we just need to make a new release with a new build, right? That happens in the releases repo, because the fix is in the standard library. But I think it is important to release updates for things like that, because if people are using it and we're exposing them to denial-of-service vulnerabilities, we should give them an opportunity to fix that. I guess there are two aspects to this question: the first one is from the perspective of the Collector as a library, and the second one is the Collector as a binary. For the Collector as a binary, I think we should keep the pace of a new release up to every three weeks. That will at least make sure that the changes from our dependencies are propagated to the users of the binary, and then perhaps we even have patch releases, like what Anthony suggested yesterday or the day before. When it comes to the Collector as a dependency, I don't think we actually need to release it, because people can just depend on main. As long as we've merged a fix or a dependency upgrade, people can depend on main, or in the worst case they can replace the dependency downstream. So I don't think we actually need a release of core as a library every two weeks just because of dependencies.
F
So we can introduce a rule: if there are no changes for a particular set of modules, like the RC set, the stable set, or beta, we just skip that release. For example, we don't release the RC set next time; we just keep whatever beta is and keep the previous RC if there are no changes. Does that sound good?
F
Yeah
someone
asked
in
the
channel
was
the
pros
of
doing
that.
I
believe
it's
just
like
for
user
convenience
like
they
will
not
get
any
new
or
depend
on
both
updates
like
hey.
There
is
a
new
release
they
will
and
they
will
not
need
to
change
anything
on
their
side.
So
it's
like
just
for
user
convenience.
F
I mean, it's not fully automatic anyway: you have to run several commands, one for each set of modules, right? So we just don't need to run one of them if there are no changes. You check the changelog, and if there are no changes, you just skip that part.
A
I think just looking at the changelog is going to be error-prone. There may be changes that aren't clearly indicated as applying to something. I think we should have some technical means of assessing whether a change that requires a release is there or not: we should be able to look at the changes in a subtree and see if they are in go.mod and go.sum only, or if there are any other changes.
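A rough sketch of what such a check could look like, assuming git is available; the tag name and module path below are illustrative only, not part of any existing release tooling:

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// onlyDependencyChanges reports whether the files changed under moduleDir
// since lastTag are limited to go.mod and go.sum.
func onlyDependencyChanges(lastTag, moduleDir string) (bool, error) {
	out, err := exec.Command("git", "diff", "--name-only", lastTag, "--", moduleDir).Output()
	if err != nil {
		return false, err
	}
	for _, f := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if f == "" {
			continue
		}
		if base := filepath.Base(f); base != "go.mod" && base != "go.sum" {
			// A non-dependency change: this module set needs a release.
			return false, nil
		}
	}
	// Only dependency bumps: the release for this module set could be skipped.
	return true, nil
}

func main() {
	// Hypothetical tag and module path, for illustration only.
	skip, err := onlyDependencyChanges("v0.75.0", "exporter/loggingexporter")
	if err != nil {
		panic(err)
	}
	fmt.Println("only go.mod/go.sum changes:", skip)
}
```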
E
I'd also like to point out that this may become a problem when we have multiple components that are RC'd and we only want to release one of them, because today we're only aware of two versions, right? We have beta and then we have RC, but it becomes more complicated when we only want to release one or the other. Or do you release all RCs together if there's one change in any RC? Yeah.
A
That's what we've done for the Go SDK: if we release a module set, we release all of the modules, and that's it.
D
I noticed in this meeting's notes that the time is not right because of the daylight saving time change, and this is, I suppose, because the meeting was originally created in a Pacific time calendar, so it shifts according to the Pacific time zone's daylight changes. So I changed it to PT so that we know it's always relative to Pacific time, and Juraci is suggesting not to change it because of DST.
D
So I suppose you mean specifying the meeting time in UTC, and then the meeting time would change for people living in places that observe DST, right?
I
Yes, if your time is changing. I guess we have a lot of people from the US here, but if your time is changing, it shouldn't change for all of the participants of the meeting because of your time change. I mean, we don't have DST here in Brazil. I know I'm the only one in here, but why should I be affected by this change?
G
Well, that's how the calendar works, though; it's just based on the time zone of the...
L
Yeah, I think this started as a compromise between the US and Europe in terms of time zones, and we picked 9 AM West Coast, which is a reasonable time. 8 AM would be a bit too early for some folks, because they have to take the kids to daycare and other stuff. So if you changed to UTC and made it 8 AM for half of the year, you're probably not going to have half of the West Coast people on this call present.
L
So that's the history behind this. We said, hey, we will do the best we can between Europe and the US, because that's where most people were in the early days, and now we have a wide spread of...
D
Yeah, I didn't start this; I really wrote it just to let you know that I changed this thing from 17:00 UTC to 9 AM PT, so I felt like I needed to do something about it. But yeah, it probably seems like a bit too much to discuss the timing of this meeting again now, I suppose. So if that's required, we can schedule it for another time, but we have many other topics, so maybe let's move on.
M
And as always, I understand time zones are hard. So the next one is an issue that was reported by one of our customers and that we have been working diligently on. We found that in some situations you would see high memory usage from the Prometheus receiver or the simple Prometheus receiver. There seems to be some amount of overhead added on; I'm not too familiar with the details, but we can look into it if you like.
M
What we found is a way to reduce this memory usage significantly, at the cost of some loss of data (exemplars, I think, mainly), and we'd like to propose that this might be a solution going forward for some use cases where we have a massive amount of metrics and we want to be able to scrape them efficiently. We have a prototype, a little bit of code that we pushed, that is still using the Prometheus libraries to read the scraped information.
M
So, just to let you know, this is something that we're looking into; it is actively under a scope of work, and we'd like to eventually have a discussion about contributing it back to the project, finding a way to work with the community here to bring it in. There are a couple of ways: we can add yet another receiver, which I think is getting a little crowded on the Prometheus side.
M
We
have
too
many
ways
to
just
Prometheus,
or
we
can
say
the
simple
Prometheus
receiver
becomes
this
this
procedure
instead,
which
will
break
its
its
configuration,
which
I
don't
I,
don't
like
so.
M
Well, the next one's for me too. I've been following the rabbit hole of an issue opened a long time ago about removing components from the root module. Some of it was actually dictated by what I think is a good idea: finding ways to reduce the amount of toil it takes to maintain all those go.mod files and all that stuff.
M
Some of it was also about moving some of the logic away to the components list, which can now be generated by the builder for the otelcol command. I have a bit of an issue where we currently do not export the list of components from that command, and it's a main package, so it's actually impossible for me to depend on it from a different module, so I'm kind of stuck.
M
configschema and all the other tools that we'd like to have depend on it: we'd like to have just one list of components for them. Right now the builder does not allow you to explicitly export it; I opened an issue about a possible customization of the generation template for that. But I think it's maybe overblown; maybe I just need to have a separate file in that folder.
M
That
would
export
the
list
of
components
and
it
would
be
fine
I
just
wanted
to
let
you
know
it's
kind
of
I'm
in
a
bit
of
a
standstill
on
that
one
I
moved
as
fast
as
it
as
far
as
I
could
to
move
all
the
tests
that
were
part
of
the
internal
folder
to
command
or
tell
contribco,
and
now
I
need
to
to
be
good
about.
So.
L
Could these tools be refactored? Because right now I think there is a problem: this tool depends on the entire internet to be able to be built. I mean, I'm calling it "the entire internet" because it depends on all the components that we have in contrib, and you probably depend on half of the internet of Go code.
L
So I think these tools could be implemented in a way where we reference them from all the components to generate the docs or whatever we generate there, instead of the other way around, where the tool references the modules that we want to work on. If we change it to that way, we would remove this need for a huge components list, because the problem I'm trying to avoid is that having this components list causes a lot of slowness, a lot of dependency problems, and all those things.
M
I'll let you know if that comes up. Cool, unless anyone else has anything, we can move on to, I think, Juraci's PR review.
I
Yeah, so I'm mostly asking for a review; it's an early iteration, and I just wanted to see if the general direction is okay. Basically, the problem is that on the exporter side we are generating errors, and those are not actually in line with the response we're getting from downstream. So the OTLP exporter, for instance, when making a gRPC call to another service to send data...
I
It does not return the same error that it got from the external service, meaning that the information it sends back to the receiver is likely wrong.
I
So if you see the Collector as a gateway: you have clients sending data to the Collector, and the Collector sends data elsewhere. If that elsewhere says "I'm too busy right now, try again later," the Collector will just reply saying "bad data, internal server error, don't try again." And this proposal is mainly to make the Collector, as a gateway, not break the chain, such that it properly propagates the same errors that it gets.
I
I don't know if there was a purpose in just returning 500 for everything, but this change aims to change that. If there is any way to do this more correctly for all of the exporters at once, I couldn't find one, so I applied the change only to the OTLP exporter and the OTLP receiver. And as part of the change, if this is the right path, it would imply having a specific policy somewhere, like in the contributing guidelines, I don't know where, saying that we should just relay the same errors that we get from upstream, or downstream in this case.
F
I think currently we have only two types of errors, permanent and non-permanent, right, in general in the Collector, and I believe we should extend that. Instead of making it work only for OTLP, we should have other types of errors. For example, a 429 should be somehow codified inside the Collector and propagated downstream, and for different receivers there will be different behavior. We recently found out that the filelog receiver, for example, just drops the data.
F
If
there
is
some
back
pressure,
but
instead
it
should
slow
down
I
believe
at
least
it
should
be
configurable
to
slow
down
so
yeah
that's
in
great
Direction,
and
that
is
definitely
what
we
need,
but
I
don't
think
it
should
be
a
glp
specific
only
so
we
should
just
extend
number
of
possible
internal
collector.
I
Errors
right
so
about
the
permanent
and
non-permanent
errors.
That
is
true,
but
those
wrap
dot.
Well,
they
should
wrap
the
original
errors,
yes
right
so
so
on
at
the
receiver
side,
the
hlp
receiver
I
think
what
it
does
is
what
what
it
could
do
is
get
the
status
out
of
there
and
their
unwraps
whatever
there
is
until
it
finds
a
hrpc
error
of
sorts
right
so
and
once
that
happens,
then
it
returns.
It
builds
the
status
that
was
generated
at
this
exporter
level.
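As a rough illustration of that unwrapping idea (a sketch only, not the code from the PR; it assumes the exporter preserved the downstream error by wrapping it):

```go
package recoverstatus

import (
	"errors"

	"go.opentelemetry.io/collector/consumer/consumererror"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// grpcStatuser is implemented by errors produced by google.golang.org/grpc/status.
type grpcStatuser interface {
	GRPCStatus() *status.Status
}

// statusFromError walks the wrapped error chain and relays the original gRPC
// status if one is found; otherwise it falls back to a generic code based on
// whether the error is marked permanent.
func statusFromError(err error) *status.Status {
	var gs grpcStatuser
	if errors.As(err, &gs) {
		return gs.GRPCStatus() // relay the downstream status as-is
	}
	if consumererror.IsPermanent(err) {
		return status.New(codes.InvalidArgument, err.Error())
	}
	return status.New(codes.Unavailable, err.Error())
}
```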
I
So, you know, I agree permanent should be there. Perhaps you could have another set of errors; I did actually end up creating another one, but I did not want one that encompasses everything. Basically, we have gRPC, and gRPC has its status codes already. What we didn't have is one specific to HTTP. So if the OTLP HTTP exporter is making an HTTP call and it receives an error like a 404, it will just generate a new 500 and propagate that back to the receiver.
I
Now, the receiver would never have access to the status code of that. So what I did was create a new error: in a package at the root module, I created a new package, errors, I think "errs", so it does not clash with the standard library, and in there I created a request error, I think that is the name, which embeds the status code.
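For illustration, a wrapper of that shape might look roughly like the following; the names are hypothetical and the actual type in the PR may differ:

```go
package errs

import "fmt"

// RequestError carries the HTTP status code returned by the downstream
// service so that a receiver can relay it instead of a generic 500.
type RequestError struct {
	StatusCode int
	Err        error
}

func (e *RequestError) Error() string {
	return fmt.Sprintf("request failed with status %d: %v", e.StatusCode, e.Err)
}

// Unwrap keeps the original error reachable via errors.Is / errors.As.
func (e *RequestError) Unwrap() error { return e.Err }

// NewRequestError wraps err together with the status code received downstream.
func NewRequestError(statusCode int, err error) error {
	return &RequestError{StatusCode: statusCode, Err: err}
}
```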
L
So that's not going to work in the current design of the Collector, and the reason why it's not going to work is that we have the queue for the exporter, and we decouple the response from the receiver. The receiver will never see the response from that side unless we don't have any kind of queuing. While we have queuing, we decouple the synchronous request, gRPC, HTTP, whatever it is, in the receiver from the exporter, with the exporter queue and batching, for retries and stuff like that.
I
It does work, though; it does work because, in my case, I have a gateway and I'm disabling batching and disabling the retry queue, so it does work and it is synchronous between the receiver and the exporter, which makes sense in my situation. Queuing does make sense when you have the Collector as the last mile to your external systems, but the synchronous path does work if it is a gateway on the ingest side, like in my specific case: it is like Grafana's OTLP intake, right?
I
So again, I have a gateway that is my ingest, and my ingest writes to specific databases. If my metrics database is overloaded, I want to send that same information to the client, which is running external to my infrastructure, and the client would have a retry mechanism of some sort based on that information. If I just send it a 500, it will just drop the data and say, well, it is a permanent error.
L
Okay, that makes total sense, but I think we should come up with... do you think we should propagate the native errors, like for HTTP we back-propagate HTTP errors and for gRPC, gRPC errors, versus building our own generalization that encapsulates multiple protocols? The reason is: right now you have gRPC and you have HTTP, but maybe you have Thrift sooner than later, and a receiver having to go through all the possible ways of an exporter failing is going to be a mess. So most likely...
L
What we need is actually in our consumererror package, and I think someone even started to do something there, but I think we should have some way of generalizing the classes of errors that we want to back-propagate: propose some canonical or generalized error types, and then, if you are an HTTP exporter, you're not going to back-propagate the 500 or 503 or whatever you receive; you're going to back-propagate something that a receiver can understand, like "hey, this is a retry with delay," or whatever it is.
L
Who wants to take an action item to propose a generalization of these errors in the consumer package? I think consumererror is the right package. I think there are a couple of issues where I mentioned this, and you probably even remember those, so I think we should look into this and propose some solution there.
I
So actually, the first iteration of my PR did change consumererror to include that. Why I ended up doing it the way that I did in the final version of the PR is this:
I
We have only two transports right now, gRPC and HTTP. Of course, there might be others for more specialized exporters and more specialized receivers, but the bulk of the exporters are HTTP or gRPC. As for Thrift, we can think about that in the future, but it's very likely that it dies before we actually implement anything with Thrift.
I
Now, the reason that I didn't use consumererror for that is that I saw I would have to embed all of the gRPC codes...
I
Well, the gRPC status, the entire object, in there, and we would have to build our own HTTP status code handler, or a struct to hold an HTTP status, which is not bad, but in the end we probably do not want to create another abstraction layer, because gRPC does have its own statuses, which are different from HTTP for a reason, and we would have to map gRPC to HTTP and vice versa.
I
Now, if we create our own, we are going to create the fourteenth new standard for status codes, and I don't think that's the right way. We should just think about gRPC and HTTP: if it is gRPC on both sides, just relay the same error; if it is HTTP on both sides, just relay the same error and let the client handle it, just like any well-behaved gateway; and if there is gRPC on one side and HTTP on the other, then you do a translation.
L
That works for the OTLP receiver, but every receiver will have to have this duplicated code and ask: is this gRPC, what do I do; is this HTTP, what do I do; am I gRPC, what do I do if I receive HTTP; am I HTTP, what do I do if I receive gRPC? So all the receivers will have to handle all four combinations.
I
So the translator itself you can have in a helper package. That's in the errs package, the request error, I think; I think I externalized it to a "to HTTP" function there, and if not, I can certainly do that. But a translation between gRPC codes and HTTP status codes, and vice versa, can be done separately. Now, on the receiver side, it already knows which protocol it's talking, because it is on that path: I'm writing an HTTP response to my client, so I know which form the response is going out in.
I
I can just hand this to the helper function that does the translation to the HTTP status code for me, and the translator just looks at it and sees: is it HTTP already? Then return it as is. If it is not HTTP already, if it is gRPC, make a translation. So it's not too much work, actually. I have it there in the PR; I can show you what is there.
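A minimal sketch of such a translation helper; the mapping below covers only a handful of common codes and is illustrative, not the mapping used in the PR:

```go
package errs

import (
	"net/http"

	"google.golang.org/grpc/codes"
)

// GRPCToHTTPStatus maps a subset of gRPC status codes to rough HTTP
// equivalents so that an HTTP receiver can relay what a gRPC exporter saw.
func GRPCToHTTPStatus(c codes.Code) int {
	switch c {
	case codes.OK:
		return http.StatusOK
	case codes.InvalidArgument:
		return http.StatusBadRequest
	case codes.ResourceExhausted:
		return http.StatusTooManyRequests
	case codes.Unavailable:
		return http.StatusServiceUnavailable
	case codes.DeadlineExceeded:
		return http.StatusGatewayTimeout
	default:
		return http.StatusInternalServerError
	}
}
```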
L
So we have way more than these two cases. I still believe that, yes, we may keep the root error there if we want, but I think we need to have a way to translate this. Imagine, for example, there are components like batch or others that also need to look into this. I believe in coming up with a small subset of things that are interesting for everyone: is this a permanent error, is this a retryable error, and if it's a retryable error, pass some delay; start with a class of things like that.
L
We
can
keep
the
root
one
if
you
really
want
to
propagate
that
as
well,
but
I
think
we
should
have
this
intermediate
layer
to
help
everyone
else
work
with
these
errors
on
our
on
our
path.
So,
no
matter
what
we
do,
we
should
wrap
the
the
error
with
something
that
is
much
easier
for
other
components
to
to
to
consume
like
is
it
retryable
or
not?
If
it's
retrievable,
is
this
a
retry
after
or
retry
with
delay?
L
Is
this
a
permanent
error
and
mostly
like
I,
think
it's
like
four
or
five
classes
that
we
need
to
to
to
do
now?
If
you
really
want
to
to
keep
and
I
think
all
of
them
should
hold
a
reference
to
the
original
error
in
your,
as
you
explain,
but
I
think
I
want
the
code
to
work
with
most
of
the
code
to
work
with
these
five
classes
of
Errors
instead
of
having
to
deal
with.
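As a sketch of what those few classes could look like (purely hypothetical; none of these names exist in the Collector today):

```go
package consumererror

import "time"

// Class is a hypothetical coarse category that any component in the pipeline
// could act on without knowing which transport produced the error.
type Class int

const (
	ClassPermanent Class = iota // drop, do not retry
	ClassRetryable              // retry immediately or with backoff
	ClassThrottled              // retry after the given delay (e.g. HTTP 429)
)

// Classified wraps the original transport error with a class and an optional
// retry delay, while keeping the original error reachable via Unwrap.
type Classified struct {
	Class      Class
	RetryAfter time.Duration // only meaningful for ClassThrottled
	Err        error
}

func (c *Classified) Error() string { return c.Err.Error() }
func (c *Classified) Unwrap() error { return c.Err }
```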
I
That does make sense, but then I see two different things. One is the error propagation itself, which is my original goal, and the second one is this new class of errors that allows components to do the right thing based on the original error, right? So, for the error propagation, which is my current problem, there is a PR, and I would appreciate it if you could take a look and see if there is anything there that would prevent the next step,
that is, what you just proposed, from working. You know, because right now, on the OTLP side, the code in the OTLP HTTP exporter just creates new errors for the things it's getting back when anything fails, which I think is wrong. It should propagate the information back, at least the status code that it receives, so that anyone in the chain can make decisions based on that. So this is the first part.
D
Yeah, I agree with Juraci on this one; this PR makes it so much better. Nicole, correct me if I'm wrong, but we have the same issue, basically what Dmitry described in that filelog issue. Basically, when you're in Kubernetes you have a DaemonSet, you might want to collect logs from nodes using the filelog receiver and then send them out, maybe to some ingestion backend, like we do. So this is a typical use case.
I
I promise that I'm going to look at the generalization, I mean the second step that you mentioned.
L
But I think, to be honest, reading a bit through your PR (sorry for the interruption, I'm really sorry, I was just having a revelation), I think, yes, we may need a base error for every exporter that includes things like status, message, maybe the protocol type. We can call it exporter error or something like that, where we include all of these details that we back-propagate, and then that's the root error.
L
I would call it the base error that we propagate, and then on that one we can... I don't know if we need more than that, but it's probably something like that, correct? If we have an error that includes the protocol type, status, message and so on, then on top of that we can have options to say "is retryable" or whatever; we may not even need more wrappers.
I
So yeah, I called that, in the previous iteration of the PR, a pipeline error, because I assumed that errors would not be created or propagated only from exporters. Perhaps it is a processor that is doing, I don't know, some routing or some load balancing, and it lost contact with the DNS or the source of data, and it knows the data is not accurate anymore, so it wants to fail earlier rather than reaching the exporter.
I
So it's more like a pipeline error type of thing, not necessarily an exporter error. We could have the Collector's own error with that kind of metadata, but that doesn't remove the necessity of preserving the original error, which is what this PR is doing.
I
So, what I was going to say before is that I cannot promise to work on this second step, because that is not currently affecting me and I do have a few things on my plate, including the proposal from last week, the interceptors proposal. But if I am able to clear out my task queue, I can certainly give this one a shot.
B
So I have a draft PR open right now that adds basically this sort of wrapper error into the consumererror package, if we think that would be a good place for it. Please feel free to add some comments on that; otherwise we can put it elsewhere. You can take a crack at it, but I think that should solve the sort of issue that we're looking at pretty well.
L
I'll look into both of them, but it would be good for someone to look at this overall. I mean, I can do that, but I would prefer to have someone else look over the entire problem and come up with the right proposal. Or maybe you already did, and this is the result of that; I don't know if you took into consideration this new use case that was just described.
M
So I'm going to make it quick. We now have some tools to generate the status as part of metadata.yaml, which I think is a good innovation. So now components can move some of this information into metadata.yaml, specifically to manage status and maybe additional things. It's useful.
M
It helps with making the READMEs uniform as well: the little status table that you see at the top, a markdown table, is now being generated, which I think is going to help us get better stability maturation of our components. But now the temptation is to add more, and I think we're being, and I'll take the blame, a bit messy about it.
M
We don't really know, and I don't think we're going to come up with a fix or a great approach in the next five minutes, but I wanted to call it out: if you have things you'd like to see in metadata.yaml, that helps. I personally have a couple of requests from different people on my list; for example, the documentation folks would like to see the supported operating systems called out explicitly, we have warnings, which have been an ongoing topic, that we should add, and maybe there are more stability statements.
M
This
type
of
discussions
I
just
wanted
to.
There
are
smart
people
in
the
room.
There
are
smarter
than
me
if
you
have
opinions
about
what
metadata
yamos
contain
for
components:
okay,
Eric,
not
you,
okay,
fine,
we'll
we'll
get
to
that
we'll!
You
know
just
don't,
feel
shy.
It's
very
much
right
now,
a
little
bit
messy
and
I'd.
Rather,
we
take
a
pause
and
we
do
a
good
job.
Then
I
add
a
sturdy
different
iterations
of
how
we
can
go
about
it,
which
we'll
do
anyway,
all
right
just
stay.
Flying.
M
Yeah, we should do that incrementally anyway, because that is the reality of open source, right? We can't get it right the first time; otherwise we'd be building cathedrals, not bazaars. However, there are a number of... So what happens is that you come up with a great idea, and then you start moving all the components to use this generated metadata.
M
But then you start having conflicts, because if you change any of the generator templates, then all the other PRs that were opened beforehand get kind of out of scope, out of date, and there's a bit of an impedance mismatch. It's actually easier and faster to try to move all the components to status right now than to try to make status better.
G
Yeah, I'd say let's finalize a first good status generator, the list of things, get that PR merged in, and then do all the status generation. And while we're doing that status migration, as long as it doesn't take six months, if someone wants to add new fields to that list, let's just hold off on that; let's pause it while we're doing the first batch through, and then we can keep iterating after that. Hopefully the generator makes it go really fast.
M
Yep. I think we have 168 different components in contrib, and ideally I would love code owners to also own that status migration.
M
Not me, I don't scale, so feel free to take that up, play with it, and come back with feedback.
G
And one thing we can do is get the basics generated, but for new things, like, I know that the status generator is about to support warnings, we don't have to throw the warnings in yet. We can just converge components over to the status generator, and then, when the code owners do the review and say, "okay, I know what warnings I'm supposed to put here," they can go adjust that in their own component, because it'll already be there.
E
I'll add just one comment around what you mentioned, Antoine, about code owners owning the addition of the generated status table.
E
I think that's a great goal, but I think what this leaves us with, most of the time in the contrib repo, is half-implemented things, where not all the components follow whatever particular process. Even now there are still components that don't have a header table, for example, or they have different fields in that table, and I think we'd be better off just doing a pass, making sure it's generated for all the components, and then adding a check that flags any component that doesn't have it.
I
Yeah, if the code owners don't care about the header table or the metadata, isn't that a sign that the component's not really being maintained?
M
Oh, that's next, yeah, so good point. I mean, yes, you slowly get to a point where we can raise all the requirements for a component to be in contrib, right? So right now we're getting the triagers and all the code owners to, you know, they have to be members of OpenTelemetry now; we did not enforce that before, and we'll fix that.
M
Then
we'll
start
like
pushing
a
little
bit
harder
on
folks
to
be
more
present
on
on
this
type
of
issues
and
then,
ideally,
what
I'd
like
to
do
is
to
say
if
I
can't
find
in
metadata
yaml
your
stuff
is
supposed
to
be
better.
Then
you
need
to
pass
a
number
of
Trials
by
fire
where
CIS
need
to
check
every
aspect
of
your
component
and
make
sure
you
behave
in
certain
ways
and
so
yeah.
M
If,
if
your
component
is
currently
better
you're,
not
maintaining
your
metadata
you're,
not
doing
your
names
at
work
down
the
road,
it
might
be
degraded
down
to
Alpha
or
even
on
maintained,
because
we
can't
keep
up
you're,
not
keeping
up
with
the
times.
But
that's
going
to
take
a
while
first
meeting.
We
set
the
tone
and
then
we
go
from
there.
I
So,
for
that
specific
case,
I
think
we
also
needed,
like
an
official
official
Communications
channel
to
the
with
the
code
owners,
so
perhaps
a
specific,
select,
Channel
or
something
where
people
can
subscribe
to
and
not
be
bothered
with
the
noise
of
the
Ripple.
But
it's
still
do
whatever
they
need
to
do
for
their
components.
I
I don't know if Slack is the right channel for that, but I have the feeling that we need a more official way of communicating with the code owners.
G
I think that, with the automation that Evan added, automatically pinging code owners, and once we have the enforcement of everyone being a member and we're able to automatically assign things... me personally, I think that's good enough. I don't know if we're going to get a better channel of communication than assigning a PR or an issue to somebody; not everyone is going to want to use Slack, and I don't think we should have Slack as a requirement for OpenTelemetry membership.
G
I mean, that's something they should handle; just set it to be, you know, @-mentions only, right? I guess if someone chooses to handle the notifications poorly, that's on them.
G
Yeah, but I think, on the original topic: let's work on that PR that was linked and get that framework merged, and then we can go and apply it. Actually, I think that PR even does a blanket apply to all the components, doesn't it? I'd have to look at it again.
O
Yeah, so I'm next; I just wanted to ask for a review. As it turns out, the persistent queue has problems when it tries to write to a full disk: some funny stuff can happen, depending on how exactly it happens and with what kind of data. I have a funny PR which is about 20 lines of fix and then 200 lines of test code to prove that this can actually happen.
O
So
I
wanted
to
ask
for
review,
because
now,
at
the
time,
I
I
raised
an
issue,
and
this
is
described
in
in
detail
in
an
issue
accompanying
this,
but
now
I
know
that
this
actually
happens
more
frequently
in
the
real
world
than
than
I
thought.
So
it
is
so
it's
a
bit
more
urgent
and
then
I
originally
had
that.
O
There
is
another
PR
bugdon
which
logs
more
errors,
instead
of
just
just
doing
debug.
That
is
also
something
I.
I
want
to
re
redo
the
the
error
reporting
in
the
persistent
storage
in
general,
but
for
this
I
I
want
this
fixed
because
it's
like
it's
actually
an
error
that
I
understand
why
it
happened.
There's
a
there's
also
potentially
error
that
I
don't
understand
how
they
could
happen
and
more
logging
will
help
for
that.
But
I
have
a
separate
PR
for
to
help
with
that.
M
Sorry, so I'd like to throw my hat in the ring: I have multiple hats now, and becoming a triager of the core Collector project would just help me keep up with my involvement.
N
Okay, so I'm next. Martin and I have created the issue for a new component, a data exporter, so I'm here just to say hi and to learn how the process works, because I was trying to follow along with some previous videos, but it looks like I need to be here in person, so yeah.
M
Yep, I've accepted it and I'll sponsor it. I know someone from your company who knows you; you guys mean well, so I will work with you.
E
All right, the next one's mine; hopefully it's not too controversial. I propose we start using the OpenCensus bridge from the OpenTelemetry Go SDK.
E
This will allow us to start moving off of the Prometheus bridge that we're currently using, and it will allow me to continue the work to try to get configuration for exporting data from the Collector using OTLP, through configuration. So that's all I've got.
K
I guess I'm next; I just have some questions. I have started to contribute to the Collector recently, mostly around the Datadog, Loki and, hopefully, Nelson, the Prometheus exporters, and within the company I've been working for we've been using OTel heavily for the last couple of months, and there's one thing I'm currently not clear on.
K
So
you
have
the
different
metrics
produced
by
the
hotel
collector,
one
of
them
being.
The
auto
processor,
dropped
spans,
for
example,
and
the
other
hotel
exporters
and
failed
spans
and
I
was
wondering
if
this
is
a
first
of
all
good
form
to
ask
the
questionly
my
understanding
currently
of
how
they
work
is.
One
basically
gets
incremented
when
we
drop,
for
example,
spans
during
the
part
of
process
search
pipeline
and
the
the
other
one
when
we
drop
spans
during
the
exporters.
Part
of
pipeline
is
the
correct
assumption.
K
If
so,
then,
then
the
documentation
may
be
note
that
accurate
around
these
two.
G
That's the right mental model, yeah. Anything that removes data from the pipeline is expected to report that the spans were dropped, right? So tail sampling, for instance, would report dropped data instead of failed-to-send data. And "send failed", the second one that you mentioned, is indeed when you fail to send data out of the process.
K
Okay,
because
because
in
the
documentation,
it
specifically
says
like
sustainers
of
standards
of
Auto
exporters
and
failed
and
auto
exporters
and
field
metric
points
indicated,
collector
is
available
to
export
data.
As
expected,
it
doesn't
imply
data
loss
per
say
since
there
could
be
retries,
but
it
can
also
imply
job
data
if
there
are
no
retrace
happening
in
that
specific
export
right.
K
Okay,
so
yeah,
so
it
might
then
maybe
nice
too
change
the
wording
of
that
in
the
documentation,
because
there
are
now.
This
is
like
under
secondary
monitoring
and
it's
asleep
may
or
may
not
imply,
but
even
from
what
I
have
been
observing
so
far,
it
seemed
like
having
this
gets.
Incremented
very
Tropic
data.
K
D
Yeah
in
general,
these
kinds
of
metrics,
starting
with
auto
call
processor,
Auto
call
exporter
I,
believe
there
are
being
produced
by
specific
components
by
a
specific
exporter
by
a
specific
processor.
So
there
will
be
metadata
with
these
metrics
telling
you
the
the
exact
exporter
or
processor
name,
but
that.
D
You know, that might be the case as well, yeah.