From YouTube: 2023-01-25 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
I
Sure, yep. So, I mean, this will be brief. Basically, I've got a PR out for a triaging guide. We've got, I think, four triagers now, so I figured it'd be a good time to establish some kind of common guidelines, conventions, and recommendations.
E
Gotham is probably there, I think, so let me ping him and see if he's still interested, yeah.
E
He said privately that he does not, but I'll let him speak for himself.
I
I think it's definitely gotten a lot better. I haven't taken a super close look; I haven't tracked it in any meaningful way, so I don't know. I think we're looking a lot better than we were a couple of months ago, though, for sure.
F
I've done some; I'm trying to keep on top of these new ones. What I haven't done is look at the kind of long, long tail of issues (we have 815 open, yeah). So I think at some point someone needs to take half an hour and review the old ones, but so far it's all good! It's just this.
F
This PR is super helpful for me, because, yeah, I talked about that on Slack: coming into the role I didn't know how to apply the needs-triage label and when to remove it. We're having the discussion on the PR now, and I think this is super helpful for me to do a good job.
K
Yeah, I would say don't touch the really old stuff, because the automation will keep closing it, and I'd rather have that happen.
I
So actually, I've noticed it hasn't kicked in yet. I'll try to get a PR for that soon, but I've noticed that the way the stalebot works, it's not taking in old issues like it should, so there are a lot of issues open right now that probably should just be closed. Like they've been.
H
Maybe we can have, like, three notifications: a first stale notification, then another after the same amount of time, and then closing. Because if you have only two, stale and then closed, sometimes it may still be missed. But those notifications help people actually get the idea about their issues and bring them back. So that was very helpful.
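The escalation idea above (two stale warnings before closing) can be sketched as simple scheduling logic. This is an illustrative Python sketch only, not the actual stalebot, which is a GitHub automation configured in its own format; the thresholds and names here are hypothetical.

```python
from datetime import date

# Hypothetical thresholds: warn twice, each after the same interval
# of inactivity, then close on the third interval.
WARN_AFTER_DAYS = 60       # first stale notification
SECOND_WARN_DAYS = 120     # second notification, same interval later
CLOSE_AFTER_DAYS = 180     # finally close the issue

def stale_action(last_activity: date, today: date) -> str:
    """Return which stale-bot action applies to an issue right now."""
    idle = (today - last_activity).days
    if idle >= CLOSE_AFTER_DAYS:
        return "close"
    if idle >= SECOND_WARN_DAYS:
        return "second-warning"
    if idle >= WARN_AFTER_DAYS:
        return "first-warning"
    return "none"
```

The point of the extra step is that an issue is never closed without two prior pings, so authors get two chances to bring it back.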
E
I agree. Having comments bumping issues up is really helpful, both in closing issues that we know are fixed, and by making us think about all the issues in a new way, with current eyes.
F
Oh, we're going on to the last one, yeah. You want to talk about this? This is the proposed component.
E
It's the one from you, Evo, yeah.
M
Hey, first of all, just a few quick disclaimers: it's my first time being on this weekly collector meeting, and the first time trying to contribute a major component, so go easy on me. So basically, I'll quickly explain the problem that we faced and my proposed solution. We receive a bunch of spans and traces, and we do a bunch of stuff with the traces, some of them.
M
For example, we were creating a service dependency graph, and we connect the edges between multiple services: which protocols they use, and so on. And we noticed that most of the data we get there, from an attributes perspective (that means the services involved inside the trace, the protocols being used, the routes, parameters, and so on) is basically just duplicates of the same data.
M
There would be a specific processor for which a user can define a pre-configured set of attributes. Then, when a new trace arrives, it will tail-sample that specific trace: it checks the values of those specific tags and keys inside the spans across the entire trace, and if those values have already been seen, the trace will get dropped. This is going over it very quickly and at a high level, and I would like to hear some feedback from you.
H
I have a question. Can you please clarify what the result of the deduplication will be? Will it remove some spans, or will it only remove other attributes?
M
Yeah, so basically, how it works is that it groups the spans by the trace ID. Let's say we have a couple of spans coming into our system which belong to the same trace; they'll get aggregated under the same trace. Then, when a sampling decision is made, the processor goes over a pre-configured set of tags, whether a span tag or a resource tag, and checks the value of those specific tags: for example, the HTTP route.
M
It might be the messaging protocol being used. It will check the value of that key for each specific span, and then it will check whether it has already seen a trace that contains the same hierarchy between spans. That means the trace is composed of the same number of spans and the same hierarchical order between those spans, and it will also check the tags of those specific spans.
M
For example, whether service A had been talking to service B over the same protocol, and whether we had already seen that. Basically, we can check whether a span is the same by checking the tags and attributes inside that specific span or the span's resource. And so this is what I have implemented.
M
Each time we receive a new trace, if we have not seen the values of this trace previously, we will sample it. But if we receive the same trace again, where we had already seen the same values for those specific tags, in a trace composed of the same hierarchical order of spans, then we'll drop the entire trace.
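As a rough illustration of the deduplication described above (not the actual proposal, which would be implemented in Go inside the collector's tail sampling processor), the equivalence key could combine the span hierarchy with the values of a configured set of attributes; all class and attribute names here are hypothetical:

```python
from typing import NamedTuple, Optional

class Span(NamedTuple):
    name: str
    parent: Optional[str]        # parent span name, None for the root
    attributes: dict

def trace_key(spans, keyed_attrs):
    """Build a hashable key: same hierarchy plus same configured
    attribute values means the same key, i.e. an equivalent trace."""
    parts = []
    for s in sorted(spans, key=lambda s: (s.parent or "", s.name)):
        values = tuple(s.attributes.get(k) for k in keyed_attrs)
        parts.append((s.name, s.parent, values))
    return tuple(parts)

class DedupSampler:
    """Sample a trace the first time its shape is seen; drop repeats."""
    def __init__(self, keyed_attrs):
        self.keyed_attrs = keyed_attrs
        self.seen = set()

    def sample(self, spans) -> bool:
        key = trace_key(spans, self.keyed_attrs)
        if key in self.seen:
            return False         # equivalent trace already sampled: drop
        self.seen.add(key)
        return True              # new shape: keep the whole trace
```

With `keyed_attrs=["http.route"]`, two gateway-to-backend traces over the same route produce the same key and the second is dropped, while a trace over a different route, or with an extra error span, produces a different key and is kept.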
E
Just one clarification: when we're talking about the same trace, it's not really the same trace, it's an equivalent trace, right? We're not talking about seeing the same trace twice, but about seeing similar traces with a similar structure more than once in a given time interval.
M
I think I can clarify that a little bit more. Let's say we have an API gateway in our cluster, and the API gateway talks with some backend service via HTTP. Just for the sake of simplicity, this is the only trace we'll receive: A to B, using HTTP, to some specific route. If we have one thousand, or five thousand, or thirty thousand transactions per second, this trace basically represents the same logical transaction inside the backend.
M
If we receive three thousand requests, these would not literally be the same trace, but the logical flow inside our backend is basically the same over and over again for 30,000 requests. And if some exception is raised, obviously, then the trace would not be considered the same, because the logical flow inside that trace would be different; it would be considered a different trace.
D
Okay, a few initial thoughts, although I haven't spent more time than just this meeting looking at the proposal. One, I guess I'm curious: is the case you're describing for yourself as a consumer of customers' telemetry data, where you're trying to compute something for them generally, or are you working as an end user using the collector and looking for the best way to accomplish this sampling task?
D
If it's the latter, you may find more options. We have similar issues at my employer, and we found that sampling at the SDK level is a little easier to implement and can be done sort of ad hoc, rather than upstreaming a component and maintaining it for life and all this stuff. We happen to not have a ton of languages in use internally, so that makes it easier to do, but yeah.
M
This might be the case where we need to modify the tail sampling processor itself, rather than just adding a new policy; I don't know if that is an option. Or maybe we implement some of the functionality as a policy, leaving some of the functionality out of the proposal, without modifying the entire processor. Just creating a policy is also an option.
E
I think a policy would work, a policy within the tail sampling processor; I cannot see why it wouldn't. But a couple of things came up that I just want to summarize from the discussions on the issue itself.
The first one is a comment from Austin, where he said the sampling rate that is recorded as part of the spans has to be adjusted to the new sampling reality, the new sampling rate. He's not here, but...
E
What I think he meant is that within the traces, some instrumentation libraries will add what the sampling rate was for that trace, so that backends can then use that value to extrapolate the numbers and see: oh, this span here was only sampled once in a minute, but it's part of a one-in-a-hundred rate.
E
So if you're doing it as a new policy of the tail sampling processor, you don't have to worry about that. And the second thing is the naming seems confusing: you're not de-duplicating traces, you're simply sampling out things that look similar, right? So you might want to think about a different name for it.
M
Yeah, thank you. I will apply the changes that you all commented and rework the stuff that I've done so far. Also, I've got a question: what would be the next step to get this upstream? If I open up a pull request as a policy for the tail sampling processor, does someone need to code review it? Would I need someone to check it, and how does this process work?
E
So, typically, when you open a PR against a specific component, the code owners for that specific component, the tail sampling processor in this case, will get pinged, and it just so happens that it's me. So if you open a PR on the tail sampling processor, feel free to ping me on it as well, and then I can review it.
E
You don't have to; I mean, it's probably going to be in my queue anyway, but if you want, you can ping me as well.
N
Thank you. On that point, and I know we do this for new components too, can we also look at what the configuration would look like first, as a new policy for the tail sampling processor? Because it's basically even more configuration on its own as a new policy, and there's a lot of kind of...
M
Yeah, sure. Should I do it on the proposal itself, update the proposal itself, or just keep the pull request open and describe the configuration on the pull request?
F
I'd like to talk about one that's been open for a little while. This is called the timestamp processor, and clearly this is not a component that should ever exist, but it does, for reasons that are out of anybody's grasp.
F
At this point, someone might have set the wrong time on their machine, and they want to dictate to the collector that all the data points being exported and collected from that particular environment should be changed, so that the time actually matches the real time. So once you know exactly how much of a skew there is on the clock, we can apply an offset, and the offset may be positive or negative, based off that.
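What the processor does can be sketched as applying one signed offset to every timestamp in a batch. This is a minimal Python illustration, not the actual component (which is a Go collector processor operating on pdata); the record shape here is a hypothetical stand-in:

```python
# Apply a fixed, signed clock-skew offset to every timestamp in a
# batch of records. A negative offset moves timestamps backwards.
def correct_timestamps(records, offset_seconds):
    """Return new records with every 'timestamp' shifted by offset_seconds."""
    return [
        {**r, "timestamp": r["timestamp"] + offset_seconds}
        for r in records
    ]
```

For example, an environment whose clock is known to be two hours slow would configure a single `offset_seconds=7200`, and every point flowing through gets corrected by the same amount.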
F
So we had to build this. We found it useful for at least one customer of ours who, for some reason, decided to use this particular approach in their production environment, and we're kind of in a lurch here, where this is a processor which is useful. We'd like to make sure that, as much as possible, we adopt an upstream-first attitude to our work; we're not actually interested in just maintaining it by ourselves, and we'd like to see a perennial solution going forward.
F
Now, when I brought this up initially, I brought it up in the form of a problem to the community, saying: I'm having this issue, and I'd like to see how we can resolve it. And the big thing that we could apply here is OTTL, the transformation language which is being incubated inside the collector, to change the timestamps on the records. Now, the OTTL support for time is still ongoing.
F
It's not quite there, as far as I understand, and please keep me honest. At the same time, what I'm realizing as I go through the code of this processor again is that the processor doesn't change the timestamp of one element of the record; it has to go and change the timestamp of everything that is being exported. So, for example, for a histogram...
F
It has to go quite deep inside the structure of the data point to change the timestamps on all the elements of the histogram. So, armed with that knowledge, I came back to the discussion and said: well, I believe that while OTTL is an optimal solution for some use cases, where we know exactly which changes we need to make and there's a limited number of them, if we're just going to go widespread and change all the timestamps of everything that flows through us, then maybe we need a dumber solution that can be done with just one setting. And we kind of left it at that.
F
I wanted to hear from you folks whether there's, you know, frustration around this; whether OTTL, if the solution is going to be flexible enough, is going to do the work; or whether we can adopt this for the time being (it could be deprecated later, when OTTL is ready), because I believe this component may be useful at this point.

O
So, independent of that, there are two pieces here that you need to do. One is identifying which telemetry to apply the transformation to, and the second is the transformation itself, which is the removing or adjusting of the timestamp, correct? So now, OTTL also has two parts: it has the conditions and it has the functions.
O
So you still need to use OTTL, because we moved in that direction everywhere: even in our filters, we moved towards using OTTL conditions for identifying or filtering which telemetry passing through the pipeline you want to transform. You'd use OTTL, but as a condition.
F
A condition, yes, yeah, okay. I mean, that could be added to this processor as a condition; I didn't think of that. You know, for all intents and purposes, this is very much a catch-all, where we're saying that all the data going through this pipeline has incorrect timestamps and needs to be corrected as a whole. And again...
O
You
can
start
with
that,
but
I
I
will
I
I'm
pretty
confident
that
at
one
point
people
will
come
and
ask
for
for
filters.
F
The problem is, you know, that's a completely valid enhancement; I agree, it's a great idea. The thing is, right now I'm not even sure whether this processor is useful for the community as a whole or not. So what I just want to understand is: does anybody think this is a good thing to bring into contrib, or am I missing something obvious here?
D
I can comment, which is that we do have a custom processor that resets the timestamps on web traffic, because browsers have unreliable timestamps, and it's super useful; it would be easier not to have to maintain a custom processor. So yeah, I think it would be useful. There's a bunch of stuff I don't know, but that's where it's most obvious: in the browser, some of the timestamps are just, you know, 17 days behind or something, and so it kind of messes up your P99 stats or whatever. Anyway, so yeah, anecdotally it would be useful. I think it's pretty simple: you just invoke, like, data points on every metric, and you iterate over the data points. So I think the implementation could fit into OTTL eventually, and I could see why people would want to configure it; right now we're just pretty naive about it.
E
So timestamp issues are really common. Me and Alex were on a call just last week, and timestamps, or rather clocks, were actually the topic of one of the questions end users had. I just don't know whether the problem that you have here is really that common, because what it seems to require is that you know beforehand, by the time you're configuring the collector, that your server X or your service Y is two hours behind.
E
It's not that easy, but, you know, it seems like a very, very, very edge case here; perhaps it's not the best fit to have a component, a processor, in opentelemetry-collector-contrib for that. I don't know; I mean, if people are saying that they all have the same problem, then sure, but I think it's an edge case.
G
Yeah, generalizing this a little bit, I can see this becoming quite useful for a couple of different use cases. In particular, I'm thinking about users who are tinkering with the collector, kind of getting started with it. Or, you know, I'm sure many vendors have cases where we'd like to demo things to users, and we have exporters that will capture data; we have backends, of course, but even simple things like files, right? We can...
G
We can write data to files. Now, when you pick that data back up and re-ingest it, the timestamps are out of date, and what you'd really like to do is sort of bring them up to real time, to simulate that data flowing in at real time.
So, to generalize this, I would say this could be useful to me if, instead of saying I want a fixed adjustment, I could somehow auto-detect it.
G
Maybe for the first data point that comes in, look at the timestamp, do a diff from now, and that's the offset that I apply. And then maybe even meter these points out: you know, if the next point is a minute later, I wait a minute and then emit it.
That's the kind of thing where I would see this being very useful to me, but it's a little bit different from what you have.
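G's auto-detect idea (diff the first point's timestamp against "now", then apply that same offset to everything that follows) could look roughly like this sketch; the class and parameter names are hypothetical, not from any actual component:

```python
import time

class AutoOffsetCorrector:
    """Learn the clock offset from the first timestamp seen, then
    apply the same offset to all subsequent timestamps, preserving
    their relative spacing."""
    def __init__(self, now_fn=time.time):
        self.now_fn = now_fn
        self.offset = None

    def correct(self, ts):
        if self.offset is None:
            # First data point: diff against "now" to learn the skew.
            self.offset = self.now_fn() - ts
        return ts + self.offset
```

The first corrected point lands at the current time, and later points keep their original spacing, which is the "simulate data flowing in at real time" behavior described above; the metering (delaying emission) would be a separate step.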
K
So, the concept of messing with timestamps to some degree: we're totally down for that. It's not the exact use case that we have, but if we had a generic timestamp processor that could do different things with timestamps, Honeycomb would be on board with that.
K
I do still feel that whatever the timestamp processor is doing, whether it's adjusting for time zone, or truncating duration (which is what ours does), or setting a field to a new value based on what Dan was saying, like some static "now" value, it's happening on all the data points, or a subset of the data points on a condition, in a very consistent manner, where we're not having to do a lot of logic.
K
I do still feel like that fits well into the transform processor, because the transform processor is taking a payload of data, looping over all the data, and performing the transformation. So I still feel like the transform processor is really good at doing that kind of transformation. I'm not totally convinced yet that we need a brand-new processor; maybe just a function, because the transform processor is just a bunch of loops: it goes through the data and says, apply these functions.
C
All right, I think I was the last person with my hand up on that question. My only comment there is, well, two things. One: if we want to keep this in there as-is, I am okay with giving customers and users of our product the tool to do whatever they need to do. I mean, I'm sure they know their system better than I do, but from a messaging standpoint...
C
We should probably make it really clear that if you're doing something like this, we would expect that in 95-plus percent of cases you're doing something wrong. So just be very clear about it: hey, probably just fix your clock if you can. Okay, so that's number one: just a big warning or something on the README.
It might be a nice middle ground. Two: I've actually seen cases before where I wanted to add some more timestamps; you can have, like, telemetry on your telemetry.
C
This is a concept you see elsewhere. So what might be useful for me in a situation like that, if I were to use this processor in some extended form (and this is kind of the question I'm asking the group, because I'm a little bit fuzzy on what's possible here): could we actually transform the original timestamps to...
C
Does the functionality exist where we could transform the original timestamps into an attribute, and then the actual MTS that we do all our aggregation on becomes the newly injected processing-time timestamp? Anyway, that's an open question, and I think it might be branching a little bit off topic from this, but Dimitri, over to you.
H
I forgot to unmute myself. So yeah, I agree that the transform processor is a better fit for that. And if the amount of configuration is the only concern (for example, we need to specify which particular field of which particular data point type we need to apply it to, and it becomes a bigger configuration), I believe we have...
H
We
have
similar
issues
in
different
places
that
we
have
some
common
functionality
to
be
applied
and
the
config
of
that
one
particular
receiver,
processor
or
whatever
and
I
believe
we
we
can
solve
it
generically
and
apply
some
some
kind
of
presets
using,
for
example,
config
sources
and
config
sources
will
will
be
looking
at
some
particular
predefined
configuration
that
are
already
written
in
the
building,
The
Collector.
H
So, for example, we're saying: transform processor, inject the configuration for, like, changing all timestamps on all the data points everywhere in that particular set, or something like that. I don't believe we have config sources functionality right here for that, but this is what I would think we need to have going forward.
F
I do have one thing. Can I get a pulse really quickly from Tyler? I do believe timestamps are not quite done yet in the standard OTTL support, right, as I see it?
K
Depends on what you mean. OTTL allows you to get and set a timestamp on any telemetry, but it only allows you to do it via the nanosecond value, the way that it spits it out. So, if you know the exact digits of the timestamp you need, you could set it. That's the extent to which it lets you interact with it. So that's really rough, and, I mean, the technical answer is yeah.
K
You can change your timestamps, but the real answer is that it's pretty hard to change your timestamps. It's easy to read them; it's hard to write them.
O
No, but I think this should be a function, correct? It should be a function, offset timestamp or something like that.
O
One that works on probably all of our telemetry, like spans.
K
You could. I have found that when grabbing the timestamp for other things, especially if you're doing anything in conditions, the large nanosecond value is pretty rough to interact with.
F
Yeah, I think the work that we need to do at this point is to build the same functionality using the transform processor, see where we hit the wall, and come back to the SIG meeting to talk more about what's needed. To be fair, this is not super urgent, right? I'm just trying to make sure that we clean up, so that we don't have a bunch of dangling components that we end up maintaining forever for no good reason. So, just trying to find a way forward. Yes.
F
So it seems to be a broad issue, and there are going to be a lot of little weird use cases around this, I completely understand. Mine is really very much an edge case, right? The use cases that were brought up today in this discussion were, in my opinion, more common and made more sense than what I have to do, so it's good to hear that there are other types of needs. Anyway, thank you.
P
Hello, everyone. So, we have some config map providers that might require configuration. Usually some, let's say, vanilla config map providers, like the file-based one or the HTTP-based one, don't require configuration, but we have a PR in place to add support for HTTPS-based providers, and also, in contrib, we have an S3-based one. Those config map providers might require extra configuration, and right now there is no mechanism in place that allows you to pass configuration to config map providers.
P
So I created this proposal, and there is another one, that allows you to pass configuration to config map providers. I would like some more opinions and next steps. Let's say: should I create a draft PR to show how this could be implemented? What are your opinions on that?
L
Ogden, I think you said this was already supported, but I do not understand what you mean. Could you expand on that? Yeah.
O
So
we
we
use
URI
URI
syntax,
so
that's
what
we
we
pretend
to
have
there
and
as
a
URI
it
has
the
concept
of
fragment.
So
we
can
use
that
to
to
to
configure
for
configuration
purposes
and
I
know
that
that
is
that
has
a
limited
configuration
support,
but
I
think
that's
probably
the
best
for
for
us.
I
I
want
to
hear
other
opinions,
but
I
I
think
that's
that's
what
I,
what
I
was
suggesting
there.
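The fragment idea described above can be sketched with the standard library: provider options ride in the URI fragment and are stripped off before the resource is fetched, so they never reach the server. This is an illustrative sketch only; the option names (`timeout`, `insecure`) are hypothetical, and the actual collector providers are written in Go:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs

def split_provider_uri(uri):
    """Split a config-provider URI into the fetchable resource and the
    provider options encoded in the fragment (e.g. '#timeout=5s')."""
    parts = urlsplit(uri)
    # Re-assemble the URI without the fragment: this is what the
    # provider actually requests.
    resource = urlunsplit((parts.scheme, parts.netloc, parts.path,
                           parts.query, ""))
    options = {k: v[0] for k, v in parse_qs(parts.fragment).items()}
    return resource, options
```

This also shows the trade-off raised next in the discussion: because the options live on each URI, two URIs to the same provider can carry different (or duplicated) option sets.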
L
Okay. I guess, to me, something that seems a bit weird is that there are things that you configure for the whole provider, but you have to specify them on each URI.
O
Unusual, yeah, but otherwise we need to have the following. What I'm trying to say is: if you have a configuration for the provider, then you need to know whether you support multiple providers of the same type (so multiple, let's say, HTTPS providers), and then you have to have a name for each provider, and we complicate things very much.
O
Yeah, and I think these are not used enough to be worried about duplicate configuration, in my opinion. I mean, how many things will you embed? Maybe four or five sources, not more than that. I don't see you having more than five or six sources for an entire config.
O
Yeah, think about it, Pablo: how many sources will you put? If you are using these in the config flag, you're probably not going to have more than three or four. If you are using them inside the files, as embedded things, again, I don't expect you to have thousands of them; you're going to have a very limited number, and they are not on critical paths. They execute when you load the file, so even performance is not that important. So I would not be that worried about this.
L
Okay,
I
just
hope
we
are
not
like
the
requirements,
do
not
change
over
time
and
like
we
to
close
the
door
now
to
that
and
become
some
problem
later,
but
yeah.
O
I mean, to be honest, even if we do this, one option for us would be the following. Let's assume we hit a case where we have a thousand to resolve and we want to group them. I think a possibility for us would be to implement the providers in a way that caches things. Let's assume you have to open a client, an HTTP client, for every one of these fragments; you can implement it in a way that you cache the clients per fragment configuration.
O
And as Anton pointed out, the URI RFC decided that fragments are scheme- or media-type-specific, so every provider will parse its own fragments and do whatever it wants with them.
A
So, for things like HTTP, where we are actually using a URI as a configuration, the fragment is a good place to put it, because it doesn't get sent to the client; it can usually be pulled off and parsed. But for other things, we might have different formats for both specifying the resource to access and the properties that go on it.
So I don't know that, right now, we need to be thinking about how to optimize for developer efficiency here. This is going to be something we do a few times, probably, and it's fine to write that parsing each time. Yep.
F
Okay, so I have one last item. There's a new component called the Prometheus remote write receiver, which I think is making its way through design, and we're interested; we'd like to see how we can help, but also we'd like to know a little bit more about where it's at. There's a design doc that is mapped to the issue; it seems to be pointing out a couple of options on how to go about the implementation of this work.
F
I'm not sure if Roseland is on this call; I think they're the author. What's the path forward on this one? What does it look like? How can we help?
Q
We discussed this at the Prometheus working group a week or two ago. There are some issues with Prometheus remote write that make it really difficult to use as a receiver format, and the conclusion we came to during the meeting was that it would be best for us to wait for a version of Prometheus remote write that addresses some of those issues, so that we can actually support it.
Q
Just to summarize what I remember from the discussion: histograms can end up split across requests, which is a big deal for trying to send histograms through it basically at all. As well as, I'm trying to recall, there's some issue with sending metadata: either it doesn't include metadata, or it's sent separately.
F
So the work now depends on the Prometheus project, is that correct? Does that mean the Prometheus project itself is taking on this work? Do you know what the scope of the work looks like? It seems like a pretty big scope.
Q
Yeah, I recall there being some design docs regarding improvements to Prometheus remote write; I'm not up to date on the current status of those.
B
I actually reached out to the remote write maintainers and asked about this: transactional remote write, enhancements, and stuff like that. They said that they're not really actively working in this area now, so probably there's also no work outlined for, you know, remote write version two right now.
O
What is the source? Who is emitting Prometheus remote write in your case?
F
I'm kind of the bearer of this one; I'm not sure I have all the context. But my understanding is that we have scenarios where remote writes currently happen with a product that we have been actively deprecating, called the SignalFx Gateway, and we'd like to offer a good off-ramp from it that is fully compatible. The SignalFx Gateway actually has code, open source under the Apache license, that allows you to run a remote write receiver, and it seems to work, but indeed I can tell it's a one-off: it's only meant for one instance, and there's no ability for you to scale it out, right? It's very much just a write to that part of your gateway. So we're looking to make sure that we have a good solution going forward that we can recommend to our customers, one that is well supported and maintained by the OpenTelemetry Collector, and it's in that spirit that I'm coming to this.
D
I vaguely remember some issue a while back where exporting some things, like exemplars maybe, was only supported through Prometheus remote write, but I think that's just no longer the case. That might be one valid reason someone is relying on it, and it's important to some folks using some portions of it, but I am not up to date, so don't quote me on that. If you're doing further investigation, though, that might be worth looking into.
F
Well, I mean, looking at this from the point of view of just playing with the jigsaw puzzle: I'd love for us to have a good solution moving forward that is well maintained and supported by the OpenTelemetry Collector community.
But what you're telling me is also that there will be a breaking change for folks: they will need to forcibly upgrade the way they remote-write to our stuff, by upgrading their version of Prometheus to the new way of doing things so that it's transactional. And I'm not sure that is possible in most environments, at least for a little while. So one thing I might try here is to go back to the open source code that we have from SignalFx, to buy time, and see if we can turn that into a receiver; that's an option.
F
I don't want to have more stuff to maintain, right? So I'm just looking at this from the point of view of how we can move forward together and do a good job. But the thing that is not stressed in the design doc, for me, is that it forces everybody to upgrade to a new version of Prometheus, which is not ideal. The doc has in mind two solutions. One is best effort, where we deal with the complexity, and there's no way for you to scale across multiple collectors when you send histograms; unfortunately, that might be what we have to do for a while, because our existing customers are not going to upgrade Prometheus easily.
F
Okay, great. Got some good vision into this. Thank you so much, everybody.