From YouTube: 2023-03-08 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
F
If there is nothing, I would have a question — maybe I'll just go for it. So there is the Prometheus receiver and we also have a Prometheus exporter, and does someone have some benchmarks on how efficient it is to translate back and forth, or how does that work? Has someone compared it to the Prometheus agent, for example?
E
I mean, definitely — or most likely — it's not going to be as performant if you do it that way. But if you transform it from the Prometheus receiver to Prometheus remote write, we're probably going to be almost the same, but...
F
So when you have — let's say you have a workload there, you have your collector next to it, and it starts scraping these targets. You would then transport it away using OTLP, and then somewhere on the remote cluster — on your observability cluster — you send it with remote write to your database. Or would you directly send it from there, because that has some benefits, or no?
F
And the question was: are there some performance issues when I transform it to OTLP? Because then I have two collectors: one scrapes the Prometheus endpoint, transforms it into OTLP, and sends it to another one, and that one sends it to a database using remote write — compared to having the Prometheus agent scraping something and then also using remote write to send it somewhere.
E
I haven't done that comparison, to be honest. I would expect we are on par, or around the same, but I may be completely wrong.
F
Okay — someone else, maybe? Yeah.
G
I haven't done comparisons at scale, but simple comparisons of small-scale setups have shown that a collector setup like you've described — with one scraping Prometheus and emitting OTLP, and another receiving OTLP and emitting Prometheus remote write — uses less CPU and memory: about a third of the CPU and maybe a quarter of the memory of the Prometheus server setup in that same situation.
F
Mm-hmm. Can you share — could you share something from your benchmark there, or not at this time? Okay, thanks.
E
But it would be good to have these numbers. If you do a comparison, just show us, and we will be happy to work with you on it and understand it better.
E
That being said, I heard there are rumors that Thanos is working to accept OTLP, and it will definitely become better at that point. But it's a rumor — I'm not going to say it's going to happen.
G
It's probably a safe assumption that, if they do, they would do it in a way similar to how Jaeger has done it — using the translation packages from the Collector — so we're likely to end up with very similar performance, because we'll be using the same code at that point.
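For reference — a minimal sketch, not from the meeting — this is roughly what reusing the Collector's translation packages looks like in Go. The package path and the FromMetrics signature are assumptions based on the opentelemetry-collector-contrib pkg/translator layout and may differ between collector versions.

```go
package main

import (
	"fmt"

	// Assumed import path: the translator used by the prometheusremotewrite exporter.
	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/prometheusremotewrite"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	// Build a tiny OTLP metrics payload: one gauge with one data point.
	md := pmetric.NewMetrics()
	metric := md.ResourceMetrics().AppendEmpty().ScopeMetrics().AppendEmpty().Metrics().AppendEmpty()
	metric.SetName("demo_gauge")
	metric.SetEmptyGauge().DataPoints().AppendEmpty().SetDoubleValue(42)

	// Translate OTLP metrics into Prometheus remote-write time series using the
	// shared translator package (signature is an assumption, see note above).
	tsMap, err := prometheusremotewrite.FromMetrics(md, prometheusremotewrite.Settings{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("translated into %d time series\n", len(tsMap))
}
```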
E
Does it make sense? Yeah — but I doubt... I don't know. Somebody has to make the test there.
E
Is David around, from Google?
E
I see — because he also did some benchmarks, so he may have something. So, another source for you, Benedict: maybe ping the person who is from Google as well on Slack, and he may have something from David.
C
Yeah — you're suggesting that we should first define what supporting a platform means, probably with different tiers of support. That is a good idea. But the main thing that I want to get at here is whether we should block the AIX work on this or not, because, yeah, I understand that Antoine may want to pursue this, and this —
E
So that's a very good point about the level of support. I think I would probably not run tests on AIX, because I don't think GitHub or anyone offers that, and it would be a hard job for us to configure things. But in terms of: if somebody else does the work and proposes PRs to solve this issue, should we accept them? I think it's reasonable to consider them.
G
If we're not testing, should we be building and releasing artifacts? Or should we merely make it easy for users who want to compile for platforms like AIX to do so on their own?
H
Yeah, I guess the question that I would have is: if someone comes up with an issue on AIX — say, you know, we've released the artifact — is our policy that we just won't do anything about this bug, because we have no way of testing it? Or what does that look like? You know, I've always struggled with even Windows issues because of this.
I
Yeah, so these are good points, and I think they're kind of listing the issue, maybe in a way that may seem a bit simplistic. But indeed, one of the requirements that we can set is that if you want it to operate on any OS, we should have a way to run on that OS — we should have some way of provisioning test runners on that OS. I think I can find a place to run those; I know GitHub Actions runners do not run on AIX, or even ppc64.
I
There may be some workaround where we could SSH into a box, run things there, do stuff like that. You know, frankly, you tell me how high we need to jump to see if that's possible, and we'll see if it's possible or not. That's that simple for me, and I welcome this discussion. I think the reason I'm bringing this up now is I want to see what's possible — what's in the realm of possibility.
I
What's the appetite of the community for this as well? I don't think this is going to be something we need to resolve right away. I just wanted to make sure that everybody knows this is something that is of interest, and that there is some interest towards, you know, having compatibility for that. Yeah.
E
Should we put effort into supporting this platform? I don't think we have enough bandwidth — and credits in our life — to support all of these things. So I think the most reasonable thing is what Pablo suggested, which is that we have two levels of support: let's say first-class support for the Linux and Windows platforms, and then second-level support, which is: somebody else — like you may want to, Antoine —
E
— you may do your setup, you may run the tests somewhere else, and you come with PRs, and we commit to reviewing them and working with you to make sure this gets in — to have all the proper fixes so that your platform works — but without us fixing them and without us running the tests. I think this second level is reasonable to define, and to start relying on the community to help us have these things.
J
A question, just thinking about the sort of minimal things we should do for these platforms even if we have a low tier of support: I'm assuming we would cross-compile to that platform, right? That might catch some things.
J
Is there anything else that we'd want to be doing for these low tiers of support?
E
Yeah, and we can just list the people that are interested in this platform, and you can work with them to set up some things, and we will not stop you from doing that. We may even, if we want, create a project in CNCF — like a new repo — where we set up tests for the collector on AIX or whatever it is, and just have it there, so you periodically pull in all the dependencies.
E
You run the tests, you have a way to file bugs against that platform, and everything. We can even do that if one of these platforms gets a lot of traction and it's hard to maintain it in the core, or —
C
On Anthony's point — should we even have the code if we don't know whether it builds? I linked the Rust compiler's tier policy: they do have a tier three where code exists in the code base but may not build. If they do it, I'm not against doing it.
C
If it's useful for people, then why not — so long as it's reasonably isolated from the rest of our code base. It's just one file with a build tag for AIX or whatever.
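To illustrate that kind of isolation — a minimal sketch, not from the meeting, with a hypothetical package and helper name — platform-specific code can live in its own file behind a Go build constraint, so it only compiles for that OS; Go can also cross-compile for it with GOOS=aix GOARCH=ppc64.

```go
//go:build aix

// Package hostinfo is a hypothetical example package; real platform-specific
// files in the collector follow the same pattern.
package hostinfo

// bootTime is a hypothetical AIX-only helper. Other platforms would provide
// their own implementation in sibling files with their own build constraints,
// so the rest of the code base never needs AIX-specific branches.
func bootTime() (uint64, error) {
	// AIX-specific implementation would go here.
	return 0, nil
}
```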
I
That works. But, you know, for what it's worth, the thing that keeps me up at night when I think about this use case is not the collector — it's gopsutil. So I think that library probably needs some love first, before we even look at the collector, to make sure it works on AIX, because it's much more opinionated about the way it goes and scrapes metrics. So I actually think the collector is going to be just fine.
I
It's really just the libraries we depend on that are going to be more interesting — to see if we can even support this environment. If anyone's interested, in probably six months we might just do a spot check, see how things are going, and report back. It's not urgent.
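For context — a minimal sketch, not from the meeting — host metrics collection leans on per-OS implementations inside gopsutil, via calls like the ones below, which is why that library's AIX support matters more than collector-side changes; the exact packages and versions the collector uses may differ.

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/v3/cpu"
	"github.com/shirou/gopsutil/v3/mem"
)

func main() {
	// Each of these calls dispatches to an OS-specific implementation inside
	// gopsutil; if there is no AIX implementation, host metrics cannot work
	// there no matter what the collector itself does.
	vm, err := mem.VirtualMemory()
	if err != nil {
		panic(err)
	}
	times, err := cpu.Times(false) // false = aggregate across all CPUs
	if err != nil {
		panic(err)
	}
	fmt.Printf("memory used: %.1f%%, cpu time entries: %d\n", vm.UsedPercent, len(times))
}
```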
K
So — maybe I missed it — if we don't have anything else to talk about: I've been seeing a couple of conversations going around the idea of persistent translations. I'm not sure if everyone's aware of this or not, but we have the semantic conventions, we have the schema in OTel, and we have a file which shows you how to do basic, simple renames and whatnot — just, you know, how to evolve our OTel schema in the semantic conventions group.
K
We have been talking a bunch about, for one, standardizing these things, which may incur more changes to the schema, and, two, we're starting to realize that a lot of our external customers — or other people who wish to standardize on OpenTelemetry, such as the Elastic Common Schema; and, you know, Microsoft has internal ideas for what metrics should look like; and I know us at Splunk —
K
— we had ideas of what things should be named and whatnot in our old SignalFx repos. So what I'm trying to bring to attention here is that it seems like there's a bunch of companies, external to the OTel project and also within the OTel project, that have a need for this translation. And so, while I like what's going on with the schema right now, I don't fully grok whether it's possible to do more long-term, mutually intelligible translations
K
with these different versions of schemas. Also, I don't know how to formalize taking in an external source of schema and being able to translate. Is there potential for us to reuse our logic here for translations, so that external customers of OpenTelemetry can start standardizing on OpenTelemetry's semantic conventions?
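For readers who haven't seen it — a minimal sketch, not from the meeting — the schema file being referred to is a YAML document whose transformations are essentially attribute renames; the fragment below is embedded as a Go string only to keep all examples in one language, and the URL and attribute names in it are hypothetical.

```go
package main

import "fmt"

// exampleSchema is a hypothetical OpenTelemetry telemetry schema file
// fragment: one version entry whose only change is renaming a span attribute.
// Renames like this are the "basic, simple" transformations mentioned above.
const exampleSchema = `
file_format: "1.1.0"
schema_url: https://example.com/schemas/1.1.0
versions:
  1.1.0:
    spans:
      changes:
        - rename_attributes:
            attribute_map:
              some.old.attribute: some.new.attribute
  1.0.0:
`

func main() {
	fmt.Println(exampleSchema)
}
```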
B
I can talk a little bit about using the schema as an end user. We had poked around with it and tried to use it to — yeah, basically, there are a couple of cases where application owners are sending, you know, instead of http.target, some other thing. We wanted to standardize on stuff, and we found it was lacking: the translations — some of the flexibility there — were lacking. I think you can only rename, right?
B
You can't quite — sorry, I'm just going off the top of my head — it just didn't have enough functionality, if I recall correctly. And then, at least with ensuring that, you know, we could agree on one standard in the UI for what a URL attribute or whatever label should be. And then the other thing was —
B
I think there's a behavior with the schemas in the SDKs where — let's say a resource detector or something isn't using a schema, or an instrumentation is using a different schema — at some point it has to merge them, and if there's a delta, if there's a difference in schemas, it just drops the schema entirely. It just drops it, which we found was not helpful.
K
What I'm hearing, though, is you had a need, but so far it's — that does actually help, because internally we also have some semblance of a translation layer, and our translation layer does have some things more complex than simple renames. We actually do have a desire to add even more complexity to it. But, you know, instead of doing this n different times for every company — like, you know, I see we already have this infrastructure in place to do some of this.
B
Yeah, I can provide more — I can follow up async and try to describe exactly where the blockers were, or where things weren't exactly what we expected from the feature. But yeah, there's definitely a need. We drive a lot of things off — we store a lot of stuff in BigQuery.
K
And I want to say Riley and I are at least aware of this side; we could probably talk a little bit more. I'm unaware of any working group specifically for this kind of effort, but it seems like it's becoming a bigger and louder problem, so —
B
— it was, at the time, a nice-to-have, and we just didn't have the bandwidth to go — you know, I saw, yeah, Tigran — obviously there's less work on it now — we saw him working on it and just didn't have time to contribute or really get too involved. But I can revisit and, at the very least, summarize where things were not useful enough.
K
I'll also message you in the Slack chat, and if anyone else has an interest in this, you know, feel free to speak up or message me as well — hi, my name is James Hughes; it's easy to find me on Slack.
I
It's actually maybe more a question for maintainers. I'm seeing that a lot of issues here are now stale — they're past their stale dates. You know, we re-mark them stale, we say we're going to wait 60 days, and a number of them were marked stale on November 6th. So I've closed some, which were, you know, things around flaky tests, things like that, but I don't think I should be closing them by hand — I think it should be automated. Am I missing something?
L
...days right now — we were trying to close out some of the really old issues pretty quickly. As for why they're not being marked stale, that's because the stalebot is processing them from newest to oldest, so it's only finding non-stale issues. That's on me — I was intending to take a look into that, but you can feel free to open a PR as well.
I
Okay — because, yeah, right now, for example: okay, this one I closed by hand, because on November 8th, 2022, it was marked "this issue has been inactive for 60 days and will be closed in 60 days if there is no activity". So we're four months later — it...
D
Among other things — but I would say at least half a year. When we're doing this, probably we should think about pinging the code owners a couple of times after that: so it's marked stale, the code owners are pinged — maybe every 30 or 60 days they get pinged again — and then after at least half a year they would be dropped.
I
Fair enough — so we're talking about two different things. I'm saying I think the bot is broken, I'm closing things by hand as a result, and I'm not sure I'm doing the right thing. And you're saying, well, the behavior is wrong in the first place and we should not be closing at 60 days. I'm —
I
Lower the noise, sure.
L
So multiple pings might be a little difficult, because right now the stalebot uses the stale label to keep track of state. It's kind of used as — I mean, that's essentially the database for determining whether the issue — whether the code owners — have already been pinged.
D
I see. Yeah, maybe we can find another way, or just leave it as it is and just increase the period before we remove — before we drop — the issue; that would be enough. The 60 days seems too aggressive to me.
L
I think it could — I think we might hit GitHub's API limits again. The issue right now is that it's looking at all of the new issues — it's looking at issues that were recently marked stale, I believe. So the issues that were recently marked stale are going to have a recent updated date, and therefore won't be eligible to be closed. We just need to flip that logic.