From YouTube: 2023-01-30 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
B: All right, let's get started — welcome, everybody. I see lots of topics down here, so I'm excited to get to those. We'll quickly go through the SIG check-in; this was actually further down the agenda. Briefly — I don't know if people have had a chance to enter their items — do we have anyone from the spec SIG who wants to give us an update there?
D: Yeah — sorry. We will have our February release, probably by the end of this week, so I will prepare a PR today.
B: Okay, skipping ahead. JavaScript is still working on high-resolution histograms and on logs and events; no updates for Python. For Go, we have version 1.12.0 released — this updates the pre-GA metrics API and SDK (excellent, we talked about that last week, thanks) and also updates the semantic conventions. For C++, we have version 1.8.2 out this week, which also includes post-GA metrics fixes, deprecation of the Jaeger exporter, and build improvements. For Erlang, no new updates. And that's it for these — but that's fine, because we have a ton of stuff here. So, Juraci.
B: Okay, so it's a great opportunity to get someone working on a feature that you always wanted to get done — great for small to medium self-contained tasks, and it helps you mentor people. I believe — Ted, or someone else, correct me if I'm wrong — I believe Outreachy is a way to get more people into the community: interns specifically, or people who are new to open source contributions.
E: I believe it's an internship program that pairs people up with open source projects. Okay, great — so you have to be willing to mentor. Anyone who's interested in becoming a mentor should basically sign up with Outreachy, and you'll get an intern to help you on some OpenTelemetry project.
B: Great — well, that sounds like a great resource for everyone. If you're interested, Juraci has left some links here on Outreachy itself, as well as information about past community participation in Outreachy within OpenTelemetry, because I know we've done it before. Alrighty.
F: Hi, hello everyone. I don't know how many people saw this already, but it was just formally announced that there will be an Observability Day at KubeCon EU, as part of the KubeCon + CloudNativeCon co-located events. This will be taking place on April 18th, I believe — the day before the main KubeCon program.
F: As part of this — there's a CFP open, registration, all that stuff — we are looking into the possibility of putting together a maintainers panel, or some sort of panel discussion about OpenTelemetry. If you would like to be a part of that — maybe you're already planning to attend KubeCon EU, or maybe you would like to attend — please let me know: reach out, and I will put your name on a list to gauge interest. We're still maybe a month or so out from having a final schedule, and we need to see how many talks are submitted and so on. But that is something we are talking about, so if you have any questions about this, feel free to reach out to me.
F: And finally, if you are an end user of OpenTelemetry, or you know someone who is, we're very interested in having OpenTelemetry end users speak at Observability Day: how they're using it in production, any challenges, how they've overcome them, what they're able to do with it — things like that. So I strongly encourage you to pass the CFP along to end users.
F: If you know someone who might be interested in speaking, but it's their first time, or they want help crafting a proposal or whatever, feel free to reach out to me as well and I will make sure they get help with that. And yeah, that's all I have — any other questions, feel free to reach me async, but I hope to see a lot of you in Amsterdam this spring. Thanks.
F: We don't — I don't know right now. I know there are some big changes this year to how they do co-located events in general, but the Foundation is expecting between 1,000 and 2,000 attendees for the co-located day.
F: Unlike previous iterations, where you had a ticket for one specific event, now you get access to all of the co-located events, so there's the potential of having several thousand people in the space who can choose which event they want to go to. I would anticipate significantly higher attendance versus prior ones, simply because of the changes to how the co-located events work.
F: Obviously, because this is combining all of the observability content that normally would have been split into three different things. Before, there was a Prometheus Day, sort of an OpenTelemetry unplugged or community day or whatever, and also a generic observability event.
F: That's all being brought together into one thing rather than three, so there will be more decisions to be made about programming and content and so on, but we're trying to make sure that each project is well represented. And ideally, down the road, if this is successful, there's the potential for a dedicated CNCF observability event, similar to what they have done with CloudNativeSecurityCon, which some people might be aware of.
G: All right, hi everybody. As Ted is going to say after this little pitch, we're starting to try to put more effort into this meeting for technical-community-related communications.
G: So this is maybe a prototype for how we'll do things. I want to introduce Laurent, who works at F5. He's been working with me on a project that we're going to describe, which for short we call OTel Arrow, or OpenTelemetry Arrow. Apache Arrow is an ecosystem for data exchange, in-memory data storage, and in-memory data processing. It has a lot of connections to the Java and Rust language ecosystems.
G: There are entire databases available for it, and it's high performance — it's becoming big in the world outside of Go, I would say. We've been working away as hard as we could on getting Go up to speed on Arrow, and now I'm going to hand it over to Laurent, who's going to give a little pitch on the outcome.
H: Okay, can you see the slide deck? We can? Okay, cool. I will try to make this presentation as short as possible — about 15 minutes, if that's okay for you.
H: So, it's about the OTLP Arrow project. A quick definition of the goals of the project: first, it's an extension of the OTLP protocol, so the goal is to be perfectly compatible with the existing ecosystem. And we have three main goals.
H: The first one is to reduce the bandwidth requirement of the protocol by a factor of two to four. That translates into savings in network cost, especially in a deployment where a customer environment is sending telemetry data to a backend and this information has to cross the internet.
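As a rough illustration of what a 2x-4x bandwidth reduction means in money terms, here is a back-of-the-envelope sketch in Go. The traffic volume and per-GB price below are invented for illustration; they are not figures from the meeting.

```go
package main

import "fmt"

// egressCost returns a hypothetical monthly egress bill for a given raw
// telemetry volume, compression improvement factor, and per-GB price.
func egressCost(rawGB, compressionFactor, pricePerGB float64) float64 {
	return rawGB / compressionFactor * pricePerGB
}

func main() {
	raw := 10000.0 // GB of telemetry per month (made-up input)
	price := 0.09  // USD per GB egress (made-up input)

	baseline := egressCost(raw, 1, price)
	low := egressCost(raw, 2, price)  // the 2x end of the claimed range
	high := egressCost(raw, 4, price) // the 4x end of the claimed range

	fmt.Printf("baseline: $%.0f, with 2x: $%.0f, with 4x: $%.0f\n", baseline, low, high)
}
```

Under these made-up inputs, halving or quartering the bytes on the wire scales the egress bill down by the same factor, which is the cost-saving argument being made here.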
H: The second goal is to provide a more suitable representation for multivariate time series. Right now, OTLP only supports univariate metrics: we have one metric, a collection of attributes, and a timestamp. In a multivariate time series world, we have multiple metrics related together, sharing the same attributes and the same timestamp. There is a way to represent that better; right now we have to duplicate the information again and again, and it's definitely not optimal.
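The duplication being described can be sketched in Go. The types below are hypothetical simplifications, not the real OTLP protobuf messages; the point is only that grouping points which share an attribute set and a timestamp lets the shared parts be stored once per row instead of once per metric.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Hypothetical, simplified types -- not the real OTLP messages.

// UnivariatePoint repeats its attributes and timestamp for every metric.
type UnivariatePoint struct {
	Metric     string
	Attributes map[string]string
	TimeUnixMs int64
	Value      float64
}

// MultivariateRow shares one attribute set and timestamp across many metrics.
type MultivariateRow struct {
	Attributes map[string]string
	TimeUnixMs int64
	Values     map[string]float64 // metric name -> value
}

// attrKey builds a stable grouping key from an attribute set and timestamp.
func attrKey(attrs map[string]string, ts int64) string {
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, attrs[k])
	}
	fmt.Fprintf(&b, "@%d", ts)
	return b.String()
}

// ToMultivariate groups univariate points sharing attributes and a
// timestamp into single rows, so the shared parts are stored once.
func ToMultivariate(points []UnivariatePoint) []MultivariateRow {
	byKey := map[string]*MultivariateRow{}
	var order []string
	for _, p := range points {
		k := attrKey(p.Attributes, p.TimeUnixMs)
		row, ok := byKey[k]
		if !ok {
			row = &MultivariateRow{Attributes: p.Attributes, TimeUnixMs: p.TimeUnixMs, Values: map[string]float64{}}
			byKey[k] = row
			order = append(order, k)
		}
		row.Values[p.Metric] = p.Value
	}
	out := make([]MultivariateRow, 0, len(order))
	for _, k := range order {
		out = append(out, *byKey[k])
	}
	return out
}

func main() {
	attrs := map[string]string{"host": "h1", "region": "eu"}
	pts := []UnivariatePoint{
		{"cpu.user", attrs, 1000, 0.42},
		{"cpu.system", attrs, 1000, 0.13},
		{"cpu.idle", attrs, 1000, 0.45},
	}
	rows := ToMultivariate(pts)
	// Three points collapse into one row; attributes/timestamp stored once.
	fmt.Println(len(rows), len(rows[0].Values)) // 1 3
}
```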
H: The third goal, in phase two, is to provide a more advanced and efficient telemetry data processing pipeline, directly integrated into the collector; for that we will rely on the Apache Arrow ecosystem, where there are multiple sub-projects to help us do that very quickly. So we rely on multiple technologies. The main one is Apache Arrow, used to represent batches of OTel entities in a columnar way — because most telemetry information (traces especially, and metrics) comes in batches, and we can also batch it in the collector.
H: So there are opportunities to improve the compression ratio, the memory consumption, and so on with this kind of approach. We are also using a gRPC stream instead of a standard request/reply mechanism — I can expand later on why we are using that.
H: The protobuf message that we transmit over the stream contains, basically, an Arrow IPC stream: a stream-oriented representation of columnar information with nice properties. If we open a stream, we can first send a schema describing the information on the stream, then a collection of dictionaries, and then a record batch — and for the next batch, we just have to send the batch.
H: We no longer have to send the schema again, and if there isn't an update for a dictionary, we can just send the data. That's what I meant by dictionary support — here, delta dictionaries.
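The stateful framing being described — schema once, then dictionary deltas, then just batches — can be sketched as follows. The types are invented for illustration; this is not the real Arrow IPC or OTel Arrow wire format.

```go
package main

import "fmt"

// Message is a toy frame: schema and dictionary deltas are optional,
// the batch payload is always present.
type Message struct {
	Schema     string            // non-empty only when the schema must be sent
	DictDeltas map[string]string // only dictionary entries new since last message
	Batch      []int             // the record batch payload itself
}

// StreamEncoder remembers what the receiver already knows, so later
// messages can omit the schema and unchanged dictionary entries.
type StreamEncoder struct {
	schemaSent bool
	dict       map[string]string
}

func NewStreamEncoder() *StreamEncoder {
	return &StreamEncoder{dict: map[string]string{}}
}

func (e *StreamEncoder) Encode(schema string, dict map[string]string, batch []int) Message {
	m := Message{Batch: batch, DictDeltas: map[string]string{}}
	if !e.schemaSent {
		m.Schema = schema
		e.schemaSent = true
	}
	for k, v := range dict {
		if e.dict[k] != v { // send only additions/changes (the "delta")
			m.DictDeltas[k] = v
			e.dict[k] = v
		}
	}
	return m
}

func main() {
	enc := NewStreamEncoder()
	first := enc.Encode("traces-v1", map[string]string{"0": "GET", "1": "POST"}, []int{0, 1, 0})
	second := enc.Encode("traces-v1", map[string]string{"0": "GET", "1": "POST"}, []int{1, 1})
	third := enc.Encode("traces-v1", map[string]string{"0": "GET", "1": "POST", "2": "PUT"}, []int{2})

	fmt.Println(first.Schema != "", len(first.DictDeltas))   // true 2: schema + full dict first
	fmt.Println(second.Schema != "", len(second.DictDeltas)) // false 0: only the batch afterwards
	fmt.Println(third.Schema != "", len(third.DictDeltas))   // false 1: only the new dict entry
}
```

The design choice this illustrates is why a long-lived gRPC stream is used instead of request/reply: the saving comes precisely from the per-stream state the two sides share.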
H: We also rely on Zstandard compression, because it's a very efficient algorithm, especially for compressing columnar information. So we have two main use cases. In phase one, like I mentioned, network cost saving — that would be the most important one, for deployments that send OTel data over the internet. In phase two, we want fast and complex telemetry data processing on the edge, directly integrated into the collector.
H: A while ago now, after this proof of concept, I created the first version of the OpenTelemetry proposal — OTEP 0156 — and that was submitted in August 2021.
H: So, let's go into a little more detail on this new protocol. This diagram presents a typical scenario that optimizes the network bandwidth and the cost: on the left, a collector running in a customer environment, aggregating and multiplexing multiple sources of telemetry information and sending that over the internet...
H: ...to another collector that is basically a front end for a telemetry pipeline, ultimately backed by a backend. Based on the experiments we did, both on synthetic data and some production data, we observed an improvement in compression ratio of between 200 and 400 percent.
H: And let's say you have a collector on the backend side that does not support this new protocol: the collector on the left will automatically fall back to the OTLP protocol if needed.
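A minimal sketch of that fallback behavior, with invented interfaces rather than the real collector exporter API: try the Arrow path first, and if the peer rejects it, downgrade to plain OTLP and remember the downgrade for subsequent batches.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrUnsupported stands in for whatever error would signal that the
// receiving side does not speak OTLP Arrow (hypothetical).
var ErrUnsupported = errors.New("receiver does not support OTLP Arrow")

// Exporter tries the Arrow stream first and downgrades to plain OTLP
// when the other side rejects it.
type Exporter struct {
	arrowSupported bool // flipped off after the first failed attempt
	sendArrow      func(batch string) error
	sendOTLP       func(batch string) error
}

func (e *Exporter) Send(batch string) (proto string, err error) {
	if e.arrowSupported {
		if err := e.sendArrow(batch); err == nil {
			return "arrow", nil
		} else if !errors.Is(err, ErrUnsupported) {
			return "", err // a real transport error, not a protocol mismatch
		}
		e.arrowSupported = false // remember the downgrade for later batches
	}
	return "otlp", e.sendOTLP(batch)
}

func main() {
	// Simulated peer that does not understand OTLP Arrow.
	e := &Exporter{
		arrowSupported: true,
		sendArrow:      func(string) error { return ErrUnsupported },
		sendOTLP:       func(string) error { return nil },
	}
	p1, _ := e.Send("batch-1") // falls back after the rejected Arrow attempt
	p2, _ := e.Send("batch-2") // goes straight to OTLP now
	fmt.Println(p1, p2)        // otlp otlp
}
```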
H: Okay, some numbers regarding the compression rates. These three columns, with two diagrams each, represent the compression rate by batch size.
H: You can see we get a nice gain in compression rate. To give you some more specific numbers: for metrics right now, depending on the nature of the information in the stream, we observe between 200 and 350 percent of improvement, something roughly equivalent for logs, and a little bit better for traces.
H: Regarding the duration of the main steps — by main steps I mean: with an exporter we usually first have a serialization phase, then compression, and then we send the data over the internet or over the network; on the opposite side we have decompression and deserialization.
H: In this first phase only, we have two additional steps: an OTLP-to-OTLP-Arrow conversion on one side, and the same thing in the opposite direction on the other side. So we obviously have some additional overhead, which will disappear in phase two; but overall, because the messages are smaller, the compression will be faster and obviously the transmission will be faster — and the serialization and deserialization steps do not exist when you are using OTel Arrow end to end.
H: So, right now, in terms of end-to-end processing time — and I don't count the network, just the phases we have here —
H: it's, let's say, about 50 percent slower for metrics, relatively close for logs, and better in some situations for traces. We are currently working to optimize this step further for phase one. But what is super important to understand is phase two.
H: In phase two we can do much better, just because we can remove the overhead. Why can't we do that right now? Because it would require much more work — basically a new collector: if we want Arrow end to end, we have to change the interfaces of the receiver, processor, and exporter in order to leverage the columnar representation end to end.
H: We haven't worked on that yet, but I had an example of it in the proof of concept, where I demonstrated that we can process the information with DataFusion, for example, with a query language — and it's super fast.
H: In terms of performance, it's dramatically better. For metrics especially, for example, the speed would be between 3x and 11x faster than the existing system. It's an estimate, because we don't have that implementation yet; it's based on the existing Go implementation with the conversion steps removed, and that's what we get at the end.
H: So for us, the next steps are: we basically want to iterate to reach OTEP approval, and we look forward to your feedback and help to finalize this approval. Importantly, we want to test this entire implementation on real production data with the help of the community — we have a set of tools and procedures that we will be able to communicate. For example, you could test the collector right now.
H: At some point we will also be able to record batches that we can anonymize — we have tools for that — and that will be super useful for us: when we detect a situation where there is a bug, or where the compression rate is not that great, we will be able to use the anonymized protobuf messages to replay it on our laptops and try to figure out what the issue was. And we are iterating to improve the performance, reliability, and robustness of the reference implementation right now. That's it — open to questions.
G: Yeah, I would just add — that was a great pitch — I think the community and the interest from the open source ecosystem are all very strong here. Of course, there's also a vendor angle: if you're interested in saving a lot of money for your customers, this is going to be an important project.
G: I think it's one of our best bets, and that's the reason we're seeking your approvals. The implementation is almost ready, and I'll be back to announce an alpha or something like that in a few weeks' time, I think. So, if you see this as an opportunity, one thing you could also help with is if you have an in-house Arrow expert — we've got them at this company.
E: Yeah, I just want to reiterate, in case it's not clear: egress costs for telemetry data are huge. One of the biggest costs that end users of OpenTelemetry face is egress, and this makes a substantial impact on that. That's why it's really valuable — having something like this in the collector really puts the collector kind of in the lead among the various telemetry processors that are out there.
B: It's important, yeah. Both for the existing use cases — if you're an end user with a huge fleet, 100,000 hosts or more, you're capturing telemetry from those, and your observability solution is in some other cloud environment, this would dramatically cut down your egress cost — but then I think what you're getting at is even beyond that: using the collector as a giant pipeline processor, or something like that, this makes that feasible and even more interesting. Yeah.
G: Yeah, I think there's going to be a lot of benefit in processing this data with off-the-shelf Arrow tools in the coming months and years, and that will become more and more accessible as we get a few more of the steps done for this project. Right now, for example, I'm struggling not with the compression or the encoding work that Laurent has done — I'm struggling with auth in the collector.
G: Those of you who go to the collector SIG may have seen me present some of the issues around metadata propagation. We are making it so that you can propagate metadata through this Arrow bridge: you send your auth to the collector on your side of the network, it goes through the Arrow pipe compressed, and the auth comes out the other side. That's the ideal goal here. We're getting close, and I'll keep this group up to date. Thanks.
E: Let's pick it up — okay, so let me share my screen.
E: So — what we've been doing, for those who were not with us the last time: we are trying to improve our project management for the spec backlog. Something we identified is that we have too many open threads; we context-switch way too much, and as a result getting things pushed through our spec process can be very, very slow and difficult. So what we're trying to do is just get organized. This started with some great work...
E: ...Morgan did, going to our community and getting an understanding of what's valuable to it. I'm working with the TC and some product managers to help get the rest of our backlog into something that looks more like actual project management, and the first step is just to go through all of our existing OTEPs and sort them by priority and area of reference.
E: Anything that's already being worked on gets a P0. In other words, if we already have a working group actively working on something, and that working group is blocked because we aren't paying attention to its OTEP, that gets a P0; everything else gets a priority according to what we have in this doc.
E: If it's not on the list, I'm not giving it a priority right now, for better or for worse. I think it would be mean to have something like a "priority: no" tag, so I'm just not doing that.
E: But I would love everybody's help to finish up this first pass. If this is super boring to you and you don't want to participate, that's also fine. Let's start with Laurent's PR: columnar encoding for the OpenTelemetry protocol.
E: Maybe we can move it to "current project" once it gets community approval, because it does appear there are people actively working on it. So, Josh, by the way — it would be great to get a project tracking issue for this, just describing your goals and timelines, since it seems like it's a multi-stage project.
I: That's fine. This is something that Josh Suereth and I started working on before the holiday, but because of the holiday it kind of got pushed.
I: Basically, the idea is — originally we had been thinking about some sort of "x-" prefix for semantic convention attributes, or something like that, to denote that they are experimental. While doing our due diligence and research, particularly around HTTP headers — which, some people may remember, also used to have an "X-" prefix, and the reason that was deprecated — we decided to go in a slightly different direction.
I: It would essentially just be somewhere where anyone who's writing an instrumentation can say: no previous semantic convention existed for this, I needed to instrument something, so this is what I did. Then, if it picks up support in the community, it can eventually be incorporated into the main semantic conventions.
I: This is meant to break the dependency cycle we have between not wanting to create instrumentations that use attributes not defined in the semantic conventions, and not wanting to create semantic conventions for attributes that don't have significant use and value shown in the wild.
I: It's not a ton different, except that this is very explicit about significantly lowering the barrier to entry. It's explicitly stated in there that technical merit is not grounds for rejection; rejection would really only be for cases of frivolous use — you know, like I'm registering my birthday or something stupid like that — or abuse, and things like that.
I: That is definitely a possibility. Hopefully, if somebody goes to submit one, they'll see that a previous one already exists that covers their use case, or somebody points it out and asks "does this cover your use case?" — and that ends it there. If they really say "no, that does not fit, I need it to be different" and register it anyway, then I guess it would be up to the TC, when it's promoted to a quote-unquote real semantic convention, to choose a winner between the two based on community adoption, I would assume.
E: I mean, I see this as a very high priority, just because — the SDK is useless without instrumentation. I think instrumentation is the highest-priority component overall, and this is an important part of that. Okay...
I: It depends, I guess, on the final form of the approval. If it's just lowering the barrier of our existing experimental tag, then it doesn't really require that much work, other than documenting the new process.
I: It does also put some additional — I don't know, maybe "burden" is the wrong word — it gives some responsibility to the TC in this area, or to some group the TC chooses. So maybe it would require the creation of a semantic conventions approvers group, or something along those lines — or maybe not.
E: So it sounds like, if it's just updating the spec with some new rules and talking to the TC about that, you and Josh Suereth can bottom-line that. Yes — and I think it's fine to make it a P0, because semantic conventions are a primary goal... actually, I guess we call them a P1. So let's just call this a P1: semantic conventions.
G: Daniel, I'm happy to help with that also. Great.
E: And yeah, my only request is: if, over the course of this OTEP, this does morph into more work than that — building stuff, or some kind of longer effort that people have to think about and pay attention to — if you don't mind just creating a project issue for it, explaining what all that is, so that we don't forget. But I don't think you need one if it's just updating the spec.
E: Okay, let's just add a label for that.
E: Yeah — by the way, if someone thinks they can work GitHub faster than me and is getting antsy, feel free to take over the keyboard jockeying. Okay: "Include OTLP proto version identifier in requests" — a request identifier describing the OTLP version used to create the binary or JSON payload.
E: Do people have any thoughts on this one? Looks blocked.
E: My short take is that this is blocked, and it doesn't appear to be related to one of our priorities — possibly it's more experimental. Unless someone on the call cares about it, I'm just going to move on.
D: Wait — yeah, but this one has a lot of reviews, I remember now, and a lot of support from a few vendors at least. Okay, we just need to resurrect it, but I think in general this is a great one. I don't know whether it's a P0 or a P1 — probably P1, yeah.
E: I know that Kubernetes support is something the community really cares about. I don't know quite how we captured that here, but I agree with you that it's important. I do realize we have our P0, "continued investment in OpenTelemetry"...
E: Like the "maintain" part, yeah — but there is actual work beyond just maintaining. Config is the one that comes to mind the most that people have been banging on; an improved installation experience is a thing; people want better docs. But yeah, if you have ideas, it might be helpful to just clarify that one, because...
E: In other words, this doesn't look like an OTEP being submitted from that messaging group — it looks like it was submitted beforehand, so I'm not sure where it stands.
E: If it's something that group is working on, then I would totally call it a P0, because we'd have to pay attention to it — but I'm going to wait for that messaging group to... sorry, I think they should take it.
D: This is just a fix to update a link — and actually that's a silly thing, because...
E: "Data classifications for resources and attributes." The idea behind this: not all telemetry data is the same with regard to importance, how it is processed, and what is supported by downstream vendors. This approach would allow for efficient checking of data, in order to then apply whatever configurations are required by your organization.
J: I believe what he's looking for here is, within the semantic conventions, for attributes that we know are likely to contain sensitive information, to have a flag — so that a processor can ask: is this one of the attributes in the set that may be sensitive? If so, let me operate on it; otherwise, pass it through.
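A sketch of how such a flag might be consumed by a processor. The `sensitive` set below is invented for illustration — the semantic conventions do not currently define this flag, which is exactly what the proposal is asking for.

```go
package main

import "fmt"

// Hypothetical set of semantic convention attribute keys flagged as
// "may contain sensitive data" -- invented for illustration.
var sensitive = map[string]bool{
	"enduser.id": true,
	"http.request.header.authorization": true,
}

// Redact replaces the values of attributes flagged as sensitive and
// passes everything else through untouched.
func Redact(attrs map[string]string) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if sensitive[k] {
			out[k] = "[REDACTED]"
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	attrs := map[string]string{
		"http.method": "GET",
		"enduser.id":  "alice",
	}
	fmt.Println(Redact(attrs)) // enduser.id is scrubbed, http.method passes through
}
```

The point of putting the flag in the conventions themselves is that a processor like this needs no per-organization attribute list: the check is a single set lookup driven by shared metadata.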
E: I see — okay. So this is about improving how we handle our semantic conventions: being able to label certain fields in our semantic conventions markdown files, so we can know automatically which fields should be flagged for extra scrubbing, essentially. Does that sound correct? Yeah.
E: Okay, that sounds right. There would also be work to approve this OTEP, so I'm willing to give it a P1 priority, since it's part of getting our semantic conventions stable. Does that seem fair to people?
E: In other words, we have a semantic conventions umbrella group that meets every other week, and I think that group should have a look at this and actually deal with it. I'm part of that group, so I'll turn around and do that — we meet next week.
C: ...a lot with, I think, Prometheus compatibility — so I might also ping Josh MacDonald, because I know this is a topic that was pretty actively discussed during the SDK development for metrics.
E: "Export span context IsRemote in OTLP" — that's interesting. So this is basically saying... this seems to be like span kind, right? IsRemote.