From YouTube: 2022-12-14 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
C
I just put some of the probable items for the specification; I took them from the last specification meeting. One thing is that there's some discussion on the semantic convention stabilization timeline and process. There is a document under review; feel free to add your review comments if you have something. It talks about the timelines and the process for how the semantic conventions would be stabilized.
C
It talks about stabilizing all the currently available ones by July '24, so not next year; I think it talks about 2024, with a plan for each quarter for which part could be stabilized. This is open for review, so I just wanted to put it here. Feel free, if you have something here, to review it.
C
This OpenTelemetry project roadmap talks about what has been done so far and what the priorities are going forward; it has done some prioritization into P0 and P1. That's also something we can go through. I mean, it looks fine, but it's open for review, so there may still be some things to review. You can see the priorities it talks about.
C
P0 is continued investment in the OpenTelemetry artifacts, which is the Collector, the SDKs, and the language clients. P1 is logs, and semantic convention stabilization is also P1. Client instrumentation is P2, profiling is P2, and the demo is also P2. So yeah, feel free; I will look into that.
C
There were a couple of issues, I think, that were of interest; I just wanted to bring them up. These were also discussed in the last SIG meeting, the specification one, the OTel specification SIG meeting. So, one was supporting a debug mode in SDKs?
C
...or something they can plug in, so they can use that to fail fast. That's something, probably; at least in C++ we have a way of doing that. But it's all under discussion right now.
C
The other one that was discussed was basically: how do I achieve consistent sampling across linked traces? This may need some changes in the specification; right now there is nothing in the specification, but it may need some new sampling algorithm, if it is not something supported with the existing sampling algorithms. So: how can we do sampling across spans which are created across different traces?
C
Say you want to do head-based sampling for spans, where, say, span S1 was created through trace T1, and that span is linked with span S2, which was created through trace T2. We want to ensure that we are able to sample all of these linked spans, so that we get consistent reporting for the traces. That's something which is not really available.
C
So if something has to be done, it may need a new sampling algorithm, whether that should be a standard sampling algorithm in the specification, or implementations can have their own sampling algorithm as a plugin: anybody who wants to do that can plug in their own sampling algorithm. That's what has been discussed.
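To illustrate the idea discussed above, here is a minimal sketch of a pluggable, deterministic sampler. The class and method names are illustrative only (this is not the opentelemetry-cpp API): the point is that if the sampling decision is a deterministic function of a trace id, a span that links to another trace can decide on the originating trace's id, so linked spans across traces are sampled (or dropped) together.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical sketch: a sampler whose decision is a deterministic
// function of a trace id, so linked spans can reproduce each other's
// sampling decision.
struct LinkConsistentSampler {
  double ratio;  // fraction of traces to keep, e.g. 0.25

  // Deterministic decision: hash the id into [0, 1) and compare to ratio.
  bool ShouldSample(const std::string& trace_id) const {
    uint64_t h = std::hash<std::string>{}(trace_id);
    double unit = static_cast<double>(h % 10000) / 10000.0;
    return unit < ratio;
  }

  // For a span with a link, decide on the linked (originating) trace id,
  // so the decision matches the one already made for that trace.
  bool ShouldSampleLinked(const std::string& own_trace_id,
                          const std::string& linked_trace_id) const {
    return linked_trace_id.empty() ? ShouldSample(own_trace_id)
                                   : ShouldSample(linked_trace_id);
  }
};
```

Whether such an algorithm belongs in the specification or stays an implementation-provided plugin is exactly the open question from the discussion.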
C
I think this was a fair ask. There was an issue raised a couple of days back where the requirement was that we do a collection as part of ForceFlush. But for the periodic exporting metric reader, ForceFlush cannot be called by the user, because for periodic exporting metric readers we pass a unique pointer to the meter provider.
C
So there is no way of calling it explicitly; it can be called through the meter provider, but going through this issue, I think it's a fair ask from the community. The issue raised was basically that when the application is shutting down, before the SDK shuts down it should ensure that any pending metrics get exported. So probably there were...
C
There was a window where that may not happen, because after MeterProvider shutdown any new collection is not happening. So if some measurements were generated and some metrics aggregation happened, that may not get exported. I just created a PR for that, so feel free to have a look into it. It's basically just doing an export.
C
I mean, it just does one collection and export after shutdown has been called; after the shutdown, it will do one cycle of collection and export, and then it will exit. It's a fairly simple thing.
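The behavior described above can be sketched as follows. This is a minimal illustration, not the actual opentelemetry-cpp reader (the names and the collect/export callbacks are stand-ins): on shutdown, the reader performs one last collect-and-export cycle so measurements recorded since the previous periodic export are not lost, and repeated shutdown calls are no-ops.

```cpp
#include <atomic>
#include <functional>
#include <utility>
#include <vector>

// Illustrative sketch of a periodic reader that flushes once on shutdown.
class PeriodicReaderSketch {
 public:
  PeriodicReaderSketch(std::function<std::vector<int>()> collect,
                       std::function<void(std::vector<int>)> do_export)
      : collect_(std::move(collect)), export_(std::move(do_export)) {}

  // Called by the meter provider when the application shuts down.
  void Shutdown() {
    if (shutdown_.exchange(true)) return;  // idempotent: only flush once
    export_(collect_());  // one final collect + export before exiting
  }

 private:
  std::function<std::vector<int>()> collect_;
  std::function<void(std::vector<int>)> export_;
  std::atomic<bool> shutdown_{false};
};
```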
C
Probably we can wait for him to be here. I just wanted to say that if we want to move to a Docker image, it should not be just for CMake; probably we should have some showcase for Bazel also, just to see that the caching mechanism is working fine, and that with Bazel we are not going to lose any build time; it should not increase the build time. But I think we can discuss it; I presume it's work in progress and we can discuss it then.
B
There were a couple of comments, which I fixed already, and I'm still doing some massive cleanup to be in a state where we can test it. I was able to actually do a test with certificates and secure connections, so it's getting very close.
C
Wow, can we do that? Yeah.
C
As per the discussion we had in this PR, I think this is going to be closed, right, and probably you will have only one? No, this is not to be closed. Sorry, okay! This should be good enough. Let's review it; I don't know how it got missed by me. I'll just review it. This is probably something testing for gRPC.
C
The list, which, I don't know... I mean, it was using an array for that. I changed it to a list because of some thought I probably had at the time, but I am not able to recall why I changed it to a list. So this was a discussion, and I created an issue out of it. We're not going to change the boundaries once they are configured, the boundaries for the histogram, so you could even have an array; that would be more memory efficient than using a vector or a list, right?
B
But in general, for performance, this raised the question of doing some profiling, I mean, which...
C
The reason is that we don't have, I mean, the profiling: somehow we're not able to measure proper profiling statistics, because the virtual machines on GitHub are not giving us consistent data; every time you run profiling on a virtual environment in GitHub, it will give you different statistics.
B
At least we should have the test case to run the workload.
C
Yeah, that's a good suggestion, I think. I have to create some benchmark tests as well, which somehow were not created for a lot of scenarios in metrics, and I think starting to run them locally would be a good option, if we can at least test before doing any such change. If you can just measure the performance locally, you can see if there is any change in the performance; I think that's good. I don't know how the dotnet team is doing it, because they are working very closely with us.
C
We have to do it and have the benchmark tests, which as of now are not complete for metrics. I'll create an issue for that, and probably I'll add some more benchmark tests around these scenarios, at least around the collection and export scenarios, and start measuring that before any change we make. The only thing I wanted to point out was that we cannot test it as part of the CI pipeline for now, because that's not giving us consistent measurements. Yes.
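The kind of local measurement discussed above can be as simple as the following sketch. It is not a substitute for the project's Google Benchmark suite; the function name and the workload are illustrative only. The idea is to time N iterations of the code path in question (for example a metric record/collect path), run it before and after a change, and compare nanoseconds per operation.

```cpp
#include <chrono>

// Time `iterations` runs of `fn` and return average nanoseconds per call.
// Run locally before and after a change to compare; CI VMs are too noisy.
template <typename Fn>
double NanosPerOp(Fn&& fn, int iterations) {
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; ++i) fn();
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double, std::nano>(end - start).count() /
         iterations;
}
```

Usage would be along the lines of `NanosPerOp([&] { histogram.Record(1.0); }, 100000)` (where `histogram.Record` stands in for whatever path is being measured).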
C
And there was a plan, I remember we used to discuss it: if we can have a local machine somewhere which we can use, or a local server lying around in one of our offices or something, which we can use as a GitHub virtual environment, so that all the tests would be running there.
C
A kind of self-hosted virtual environment, but it's not very secure. That's what I gathered when I was reading the GitHub documentation: it says it's better not to use those hosted machines for public GitHub CIs, because anyone can clone the repo, create their own CI test, and start running it, and those CI tests would also be running on the same hosted machine. So they can essentially run anything on it.
B
This is probably how you end up mining Bitcoins on the CI machine.
C
I think, let me spend some time on the benchmark tests, so that we can catch these kinds of issues. For this particular scenario, the list of boundaries is not huge in general, six to seven entries, so I don't think it would have been a significant performance overhead, but...
C
It would definitely be good to have a sorted vector, because these boundaries are in sorted order, and going forward you could definitely use a binary search or something on them. If we want to do it, it would be helpful to do a binary search on these and then use the appropriate boundary range to put the data into. With a list, at least, it would not be possible to use that. So anyway.
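The bucket lookup described above can be sketched with the standard library (a hypothetical helper, not the opentelemetry-cpp implementation): boundaries kept sorted in a contiguous vector can be binary-searched with `std::lower_bound` in O(log n), which a linked list cannot support since it lacks random access.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Find the histogram bucket for `value`. Bucket i counts values in
// (boundaries[i-1], boundaries[i]]; values above the last boundary go to
// the overflow bucket (index == boundaries.size()). std::lower_bound
// returns the first boundary >= value, which is exactly that index.
std::size_t BucketIndex(const std::vector<double>& boundaries, double value) {
  return static_cast<std::size_t>(
      std::lower_bound(boundaries.begin(), boundaries.end(), value) -
      boundaries.begin());
}
```

With only six or seven boundaries the gain over a linear scan is small, as noted above, but the sorted-vector representation keeps the option open.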
B
And I have done that already in the OpenSSL PR.
C
I think I understand that for package config, these metadata files which are generated would be useful: the Makefile can use them for building with make. But, you know, let me not comment on that, since I don't have thorough knowledge there. But yeah, I was looking into this link, which looks like, I think, Google Cloud.
C
Oh, probably, you know, I missed this issue: the exporter options do not honor the metrics and logs environment variables.
B
This one, I think, is only for the gRPC exporter.
B
With the recent changes in CI and the make files, at least it can compile with GCC and all the libraries for Google Test, I think.
B
We are getting closer to a...
C
Yeah, I was just thinking a bit more on this; I'm still thinking through the pros and cons. Should we try to separate out the API and the SDK, like: for the API we will always support C++11, so any instrumentation library which is built with greater than C++11 can still use our API. And I don't think we'll ever use any of the C++14 features inside the API code, or in the API interface.
C
I'm just thinking how we would frame the problem, I mean. So if we say that at the API level we use C++11, that means that in the SDK, all those implementations of the API will also not use, at the interface level, any of the STL components or C++14 components; internally they may use them. Let the SDK use C++14, but that...
C
The SDK can be compiled with the application, and then they would be linked together, and I think there should not be any linking issue, because, correct me again, we say that we are ABI compliant. That means we are not using anything like that at the interface level, so there won't be any... yeah.
B
If someone does that, it will mean mixing and matching different compilers in the same binary at some point.
C
gRPC, probably OpenSSL, and protobuf: these are not supporting C++11. I mean, again, I started thinking: any application that wants to use the Zipkin exporter, and that application is on C++11, they may have a concern: why is the Zipkin exporter...
C
Why is the OpenTelemetry Zipkin exporter not at the C++11 level, just because OTLP, just because gRPC and protobuf, are not supporting C++11? That's the reason. Why is the Zipkin exporter in OpenTelemetry also not supporting C++11, when it is not related to gRPC and protobuf? Because of those dependencies, if we are moving to C++14, will that affect somebody?
C
Any application that wants to use, say, Zipkin, as an example. I mean, that's just kind of a devil's-advocate view, to think through the pros and cons of moving here and there.
C
I'm just saying: is it something where we can say, like, OpenTelemetry SDK plus Zipkin, as an example, would be C++11?
B
Which I've seen also: there is the PR for deprecation of Jaeger, which is moving around, so it's likely.
C
Okay, even going another step here: like, if we have something like OpenTelemetry API at C++11, OpenTelemetry SDK still C++11, like we don't use any C++14, not 17, constructs in the SDK and API, and they would be only for the exporters, like for the OTLP exporter dependencies.
C
Some thought which was coming: we could do something like this, I mean, that allows any external developer who is writing their own exporter, if they want to support C++11, they can still do that.
B
With the latest thing I've done, I think this could be fixed.
B
Well, I think that when we fixed the Google Test version we use, as a side effect it could have fixed benchmark.
B
...the gRPC exporter, and I'm working on the HTTP exporter; on top of that, there's an ask for TLS on HTTP, with minimum...
B
I think this is fixable. The problem is, if we need to expand the gRPC options, we will have a breaking ABI change; then, I mean, that's not available.
B
It is doable, but...
B
So, with the work I did on the HTTP exporter, I...
C
This is again something which is optional, and it will need a change to the API. That was something I have been thinking about, whether we should do it or not, but I think it's something which would definitely be helpful for us to have support for. I know most of the languages are already supporting it; dotnet is not supporting it, but definitely Java and JavaScript are supporting it. Python I need to check. So I think those two languages already support it, but yeah.
C
Yeah, I think I saw your comment on this, Mark. I think I agree that this is not yet complete support that we have here. Probably, if we want to really support it, it should be something more full-fledged, like we should be able to create different packages for each of those components. Yeah, I just don't want us to start directly supporting packages, spec files for each of those packages.
C
That means we'd have to maintain all of that, which would create difficulties: if we start supporting full packaging, I mean supporting the package spec files for all the deb, RPM, NuGet and everything, I think that would be a huge maintenance burden for us. So this is something which allows make...
C
For example: no, this is not directly... and I don't think we should be advertising it. I think this is something which could be consumed directly: you can take this distribution package and even directly consume it, but we can't do that. I mean, a simple example: for NuGet packages, just to give an example, we can create NuGet packages from here, but in most cases at least, the users, the customers of those NuGet packages...
C
I mean, I just used the word "facilitate": facilitate generating those packages, and they can customize them further based on their own needs.
C
That's just providing one step, a base package for them, and they can modify it and start using it. But I don't think we should go for providing full-fledged packages which can be used directly. Personally, I don't think we should do it.
C
But I think it was good that the logs changes went in to create further PRs, and for any changes required, I think before doing a GA for logging we can have a look into whether any changes are required before marking it as GA. And apart from that, see if there is anything else. Yeah, I think, Tom, I just wanted to check with you: for vcpkg, would you be taking care of the new version?