From YouTube: 2021-03-09 meeting
D: Hey, what's up? Good morning, guys, welcome to the Tuesday meeting. We can start probably in a couple of minutes. Meanwhile, please add yourselves to the agenda — to the attending list, sorry. We have a few items on the agenda today, so we have enough stuff to talk about, but in case there's something you want to discuss, please add it before we start.
D: First of all, yeah, there are two tickets from yesterday's maintainers meeting. The first one — I don't know whether Bogdan is here or not — is about doing this monthly release that we promised after doing 1.0, the first week of each month.
D: Well, I guess he's not, but yeah, we will go according to the plan. So if there's something you want to mention against that, or about that, please let us know here. We were often talking about doing the release — the monthly release.
F: I said we should do that based on — based on our promise that we'll have a release at the beginning of every month. I will take care of that later today. Fantastic.
D: All right, thank you for that. The next item is regarding semantic conventions.
D: We had an interesting discussion regarding zone for resources, and this was about whether we should define, or try to specify, guidelines for semantic conventions — whether we should favor longer, readable names versus compact names that, you know, would require less space. This was started because of a PR that Channel put together.
G: Yeah, if you haven't seen the issue, it's about renaming cloud.zone to cloud.availability_zone, because both AWS and Azure call it "availability zone"; only Google calls it "zone". So, you know, we had this funny description that said, like: oh, by the way, on AWS and Azure this is actually the availability zone. But it's a super long name. So that's where the discussion came from: should we favor compact names, or is it okay to have long names if they're more readable or represent the concept better?
H: If we shorten that to something like "az", I would say it's not very easy for people to understand, and it's also not the best-performing thing anyway — if you want extreme performance, it should be some crazy binary hex code. So if we care about performance, I think we should probably keep the human-readable name in the spec, and in the protocol actually have some translation that maps it to the extreme-performance one.
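The approach H sketches — human-readable names in the spec, with a compact translation on the wire — could look roughly like this (a minimal illustration only; the mapping table and integer codes are invented for the example and are not part of any OpenTelemetry protocol):

```python
# Hypothetical table pairing spec-level readable attribute keys with
# compact integer codes that would only ever appear on the wire.
READABLE_TO_CODE = {
    "cloud.availability_zone": 1,
    "cloud.region": 2,
}
CODE_TO_READABLE = {code: key for key, code in READABLE_TO_CODE.items()}

def encode_attrs(attrs):
    """Swap known readable keys for compact codes before sending."""
    return {READABLE_TO_CODE.get(k, k): v for k, v in attrs.items()}

def decode_attrs(wire_attrs):
    """Restore readable keys on receipt; unknown keys pass through."""
    return {CODE_TO_READABLE.get(k, k): v for k, v in wire_attrs.items()}

attrs = {"cloud.availability_zone": "us-east-1a", "custom.key": "x"}
wire = encode_attrs(attrs)
assert wire == {1: "us-east-1a", "custom.key": "x"}
assert decode_attrs(wire) == attrs  # lossless round trip
```

In that design, the spec and the UI only ever see the readable name; the compact form stays an implementation detail of the wire protocol.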
B: I want to back that up. I actually think it might be worthwhile to investigate defining a schema on semantic conventions in the future, so you would actually have, like, a protocol buffer message where it would say "availability zone" with a string — because that's going to compress quite well, right? It's going to be an integer instead of a string, or a varint even.
B: My statement here is: if we stick with human-readable names for now and we run into performance problems, there are options — that I think are actually pretty nice — to optimize this in the future and keep the readability. But anyway, just wanted to throw that out there. We can optimize this later; I think that's not a problem.
I: We should probably still have some reasonable limitations on the length of the attributes, because they are supposed to be shown somewhere in the UI, and there are some limits to what you can show nicely in the UI, right? So some limits would be nice to have — recommendations, not hard limits.
B: Yeah, yeah. I'm still looking at — if you're familiar with the Transit protocol out of the Clojure community, that's how I view optimal JSON, but that could also be wrong too. — Can you share that?
F: It makes a lot of sense — maybe that's the answer. So we should stick with more readable for the moment, but, as Tigran pointed out, some kind of — not limitation, but recommendation — would be good.
I: Josh, I think when you were saying limit optimizations, you were referring to some sort of dictionary compression, I guess. I actually tried that at some point, and it did not yield any significant improvements in the compressed format — at least if you're using compression, these strings compress well anyway, because that's what many of the compression techniques do: dictionary compression.
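The observation that general-purpose compression already does dictionary compression can be sanity-checked with a quick sketch (illustrative only: JSON-ish bytes and zlib stand in here for real OTLP protobuf payloads and whatever compression the exporter actually uses):

```python
import zlib

# A batch-like payload where the same long attribute key repeats many
# times, versus the same payload with an abbreviated key.
long_keys = b'{"cloud.availability_zone": "us-east-1a"}' * 500
short_keys = b'{"cld.az": "us-east-1a"}' * 500

long_compressed = len(zlib.compress(long_keys))
short_compressed = len(zlib.compress(short_keys))

# Uncompressed, the long-key payload is far bigger; after compression,
# the repeated keys collapse into back-references and the gap nearly
# disappears, which is the point being made above.
assert len(long_keys) - len(short_keys) > 5000
assert abs(long_compressed - short_compressed) < 500
```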
I: Large requests, anyway — that's a topic for separate research, which I think is reasonable to do, but probably right now we don't really have to dive in.
G: By the way, given the number of semantic conventions — and, you know, potentially user-specified stuff — do you think the key size is going to be the bigger issue? I feel like there's so much data already that this is not going to be a significant issue if we set the limit to a sane thing. There's so much data already that this is really micro-optimization.
I: Yeah, all I'm saying is we don't need to go crazy here and require, like, four-character keys — no.
D: Yeah, actually, I would say that we need probably both: we need, like, a hard limit in the future, but also a guideline — you know, like general suggestions. I think we don't want to be forcing users to be reaching that hard limit all the time — you know, probably 12 or 15 characters or something like that, whatever.
F: Okay, we can discuss that separately, but right now we should probably recommend some reasonable size. Let's say it could be less than 12 characters, or 16 characters, something like that. 16 is probably better because it's a power of two — but sure, let's recommend under 16 characters as a reasonable thing, and then we can move forward with that PR now.
G: Let's take a look at the current — yeah, let's take a look at the existing keys, because there's a lot of long stuff, like infrastructure_service and things like that. So I'll take a look, I'll file an issue with a recommendation, and then we can discuss.
D: Sweet, fantastic. Yeah, once we have that, I think we should put a comment there in your PR. You know, Yuri was the person initially against this, so we have to explain to him why we are, you know, taking this route. Perfect, right.
J: And there are quite a few backends out there that just display the plain value, and it's not always the same person looking at the backend and reading the things that is also writing the instrumentation. I think it would already provide quite some value if it were somewhat self-descriptive, because an "az" set for availability zone won't help anyone. But if there is a person stopping by at that area in the backend, and they see a filter that has "availability zone" in there, they will usually understand right away what this is about.
K: I have a comment on the semantic conventions. I don't think we have specified, like, you know, delimiters for pathing, and the reason I say that is that when we're talking about these minimum lengths — should we define those minimum lengths to be per segment, if we had pathing? Because it seems like the semantic conventions always, you know, require us to have some kind of grouping, and so really the length is probably defined by each segment rather than, you know, the whole key.
K: I mean, I'm sure we could have a hard limit also, but I would think that part of this would include whether we do segmenting, or delimiters, or pathing — and if so, then we may have length restrictions per segment as well.
K: Yeah, I don't disagree with that. I'm just saying the general question is: do we have segments? And if so, then the minimum length probably applies more to each segment rather than, you know — I'm sure we could have a hard limit too.
G: I mean, this is a very tough topic, because people will introduce their own keys — you know, they may bundle them under some of the existing paths, as you mentioned. That's why I felt like, you know, if we try to set a limit in this meeting, it will be tough. We have to take a look, and potentially at what other people will introduce.
E: There's been some discussion about attribute namespacing — people want to have an attribute name that reads as a short string but is actually qualified by some sort of lengthy descriptor that says where this attribute actually comes from, so you can disambiguate it. People want to display a short name and encode a long name, and we've talked about it, but we've always backed off, because it's kind of like adding structure to your labels, or something like that, and that sounds scary when we haven't finished enough of what we've got already.
L: Yeah, we use a convention of dot notation currently, but we don't want to encourage people to be building literal programming structures. We want to encourage people to keep it as flat as possible, and the dot notation seems to be working fine for us for the time being.
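The flat dot-notation convention described here, as opposed to literal nested structures, can be illustrated with a small sketch (hypothetical attribute values; the `flatten` helper is invented for illustration and is not part of any OpenTelemetry API):

```python
# Flat dot-notation attribute keys, as the conventions use today...
flat = {
    "cloud.provider": "aws",
    "cloud.availability_zone": "us-east-1a",
}
# ...versus the nested structure the project avoids encouraging:
nested = {"cloud": {"provider": "aws", "availability_zone": "us-east-1a"}}

def flatten(d, prefix=""):
    """Collapse a nested mapping into flat dot-notation keys."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

assert flatten(nested) == flat
```

The dots give readers the grouping a nested structure would, while the attribute map itself stays a flat string-keyed dictionary.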
D: Okay, so the next one is Josh — the metrics data model.
B: All right. So this is a bit of an update, and I'm sure we're going to get into this in the next meeting, but since this is a spec meeting, I just wanted to call people's attention to it. I threw together a kind of project with a bunch of themes of tasks to complete, and then a proposed order of accomplishing those tasks. In terms of what's been going on: right now everyone's been kind of focused on histograms in the proto library.
B: If you look, I think there are no fewer than three pull requests around histograms, and some data model stuff from Victor, so that is actually kind of top of the agenda. If we want to make the end-of-March deadline, I think that means, at a minimum...
B: ...we need to sort out histograms, and what we want to do around type schema and metadata stuff, by the end of this meeting — at least have a tentative notion of what we'd like to do, and then hash it out in pull requests — so that by next meeting we can focus on some of the temporality things we want to do. And then, after that, we can talk about aggregations.
B: This is a bit of an aggressive timeline, so I just wanted to give that heads-up. So, yeah.
E: As someone who's been pretty close to those details, I agree with you. The first two bullets are very much what all the discussion has been about in the protocol repository: it's Victor's PR proposal, and then several things about histograms — and I want to say they're connected; whatever we decide to do with Victor's is going to tell us what to do with histograms, and vice versa. So we have basically one decision to make. We could probably do it in the next hour, and I think we should.
E: We should try and finish that this week, and then, as you say, there are several secondary issues for the proto. And then back to you, Josh — I think there's something about the data model doc.
B: Yeah, I forgot to mention: we're trying to get all this into the specification, so I have a PR out where I took Josh's document and tried to wire it into the spec. Anywhere the spec mentioned the data model, it was previously only looking at the proto; it now mentions this document. I took Josh's document and just translated it into Markdown, with all the fun Markdown. You probably won't like what I did with — what are they called — footnotes, because Markdown footnotes, what the heck; we don't need those anyway.
B: Why did I use a footnote anyway? I don't know, but I like the footnote; I just couldn't replicate it well. So please take a look and review that, because again, we'd like to get the data model solidified by the end of this month, and we have, I would say, at least three hard discussions to have — and we have exactly three meetings, so that lines up. Whether or not three meetings is enough for the discussion, I don't know; that's something we can talk about in the next one.
G: All right, the next item is mine. It's about — like, we're not very consistent about formatting the enum values in semantic conventions, so let's try to make them consistent, because there's, like, ALL_CAPS underscore, and then, like, you know, camel-case type of stuff — actually, there's no camel case, but there's underscore — sorry, lowercase underscore.
G: So, since we discussed that we're going to ask the languages to auto-generate some of these keys and values — keys are, by the way, mostly consistent, but values are not — it would be good to make them consistent. So there's an issue; feel free to comment on it. That's what this is about.
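A normalizer for the inconsistency described here — ALL_CAPS values next to lowercase-underscore ones — might look like this (a sketch only; lowercase snake_case is assumed as the target style purely for illustration, which is one candidate in the issue, not a settled decision):

```python
import re

def normalize_enum_value(value):
    """Normalize ALL_CAPS, camelCase, or mixed enum values to lowercase snake_case."""
    # Insert an underscore at lower-to-upper camelCase boundaries,
    # then lowercase everything; existing underscores are kept as-is.
    with_breaks = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", value)
    return with_breaks.lower()

assert normalize_enum_value("AVAILABILITY_ZONE") == "availability_zone"
assert normalize_enum_value("availabilityZone") == "availability_zone"
assert normalize_enum_value("availability_zone") == "availability_zone"
```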
G: Okay, yeah, I'll make a suggestion and then we can discuss it on the PR or the issue. The next thing is: we discussed last week that, hey, we need a compatibility spec for Prometheus — what we actually mean by "we're compatible with Prometheus". Ted suggested, like, maybe that could be a prototype spec under the spec repo, but I think it's in the very early stages; that's why we decided to put it under the working group. I created the PR there; feel free to take a look and comment.
G: A lot of the things the spec mentions about the behavioral requirements for the collector are actually, you know — we designed it with the Prometheus folks; it's just kind of documenting what we've been doing so far. So, you know, one of my difficulties is, when people ask me what the extent of the Prometheus support is, I just want to, you know, send them to this one doc so they understand the extent, and the behavior, and the expectations from, you know, the libraries and the collector.
G: There's also library-specific stuff — the main reason the library-related stuff just doesn't exist, or is very short at this point, is that it's a phase-two item for us, right? Like even, like, Prometheus metrics exporters and so on. But there will also be a section about libraries; we can discuss this later.
L: Likewise — it'd be great, wherever this stuff ends up living, to just add a link in the table of contents in the readme of the spec repo as well.
L: But when I was writing the versioning and stability requirements and asking people to read those in the spec, I got a lot of confusion back from people. One of the confusions was whether we meant the targeted libraries — like, the libraries we're targeting with our instrumentation.
L: The other confusion was whether or not the collector, and the other libraries that OpenTelemetry provides, are the things we're talking about. We actually have, like, a collection of libraries that make up our clients, which are the API, the SDK, the plugins, and the installed instrumentation.
L: So I added the term "OpenTelemetry client" just to define that collection of four components, but I did not go back through the spec and change the words everywhere.
L: No, that is a worthy goal, and yeah, personally I'm fine with any name. I think people like having the term "client" just because it seemed to help people outside of the project — their brains — because "client" is a pretty common term. But yeah, "client", like, okay.
D: Yeah, the next one is also related to — actually it's not, never mind. The next one is about — this is just something I wanted to get everybody's attention on. I remember — Josh Suereth may remember this — that we postponed defining the environment variable support for specifying the transport for exporters, and this is something we need to take over, you know, from 1.0. It's not super urgent, but I think somebody needs to own it and start working on it.
D: I could take this myself, if we're not in any hurry — there's enough stuff for the following weeks, but I should be able to take it myself. Especially now, I'm a little bit concerned, because I do remember it was kind of tricky, and it was getting trickier, and I'm worried that we would try to push it together right before the next stable release, and it's going to be more painful than it should be.
B: And just to clarify what we're talking about: if I recall correctly, the main problem was that there are, like, multiple formats for Zipkin and Jaeger that you can use, and nobody could actually answer whether or not all the different language implementations could support more than one.
B: I just want to call this out to see if maybe this deserves an OTEP. There's a discussion in the proto repo around some protocol buffer changes in OTLP, and there's kind of a process that some people have been following around how to do deprecations and migrations to new messages — and so Victor proposed a process.
B: I proposed a counter-process, and I'm wondering if maybe we should formalize what the deprecation policy looks like for that repo specifically, so that we all consistently do the same thing and it's clear. So, Josh?
F: I think there is a requirement from the spec that we need to have a versioning and stability document in every repo, which we don't follow in proto — so, FYI. Yes, it's up for grabs; somebody has to start writing that, and as part of that document, I think we should define this. Well, that's what my point was.
F: I would focus more on stable; beta can stay ad hoc, at least for now, until we stabilize.
B: ...to do that? Well, I mean, yeah, formally we're allowed, but the question is: how many people are trying to use metrics, and are we okay breaking every single SDK talking to the collector with OTLP? Like, that — that...
E: We have real uses in production at this point, and I would rather not make a hard, abrupt breaking change, but I think I supported both proposals, loosely speaking. If we can have six months of compatibility for the collector, that'll make it a good story. I'm still going to have to change Lightstep stuff, because I know we don't always have collectors in the path, and that's fine — but let's just not make it look like we're jerks and breaking things, okay?
F: Yeah, I think we can then define some guarantees for beta as well, but they will be way looser than the stable ones — like, six months, with the aggressive part that we can delete after six months. The deprecation thing is fine for beta; for stable, probably we'll discuss other terms and other things.
B: Yeah, yeah, I think that's totally fair, and honestly — six months, even three months, is better than — or sorry, three months is better than nothing. I just think we need to provide some hand-holding here, because we already have users on beta, yeah.
F: Yeah, we will do that. And I don't know if you saw — we explicitly did not care about JSON; if somebody uses JSON, sorry for them. But other than that, for proto, we always provide a path to migration, and we always implement it in the collector — the migration path. So in that regard, I think we are good. We did this change once for traces already, so we followed a certain approach there: I think it was one year, and we are still remaining compatible in the sense that we will not refuse the whole old approach — it will just be a failsafe; it's not going to operate fully, but it will continue to work. So maybe we follow something similar, right? For a year we're fully operating with the previous versions, and after that period is over, the protocol will continue to work.
F: Yeah, we will not reuse the IDs and stuff — we have to define that. And also a good question: should we define what this means for JSON and what it means for proto? Because I think for proto we have way more flexibility, and it's way easier to do deprecation and everything in protobuf than in JSON.
G: Yeah, but, you know, to the customers it's not super clear — at least from our perspective. They don't know if they have different policies, or different deprecation policies, or may have different deprecation policies, so it's going to be a bit complicated, you know. If we want to, I think, favor proto, let's make it very explicit everywhere.
F: Yeah, we have to discuss with JavaScript — I think they are the main users, and the only users that I know of right now. So we probably need to involve Daniel and a couple of folks from JavaScript when we put down these recommendations and stuff. And probably, no matter what, I would ask, maybe in two or three weeks, to have Daniel and somebody from JavaScript join the data model calls that we have, and discuss there what we do with JSON in general and how we make this...
F: ...work for proto. I mean, we already went through the OTEP to say that every repo has to define this. Do you think this is important enough to have another attempt? Sure, we can discuss that in another one, but for me the understanding was that we went through that process, and we agreed that every part of the ecosystem has to have these.
B: A proposal — let's put this — yeah, I guess what I'm saying is, I think the next action item is a proposal, and a discussion in that proposal, to try to shore up some of the things that we've just talked about, as opposed to continuing to take more time in this meeting. I can either put it in an OTEP or directly in the proto repo — you let me know what you want and we'll do it. — The proto repo; if you have to pick, I would start with that. — Cool. Unless someone else really feels strongly and wants to write it, I can.
D: Cool, all right, thank you for that, Josh — thank you a lot. Okay, the final issue — it's a long issue, I guess, so go for it.
L: Yeah, I just wanted to put the details down. So we're kicking off two projects right now on the tracing front. One is the convenience API — that's a fairly simple project.
L: The weightier, meatier project is instrumentation. Just to frame this for people who haven't heard it before: at the beginning of May we expect interns to start hitting the project, and we would like them to write instrumentation — and in general we want people to start writing instrumentation now that the APIs are stabilizing.
L: However, there are a number of open questions related to instrumentation that need some design work in order to be solved. One aspect of it is stuff we've been discussing before, which is finalizing the semantic conventions themselves. That means normalizing them, making any changes we want to the existing conventions, as well as getting feedback from the instrumentation we've already written to flesh out our guidelines for writing instrumentation. It's not enough to just say: these are the HTTP semantic conventions for writing...
L: ...HTTP instrumentation. There are other questions that tend to come up, that we've gotten from people: how many spans do you have? Do you have a logical span? Do you have a span for every retry — like, how is all that supposed to work? Where do the tags go if you have multiple spans? So there's some work there around the core semantics — just making sure the instrumentation OpenTelemetry provides is at least as good as, if not better than, the instrumentation...
L: ...you would get out of older systems that are already out there and have already kind of worked through this. So it would be awesome if someone, or a group of people, were interested in tackling that problem. There's another problem, pretty related to that, which is the stability requirements. Once we've decided what it is that we want, we don't want to say "no changes going forward", because that sounds like a straitjacket, but we do want to clearly communicate to users, like, once...
L: ...we declare OpenTelemetry stable, what will or will not change in future minor releases. Yeah.
I: So, one thing — I think it's important to phrase this correctly. I wouldn't want to fossilize the semantic conventions. I want to make sure that, rather than saying it's not going to change, we say that we have a way to manage the changes — it's controlled. I had a proposal there about that; I think I could work on this one.
L: Awesome, awesome, great, okay — so that's Tigran. That's awesome. The third piece is ecosystem management. This is actually a big, important problem. In the long run there's going to be more instrumentation code than there is core client code, and that instrumentation code will have to evolve and keep up, because the stuff it targets is evolving and changing.
L: So there's a question of how we manage all of that code as it gets written. Do we manage it internally within the project? Does it all live in separate repos, or does all of it go in a big contrib repo? If people donate instrumentation, what's the ownership model for that? And if we tell people it's third-party instrumentation, does it just live outside of the project?
L: Do you want to pick some point people that you'd like to work on some of these projects? Just — yes, so we...
L: Awesome, okay. So anyone else, please write your name down, or contact me on Slack. There'll be opportunities — we'll turn these into, you know, proper working groups and things like that — but I'm just trying to figure out, at this juncture, who's like: yeah, that sounds like something I'd like to dig into. So please reach out. I see Wyatt saying he's happy to assist as a beginner in the instrumentation world — that is awesome. Yes, we also need people to actually try this stuff out.
D: I note, Josh, that you have your metrics data model topic at the beginning, so — yeah.
E: So, as Josh S. said earlier, I think there are sort of three or four hot topics in the protocol; we should just take them one at a time. The biggest one is a coupled issue: Victor made a proposal about taking the number type out of the sort-of top-level oneof, and at the same time several proposals were being made about handling a oneof inside of the histogram, essentially so that we can have different bucketing styles. And at some point we really kind of recognized...
E: ...the same question is happening here: do we want our oneof variation at the top level, or do we want it at the bottom level? There's actually a middle option, which is what Victor put in there, which is to say that you may have a repeated slice of points according to whichever type you have — that might be, you have gauges that are both integers and doubles, and they're parallel arrays. That was one of the proposals, and we could do the same for histograms.
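The "middle option" described here — one repeated slice of points per value type, i.e. parallel arrays, instead of a per-point oneof — can be modeled loosely like this (a Python sketch for illustration only; the field names are invented and do not match the actual OTLP protobuf):

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    time_unix_nano: int
    value: float  # int-valued and double-valued points live in separate lists

@dataclass
class Gauge:
    # Instead of each point carrying a oneof {int64, double}, the metric
    # carries one repeated slice per value type ("parallel arrays").
    int_points: list = field(default_factory=list)
    double_points: list = field(default_factory=list)

g = Gauge()
g.int_points.append(Point(time_unix_nano=1, value=42))
g.double_points.append(Point(time_unix_nano=2, value=0.5))
assert len(g.int_points) == 1 and len(g.double_points) == 1
```

The top-level and bottom-level oneof options in the discussion differ only in where that type switch lives, not in what data is carried.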
F: Before doing that, Josh — do we want to move to the next meeting, start that next meeting earlier, and continue to have the whole discussion there? Or do we want to stay here for 10 minutes and then move to the other meeting? I'll let you decide.
M: Josh, by the way, I was just curious — I added an item: did you guys have a look at the OpenHistogram.io effort? Again, I'm super interested in us evaluating its usability or not.
E: Sorry — yes, thank you, yeah. That was exciting news: last week Circonus released their circllhist library under a new open source license. This is, I feel, one of the best out there, and now it has a better license. That said, I don't want to try to choose winners here — it sort of competes with DDSketch, it competes with some of the other options — and at this point in time we're just trying to narrow down on a protocol that lets us have these options.
E: So I'm glad it's out there. It may be one of the better options, but we have to decide how to support that oneof, and then we can have a oneof for circllhist.
M: Right — a step in the right direction. But the question still is, as Anthony also pointed out, you know, there are some patent provisions and, you know, limitations, so that's something that may not necessarily work for everyone, right? I'm just bringing that up so that folks are aware.
E: I'm not sure whether that's going to fly. What I see in the long run, as this plays out, is that there are really two different categories of bucketing strategies out there. One is this circllhist style, where you have up to 30,000 buckets and it's very precise and high resolution — but if you actually have 30,000 buckets, you're going to get a very large histogram. That's circllhist.
E: So I think it's probably best of breed in that category, given that it has these decimal-aligned buckets. But then there's this whole other category — what the DDSketch authors, and the skewed sketch, and some Google work we know about are doing — which is auto-collapsing of buckets, and it just falls into it.
E: Structurally, you encode it in a different way. So there still seems to be a will to have an auto-collapsing, high-resolution-bucket histogram which is not circllhist, so we're still working on just protocols that can represent them. I don't know what happens in the patent scenario where we've created a protocol that can literally convey circllhist but allows people to break the license by choosing different parameters — because at some level an exponential histogram with certain parameters equals a circllhist.
M: Cool, thanks — thanks for the insight there; very good point you make. Okay, anyway, I just wanted to bring it up for folks to be aware of.
B: Hey, related to "you're not a lawyer" — does CNCF provide lawyer abilities for us to say: hey, can you tell us if this is compatible with the OpenTelemetry license? Because that would be good to find out.
E: In truth, I've had some discussions already, because I asked my CEO to ask CNCF questions of this nature once, and at the time there was an appearance of incompatibility — though that one had nothing to do with the type of restrictions that I've seen in the OpenHistogram announcement. Maybe we should ask again.
M: They do provide that as a service, so definitely we can find out. We can also ask our lawyers — open source lawyers — to actually take a look at it.
F: Okay, let's close this right now, and maybe meet in the...