From YouTube: 2021-02-23 meeting
B
All right, it's five minutes after. Just in the interest of time, we should probably get going. We've got some big, meaty things on the agenda, and I want to make sure we get through them all. So, Riley and Josh, if we can time-box your sections to at most 20 minutes, that would be great. Why don't we start with Riley.
D
Thanks, Ted. So I put a draft document there. The intention is that we send out a blog post trying to clarify what we're going to do with metrics, which includes the API, the SDK, and also the data model and protocol. I want to give people a good understanding of what the milestones are and how we're going to approach this.
D
Thank you. If you look at the doc, I also put a timeline based on the metrics SIG's progress. Currently we're trying to organize the work as four milestones. If you remember, last year when we were tracking the project we used the term GA, and that later introduced some confusion, because we decided that GA is reserved as a marketing term, not for individual projects.
D
We're now using more concrete terms, as Ted clarified in the spec, so we have feature freeze, stable, those things. My proposal for metrics would be that we start piloting this: there's a GitHub top-level OpenTelemetry project board. If you open that link, what you can see today is that there are four projects. Well, some of them are not maintained, so it's not up to date at all. So my proposal would be that we try to use that as a single source of truth, and in the document we can point people to it: hey, in this document we give you the idea, this is what we're currently thinking, but you can always refer to the top-level projects as the source of truth. And we try to split that by the milestones.
D
So, for example, instead of having the Golang project tracking there, we probably need to move the Golang project tracking back to the Go client repo. And for the metrics part, instead of putting everything in one big GA milestone, we will have the metrics protocol and data model stable, which has a deadline of end of March, and also the metrics API/SDK feature freeze by end of September.
D
In this way it's easier for us to parallelize the work, so each individual work stream can have an owner maintaining the project and making sure it's up to date. I just want to get feedback and see if people think this makes sense. And of course, this is a relatively big effort, so just Pat by himself, or me, we wouldn't be able to achieve this. We can probably start by experimenting with the metrics part, and if that works, we can eventually change the other projects to follow a similar model.
B
So just for clarity: some of the milestones we're looking at for metrics are when the work is going to move from small design work and prototyping on the spec to something that's released in the spec, where we're asking maintainers to switch their focus to implementing metrics. I think that, rather than everything being completely done, is the most important milestone that we want to be able to call out on some kind of big board.
D
Yeah, so, for example, just to give an idea: instead of having this Go board, we need to move this back, and instead of having the GA burndown, we probably should avoid using the GA term here and just focus on the spec stability and feature freeze. And also for the collector, the same thing: we should declare what the timeline means. Here's an example of how we can do that. For example, we're saying that by end of March (sorry, this is the latest document), by end of March...
D
We want to have the data model declared as stable, which means that if you want to use OTLP as a data exchange format, you can rely on it without having to worry about breaking changes unless it moves to the next major version. So this gives you a clear description of what we're trying to do, what you can expect, and on what timeline.
E
Okay, what's the bar? And I know that's a hard question, but I was wondering if we could be really aggressive about not adding features, and making sure that anything that does get added is actually due to something like: we didn't understand this language in the initial prototype, or it's absolutely a bug that is masquerading as a feature.
E
Mostly what I want to say is: instead of "the bar will be high," something like "no additional scope will be added to the freeze." Does that make sense? I just wanted to see if that resonates with people as a way to say this, so it's more clear what's going on. The API, when it's marked experimental, solves this problem, and if you have a problem that's over here, it's not the time to try to get that feature in prior to feature freeze, right?
E
Okay. And we don't really have a notion of what a scope freeze means, but that's kind of what I wanted to call out. When you say, well, we might add additional features later but the bar is high, that's a door that's open. I just want to be very sensitive to that. Yeah.
B
Just one quick question, because part of the request here was to move SIG-specific project boards back to those SIGs. I know one of them is the Go project board. I just wanted to check in with the Go people; I see Anthony is on the call. Is this something you're aware of, and is it feasible to move it back to the Go repo, or is that problematic for some technical reason?
C
So I think, as Tyler mentioned in the doc, we created it at the org level so that we could combine issues from both the Go and Go contrib repos, which I don't know that we can do in a repo-level project. So that may be a capability that we don't have if we move it away.
G
I verified that we can't. But another thing I didn't put in there is that it's kind of nice to have it at the org level, because it's communicated from the org level. But...
B
Maybe we can just come up with a naming convention for these things, so all the spec stuff is grouped together and, likewise, the SIG stuff is grouped together.
F
Also, the same reasoning applies to the collector board as well. We have contrib and core, and we couldn't track things across the repos, so we had to put it at the org level. I think maybe you can call it a SIG-specific board or whatever you want to call it, but the ability to track across repos when you have two or three repos is very valuable, and I think you should keep it.
B
With the collector, it might be nice to have that board named, as Riley was saying, in a way that clarifies what the next milestone is that the collector board is aiming for. If it's for, say, a release of the collector, just having that in the name or something like that would probably be helpful.
D
Currently we have reasonable funding: we have three language SIGs that volunteered to work closely with the metrics effort, and given we're still at an early stage, I think the intention is not to have all the languages trying to do the prototype. So we're covered until we reach a certain milestone.
B
All right, maybe we should move on at this point. Please continue to discuss this on Slack or in the comments section of the doc.
A
The only question I had here, Riley, is that you've identified two streams, but I also added the Prometheus work that's ongoing, because it will affect the metrics support in the long run, even though we're supporting or working on the collector first. So would that be defined as a third stream, or is that just...
A
Yeah, that is a good point, but it is one, isn't it? Because, for example, if you're supporting summary, or many parts of the format, would it automatically be assumed that it's available? For the... I mean, you have to add code to the APIs.
A
Okay, I just want to understand where that fits in.
B
I would say there's one place, which is making sure, in the stuff Josh is about to present, that exporting to Prometheus is taken into account. I know he's doing that, but it's good to call out. And two, if there's Prometheus work ongoing as well (I know there's a working group that's working toward goals of getting implementations out there), and if that's enough of a parallel stream, it could probably use its own project board up at the organization level.
I
So, picking up where Elia left off: I've got this draft that I've posted, and I'd like to share it.
I
Briefly now, and hopefully we can talk about it in the next hour: I have tried to write up a summary of where we are with the current OTLP protocol. I'm focusing on the core, what I'm calling the basic data point types, which does not include summary. It's counters, gauges, and histograms that we're trying to work on for now. The rationale is that there's something about the way we merge those points that requires a lot more specification in order to do the things that OpenCensus wanted to do, and that the summary data type, which is literally imported from Prometheus, has to be done exactly the way Prometheus does it. There's no spec except to say: look at what the Prometheus code does here. We can eventually have an aggregator that produces that in the SDK, like you said, but I don't know that it's a priority to get summary export working today, given that the API design is still ongoing. The summary is no different from the histogram, we think, semantically; but how do you choose that in the API?
I
Okay, so to continue my own speech here: I have put together a rough outline of the current data model. The idea is just to describe what we have, and especially to describe how we're meant to use it, how we're meant to process it, what the timestamps mean. There's not enough detail in this document yet, but we can talk about it in the next hour.
I
The idea is that the OpenCensus mission is satisfied here, and we've discussed this: we're not talking about API-level compatibility with OpenCensus, we're talking about data model compatibility. The big deal is that we can do OpenCensus views outside of the process, which means the OpenTelemetry collector can implement those views, and this is the sort of spec that we need to do that. I've got one diagram, and I'll pitch you this for a minute, and then I'll...
I
...let us move on with the agenda. The idea is that we are going to define a data model that rests between two other models. One is the API model, which is how you interact with the metric system as a programmer, and every time you interact with the metric system as a programmer there's an event: there's a number, there's an event, there's an instrument involved at the bottom level.
I
Then there's this model of a time series, which is very much exactly like what Prometheus thinks of it as; it's the Prometheus remote-write model, idealized a little bit. In that model we have much stricter requirements about what the data can be: it has to be ordered, it has to be identified using external labels if there's any duplication, and it doesn't permit what I'm calling overlapping points.
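The two models Josh contrasts can be sketched in code. This is a toy illustration only, not OpenTelemetry types: a `MetricEvent` stands in for the API-side model (one event per programmer interaction with an instrument), and a `SeriesPoint` stands in for the time-series side, where events collapse into one cumulative point per identified series.

```python
# Hypothetical sketch of the two models: API-level metric events vs.
# Prometheus-style time-series points. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class MetricEvent:
    instrument: str   # which instrument the programmer used
    labels: tuple     # sorted (key, value) pairs identifying the stream
    value: float
    timestamp: float

@dataclass
class SeriesPoint:
    series_id: tuple  # (instrument, labels) identifies one time series
    start_time: float # start of the cumulative window
    end_time: float
    value: float      # running sum, for a counter-like instrument

def aggregate(events):
    """Collapse raw API events into one cumulative point per series."""
    points = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        key = (ev.instrument, ev.labels)
        if key not in points:
            points[key] = SeriesPoint(key, ev.timestamp, ev.timestamp, 0.0)
        pt = points[key]
        pt.value += ev.value
        pt.end_time = ev.timestamp
    return points
```

The point of the sketch is that many API events map to one ordered, identified series point, which is where the stricter time-series requirements come in.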
I
So this document describes how we think about automatic data transformations: things like removing labels for spatial reaggregation, things like widening time aggregation, and all of these can be done inside the collector, we're trying to say, or inside the SDK. I've given some example use cases, really trying to cover the case where there's a local agent that can do a lot of work for you, particularly to aggregate local processes into a single metric stream. And when it gets down to the bottom...
I
...here we really get to talking about the requirements: what your obligations are when you use this format, for keeping the data in a shape that we can use semantically. This idea of a single writer is the biggest idea that we have here, and then there's something about overlap: what happens when two processes try to write the same metric?
I
What should a collector do? And I want to point out that a lot of these things are things that just cannot happen in a Prometheus system, because of the way you're pulling data. So in order to integrate a Prometheus-style pull, we have to talk about this stuff, and this is my first draft. I hope we can talk about this in the next hour, and I'd like to maybe move on to questions or the next agenda item.
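The single-writer and overlap problem Josh raises can be made concrete with a small sketch. This is a hypothetical illustration, not collector code: two writers reporting the same series with intersecting cumulative windows cannot be merged safely, so a consumer would want to flag the series.

```python
# Toy check for the "overlapping points" problem: the single-writer rule
# says only one process should produce points for a given series identity.
def windows_overlap(a_start, a_end, b_start, b_end):
    # Half-open interval intersection: touching endpoints do not overlap.
    return a_start < b_end and b_start < a_end

def check_single_writer(points):
    """points: list of (series_id, start, end) tuples.
    Returns the set of series whose windows overlap, i.e. likely
    violations of the single-writer rule."""
    seen = {}
    violations = set()
    for series_id, start, end in points:
        for s, e in seen.get(series_id, []):
            if windows_overlap(start, end, s, e):
                violations.add(series_id)
        seen.setdefault(series_id, []).append((start, end))
    return violations
```

As noted in the discussion, a pull-based Prometheus scraper can't produce this situation, which is why a push-based protocol has to spell out what a collector should do when it occurs.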
J
This is great. I want to review it in detail. What's the deadline for providing meaningful feedback?
I
I'd say there's no deadline. I think our goal is to finish this data model question by the end of March, and I think there are two tracks here: one is to specify this type of stuff, and the other track is just to finish up the loose ends that we have in the protocol. Those are things like labels-versus-attributes terminology, and min and max for histograms, which is not as easy as it looks. And, well, Bogdan has an open change about...
I
...you know, histogram less-than-or-equal versus greater-than-or-equal bucket boundaries. It's that type of stuff that we're on to now. And then the last one is exponential histograms: is it a separate type, or is it a separate option? So that's the level of finish that we need on the protocol, and I hope that's the scope for this March deadline.
B
Yeah, so in case people aren't aware, the data model SIG has moved to right after this meeting. At 9:00 AM we'll get off this Zoom call and jump onto that call, and then we can dive into the details.
B
Okay, moving on. I'm up next, so let me share my screen here.
B
So I've put together a tracing roadmap. We went over this in the maintainers meeting yesterday, and it's mostly the same people here, so I won't do a deep dive into this. But at a high level: we have all this metrics work in flight.
B
We expect, from the rough roadmaps we've gotten from Riley and Josh, that we have about a five-to-six-month window before that metrics work, as far as building APIs and SDKs goes, hits the spec and is released as something that maintainers will want to implement and try to get to a release-candidate stage (beta, then release candidate) as quickly as possible. We'd like to use the time in between to clean up the existing offerings that we've put out there, now that the API and SDK are stable.
B
There's breathing room to work on this stuff. Some of that work is around instrumentation, which was hard to build when the API was a moving target, but is quite a lot of work; and some of it is just about improving the experience for new users, focusing on that first-time user experience.
B
So we identified four areas, just from user feedback, that we think are important here. One is simplifying the installation experience.
B
This is an area where I would like to gather feedback from maintainers: ideas about how they think they could simplify this, or where the tricky issues are that we might want to discuss as a group. It feels like a lot of this may be somewhat implementation-specific, though, so this is an area where I think getting a lot of feedback from maintainers would be good.
B
We can maybe organize a feedback session for the next maintainers meeting to kick this off: just do a round of "how do you think it's going in your group" as a group exercise. But I would love any feedback maintainers or users have on this point. Please start jotting it down somewhere so that we can compile the notes.
B
So that's my main ask for people on the call today: just start compiling notes about this, so that we can put this together and form a plan. I'm trying to get a plan together for these over the next two weeks.
B
Sometimes our error messages are a little vague, but even beyond that, it's difficult to debug OpenTelemetry sometimes, because there are a lot of what you might call silent failures that can happen when you set up. For example, your propagators are misconfigured, or some piece of important instrumentation isn't actually installed, things like that. So we need someone to help design what a good diagnostic readout from OpenTelemetry would look like.
B
It
would
be
nice
if
it
was
somewhat
self-similar
across
languages,
but
the
main
target
you
could
think
here
is,
if
someone
says
my
jumps
into
slack
and
says,
like
my
open,
telemetry,
doesn't
work
being
able
to
ask
them
to
say
well
type
this
command
or
do
this
thing
and
then
copy
paste,
the
readout
into
slack
or
into
the
issue?
B
What
what
would
that
look
like
for
open
telemetry
just
so
that
you
could
get
a
sense
of
what
their
system
was
doing?
This
is
a
thing
that
I
think
will
will
help
people
kind
of
troubleshoot
their
their
getting
started
situation.
It's
especially
important
for
languages
where
there's
like
a
lot
of
magic
going
on.
I
think
this
is
like
well
contained
in
the
java
agent,
but
in
other
languages
you
know
it's,
it's
not
as
first
class
a
citizen,
some
of
the
stuff
we're
doing
so.
Some
of
it
is
language
specific.
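The "paste this readout into Slack" idea can be sketched as a single function that collects the setup into a printable report. The fields, warnings, and format below are invented for illustration; a real spec for the diagnostic readout would define them.

```python
# Hypothetical diagnostic readout: one function a user could run and paste
# into a support channel. Field names are illustrative, not a real API.
import platform

def diagnostic_report(config):
    """config: dict describing the SDK setup (exporter, propagators, ...)."""
    lines = ["=== telemetry diagnostic ==="]
    lines.append(f"python: {platform.python_version()}")
    lines.append(f"exporter: {config.get('exporter', '<none configured>')}")
    propagators = config.get("propagators", [])
    lines.append("propagators: " +
                 (", ".join(propagators) or "<none: context will not propagate>"))
    if not config.get("instrumentations"):
        # Surface the silent-failure case explicitly instead of staying quiet.
        lines.append("warning: no instrumentation installed")
    return "\n".join(lines)
```

The key design point, matching the discussion above, is that misconfigurations that would otherwise fail silently (missing propagators, no instrumentation) are called out explicitly in the readout.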
B
My call to action here is: we do need someone to run point on this, or at least people interested in providing a spec for this. So my ask, after I get to the bottom of this, is: if there's anyone interested in forming a small working group on this, let me know. When I get to the bottom, I'll bring that ask back up again.
B
The next bit is a convenience API. We know that the current API is really for instrumentation authors, who potentially have to deal with a lot of different edge cases. But for the application developer, you can presume they have all of their instrumentation already installed: their framework is instrumented, their libraries are instrumented.
B
Is there a more convenient, simpler thing we can give them, so that when they're decorating their application code it's just cleaner, more elegant, and simpler? A great example of this is annotations, for languages that support annotations; it's really useful. But even in languages that don't, it's easy to see places where we can make a more declarative API that's simpler and more convenient if you presume there's an active span, or that handles language-specific idioms that you could potentially wrap up.
B
To some degree this is language-specific work, but there has been a request from some maintainers to see at least something in the spec describing this, perhaps just some naming conventions to advise people on. So this is another area where it would be great to have a small spec group to champion adding some amount of this detail to the spec, to make maintainers more comfortable.
B
I should emphasize: I don't think this is an area where everyone has to do exactly the same thing, because these are higher-level functions. There isn't some risk that the convenience you're adding is going to mess up our model, right?
B
We had to be very precise about specifying the lower-level API that we provide today, because if you get that wrong, there's some edge case that you might cut off. But with this higher-level stuff, with the lower level stable, it's sort of: whatever is convenient is, in my opinion, fine to add, because you're not at risk of actually changing the functionality of OpenTelemetry tracing by just adding some convenience on top of it.
B
So this is an area where I think we need a lot of maintainer input and a larger working group to flesh out. This includes simply adding more instrumentation coverage: we don't have as much instrumentation as we would like, because we've been focusing on stabilizing the API, but now is the time to expand that. We also want to improve the quality of that instrumentation.
B
That means improving our semantic conventions, and it also means ensuring that the instrumentation we're writing conforms to the latest semantic conventions. But there's also a general feeling, which I think is correct, that we as core maintainers don't want to write and maintain all of this instrumentation, and so we want to come up with an effective way to encourage other people to write this instrumentation and contribute it. That has a couple of moving parts; it's not as simple as just writing documentation for people, though we do need to do that.
B
There's also the issue of: do our auto-installers trust this instrumentation? If so, where does it live? Do we want everything living in one giant, growing contrib repo? How do we do testing for this stuff? Because we don't want the case where one plugin or one piece of instrumentation is broken and that's holding up the bundling of all of these things. So there's some general work that we have to do here.
B
How are we organizing this instrumentation into an ecosystem, and how are we presenting it to people? This is, I think, the biggest long-term project for this project as a whole, and it's going to be something we're working on for some time. But we need to figure out a way to eat the elephant, and the most important thing we can be giving people today, I believe, is just improving the quality and achieving more coverage of the basic stuff.
B
I think it is within our ability to write good instrumentation for the most popular things in each language and to figure out what that quality actually means, and we kind of have to figure that out before we can really turn around and start asking lots of other people to donate instrumentation. Super long term, we would like people to start writing native instrumentation, but we're not talking about that as part of this project. This is just: how do we maintain the ecosystem that we're going to provide?
A
Ted, I had a question on dynamic configurations, or just the configurations that are available. You mentioned supporting popular frameworks, but should we make this clear? I think it's not called out, but it is a major part of the getting-started experience.
A
What do you mean by dynamic? That is, the typical configurations used for being able to spin up. Maybe that's more applicable specifically to the collector, but it really is general: how do you actually instrument seamlessly, including setup?
B
Right, so there's part of the installation experience, which is: maybe part of it is OpenTelemetry out of the box. Can we provide a simple one-liner? If you presume you are talking OTLP to a collector, which is the standard thing we'd like to propose people do, can that just be a one-liner setup for the SDK? Right, yeah.
B
Another
thing
that
we
could
potentially
improve
on
that
front
is
more
standardized
or
more
idiomatic
configuration
options
like
right.
Now
we
have
some
environment
variables
and
you
can
do
it
in
code.
You
know
what
about
python
ini
files
java
system
properties?
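The layering being discussed (code defaults overridden by environment variables, with room for file-based sources later) can be sketched as follows. The `MYTEL_*` variable names and the default values are purely illustrative, not the real OpenTelemetry environment variables.

```python
# Sketch of layered configuration: code defaults, overridden by env vars.
import os

DEFAULTS = {"exporter": "otlp", "endpoint": "localhost:4317"}

def load_config(env=None):
    """Merge code defaults with MYTEL_* environment overrides.

    Passing an explicit dict instead of os.environ makes the merge
    testable; an ini-file layer could be inserted the same way."""
    env = os.environ if env is None else env
    cfg = dict(DEFAULTS)
    for key in cfg:
        override = env.get("MYTEL_" + key.upper())
        if override is not None:
            cfg[key] = override
    return cfg
```

Each additional source (ini file, system properties) would just be one more layer merged in a defined precedence order, which is the coherence question raised above.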
B
We will be getting requests from end users, too. We want to provide a default configuration for all of this instrumentation, but it's definitely going to be the case that people are going to want to tweak the data that's coming out of their library instrumentation, and we need to figure out a coherent strategy for dealing with that configuration.
B
We need to figure out where we're going to put it, advice for how they should write it, the semantic conventions they should use, and the way they should offer configuration for it. So this is definitely a thing where I think we're going to need a lot of input, which leads me to my ask: do we have volunteers, either at a personal level or as a manager or an organization, who are willing to put leadership effort into any of these particular areas?
B
Some of them need some amount of spec work after getting feedback, and the other two areas, it's, I don't want to say cat herding, but it is just going through a standard process of gathering requirements from a bunch of people and synthesizing that into a backlog.
B
So I'm looking for people to help: maybe take on one of these particular projects to project-manage it, or provide some resources behind it.
B
Yeah, well, and hopefully this stuff ties in with that, especially the installation experience and things of that nature. So yeah, I don't want to overburden maintainers, but that's sort of why I'm asking for some project management help, because I do completely agree with you, John. We have to...
B
We
have
to
to
to
scope
these
down
to
something
that
we
think
can
fit
into
the
amount
of
time,
and
I
think
it
would
probably
help
our
community
if
we
did
try
to
to
some
degree
as
a
group
go
through
them
rather
than
trying
to
vaguely
do
all
four
in
parallel
like
if
we
could
just
focus
on
the
installation
experience
first
and
then
the
convenience
api,
or
maybe
we
want
to
focus
on
the
convenience
api
first
and
diagnostics.
B
Second,
but
we
we
want
to
over
the
next
two
weeks
put
this
into
a
road
map
so
that
on
our
website,
we
can
have
kind
of
a
gantt
chart
that
shows
people
where
each
one
of
we
expect
each
one
of
these
initiative
initiatives
to
begin
and
end.
E
Can I propose a methodology? (Yes, please do.) Right, so: evaluate each of these by the risk to adoption, like, will people not adopt OpenTelemetry because we didn't invest in this thing, and multiply that by how hard we think it is, and you get a priority. There's some other risk factor in there you're supposed to use, but that's, I think, simple enough for open source. Yeah, I agree with John that we should have a priority list of these things, especially for the instrumentation piece.
E
I
expect
that
to
occupy
the
this,
this
sig
specifically
for
a
very
long
time,
yeah
on
a
huge
roadmap,
and
so
maybe
like
just
limit
it
down
to
say
you
know
what
we're
going
to
focus
initially
on
http
instrumentation,
because
http
is
the
web
and
that's
it
we're
just
gonna
do
http
to
start
with
and
then
once
we
figure
out
how
that
works
like
we'll
go
to
the
next
one,
but
just
things
to
make
it
smaller
and
things
to
keep
it
within
that
six.
You
know
that
june
time
frame.
A
Yeah, yep, I agree, Josh. I think that we have to be very clear about taking one area at a time, as John said, and building out a backlog. Again, Ted, do you think it's worth having a small group, as you said? I'm happy to volunteer in terms of identifying a clear set of requirements.
A
You
know
collating
them
together
from
all
the
main,
different
pigs
and
then
putting
it
together,
but
I
think
that
we
need
more
than
one
person.
It
should
be
a
group.
B
I think there needs to be a small group that at least runs through this process of requirements gathering and coming up with a backlog proposal for maintainers to comment on. I think we need a lot of maintainer input and user feedback on this stuff, but we can't make a decision with 30 people, so it's better for a small group to come up with a proposal and let the maintainers provide feedback on that proposal.
B
Yeah, if people have a potential TPM or other people at their organization they could tap, just on a temporary basis, to help us run through a process (not to manage OpenTelemetry forever, but people who might work with you, who you think would be helpful in running through these processes to get this stuff up and running), I think that would be great. I'm going to do my best, but like I said, I'm spread pretty thin, and so I just want to identify this:
B
The
the
way
that
we
fail
is
that
it
no
one
has
enough
time
to
actually
run
this
project
management
process
and
like
work
this
into
a
backlog
once
it's
into
a
backlog
for
each
project.
I
think
we
can
like
chew
through
it
using
our
more
regular
processes,
but
but
actually
like
booting
this
up
and
getting
these
things
going.
I'm
I'm
asking
for
help.
B
So
please
please
bring
this
back
to
your
organization
or
your
manager
and
ask
if
you
can
get
time
to
work
on
it,
and
this
is
a
general
you
to
this
group.
So
thanks!
That's
that's
all
we
have
time
for
on
this.
B
You
can
reach
out
to
me
directly
on
slack
if
you
have
issues
or
concerns
or
want
to
start
collaborating
on
a
project.
So
I'll
talk
to
you
on
slack,
okay.
Last
but
not
least,
jonathan
johnson
ivanov,
asking
about
an
eta
for
the
stable
release
of
the
java
sdk,
and
we
have
an
answer
here,
which
is
tentatively
friday
february
26
for
tracing.
Is
there
any
follow-up?
John
then
you're
on
the
call
want
to
ask
for
the
questions.
B
Yeah, so just FYI: there is a Java SIG, and there's also a Slack channel on the CNCF. If you go to our community repo, you can find the links for that CNCF Slack channel, and that's another great place to ask questions.
F
To say, at least, that I did a lot of reviews of the Java SDK and APIs, and probably I upset a lot of people there, but that's a different story. What else do you want to know? I mean, for the moment.
F
I think that PR is stuck: the process PR, for good or for bad. There are some pros and cons, but not enough approvals from the TC for me to move forward with that process. I would ask the TC members to say yes on that, and once we have four out of the seven TC members saying yes, I will move forward with that idea.
M
By the way, before I forget: I will be trying to go through this in a very formal way for Python this week, so I hope that I can gather some very initial feedback from that.
F
I would say that Java, even if they didn't want to, followed this process, because I cared so much about it, and I spent a lot of time doing the reviews and stuff. But it would be good for one of the TC members to go over the APIs and really check the matrix, because a bunch of times things were added to the matrix but not verified. And I'm not saying that people are...
F
On Java we can probably put up a template already; we have all the things there already. So we can try that, and I can try that as a maintainer, but it may be good to talk to John. I'll talk offline with John and maybe put up the template, just for reference, to see what we did.
F
Yeah, I will talk to John, and if he has time, we'll start a draft PR with the template filled in, and I will add a bunch of the PRs and issues that we discovered during this review, to show people what kinds of things we discovered and what the resolutions were.
B
Great. Any further topics? Do we want five minutes back before going on to the next SIG? Yeah, let's do that. Thanks.
L
I have a meta topic that has actually shown up a lot during this process of blogging and reviewing things, and it is this: the specification is very, very clear, I believe, that the API should never throw exceptions in the face of bad input, but we simultaneously don't specify what SDKs should do in the face of bad input.
L
So
we've
got
ourselves
basically
in
a
place
where
the
the
sdk,
the
spec,
is
extremely
clear
that
we
should
never
throw
exceptions
in
the
face
of
bad
input,
but
we
don't
actually
say
what
we
should
do
in
the
face
of
bad
input
like
if
someone
passes
in
a
null
context
to
an
a
to
an
api.
That
requires
a
context.
What
should
we
do
like?
We
don't
say
so?
B
This is totally what I want to do for that group, by the way. Is Nev Wiley still on the call? Because he was requesting to help with that work. So thank you, Nev; I'll reach out to you.
B
There's "what is my setup," and the other stuff is: when things are going wrong during runtime, what do we do? I completely agree that's underspecified in the spec right now, but yeah, if you want to open an issue on it, that's great. I think we should bundle this up into a track of work so we don't lose our minds, but good feedback.
B
Okay,
let's
take
a
quick
break
and
then
I'll
see
hopefully
most
about
everyone
on
the
metrics
call,
which
is
happening
right
after.