From YouTube: 2021-02-16 meeting
Agenda
There's a meeting right after this, just FYI. This is the 9 a.m. meeting that's focused on the metrics data model; we're here right now at the ADM meeting, so I'm going to talk about things that are not specific to the metrics data model.

E: It's five minutes past, so we can get started. I think we need to rework how we deal with this time box. For one, this was scoped down to metrics; I think for this meeting it should just be general time-boxing for P1 issues. But it seems a little low value: people don't quite know what to do with this section, so I'm interested in feedback on how we can review or go over the backlog as a group in a way that might be more productive.
E: I think this will also be helped by having roadmaps generated for the various spec interest groups (the data model group, the metrics API group). I'm going to work with Alolita this week to put together a tracing roadmap, even though the spec is mostly completed for tracing.

E: We still have a lot of work to do in the realm of improving the installation experience for people and increasing our instrumentation coverage, so I'm going to try to get a roadmap for that together and present it for people to review in time for the maintainers meeting next week. That's a goal of mine and Alolita's, just FYI.

E: I'd love feedback on that front. For the issues that are currently marked "to do", a number of them have assignees, and I wanted to run through them really quick and identify whether any of these are actually in flight with a linked PR.

E: So if someone's actually working on one of these, could you just mention it? I'm going to run through the names really quick and then we'll be done with this section. Okay: adding shutdown to the metrics exporter interface.
E: Okay, do you mind filling that in real quick with the details? I don't want to close it on these people with an incorrect... was the...

H: The PR that deleted the SDK text from the spec entirely was from Riley, and I think the goal is to rewrite that. So we either keep it and say that it's possible until the SDK work is in place. Yeah.

E: Got it, I see what you're saying. Since we're rebooting this, we're just gonna say...

E: Right, I mean, I kind of imagined here that we need shutdown in general, but yeah, maybe we can keep this for now.
D: Yeah, so Ty, how about you just assign this to me, and I can go through all the current metrics issues this week. I probably need to update the issue template, just to clarify when people want to open an SDK issue: it will become a feature.

E: Ad hoc in this meeting, but I think we need to get those roadmaps together so that we have a framework for figuring out how we want to relabel things.
J: Yeah, I think that once we have a roadmap, or something that gives us an idea of what's important next, we can have a three-hour session like the ones we used to have on Fridays and just go through the issues. Well, anyway: first the roadmap, I guess. Yeah.

E: Yeah, so there is a standing meeting on Friday, 8:30 a.m. Pacific, to do this kind of triaging. Do we want to plan to do that, then?

E: Data model and collector: just going to mention that, even though we're not necessarily focusing on this. Thank you, people, for correcting my horrible spelling. Then we're going to create a new labeling system.
B: Maybe specifically call out that it's going to be about metrics; I mean the new labels, right. And maybe if we do that, do the same thing for the logs as well. If we are going to use the name of the signal in the labels, then it can be done the same way for the logs; it can work the same way. So if we say "required for metrics GA", then it's unambiguous, and we can also have "required for logs GA" in that case.
E: Yeah, totally. And we are metrics focused; there is this other pile of work that we also need to track.

E: We have kind of been half-passively tracking this with issues, but we haven't been focusing on doing the work. It's things that are less about specifying and more what I would call the installation experience, which can be improved across the board: making it easier for users to get started, increasing the coverage of instrumentation, figuring out how we're going to manage that instrumentation, and (what was the third one? there's another one) oh, things more like CI/CD and performance tests; just generally productionizing our pipeline.

E: We could continue to talk about that work in this meeting, but I'm open to proposals for how we keep track of it, because it's less about coming up with a spec and putting it into the specification repo. Some of it's fairly language-specific, and some of it's just about management and structure of the project. So I'm going to brainstorm with Alolita about this, because she's interested in this problem.

E: But if people have ideas for how to manage this kind of stuff from prior projects they've been involved in, or want to make a proposal, please do so. We would love ideas for ways to make sure that this work is very public and consumable, and easy for the maintainers to have an idea of what they should be focusing on, so they're not getting spun out context-switching all the time.
I: Yeah, I don't have anything good to propose, but yes, I will write up my thoughts as soon as I can. Yeah, sure. So I am interested, okay.

E: Great; if anyone else is interested in participating, just put your name here.
I: Yeah, so that's a fun story. Do you want to hear why I can't remove "big nerd" from my name? You got two minutes. I literally can't get rid of it; it's like a mistake of my youth, but Google literally thinks that's my name and I can't fix it. So the nickname is Josh Suereth; the real name is "big nerd". Just so you know, so we're all clear.

I: Oh my, that's... yeah. That's the story, that's the story! Yeah, anyway, we should all have a beer when we can meet in person and I'll tell you all about it.

E: That would be fabulous. I have my own Gmail issues; I'm not going to bring them up on this call, but I have no idea who to contact at Google about them, and it's really frustrating.
E: But anyway, I hope that between triaging this stuff on Friday, getting these roadmaps together, and getting some proposals for ways to present this other than just this little box that we don't quite know what to do with, in like two weeks we'll have something that's a lot more public and a lot easier for people coming into the project to get a handle on where we're at and what we're doing.

E: We're also going to start trying to blog more about this stuff and, in general, inform the public, especially with the spec going 1.0 and release candidates coming out. We're starting to see interest, but we've been kind of quiet publicly around explaining ourselves: where we're at, our design decisions, and things like that. So we need to start doing that a little more in public, in places where people can read it, since we are kind of a big, complicated project and it's easy for people...
E: Yes, yes. I think, when we're looking at these roadmaps, it's also about doing the work; it's not just about writing specs.

E: So for the metrics work, for example, we need to be prototyping this stuff before we go back, add it to the spec, and ask every maintainer to go implement it, because we don't want them to thrash the way we're creating some thrash with the tracing work.

E: And then some of this work, like I was saying, doesn't necessarily even have a big spec component, but it's definitely work that we've been kind of pushing off now that the tracing APIs are settled, and that we need to start focusing on. So we kind of need a roadmap just to get that all together, so that it's easy to go to the maintainers and say:

E: we would like everyone to focus on this stuff right now. Then they can also have expectations about what's coming down the pipe, and be able to manage their backlogs and manage expectations with their users. We don't want to be dictating this to the maintainers, to be clear; we just want to create something more systematic, so that maintainers are not having to guess, or feeling like they're context-switching all the time.
K: Okay, that sounds related to the question that I'm asking, and I think I won't take up more time from the time box. Thank you.

E: Sure. I mean, if you have another question, please, please go for it, because this is important. Well...
K: Let's say it's the metrics, let's say whatever it is: will we, as a community, know when the SIGs should be expecting to make their conforming releases? Because with trace (at least, again, I've been paying somewhat close attention to Go) it seemed like we were a little bit caught by surprise by the expected timing of the RC, and so I'm trying to understand.

K: Are we going to remedy that? Is that a problem specific to one SIG? What's the systemic solution here? And this is a planning question, not a what's-in-the-spec question. Yeah.

K: We were not aiming to release around that time, right, and so I think it's totally fine to say we were aiming to release and we missed; no one's surprised that we missed. But again, to me as a relative newcomer, it seemed like there was a little bit of surprise, and that is the worrying part.
E: But first I want to apologize: it was actually literally a question. I wasn't trying to pressure the Go group to release an RC. Because we're making an announcement today about the 1.0, and various groups are working on RCs, I just didn't know the status of Go. I apologize if that came off as pressure, but it was literally just a question: is the Go group about to release an RC? If so, I'll include them in the list.

E: If you aren't, then I won't. It wasn't that you're missing a target, so I'm sorry if it came out that way.
K: To me it didn't come off as pressure. Again, I think Tyler's the actual maintainer (or Tyler is the maintainer who I see in the room right now), so he would know better, but to me it didn't come off as pressure. I'm actually wondering, though: wouldn't it be desirable if our system did say, "oh, the press release should contain a significant list of languages"?
E: All right, yes. All of these groups are kind of at different stages. But I think at the heart of your question is that we need more of a roadmap, so that maintainers can focus, right? If they have a sense of...

E: ...where the project in general is trying to go, then they have a sense of what work is going to be coming down the pipe, and we could maybe align what we're working on, especially for the client implementations that are all kind of (I don't want to say at the head of the pack, but) eating off the head of the specification right now. You know, there are some working groups that are still implementing tracing, right?

E: So they shouldn't feel any pressure to keep up with this.
E: But one thing we're looking at doing to help with that is monthly releases of the spec. The spec work has kind of been blocked in the past by big things getting put into the spec and a lot of discussion happening there, so we're going to try to move that work out into more prototyping and design coming from these working groups.

E: So when it does go into the spec it'll be clean, and in the meantime there can be incremental improvements going out the door. Hopefully that means keeping up with the spec will be less thrash and less of an issue for maintainers in the meantime. And hopefully it also means that when we are prepared to say "okay, we want to go implement the new metrics SDK", or something like that...

E: ...it will come with some heads-up, and there will be a lot of work already done, so it should be a more straightforward process. But I completely agree: we don't have enough of that kind of drumbeat and public roadmap so far, partially because we were just heads-down focused on tracing, and partially because we're still kind of growing as an organization and learning the ropes.
K: Thanks for walking me through that, Ted. I absolutely was not trying to place blame, but I do appreciate the transparency.

E: That's awesome, and it's always important to ask these questions, especially when there's confusion, because if you're confused about this, then surely there are a bunch of other people on the call who are confused about it as well. So...
K: As a suggestion for a future process: I think the Eclipse Foundation has release trains. There could be something interesting there, and I could imagine something where the spec creates a roadmap, and then other groups sign up to tie their milestones to spec milestones, in a public and ahead-of-time manner.

E: We can give an idea of when we think spec work, or other things, will be done, and then groups can sign up to be part of that release train, so that we have a complete cycle of being able to go to the public and say: we expect that this stuff is going to be implemented in these languages.
I: Ask libraries to try it out; the only thing you're trying to do is not give it to end users, but get maintainers to give you feedback that "this language feature is terrible and it's broken everywhere", right? That is the churn that we were trying to avoid, and you need to find a way for them to do this in a non-release channel, which I think is kind of important for them to experiment. But I totally agree with this: if the spec can put milestones out saying "we feel like this piece of metrics is done", or "this piece of whatever is done",

I: maintainers can look at it and say "I think this is going to be fine", or "I need to dive in and report bugs". That helps a lot, but the real key is that there's a release candidate of the spec, where the spec is effectively locked, and then there's a long period for SDKs to catch up and report breaking issues before you actually release 1.0; you want to wait for all the SDKs to be done, right?

I: So I want to add that as a thing we might have to do in the future, just to avoid breaking spec changes. So anyway, I'll write down my thoughts as well. This was a really good discussion.
E: Yeah, if people have more thoughts on this, they're totally welcome. If I missed some points in the notes, please add them. But yeah, coherent proposals, even if it's just a paragraph, I think are helpful. It's good to know what people think is missing, or what they think would be helpful for them, especially maintainers, since I feel like they're the ones in the hot seat having to manage this the most. Okay, we're at 8:30.

E: I'm going to move on so that other people can have a chance to get their issues addressed. So, Nikita: limbo, limbo.
A: Then, when I asked again whether this can be merged or not, again there came a comment about that pull request, that it's not good enough, but that commenter didn't want to block the pull request either. So currently I am in, like, total limbo. Should I do anything with that? Can it be merged? Is it bad and shouldn't be merged?
E: Without getting into the specifics of this one (and if people do have specific ideas about how to resolve this one, let's get to that in a second), I would say generally there are three ways. If what they want is just already a dead horse that's been beaten, and there's a bunch of approvals, it might be okay to say: you know, yes, we can't make everyone happy, but in general we're going this way.

E: The other option, which I think is common: often there's 90 percent of a thing that people agree on, and then there's 10 percent that's contentious. If it's reasonable, pull that 10 percent out, commit the 90, and then open up another issue just to focus on that last ten percent.

E: I think that's a more inclusive process. We've certainly found situations where that at least helps get the work out, but it's not always the case. Sometimes the contentious bit is the important bit, and we don't want to be putting things into the spec where the spec couldn't be released with what got committed into it.
J: That's the usual way to proceed, and that's a matter of, you know... Actually, for this one I would like to be assigned, so I can help drive this one way or another. Yeah, thank you, Carlos. So, Nikita...

H: I would ask you a different question, and remove your interest in getting this in, because I know you have at least a small interest in getting this in. If you read all the comments and all the concerns, and you were completely independent, would you have concerns, or would you do things differently?
H: That's usually what I try to encourage, because you ask me as a maintainer. As a maintainer, you should usually put on the hat of a person that doesn't have any interest in getting something in, and try to understand: is this something that we really care about? Is this something that we want to solve, and is this the way to solve it? So there are two different things. First: is this important for us to solve, yes or no? We have to have a conclusion on that.

H: But if I don't have experience, and I don't understand something very well, I'm not sure if I should put my stamp on it and accept it. I don't know; maybe I should invest more and learn this. But in general I felt that this was a problem for me with a couple of PRs: it was not easy for me to understand, from the first, let's say, 10 comments, what this is about and why we need it.
H: I kind of dropped the ball on that specific PR, but maybe there is some kind of process we can adopt, for maintainers and for everyone, to be able to easily understand the need or the importance of something first, and then review the PR and so on. So I don't know what would be the right thing; maybe others with experience of these cases... I mean, like, can I...

A: Riley correctly posted it to the Zoom chat: we already have a described process for how to get a pull request merged. In this particular case, this process broke: we have enough approvals, there are no requested changes, and two working days have passed.
H: No, I disagree with that. There is another process which says the maintainer should merge all these things if they feel comfortable, correct? There is a rule about who can merge things, and that rule is not "you blindly merge things"; it says: if you know you're going to maintain this. So where is that process?
I: Approvers... I think there's a meta issue, if I can raise it. The meta issue is this: this is around semantic conventions. Semantic conventions kind of require deep knowledge of a particular niche, and in this case we're trying to define a semantic convention that applies to app servers and HTTP engines and that sort of thing. I really love what this is trying to do. The idea of semantic conventions, though, has me nervous in general (I've even contributed some myself, and I'm nervous about them), because this is, like, our political "here's

I: the way we want everyone to see the world of telemetry", right, and we all need to agree on it. So there's this fuzzy "does everyone agree this is how we should see the world?", and then, combined with that, you need expertise. Not everyone here is an expert in gRPC; not everyone here is an expert on HTTP; and the person running the semantic convention needs to be. And we don't have special spec approvers around semantic conventions.

I: Right, it's a whole class of thing that's really important to the community that's maybe missing, and I just want to call that out. PRs like this PR, I think, fell afoul of that: it's in that semantic convention space, so I expect high political disagreement, because we're trying to define a best practice, and people who don't have any practice don't know what a best practice would be. So you're kind of falling between the cracks of the process we have defined today.

I: So I think, maybe, if we write a process around how semantic conventions come in, that could help, because I think it's highly political to begin with, so we need a political process around how they come in. I don't like adding process in general, personally, but in this case it might actually be worth it, just by the nature of what semantic conventions are.
B: One more thing to add there: if you look at the discussion in the PR, you will probably see that there is some nuance which GitHub does not allow you to express. Some review systems actually allow you to do plus-one or plus-two, right, and here it's not possible. So if you look, for example, at my approval: I said I don't object to this, I don't see why it shouldn't be merged, it looks reasonable to me, and I'm giving my approval. But that's not a strong approval, right?

B: You see, there is no way to express that, and our process does not have a way to express these nuances, and I think that's the reason why this is not moving forward. People don't feel strongly; whoever did the approval is not strong about it, and Yuri also (Phil felt that he's also on the fence, from what I'm reading in the comments), and this is the difference, right?

B: So we have this formal definition of "there are two approvals, just merge", but in reality these approvals are not the ones that we're expecting in the process. So I think I agree with you, George: maybe there is, unfortunately, more process needed here for these specific cases, more refinement of the process, where people can say "I'm strongly plus-two on this" or "I'm not", or whatever. And maybe this is really the difficulty with semantic conventions; you're right, there is the political element here.
D: I can share one thing I learned in the OpenCensus past. There are many folks working on areas where I'm not an expert, but I'm the maintainer; for example, people working on gRPC. I'll tell people: once we have approval on this from any valid approver or expert on gRPC, I'll wait for a day and then I'll merge it.

D: I make it very explicit that I have no expertise in gRPC, but I'll merge it once we get a valid approval, after a day. With that, people get the social pressure: if they want to approve, they know that by approving they're taking responsibility. So I think it's a chicken-and-egg problem: if we execute on the process, people will feel responsible for this, instead of saying "I'll just make my weak approval and wait for the others to decide".
H: The other thing that makes things more or less easy to approve: I felt that when we set up two approvals for the spec, people, just to hit the number, say "okay, I'm on the fence, but we need two or three or however many", and they just press the green button, because they feel like: okay, I'm on the fence, but I know that there have to be two approvals.

H: So there is going to be a second one who's going to make the right, or the strong, call, and so on. So I feel people don't always consider their approval a very strong approval, to your point.
E: So I think this is good; we should probably move on. But I think one big takeaway that I had from this conversation is about semantic conventions. You used the term "political"; I would use the term "bikeshed", right. It's about naming things; it's a hundred percent about naming things. Or I would say it's fifty percent about describing what the thing does accurately, and the other fifty percent is the name of it. That's why they're literally called semantic conventions.

E: I am a little bit afraid of handing this over to the academics, because we may never get done, and I think having some conventions that describe this stuff, perfect or not, is better than none.

E: But I do wonder if this is an area where, if someone wants to become the owner of the semantic attributes, they can at least help us come up with a bit of a framework for making these decisions. Because I think that's maybe what I'm hearing is missing, specifically with the semantic conventions: we don't have a great framework for resolving these issues, and, as has been mentioned, it is going to come up a lot in this area.

E: Yeah, the action item is: Carlos is going to pick this up and resolve it.
F: Yeah, absolutely. I think, yes: something that doesn't have any unresolved comments, or comments just being like "oh, please request changes or go away". I mean, you need to fight for consensus, and if there is consensus on all comments and enough approvals, these PRs typically get merged really fast. If there are unresolved comments, that...

F: ...kind of hangs there. It's really... I mean, you get into this strange state where you need to find who actually has a strong approval and who has a not-strong approval, those kinds of things.
E: Yeah... sorry, Sergey, I think I need to call time on this one, because we only have 10 minutes left. Yeah, absolutely; but we do need to come up with, like, a framework for how we think about these things a bit more. I think that's part of the problem with the semantic conventions: they're their own beast, and we need to tighten up how we deal with them.

E: Again, if anyone has an idea there, feel free to write it up, even just as a paragraph, and present it.

E: Also, prior art is good; just don't keep saying "Elastic Common Schema", because it's not helpful. Okay, moving on. FYI, the 1.0 announcement goes live at 10 a.m., just so people know, if they want to repost it somewhere, on social media, or whatever it is you do with these things.
E: Next up: OpenTelemetry .NET 1.0 release ready to be announced; need to align this announcement with the overall 1.0. This is a follow-up on a previous...

L: ...item. So, Michael, the .NET release is at a stable API stage, and you mentioned just previously that you want to line up at least the RC-quality releases for languages with this overall announcement. So we're ready to go; what's the best way to align the announcements, so it comes out from the community?

E: This is going to go out the door today; we're just getting the spec announcement out. My main ask was that we not announce a language 1.0 before we announce the 1.0 of the spec. Now that...
E: If you want, the best way to announce things is on the blog, and so you can send an email to comms with your announcement and ask for it to get put out there. So, comms, right: this mailing list here is the one; just send them a message saying "hey, we're ready to post this". But beyond that, I don't think we have a process; if you feel like you need more of a process, let's talk about it.
M: Yeah, so (this is Rihanna) it's kind of a question about a principle-related thing in our OpenTelemetry pipeline structure. For some of our metrics, we only get some metadata when a container dies: say, for example, container stop time or something like this. By this time we actually don't have any real metric data point, but we have some metadata, which we want to ship to our backend.

M: The scenario is: the receiver in our OpenTelemetry Collector is sending the data, but this data doesn't have any metric data points in the resource metrics slice; we have only resource attributes, or metadata. So these metrics, or this data, are being dropped in the processor and finally in the exporter. So it's kind of a principle-related question: how does OTel actually suggest handling this? Should this kind of data, where we don't have a metric name or metric data points, but have only resource attributes...

M: ...get passed through the pipeline? Or should the processor or exporter (I mean, the components in the OpenTelemetry Collector) stop this behavior? What is the recommended behavior from the community here?
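The scenario M describes, a resource-metrics entry that carries resource attributes (container metadata) but no metric data points, can be sketched with plain Go structs. This is a minimal illustration of the drop behavior under discussion, not the Collector's actual pdata API; all type, field, and attribute names here are invented for the example.

```go
package main

import "fmt"

// Illustrative stand-ins for the Collector's metric data types.
// These are NOT the real pdata API; names are invented for this sketch.
type Metric struct {
	Name       string
	DataPoints []float64
}

type ResourceMetrics struct {
	ResourceAttributes map[string]string // e.g. container metadata
	Metrics            []Metric
}

// dropEmpty mimics a processor that discards any ResourceMetrics entry
// containing zero data points, which is how a metadata-only record
// (resource attributes but no metrics) gets lost in the pipeline.
func dropEmpty(batch []ResourceMetrics) []ResourceMetrics {
	var kept []ResourceMetrics
	for _, rm := range batch {
		points := 0
		for _, m := range rm.Metrics {
			points += len(m.DataPoints)
		}
		if points > 0 {
			kept = append(kept, rm)
		}
	}
	return kept
}

func main() {
	batch := []ResourceMetrics{
		{
			ResourceAttributes: map[string]string{"container.id": "abc"},
			Metrics:            []Metric{{Name: "container.cpu.time", DataPoints: []float64{1.5}}},
		},
		{
			// A dying container: only metadata (e.g. a stop time), no data points.
			ResourceAttributes: map[string]string{
				"container.id":        "abc",
				"container.stop_time": "2021-02-16T09:00:00Z",
			},
		},
	}
	kept := dropEmpty(batch)
	fmt.Println(len(kept)) // prints 1: the metadata-only entry was dropped
}
```

A processor written this way silently loses the stop-time metadata, which is exactly the open question: whether such entries should pass through the pipeline or be filtered out.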
I: I want to call out a discussion that's happening on the metrics data model around "alive" and "up" metrics. The idea here would be something that would discover a service and then post, say, an "alive" metric with a data point of one or zero, or an "up" metric, or a "present" metric of "this thing exists". I think you should be involved in that discussion, because this scenario is kind of related to that in some way, so I just want to call that out.

I: I don't know specifically how you should do it today; I just know we're talking about things that we can do to help with this.
E: Okay. And so I think my request here (I don't think we have time to get into the details now) is that this sounds like it would be great to be written up as an issue and then discussed, as Josh said, in the metrics API SIG.

M: Okay, so should this issue live in the specification repo, or, like, the collector repo? Which repo?
B: I think it's a bigger discussion; we likely need to discuss it in the specification. It is generally about reporting entities, right: about things that exist regardless of whether something is happening with them right now or not. There's no recording of a metric happening, no span is emitted from that particular thing, but it exists, and it's interesting to observe; we want to know about it. I think it's a big, separate discussion, to be honest. So, yeah.
N: So I just wanted to mostly ping you, Ted; this is my easiest way to make it public. A couple weeks ago you talked about starting a group to talk about further tracing data modeling, and it kind of spun out of this particular discussion.

N: So I just wanted to make sure that that was still something that was happening, and I would like to be involved.

E: Awesome, yeah. Step one is just drafting kind of a roadmap, but I just wanted to make sure this was included in your roadmap.
E
So
this
is
just
your
main
ask
is
make
sure
we
we
form
a
a
discussion
group
like
a
working
group
on
on
tracing
issues
specifically.
E
G
E
We
can
maybe
punt
them
punt
them
to
next
time,
but
we
got
a
couple
minutes
just
so
people
know
what
they
are.
One
is
increasing
coverage.
Will
it
be
a
central
effort
to
reach
out
to
open
source
libraries
maintainers
to
incorporate
observability
into
them
yeah?
So
that's
just
step.
One
is
we'd
like
to
at
least
have
instrumentation.
E: So I think this is part of our tracing roadmap: we have to increase our own coverage of this instrumentation so that people have something. Step two is, now that tracing is maybe stable enough that we feel confident in some languages, starting to reach out to those actual libraries themselves and saying: hey, rather than the user having to install this separate package that we maintain, would you just like to bake this directly into your library? Then you, the library owner, maintain observability for your library using our guidelines.

E: Is that of interest to you? I don't know if we're quite there yet; I do want to do that, though. Yeah.
B: I don't think we're there. There needs to be a lot more effort put into one thing: making OpenTelemetry attractive for these developers, for other developers. That's great examples, that's guides, right: how to do things right. So it's not just us having an API that you can call and just reaching out to these folks and telling them "hey, just use it"; we really need to make it very easy and very attractive to use OpenTelemetry. I think we need an effort on this.

E: Yeah, I totally agree, and I see this as work we can maybe be doing in tandem with the metrics and logging prototyping going on. But yeah, this is the critical nuts and bolts. Okay, last item: "Can we do a better job time-boxing these agenda items, Ted? We spent all the time on your items."
E: Yeah, but it is true. I will try, in the future, when something does seem like it's really spinning out into a longer discussion, to say "hey, this should get moved to an issue or somewhere else". But yeah, we made it right on the dot. Okay, we do have a metrics SIG meeting coming up right after this (someone got it onto the calendar here), so I will see all of you who are interested in the metrics data model in that Zoom right after this. So, cheers.