From YouTube: 2021-02-16 meeting
B
All right, and feel free to add your items onto the agenda. I see people still filling the agenda out, so we can just wait a moment until I think it's done, and yeah. Please remember to add yourself to the attendee list.

B
All right, looks like we've got a fair bit of stuff added to the backlog, or sorry, added to the agenda, at seven past. So let's get started.

B
First thing I want to draw attention to: in the past, in these meetings, we've kind of gone over the open issues and the backlog. So these are the current issues that are labeled as data model.

B
Two of them look like they need assignees. A number of them are assigned to Josh, but I think we're actually looking at trying to improve this process. It doesn't seem super helpful to just kind of do a review of this backlog and this issue set right at the beginning, so the call to action I'm making in the different spec SIGs is: suggestions for how we can improve looking at these issues and triaging them, and making our backlog and roadmap a little more clear.

B
I know Yana's got a fire alarm; I hope that's just a drill. So yeah, I made this request in the last specification meeting, so I'm gonna make it again here: what is a good way to triage and review issues that's helpful to this group?
B
People don't ever raise their hands all at once. And I do know that Josh MacD is, I believe, working on a roadmap, but he is on vacation right now. Out of curiosity, is someone pairing with him on this roadmap? I'm aware that MacD's working on it; is that just him right now?

B
Yeah, that was sort of my question. I think maybe it doesn't pause due to the fact that he's on vacay. I believe tomorrow's the last day of this vacay, but I'll ask him to get a draft out and to get some collaborators sooner rather than later, because it seems like getting a roadmap put together and agreed upon is the first order of business, and once we have that, we can maybe start working back from that to figure out what a good process is.
C
Yeah, I was going to say, I don't know necessarily. I mean, I was in the last meeting as well, and the whole thing about having different tagging conventions... really, I think if you have the roadmap, that kind of congeals and says this ticket is or isn't a priority, then you can start moving on the priority ones. You know, anything that's P1, you work through that.

C
Whatever discussions you need to have, you make those and you continue on. You know, I think having that roadmap will clarify things, and then the whole question about process doesn't necessarily... I mean, it's not that discussions about process can't take place, but sometimes having that roadmap clarifies things more than reworking the process.
B
Yeah, oh yeah, I completely agree there. The main note, by the way, about the process is just that we have these labels. We were doing this kind of "required for GA", but then this GA concept kind of evaporated, in the sense of releasing metrics and tracing and everything all at once, and now that tracing is at the door we're kind of like, is this the best way to do it? But I agree, we kind of need these roadmaps for these various tracks put together before we can.
B
Yeah, yeah, we're definitely starting from scratch here, and a side effect is I don't think we have this issue here quite yet, so we'll have to do a big triage. We did get through all the P1 issues, but now there is a giant pile of open things that were labeled, you know, P2 and P3, and we do have to go through and triage those and figure out what we're going to do with them at some point. So it's not just like only P1s get worked on or something like that.

B
So just FYI, there is a triage meeting that happens Friday, 8:30 a.m. Pacific, and we're gonna start that overhaul in that meeting on Friday. So, people...

B
In that process, please come to that meeting, and hopefully we'll have roadmaps for these various working groups, at least a draft of them, available by next Tuesday; that, I believe, is what we're aiming for for all these groups. So yeah, the big work can start then. Okay, so in the meantime, yep, just wanted to...
E
Circle back: so the only person working on this roadmap is Josh? Or actually, let's be clear here: Josh MacDonald, or Josh?

E
Yeah, I'm not associated with it. Cool. Yeah, sorry, we should probably get into the habit of "jmacd" or something like that. Yeah, that sounds good. When he comes back (it sounds like it's tomorrow), first off, I can't believe you let him go on vacation, that's kind of rude. And then second off, if he wants to ping me, I'd be up for helping him on that one. Great, that's awesome!
C
I was gonna say I'd be willing to provide feedback. But, like you said, initial draft: I could help with reviewing it.

B
Great, yeah, yeah, let's get more reviewers. This is more like, because if he gets back on Wednesday, this means like Thursday, Friday we need to get that kicked out. So it's good to know. Tyler, you and Josh maybe can schedule a meeting with Josh, specifically to kind of hack it out.
E
Yeah, this is good, and I think that for review, we should probably have it as an action out of next Tuesday's meetings, just as an item to review, you know, as a group. Yeah, that's reasonable, yeah, for sure.

B
Okay, so that's helpful, yeah. And so my hope is that in like two weeks we will have backlogs and roadmaps for all of these different spec groups, and we'll be able to then wrangle that into a larger roadmap for the maintainers. Because that's sort of my ultimate goal: to make sure that it's clear what the maintainers can be working on versus what is being prototyped, who's doing those prototypes, and just getting a longer-term plan together.

B
So people can have expectations. But it sounds like there's a lot of interest in that from all the different groups, so I'm excited. Okay, moving on: up metrics, with Cyrith.
F
Yeah, this might be hard to have without Josh, because it's his proposal that raised all the questions. But effectively there were some meta questions in there that I wanted to talk about, because I'm kind of curious what the consensus in the community is right now. First off, there was a question around... so, Josh has this design for up metrics that involves taking two metrics and kind of joining them together in the collector.
F
Just to give a little more context on the specifics: jmacd is proposing this notion of an "alive" metric and a "present" metric, and you can join the two of them together to get an up metric, like in Prometheus, right? And so there's two different things that would be reporting in metrics, and you would kind of join these metrics together in a collector to get back both alive and present. And one of the open questions is: well, should that join actually be done downstream in a back-end system?
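The join being discussed here can be sketched in a few lines. This is purely an illustrative sketch, not the actual proposal or collector code: the metric names ("alive", "present") come from the discussion above, but the matching rule (join on shared identifying labels, up = alive AND present) is an assumption made for the example.

```python
# Hypothetical sketch of joining an SDK-pushed "alive" sample with a
# scrape-side "present" sample to synthesize a Prometheus-style "up"
# value. Names and the join rule are illustrative assumptions.

def join_up(alive_samples, present_samples):
    """Return {labels: 1 or 0}, keyed by the shared identifying labels.

    Each sample is a (labels, value) pair, where labels is a hashable
    tuple of identifying attributes (e.g. job, instance).
    """
    present_by_target = {labels: value for labels, value in present_samples}
    up = {}
    for labels, alive in alive_samples:
        present = present_by_target.get(labels, 0)
        # A target is "up" only if the SDK reports alive AND the
        # scrape side reports the endpoint as present.
        up[labels] = 1 if (alive and present) else 0
    return up

alive = [(("job=demo", "instance=a"), 1), (("job=demo", "instance=b"), 1)]
present = [(("job=demo", "instance=a"), 1)]
print(join_up(alive, present))
```

Whether this join runs in a collector processor or downstream in a back-end is exactly the open question raised above; the logic itself is the same either way.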
C
I think this is actually related to the point that I had about the whole started metric. So I didn't actually see that one, and maybe the other one that you're thinking of is issue 1273, talking about metric start time resource semantics.

B
And just to clarify, this is separate from the discussion of whether or not we should have this feature; there seems to be consensus that we should have this feature. It's just: are we leaving it to Prometheus to deal with it, or do we want the collector to figure it out?
G
I'm curious, when we are talking about joining it on the back-end side, do we mean joining at query time or at ingestion time?

F
I think one of my thoughts here is, it's the meta question of: I'd like to have these alive and up metrics kind of specified, but if we have to fully prove to ourselves that we can do it in a collector, that is a completely new technical challenge that I don't know if we actually need to solve to have this be useful and good for people, right? And so I'm trying to understand: is that something that we have a consensus on?
B
And I'll jump in with my, I don't know what hat, but suggestion hat: I do think it is absolutely critical that we get something into users' hands that is a metrics pipeline where they can write metrics against an API, we can put metrics into our own instrumentation, and then that information can flow into various backends. That really should be goal number one. I do want to make sure that we don't...

B
So I guess maybe a lens I would love to look at it through is: what decisions would be closing the door to doing that kind of work, versus just deferring the work? Because I don't think we want to close the door. We do want OpenTelemetry in the long run to be like, "no, you should use this telemetry pipeline, because it actually gives you stuff, it does useful work", especially if you are really scaling up. But I feel like that stuff is way off.

B
That's more like inventing the future, and we need to be laying just the baseline of, kind of, OpenMetrics, or OpenTelemetry's original charter, which is that this is a telemetry pipeline

B
existing back-ends and vendors can adopt today, so that they can stop having to pour all of their effort into maintaining their own telemetry pipelines. Because the original goal of the project is: it's a lot of work for everyone to maintain this pipeline; if we just work together on one, then we can share it. Yeah.
H
Sorry, this is Victor, and in this case I'm representing Microsoft. So it's a general question in terms of: why do we want to do more things, given that most of the vendors want to differentiate their offering? So I would see that OpenTelemetry is really just a standards body to collect data; the interpretation and the meaning of the data, in one view, is probably best left for each of the individual vendors, to bring the differentiation.
H
So then that leaves the question of: what about the simple things? Well, if it's simple enough, then it's simple for everyone to do. So I think, in terms of, quote, "putting meaning to things", we should probably take a more least-common-denominator approach. You know, so right now, I know that this up metric is very specific to Prometheus, and I know Prometheus is definitely one thing that we do have to support, but we probably also need to be as vendor-neutral as best we can on that.
B
Yeah, maybe someone can comment on that. I don't believe the intention Josh has here is making something that's Prometheus-specific. But, sir, do you know some of the history of this?
F
Yeah, so there's concern around our Prometheus compatibility. Especially, so, you know, Prometheus has huge adoption in, say, the Kubernetes world, right? And if we're going to live in there, we're going to deal with a lot of people who already have Prometheus endpoints, and the less that we look different, the less friction we cause, the easier. So the idea would be: if you start using OpenTelemetry to collect Prometheus,

F
it should look as close to Prometheus as possible. Like, you shouldn't even notice that the OpenTelemetry collector was in there, is the theory; it's just sending metrics the way Prometheus would. And then you can expand and do all the additional telemetry, and you get all the value, and you can start doing more things in the collector, right, as like a two-step process. So that's kind of the idea behind this design: like, all right, well, what does this "up"

F
look like inside of OpenTelemetry? And Josh MacDonald wanted to kind of abstract it away: is there something useful for push and pull metrics that we can do? I think it's clever, and that was my second point to talk about.

F
The first point, though, was more around trying to understand how much we're going to specify and how much we're going to expect of back ends. If it's like, "here's how Prometheus gets up metrics from OpenTelemetry-generated metrics, or OpenTelemetry-collected metrics", that's kind of what we need to service this particular bug, because it's about, you know: are we OpenMetrics compatible? Are we Prometheus compatible? Whatever, right?
C
Yeah, I guess one question I had is: how compatible? Because there's also the OpenMetrics format, which is supposed to be like an evolution of Prometheus. I guess my understanding was that there should be some level of compatibility, but not necessarily one-to-one, right? So, like, you know, in OpenTelemetry you call it an up-down counter, but it's a gauge.
H
So, to add to that, another approach: I assume that when we make these decisions, each of these individual components would be pluggable, in that a user can choose to enable, or not enable, the up or alive or present stuff. So if we took more of a modular approach, I don't think that the different vendors would have any issues with that, you know, because then, you know, OpenTelemetry as a SIG could offer, like, you know, a plug-in for providing up.
B
Yeah, I think it's ensuring that we have the baseline right. Like, if we're just completely lacking in some concept, that means down the pipeline it's not possible to take advantage of that; I think that's what we're trying to avoid, correct me if I'm wrong there. And so we know that Prometheus is something we want to make sure this telemetry pipeline is at least as good as what Prometheus currently has, and so we're kind of using that as one target, I believe. The other target,

B
I don't know how much this group talks about it, is StatsD, which is the main push model that's out there. That's...
H
So the other consideration is: what happens if I'm purely on the API side and the SDK side, and they insert their own exporter, which is vendor-specific? In which case, do any of the existing collectors, the OTel collectors, and, you know, whatever the proposal is, would that even come into play at that point? And at the same time, how would you enable those people who are writing their own exporter to gain the same benefit?
F
That's what's really interesting here. So if you look at this diagram that Ted's showing, the SDKs provide this alive metric, right? That's just a thing that would be built in, so if you're actually exporting somewhere else, that alive metric can go out. I mean, like, you can filter metrics and such, I hope, when we finish the SDK specification, but I don't know, since it's gone now. Yeah, but anyway, like, when

F
I assume, when we add it back in, there'll be filtration and all the good stuff that we see in tracing, right? So that alive metric can go out, which means you can delegate straight from the API/SDK with the alive metric. What I really like about this proposal, but also have been nervous about, and I think it probably needs a team around it,

F
almost a SIG, is the service discovery component, right? Of: I can take push metrics that don't have strong semantic conventions or labels or understanding, and I can have a thing which lives beside it, which actually grabs more information about that endpoint, that service, and annotates it. One of the issues with StatsD right now, I think, is our model is really sophisticated, with lots and lots of stuff, and things coming out of our API have all that stuff: we have resource, we have context, we have tracing.

F
You know, that gets baked into metrics. Then things coming out of these other systems don't have any of it, and this is an interesting way for us to try to unify that. This picture kind of relies on a collector to do the joining, and, you know, the question here is: are we okay with speccing this in a way where we don't need the collector, or do we need to prove that doing that in the collector is scalable?
B
Necessarily, it's not about whether or not everyone should have to write an exporter in every language versus just writing one exporter in the collector. I think what Surat is proposing is: you write your exporter in the collector.

B
This data is just getting pipelined out to the collector, and then you're doing something in your exporter, and then in the future we're looking at, you know, how the collector can start doing more joins and other interesting features. But I don't think saying "let's not worry about the collector" is something that would save time or energy. It's just a question of: when we put this into our protocol,

B
is there enough information in that data model that all these various groups can implement their protocol? So Prometheus is on, like, the complex end: we need to make sure our system emits enough data that they can write an exporter anywhere and have that work. And then I think the other end is, like: we have the ability to bring all this data together.
B
Where do we see the value in doing that? Because that's something that OpenTelemetry can offer existing backends. And, to be clear, it's not about adding fancy features around processing

B
the data, like you do in the back end; it's more about enriching the data so that you actually have better data, because you're able to not just have these metrics on their own, but you have context for these metrics: the traces that are happening, for example, and the resources describing the machine or the service that the metric came out of. How various groups do that, I think, would be in the simple model Josh is talking about right now.
H
Yeah, Ted, and I agree with your statement here; I was just giving some consideration to it. If the OTLP protocol was generic enough such that anybody could implement it, that is obviously our goal, you know, and adding a presence or alive, you know, specification, I don't think, detracts from people doing so. So the other aspect of it is that when we talk about the collector, I'm assuming that also means potentially an on-host agent as well that's acting like a collector. So perhaps, you know, accommodation could be made for that.

H
My concern here is really: in one scenario, we may just plug directly as an exporter into the SDK, in which case the collector piece doesn't play, and in fact OTLP may or may not play. So I want to see if there's a way to make sure the input to this, the OTLP, is generic enough that anybody could use it.
B
Yeah, yeah. I think, at the end of the day, or putting it another way: when we get this all done, you're going to write an exporter for every SDK, and that includes Go. So you're gonna write an exporter for the Go SDK, and that exporter is going to be basically no different than the exporter you would write for the collector. In both cases you're dealing with whatever we decide our OTLP standard data model is; you're

B
getting that thing as an in-memory object, and then you're doing whatever you want with it. So I think that's just the point I want to make: it's not that having the collector involved would change the way you would write that exporter.
H
Understood, understood. So I'm not against, you know, the OTLP having specifications for, you know, the present signal or the alive signal and so forth, yeah. But as long as we could get the OTLP in its input form, then it's up to the collector, whether it's vendor-specific or OTel, to do what they wish.
B
Yeah, yeah, the OTLP, like, that data model is turned into protobufs, and those protobufs are the object you're gonna be dealing with regardless of where you're writing that exporter; it's gonna be in that object. And the main point of the collector, besides doing fancy processing, is being able to translate, right? It's like, someone described it as a choke point.

B
I feel like that is a negative connotation, but it's just about, like: you could then also have receivers for various different kinds of things, and know how to translate those receivers into OTLP, and then you have exporters that know how to export from OTLP. And so that is eliminating this, like, combinatorial problem people have to deal with right now, where someone comes to you and they say, "cool, so, like, I want to send data to your back end, but I already did all this work instrumenting with some other thing."
B
"How can I do that without re-instrumenting?" And step one, rather than saying "go re-instrument with OpenTelemetry", step one is saying: we'll just run a collector, and as long as we have a receiver that turns that into OTLP, then you can send it to our system.
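The translation point described above can be sketched very simply. This is an illustrative toy, not the real collector API: the `Metric` class stands in for the OTLP data model, and the StatsD line format and function names are assumptions for the example.

```python
# Sketch of the receiver -> OTLP -> exporter idea: each receiver converts
# into one shared in-memory form, each exporter converts out of it, so
# N formats and M backends need N + M converters instead of N * M.
from dataclasses import dataclass


@dataclass
class Metric:
    """Toy stand-in for the OTLP in-memory metric object."""
    name: str
    value: float


def statsd_receiver(line: str) -> Metric:
    # Parse a toy StatsD counter line like "requests:42|c" into the
    # shared model.
    name, rest = line.split(":")
    return Metric(name=name, value=float(rest.split("|")[0]))


def prometheus_exporter(m: Metric) -> str:
    # Render the shared model in a Prometheus-exposition-like line.
    return f"{m.name} {m.value}"


# Pipeline: any receiver's output can feed any exporter.
print(prometheus_exporter(statsd_receiver("requests:42|c")))
```

Adding a new source format or a new backend then means writing one new function against the shared model, which is the combinatorial saving being described.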
B
And so that's, to my mind, the primary purpose of the collector: just that simple task. And then the kind of processing you're doing on that is basic things like scrubbing, you know, just the kind of basic things that an operator would want to do, that are less about interpreting the data and more about just cleaning it up.

B
Now, and add this metric type to OTLP today,
E
that we're trying to define. So, can I just jump in here really quick, though? Because, like, I think that there may be some misunderstanding: I don't think there's actually any need to change the OTLP. I'm just wondering if we're not on the same page here, because I was kind of holding off, because I was wondering if I'm not fully understanding this conversation. But, like, this is a very specific concept to the API, and to the SDK, and even to the collector: the production of this metric.

E
But the OTLP currently supports, like, a metric type that can transmit the up-down, or the alive, or whatever metric event that you've gone to capture or create or artificially synthesize. For this up metric, the OTLP, I don't think, actually needs to change currently. Like, I don't know if this is really even a discussion about the data model.
F
No, it is, because the fact that you say that is what's up for debate. So this is Josh's proof that we don't have to change our data model at all to support up, as opposed to "let's make a new up metric in the data model that has its own meaning", right? That's literally the debate that's going on. So what I'd love is, like: we all agree, yes, this looks like a good way to, you know, support the up metric; we don't have to change the data model;

F
here's how we encode it as two different metrics; we're done. Like, yay, up metrics are supported, everyone's happy. That'd be awesome. But I don't think there was actually an agreement, prior to this, that our current data model does support up effectively, right?
E
I just want to make sure I'm hearing you all before I say something, and that was kind of what I was waiting for. But also, I'm interested, I'm super interested in the service discovery part of this thing, but I definitely don't know: is that really a part of the data model discussion? Or should that be more... I don't even know if it should be part of the API discussion. I think it's something we could add on later, but is that where that should live?
F
That's the meta question here, and I took 45 minutes of this meeting, so I apologize for taking so much time. But, you know, it's important to know what we talk about here and what we don't talk about, and yeah, your confusion relates to why I want to ask a question like: is this a data model problem, right? Or is there somewhere else where we have to discuss this? And maybe when we have a roadmap it'll be clearer what we talk about here and what we don't.
E
Yeah, I think so, I think I agree. Also, I don't mean to diminish the conversation that we had; it was a very interesting one, a lot of interesting points. Also, Josh is a really smart person, so it's fun. Well, both Joshes are really smart people, so it's fun seeing where both are putting in. But yeah, so I think I agree: I don't know if it particularly belongs here, but I think you're right, a roadmap would really help frame the discussion points here.
B
We're going to make the data model, and someone can go make a joiner processor for the collector and go propose that, and maybe that's just nothing to do with this group. Okay, let's move on, just to make sure. And yeah, I apologize to the people who put these other issues up, because we did take a long time on that, and we'll try to timebox things a little more effectively in the future. So: use cases for started times.
C
So I think the last 45 minutes has kind of informed that one. So it's really just, beyond that one, I wasn't sure what start times were to be used for. I know that they exist.

C
But beyond that, I hadn't been really sure, because I have, you know, been looking around at a bunch of the documentation. That was one thing, like a specific use case, I couldn't find, but that one actually makes a lot of sense. So I don't know if there's others; it wouldn't be a bad thing to have, but other than that I don't think there's much to discuss.
E
So the start time was specifically one of the things that was really critical for that temporality question that we had, where we're trying to communicate across the proto whether it's a cumulative versus a delta, and so the start time should encapsulate that in many ways.
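The role the start time plays in that temporality question can be sketched as follows. This is a simplified illustration, not the spec's actual algorithm or field names: it assumes a cumulative point carries (start_time, time, value) and that a changed start time signals a counter reset.

```python
# Sketch: a cumulative stream keeps a fixed start_time, so deltas can be
# recovered by subtracting successive values; a new start_time marks a
# restart, and the first point after it is its own delta.

def cumulative_to_deltas(points):
    """points: list of (start_time, time, value) cumulative samples,
    in time order. Returns a list of (interval_start, time, delta)."""
    deltas = []
    prev = None
    for start, t, value in points:
        if prev is None or start != prev[0]:
            # New start time: the counter (re)started, so this sample
            # is the delta accumulated since `start`.
            deltas.append((start, t, value))
        else:
            deltas.append((prev[1], t, value - prev[2]))
        prev = (start, t, value)
    return deltas

# Two cumulative points sharing start_time 0, then a reset at t=25.
print(cumulative_to_deltas([(0, 10, 5), (0, 20, 9), (25, 30, 2)]))
```

Without the start time, a consumer could not tell the third point's value of 2 apart from a plain decrease, which is why the start time is described above as encapsulating the temporality.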
B
This is also why I think we need an overview doc, because I feel like I'm coming into this SIG and it's like I'm halfway through season three of, like, a really, like, Game of Thrones kind of complicated thing. So being able to condense this down into something that people can use to get started will be really helpful.

C
Yeah, so that was me as well, but I think this might actually apply more to the API meeting, so I can belay that. Okay.
B
And Victor, you want to know what's the next step for this issue here?

H
So I'll give some... so I put out this question about clarity of, you know, what an instrument name is and how does that apply, because I think that has a direct impact on how the API is designed and how we name different things. More specifically, I think this wider question is in terms of, you know, simplification. I mean:

H
does the protocol have a difference between counter and up-down counter? Can a gauge not be used to represent a counter? You know, those types of questions, in terms of what are the fundamental counter types and how does the naming relate to them. And so, specifically, I don't know if this is an API question or this is a data model question, and then there's feedback from many people on this particular issue, but I'm not sure.
E
Yeah, I think that this is a really good question, by the way, and I think that, again, this is another dragon: it's been discussed a lot in the tracing world as well, because they're scoping on, like, what the tracer instrument name is there as well, and so there's a lot of prior art on this in the group. I do think that how we've approached the data model in the past is that it's a pretty stupid thing, so it doesn't really have, like...

E
We have comments and documentation as to, like, what you should be putting into these fields, but at the end of the day, it's kind of like how you want to use the protocol. And I think this is a really good question for the behavior of the API and the SDK, and how they respond to, like, users that misuse this sort of data, or how they validate, or if they don't validate, or something like that. So that's kind of my sense on the topic.
H
Yeah, so I spoke to Josh MacDonald a little bit previously, and he seems to believe that it probably should stem from the data model, because no matter how the API names it, at the end of the day it needs to be sent across the wire, and what's sent across the wire is a unique name. And how that unique name is then represented, and, more specifically, how vendors and collectors are expected to join those names, is, I guess, one of the primary questions.

H
So if, you know, if, quote, on the OTLP there's only one name, and the name does or does not include type, then that means that fundamentally the system only provides one type of counter, or one type of instrument, in which case the API should or should not make, you know, an informed decision based on that. Or, more importantly, what does it present? If we present 12 instruments, but at the end, in the OTLP, it only translates to two types, then is that a service or not a service? I don't know.
C
So, sorry, go ahead... I was gonna say, so, I mean, because we've discussed on the ticket about the specific one, but in general I think the gauges are, you know, an up-down changing value, the others are sums, and maybe there's histograms as a secondary thing, or as a third potential item. But I think, with the exception of, like, int or doubles or what have you, for the most part there's like three different cores, and then from there,

C
you know, you can multiply it any way you want at the API level. But as far as the data model, that's at least what I would see. And getting into, like, what should be in the API, like the expectation of behavior and how that works, that definitely, I think, comes more into the spec rather than the data model. But it would be, like, an expectation that, you know, if you're going to use a particular namespace...
B
I think another way of putting it is: we ourselves are going to have to write a lot of instrumentation, right? Like, presumably we're going to go back through all of our tracing instrumentation and add metrics to it, and so we need guidelines for, like, what do you put in there and how should you use this stuff, and how much of that should be baked into the API versus, like, something the end user is doing.

B
I suspect this also relates to ensuring that the API we present to end users is simple enough that they don't get scared away from it, which is the current case. But yeah, that's, I think, Riley's group that's going to tackle that, and I think the main thing is just to make sure that we don't change the data model so radically that they get,

B
you know, cut sideways. But I think that's the group that needs to figure out how to simplify this and apply standards or conventions to it.
B
So right now Riley's assigned to this, and so, as part of re-triaging the API backlog for metrics, this will get pulled into that group. So yeah, I think it's probably better to just have one group own it than split it across two, and this sounds like it's maybe more focused on that group.

B
Okay, two more issues in seven minutes or less: how to handle resource metadata without metrics, metric data points; should they allow them? Yeah. So...
D
We take it from our earlier meeting, from the aspect meeting. So I just wanted to discuss a little bit more here, like, what's our thought on that: we're getting some metrics where we don't have any data points or stats, but we do have some resource attributes we want to ship; these are important. So how do we handle this case?

D
So it's something like: this metadata was available for some of the earlier metric data points we were recording, but suddenly the container dies. We don't have any metrics to record, but we still want to ship this metadata, without the data point or the metric counter.
F
So, Rehan: why is this a metric rather than a log or a trace event?

D
From the OTLP, like, kind of the specification side: these were part of our... when we defined a resource metric, we have two parts: we have resource attributes (these are basically part of the resource attributes when we are recording the metrics), and we had multiple data points, like a metric data point. But suddenly we stopped receiving these data points, while we still have live metadata which we want to ship. So it's kind of in the middle of the metric generation process.
D
Yeah, I mean, that's the ultimate question: what's the OTel guideline here? Do we prefer to create some dummy metrics with all the metadata and ship them, or should the pipeline allow us to pass through, I mean, to pass through the processors and exporters without any data point? What's the recommendation or community principle here? So that's kind of my question.
B
I would guess the one option of, like, a dummy metric, as you say, is you're just writing a shim that does this. That, to me, sounds like the right place to start, at least, right? Like, if you can model it that way, start there, and if there's something that doesn't work about that, where, like, the collector actually needs to have some functionality to make this work, we would at least have an example.
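The dummy-metric shim suggested here can be sketched in a few lines. This is a hypothetical illustration, not the collector's actual API: the dict shape and the placeholder metric name `resource.heartbeat` are made up for the example.

```python
# Sketch of the shim idea: when a resource has metadata to ship but no
# live data points, attach a clearly-labeled placeholder point so the
# record still passes a pipeline that rejects metrics without data
# points. All names here are hypothetical.

def shim_resource_metrics(resource_attrs, data_points):
    """Return a resource-metrics record a strict pipeline will accept."""
    if not data_points:
        # Dummy point: value 0 on a placeholder metric, carrying no
        # real measurement, just keeping the resource attrs flowing.
        data_points = [{"metric": "resource.heartbeat", "value": 0}]
    return {"resource": resource_attrs, "data_points": data_points}

# Container died: metadata survives, no real points remain.
record = shim_resource_metrics({"container.id": "abc123"}, [])
print(record["data_points"][0]["metric"])
```

If this modeling turns out to be insufficient, that failure itself would be the concrete example B is asking for of where the collector needs real pass-through support.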
B
This sounds like the kind of situation where having, like, an example in code, or a more concrete example, would probably help people get their heads wrapped around what's missing. Is that something you think you could provide? Yeah.
D
So I can provide, like, high-level code and the two solutions we are thinking of. We are thinking, like: maybe we can just remove all the validation from the processors and exporters, of course providing the guideline from OTel, like, okay, this should be allowed and the processor or exporter should not stop this kind of data. Or maybe we can do something like generate some dummy metrics in the receiver, so that they can easily pass through the pipeline.
H
So this reminds me of a slightly different question, which may or may not be related: you know, in the OTLP, or just in general, when we send a metric we usually send, you know, like, descriptions and stuff along with the metric. One common way to do that is to separate out the metadata about a counter from sending the actual data; the metadata is sent separately, you know, and so that metric that's sent separately is just a definition.

H
You know, that's sent repeatedly, and that kind of sounds like what Raha may be asking for, in some ways: basically you have some metric, you know, but you have some metadata that should be sent in some other way, but it isn't a metric. Like, the example here is, you know, the description of the metric, information about it. You know, I don't know, I haven't seen that in OTel, but just bringing that up.
B
Yeah, and sorry, I do... it is 10 a.m. and we do have to cut this off. I did have... there was one more item. So, Jana, if you're back on the call: is it okay to, like, discuss this in a later meeting? It was, like... Bogdan is not here, so I think he needs this. Okay, well...

B
Great, okay, thank you. All right, y'all, that's the end of our time here. Everyone, thank you for all of your input. Victor, thank you for trying to keep us focused on shipping value today, and thank you, Tyler and Josh, for helping draft the next version of the roadmap. I'll let MacD know about that.