From YouTube: 2021-01-29 meeting
D
I feel that both are useful. We have been trying to bring metrics into the Tuesday morning meeting, and sometimes it works and sometimes it's full of chasing content, and the idea was to have a little bit more focused conversations in these meetings. Anyone else care to comment?
E
Thanks, Ryan. Let's wait until we see that happen first, because so far we did not see that.
D
Great. Well, as for the usual agenda, a few of us have put items on there, and then Andrew's usual section is great because we get to see what's new. Andrew, will you help us, since you're here?
A
Standing agenda item: status on the P1 metrics issues in the spec repo that we're tracking. Not much has changed in some ways since last week. Another one... is it 17? What did I count it at? Well, no, yeah, okay. So one moved into the done column, and I think that was just out of triage, or sorry, it might have traded places: same number in the to-do column, same number in the in-progress column, and we have one result in the done column.
A
I'd be happy to jump into any one of these in depth if you'd like; otherwise we can move on to the next agenda items.
D
Sure, I guess I've seen all those new ones. Well, I feel like before we go into the rest of the items on the agenda, it would be worth a little bit of a state of the union. Bojan and I had a one-on-one just before this to gather some thinking. You know, we've been at this OTel metrics project for about a year and a half.
D
Now, suddenly, we have a lot more interest, and I know it's driven by AWS and the Prometheus team coming in, with the collector generating a lot of excitement. And so now we're suddenly at this place where it's not clear what the direction is; there's just a lot of interest. What I've gathered myself, coming out of some meetings over the last couple of weeks, is that we need to be able to separate our work streams, because there's a lot going on, and I think our top priority should be to try and really stabilize our OTLP protocol for metrics. All the other things that we're doing depend on it, and, if you will, I've spent a lot of time on it.
D
It doesn't matter to me that we are focusing on instruments and the semantics of the APIs and the SDKs right now. That's my position. I don't know if everyone would agree, but I think we need to start somewhere, and this OTLP is the right answer. Josh.
B
So I have the question listed here, a similar thing; you answered part of my question, okay, good. So I definitely agree that the API and SDK will have a dependency on the data model. So basically: what are we trying to support, what's the scope? But meanwhile, I still see the value of at least exploring and learning from the API/SDK part, guided by feedback, because those things will not be thrown away, work like some of the performance issues, like I...
B
I was working with Chris back in OpenCensus on the metrics, and then we worked on the Python prototype for metrics, the version-one metrics API, and then in C++. Last year we worked with Alolita on that. So I've been doing all the prototypes, and now we have the .NET prototype, and now I'm tackling the performance issue. So I'm willing to drive the API and SDK part, just to bring the experience that I had in all these languages.
B
When
we
do
the
prototype,
I
can
clearly
see
there's
a
balance
between
how
simple
these
apis
are
and
how
close
that
meets
with
our
data
model
and
also
how
performance
these
apis
could
be.
So
I
I
think
this
is
a
valuable
work.
If
you
want
to
spend
more
time
focusing
on
the
like
the
data
model,
I
think
it's
totally
fine.
I
I'm
willing
to
drive
the
api
and
isdk
and
I
totally
understand
there
is
a
dependency.
So
if
the
data
model
changed,
there
might
be
additional
requirement
and
open
to
like.
D
My belief is the data model's really close. And I would love to have someone else take on the API work, because I have a lot of my work in what's there today, and I think a new set of eyes and a new design would possibly help. I'm not saying I necessarily like the design that we have, but I'm not sure that I should be leading the rest of the work on designing the API myself, yeah.
G
One point here that I caught in Riley's thing: I inherited OpenCensus, the maintenance of it, and it is marked as deprecated pending OpenTelemetry's metric API, and there is no migration path today. So I agree that the data model is number one, and there's also some pressure from our users of OpenCensus, unless we just want to leave them out to dry and say: go use Prometheus. Which is okay; I mean, we can do that.
G
That's
fine,
but,
like
I
it'd,
be
nice
to
give
them
some
kind
of
a
soft
landing,
which
is
one
thing
we
wanted
to
do,
which
is
why
we
wanted
a
stable
api,
but
we
can't
have
a
stabilized
api
until
we
have
a
stable
data
model.
So
I
agree
that
is
the
priority.
But
if
there's
any
way
you
can
shut
off
that
api
to
get
like,
I
think
we
have
a
lot
of
collector
issues.
B
It might be... probably I'll need your help to get me introduced to folks who I'm not familiar with, like some folks working on Micrometer. I know, Bogdan, you probably have some engagement with them, and also folks who are familiar with Prometheus. So I can't bring, like, we...
D
We have one of them on the call today, I know for sure, so we're starting.
D
...and you and Josh working on the API, for example. Yeah, Riley, one of the things that you said, though, is about migration. I'd like to talk about that; come back to migration! Why don't you continue.
D
Here's how you get the bridge, and the problem is, of course, Prometheus is not an API, just like the OpenCensus API. What we could imagine doing is forking the Prometheus client and literally rewriting all those API calls to call through to an OTel API. That would be a very low-level API, though, and I think I could do it, actually. That's one idea; I don't want to say any more.
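The "fork the Prometheus client" idea being floated could look roughly like the sketch below. All class and method shapes here are hypothetical stand-ins, not the real prometheus_client or OpenTelemetry APIs: a Prometheus-shaped counter keeps its surface but forwards every call to an OTel-style instrument underneath.

```python
class OTelCounter:
    """Stand-in for an OpenTelemetry counter instrument (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.points = []  # recorded (value, labels) measurements

    def add(self, value, labels=None):
        self.points.append((value, dict(labels or {})))


class BridgedPrometheusCounter:
    """Prometheus-client-shaped API whose methods call through to OTel."""
    def __init__(self, name, otel_counter):
        self._labels = {}
        self._otel = otel_counter

    def labels(self, **kwargs):
        # Prometheus-style .labels(...) returns a child bound to label values.
        child = BridgedPrometheusCounter(self._otel.name, self._otel)
        child._labels = kwargs
        return child

    def inc(self, amount=1):
        # The rewritten call: a Prometheus inc() becomes an OTel add().
        self._otel.add(amount, self._labels)


otel = OTelCounter("http_requests_total")
requests = BridgedPrometheusCounter("http_requests_total", otel)
requests.labels(method="GET").inc()
requests.labels(method="POST").inc(2)
```

Existing code written against the Prometheus surface would keep compiling unchanged; only the library underneath is swapped.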
E
I don't think that's necessary. The main difference between tracing and metrics is the context part, the contextual part: the fact that there is a notion of propagation between function calls and so on, which is not the case for Prometheus, or for metrics in general, because metrics are one moment. So I don't think we necessarily need a bridge for Prometheus, because you can happily have half of your metrics in an application recorded via Prometheus and the other half recorded...
E
...start pushing their metrics. So one option that we can do is much simpler: we just scrape inside the process. We just build a small scraper, or an exporter in Prometheus, that talks OTLP. I think the Prometheus library can have... you can have a plugin for exporting. Okay.
G
So one thing to throw out is: (a) do we need to be having this discussion right now, and (b) I think there is this thing to tease out in the API discussions of what's the point of an OpenTelemetry API. Is it a really nice end-user surface area, or is it really stable and efficient integration with existing metric solutions, to get us onto the same data model, and then eventually a really nice user surface area? That's something that we should definitely tackle when we split off an API.
E
I will also join both meetings, but, as an idea, I think we should have completely different discussions and a different focus between the groups, and that will solve some of these problems. And I think you and Josh from Google are good candidates for leading some of these discussions on the API/SDK side. Okay.
B
And
we'll
need
someone
who
has
the
background,
like
some
of
the
things
like
josh
asked
their
mental
question.
For
me,
I
believe
when
we
have
open
telemetry,
the
goal
is
to
combine
open
sentences,
so
the
minimum
bar
is
people
using
open
synthesis
metrics.
If
we
tell
them
there's
no,
how
we
can
use
open
telemetry,
because
the
api
is
a
subset,
then
we
fail.
We
shouldn't
do
open
time.
We
should
go
back
three
years
right.
So
these
are
the
questions
I
think
both
of
them.
D
I'm wondering if it really is. I mean, for me, there was a question about... there was the promise of OpenCensus, that you're going to be able to do more with your metrics and your traces. That was the promise, yeah, and it was encoded in a specific set of ideas, like a stats-record method and some views and stuff like that. And I think after we churned and worked for a long time...
D
It
came
out
looking
a
little
bit
differently
and
I
think
the
the
key,
for
me
at
least,
was
this
otlp
idea
that
each
data
point
has
an
associated
aggregation
so
that
you
can
throw
your
data
at
a
collector
and
it
can
do
stuff
with
it,
which
is
a
view.
Essentially,
that's
that's
how
I
feel
like
we
got
here
so
preserving
exactly
what
opencensus
does
today,
maybe
not
so
important.
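The point about each data point carrying its own aggregation, so a collector can apply a view downstream, can be illustrated with a toy sketch. These dict shapes and names are assumptions for illustration, not the actual OTLP schema: because each point says it is a SUM, the collector-side function knows it may re-aggregate across an unwanted label.

```python
def apply_view(points, keep_labels):
    """Re-aggregate SUM data points onto a reduced label set."""
    out = {}
    for p in points:
        # The point itself declares how it can be combined.
        assert p["aggregation"] == "SUM"
        key = tuple(sorted((k, v) for k, v in p["labels"].items()
                           if k in keep_labels))
        out[key] = out.get(key, 0) + p["value"]
    return out

points = [
    {"aggregation": "SUM", "value": 3, "labels": {"method": "GET", "host": "a"}},
    {"aggregation": "SUM", "value": 4, "labels": {"method": "GET", "host": "b"}},
    {"aggregation": "SUM", "value": 5, "labels": {"method": "POST", "host": "a"}},
]
# A "view" that drops the host dimension and keeps only the method.
by_method = apply_view(points, keep_labels={"method"})
```

Here the two GET points from different hosts collapse into one series, which is the view-like behavior being described.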
D
I don't want to speak for Google, but I think the spirit of OpenCensus was what we were after: that you'll get those context identifiers and bring in some distributed-context labels and slap them on your metrics. That was what we were after, not exact support for OpenCensus.
B
Oh
so
I
still
have
one
minor
thing
in
my
topic,
so
the
target
date.
So
this
is
probably
a
question
for
angel
so
last
year,
like
a
lot
of
folks
were
under
the
impression
that
we're
going
to
shift
metrics
but
based
on
my
like
prototype
work
in
multiple
languages,
I
I
think
I
explained
to
a
little
time-
I'm
not
very
optimistic
so
so
now
one
has
to
be
realistic
about
the
timeline
and
if
we
don't
have
a
timeline,
then
like
we
will
keep
slapping
things.
A
Well, based on the latest changes that we've added to the spec around versioning and stability requirements and what's desired for GA, tedsuo has been able to separate things out, so we have different signals with different version numbers, right? So that's having the tracing spec at 1.0 and the metrics spec being 1.0, and then subsequently all the languages implementing the spec.
A
As for trying to shoot for a specific timeline and estimating it: just observing all the work streams coming up, I think we have to have a good grasp on how much more work there is on the data model, the API/SDK, the collector, the OTLP; have an inventory of that before we can really even ballpark this stuff, yeah.
C
I still think, though, Andrew, that again, you know, and this goes back to the discussions that we've been having even with the Prometheus tasks that we need to address: take a phased approach, and clearly try to put in phases which are, say, three months at a time, right? Because it's hard to estimate everything at day one. And just keep targets for every quarter.
B
Yeah
so,
for
example
like
how
about
like
probably
a
question
for
josh
like
if
we
said
like,
let's
agree
on
the
date,
at
least
for
the
data
model-
let's
have
a
clarity,
so
the
api
work
can
be
totally
unblocked
and
let's
set
the
target
date
at
end
of
march.
Would
that
be
possible
if
it's
possible,
but
still
risky?
I
think
it's
still
worth
it
that
we
set
the
target
date.
At
least
we
use
that
to
measure.
Do
we
need
more
people
like?
B
Are
we
okay
or
we
probably
need
to
like
speed
up
and
also
it
helps
the
language?
And
I
know,
like
john
john,
wasn't
on
the
java
side
like
he.
He
has
this
issue
of
like
implementing
something
and
then
the
spike
changed
and
like
go
back
and
forth
again,
and
I
know
every
everyone
come
here.
You
got
a
company
behind
you.
It's
got
some
pressure.
You
want
to
at
least
have
some
estimation
like.
Where
are
you
going
to
make
stuff
happen,
so
you
can
communicate
back.
This
would
help
everyone
in
the
community.
D
I think, to get the basic gauges, counters, and histograms at least to a level of compatibility with Prometheus and such... and with histograms there are going to be some questions lingering. So we may end up wanting to add stuff, but we're going to be ready to commit to stuff that we will not break, and stabilizing what we have, I think, is the answer. I like end of March.
A
I
think
would
we
like
to
track
this
in
the
current
issues
and
and.
B
The
labels
that
we
can,
I'm
not
sure
when
we
work
on
the
tracing
like
triage,
it's
it's,
we
have
a
clear
date,
so
we
start
to
see
like
which
one
we
should
do,
which
one
we
should
give
up
and
and
for
matrix.
Currently,
I
I
think
the
ga
thing
is
just
too
big.
It
covers
everything
and
we
probably
need
to
like
make
that
in
multiple
faces.
C
Maintain a project board for that specific set of tasks, with deliverables by phase. It's just something I discussed in the GC today, and I think that was something that was recommended. So again, that was just the model that was suggested, but...
C
All right, good. So what I was saying is that, taking the example of the Prometheus group and the very specific set of tasks that we have itemized for each phase, and working with all stakeholders on that: maintaining a repo for the active development of those tasks, and then also maintaining a project board for each, to clearly itemize and relate the issues and PRs for each of those tasks on the project board.
A
Okay, yes, I'd be happy to help set that up, like set up a separate project board for the... As I understand it, splitting the discussion into two groups, we're trying to track two different work streams. Is that so?
G
Actually, I think the first project board we need to track is Josh's short list of things to do in the model, and I think once that's done, then we can give you an actual inventory of what we'd like. Then you can actually do design on the API and collector independently and fragment it. But that's the first short list: nail it down, get consensus that that's the only thing we have to do around this OTLP spec, and get that out the door.
G
Then
you
can
fragment
right,
but
I
think
like
if
you
were
to
make
those
the
real
question
here
is:
what's
that
inventory
of
tasks
to
do
and
from
what
I've?
Seen
in
my
you
know,
five
months
in
the
community
is
the
inventory
of
tasks
is
never
ending
and
keeps
getting
added
for
ga
right,
and
I
think
that's
that's
that's
step.
C
Mean
I
agree
with
you
josh,
but
I
also
would
say
that
think
of
it
as
stable
and
iterative,
not
gay,
there's
no
such
thing
as
gay
per
se.
A
All right, Josh, jmacd, perhaps I'll sync up with you offline in order to grab that list, build this out and populate it, and then I'll add this to the topics for the maintainers meeting. That way we can track it across.
D
Great
well,
then,
what's
left
in
this
list
of
agenda
items
here
is
one
item
that
is
very
much
one
of
these
new
api
group
discussions
from
victor,
and
since
I
actually
want
to
hear
it,
I
think
it
would
be
appropriate
to
at
least
talk
a
little
bit
victor.
Are
you
here.
H
Based on this conversation, I think this item is likely going to be in the API space. However, I think it is kind of the junction point between the API and OTLP.
H
Simply, I'm just reading into the initial spec on what is fundamentally considered a unique metric, and just that definition alone, whether that be applied to OTLP or applied to the API or SDK. At least for me, fundamentally: what is a metric? Is it a single instance? Is it shared? Given a name, is that always going to be the same? How are they going to be joined, and so forth? So I don't know where this...
D
You're
right,
this
is
a
data
model
question.
I
agree,
but
let's
point
out
that
the
the
questions
that
you're
asking
maybe
and
some
maybe
can
be
addressed
as
more
spec-
can
be
written
about
how
to
address
what
happens
when
the
data
arrives
in
a
certain
way.
First,
is
we
need
to
make
a
structural
change
to
delete
a
field
or
add
a
field
or
change
some
structure?
D
That's
going
to
actually
break
like
existing
uses
where
today,
the
meaning
is
kind
of
self-evident,
and
nobody
and
and
the
cases
that
you're
asking
about
in
a
lot
of
today's
metric
systems
are
just
misconfiguration
and
what
we're
trying
to
get
to
is
a
place.
Where
are
some
we've
expect
a
protocol,
that's
very
clear
about
what
correct
behavior
is
and
what
happens
when
you
do
incorrect
behavior,
I
think,
and
that's
I
guess
I
would
just
call
that
a
data
model
question
so
yeah.
H
Happy to collect information and collate it and so forth as necessary, but at least for me this is just fundamentally understanding what metrics are.
D
Others
think
about
it,
because
I
know
in
in
the
early
days
the
open
sense
of
specs
said
something
it's
illegal
to
register
an
instrument
with
the
same
name,
a
different
type,
and
that
makes
sense
to
me.
But
yet
it's
very
clear
that
in
the
real
world
and
there's
version
sku
and
there's
different
people
implementing
code,
you're
going
to
end
up
with
different
metric
definitions
and
schemas
are
going
to
change.
Okay,
it's
a
fact
of
life,
so
the
collector
can't
just
say:
oh
you
did
it
wrong.
G
I think the answer to the question is: where will you get the best error message for the user? And if sending it downstream and letting that side give the best error message is it, then that's the right thing to do. If we can't give a good error message, we shouldn't fail. Fundamentally, these are the kinds of problems you need to be able to diagnose and fix.
H
Which
brings
up
a
very
interesting
conversation
regarding
what
is
the
role
and
again,
I
don't
know
where
this
conversation
belongs,
but
where
is
the
role
of
the
api
and
and
fundamentally
to
me?
I
think
that
the
role
of
the
api
is
nothing
more
than
just
collection
of
what
the
user
has
in
terms
of
data
and
there's
no
other
meaning
to
it.
How
you
aggregate
it,
how
you
send
it
down
the
pipe
to
otlp,
how
you
the
backend
vendors,
collected
and
aggregate
whatever?
That
is
really,
I
guess,
sdk-ish
back-end
vendor
specifics.
D
My philosophical position is: there should be no errors in an instrumentation API. You are trying to say what you're trying to say, and it's the reader who should see that there was an error. That's my opinion. Think of these as write-only; that means there's never an error. But I don't know that that is a widely held opinion; that's my personal opinion.
D
Yeah
sergey
refers
to
what
I'm
thinking
of
as
well
is
that
we've
placed
a
kind
of
general
constraint
on
hotels
that
we
shouldn't
be
giving
errors
to
users,
but
this
is
a
case
where
I
think
the
the
spirit
is
that
we're
helping
the
user
by
telling
them
that
they
tried
to
register
two
instruments
with
the
same
name.
But
then
again
is
that
a
real
problem.
We've
got
this
instrumentation
library
idea.
You
have
named
tracers
and
named
meters
now.
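The same-name, different-type rule from the early OpenCensus spec being discussed can be sketched as a toy registry. The `MeterRegistry` class and its shape are hypothetical, not the OTel SDK: a repeated, identical registration hands back the same instrument, while the same name with a new type is flagged as a conflict.

```python
class MeterRegistry:
    """Toy registry enforcing one type per instrument name (hypothetical)."""
    def __init__(self):
        self._instruments = {}  # name -> (kind, instrument)

    def get_instrument(self, name, kind):
        if name in self._instruments:
            existing_kind, inst = self._instruments[name]
            if existing_kind != kind:
                raise ValueError(
                    f"instrument {name!r} already registered as {existing_kind}")
            return inst  # same name + same type: reuse the same instrument
        inst = {"name": name, "kind": kind}
        self._instruments[name] = (kind, inst)
        return inst

reg = MeterRegistry()
a = reg.get_instrument("latency", "histogram")
b = reg.get_instrument("latency", "histogram")   # same definition: ok
try:
    reg.get_instrument("latency", "counter")     # same name, new type: conflict
    conflict = False
except ValueError:
    conflict = True
```

Whether such a conflict should be a hard error at the API, or just something a reader sees later, is exactly the open question in the discussion.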
D
You can easily convert ints to floats, so I'm not sure how I feel there. The reason, as far as I know, why we have ints and floats is that some people like to count things that are very large, like bits on the wire, and it overflows the floating point pretty fast. So you need to be able to choose, but it's very likely we're going to have mixed data: the JavaScript SDK reports an integer where somebody else reports floating point, so the collector is going to see that.
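The overflow concern about counting very large things in floating point can be shown concretely. A 64-bit IEEE 754 double (what JavaScript numbers and most "double" instrument variants use) stops representing every integer above 2**53, so small increments to a huge counter silently vanish:

```python
# Above 2**53 a double cannot represent every integer, so +1 is lost.
big = 2.0 ** 53            # about 9.0e15; bits on the wire add up fast
assert big + 1.0 == big    # the increment disappears in double precision

# The same count held as an arbitrary-precision integer keeps the increment.
exact = 2 ** 53
assert exact + 1 != exact
```

This is why the choice between integer and floating-point instruments matters, and why the collector has to tolerate a mix of both from different SDKs.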
B
If you look at the Micrometer docs, I kind of like the way they describe it: it's just a way you organize things, and some people choose the hierarchical format, some people choose a different approach. But ultimately the name is just a way you refer to a bunch of things, and you can put meaning in it either way; so make that hierarchical, or some other approach.
D
One
of
the
things
we
have
to
spec
out
is
whether
you,
if
you
use
an
instrument
with
two
different
units,
is
that
okay,
so
are
we
going
to
get
into
the
game
of
of
combining
units
or
standardizing
or
normalizing?
I
don't
think
we
should
so
it's
it's
meaningfully
incorrect.
It's
meaningful
and
correct
to
just
pass
through
any
instruments
with
different
units
as
separate
units,
separate
instruments
and
then
a
vendor
can
do
the
right
thing
if
they,
if
they
want.
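The "pass them through as separate instruments" choice amounts to including the unit in the stream identity. A rough sketch, with assumed dict shapes rather than the real collector data structures: the same name reported with two units yields two streams, and no merging or conversion happens.

```python
def stream_key(point):
    # Identity includes the unit, so differing units never merge.
    return (point["name"], point["unit"])

def group_streams(points):
    """Group incoming points into streams keyed by (name, unit)."""
    streams = {}
    for p in points:
        streams.setdefault(stream_key(p), []).append(p["value"])
    return streams

points = [
    {"name": "request.duration", "unit": "ms", "value": 12},
    {"name": "request.duration", "unit": "us", "value": 340},
    {"name": "request.duration", "unit": "ms", "value": 7},
]
streams = group_streams(points)  # two streams, not one
```

A vendor downstream is then free to convert and merge the two streams, but nothing in the pass-through path has to.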
G
There is this reality of not owning the backend. We're not owning the storage, so you can't really control what the hell goes on with the storage of it, right? So to some extent you can do the best you can for getting things over the wire and say: all of this is making it to your vendor. And if your vendor doesn't do a good job for the user, then that just kind of puts pressure on the vendor to change, if users want this, right? But I don't know if that's a solvable problem.
D
It's
like
this
with
mixed
labels
as
well.
If
you
have
like
two
dimensions
on
one
set
of
data
and
three
dimensions
on
another
set
of
data,
the
storage
system
is
going
to
tell
you
what
happens
then,
and
your
query.
Language
is
going
to
define
how
it
works
and
the
data
model
is
it's
more
than
just
data
model.
At
that
point,
it's
processing
model.
H
Then
the
question,
then,
is
that
all
of
these
that
we're
talking
about
and
how
to
aggregate
the
data
based
on
how
we
slice
and
dice
the
different
labels
so
forth.
These
all
sound,
like
vendor-specific
issues
or
vendor-specific
value,
add
that
each
vendors
can
choose
to
join
units
can
choose
to
allow
more
dimensions
or
less
dimension.
H
So
then
the
question
is:
why
do
we
have
sdk
level,
aggregators
and
so
forth?
If
there's,
no,
you
know
clear
understanding
of
how
to
split
labels.
Multiply
labels
normalize
units
so
forth
or
even
name
spacing.
D
There
is
clear
understanding
and
I
think
units
are
a
pretty
simple
example.
So
could
we
provide
a
collector
pipeline
stage
that
standardizes
units-
you
know
if
you
see
microseconds
change
them
in
milliseconds?
Like
that's
pretty
easy,
that's
like.
Maybe
the
community
wants
to
do
that
and
they'll
they're
going
to
contribute
it,
but
but
like
as
far
as
we
are
mostly
sent
by
vendors
and,
like
our
my
vendor,
doesn't
care
about
this
feature
of
units
standardization.
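The microseconds-to-milliseconds stage being described could be sketched minimally as below. The function and dict shapes are hypothetical, not the real OpenTelemetry Collector processor API: a stage is just a function over points, and a pipeline chains stages.

```python
def normalize_units(point):
    """If a point is in microseconds, rewrite it in milliseconds."""
    if point["unit"] == "us":
        return {**point, "unit": "ms", "value": point["value"] / 1000.0}
    return point

# One stage here; a real pipeline would chain several such stages.
pipeline = [normalize_units]

def process(points):
    for stage in pipeline:
        points = [stage(p) for p in points]
    return points

out = process([
    {"name": "request.duration", "unit": "us", "value": 2500},
    {"name": "request.duration", "unit": "ms", "value": 3},
])
```

Because the data model carries the unit on the wire, a stage like this can live in the collector once, instead of being reimplemented in every language SDK.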
D
So I'm not going to work on that, but someone could; the data model supports it. And likewise with mixed labels: we've talked a lot about this, some observer instrument, and there's something you can do with labels.
D
But
if
the
vendor
doesn't
it's
it's
a
case
of
well,
the
data
is
there
and
you
can
work
with
it,
but
you
didn't
you
know,
there's
some
question
of:
maybe
the
open,
telemetry
collector
will
one
day
get
a
pipeline
stage
that
can
do
exactly
the
right
thing
to
do
down
sampling
and
removal
of
labels
and
reductional
of
cardinality.
But
it's
not
something
that
my
vendor
cares
about
right
now,
and
so
someone
can
do
that
yeah.
So.
H
I
think
you
and
I
talked
a
little
bit
josh
in
that
you
know
again.
I
I
I
don't
know
where
I
stand
on
the
sdk,
but
to
me
up
front,
the
sdk
just
seems
like
it
should
be
very,
very
vendor
neutral
in
the
sense
of
why
can't
we
just
provide
completely
independently
whether
or
not
you're
going
to
write
your
own
sdk
or
use
hotel
sdk.
D
The API... to me, the SDK focus that we've had has been what you described, and I'm okay. It may not look that way, but there are sort of several subparts within the SDK bubble; the idea is that there are standard components that you can swap around, and you can add another pipelining stage in to do something different.
D
But
when
you
stand
back
and
look
at
the
whole
sdk,
it's
it's
a
very
complex
piece
of
machinery,
and
I
don't
see
people
really
swapping
pieces
in
and
out
of
them,
I'm
not
sure,
there's
a
real
win
there,
but
it's
but
you
but
you.
But
you
touched
on
a
real
thing,
which
is
that
openometry
tried
to
create
this
idea
of
a
semantic
api
separation
from
the
sdk,
so
we're
gonna,
we're
gonna
define
this
api.
D
That's
like
the
meaningful,
semantically,
meaningful
operations
that
the
user
is
gonna,
gonna
use
and
that
and
that
their
need
to
understand
what
happens
inside
the
sdk
ends
at
the
api,
and
that
was
provide
those.
The
hotel
mission
was
to
do
that
so
that
vendors
would
come
in
and
be
willing
to
share
their
work
on
an
sdk,
because
if
ever
that
sdk
was
not
good
enough,
the
vendor
could
bring
in
their
own
sdk
and
and
completely
replace
what
the
community
sdk.
D
I
don't
foresee
that
happening,
because
it's
a
tremendously
big
investment
to
develop
a
whole
sdk.
So
what
we
have,
therefore,
is
this
promise
of
an
api
sdk
separation
that
probably
no
one
is
going
to
use
and
then
an
sdk
which
is
really
three
or
four
parts
glued
together
through
some
interfaces
that
can
be
extended
in
various
ways
to
change
cumulative
to
delta
or
to
change
which
exporter
you're,
using
or
to
reduce,
dimensionality
and
so
on.
That
is
sort
of
an
assembly
kit
for
for
sdks,
more
so
than
it
is
a
monolith
of
an
sdk.
H
Yeah
yeah,
I
agree,
and
I
again
I
have
no
judgment
on
how
that
decision
came
to
be
and
whether
it's
good
or
not,
my
my
simple
answer
is:
if
I
wanted
to
come
and
write
my
own
sdk,
I
will
be
given
the
way
that
it
is.
I
would
be
disincentivized
to
do
so
because
of
all
of
the
fine
work
that
the
hotel
community
has
put
in
and
there's
no
easy
way
for
me
to
use
any
of
that,
because
those
are
tied
into
the
sdk.
And
thus
I
will
not
write
an
sdk.
B
I
have
a
different
understanding,
so
I
I
think
most
most
of
the
open,
telemetry
sdk
were
were
engineered
in
a
componentized
way
and
I've
seen
their
approach.
People
take
the
open,
telemetry
sdk
and
they
build
the
sdk
on
top
of
that
they
expose
extra
stuff,
but
majority
of
the
features
are
coming
from
one
existing
sdk,
but
probably
this
is.
This
is
more
like
a
general
open
time,
three
spec
meeting
topic
without
something
here.
Sorry,
I
was
talking.
D
In
the
hotel
go
metric,
sdk,
there's
a
processor
api
and
you
could
substitute
a
units.
Conversion
like
you
could
swap
in
a
units
conversion
processor.
That
would
do
the
right
thing
in
there
and
I
I
don't
think
that
that's
worth
doing
because
then
there's
seven
nine
other
languages
to
do,
and
I'd
rather
do
that
in
the
collector.
But
it's
something
you
could
do.
D
Jonathan, you're smiling. Do you want to go next? I'll leave my item for the end, if there's time.
C
Can I answer that? So I think that we have had one follow-up at least, with the Prometheus work group starting off; the other follow-up... I mean, I know, I think, Jonathan, you were already joining in there, right? It was yesterday.
I
Morning,
yeah,
no,
I
I
I
wasn't
there
but,
like
I
I
I
am
okay,
that
I
am
mostly
important
on
the
on
the
api
side,
like
on
the
on
the
user's
side,
what
the
users
you
interface
with.
C
Okay
and
and
then
the
second
action
item
that
we've
had
is,
of
course,
is
both
josh's
were
mentioning
actually
starting
to
work
on
the
data
model,
and
that's
something
that
you
know
we're
in
the
process
of
crystallizing
and
then
the
third
item
is
that
all
the
tasks
that
we
have
mapped
out
so
far
in
just
discussions
in
the
web
group
as
well
as
the
you
know,
community
discussions,
we
I'm
adding
them
as
issues.
D
Thank you, Alolita. I look forward to you shepherding that effort and helping us stay organized and on track, really.
D
Well, it's a pretty small group here, and I don't mind sharing this link, but maybe it's time to end the meeting and you can read this offline. I just wrote this today in response to some of the questions from the workshop and the Prometheus working group yesterday, in which I was trying to answer the question of: can we emulate the 'up' metric in a push model? And so I have a proposal there. You might want to read that and think about it, and continue following this issue.
D
I
think
it
I
think
we
can
make
progress
on
this
and
pretty
soon
we'll
have
prometheus
working
in
the
open,
telemetry
collector.
D
It's
in
it's
it's
my
item
there,
oh.
D
It
we're
out
of
order
now
but
yeah
this,
and
maybe
I
should
be
posting
this
in
another
getter
or
in
or
some
way.
I
need
to
get
this
in
front
of
the
prometheus
working
group,
because
this
issue
has
been
open
since
october
and
maybe
now
we
can
make
progress.
Yep.
D
We're all doing this together. It seems like I talked a lot here. I started the OTel metrics SIG formally about a year ago; it was called the OTel metrics office hours before that. And I don't need to be the leader of this group; I'd be happy to let other people run the meeting. So who makes decisions? We all do. Please make proposals, yeah.
A
Yeah, this Google Doc is also editable by the public, so if you have some other agenda items you want to queue up, feel free to add them before the next meeting. The OpenTelemetry community repo is also a good resource on how we've structured maintainers, contributors, and triagers, and then there's the group of overall technical committee folks, who have an overarching understanding of how the specification will impact all the different languages.
D
Thank you. Get organized is the answer, and I'm also looking at Josh Suereth, who has definitely stepped up in recent months to help get more organization here, so he'll be part of the answer, I think, too. Thanks.
G
Better than the old Scala release process, that's for sure.