From YouTube: 2021-02-26 meeting
A
Okay, it's been two minutes, so I'll start. First, I want to give a quick update on the progress. One thing is that I started a draft blog post, which has been under multiple reviews; a lot of folks helped review and comment. I believe I've resolved all the current comments, so thanks everyone. I propose that we publish this early next week, so I'll work with Pad and get this published.
A
If you haven't reviewed it yet, you still have a few days, so please take a look and see if it's clear. I'm trying to focus on the scope: why are we doing the metrics work compared to Prometheus, what additional value we're bringing in, what the relationship with Prometheus and StatsD is, and what the rough timeline is. As part of that, I also created the timeline here; this is based on the discussion on Tuesday.
A
I want to use that for the metrics part. Josh, I also created one for the data model, and we can figure out how to change the tags. Currently we have a huge GA tag which covers everything; I'll try to break that down into smaller pieces.
C
There's a data model tag at the moment. It may not be perfect, but it's pretty close, and I've been trying to move all the data model issues into the protocol repository so that they're in the same place. They're probably not all there, but most of them are, at least the ones where we're trying to change the protocol within the next month or so.
A
Yeah, I think that should be fine. For the GitHub projects, we talked about this briefly on Tuesday. I think currently we're still suffering; we're not well funded here. So instead of creating the data model one, because I think you're already overloaded, I didn't create that. I just created the three for the metrics project, like the API/SDK part, and I don't have to work on all three of them.
A
I know Ted is trying to find someone, and I'm also trying to find some PM from Microsoft who would be willing to help. I put the exact wording from the draft blog post here, so I'll try to put issues in the proper place, link them in this project, and see how it goes. Also, this OTEP has been under review for several days, and I think all the outstanding issues are resolved, so I'll ping people and see if we can merge it as soon as possible.
A
Thank you. Now, I've been looking at the current spec issues, and there's also a GitHub Discussions on the spec. I've seen some questions from two companies, and I've also looked at the Java prototype, Python, and .NET.
A
Instead of listing all the questions I found, these two are the outstanding ones that I can see; everyone is kind of asking them. The first one, I think we talked about in the data model meeting, and Pad has a follow-up item. The basic question is: you have multiple dimensions, and if we look at the spec today, the dimensions are what we actually call labels.
A
Why do we only have string as a type for metrics? The answer is that most of the metrics backends use strings, and also, when you try to aggregate, if you have an array, how do you aggregate? How do you even compare whether two values are equal? So this is not just about the export format; even within the SDK, if you don't export the data anywhere, when you want to do aggregation you need to compare: is this thing equal to the other thing? How are we going to do that?
A
So this is a challenge, and Ted is trying to get feedback from people: if we want to have flexible types, what would be the proper way of doing the comparison and the conversion if the backend only supports strings? And a lot of people are not clear on why a small group of people believe that we should support more than string.
A
So I tried to gather that information, and I'll explain it here. The number one thing is that customers who want to use all the signals, traces and metrics together, would have a hard time: for traces I use attributes, for metrics I use labels, and for another thing I use properties. For them it's all just key-value pairs; it's very hard to understand the difference. And on the other side, people who are using traces have already made something like "this is my HTTP request."
A
In this way, it seems we have to do the conversion anyway. If they put a status code as an integer in the span attributes, either we ask them to go and report everything in string format by calling a separate metrics API, or we do something ourselves and face the same problem. So it seems we're a little bit stuck here while we're waiting for Pad, and Pad is facing this challenge of how we do the conversion.
A
I want to get some ideas from folks here; I want to go through one by one. Just tell me what your preference is, and whether you have a strong preference: do you think we should stick with string, or should we support arbitrary types? I'm more on the flexible side, because I believe that later, if some backend starts to support integers as dimension values, and we've converted everything to a string in the API, we've kind of dropped that semantic.
A
I know Ted is probably in agreement that we should support more than string, and I know Bogdan is a strong believer that we should have string only. Currently I think there's no common consensus, although it seems we're moving towards the flexible one. So, just a quick round table.
D
Can I clarify something here? Are you asking if we should support it in the API or in the protocol? (In the API, just in the API.) So if we support it in the API, and I am an exporter, does that mean that I am responsible for converting types into strings if I need strings, or is there going to be an OpenTelemetry canonical way of doing that?
D
Honestly, this isn't a huge deal, because most languages have simple toString stuff, but I'm just curious, because some of the toString conversions you would have to do are not obvious. My main question is: there will be backends that need to convert to strings; we know that pretty much all backends do now for metrics, right?
D
So are we putting that responsibility on the exporter in the API? Or are we putting this in the OTLP protocol, so we preserve types all the way through there as well? Those are kind of my two questions. I think if we're not willing to go into OTLP, it begs the question of whether it belongs in the API.
D
That said, I think the convenience of being able to use a native type in a typed language is awesome. If it gets converted into a string, I don't care as a user, and as a Googler, we use strings. So again, I don't care if you keep types or not; I'm converting them to strings either way.
E
I think that's the core thing. Because we have multiple types elsewhere in the OpenTelemetry API, it's inevitable that users are going to convert these into labels somewhere, either from resources, or because they want to make metrics out of their trace data. So I don't think we escape the question of how you do this; we would just be pushing it out onto the user community to sort out, which I think would be worse. It would be better for us to just pick a thing, basically.
E
I haven't been to the W3C meetings in a long time, but when I last left off, we were looking at the IETF dictionary type that Paul Henning and crew were putting together, which looked efficient and, you know, was coming in as a standard.
F
Okay, by the way, I'm not opposed to this. I just ask, Ted, as Riley mentioned, that we need to define a toString method on these attributes that we can use. Also, we need to define an equals operator on them; without this we cannot proceed. The main difference between metrics and any other place where we use attributes is the equals operator, right? So the equals operator is extremely important, and, FYI, the equals operator, when you apply toString, should not generate duplicates.
F
So if, for example, I have an attribute as a string, say "256" as a string, versus the integer value 256: should they be equal? That's where a lot of type conversions get tricky. The problem I'm mentioning here is that if you export this to a backend where you stringify things, you still need to preserve the uniqueness of the label combinations, and if after the string transformation that is no longer true, you break the backend.
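The uniqueness concern above can be sketched in a few lines of illustrative Python (the function name and label keys are invented; this is not OpenTelemetry SDK code): two label sets that are distinct in a typed API collapse into one after naive stringification.

```python
# Illustrative only: two typed label sets that a typed API treats as
# distinct become identical once every value is naively stringified,
# so a string-keyed backend would see a single time series.

def stringify_labels(labels):
    """Naive conversion: every label value becomes its string form."""
    return {key: str(value) for key, value in labels.items()}

typed_a = {"status_code": 256}    # integer attribute value
typed_b = {"status_code": "256"}  # string attribute value

assert typed_a != typed_b         # distinct label sets in a typed API

assert stringify_labels(typed_a) == stringify_labels(typed_b)  # they collide
```

Whether that collision is a bug or the desired behavior is exactly the disagreement in this discussion.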
F
But Josh, there is one important thing about resources: we don't aggregate on resource attributes right now. We just take them as they come and have them available, but we don't apply any operation on top of resource attributes. That's why we don't need it for the moment. We'll probably need it at some point, but what I'm trying to say is that right now we don't do any operation on the resource attributes.
F
We just do the stringify operation at most. But for the other things, for the time series, we do aggregation, and we do have to compare equality between different label sets, or attribute sets, or whatever you call them, and that's where we have trouble with multi-typed values if we keep them as multiple types anyway.
D
To make it crisp, though: the main issue here is multi-typed values that have inherent errors in their representation, like floating-point numbers, when they go to string. You can't get one-to-one equality with floating-point numbers, and I don't know of other types off the top of my head that have that issue. So it's literally floating-point numbers as values we're talking about, and the question I have is: is that a practical scenario, where someone is using floating-point numbers in labels and expects aggregation?
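A minimal illustration of that floating-point concern, in plain Python: a value the user "means" as 0.3 arrives via arithmetic as a slightly different float, so a stringified label splits one intended series into two.

```python
# Floating-point label values: binary rounding means the computed value
# differs from the literal, and so do their string forms, producing two
# distinct label values where the user intended one.

computed = 0.1 + 0.2  # binary rounding: 0.30000000000000004
literal = 0.3

assert computed != literal            # the floats are not equal
assert str(computed) != str(literal)  # and neither are their strings
```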
F
That's not the only problem. Another problem is that the number 256 should be equal to the string "256"; it's even cross-type. We have this problem because if I export this to Stackdriver, and I aggregated 256 the number as a different time series than the string "256", I'm in trouble, because I'm exporting the same time series to Stackdriver twice. Does what I'm saying make sense? Even between types we have problems, not only with floating-point numbers.
G
Resources, though, as Bogdan says, we haven't really done anything with. I am not familiar with that part, but if I provide an object as a label, whatever, and OTLP, let's say, supports that, how will that look on the wire?
E
We don't support objects. I think there's another way of putting it. I just want to clarify that I kind of agree with Jonathan that there's a separate issue of whether metrics themselves are multi-type; I think it's fine for them to say string. It's more that what we're missing right here is toString on attributes, right?
E
So it's more that we have to figure out what toString is, and, somewhat similarly, it would be more convenient for the end user to just have one attribute type that they use anywhere we have attributes, and that includes label sets. Since we have to sort out what toString looks like anyway, it's not really harmful to provide that convenience to people.
E
But if we say no, I feel like it's going to end up coming back as something we have to deal with regardless, because people are just going to want to convert trace attributes and resources and other things to labels. But I do agree that's totally separate from whether or not it goes into OTLP.
G
So if we could separate these two things: from the API usability perspective, I think it would be very valuable for users to be able to provide something as a label or attribute which is not a string, let's say a color object or something, and the SDK under the hood could convert it to a string easily, so that the user does not need to do that.
F
That's a great thing we should be supporting; I don't see why we wouldn't. But that's not the case right now: our current attributes do not support any random type; they support only a predefined list of primitives and arrays. That's a separate discussion; we can discuss attributes supporting any random object, but I think it's independent of this.
C
What we're saying now, I think the proposal is: we keep string-valued labels, and maybe we could call them string-valued attributes on metrics too. And we're going to live with the fact that resources may have type information and these attributes do not when they're on metrics, and that's okay. But then there's what happens when you're converting, say, a span into a metric, which is going to happen.
F
Yeah, I think it's reasonable. Just FYI, any time somebody adds a non-string attribute to the metrics, we will need to do a costly conversion to string, because our aggregator has to work on the string value, if that's how we define the uniqueness of the label value.
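The cost being described can be sketched roughly like this, assuming a hand-rolled aggregator whose series identity is the stringified, order-independent label set (illustrative Python, not the actual SDK): every recorded measurement with non-string values pays a conversion before the sums can be grouped.

```python
# Illustrative aggregator: series identity is the canonical, stringified
# label set, so every measurement converts its label values to strings
# before it can be grouped into a sum.
from collections import defaultdict

def series_key(labels):
    # Canonical key: sorted (name, str(value)) pairs, order-independent.
    return tuple(sorted((k, str(v)) for k, v in labels.items()))

sums = defaultdict(float)

def record(value, labels):
    sums[series_key(labels)] += value

record(1.0, {"status_code": 256, "method": "GET"})
record(2.0, {"method": "GET", "status_code": "256"})  # int vs str: same key here

assert len(sums) == 1
assert sums[series_key({"status_code": 256, "method": "GET"})] == 3.0
```

Note this key choice deliberately merges the integer 256 and the string "256", which is one of the two positions debated above.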
C
I think we have similar concerns with units, for example: is it okay to mix units? That may never happen inside an SDK, but it does inside a collector. So we could just define these metrics to be different if the types differ: integer 256 is different from string "256", and let the vendors sort it out. Some vendors might consider those the same, and some might not.
C
Sorry, there are Prometheus exporters in the SDK too, so this argument fell apart. We could define a collector-like stage to do aggregation that canonicalizes those and re-aggregates them together; I would call that a semantic aggregation doing string canonicalization or something like that. I don't like it. Now I've said it: I don't like it at all.
F
Let me say this: before jumping in and making this change, I would like to think through all these possible things. I'm not saying not to do it; I'm just saying that I would like to think about all these implications. That's all.
A
Okay, I'll do some time control here. It seems we're in the same place as what we talked about last week. Ted mentioned there will be a follow-up on his side to do the conversion, but he will definitely need some help, so I'm doing some small research on my side. Let us know if you see anything, if you have an idea of how we should do the conversion.
I
Adding to this conversation: to me this question is really about what the motivation is for someone to put a non-string class into a label. At least in my experience, typically it is: I have some user context that I want to carry forward, and the API doesn't allow me to pass this context in any other way, so I'm going to try and shove it in as a label.
I
So one aspect of this particular problem is: does it help if we provide a separate extensibility point for them to carry a user context, some context of any object type that they want? This context by policy would purely remain in-process, and it would be up to the exporters, if they know about this particular context type, to extract it or use it however they wish, including extracting the user context and building up labels as they see fit.
I
Obviously this doesn't necessarily fit into, quote, the SDK aggregation and so forth. But my experience previously has been: I want to pass a context from, let's say, my auto-instrumentation, with some rich context, down to an exporter, my exporter, or something along that line, and I'm forced to serialize and deserialize it, which potentially causes other people, other vendors, you know, plugins and importers and exporters, to have to deal with my 10k worth of serialized data.
I
So anyway, that's just one perspective: perhaps divorcing the idea of a string from needing an object, if we gave them a different place for it.
F
The main difference, Victor, is that we apply operations on top of this context: we apply some operations during what we call the aggregation phase of the SDK, and we need to have equality there, because it's part of this. And we understood we have to serialize them in a way where we can compare apples with pears, unfortunately, because that's how the world works.
C
Actually, Victor, I recognize your use case, trying to stuff sort of application-specific information into a label. I get it; I think that's a separate one, and I'm not sure it's one that OTel has really talked about much. I believe it's more of a convenience matter.
C
"I have an integer, that's my answer, that is my type, and I don't want to have to convert it to a string, because it is an integer." So I feel like it's mostly a convenience in the SDK/API world. Why don't I offer a middle ground, which is to say that in the SDK and the API we'll treat them as different, because what are the chances that you're going to have the same label both ways?
C
So that's a runtime error; I don't think it's really going to happen. But then you'd have to say that in the collector it's definitely going to happen. It's intentional! It's no longer an accident, no longer an error, in the collector. In the collector we could do the fancy correct thing of turning all the labels into strings and re-aggregating in a semantically meaningful way, which is not easy but doable.
E
Victor, we do actually have this ability now for synchronous plugins: you have the context object, and if you're using a synchronous plugin like a span processor, it has access to the full context object when it's called.
E
You don't have access to that at the point where you're in an aggregator or the exporter, because that's happening on another thread, but there is some support for what you're talking about today. That is an interesting use case, though, because I do think OpenTelemetry is about making sure we're contextualizing all of this data sufficiently. But I agree with Josh, the thing we're looking for right now is different.
F
But let me push the problem a bit further. If we start supporting ints, sooner or later people will go and ask their vendors, "hey, why the heck do you not support ints for this? And maybe give me something fancy in your PromQL or equivalent query language, so I can say: show me the distribution of latency where status code is greater than 200 and less than 400," for example, or something like that. So they will.
E
I've been trying not to push on that, because I didn't want perfect to be the enemy of good enough. I do think having this convenience is super important, so I didn't want to tie it to that. But I do see it is also an efficiency gain and a simplification: we already convert attributes to protobuf, and if we support that in the protocol, there's no need to add string conversion on top of that in-process.
E
That's just pure overhead. So there are some attractive reasons for doing it and just delaying the string conversion to the point at which it would be required, like converting to Prometheus or something like that.
E
Only because you're forced to move it into the application process, versus picking where you're going to do it.
E
I'm curious if there's some terrible downside to just doing that: you could still support multi-types, but set your SDK to do the string conversion there, if that's where you want to do it instead of downstream. And if you have a Prometheus exporter installed in your SDK, you're definitely going to do it there.
D
It's also that type systems are hella complicated. Effectively we're locking down the entire type system of metric backends if we say "here are the N types that are supported and you can't support any other types," which is fine, you know, but there's a reason I think most metrics APIs right now are stringly typed: you can convert strings back and forth in any case. Look at databases and SQL, what they do, and how magical their type systems are.
D
I think that's my evidence that maybe we don't want to. I'm a Scala guy, so yes, I care lots and lots about types, but when it comes to databases, I think practically those are kind of living entities of craziness, and flexibility is far more important than anything else. So it makes me a little bit nervous to throw all these types at it, and it does make processing with OTLP way more complicated than just processing string-string pairs, right?
D
So I think that is a downside to consider. When it comes down to it, though, I think we just need to pick one and go, because we're going to regret whatever we do six months from now, we're going to find ways to work around that regret, and it'll be fine.
E
To help inform the rest of the discussion, I would love recommendations. I'm going to lean towards the IETF dictionary type as my proposal, but I'm curious if people have other proposals, if there are other standard string conversions people would use. There's JSON, which comes with a native library in most languages, but it is JSON and comes with the overhead that JSON has; I suppose that's also on the table. Anyway, that's quite an open question.
E
I plan to propose this IETF dictionary format, unless something else comes along that people want to see instead.
F
That means, if I remember correctly, that in the IETF dictionary a string has to have extra quotes. So if I had a label foo=bar, in the backend I would see it as quote-bar-quote, with quotes. So that might not be... isn't it annoying for people to see their strings with extra quotes in the backend?
E
Could be annoying, right; it could be the wrong choice. Maybe JSON is actually just fine... well, JSON has that problem too, correct. Sorry.
F
But we wanted those to be the same. IETF declares them as different; that's why I'm pointing this out: IETF declares quote-123-quote as different from 123 without quotes. And the annoying part would be for people to see their strings with quotes in the backend, I think.
E
That's good feedback. I'll look for another format and figure out whether there's a standard way to do this where the types are implicit.
A
Okay, I'll put a hard stop here. I know this is a hot topic, and a lot of folks here haven't had a chance to give their opinion; the people who are normally very loud here will speak out. For the others, if your voice was not heard, I encourage you to put your thinking in the meeting notes, and I'll summarize it.
A
Okay, another topic. You see I only have two topics here, because I knew this would be another hot topic. On this question, I think Jonathan put the question here. When I look at the prototypes in Java and in C# today, I see a problem. Say you're a library owner, for example you own an HTTP server library, and you start to instrument it using the OpenTelemetry metrics API.
A
If you look at the current spec, you kind of give people hints: this is an up-down counter, or this is monotonic. But there are many other hints. If you have 20 dimensions, you probably don't want the user to read a long document and figure out which ones to use; you want to recommend, "hey, by default I think you should take these five dimensions." And then it seems the hint has a much bigger scope: if you look at histograms, if I'm using an HTTP library, how the hell should I know...
A
...whether I should set up the histogram in milliseconds, or, if this is a very slow library, whether I should use seconds. That leads to a question: how should we handle these hints? If you look at the current API, it seems we're trying to put some hints on one semantic, like whether an up-down counter is ever-increasing, but we don't treat the histogram buckets as part of the hint.
A
So should we put all the hints in the API, or do we allow them as a separate, optional thing, so the instrumentation API itself does not leave any hint; it doesn't even know whether it's up-down? I know some libraries strike a balance: if you look at Prometheus, they also have some types; they have the counter idea.
A
So I want to get people thinking here: do you see this as a challenge? If, as library owners, we don't allow them to give hints, then all the customers using that HTTP library will have to struggle with "what are my default dimensions, and what are my recommended histogram buckets?" Do you think that's an issue, or are we fine?
F
I think we explicitly didn't want to have the hints directly in the API, because, for example: do I use a histogram with buckets, or do I use DDSketch for my latency distribution?
F
But we had the idea of default dimensions, or recommended dimensions, if I remember that's what we called them, and we essentially removed them for the moment. What I'm thinking is, we can do something like what OpenCensus did, and maybe have an extension of the API that is JSON-based or whatever, we can define how it looks, where every instrumentation can provide, in a language-neutral way, what the default labels and the recommended aggregation are.
F
Whether to use a histogram or something like that; yeah, something like the service config, Josh. I think we need to have something like that at some point. In OpenCensus we had a view directly exposed in the API, so that was the way to do this.
F
To be honest, I would make it an extension of the API, an API extension that depends only on the API, not on the SDK, and define a kind of configuration language there, or something like that.
F
That's how I'm thinking right now; I'm open to hearing others. The reason I want to decouple them is mostly that we don't want to force every implementation of the API to implement it. It's just a matter of: hey, this is a recommendation in this language; our API will support it and respect it, and it will be easy to consume with our SDK. But if you are using another SDK, they are open to do whatever they want.
D
Yeah, I feel like the use case you called out in the OTEP lines up really well with this: the person writing the instrumentation and the person consuming the instrumentation could be different people, and this notion of a view, or this notion of aggregation, is more about the person consuming than about the person producing, right? So when they are different people, we want to split that API and not blend those two concerns. However, it should be dead simple. This is a good task.
D
I want to call out, with my PM hat on, which I shouldn't have, that you can split these two tasks and have separate work streams, with different people looking at those two problems differently. With OpenCensus views in the API, I kind of wish they were either processors or hidden somewhere else, not directly in the API, or a separate component like Bogdan mentioned. So I think it would be really great if we treat it that way.
F
Yeah, and also the views in OpenCensus came with more, with the data and stuff. What I want here in this package is just something simple, just some POJOs, some plain objects that define some string properties or something, without implying what the data format looks like, because the view has getData or something like that on it, you know. There are a bunch of things; I just want something simple. Even if they write some JSON or YAML, it can be that.
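A hypothetical shape for such a language-neutral hints file (every field name here is invented for illustration; no such schema exists in the spec):

```yaml
# Hypothetical instrumentation hints, shipped alongside a library.
# Purely a sketch of the idea: recommended defaults that an SDK may
# respect or ignore, with no implied data format.
instrument: http.server.duration
recommended_aggregation: histogram
recommended_buckets_ms: [5, 10, 25, 50, 100, 250, 500, 1000]
default_dimensions:
  - http.method
  - http.status_code
  - http.host
```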
A
So if we believe the hint can live somewhere else, my question would be: do we think the up-down versus ever-increasing distinction is just one special hint, and should be treated the same as the default dimensions and histogram buckets and be part of the API extension? Or do we think it's special? Because I see Prometheus kind of treats that as special, but Prometheus is simpler; it doesn't have the seven instruments. We seem to have a working set of seven, and we have the struggle of that.
C
Those are more like model instruments that we've now translated into the data model, really, and if we're talking about naming, we can do that separately. But up-down counter maps into the Prometheus gauge, and there are two different ways of using it, and I think Bogdan nodded his head. I would like to consider it a semantic difference, but there is still a configuration which says: do you want it cumulative, or do you want a rate there? There are two semantically reasonable ways to view an up-down counter.
C
One of them requires you to keep state and track the total since the beginning, and then you can monitor it as a traditional gauge. But in the StatsD world, you could always take that up-down counter, record it as deltas, and then monitor it as a rate, and never pay the cost of cumulative conversion. And I think that might be a setting, like a config option, but I'm not sure.
F
It's a contract between the instrumentation user and our API: either they will provide only positive values, or they will provide positive and negative values, which have different semantics. This is not a configuration; this is not something that we can ignore. We have to know this for sure; it's a requirement for us to know.
C
I mean, it's part of the data model, is what Bogdan's saying, and we believe there's a semantic distinction; that's how I like to put it. But I think, Riley, your question stands for labels, for which aggregation you want, and potentially for histogram buckets and stuff. Those are all, I think, hints that are not going to change semantics, except it's really hard to make...
F
We do know, because, for example, for a counter, if you do a sum, we preserve the fact that it is monotonic or not.
I
I assume those two are somehow carried in OTLP, and that in the backend all I have to know is that in OTLP I see the instrument kind, which is just a string, and from that single string name I will then naturally apply my set of semantics to this data set. Is that how it works, or is there a different field that says "this is monotonic, yes or no"?
F
There is a field in the OTLP aggregation that says whether this sum is a sum of only positive numbers, or a sum of positive and negative numbers.
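A minimal sketch of that distinction, with plain Python dicts standing in for the OTLP protobuf messages (the field names follow the OTLP metrics proto; the helper function is purely illustrative):

```python
# Plain dicts standing in for OTLP Sum messages: the Sum aggregation
# carries an explicit monotonicity flag alongside its temporality,
# which is what a backend reads to pick its semantics.

counter_sum = {
    "aggregation": "sum",
    "is_monotonic": True,   # counter: only positive increments
    "aggregation_temporality": "CUMULATIVE",
}

up_down_counter_sum = {
    "aggregation": "sum",
    "is_monotonic": False,  # up-down counter: +/- increments
    "aggregation_temporality": "CUMULATIVE",
}

# One plausible backend default: monotonic sums render well as rates,
# non-monotonic sums are usually shown as gauges.
def suggested_view(sum_point):
    return "rate" if sum_point["is_monotonic"] else "gauge"

assert suggested_view(counter_sum) == "rate"
assert suggested_view(up_down_counter_sum) == "gauge"
```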
C
It tells you something. Like, for example, there's no loss of information in turning a positive, ascending, monotonic value into a rate, but there is potentially a loss of information when you do the opposite, or show a non-monotonic one. That's part of it here, but there's a tricky entanglement with cumulative versus delta, because it relates in ways that, I feel, would derail this meeting to talk about any more.
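A small numeric sketch of that asymmetry (helper names invented): a cumulative series always yields its deltas, but reconstructing a cumulative level from deltas needs a starting value that may never have been observed.

```python
# Sketch: cumulative -> deltas is always recoverable, but deltas -> cumulative
# needs the starting value, which is lost if you didn't watch from the start.

def to_deltas(cumulative):
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

def to_cumulative(deltas, start):
    out, total = [start], start
    for d in deltas:
        total += d
        out.append(total)
    return out

series = [10, 12, 15, 15, 21]
deltas = to_deltas(series)                           # [2, 3, 0, 6]
assert to_cumulative(deltas, start=series[0]) == series
# Without the true start value the level is wrong, though the shape matches:
assert to_cumulative(deltas, start=0) == [0, 2, 5, 5, 11]
```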
I
C
Where's the semantics? So in my document, which is a draft, I've tried to define multiple data models. There's an event model, which is like: I call the API with a number, and it has some meaning. But then, by the time it gets translated into OTLP, there's been some aggregation done, and the data types in OTLP explain to you what type of aggregation has been done, from which you can infer at some level what the instrument was. But as we know, we have, for example...
C
Counter is the synchronous form and SumObserver is the asynchronous form. They will both produce the same data output, because the semantics allow them to: one of them you change by deltas, and one of them you report the current value, and at the output you might have summed all those changes to get a total value. You can't tell, by the time you see the data, whether it was a sum of changes or whether it was the current total.
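That indistinguishability can be shown with a minimal sketch (function names invented, not the real API): both instrument forms collapse to the same number at export.

```python
# Sketch: a synchronous counter (reports deltas) and an asynchronous
# observer (reports the current total) produce identical output.

def export_counter(deltas):
    """Synchronous form: the SDK sums the reported changes."""
    return sum(deltas)

def export_observer(read_current_total):
    """Asynchronous form: the SDK asks for the current total at collection."""
    return read_current_total()

requests_deltas = [1, 1, 3]        # three API calls, each adding a delta
current_total = lambda: 5          # callback reading a live total

assert export_counter(requests_deltas) == export_observer(current_total) == 5
# By the time a consumer sees "5", it cannot tell which path produced it.
```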
C
And then it gets output as a time series, which is another form of translation that I've tried to start documenting. I think we get into confusion here, and I haven't answered all these questions either, because at some level, when you talk about Prometheus, you're going to output it as a counter or a gauge, and at some level Prometheus wants them to be counters.
C
But in the Prometheus API you can't see a way to do a SumObserver; you have to do this custom collector thing. And so what I'm trying to say is: you don't know how the data was put in. You just know that there's an aggregation there by the time you get to OTLP. And so the translation to time series is... we need to know this information; it's more than a hint, to do a correct translation into Prometheus.
C
There you go, because Prometheus wants to see a cumulative or a gauge, and so if it was monotonic we should have a cumulative, and if it was not, we should have a gauge.
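The translation rule just stated can be written out as a small decision function (a sketch with invented names, not the actual collector code):

```python
# Sketch of the stated Prometheus translation rule:
# monotonic sums become counters (cumulative), everything else a gauge.

def prometheus_type(is_sum, is_monotonic):
    if is_sum and is_monotonic:
        return "counter"   # Prometheus counters must only go up
    return "gauge"         # up-down sums and raw gauges both land here

assert prometheus_type(is_sum=True, is_monotonic=True) == "counter"
assert prometheus_type(is_sum=True, is_monotonic=False) == "gauge"
assert prometheus_type(is_sum=False, is_monotonic=False) == "gauge"
```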
I
So the aggregators, why wouldn't they just look at the hints and then apply the appropriate one? So, I guess the question is, you know, we know that we need this information, like monotonic and, you know, incrementing, and what have you. So I guess the question is: do we encode that in the counter type, or in hints to a generic counter type? I guess that's the question, right?
C
Yeah, it's funny, we were asking the same question over a year ago. In the early phases of this design, we had optional options on various instruments to try and tease this apart, and at some point we got very confused and decided to at least lay out distinct instruments with a one-to-one mapping from semantics to the aggregation that we wanted by default, so that we don't have hints anymore, or optional types. Like, at some point there's a gauge and it's either cumulative or not.
F
It's, for me, a hint. But the fact that the data semantic is positive or not positive is not a hint; it's a contract, and we treat that more strictly than a hint, because a hint we can ignore or not, but the contract we cannot ignore. If I know that this is up-and-down, I'm not going to allow people to expose it as a counter in Prometheus, because this is a contract. This is a bit more than a hint. That's what I'm trying to say, Victor.
A
So, one challenge: I think during the workshop session we did see feedback that if we have six instruments, the first thing when people use the library and try to instrument something is that they're going to see these six different things, and they have to pick one, right? How do they pick? In the Prometheus world, yeah, I think Prometheus has some, and also, if I look at Micrometer, it also has some, like, predefined...
F
A
I'm trying to find the balance. I think if we had a hundred, of course we'd fail, but six, I'm not sure. I'm not a metrics expert, but I think probably two or three, like if I look at the Micrometer API; I think that's easier for people to understand. So I'm trying to understand whether we have too many, or whether we think it's a fair number and we're just lacking some concrete examples; if we bring the code examples, people would be happy, or it's something else. So this part is not clear, and I'm trying to collect the...
A
C
C
Yeah, yeah, maybe that's right, but we do have this distinction between the cumulative gauge, a.k.a. the sum observer, a.k.a. the non-monotonic cumulative sum, versus the true gauge, in my language. And that is a new distinction that OTLP is trying to make, one that has existed in the world before but is not currently present in Prometheus, and so it's confusing users: why do we want to make two different types of gauge? Yeah.
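One way to illustrate the distinction (with invented example data): a non-monotonic cumulative sum is still additive across contributors, while a true gauge is a last-value measurement where summing is meaningless.

```python
# Sketch: non-monotonic cumulative sum vs. "true" gauge.
# Sums aggregate across series by addition; gauges take the last value.

queue_sizes = {"worker-a": 3, "worker-b": 5}   # up-down sum: additive
cpu_temp_readings = [61.0, 63.5, 62.2]         # true gauge: last value wins

total_queued = sum(queue_sizes.values())       # 8 items queued overall
current_temp = cpu_temp_readings[-1]           # 62.2; summing temps is meaningless

assert total_queued == 8
assert current_temp == 62.2
```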
A
Okay, so to follow up, I'll collect more perspectives, and hopefully we can keep the existing stuff and just find better names, so we don't confuse people; that will be easier. And there's another topic from Victor; I think it's more of a data model thing, so we'll cover that in the data model discussion. Well, thanks, era. Thank you.