From YouTube: 2021-02-02 meeting
A: I know it's short notice. Last week we explored whether we could have two separate metrics meetings, one focusing on the API/SDK and another focusing on the data model. This is the first time we're having the API/SDK one, so, to be honest, I wasn't expecting a lot of folks to join. I'll probably just give a quick update on my current thinking and how we can move forward. The good thing is we're getting 20 minutes carved out of the overall OTel spec meeting time.
A: (shares screen) First, I'll do a quick update on the metrics part. We discussed that we want two separate work streams, one focusing on the API/SDK and another focusing on the data model and protocol. We'll make sure people have enough knowledge sharing between the two SIGs, and both report to the OTel spec meeting just to keep things tight.
A: For this meeting, I think we can focus on the technical details; then the OTel spec meeting, where we have 20 minutes, is the place where I can report back on what we've figured out, where we want to make a decision, and where we see if we can drive toward a conclusion. One thing I've asked Alolita to help with is setting a target date. It doesn't have to be the final date — similar to what we're doing with tracing, we set a date, we miss it, and we move to the next target date — but having a target date helps us move forward, with the understanding that we have a dependency on the metrics data model. So I've asked Alolita to help get some clarity about the target date for the metrics data model as well, and I think Josh was saying something similar.

Okay, and based on this, I think we've got to figure out how we do issue triage. There are two things here. Angel and Carlos have been driving the issue triage since last year; at that time it was called the GA release, but we've since changed that to a spec freeze for tracing, and at the time it was not clear whether we could GA both tracing and metrics. What we have been doing — I have been joining the triage — is working through all the issues, but later we realized we want to focus on tracing. For the metrics part we still do the triage, but we're not trying to hit a particular date, given that we're currently revisiting the metrics part. Also, Angel has made the personal decision to leave Lightstep and pursue a different opportunity, so I think the issue triage process will likely change. I'll follow up with Carlos on how we should do the tagging: should we revisit all the issues every week, or can we park some of them and tell people we want to focus on a smaller scope?

That brings up the scoping question, which is the first thing I want to get more input from you all on. My understanding is that the purpose of OpenTelemetry is to converge OpenTracing and OpenCensus. OpenCensus already has a lot of metrics investment — I've been working as a maintainer for Python, and I know there's the raw metrics API and also the views API. So probably this is something you can help us understand, Josh: with the current OpenTelemetry metrics spec, the draft version, is that well covered, or do you believe we need to get folks from OpenCensus in today?
B: Right, I think we need... well, David, you're here — do you formally represent OpenCensus today? I don't think so.
A: That's a question mark. I know Bogdan has a lot of history with OpenCensus. Also, remember our metrics meeting — we had a whole-day Friday meeting — where I remember folks from Google mentioned that the philosophy behind OpenCensus came from the Google-internal Census: they developed it with the goal of being able to flow some dimensions as part of tracing, so you can propagate them across service boundaries and use them as extra dimensions. And they wanted to share some internal document.
B: Yeah, I think Bogdan was basically suggesting that we reach into Google and find the old-timers who remember the beginning of Census. I think he named Nabil, who was on Waymo the last I knew and had nothing to do with Census, and I just don't actually know — there's a Nevin involved in the project that I recall, anyway.

These are some names of Googlers that mean nothing to most of us — Neven, Heinz — and the point was to look up an original paper that was sort of laying out the idea. I think you summarized it, though: at the highest level, it was that we need to get our attributes from the root of a trace into the metrics at the leaf of a trace, and it was basically a mechanism for that.

I don't have any formal or official role to say what Census does or doesn't mean, but I do have the history of picking up the thread, you know, a year and a half ago. My belief is that what we have today in OTLP pretty much meets the spirit of OpenCensus, and it is perhaps a remaining task to finish the data model to make sure everyone can check that off.

That is still true, and I think there are some open issues about raw data that are probably the corner case where we don't quite have our story complete. It's also interesting, though, because the point where we talk about raw data is the point where the data model and the API kind of meet, and so in some sense the data model for raw data depends on the API model, at some level.
A: Yeah, I know that currently we don't have that views API in OpenTelemetry — there's an open PR — and so my question is: do we want to have the exact same thing? Is API compatibility a goal or not? Based on the discussion, I think most folks believe it's not our intention to give OpenCensus users a zero-code-change migration or build some bridge. We just tell them: hey, there's an OpenTelemetry API model; it gives you the same power or even more, but you have to change your code. A one-to-one API mapping is not our intention, but our intention is to give you the same spirit, or the features it had.
B: As for OpenTelemetry's intention, I think Josh Suereth would say that they are still trying to make a compatibility layer for the OpenCensus libraries so that they can be ported to OpenTelemetry, essentially. And so there was a question of whether they're able to fit the OpenCensus raw-statistics model into that. That's where the OTLP model almost directly interacts with Census, because they're going to sort of target the backend, not the API directly, and they have a prototype in the Go repo that is outputting it.
A: Okay. And the third topic: for Prometheus — Josh, you probably have a better understanding — would they be focusing on the data model, or would we also want the API to have the taste, or the feel, of the Prometheus client libraries? My understanding is that's not the goal, but they have an API that is widely adopted and people just love it, so we might...
B: I think, you know, if you do look at the Prometheus API, you have counter, gauge, histogram, and then you have this custom, callback-style channel interface that lets you provide those asynchronously as well. At that point you do have essentially the six kinds of instrument, but they're quite a bit different, and their names are really different from what we have chosen. So today we have these six names that have been used throughout the last year.
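To make the distinction concrete, here is a toy illustration of the instrument kinds being discussed — a synchronous counter, a gauge, a histogram, and a callback-based ("asynchronous") counter. This is a hypothetical sketch in plain Python, not the OpenTelemetry or Prometheus client API; all class names are made up for discussion.

```python
class Counter:
    """Monotonic sum: the value only goes up."""
    def __init__(self):
        self.value = 0
    def add(self, amount):
        assert amount >= 0, "counters are monotonic"
        self.value += amount

class Gauge:
    """Last-value instrument: can be set to anything."""
    def __init__(self):
        self.value = None
    def set(self, value):
        self.value = value

class Histogram:
    """Bucketed counts plus a sum; a more detailed form of a sum."""
    def __init__(self, boundaries):
        self.boundaries = boundaries            # e.g. [1.0, 10.0]
        self.counts = [0] * (len(boundaries) + 1)
        self.sum = 0.0
    def record(self, value):
        self.sum += value
        for i, bound in enumerate(self.boundaries):
            if value <= bound:
                self.counts[i] += 1
                return
        self.counts[-1] += 1                    # overflow bucket

class ObservableCounter:
    """Callback-based counter: the SDK pulls the current value
    from a user callback at collection time."""
    def __init__(self, callback):
        self.callback = callback
    def collect(self):
        return self.callback()
```

Each synchronous kind here has an asynchronous, callback-driven twin in the real specs, which is roughly how you get from three instruments to six.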
A: Yeah. One quick question — you just mentioned the expensive one: say I start a measurement with a lot of tags?
B: Well, there are many dimensions over which we have control of cost in a metrics system, and I'm probably not going to give you an exhaustive list. But you can raise and lower the interval of collection — the time-period aggregation; you can collapse dimensions by throwing away labels; and you can change the aggregation that you use, which is like...
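The second cost lever mentioned here — collapsing dimensions by throwing away labels — can be sketched in a few lines. This is a hypothetical, SDK-free illustration; the data shapes are made up.

```python
from collections import defaultdict

def collapse_labels(points, keep):
    """Aggregate metric points down to only the labels in `keep`.
    `points` is a list of (labels_dict, value) pairs."""
    out = defaultdict(float)
    for labels, value in points:
        # Key on the retained labels only; everything else is merged.
        key = tuple(sorted((k, v) for k, v in labels.items() if k in keep))
        out[key] += value
    return dict(out)

points = [
    ({"method": "GET", "path": "/a"}, 3.0),
    ({"method": "GET", "path": "/b"}, 2.0),
    ({"method": "POST", "path": "/a"}, 1.0),
]
# Keep only "method": the higher-cardinality "path" label is thrown away.
by_method = collapse_labels(points, keep={"method"})
```

Dropping `path` here shrinks three series down to two, which is exactly the cost/cardinality trade being discussed.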
B: Do I want a histogram or do I want a sum? Because a histogram is just a more detailed form of a sum, in many ways — it breaks down your inputs and lets you do more. But actually that statement I just made is so full of nuance that we can't just let me make that statement.

...what labels you're going to use on these raw stat events, and then those views are going to come in, and the views are going to specify what dimensions they want to output, which aggregations they want to use, and even which intervals they want to use. So for all three of those types of cost, they can reconfigure.
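A view, as described here, is essentially a small piece of configuration that picks the dimensions, the aggregation, and the interval for an instrument. A minimal sketch, with hypothetical field names (this is not the spec's actual View definition):

```python
from dataclasses import dataclass

@dataclass
class View:
    instrument: str          # which instrument this view applies to
    dimensions: tuple        # labels to keep; everything else is dropped
    aggregation: str         # "sum" | "histogram" | "last_value"
    interval_seconds: float = 60.0

# Two views over the same instrument: a detailed per-method histogram
# collected often, and a cheap overall sum collected rarely.
views = [
    View("http.server.duration", ("method",), "histogram", 10.0),
    View("http.server.duration", (), "sum", 60.0),
]

def views_for(instrument, views):
    """Return every view subscribed to a given instrument."""
    return [v for v in views if v.instrument == instrument]
```

The point of the design is visible even in this sketch: one instrument in code can feed several differently-priced outputs, reconfigured without touching the instrumentation.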
B: Well, first of all, you don't have to say four up front, and we also might take some from the context and inject them. At this point it feels like we're just trying to open up the model to say: just give us all your labels, and if there are more labels, give us those too. To do that is considerably more than Prometheus is quite ready for, and I think we are — between this group and the Prometheus group — trying to reconcile our positions.
D: By the way, there is a GitHub discussion about what you just talked about — the instrument naming and the number of instruments — and I kind of like the idea of providing, I don't know, a more basic set of instruments to the end users of the API, or something which is closer to them.
B: So, Jonathan, I'm kind of responding to you: to me there are maybe two levels of our data model — the low-level model and the high-level model. At the high level, there are counters and gauges, just two kinds of thing. Except: what's a histogram? Is it a counter or a gauge? It's a little bit of both, and that's why I think we have some trouble formalizing at the low level.

If you look at Prometheus, you've got gauges that you can set, you've got gauges that you can add to and delete, and you've got gauges like temperature that are sort of different from gauges like CPU usage. There's a document I posted last week about how we handle this, and I think the real reason this is a hard question is that we're talking about a push model versus a pull model.

In that push-versus-pull situation, these data model questions almost creep into your API, and if we don't have enough specificity in the API, you can't push or pull correctly. The example is the document — I'll pull up a link for it — which is about how we define the "up" metric, or how we place what are called staleness markers in the Prometheus remote-write output.

There's something very specific about those gauges and those UpDownSumObservers and all that stuff.
A: Yeah, so I have one idea. Historically, when folks were working on metrics — I know people have been working on Python and on .NET, and we had some folks from Amazon — we can see from past calls that when people discuss a topic, it's hard for those who are not deeply involved, because there are a lot of terms, and a lot of people are not very familiar with this area.

But if you talk about a counter, they understand. And from a metrics perspective, a histogram is just how you count: you have one dimension, based on something you can divide into different ranges. That's my understanding — you can have latency distributed among "less than one second", "more than one second but less than ten". It's just a mathematical representation. But if you start with examples — okay, this is your HTTP server duration; this is your total number of incoming HTTP requests — it's much easier.
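The "it's just counting into ranges" framing above can be written out directly. A minimal sketch using the exact buckets mentioned (under one second, one to ten seconds, ten or more):

```python
from collections import Counter

def bucket(latency_seconds):
    """Map a latency to one of the ranges described in the discussion."""
    if latency_seconds < 1.0:
        return "<1s"
    elif latency_seconds < 10.0:
        return "1-10s"
    return ">=10s"

# A histogram is then nothing more than a counter per range.
histogram = Counter(bucket(x) for x in [0.2, 0.9, 3.0, 12.0])
```

This is the sense in which a histogram is "just how you count": the same increment operation as a counter, keyed by which range the value falls into.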
A: So, if I could put a proposal here: I want to take some simple examples, like the ones we discussed at the Friday metrics meeting. One is: hey, you're a customer; you write code using the instrumentation API and you report data; everything is under your control. That's one experience. Another one is: there are two different personas. One is the owner of an HTTP instrumentation library, and they have no idea which dimensions they should give you, because they're writing a very generic library. They want to use some API to report the data and give that flexibility, but they care about performance. On the user side, once you've got the library, the number one question is: what metrics do I have? Does it have HTTP server duration, or the total number of incoming HTTP requests, and how many dimensions do I have?

Are you able to get that from the runtime, or do you need to go through the library's documentation and subscribe to it? And what if there's a change in the semantics — when you update the library, how are you going to handle that? From the instrumentation library owner's side, the question is: you've got this set of dimensions that you think is valuable for HTTP, and you've got this URL. You know it's a high-cardinality thing; it's crazy if you just report it, but it's your choice. Normally, I think instrumentation owners would want to report all the data in an efficient way. So let me give you one example: hey, I've got an HTTP status code, and you might model it as an integer. I'm not willing to convert that into a string unless someone is asking for it, and if no one is using the dimension, I would want to skip that code path entirely. So how should I do that?
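The lazy-dimension idea in this example — keep `http.status_code` as an integer and only stringify it, or even compute it, when some subscriber asked for it — can be sketched like this. Everything here is hypothetical, for discussion; it is not a real SDK interface.

```python
def record_request(status_code, subscribed, emit):
    """Emit labels for one request, computing a dimension only if
    some subscriber actually wants it."""
    labels = {}
    if "http.status_code" in subscribed:
        # Convert on demand; otherwise this code path is skipped entirely.
        labels["http.status_code"] = str(status_code)
    emit(labels)

emitted = []
# One subscriber wants the status code; the second call has none.
record_request(404, subscribed={"http.status_code"}, emit=emitted.append)
record_request(404, subscribed=set(), emit=emitted.append)
```

The open question raised in the meeting is exactly how the instrumentation side would learn the `subscribed` set from the SDK, rather than having it passed in as it is here.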
A: And I also have this question. I have 20 dimensions — all kinds of HTTP incoming data, the status code, the duration — and when I report the data, I want the user to be able to get multiple things. One is the total incoming HTTP count, and I'll probably have one extra dimension saying whether this HTTP request just came in and is still under execution, or is already finished; only for finished ones does the status code make sense, so I won't report it otherwise. Meanwhile, I want to report the duration.

...everything. Or: do I want them to use zero metrics API and just use traces? I understand that relying on traces would drive people crazy: if I decide to turn off traces, the trace API is now off, and your metrics are entangled with it. So my gut feeling is people would prefer to have one single API, and then the question is: do we want to optimize for that?

Having a very efficient, batching-style API or something, and pushing that onto the end user, even if they're writing a console application? Or do we allow two APIs that can do the same thing — one the simple way, another the expert way — and then it'll be hard. We can take that approach, but then what's our guidance? How should we tell people which one to use? Or there's a third approach:

Oh, we realize there are actually two different scenarios, so we can first come out with a metrics API spec that only covers the expert mode of the API, and for people who claim it's hard to use, we can build an additional API on top of it without having to break the metrics scenario. So instead of dumping all the details, I think having those scenario-driven examples would help us push things forward, and it would be especially valuable.

We can collect a lot of feedback from people who are not very confident on the metrics topic — no scary terms like algorithms or how you do a sketch — just people writing a normal HTTP server who have the desire to report metrics. We want to open the funnel so people can start to give us that feedback.
B: But your question was absolutely on track: maybe for the end user, 99% of metric statements are counters, and you're just going to write "count" in your code and that's all you care about. You just don't want to know about six different instruments and why it's different to report a sum from a callback versus from a counter synchronously — you don't care, because you're not writing callbacks, because you don't care about the optimization factor.
A: Yeah, it's a hard topic, and I think — especially since there's a lot of prior art and there's history — we want to be careful. Instead of trying to boil the ocean, I think starting from some concrete examples would help us move forward: if we have a good understanding that the current metrics API in the spec would support things in this way, with these performance considerations, and then people are saying, yeah...

...these APIs are too difficult to use and we want a simpler one, we can explore the two options. Either we loosen the expert mode of the API and give them the simple one — but what about performance, and is that something we can balance out, or is it not going to work — or we can explore having a separate set of simpler APIs. Just by having those concrete examples, people can debate, and that's something we can act on.
D: To me it's not just about writing instrumentation — for example, you are the maintainer of a library and you want instrumentation for it, say the HTTP client dimensions. But if you are writing a web service, a business application, and you want business metrics on it, it can be anything: a counter, a gauge, a summary, and so on. So to me, having this higher-level API...

I like the idea of the high level, by the way. Having this higher-level API is very useful for those users who are not really into metrics or measuring things; they don't really care — they just want: hey, give me how many orders I had in the last hour, or something.
A: I see, that makes sense. So I'd change my proposal for the first scenario. I picked HTTP client just because I'm familiar with it, but I think there's no difference when you swap in the idea you just mentioned — business metrics. Well, the HTTP server is a little bit different: it's a technical component that requires high performance. So I think it makes sense to change the first scenario, just to record that.
D: I believe the difference is that the people who are writing, for example, HTTP clients might be able to get along with the low-level API. But the people who are writing business applications just want something simple — can't they just say "give me a counter", or a gauge, or whatever, because that's what they know? That higher-level API will be more important for them, I guess.
A: Okay, cool. And if we're trying to explore that path, I can also imagine the low-level API will probably have more flexibility to satisfy all the potential OpenCensus requirements, so we don't have to restrict ourselves; and people who want the simpler one can still use it very easily — just provide all the key-value pair combinations.

Okay, so I can follow up on this and start some initial proposal. Just a few questions: when we do this, I think the best way is to have some actual running code, and I want to ask people here which languages we should pick. I think at the metrics session three weeks ago there was a lady from Red Hat, and she called out that we should pick one strongly-typed language and one scripting language. I'm familiar with C++ and C#, but I think C++ is just hard, so I would prefer to pick either C# or Java.

For the examples — hey, you write a business application, these are the things you want to report, this is how you use the API, this is the HTTP server — on the scripting-language side I would propose Python, just because I used to be a Python maintainer and I'm familiar with it, and also I think Python is a language that even people who don't work in it can read.
B: You know, speaking for Bogdan, who's not here: we have a fairly advanced implementation of the draft that he and I have been working on, so both Java and Go are pretty well developed in OTel, and I feel like including those is low effort and high value — plus whatever other ones you think are important to add. It was Erin Schnabel, I think — that's how I say her name — who said that in the meeting.
A: Okay, then I will propose: let's stick with Python and Java. I can probably cover the Python part — I'll start writing some examples and put all the questions there, like: hey, do you think this is efficient enough? Or, from the Python perspective, are the library owners saying no, we don't want to make three API calls, that's the end of the line? And the other question — I think I haven't seen an answer; I'm trying to explore it, but I haven't found a solution.
B: That one has been discussed heavily, and I think Bogdan was willing to move on from his position, which was the...

But I think there's still going to be a question of: why should I choose metrics instead of spans? Why can't I just use a span to denote a timing? I think I can answer that myself, but that question has to have a good answer.
A: Okay, cool, yeah. So once we have those examples, I'm happy to consolidate all the ideas, report back to the spec meeting, and gather more input. Hopefully in that OTel spec meeting we can make a call, and then I'll submit a PR to lock this down.
B: I don't know if this is useful, but I have an additional item that might be helpful: with today's Go SDK — for which I've been responsible for metrics — we have a very complete implementation of the model, the six instruments, and the OTLP model, and I have actually been using it myself to instrument real code.

I've kicked the tires, as I'm trying to say, and I could show you — I could go through and say: look, there are nine metric instruments that I used to monitor this piece of code, and I could show you every single metric statement and explain it. I think it might be useful, because it turns out every one of the six instruments does appear when you know what you intend to say with those statements and you're an expert. So that might be useful; I wanted to offer that.

Yes — and it sort of addresses some of the views question, but I also have a comment on your views question. For example, I've been working on debugging this code, so I've been instrumenting it, and I have some statements there that are essentially trying to output histograms, because that's the default output for those ValueRecorder instruments.
B: Well, it turns out I'm debugging with Lightstep, which doesn't support histograms yet, so I actually had histogram data that was just getting dropped. And what did I do? Instead of changing my histograms back into counters, I just wrote myself a reducer — a processor stage — that would take every histogram count and output a counter, which was very easy to do in, you know, 50 lines of code. I wish it had been 20 lines of code. And that is essentially a view — that's my first view that I wrote.
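The processor described here — downgrade each histogram point to a plain counter for a backend that can't ingest histograms — fits in a few lines when sketched in plain Python. The point shapes below are hypothetical, not the actual Go SDK types being discussed.

```python
def histogram_to_counter(point):
    """Reduce a histogram point to a counter carrying its total count.
    `point` is assumed to look like:
      {"name": ..., "counts": [per-bucket counts], "sum": ...}
    """
    return {
        "name": point["name"] + ".count",
        "value": sum(point["counts"]),   # total observations, buckets dropped
    }

h = {"name": "request.duration", "counts": [3, 5, 1], "sum": 42.0}
c = histogram_to_counter(h)
```

Because a histogram already contains a count and a sum, this reduction loses only the bucket breakdown — which is the sense in which "a histogram is just a more detailed form of a sum".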
B: Essentially, I also have a processor called a reducer, where you can just give it a label restriction and it'll erase any label that's not on the list.
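The label-restriction reducer just described is, at its core, a filter over label keys. A minimal sketch (hypothetical shape, not the actual Go processor):

```python
def reduce_labels(labels, allowed):
    """Erase any label whose key is not on the allow-list."""
    return {k: v for k, v in labels.items() if k in allowed}

out = reduce_labels(
    {"method": "GET", "url": "/very/specific/path"},
    allowed={"method"},
)
```

Note that after erasing labels, points that previously differed only in the erased keys collapse into one series and must be re-aggregated, as in the earlier label-collapsing discussion.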
A: Yeah, personally I agree with you. I also think about how to discover all the metrics: what's the metric namespace, how do we organize the semantic conventions, and how do people change them? At Microsoft I've seen a lot of cases where, when people roll something out, they haven't anticipated a particular dimension being high-cardinality — people always make mistakes — and when you realize it, you start to put some restriction on it. In the extreme case you're saying: I want to ignore this dimension for now.

I won't just remove it — having to ask everyone to change the call sites, roll out, learn from the failure, and then switch back and forth is going to be painful for them. Another thing I think is important: currently we haven't explored, in the SDK implementation, the bounded-memory behavior. Say a customer wants to put a policy on a dimension — it's the HTTP verb, so I'm not going to take more than 10 values, because I know they're only things like GET, SET, PUT.
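The per-dimension cap just proposed ("no more than 10 values for this dimension") can be sketched naively by folding excess values into an overflow bucket. This is a hypothetical illustration only — as the next reply notes, a production-quality bounded-memory top-k is considerably harder than this.

```python
class BoundedDimension:
    """Cap a dimension at `limit` distinct values; anything beyond the
    first `limit` values seen is mapped to an overflow bucket."""
    def __init__(self, limit, overflow="_other"):
        self.limit = limit
        self.seen = set()
        self.overflow = overflow

    def map(self, value):
        if value in self.seen or len(self.seen) < self.limit:
            self.seen.add(value)
            return value
        return self.overflow

verb = BoundedDimension(limit=3)
mapped = [verb.map(v) for v in ["GET", "SET", "PUT", "DEL", "GET"]]
```

This first-come-first-kept policy is the simplest possible choice; it keeps memory bounded but can pin the "wrong" values if the early traffic is unrepresentative, which is where the approximate top-k algorithms come in.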
B: Okay, that's ambitious — I like it. I know we've had to tackle that type of problem in our Lightstep system, and it involved approximate algorithms and got pretty technical. So finding something simple enough that you could actually implement it in all the SDKs is probably going to be the challenge. Anything beyond literally specifying "erase this label" — that part was very easy — I don't know.
A: Yeah. And even then — if we're taking the view API approach — asking everyone to update their code is not going to work. But there's some balance: probably it'll be a view API where, if you want to do it programmatically, you can, but on top of that we can provide some configuration — environment variables, YAML, or whatever — to make it easier. Happy to explore that part. I'm worried that if it's just an API and we ask people to write code, it just becomes too hard for them.
B: So I'm worried that if you specify that it must do something, it becomes an algorithmic challenge that is outside the scope of OpenTelemetry. I'd love to think about and work on those problems, but my company doesn't care if I do that for OpenTelemetry, you know what I mean. I hear some questions about cardinality estimation from Microsoft as well; I hear questions about how you could perhaps dynamically push back on the client, or even, inside the SDK itself, literally have a limit or a governor saying:

"I must not go over this much memory; therefore I will adaptively figure out which labels have high cardinality" — and all of a sudden I've got this... You know, a bounded-memory algorithm for top-k is challenging enough, and then it gets hard. I wouldn't want to go into that.

On the other hand, I think that's what people are looking for when you propose that you're going to allow arbitrary labels in: that you could build a system that would kind of automatically be safe against that type of thing. And it's just a question of, at this point...

The question of sampling in tracing comes up for me, because people talk about sampling in tracing so they can solve that same problem of counting spans with high cardinality. And if you know how to count spans with high cardinality, then we should be able to talk about the same thing in metrics — and I sort of can. But any time I talk about sampling in metrics, I lose most of the audience, and my company starts glaring at me because they don't care either.

So I don't want to talk about sampling in metrics — but we could. We can talk about sampling in metrics to address cardinality, and we might have to talk about sampling in metrics to be compatible with statsd, which has a sampling rate in it. So there's some question there. I wanted to just get something done, but we can drag ourselves into a really torturous side detour if we have to talk about sampling — and automatic cardinality reduction, fixed cost limits, and stuff like that get pretty challenging.
A: Okay, yeah, thanks for the reminder — I'll keep that in mind. So I'll put the scope down before this example. I think, in actual practice, when we start to see the scope keep increasing, we've got to control it and make a decision that we're not going to expand it any further.
B: There is an actual spec issue — I forget the number — about calling them "labels" versus "attributes", which I think was originally filed by Ianna Dogen, but maybe Lightstep's product team picked it up and said it's pretty confusing. For Lightstep, which is building this integrated metrics-and-traces platform, having to say that traces and spans have attributes but metrics have labels, and that they're different, is awkward — especially because there's also this concept of resources, which is a little bit ambiguous since it's kind of both. And if we have resources that are multi-value typed...

Why can't we have labels be multi-value typed? Because, obviously, we're going to take resources and put them into label position. The only thing missing was a formal spec on how to turn non-string types into strings when you're exporting them to a legacy destination. But actually that was a problem in spans too, because Zipkin doesn't have multi-value typed attributes, yeah.
A: Understood — it has a relationship with the data model. When I was consulting with folks inside Microsoft who have been working on the metrics backend, they told me: we only have strings; if you have an integer, you've got to convert it. So why not just do that conversion at the reporting layer? And if we expect integers, what are we going to do then? I have other questions too, like: do we envision that backend systems will have other types, not just strings, or not?

Yeah, so my gut feeling is that in the metrics API we will probably allow types that are not strings, and we'll do the cast somewhere — either in the exporter, or in some processor, or even downstream. I would prefer to do it downstream as much as possible, because that way, if a system later decides to support the type, you don't have to change the entire chain.
B: In the Go API, we already did that. There was no sensible reason for me to create a new type that was a string-only-valued attribute, so labels and attributes are the same in Go already, and it just doesn't seem like a real problem to turn a value into a string when you need it. But we can specify how to turn it into a string when we need to.
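The "convert typed label values to strings only at export time" idea discussed here amounts to one small, well-specified function applied at the exporter boundary. A hypothetical sketch (the conversion rules shown are illustrative, not the spec's):

```python
def stringify_labels(labels):
    """Convert typed label values to strings for a string-only backend.
    Booleans get lowercase names; everything else uses str()."""
    def to_str(v):
        if isinstance(v, bool):       # check bool before other types:
            return "true" if v else "false"   # bool is an int subclass
        return str(v)
    return {k: to_str(v) for k, v in labels.items()}

out = stringify_labels({"http.status_code": 200, "retried": True})
```

Doing this at the edge, as suggested above, means the API and the in-process pipeline keep the original types, and only legacy destinations pay the conversion.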
A: And I know that currently, in the current metrics API, we allow people to subscribe to things dynamically, and in the upcoming views — at least in our thinking — we allow people to subscribe to different dimensions. But my question — I'm lacking some of the context here, so I'm just trying to pick your brain, Josh — is: is there a way that, from the instrumentation side, we can get notified, saying: hey...

...this particular dimension is not needed by any subscriber, so we don't have to fetch that value if it turns out to be expensive? My current understanding is no: we just assume people will always have the dimension value, and if it's something expensive, we want to change the model so they have a callback function and can use that instead.

And in this case, say I still have two dimensions that are expensive and time-consuming to compute. If nobody is using them, you want a mechanism so they don't have to report, or even compute, the data. So we've...
B: We've looked at two — there are two places you can remove labels. One is before the SDK, at the API surface: you just drop the label as if it never existed. And then you can reduce it after you've done processing, and there may be reasons for that — it's much simpler after processing; you don't have to worry about concurrency and such. And then, if you weren't talking about export down to a sort of OTLP destination that might have its own opinions about what you want, it would...
B
...this would be pretty straightforward. You could just consult all of your exporters and ask them: does anybody want this? Are you subscribing to this label or not? And then you could make it a static rather than a dynamic decision. But I think we run into questions when you have OTLP and you're pushing data, because the downstream exporter might have a position on this, and I don't want to complicate our lives by talking about a negotiation, or some kind of publishing-your-interest scheme on the push model.
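The static "consult all of your exporters" idea could be sketched roughly as follows; the `Exporter` interface and helper names are hypothetical, not a real SDK surface:

```go
package main

import "fmt"

// Exporter is a hypothetical interface; Wants reports whether this exporter
// subscribes to a given label key.
type Exporter interface {
	Wants(key string) bool
}

// allowList is a trivial Exporter backed by a set of wanted keys.
type allowList map[string]bool

func (a allowList) Wants(key string) bool { return a[key] }

// neededKeys consults every configured exporter once (a static, not dynamic,
// decision) and returns the union of label keys anybody subscribes to.
func neededKeys(exporters []Exporter, candidates []string) map[string]bool {
	needed := map[string]bool{}
	for _, key := range candidates {
		for _, e := range exporters {
			if e.Wants(key) {
				needed[key] = true
				break
			}
		}
	}
	return needed
}

// dropLabels removes labels no exporter wants. Doing this after aggregation
// avoids the concurrency concerns of filtering at the instrumented call site.
func dropLabels(labels map[string]string, needed map[string]bool) map[string]string {
	out := map[string]string{}
	for k, v := range labels {
		if needed[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	exporters := []Exporter{allowList{"http.method": true}}
	needed := neededKeys(exporters, []string{"http.method", "expensive.dim"})
	labels := map[string]string{"http.method": "GET", "expensive.dim": "x"}
	fmt.Println(dropLabels(labels, needed)) // map[http.method:GET]
}
```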
B
But that's probably what people think of when they start to look at this problem as a dynamic kind of thing, where you can figure it out. But there's almost a really interesting data model question here that has to do with push versus pull, and it's this: Prometheus, when it's pulling, just gets a small set of labels that the application was definitely interested in.
B
The application had to tell you some small set, and then all these so-called meta labels are just sort of floated up by the Prometheus service discovery. They say: here's a choice of labels that you have. You could choose any one of these, and this is your chance to drop them, or relabel them, or rename them, and so on. So there's this point in the pull infrastructure where you've said: I have all these labels available.
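The relabeling step described here, where service discovery floats up candidate labels and the operator keeps, renames, or drops them, might be sketched as below. This is a toy, prefix-based stand-in for Prometheus's regex-based relabel rules, not Prometheus's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// Rule is a tiny, illustrative subset of relabeling: rename one label, or
// drop every label whose name matches a prefix (standing in for a regex).
type Rule struct {
	Action   string // "rename" or "dropPrefix"
	From, To string // used by "rename"
	Prefix   string // used by "dropPrefix"
}

// relabel applies rules to the full set of labels floated up by service
// discovery, returning only what the operator chose to keep.
func relabel(labels map[string]string, rules []Rule) map[string]string {
	out := map[string]string{}
	for k, v := range labels {
		out[k] = v
	}
	for _, r := range rules {
		switch r.Action {
		case "rename":
			if v, ok := out[r.From]; ok {
				out[r.To] = v
				delete(out, r.From)
			}
		case "dropPrefix":
			for k := range out {
				if strings.HasPrefix(k, r.Prefix) {
					delete(out, k)
				}
			}
		}
	}
	return out
}

func main() {
	labels := map[string]string{
		"__meta_pod_name": "api-0",
		"__meta_node":     "n1",
		"job":             "api",
	}
	rules := []Rule{
		{Action: "rename", From: "__meta_pod_name", To: "pod"},
		{Action: "dropPrefix", Prefix: "__meta_"},
	}
	fmt.Println(relabel(labels, rules)) // map[job:api pod:api-0]
}
```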
B
Please erase some of them, because it's expensive to have too many labels, and you've done that kind of thing downstream. And to me this almost suggests that we want a way to have two classes of attribute, or resource, or label: some that are really mandatory, provided by the call site, and some that are just additional context that you may or may not want to keep, according to your cost, your topology, your network, or whatever. And yet I don't want to complicate matters.
B
I had a question about summaries. I think it may be relevant because Jonathan mentioned it, and it's a common use for Prometheus. It's the fourth instrument type, essentially, and it's the only one that doesn't really fit: it doesn't really have a new semantic difference from, say, a histogram. It's just a cheaper version of a histogram with a configurable output. And there's been a lot of pushback on summary, I think, because it's not mergeable. And yet it's such a simple thing, and if all you want is simple monitoring, it's...
B
...actually what people want. And so we've put this new type into OTLP that is called summary, and it's sort of a wart on the side, because it doesn't really have an instrument behind it. It doesn't have an aggregation behind it, because Prometheus has this very custom logic that is, I think, an exponentially weighted average, and I don't even know exactly; it's a black box that does some stuff and outputs this thing that doesn't even have a temporality we could call delta or cumulative. It's neither.
D
Josh, is the summary you're talking about on the OTLP side created from a value recorder, from the original source?
B
Well, it could be, yes; that would be the right one. But the algorithm is essentially Prometheus's code, so we'd have to extract the code from Prometheus to do exactly what Prometheus does. But semantically, yeah, that's a recorded value, and it's just computing some aggregates. So to me this connects to the view question, which is to say that, to me, summary was a view on value recorder with some specifically configured quantiles. So is that optional, or is it just another instrument?
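If summary really is just a view on a value recorder with configured quantiles, that view can be sketched as below. Nearest-rank quantiles are used here as a deliberately simple stand-in for Prometheus's streaming estimator, and all names are illustrative:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// quantile returns the q-th quantile of values using the nearest-rank method,
// a simple substitute for Prometheus's black-box streaming algorithm.
func quantile(values []float64, q float64) float64 {
	s := append([]float64(nil), values...)
	sort.Float64s(s)
	rank := int(math.Ceil(q*float64(len(s)))) - 1
	if rank < 0 {
		rank = 0
	}
	return s[rank]
}

// summaryView is what "summary" could look like if modeled as a view over a
// value recorder: the instrument records raw values, and the view is just a
// configured set of quantiles computed at export time.
func summaryView(values []float64, quantiles []float64) map[float64]float64 {
	out := map[float64]float64{}
	for _, q := range quantiles {
		out[q] = quantile(values, q)
	}
	return out
}

func main() {
	recorded := []float64{10, 20, 30, 40, 50}
	fmt.Println(summaryView(recorded, []float64{0.5, 0.9})) // map[0.5:30 0.9:50]
}
```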
B
And that's the type of question that might be here: maybe there's a high-level API that's a value recorder with a pre-packaged view, to say, make it a summary with these quantiles. That's what I've heard people think about: you could sort of pair an instrument with an aggregation and call it a new type of instrument, when it's really just a configured view on top of a low-level instrument.
D
As an API user, it would be like: I have this concept of a value recorder, and I should be able to configure it: hey, I want a histogram and these quantiles, or I don't want the histogram, I just want these quantiles. And under the hood it's the same concept; it's just a matter of what data you get at the end.
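That "same concept, different output" point can be illustrated by rendering the same recorded values as histogram bucket counts instead of quantiles. The helper below is illustrative, not an SDK API:

```go
package main

import "fmt"

// bucketCounts renders recorded values as per-bucket (non-cumulative) counts
// for the given upper bounds, plus a final overflow (+Inf) bucket: the
// histogram alternative to a quantile summary over the same raw data.
func bucketCounts(values []float64, bounds []float64) []int {
	counts := make([]int, len(bounds)+1) // last slot is the +Inf bucket
	for _, v := range values {
		placed := false
		for i, b := range bounds {
			if v <= b {
				counts[i]++
				placed = true
				break
			}
		}
		if !placed {
			counts[len(bounds)]++
		}
	}
	return counts
}

func main() {
	recorded := []float64{5, 12, 30, 90}
	fmt.Println(bucketCounts(recorded, []float64{10, 50})) // [1 2 1]
}
```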
B
There was, in the early days, going back to the summer of 2019 (oh god, it's a long time ago), when we were first converting away from the Census model. My first contention with the Census model was that there was this bound instrument for counters and gauges, but not for the raw statistic, and I was trying to separate those concepts, and so we wrote up this new proposal.
B
It was just trying to pull out the semantics of the instruments. And then there was this OTEP that I wrote at some point, proposing that you could have options, like the ones we currently have on instruments with unit and with description: you could have an option "with this aggregation", or "with these quantiles", or "with this histogram bucket setting", or "with this sketch algorithm", actually at the call site.
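The closed OTEP's call-site options could have looked something like the sketch below, using Go's functional-options pattern. `WithAggregation` is the speculative part and does not exist in the actual API; all other names are illustrative too:

```go
package main

import "fmt"

// Instrument carries descriptive options plus the hypothetical aggregation
// option from the closed OTEP: choosing the aggregation at the call site.
type Instrument struct {
	Name        string
	Unit        string
	Description string
	Aggregation string
	Quantiles   []float64
}

// Option is a functional option, like WithUnit/WithDescription today.
type Option func(*Instrument)

func WithUnit(u string) Option        { return func(i *Instrument) { i.Unit = u } }
func WithDescription(d string) Option { return func(i *Instrument) { i.Description = d } }

// WithAggregation is the speculative part: it is NOT in the real API; it
// would let the instrument declare its preferred aggregation and quantiles.
func WithAggregation(agg string, quantiles ...float64) Option {
	return func(i *Instrument) { i.Aggregation = agg; i.Quantiles = quantiles }
}

func NewValueRecorder(name string, opts ...Option) Instrument {
	inst := Instrument{Name: name}
	for _, o := range opts {
		o(&inst)
	}
	return inst
}

func main() {
	r := NewValueRecorder("http.latency",
		WithUnit("ms"),
		WithDescription("request latency"),
		WithAggregation("summary", 0.5, 0.9, 0.99))
	fmt.Printf("%+v\n", r)
}
```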
B
When you declare your instrument. And this is really a pretty big philosophical question about views: do you want that stuff, how you configure your view, in the API model? Or is that a separate question, based only on the OTLP that comes out? Do we put views on the aggregations and the data model, or do we put views all the way up at the instrument, where the API programmer can say what view they want?
B
And currently we have not done that, and I'm not saying we shouldn't. It just felt like an area where we could postpone that complexity. Or rather, I wrote the OTEP thinking it was necessary, and got enough resistance from Bogdan at the time that we closed it; that was OTEP number four, as I recall. It's very early.
B
Or a histogram. But it's connected: it takes a dependency on the SDK to say, I want to control this, to say that this is a summary aggregation. So it's not a pure API dependency if you're configuring aggregations, and that at least takes the complexity of talking about views out of the API and puts it into the SDK.
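Moving view configuration into the SDK rather than the API might look like this sketch: the instrumentation API never mentions views, and the default SDK exposes a view registry pairing instrument selectors with aggregations. All names are illustrative, not the actual OTel SDK surface:

```go
package main

import "fmt"

// View pairs an instrument-name selector with an aggregation. Registering it
// on the SDK keeps view configuration out of the cross-cutting API.
type View struct {
	InstrumentName string
	Aggregation    string
}

// SDK is a stand-in for the default OTel SDK's configurable part.
type SDK struct {
	views []View
}

func (s *SDK) RegisterView(v View) { s.views = append(s.views, v) }

// aggregationFor answers what the SDK would do for an instrument: the last
// matching view wins, and the default aggregation applies otherwise.
func (s *SDK) aggregationFor(instrument, defaultAgg string) string {
	agg := defaultAgg
	for _, v := range s.views {
		if v.InstrumentName == instrument {
			agg = v.Aggregation
		}
	}
	return agg
}

func main() {
	sdk := &SDK{}
	sdk.RegisterView(View{InstrumentName: "http.latency", Aggregation: "summary"})
	fmt.Println(sdk.aggregationFor("http.latency", "histogram")) // summary
	fmt.Println(sdk.aggregationFor("db.calls", "histogram"))     // histogram
}
```

An alternative SDK implementation could legally skip this view API entirely while still satisfying the metrics API, which is the separation being discussed.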
B
Yes, and that way we don't have to specify what a view means across all of OpenTelemetry. We just have to say: for the OpenTelemetry user who adopts the default SDK, this is your view functionality, and you can program it using this API, which is called the default OTel SDK, or something like that; not the OTel API, because the OTel library guidelines talk about API/SDK separation. And yes, if we put this in the SDK, our life gets a lot easier.
B
And maybe we don't want to call it "the SDK API", because that's very tightly talking about binding to the SDK, which almost makes it sound like it can't be replaced. I think it's more like saying there's a separate API/SDK split for view definition, and you can provide a perfectly legal, compliant implementation of the OTel metrics API without providing a view API implementation. And that means the user can go with one of these high-level wrappers that configures an instrument, a semantic instrument plus an aggregation...
B
So, one more piece of administration before we go. Ted, who's in devrel, now at Lightstep, and has been involved with OTel since the start, is volunteering to take on some of Andrew's responsibilities, as Andrew is leaving this week. So I do expect to get Ted involved in this metrics work in the coming days, and that's all I wanted.