From YouTube: 2021-03-05 meeting
D
So, I'm very good. Yeah, so I'm the bad guy, because, as you saw, I was pushing to revert your commit.
E
Okay, I...
D
Yeah, but do we have... okay. I will double-check again what everybody is missing, and I will create an issue on how we can have it. I'm all supportive of having it in the end, but let's have a way to... okay.
A
Okay, so just some quick updates. I went through all the current metrics spec issues and assigned them to different milestones. I didn't touch the data model part, because I think most of those are already in shape, and I saw that Josh did some cleanup already. The second update is that Kyla and I created the top-level project; we're trying to get that back in shape, but it will probably take another week.

Another update: originally I was hoping the roadmap document would get published earlier this week, but it looks like we're getting more comments, and most of the comments were around the data model and the Prometheus part, so I reached out to Alolita and Josh for their help. So please help on that, thank you. And this is my biggest topic for this meeting, so I propose the next step.
A
We can scope down the API and focus on the scenario that we merged in the OTEP, and these are a couple of the topics I want to explain. The first thing is that we have the meter provider, where we have a version. I believe the intention is to keep it the same as the tracer, but I've seen the question: when you ask how we use the version, I'm not seeing an example.
A
Currently, if you look at the tracer... I just want to quickly check whether people have a good understanding, or can give me some pointers, so I can start to carve that out in the metrics spec, and if there's a need I can come back to the tracing part. For example, if I have an instrumentation library which is 1.0.1, do you specify that as 1.0.0, or should the library, when checking the version, match the semantic versioning convention?
D
Use case: that's why it's optional. But I would be very strongly against not having the version in metrics.
A
Oh yeah, I'm not saying we don't have it. I think most likely we'll keep that consistent with tracing, because the tracing part is already there, and it doesn't make sense to have two different things. But what I'm saying is: in metrics I want to have a little bit better understanding, and we might come back to tracing and clarify that as well. Yeah.
D
So, if you want to clarify it in tracing, I'm all for that, but to be honest, as I said, I don't know exactly what the thing is. The reason: there are use cases for this that I heard about, things like, I want to remove a metric from being exported if the library instrumentation is mongodb-something-something and the version is 1.0.3, because...
C
I want to do that because, for example, the metric has the wrong unit and I don't want to...
D
...break the back end, for whatever reason, and I prefer not to have it; or I want to change the unit, or do some other crazy things in what we call the views in the SDKs. So then I update it and normalize it to something, because there was a bug in that version, or something like that. That was the use case that I keep hearing about for these.
F
I think it's really useful to have the instrumentation library version; it has the potential to be used for filtering or dropping. I remember an issue saying we should have one document in OTel that says what a provider does across signals, so that we don't have to spec it out for tracing and metrics independently. The difference in metrics that I'm aware of, that we've discussed over the past year, has to do with the uniqueness of registering an instrument, so we use the provided...
F
The named meter that you get from a provider has a scope, so that you only check instrument uniqueness within that scope; that is what I recall being the distinction, but...
G
Can I point out one thing? I think this question of what the version is and where it lives actually plays back to a very central question about the identity of a metric. I was just going through some of the spec today, a little bit, just reading it, and right now the spec actually says, when it gets down to identifying what a metric is, it explicitly says to not use the metric name, or, presumably, metric version, to identify the identity of a metric.
F
We had discussed how you might have a semantically specified metric name, like http.request, let's say, and you want one instrumentation library to be swappable with another. So you have two ways of instrumenting your server, or something like that, and they should be able to provide the same instrument name, and we expect that you won't run two of those at the same time. If you did happen to run double instrumentation, you would end up counting things twice.
G
So it would help to clarify that item for me, because right now it's ambiguous. If we say that on the OTLP line we don't include metric name or metric version, and I don't know if we do or not, but based on that reading of the spec, it does not imply that we would pass that information as the identity on the OTLP line. If that is the case, then why do we even need to carry or collect metric name or version at the API and SDK level?
G
I was using the words wrong, sorry. Yes, so let me rephrase: when we pass the instrumentation detail and the meter information, my reading of the spec is that the meter name and the meter version are typically not included in identifying the metric. So the metric name is the only thing that's used to identify it, which implies that OTLP, at the line level, only cares about the metric name, right?
D
...is exposed in OpenTelemetry. So all our protocols have a hierarchy, like resource... which has the instrument name? No, that's the resource; the resource is the provider, it's the provider level. So all the things have a resource, plus a repeated instrumentation library, and inside the instrumentation library we have the name and version, plus a repeated metrics, or a repeated spans, or a repeated log records. So all of OTLP follows the same model. So the meter name and meter version, or whatever you pass there, is the second layer of our...
D
Okay, I think the request is clarifying what identity is in the case of OTLP. Perfect. I think that's reasonable, and it's a very good thing to take care of.
A
Yeah, I also put the issue here, and I think most likely we'll cover that in the data model. So this is the dependency, and it's not a blocker for the API, but we might change the behavior, because there's some dimension: if people try to register the same thing, do we error out, or just give back the same thing? And depending on whether it's the provider, or the meter, or the instrument, we might have different behavior.
D
I would say that usually the meter level should be enough, because that version is the version of your release artifact, which includes the instruments, correct? So then, if you modify something, that will be modified in a new release of your artifact, or package, or whatever, so that should be good enough. I mean, I don't see how you can have a change in an instrument's properties within the same version of the meter.
D
If
the
meter,
if
the
the
meter
version,
is
the
one
that
we
document
to
be
the
the
version
of
the
artifact
that
does
the
instrumentation,
I
don't
see
how
that
can
happen
to
have
same
version
with
two
different
instrument
configuration.
G
I think the only caveat to that is: if they just change one instrument name, then they have to rev the whole, you know, library, which, you know...
F
So this is really a question about version compatibility, not so much about uniqueness, I think. So... I think actually "they're coupled" is what you're trying to say, and I would agree with you. Is that about right?
F
It all boils down to the identity, right? And I believe we're saying that everything is identifying except the description: so unit, instrumentation library name, instrumentation library version and metric name, I think, are the four. And then, as far as locating yourself a time series, or a series in some sense, you add labels and resource, and those are all the things that identify a particular series coming out of an SDK.
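This enumeration is compact enough to write down directly. A sketch in Go of the identifying fields as just described, with illustrative names (the spec language, not this code, is authoritative):

```go
// The four identifying fields listed above (description deliberately
// absent, since it is the one non-identifying field), plus the labels
// and resource that locate a particular series.
package identity

type MetricIdentity struct {
	InstrumentationLibraryName    string
	InstrumentationLibraryVersion string
	MetricName                    string
	Unit                          string
}

type SeriesIdentity struct {
	Metric   MetricIdentity
	Labels   map[string]string // data-point attributes
	Resource map[string]string // resource attributes
}
```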
D
So what I'm trying to say is: it does not necessarily... if you change an instrument name, you don't necessarily need to do a major bump for that.
F
...without doing a major bump of your instrumentation library version. There was some language put into Ted Young's document, which was mostly about tracing, but it drafted some pieces about metrics, and, as I recall, what it said was something along these lines: it didn't say anything about adding or removing instruments, and I think the presumption is you won't remove your instruments, it just didn't even get stated. But there was a question about adding and removing labels, or attributes, and that actually is a really significant question.
D
It is a different topic, and it's the topic that we explicitly postponed, which is the stability of the telemetry data. So this belongs to the stability of telemetry data, and what we call a breaking change in terms of the telemetry data that an integration exports, or not, from the API and data model perspective. You identified very well what the identity identifiers are for a metric, which we need to document, and from the API perspective...
D
We need to accept all of this in the form that we have right now, and I think there is a big discussion that tracing has to go through as well, which is the stability of telemetry data: what are you allowed to do in a specific version, and when do you need to bump the major version of your instrumented data?
G
So, one question... oh, sorry, just one quick thing: from one of the... the meter name, is that the fully qualified, you know, name, per the language, of the instrumenting library?
D
And there is another big discussion about that related to logger name, which I want to also have, that discussion. Oh.
I
Yes, let's... okay. And sub-resources come up when you talk about that too.
D
Let's start one by one, Josh. So I want just to clarify one thing: yes, indeed, there are hiccups in the instrumentation name and version, but I would like them to be consistent with metrics for the moment, and I think the immediate action items are to extract that definition into a common place, and maybe then, after that, start revisiting, and have all these good questions, Victor, that you asked us, as issues, or comments, or fixes in the common definition of instrumentation name and version. Make...
C
Because tracing already depends on you, yeah, and...
A
Yeah, okay, okay, yeah. So, just to avoid people getting lost: we're trying to go through a list of the issues that I believe we want to address when we work on the prototype.
A
So, if you want to know the scenario: that's described in the OTEP, and we'll use this as the scenario, and I tried to list the issues here as the initial scope. So these are the things we don't have a good idea about from the three languages that volunteered to do this prototype. We just covered one: naming. Naming is always the most difficult one, so that's why I put it first, and here I think we're already getting to the consensus that, instead of trying to cover everything, we try to focus on the synchronous API first.
A
I
also
find
this
previous
issue
of
talking
about
that
and
it
seems
there's
a
lot
of
support.
So
I
wonder
how
we
can
how
we
can
execute
on
that.
So
first,
I
want
to
understand.
Like
do
we
have
a
someone
strongly
against
like
we
want
to
focus
on
everything
at
this
moment,
or
we
should
scroll
down
just
on
the
sync
api.
What
does
it
mean
to
focus
on
that?
Okay,
so
the
ask
here
is
to
just
remove
the
observer
part
from
the
api
and
clean
up
the
synchronous
part.
D
Just brainstorming here, but why do you have problems with that? I keep hearing, and I keep seeing, people having problems with the observer stuff.
D
Could be, could be. But my whole hope is that in two months, or in a month, when we move forward with the synchronous things, we will not have to have another one-hour discussion on the understanding of why those are needed. That's only my hope.
D
So, every time you receive an HTTP request, at the end of the request you record to the instrument the number of bytes read, the number of whatever metrics you want. Okay. For the async things, we have things like CPU usage, and the problem is: if I don't give you the observer thing, when are you going to report that? With what frequency are you going to report it? So you can create...
D
It
matters
it
matters
because,
because,
if
you,
if
you
do
it
with
a
different
sequence
than
the
sequency,
that
the
exporter
does,
you
may
have
duplicates,
you
may
not
have
the
the
right
values
and
so,
for
example,
I
can
give
you
an
example
that
if
you
have
it,
if
you
implement
your
own
trigger
or
timer
that
reads
cpu
usage
and
reports,
it
every
minute,
and
then
I
have
it.
I
have
an
exporter
that
exports
all
the
metrics
every
minute.
D
It is chronologically correct, but you will miss some of the cycles. What I'm trying to say is: if your current CPU usage is a hundred, okay, you report that, and then after five seconds you export that, and you export a hundred. Next time...
D
The
export
comes
first
for
good
or
for
bad
and
you
export
again
100,
which
means
that
that
that
in
the
last
one
minute
you
had
zero
cpu
usage,
because
it
just
happened
that
you
were,
you
were
your
timer
that
was
reporting
was
not
reporting
correctly.
Even
though,
is
the
last
value
because
didn't
report
before
my
second
export
come,
I
will
export
a
hundred
again
right.
G
Okay, so, just to save time: I don't understand all of the semantics, I... obviously.
F
I have a new way of thinking about this. I think a year ago, if you had asked me whether there is a semantic difference between synchronous and asynchronous instruments, I'd say yes. Today I have a new way of thinking about it: I think there's a temporal difference. We have this variable called temporality, and, you know, to me the difference between...
F
So
that's
like
not
really
a
semantic
difference,
because
at
the
end
of
the
pipeline
you
get
the
same
numbers
out,
but
I
but
but
I
think
it's
both
a
practical
matter
for
readability
and
also
there's,
like
kind
of
like
a
technical
matter,
about
timing
of
your
scraping
intervals
and
such
but
like
the
like.
Just
for
organizing
your
code
at
least
half
of
the
metrics
I've
been.
Writing
are
asynchronous
to
where
I
don't
want
to
have
a
loop
that
I
own
just
for
the
purposes
of
setting
a
value
periodically.
F
I'd
love
it.
If
you
call
me
back
so
I
I
like
saying
it's
the
second,
a
second
topic
for
us.
I
also
think
of
it
as
not
semantic
anymore,
but
it's
temporal
and
it's
there
are
real
distinctions
there
and
it
is
much
harder
to
spec
out
the
asynchronous
instruments,
which
is
another
reason
to
go
with
what
riley
said.
A
Yeah, yeah, and coming back to what was said: we do have the scenario where we mention something like this, like, there are models where you have something that is always available, but you only fetch it based on the need. I believe that captured the scenario, so we won't be able to just ignore that scenario, and...
D
Yeah, there are other things, like performance implications. Victor, think about it: most likely people will say, okay, to be safe I'm going to report CPU usage every second, because I don't know the exporting interval; maybe it's a second, a minute, or whatever the final user configures. But by having these callbacks, or observers, you don't have to make that decision of exporting every second. The whole idea is: the observers are essentially just callbacks for the SDK to retrieve the values whenever it needs them.
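The timing problem walked through above is the core argument for observers. A minimal sketch in Go, assuming a hypothetical RegisterObserver API (not the actual OpenTelemetry surface), contrasting the two approaches:

```go
// Contrast between an SDK-driven observer and a user-owned timer.
// RegisterObserver is a hypothetical API, for illustration only.
package observers

import "time"

type Meter struct{}

// Hypothetical: the SDK invokes fn once per collection/export cycle,
// so every export window sees exactly one fresh value.
func (m *Meter) RegisterObserver(name string, fn func() float64) {}

func readCPUUsage() float64 { return 100 } // placeholder read

func asynchronous(m *Meter) {
	// The SDK pulls the value exactly when it exports.
	m.RegisterObserver("cpu.usage", readCPUUsage)
}

func synchronousWorkaround(record func(float64)) {
	// The failure mode from the discussion: a user-owned timer that is
	// out of phase with the export interval, so one export window can
	// see two reports and the next can see none.
	go func() {
		for range time.Tick(time.Minute) {
			record(readCPUUsage())
		}
	}()
}
```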
G
Yeah, and, by the way, I'm not disputing that there's a semantic difference and a temporal difference; I'm eager to learn all that. I'm just saying, from that perspective, another way to treat it would be to provide an extensibility point that says "before collection", and then the user can do whatever they want by using synchronous, you know, reporting of it, and then we avoid the whole issue of the observer.
F
That is: during your callback you output a bunch of gauges, which are value observers, and we just collapse those into a histogram by erasing some attribute that made them different. So if you can figure out how to capture a gauge histogram synchronously, then yes; but right now we've modeled it differently.
K
Please. So it's about the synchronous/asynchronous distinction: can we add one more topic to this, or just put it into the synchronous bucket, I don't know which one is better, which is having an abstraction over measuring latency, or time, for the metrics? Because I just checked the Python example and the Java example, and both are doing it wrong.
K
And
so
why
would
we
expect
that
people
will
do
it
in
the
right
way
when,
like
the
examples
are
wrong?
But
that's
that's!
That's
for
synchronous,
correct
you!
You
are,
moreover,
yes,
yes,
yes,
yes,
yes,
I,
I
think,
like
measuring
time,
is
more
like
a
synchronous
like
value
recorder,
where
you
basically
give
something
to
the
users
which
will
measure
the
time
for
them,
and
today
you
just
say
that
hey,
I
want
to
measure
this
block
of
code.
D
Yeah, yeah. I always thought, but probably I'm wrong, that we would offer all these helpers as what we call extensions to the API, and we expected people to use those. But most likely we failed, and that's point taken: I think we failed to offer them, and I think we failed on the other thing too, because my idea that people will trust these extensions is wrong. If it's not in the API, people will not trust them. I got that message.
D
...use cases, or whatever, more things. The reason why I don't necessarily want it in the main API is to keep that limited, and to keep that small, just from the documentation perspective; it's still a big win to have a small API and have advanced API usage in the extensions. But we can discuss when the time comes. Point taken, Jonathan: we definitely need to have this, at least as an API extension, if not directly in the API.
F
Yeah, I fully support Jonathan there. The API for timing absolutely has got to be different than the API for arbitrary histogram observations; I'm fairly convinced of that. And the question that we asked before, that was maybe blocking us, was this question of: how is that different from a span, or what different API would you use?
F
That's
different
than
recording
a
span,
and
the
answer
I
have
at
least
in
my
hotel
go
world
is
that
you
know
a
span,
gives
you
back
a
context
that
does
something
so
between
them,
the
start
and
the
end
of
a
span.
You
can
use
that
context
and
this
timing
api.
That
jonathan
kind
of
keeps
asking
for
is
absolutely
like
the
same
as
a
span,
but
there's
no
context,
and
I
want
it
to
be
a
one
line,
not
two
line
statement,
that's
kind
of
the
highest
level
of
like
we
definitely
need
a
timing.
Helper.
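The one-line helper being described can be sketched in a few lines of Go. The Recorder interface here is hypothetical; the point is the shape of the call site, not a proposed API:

```go
// A span-like timing helper with no context: start, run, record.
package timing

import "time"

type Recorder interface {
	Record(seconds float64) // e.g. backed by a histogram instrument
}

// Time runs fn and records its duration: one statement at the call site.
func Time(r Recorder, fn func()) {
	start := time.Now()
	defer func() { r.Record(time.Since(start).Seconds()) }()
	fn()
}
```

At the call site this reads as the single line being asked for, something like `timing.Time(latency, func() { handler(w, req) })`.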
D
So I think there are multiple use cases here. First, completely got it: we need to have a timer. But then, second, I think there is another thing, which we discussed in the instrumentation and auto-instrumentation SIGs, which is this notion of an "operation", or something, where behind the scenes we hide the fact that we create a span, we measure the latency, and we record the latency as a metric. And whenever you give me, for example, the number of bytes written, or something like that, I may use this as a measurement for metrics.
D
I
may
put
it
in
the
spans
and
stuff
like
that.
We
can
even
have
that
as
a
new
concept.
Again,
it
was
just
a
brainstorming,
not
seeing
that.
That's
the
right
thing
to
do,
or
it
was
just
an
idea
that
we
can
extend
something
on
top
of
the
span
to
call
it
operation
that
deals
with
metrics
and
logs
as
well
in
like
as
a
new
concept
that
hides
all
the
things
behind.
G
Just as a workaround, I mean: when you emit a metric, when you push a metric, you could attach a span context, in which case the span context does contain, you know, latency information, just as an FYI. But then that implies that you still do have to instrument spans, so yeah.
A
Okay, I think we have a good understanding of the problem. We're not trying to solve all the problems, just trying to list the scope. So if you see anything obvious, like, you think something is very important and we should put it in the initial prototype, or we'll fail if we don't, then call it out; or if you think something is probably too big and we should wipe it out. We're just trying to carve out the scope for when we do the initial API metrics end to end, focusing just on the sync part.
G
So, sorry, one quick one; hopefully it's a quick question. The use case I mentioned earlier, where people just attach to the pre-collect and then push, you know, a metric: is that recommended or not? ... Will I clarify the question? Yeah, so: if, quote, I'm a poor man and I want to implement my own async, because the spec hasn't fully been out yet, and I decided that, you know, somehow I hook into the pre-collection, you know, of OTel, and from there...
F
...In Go, I made that batch observer callback at some point, and that was the use case right there, which was: I'm going to do some batch observations, meaning I have one expensive call, like ReadMemStats in Go, where you get five stats out of it, so now you're going to...
F
Do
your
batch
five
observations
right,
but
that
that,
like
expensive,
call,
also
gives
you
like
a
list
of
numbers
that
were
buffered
like
so
there's
a
temporal
shift
that
happened
there
and
you
need
to
synchronously
output,
something
from
a
batch
observer
and
it
was
the
go
memory,
garbage
collection,
timings.
So
all
the
garbage
collections,
since
the
last
call
could
be
output
at
that
moment.
But
it
was
in
an
asynchronous
context.
D
Yeah, so, FYI: for the moment, if we remove the observers, that will get removed as well. So, to answer your question, Victor: you'll no longer have that hook to do this.
D
If you don't want the observer, yeah, you'll not have that hook, so you will not be able to use it, unfortunately.
D
But I would be reluctant to necessarily make that stable, make that metrics 1.0, without thinking about this. So I think it's a good roadmap to have, Victor, but I would not declare that we are 1.0 when we do not have a response for this use case, and we don't want people to start creating timers, because we know that's not the right thing to do.
A
Okay, so, going to the next topic. I think this has been covered several times, so the consensus is that we want them to be consistent: we try to avoid calling things "labels" and "tags" and just use "attribute", and we want to do something called string conversion, which is currently assigned to Ted, and he's trying to explore that. I also have questions regarding double... what we call those things... so I'm not going into the details, but this is the general thinking, I believe. And the...
G
Next one: does that mean, then, since it's "attribute" from trace, or just generally, that we have non-string...
A
I think so. And regarding unit: I struggle a little bit with this, like, if we have different units from the, like, instrumentation...
F
It is part of the identity, and we have discussed this: the SDK should never have to interpret unit strings, and we've specified kind of a pass-through for the collector, so the collector also doesn't have to interpret it. You can opt into the idea of unit conversion, but otherwise you'll get different units for different metrics as different, distinct...
D
Only
the
back
end
that
may
interpret
them
as
long
as
we
treat
it
as
part
of
the
identity,
which
means
we
don't
ever
have
to
convert
them
between
different
things,
because
two
different
time
series
or
two
different
points
with
different
units
are
for
us
different.
We
never
gonna
merge
them,
so
we
will
never
be
in
charge
of
of
of
converting
from
one
unit
to
another.
F
There was a PR that never got merged; I'm going to put a link in the document right now. I liked it a lot; it was just as we were sort of losing momentum at the time. Yeah.
A
For the processor model: with this one I'm trying to be careful, just to control the scope, but I've seen several issues in the history where people ask: hey, I have this instrument, I keep sending data, I don't need any aggregation, I just want to export this raw data. So they have a scenario, and I want to get a rough idea from the folks here.
F
The most important part of the processor that we needed, independent of any aggregation support, was to convert deltas to cumulatives coming from the synchronous instruments. So you get deltas as input and you output cumulatives; somewhere there's a piece of memory that's going to do that work, and we put that in a processor abstraction.
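The "piece of memory" mentioned here is small enough to sketch. A minimal Go illustration of delta-in, cumulative-out, assuming series are keyed by metric name plus a canonicalized label string (not the SDK's actual processor code):

```go
// Accumulates synchronous deltas into running cumulative values.
package temporality

type seriesKey struct {
	metric string
	labels string // canonicalized label set, e.g. "method=GET,status=200"
}

type DeltaToCumulative struct {
	sums map[seriesKey]float64
}

func New() *DeltaToCumulative {
	return &DeltaToCumulative{sums: make(map[seriesKey]float64)}
}

// Process consumes one delta and returns the running cumulative
// for that series, which is what a cumulative exporter sends.
func (p *DeltaToCumulative) Process(metric, labels string, delta float64) float64 {
	k := seriesKey{metric, labels}
	p.sums[k] += delta
	return p.sums[k]
}
```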
F
This is definitely SDK spec, not API spec. But now, you know, the issue you're showing, about metric data, raw data...
F
I
was
hoping
not
to
make
not
to
have
to
talk
about
that
in
this
meeting,
but
that
that
one
is
keeps
not
going
away
and
if
we
are
going
to
talk
about
it
I
would
I
would.
I
would
be
happy
to
if
everyone
thinks.
F
Well, I have mixed feelings. You know, there's this EMF format I've recently looked at, which is called Embedded Metric Format; it's a logging format for metrics, and Amazon uses it for CloudWatch. So it is very much a case of: what prevents you from just calling it a log? It's that you want to interpret it, so it has structure. So it's a structured log, with some schema to tell you how to turn it back into metrics after you log it, like, through the log, somehow. And I think there's a belief...
F
I've
discussed
this
with
tigran
since
he's
looking
at
logs
that,
if
you're
gonna
have
a
metric
log
format,
you
should
have
it
be
a
metrics
format,
even
though
it
looks
like
you're
logging
raw
events,
so
we
could
extend
our
data
model
for
raw
events
and
but
it
I.
I
fear
that
this
is
going
to
derail
the
meeting
that
we're
having
right.
Now
I
have
an
open
issue
about
adding
an
instantaneous
temporality.
I
just
opened
it
last
night,
it's
to
finish
up
our
histogram
work.
It's
also
to
help
us
get
this
gauge
histogram.
F
I
keep
talking
about
and
I'll
put
in
the
notes,
I'd
like
you
to
consider
that
and
then
we
can
talk
about
raw
events
again
like
it
covers
raw
counter
events
and
gauges
are
already
raw.
So
the
only
question
left
is
what's
a
raw
histogram
event
and
I
have
a
few
answers
and
none
of
them
are
great.
F
Exactly right. What I'd like to say about gauges is that they don't aggregate. So we like to say that the rate of a gauge... there's no event there; it's just a current value. So I think it's dangerous to use a gauge to represent a raw histogram point, although that's very close in semantics.
F
What is really coming up is that people want sampled raw events. So statsd is an example of a case where each API event maps into a raw event, and if you sample them, you then get individual events that carry a count, which is, like: this is effectively a count of 100, because you sampled one in 100. At that point you want to have an event that is raw but sampled, and that's not meaningful for gauges.
F
I
I
according
to
what
I
already
said,
because
you
can't
sample
a
gauge:
there's
no
frequency
associated
with
gauge
event,
but
for
a
counter
and
a
histogram
we
kind
of
want
a
way
to
if
you're,
if
you're,
sassy
or
if
you're
thinking
about
emf,
or
you
want
to
just
literally
output
from
your
metrics
sdk
through
a
log
to
a
metrics
back
end.
You
basically
want
to
have
raw
events,
probably
that
are
sampled.
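The statsd-style arithmetic described here is simple to make concrete. A sketch in Go, with illustrative types: one kept event stands in for 1/sampleRate occurrences:

```go
// A raw-but-sampled event: a single observed value carrying the
// effective count it represents.
package rawevents

type RawEvent struct {
	Value float64
	Count float64 // effective count; 100 if one in 100 was kept
}

// Sampled wraps one observed value with its adjusted count.
// sampleRate is the fraction of events kept, e.g. 0.01 for 1-in-100.
func Sampled(value, sampleRate float64) RawEvent {
	return RawEvent{Value: value, Count: 1 / sampleRate}
}
```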
A
Okay, cool. So we're almost at 5 p.m., so I think we'll cover the topic here. I know Viktor has another one, so I'll let him ask. You guys are more experienced on the metrics side and the history about this, so please help us look at the scope and see if we can further cut it down to something smaller, self-contained, something we can achieve, very concrete, that we won't regret later.
F
I was going to ask about the bound instruments, which are the sort of cooked form of a label set, and that connects with Victor's topic about label sets, especially the predefined versus ad hoc.
G
So my question is really just for, again, clarification. I heard multiple times in past meetings that we want the label set to be more or less ad hoc, in the sense that the set of labels you want to add can be ad hoc when you're presenting, or when you're recording, a data value. Is that a true statement, or not a true statement?
G
And
I'll
give
you
the
reason
for
it,
because,
because
we're
we're
looking
at
some
potential
design,
decisions
in
terms
of
you
know
in
this
particular
case
we're
looking
at
the
net
runtime
and
whether
or
not
we
can
predefine
the
dimension
names
before
you
know.
In
the
api
side,
the
sdk
you
know,
and
thus
would
then
lock
out
when
you're
recording
a
value
to
provide
ad
hoc
labels,
names.
D
This
is
nice-
I
remember
having
this
conversation
long
time
ago,
even
to
find
the
all
the
the
labels
keys
in
advance
and
then
define
only
the
values
right.
F
There's a history here; there's been a lengthy discussion, and I'd like to say that it's worth discussing: a full hour could be spent on this topic. I believe ad hoc is better, but we can decide as a group. My position, thinking about it, comes from wanting to be able to... think about how tracing works: you very much just add labels; when in doubt, add a label.
F
The
ability
to
throw
in
new
labels
potentially
from
distributed
context
was
one
of
the
things
I
think
open
sense
was
talking
about
in
the
statisti
world
because
of
deltas
labels
are
free
and
easy.
You
can
always
remove
them
in
a
delta
measurement
system,
no
problem
so
coming
in
from
statsd.
That's
what
I
wanted.
Josh.
D
There is this difference between what you can do with the data versus how you record the data. So there is a difference: if, at the API level, we want to have keys specified during instrument creation, and just specify values during recording, that's an option. Again, this does not limit what you can do in terms of deltas, aggregations, or whatever; it's a matter of how you get the data.
D
Do
you
get
the
data
in
this
format
like
keys,
yeah
instrument,
creation
and
values
during
recording
or
you
don't
get
anything
during
instrument
creation
and
you
get
keys
and
values
during
record
time?
I
think
I
think
this
is
the
the
question
that
has,
if
I
understood
correctly,
it
does
not
imply
what
we
can
do
or
we
cannot
do
with
the
data
later.
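The two shapes being contrasted can be written side by side. Both APIs below are hypothetical sketches in Go, not proposals:

```go
// Two ways an instrument can take labels.
package labelshapes

// Shape 1: keys fixed at instrument creation; recording passes only
// values, positionally. Fast to canonicalize, but it is the caller's
// job to keep the value order matched to the key order.
type PredefinedCounter struct{ keys []string }

func NewPredefinedCounter(name string, keys ...string) *PredefinedCounter {
	return &PredefinedCounter{keys: keys}
}
func (c *PredefinedCounter) Add(delta float64, values ...string) {}

// Shape 2: nothing fixed at creation; keys and values both arrive,
// ad hoc, at record time.
type AdHocCounter struct{}

func NewAdHocCounter(name string) *AdHocCounter { return &AdHocCounter{} }
func (c *AdHocCounter) Add(delta float64, kv map[string]string) {}
```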
F
There's a data model question here, and I agree, let's take that separately. From the point of view of semantics, or the API mechanics: do you specify a list of key-values that could be arbitrary, or do you have a template that says, I have three labels, and, you know, they have position numbers and stuff? That was where the OpenCensus library was, and I remember there was a lot of discussion: it's a little risky of an API when the API has that, yeah, but...
D
So I think there are some optimizations that you can get if you go with the keys at instrument creation and the values later; you get some optimizations there for allocations, and for things like sorting. Remember, Josh, you have to sort the label set right now in order to compare them; with this, comparing them is just super easy, because you always know the sequence of things, it's always the same, and it's super easy to compare, a bunch of fast things you can do. But, Victor, to answer your thing...
D
We
can
give
a
try
to
come
up
with
some
apis
prototypes
and
see
if
we
get
it
right,
but
based
on
my
experience,
it
was
super
error,
prone
user
couldn't
get
the
right
in
the
right
places,
the
the
values
for
the
the
keys
that
they
wanted.
So
I
saw
that
in
practice,
but
we
can
talk
if
you
have
an
idea
how
we
can
design
the
api
and
still
achieve
all
these
things.
D
But
I'm
I'm
giving
you
feedback
and
then
yes
feedback.
Sorry,
I'm
giving
you
the
history
behind.
Why
we
choose
this,
but
if
you
can
come
up
and
show
us
how
to
do
it
that
way,
I
know
how
much
performance
I
can
win
if,
if,
if
I
have
them
predefined
and
always
I
have
the
same
set
of
things
and
it's
super
cool,
but
it's
super
user
unfriendly.
I
couldn't
get.
F
I
haven't
actually
seen
great
great
applications
for
bound
instruments.
It
seems
like
the
prometheus
library
had
it
and
when
I
first
joined
the
group
here,
your
your
party
line
of
open
census
was
prometheus,
like
performance
compatible
with
prometheus.
Therefore
we
have
to
have
bound
instruments,
and
I
I
I've
been
using
the
code
a
little
bit.
I've
never
written
a
bound
instrument
or
the
place
where
I
did
it
didn't
feel
like
it
was
worth
the
benefit,
the
cost
of
doing
so,
but.
F
Yeah, also, at some point I had proposed a first-class label set object, which was like: you could ask your SDK to compute you a label set, which would be pre-cooked, essentially, and then you could use it over and over again, as opposed to having bound instruments. But even there I wanted more than I had, which was like...
F
I
have
a
label
set
and
I
want
to
add
one
key
to
it
and
I
have
a
derived
label
set,
and
that
was
another
feature
that
you
can
find
examples
out
in
metrics
apis
that
do
that
and
prometheus
does
that
too
you've
got
this
curry
with
or
whatever
it's
called,
but
there's
so
much
possibility
here
and
you
end
up.
I
don't
know
I'd
like
to
know
what
others
think
about
all
those
options.
D
To answer your question, I would say: if we get...
D
As I said, for the predefined, I think we need to do a better job; maybe the predefined can return some kind of a builder for this. So, initially, the thing that I saw so far was this concept of: I have predefined keys, and the API for recording accepts a list of values, and it was the user's job to match the indexes.
D
So
that
was
the
api
that
I
I
used
and
it
was
very
error
pro,
but
I
think
we
can
do
better
even
with
predefined.
We
can
maybe
return
an
object
that
allows
you
users
to
to
say
that
the
value
for
this
key
is
this
one
and
stuff
like
that.
Still
they
use
something
like
this
with
value
for
this
key
and
value
for
that
value
and
behind
the
scene
we
construct
the
index
matching
and
we
pass
that
to
the
sdk.
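The "return some kind of a builder" idea can be sketched directly: creation still fixes the keys, but recording sets values by key name, and the positional mapping happens behind the scenes. Hypothetical API, for illustration only:

```go
// A builder that keeps the performance of predefined keys without
// making users juggle positions.
package predefined

type Instrument struct {
	index map[string]int // key name -> slot
}

func NewInstrument(keys ...string) *Instrument {
	idx := make(map[string]int, len(keys))
	for i, k := range keys {
		idx[k] = i
	}
	return &Instrument{index: idx}
}

type Labels struct {
	inst   *Instrument
	values []string // always in the instrument's fixed key order
}

func (i *Instrument) Labels() *Labels {
	return &Labels{inst: i, values: make([]string, len(i.index))}
}

// Set places the value in the right slot by key name, so the caller
// never sees the index matching.
func (l *Labels) Set(key, value string) *Labels {
	if pos, ok := l.inst.index[key]; ok {
		l.values[pos] = value
	}
	return l
}
```

A call site would read like `requests.Labels().Set("method", "GET").Set("status", "200")`, which keeps the fixed-key layout while removing the positional foot-gun described above.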
D
So
so
I
think
there
are
possibilities
to
to
do
the
api
to
for
predefined
to
be
a
bit
more
user
friendly,
and
we
didn't
do
that
and
maybe,
if
you,
if
you
have
time
to
think
about
how
would
you
do
a
nice
api
for
users,
that
use
is
predefined
but
also
also
is
not
very
error
prone.
I
think
that
is
a
good
trade-off
to
be
considered.
Yeah.
F
Yeah,
you
know
from
us
my
background,
as
a
user
of
metrics
really
had.
I
used
a
lot
of
dog
stats
d
and
because
measurements
are
output,
as
deltas
like
there's,
there's
no
harm
done
in
adding
another
label.
It
gives
you
a
new
way
to
find
that
data
and
it
doesn't
change
the
interpretation
or
any
of
the
charts
that
you
could
possibly
imagine,
because
it
because
of
the
way
deltas
work
and
in
the
way
that
delta
counts
work.
F
So
and
also
I
come
from
a
tracing
background,
where
the
the
common
mode
of
interaction
with
tracing
is
is
add,
more
labels,
because
you
want
more
ways
of
finding
that
data,
and
then
we
can
do
sampling
and
querying
and
do
like
just
like
stats.
F
You
can
sample
stuff
and
count
them
as
well
and
so
that
working
with
high
cardinality
is
a
possibility,
at
least-
and
I
and
I
like
to
draw
a
connection
from
this
span-
that
I've
got
with
15
dimensions
into
a
metric,
because
that
in
statsd
is
exactly
what
you
do
and
I've
seen
it
work.
I've
seen
it
used,
but.
F
Yeah, there are definitely options, and I think we're going to get to the place where per-language, idiomatic usage is what we really need, and so every language might be a little different in this place. So it's a question of how we can spec out something great. You know, there's an example in Go where you actually use a struct with struct tags.
F
So
I
give
you
a
struct
with
five
fields
and
they're
typed
and
they've
got
names
and
you
give
them
tags
and
and
then
you
give
it
to
the
metrics
library
and
it
creates
five
labels
and
they're
typed
and
it's
pretty
pretty
safe
actually,
but
and
it
uses
features
of
the
runtime
and
all
kinds
of
reflection,
stuff,
and
but
it's
it
works
pretty
well
and
it's
pretty
popular,
it's
not
going
to
apply
in
every
language.
So
I'm
not
sure
how
we
address.
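The Go struct-tag pattern mentioned here relies on reflection over a typed struct. A self-contained sketch (the "label" tag name is made up for illustration; real libraries pick their own):

```go
// Typed labels via struct tags: the struct is the label schema.
package structtags

import (
	"fmt"
	"reflect"
)

type RequestLabels struct {
	Method string `label:"http.method"`
	Host   string `label:"http.host"`
	Status int    `label:"http.status_code"`
}

// LabelsOf flattens a tagged struct value into key/value pairs.
// Expects a struct (not a pointer) with exported fields.
func LabelsOf(v interface{}) map[string]string {
	out := map[string]string{}
	rv := reflect.ValueOf(v)
	rt := rv.Type()
	for i := 0; i < rt.NumField(); i++ {
		if tag := rt.Field(i).Tag.Get("label"); tag != "" {
			out[tag] = fmt.Sprint(rv.Field(i).Interface())
		}
	}
	return out
}
```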
D
...that. I think we can recommend predefined versus free-form, as Victor said, and every language does it in the way that fits. So, anyway, we can discuss, Victor, but, Victor, personally I would like somebody to spend a couple of days thinking about how the predefined approach can be improved to be user-safe, not that error-prone. As I explained to you, the version where you define the keys, and then you have an array of values, and it's the user's responsibility to match the keys and the values: that was just wrong.
D
I think that's the main difference between a printf statement and this. Okay, that's my two cents. Josh, do you have two or three minutes to talk here? I want to discuss a bit some of the data models; I think we are done with the API thing, but I want to talk to you. Thank you, everybody, thank you. Everyone is welcome to stay. ... Right, when you're here... yeah, I've got to run anyway, but thanks so much. Okay, so I have this, Josh.
D
I
want
to
discuss
this
one
off,
think
thinking
and
I'm
not
against.
I
understand
how
you
solve
try
to
solve
this,
but
I'm
still
struggling
to
see
if,
if
we
put
a
one
off
here
versus
we
put
a
one
off
and
have
here
in
this
list,
we
have
a
explicit
histogram
and
exponential
histogram
and
sampled
histogram.
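In Go, generated protobuf code expresses a oneof as an interface with one wrapper type per variant, which is the shape being debated here. An illustrative sketch (names invented, not the OTLP definitions):

```go
// One histogram field, several mutually exclusive representations.
package histomodel

type HistogramData interface{ isHistogramData() }

// Explicit bucket boundaries, as in the classic histogram.
type ExplicitBucketHistogram struct {
	Bounds []float64
	Counts []uint64
}

// Log-scale buckets derived from a scale parameter.
type ExponentialHistogram struct {
	Scale        int32
	BucketCounts []uint64
}

func (ExplicitBucketHistogram) isHistogramData() {}
func (ExponentialHistogram) isHistogramData()   {}
```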
D
The reason I'm struggling with this: what having it gives you, what I'm saying, is a guarantee that all the points will have the same bucket types.
F
So, anyway, yeah, I agree with what you're saying, but I feel like the practical cost of having two or three different histograms, where we handle them all the same, is low. For the most part they're the same, because they have the same labels and the same start times and temporalities and stuff, and then they're just different ways of forming buckets. So, yeah, I don't believe this is going to be a real problem, but you're right, something has to be said about what happens.
D
Because we can document it; I mean, our public documentation can be: there is only one histogram concept, and here are the five or four representations of histograms. If you are using exponential...
D
Yeah, that's something that we decided: we're going to drop that thing, we're going to drop the int from here, so it's going to have only the double histogram. And my idea was: we make that the exponential histogram, or whatever... sorry, the explicit-bucket histogram, or whatever, and then we have an exponential histogram.
F
I was... I mean, what did I write there? I said something like: you can merge if the buckets are the same; the bucket has to be able to tell you whether it can merge or not, because, for example, I think we want to be conservative. So, if your explicit bucket boundaries are the same, then you can merge; otherwise you cannot merge. So it's a runtime check: if they're the same buckets, yes; if they're different buckets, no. And the same would be true...
F
That's a possibility, then, because you can say: this may or may not be mergeable; assuming they're mergeable, it'll succeed, and if they're not, it won't. And then, I guess, what we're left with reduces to Victor's question that you asked him, which is: now you've got the potential for mixing of integers and floating points, or mixing of explicit and exponential boundaries, and they're almost the same topic at that level. So we should talk about it again. Like everybody else, I'm out of time.