From YouTube: 2021-09-21 meeting
B: The spec meeting was running over, so just FYI.
B: Yeah, just FYI, we had a very, very long discussion on the min/max and histogram topic that I think we need to continue here. So I'm going to copy-paste the notes from the spec meeting and add them here. But okay, that'll be a fun one. I asked people to attend, so I don't know if you want to cover that first, because I expect it to take a little bit, or if you want to go through business first and then get into it.
B: I don't know how long the first set of engineering topics will be.
A: So this is the board we have, and currently we've done eight items; we have 12, and some of the items are not assigned to anyone. I think only a few, probably one or two. I wonder if people can claim them. I think some of the items are quite independent from the other things, while some others are very entangled with other items. Currently I assigned them to myself, but if you look at an item and think it's something where you have good contacts and want to take it, please do.
A: I definitely appreciate your help, because I think we're running short on time, and the in-progress items are what we're going to focus on today. So I think there are three things. One is not ready for review; I'm still working on that, the 'which instrument should I use?' guidance. The min/max is a big topic that we'll discuss. And then I have a small PR to remove the measurement processor; we already got three approvals. I think Josh Sturridge had a suggestion for me, just rewording the timestamp part a little bit, so I'll do that, but after that I think we're good to go, and then I'll work on 'which instruments should I use'. Also, I think there is an ask about clarifying the memory usage. For example, we're saying that if people are using delta, then we shouldn't let the SDK take an indefinite amount of memory over time; the SDK should be able to forget about the very old data, and that's becoming a requirement. So I guess we would want the SDK spec to have a section regarding memory usage and giving some suggestions.
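As a rough sketch of the "forget very old data" idea (hypothetical names; the eviction policy is purely illustrative, not from the spec):

```python
import time

# A delta aggregator keyed by attribute set that evicts streams not
# updated for `max_idle_seconds`, so memory stays bounded over time.
# (The delta reset on collection is elided for brevity.)
class BoundedDeltaStorage:
    def __init__(self, max_idle_seconds=3600):
        self.max_idle_seconds = max_idle_seconds
        self._streams = {}  # attributes -> (running_sum, last_update)

    def record(self, attributes, value):
        running_sum, _ = self._streams.get(attributes, (0, None))
        self._streams[attributes] = (running_sum + value, time.monotonic())

    def collect(self):
        now = time.monotonic()
        # Forget streams that have gone stale instead of keeping
        # them for the lifetime of the process.
        self._streams = {
            k: v for k, v in self._streams.items()
            if now - v[1] <= self.max_idle_seconds
        }
        return {k: s for k, (s, _) in self._streams.items()}
```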
B: Just because, if we're trying to push the API to get adopted across instrumentation, we don't want too many SDK implementation details to leak across best practices, if that makes sense. So, from the standpoint of somebody writing instrumentation using just the API, they should have a clear notion of how best to do it and what leads to the most efficient collection for their users.
B: So to that extent, I think there's a bit of spec work to do here, but we might want to coordinate what we mean there. For example, in Java it might be different than in other languages, but yeah.
C: Maybe we should define, from the SDK perspective, that if you are implementing any kind of optimization where you possibly drop points, you should have this configuration, and that's it.
A: Yeah, I think so. These are not 'should' or 'must'; this is best practice. But my take from Josh is that when a lot of people start to instrument, we want to give them something, instead of letting them go through the entire spec and trying to understand it. So if I'm spending eight hours just to save 100 developers two hours each, I think I'm willing to do that.
A: Okay, yeah, but I got your points, thanks folks. So, coming to the big topic here: min/max.
B: Yeah, let's make sure; so Jack is on the call, right? Yeah, yep. And Bogdan and Josh, because we were the three that talked the most. I'll give a quick recap.
B: So the recap is: there's a specification PR around adding min/max to histogram, and there are a couple of proposals on ways forward. We tried to distill down a few key decision questions that are either yea or nay, and I'm going to rephrase them, actually, based on the previous meeting. So I think I'm going to have three different decisions that we need to understand. So, if you look here...
B: Okay, so one of the things that basically came up that I hadn't considered, and this is from Bogdan, is that right now we have a requirement in the data model that we should be able to convert from cumulative to delta, and from delta to cumulative, for all aggregations.
B: The next question that we're debating is: right now, in our data model, we have sum as an aggregation; we have histogram, which basically does a bunch of sums; and then we have gauge, which doesn't actually aggregate, it just keeps the last value. And thanks, Bogdan, I just noticed your comment: so gauge actually just keeps the last value. So the question is: do we need a new aggregation function that does actual natural min and max? And I think Jack would like that in this PR.
B: I would also like to see that in the PR; we just need to figure out how to make that happen in a way that people are comfortable with, because there are implications for cumulative-to-delta conversion if we add natural min and max: natural min and natural max do not have a natural cumulative-to-delta conversion, so we'd be giving up on that. The third thing for us to discuss is: is it okay if cumulative histograms have different semantics for different pieces of data? There was a proposal to make min and max effectively have gauge semantics within histogram, and to have min and max actually mean, say, the last 10 minutes: what was the min over the last 10 minutes, what was the max, as opposed to min and max being the cumulative min over the entire reporting period. Similarly, that might apply to delta; I'm not sure, we didn't actually get into that.
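A minimal sketch of the asymmetry being debated here, with illustrative values:

```python
# Natural min/max compose going forward (delta -> cumulative) ...
def merge_delta_into_cumulative(cum_min, cum_max, delta_min, delta_max):
    return min(cum_min, delta_min), max(cum_max, delta_max)

# ... but not backward. Given two cumulative points
#   t0..t1: min=2, max=9
#   t0..t2: min=2, max=9
# the delta for t1..t2 could have any min/max inside [2, 9].
# Subtraction recovers deltas for sums and counts, but it tells
# you nothing about min/max, so cumulative -> delta is lossy.
```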
C: I think, if we use the current temporality for that aggregation, because that's the thing that signals it, then yes; otherwise we can introduce a 'window' temporality, or something like that, that supports this.
C: That, for example, applies to the summary type, because summary has the same problem: the quantiles are not calculated over the cumulative or over a delta. They are calculated over a fixed window, 10 minutes or something like that.
C: Is that what you mean? That's a separate discussion, whether we allow mixing the temporalities; but I'm trying to say that, so far, these functions, min and max, do not comply with our definitions of temporality right now.
E: Trying to get to a decision, right: so there is no current min and max, it's just a proposal. So I think the implication of the question Josh is asking is: if it is not a hard requirement that we need to be able to go from cumulative to delta in all cases, then we could propose that min and max, in both the delta and the cumulative case, follow natural min and max semantics, and then we don't have to worry about that anymore.
B: Okay, so the 'last 10 minutes' thing was just an example of what we saw on the pull request; that's not an actual proposal. When it comes to this cumulative-to-delta thing, if you want, we can take a step back and look at the whole problem again, but I was hoping to focus this discussion specifically on the cumulative-to-delta requirement: is this something we absolutely need to have in our data model? Let's just agree on that, yea or nay.
A: Yeah, so I'll explain my priority. I think respecting the temporality is my number one concern. Otherwise the protocol is very misleading, and later we might use this as a floodgate to hack other places, and eventually the temporality would make no sense, given that temporality is a top-level thing. I want respecting that as the number one thing. And the number two thing: when it comes to needing some sliding window, I think we can use a different type or something, instead of saying the top level is the temporality and then having something that goes against the top-level rule; that's just a hack to me.
D: I think I agree with all this. One thing I want to remind people of, then, if we're going to move away from the idea that there's a min and max inside the same point, is that we don't really have a way to combine metrics from a family. That's a term Prometheus uses to say that all these metrics relate to the same instrument.
B: Well, possibly, except if min/max/sum/count needs to have cumulative and delta. Nope? Just make it delta? Okay, so again, let's just... okay. From what I understand, though, the answer to question number one, cumulative to delta, is that people would like to preserve cumulative-to-delta as much as possible, which means min and max as natural aggregation functions do not work if we need to go backwards. So going from delta to cumulative is fine; going from cumulative to delta is not fine. We have a natural min and max that we can use for most aggregations, and that makes sense.
B: People understand what the hell that means, but we can't go from cumulative to delta; therefore, we cannot provide a data point with a natural aggregation function of min or max, if the answer to number one is 'let's preserve cumulative-to-delta'. Does everyone agree with that, or am I mistaken in some fashion?
E: Though, you know, what we're suggesting then, well, I guess the implication of that, is that you have a histogram with a min and max that is cumulative for all intents and purposes, except for the min and max, which are recently recorded data points, and it doesn't have a temporality on it. And because it doesn't have a temporality on it, you don't have to worry about being able to convert between delta and cumulative accurately. But as soon as you stamp the histogram with cumulative temporality, then what? So you can't go from cumulative to delta, and so you can't have min and max on it. What's the fallout of that?
D: Want the worst idea of them all? You ready? Yeah. We just remember that a cumulative is a delta; they degenerate into the same thing. Therefore, to convert a delta into a cumulative and preserve the min/max, you literally change its temporality without changing its timestamps: it is cumulative from the beginning of the delta. It's technically correct.
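A tiny sketch of that relabeling trick (illustrative field names):

```python
# A delta point covering t1..t2 is also a valid cumulative point for
# a series that began at t1. Relabel the temporality, keep the
# timestamps, and the min/max stay exact.
def delta_as_cumulative(point):
    relabeled = dict(point)
    relabeled["temporality"] = "CUMULATIVE"  # timestamps unchanged
    return relabeled
```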
B: That is technically correct, and that's one of the reasons why I didn't want cumulative to have a different notion of what min and max were than delta. But then again, it's also not super practical. I like it, though; I want to be the one with the bad ideas, Josh.
B: Then let me just throw out a completely different idea: if we're completely set on cumulative-to-delta being required, and we want natural min and max to work in an aggregate fashion on some metric, then should we go back and make summary a viable aggregation in the entire SDK, so that we can get min/max/sum/count out of that?
C: Summary right now, unfortunately, is a random window, because that's how Prometheus defines it, and we have to stick with that for Prometheus backwards compatibility. But, as mentioned before, I'm happy to add another point type, call it 'light histogram' or 'min max sum count' or whatever we want to call it, with different semantics, where we don't put any temporality into that metric, which means we are allowed to do whatever we want.
B: ...the instrument that generates it, right. Again, from a usability standpoint: you know, if somebody wants this min/max/sum/count thing in their exporter, and we have an instrument called histogram right now, are we... I don't know.
D: It's not worse. I think that's why we've been discussing putting this in the histogram, especially the explicit-boundary histogram, because it already says something about omitting buckets, which makes it just sum/count. But is what you're saying worse than what I'm suggesting, which was: throw in the min/max for cumulative and let it be fuzzy, who cares?
B: I'd echo that as well. I think people who are using cumulative have adopted cumulative, and people who are using delta might need to adapt to cumulative, because of reasons, but like... cumulative-to-delta is not as important. So wait, wait.
D: I'm not sure, actually. I was going to say that the Lightstep backend is based internally on deltas, but I think there are backends based on cumulatives as well. I asked my backend team how they handled realignment of these delta points, which, as I said earlier, is a problem that exists even if we're not talking about cumulatives. The phrase that was used is that the min/max points are 'along for the ride'.
D: They're just considered gauge points, really, at the end of the interval, and if you always think of them that way, it's pretty easy to define how to manipulate them. It's just that you end up with a sort of fuzziness, sliding the min/max around by, you know, half of the window on average or something like that.
B: Yeah, our backend is cumulative by default. We do allow deltas; it's interesting how that's dealt with, but it's the opposite for the same people in large part, I know, which is fun.
C: But that's only... so, George, it depends: if your window is, let's say, a half-hour or an hour window, and you are receiving 10-minute windows for these intervals...
B: Let's do something better here: people who have delta backends, specifically Jack, let's ask you, what would be your preference here? Are you expecting to generate cumulative histograms and absorb cumulative histograms in your backend? Or are you looking to get the delta thing squared away and then just do delta? Just asking.
E: You specifically... so we have a delta backend at New Relic. If we went down this path of having cumulative histograms where the min and max are natural min and maxes, that, you know, are the min and max from t0 to now, and somebody sent those to us, we would drop them. We would just say that there is no way to translate that cumulative min and max to delta.
B: Okay, so the question I just asked Jack was: are you recommending your customers use delta aggregation for your backend, and also, what are you doing if cumulative metrics come in? So what's Lightstep doing here?
D: I've been describing all along what they would do; I'm pretty sure, and I can go double-check with them, but the min/max value is approximate. We don't care about its exactness; it's just a point in time that we would use if we were trying to plot it. It's not a big deal to us. We will convert; we store points in both formats, and we convert to deltas in the processing code path.
A: Okay, so I'm trying to see if we can get some conclusion to move this forward. So, for example, if Jack is going to change the PR to say that, no matter whether it's cumulative or delta, the min/max should respect temporality, which means it should always represent the mathematical min/max for the time range, then either section is saying...
E: And min and max are optional anyway, so I mean, are you really sacrificing cumulative-to-delta, or are you just, you know, taking advantage of the optional aspect of those specific fields?
B: Bogdan, I don't know if you saw, but we talked about this previously, and then I got super swamped and wasn't able to do half the things I said I would. But field presence is a thing; we need to have a discussion about this in the spec meeting as well.
B: I don't know if you were there when we raised the example PRs and just asked people to take a look at them, but we'd like to propose using field presence in OTLP going forward, now that it's part of proto3. This means you can explicitly mark a field optional; then field presence is allowed, and we can determine whether someone didn't fill a field out or whether it's zero.
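A minimal sketch of what presence buys you, assuming a hypothetical proto3 message declared with an `optional double min` field (the `HasField` call is standard generated protobuf Python; the message itself is illustrative):

```python
# With proto3 `optional`, presence is tracked explicitly, so an
# unset field and a field set to 0.0 are distinguishable.
def read_min(point):
    if point.HasField("min"):  # presence check enabled by `optional`
        return point.min       # may legitimately be 0.0
    return None                # the sender never set it
```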
C: Or does it make it a boolean that carries the value? So that if it was encoded, it means it was present, and if it was not encoded, it means it wasn't present?
C: Okay, I need to check my...
B: Links are in the spec SIG meeting notes from a week or two ago, and I can find them again for you if you want. But yeah, basically, to get proto3 adopted in places where proto2 was, they just added back the in-process check of whether or not a field was present, and you...
B: Slack, yeah. I'll just scroll down through this meeting's notes, grab the link, and put it to you in Slack.
A: Just to make sure we're clear on the follow-up item: this is still a proposed conclusion, because we have a pending item on Bogdan. We'll do the research and confirm whether this would work; if it works, Jack will change the PR to align with what we decided here. If it wouldn't work for the Collector, then what do we do? My suggestion is: if it wouldn't work, then we should put this out of the stable-release scope.
E: Well, there's still an option to go forward even if we can't use the optional keyword in the proto: there are still the metric flags that, you know, you can encode. To my understanding, that's been the way we've specified optionality for fields up to this point, and so we could take advantage of some of those bits to specify whether the min and max are present or not.
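A sketch of the flag-bit alternative; OTLP data points do carry a flags field, but these two bit positions are made up for illustration:

```python
# Hypothetical presence bits packed into a data point's `flags`.
MIN_PRESENT = 1 << 1
MAX_PRESENT = 1 << 2

def read_min(point):
    if point.flags & MIN_PRESENT:
        return point.min
    return None  # bit clear: treat the field as unset, even if 0.0
```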
B: Yes, one thing I want to add, because I don't think it was mentioned: Josh's PR using field presence broke the build, because we actually have to go to the maintainers SIG and the metrics SIGs, or sorry, the language SIGs, and update the proto generation.
B: If they're using a protocol buffer compiler from, like, two years ago, they need to use the --experimental_allow_proto3_optional flag; if they're using one from, like, six months ago, then it's built in. So we just have to go update some of the flags for the build and for the proto generation. But I think we should be directly reaching out to the language SIGs, and I can take ownership of that with the proto PR.
B: I think it was either assigned to me or I'm on it; either way, I can go reach out to the language SIGs and say, hey, we're doing this update. We'll bring it to the specification meeting, talk about it there, and make sure that's okay with everybody for the optional flag. But I agree with Jack that that is a proto concern, and the data model PR can go through without it.
A: So after this, I'll work with Strike on the SDKs. I got it.
G: Yes, hello. I was talking with Aaron about the API.
A: For example, the metrics spec is following exactly what we have done for the tracing spec, and in the tracing API you can see, this is the tracing API spec, we mentioned something similar: like, we mentioned 'we must not require the user to repeatedly obtain...'. As general feedback, it seems a lot of the requirements are suitable for the SDK instead of the API.
A: My gut feeling, from going through the tracing spec, is that I want metrics to follow what we have done in tracing, instead of creating something totally different. And I also kind of understand why people originally put some of the actual restrictions or requirements here in the API spec: because when people use the API, they want to see what level of guarantee they have. If I call get-meter, or get-tracer, with the same input, do I get the same instance or a different instance?
A: I think that one is flexible, but I do see the benefit of putting this in the API spec, because, as the owner of a library you want to instrument, you need an API, and the only thing you can trust is its spec. So you read the API spec and everything is self-contained; you don't have to go to the SDK spec and try to understand the behavior.
G: It is okay; it's just hard to know where the checks should live, because so far I was just implementing in the API the stuff that was required in the API spec, and it seems like in some cases we should implement in the SDK the requirements that are specified in the API document.
A: Yeah, I hear you. So my answer will probably be: whatever Python did for tracing, which is already stable, we follow the same thing for metrics, because if Python can ship a stable version following the tracing API, I think we should be able to do the same for metrics. I think so. I think this is the first point; I think we're talking about the second point here, right, Riley? The configuration one.
I: Is this, by any chance, referring to having it so that if you set up an SDK after somebody already has references to no-op tracers, that should take effect? Or is this referring just to the behavior that the SDK should have if you do have an SDK, like you were saying?
A: I think, for Java, what I heard from Jung is: the API is not doing any validation; even if you give it something that really wouldn't make sense, the API package will just take it. However, there is a separate dummy implementation, whether it's in the API package or the SDK I'm not sure, but people can specify it when they run unit tests or the dev inner loop, and that dummy implementation would give you some error.
G: Okay, maybe it's worth mentioning here that this could be strongly related to a difference between the languages.
G: You can't put code in those parent methods... because it is not the same thing. The obligation of implementing them does not conflict with the ability to inherit from them, so you can call super, and in that way you can execute the code from the parent method.
G: We have noticed that there are several checks that every SDK must do, and it makes sense to have those checks implemented in the API methods, so that every SDK can always just call super and execute these checks. At least in Python, I don't have any problem with doing that. I have noticed that some of the checks defined in the API document make sense to have in the API, and I disagree with other ones, from your previous answer.
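A minimal sketch of the "call super" pattern being described (illustrative names; the log-instead-of-raise behavior follows the spec wording quoted later in this discussion):

```python
import logging

logger = logging.getLogger(__name__)

# The API base class carries the checks every SDK must perform;
# an SDK subclass runs them by calling super().
class Counter:
    def add(self, amount, attributes=None):
        if amount < 0:
            logger.warning("counter increments must be non-negative")
            return False  # invalid: logged, not raised
        return True

class SdkCounter(Counter):
    def __init__(self):
        self._value = 0

    def add(self, amount, attributes=None):
        if super().add(amount, attributes):  # shared API-level checks
            self._value += amount            # SDK-specific aggregation
```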
G: Riley, I now feel like it is kind of okay for implementations to decide where these checks should be. So if we decide some of them should be in the SDK, some of them should be there; but the opposite question arises: if we think that some checks can and should be in the API, can we have them in the API as well?
A: Yeah, I think for OpenTelemetry, one challenging thing we've seen is that when a lot of people look at the API spec, they believe it's talking about the language API. Well, it's not; it's talking about a package. It's talking about the dummy implementation of an SDK. So the actual OpenTelemetry API implemented in each language is the API plus a dummy SDK, and these requirements, I believe, are part of the dummy SDK requirements.
G: But this dummy SDK: are you referring to the dummy SDK as a subclass of an abstract class?
A: This is the freedom for each language; you decide how you want to implement that. I think in OpenTelemetry .NET we implement the abstract version of the API, so you use the API knowing that it's not going to do anything for you. If you don't specify any real SDK, you won't run into a runtime issue, because by default we give you a dummy SDK, and that SDK has some logic following what the API spec requires.
B: Yeah, one suggestion I have, from implementing and from looking at the PR, and I think I made this in the PR, but maybe I didn't: when you implement the SDK, there will be some additional requirements and checks that need to happen in the SDK, and it might make sense to defer some of the decisions about what goes in the API versus the SDK until you're finished with the SDK.
B: My suggestion would actually be to move everything into the SDK implementation and then move things backwards into the API as you understand the full implications of the SDK, just because there are some checks you need to do in the SDK that could be made very hard depending on how you implement your API. So, from the standpoint of the initial implementation, what we did in Java was implement raw API interfaces; that's it, no implementation whatsoever. I fully expect that over the next three months we'll add some checks back into the API for consistency.
B: So, if you haven't already implemented the SDK, go start doing some of it, because you'll run into some weird things where you need a level of abstraction between the two, or you have to actually defer some of these requirements of the API into the SDK implementation to do them correctly. But that's all on you; you can decide that. My suggestion in your PR was just: maybe implement more of the SDK first.
G: I kind of had the opposite experience. I put all the checks in the API because I thought they were required by the API spec, and I kind of felt that some of them should be in the SDK. So now we're probably going to do the opposite: move checks from the API to the SDK, right, the ones that I consider should be there.
D
Yeah,
I
heard
something
there
was
a
concern
raised
here
about,
I
guess,
deferred
setup
for
the
sdk.
Was
that
like
something
about
when
you
create
instruments
before
there's
an
sdk?
Is
that
a
concern
that
you
have
in
python
a
problem
that
you're
trying
to
solve?
That
is
one
we
solve
in
go,
but
I
don't
believe
it's
one
that
every
language
has
yeah.
We
have.
D: So, in Go, I describe this as deferred setup. You can create instruments statically using the global SDK, and that will create you an instrument that doesn't work yet; but as soon as you do install the SDK, all the deferred setup happens on the new SDK that you install, so the instruments you declared and registered statically work after an SDK is installed. This is the approach we've taken to allow instrument registration in Go before you have an SDK.
D: It doesn't necessarily have to be the only way to do this. If you have a proper dependency injection framework, as I call it, you don't really need that; but Go does not have a proper dependency injection framework, so this is what we've done. And it sounded like that might be something you're doing in Python, which necessitates creating a sort of category of SDK that's sort of hollow and defers to a real SDK, and that is a reason why, in the Go implementation, there's pressure to try to factor out some pieces of common code.
D: I was merely commenting on the deferment question. It does create some complexity, and it's not always clear why it's being done. We could try to standardize something for some of the languages.
I: Yeah, I was going to say, we do have this sort of deferment thing; we have a proxy pattern, which we've also done with tracing. And yeah, I agree: if you put some of the behavior in the base class, then this proxy won't be using it; it will just call the actual implementation directly.
D: Is this going to interfere with a low-cost no-op? And maybe what that code was there for was to make sure that the proxy works. I think the observation is that factoring code very nicely, so that your proxy and your default SDK share a bunch of code, makes everyone detect a bad smell. It's just not worth the benefit of that factorization, because it looks like you're creating an SDK base class that everyone has to follow, and it's only a base class for your two implementations.
I: Yeah, I think those checks were actually just in there because, if you do a literal reading of the API spec, like the part Diego copied, it says the name property should keep its original value, and a message reporting that the specified values are invalid should be logged. So, in order to implement that, we do have to keep some state. There's a similar check also for duplicate instrument names, and then also for non-monotonic values passed to a counter.
I: So, just from a literal reading of the API spec, I think that's why we have most of those checks. Right, Diego?
D: Yeah, Diego and I discussed that yesterday, and it seemed like maybe the spec is a little vague. My interpretation, for example with duplicate instrument registration, is: the first time I try to register a counter, let's say by name, then the moment I try to register a gauge with the same name, it logs the error, hands me back a no-op instrument, which will never do anything for the rest of its lifetime, and that's it.
D: And in order to do that, the no-op instrument doesn't need to remember its name. I've logged the error at the moment I had it, and then I forgot it. That's kind of how I read it, but Diego and Aaron seem to have read it differently, and I'm not saying that's incorrect.
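A minimal sketch of that reading of duplicate registration (illustrative names):

```python
import logging

logger = logging.getLogger(__name__)

class NoOpInstrument:
    """Handed back on a conflict; logged and then forgotten."""
    def add(self, amount, attributes=None):
        pass  # never records anything for the rest of its lifetime

class Meter:
    def __init__(self):
        self._kinds = {}  # instrument name -> kind

    def create_instrument(self, name, kind, factory):
        existing = self._kinds.get(name)
        if existing is not None and existing != kind:
            # Same name, different kind: log once, return a no-op
            # that doesn't even remember its name.
            logger.error("duplicate instrument registration: %s", name)
            return NoOpInstrument()
        self._kinds[name] = kind
        return factory(name)
```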
G: No, in fact, after the conversation that we had yesterday, I noticed that we lacked the functionality to do that, to return a no-op instrument or object. So that's something I'm working on now; yeah, we are actually pretty much going to follow it.
B: The other question is this next one, about 'implementations must not require users to repeatedly obtain a meter, again, with the same name and version and schema URL'. From what I understand, though, it's allowed, right? Specifically, you shouldn't be required to do it, but you should be able to do it: if I call get-counter, or whatever, with the exact same arguments, I should get back the same counter.
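A minimal sketch of that idempotence (illustrative names):

```python
class Meter:
    def __init__(self, identity):
        self.identity = identity

class MeterProvider:
    def __init__(self):
        self._meters = {}

    def get_meter(self, name, version=None, schema_url=None):
        # Repeated calls with the same identity hand back the same
        # meter, so callers are not required to cache it themselves.
        key = (name, version, schema_url)
        if key not in self._meters:
            self._meters[key] = Meter(key)
        return self._meters[key]
```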
D: Okay, what's interesting is that this conversation is exactly where all the complexity lies between your proxy and your real SDK: you're trying to implement this registry. The registry is responsible for saying which meters and which names are registered, and the checking is supposed to be done at the moment you create the instrument, which means it has to happen before the deferred registration. So the deferred object has to implement the same checking that the real object has to implement, and that's confusing, and it causes you to want to factor things in a way that gets out of hand. I think there's an open PR...
D: Yeah, I was not trying to... You said something about throwing in Go. What I was saying is returning an error, which says: you tried to register an instrument, and I'm going to tell you it failed. I'm not throwing; I'm just returning you an error value, which is how Go does it. So I'm returning the error statically at the moment it happens, even though the real SDK has not been installed yet. It's just a choice; it's an idiom that works in Go.
B: But that's actually against the rest of the specification; we'll have to talk about that. Because, again, if somebody is instrumenting an API and there's an issue between two pieces of instrumentation, what does the person who wrote that instrumentation do on error? Are they expected to just silently ignore it and drop it? You're forcing them to deal with the problem. Or are we giving them an empty object, right?
D: I was making it a choice, and the reason why I think it's an acceptable choice is that you're supposed to be a single instrumentation library here; there shouldn't be multiple parties that literally don't know about each other writing instruments for the same instrumentation library. So this shouldn't happen unless it's a real accident. So if you try to register a gauge and a counter, I'm going to tell you one of them failed and hand you an empty object for it, and that's okay for me.