From YouTube: 2022-02-15 meeting
A
Hello, howdy. Did you? Did you get here using the link I sent in the chat, edgy?
A
B
A
I think I subscribed to one of them and then they changed it, and I just go to the website every time now. But anyway, I wasn't sure. Sometimes I'm terrified about sharing the wrong links, so I wanted to make sure I wasn't just creating multiple people using the wrong link, but I trust now that it's the right link.
C
D
A
I invited our manager Andy, who I don't think you've had a chance to meet, to the meeting. I'm not sure he'll be able to attend, but he had mentioned he was interested in getting up to speed on some of Robert's SIG... what would I call it, weird SIG metric stuff. So I was like, yeah, he's been explaining stuff in these meetings, so you might pop in. A nice fellow, if he does join.
A
If not, he'll hear me saying this about him on a recording later, I guess.
A
C
All right, so I think there has finally been some resolution on the instrumentation scope / instrumentation library thing that we've been talking about. Yeah, it's really finalized there. They just kind of mentioned it.
C
They mentioned it earlier in the week, last call for comments here, and then they mentioned it again at the meeting, and then it was probably literally just merged, maybe two minutes ago. But I kind of scanned through and, I don't know, I feel like this is the worst of all possible solutions, but maybe the least disruptive of all solutions. This all came about because the logging SIG needed a place to stick the logger name. Originally they proposed a new field, and then somebody was like, well, it's not just instrumentation library but for logging, it's for a logger name.
C
I think, coming from the Java world at least, it's usually the fully qualified class name that gets stuck on it. So instrumentation library was, semantically, not a great name for it.
C
So then somebody suggested: well, what if we just called the thing "instrumentation scope", and for instrumentation that'll be the instrumentation library name and then the version, and then for logging it can be the fully qualified class name and/or version? That morphed into a couple of ideas.
C
The one idea that I actually liked was to have instrumentation scope just be an arbitrary set of key-value pairs that you could add, that apply to everything below it, and then just have a reserved name and version there.
C
That would take the role of instrumentation library / instrumentation version, but I don't think we got that. It looks to me like instrumentation scope is basically just a renaming of instrumentation library, with a slight relaxing of what name and version apply to. So.
D
C
"A logical unit of the application code with which the emitted telemetry can be associated. It is typically the developer's choice to decide what denotes a reasonable instrumentation scope. The most common approach is to use the instrumentation library as the scope; however, other scopes are also common, e.g. a module, package, or class can be chosen as the instrumentation scope."
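The spec text above boils down to a scope being a name plus an optional version that telemetry is keyed by. A minimal sketch of that idea in plain Ruby (the class and method names here are illustrative, not the opentelemetry-ruby API):

```ruby
# Minimal model of an "instrumentation scope": a name plus optional version
# that emitted telemetry is associated with. One tracer per unique
# (name, version) pair, mirroring how get_tracer lookups are keyed.
InstrumentationScope = Struct.new(:name, :version, keyword_init: true)

class TracerRegistry
  def initialize
    @tracers = {}
  end

  # Returns the cached scope for a (name, version) pair, creating it once.
  def tracer(name, version: nil)
    scope = InstrumentationScope.new(name: name, version: version)
    @tracers[[scope.name, scope.version]] ||= scope
  end
end

registry = TracerRegistry.new
# Scope as an instrumentation library (the most common approach) ...
lib_scope = registry.tracer('opentelemetry-instrumentation-rack', version: '0.1.0')
# ... or as a class name, as the spec text allows.
class_scope = registry.tracer('MyApp::CheckoutController')
```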
C
C
I guess... I guess yay for compromise.
A
Yeah, I'm just thinking what we'll need to do besides aliasing instrumentation name in our SDK, for the OTLP exporter. I guess when this is part of the proto, we'll just make sure we handle that correctly, make sure we pull that in.
C
There is a blurb on that, actually, on the readable span, so I think that's the span data or whatever we have in our system, and it says.
C
A
Seems not too difficult. There's just a bit of work, probably something I can grab, but yeah, I guess I'll probably wait to see some other SIG do it and then I'll just copy what they did.
C
There was continued discussion on this thing we've been discussing a little bit: being able to add links after span creation. There was a presentation from Johannes laying out in diagrams and code exactly what he was talking about, but basically.
C
Things work out okay in a messaging queue with a push model; he was showing it was with a pull model where they were running into trouble. So it's kind of like you have one span that is in a receive loop, I think, pulling things off of a message queue, and that span has already been started.
C
It's already started, and you want to add a link for each message that you pull off the queue. So that was the use case, and.
C
The whole reason why links need to be added at span creation time, at least as things are currently specced, is for the sampler, so the sampler can make the right decision and you don't accidentally lose that whole subtree. If you know that the parent link has been sampled, then you can sample the subtree and you don't end up with broken traces. But it seems like the modeling doesn't always play out this way. Here we go.
B
C
Sorry, I think it's because each "create" results in a possibly new "deliver", whereas in this one each "create" is part of this "receive", which is already nested under some parent span.
C
C
Then you could still have your upfront span links, but I do believe that this is all based on what the messaging SIG is proposing for how to model some of these situations. So I think there's still further debate to be had.
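The tension described above can be sketched in a toy model: links are normally fixed at span creation (so a sampler can see them), while the pull-style consumer wants to attach one link per message to an already-started receive span. Everything here is illustrative, not the opentelemetry-ruby API:

```ruby
# Toy span with creation-time links plus the proposed relaxation of
# appending links afterward. Names are illustrative only.
Link = Struct.new(:trace_id, :span_id)

class Span
  attr_reader :name, :links

  def initialize(name, links: [])
    @name = name
    @links = links.dup # creation-time links, visible to a sampler
  end

  # The relaxation under discussion: add a link after the span has started.
  def add_link(link)
    @links << link
  end
end

# A long-lived "receive loop" span, started before any message arrives.
receive_span = Span.new('queue.receive')
%w[msg-1 msg-2 msg-3].each_with_index do |_msg, i|
  # One link per message pulled off the queue.
  receive_span.add_link(Link.new('trace-%02d' % i, 'span-%02d' % i))
end
```

The sampler-friendly alternative (links passed at creation) would be `Span.new('queue.receive', links: [...])`, which is exactly what the pull model cannot do, since the messages are not known yet.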
C
A
I think I wrote the original Jaeger exporter, and I just remember doing, I think, the latest one, which was like v2, if I vaguely recall.
A
A
So I'm not sure. Actually, I think we just do.
A
A
B
C
All right, so it adds an OTEL_EXPORTER_JAEGER_PROTOCOL and improves the description of the other environment variables. I was just kind of looking for the diff, and there's a lot of stuff here, but apparently, allegedly, yeah, I think it has just reorganized stuff. So the Jaeger endpoint didn't really change, but something about the description did.
A
A
A
For at least the... oh, okay, and then there's these other ones, for sure. Okay, so seems reasonable, I guess, before using it.
C
But yeah, with my reading of this... third, second, wait, okay, hold on. It would be Jaeger UDP.
C
A
HTTP. Our Thrift binary one is HTTP, which is the one that sends to the collector, and then UDP Thrift compact is the one that sends to, I guess, a Jaeger agent or something. I'm not quite sure.
E
Yeah, the agent does send UDP. I know that for sure, because we were using the agent locally for dev environments at Shopify, and it definitely is using Thrift over UDP. But I never used the collector exporter, so I'm not sure. I believe it's still Thrift, but I think it might be.
E
It's not UDP. I don't know if it's Thrift binary or not.
A
Yeah, according to a code comment, not actually cracking the code, it's HTTP Thrift binary, okay.
C
You
could
set
the
hotel
trace's
exporter
to
jager,
and
then
you
could
switch
based
on
you
can
set
the
protocol
with
the
second
environment
variable
and
switch
between
exporters.
That
way,
theoretically,.
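A sketch of that selection logic, i.e. picking a Jaeger exporter flavor from the two environment variables just discussed. The protocol values and the default below follow my reading of the spec change being reviewed, and the returned symbols are placeholders, not real exporter classes:

```ruby
# Pick a Jaeger exporter flavor from the environment, per the discussion:
# OTEL_TRACES_EXPORTER chooses jaeger at all, and the (assumed)
# OTEL_EXPORTER_JAEGER_PROTOCOL variable chooses the wire protocol.
def jaeger_exporter_for(env)
  return nil unless env['OTEL_TRACES_EXPORTER'] == 'jaeger'

  case env.fetch('OTEL_EXPORTER_JAEGER_PROTOCOL', 'udp/thrift.compact')
  when 'udp/thrift.compact' then :agent_udp_exporter      # sends to a jaeger agent
  when 'http/thrift.binary' then :collector_http_exporter # sends to the collector
  else raise ArgumentError, 'unknown jaeger protocol'
  end
end
```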
E
B
C
We discussed this a little bit last week: the wildcard. And the wildcard itself was a bit of a wild card, at least until this update to the spec, which defines exactly what the wildcard means, and it's a "may". So the name of the instrument can support wildcard characters, constrained to a question mark, which would match exactly one character, or an asterisk, which would match zero or more characters.
C
So I think the question that spawned this was: could you use a full regex or other things? It's looking like no; it's a very limited set of wildcard characters.
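Those two rules translate directly into a regular expression, which is how a Ruby SDK might implement the matching. A minimal sketch (the helper name is mine, not from any gem):

```ruby
# Convert a spec-style wildcard pattern to a Regexp:
#   '?' matches exactly one character, '*' matches zero or more,
#   everything else matches literally (no full regex support).
def wildcard_to_regexp(pattern)
  escaped = pattern.chars.map do |c|
    case c
    when '?' then '.'
    when '*' then '.*'
    else Regexp.escape(c)
    end
  end.join
  /\A#{escaped}\z/
end
```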
C
I think one of the use cases here is that you may register an async instrument that makes an expensive call that returns multiple values. I think there are some of these system-info or CPU-usage calls where it actually returns three or four values.
C
You know, system, user, real for a CPU or something. If you had an instrument for each one of those, you would like to have a callback that can record a value for each one, rather than having to call it three times to extract all those values.
C
So Go, JavaScript, I think, and Java have been prototyping the multi-callback registration. I think people do want to move forward with this. This probably does have some implications for the work that we're doing, but.
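The multi-instrument callback idea above can be sketched as one callback that performs the expensive read once and fans the values out to several observable instruments. Class and method names here are illustrative, not the opentelemetry-ruby API:

```ruby
# One callback serving several observable instruments at once, so the
# expensive system call runs once per collection rather than once per
# instrument. Illustrative names only.
class ObservableRegistry
  def initialize
    @callbacks = []
  end

  # Register a single callback for a list of instruments.
  def register_callback(instruments, &block)
    @callbacks << [instruments, block]
  end

  def collect
    measurements = {}
    @callbacks.each do |instruments, block|
      values = block.call # one call returning several values
      instruments.zip(values) { |inst, v| measurements[inst] = v }
    end
    measurements
  end
end

registry = ObservableRegistry.new
registry.register_callback(%i[cpu_system cpu_user cpu_real]) do
  [1.5, 3.2, 4.7] # stand-in for a single expensive cpu-times read
end
```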
C
Yes, just briefly mentioned: the proposal to move the metrics SDK spec back to mixed.
C
If you actually use this "must not"... so if you use this "must not" term, the question was: would you ever be able to change this in the future without it being a breaking change?
C
C
There's some deeply complicated stuff going on in this issue slash PR, and.
C
On the way that you export counters: I think there are a couple of paradigms out there. There are backends that just like to receive counter data as cumulatives. You start a counter, you keep adding to it, and then you just always send this cumulative value.
C
The backend does subtraction, based on some time intervals, to plot things out properly. And then there are other backends that don't want to do this math and always expect that you're sending the delta from the last time. And it turns out, when you implement this as a metrics SDK.
C
The cumulative situation ends up being way easier, because you don't have to maintain any additional state in the metrics SDK to be able to send a delta, whereas for deltas you need to keep some extra stuff in memory. But with all the instruments, I guess like the up-down counter: typically you want to set your export to just send cumulatives or just send deltas, but this up-down counter, from what I was hearing, makes things difficult, and there are some corner cases, and sometimes you really just want it to be a.
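The two paradigms, and the extra state deltas require, can be sketched with a toy counter (a simplification of my own, not SDK code):

```ruby
# A counter that can be exported under either temporality. Cumulative
# export just reports the running total; delta export needs extra state
# (the previously reported value) so it can subtract.
class Counter
  def initialize
    @total = 0
    @last_reported = 0 # extra state needed only for delta temporality
  end

  def add(n)
    @total += n
  end

  def export(temporality)
    case temporality
    when :cumulative
      @total
    when :delta
      delta = @total - @last_reported
      @last_reported = @total
      delta
    end
  end
end

c = Counter.new
c.add(5)
c.add(3)
```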
C
C
So I don't know. I think there were some discussions, there were some proposals, and I think they were going to draft up some things that describe these edge cases and ways to be able to accommodate this. Mainly this configuration where you can have kind of like a.
C
C
This is about environment-variable resource detection in Kubernetes. I'm not sure what the setup is in Ruby, but I feel like there is the Google Cloud resource detector, at least in some languages, that will pick up some Kubernetes environment variables, and there wasn't a first-class Kubernetes resource detection. So this guy David at least went through and surveyed various things, and, oh, here we go, we are mentioned.
C
This is just a proposal at this point in time, but I think that we would end up with maybe a first-class Kubernetes resource detector, pull out the piecemeal ones from other detectors, and just have a uniform way to extract this.
C
So
you
can
have
like
timeouts
in
the
socket.
You
can
have
timeouts
on
the
request
and
I
think
there's
some
desire
to
kind
of
clarify
like
what
what
this
time
watch
actually
apply
to.
D
B
C
What
what's
top
of
mind
for
everyone
else,
anything
we
should
discuss.
E
Did anyone want to continue on the metrics train from last week? I'm offering that up if anybody's interested. I don't have anything else; I've been trying to stay blinders-on with that. So.
C
A
Before you get started: I don't want to talk about it here, because it's probably better to do it async since Ariel's the contributor, but the Railtie discussion is probably worth getting opinions on from other people. Rob, especially you, actually, because it may affect vendors who are using instrumentation. And then the metrics reporter looked good to me.
A
E
That's a fun screen. Okay, so I'll try to power through some of this that we did last week without getting hung up again. Essentially, we have all these proxies so that people can initiate, or instantiate, their meters and instruments and all their other fun bits prior to the SDK being configured, in a similarly safe fashion to what we do with tracing.
E
Last week we ended up talking quite a bit about... I believe we talked about the raising behavior a bit, and how it feels like it deviates a little bit from tracing. I've been meeting weekly with Francis on this stuff, just going through questions I have and trying to piece together the implementation, and we discussed this a little bit, and.
E
There wasn't really anything insightful that I could share beyond that. Like, yeah, it is a bit of a deviation, but the alternative, of people dynamically generating instruments, also seems like a kind of risky behavior in terms of performance.
E
It
would
have
a
pretty
significant
performance
impact
like
it's
just
it
would
be
slow
and
especially
if
you
have
like
a
calendar
wrapping
like
a
very
hot
endpoint
like
that
would
be
very,
very
bad,
so
maybe
the
trade-off
is
worth
it
there.
I'm
still
not
convinced
on
that
matter,
like
we're
a
little
bit
kind
of
not
at
odds,
but
I
still
don't
like
that.
E
But
it's
not
a
hill
that
I'm
willing
to
die
on
at
this
time,
we'll
see
what
it
actually
looks
like
when
this
actually
is
further
along.
I
think
that
was
like
just
something
I
wanted
to
touch
on
a
little
bit
just
to
kind
of
share.
Like
another
perspective,
like
basically,
it's
like
there
is
real
performance
implications.
E
There
are
differing
opinions
there
and
I
think
there
is
like
a
valid
case
for
either
side,
I'm
more
of
the
cautious
fearful
type.
So
I
like
deploying
things
safely
and
not
having
to
worry
beyond
that
it
there's
some
tests
feel
free
to
look
at
them
and
see
if
they
make
sense.
I
did
my
best
to
make
them
as
close
to
what
was
being
described
in
the
spec
so
that
I
didn't
have
to
overthink
it.
E
E
So
I
can
see
your
wonderful
faces,
we're
good
cool,
so
I'm
gonna
say
this
often,
but
this
part
not
very
interesting
again,
it's
more
of
how
we
see
this
continual,
like
kind
of
the
evolution
of
this
like
we
know
that
the
metrics
implementation
is
far
and
away
from
being
considered
stable,
then,
as
a
result,
it
should
be
distinct
from
open,
telemetry
api
and
open
telemetry
sdk.
E
So
if
I
am
a
consumer
of
tracing-
and
I
am
brave-
and
I
want
to
try
the
experimental,
matrix,
metrics,
api
and
sdk-
I
should
be
able
to
bring
it
in
and
it
should
kind
of
all
fit
together
in
a
way
that
I
would
expect
it
to
so.
What
I've
done
here
is
essentially
just
patched
the
configuration
file,
so
when
you
require
the
metrics
sdk
it'll
bring
in
the
metrics
api,
the
metrics
api
depends
on
the
open,
telemetry
api
and
the
metrics
sdk
depends
on
the
open,
telemetry
sdk.
E
Essentially,
I've
just
kind
of
shimmed
in
a
hook
so
part
of
the
main
configuration
hook.
It's
going
to
call
this
metrics
configuration
hook.
We
patch
it
to
do
the
upgrade
path
of
your
proxy
meter
provider
is
now
a
real
meter
provider.
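The upgrade path can be sketched as a proxy that hands out placeholders until a real provider is installed, then delegates from that point on. Class names and the `upgrade!` method are illustrative stand-ins, not the actual opentelemetry-ruby internals:

```ruby
# Before the SDK is configured, callers get a proxy meter provider.
# Configuring the SDK installs a delegate; the proxy forwards after that.
class ProxyMeterProvider
  def initialize
    @delegate = nil
  end

  # Called by the configuration hook once the real SDK provider exists.
  def upgrade!(real_provider)
    @delegate = real_provider
  end

  def meter(name)
    @delegate ? @delegate.meter(name) : :proxy_meter
  end
end

class SdkMeterProvider
  def meter(name)
    "sdk_meter(#{name})"
  end
end

provider = ProxyMeterProvider.new
before = provider.meter('my_lib')    # reference obtained before configuration
provider.upgrade!(SdkMeterProvider.new)
after = provider.meter('my_lib')     # same proxy object, now delegating
```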
E
Interesting, but I'm starting to think I might retroactively do this for the OpenTelemetry SDK: just add a reset helper, because a lot of this stuff we just do ad hoc in a lot of different places. So I wanted to start a little bit more organized than that here; so between calls to configure, we do these resets.
E
Essentially, this is just an overly verbose sanity check on the proxy provider: making sure that when you upgrade it, it actually upgrades all your meters and all your instruments, and they actually become real meters and real instruments. Even going as far as documenting that if you grab a reference to your meter provider and you run the configuration hook.
E
Your
meter
provider
is
still
the
proxy
provider,
but
it
does
have
the
instance
of
delegates,
so
you
can
still
call
those
methods,
but
you
are
still
dealing
with
a
single
layer
of
interaction,
and
the
same
is
true
for
your
meter
provider-
an
instrument,
whereas,
if
you
call
after
the
sdk
is
configured,
if
you
call
for
a
meter
provider
or
its
meter,
you're
not
getting
these
these
proxy
things
anymore,
you're
you're,
actually
getting
just
the
sdk
implementation
and
again
this
is
just
this
test
is
very
much
a
sanity
check
like
does
this
whole
upgrade
path,
actually
work,
and
how
does
it
behave?
E
Is
there
differences
because,
like
I
ran
through
this,
I
was
like
wait.
Is
this
right
and
I
believe
it
is
it's
how
the
tracing
stuff
is
working.
It
all
seems
to
work.
E
Again,
this
is
a
replicated
test
from
the
open,
telemetry
sdk.
If
you
do
something
that
the
configuration
doesn't
expect,
do
you
actually
get
an
error
with
the
error
message,
the
original
kind
of
understanding
of
what
went
wrong
so
just
making
sure
that
that
is
all
compatible
and
there's
parity
between
metrics
sdk
and
the
open,
telemetry
sdk.
It
still
works
the
way
you
expect
it
to
so
here,
there's
just
the
concrete
implementations.
E
There's
there's
still
very
much
a
work
of
pros
progress.
The
abstraction,
as
francis
pointed
out
here,
is
kind
of
awkward
so
to
highlight
that
and
at
any
point
again,
if
anybody
wants
to
jump
in,
don't
hesitate
to
interrupt
me,
I
can't
see
anyone.
So
I'm
just
looking
at
my
github
and
talking
this
is
the
the
api
implementations.
You
have
all
these
calls
for
create
instrument,
and
then
it
goes
through
the
create
instrument
which
does
that
raising
and
validation
behavior.
E
But
then,
when
you
switch
to
your
concrete
implementation,
so
I'm
missing
all
the
other
instruments.
It
feels
a
little
bit
awkward
because
we're
going
to
be
duplicating
all
of
these
methods
that
don't
do
anything,
it
seems
like
there's,
probably
something
a
better
structure
here
there,
francis
left
that
comment
and
I
ignored
it
because
I
know
he's
right,
but
I
just
it's
not
what
I'm
looking
at
right
now.
E
So
if
anybody
has
any
thoughts
or
strong
opinions
on
abstractions
feel
free
to
jump
in
and
see,
if
you
have
a
pattern,
you'd
like
to
see-
or
you
think
would
be
really
well
suited
to
cleaning
this
up
again
like
not
where
I'm
spending
my
attention
in
this
very
moment
or
my
attention
right
now.
What
I'm
trying
to
do,
or
what
I'm
hoping
to
accomplish,
is
to
just
plumb
through
a
very,
very
simple
path.
E
So I'm going to move away from the code, because I think the discussion will be more interesting if we move to looking at the actual specification.
B
E
It has some defined defaults already. So basically, it's a metric reader that's gonna call collect on a loop, at some interval, and then the console exporter is going to do exactly what it sounds like it's going to do: it's just going to spit out the metrics.
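That pairing, a reader that calls collect on a loop and hands each batch to a console exporter, can be sketched as follows. A real periodic reader runs on a background thread with a configurable interval; this version takes the number of ticks explicitly so it stays deterministic. All names are illustrative:

```ruby
# Console exporter: "spit out the metrics" it is handed, keeping a copy
# so the behavior can be observed.
class ConsoleExporter
  attr_reader :batches

  def initialize
    @batches = []
  end

  def export(metrics)
    @batches << metrics
    puts metrics.inspect
  end
end

# Periodic reader: call collect on a loop and hand the batch to the exporter.
class PeriodicReader
  def initialize(source, exporter)
    @source = source     # anything responding to #collect
    @exporter = exporter
  end

  def run(ticks)
    ticks.times { @exporter.export(@source.collect) }
  end
end

source = Object.new
def source.collect
  [{ name: 'requests', value: 1 }]
end

exporter = ConsoleExporter.new
PeriodicReader.new(source, exporter).run(3) # three collection intervals
```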
E
The reason why this is a nice starting point, from my perspective, is that it really is going to force me to plumb through some of the key parts while leaving this path pretty simple to reason about as a group as we review it. Once this path is done, I think that is where I'd like the initial stopping point for this PR to be, because of what it's going to require us to do.
E
We're going to have to create the structure of in-memory state. "In-memory state" is a very nebulous term; in my mind, I was really like, what is this, while I was reading through it. Essentially, each metric reader is gonna have to have some place where every call to increment sends its value, because the periodic exporter will only be running on, say, a 60-second interval.
E
So this in-memory state: if it's a delta, each call from the periodic exporter is going to basically pull everything out, essentially like a flush; it'll leave nothing remaining. Whereas with cumulative, you're going to have to retain all the values. So if you have, say, a value that gets incremented once every hour, in cumulative that value is going to stay forever. What if it happens only once a day? How do we want to structure that? Because eventually we're going to need to flush it out.
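The flush-versus-retain behavior just described can be sketched with a minimal in-memory store (my own simplification, not the SDK's data structure):

```ruby
# Every increment lands in a shared store; a delta collect drains it
# (a flush, nothing remains), while a cumulative collect reports the
# running totals and retains them.
class MetricState
  def initialize(temporality)
    @temporality = temporality
    @values = Hash.new(0)
  end

  def increment(key, by = 1)
    @values[key] += by
  end

  def collect
    snapshot = @values.dup
    @values.clear if @temporality == :delta # flush: leave nothing remaining
    snapshot
  end
end

delta = MetricState.new(:delta)
delta.increment('hits', 2)
cumulative = MetricState.new(:cumulative)
cumulative.increment('hits', 2)
```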
E
A
Yeah, I mean, my initial feedback: one, this is great, thanks; really good, I thought it was a good, clear overview. I think the goal of a stopping point makes sense to me: just try to get one end-to-end workflow, API and SDK. I haven't even reviewed all the export types, so I'm fine with whichever one is most interesting to you.
A
I don't have strong opinions on where you think is a good balance between interesting and, man, whatever can get out the door. I think that's the right stopping point. I guess, with the periodic stuff: I know with tracing we can configure the buffer's max settings, sort of like the, well, you handle that. So I wonder, is there something different?
E
There's
there's
a
lot
of
open-endedness
so
like
the
in-memory
state,
doesn't
have
like
any
hard
definitions
around
like
how
big
it
gets
like
this
thing
can
just
grow.
There
are
like
a
bunch
of
addendums
everywhere.
E
There's
like
a
whole
section
here,
I'll
just
drop
this
in
the
chat,
because
this
is,
I
think
this
is
actually
really
interesting
to
read
about,
because
it's
kind
of,
I
think,
in
my
mind,
is
like
the
interesting
somewhat
tricky
somewhat
straightforward
part
is
like:
how
do
you
accrue
data
between
calls
to
collect,
because,
like
if
you're
again,
I
was
saying
like
talking
about
periodic
exporting
metric
reader
is
feels
familiar
to
the
batch
fan
processor
because
we've
all
kind
of
been
around
this
that
area
of
code
for
such
a
long
time.
E
Now
it's
like
it's.
I
think
it's
a
little
bit
more
easy
to
reason
about,
but
the
less
intuitive
one
would
be
like
a
prometheus
exporter
which
can
be
modeled
as
a
metric
reader.
If
you
so
choose,
it
doesn't
occur
on
well,
it
does
occur
on
an
interval
defined
by
your
scraper,
but
essentially,
instead
of
it
like
the
sdk,
pushing
the
data
out,
it's
waiting
for
something
to
request
data
from
it
right.
E
E
So
for
me
it's
like
trying
to
find
similarities
in
familiarity
with
what
we're
doing
here,
because
I
think
that'll
enable
us
as
a
group
to
get
better
reviews
out
and
then
that
way
we
can
focus
on
the
trickier
bits
of
like
again
the
in-memory
state
is
going
to
be.
I
think,
the
very
interesting
data
structure
here
and
how
we
manage
the
temporality
and
how
we
do
this
in
a
way
that,
like
doesn't
bloat
the
already
bloated
ruby
applications
of
the
world
right,
yeah.
E
The
other
part
of
temporality
is
there
are
defaults,
there's
three
ways
you
can
set
temporality.
So
there's
the
default.
You
can
override
the
defaults
by
setting
it
explicitly
on
your
metric
reader,
and
then
your
exporter
can
also
have
a
stated
preferred
temporality.
So
it's
like
you
can
override
it.
If
there's,
if
it's
not
available
anywhere,
you
fall
back
to
a
default,
but
your
metric
reader
should
say:
hey
exporter.
What
would
you
like,
and
the
the
exporter
should
have
a
preferred
temporality
to
be
used
for
retrieving
your
your
in-memory
state?
E
So
if
you
have
your
delta
temporality,
basically
it
acts
like
a
flush
clean
slate
every
time
if
it
is
the
cumulative.
That
means
you
have
that,
like
that
counter,
that
just
goes
up
infinitely
right
and
then
you
have
probably
the
idea
of
needing
to.
I
don't
know
if
it
talks
about
here,
but
you
need
to
clean
up
every
once
in
a
while
for
for
long-lived
metrics
right.
E
What
else
the
there
is
like
I've
glossed
over
a
lot,
even
in
this
overview,
views
and
aggregations
are
very
interesting.
A
view
can
define
essentially
how
your
counter
like.
So
if
you
have
a
counter
just
a
very
simple
counter-
or
I
think
a
histogram
is
probably
a
better
example,
so
you
have
a
histogram
histogram.
E
You
could
take
that
histogram
and
add
the
corresponding
views
so
that
it
emits
three
different
metrics.
It
could
emit
just
a
simple
counter
metric.
It
can
emit,
like
obviously
the
value
from
the
histogram,
and
then
you
can
also
specify
it
to
emit
like
an
error
counter.
So
your
one
instrument
actually
turns
into
three
different
metrics
that
get
stored
in
your
memory
state,
and
then
you
can
tell
it
to
the
view
what
type
of
aggregation
you
want
to
use
as
well.
So
there's
this
whole
part
about
how
your
aggregate
default
aggregation
is
sum.
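The one-instrument-to-several-metrics idea can be sketched like this. The pipeline class is illustrative, and the aggregations are toy lambdas standing in for the spec's named aggregations:

```ruby
# Registered views map one instrument's measurements into several output
# metric streams, each with its own aggregation.
class ViewPipeline
  def initialize
    @views = [] # pairs of [output_name, aggregation_proc]
  end

  def add_view(name, aggregation)
    @views << [name, aggregation]
  end

  # Run every view's aggregation over the same batch of measurements.
  def record(measurements)
    @views.to_h { |name, agg| [name, agg.call(measurements)] }
  end
end

pipeline = ViewPipeline.new
pipeline.add_view('latency_count', ->(ms) { ms.size }) # a simple counter stream
pipeline.add_view('latency_sum',   ->(ms) { ms.sum })  # sum aggregation
pipeline.add_view('latency_max',   ->(ms) { ms.max })  # another derived stream
streams = pipeline.record([120, 45, 300]) # one histogram-style instrument's batch
```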
E
Some
aggregation
is,
you
can
see,
as
defined
here
there's
monotonic
non-monotonic
for
like
up
down
there's
a
drop
aggregation
which
is
kind
of
like
your
no
op
collector
type
deal
where
it
just
like
drops
it
into
the
ether
it.
This
is
like
compared
to
the
tracing
spec.
I
would
say
this
is
very,
very
heavy
and
there's
a
lot
of
details
to.
A
I'm not going to pretend to understand everything you said, but I understand the gist, which is that there's a huge universe, and I think the approach you're taking, which is to just hyper-focus on one end-to-end path and get that in, means you can expand iteratively. And if we're stuck on some crazy asynchronous thing, that's not gonna block a larger chunk.
E
Yeah, yeah, so that's it. If I can get this end to end at a point... I imagine there's going to be a lot of discussion to be had, a lot of review, probably quite a bit of code churn on the PR. But my hope is that once that PR gets merged, there can be individual PRs for the remaining instruments, and then the observable instruments, because those will have different implementation implications.
E
Different temporality of the in-memory state, different exporters, different metric readers. For us, I think we're interested internally in experimenting with a Prometheus exporter, so that would be something I'd like to see early on, once we are approaching some semblance of stability.
E
Personally,
I
want
to
experiment
with
it
a
little
bit
on
like
a
little
like
local
environment,
so
it'd
be
nice
to
get
something
like
that
going,
but
there
is
like
still
like
there's
one
of
the
histograms,
the
exponential
histogram
thing
or
whatever
you're
gonna
have
to
find
someone
with
their
doctorates
and
statistics.
To
explain
that
to
me
we
might
have
someone
like
that
at
shopify,
so
hopefully
I
can
find
them.
A
D
E
So
I
have
a
bunch
of
uncommitted
code
right
now
sitting
on
my
computer.
I'm
embarrassed
to
push
it
up
because
it's
just
like
a
lot
of
like
I'm
gonna
go
in
this
direction
until
I'm
confused,
I'm
gonna
go
in
this
direction
until
I'm
confused
and
so
there's
like
just
yesterday.
I
think
I
reached
the
point
where
and
it's
not.
I
know
it's
not
correct.
E
I
have
like
a
counter
spitting
out
like
a
number
in
the
console,
but
it
doesn't
accumulate
properly,
so
it's
very
very
embarrassing,
like
not
even
whip
worthy,
but
I'm
gonna
try
to
get
that
more
structured
in
a
way.
I
the
part,
I'm
personally
struggling
a
lot
with
right
now,
is
setting
up
the
views
even
just
a
default
view,
making
sure
that
that's
set
up
properly
and
the
state
getting
the
state
right
is
very
awkward
right.
E
Now,
I'm
literally
just
playing
with
a
hash
that
is
keyed
off
of
the
name
of
the
instrument
and
the
name
of
the
meter.
I
believe
I
don't
think
that's
correct.
I
think
there
needs
to
be
some
something
else.
That's
like
used
there
and
then
now
and
then
like
how
to
apply
your
aggregations
to
that.
E
A
Not to be the only person giving feedback (other people are certainly welcome to jump in; I'm just trying to be supportive or whatever), but one thing I've seen help in the past, when you have a chunky PR that you want to merge but you also have sort of experimental work against that PR, is to push up a third branch that makes PRs against your feature branch.
A
A
A
E
No,
that's
a
good
suggestion.
I'll
probably
look
into
doing
that
for
like
different
slices
of
this,
so
that
the
discussion
can
stay
more
focused
on
the
prs
cool
yeah.
That's
I
think
that's
all
I
have
for
like
today.
I
still
need
to
like
push
the
boundaries
of
my
own
understanding
like
a
lot
of
the
stuff
I
explained
is
very
much
in
a
hand,
waving
thin
veneer
and
if
you
were
to
ask
like
a
serious
question,
I
probably
would
get
stumped
so
but
yeah
I'll
stop
it
here.