From YouTube: 2022-03-15 meeting
E
It's 6 p.m.

D
Got it? Oh, that's terrible. I mean, not terrible, but just keeping that in your head. I guess I'm keeping it in my head too, now.
E
It's one hour earlier.
B
Yeah, really early in my career I was working in finance. I once lost a lot of money because the time zones changed in the U.S. If you trade gold derivatives, they're based on the London Metal Exchange's spot price and the closing price, which closed at a different time, and I forgot to close out my positions and proceeded to lose other people's money. A lot of other people's money.
F
Those are cool lights. Maybe we should get a demo of how to actually put something like this together, one of these days.
F
For now, maybe I will try to cruise through the spec SIG.
F
So one of the biggest things that was interesting: there is an OTEP to support the Elastic Common Schema.
F
Really, this is kind of being advocated for by Elastic, and I think this came in through logging, but I believe it will have an impact on semantic conventions in general. They would like to donate the Elastic Common Schema, which is a framework very similar to the OTel semantic conventions, covering all the same areas that OTel does.
F
They would like to donate the common schema; actually, they would like for the Elastic Common Schema to become the OTel semantic conventions. They would like them to be one and the same, and they are committed: where the Elastic Common Schema differs from OTel, they would be willing to adopt the OTel convention. So it seems like they're quite committed to trying to make the Elastic Common Schema and the OTel semantic conventions
F
one and the same. So I think that's interesting and exciting. While we spend a lot of time defining conventions for a lot of things in OTel, I feel like there are still a lot of holes, and whenever we run into these while we're working on stuff, it's hard to figure out exactly how we should handle things. So maybe this could be one of those things that helps out when we start heading into new territory.
F
Feel free to look over this, and if you are happy with it, or if you have any comments, just go ahead and add them.
F
I think this kind of spun out of the logging SIG, and that's the context it's written from, but I believe the goal, and I don't know if it's explicitly stated, is for this to apply to tracing and metrics in some way. Maybe that needs to be figured out a little bit more.
F
Next up there was a PR: it adds some environment variables and configuration for client key and client cert files.
F
So this would probably be relevant for our OTLP exporter, I'm guessing.
F
There's a bit of a bikeshed on... there's a PR suggesting that 400 errors on the client should not be marked as errors, and I think this is actually quite controversial. The spec was saying that 400s from the server side you can ignore; from the client side...
F
I think a lot of people are of the mind that you want to know about 400 errors from the client perspective, but I think this discussion had to happen in the SIG initially. Sadly, I don't know, we keep going down this path of just adding a...
F
This is the project with the burn-down list, I guess, for all this kind of spec work.
F
They would like to move the SDKs back to mixed, so that means that some areas are stable and some are still going to be kind of experimental or in progress.
F
I think we talked about this a little bit last week, but it was about having this idea of being able to configure your temporality and aggregation on the basis of instrument kind.
F
There were two competing PRs, both by jmcd. I think people were more interested in this one, so he fleshes it out a little bit more. It has a lot of approvals.
F
Feel free to take a look at this if you have opinions.
F
This did turn into a huge bikeshed, and that's kind of where the meeting ended. I think the thing that ended up turning into a bikeshed: there was just one comment in here about...
F
I guess in the metrics pipeline, or rather in the evolution of the metrics spec, the idea that a user would ever create their own reader has kind of gone away, and I think there were some discussions about the design in general. I think jmcd was bringing up the point that he never expects a customer would ever create their own implementation of a metric reader. But do people need this functionality, and if so, what is the proper abstraction to put there?
F
I guess right now the metric reader is coupled with the SDK in ways that make it something that would not be user-extensible. So I don't know if any of us are far enough along in our metrics journey to really have a lot of feedback or input on that discussion.
F
So I think this last bullet point was talked about a little bit at the maintainer meeting.
F
There was at least the idea... so yeah, I'll tell you what I know about this from the maintainers meeting, and that is that somebody brought up the notion of trying to have sprints, or something like them, for at least the specification.
F
At
least
so
people
know
what
the
what
people
in
this
back
group
are
focused
on
and
kind
of
what
the
current
topics
of
interest
are,
and
just
also
to
kind
of,
like
figure
out
like
how
swamped
this,
the
spec
sig
folks,
are
on
things
at
any
point
in
time,
so
just
kind
of
having
a
a
sprint
a
project
board
and
trying
to
commit
to
taking
some
things
on
during
a
certain
time
period.
F
So I think that's what we should expect. At least, I'm going to call it here. Any questions, comments, or concerns about things?
F
Yeah, no problem. What is on everybody's mind for the rest of the meeting?
F
Are there any issues, PRs, or walkthroughs of metrics work that we should move on to?
A
Can you click on reactions, raise hand? Oh.
B
Cool, I'm raising my hand. A couple of small pings for review: Ariel has a PR up for the metric reporter and the, oh gosh, sorry, the Rails railtie.
A
And I haven't done any work on the feedback yet; it's been a couple of weeks, so I don't have any... I have to actually work on the feedback. Just joking.
B
No worries, you don't need to get another round in. The point of vacation is not to work on work; it is to relax. And I have a PR up, which, Sam, thank you for reviewing, for the HTTP client: a config option which one of our users was asking for. It's pretty uncontroversial, but I could use, I think, another set of eyes or another approach. So, not time-sensitive, but those are my few things to mention.
F
Yeah, that's fine! Let us know when it's ready for more feedback, I guess. Yep, and then the metrics reporter?
B
No, I don't think there's active work. I think we left it as: Rob or Amir, if you have opinions... I think we're looking for a vendor perspective on this, because the end user might ultimately be vendors and not necessarily, you know, the average end user. So I think we're looking for feedback from that angle. But overall, I think it's a good thing, and pretty uncontroversial. Good stuff.
F
Okay, great. So this is a call for the rest of us to take a look and give it a thumbs up if we're happy. Yeah, I do know that the metrics reporter was kind of a stealth feature that we added to Ruby because it was useful, but I think taking it from stealth to being useful to other people is fine with me.
F
If,
if
ever
something
comes
down
from
like
the
expec
sig
in
this
area,
I
think
yeah,
I
think,
there's
two
things
that
we
can
do
one.
We
can
mention
that.
Oh
we've
had
this
stuff
feature,
that's
working
really
well
for
people,
and
maybe
we
should
go
this
direction
and
be
if,
if
we
do
have
to
kind
of
change
something
we
will,
we
will
work
with
that
at
the
time.
G
Yeah, I think the spec itself actually says that any language implementation can add some sort of observability for your observability; it's kind of expected. And it's left, I don't know how intentional this is, but it's very open-ended, because even the metrics stuff says: yeah, feel free to add some way to let your users know that your metrics aren't working right. So it seems to be kind of hand-wavingly supported.
A
All right, great. If we're getting really close to doing the metrics SDK implementation, would it be sensible to use that to track the client trace SDK performance, or are we still too...
G
I think eventually it makes sense. In my mind, we'd still want to keep that kind of generic interface, whether or not the internals change a little bit. I think it makes sense that someone could supply any metrics reporter for your tracing system, but on the flip side, for your metrics, maybe you want to trace them.
G
They kind of look at each other, right? In my mind it makes sense that they look at each other, because when it comes down to it, you want to be able to see it, right? But you're not going to emit metrics from your broken metrics exporter, and nobody wants to look at logs.
F
Is
anybody
having
opinions
as
to
whether
this
is
is
the
right
approach,
or
whether
there
is
an
avenue
with
the
metrics
sdk
as
it
becomes
available?
A
But it sounded to me like Robert was saying that it's open-ended, and it's really up to the language implementers to do something.
F
Yeah, I don't think there's anything in the spec, so I guess, as our metrics SDK matures, if something really makes sense in this area, we can address it at that point in time or have that discussion.
F
And then this is the HTTP client you were mentioning, Eric.
B
Yeah, yeah: it's pretty... I'm sorry, it's just a config option to drop or hide the query params.
B
If
they're,
sensitive
and
yeah
I
we've
gone
back
and
forth
on
a
while
it'd
go
down
a
rabbit
hole.
We
wanted
to
abstract
this
into
a
separate
gem,
then
that
gem
became
a
larger
than
expected
work
and
yada
yada,
and
then
the
original
person
who
made
the
pr
was
just
like
hey.
B
I
could
use
this
feature
like
and
he's
been
a
really
good
citizen
of
the
open,
telemetry
ruby
community.
So
far,
so
I'm
just
trying
to
push
this
forward.
I
did
notice
sam
left
some
additional
notes,
feedback
which
I
need
to
address,
which
seemed
reasonable,
but
yeah
broadly
could
use
feedback
from
you
know.
I
don't
think
it's
a
super
controversial
thing,
but
we
just
need
the.
I
just
want
to
get
out
the
door
so
that
chris
doesn't
hate
us.
B
He just went through all the HTTP clients and added this config option, so if you want to drop your query params because they're sensitive, you can. I think it actually defaults to dropping the query params. I don't have strong opinions on whether we want it off or on by default, I don't care, but that's all it does. Francis had really good feedback: we originally implemented this in a way where the performance was trash, so this implementation is, I think, about as optimized as you can get. Cool.
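The option being described could look roughly like this sketch; the method name and keyword below are illustrative, not the actual API from the PR:

```ruby
require 'uri'

# Illustrative sketch only: the real PR's option name and defaults may differ.
# Drops the query string from a URL before it is recorded as a span attribute,
# so sensitive query params never reach the backend.
def sanitized_url(url, hide_query_params: true)
  return url unless hide_query_params

  uri = URI.parse(url)
  uri.query = nil # drop the whole query string rather than redact per key
  uri.to_s
end
```

Dropping the whole query string, rather than redacting individual keys, keeps the hot path to a single parse with no per-parameter work, which matters given the performance feedback mentioned above.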
D
I think that was it for PRs. One other thing, and I'm explicitly not trying to advocate for one thing or the other, but in the agenda I linked a comment from Francis, who I think is generally right about everything. Basically, there are sometimes errors when, say, you try to truncate an attribute value and it's an integer or something; obviously you can't truncate an integer. So Francis's point, oh, and Arel...
D
Of
course
the
point
is
sort
of
like
it's
like
on
the
onus
of
the
function
caller
to
like
know
if
they're
passing
a
functioning
garbage
because,
like
we're
we're
in
ruby
and
those
are
kind
of
like
the
rules-
and
I
guess
I
wanted
to
just
make
sure
that
that
was
like
the
belief
of
the
like
just
socialize
that
decision
or
that
like
perspective
among
the
people
that
are
working
on
the
repo
and
just
like.
D
That
is
that
we
believe
that's
true
like
we
that's
like
a
like
check,
because
I
I've
been
doing
a
lot
more
ops
than
writing
code
for
the
past
few
years.
So
I
just
want
to
make
sure
I'm
on
the
up
and
up
here.
F
Yeah,
I
think
I
think
in
general,
because
because
ruby
is
a
dynamic
language,
you
could
really
go
crazy
type,
checking
pretty
much
all
parameters
to
all
methods
and
yeah.
There's
no
end
to
how
defensive
you
could
try
to
be
so,
I
think
striking
the
right
balance
and
making
sure
that
hey.
I
think
there
is
some
deal
of
trust
and
responsibility
that
you
need
to
bestow
on
your
callers,
but
I
think
there
is
some
some
some
level
of
defensiveness
that
is
often
warranted,
but.
F
Possibly
not
quite
this
much
but
yeah
I
don't
know,
does
anybody
have
a.
F
Yeah
no
problem,
it
would
be
nice
if
we
had
a
a
good
rule
around
this,
but
I
think
we'll
yeah.
I
think.
As
long
as
we're
consistent
in
reviews,
I
think
most
people
will
kind
of
get
the
idea
and
the
spirit
of
how
we're
trying
to
approach
this.
G
I
think,
like
a
healthy
balance,
I'd
like
to
see
if
this
was
just
kind
of
the
expected
pattern,
as
if
like
say
like
in
this
this
pr,
someone
introduces
some
defensive
programming,
it
can
get
called
out
and
it's
expected
to
get
called
out
like
as
it's
being
done
here.
But
then
the
onus
is
on
the
person
who
introduced
it
to
defend
it
and
say
like
no.
This
like
this
should
be
here.
Otherwise
we
would
air
to
say
that
this
is
like
an
anti-pattern.
G
So
I
think,
what's
like
organically
happening
is
probably
the
right
thing:
it's
nice
seeing
it
because
it
shows
everyone's
paying
attention
to
their
views.
I
don't
know
if
that
kind
of
answers
it,
but
that's
that's
how
I
feel,
like
I
like
seeing
this
sort
of
behavior
around
these
these
these
things
where
it's
like.
Actually
this
is
defensive
programming
and
it's
like
the
the
author's
saying:
okay
yeah,
you
know
I'll
change
it,
but
if
they
felt
really
strongly
like
they
should
be
able
to
defend
themselves
and
we'll
listen
and
react
to
it.
G
But
if
there
isn't
really
a
a
good,
not
necessarily
a
good
but
like
it
needs
to
be
like
kind
of
like
the
exception
right,
and
I
I
I
don't
think
this
is
the
case.
A
If it's nil, you can return early, or if it doesn't satisfy a constraint or whatever, for preconditions, then we can go ahead. But whenever we see type checking in the code is where we're concerned. In this particular case, because attributes can have multiple types as values and you have to handle them differently, that's why the case statement is introduced, right? We're allowing for the case statement, but the actual truncate method expects a string and should return a string, and if a string is not given to it, then it should fail.
A
It
should
raise
an
error
because
it's
like
that's
the
you
gave
me
the
wrong
thing
and
I
couldn't
call
you
know
the
index
eyes
on
it
or
whatever,
and
that
should
be.
There
should
be
some
minimal
kind
of
like
this
is
how
you
write
ruby
code
kind
of
thing,
so
so
just
to
be
explicit,
is
about
explicitly
about
defensive
type.
A
Checking
that
I
think
we
we
want
to
avoid,
because
you
know
another
example
of
this
was
recently
we
had
the
issue
with
arrow
passing
in
an
sql
literal
object,
which
is
a
derived
type
of
string,
and
we
ran
into
an
issue
where
it's
like.
Oh,
I
don't
know
some
bug
that
somebody
reported
about
the
statement
not
being
correct
or
something
of
that
nature.
I
can't
remember
what
it
was,
but
it's
kind
of
like
a
treated
as
a
treated
as
a
string
like
don't
don't
treat
it
as
not
a
string.
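The pattern being described here, a case statement at the call site with a truncate that assumes a String and fails loudly otherwise, might look something like this sketch (the names are illustrative, not the actual SDK code):

```ruby
# Illustrative sketch, not the actual SDK code: dispatch on the attribute
# value's type at the call site, and keep truncate itself free of is_a? checks.
def truncate(value, limit)
  # No defensive type check here: passing a non-String is the caller's bug,
  # and the resulting NoMethodError is the failure surfacing by design.
  value.length > limit ? value[0, limit] : value
end

def truncate_attribute(value, limit)
  case value
  when String then truncate(value, limit)
  when Array  then value.map { |v| truncate_attribute(v, limit) }
  else value # integers, floats, booleans pass through untouched
  end
end
```

Here truncate(42, 5) raises a NoMethodError, which is the "it should fail" behavior: the caller handed in the wrong thing, and no is_a? guard papers over it.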
A
I
guess
right
coerce
it
to
a
string.
If
you
have
to
in
those
cases
is
there
is
like.
Can
we
ought
to
write
like
a
ruble
cop
rule?
That's
kind
of
like
anytime,
you
see
is
a
question
mark,
something
like
make
it
a
warning
or
something
that
seems
like.
Oh,
I
don't
know
what
that
would
result
in
happiness,
pain.
G
I think that it is useful for a lot of things, but if I am given free rein to choose, I like the alternative of us having this active, attentive community. We're a small little group, I think we're a great little group, and I think that we can handle these things without the use of a cop. But again, if that proves to be wrong, I'm happy to be wrong. I don't know, I've just been really happy with, and impressed by, our team. So it's like... I...
G
We
do
need
the
cop,
but
I
just
feel
like
we
don't
need
it
because
we've
been
doing
the
right
things
in
code
review
and
I
think
that
it's
like
we're
we're
adults
and
some
of
us
are
professional,
I'm
not,
and
it's
like
we
can
use
our
best
judgment
to
say,
like
you
know
what
this
cop
would
complain
in
this
case,
so
like
I'm
not
going
to,
but
it's
the
right
thing
to
do
so,
I'm
not
going
to
complain
and
there's
going
to
be
no
cop
to
complain
like
it
just.
G
F
Yeah, and I guess, if we do want to codify this a little bit more, just as a principle that we're at least trying to advocate for, it's that we really don't want to do type checks: anything that you could technically use Sorbet for, or, you know, any future additions to the Ruby language that allow you to validate a type, would be things that we really do not want to be writing code for.
F
Cool
yeah-
I
I
think
we're
all
on
board
with
this.
If
we,
if
we
end
up
running
into
this,
a
lot
where
people
are
submitting
prs
and
we
have
to
regularly
comment
about
this
stuff,
maybe
it
makes
sense
to
look
at
something
a
little
bit
more
automated,
but
I
think
for
now
just.
G
Anyway, did we want to do some metrics stuff?
A
A quick look at that one. So, this has also come up in other cases; the first PR that's on that list: this person has submitted a change responding to feedback. If you look at the changes themselves, Matt... so here, their next proposed change is to... so they're trying to mitigate string creation, right? Based on the feedback, it looks like this
A
commit is also trying to mitigate creating strings from concatenating "HTTP" and the method, but what they did was create a constant with a lazily defined hash up there, which is going to be a no-no, right? We don't want to have global variables, constants, that can be mutated.
A
So
this
is
also
potentially
a
problem
in
in
other
instrumentations
as
well
so
faraday,
for
example,
right
has
a
similar
map,
and
so
on
you
know,
and
so
on
and
so
forth.
So
I'm
wondering-
and
you
pointed
out-
I
think,
robert-
that
some
that
one
of
the
instrumentations
doesn't
use
a
constant
map
at
all.
G
Yeah,
so
I
did
the
http
one,
and
I
just
did
like
2s
up
case
and
then
I
pointed
out
how
net
http
is
using
the
like
adaptive,
hash
right.
A
So should we... this seems like something where we should extract some common functionality. I don't know: is this feedback that we should give this person, or do we give them the feedback to say: look, just make the hash right now with the values duplicated, and then freeze that hash, and we'll address the other issues in a separate PR? I'm particularly looking at line 31 on the left side, where they're making a change to extract the HTTP method.
G
Yeah, so this is very similar to Net::HTTP, so I agree. I think when it comes to a decision like "oh, we should make this consistent or extract it," if it comes up in discussion like this for someone who isn't kind of a regular, I'd say let it through, but then we follow up and just do the thing we want to do after.
F
I think that's fine. Ultimately, is the goal that we key these things with both the upcase and lowercase versions and have maps that hold the span names?
F
If
there's
a
way
that
we
can
build
these
build
out
these
these
caches,
you
know
at
load
time
you
don't
necessarily
have
to
like
enumerate
it.
It's
like
you
can
still
have
like
a
whatever
fanciness
you
want
to
have.
As
long
as
you
can
freeze
the
result,
you
know,
as
you
assign
it,
to
the
constant
and
if
there's
like
a
quick
snippet
that
we
could
just
put
in
there
as
a
suggestion
that
might
be
the
quickest
and
easiest
way
to
get
this
thing
through.
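A sketch of the kind of snippet being suggested: enumerate the cache eagerly at load time, key it by both upcase and lowercase method names, and freeze the result as it is assigned to the constant (the constant and helper names here are made up for illustration):

```ruby
# Illustrative sketch: build the whole map up front instead of mutating a
# constant lazily at runtime; freezing the hash makes any later mutation
# raise a FrozenError instead of silently changing a global.
HTTP_METHOD_SPAN_NAMES = %w[
  CONNECT DELETE GET HEAD OPTIONS PATCH POST PUT TRACE
].each_with_object({}) do |method, hash|
  span_name = "HTTP #{method}".freeze
  hash[method] = span_name          # e.g. "GET"
  hash[method.downcase] = span_name # e.g. "get"
end.freeze

def span_name_for(http_method)
  # A cache hit costs no allocation; unknown methods fall back to one String.
  HTTP_METHOD_SPAN_NAMES[http_method] || "HTTP #{http_method.to_s.upcase}"
end
```

Both "get" and "GET" resolve to the very same frozen "HTTP GET" string, so the hot path allocates nothing for the known methods.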
A
Maybe not the simplest, but the quickest thing might be them calling to_sym on the method that they pass in to the hash, right? They'll do one symbol allocation, which is going to be a finite set of symbols, right? Or are symbols still treated as literals only, and when they're dynamically created they're allocated? I can't remember.
F
Yeah, I think, well, symbol stuff has changed over the years in Ruby. I think there is a chance that they get cleaned up periodically, but for the most part I don't think you have to worry about introducing a new symbol, especially if it's used frequently. I think there ends up being some cleanup that happens when you're dynamically to_sym-ing a bunch of stuff and the symbols don't regularly get used.
F
Yeah,
so
it's
it's
better
than
2s
you're,
not
getting
a
new
one
every
time,
but
not
it's
not
always
free
nope.
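The allocation behavior being discussed can be checked directly in an IRB session:

```ruby
# Symbols are interned: every 'get'.to_sym returns the one :get object, so
# repeated calls do not allocate, while String#upcase builds a fresh String
# on every call. Dynamically created symbols have been garbage-collectable
# since Ruby 2.2, so stray ones are cheap but, as noted, not entirely free.
sym_a = 'get'.to_sym
sym_b = 'get'.to_sym # the same object as sym_a

str_a = 'get'.upcase
str_b = 'get'.upcase # a brand-new String, equal to str_a but not the same object
```

Here sym_a.equal?(sym_b) is true, while str_a.equal?(str_b) is false.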
F
Whatever ideas people have to help shepherd this thing through, we should go for them.
F
And
then
I
think,
the
if
we
are
basically
doing
this
same
kind
of
caching
or
something
like
it
in
every
http
client
instrumentation
like
we
should
definitely
try
to
centralize
that
as
much
as
possible.
So
somebody
wants
to
like
draft
up
like
an
issue
around
it
or
or
anything.
I
think
that
would
be
fine
as
well.
G
Yeah, I think that fits well into the PR Eric has open; something like this can kind of go live with that stuff. They're very similar in responsibility: just providing helpers for HTTP instrumentation. So I'd say Eric's PR cuts broader, and this one doesn't need to be blocked by it: whatever changes need to be made get made, and then extracting common functionality out of this would be a follow-up for us, not the author of this PR.
G
Kind of like... oh, Sam's mic's going all crazy. You guys can see my screen okay? So, what is a good entry point to this? Okay, this is all work in progress; a lot of it's not pushed up, so if you see weird stuff, I apologize in advance.
G
So you have a meter provider, and it's responsible for creating meters, but it's also responsible for creating these metric readers, which are analogous to our exporters in trace land. Metric readers may also support the passing of an exporter to them, and every metric reader... so you can have multiple metric readers.
G
You need to store that in two places, so that if you have two readers using a delta temporality, meaning that you basically flush what you've accumulated at different intervals, they don't impact each other. You don't want to support that interplay; you just ideally want to keep it distinct. But there's kind of a strange coupling, and how do we manage it in a way that makes sense?
G
So I looked a bit at the JavaScript implementation and took a little bit of inspiration there; I looked at a few different ones, and I came up with this rough version of it. So when you initialize your meter provider, you're going to have whatever set of metric readers, and we're going to create something called this metric store registry. I don't know if it's a great name, it's not something I'm attached to, but whatever, it's there.
G
Essentially, this metric store registry supports metric stores, and it also gets a meter provider. That might change; it might just be resources, because this thing is going to be coupled to a single meter provider, which has resources, and we know that we'll need the resources to create a data stream or data point within the collection.
G
You want to emit to a single place and then have it end up in all the right spots, right? It needs to be available. So if I go back to my meter provider, there are the boring, typical methods that we talked about before, very similar to the tracing code, except for here, now: when you add a metric reader, what we're going to do is grab our metric store registry, add a new metric store, and push that metric...
G
We need to add the new metric store, sorry, to the reader, so we just assign it here. So now each metric reader knows where it's retrieving the metrics from. If we go down to the meter... so the meter belongs to the meter provider. This is really long and awful to read; again, I don't know how permanent this is, but I think conceptually it makes sense: when you create a new instrument from your meter, there are kind of these fixed things that need to be there.
G
So
your
counter,
like
your
your
instrument,
needs
to
have
a
name.
It
needs
to
have
an
optional
unit,
optional
description
and
we're
going
to
provide
it.
This
metric
store
registry
and
we're
going
to
provide
it.
The
instrumentation
library
which
I
know
is
renamed
to
instrumentation
scope,
because
these
are
required
for
emitting
the
metric
and
also
collecting
all
the
information.
So
if
you
go
from
here,
so
you've
created
your
your
counter
when
you
actually
want
to
increment
something.
So
you
have
your
counter
counter
a
and
you
say,
add
one.
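The flow being walked through might be condensed into a sketch like this. Every class and method name below is paraphrased from the description of a work-in-progress branch, not the shipped SDK API: the meter provider owns one registry, adding a reader adds a store, and instruments publish each measurement through the registry:

```ruby
# Sketch of the WIP design as described, not the shipped SDK.
class MetricStoreRegistry
  attr_reader :stores

  def initialize(resource)
    @resource = resource # the meter provider's resource
    @stores = []
  end

  # Called when a metric reader is added; the reader keeps the returned store.
  def add_metric_store
    store = [] # stand-in for a real metric store
    @stores = @stores.dup << store
    store
  end

  # Called by synchronous instruments: fan each measurement out to all stores.
  def produce(measurement, instrument_kind, name)
    @stores.each { |s| s << [name, instrument_kind, measurement, @resource] }
  end
end

class Counter
  # unit/description are accepted but unused in this sketch.
  def initialize(name, registry, unit: nil, description: nil)
    @name = name
    @registry = registry
  end

  def add(value, attributes: {})
    @registry.produce([value, attributes], :counter, @name)
  end
end
```

So counter.add(1) ends up recorded once in every store, which is the "emit to a single place and have it end up in all the right spots" behavior described above.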
G
If we have multiple metric readers, we know we need to produce these metrics to all of the places they can be collected from. So that's why we've passed in the metric store registry: we don't want to have to do the whole song and dance of "I'm an instrument, I'm a counter, so I'm going to grab my meter to get the instrumentation library."
G
That's exactly it. One of the things that's missing in this puzzle, and I mentioned it last time we talked about this, is that there's this whole concept of views that I'm intentionally skipping for now. If you don't have a view passed in... and views are a way of configuring, say, a counter to emit two metrics out of it.
G
If
you
don't
have
a
view,
you
just
like
have
your
default
aggregation
which
isn't
even
wired
in
yet,
but
basically
it
is
like
we'll
just
ignore
views
existence
for
now.
So
if
we
go
to
the
metric
store
registry,
this
is
the
produce
call.
So
all
of
your
instruments,
all
of
your
synchronous
instruments,
not
asynchronous,
that's
something
else
that
I
haven't
really
started
digging
into.
Yet
all
of
your
synchronous
instruments
when
they
call
produce
they'll,
be
calling
it
against
the
metric
store
registry,
and
so
this
is
just
like
the
info.
G
we need to build a metric. So we need the measurement, which includes the value of whatever was recorded and the attributes. Again, we'll just jump back here: new measurement; it's whatever happened, plus the attributes associated with it.
G
So we jump back here. To build a metric, we need the measurement, and we need our meter provider's resource, which is why this is passed in to the registry. Again, it might just end up being the ref of the resource passed in; it might not need an actual reference to the meter provider.
G
We need to know what kind of instrument it was, whether it was a counter or histogram or whatever, plus the name, unit, description, and the instrumentation library or scope. This is the part where I said I'm intentionally omitting things: the instrument views can basically split this out again, so one metric can turn into many metrics
G
that have to go to many metric stores. But for now we're keeping it simple, and we're saying that it's just going to publish to each metric store; we're going to record this information, and this is the info we need to do so. There's kind of an interesting part here that I think is worth briefly talking about: when we're actually writing metrics, if you're doing an update of a value,
G
you have to deal with the fact that multiple things will be writing at the same time, right? Like, what if two threads are trying to update the same value? And before you even think about that: what if you're producing metrics and someone adds a metrics reader in a separate thread, right? How do we handle that?
G
The way we do it is we actually just duplicate this metric store reference, add the new metric store that's associated with the new reader, and then do a full swap. That way your array isn't changing, so if this code is in flight, it's not dealing with a value that's mutating. It could be considered less efficient, because you're duplicating this entire array, but again, we're operating under the assumption that people aren't adding hundreds and hundreds of metric readers throughout the lifetime of the application.
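The duplicate-and-swap just described, in isolation (a sketch: the point is that in-flight readers keep iterating the old, frozen array, because writers replace the reference instead of mutating the array):

```ruby
# Copy-on-write list of metric stores: add_store never mutates the array a
# reader might currently be enumerating; it builds a new one and swaps the
# reference in a single assignment.
class StoreList
  def initialize
    @metric_stores = [].freeze
  end

  def add_store(store)
    @metric_stores = (@metric_stores.dup << store).freeze
  end

  def each_store(&block)
    @metric_stores.each(&block) # iterates over a stable snapshot
  end
end
```

An enumeration that started before add_store finishes against the old array; the next enumeration picks up the new reference, so nothing mutates under a reader.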
G
I think that's a safe assumption. Realistically, I imagine most applications will probably have one, but they might have many; but again, those things are likely, hopefully, to be configured at boot time in the meter provider. When we actually do this, it is in a mutex, so this part is blocked. Then, when it comes to the metric store... so again, we were talking about how we have now collected all the information we need to actually produce a metric, and then we're going to go through each of our stores,
G
our metric stores, enumerate them, and record the measurement. This again becomes tricky, because we have to deal with multiple things trying to access the same value. This is just what a metric stream might look like; this is the information that's going to be there. It's a little bit sparse, but I put it in just to conceptualize it. Again, it's not close to being anything other than WIP.
But
if
you
have
multiple
things
trying
to
update
this
value,
we
we're
gonna
have
to
introduce
a
mutex
here
right.
This,
like
comments,
are
just
kind
of
like
talking
through
what
do
we
have
to
do
when
we
actually
record
a
measurement?
So
what
can
we
do
outside
of
the
new
text
like?
What
can
work?
Can
we
do
up
front
so
we
can
compute
the
metric
stream
name
so
like
what
value
are
we
writing
to?
G
We can figure that out before locking. But then the find-or-create: we're going to look to see if it exists, because if it exists, it's an update, and if it doesn't, we're going to have to create it. We'll probably have to lock for that, because if something creates it while we're looking for it, again, that's kind of risky. Then we have to, as quickly as possible, run the aggregation and then unlock.
G
So this is going to require a lot of care and attention, because if you have some counter that's being rapidly incremented across multiple threads, you do not want to completely nuke the performance of your application here. So this took a bit of thinking, and pairing with Francis, to get to this point of comments.
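Those comments might translate into something like the following sketch; the locking granularity shown is exactly the part still being worked out, so treat the split between "outside the lock" and "inside the lock" as the idea rather than the final code:

```ruby
# Sketch: compute the cheap stream key before locking, then hold the mutex
# only for the find-or-create and the aggregation update itself.
class MetricStore
  def initialize
    @mutex = Mutex.new
    @streams = {}
  end

  def record(name, attributes, value)
    key = [name, attributes.sort].hash # computed outside the critical section

    @mutex.synchronize do
      stream = (@streams[key] ||= { name: name, attributes: attributes, sum: 0 })
      stream[:sum] += value # keep the locked region as small as possible
    end
  end

  def collect
    # Same mutex as the write path: snapshot quickly, then let go.
    @mutex.synchronize { @streams.values.map(&:dup) }
  end
end
```

Using one mutex for both record and collect is what keeps a collection from seeing half-updated streams, at the cost of making it vital that both critical sections stay tiny.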
G
The same is going to be true for the collection path: we're going to need to do that as fast as possible, because it should be the same mutex, right? If you're writing to it and collecting from it, you need to make sure those things aren't changing while you're collecting them. So we'll probably have to block. And how do we block that
G
No
in
a
way
that
isn't
gonna
tank
your
performance
again,
it's
like
probably
take
a
quick
snapshot
and
then
just
let
go
and
then,
however,
that
process
looks
like
so
that's
like,
I
think
the
latest
set
of
progress
I've
made
there.
I
need
to
take
a
lot
of
those
comments
and
actually
put
them
into
real
code
and
add
tests
associated
with
them.
I
don't
know,
I
know
we're
at
time
now,
so
I'm
going
to
just
like
shut
up
and
see
if
anybody's
any
questions.
E
A
I feel like those need to be answered by the spec if I would just read it. Then the one thing that's on my mind is about resources themselves: when you're looking at that resource, that's an SDK resource?
G
Probably both, as I understand it so far. Again, like, yeah, reading... I'm not trying to encourage you to read it; reading it is like a whole adventure in its own. I think there's a lot of flexibility and things can happen in various places, right. So far I haven't read anything specifically (or I've blocked it from my memory) that says, like, you have to control the cardinality of these values, right, it's.
A
Is
there
any?
Is
there
any
notion
of
like
a
client-side
interval
aggregation
or
is
that
something
that
is
really
in
the
view?
So,
for
example,
like
you
generated
like
like,
like
a
thousand
metrics
in
10
seconds,
or
something
like
that
right,
oh
there's,.
G
There's a whole lot of that defined, and this is where I'm gonna say it'll be a good use of your time to read about, okay, the aggregation temporality.
A
So I just have to spend time and look at that. And so, you looked at a couple of those... I know I have access to a private C# implementation, I think; I don't know if it's made available publicly yet. But have you looked at any of the C# implementation?
A
Okey-Dokey
because
I
was
going
to
say
like
riley,
who
is
like
one
of
the
big
spec
writers
on
there,
his
team
rolled
out
like
a
c-sharp
implementation
of
the
metrics
sdk
and
it's
running
in
production
at
microsoft.
Now
so
no
cool
I'd
be
interested
in
and
sharing,
maybe
some
of
the
things
that
they
ran
into,
but
I'll
take
a
look
and
see
if
it's
publicly
available.
G
Yeah, I think you would. If you are interested in stuff, there is like no shortage of reading material. There's the API spec, which is like pretty quick and easy. Then there's the SDK, which is not quick and easy. Then there's the data model, and then there's like another supplementary thing that gets linked in there that has even more detail.
G
If I took a reading, delta is like: throw everything out, whereas your cumulative one is more like your Prom style, just let it build up indefinitely. Which, in my kind of code-comment thing here, it says like we do need to keep track of these times, so that you can have timestamps associated with your intervals. Again, that's like a whole other part that needs to be fleshed out a lot more.
A
Okay,
a
lot
of
work,
a
lot
of
work
thanks
for
being
patient
and
welcoming
us
through
that.
G
I'm
happy
to
it
it's
useful,
because
questions
that
I
can't
answer
just
means
I
need
to
understand
stuff
better.
So
I
like
getting
the
opportunity
to
have
people
ask
questions
about
what
I'm
doing
yeah
I'm
gonna.
I
keep
saying
this,
but
I
try
to
push
up
some
more
of
the
stuff
once
I
like
add
a
bit
of
tests
and
a
bit
of
more
like
actual
implementation
to
the
part
I
just
talked
through,
but
there
is
already
a
a
whip
up
so
feel
free
to
like
look.
A
We're still wrestling with the tracing journey. So, like, you know, for us it's trying to figure out how to get more semantic conventions defined for the instrumentations themselves.
G
A
Like right now, I know, like, you know, some of the instrumentations have their own attributes, like, you know, for like Rails controller and view names and whatnot. Figuring out some way to publish those, to make those well-known, or figuring out if we can have some common set of, you know... I'm gonna use the word MVC.
A
You
know,
but
some
semantic
conventions
around
mvc
instrumentation,
which
we
which
isn't
in
the
spec
right
now,
and
you
know
those
are
the
kinds
of
problems
I'm
running
into.
G
Repo instrumentation is just like some person's stab at Go GraphQL instrumentation, and I think, in the span of like a couple hours, it introduced like 170 new attributes to our schema and caused a whole bunch of havoc. So we're actually looking right now at moving to having a fixed schema, where users aren't allowed to just add arbitrary values to their spans.
A
Same
here
same
here,
well
so
much
for
the
the
the
firewall
rules
right.
It's
like
denying
and
wonderful
friends
have
a
wonderful
afternoon
and
by
this
time
next
week,
well,
anyway,
we're
recording
forget
it
see
you
soon.