From YouTube: 2021-12-02 meeting
D
Okay, so it looks like we're 10 past, so I guess we could get started.
D
I'm still gonna try to get back on track and update it with whatever has been going on. Thanks, everyone, for addressing all of the questions that we've been getting through Slack. Just looking at the PRs and issues that have been opened, it looks like we're getting some good progress on the metrics work.
D
I'm working on a few metrics issues myself. I don't see any topics in the agenda as of right now, but I guess we can just give a little bit of a status update. There wasn't a meeting last week, right? I'm assuming... oh, there was no meeting.
D
Yeah, got it. So I think it'd be good to bring everyone up to speed on what everyone has been working on, so we can set some goals and conclude for this year, because I know a couple of people are gonna be out near the end of December.
D
Just taking a look at it: Diego, do you want to start us off with what you've been working on?
F
Okay, yeah.
F
I haven't been able to make much progress on reviews or anything this week; I've been really sick these days. Feeling better now, so I hope that I can catch up.
F
Yeah, can you see my screen? Yeah? All right. Just to give you some updates: I have opened six PRs, all of them related to metrics. There are these prototypes that are open.
F
As he said, I went through all this code and discussed it with them.
F
Many things to learn. I'm making an implementation that pretty much follows this quite closely, with some differences, and they are being added PR by PR, in this sequence of PRs. So far I have them as draft PRs because they don't yet have many test cases added to them, but I'm making sure that all of them constantly...
F
They constantly have green checks, right, so that the reviewers know that they are at least safe to take a look at, because they don't introduce anything that breaks stuff.
F
Reviewers can pick any one, but they are sequential, so they will be asked to review some other PR first until you finally reach this one, which is the first one of them. First, I'm doing some refactors of some classes that we have already: meter provider, instruments and aggregations.
F
Then new stuff is being added. So the goal by the end of this week, I really hope, is that these PRs are finished, in the sense that we can run an example, which I'll probably be adding in a last PR that just introduces that example, and that we can prove that we can finally have a working SDK prototype.
F
That will be my goal for this week. Pretty ambitious, because we only have like one day and a half left, but we're pretty close, just implementing the very last method. So yeah, hopefully we can have that.
A
I just have a question about this. There's a few of these that are marked as ready for review and some of them are still in draft. Are they all ready for review, or...?
E
I don't know... oh, it...
F
So yeah, I mean, I don't want to give the impression that, oh yeah, we are good to go without...
F
I mean significant test cases, right. But if you prefer, I can mark them as ready for review, so that people at least feel it's worth it for them to take a look at them, even if we are still...
B
Yeah, so, I was going to take a step back. I guess a lot of people are out for Thanksgiving; I know the last meeting was canceled, and the one before there wasn't a lot of people here. But I think there's some context missing here. There is a doc, like a design doc, I wrote and shared on Slack a few times and in the SIG, and that's basically what we're implementing, at least that was my understanding.
B
So I would like to hammer down everything here. Like you mentioned, you made some changes; whatever those changes are, I'd like to discuss them, because I think at a high level it's really hard to review a bunch of sequential PRs where you don't really know where it's going. And I still have only really gotten reviews on here from, I think, Diego and Tricon, so... sure.
F
Yeah, Aaron points out something very important. This document is like the foundation of this PR, and this PR, you can call it the foundation of these other six PRs that are split apart now. There are a couple of differences between the implementation that I followed and the implementation that is followed here.
F
In my opinion they are not significant, in the sense that we're pretty much following the same approach. Aaron and I have been discussing a couple of differences. One of them is the names of the classes that we're gonna use; I still haven't updated those names in my PRs, but I surely will. And the other one is...
F
This class, metric reader storage, that I don't consider to be necessary.
F
To make this process just move faster and not get stuck discussing whether we really need this or not, I'm pretty much okay with adding it back and discussing that later, because it is not a significant change. I would prefer not to have it, because I believe it's not necessary and it just makes the process of consuming measurements longer; Aaron disagrees.
F
I think he considers this to be a significant part of the design, something that introduces an important separation between SDK and user interface. So, yes, I'll be pretty much just adding this back and using the same names that are being used here, so that we don't have these differences now, and because they are much...
F
Less important, in my opinion, than the whole project, so we can discuss them later. This way the document will also match the implementation that we have here in these PRs more closely. Aaron, I'll probably be contacting you today to ask you more questions about the temporality conversion algorithm, where I am adding a few changes. We discussed them before, because we had...
F
We had some hardcoded temporalities, remember, and I asked you if this is bad or if we should change them depending on the type of instrument. He said yes, so I'll...
B
Have some visibility into it, and not have disagreements in the PR about code.
B
And, I don't know, honestly, I'm...
B
Could you go over what your PRs do? Because I thought we were going to split up the work a bit.
D
Hey, so, coming from someone who just came back and was looking at this: also, apologies, Aaron, I know the metrics SDK design has been up for a while.
D
I will take a look at that as soon as I can, to get another point of view. But coming back and seeing a bunch of PRs, I'm not really sure... not even the order, but again, what you were talking about, seeing the whole picture: it's difficult for me to understand what I should do first and what is needed from me.
A
I think Aaron was suggesting that we should go over this doc at this point, right, so that everybody can get an overview of what is changing. Do you want to try and do that, maybe, Aaron? Maybe you can drive and share the doc and talk about it. I mean, honestly, it might be easier to...
B
Review it offline. The big picture, though... I mean, honestly, I think it's easiest to just review it offline for everybody.
D
Do you guys think it would be worth it, in order to actually move this forward, to have a separate working session or something to discuss metrics specifically? I know we can review it offline and stuff, but I personally think it's difficult to go back and forth just through text. But my question was more towards: how does this design relate to the six or seven draft PRs that got brought up?
F
Sorry, I just received a phone call in the last 15 seconds and I couldn't hear what was being said. Are you trying to reach an agreement or something?
D
Oh, so, we all understand that we're going to review Aaron's metrics design doc and see if that's going to be the source of truth for our design. That makes sense. I'm wondering: these six new draft PRs that were created, are they all based off of this design, or...?
F
It was my understanding that the approach to follow here was to take this PR that implemented this design and split it into smaller PRs, and, something important, also to add test cases to those PRs, so that we can take this design and introduce it into the main branch little by little. That's my intention with these six PRs that I'm making.
F
I also did some code cleanup, just a little bit of this, a little bit of that. But we're talking about pretty much the same thing here: the design document, my six PRs and this PR, it's pretty much the same thing. There are no different approaches.
B
Aaron, is that the correct understanding? I mean, I haven't had a chance to look at the draft PRs. I was...
B
I was thinking it'd be easier to split it up, like a few people do each part, and maybe we create issues, right? Because I understand that the draft PRs are in order, but I know it's going to be really hard to rebase if we request any changes or want to fix anything, if there's a stack of six PRs.
F
Well, that's pretty much what I have been doing, as I have been splitting this PR into smaller ones. I have been rebasing every other one: if I make a change on a previous PR, I've been rebasing the next one. Since every PR introduces changes in separate files, because that is the approach that I've been following, pretty much one for the metric reader, one for this, one for that, there are very, very few conflicts.
D
Okay, well.
D
This would be kind of difficult. To be honest, I personally don't like having to review one after another and having to backtrack. Is there any other thing that you would propose, Aaron, without throwing away what Diego has done?
B
Maybe... I think the best thing would be to create issues, so that we know what each of these things represents and we have a view of the whole thing. And maybe we could split up the work a little bit, so that instead of just Diego having to write all the code and us having to review it all, he can help with the review as well.
D
Yeah. So basically, on the consensus of how we can move forward: it's great that you went ahead with implementing and splitting up the metrics SDK, but in terms of execution, of how easily we can get this all in, and especially in terms of reviews and who's coding, it's kind of hard, because we won't ever get your reviews, right, and you're the only implementer.
D
So, as a good way to move forward and not throw away everything that you did, Aaron suggested that we create issues for each of these, as well as an overarching SDK implementation task, so that we can keep track of who's doing what and people can help out on the implementations.
D
That way you're not the only one implementing, because otherwise you're never gonna be reviewing and we never get your input, right. It's also difficult as a reviewer: I know that you're doing your best to rebase and stuff, and to your eyes the PRs could be mutually exclusive and all of them have no conflicts, but that's just your word, right? It's hard for an objective reviewer who knows nothing about metrics to review all of them at once.
D
We're definitely gonna do something like: hey, we're just gonna do meter and meter provider first; until that's merged in, we're not gonna touch anything else. It's kind of slow in that regard. So, does that make sense?
F
Sure, that's fine if you want to follow another approach. But I think that, unless there is a significant change in the whole design that is specified in the document, and to be honest I don't think there will be much of a change, because there aren't that many ways of implementing this, even the objections that I have with the design as it is right now are not critical or life-threatening.
A
I mean... oh, sorry, go ahead. Sorry, me and Tricanton at the same time. Sure, you go first.
H
Yeah, sure. So I kind of agree with what Aaron said: split up the work and then create issues; split up the work and then assign it to different people. We have around four or five teams, so the work gets distributed, and then each person gets to work on something and gets to review everything else. So eventually everybody else has...
H
You know, a solid idea of the component that they implemented, and then reviewing other PRs gives a good idea about the other subcomponents in the metrics, right. That's what I think.
D
Step piecewise, and then I think everyone would have the same understanding, at the same time, gradually, of the space. So...
F
Okay, what I just don't understand is: what's the plan? What should I do with this PR now?
D
I don't want to disregard what you did, but as a group, as a team, it's difficult when you have such expertise in one area but everyone else doesn't, right. So when you go off and implement everything, it's hard for us to keep up, review it, and see the whole picture the same way as you do. So, in terms of actual execution, it would be simpler...
D
If you just release one PR at a time and make sure that gets brought to fruition and finished, and then do the next one, and then the next one. Granted, the metrics design is way harder than logs or anything.
F
Yeah, but we are in that situation right now. We are in the situation where there is one PR of mine open, waiting for other people to read the design doc, get themselves up to speed and review it. The fact that there are five more PRs behind it doesn't matter; the only thing that matters now is that there is one PR, the most basic one...
F
As it is, waiting to be reviewed. What I'm trying to say is that if, after reading the design doc and making any changes, if that happens, these PRs are no longer relevant because there was a significant change in the design, that's also fine. I mean, there is no obligation to merge these PRs as they are right now.
F
The fact that this PR exists right now is just to save time, to create a code representation of the design as it is right now. And believe me, it is easier to understand the design if you have code that you can follow along with.
F
Yeah, but that actually doesn't matter, because what is necessary now is for us to get a couple more people to read the design document, understand it, and make themselves metrics reviewers.
F
These PRs can just stay there, right? It doesn't hurt to have them there, and when people have reviewed the design doc, two things can happen. Either they agree with the design as it is right now, in which case they can just go and review these PRs, or they have disagreements with the design, in which case the design can be updated and changed, and I can change my PRs as well.
F
What I think is that we are in the situation where we want to be. We just have five other PRs, five extra PRs, and it doesn't hurt for them to be there right now. Only one of these PRs is waiting for review, the most basic one.
F
So I agree with you all: it'll be great to have more people read the design document and get up to speed on metrics, so that they can start reviewing this as well, or decide if they want to implement things differently.
D
Okay, so, following what you said, looking at the current PRs, the first PR which needs reviews is this add-views PR, no?
F
This is the most basic one, but, again, I wouldn't care that much about reviewing these PRs now. The most important thing now, as Sarah mentioned, is for more people to get familiar with this design document. These PRs, or even this PR, will help you understand it, because it's easier to follow along and understand this document.
A
Hey, how is that PR related to Aaron's PR, this one? Yeah, there's one PR by Aaron that says: metrics SDK prototype implementation.
F
Yeah, this PR is a big PR that implements this. I went through all this code, through all this design document, and from that I created these six PRs. Right now they have a few implementation differences.
F
There is one component in the design here that I don't consider necessary; Aaron and I have been discussing that. But even with all those differences considered, there is no substantial change: pretty much the same path for processing measurements is being followed. The same...
A
Trying to go back to what we want: we want to be able to make forward movement here. I think, Leighton, you proposed having a metrics-specific call. Can we plan for everybody who's interested in the metrics implementation to read this design doc, review some of the PRs, and get back together, either tomorrow or Monday, and just have a quick metrics-specific call for the implementation, so we can essentially have actionable items coming out of this?
H
Yeah, I also agree. I also think, if Aaron can give the overall high-level overview of this document, and then ask people to read it offline and comment before we get back to another call to discuss...
B
All right, can everybody see this and hear me? Is the size of the document okay? Yeah? That was great. So I just put a bit of background here, like goals and non-goals and such. I put some background from reading through the .NET and Java implementations, of what they've done. Java, I think, is probably the most mature, just because Josh Suereth has been working in the metrics SIG and in Java, basically implementing that along with the SDK.
B
The idea is that this is internal to the SDK, at least for now, and all of the aggregation and view stuff and everything happens in there. There's a default implementation, which is in this doc, which could be swapped out with something else. So, for instance, a queue might work better; we could investigate that in the future. Or, for instance, for async IO, everything single-threaded, you can cut out some of the steps and reduce a lot of locking. So, basically, there's this thing in the metrics spec called metric reader.
B
It's... how to put it... basically, each metric reader that you register is sort of like an exporter, and they each have a view of all the measurements, or the views that are configured so far. They shouldn't interfere with each other, meaning if you configure delta export, calling collect on one of them shouldn't influence any of the other ones. It will only reset the views for that single reader.
B
So that's what this level is for, the metric reader storage. View storage is basically... you can think of it as an instantiation of a view. So when a view matches an instrument, we create one of these view storages, and there's a whole tree for each metric reader storage, so they'll each have their own copy, basically, of all the views. And for each label set, or attribute set, there's going to be an aggregation, which is chosen by the view and the instrument, so these are one per label set.
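The structure described in this turn (a tree of view storages per metric reader, a view storage per view/instrument match, and one aggregation per attribute set) might be sketched roughly like this. All class and method names here are illustrative only, not the actual opentelemetry-python API:

```python
from collections import defaultdict


class SumAggregation:
    """Illustrative aggregation: accumulates a running sum."""

    def __init__(self):
        self.value = 0

    def aggregate(self, measurement):
        self.value += measurement


class ViewStorage:
    """One instantiation of a view matched against one instrument.

    Holds one aggregation per attribute (label) set.
    """

    def __init__(self, aggregation_factory):
        # attribute set -> aggregation instance, created on first use
        self._aggregations = defaultdict(aggregation_factory)

    def consume(self, measurement, attributes):
        # attributes must be hashable, e.g. a sorted tuple of pairs
        self._aggregations[attributes].aggregate(measurement)


class MetricReaderStorage:
    """Per-reader copy of all view storages, so readers don't interfere."""

    def __init__(self):
        # (view name, instrument name) -> ViewStorage
        self._view_storages = {}

    def storage_for(self, view_name, instrument_name, aggregation_factory):
        key = (view_name, instrument_name)
        if key not in self._view_storages:
            self._view_storages[key] = ViewStorage(aggregation_factory)
        return self._view_storages[key]


# Two readers each get their own tree; collecting from one never
# touches the other, which is the isolation described above.
delta_reader = MetricReaderStorage()
cumulative_reader = MetricReaderStorage()
s = delta_reader.storage_for("all-counters", "requests", SumAggregation)
s.consume(1, (("route", "/home"),))
s.consume(2, (("route", "/home"),))
```

Here resetting or collecting `delta_reader`'s storages would leave `cumulative_reader`'s tree untouched, matching the "one copy of all the views per reader" description.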
B
So, again, this part, the gray, is all internal. The blue is sort of user-facing, and this can all be swapped out without requiring changes to any of these blue boxes. So, for instance, if we want to have a queue-based consumer or something like that, we can try that out, or even, in the future, expose this to users if they want to do other stuff.
D
Hey, I don't want to go into too many specifics right now, but I was wondering if the view storage could change. It's like one copy per label set per instrument, right?
B
No, no, there's one view storage per view that matches with an instrument. So if you had, for instance, four counters, and you had a view that targeted all the counters and changed the aggregation for all of them to a histogram, for instance, then you would get four view storages, one for each instrument.
D
Right, and if you change the label set that you want to aggregate an instrument by, will that be a new view storage we create?
B
Even if you reconfigure the view, or if you send new measurements with more labels...
D
Right, but okay, the view is still the same, right? Yeah, that's registered. Got it, yeah. Okay.
B
Yep. So, basically, the other thing is that at collection time, when we read something, a single metric reader will say: hey, give me all the streams, all the metrics for my reader. It will also go through this... it's not in this diagram, but it will also go through this measurement consumer part. It will sort of bubble down through here, and there are granular locks at each level. So what's going to happen is it will lock the view storage that it's reading, which will lock each aggregation individually that it's reading.
B
Once that view storage is done, we'll go to the next one and read all those metrics, but then this other one will be unlocked so that new measurements can come in, for instance. And then, finally, when collections are done, it basically bubbles everything up through here and returns it to the metric reader, and it will reset the aggregations afterward.
B
Sorry, sorry. It's because you can target more than one instrument. So, for instance, let me open the SDK doc.
B
All right, can you hear me and see? Yeah? Okay, sorry. There are two parts to a view. There are some examples here, but the important part is that there's a selection criteria, so the instruments you want to target, and then there's the actual configuration for the output that you want to change. So the options for things you can target are like a specific meter name; you can target all the counters, for instance.
B
You can also use globs, I believe that's required in here, so like a wildcard, and for each one that matches you would apply this configuration, which is the second part of the view. So you can think of the view kind of like a template: for each instrument that it matches, it creates the configuration based on what you give it.
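A minimal sketch of the two-part view just described: selection criteria (with wildcard matching) plus an output configuration that is applied to every match. The names and parameters here are assumptions for illustration, not the actual opentelemetry-python API:

```python
from fnmatch import fnmatch


class View:
    """A template: the selection criteria decide which instruments it
    matches; the configuration part is applied to each match."""

    def __init__(self, instrument_name="*", instrument_type=None,
                 meter_name=None, aggregation=None):
        # Selection criteria (wildcards allowed in instrument_name).
        self._instrument_name = instrument_name
        self._instrument_type = instrument_type
        self._meter_name = meter_name
        # Output configuration applied to every matched instrument.
        self.aggregation = aggregation

    def matches(self, instrument):
        if not fnmatch(instrument.name, self._instrument_name):
            return False
        if (self._instrument_type is not None
                and not isinstance(instrument, self._instrument_type)):
            return False
        if (self._meter_name is not None
                and instrument.meter_name != self._meter_name):
            return False
        return True


class Counter:
    def __init__(self, name, meter_name):
        self.name = name
        self.meter_name = meter_name


# One view targeting counters via a glob; in the design above, each
# match would get its own view storage.
view = View(instrument_name="http.*", instrument_type=Counter,
            aggregation="histogram")
counters = [Counter("http.requests", "m"), Counter("http.errors", "m"),
            Counter("queue.depth", "m")]
matched = [c for c in counters if view.matches(c)]
```

This mirrors the four-counters example mentioned earlier: one view matching several instruments yields one storage per matched instrument, each configured with the view's aggregation.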
B
Yeah, yeah, for sure. So there are some other weird things; let me just call that out in the SDK doc. Are we familiar with temporality, like delta versus cumulative? Yep? Okay. So, for each reader that you register, you have to say what temporality you want, or you can choose a different temporality. So you could ask for, like, a Prometheus exporter, which will be all cumulative.
B
You could also add an OTLP exporter, which will be delta, so each of those has a separate view. And also, for observable instruments, you have to convert the temporality as well. So if you think of something like CPU time, which is just read from, like, your proc filesystem on Linux and is a cumulative in its own right, you're gonna have to subtract the previous points to convert that to a delta, for instance. So that's another really complicated part of the spec and of the design.
B
It's basically copied out of the Java implementation, but there's an algorithm for converting from delta to cumulative and vice versa, in a general way.
F
Okay, so for the code that you had, you used the Java implementation to create the aggregation?
B
Yeah, but it's kind of the simplest way to do it regardless; I did think about it on my own for a while.
B
Basically, the way it works is it keeps a cumulative of the previous collection interval, and just by storing the last cumulative, that lets you convert either way. So if you have a synchronous instrument, which has cumulative measurements, the aggregation just for the current collection interval is already representing a delta. So if you need delta, you can return that; if you need cumulative, you would just add it to the previous cumulative. And for async instruments, so something like CPU time...
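The conversion algorithm described here, keeping only the previous cumulative so that either temporality can be produced, can be sketched like this for sums. This is a simplified illustration of the idea being discussed, not code from the prototype:

```python
class TemporalityConverter:
    """Converts a stream of sum points either way by remembering the
    previous cumulative value.

    - A synchronous instrument's aggregation for the current collection
      interval is already a delta: return it as-is for delta output, or
      add it to the stored cumulative for cumulative output.
    - An asynchronous (observable) instrument reports a cumulative:
      return it as-is for cumulative output, or subtract the stored
      previous cumulative for delta output.
    """

    def __init__(self):
        self._previous_cumulative = 0

    def from_delta(self, delta, want_delta):
        self._previous_cumulative += delta
        return delta if want_delta else self._previous_cumulative

    def from_cumulative(self, cumulative, want_delta):
        delta = cumulative - self._previous_cumulative
        self._previous_cumulative = cumulative
        return delta if want_delta else cumulative


# Sync counter reporting interval deltas 3 then 4 -> cumulatives 3, 7.
sync = TemporalityConverter()
cumulatives = [sync.from_delta(d, want_delta=False) for d in (3, 4)]

# Async, CPU-time-style cumulative readings 10 then 15 -> deltas 10, 5.
async_conv = TemporalityConverter()
deltas = [async_conv.from_cumulative(c, want_delta=True) for c in (10, 15)]
```

The real algorithm (as noted, adapted from the Java SDK) also has to handle restarts and non-monotonic resets, which this sketch omits.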
B
I'm not sure; I haven't actually checked .NET. I'd be curious if there's another way to do it that's easier, though, because I think this is the simplest way. Regardless, I don't think it's really coupled to Java in any way.
D
Okay, thank you. Are most of these implementation details what Java is doing?
B
Exactly. So Java is storing the measurements... basically this view storage component, it's storing them on the meters, but there's no real reason to do that, and it makes it so you can...
D
Awesome, yep. Hey, I guess one last question. I remember some time ago we had some discussions about performance and stuff.
D
Is that something we should always just keep in mind while reviewing this? I personally don't know what the implications of performance are, but is that something we need to address right now, or should we just get something in the design first, or implement it first?
B
If there is an issue... there's sort of the structure, the wiring together, and then the actual implementation of the default measurement consumer, the one that we're using. It's kind of like the batch span processor, right: it's something that ships with the SDK that you can use, that should be decent. Anyway, this one has granular locks, so we try not to make it slow, and in cases of high contention it should be okay, but yeah.
B
I do think we should keep performance in mind and definitely write some benchmarks for everything. I did test it out when I implemented this, and it was not terrible. You can look at the script output in the prototype PR, but sure, we should definitely keep it in mind.
B
Yeah, and there's also some... sorry, Diego, yeah, that diagram.
F
That you have there, yeah. Can you show it again, please? Yes. Are you planning on adding metric reader as some blue box as well? Oh yeah, it could...
B
I guess... I mostly just didn't add anything else, because it was already getting pretty small; it ran out of room, yeah, I think.
D
Cool, we have like a minute left. Thanks, Aaron, for sharing that; that actually helps greatly for moving forward. In terms of what we can do to execute this whole metrics meeting: should we just allocate next week's meeting to doing that, or should we do something sooner? I feel like finding a separate time might be difficult amongst everyone who's interested, and also there are some people who aren't here today.
D
So I think making next week's SIG meeting just metrics-related is the simplest way, but I'm open to suggestions as well. What do you guys think?
D
Okay, cool. So I guess we can just coordinate times in Slack then, if that's cool. All right, awesome.