From YouTube: 2022-03-10 meeting
B
So let me... I think I sent you a message yesterday, so I was looking into this. First of all, I was looking into the aggregation work which we have to do, and after looking into the Java code I think I got some idea of how to do it. Probably I can quickly share it with you.
B
This is our code. Basically, there is a synchronous metric storage which, as I said last time, is where, whenever a new measurement comes in, this record method gets called. What that method does is add the measurement to an attribute hash map: if you see, whenever record long or record double is called, it adds the value.
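A rough sketch of the record path being described, assuming a sum aggregation; all names here (SyncMetricStorage, RecordLong, Attributes as a string map) are illustrative, not the actual SDK code:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Attributes modeled as a sorted map of string key/value pairs so an
// attribute set can itself serve as a map key.
using Attributes = std::map<std::string, std::string>;

// Hypothetical synchronous metric storage: every recorded measurement is
// folded into a per-attribute-set running value (the "attribute hash map").
class SyncMetricStorage {
 public:
  // Called for every new measurement (the "record" path discussed above).
  void RecordLong(int64_t value, const Attributes &attributes) {
    // Sum aggregation: add the incoming value to whatever is already
    // stored for this attribute set (defaults to 0 on first sight).
    attribute_map_[attributes] += value;
  }

  // Read back the aggregated value for one attribute set.
  int64_t ValueFor(const Attributes &attributes) const {
    auto it = attribute_map_.find(attributes);
    return it == attribute_map_.end() ? 0 : it->second;
  }

 private:
  std::map<Attributes, int64_t> attribute_map_;
};
```

With this shape, two measurements for the same URL attribute land in the same entry, which is the aggregation behavior described next.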
B
So probably this is a better place to look. If you see this one: whenever a new measurement comes in, we use this attribute hash map to maintain the value, so basically we do an aggregation. Suppose a counter instrument is created with which we have to count, say, the number of requests we are getting. Every time it is called, it will have some value, and the attribute might be, say, the URL on which the request is coming.
B
That attribute, the URL, gets stored in this attribute hash map, and then we aggregate the value which we have got. Aggregate means: say the first time we got a value of 10 as the count for the counter instrument. That means 10 requests, so we add that URL attribute to the hash map, and the hash map maps it to that value.
B
This addition happens for the same URL. If the next time a second URL comes in, there will be a new value for it, and a new entry in the attribute map for that URL. What we do is keep on recording like this, and at some point collect will be called. This whole attribute hash map is stored for a given instrument, right, but there could be multiple readers for that instrument.
B
So suppose reader one makes a call to collect.
B
Then we have to somehow get all the metrics from here and send them to the collector, or rather the reader, which made the call.
B
But the case I'm thinking of right now is this: suppose reader one makes a call to collect for the first time. We create the metric data out of this attribute hash map and send it to the reader. Whenever reader one makes the next call, it will get the new data from the attribute hash map.
B
So in between those two calls, whichever new measurements have come in, it will get those values. But the problem is this: that is the simple case where only one reader is making the collect calls. If reader one makes the first call but the second call is made by reader two, reader two will not, or rather should not, be getting only the leftover values from the attribute hash map, right?
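A tiny illustration of the problem as I understand it (names made up): if collect simply drains the attribute map, a second reader that collects afterwards misses everything the first reader already took.

```cpp
#include <cstdint>
#include <map>
#include <string>

using Attributes = std::map<std::string, std::string>;
using MetricData = std::map<Attributes, int64_t>;

// Naive storage: Collect() hands out the aggregated values and clears
// them. Fine with one reader, broken with two.
class NaiveStorage {
 public:
  void Record(int64_t value, const Attributes &attrs) { data_[attrs] += value; }

  MetricData Collect() {
    MetricData out = data_;
    data_.clear();  // <- the problem: the next reader finds this empty
    return out;
  }

 private:
  MetricData data_;
};
```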
B
Yes, so that's what we have to do here. We have to maintain another hash map of unreported measurements for each of the readers.
B
The meter shared state is different. That is something we also have.
B
Across all the meters, we call that the meter context, because in trace we have the trace context, so we just named it the meter context. But in Java...
B
So the thing is that we have to maintain a history of the unreported metrics for each of those collectors somewhere, before we clear up our hash map, because this information has only gone to reader one; reader two hasn't got this information yet. So if we clear this up, reader two will not get any information. We have to maintain a list of all this information before clearing it up.
B
So basically, what we have to do here is, before clearing the attribute hash map in the collect call, we have to store this information for all the collectors which we created. We create another map of all the collectors with the measurements which they have yet to receive.
B
For whatever measurements are there, we have to create a hash map. I know that the way I'm saying it is very difficult to really understand, and it's a bit difficult for me to articulate as well, yeah.
A
I totally got it, yeah. I can tell you how it is. So, let's say...
A
We have key-value pairs one, two, and so on, and every reader should read those key-value pairs one, two, and so on. When one reader does the collect and reads the first pair, we're not allowed to remove it from our hash map, because the other readers didn't read it yet.
B
So what I need to do, basically in the collect call, is have these two map storages. This one would be only for any new measurements which we get; we store them in this one. So this is the same as the attribute hash map: any new measurement coming in, we store it in the attribute hash map, and before clearing the attribute hash map we maintain something like this, I mean.
B
And then we have to do the reporting. So for any new collection, I think we have to use this unreported metrics map, and then probably something like that.
B
Something like this, I'm saying. First of all, we have to store all those measurements which came between two collections in this hash map, and whenever the request comes in for a collection, we have to stash or store this hash map for all those collectors before clearing it up, and then we have to clear it up somehow.
B
It could be doable. I mean, we have to see whether this is a shared store or something, or we just maintain a pointer or something, I'm not sure. Right now we are copying it; if it's a shared-pointer kind of thing, if we can store the same thing in all the places... I don't know if we can do that or not, but probably that's something even I was thinking about, because this will have lots of copies.
B
Actually, this will have lots and lots of copies pushed back with the same attributes.
B
So if you see here, yeah, if you see collect: in the metrics collect, they are calling this method from the temporal metric storage. So I think, once we have this logic built up... this logic is actually built into this temporal metric storage.
B
So all this logic for doing this stuff is coming from the temporal metric storage. So I'm just thinking, let me create this part, and I think that will bring some more clarity into this storage thing. It would be used in both places, for asynchronous also; I think this would be used in both asynchronous and synchronous.
B
Yeah, even asynchronous is calling the same thing, so we can reuse it in both places, and that will bring some clarity, even before we do any optimization. We may not be able to optimize it at first, but we know we are now doing multiple copies, which hurts performance. I mean, we can definitely fix all those performance issues afterwards, or try to fix them, yeah. But I think, I mean, I am thinking, if I can...
B
I can spend some time on this. But I was also going through that other issue; I mean, I probably wanted to even talk about it, because that part looks to be very tightly coupled. I'm feeling that for the past two weeks, because of this lack of clarity, even you are kind of blocked and not able to do much. Because of that, I was just thinking: this exemplar part, I think it's required, and it would be an important part of the SDK.
B
I haven't checked it fully, yeah, so I don't have much clarity on this yet, but it is a kind of correlation between traces and metrics: for each of the measurements, it adds the trace id and the span id along with the value and attributes. As of now, measurements have a value and attributes, but this also adds the trace id and span id of the active trace. So something like this: I start a trace, and in between I create a tracer or create a span, and record measurements there.
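Roughly what that trace-metric correlation could look like, as a sketch (these are exemplar-style records; the type names are made up, and the active span context is passed in explicitly to keep the example self-contained, whereas a real SDK would pull it from the current context):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Attributes = std::map<std::string, std::string>;

// Today a measurement is (value, attributes); this adds the ids of the
// trace/span that was active when the measurement was recorded.
struct Measurement {
  int64_t value;
  Attributes attributes;
  std::string trace_id;  // empty when no trace was active
  std::string span_id;
};

class ExemplarRecorder {
 public:
  // Record a measurement together with the active trace/span ids.
  void Record(int64_t value, const Attributes &attrs,
              const std::string &trace_id, const std::string &span_id) {
    measurements_.push_back({value, attrs, trace_id, span_id});
  }

  const std::vector<Measurement> &Measurements() const {
    return measurements_;
  }

 private:
  std::vector<Measurement> measurements_;
};
```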
B
We can add it, but it's a bigger piece of work. I mean, I just felt that this is something that will allow both of us to work independently. Probably once the temporal storage and all those parts are done on my side, I think we can start working on those pieces together, yeah.
B
I mean, I was just feeling that, probably because of that, the storage part, I mean, it has lots of pieces which probably have to be worked on.
B
Yeah, I mean, that's the core part. I think once that gets fixed, we can start working independently on the other stuff as well, but until that is there, it's very difficult, and I'll keep blocking you on some of the things. So that's something, probably, I think, that we can start supporting. I think it would be a good addition to our SDK, because right now we cannot even go for a prototype.
B
Yeah, that needs some changes in our APIs also. I think I mentioned it in that exemplar issue as well: it needs some changes in our APIs, because we have to pass this span context along. Right now we just pass the value and attributes, but I think we have to pass the span context also, and Java is already doing it; I saw that there.
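The kind of API change being discussed might look like this (hypothetical signatures, not our actual API): keep the existing two-argument add, and add an overload that also threads the active span context through for exemplars.

```cpp
#include <cstdint>
#include <map>
#include <string>

using Attributes = std::map<std::string, std::string>;

// Minimal stand-in for a span context (a real one carries more fields).
struct SpanContext {
  std::string trace_id;
  std::string span_id;
  bool IsValid() const { return !trace_id.empty() && !span_id.empty(); }
};

// Hypothetical counter instrument illustrating the API change.
class Counter {
 public:
  // Existing surface: value + attributes only.
  void Add(int64_t value, const Attributes &attrs) {
    Add(value, attrs, SpanContext{});  // no trace correlation
  }

  // New surface: also record which span the measurement happened under.
  void Add(int64_t value, const Attributes &attrs, const SpanContext &ctx) {
    sum_ += value;
    if (ctx.IsValid()) last_ctx_ = ctx;
    (void)attrs;  // aggregation by attributes elided in this sketch
  }

  int64_t Sum() const { return sum_; }
  const SpanContext &LastContext() const { return last_ctx_; }

 private:
  int64_t sum_ = 0;
  SpanContext last_ctx_;
};
```

Keeping the old overload means existing callers keep compiling while instrumented code can opt in to passing the context.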
B
I think you have been spending some time on that, and that's, I mean, totally fine if you want to, because for me, I don't have any, what do you say, preference for either of them. If you want to take it, I'm totally fine; I can probably start working on the storage, yeah.
B
It will need lots and lots of fine tuning; there are lots of things we may have to improve in that piece. It may not work, and we may have to redesign it again, and I probably want to discuss with you before finalizing that part. We'll need more discussions on this as well, because that's the core part, and it may need lots of performance tuning and lots of discussion before really finalizing it.
B
It's just that I want to probably put something in place for the storage first, and if we have something working on that, then start doing more on top of it. I mean, we have to improve its performance, so let's try to do that, even if we have to redesign it, without breaking the functionality. I think let's see when we can do that afterwards.
B
So, I mean, I'm fine either way; I just want that we should deliver something. It's okay to take either of them.
B
So probably, I think, let me spend this week on that, and then maybe next week let's connect once again. Probably you'll have made some progress on the exemplars, and I may have something to show for this storage part, and then let's discuss, and then probably let's...
B
Yeah, but yeah, this will need lots and lots of discussion, I can see. And, I mean, I may implement, I'm talking about this: if you see, this would be a very crude way of creating a map of collectors, a map of maps, which has this thing. It would be a very crude way just to make things work.
B
I mean, I don't think it's a low priority. I definitely think it would be good to have; I won't even say just good to have. I think with metrics coming in, it will actually have a higher priority. With traces, I think the requirement was not there, because there it's probably just about having that asynchronous export, if we can meet that requirement.
B
In terms of the API changes and the SDK changes, because it will have those API changes, let's discuss next week. I think probably Tuesday or Wednesday we can discuss where we are on this, and if I have more clarity on this, I think there is something which I can make progress on. I think, let's start working together on this.
B
Some parts are taking more time. I think it's good that the parts which will probably take relatively less time, we can complete first, if there are not many other dependencies, and then probably we can switch back to that other stuff. So that's what I was thinking, and thanks for understanding, yeah. Okay, do you have anything else to discuss?