From YouTube: 2022-03-18 meeting
A
I think I just created a kind of example of how the synchronous instruments would be collected. So in this example, you can see the screen, right? Yeah, okay.
A
On the left-hand side you'll see some measurements which are getting recorded through some instruments which were created. Say I have created an instrument of type counter, and the aggregation configured for that instrument would be, say, sum, so I want to take the sum of all the counters. These counters, as an example, are the number of HTTP requests, along with the type of those requests which are coming. And this is the synchronous storage, where there are three data structures, or three storages, which we are using.
A
One is a delta storage. The delta storage contains all the new or live measurements which are coming; those get stored in the delta storage. It's basically a map where the key is the attributes of that measurement and the value is the aggregation.
A
So take an example. When I get a first record, where there are two HTTP requests to be recorded, in the delta I'll have, I've just written it as 2g. That means two GET requests: the g is actually the key in this map and the two is the aggregated result. The final aggregate is the total number of HTTP requests of type GET. So next time, if four come, then this will get incremented.
A
It will be six for that key of GET, and subsequently, when we get three POST requests, it would be 6g and 3p. So this is all fine till here. Again I got another request of DELETE, so 6g, 3p and 5d. And now there are two different collectors.
A
Suppose one of them is a delta collector and one of them is cumulative. The delta collector wants only the delta, only the new metrics since the last time it invoked the collect call; between two collect calls, this collector only wants the deltas. The cumulative one wants all the metrics right from the start of the SDK: right from when the SDK got started, it wants the cumulative aggregation for the metrics for which it wants to get data.
A
So let's assume that collector one wants to get the delta, so it invokes the collect call. As soon as the collect call comes into the synchronous metric storage, first of all we take all the metric information from the delta, remove it from the delta, and copy it to the unreported stash. This unreported stash has a separate entry for each of the collectors: it holds whatever measurements or metrics are still not reported for those collectors, meaning those collectors have not yet collected those metrics.
A
So we know there are two collectors, c1 and c2. How do we come to know that there are two collectors? If you remember, you asked me the question of why we are sending the list of all the collectors in the collect call; that is how. The collect call comes through the meter, so it actually goes from the collector.
A
It goes to the meter, and the meter has all the information about which collectors are configured, so it will send the list of all the collectors, and the storage will create an entry for each of those collectors; it will take this delta from the delta storage and put it in for all of them. This is a shared pointer, so there is only one entry of this, but it's a shared pointer for all these collectors to the delta measurements which were removed from here.
A
So that's the first part which we do. The second part is: we know that we need to collect the metrics for collector one. So again we remove this information from the unreported stash, because now we are going to report for collector one.
A
So, I mean, it may not make sense as of now why we are using three different storages, but I think, as we go through, you will get more understanding of why we need these different storages.
A
Yes, exactly, spot on. This is basically required for the cumulative case. Actually for delta, even if you clear it from here, we are totally fine. I don't know, I'm not doing it, but if you see it's a delta, you can just clear it from here and we don't need to do anything else. But as of now we are storing it here, which is fine. I think we can.
A
One is a mutex for the delta. In the first stage, when we are clearing up the delta, at that time we have to ensure that there are no more records getting recorded, so this has to be protected while we are copying the metrics from delta to unreported; after that, this delta storage can accept records again, we can start to insert. These two, the unreported stash and the last reported stash, have to be protected, I think, for the complete collect call, and you have to ensure that there are no simultaneous, parallel collects for a given instrument. So this storage has to be maintained for each instrument.
B
I have one question before going too far. So let's say my collector has called collect, and now, hey, actually...
A
Yeah, so I think we started discussing how the synchronous storage is currently being implemented: synchronous instrument collection, basically for both cumulative and delta collection. So either we want to collect only the delta records or we want to collect all the records. I just put an example on our metrics architecture document.
A
So I think I was just going through this with Asan; I had just started saying that there are three different storages which we maintain inside the metrics SDK. There is a delta storage which only stores the delta information, all the new metrics which are getting collected for a given instrument.
A
So this is just an example we have taken here: there are HTTP requests which are coming. If there are two GET requests, they will get stored in the delta storage, because that maintains the delta, the new incoming metrics.
A
So that means it has to do a sum of all the measurements which are coming. Like we got two GET requests, so it's 2g, two GETs. When the next request comes, it will just get added in this delta storage and it will become six GET requests. Next time, if three POST requests are coming, a new entry will come: six GET requests and three POST requests.
A
So what we do, when the collect request comes, we clean up the delta storage, which contains all the new measurements, all the new metrics which have been received since the last collection. Assuming this was the SDK start and there was no last collection, we are just maintaining all the delta information in it; we clean it up from here and copy it into the unreported stash.
A
So basically it's a shared pointer, and it will be copied into the unreported stash for all the collectors, because these new metrics have not yet gone to any of the collectors. We get the collector information as part of the collect call, all the collectors which are configured, and the delta gets copied here.
A
It's a shared pointer, all of them holding the same pointer, that we will maintain. The next step would be: now we know that, even though there are two collectors configured, the actual collect call is only for collector one. So we clean up the metrics from the unreported stash for collector one and we copy them into the last reported stash.
A
That's another storage, the last reported storage, and we'll send those metrics back to collector1. So collector1 gets all the data since the start of the SDK, or actually it's a delta since the last collection for collector one. And then there is another collector, which can be of type cumulative. Cumulative means it just needs all the aggregated metrics since the start of the SDK. The delta means all the new metrics since the last collect request, but cumulative would be:
A
I just need everything from the start; I'm not concerned with when I last collected, I just need all the cumulative data. So this is how the storages look when the first collect call goes: there is an unreported stash, which is for c2, because c2 has not really sent any request, and then there is a last reported stash for c1, because c1 has sent a collect.
A
Yeah, so this was what I think we discussed earlier, so Asan had some...
B
So for the collector, let's say for c1, during this process, while the information or data is moving from delta to the unreported stash, it's not...
A
Yes, so that's why I said there are two different mutexes. The first mutex will be synchronizing the delta, and that one would be held just for a small time; once this part is protected, after this...
A
It's not just for collector one. Actually, I think this is quite sequential as of now, so once any of the collectors is sending a request, the other collectors have to wait till this process is completed.
A
No, no, we don't. These collectors may be configured to collect the data at different intervals. So it may happen that, even though both want delta, one may be asking every 15 minutes and the other collector may be asking every minute, so the actual data would be different for both of them.
A
Now we get new records, say five GETs and six POSTs. This delta stash is empty, so we'll just put that here, and now collector one will again send another request, as an example. So again we'll clean up this 5g and 6p from here and we'll put it in collector one's list. This basically is a map of lists: for each of those collectors, it contains a list of all the metrics. So the first metrics entry was 6g, 5p for collector2.
A
So again, if you see, collector one just gets the delta records which were sent in between its two collect invocations, but for c2 we are maintaining a list, the unreported stash, because c2 has to get all the requests. For collector 2, whenever the request comes, we have to use this unreported stash to create a final aggregated metric and send it across.
A
So we copied this 10g, 5g to both c1 and c2. So c2 has three different elements in this array, all these aggregated metrics it has here. So it has to do a merge of all these three: the 6g, 5p, these ones. With the merge, I mean for each of these HTTP request types we have to just do an addition, a sum, and store it in c2 here.
A
And now we got two more requests: two GETs and three DELETEs. So we do the same thing.
A
This is it, yeah. So we got these two requests, which is fine, so we just store these two GETs and three DELETEs in the delta. And now for the second request, assuming that another request from collector two comes, we will remove it from the delta, copy it into c1 and also into c2's unreported stash, and the next stage would be that we'll remove it from c2. So this has to change again.
A
So then the next step would be: remove that entry from c2, and do a merge of the records in c2's unreported stash and the last reported stash.
A
So we got a 5g; this first entry is the same as the last one. So now we got a collect, as an example, from collector one. So again we remove the 5g from here, we copy it into collector one and collector two, and the second step would be to do a merge of all the metrics which we have collected for collector one, because we want to collect for collector one: do a merge of all this metrics information for collector one, ten plus two plus...
A
And send that one back to collector one. So this is how the flow will keep on working for synchronous, basically for synchronous delta.
A
Okay, so this is how it works right now: if you see, whenever the request comes for record, we store that in the attributes hash map. This hash map is the delta.
A
Yes, so any request which is coming, we basically just store in the attributes hash map, for whatever type is coming. So if you see, this is the right place to look at it.
A
So here, for record, you can take the example: the long value would be two for the first one, and the attributes would be GET, so only one attribute is there, the GET. So the attribute is the key here, and this long value would be the actual value which we are going to store. This is how we store it, and this is how the collect is implemented.
A
Take all the unreported metrics... okay, it's gone, yeah. So we store it, and then we take it from the unreported list for that collector and do a merge here. So somewhere we do a merge.
A
It depends: if it is just the delta aggregation, then we just take it from the unreported stash and put it in the last reported stash. But if it is cumulative, we have to again do a merge with all the entries, all the metrics which are already there in the last reported stash. So we take everything from the last aggregated map.
A
So yeah, just go through that; probably, if you want, I can go more into depth presenting it. It should be clear once you go through this logic. And this is only for synchronous; the asynchronous metrics storage logic has to be a bit different, because in asynchronous metrics storage there is no record request coming like this, there are no record calls. If you see in this, it's not yet implemented, but there would be just a collect call coming from the metrics reader or metrics collector.
A
There will not be any records, because it's asynchronous, so whenever the collect is coming, we have to send a request; so there is a callback in the case of asynchronous instruments. That callback is, yeah, this one, the measurement callback. So whenever the request for collect comes from any given collector, we have to invoke this callback to collect the actual metrics from the application, and then do the aggregation and then send it back to the collector.
A
We have to implement this for both delta and cumulative. I think we have to work out the logic; we have to think it through and then implement it. And most of this logic, what you see here, as I said earlier, is not something which we have invented; it is something which has already been used in Java, and I think that...
A
Yes, that's true, we do have to do lots of copies here. If there's something we can optimize, that's a very good point. Even though these are not real copies, when we store the same metrics information we are collecting from delta we are using shared pointers, but even the shared pointers have to be copied.
A
So even though there's no real data being copied, we probably have to see if we can somehow optimize these copies. I don't have any idea right now how we can do that, but I think that's something we have to look at; probably good to benchmark, see how much time it is taking and whether that's acceptable or not. I think it's something we can definitely improve over time, but a totally valid point: there are copies.
A
And, not directly related to this, but as of now I'm having some issues building this. It's not working on GCC 4.8, because it does not support all the required C++ features.
A
It has issues, at least for rvalue references, I know; I've seen that at least using move semantics it does not work seamlessly. Earlier also, I think, during the trace implementation, there were issues using move semantics and rvalue references. So, but yeah.
A
I think if we have to spend more time fixing the bugs for GCC 4.8, we may want to make a call; it's getting obsolete, and we should check whether there are any real distribution versions, Ubuntu or Debian versions, which would still have that GCC.
D
Works for me. And how about...