From YouTube: 2021-05-06 meeting
B
So are we now considered stable with metrics then, or is that still a no?
A
It's not stable, so we have the experimental version. I think the data model part is at something like 0.9, so we're trying to get to stable very soon for the API, and the SDK spec will be marked as experimental and bake for a while.
A
Update: there are still some small API clarifications. For example, people are saying: hey, it would be great if you could add to the readme how people pick the instruments, like a decision tree or something. So currently my hyper-focus is on the SDK part, because I want to kick off the SDK as soon as possible, and potentially we can split the work, so I can have multiple sections and folks here can help contribute to different sections.
A
That's why I want to prioritize that, and this one is not a blocking issue. It's just helping us clarify further, but it's not going to change the shape of the API. So that's why I decided to hold this one for a while. Now I'm working on the metrics SDK spec skeleton, and I also examined all the metrics issues in the spec repo last night, so I put up all the items that I believe we want to discuss.
A
These items do not mean we must solve all of them. For some of them we can probably decide, okay, we won't do it for now, and give an ETA: either we won't be able to do it by the end of this year and it can be an additive change later on, or we'll do it, but not now, probably September or sometime; or this is something we must do at this moment. So I put all the issues here, and we probably don't have enough time to go through all of them.
A
So my ask is: if you see any other issue that you think is important that we never captured here, please file an issue and tag it. I'll look at all the metrics issues every day and make sure all the voices are captured here in the board. This will be the single source of truth, and you can see we have more SDK-related issues here.
C
Quick question about that: what is the status, or what is the plan, for the time abstraction?
A
So reply to the issues, and I'll watch over all the issues on a daily basis.

C
Okay, cool! Thank you!

A
So that's the overall update, and the next one is a fairly simple thing. We discussed that we're going to flip the specs.
A
Previously I was working on something called new_api spec, just to avoid the PR mess, and now it's almost in shape, so we're going to make this the metrics API. I got a lot of approvals yesterday, but then Armin called out that flipping the file might mess up the git blame and history. So instead I created another PR as a first part. My ask is: please take a look and give approval, and hopefully we can merge it as soon as possible, because it is not a real change.
A
It's just editorial, like fixing things and renaming files. Because we're working on the SDK, we have a dependency: the SDK will have some hyperlinks to the API, and if we don't merge that, it will be very messy to work on the SDK skeleton part. So please help. And the third topic is a small one.
I think when I sent the Counter and UpDownCounter PR, there were comments and discussion where we decided it's not a blocking issue.
A
I see a similar thing again, but based on the previous discussion, I don't think we have a good consensus on this question: do we think there's a natural aggregation function across all the dimensions, or is that just not the case? It seems like when we were talking about aggregation, we had a general belief that if you have multiple dimensions, the additive property will apply to any of these dimensions with no discrimination.
A
Do we still think the majority of cases will be captured by a single aggregation across all dimensions, or is this something we should consider but not now, or is this something that's just out of scope? The scenario here is: you have multiple cars, each car has a battery, and the batteries have different cells. You want to measure the voltage. It makes sense if all the other dimensions are held fixed and you only add up the voltage across the different cells.
A
So you get the total voltage of the battery pack. But if you add in the voltage from a Ford F-150, it just doesn't make sense. I don't know whether it's a corner case or whether this is very common.
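The battery scenario above can be sketched in code. This is an illustrative sketch, not anything from the meeting: the labels, values, and the `sum_away` helper are all hypothetical, chosen only to show that summing is meaningful when exactly one label (the cell) is aggregated away and the rest are held fixed.

```python
from collections import defaultdict

# Each measurement carries the full label set. Summing is only meaningful
# when we group by every label except the one we aggregate away (here,
# "cell"): cells in series add up to a pack voltage per car.
measurements = [
    {"car": "model-3", "cell": 1, "volts": 3.7},
    {"car": "model-3", "cell": 2, "volts": 3.6},
    {"car": "f150", "cell": 1, "volts": 3.8},
    {"car": "f150", "cell": 2, "volts": 3.9},
]

def sum_away(points, drop_label):
    """Sum values after aggregating away only `drop_label`."""
    totals = defaultdict(float)
    for p in points:
        key = tuple(sorted((k, v) for k, v in p.items()
                           if k not in ("volts", drop_label)))
        totals[key] += p["volts"]
    return dict(totals)

# Meaningful: per-car pack voltage. Dropping "car" as well would add
# volts across vehicles, which has no physical meaning.
per_car = sum_away(measurements, drop_label="cell")
```

Calling `sum_away` with `drop_label="car"` instead would type-check just as well, which is exactly the problem being raised: nothing in the data says which aggregations make sense.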
B
I feel like, in general, it's harder to know what aggregation would make sense unless you know what the dimensions are, right? It strongly depends, because, like you said, it doesn't make sense to sum your volts across two different cars, even if they're the same brand; that doesn't make any sense. So unless you had some sort of meta-information that was able to tell you these are aggregatable, for lack of a better term, like whether these labels can be aggregated across.
E
The label in this case is the cell number, and I've been thinking of this intrinsic or natural aggregation function as perhaps applying only to the application-level labels. However, Riley, I congratulate you on finding a very devious example. Voltage is something I would think of as a gauge. It's a measurement, a physical quantity. It's not a count of anything, but you do add them when they're in series.
E
So adding those cells is an operation you could do, but I would think of the operation of analyzing those cells as one where you would probably average: say one of those vehicles has an unhealthy scenario where there's high variance in its gauge for the cell voltage. So I would say that you can average out the gauge, which means you can average away the cell number, and at that point you're left with a resource-level average.
E
Now, your averages are very different, but these are up-down counters, and if you want to compare the averages of a person whose queue sizes are counted in two dimensions with the averages of a person whose queue sizes are counted in three dimensions, you need to erase one of the labels from the three-dimensional set of data to get down to two. At that point, you should apply this intrinsic operator.
E
For an up-down counter, I believe that would be to sum. So you're going to end up reducing the three-dimensional data down to two-dimensional using the intrinsic, and now you've got the same data. And I'm still questioning whether there are rules about when it's safe to average and when it's not, which I think is off topic now, but probably important.
A
Yeah, so I guess this is less a concern for the API and more for the data model, but I just want to mention it and remind people: it seems we don't have a very solid understanding of this topic, and I don't have a full picture of whether, if we figure out a better understanding, that would mean we might go back and change some of the design. This is my worry. As long as we have a reasonable understanding and can confirm it's not going to impact or flip our design in the future, we're fine.
C
So in the example, I agree that it is not really a counter. But can there be any scenario where aggregating or summing up everything, without using any tags, doesn't make sense?
C
To me it makes sense to aggregate it without any labels or attributes, because that means all of the people who participated in your survey, so it has a meaning. But in your example, I guess what it tries to demonstrate is that...
C
...to sum up all of the cars, it doesn't matter; if you don't have any tags on it, then the example must have no meaning. And I'm not sure if that holds in terms of counters; I'm not sure I can give you an example like that.
A
Okay, so just a time check here. My suggestion is that we probably want to follow up on the PR comment and get some consensus there. Otherwise, I figure we're advancing with something left behind that we don't really understand the consequences of, and it might bite us later.
B
So the idea would basically be that resource attributes, anything that's a resource attribute, could potentially be aggregated across, but a metric attribute would have to be manually determined.
E
Well, I think the proposal has been that if you have a gauge point, you're either going to build a gauge histogram or you're going to average.
E
Somehow, because semantically there was some meaning there; and then if you have a sum point, you're going to add, because that was what was meant there. The reason I've been having this sort of similar conversation with my team at Lightstep is that there was a point in our user interface where the default was to give you a mean value when you group, because it seems like a good default once in a while.
E
Then the problem I've seen, which is immediate when you have mean value as your default, comes up with these up-down counters. We've talked about things like: what's your current memory size, or how much memory are you using? There's a memory number, and the total memory available on your system is probably something that you can sum up. Then you've got memory that's free, memory that's in use, memory that's sitting in some free list; you've got different categories of memory.
E
Now, if I just say "show me memory usage" and don't group by the state, and I give you a mean value, what I'm showing you is the mean of all memory divided by the number of classes of memory. So it's like a quarter of the memory, because there are four classes of memory, and that, to me, tells me that something has gone wrong.
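The memory pitfall described here can be made concrete with a small sketch. The numbers and state names below are assumed for illustration only; the point is that a mean over the un-grouped "state" label silently becomes total divided by the number of states.

```python
# Memory usage reported with one series per "state" label. A sum point
# reproduces the total; a default mean over the un-grouped label yields
# total / number-of-states, the misleading "quarter of memory" above.
memory_by_state = {  # bytes, hypothetical values
    "used": 4.0e9,
    "free": 2.0e9,
    "cached": 1.5e9,
    "free_list": 0.5e9,
}

total = sum(memory_by_state.values())  # meaningful for an up-down sum
mean = total / len(memory_by_state)    # misleading default when grouping
```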
E
Well, the current state of affairs, as I understand it, is that you would choose a gauge point when you think of this as a measurement that you might average, and you would choose an up-down sum data point, which will be a sum that is non-monotonic and cumulative, when you want to express something that is added. The reason we've talked through this, just to make sure everyone has an example in mind, is that you can have your queue be instrumented in two dimensions or three dimensions.
E
That's a verbosity level. If you want detailed metrics, you're going to have a three-dimensional queue metric. If you want less verbose metrics, you have a two-dimensional queue metric. And if you ever want a dashboard that combines data from the application that's running verbose metrics with the application that's not running verbose metrics, you're going to end up wanting to do this dimensional alignment: erase one of the labels, and then either average or sum according to the point type.
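The dimensional alignment described here can be sketched as a small helper. This is an illustrative sketch, not spec or SDK code; the labels and the `align` helper are hypothetical, showing only the rule that sums add and gauges average when a label is erased.

```python
from collections import defaultdict
from statistics import mean

def align(points, erase, point_type):
    """Erase one label, then combine per point type: sums add, gauges average."""
    groups = defaultdict(list)
    for labels, value in points:
        key = tuple((k, v) for k, v in labels if k != erase)
        groups[key].append(value)
    combine = sum if point_type == "sum" else mean
    return {k: combine(vs) for k, vs in groups.items()}

detailed = [  # verbose instrumentation of queue size: (labels, value)
    ((("app", "a"), ("queue", "q1"), ("shard", "0")), 5),
    ((("app", "a"), ("queue", "q1"), ("shard", "1")), 7),
]

# Sum point: per-shard sizes add up to the per-queue size, which can now
# be combined with a two-dimensional series from a less verbose app.
aligned = align(detailed, erase="shard", point_type="sum")
```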
A
Yeah, so I did a first pass following up on this, and we can probably use next Tuesday's data model meeting to push more on this. I figure this is an important topic.
A
These are the concepts I want to explore that we should introduce. I think MeterProvider is a clear winner; people know we need that for sure. What we didn't hammer out last time is: do we need two different processors? One has access to the raw information; the other one sits after the aggregation, so it has access to the aggregated data. My current proposal is that we need both, and here's the reason.
A
When I look at the issues, I see people asking to be able to enrich the metrics, or to be able to access context and baggage, and I think having this measurement processor is a generic way to allow people to do that. Regarding the details, like whether they want to take the data and just dump it somewhere, or take the data and aggregate it in memory: that is something different SDKs can implement without having to block on this.
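The measurement-processor hook being proposed can be sketched roughly as below. All names here are hypothetical, not the spec's: the sketch only shows the shape of the integration point, where each processor sees the raw value, its attributes, and the ambient context, and may enrich, drop, or forward it.

```python
class MeasurementProcessor:
    """Hypothetical hook: sees each raw measurement before aggregation."""
    def on_measurement(self, value, attributes, context):
        raise NotImplementedError

class EnrichFromContext(MeasurementProcessor):
    """Pulls baggage-like entries from the context into the attributes."""
    def __init__(self, next_proc):
        self.next = next_proc
    def on_measurement(self, value, attributes, context):
        enriched = {**attributes, **context.get("baggage", {})}
        self.next.on_measurement(value, enriched, context)

class CollectSink(MeasurementProcessor):
    """Stands in for an aggregator or exporter at the end of the chain."""
    def __init__(self):
        self.received = []
    def on_measurement(self, value, attributes, context):
        self.received.append((value, attributes))

sink = CollectSink()
pipeline = EnrichFromContext(sink)
pipeline.on_measurement(1, {"route": "/home"}, {"baggage": {"tenant": "x"}})
```

Filtering, enrichment, and in-memory aggregation would all be implementations of the same interface, which is the "generic way" being argued for.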
A
If we have this interface but we don't have that, and later we have the need for something else, we'll have to come back and edit it anyway, and that means the SDK has to expose some other interface for people to achieve that behavior, and then they're going to migrate. This is my worry, and that's why I want to see if we should put it here. Again, I'm struggling with the names. I tend to call them measurement processor and metric processor, because these two concepts, measurement and metric, seem to be established.
A
However, when I look at the data model spec, we have different terminology. Let me give you an example: in the data model, we're calling it the event model and saying this is the event; in the API, we're saying this is called the measurement, the raw data that we retrieve from the API.
A
Yeah, and in this way I think it's still in alignment: the measurement captures the value and some context. The context is a given set of attributes, or the implied attributes from the context or the baggage, or anything that application developers might want to enrich it with. But it still sounds like a measurement to me, because the core of this data point is the value; the rest is just additional information associated with that value.
B
What you're saying makes sense, and I guess, to the question that's there: would the aggregators be implemented as a measurement processor? Would that be a valid case, or would this mostly be focused around the enrichment?
A
I think this is just the integration point where people can hook up whatever they want. If they want to filter out data, enrich the data, or do aggregation, they should use this measurement processor. That's why I didn't put the aggregator in as a top-level concept: I think we can start with this basic concept and make the aggregator a language-SDK-internal thing. When we eventually figure out how we can make this more consistent and expose it, we can, for example, have some internal structure saying, hey, these are the built-in aggregators, and people can build a composite aggregator based on them, or they can take the Lego pieces and put them together.
Now, one can expose additional components, but these are the most essential ones. For example, I'm thinking the metric processor is like the engine, whether it's a gas engine or a diesel engine.
F
One of the things I think we should call out is that we've definitely found that the span processor model on the tracing side has ended up being a little bit problematic, in that compositionality is not something that span processors have inherently, meaning that if you want to do any sort of compositional span processing, you have to kind of figure it out yourself. It might be worth, on the metrics side, making sure we bake that in right up front, so that compositionality is a part of that interface, whatever that ends up meaning.
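One way to bake compositionality in, sketched here with hypothetical names only, is to make a composite processor itself a processor, so fan-out and chaining nest uniformly instead of each user reinventing them, which is the span-processor gap being described.

```python
class Processor:
    """Hypothetical processor interface."""
    def process(self, point):
        raise NotImplementedError

class Multi(Processor):
    """Fans one point out to several child processors; composites nest."""
    def __init__(self, *children):
        self.children = children
    def process(self, point):
        for c in self.children:
            c.process(point)

class Record(Processor):
    """Leaf processor that just records what it sees."""
    def __init__(self):
        self.points = []
    def process(self, point):
        self.points.append(point)

a, b = Record(), Record()
root = Multi(a, Multi(b))  # a composite is configured like any processor
root.process({"value": 1})
```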
F
Yeah, I do agree. I think this is a good abstraction: measurements in, metrics out. I think that's a really good way to think about that idea, something we hadn't had in the previous SDK quite so explicitly.
A
Yeah, okay, so I'll continue. Along the way I want to quickly explain some of the thinking, so when you review the PR you will understand some of the underlying thinking, like where I'm struggling and where I need help.
Another thing I want to mention here is: I think we should provide support for one MeterProvider having many measurement processors in parallel, like many pipelines.
A
I only push the temperature every 15 seconds, but blood pressure is critical: I want it every single second. I think at the last meeting Bogdan also mentioned that at Google they have scenarios where some critical metrics, like SLIs, have to be pushed every single second. So my proposal is we should support, from the beginning, multiple processors and multiple exporters running on the same MeterProvider, for different reasons.
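The multi-pipeline proposal can be sketched as configuration. Everything below is hypothetical, not an actual SDK API: it only shows one provider driving two independent pipelines so a critical instrument exports every second while a cheap one exports every 15 seconds.

```python
class Pipeline:
    """Hypothetical pipeline: which instruments it covers, how often, where to."""
    def __init__(self, instrument_names, interval_s, exporter):
        self.instrument_names = instrument_names
        self.interval_s = interval_s
        self.exporter = exporter

class MeterProvider:
    """Hypothetical provider holding parallel pipelines."""
    def __init__(self, pipelines):
        self.pipelines = pipelines
    def interval_for(self, instrument):
        # Effective export interval is the fastest pipeline covering it.
        return min(p.interval_s for p in self.pipelines
                   if instrument in p.instrument_names)

provider = MeterProvider([
    Pipeline({"temperature"}, interval_s=15, exporter="push-slow"),
    Pipeline({"blood_pressure"}, interval_s=1, exporter="push-fast"),
])
```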
E
Can I just make it clear that I've been thinking about this in terms of two different types of processor? The one that we just discussed, this enrichment processor, is one that might take variables out of the context and add them to the event, so enrichment decorates an event. And then, at least in the old model that we had, there's what you're talking about: a processor that can have a variable frequency of export, for example, or different dimensions configured.
E
I'm
not
disagreeing
with
anything
you
said
at
all,
but
I
want
to
be
clear
and
the
only
challenge
we
know
about
is
removing
labels,
and,
if
your,
if
your
enrichment
processor
can
remove
labels,
it
might
mean
that
you
end
up
with
these
events
that
look
duplicate
and
that's
actually.
Okay
in
a
synchronous
event.
Context,
because
you
can
just
say
this
is
a
synchronous
event
that
happened.
I
may
have
erased
some
labels
that
make
them
look
like
a
bunch
of
duplicate
events
just
happened,
but
there's
still
events
that
just
happened.
E
It's when you talk about asynchronous events, and you start to observe multiple duplicate events with the same label set, that it becomes a question of what that really means and whether it was an accident or intentional. So when we talk about enrichment for asynchronous events, there are a few more questions that come up.
A
It's based on observation, then; it looks like a lazy object somewhere, like a handle inside the process. When you need it, you grab the value, the callback will be triggered, and it gives you that. The box here, I have a hard time trying to capture it, but the idea I want to express is: this is a state machine. It will have some values that are already available and the exporter can grab, and it also has some lazy values that you can grab, and grabbing them will trigger the asynchronous instrument fetch.
A
A more complex scenario I mentioned somewhere here is that we might allow different frequencies for collection and for reporting. For example, we might say: we want to collect the metric data asynchronously every one second, like calling the operating system and getting the CPU statistics every second, but for reporting we only send the data every one minute, because just sending every second won't be very efficient. We won't be able to batch the data, and the latency is not a big concern.
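The decoupled collect/report frequencies can be sketched as below. This is an illustrative sketch with hypothetical names, driven by a tick counter instead of a real clock to keep it deterministic: sample a callback every `collect_every` ticks, hand a batch to the exporter every `report_every` ticks.

```python
def run(callback, ticks, collect_every=1, report_every=60):
    """Collect frequently, export batched results on a longer interval."""
    batch, exported = [], []
    for t in range(1, ticks + 1):
        if t % collect_every == 0:
            batch.append(callback(t))    # e.g. CPU stats, once per second
        if t % report_every == 0:
            exported.append(list(batch)) # one batched export per minute
            batch.clear()
    return exported

# Two simulated minutes of one-second samples: two exports of 60 points.
batches = run(lambda t: t, ticks=120)
```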
E
Yeah, Bogdan always insisted that it was useful to have a per-instrument configuration for export frequency. I would like to make sure that we don't ever export part of an instrument on one frequency, like one set of labels on one frequency and a different set of labels on another frequency for the same metric name. I think that makes for trouble, but maybe there's a reason.
A
Yeah, so in the current skeleton I'm trying to propose that we allow multiple measurement processors and multiple metric processors, and by default the frequency will be synchronized. That means, for the pull processor, when the scraper is trying to take the data, it will trigger the fetch from the state, and this will be the latest state; it will trigger the actual callback from the asynchronous instrument. But people can configure something, saying: hey, for this one I don't want it to be too lazy.
A
I want to report it every one minute, but for collection I'm doing it every one second. They will be able to do that somewhere here, and having those basic skeletons allows us to do that in the SDK without having to expose a clear interface. Once we figure it out during the prototype, we can communicate and see what Java is doing, what .NET is doing, what Python is doing, and then we can decide whether we want a consistent way. Otherwise it can...
A
Okay, and the last piece is the metric exporter. Here the outstanding thing I want to call out is: we have the push metric exporter and the pull metric exporter, and when I look at the GitHub issues, there has been high demand. One thing is people mentioning OpenCensus: they're able to use both in the same place. So given that the OpenTelemetry project has a goal, and we also agreed, that we have to support the OpenCensus scenario...
A
We don't have to play the one-to-one API mapping game, but we have to enable the scenarios so OpenCensus customers can migrate. So here I put the proposal that we should allow multiple metric exporters, whether push or pull, to be mixed and configured on the same MeterProvider. That's the key here. And with that, I think I've gone through all the key points in this PR; it's just a skeleton to get us started. So, any other questions on the multiple exporters?
A
Yeah, and in terms of the implementation, I think the most dumbed-down version can be to just duplicate the work, so each frequency will have its own copy of memory. Later we can do more optimization, and when we try to do the optimization, we might figure out, hey, there's a particular thing we can expose, like an aggregator or some intermediate state, and that can probably help us do it in multiple stages.
A
By the way, the more I look at this, the more I find it very interesting: it's similar to a video game or graphics engine. If you look at the in-memory state, it's basically a frame buffer. You have some rendering logic based on the user activity; you put something in the frame buffer, and at a certain moment the display decides, okay, I'm going to refresh, so I'm going to grab the information from the frame buffer. And you can also think about things like double buffering.
A
Okay, so that's the thing I wanted to explain here. With that, we can go back to the agenda doc. Any other topics we want to discuss?
A
If not, there's one ask I want to make. If we get this PR merged, where we flip the API specs to make the new API the API, then I'll update the SDK PR by putting in links to the API doc, since some concepts will need to link back to the API, and we can spend time debating, and hopefully this skeleton can be merged. After that, there will be two things we'll have to divide and conquer. One is, in order to have a very concrete SDK spec...
A
We have to do a real prototype. I cannot just sit here, imagine things, and give a reasonable spec. So I'm doing a lot of prototyping in Python and C#, but I need more help from you guys. My proposal would be, once the skeleton is there, we can divide and conquer: for example, I can pick the MeterProvider and hammer out the Views, and probably, Victor, you can help pick something else, and Josh...
A
I'm not sure about your availability, because I know you've been busy with other things, and also the data model will have some remaining issues that we want to tackle as soon as possible. My suggestion is, after this PR gets merged, I'll list several things in the project tracking by creating issues and assign something to myself; for the other items I'll put up a list, and I want people here to help.
A
Yeah, I'll explain. So once we get this API PR merged, I can update the links in the SDK skeleton PR, so that when we mention some term or concept, we can link to the API spec. Okay.
A
And after that, once we've got the skeleton, I'll create issues for every section of the SDK spec and call out what the expectation is. With that, I'm going to assign something to myself, and for all the others I'll put up a list and ask people here to help out. And yes, when we fill in the details, we have to do the prototype. So my ask is: you pick a particular language, you work with the SIG, you do the prototype, and based on the learning you send the PR, and then we can discuss it in the next meeting.
E
The areas that we hadn't done in that particular SDK had to do with enrichment. We had two prototype PRs that were put up and just never got merged. And then there's the thing I mentioned that Bogdan always asked for, which was a per-instrument frequency; we have an entire-SDK, single-frequency setup for the OTel Go SDK, and I can see how that introduces some complexity that I hadn't considered.