From YouTube: 2022-05-12 meeting
A: Yeah, definitely, but at the end I...
A: Everything makes better sense. So, Aaron, having reviewed several of your comments, I added pretty much a fix for every one of them.

B: Oh nice, nice, yeah.

A: I still have to go through the Prometheus things and the bug that you found with the previous point.
B: Yeah, I was gonna say, I think the main... I mean, we could talk about the PR when everybody's here, if you want. That might be... or we could go ahead, yeah.
B: Yeah, so the main thing was: now the individual aggregations are returning the NumberDataPoint or HistogramDataPoint, and then, when exponential histograms get added, we'll have one that returns an exponential histogram data point.
B: The only problem is that NumberDataPoint is used for both Gauge and Sum. I think those are the only two, so you still have to know...
B: You still have to know whether your aggregation is supposed to make a Sum or a Gauge, and you know that at the aggregation level. But the problem there is that you're having a flat view of single individual data points for an attribute set, whereas the ViewInstrumentMatch object is what corresponds to a single Sum, Gauge, or Histogram, which has... or sorry, I think the term in OTLP is "data". And then the actual points correspond to a single attribute set.
B: So the problem I'm thinking of is... right now, I think there's an isinstance check higher up that looks at the aggregation type to figure out which one of Sum, Gauge, or Histogram to put, whereas we actually have that information already at the aggregation level. So my preference would be if we could somehow, you know...
B: I think that isinstance check is a bit of a code smell. If we could return it at the level where we've already got that knowledge, which would be the aggregation, since each one corresponds to a single point or data type.
A: Okay, but where do you suggest that we move this?
B: Into the ViewInstrumentMatch, yeah, that would work. And then, for instance, we could add on the aggregation a class method which would know that it needs to instantiate a Sum and combine it with the data points.
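The class method being described might look roughly like this. This is a minimal sketch, assuming hypothetical names: `make_data`, the aggregation classes, and the stand-in data classes below are illustrations, not the actual opentelemetry-python API.

```python
from dataclasses import dataclass
from typing import List

# Minimal stand-ins for the OTLP-shaped "data" classes discussed above;
# the real SDK classes carry more fields (timestamps, attributes, etc.).
@dataclass
class NumberDataPoint:
    value: float

@dataclass
class Sum:
    data_points: List[NumberDataPoint]
    is_monotonic: bool

@dataclass
class Gauge:
    data_points: List[NumberDataPoint]

class Aggregation:
    """Each concrete aggregation knows which OTLP data kind it emits."""
    @classmethod
    def make_data(cls, data_points):
        raise NotImplementedError

class SumAggregation(Aggregation):
    @classmethod
    def make_data(cls, data_points):
        # This aggregation knows it produces a Sum.
        return Sum(data_points=data_points, is_monotonic=True)

class LastValueAggregation(Aggregation):
    @classmethod
    def make_data(cls, data_points):
        # This aggregation knows it produces a Gauge.
        return Gauge(data_points=data_points)

# The caller (e.g. something like ViewInstrumentMatch.collect) no longer
# needs an isinstance() check on the aggregation type:
points = [NumberDataPoint(value=4.0)]
data = SumAggregation.make_data(points)
```

The point of the sketch is that the Sum-vs-Gauge decision lives next to the aggregation that already knows it, instead of in an `isinstance` dispatch higher up.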
B: Well, that line was removed; you're using the unified view. Can you scroll down a bit?
B: Yeah, so here we're doing data points against ViewInstrumentMatch.collect(), right? So that's the data points. What could happen is that this whole block of code, for instance here for Gauge... no, no, that's fine, from 175 to 180. That could all happen in the ViewInstrumentMatch, based on a class method on the aggregation, for instance. So it would...
B: It would return the data. So it would return, like... right now, I think there's a union type variable for this, where it could be Histogram, Gauge, or Sum. Okay.
B: Yep. Or, if you want, I can make a PR to your fork.
B: Yeah, I'll... I think I have an idea; I can make a PR for that. Okay, yeah. The other thing was... the other thing was: the previous-point handling has to be repeated in all the aggregation classes.
B: It would be nice if we could somehow... because that's a bit error-prone; I think we forgot to do it in this PR. So if we could somehow keep that abstracted outside of the collect method, so it doesn't have to be a bunch of repeated code.
B: We'll do this... well, I was thinking, or have it in a different class or something. So right now this sets it once, but after the first iteration, once previous point is not None, this will never execute again.
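One way to keep the previous-point bookkeeping out of every aggregation's collect method, as suggested above, is a template method on a base class. This is only a sketch under assumed names (the real SDK's Aggregation classes differ); the concrete aggregation implements `_collect_point()` and the base class runs the shared bookkeeping on every collection, so it cannot be accidentally skipped after the first iteration.

```python
import threading

class Aggregation:
    """Sketch: shared previous-point handling lives in collect();
    subclasses only implement _collect_point()."""

    def __init__(self):
        self._lock = threading.Lock()
        self._previous_point = None

    def _collect_point(self, previous_point):
        # Concrete aggregations produce the current point here.
        raise NotImplementedError

    def collect(self):
        # Runs on *every* collection, not only while _previous_point is
        # None, so the bookkeeping survives past the first iteration.
        with self._lock:
            point = self._collect_point(self._previous_point)
            self._previous_point = point
            return point

class SumAggregation(Aggregation):
    def __init__(self):
        super().__init__()
        self._value = 0

    def aggregate(self, measurement):
        with self._lock:
            self._value += measurement

    def _collect_point(self, previous_point):
        # Drain the accumulated delta and fold in the previous point
        # (simplified to a running total for illustration).
        value, self._value = self._value, 0
        return (previous_point or 0) + value

agg = SumAggregation()
agg.aggregate(3)
agg.aggregate(4)
first = agg.collect()   # 7
agg.aggregate(5)
second = agg.collect()  # 12: previous point was carried forward
```

The design choice here is simply that repeated boilerplate moves into one place, which is the "abstracted outside of the collect method" idea from the discussion.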
B: Yeah, I don't think that's necessary. And for more context: I've heard people ask for custom aggregations in the SDKs as an extension point, maybe in a future version, and the way I'm imagining this would work is that the custom aggregation class would still have to emit a valid OTLP point, so one of those Sum, Gauge, Histogram, or exponential histogram data point kinds, or "data", whatever it's called... data types.
B: Yeah, so that's what the... or sorry, it's different for the synchronous and asynchronous instruments, isn't it?
B: Yeah, I think... anyway, we could always refactor this later, but I would just say it's error-prone to have to repeat that code in each one. Okay.
A: Yeah, I'll try to think about that when I get to that comment. I think turban.
A: I just said: okay, all right, yeah. So I'll address all your remaining comments, I'll fix the bug, and then I guess, well, we can get this merged somehow. When you opened the PR, there is this other...
A: Right, this other issue... it's much simpler, about collecting start times. I think you already reviewed there; thank you for that. Okay, I think I can address all these comments. The only one that I was thinking could be a little bit more... okay, is this.
A: To get it passed at instantiation... yeah, I guess we can do that. I can also work on that.
B: Oh yeah, but yeah, yeah. I have one outstanding question on that. So right now, the way the SDK works is that when we get the first point for a view, like a point that targets a given view, that's when we instantiate everything, so it's like a lazy instantiation. When that happens, I'm not sure if it makes more sense to use the start time as, like, the start time from the last collection interval.
B: I think it makes more sense, the first approach. Okay, yeah, it's more correct.
B: Yeah, yeah, I think so. So, for instance, if you were trying to scrape process metrics or something, and the process starts, you know, an hour into the runtime of your program, you would use the start time from whenever the interval started. I guess that makes sense. Okay, yeah, okay.
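The start-time choice being discussed can be sketched as follows. The options are (a) stamping a lazily created stream with the time of its first measurement, or (b) stamping it with the start of the collection interval, which is the direction the discussion leans toward. All names here (`CumulativeStream`, `to_point`) are hypothetical, not the SDK's actual classes.

```python
class CumulativeStream:
    """Sketch of option (b): a lazily created stream anchors its
    cumulative points at the interval start rather than at the
    wall-clock time of its first measurement."""

    def __init__(self, start_time_unix_nano):
        self.start_time_unix_nano = start_time_unix_nano
        self.value = 0

    def record(self, amount):
        self.value += amount

    def to_point(self, collect_time_ns):
        # A cumulative point covers [start_time, collect_time].
        return (self.start_time_unix_nano, collect_time_ns, self.value)

# The metric first appears long after program start; the stream still
# anchors its reporting window at the interval start.
interval_start = 1_000  # ns: start of the last collection interval
stream = CumulativeStream(start_time_unix_nano=interval_start)
stream.record(5)
point = stream.to_point(collect_time_ns=4_000)
# point == (1000, 4000, 5): the window starts at the interval start,
# not at the first measurement's timestamp.
```

This matches the scrape example above: a process metric that only shows up an hour in still reports a window aligned with collection.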
A: Okay, I guess I can fix that. So, this PR needs another reviewer, as you can see, so please take a look. But yeah, well, I don't know if anyone has any topics that they want to discuss before we go into our metrics world.
A: Nobody? All right, let's take a look at the board. Right, so this is how we are right now: we have zero being worked on and two in progress. That's all we need to complete metrics. As you know, KubeCon is next Monday, so we're a little bit short on time to get this merged, and I'll make a release. So I wanted to ask you if you'd be able to review this today or tomorrow, so that we can get these two PRs merged.
A: Great, okay. So let's assume that we can get these two PRs merged, and that will be all that we need to make a release. So, in theory, we were aiming to get a release candidate. Unfortunately, it wasn't possible to do that before, because issues kept showing up, so the idea was to have a stable release before KubeCon. I don't know if you guys think it would be better just to skip the release candidate and move this into stable right away.
B: Yeah, go ahead. Sorry.
B: Yeah, I was going to say it makes me a bit nervous as well. I feel like having an RC that we stand behind, as in we don't really plan to make any breaking changes to the public APIs at all, would be pretty good in terms of making an announcement. I feel like it's pretty close to done at that point.
A: That's a more reasonable approach. It's too bad we were unable to get our RC scheduled a couple of weeks ago, but yeah, we did our best to do this as fast as possible. Okay.
A: All right, let's say that... well, I guess that's the thing. What do you think, by the way? Sorry, I forgot to ask you.
A: I would like for us to set a deadline on when to make this stable, so that we can say: okay, this is going to be the release candidate, and after this amount of time we will make it stable. Because what I'm a little bit concerned about is that it just lingers on and we don't get a stable release within a certain time.
A: And also, I guess the community would like to know when we will have a stable release, sort of a target release date. What do you think?
D: Yeah, I think having a specific date after the RC would be a good idea.
B: Yeah, I think a single release cycle: if there are no major issues during that whole release cycle, we could just cut a stable release after that. So, okay, all right... we want people to actually try it out, if possible. I know Alex made a blog post on Lightstep's website about trying it out as experimental, but we should definitely maybe even do a blog post on the OpenTelemetry blog, if we can, to see if we can get people to try it out.
A: Yeah, definitely, we can publicize this. Okay, how much time do you think it would be good to wait before the stable metrics release?
D: It doesn't have to be exact, I guess. If we have reasons for cutting an earlier release, that's fine.
A: It's not too strict, yeah. Okay, sure, we can do that, or we can make it our next one. Because I don't see much sense in making a release now, since the previous release was made a week ago, I guess it will just make sense to make a release one week... one month after the previous release, in this case.
B: Yep, okay, so yeah. If the next one's 1.12... sorry, again, yeah, 1.12, yeah. So if the next one's 1.12, we would do release 1.12, and then the RC would be for 1.13, right? Is that how it would work?
A: That's sort of what I was thinking about. Okay, all right, okay, so yeah. Yes, to summarize: we're going to make a release candidate tomorrow.
A: Perfect, okay. So what is needed right now is for me to address all the comments here, fix everything that has to be fixed, and I need another review, especially on this one, because the first one has already been reviewed and approved, and that's from Aaron. This one only has a review from Aaron, so please, someone, review this one here.
A: And yeah, will you, Ernest, be available later on Slack? Because I'll probably have to ping you as soon as I add the fixes, for you to review them.
A: Okay, if we do that, we're great. So I think we're done.
A: Thank you all for your reviews. I have been pestering you, asking you for reviews at very late times of the day, many times, but thank you. I appreciate that. So thank you very much.