From YouTube: 2021-08-05 meeting
A
I haven't prepared the agenda this time, but I remember last time we got some follow-ups, like trying to reach out to Josh offline. And there's another thing, actually, about the histogram, so I can probably quickly follow up on that. Also, I saw my PR is still there waiting for more approvals, the one where I tried to remove the metric processor. That should be quite straightforward, so please help with that.
B
Riley, sorry, if I may do a quick survey: is anyone having trouble with GitHub getting stuck in CI? Somebody else has noticed that.
A
Yeah, so sorry, I haven't got time to put together an agenda; some emergency on my side. But coming back: who wants to go first? I have a simple ask: my PR needs approval. And I remember what we discussed last time. So, Josh, we want you to give us an update on the histogram. We know there are folks doing a prototype, but do you see any blocker, any help you need?
D
Yeah, yes. I have been interrupted this week by a bunch of stuff myself, and I'm barely prepared to speak at this moment in time. The histogram PR that I put out has received some feedback, and I have not processed it yet. It's been the same story the whole time.
D
The number of people who are interested in actually approving this is so small, because it's very technical and requires this deep understanding. And I feel like all I'm really doing is trying to push forward by conveying what these histogram researchers have all said into something we can practically use.
D
I have not been able to make enough progress on it. I've also been working on sampling for Lightstep. In my mind, this histogram data model stuff that we're talking about will only matter after we have an SDK that's done, and we will, you know, at some point.
D
There's the opportunity to replace the old-style aggregator for explicit histograms with the new-style aggregator for exponential histograms, and that will, in theory, improve your data quality if you are using a vendor or backend that can accept that data today. And, you know, Prometheus is not ready to accept that data either. So it's moving at the right pace, I think, and I want everyone to understand: it's not blocking the API, and it really depends on the SDK. So my attention has always gone first to the SDK and the API discussions.
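As background for the aggregator discussion above: an exponential histogram assigns each measurement to a bucket using a base-2 rule parameterized by a scale. A minimal sketch of that mapping, assuming buckets cover (base^i, base^(i+1)] with base = 2^(2^-scale); the function name is illustrative, not an actual OTel SDK API:

```python
import math

def bucket_index(value, scale):
    """Map a positive measurement to an exponential-histogram bucket index.

    Bucket i covers (base**i, base**(i+1)], where base = 2 ** (2 ** -scale);
    larger scales give finer buckets. This is an illustrative sketch, not
    the OTel SDK implementation (real implementations avoid floating-point
    log calls for exactness).
    """
    base = 2.0 ** (2.0 ** -scale)
    # ceil(log_base(value)) - 1 selects the bucket whose upper bound is
    # the smallest power of base that is >= value.
    return math.ceil(math.log(value, base)) - 1
```

For example, at scale 0 the base is 2, and a value of 5 falls in bucket 2, which covers (4, 8].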
D
I would prioritize reviewing your work right away before getting back to that, which I just did. It's certainly important to keep an eye on. So, just for the final status report: New Relic gave us a prototype.
D
I think there will come a point in the next couple of quarters where we try to say: OTel SDKs, you should consider implementing this new histogram aggregator. But as far as when that actually matters, the data path has to be ready. The OTel collector will have to begin processing this data.
D
We will, I presume, need to start translating between histogram types. That'll be some processor code that has to get written. And then, again, until Prometheus is ready to actually give this to open-source users, I just don't see a lot of energy behind this until we have other things out of the way.
D
The base-2 discussion is kind of locked down, but I don't think that exponential benefits many people, because today's Prometheus doesn't receive that, and I don't know the pace of the prototyping in Prometheus land. But I think my assumption was: we would have the explicit histogram option with essentially the status quo for Prometheus. You can set up your buckets and you'll get the same Prometheus performance you expect now. Lightstep is really excited about having exponential histograms, and it has a data pipeline that's basically ready for it.
D
I think so, unless someone else disagrees. I don't know who would disagree, because the open-source infrastructure is not ready for anything else that I can see.
A
And when we change that, do you think that would become the default? Because we're talking about the default histogram, and the default currently is the Prometheus default, which is fine. So later we can say the default is still the Prometheus one, but if you want higher performance, you can use the base-2 thing once some backends start to support that.
A
Okay, so to try to make a conclusion here, to put a stake in the ground: we're saying we will ship the SDK, and the default histogram will be what we already discussed; it will be the Prometheus one. Victor has that covered in his PR. Later we can revisit the base-2 thing, continue the prototype, and see if we can make progress on the backends, especially Prometheus. And if that eventually becomes part of the SDK, it won't be the default.
D
I think that's right. I think it'd be nice if we kept open a way to change defaults. We did talk a lot about that in recent meetings.
D
This is essentially a case for view configuration. We'd like to say: I'd like my histograms to have the new exponential data, not the old explicit data. And when you're configuring your views, you can say whether you do or do not want to take the defaults. So we could say SDK version 2 has the default being exponential or explicit histograms, and then SDK version 3 has the default being exponential histograms, because SDK version 3 came after the Prometheus server released the version where they support the exponential histogram. Maybe.
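The view-configuration idea sketched above, letting users override a per-SDK default aggregation per instrument, could look roughly like this. All names here are invented for illustration; the actual view APIs differ per SDK:

```python
# Illustrative sketch of view-based aggregation selection; the class and
# function names are made up for this example, not an actual OTel SDK API.

DEFAULT_AGGREGATION = "explicit_histogram"  # an SDK major version could flip
                                            # this to "exponential_histogram"

class View:
    def __init__(self, instrument_name, aggregation=None):
        self.instrument_name = instrument_name
        self.aggregation = aggregation  # None means "take the SDK default"

def resolve_aggregation(instrument_name, views):
    """Return the aggregation for an instrument: the first matching view
    that overrides the default wins; otherwise the SDK default applies."""
    for view in views:
        if view.instrument_name == instrument_name and view.aggregation:
            return view.aggregation
    return DEFAULT_AGGREGATION
```

So a user who wants the new data for one instrument registers a view for it, and everything else keeps the SDK-wide default.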
A
Okay. Do you feel we can move to the next one?
D
I'm saying that for the SDK, yeah, and our specification that we've been moving ever so carefully on: we shouldn't try to change them for this new histogram. Even though I'm excited about it, I just don't want to slow us down even a little bit. I think it makes sense, but Joshua will continue to work with other folks on the prototype, yeah.
D
And then start getting collector support, because you're going to want to be able to convert it for a Prometheus exporter, and the first thing will be converting from exponential back to explicit, which is not a hard calculation to do. But it's machinery that we need built before someone can turn on an SDK with the fancy new histograms. And at some point, you're going to end up at a data translation where it just doesn't work.
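The exponential-to-explicit conversion described here can be sketched roughly as follows. This is an illustrative approximation, not collector code: it assumes base-2 buckets covering (2^i, 2^(i+1)] and credits each exponential bucket's count to the explicit bucket containing its upper bound, which loses some precision near boundaries:

```python
import bisect

def exponential_to_explicit(exp_counts, boundaries):
    """Approximate conversion of base-2 exponential buckets to explicit ones.

    exp_counts: dict mapping bucket index i -> count, where bucket i
    covers (2**i, 2**(i+1)].
    boundaries: sorted explicit upper bounds, e.g. [10, 100]; the result
    has len(boundaries) + 1 counts, the last being the overflow bucket.
    """
    out = [0] * (len(boundaries) + 1)
    for i, count in exp_counts.items():
        upper = 2.0 ** (i + 1)
        # Index of the first explicit bucket whose boundary is >= upper;
        # falls through to the overflow bucket when none is.
        j = bisect.bisect_left(boundaries, upper)
        out[j] += count
    return out
```

For example, counts in (2, 4] and (32, 64] against boundaries [10, 100] land in the first and second explicit buckets respectively.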
E
This actually ties very closely into the agenda item I have a couple down from here, about the fact that the collector is still on OTLP 0.7, I think, or 0.6; something very old.
E
Which will mean, when we release Java API/SDK 1.5, probably next week, everybody's existing usage of the APIs will be completely broken anyway. It's just a status update.
E
Thank you, progress! Yeah, I mean, it's progress. There's still tons and tons of work, and it's probably not the final version of those APIs. There's still some discussion and argument about the exact shape of them; for example, how we want to deal with observer/async instruments versus sync instruments in the API, from a constructor. So we're still working through it, but we at least have the instrument names now in alignment with what's in the spec.
D
Good question. I have an anecdote for you: OTel Go, because of the presence of a metrics SDK prototype, in order to release the tracing API 1.0 and a stable tracing SDK, had to upgrade to 0.9, because we called that version of the protocol stable, and a lot of the code was intertwined around OTLP. So we were forced to upgrade to 0.9, and there are incompatibilities.
D
There's a case to catch up, because I can't actually even test my own stuff here on my platform. But I know that we started pushing on the collector, and Alex Boten, who is not usually here in this meeting but is part of Python and has been joining the collector SIG more and more, has been slamming out PRs to try and get us up to 0.9, and making quite a lot of progress. So I think he reached 0.8, which is a totally useless intermediate point that hopefully nobody will ever support.
A
Probably a dumb question; I haven't been quite involved in the OTLP part. What's the versioning story? Do we follow semver 2? I thought some parts of OTLP, like tracing and metrics, are already stable, although we probably made some mistakes and broke something unintentionally. But do we want to call that one-point-something?
D
I don't know the answer on this. I think OTLP versioning has probably been discussed, and the decision was keeping it in alpha until more signals are stable. And, you know, this metrics protocol being marked stable months ago was a milestone, and we were saying that's going to be the first stable one. So I don't know when we decide to call it 1.0. I think we have to have a stable SDK.
A
And does that mean that currently we're trying our best effort, but nobody is really officially saying people should treat it as stable? So we're saying some signal might be stable, but how we implement that in the protobuf might break people. It's not something we try to do, but people should expect it to happen.
D
I don't want to say that; it sounds worse than I want it to be. We called it stable trying to say we wouldn't make a backwards-incompatible change. But I don't know when we call OTLP version one.
D
When we release 0.10, we'll still accept 0.9, but 0.8 and earlier are off, and those are unstable. That was what we were expecting, I think, when it was marked stable. And you're right: we did kind of not consider how incompatible 0.9 is with 0.7, because in practice it breaks. Even though the protocol semantics are kind of compatible, in practice it breaks because of the RPC endpoint.
D
So I think the reason we're discussing this is that John asked about version 0.10, or: when are we going to add histograms to the proto? I think we should move forward with the histogram: get it approved, get it merged, call it OTLP 0.10. And then the next step will be to get a collector out that can translate between exponential and explicit boundaries for compatibility reasons. Once we have a collector that can take exponential histograms and output Prometheus histograms today, we'll have reached the milestone.
D
At that point, at least you can turn on exponential histograms, and for the most part your data paths will either continue working or you'll get better data. And one day, you know, when enough people support it, maybe that should be the default.
E
Yeah, my question was not related to histograms at all, because the Java SDK doesn't have... well, we do have an aggregator for histograms, but it's not on by default, and I don't know anybody who's using it. My question was mostly that if you just send OTLP 0.9.0 to the collector today, which only speaks about 0.7, metrics kind of work, but there are things that don't work. And so we're publishing an SDK; we have been publishing an SDK probably for three to four months.
D
Riley, I have a different answer, and I said something different internally at Lightstep. The difference between 0.7 and 0.9 is basically two differences for metrics: you add attributes, and there's a backwards-compatibility translation we can do; the 0.7 labels are the 0.9 attributes, and there's a one-to-one translation. And then the number type.
D
It still exists. So if you're sending the histogram type, it's still part of 0.9, and there is a translation that you can do from 0.7 into 0.9 stable that we can anticipate being done. The reason why we screwed up, though, is that the protocol for export today takes this double histogram, and we said, well, it's the same as histogram, and it's not, because this new one creates, for integer-type data, something that is totally unrecognized in the legacy message.
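The 0.7-to-0.9 label translation described here is essentially a field rename plus a value wrapper. A rough sketch over plain dicts standing in for the protobuf messages (the real translation operates on generated protobuf types, so this only illustrates the shape of the mapping):

```python
def translate_labels_to_attributes(datapoint):
    """Translate an OTLP 0.7-style data point (string 'labels') into a
    0.9-style one ('attributes' holding typed values).

    Operates on plain dicts used here as stand-ins for the protobuf
    messages; this is an illustration, not collector code.
    """
    translated = dict(datapoint)
    labels = translated.pop("labels", [])
    # 0.7 labels are string key/value pairs; 0.9 attributes wrap the
    # value in an AnyValue-like structure.
    translated["attributes"] = [
        {"key": kv["key"], "value": {"string_value": kv["value"]}}
        for kv in labels
    ]
    return translated
```

Every other field passes through untouched, which is why the speaker calls it a one-to-one translation.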
D
Sorry, once this collector supports 0.9, what I just said won't be true. I should stop talking; I'm tired.
E
Well, I'm actually not sure about that. I mean, we can assert that, but if the proto is still at less than 1.0, I don't think there are any guarantees from the semver perspective that there won't be further breakages, right? Yeah, and also, you know, my approach so far was: oh, OTLP has been released, I see that, I'm going to go and update it, and the next Java SDK will have the newest version of the proto, because I was expecting...
E
...that would also be happening in the collector. That's obviously not true, and so, just as a maintainer, I'm trying to understand what I should be doing, how I know what to target, and how I know when it's safe, quote-unquote, to upgrade the proto version that we're exporting.
D
I think this is a big community-wide issue going forward. I think we need to gate on the collector. Basically, languages shouldn't be releasing versions that the collector hasn't already supported, right? And so maintainers now have to carefully watch the collector for releases, yeah. I guess, or we have to start communicating this at the maintainers level: now that the collector supports 0.9, please release your 0.9. We didn't do that before, and we ended up in this weird situation where Go is sending 0.9 because it had to, to save a bunch of others.
A
Thanks, that's all. Anything else? So, Josh, if my PR looks good and gets enough approvals, I know how to merge it; then I'll quickly send the PR to try to add the basic prototype pipeline.
A
I can do a quick update. On the .NET side, I talked with Cijo, and we got some ideas for how we can do a zero-allocation implementation, so we're going to do some refactoring that shouldn't affect the spec. This is just a heads-up: if anyone sees any memory-allocation thing they want to discuss, I think the OpenTelemetry .NET folks have some ideas. And also, regarding the metrics API: we put the metrics API as part of .NET 6, the runtime, and there's a bar.
A
So we're saying that if people have more than eight dimensions, instead of putting everything on the stack, we'll do a heap allocation, because we think copying many, many dimensions from the stack to somewhere else is quite expensive. And for things that are eight or fewer dimensions, we'll do everything on the stack. So a good implementation that does some optimization can achieve absolutely zero heap allocation.
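The eight-dimension cutoff described above is a stack-versus-heap decision inside the .NET SDK. Python cannot stack-allocate, so this sketch only illustrates the threshold policy itself; all names are invented for illustration:

```python
MAX_INLINE_DIMENSIONS = 8  # the "bar" mentioned above

def record_measurement(value, dimensions):
    """Illustrative dispatch on dimension count.

    In the .NET design described above, up to 8 dimensions stay in
    stack-allocated storage and avoid heap allocation entirely; more
    than 8 fall back to a heap-allocated container. Here both paths
    just tag the chosen strategy, since Python cannot stack-allocate.
    """
    if len(dimensions) <= MAX_INLINE_DIMENSIONS:
        strategy = "inline"  # .NET: stack-allocated, zero heap allocation
    else:
        strategy = "heap"    # .NET: heap-allocated container
    return strategy, value, tuple(dimensions)
```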
A
I see. Yeah, just for your information: in .NET the initial prototype was focusing on the functionality. Then we noticed that in order to export the data, we have to expose some type for the exporter, and the type was a reference type. We're thinking we might want to change that, because a reference type is going to give you an allocation, and later, even if you want to optimize, you only have two choices: you create another type and let the exporters migrate, or you make a breaking change. We don't want to do that.
E
Well, yeah: in Java we won't have non-reference types until, like, Java 21, so we're never going to be able to use those in the Java SDK. Yes, ever.