From YouTube: 2022-02-25 meeting
A
Oh, insane, crazy. Just lucky, I got my swim in, so that's good. I like to swim twice a week, so I'm very happy I got my swim in. It keeps me sane, yeah.
A
So I saw that, Jack, you called me out. No, you tagged me on the auto-closeable observables.
A
So that always feels a little strange to me. I don't think a try-with-resources is ever going to be used in this circumstance. Removing an observable from a try-with-resources block doesn't seem like a normal way that you would end up removing one of your callbacks.
B
I've got to say, I've been thinking the same thing. I was trying to think of a scenario where I could use this in a try-with-resources block, and I couldn't come up with one. That doesn't mean it doesn't exist, but in all the situations I know of, it's the type of thing where you create the callback at the beginning, or at the construction of a class, and then, when that class is released or closed, you close the callback.
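The lifecycle being described can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the real OpenTelemetry API: the registration handle is `AutoCloseable`, but it is held for the lifetime of an owning class and closed when that class is released, rather than scoped to a try-with-resources block. All class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.LongSupplier;

// Hypothetical stand-in for a meter that tracks registered observable callbacks.
class Meter {
  final List<LongSupplier> callbacks = new ArrayList<>();

  // Registering a callback returns a handle that unregisters it on close().
  AutoCloseable buildWithCallback(LongSupplier callback) {
    callbacks.add(callback);
    return () -> callbacks.remove(callback);
  }
}

// The callback is created at construction of the owning class...
class QueueMonitor implements AutoCloseable {
  private final AutoCloseable observableHandle;

  QueueMonitor(Meter meter, LongSupplier queueSize) {
    this.observableHandle = meter.buildWithCallback(queueSize);
  }

  // ...and unregistered when the owning class itself is released or closed,
  // not via a try-with-resources block.
  @Override
  public void close() throws Exception {
    observableHandle.close();
  }
}
```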
A
So that's my only concern. I think having the functionality is great; I think it's necessary. I just wonder a little bit about whether users will understand that that's the method they should use to unregister things, which the folks on this call who are writing instrumentation clearly do.
A
I also get weirded out because Josh Bloch recommends using NullPointerException to signal invalid inputs to a function, which I think is terrible, because a NullPointerException means a mistake in the code, not a bad input. But whatever, that's my own stickler opinion, so I think I'm fine with it.
B
Yeah, the reason I brought it up is because I'm trying to do a follow-up that builds on that. As part of Josh MacDonald's PR, there was a bunch of other changes to the definition of metric identity, and I think it's natural to build off allowing multiple callbacks, because now we have symmetry between async and synchronous instruments. So the next step is to make sure that we implement the right identity rules.
A
Did you understand my comment about putting the closure, the lambda, at the end of the prototype batch API rather than at the beginning?
A
Because in Kotlin you can just move that block outside the parentheses if it's the last parameter.
B
I think, yeah, here's what you're talking about: the batch API.
C
Anuraag had a different proposal here.
C
But are you saying that there's no need necessarily to restrict that, and we can just treat these as sort of normal measurements?
B
Having the observers passed as a list that you intend to record to is the safest thing to do, I think. It hurts ergonomics, but it allows us to do something in the internals where we unlock those observers, allow recordings to them, and then, after the callback's over, lock them back up again. You know, theoretically, somebody in an async callback today...
B
They could take the reference to an observable long measurement that's provided to their callback, set it in some sort of atomic reference or something, and then invoke it outside of the context of that callback. I don't know what would happen off the top of my head, but you could basically manipulate the API to synchronously observe asynchronous stuff right now. So yeah, that's kind of where my head was at.
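The escape hatch being described can be sketched like this. The API shape is hypothetical, not the real OpenTelemetry one: the "SDK" hands a short-lived observer into a user callback, and nothing stops the callback from stashing the reference and invoking it after collection is over.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import java.util.function.LongConsumer;

// Hypothetical sketch: an observer handed to an async callback is stashed in
// an AtomicReference and invoked outside the callback's scope.
class LeakyCallback {
  // Holds a reference that outlives the callback invocation.
  static final AtomicReference<LongConsumer> stash = new AtomicReference<>();
  static final StringBuilder recorded = new StringBuilder();

  // Simulates the SDK running a collection: it creates an observer and hands
  // it to the user's callback, expecting recordings only during this call.
  static void runCollection(Consumer<LongConsumer> userCallback) {
    LongConsumer observer = value -> recorded.append(value).append(';');
    userCallback.accept(observer);
  }
}
```

Without the lock/unlock scheme described above, a stashed observer keeps working after the callback returns, which is effectively a synchronous recording against an asynchronous instrument.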
B
Yeah, cool, okay! I could imagine that working as well. I guess when I have a moment, I'm going to play around with this stuff, experiment with both options, and see which one works and provides the best combination of ergonomics without making the implementation super nasty.
A
That's my idea there. Jack, I don't remember what Josh's rationale was for needing to restrict it. There was some, though, but I don't remember what it was. Do you recall what that rationale was?
B
He's had a couple of ideas floating around about what a batch API would look like. In some of the renditions, you record the same set of attributes against several instruments at once. So every instrument is going to have the same attributes, but potentially different measurements that you're recording against, and I don't really like that. That type of pattern can totally fit into here, but I don't think that's what you're describing.
A
There's this idea floating around, at least from my understanding, that if you have a callback and you're doing stuff inside your callback that's not related to what you're actually trying to have registered, you could get into trouble. But I don't remember the detailed rationale for why that was.
B
Yeah, if it's a performance-related thing, I would like to revisit that with Josh, because my assumption has always been that we don't have to worry about locking as much for asynchronous instruments. They're only called once per collection, and you don't have contention, because you should be getting unique recordings for each set of attributes.
A
Anyway, I know there's something that I'm missing, but that's totally fine; I don't need to know everything. By the way, I do have some feedback on our metrics SDK, for the periodic metric reader, because I'm using it now along with another person who has not been so intimately familiar with OTel. When you're building your periodic metric reader, you can specify the minimum interval that recordings will happen at.
A
We thought that was the interval of reporting, the period that was going to be reported on, which of course made all sorts of crazy stuff happen when you tried to use it that way. Like, you set it to a minute, and then who knows; very strange things happen when you set that value to something very large. Anyway, I just wanted to give some feedback that the API there is maybe a little bit confusing.
A
That method, in particular, is a little bit confusing at the moment. So, I don't know.
B
So, if you're testing code, you basically want to allow some time to pass between collections. Weird stuff happens if you collect without any time having passed, and so the minimum collection interval makes sure that you don't set a periodic metric reader up with an interval of zero.
B
So it's not just constantly running. But it's useful to be able to change that and allow you to record one collection after another for testing purposes, because if you don't have the ability to bring the duration between collections down to zero, then you're stuck with messy testing code where you have to sleep just to accommodate the periodic metric reader's requirements.
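The builder validation being discussed can be sketched as follows. This is a hedged, self-contained illustration with hypothetical names, not the real PeriodicMetricReader builder: a minimum interval guards against a zero-interval busy loop, while a test-only setter lets unit tests lower that floor instead of sleeping between collections.

```java
import java.time.Duration;

// Hypothetical sketch of a periodic reader builder with a minimum interval.
class PeriodicReaderBuilder {
  private Duration interval = Duration.ofSeconds(60);
  private Duration minInterval = Duration.ofMillis(100);

  // Rejects intervals below the floor so the reader cannot busy-loop.
  PeriodicReaderBuilder setInterval(Duration interval) {
    if (interval.compareTo(minInterval) < 0) {
      throw new IllegalArgumentException(
          "interval " + interval + " is below the minimum " + minInterval);
    }
    this.interval = interval;
    return this;
  }

  // Test-only escape hatch: lower the floor so tests can collect
  // back-to-back instead of sleeping between collections.
  PeriodicReaderBuilder setMinimumCollectionInterval(Duration minInterval) {
    this.minInterval = minInterval;
    return this;
  }

  Duration getInterval() {
    return interval;
  }
}
```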
B
Yeah, so what can happen is, let's say you set your collection interval to be every 30 seconds. You can force flush more frequently than that, and so this allows you to force flush more frequently than what would be allowed by default.
B
If you run into this with the periodic metric reader, then that means you are setting your interval to be really small.
A
Yeah, hey, by the way, I can verify that when you shut down the meter provider, it no longer collects metrics.
A
Every time there was a Kafka rebalance, I was like, where are all our metrics? They're disappearing. And so I can now verify that we no longer record metrics once you shut down the SDK.
C
Yeah, so we talked a lot about JVM metrics, but at the end Emily was checking in about metrics SDK stability. So let's get to that; I know it's on your mind, given the PRs I've seen flowing this week.
B
Yeah, the one thing on my mind has been resolving this identity stuff, and I'm working on that actively. After that, I'm going to do a pass of all the APIs and make sure there's nothing that stands out. I'm glad you caught that part about the metric data types: DoubleGaugeData versus just GaugeData.
B
Yeah, so right now, if you register metrics or instruments that produce an identity conflict, we log a warning and we return a no-op instrument. What we need to do is allow metrics that have conflicting identity to get produced on export, and we need to continue logging a message.
B
Tell the user, to the best of our knowledge, how to resolve the conflict, but let it be exported. At that point the metrics are defined as being in an error state, or a conflict state, on export, and it's unspecified how back ends resolve that: they can either try to make sense of it or throw out some of the data.
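The two behaviors being contrasted can be sketched side by side. This is a minimal hypothetical registry, not the real SDK code: today an identity conflict yields a warning plus a no-op instrument that drops recordings; the proposal keeps the instrument live, still warns, and leaves the conflicting streams for the back end to resolve at export.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical instrument registry contrasting current and proposed behavior.
class InstrumentRegistry {
  static class Instrument {
    final String name;
    final boolean noop; // a no-op instrument silently drops recordings
    Instrument(String name, boolean noop) { this.name = name; this.noop = noop; }
  }

  final Map<String, String> typeByName = new HashMap<>();
  final List<String> warnings = new ArrayList<>();

  // Current behavior: identity conflict -> log a warning, return a no-op.
  Instrument registerCurrent(String name, String type) {
    String existing = typeByName.putIfAbsent(name, type);
    if (existing != null && !existing.equals(type)) {
      warnings.add("identity conflict for '" + name + "'");
      return new Instrument(name, true);
    }
    return new Instrument(name, false);
  }

  // Proposed behavior: still warn, but return a live instrument so the
  // conflicting streams are produced on export for the back end to resolve.
  Instrument registerProposed(String name, String type) {
    String existing = typeByName.putIfAbsent(name, type);
    if (existing != null && !existing.equals(type)) {
      warnings.add("identity conflict for '" + name + "'; exporting anyway");
    }
    return new Instrument(name, false);
  }
}
```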
D
I get it, and that is confusing. I don't think it's possible for us to have a method that can return either a histogram or an exponential histogram just based on whatever we feel, because those are different data models. I'm starting to feel like the intent of that histogram method was for different implementations of the normal histogram type. One of them is explicit buckets, and there could be a different aggregation that can still generate that normal histogram data model.
D
Prometheus doesn't support histograms in general; they get converted to gauges of buckets, right? So how to model that in a way that's useful for Grafana or whatever downstream dashboards is going to be discussed, I think. But in the meantime, we already know that our Prometheus exporter doesn't support them, and many back ends won't, because it's just a different code path. So "selecting the best histogram aggregation available" must be referring to our normal histogram type, I'm sure.
B
It's just hidden in the docs, where a lot of people won't see it, because a lot of people won't need to change their aggregations. You only see the aggregation Javadoc if you're looking, if you're using the View API.
A
I think we should change this, because it's going to break people. Like you say, somebody's going to just use histogram and think, yeah, I want a histogram, that sounds great. Then we switch it to exponential, and suddenly all hell breaks loose because their back end doesn't even actually support it.
A
We could instead have a method called "most bestest histogram" that we add on there, very explicitly a weird name, that would then switch, so that people don't just assume that histogram will always work the same way.
A
Yeah, please definitely open an issue, and I'm happy to chime in on the next spec meeting. Jack's usually there too. I think we should definitely button this up, because I just think it's a great idea, like, hey, we'll give you the best histogram we know about, but realistically it's just not going to work.
B
So if we were to go ahead with, let's say, this, and the spec stabilizes: I assume that for stabilizing the Java SDK we would want to go through an RC approach like we did with the API?
A
I mean, one of the reasons why we needed an RC on the API was because we were basically getting rid of an entire artifact as part of that process. Do we anticipate that happening in this case also, or are we just going to keep the existing metrics SDK?
B
Oh, there is another issue that might have an impact on us from the spec. I don't know if you've seen: Josh MacDonald has a PR talking about aggregation temporality for async up-down counters. The idea is this: right now your preferred temporality can be cumulative or delta, and when it's cumulative, all your instruments that have that temporality export in cumulative, and likewise with delta.
B
So, converting to delta in the SDK.
B
You know, you're kind of switching things. Okay, so it wasn't only about that.
B
So yeah, let's imagine a typical use case for up-down counters versus counters, or let's just take the async ones, for example. A typical usage of an async up-down counter is to monitor memory, and you want to be able to analyze that in its cumulative format. You want to know what the current memory usage is, and if you have dimensions on that, you want to be able to sum them together.
B
So you can say: okay, the memory usage of this particular series is whatever, and summed together it's the aggregate memory. You want to be able to analyze those in cumulative format, always, at least in all the cases that people can come up with. But let's now talk about async counters.
B
A typical use case for that is measuring CPU time: aggregate CPU time that a process has used. It is useful to analyze that in its delta format, because CPU time is monotonically increasing, and it's not useful just to see a line going up and to the right. You want to know how much CPU time was spent during this window, and the next window, and the next window.
B
Yeah, so delta back ends like to analyze monotonically increasing counters in delta format, and cumulative back ends like to analyze monotonically increasing counters in cumulative format. Prometheus works well with those.
B
It's able to compute the diff on its back end, at query time.
B
So if you have the deltas of memory, effectively the change in memory for this window, you'd be able to say something like: memory usage has gone up by one megabyte or 10 megabytes in window one, and then in window two memory usage has gone down by 10 megabytes. But what most people want is: okay, what is the memory usage right now? Not: what is the change in memory usage?
B
And so in order to compute the memory usage right at this moment, you would have to sum all the deltas from t0 to whatever point in time you're at now, and any gap in data screws that up. There's a bunch of challenges to basically reconstituting the current state in a back end if you only have the deltas, especially with flaky networks and stuff.
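The conversion between the two temporalities can be sketched directly. This is a hedged illustration of the argument above, not SDK code: each cumulative point is self-contained, while reconstructing a cumulative value from deltas is a running sum from t0, so one lost delta corrupts every later reconstructed point.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of cumulative <-> delta conversion for a single metric series.
class Temporality {
  // Cumulative -> delta: each point minus the previous one.
  static List<Long> toDeltas(List<Long> cumulative) {
    List<Long> deltas = new ArrayList<>();
    long prev = 0;
    for (long c : cumulative) {
      deltas.add(c - prev);
      prev = c;
    }
    return deltas;
  }

  // Delta -> cumulative: a running sum from t0. Any dropped delta shifts
  // every later reconstructed point, which is the fragility described above.
  static List<Long> toCumulative(List<Long> deltas) {
    List<Long> out = new ArrayList<>();
    long sum = 0;
    for (long d : deltas) {
      sum += d;
      out.add(sum);
    }
    return out;
  }
}
```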
B
Well, so imagine the CPU time aggregate: it's going up and to the right constantly. If you're getting that in cumulative format, then you just know that at t0 you had a total of a million milliseconds, and then at t1 you had a total of 1.5 million milliseconds, whatever. That's not really a useful way to analyze it. So whenever you want to look at CPU time in Prometheus, you convert that to a rate.
B
You say: okay, it's monotonically increasing, but compute the rate, and then you know for each window how much CPU time was spent in that particular window. Delta back ends prefer to analyze it like that too.
B
If even one of those deltas is lost, like if there's a flaky export or an intermittent network, then you cannot know the current memory usage. So it's more of an implementation concern.
B
The one use case I can come up with for deltas for up-down counters is if you're monitoring the size of a queue: you're adding items to a queue and some other threads are taking items off the queue. Then it might be interesting to say, okay, for each window, what's the net change in activity on that queue, what's the change in size.
B
Yeah, so that's monitoring the size of a queue, and you could take either point of view on that one. You could say: okay, what's the current number of active requests, or what's the change in active requests, and you can envision how either of those would be useful.
B
Yeah, full transparency, I'm more used to using it as a gauge as well, but it's being specked all over the place. In the system-level metric semantic conventions, folks want to record the amount of CPU time spent from process start as a monotonic counter, an async monotonic counter.
B
Well, it depends on what the solution for it is. There are solution paths that can work with the SDK stable and be kind of additive, potentially, and I think there are other solution paths where you have to do it now, before stabilization.
A
Yeah, I'm going to drop off. Good stuff, though. Good to see you all again.
B
See you next time, guys. I'm out for a couple of weeks, so I'll see you in a while. Well, I'll send him a message on Slack.
C
Cool, I don't think there's anything else. I wanted to get back to Emily about the target, but she'll join next week, or she might just check the release notes.
C
I think we're good; I don't think there's anything. Is there anything at all going on in ours?
C
Other than me mucking around with GitHub Actions; there's a lot of broken links. This is cool. Oh nice, a link checker, yeah. Once I get it hooked up here, I'll send a PR your way.
C
I think this showed up as some weird, you know, Windows thing. What can I say? Do you use Windows Terminal? No, it actually probably would work if I did that, but I use...
B
You should switch. I just got set up with Windows Terminal and WSL, and I basically moved all my code to the Ubuntu file system within WSL, and man, it's great being able to use the same tooling on my Mac as on my Windows machine.
B
The Windows file system is mounted and you can access it from a location in Linux, but Linux has its own file system, and you can access that from Windows if you go to the right place. Depending on which direction is accessing the other operating system's file system, you use one or the other, based on your task and on which side needs to read the files faster.
C
I need to try this, yeah. I still do miserable things like fire up a Docker image if I need to build something from a GitHub repo that doesn't support Windows.
B
Docker is good now. When you install Docker on Windows, it by default runs in WSL, and then you can access those ports from Windows. So it's now the unusual use case to have Docker running in Windows itself; by default, it runs in your WSL.
C
I'll have to find these things out. I only barely launched VS Code this last week for the first time.
B
My thought on that is, I think things are slow at the spec level because folks just let conversations go stale for days and days and days. If everybody just responded within an SLA of, like, 24 hours and concluded conversations, stuff could move along.
B
But it's just allowed to get stale, and I don't know. I'm cautiously optimistic that once metrics is kind of resolved, everyone's headspace will be spread less thin, their attention pulled in fewer directions, and maybe stuff can move along, and it goes from a vicious circle to a positive feedback loop. But mostly...