From YouTube: 2022-04-01 meeting
B: Yeah, and just like, we were in Mexico and trying to get — you know, you have to get covid tested before flying back, and I can speak a little Spanish, but you're having to deal with it. And we were also in a small town, so getting a covid test from a small-town laboratory in a foreign country, on a timeline — not the most fun in the world. But anyway.
A: I was in Mexico in October, but we were at a resort, and the resort organized the tests, so no issue at all. But...
C: Well, let's catch John up on everything that's happened.
B: I probably have about 700 issues in my current GitHub to-do list, so I'm thinking of just declaring bankruptcy and letting people ping me if there's something important.
B: Oh yeah, I try to keep my GitHub inbox completely clean, and I've got something like — I guess it's only like 300 things I have to look at. I don't know, but I have to figure out some way that we weed them out. So anyway, if there are things that are important that you all need to look at, please ping me directly on them, because otherwise they'll probably get lost in the noise.
C: Yeah, so the big thing, obviously, is the metrics SDK stabilization stuff. I think that's been our primary topic the last couple of weeks.
C: I don't know — maybe, Jack, you want to try to summarize?
A: Let's talk about them. So before, we had this pattern for registering metric readers where you had a metric reader factory; you registered the factory, and from the factory there's a callback where the meter provider calls back, obtains an actual instance of the reader, and exchanges the factory — or exchanges a handle that the reader can use to trigger collections. I was finding that a bit confusing — the fact that you register reader factories instead of readers themselves — and there are a couple of concepts in there, like a metric producer, that don't show up in the specification. So one of the changes is to get rid of that.
A: So now there's just a metric reader interface — periodic metric reader implements metric reader, and there are two other implementations — and the whole notion of a factory has gone away. Instead, there's a method called register, and upon meter provider initialization, the provider registers itself with the metric reader and gives the reader a way to trigger collections.
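The registration pattern described here can be sketched roughly like this — hypothetical simplified types for illustration, not the actual SDK classes; the real interfaces have different names and signatures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// The handle the provider gives the reader, so the reader can trigger collections.
interface CollectionHandle {
  List<String> collectAllMetrics();
}

// No factory anymore: the reader itself is registered, and receives the handle later.
interface MetricReader {
  void register(CollectionHandle handle); // called once, on provider initialization
}

class SdkMeterProvider {
  private final List<MetricReader> readers = new ArrayList<>();

  SdkMeterProvider(List<MetricReader> readers, Supplier<List<String>> producer) {
    // Upon initialization, the provider registers itself with each reader.
    for (MetricReader reader : readers) {
      reader.register(producer::get);
      this.readers.add(reader);
    }
  }
}

// A reader implementation stores the handle and uses it whenever it wants data.
class OnDemandMetricReader implements MetricReader {
  private CollectionHandle handle;

  @Override
  public void register(CollectionHandle handle) {
    this.handle = handle;
  }

  List<String> collect() {
    return handle.collectAllMetrics();
  }
}
```

The point of the change is visible in the shape: the user constructs and registers the reader directly, and the callback indirection (the old factory) is gone.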
A: Yeah, so I hope you can take a look at them — and I assume you will — like a close look at the metrics SDK; maybe not all the diffs, but where it stands today, and just do a sanity check of the API surface area, because there are a handful of changes. I can go through some of them briefly — the ones that I think are interesting.
A: Well, so right now they're actually enabled by default, and that's something I think we should revisit before we go stable. We can turn them off right now, but I think Anuraag pointed out that there's no performance benefit to turning them off by default, just the way it's implemented. And so there's kind of the question that the SDK spec doesn't have a MUST that exemplars are disabled by default.
A: I think it's a SHOULD, and so we're free to break that if we want.
A: Okay, yeah, there is a little bit of memory impact. I think the default number of exemplars that are stored is a function of the number of CPUs that are detected, and so the size of the storage — right, so that type of thing is not specified at all, like what the default size of the exemplar storage is. So I think Josh probably took some liberties with that and came up with something that seemed reasonable.
A: Right, yeah — so let's see, what other changes have there been? Oh, there's this big change: before, metric exporters had a method called getAggregationTemporality, and the spec was adjusted to have a five-way selection function for aggregation temporality, where a reader — it's not at the exporter level — can say, for each instrument type, what the desired aggregation temporality is. And we've extended that concept into metric exporters, so metric exporters have the ability to influence the reader. That's allowed in the spec — the SDKs have...
A: ...the liberty to choose whether they want to extend that concept to the exporter. At least, I don't think it's explicit anywhere, but the comments have discussed that it's intentionally not specified.
A: So right now — let's talk about the readers. There's the Prometheus reader, where the temporality is always cumulative, regardless of instrument type.
A: The other one is the in-memory metric reader, where you just configure what temporality you want and it's static for all instruments. The other one is the periodic metric reader, and with that one there's no ability to configure it at the reader level — it just takes the configuration from the exporter and passes it through, because the exporter is always associated with a periodic metric reader.
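The per-instrument-type temporality selection being described can be sketched like this — simplified enums and a functional interface invented here for illustration; the real SDK's types differ:

```java
// Simplified stand-ins for the instrument types and temporalities discussed.
enum InstrumentType { COUNTER, UP_DOWN_COUNTER, HISTOGRAM, OBSERVABLE_COUNTER, OBSERVABLE_GAUGE }

enum AggregationTemporality { DELTA, CUMULATIVE }

// The "five-way selection function": instrument type in, desired temporality out.
interface TemporalitySelector {
  AggregationTemporality getTemporality(InstrumentType type);
}

class Selectors {
  // Prometheus-style reader: always cumulative, regardless of instrument type.
  static TemporalitySelector alwaysCumulative() {
    return type -> AggregationTemporality.CUMULATIVE;
  }

  // A "delta preferred" policy: delta where practical, cumulative for up-down counters.
  static TemporalitySelector deltaPreferred() {
    return type -> type == InstrumentType.UP_DOWN_COUNTER
        ? AggregationTemporality.CUMULATIVE
        : AggregationTemporality.DELTA;
  }
}
```

A reader (or, per the extension discussed, an exporter) supplies one of these functions, and the SDK consults it when producing each batch of points.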
A: Yeah, the one that makes sense is a sum, I think — or maybe not "makes sense," but that is actually practical. There's a related question that I have, and maybe now is the best time to talk about it.
A: So when you're observing — let's take an async counter, for example; I think it's a good one. When you're observing a value for an asynchronous counter, it's assumed that you're observing the sum, and so when you're trying to translate that to a histogram, you obviously take the sum and set it to be the sum of the histogram. But what do you do for the count, the min, the max, and the bucket counts? Well, you only...
A: It's not actually a reading — it's like a summary itself. It's an aggregation that you're observing, so it'd be wrong to increment the count of a bucket that contained that sum value you're observing.
B: I mean, but you can imagine second order — you want to have a second-order measurement around how this particular sum varies over time. That's a valid statistical measure. It's not the same thing as the sum, but you could want that kind of second-order variability of that sum. I don't know the specific use case, but certainly that's a valid statistic you might want to look at.
A: That's true — I hadn't thought about that. I think that means the interpretation of the sum changes based on your aggregation, which is kind of strange. But frankly, I opened an issue about this, because I don't know what to do with histograms and asynchronous instruments right now. I think they need to be addressed, because whatever the right thing is, we don't do it right now.
B: Yeah, I mean, I'm saying we could just say: whatever value you grab, shove it in the histogram, and it is what it is. If it's a delta histogram, it'll get flushed out every reading, and if it's a cumulative histogram, it'll continue to aggregate its overall long-term histogram values — measuring the variability of that particular sum, right. That would be the simplest answer. I don't know how it's going to be useful, or to whom, but I'm sure there is a use case where that would work.
B: Okay, yeah, that'd actually be interesting. I'd love to goof around with that and just see what happens right now with histograms and gauges — I bet it does what I would expect it to do, which is just take that value and shove it in the histogram.
A: So there's at least one problem that I can think of off the top of my head, which is that the sums of histograms are supposed to be monotonically increasing, and if you are observing an async up-down counter, that value... oh.
A: And so the implication of that is you can't record negative values with histograms, right? And so...
A: I suppose, actually, that has implications for synchronous up-down counters as well, because you want to be able to aggregate those in histograms, but they can record negative values.
B: So we were having some interesting — you know, not impossible — use case where you have negative-infinity-to-positive-infinity values in the histogram you're recording. You'd just need to shift everything up to zero-to-infinity, and then I don't know how you deal with where zero ends up. But it's basically a use case we can't handle: the histogram that's infinite in both directions, right.
A: No, it supports the negatives — the negative exponents in those refer to the range between zero and...
A: I finally did some experimenting — I wanted to see just how exponential histograms actually work, like where the bucket boundaries end up for different scales and numbers of buckets and things like that, so I can visualize it. I think I have a firm grasp on it now.
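For anyone wanting to do the same experiment, the boundary math from the OpenTelemetry data model is compact: the base is 2^(2^-scale), and bucket index i covers the interval (base^i, base^(i+1)]. A minimal sketch:

```java
class ExpHistogramBuckets {
  // base = 2^(2^-scale); larger scale means finer buckets.
  static double base(int scale) {
    return Math.pow(2, Math.pow(2, -scale));
  }

  // Lower boundary of the bucket at the given index: base^index.
  static double lowerBoundary(int scale, int index) {
    return Math.pow(base(scale), index);
  }

  // Index of the bucket containing a positive value: the i with base^i < value <= base^(i+1).
  static int bucketIndex(int scale, double value) {
    return (int) Math.ceil(Math.log(value) / Math.log(base(scale))) - 1;
  }
}
```

At scale 0 the base is 2, so boundaries land at 1, 2, 4, 8, ...; at scale 1 the base is sqrt(2), doubling the number of buckets per power of two. (Production implementations use exact bit manipulation rather than `Math.log`, which has floating-point edge cases at the boundaries.)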
A: And if I understand correctly — I'm going off memory a bit here — our metric data interface and all of its different data point types are kind of our embodiment of the metrics data model, and while there is an exponential histogram type in the metric data type enumeration, I don't think there's an exponential histogram interface.
C: ...because we don't have — oh, do we have? We do have both long and double of both types. Yes, okay, right — I remember that.
A: So there's one interesting call-out. Metric readers now have the ability to influence the aggregation temporality — we talked about that five-way function — but what came along with that was the ability for metric readers to influence the default aggregation. The metrics SDK...
A: ...spec says that metric readers have another five-way function that says, for each instrument type, what their default aggregation is. The idea is that when you're resolving the aggregation, if there are no views configured, you check the metric reader's default aggregation and use that.
A
But
we
haven't
implemented
that
yet
and
because
I
don't,
I
don't
know
of
any
use
cases
that
need
that
today,
and
so
I
I
figure
punt
on
that,
because
why
not.
A: Yeah, adding it would just consist of implementing it — having a default method on the interface that just returns the default aggregation, which we already have a definition for.
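The non-breaking shape being described could look roughly like this — hypothetical simplified types, not the actual SDK interfaces: a default method on the reader interface means existing reader implementations pick up the behavior for free.

```java
// Simplified stand-in for the SDK's aggregation choices.
enum Aggregation { DEFAULT, SUM, LAST_VALUE, EXPLICIT_BUCKET_HISTOGRAM, DROP }

interface MetricReader {
  // Adding this later is non-breaking: the default method defers to the
  // SDK-defined default aggregation for every instrument type.
  default Aggregation getDefaultAggregation(String instrumentType) {
    return Aggregation.DEFAULT;
  }
}

// An existing reader implementation needs no changes to compile.
class NoOpReader implements MetricReader {}
```

A specialized reader could later override the method to, say, request exponential histograms for histogram instruments, and view resolution would consult it only when no view matches.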
A: Great, super helpful. One other thing I think is worth talking about is that the view API surface area is reduced a bit. We had some stuff in our view that isn't specced. Specifically, we have this concept of an attribute processor, and an attribute...
A: ...processor just allows you to generically take a set of attributes and do things like remove them or transform them in some way, potentially with values from context. One of the things you can do with that is have a baggage-appending attribute processor. That's not specced out, so now it's only accessible via internal APIs — you can still do it, but it's not part of the API surface area.
A: So the only thing you can do with attributes, according to the spec, is select a set of keys, which reduces the cardinality.
A
And
one
thing
I
think
we
should
talk
about
as
a
group
is
that
our
instrument,
selection
criteria
for
views
is
actually
broader
than
what
the
the
spec
requires.
So
the
spec
requires
this.
So
for
when
you're
selecting
an
instrument
by
name,
you
have
to
support
an
exact
match
on
the
name
or
a
wild
card
match,
and
the
wild
card
is
very
specific.
A
It's
just
question
marks,
match
a
single
character
and
stars
match
any
number
of
characters,
one
or
more
characters,
zero
or
more
characters,
and
then
all
the
other
instrument
selection
criteria,
so
selecting
by
meter,
name,
meter,
version
and
meter
schema
url
are
all
exact
match.
There's
no
fuzzy
matching,
or
we
are
much
more
permissive
than
that.
All
of
our
instrument.
Selection
criteria
allows
you
to
specify
a
predicate
that
says
you
know
you
know
whatever.
A
Whatever
you
want
to
do
in
the
predicate
you
can
do,
and
I
think
that
we
should
consider
reducing
that
and
only
allowing
like,
what's
explicitly
specked,
because
there's
one
good
thing
that
you
can
do
from
the
like.
What
is
explicitly
specked,
you
can
detect
a
scenario
where
a
view
selects
more
than
one
instrument
and
tries
to
change
its
name
and
that
gear
is
guaranteed
to
produce
a
conflict
on
export.
A: Yeah, so suppose your instrument selector just has a field called setName, and name accepts just a string of the name to select on. What you can do is say: if there are no stars and no question marks in it, it's an exact match; if there are any stars or any question marks, disallow changing the view's name. Oh.
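The check being described is mechanical; a sketch of the spec's wildcard syntax and the exact-match detection (illustrative names, not the SDK's actual selector class):

```java
import java.util.regex.Pattern;

class InstrumentNameSelector {
  // A selector with no wildcard characters is an exact match, so renaming is safe;
  // a wildcard selector may match many instruments, so renaming must be disallowed.
  static boolean isExactMatch(String selector) {
    return !selector.contains("*") && !selector.contains("?");
  }

  // Translate the spec's wildcard syntax into a regex:
  // '?' matches one character, '*' matches zero or more; everything else is literal.
  static Pattern toPattern(String selector) {
    StringBuilder regex = new StringBuilder();
    for (char c : selector.toCharArray()) {
      if (c == '*') regex.append(".*");
      else if (c == '?') regex.append(".");
      else regex.append(Pattern.quote(String.valueOf(c)));
    }
    return Pattern.compile(regex.toString());
  }
}
```

With an opaque predicate, neither check is possible — which is the argument for narrowing the public API to exactly this.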
B: I think it's totally fair to remove that from the public surface for this release. I mean, it's super powerful if you get a predicate — you can shoot yourself...
B: ...in the foot as many times as you want with this high-caliber weapon, that's right. I'm fine with removing it from the APIs right now. I don't have any particular use case for it.
A
Okay,
I
mean,
and
I
suppose,
if
people
do
have
those
use
cases,
they'll
give
us
feedback.
A: I guess two last things to talk about to catch John up on, and then I've kind of gone through my list. So, one: there's this strange concept of a minimum collection interval — if you collect more frequently than this minimum collection interval, then the collection of synchronous instruments is suppressed; you still call your asynchronous callbacks. I think you're nodding your head, so you understand what it is.
A: And furthermore, when I was thinking about it more, I think it's actually confusing that our default behavior was to apply an artificial throttle on how fast you can collect — it was 100 nanoseconds, which seemed arbitrary. It's not mentioned in the spec anywhere, so I've kept it as an internal API that you can opt into, but I've changed the default to zero.
A
So
there
is,
you
can
collect
as
fast
as
you
want,
but
that's
on
you
if
you
cause
contention
issues
so
yeah
and
if
you,
if
you
do
want
to
enable
this
behavior,
which
is
kind
of
unspecified
and
strange,
then
opt-in,
via
this
internal
api.
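The behavior being discussed boils down to a small piece of state; a sketch with an invented class name (the SDK's internal implementation differs), where a zero interval — the new default — never suppresses anything:

```java
class CollectionThrottle {
  private final long minIntervalNanos;
  private long lastCollectNanos = Long.MIN_VALUE; // sentinel: no collection yet

  CollectionThrottle(long minIntervalNanos) {
    this.minIntervalNanos = minIntervalNanos;
  }

  // Whether synchronous instruments should be collected now. When suppressed,
  // asynchronous callbacks would still run; only sync collection is skipped.
  boolean shouldCollectSynchronous(long nowNanos) {
    if (lastCollectNanos != Long.MIN_VALUE
        && nowNanos - lastCollectNanos < minIntervalNanos) {
      return false; // too soon since the last collection
    }
    lastCollectNanos = nowNanos;
    return true;
  }
}
```

With `minIntervalNanos = 0` the condition can never trigger, which is why flipping the default to zero makes the throttle disappear without removing the code.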
C
Nobody
could
remember
why
why
why
it
was
introduced
is.
C: He probably had some bad experience with somebody collecting too often, which probably created some crazy big issue.
A: Yeah, it just makes it kind of tricky and annoying to test — you have to set this extra property during tests — and it doesn't impact actual systems in production.
A
Yeah
so
we're
safe
for
now,
but
I'm
I'm
happy
with
removing
it.
I
don't
have
any
use
cases
for
it.
Maybe
we
wait
to
see
if
josh
has
any
opinions
on
it
or
a
strong
opinion
to
keep
it.
I
don't
think
it
hurts
to
keep
it
in
an
internal
api
for
a
bit,
but.
A
There
was
a
metrics
common
package
which
was
holding
just
two
in
numes
that
wasn't
providing
any
value,
so
we
got
rid
of
that
and
I
guess,
like
along
the
lines
of
example,
are
hiding
that's
just
an
internal
package.
Now,
so
that's
gone
or
gone
from
the
public
api,
and
so
I
think
that's
like
a
good
segue
into
this
general
idea.
A
concept
like
you
know,
conversation
about
whether
we
have
a
release,
candidate
or
publish
when
once
we
feel
more
comfortable
because
they're
there
are
a
couple
things
that
are
going
to
break
everyone.
A
In
particular
the
periodic
metric
reader
change
to
get
rid
of
the
factory
pattern
like
everybody
has
to
configure
a
periodic
metric
reader
unless
you're
using
auto
configure,
and
so
that
that's
going
to
impact
them
and
it's
it's
an
easy
code
change
to
make.
But
yeah.
B
I
think
my
vote
would
be
release
1.13
with
it
still
has
alpha,
but
with
plenty
of
notes
that
this
is.
This
is
our
release
candidate.
So
I
don't
think
we
should
do
like
a
1.13
rc1
or
anything
like
that,
but
I
think
we
should
consider
1.13
as
our
release
candidate
for
the
metrics
sdk
and
then
stabilize
it
assuming
there's
not
a
major
deal
in
1.14.
A: A little bit more context — and I don't think this changes the conversation much, but I just wanted to say it: I was at the maintainers' meeting on Monday, and there was a question about when our release candidate was going to get published and when our stable artifact was going to get published for the metrics SDK. At the time we weren't planning on having a release candidate, and there was some side-eye about that.
C: Yeah, I think the MicroProfile ship has sailed — I think they're building their own metrics abstraction layer on top of Micrometer.
A: I wonder what the extent of that was going to be. I briefly read something that was confusing: it said you're affected if you deploy as a WAR with Tomcat, but then it also said there are plenty of other ways you could be vulnerable to this, and I was like, wait — which is it? Are you only vulnerable if you deploy it like that, or are there a lot of other...
E: ...official — yeah. So, as I understood it, it's because the getModule method was added to Class. If you have classes in this cache or something like that, then when it's deserializing — there was a deny list of methods on Class, or something, to prevent it from doing magic, but when Java added the getModule method, that wasn't added to the deny list in Spring, so it was possible to do weird stuff.
B: Well, what's the next big Java framework that's gonna get hit — what's your take? We should have a pool.
C: I wonder if Quarkus and friends will see an uptick in adoption.
A: Jason Keller — no, Jason Plum. So yeah, my team, we had alpha or early-access support for OTLP, and we built that, right, which...
A: So they're creating some sort of execution environment, and they're trying to properly — I don't know, what's the word, I'm having a brain fart — isolate it, so it can't impact other things, and they're failing at it.
C: The controller piece of the MVC has always had data binding capabilities. Oh.
A: Here's one more thing: if this next metrics release is our release candidate, should we enable metrics by default in autoconfigure?
E: What do you think — turn it on right now? Yeah, because it should have everything that we would release.
A: But you have to — oh, that's right. I was going to say you have to opt into them via your log config template, but that's not true when you have the agent enabled.
B
I
mean,
I
hope
my
hope
would
be
that.
Well,
a
all
the
vendors
in
sporto
tlp
will
at
least
not
blow
up
if
people
send
them
signals
like
the
log
and
metric
signals
when
it's
stable
and
the
the
collector
should.
Also
I
mean
I
right
now.
I
guess
if
you
don't
configure
a
pipeline
in
the
collector.
Well,
actually
I
haven't
looked
at
this
forever.
It
used
to
be
that
if
you
don't
like
you
didn't
set
up
a
metrics
pipeline
in
the
collector
that
it
would
just
would
not
like
enable
the
endpoint
right
yeah.
B: You know, there's a weird case I just realized. Let's say somebody doesn't care about metrics, because they're using Micrometer or whatever, and they've explicitly said they want to send their traces to Jaeger. In that case they're saying, "we're not into OTLP, we're not doing that," and so we would still end up trying to send OTLP to nowhere and end up with exceptions, because there was no backend.
A: Here's a related thought on whether logs should be enabled by default once that's ready. I think there's a question about what the agent should do, but the autoconfigure modules, certainly, I think, should enable logs by default. And it's up to you to decide to actually wire anything into the log SDK — it won't do anything unless you actually wire up some sort of logs to send through the log SDK.
B: So there's actually been a pretty strong request that came from Ken Finnegan — I don't remember exactly who he was working for at the time.
B: But I think there was a pretty strong desire to have the SDKs stay independent, and only depend on APIs if they need to have interdependence between things. I don't think we would ever go the same route that we have with the API, where it's one artifact; it would be individual SDK artifacts, and then one sdk-all artifact that is just a transitive dependency holder.
A: Yeah, the logging exporters too — whether you include those by default, or whether you have to include them on your classpath. I think high-performance situations might just want to log everything out to standard out and have some other system that observes those logs and sends them over the network.
A: It is wrong, and I wrote a test to verify it. The way it works right now is: if you have a single reader and you call collect on that reader from multiple threads, those collections can happen concurrently. So imagine the Prometheus collector responding to two different requests at once.
A: So one reader can be called from two different threads concurrently, or two different readers can be called from two different threads concurrently — a periodic metric reader executing in the background could be called at the same time as a Prometheus reader.
A: Yeah, I mean, it's certainly possible — I think it would be reasonable to just say that each reader has to do sequential collections, at least for a particular reader.
E: I guess maybe we have a lot of fine-grained locks — all of metrics is so stateful that almost nothing could happen concurrently in practice anyway. I thought we were getting around that by having the high-level lock, but if that's not the case, maybe we can remove some of the other locking if we only lock at the producer level.
A: Possible, yeah. So one thing that...
C: Yeah, I feel like having metrics exported one group at a time is the easiest to reason about, for sure, and I'm not sure what the use case would be for needing, like, high-performance multi-threaded metric reading.
A: Yeah, so — and this is actually a related issue — let's say you have a callback that takes a lot of time to invoke. If you have a Prometheus reader and a periodic metric reader that's running in the background, and somebody tries to read Prometheus, that request could take a long time to resolve if the periodic metric reader is in progress and waiting on a long callback.
B: That's true. So, okay, I've got to take off. Yeah, bye-bye.
A: That's the writer lock and then the reader lock; the one which I think is the top-level lock that you'd get on collection doesn't exist.
E: Got it — so we wouldn't need a writer lock anymore if we had the collection lock, right.
A: And then, okay, on a related note: there's one other topic related to not allowing callbacks to be called concurrently, or this top-level reader lock, which is that technically you're supposed to time-bound callbacks in some way, according to the spec. We don't do that — if a callback wanted to take two minutes to complete, it would just hold up everything for two minutes, and arguably we should do something. I'm not sure how important it is to do in an initial release, though.
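One way to time-bound a callback in Java — an assumed shape for illustration, not what the SDK does today (today the callback runs unbounded on the collecting thread) — is to run it on an executor and give up after a timeout:

```java
import java.util.Optional;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

class BoundedCallbacks {
  // Invoke the observable callback, returning empty if it exceeds the timeout,
  // so one slow callback cannot hold up the whole collection indefinitely.
  static Optional<Double> observeWithTimeout(Supplier<Double> callback, long timeoutMillis) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    try {
      Future<Double> future = executor.submit(callback::get);
      try {
        return Optional.of(future.get(timeoutMillis, TimeUnit.MILLISECONDS));
      } catch (TimeoutException e) {
        future.cancel(true); // best effort: user code may ignore interruption
        return Optional.empty();
      } catch (Exception e) {
        return Optional.empty(); // interrupted, or the callback threw
      }
    } finally {
      executor.shutdownNow();
    }
  }
}
```

The catch is the one raised next in the discussion: cancellation only interrupts the thread, so a callback that never checks its interrupt status keeps running in the background — the collection merely stops waiting for it.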
E: I have a feeling that language is sort of inspired by Go, where they pass the context to the callback and then cancel the context, or something like that. Even if you do that, if the user code doesn't check the context it doesn't help anyway, but that's probably how they implement it. A similar pattern doesn't exist in Java, and I don't think we would want a separate thread just to be able to have that sort of timeout.
C: Yeah, I think we got some good decisions out of that.
E: Sorry, I was looking at the min/max again — my pending comments weren't sent; it's not that I ignored it. And then I was going to ask one thing now — or I can ask most of them on GitHub: do you have any thoughts on using nullable versus has-methods to avoid the boxing?
A: Yeah, I don't like the nullable in there; I just didn't see a way around it, but I'm totally open to that idea. I think we'd have to make it a pattern for any future field on our data model that needed to be optional.
C: And, Anuraag, you had some idea about the collector — opening an issue with the collector, yeah. Okay, cool — will you tag us or send it to us in Slack? Yeah, because...
C: Yeah, because that's going to happen for sure — once we turn on metrics, and eventually logging, by default in the Java agent, it's going to hit a lot of users.
E: Sorry — "unsupported" or whatever, that 404 that we have some special logging for. Maybe we don't even log that by default — switch it to a debug log.
C: That would be good, yeah, because I could tell them what to turn off in that message. I like that.
E: ...saying "is the collector configured for metrics," or something — I think we have some log message like that. It wasn't like that back in the day, in the very, very beginning, when we saw the non-clean message. So it's literally just moving that into a static initializer that's then only executed once.