From YouTube: 2022-04-21 meeting
C
Yeah, I'm trying to figure out what I think I can commit to, and a lot of it, I guess, is going to depend on the new job: what the timing is, when things are happening with that new job. But I'm guessing the UTC Friday meeting would be kind of what I would intend on trying to hit most weeks. Cool. That's one place where, you know, Jack and org and I would all be able to still connect.
A
Yeah, you've been... how long were you at New Relic before Splunk? Nearly five years? Okay.
C
Yeah, Corey is one of the leads over there. There are 16 people in the company, so very, very small.
C
Oh, I won't. I could never forget you, Ben; you're pretty unforgettable. By the way, I was super happy to read your article about trying to run Java on a single core. It was mostly gratifying because I've been trying to tell people for a long time: don't try to do that. It's very, very difficult to run Java on a single core.
D
Yeah, I'm just completely blown away by the response. As the kids say, it's done numbers, and it's actually spawned a lot of interest in taking it in some other directions as well, and asking: okay, so that's still a reasonable heap size; what happens if you go down to a much smaller heap?
A
Oh, and Laurie, hey everyone. Yes sir, Mattias, I'm glad; I was almost going to release 1.13.1 yesterday, and then I was like: oh.
E
A
Just wait till the SIG meeting and check in if there's anything else we should include. Yeah.
B
Discovered another one; I actually started...
B
I wanted to implement single-record instrumentation for spring-kafka, so that it doesn't rely on the horrible kafka-clients one, and while I was doing it I kind of discovered that our recent change in the last release, the one that disabled the Kafka consumer process span, also disabled single-record listener spans (as opposed to batch record listeners) in spring-kafka. So if you were using spring-kafka 2.7 or newer, so a relatively new version, and you didn't have a batch listener set up...
B
A
And that's because we added the thread-local suppression.
A
So yeah, I will look at this today, and we'll definitely hold the patch release for it, for sure.
A
Of the two, the first one I was like, no, no urgency, and it wasn't like a... This one is actually more critical. Yes, let's try to get the patch out.
B
The second one is, I also added this link; I probably should have added my name next to it. Some user filed the bug that the start time and end time are different between metrics and spans, and I think we've probably already talked about this area, because there was an issue that you created some time ago.
B
I vaguely remember that when I added these time extractors, I think I implemented it once, but we removed it and decided to rely on the SDK for times. I don't remember why, but maybe it's time to go back and revisit the decision.
C
Yeah, I mean, this is the general clock implementation that's used everywhere in OpenTelemetry: we take a snapshot of the current wall-clock time with currentTimeMillis, and then we use nanoTime from then on to do any timing. And I think we initialize one of those... I remember when we initialize that, whether it's at span creation or parent span creation, I don't remember exactly; I think it might be the parent span, like when there's a span with no local parent.
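The scheme described here, one wall-clock snapshot at anchor time with monotonic deltas afterwards, can be sketched roughly like this (class and method names are illustrative, not the SDK's actual API):

```java
// A minimal sketch (not the actual SDK class) of the anchored-clock idea:
// snapshot the wall clock once with currentTimeMillis, then derive every
// later timestamp from nanoTime deltas, so all timestamps taken against
// the same anchor share one consistent, monotonic timeline.
public class AnchoredClock {
    private final long epochNanos;   // wall-clock time at the anchor, in nanoseconds
    private final long anchorNanos;  // monotonic reading taken at the same moment

    public AnchoredClock() {
        this.epochNanos = System.currentTimeMillis() * 1_000_000L;
        this.anchorNanos = System.nanoTime();
    }

    // Current epoch nanos: the wall-clock snapshot plus the monotonic delta.
    public long now() {
        return epochNanos + (System.nanoTime() - anchorNanos);
    }
}
```

The design choice this illustrates: currentTimeMillis can jump (NTP adjustments), while nanoTime is monotonic but has no fixed epoch, so combining one snapshot of the former with deltas of the latter gives timestamps that are both absolute and internally consistent.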
A
C
A
E
E
Yeah, adding a few things; they shouldn't take a ton of time to talk about. So, I am interested in adding some examples to the OpenTelemetry Java docs. We at New Relic have our own OpenTelemetry examples repository, but a lot of it contains examples that kind of apply to everyone, not just customers of New Relic. And so, when possible, when the example doesn't demonstrate something specifically New-Relic-related, I'd like to upstream those. And so, before I do that...
E
I'm just interested in whether anyone has any general thoughts on how to go about adding more examples to that. Whenever I update the version in the examples repo today, it's kind of cumbersome, because I manually run the examples that I think are subject to change; there's no sort of CI/CD around it, besides compile-time verification. And I know the instrumentation folks have learned some things from the smoke tests, so can any of that be applied to this docs repository?
A
So, at least we're compiling them. Have you found cases where that alone hasn't caught things? Because I'm assuming the examples are going to be pretty simple.
E
There have been; they don't happen very often, but somebody reported one time, for example, that the Prometheus example didn't work despite the fact that it compiled. Something about our implementation had changed.
C
Yeah, the Prometheus example has been problematic since I've been on the project, just because of trying to run it in local Docker, with Mac Docker networking, which is terrible: getting scraping to work between the Prometheus running in Docker and the app running outside Docker is a mess.
E
A
Has it been... you said when you've been bumping the versions here it's been cumbersome; has that just been because of all the breaking metrics API changes?
E
A
I would be fine with that, and if we see that it's a problem... I mean, I don't want to make it more cumbersome to maintain these by adding smoke tests across all of them, because that's also, as Laurie will attest, cumbersome as well. Right, yeah, definitely heavyweight, and we're still chasing down sporadic failures and container memory issues on the smoke tests in the instrumentation repo.
A
E
Maybe we wait till we see that it becomes a problem, and I guess, to limit the possibilities for breakage, we just limit the number of examples to a few that we think are representative of core use cases, so we don't let it balloon into, you know, 20 examples; we just have a small collection.
A
E
I think that'd be nice, but not a high priority for me. So if folks don't mind, I think I'll just add a couple of examples and not really worry about it for now. The two things that I'm interested in adding examples for are the micrometer shim, which I think is a good one, and then just a basic demonstration of the agent.
E
A
Yeah, that's a great idea. On the agent, do you have an idea of what you would have us do? Just a separate simple sample app that you run the agent against, or would you try to use something like Spring Pet Clinic?
E
A
E
E
An example like that would be cool, and you could demonstrate several things in there. You could do metrics and tracing, and then, since Logback is, I think, enabled by default in Spring, you could demonstrate logs as well, so everything would kind of be working.
E
All right, moving on. So yeah, I'm interested in making exponential histograms available for experimental use. They're great, and they're superior to explicit bucket histograms in several key ways. They're not stable in the specification yet, but they will be soon. And what I mean by making them available for experimental use is just having it so that you can use them and reference them, but they'll still be in internal packages, so we don't make that part of our API yet.
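For context on one of the ways exponential histograms are superior: the base-2 exponential scheme in the OpenTelemetry metrics data model maps any value to a bucket with a closed-form expression instead of a fixed boundary list. A rough illustration (a sketch of the mapping function from the data model, not the SDK's implementation):

```java
// Sketch of base-2 exponential histogram bucket mapping. With
// base = 2^(2^-scale), a value v falls in bucket i where
// base^i < v <= base^(i+1), i.e. i = ceil(log_base(v)) - 1,
// and log_base(v) = log2(v) * 2^scale.
public class ExpBucket {
    public static int index(double value, int scale) {
        double scaleFactor = Math.scalb(1.0, scale);  // 2^scale
        double log2 = Math.log(value) / Math.log(2.0);
        return (int) Math.ceil(log2 * scaleFactor) - 1;
    }

    public static void main(String[] args) {
        // At scale 0, base = 2: bucket boundaries are powers of two.
        System.out.println(index(4.0, 0));  // bucket 1: 2 < 4 <= 4
        System.out.println(index(5.0, 0));  // bucket 2: 4 < 5 <= 8
    }
}
```

Because the scale parameter just shifts the resolution, two histograms at different scales can be merged by rescaling indices, which is a big part of the appeal over explicit buckets.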
E
C
Yeah, so figuring out how to expose them without locking us into the API, I guess, is the tricky part.
C
C
A
Yeah, I brought one of those ideas over into the consistent sampling PR in the contrib repo, for hiding some stuff. Yeah, it's a good idea.
E
All right, yeah. So, super excited that this PR got merged: the database connection pool semantic conventions. Four months later. Oh my gosh, what a journey! So I'm just curious what the plans are for this, if any exist.
B
So we have like five or six database connection pool instrumentations over in the Splunk distro, all the major important ones, and we will slowly start to upstream them now that there are semantic conventions, probably starting with Hikari, since that's the most popular one; I think it's the default one, at the very least. So I guess you can expect some PRs in the coming weeks.
B
A
Cool. Let's see: I just remembered I wanted to bring up... I don't know, there was some good discussion already, but I wanted to get your thoughts, Jack, on temporality. So, just at a high level: when exporting metrics, we export a start time and an end time, and that's sort of the interval over which the metrics were gathered. So for cumulative, of course, it's from the very beginning, always further and further; for delta...
A
...it sort of moves along like that. But when I was integrating that into our distro and running some smoke tests, it threw me for a while that this first interval, the first delta interval, is not, say, your 60 seconds. Your first delta interval could be from the very beginning of the metric subsystem, which is technically correct.
A
For sure, I mean, because it's measured over... there was nothing prior to that. But I'm worried about confusion, because we do report that interval, and so having, you know, some intervals be way different than your configured 10-second interval seems weird...
E
...to me. Yeah, I know, and I agree. And just to give a concrete example for this: suppose you set up the periodic metric reader to have a 10-second collection interval, so you're collecting every 10 seconds, and you have a histogram; you're collecting measurements, and then, five minutes into the start of the application, you collect a measurement with a new set of attributes, a set of attributes that has not previously occurred.
E
The interval for the point when that set of attributes gets exported will not be the last 10 seconds; it will be since the beginning of the application. And so that's the confusing part. The intervals are correct: if you've seen the set of attributes before, then, you know, the start time was the end time of the last collection; but whenever a new set of attributes appears, everything gets funky and we go from application start.
E
Yeah, I think both are technically correct, but one is more intuitive and less surprising.
E
Yeah, so the start time and end time of each of those correspond to the window of that collection: so zero to one, one to two, two to three, three to four. But if we see a new set of attributes, the first time we see it, that point will get exported with a start time of zero and an end time of whatever the current time is. So yeah.
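The behavior being described can be shown with a toy model (hypothetical names, seconds instead of real timestamps): the start time of each delta point is the end of the previous collection for that attribute set, or the SDK start time if the set has never been seen before.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of delta-temporality start times, illustrative only: tracks,
// per attribute set, the end of the last collection, and falls back to the
// SDK start time for a set that has never been exported before.
public class DeltaIntervals {
    private final long sdkStartSeconds;
    private final Map<String, Long> lastEnd = new HashMap<>();

    public DeltaIntervals(long sdkStartSeconds) {
        this.sdkStartSeconds = sdkStartSeconds;
    }

    // Returns the [start, end] interval exported for this attribute set at
    // this collection, and records the end time for the next delta point.
    public long[] collect(String attributes, long endSeconds) {
        long start = lastEnd.getOrDefault(attributes, sdkStartSeconds);
        lastEnd.put(attributes, endSeconds);
        return new long[] {start, endSeconds};
    }
}
```

With a 10-second interval, an attribute set seen from the start exports [0,10], [10,20], [20,30]; a new set first seen at t=30 exports [0,30], which is technically correct but surprising next to the configured interval, exactly the confusion discussed above.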
F
A
Do you think we should... I'll also try to clarify it in the spec.
E
Oh, I don't know, I mean, what do you think? I'm inclined to say that your point of view on it is what the spec implies, like, your take on things is what the spec implies, and that, you know, we're just doing the wrong thing.
E
A
All right, any other topics anybody wanted to chat about?