From YouTube: 2021-10-07 meeting
C
It was fine, travel is not fun. Oh.
C
Well worth reading Gödel, Escher, Bach, especially if you are interested in things like Gödel's incompleteness theorem, and how Bach wrote his music and Escher's art. It's an excellent treatise that also dives into cognitive science and other really interesting philosophical discussions.
C
I'll share — I've been telling people the coolest talk I went to was by a woman who's a particle physicist at CERN, but also a modern dance choreographer and modern dancer. She took everything she knew about using neural networks to analyze high-energy particle physics data, put on a motion capture suit, recorded herself dancing, put that through her neural networks, and used it to inform new moves and a better understanding of how her body interacts with itself when she dances — all sorts of really cool stuff.
C
I guess I'm here too, I should probably put my name. Yeah, so: release week. I think this is the usual monthly cadence, where we try to release on Friday and it usually ends up being bumped to the weekend or Monday, but we do our best for Friday.
C
So I just wanted to have a check-in, and I'll talk to Anuraag tonight to make sure there isn't anything we need to get in before we do 1.7. There are a lot of changes to a whole bunch of internals in this release, so it feels at least somewhat risky, but there's definitely a lot of internal stuff that happened: more protobuf and exporters, and a lot of optimization throughout that part of the code.
A
Yeah, I'm really interested in the gRPC-without-the-gRPC-library support. It's going to let us shave off like four megs from the agent distribution — from the new slim distribution that just has OTLP gRPC.
C
So we've got the first exemplar support in 1.7. I'm trying to remember — we had... I think we actually have.
E
We don't — it's not in yet. No, close, but we have the preview of the whole SDK API, though, so multiple readers are now supported: multiple metric readers and exporters. They're not supported in autoconfigure yet — I kind of failed on that, we'll get there. So yeah: multiple metric readers, and the view API, I think, is likely to be what will be public when everything's public.
E
What else landed? Exponential histograms are in the data package — you just can't make them yet, that's what it is. I knew that I merged a PR with them in there.
E
I think users who are sampling traces will notice a hit in metric performance. If it's enough that we see complaints, we're going to have to spend some more time optimizing. Do we default to having exemplar sampling on — we default to sampling exemplars? Yes, yes.
E
Yeah, I did a lot of performance tuning with the benchmark code, but it hasn't really been benchmarked in the agent, so it could be that we want to disable exemplar sampling in the agent if we see issues. People who are doing hundred-percent trace sampling and always-sample will notice a hit; those who are doing, like, a 10% trace sample, less so.
E
Basically
metric
recording
moves
from
about
20
to
40,
mil
nanoseconds
to
20
to
70
nanoseconds
per
per
measurement.
So
it
can.
It
can
as
much
as
double
the
the
metric
ingestion
cost.
But
I
don't
know
what
that
looks
like
in
an
actual
real
app.
Yet.
E
We can decide. Yeah, I mean, the goal is that we want them on with sampled traces by default, and to fix all the performance issues related to that. But I don't know — I am totally fine with having it disabled in this release and hammering on it.
E
Folks at Google — meaning like myself and my team, not like actual service owners and things like that. Until metrics is stable, I'm not going to be able to get service owners to take it on. So, unfortunately, you get my kind of hammering, which is unrealistic hammering, but also worst-case-scenario hammering.
C
Interesting. Okay, one might argue that the author's company should be willing to support that author and test things out in their systems before all the rest of us do. But, you know, we're a community, so it's all good.
E
Well, yeah, I guess we will offer the best that we can, but I'm not going to be able to get this to run on, you know, GKE's core infra, right? No — I don't know, but we'll hammer the hell out of it. It's just, in terms of what real-life overhead looks like... yeah, yeah. I don't think I'm...
A
And Josh — you talked about perf on metrics, but how big is that relative to — I mean, assuming we're, you know, capturing spans? Spans are already — I mean, in general, spans tend to be more expensive anyway.
E
I
wanted
to
get
you
that
number
this
week
and
I
unfortunately
couldn't
do
the
conference
fun.
It's
been
a
real
stressful
week,
but
anyway
I
don't
know
yet,
but
my
suspicion
is
that
that's
true.
I've
mostly
done
isolation
tests
just
to
optimize
the
hot
path,
because
I
thought
that'd
be
the
most
critical
thing
to
do
for
this
first
release
for
metrics,
so
the
the
hot,
the
hot
path
and
synchronization
is
been
hammered
on
a
bit,
and
we
know
what
the
performance
is.
E
There's
some
things
we
can
do
to
possibly
make
it
better
in
terms
of
overall
overhead
but
tracy,
not
sure
yeah.
I
suspect
that
tracing
will
cost
more,
but
then
again,
we'll
see
we'll
see
that
you
will
take
a
memory
hit
for
sure
with
metrics.
A
And what's with users who enable it, say, in the agent — what's the backend experience? Does the backend have to support this?
E
Oh, so we did a few things in this release, actually. Still, if you don't provide an exporter for metrics — like, if you don't configure one — everything's disabled and you won't see any hit. So you actually have to configure an exporter for metrics.
E
Yeah, so it's not actually taking any span attributes. What happens is: when you record a measurement and you're doing exemplar sampling, if there's a sampled span — by default, if there's a sampled span — there's a chance that we sample an exemplar; it's not guaranteed. It has to be a sampled span in the context when the measurement is recorded. So let's say I'm recording latency — this is the example — I'm recording latency into a histogram, okay.
E
When
latency
was
recorded
into
an
exemplar
and
then
the
exemplar
will
have
a
timestamp,
it
will
have,
it
might
have
attributes
associated
with
the
measurement
of
the
metric,
not
the
span
or
trace
just
the
measurement
of
the
metric
if
it
needs
additional
things
there
and
it
will
have
the
span
id
and
the
trace
id
in
it
and
the
value
that
was
recorded
so
like.
If
I
have
a
histogram
bucket-
and
you
know
I-
I
am
sampling
my
high
latency
things.
You
know
for
things
over
10
seconds.
E
I
might
have
an
exemplar
of
a
trace
associated
with
something
over
10
seconds.
I
might
have
another
example
for
a
trace
associated
with
things
between
5
and
10
seconds.
Another
example
are
things
between
you
know
one
and
five
seconds
at
maximum.
I
will
have
one
exemplar
per
bucket
in
a
histogram
or
I'll
have
up
to
two
exemplars
for
sums.
If
I'm
doing
some
base
things-
and
you
don't
see
any
exemplars
for
gauges,
you
don't
see
any
exemplars
for
async
instruments
so
like
gc,
related
java
stuff,
because
there's
no
possible
span
in
context
there.
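The "at most one exemplar per histogram bucket" behavior described above can be sketched with a toy reservoir. This is a simplified illustration, not the actual SDK types: the `Exemplar` record, bucket boundaries, and "keep the latest" policy here are all assumptions made for the sketch.

```java
// Toy sketch of per-bucket exemplar sampling: an exemplar keeps the
// recorded value plus the sampled span's trace/span IDs, one per bucket.
public class ExemplarReservoir {
    record Exemplar(double value, String traceId, String spanId) {}

    private final double[] boundaries;   // bucket upper bounds, e.g. {1, 5, 10}
    private final Exemplar[] exemplars;  // one slot per bucket (last = overflow)

    ExemplarReservoir(double... boundaries) {
        this.boundaries = boundaries;
        this.exemplars = new Exemplar[boundaries.length + 1];
    }

    // Offer a measurement; only measurements made inside a *sampled*
    // span are eligible, mirroring the default described in the meeting.
    void offer(double value, String traceId, String spanId, boolean spanSampled) {
        if (!spanSampled) return;
        int bucket = 0;
        while (bucket < boundaries.length && value > boundaries[bucket]) bucket++;
        exemplars[bucket] = new Exemplar(value, traceId, spanId); // keep latest
    }

    Exemplar exemplarFor(double value) {
        int bucket = 0;
        while (bucket < boundaries.length && value > boundaries[bucket]) bucket++;
        return exemplars[bucket];
    }
}
```

A measurement of 12 seconds with boundaries {1, 5, 10} lands in the overflow bucket and replaces any previous exemplar there; unsampled measurements leave no exemplar at all.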
E
There's an exemplar data structure that sits underneath the metric data point. So it's kind of like how spans have events: metric data points have exemplars.
E
It supports exemplars, and our Prometheus exporter will also support exemplars. We added the Prometheus support, I think, in the previous release, but it didn't matter because exemplars weren't wired through yet; now you should start seeing exemplars show up for Prometheus.
A
Oh
cool
and
you
just
see
the
rod,
there's
no,
like
actual
linking
to
your,
I
mean
you
just
get
the
raw
trace
id
span
id
over
there
and
you
copy
paste
that
into
your
tracing
system.
For
now,.
E
So
I
guess
it
depends
on
your
system
that
you're
using
I
know
for
so
for
google
cloud
monitoring,
there's
actually
a
direct
visual
link.
So
if
you
see
a
histogram
distribution,
there'll
be
like
a
little
icon
that
says:
there's
a
there's
a
trace
at
this
particular
quantile.
You
can
click
on
it
and
go
look
at
the
trace.
I
don't
remember
how
prometheus
is
doing
whether
or
not
you
can
configure
you
know
a
link
to
your
tracing
system.
E
I
know
that
that
it
supports
it.
It's
still
considered
kind
of
experimental
in
prometheus,
but
I
don't.
I
don't
know
what
people
are
doing
in
practice,
but
you
should
be
able
to
take
that
trace.
Id
span
idea
make
the
link.
E
We
don't
include
the
trace
flags
in
the
in
the
recording
of
example.
R.
Is
that
something
we
need
because
that'd
be
important
to
know?
That's.
E
Yeah,
so
the
if
you
need
the
trace
flags
that
should
be
recorded
as
part
of
the
trace.
The
idea
here
is
we
just
record
enough.
I
identified
information
to
link
the
two
and
it
assumes
that
you
have
a
system
that
records
both
sides
right.
B
Well,
one
of
the
reasons
why
we
we
use
it
was
correlation
between
traces
and
logs,
for
example.
So
if
you're
sampling
a
trace
and
you're
adding
information
into
the
the
the
logs
themselves
and
being
able
to
determine
whether
or
not
that
data
is
actually
going
to
exist
in
the
back
end,.
E
Well, so, right now, by default, exemplars are only sampled when OpenTelemetry has the sampled flag.
E
Okay,
you
can
you
can
sample
any
possible
measurement,
that's
a
thing
you
can
turn
on,
but
I
wouldn't
recommend
it
and
we
have
it
there
just
in
case
people
want
it,
but
that's
not
the
idea.
The
idea
is
only
when.
A
And
then
it's
kind
of
redundant,
so
that
makes
sense.
Thank
you
yeah
and
josh.
One
last
question
from
the
agent
perspective:
is
there
anything
we
should
be
thinking
about
from
the
view
api
perspective.
E
Yes,
so
whenever
we
start
actually
specifying
standard
configuration
for
open
telemetry
the
view
api
is
how
you
kind
of
control,
what
metrics
come
out
and
how
you
control
like
label
aggregation
and
that
kind
of
junk.
If
you,
if,
if
you're
familiar
with
prometheus,
rewrite
rules
right,
my
expectation
is
in
open
telemetry.
This
becomes
a
probably
a
yaml
file
that
sdks
ingest
around
view
api
configuration
and
it
will
be
important
for
the
agent
to
kind
of
be
able
to
do
that.
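As a rough illustration of the kind of YAML file the speaker is speculating about — to be clear, no such format had been specified at the time, and every key name below is made up for the sketch:

```yaml
# Hypothetical view configuration an SDK might ingest; none of these
# keys come from an actual spec — this is only the general shape.
views:
  - selector:
      instrument_name: http.server.duration
    view:
      aggregation: explicit_bucket_histogram
      attribute_keys: [http.method, http.status_code]  # drop all others
  - selector:
      instrument_type: counter
    view:
      temporality: delta
```

The point is that selectors pick instruments and views rewrite their aggregation, attributes, and temporality — the same knobs discussed in the meeting.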
E
It
might
be
that
the
collector
is
enough
for
us
that
people
always
export
from
java
to
a
local
collector
and
do
their
rewrite
rules
in
that
local
collector.
But
I
have
a
feeling
that
we're
going
to
get
pressured
to
have
robust
configuration
around
views
and
around
what's
exposed
around
what's
aggregated
around
delta
and
cumulative
that
sort
of
thing.
So
I
expect
us
to
get
a
lot
of
user
demand
there
over
time
as
we
get
more
adoption.
E
Otherwise,
I
don't.
I
don't
know
what
the
I
really
don't
think
the
agent
should
expose
anything
high
level
outside
of
like
delta,
aggregation
or
cumulative
aggregation
right.
E
Every
every
single
instrument
has
a
default
view
and
right
now
that
default
view
is
basically
what
prometheus
would
want.
That
probably
will
change
when
exponential
histograms
hit,
but
right
now
that's
what
it
is.
A
So,
for
now
in
the
agent
this
is
this
would
just
be
what
people
get.
E
Yeah
yeah,
the
the
nuance
here
is
non-monotonic
sums
are
gauges
in
prometheus,
but
we
actually
expose
them
as
non-monotonic
sounds,
but
with
cumulative
aggregation
temporality.
If
that
made
sense
to
you
awesome,
if
it
didn't,
hopefully
you
never
have
to
care.
F
I
I
do
think
it's
gonna
be
a
while,
or
maybe
like
some
time
before.
You
know
the
spec
comes
up
with.
You
know
language
around
configuration
of
the
views
and
I
think
it
would
help
if
we
built
an
spi
hook
that
people
could
use
to
configure
that
in
the
interim.
C
You can fully configure your SDK in your meter provider configuration, so I think...
E
It's
hard
yeah,
so
so
right
now,
both
exporters
and
views
are
exposed
on
sdk
meter
provider
and,
as
john
said,
there's
an
spi
hook
for
it.
So
you
should
be
able
to
do
all
view
related
activities
and
even
x,
like
advanced
exporter,
things
with
that
spi.
A
Trying to remember — we did something hacky in a previous release.
E
Oh,
that
was
around
moving
crapo,
where
they
called
before
the
value
observer
metrics.
We
made
them
be
histograms
with
the
view
api.
Is
that
what
you
mean.
A
This
was
for
the
the
cardinality
explosion
that
we
had,
so
we
we
restricted
where
we
were
going
to
eventually
have
the
list
of
attributes
metric
the
metric
dimensions
in.
I
thought
this
view
default
views,
and
so,
but
we
didn't.
E
That is going to be probably a big kerfuffle. I don't know if you've seen the discussions around limiting attributes and having attribute count limits in the spec around traces, and the implications for metrics — but removing attributes from a metric actually changes its identity, and it's not clear which attributes you can remove; that's why we rely on view configuration for that. You should be able to use a label processor to get rid of labels you don't want from instrumentation, but you shouldn't default to having all those attributes to begin with, because we'll just roll over and die from a memory standpoint. We don't have a way to limit attributes at all, and that's an open bug around memory usage in the SDK that we're still kind of talking through ideas of how to fix. I think John had some ideas there, but...
E
I
have
no
ideas
that
I
think
aren't
terrible
for
the
user
in
one
way
or
another
right
so
still
trying
to
think
through,
like
a
good,
a
good
solution
there.
That's
that's
decent,
and
the
best
idea
here
is
making
sure
that
we
know
what
attributes
are
informational
versus,
identifying
and
dropping
informational
attributes
as
quickly
as
possible.
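A minimal sketch of that "drop informational attributes early" idea, assuming the split between identifying and informational keys is already known — the key names and the `AttributeScrubber` helper here are made up for illustration:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class AttributeScrubber {
    // Hypothetical split: keys that define the metric's identity vs.
    // merely informational, high-cardinality ones (like a raw URL).
    static final Set<String> IDENTIFYING = Set.of("http.method", "http.status_code");

    // Keep only identifying attributes before aggregation, so informational
    // ones never multiply the number of metric streams held in memory.
    static Map<String, String> scrub(Map<String, String> attributes) {
        return attributes.entrySet().stream()
            .filter(e -> IDENTIFYING.contains(e.getKey()))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```

Dropping the URL before the aggregator ever sees it is exactly what keeps the cardinality bounded, per the discussion above.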
A
Can you dump the link to that spec here?
C
I think that was basically the only thing that I could come up with, you know, in a 30-second interval. I'll find the issue.
E
Cool
that
that
actually
might
be
what
we
go
with,
because
it's
simple
to
implement
easy
understand
and
doesn't
blow
up
so,
but
it
there's
there's
an
interesting
restraint
like
anyway,
if
you,
if
never
send
a
url
as
a
metric
label.
If
you
can
help
it
always
try
to
scrub
that
sucker
down
to
something
that's
less
cardinality
crazy.
B
So
so
what
will
be
the
net
impact
when
we
actually
document
the
informative
versus
identifying
attributes
like?
Will
there
be
a
spec
update?
Why
am
I
asking
because
the
collector
can
modify
these
attributes
or
these
labels?
So
we
need
to
be
clear
and
maybe
even
make
it
clear
to
the
user,
so
they
know
what
they're
doing
and
whether
they're
impacting
the
identity
of
the
metric
or
not.
E
Yeah
yeah
yeah,
so
we
we.
So
we
have
a
few
a
few
things
written
about
that
when
the
collector
changes
a
metric,
there's
a
thing
called
the
single
writer
principle
as
long
as
effectively
there's
one
writer
of
the
new
metric
with
less
attributes,
everything's
gravy
right,
and
so
that
idea
just
means
you
have
to
aggregate
to
one
location,
everything's,
fine.
The
second
thing
around
informational
versus
identifying,
there's
a
long
discussion
on
this
around
resources
around
the
collector
and
metrics.
I
can
try
to
find
that
link
and
send
it
to
you.
E
I
think
it's
something
about
like
how
do
we
know
what
labels
to
prove
preserve
for
prometheus?
What
resource
labels
to
preserve
for
prometheus
was
the
way
I
phrased
it,
but
there's
a
bunch
of
discussion
in
there
around
options.
Like
I
said,
I
don't
think
we
have
a
great.
A
Cool
all
right
yeah,
so
one
seven
looking
forward
to
that.
C
We don't have any kind of log provider or anything like that, like we do for metrics and traces, but we do have a batch log processor. Is there a simple log processor? I don't remember — but there's a batch log processor, there's the log exporter interface, and we now have an OTLP implementation of the log exporter.
C
All
of
that
is
in
this
release.
So
if
someone
wanted
to
manually
wire
up
logging,
they
could
do
that,
but
we
don't
have
any
first
class
sdk
support
for
it.
It's
just
all
the
all
of
the
public
classes
are
there.
If
people
want
to
mess
around
with
it
and
figure
it
out
and
start
using
it,
but
no
top
no
top
level
official
support,
but
all
the
infrastructure
so.
A
If
we
wanted
to
start
instrumenting,
say
log
for
j
in
the
agent,
can
we
write
directly
to
that
exporter?
We
write
directly
to
the
exporter.
C
The
model
we
would
be
using
here
is
you
would
so
you
want
to
write
a
log4j
appender
that
outputs
otlp?
That's
that's
what
the
use
case
that
we're
trying
to
solve
for
and
what
you
would
do
then
usually
write
that
appender
and
that
appender
would
have
access
to
the
batch
log
processor
with
the
export
and
wired
up
to
it,
and
then
it
would
write
directly
to
the
log
processor.
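The wiring just described — appender → log processor → exporter — can be sketched with toy stand-ins. These interfaces are simplified placeholders for the shape of the pipeline, not the real OpenTelemetry SDK classes (and the real processor would batch rather than export per record):

```java
import java.util.List;

public class LogPipeline {
    // Simplified stand-in for the SDK's log exporter interface.
    interface LogExporter { void export(List<String> batch); }

    // A (non-batching, for brevity) processor the appender writes to.
    static class LogProcessor {
        private final LogExporter exporter;
        LogProcessor(LogExporter exporter) { this.exporter = exporter; }
        void emit(String record) { exporter.export(List.of(record)); }
    }

    // Sketch of a Log4j-style appender: each log event is forwarded
    // to the processor, which hands it to the configured exporter.
    static class OtlpAppender {
        private final LogProcessor processor;
        OtlpAppender(LogProcessor processor) { this.processor = processor; }
        void append(String event) { processor.emit(event); }
    }
}
```

The appender never talks to the exporter directly; it only knows the processor, which is what lets you swap in batching or a different exporter without touching the appender.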
F
Okay — is there any interest, maybe in the contrib repo or somewhere like that, in actually building these appenders? Yes, yes, yes.
F
Well,
we've
got
folks
at
my
company
that
are
interested
in
this
type
of
thing,
so
maybe
I'll
prototype
something
up,
see
what
it
looks
like
awesome.
That
would
be
super
fantastic.
A
Yeah, that's exciting — to get logging rolling, and get some output from the auto-instrumentation for logs. We had some very-early-days auto-instrumentation for Log4j, Logback, and java.util.logging that generated spans.
A
Well,
we
flip
flop
between
events
on
the
current
span
and
whole
new
spans
and
eventually
ripped
it
out,
but
that's
jack
of
your
that
was
more.
That
was
auto
instrumentation
it's
and
it
wouldn't
work
like
as
extracted
piece
but
there's
some
stuff
in
the
history
of
the
instrumentation
repo.
I
can
dig
out
if
you're
interested.
G
So we enabled the strict context checks a while ago and fixed some of the ones that failed more often; for the others, we basically rely on this being rerun on failure, so I built an ugly hack to automatically retry it.
A
Yeah, so — from talking previously with Anuraag, when we had tried something similar, his concern, which made sense to me once we talked through it for a while, was that the strict context checks are potentially still pointing us to issues. In particular, there was one that I was able to track down at the time where we end a span while a context is still open.
A
I agree it may not be feasible in all cases. I guess I'm not quite ready to give up on the experiment — but what specifically is the experiment causing problems with right now, as opposed to revisiting it later?
A
Is
it
giving
is
the
build
in
a
bad
state?
Even
though
we
have?
You
know
we
just
kind
of
explicitly
disable
it
on
modules.
We
know
it's
causing
a
problem.
G
We
haven't
disabled
it
or
no
modules
that
cause
problem
the
currently
I'll
build.
This
is
such
that,
if
a
test
fails,
then
we
automatically
retry
it
which
kind
of
hides
some
of
the
flakiness.
G
Ideally,
I
think
we
should
disable
the
retry,
but
with
the
strict
context
check
failures
in
place
like
we
can't
really
do
it
now,.
A
Do
you
think
I
mean,
does
it
seem
like
there's?
I
know
we've
applied
it
to
a
lot
of
modules.
Do
you
think,
there's
a
lot
more
modules
that
need
it?
Is
it
hopeless
to
just
throw
it
on
additional
modules
until
the
flakiness
goes
away,
or
do
you
think
that's
that
that
band-aid
is
just
gonna
end
up
covering
the
entire
repo.
A
So if we do this — I mean, it's good in that it still makes sure that we're not, like, permanently leaking something. I'm not sure it gives us too much comfort, as opposed to just removing the strict context check altogether.
A
But
we
could
do
this
to
get
to.
I
totally
support.
Getting
the
build,
having
the
build
in
a
solid
state
is
always
a
priority,
so
we
could
do
this
and
then,
if
we
ever
want
to
mess
around
in
a
branch
we
could
undo
the
the
context
the
check
the
check
retries.
A
Okay,
let
me
I
will
talk
to
honorag
this
evening,
see
if
we
get.
A
Agreement
on
that
approach,
yeah
because
at
least
we
have
them
the
mech,
the
mechanism
in
place
to
track
those
down
in
the
future.
If,
when
we,
if,
when
we
want
to
spend
time
on
that,.
A
Sounds
good
yeah
thanks
for
sending
that
and
pushing
for
the
the
build.
H
Yes — hey, everyone. I've had the chance to meet a few of you and bring this up during a couple of our calls — on the maintainers call as well as the collector's call — but I did want to get more feedback surrounding this. So, just for some context: I am doing some more research surrounding the getting-started experience for different OpenTelemetry components, and I did want to take this opportunity to reach out to more people and get more feedback. I've added the doc to this comment here itself; feel free to add in any observations that you have, as well as reach out to more people.
H
I
think
it's
going
to
be
valuable
to
kind
of
step
back
and
assert
certain
any
new
pains
or
any
pain,
points
or
observations
that
we
are
seeing
with
the
new
releases
that
we've
been
had
in
the
past
few
months.
I
think
it's
great
to
kind
of
reflect
on
this
right
now,
so
yeah.
A
So
are
you
yeah,
maybe
if
you're
looking
for
reviewing
of
the
docs
or
just
general,.
H
Oh
yeah,
so
I
think
just
general
thoughts
right.
I
think,
basically
out
of
my
discussion
from
the
collector
and
maintainer
calls,
I
think
we
just
wanted
to
put
together
this
talk
as
an
amalgamation
of
different
observations,
that
people
have
and
different
suggestions
as
to
what
we
can
do
to
improve
the
current
experience
or
or
what
they've
been
seeing.
A
Are
you
looking
for
things
that
cross
all
the
sdks
and
instrumentation
like
across
all
the
languages,
or
are
you
looking
sort
of
language
by
language?
Have
you
have
you
looked
at
the
java
stuff
already.
H
No,
I
think
this
is
definitely
more
general
and
I
think
not
specific
to
a
particular
language
I
think
more
cross-cutting
across
different
sdks
and
the
collector
itself
as
well.
I
think
feedback
across
that
and
seeing
how
things
integrate
with
each
other
as
well
as
any
other
pinpoints
that
come
across.
I
think
that's
going
to
be
helpful.
E
The
feedback
on
getting
started
yeah
there
was
like
a
set
of
survey,
results
that
I
remember
reading.
I
don't
know
if
this
is
it.
H
But
the
yeah,
so
there
was
a
user
research
form
that
was
circulated
a
couple
of
months
ago.
I
think
bob
has
shared
that
with
me
and
I
tagged
that
to
his
comment
as
well
and-
and
I
think
we're
still
going
over
those
studies-
and
he
shared
that
with
me
a
couple
of
days
ago.
So
that's
been
helpful,
yeah,
okay,.
E
I
I
wanted
to
call
out
specifically
the
thing
I
read
in
there
was
people
were
having
issues
going
between
auto
instrumentation
and
custom
instrumentation
when
they
wanted
to,
and
they
found
that
confusing.
E
So
I
was
actually
really
happy
to
see
that,
because
that's
an
awesome
use
case
that
it
it's
cool.
They
made
it
that
far
to
comment
on
this.
That's
all
I'm
saying
right
like
anyway.
That
was
the
one
that
I
read
that
I'm
like.
That
makes
a
hell
of
a
lot
of
sense
and
something
to
work
on.
H
Yeah,
definitely,
I
think,
just
going
through
the
steps
of
the
form
and
and
seeing
how
further
they're
getting
and
each
step,
if
they're,
actually
able
to
kind
of
give
us
some
perspective,
even
if
they're
kind
of
dropping
off
at
a
particular
step
along
the
user
research
journey.
I
think
that
also
gives
us
further
data
and
indication
as
to
what
they're
doing
right
now
and
what
can
be
better
right
so
yeah
for
sure.
I
think
that
was
great
feedback.
A
All right — any other topics anybody wanted to chat about?
D
Yeah,
okay,
I
have
got
most
of
the
work
done
to
get
the
jfr
piece
merged
and
put
up
as
a
pr,
but
I
had
a
question.
I
don't
really
know
what
to
put
for
the
schema
or
for
the
the
instrumentation
name
or
version.
Do
we
have
any
thoughts
about
this?
If
this
is
going
to
go
into
contra.
C
So,
like
the
resource
attributes
and
the
semantic
attributes,
classes
have
us
have
a
constant
in
them,
which
is
the
schema
that
they
represent.
C
Oh — it wouldn't fill out the instrumentation name and version, though, right?
C
They are here, and here's the schema that you should be adhering to — otherwise people wouldn't even be aware that it was a thing they were supposed to be doing. I think there's — now, it was probably not a Javadoc linkage, but there probably should be, wherever we talk about the schema URLs, a pointer over to SemanticAttributes and ResourceAttributes.
A
Then
for
the
instrumentation
name,
I
think
something
like
this
is
I
don't
remember
what
we've
done
in
the
contrib
repo,
but
I'm
I'm
just
guessing
that
this
is
what
we've
done.
If
you
want
to
start
with
that
and
in
the
pr
I'm
sure
we'll
we
can
double
check
what
the
convention
is.
C
D
Yeah
john,
I
think
I
shouldn't
be
worried
about
the
fact
that
this
the
semantic
attributes
class
is
in
tracing,
whereas
currently
this
this
was
all
targeted
at
metrics.
E
I have an open question on that, on one of the latest bumps. So, basically, to re-emphasize what John's saying: metrics aren't in a code-readable form in the semantic conventions.
E
So
we
cannot
synthesize
classes
from
them.
There's
been
an
open
to
do
around
it
and
there's
a
question
of:
should
we
stabilize
metrics
then
make
semantic
convention
generation,
or
should
we
do
it
now,
but
that's
why
it's
missing
so
in
the
meantime,
I
would
suggest
use
the
schema
url
from
the
tracing
one,
because
it's
going
to
be
the
same,
schema
url
and
you
have
to
hand
do
the
metric
names.
Well,
you
know.
C
Alternatively,
you
know,
alternatively,
in
this
case,
it
might
make
more
sense
to
leave
the
schema
url
null
because
you
don't
have
like
you're,
not
adhering
to
a
known
schema
with
your
instrumentation,
so
it
actually
might
make
more
sense
to
leave
that
null
in
this
case,
and
it
is,
it
should
be
nullable
I
mean
it
is
not.
Yes,.
E
It's
nullable
yeah
yeah,
that's
that's
a
good
idea,
so
keep
schema
url
null
and
then
as
much
as
you
can
abide
by
the
semantic
conventions,
and
then
we
can
fix
them.
If
you
run
into
issues
because
they're
still
not
stable,
that
sounds
really
good,
but
there's
a
there's
a
there's,
a
bug
around
metric
semantic
convention
generation
in
the
spec
okay.
Should
we
escalate
that
like
do
we
do
we
need
this
now.
D
I
mean
it
depends
on
how
long
it's
going
to
be
before
it's
actually
stabilized.
So
if
it's
going
to
be,
you
know
a
short
time
before
we
reach
stability.
I
would,
I
would
argue
for
for
for
not
escalating
this,
whereas
it
is
going
to
be
a
while.
Maybe
we
should.
A
And
then
the
then
the
instrumentation
version
would
be
the
version
of
the
the
package
that
your
pr
has,
which
we're
aligning
all
to
the
contrib
repo
has
a
single
version
throughout.
A
And
I
don't
think
we
have
a,
I
don't
know
if
we
have
a
good
way
in
contrib
repo,
yet
to
auto
populate
this.
That's
something
we
need,
but
I
would
for
now
is
it
required?
C
I mean, the only reason you might put some value there is if you want to track — like, if you're getting data in and you want to see which version of the library it came from, it can be really useful even just to put a placeholder in there right now that you manually increment, even if it doesn't line up with an actual version. It all depends on whether that's something that's going to be useful to whoever is consuming that data.
E
Yeah — do we generate a property file with that version number in it for the jars? I know that there was an issue with the JMX gatherer thing the collector uses, where it was reporting against 1.0 in every release we made, until it was bumped in 1.6. We do for the API and SDK.
D
Cool. Oh yeah — I had one other thing, sorry to jump in. So, basically, Erin Schnabel and I are going to take another crack at that Micrometer–OpenTelemetry bridging patch. We're meeting next Wednesday, which is the 13th, at — whatever time that is — 1pm Eastern, or 10 Pacific.
A
Cool — we are pretty much at time. Just briefly: some good progress, and I didn't even get through — I got through one of two pages of PRs.
A
We
can
still
always
use
help
on
that
issue,
a
lot
of
discussions
and
work
around
stabilizing
the
instrumentation,
the
instrumentation
api.
A
This
is
library
instrumentation
for
kafka,
now
broken
out,
which
is
great
addition
to
the
library
instrumentation.
We
did.
I
think
we've
discussed
this
before
we
are
changing
the
default
java
agent
artifact
in
one
seven
to
have
the
exporters
now
and
there's
a
separate
slim
distro,
which
only
has
the
otlp
over
grpc,
which
will
now
with
the
one
seven
changes
in
the
sdk,
where
we
don't
need
the
grpc
libraries
to
do
grpc.
A
This is a really nice change, both for naming — we have fewer things named with "context" — and VirtualField feels good. It's where we auto-inject a field into an object in the Java agent to track some stuff, but now we can also use that from library instrumentation; in library instrumentation it falls back to a weak hash map (or weak map), and that allows us to share that code between library and auto-instrumentation better. Yeah — and, four minutes over, trying to stop, but great to see everybody.