From YouTube: 2021-10-28 meeting
C
Yeah, so I added two issues today, which basically summed up what we discussed at the Tuesday SIG. The most important item is that we want to remove Caffeine from the instrumentation API completely and switch to, what was it, ConcurrentLinkedHashMap, which was the predecessor of Caffeine. We will probably move Caffeine over to the Java agent, so that the Java agent still uses the optimal cache implementation, but the instrumentation API uses a decent fallback that works everywhere.
C
So that was one issue that I created. The second one was about ClassValue and LongAdder. These are both classes that are absent on Android, and we use both of them in our own instrumentation API code.
C
Oh, and there's an Animal Sniffer PR that I pushed, but we would probably like to wait to merge it until those issues above are fixed, because it has a very long list of ignores that had to be added, since Animal Sniffer wouldn't pass otherwise. So, yep.
A
Yeah, I don't know how we would do Android support with any reliability without this.
D
I've actually been pondering that a little bit, because I'm trying to figure out how to write tests to probe this in Splunk's Android instrumentation. There is a testing library for Android that will let you run in something that's not exactly an Android VM, but it should at least get closer.
D
So I'm wondering whether this is something we should consider if we're really serious about Android support. I mean, Animal Sniffer should cover this, that's kind of its purpose, but I wonder if there's some more targeted Android testing that could happen.
C
I think WeakHashMap is fine, it doesn't use any of the forbidden APIs, or maybe I just haven't noticed, because the list of forbidden things from Animal Sniffer was like seven hundred items long, so I might have missed it. But yeah, Caffeine was the main problem here.
A
So I think, Lauri, as you were saying, because we talked about this earlier this week too: we did have one case, but normally we're either using it for a bounded cache or for weak keys, not for both. So in the bounded case we can use ConcurrentLinkedHashMap, and in the weak case we could use a normal WeakHashMap.
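A minimal sketch of that split, assuming the com.googlecode.concurrentlinkedhashmap library named above is on the classpath; illustrative only, not the actual instrumentation-api code:

```java
import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentMap;

final class FallbackCaches {
  // Bounded case: a size-capped concurrent map with LRU-style eviction.
  static <K, V> ConcurrentMap<K, V> bounded(int maxSize) {
    return new ConcurrentLinkedHashMap.Builder<K, V>()
        .maximumWeightedCapacity(maxSize)
        .build();
  }

  // Weak case: entries disappear once the key is no longer strongly reachable.
  // WeakHashMap is not thread-safe, so wrap it; acceptable as a lowest-common-denominator fallback.
  static <K, V> Map<K, V> weak() {
    return Collections.synchronizedMap(new WeakHashMap<>());
  }
}
```

Note that WeakHashMap compares keys with equals()/hashCode(), which is part of why the weak identity variant discussed next still has to be hand-rolled.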
E
It's really common to use a weak identity hash map for agents, because we don't know much about the objects that we use as keys. They might implement hashCode or equals in some weird way that throws exceptions, so everybody always uses weak identity hash maps, I think, or at least they should be. But since the JDK doesn't provide one, you have to copy-paste it from somewhere or build it yourself.
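For reference, a minimal sketch of the kind of hand-rolled weak identity map being described here (hypothetical, not code from the repository): the key wrapper holds the referent weakly and compares by identity rather than by equals()/hashCode().

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class WeakIdentityMap<K, V> {
  // Key wrapper that references the key weakly and compares the referent by identity.
  private static final class Key<K> extends WeakReference<K> {
    private final int hash;
    Key(K referent) {
      super(referent);
      this.hash = System.identityHashCode(referent);
    }
    @Override public int hashCode() { return hash; }
    @Override public boolean equals(Object o) {
      if (this == o) return true;
      if (!(o instanceof Key)) return false;
      Object a = this.get();
      Object b = ((Key<?>) o).get();
      return a != null && a == b; // identity comparison, never equals()
    }
  }

  private final Map<Key<K>, V> map = new ConcurrentHashMap<>();

  V get(K key) { return map.get(new Key<>(key)); }
  void put(K key, V value) { map.put(new Key<>(key), value); }
  // A real implementation would also drain a ReferenceQueue to evict stale entries.
}
```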
C
As a fallback, because you can't really add fields at runtime without an agent, and it's pretty much mandatory because it's used to pass context between the interceptor and the call factory, if I remember correctly.
E
Yes, so make the instrumentation API a bit smaller, in the sense that the stuff we can't easily provide on Android could maybe be dumped into some other artifact.
D
I would say we should be careful about saying that OkHttp is the only thing Android users need, because OpenTelemetry client-side instrumentation for Android is going to become a fully supported thing, and there are more libraries that need support right now than just OkHttp.
D
Users are also going to use the OpenTelemetry SDK and APIs to do instrumentation, and we should make sure that our instrumentation API continues to be Android friendly, so that we're not essentially saying Android users can't use anything beyond a very, very narrow amount of instrumentation.
E
I think that currently there aren't too many things that are broken on Android. Maybe some of those can be somehow extracted; some of the stuff is probably easy to fix, like the LongAdder, we can probably just replace it with something else, but the bounded cache and stuff like that is going to be troublesome.
A
Ideally, I would love for the instrumentation API to have the same story for both, to just be simple and not have this divergent story, but it does seem like it's going to be a good amount of work to make that happen.
A
Don't you vendor in JCTools, or did you?
D
I mean, Splunk does not need 1.7 for anything. The only thing that will get dicey is if we have a customer or user who decides to bring in 1.7 themselves, although I think the 1.7 SDK will work fine with the 1.6 instrumentation API when you're doing library instrumentation. We haven't changed anything in the APIs.
B
Yes, so this is something which came up just today. Josh, Erin, and I met to discuss whether we felt there was sufficient work and legs in developing a small subgroup to work on getting some metrics nailed down for the VM and the runtime itself. Basically, the biggest, thorniest problem in everyone's minds is GC, and I have this concept of what I call a T-shaped API, which is effectively that, because the different garbage collectors are so very different, you need to have some common core that every garbage collector will provide you with, which should be of utility to an SRE or other user.
B
More specifically, you can envisage metrics which are specific to exactly one garbage collector, but there is also a kind of middle ground: metrics which are applicable to an entire class of garbage collector. We were basically wondering whether there was some worthwhile effort in putting together a group to do this.
B
Our conclusion was that there is, and so this really is just a notification to the group that this is happening in the background. If anybody is interested in this or feels strongly about it, then please get in touch, I guess with me, and make sure you get invited when the meetings go up.
A
I've done it before, and it worked surprisingly well. It kind of scared me how easy it was, the amount of permissions that we all have here, but yeah, I just went in and did it.
F
Yeah, well, I'll bug Morgan to make sure we have a Zoom link and all that, so things are recorded. And then Google's new permissions for documents mean I have to remember how to create a publicly accessible one that doesn't break on you like we had earlier. But yeah, you should expect the invites to go out probably tomorrow or next week, depending on how long it takes to get everything submitted, because we have to add it to the README and the community thing too.
A
Cool, can you also ping the Java Slack channel when that goes out?
F
Tomorrow, though, just as a... oh sorry, tomorrow Eastern Standard Time, to clarify what tomorrow means.
F
Yeah, so I finally got my act together and got the auto agent running in GKE. I just have a single Java Spring app, and then I have another cron job that pings it every once in a while using an HTTP client, and I'm running it for a really long time to see what happens. My memory usage is unbounded right now, very slightly, because of these net.peer attributes: because I'm using a cron job, it keeps spinning up a new pod which has a new IP address. Eventually I'll probably run out of new IP addresses that pod can get allocated, and then it'll stop growing in memory.
F
But I have ginormous amounts of metrics right now, a very, very awkwardly high-cardinality stream, and I think we need to remove those. What I wanted to know was... so I'm going to be a little scatterbrained here.
F
Around semantic conventions, we don't have anyone who can do the work yet to get the tooling up to date. At a minimum, I think we can at least work on the spec a bit here, and I know for a fact that net.peer.ip and net.peer.port...
F
Okay, yeah. Actually, net.host.name is fine; net.peer.ip is one of the problems, and the port on the client is actually kind of a problem, a little bit.
F
So, okay, first of all, host name and host port should ideally be on the resource, not on the metric, right? But again, that's a discussion for metric semantic conventions and for us to talk about there, like where things live.
F
Well, yeah, these are on both, actually, and there's a problem. But let's talk only about server duration, that's the one I'm focused on right now, because I was trying to do exemplar latency evaluation. I was going to put a blog post together where you can see your server duration, where you can see exemplars, and where you can click through and do stuff. So I decided to run a long-running test. Everything worked great when I only had one metric data point.
F
All the queries worked, exemplars were there, it was beautiful, it was gorgeous. And then I let it run, and now I can't query without timeouts on my back end, because my back end is bad at high-cardinality metrics. I have, I think, about one to two thousand time series now being reported every minute, because it kept every single peer IP address of who the server is talking to. That's okay for tracing, but that's really bad for metrics, and it doesn't make sense from an aggregation standpoint, right?
F
If I'm talking about server latency from that endpoint, I don't really want my peer IP address to show up in there. What I want is to know that that endpoint had this latency to it, and then I'll have exemplars that tell me, look at this trace. I look at that trace, and I see who it was talking to, right?
F
It's net.peer.ip that's being recorded. net.host.ip would be okay, but net.peer.port and net.peer.ip are showing up on these metrics, and if it's in the semantic conventions, we can go fix it in the semantic conventions. I think they need a little bit of evaluation, but I guess what I'm suggesting is, when we have a histogram...
F
If we include the peer IP and port, effectively our histograms will always have one point in them. We're going to end up with a bajillion time series, or the histogram is less useful because it'll only represent a particular client, right? And if the client reconnects with another HTTP request, they might have a different port that they're using. So putting those two specifically on metrics is really, really problematic and is leading to a huge cardinality explosion. So it's not just that I'm cycling IPs, it's that I'm always cycling...
F
...ports too, with the connection. So the TL;DR is, what I'd like to do, if it's okay, is remove the problematic ones in the built-in auto-instrumentation stuff and take that back to the semantic conventions to fix. I think in this case we should be a little bit ahead; we should be driving the semantic conventions from this group, because you have instrumentation, and that gives you an advantage here.
G
So the metric semantic conventions are obviously still a work in progress, but we have real artifacts being used by real users that are subject to things like these bugs in the semantic conventions that can result in cardinality explosion. Would it help if we were just ultra-conservative about which attributes we use on the metrics that we're producing?
F
We should be, actually. If you read the metric semantic conventions, what's ironic is that they recommend never using those kinds of things as attributes, and then later on, when HTTP attributes came in, some of those things got added with caveats and nuances. But I do agree, we should probably be very conservative, and the rule of thumb should be:
F
If I expect the value of that label to change more than, say, 10 to 20 times for an instance of a server, it's probably not a good metric label; that's a good trace label, and you should have an exemplar linkage for that, as opposed to a metric label.
G
Well, I think these attributes are being generated by the Instrumenter API, where you define a bunch of attribute extractors, and then there's another method to just kind of add metric instrumentation or add HTTP server instrumentation.
A
And I'll show you, Jack: where is my horrible horribleness? We hard-coded the list somewhere. Oh.
A
Yeah, somewhere around here: TemporaryMetricsView. Yes.
F
Yeah, so here there are a few things that we probably want to remove, because they're just going to be causing churn, right? I can take a look at that list and send a PR with my recommendations for a first cut.
F
Hey, but now that I know how to search for these, I can fix it. Okay, cool. So it sounds like the to-do is: let's make a separate list for client versus server, because half of the things you have on there should belong on the client and the other half on the server, but we shouldn't mix them, because that's what's causing the problem. Then that should fix this.
G
Okay, cool. Another thought that's going through my head: when you're using the Instrumenter API, you define these standard extractors, like the HTTP server attributes extractor, or whatever it's called, and the client one. I think it's probably easy to implement those extractors in a way where you accidentally provide a high-cardinality return value.
G
Just somehow improve the clarity so that it's less easy to make mistakes and return high-cardinality values when you're writing these things. But then, I guess, the problem there is: how do you keep this stuff in sync?
F
Yeah, I think that's the hard part here. I don't want to limit the attributes available in traces just because metrics can't deal with high cardinality in general, right? So we have this dance, and that's one of the reasons the View API exists. I can actually go fix this with the View API, and in the agent I can make an extension.
F
I can register a view for server duration, or for every histogram really, that limits the attributes and removes them; that's built into views. So I can fix it today with an extension, and that's what I'm planning to do to rerun my test. Right now I'm just waiting to see how long it takes to blow up, but I can rerun the test with a fix against what we have today.
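For context, a rough sketch of the kind of view registration being described. Hedged: the builder methods shown (InstrumentSelector.builder().setName(...), View.builder().setAttributeFilter(...)) are taken from recent opentelemetry-java SDK versions and may differ from the release discussed in this meeting.

```java
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;
import java.util.Set;

final class DurationViewConfig {
  static SdkMeterProvider meterProvider() {
    return SdkMeterProvider.builder()
        .registerView(
            // Select the HTTP server duration histogram discussed above.
            InstrumentSelector.builder().setName("http.server.duration").build(),
            // Keep only low-cardinality attributes; net.peer.ip / net.peer.port are dropped.
            View.builder()
                .setAttributeFilter(Set.of("http.method", "http.scheme", "http.status_code"))
                .build())
        .build();
  }
}
```

In the agent case, the same registration can be packaged as an extension, which is the "fix it today with an extension" route mentioned above.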
F
So this is not an urgent bug fix, because there's a workaround; it's really crappy, but there's a workaround. Going forward, though, I think, like Jack says, if there's a way we can figure out this instrumentation story: there will always be a different set of attributes for traces and metrics, and metrics should probably always be a subset of traces. How can we make that more explicit?
F
Reiley has this idea of a hint API in metrics, where effectively you provide all of the attributes from instrumentation into metrics, and on the duration instrument we'll have a hint that says: just keep these five. By default that will always happen, and then, if users override it, they can get more information if they really want it. If they want that net.peer.ip to show up in their metric, they could override and get it. That's the long-term plan.
B
I've got another suggestion rather than a hint API, and this could be complementary, you could do either or both of these: a watchdog API, which you basically have to opt into, but if you run it, it keeps track of cardinalities and tells you what is running hot and what is producing high cardinality. So for early adopters and people who are receptive to bringing information back to this group, that could give us real-world field data about what actually is high cardinality.
F
Okay, that is totally implementable in the current metrics API. We have a single place to look at the cardinality of any particular metric, so we could totally do that, and we have an open bug around what the hell we do with high-cardinality metrics. A watchdog API that warns you when you hit it is something we could throw in real quick. I love that, that's great. It's really something that the SDK would do.
D
That was one of the suggestions I made in that issue I logged about cardinality; it cropped up in one sentence. Yep.
F
Yeah, actually, there were a lot of good suggestions you had there that we should look into over time, but I think that one in particular is more doable now and could have a lot of good value for us, especially now that we're using the temporal metric storage thing that always keeps cumulatives no matter what.
F
No, I like all of this. All of it terrifies me a little bit in terms of how to implement it, but we can do it. Yeah.
F
However, if we only run that during an export cycle, and that happens like once a minute, maybe that's okay for us: run the detection efficiently, pay the cost of figuring out how to reduce it at most once a minute, and then go insert our correction facilities wherever they need to go. Maybe that's okay, I think, yeah.
F
I can look into it, but first I want to stabilize what we have. We're still missing good error messages for views, and that really disturbs me, because every time I make a mistake I get annoyed at myself for not making better error messages. But that sounds like something where, if somebody has time to look into it, I can help guide where to plug in, or it might be the thing I pick up after we get what we have stabilized.
A
Yeah, I thought about tagging you on that question; I should have.
F
That is a question of how confident we are with our resource detection. But if we have good resource detection, I think we should not be recording IP address in metric attributes at all, because that should be accounted for in the resource.
F
And then we just need to fix the issue where resource attributes can go out in Prometheus as those components, right?
A
Can you explain that? I didn't follow the resource attribute part, because I'm thinking of net.peer being the client's net peer.
F
Oh, sorry, I mean the host, like host name and host IP, not net.peer. Sorry, I was thinking of those things that are labels, right? Those should really be in the resource; if we're going to record those against the metric, they should come from the resource. If we need them in Prometheus, we need to make sure that the resource semantic conventions get special treatment. I know for a fact there's a host name and a host IP and stuff that show up in the resource semantic conventions.
F
Yeah, there are complications today, of course, because, like I said, we don't have a good story around resource attributes making it into metrics. I think Harold Deust has a PR out for the spec on this.
F
Feedback here, here's another thing I was trying to do and couldn't figure out how to do. I want to suppress standard-out logging in a server using the Java agent and send all of my logs through OTLP, or sorry, through the SDK, but have an OTLP JSON writer on standard out.
G
I think that should be... so, I've been working on an implementation of the logging SDK, and I think when that's done, assuming there's an appender for whatever logging framework you're using, that would be doable, because there's already a logging exporter for OTLP JSON, I'm pretty sure. Which means you would just have to set up your appender to append logs to whatever OpenTelemetry log emitter you've configured, and that log emitter would be...
F
Yeah, that's beautiful. It's more that I want all of my logs to go through the OTel SDK first and then come out of that thing, so that the format is the OTLP JSON. You want to use the SDK as a format normalizer.
G
So the current OTLP logging exporter takes OpenTelemetry logs and formats them into the OTLP protobuf JSON format, and it calls a logger, like java.util.logging, to log those messages. So you'd have to configure basically just the logs from that logger, from that class, to go to standard out, and all the other ones to go somewhere else, you know.
F
Yeah, at a minimum, though, is it possible for me to get every single log going through the OpenTelemetry SDK and make sure nothing goes to standard out, right? Like, I want everything funneled through our SDK. Well, we need to...
D
Somebody needs to write some appenders, so that's what needs to happen. We don't have appenders for anything except Jack's prototype Log4j2 appender at the moment.
G
Okay, that makes sense. So I think the steps are: the PR I have outstanding to update the logging SDK, I think that'll probably get merged before the next release, and then, after that, a variety of appenders can be written in java-instrumentation; there's one prototype for Log4j. But depending on which framework you use, you need an appender for that framework.
F
Yeah, do we have an issue for siphoning java.util.logging off into our SDK? I was reading the Logback config when I was trying to figure this out, because a lot of our examples use Logback, and they have a very scary message about java.util.logging in Logback and efficiency. They're like, this could be upwards of a 60% performance hit if you do it this way, but they don't give you an alternative.
D
Is this something that would take java.util.logging and send it through Logback? Is that what you're talking about?
F
Apparently an 80% performance hit, according to their docs, which I don't want. I was kind of curious, I'm hoping we can do better, but I don't know if anyone's looked at it yet. I just read that, what, yesterday or two days ago, and I was like, holy crap, I never even thought about how bad that was.
A
Okay, from an auto-instrumentation perspective, I can say that we can instrument java.util.logging directly and avoid that, because we don't have to go through the public sort of adapter hooks.
G
So the thought being that you rewrite the java.util.logging bytecode, capture those messages, and send them to the SDK?
A
Yeah, you can capture the calls to java.util.logging, or underneath, when it's passing to the actual appenders. In past projects I've captured that and still let it flow through; it sounds like Josh also wants to suppress it from going through.
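A rough illustration of that capture-plus-suppress idea at the java.util.logging level. Hypothetical: OtelLogSink stands in for whatever emitter the (still in-progress) logging SDK ends up exposing; it is not a real SDK class.

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical sink, standing in for an OpenTelemetry log emitter.
interface OtelLogSink {
  void emit(String loggerName, String level, String message);
}

final class OtelJulHandler extends Handler {
  private final OtelLogSink sink;

  OtelJulHandler(OtelLogSink sink) { this.sink = sink; }

  @Override public void publish(LogRecord record) {
    // Forward every JUL record into the OpenTelemetry pipeline.
    sink.emit(record.getLoggerName(), record.getLevel().getName(), record.getMessage());
  }

  @Override public void flush() {}
  @Override public void close() {}

  static void install(OtelLogSink sink) {
    Logger root = Logger.getLogger("");
    // Optionally suppress the default stdout path by removing the ConsoleHandler.
    for (Handler h : root.getHandlers()) {
      if (h instanceof ConsoleHandler) {
        root.removeHandler(h);
      }
    }
    root.addHandler(new OtelJulHandler(sink));
  }
}
```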
F
I want that as an option, at least, yes. We have... so...
F
The use case here is: there's a default thing that collects logs when you run in Kubernetes, right? You can run an agent that sits there, reads your pods, and sucks logs out, and I wanted to prototype: what if I have an agent that sits there and just reads OTLP JSON, so I just write OTLP JSON out? There's a very efficient transfer there of writing that out to standard out for this thing, there are well-known ways to do it, and that agent is kind of built into a bunch of our Kubernetes setups.
F
I just want to see if it works, and I made something work, but I had to do a lot of weird hackery with Logback for it. I also like Log4j2 way better; I don't know why Logback is more popular in our examples, but whatever. Anyway, I was curious what our plan is here, because it's a use case I think is valuable, and I'm really impressed with how fast logging is moving now.
G
John, one thing I thought of while you were saying that: what if we gave the OTLP logging exporter, whatever it is, an optional ability to log out to standard out? Yeah.
D
I also wonder if we can set its... no, I don't know if the java.util.logging APIs let you set its default output some way, but I think that's a good idea. It probably should be the default, honestly, to make sure that we don't get into these loops automatically by default.
D
Yeah, and I don't know if java.util.logging has a way to say, when you declare the logger, that this one should all go to standard out by default or something like that. It probably doesn't, but something where it would default to something that wouldn't break people's apps is probably a good idea.
F
That brings up a good topic. There is an OTel roadmap that we're trying to put together of what is going to happen over the next nine months, through about June of next year, and the kind of deliverables the community can expect. One of the things I wanted to ask, and I forgot to put this on the agenda, sorry, is from a Java instrumentation standpoint:
F
What's the frequently asked question for OTel Java instrumentation? "When is this going to be marked stable, so I can take a dependency on it?" There is a roadmap in the java-instrumentation markdown that was last updated, I thought, relatively recently. No, sorry, 2020, not 2021. It was a year ago.
F
That's what people look at, and they're like, this isn't ready. So what I'm going to ask for is: there's a document with the roadmap that is going to turn into other artifacts that people can see, across the website and other things. I'm just trying to use a Google Doc, because I think the commenting feature is nice and I want to have lots of collaboration here. So that's the state table of where things stand today.
F
If you go below this, we're trying to do very high-level, quarter-by-quarter Gantt charts of what we're working on, why we're working on it, and when things land. One of the things I'd love to have in here... yeah. So there's the metrics SIG, for example, where we're going to be working on high-resolution histograms and expanded convenience features, and then in Q2 we'll start on the OpenCensus compatibility sorts of things, right? So that's my ask here for the Java instrumentation community.
A
Yeah, we can work on this. I can put something initial together and review it with folks next week.
F
No, this document is public and people can see it, and it's got a bunch of views, but that doesn't bother me. We're not going to blog about it or do anything with it until we're okay with it being stable. My plan is to cancel the meeting that always conflicts with the maintainers meeting, and we'll run through it and all agree to it before this goes public, or sorry, before it goes really public and not just GitHub-issue public.
A
Yeah, and we'll... let me fix this, because...
A
All right, we're at time. Any last comments? I didn't get to the week in review, but in a nutshell: lots of Instrumenter API conversions, including, I think, the last batch of like 10 or 12 Netty Instrumenter PRs. Thank you again, Mateusz, and thanks to everyone who's been driving all of these forward. We're almost there. It's awesome.