From YouTube: 2022-01-14 meeting
A
Phew, I was worried that we might have one of those Zoom problems.
A
Yeah, I wasn't sure what we would do. I probably should have thought that one through; I wasn't sure what we would do if the Zoom link didn't work.
A
So, let's start with the 1.10 release going out today or tomorrow. The last couple of things, I think, are all just doc-related. One is documenting the new logging instrumentation.
A
In particular, the naming around logging, because I'm not quite sure if this is how we've been referring to the difference: we have appender instrumentation that captures logs and exports them, and we have separate instrumentation that does MDC injection, injecting the trace ID and span ID into the logs.
E
Yeah, the name "MDC injection" is kind of missing the context that it's injecting the trace context into MDC. That would be lost on me if I was a consumer of this.
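For context on what the MDC injection gives you: once the trace context is in MDC, log patterns can reference it. A minimal logback sketch, assuming the MDC keys are `trace_id` and `span_id` (the exact key names depend on the instrumentation and are an assumption here):

```xml
<!-- logback.xml fragment (illustrative): print the injected MDC keys.
     The key names trace_id / span_id are an assumption, not confirmed syntax. -->
<encoder>
  <pattern>%d{HH:mm:ss.SSS} %-5level trace_id=%X{trace_id} span_id=%X{span_id} - %msg%n</pattern>
</encoder>
```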
A
How about the appender? Does that, or, I mean, we could almost call it something like "log4j exporter."
E
It's like the log4j appenders basically allow you to use log4j as the OpenTelemetry log API; that's kind of what we're doing there, right? Because if you use the log4j API, then logs are getting bridged to the OpenTelemetry log SDK. I'm wondering if we can reference that in this description somehow.
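The appender wiring being described could look roughly like this in a log4j2.xml; the appender's element name and package have changed across releases, so treat this as a sketch rather than exact syntax:

```xml
<!-- log4j2.xml sketch (element names illustrative): route log4j events
     both to the console and into the OpenTelemetry log SDK via the appender. -->
<Configuration>
  <Appenders>
    <Console name="Console"/>
    <OpenTelemetry name="OtelAppender"/>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="OtelAppender"/>
    </Root>
  </Loggers>
</Configuration>
```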
F
Alternatively, we already have a longer MDC instrumentation block that describes how the MDC injection stuff works. Maybe we could describe the appender approach there too, and just link to it from here: write that we support log4j2 versions 2.11+, and link to the document if you want to read more details.
A
Yeah, that's true, because it is just one library. I was splitting out some things like Akka.
A
And here we link to the external site, like Akka's, as opposed to our internal docs.
A
My bad, no, no: standalone library.
A
Yeah, I need to go through this page also.
E
So, you know, one column that gives the name of the library and links to the external library docs, and another column that links to the internal documentation, which describes whether there is standalone library instrumentation for it and how to use it, whether there is automatic instrumentation for it, and, if we get around to it, what that automatic instrumentation does. Because that's a question I'm always wondering when I see support for a library: what does "support" actually mean? Am I getting just traces?
A
All right, we'll look out for the big 1.10 release today or tomorrow, with Micrometer and logging as the big, big new additions. Come on, JCenter.
B
Was this you, Jason? That was me, yeah. All of the builds in the core library repo are broken at the moment.
B
And I was looking at the Gradle plugins status page, and it looks like they've fixed some things, but JCenter is still down, or having problems, and there are at least some plugins that were only hosted on JCenter, and those are all still broken. And it looks like that semver, or is it sember? I can't remember now which plugin; I pasted it in the channel the other day. That error is still happening. Yeah, that's that Gradle plugin, and I would assume it's being used by whatever does that.
B
I can't remember the name of the plugin that does all of our versioning, the versioning plugin, but yeah.
Everything is pretty much broken almost all the time right now.
A
Do you think that's why? Because I'm wondering why... I guess maybe we don't use this in the instrumentation repo, because our builds were failing all of yesterday morning, but by like three o'clock in the afternoon... yeah.
B
Because the instrumentation repo does manual versioning rather than using a plugin to manage that, right.
B
Yeah, Nebula, that's the word I could not remember. I'm guessing this is used by Nebula and is broken because of that. It's a guess; I haven't looked at any source. So my guess is: if we're going to fix this by not depending on it, if that's our strategy, we're going to have to rewrite our whole release process to not use Nebula, which is a big hunk of work.
A
I don't know, I don't know what we could do. I think trying to make changes to work around this is sort of whack-a-mole; who knows what else.
D
I mean, I can talk very briefly about some of the metrics stuff I've been working on since Monday. Yeah, so basically we're making some progress, but one of the things I've discovered is that we need to be a bit smarter about how we recognize which GC is in use, because out of the box the OpenTelemetry library complains if you have duplicate metric names, and some of the events can be sourced in different ways from different GCs.
E
So, in terms of duplicate definitions of metrics: it shouldn't cause any issue if all of the properties of the instrument match. So if the name, unit, instrument type, and description all match, then you should be good.
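The compatibility rule E describes can be sketched in a few lines; the metric name and property values below are illustrative, not the real SDK's identity check:

```java
import java.util.Objects;

public class InstrumentIdentity {
  // A duplicate registration is harmless only when every identifying
  // property (name, type, unit, description) matches exactly.
  static boolean compatible(String nameA, String typeA, String unitA, String descA,
                            String nameB, String typeB, String unitB, String descB) {
    return Objects.equals(nameA, nameB)
        && Objects.equals(typeA, typeB)
        && Objects.equals(unitA, unitB)
        && Objects.equals(descA, descB);
  }

  public static void main(String[] args) {
    // The kind of mismatch mentioned in the meeting: same metric,
    // slightly different unit spelling.
    System.out.println(compatible(
        "jvm.gc.pause", "histogram", "ms", "GC pause time",
        "jvm.gc.pause", "histogram", "milliseconds", "GC pause time"));
  }
}
```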
D
Yes, and in fact, Jack, now that I think about it, I think it was complaining about the fact that there was a slight difference in the way the unit was defined. So, I don't know; I sort of feel like it's kind of cleaner to only declare the things we're actually going to use.
D
It seems like declaring a bunch of stuff which is never going to show up in the stream is just asking for trouble down the line. So I might actually persist with the bit of additional complexity of defining which events we're going to listen to based on which GC is in use. I think that's an acceptable complexity trade-off.
A
And so, Ben, that's because you're setting that up as async now, versus the event-based streaming?
D
No, this is still for JFR streaming. But basically, when you set up the registry of event handlers, as it stands in the current code base, you just set up a big list of stuff, and that includes things like G1 handlers whether or not you're actually running G1.
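The GC-aware registration D is describing can be sketched with plain JDK management APIs. The handler-group names ("g1", "parallel", and so on) are illustrative stand-ins, not the real event-handler classes; the real bean-name matching would also need to cover more collectors:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcHandlerSelection {
  // Map a collector MXBean name to the handler group worth registering.
  static String handlerGroupFor(String gcBeanName) {
    if (gcBeanName.startsWith("G1")) return "g1";
    if (gcBeanName.startsWith("ZGC")) return "z";
    if (gcBeanName.contains("MarkSweep") || gcBeanName.contains("Scavenge")) return "parallel";
    if (gcBeanName.contains("Shenandoah")) return "shenandoah";
    return "unknown";
  }

  public static void main(String[] args) {
    // Inspect the collectors actually running in this JVM up front,
    // instead of registering every handler unconditionally.
    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
      System.out.println(gc.getName() + " -> " + handlerGroupFor(gc.getName()));
    }
  }
}
```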
A
Yeah, one thing I could share about the metrics meeting from Monday that I thought was kind of interesting: we went through the existing implementations and realized that there are really just two categories. There's the traditional style of reading the JMX MXBeans.
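That "traditional" polling style is straightforward with the platform MXBeans; a minimal sketch, reading heap usage on demand the way an exporter would once per collection cycle:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MxBeanPoll {
  // Read current heap usage on demand, e.g. once per metric export.
  static long heapUsedBytes() {
    MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
    MemoryUsage heap = memory.getHeapMemoryUsage();
    return heap.getUsed();
  }

  public static void main(String[] args) {
    System.out.println("heap used (bytes): " + heapUsedBytes());
  }
}
```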
A
All of these do that, essentially, and then there's the JFR streaming metrics, which is significantly different because it's streaming-based. And so, to align those in the spec, we have to pick a single instrument. The JFR streaming can use a histogram, since it's event-based and pushing the data in, whereas the MXBean ones are just traditional gauges, or, in OpenTelemetry, an up-down counter, I believe.
A
Thank you. And so the JFR... so I think, to align those two, to have a single spec that both types can follow, the JFR streaming will also be changed to an async up-down counter. And I guess then, were you thinking just to keep the last value from the stream?
D
I mean, for memory utilization, I think last value is probably all we can do.
A
Like average?
B
Yeah, and when we have the API in hand and custom aggregators, we could also think about bringing in a summary aggregator again for that.
A
Conceptually, my understanding is that an async up-down counter just gets scraped on each export, right? So it would just be a single value; I'm not sure what it could aggregate.
E
I think... I mean, correct me if I'm wrong, but I don't think there's anything stopping you from taking an async up-down counter and saying you want a different aggregation than last value; it just won't make a lot of sense. And I think that's what you're saying as well.
A
Well, Ben, do you think the last value would be... I'm thinking that would be the most consistent, at least with the MXBeans. Not that consistency is required, or a good thing, but just...
D
If my intuition's correct, then what you're saying is right, and something which used last value will be closer to JMX than an average would be. So let's try it out. You know, we'll just stick an extra tag on it, plot some time series, and see it in action on some real processes to get a sense for how this thing really behaves.
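The aggregation question above is easy to picture with a toy example (plain Java, not the SDK's aggregator classes; the sample values are made up): given samples pushed from an event stream between two exports, "last value" and "average" can disagree noticeably, which is why plotting both against the JMX numbers is worth doing.

```java
public class StreamAggregation {
  // "Last value": what an async instrument would report at scrape time.
  static double lastValue(double[] samples) {
    return samples[samples.length - 1];
  }

  // "Average": an alternative aggregation over the same pushed samples.
  static double average(double[] samples) {
    double total = 0;
    for (double v : samples) total += v;
    return total / samples.length;
  }

  public static void main(String[] args) {
    // e.g. heap-used samples (MB) arriving between two metric exports
    double[] samples = {512.0, 800.0, 300.0, 420.0};
    System.out.println("last = " + lastValue(samples));
    System.out.println("avg  = " + average(samples));
  }
}
```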
A
John, is your service... I forget if your service is still on Java 11. We need a real-world service guinea pig.
G
Our problem was more that, basically, because we didn't get the JFR data from the stream but from the file, it was misaligned with the data from JMX.
D
Well, I mean, this is one of those things where, in general, yes. But don't forget that we are talking to a broad audience here. For day-to-day SRE work, sure, but we also need to think about how far we're going to push into the use case where you actually care about the detail of the data. Totally.
D
We are focused on the mass market here, and we need to remember that, but it would be nice to see just how big the differences really are.
A
One of the action items that came out of the metrics meeting was trying out speccing one of them; I think that was Jack?
A
Anything else? Any other topics anybody wanted to bring up?
A
A couple, yeah. But yeah, great to have people here; nice to meet you. Cool, well, I'll just briefly go through things I'd like to call out as highlights.
From the last week: the Micrometer instrumentation was completed, covering all instruments. That's a huge deal, and I'm really looking forward to getting some users using it.
A
That's moving along nicely. Of course, updating to the SDK 1.10 release; our instrumentation release will be today or tomorrow. Logs: the OTLP logs exporter, that's another big part of this release, enabled by all the new logs work. Thank you, Jack.
A
On the SDK side, the autoconfigure support was key for the agent being able to use that, and of course I thought that was the end of that story, but we will get to the key discovery after that. Micrometer library instrumentation: Mathias also extracted the Java agent instrumentation to be usable as library instrumentation. So it's just a Micrometer meter registry that you can use, which is pretty awesome, and a pretty great way to get users onto the OpenTelemetry metrics SDK and get a lot more...
A
I believe Micrometer will continue having their existing API, and what we've done here, I think, is a nice interop story now: you can use the OpenTelemetry Micrometer meter registry, plug that into your app, and all your Micrometer API usage metrics will flow through there, go to the OpenTelemetry metrics SDK, and out through the OpenTelemetry metrics exporter.
A
Yeah, so you would use either the Micrometer API or the OpenTelemetry metrics API, but through this bridge both of those can point to the OpenTelemetry metrics SDK and the OpenTelemetry exporter pipelines.
H
Okay, thank you. Oh, by the way, I'm Emily from IBM, so yeah, interested in this. I think Roberto might have mentioned before that in MicroProfile we are trying to adopt OpenTelemetry metrics at the moment; we were also interested in mapping to OpenTelemetry metrics previously. Currently we are working on adopting OpenTelemetry tracing. So the other thing, I mean, the other question I have: if a customer uses both the Micrometer API plus the OpenTelemetry API, can they... I mean?
F
So if you, for example, have an already existing application that uses Micrometer, you can just plug in our new OpenTelemetry meter registry, and all the Micrometer calls that you previously had will now basically forward those metrics to the OpenTelemetry SDK. So in effect, you can use both the OpenTelemetry metrics API and Micrometer interchangeably, and they will all send data to the same place. Basically the same.
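The interop F describes can be pictured with a toy model. This is not the real Micrometer or OpenTelemetry API, just the shape of the bridge: two front-end recording paths writing into one shared pipeline, the way the bridge registry forwards Micrometer calls into the OpenTelemetry metrics SDK.

```java
import java.util.ArrayList;
import java.util.List;

public class MetricsBridge {
  // Stand-in for the OpenTelemetry metrics SDK pipeline that both
  // front ends ultimately write into.
  static final List<String> PIPELINE = new ArrayList<>();

  // Stand-in for a Micrometer counter call going through the bridge registry.
  static void recordViaMicrometer(String name, double amount) {
    PIPELINE.add(name + "=" + amount);
  }

  // Stand-in for a native OpenTelemetry API counter call.
  static void recordViaOtelApi(String name, double amount) {
    PIPELINE.add(name + "=" + amount);
  }

  static boolean pipelineContains(String entry) {
    return PIPELINE.contains(entry);
  }

  public static void main(String[] args) {
    recordViaMicrometer("requests", 1.0);
    recordViaOtelApi("requests", 2.0);
    System.out.println(PIPELINE); // both calls land in the same pipeline
  }
}
```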
H
So that's using CDI, and also an integration with JAX-RS, so that all the JAX-RS invocations, the methods and so on, are automatically traced. So it's pretty much built in; looking at it from the application runtime story, the application developer indirectly uses OpenTelemetry behind the scenes: they don't even need to add anything, and everything is traced if they are using JAX-RS.
A
Oh, I see. So if they're using JAX-RS, then you will automatically capture traces, and at some point metrics, for them; and the CDI beans are for if they want to inject and capture custom things on top of that.
H
Yeah, okay; okay. Like, all the spans and the tracers and so on can be injected by CDI. Yeah.
A
Cool. If you have a couple of links to that, if you could drop them either in chat or in this doc, that would be awesome.
H
Yeah, okay, I'll add a link there. We have a sandbox, yeah, and a channel, with our proposal and so on.
A
Moving on. This is only interesting because Anurag has been using IntelliJ with a remote... I forget what it's called, if anybody remembers: you run it just as a shell, a UI, and all the real work happens on a remote machine over SSH. He's got a 16-core box that runs all that, so he's running into new Gradle concurrency issues.
A
There really needs to be a better way to say this: since we don't have any stable artifacts in the instrumentation repo yet, we just removed it, or rather we're not running it, and we'll bring it back once we do have stable artifacts that we need to track API diffs on. This was a good issue that was raised by a user.
A
This also came out of last week's discussion on the appender API: hiding it so that we don't expose anything that looks like a logging API. Users who want to use the logging SDK should use log4j or logback only; there's no direct API for the logging SDK.
A
This was to deal with metric cardinality explosion until we have, ideally, http.route; we'll be capturing that in the next month or so, we have some ongoing work there. And this was the little wrinkle. Thank you, Jason, for testing and finding this; otherwise the log exporter wasn't actually hooked up and nothing would have worked.
A
Oh yeah, there was also one other thing this found, where there was some serious log spam: I guess Payara was using a logger that has no name, and so we were passing nothing as the instrumentation name to the log emitter.
A
And the OpenTelemetry SDK was not very happy about that, so it was log spamming, and that actually caused our Payara smoke test to fail, which was good. So, a couple of good issues there. Yeah, thanks for jumping on that, Trask. No, I really want to get some user feedback on the logs after 1.10.
A
All right, last chance for topics.