From YouTube: 2022-01-20 meeting
I
I did, I did, two years or earlier, yeah.
B
I did, I did wake up and start listening to the talk at 6am this morning, but I'm not sure I was really all that focused.
A
Cool, we don't have much of anything on the agenda, so we'll just start with: we did make releases in the last week in both the instrumentation and contrib repos.
A
The instrumentation repo release did not go smoothly. It started off with an innocuous YAML parsing problem, but went kind of downhill from there. We did eventually salvage it, though that took like five tries, so there are a lot of release workflow improvements we're trying to make. One of them is that we finally figured out, or got a solution from OpenTelemetry on, how to automate commits given the EasyCLA requirements.
A
The allow list is kind of too broad, it would cover way too much stuff, but they are okay with us creating our own, more targeted bot accounts.
D
Yeah, so thank you, thank you very much for your interest in knowing a little bit about the work we're doing here in the MicroProfile world. That was one of the reasons why I joined this call a few months ago, I guess: because I was doing work over there, and I was also doing the integration work with OpenTelemetry in Quarkus.
D
So that's one of the reasons why I'm here: I have a lot of interest in the OpenTelemetry side of things. So, for MicroProfile, I'm probably going to share a couple of things on my desktop so you can see as well, but first, to intro the topic: I'm not too sure if everyone is aware of what MicroProfile is.
D
But MicroProfile is an open source initiative as well, held at the Eclipse Foundation, and it was somehow an effort that came about when Jakarta, or at the time Java EE, was a little bit stale and we wanted to do some innovation around the enterprise Java world and microservices. So we came up with a lot of specifications in that area: things like configuration, health checking, fault tolerance and, at the time, OpenTracing.
D
So of course there is an interest, in this case in OpenTelemetry, because of the way things evolved. We kept watching what was happening on the OpenTelemetry side, and once we thought that things were somewhat stable, we wanted to basically replace the way that we say how to do tracing in MicroProfile applications.
D
So in this particular case, instead of using OpenTracing, we just use OpenTelemetry. Now, to give a little more detail on the MicroProfile side: the idea is that when you're building a traditional Java EE application, you can also use MicroProfile for that, and both of them work together; they're supposed to be complementary.
D
There has been some conversation on how MicroProfile and Jakarta are going to work in the future, but when MicroProfile started, Java EE was still at Oracle, and then it moved to Eclipse. So there are more opportunities for synergy from now on, but MicroProfile still has a lot of small specifications that mostly target the microservice world and architecture. So, for instance, in my case and in Emily's case:
D
I mostly work on Quarkus and Emily mostly works on Open Liberty, so we do have implementations of MicroProfile in both of our containers, frameworks, whatever you want to call them.
D
And well, yes, users come to us, they develop their Java applications on Quarkus or Liberty, and yes, they want the tracing implementation. So until now we were stuck with OpenTracing; that's what we picked when we started MicroProfile a few years ago, and now we are in the process of changing that to OpenTelemetry. Now, the interesting thing, and I think this is one of the areas where you did a really great job,
D
is that you expose a lot of APIs that allow us to integrate quite seamlessly with it, so we are using a different approach from the one we used for OpenTracing. When we adopted OpenTracing in MicroProfile, we ended up having to create a couple of APIs of our own and pretty much shield, or wrap, some of the OpenTracing APIs.
D
We don't see a need to do that with OpenTelemetry. So basically, what we did is create an adoption specification, in the sense that we pretty much delegate a lot of stuff to the OpenTelemetry API and specification, and we just fill in the blanks to define what the behavior is going to be in a MicroProfile/Jakarta environment. I'm not sure if you guys are familiar with specifications like CDI (probably yes, right?), CDI, JAX-RS and things like that.
D
So let me share something really quick here, and I can probably show you some prototypes as well that we have. Okay, so, share... there you go.
D
Yes, so this is right now the current proposal that we wrote. I guess it is just... oh sorry, that's not the right document, that's the one, okay. So this is just the introduction, and here basically we just mention that all the implementations must comply with these specific links that go to the OpenTelemetry specification. So one of the things we're doing right now is fixing the version.
D
This is still written against 1.8. We know there's already a 1.9; I'm not sure if there's a 1.10, I think there's one already as well, so we need to update this.
D
But anyway, any implementation on the MicroProfile side will have to comply with everything that's written on the specification side for this version: the Tracing API, the Baggage API, the Context API and the Resource SDK. And then we actually reuse some of the APIs that you have. So, for instance, there is the @WithSpan annotation, and our recommendation, at least the way that we see it, also builds on the @WithSpan annotation.
D
So if you're using, for instance, an agent, the way it works is that the agent will just pick up the annotation, right? It's just going to instrument the method where that annotation is placed, and it's going to generate a span for that particular method.
D
So one of the things that we did was extend the behavior of that @WithSpan for CDI-based beans. You can just place the @WithSpan annotation on whatever CDI bean you have, and then the CDI container will just create a span for you.
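A minimal sketch of what that looks like from the application side, assuming a CDI runtime that registers such an interceptor; the bean and method names are illustrative, and the annotation's Java package has moved between OpenTelemetry releases:

```java
import javax.enterprise.context.ApplicationScoped;

import io.opentelemetry.extension.annotations.WithSpan;

// A plain CDI bean: the MicroProfile Telemetry interceptor (or the Java
// agent, when one is attached) creates a span around the annotated method.
@ApplicationScoped
public class OrderService {

    @WithSpan("process-order") // explicit span name; defaults to the method name
    public void processOrder(String orderId) {
        // business logic runs inside the span created by the container
    }
}
```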
D
Also, in a lot of MicroProfile and Jakarta environments, every time you want to retrieve something you usually go through CDI, so we also extended that to allow the developer to retrieve a tracer or the current span using @Inject. Usually most developers in our world are used to just injecting stuff, and basically it's just a really simple implementation on the producer side that just calls the programmatic API, like the tracer or Span.current().
D
So you can use one or the other; it's just a more natural user experience for this kind of environment.
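A minimal sketch of the @Inject experience described above, assuming the runtime supplies CDI producers for the Tracer and the current Span; class and attribute names are illustrative:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;

@ApplicationScoped
public class CheckoutService {

    @Inject Tracer tracer; // supplied by a CDI producer in the runtime
    @Inject Span current;  // resolves to the current span, like Span.current()

    public void checkout() {
        current.setAttribute("cart.items", 3);

        // The programmatic API is still available for child spans.
        Span child = tracer.spanBuilder("charge-card").startSpan();
        try {
            // ...call the payment service...
        } finally {
            child.end();
        }
    }
}
```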
D
If you use @Inject in those cases, it works pretty much the exact same way: just get the tracer, get the builder, or just get the current span, add attributes to it, and you're done. So here is a programmatic example; it works pretty much the same way, it's just CDI.
D
You just use @Inject and you put whatever attributes you want on there. So one important thing, and that's something I've been working on as well: we want to make sure that we are as compatible as possible with OpenTelemetry, so we want the agent that's provided with OpenTelemetry to integrate seamlessly with a MicroProfile environment.
D
So the idea is that, if you want, you can just plug in the agent and it integrates directly with whatever we set on the MicroProfile side. I made some prototypes, and that actually seems to work very well with the Instrumenter API that is available on the OpenTelemetry side.
D
Another thing that we have, and maybe you've seen some of my issues and pull requests around this, is configuration. We do have a lot of configuration which is very similar to the one used on the OpenTelemetry side. There are a couple of differences, but we wanted to have that integrated seamlessly as well, in a way that users could configure OpenTelemetry using the configuration sources that we make available in the MicroProfile world, or even their own custom sources.
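For illustration, that means the standard OpenTelemetry autoconfiguration property names can be supplied through any MicroProfile Config source, such as a microprofile-config.properties file; the values here are made up:

```properties
# microprofile-config.properties
otel.service.name=checkout-service
otel.traces.exporter=otlp
otel.exporter.otlp.endpoint=http://localhost:4317
```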
D
So yes, here we were able to define an integration point for you to use the SDK autoconfiguration out of the box, and then you're able to set up your own exporters, set up your own processors, configure what can be traced or not, just by using configuration: all of those configuration properties that exist on the OpenTelemetry side for configuring it.
D
There is one thing that I have to confess I'm not completely happy about: in one of the previous versions there was a way we could implement the ConfigProperties interface, and that allowed us to have a little more control over how configuration was being laid out. I was trying to keep that behavior, but that was removed, or at least it's not public anymore. Well, we can work around that, and we'll try to make it a little bit better.
D
But yeah, let's carry on. We also recommend everyone follow the semantic conventions that are specified on the OpenTelemetry side for traces, so that will cover things like HTTP, REST clients, databases and things like that. One of the things that I have here, which I think I forgot to mention before, is that, for instance, in a MicroProfile environment:
D
everything that is JAX-RS or REST client is already automatically traced out of the box. So you don't have to do anything; that's something that we just define here in the specification, or the adoption of the specification on our side. But of course that can be disabled using the regular environment settings of OpenTelemetry.
D
Usually we feel that people will want to trace everything that's REST. So basically, everything that you have as a REST endpoint exposed with JAX-RS, or called by the REST client, we just automatically trace out of the box. And then we have a small section here about how people can move from OpenTracing on the MicroProfile side to OpenTelemetry, if you're a little bit curious about how this works.
D
So I have this prototype project that you guys can have a look at if you prefer; it has the CDI implementation.
D
I did notice that the agent instrumentation repository on the OpenTelemetry side has a CDI folder over there which is empty. Maybe this is something that I can push there, but at the time I just thought it was a little bit easier to start my own thing on the side, because I had no idea how this was going to evolve. Now I think it's a little more stable. So I think this is still on the old version of OpenTelemetry.
D
There were a couple of issues over here, if you remember, I even brought that up and it was fixed, about the shutdown hook. In a CDI environment, while CDI does manage the beans' creation and destruction, we prefer to be able to register our own shutdown hook. So basically, the idea here is that we'll just use autoconfiguration and just pass in whatever configuration we found to configure OpenTelemetry.
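A sketch of that wiring, assuming the opentelemetry-sdk-extension-autoconfigure module; the producer class and the property values are illustrative, and the exact builder methods have shifted between SDK releases:

```java
import java.util.Map;

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.BeforeDestroyed;
import javax.enterprise.event.Observes;

import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;

@ApplicationScoped
public class OpenTelemetryProducer {

    private OpenTelemetrySdk sdk;

    @PostConstruct
    void init() {
        // Feed whatever MicroProfile Config found into SDK autoconfiguration,
        // instead of relying on system properties or environment variables.
        sdk = AutoConfiguredOpenTelemetrySdk.builder()
                .addPropertiesSupplier(() -> Map.of(
                        "otel.service.name", "checkout-service",
                        "otel.traces.exporter", "otlp"))
                .build()
                .getOpenTelemetrySdk();
    }

    // Tie shutdown to the CDI container lifecycle rather than the SDK's
    // own JVM shutdown hook.
    void shutdown(@Observes @BeforeDestroyed(ApplicationScoped.class) Object event) {
        sdk.close();
    }
}
```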
D
I believe I have here a shelf with, yeah, that was the one with the new implementation, with the new version, that I haven't really committed yet, because when I was trying this out the API was released but the Instrumenter was not, so I didn't have a released version of the Instrumenter to play with; I was just playing with the snapshot version. Well.
D
This is just the interceptor for CDI, nothing out of the ordinary, but basically it allows you to have things like this annotated with @WithSpan and then automatically create the span, or enroll it into a span that's already out there. And these are the implementations for JAX-RS, so there is a filter for the client side.
D
This is also using the Instrumenter API, and there is also a filter for the server. So that's the client side, and that's the server one. And one of the things that I really enjoyed over here is that I was pretty much able to reuse a lot of stuff that was already available in the OpenTelemetry API.
D
So I think that's a very interesting use case, even for all of you, to get a better understanding of how the APIs can be used outside the OpenTelemetry side of things. I found a couple of things here and there that I've been reporting, but nothing really out of the ordinary. Even things like this: it's just using the HttpServerAttributesExtractor that's available in the API, and there is an HTTP client attributes extractor as well.
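A simplified sketch of a JAX-RS server-side filter like the one shown; for brevity it uses the stable Tracer API instead of the Instrumenter and attributes-extractor machinery the prototype uses, and it skips extracting the incoming context from request headers:

```java
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

@Provider
public class ServerTracingFilter implements ContainerRequestFilter, ContainerResponseFilter {

    private final Tracer tracer;

    public ServerTracingFilter(Tracer tracer) {
        this.tracer = tracer;
    }

    @Override
    public void filter(ContainerRequestContext request) {
        // A full implementation would first extract the parent context
        // from the incoming headers via the configured propagators.
        Span span = tracer.spanBuilder(request.getMethod())
                .setSpanKind(SpanKind.SERVER)
                .startSpan();
        Scope scope = span.makeCurrent();
        request.setProperty("otel.span", span);
        request.setProperty("otel.scope", scope);
    }

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response) {
        ((Scope) request.getProperty("otel.scope")).close();
        Span span = (Span) request.getProperty("otel.span");
        span.setAttribute("http.status_code", response.getStatus());
        span.end();
    }
}
```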
D
Now, one interesting thing that we might want to discuss here as a group as well: usually, and this is not only for MicroProfile but also for Jakarta,
D
we create something that we call the TCK, the Test Compatibility Kit, which is something you can run on the side that basically tells you if the implementation that we wrote complies with whatever rules are specified in the specification document. So basically, it's just a way to say that an implementation is portable across multiple servers. For instance, Open Liberty might have a different OpenTelemetry implementation and Quarkus might have its own implementation, but both pass the TCK, the MicroProfile TCK.
D
So that's something that I miss a little bit, because as far as I can tell there are tests on the OpenTelemetry side, but there are no tests in a form that lets me make sure that my implementation behaves as it should.
D
So in this particular case I just made a few of them, basically some of the most basic stuff, right: whether the spans are created, whether parent-child relationships are being set up correctly. But I mean, there are a lot of use cases and a lot of really, really small details.
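One of those basic checks could look roughly like this, using the SDK's in-memory testing exporter; the test and instrumentation names are illustrative:

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

import org.junit.jupiter.api.Test;

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import io.opentelemetry.sdk.testing.exporter.InMemorySpanExporter;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.data.SpanData;
import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

class BasicSpanTest {

    @Test
    void parentChildRelationship() {
        InMemorySpanExporter exporter = InMemorySpanExporter.create();
        SdkTracerProvider provider = SdkTracerProvider.builder()
                .addSpanProcessor(SimpleSpanProcessor.create(exporter))
                .build();
        Tracer tracer = provider.get("tck");

        Span parent = tracer.spanBuilder("parent").startSpan();
        try (Scope scope = parent.makeCurrent()) {
            tracer.spanBuilder("child").startSpan().end();
        } finally {
            parent.end();
        }

        // Spans arrive in finish order: child first, then parent.
        List<SpanData> spans = exporter.getFinishedSpanItems();
        assertThat(spans).hasSize(2);
        assertThat(spans.get(0).getParentSpanId())
                .isEqualTo(spans.get(1).getSpanId());
    }
}
```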
D
I think that's pretty much it, so feel free to post any questions; I'd be happy to answer any doubts that you might have.
A
Yeah, that's really cool to see how OpenTelemetry is being used for the TCK. We've talked about something similar, thinking specifically of HTTP server spans, if we had a TCK for that. Well, the context we've thought about it in terms of is: we sort of have a bit of what we consider, or what we use as, a TCK in the instrumentation repo, where we have a common...
A
We have, you know, probably, well, maybe 10-plus server instrumentations and 30-plus HTTP client instrumentations, and we have a set of HTTP client tests that we run against all those HTTP clients and a set of HTTP server tests that we run against all those server instrumentations, to make sure that they all produce the same telemetry.
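The shape of that shared suite is roughly an abstract base test that each instrumentation extends, supplying only the library-specific pieces; this is a hypothetical condensed outline, not the repo's actual classes:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Each HTTP client instrumentation extends this and implements only the
// library-specific way of sending a request; the shared tests then verify
// that every client produces the same telemetry.
public abstract class AbstractHttpClientTest {

    /** Send a GET to the given URI with the client under test. */
    protected abstract int sendRequest(String uri);

    @Test
    void successfulRequestProducesOneClientSpan() {
        int status = sendRequest("http://localhost:8080/success");
        assertEquals(200, status);
        // ...then assert on the captured spans: exactly one CLIENT span
        // with the expected name and http.* attributes (capture elided).
    }
}
```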
D
The TCK wouldn't necessarily be tied to HTTP, right? HTTP is just one approach, or one option. So, for instance, if I'm using the @WithSpan annotation, I want to make sure that my application still behaves the exact same way whether I'm using an implementation like CDI or the agent instrumentation when starting up my JVM runtime.
C
So, for the tests you have there, Trask: can they be, I mean, portable, can they be runnable, basically?
C
What I have been thinking about this: for example, we have done MicroProfile Reactive Messaging, and we run some tests produced directly by the Reactive Streams folks. So I'm wondering whether we could do the same, so at least, like a proof, as Roberto said, that the spans we created, all the trace information, comply and conform with OpenTelemetry.
D
Yeah, the only way I see that this can work is to pretty much have some way that you run this arbitrary piece of code that will generate some telemetry data, and then you assert on it on the collector side, because that's the common point between different implementations. And that's pretty much the approach that I'm using right now; it's just that the application is on my side, and then I'll just generate whatever data and off I go.
D
Now, the interesting thing would be if we could think, if this is interesting from the OpenTelemetry side, about doing it in a way that any implementation can run this test compatibility kit, be it Quarkus, Spring, Liberty or whatever other server. Because one thing is having these tests on your repo and being able to run them, and I've done that a few times; the other thing is having something that is portable, that you can extract and that outside implementations can run on their own, which might be a little bit trickier.
A
Yeah, so right now the reusable HTTP client and HTTP server tests are, like you say, very integrated into this repo. But that topic has come up, and it is something we would like to be able to provide to anybody writing HTTP client or HTTP server instrumentation, so they can use the way this works. And maybe the way this works is right; for Apache, for example, you still have to...
A
There's still a lot of variability between different HTTP client instrumentations, based on their capabilities and what we can do, so there are kind of a lot of knobs. And then there's the basic part of sending a request: this does a send request using the Apache HTTP client, and then in this case we capture, in memory, the telemetry that it produced, and the test validates it. One thing that might work...
C
Before you move on, Trask: when you do it in memory like that and directly retrieve the response, is that quite valid? Does the client actually send a request to the server? Because in order to test a runtime such as Quarkus or Open Liberty, we would have an application deployed to the server. So instead of doing your in-memory check, basically you would go directly against the real server. Is that kind of pluggable?
A
Yeah, so the test spins up a real server that responds, so the HTTP clients can make calls to this real server; but the telemetry is then captured just locally, from the HTTP client instrumentation calling the OpenTelemetry API.
A
Well, I wanted to show you one other option here that might make more sense, may make more sense, I'm not sure, which is our smoke tests. For example, we have servlet smoke tests, and we run these against a lot of different underlying servers: you know, Liberty, Payara, Tomcat...
A
And these operate differently. We have an actual WAR file that we deploy, in this case, into those servers. So it's a single WAR file; we deploy it into each one of those application servers, we hook up, in our case, the Java agent, and send it out for real over OTLP. So the smoke tests have sort of three components running: there's the test running,
A
and then two containers, one container that's running the application server and one container that's running sort of a fake collector. So the application server exports data over OTLP to our fake collector, and then the test can pull that telemetry down and validate it.
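A condensed sketch of that three-piece layout using Testcontainers; the image names are hypothetical, and the real smoke tests also mount the agent jar and configure quite a few more options:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.Network;

class SmokeTestSetup {

    static void start() {
        Network network = Network.newNetwork();

        // Fake collector: receives OTLP from the app under test.
        GenericContainer<?> fakeCollector =
                new GenericContainer<>("example/fake-collector:latest")
                        .withNetwork(network)
                        .withNetworkAliases("collector")
                        .withExposedPorts(4317);

        // Application server with the WAR deployed and the agent attached.
        GenericContainer<?> appServer =
                new GenericContainer<>("example/appserver-with-war:latest")
                        .withNetwork(network)
                        .withEnv("JAVA_TOOL_OPTIONS",
                                "-javaagent:/opentelemetry-javaagent.jar")
                        .withEnv("OTEL_EXPORTER_OTLP_ENDPOINT",
                                "http://collector:4317");

        fakeCollector.start();
        appServer.start();
        // The test then drives the app over HTTP and pulls the exported
        // telemetry back out of the fake collector to validate it.
    }
}
```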
D
That's something very similar to what we usually do for the TCK tests. Let me have a look into that and see if I can reuse some of it in some way. Yeah, let me change the conversation a little bit, maybe to some other detail. So on the Quarkus side, and now I'm talking specifically about the Quarkus side of things:
D
I've used the agent, and yeah, it works, even if of course it doesn't support all the libraries that we have. But one of the main reasons why we ended up doing, or having to do, the integration with OpenTelemetry on our side is because of native image.
D
So we do have to build the integration directly into Quarkus for users to be able to use that in a native image, because they're not going to be able to have the agent available.
A
What do you think? Yeah, so it's definitely something that's intriguing to folks. We are hoping to have, at some point, some solution for auto-instrumentation of native images.
A
This works... there's a couple... I'd say the only limitation here that I know of, and anybody can speak up, is that you can't instrument core Java classes, so some core executors, for example. So that's a little tricky, potentially, yeah.
D
Yeah, I mean, on our side, as I mentioned before, and Ben knows this, right: on the Quarkus side of things, every time we want to offer something, we always have to make sure that it works on native image. So some of those situations require us to do some custom integration and write our own code to make sure that it's going to behave the exact same way.
D
So in the case of OpenTelemetry, yeah, we have to do some of the integration ourselves to make sure that it works on the native image side, and I'm still working on that. The most basic stuff that I did is working just fine, and I'm going to start adding even more stuff to it. One of the worries that I have is the DataSource integration.
D
I believe it should be fine, but even then: we did have tracing working with a DataSource for OpenTracing, even though, and I'm opening a parenthesis over here, the last version, or the last couple of versions, don't really work.
D
So we're stuck with an old version of the JDBC DataSource for OpenTracing, which actually makes me ask the question: is there someone over here that can help us on the OpenTracing side? I know OpenTracing is in maintenance mode, mostly abandoned, but we put up a couple of issues and even a few pull requests, and we got no answer on some of the repos.
E
Let me also just jump in and add one more thing to this. You know, it's important to remember that at the moment we still don't have a completely standard view of what statically compiled Java images are going to look like, right? We're all still basically just working with GraalVM, and what GraalVM does, because it's the only game in town.
E
If what I hear on the grapevine is correct, then there may be some moves soon to try to actually get a proper project going and get some momentum into standardizing what static Java actually means. So this may become something which heats up again in a month or two's time. So, just because we haven't managed to make a lot of progress on it: we are still very early, and we should bear that in mind.
E
It's also worth pointing out that, if you think about it in the right way, what we're actually doing is thinking about a change in the phase structure of how Java applications work. Because the way things work today is: you turn on your Java application, you have class loading time, you bootstrap up your application.
E
You know, you go through a phase where you're compiling things and JIT-ing and warming up, and then you go into steady state, right? So you have class loading and initialization, a period of heavy class loading, followed by a period, in the dynamic VM, of heavy JIT compilation, and then steady state.
E
So that's like a three-phase structure. And if you're re-imagining this, it's not only for static images; there's also the opportunity to think about things like checkpoint and restore, which is also relevant to us. Because if you have an application which comes up with telemetry in it and then it needs to checkpoint, well, when it's restored we'll probably have to do things like re-establish our gRPC connections.
E
So the idea of all the immutable stuff, that we set the connections up once and they just stay there, has something to do with the phase structure. So, you know, having talked to a couple of people who are working on this, I think it is probably a good idea just to wait until these ideas of what the phase structure of future applications is going to look like are resolved a bit, because I think both pieces of that are relevant to us.
D
Yeah, I agree. Let me actually add an extra topic, so sorry for spreading things a little bit, but that's an inspiration. Another thing that we probably need to figure out is the reactive world, right?
D
So one of the other issues that we had with the agent in Quarkus is being able to propagate the context when we're dealing with reactive stuff, especially because the threads where things are being executed can be completely different, depending on whether it's the I/O thread or the worker thread. And of course, the agent doesn't really know the internals of some of those containers and doesn't know how to propagate the context properly.
D
So that's why we have to come in and do some sort of integration, to be able to say: just do the propagation of the context in this way. Because the interesting thing, especially since it's a very, very big trend these days,
D
is if you build all of these reactive applications and you're able to have all the tracing happening from whenever you call your REST endpoint, or whatever other entry point you have into your application, and you can track it down to the message that has been sent into Kafka and things like that. So that's another piece of work that we're also exploring in Quarkus: to be able to propagate the context correctly when you are in the reactive world.
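The core of the problem is that the active context does not follow work across threads by itself; a minimal sketch of carrying it over explicitly with the context API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.opentelemetry.context.Context;

class ThreadHopExample {

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        Runnable task = () -> {
            // Span.current() here sees whatever span was active when the
            // task was wrapped below, not the worker thread's default.
        };

        // Capture the caller's context and re-activate it on the worker.
        workers.submit(Context.current().wrap(task));

        // Or wrap the executor once so every submitted task is covered.
        ExecutorService traced = Context.taskWrapping(workers);
        traced.submit(task);

        workers.shutdown();
    }
}
```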
A
Yeah, if you run into cases where it's not propagating with the Java agent: the Java agent does do a bunch of work around Project Reactor in order to attempt to trace that. There are
A
a couple of things to watch out for. One is, like, Quarkus: I had tested Quarkus recently, and correlation was not working at all with the Java agent, and it turned out,
A
well, I added a smoke test for it, but the real fix was one line, which was that we needed to add the JBoss executor
A
to our allow list of executors that we instrument. And then, on the Project Reactor side, Ludmila has done some great work recently
A
to add some additional support, because the Azure SDKs use Project Reactor really heavily, and she leads the tracing effort in the Azure SDKs, and so they've run into some issues.
A
So it's worth checking that out, or posting if you are having issues with Project Reactor and the Java agent; there may be some...
A
I know we just have a couple of minutes left, but I was curious, Roberto and Emily: in MicroProfile, have you started to think about metrics and logs at all?
D
Oh yeah, sorry, yeah, that's definitely, for sure, something that I wanted to mention as well, but I forgot. That's something that we discussed briefly.
D
The reason why we haven't done too much work on that is because, well, when I looked into metrics and logging, they were still a little behind tracing. So we thought that we should go with tracing first and leave some room for metrics and logging to catch up, and then we'll look into metrics and logging to see what's going to happen in that space. Then there is another thing we still want to figure out:
D
I know there have been some discussions and conversations about how OpenTelemetry can consume Micrometer, or the other way around, but that doesn't seem to be clear, at least at this point. So that's something where we want to leave a little more time to figure out what's going to happen in that space.
G
That is what we are planning to do by May, so our next minor release should have, like, OTLP support.
G
We were also talking about having some kind of bridge, like bridging the OpenTelemetry API to the Micrometer API, or vice versa, but we haven't had any concrete plan or, like, deeper discussion about that.
C
Right. So at the moment, in MicroProfile Metrics, the group is trying to align with Micrometer. So basically, in the MicroProfile Metrics land we still have our own API; however, we now allow implementers to plug in Micrometer. Also, as Roberto said, we don't really want to have a hard dependency on Micrometer if, in the future, we may want to switch to OpenTelemetry, in case Micrometer ends up completely different from OpenTelemetry metrics.
D
We're not completely sure about that, right? Well, the point is, we are not completely sure, and up until this point, well, we're getting some new information right now, but at least we were not completely aware of how the metrics on the OpenTelemetry side and Micrometer were going to work, or whether there would be some kind of integration between them, or whether they're going to be competing alternatives.
D
So that was still very, very cloudy for us, and we were not completely sure how to move into that space yet.
C
So I haven't quite finished. Basically, the immediate-term action for MicroProfile Metrics is to adopt Micrometer. However, long term, if one day Micrometer is not the strategic choice anymore, and hopefully that won't happen, we may want to move over to something else.
C
So basically, all I want to say is that MicroProfile Metrics currently is using kind of a shim layer, so it does not expose Micrometer, whatever the data, et cetera, maybe the Micrometer data. If we use it, Jonatan, I think Micrometer is aligned with OpenTelemetry metrics, so in this case you will produce...
G
So the format, definitely yes. The content, it depends: OpenTelemetry has its own instrumentation modules and Micrometer has that too, so there can be differences in that, based on, like, what you call things, what tags you are attaching and so on. But the wire format, OTLP, that is the same.
C
Right, okay, so at least the format is the same, which is better. However, if the contents are different, that's still a challenge if you want to do, like, a seamless switch.
A
If I can ask Mateusz to just give a two-minute overview, or a one-to-two-minute overview, of the bridge that he just added in the OpenTelemetry instrumentation repo.
H
...whether we produce the same data, because there is nothing to compare it to, really. I mean, if you're using a Micrometer counter and an OpenTelemetry counter, then yes, they will produce exactly the same data, I mean, if you use the same tags and the same attributes in both. If, however, you use a Micrometer timer and an OpenTelemetry histogram, which are sort of equivalent, they might have slight differences, especially if you enable Micrometer histogram support.
C
Right, so, I mean, how much of a percentage difference do you think, I mean, for this kind of histogram? From a counter point of view it's probably the same, and, like, from the API point of view, the data gathering, I mean, how different is the data?
A
There are sort of three components: there's the API, there's the SDK, and there's the export. The SDK is where the aggregation actually happens, and the histogram support, and so the API and the export will be the same, but yeah, that aggregation can be slightly different. And I think the main difference currently, Jonatan, is that in OpenTelemetry
A
there's, like, histogram support; in Micrometer, I know that we have to define the... is there not, like, a full histogram output, or maybe there is?
G
There is, there is, since the beginning; it is called a distribution summary. So I would say the biggest difference around that, as Mateusz pointed out, is that OpenTelemetry doesn't have a timer.
G
So that was a huge pain point; I brought it up a lot of times in the specification meetings. Right now users need to do that manually: you can basically, after you measure the time, attach it to the histogram. With Micrometer you can do it in one step, like: measure the time for me and create a histogram as well. So basically the timer is that, but Micrometer has its own histogram support.
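Side by side, the two styles look roughly like this; the metric names are illustrative, and the OpenTelemetry metrics API was still stabilizing at the time of this meeting:

```java
import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.metrics.DoubleHistogram;

class TimerVersusHistogram {

    // Micrometer: one step, the timer measures and records.
    static void micrometerWay(Runnable work) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Timer timer = Timer.builder("checkout.duration").register(registry);
        timer.record(work);
    }

    // OpenTelemetry: no timer instrument, so measure elapsed time
    // manually and record it into a histogram.
    static void openTelemetryWay(OpenTelemetry otel, Runnable work) {
        DoubleHistogram histogram = otel.getMeter("example")
                .histogramBuilder("checkout.duration")
                .setUnit("ms")
                .build();
        long start = System.nanoTime();
        try {
            work.run();
        } finally {
            histogram.record(
                    TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
        }
    }
}
```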
I
Yes, that's been discussed, to add that in a future version; it just didn't make the 1.0 version of the Metrics API specification.
G
Basically, you can, like, simulate this behavior. The point behind the timer was mostly that measuring elapsed time is hard and most users get it wrong. I tried to demo this by asking people who are working on OpenTelemetry to measure it, and they got it wrong too, and it did not reach into the specification. That was a huge pain point, at least to me.
A
So, as far as the data that gets produced, the aggregation, because I think that was one of Emily's questions: can we summarize sort of the difference in the aggregation? Is it only if you're using histograms that the SDK would produce slightly different output data?
G
It depends. Because we don't have the OTLP support right now, it's hard to tell what will happen, or how that will look, or what the differences will be.
I
So I can add a bit of color, but I don't know Micrometer well enough to, you know, complete the whole picture. So Micrometer instruments are bridged to OpenTelemetry instruments, and when you configure the OpenTelemetry SDK to, you know, export metrics to some external source, for example over OTLP or by exposing a Prometheus endpoint,
I
each instrument type has a default aggregation associated with it. And so I suppose one of the questions in my head is how those default aggregations compare to the default aggregations in Micrometer. So, you know, a histogram is an instrument type and it also exports as a histogram.
I
A counter is an instrument and exports as a sum; that's, like, the type of aggregation that it exports as. And each instrument follows suit and has its own aggregation type. So I think if there was a difference, it might be there, in how each instrument is aggregated and exported. I can link to the documentation of the specification that describes each instrument and its default aggregation, if folks think that would be useful.
A
And sorry, folks, I didn't moderate our time, and so we're over. Roberto and Emily, thank you so much; that was really super cool to see the MicroProfile stuff, and we look forward to...
D
I'm working with Erin on the Quarkus side; I'm doing the tracing portion and she's doing the metrics portion. Actually, on Quarkus, what we're doing for metrics is adopting Micrometer, and she's doing the work there in that space, so sometimes we actually step on each other's toes because of all the synergy between both.