From YouTube: 2021-07-29 meeting
C
Yeah, so I am preparing to publish our muzzle plugins to the Gradle Plugin Portal because, like, that's the official place for them. That requires some credentials, and they are personal credentials: I have to register with my name and my email to get the API key required to upload.
C
Reading the documentation for the first time, I understood that you just upload plugins by IDs, and that ID should be unique, preferably a reverse domain name.
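The publishing flow C describes can be sketched in a Gradle build script. This is a hedged illustration, not the project's actual build config: the plugin id and class below are made up for the example, and the personal credentials mentioned above are the `gradle.publish.key` / `gradle.publish.secret` properties issued when you register on the Plugin Portal.

```groovy
// Hypothetical sketch of publishing a plugin to the Gradle Plugin Portal.
// The plugin id and implementation class are illustrative only.
plugins {
  id 'java-gradle-plugin'
  id 'com.gradle.plugin-publish' version '0.15.0'
}

gradlePlugin {
  plugins {
    muzzle {
      // ids must be unique, preferably reverse-domain
      id = 'io.opentelemetry.example.muzzle'               // illustrative id
      implementationClass = 'io.opentelemetry.example.MuzzlePlugin'
    }
  }
}

// Credentials are personal: register on plugins.gradle.org to obtain them,
// then pass them as -Pgradle.publish.key=... -Pgradle.publish.secret=...
```

Publishing is then a matter of running the `publishPlugins` task with those two properties set.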
A
There were other SIGs mentioning on Slack that their build jobs were suffering also.
A
So I talked with Sarah Novotny, who's on the governance committee, and she is going to reach out to folks she knows at GitHub, who I assume are the same people we discussed with last year when we bumped from the free plan, from 20 concurrent jobs to 50 concurrent.
C
Yeah, so there is a pull request to introduce functionality, behind a feature toggle, to always create a span whenever Netty-based implementations try to establish a connection to a remote host, to track issues with connection pools, long connections, failed connections, et cetera.
C
Why I brought that specific pull request up here is to make sure that if anybody is opposed to it, they speak up. Otherwise I will merge it pretty quickly, like tomorrow.
A
Yeah, I think not propagating is fine. I mean, most of the database spans don't propagate, client spans don't propagate, and even the presence of the net attributes also seems to me to point to it being a client span.
G
I mean, I kind of consider this just a regular Netty client span, but with a connection failure, because I think that connect span only gets created if the connection failed, right? And that's the...
A
So, all right, we can continue that on the PR. I left a comment on the PR, yeah.
C
As we now suppress nested client spans by default, we cannot make that a client span: I might switch on that connection span but still not get it in many cases, because it would be suppressed anyway. So for now we have to make it internal.
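The suppression behavior C describes can be sketched as follows. This is a simplified illustration, not the agent's actual implementation: the real agent tracks span kinds through `Context` entries, while this sketch stands in with a `ThreadLocal` flag, and all names here are hypothetical.

```java
// Hypothetical sketch: why a nested CLIENT connection span would be dropped.
public class SuppressionSketch {
    // stand-in for a Context key; the real agent uses Context, not a ThreadLocal
    private static final ThreadLocal<Boolean> CLIENT_SPAN_IN_FLIGHT =
        ThreadLocal.withInitial(() -> false);

    /** True if a new CLIENT span would actually be recorded (not suppressed). */
    public static boolean shouldStartClientSpan() {
        return !CLIENT_SPAN_IN_FLIGHT.get();
    }

    /** Marks a CLIENT span as active; the returned Runnable ends it. */
    public static Runnable startClientSpan() {
        CLIENT_SPAN_IN_FLIGHT.set(true);
        return () -> CLIENT_SPAN_IN_FLIGHT.set(false);
    }

    public static void main(String[] args) {
        if (!shouldStartClientSpan()) throw new AssertionError();
        Runnable end = startClientSpan();
        // a nested connection span of kind CLIENT would be suppressed here,
        // which is why the PR makes the connection span INTERNAL instead
        if (shouldStartClientSpan()) throw new AssertionError();
        end.run();
        if (!shouldStartClientSpan()) throw new AssertionError();
    }
}
```

Making the connection span INTERNAL sidesteps this check entirely, which is the compromise described above.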
A
Yeah, this one. I wanted to get input, especially from Mateusz if you have a chance, just from an instrumentation API perspective, because it adds a new required argument to the Instrumenter builder.
B
I didn't really think that deeply about this, but I have one observation: it seems pretty similar to our client and server span constructs, so maybe it would be possible to replace those with this "instrumenter type", or "instrumentation kind", or "instrumentation category", something like that. Maybe it would be possible to completely replace that old stuff with that more generic concept.
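The shape of the proposal, a single builder taking a required kind argument instead of separate client/server entry points, can be sketched like this. All names here are hypothetical stand-ins for whatever the PR actually calls them:

```java
// Hypothetical sketch of an Instrumenter builder with a required "type" argument.
public class InstrumenterSketch {
    // illustrative enum; the real proposal's name and values may differ
    enum InstrumenterType { CLIENT, SERVER, PRODUCER, CONSUMER, INTERNAL }

    static final class Instrumenter {
        final String name;
        final InstrumenterType type;
        Instrumenter(String name, InstrumenterType type) {
            this.name = name;
            this.type = type;
        }
    }

    static final class Builder {
        private final String name;
        private final InstrumenterType type; // the new *required* argument under discussion

        Builder(String name, InstrumenterType type) {
            if (type == null) throw new IllegalArgumentException("type is required");
            this.name = name;
            this.type = type;
        }

        Instrumenter build() {
            return new Instrumenter(name, type);
        }
    }

    public static void main(String[] args) {
        // one generic entry point instead of newClientInstrumenter()/newServerInstrumenter()
        Instrumenter client = new Builder("my-http-client", InstrumenterType.CLIENT).build();
        if (client.type != InstrumenterType.CLIENT) throw new AssertionError();
    }
}
```

B's point is that the dedicated client/server constructs collapse into one parameterized builder in this shape.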
A
Cool, I'll think about that and discuss it with Anuraag this evening.
A
The Vert.x one, so we added this one; it did get merged. Nikita, did you happen to discuss this with Laurie?
A
Okay, so for others: Vert.x has its own OpenTelemetry instrumentation starting in Vert.x 4. It is not built in, but they have, like, an extension. Users can use that, and it will then emit OpenTelemetry instrumentation. And so we discussed previously:
A
Do
we
want
to
add
our
own
instrumentation
for
for
in
cases
where
libraries
do
this
and
have
their
own
way
to
emit
open
telemetry
instrumentation?
A
Do we want to reuse that versus producing our own instrumentation? And we decided that we really want to reuse it, even in cases where there are challenges, because there are definitely challenges to that, and Laurie had pointed out several.
A
But we want to try to drive those improvements in the library's instrumentation itself, even if it ends up being more work. That seems like a much better stance for the community to encourage, because we do want to encourage libraries to produce good instrumentation that we can reuse. This would be in the Java agent: we'd be injecting that automatically in the Java agent, while outside of the Java agent people could just use the library's OTel instrumentation.
A
Oh yeah, we did. I don't know, did it get released? I hit the button last night.
C
Yeah, some repo already got a Dependabot upgrade pull request for 1.4.1.
A
Okay, cool. I will actually publish the release notes today, if nobody beats me to it. So there was one fix: it was a regression, and we had discussed previously that our policy was going to be that any regressions...
A
Muscle,
let's
bump
these
up.
F
Josh? Sure. So the metrics API is getting marked as feature freeze. There is a PR that upgrades the Java API to the new metrics API, which will break the hell out of instrumentation when it comes through.
F
So, just as an FYI, please take a look and comment. Ironically, there was an ask from the SIG to deviate slightly from the specification, which I did, and I'm going to talk to Bogdan about it because I don't think he likes it. So I'm going to have him come to this SIG to talk it over when John's back, because I know John's the one who asked for it, but I think we all kind of agreed to it. In any case, I'm out next week.
F
So I don't know when we have that conversation, or how long you want this PR to sit. I am a little nervous because it's large and it breaks the API in non-functional, just cosmetic, ways. It's a lot of name changes.
F
The API is feature-frozen, so you should be able to read the spec, line it up with what this PR does, and see how you feel. And I think ideally we can move quickly, because once the API here is out we can upgrade instrumentation to it, and then we should expect a lot fewer breaking changes going forward, outside of views and SDK configuration parameters anyway. The TL;DR is: please look at it, please let us know what you think, and FYI.
A
Well, are you able to join this evening to maybe chat? I don't know how involved Anuraag has been in this PR.
F
Ironically, it depends on whether it rains. It's supposed to rain tonight, so likely I will be there. If it's not raining, then I have a volleyball game, but I will be... You guys run late a lot of the time, right?
F
Okay,
I
will,
if
I'm
going
to,
if
I'm,
if
I'm
not
going
to
be
on
time,
I'll
I'll
ping
you
on
slack
to
let
you
know
what
time
I
expect
to
to
be
able
to
show
up,
but
I
might
be
like
30
minutes
late.
If,
if
it's.
H
...not raining. Like 6:30, 6:30 Pacific, well...
F
Yeah, the related bullet point is on this. So I was trying to upgrade our exporter to the latest instrumentation and latest... hold on, let me unwind the stack. So we were using the SPI from the agent for the trace exporter and for the metric exporter.
F
I was trying to upgrade, with our latest release, to what is marked as the supported way to do it. So I want to confirm my understanding of how to provide a Google Cloud exporter for trace and for metrics appropriately.
F
So, from what I understand, we should be using the autoconfigure SPI for a trace exporter, which I created in autoconfigure-spi.
A
Is this all new work and docs from Nikita and team? Oh.
F
Okay, the last thing is autoconfigure. Today you can configure the Prometheus exporter and the OTLP exporter.
F
I
don't
know
if
I
actually
want
to
ask
for
this,
but
because
I
kind
of
don't
I
kind
of
want
to
just
rely
on
the
collector,
but
I
can't
if
I
want
to
provide
a
different
metric
exporter
to
auto
configure
right.
I
don't
think
there's
a
way
to
do
that
today
and
that's
because
the
current
sdk
only
allows
for
one
metric
exporter
going
forward
the
metric
prototype
that
we
have
allows
for
multiple
exporters.
F
Yeah, yeah, I mean, well, we support this. So we support it. Again, I don't want to have to do this; it's just that we have customers asking for it, and for some reason they won't take the collector. But we have this for traces, where you can actually have this extension jar for a different trace exporter.
F
All
I'm
asking
is,
would
it
would
it
be
okay
if
we
did
that
for
metric
yeah?
So
this
is
the
java
agent
specific
one.
A
So this one is now deprecated, because we can use autoconfigure. We are definitely interested in deprecating it, you know, having similar functionality in autoconfigure and removing this also.
C
I don't remember about the metric side, but on the trace side, autoconfiguration does have the low-level tracer SDK configuration provider, which allows you to programmatically configure several span processors and several exporters. So it would be logical if there were a similar thing on the metric side, meaning that your extension can wire up the SDK in any way you like, including several metric exporters.
F
Yes, yes. So I'd like to take a first crack at that, partially because I also want to use it to influence the spec, since the SDK spec is still stabilizing, and partially because I kind of need it right now for our exporter. I'd like to have that be part of autoconfigure, so if you include our jar, you know, you can just enable multiple exporters relatively efficiently.
F
What
I
don't
want
to
do
is
I
don't
want
the
documentation
on
google
to
say,
don't
use,
auto
configure.
Do
this
by
hand,
I'd
rather
start
pushing
everyone
at
auto,
configure
and
allow
them
to
have
google
as
an
optional
export
for
metrics
instead
of
having
our
documentation
around
metrics,
which
today
says
use
this
google
specific
method.
I
don't
want
that
anymore.
I
want
it
to
just
be
use
auto,
configure.
F
Yeah, so right now you can only have one metric exporter, and the only way to configure a different metric exporter, plugin-wise, is via the Java agent; autoconfigure doesn't give you any option to do that. So I guess what I'm suggesting is that I'm willing to take a crack at taking something like this metric exporter factory, making it look like the trace side, and getting that implemented in autoconfigure, in anticipation of having multiple exporters as a possibility in the SDK.
C
So there is a meter provider configurer already, yeah, but that doesn't allow exporters at all. So what I was trying to say is that if you want to allow several exporting pipelines, no matter whether for traces or metrics, you still have to write similar code that configures that. You cannot just set environment variables saying this and that and "please do the magic".
F
The
so
so
I
do
want
to
follow
that
convention,
though,
and
make
sure
that
metrics
and
traces
are
similar
in
terms
of
how
you
do
it,
and
I
also
want
to
lean
in
on
google's
documentation
around
open,
telemetry
java
to
encourage
people
to
use
autoconfigure
unless
that's
not
how
we're
going
as
a
community,
but
it
seems
like
I
should
encourage
this
usage
as
the
way
to
do
it
right
now,
though,
meter
provider
only
allows
you
to
provide
views,
and
you
cannot
control
exporters
at
all
because
you
actually
have
to
build
the
meter
and
the
exporter
is
an
external
thing.
F
It's
it's!
It's
the
there's,
a
what
the
hell
is,
that
thing
called
interval
metric
reader
thing
that
gets
built
later,
so
in
the
future.
F
It's
likely
that
the
new
metric
sdk
will
allow
you
to
specify
exporters
in
the
meter
provider
and
to
the
extent
that
you
know
that
that
will
make
this
all
be
the
same.
Great,
that's
not
the
case
yet.
So
all
I'm
proposing
is
the
to
add
that
that
meter
expo
configurable
meter,
exporter
provider,
thing
like
we
have
for
trace
and
fire
that
through
auto
configure.
That's
all
I'm
proposing
as
a
stop.
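The mechanism F is proposing, a named provider looked up by the autoconfigure machinery, can be sketched like this. This is a hedged illustration: the interface name mirrors the trace side's naming pattern but is an assumption, the real SPI is discovered with `java.util.ServiceLoader` via `META-INF/services` (a plain list stands in for that here), and the `google_cloud_monitoring` name and `project` key are made up for the example.

```java
import java.util.List;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical sketch of a metric-exporter SPI mirroring the trace side.
public class MetricExporterSpiSketch {
    interface MetricExporter {
        String describe();
    }

    // assumed name, by analogy with the trace-side span exporter provider
    interface ConfigurableMetricExporterProvider {
        String getName(); // matched against the user's configured exporter name
        MetricExporter createExporter(Map<String, String> config);
    }

    // the real SPI would use ServiceLoader; a list stands in for discovery here
    static MetricExporter load(String wanted,
                               List<ConfigurableMetricExporterProvider> providers,
                               Map<String, String> config) {
        for (ConfigurableMetricExporterProvider p : providers) {
            if (p.getName().equals(wanted)) {
                return p.createExporter(config);
            }
        }
        throw new NoSuchElementException("no metric exporter named " + wanted);
    }

    public static void main(String[] args) {
        ConfigurableMetricExporterProvider google = new ConfigurableMetricExporterProvider() {
            public String getName() { return "google_cloud_monitoring"; }
            public MetricExporter createExporter(Map<String, String> config) {
                return () -> "exporting to project " + config.getOrDefault("project", "?");
            }
        };
        MetricExporter exporter =
            load("google_cloud_monitoring", List.of(google), Map.of("project", "demo"));
        if (!exporter.describe().contains("demo")) throw new AssertionError();
    }
}
```

Dropping a vendor jar on the classpath would then be enough for autoconfigure to find and select the exporter by name, which is the "include our jar" experience F describes.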
A
Did anybody add...
L
I wanted to give you an update on the logging stuff. Yes, sorry for all the delays, but I'm being told that we're going to really start working on it next week, so there should be some things to see, hopefully. Unless there are more questions, in which case I'll be chatting about that on Slack with you folks, with everyone. So just a little update; that was it from our side. So yeah, awesome.
A
Cool. So the examples now build against the snapshot, which is nice because they'll always be up to date, and we can see when things change in what they require. We still haven't figured out exactly how to... Nikita, correct me if you have new ideas: when we make the release and tag the release, we still want to figure out how to update those examples to point to the released version at more or less the same time.
A
For
doing
some
profiling
spins
up
just
a
really
basic
spring
boot
app
and
uses
sort
of
jmh,
maybe
incorrectly
as
almost
a
macro
benchmarking
hardness,
but
it's
pretty
convenient
and
got
reasonably
stable
results
run
to
run
variants
so
that
we
could
make
some
changes,
and
then
this
kind
of
is
related
from
the
based
on
the
benchmarks.
A
If
you
capture
a
flight
recording
and
then
this
analyzer
code,
basically
strips
out
all
of
the
stack
traces
that
look
like
they
may
be
agent
related,
which
is
helpful.
I
I've
found
helpful
in
the
past
this
approach
just
because,
especially
as
the
as
we
get
this
number
down,
although
in
such
a
small
micro
like
this,
the
whole
benchmark
is
like
80
microseconds,
the
whole
request
loop.
So
it's
we're
still
talking
fairly
small
number
of
microsecond
overhead.
Even
though
16
doesn't
look
great.
A
So
one
way
it's
a
little
tricky
because
also
we
embed
our
inject
our
byte
code
in
to
method.
So
it's
hard
to
say
that
that
method
was
slow
because
of
our
injected
byte
code.
So
what
the
analyze
the
filter
does
is.
It
looks
for
if
a
method
called
any
of
any
open,
telemetry
code,
it
tags
that
method
as
likely
one
that
we
injected
bytecode
into
so
it
treats
that
as
a
an
agent
method
so
like
it
was
pretty
clear.
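The tagging heuristic just described can be sketched as a small frame filter. This is an assumed reconstruction, not the actual analyzer code: the frame ordering (callee first, as JFR stack traces are usually read) and the `io.opentelemetry.` prefix check are assumptions for the sketch.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the analyzer's "likely agent-injected" heuristic:
// if a method directly calls into OpenTelemetry code, tag the caller,
// since the agent's bytecode was probably injected into it.
public class AgentFrameFilter {
    /** Frames are callee-first; returns the caller methods to treat as agent methods. */
    static Set<String> tagLikelyInjectedMethods(List<String> calleeFirstFrames) {
        Set<String> tagged = new HashSet<>();
        for (int i = 0; i + 1 < calleeFirstFrames.size(); i++) {
            String callee = calleeFirstFrames.get(i);
            String caller = calleeFirstFrames.get(i + 1);
            if (callee.startsWith("io.opentelemetry.")) {
                tagged.add(caller); // caller likely had bytecode injected into it
            }
        }
        return tagged;
    }

    public static void main(String[] args) {
        List<String> frames = List.of(
            "io.opentelemetry.instrumentation.api.Instrumenter.start",
            "org.example.HttpHandler.handle",   // calls into OTel, so it gets tagged
            "org.example.Server.dispatch");
        Set<String> tagged = tagLikelyInjectedMethods(frames);
        if (!tagged.equals(Set.of("org.example.HttpHandler.handle"))) {
            throw new AssertionError();
        }
    }
}
```

Stack traces containing any tagged method can then be attributed to the agent rather than to the application, which is what makes the filtered recording readable.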
A
Just
from
this
first
run
that
an
area
of
slowness
and-
and
while
I
didn't
improve
the
the
performance
of
this,
I
did
hide
it
behind
is
recording
check
so
that
if
you're
say
one
of
one
of
the
goals
is,
if
you
have
a
low
sampling
rate
of
like
only
one
percent,
then
our
the
agent
should
have
much
less
overhead
right
because
we're
not,
we
shouldn't
be
doing
that
much
work,
but
it
was
fairly
similar
over
lev
overhead
at
one
percent
100.
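The isRecording guard mentioned above can be sketched as follows. This is a minimal illustration with a stubbed `Span` interface, not the agent's real code; the attribute name and the cost being guarded are made up for the example.

```java
// Hypothetical sketch of hiding expensive work behind Span.isRecording():
// unsampled spans skip the work entirely, so a 1% sampling rate should
// mean the expensive path runs for roughly 1% of requests.
public class IsRecordingGuard {
    interface Span {
        boolean isRecording();
        void setAttribute(String key, String value);
    }

    static int expensiveCalls = 0;

    static String expensiveAttributeValue() {
        expensiveCalls++;            // stands in for costly computation
        return "expensive";
    }

    static void recordAttributes(Span span) {
        if (span.isRecording()) {    // the guard: skip the work for unsampled spans
            span.setAttribute("demo.key", expensiveAttributeValue());
        }
    }

    public static void main(String[] args) {
        Span sampled = new Span() {
            public boolean isRecording() { return true; }
            public void setAttribute(String k, String v) { }
        };
        Span dropped = new Span() {
            public boolean isRecording() { return false; }
            public void setAttribute(String k, String v) { throw new AssertionError(); }
        };
        recordAttributes(sampled);
        recordAttributes(dropped);
        if (expensiveCalls != 1) throw new AssertionError();
    }
}
```

As A notes, the guard only pays off if the guarded work dominates; here the overhead stayed similar because, as mentioned next, the benchmark wasn't exporting anything.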
A
Well,
also,
the
the
benchmark
isn't
actually
exporting
anything.
So
certainly
a
1
sampling
rate
would
save
on
all
the
performance
overhead
of
exporting.
A
Oh
yeah,
I
wanted
to
I
didn't
follow.
Maybe
jason
can
give
an
update
on
the
this
is
a
more
so
this
is
a
fairly
low,
more
micro
benchmark.
This
is
a
kind
of
a
more
true
end-to-end
overhead
benchmarking.
I've
been
calling
it
a
macro
benchmark.
E
There are PRs out there. I wanted to time-travel a little bit, if you'll allow me, and just call out to Jonah and say that I'm going to paste a link in the doc here. It is some of the logging stuff that we have in our repo; if there's anything for you all to glean from it, you know, it's out there.
A
If we time-travel in a different direction here, how far do we have to go back? We had instrumentation for log4j and logback that emitted spans, which was totally against the spec and why we removed it. But this also might be... oh, I see, this emits them in the logs, basically.
A
It
it
captures
the
logs
that
were
yeah
so
these.
So
this.
I
would
imagine
that,
if
you're
working
on,
if
you're,
going
to
work
on
auto
instrumentation
of
logging
that
this
this
will
be
very
helpful
because
it
kind
of
shows
what.
L
Did
you
paste
this
in
just
so
that
I
get
the
right
branch
here,
yeah.
A
Great. Let's see: the HTTP client test is now JUnit, which has pluses and minuses. One of the minuses is the exception messages. But it's actually good, because it's now using the OpenTelemetry SDK testing artifact, and so Anuraag has been driving a bunch of improvements upstream into the SDK testing artifact.
A
Now
that
we're
using
it
more
heavily-
and
this
is
coming
soon
to
clean
up,
which
should
help
clean
up,
because
this
is
this
is
a
big
monster.
This
is
the
reusable
test
that
we
that
runs
against
all
like
20
http
client,
libraries
that
are
instrumented.
A
This
one
is
cool
because
it
simplifies
the
a
couple
of
the
modules
makes
it
a
lot
clearer
and
we've
been
struggling
with
shadow
for
weeks
or
months
now.
A
Oh
new
library
instrumentation.
So
this
is
for
people
who
don't
want
to
use
the
java
agent
and
we
have
a
listing.
A
We
have
a
new
muzzle
expert
on
the
call
nikki.
You
need
to
hide
yourself,
a
bunch
of
a
bunch
of
work
on
muzzle,
which
is
great.
This
is
this
is
what
helps
to
make
sure
that
we
only
inject
code
that
will
that
will
work
that
will
throw
into
the
right
versions
of
libraries
that
have
the
right
methods
that
we
need
all
of
that
good
stuff.
It's
really
important,
and
but
it's
also
been
another
thing
we
struggle
with
and
have
been
making
a
lot
of
improvements
over
time.
A
But now we have a nice mechanism for instrumentations to build their own little bootstrap module, so we don't need to pollute the API module anymore.
A
This
was
a
big
one.
I
thought
it
was.
There
was
actually
that
okay,
htcp3
concurrency
tests
were
were
had
not
been
enabled
yet
and
were
failing,
and
it
was
a
real
bug,
meaning
you
could
end
up
with
the
wrong
correlation
of
between
your
request
and
response,
or
your
request
and
listeners
yeah
with
callbacks.
A
This was not affecting users as far as we know, but we just noticed some warning logs in some tests, so that was nice to get fixed. And another fix, another HTTP client fix; this last week was all about HTTP clients. And then more testing, some good additional tests added. So, yeah, a lot of stuff this last week I wanted to call out.
E
As an open and welcoming community, should we also say hi to new people on the call? Maybe we should.
K
Nice to meet you. I'm working at F5, and I joined this meeting because I think on August 10 I will talk about some benchmarks I did regarding optimization of the protocol itself, especially for multivariate time series, and something that could also be applied to traces and logs in general.
K
In
the
in
the
siege
matrix
meeting
augustine
9
a.m,
but
this
meeting
basically
but
augustine
if
I
understood
well,
I
talked
yesterday
with
with
josh
ted,
young
and
joshua
mcdonald
and
I
think
joshua
mcdonald,
mansion
august
10
9
am
so
my
guess
is
it's
related
to
this
meeting,
but
I'm
not
sure
so,
maybe
josh.
F
But yeah, it's August 10th. They were going to wait till I get back from vacation. But you should link your benchmarks here, because they're impressive, and the proposal too, yeah.
K
Yeah,
I
will
copy
past
the
two
link
in
this
chat
in
one
minute.
F
Sounds good, yeah, because I think everybody should take a look at this. It's a pretty cool idea and there's a lot of good potential. So yeah, take a look. Thanks; I'm looking forward to that chat, man.
K
So, sorry. The initial document was this one, explaining why we need to do something. Then I created a first benchmark; that was the first link. And then I created a second benchmark, including an additional alternative, and that's the last one, and that's probably the one that I will present on August 10: this one.
A
Cool. And is that in the metrics SIG meeting or the spec meeting?
K
But
because
I'm
a
newbie
in
this
open,
telemetry
community,
please
tell
me
if
my
understanding
was
that
this
meeting
was
the
the
matrix
meeting.
But
if
it's
not
the
case.
F
Yeah, cool. No, no, the meeting where we should be discussing this is in another hour, and that's the one. That one goes back and forth in times, though: it's sometimes at 9 a.m., sometimes at... crap, what time is it in Pacific... sometimes at 4 p.m.
J
Hi. So I know a few people already, especially Jason and Jack, and John when he's not on holiday, but for people who haven't met me before: I'm Ben Evans.
J
I recently joined Red Hat to work specifically on Java and JVM observability, and essentially on the long-term health of the Java platform. I think that observability, and OpenTelemetry as part of that, is one of the things that we need to do to keep Java relevant over the long-term time horizon, by which I typically mean, you know, not 18 months or two years.
J
I'm thinking about the next 10 years, and observability is a big part of what I think we'll need to do for that. Before Red Hat I worked at New Relic for a couple of years, so I've had exposure to some of these things. But in the last six months I was there, I tended to get dragged into too many management and group-running activities, and not enough time doing technical work.
J
So
I'm
looking
forward
to
getting
back
to
to
spending
the
vast
majority
of
my
time
doing
technical
things
again.