From YouTube: 2021-07-08 meeting
E
Morning — good morning. I'm sorry I missed the bike shed on whether we should use log or logger or capital Log.
B
Nikita would have approved that.
B
Have you — you haven't proposed converting our tests to Clojure yet?
B
Cool. Let's see — one thing I wanted to call out today for folks was: go watch the java contrib repo. We had this request, this kind of issue, come in about adding some Jakarta EE–style injection/CDI support for OpenTelemetry, and they're going to work on that. We discussed on Tuesday whether we wanted this to go into the instrumentation repo or not, but decided this is kind of the perfect thing for this repo.
B
If you're interested in this kind of stuff, watch it so that we can get the coverage and reviewers. I did add the instrumentation repo maintainers, and Lauri as an approver here, so that we have more coverage, since we're going to use it heavily for instrumentation. John, I wanted to check with you: it kind of makes sense to me for contrib to be like a superset of the maintainers from both repos, but I didn't want to add you without your consent.
E
I don't feel a strong need to be a maintainer over there. I mean, if you'd like me to be, I'm happy to be in the pool, but I don't feel a strong need. I don't expect that I will have a huge amount of time to devote to it. So — unless it's just deleting all the Groovy that's in there, and then I'm all on board.
D
No, let me start another way. One of the things that we do during the build of our project is we have a ByteBuddy plugin — a ByteBuddy compilation kind of plugin — which generates muzzle methods for our instrumentation modules. "Muzzle method" meaning the list of all references — all methods and classes...
D
...that the instrumentation references, so that later, at runtime, we can check their existence. That functionality of the ByteBuddy plugin — invoking that plugin and applying it — was essentially divided between one script in the buildSrc folder and some classes in the agent-tooling module. I tried to extract them into one Gradle plugin that can actually be published — to Maven Central, for example — so that the plugin can be reused in custom distributions or extensions.
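For context, consuming such a published muzzle plugin from a custom distribution might look roughly like this — a sketch only; the plugin id, extension name, and directive names here are illustrative assumptions, not the actual published coordinates:

```kotlin
// build.gradle.kts — hypothetical consumer of a published muzzle Gradle plugin.
plugins {
  java
  // Assumed plugin id/version; the real coordinates would come from Maven Central.
  id("io.opentelemetry.instrumentation.muzzle-generation") version "1.4.0"
}

muzzle {
  // Declare the library range this instrumentation claims to support.
  // At build time the plugin generates reference-collection ("muzzle") methods;
  // at runtime the agent checks that every referenced class/method exists.
  pass {
    group.set("com.example")
    module.set("some-library")
    versions.set("[1.0,2.0)")
  }
}
```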
D
That
kind
of
works-
I
don't
pay
attention
that
pull
pull
request,
is
all
red
because
the
right
chicken
pro
problem,
where
we
plug
and
uses
our
project,
which,
which
requires
plugin
yeah.
That's.
D
...the thing. Okay, yeah, kinda — so that's why the build is all red; don't pay attention to that. What I'm mostly asking of reviewers on this pull request is: do the names of all the extracted things make sense, and do those things themselves make sense? Like, Mateusz already commented on this new module — do we actually need it, can we put those classes in some already-existing one — and stuff like that.
B
Yes — John was asking for the same disable-error-prone workaround for the SDK repo too, because that is, like — I don't know, I still debug with System.out.println lines, and I'm like, really...
B
Kotlin — now that our Gradle scripts are Kotlin, at least I can click through and see the source code on those.
F
Yeah, this is just a quick question. I have a PR to add exemplar exports — we know the SDK doesn't sample exemplars yet, but if they were to happen to be sampled in some magical fashion, they'd get exported in OTLP correctly. However, as we're going through the Prometheus reviews, we got Brian Brazil to look at that, and it's unclear to me whether or not we should take an implementation of Prometheus at this time — or, like, I don't know how comfortable you are with something that we plan to use to write the specification for how to export to Prometheus, versus taking in a PR that is only the stable components.
F
So, basically, what I'm asking is: should I cut the Prometheus part out of that PR or not?
F
Well, nothing's going to be populating it, right? Yeah — it can't get populated right now, and it's part of the SDK that's already considered unstable. But also, I'm thinking that we might have more discussion around Prometheus, so it's not like this is going to be submitted and then never change. Just the Prometheus bits, yeah. I personally...
E
...think it's fine, because it's unstable, and I think if we think what we have is at least reasonable at this point — and maybe the right thing — then if there's some way people can try it out, that's better than if it's nowhere, right? I mean, obviously they could only try it out if they figured out some way to actually write exemplars via the SDK or API or something, but at least it would be there — when we do have that, people will be able to try it.
E
Just the — I mean, the bugginess was more spec-related than implementation-related: just how to map various things into normal Prometheus structures. I think we were using the summary in ways that Prometheus users didn't really understand, or something like that. So anyway, I think it's fine — but I'm also not a Prometheus user, so my opinion is relatively meaningless.
A
We have somebody else who's interested in metrics — Ken is.
F
Yeah — in other news, I've tried three different ways to fragment out the metrics prototype so it's not one ginormous CL, and all of them suck in various ways: you're going to ask weird questions like "why did you do it this way," and the answer would be "go look at the rest of the CL." So I'm still kind of blocked on figuring out how to get that bundled in.
F
But
in
other
news
I
did
get
multi-export
working
for
a
simple
test
case
and
I'm
going
to
keep
flushing
that
out.
So
in
the
metrics
prototype
you
can
have
multiple
exporters
configured
and
they
all
work
on
their
own
timeline,
etc.
F
Double counting — right. You only pay the cost when you have multiple exporters, the way it's implemented. What I'm using is a bit set, with a kind of ID leased per exporter, and then you track the memory with that bit set. So you're paying the cost of at least an extra integer per metric stream — like, per instrument — but it's not super-duper expensive from that standpoint. It's just that if you have, say, one exporter with an hour-long timeline...
F
It
means
you're
going
to
be
keeping
track
of
all
the
deltas
that
were
collected
for
synchronous
instruments
for
that
entire
hour
to
report
them
without
any
kind
of
like
you
know,
go
back
and
collapse
functionality
to
kind
of
limit
the
memory.
It's
something
that
we
could
we
could
fix
over
time.
You
know
this
is
the
naive
implementation,
but
it
is
a
bit
funky.
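The delta-queue-plus-bit-set scheme being described can be sketched roughly like this — a toy model under stated assumptions, not the actual prototype code; the class and method names are invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.Deque;
import java.util.Iterator;

// Toy sketch: each collected delta carries a BitSet recording which exporters
// (by leased id) have already read it; once every exporter has seen a delta,
// it is pruned off the edge of the queue.
class DeltaBuffer {
    private final int exporterCount;
    private final Deque<Delta> deltas = new ArrayDeque<>();

    static final class Delta {
        final long value;
        final BitSet readBy = new BitSet(); // starts empty: no exporter has read it
        Delta(long value) { this.value = value; }
    }

    DeltaBuffer(int exporterCount) { this.exporterCount = exporterCount; }

    // Called on collection: record a new delta point for a synchronous instrument.
    void record(long value) { deltas.addLast(new Delta(value)); }

    // Called by exporter `exporterId`: sum the deltas it has not seen yet,
    // mark them as read, and prune deltas that every exporter has consumed.
    long collect(int exporterId) {
        long sum = 0;
        for (Delta d : deltas) {
            if (!d.readBy.get(exporterId)) {
                sum += d.value;
                d.readBy.set(exporterId);
            }
        }
        // Prune: the queue is oldest-first, and older deltas are always read by a
        // superset of the exporters that read newer ones, so stop at the first
        // delta some exporter still hasn't seen.
        Iterator<Delta> it = deltas.iterator();
        while (it.hasNext()) {
            if (it.next().readBy.cardinality() == exporterCount) it.remove();
            else break;
        }
        return sum;
    }

    int pending() { return deltas.size(); }
}
```

The memory trade-off discussed above shows up in `pending()`: a slow exporter keeps every delta alive until it finally collects.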
B
So, Josh, does that require — you're tracking... say one exporter is at two minutes and one is at three minutes; the deltas are one minute apart?
F
So effectively — this is for synchronous... For asynchronous instruments, you don't have to store a darn thing unless your aggregation is delta; then you just remember the last reported value per exporter. Each exporter would need its previous reported value, and you can calculate a delta for asynchronous instruments from that.
F
So that's a bit cheaper. For synchronous instruments, every time you get an export call you grab the delta measurement and track it with an empty bit set, marking it with the current exporter that read it. Then, as other exporters go, they grab from this queue of previous deltas, take the ones they haven't seen yet, and report them. So it can lead to — like, if you have two exporters that are in sync...
F
At
the
same
time,
it
can
lead
to
you
doing
a
little
bit
of
churn
with
grabbing
a
bit
too
many
async
instruments,
but
that's
effectively
how
it
works.
Is
you
you
store
each
delta
point
and
then,
when
all
collectors
have
read
them,
you
you
prune
off
the
edge.
You
know
and
we
just
track.
We
track
reads
with
a
bit
set.
I
can
riley.
F
Has
this
big,
complicated
design
dock
that
I
should
say
it's
just
a
one
picture
that
should
be
making
it
somewhere
into
the
spec
relatively
soon
that
this
is
based
on
yeah.
I
can.
I
can
dive
into
the
design
a
little
bit
but
again,
like
yeah,
I'm
trying.
F
...to pull this all apart and get us something that is a good one-step chunk. We talked about this last time: I basically have to gut the aggregators as they are today at some point, because everything is tangled in around them. My current plan is actually to finish this prototype, implementing all the features of the SDK that we're happy with, for a PR on the SDK specification that could get submitted — and then, once that's submitted, do the fragmentation based on the prototype.
F
Just from the standpoint of, you know: the notion of what an aggregator is could change; the notion of how we do exemplar sampling could change; the notion of what a measurement processor is could change. So I think it's a little too early to go super deep outside of the API — and I can't change the API without gutting the backing implementation to the point where it doesn't make sense to do anything else.
F
At this point, I think — but I'll run that by y'all and see if you agree, or, you know, when I ask questions or whatever on the...
F
Why that is — that is an SDK configuration. So what I did in the prototype is: when you go to instantiate an instrument, you're given an interface for where to send your measurements — unless you're asynchronous, in which case there's a different flow, but this is synchronous instruments — so you're given an interface for where to send your measurements. If you happen to be a view, you instantiate storage.
F
But we do have to pay the memory cost of tracking all of the instruments, then, kind of, you know, in these hash maps — if you will — over and over and over again.
E
For
all
the
different
instruments
yeah,
we
should
we
definitely
need
to.
I
don't
know
my
gut
tells
me
we
should
talk
riley
down
off
of
this
cliff
and
and
not
try
to
bite
off
something
that's
complicated
at
1.0
for
metrics,
but
but
we
should
talk
about
that
at
the
metric
c.
F
Yeah, yeah — and I'm happy to write all this down. I'm trying to make sure I'm not a funnel of, you know, experimentation with no one knowing what the hell we're doing, but there's a decent amount going on.
F
For example: the label processor not handling bound instruments correctly, or the fact that we're not pulling context correctly from people who are tracking context manually. Those are things that I think are relatively easy to fix and that we should try to push through — I just haven't figured out how to do it in a way that's not a giant breaking change, though.
F
Yeah, you should. So there are three prototypes right now in the spec: one's in C#, one's in Python, and then there's mine. There might be more — I'm not sure; we have deviated significantly in a few cases. Oh wait, there's a fourth — there's a Go one! The Go one deviates the most: Josh MacDonald tried this crazy API thing, which is really, really cool and completely different from anything we've done before.
F
All right — I will get something together that sounds like the best step forward, then, and get that reviewed. And sorry for monopolizing so much time on this, but I feel like I'm in a vacuum sometimes, and I want to make sure that everybody here is paying attention and that this is not wasted effort. So, cool.
B
So thank you — thank you for jumping in head first into it.
F
If anyone wants to try the prototype: it's almost at the point where you can use it end to end. You can use pieces of it end to end today, but not all of it is exposed. I can make sure that that branch is always up to date and that folks can try it out — but yeah, it should be implementing everything that has been discussed in the metrics SIG.
B
Great. Nikita wants feedback on — oh yes, this. Do you wanna explain? This issue is very interesting, and it possibly applies to the SDK repo as well, I would imagine.
D
I think we have to fix it, and the problem is — there are two problems. So, one: the straightforward way to solve it is just — well, it's not really a problem: before the release, let's do a pull request which brings the repository into exactly the required state...
D
1.4.0,
let's
merge
that
pull
request
then
tag
the
resultant
commit
and
voila.
We
have
everything
that
we
need.
The
problem
is
that
if
you
try
to
change
examples
to
point
to
a
version
which
is
not
released
yet-
and
we
currently
want
to
actually
build
examples
on
every
pull
request,
then
there
is
a
chicken
and
egg
pro
problem.
You
cannot
merge,
put
requests,
because
we
cannot
build
examples
because
version
is
not
released
yet.
D
So
there
are
a
couple
of
ideas:
how
to
fix
that
from
an
iraq
and
me
of
different
state
state
states,
different
stages
of
convolution.
So
please
read
discussion.
If
you
have
any
ideas,
they
are
most
welcome.
D
We make that pull request I just talked about into that branch. On the branch we don't build examples on pull requests — we build examples on pull requests only on the main branch — so we can point the examples to an unreleased version on a pre-release branch. We merge, we tag on that branch, we make the release from that branch.
D
Then
we
cherry
pick
those
changes
into
the
main
branch
as
well
and
on
the
main
branch.
We
always
have
examples
pointing
to
the
next
snapshot
or
the
current
snapshot,
and
before
we
build
examples
in
order
to
kill
two
birds
with
one
stone,
we
actually
tried
to
publish
our
main
repo
to
mail
in
local,
and
then
we
tried
to
build
examples
against
that
maven
log.
D
In
this
way,
we
always
test
both
that
our
examples
work
against
current
snapshot
and
we
force
developers
to
update
examples
with
every
change.
We
test
on
every
pull
request
on
the
main
branch
that
our
publishing
still
works.
That
examples
can
consume
published.
Artifacts
not
just
build
artifact
and
publish
snapshots,
and
we
have
released
tags
pointing
to
exact
exactly
correct
version
of
the
repository
which
uses
that
version
everywhere.
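The publish-then-build-examples check being described could look roughly like this in CI — an illustrative sketch only; the job layout and paths are assumptions, not the repo's actual workflow (though `publishToMavenLocal` is a standard Gradle task):

```yaml
# Illustrative sketch: every PR against main publishes the current snapshot
# to the local Maven repository, then builds the examples against it.
name: build
on: [pull_request]
jobs:
  examples:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Publish current snapshot to Maven local
        run: ./gradlew publishToMavenLocal
      - name: Build examples against the snapshot in Maven local
        run: ./gradlew -p examples build
```

This both exercises the publishing machinery and forces the examples to compile against exactly what would be released.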
E
So
the
downside
of
this
approach
on
the
examples
side
is
that
if
somebody
comes
into
the
repository,
let's
say
they're
using
version
1.3
or
1.4,
and
the
examples
have
now
diverged
for
whatever
is
on
1.5
snapshot
and
they
start
looking
at
the
examples.
They
won't
work
with
what
they're
using
they
would
have
to
go.
Look
at
the
examples
only
on
that,
like
on
the
tagged,
the
1.4
tag,
yes,
which
that
does
not
know
how
to
do
in
general
is
my
guess.
D
Well,
first,
we
can
always
point
to
that
in
release
notes.
But,
okay,
nobody
will
read
that,
but
yes,
that
will
be
in
problem.
So
as
long
as
our
like
extension
example,
apis
are
not
stable,
which
is
any
way
we
want
to
stabilize
them
as
soon
as
possible.
E
Yeah
as
soon
as
as
soon
as
things
are
stable,
the
problem
basically
goes
away.
This
is
we.
This
is
what
we
we
struggled
with
this
in
the
sdk
repo
for
quite
a
long
time
until
things
were
stable,
and
now
it
doesn't
matter
because
we're
backward
compatible
and
the
examples
are
always
going
to
work
so,
and
we
don't
update
the
examples
currently
until
after
a
release
to
if
there's
any
new
features
that
we
want
to
add.
Although
honestly,
we
haven't
done
that
either
so
we
haven't
touched
these
people
since
1.0.
Basically,
at
this
point.
E
Exactly
yeah,
that's
where
that's
kind
of
the
weird
the
weird
state
is
you're,
trying
to
provide
examples
for
unstable
apis
and
then
once
they're
stable.
The
examples
are
all
good
and
there's
no
problem
anymore,
so
yeah
it
may
be
worth
I
mean.
Maybe
the
thing
is
in
the
example.
I
don't
know
how
I
don't
remember
how
the
examples
are
structured
in
the
instrumentation
repo,
but
maybe
just
in
the
readme.
There
could
be
the
link
to
the
stable,
like
you're,
not
stable.
B
That
that
we
can
do
what
do
you
yeah
in
that
same
kind
of
vein?
What
do
you
think
about
adding
that,
like
at
the
top
here
of
because
one
of
the
things
we
have
struggled
with
and
like
you
mentioned
john,
the
readmes
we've
kind
of
postponed,
adding
updating
the
readme
or
documentation
with
prs
to
keep
the
docs
at
the
latest
release?
E
I don't know. I think the readme pointing at a snapshot is — well, again, when things are stable it doesn't matter anymore, everything is fine — but the readme pointing at a snapshot means that it won't be useful to anyone. I mean, you and I, and the people on this call generally, don't need to look at the readme; it's for people who are coming in externally, and so having it point at a snapshot is, I think, maximally confusing for the users who need it most.
E
That's — I think, yeah. But I think the readme documentation is solved within this workflow, because you cherry-pick that commit onto the main branch after the release is done, and then the tag is there and the documentation is up to date. Right? Yes.
B
Yeah, I was just wondering if we could also solve the issue of people having to submit both a PR for the code and one PR for the docs — the one we delay and don't merge until later on. But, well...
E
Well, it says the main branch is protected, and I don't think — I think we can get away with it on the release tag, because it's not a protected branch; I might be wrong there. I think there are issues with having the actions user put stuff on the main branch — Anuraag and I ran into this a bunch of months ago — but maybe that's been resolved; I don't remember.
E
I don't know — I mean, I'm hoping to do a release tomorrow, I guess, of the API/SDK. I could try to see how well this process works — I mean, in the simple API/SDK repo, I don't know. Or I can do it — I think it's his turn for the release; maybe he could do it, since I did the last release and it's his turn.
E
He was — he and I have been chatting in Slack today, so I think he's here today.
E
Yeah — you know, we could also hold off on the release; I mean, there's nothing that forces us to do it tomorrow. We could wait a couple of days. I don't think anyone will punish us if we need to wait a little bit.
E
Onorag
also
hasn't
deleted
all
of
the
aws
stuff.
Yet
so
I
can't
can't
do
release
until
that's
done
either,
but
we
can
also
chat
about
this
this
afternoon.
When
the
honorable
meeting.
B
So
I'm
gonna
be
out
the
rest
of
today
and
tomorrow,
but
I'm
not
needed
for
this
decision,
except
that
we
have.
We
did
have
somebody
kind
of
ask
about
the
next
instrumentation
release,
just
because
there's
a
bunch
of
bug
fixes
in
there.
B
Okay,
if
you
wouldn't
mind
pinging,
if
you
were
chatting
with
justin
pinging
him
just
to.
B
See
was
it
in
the
normal
slack.
F
Oh my god, it's the most annoying thing ever. Why did you let Google do this to you? Yeah — we did it to ourselves.
F
Here's the thing — here's the thing: there's got to be — and I kind of blame Gradle for this, because I remember talking to them about this at one point — a different notion of a build-tool workflow for development than for release. Error-prone should totally be in, like, the PR build; it should totally be in some build that I can run to validate that things are gravy before I send a PR — which I always forget to do anyway.
F
Kind
of,
like
I
run
spotless
apply
like
you
know
incessantly
before
I
send
prs,
but
it
should
not
be
on
every
freaking
belt,
especially
when
I'm
like
halfway
through
writing
code.
It's
like
hey
this
variable's
unused.
I
know
I
have
a
stub
method
I
just
made.
Can
you
give
me
like
five
seconds
to
write
it
anyway?
B
But so, what we do in instrumentation — or what I do, at least — in gradle.properties...
B
I
have
this
set
and
we
have
this
flag
that
disables
it
and
then,
when
I
want
to
run
that
separate
workflow,
I
I
I
comment
this
out
when
I
want
it.
Otherwise
I
leave
it
disabled.
You.
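The kind of opt-out flag being described might look roughly like this — a sketch under assumptions: the property name `disableErrorProne` is illustrative (check the repo's gradle.properties for the real one), while `net.ltgt.errorprone` is the real third-party error-prone Gradle plugin:

```kotlin
// build.gradle.kts — sketch of gating error-prone behind a Gradle property.
import net.ltgt.gradle.errorprone.errorprone

plugins {
  java
  id("net.ltgt.errorprone") version "2.0.2"
}

// Developers opt out locally by putting `disableErrorProne=true` in
// gradle.properties (or passing -PdisableErrorProne=true); CI leaves it unset.
val errorProneDisabled = findProperty("disableErrorProne") == "true"

tasks.withType<JavaCompile>().configureEach {
  options.errorprone.isEnabled.set(!errorProneDisabled)
}
```

This keeps the check in the PR/release build while letting a half-finished local build compile without "unused variable" failures.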
F
I tried to deprecate something in the metric data model package — specifically summaries — and I gave up making it work, because error-prone kept saying "hey, you're using deprecated things, I'm not going to let you compile." I would annotate something as deprecated, and then it still couldn't make use of a deprecated thing — I don't know how deprecated things aren't allowed to use deprecated things; that's interesting — but I had to suppress all the, like...
F
It's in the data packages, where you deprecate it; it's in the aggregator; and it's in the exporters — so it's in the Prometheus exporter and the OpenTelemetry exporter — and it showed up somewhere else randomly, but I had given up by that point, so I don't remember.
F
...everything that uses it, which sucks. It was insane, too — I tried to just suppress it in the tests, and that didn't work, or it inconsistently worked. I had a hell of a time. So if you've never done this before — trying to keep the deprecated thing around — that's fair. I can look at it, or just give up on deprecating summary; I'll just put a comment that pretends it's not a deprecation comment, and hope people read it.
F
You know, I did this before Anuraag went through and did the conventions-repo thing, too — so maybe I should see if it's still a problem.
E
Interesting. Yeah, error-prone is a big pain — nobody likes it, everyone hates it — except it is really nice to have as the final step. So — I don't know; I haven't looked at how that disable-error-prone setup works in the instrumentation repo, but if we could get it ported, that would certainly help things a lot. It wouldn't help with the deprecation, though.
B
There were a couple of PRs, I thought.
G
I like those Groovy assertions because they were kind of, like — easier to read. I mean, not even easier to read — easier to look at: you could just take a look and see which trace has how many spans and what kind of attributes there were. They were better from a visual perspective.
B
Yeah — just to show people the difference: here is the current sort of Groovy DSL assertion — name, kind, parent, attributes — and...
B
Yeah, yeah — if we could somehow get this shifted... I don't know which direction you're seeing me in, if I'm mirrored or not, but if we could get that shifted left, then that would also help prevent the line wrapping, and it would look a lot better.
F
No — Anuraag doesn't like that format, because he and I had this discussion. But it's, like, conventional AssertJ, and I think it's specifically to handle this: where you call a method and it flips your assertion to be, like, a different thing.
F
So
a
lot
of
the
collections
work
this
way
where
you
can
call
like
dat
and
then
the
you
know
the
property
name,
and
then
it
flips
into
a
abstract
iterable
assertion,
instead
of
just
you
know,
nesting
that
that
lambda,
so
it
would
flatten
this
out
where
instead
of
that
assert
that
attributes,
you
would
just
say,
has
has
attributes
or
you
say,
dot,
attributes
right
and
then
you'd
go
right
into
that
contains
only
instead
of
having
it
nested
in
the
lambda.
So
it
would
flatten
it
out
it
bring
it.
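The "flip into a different assertion" navigation pattern being described can be illustrated with a tiny hand-rolled fluent asserter — plain Java, no AssertJ dependency; every name here is invented for the sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the AssertJ-style navigation pattern: calling .attributes()
// flips the span assertion into an attributes assertion, so checks chain flat
// instead of nesting inside a lambda.
class SpanAssert {
    private final String name;
    private final Map<String, String> attributes;

    SpanAssert(String name, Map<String, String> attributes) {
        this.name = name;
        this.attributes = attributes;
    }

    SpanAssert hasName(String expected) {
        if (!name.equals(expected)) throw new AssertionError("name was " + name);
        return this;
    }

    // The "flip": returns a different assertion type scoped to the attributes.
    AttributesAssert attributes() {
        return new AttributesAssert(attributes);
    }

    static class AttributesAssert {
        private final Map<String, String> attributes;

        AttributesAssert(Map<String, String> attributes) { this.attributes = attributes; }

        AttributesAssert containsEntry(String key, String value) {
            if (!value.equals(attributes.get(key))) {
                throw new AssertionError(key + " was " + attributes.get(key));
            }
            return this;
        }
    }
}
```

Usage then reads as one flat chain, e.g. `new SpanAssert(...).hasName("GET /users").attributes().containsEntry("http.method", "GET")`, which is the flattening being argued for.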
F
I like the JSON-style instantiation DSL things that exist in other languages that are not Java. To the extent that you could go to, like, Kotlin and do that kind of thing — awesome. To the extent you could make a thing in Java that looks less like this — that'd be awesome too. But I don't know; AssertJ is the best we've got.
F
What if you made sure every test was written in all of the above: Kotlin, Clojure, and Scala?
F
Nope — no, those were the people who left the Scala community, honestly, if I remember right.