From YouTube: 2021-10-22 meeting
D
Yeah, well... I mean, I think there are not very many APIs that can't be desugared at this point, but for some reason MethodHandle.invoke is one that they... I mean, I guess it's sort of understandable that it would be a particularly difficult one to figure out what to do about around desugaring, yeah. It's interesting. I mean, you can have MethodHandle all over the place.
D
I just, you know... I'm learning all this myself, but as soon as your code calls invoke, even if it's never on a code path that can be hit, then the dexer won't generate valid output; it will just blow up and say: sorry, this requires Android API level 26 or above.
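A minimal stdlib sketch of the call shape being discussed (the class name here is hypothetical, for illustration only): on a desktop JVM this runs fine, but D8 rejects any class containing a `MethodHandle.invoke`/`invokeExact` call site when `minSdkVersion` is below 26, whether or not the path is reachable at runtime.

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokeDemo {
    public static void main(String[] args) throws Throwable {
        // Look up String.length() as a method handle.
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));
        // The mere presence of this invokeExact call site is what D8
        // flags when minSdkVersion is below 26, even if the surrounding
        // code path is never executed at runtime.
        int n = (int) length.invokeExact("hello");
        System.out.println(n); // prints 5
    }
}
```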
E
Yeah, things that you know you want to keep.
E
Yes, this is the one Laurie put in a PR for.
F
No user has reported it yet, so... I mean, those annotations, I think.
F
Just curious... oh, proactively, I... like, I mean.
D
So the answer is: enabling minification does not solve the problem.
D
My guess is that the order of operations there is that it dexes it first and then does minification, but I don't know; I'm not really sure. The Gradle... the Android Gradle plugin is a giant mystery, and I don't think any of it is open source, as far as I can tell, so it's really hard to know what happens inside there.
E
So what this is doing is removing Caffeine 3 from instrumentation-api and then sort of just patching it into the java agent.
E
From all the old PRs, remember: we added Caffeine 3 to support jlinked binaries that don't have Unsafe.
E
An issue... but I'm curious to see if Laurie has ideas.
E
The other thing that we were talking about is doing the MR JAR for the library instrumentation, because John did confirm that that would solve the Android problem, and it solves the jlink problem. And then the only remaining problem is that MR JARs don't work in the bootstrap class loader.
E
John, I'll wait to get Laurie's feedback on this, but I'll plan on merging it tomorrow regardless. I mean, it's fine as-is, at least for the critical issue.
D
I'm just saying: don't rush out a 1.7.1 for Android unless we get other people complaining, because we're not really in a hurry. I mean, honestly, at this point I'm sure we could skip 1.7 altogether and wait a month before upgrading, and no one would ever notice or care.
E
So this might be the most critical one, then... the one that really drives our patch release.
D
Yeah, we gotta put the Prometheus exporter in our Android apps. So, no.
E
How do we even get the Prometheus library? Because I remember when you upgraded it in the SDK repo, but why do we even have it? Yeah.
E
We just knocked ourselves out of all three.
D
By the way, with me joining a new team that is not yet using the OpenTelemetry-based agent... well, I'm gonna push hard. That might mean we log a lot of performance bugs; we'll see what happens. We mostly...
D
...with Kafka consumers, so we'll see how it goes.
D
They tried it a year ago. I don't even think... I don't even know if they were using the agent; I think they were using just the metrics library itself, which is Dropwizard, basically. It's just Dropwizard plus an exporter, or whatever they call it in Dropwizard land. They tried it a year ago and the performance, like, tanked their Kafka consumers very badly, but they didn't tell anyone.
D
So it's just been off, but they didn't tell Matteo... sure, the instrumentation team... that there was a problem. But anyway, I'm going over there two weeks from now, and I'm gonna start beating the drum to get that stuff in there. Especially right now... it's funny: we have the Kafka producer sending to Kafka in all of our service maps, and then it's just, like, nothing on the other side. But no, no, we have a thing over there.
E
Yeah, yeah. We also have no metrics for Kafka, right?
D
I mean, I think the way I'm going to get this stuff in is to just work and write a bunch of manual instrumentation, and I can crib all sorts of Kafka instrumentation. I don't think... like, if you have a very specific use case, it's not hard to build just manual instrumentation for it.
D
Yeah, we'll see. I don't know; we'll see what happens. I mean, it might... I would probably put the agent in there with all the instrumentation off and then write manual instrumentation, especially to get span context propagation and...
E
Exactly, yeah... if you want metrics from Kafka... well, if you're going to write your own, then it doesn't matter. But if you want to try that out, if you file an issue, I would be interested in throwing that in, because I'm starting to... we're starting to play with metrics a little bit.
E
Yeah, Maven has historically been pretty reasonable with backwards compatibility. That's why the gradlew is, like, basically required.
D
Yeah, I've run into some very old projects that were running old versions of Maven where that was not true, but I would think all of our reliability people would be like: oh yeah, builds always running on the same version, you know it's going to produce the same thing no matter what, even if it's not really going to be any different. At least it's a good story, and one less tool for devs to have to install by default. So...
D
Well, I'm not... I mean, it's funny: as soon as I put in that PR, our SRE was like, how about we just move to Gradle? And I was like, that sounds great, but that's going to be a big job.
D
Yeah, I mean, part of... I think Jack's goal here is to kind of both implement OTEP 150 and also try to make it look more like tracing, but I think what it's actually resulted in is kind of a Frankenstein that doesn't really look like either and is very weird. So I think there's still some massaging to be done in here, but yes, in principle... right, this will essentially be...
D
Good to split the cpa... I don't think we talked... we didn't talk in depth. We ended up talking... what did we talk about, mostly?
E
There was no... nobody said no, we shouldn't. I think there was, like... I think this was Josh's thought, but yeah, I mean, I think there was agreement that there should be some kind of an API. What it's called, or where it lives... we didn't.
D
I just... I need to call out and gloat a little bit that the logging SIG is now considering just having string as the log body. Yeah, yeah. It wasn't even me who said it; it was others. I think it was... was it Yuri? No, I don't remember who it was. Anyway, I'm just like: yeah, I told you so; this is what almost everyone needs.
E
So ideally, maybe put it into a histogram, and if you don't want the histogram due to cost, you can configure a view... last value, or... and this was cool, something Jason and I learned: if you want an averaging view, you can use a histogram with zero buckets.
D
Camera... yeah, but I also know this has been... this has taken up, I would say, at least half of every metrics SIG for the past couple of months, and I don't know that there's... I haven't heard a definitive conclusion to the argument, so.
D
So I think the latest compromise is: add count and min and max... no, not count; min and max... to the histogram data model, and then let people configure a single bucket like this if they also want a count.
E
I would like a summary aggregation, but that's clever.
D
Yeah, yeah. Anyway, I'm studiously trying to stay out of those arguments, because I don't think it's fruitful for me to be in there.
E
And we talked about... Josh brought up that CPU metrics are percentages. So, like, say you have up to a thousand percent CPU usage: that doesn't work well with our current default buckets.
E
So you would want to do explicit buckets, but hopefully the real solution for that is James's PR, the exponential bucketing.
C
Yeah, I actually wanted to chat about that quickly as well, because I think that PR is kind of in a state where I can bring it out of draft, although I have just, like, one or two questions about it. Let me get a link to it.
D
I've been... I've been intentionally ignoring this PR and letting you and uk and Josh sort it out before I even started looking at code.
C
Oh yeah, that's totally okay! It's a lot of stuff to get through as well. So the first thing here was... Anuraag, remember we talked about a long-list optimization? Here we go... like, here, surface.
C
Okay, so here I just used Arrays.asList, and then I remember we talked about the long-list optimization, but currently I don't think there's anything on the classpath that will be able to achieve that. So I was wondering: do we add something, or do we just keep it how it is, or...?
C
Okay, I'll just... yeah, remove that. Okay, cool. And I think there's gonna be some conflicts... I think there's already some conflicts because of...
C
So much stuff here... this PR... well, this hasn't been merged yet. This is doing a lot of stuff, so I'll have to refactor that, but I'm not sure which is going to get merged first. I assume this one.
D
I don't know; this one is also big and hairy.
D
And I don't know... it feels like there's a little bit of spec work that still needs to be hammered out before this is going to be ready, so I don't know. Remove delta? No... the idea here is that... well, the deltas, yeah: remove them from the views, so that the only thing that can specify temporality is... actually, maybe that's not true anymore; I've lost track. I think it may have... like, exporters can specify delta versus cumulative.
D
I thought there was also a movement to remove that option from views, but I haven't seen any code that actually implements that yet.
D
Yeah, I mean, I think a lot of that's going to depend, as Josh was alluding to, on what back ends do with OTLP, because OTLP supports either, right? So you might be able to say: I want my counters to be cumulative, but I want my histograms to be deltas, or whatever; like, OTLP lets you do that.
D
Or... I mean, somebody has some... I don't know, I'm not a metrics back-end implementer, but for some reason a histogram implementation only accepts... only works with deltas. I have no idea. Again, that's why I was saying: we're trying to design this magical system that can handle everything possible, except there's not really a need for that. And I was happy Josh has actually been, like, surveying actual metric back-ends that are relevant to try.
E
That was interesting, yeah. He touched on six different sorts of metrics back ends, his kind of go-tos, or things that have been influencing the design.
C
Currently it's... it's something to do with... We have, like, a pipeline in between our back-end and our collection, and I think it's because there's some... we're using min-max-sum-count or something like that, which doesn't work well being aggregated on, like, the client side. So we purposefully just made sure there was no aggregation at all on the client side.
C
Probably also, I think, maybe the memory usage of cumulative was a bit more than delta, and we just... I can't remember the history of it, but for some reason we use delta for everything. It's not to do with the back end specifically; it's more to do with... yeah, for some reason, aggregation on the client side was giving some inaccuracies.
C
Yeah, and we do... we do do a bunch of stuff in between, like, you know, the application and SignalFx: we have, like, a custom pipeline that kind of, you know, filters and aggregates stuff, just because we, you know, are overloading SignalFx constantly and constantly getting throttled by it, so we have to do something about that. I'm not part of the team that maintains that pipeline, so I can't really answer the question of why we don't do any aggregation client-side; I can't remember, but...
C
For some... for some reason we decided to inject a unique ID into the attributes, so it doesn't aggregate at all, and I wanted to move away from that. Because, obviously, it aggregates by attributes, so if you inject a unique attribute, it won't aggregate anything. It's kind of a hack to get around anything being aggregated.
C
Yeah, but it was something to do with, like, averages... if you get the average on the client side, it doesn't work well when aggregated in our pipeline, because we're doing aggregation there as well. So it doesn't work when you aggregate it twice... something like that, but...
C
Yeah, it's interesting. Maybe we could move to cumulative, and... if we were to migrate from cumulative to... sorry, from delta to cumulative, maybe there would be some kind of intermittent period where some metrics would be cumulative and some would be delta. Maybe there is a use case, but I don't know; it's probably not super important.
C
Anyway, moving on. What else was there? So, yeah, there's conflicts... so I won't worry about those, that conflict thing; I'll just kind of... oh, there are some conflicts, but I'm sure they won't be too hard. What else was there?
C
Yeah, cool, too easy. And what else was it? Oh yeah, so basically the approach of this is that it kind of doesn't do too many optimizations; it goes for safety over optimization. It does a couple of copies of arrays for, kind of, immutability purposes. It doesn't do any fancy indexing.
C
It doesn't do any fancy, like, storage; I've just kind of gone for basic structures that will work, and then they're all kind of interfaced, so we can benchmark and compare different strategies against each other easily, which is what we discussed here... something here. But yeah, anyway, I'll get this out of draft today and people can have more of a proper look at it. Yeah, sound good?
D
I told him: okay, that's an interesting idea.
D
Yeah, as I said, I don't want to speak for him; I don't know exactly what his use case is, but he wanted... he wanted the marshallers for some reason. So we weren't...
E
Oh yeah, here's the... oh yeah, this sounded cool, John. You said you were using this also, yeah... InfluxDB, yeah.
E
This is for the critical use case of monitoring the weather in his white...
D
I was... I was commenting that... so I use the InfluxDB Cloud SaaS offering. I write my metric data, one or two data points per minute, and I've been doing it for, like, four months, and I think my bill is currently up to, like, nine or ten cents.
E
Yeah, and then we've just got 10 to go here. I think I... did I merge one? No, there's a couple in flight.
D
A bit of... well, I mean, I'm storing all my data and they have basically infinite retention, so it's gonna grow over time. Eventually it'll get up to ten dollars, you know, probably a few years from now. Hopefully the company stays around that long; that's more of my problem... more of my question at this point.
D
Oh no, that's the... yeah, 59.88 degrees Fahrenheit. So there you go.
D
Yeah, the foundation tends to stay pretty... it's a pretty good temperature reservoir, or heat reservoir.
E
We could talk about the... let's talk about that one, the semantic attribute versioning. Oh yeah, that is actually probably worth at least chit-chatting about.
E
I was... it seemed weird to me, like, if we're going to be... this idea of bumping it every time, out of sync with the spec repo... like, introducing a new version has cognitive cost.
E
But the thing that turned me on to this idea, though, was the idea that if we sort of handcrafted it more, to really minimize breakages, then we could... you know, maybe we could get away with, you know, bumping it every year or two, kind of a thing, and that would feel...
B
I mean, and then that class is just a bunch of strings, so, like, even the enum values: they're a key that just equals the string of that name, I guess. So we could always just keep old ones. I can't imagine a situation where we can't keep an old one.
D
Yeah, and I guess if they keep the name of the value the same... like, let's say... well, this did happen, actually, at some point; I don't think it happened between releases, but they changed things from being all uppercase to all lowercase. I guess, if the name of the value stays the same, that's fine; it's not going to break any compatibility, or binary compatibility. We just output a different value at that point. So that's probably fine.
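The point about the class being just a bunch of strings can be sketched like this (the class is hypothetical; the `db.type` to `db.system` rename is borrowed from the semantic conventions as an example): old constants can be kept, deprecated, alongside the new ones, and a value whose constant name is unchanged can change its emitted string without breaking source or binary compatibility.

```java
public final class SemanticAttributesSketch {
    // Old key kept around (deprecated) so callers compiled against an
    // earlier release keep linking; new code uses the new constant.
    @Deprecated
    public static final String DB_TYPE = "db.type";
    public static final String DB_SYSTEM = "db.system";

    // A value whose *constant name* stayed the same while the emitted
    // string changed case: callers still compile and link; only the
    // wire value they output differs.
    public static final String STATUS_OK = "ok"; // was "OK" in an older rev

    private SemanticAttributesSketch() {}
}
```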
D
Right, yeah, I don't have a good handle... all of this is actually kind of super annoying. I think the whole schema evolution thing... it seemed like a really good idea at the time, and now we're hitting... now the rubber's hitting the road, and I don't know that it's actually ending up being a super awesome solution.
D
What we're talking about right here is: one of the transformations that's possible is that they've taken one attribute and replaced it with another. But we're generating all those constants, all those keys, as constants, and that means that for people who are dependent on earlier versions of the semantic convention API that we provide, we have to keep those around so we don't break our compatibility guarantees.
C
You could probably do the generating of the code and have it consider the version as well.
E
The other problem, James, that we have in the instrumentation repo is the instrumentation API: we have these attribute extractors that are sort of designed around the schema, so we have, like, route, scheme, host. If any of these changed, we'd have to kind of decide what we would want. Thank you guys, yeah.
D
Yeah, I will... let me go find the topic.
D
It's really cool. It's a very interesting approach and, as I said, I learned... I learned some CS that I didn't know, so that was good.
E
Fast... well, if somebody is using one of these, yeah... how do they even tell us which version of the semantic conventions they want to emit, right?
D
I posted a link, by the way, to that YouTube talk... to the Strange Loop talk. Nice.
D
Hey, I'm going to bail out. I'm off tomorrow, but I'm back on Monday, so it's all good. See y'all later.
E
What are we supposed to do? Well, that fails.