From YouTube: 2020-12-16 meeting
G
I did fess up recently on one of the maintainer calls that I hadn't made it out of bed yet.
G
All right, what's the hot topic this week?
A
I have at least two: one question, one problem. The problem is that I was told yesterday that Splunk very much wants GA by mid-April.
A
Because so far we are, well, not exactly, but we are like the wild, wild west. Anybody is doing whatever they want, which is, yes, moving the project forward, but not in a focused way.
G
The hardest thing, I think, has always been for me the API stability part of it.
G
The instrumentation project: I'm totally comfortable releasing 1.0 of the Java agent much earlier than that; it just requires, you know, the semantic attributes. I mean, our public contract is so small, and it's proven out to be stable. We have, I know among all of us, a lot of preview customers on it.
E
The only caveat with that is that, unlike the SDK, we probably can't, or shouldn't, split instrumentation APIs between tracing and metrics; they're supposed to sort of be the same thing. That's where the operation concept has come from, and whatnot. So I'm just worried that if metrics are still in progress, we won't be able to finalize the instrumentation API for some unpredictable reason, based on the metrics API changing and then our API not working anymore. That's my biggest concern.
F
Let's see: when is the deadline for metrics to be done, in order for you to be able to be done with your goal?
G
I mean, we can work side by side. As long as metrics is progressing along, we can be integrating it along the way. It just comes down to a resourcing and focus issue, like Nikita said.
F
There is another choice: you can shape the metrics for a period of time, do everything, but don't expose them in any public API.
G
Yeah, if we're producing metrics, we need to know things like the labels. The semantic conventions around metrics need to be finalized also.
G
I mean, as long as it's GA by the time we GA, and as long as it's progressing and stable, I don't think that's the, well...
A
Let's say, if you want to have a Java agent GA by the first of April, it would be perfect if we had metrics stable by the first of March, just in case. I don't want to play a very tight game. Yeah, no.
G
You're right: at least a month of the actual spec completely frozen, semantic attributes completely frozen.
G
For those, like, you know, obviously span name, but others too, like response code, other things, yeah.
K
I would do it differently. I would just put all these things as properties, or entries in the baggage, record with the right context on the metric, and then use what you want the views API to be, to configure whatever the default is anyway.
E
Yeah, again, for auto instrumentation, of course, anything goes; nothing great at all about that. But then, finally, using our actual instrumentation API: how do we manage the syncing of the attributes and labels of spans and metrics? Do we use baggage? Do we have our own abstraction that can set attribute and set label?
E
Request path, response code: these things are going to have a lot of overlap with spans and, as you mentioned, baggage is one option, I think, and we have to sort of figure out how we're going to do that, I think.
E
So if it's only auto instrumentation, it doesn't matter, but we also can't keep our instrumentation API in alpha forever; April is a long way away. Sleuth was wanting it this month, right, and now they're waiting; but even if they're waiting another four months, that's a pretty long time. So I am interested in figuring out when we're going to be ready for that also.
A
In some sense that makes our life harder. Like the HTTP clients refactoring that Trask is currently doing: we would need to do that in two repos, so that makes lives harder. But it would make some of our life easier if we have a smaller main repo, with only, like, tier-one instrumentations.
G
There's something about it being out of the main view that, I think, could be nice, like feeling like it was more manageable; but at the same time, there's something really nice about the mono repo as far as working across it. So I would say, let's just keep it in mind, and if it becomes painful, then we pull the trigger.
G
No API surface, except the implicit API of what the span attributes were.
G
Right, and our config attributes. I mean, we have some stuff to lock down still in our public contract outside of just APIs. But it's a reasonable list.
A
Well, how did we jump from mid-April to end of January? We took out all the hard stuff, yeah.
G
I would like that, I mean, personally, and it would be nice to have that out, you know.
G
We are, yeah, we're hiding things that aren't stable; we're hiding things that aren't defined by semantic attributes, by the spec, behind experimental flags.
F
Okay, so Nikita, you may want to look at which ones you want for this April, because it may include some of the things that are still behind that flag, and you may have more work to do, including going through and finding that.
G
With respect to that, yeah: do we want to have two different artifacts, one that's our base artifact with tier one, and one that's all, with everything?
G
If we decide we'd want to pull something out, like one of the lesser tiers, I think once it's in, we can't pull it out.
A
Which reminds me that we have to have an example of how to use our repo as Lego pieces.
A
Yes, and the second question was about class loading. I suddenly realized that I don't understand how exactly the JVM picks which class loader to load any given class in.
F
Whenever you call new, you will call the instance's class loader to load that class. The whole thing is a tree; but if you have two class loaders somewhere, and from somewhere you try to do a new instance of something, it will start from its parent up to the root.
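The parent-delegation model F describes can be seen in a minimal sketch (class and method names here are illustrative, not from the agent codebase):

```java
// A child class loader with no classes of its own: loadClass first
// delegates up the parent chain, so java.lang.String always resolves
// at the root (the bootstrap loader, reported as null).
public class DelegationDemo {
    public static Class<?> loadViaChild(String name) throws ClassNotFoundException {
        ClassLoader child = new ClassLoader(DelegationDemo.class.getClassLoader()) {};
        return child.loadClass(name);
    }

    public static void main(String[] args) throws Exception {
        Class<?> c = loadViaChild("java.lang.String");
        // Same Class object as the one the parent chain already loaded.
        System.out.println(c == String.class);
        System.out.println(c.getClassLoader()); // null: bootstrap loader
    }
}
```

Because delegation goes parent-first, the child never defines its own copy of the class; it sees exactly the Class object its ancestors already loaded.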
J
I think so. I think we've got general agreement. The only thing I'm not 100% sure about is when we auto-populate the global.
J
I don't think there's a right answer on this one, honestly; there are pros and cons on both sides, and I honestly don't know what the right answer is. So I think maybe we just flip a coin and move forward.
N
I think, okay.
J
Yeah, my daughter had emergency surgery yesterday and I've been thinking, so I'm very tired.
G
I think what you would want in that case is to inject that, have that in, like, say, the attribute key here. And I agree, I'm not convinced this actually works the way it says it does, but ideally, right, we want it in this AttributeKey class; we want to, like, inject it into there, whatever class loader that's in. That's where we want that to live.
A
And so the essence of the solution, I believe, is that we have to have a holder somewhere which is available to everybody. So the holder should be in the bootstrap class loader, but actual instances should be created by the class loader that can load that instance's class; so the holder and the creation logic should be separate.
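A minimal sketch of the split A describes, using a hypothetical GlobalHolder class (not the agent's actual implementation); the creation logic lives elsewhere and only calls set():

```java
// Hypothetical holder: typed against Object so the class itself has no
// dependencies and could be injected into the bootstrap class loader,
// where every other loader can see it. The actual instance is created
// by whichever class loader can see the implementation class.
public final class GlobalHolder {
    private static volatile Object instance;

    private GlobalHolder() {}

    public static void set(Object impl) { instance = impl; }

    public static Object get() { return instance; }
}
```

The point is only the separation: storage is in one (universally visible) class, construction in another.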
G
Yeah, because you want the key to be the class loader, yeah, and you want some kind of a generic... basically, you want a generic map from your context key, your string, to your attachment. Well, I...
G
Something like that, where your first entry point is basically the Class of T, and then you get back a map of something to those.
F
The key of the map that you have there is not an instance of the class; it's the descriptor of the class, correct, and the descriptor includes the class loader associated with it. So essentially, for every instance of that class, per class loader, you have this map. That's exactly what it says, that map, yeah.
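That "one map per class descriptor" behavior matches the JDK's ClassValue, which keys its cache by the Class object; since a Class is unique per (name, defining loader) pair, the values are effectively stored per class loader. A sketch, with illustrative names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// ClassValue computes one value per Class object. Two loaders defining
// the "same" class by name get distinct Class objects, hence distinct maps.
public class PerClassAttachments {
    public static final ClassValue<Map<String, Object>> ATTACHMENTS =
        new ClassValue<Map<String, Object>>() {
            @Override
            protected Map<String, Object> computeValue(Class<?> type) {
                return new ConcurrentHashMap<>();
            }
        };
}
```

Repeated get() calls for the same Class return the same map instance, so the attachment survives for the lifetime of that class in that loader.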
G
Yeah, I saw a comment that you have one of those now in the SDK that we can all use.
E
On the benchmark side, I dug into some history of the JVM while looking at that. Back in the day, before Java 7, the JDK usually cached the entry: it actually returned the same entry object for each iteration. But after adding escape analysis, they got rid of that, because they found escape analysis did it automatically.
F
So I don't know; they're measuring with JMH, and they see heap allocations: it shows in the heap allocation rate, normalized by operation. So I bet they know what they are doing, Alec.
F
It's called alloc rate norm, yeah; it's the allocations, the memory. Okay, maybe behind the scenes the JVM counts that as a number of allocated objects, but returns the same object and doesn't actually do the allocation; that may be possible. But I expect that the smart people on JMH know about these tricks. So I don't know, I don't know what's happening.
F
I just saw a lot of... I'm trying.
E
To look into it, yeah. Like, what is in the map: they don't store entries, which is why they compute an entry every time in the entry set; so they changed the behavior from Java 6 to Java 7. I read about that history and I was considering reusing the entry for our iterator also, until I found that; so then I didn't. But I'll look into that, just to get a better idea of what's going on there. That'll be interesting.
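The entry-reuse idea E was considering (before finding the Java 7 history) would look roughly like this hypothetical flyweight iterator, which hands back one mutable entry and is only safe if callers never retain it:

```java
import java.util.Iterator;
import java.util.Map;

// Flyweight iteration: one entry object is mutated in place per element,
// so iterating N pairs allocates one entry instead of N. Unsafe if a
// caller stores the entry, which is part of why the JDK moved away
// from this pattern once escape analysis made it unnecessary.
public class FlyweightEntries {
    static final class ReusableEntry implements Map.Entry<String, String> {
        String key;
        String value;
        public String getKey() { return key; }
        public String getValue() { return value; }
        public String setValue(String v) { String old = value; value = v; return old; }
    }

    public static Iterator<Map.Entry<String, String>> over(String[] keys, String[] values) {
        ReusableEntry entry = new ReusableEntry();
        return new Iterator<Map.Entry<String, String>>() {
            int i = 0;
            public boolean hasNext() { return i < keys.length; }
            public Map.Entry<String, String> next() {
                entry.key = keys[i];
                entry.value = values[i];
                i++;
                return entry;
            }
        };
    }

    // Convenience used below: consume the iterator into "k=v;" pairs.
    public static String join(String[] keys, String[] values) {
        StringBuilder sb = new StringBuilder();
        Iterator<Map.Entry<String, String>> it = over(keys, values);
        while (it.hasNext()) {
            Map.Entry<String, String> e = it.next();
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return sb.toString();
    }
}
```

`join(new String[]{"a","b"}, new String[]{"1","2"})` produces `"a=1;b=2;"` while constructing a single entry object.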
F
Yeah, but if you go down: I started by improving from 47 nanoseconds to 35 nanoseconds, and then after Anurag started me with "can we do better, better, better", I did another round of improvements, so I got from 47 to 21. I think it's the update section, yeah. I did another round of improvements on that. Nice.
F
So right now we allocate very minimal memory. We still allocate a lot of byte arrays; I found a way to remove those allocations for the attributes. But we are much better than the proto. So, to explain this: the "marshal proto" is what we have today.
G
I wanted to ask for a summary of why the proto, the auto-generated stuff, can't be as good.
F
Two important things. One: it follows that stupid model of builders of objects to construct immutable objects, so there is always an extra builder allocation. Second: it treats any byte array or anything as mutable, so it does a copy of everything, even though we know, in the span data, we do the same thing and we do guarantee that those are immutable.
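A plain-Java illustration (not actual protobuf-generated code; the class and field names are made up) of the two costs F names: the extra builder allocation per message, and the defensive copies forced by assuming inputs are mutable:

```java
// Generated-style message: every build allocates a Builder, and byte
// arrays are defensively copied on the way in and on the way out,
// because the generated code cannot know the caller won't mutate them.
public final class SpanMsg {
    private final byte[] traceId;

    private SpanMsg(byte[] traceId) { this.traceId = traceId; }

    public byte[] getTraceId() { return traceId.clone(); }  // copy on read

    public static final class Builder {                     // extra allocation
        private byte[] traceId = new byte[0];

        public Builder setTraceId(byte[] id) {
            this.traceId = id.clone();                      // copy on write
            return this;
        }

        public SpanMsg build() { return new SpanMsg(traceId); }
    }
}
```

If the caller can guarantee immutability, as the span data does, both the builder and both copies are avoidable.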
F
So, for example, for trace ID, span ID, links: we do generate byte arrays, and then we copy them. Also, for normal strings, protobuf needs to encode them in UTF-8, and in order to encode them in UTF-8 they put them into what they call a ByteString, which is kind of a wrapper on top of a string or bytes, because on the wire, string and bytes are the same for protobuf; the only difference is whether they are UTF-8 encoded, or whether the caller is expected to encode them or not.
F
I'm not creating any objects so far; well, I'm creating some temporary objects, for two reasons. I need to cache the size, because I need the size in two places: once to calculate the size of the entire message, to pre-allocate the buffer; and secondly, every message has this format, ID then length first, which is good for deserialization, but because of that, I have to cache the size of everything.
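The two-pass, size-caching scheme F describes can be sketched on a toy message that is just repeated length-prefixed strings (a simplification for illustration, not the actual exporter code):

```java
import java.nio.charset.StandardCharsets;

// Pass 1 traverses the model to compute (and cache) every sub-message
// size so the whole buffer can be allocated once; pass 2 traverses
// again and writes each field as <varint length><payload>.
public class TwoPassEncoder {
    // Number of bytes an unsigned value takes as a protobuf-style varint.
    static int varintSize(int v) {
        int n = 1;
        while ((v & ~0x7F) != 0) { v >>>= 7; n++; }
        return n;
    }

    static void writeVarint(byte[] buf, int[] pos, int v) {
        while ((v & ~0x7F) != 0) { buf[pos[0]++] = (byte) ((v & 0x7F) | 0x80); v >>>= 7; }
        buf[pos[0]++] = (byte) v;
    }

    public static byte[] encode(String[] strings) {
        byte[][] utf8 = new byte[strings.length][];
        int total = 0;
        for (int i = 0; i < strings.length; i++) {          // pass 1: sizes
            utf8[i] = strings[i].getBytes(StandardCharsets.UTF_8);
            total += varintSize(utf8[i].length) + utf8[i].length;
        }
        byte[] buf = new byte[total];                        // single allocation
        int[] pos = {0};
        for (byte[] b : utf8) {                              // pass 2: write
            writeVarint(buf, pos, b.length);
            System.arraycopy(b, 0, buf, pos[0], b.length);
            pos[0] += b.length;
        }
        return buf;
    }
}
```

The cached per-field sizes (here, the utf8 array) are exactly the temporary objects being discussed: they exist only because the length has to be known before the payload is written.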
F
So I need to calculate the sizes once and cache them, and I allocate a couple of temporary objects just to cache the size. I first traverse everything to calculate the entire size, to allocate the initial buffer, the whole buffer; then I start serializing, and I still need that size, so I need to cache it. Because of that, I'm allocating a couple of objects. So you can see that I'm allocating, like, instead of a hundred and sixty thousand bytes, so it's 160K per span...
F
No, the last one, the rates, yeah, the memory rate. So the "marshal proto" is the normal marshalling of things. Marshal proto is 160,000, so it's 160 kilobytes, not per second but per operation, and in my operation I have 16 spans.
G
I see. So you're not creating... you're not using the protobuf objects at all; you're going straight from our model to the wire.
F
Right now, I'm going from our model to kind of a temporary thing where I cache the sizes, so I still create a couple of objects to cache the sizes, and then to the wire. I'm working to not create these objects at all. But I started with this and, as you can see, in memory I'm like 2x or more: from 160 to 69.
G
I gotta get you to optimize our JSON-over-the-wire encoding, our custom JSON encoding.
K
I have a plan. So, I read what gogo proto does, which is traversing the things twice: once to calculate the size, and then, because the size is required as the first thing, you start to populate the buffer from the end. You start with the lowest, with the leaf, and then you have the size as you write.
K
You can calculate the second size, instead of caching it, as part of appending from the end: here is where you started appending, so this was the size. So it's very nice; I think they were very smart. So you do two traversals of the entire object, but only one for calculating the entire size.
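The reverse-fill trick K describes for gogo proto can be sketched like this (an illustrative toy, not the gogo code): the buffer is written back-to-front, so by the time a length prefix is needed, the payload is already in place and its size is just a pointer difference, with no per-message size cache.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Fill the buffer from the end toward index 0. When a sub-message is
// finished, mark() minus the current position IS its encoded size.
public class ReverseEncoder {
    private final byte[] buf;
    private int pos; // next free slot, moving toward 0

    public ReverseEncoder(int capacity) { buf = new byte[capacity]; pos = capacity; }

    public int mark() { return pos; }

    public void writeBytes(byte[] b) {
        pos -= b.length;
        System.arraycopy(b, 0, buf, pos, b.length);
    }

    public void writeVarint(int v) {
        int n = 1, t = v;                 // count the varint's bytes first
        while ((t & ~0x7F) != 0) { t >>>= 7; n++; }
        pos -= n;                         // reserve the slot, then fill forward
        int p = pos;
        while ((v & ~0x7F) != 0) { buf[p++] = (byte) ((v & 0x7F) | 0x80); v >>>= 7; }
        buf[p] = (byte) v;
    }

    // Length-prefix everything written since `mark`: no size cache needed.
    public void writeLengthSince(int mark) { writeVarint(mark - pos); }

    public byte[] toByteArray() { return Arrays.copyOfRange(buf, pos, buf.length); }

    // Encode one length-delimited "hi" payload as a demonstration.
    public static byte[] demo() {
        ReverseEncoder e = new ReverseEncoder(16);
        int m = e.mark();
        e.writeBytes("hi".getBytes(StandardCharsets.UTF_8));
        e.writeLengthSince(m);
        return e.toByteArray();
    }
}
```

One traversal still computes the total size (to pick the capacity); the second traversal writes leaves first, and every length prefix falls out of the pointer arithmetic.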
G
All right, I'll ping you on Slack on Monday.
E
Yeah, me neither, but Codecov nowadays reports sometimes, not every time, and it's been really weird. So I don't know; I'm not sure if it's a problem with Codecov or the build, and I don't think we have any problems with our build. So I just want to see if Coveralls works better, so I might try it. The other thing is...
K
For JaCoCo, or whatever plugin: do we use the official one from Gradle, or do we stick with the one from Pivotal or Palantir?
G
This might be the shortest Tuesday night meeting ever.
K
Yeah, so by the way, Trask: you are asking everyone to go directly from the application to your back end, so you don't use the OTel collector at all, correct?
G
Now, I think the long-term plan is to have some kind of a collector thing that does accept OTLP, so that we will be able to send OTLP from the agent. But of course, our collector is going to be .NET, and for .NET plug-ins, and all of that.