From YouTube: 2021-06-03 meeting
D
And did I send them to the right place?
C
It's not done, though. We were waiting for the data model to get stable, and for the OpenTelemetry metrics API to get more stable, before we finish this. We're working on a spec right now, but effectively only sums and gauges come in. What about... we're dropping histograms on the floor?
C
Oh, for traces? It's here. Yeah, sorry, I thought it was metrics. Okay, traces. Yes, this kind of works. We don't have any integration tests for it, though, and it's broken every single release.
C
So that's number one on my list. I don't know if you saw, but we're slowly rolling out integration tests that spin up, you know, Kubernetes and all that kind of junk, testing end to end and making sure it all works. But auto exporters are somewhat next; I'm just not sure when we'll get it all sorted. Hold on, my cat's trying to destroy my door, I'll be right back.
A
What version are you on currently?
C
That is using 1.1. That was the last one that we had confirmed working, because again, until we have automated testing, it's all tested by hand.
A
In this repo that we use, we spin up a collector, but more importantly, we spin up a container that runs, you know, WildFly.
C
Yes. So that all is great. We have the other problem of: we just want to make sure that a very simple set of telemetry makes it to the Cloud Trace API, and we can pull that back down.
C
That's the only bit we're trying to verify in our integration test. We're assuming all this exists; the only bit we want to verify is that whatever configuration we give you leads to things making it into Google Cloud Trace. I'll give you an example: there was a bug where class loading somehow broke our detection of what Google Cloud resource you're running on, and, like, destroyed some of our exporters.
C
Now, that's more important for metrics than it is for traces, because we have this resource semantic for metrics that matters. But our integration test is specifically: did our exporter get instantiated correctly, and is data then making it into Google? All this other stuff is great and super useful, but wouldn't be leverageable.
C
I got it. And we are, I think we are, using one of these, possibly, in some of our prototypes. I think.
A
Because I know we have a similar set of smoke tests, but they're not going all the way to the cloud. It's more to verify our exporter is sending the right data that we expect, that our backend expects. We have our own smoke test harness from, you know, a few years back, and I'm hoping at some point to reuse this smoke test harness for our exporter end-to-end tests. But end-to-end doesn't mean the same thing for us, does it? Our tests don't go all the way to the cloud and back.
C
Okay, so I'm going to refine my question. We have a Docker harness, or Docker container, that runs a thing that looks like Google Cloud Tracing or Google Cloud Monitoring, right? And we have internal-code-only hooks to talk to something that's not our public API endpoint, which we don't expose to users as config today. If I were to use these smoke tests, and I were to expose that configuration as a public config item that I just ask people politely not to ever use, because it'll break you terribly... can I?
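The pattern being proposed here, a default public endpoint plus a deliberately unsupported override that only the smoke-test harness sets, might look roughly like the sketch below. The property name and default URL are hypothetical, not the exporter's real configuration keys:

```java
import java.util.Map;

// Sketch of an endpoint override reserved for smoke tests. The property
// name "test.only.endpoint.override" and the default URL are made up for
// illustration; they are not real configuration surface.
public final class EndpointConfig {
  static final String DEFAULT_ENDPOINT = "https://cloudtrace.googleapis.com";

  public static String resolveEndpoint(Map<String, String> properties) {
    // The "please never set this" escape hatch the test harness uses to
    // point the exporter at a local fake instead of the public API.
    String override = properties.get("test.only.endpoint.override");
    return (override == null || override.isBlank()) ? DEFAULT_ENDPOINT : override;
  }
}
```

Exposing it as a normal config item, while documenting it as unsupported, is exactly the trade-off being asked about.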
A
Yeah. And if you need some changes to this to support that better, I mean, I think that's a great use case, and I would like to also reuse these at some point in that way.
C
Okay, yeah. One question for you: we're working on resource detection within Java. I think I might have mentioned that resource is super, super, super critical for us. So we're trying to make sure that we detect the right set of attributes, that our metadata servers are visible, and that if, for example, you're running on Verizon, you don't accidentally think you're in GKE, which is a legit bug that happened in OpenCensus.
C
Yeah, DNS squatting is a problem, apparently. Just FYI. So we have a set of integration tests that spin up Google infrastructure, and we have that running as a Cloud Build, where it's charging Google for this, right, as opposed to CNCF or, you know, the GitHub organization. And we're planning to do all of our resource detection in that: including the Java-based resource detection, which would be in this external repository, tested via this mechanism, and, like, charged there.
C
How are you doing resource detection now for other major, you know, different cloud infrastructure, like on-prem kinds of things?
A
I'm not sure. What do we have, John? I don't think we have any cloud detection resource providers. I know we have a few resource providers that get automatically configured via the SDK autoconfigure module.
C
What I'm asking is: how are those being tested? Are you running an actual EC2 instance, and do you have an integration test that's testing that? What's the mechanism for doing that? Like I said, right now we set up a Google Cloud project that bills Google, which tests these resource detectors that we're working on, and we're working on contributing that. I'm curious: for this Amazon one, are you spinning up code in Amazon that actually runs this detector and makes sure that it's working against production?
C
Are there integration tests, or not? Just curious.
A
Very briefly, I'm not seeing integration tests, I'm just seeing mock stuff. If you're available today at six o'clock Pacific, we meet with Anuraag, and, I mean... they have a repo, aws...
G
Awesome, yeah. We don't have anything like that. We'd have to ask Anuraag about it; Amazon may have it in their own repository, but we definitely do not have anything like that in the... okay.
C
So the reason we've kept our resource detection external so far is so that we could provide that as part of it, and so that we don't charge OTel. But if there's a mechanism for us to contribute that back in, we could do that, if that's how you'd like to have things get done. That's one of the things we've been working on internally: trying to get that all up and running.
G
I think, and this is my personal opinion, that having those resource detectors in the SDK extensions seems totally fine. We won't have the integration test; it won't make sense to have it there, just because we don't have the infrastructure to, you know, talk to Google, et cetera. But if it's there, you can still have your own integration test to make sure it's still working on your side, though.
C
...particular thing. That's fair, and I don't know if I can do that against OpenTelemetry, just from a policy standpoint. I've been trying to figure that crap out; it's been fun. So, I guess, the second question is: if we provide a resource detector mechanism, can we put something into that contrib that basically takes a dependency on this, so that we have the resource detection with the integration test kind of verifying it, and we still get...
G
This is getting into one of those really super squishy things about vendor- and cloud-specific stuff. There's wording in the spec that the Java repository is probably in violation of, given the fact that we do have some AWS stuff in there that maybe we're not supposed to. But one of our maintainers also works for AWS, so we've been, like, you know... he's working on it. If he goes away, we'll probably dump it, but, you know, yeah.
C
Well, we have two major components, basically SDK extensions, that are not just an exporter. One is the resource detection: detecting GKE, GCE, Google Cloud Functions, that sort of thing. And the second thing is actually Cloud Trace propagation. We have this deprecated format from prior to W3C, where it's, like, x-cloud-trace-context, with its own parsing, and it's unique in its own special way. So I have a...
C
I have a prototype implementation of that that I'm trying to refine and harden, and then contribute back in as well. But that's also one that we'd ideally like to have an integration test for, and it really only affects Google Cloud users, because it's only things that come through our load balancer that get this.
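For reference, the legacy header's documented shape is `TRACE_ID/SPAN_ID;o=OPTIONS`, where the trace ID is 32 hex characters but the span ID is a decimal unsigned 64-bit integer, which is a big part of what makes it "unique in its own special way" compared to W3C traceparent. A minimal parsing sketch, not the prototype being described, with a made-up class name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of parsing the legacy x-cloud-trace-context header, assuming the
// documented format TRACE_ID/SPAN_ID;o=OPTIONS: a 32-hex-char trace id,
// a *decimal* unsigned 64-bit span id, and an optional o=0/1 sampled flag.
public final class CloudTraceContextParser {
  private static final Pattern FORMAT =
      Pattern.compile("^([0-9a-f]{32})/(\\d{1,20})(?:;o=([01]))?$");

  /** Returns {traceIdHex, spanIdHex, sampledFlag}, or null if invalid. */
  public static String[] parse(String header) {
    Matcher m = FORMAT.matcher(header);
    if (!m.matches()) {
      return null; // invalid header: ignore rather than throw on the hot path
    }
    // Convert the decimal span id to the 16-char hex form OpenTelemetry uses.
    long spanId = Long.parseUnsignedLong(m.group(2));
    String sampled = "1".equals(m.group(3)) ? "1" : "0";
    return new String[] {m.group(1), String.format("%016x", spanId), sampled};
  }
}
```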
C
I mean, from an OpenTelemetry SDK versus contrib standpoint, I don't care where it goes. I have two concerns: one is visibility, and the second is that I actually do have this end-to-end integration test to prevent breakages, both on my end and from upstream open source contributions, because, yeah, we've seen some interesting infrastructure things.
G
So if you include that module and you have autoconfigure on, you get everything, unless you specifically turn some things off via the environment variable. I think the more resource detectors are in there, the more unwieldy it gets, and potentially the more it will impact startup time, especially if these things are having to reach out and do something outside the VM to see where they're running. So I think there's an open issue to switch our resource detection SPI handling to having every SPI for resources carry a name, just a string name, that can then be used to turn things on and off very easily via environment variable.
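The named-SPI idea floated here, where every provider carries a string name and an environment-style setting lists the ones to disable, could look something like this. The provider names and the settings key implied in the comment are hypothetical:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of filtering named resource providers by a comma-separated
// "disabled" list, e.g. from a hypothetical environment variable such as
// OTEL_DISABLED_RESOURCE_PROVIDERS="aws-ec2,gcp-gce".
public final class NamedProviderFilter {
  public static List<String> enabledProviders(List<String> allProviders, String disabledCsv) {
    Set<String> disabled = (disabledCsv == null || disabledCsv.isBlank())
        ? Set.of()
        : Arrays.stream(disabledCsv.split(","))
            .map(String::trim)
            .collect(Collectors.toSet());
    // Keep the original order; drop anything the user named as disabled.
    return allProviders.stream()
        .filter(name -> !disabled.contains(name))
        .collect(Collectors.toList());
  }
}
```

A name-keyed toggle like this also avoids loading detectors that probe remote metadata servers, which is the startup-time concern mentioned above.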
C
Yeah, yeah. We actually have an issue today around ordering. Right now, effectively, if you want to know if you're on Google Kubernetes Engine, you also look like you're on Google Cloud Platform, or GCE, the Compute Engine, because one kind of runs on the other. So if you run the GCE detector first, you're always going to win as GCE, whereas you might be on GKE, so we actually have to... yeah. And then, if there's a general Kubernetes detector, that would be true and return things.
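The ordering problem can be shown with a toy most-specific-first chain where the first matching detector wins; the boolean inputs here stand in for real metadata-server and Kubernetes probes, and the platform names are just labels:

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Toy illustration of detector ordering: on GKE, the GCE metadata server
// is also visible, so a GCE detector that runs first always "wins".
// Running most-specific-first (GKE before GCE) avoids that.
public final class OrderedResourceDetection {
  record Detector(String platform, BooleanSupplier matches) {}

  public static String detect(boolean onKubernetes, boolean hasGceMetadata) {
    List<Detector> mostSpecificFirst = List.of(
        new Detector("gke", () -> onKubernetes && hasGceMetadata),
        new Detector("gce", () -> hasGceMetadata));
    for (Detector d : mostSpecificFirst) {
      if (d.matches().getAsBoolean()) {
        return d.platform(); // first match wins
      }
    }
    return "unknown";
  }
}
```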
C
If the resource detector changes today, that's not cool, so you have to use our version. That's one reason I'm bundling the exporter and the resource detector together in the same version: to make sure that it's easier for customers to understand which one is which, and how to align the two. So, like, there's a bunch of reasons that we're separate now, and I don't like any of them, so to the extent we can fix it, that'd be ideal.
G
This is a little bit of a tricky problem. We could break things out into, like, 25 different separate little resource detector modules, and that's not going to be a great user experience either. So yeah, I think this is a tricky problem. I did see the OTEP on it; I didn't dig in super deep, but yeah, this is definitely an issue that we don't have a great solution to.
E
But in regards to location and integration tests, I think that having your specific components in the contrib repo will actually be easier, because in the contrib repo it is acknowledged that some parts can be under the, like, under the power of external developers, which means that you are free to do whatever you like, including using whatever infrastructure you like for integration testing.
C
Yeah, the problem is actually more on my end. If we go into contrib, I mean, I run this gcb run command to, like, run a Google Cloud Build on Google infrastructure, and if I have not inspected every ounce of code in that repository, I'm liable. And I can't do that for contrib; it's too big, there's too much in there, right?
C
If I'm going to run it in this manner... and that's the thing: I can't do that for contrib. If it was just core, I'm okay with that; that's a tax I can take. But for contrib there's way too much in there, and way too much potential, kind of, differentiating things, that I don't think I can take that on as something that I would do. Again, that's another reason why we're remote right now.
C
And I made a few comments on it, yeah.
C
I mean, so the real question is: will users... I always ask the inverse question: where do users go to find components?
C
So
if
we
want
to
make
them
use
the
registry
and
we're
successful
in
doing
that,
great,
my
fear
is
that
we
won't
be
able
to
make
that
a
success.
C
But
that's
you
know
so
you
need
you
need
another
thing
today.
I
think
people
go
to
the
open,
telemetry
repositories,
the
contrib
repo
and
the
instrumentation
repo
to
find
things.
I
don't
think
they're
going
to
the
the
registry
today,
based
on
what
we
see
and,
for
example,
someone
coming
to
the
coming
and
asking
trask
like
hey,
where's,
the
gcp
exporters
right
and
then
him
answering
great
wait.
That's.
A
That's how I found this, yeah. I think that's a good example, because I did find your repo via the registry. I went and searched the registry, yeah.
C
Exactly. So, like, where are users looking, and making sure that we have visibility there? For example, what we've done with the collector contrib is: we actually have a collector contrib exporter that's called Google Cloud, and there's a Stackdriver one that points to Google Cloud, because, by the way, we rebranded Stackdriver. Anyway, we have a Google Cloud exporter in the collector contrib repo that consumes a dependency from this google operations thing, and that gives us the freedom of: I can meet all of these, like, dumb requirements.
C
What I don't want to do is cause a massive amount of friction and pain for everybody here. I don't want to slow you down; I don't want to cause issues based on junk I have to deal with, right? So that's why I kind of wanted to ask before doing anything.
A
Yeah, because it is... I mean, maybe at some point we will. We are planning to put more stuff into contrib from the instrumentation side, so maybe at that point we would, you know, consider breaking out, like, both of them.
C
Okay. Are you exporting metrics as well now, from the Java auto-instrumentation? I think I saw some PRs go through that did metrics; just curious.
A
Okay, so anything that implements HTTP client: we don't automatically register the metrics, but we have a component.
A
...could add it into the instrumenters easily, yeah. So, Josh just asked what an instrumenter was: we're currently going through this massive re-refactoring of all the instrumentations into this new Instrumenter API, which is designed around the idea of supporting both tracing and metrics.
A
So it's coming, but yeah, very, very little so far.
C
Okay, cool. I was just curious about how you're dealing with the instability in the API around metrics, or the potential instability, as things change. Well...
G
It's
going
to
be
unstable
because
we're
about
to
completely
change
it
with
the
new
the
new
api
direction.
A
So
I'm
the
instrumenter,
this
new
instrumenter
api
doesn't
expose
very
much
there's
only.
I
think
one
are
two
methods
that
expose
that
that
have
to
expose
the
metrics
like
metrics
provider
or
something
and
that's
marked
with
an
unstable
annotation,
so
that
we
are
planning
the
new
instrument
or
api
is
not
stable
yet,
but
we
are
planning
to
make
it
stable
before
metrics
is
stable
and
leave
that
unstable
annotation
just
on
that
piece,
and
then
we
can
hide
the
instability
behind
that.
As
far
as
you
know,
to
at
least
users
cool.
C
Yeah,
what
I
I
we're
trying
to
prioritize
different
components
that
we
work
on
and
specifically
we
have
like.
We
have
that
prototype
metrics
exporter
and
I
know
of
clients
trent
using
auto
instrumentation
exporting
metrics
and
as
long
as
we
aren't
doing,
histograms
everything's
gravy
for
google.
C
But
I'm
trying
to
understand
when,
when
we
should
prioritize
working
on
our
exporter,
because
I'd
like
to
wait
for
the
churn
in
api
sdk
to
kind
of
have
passed
before
we
do
our
exporter.
Just
because
we
have
some
limited
head
count
around
effectively.
I'm
the
only
one
doing
java
right
now-
and
I
am
also
doing
way
too
many
other
things
so.
G
Well, I guess right now, unless somebody specifically configures a view with histogram aggregation, nothing defaults to histograms at all. On the output side, we get summaries, but not histograms. Really? Wait, you're making summaries? Yeah, sorry, like for latency.
C
Wow,
okay,
the
the
assumption
of
the
data
model
was
summary,
was
just
like
a
legacy
only
for
prometheus
thing
we
haven't,
I
mean
we
haven't,
touched
metrics
implementation
in
java
in
like
six
months.
So,
oh
that
that's
fair!
No!
That's
I
I'm
just
I
I
well
okay.
I
need
to
change
that
opinion.
Then,
okay,
cool.
G
For now, although, I mean, I assume that's not going to change. Although I don't even know what the default aggregation is supposed to be for the histogram instrument. Has that been specified? I don't think that is in the SDK specification yet.
C
Oh,
you
mean
does
like
bucket
sizes
and
that
sort
of
thing
or
what
does
the
default
aggregation
for
histogram
is
histogram?
Is
it
it's
not?
I
don't
think
that's
in
the
sdk
spec
yet
right.
Oh.
C
Oh
from
this,
from
instrument
to
metric
stream,
yeah
gotcha
right
gotcha.
G
I
mean
okay,
I
assume
it's
going
to
be
histogram,
but
I
don't
know
like
what
are
the
default
bucket
sizes?
I
don't
know
how
many
of
that
stuff's
going
to
work.
I
mean
that's
one
of
the
reasons
why
I
think
summary
was
chosen
because,
like
it's
min
max
some
count
like
there's
nothing
to
configure
there.
It's
just
like
you
get
what
you
get
right.
Well,
you
can
do
percentiles
too,
you
could,
but
you
know,
do
you
want
what
kind
of
percentiles
do
you
want?
What
values
do
you
want?
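The configuration asymmetry being described, where a summary falls straight out of the recorded values while a histogram needs bucket boundaries chosen up front, fits in a few lines. This is illustrative only, not the SDK's actual aggregators:

```java
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

// Summary vs histogram aggregation in miniature: summarize() needs no
// configuration at all, while histogram() cannot run without a bucket
// boundary choice, which is exactly what is unspecified above.
public final class SummaryVsHistogram {
  /** Returns {min, max, sum, count}; nothing to configure. */
  public static double[] summarize(double... latenciesMs) {
    DoubleSummaryStatistics s = DoubleStream.of(latenciesMs).summaryStatistics();
    return new double[] {s.getMin(), s.getMax(), s.getSum(), s.getCount()};
  }

  /** Bucket i counts values <= boundaries[i]; the last bucket is overflow. */
  public static long[] histogram(double[] boundaries, double... latenciesMs) {
    long[] counts = new long[boundaries.length + 1];
    for (double v : latenciesMs) {
      int i = 0;
      while (i < boundaries.length && v > boundaries[i]) i++;
      counts[i]++;
    }
    return counts;
  }
}
```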
C
Okay, so from the standpoint for us, in terms of stability and contributions: we have a trace exporter that is now fully integration tested and stable; people can take a dependency on the auto version of that for the jar. The auto agent is still... we don't have any kind of end-to-end integration test on that; we'll work on it. Thank you for the pointer; that'll help a lot. Our resource detectors are kind of in a little bit of a prototype phase right now.
C
I
think
their
efficiency
is
not
what
I
would
like
it
to
be,
but
they
work
we'll
we'll
kind
of
clean
that
up
on
our
end,
get
them.
Integration
tested,
figure
out
how
to
contribute
those
back
and
then
the
last
thing
is
around
a
metrics
exporter.
I
mostly
I'm
just
looking
for
a
timeline
so
like
the
resource
detection
stuff.
That
spi
is
that
marked
as
stable,
yet
it
wasn't
in
1.0.
G
...proposals, so I understand. I mean, if we need autoconfigure to be stable sooner: the dependency on semantic conventions is very small. It's literally service name; I don't think there's any other dependency except service name. But we would have to split the metrics autoconfigure out, the SPI for that would need to be split out, if we wanted to mark it stable.
A
I would use... I use Checker Framework, and for certain annotations, like Nullable, it essentially will treat it as a different type, which is very cool. But yeah, yeah.
C
Anyway, sorry. Randy, thank you. Okay, so from a stability standpoint, it sounds like autoconfigure is not quite there yet, so we're going to continue to call ours alpha as well: any autoconfigure stuff we provide for tracing. The resource detection stuff, if we get it into autoconfigure, would also be in this kind of unstable format. And then metrics...
G
Today... I mean, I think somebody just needs to start doing the prototype, taking the API specification that has been written now, like, anew. And probably, and this is a decision that Anuraag and I, I think, are on the same page with, we're just going to take the existing metrics and just break everything and rename everything. The other option is to completely start from scratch in a new module with a name like "new metrics" or something horrible, which we don't really want to do.
G
That's, like, too much. I'm not that young. But I think the main thing that needs to happen, the first thing that needs to happen, is we need to go and rename all the instruments based on the new names. So that's kind of step one. And then there's an SDK... I mean, I would say, if you want to work on this, I'm happy to sit down and chat about what the current metrics SDK and APIs look like, if you're interested.
G
You know it as well as anyone, then. Probably... well, I mean, literally as well as the user, sure. Well, I mean, maybe... Bogdan probably knows it better, and I at least can stumble my way through the code and understand structurally how things are put together, anyway. If you were interested in picking that up and working on that, that would be amazing. Okay, but yeah, all the goals of the prototypes, like...
A
All right, we have one other topic, I'm guessing. John?
G
Yeah, I just want to call out: I think our monthly release cadence comes up next week, so we'll probably plan for a release mid or end of next week. I don't know; we don't have a huge amount of changes. Oh, but, except for Nikita's amazing performance fix. I want...
G
I mean, I understand. I understand why. We are allocating a string, a string builder, a string buffer, and a couple of, you know, a couple of extra strings, for every single character in every trace ID and every span ID.
G
That's a lot of extra garbage just being flung around.
G
You should open a HotSpot bug of, like, "hey, can you optimize this away?" I was actually kind of surprised that it wasn't... I was actually super surprised. I was wondering whether more modern JVMs, more modern HotSpot versions, might actually realize they could eliminate this, because it was never used by anything. In the normal path, these strings were never used; they were just an error message if you happened to send a character that wasn't valid, and that was never used.
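The shape of the fix being described, validating IDs with plain range checks instead of building per-character strings and error messages on the hot path, can be sketched like this. This is an illustration of the technique, not the actual patch:

```java
// Allocation-free validation of a lowercase-hex ID (trace ids are 32
// chars, span ids 16). No StringBuilder, no formatted error string, and
// no per-character objects: just primitive comparisons on the hot path.
public final class AllocationFreeHexCheck {
  public static boolean isValidLowercaseHex(CharSequence id, int expectedLength) {
    if (id == null || id.length() != expectedLength) {
      return false;
    }
    for (int i = 0; i < id.length(); i++) {
      char c = id.charAt(i);
      boolean hex = (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f');
      if (!hex) {
        return false; // caller decides how (and whether) to report it
      }
    }
    return true;
  }
}
```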
E
Anyway, wait: if you plan to release the SDK next week, I probably will try to see if I can do a couple more small changes in the performance field.
A
Cool. Then I don't think we need to do... we can review here; everybody's pretty much up to date, well, except Josh. Sometimes we do the week in review: we'll go through the PRs from the last week, just to keep more people up to date on what's going on.
A
But yeah, the big one: yes, Nikita. Nikita, take your victory lap; that was the big one this week. So, cool. Yeah, thanks for showing up, Josh, great to have you.
C
Yeah, thanks for having me. Sorry for dominating the conversation for so long. I need to come more often, though. Thank you.
A
Jason, yeah, better late than never.
A
All right then. I can't do math, but what's 9:55 minus 9:48? Seven minutes back, all right. See y'all next time.