From YouTube: 2021-10-29 meeting
Description
No description was provided for this meeting.
A
It is convenient to have Pacific time zone approvers.

F
It never works for me, which is why I just want to get rid of time zones altogether. We can all just live on Zulu time and, you know, wake up when it's the right time to wake up.
F
I'm not sure I agree with that, John. I can adjust, it's all good. Or you could use that Swatch metric time thing they introduced in the late 90s or early 2000s. I especially liked that. It was a thousand ticks a day, centered in Grenoble or someplace in Switzerland. I liked it because I got to work at Oracle at 666, which was generally my arrival time.
A
Does Java date/time have a conversion to, what did you call it, Swatch time?
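For reference, java.time has no built-in Swatch conversion, but a minimal sketch of computing Swatch ".beats" (1000 per day, anchored to UTC+1, no daylight savings) might look like this; the class and method names are illustrative:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class SwatchTime {
  // Swatch Internet Time: 1000 ".beats" per day, each 86.4 seconds,
  // anchored to Biel Mean Time (UTC+1).
  static int beats(ZonedDateTime instant) {
    ZonedDateTime bmt = instant.withZoneSameInstant(ZoneOffset.ofHours(1));
    int secondOfDay = bmt.toLocalTime().toSecondOfDay();
    return (int) (secondOfDay / 86.4) % 1000;
  }

  public static void main(String[] args) {
    System.out.printf("@%03d%n", beats(ZonedDateTime.now()));
  }
}
```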
A
I saw your comment on a bug that made sense, from spi, s-p-i-e-net.

A
Oh yeah, but we could put that in the... Like, for library instrumentations, we could have an optimized artifact that you pull in.
A
John says that, from his perspective, we're good waiting until 1.8, which would be my preference, to hopefully just get it solved right. And we'll see if anybody opens an issue or reports it. Otherwise, yeah.
A
Oh yeah, so Lori was mentioning that the weak hashmap we need is actually a weak identity hash map. Now that I'm thinking about it, his argument was from a Java agent perspective: this is like a best practice for Java agents when you're putting random user classes in as your keys, because you don't want to rely on the user's equals and hashCode.
A
Bounded plus weak. But he said even in just the weak case, weak alone, the weak identity hash map is important from the agent perspective, to not rely on equals and hashCode. Yeah.
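A rough illustration of the "weak identity" idea being described (a hypothetical class, not the actual agent code): keys are held weakly and compared only by reference identity, so the user's equals and hashCode are never invoked.

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a weak-identity map. A real implementation would also drain a
// ReferenceQueue to expunge entries whose keys have been garbage collected.
final class WeakIdentityMap<K, V> {
  private final Map<IdentityRef<K>, V> map = new ConcurrentHashMap<>();

  void put(K key, V value) { map.put(new IdentityRef<>(key), value); }

  V get(K key) { return map.get(new IdentityRef<>(key)); }

  private static final class IdentityRef<K> extends WeakReference<K> {
    private final int hash;

    IdentityRef(K referent) {
      super(referent);
      // Identity hash, never the user's hashCode().
      this.hash = System.identityHashCode(referent);
    }

    @Override public int hashCode() { return hash; }

    @Override public boolean equals(Object other) {
      if (!(other instanceof IdentityRef)) {
        return false;
      }
      Object a = get();
      Object b = ((IdentityRef<?>) other).get();
      // Reference identity only, never the user's equals().
      return a != null && a == b;
    }
  }
}
```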
A
I think, well, so we do. Like, say the virtual field doesn't work and we fall back to a weak hashmap for propagating context in Runnables.
A
Hello, hey, hello. Knowing that you're a bit later, we wanted to check in. It would be great to have you in these meetings, but I know the time is not great, Anuraag. With daylight savings coming, though, we could potentially move it an hour earlier.
H
I know, at this time it's fine for me. It's once a week and it's outside of business hours, but you know, it is what it is. So yeah, I'll try to make these going forward. Thanks, yeah.
A
Yeah, we've tried to. We have staggered meetings. We also have one on Tuesday, which is our Monday night, which is meant to overlap between Anuraag in Japan, Estonia, Poland, and Pacific. I think that would be like midnight or 1 a.m. for you all on the Central and East Coast.
E
What they were, yeah, okay. I guess this is recorded, so I don't remember how much it was actually, but yeah, there was a little. Anyway, related: we're going to talk about the roadmap later, and that's, yeah.
E
Oh yeah, cool. So we're going to try to start a new semantic convention committee focused on garbage collection semantic conventions for metrics.

E
So you know how there are the instrumentation groups doing, like, HTTP and messaging. The folks who have actually been doing the JFR stuff came up with the idea, pulled in Aaron Schnabel, and then we were talking about it and got a list of who from Java, like GC, we should try to pull in, and there are big names and fun people to pull in. So we're going to try to get a group together.
E
That is hopefully not just OpenTelemetry maintainers; we're going to try to get other metric groups, specifically pulling from Micrometer, and we also want to get someone from Prometheus too. The goal is to get a set of metric naming and attribute conventions in OTel that we think are going to be kind of standard across monitoring in general, and then make sure that OTel supports it first class, that sort of thing. So lots of cool discussions in the 30 minutes we spent deciding that it's worth having giant discussions now.

E
So yeah, I'm going to get a calendar proposal PR set up. The hard problem here is going to be: when should we have it? I'm probably the worst offender there, because I'm East Coast and I think all the important people are not, but I could be wrong. I don't know, we'll see. Well, Ben's in Spain, right?

E
Yeah, I think we're going to focus on really good documentation and bug communication and see if we can make up for not having as many live meetings. But there'll be a live meeting too, to kind of get things kicked off and meet each other.
E
If we can, that's a great question. I've had discussions: one of the guys on my team used to work on the GC for Go, and knowing how that thing works, I just don't want to do Go and Java and other garbage collected languages together, because the differential is really super high.
E
I think we already have a big risk trying to define a GC kind of thing. If we define Java semantic conventions and then we look at Go and define a higher-level set of semantic conventions at some point, maybe. I just don't think there's enough understanding there yet. By the way, have you done Go GC tuning evaluations? See, one of the things with monitoring garbage collectors:

E
You need to be able to tune it; observability without action is kind of hard. And from what I understand, in Go there are like two common things people do: they make a ballast for the collector, and they try to avoid allocations to begin with. But otherwise they just assume it's doing a good job.

E
That's what I've seen. I don't know if that's fully accurate in terms of how people monitor Go applications overall, but anyway, I'm ranting about Go. Then go apply that to Python, go apply that to JavaScript, other GC'd languages, and yeah, .NET. I would like to limit the scope, because we already have a lot of people that will have opinions in a room just with Java, so let's nail it and then we'll move on.
B
Yeah, actually, within a month, I think, I had a random email sent. I guess there's an AWS team working on generational Shenandoah, a new GC targeted for a future JVM, I think. And the question was specifically, should we be using OpenTelemetry to collect metrics for this GC? And I was like, probably you should be using MXBeans, like, they've existed since the beginning of time. But obviously they do have OpenTelemetry in their mind, so. And JFR.
E
Yeah, we actually are trying to get at least one person who's actively working on Shenandoah into this committee, because we think that's going to be really important going forward: that people writing the new garbage collectors understand observability as a use case and provide hooks. It doesn't matter where that hook is, just please provide good ones. Yeah.
F
It would be super awesome if you could get Marcus Hirt involved. Also, you know, prying somebody from Datadog into OpenTelemetry would be very nice.
E
Yeah, I would link them with Ben Evans, and then I'm planning to send the PR with all the coordination stuff in my morning tomorrow. I'll put a link in, I guess, the public Java channel and the public Java instrumentation channel, and maybe the instrumentation channel in general too, and then everyone can comment on there about, like, time zones.
E
I will propose a time, though. Do we have a time that we like? Because it'd be nice to have at least one of the maintainers there. But again, this is not necessarily a meeting for the maintainers; we're trying to get a different set of people who are more like the instrumentation authors, to make sure they can come. But it'd be good to have at least one representative there, depending on the time zone.
E
I might propose an every-other-time thing, then, so that we can have a working group with maybe someone who can attend both and pass off, the way we do for the metrics SDK, where, what is it, the 4pm Pacific one I can never attend, that's always terrible for me, but the noon one I can, and then we just kind of pass off between communities, you know.
A
Yeah, if Ben is sort of leading that, then there would probably be Europe-Japan and Europe-US.

A
Misdiagnosed, but thanks to Josh, got it all figured out.
E
No, no, no, the net.peer, yeah. So we have to change some of the things in the spec, but net.peer.ip, net.peer.port.
E
I don't think that's part of the spec for server-side metrics. We're using the same filter for server and client, and that's one of the problems there, because net.peer.port on the server and net.peer.ip on the server are not stable labels. So if you look at the attributes, see how it says client and server? They're different between one and the other.
E
Yeah, so that's actually the issue there. But I'm also not a hundred percent positive we should be using net.peer.port and ip on the client. We can leave it there because it's there, but all I'm going to do is make a new filter, one for client, one for server, and then go update the instrumentation.
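A minimal sketch of what that split might look like (attribute key names are from the 2021-era HTTP semantic conventions; the class and method names are illustrative, not the actual instrumentation code): the server-side filter simply never keeps the caller's net.peer.* attributes.

```java
import java.util.Set;
import java.util.function.Predicate;

// Illustrative only: separate attribute filters for HTTP client vs. server metrics,
// so server metrics never keep the caller's net.peer.ip / net.peer.port, whose
// cardinality is unbounded.
final class HttpMetricAttributeFilters {
  private static final Set<String> CLIENT_KEYS =
      Set.of("http.method", "http.status_code", "net.peer.name", "net.peer.port");

  private static final Set<String> SERVER_KEYS =
      Set.of("http.method", "http.status_code", "http.scheme", "http.host");

  static Predicate<String> clientFilter() {
    return CLIENT_KEYS::contains;
  }

  static Predicate<String> serverFilter() {
    return SERVER_KEYS::contains;
  }
}
```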
E
Frequently, yeah. I don't think we should have that in the metric spec at all. I think we need to go re-evaluate that. I need to do some root cause and find out if that was in the original OTel, sorry, OpenCensus spec as well, or if it was added because people just took everything from tracing and said let's keep it all and make it work. If it was that second thing, then I think it's probably fine to remove.
E
We're okay, yeah. But right now I have a server running that just has a cron job that wakes up and hits it, and it's only one client and only one server, and I'm up to about 2,000 metric streams because of recycled IP addresses and different ports, right.
E
Garbage collection for that sort of caseload, yeah. We also talked about high cardinality, and I think there are a few actionable items to do here. What I want to do is finish the public-facing API and some error handling things around view configuration.
E
But if someone has time to pick up some of the fixes: John had a bunch of good ideas in the original high cardinality PR. I think the bare minimum we should do is in the, what the hell is that thing called, either the delta metric storage or the temporal metric storage: you can have something that just checks the size of the map, and when it's over a particular size, we call some kind of "hey, we're in a high cardinality situation, do something useful," and it can go through and garbage collect really stale points that haven't been updated in a long time, if we keep track of that, which we'd have to actually add a little bit of for cumulative metrics. Or the second thing is, you could actually go and update the label filters for the actual storage, so that you get rid of whatever label has the highest cardinality.
E
Because you can actually go look at your label sets. That's a bit more expensive, but I think the TL;DR is: we just need something so that every time collection is called, we check the cardinality of the metric at that point in time, decide if it's too high, and then do some kind of cleanup, garbage collection, memory limit thing. That'd be my suggestion. I think that's pretty simple to do and would have a big benefit.
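A minimal sketch of that "check at collection time" idea (all names hypothetical, not the actual SDK classes): every attribute set maps to a point that remembers when it was last updated, and collection evicts stale points once the map grows past a limit.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: bounded metric storage that garbage collects stale streams when the
// number of attribute sets (cardinality) exceeds a limit at collection time.
final class BoundedMetricStorage {
  private static final int MAX_CARDINALITY = 2000;
  private static final long STALE_NANOS = 5L * 60 * 1_000_000_000L; // 5 minutes

  private final Map<AttributesKey, Point> points = new ConcurrentHashMap<>();

  void record(AttributesKey attributes, double value, long nowNanos) {
    points.computeIfAbsent(attributes, k -> new Point()).update(value, nowNanos);
  }

  void collect(long nowNanos) {
    if (points.size() > MAX_CARDINALITY) {
      // "Hey, we're in a high cardinality situation": warn, then GC stale streams.
      System.err.println("High cardinality detected: " + points.size() + " streams");
      points.values().removeIf(p -> nowNanos - p.lastUpdatedNanos > STALE_NANOS);
    }
    // ... hand the remaining points to the exporter ...
  }

  static final class Point {
    volatile double value;
    volatile long lastUpdatedNanos;

    void update(double v, long nowNanos) {
      value = v;
      lastUpdatedNanos = nowNanos;
    }
  }

  // Placeholder for an immutable attribute-set key with proper equals/hashCode.
  static final class AttributesKey {}
}
```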
E
Sure, I mean, I think we can be more aggressive than just that, but yeah, or something. There's also: we do have the ability to know when we're trying to allocate a new cardinality, a new data stream, that doesn't fit in the attribute concurrent map for our delta accumulators, so we can actually detect if we're going over a limit.
E
It's a little bit tricky to do in that concurrent setting, but instead of having a watchdog that detects this after the fact and goes and cleans it up, we could do it ahead of time and do like an enforcer thing. I think that's going to take a bit of a hit in terms of performance; the after-the-fact cleanup is a little bit more performant. We kind of allow ourselves to grow a little bit for one collection cycle and then shrink back down.
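And a sketch of the "enforcer" alternative (again, hypothetical names): check the limit before allocating a new stream instead of cleaning up afterwards. As noted, the size check and the insert are not atomic here, which is part of why this is tricky in a concurrent setting.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: refuse to allocate a new accumulator once the attribute map is
// at its limit, routing the excess into a shared overflow handle instead.
final class EnforcingDeltaStorage<K, H> {
  private static final int MAX_STREAMS = 2000;

  private final Map<K, H> handles = new ConcurrentHashMap<>();
  private final H overflowHandle;

  EnforcingDeltaStorage(H overflowHandle) {
    this.overflowHandle = overflowHandle;
  }

  H handleFor(K attributes, Function<K, H> factory) {
    H existing = handles.get(attributes);
    if (existing != null) {
      return existing;
    }
    if (handles.size() >= MAX_STREAMS) {
      // Over the limit: record into the overflow stream and (ideally) warn once.
      return overflowHandle;
    }
    // Note: the size check above and this insert are not atomic, so the map can
    // briefly exceed the limit under concurrency.
    return handles.computeIfAbsent(attributes, factory);
  }
}
```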
E
I
I
am
suggesting,
though,
that
I
really
do
think
we
need
to
permanently
limit
that
not
permanently,
but
but
but
for
a
very
long
time
limit
that
cardinality
problem.
I
especially
with
with
the
kind
of
cardinality
issues
that
I've
seen
and
what
what
we
would
expect
if
you're
having
a
high
cardinality
problem.
It's
actually
your
instrumentation
is
the
issue
right.
That's
not
going
to
get
fixed.
E
It's
it's
not
necessarily
like
the
data
shape,
it's
actually
the
instrumentation
itself.
So
I
I
think
it's
better
for
us
to
say
we
have
this
thing
that
limits
the
cardinality
it's
in
operation.
Here's
how
it
works,
we're
going
to
log
to
you
and
send
you
an
error
when
it
happens
in
some
fashion,
so
you
can
go
clean
up
your
instrumentation,
but
this
is
literally
just
so.
You
get
metrics
and
you
should
consider
it
a
bug
in
your
instrumentation
that
you
need
to
go
fix.
A
What I was thinking, I'm trying to think from the agent, from the auto instrumentation: what's our default? I liked the idea of being conservative by default, but if we had some kind of protection like that, I guess that's the point, then we could be more confident to collect net.peer.port for clients.
E
You could, yeah. So I guess what I'm suggesting is, if you did collect net.peer.ip and we detect that your dimensionality goes crazy, we would automatically start suppressing it and log a warning that this attribute is causing problems, or some kind of event or some sort of error that users could catch and then go fix it. I still don't think we should be using net.peer.ip for the client, or sorry, not for the server, and it's confusing to talk about the client and the server, but yeah.

E
When the server reports metrics, it should not be reporting the client IP address or its port. That I still think we

C
shouldn't be doing, like, ever. But that was just me missing this column here. Oh, that's fine, that's fine, yeah.
E
The other thing is, I'm hoping over time we can lean into exemplars for some of this stuff. Like, when you see a server latency metric and you're like, hey, my 99th percentile is bad, I have an exemplar that's in this range, let me go take a look at it. That lets you know which servers were causing problems, but that's not as general as letting people get more access to metrics.

E
So I think we need to do both in practice, but I do want to lean into exemplars more now that they're more mainstream. I wouldn't say they're mainstream, but they're more mainstream than they were before.

E
I was going to give you a good demo today, and then I actually can't query my back end, because I blew up my cardinality on this metric, so it actually crashes on this metric. Anyway. Unfortunately, Josh.
F
While you all were talking about important stuff, I was just looking through the code. So the aggregator handle at the moment, which there's one of per unique label or attribute set, has the hasRecordings boolean on it. It feels like we could just take that and make it a little more sophisticated, not just hasRecordings. It just gets reset every cycle, but we could keep track of.
E
Along those lines, yeah. We should probably also do this for async metrics, which is why I was thinking of doing it in the temporal storage, because that hits both of them, right. But you're right, that's another place we can put this. I was thinking the concurrent map of aggregator handles: when we go to allocate one of those, that's an option, because we can check the size, right. We could then evict.
E
Are the problem, because they're probably already being evicted, yeah. The issue is more that I still think we're getting pretty high cardinality in the delta thing too, but the temporal storage is probably the main thing we need to tackle here.
E
Yeah, unless you get everyone to use, crap, what's it called, the really cool one that stores giant JSON structures. Sorry, there's like one metric back end that handles cardinality and I like it, and I always forget its name.
E
Anyway, yeah, so high cardinality metrics: if anyone wants to take a crack at it, there's no way I'm going to get to it, probably, in the next two weeks. What I'm working on right now is removing all of those freaking ugly nulls that I didn't want to begin with, and aggregation temporality.
E
We got rid of it from the view API, so I'm gutting that out; that's step one. Step two is, I'm working on a better error message when you have view/instrument name conflicts, where it should actually write out the line of code that registered the instrument for the view. I'm not happy with it; I have to add mutability in weird places.
E
Is this sort of grabbing the stack trace when someone creates an instrument? Yeah, I was going to grab the stack trace when they create an instrument and just remember the line that created the instrument, yeah. If I can. And if I can't, we stop trying to do that ever again.
E
Might be a map of these, yeah. I have to mutate the metric storage object to record every place that registered an instrument, so that, you know, here are the five that registered this version of the instrument and here's the one that registered the different one, right, that caused the conflict. Yeah.
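A small sketch of the "remember where the instrument was registered" idea (hypothetical names; the io.opentelemetry.sdk prefix is only used here as an assumed package to skip over SDK frames): capture one caller frame at registration time and replay all of them when a conflict is reported.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch only: remembers the registration site of each instrument so a view/name
// conflict can point at the offending lines of code.
final class InstrumentRegistrations {
  private final List<String> registrationSites = new CopyOnWriteArrayList<>();

  void recordRegistration() {
    String self = InstrumentRegistrations.class.getName();
    StackWalker.getInstance()
        .walk(frames -> frames
            // Skip this class and (assumed) SDK-internal frames to find the caller.
            .filter(f -> !f.getClassName().equals(self)
                && !f.getClassName().startsWith("io.opentelemetry.sdk"))
            .findFirst())
        .ifPresent(frame -> registrationSites.add(
            frame.getClassName() + "." + frame.getMethodName()
                + "(" + frame.getFileName() + ":" + frame.getLineNumber() + ")"));
  }

  String describeConflict(String instrumentName) {
    return "Instrument '" + instrumentName + "' was registered at: "
        + String.join(", ", registrationSites);
  }
}
```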
E
That simplifies the crazy stuff I was attempting. I'll try to keep it short, but it's been too big to support.

A
It's not, in the, I mean, the recommended way is to register, bind, and then use.
E
Here's the worst offender, by the way, and I need to figure out how to fix this, because this is possibly going to be specced: asynchronous instruments should allow you to register synchronous instruments inside of them and immediately write values.

E
That is the worst. That is done because we don't have batch recording, so that was the decided workaround, or that's what Go does. It hasn't made it into the spec, and I'm basically trying not to remind anyone that that was discussed and decided in the metrics SDK meeting, because I don't want to do it. I think there are better solutions, but that's a different story.
E
But it's all-around bad, because again, with counters you're recording deltas, but with async counters you're recording cumulatives, and so it's literally the worst for users, in my opinion, and I just don't want to do it. So we are not going to implement that until we absolutely have to, to be spec compliant, if it makes it in. Unless you guys disagree and want that; let me know.
E
All we have to do to make it work is basically collect all asynchronous instruments first, then collect all synchronous instruments, and then it'd be fine. Right now we just do them in order of registration, arbitrarily. Yeah.
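As a sketch of that ordering fix (hypothetical types, not the actual SDK): collect every asynchronous storage before any synchronous one rather than iterating in registration order.

```java
import java.util.List;

// Sketch only: two-pass collection so asynchronous (callback-based) instruments run
// before synchronous ones.
final class CollectionOrder {
  static void collectAll(List<MetricStorage> storages, long epochNanos) {
    // First pass: asynchronous instruments (their callbacks may write into
    // synchronous instruments).
    for (MetricStorage storage : storages) {
      if (storage.isAsynchronous()) {
        storage.collect(epochNanos);
      }
    }
    // Second pass: synchronous instruments.
    for (MetricStorage storage : storages) {
      if (!storage.isAsynchronous()) {
        storage.collect(epochNanos);
      }
    }
  }

  interface MetricStorage {
    boolean isAsynchronous();

    void collect(long epochNanos);
  }
}
```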
E
So it's simple to fix. But from a user standpoint, the main problem I have is: asynchronous instruments record cumulative values, delta instruments do not, or sorry, synchronous instruments do not, except for histograms, where you just record any value, so it could be delta, it could be cumulative. So the only synchronous instrument we have that works correctly in an asynchronous use case, without all sorts of craziness, is histogram.
F
Back when we had batch instruments, I don't know, months ago, or batch collections of async stuff, we got a complaint, I believe from Christian Neumüller, that we weren't setting the timestamp exactly the same on all of the recordings in that batch. So that's another thing that is tricky to figure out how to actually do correctly.
F
I mean, yeah, because every time you make a call it's going to have a different timestamp. So what is the timestamp for a batch recording? We logged it, pushed back, and said this is impossible, we're just not going to try to do that, and that was kind of the end of it. But you know, that's a concern.
E
Okay, yep. So it could be like, I send, you know, queue size; I could send memory usage and CPU all at the same time, and I have some expensive syscall I make to get that. That would be one thing. Or I'm grabbing metrics from an actual physical instrument and recording them, and that physical instrument gives me just a dump of bits that I pull, and getting that dump of bits is expensive, but then I just parse it and report like 10 different things. You know.
E
It's about, so, the workaround that we gave people was to put kind of a last-updated flag and locks on their async callbacks, and just have a "check recent," you know. Then you have some state that basically checks to see if the current collection is recent enough, and if so, use all the existing values. And if it's not recent enough, then you make your expensive call and get your values. But you want to.
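A minimal sketch of that workaround (names are illustrative): the callbacks share one piece of state that refreshes an expensive reading at most once per interval and otherwise serves the cached values.

```java
import java.util.concurrent.TimeUnit;

// Sketch only: several async callbacks share one cached, expensive reading.
final class CachedReading {
  private static final long MAX_AGE_NANOS = TimeUnit.SECONDS.toNanos(10);

  private final Object lock = new Object();
  private long lastUpdatedNanos = Long.MIN_VALUE;
  private double memoryUsage;
  private double cpuUsage;

  double memoryUsage() { refreshIfStale(); return memoryUsage; }

  double cpuUsage() { refreshIfStale(); return cpuUsage; }

  private void refreshIfStale() {
    synchronized (lock) {
      long now = System.nanoTime();
      if (now - lastUpdatedNanos < MAX_AGE_NANOS) {
        return; // recent enough: reuse the existing values
      }
      // One expensive call (syscall, device read, ...) feeding several metrics.
      double[] reading = expensiveRead();
      memoryUsage = reading[0];
      cpuUsage = reading[1];
      lastUpdatedNanos = now;
    }
  }

  private double[] expensiveRead() {
    return new double[] {0.0, 0.0}; // placeholder for the real expensive call
  }
}
```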
E
Yeah, but anyway, we have to fill out the spec compliance matrix. I was planning to do that unless somebody else wants to. I don't need to do it, I want to do it, but I don't need to. If anybody else wants to fill out the spec compliance matrix, if you're comfortable with me doing that, great; if you're not, that's fine too. If I have some free time tomorrow, I might take a look at it. Okay, cool.

E
I will not do it then, cool.
E
Everyone, even internally, calls it Stackdriver. Yes, or if you're familiar with our internal names, it has another name, but either way. It's Cloud Operations: Cloud Logging, Cloud Monitoring, and Cloud Trace, with, like, links. So the exemplars were working perfectly until I got my third metric report that blew out my cardinality, and now I can't query my metrics anymore. So that is, yeah, unfortunately.
E
All those metrics, yeah, okay. And that's actually going to be an issue with Prometheus as well. So, anything else to call out? Oh yeah, I expect to have some changes in the Prometheus exporter post getting the SDK stabilized. So that's the other area where we'll see some changes going in. Nothing, hopefully, surprising, but effectively there are pieces of OpenTelemetry that need to make it into Prometheus that aren't there by default.
F
One example is the resource for the process runtime, which has the full command line in it.

F
Yeah, they're not stable, and that one, I don't think, makes a great, that doesn't make a fantastic metric dimension.
E
Did you see the OTel roadmap document? One thing I'm trying to get from every SIG, and I'm reaching out to you personally because this is the one I attend more often, is to actually get coarse-grained deliverables in a Gantt chart. We want these to be as accurate as we can make them, and we also want to aggressively be telling the community when things are stable.
E
There is a roadmap that Trask took an AI to go update; it is from November of 2020, so not that old, but still a bit stale. People keep asking: when can I adopt these things? When are things ready? What's happening? What do things look like? So the goal is to give them a roadmap, and the three signals I want are: don't look yet; start looking now, because it's about to be stable; and go adopt, right.
E
You want to mark this thing as stable, like, put it right there, and that's kind of the message we want to give people. So, like, in the State of OTel up above, if we could say, hey, the agent is stable and ready to be adopted for this number of things, great, and then we're building out this other number of things.
E
That means they're going to go take a look at it, and that's what we want, right. Because I think you are stable, definitely for tracing and this set of things. We probably don't want to tell people to go start taking these metrics yet, and that's fine, but we do want them to look at tracing, and that's the goal behind the roadmap.
E
So I just want to make sure that everyone here is kind of on board and understands the goal of this: we're going to try to improve communication with users, and we're going to try to stick to our roadmaps a little bit better, if we can. I understand, yeah. I feel like, if there's a group in OTel that I could say will tell me when something will be done and it'll be done around that time, it would definitely be this group. So yeah, okay.
E
That is totally fair, and I totally buy that, to the extent that we're trying to push on that for a stable release. I don't know if you saw this data; if you pop open the State of OTel document, and we're out of time for Anuraag, I'm really sorry, if you go down, the instrumentation SIG just updated with stable semantic conventions and when they expect them to hit. So there.
E
I think he passed it, or no. No, no, it's right there, yeah, okay. So this is what they're focused on, and if you scroll down, I think the Gantt chart is here.
F
For over two years now. That feels aggressive.

H
When they say this, do they mean semantic conventions for both metrics and traces, or just tracing? That is.
E
Just for tracing. And I'm planning to personally kind of kick off the metric bit again. With metrics we have this issue with resources, so we actually have to figure out how to deal with resources and metrics before we can do anything with semantic conventions for real there. Yeah, anyway.
F
Hey, I'm going to bail out since it's been an hour.

E
Yeah, I have to drop too, but thanks for the time, and I am trying to be more aggressive. And John, I hear what you're saying.

F
Velocity is both magnitude and direction, so maybe we just changed the direction and not the actual magnitude. That's it.