From YouTube: 2021-10-21 meeting
Description
No description was provided for this meeting.
B: The spirit! Jason, you really need to have some sort of Zoom filter that changes your head into a skull.
C: All right, let's get to the agenda. 1.7 went out this week and we've already got two, and I think (I was scanning, haven't looked at things yet) I did see that it looks like we might have another.
C: Regression, or another patch candidate. John, do you want to talk about this a little bit?
B: Yeah, so we uncovered this trying to upgrade the Splunk Android instrumentation to 1.7.
B: It appears what's happening, so, just to talk a little bit about how Android builds work: when the actual application package gets built, the Android Gradle plugin, via the d8 tool (I think it's called d8), converts all the classes in the jars to an Android-friendly format called DEX, or something like that, I don't remember exactly what it's called, and it takes into account the API level.
B: Then the build will fail; it will just refuse to actually build the application package. And it looks like in 1.7 the instrumentation API now shadows Caffeine 3 (I'm taking Lauri's word for it; I believe this is probably the case), which apparently has MethodHandle references in it, which then causes the packaging to blow up.
B: So, just brainstorming a little bit: what if there was an instrumentation API packaging, maybe named differently, that didn't shade that stuff in, but treated it as a transitive dependency that could then be excluded?
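If Caffeine were an ordinary transitive dependency rather than shaded in, a consumer could drop it on Android with a plain Gradle exclude. A minimal sketch of that idea, assuming a hypothetical non-shadowed packaging (coordinates illustrative):

```kotlin
dependencies {
    implementation("io.opentelemetry.instrumentation:opentelemetry-instrumentation-api:1.7.0") {
        // Hypothetical: only possible if Caffeine is a normal transitive
        // dependency; keeps d8 from ever seeing the Caffeine 3 bytecode.
        exclude(group = "com.github.ben-manes.caffeine", module = "caffeine")
    }
}
```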
C: Yeah, I don't have any great ideas for sure right now, but it feels like we should be able to find some way to handle this well. The problem is we brought in Caffeine 3 because of that jlink, jdk.unsupported thing.
B: Yeah, in the meantime I'm trying to see if I can work around it in this specific case by shading the instrumentation API jar into our library jar and excluding those classes specifically, but I'm still trying to figure out how to do that. Android Gradle is not the same as regular Gradle, so I'm sorting out how Shadow JAR works with it.
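The workaround being described might look roughly like this with the Shadow plugin; this is a sketch only, the relocated package path is a guess, and Android Gradle may wire the task up differently:

```kotlin
import com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar

tasks.named<ShadowJar>("shadowJar") {
    // Shade the instrumentation-api classes into our library jar,
    // but drop the relocated Caffeine 3 classes that make d8 fail.
    exclude("io/opentelemetry/instrumentation/api/internal/shaded/caffeine/**")
}
```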
B: I'm using it as an opportunity to learn a little bit more about how shading on Android works. So yeah, that's more information, not necessarily any sort of solution.

F: Are we building a completely separate Android ecosystem over time, like OpenCensus did, or are we planning to try to use all the same jars and classes?
B: I think it's definitely one of the, I guess, current stated requirements of the SDK and the APIs: that they support Android out of the box, with no modifications.
F: Right, but the instrumentation API is a bit different. Yes, absolutely. Anyway, if you need experience with hacking Gradle to create a whole new ecosystem for something: we did it in Scala, it uses underscores, it's great. I don't know if I'd recommend that, but I definitely don't recommend classifiers.
F: If you were thinking that, because that's another option here: you could have two versions of the instrumentation API, one with an Android classifier, right? So you could actually publish it twice, one for Java, one for Android; the Android one would have none of the offending classes and would be limited to libraries that are on Android.
F: The problem with using classifiers, though, is your transitive dependencies are not distinct, so it basically forces everyone to do the same trick, and you're not guaranteed that you get any transitive dependencies. Which is why, for Scala, they went with that underscore thing, where you actually have a different module.
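The two publishing shapes under discussion, written out as hypothetical Gradle coordinates; the classifier variant shares one POM (so transitive dependencies cannot differ), while a separate module carries its own:

```kotlin
dependencies {
    // Option A: same module, "android" classifier; one POM, one dependency set.
    implementation("io.opentelemetry.instrumentation:opentelemetry-instrumentation-api:1.7.0:android")

    // Option B: a distinct module (the Scala-suffix style); its own POM,
    // so its transitive dependencies can genuinely differ.
    implementation("io.opentelemetry.instrumentation:opentelemetry-instrumentation-api-android:1.7.0")
}
```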
C: Yeah, I think splitting the ecosystem might make sense for individual instrumentations, if we have to, but I hope that for the core APIs we can find a way to have the common one, because of the whole transitive dependency problem.
F: What changes with Caffeine in the API? Like, what is actually broken when you remove it?
C: We have internal caching; we use it basically instead of having our own bounded weak-map kind of a thing.
C: There was some weird problem we had with Caffeine 2, where (I don't remember why) the jlinked modules where people excluded jdk.unsupported... oh, I think because it didn't have Unsafe, and Caffeine 2 relied on Unsafe, or not Unsafe, something in jdk.unsupported. Yes.
F: Is the opposite worse? Instead of shading both into the instrumentation API, having the instrumentation API do a service lookup, and then providing the instance appropriate to the environment later as a dependency?
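The service-lookup idea could be sketched like this; all names here are hypothetical, not the real API. The API would depend only on a small cache interface, ask ServiceLoader for a provider at runtime, and fall back to a simple bounded map, so a Caffeine-backed provider could live in a separate artifact that Android builds simply omit.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.ServiceLoader;
import java.util.function.Function;

// Minimal cache surface the instrumentation API would code against.
interface Cache<K, V> {
    V computeIfAbsent(K key, Function<K, V> mappingFunction);
}

// Provider interface discovered via ServiceLoader.
interface CacheProvider {
    <K, V> Cache<K, V> newBoundedCache(int maxSize);
}

final class Caches {
    private Caches() {}

    static CacheProvider provider() {
        // First provider on the classpath wins (e.g. a Caffeine-backed one
        // on the JVM); otherwise fall back to a plain LRU map that the
        // Android toolchain has no trouble with.
        Iterator<CacheProvider> it = ServiceLoader.load(CacheProvider.class).iterator();
        return it.hasNext() ? it.next() : new LruFallbackProvider();
    }
}

final class LruFallbackProvider implements CacheProvider {
    @Override
    public <K, V> Cache<K, V> newBoundedCache(final int maxSize) {
        // Access-ordered LinkedHashMap gives least-recently-used eviction.
        final Map<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
        return map::computeIfAbsent;
    }
}
```

With this split, the DEX step never sees Caffeine bytecode unless the JVM-only provider artifact is on the classpath.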
E: But if that's not possible, if you want to do a quick hack, then I think Anuraag already did some sort of trickery of replacing subclasses inside Caffeine.
A: They are generated; I was just checking that. It's, I think, NodeFactory and LocalCacheFactory, and they both have some factory methods that are completely generated by something called JavaPoet, I think, in the Caffeine repo. You can still look at the generated code, even in our build's extracted directory inside our project. But yeah...
C: All right, any more thoughts on that? Or we'll let it filter, and hopefully we'll throw around some more ideas in the issue.
C: We could even pull Caffeine in kind of the way it is currently; we could pull it in shaded into the bootstrap class loader and do some other things.
B: I don't know if this would solve the Android problem or not. I'm researching Android and MR JARs, how things are handled.
C: Yeah, John, if you could figure this out, that sounds potentially promising, if it did work on Android.
B: We have MR JARs in the SDK right now, right? Like our carefully calculated clock. Yeah, the clock, and everything is working fine, although I don't know if the clock applies.
C: Cool. Logging updates?
C: No Jack, so we had a... John, I wanted to kind of share the discussion that we had last week with Anuraag about the logging API.
C: Do you want to talk about that, or I can. Yeah.
C: Yeah, because it is sort of driven by the instrumentation use case.
C: So currently we have a logging SDK, but we've always said we're not going to create a logging API, because who needs another logging API. And what we're realizing from the instrumentation perspective, like when we're building logging appenders for Log4j and Logback, is that it's still nice to have the separation between the API and the SDK, so that those logging appenders don't pull in the SDK; they know exactly what API they're allowed to use. And this also matters from the agent perspective.
C: So I don't know, if anybody has ideas for another name for it... because we clearly don't want to advertise this as a logging API, but maybe something like a "logging instrumentation API"? I don't know.
B: Yeah, at the moment the SDK logs module depends on sdk-all, which is really the problem, right? You'd be pulling in all of the SDK as part of your API dependencies.
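In Gradle terms the problem looks roughly like this (artifact names as of the 1.7-era alpha modules; the thin API artifact is hypothetical and did not exist at the time):

```kotlin
dependencies {
    // Today: depending on the log SDK transitively drags in the whole SDK,
    // far more than a logging appender actually needs.
    implementation("io.opentelemetry:opentelemetry-sdk-logs:1.7.0-alpha")

    // The wish: a thin, clearly bounded surface that appenders may use
    // (hypothetical artifact):
    // implementation("io.opentelemetry:opentelemetry-api-logs:...")
}
```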
F: I wanted to hijack all my logs and send them to the OTel logger, and then I wanted an exporter that dumps out JSON in the Google Cloud Logging format that gets auto-parsed, and I couldn't figure out how I should do that effectively. Like, is that a custom exporter?
F: Should I encode that in my exporter? Having tried to implement one, I have a lot of open questions, where I think what I guess I would call the SDK API might need fleshing out more than anything, and we should be very careful about what we put there. Because, holy crap, I was trying to get things working on Logback and Log4j 2, and I forgot how basically the same, and different, they are.
F: You know, where something looks like it's the same, and it just doesn't work at all the same between the two, which is infuriating. Anyway, who's driving this? Just curious.
C: Jack has been doing the most PRs around logging, and he made an attempt on the instrumentation side also, which I think sort of helped flesh out some of these ideas of, like you were saying, how kind of problematic writing a log appender is currently.
F: Gotcha. I mean, fundamentally the thing that's kind of weird is: I write config in my application for logs, and then it sort of gets things from the current auto agent. The way it is, it just felt weird that it wasn't necessarily all in the agent.
C: So, from the agent perspective, what we were thinking: eventually we will have the auto instrumentation injecting logging appenders that talk to this SDK API, which we then bridge through the bootstrap class loader and into the agent class loader, and it would go out through the logging exporter.
C: And so you would be able to configure your logging exporter, or, through the agent extension, wire in whatever you wanted from the SDK side.
F: Right, right. I guess the thing that I'm looking for is (and this might be a cloud-specific thing) on Google Cloud, you know, GKE automatically ingests logs into Google Cloud Logging, and if I'm sending them somewhere else, I don't want that to happen. So I literally don't want anything to hit standard out unless it's formatted for Cloud Logging, in which case it should go to Cloud Logging, right?
F: I wanted it to go just out of OTel. Now, again, this might be very specific to GKE and Google Cloud Run, but that's kind of the thing where I'm like: okay, how hard is this going to be to do in practice? And basically right now we just documented how to take over your Logback or Log4j config to do that, but you have to go directly at the application's Log4j and override it, not just attach, right?
F: If that's your use case. Otherwise, you know, again, there's a sensitivity to double logging. Some people are fine with it, but some people absolutely do not want that, and that's why I think not just appending but also taking over should be an option.
F: In other news, I have a demo of metric exemplars, logs with traces, and agent traces all working together from a vanilla Spring app. Unfortunately I had to write custom logging code for Google Cloud, but it's pretty cool. The new agent is awesome.
C: Cool. Metrics? Oh yeah, so if you haven't seen: if you want to try out Java 17 and JFR streaming in Java 17, this is really cool, and there are some nice examples of how to model metrics. Thanks, Josh, for all the help on the review there, for those of us still learning how to model those things properly.
F: Specifically, CPU usage in JFR is sampled at a faster rate than metric export, and so what we're recommending right now (we had a discussion on this) is that we're going to put it into a histogram, basically, because that's the most accurate way to record that synchronously.
F: So every time JFR samples, like, you know, how much time was spent in a long lock, or how much CPU usage there was as a percentage, that's recorded in a histogram. Then, if the user wants the actual export rate that they've configured for metrics, they would need to use a view with a gauge, last-value-style aggregation, to only get the last value, if they are worried about the cost of using histograms.
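A toy illustration of the trade-off, in plain Java rather than the OpenTelemetry SDK: JFR produces many CPU samples per metrics export window; a histogram-style aggregation keeps every sample, while a last-value (gauge-style) aggregation keeps only the newest one.

```java
import java.util.ArrayList;
import java.util.List;

// One metrics export window receiving fast JFR samples.
final class ExportWindow {
    final List<Double> histogram = new ArrayList<>(); // histogram aggregation: keeps all samples
    double lastValue = Double.NaN;                    // last-value aggregation: keeps only the newest

    void record(double cpuPercent) {
        histogram.add(cpuPercent);
        lastValue = cpuPercent;
    }
}
```

The histogram preserves the short 80% spike below even though the window ends at 20%; last-value reports only the 20%.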
F: So there's, like, a bucket for anything over a thousand milliseconds or something, which is going to be useless for percentiles; if you're over a thousand percent, I think we have major problems with JFR. So there's a lot there. I really love this JFR pull request, because it's finally getting really good real-world use cases for us to push back on the metric spec.
D: Really? So, just to clarify my understanding: this last value is, like, between metric samples. JFR is sampling much faster than our instrumentation samples, so the metrics sample taking the last value is great, but you've lost some fidelity there. Another way of perhaps improving the fidelity is also averaging over that window, right?
F: Right. So I'm actually super excited for when we don't have to define buckets for histograms as well, with the high-resolution histogram, because that will solve a lot of my concerns around what we're seeing with explicit-bucket histograms and this default set of buckets, which I don't feel is great for CPU metrics.
F: There was this thing that we were going to have called a hint API, where you could hint at what buckets should be when you instrument, but users can override with views. This is the exact use case where we kind of need that, and we don't have it; it got dropped for the initial release.
F: So, you know, there's a lot of really good things coming out of this JFR streaming, beyond "these are going to be awesome metrics". So lots of good discussion there, and trying to make sure that all gets back to the metrics SIG.
F: Yeah, so you can configure the histogram with your own set of bucket boundaries that fit percentiles well, and we could try to define ones that fit CPU percentile usage well, that we think are good for users. But we can't do that without forcing the user to write this configuration in the SDK, or doing some sort of hackery, because there's no API for us to do that right now. When we have high-resolution histograms, this probably won't be that big of an issue, because we can use the high-resolution histogram, and there should be some way that users can reduce the cost of that; if they want, they can move it back to explicit buckets if they're not happy with it, but we'll have a maximum number of buckets that you'll ever have, and it shouldn't be a problem.
F: There's also, you know, we could let users do last-value here, potentially, but we don't have an instrument that does it by default. So yeah, the TL;DR is: hopefully when James's high-resolution histogram stuff is through and we've kind of stabilized that, maybe this won't be an issue; we'll just be using high resolution everywhere. I am only about 60% confident that is going to be the future.
C: All right. We can come back to... oh, the "exporters own" one.
F: Oh yeah, so this is a big PR. The spec for this is going through, but effectively we're shifting directions in metrics a little bit: we want exporters to be the primary owners of whether you have delta or cumulative metrics, because it's more a matter of what your backend supports than what the API should do. So this is a big shift; there's a spec PR related to it.
F: It's a much bigger PR than I really wanted it to be, but there was a lot to untangle. I'm curious how we feel about making progress on that. I expect it to be...
F: There's one open question in that PR that John raised, around where flexibility should live for delta versus cumulative. Is it at the exporter level? Is it at the exporter plus the specific metric level, the metric type? Or is it something view-like, where users should have 100% configuration over it with views? Right now the way the spec is leaning is that views are the ultimate source of truth.
F: If someone configures delta in a view, that is true for all exporters; if an exporter gets something that it can't support, that's an error in the config, the user sees it logged at runtime, and that's their fault. Exporters are the next source of truth, where, like, if I am exporting the OpenTelemetry protocol, there's going to be an environment config variable that I can use to say I want delta versus cumulative.
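A plain-Java sketch of the "exporters own temporality" direction with an environment-style override; the class names are illustrative, not the actual SDK or spec surface, and the behavior follows the description above (cumulative when unset, Prometheus cumulative-only):

```java
import java.util.Locale;

enum Temporality { DELTA, CUMULATIVE }

// An OTLP-style exporter decides temporality itself: an env-style setting
// wins if present, otherwise cumulative is the default.
final class OtlpTemporalityChooser {
    static Temporality choose(String envValue) {
        if (envValue == null || envValue.isEmpty()) {
            return Temporality.CUMULATIVE; // default when the variable is unset
        }
        return Temporality.valueOf(envValue.toUpperCase(Locale.ROOT));
    }
}

// A Prometheus-style exporter simply refuses anything but cumulative.
final class PrometheusTemporalityChooser {
    static Temporality choose() {
        return Temporality.CUMULATIVE;
    }
}
```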
F: If I don't specify, it's cumulative by default, and, like, the Prometheus exporter will be specced to say it only supports cumulatives, because that resolves a lot of issues with that exporter. So that's kind of where the spec is right now. And again, this is a shift where we're moving the notion of aggregation temporality into the exporter, but we have a lot of people who are nervous about flexibility of configuration, and I feel like we opened that can of worms too early in the spec.
F: Personally, that's where I sit. I don't know if we need that level of detail all the way down to a particular instrument's aggregation temporality, where I'd be like: you know, I want to export OTLP, and I want all my histograms to be deltas and all my sums to be cumulatives.
F: I don't know if that's going to be the case, not really sure. I haven't heard anyone argue with a real-life use case for that yet, but that's kind of the open question on this PR and on the spec itself.
F: So I don't know if anyone here has examples that they want to talk through, or where they sit on that. Whatever we agree to here, I'd like to take to the metrics SIG later today to kind of talk through.
B: One: it's not a real-world use case, but it is a consequence of not allowing granular temporality, and that is, when we did have the summary, the min/max/sum/count doesn't make a lot of sense in cumulative.
B: So I think there will be aggregations (and maybe when we enable custom aggregations specifically, which I know we don't have right now in the SDK, but eventually we're going to let people define their own aggregations) I'm sure there will be aggregations that literally just don't make sense in one world or the other. And then how things behave with respect to that setting that the exporter does seems a little bit strange; like, yeah, cumulative, but I'm using an instrument or an aggregation that only makes sense for delta.
F: I do feel like there should be a way for an aggregation to not support delta or cumulative.
B: Yeah, I mean, all these are excellent questions; I don't have any answers. And I think when you say "metric" you mean instrument type, right? Yeah, yeah. I mean, I could imagine, like, that's one of the reasons why you define a view, like...
B: I want this specific metric, this specific named instrument, to have this particular aggregation for it. But that's not something that we talk about. But then we're talking about... well, there was a point in time where we did think about just exposing the new API to exporters and letting them do whatever they want.
B: I guess I think the issue is that there are a limited number of vendors right now that have explicit metric ingest. Like, Splunk doesn't even really have metric ingest yet, and we're just going to take OTLP, I think. Although, the delta versus cumulative, I don't have any idea what we're going to do, because we generate all our metrics based on full-fidelity traces, of course.
F: There's four metrics backends that I tend to pay attention to, and there's six that we look at total that I know of. Number one is Prometheus, because we need to be compatible with it; it's the biggest thing in GKE, right. The other one that I personally look at is actually InfluxDB, just because I like it; it's one of the few things that has infinite cardinality, and that's kind of cool and totally not like any of the other metrics backends.
F: Number three is the Google backend, right. So the Google backend actually supports both cumulatives and deltas, and you can pick your poison, but unfortunately, I think in all cases we're going to be recommending customers use cumulatives in our backend, for the benefits that you get from it, right.
F: There's a possibility that you can get delta metrics, but those are, well, anyway, long story short: we still recommend cumulatives from a query standpoint. Then there's Lightstep, which has a backend that's mostly based on deltas.
F: If I understand what Josh MacDonald has described correctly; so I'd just ask him what he would need and what he recommends, right. And then Microsoft has two different metric backends: they have one that's publicly available to people, and they have one that's privately available. Their backend that's publicly available is based on cumulatives; their private one, for internal usage, is based on deltas.
F: So I think those are the six that I was mentioning, right: one, two, three, four, five, six, yeah. These are the six backends that we tend to use to do our evaluation, and I think it's pretty rich in terms of understanding use cases. But I agree that we don't really know if there's going to be something that comes out that needs really fine-grained changes here, and there's a lot of hard decisions to make sometimes with deltas, right.
F: Yeah, someone internally recommended I take a look at it, and I was like, wow, I wish I could try it out at scale. I also don't want to actually hold a pager for it, I think, from what I can tell, but I do want to try it out on something big at some point; it's very cool. Yeah, I use their cloud, their SaaS.
B: InfluxDB, for a personal project; it just, you know, writes the temperature of my wine cellar once a minute to it, and the nice thing is the data is so small. I think I've been running it for four months now, and I think my total bill is up to eight cents, and they won't bill for less than a dollar. So, you know, I might get a bill in a couple of years at some point.
F: Yes, everyone should look at it if you're interested in metrics and want to see, like, a really cool way of taking in metrics and querying for them. So, just to recap, though: if you look at that PR, like I said, unfortunately it's larger than I intended, but there's that open question of where, and how granular, we want delta and cumulative configuration.
F: I know John's going to be at the metrics SIG, or we'll comment there, one or the other, but if you have concerns or comments, now is the time to raise them. Because (I think it's still open from Reiley) some of that PR is in the spec, and some of it is not; some of it's in an open PR, and it's guidance.
F: I want to make sure that whatever we decide here is reflected upstream. The other thing I'll call out is that this design is also what .NET is doing for their metric API, the way exporters work and the way views work, between the two, as far as I understand.
F: Yes, I thought... if I didn't mention it, apologies; I tried to mention that that is totally an option, something that we could do.
F: Yes, and I think there's a lot of merit to that as well, potentially, because I do think, effectively, aggregation temporality is really a storage concern more than anything else; how well your query engine can handle the data determines what aggregation temporality you're going to use for this data, effectively.
F: So I think that's a good call-out. My only concern there is around configuration, which I put in the bug, so it's worth reading that comment thread; I think it's the only one that's still open, from John.
F: Sorry, do you want me to try to fragment it? It would be about another week or two, I think, to get that out. I'm a little... I don't know, I'm really...
F: Okay, sounds good, sounds good. I've only had comments on how bad my spelling is so far, which is pretty bad.
F: Really, they just disappear to me. It's, like, the worst; even code comments with TODOs, they disappear, I just don't see them anymore.
F: Okay, I think I've spent too much on that. Please take a look and let us know what you think, if you care. Real quick, though, I want to call out that I'm working on the OTel roadmap.
F: We're trying to coordinate across different SIGs, different pieces, the TC, different owners, and we want to put out a new roadmap. We get a lot of... yeah, I think there's a community issue on this, but we were talking about this in the last TC meeting, that it's time to update the roadmap, and so I'm working on an outline, and I want to make sure that things are added related to what everybody's doing. So, if you have... can I share the to-do?
F: I will add a link, maybe in the docs, because this is not shared yet, but effectively there's going to be a section on what's happening with metrics, what's happening with logs, what's happening with trace (focused on sampling), what's happening with instrumentation and the semantic convention stability, what's happening in the Collector. Then we're going to have a bit of a roadmap around OpenTracing and OpenCensus deprecation and migration paths, and then there's this open question of what else we need to cover in the roadmap.
F: So, if you're aware of frequently asked questions from users: if you look in that thread, you'll see a lot of things that people want. The one that scares me the most is they want a Gantt chart of "here's what we're going to work on in what month", and I'm like: this is open source. We have people who come in and do stuff and then leave; we don't know who's going to come in and do stuff. But to the extent that we can say what we know we're working on...
F: I was going to just kind of pull from, for example, instrumentation API stability, and this converting to the Instrumenter API as a thing that's been worked on, the stability of the agent, you know, things that are in there. I'm going to throw out a rough outline and then send it for people to add and contribute to. So, just as an FYI, that should hit sometime next week.
C: Yeah, definitely. Maybe just post in Slack; if you post the link to the doc, I'm sure people would... I'd be happy to go in and, you know, update stuff. Cool, yeah.
F: And if you do, just feel free to add yourself to, like, the author; maybe we can get all of OpenTelemetry as an author of this thing, or we won't even write the author tag. Anyway, that's probably an internal Google joke that no one gets.
C: Like, like CRs, you...
C: Cool. So we've got two minutes left here. Just briefly: great progress on the Instrumenter API conversions.
C: I want to call out thanks to, thank you, team Splunk, and yeah, lots of Netty, fun Netty stuff. And a little bit more about instrumentation API stability: we're starting to fine-tune things like naming and kind of going through in more detail. Other things went in this last week; this was a good one, thank you, Lauri.
C: Oh yeah, thank you, Mateusz, for all the doc work. I know there's another big PR out that Mateusz put up; it's really nice. So we have some really nice docs on writing instrumentation, which is complicated in the agent world.
C: Just because it's a little too low-level to capture things properly when it comes to, like, HTTP/2 pipelining and things like that. Metrics, metric dimensions: so I started using metrics in anger (well, not a lot of anger yet, but a little bit of anger) and ran into the fact that we weren't capturing status code, which is a very important dimension.
C: And yes, thank you, Nikita, for all of this work around the smoke test matrix stuff.
C: I had really struggled understanding and doing things in the smoke test matrix previously, and Nikita did some great stuff to simplify that. Any other topics, thoughts?