From YouTube: 2022-05-05 meeting
A
I want to introduce you to my new teammate John, who is in your time zone, I think, Mateos: CET.
B
A
An hour out, you know, so you're further away from... yeah.
C
A
All right, so I was curious because, I know this is a traditionally popular feature of APM tools, to inject the JavaScript monitoring server-side. But I also know it's traditionally problematic, so people have mixed feelings about it, and of course OpenTelemetry has no real client browser story yet anyway. But we were thinking of having a summer intern project for this, and so I was curious.
A
If anybody had already done this, what people's general thoughts are. Obviously it's a not-on-by-default kind of thing, and of course we have no standard upstream OTel thing to inject yet. So I think the timing of you bringing this up is remarkable, because it was brought up internally here today, like literally this morning, and I think we said exactly the same thing you said, which is that everyone has been doing that and it's always a little bit perilous and clunky.
A
Cool, yeah. So I think we would try to do something in contrib.
A
I just know that, at least when I was at New Relic, there was always a complicated story around JSPs.
A
Yeah, when we get into the implementation of that, we'd love any known pitfalls that people have run into, because I know it's notoriously flaky or problematic. It's hard to get right.
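For illustration only, a minimal sketch of the classic server-side injection being discussed (not any existing OTel code; the filter name and snippet are hypothetical). Even this toy version hints at the known pitfalls: it only handles getWriter(), not getOutputStream(), and it has to recompute Content-Length after injecting.

    // Hypothetical sketch of server-side snippet injection (servlet-api 4.0+,
    // where Filter.init/destroy have defaults). All names here are made up.
    import java.io.CharArrayWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    public final class RumInjectingFilter implements Filter {
      private static final String SNIPPET = "<script src=\"/rum.js\"></script>"; // hypothetical

      @Override
      public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
          throws IOException, ServletException {
        BufferingResponse wrapper = new BufferingResponse((HttpServletResponse) res);
        chain.doFilter(req, wrapper);
        String body = wrapper.toString();
        String contentType = wrapper.getContentType();
        if (contentType != null && contentType.contains("text/html")) {
          body = body.replaceFirst("(?i)</head>", SNIPPET + "</head>");
        }
        // Classic pitfall: Content-Length must be recomputed after injection.
        byte[] bytes = body.getBytes(res.getCharacterEncoding());
        res.setContentLength(bytes.length);
        res.getOutputStream().write(bytes);
      }

      // Buffers writer output; a real version must also handle getOutputStream(),
      // which is exactly where streaming responses and JSPs get hairy.
      private static final class BufferingResponse extends HttpServletResponseWrapper {
        private final CharArrayWriter buf = new CharArrayWriter();
        BufferingResponse(HttpServletResponse response) { super(response); }
        @Override public PrintWriter getWriter() { return new PrintWriter(buf, true); }
        @Override public String toString() { return buf.toString(); }
      }
    }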
D
B
Now, if you're targeting servlets, you're probably going to run into some problems with the servlet output streams and all that stuff; they have slightly non-standard methods and versioning policies.
A
Yeah, didn't Pavel or somebody, a long time ago in the OTel Java agent, instrument the response in order to capture the response length? The number of bytes, I think.
B
A
Oh interesting, because I would expect the snippet to be fairly small, although I guess it depends on how much you want to run inline as it loads, so yeah, it makes sense. And it's been a while now, but I think it was also just the way that New Relic agents were set up: a connection, I think, requires some additional bootstrapping.
A
Cool, so yeah, you can let people know this is an official intern project, so we will be working on that this summer. Cool, awesome, thanks for sharing that. Yeah, Josh, good to see you. Welcome, hey.
E
Okay, a bit ago the aggregation temporality for metrics was changed from cumulative to delta, and this unfortunately makes some tests flaky, because most of the tests are written so that they assert some specific value: with cumulative, this value will eventually exist, but with delta...
E
Instead of one value, we might have two values that we need to sum together in some cases. I was wondering whether we should fix those by changing the assertions, which can get quite crazy, I think, if you need to sum up those points to get some fixed value. Or maybe somebody has some clever idea of how to do it.
F
Yeah, so I'm the one that made the change from cumulative to delta, and the reason is the way that the tests are set up: there's a periodic metric exporter that has a very small interval, on the order of, I think, every 10 milliseconds or something like that. Every 10 milliseconds it collects metrics and sends them to the in-memory metric exporter. And so when the temporality is cumulative...
F
You have many, many instances of the same metric, and its cumulative values, which I think don't change very often. So if any sort of assertion error happens, it'll print literally thousands, or maybe tens of thousands, of individual metrics, and it's just horrible to go through and try to figure out what exactly the problem was. So yeah, I switched them to delta.
F
But you're totally right, delta presents its own challenges: the values could be split across two distinct collections, and it would be the sum of those that you would have to assert against.
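For concreteness, a minimal sketch of summing delta points in an assertion, using the SDK's metric data model (the metric name "my.counter" and the expected value 42 are made up):

    // Sketch: with delta temporality a single logical counter value can be split
    // across several collections, so the assertion sums matching data points.
    import java.util.Collection;
    import io.opentelemetry.sdk.metrics.data.LongPointData;
    import io.opentelemetry.sdk.metrics.data.MetricData;

    final class DeltaAssertions {
      static long totalOf(Collection<MetricData> metrics, String name) {
        return metrics.stream()
            .filter(m -> m.getName().equals(name))
            .flatMap(m -> m.getLongSumData().getPoints().stream())
            .mapToLong(LongPointData::getValue)
            .sum();
      }
      // In a test: assertEquals(42, totalOf(exported, "my.counter"));
    }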
E
So the original problem that Jack had was that the tests were interfering with each other. If you run one individual test, then it worked, even with the cumulative temporality. But if you run two tests, then the second one would fail, because it found some stuff that was left over from the first test.
F
Yeah, that's right, that's the other part too: the state doesn't reset for individual tests. The state is shared across all tests. So one option that I tried to go down, but couldn't quite figure out how to do, and maybe it's just because I'm not as familiar with the test setup, maybe you would have better luck, is that rather than using a periodic metric reader with the in-memory metric exporter...
F
I think a better fit is to use the in-memory metric reader with cumulative temporality. The advantage there, if you can figure out how to do that, is that each time you read metrics you'll just get the current state, rather than relying on metrics being collected every 10 milliseconds and accumulating lots of duplicates.
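A minimal sketch of that suggestion, assuming the opentelemetry-sdk-testing artifact is available and using its API as of recent SDK versions:

    // Sketch: pull-based reading with cumulative temporality, instead of a 10 ms
    // periodic exporter. Each collectAllMetrics() call returns the current state.
    import java.util.Collection;
    import io.opentelemetry.sdk.metrics.SdkMeterProvider;
    import io.opentelemetry.sdk.metrics.data.MetricData;
    import io.opentelemetry.sdk.testing.exporter.InMemoryMetricReader;

    class PullBasedMetricsTest {
      final InMemoryMetricReader reader = InMemoryMetricReader.create(); // cumulative by default
      final SdkMeterProvider meterProvider =
          SdkMeterProvider.builder().registerMetricReader(reader).build();

      Collection<MetricData> currentState() {
        // No 10 ms cadence, no duplicate accumulation: just the state right now.
        return reader.collectAllMetrics();
      }
    }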
F
I think when the agent tests are running, the way that traces and metrics get sent from the agent that's running to the test, so that assertions can be made, is that the bytes actually get transmitted OTLP-formatted and then kind of reconstituted so that you can make assertions against them. And that didn't play well with this idea of an in-memory metric reader that you just pull when you need to make assertions against it.
F
Yeah, I think the best case scenario, which would simplify assertions and also make it easier to debug when things do go wrong, would be, one, to have the state be able to reset between tests, so you don't get tests polluting other tests, and then, two, to use the in-memory metric reader with cumulative temporality instead of the periodic metric reader with an in-memory metric exporter.
F
D
Right, I guess what I'm saying is: we don't have any other way to reset state, right? That's it, okay. Also, I want to call out that in OpenCensus they just don't test this kind of thing, ever, so I'm glad we're solving it. But what type of thing, what do you mean, this kind of flaky test? Because they have global state and all the tests would pollute each other? There's a single ginormous test for everything all at once, and then nothing else. It's awful.
A
The problem is the Java agent is the massive global state, because we're actually running these against the real Java agent.
D
F
I'm trying to think: what if there was some way we could have an internal method on the SDK that allowed you to clear all the instruments? I'm trying to think how challenging that would be to implement, because then you could call that thing between the tests to effectively clear all the storages and act like no instruments were registered anymore.
D
Okay, because I guess what I'm suggesting with OpenCensus, just to clarify, is you would write unit tests but you'd run them all together. We'd have to make a new testing framework for this, where you'd have a setup phase and an assertion phase: you'd run all of the setup phases for all the tests, then you'd run all the assertions for all the tests, and you'd have to manually make sure that what you're testing is disjoint, if that makes sense.
D
So while you would write something that looks like a unit test, it's actually a complete integration test. That's an option; I'm not suggesting it's the best option, I think Jack's ideas are far better. But the internal SDK thing, I think making that function is probably not too bad, right? Because you'd have to... oh.
F
...shared state and just crush it. I don't think it would be... well, you wouldn't have to hijack shared state. You would just have to hijack where the metric storages are all stored, because that's, I think, what happens: whenever a collection happens, we access these storages. So if we kill those, then everything's effectively gone.
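No such clear-all method exists in the SDK today, so anything like it is hypothetical. A blunt alternative sketch, for plain SDK tests at least, is to rebuild the provider per test so no storages survive between tests (which, as noted below, doesn't help with the agent's global state):

    // Sketch: per-test rebuild instead of an internal "clear storages" hook.
    import io.opentelemetry.sdk.metrics.SdkMeterProvider;
    import io.opentelemetry.sdk.testing.exporter.InMemoryMetricReader;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;

    class IsolatedMetricsTest {
      InMemoryMetricReader reader;
      SdkMeterProvider meterProvider;

      @BeforeEach
      void setUp() {
        reader = InMemoryMetricReader.create();
        meterProvider = SdkMeterProvider.builder().registerMetricReader(reader).build();
      }

      @AfterEach
      void tearDown() {
        meterProvider.close(); // drops all instrument storage along with the provider
      }
    }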
D
F
Yeah, so it's not just that you want to reset the reader, it's that you want to reset all the instruments that are being polled every time a collect is run. So let's say you're able to have this proxy reader and you replace reader one with reader two. Well, there's still lots of asynchronous instruments that are registered with callbacks, and even when you use your new reader two, your replacement reader, those callbacks still get invoked, and the existing state exists even for your synchronous instruments as well.
E
So for built-in metrics, like the HTTP client metrics, there is still a risk that we'll accidentally nuke those too, and then we break them or something like that.
F
Okay, I see what you're saying. So, okay, you're saying that HTTP client metrics are tested, or that instrumentation is spun up.
E
F
I think for simplicity, the nuking would have to be all or nothing, and so I think you'd have to find a way to, I mean...
E
Preferably, the way I see it, not too many tests are failing. So even if you have to change them, it probably wouldn't be a big deal, it's just that some of those assertions might not be really nice.
E
F
Okay, yeah, I think I have a couple of places in mind where you would have to touch things. I could give you some pointers, but if, if...
E
D
A
So it sounds like this is sort of the consensus on the ideal solution, the most...
E
That it would report delta metrics but actually accumulate them.
F
E
Yeah, maybe it should be so that... okay, I think it was a hundred milliseconds, but maybe it should only poll them when it actually starts to assert them, right? So I don't know whether it will always work, because at least with traces we have the issue that when it first polls, it doesn't assume that all the traces have already been generated.
E
So if it doesn't get the right amount of traces, then it retries. I think the metric one also retries, but maybe if we arrange it carefully, then it will actually work.
A
D
I do think that, from a stability-of-tests standpoint, cumulative with cleared state every test is going to give you the most repeatable tests with the least amount of possible flakiness, if we can get there. If we can't get there, then other alternatives are possible, but definitely, especially with the retry assertions that exist. I don't know if these tests are one of those, but I know that we had lots of issues doing anything delta there.
F
At this default? Because the reason I ask is that the reason we have a million rows is that the periodic metric exporter runs every 10 milliseconds. Every time that runs, you're essentially taking a copy of all the cumulative metrics, so if there are 10 metrics running and you're running that every 10 milliseconds, you quickly end up with a lot of rows.
F
But if the Awaitility utility is polling the metrics, each time Awaitility tries to run its cadence, let's say it runs every hundred milliseconds or something, then the rows that you see should only be the set that were pulled at that specific time, not the cumulative from time 0 to time now. So it should be a really big reduction in the number of rows.
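A sketch of that polling pattern with Awaitility, reusing the reader from the earlier in-memory sketch (the timeout and assertion are illustrative):

    // Sketch: poll with Awaitility until the expected snapshot shows up, asserting
    // only on what the reader returns at that specific poll.
    import static org.awaitility.Awaitility.await;
    import java.time.Duration;
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    class PollingAssertionTest extends PullBasedMetricsTest { // class from the earlier sketch
      @Test
      void metricsEventuallyAppear() {
        await().atMost(Duration.ofSeconds(10)).untilAsserted(() ->
            Assertions.assertFalse(reader.collectAllMetrics().isEmpty()));
      }
    }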
A
Yeah, let's see, it sounds like Lauri's gonna mess with it and see what works or doesn't work. Sweet.
B
A
B
A
Yeah, I don't have anything better, but we have time on the Java agent packages, probably, on stabilizing, although the extension stuff is getting close, right?
B
There is one thing, though: the javaagent extension API depends on the SDK autoconfigure module, not the SPI, the actual autoconfigure module for the auto-configured OpenTelemetry SDK, which is used somewhere, and that one is still alpha.
F
A
Yeah, let's bring that up this evening.
A
F
Could you follow the trick that we often do in opentelemetry-java, where we have stable artifacts that have implementation dependencies on unstable artifacts?
B
I think it's an API dependency; it's the AutoConfiguredOpenTelemetrySdk, but the implementation is exposed in AgentListener, I think.
B
F
But the thought process is, even though it's part of its API, you have to opt into using it by placing your own dependency on it. And it's kind of a weird situation, because we guarantee backwards compatibility in our API, so we can't change certain parts of those dependencies.
F
For example, OpenTelemetrySdk has an interface for getting the SdkMeterProvider, so it's part of its API, and so we could never rename SdkMeterProvider: even though that artifact's unstable, we're committed to that, because a stable component has a dependency on it.
F
B
Well, I think if you really want to give yourself some trouble with these, there are ample opportunities to do it using many other interfaces. This module is a good example where, if you try, you can install completely any bytecode transformer, which can also do lots of weird stuff.
B
F
E
I was actually... maybe I'm mixing something up, maybe I meant some other place. But where beforeAgent was used, currently, I think, all the OpenTelemetry initialization happens before the agent has completed some of its internal stuff, which kind of makes it way more risky than it would be if the agent did some stuff before OpenTelemetry is initialized. But it might be that there are other trade-offs there.
A
The problem is, as soon as we install instrumentation, some of that instrumentation will access the OpenTelemetry object, and that will trigger it to initialize at that point. So we need to kind of configure it so it will initialize properly, unless we did some kind of swapping proxy and swapped it out.
B
F
So the advantage of AutoConfiguredOpenTelemetrySdk, as opposed to just OpenTelemetrySdk, is that you can access the resource and the config properties as well. So it kind of wraps those three things. We use it for the resource.
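Roughly, the wrapper being described, a sketch using the alpha autoconfigure module's public accessors as of that time (treat the config-properties note as the less settled part):

    // Sketch of what the autoconfigure module wraps beyond the bare SDK.
    import io.opentelemetry.sdk.OpenTelemetrySdk;
    import io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdk;
    import io.opentelemetry.sdk.resources.Resource;

    class AutoConfiguredAccess {
      void show() {
        AutoConfiguredOpenTelemetrySdk auto = AutoConfiguredOpenTelemetrySdk.initialize();
        OpenTelemetrySdk sdk = auto.getOpenTelemetrySdk();
        Resource resource = auto.getResource(); // the piece the agent actually uses
        // Config properties are surfaced to SPI customizers rather than a public getter.
      }
    }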
F
D
Isn't that a spec issue? So the resource thing, I think, is a spec issue. The config thing, I think, is a lack-of-spec-around-config issue. But all of those are things that I think should be part of the raw SDK, right? Like, all three of those, possibly.
D
A
That would be interesting, if it's only used for the resource.
F
A
Because, yeah, as Lauri mentioned, also beforeAgent is somewhat troubling. Not that there aren't already so many ways for us to shoot ourselves in the foot, but yeah, I'm curious if anybody has a use case for an implementation that uses both before and after, because it almost seems like we should maybe split that interface.
A
B
So I don't think that we should remove the entire AgentListener, it's just the before method. The after method is useful for registering instrumentations that don't really hook into bytecode, like JMX, things reading metrics and stuff like that.
A
No, it was... thanks for digging in and getting to the bottom of it. It really helped to understand why ByteBuddy works the way it does, like mimicking javac; that made a lot of sense.
A
E
I think usually it won't have much of an impact, because you probably won't have too many annotations, and ByteBuddy also has some caching. So even if you have to look them up a couple of times, it's not a big deal. But lately we have had a couple of support cases where a ridiculous amount of time was spent looking up resources, not in this particular place, but in some other place in our code base.
E
A
But do you think... the number of distinct annotations seems to me so much smaller compared to the number of distinct classes.
E
A
Does it? If you have temporary class loaders, but you find it in the parent, do you know if that requires re-looking it up?
A
So the point of having multiple class loaders and having to look it up multiple times doesn't worry me if each class loader is for something big, like a WAR file. It does worry me when there are patterns where lots of temporary small little class loaders are used, but then, primarily, everything is loaded from the parent.
E
Lately I did a small fix for Spring.
E
There is some kind of temporary type-matching class loader that's used when load-time weaving is enabled. It basically creates a new class loader for each class, or each bean class, I think, and this seemed to have horrible performance impacts, not because of this place, but because of all the class loader optimizations and class loader matchers that try to figure out whether some instrumentation should be applied to some class loader.
A
E
I was really hoping that the author of ByteBuddy would help us out, but maybe I didn't do a good enough job of selling the issue to him.
A
Yeah, I mean, that idea of ByteBuddy following javac makes sense, but also there are some other changes we would like to make that could help us with this.
A
Yeah, like we have that other issue where it looks up all of the classes and all of the method signatures, even if it's not used.
E
That's a different issue. I think that only applies when the class is getting transformed.
A
E
So it might make sense for us, maybe for cases like this, to have some kind of internal performance metric system, something that would allow us to see how long the extra resource lookups took, or how long the class transformation really took, to better quantify how expensive it is, because currently we're just guessing.
A
Yeah, I like that idea. Potentially, a way forward is to maybe first put in some metrics that we could then get out to users, which could help guide us on where to hack things and where we don't need to.
E
Actually, I wasn't meaning proper metrics, maybe something simpler that just, I don't know, computed the invocation counts and how much time was spent, and dumped it into the log when some flag is enabled.
E
Either periodically, or at the exit of the application, or both. So if we are doing some small performance testing, and if we have managed to tag all the good suspects where we could spend an excessive amount of time, then maybe it would give us a better idea of what we should be doing.
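A minimal sketch of that kind of flag-gated counter; everything here, including the flag name, is hypothetical rather than an existing agent feature:

    // Hypothetical: count invocations and total time per operation, dump to the
    // log periodically or at application exit when a debug flag is enabled.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    final class PerfCounters {
      private static final boolean ENABLED =
          Boolean.getBoolean("otel.javaagent.internal-timers"); // hypothetical flag
      private static final ConcurrentHashMap<String, LongAdder> COUNTS = new ConcurrentHashMap<>();
      private static final ConcurrentHashMap<String, LongAdder> NANOS = new ConcurrentHashMap<>();

      static void record(String op, long elapsedNanos) {
        if (!ENABLED) return;
        COUNTS.computeIfAbsent(op, k -> new LongAdder()).increment();
        NANOS.computeIfAbsent(op, k -> new LongAdder()).add(elapsedNanos);
      }

      static void dumpToLog() {
        COUNTS.forEach((op, count) -> System.err.printf(
            "%s: %d invocations, %d ms total%n",
            op, count.sum(), NANOS.get(op).sum() / 1_000_000));
      }
    }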
A
So do those get sent over OTLP, or do we log them?
A
Yeah, I like that. Anything where we can ask users to just enable debug logging, rerun, and gather the output would be really helpful.
A
Yeah, it was pretty straightforward, as far as just propagating a thread local down to know that we're in this case and then optimizing for that. I was okay with it; I'll take another pass.
A
Maybe you could give it a look, just from a complexity-versus-benefit perspective.
D
Yeah, so this is a silly one, or not super silly, actually. So I was toying around with the OpenTelemetry Operator to do some random testing in GKE, since I saw there was a GKE Autopilot bug, and one of the things I noticed was there's a very convenient YAML file you shove at Kubernetes that defines your sampler and other exporter setup config. So yeah, here's a link to it. Right, I can say: here are the propagators, here's my sampler, okay.
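The file in question is the operator's Instrumentation custom resource; a trimmed sketch of the spec being described (the name and values are made up, the field layout follows the operator's CRD):

    apiVersion: opentelemetry.io/v1alpha1
    kind: Instrumentation
    metadata:
      name: my-instrumentation        # hypothetical name
    spec:
      exporter:
        endpoint: http://otel-collector:4317
      propagators:
        - tracecontext
        - baggage
      sampler:
        type: parentbased_traceidratio
        argument: "0.25"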
D
I have a feeling, since this exists, that this is our de facto standard config right now for cross-OpenTelemetry-SDK configuration. It works with Java, it works with Python, it works with Node, and it has extensions for language-specific things. What I wanted to run by everyone, so, two things. One: right now...
D
I think, specifically for Java, because the way that this encodes everything is as environment variables, it should be able to work with anything that uses autoconfiguration. So I think, if we think that this is accurate and something we'd want to support, that might be a good thing to open up and expand for users of the operator, to say, hey, you can configure any OpenTelemetry SDK, not just auto-instrumentation.
D
For Java, I should say, it kind of goes into stabilizing the autoconfigure module, and anyway, lots and lots of discussion. Primary point being: is there anything in this particular... so if you look underneath spec, ignore the Kubernetes-specific parts, the YAML that is under spec, there's actually a whole definition for what's in there. If there's anything in there that we feel is terrible or really bad...
D
Please let me know. But I'm thinking about putting a proposal together for standardizing on that YAML and some sort of a YAML-to-OpenTelemetry-SDK conversion spec, where users can leverage that kind of config for any OpenTelemetry SDK, not just these autoconfigure modules, or not just what we've happened to hack together for the operator. And I want to expand it to also eventually have things like filters. So, like, when I look at this, I ask: how do I turn on and off spans, right?
D
How do I turn on and off metrics? Those are the kinds of things I think we need to build into here. So the initial OTEP would just be things that are simple, kind of what's defined here, with room for expansion, and then follow on like, okay, let's try to expand this over time. What I'm asking right now is: how scary is this to everyone here?
D
F
I love this idea, and I think this type of idea has been getting more momentum. I think Ted Young has been talking about interest in this, along with a variety of people. A file-based configuration for the SDK that allows us to escape the limitations of flat environment-variable-based configuration is a great idea.
F
I've explored this to some extent. In the SDK today, there's an experimental metrics module that allows you to specify a file that contains an array of view configurations, so you can have a view configuration file instead of having to do that programmatically. That was just like...
F
D
Right, I think the main problem right now with the YAML is actually around extension points in the SDK, right? So Java has that nice lookup mechanism where you can register everything with a name and then look it up. That's not necessarily inherent in every possible SDK, in some languages.
D
So is that a requirement we can enforce? That's something we'll find out. But, more importantly, how do we allow somebody to break out the exporter? Right now it's just an endpoint, because it assumes OTLP going to a collector going somewhere else, right? How do we allow custom exporters to be defined there, where they need configuration?
D
How do we validate that configuration? There's a bunch of open questions around there. I think extension of the config is actually my big major concern, as the hardest problem to solve here.
D
The second hardest problem is just getting people to agree on the things that aren't extensions, what it needs to look like. But yeah, take the sampler right there: sampler type parent-based trace-ratio with an argument. I don't think that scales to custom samplers. Totally agree, right? So that's something we need to think about. But is it worth putting the OTEP together now, I guess, or is this going to be a huge distraction for everyone here?
F
I think we're starting to wrap up metrics a bit, and the specification has talked about introducing a process to try to plan out how we focus spec-level initiatives, to make sure that we're not stretched too thin. I think Ted Young has been kind of trying to drive that a bit, and so I'm interested in just, like, where OpenTelemetry in general, as a community, wants to focus its efforts.
F
After we clear up some of the time commitment that we've all been spending on metrics for the last, I don't know, forever. So yeah, I think this is a good idea. I'm interested if the broader community is interested in simplifying configuration as kind of the next big issue that we tackle collectively.
A
Josh, as a supportive data point, our most popular issue by far in the Java instrumentation repo is excluding health checks, which basically means defining a rule-based sampler based on attributes. And we are basically stuck on that, because we can't really do that via environment key-value pairs; we need a structured config to support this.
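A hand-rolled version of that rule-based sampler is straightforward in code, which is exactly what can't be expressed through flat environment variables today. A sketch, with the attribute key and path prefix purely illustrative:

    // Sketch: drop health-check spans by attribute, delegate everything else.
    import java.util.List;
    import io.opentelemetry.api.common.AttributeKey;
    import io.opentelemetry.api.common.Attributes;
    import io.opentelemetry.api.trace.SpanKind;
    import io.opentelemetry.context.Context;
    import io.opentelemetry.sdk.trace.data.LinkData;
    import io.opentelemetry.sdk.trace.samplers.Sampler;
    import io.opentelemetry.sdk.trace.samplers.SamplingResult;

    final class HealthCheckExcludingSampler implements Sampler {
      private static final AttributeKey<String> HTTP_TARGET = AttributeKey.stringKey("http.target");
      private final Sampler delegate = Sampler.parentBased(Sampler.alwaysOn());

      @Override
      public SamplingResult shouldSample(Context parentContext, String traceId, String name,
          SpanKind spanKind, Attributes attributes, List<LinkData> parentLinks) {
        String target = attributes.get(HTTP_TARGET);
        if (target != null && target.startsWith("/health")) { // illustrative rule
          return SamplingResult.drop();
        }
        return delegate.shouldSample(parentContext, traceId, name, spanKind, attributes, parentLinks);
      }

      @Override
      public String getDescription() {
        return "HealthCheckExcludingSampler";
      }
    }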
E
A
This is a good question. You can...
F
Can't you do filtering in the exporter layer as well? Couldn't you have an exporter that wrapped your ultimate exporter, and you just filter the spans that come through there? You'd have to...
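A sketch of that wrapping-exporter alternative (the filter rule is illustrative). Note the trade-off versus a sampler: the span is still recorded and processed; it's only dropped at export time.

    // Sketch: wrap the real exporter and drop health-check spans before export.
    import java.util.Collection;
    import java.util.List;
    import java.util.stream.Collectors;
    import io.opentelemetry.sdk.common.CompletableResultCode;
    import io.opentelemetry.sdk.trace.data.SpanData;
    import io.opentelemetry.sdk.trace.export.SpanExporter;

    final class FilteringSpanExporter implements SpanExporter {
      private final SpanExporter delegate;
      FilteringSpanExporter(SpanExporter delegate) { this.delegate = delegate; }

      @Override
      public CompletableResultCode export(Collection<SpanData> spans) {
        List<SpanData> kept = spans.stream()
            .filter(span -> !span.getName().startsWith("GET /health")) // illustrative rule
            .collect(Collectors.toList());
        return delegate.export(kept);
      }

      @Override public CompletableResultCode flush() { return delegate.flush(); }
      @Override public CompletableResultCode shutdown() { return delegate.shutdown(); }
    }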
A
D
A
D
I want to get people excited, and I think it's going to take us several months to even get organized on a general kind of proposal. I might throw something out way quicker, a straw man, to let people beat up on it and tell me how terrible it is, because that's the best way to get all of the design considerations right. But just so you know, this is gonna be... it's... I don't...
D
Even if you see a proposal next week, I don't expect the discussion to end for several months. I think this is going to be, this will be, a thing. But I'm looking at this as my next venture, if you will, outside of trying to get rid of OpenCensus, and by that I mean we're going to be talking OpenCensus bridge, and we're going to be talking stabilizing and deprecating OpenCensus. So, cool.