From YouTube: 2021-10-13 meeting
A
We have Vishwa. Hi Tyler, how are you?
I
I'm talking at KubeCon later, but I had an item I wanted to talk about, and I didn't want to wait too long.
A
All right, cool. I think we have a couple of items — very interesting ones.
A
Okay — so, Paris and Miss Van, I know we had this remote write — sorry, the receiver remote write tests that we have been talking about. What we have been doing is based on some of the requirements that Vishwa had identified, and also some of the areas that we have been working on in terms of testing the end-to-end Prometheus pipeline.
A
We looked at what the Prometheus receiver today is doing, and the good news is that there are some integration tests. But again, Miss Finn and Paris — did you want to discuss the two approaches that we've looked at, component testing versus—
A
And I'm super curious to understand how your team has been testing — and, of course, the folks who have been using the pipeline — because we've pretty much been doing the end-to-end tests, as well as the Prometheus remote write exporter tests, and of course controlling the data generation and varying it. So far they've been end-to-end tests, not so much—
G
—types, you know, all the attributes — and yeah, it's pretty painful.
A
Okay, so again, good. I think we definitely looked at it, and we were also looking at the receiver code in more detail. So — do you want to go over what you've identified so far, which are these requirements, right?
H
Yeah, so these were the test cases that we had in the design doc that Vishwa and you shared with us. We used the Prometheus remote write compliance tests as a reference for how those are being tested. It turns out the remote write test cases that are in the Prometheus compliance repository—
H
—basically test the collector's remote write exporter end to end. What I mean is: they set up a collector pipeline with the Prometheus receiver and a Prometheus remote write exporter. The test exposes metrics to the receiver, which scrapes them; they go through the collector pipeline, and the remote write exporter then exports the metrics back to the test suite, which validates what it sent against what it received from the collector.
H
It only tests end to end. So we identified that there's a way to isolate just the receiver, which can be done using a metrics sink. You instantiate the receiver, and everything the Prometheus receiver receives in Prometheus format and transforms into OTLP format lands in the sink. At the end of the Prometheus receiver's scrape loop, we can retrieve those metrics from the sink and run our validation on them.
H
This is the approach used in the integration tests for the Prometheus receiver in the collector-contrib repository, and right now we are evaluating these test cases to see which requirements are covered by the integration tests and which are not.
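The sink-based isolation described here — instantiate the receiver, point it at an in-memory sink, scrape once, then validate whatever landed in the sink — can be sketched generically. This is an illustrative Python sketch, not the actual Go collector code; the real tests in collector-contrib use the collector's Go test sinks, and every name below (`InMemorySink`, `fake_scrape_loop`, `toy_parse`) is a made-up stand-in.

```python
class InMemorySink:
    """Collects every metrics batch the receiver pushes, the way a test sink would."""
    def __init__(self):
        self.batches = []

    def consume(self, metrics):
        self.batches.append(metrics)

    def all_metrics(self):
        # Flatten everything received so the test can validate it in one pass.
        return [m for batch in self.batches for m in batch]


def fake_scrape_loop(receiver_parse, exposition_text, sink):
    """Stand-in for one iteration of the receiver's scrape loop:
    parse the scraped exposition text and push the result into the sink."""
    sink.consume(receiver_parse(exposition_text))


def toy_parse(text):
    """Trivial stand-in 'receiver': turns `name value` lines into (name, float) pairs."""
    return [(line.split()[0], float(line.split()[1]))
            for line in text.strip().splitlines()]


sink = InMemorySink()
fake_scrape_loop(toy_parse, "http_requests_total 42\nup 1", sink)

# Validation step: compare what came out of the receiver with what was served.
assert sink.all_metrics() == [("http_requests_total", 42.0), ("up", 1.0)]
```

The point of the pattern is that validation happens on the receiver's output alone, with no exporter or downstream pipeline involved.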
G
The sink is an in-memory storage — yeah, okay, yeah.
A
So one of the questions we were debating is: would it be useful to have component testing — that is, black-box testing with the receiver as the component? Because there are obviously two ways of enhancing this. One is that we add another processor for our test harness, which acts as a driver not only for emitting different types of metrics but also different variations, testing all these different cases, but then having validation—
G
—being compliant at the closest part of the collection, which is the receiver output, would be the best way, in my opinion.
H
Yeah — in that case, I think the metrics sink approach should be helpful in isolating the testing to just what the Prometheus receiver outputs.
G
And that part already exists today. That's the way the receiver is being tested right now in the contrib repo — the receiver component tests actually use this approach, isn't it?
I
This one is our integration test, and in the same file we have unit tests as well. The integration test treats the whole receiver component as a black box and tests it end to end, but it doesn't do the validation — it doesn't check that the pdata matches the expected data. So, for what comes out of the receiver, it checks that something comes out, and it checks some of the parameters, but it doesn't check everything.
H
Okay, yeah — so it's missing a few cases. It's definitely missing negative test cases. The reference that you provided for the OpenMetrics test suite had negative test cases for the Prometheus format, so this could definitely be enhanced to cover more of the requirements that we have, and I think—
G
This approach would also be a lot more reliable and faster, I think — except that we need to do at least one scrape. That's the only thing that would take time, and after that, I think, because it's—
G
—end to end, right; we can just quickly get the tests going. Yeah.
A
Okay, yeah. The other question was: when should this be triggered? At what cadence should it run — regularly, as part of the build, as part of the releases, or as part of every PR?
A
Yeah, I mean, we'll test it, obviously, as we build this out — but again, I'd love to have the use cases for the different uses that are out there. I think component tests here make sense, in terms of enhancing the configuration tests that already exist. But also note — and again, I wanted to get other opinions on this — today the Prometheus receiver is not really a Prometheus receiver; it has everything else—
A
Also
in
it
it
happens
to
be
called
a
prometheus
receiver,
but
it
does
a
lot
more
than
just
prometheus
right.
I
mean
it's
basically
used
as
a
collection,
israeli
service
discovery
and,
and
you
know,
scraping
for
for
different
all
kinds
of
data
sources
that
are
being
used
by
the
collector.
A
So
again,
this
would
very
specifically,
this
pipeline
would
very
specifically
just
test
the
prometheus
tests,
which
you
know
again.
We
are
looking
at
very
closely
at
the
what
is
implemented
for
prometheus
itself
on
the
on
on
the
core
project.
G
Yeah,
so
the
data
source
here
is
prometheus
yeah
matrix.
We
should.
E
So there are two relevant — hopefully relevant — thoughts I've had. On the one hand, we're currently trying to get the Prometheus agent out the door, and that is probably the thing best suited to act as the emitter for Prometheus remote write, to test the receivers with — because then you don't have to rely on anything other than basically Prometheus upstream to send to the receiver of Prometheus remote write. And yes, I hate the name as well.
E
It's
super
complicated,
but
we
currently
have
what
we
have
because
then
you
you,
you
simply
do
away
with
this
complete
problem
domain
and
hopefully
like
we
can
also
put
put
a
few
targets
and
such
into
a
harness,
and
all
of
that
is
just
done
once
and
and
then
it
starts
sending
that's
the
one
thing
which
which
I
had
to
think
about
right
now.
The
other
thing
is,
I
discussed
this
with
julie's
fault.
E
Just
today,
where
we
consider
there,
we
were
considering
how
we
could
run
the
prom
ql
tests
in
the
future.
Of
course,
we
are
facing.
Basically,
what
is
similar
problems
here?
E
The
concept
which
is
currently
in
my
head
is
that,
from
the
prometheus
testing
side,
we
obviously
have
various
bits
and
pieces
which
can
be
tested,
but
they
they
often
share
properties.
So,
for
example,
for
something
which
wants
to
act
as
a
long-term
storage
that
would
obviously
need
prometheus
remote
write
receiver.
E
It
would
need
promises,
proncl,
endpoints,
to
to
to
query
data
alert
generation,
blah
blah
blah.
What
have
you
and
we
can
actually
start
combining
those?
So
the
thought
which
we
had
was
basically
that
vendors
and
projects
can
register
endpoints
and
ap
api
keys
and
such
where
we
can
from
the
test
suite
from
the
upstream
compliance
repository
trigger
regular,
builds
like
maybe
daily
or
something
so
we
catch
things
early,
not
on
every
single
commit
of
every
every
contributing
repository,
but
on
on
a
high
enough
cadence
to
catch
stuff.
E
If
something
goes
wrong
and
then
start
running,
for
example,
the
prometheus
agent
sent
to
a
receiver
do
a
minimal
test
through
the
published
fromql
endpoints,
pull
the
data
just
to
raw
data
and
see.
Did
the
correct
data
actually
make
its
way
into
the
promises?
Remote
right?
Receiver,
that's
als
already
a
little
bit
of
a
baseline
test
of
prom
care,
which
is
not
ideal,
but
it
it
would
as
certain
that
we
have
always
a
way
a
standardized
way
to
get
data
out.
E
But
then,
once
you
have
that
you
can
immediately
run
the
more
complicated
prom
ql
tests
for
actual
characteristics
of
prom
klu,
because
you
can
reuse
the
literal
same
data
and
the
literal
same
same
prompt
here
and
point
to
run
your
deeper
tests.
E
So
that's
that's
the
thinking
where
we
will
most
likely
end
up
and
also
we
would
love
for
people
to
submit
stuff.
So
we
can
so
we
can
try
and
put
this
into
a
harness
and
help
with
with
creating
this
harness,
and
I
feel
as
if
most
of
what
I
just
said
is
relatively
low
hanging
fruit
with
with
the
pipeline.
A
Yeah, Richard — I mean, we looked at the existing tests that are there, obviously, and ideally — our first thinking was: could we reuse what exists as a harness from the Prometheus project, enhance that, and then be able to reuse it for this Prometheus receiver until it's rewritten? Because, again, the idea for this receiver in the long term—
A
You
know,
focus
it
in
either
into
being
a
complete.
You
know
replace
in
replacement
with
the
upstream.
You
know
agent,
that
you're
talking
about
from
prometheus
or
or
that
we,
you
know
again
refine
this
receiver
to
do
exactly
what
it
says.
You
know
that
is
the
prometheus
pipeline
and
nothing
else
separate
out
the
other
general
service
discovery
and
scraping
into
a
different
component.
A
But
that
said
again,
those
are
all
you
know,
items
in
progress,
the
the
you
know,
and
we
can
do
that
when
we,
when
those
components
are
available,
but
in
the
meantime
at
least,
have
the
full
test
suite.
I
then
you
know,
set
up
and
and
the
harness
set
up
wherever
you
know
whether
we
need
to
add
it
to
both
projects
to
prometheus
as
well
as
here
for
the
time
being
and
then
yank
it
out
of
you
know,
obviously,
ideally
we'd
not
want
to
run
it
two
places.
A
If
you
can
use
the
same
test,
suite
that's
ideal
right,
so
wherever
that
test
harness
exists,
kind
of
modify
that,
but
today
the
assumptions
I
mean
from
our
looking
at
the
prometheus,
the
way
that
prometheus
is
testing
the
agent
right
now
is
that
it's
it's
a
lot
of
different
assumptions
that
that
are
being
made
in
that
testing.
Also,
and
it's
kind
of
difficult
to
run.
That
scenario
porous.
Did
you
want
to
go
into?
You
know
kind
of
what
the
the
setup
that
you
went
through.
A
Like
in
terms
of
the
prometheus
harness,
that's
running,.
I
Miss Finn, if you can scroll down to — yeah, this one. So we were looking at the other way of testing the Prometheus receiver. For that, we were trying to do an interoperability test, which is a system test.
I
So
in
that
we
were
planning
to
create
a
test
bed
and
this
that
test
that
could
create
data
through
the
either
through
the
standard
metrics
using
prometheus,
go
client
or
from
the
text
file
cases
that
could
be
positive
or
negative
cases,
and
then
it
would
get
served
onto
the
http
and
and
then
once
the
data
is
pulled
by
the
by
the
prometheus
receiver.
So
then
this
serve
would
stop
so
basically
like
in
this
dashboard.
I
We
would
have
a
config
file
that
would
create
a
collector
config
file
that
could
start
the
collector
the
pipeline.
Basically,
so
this
collector
pipeline
would
would
initiate
a
prometheus
receiver
and
then
like
if
we
skip,
and
this
receiver
could
transfer
the
p
data
to
a
custom
exporter
where
the
custom
exporter
does
nothing
apart
from
sending
apart
from
explosive,
exposing
this
data
to
http
and
then
from
there.
E
For clarity, when you say collector, what do you mean precisely? That's something I also couldn't be certain of. So what do you mean, precisely, when you say collector?
I
So
we
need
to
start
the
collector
pipeline.
Basically
in
this
test,
so
to
start
the
collector
pipeline,
we
would
need
to
generate
a
config
file,
so
in
config
file
we
could
give
like,
like
what
kind
of
receiver
we
want
and
like
what
processor
do
we
want
and
what
kind
of
exporter
we
want.
So
in
this
one
we
will
need
to
create
another
x,
custom
exporter,
basically
so
and
then,
and
in
our
config
we
would
choose
that
prometheus
receiver
sends
this
data
sends
the
p
data
to
this
custom
exporter.
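The pipeline described here — a Prometheus receiver, no processors, and a custom exporter that just re-exposes the pdata for validation — could look roughly like the following collector config. This is a hypothetical sketch: `testexporter`, the job name, and the ports are made-up placeholders, not real components.

```yaml
# Hypothetical collector config for the described test bed.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: testbed               # made-up job name
          scrape_interval: 1s
          static_configs:
            - targets: ["127.0.0.1:9090"] # the test's static metrics server

exporters:
  testexporter:                            # hypothetical validation exporter
    endpoint: 127.0.0.1:19090              # where it re-exposes received pdata

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [testexporter]
```

The test suite would then compare what it served on port 9090 against what the custom exporter re-exposes on port 19090.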
E
Because — I know I haven't been able to make these calls for some time, but didn't we, a few calls back, talk about a system where, as a first step, you simply expose static files? Basically, you take the OpenMetrics reference, because there you already have all the positive and false examples; just put them onto a dozen or a hundred different static endpoints, start a Prometheus instance, and scrape from those static files.
E
In
theory,
you
can
even
scrape
from
from
github
because
they
do
like
github
publishes
static
web
points
and-
and
I
made
certain
that
you
can
actually
scrape
directly
from
github
if
you,
if
you
so
choose
and
then
from
that
prometheus
instance,
simply
start
sending
through
promises
remote
right
to
whatever
promises.
You
wrote
right,
receiver
and
point
your
testing.
I
Yeah,
so
if
we
can't
find
the
data
through
prometheus
remote
right,
we
can
do
that.
So
in
this
example
also
we
are,
we
are
doing
the
we
are
serving
statically.
I
All
the
data
that
metrics
that
are
generated
are
served
statically
to
the
to
the
prometheus
receiver,
but
because
we
want
to
I
s,
because
we
want
to
test,
promote
this
receiver
and
nothing
else,
that's
why
like,
if
we
send
it
through
prometheus
remote
right
so
then
it
would
be
like
a
and
like
a
whole
pipeline
test
because,
like
we
wouldn't
know
like,
if
it's
a
failed
test,
then
the
data
failed
at
the
prometheus
receiver
or
it
failed
at
the
remote
right
component.
A
Yeah — the pipeline is slightly different, right, Richard? I mean, we can certainly use the OpenMetrics tests, but again, it's a question of how much of the pipeline we set up. As we said, we went back and forth, and I think that component-level, black-box testing is probably easier, with the assumption that, as you said, the data and the cases are generated through static files or otherwise.
E
I think I'm just not getting it — or my feeling is that what we're currently talking about is more complicated to create and to run reliably, versus a much more stripped-down and basically constrained pipeline. I mean, if you want to constrain it even more, you can take a pcap of the Prometheus remote write and just send that pcap on the wire — in theory — by just pulling in a Prometheus binary.
E
You
get
the
advantage
that,
if
anything
changes
within
prometheus,
you
just
get
get
updated
whatever
comes
out
at
the
other
end,
but
all
of
those
are
function
equivalent.
It's
just
it
feels
to
me
as
if
it's,
if
too
much
work
is
being
done
for
what
for
what
you
want
to
achieve.
G
Yeah — and do we also know how we are going to actually expose or generate this test data? I think, like Rich said, we can just host static endpoints on GitHub for the metrics, for the test cases, basically.
E
They are — they have been for ages, for over a year; there have been static GitHub URLs within the OpenMetrics proposal. But you can also just start a minimal web server within your CI, so you take away any potential breakage of the network between wherever CI is running and the GitHub endpoints. I just wouldn't bother with any custom exporter, or with trying to come up with any test data, which you mentioned.
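A minimal in-CI static server of the kind suggested here can be built with the Python standard library alone. This is a sketch under assumptions — the file name and the OpenMetrics payload are illustrative, not the compliance repository's actual layout:

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

def serve_directory(path, port=0):
    """Serve the files in `path` on localhost; port=0 lets the OS pick a free port."""
    handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=path)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Expose one OpenMetrics-style test case as a static scrape target.
tmpdir = tempfile.mkdtemp()
body = "# TYPE up gauge\nup 1\n# EOF\n"
pathlib.Path(tmpdir, "case_001.txt").write_text(body)

server = serve_directory(tmpdir)
url = f"http://127.0.0.1:{server.server_address[1]}/case_001.txt"
fetched = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

A scraper pointed at that URL sees exactly the bytes of the test file, which removes any dependency on external network paths during CI runs.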
E
But the thing is that we know the properties of those time series as well, so we can also start using them for other tests — because we know what their properties are, how they change over time, and so on — and we start reusing the same thing again and again for different scenarios. But we all have this one set, or these several sets, of standardized data which we can all draw from.
E
So
yes,
by
all
means
you're
more
than
welcome
to
to
just
draw
from
those
and
just
use
them.
G
Okay,
and
is
it
is
it,
is
it
easy
that
we
can
actually
add
test
data
there
if
we
want
to,
for,
for
our
test
cases
explicitly.
E
Yeah
I
mean
submit
the
pr
as
long
as
it's
valid
openmetrics
and
serves
observes
a
purpose
we
can
edit
sure.
Okay,
I
mean
we
have
our
own
tests
running
on
our
site,
blah
blah
blah
blah.
So
we
need
to
take
this
into
account
when
blah
blah
blah,
but
beyond
the
normal
interdependencies.
Yes,
absolutely.
A
I mean, Richard, we can definitely sync up and figure it out. Yeah — I agree; ideally, having a single source where all the tests are updated and set up is ideal, because everybody can use it then.
I really think it should be in a single source, but, Richard, we can figure it out, and we are happy to add it to OpenMetrics and keep that maintained.
G
And all the negative cases that we are targeting — they are already there, yeah. For the positive cases, I didn't check, because there are hundreds of them.
A
Next time when we chat, we'll walk through the gaps, because we have a list of gaps that we've identified from the tests that exist today already. Yeah, so—
E
If
you
have
gaps,
then
I
would
very
much
like
to
to
yeah.
No
absolutely
like.
That's
that's
perfect,
like
if
you
have
gaps
where
we
forgot.
Something
then
by
all
means
that's.
A
Yeah
that
I
mean
that's
the
idea
right.
We
just
want
to
make
sure
that
you
know
if
you're
looking
at
each
one
of
these
cases
and
conditions,
both
the
positive
and
negative
are
tested,
as
well
as
any
other
error
conditions
related
yeah.
So.
G
We are basically saying: try reusing the test data from the OpenMetrics suite.
A
Yes, exactly — okay, cool. Any other questions folks have? So—
E
I don't have a timeline; it's the usual "everything is happening all at once." I mean, if you want, you can use the Grafana agent. It has too many features, but on that data path it's functionally equivalent to what the Prometheus agent is, because initially the Prometheus agent is just a stripped-down Grafana agent — so you can literally use the agent as a stand-in. You can also spin up a full Prometheus.
G
Okay,
so
that
that
would
be
yet
another
orthogonal
way
to
you
know:
collect
prometheus
methods
outside
the
open,
telemetry
project
right.
A
Yeah
I
mean
we
have
looked
at,
I
mean.
Obviously
we
have
looked
at
rafana
claude
agent,
a
fair
bit
in
detail
earlier
and
and
but
it
is,
as
richard
said,
that
it's
it
has
every
a
lot
more
functionality
right.
So.
A
It complicates the pipelines. If you're testing in isolation, then you can validate the data streams, right? If you have too many other things — which is the problem with the Prometheus receiver in OTel also — there are just too many things going on in that code base.
E
If
yeah,
I
mean
you
don't
again
as
long
as
the
function
is
in
place,
I
I
don't
really
have
a
strong
opinion
on
on
how
that
function
is
implemented.
I
just
know
that,
if
I
would
be
doing
it,
I
would
probably
be
starting
with
the
grafana
agent
course
in
the
in
the
functionality
which
I
care
about
it's
precisely
equivalent
to
what
the
promises
agent
will
be.
So
it's.
E
Replacing
the
binary
once
once
the
other
binary
is
available
and
everything
else
is
is
equivalent,
except
for
the
fact
that
the
promises
agent
will
be
having
less
code
and
maybe
a
little
bit
quicker
because
it
has
less
code.
But.
E
Yes, it's listed on the prometheus.io overview, I think — let me check.
E
Thank you. GitHub is being slow.
E
But
again
you
don't
have
to
wait
for
this.
You
can
literally.
A
And
we
should
be
able
to
transpose
the
tests
you
know
to
to
right
to
the
right
location.
So
no
worries
I
mean,
I
don't
think
this
work
will
be
wasted
in
in
any
sense,.
E
Now
it's
like
for
the
agent
itself,
it's
literally
drop
in
replacement
where
you,
where
you
replace
the
one
binary
with
the
other,
but
they
are
functionally
equivalent
from
that
perspective,
one
thing
regarding
all
the
tests
which
you
find
in
the
openmetrics
thing.
If
it
was
me,
I
would
be
creating
a
single
like
per
file.
E
I
would
just
define
a
target
because
that's
easiest
to
to
handle
on
on,
like
you,
don't
need
to
manual
the
files
or
anything
you
can
just
have
one
target
per
per
file,
that's
leased
work
and
also
it's
the
most
flexible
when
new
stuff
is
added
and
such.
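The one-target-per-file layout suggested here could look roughly like this in a Prometheus scrape configuration — a sketch with made-up job names, paths, and port, not the compliance repository's actual layout:

```yaml
# One scrape job per static OpenMetrics test file (illustrative names only).
scrape_configs:
  - job_name: openmetrics_case_001
    metrics_path: /tests/case_001.txt
    static_configs:
      - targets: ["127.0.0.1:8000"]
  - job_name: openmetrics_case_002
    metrics_path: /tests/case_002.txt
    static_configs:
      - targets: ["127.0.0.1:8000"]
```

Each file then shows up as its own target, so adding a new test case is just adding a file and one stanza, with no editing of existing entries.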
D
No problem — I'm just getting caught up to speed on everything you all are working on. I've been focused on a different set of metrics things, and this is around metrics compatibility from the OpenTelemetry APIs and SDKs.
D
So we took a shot at refreshing some of the Prometheus exporters from the OpenTelemetry SDK, and what I wanted to do was get a specification together. I feel like this group has really done a great job of defining how Prometheus goes into OTLP and then back into Prometheus without loss, right? Yeah. What I want to do is figure out all the other things that OpenTelemetry does that Prometheus doesn't do the same way, and then how we get those mapped out.
D
So
I
took
a
first
crack
at
literally
it's
an
outline
which
is
like
what
needs
to
get
filled
out.
Some
dvds.
D
That
might
light
my
computer
on
fire,
but
yeah.
I
can
do
that.
D
Like
yeah,
I
get
I
here,
I
got
it
chrome
tab,
open,
telemetry,
yeah,.
D
Okay,
my
bad,
I
didn't
realize
that
happened.
I
wasn't
even
looking
at
the
tab.
Okay,
apologies,
okay,
so
basically
this
is
reverse
engineered
from
what
was
done
in
the
collector
around
compatibility
and
then
in
addition,
like
the
exporters
so
effectively,
there's
some
things
here
that
I
just
want
to
make
sure
that
I'm
writing
things
down
correctly,
that
we're
done,
but
I
just
took
a
crack
at
putting
together
the
shape
of
this
document,
so
prometheus
compatibility
section
around
how
our
data
model
gets
mapped
both
in
and
out
of
prometheus.
D
So there's a notion of how Prometheus metric points come into OpenTelemetry. We have three TBDs; I believe each of these has a bug associated with the mapping that needs to get addressed over time. One question I had: in our Prometheus compatibility discussion so far, are these part of the general tests, or are these considered advanced Prometheus features?
A
No,
these
are
part
of
the
general
tests
and
the
histogram,
for
example,
is
the
general
test.
The
stateful
set
is
also
implemented
at
this
point
josh.
So
I'm
not
sure
what
is
dropped
means
that
was
implemented
and
completed
with
the.
A
You know, we actually added all the StateSet support — Man and David.
D
Now, does that come in then as gauges, or how are we representing it? I'm curious — we can get into the details later; I need to go look it up — but there was a question of whether or not the OpenTelemetry protocol should directly support StateSet as a concept, but if—
A
It's
a
very
we
have
a
mapping
and-
and
we
can
walk
through
that-
perhaps
next
time,
where
or
or
just
you
know,
kind
of
share
the
documentation
with
you,
but
we
can
definitely
walk
through
it,
but
this
is
supported
now
fully
with
the
enhancements
that
we
made.
I
continue
the
pr's.
A
Histogram is supported — gauge histogram.
D
But Info is pretty common — Info is pretty common, so it's probably supported; we just need to figure out what it's supported with. Okay.
D
I can open an issue and assign it to you, I guess. The question is, I'd like someone to take a look at this and see how bad it is initially — again, I just took a first crack to get this moving. Okay.
D
—help write this. Like I said, the bit that I'm more focused on is some of these kinds of things around—
D
For histograms, OpenTelemetry does allow non-monotonic histograms, with non-monotonic sums in them. I think Prometheus also allows this — I think we're 100% compatible, as far as I understand. I tried to document some of the naming conventions for how we will actually do this mapping for the OpenTelemetry SDKs.
D
We will actually have an implementation of them, probably by the end of the year; we'll have a preview implementation by the end of the month, possibly, in some of our languages. And I want to figure out what we're going to do in Prometheus initially for that — or if we just drop them. This is like the notion of an HDR histogram.
A
Is Josh MacDonald doing the initial implementation, or are you doing it?
D
Josh MacDonald is doing the Go implementation, James Moses from Atlassian is doing the Java implementation, and I'm working on the Prometheus exporter for it. And then — I think Diego from Lightstep is doing the Python.
D
And
someone
from
microsoft
is
working
on
the
net
implementation,
so
those
are
our
first
set
of
implementations
that
were
yeah.
A
Because
we
had
added
the
c
plus
plus
exporter,
we
also
had
added
a
prometheus
python
exporter.
I'm
not
sure
what
diego
is
doing.
Is
he
adapting
it?
Because
I
know.
A
Are you following that, Josh — or Richard? Maybe we can have a more detailed discussion. I've been following the discussion on the Prometheus groups also, but I know Josh MacDonald has been working with Björn on that.
D
Yes, yes — and I have not been following that as closely. Mostly, what I'm trying to do right now is just get the folks in OpenTelemetry working on the SDKs to pay attention. I'm overloaded in meetings, so I haven't been able to pay attention to everything that I've wanted to — apologies there, but—
A
No,
no
that's
great.
I
mean
again.
This
is
super
helpful.
We
can
definitely
help
josh
because
you've
done
a
lot
of
the
you
know.
Initial
implementations,
where
we
added,
for
example,
we
had
added
the
summary
support.
You
know
and
go
and
in
other
languages
and
python
and
stuff
and
in
the
collector
so
again
collector,
I
think,
didn't
have
it
yeah
and
then
we
added
it,
but
my
point
being
that,
if
we
need
to,
we
can
help.
D
Yes, yeah. There's also — I didn't put it in the specification right now, but there is an issue with exemplars. Not an issue — there's a specification that needs to happen around exemplars between Prometheus and OpenTelemetry, and it's kind of subtle and interesting. Effectively, exemplars in Prometheus are kind of attached to a particular point value, and for histograms you have this less-than-or-equal-to bucket. So one question I have around Prometheus histograms:
D
Is
it
expected
that
the
exemplar
attached
to
a
particular
bucket
is
only
relevant
to
the
fracture,
the
bucket,
that's
not
covered
by
other
time
series.
E
It's relevant for the complete bucket. So if you have, let's say, one bucket le 10 and another le 20, and you have an exemplar on the le 20, it can be anywhere in the le 20. Okay — ideally; I don't think we mandate that it must be just in that slice. I would need to look up the specifics myself. Brian, do you know offhand — for the case where you have two buckets, le 10 and le 20, and you have an exemplar on the le 20?
D
No — so in OpenTelemetry we are doing reservoir exemplar buckets, where the exemplars are independent of the histogram buckets — or they can be — and so we have to do a mapping where we pick exemplars to go into particular buckets. What I'm suggesting is: you report to the closest non-exclusive bucket.
D
Yeah — both, actually, yes. You try to take the most recent. So basically, when we take the reservoir of exemplars from OpenTelemetry into Prometheus, we sample by time first — we try to grab the most recent exemplars — and then, if you have a bucket with less than 10 and another one with less than 20, and you have a value of nine, you'd put that in the less-than-ten bucket.
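The placement rule described here — sample exemplars by recency first, then put each value into the closest (smallest) `le` bucket that still contains it — is easy to state as code. A minimal sketch of just the bucket-selection step, assuming sorted cumulative bucket bounds; this is illustrative, not the Java exporter's actual implementation:

```python
import bisect
import math

def exemplar_bucket(bounds, value):
    """Return the `le` bound of the bucket an exemplar value maps to.

    `bounds` are sorted upper bounds, e.g. [10, 20]; values above the
    largest bound fall into the implicit +Inf bucket."""
    # bisect_left keeps values equal to a bound in that bound's bucket,
    # matching the inclusive `le` semantics.
    i = bisect.bisect_left(bounds, value)
    return bounds[i] if i < len(bounds) else math.inf

assert exemplar_bucket([10, 20], 9) == 10    # the "value of nine" example above
assert exemplar_bucket([10, 20], 10) == 10   # le bounds are inclusive
assert exemplar_bucket([10, 20], 15) == 20
assert exemplar_bucket([10, 20], 25) == math.inf
```

A full mapping would first sort the reservoir by timestamp (most recent first) and then apply this selection per exemplar.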
E
I mean, to be certain, read the spec — in case it disagrees — but that seems right.
D
Right, right — I just want to check before I write it down, as long as that makes sense. The goal is to get the exemplars most relevant to that particular bucket, from the most recent timestamp, and I can document an algorithm for that. That's what we implemented in the Java Prometheus exporter for exemplars, and I just want to make sure it's in line with how we expect this to get used in Prometheus.
D
My goal is to get some language written down for you, and I'll ask for a review just to make sure that we all understand it the same way. But the goal here is just to get the most relevant exemplars into Prometheus, and we have that issue where OpenTelemetry has, like, a bucket of them, and they're not attached to any particular point.
E
Yep, this one — yeah. My gut would be to not only send it on one, because you may lose data, in theory: if you transmit your metrics and your exemplars distinctly from each other, and you only transmit the exemplar ID once in your metric data, and you lose that packet—
D
So, related to that, though: how many exemplars can I attach? I can only attach one exemplar to a point, right?
E
I
mean
in
theory
it's
a
free
text
field,
but
please
don't
yeah
and
also,
of
course,
that's
something
which
apparently
causes
confusion.
Exemplar
is
defined
as
both
a
span
and
a
trace
id,
so
you
can
do
both
at
the
same
time,
you're
not
limited
to
one
or
the
other.
I
don't
know
why,
but
it
seems
there's
regular
confusion
around
this
point.
D
Sorry — the labels for trace and span: are those specified? Because I did not see that when I read through the spec.

E
They're not specified. The reasoning is that we did not want to prevent innovation from happening — exemplars are still suitably undefined within the wider open-source ecosystem, and we didn't want to block anything.
D
Okay
cool,
so
we
will
we'll
probably
specify
when
open
telemetry
has
span
any
trace
id
that
it
that
it
must
go
into
this
field
for
now
and
then,
if
we
need
to
loosen
that
restriction
later
to
innovate,
that's
fine!
The
one
last
thing
in
open
telemetry.
We
have
this
notion
of
aggregated
away
attributes
in
exemplars.
D
So
when
a
when
a
time
series
gets
some
sort
of
you
know
rewrite
rule
or
something
that
removes
labels,
we
try
to
preserve
it
in
the
exemplar.
Is
that
a
use
case?
We
should
try
to
promote
in
our
prometheus
compatibility
layer,
or
should
we
just
drop.
K
D
Yeah, agreed, agreed. And if you record the ids, you should tie them to a thing that has events on it — like, that's what... events. So anyway.
D
Right. So is it better to just specify that trace id and span id must go, and the rest are kind of optional, up to an implementer — or should we not send them? Because if we by default try to send all of them, we could break when we hit Prometheus, because our label is too long.
E
I would tend to do the same. You don't have to worry about breaking Prometheus, but you will be outside of the OpenMetrics specification, and so every compliant endpoint must reject you and you can't get any data in. Related FYI: this is currently in motion / in discussion within the Prometheus team.
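To make the label-length concern above concrete, here is a small sketch — not from the meeting itself — of what an OpenMetrics exemplar carrying both `trace_id` and `span_id` labels looks like on the wire, together with the OpenMetrics rule that the combined label names and values of an exemplar must not exceed 128 UTF-8 characters, which is exactly what makes "send every attribute" risky. The function name and structure are illustrative:

```python
# Illustrative sketch: render an OpenMetrics sample line with an exemplar.
# The exemplar label names (trace_id / span_id) are conventional but not
# mandated by the spec, which leaves exemplar label names unspecified.
def render_exemplar(sample_line: str, labels: dict, value: float, ts: float) -> str:
    # OpenMetrics: combined length of exemplar label names and values
    # must not exceed 128 UTF-8 characters.
    if sum(len(k) + len(str(v)) for k, v in labels.items()) > 128:
        raise ValueError("exemplar label set exceeds 128 characters")
    label_body = ",".join(f'{k}="{v}"' for k, v in labels.items())
    return f"{sample_line} # {{{label_body}}} {value} {ts}"

line = render_exemplar(
    'http_request_duration_seconds_bucket{le="0.25"} 3',
    {"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
     "span_id": "00f067aa0ba902b7"},
    0.175,
    1634083200.0,
)
```

With both ids present the label set here is well under the 128-character budget; adding arbitrary extra attributes is what would push it over.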
E
That being said, there are valid use cases. It's just something where we don't know what we as the Prometheus team want — whether we want to have an opinion. If we have an opinion, it will be part of the test suite in one form or another. But we don't know; it's just something which came up recently, so FYI, we're talking and thinking about this.
D
Cool,
if
you,
when
you
come
to
conclusions,
if
you
could
just.
E
Yeah, I mean, if we discuss it, it's going to be in the dev summits anyway, so anyone can join, anyone can watch the recording or look at the meeting notes as to whether we make this a thing. I don't think we have formal consensus, but my own plan is to basically add it to the test suite with information, or a warning, that in X amount of time things will actually start biting and be required — if we intend to make them required — to give everyone a fair chance to adopt the requirements.
A
I opened three tracking bugs on the Prometheus group backlog, just on supporting StateSet, Info, and gauge histograms.
A
What I'll do is verify the implementations, or lack thereof, in the collector implementation as well as the SDKs, where this functionality should technically exist, right? So...
A
Yes
agreed
agreed,
I
mean
I
also
would
like
to
deprecate
some
of
the
exporters
that
are
hanging
around
some
of
the
sdks,
because
I
think
that
you
know
one
of
the
things
we've
discussed
and
this
has
come
up
earlier.
Also,
we
have
a
tracking
issue
on
this.
Is
that
over
time
again,
really
you
know,
as
we
finalize
or
optimize
the
pipelines.
If
you
will,
for
you
know
where
full
exporters
should
exist,
they
should
exist
versus
remote
right.
A
You
know
just
being
done
through
the
collector
again
ensuring
that
any
additional
components
that
have
been
added
over
time
are
deprecated.
A
So
just
I
think,
that's
another
loose
item
josh
that
you
know
we'll
look
at
from
an
sdk
perspective.
A
All
right,
I
think,
you're
at
time,
so
this
is
awesome,
awesome
discussion
and
thank
you
again,
josh
for
joining
and
richard
and
everyone
thanks
have
a
good
one.
See
you
next
week,
bye.
B
I see that the challenges section suddenly expanded quite a bit. I haven't had a chance to review it yet.
F
Yeah, so the main thing I'm planning to do is talk about the two sections in the plan doc that we had on our to-do from last week, and if you have any other topics that you want to discuss, please add them to the agenda.
F
All
right,
let's,
let's
get
started.
F
So
I'm
gonna
share
my
screen
and
just
walk
through
the
things
that
I
filled
in
for
for
the
challenges.
F
So
I
think
you
know
I
originally
copied
these
sections
from
the
blogging
plan
planned
out,
and
so
they
had
challenges
and
missing
functionality.
I
think
in
our
case
they
overlap
quite
a
bit.
I
think
kind
of
the
overall
theme
is
that
there's
no
like
good
data
model
for
it,
representing
the
data
from
client-side
telemetry
or
like
no
standardized
data
model.
So
that's
kind
of
that's
represented
in
all
these
challenges
that
I
listed
on
a
lot
of
them.
F
But
let
me
let
me
walk
through
them,
so
so
I
think
we
had
already
ordered
discussions
about
like
the
the
no
duration
spans
and
long
running
spans.
So
there
are
lots
of
different
types
of
data
in
client-side
telemetry,
and
some
of
them
do
not
exactly
fit
the
the
existing
the
existing
signals.
Like
you
know,
there
are
certain
events
like
timing,
events
or
network
change
or
random
errors
that
happen
in
the
ui.
That
may
not
necessarily
be
connected
to
the
span.
F
F
F
The other thing that I've seen discussed around this is, on the other hand, representing things that take a really long time. So if we, for example, agreed on representing a session, or a page view and page view duration, as a span, then sometimes these can last a really long time.
F
It
could
be
hours
days,
so
it
becomes
with
the
current
sdks.
It's
very
you
know
it's
pretty
much
unpractical
impractical
to
to
capture
them
or
wait
for
their
end
before
we
capture
them.
M
But
a
page
view
is
distinct
from
a
session,
though
right
because
a
page
view
a
session
could
be
comprised
of
a
bunch
of
different
page
views
correct.
Yes,
there
are
two
different
there's
two
slightly
different
correct
things
there,
although
obviously
they're
quite
related,
but
yeah.
Definitely
yeah.
F
Yeah,
so
so
I
was,
I
was
going,
I
was
read
through
the.
I
was
going
again
through
the
proposal
from
aw.
Aws,
that's
been
around
for
a
couple
months
now
and
there
you
know
there
have
been
lots
of
different
ideas
and
opinions
about
this.
So
like
one
of
the
one
of
the
comments
was
that
anything
can
be
basically
represented
as
spans,
including
the
session.
M
Before
you
move
on
martin,
I
do
I
do
see
something
I
think
that
is
maybe
missing
from
here
from
the
long
list.
It's
something
that
we've
been
experimenting
with
at
splunk
and
that
is
kind
of
identifying
the
source
of
the
data
within
the
application
or
or
web
or
web
page.
M
D
M
M
Okay — we call that "component" at Splunk, but, I mean, the name...
M
It
could
be,
it
could
be
a
ui
component,
but
it
could
be
a
background
task
right.
Some
of
these
things,
like
especially
on
mobile,
being
able
to
distinguish
whether
something
was
generated
by
like
the
user
interaction
or
generated
by
the
actual
like
background
tasks
in
the
application
is
very.
It
can
be
very
useful,
especially
when
there's
errors
generated.
F
Yeah,
I
actually
did
think
about.
I
didn't
think
about
components,
but
I
did
think
about
you
know
identifying
pages
like
the
like,
for
example,
the
url
of
the
page
that
you're
on
so
like
page
view,
would
have
to
have
like
a
common
attribute.
F
You
know
like
a
semantic
semantically
defined
attribute
like
for
for
the
url
that
so
you
can
you
can.
You
can
like,
because
one
of
the
use
cases
could
be.
For
example,
you
wanna
see
like
which
page
views
have
really
high
durations
or
hit
really
high
load
times.
F
Yeah
yeah,
I
didn't
list
it
because
I
thought
there
would
be
just
like
a
semantic
convention
that
eventually
we
would
obviously
go
through,
but
okay,
okay.
So,
let's
move
on
user
interactions,
so
their
kind
of
user
interactions
are
kind
of
key
key
key
measurement
in
client
side.
Obviously,
that's
what-
and
it
is
currently
not
clear
there
might
be
some
challenges
with
how
to
represent,
represent
these
interactions
and
how
to
measure
them
they
could
be.
You
know
that
you
could
potentially
just
measure
capture
them
as
events
that
you
know
user
clicked
on
something.
F
So
that's
an
event.
You
could
also
represent
them
as
spans,
which,
which
is,
I
think,
what
the
js
open,
telemetry,
js
instrumentation,
does
right
now
and
then,
like
the
duration
duration,
since
this
is
has
a
duration
and
the
duration
represents
kind
of
the
effect
of
the
the
interaction
that
you
know.
I
I
don't
know
how
it
is
actually
on
on
the
mobile
side,
but
in
in
browser
it's
it's
a
little
tricky
how
to
measure
that
kind
of
duration,
because
there's
no
known
end
to
them.
L
F
L
Okay — for, you know, what is a user tap or a click or something, a span can affect what gets captured and associated with it. If you have, like, a button-reaction kind of method that then spawns a bunch of other things, you can at least chain them together if you're using a span versus an event. But it requires a lot of instrumentation that isn't necessarily something you get automatically.
N
This
thing
has
multiple
names
like
being
able
to
link
certain
subsequent
events
or
spans
with
the
user,
interaction
and
measure
the
let's
say,
the
entire
duration
of
the
things
that
happen
because
of
user
click.
This
button
yeah
exactly.
B
Okay, moving on — sorry. If we want to consider the ripple effects of that event, don't we think pretty much all of the events have some actions coming out of them? I mean, I'm not sure, but let's say you mentioned the network change.
B
If,
let's
say
your
app
responds
to
the
network
change,
don't
you
want
to
capture
that
timing?
There
like
it
depends
on
your
your
plan
on
what
all
do
you
want
to
fit
into.
L
Yeah, but it would be valuable to know if that user interaction was the result of a push notification or not, yeah.
J
But a cold start is not only due to a push notification; basically, it could be that the app wasn't purged by the operating system.
B
Yeah. So in general, then, if we want to include the chain of events that results from that initial event, then I think representing that as a trace in itself — one that starts and ends entirely on the client — would be appropriate.
B
I
think
one
example,
I
think
john,
you
showed
a
demo
last
time
where
the
activity
and
the
fragments
how
they
load.
Of
course,
it's
not
a
result
of
user
interaction,
but
it's
it's
a
trace
that
ends
starts
and
ends
on
the
client
itself.
M
Yeah,
although
there
it's
it's
often
very
tricky
to
catch,
to
trap
the
causality
properly,
especially
when
things
can
be
triggered
on
the
background
threads
and
then
you
know,
however,
many
operations
of
back
background
things
that
can
happen
or
oh,
but
I
did
also
think
there's
another
another
trigger
for
for
things
to
happen,
and
that
is,
if
you
have
a
websocket
open
and
data
comes
in
on
the
websocket
that
will
trigger
trigger
things
to
happen.
F
There
could
also
be
messages-
the
reverse
stress.
F
So
I
think
all
those
things
are
possible,
but
I
do
want
to
just
call
out
that
user
interactions
are
definitely
key
because,
like
for
client,
client-side,
telemetry,
obviously
what's
what's
unique
about
it.
Is
that
the
interactions
with
from
the
user,
which
it
like
consists
of
a
lot
of
use
cases
you
want.
You
would
want
to
measure
or
look
at
with
clan
clan
cytometry,
okay
sessions,
so
talk
about
sessions
they
like
in
the
client
side,
monitoring
tracking
user
activity
across
multiple
uis
and
interactions.
F
So
yeah
there's
I'm
guessing
the
easiest
way
to
represent.
These
would
be
just
adding
adding
a
common
attribute
with
an
id
across
everything
that's
collected
on
the
in
the
session,
but
there
were
also
ideas
of,
like
I
mentioned
earlier,
about
representing
that
as
a
spam
or
event
or
some
kind
of
a
new
event
that.
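The "common session attribute" idea above can be sketched in a few lines: hold one session id and stamp it on every telemetry item recorded while the session is active. The attribute name `session.id` is an assumption here, not a settled semantic convention:

```python
import uuid

# Minimal sketch of the common-session-attribute idea discussed above.
# "session.id" is a placeholder attribute name, not a finalized convention.
class Session:
    def __init__(self) -> None:
        self.session_id = uuid.uuid4().hex

    def stamp(self, attributes: dict) -> dict:
        """Return a copy of the attributes with the session id added."""
        out = dict(attributes)  # leave the caller's dict untouched
        out["session.id"] = self.session_id
        return out

session = Session()
span_attributes = session.stamp({"http.url": "https://example.com/checkout"})
```

In a real SDK this stamping would live in something like a span processor or log/event pipeline hook, so that instrumentation code never has to pass the id around explicitly.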
L
It's just usage, or a particular action that's taken, you know. And if you want that to be configurable, I think it's easier to just apply an attribute to everything that gets generated during a period of time, versus trying to wrap it all in a giant span. I think that will also be easier in terms of processing.
B
Okay, let's move on to sampling. The "multiple UI" term that's highlighted — what does it refer to? It's not multiple tabs, right?
F
It's — so in the browser it would be multiple page loads, but within the same...
L
The same question could be asked for multiple apps, right? Like if you have an iPad app and an iPhone app, and somebody is doing something on the iPad app and then goes over to the iPhone app, they might pick up what they were doing. But yeah, it seems like a really complicated use case — that sounds scary; maybe we shouldn't worry about it — but it's kind of similar to cross-tab behavior.
F
Yeah
when
it's
an
interesting,
interesting
thing
to
just
talk
to
think
about,
because
because
you
know
often
like
tracking
sessions
is
done
through
like
in
browser
like
through
cookies
and
then
that
spans
tabs
as
well.
M
Interestingly,
at
least
on
android,
if
you
have
web
views
the
web
view,
cookies
also
span
application
like
instances.
M
F
Yeah, that's a good point, Grace — if you think of a session as a user sitting down and interacting with the application, then they could very well have multiple tabs.
B
Okay,
so
the
next
thing
I
want
to
do
sorry
again
sure
so
in
the
session,
can
we
explicitly
mention
that
the
since,
if
we
use
the
id
as
an
attribute
on
the
spans,
then
we
are
not
explicitly
calling
out
when
the
session
starts
and
when
the
session
ends
right.
That's
for
the
back
end
to
derive.
L
So
you
could
detect,
you
know,
expand
id
changes
and
then
create
an
event.
B
So
so
then
you
would
need
an
explicit
event
for
this
start
and
end,
then,
because
I
thought
I
thought
you
mentioned
somebody
mentioned
that
that
could
be
sampled
out,
so
it's
better
to
not
be
explicit
about
it.
M
The
way
we've
been
handling
this
at
splunk
is
when
a
session
starts
up.
We
are
it's
basically
I'm
talking
about
android,
not
talking
about
web,
so
android
we
create
a
little
initialization
span
that
describes
the
initialization
of
the
library
where
the
session
id
is
created
and
then,
when
the
session
id
rolls
over,
like
we
have
a
limited
four
hours
on
the
session
id
and
then
we
generate
a
new.
We
generate
a
little
event
when
the
session
id
rolls
over
that
can
link
session
ids
from
the
previous
session
to
the
new
one.
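The rollover scheme just described can be sketched like this: mint a new session id once the current one exceeds a maximum age (four hours in the Splunk example above), and emit a small event linking the old id to the new one. The event name and field names are hypothetical, and an injectable clock makes the rollover testable without waiting four hours:

```python
import time
import uuid

# Sketch of session rollover with a linking event, as described above.
# Event shape ("session.rollover", field names) is illustrative only.
MAX_SESSION_AGE_SECONDS = 4 * 60 * 60  # four-hour limit from the example

class RollingSession:
    def __init__(self, clock=time.monotonic) -> None:
        self._clock = clock
        self._started = clock()
        self.session_id = uuid.uuid4().hex

    def current(self, emit_event) -> str:
        """Return the active session id, rolling it over if it is too old."""
        if self._clock() - self._started >= MAX_SESSION_AGE_SECONDS:
            previous = self.session_id
            self.session_id = uuid.uuid4().hex
            self._started = self._clock()
            emit_event({
                "name": "session.rollover",      # hypothetical event name
                "previous.session.id": previous,
                "session.id": self.session_id,
            })
        return self.session_id
```

Because the event carries both ids, a backend can stitch consecutive sessions together without either side ever announcing an explicit "session start".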
M
Well — so I guess we're going to talk about sampling. We do no sampling, but we can't do that on devices, because if the network is offline for a long time, we can't infinitely accumulate spans. We actually do limit the number of spans that sit on the device, and we're actually trying to figure out right now what we can do to prioritize dropping those spans when we do need to drop them.
M
So
it's
not
sampling
in
the
traditional
sense.
But
we
do
have
this
offline
use
case
and
I
think
that's
also.
It
can
happen
for
browser
browser
applications
also,
but
the
offline
use
case
of
how
much
data
to
save
and
how
to
how
to
drop
data.
When
you
run
out
of
room
or
run
out
of
your
linux,
something
we
should
probably
think
about.
O
For me, the question here has been: what is the right thing to sample? Is it sampling sessions — whole sessions — or is it sampling events within a session? In theory, the instrumentation could decide, if we wanted to sample a whole session out, but it would change the analytics overall.
J
I think that actually depends on what you want to see from a session. So if you have any events that are useful for you — for instance, taking mobile as an example, we have crash analysis, and we would like to see the actions that were occurring before the crash.
F
So I guess this is where it comes back to the first thing: representing certain things as spans — spans are by nature sampled, or intended to be sampled. So if you want to capture things like page load times, or even interactions, then should we also be generating metrics, in case the spans get sampled out?
B
Do you want to write down a line item on prioritization of spans to sample — you know, the offline case? This one, yeah — it's not just limiting, but also picking which ones to drop.
L
Yeah, some sort of scoring system would be valuable, and making it customizable as well would be good.
F
So, when I was writing this section down, I was thinking more of the use case of sampling sessions — the whole session — which, again, is difficult on the client side: because of the distributed nature of client-side instrumentation, it's difficult to make sampling decisions. The only things I can think of are: either you don't do any sampling — you capture everything and the collector does the sampling downstream — or you have to provide some sort of sampling decision, like when the session starts.
N
The
presence
of
collector
as
such,
even
is
an
interesting
concept
to
this,
maybe
highlight
briefly,
is
that
in
at
least
in
splunk
around
deployments.
As
of
now,
there
is
no
collector
anywhere
the
agent
speaking
directly
with
ingest
back
in
backhand,
because,
like
the
presence
of
collector,
it's
it's
kind
of
difficult
to
reason.
If,
if
your
app
is
public
in
a
public
domain,
it
is
probably
the
worst
if
it's
kind
of
in
your
internet
or
in
a
protected
region
used
only
internally,
but
the
collector
in
in
in
ram
site.
F
Yeah,
I
actually
thought
you
know.
I
thought
that
was
was
sort
of
the
opposite
in
in
that,
and
I
don't
know
how
you
exactly
handle
this,
but
since,
like
web
applications
are
basically
publicly
open.
F
Then
then,
like
things
like
exposing
certain
secrets,
like
like
api
keys
or
is
a
is
potentially
an
issue
when
you
want
to
send
send
data
to
a
certain
a
certain
back
end
that,
like
the
ingest
back
end
but
like
with
the
collector,
then
the
customer
can
spin
up
their
own
collector
instance
and
handle
that
themselves.
Like
all
the
secrets
secrets
themselves,.
L
If you have a client-side agent, it sends to a client-side collector, which kind of protects everything else from the garbage coming in — because basically anything coming out of any of these agents needs to be distrusted, right? The people who are running these web applications or mobile devices are not trusted sources, even if they install a trusted application with a trusted agent in it.
L
That's
the
that's
the
problem,
because
you
know
anybody
can
jailbreak
their
phone
or
whatever
and
then
really
do
nasty
things
to
it.
F
Yeah,
so
I
don't
know
if
you
need
to
do
anything
here,
but
I
just
wanted
to
call
out
is
like
what,
if
you
do
need
to
address
sampling,
how?
How
do
we
go
about
about
it?
So.
F
Okay,
I'm
gonna
move
on
the
other
topic
that
I
had
analyzed
is
kind
of
just
defining
common
data
model
for
for
different
types
of
client-side
devices.
F
So
I
know,
for
example,
like
browser
has
very
unique,
unique
concepts
to
itself.
Like
page
the
page
page
view
page
load
times
and
mobile
has
their
own
things.
So
when
we
talk
about
like
semantic
conventions
or
defining
like
these,
the
data
model
like
how
far
do
we
want
to
go
with
defining
a
common
data
model
that
works
for
all
these
devices.
L
I think they all have common, generic loading kinds of parameters — you know, you have the first paint, the paint after data is received, those sorts of things. If you want to go into exactly what's getting executed in the data loads, then they're all different, but...
L
No, no — neither does iOS. But conceptually, the problem is there's no way to really hook in to know exactly when anything is actually loaded, so it's definitely going to require user intervention to say: okay, my view controller is fully loaded now; all of the network requests that I made to populate —
L
It
have
finished
and
I
can
flag
that,
like
I
said,
but
you
know
it's
a
valuable
concept,
but
I
don't
know
if,
if
that's
something
that
browser
actually
knows
about
like
there
are
events
for
that
stuff.
But
but
I
think
that's
what
people
want
when
they're,
when
they're
trying
to
instrument
their
mobile
applications
as
they
want
to
know
that
those
those
points
in
time
just
like
on
browser
yeah.
M
Yeah, there's another very common use case also, where, for example, a mobile app will use a web view for the login flow — to share that login with, you know, an actual web app. Making sure that you have instrumentation that will work across that flow is, I think, something we need to consider.
L
Yeah
regarding,
regarding
all
these
hybrid
applications
or
hybrid
sdks,
the
the
issue
is:
there's
not
going
to
be
a
single
solution
for
all
of
them,
they're,
essentially
going
to
need
to
all
be
their
own.
You
know
open
telemetry,
sdks,.
L
I feel like if we find a generic data model — along the lines of first load, final paint, that sort of stuff, whatever the proper terminology is — then I think we can have a shared idea of how everything should work. There's also session —
M
Session sharing — for example, this is something we're just starting to figure out at Splunk: if you open a webview and you have JavaScript instrumentation in there, how do you make sure that it shares the session with the underlying mobile application?
F
So
it's
point
out
that
in
in
that
proposal
from
aws
again
like
they
recognized
this,
I
think
by
and
try
to
solve
it
by
by
essentially
capturing
you
know
this.
It
was
in
the
context,
context
of
capturing
new
signal,
which
was
the
ram
data
event,
but
each
event
would
have
its
own
definition,
so
you
could
have
you
know
as
the
client
side
application
is
streaming
events
that
happened
in
the
that
happens
then,
like
each
event,
has
some
kind
of
you
know,
class
or
definition.
F
So
that's
that
may
be
unique
to
to
that
environment.
F
But
yeah
I
don't
know
like
if
on
the
backhand
or
like
the
on
the
visualization
side,
if
these
overlap
enough
that
you
would
want
to
have
like
the
same
or
similar
visualizations.
N
Yeah
I
added
resource
sections,
so
maybe
maybe
I
should
yeah,
at
least
in
browser
side
of
things
open,
telemetry
semantic
conventions
does
describe
resources
which
should
reflect,
among
other
things
like
the
underlying
resources
utilized,
such
as
operating
system,
for
example,
or
in
our
case,
if
you
look
at
the
browser's
side,
the
browser
vendor
and
version,
as
well
as
the
client
type
b
which
later
can
be
used
for
geo
ip
lookup
to
kind
of
locate
the
user
on
the
on
the
planet.
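The resource attributes just mentioned amount to a plain attribute map attached to all telemetry from the client. In the sketch below, `os.type` and the `telemetry.sdk.*` keys follow OpenTelemetry resource semantic conventions, while the browser-specific keys are placeholders for conventions that were not finalized at the time of this discussion:

```python
# Illustrative browser-client resource attributes, in the spirit of the
# discussion above. Browser-specific key names are placeholders.
browser_resource = {
    "telemetry.sdk.name": "opentelemetry",
    "telemetry.sdk.language": "webjs",   # value used by the OTel JS web SDK
    "os.type": "linux",                  # underlying operating system, when detectable
    "browser.vendor": "Mozilla",         # placeholder key: browser vendor
    "browser.version": "93.0",           # placeholder key: browser version
}

def describe(resource: dict) -> str:
    """Render the resource as a stable, human-readable summary line."""
    return ", ".join(f"{k}={v}" for k, v in sorted(resource.items()))
```

Note that the client IP mentioned for geo-IP lookup is usually observed by the receiving endpoint rather than self-reported in the resource, which is one reason it sits awkwardly in a client-written attribute map.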
J
Would it make sense to define certain attributes there? I'm considering mobile applications, where definitely the bundle identifier, version strings, and such would make sense to send along with the data.
M
Another thing to ponder: in this case — unlike servers, which might be running for days, weeks, months — client-side sessions do tend to be significantly shorter, and so we have an opportunity to describe in a lot more detail what happens when things are initialized, and to send extra information, like really detailed resource information, at that point in time.
B
So
is:
is
there
like,
when
you
talk
about
the
headers
you're
talking
about
the
http
headers,
when
the
otl
on
the
otlp,
when
the
when
the
a.
B
We should be fine, but I'm not aware, and I'm not so familiar with it anyway — so does anyone know if that really happens?
L
The other issue to look out for, if we go that route, is differentiating between browser applications and mobile applications. When you send a data packet, it's going to have a user agent, and for a mobile device it's going to say something vague like "Mozilla something something", so it's difficult to tell: is this data from a mobile app or from a browser?
N
I just wanted to highlight this — it's also something that's not really well defined for us. And then I also added the next section: causality. It's probably actually a subsection of user interactions — it's just something to decide, whether we should address this causality, or zoning, or however it's called: being able to effectively determine whether this event happened because the user clicked this button or not. But it's probably not a topic on its own; it's in the context of user interactions.
M
Just to call out: causality gets super interesting also when you start talking about web sockets. Something might cause the initialization and opening of the websocket, but then that thing can end up causally disconnected from the user interaction that initiated it, especially once data starts flowing back over the wire.
F
Yeah — from my experience, I can say that tracking causality can be very difficult, and it may have a lot of overhead, too. Essentially, it either —
F
It
either
is
similarly
similar
to
tracing
like
where
you
or
you
or
you
would
have
to
provide
some
kind
of
api.
That,
like
like
certain
frameworks,
can
can
give
you
more
information
about
the
effects
of
of
of
the
interaction.
F
Okay, we're almost out of time, so let me go through this really quickly. I just wanted to call out that the client side has unique use cases, and I don't know how much of this we need to define. There are some use cases similar to backend applications, like metrics for load times or tracking errors, but there are also unique ones that are specifically about tracking user behavior.
F
You
know
you
want
to
know
how
users
navigate
the
ui.
What
do
they
interact
with,
and
you
know,
maybe
even
like
correlate
the
the
the
user
behavior
apart
with
the
performance
part.
F
So
how
does
I
mean?
I
think
that's
for
like
for
customers
from
customers
perspective?
This
is
this
is
pretty
much
key
understanding
like
if
I
make
a
change
to
my
application,
like
what
kind
of
effect
it
has
on
user
behavior
right.
F
So
so,
just
making
sure,
as
we
talk
about
like
these,
these
different
data
types
like
I,
I
wanted
to
call
this
out
to
make
sure
that,
like
this
can
cover
these.
These
are
these
use
cases.
F
And is that something that's new to OpenTelemetry? Does OpenTelemetry do anything like that right now?
M
Not in any of the discussions or specs I've seen, but I think it is something that we should at least call out — even if we don't describe it in detail, because every business case is going to be different — like a standard way that those should be transported. Are they events, or are they...
F
Okay, and the last one that I had: there could be a lack of browser or environment support for certain things that we may need. The one that just came to mind was something that was mentioned in the Slack channel: if you wanted to use the traceresponse header, then right now browsers do not expose that header — the response headers — for certain resources.
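The limitation described here is largely a CORS one: even when a server sends a traceresponse header, a cross-origin fetch can only read it if the server also lists it in Access-Control-Expose-Headers, and resource loads the page makes on its own (images, scripts) expose no response headers to script at all. A server-side sketch, with a hypothetical helper name and an illustrative header value:

```python
# Hypothetical helper: attach a traceresponse header to an HTTP response and
# opt in to exposing it to cross-origin browser scripts. The header names come
# from the W3C Trace Context response draft and the CORS protocol; the value
# format shown (mirroring traceparent) is illustrative, since the draft was
# still changing at the time of this discussion.
def expose_traceresponse(headers: dict, trace_id: str, span_id: str) -> dict:
    out = dict(headers)
    out["traceresponse"] = f"00-{trace_id}-{span_id}-01"
    # Without this, a cross-origin fetch() cannot read the header above.
    out["Access-Control-Expose-Headers"] = "traceresponse"
    return out

headers = expose_traceresponse(
    {"Content-Type": "application/json"},
    "4bf92f3577b34da6a3ce929d0e0e4736",
    "00f067aa0ba902b7",
)
```

Even with this server-side opt-in, headers on document-initiated resource loads remain unreadable from JavaScript, which is the gap in browser support being raised here.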
F
Yes, yes — okay, let's push it out to next week.
B
Especially since I was researching the trace response, I got a conflicting understanding. I initially thought — I mean, the spec for the trace response changed: it initially included a parent span id, but it was later removed. And I was also trying to see whether there was any effort made toward being able to read that header in JavaScript, but, you know, given that you can't read headers for all requests...
F
So, going through the missing functionality and this trace response — does that give us enough agenda for next week, or should we add some other things?