From YouTube: 2022-08-03 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A
I put some agenda items in and I'd be glad to start talking about them.

A
Yeah, okay, thanks. Well, I had one new topic that I wanted to bring to this group and hopefully get some attention from Prometheus developers.
A
The first one listed in the agenda is an old issue that was filed a while back. The question is: can we define a standard for SDKs to support recycling or garbage collection of memory when they are operating in cumulative mode? There is guidance from the OpenMetrics specification on this, but I think it's missing a little bit of detail that we would need in OpenTelemetry, and it comes down to a question about start times. We have start times in OpenTelemetry, which are meant to resolve ambiguity about resets and rates at the restarts of a process.

When you change your SDK to support recycling memory, there becomes a point where you may have to recreate a time series that was once ejected for staleness, and the question is how we begin to make changes of this nature. Well, first of all, vendors that support delta temporality might suggest going that direction first; but if you're a cumulative-temporality consumer and you want to offer the SDK a relief valve for memory buildup, then usually that takes the form of a timeout, at which point you can eject the time series from memory. Now the question is: when I restart and the same time series comes back, I want to use a new start time.

The new start time should be some point after the series was ejected from memory and some point before the new observation. It turns out that free variable is not really clear: how should we choose it? You could be arbitrary. You could choose the point of the last collection, or you could choose, you know, the timestamp minus one. Every one of these paints a different picture of the rate when you draw that function. Of course, it is ambiguous what the rate of a single point is, so that's okay, but I wanted to see if we could get some formal guidance from Prometheus of any sort; and if the matter of start time doesn't matter for Prometheus, then we would still like some OpenTelemetry SDK guidance. So it could be something like...
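Since the transcript stays abstract here, a minimal sketch may help pin down the two candidate choices mentioned (the time of the last collection, or the observation timestamp minus one). This is illustrative Go, not the OpenTelemetry SDK's actual API; `series` and `recreateSeries` are hypothetical names.

```go
package sketch

import "time"

// series is a hypothetical in-memory representation of one cumulative stream.
type series struct {
	startTime time.Time
	value     float64
}

// recreateSeries picks a start time for a series being re-admitted to memory
// after it was ejected for staleness. Both candidates are valid under the
// constraint "after ejection, before the first new observation"; they just
// paint different rate pictures.
func recreateSeries(lastCollection, obsTime time.Time) series {
	// Option 1: anchor at the last collection. For a single scraper, no
	// reader can have seen this series since then, so the interval
	// (lastCollection, obsTime] is unambiguous.
	start := lastCollection

	// Option 2 (alternative): obsTime minus a nominal epsilon, which
	// concentrates the whole first increment into an instant:
	//   start := obsTime.Add(-time.Millisecond)

	return series{startTime: start, value: 0}
}
```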
B
I don't have a good answer for this, but these are all good questions. I would ideally remove things from memory that have not been updated in a long time, but again, I have to think about this. Good questions, though. I'll bring this up with the Prometheus team and let you know.
A
Whatever time series didn't exist five minutes ago that I am now creating, I will use that five-minutes-ago timestamp for all the time series that are created new between the last collection and the current collection. And that's okay as long as there's only one scraper. As soon as you have two scrapers, the same ambiguity comes back: you don't know what timestamp to use, because anyone could have read that variable.

It's the application, and I'm assuming this, because if you've gone to the trouble to release memory that was piling up, you've forgotten the exact start time, or the exact time at which you dropped the thing. You just know that you might have dropped it in the past, because you're in a regime where memory is being recycled. So any time a new series arrives, we want a start time for it, and it could be...

It is fairly arbitrary if you're not going to use the start time of the process. By our own logic, you could say that the initial count is essentially ambiguous from a rate perspective, and you can reset the time series at that point: I'll put a real zero and just lose track of the first count in any series.
C
From my understanding, there isn't a way to have metrics expire in the SDK after a timeout. The way we've done it in cAdvisor, for example, is that you have to implement a different interface which lets you say "here are all the metrics you should collect right now," rather than tracking over time the way OpenTelemetry's API does it.
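For reference, the pull-style pattern C describes corresponds to prometheus/client_golang's Collector interface, where metrics are synthesized at scrape time instead of accumulating in the process. A minimal sketch, assuming a hypothetical `readCounts` owner of the real state:

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// onDemandCollector synthesizes metrics at scrape time; nothing is cached in
// the collector, so there is no per-series memory to expire.
type onDemandCollector struct {
	desc *prometheus.Desc
}

func newOnDemandCollector() *onDemandCollector {
	return &onDemandCollector{
		desc: prometheus.NewDesc("requests_total",
			"Requests handled, read from the source of truth at scrape time.",
			[]string{"handler"}, nil),
	}
}

func (c *onDemandCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.desc
}

func (c *onDemandCollector) Collect(ch chan<- prometheus.Metric) {
	// readCounts is a stand-in for whatever owns the real counts.
	for handler, n := range readCounts() {
		ch <- prometheus.MustNewConstMetric(
			c.desc, prometheus.CounterValue, n, handler)
	}
}

func readCounts() map[string]float64 { return map[string]float64{"/": 42} }
```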
A
That's the easiest, definitely correct solution, and I would be okay with that as well. As I said earlier, if you can support delta temporality, that's one solution; but if you're forced into cumulative temporality and you're running out of memory, one answer is: sorry, you just run out of memory. The reason this issue was filed originally, though, is that OTel has a library guideline to not accumulate unbounded memory, and those two are in conflict. That's why we're discussing this.
D
I joined late, so I might be going completely in the wrong direction. We are talking in part about how Prometheus never promises that expositions go away, even if you have what you consider to be a stale metric over time. Okay, it's the best current practice not to have anything go away. That being said, there's no one forcing you to not have them go away.

The one thing is: if you have any Prometheus-compatible system, things become stale, so things go away in the backend at some point as well. So the latest point in time when you basically re-expose them to a Prometheus-compatible system, that would be the time when you need to re-synthesize this kind of thing. But beyond this, it seems to be a little bit like most of the push-versus-pull things, where it's just the direct consequence of the basic approach chosen.
C
I think this would actually still be a problem even so. OpenMetrics has the _created series for cumulatives, where we would describe the start time, so I think you would still have a similar problem if you got rid of it from memory and then had to come up with a new start time for a counter that you wanted to re-expose somehow.
D
The thing where this area of discussion comes up repeatedly is basically this: it is something which, in theory, the Collector could do, but at the end something needs to persist that state, because otherwise that state goes away and you're losing this information no matter what. The same is true for any counter which goes up, for any _created information, and for anything else.

Basically, if you stop saving the thing, and you start saving the thing again, and you deleted it in between, then inherently it is gone, unless you have some place in between where it is being stored, with the overall complexity that brings. But we're deep in philosophical questions here: if you need to persist it somewhere, you might as well persist it at the one place where everything else is coming along to get the data. That's part of the Prometheus mantra or whatever, but in theory you can persist this pretty much forever.
A
I think we need to think about this a bit more, but I feel like we could probably write down some requirements. David, the form I'm imagining is: let's say a Prometheus server has its standard timeout, and I know it's configurable, but the standard timeout is five minutes, I think, which means that if you stop reporting, after five minutes that target or the time series will be erased. The requirement that I'm trying to informally state is that I won't stop reporting for at least five minutes.

I mean, I will not stop and then restart reporting within at least five minutes. I'm trying to make it quick: there's no ambiguity as long as that restart, or the reintroduced stream, happens well after the period at which it would have been removed from the corresponding collection or collector; then something's good. And I think I should stop speaking now, because I need to think more on this without people listening.
D
What problem are you solving with this? So, one important correction: Prometheus doesn't delete anything if you don't expose something.
D
If you run a query on any Prometheus-compatible endpoint and say "give me current data," or "give me the rate over the last 10 hours," whatever... no, over the last 10 hours is different, sorry. But if you say "give me current state," and there has been an exposition or an ingestion of that one time series in the last five minutes, then it's going to be displayed; it's going to be part of your query, of your calculation, whatever you're doing. And after those magic five minutes it's...
D
Assuming it's a counter, and assuming you just start counting at zero again, because that's where you would start counting, it just detects this as a counter reset and does the thing automatically, and the time series is not marked as stale if it happens within those five minutes. In theory, you can even go back further and say "give me everything over the last 10 hours," and over that amount of time it wasn't stale, so it would still appear in all result sets and everything.
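A small sketch of the counter-reset adjustment D is describing, the same idea PromQL's rate() and increase() apply: a sample lower than its predecessor is treated as a restart from zero, so the full new value counts as growth.

```go
package sketch

// increase computes the reset-aware total increase over a slice of samples,
// mirroring the adjustment PromQL applies for counters.
func increase(samples []float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	total := 0.0
	prev := samples[0]
	for _, s := range samples[1:] {
		if s < prev {
			// Counter reset detected: assume the counter restarted from
			// zero, so the whole current value is new growth.
			total += s
		} else {
			total += s - prev
		}
		prev = s
	}
	return total
}
```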
A
This is really interesting and helpful. I need to, at least for myself, think about what I'd like to see. I don't have a strong requirement other than that we should avoid ambiguity here, and if OTel SDKs are going to have a mechanism, we should specify it, I think. Although I could also accept the idea that we say cumulative time series aggregation is a special case of the library guidelines where we are going to allow memory to build up, to keep it simple. I think we should take this discussion offline and continue it in the future. I appreciate that, and I apologize for having ill-formed thoughts.
D
Great, yeah.
D
Sorry, the kids are fighting in the background. I don't think I ever heard about doing this differently for different types of data. That's interesting, because, yeah, maybe just doing this, or requiring this, only for certain subsets of the possible metrics would be an approach. Another thing: if there are timeouts and such, ideally we would keep them the same across the ecosystem.
A
Yeah, the thing I want to think about, and I don't want to think on my feet in front of you here, is that we have this stale-marker concept that we've added, so that you can push data points that effectively have staleness in them, and there might be a sort of rigorous definition we could give that forces the producer to write a staleness marker for something they're going to eject from memory, and then define the start time as... I guess this is where I need to think more, but some sort of "no earlier than the last staleness marker that would have been written" is what I'm after. I need to think more. But thank you.
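A conceptual sketch of the invariant A is reaching for, assuming the producer emits a staleness marker (in OTLP terms, a data point flagged as having no recorded value) at ejection time. The types and method names here are hypothetical, not a real SDK API.

```go
package sketch

import "time"

// point is a hypothetical data point; staleMarker stands in for OTLP's
// no-recorded-value flag, i.e. a Prometheus staleness marker.
type point struct {
	ts          time.Time
	value       float64
	staleMarker bool
}

type stream struct {
	points    []point
	lastStale time.Time
}

// eject writes one final stale-marked point as the series leaves memory.
func (s *stream) eject(now time.Time) {
	s.points = append(s.points, point{ts: now, staleMarker: true})
	s.lastStale = now
}

// recreate re-admits the series, clamping the proposed start time to the
// invariant "no earlier than the last staleness marker written".
func (s *stream) recreate(proposed time.Time) time.Time {
	if proposed.Before(s.lastStale) {
		return s.lastStale
	}
	return proposed
}
```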
A
I really meant for the next item to be David's item. I want to talk about 2702, the shortening and the namespacing, but then after that I have another topic, which we'll go into next. David, why don't you talk about namespacing? See if there's anything we need to resolve here.
C
So, let's see. I think we're talking about 2703, probably; let me just double-check.
C
No, they're both PRs. I was asked to separate out the mechanics, like adding the attribute itself, because that involves a bunch of build-tool changes and stuff, from the actual Prometheus use of the new attribute. So 2703 is the Prometheus side of things.
A
And the naming question is really just that we're going to have a new field to allow us to put a namespace prefix into our data model, so that we don't have to put the same prefix on every single metric, log, and span in those scopes. That's the high-level idea. Is there anything that a Prometheus developer would care to know about, or discuss, here?
A
The way I see it: we originally specified this concept of instrumentation scope, or instrumentation library as it was called in the past, and deliberately declared that you may create the same metric from different libraries; that's allowed to be an intention that you can have, so that you could swap libraries and still produce the same metrics.

Now the question is: I have the same libraries and I literally don't want them to be the same metric, and that's what OpenMetrics gives you this namespace recommendation for. It's a little bit, sort of, not formalized the way we've got an SDK kind of constructor for this. So we're saying you're going to construct a meter or a tracer or a logger, give it a namespace, and that will cause it to prefix everything, so that you literally don't have the same metrics, logs, and traces coming out of those different instances. Is that about right, David?
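A toy sketch of the prefixing behavior under discussion in 2702/2703. The `meter` type and constructor here are illustrative stand-ins, not the proposed spec wording or any SDK's real API.

```go
package main

import "fmt"

// meter stands in for a meter/tracer/logger constructed with a namespace.
type meter struct{ namespace string }

func newMeter(namespace string) meter { return meter{namespace: namespace} }

// counterName shows the prefixing: the scope namespace is prepended once,
// instead of every instrumentation author hand-writing the same prefix.
func (m meter) counterName(name string) string {
	if m.namespace == "" {
		return name
	}
	return m.namespace + "_" + name
}

func main() {
	official := newMeter("go")
	alt := newMeter("altgo")
	// Two libraries emitting the same instrument stay distinguishable.
	fmt.Println(official.counterName("request_duration")) // go_request_duration
	fmt.Println(alt.counterName("request_duration"))      // altgo_request_duration
}
```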
C
Okay, there we go. Sorry, when I shared my screen it was sharing my computer audio, so you couldn't hear me; I've been talking away. That's probably why you started talking, isn't it? Okay, yes.
D
But just to close on Josh's point, just to make sure this is precise: from Prometheus's point of view, and by extension also OpenMetrics, this isn't so much about the library; it's more about specific things.
D
So, as a specific example: if you have a random Go application, it would have the go_ prefix, which is coming from the Go instrumentation library, and it talks about Go; and you also have the ability to set a name, snmp or kubernetes or what have you, and that is then the name prefix for everything else.
D
The reasoning behind this is that you have a lot of Go applications, so it might make sense to actually go through all the Go stuff and look at memory properties or what have you, because you also have all the rest of the target labels to determine that this is coming from this or that target.
D
You also quite likely want to do analysis of whatever kind across all your X applications, or all your Go applications, at the same time, but in a different space. So it's not that we say you have the official Go library and you have this other Go library, and then one is called go_official and one is called go_unofficial or anything. It would be more the case that you have all the Go stuff, and that is prefixed with go_; and then you have all your application-specific stuff, and that is prefixed with your application; or you might have, I don't know, google_ or whatever, in case you want to have stuff within the string. Of course, you need more specific stuff across the org or something. So that's where this is coming from.
C
So this would be only for a particular tracer, which would be, like, made by a single package, you can think of it. So it's the equivalent of having a namespace for a small group of telemetry; it's not a prefix for everything.
C
...that we could use as the prefix. So that's why we have to add this field, and that's the first piece of this PR: adding the prefix to metrics from a scope.
C
The second piece is adding an additional metric to carry other scope information, like the version. So, for example, if I'm using a particular version of the Go monitoring package, then I'll get a metric that tells me, you know, otel_scope_info or whatever, and it'll include the full name and the version, as well as any other scope attributes.
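A sketch of the scope-info metric C describes, in Prometheus terms: an info-style gauge fixed at 1 whose labels carry the scope's name and version. The metric and label names follow the direction being discussed and should be treated as illustrative, not final.

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// registerScopeInfo exposes one info metric per instrumentation scope.
func registerScopeInfo(reg prometheus.Registerer, name, version string) {
	scopeInfo := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "otel_scope_info",
		Help: "Instrumentation scope metadata.",
		ConstLabels: prometheus.Labels{
			"otel_scope_name":    name,
			"otel_scope_version": version,
		},
	})
	scopeInfo.Set(1) // info metrics are constant 1; the labels are the payload
	reg.MustRegister(scopeInfo)
}
```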
D
But the good thing is, you know precisely where this is coming from, and it's relatively low cost, because you would usually have one, maybe two, per scrape target, which shouldn't be a huge percentage, and you still gain quite a bit of information about your thing.
D
Also, FYI, we have the relatively well-defined build info and such, so stuff like, for example, the version, or which Go version you used, and such, could also be put in there, because then it works with everything which already exists: all the dashboards and such, and alerts and what have you, which are written for this type of thing, would automatically work. But that's again...
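A sketch of the build-info pattern D references: a gauge fixed at 1 whose labels are set once per build, so dashboards can join against it and detect label-set changes across restarts. The metric name here is hypothetical; for Go binaries, client_golang also ships a ready-made collectors.NewBuildInfoCollector.

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// registerBuildInfo exposes build metadata as labels on a constant-1 gauge.
func registerBuildInfo(reg prometheus.Registerer, version, goVersion string) {
	buildInfo := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "app_build_info", // hypothetical name for this sketch
		Help: "Build metadata; the value is always 1.",
		ConstLabels: prometheus.Labels{
			"version":   version,   // fixed at compile time
			"goversion": goVersion, // toolchain used for the build
		},
	})
	buildInfo.Set(1)
	reg.MustRegister(buildInfo)
}
```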
A
Yeah, it sounds like we can probably make a different semantic convention to put build info in the same Prometheus-conventional location.
C
Are there specific labels that are consistent across languages? Like, does Java have a specific version key that's the same as Go's, the same as Python's, or anything like that, or are they different per language?
D
Absolutely, there's tons and tons, because this allows you to do all the context and everything, with mouse-over and so on. It's super convenient; it's really something which I personally like. Just FYI, on how the info metrics are usually split up: they're basically split up by at what time the values get set, so compile time, run time, or start time.
D
These kinds of things are usually different info metrics, because conceivably something which is compiled in might be longer running than something which is just set at start time.
D
Command-line parameters, for example: if you were to put them into an info label, that's different from a build info.
D
...which allows you to do super nice analysis with "did my label set change?", because if the label set changes, then I can also have different colors for my graphs, and I know when restarts happened, blah blah blah. That's the kind of thing which you automatically get for free with this type of design.
A
I mean, there's one point here about whether... I think you made a great point about how build-info fields should be in a build info metric, because that way we can see if it's different or not, independent of the target info fields that David was describing. In OpenTelemetry, I don't know what we have specified in the semantic conventions about build information. But if we did have semantic conventions for it...
A
Well, I think, yeah, I don't know. I guess I'm trying to say that the way the resource is defined, Prometheus will know that the build information didn't change, because the build information didn't change; but Prometheus users are going to see one aggregate target_info that always changes, even when build info doesn't change, and so it seems like we could improve that experience.
A
Thank you. Speaking of the agenda: the reason we were having that last conversation is that a few weeks ago I kind of started asking around OpenTelemetry, and I'm sure you're like, "what's left?" You know, we all know of a lot of things we'd like to do after 1.0, but we're trying to figure out what we need to really lock down right now.
A
For the most part, what we heard was the concern that the Prometheus exporter spec is still marked unstable or experimental.
A
So David was discussing the namespacing question, and the only other one that came up in the group discussion was that we haven't answered the question of what to do about names that contain dots, and I wanted to bring it to this group. Basically, my one desire coming out of OpenTelemetry was to bring statsd into the ecosystem and make sure that that segment of metrics, including delta temporality, was given first-class support.
A
At this point, I think we've done really well at that. The last detail is that there's this huge section of the ecosystem still using dots in their metric names, and that includes all the old statsd users, kind of the Carbon/Graphite-era users, as well as modern-era Datadog users. And I remember one time we discussed this in this forum, at least a year ago; it seemed like there weren't any hard technical reasons not to change that position.
A
...such that Prometheus would begin to accept the dots. I hope that that's not something that's sort of syntactically impossible, and I hope it doesn't seem like it's going to create trouble; but it will definitely create less trouble for OpenTelemetry users who are trying to come in with their non-Prometheus data, and we don't have a good answer for them right now.
D
So, one thing which we had, with my Grafana hat on: we had some issues with not supporting dots and a few other characters, like forward slashes and such, and there is a mapping which has existed unofficially before and is, at least for us now, more like...
D
But this is, again, with my Grafana hat on, not with my Prometheus hat on: you have, I think it was, underscore underscore dot underscore underscore (__dot__) to map this, and you're able to translate it back and forth. It's horrible, but it works, and it works reliably, and it can be automated away. In theory, it even allows you, in the UX layer, to visualize this with the dot again. Not great, granted; I'm just saying that this exists, and this might be an option, as much as anything else.
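The unofficial mapping D mentions is a simple string substitution in both directions. A minimal sketch; this is the Grafana-side convention he describes, not a Prometheus standard.

```go
package sketch

import "strings"

// escapeName turns each dot into the literal token "__dot__", keeping the
// name legal under Prometheus's metric-name character rules.
func escapeName(name string) string {
	return strings.ReplaceAll(name, ".", "__dot__")
}

// unescapeName reverses the mapping so a UX layer can render the dot again.
// The round trip is reliable only if no one writes a literal "__dot__" into
// a metric name, which is the caveat raised later in the discussion.
func unescapeName(name string) string {
	return strings.ReplaceAll(name, "__dot__", ".")
}

// Example: "http.server.duration" <-> "http__dot__server__dot__duration".
```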
D
We would at least need to take this back to Prometheus and get agreement within the group, because the problem inherently is that there is a huge install base, and we don't control this install base in any way or form. And one of the mainstays is that, since 2014, the exposition format was always forwards compatible. That's the same for OpenMetrics. There's one clash, where we don't have microseconds anymore; we have seconds, if you expose second timestamps, but that has always been an anti-pattern and highly, highly suggested against.
D
So this was the one thing which we thought would be okay. Beyond this, the format is deliberately extremely forwards compatible, so you can literally take... I don't know if you saw that blog post recently where they revived old Google VMs and pointed Prometheus at them, and all the varz stuff for Borgmon from like 15 years ago still worked with Prometheus. That's not a design goal in and of itself, but it's something which is super nice, and not a property which I believe we should be giving up just so.
D
That being said, I'm more than happy to take this discussion back to the Prometheus team. We have the dev summits; they're public. It probably makes sense if you also show up and argue, yeah.
D
Everyone upgrades the thing, and then all the old Prometheus servers which are ingesting the thing start breaking, because... yes, in theory you can introduce safeguards, blah blah blah, but you would need to ascertain that the complete fleet of whatever subset of users you're talking about is upgraded at the same time before you can even start allowing dots.
D
So, in theory, it would be more the case that, if we are doing this to the full extent, Prometheus 3.0 allows them and then Prometheus 4.0 actually activates them. Because otherwise, you know where I'm coming from: that's what you would need to be doing when you take this to the extremes. This is just why we are relatively worried about this.
D
I do believe those considerations are also written down somewhere, but I need to try and find it.
A
If you gave me an option to say, like I think Goutham just said, you know, we could make an optional flag in the 2.0 series and turn it on finally in a 3.x series, even if it was years out, I wouldn't even care. I just don't want to have to write specs now, let's say for OpenTelemetry...
A
...that say: if you're going to expose a dot, you've got to use underscore underscore dot underscore underscore, or whatever escaping pattern. I just don't want to write that. I'd rather say that using dots with Prometheus requires a flag or a major-version upgrade. And upgrades are not easy; what you described is what anyone who wants to upgrade any kind of metrics protocol goes through, so I don't think that's exceptionally special for this case. Because the users who are trying to get this support turned on have been using these dots forever; they're going to begin using Prometheus to monitor those dots. So there's no migration problem for them; they just couldn't use Prometheus before.
D
Yes, but if you have a mixed environment and someone is on a newer version, blah blah blah, you still have breakage. If someone already wants to actively use dots, that's different, because they will be more than happy to upgrade everything at the same time, and they also won't have the issue with older ingesters or older proxies and such. Two thoughts here. One, just for reference, this is something which we've been trying to do, where we have...
D
I mean, we're almost there; of course, 2014, or if you take varz it's already almost two decades, but that kind of intention is an underlying part of this. The other is: we do have a mapping which works. So if push comes to shove, we just have a well-defined mapping of: if you see a dot, turn it into underscore underscore dot underscore underscore.
D
If you want to be super careful, you can have a synthetic metric which tells you that this flip has actually happened within the collector pipeline, so you are able to flip it back, and it's not the case of someone deliberately writing underscore underscore dot underscore underscore, for whatever reason, in the actual metric name. Then you have a way in which you can transform reliably and repeatedly, and automate it, in both directions. So if push comes to shove, that's probably the least bad approach.
D
And you can just pass it through without parsing it; but if you were to treat dot and underscore as being equal, you lose the information of which of the two it is, and you cannot replicate the data at the end of your pipeline; or, if it's ingested into whatever system, they cannot replicate this hierarchical meaning, which they still might be relying on. Just taking this as an opaque string and passing it through is completely fine, because you don't have to care what meaning anyone assigns to a dot, an underscore, or the letter A.
A
...willing to explain that use case, I guess; that's why I'm saying... but I should not take things so lightly. Okay, I told you it was the worst option.
A
I think it would be a show of... I don't know. I think somehow we should figure out a way to make this work so that we can use the same names, you know, so the Datadog user can begin using Prometheus and write PromQL queries with their dots in them, and not underscore-underscore-dot-underscores in them, or something like that.
A
I mean, I think we can find a way, and I would argue that it literally doesn't matter for an existing Prometheus installation that's not using dots. You could even say OpenMetrics doesn't have to change its spec on dots.
D
...done, because the second your loudest people start emitting this, you're impacting the existing installations, your backend, your forwards compatibility, blah blah blah. All that being said: as for what's next, we don't have a dev summit right now, of course, with PTO and such; over the summer we took a break for the July and the August ones, and the September one will happen again. A lot of us will be at KubeCon. We also have PromCon Munich in November, and also we have the Prometheus developer meetings.
D
Those would probably be the best avenues to have this discussion in a broader forum. Of course, again, Goutham and I can't just commit Prometheus to anything within this call. We need to probably design this, or think this through, if we even want to consider this, because there are tons of considerations. But I'm trying to get it to this point so we can have this discussion, because I think it's a useful discussion to have, and maybe we'll change it; I don't know.
A
...that we could solve this matter. And I'm really trying to be flexible on all the other things to make this work out. Thank you. Okay, I think that was the end of my agenda item, and then someone's added one about target info.
A
Thank you so much, Richard and Richie and Goutham. Thank you.
B
I moved that agenda item; it was basically in the undiscussed agenda items. I just moved it here, though I don't know who put it there.
A
Okay, I haven't read this one yet. It sounds like it's asking for target info, which makes sense to me now that I've begun working with info metrics a lot in the recent months, so I'll be able to check this out and understand it. But I haven't read it yet.
A
It just looks like they're not implementing the target_info metric, which is the thing that will take all the resource attributes when we directly export to Prometheus from the SDK. And the Collector is already doing that; we've got a bunch of ongoing issues about the Collector optionally turning that off, because there are people who don't want it, or where it's creating single-writer violations.