From YouTube: 2022-01-13 meeting
A
Okay, let's see. Thank you. I think we're waiting for... Hi, Josh. Hi, Vishwa. Happy new year, welcome back. Welcome back, Grace. Hey, Brian. All right.
A
Okay, so let's... let me just share my... My cat is saying hi too.
A
Yeah, we had that, and, Josh, we should also talk about the spec completion, because that's something.
A
Yeah, makes sense, makes sense. So, Eric, did you want to just share your screen and walk through that update? Let's wait for everyone to kind of gather up, give it another minute or two, and then we can dive into it.
A
Good to see you again. Josh was saying that at least you will have a little bit more time to work with us. Oh.
A
All right, so let's get started. Eric, are you ready? We can dive into the, you know... just share the issues doc, and then we can quickly walk through that, and then we'll post it later to the Prometheus working group issue.
E
Yeah, yeah, let me share my screen.
E
I'm trying to give myself permissions right now to share my screen.

A
Okay, nobody's owning up to it, but I think again maybe this was related to the first update that we were going to be doing here, where I...
A
Yeah, I would say that, you know, I did further research, and this is why I wanted to kind of go through what Eric has in his doc. In our research, I talked to several maintainers, you know, and contributors.
A
Fundamentally, the pull exporter has been used mostly for debugging purposes, on the collector side at least, and just verification of data, verification of the implementation, if you will. So one of the things that, you know, again, I think is generally accepted now is that the pull exporter would be available where it is more useful, in the SDKs, that is, in the libraries, and then the push exporter would be the only exporter for Prometheus.
A
That would be fully supported and available in the collector. So I think that was one of the conclusions there, and I am not sure if there are other questions, but I think, Josh, you're right, it stemmed from the pull exporter being evaluated. And, you know, one of the things we did find, and this is what we wanted to kind of go over...
A
What Eric has is that the pull exporters are not fully implemented even in the language libraries that they exist in, and that led us to the other area that we all know about, which is that the spec is incomplete for the Prometheus exporter, and the general requirement overall, right? I mean, there were a lot of assumptions. I mean, Josh, remember when we worked on the original Go SDK PRW exporter?
A
Also, there were a lot of gray areas where, you know, we've kind of made assumptions, because the metrics API and SDK were not implemented yet, and I think it's just time to make sure that the spec is done. And then, you know, I guess, archive any implementations until we have completed them up to spec, or just move them to a development branch, I would say.
D
Yeah, I tend to agree. I think we should let Eric present if he's going to, and we can talk more free-form.

E
Okay, can you guys see my screen?
E
Yeah, okay. Yeah, so we've been going through the Prometheus exporters in the different SDKs. We just wanted to evaluate a couple of things. The first thing is we just want to make sure that it exists in all the SDKs.
E
We also want to make sure that each of them is compliant with the spec, and then the last thing is we want to call out any gaps, if there were any, in the implementation of them or the tests of them, right? So the first thing is: do they exist? And they exist in all the repos that we checked.
E
There are end-to-end integration tests, and we'll be looking at those later, for all of them except Rust, PHP, and Swift. We also won't be looking at Python, C++, and Ruby, but that'll be addressed later. And so I also looked at the integration testing for all of them, and I won't go through this in too much detail, but we just define what integration tests are versus unit tests, and then have some notes on them.
E
But the important things to know are in this table, just so you can see.
E
Sorry, excuse me. How in-depth the testing is. And so we looked at Go, JavaScript, .NET, Java, and the collector-contrib, and we found that, for the most part, they're pretty complete.
E
For Prometheus, you'll see here that most of them, all of them except the collector-contrib, don't test the summary, and that's because the SDKs don't create Prometheus summaries currently. I talked to Josh a little bit about what that is, and it's because in OTel, you know, creating histograms is preferred, and summaries are considered for Prometheus support only. Eventually, I think he said, the SDKs would eventually need to be able to support it, but discussion for that is going to happen later in the year.
E
The main thing that needs to be addressed: so, like we said earlier, we're talking about Ruby, C++, and Python. Ruby just does not have an exporter, and Python also doesn't have a Prometheus exporter; it did before, but the code for it seems to be missing now, and I'm not sure exactly what's going on with that. And then the last one, the C++ one, is one that I actually worked on, but it's currently implemented based on the old C++ metric specification.
E
So it's deprecated right now, and I think work is being done on that currently. So I think the biggest things to touch on are that right now they are all, like, compliant with the spec, but the spec is very minimal. Let me see. So I think I'll need to mention that, yeah, we need to flesh this out a little bit, because right now it's currently experimental.
E
It's not stable, and it's really just what you see right here. And so, as the spec is fleshed out further, we need to make sure that the Prometheus exporters are kept up to date, to be compliant with the spec in the future. As for the tests, they are fairly complete, and for the ones that are not, issues have been filed in each repo for the gaps in the tests, and you can see those; there are links to them here. But yeah.
E
That's pretty much it for our evaluation. I think... Lassie, did you have anything else to touch on? If you want to say anything.

E
Excuse me. Okay, yeah, that should be it.
A
Eric, that's cool, I mean. The only point that we wanted to call out here is that, you know, we looked at all the Prometheus exporters and the implementations from a test-completion standpoint. But what we uncovered also is that the spec is, you know, not complete and fully detailed.
A
So, even though the code exists and tests exist, we cannot say that they are, you know, fully compliant with the spec. So it's kind of a circular problem right now. So I think the first step in that whole process is making sure that the spec has the detail that it needs for the PRW exporters.
A
You know, we had a little different approach when we wrote them, especially, you know, in the collector, in that we were using the upstream Prometheus and OpenMetrics PRW compliance document and compliance guidelines from the project upstream, with Brian and Richard's help, and so that, you know, led to a more fleshed-out and verified implementation, which is good news, because the PRW exporter is used very heavily in the collector.
A
So, I mean, again, the reason we also did this is because we want to see, you know, whether we are ready for metrics stability or not in the different libraries, and obviously Prometheus support is a key part of those requirements, and, surprisingly, you know, there's a little bit more work to be done there to complete the implementation.
A
So again, this goes to, I think, the discussion, Josh, you were going to...
D
Yeah, hi. I want to dig in, but I'm afraid there are just so many directions we could go, so I'll try and summarize what I'm thinking right now. So, the pull exporter from the collector: I think what's being stated is that it's not clear that it's useful in production, it might be useful for debugging, it's not clear that it's specced out well enough, or that anyone sees why we would want it.
D
I see those as symptoms of all the other problems. If we can solve all the other problems, I bet we can come back and define a well-defined pull exporter for the collector. I'm not sure that anyone will use it until federation is happening, or something like that; there are reasons why Prometheus pulls from other Prometheus servers, and the same thing might happen eventually in the OTel collector.
D
So I think that's the last problem that we should solve. Yeah. What interests me about this, sort of generally: there's this issue that's been filed forever about resources, and what interests me is that it kind of comes down to the difference, the semantics difference, between monitoring and metrics. The framing of the Prometheus group is that, you know, monitoring is what we do; there's a third-party process.
D
That's involved. It's, like, actually actively observing a system, and because of that active observer, some of the semantics that Prometheus deals in just can't be emulated without that third party. And so, in the Prometheus ecosystem, I think, hopefully I'm speaking accurately, there's talk, in various references, kind of obliquely, of push support inside of Prometheus, and I think it's been held out as kind of, like: this is something different we might think about.
D
It's in the future. But OTel has come in and basically said: we're going to do metrics, and we're going to push them. It's not monitoring by your definition; we're doing metrics, and we're pushing them. And so I want to be very careful to avoid the complexities and complications around the up variable, which is semantic in nature.
D
It has to do with this third-party observer feature, as well as staleness, which is particularly defined in Prometheus, and I don't want to break any of those definitions. But within the constraints of what I've just described, it seems to me that Prometheus is aware of the desire for a process to identify its secondary attributes, shall we say. It's written up in the OpenMetrics spec as well; there's this concept of target information.
D
It seems to be, like, carved out as a special case: if you have an info metric named target, the attributes of the target info metric are effectively your resource. And I'm coming into this discussion with essentially a question: can we solve the problem in OTel by saying resources become info? I see two paths for this, you know, practically speaking, in a world where we're just talking about OTel.
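[Editor's note] The "resources become target info" idea can be made concrete with a small sketch. The exposition text and the naive parser below are purely illustrative, not from the meeting or any SDK; metric names, label values, and the sanitized label keys (e.g. `service.name` becoming `service_name`) are assumptions.

```python
# Hypothetical sketch: an SDK exposing its OTel resource as a target_info
# sample alongside an ordinary application metric. All names are invented.
EXPOSITION = """\
# TYPE target_info gauge
# HELP target_info Resource attributes of the producing process.
target_info{service_name="checkout",service_instance_id="627cc493"} 1
# TYPE http_requests_total counter
http_requests_total{http_method="GET"} 42
"""

def resource_attributes(text: str) -> dict:
    """Pull the label set off the target_info sample, if present.

    Naive parsing: assumes no commas or escaped quotes inside label
    values, which is fine for this illustration only.
    """
    for line in text.splitlines():
        if line.startswith("target_info{"):
            body = line[line.index("{") + 1 : line.index("}")]
            return {
                k: v.strip('"')
                for k, v in (pair.split("=", 1) for pair in body.split(","))
            }
    return {}
```

A scraper (or a collector processor) could then treat that label set as the self-described resource, merging it with whatever service discovery already knows.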
D
You will have a processor plug-in for the collector that will see your resources and see your attributes, and can do whatever renaming Prometheus would have done as well. So in Prometheus, you would have gotten that resource through service discovery, you would have applied your renaming, and then output PRW, but...
D
The real question I came into this with, and I'm not phrasing it as a question, but I'd like to, is: can we try to extend Prometheus to just work with OTel data? And I think what that would mean is bringing these specs together a little bit. The idea is, and I get that we're all on the fence about why you'd want to do this in the collector...
D
But now, remember, we're just talking about the SDK. So the SDK is going to export to Prometheus, which will scrape it. Suppose, I'm supposing, that you turn your resource information into target info. Now Prometheus scrapes this bunch of metrics, with some target info and some application metric data.
D
Now, in the Prometheus world, it had to get that target somehow, and that target still came from service discovery. So somehow, using one of the available service discovery mechanisms, Prometheus has identified an OTel target, and it could be any one of the schemas: it's Kubernetes, it's, like, a static file, it's, like, wherever you found this target. Now the OTel target is producing OTel resources.
D
In addition to anything you know from service discovery, you've got these OTel resources; they could be treated as target info. And I guess what I'm looking for is to make it normal. I think this is possible in Prometheus today; I just want to make it normal that, when you see some data that has target info, it's probably come from an OTel resource.
D
We know that those are resources that the process is self-describing; we're going to merge that with whatever you have in service discovery and do the relabeling. And so the outcome of that Prometheus scenario will be PRW that just did the same relabeling that you might have done in the OTel collector. And the point is, I think we should be treating resources as info, and not trying to make magic happen the way, I think, is sort of hinted at in that issue, which says: gosh...
D
We don't know which resource attributes should become metric attributes, and we got stuck for, like, months and months. I think we should just call them resources and shove them into Prometheus, but that's sort of punting the problem over to Prometheus.
D
But when I read through the OpenMetrics documents, it sounds like that's part of the plan. It's just that I think we just want to see that that's a real plan. I think I'm going to stop talking and let people respond.
B
That's not the plan. So, the first thing is, any discussion of pushing to Prometheus is not push from applications; it is push from other Prometheus servers, and it's entirely for network reasons, for the exact same things you use federation for. So individual applications should not be pushing into Prometheus. And, yeah, you are right, resources are... sorry, you were saying something.
B
Yes, resources are info, and target info is kind of there to basically just try to deal with this impedance mismatch of a push system versus a pull system, so that at least Prometheus can be aware of this information. Because, you know, Prometheus doesn't want stuff coming in with free-form labels, because that breaks things, because the person running the application isn't necessarily the person scraping it. They could be completely different people, with completely different label taxonomies.
B
So that's one reason why this shouldn't be automatic: you're making the assumption that, hey, these are the same people with the exact same label requirements, and not all resource attributes necessarily make sense as target labels, because that could just blow things out a bit. There's a more fundamental issue, though, with what you propose, and that is that a scrape needs to be successful to find out what the target info is, and we need target labels even when scrapes fail.
D
I would go as far as to say, and this might be the actual core problem here, and this might lead to another topic, but the resource attributes that you produce through target info potentially may not be used for target labels. They can't be; they're purely secondary attributes, much like you would get through service discovery. The way I see it, when I step back, is that the OTel resource is like a self-discovered resource.
D
It's like self-service discovery. I'm saying: I discovered this stuff about myself, please include this in service discovery as though it were discovered. But the problem we're not trying to solve is that we can't identify ourselves; there are no identifying attributes in that self-discovery, because from Prometheus's perspective there just can't be, and that's what you said, and I think we're trying to respect that. The identifying job and instance come from service discovery, and all the up attributes should come from service discovery. Yes.
B
Yeah, but that's fundamentally, yeah, service discovery and fleet management. Like, if you want to get your application to talk to Consul and push up attributes so they end up in Prometheus that way, you can, but that's the way it's done, because by the time Prometheus is making a TCP request, it's too late the other direction.
B
In that case, I would suggest: okay, we can put those in as many infos as you want. It doesn't just have to be target info; like, presumably there are a few different dimensions here, like, different things come from the team versus the machine and all that, so you might have a few different info labels.
B
That was just a suggestion for where people are, you know, outputting via push or something, just so that we can have push work without breaking pull, basically. But, yeah, the sort of thing that I would look at is: hey, can I push this into Consul? Can I push this into whatever other service discovery mechanisms? And look at that as a separate service-advertisement problem.
D
Yeah, yeah, I do see a service advertisement and discovery problem. I'm excited about it, actually, but I don't think it's what we're trying to solve here today, which is why I was trying to ignore the up-and-staleness problem and treat it as separate. I think what the first step...
D
We're not trying to change anything about Prometheus; we're just trying to say that the process has these extra attributes, which apply to everything, or the target, or whatever that is, and we want it to become commonplace. You know, when you set up your Prometheus, often you install, like, a package from somewhere which has a bunch of rules, which is, like, the default, and I've seen it for, you know, the community Prometheus.
D
What's the word I'm looking for? Helm charts, right. So the Helm charts come with out-of-the-box defaults, with rules to sort of, you know... Kubernetes discovery gives you 15 attributes per pod, but the default setting uses six of them, right? We would like the default, when you see an OTel resource coming in through target info, to be that it's just, like, included in what's available, or we'd like there to be some way for an OTel user to say, yeah...
D
But the OTel SDK is just going to produce a resource with whatever the semantics are, and we'd like the user, with a YAML file and a Prometheus server, to just choose which attributes from the resource they want. And maybe that's all possible today, and maybe what we do is close all of our issues saying: OTel resources become target info, we're done here.
B
Yeah, so the thing is, you're contradicting yourself. You're saying: I want this to be available and work this way, but I don't want it to affect up. But these are fundamentally the same thing. And now, remember, we're not talking about the up value; we're still just talking about the labels on up. Because, fundamentally, for this to be available for relabeling, we need to know before the scrape happens.
B
Okay, because it also affects what we scrape. So you can have this, but it all needs to happen before the scrape, so it basically has to be an out-of-band system that's feeding the data into service discovery via some mechanism. Like, fundamentally, this is a service-advertisement problem.
D
Well, so the mental model I had was that you have a service discovery mechanism that's, like, independent. You somehow found this.
D
Well, let's just work through my naive example so I can figure out where I'm wrong. So we have a flat-file discovery mechanism. It says port 8000 on localhost is to be discovered, so I have a job named localhost or whatever, a named static rule, and I have a port name, so my job and instance are now fixed. And the problem that you're saying is that, in an ordinary Prometheus setting, I would have also gotten several other target labels through my static... no, you just said...
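[Editor's note] The flat-file mechanism in Josh's example corresponds to Prometheus's file-based service discovery, where targets live in a JSON (or YAML) file. A minimal sketch of such a file, built in Python; the `env` label is an invented extra label, not part of the example in the discussion:

```python
import json

# file_sd entries are a list of {"targets": [...], "labels": {...}} groups.
# Here: the single localhost:8000 target from the example above.
entries = [
    {
        "targets": ["localhost:8000"],
        # Extra labels attached to every target in this group; they feed
        # into target relabeling before the scrape happens.
        "labels": {"env": "dev"},
    }
]

document = json.dumps(entries, indent=2)  # what you would write to disk
parsed = json.loads(document)
```

Whatever is in this file is all Prometheus knows before the scrape, which is the crux of the up-labels problem being discussed.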
D
You
just
had
to
follow
yeah
cyclone.
So
now
I
know
like
I
know
my
instance
yeah.
I
think
I
see
the
problem,
but
I'm
having
trouble
stating
it,
which
is
that
the
it's
not
just
it's
every
application
metric
every
application
metric.
I
want
to
have
these
extra
resources
available
to
but
the
up
metric
shouldn't,
and
that
creates
I'm
just
breaking
something
in
inside
prometheus,
which
what
I
was
thinking
through
was
that
your
up
metric
would
certainly
be
defined.
D
By
did
we
scrape
successfully
at
this
particular
target
and
the
up
attributes
would
be
determined
by
relabeling.
The
service
discovered
attribute
set
before
the
scrape,
but
all
those
other
application
metrics.
I
would
like
to
have
attributes
produced
by
the
pro
target
itself,
so
I'm
not
talking
about
up,
but
every
other
application
metric.
B
Inside
prometheus
yeah,
so
in
that
case,
no
because
metric
labeling,
this
is
post
scrapery
labeling
and
no,
you
can't
do
that.
The
way
you
would
do
that
is,
you
would
apply
the
label
to
every
single
output,
which
is
exactly
what
we're
trying
to
get
applications
not
to
do
because
that
breaks
full
in
the
sense
that
you
have
decided
over
the
person
running
committees,
what
labels
they're
going
to
get
and
you're
trying
to
override
their
target
labels,
but
fundamentally
ignore
the
fact
that
those
the
target
labels
are
going
up.
D
And I get what you're saying, but I feel like there's still something... Like, let's just suppose that OTel has this open issue and we solve it in a way that actually we don't want to, but here's what we could do. We could say: okay, resource, we don't have a place to put you; we're going to turn every single resource attribute into a metric attribute. You don't want that, it sounds.
D
No,
I
mean
making
them
actual
label
labels
on
every
application
metric,
so
so,
rather
than
having
a
prefix
of
top
of
my
scrape,
which
is
here
all
my
resource
keys,
once
I'm
going
to
literally
inject
them
into
every
application
metric,
that's
the
that's
the
semantic
behavior!
I
want
it's
just
horribly
inefficient.
D
So what we're hoping, then, is that the SDK produces these resources and the user gets to choose which ones they want. That's where we started. I don't want to put these on as application metric labels; I want to make them available so that whoever is doing relabel configuration can see them and apply that. But that's important.
B
Do you need this, though? Sometimes you can do this at query time and join things in, if it's an info metric. But, like, if you want to make this a big thing, what you're really doing is saying: I want to invent a new service discovery mechanism for OTel, or hook into existing ones, and if you do it that way, then everything will work.
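[Editor's note] The query-time join Brian mentions is typically written in PromQL as something like `http_requests_total * on (job, instance) group_left (k8s_cluster) target_info`. Below is a rough Python model of what that `group_left` join does; the series data and the `k8s_cluster` label are invented for illustration, and since `target_info` samples have value 1, multiplying leaves the application value unchanged, so this model simply carries it through:

```python
# Toy model of a PromQL group_left join against target_info. Series are
# dicts of {"labels": {...}, "value": float}; all data is invented.
def join_target_info(app_series, info_series, on=("job", "instance"),
                     copy=("k8s_cluster",)):
    # Index the info series by the join key, mirroring "on (job, instance)".
    index = {tuple(s["labels"].get(k) for k in on): s for s in info_series}
    joined = []
    for s in app_series:
        info = index.get(tuple(s["labels"].get(k) for k in on))
        if info is None:
            continue  # no matching target_info series: no output sample
        labels = dict(s["labels"])
        for k in copy:  # "group_left (k8s_cluster)": copy labels across
            labels[k] = info["labels"][k]
        joined.append({"labels": labels, "value": s["value"]})
    return joined

app = [{"labels": {"__name__": "http_requests_total",
                   "job": "otel", "instance": "a"}, "value": 5.0}]
info = [{"labels": {"job": "otel", "instance": "a",
                    "k8s_cluster": "prod"}, "value": 1.0}]
joined = join_target_info(app, info)
```

The cost Brian alludes to is that every dashboard and alert that wants resource attributes has to write this join, rather than seeing the labels directly on the series.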
B
But that's how I would approach it: either you do it at query time, which means it isn't there otherwise, or, you know... because, let's be honest, if we could actually get some standardized service discovery mechanisms, that would be great, so Prometheus just doesn't have to support the 20 or whatever it is. But that's kind of, yeah.
D
By
coming
to
this
meeting,
I
see
this
as
as
like
secondary
attributes
that
that
the
process
has
self-discovered,
and
I
heard
you
say
you
can
query
to
get
the
same
behavior
and
if
that
is
good
enough
for
prometheus
users,
then
I
think
we've
solved
our
problem
here.
We
will
take
hotel
resources,
turn
them
into
info
and
wash
our
hands.
D
We
don't
turn
them
into
application,
metrics
for
all
the
reasons
that
you've
just
described
yeah-
and
I
think
I'm-
and
I
maybe
just
this-
is
a
matter
of
opinion,
but
it
seems
to
me
like
they're,
I
it
just
it
just
breaks
the
way
I
mentally
think
about
the
relabeling
process,
which
is
that
now
I've
found
a
discovery.
I
discovered
a
target.
Here's
what
I
know
from
the
service
discovery.
D
The
target
reported
some
extra
attribute
dimensions
that
are
constant
across
itself
and
I'd
like
to
relabel
to
produce,
and
I
don't
want
to
write
queries,
but
that's
fine.
If
I
have
to
I'd
rather
not
I
just
I
just
like.
Let's
say
these
attributes
from
from
the
resources
are
just
going
to
be
relabeled
the
same
way.
Attributes
from
service
discovery
are
going
to
be
labeled
they're,
just
self-discovery
rather
than
service
discovery.
So
I'm
trying
to
make
self-discovery
look
the
same
as
service
discovery
without
breaking
the
semantics.
D
So maybe the thing is that the description I've given calls for a third step of relabeling. So you do target relabeling when you get service discovery; you scrape the thing and get its resources from its target info; you do a second relabeling to drop things given its info; and now you have metric relabeling, which is the last step. I'm not saying that's what we should do, but that sounds like it would address the issue. Yeah.
A
It's better than any other option, for sure, where...
D
That was kind of what we were discussing, I think. I hadn't thought much about dropping targets, so it was helpful to realize that there's that phase, which is why I said three phases, which sounds like a lot. But it sounds like what we're trying to do is just kind of sugar over the fact that, to use resources in today's Prometheus model, you would just have to write queries to join between application metrics and target info.
D
Is that correct, and is that usually what people do? And if that's your state of the world, let's do it. I mean, it sounds like we can move forward on the Prometheus OTel spec; we just say that resources become target info, done, and we've already got the rest of it, like, you know, gauges, up-down counters become gauges, and so on. And I would propose to move forward that way; if users come in to Prometheus and say, oh, this is awful...
D
I get it; we should not talk of these as service discovery attributes; like, we're not trying to do that. We're trying to give... it's more like constant labels in the Prometheus libraries. Oh, actually, that's got to be a security vulnerability.
A
Definitely, and, Josh, just to that point, there is a discussion, and some kind of a prototype, that some of the maintainers on the collector have been working on for service discovery. Yeah, yeah, so I'll bring you the design, at least what...
D
I have... yeah. I heard from Josh S. that Dan Jaglowski was asking questions, and it points towards service discovery as well, so maybe that's the same thread.
D
It is. Like, for the preceding discussion that we just had, I was saying these resources are just, like, secondary attributes that will be attached, but not the primary service discovery mechanism. In the future, when we have a new topic, it will be: how can you push your self-identifying information? And Brian is right to say there are a lot of problems there.
A
You're tracking... where?
D
Yeah, there is my... and I will pledge to go update it right after this meeting, yeah. Because if you...
A
...could look at the three parts, you know, that is what we discussed here: taking OTel resources and, you know, converting them to info before pushing them to Prometheus, that's one; and then the assumption of metric relabeling being used as a third phase.
D
I have one more that's quick. We talked briefly about summary and exponential histogram, and summary is one that I would be happy to punt on. We've run into a tremendous amount of difficulties with min and max, for example, and that's not my priority to...
D
Yes,
they
are,
and-
and-
and
it
should
be-
it
shouldn't
be
so
hard,
but-
and
there
are
issues
about
histograms
and
summaries-
that
we
can
talk
about
for
more
than
12
minutes.
But
the
thing
I've
been
working
on
at
hotel
last
month
or
so
is
this
exponential
histogram.
D
Now
we
worked,
we,
we
coordinated
a
bit
with
bjorn
robenstein
from
prometheus
from
grafana
on
this
on
this
protocol
and
the
design,
and
I
feel
that
we
met
with
basic
agreement
on
that
for
prometheus,
so
we've
moved
forward
and
have
a
protocol
data
point
in
our
protocol,
which
is
the
exponential
histogram
and
it's
a
dense
way
of
expressing
high
resolution.
D
Histogram
data
it's
coming
and
the
question
is
essentially:
how
should
we
handle
that?
If
we
want
to
talk
to
prometheus?
Of
course,
we
could
convert
those
high-resolution
histograms
back
into
the
legacy
form
of
histogram.
I
think
that
would
not
make
anyone
happy.
D
So
we
can
talk
about
methods
of
approaching
prometheus,
which
include
converting
high
resolution
data
down
to
low
resolution
data,
the
the
fee
that
I
have
a
spec
up,
which
basically
says
what
the
sdks
would
expose
when
we
configure
these
things-
and
I
I
came
to
a
place
of
understanding
that
well
for
some
vendors
and
and
consumers
of
this
data.
D
It's
very
much
like
open,
histogram
data
circle,
histogram
data,
it's
just
like
here's,
a
bunch
of
buckets.
What
are
you
gonna
do?
Turning
those
back
into
explicit
buckets
or
collapsing
them
into
you
know,
10
or
so
explicit.
D
Buckets
is
certainly
possible,
but
I've
seen
in
the
future
if
prometheus
wants
to
get
into
the
business
of
handling
exponential
or
high
resolution
histograms,
it
seemed
like
the
challenge
was
that
prometheus
kind
of
wants
to
fix
its
boundaries
up
front
and,
and
so
there's
this
histogram
data
structure,
where
you
have
a
variable
scale,
but
to
do
prometheus.
I
think
you
want
to
fix
that
scale,
which
just
means
saying
what
your
limits
are.
So
the
one
possible
outcome
here
is
when
you
start
up
your
sdk.
D
You
say
I
want
histograms
to
use
high
resolution
and
up
to
300,
buckets
and,
and
my
minimum
measurement
is
one
and
my
maximum
measurement
is
sixty
thousand
like
imagining
milliseconds.
So
the
range
of
my
measurements
are
between
one
and
sixty
thousand
milliseconds
and
I
have
300
buckets
now.
I
know
the
scale
to
use
and
I
will
never
report
more
than
300
buckets.
They
will
start
at
1
and
they
will
end
at
60
000.
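[Editor's note] The arithmetic behind that configuration can be sketched as follows. OTel exponential histograms use bucket boundaries of the form base**i with base = 2**(2**-scale), so covering a range [lo, hi] takes about log2(hi/lo) * 2**scale buckets. The helper below is hypothetical, not the actual prototype Josh mentions; it picks the largest scale that fits the bucket budget:

```python
import math

def pick_scale(lo, hi, max_buckets):
    """Largest exponential-histogram scale whose bucket count for the
    measurement range [lo, hi] fits within max_buckets."""
    scale = 20  # the OTLP exponential histogram allows scales in [-10, 20]
    while scale >= -10:
        # At this scale, base = 2**(2**-scale), so spanning [lo, hi] needs
        # ceil(log2(hi/lo) * 2**scale) buckets.
        buckets = math.ceil(math.log2(hi / lo) * 2**scale)
        if buckets <= max_buckets:
            return scale, buckets
        scale -= 1
    raise ValueError("range too wide for any supported scale")

# The example from the discussion: 1..60,000 ms with a 300-bucket budget.
scale, buckets = pick_scale(1, 60_000, 300)
```

For that range and budget this yields scale 4 (base = 2**(1/16), roughly 1.044 relative resolution), using about 254 of the 300 buckets.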
D
And you will fit 300 buckets of resolution into that. That's going to be possible. We're looking for feedback on whether that is what you would like to see, or if you'd like to see something different. The prototype PR, the prototype code, I'm very pleased with, because it does actually implement automatic scaling: you just say 300 buckets and don't tell it a range, and it'll figure it out. But I don't think that works very well for Prometheus.
B
So
there's
two
things
here:
one
is
what
to
do
for
openmetrics
and
you
know,
and
current
existing
prometheus
today
and
the
answer
there
is
low
res
data,
so
one
of
the
requirements
of
voltmetrics
one
zero
is,
you
can
always
basically
downgrade
and
do
something
reasonable,
even
if
there's
some
future
data,
so
at
least
everyone
in
the
future
is
always
one
oh
m10
for
the
future
stuff
and,
of
course,
the
user
getting
hints
about.
Where
does
that
is,
helps
to
figure
out
hey?
What
are
my
10
low
res
buckets?
D
I will, I definitely will get his opinion. Converting to low res is fine; the user may have to tell you what the boundaries are, which is the problem we were trying to avoid, roughly speaking. Defaults...
B
Let
them
override,
if
they
want
hope
for
the
best,
like
it's
totally
possible,
that
this
will
be
academic
in
a
few
years
time
and
everyone
who
moves
to
higher
res,
but
just
so
that
you
know
there
is
a
baseline
that
everyone
has
just
give
lower
res
worst
case.
They
can
still
get
the
average
from
the
sum
of
the
count.
A
So, Brian, the default buckets that would get defined, could these then be redefined by the... you know, I mean, I'm thinking of the CloudWatch case, where, you know, they have their own dynamic definition of buckets on the fly, right? That is, the user doesn't necessarily have the ability to redefine buckets for their histograms. So in that kind of a case, you know, given that the user cannot change those buckets, and they're dynamically defined, how would that work for exponential... well.
B
D
I did prototype those as part of an earlier phase. So you'll set up your export to Prometheus, and if nothing is specified and you use the old format, it'll just go through. But if you've got a new format, and we aren't there yet, what we'll do is just convert it back down, and I think there will be two choices. If you want to use explicit buckets that have been previously specified, it's a simple mapping problem to take your histogram data and throw them into those buckets.
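The "simple mapping problem" could look roughly like this. The function name is illustrative, it assumes exponential bucket i spans (base**i, base**(i+1)], and it assigns each exponential bucket whole to the first explicit bucket covering its upper edge, which is an approximation rather than an exact redistribution:

```python
# Fold exponential-histogram bucket counts into previously specified
# explicit upper boundaries (sketch; not the collector's actual API).
def downconvert(exp_counts, scale, boundaries):
    base = 2.0 ** (2.0 ** -scale)       # growth factor per bucket
    out = [0] * (len(boundaries) + 1)   # final slot is the +Inf bucket
    for i, count in enumerate(exp_counts):
        upper = base ** (i + 1)         # upper edge of exponential bucket i
        j = 0
        while j < len(boundaries) and upper > boundaries[j]:
            j += 1
        out[j] += count                 # whole bucket lands in first cover
    return out
```

For example, at scale 0 (base 2) the bucket upper edges are 2, 4, 8, 16, ..., so four buckets of one count each folded into boundaries [2, 8] yield counts of 1, 2, and 1 (the last in the +Inf bucket).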
D
So that's one option, and then the other is to just reduce the size. So you had 300 buckets and now you want 10: okay, we cut it down, but the range can still move around. And I think that's why I want Björn's feedback, because he'll understand this: we can decide to have fixed ranges, or we can decide to have fixed sizes, but you can't have both, effectively, and it's a configuration question which one you want.
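Reducing the size works because dropping the scale by one merges adjacent bucket pairs, which is also why you can fix the count or the range but not both: each merge makes the same number of buckets span a wider range. A minimal sketch, assuming the first bucket's index is even (real code must handle the index offset's parity):

```python
# Merge adjacent bucket pairs, i.e. go from scale s to scale s-1.
def downscale_by_one(counts):
    return [counts[i] + (counts[i + 1] if i + 1 < len(counts) else 0)
            for i in range(0, len(counts), 2)]

counts = [3, 1, 4, 1, 5, 9, 2]      # 7 buckets at scale s
merged = downscale_by_one(counts)    # 4 buckets at scale s-1
```

Repeating this enough times gets 300 buckets down to 10 or fewer, at the cost of resolution, while preserving the total count.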
B
D
B
D
A
You're on. Thank you, Jeff.
F
I had a couple of questions here about compatibility that have recently come up with the Prometheus receiver in the collector. The first one relates to overlapping metric family names. There's a report of issues where, say, a user has a histogram called http_request and a counter called http_requests, and so you've got overlapping http_requests names.
F
B
It is not valid for OpenMetrics, because we're cleaning this up. But if you do have to deal with it, one way you can kind of fix it up is by making that http_requests_total and, say, unknown rather than counter. That's kind of a way to hack around it, but yeah, this is valid Prometheus format.
F
Okay. I think that unfortunately makes things even more complicated for us.
F
Okay, and then the other question I had, actually, I think this might be the Prometheus remote write exporter: we sanitize labels in a way that prevents users from using labels that start with an underscore.
F
I know that labels starting with two underscores are reserved, but there's not really any indication why this sanitization was included in the exporter, and some users have asked if we can remove it, but I'm worried about removing it without understanding what the implications might be. Do you have any concerns, from a provider's perspective, about labels starting with a single underscore?
B
B
No, no, it's mostly just there because it's a bit weird, basically, that's about it, so it's in the "please don't" category, same as with the double underscore. To be honest, that's mostly just reserved in case a monitoring system needs it, and a single underscore is like, yeah, that's weird, that's just not how we do labels. But if a user has one, and you can, at the SDK level, you know, make their code not compile or something, or instantly fail, that's fine!
F
The thing in question was, we were prepending "key" to it, just to make sure that it started with an alphabetic character rather than an underscore. The request is to remove that. I think that might be safe; I'm just trying to understand where that might have come from and what the implications would be. Yeah.
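A sketch of what a relaxed sanitizer could look like, assuming the goal is only to satisfy Prometheus's label-name pattern [a-zA-Z_][a-zA-Z0-9_]* and reserve the double-underscore prefix, while allowing a single leading underscore. The key_ prefix mirrors the prepending behavior mentioned above and is illustrative, not the exporter's actual code:

```python
import re

def sanitize_label(name: str) -> str:
    # Replace characters invalid in a Prometheus label name.
    name = re.sub(r"[^a-zA-Z0-9_]", "_", name)
    if not name or name[0].isdigit():
        name = "key_" + name            # names can't start with a digit
    if name.startswith("__"):
        name = "_" + name.lstrip("_")   # "__" prefix is reserved
    return name
```

With this version, user labels like _private pass through unchanged, while names that would collide with the reserved internal prefix or the character rules are still rewritten.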
B
A
All right, good. Any other questions from folks?
C
I have a very quick question about the target allocator feature that was added to the OTel operator. We were starting to look at prototyping using the target allocator, and I just wanted to know what state it is in and if it's ready for use.
F
A
F
Okay, and there is an outstanding PR for that which adds support for the Prometheus Operator ServiceMonitor and PodMonitor CRDs to define targets. If anybody wants to take a look at that, I think it could use some more reviews to get it landed. It also includes some cleanup to the allocator service.
A
Freshman, have you seen that PR, or do you need it?
C
A
Okay, okay, cool, cool. I think we're at time, but I just wanted to say, Josh and David, I'll reach out to you guys, and Vishwa, if you have some bandwidth, for working on the spec, the Prometheus spec: just making progress reviewing the draft PR that exists right now and then adding in any of the other parts, so that then, Josh, you can review it overall, and then we can get other reviews too. But we'll do the work.
B
D
Glad to. David, do you want to take point on that, since Josh kind of had a PR open? And I don't know if you'd like to take over that from him. Google Josh, yes, Google Josh: there is a draft of a Prometheus spec that's been open for a couple of months, and I think he just ran out of time.
D
A
Josh, we wanted to only use you as a reviewer, I mean, again, just a little bit of your time. David, if you have time, we can work together, if that works. Yes.