From YouTube: 2022-01-19 meeting
Description
No description was provided for this meeting.
A
Hi all. Usually I tune in to this meeting to listen, without an agenda. Is there someone planning to run this today? I was hoping we might talk about the issue from last week, mostly, but I do not want to run a meeting.
B
I mean, I suppose I can run the meeting. Usually Anthony or Alolita run the meeting, but...
B
Yeah, it's six after, so why don't we get started?
B
We do have one topic that we can discuss first, and then, if there is... unfortunately, I don't see any of the Prometheus folks that we were talking with last time about the issue, so maybe we should save it for another week. Eric, do you want to go ahead and tell us about the status of Prometheus pull exporters?
A
Well, I think we should talk a little bit about this topic now. I forget the number, but I could pull it up; I linked it in yesterday's spec SIG, so I know how to find it.
A
It's the same, or a similar, thing we could discuss here, I guess: the OpenTelemetry-to-Prometheus mapping topic, if you want to. But your PR is a draft, so it's unfair to go commenting on draft PRs.
A
It is still a work in progress. I just wanted to add the bits there; there are whole sections of the data model written in preparation for what you're about to do, so hopefully that helps. The issue I'm going to pull up, the one I wanted us to continue discussing after last week's meeting, I posted an update on...
A
...issue 1782, and I put it in the Slack, or the chat, right now. Oh sorry, you just linked it a second before me. Yes, that one. Okay, so we're looking at this issue, and to me it represents what I think of as, well, I said it in the meeting yesterday: I feel that OpenTelemetry started out with this vision of resources being a way to self-describe, and Prometheus is saying: thou shall not self-describe.
A
You
will
use
prometheus
to
describe
kind
of
that's
how
I
see
it
and
I'm
not
saying
that
prometheus
is
wrong,
but
I
feel
there's
a
way
for
both
to
exist
and
I
don't
see
how
to
make
that
happen
without
essentially
moving
into
the
permission,
prometheus
project
and
starting
to
spout
opinions.
But
I
don't
want
to
do
that.
A
I think the ideal that I would imagine for the world, if we could all get along, would be that a system like Prometheus would have access to self-identity as a part of its relabeling machinery. But as we discussed last week, it looks like three phases of rewriting would then be called for, which sounds like a big change, and it's not clear to the bystanders why we'd want that.
A
So, sort of talking about how Prometheus is kind of pushing back on resource as a whole: I still feel, and stop me if it's totally boring and off topic, but I feel there's an opportunity for OTel to specify how service discovery works, so that we have a well-defined path to push OTLP metrics data into a collector and have that collector integrate the data from service discovery. Prometheus does the same type of integration, but it's so oriented towards pull and its own service discovery process.
A
It's hard to see where that interaction takes place inside of Prometheus. But in this issue here, trying to bring it back to the issue in front of us:
A
We are talking about how Prometheus has a schema for identifying job and instance, and as long as you adhere to that schema, we can emulate service discovery. First of all, we can emulate service discovery, but we can also, meaning, treat job and instance as a source to join with something from the service discovery.
A
Every target is an info metric, which is defined as essentially a one-valued metric. The value doesn't matter; the fact that it's defined is what's important. And the idea I'm trying to reach for here, I'm trying to make sure we're talking about a difficult case: we're talking about a federated-Prometheus-style thing, where you've got an OpenTelemetry collector exporting multiple targets and exporting multiple metric streams into another consumer.
A
That consumer ought to be able to run service discovery and relabeling on its own, because the services have been discovered for it: those are the target_infos, and the metrics are present. So, given a batch of target_infos and a batch of metrics, you ought to be able to do the relabeling that Prometheus is doing without actually running service discovery; you're providing your services alongside your metrics.
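A minimal sketch of the join just described, assuming simplified stand-in types rather than the collector's real pdata API: given a batch of target_infos and a batch of metrics, attach the resource attributes carried by the matching target_info to each metric, keyed by (job, instance).

```go
package main

import "fmt"

// Hypothetical, simplified types standing in for what a collector
// processor would see; not the real pdata API.
type Labels map[string]string

type TargetInfo struct {
	Job, Instance string
	Resource      Labels // resource attributes carried by target_info
}

type Metric struct {
	Name   string
	Labels Labels // must contain job and instance to participate in the join
}

// joinTargetInfo attaches the resource attributes of the matching
// target_info to each metric, keyed by (job, instance) -- the same join
// Prometheus performs implicitly at scrape time.
func joinTargetInfo(targets []TargetInfo, metrics []Metric) {
	index := make(map[[2]string]Labels, len(targets))
	for _, t := range targets {
		index[[2]string{t.Job, t.Instance}] = t.Resource
	}
	for i := range metrics {
		key := [2]string{metrics[i].Labels["job"], metrics[i].Labels["instance"]}
		res, ok := index[key]
		if !ok {
			continue // no matching target_info: leave the metric as-is
		}
		for k, v := range res {
			if _, exists := metrics[i].Labels[k]; !exists {
				metrics[i].Labels[k] = v // existing labels win over resource attributes
			}
		}
	}
}

func main() {
	targets := []TargetInfo{{Job: "checkout", Instance: "10.0.0.1:8080",
		Resource: Labels{"k8s_cluster_name": "prod-1"}}}
	metrics := []Metric{{Name: "http_requests_total",
		Labels: Labels{"job": "checkout", "instance": "10.0.0.1:8080"}}}
	joinTargetInfo(targets, metrics)
	fmt.Println(metrics[0].Labels["k8s_cluster_name"]) // prod-1
}
```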
A
And it just feels like a specification could help us. Otherwise, without a specification, what I'm tempted to do, and I don't like this, but I'm going to explain it anyway and then shut up: without a specification for service discovery, what I'm tempted to do is go make up my own job name, make up an instance ID that matches my IP address or something like that, and shove it into Prometheus within a target_info that explains my resource, and then...
A
And then everything that the Prometheus exporters have written to do service discovery rewriting will work for me, even though a Prometheus scrape never happened. I just pushed my resources as target_infos, and I pushed my metrics as OTLP, and now there's some relabeling logic running inside a collector, which is doing what Prometheus would have done had that service discovery happened, through a whole different machinery, which it's not going to. Wow, my proposal is totally incoherent. I just gave you a pitch of, when I say OTel could use service discovery, what I'd like to see.
A
You ought to be able to find a target and join with it, just the way Prometheus would do. But then, instead of the whole pull-your-service-discovery-infrastructure-through-Prometheus motion that the Prometheus server does, you instead just have a little service discovery agent pushing target_infos into your collection infrastructure, and it can do the join itself.
A
This was long, a lot of words; I'm sorry for that. I've said everything in a very incoherent way. To bring it back to 1782: I think we know what we should do.
A
Turn resources into target_infos. I think we know what users are going to ask for: how do I join my target_infos with my metrics, just the way OTel said they would, when I'm using Prometheus? And I don't have a good answer for that, but I think we can... we can't do it in Prometheus, but we could develop a pipeline, a processor for OpenTelemetry, that would receive...
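A sketch of what "turn resources into target_infos" could look like, assuming the commonly discussed mapping of service.namespace/service.name to job and service.instance.id to instance; the types here are simplified stand-ins, not the collector's real data model.

```go
package main

import "fmt"

// Simplified stand-ins; not the collector's real pdata types.
type Resource map[string]string

type TargetInfo struct {
	Job, Instance string
	Labels        map[string]string
}

// resourceToTargetInfo builds a target_info-style point from a resource.
// Assumes the mapping discussed for OTel -> Prometheus: job derived from
// service.namespace/service.name, instance from service.instance.id.
func resourceToTargetInfo(res Resource) TargetInfo {
	job := res["service.name"]
	if ns := res["service.namespace"]; ns != "" {
		job = ns + "/" + job
	}
	info := TargetInfo{
		Job:      job,
		Instance: res["service.instance.id"],
		Labels:   map[string]string{},
	}
	// Every remaining resource attribute becomes a label on target_info;
	// the value of the metric itself is 1 and carries no meaning.
	for k, v := range res {
		switch k {
		case "service.name", "service.namespace", "service.instance.id":
			continue
		}
		info.Labels[k] = v
	}
	return info
}

func main() {
	res := Resource{
		"service.namespace":   "shop",
		"service.name":        "checkout",
		"service.instance.id": "10.0.0.1:8080",
		"k8s.cluster.name":    "prod-1",
	}
	fmt.Printf("%+v\n", resourceToTargetInfo(res))
}
```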
B
I think the nice thing about the way Prometheus does it is that the same entity that does the service discovery and constructs the job and instance labels in the first place is sort of guaranteed to do so consistently for your metrics and for your target_info.
A
In addition to that, the only leftover feeling and thought I have to share, about what stops me from going any further on this, is related to complexity. Suppose you ignore the fact that, you know, IPs are not uniquely identifying, so you've got some identity that you think is you. But you're going to restart, and suppose you restart with the same identity but a bunch of new resource values.
A
One of them that we've talked about is restart count. So obviously, if you restart, your restart count has changed now, and you may consider yourself a different resource. You have a different start time; I don't know, you could have moved, who knows, to a different host. To join target_info with metrics then requires temporal alignment, essentially, so that your service discovery is not just a state, a point in time; it's a stream of records that changes in time. And so joining service discovery...
A
...data with pushed Prometheus data means knowing exactly at what point in time each service, each job and instance, or whatever identities you're using, were defined, and how they were defined, because I don't want to use stale data from the last wave, or like the last target, for my current target; I want to make sure I'm using current data. So then you have the possibility that data arrives out of order, and if your service discovery information arrived after your metrics, what would you do?
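A minimal illustration of the temporal-alignment problem being described: treat target_info as a stream of timestamped records per identity and pick the record that was in effect at a metric point's timestamp, rather than simply the latest one seen. The types and field names are hypothetical.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// InfoRecord is one timestamped resource description for a (job, instance),
// e.g. carrying restart count, start time, or host.
type InfoRecord struct {
	Since    time.Time
	Resource map[string]string
}

// InfoStream holds the records for one identity, sorted by Since.
type InfoStream struct {
	records []InfoRecord
}

func (s *InfoStream) Add(r InfoRecord) {
	s.records = append(s.records, r)
	sort.Slice(s.records, func(i, j int) bool {
		return s.records[i].Since.Before(s.records[j].Since)
	})
}

// At returns the record in effect at time t, or false if t precedes every
// known record -- the out-of-order-arrival case raised above.
func (s *InfoStream) At(t time.Time) (InfoRecord, bool) {
	i := sort.Search(len(s.records), func(i int) bool {
		return s.records[i].Since.After(t)
	})
	if i == 0 {
		return InfoRecord{}, false
	}
	return s.records[i-1], true
}

func main() {
	var stream InfoStream
	t0 := time.Now()
	stream.Add(InfoRecord{Since: t0, Resource: map[string]string{"restart_count": "1"}})
	stream.Add(InfoRecord{Since: t0.Add(time.Minute), Resource: map[string]string{"restart_count": "2"}})

	rec, ok := stream.At(t0.Add(30 * time.Second))
	fmt.Println(ok, rec.Resource["restart_count"]) // true 1
}
```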
A
In this scenario, the reason why Prometheus doesn't have to answer questions about late-arriving data is that it defines it out of existence: there's no such thing as late-arriving data, either you are scraped or you are absent, you're never late, and that makes the job much simpler for Prometheus. So whenever I think about service discovery, I think this has to wait until we're much further along in the lifespan of OpenTelemetry.
A
Okay, super off topic there, but they're all kind of tangentially tied to 1782. I wanted to rest my case and say the last comment here. David, do you feel like we can move forward? Back to the previous point: the pull exporters know how to export target_infos, and the only confusion I had left over was, if there's no job and instance, what do you do?
A
It sounded like you knew what you were doing, and the preceding conversation just mentioned the possibility of using a resource hash when there's no job and instance, or something like that. I don't know.
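One possible reading of the "resource hash" fallback mentioned here, sketched as a stable hash over the sorted resource attributes that could stand in for a missing instance label; this is illustrative only, not a decided mechanism.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// resourceHash derives a stable identifier from resource attributes, for
// use when there is no service.instance.id to map to the Prometheus
// instance label. Equal resources always hash to the same value because
// the keys are sorted before hashing.
func resourceHash(attrs map[string]string) string {
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(attrs[k]))
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil))[:16]
}

func main() {
	attrs := map[string]string{
		"service.name":     "checkout",
		"k8s.cluster.name": "prod-1",
	}
	fmt.Println("instance =", resourceHash(attrs))
}
```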
B
If it's scraped from a Prometheus endpoint, the only way for someone to mess this up is if they relabel their job and instance labels differently per metric. So if they had a relabel rule that applied only to target_info but not to the actual metric that they sent with it, then you could mess up the join, and yeah.
B
In that case, I think we should just treat that as working as intended. Or, well, then we can find some fallback mechanism. I think what we're defining is just a special case in which, when they do join, here's what we do: we make one be the resource of the other; and if they don't join, then we can just treat them as regular metrics.
B
I think that's okay, but maybe, or if there are other fallbacks that are better, then we can do those.
A
All right, I apologize for that lengthy digression into service discovery. Hopefully some of it will stick, and you will all maybe think about it at some point. David, my other comment to you was about... now I have to find the other comment.
A
I linked you to a section talking about how to handle unknown start times, entirely written with Prometheus in mind. I wrote that when I was working on the Prometheus sidecar a year ago, so we basically had to deal with the same type of problem, and I think it gives two solutions that are both kind of meaningful, and I think either is fine, but the natural solution is to just turn all deltas into, well...
A
This comment here was about the preceding discussion; we can ignore it. I don't think we want to drop this if we're starting to use it, but I think you know that too. And I have this point, which is potentially contentious, but up-down counter is, I think, the natural way to represent target_info as well as up metrics.
A
I know Prometheus treats up as a gauge, but in my thinking, up is how many things are up. It's a count, and it can't be monotonic because it goes down again, so it's an up-down count in my thinking. You could then issue a query asking how many services are up, and it's the sum of all ups, and that's correct, so the natural aggregation should be sum. Therefore it doesn't really matter.
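A sketch of modeling up as a non-monotonic (up-down) counter with the OTel Go metric API, as suggested above; the instrument and attribute names are illustrative assumptions, not a specified convention, and it assumes the go.opentelemetry.io/otel modules are available.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	ctx := context.Background()
	meter := otel.Meter("example/up")

	// "up" as a non-monotonic counter: its value can go back down.
	up, err := meter.Int64UpDownCounter("up",
		metric.WithDescription("number of targets currently up"))
	if err != nil {
		panic(err)
	}

	target := metric.WithAttributes(
		attribute.String("job", "checkout"),
		attribute.String("instance", "10.0.0.1:8080"),
	)

	up.Add(ctx, 1, target)  // target became reachable
	up.Add(ctx, -1, target) // target went away again

	// Summing this instrument across targets answers "how many services
	// are up right now", which is why sum is the natural aggregation.
}
```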
A
It'll turn back into a gauge with Prometheus anyway, so I think that's cool. And then this was my only substantial contribution here, I think: if you see deltas, the fairly easy and correct thing to do is turn them into cumulatives.
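A minimal sketch of the delta-to-cumulative conversion just described: keep a running sum per stream and add each incoming delta to it. The stream key is hypothetical, and a real converter would also have to track start timestamps and detect resets.

```go
package main

import "fmt"

// streamKey identifies one metric stream; a real implementation would key
// on the full label set, not just these fields.
type streamKey struct {
	Name     string
	Job      string
	Instance string
}

// DeltaToCumulative accumulates delta points into cumulative sums.
type DeltaToCumulative struct {
	sums map[streamKey]float64
}

func NewDeltaToCumulative() *DeltaToCumulative {
	return &DeltaToCumulative{sums: make(map[streamKey]float64)}
}

// Record adds one delta point and returns the cumulative value so far.
func (c *DeltaToCumulative) Record(key streamKey, delta float64) float64 {
	c.sums[key] += delta
	return c.sums[key]
}

func main() {
	conv := NewDeltaToCumulative()
	key := streamKey{Name: "http_requests_total", Job: "checkout", Instance: "10.0.0.1:8080"}

	fmt.Println(conv.Record(key, 5)) // 5
	fmt.Println(conv.Record(key, 3)) // 8
	fmt.Println(conv.Record(key, 2)) // 10
}
```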
A
This came up in the PR here, which is how I ended up seeing your PR; I didn't go looking for PRs to review. This is lengthy; I don't think we should read it here. statsd is definitely not Prometheus, and so, you know, I don't want to sidetrack this meeting at all, and it's also not very well specified, so we're kind of making it up, but the upshot is, if we...
A
What was I saying here? If the Prometheus exporter will handle delta-to-cumulative conversion, then the statsd receiver can be simplified. And the user who sent this in actually has a real application here; statsd is still a thing in the world, and what was happening is that statsd was sending deltas and Prometheus was dropping them. But Prometheus is already doing basically all the work needed to do this correctly, if it would just add the deltas together, which is almost no extra code.
A
I guess, and so my point was: the statsd receiver will be much easier to write, and straightforward, and so will the Prometheus one, as long as we put this delta-to-cumulative handling inside the Prometheus exporter. There is no standalone delta-to-cumulative processor, which is odd, because it's such an important piece of functionality.
A
I think the reason it doesn't exist is that it's super easy to roll it into whatever you're doing, basically, and not have a separate pipeline stage, which just means extra overhead. We should probably add...
A
Yeah, I mean, something that knows how to add each point, or merge points, correctly would be a good helper, because, you know, for a sum it's as straightforward as adding two numbers together, but for histograms it's a little bit more complicated. And I would say that the collector is woefully inadequate for helpers in general. Every time I get into the collector code and wonder how do I write a test, I feel like I have to do everything myself, and it's pretty hard to do.
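A sketch of the kind of "merge points" helper being asked for, for explicit-bucket histograms: two points can only be merged when their bucket boundaries match, and then counts, sums, and per-bucket counts are added. The type is a simplified stand-in for the collector's histogram data point, not its real API.

```go
package main

import (
	"errors"
	"fmt"
)

// HistogramPoint is a simplified explicit-bucket histogram data point.
type HistogramPoint struct {
	Bounds       []float64 // bucket boundaries
	BucketCounts []uint64  // len(Bounds) + 1 buckets
	Count        uint64
	Sum          float64
}

// mergeHistogram adds two histogram points with identical boundaries.
func mergeHistogram(a, b HistogramPoint) (HistogramPoint, error) {
	if len(a.Bounds) != len(b.Bounds) {
		return HistogramPoint{}, errors.New("mismatched bucket boundaries")
	}
	for i := range a.Bounds {
		if a.Bounds[i] != b.Bounds[i] {
			return HistogramPoint{}, errors.New("mismatched bucket boundaries")
		}
	}
	out := HistogramPoint{
		Bounds:       a.Bounds,
		BucketCounts: make([]uint64, len(a.BucketCounts)),
		Count:        a.Count + b.Count,
		Sum:          a.Sum + b.Sum,
	}
	for i := range out.BucketCounts {
		out.BucketCounts[i] = a.BucketCounts[i] + b.BucketCounts[i]
	}
	return out, nil
}

func main() {
	a := HistogramPoint{Bounds: []float64{1, 5}, BucketCounts: []uint64{2, 1, 0}, Count: 3, Sum: 4.5}
	b := HistogramPoint{Bounds: []float64{1, 5}, BucketCounts: []uint64{1, 0, 2}, Count: 3, Sum: 13}
	merged, err := mergeHistogram(a, b)
	fmt.Println(merged, err)
}
```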
A
Some helpers would be nice. The reason I say this is that I've been working on the histogram exposition for the exponential histogram, and as soon as you introduce that, you're going to end up having to convert it back, and you won't want every single exporter to do a bespoke conversion from histogram points. So it sounds like we need some utility classes to help merge points and convert points between appropriate representations. But I actually opened a similar...
A
Okay, yeah, the collector code, I see that too, and every time I get in there, it's just not a priority for me to try and rework that myself, so yeah, okay. So this is super technical; it's about how to convert delta into cumulative and where the state gets managed, and you're right, it should be done in a helper. But I don't think there's much reason to object to having the Prometheus exporter manage cumulative state and count deltas for you.
A
Well, that's right. The SDK will always say the default should be to respect what the exporter requests; the kind of carve-out in the spec is that you don't set temporality intentionally unless you're really trying to and the exporter supports both. So if you have a Prometheus exporter, it should insist on cumulative temporality first, and then, of course, you know, the SDK is responsible for it, but it's doing kind of the same thing. It's like: you already have a way to aggregate and merge these things, right?
A
And that's why this statsd receiver thing is so esoteric in the corner cases: when you're an SDK, you have your start time and you know that your value was reset to zero, so the sum of all your up-down counters is correct, and the conversion to gauge is correct. The preceding discussion about turning a statsd non-monotonic delta into Prometheus data is ambiguous, because you don't know the zero value, because you don't know about resets, and you don't have start times the way you do when you're in the SDK.
A
I hope it helps. This is good.
A
Well, I think I've talked too much, so someone else should talk, or else maybe we can just end here.
A
I'm still working on views specs and views implementations, so I've got plenty to do.