From YouTube: 2021-05-19 meeting
A
Okay, so just let me know. She's having some trouble joining, so let's go ahead and get started. It looks like we've only got one issue on the agenda thus far, so if you have anything you want to discuss, please get it added to the agenda. Before we start, real quick, I wanted to let everyone know that collector 0.27 was released yesterday.
B
It will go through the elements in the scrape config, extract the static_configs, and write them into the scrape_targets.yaml file, and this file will be created under the base path that the user configures. There are two components that should be informed of the scrape_targets.yaml file. One is the discovery manager: we're going to wrap the file into a file_sd_config and pass it to the discovery manager.
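A minimal sketch of that wrapping step, assuming the Prometheus config and discovery packages; the helper name and the five-minute interval are illustrative, not the receiver's actual code:

    package scrapetargets

    import (
        "time"

        "github.com/prometheus/common/model"
        "github.com/prometheus/prometheus/config"
        "github.com/prometheus/prometheus/discovery"
        "github.com/prometheus/prometheus/discovery/file"
    )

    // wrapWithFileSD replaces a scrape config's inlined service discovery with
    // a file_sd_config that watches the generated scrape_targets.yaml, so the
    // discovery manager picks up changes to that file.
    func wrapWithFileSD(scrapeCfg *config.ScrapeConfig, targetsFile string) {
        scrapeCfg.ServiceDiscoveryConfigs = discovery.Configs{
            &file.SDConfig{
                Files:           []string{targetsFile},
                RefreshInterval: model.Duration(5 * time.Minute),
            },
        }
    }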
B
So the discovery manager will refer to the file for the target information. The other component is the target update manager. For that, we build a map from the job names to the target files so it can be used by the update server; we then create a scrape target update server and pass the map to it, so the update server will know which file to update when it receives a request from the client.

B
So, in order to support this service, we added a new label to the configuration, the scrape target update, and there are four fields under this label: enable, port, refresh interval, and base path. The user can start the service by setting enable to true. The port is an int value that specifies which port the server listens on; if the user does not specify one, we will set the default to 7230. And the refresh interval.

B
The refresh interval controls how frequently the receiver should retrieve the target information, and the default value we set here is five minutes. The base path specifies the base directory of the scrape target files. So here is the actual implementation of the configuration in the config struct.
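As a rough sketch of those four fields, assuming the collector's usual mapstructure-tagged config style; the exact struct and tag names in the receiver may differ:

    import "time"

    // ScrapeTargetUpdate holds the settings described above. This is a sketch
    // of the shape of the configuration, not the receiver's actual struct.
    type ScrapeTargetUpdate struct {
        // Enable starts the scrape target update server when true.
        Enable bool `mapstructure:"enable"`
        // Port is the port the update server listens on; a default is
        // applied when it is left unset.
        Port int `mapstructure:"port"`
        // RefreshInterval controls how often the target information is
        // re-read from the scrape target files (default five minutes).
        RefreshInterval time.Duration `mapstructure:"refresh_interval"`
        // BasePath is the base directory for the generated scrape target files.
        BasePath string `mapstructure:"base_path"`
    }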
B
And there are two different approaches for what kind of data the operator, the client, will pass in: one is that the client passes in the file SD config, and the other is the static configs. We listed the pros and cons here, and our current design is that the client will pass in the static configs, because that way the client doesn't need to worry about the file names; they just need to pass in the configs and we will take care of the rest.
D
Yeah, so for the server implementation we wanted to implement an endpoint that the user or operator would call to make a request to change the list of scrape targets, and the design decision we came up with for the server implementation was to go with a normal HTTP REST method. We thought that would be better for our case and less complicated, since all we needed was an endpoint to be able to update the list of scrape targets.

D
So the JSON body would just be a list of scrape targets, and it would overwrite the specified file for the specified job in Prometheus. So we decided to go with a REST endpoint using the gorilla/mux router. If you go down, you can see the server setup. We have a scrape target update server manager that holds the logger.

D
It also holds the job target files map that I was talking about, as well as the port and the router. With this we can make a function that creates a new manager with defaults, or takes in different inputs depending on the user. After the router is created and the server update manager is created, the run function just runs it on the port that was specified, or the default one, and it handles an endpoint that depends on the job and the job id variable.

D
So the endpoint that the user hits depends on the job id; the endpoint for the router is job specific. The gorilla/mux router handles this and takes in PUT requests to update the list of scrape targets, and when this endpoint is hit, it calls a method called m.updatePrometheusFileSD, and the function is defined below here.

D
All it really does is read the request body; after you read the request body, you get the job, you get the file from the job name that was specified in the endpoint, and then you overwrite that file with the request body, which in this case would be a list of static config entries that Prometheus would automatically pick up due to the file_sd_config, and then update those targets.
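A hedged sketch of the server pieces just described, using gorilla/mux; the route path, field names, and status codes are illustrative assumptions rather than the PR's exact code:

    package scrapetargets

    import (
        "io"
        "net/http"
        "os"

        "github.com/gorilla/mux"
        "go.uber.org/zap"
    )

    // scrapeTargetUpdateManager holds what the run function needs: a logger,
    // the job-name-to-target-file map, the listen port, and the router.
    type scrapeTargetUpdateManager struct {
        logger         *zap.Logger
        jobTargetFiles map[string]string // job name -> scrape_targets.yaml path
        port           int
        router         *mux.Router
    }

    // registerRoutes wires the job-specific PUT endpoint; {jobid} selects
    // which job's target file is rewritten.
    func (m *scrapeTargetUpdateManager) registerRoutes() {
        m.router.HandleFunc("/jobs/{jobid}/targets", m.updatePrometheusFileSD).
            Methods(http.MethodPut)
    }

    // updatePrometheusFileSD reads the request body (a list of static_config
    // entries) and overwrites the file mapped to the job, which the
    // receiver's file_sd_config then picks up automatically.
    func (m *scrapeTargetUpdateManager) updatePrometheusFileSD(w http.ResponseWriter, r *http.Request) {
        job := mux.Vars(r)["jobid"]
        path, ok := m.jobTargetFiles[job]
        if !ok {
            http.Error(w, "unknown job", http.StatusNotFound)
            return
        }
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := os.WriteFile(path, body, 0o644); err != nil {
            m.logger.Error("failed to write scrape target file", zap.Error(err))
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusNoContent)
    }

With something like this in place, the run function only needs to serve the router on the configured or default port.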
B
Yeah, so for the integration tests, basically we have two: one is for the receiver configuration and one is for the scrape target update server. For the integration test for the configuration, we're going to set up this path here, create a new receiver instance, and try to start it.

B
So we are going to set up mock target servers and send an HTTP request to the target update server to modify the targets, and we are going to assert that the metrics will be scraped from the new targets.

D
We include unit tests to check that the server starts correctly and stops correctly, as well as to test the update files function, in which we just create a test Prometheus job, make a sample test request there using gorilla/mux, and see if it is properly handled and properly updates the file. From there, as we develop, we can look to see if we need to include any other tests, but for now that covers the amount of testing we'll have for the server. And that's the end of our design; we just wanted to get everyone's thoughts about the design and implementation and any questions you might have, and we would be happy to answer.
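A hedged sketch of that unit test, building on the manager sketch above and using httptest; the route, file name, and assertions are illustrative:

    package scrapetargets

    import (
        "net/http"
        "net/http/httptest"
        "os"
        "path/filepath"
        "strings"
        "testing"

        "github.com/gorilla/mux"
        "go.uber.org/zap"
    )

    // TestUpdatePrometheusFileSD sends a PUT through the router and checks
    // that the job's scrape target file was overwritten with the request body.
    func TestUpdatePrometheusFileSD(t *testing.T) {
        dir := t.TempDir()
        targetFile := filepath.Join(dir, "scrape_targets.yaml")

        m := &scrapeTargetUpdateManager{
            logger:         zap.NewNop(),
            jobTargetFiles: map[string]string{"test-job": targetFile},
            router:         mux.NewRouter(),
        }
        m.registerRoutes()

        body := strings.NewReader(`[{"targets": ["localhost:9100"]}]`)
        req := httptest.NewRequest(http.MethodPut, "/jobs/test-job/targets", body)
        rec := httptest.NewRecorder()
        m.router.ServeHTTP(rec, req)

        if rec.Code != http.StatusNoContent {
            t.Fatalf("unexpected status code: %d", rec.Code)
        }
        got, err := os.ReadFile(targetFile)
        if err != nil || len(got) == 0 {
            t.Fatalf("scrape target file was not written: %v", err)
        }
    }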
E
So I just wonder if it should be reversed, so that it would fetch the targets from some other configured source. For example, there's the ECS observer; I don't know if you're familiar with observers, but they're extensions that find targets.

E
I guess my initial inclination is that I would reverse that, because now you have this state issue: what if you fail to push the targets to the Prometheus receiver, and on restart, if the collector restarts, how does it know that it needs to re-push those targets to it? So it kind of feels like my initial inclination is that this should be fetching targets from somewhere else.

E
Basically, it should be a client instead of a server, but I could be missing something. The ECS observer kind of does something similar, right? It discovers ECS targets and then configures the Prometheus receiver; it generates a targets file and then this receiver picks those up automatically.
A
I think part of the reason for the initial design is that this is part of the overall work to get the operator to be able to load balance across a set of collectors. The design there was to have the load balancing component that does the actual Prometheus service discovery push out changes to each of the collector instances as changes are received. That puts the component that does the service discovery in control of pushing out those changes, so that if there's a change and all of the collectors change, they all change at the same time.

A
So you don't end up, for some time, with two collectors scraping the same target because one has been updated and the other hasn't. But it's certainly worth exploring whether flipping that relationship, and having the load balancer component expose a service that the collectors could reach out to to pull targets, would be worthwhile.
E
Yeah, I mean, the operator would still do the service discovery, and it would generate different target files for different clients, right? So you'd have to have some distinguishing key for each Prometheus receiver that's requesting the target file, so you could send the appropriate set of targets to each one.

A
One of them is going to succeed and the other is going to fail, because the samples are already there and they'll be out of order, or something like that. So it may not be a huge issue, but it's something to think through.
E
Yeah, and it also feels more natural; given that Prometheus is a very scrape-oriented protocol, the reverse relationship just feels a little bit more natural, and it doesn't require many, or any, changes to the Prometheus receiver. I think it supports that; you just have to somehow feed it. I don't know the exact details.

E
Make sure you check out the ECS observer if you haven't already, since it does exactly this. I don't know the exact mechanism by which it feeds the targets to the receiver, but it does somehow; I think it might be through a file.

A
In a pull model, one thing I would want to explore as well would be whether implementing another service discovery mechanism that could do that discovery directly within the Prometheus service discovery would be worthwhile, rather than coupling it to collector observer components or anything like that that would have to be run alongside the receiver.
G
Speaking from the peanut gallery, I agree with your statement, Anthony. At Lightstep we have prototyped essentially a push mechanism for service discovery. I think it's a good idea; it just takes a lot of investment to pull it off. The idea would be that you run a single service discovery agent that does everything Prometheus is doing today and then sends that data as OTLP to all the receivers that need to use it, and then there's a separate question of how you shard it.

G
I've prototyped this. Basically, today's Prometheus server uses an up metric to say which servers are up, and there's a one-to-one correspondence between up metric values and the results of service discovery. So the idea was essentially to produce a single metric, which I called "present" in my prototypes, but it could be anything, that says which service discovery targets ought to be there, and then you've separated the question of what ought to be there from what is actually alive.

G
And then the theory was that the up value we have today would equal the logical AND of "present" from service discovery and up from pushed data, essentially.

G
And as long as we're talking about future visions here, I like this new schema URL that we've been adding to OTLP. The idea would be that if you had a schema that said what your core identifying attributes are, which is job and instance if you're Prometheus, then you could just push some data with a job and instance.

G
When we were talking through this, it seemed that there is a lingering concern that you can't have too many processes bringing down the Kubernetes master. So this was partly a sort of scalability project, thinking through what it would take to get to the point of having just one or two processes doing all that service discovery and then broadcasting that state, using the mechanisms we already have, like distributing metrics or whatever, to a pool of collectors. I just want to keep that vision alive and mention it.

G
You could push your data, but what's your job and what's your instance? Instance is maybe something you can guess, but for job, anyway, there are some conventions that might be needed.
F
Yep, these are good points, Jay; thanks for bringing that up. We'll take a look at it and see, and in the short term we'll also take a look at the ECS observer. Thank you for the link.

H
Hey, I have a small question. I see that you have a static and a file_sd config. How does this land at the receiver as a Prometheus config? For example, if I put Prometheus targets in there, like a service or endpoints and stuff like that.
A
Yeah, so that would be part of an earlier step in this operator service discovery distribution mechanism. Basically, what would happen is the user would provide a custom resource to the OpenTelemetry operator saying: give me a set of collectors with N replicas, and here's the Prometheus config I want to use. Maybe it has kubernetes_sd_configs and maybe even some static configs that the user already knows about.

A
Before the operator provides that configuration to the collector stateful sets that it creates, it would pull all of that discovery information, or that discovery configuration, out and replace it with a single placeholder static config that says: hey, this job is going to get information later. Then it provides the discovery configuration that the user supplied to a load balancer component that runs the service discovery mechanism and does all of the scrape target discovery, as Josh was mentioning, so that we have one single component hitting the Kubernetes API and making all of those requests rather than 10, 20, 50, however many collectors end up running. Then that component distributes the discovered target information out to the collectors after splitting it up between them. I think in the appendix of the doc that is linked in the meeting notes there is another document that outlines that design.
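A small sketch of that operator-side rewrite step, assuming the Prometheus config and discovery types; the function and the empty placeholder are illustrative:

    package operator

    import (
        "github.com/prometheus/prometheus/config"
        "github.com/prometheus/prometheus/discovery"
    )

    // extractDiscoveryConfigs pulls the user's service discovery configuration
    // out of each scrape job (to hand to the load balancer component) and
    // leaves a placeholder static config so the job's targets can be filled
    // in later through the receiver's update endpoint.
    func extractDiscoveryConfigs(cfg *config.Config) map[string]discovery.Configs {
        extracted := make(map[string]discovery.Configs)
        for _, sc := range cfg.ScrapeConfigs {
            extracted[sc.JobName] = sc.ServiceDiscoveryConfigs
            sc.ServiceDiscoveryConfigs = discovery.Configs{
                discovery.StaticConfig{}, // placeholder; targets arrive later
            }
        }
        return extracted
    }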
Okay, thank you, Anthony.
C
I did have one question. I think I asked it on the issue as well, but I'll re-raise it here. I think it would be nice, in order to make quick progress on this, if it was separate from the Prometheus receiver to start with.

C
It seems like, while it's coupled logically in terms of being useful together, it looks like we're just running a server that writes something to a file, and I wonder, if we went ahead with this as an experimental thing, whether we could make quicker progress and get people using it and trying it out more quickly.
A
The initial design had it as a separate component; there was an extension that ran a service that did this. That design then requires that whatever is configuring the receiver sets up the file_sd_configs and knows the location those files are going to be in, because that extension doesn't get a chance to rewrite the scrape config, and I think that's a nice advantage of the approach that Hui and Iris have come up with.

A
By being able to provide a static config, you can provide the initial scrape target sets if they're known. I don't think the load balancer will know them, but some other consumers might, and it encapsulates the logic of where those files actually end up entirely within the receiver and that component. I don't think that can be done if there are two separate components in the collector.

A
Yes, it's certainly doable; I don't think it's not feasible. I'm just not sure how much it buys us in terms of velocity to delivery, and I think it loses an interesting potential attribute that we gain from having them coupled.
F
Uncoupled, but David, again, I think in the long run, or even in the medium run, we should look at how we could run this separately. I don't think it'll buy any velocity, given the velocity of getting things merged, but definitely, right now, I think we can at least leverage, as Anthony said, some of the info from the receiver itself.

C
We're now going to commit to all the things that are in the Prometheus receiver, and if this is something that we're playing around with to start, trying to see if it works well, then we might hesitate to put it directly there to start unless we're confident in it.
F
All right, can we move on? I had a specific request of everyone. One of the things I've been doing is walking through the PRs that are open; I know Grace, you have some PRs, and others do. I can share my screen if that's useful, but this is what I'm looking at on the collector, since many of the Prometheus PRs are there.

F
All right, so again, I've been working with Bogdan to get these PRs reviewed and merged, and there are some of these that are waiting for authors. I think, Emanuel, you would probably love to have feedback on the discussion going on in the write-ahead log PR; I think Bogdan was in the process of reviewing it yesterday and he has some questions. Also, David, it would be awesome to have your review there; I think you might have looked at it.

F
I can just click on it; I'm just addressing the waiting-for-author ones. GitHub is being a little slow today.
F
I think Tigran has taken a look at it, and Jana has, but David, if you could please take a look at it, that would be useful. I think jmacd has also been working on a general write-ahead log approach, but again, we decided that we would not be implementing a general one; we'd keep this very focused on handling the WAL for Prometheus.

G
I'm actually not doing anything of that sort. There's a request, and I have it backlogged, to write up something about treating late-arriving data, which I think would fall out of a solution that had a stable WAL in it, but I'm not particularly looking at that option. I've been struggling to learn how to read a Prometheus WAL for months and months and months.
K
Yeah, I pushed up some changes and responded to those comments.

F
Hopefully that answers Bogdan's questions. But Richard, is there anybody else on your end, on your team, who could take a look at this PR? It would be great to have some feedback from the experts on your end as well.
F
There's also another PR, for the up metric again, which I think Vishwa and Jana have taken a look at, but this is also a blocker, because we do want to implement this in order to actually get the receiver to do what it needs to do. So David, that's something we would love to have your review on.

C
That's my personal preference; I'm not a maintainer or approver of the component, but I think it's a question of where we want. Sorry.

C
If you could hear me: it's a question of where we want the metric to end up. To summarize the two approaches for people listening: one is that the up metric is part of the metrics received by a scrape, so you scrape, and some amount of metrics from the receiver go into that pipeline and are processed the same way all the other metrics coming from that Prometheus endpoint are processed; that's the other PR, not the PR that's up right now. This PR adds the up metric as a self-observability metric.
K
Okay, yeah, just for a bit of an update on that: right now what this does is also inject that metric into the pipeline. To add to what you mentioned about the approaches, the other PR's approach is to basically enable all the internal metrics from a receiver, like scrape duration, scrape timestamp, and all those. So it's doing much more than was intended, which might be nice, subjectively, but I figured this approach is how I'd do it.

C
I think, as long as we have the same outcome, it may be fine to do either approach. I was almost going to take a look at it myself, because the second author was having some trouble fixing unit tests, but I think in theory the second approach should be simpler, since it's just not throwing away those metrics rather than generating new ones, which is part of the reason why I preferred it. But also, I think we've been working on this for, what, two months now, and haven't gotten to something acceptable.
F
Yeah, and David, that would be good, because again, I agree with David that we've been working on this for a couple of months now, when we should pick an approach, take a decision, and get the implementation merged. Then, if there are shortcomings that we have not identified in the short run, let's make sure we fix and address them, right?

F
David, which is the other PR? Sorry, is it also in the collector? These are all the PRs that are tagged as open and in progress right now.

F
But Emmanuel, can we please take a look at this, and then let's figure out what we need to do.
K
Yeah, sure. My only question here, okay, not my only question, but one of the questions I have, is: do we want all the internal metrics scraped or not? Are we doing much more than we should be doing within this new PR?

J
Up is the most important by far, but in particular, timing is highly relevant when it comes to debugging what your system does, because if your metric generation suddenly takes longer, or you have a few in your fleet which take longer, that is super valuable feedback or signal. Beyond that, it's more of a debugging thing, but it's one of those cases where if you don't have them when you need them, you're going to have a bad time. But those two are, out of that group, the most important by far.

J
It's a question of open debate what we want to mandate, if anything; I honestly don't currently know what the team consensus will be.
F
But David, that's a very good question you bring up, and maybe you can file an issue on that, because we should at least capture whether these other metrics are also part of conformance, or whether, even if those tests exist, they are only for very specific use cases.

F
Thanks. But Emmanuel, again, that's a good point, because it's a good question what the other metrics are. It looks like at least the timing information, as you saw.

F
Okay, cool, that's great. So going back to the other PRs that are open: this is the exporter PR that Jana has, Jana's PR for a custom retrying mechanism for the PRW exporter. I think David has already looked at this, and Carlos has looked at it.

K
It's waiting on a prior PR, because GitHub doesn't support chaining.
K
The other PR is just left with one comment: someone requested that, in addition to making those equivalence tests, we add unit tests. My approach here is that, given that we already have really bad backlogs, we're better off first getting the equivalence tests in, and then, when it's time for deletion, we work on unit tests, because honestly this stuff is going to take two weeks or more to review.

K
And I actually have a comment in there that it depends on.

F
Sorry, let me go and read it. Okay, it's a follow-up of 3131, okay, cool. So this is just waiting for the other one to be merged. Okay, moving on, this is your second PR. So 3139 is the other one, right? Yeah.
F
Vishwa, can you take a look too?

F
Okay, we looked at that one, we looked at the write-ahead log, and we looked at the up metric propagation. So this one is also yours, Emanuel.

K
Yeah, so for this one Anthony raised a great question. What happened is, one Friday night I was prowling through issues to solve and I saw this one, and in the issue discussion it seemed like it was okay to turn those debug values up. For context, what was happening is that during a scrape, when targets weren't available, that information wasn't necessarily getting printed out until you had debug level set for Prometheus.

K
So folks were saying: hey, could we turn that into a warn? So I just went and implemented it, but Anthony raised a good concern that maybe we'll be letting users shoot themselves in the foot by creating so much data volume, and that's now a question that the Prometheus working group would need some help answering: is it okay for us to turn specific debug error values into warn messages?
A
Yeah, and my concern was that Prometheus had specifically considered this log message and decided to keep it at debug because of the potential volume of logs it would produce.

G
I might add that in the work we're doing at Lightstep, I've been working on this Prometheus sidecar, which has to deal with essentially every point in the Prometheus WAL, and we end up with a lot of these questionable points that we're having to deal with, and logging is never a good answer, as the volume can become its own problem. But we have been talking in the data model group, and in the spec group in general, about recording dropped metrics and dropped spans.

G
Yesterday it was discussed in the data model group, and I think Josh Suereth said he might file something; I'm hunting for that right now, but it's an active discussion.

G
It came up in spec because of dropped links and dropped attributes on spans, which are dropped within the data field itself. If we were dropping attributes on metrics because of a limit, that would be the same type of drop. But if it's an entire batch of dropped data, perhaps we should be reporting that as a metric. It's a big project, though, to spec all that out.
A
So
on,
on
the
flip
side,
I
think
that
there
is
value
in
logging
things
like
this,
but
I
think
that
in
order
to
do
it
effectively
to
avoid
a
flood
of
data
that
might
be
more
than
is
anticipated
or
can
be
handled,
we
really
need
some
sort
of
mechanism
whereby
we
can
de-duplicate
log
messages
and
print
out.
You
know
this
log
was
received
a
thousand
times
in
the
last
second,
as
a
single
message,
rather
than
a
thousand
messages
per
second.
G
One day this will become an issue for an OTel logging API, if there is such a thing. I had to solve that same problem as well; I have a "log every period of time" implementation inside the sidecar, so that I can log about invalid points rarely, in a way that works. That's what I would recommend; it's a little simpler than implementing what you just described.
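One simple approach along those lines is zap's sampling core, which caps repeats of an identical message per tick rather than printing a count. A small sketch, with illustrative numbers rather than a recommendation:

    package logging

    import (
        "time"

        "go.uber.org/zap"
        "go.uber.org/zap/zapcore"
    )

    // newSampledLogger wraps a zap core so that, per one-second tick,
    // identical messages are logged a few times and then mostly dropped,
    // which keeps a noisy per-scrape warning from flooding the output.
    func newSampledLogger(base *zap.Logger) *zap.Logger {
        return base.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
            // Allow the first 5 copies of a message each second, then only
            // every 100th copy after that.
            return zapcore.NewSamplerWithOptions(core, time.Second, 5, 100)
        }))
    }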
J
Two points here. First, taking a page from the networking community, which had all the scaling issues 20 years ago, as I keep joking: what most network vendors do is just print a "this message has been repeated X amount of times in this amount of time" type of message. And the other thing is that that is also what metrics are: you just serialize your events into a metric, and you see if that one thing goes through the roof.

K
In my book, it's very critical for people to know when scraping has failed, so perhaps we could meet in the middle. Maybe we detect if it's the specific error message and then actually turn that into a warn, because I've struggled with this at times.

K
When implementing receivers, you do not know what's working or what's not. So I think perhaps we could make a compromise whereby we detect that we failed to actually scrape and then convert that into a warn. Would that be reasonable?
J
Sorry, if you were talking to me, I just got a ping on Slack; can you repeat the last thing?

K
Yeah, sorry. Right now the PR is stuck on the fact that we would be turning things that are meant for debug level, like errors that are meant for debug level, into warns, and the fear would be large volumes of data being produced for a customer.

J
Basically, that is for sanity reasons, to force proper endpoints, because otherwise you will have this drift towards a worse and worse overall system status. It's a debatable design choice, and there are very good arguments on both sides, but that's how we do it on the Prometheus side. How that is represented within the pipeline I don't have a strong opinion on; whatever suits you. I guess the one thing is: if it's truly broken, then it must not make its way into Prometheus for this to be considered compatible.
A
So
I
think,
that's
potentially
a
viable
approach.
My
concern
would
be
the
fragility
if
it
depends
on
looking
at
the
message
to
identify
this
error,
then
that
might
be
fragile
if
there's
a
sentinel
error
value
that
we
can
easily
compare
against.
That
might
work
better.
I
think
right
now,
it's
just
looking
at
any
error.
That's
coming
out
and
changing
the
level
from
warrant
to
debug,
so
we
need
to
see
specifics
of
how
that
would
be
implemented,
but
it's
potentially
less
problematic.
F
Okay, I think we're at time, so that's good. Emanuel, maybe you can think about it and respond on that, maybe modify the PR and resubmit. Then there's only one more, which is the receiver change to include job and instance labels that Grace has. I think we're at time, but David and Vishwa have already approved this, so if you can take a look, then we can move it forward; I'll ask Rihanna as well. Well, that was the list of open PRs for right now.