From YouTube: 2021-08-18 meeting
A
Hello, we'll give people a few minutes to join. Please remember to add your names to the meeting notes.
A
Okay, I guess we're about four minutes after, so we'll get started. Grace, do you want to give us a quick rundown of what this issue is about?
B
Yeah, so this is just, like, a small PR I made. We noticed that in actual Prometheus, if you have metrics that are scraped without the type hint, Prometheus will actually still send those and just treat them as a gauge, whereas in the collector it treats it as an invalid metric, and it won't really be converted to OTLP correctly because it's unspecified, so then it's kind of just an empty object.
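A minimal sketch of the fallback being described, in Go with placeholder names rather than the collector's actual metadata handling: when a scraped metric family has no type hint from a "# TYPE" line, treat it as a gauge, matching Prometheus, instead of rejecting it as invalid.

```go
// Illustrative only: MetricType and the metadata map are stand-ins for
// the collector's real metadata cache, not its actual API.
package main

import "fmt"

type MetricType string

const (
	MetricTypeCounter MetricType = "counter"
	MetricTypeGauge   MetricType = "gauge"
)

// metadata holds declared types from "# TYPE" lines; families scraped
// without a type hint are simply absent from it.
var metadata = map[string]MetricType{
	"http_requests_total": MetricTypeCounter,
}

// typeForFamily falls back to gauge when no type hint was scraped,
// rather than treating the family as invalid and dropping it.
func typeForFamily(name string) MetricType {
	if t, ok := metadata[name]; ok {
		return t
	}
	return MetricTypeGauge
}

func main() {
	fmt.Println(typeForFamily("http_requests_total")) // counter
	fmt.Println(typeForFamily("queue_depth"))         // gauge (untyped fallback)
}
```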
B
So that's one part of it, which just introduces treating it as a gauge if we want to keep the same functionality as Prometheus. And then the second part is with the, so, the Python SDK has, like, some weird things in it where the metadata type lookup by name doesn't include the, like, underscore total, and I know, Anthony, you made some fixes with that.
B
So then for the lookup it strips the underscore total from the name, but I was seeing that with that it's never added back in, so then the metric name is actually without it, whereas in Prometheus it still has the underscore total. So I was just adding back in, like, a way we can still have that metric name but still have the lookup without it. So I'm good with doing it differently if there's a better way; this is just, like, the way I could think of for it.
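A rough sketch of that suffix handling, again with hypothetical names rather than the receiver's real code: the metadata lookup uses the name with "_total" stripped (which is how the "# TYPE" line is keyed for counters), while the emitted metric keeps its full sample name.

```go
// Illustrative only: the metadata map and lookup are placeholders for
// the receiver's metadata cache, not its actual API.
package main

import (
	"fmt"
	"strings"
)

// In the Prometheus exposition format a counter is declared as
// "# TYPE http_requests counter" while its samples are named
// "http_requests_total", so metadata is keyed without the suffix.
var metadata = map[string]string{
	"http_requests": "counter",
}

// resolve looks up the type with "_total" stripped but returns the
// original sample name, so the suffix is preserved on the metric.
func resolve(sampleName string) (name, metricType string) {
	lookupName := strings.TrimSuffix(sampleName, "_total")
	if t, ok := metadata[lookupName]; ok {
		return sampleName, t
	}
	return sampleName, "gauge" // untyped fallback from the first change
}

func main() {
	fmt.Println(resolve("http_requests_total")) // http_requests_total counter
}
```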
A
All right, yeah, that whole bit about total suffix handling I was never very clear on. I don't know that there are solid test cases around what the appropriate handling should be. So if you've got some scenarios that we can use to add to our test cases, that would be great.
B
Okay, cool, yeah, I'll definitely add some tests for that too, to this PR.
A
Yeah, I think I've seen another issue reporting the fact that not having a metric type in the exposition results in the metric being dropped. If this isn't linked to that issue, I'll find it and link it.
D
I suspect that the behavior you're looking at has to do with legacy libraries that they support, and I wonder how far back it goes.
D
It always struck me that that was just backward compatibility, and I wonder how far back support goes in Prometheus client library land.
A
Okay, I think we can proceed on to Josh's question.
D
Yeah, I just popped in thinking I would like to talk about staleness, because we did get a protocol change merged a few weeks ago, and I know it'll take a minute before we can get the protocol version released and into the collector code base before we could actually use it. But I saw some issues about staleness come up before us, with customers and so on asking questions, and so I was wondering if anyone has triaged this issue from three weeks ago.
D
Number 3733, because I'm curious about it, if we could find out more information. And then the longer-term request is to look at when we will, I guess, correctly integrate this information about staleness using the new protocol's data point flags field, which is intended to help with staleness correctness.
A
I don't know if Emanuel's had a chance to look at this yet. I have not, but we did discuss it on this call last week, and I think Brian Brazil's thinking was that we were tracking staleness for all metrics at every scrape, but we should be tracking staleness for metrics within a scrape independently, so across scrapes we should have a different set of staleness trackers.
D
I see, I think I see. So the theory goes that for staleness there's some bit of state, and would you say it's on the metric and not on the actual, I want to say, time series?
A
We use a NaN value on the metric, but we have a state store that's in the appender, that's used by the receiver, that tracks per target whether it has successfully seen metrics from that target this scrape. But it tracks all targets that it's ever seen, and then, at the end of a scrape, it emits staleness markers for targets that it did not see in this scrape. But that's not necessarily correct, because it may not have been attempting to scrape those targets in this scrape.
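A rough sketch of the state store being described, with placeholder types rather than the receiver's actual code: the tracker remembers targets it has seen and, at the end of a scrape, emits a staleness marker (Prometheus represents this as a special NaN bit pattern) for any remembered target that was missing, which is where the problem comes in when a target was never part of the current scrape at all.

```go
// Illustrative only: a simplified stand-in for the per-target staleness
// tracking discussed above, not the receiver's real implementation.
package main

import (
	"fmt"
	"math"
)

// staleNaN is the special NaN bit pattern Prometheus uses as a
// staleness marker.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

type stalenessStore struct {
	seenBefore     map[string]bool // targets remembered from earlier scrapes
	seenThisScrape map[string]bool
}

func (s *stalenessStore) markSeen(target string) {
	s.seenThisScrape[target] = true
}

// endScrape returns the targets that get a staleness marker this cycle:
// anything remembered from before but not seen now, even if it was
// never part of this scrape's target set (the issue being discussed).
func (s *stalenessStore) endScrape() []string {
	var stale []string
	for t := range s.seenBefore {
		if !s.seenThisScrape[t] {
			stale = append(stale, t)
			// Clear the target after marking it, so only one staleness
			// marker is emitted unless the target reappears.
			delete(s.seenBefore, t)
		}
	}
	for t := range s.seenThisScrape {
		s.seenBefore[t] = true
	}
	s.seenThisScrape = map[string]bool{}
	return stale
}

func main() {
	s := &stalenessStore{
		seenBefore:     map[string]bool{"job-a/instance-1": true},
		seenThisScrape: map[string]bool{},
	}
	s.markSeen("job-b/instance-2")
	fmt.Println(s.endScrape(), staleNaN) // [job-a/instance-1] NaN
}
```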
A
I don't think that the data point flag will help or hurt with addressing this issue; I think they'll be separable, right, because what we do as a result of identifying that we need staleness is we put a NaN value into the metric, and we can instead set the flag on the pdata. (Okay, that sounds good.) Which would just then mean that at the exporter side we need to trap that flag and set the NaN value, or do something else appropriate in a different exporter.
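A rough sketch of that split, with hypothetical types standing in for the pdata number data point and the OTLP data point flags field (the exact collector API for the flag was still landing at the time): the receiver sets a "no recorded value" flag instead of overwriting the value with NaN, and a Prometheus-facing exporter traps the flag and turns it back into a stale NaN, while other exporters can do whatever is appropriate for them.

```go
// Illustrative only: numberDataPoint and the flag constant stand in for
// the pdata API and OTLP's DataPointFlags; real names may differ.
package main

import (
	"fmt"
	"math"
)

// OTLP defines a "no recorded value" data point flag intended to carry
// staleness; the numeric value here is illustrative.
const flagNoRecordedValue uint32 = 1

type numberDataPoint struct {
	value float64
	flags uint32
}

// markStale sets the flag rather than writing a NaN into the value.
func markStale(dp *numberDataPoint) {
	dp.flags |= flagNoRecordedValue
}

// exportToPrometheus shows the exporter-side handling: trap the flag
// and emit Prometheus's stale-NaN marker.
func exportToPrometheus(dp numberDataPoint) float64 {
	if dp.flags&flagNoRecordedValue != 0 {
		return math.Float64frombits(0x7ff0000000000002)
	}
	return dp.value
}

func main() {
	dp := numberDataPoint{value: 42}
	markStale(&dp)
	fmt.Println(math.IsNaN(exportToPrometheus(dp))) // true
}
```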
D
Sounds great, yeah. It also sounds like the compliance test is missing something about multiple scrape intervals, but thank you very much for clarifying.
D
It sounds like you're suggesting that the logic is even more incorrect than we want. If a target is transient, in the sense that it goes away, we should emit staleness markers for it for some period of time, I think, but not forever. (Yeah, exactly.)
A
Yeah, we should emit one staleness marker and then no more, which I thought it was set up to do. Yeah, and unless the target reappears, in which case we'd need to emit a staleness marker again. But my recollection is that it effectively clears the list of targets after it's emitted staleness markers for them.
A
Okay, so yeah, that's an issue that I think we need to drill into this week. Emmanuel, whether you'll be able to have time to drill into that, I'm not sure.
F
Yep, it's on my plate.
A
Okay, so I guess: is there anything else that we should be discussing that's not yet on the agenda? Welcome, Alolita! We've reached the end of the agenda. Do you have anything you'd like to add?
H
Hey, hi everyone, sorry I got in late. No, actually, I just wanted to bring up if there are any other outstanding PRs or any others. I know, Emmanuel, you have a couple in progress that are pending review; just wanted to ask, because I'll be syncing up with Bogdan again later today on the collector. So if there's anything pending from anybody, I can follow up on it.
F
Mine, yeah, help me review that; it's been out, yeah, it's been stagnant for 12 days, and Tigran helped me, you know, he reviewed it. It looks good to him, but he wanted someone on the Prometheus side to also say it's gucci.
H
Okay, good, good to know: PR reviews pending, I'm just noting that in our list. Anything else we can take a quick look at?
H
What's there in the backlog for the collector? So, as many of you know, the collector components are also going to be moved this week from core to contrib.
H
So that also includes the Prometheus components, which we will do last, but the idea is that, again, in preparation for the tracing GA release, which is upcoming later this month, we have an ongoing issue that is open where we are tracking the movement of the non-tracing components of the collector, which will move to contrib. As they get stable, they'll get reintegrated back with the basic functionality that will be provided in the collector, and we, of course, will discuss this in the collector SIG that's coming up. Will all the components marked stable be part of the collector core? Right, so let me just pull up the...
H
Yeah, that's correct, because, again, Jaeger, Zipkin, and Prometheus have first-class support, you know, according to our project tenets. And again, it will, it will definitely move back and be released as part of collector core in the long run, but right now the idea is to actually stabilize the core, you know, basic components of the collector that are needed for end-to-end trace support, and all the additional dependencies that go with it will be marked stable. Everything else will be moved to contrib.
H
That's the path we're, you know, tracking right now, and again, just trying to make sure that, you know, we minimize as much breakage as possible on the tests, as well as, you know, any other functionality. So I'm just going to share my screen.
F
Thanks for the question, Vishwa. Alolita, the move, the move doesn't make so much sense to me. We're deleting code, then bringing it back in; when does that take effect? Like, I mean, that would mean that if someone were using a specific version of the collector, we would have to keep another copy of, for example, the receiver in contrib. Is that correct, Alolita?
F
So, for example, if I always use a version of the collector from, like, two weeks from now, where that code has all been moved to contrib for the receiver, and I stick with that, we can't delete, we can't delete code from contrib later on for the receiver.
H
So, why, I mean, you know, if it is there, I mean, see, the main dependency really is: can we retain the commit histories, right, as we move from one repo to the other, and then, and then back, potentially, for specific components, right? So that, that's where, you know, we are working and making sure to retain the commit history clearly, and actually Punia and I and Anthony, as well as some of our internal engineers, have been working on this with Bogdan.
A
Retaining the commit history is largely an assist for developers as well, so that there's some sense of continuity. But I think what Emanuel is asking about is, like, build repeatability and go mod and what it's going to do.
A
There simply won't be any higher versions tagged, so it won't be possible for anything to substitute a higher version of that module for that particular component where it doesn't exist anymore, because each component in contrib is an independent module; there's not a single module that contains all of them. If there were a single module that contained all of the components in contrib, like currently happens in core, yes, that would be a problem, but it shouldn't be a problem with the layout that's used.
H
Yeah, I mean, that was, again... these are all good questions, Emanuel, because, you know, this is the same concern everybody has had, and as long as we can minimize breakage. I mean, it's really hard for us today, given everything is in collector core, to also make it reasonably, you know, easy for different approvers and different maintainers to be able to...
H
You know, specialize, for people who are specializing on certain pipelines like Prometheus or Zipkin or Jaeger, to be even able to work on those, right, because everything is in the same source repo today. So again, hopefully some of this will get addressed, and we are working towards making sure of that. Juraci also is building a tool where releases will be much easier to build: you can pick and choose, you know, whether you build core or whether you build, you know, an extended release with specific components, so that tooling also will help.
E
Does that mean all the metric receivers would be moved to contrib, but the logs would be...?
H
No, everything will get moved, right? So you can see here, for example, in the screen I'm sharing: for example, you know, we're moving Kafka, we're moving OpenCensus, so we will also move Jaeger and Zipkin. Only OTLP, OTLP HTTP, for example, logging, and the exporter helper, which are used by the core tracing pipeline with full OTLP support, will stay. For example, you know, these are kind of a snapshot, and actually this will change; I'll update this list, because we are moving Jaeger, Zipkin, and Prometheus out also.
H
Does that answer your question, though, Vishwa? Because I think you were talking in general about metrics.
E
Yeah, so I was just curious on what criteria you guys decided to move these, because metrics is still not stable, right?
E
So it would have made sense if we had completely moved the metrics out and then, down the line, brought the metrics back.
E
Sorry, sorry, I went to the same place. Yes, okay. That means, that means all the metric ones should be in the list.
H
They are; what do you mean? I mean, like, host metrics, for example, is being moved. Okay, okay, I mean, only the OTLP exporters and receivers and the core, which is mostly internal, you know; so the collector core components, core modules, are an internal module and, you know, again, these items stay in the collector. Some of these actually will move out, like obsreport, for example, so I'll update this list, because it's an even shorter list of what stays in core for now.
H
But, I mean, again, bottom line is we'll make sure that, you know, end-to-end tracing is stable and available with OTLP, obviously with full OTLP support, and then, as we stabilize these components, whatever is needed for metrics we'll start moving back.
A
Yeah, so it may mean that some users have to change from using the core build to using the contrib build, if they were using, say, just Prometheus and Jaeger, or they may need to move to a custom build, and that's what Alolita was talking about with the tooling that Juraci is working on, in terms of the collector builder, to make it easier to say: I want just these components in a custom build.
H
And, I mean, obviously, we are going very carefully and step by step as we move these components. We've seen, you know, a fair bit of test breakage, as well as, you know, other dependencies which are intertwined with each other, so we're also trying to figure out and fix circular dependencies. So, Anthony, maybe we can go through, you know, what our strategy has been there, because we're working on that part. But it's, it's important, you know, from a Prometheus pipeline point of view, because we don't want to...
H
And Punia had submitted an example, so, I mean, take a look at it. We may not discuss this in a lot of detail in the collector SIG, but we'll definitely pull it up, you know, mention it. Any other concerns folks have? Josh, do you have any concerns there? Or Carlos, or David?
D
I was listening. I think Anthony's right, the dependencies are about config, and once we have the tooling to build these things, the dependencies won't be path-based, I think.
H
Yeah, yeah, yeah. And also, I think the other part that actually has been in progress is... Josh, do you know if the dependency on the Prometheus receiver for service discovery has been completely moved to use the Go SDK implementation?
H
Remember, you had, remember, you had suggested it many, many weeks back, and there was a discussion ongoing with Bogdan. I'm not sure if that was ever resolved, but, you know, primary service discovery for the collector today is done through the Prometheus receiver that exists. So I think that in the last discussion, at least, my understanding was that that dependency had been changed.
D
I'm sorry if I'd forgotten something. I do, I know the dependency we're talking about, and I know it's ginormous, but I don't know if we've discussed how to update that. Okay.
H
Anyway, that's, that's all I had on this end, and I just wanted to, you know, make sure everyone is aware of this, because we are all testing and have dependencies on our Prometheus components in, in core. So, you know, these will change in the next couple of releases, and then, of course, just for your knowledge: we are tracking all the metrics milestones on the collector here, so we are keeping that up to date. I am tracking that, and we have made good progress. I think this OTLP 0.9.0 is done.
H
Okay, good, I think that's all I had. We can look at any of the pull requests that are there for metrics; Grace has filed some.