From YouTube: 2021-12-15 meeting
A
Hello everyone, sorry I'm a little bit late. I think that we won't be having anyone from Microsoft joining us today, so it might be a short and brief meeting. Brian, I'm glad you're here, though; we had some questions regarding how staleness markers for histograms are supposed to be handled, and hopefully you can help us with that.
A
Okay, so if you can, please add your names to the attendee list, and we'll get started. So the situation we're dealing with now is that we're trying to convert the Prometheus receiver and exporters in the collector to use the pdata flag for indicating that a data point contains no data, which is semantically equivalent to staleness markers, at least in our understanding.
A
So our current question is: how do we handle the situation where we have a histogram with, say, five buckets, and one of those buckets in a scrape comes across with a staleness marker, and going forward it only has four buckets? In pdata, the flag indicating that there is no data available is at the data point level, or at the level of an entire histogram. So we could have a data point with one fewer bucket going forward.
B
Yeah, honestly, from minute to minute that might happen. It shouldn't, and with the current fixed histograms it's normally a case of: hey, there's a software upgrade, and this new version has different buckets in its various forms. But yeah, generally it should be the entire histogram or not, and otherwise, if things do change around, users should expect breakage, at least in the Prometheus sense. Users should expect breakage for any queries that are touching both the old and the new, until such a point that you're only looking at the new data.
A
Okay, and then, when an entire histogram does go away, it's necessary to communicate a staleness marker, at least in the remote write export case, right? We would need to push out a staleness marker for each of the buckets that were on the histogram. Is that correct? Yep.
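A sketch of what "a staleness marker for each of the buckets" amounts to on the Prometheus side: each exposed series of a classic histogram (one `_bucket` series per `le` boundary including `+Inf`, plus `_sum` and `_count`) needs its own stale sample. The helper name and the string output are purely illustrative assumptions; real remote-write code builds label sets, not strings:

```go
package main

import "fmt"

// staleHistogramSeries lists every Prometheus series that needs its own
// staleness-marker sample when a histogram disappears: _sum, _count, and
// one _bucket series per explicit bound, plus the implicit +Inf bucket.
func staleHistogramSeries(name string, bounds []float64) []string {
	series := []string{name + "_sum", name + "_count"}
	for _, b := range bounds {
		series = append(series, fmt.Sprintf(`%s_bucket{le="%g"}`, name, b))
	}
	series = append(series, name+`_bucket{le="+Inf"}`)
	return series
}

func main() {
	for _, s := range staleHistogramSeries("http_request_duration_seconds", []float64{0.1, 0.5, 1}) {
		fmt.Println(s)
	}
}
```

This is also why the bucket boundaries need to survive the trip through the pipeline: without them, an exporter cannot enumerate which `le` series to mark stale.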
A
Stay
on
this
market
for
that
as
well,
yeah,
okay,
yeah!
I
I
think
that
that
we
can
do
that.
We
just
need
to
know
that.
I
I
think
we
need
to
communicate
that
when
a
histogram
with
the
no
data
flag
is
present,
it
should
also
include
bucket
boundaries
for
all
of
the
buckets
that
were
previously
there,
so
that
that
could
be
reconstructed.
B
Yeah, I guess the other thing there is that if a bucket goes away and you just don't notice it at all, then you won't send a stale marker. On the other hand, that just means you'll have the old value for five minutes, which isn't the end of the world, because it's a counter anyway. So.
A
Yeah, so the complication that arises for us, if we were to try to communicate an individual bucket's stale state, is that in OTLP bucket counts are integers, and so we can't convey the stale NaN there. That's part of why we wanted to convert to using the no-data-present flag, but that applies at the entire histogram level, not at the individual bucket level.
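The constraint described here can be sketched with a struct mirroring the shape of an OTLP histogram data point. The field names and the flag constant are assumptions for illustration, not the collector's actual pdata API; the structural point is real, though: unsigned integer bucket counts have no NaN-like sentinel, so staleness can only be expressed for the whole point.

```go
package main

import "fmt"

// flagNoRecordedValue is an illustrative bit for the point-level
// "no recorded value" flag; the real constant lives in pdata/OTLP.
const flagNoRecordedValue uint32 = 1

// histogramPoint mirrors the relevant OTLP shape: N explicit bounds and
// N+1 integer bucket counts, plus a point-level flags bitmask.
type histogramPoint struct {
	BucketCounts   []uint64  // integers: no per-bucket slot for a stale NaN
	ExplicitBounds []float64 // kept so downstream can reconstruct the series
	Flags          uint32
}

// markNoRecordedValue flags the whole point as carrying no data while
// preserving the explicit bounds, so an exporter still knows which bucket
// series existed and can emit staleness markers for each of them.
func markNoRecordedValue(p *histogramPoint) {
	for i := range p.BucketCounts {
		p.BucketCounts[i] = 0
	}
	p.Flags |= flagNoRecordedValue
}

func main() {
	p := histogramPoint{
		BucketCounts:   []uint64{4, 2, 1},
		ExplicitBounds: []float64{0.1, 0.5},
	}
	markNoRecordedValue(&p)
	fmt.Println(p.Flags&flagNoRecordedValue != 0, len(p.ExplicitBounds))
}
```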
A
Okay, and then I think the only potentially weird situation is... no, we would just exclude that bucket entirely. And so if, say, someone was using the Prometheus receiver and the Prometheus exposition exporter.
A
Okay, yeah, that is in line with what I had figured the situation would be, but it's good to hear that information. Thank you. Yeah.
A
Okay, does anybody else have anything else to discuss? Please add it to the agenda if you do.
C
I have a quick question I wanted to ask: if two different sources that Prometheus is scraping have the same metric and labels, is that a valid case? Like, how would that be handled with the Prometheus receiver and exporter pipeline in the collector?
B
Okay, so in Prometheus-land, that is a configuration a user can do; please don't. It's basically going to be arbitrary as to which of those will actually come in, and depending on race conditions and everything, it's likely to be a mix of the data, missing some data, and a mess generally.
B
Well, two, yeah, two targets generally should be, but then relabeling can also happen such that things end up with the same labels. A common case where this turns up is that someone is pulling from federation, or two Prometheuses or something, and they don't have external labels set, and they end up with a clash or something. That sort of thing happens, and it's a broken setup.
D
In the collector they should still be differentiated by the job and instance resource attributes, so those should be different for different targets.
B
If you... no, they're not required; you could have two targets with the same job and instance. There are other labels as well, but sometimes users do manage, in different scrape configs, to end up with the same job and instance, with all the target labels being identical, which of course is always going to clash.
D
In much, much older versions of the collector we used to not have job and instance, and so this was super, super common. But yeah, it definitely can still happen. You may want to look at some of the later versions, though, if it's happening for just regular Prometheus setups or something, yeah.
A
Okay, one last point I would like to note is that this is the last week for... sorry, Pershinik and James. David, don't worry, it's not your last week; it's no surprise for you. This is their last week with us as interns here at Amazon, and I would like to thank them for all of the work that they've put into this project. They've been a great help in improving the reliability of the Prometheus pipeline in the collector.
A
Okay, and if there are no other topics to discuss, I think we can give back about 45 minutes and have that time before our next meetings. Thank you, everyone. See ya.
A
Oh, that is a good question. Next week would be the 22nd, right?
A
Yeah, so I think that other SIGs are planning to not meet over the next two weeks. I think that's probably a good possibility for this one as well. I would say that we should probably ask, or determine, what some of the other SIGs are going to do next Wednesday, and I would look at the calendar early next week to see if the meeting is there or not. I will try to remove it if we decide we're not going to have this meeting.