From YouTube: 2021-09-01 meeting
C
Yep, just wanted to make sure everybody knew we released 0.34 yesterday for both core and contrib, and that many of the components that were in the core repo have been moved to the contrib repo. The changelog has a hopefully complete list. We've managed to preserve the history, the git history, of those components as we moved them.
C
If you have PRs that were targeting any of the components that were moved, they will need to be moved to the contrib repo. Those components include all of the receivers and exporters that are not OTLP, and I think the logging exporter also stayed, and most of the processors and extensions.
A
Yeah, indeed, we did the release yesterday, but we still haven't released the Docker image for collector core. We are working on that, Juraci; we'll see more PRs in the repo and I will set up the credentials and stuff.
B
All right, I'm also working on the releases repository to make a couple of changes that you requested, like the binary name for the core, which should be otelcol instead of opentelemetry-collector. So I'm doing this change right now, and yeah. I think before releasing 0.34 for real, we should do another...
B
Another dry-run publish into my namespace, like 0.0.2, and then we verify all the artifacts, and if they look good, then we change to the official namespace and 0.34.
A
Yeah, there needs to be a PR for changing from releasing to Quay to releasing to Docker Hub, because that's where we have the binary, but we will not merge that until we do the dry run. For sure, okay.
D
It's organized, sorry.
D
Where's my... stupid... okay, yeah, yeah. Hey everybody, hey Dr. Nick. There's been a couple... folks have reported some issues with the Prometheus receiver, which is metrics, which means I have no idea what I'm doing, but they're my customers.
D
So I'm trying to figure out if anyone's triaging this, because it's sort of floated around, or popped up in a couple of issues in different incarnations. So if anyone's investigating currently, maybe I can coordinate with Pablo or something and we can assist. And then additionally, I'm curious whether... I have a question around, like, is NaN... so, there's the context. Apologies to everyone who is not aware of what I'm rambling about; the context is:
D
The Prometheus receiver is sending NaN, like not-a-number values; they're getting set as the value in OTLP metrics. If that's part of the specification, I couldn't find anywhere that it was part of the specification, and I don't know if I should be asking these questions here or in, like, a metrics SIG or what. So, I know that...
A
NaN in Prometheus means the stale value, which means the metric is no longer available. You can look in the Prometheus documentation; you can look for stale metrics. I'm not sure if that's the case here, but I know NaN is a special value in Prometheus. So look for that and see if that's the case.
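(Context for the staleness marker mentioned here: Prometheus encodes staleness as one specific NaN bit pattern, distinct from an ordinary NaN produced by arithmetic. Below is a minimal Go sketch of telling the two apart; the bit pattern mirrors the constant in Prometheus's value package, but treat the names here as illustrative assumptions rather than collector code.)

package main

import (
    "fmt"
    "math"
)

// staleNaNBits is the NaN bit pattern Prometheus uses as its staleness marker
// (assumption: copied here for illustration; the real constant lives in the
// Prometheus value package).
const staleNaNBits uint64 = 0x7ff0000000000002

// isStaleNaN reports whether v is the staleness marker, as opposed to an
// ordinary NaN.
func isStaleNaN(v float64) bool {
    return math.Float64bits(v) == staleNaNBits
}

func main() {
    stale := math.Float64frombits(staleNaNBits)
    fmt.Println(isStaleNaN(stale))      // true: the staleness marker
    fmt.Println(isStaleNaN(math.NaN())) // false: a generic NaN
}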
D
Valid... I'm not disagreeing that it's certainly a valid value in Prometheus. That's a documented thing and that's fine, but I wasn't aware that it was valid in OTLP, or if there's some translation doc that I couldn't find.
C
There will be. So right now it's just a value; it's a value that is perfectly valid in OTLP, because it's a valid IEEE float value, right, and so it's going to pass right through. One of the things... we discussed this in the last hour in the Prometheus working group meeting with Josh MacDonald: in the next version of the OTLP protocol spec, v0.10, when that is released, it will support a separate flag for indicating that there is no value to be reported for this data point, which is what the staleness marker is intended...
C
...the staleness NaN value is intended to convey. So at that point, what will happen is we will be able to convert the Prometheus receiver to set that flag in pdata, as opposed to setting the value to the stale NaN marker, in which case receivers or exporters downstream that do not support emitting NaNs as staleness markers can either ignore it or do whatever is most appropriate, based on the information that this data point actually contains.
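(A rough sketch of the flag-based approach described above. The flag name and numeric value below are assumptions for illustration only; the authoritative definition is in the OTLP v0.10 spec, and the actual pdata API may differ.)

package main

import "fmt"

// flagNoRecordedValue is a hypothetical bitmask for a "no recorded value"
// data point flag; the real name and value are defined by the OTLP spec.
const flagNoRecordedValue uint32 = 1

// numberDataPoint is a stand-in for an OTLP number data point.
type numberDataPoint struct {
    Flags uint32
    Value float64
}

// markStale records staleness by setting the flag instead of writing the
// Prometheus stale-NaN bit pattern into Value, so downstream exporters that
// cannot handle NaN never see it.
func markStale(dp *numberDataPoint) {
    dp.Flags |= flagNoRecordedValue
}

func main() {
    dp := numberDataPoint{}
    markStale(&dp)
    fmt.Println(dp.Flags&flagNoRecordedValue != 0) // true: point carries no recorded value
}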
D
Right, okay, that makes perfect sense for, like, a roadmap; seems super reasonable. I guess I have...
D
Yeah, that's kind of where I was going with this. It's like, what do you think? There's the... you know, who knows when that will come out downstream, like when the spec and everything are released and updated, and in the meantime our... you know, my employer's exporter is blowing up over it. So do you think this is just... I should push up a fix to, you know, the specific exporter that's having issues digesting this value?
D
Should I... is there a... I poked around in some of the processors; I couldn't really find anything that I thought might be able to, like, save the day here. Not a... you know, and if no one has opinions, that's fine, I'm just kind of looking for... I think the immediate...
C
Okay, cool. The potential alternative might be to take the metrics transform processor and ensure that it has the ability to drop NaN values. That would then help all exporters, as opposed to just one, but I don't know how much more complicated that would be than just changing your...
D
Exporter, yeah. It's like, you know, I don't care about anyone else, right? No, I'm just... sure, but I care about some people, other people too. So, okay, yeah, I'll talk with Pablo, who's on the call, and we'll figure out whether it makes sense to just ship something upstream to the metrics transform, but yeah, you're right, that would probably be a more robust solution. Cool, all right. Well, thanks everyone, I'll stop there.
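(A minimal sketch of the drop-NaN idea discussed above: filter NaN data points out of a batch before it reaches any exporter. The dataPoint type here is a stand-in for the collector's pdata structures; this is illustrative only, not the actual metrics transform processor code.)

package main

import (
    "fmt"
    "math"
)

// dataPoint is a stand-in for a collector metric data point.
type dataPoint struct {
    Value float64
}

// dropNaN returns the data points with any NaN values removed, so exporters
// that cannot digest NaN never receive them.
func dropNaN(points []dataPoint) []dataPoint {
    out := points[:0]
    for _, p := range points {
        if !math.IsNaN(p.Value) {
            out = append(out, p)
        }
    }
    return out
}

func main() {
    points := []dataPoint{{Value: 1.5}, {Value: math.NaN()}, {Value: 2.0}}
    fmt.Println(len(dropNaN(points))) // 2: the NaN point is dropped
}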
D
Me? No... so someone's using... I apologize. There's a few folks, right? There are a couple of issues floating around; you can kind of connect the dots if you look in an issue at what other issues are linked. But we've seen... people are using a Prometheus receiver.
D
The receiver, going to OTLP, is passing along that NaN value because it's, whatever, a float or something; you know, it's a valid value. But then our... I work at Datadog, right? The Datadog exporter is having trouble; it isn't accounting for a NaN value. I haven't looked at the Prometheus exporter at all. I guess there's some exporter somewhere, you know, emitting Prometheus metrics, but that's outside of OTel's context. Okay.
D
That appears to be... yes, exactly. It's technically correct, the best kind of correct. So it's fine! No, that's what I was trying to confirm: whether this is a bug, which it's not. The bug is in the Datadog exporter, or... yeah, whatever you want to call it.
D
Well, when my customers churn and go to Splunk, you just tell them it actually wasn't a Datadog issue... but yeah, yeah, it's fine, I don't... it doesn't matter whose fault it is. No, it's not about blaming things; I'm just trying to think about how to resolve things and unblock users. So I think what you're suggesting and what Anthony suggested are both very reasonable.
A
Perfect, okay. It looks like I was looking at the previous agenda; that's why I called on Anthony first. But I think we no longer have anything... we don't have anything else in the agenda. Anything else we should discuss?
A
No? Everybody is happy with our progress. Last update: I think we are done with the code moving, so we are back to accepting PRs and new functionality and stuff in the repositories. I think we also have a plan for the RC of the two modules in the collector repo. We have a plan for pdata, where we have a couple of items; there is a milestone, and we have a milestone for the rest of the things, which will most likely be done in the next couple of weeks.