From YouTube: 2021-04-28 meeting
A: So the meeting info in the doc was still wrong. This morning I updated it, but apparently not soon enough to catch some people. I've let the people that I know are over in the other room know that we should be here. Hopefully they'll filter over shortly.

A: This is the correct one, yeah. It used to be that both this and the Collector SIG were on the same meeting and they were back-to-back, so we needed to get them broken up. Got it.
G: So I had listed the first item and just wanted to bring that up to discuss. Jana, as you and Anthony and Emmanuel know, we've been working on running the remote write exporter tests based on the suite that Tom Wilkie did, and what he has requested is: if we can get a more updated build of the collector, or at least an artifact that he can also be testing based on.

G: Because we had also added some fixes, right, and I guess that's not there in the...
B: It's a trivial thing actually, it doesn't have a lot of dependencies or anything. We should suggest, you know, that Tom builds the OpenTelemetry Collector himself, I...

J: We just had... no, no release just had...

B: We're not trying to break things, you know; if something is breaking, we want to see it, and that's why we want to have this compliance as a part of our CI/CD at some point, but we don't have the bandwidth to do that right now. So we shouldn't be breaking as much as possible; if there are conflicts and things that we need to merge, we may temporarily break things, but we shouldn't break.
J: Okay, okay, I'll tell him, yeah, that's also totally fine. He just didn't know where the fixes were, because he saw a few PRs go by, but he wasn't certain where they went into or if it was in the release.

B: In the Prometheus... by the way, they're in the collector, yeah, they're in collector core, so I'll put a link.

J: Of course. I don't think Alolita mentioned this initially: PromCon is on Monday, and Tom has a talk about the test suite, and he obviously wants to run with the latest and greatest, which has the highest compatibility rating, and he'll probably do this over the weekend so he can just update the pre-recorded talk, yeah.
J: I didn't see the talk, to be honest. I think he has an overview and doesn't go in depth, just an overview. But if we can reduce this, then it's just a better number; it's not bashing or anything. As a matter of fact, he actually changed a few tests to make it easier to pass them, to make sure that OpenTelemetry passed more, so yeah.

J: So that's also why he's looking to get whatever has the most fixes in, to just get the best version. Okay.

G: Okay, Richard, thank you for clarifying, because again, as John said, we are working on it right now. So it's a work in progress and we continue to add, yeah.
B: There are also some other parameters, compatibility issues, that are not in the remote write, and I'm spending some time on that, for example. This puts a lot of pressure on us, like, hey, we have a bunch of different things going on and things need to go in a particular order sometimes, so yeah, just as an FYI. Yep, yep.

G: Yeah, so Richard, let us know if there's anything that we can do to help on the PromCon talk or anything else.
A: In terms of fixing some of those compatibility test issues, I have taken the restart detection logic that Josh MacDonald has described in the OTel data model and applied that to the Prometheus receiver, which was trying to do it but was doing it in a slightly wrong way. That fixes a lot of the counter, gauge, histogram and summary tests that have been failing, but there's a fair bit of work that's going to have to be done in terms of rearranging the unit tests in the Prometheus receiver and the collector before we can land that. So I don't know if that's going to be landing before Monday, but worst case, if we really want to get a build that passes those parts of the test, I can share that.
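For context on the restart detection being described, the basic rule in the OTel data model is: a cumulative value that drops below the previously observed value implies the target restarted, so the series' start timestamp is reset. The Go sketch below is only an illustration of that rule under those assumptions, not the Prometheus receiver's actual code.

```go
package main

import (
	"fmt"
	"time"
)

// seriesState tracks one cumulative series across scrapes.
type seriesState struct {
	startTime time.Time // start timestamp reported alongside the cumulative value
	lastValue float64   // value observed at the previous scrape
}

// observe applies the basic restart-detection rule: if the new cumulative
// value is lower than the previous one, the target restarted, so the start
// time is reset to the current scrape timestamp.
func (s *seriesState) observe(now time.Time, value float64) {
	if s.startTime.IsZero() || value < s.lastValue {
		s.startTime = now
	}
	s.lastValue = value
}

func main() {
	var s seriesState
	t0 := time.Now()
	s.observe(t0, 100)                   // first scrape: start time initialized
	s.observe(t0.Add(time.Minute), 150)  // monotonic increase: start time unchanged
	s.observe(t0.Add(2*time.Minute), 20) // drop detected: start time reset
	fmt.Println(s.startTime.After(t0))   // prints true after the reset
}
```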
B: The same is true for the up metric, for example. We want to do some sort of cleanups and change the way we do it; it's not like we're not working on it. There is an existing PR that resolves it. It's just a technicality of how we want to implement it that is the question right now, and that's why it's currently failing, but Emmanuel has been working on it.

G: Yep, and again, for those of you who are following these discussions, please also add your informal reviews on the PRs as they go by, because, again, all of your reviews help.
G: Yeah, sounds good, all right. Moving on, I also wanted to share the proposed design doc for the Prometheus agent, which is based on the Grafana agent, that I think Robert has made a proposal for, and this was just shared in the Prometheus community. So I just wanted to make sure everybody was aware of that, and again, it would be great to have comments on the Prometheus agent. Richard, can you tell us a little bit more about what the general idea is? Is this just going to be part of the Prometheus build and the project? What is the idea, I mean, why would you have another agent?

J: The Prometheus agent is supposed to be less than the Grafana... well, less is the wrong word, but a reduced form of the Grafana agent, which is really stripped down to just Prometheus and nothing else, whereas the Grafana agent obviously has other stuff like Loki and such in it.
J: The main use case for the Prometheus agent is, yeah, if you have a pure Prometheus setting and still want, or need, or must run an agent model, that you have it available. And basically, at the point anything hits upstream, or hits the Prometheus repository, it will be stable, both in the user-facing parts and also in the API and such, whereas the Grafana agent is allowed to be a little bit more flexible on the API level while stuff is still being hammered out and tested, because mainly the Grafana agent users care about user interface stability, not so much about API stability, for obvious reasons. And that's roughly it.

J: Currently, it looks like it will not be a single binary, but basically two binaries, one called prometheus and one prometheus-agent, is the most likely outcome, to keep some housekeeping easier. But this might also flip. Robert Fratto put a design doc onto the Prometheus dev mailing list yesterday, I think.
G: The link, let me just share it here too. It's in the notes also, Richard, but let me share it here for everyone.

J: I mean, the intention, or part of the intention, with the agent is to make it easier to reuse the code, and part of the initial kickoff for that came from OpenTelemetry, to just make it easier to use bits and pieces, because in particular, when you need to clean up the ingested metrics, we found that often it's rather hard to do it in precisely the way which Prometheus does itself.
J: And we don't have a specification or anything on how to do this precisely, but we obviously have the reference code in Prometheus as such. That is a good way to, on the one hand, make it easier to test, or also just for others, for example OpenTelemetry, to just ingest this and not have to reinvent that part of the complete thing and just reuse it. Best way: if you are interested in a specific thing, toss it onto the mailing list and/or that design document, yeah, that's it.

B: Yeah, we are not reusing any of the remote write, for example. That's something that we actually would love to do. Yeah, I'll reach out to the mailing list.
K: Yeah, I will read the document, thanks for sharing it, but when would I use the Prometheus receiver in the OpenTelemetry Collector, and when would I use this agent? It looks to me like it's very redundant, isn't it? Is it like two options that we are talking about for agent-based collection?

G: Yeah, that's what I think the case is. Richard, again, correct us if we're wrong. It's just doing some very similar functionality.
J: There is huge overlapping functionality for, again, borderline obvious reasons, but I think the starting point is a little bit of a different one. The Prometheus agent comes from: you have a Prometheus system, or something which exposes Prometheus metrics, and something which can ingest remote read/write, and you just relay that data between the two, whatever things, whereas with OpenTelemetry you obviously have more use cases and you cover a broader range.

J: So you could say that the agent is maybe the happy path of just Prometheus in, Prometheus out, of the OpenTelemetry use case. I think that's roughly it; it should at least not be wrong.
G: Yeah, so Richard, I mean, you mentioned that in the Grafana agent there is Loki and Tempo and other stuff built in. So again, why would that just not be reusable in Prometheus? Why would you have a specific Prometheus-only agent?

J: Of course, the Prometheus team didn't want to have the full agent with all the bells and whistles which are outside of just the Prometheus focus. Myself, I don't really care where it lives, I don't really care what name it carries; I care about the integrated functionality.
J: Oh, no, no, okay. So initially, with my Grafana hat on: we wanted to do this upstream and then, for a variety of reasons, that didn't happen, so we did it downstream in the Grafana repository.

J: The unexpected benefit of this was that we could be a lot more breaky in figuring out how to make this thing work and how to make it work nicely, and in particular, we didn't have much concern in the initial phase about keeping any initially defined API stable or anything, because it was just basically making it work; as long as the user interface was stable, that was enough. Now, with my Prometheus hat on:

J: Those are also reusable, and there's nothing stopping anyone from doing this, and I do expect that people will be baking their own, where they take one or two things out of the Grafana agent which they care about and put them into their own Prometheus agent and just run that thing if they want to. But yeah, from the perspective of the Grafana agent, everything which is Prometheus-specific will be upstreamed into Prometheus, accepting this of course, and then just vendored in, and then you have this Prometheus part there.
J: If you want to have the Loki-specific bits and pieces, I suspect they will be living in the Grafana agent for the foreseeable future. But I can easily see a future where we have a Loki agent and such, and then just merge all of this into one thing as the Grafana agent, and you can simply re-merge the same thing under a different name with different bits and pieces for a different use case.

G: Let me ask you another question here: are there also performance considerations for each type of stream of telemetry data, or is it just use-case driven? Because that's something, again, that would be beneficial for all of us to understand, given that the OpenTelemetry agent aims to address that with different types of receivers as well as exporters baked into the agent, right.
J: Yep, so I think the answer is both, or even more. Obviously, the Grafana agent needs to cover wider use cases than just a Prometheus agent, the same as OpenTelemetry needs to cover a wider user base than just Prometheus collection. As to performance:

J: Beyond this, I expect that they will have pretty similar performance characteristics, and performance is one of the main aspects of the complete Prometheus stack, as you know. I got some numbers; I think the numbers which I got were 60k per second per core and 2.5 million data points per second per system, but that was a Prometheus system, that wasn't an agent.

J: We haven't battle-tested an agent recently, but I expect we will do this after we have done the work to upstream the Prometheus agent, but yeah, performance will also obviously be, and continue to be, one of the main considerations.
G: Okay, thanks, thanks Richard, any other questions? Folks, again, please take a look at the design and feel free to comment on the doc. David or Jay, did you guys have any other questions?

L: Hey, I have a couple. So hey Richard, do you work at Grafana?

L: Yeah, thank you. So, firstly, different components are being broken out of the agent. I mean, this might be a difficult question to ask, but will licensing change?
J: This will all stay in Apache 2. The complete list of projects which have been decided to be put under AGPLv3, because I think that is the actual question, so let's just address it from this point of view, are the three core products or projects, which is Grafana, which is Tempo, and which is Loki.

L: Thank you, yeah, and then the next one is about, yeah, I've skimmed through the Prometheus agent design doc and the goals, for example, and there's one that is curious to me, which is making the write-ahead log package just available for everyone, because it pretty much implements a storage.Appender.
L: I've been in charge of implementing a write-ahead log for the Prometheus remote write exporter, and trying to retrofit what the agent had was, yeah, basically impossible because of the way we direct data, because we would pretty much receive it in a specific form, then translate it to OTLP. So trying to add the agent's, sorry, the WAL in would mean a translation from OTLP to, you know, Prometheus-style metrics, which then get serialized to the TSDB later on.

L: Right, okay, yeah, so I guess in that case, okay. And, for example, I'll also post the implementation of the write-ahead log that I added to the collector; it's currently in a PR. There it is, yeah. Thank you, and then the next one would be: is there a specification of sorts for the remote write exporter? Because Anthony encountered an interesting compliance test failure.
L: So essentially, we have a counter; in the client libraries, when you get a counter, before you write out its data, you append _total to, you know, the text format, but in the compliance tests it doesn't expect _total to be written out. So that's one thing we're very curious about, and also, given that the compliance test is still being added to...

L: It's more of, you know, we have a universe of unknowns that could change, or you might implement something a certain way and then something later on breaks. So we kind of wanted to ask if there was some form of specification that describes remote write, because I've looked all around in the docs and it just doesn't say anything about remote write; it just expects Prometheus protobuf to be uploaded, and that's it.
J: So when you say you looked, just to make sure: did you see that specification, and it is lacking from your point of view?

A: I think related to the name label issue that we discovered, I didn't put an issue with the compliance suite for that. Tom, it looks like, just replied a couple of hours ago, saying that it isn't something that we're expected to do; some client libraries may be doing that. So I think the initial remote write exporter implementation had made an assumption based on either convention or something that one of the client libraries was doing, but those sorts of things are kind of ill-defined.
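To make the naming point concrete: in the Prometheus text exposition format a counter's sample is conventionally written with a _total suffix, but, per the discussion above, the compliance suite does not expect the exporter to add that suffix itself. The helper below is a hypothetical Go illustration of stripping the suffix on the remote write path; it is not code from the collector.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeCounterName strips a trailing "_total" from a counter name, the
// convention many client libraries use in the text exposition format, so the
// name matches what the compliance suite expects on the remote write side.
// Hypothetical helper for illustration only.
func normalizeCounterName(name string) string {
	return strings.TrimSuffix(name, "_total")
}

func main() {
	fmt.Println(normalizeCounterName("http_requests_total")) // http_requests
	fmt.Println(normalizeCounterName("queue_depth"))         // unchanged
}
```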
G: Yep, exactly, I think there were assumptions made, Anthony, for sure, in the initial build, in the initial development, where decisions were made based on it just being completely unclear whether it was a requirement or a downstream assumption.

J: The best way to address this is probably to make comments on that specification. So maybe to walk through how that specification came to be: that specification we as the Prometheus team created specifically to support OpenTelemetry implementing remote write, and that's also what Tom is using as the reference to implement the test suite. So unless I'm very mistaken, nothing which isn't in that spec should be tested in the test suite. If you find stuff which is being tested for which is not in there, which is not in the intersection of how Prometheus exposition and how Prometheus remote write work, then yes, that is most likely a bug or an oversight, and then please file an issue against the test suite or with the spec or wherever, of course; then we need to fix it, easy.
J: I don't think there is anything, but if I'm wrong, just tell us. As to what will be added, I don't know. I know that Tom has a to-do list, and he mentioned yesterday that he has a bunch of half-finished branches which he wants to polish up, and then he was surprised that PromCon is on Monday, not on Wednesday, and now he's scrambling a bit.

J: So I don't know what he will add when, but I know that he has the intention to add more, basically, unless he feels that the specification is fully covered by the test suite. The intention behind this being: by writing down both the specification and then having a test suite which basically tests for that specification, it would allow us, with the Prometheus team hat on, to actually make changes or consider changes to remote write, because it is de facto stable, so we needed to cut the version.
C: Actually, a couple of things. So I guess the first thing, kind of at a high level: I think that if we're at some decision points or inflection points, we need to probably report back to the Collector SIG at some point and make sure that there's buy-in for whatever direction things are going, right. We don't want people to do a bunch of work and then, when they try to get their PRs in, it turns out...

C: You know, Bogdan or Tigran has some issues with it, right, since they're kind of the vetoing powers on the UN council. So I think if there are some decision points or some checkpoints, we should probably report back and make sure that that direction is acceptable, yeah.
G: I think, to your point, I totally agree. I think Bogdan has been pretty closely involved; he has been following the issues as well as the PRs and the discussions, so, but I agree that any decision points should also be brought up and highlighted with the larger collector audience.

C: Oh, only for the exporter, yeah, okay, yeah. Because I think there might be some, I mean, I think if it ends up having its own write-ahead log and its own kind of functionality, there may be some objections to that, saying that that functionality should be core to the collector, right. If we have write-ahead logs, then that should be a feature of the collector, not a feature of a particular exporter. So I think there's some risk if we end up having this kind of agent...
B: Eventually, if there's going to be a collector-wide WAL, we need to decide what to do about it, but there is also, you know, it looks like it could be a duplicate type of effort; like, we can kind of implement this and then stop, documenting whether a collector-wide WAL replaces this type of functionality. So Bogdan was involved in all this initially.

L: What we want is to ensure the remote write exporter has this capability, and then, when the stars align to add it to the collector, we'll add it then. Otherwise, the...
G: That's correct, Emmanuel, you're absolutely right, and again, that was discussed and agreed upon with Bogdan, because, again, just to limit the exposure specifically to the remote write exporter, which is talking to Prometheus; and then, whenever Bogdan gets to the point where he is able to actually propose a larger design for a WAL for the entire collector, that would get integrated. I mean, it's just working on it in a phased approach.

G: Right, cool. I think, if nobody else has any questions on the Prometheus agent, then I guess we can move on to Jana's points.
B: Next, yeah. So, Prometheus supports external labels in the configuration; it seems like the receiver is not adding the external labels to the scraped samples, so we need to do some work for that, but at the same time we have an external labels setting in the remote write exporter.

B: So in order to provide the functionality, rather than fixing the receiver, we previously put this into the remote write exporter, and we're discussing with Bogdan whether we should deprecate this, because we're going to take that to the receiver, but also for metrics produced and, you know, coming in in OTLP.
B: That's been attached to all the metrics, so you don't have to instrument with the specific labels at the app or whatever; the Prometheus server can automatically add these labels to every sample. That's what it is, if you don't have much context. And we just need to decide; I need to go to the Collector SIG to figure out what we want to do here.
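As a rough sketch of what the external-labels behaviour described above amounts to (an illustration, not the receiver's or exporter's actual code): labels configured once get merged into every outgoing sample's label set, with the sample's own labels taking precedence, mirroring how Prometheus treats external_labels.

```go
package main

import "fmt"

// applyExternalLabels merges configured external labels into a sample's
// label set. Labels already present on the sample win, mirroring how
// Prometheus treats external_labels. Illustrative sketch only.
func applyExternalLabels(sample, external map[string]string) map[string]string {
	out := make(map[string]string, len(sample)+len(external))
	for k, v := range external {
		out[k] = v
	}
	for k, v := range sample { // the sample's own labels take precedence
		out[k] = v
	}
	return out
}

func main() {
	sample := map[string]string{"__name__": "http_requests_total", "job": "api"}
	external := map[string]string{"cluster": "prod-1", "region": "us-east-1"}
	fmt.Println(applyExternalLabels(sample, external))
}
```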
B: That's it, and it may break or deprecate the functionality; we need to decide on that in the Collector SIG. The other thing that I need to do is to follow up on remote write queue settings. Maybe some people have heard about this in the previous meetings: we removed the OpenTelemetry Collector queue from the Prometheus remote write, because it was causing all these out-of-order samples, because the queuing system, you know, is a producer, like it's...
B: So we removed it; there's still an internal queue which only uses one consumer, but we want to be able to give people a way to change the size of the queue and change the number of consumers, because we want to implement some sort of consumer in the remote write, with the restrictions that remote write has, so we can shard by time series and send things in order and so on. I don't want to give too much detail.
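The ordering constraint behind the sharding just mentioned is that remote write expects samples for a given series to arrive in timestamp order, which is usually preserved by making one consumer own each series. The Go sketch below illustrates one way to pick a shard from a series' labels; it is an assumption-level sketch, not the exporter's actual queue.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// shardFor hashes a series' sorted label pairs so that every sample of the
// same series lands on the same shard (consumer), keeping its samples in
// order. Illustrative sketch only.
func shardFor(labels map[string]string, numShards int) int {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable ordering so the hash is deterministic

	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(labels[k]))
		h.Write([]byte{0})
	}
	return int(h.Sum64() % uint64(numShards))
}

func main() {
	series := map[string]string{"__name__": "http_requests_total", "job": "api", "instance": "a:9090"}
	fmt.Println(shardFor(series, 4)) // the same series always maps to the same shard
}
```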
B: I just want to say we need to follow up with some queue settings, because currently people cannot fine-tune the internal queue. And I just realized that Emmanuel's draft about the WAL is around, and, you know, the Prometheus WAL has some sort of set configuration settings, and I was wondering if we should align with that.

B: In the proposal, for example, you can see it's not just the path; there's a minimum and maximum time and a couple of other settings, so if we can be aligned with that, that would be super useful in the future.
L: Indeed, I just need to change the variable names, but essentially, yes, frequency, size and all that will be aligned.
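To make the alignment being discussed concrete, the settings in question are along the lines of the agent-style WAL options (a path plus truncation timing). The struct below is a hypothetical sketch of what an aligned exporter configuration block could look like; the field names are assumptions, not the agreed design.

```go
package main

import (
	"fmt"
	"time"
)

// WALConfig is a hypothetical configuration block for a remote write
// exporter write-ahead log. The fields mirror the spirit of the agent-style
// WAL settings (path, truncation frequency, min/max retention) but are
// illustrative assumptions, not the actual collector configuration.
type WALConfig struct {
	Directory         string        // where WAL segments are written
	TruncateFrequency time.Duration // how often the WAL is truncated
	MinWALTime        time.Duration // keep at least this much data around
	MaxWALTime        time.Duration // never keep more than this much data
}

func main() {
	cfg := WALConfig{
		Directory:         "/var/lib/otelcol/prw-wal",
		TruncateFrequency: 60 * time.Second,
		MinWALTime:        5 * time.Minute,
		MaxWALTime:        4 * time.Hour,
	}
	fmt.Printf("%+v\n", cfg)
}
```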
G: Okay, okay, thank you, awesome. Thank you. Thanks, Emmanuel, thanks Jana, that's really good to know. The more we can align, the better it is.

F: This past week, I've been working on extending the testbed that the collector has for perf testing to also work with the Prometheus receiver, and so I've gotten some initial numbers, but of course, there's just a bunch of factors that affect the result, varying from the scrape interval, or how many data points per metric, the jobs, how many data points to scrape, the duration, et cetera.
F: So I was wondering if we wanted to decide on the numbers we want to use for these, like what we want to keep constant, what we want to vary, or how we want to measure it, like items per second per core or something else, before I make the PR with the tests. The way I've got it in right now, I can plug in whatever numbers we want, and then I can report back the results for that.

G: Yeah, I thought we had published at least our assumptions on one of the issues. Grace, did you have a look at those?
B: Sure, it's like, Brian has shared some, yeah, expectations from the Prometheus server. I think we should try to compare ourselves to that.

G: I think, Grace, if you look at the meeting notes and scroll down, Brian had actually given a link. Okay, yeah, since that is a good reference to some of the numbers and thresholds.

B: In the issue, Richard also has shared some stuff, by the way. Grace, this has to do with some of the queue settings that we want to put in place; before that's put in place, we may not be performing well. We just need to figure out the right defaults and allow people to fine-tune, so just as an FYI. Yeah, makes sense.
B: Yeah, and it does have an impact. So when you're seeing the total number of things coming in and going out, remote write is going to be the bottleneck; even if the receiver is doing its job, remote write may end up being the bottleneck. That's what I wanted to say.

F: Right, right, yeah. So right now I just have it using the OTLP exporter, just to test the receiver, to see what it can do and to see exactly where the bottleneck will be.
H: Yeah, so it's very specific to the Prometheus receiver, or at least the metrics we are getting from the cAdvisor metrics endpoint. So my question was: I need to know the container memory request metric, and from the metrics I guess it should be the field container_spec_memory_reservation_limit_bytes, but I was not totally sure. I was wondering if it's the same field, but it is always being recorded as zero.

H: I am not sure if I'm missing anything, so I was just wondering if anybody has any insight: is it the same metric or not? If so, why is it always being recorded as zero? How can we get the memory request metric? Also, to mention, I have the memory request and memory limits for this container set up in the spec, in the YAML file.
E: I believe Docker Swarm does do something with cgroups based on memory requests, or whatever their equivalent is, so for some container orchestration systems you will get that metric, but not for Kubernetes. I don't remember which file it's actually looking at, though, but you can just ping me on Slack and we'll go through it. Okay, thank you, yeah, I'll use it, thanks.

J: Thank you. If it's hard to build a collector, Tom is asking you to please tell him how, because he will need to figure this out over the weekend if there is no...