From YouTube: 2021-05-26 meeting
D
Yeah, sure, I'll get started now. So what I want to talk about today, with Iris, is our project for the OpenTelemetry Prometheus receiver enhancement. Last week, during our discussion, we talked about our design, and a lot of people had good feedback and good questions about some considerations we should make. For example, they made comments such as:
D
Maybe we should consider a pull model instead of a push model. For anyone who doesn't know what I'm talking about right now: the enhancement we're planning to make to the Prometheus receiver is a scrape target update service, in which a server is started up and exposes an endpoint to which a user, or in this case a load balancer operator, can make PUT requests.
D
Within this PUT request would be a list of new scrape targets, and the Prometheus receiver should be able to grab these scrape targets from this endpoint and then update the scraping within Prometheus.
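A minimal sketch of what such a PUT endpoint could look like. This is illustrative Go, not the actual implementation: the names `targetStore`, `decodeTargets`, and the `/targets` path are invented here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

// targetStore holds the most recent scrape-target list pushed by the
// load balancer operator. All names here are invented for illustration,
// not taken from the actual design.
type targetStore struct {
	mu      sync.Mutex
	targets []string
}

// decodeTargets parses a JSON body such as
// ["10.0.0.1:9100", "10.0.0.2:9100"] into a target list.
func decodeTargets(body []byte) ([]string, error) {
	var targets []string
	err := json.Unmarshal(body, &targets)
	return targets, err
}

// handlePut accepts PUT requests carrying the new target list and
// replaces the stored one.
func (s *targetStore) handlePut(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPut {
		http.Error(w, "only PUT is supported", http.StatusMethodNotAllowed)
		return
	}
	var targets []string
	if err := json.NewDecoder(r.Body).Decode(&targets); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	s.mu.Lock()
	s.targets = targets
	s.mu.Unlock()
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	targets, _ := decodeTargets([]byte(`["10.0.0.1:9100","10.0.0.2:9100"]`))
	fmt.Println(len(targets)) // 2

	store := &targetStore{}
	http.HandleFunc("/targets", store.handlePut)
	// http.ListenAndServe(":8080", nil) would serve the endpoint;
	// left commented so the sketch runs and exits.
}
```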
D
So that's the target update service that we're trying to implement. Last week we were going to go with the design of implementing it directly, coupled within the Prometheus receiver itself, and some of the design questions and comments that were made after that were:
D
Maybe we should look into observing and pulling in scrape targets, instead of having them pushed to the endpoint. And we should also consider whether we want to build it directly into the Prometheus receiver or outside of it, because we have to look at future enhancements, and building it directly in, coupled with the Prometheus receiver, could make future changes and implementation difficult.
D
So we spent the week thinking of design alternatives, and the one that we think could work for us is listed in the links within the agenda: implementing a custom service discovery. Our path moving forward is to implement a custom service discovery, a tool that could be integrated with the existing Prometheus receiver.
D
Since we are already using a discovery manager, the idea is to use a custom service discovery tool to have an HTTP server which receives updates through the endpoint and then updates the Prometheus receiver through the service discovery, by putting the targets into the channel that it provides. Once the targets are put through that channel, Prometheus will update accordingly, through the service discovery tool. So we want to get everyone's thoughts.
B
Do you want to share your screen and just click on the link for the issue, or the link you've attached for the example?
D
Sure, I'll share my screen here. This is the service discovery... oh, it's kind of lagging, but this is the service discovery that we plan on implementing. The changes here aren't too huge. What it includes is an adapter, and what it does is:
D
It runs the discovery manager, which already exists inside the Prometheus receiver, and then starts our custom service discovery. Once we implement it, the only function we really have to define is the Run function for our discovery, and all this Run function would do is:
D
In their example of how you would implement the custom service discovery, they have it refreshing at a certain interval. At each of these intervals it sends a GET request to pull targets, to be able to update and send them through the channel that you see here.
D
The
target
group,
and
once
this
is
sent
to
his
channel,
then
premises
will
update
it
update
the
targets
that
it
needs
to
scrape
from
itself,
and
so
what
we
would
do
here
is
actually
have
and
when
we
modify
the
run
function
and
actually
have
a
http
server
that
exposes
the
endpoint.
And
then
we
can
push
updates
to
this
endpoint,
in
which
we
would
pass
those
targets
through
the
target.
The
to
the
channel
group
as
a
target
group,
and
then
producers
will
update
targets
that
way.
A
Hey, so one question: you're proposing this service will be running outside the receiver?
D
Yep, yeah. Doing it this way would make it much less coupled with the Prometheus receiver, so it won't affect it too much. All it will require is a few code changes within the receiver, but definitely nothing as major as our previous design.
A
Okay, and every instance will just discover its targets through the service? Is it every collector instance?
A
What will the service return when it's, you know, called by the...?
D
So this service will be called by the receiver, and it will be implemented through the adapter here that you see, because it can be integrated with the discovery manager that the Prometheus receiver already uses. So we could use this adapter and implement it into the receiver so that it could run this custom service discovery.
E
Got it. Maybe you could bring up the operator targeting doc that I just linked in chat? It has a diagram that shows the slightly broader context of what we're trying to accomplish, which may help answer the question.
D
Okay, can everyone see the document okay?
E
Scroll down a bit; there should be some diagrams that show options for the target dissemination. Yeah. So basically, what will happen here is there will be a load balancer component that runs the actual Prometheus target discovery.
E
I'm
going
to
divide
this
list
of
targets
into
five
and
give
each
one
of
them
one
fifth
of
the
targets,
so
the
the
mechanism
that
way
is
describing
is
the
the
mechanism
on
the
collector
end
that
will
receive
updates
from
that
load.
Balancer
that
tells
it
okay.
Here's
your
set
of
the
target
space
to
scrape.
E
The
the
model
we're
thinking
right
now
is
that
it
would
be
pushed
from
the
load
balancer
to
the
collector,
because
the
load
balancer
will
have
information
about
all
of
the
collectors
and
will
will
know
when
the
target
set
has
changed
much
the
same
way
that
a
a
prometheus
service
discovery
implementation
pushes
on
a
channel
to
the
discovery
manager
right.
This
is
just
a
remote
channel
that
it's
being
pushed
over
so
yeah.
E
It
would
push
to
the
the
custom
service
discovery
manager
running
in
the
collector,
those
updates,
but
I
think
that,
as
we
were
discussing
with
wei
and
iris
yesterday,
whether
it's
push
or
pull
is
it's
kind
of
an
implementation
detail
that
that
we
can
go
either
way
on.
If
we
run
into
complications
with
pushing
it's
very
easy
to
change
it
to
a
pull
model,
you
saw
the
code
for
that
custom
service
discovery.
Example
that
was
pulling
from
console.
E
It would end up being very similar to that. So I don't think we're locked into that decision once we start down this path; if we find that one model isn't working, it's easy to change to the other.
A
Okay, one more question that I have. Let's say I have a cluster with, say, 5000 nodes, and I have node discovery as one of the targets in my scrape config, and that is going to return, like, 5000 discovered nodes. Will we be able to split the targets returned by a single job, or is it at the job level? Can the configuration be different?
A
Can the configuration from a job be split and given to different collector instances? That's how it would scale horizontally.
E
Yes, yeah, that's the idea behind this broader proposal: that load balancer component would do the discovery of all 5000 nodes and then say, okay, I've got five collectors to spread this out over, so I'm going to give each one of them a thousand; if I've got ten collectors, each is going to get 500.
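The splitting described above might look something like this round-robin sketch; it is illustrative only (the real load balancer could just as well use consistent hashing or another assignment strategy, and `shardTargets` is a name invented here).

```go
package main

import "fmt"

// shardTargets splits a discovered target list round-robin across n
// collectors, the way the proposed load balancer would hand each
// collector its share of the target space.
func shardTargets(targets []string, n int) [][]string {
	shards := make([][]string, n)
	for i, t := range targets {
		shards[i%n] = append(shards[i%n], t)
	}
	return shards
}

func main() {
	// 5000 discovered nodes, as in the example from the discussion.
	targets := make([]string, 5000)
	for i := range targets {
		targets[i] = fmt.Sprintf("node-%d:9100", i)
	}
	shards := shardTargets(targets, 5)
	fmt.Println(len(shards), len(shards[0])) // 5 1000
}
```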
F
Can you hear me better now? Yeah. So, did you consider extending the SD, the Kubernetes service discovery, to maybe support some kind of custom resource definition where you would only pass the name, and then you would get the labels attached to the objects, and then you could work around the Kubernetes service discovery?
E
So your suggestion would be to have an additional Kubernetes service discovery type that could look at some custom resource that contains the list of targets and labels. I think that would tie us to Kubernetes. Our assumption is that we're largely going to be operating in a Kubernetes environment, but nothing about what we've constructed requires Kubernetes other than the operator component here, especially if we go with the approach that's depicted here, where the load balancer is even separate from the operator controller.
E
An alternative to that would be using a ConfigMap, mapping that to a file, and using file SD, with the load balancer updating the ConfigMap. That was considered, but again it would couple us to Kubernetes when we may be able to do it without doing so.
E
So that SD would pull from an endpoint, as opposed to receiving a push or reading from a file? Okay, yes.
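For reference, the pull-based SD discussed here later shipped in Prometheus as HTTP SD. A sketch of its configuration, with the URL and interval as placeholders:

```yaml
scrape_configs:
  - job_name: "collector-targets"
    http_sd_configs:
      - url: "http://load-balancer:8080/targets"   # placeholder endpoint
        refresh_interval: 60s
```

The endpoint returns the same JSON shape as file SD: a list of objects like `{"targets": ["10.0.0.1:9100"], "labels": {"env": "prod"}}`.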
F
So I don't have a timeline. The difference would be that this would pull every X seconds. If you have a file, you can actually decide when you write the file, and then you use inotify to know when the file has changed. But in some cases we feel that some users are looking for that. So there is already a pull request with some working code, but I still haven't written the tests and the documentation yet.
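For comparison, the file-based approach mentioned above, where a sidecar (or a mounted ConfigMap) writes the file and Prometheus picks up changes by watching it; the path is a placeholder:

```yaml
scrape_configs:
  - job_name: "collector-targets"
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.json   # placeholder path
```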
B
Okay, okay, Julien, good to know. I think, again, as Anthony was saying, what we're implementing right now is a push, so as soon as the changes on your end are out, we can certainly take a look and adapt accordingly, if that works better.
F
Yes, the format will be the same, just that you will not need to have a sidecar next to your Prometheus to write the files.
B
David, do you have any comments? Or Wei, or Vishwa, or others?
K
Go ahead, I'll have to think about it for a bit. It's a very interesting idea; there's a lot there. I like some aspects of it for sure.
B
Okay, good. I mean, please take a look. This is what we are proposing right now, because it's something that works for all our use cases. But again, what Julien proposed, I mean, it's not there yet, but it would be good to take a look at later; that's a pull-based implementation.
L
Also, FYI, I can try to look again for where we have the sharding for scraping/agents; there's a design doc somewhere. I'll look it up and put it into the meeting notes again.
L
But if the implementations behave the same, or you could even steal some of the code, that's probably best.
B
Anthony, any other thoughts?
B
I think let's wait for a couple of days to get feedback from David and Julien, or anybody; Brian says they're looking at the issue as well as the docs. But that's the current implementation we are proposing, and if there are no strong objections, we'll implement the push mechanism as proposed; and then, once the code is implemented on the Prometheus side, we'll take a look at it and adapt.
E
Yeah, I want to take a look at the links that Julien shared.
E
I think it might also be good to have multiple alternatives implemented...
D
Yeah, okay. Thank you everyone for your feedback; we'll definitely look into it.
B
I think the next topic we had was from Josh again; this was about metric staleness, and Josh says he has closed this PR.
I
Hi, I just put that in to pass along some information to this group. We discussed in the data model meeting yesterday this PR of mine, which I had written to propose essentially what it would look like if we took the most compatible path to recognizing the Prometheus remote write staleness marker. The PR was pretty short and sweet, but the group responded with some concern about other things they wanted that were kind of similar.
I
So we closed my PR, and there's going to be a new investigation into using a set of bits in each data point to indicate things like staleness, or even a final report, which is something that we talked about yesterday.
I
So that's there; stay tuned for more on how we can represent staleness. I think this doesn't break the use of NaN values the way Prometheus has them; they're still going to pass through, but it's something to keep an eye on. That's all I have.
B
So, Josh, thanks for the heads up. But were you going to file another PR with an alternate implementation?
I
It won't be me, but yes, we will keep doing this. Who volunteered? Victor Liu at Microsoft took an interest, since he's been doing the benchmarking already, figuring out what this will cost us in terms of bytes and, you know, points per second. Then I think the group would come back with a proposal for how to convey the staleness in the protocol: instead of being a NaN value, it might be a bit that says, no, no data here.
I
I expect more progress will be reported back in the data model SIG, and we'll keep posting it here as well.
Okay, Josh.
B
Thanks. Hey, Manuel, does that affect some of the design that you were thinking of for the staleness marker?
I
Yeah, we talked about how there are some, I don't know, 40 bits or whatever of NaN values available, and I personally think it's great and don't mind it. So anyway, that was my opinion, and the group sort of reacted that way.
I
Right, yeah. I think many options will work here. I was hoping to do the most compatible thing; it would have made less work for Manuel if NaN values were just there. But we will keep looking at this.
G
Do you know if you're doing basic run-length encoding? Yeah, a NaN value looks very different at the bit level from normal values. So if it's end of series, like a last report, it doesn't matter, there's only one of them; but if you had a flaky target that was appearing and disappearing, it won't compress as well.
L
I mean, there are two different takes here. From the pipeline's point of view, it's probably best to just reuse what's there. From the something-that-works perspective, as long as it works, as Josh says, it might be a little bit more complicated to go back and forth, but it's also doable.
L
If you ingest OpenMetrics and you write Prometheus remote write, you basically flip back and forth, and you just have an opaque number which you don't care about, and it just passes through the complete pipeline. That's one of the beauties of this thing, because it's just a number.
I
That's exactly what I like about it as well. To me, that's a nice benefit.
B
Okay, Manuel, did you have any questions you wanted to bring up?
M
No, I'm dry for now.
M
To go with the other PR, so is that correct, David? Is that correct, Anthony?
E
Sorry, go ahead.
K
I think we need to figure out how to help the current author fix his tests. I don't know, I think that's where we are.
E
I've worked on that. I've got a slightly different approach for doing those tests that I've worked through and have mostly working. I need maybe about an hour to finish it up, and then I'll make a PR to that author's branch with the conflict fixes and the test fixes, so that we can get that folded into the existing PR.
B
Did folks want to bring up any other topic at all? We can give back half an hour to folks. Otherwise, Grace, did your PRs get merged, on the tests?
C
I've applied the feedback for the test one. For the job and labels one, I couldn't easily make the feedback change; I wasn't sure if we still want them as actual labels, because I made this PR before they were added as resource attributes. So I was just confirming that with Janna and Anthony.
E
Okay, I'll have to find that and take another look at it. But I think if they're added as resource attributes, then they will still end up as labels outgoing in Prometheus remote write, which solves our immediate needs, and I think it's probably a better integration with...
E
With the non-Prometheus exporters as well, right. So I think these are probably resource-type things, but I'll take another look.
C
Nope, I just have those two PRs open.