From YouTube: 2021-08-11 meeting
A
We will get started in just a moment. Please be sure to add your name to the attendee list and add anything you have to discuss to the agenda, and then we'll get started.
C
Oh yeah, sure, we'll start. So we wanted to talk about the target allocator, basically, and the changes made and the updates to that.
C
So to start with, we just wanted to clarify some of the design changes we made over the past week, and then after that we also thought maybe you could address a few comments. So I'll just share my screen quickly.
C
Yes, I hope everybody is able to see the screen.
C
Yes, so to start with: of the two major changes made in this PR since the initial version, the first, which I think I mentioned last week, was the validation that the mode is statefulset when target allocation is enabled.
C
So that was actually moved to the admission webhook, basically so that we don't allow the user to proceed if the mode doesn't match; the only mode available with target allocation is statefulset. And apart from that, there is one more change which was made, a somewhat more major one: we had initially thought of using a separate controller for the target allocation instead of using the same one.
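For context, a minimal sketch of the webhook check being described, assuming simplified stand-in CRD types and field names rather than the operator's real ones:

```go
package v1alpha1

import "fmt"

// Hypothetical, simplified stand-ins for the operator's CRD types; the real
// names in opentelemetry-operator may differ.
type Mode string

const ModeStatefulSet Mode = "statefulset"

type TargetAllocatorSpec struct {
	Enabled bool
	Image   string
}

type CollectorSpec struct {
	Mode            Mode
	TargetAllocator TargetAllocatorSpec
}

// validate is the kind of check the admission webhook runs: reject the CR
// up front instead of letting reconciliation fail later.
func (s CollectorSpec) validate() error {
	if s.TargetAllocator.Enabled && s.Mode != ModeStatefulSet {
		return fmt.Errorf("target allocation requires mode statefulset, got %q", s.Mode)
	}
	return nil
}
```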
C
So there were two main reasons for this. One was to reduce the reconciliation calls for the target allocator resources: there are a few watch events on the collector, and those can trigger a lot of events, and we just thought it would not be necessary to keep calling the reconciliation loop again and again for the target allocator when only the collector is involved.
C
So that was one reason, and another reason was future extensibility, basically to support having separate functions for the creation and deletion of all the resources. But Juraci suggested that the reconciliation cycle has a lower cost than having a separate controller, and even with regard to future extensibility, he mentioned it would be better to take care of it when it actually happens rather than thinking about it right now, because most of the logic is still the same.
C
So that is why we went forward and chose to use one controller rather than two. So that was another change, and apart from that there are a few more. So yes, David had suggested we also move the validation of the Prometheus configuration to the admission webhook.
C
So that was the check which was happening inside this part of the code. Whether the Prometheus config is available or not, we feel that the collector should still be up and running as it is, so we thought it would be better just to keep it here. And we wanted to keep the check because we don't want the deployment to come up with the configmap not being there.
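A rough sketch of the ordering being described, with illustrative helper names (not the operator's real API): skip the allocator when no Prometheus config exists, and create the ConfigMap before the Deployment that mounts it:

```go
package reconcile

import (
	"context"
	"fmt"
)

// reconcileHelpers is a stand-in for the operator's reconcile functions.
type reconcileHelpers interface {
	PrometheusConfig() (cfg string, ok bool)
	EnsureConfigMap(ctx context.Context, cfg string) error
	EnsureDeployment(ctx context.Context) error
}

// A missing Prometheus config must not take the collector down, and the
// ConfigMap is created before the Deployment so a pod never starts with
// the ConfigMap not being there.
func reconcileTargetAllocator(ctx context.Context, r reconcileHelpers) error {
	cfg, ok := r.PrometheusConfig()
	if !ok {
		// Nothing to allocate; the collector keeps running as-is.
		return nil
	}
	if err := r.EnsureConfigMap(ctx, cfg); err != nil {
		return fmt.Errorf("configmap: %w", err)
	}
	if err := r.EnsureDeployment(ctx); err != nil {
		return fmt.Errorf("deployment: %w", err)
	}
	return nil
}
```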
D
It looks to me like this is looking for errors with the configuration, rather than just making sure that we don't end up in a bad state. We can follow up on the PR if you want.
C
Okay, yes, yeah, we can follow up on the PR.
C
And yes, the baseline solution actually doesn't require HTTPS, so we changed it to 80, and the container port would be 8080 for now. So that was changed, and apart from that there were two comments which I just changed yesterday night, I think. And this morning I actually started discussing with RC, and he pushed a new image into the opentelemetry quay.io namespace, so that is being used now.
C
So we also pushed the changes now, and it's not using the stock image as of now; it's using the image from the actual opentelemetry namespace. And apart from that, I think it's just this one comment; we can talk more about it, but other than that I think most of it is addressed. And I think Alex also just wanted to talk about the changes in PR2, but before he starts talking about PR2...
C
...I just wanted to mention that PR3 has also been filed upstream, the third part of the PR. It's still a work in progress, but it contains all the changes which send the updated configmap to the collector: if the target allocation is enabled, it sends the updated configmap with the http_sd config; otherwise it sends the config as it is.
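A sketch of the rewrite PR3 describes, using generic maps and an illustrative endpoint shape instead of the operator's real config types:

```go
package config

import "fmt"

// When the target allocator is enabled, each scrape job's own service
// discovery is replaced with an http_sd_config pointing at the allocator;
// otherwise the config is passed through unchanged.
func rewriteScrapeConfigs(promCfg map[string]interface{}, taEnabled bool, taURL string) {
	if !taEnabled {
		return // config is sent to the collector as it is
	}
	scrapeConfigs, _ := promCfg["scrape_configs"].([]interface{})
	for _, raw := range scrapeConfigs {
		job, ok := raw.(map[string]interface{})
		if !ok {
			continue
		}
		name, _ := job["job_name"].(string)
		// Drop in-config discovery; the allocator now decides which
		// targets this collector instance scrapes.
		delete(job, "static_configs")
		job["http_sd_configs"] = []interface{}{
			map[string]interface{}{
				// Illustrative endpoint shape; see the demo later on.
				"url": fmt.Sprintf("%s/jobs/%s/targets?collector_id=$POD_NAME", taURL, name),
			},
		}
	}
}
```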
C
So this PR is basically for that, and it's actually related to PR1, so we're just waiting for that to be merged before we work on this. So it's just a draft as of now, and yes, I think Alex will talk about it too.
F
So Rahul, maybe you can share the PR on the screen, and then Alex.
E
So I want to go through the commit that I just pushed, and David, whenever you have the time, please review it again. Essentially this update (this is for the target allocator implementation, the actual image that's pushed to quay.io) removes the nextCollector, so we're no longer storing that. Instead, findNextCollector just returns the collector, and that also changes a few things, also for more clarity.
E
I renamed setTargets to setWaitingTargets to specify that these are not the targets that are actually in use; it's used more as a comparison list against the incoming configs. And I also fixed a few of the lock changes which, as addressed in the comments, would lead to races. So originally both removeOutdatedTargets and processWaitingTargets were locked individually, but now they're locked together whenever either of them is being accessed.
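A minimal sketch of that locking change, with simplified stand-in types: one mutex held across both steps instead of per-function locks:

```go
package allocator

import "sync"

// Allocator is a simplified stand-in for the target allocator's real type.
type Allocator struct {
	mu             sync.Mutex
	waitingTargets map[string]string // target -> job; a comparison list, not the targets in use
}

// SetWaitingTargets records the targets from an incoming config and then runs
// both maintenance steps under a single lock, closing the racy window that
// existed when each step took the lock individually.
func (a *Allocator) SetWaitingTargets(targets map[string]string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.waitingTargets = targets
	a.removeOutdatedTargets()
	a.processWaitingTargets()
}

// Both helpers assume a.mu is already held by the caller.
func (a *Allocator) removeOutdatedTargets() { /* drop targets absent from waitingTargets */ }
func (a *Allocator) processWaitingTargets() { /* assign new targets to collectors */ }
```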
E
And there was one more: when we're actually adding new targets, instead of looking up the targets, this change will just call the function, and that returns the collector that will receive the next incoming target. So these changes are essentially just removing the nextCollector data field, updating the locks, and updating the test cases for this.
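And a sketch of the lookup that replaces the stored nextCollector field, again with simplified types; the caller is assumed to hold the allocator's lock:

```go
package allocator

// collector is a simplified stand-in for the allocator's collector record.
type collector struct {
	Name       string
	NumTargets int
}

// findNextCollector recomputes and returns the least-loaded collector each
// time a new target needs a home, so no nextCollector field is stored.
func findNextCollector(collectors map[string]*collector) *collector {
	var next *collector
	for _, c := range collectors {
		if next == nil || c.NumTargets < next.NumTargets {
			next = c
		}
	}
	return next
}
```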
C
And actually, we just thought we could show you a small demo of how it is actually working, so I'll just share the screen again.
C
So here we actually have...
C
So here I'll just explain what's happening right now. This is the OpenTelemetry operator CR, which is as it was before, but with another option: the targetAllocator option, with enabled set to true. It also has an image option which allows you to set custom images, but for now it's using the default image, and what it does is, along with the statefulset...
C
So here what is happening is: we have the stateful target allocator service, which is exposing it at port 80, and this is an nginx exporter, basically just to show that the targets are exposed at a certain endpoint. And this is the format of the full URL we have, where we can query for all the targets for each collector at a certain endpoint.
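For reference, a sketch of what serving that per-collector endpoint could look like; the JSON shape matches what Prometheus' http_sd expects, while the handler and lookup names are illustrative:

```go
package allocator

import (
	"encoding/json"
	"net/http"
)

// targetGroup matches the JSON shape Prometheus' http_sd expects: a list of
// {"targets": [...], "labels": {...}} objects.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

// targetsHandler serves something like
// GET /jobs/<job>/targets?collector_id=stateful-collector-0; lookup is a
// stand-in for the allocator's real state.
func targetsHandler(lookup func(job, collector string) []string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		job := r.PathValue("job") // Go 1.22+ pattern routing, illustrative
		collector := r.URL.Query().Get("collector_id")
		groups := []targetGroup{{
			Targets: lookup(job, collector),
			Labels:  map[string]string{"job": job},
		}}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(groups)
	}
}
```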
C
So here, as you can see, when I call the same endpoint again, we have stateful-collector-0 with one target, stateful-collector-1 with another target, which is zero-two, and stateful-collector-2 with another target, which is the third target. So everything is distributed among them, and the same thing happens when we remove or add collectors: it starts distributing the targets, and at each collector update it actually redistributes the entire target set.
C
There is still a future enhancement where the targets are not all reallocated from the beginning, and we just redistribute targets incrementally: if a new collector comes up, it just takes a few targets from the remaining collectors, and likewise if a collector dies.
C
So yeah, one of them... this actually takes some time, because it's watching for the configmap updates. There's this fsnotify event in the code which triggers when the config changes inside the pod, and on that the server completely restarts and the allocation happens again.
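A minimal sketch of such a watch using the fsnotify library; the path and reload hook are illustrative:

```go
package watcher

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchConfig blocks, invoking reload whenever the mounted ConfigMap at path
// changes inside the pod.
func watchConfig(path string, reload func()) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()

	if err := w.Add(path); err != nil {
		return err
	}
	for {
		select {
		case event, ok := <-w.Events:
			if !ok {
				return nil
			}
			// Kubernetes updates mounted ConfigMaps via symlink swaps,
			// so react to create as well as write events.
			if event.Op&(fsnotify.Write|fsnotify.Create) != 0 {
				reload()
			}
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Println("watch error:", err)
		}
	}
}
```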
C
So we are working on the end-to-end demo, where we actually send the metrics and we can see them inside the collector. But yeah, the collector will be able to scrape that endpoint using the http_sd config which we send in part three of the PR, and this will actually complete the end-to-end flow of the target allocation system.
F
Okay, I guess there's no questions. Did anybody else want to cover any other topics? I didn't see any in the notes doc, Josh.
F
Emmanuel, any updates on your end?
G
Hey, hi everyone. An update would be: I'd like to introduce a colleague of mine. His name is Nathan, Nathan Diaz. Yeah, Nathan, please introduce yourself.
H
I guess I'm Nathan. I just joined Orijtech from Google last week. Nice to meet you all.
F
Well, welcome, Nathan. And Nathan will also be working on, you know, a lot of the Prometheus work we're doing on OTel, as well as on the collector. So indeed, look out for him.
G
And already Nathan has a PR up; it's part of phase two for metrics GA, and it uses OpenTelemetry-Go instead of OpenCensus for internal observability metrics. So he has that up, kindly requesting a review.
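For illustration, roughly what recording an internal metric with opentelemetry-go (rather than OpenCensus) looks like with today's metric API; the 2021 API was still changing, and the instrument name here is made up:

```go
package telemetry

import (
	"context"

	"go.opentelemetry.io/otel"
)

// recordScrapedPoints bumps a counter for internal observability, the kind
// of self-metric the PR migrates from OpenCensus to opentelemetry-go.
func recordScrapedPoints(ctx context.Context, n int64) error {
	meter := otel.Meter("prometheusreceiver")
	counter, err := meter.Int64Counter("receiver.scraped_metric_points")
	if err != nil {
		return err
	}
	counter.Add(ctx, n)
	return nil
}
```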
G
Yeah, and then I also have a PR which wires up the write-ahead log. I wrote it many months ago, but it was requested that I break it down and simplify it to make it easier to review, so I sent it in parts and it's up for review currently. I tagged, I believe... I don't know if I tagged David (dashpole), but thank you very much for all your help, and also for everyone's help: Alolita, Anthony, Jaana, everyone, Bogdan, the whole team.
G
Then we had a discussion about an issue that was filed a while ago in the Prometheus working group, which essentially was: hey, we need validation of Prometheus receiver configurations. And from doing an investigation, it came up that some of the features we're missing would be remote write and remote read.
G
But those are out of scope right now, and David (dashpole) made an excellent suggestion, which was that essentially we should perform a validation that rejects any extra fields that we do not support.
G
So if someone's Prometheus configuration has alerting, or it has remote write, remote read, etc., we should reject those. And I believe earlier on someone had a PR open where that suggestion was also made: hey, you know, should we be rejecting stuff before runtime? So we might send a PR either this week or next week, and maybe we could share those capabilities.
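A sketch of that reject-unknown-fields validation; the supported set here is illustrative:

```go
package validation

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// supported lists the top-level Prometheus config sections the receiver
// handles; anything else (remote_write, remote_read, alerting, ...) is
// rejected before runtime.
var supported = map[string]bool{
	"global":         true,
	"scrape_configs": true,
}

// ValidatePromConfig unmarshals the raw config into a map and fails on any
// top-level section outside the supported set.
func ValidatePromConfig(raw []byte) error {
	var cfg map[string]interface{}
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return err
	}
	for key := range cfg {
		if !supported[key] {
			return fmt.Errorf("unsupported Prometheus config section %q", key)
		}
	}
	return nil
}
```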
F
Yeah, that's cool. And Emmanuel, thanks for the update. I think, again, David, we're going to see how to propose a design for the changes; I think they're actually pretty well understood, but based on the changes that you suggested on the receiver, we can walk through that next time.
B
The only one from me, and it's not directly related to OpenTelemetry but it's related to testing: we will also start work on an alert emitter, or alert generator, test suite. That is not going to be finished in the next few weeks or a month; it's just a heads-up that we will also be testing implementations, so anything which can store and process alerts can be tested against it. So for anyone who wants to be compatible with Prometheus, blah blah, for the storage, that's relevant, but beyond this, no.
F
Yeah, it may not be directly relevant on the remote write compliance, but definitely in terms of the alert generator, that's super useful. So are those tests already there, or are they landing?
B
No, I created the shim yesterday. Ganesh will be putting some time towards this in this quarter; our quarters are shifted by a month. So yeah, we will see progress, but I don't know at what state that will be at the end of our quarter.
B
Hopefully it's reusable before PromCon or KubeCon, because then we can announce it, blah blah blah; but if not, we'll just enable it when we have it. If anyone wants to help, more than happy to make contact and get people in touch.
F
I also had another question, and maybe, Richard, you may give us some more insight into this. There was an interesting design that was posed on a transaction-based remote write in the prometheus-dev group, and again, I was interested in understanding some of the use cases there and what is being addressed. This is the doc; I can share it on the...
B
I didn't look at this in depth. My main care is not so much... but it doesn't matter; I can't give you details on this. Okay, I know there are discussions and such, but I don't have any details on it beyond what's on the mailing lists and GitHub.
F
Okay, so again I just wanted to understand, you know, if there were any dependencies that we could support, or...
B
...that we want to enable by default, or mark as non-experimental or whatever, and then it will find its way into the test suite. But until that time it won't be tested for, at least not with hard compliance.
B
That being said, if someone wants to extend the test suite already to just test for it, we can have the test suite say that this one thing is experimental and the other things are mandatory. I'd just add some of that metadata, and then you can test your stuff and you'll see that you are compliant with all the mandatory stuff, and what your compliance level is with the experimental, or beta, or whatever name we choose.
F
Okay, any other questions? I know, again, we are deep in the middle of things. David, any updates on your end? I know you've been, you know, reviewing a lot, which is great; really appreciate it.
D
Yeah, I've mostly been trying to catch up on those, okay. I have my change somewhere to add self-observability metrics, but I haven't spent any time on that.
F
I know, I know, you've been multitasking, your favorite. But thanks for the reviews; really, you know, we've kind of tried to unblock a lot, so thank you. Wishfur, Grace, any updates on your end?
F
Okay, I think we can end then, if there were no other discussions that you want to kind of dive into. Anthony, anything on your end? Otherwise we can give back some time to folks.
A
Oh no, nothing on my end. There's still an outstanding issue that I have regarding staleness markers for targets that weren't scraped in the current scrape, which I need to dive into, but being on vacation last week I hadn't had a chance to get into that. I hope to get into that this week.
F
Okay, no worries; again, that's super helpful. All right guys, thanks so much... hold on, hi Emmanuel.
G
Yeah, sorry, before we go: please, would you mind tagging me in that issue?
A
Yes, totally, yeah.
G
Because I worked on staleness markers and I'm wondering what's up.
F
Was there some PR? Was this the bug that we were looking at yesterday?
A
Yes, I put a link to this in chat. I think we discussed this briefly two weeks ago, where I had asked some questions about the structure of staleness and whether this was intended behavior or not, because the current implementation is passing the compliance tests, but it sounds like this is not the expected behavior.
A
Yeah, yeah, I think that's what that is. And so we just need to ensure that we're tracking this separately for each target, and only emitting stale markers for things that we attempted to scrape but didn't, I think, is what needs to happen; rather than emitting staleness markers for everything that we've ever attempted to scrape but didn't scrape this time around.
I
Well, it does, I guess... one thing is that the scraping of targets is independent in Prometheus. So we've got target A and target B, and they're happening on completely independent timelines; it's not as if there's one big scrape process that tries to do everything. Everything is spread around over time, to spread load.
A
On this, yeah, I believe this has been assigned to me. Well, it looks like it's assigned to me and you... oh, good job, Alolita, now it's assigned to both of us, yeah. So I was asked to look into this, and I think what we're doing is: the scrape target, or the staleness tracker, is on a per-receiver basis, and it tracks all of the targets that it's seen in that receiver...
A
...that it has ever tried to scrape. And then every time it finishes a transaction, it emits staleness markers for all of the targets that weren't seen since the last transaction. But I think that needs to be on a per-target basis: it needs to say, I tried to scrape this target here and I didn't see it, so I'll emit a stale marker.
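A minimal sketch of the per-target tracking being proposed, with illustrative types and names:

```go
package staleness

import (
	"sync"
	"time"
)

// tracker records scrape attempts per target rather than per receiver, so a
// staleness marker is emitted only for a target that this scrape attempted
// but did not see.
type tracker struct {
	mu       sync.Mutex
	lastSeen map[string]time.Time // keyed by target address
}

// MarkSeen records a successful scrape of one target.
func (t *tracker) MarkSeen(target string, now time.Time) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.lastSeen == nil {
		t.lastSeen = map[string]time.Time{}
	}
	t.lastSeen[target] = now
}

// Stale reports whether a staleness marker should be emitted for target:
// only if it has been seen before but not by the attempt at attemptedAt,
// keeping each target on its own scrape timeline.
func (t *tracker) Stale(target string, attemptedAt time.Time) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	seen, ok := t.lastSeen[target]
	return ok && seen.Before(attemptedAt)
}
```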
F
Okay, cool, cool; that was a good one to bring up, Anthony. All right, I think I will give a few minutes back to everyone. See you at the collector meeting. Thank you, have a good day, bye.