From YouTube: 2021-06-02 meeting
A
Yeah, I mean, I agree. I don't think I was expecting the overall design to be complete, but I think Koi at least mentioned he was going to bring up the diagram for the workflow that we are thinking of, and at least bounce that off Huy and Iris. Are you guys at the point where you could do that, or do you want to?
G
Yes, we do have the diagram drawn out, if you want us to go through that, yeah.
G
Okay, I could add that to this.
A
Are there others? Should we just get started? Where did you add your item? Okay, cool, let's get started. I think there's a general question here: what is the overall timeline for the Prometheus receiver and exporter modifications? Right now, most of the work that is getting done is from the Microsoft team, which has been working on some of the changes, while David and others have been code reviewing. I finally got the code owners updated to have more approvers for the Prometheus component, so David, Anthony, Jana, and myself are all approvers now, which means we can actually accelerate the code reviews as well as the ready-to-merge step, so that Bogdan can just go and merge. That has streamlined the process a bit. In terms of a timeline, what we are looking at right now are the two items on our radar, which are the up metric and the staleness marker updates in the Prometheus receiver.
A
The ETA, I think... yeah, let me know; still connecting? All right, no worries. We are targeting June, so hopefully by the second week of June everything gets completed. Those are the two items, and then we run the compliance tests; at that point, at least functionally, the basic compatibility requirements are met. So that's what we have on our radar right now.
E
I only want to mention that we are very interested in this, because we have been using a sidecar that was initially a fork of Stackdriver, and we are very, very curious about when we can start playing with this part. Sadly, as you know, we were busy with that one, but hopefully we can transition to this project. So having that second week of June as a potential date is good to know, so we can finally start paying more attention.
H
So this doesn't include the operator effort, yeah?
A
It does. The operator changes, all the changes that we have done so far, have already been merged, so that is already available. Is there anything else in our issues that we have not addressed?
A
With the engineering that we've been adding, I'm hoping that some more engineering can be added from Google as well as from the Microsoft team, because we have been the primary folks working on this. That said, we do want to align with the stability of the collector, so phase two needs to be completed before then, which means we're looking at roughly a July target for an RC anyway.
A
I mean, it'd be great if anybody from Lightstep can help; just check with your team and see if anybody can participate. I know Josh has been involved, but there is another big aspect that I think needs to be addressed, which is ensuring that the Prometheus histogram is fully supported in OTel. That's also an open item; I know Josh has been looking at it, but there's work to be done there.
E
Yeah, totally, he did mention that last week. He's on PTO this week as well, but yeah, he did mention it, so of course we know that this is an issue.
A
All right. Wait, did you want to share?
G
Okay, I'm sharing my screen for the new document that we started drafting; it's also linked in the agenda if anyone else wants to look at it. Iris, do you want to speak about this diagram?
D
Okay, I can talk about this diagram. Basically, what we're going to do is add a new custom SD (service discovery) mechanism, which will poll the target information from a given HTTP server. This feature is not going to be added to the discovery manager in the Prometheus repo; instead, it will only be used in the OTel collector.
D
The new custom service discoverer will be registered during runtime, so the discovery manager can recognize the corresponding custom SD config in the user configuration. This whole mechanism is going to be used to update the targets for the Prometheus scraper. So that's the basic structure of our diagram.
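To make the runtime-registration idea concrete, here is a minimal sketch against the Prometheus discovery package. The `custom_sd` key, the config fields, and the polling details are illustrative assumptions, not the actual design from the document:

```go
package customsd

import (
	"context"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/discovery"
	"github.com/prometheus/prometheus/discovery/targetgroup"
)

// Config is a hypothetical config block for the custom SD.
type Config struct {
	URL             string         `yaml:"url"`
	RefreshInterval model.Duration `yaml:"refresh_interval"`
}

// Name is the key the discovery manager matches in a scrape config.
func (*Config) Name() string { return "custom_sd" }

// NewDiscoverer hands the discovery manager a Discoverer for this config.
func (c *Config) NewDiscoverer(opts discovery.DiscovererOptions) (discovery.Discoverer, error) {
	return &discoverer{cfg: c}, nil
}

type discoverer struct{ cfg *Config }

// Run sends target groups to the manager until the context is cancelled.
func (d *discoverer) Run(ctx context.Context, up chan<- []*targetgroup.Group) {
	// Poll d.cfg.URL here and push the resulting groups onto `up`.
	<-ctx.Done()
}

// Registering at runtime, rather than patching the Prometheus repo, is
// what lets the stock discovery manager recognize the new config key.
func init() { discovery.RegisterConfig(&Config{}) }
```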
G
Yes, just to add on: the main change that we've made from the previous design we showed last week is that we were initially thinking about a push-based model, in which we would serve an endpoint on a server started up by the custom service discovery, and then receive push updates for the list of targets, to be able to update them for the Prometheus scraper.
G
But going through that model and seeing how complicated it could be, we started looking into a pull-based model instead, in which we do periodic HTTP requests to a certain URL and receive targets that way. That would be a lot easier and less complicated, because we don't have to spin up a server and an endpoint listening for targets, which we would then have to pass through a channel within the custom service discovery.
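A minimal sketch of such a pull loop, assuming the server returns targets in the same JSON shape Prometheus file/HTTP SD uses; the function and parameter names are illustrative:

```go
package customsd

import (
	"context"
	"encoding/json"
	"net/http"
	"time"

	"github.com/prometheus/prometheus/discovery/targetgroup"
)

// pollTargets periodically fetches target groups from targetsURL and
// forwards them to the discovery manager. targetgroup.Group already
// unmarshals the HTTP SD JSON shape:
//   [{"targets": ["host:port"], "labels": {"job": "..."}}]
func pollTargets(ctx context.Context, targetsURL string, interval time.Duration, up chan<- []*targetgroup.Group) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
		resp, err := http.Get(targetsURL)
		if err != nil {
			continue // transient fetch error; try again next tick
		}
		var groups []*targetgroup.Group
		err = json.NewDecoder(resp.Body).Decode(&groups)
		resp.Body.Close()
		if err != nil {
			continue // malformed payload; keep the previous targets
		}
		select {
		case up <- groups:
		case <-ctx.Done():
			return
		}
	}
}
```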
G
So it makes our lives a lot easier, but the one thing we do have to take into account is how to uniquely identify each collector and how we would create those unique URLs for each collector. We're also looking to see if anyone has any initial thoughts or ideas for that.
G
The idea we are thinking of right now is to use the pod name to uniquely identify each collector, so that when the load balancer operator sends out the targets, it knows which collector is which; or rather, it knows which targets to put on a certain URL that the custom service discoverer would pull from.
B
I think the one thing holding this back from being able to use the generic HTTP service discovery mechanism that's planned upstream (which I see someone else put on the agenda as the next item) is that, well, we could probably make it work, but the operator will create a ConfigMap that is used to seed the collector configuration, and that same ConfigMap is used for every instance of the collector started up in a Deployment or StatefulSet.
B
So we wouldn't be able to put unique URLs in that config for each of the collectors, unless we used something like an init container to rewrite that config into a new location, substituting the pod name in some place. I think that's perhaps an option we could take later on, but it adds more moving parts in the Kubernetes world, whereas if we just use the hostname in either a fork of the HTTP SD or a custom implementation, we can make it work without the init container and still keep that option open down the road.
B
So we would need to have, in the HTTP SD config, a different URL for each running instance of the collector, so that it could identify itself to the server that's providing the static SD targets back in response. We don't want to send the same target and label sets to each collector; we want to send each one different target and label sets, so that we can spread the load of scraping a set of targets.
C
It's annoying in a lot of ways, because we chose to do (or not quite chose, but it works this way) substitution using dollar signs, and so we had to provide an escape, because people often use dollar signs in their Prometheus regexes and stuff like that. That's why everyone hits the issue of having to double-dollar-sign everything in our config: unlike the Prometheus server, we decided to support environment variable substitution even within the Prometheus config. But here's a place where that might be useful.
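A sketch of what that substitution could look like in a collector config, wrapped in a Go string for illustration. The target-allocator URL, job name, and `http_sd_configs` usage are assumptions based on the upstream proposal discussed here, and `POD_NAME` is assumed to be injected into the environment, for example via the Kubernetes Downward API:

```go
package main

// The collector expands $VARS inside its config, so one shared ConfigMap
// can still yield a unique URL per pod. A literal dollar sign, common in
// Prometheus regexes, must be escaped as $$.
const collectorConfig = `
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-targets
          http_sd_configs:
            # $POD_NAME is expanded per pod, making the URL unique
            - url: http://target-allocator/jobs/otel-targets/targets?collector_id=$POD_NAME
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: '.*_total$$'   # $$ escapes a literal $
              action: keep
`

func main() { _ = collectorConfig }
```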
B
Yes, yeah. I've used the node name before. My thinking is that if the hostname is available in the environment, then we're still not tying ourselves to Kubernetes in any way.
C
I think the hostname is a truncated version of the pod name. Last time I checked it has a maximum number of characters, so they can differ if your pod name is really long or something.
B
Okay, wait, this is all very good information. Huy and Iris, were you following this, and do you understand how we can update our design to incorporate it? I suppose we should also ask the question about the generic SD config, then, and its timeline. I see there's a PR that's up and open for this; do we expect that it's going to land anytime soon?
B
I think the allocation can be done in an at least semi-consistent manner. The current implementation basically does a round-robin allocation across all of the sets, and I think that, as long as you have the same pods in there and you sort their names before you do the allocation, you can make that consistent. But I'm also not sure it's a huge issue, unless you've got thrashing on an operator pod that keeps going up and down, and then you probably want to fix that problem first.
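The sort-then-assign idea can be sketched in a few lines; the function name and types here are illustrative, not the operator's actual code:

```go
package allocator

import "sort"

// assignRoundRobin makes round-robin allocation deterministic by sorting
// the collector names first: the result is stable across runs as long as
// the set of collectors is unchanged.
func assignRoundRobin(collectors, targets []string) map[string][]string {
	assignment := make(map[string][]string, len(collectors))
	if len(collectors) == 0 {
		return assignment
	}
	sorted := append([]string(nil), collectors...)
	sort.Strings(sorted)
	for i, target := range targets {
		c := sorted[i%len(sorted)]
		assignment[c] = append(assignment[c], target)
	}
	return assignment
}
```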
I
Cool. Also, in the collector there is a bit of metadata you need to have for each scrape target, but that's the only sticky part. That's why we didn't care too much about consistent hashing: it's not very expensive to query it again.
I
What is our, like... sorry, I've been very disconnected from this work and I'm not super up to date with what's going on right now, but what is our current strategy?
B
Sorry, I was looking for the metrics receiver out of the collector repo, not in this document.
F
I think, so, we have a limitation; well, a limiting point is that we will only update the targets at maximum every five seconds. And, if I'm not mistaken, we do not interrupt scrapes when a target disappears; we still finish the current scrape, if I'm correct.
B
Yeah, I would assume that that would have to be how it functions, but to the extent that it does or doesn't, it's in the Prometheus library and not in the receiver, because we're just delegating that functionality to the scrape manager.
I
This still doesn't handle out-of-order samples, though, right? Because you may start a scrape, you may need to retry the remote write, and so on; there's no guarantee that you will have successfully written your samples before another collector picks up the same target, scrapes it, and sends out the remote write.
B
I think the way the initial implementation of the load balancer had been set up was that it would do its best to preserve a target on a collector and only move targets around for collectors that had disappeared, which should help with that. But I also don't know too much about how we avoid it, if it is indeed going to be a thing where we need to move a target from one collector to another.
I
How are we going to pick the hashing? I mean, how are we going to ensure that things are balanced correctly?
B
I don't know that we know that for sure. The initial implementation did a priority queue and assigned to the least-weighted collector, which in the initial case ends up, I think, being effectively round robin; and then, if a collector disappears, the new one gets filled up until it's equally weighted.
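A sketch of the least-weighted strategy just described: each new target goes to the collector currently holding the fewest targets, which degenerates to round robin when all collectors start empty. The map-based form and name tie-break are illustrative simplifications of the priority queue:

```go
package allocator

// pickLeastWeighted returns the collector with the smallest current load.
// Ties are broken by name so the result is deterministic despite Go's
// randomized map iteration order. Returns "" if there are no collectors.
func pickLeastWeighted(load map[string]int) string {
	best, min := "", -1
	for c, n := range load {
		if min == -1 || n < min || (n == min && c < best) {
			best, min = c, n
		}
	}
	return best
}
```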
G
Yeah, so that's all Iris and I wanted to share today. We just wanted to get some initial thoughts and preview the design we're thinking about with the pull-based model, so please feel free to leave any comments on the document or anywhere else, and we'll definitely take them into consideration. And thank you, David, for a lot of your comments, and Jana; we'll definitely take those and edit our design with them. So thank you, everyone.
F
And to fix that out-of-order problem, I wonder if you could look at external labels, with which Prometheus can add labels depending on the collector it is running in. And then if you push, for example, to Cortex, you can use Cortex's deduplication mechanism to ensure that you don't have those out-of-order samples, and the samples look just as if they came from an HA pair, right?
I
I mean, is it best practice to do that? Like, if you put the pod name of the collector as an external label, would that be the solution?
F
Well, if you do sharding with Prometheus, normally you don't do that; you really do that for HA pairs. But I think that in this case it could be an idea to see if it's practical. It would rely on having Cortex on the other side, and on whether you do a lot of scaling up and scaling down.
A
More control; so I think these are new problems that we now need to think through and figure out how we want to solve.
B
Yeah, that was one of my concerns with the pull-based model: how you coordinate effectively transitioning from one collector to another. But I think that, given the complexities of implementing the push-based model, and having a server in the receiver and all of that, it's better to try to solve these problems than those.
A
We decided to look at the pull-based model in terms of our specific use cases. I thought that initially we had decided that the push-based model might work better, but again, I'm very interested in understanding based on which use cases the pull-based model would now work better. Obviously, both have pros and cons.
B
The push-based model had a lot of moving parts in terms of starting observers, handing around channels, and ensuring the channels were properly lined up so the data could flow smoothly from all of the pieces to all of the places it needs to go. Whereas with a pull-based model, especially with what we're learning here about being able to use environment variables in the collector config to inject values, we can make this almost entirely stock. We don't need to change the receiver; we need to get the generic HTTP service discovery mechanism upstream.
B
We need to get that into the version of the discovery manager that's used by the collector, and then we need to use that config mechanism to get these configs in there. That makes the receiver part of it very, very simple, and then it puts all of the complexity, in terms of coordinating how we ensure that we get targets to collectors in a way that's not going to cause us issues with out-of-order samples and the like, onto the load balancer.
F
I would also note that when you say it locks the scrapes, that is not exactly correct. Prometheus hashes the labels of the target, and then hashes that together with the hostname, to decide when it needs to scrape a target. So if you have two different collectors with different hostnames, they will not scrape the same target at the same time; but when a Prometheus is restarted with the same hostname, it will collect on the same interval.
F
So
if
you
restart
in
the
middle
of
the
30
seconds
the
next
after
the
start
point,
you
should
take
back
30
seconds
before
the
last
script.
So
that's
how
we
do
we
do
a
hashing
of
the
labels
of
the
target
and
and
the
earth
name.
So
if
you,
if,
if
your
collector,
has
multiple
last
name,
they
should
not
scrape
at
the
same
time.
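The scheduling rule F describes can be sketched as follows; this is a simplification for illustration, not the exact upstream hash:

```go
package scrapesim

import (
	"hash/fnv"
	"time"
)

// scrapeOffset derives a target's offset within its scrape interval from
// a hash of the target's labels and the scraper's hostname. A restart on
// the same host keeps the same schedule, while a different hostname lands
// on a different point in the interval, so two scrapers with different
// hostnames do not hit the same target at the same moment.
func scrapeOffset(targetLabels, hostname string, interval time.Duration) time.Duration {
	h := fnv.New64a()
	h.Write([]byte(targetLabels))
	h.Write([]byte(hostname))
	return time.Duration(h.Sum64() % uint64(interval))
}
```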
B
Okay, sure, but would it lock the assignment of scrape targets? I think that's our big concern: if we moved a target from one collector instance to the next, and we ensured that all of the collectors were receiving target updates concurrently or simultaneously, then one would stop scraping and the next would pick up, and there wouldn't be overlap, if we did that.
F
HTTP SD: well, we already talked about it, so we are making progress. We know that other people are already implementing it, so I am expecting a review in the coming days, and it should land in the next Prometheus release, mid-June. But I warn you, it will not solve all your issues.
A
Any other questions, folks? I think the only other update I had was on the PRs that are still pending, waiting to be merged; especially, Grace, your performance ones. I think they've already been reviewed, so I've started pinging Bogdan to get them merged.