From YouTube: 2021-07-07 meeting
Description
No description was provided for this meeting.
A
Hi, I'll give everybody a couple more minutes to filter in, and then we'll get started. Please add your names to the attendee list in the doc, and add any agenda items you have for us to cover.
A
Okay, I think Alolita will be joining us shortly; perhaps she will have more to add to the agenda. But just in terms of the standing updates that we've been having: I've again tested the compliance suite against the current head of the main branch. There are no regressions, but there's also no resolution to the last outstanding issue, which is a staleness marker. I believe the last pull request to address that is still outstanding. I don't see the author on the call, so I don't think we can get an update on that right now.
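For context on the staleness-marker issue: Prometheus signals that a series has gone stale by writing a sample whose value is a specific NaN bit pattern, so any component forwarding samples has to preserve those exact bits rather than treating them like an ordinary NaN. A minimal sketch; the constant mirrors the one used in the Prometheus codebase:

```go
// Sketch of how Prometheus encodes staleness markers: a sample whose value
// is a specific NaN bit pattern (see github.com/prometheus/prometheus,
// pkg/value). Shown standalone here for illustration.
package main

import (
	"fmt"
	"math"
)

// staleNaNBits is the bit pattern Prometheus uses to mark a series stale.
const staleNaNBits uint64 = 0x7ff0000000000002

// isStaleNaN reports whether v is the staleness marker. A plain math.IsNaN
// check is not enough, because other NaNs are legitimate sample values.
func isStaleNaN(v float64) bool {
	return math.Float64bits(v) == staleNaNBits
}

func main() {
	stale := math.Float64frombits(staleNaNBits)
	fmt.Println(math.IsNaN(stale), isStaleNaN(stale))           // true true
	fmt.Println(math.IsNaN(math.NaN()), isStaleNaN(math.NaN())) // true false
}
```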
A
That covers what I've got. Is there anything anybody else has to add?
A
Okay. One thing also to note is that the work we've been doing on the OpenTelemetry Operator to add load-balancing support for Prometheus target scraping is proceeding nicely.
A
Alexis and Rahul have been working on implementing the load balancer as a standalone component, so it'll present an interface: basically, you provide it a configuration, and it expects that an HTTP SD endpoint will be available for the collectors to hit. So we expect that we'll be able to explore potentially alternative load-balancing options in that manner.
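For reference, the HTTP SD contract mentioned here is deliberately small: the endpoint returns a JSON list of target groups, and each collector polls it for its assigned targets. A minimal sketch of such an endpoint in Go; the path, port, and targets are illustrative, not the operator's actual values:

```go
// Minimal HTTP SD endpoint sketch: serves the JSON target-group list that
// Prometheus-style scrapers (here, the collectors) poll for their targets.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// targetGroup matches the HTTP SD response format:
// [{"targets": [...], "labels": {...}}, ...]
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels"`
}

func main() {
	http.HandleFunc("/jobs/node/targets", func(w http.ResponseWriter, r *http.Request) {
		// In the real allocator, this would be the subset of discovered
		// targets assigned to the requesting collector.
		groups := []targetGroup{
			{
				Targets: []string{"10.0.0.1:9100", "10.0.0.2:9100"},
				Labels:  map[string]string{"job": "node"},
			},
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(groups)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```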
A
Just by replacing the container image of that load balancer. That will give us flexibility to address concerns like I was discussing with Iana the other day: the potential for out-of-order samples, or ensuring that we minimize churn in allocating targets while still having a simple and stable load-balancing algorithm that's available and recommended for general use.
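A common way to get that low-churn, stable-assignment property is consistent hashing, where adding or removing a collector only moves the targets in its arc of the ring. A hedged sketch of the idea, not necessarily the operator's actual algorithm (real implementations usually add virtual nodes for better balance):

```go
// Consistent-hash target allocation sketch: collectors are placed on a hash
// ring, and each target goes to the nearest collector clockwise. Adding or
// removing one collector only reassigns the targets in its arc, keeping
// churn low. Illustrative only.
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type ring struct {
	hashes     []uint32          // sorted collector hashes
	collectors map[uint32]string // hash -> collector name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(names []string) *ring {
	r := &ring{collectors: map[uint32]string{}}
	for _, c := range names {
		h := hashOf(c)
		r.hashes = append(r.hashes, h)
		r.collectors[h] = c
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// assign returns the collector responsible for a scrape target.
func (r *ring) assign(target string) string {
	h := hashOf(target)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) { // wrap around the ring
		i = 0
	}
	return r.collectors[r.hashes[i]]
}

func main() {
	r := newRing([]string{"collector-0", "collector-1", "collector-2"})
	for _, t := range []string{"10.0.0.1:9100", "10.0.0.2:9100"} {
		fmt.Println(t, "->", r.assign(t))
	}
}
```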
Okay, Richi, is there anything going on on the Prometheus side that would be interesting for us to know about?
C
One is: I know Björn invested time into the high-resolution histograms. That's in some branch in the prometheus repo, which I can try and see if I can find quickly, but that exists now, and early testing is looking good.
C
I also heard rumblings on the Prometheus team list about people wanting to add more tests to all three test suites. So I don't know what, if anything, is coming, but yeah, something is coming.
C
Let me see... I found it; I don't know which one it is. I know this is based on the design doc which I put into this working group, I think one or two months ago, more likely maybe two or three. And that's basically the implementation of what Björn wrote down back then.
A
Okay. I know Josh MacDonald also had an issue up, an OTEP I think, to try to gain consensus on whether to use base-2 or base-10 for exponential histograms, and that may relate to this in terms of interoperability. So let me see if I can find that, to make sure that you guys are all aware of it.
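To make the base-2 versus base-10 question concrete: an exponential histogram assigns a positive value v to roughly bucket floor(log_base(v)), so the base determines the bucket boundaries, and series produced with different bases can't be merged without reaggregation. A rough sketch that glosses over the exact boundary and rounding rules in the spec:

```go
// Rough sketch of exponential-histogram bucketing: the bucket index of a
// positive value is floor(log(v)/log(base)). Base 2 yields power-of-two
// boundaries; base 10 yields decimal ones. The exact boundary/rounding
// rules are defined by the spec; this only shows why the two choices don't
// interoperate directly.
package main

import (
	"fmt"
	"math"
)

func bucketIndex(v, base float64) int {
	return int(math.Floor(math.Log(v) / math.Log(base)))
}

func main() {
	for _, v := range []float64{0.5, 3, 100, 1500} {
		fmt.Printf("v=%v base2=%d base10=%d\n",
			v, bucketIndex(v, 2), bucketIndex(v, 10))
	}
}
```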
E
Yeah, just to make sure: I think Anthony also posted the link in the notes earlier. Let me look it up.
E
So, Richard, I had a question on the Prometheus remote write tests, as you mentioned, for the compliance program again. Will all of these tests get versioned? That is, you were just mentioning, for example, that there might be more tests that get added over time.
C
It's versioned, and the version is tied to Prometheus releases, or, in the case of having a specification, to the version of the specification. The intention, and this is basically a summary of what's also written out, is to copy the ISO model, where you have a thing which is tested against, and then you have a validity of that test result for X amount of time, to ensure that there are rolling updates for everyone in the ecosystem.
C
We deliberately chose a relatively quick refresh cycle to ensure that if things are added or discovered, like if we find that we have major issues with one of the tests or something, we can iterate quickly.
E
Okay, okay. I mean, so once the program rolls in, we'd be verifying against, obviously, the latest release and the latest version, right, under most circumstances. But that would build out to be a matrix of some sort over...
C
Time, yes. I mean, if you look at the history of certifications, then you can build out a matrix for the foreseeable future. I would say that having current compatibility is the only thing which really matters, because hopefully everyone is moving forward. If there are specifics, like, I know you want to support a specific version long-term with a certain compatibility, say Prometheus 2.30 for the next three years because there's a long-term release or something, then that's different and we would need to look at it. But we don't have any long-term support releases or anything as of right now, so it's more academic, I guess. Okay, so "current" is, yeah, the current goal.
C
Yes, which is why it's really just a matter of being compliant with the most current thing. The other piece which is relevant here: we currently have three of them. We have Prometheus remote write...
C
We have OpenMetrics, and we have PromQL. So, for example, PromQL would not apply to OpenTelemetry, for obvious reasons, unless you have in-flight PromQL for changing or summarizing or something. But as long as you don't, it doesn't apply to OpenTelemetry, for obvious reasons, and as such, to be Prometheus-compatible...
C
You don't need to be PromQL-compliant, because it's not applicable to OpenTelemetry. And that is the same for everything else: if you only have a back-end storage which doesn't support scraping or anything, you wouldn't need to support either PromQL (or, well, probably, maybe, it depends on semantics) or OpenMetrics, because you would only get stuff pushed to you through PRW.
C
It's pretty complete. Let me get you the link. So we have two of them: one is basically defining a load of static tests, where you have valid and invalid cases.
A
No, I was just wondering where it was, so that we could try to test against it. I don't know if this has been tested against the OpenTelemetry receiver.
A
Which kind of stresses our receiver, because some of the things that were causing remote write failures were actually issues in our receiver. But it would also be nice to test it against the OpenMetrics tests, to make sure that we're scraping that properly.
C
I mean, if you're using that code, you will pass. Still, formally speaking, you need to actually test it to be able to be marked as passing.
C
One thing which you could consider, and you could even upstream this to the test suite, is to just expose all the different bits and pieces of OpenMetrics and then see if something comes out on the other side as Prometheus remote write. And see what comes out: obviously, for the invalid ones you shouldn't be sending anything, and for the valid ones you should be sending the valid data in the different encoding.
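As a concrete illustration of that suggestion: serve a known OpenMetrics payload for the collector to scrape, then check what, if anything, arrives at the remote-write end. The payload below is a made-up valid case; the test suite's corpus would supply the real valid and invalid inputs:

```go
// Sketch of the "expose OpenMetrics, observe remote write" idea: serve a
// fixed OpenMetrics payload for a collector to scrape. A harness would then
// assert on what shows up (or doesn't) at the remote-write receiver.
package main

import (
	"log"
	"net/http"
)

const exposition = `# TYPE http_requests counter
# HELP http_requests Total HTTP requests handled.
http_requests_total{path="/"} 1027
http_requests_created{path="/"} 1.623e+09
# EOF
`

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		// The OpenMetrics content type distinguishes this from the older
		// Prometheus text format.
		w.Header().Set("Content-Type",
			"application/openmetrics-text; version=1.0.0; charset=utf-8")
		w.Write([]byte(exposition))
	})
	log.Fatal(http.ListenAndServe(":9101", nil))
}
```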
D
Yes. So Prometheus will not pass the full tests, because it is not a full parser; it is a fast parser. So the proper way to test things is against the Python client. But what Prometheus does is also valid against the spec, because of how it's worded. As I said, if you're using the Prometheus parser, just reusing it straight, I'm not expecting issues, but it will need to be manually verified, because it's working line by line, that the result is okay.
A
Yeah, I'm not expecting any issues either. I just wanted to make sure that if there were tests, we could run against them and make sure that we are as conformant as we can be.
E
Good, yeah. I think, Anthony, we had looked at it earlier, when we talked about the remote write compliance tests initially a few months back, before the formal suite was composed. And we had looked at the OpenMetrics tests, but I think we were not at the point where we could run those tests then. I mean, I think now is a good time to start looking at it.
E
The issue here: as I'd indicated earlier, I think a couple of weeks ago, we have been building out our backlog for some of the items that need to be done in Prometheus support.
E
You know, just compatibility and full support completion. And I just wanted to make sure, Brian, Richard, and others, to get your feedback, and of course from active OTel contributors, on whether there are any other items in the backlog that need to be done or addressed. I've just added the link in the doc.
E
This is the phase two, because most of the phase one items are, I think, done at this point, and I'll go through and update the backlog that we have been maintaining in terms of the links. We'll look at some of the bugs and kind of prioritize them, but the idea is that, you know, as we start working on metrics stability in the collector...
E
The Prometheus support is obviously very important, in fact critical, and it's something that we've been consistently working on for the past few months. So I just wanted to make sure, David, again, that you also take a look in case anything is missing from this list.
E
Let's make sure we have, you know, a full list to handle, because that will be something that's required for the stability requirements.
E
So there has been a discussion around serverless support with Prometheus on the Prometheus lists, and I was just catching up on it. It's a good discussion because, as you know, we've been rolling out Lambda support in OTel, and we rolled that out in OTel as generic layers, and then in the downstream AWS distro specifically for AWS Lambda, with integrations.
E
You know, with AWS X-Ray, right. So right now we're working on a very interesting project, which is adding the Prometheus remote write exporter and being able to actually push Lambda metrics to a Prometheus endpoint, through the Prometheus remote write exporter.
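For a sense of what that exporter does on the wire: Prometheus remote write is a snappy-compressed protobuf WriteRequest POSTed with a few fixed headers. A minimal sketch using the Prometheus protobuf definitions; the metric names and endpoint URL are illustrative:

```go
// Minimal remote-write push sketch: marshal a WriteRequest, snappy-compress
// it, and POST it with the remote-write headers. Error handling is trimmed
// and the endpoint URL is illustrative.
package main

import (
	"bytes"
	"log"
	"net/http"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

func main() {
	req := &prompb.WriteRequest{
		Timeseries: []prompb.TimeSeries{{
			Labels: []prompb.Label{
				{Name: "__name__", Value: "lambda_invocations_total"},
				{Name: "function", Value: "my-function"},
			},
			Samples: []prompb.Sample{
				{Value: 1, Timestamp: time.Now().UnixMilli()},
			},
		}},
	}
	raw, err := proto.Marshal(req)
	if err != nil {
		log.Fatal(err)
	}
	body := snappy.Encode(nil, raw)

	httpReq, _ := http.NewRequest(http.MethodPost,
		"https://cortex.example.com/api/v1/push", bytes.NewReader(body))
	httpReq.Header.Set("Content-Type", "application/x-protobuf")
	httpReq.Header.Set("Content-Encoding", "snappy")
	httpReq.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```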
E
So we will make available Lambda layers that are actually integrated with the Prometheus pipeline, to be able to send metrics to a Prometheus endpoint. Right now, obviously, the remote write targets are Cortex and Thanos. But I think it was related to, or could be related to, the discussion that is ongoing on the prometheus-developers list, and I was wondering if there was some way we could actually work together on it.
C
I'm just re-sorting mentally; of course, there are a lot of questions. On trying to align on this and actually working together from both ends: absolutely, that would make sense.
C
The official recommendation is probably still going to be that, ideally, serverless stuff is observed through the orchestrator, because you already have something which is aware of everything which is running, which is collecting state and end state anyway, both for the provider's own operations and for billing and such.
D
Yeah, I think that's kind of not the right way to look at it. Fundamentally, you need to be able to produce a /metrics, and getting it to that point is the first thing. Then what you do next is: okay, the user can scrape it themselves, or maybe you give them the option of sending it via remote write directly to Cortex. But at the end of the day, you need the /metrics.
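The "produce a /metrics" step is the standard client_golang pattern; for reference, a minimal version:

```go
// Standard /metrics endpoint with the Prometheus Go client: register an
// instrument, mount the promhttp handler, and everything else (scraping,
// remote write, pushing) builds on top of this.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var invocations = promauto.NewCounter(prometheus.CounterOpts{
	Name: "function_invocations_total",
	Help: "Total invocations of this function instance.",
})

func main() {
	invocations.Inc() // e.g. once per handled invocation

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2112", nil))
}
```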
E
Yeah, actually, that's a very good point, Brian, because right now what we are looking at is OTLP into Prometheus, right. So we're not even looking at the scraping, because, at least in OTel, one of the considerations is the receiver size: the current Prometheus receiver is significantly large. And this is not even related to the Prometheus format or anything. It really is...
E
You
know
that
this
footprint
of
the
component
itself
is
actually
quite
heavy.
So
that's
another
consideration
in
a
lambda
layer.
As
you
know,.
E
Another consideration is also, you know, just the ability to handle the throughput at the Prometheus layer, in terms of what Cortex can even handle.
D
I wouldn't worry about that. Fundamentally, you've got all these different serverless functions going via the orchestrator, or equivalent, back to this thing, and the data sizes then are so small that there'll be no problem with Prometheus dealing with it. It's all the individual invocations, and merging those all together into whatever the current counters are, that's the expensive bit.
C
If you watch the dev summit recording from, I think, two weeks ago, we came to a similar conclusion of, more or less, an OpenMetrics StatsD for Prometheus, which has a lot of properties which are not super nice. I mean, at the point where you have a /metrics, you can also argue that the orchestrator should just collect this at kill time for that function, and then you're done, which would be a much nicer overall interface.
C
Just for their observability story on the serverless functions, at which point they literally had to change the overall strategy to get data out of that thing, because it was just way too expensive to justify and defend that kind of investment. And that was with only Prometheus as the base, so no pipeline, no nothing, as in relatively lightweight.
C
I mean, this move goes back and forth in the industry all the time, and we are certainly not at the last iteration of this. But you will have significant overhead within the serverless functions, the smaller and the more numerous they become, which would be easier if everything was in the orchestrator or tied to the orchestrator somehow.
D
Fundamentally, assuming that these are relatively short functions, let's say no more than a minute or two, you're just logging "hey, my metrics at the end of this were this", and you merge all of those together and check them out. So, you know, it might be a UDP call, might be TCP, it might be a log file, but architecturally, it's StatsD.
C
And one thing to consider: again, if we are talking about doing it through the orchestrator, you are aware of context, like security, of users, of specific domains where data must cross. All of this is already built into whatever your orchestrator-and-billing-and-everything structure is, because you already have security and authentication, all those things, built into the thing, and you can just reuse those bits and pieces without having to build a second infrastructure to mirror this kind of thing.
D
Yeah, like if you can produce a /metrics per function deployment, then that's all you need. Because if you try to merge across deployment versions or whatnot, you suddenly get into staleness problems and whatnot. But if it's just "hey, this particular function, with this version, with this name, here's the /metrics", and there's some way to discover that, which you'll implicitly have because you can list all of them, that's kind of it.
E
Okay. I mean, so as we build out these layers and the design, we'll definitely share it in order to get your reviews. And again, it's a complex area, right, because there are different configurations and different requirements, and OTel currently can only support specific workflows in terms of just ingestion. That's one of the reasons why I want to make sure that those can be handled.
E
That is, Lambda is a different case altogether, right. So at that point we need to be able to also handle those use cases in the Prometheus pipeline in OTel, on the collector, sorry.
E
Yeah, I mean, Anthony, my assumption was Lambda itself, right, that is, the native Lambda engine itself. Now, the question I have, though, is: for application metrics that doesn't apply so much, right, for workload metrics, but for infrastructure, yes, which is natively being emitted.
E
Yeah, that's... yep, that's true! That's true!
C
And even if you weren't doing this with a /metrics and were pushing to a back end, at that point in time you don't have those super small bits and pieces; you already have a more aggregated overview. So a lot of the cost of maintaining state and such does not need to be paid at the individual leaf level.
C
All that being said, Prometheus still intends to make the serverless story a lot better, but those are just some of the problems which are basically why Prometheus is designed the way it is designed, because many of those problems go away if you have this through a defined /metrics.
C
And I mean this and others as well, where, with smaller workloads, you have the actual machine or system or what have you, in which individual workloads are running, and then you just have individual workloads, sometimes giving back state or not. But you care about this orchestration level first and foremost, and extract metrics from that.
C
In the community, I'm not aware of an issue yet. There is no formal process, but there is a process which we are doing-ish; it's established. We discuss on the mailing list, as is currently happening (also a way to self-onboard; ideally, watch the last dev summit), and then we create a design doc. The design doc that we set up is fully open for everyone to participate in, and then, while there's a lot of movement and back and forth and such, everything stays in that design document, or in the next version, or a different section, blah blah blah. But it is basically a Google Doc, and then, once consensus is reached and once we know which way to go forward, it branches out into PRs and such.
C
But that's the thing: get involved on the mailing list. Yeah, when we have a direction, one, two, twenty design docs will spring from this, and then...
E
Okay, I mean, that's useful, because again, I was, you know, tracking the discussion, so I'll definitely add in there.
E
Vishwa or Grace, any updates on your end?
E
Yeah, I think many of the folks, even many of the maintainers, were, you know, off for the long weekend, so things may have been a little slow on the project; you can kind of see from the number of issues and PRs.
G
Actually, we are still working on it; Anthony gave an update at the beginning.
E
Anthony, any other topics on your radar?

A
I mean, those are the two topics I had specifically.
E
All right, folks, I think we can give everybody back at least 20 minutes before the collector meeting starts. Thanks, everyone. Thanks, Richard. Thanks, Brian. Thanks, everyone, for joining. Bye, take care.