From YouTube: 2021-10-26 meeting
A
Meanwhile, please add anything important to the agenda. Also, I have a small doubt: usually we were leaving the metrics time box as the last part of the agenda.

A
Okay, let's start, so we don't end up rushing later for those 20 minutes to discuss metrics at the end of the call. So thank you, everybody, for joining. We have a few items in the agenda today; don't forget to add yourselves to the attendee list as well. The first item is from the GC: a quick reminder to vote in the GC election. It opened yesterday and it's going to be closing in 36 hours, probably, something like that. So it's basically one day and a half, or two maximum.
C
Yes, all right! Thank you. Okay, so I want to present this PR to you all. This is an idea to make our specification documents compliant with the specification for specifications that is defined in the W3C documents. So let's first talk a little bit about this specification for specifications.

C
It's from 2005, and it pretty much defines how a specification should be written. There are many advantages to writing our specification following these guidelines; I'll list a few of them right away. The first one is that this document defines what is a normative section and what is an informative section.
C
The informative sections are pretty much everything else. So this guideline for writing specs says that specifications should clearly indicate when a section is normative, so that it's easy to gather all the requirements that are defined in the specification. Another advantage of following this guideline is that we can also follow the approach where specification requirements are written with a testing objective in mind. When writing the requirements, the normative sections, it puts people in a state of mind where they write it down
C
…thinking: can this be tested? That's good, because that makes implementing the specification just a simple process: following the requirements that are there, writing test cases that make sure that the implementation is compliant, of course, and then filling out the code implementation until all the test cases pass.
C
Something else is that this specification document also defines what to do when there are some parts of the specification that are conditional.
C
I have noticed, while reading our specs, that we have some conditional requirements. For example, sometimes we say: if the implementation language, for example, supports this approach, then this part must comply with this sort of requirement. That is, of course, conditional, right: that is a requirement that only applies to certain languages that are capable of following this or that approach.
C
So this PR that I have opened here has pretty much transformed one of our documents. I picked it pretty much in alphabetical order: the Baggage API document. The markdown in that document, in this PR, is written in a way where the requirements are separated from the rest of the document, and the conditional requirements are also specified as such. So, real quick, I can show you how it will look.
C
Here, so, pretty much following this guideline, I've transformed this document into several sections, each one labeled as a requirement, which includes one of these keywords. Then some of the requirements are conditional, and in that case the condition is defined here: if this condition is true, then this requirement applies.
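The requirement formatting described here could look roughly like the sketch below. The section titles and wording are illustrative only, invented for this example, and are not taken from the actual PR:

```markdown
## Requirement: header name

The propagator MUST use `baggage` as the header name.

## Conditional requirement: immutable baggage

Condition: the implementation language supports immutable types.

If the condition is true, the returned Baggage instance SHOULD be immutable.
```

Splitting each requirement into its own labeled section makes it easy to enumerate everything an implementation must satisfy, and to skip the conditional sections whose conditions do not apply.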
C
The intention of this PR, and what problem it solves, is detailed here. I have noticed that our specification, as it is right now, contains several things that are not consistent with how this specification should be: stuff like, for example, recommendation sections that include a MUST, or things like that. Without these checks it's easy to let those slip in.
C
So I think this helps us, especially people who are implementing, because it will provide a list of what exactly needs to be implemented and how mandatory it is. And just something else I wanted to show you: right now there's a draft of a PR that is being written in this style, so you can also take a look here at how it will be.
A
Perfect, thanks so much for that. I think the details will require actual time from people here, so thank you for that. Let's take a look at it in detail later; I think it's important. Thank you! Okay, everybody: if you have time, please take a look at this PR. I think it's an interesting approach to format the specification, to have it more similar to what the W3C does.
A
Okay. For the sake of time, let's move to the next item: jmacd, the probability sampling trace state specification.
D
Yeah, I just thought I'd give an update, since I've been claiming I would. Last week I said I didn't have one, and this week I will say I made quite a lot of progress on the draft, which is now updated. I put myself after Diego here in this time slot because I wanted to say I've used, or tried to study, the specification for specs, and it definitely forced me to really work harder and produce a better specification, and I want to just sort of acknowledge that.
D
That's what I felt when I worked through trying to format and separate normative and non-normative sections, and trying to make sure that each requirement is stated in a minimal and testable way. So I just wanted to endorse Diego's work and say that I think this is worth doing, if we can do it incrementally and not try to boil our spec ocean all at the same time. That being said, I will come back next week with a prototype that demonstrates conformance with the draft that I am working on. It's PR 247.
E
Whoever wrote that… sorry, trying to unmute. That was me. I just wanted to quickly mention: Bogdan actually made a proposal in the W3C to add a trace flag, and when the flag is set to true, I believe it was 63 bits of the trace ID that would be guaranteed to be random, as long as, you know, you trust the caller. Which solves half of your initial proposal.
D
It also gets us a lot of what we want, because when you are unsampled there's no extra cost in the context, which is the bulk of the case; the cost is when you're sampling. So it gets us most of the way towards a really efficient implementation, and it is also, as you say, sort of orthogonal, so it can be done separately.
E
It's also completely backwards compatible, so that was a big driver in the meeting last week.
F
Yeah, just wanted to give a quick update from the messaging semantic conventions working group. This week we merged the PR where we defined the scenarios and the scope that we want to have in the first version. So thanks to everybody who contributed to that, and who reviewed and approved. Now we are continuing, basically brainstorming, more or less, about context propagation, because in the current messaging semantic conventions
F
…there are some scenarios that also define how spans are created and how they are parented or linked, so that's kind of a special case amongst the other semantic conventions, because it's currently the only one where this is included. We are just brainstorming, trying to make it consistent, because the current examples that are in the spec are actually not really consistent. So we're trying to visualize that and discuss; there's a document of brainstorming notes, for whoever's interested in that.
A
Perfect, thanks so much for that. I'm pretty happy that that got merged, and I'm looking forward to its inclusion into the actual specification. Thank you so much for that. Okay, next item: Josh, the roadmap.
G
Yeah, so we're trying to get a roadmap out by the end of the week, if possible, but there's a lot to do in it. Some is actually for Carlos; thank you for signing up for some of that. So, what I'd like to do: there's basically an overall state of OTel, in terms of where things stand, and then there's a section where I want to have all the different efforts that are going on, for example metrics, sampling, instrumentation, the pieces we just heard from, with a
G
…you know: here's what we're investigating, and here's what we want to do over the next three quarters, in rough swaths. Just like: this is what we're investigating here, here's when we expect to have something you can look at, this is a common question we hear from users. I took a first crack at almost everything, but not some of these specific instrumentation groups.
G
I can do that if no one has time to step in, but I'd like it if the actual owners would, because you're going to know better and in more detail what's going on; but that's fine, you can take a look. One of the things I wanted to call out… well, there are actually two things I want to call out. First, I'm just going to call out a thing that I wrote that I want to see if it resonates with everybody. Should I present? I guess I should present.
G
Okay, it shares audio by default. Forget that. Can you hear it? Yeah? It shares audio from the tab; why does my document have audio? I don't understand. Come on, Zoom. All right, anyway: so this is the state of OpenTelemetry, here's the status of different projects. I'm going to talk about this next, because I want to have some discussion on it.
G
Then what we call out here is just major goals and technical challenges we're solving in OpenTelemetry: for example, the goal from the Prometheus working group of being fully Prometheus compatible, and the goal from the metrics API and SDK SIG of stabilizing this notion of high-definition histograms that's going on, right. Those are kind of high-level goals, and I called out what I think is the primary set of goals that this community has actually bifurcated into and is solving. I want to make sure that this resonates with you.
G
I'm absolutely positive I didn't get everything; I tried to get all of the high-level pieces I knew, so please add. Okay, is there anything really, really bad, like egregious, that I missed?
H
I think one of the things that, in metrics at least, I feel is overall being addressed by all the workflows that are being worked on is ensuring OTLP support and compatibility across the board. And although it is a, you know, unwritten goal of the project, I think it would be good to call it out, because end users usually find that helpful.
G
Yeah, yeah, please do, and if my wording's bad, let me know. Is there anything else like that that maybe needs to get called out?
B
I think folks also talk about the OpenCensus, like, shim or bridge, yeah.
G
Yeah, I list it under traces, and again, some of these things are cross-cutting and I'm not quite certain where to put them; I put it under traces initially, under this goal section. At one point in time I actually had a separate goal that said compatibility, and I put the Prometheus stuff there, the OpenCensus stuff there, the OpenTracing stuff there. We could go either way; please comment on the document.
G
I think our use of time here probably isn't super valuable for that, but I'm happy to talk about this offline or in chat, because that's a good call-out: if this isn't denoting its importance in the community, the same as the OpenTelemetry protocol, then we should call it out. So please, please comment there. One thing I wanted to go over real quick: if you look at metrics, this is an example.
G
We plan to have expanded convenience features, you know, at some point, and then we expect to shift into instrumentation and OpenCensus compatibility. We also expect to have a significant effort on… and this is around Q2, right. This is just to give both maintainers and SIG folks a notion of what we're planning to invest in. Now, I'm putting Reiley on the spot, because I wrote this down and didn't confirm with him, and I didn't get to run this by the metrics spec SIG, which is what I plan to do.
G
Well, that's kind of why, one of the reasons, I'm bringing it up now. I want to make sure that we all put something down that resonates with us, that we feel comfortable with, and that gives users an understanding of where the community is moving. So I'm more than happy to take a swath at writing all of these Gantt charts for every effort if needed, but I left some to-dos and I put some owners for various pieces to clarify things. Johannes?
G
One thing I was thinking is, it might make sense for you to take at least the messaging part of instrumentation and provide that here. I don't know if we want to do… I was thinking of doing a broader instrumentation-scoped description, but it might make sense to just have messaging as its own thing, and split out the other instrumentation components as their own things as well. That's totally fine; the key is, I'd like to have this document in evaluatable form by Friday.
H
Okay, Josh, can I work with you on this? Because I think there are some things missing. There's also, I think, not enough detail on the Gantt charts by quarter; I think there's a little bit more detail that we should add, especially given that there are a lot of moving parts for the collector, and that's not well represented in this account right now.
G
I think I had… did I have a section for the collector? I think so.
G
Yeah, please, please do, and I think you have edit access to this, Alolita. Anyone who wants edit access and wants to write sections: more than happy to add you. The one thing I want to call out real quick, and spend more of our time on than just the fact that this exists, is actually our definition of status.
G
We don't expect the APIs to change much, but we also want you to know that we haven't declared these stable; but we're ready for you to try them out and give us feedback, right. That's a thing that I'd like to denote specifically, because I think a lot of the metrics SDKs are going to be there very shortly, and I'd like to make sure that we are denoting this for people to try those out. There are aspects of the collector… I know the collector is under this
G
…this effort to mark the user-facing configuration as stable, even if the underlying Go modules aren't, because the collector itself as a component is stable, and I want to make sure users know what the hell this means, and that it's clearly denoted. And so, from the standpoint of only having basically this ternary, and it's almost binary, of experimental or feature freeze or, sorry, experimental or stable, I feel like we're missing a status in there, and I'd like to define something new. What I've done for now is
G
…I have colors, where green means adopt, yellow means evaluate, and red means watch, kind of like the tech radar, if you will. I'd like to propose that we have a new name for something. A lot of the SIG maintainers went in and used the names they're using for their own components. So if we were to look here, we have feature freeze for the metrics specification, because we decided we needed a notion of feature freeze, so people know whether or not they can… oh, I do need to give you edit access.
G
I will work on it. We did decide that we needed feature freeze, so that people aren't adding features to something that we're planning to stabilize, and so that we only get bug fixes for it. For, like, the Go SDK, there's this notion of beta; you know, that's a thing that kind of showed up from some of the SIGs. Where was the other one? Yeah, there's a beta down here as well. Beta is not a thing that we denote in the spec.
G
So there was an open question of whether we should have everything be, you know, either marked as experimental or stable. What the hell does beta mean? Then Erlang is in review for its stable release; how do we denote that, right? So, effectively…
G
Well, the proposal that I'm putting out, I'll just lay it out straight: we have two notions. It's basically stable or it's not; but when it's not stable, we have this evaluate phase that I'm going to call beta, effectively. So if something is in beta, it means it's not stable from a semver standpoint, but we'd like you to look at it, try it out, evaluate it, and see the quality.
H
So from a release standpoint, isn't that a release candidate? Because I think that's where the confusion is also occurring somewhat: beta is being used by some of the, you know, repos, and then there is RC that is used, and there is a systematic…
G
There's a big difference, though, between a release candidate and what I'm suggesting, which is a beta: I want the ability for us to say, hey, go evaluate this, but we won't release it even if we don't find issues; we still want to spend a little bit more time stabilizing it, or doing some extra work, like some other cleanup we know we want to do. The difference is, with a release candidate
G
…if we put one of those out and we don't see bugs, we should be willing to then immediately release, and that is not the case with some of these betas. With some of these betas we know that there's some cleanup to do internally; there are, you know, better error messages we can provide, or performance tweaks
G
…we want to do. They're not actual release candidates, but from a user standpoint they provide the behavior that we're planning to give them, and they're evaluatable; versus some of these other things that we're working on, which are just not even in an evaluatable state, right. There's code in the library, there's a release that has a package, but that package should never be used by a user just yet, because we're just not even ready for feedback; we're still implementing the thing, you know.
I
So, basically: do blue, yellow and red work? Yeah, that should work, but I'm not an expert in the right way to do it. But I know that I can't tell the difference between red and green in many, many documents.
G
Yep, I know. So, if you could… I tried to pick different hues of green and red, but yeah, it'd be better if we just used different colors too.
I
Try turning on the colorblind-friendly palette on GitHub and you'll see well-researched color choices.
G
The underlying thing, the notion of adopt versus try out versus early version, where basically yellow and red should only affect this experimental, you know, denotation: does anyone violently disagree with that, or think that this is not a useful concept to denote to users?
G
Okay, all right, let's… we'll take it offline, then, and you and I can talk about this more, Alolita. But from what I understand from that comment, having more than just experimental and stable is useful; it's just that the colors are confusing and we have to pick good names.
A
Okay, cool. By the way, the experimental, stable and all the other labels were proposed by Ted Young, so you may want to, or I can, talk to him about adding more keywords in that regard. We can talk offline about that, and then we can probably make a proposal or something.
A
Thank you so much for that. Okay, we have four minutes before the metrics time box, so the last item is from me. There is this PR about having the propagators environment variable use "none" as an explicit keyword for saying no propagation whatsoever, as opposed to having the value of this environment variable be empty. This is how other similar environment variables work.
A
Instead of accepting just an empty value, you have to say "none" explicitly. Well, there were not many people yesterday at the maintainers' meeting, but I mentioned this there, and we want reviews for these. For the sake of time, we don't have enough time to discuss that here, and people probably haven't paid attention to this one, but please take a look later today or this week.
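The behavior being proposed — an explicit "none" keyword rather than an empty value — could be sketched roughly as follows. This is an illustrative sketch only, not the actual SDK code; the function name, the default list, and the environment variable handling are all assumptions made for this example:

```python
# Illustrative defaults; real SDKs define their own default propagator list.
DEFAULT_PROPAGATORS = ["tracecontext", "baggage"]

def resolve_propagators(env_value):
    """Resolve a comma-separated propagator list from an environment value.

    An unset variable falls back to the defaults; only the explicit
    keyword "none" (not an empty string) disables propagation entirely.
    """
    if env_value is None:
        return DEFAULT_PROPAGATORS
    if env_value.strip().lower() == "none":
        return []
    return [name.strip() for name in env_value.split(",") if name.strip()]
```

A caller would pass in something like `os.environ.get("OTEL_PROPAGATORS")`. Requiring a deliberate keyword avoids silently disabling propagation when the variable is merely set but left empty.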
A
Whenever you have time, please let us know. I think it's very straightforward, but we don't want, you know, to make life harder for anybody that already had something different from this. That's all from my side, so let's jump into the metrics time box.
G
Yeah, actually, we did not end up implementing this in Java; it's more that we could. We have two options here. One is we can provide this, because I think it's absolutely going to be necessary, or we can do it in the next phase of the specification, and we can…
B
Okay, so what I will do: I'll send a small PR trying to do the same as we mentioned for the hint. I think in the API spike somewhere we mentioned that if we later added a hint, then it might change the behavior in a non-breaking way. So I'll clarify the wording.
G
We didn't finish that bit, and so, when we remove that TBD, we could also remove the definition of that interface; or, since we agreed on what the interface looks like, we could just leave it and add in the usage of it later. One of the two.
B
Okay, that covers all the spike-related issues. I have a small question on behalf of one maintainer from OpenTelemetry .NET. He's working on the Prometheus exporter, and he has multiple questions, which I want to see if we have a common understanding on; if not, I'm happy to send some PRs to clarify that in the Prometheus exporter document. The question is: do we expect the Prometheus exporter to handle multiple sessions when there are multiple scrapers?
B
Okay, so I'll give you one example. The current Prometheus exporter, at least in .NET, doesn't allow re-entrance. So if there's already a scraper accessing the endpoint and another one tries to access it, it will just respond and tell the caller to go away, or something, instead of waiting. Is that the correct behavior, or should we wait?
J
That doesn't seem correct. We would expect something more like: if you need to gather a response and prepare it, then once it's prepared for the first scraper, you can give that same response to the second scraper that came in while you were preparing the response for the first one.
J
Perhaps you cache it for some short period of time, so that any subsequent requests that come along get that same value until, you know, five or ten seconds down the road.
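The suggestion — prepare one response and serve it to concurrent or closely spaced scrapes from a short-lived cache — could look roughly like this. This is a minimal sketch, not the actual .NET or Go exporter code; the class name, parameter names, and the 5-second default are assumptions made for illustration:

```python
import threading
import time

class CachedScrapeHandler:
    """Serve concurrent /metrics requests from one shared, briefly cached response."""

    def __init__(self, collect, ttl_seconds=5.0):
        self._collect = collect          # callable that gathers and formats metrics
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._cached = None
        self._expires_at = 0.0

    def handle_scrape(self):
        # Scrapers serialize here: the first one past an expired cache pays
        # the collection cost; later ones reuse the prepared response instead
        # of being rejected or re-triggering collection.
        with self._lock:
            now = time.monotonic()
            if self._cached is None or now >= self._expires_at:
                self._cached = self._collect()
                self._expires_at = now + self._ttl
            return self._cached
```

With this shape, an ad-hoc `curl` during a real scrape gets the same payload rather than an error, at the cost of up to `ttl_seconds` of staleness — which is the trade-off discussed below.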
H
Multiple threads open to the same…

H
A different user…
G
Oh, sorry, you cut out there and I jumped in. It is common for you to just curl that endpoint to see if it's working; like, that's just a thing that happens, right, if you're debugging locally or, you know, wherever.
G
What I'm saying is: in the event that I curl something while someone else, the scraper, is actually running at the same time, we shouldn't fail. I think we should handle simultaneous requests to the Prometheus exporter in some fashion. So if you make the one wait, that's fine, if that's necessary; if you join the two together, that's fine too. I don't know what your architecture looks like, but I would make sure… you know, it's an HTTP server, it's an HTTP endpoint.
H
Yeah, Reiley, we should; and we can also bring it up in the Prometheus group tomorrow, so that we can also understand, you know, some of the other ways that Prometheus itself handles this.
D
In the spec we wrote that asynchronous instruments would be evaluated once per exporter, and my question is really: do we need to re-evaluate the asynchronous instruments in this situation, according to the spec? Because that would disagree with caching them. In the Go implementation I do have a staleness setting that says I will serve the same metrics for up to five or ten seconds, as the suggestion here says, but I don't know what the Prometheus developers would think of that. I do think of it as stateless, though.
A
Thank you so much. I have a small question for you, now that you're trying to reach, you know, stability by the end of this month, which is this week, basically. Do you need reviews of the remaining parts from people who are not involved in the metrics group?
A
Yeah, I was wondering because, you know, in the past we had a few situations where something was about to reach stability, but then there were not enough eyes, like, giving the actual stamp.
B
But just to clarify: we're not moving the spec to stable, we're moving it to feature freeze. That means we're not going to add features. This is a signal, because if people come and say, hey, I want something cool like a base-2 exponential buckets histogram, we'll tell them: wait, we're not going to focus on that right now, not until we get to stable.
A
So I think this is it, then; we don't have any other items in the agenda. So yeah, let's go back to work. Thank you so much, everybody; stay safe and have a coffee or something.