From YouTube: 2021-09-09 meeting
A: Yeah, no, I had an office. I moved house earlier this year, or rather I moved into a house instead of renting, and one of my projects during my paternity leave was getting an actual workspace set up in the basement. So nice, it's my little 11-foot-square corner of paradise. I get to paint whatever color I want, so I chose a nice dark color. I still gotta put stuff behind me, gotta figure out some set dressing here, but.
A: Flip it around; it's actually like, I've got the set, I feel like I've got a pretty good setup. I actually switched to using a big 4K TV, like a 55-inch TV mounted on the wall, as my kind of primary display, and then I've got another monitor over here and my laptop, so I've kind of got a whole setup.
B: The past couple of meetings, Jonah.
C: Hello, hello, hello. This is a very specially attended meeting these days. We can probably do some PR if you want more people to start coming.
A: Yeah, two minutes. Let's go ahead and get rolling. So, Ted, just FYI, Patrice and I talked earlier this week, just to kind of catch up before the SIG and discuss stuff. So some of this is stuff that he already knows about, and we'll be repeating it for your benefit. First thing I've got here is, these two things are kind of related.
A: It's docs, documentation rework again, but it's not really so much a rework, it's more of a refinement. So, as we all know, the current situation of the docs is that we have this copy GitHub Action: the canonical docs are in SIG repos, and then they get copied over, through the use of an action, to create a PR.
A
Patrice
has
added
in
or
started
proposed,
and
I
have
come
to
agree
with
kind
of
a
refinement
of
this,
where
there
will
be
two
options
and
sigs
will
get
to
choose
one
or
the
other
option
a
is.
They
keep
their
docs
in
their
sig
repo,
and
if
that's
the
case,
then
we
sub
module
their
repo
into
the
website
repo
and
if
they
don't,
then
their
docs
live
in
the
website
repo
and
that's
where
they
live,
there's
no
copying
and
back
and
forth.
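Option A, where the canonical docs stay in the SIG repo, would leave the website repo with a `.gitmodules` entry along these lines. The path and URL below are illustrative stand-ins, not the real repositories:

```ini
# Hypothetical .gitmodules entry in the website repo under option A:
# the SIG repo stays canonical and is vendored in as a submodule.
[submodule "content/docs/example-sig"]
	path = content/docs/example-sig
	url = https://github.com/example-org/example-sig.git
```

Contributors cloning the website repo would then pull the SIG docs in with `git submodule update --init`, rather than relying on a copy action to open PRs.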
B: No, that I haven't, but what has been merged is the fact that the specs are now a submodule. But yeah, that's unrelated, I suppose.
A: So Patrice also has an enhancement, or I think it's already merged, that will make the "open a documentation issue" link in the repo smart, so it'll go to the correct place depending on.
A: I said you can work on a sort of revised information architecture for the docs site itself, to make sure that everything is consistent and clear, and that everyone knows this is the content available on the website. And then also, if you look at the gRPC docs, I really like what they did over there, with having links out to individual links.
A: I click Languages, click C#, but these two right here are the interesting things I'd like to add to ours, which is, you can deep link out from the site, from the menu. So for Python, where they have their Read the Docs site, or JavaScript, or whoever, you know.
A: Yeah, I think this would also be a great place to, like, for instance, Ted, I know, like, PHP, right? We want more PHP maintainers. So you could have a docs index page, and then there could be a banner on it that's like, hey, this language needs more attention, this language needs more contributors, and we.
B: So I think the C++ one has a banner, if I remember correctly. C#, sorry, C.
A: Yeah, I don't know, like, maybe. I think this is one of those things that we can kind of do mock-ups on, and we can play around with the language. But I really like this idea, and I believe this is something else which can help us a lot, in terms of sort of defining, like, hey, these are going to be the sections, and helping us build that out. So, right.
A
It's
really
just
a
feature
we
turn
on.
We
would
have
to
port
the
content
for
medium,
which
shouldn't
be
a
problem.
B: But we'd probably want to leave the current blog entries where they are, but have some anchors here. That's what we've done elsewhere: make sure that there's an entry here, and it may give a short paragraph, but then it'll say, if you want to read.
B
If
you
want
to
read
more,
then
it
redirects
and
then
for
new
blog
posts.
Of
course,
then
they'd
be
published.
C: We can use canonical links to recreate the old blog posts here without disrupting SEO. I'm sure we can make canonical links work in Hugo; I'm sure that's a thing, I just wonder... yeah. Yes, it's probably fine. I freaking hate Medium, so, well.
A: Yeah, that's the thing: nobody, I think, really likes Medium, and I think there's probably a lot to be said for.
B: I think that's the way we handle new blog posts for gRPC, for example. Yeah, somebody submits a PR; while it's being worked, the blog page is marked as draft, so it doesn't show up on the production server, and we can merge and iterate. Eventually, once we do want it published on the production server, we remove the draft status, and then boom, it appears.
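The draft-then-publish flow described here maps onto Hugo's built-in `draft` front-matter field; the title and date below are placeholders:

```yaml
# Placeholder front matter for a blog post under review.
# With draft: true, the page is skipped by a normal `hugo` build,
# so it never reaches the production server. Flipping it to
# draft: false (or removing the line) publishes the post.
title: "Example post"
date: 2021-09-09
draft: true
```

During review, drafts can still be previewed locally with `hugo server --buildDrafts`.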
A: So, like, right now we have docs approvers and docs maintainers; maybe a blog approvers or, you know, blog maintainers group or something.
A: Yeah, I'm sure we can. I feel like that's a detail that is not super necessary to... we'll do something, yeah. I.
D: Actually, I do like the idea of moving off of Medium, because right now, if you want to publish, it first sort of requires people to have a Medium account, which, yeah, you know, like, it's not.
D: The end of the world, but it's not nothing. And also, if it's your first time publishing, there's a separate workflow required. Like I said, I think there's a handful of us; Ted, I think you're probably an admin, and Morgan, and it might.
D: Be, if I remember to pay for another year of Medium membership. Like, you have to add new authors if somebody's a first-time author, and that's also.
A
Just
like
yes,
it's
awful
the
medium
process
kind
of
sucks,
so
I
like
the
idea
of
us
having
the
blog
under
hotel
at
I
o
I
like
being
able
to
have
an
rss
feed
for
those
of
us
who
still
hold
hold
the
candle
out
for
google
reader.
A
It
also
means
that
site
search
gets
more
useful
and
valuable,
because
site
search
would
also
get
those
blog
posts.
So
if
you're
searching
for
you
know,
if
someone
posts
a
really
good
tutorial
on
the
blog,
then
you're
searching
for
whatever
tutorial
then
you'll
find
that.
C: Yeah, also, on a practical matter, Medium doesn't support code blocks, right, in any reasonable sense, and that's kind of useful, to say the least. I've had to do really annoying workarounds to make that work there.
A
Yeah,
so
I
think
in
in
terms
of
like
overall
look
and
feel
I'm
not
saying,
let's
completely
copy,
what
grpc
is
doing,
but
let's
kind
of
copy
what
grpc
is
doing
and
get
a
coat
of
paint.
A: It's not bad; I think we could do... the hero could be smaller, we could do another few rounds on this language. These should link out to things, right?
C: Yeah, we need to overhaul the status page and make that more prominent here.
C: Also, the current layout just looks like it's missing an image or something; like, there's just this huge chunk of important real estate that's just empty, and.
C: The color scheme is great, I mean. I love the background image; I'd love it if we could.
A: Yeah, I think if we could get someone to kind of vectorize this, or take this as inspiration and vectorize it, and sort of do some iterations on the actual look and feel, that would be really cool. But yeah, anyway. So that's kind of my... yeah, and then there's a bunch of, like, other.
C: Yeah, by the way, one thing that's in an issue, that maybe real quick we could figure out where it should go, is our marketing guidelines. This is a thing that has come up multiple times. We have these marketing guidelines; I put.
C: Like, I don't think it necessarily deserves its own block on the top menu bar, necessarily, but I don't know. It would be great if we could unbury it, just because it's a load-bearing web page, shall we say.
B
You
do
you
find
that
the
the
fact
that
it's
present
on
the
community
page
now,
maybe
that
might
be
get
us
80
percent
of
the
way
for
folks.
C
Yeah,
it's
a
it's
a
step
up,
yeah,
it's
a
step
up
to
just
have
it
linked
linked
from
here;
that's
definitely
better
than
what
we've
had
but
and
it's
possibly
hopeless.
It's
just
it's
a
different
audience
right,
like
the
the
audience
who
needs
to
see
that
is
not
an
audience,
that's
contributing
or
or
spending
a
lot
of
time
around
the
project
right.
It's
people
who
are
in
the
marketing
and
salesy,
like
managers
and
adjacent
departments
or
the
people,
need
to
see
it
so
partially.
C: That's... yeah. So I don't know; again, it's one of these things where I think we could make the sign as large as we want and people still aren't going to read it, but.
C: Yeah, maybe there's, like, a third block with a header of some kind that has marketing guidelines, and maybe contributing blog posts and stuff like that. Like.
B
So,
by
the
way
this
is
the
standard
doxy
community,
page
yeah.
D: Yeah, so I mean, I've been in the position of trying to find this type of resource plenty of times. I feel like, even if the headline is just to the point, just like "for marketers" or "marketing resources", and you put the guidelines, the brand resources, and possibly also the icons.
D
Are
the
things
I
would
most
likely
like?
I
mean
98
percent
of
the
time,
I'm
just
looking
for
a
like
vector
format,
logo
or
a
cmyk
version
of
the
logo,
or
something
but
like
those
are
the
those
are
the
things
that
like
I
would
expect
to
sort
of
be
yeah
like
and
like
marketers
are
probably
going
to
be
your
your
primary
users.
A: Go "marketing resources", whatever. But it would be good to look at this community page, look at what we actually have here, like: where do we really want to funnel people? What do we want to pull out of this and put onto, like... I think a good example of this is the Slack stuff, right? It's like, "oh, chat with other developers, sign up here." Like, you know, this could even be its own thing. Maybe we should have deep links from the language pages, maybe not, but either way.
C: Adding, yeah, this page... I think we should just do away with the release notes that we're pulling in, like.
A: I do want to roll back to one thing I was going to say about, like, the difference between... We are going to encourage people, I think, to move their docs back into the website repo for one very specific, two very specific reasons. One: internationalization. Subrepos would be extremely difficult to use with internationalization, because of the way the Docsy file structure, or the Hugo file structure, needs to be laid out to do multiple versions of something, because you have to duplicate the directory. And we are about out of time, so.
A: Hop off the call in a second; we can talk more about it. I think, Patrice, I'll write an issue up and we can discuss it in there. Okay.
F: Okay, so we can start. So please put your name on the attendance list. First, an update on the SDK experimental release. I think the last bigger PR is the metric reader PR; we got several approvals here.
F: I think we're trying to merge that as soon as possible, and then there's two small follow-up PRs. One is, I tried to fix the flush and shutdown on the meter provider, and notice there's a comment from Bogdan. He suggested that the flush might introduce some issues, so he asked if we should remove the flush for now.
F: I feel that without flush, the semantics for shutdown would be a little bit confusing: we expect that when people call shutdown, the underlying SDK should also do some flushing, especially for the push exporters. Or we just leave that for now and keep it vague, so later, when we add the flush, we can still change the wording there. So I want to get some ideas here.
F: I personally think, given force flush is a very well-established concept for traces, and it doesn't seem to me that it will be adding a lot of overhead, I would want to have that. But if folks from Java, or like other languages, when doing the prototype, think it'll be too hard and you'd rather leave it until after the stable release... I don't have a strong opinion.
F: So let me switch to the file. So basically I copied the wording from the trace spec. This is the shutdown part; I think there's a comment from Josh about the failure status. And also the reader part: this is the outstanding one from Bogdan, so please read this.
G: I would agree that it's definitely easier to remove force flush from the spec at the moment, but I think you're also right. I wonder if there's a way to relax the specification for what force flush will do, to accommodate Bogdan's concern. I can imagine it, because in the Go prototype, you know, forcing a flush means calling all the observers, and if it means that all the exporters had to accept a package of data and handle it themselves.
G
That
would
be
problematic.
But
we
talked
last
week
about
how
every
different
exporter
could
have
different
interval,
free
export
intervals
and
therefore
introducing
force
flush.
Just
means
you
have
to
expect
irregular
intervals.
As
an
exporter.
F
Not
necessarily,
I
I
think
here
false
flash
actually
gives
you
that
flexibility,
you
look
here
it
just
let
the
color
like
notify
the
individual
exporters.
F
So
so
for
the
simple
case,
you
can
simply
implement
the
open,
telemetry
goal
with
returning
error,
saying
I
don't
support
that
for
now.
G
Well,
so
I,
but
I
do
actually
support
it.
It's
just
that
I
have
to
call
collect
and
at
the
moment
in
the
interface,
the
the
sort
of
normal
operation,
you
can
have
there's
essentially
two
modes.
If
there's
no
push
export
happening
well,
it
only
collects
metrics
on
demand.
So
when
there's
a
scrape,
there's
there's
an
on
demand
and
then,
if
you,
but
if
you're,
mixing
push
and
pull
it'll,
basically
compute
that
push
interval,
you
know
that
that
the
collect
will
be
called
once
every
interval
and
the
export
will
happen.
G
So
when
I
shut
down
the
sdk,
I
do
a
final
like
collect
and
that's
it
so
the
if
the
definition
of
force
flush
is,
I
can
implement
it
two
ways.
One
is
just
call
collect
at
an
odd
interval
like
halfway
between
an
interval
and
everyone
should
be
able
to
handle
that
or
wait
for
the
next
regularly
scheduled
flush
or
collection,
and
I
think
bogan's
concern
is
probably
that
it's
easier
to
wait
for
the
next
scheduled
flush
or
the
next
schedule
to
collect
than
it's
a
force
of
flush.
G
But
I
could
be
wrong
and
I
don't
think
that
that
would
be
an
unreasonable
implementation
if
you're
flushing
frequently,
but
it
could
also
be
like
if
you've
got
some
instruments
exporting
on
a
fast
interval
and
some
instruments
exporting
on
a
slow
interval.
Now
you're
going
to
like
is
there
something?
Is
someone
going
to
be
upset
by
having
more
export
than
I
don't
know?.
H: Let's talk about the most important use case for force flush, though, which is specifically, like, Lambda and those function-based computing things, where you have to flush when the thing is killed, because your process doesn't live long enough for an export interval.
H: The other alternative is flush only... you know, flush only flows through the periodic metric reader and these synchronous push-based use cases, and it doesn't do anything for the pull-based use cases. Like, I think that's kind of what is going on here: why are we defining a method that we know doesn't work for a specific use case? And then, what are we defining it to do for Prometheus, right? If force flush hits the Prometheus exporter.
H
Does
the
prometheus
exporter
fail?
You
know
and-
and
I
think
the
way
it
was
originally
phrased
was
it
denotes
a
failure
when
it
returns?
Does
that
mean
that
we
take
that
failure
and
stop
doing
any
more
force
flushing
in
like
a
standard
implementation
that
goes
through
and
tries
to
force
flush
readers
like
that,
that's
the
interaction
that
we
have
to
tease
out
there
so,
like,
I
would
say
if
we
make
force
flush,
be
ignored
on
readers.
F: Yeah, but you know, there are two ways. One is you go through the readers and you notify them and they run sequentially; or you just notify all of them and wait for all of them to return. And if they don't return, or they give you a failure in the end when they hit the timeout, you just collect all the data, as much as you can, and based on the feedback you decide what to return.
F: So I think we have three options here. One is, we put this as an issue for the SDK feature freeze, and we merge this as-is, knowing that we might debate on this and eventually might remove the force flush if Bogdan's concern wins out. Or we're saying this is a blocker we should solve, which I don't think is the case. Or we're saying we should just focus on this and hold the release.
G
Can
I
pro,
can
I
propose
something
here?
Riley
we
discussed
it
last
time
and
I'm
not
sure
everyone
especially
josh,
was
out
there's
a
question
about
whether
these
pull
exporters
that
you've
specified
are
really
push
exporters
in
disguise.
G
That's
how
I
wanted
to
say
it
like
there's
a
question
about
whether
who
who
initiates
the
the
collection
and
then,
but
what
I
think
josh
is
saying,
is
that
if
there,
if
an
exporter
is
capable
of
of
like
sending
data
out,
then
it
is
considered
a
push
exporter
and
and
there's
a
periodic
push
exporter
where
the
signal
to
push
comes
internally
and
there's
a
on-demand
push
exporter,
where
the
signal
comes
from
anywhere
and
force,
flush
is
really
just
like
the
the
non-periodic
push
exporter
and
we
there
is
no
such
thing
as
a
pull
exporter.
H: I just missed the last sentence. I was doing something, I had got a ping. What was your sentence about, like, the last bit there, just.
G
Getting
rid
of
that
riley
said
that
if,
if
the
wording
was
changed
to
say
do
all
that
it
could,
because
I
was
trying
to
answer
your
concern,
essentially,
that
a
pull
exporter
can't
really
do
anything
on
a
force
flush
because
we've
been
calling
it
an
exporter
when
it's
not
and
there
there
is
something
that
riley
has
talked
about
in
the
past,
which
is
a
pull
exporter
in
some
sense,
because
the
signal
to
to
collect
comes
from
outside
that
says,
pull
me
some
metrics,
but
the
mechanism
to
deliver
those
metrics
is
by
pushing
them,
meaning
take
this
data
that
I've
produced
and
and
carry
it
somewhere,
and
so
that
the
the
prometheus
is
not
truly
an
exporter
in
the
sense
of
the
word
prometheus
is
a
reader.
H: Yeah, I think so. One thing I do want to call out, that I thought was a nice cleanup in the Java implementation around readers, is that I can actually put a lifecycle on a reader. So actually calling shutdown on Prometheus makes sense, because I can shut down the HTTP port.
H
On
shutdown,
so
that's
nice,
so
I
want
to
call
out
that
shutdown
still
has
meaning,
but
flush
does
not
and
so
yeah.
I
I
think
I'd
agree
that
if
we
change
the
wording
to
say
it
does
all
it
can
and
if,
if
flesh,
doesn't
make
sense,
it
doesn't
flush.
I
think
that's
reasonable.
The
the
real
question
is:
what
should
the?
What
should
the
return
value
be
because
we
specify
return
value
of
flush
and
shutdown
right?
H: I think, though, if we answer those two questions, then everything is gravy. And so my proposal would be that we specify force flush should let you know whether it succeeded, failed, or timed out. If force flush, you know, doesn't make sense, maybe we return either a third option of "unimplemented", or, like, "unimplementable", or we just return success.
H
One
of
the
two
right
and
then
from
the
force
flush
standpoint
at
the
meter
provider.
We
should
ignore
and
collect
all
failures
and
try
to
force
flush.
Every
exporter
there's
a
question
of
whether
we
need
to
call
force
flush,
synchronously
or
asynchronously.
I
don't
really
care
just
like.
I
don't
think
we
have
to
specify
that,
but
we
should
specify
that
we
don't
stop
calling
force
flush
because
one
exporter
failed.
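The behavior proposed here, notify every reader, aggregate failures, and never stop at the first error, can be sketched in Go. The `Reader` and `MeterProvider` types below are illustrative stand-ins, not the real opentelemetry-go API:

```go
package main

import (
	"errors"
	"fmt"
)

// Reader is a stand-in for a metric reader; ForceFlush reports
// whether the reader flushed everything it could.
type Reader interface {
	ForceFlush() error
}

// pushReader flushes by exporting immediately; it may fail.
type pushReader struct{ fail bool }

func (r pushReader) ForceFlush() error {
	if r.fail {
		return errors.New("exporter unreachable")
	}
	return nil
}

// pullReader (a Prometheus-style reader) has nothing to push, so per
// the proposal it "does all it can" and simply reports success.
type pullReader struct{}

func (pullReader) ForceFlush() error { return nil }

type MeterProvider struct{ readers []Reader }

// ForceFlush notifies all readers, never stopping at the first
// failure, and returns the aggregated error (nil means every
// reader succeeded).
func (mp MeterProvider) ForceFlush() error {
	var errs []error
	for _, r := range mp.readers {
		if err := r.ForceFlush(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}

func main() {
	mp := MeterProvider{readers: []Reader{
		pushReader{fail: true}, pullReader{}, pushReader{},
	}}
	// The one failing reader surfaces in the aggregate error, but the
	// other readers were still flushed.
	fmt.Println(mp.ForceFlush() != nil) // prints true
}
```

Whether the per-reader calls run sequentially or in parallel is left open here, matching the discussion: only the "all readers get notified" part is pinned down.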
F: Yeah, so, Josh, to answer your question, I think you have multiple points there. The first one I want to address is, we're saying all the exporters. So probably we can say it has to make sure it notifies all the exporters; whether it's sequential or parallel is not specced out, but the "all" part is important, and.
F: Explain to you, so, yeah. I want to give that flexibility, because the tracing spec doesn't explain, if you have multiple processors or multiple pipelines with different exporters, what you do. And I think in metrics I don't have a goal to fix the tracing problem, so we really need that. I would argue that the tracing spec sucks and we should fix that as well. But my thinking is, there are multiple ways. You can say: I return a single status code.
F
The
status
code
is
based
on
the
end
operation
of
all
the
underlying
things.
So
if
there's
one
failure,
then
the
return
value
should
be
a
failure.
Then
you
might
argue
hey.
I
have
some
success,
but
you
give
me
a
big
failure.
What
does
that
mean?
And
of
course
you
can
return
aggregated
thing?
People
can
travel
through
that
and
they
can
even
understand
the
underlying
reader
probably
can
give
some
key
value
person.
F: This is your reader, like the order of the readers, and each one has a status code, and that works as well; and I think the current spec allows you, or at least it doesn't stop you from doing that. And regarding whether you have the force flush or not: I think ultimately it comes down to, how do you think about something not supported? If something wouldn't make sense, I try not to define it... if it doesn't make sense for a pull exporter.
H: Yeah, if you think about this from the inverse way: I'm calling force flush, I'm getting back that status code; what can I do with it that makes sense, right? If I'm handling a failure from the status code, can I consistently handle that failure, if the failure actually means something different depending on the exporter?
H
Now
I
have
some
some
really
complicated
logic,
around
handling
failure
from
forced
flush
right,
so
that
that
I
think,
is
that
that's
the
concern
as
long
as
like
we.
If
we
call
something
a
spade,
it
means
a
spade
right,
then
we're
fine.
So
if
it's
like
this,
if
we
have
a
partial
success
and
the
output
of
force,
flash
is
partial
success
because
we
have
a
a
thing
that
can't
handle
it.
That's
that's
fine.
I
think.
H
Then
we
can
clarify
for
people
how
to
use
that
error
status
code,
but
if
we
give
people
a
failure
and
they're
gonna
do
some
sort
of
failure
handling
code,
but
we
always
give
them
a
failure
because
we
know
it
can
never
succeed.
That
starts
to
get
funky
and
I
think
that's
something
we
want
to
avoid.
F
I
I
I
see
so
I
was
thinking
about
this
well
right
right
this
this
part.
So
imagine
if
you
only
have
one
exploder
that
takes
power
is
the
permissive
one
and
when
it
comes
like
flash,
you
know
it's
not
supported.
Do
you
just
return
success
and
people
might
yell
at
you
they're
saying
it
really
wouldn't
make
sense?
What
if
you
don't
have
any
exposure
at
all?
You
simply
put
the
view,
but
you
forgot
to
put
any
metric
reader.
H
Agreed
if
we
step
back
to
use
cases,
though,
like
I
said
for
me,
the
the
biggest
use
case
that
I
want
to
see
in
the
spec
initially,
with
both
force,
flush
and
or
shutdown
is
just
the
ability
to
handle
the
lambda
type
environments
or
the
serverless
type
environments,
where
compute
or
batch
processing
right.
Where
compute
does
not.
Last
long
enough
and
you're
almost
relegated
to
using
push-based
exporters,
and
you
need
to
make
sure
that
when
the
thing
is
dying,
you
get
your
telemetry
out
as
quickly
as.
F: I have a scenario from Microsoft: when you do some dangerous operation, you're touching a piece of code that you don't have ultimate control over, and, you know, something bad might happen that blows your process away. You might force flush to leave some trace before you enter that dangerous zone.
F
Yeah
so
josh
to
answer
your
question,
I
I
think
most
of
the
user
wouldn't
even
know
how
to
handle
force
flash.
That's
my
gut
feeling,
because
if
you're
trying
to
access
and
the
lambda
function
is
shutting
down,
you
tried
the
best
attempt
to
force
flash
and
it
failed.
What
else
do
you
do
you
write
a
log?
F
It
wouldn't
work,
it's
no
exception,
you're
dying,
so
why
the
hell
don't
do
that
yeah
or,
if
you're,
losing
the
cpu.
You
probably
won't
even
be
able
to
write
the
log
until
you
wake
back
up
too
yeah.
I
think
the
return
value
normally
is
just
for
people
who
do
the
debugging.
They
need
to
understand
what
happened,
but
they
wouldn't
have
the
code
logic
to
handle
it's
similar.
Like
the
logging
api.
If
logging
api
field
it
gives
you
a
boolean
value,
saying
it's
false.
F
H
Yeah,
I
guess
so
then
the
question
is:
do
we
just
not
have
a
return
value
for
force
flush.
H: The only thing to address, then, is what a metric reader force flush does when it can't flush out its metrics, like, what's that return value. I think, if we're crystal clear on that, and we're comfortable with it not returning failures that people see in logs when we know it's going to happen, because we know that that reader could never support force flush.
H
Yeah
yeah
effectively,
my
my
assumption
here
is:
we
should
have
platform
based
configuration
to
call
force
flush,
say
in
a
lambda
environment
right
there
there'll
be
something
that
detects
it's
in
a
lambda
environment
or
some
kind
of
serverless
environment
and
there'll
be
some
registered.
You
know
event
that
calls
force
flush.
H: I want that component not to just die, even if it gets turned on, you know, via config. If we think about the principle that having instrumentation in your code that I haven't configured explicitly myself shouldn't cause failures, right, then even if I'm using some OpenTelemetry package that tries to add hooks for Lambda, it shouldn't break metrics. So that's kind of my underlying principle behind why I'm suggesting this.
F
Okay,
so
so
coming
back,
let
me
summarize
so
it
seems
based
on,
like
we
spent
30
minutes
on
this.
I
think
we're
saying
we
don't
want
to
do
the
sdk
experimental
release.
We
need
to
solve
this
issue
and
I'm
fine
with
that
and
we're
saying
we
want
to
keep
the
force
flash.
F: Same, actually. So we're okay either with saying, for a pull exporter, force flush should just return success, or with no return value for force flush at this moment. My struggle is, if there's no return value, later, if we want to add one, it'll be a breaking change; but if we put a return value, people can return null, or something that basically is nothing for their language, and literally we want to return something they can use.
H
You
should
maybe
maybe
what
I
what
I
was
trying
to
suggest
initially
was
force.
Flush
should
say
it
attempts
to
export
metrics
and
it
returns
success
when
it's
done
with
what
it
can
do.
H
Therefore,
something
like
prometheus
would
immediately
return
success
we
actually
in
in
java
the
the
return
values
can
be
used
in
an
asynchronous
manner
to
denote
when
a
another
thread
is
done
with
work
so
like
denoting
that
you're
done
is
important,
sometimes
for
continuing
or
joining
threads
together.
So.
H
That's
a
better
way
of
phrasing.
What
I
was
trying
to
say
would
you
repeat.
J: I think, though... I really like Josh's, Josh MacDonald's, idea of saying that there is no pull exporter; that the only things are push exporters, and if you want to implement something like Prometheus's pull-style collection of metrics, you have a push exporter that pushes into a cache that can be read by an HTTP server that provides the Prometheus exposition, and in that case, force flush pushes into that cache.
F
Yeah
that
can
be
an
implementation
detail.
I
guess
so
the
the
highlighted
the
parts
I
I
are,
what
I
think
we
agreed
on
so
I'll
change,
the
pr
based
on
that.
F: Okay, thank you so much. On to the next item. I think, with that, we have several issues that are open for the feature freeze, SDK part, so I've listed them here, and I believe I have done a reasonable job: I went through all the issues on the spec repo that are tagged with metrics and made my triage. So if anything you want covered is not captured in the feature freeze or the stable release, please either tag me on the issue or help to create a new issue.
F
If
it's
not
covered
and
anything,
you
know
I'll
I'll,
try
to
use
my
judgment
first
and
if
there's
something,
I'm
not
sure
I'll
bring
back
to
the
next
meeting
for
discussion.
So
probably
starting
from
next
week.
We
should
get
prepared
for
the
ist
feature
freeze
and
now
coming
to
the
api
stable
release.
So
we
talked
about
this
during
the
tuesday
meeting
this
week
and
the
three
languages.
I
I
think,
currently
we're
trying
to
do
this
so
for
downlight
and
java.
I
I
think
so
far
there
are
very
good
progress.
Diago.
F: I think he works on Python; it looks like he needs some help. So, Josh, Serge talked with me, and it seems like Google can put a developer, Aaron, on it, I'm sure part-time or something, but he could work with you on that, and I can also help to review the PRs. Now, I probably need your help to see if we can get some timeline sorted out with the Python SIG, and if you need my help, I can join the Python SIG.
E
Yeah,
in
fact,
we
just
had
the
figure
and
presented
a
plan
to
collect
all
the
requirements
from
the
spec
in
both
api
and
sdk,
and
I
discussed
that
with
aaron
already
so
pretty
much.
We
have
this
list
of
tests
that
we
want
implemented
for
being
able
to
call
this
complete
and
I'll,
be
sorting
them
out
and
starting
a
framework
of
tests
and
I'll
be
in
contact
with
aaron
to
giving
the
parts
of
the
api
and
or
inspect
that
can
be
implemented
in
separate
apps.
F
Yeah
yeah
and
with
those
three
languages,
I
I
think
we
kind
of
hit
the
minimum
bar.
In
addition,
I
believe,
like
the
two
josh
is
here
they
mentioned
golan
and
I
think
golan
is
special
because
it
is
also
used
by
the
collector.
So
I
wonder
like
how
people
think
about
that
like?
What's
the
what's
the
timeline
and
do
you
think
you
would
have
some
energy
on
that
and
if
you
don't
have
the
golan
matrix
part,
how
would
that
affect
the
collector.
G
Yeah,
the
the
dependency
between
the
collector
makes
it
kind
of
more
urgent
to
get
those
releases
out
and
we
are
making
some
progress
on
the
go.
We
got
the
instrument
names
renamed
for
the
upcoming
release
or
outcome
outgoing
release.
It
just
went
out
what
we
don't
have.
There
are
things
like
exemplars
for
view
configuration
essentially-
and
I
think
chances
of
python
being
done.
First,
are
good.
H: Good question on that, though. From what I know, the Collector is using that pdata library to divorce the internal representation of telemetry, and so it's not like a lot of the receivers that do metrics are directly depending on otel-go right now; they're using the pdata thing in the middle.
J: I just want to mention the distinction between receivers using pdata to ingest metrics and get them down the pipeline, and the Collector's instrumentation of itself, which it is currently doing with OpenCensus metrics. And that's where their dependency on a stable otel-go metrics API and SDK is going to come in for the Collector: not so much for the processing pipeline, but for instrumentation of the Collector itself.
H
Right
and
I'm
suggesting,
I
think
the
processing
pipeline
is
the
most
important
part.
So
if
we're
releasing
api
stable
releases
and
we're
targeting
a
version
of
otlp
that
the
collector
does
not
support
in
its
processing
pipeline,
I
think
we'd
have
to
change
our
plan.
But
if
we're
targeting
the
the
same
version
of
otlp
that
the
collector
supports
now,
which
is
0.9,
I
think
I
think
everything's
gravy.
J
Yeah, I think we're good there as well, and I think we plan to update the Collector to 0.10 as soon as possible, because there are features in there that we want to make use of.
H
Yeah, so these are somewhat correlated, but maybe it's better to talk about the second thing before the first thing. Basically, sometime in May of last year, proto3 added experimental support for what's known as field presence.
H
We actually designed some of the features in the data model around the idea that we had field presence enabled, and that's on me, I apologize: I'm used to proto2, not proto3. In proto3, you have to explicitly declare in the proto file if you want to be able to see field presence in your protos. So what this means is, on histograms in 0.9, we documented that sum will not be filled out.
H
However, we have no way to detect whether or not sum was filled out or if it's actually the value of zero; there's practically no way to do that in the protocol.
H
What that implies is you cannot use histogram aggregations on up-down counters inside of the API and SDK on OTLP 0.9, because there's no way downstream for us to ignore the sum. We had a couple of proposals in OpenTelemetry for how to fix it that got bogged down with "why are we doing it this way, why aren't we doing it this other way", et cetera. What I want us to entertain here is that proto3 actually does support optionality.
H
H
So whether or not we consider the previous behavior just an outright bug bears on whether or not we want to use optional. But effectively, for folks who are curious about wire protocols and protocol buffers, there's a great definition of what field presence means; there's more documentation on it.
H
I'd like to propose using optional going forward in our protocol buffer definition of OTLP, leveraging field presence to allow us to add fields in scenarios where we need them and ignore them when they don't exist. And I'd like to leverage that specifically for adding min and max to histogram.
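A sketch of what the proposal might look like in the .proto file. Field names and numbers here are purely illustrative, not the actual OTLP schema:

```proto
// Hypothetical sketch only -- names and field numbers are illustrative.
// Marking a proto3 scalar `optional` turns on explicit field presence in
// the generated code, so a consumer can distinguish "min was never
// recorded" from "min == 0".
syntax = "proto3";

message HistogramDataPoint {
  fixed64 count = 1;
  optional double sum = 2;  // absent when a sum is not meaningful
  optional double min = 3;  // proposed addition
  optional double max = 4;  // proposed addition
}
```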
G
G
So it's hard for me to think about, and I know I've gotten feedback from people in OTel, like Tigran, saying optional is a thing of the past, that you don't ever write that in a proto file, so we're not used to this. This newfangled thing you're talking about, a field option that explicitly opts in to field presence, that's new to me. I haven't seen it yet.
H
Yeah, so that's how you explicitly make sure field presence is available and that you don't fall back to defaults. Okay, so I guess what I'm asking for here is: does anyone have a huge dissenting opinion in the metrics SIG? I agree with you, we have to push this across all of OTel.
F
So that's actual work, but it shouldn't be impossible. It also means that if there are customers who are using proto2 and they depend on some OpenTelemetry components with the OTLP exporter using proto3, they might run into side-by-side issues. Either you have to do something, or it helps them: you cannot run both, you have to upgrade to proto3.
H
We're already on proto3, and I want to point out what's really frustrating as hell here: what is on the wire doesn't change, the bytes do not change. If I send a message from A to B and there's this field presence flag, the message itself doesn't change.
H
What changes is the generated code in the languages. So this is literally only a generated-code signal: whether or not you want to explicitly know if a particular field ID was present in the bytes. proto3 removed your ability to do that; proto2 preserved it all the time, which actually led to issues with adopting proto3. proto3 basically was unable to be adopted until field presence was allowed via this optional mechanism, because now you can have field presence in your generated code.
H
You can have field presence in your generated code when you want it and when you need it; it's an optional thing. So you can get full compatibility at a client library level with the proto2 DLL hell. I guess what you're asking is, if someone's using proto2, can they use anything generated with proto3?
G
There's a topic of debate in this histogram, the exponential histogram PR of mine, which usually we don't talk about here, but it also comes down to a representational question that can be addressed in client libraries, not at the wire level. There's one field that we're debating at the very end of this PR, which is: is the offset 32 bits or is it 64 bits? The wire encoding is the same, so we don't really need to answer this question.
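A minimal sketch of why the declared width doesn't matter on the wire: protobuf encodes these integer scalars as base-128 varints, where the byte sequence depends only on the value, not on whether the .proto declares int32 or int64 (signed `sint*` fields add zigzag on top, which is omitted here):

```python
# Sketch of protobuf base-128 varint encoding for non-negative integers.
# The same value produces the same bytes whether the field is declared
# int32 or int64 -- the declaration only changes the generated code.
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F      # low 7 bits of the value
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set the continuation bit
        else:
            out.append(byte)         # final byte, no continuation
            return bytes(out)

assert encode_varint(1) == b"\x01"
assert encode_varint(300) == b"\xac\x02"  # classic protobuf docs example
```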
F
Yep, okay, thanks for clarifying. I'm not seeing any problem on my side. Okay.
G
H
H
Yeah, I think so. Jack Berg raised this in the Java SIG, and I think last Tuesday as well: when we deprecated summary, we did not provide a full replacement. Histogram is not a full replacement because it doesn't preserve min and max, and the min/max proposal got dropped, and partially this falls afoul of that. So I'd like to revive that min/max proposal with explicit field presence defined on it. So the question now is, Josh.
H
G
I'd be happy for you to take that over, but I want to ask a first question about the summary. Like you said, it was deprecated, but I always thought it was in more of a special category in the OTLP protocol, being this thing that represents exactly Prometheus, the way Prometheus meant it to be. And yes, it has min and max, but it also has quantiles, so I wouldn't have considered it deprecated from that perspective.
K
H
Yeah, so you can send it to backends with summary, as opposed to generating summaries. Now, remember that, like Josh was saying, the point of summary is actually quantiles, not min/max, right? So if all you need is min/max, let's get that on histogram and I think everything's gravy, because honestly I think histogram would benefit from having min/max as well, especially in the explicit-bucket case.
G
K
No, I'm not interested in summaries, but I just wanted to make sure that I was interpreting that correctly. It just didn't logically add up to me that we supported it but we're never going to generate it, but Josh clarified that it's for receiving.
G
My sort of imaginary wish list for an alternate universe is that we have a data type which is like the histogram but allows floating-point bucket counts instead of integers, and then you can represent summaries as histograms in the explicit-boundary form. You just put a boundary where your percentiles lie, and the percentiles are therefore explicit, but it requires that you have floating-point counts, and that's tricky.
G
G
K
E
K
There are other ones: Ruby and Python have no support for quantiles for summaries. So it's a very fringe edge-case thing. Oh.
G
Yeah, this request came up over and over again, and I think you can go through the archives of various spec issues and find it again and again. The PR that I had standing was meant to focus the attention in a single place, rather than have it come up again and again.
F
So, Josh, do you think it would be helpful to add some note in the data model spec? It seems like a similar question; I've heard it at least three times. Since I'm already summarizing what we discussed, I can send the PR, but let me know if you think it's just overkill for the spec.
H
You know, I agree it should be in the data model spec. The way I treated it probably isn't well written, so I'm happy to go fix that. If you have a suggestion for how to reword it, feel free; otherwise just open a bug, assign it to me, and I'll reword it and respecify that.
H
K
K
Hey Riley, should we add this to the feature-freeze project, so we don't lose track of this?
F
K
I was referring to adding the min and max.
F
G
Thank you, Riley. And just to close it out, then, would you say our actions are to review your two PRs and get them in?
F
Yeah, I think many folks already did that. I haven't seen any approval on this PR, but it should be a very simple job. So please, Sarah.