From YouTube: 2022-06-29 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Can you guys see it? Okay, first order of business here is a release. It's been, I think, a week or two since our last release, and we've had a few fixes and a bunch of metrics things implemented. So I think it's time for a quick release here.

A: And then I'll probably have to do the contrib release manually again, because we still have not solved our rate-limiting problems with the contrib releases.

A: Quick update on the metrics GA status: we are making progress, but we're actually still adding issues, so hopefully we can burn this down a little bit. There are a couple of PRs in here that are in need of reviews so that they can be merged. So if you don't have time to write code but you're looking for some way to help, that's a good way to go about it.
A: Happy to answer questions right now, if anyone has any questions about any of the issues or PRs in here; otherwise I'll just move on. The two PRs that I wanted to pull out are this export unit block, which is in need of reviews.

A: I don't think Buyer is on the call today. He hasn't updated it since the last round of reviews, so I'm hoping that's not abandoned. And then the in-memory metric exporter: here I do have a question of opinion.
A: So there's some question as to whether we should implement a force flush and a reset function. On the... let's see if I added a link here; I did not. On the span exporter, we do not have a force flush.

A: Right now we just have a force flush, because it is specified, but we don't have a reset. So I guess the question is: should we depend on this force flush to implicitly reset the in-memory metric exporter, or should we have a specific reset function?
A: Well, it's an in-memory metric exporter, so flush really doesn't do anything in this case. Yeah, and reset is not a part of the interface; it's only a part of this exporter. But yes, in my mind that's the difference: reset very explicitly resets everything, and I think force flush is probably just a no-op on this particular exporter.
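The distinction being discussed can be sketched roughly like this (illustrative names and shapes, not the actual opentelemetry-js class):

```typescript
// Rough sketch of the in-memory exporter under discussion. Names are
// illustrative, not the real opentelemetry-js declarations. Collected
// records just sit in an array, so reset() explicitly clears state,
// while forceFlush() has nothing to push and is a deliberate no-op.
class InMemoryMetricExporter {
  private records: unknown[] = [];

  // "Exporting" is simply appending to the in-memory buffer.
  export(batch: unknown[]): void {
    this.records.push(...batch);
  }

  // Test helpers read back what was collected.
  getFinishedMetrics(): unknown[] {
    return [...this.records];
  }

  // Explicitly drop everything collected so far.
  reset(): void {
    this.records = [];
  }

  // No-op: the data is already "exported" the moment it arrives.
  async forceFlush(): Promise<void> {}
}
```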
C: Yeah, but in general, that's the thing: I'd probably keep them in sync. One thing I would say with force flush is that we probably do want to have a flag, at least for browsers, to say whether you're doing it synchronously or asynchronously, so that during unload we can force flush and push it out immediately.

C: Well, you can use sendBeacon, or you can use fetch with keepalive. So it's not really synchronous, but you are saying: do it now.
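A hedged sketch of what's being described (a hypothetical helper, not the actual exporter code): neither mechanism is synchronous, but both hand the payload to the browser so delivery can outlive the page.

```typescript
// Hypothetical helper, not the real exporter implementation. During
// unload there is no time to wait for a response, so the payload is
// handed off fire-and-forget: navigator.sendBeacon queues delivery even
// after the page is gone, and fetch with keepalive behaves similarly.
function sendOnUnload(url: string, body: string): "beacon" | "fetch" {
  const nav = (globalThis as unknown as {
    navigator?: { sendBeacon?: (url: string, data: string) => boolean };
  }).navigator;
  if (nav?.sendBeacon && nav.sendBeacon(url, body)) {
    return "beacon";
  }
  // keepalive lets the request outlive the document that issued it.
  void fetch(url, { method: "POST", body, keepalive: true }).catch(() => {
    // Fire-and-forget: delivery failures during unload are unreportable.
  });
  return "fetch";
}
```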
A: Okay, I mean, I think force flush implicitly means "do it now, as quickly as possible." It's also impossible to do it synchronously in Node, so maybe there's a different word other than "synchronous" that you're thinking of. But in my mind, if someone's calling force flush, it's because they expect it to be forced to happen right now.

A: Okay, we're also missing force flush on our span exporter. It is specified; I'm not sure how we missed it. Since we already released this as stable, we can't add a new required interface, so I would say we should just add this function as an optional property on exporters and then call it if it exists.
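The approach described can be sketched as follows (hypothetical names, not the exact opentelemetry-js declarations): declaring the method optional keeps existing implementations type-compatible, and callers probe for it before calling.

```typescript
// Sketch of the "optional property" approach. Names are illustrative,
// not the exact opentelemetry-js interface. Because the stable interface
// cannot gain a new required method, forceFlush is optional, and the SDK
// only calls it when the exporter actually provides one.
interface SpanExporter {
  export(spans: unknown[]): void;
  forceFlush?(): Promise<void>; // optional: older exporters still conform
}

async function flushIfSupported(exporter: SpanExporter): Promise<"flushed" | "skipped"> {
  if (typeof exporter.forceFlush === "function") {
    await exporter.forceFlush();
    return "flushed";
  }
  return "skipped"; // exporter predates forceFlush; nothing to call
}
```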
D: Hey, one quick question on the in-memory metric exporter: there's some discussion about whether it should be a push exporter or a metric reader. I think I found the issue, and there's also some discussion in the PR and the spec, but basically, in Python we just did it as a metric reader, because there's no real need to buffer anything. You're basically saying, for testing purposes, "hey, get the metrics so I can verify them or compare them or whatever." I think they mentioned that in Java...

A: Okay, I mean, I see reasons to have both. Like you said, having the push exporter helps for testing the metric readers, but yeah.
A: I also can definitely see how, if you're testing the SDK itself, then having a metric reader go in between is probably an annoying intermediate step, where you could just have the exporter, call force flush or whatever, and it would force a read at whatever time you do that. Yeah, I mean, I think it makes sense to have both. Mark, since you opened this PR, if you're on the call: what do you think about that?
E: So, having the in-memory metric reader just as a standalone, without the exporter: I'm not sure if that is something that we would want to do. In my opinion, the in-memory metric exporter is also quite helpful for people who look to implement their own exporters. That is something that I have run into, where it was tremendously helpful implementing an exporter for opentelemetry.net.

E: I believe that was where you can basically just have your same metric reader exporting to the in-memory metric exporter and exporting to your actual exporter, the one that you're trying to test, and then you have the same data and you know what's going to be passed in there. So yeah, I think it definitely makes sense to have both as well.

E: I think I also ran into a person who was looking for the in-memory metric exporter on OpenTelemetry Python and couldn't find it, and I pointed them towards the metric reader. So that's also some point of confusion that we could introduce by just having a metric reader and not having that in-memory metric exporter.
A: Right, right. That's fine; I think in any case it doesn't block this PR, since this PR makes sense to have the metric exporter. I guess the only question is: should we rename it to "in-memory push exporter"?
D: Apparently it wasn't you. Yeah, no worries. Yeah, I mean, that makes sense to me. I think my point in bringing it up was mostly, you know, you're talking about reset and force flush, and this sort of avoids the need altogether. But yeah, I don't see any reason to block the push exporter either; I'm just pointing out that, from an ergonomic standpoint, it might be nicer for users to have the other variant, but...
A: Okay, this is the same item that's been around for a while: the OTEP for the events API is still open. So if you have time, go ahead and review that, if that's something that is important to you. I'm not going to talk about it now, because it's been beaten to death for the last few weeks. I did want to draw people's attention to the new bug triage workflow.

A: You may have already seen this or you may not, but there's a document here for how to handle bugs when they're reported. Despite being in the maintenance folder, this particular documentation applies to both maintainers and approvers, or anyone that has permission to change labels and things like that on issues.
A: I don't have anything specific to say about this; I think it's all fairly straightforward. But we have a fairly long backlog of bugs that has not been well groomed, and there are a lot of really old ones that may not apply anymore, and I'm trying to avoid that. So I went through and applied this to most of the bugs that have come up recently.

A: So you can see here, I've applied priority labels to a bunch of bugs. There are also P3 and P4; this link just happens to show only one and two. If you're looking for the definitions of what priorities one, two, three, and four all mean, they're here, but it essentially ranges from priority one, which are bugs that cause problems in end-user applications (crashes, memory leaks, things like that), all the way down to P4, which are things that are technically bugs but aren't really causing problems and are the lowest-priority bugs.
A: I also added a couple of other labels, like "spec inconsistency" for something that's not necessarily a bug (the code is working, but it is not compliant with the spec), and "spec feature" for something that's been specified but we have just not implemented yet. I think it's all relatively straightforward.

A: Most of you are probably aware it used to be just a markdown template, but GitHub now supports YAML issue templates, which let you have much more specific sections, which will hopefully lead to users creating more complete bug reports; or at least that's my hope. Again, I think this is fairly self-explanatory and straightforward.
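For reference, a minimal GitHub issue form of the kind being described looks roughly like this (the field names follow GitHub's issue-forms schema; the content itself is a made-up example, not the repo's actual template):

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml -- hypothetical example
name: Bug report
description: Report a bug in the SDK
labels: [bug, triage]
body:
  - type: textarea
    attributes:
      label: What happened?
      description: Steps to reproduce, expected and actual behavior.
    validations:
      required: true
  - type: input
    attributes:
      label: Package version
    validations:
      required: true
```

Unlike a markdown template, the required fields here cannot be left empty when a user files the issue.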
A: If nobody has any objections, I'd like to take 15 minutes or so here and triage some of our older bugs, because, like I said, we have a fairly long backlog of untriaged bugs. These are all the ones that I have not gotten to prioritizing yet. Some of them are really old, some of them are not so old, but I'd like to go through and prioritize a handful of them.

A: So if anyone has something for the agenda, we probably should do it before we do this, or there will still be some time afterwards. And then, when we're done, we can go through assigning the high-priority bugs to whoever has time to do them. Does that sound reasonable to everyone?
C: Yeah, I just added an item at the end, which is really just an update, and it's mostly self-explanatory, just stating where it's up to. Since the sync-up yesterday, I'm currently changing tack a little bit to try and keep history. So, okay, with 1.4 coming out: I am creating an automated script, so I guess I'll wait until we release 1.4 before I actually merge all the history, because I can't currently find a way to keep merging the history once I merge it and then move things around.
C: So that PR already has an automated merge script which, as long as you have a local api and js repo, effectively copies the files over and then munges them to add "sandbox-" on the front of the package name, to try and simplify that process.

C: On branches: effectively, I want to keep the main branch of the sandbox repo as close... well, effectively I want to have that auto-merge between js and api, and then we can create branches off that to effectively go and make bigger or smaller changes, so that we've got main to go back to, and we can merge anything down that we want into those branches.
A: So then each branch represents a prototype feature of some kind?

C: Yeah. Currently, in the readme in that PR, I've only got minification and release. I don't know if or when we'll do the release branch, but minification is really where I'm mainly focusing. This PR does turn around and change a few things: it uses Rush instead of Lerna, and it uses Rollup to generate bundles for everything.
A: In terms of switching from Lerna to Rush, beyond, I assume, some level of personal preference and just being familiar with that tooling: have you seen any real major advantages? Is it a lot faster, or does it solve some problem?

C: I think it can be faster, except that, because I'm generating bundles for everything, I think the net effect on this repo is slower, because it's doing a lot more work. The main reason I use Rush, apart from being really familiar with it, is that effectively it links everything up together and everything's all self-contained, which is why I'm prefixing every package name with "sandbox", so that it actually just uses everything locally out of the repo, and why the api is also merged into this instead of using the standard one.
C: Rano? Yeah, yeah, because he pinged me on Slack and he's got his PR out, but I haven't had a chance to look again.

A: Okay, yeah. I think he's also using pnpm there. I think the hope is to speed up the build process, because the actual installation of dependencies takes a longer time than it really should. Yeah.

C: And Rush is pretty good at that. So effectively, you have a config with all the packages it wants to build. You've still got to do an npm install from the root, but then it effectively just links everything up, so it doesn't download things more than once.
A: All right, well then, I guess let's go through and see how this goes. Let's see, it's 12:25 right now, so let's give this about 15 minutes and see how it goes. "Suspicious that metrics is not exporting valid JSON": I looked at this earlier, and the proto definition...

A: Let's see, he didn't link any... The proto definition uses this oneof on the metrics, and then, when we actually export, we are exporting, you know, histogram directly, rather than inside a field called "data". Mark, you wrote, or you've done the most work recently on, the metrics exporters.

A: Yeah, have you tested our JSON export with a collector to make sure that it's working correctly?
E: Yeah, so I have this exporter sample in the examples directory, which I used to test it: I tested it exporting to the collector and then had the collector export to Prometheus, and there it is working fine. But I can pick that up and look into that. Also, I have exported it to the collector and read the logs from the logging exporter, and they show up there.

E: So it might be that the collector does the same thing, but I don't have enough information yet to give a concrete answer. I think Aaron, you've got your hand up.
D: You don't need the data field. Basically, the "data" is just a marker which goes in the... it's using the generated code, but the oneof is essentially just... they all have field numbers, because they're all as if they were just directly in the message, and then the oneof is kind of like a validation.
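In other words, under the proto3 JSON mapping the oneof member that is set appears directly as a field, so a serialized metric looks roughly like this (made-up values; field names follow the OTLP metrics proto):

```json
{
  "name": "http.server.duration",
  "unit": "ms",
  "histogram": {
    "aggregationTemporality": 2,
    "dataPoints": [{ "count": "3", "sum": 14.5 }]
  }
}
```

There is no enclosing `data` field in the JSON; `histogram` itself is the oneof member that happens to be set.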
A: So that tells me that, you know, I think this is what the code generator did, and since we started using the new SDK, we've only used that code for transformations, so I think if it was working before, it should still be working now. The only thing that gave me pause was that I looked up the... let's see... proto3 JSON mapping, I think is what they call it, and oneof is not listed in here.

A: So that was what gave me a little bit of pause: it doesn't have any special handling for oneof, but it does have... yeah, so there is nothing in the documentation about that JSON. So I wasn't sure, but if you're confident of that, then do you think we should just close this as not a bug? Yeah.
D: I think so, and it would be great if there were some... I haven't looked at the exporter code, but do we have TypeScript definitions for the output?

A: For the output itself? I believe... no, let's see... yes.

A: Metrics types, I think, right: oh, metrics types, yeah. I'm sorry, source/metrics/types, and then it would be the export service request. Resource metrics contains scope metrics, which contains metrics, which contains... yeah. So then you just have histogram, sum, gauge; you just have one of each here. There's no data field at all, but I mean, that's...
D: You're saying that's correct? Yeah, that's right. And the same thing with the asInt and asDouble in the data points: there are a few oneofs, but yeah, that's right, correct to the best of my knowledge. I'm checking some test fixtures that we also have, and they're also like that. So I think this one is not a bug.
A: This person says it's working with these versions. I think this is just a version mismatch, so I will ask if...

G: Yeah, so this is someone from Honeycomb. I vaguely remember this coming up, and I know that we have our docs updated now, which have versions 0.27 for sdk-node and the exporter, and then 0.28 of auto-instrumentations. Is that... yeah? So basically, that comment that's there above yours, Daniel: I think that's what we ended up adding to our docs at the time as a working grouping of things, but I think we can try out what you mentioned there, those newer versions, to see if that works now.
A: Yeah, so these versions are correct; that should work, but this was before the most recent release. We have since released a few times, actually, because we've been iterating pretty quickly on metrics, but I believe the latest version of everything should work okay. As far as the Honeycomb docs go, I don't really know.

A: You know, I don't want to tell you what to do or anything, but putting specific versions in your documentation: don't they get out of date really quickly? Yeah.
G: We only added the versions because this issue had come up; someone had reported it to us. So we put in the versions because, at the time, that was the only way to get the auto-instrumentations to work. So if it works now with the later versions, we're going to pull the versions back out of our docs; we don't actually want to have them in there.
A: Then I guess we'll move on. Should we leave the... I mean, it was not really a bug, it's just a version mismatch, but it's unfortunate. I think I'm going to remove the bug label from this, but not close the issue.
A: Okay, this is another very similar issue. I remember seeing this one: "BatchSpanProcessor is not a valid SpanProcessor." This happens, again, because of a version mismatch, and that's what those type errors tend to look like, because, yeah, a private property changed, which was determined to be not a breaking change, but TypeScript, for its own reasons, checks against private properties when it's doing type checking.
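A small illustration of the TypeScript behavior in question (hypothetical classes, not the real ones): two identically shaped classes stop being assignment-compatible the moment they declare a private member, which is exactly how two installed copies of the same package collide, while an interface stays structurally compatible.

```typescript
// Two copies of the "same" class, as if installed from two versions of
// a package. Because each declares a private member, TypeScript compares
// them nominally: V1 is not assignable to V2 even though the shapes match.
class BatchSpanProcessorV1 {
  private buffer: unknown[] = [];
  onEnd(): void { this.buffer.length = 0; }
}
class BatchSpanProcessorV2 {
  private buffer: unknown[] = [];
  onEnd(): void { this.buffer.length = 0; }
}
// const p: BatchSpanProcessorV2 = new BatchSpanProcessorV1(); // compile error

// Accepting an interface instead keeps the check structural, so either
// version (or any other conforming object) is accepted.
interface SpanProcessor {
  onEnd(): void;
}
function register(processor: SpanProcessor): "registered" {
  processor.onEnd();
  return "registered";
}
```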
A: We should use interfaces everywhere instead of requiring an actual class; in an interface you have to use only interfaces and types. Florina created a PR against this specific issue, but then found that it's a deeper issue than he thought it was.

A: But I think this is something that we do need to fix. But again, it's a version mismatch; I think, since the most recent release, this should be fixed. Again, let's see.

A: This was actually probably around the same time, wasn't it? Yeah, end of... so this shouldn't happen anymore, since we now release our stable and experimental packages together. It just happened that, at the time, we had released our stable packages but had not yet released experimental packages that worked with the latest stable version.

A: We should still change the API interface so that we only require interfaces and types, so that we don't end up with mismatches.
A: Okay: "gRPC collector exporter not showing traces with webpack."

A: I don't think this is a bug, because it's kind of a configuration issue, but we don't have any documentation as to how to use these in a webpacked environment, mostly because the gRPC exporter and the protobuf exporter don't work in the browser anyway, so webpack has not really been viewed, I think, as an important target for them. But some people do webpack their backends.
A: I don't know. All right: do we have any webpack experts here that have different solutions? Other than that, I don't know. Nav, is this something you have experience with?

C: We don't have any of that in any of our stuff; we only package code.

A: Okay, I think this is not really a bug as much as a feature request, right?
C: It might be a case that needs another webpack plugin installed, because there are Rollup plugins to handle different types of objects, like your images, and convert them into JavaScript. I'm assuming the same would exist for webpack.

A: I mean, other people in the issue have posted their workarounds; there are ways to get it working. It just doesn't work out of the box, which I think I'm going to say is not a bug, because we never claimed that it would. Does that seem reasonable, if I remove the bug label here and change it to a feature request?
E: So I just checked the documentation for the exporter packages, and for gRPC it also says that it's an exporter for web and Node, and we should probably remove the "web" part of that from the documentation, because I would assume someone who reads that would just assume that it would also work with webpack. Even if they're just doing the backends: if it says web and Node, I guess they would assume that it would work with webpack as well.

E: Yeah, I can just make a quick PR to fix the documentation.
A: All right, we've been more than our 15 minutes, so I think we can move on, because I do want to get to this next thing if we can. Oh, whoops.

A: So, after triaging, bugs should be prioritized, and I just added a link here for high-priority bugs, which I defined as priority one and priority two. Here, priority-one bugs potentially cause problems in end-user systems, and priority-two bugs are bugs that potentially cause telemetry to be incorrect or not exported at all.
A: I'll assign it to you for now. If it turns out that you don't have time, please let me know, so that we can make sure somebody is working on it. The next one should be relatively easy: the gRPC status code resource attribute is expected to be an int, but it is actually a string. So I think this should be relatively easy to fix. Is there anybody that would like to...
H: The bug is... yeah.

A: Okay, there are two PRs already opened for high-priority bugs, so I listed them here. A high-priority bug with a PR implies a high-priority review, so if you have time, please look at these two pull requests. Ideally, I would like to get them merged for the release that we're trying to cut this week, but if they have to wait for the next one, it is what it is. And then, as usual, I have a list here of PRs that are waiting on reviews that do not fall into the above categories.
A: Yeah, so I actually think it's abandoned. You can see the last four updates are from me, so I kind of took this over a little bit. Do you still have open comments?

F: I think you asked him to add a test, yeah, yeah. And also, we fixed it in one place, but there are other places that are not fixed in the same way.

A: Okay, so in that case, since you're already looking into the gRPC status code, will you include this in your PR?
A: Yeah, so I think this one's not abandoned yet, but I agree: if we're trying to fix something in it and it doesn't correctly implement the spec, then, you know, it is a blocker in my mind.

F: If it's just failing to comply with the specification, I don't think it's really high priority, but I will read it.

A: I had P1 as bugs that cause problems in end-user applications, so crashes and stuff like that; that's the highest priority. And then I have P2, and when I made that link, I just included P1 and P2. So P2 would be bugs that, let's see, where were we... yeah, cause the telemetry to be incorrect.
F: The priorities are fine. I think P2s are not as urgent as P1s: P1s we should fix as soon as possible, and with P2s it's okay if they stay open for a few weeks; I don't see any problem.

A: Yeah, I don't think P2s need to be the highest priority; that's why we have P1. I only added those because, if you look at all the P2 and P1 bugs, there were two PRs, so I just added those two. It looks like there are actually three now; okay, good.

A: Okay, anyone else have anything?

A: All right, well, thank you everybody for your time, then. Oh, there's something in the chat here... oh, that's just someone saying they have to leave. Okay, thank you everybody for your time, and I will see you next week.