From YouTube: 2021-01-14 meeting
B
Oh, I just do Ctrl+Plus, or you can go to View and then there's a... no, sorry, not under View. See where it shows 100%? There, yeah.
A
I think we can start — we've probably let people join long enough; it's already five minutes past the scheduled time. I don't see any agenda in the list other than what I have mentioned, so probably we can quickly go through that. To start with the point I mentioned: Max is not here, but I just wanted to discuss the approver and maintainer list for the contrib repo.
A
So we do have a contrib repo which is right now totally empty — we don't have anything in it — and going forward we do plan to add exporters, instrumentations, propagators and other stuff, probably coming from different people who are not part of the main repo's contributors. So probably we need to discuss and understand: should it be the same list of approvers and maintainers, or should it be different? As per my understanding...
A
There is no OpenTelemetry guideline saying that we should have the same list of approvers and maintainers for the contrib repo as we have for the main repo. So I don't think that should be a hard line we need to follow, but I just want to discuss and understand what you all feel about that.
A
Yeah, and apart from that, we do have the elastic exporter in the main repo, which was basically developed by its code owners, the AWS interns, and they have gone. So I'm just trying to understand the scenario where somebody comes to us saying there is some issue with the elastic exporter, it is not working — who is responsible for that?
B
If I can — so, I asked this question about some of the other things, and the expectation is: once it hits even contrib, we own it.
B
Like, if it hits contrib, then the owners of contrib are saying: yes, we will maintain this going forward. So ETW and elastic, effectively, since they're in the repo — we as approvers and you as the maintainer have said: yes, I will own this going forward. Given what I saw out of the elastic one, it doesn't...
B
My recommendation here personally, having been on a lot of different open source projects, is: if someone comes and wants to contribute something, and they are a maintainer or an approver or somehow very active in the project, then you're assuming they're going to continue to maintain it as it comes in. It's the same problem we'd have if, say, one of us were to leave and any code we're working on now needed to be owned and maintained by someone else.
B
If
it's
easy
for
that
to
get
passed
on,
then
it
gets
passed
on.
Otherwise
you
need
to
look
hard
and
figure
out
if
you
need
to
remove
that
code
right.
So
I
my
recommendation
is:
if
people
come
energetic
and
are
very
active
in
the
community,
if
they're
willing
to
become
an
approver
and
take
on
the
responsibility
for
contrib,
let
them
put
it
into
control
kind
of
a
thing
that
would
be
my
recommendation.
A
They may not be staying around for a long time, though, so we'd be adding them as an approver for that, and we don't really expect them to stay: once they have done their work they may move away, they may not want to even own it, or they may want to continue owning it. That's something I was just thinking about. I was actually looking into the collector contrib.
A
So
they
they
basically
use
the
code
owners
approach,
it's
something
that
they
have
their
own
set
of
approvers.
Probably
if
I
see
their
collector
contribute,
they
have
their
own
set
of
approvers
and
maintainers,
but
for
the
individual
exporters,
propagators
or
instrumentations
they
have,
they
have
specified
those
owners.
C
Yeah, I have a related question here for the contrib repo. I see that for the main repo there are currently standards being worked out for stability and versioning guidelines. Will those also apply to the contrib repository, or are there different, more relaxed requirements for contrib?
B
They said it does — okay. I asked that question, and again I was personally shocked by the answers: (a) contrib is considered to be under the same stability guidelines, because it has the OpenTelemetry name, and (b) maintainers are expected to own any of those pieces of code that come down. Now, I don't know if you've heard differently, but when I asked that question in the spec meeting, that's exactly what I got, and I was kind of shocked.
A
Yeah, the only thing I see is that adding lots of exporters to the main repo is going to increase the size of the repo, so if somebody wants to actually use it, they have to fetch that much of the repo. That's the concern I see. But yeah, versioning — I'm not sure. For contrib, if there are tons of exporters in there, they should be following their own versioning; we cannot really have a blanket 1.0.
A
Contract
means
that
all
the
exporters
are
1.
level.
They
would
be
at
different
maturity
level,
so
I
mean
I
I'll
be
a
bit
probably
having
I
have
to
have
to
give
a
thought.
I
mean
how
the
visioning
should
work
for
contrib
yeah,
but
but
we
are
probably
something
something
probably
we
need
to
discuss
at
the
specs
level
in
that
case
or
how
the
other
other
sick
teams
are
doing
it.
In
that
case,
we
are
too
early
here
right
now.
In
the
contrary,.
A
Yeah-
and
apart
from
that
that
I
just
wanted
to
talk
about
the
pr
merge
guideline,
I
mean
the
current
guideline
says
that
I
mean
if
dpr
is
raised
by
any
of
one
of
the
maintainers
or
the
approvers,
it
should
be
approved
by
at
least
one
approval
of
maintainers
from
a
different
company,
and
if
it
is
something
done
from
outside
somebody
from
outs
coming
from
or
from
external
I
mean
it
should
be
at
least
approved
from
at
least
two
maintainers
approved
from
a
different
company.
I
mean
right
now.
A
That's too much of an ask for you right now. I see that basically it comes down to you having to review each and every PR, and I just wanted to see if we should make it a bit more flexible for the time being. Once we get more approvers or maintainers, I think we can always switch back to the original guideline of having at least one approval from a different company.
B
Yeah, well, the original guideline was having two approvers from different companies — that's what the other SIGs do — and that's still the same as me having to review everything, unless Johan pretends he's not at Microsoft yet. Maybe we can, yeah — I think...
B
It's
more
public
perception
of
this
project
and
who
owns
it
is
is
more
the
concern
where
I
wanted
to
have
at
least
if
we
only
have
one
approver,
it's
from
a
different
company,
specifically
like
if
I
contribute
things
as
well,
I,
how
does
everyone
feel
about
that?
Are
we?
Are
we
worried
about
the
public
perception
of
open,
telemetry,
sleepless,
plus.
A
If one person has to review all the PRs, we definitely... So the sense I got was that they're totally fine with removing that restriction for now, and we can always circle back: once we have enough approvers, we can definitely make it a norm to have multiple approvals from different companies. But for the time being, if we want to be truly agile, it's totally a best practice to remove that.
A
No,
no,
no,
it's
sorry.
I
mean
if,
if,
if
it's
going
to
the
way
that
you
are
you're
slow,
it's
not
at
all,
I
mean
I
mean,
I
see
your
reviews
coming,
I
mean
on
time
it's
just
that
it's
too
much
over
there.
Lots
of
we
are
getting
ready
to
look
each
and
every
pr
is
not.
I
mean,
for
one
person
is
definitely
not
feasible.
So
it's
just
that's
the
concern.
I
see.
B
Yeah
I've
been
I've
been
kind
of
doing
it.
I
I
haven't
been
contributing
any
code
because
I've
mostly
been
spending
my
free
time
reviewing
prs
just
just
to
make
sure
that
you
have
someone
from
a
different
company
to
again
help
with
that
perception
thing.
I'm
not
like
I'm
not
worried
at
all
about
like
that.
As
a
thing,
I'm
more
worried
about
public
perception,
if
someone
were
to
take
a
look
and
be
like.
Oh,
this
is
like
a
microsoft
only
thing
that
could
be
weird.
B
So
that's
the
only
thing
I
wanted
to
avoid,
but
yeah
it'd
be
nice.
If
I
didn't
have
like
feel
the
need
to
review
every
single
thing
coming
through.
A
Yeah,
let
me
see
how
we
can
put
it.
I
mean
there
either.
We
can
remove
that
guideline
of
having
at
least
I
mean
approver
from
different
company,
or
we
can
keep
it
and
then
then
give
maintainer
a
flexibility
that
in
case,
you
feel
that
it's
it's
the
pr
is
good
enough
for
approval,
a
restricted
flexibility
for
the
time
being
again.
For
the
time
being,
we
can.
B
Yeah
that
I
that
so
maybe
the
way
I
would
phrase
it
is
it
it
should
have
like
go
back
to
two
reviewers
with
one
from
a
different
company.
However,
the
maintainer
can
decide
that
the
pr
is
simple
enough,
that
having
two
reviewers
from
the
same
company
is
okay
now
does?
That,
put?
Is
that
as
bad?
We
don't
have
a
lot
of
approvers
either
right.
C
Yeah, and regarding the perception thing, I think we are in kind of a tight spot there. I agree it might not be the most favorable perception, but on the other hand, what other options do we have — just slow down development? That's...
C
One thing I would say: I'm actually worried about this a bit more than about the perception — with you here as the bottleneck who has to review every PR that goes through, I'm actually worried that you get sick of it at some point and say: oh, that's too much for me, I don't care anymore. That's what I'm more worried about. So I'm fine with that.
A
So we did advertise in yesterday's maintainers meeting that we do need more contributions at the level of people who can move up to approver. I think it was highlighted, and Reiley told me yesterday that he did get one contact from Lightstep — I think it's the same person, Ryan; Johan would probably know him — and they are trying to get him to rejoin the C++ approver list.
A
So
that's
that
that's
something
I
heard
last
from
raleigh,
which
does
not
get
finalized.
I
think,
but.
B
Yeah, I'll have to see if I can drum up someone else from Google too. The C++ bench is small — there's not a lot of C++ API need, but there's enough that there's one of us. I'll see if I can get a second, maybe two, because that could help as well.
C
You
know
yeah
when
I
remember
if
we
could
ask
alolita
from
aws,
because
I
mean
it
seems
they
have
some
interest
here.
I
mean
they
contributed
logging
stuff,
they
contributed
the
metric
stuff,
so
it
seems
there
is
at
least
some
interest
there
in
like
the
c
plus
plus
part,
and
I
wonder
if
they
could
just
made
it.
As
true
said,
it
doesn't
even
need
to
be
regular
code
contributions.
Just
like
a
approver
from
aws
would
be
helpful.
A
Yes,
so
I
think
again,
I
did
hear
from
raleigh
that
he
did
speak
with
alolita
on
that
she
did
say
that
she
is
going
to
try
if
they
can
get
some
c
flip
plus
developers,
but
I
think
after
that
it
has
been
a
while,
and
he
didn't
hear
anything
from
her
after
that.
So
not
much
hoping
from
that
perspective
I
mean.
A
It's
yeah,
okay,
yeah,
and
I
think
that's
all
from
my
side
I
mean
I
just
wanted
to
talk
about
our
trace
api
sdk
project
burn
down.
Currently
I
mean
it's
something
which
your
hands
you
have
created.
I
just
wanted
to
have
a
look
on
this
is,
is
it
complete
or
do
we
need
to
add
or
remove
something
from
this
list?
C
I think it's complete for basically making the traces API functional. It's not complete in the sense that once it's done we are GA-ready for traces. That's how I would put it.
C
Yeah, regarding feature completeness, we are doing... I mean, you've got the resources work going; that does not really belong to traces per se, but I think it's important for it to be there when we announce the traces API and SDK. But I think it is there.
B
Do
we
have
is
baggage
considered
part
of
the
the
first
spec.
B
I would be happy to contribute that as well for C++. Let me actually put in a link to something the Python folks have — this was contributed from Amazon and we've been looking at it as well. I don't know if you've seen it, but there's a GitHub action that will record your benchmarks on every PR and give you nice pretty graphs.
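The GitHub action described here matches the community's github-action-benchmark project (now published under the benchmark-action org), which the Python SIG adopted. A minimal workflow sketch — the repo layout, script name, and thresholds below are assumptions, not this project's actual CI:

```yaml
# Hypothetical workflow: run Google Benchmark binaries on each push and
# let github-action-benchmark chart the results and flag regressions.
name: benchmarks
on: [push]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and run benchmarks
        # Assumed build step; output must be Google Benchmark JSON.
        run: ./ci/run_benchmarks.sh --benchmark_format=json > benchmark_result.json
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'googlecpp'                 # Google Benchmark (C++) format
          output-file-path: benchmark_result.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true                   # publish charts to gh-pages
          comment-on-alert: true            # comment on regressions
          alert-threshold: '150%'
```

The jumpiness mentioned next is exactly what `alert-threshold` has to be tuned around, since GitHub-hosted runners don't guarantee a consistent CPU.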
B
That, I think, is mostly bling — hype-worthy; I don't want to distract us too much, I just want to show you a pretty graph of performance. It's jumpy because it runs on GitHub Actions and you don't know what CPU you get — that's fixable — but in any case it's kind of interesting: it gives you a notion of whether pull requests are trending majorly up or majorly down. The more important question is: there is a specification around performance.
B
There's
a
specification
around
what
that
benchmark
needs
to
run.
Do
we
have
those
benchmarks?
I
know
we
have
some
benchmarks
that
I've
looked
at.
I
added
the
spin
lock
one
just
to
make
sure
my
spin,
lock
imitation
wasn't
trash
and
then
I've
been
working
on
the
java
ones.
If
we
don't
have
them
in
c
plus
that's
an
area
I
could
actually
contribute
to
instead
of
just
doing
code
reviews,
so
I
just
want
to
check
and
make
sure
that
no
one
like
a
isn't
required
in
b,
you
know,
is
anyone
else
looking
at
it.
C
I
definitely
think
we
need
some
more
benchmarks.
I
mean
basically
now
the
benchmarks
we
have
are
mostly
micro
benchmarks
and
definitely
we,
I
think,
need
some
higher
level
benchmarks,
because
I
mean,
I
think
it's
special,
especially
c,
plus
plus
users
were
very
sensitive
there.
We
just
need
some
benchmark
showing
yeah
when
I,
when
I
just
use
the
api
without
an
sdk.
What
kind
of
overhead
do
I
see
and
then,
when
I
plug
plug
the
sdk
in
what
overhead
do
I
see
then
compare
the
application
bar?
B
Cool, so there's a specification of what the benchmarks are meant to measure, and it's all stuff that I can implement: it's all OTLP exporter benchmarks — throughput, how many messages per second, and then memory consumption. So I can work on that. It'll take like a month or two — unfortunately I'm going to be a little slow — but I can start on it. If someone else can finish it faster, that's fine, feel free to take it; otherwise I'll jump on it, because I'm doing the same thing for Java.
B
The thing that we need to figure out — and I think this is going to be a discussion, possibly in the spec SIG or in the technical committee (not that I'm on it, but we might try to push it up there) — is: can we get all the companies to contribute a standard set of resources for benchmarking, so we have a consistent CPU to benchmark on and the results are less flaky? Then commenting on the pull request is useful; right now you saw how it kind of jumps up and down a bunch.
D
I see. What I'd say is: the benchmark could be affected by the underlying VM or container, but if we run the benchmark for both the baseline and the PR, I think we can get some data out to see whether this PR improved or decreased performance or not — no need to get an absolute number.
B
Yeah, I think it's a solvable problem. I just think, from what we saw with the Python rollout, we shouldn't turn it on right away; we should watch and see how flaky it is, and then fix issues with the stability of the performance measurement in GitHub Actions before we turn it on. But the reason I wanted to mention the GitHub action: I think it's amazing — having it commented automatically would help so much in reviewing code, and it would eliminate a lot of our discussions on "is this performant?".
A
And
we
don't
have
any
more
agenda
here.
We
can
quickly
go
through
the
pull
requests
which
we
have.
I
think
we
yeah
we
already
have
some
time
here
so
tom
you
raise
this.
One
right
fix
missing,
include
directory
for
conflict.
D
Yes,
this
is
for
importing
importing
the
installed
package
into
some
third
party
projects
yeah.
Currently,
we,
our
cmx
script,
is
supposed
to
install
to
your
machine
there's
a
install
target,
but
the
installation
in
the
install
script
that
they
include
photopass
is
missing.
So
actually
the
finder
package
can't
import
it
correctly.
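In CMake terms, the missing piece described here is typically the install-interface include path plus an install rule for the headers; without both, a `find_package()` consumer gets a target with no usable includes. The target and destination names below are illustrative, not necessarily the project's actual ones:

```cmake
# Record the include path on the target for both build and install trees.
target_include_directories(opentelemetry_api
  PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)   # path consumers see after install

# Ship the headers, the target, and the exported CMake package config.
install(DIRECTORY include/ DESTINATION include)
install(TARGETS opentelemetry_api EXPORT opentelemetry-cpp-targets)
install(EXPORT opentelemetry-cpp-targets
        DESTINATION lib/cmake/opentelemetry-cpp)
```

The `install(DIRECTORY ...)` line is the part a fix like this PR usually adds: the export machinery can be in place while the headers themselves never get copied.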
C
I did not have too much time last week, unfortunately, but it turned out there are some things to be fixed. Mostly I'm not exactly sure where to put it: it started out as an example, then it moved into the SDK tests, and I am now actually tempted to move it into the extensions tests, because it does not really run together with our normal unit tests, and all our tests in the SDK are normal unit tests. I mean, that's a bigger...
C
It's
not
really
uni
test,
it's
kind
of
more
like
a
system
test,
because
it
needs
this
service.
This
button
service
that
sends
and
receives
requests
from
this
application,
so
I'm
currently
rather
more
tempted
to
move
it
to
the
extensions
and
to
the
tests
there.
C
Also
for
the
reason
that
yeah
I
use
the
enloement
json
stuff
and
that
does
not
work
or
doesn't
compile
on
tcc
4.8
so
which
it
actually
has
in
common
with
other
extensions
like
the
c
pages,
also
don't
compile
for
or
that
or
platforms
that
you
support,
and
so
I
thought,
yeah.
The
sdk
tests
definitely
should
work
for
platforms,
but
I
think
the
extensions
test
can
be
more
flexible
properly,
so
it
probably
moved
over
to
the
extensions.
A
Okay,
make
sense
I
mean
from
my
side.
I
did
approve
it,
but
there
were
some
tests
which
were
failing,
yeah.
A
Yeah
I
mean
I
did
raise
that
and
I
think
there
were
some
comments
on
this,
and
probably
I
think
I
did
address
them
and
the
problem
here
with
gypkin
exporter
is:
it
did
bring
some
of
the
change.
Some
of
it
needs
some
of
the
previous
prs
to
be
merged.
So
as
of
now,
I
was
bringing
lots
of
those
pr
changes
also
as
part
of
this.
A
So
that's
the
reason
why
the
pr
looks
bit
bigger,
but
it's
really
not
that
a
big
pr.
So
I
mean,
if
you
want
to
review
it,
you
can
ignore
those
http
things
which
are
part
of
this
vr.
It
should
be
plain
chipkin
exporter,
which
needs
to
be
reviewed,
not
the
http
factory
and
http
sync
request
stuff,
so
I
did
address
some
of
the
pr.
I
mean
some
of
the
comments
here.
A
Thank you. "Upgrade gRPC dependency" — I think Bogdan has raised it, but there are some issues in this one, yeah.
B
Did
he
did
he
fix
it?
So
there
was
an
issue
with
grpc
on
windows,
specifically
where
you
couldn't
update
grpc,
because
the
grpc
bazel
dependency
thing
was
breaking
that
they
we
had
to
wait
for
grpc
to
fix,
but
as
far
as
I
understand
that
was
fixed
last
week
in
that
upgrade
grpc
because
he
had
to
upgrade
one
additional
release
past
the
one
he
had
tried
originally,
and
I
think
that
fixed
bazel
windows.
B
That
was
my
concern
with
it
originally.
I
think
I
approved
this
though
yeah.
A
"Sending sync request through HTTP client" — I did make these changes. The reason is that once we are doing an export, we need the status of the export — success or failure — right there; you cannot really make it asynchronous.
A
So
I
did
add
the
sync
synchronous,
http
request
request
in
that
I've
got
some
comments
from
max
I've
addressed
them,
probably
I'll
just
check
with
him.
Once
again.
This
is
something
specific
to
max.
I
think,
he's
working
on
a
ttw
exporter,
so
I
just
ignored
it
as
of
now
proposal
to
add
google
test
demo
document,
I'm
not
sure.
What
exactly
is
this?
A
These are just the interns. We can definitely check with them if we need something to be done in this PR — we can definitely ping them if they are interested; that's more my informal and unofficial way of pinging them. But I'm not sure it would go to the extent of making them approvers; I think they would still...
A
Most
of
these
work,
I
think
this
that
pages
and
this
stuff
was
done
by
the
interns.
This
is
quite
quite
quite
old.
I
think
more
than
three
four
months
back.
B
This
is,
but
we
had
like
an
absurd
number
of
interns
that
came
and
contributed
a
bunch
of
stuff.
So
if
it's
related
to
that,
then
they're
probably
not
coming
back
unless
unless
they
decide
that
that
was
the
best
internship
ever
and
come
back
again.
For
that.
B
But
you
know
it's
it's
not
it's
not
clear.
What's
going
to
happen
there
so
yeah
those,
I
think
if,
if,
if
they
weren't
good
enough
to
merge
and
the
interns
no
longer
active
on
it,
you
can
just
you
can
close
them.
A
Yeah
exactly
yeah
yeah.
We
do
have
lots
of
issues.
Probably
I
can
quickly
go
through
them.
I
think
this
is
already
fixed
by
tom
right.
So
that's.
B
Josh, I mean — is that actually a PR or just an issue? You can assign that, because...
B
Just so you know, the OpenTelemetry exporter for Google Cloud is specifically not in contrib, because we don't want to force you guys to maintain our stuff according to those rules. So we have it in — what is it — GoogleCloudPlatform/opentelemetry-operations-cpp is the name of the repo, and that's an example of how to use Bazel, but it was doing a whole bunch of things that we probably don't really want to make everyone do. Bogdan opened a bunch of issues to fix.
B
So
I
need
to
update
that
and
then
I'll
have
an
example
for
everybody,
but,
like
that's,
that's
what
I've
been
basing
most
of
the
basal
consumption
examples
on.
If
you
want
to
see
like
tentatively,
what
that
will
look
like
and
if
you're
curious,
where
the
stackdriver
exporter
is
it's
over,
there.
B
Oh,
the
in
other
news,
the
I'm
actually
blocked
on
the
grpc
issue.
So
originally
we
couldn't
do
a
bazel
example
for
how
to
consume,
because
grpc
broke
bazel
for
windows,
that's
fixed,
but
until
until
boctin's
pull
request
goes
through.
I
actually
can't
do
that.
So.
Okay,
anyway,.
A
Okay,
I'm
going
we're
going
to
ping
him
once
again.
If
you
can
really
push
it
and
yeah
it
seemed
this
was,
I
think,
request
timeout
optional
in
test
run
so
locally.
I
think
tom
we
can
make
it
optional.
A
D
A
A
Probably
I
just
need
to
see
how
it,
how
the
how
it
is
different
from
in
the
vm,
because
if
I'm
running
locally
it
takes
around
35
seconds
from
me
for
me
to
time
out,
but
just
just
just
I
wanted
to
say
like.
If,
if
you
want
to
test
all
the
tests
at
one
go,
there
should
be
a
flag
to
exclude
some
set
of
test
cases.
A
Okay, I got it. Let me see if I find a way of excluding a test through ctest; I'll just comment on this test.
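For what it's worth, ctest does have built-in name filtering, which is the usual way to exclude a slow test without touching the CMake files; the test-name patterns below are hypothetical, not the project's actual test names:

```shell
# Run everything except tests whose names match a regex (-E excludes):
ctest --output-on-failure -E "Timeout"

# Or run only a matching subset instead (-R selects by regex):
ctest -R "ZipkinExporter"
```

Labels (`set_tests_properties(... PROPERTIES LABELS slow)` plus `ctest -LE slow`) are another option if several tests should be skippable as a group.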
A
Yeah, sure — let me check if there's an option, thanks. Yeah, Bogdan has raised one: "extract batch span processor configuration into its own class". I didn't fully understand what exactly he wants to say; I pinged him to add more information, and even Max has pinged him to get more information. He's saying that basically for exporters, if we can have a separate options class, all the configuration specific to those exporters should be part of that. That's what I understood, but I just need more clarity from him.
A
There would be a different version? Yes, okay, let me check that; probably we should be good to close it — I think I did discuss it with him, I'll check afterwards. "Refactor" — these are things raised by Max. I'm just kind of going past this stuff, which was part of ETW, and Max has raised it.
A
Yeah, this is, I think — probably we can find a way of silencing the dropped-spans logs, improve the test code, the copyright removal... Dogfooding is something even I didn't get time to really look into; probably I'll spend some time to have a dogfooding example for both client and server, if we can use that.
A
We don't need people from outside the C++ project to really try it out — I was just thinking, me and Tom, we can just use some client and web server framework and try it out, just to see if it works. I was in fact thinking that if we could create an nginx module for OpenTelemetry C++ which captures all the requests and creates spans out of them, that would be good enough for the server side.
B
That ties into that other bug of reporting missing or dropped spans and dropped traces, right?
B
I think the dogfooding should inform that: if we dogfood and find that the errors are really bad, then we should prioritize that bug, right? Yeah.
A
Yeah, I think that's all. We have some more in the agenda — let me see — no, we don't have much more to discuss, so I think we should be good to finish today.