From YouTube: 2021-04-21 meeting
Okay, let's go ahead. Let's see, I see you have the first item in the notes: the update.
C
You want me to deal with that? Sure, yeah. So, yeah, it's been quite stalled for a couple of weeks, mostly because of other things on my side, but I do plan on...
C
I'm sorry. So, yeah, I've been blocked on a couple of fronts for the past couple of weeks, and I do plan on working on this this week. And I think you approved it already, Tigran. So the main approach...
G
It's good enough; we don't need perfection here. He's probably not on the call now. I will ping him one more time to get his opinion as well, and if he's also positive about that, we can move forward with that approach. Okay.
C
Yeah, there are a couple of problems with the PR. One is: there is a data race somewhere; I need to investigate that.
C
Yeah, I mean, if it's not moving at all, or if people don't have the bandwidth to work on those two other parts, then I can take them after I'm done with the first part. Yeah. So maybe let's...
G
Let's sync, me and you, and we can discuss what we want to do about that. I'm not worried about the details of the implementation on the receiver side; I think the approach is good. We can discuss the details, that should be fine. So maybe even before we go ahead and do the full implementation of the receiving side...
G
Maybe let's discuss how we want to deal with the auth on the outgoing requests and all that stuff, because that part is still a bit more unknown, so that would be good to do. Bogdan, great that you're here; we were just discussing the auth. Did you have a chance to have a look at Juraci's proposal on the auth that is no longer using reflection?
I
Unfortunately not, because there are a bunch of other things for the GA that we are having to deal with, but I promise to look at that today, probably.
C
No, I think that's pretty much it. I mean, I do hope to get some time to work on finishing the PR in the next couple of days; at the latest, I should have an update by this time next week.
H
Yeah, I just wanted to reiterate that there is a storage extension available now in the contrib collector. I certainly consider this alpha, but if this is something that any other components could make use of, feedback is certainly welcome sooner rather than later. The basic idea is: treat it as a key-value store. It will write some files to your local file system and can retrieve them after a collector restart.
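For reference, a minimal Go sketch of the key-value API described above; the exact interface and method names in the contrib extension may differ, so treat everything here as an assumption, not the extension's actual API:

```go
package storagesketch

import "context"

// Client is a stand-in for the alpha storage extension's key-value API
// (names assumed): values are persisted to the local file system and
// survive a collector restart.
type Client interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte) error
	Delete(ctx context.Context, key string) error
}

// saveCheckpoint shows the intended usage pattern: a component persists a
// small piece of state so it can pick up where it left off after a restart.
func saveCheckpoint(ctx context.Context, c Client, state []byte) error {
	return c.Set(ctx, "checkpoint", state)
}

// loadCheckpoint retrieves the state written before the restart.
func loadCheckpoint(ctx context.Context, c Client) ([]byte, error) {
	return c.Get(ctx, "checkpoint")
}
```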
G
Did you have a chance to have a look at the storage extension? I think you wanted to look at it from the perspective of the buffering as well, right?
B
Yeah, exactly. So I looked at that, and I think it looks great. I think it could be used for file buffering. Maybe there are other options as well, but since we have this already in the code base, why not use it? So right now I'm looking at the Prometheus write-ahead log implementation as well, to get an idea of how the buffering, essentially the persistent buffering, was implemented there.
G
Thank you, and thanks for the storage extension, that was great. Thank you for the implementation.
G
Okay, next topic: this is compliance.
J
Yeah, so we've been working on some compliance issues. The Prometheus project put together a test suite now; if you click on the link, there's a compliance test suite. We've been trying to file issues for the failing stuff and triage things based on what's failing. So there are a couple of pull requests in flight: one of them for the up metric support, and the other one...
J
...is for the missing job and instance labels, and there's one more related to the __name__ label coming up. So if you can give us reviews on these things, that would be super useful for the existing stuff, because there are a lot of interdependencies between the different changes that we're going to send, so it would help us speed things up.
J
You know the work, Tigran, you've seen it: I removed the queue because it was causing these out-of-order-samples issues. So, Prometheus expects remote-write samples to be chronologically ordered per time series, while in the collector's queue mechanism there is just a consumer that consumes from the firehose...
J
...you know, randomly. So we need to do some work here to provide some capability. As we discussed in the PR, one option would be doing what the Prometheus server is doing: implementing the queue the way the Prometheus server does in the remote-write exporter, and providing similar configuration settings, so people can carry over their fine-tuned behavior by just copy-pasting their configuration.
J
The other idea would be extending the queuing mechanism in the collector to take a function, maybe, so you can shard things arbitrarily. So we need to discuss this. I don't think this is a super big thing that we should be worried about, but I think it's going to come up, and maybe let's try to have this conversation after the GA. Right now, if you agree, we are just fanning out into five requests, sharded by time series.
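A minimal Go sketch of the sharding idea mentioned here: hash a series' label set to pick one of the fan-out requests, so every sample of a given series stays on the same shard and keeps its chronological order. This is illustrative only, not the exporter's actual code:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// shardFor maps a time series, identified by its label set, to one of n
// shards. Samples of the same series always land on the same shard, which
// preserves the per-series ordering Prometheus remote write expects.
func shardFor(labels map[string]string, n int) int {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // stable order so equal label sets hash identically
	h := fnv.New64a()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(labels[k]))
		h.Write([]byte{0})
	}
	return int(h.Sum64() % uint64(n))
}

func main() {
	series := map[string]string{"__name__": "http_requests_total", "instance": "a:9090"}
	fmt.Println(shardFor(series, 5)) // e.g. fan out into five sharded requests
}
```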
J
If people experience performance issues, we can do something about it then, but I'm not sure we have to take action on this before the stabilization. So those are going to be our updates. If somebody could help us with the approvals of the compliance-related things, that would help us toward the stability milestone as well.
K
That's a good point you bring up, Diana, because again, Tigran, Bogdan: how do we get rights to be able to actually review these or merge these, again, very specifically on the Prometheus components?
I
We can discuss that, but there was one proposal to make, what's his name, Anthony, an approver; he kind of bailed out, though, he did not continue his progress on that. But we can discuss it separately. On contrib you already have Anurag as an approver, so he can review.
I
Once he reviews, we'll merge things. But the problem with that: there is a PR that did not pass tests because of some small problem. But let's discuss two things. First, the removal of the send queue: I didn't have a chance to comment, but I think that's a bit problematic for the collector. My proposal would be to add it back, but hidden, not exposed in the config, and just set up a single consumer for the...
I
No, no, no, not the breaking part. The problem is that without it, the export sometimes happens on the same thread as the receive path, so you're blocking... It was a nice way to ensure that we do not block the receiving call. Anyway, my proposal would be: you did correctly remove it from the config, but just initialize it with the...
J
I'll revert it. I'll make sure the number of consumers is always one; maybe we can make it overridable, and not let...
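A quick Go sketch of the compromise being proposed: keep a sending queue so exports never run on the receiver's thread, but pin it to a single consumer so requests stay ordered. This is a sketch of the idea, not the collector's actual queue implementation:

```go
package queuesketch

// request stands in for one batch of samples to export.
type request struct{ payload []byte }

// startQueue returns a buffered queue that decouples the receive path from
// the export path. Exactly one consumer goroutine drains it, so requests
// are exported in the order they were enqueued.
func startQueue(export func(request)) chan<- request {
	q := make(chan request, 1024)
	go func() {
		for req := range q {
			export(req) // runs off the receiver's thread, in order
		}
	}()
	return q
}
```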
I
That may happen if we completely remove it; that's one thing. The other one is the up metric: isn't the up metric already produced by Prometheus? I saw that we are producing it...
J
We're dropping it in the receiver. It's coming from the scraping library, and we were just dropping it; we didn't know how to handle it, so it was just dropped and, I think, logged: "hey, we just dropped this". So the idea is: let's turn it into a gauge internally in the collector and then export it as an up metric in the exporter. It's going to become a gauge, and this is only if you scrape from Prometheus, so it's not going to be... yeah.
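A small Go sketch of the conversion being described, using stand-in types; the real receiver builds the collector's internal metric representation, so every name here is illustrative:

```go
package main

import "fmt"

// Sample is a stand-in for one converted metric sample.
type Sample struct {
	Name   string
	Labels map[string]string
	Value  float64
}

// appendUp adds the synthetic `up` gauge for a scrape target: value 1 if
// the scrape succeeded, 0 otherwise, carrying only the instance label
// (per the discussion, no const labels should be attached to it).
func appendUp(samples []Sample, instance string, scrapeOK bool) []Sample {
	v := 0.0
	if scrapeOK {
		v = 1.0
	}
	return append(samples, Sample{
		Name:   "up",
		Labels: map[string]string{"instance": instance},
		Value:  v,
	})
}

func main() {
	fmt.Println(appendUp(nil, "host:9090", true))
}
```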
K
It's there, it's 291.
I
The receiver's internal metrics: it produces the up metric itself. My point being, I think Prometheus already produces this; we just need to... There is a hardcoded spot somewhere where we drop it, so...
J
The receiver is taking everything that comes from Prometheus, converting it to OpenCensus, and then converting that to OpenTelemetry. That's how the receiver currently works, so it introduces that gauge.
I
Let me show you. So here, see: you do record it with OpenCensus, you record a one or record a zero. You don't take whatever Prometheus produced and convert it to OpenCensus; you use the OpenCensus library to record a metric.
M
Yeah, that code in there basically gets the response from the Prometheus library, which tells us whether the target is up or down, and it just records those values. So that's essentially...
M
There are a few reasons why. Number one: when I looked into the code, what's going to happen if we perform the direct conversion is that it will be a metric exported to all the other exporters. Number two: const labels will be applied directly to it, and we don't want that; the up...
M
...metric should only have one label, which is instance. And in fact there's a follow-up change from this on contrib, in the OpenCensus Prometheus exporter, where any time we encounter the up metric, we don't attach any labels to it. So, you know, I started with that, whereby we use the metric as it is produced directly from the Prometheus library, and that didn't work properly; const labels would apply. Okay, let me reread your comment, probably. Go ahead.
I
Yeah, but even if we remove it, we'll replace it with OpenTelemetry; it's still recorded by us instead of coming from Prometheus. That was the part that I did not understand.
J
David gave feedback on that; if you can take a look generally, that would be useful. David suggested we should put the instance in the resource rather than, you know...
N
Thanks. I think I'm next on the agenda, so I'll explain the problem as briefly as I can. The OpenTelemetry collector does not depend on this package called opentelemetry-proto-go, which is kind of one package where the protos are already built, and so could be a common dependency among anything that's Go and depends on those protos. I think there are some valid reasons why the collector doesn't depend on opentelemetry-proto-go, and I just wonder what the current thinking is, and whether I can do anything to help move this along.
G
I guess there are a couple of reasons. First, historically, there was no opentelemetry-proto-go initially, when we started the collector, so we could not depend on it. The second is that, even though it exists now, we still cannot directly depend on it, because we use gogo proto for OTLP, whereas that repository uses the golang/protobuf library for generation. Now, can we move to that? It's not entirely clear at the moment, because the performance is going to degrade significantly.
G
We will need to see what we want to do about gogo proto. The additional complication here is that gogo proto seems to be unmaintained as of last year, and if it stays that way, then we cannot continue depending on it. Possible solutions here are to actually give up some of the performance and go with the canonical golang/protobuf, and also to look at the second version of the protobuf API; there is a new version of that as well.
N
So the most straightforward way, I think, is that we all depend on the one protobuf, and then I can just import that package into the exporter that's in contrib, and then I can update that package, right, just with basic...
I
You would be asking us to expose our internal representation, what we call pdata, and that's something we have tried hard to avoid and will probably never expose. The reason being: we have a dream, at some point, of doing lazy unmarshaling of the data; essentially, to not unmarshal the protos and instead simply look at the byte stream, go to the right position, decode whenever we need to, stuff like that. Because of that, we don't want to expose whatever pdata is backed by. So not only are you asking us to switch to that proto, you want us to expose that we use it behind the scenes.
I
So another option for you: if you convert to that proto, we have a method where, if you give us the bytes, we know how to interpret them, since the encoding is the same. So if you have that... but you're going to eat some performance there, by converting to that proto, then marshaling, and parsing the marshaled bytes twice.
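A hedged Go sketch of that bytes-based bridge: build the message with the official opentelemetry-proto-go bindings and serialize it; both sides speak the same OTLP wire encoding, so the collector can decode the bytes into pdata internally. The collector-side entry point is deliberately not shown, since its exact name is version-dependent:

```go
package otlpbytes

import (
	"google.golang.org/protobuf/proto"

	tracepb "go.opentelemetry.io/proto/otlp/collector/trace/v1"
)

// toOTLPBytes serializes an OTLP export request built with the official
// proto-go bindings. The resulting bytes use the same wire encoding the
// collector understands, at the cost of one extra marshal/unmarshal round
// trip compared to staying inside pdata.
func toOTLPBytes(req *tracepb.ExportTraceServiceRequest) ([]byte, error) {
	return proto.Marshal(req)
}
```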
G
Well, essentially, we consider this in-memory representation to be an implementation detail that we intentionally placed behind a hidden interface, so that there is no access to it. And we do actually tweak it; we make changes to it for performance reasons as well. So we would like to keep it that way. Even though, as Bogdan said, we may actually change the implementation to depend on that proto, we probably still will not make it a publicly available data structure that you can use.
G
Or maybe not; unless we then somehow also make our pdata convertible to the official proto-go in-memory structures, which is another possibility. We do that via bytes today; doing it without serializing and deserializing is something we could do, and it's going to be a lot more performant, even if not an exact match, for many of the data structures. So it can be faster. I don't know if we want to do something like that, though. Maybe.
I
Maybe that's a possibility: to add methods converting from and to the official OTLP protos. The problem with that, and you may not believe me, is that people will feel lazy to use pdata and will always convert to the proto, for whatever reason, and then we lose all the possibility of performance improvements.
N
Yeah, what I can do is clean up what I've got so that we're still doing the serialize/deserialize; we're losing some performance there, but maybe we can put a pin in the performance problem, and other people may or may not have the same issue in the future, right?
G
Good. Alolita, you have the last one.
P
Yes, thanks, Max. Tom, can you just start? I think there is some problem with my headset, or probably my mic; let me just fix it. You guys can just start; in case I'm inaudible, I'll just join in a few minutes.
R
Let me try to create the agenda first, but then, yeah, for today.
T
Okay, I guess that's me. It's not a major issue, but I thought I'd just cover it quickly. I've been trying to get more involved in contributing to this project, and I struggled a little bit to understand some of the concepts and found some things missing, and I feel like it would be a good opportunity to just clean up some of the docs for contributors and also for users of the library. So I wanted to do a quick scan and see...
T
...if I can, I guess, add a few sections. I just wanted to prompt a quick discussion; if we spend too much time, I'm happy to put it into an issue instead. But at a high level: we already have the GoogleTest doc that we landed, and it has a lot of duplicate content, so I wanted to move that out, the Bazel build instructions and so on, and keep that one specific to testing.
T
I posted in the chat that I wanted to split it up, with the install page remaining about a global install, if you want to do a system-wide install in line with most Unix packages, and instead create a separate usage page specifically targeted at how you use OpenTelemetry in your project, which would cover either CMake users or Bazel (I guess the recommendation here is Bazel) and how to do that, so that users can pull it into their project. But feel free to jump in with comments.
D
We are fine with that. You may want to confirm with George; I think Josh was primarily driving the Bazel stuff.
T
I,
I
guess,
let
me
maybe
reframe
the
question
of
I
think
the
install
section
is
probably
less
relevant
because,
as
far
as
I
can
tell
at
least
I
haven't
been
able
to
find
any
references
to
install
it
from
bazel
from
a
usage
perspective.
Is
there
a
recommendation
of?
I
want
to
use
open,
telemetry,
c
plus
plus
how
we
would
recommend
you
use
it?
Is
there?
Is
there
any
kind
of
views
in
this
group
about
kind
of
what's
intended
usage
recommendations.
T
So the examples are more tailored towards... sorry, just to clarify: I'm not talking about usage in terms of code, more about usage in terms of your build. Should you be dragging this in with Bazel, should you be using CMake? Is there a view in this group, historically maybe, about what the recommended way of importing OpenTelemetry into your library would be?
D
They get a NuGet package for MSBuild, but I'd rather describe that process in a contrib repo somewhere, because that should not be the cornerstone, and that should not be the default process: it depends on at least two Microsoft-centric tools, the NuGet package manager and MSBuild. That's why I cannot advise it as general guidance, but I can contribute a document for that.
T
I guess, if there is no mandated or recommended way, maybe it's worthwhile just enumerating the options, so users can choose their own. I guess the only thing is we might want to upgrade some of the options to supported status, and whether we support the NuGet-based deployment mechanism or not is a second question. But I would suggest having a usage file that summarizes the supported mechanisms that we recommend; maybe that's Bazel and CMake, maybe not NuGet yet, and maybe we can add that later.
D
I'd
say
that
in
other
pro
projects
I
had
the
fairly
good
experience
with
cmak
overall,
because,
as
part
of
the
cmac
build
process,
for
example,
we
can
produce
the
packages
focused
on
specific
commercial
customers,
such
as,
let's
say,
if
we
spin
a
build
on
multiple
docker
containers
for
different
linux
distros,
we
can
produce
deb
packages
for
dead
band.
We
can
produce
our
pan
for
red
hat
and
we
can
produce
like
with
our
gz
for
mac,
for
example,
and
then
pretty
much.
D
...the cornerstone is the CMake build process: set up the tools, build, install, and optionally produce a package. And then it's a separate story, not our story, not part of this group, but somebody who got the binary package through the CMake process can then redistribute that distro on whatever preferred artifact repo they choose, something like that. But I guess I'd prefer starting the general guidance focused on CMake.
L
So, just to give my two cents here also, regarding the initial question: I think what's clear is that the two ways that we officially support now are Bazel and CMake. So if you work on that, awesome, great. I see folks here express a preference for CMake, but I think we officially support both CMake and Bazel; I think that is our stance, so we should have documentation for both approaches.
P
Just to add one point: definitely, as Johannes mentioned, we are officially supporting CMake and Bazel as part of the build systems. But from the distribution perspective, we are going to distribute the source code of the repo, in terms of the source of the API, the SDK, and the exporters, and we are not going to enforce how end users want to do their build.
T
Okay, cool, that sounds good. So, just to summarize: there have been quite a few views here, but I guess Johannes summarized it. The supported systems are Bazel and CMake; we'll document that those are supported, not enforced ("supported" is the word I would use). I'll describe the workflow both in terms of CMake and in terms of Bazel.
T
If anybody has views on adding additional ones, whether through contrib or something else, we can add that later as well; there'll be a PR that we can have the discussion on too. Not to take up too much more time, but just to maybe wrap up: in addition to the user-facing documentation, I was hoping to make things easier for contributors as well.
T
The one thing I was going to add to the contributor-style documentation, probably in the /docs folder, was the library structure. I posted an issue yesterday about what "ext" is, anyway; actually, no, sorry, it was a Slack response, not an issue. So I feel like a summary of "here's the api package, here's the sdk library, here's ext" for internal contributors, so they know where everything is, would be useful.
L
Cool, all right, that's awesome; I give it a thumbs up. Just one more point there, because I stumbled across something similar yesterday when I was cleaning up the API documentation: when you write documentation, don't be afraid to also clean up stuff that doesn't seem ergonomic to you, because that would also be greatly welcome. Writing documentation is a great way to put yourself in the user's shoes. For example, I saw yesterday that in the API we have a core and a common folder, and it doesn't make sense to have those two folders, so I just tried to put them together, and I expect there are lots of other similar issues lingering around, like in the CMake builds.
T
I'll definitely take that on board; I very much see a deletion as a positive contribution here, as well as an addition. So yeah, I'll be cleaning up things, at least the things that I can tell need it, given the context I have, for the time being. Somewhat related, in terms of where the docs live: I noticed that there are also Read the Docs artifacts that we publish. I'm not sure how well those are kept up to date, but I presume that we do keep them up to date as well.
L
Yes, those are triggered automatically: the latest Read the Docs content is updated on each merge to master. So, correct.
T
Okay, cool. I think something like the usage instructions, about how you should link this into your project, probably does make sense to escalate to that as well, so I'll have a look into how to get things into the Read the Docs documentation and I'll probably make sure that content is covered there too.
D
Are we going to have any scenarios where we just have header-only? There's one unique case: for ETW, Event Tracing for Windows, I'm intentionally trying to keep the entire thing header-only, just because it is much easier for us to consume it that way. As for the other processes...
D
...I think we right now assume that we run the CMake build, it produces some .lib or .a and .dll or .so files, and then somehow the customers have to sort out how to take those files, package them, and ship them with their application. And for all of the standard exporters, I think we don't really have the choice of making them header-only, because they're too beefy with the extra deps, like the HTTP client or the gRPC library itself; there's practically no way to make anything else header-only, right?
T
I think the only use case for header-only is if you're using just the API, for producing, I don't know, something like libgrpc itself, which conforms to the API but doesn't actually install any exporter or anything. But yeah, that's the only case that comes to mind.
D
What else? json.hpp is header-only; it's compiled into the actual binary anyway, but there's some cost, 100-150 KB. And for gRPC there might also be some requirements; I don't know how big the library is, maybe 500 kilobytes or so, right? It would be great to get to the point where we can say how to build and how to consume it, as in: what library dependencies you have and how those library dependencies affect your instrumented process, like how much your process grows if you instrument with a certain exporter.
T
I might suggest going beyond just a dependency view on that: we want to target specific use cases. So, for example, the API in a user library versus an application that's fully statically compiled, everything in, including the exporters. I think there's going to be a handful of common use cases depending on who you are; it would probably be worthwhile enumerating each one and adding suggestions for those. I think that's definitely a usage consideration, probably a more advanced one, but yeah.
D
Yeah, sometimes they just don't ask, but sometimes they do, and sometimes the very first question I have is: how big is my process going to grow if I consume that thing and instrument with it, and what extra libraries do I need to bring into my address space? Those sorts of aspects. It would be great to get to some sort of documentation for that at some point, when we are closer to more formal examples for each concrete exporter.
T
It's a good point, Max. I probably won't address it in the first iteration; I'll probably just do the simple usage. But I think, as we build on this, targeting specific use cases ("I'm building a library that has the API built in", "I'm building X or Y"), we can work on that and add all those extra considerations, such as how big it is, into it as well.
D
My biggest challenge right now is that when we go to "getting started", we have that foobar-lib example, right? I mean, it's informative and such, but I wish we could get to the point where we can drill down into more concrete scenarios, for example for Jaeger, for Zipkin, for the gRPC OTLP exporter: this is how you set it up, this is what the preferred library is going to be. Even if we don't package it ourselves, there's still going to be some sort of binary artifact, like, I don't know, a shared library that implements this tracer. It would be great to explore in that direction. From my end, again, my apologies: I have been focusing on something totally orthogonal, the ETW exporter, on a totally different path.
T
How about we start an issue to collect these use cases? I think that's a great example, "I just want to drag in Zipkin and compile my app", and then we can decide which ones we want to document more officially. So maybe I can start something and then everybody can weigh in.
D
Yep, that works. Yes, as we iterate over concrete examples, we can more easily use the same template for other examples, right?
L
And then just something from my side here: when you write documentation about that, I think, from a high-level view, we have two main use-case scenarios. One is what I think OpenTelemetry calls the instrumenter, the person instrumenting a library; for that use case we have the API, the header-only approach, where you only need a header so you can instrument the library. And then the second use case, I think they call it the operator, is the person running the application and actually using our SDK; and I think that's the use case where we currently don't support any header-only approach. As soon as you use the SDK, you need a shared or static library. And I think in the documentation we should consider those two use cases separately, because they're often, not always but often, different roles doing those two things.
L
Also, I think on the API documentation side we have a pretty stable state; on the SDK side things are still in flux, and I think we have to be careful documenting there, because things are still changing and hopefully also still getting simpler for the end user.
P
So probably, I mean, Java and one more SIG are taking the initiative to develop the prototype, and if that works, the specification would be written and then finalized. So I think that's still a long way off; I don't see it happening in the next two to three months.
D
Do we couple it with the requirements for a concrete exporter, like the Jaeger one, or do you mean just the API, right?
D
So split it into separate milestones: API headers first, and then whatever is required by the spec, the exporters, as a separate piece, as the SDK. Yeah, that's reasonable, yeah.
L
But I think, when we release a 1.0, we should at least have a somewhat stable and usable SDK, because otherwise it's misleading to users. I mean, on the SDK side we're still flexible, we can still make even bigger changes, but I think it should be somewhat stable and usable for people when we have a 1.0 release. Otherwise it's misleading, because you just released the API and the SDK is unusable; people will be kind of confused, I think, receiving such a 1.0 version.
R
Okay, that sounds like a valid concern, yeah. But this means our stable release will depend on both the API and the SDK, and I just want to try to convey the message to the user, right? They can maybe start to instrument their code; I think that will take some time, and it doesn't depend on the SDK.
D
Isn't there something that we can use, like release-candidate milestones? We can call it 1.0, but there's still risk, there are going to be minor changes, so we can call it an API release candidate, right? And as we incrementally bump the build number, we can respin it alongside the SDK, which has the proper exporters functioning, and then we release. In the meantime we can still use the release-candidate identifiers, right?
P
Okay, I think, still, for the API we do have a couple of features which we are missing, like baggage support and baggage propagation, which are not there. Even context propagation we have to revisit; there may be a few issues there, with either traceparent or tracestate, one of them is not fully working. So probably once we are good with that, we can really think about having a 1.0 release candidate for the API, and as we move forward, we work further on the SDK.
R
I think the release candidate is a good way to do this 1.0 candidate, with both a stable API and an unstable SDK. I'm thinking, for this case: if we're going to release a 1.0 release candidate, can we emit a compiler warning when the user is compiling the SDK, to say the SDK is not stable, but for the API it is just fine? Would that not be possible? So when users try to integrate it into their code, they get a warning and understand that the SDK may change.
D
And I was wondering: let's say we have a set of exporters to support, but in the real world customers are probably focusing on just one, right? I mean, portability is all good, but a concrete company, let's say they use Zipkin or Jaeger, just needs only that one working. From the bigger project perspective we have to ensure that everything is functioning, but there could be something where, for release candidate one, the API is stable and, okay, I'll selfishly say, the ETW exporter works.
D
What is considered stable, rock solid, and what is still going to need an update: as long as we can capture this in the release notes... We have, I don't know, like eight of them, right? Maybe something like a matrix in the release notes, in the changelog: every time we publish, we say this one is stable, this one is nearly stable, and this one is not ready yet, and that way the customers can use their best judgment in how they are going to use the product.
D
A release matrix: "not working", "works", "works well". Pretty much a list of the, like, four or eight common things, and we say "done" or "not done yet", and then customers look at it: "oh, they call it RC2, but they don't have the Zipkin exporter, for example; cool, it's not ready for me yet".
D
Real quick: I used a tool called scancode. It runs forever, but it yields a good result of what licenses we rely on, for all files recursively included in the repo. We're generally good; I mean, almost everything is Apache licensed. I opened an issue mentioning what is not Apache licensed, and I read about the MIT license versus Apache, so in general we are in relatively good shape.
D
I noticed that the Prometheus exporter, which is off by default, pulls in a ton of stuff, and when I ran the scan recursively on that, it was a bit problematic. I mean, right now we don't commit to the metrics API at all, and maybe we should keep it that way; we can say that Prometheus is off by default and there are extra licenses needed for that component.
D
Right, so when it's not included in the actual product, it's totally fine; the users do not have to ship an extra document in their list of licenses. And for the variant, I'll try to do the work this week to switch permanently, for all platforms, to the Abseil variant, and I will keep it running for a while; I think later on I can just remove the MPark variant, not because it's bad (it's maybe good), but because it's differently licensed and because I know that the Abseil variant is good. Makes sense?
R
Yes, and for the licenses I have a related question: can we check in some auto-generated files to our repo? I think we can't add or apply our licenses to them, right?
D
You don't have to have a license in the file, but maybe it's good to add one. I had a script which was adding SPDX one-liners, a small tool that appends the license identifier. If that helps, maybe we can run it on a set of files, by mask, and append whatever license applies; it's a short-form license plus the copyright, "OpenTelemetry Authors".
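A small Go sketch of the kind of helper described here, walking the tree and prepending an SPDX short-form header to files that lack one; the speaker's actual script isn't shown in the meeting, so this is purely illustrative:

```go
package main

import (
	"bytes"
	"io/fs"
	"os"
	"path/filepath"
)

const header = "// Copyright The OpenTelemetry Authors\n// SPDX-License-Identifier: Apache-2.0\n\n"

func main() {
	// Walk the repo and tag every .go file that has no SPDX line yet;
	// adjust the extension mask for other file types.
	filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || filepath.Ext(path) != ".go" {
			return err
		}
		data, readErr := os.ReadFile(path)
		if readErr != nil || bytes.Contains(data, []byte("SPDX-License-Identifier")) {
			return nil // unreadable or already tagged
		}
		return os.WriteFile(path, append([]byte(header), data...), 0o644)
	})
}
```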
D
I can show you that offline. Thanks. I had a question, sorry, again a separate question, about the ETW exporter. I have code that is MIT licensed from Microsoft; we just recently got through the legal hurdles and we are ready to release it in the open, and we already released it in a separate repo.
D
It is optional; you can still compile the entire code without it. I'm just thinking: I could either add it as a submodule, or I can drop it into the ETW exporter directory, but then I need to add some sort of README saying that you may enable this feature, and if you enable this feature, you will require the MIT license for that exporter.
D
This is similar to what we already do for zPages, for example, because in zPages we use json.hpp, and I think in a few other places we use nlohmann/json, and that library is also MIT licensed. So there are places where we already rely on external MIT-licensed code.
D
It's a library, yeah; the TraceLogging dynamic library for Windows, and it's MIT licensed. Our legal team wanted to see the MIT license on it, not Apache.
D
It's in the main repo right now, and it's not in the contrib repo. I was just thinking that if I had a submodule, then it's the hassle of referencing a submodule, which is not really that often used in a typical flow; or I can drop in a copy, in a separate directory in the repo itself, like cloning it. My anticipation is that it's never going to change in the next four years; this is code that's existed for many years and never needed an update.
D
I could put a README file in there saying that this is the MIT-licensed piece that is not on by default; in most cases you don't need it.
R
Either one would work; it is the original copy of this file, and yeah, as for the...
D
I mean, we did it in the API even. So what I'm asking for is: can I drop in a file under the exporters/etw directory, put it in a separate subdirectory, put a README file in there, and say this file is optional, you don't usually include it in your usual build of the SDK, unless you use the ETW exporter, in which case this specific module is MIT licensed.
D
If that makes it better, I can cover this in a separate markdown document. So, for that license issue, I want to produce a markdown document which describes the exporters' build options and what licenses you inherit by enabling them, so I can cover it in there. I can say: for the ETW exporter, if you flow to Event Tracing for Windows, you'd need this header and these additional headers, and the MIT license. If that works for the community, I'll do that: one markdown file with the total list of components and their licenses.
R
Yeah; what's the advantage of checking in the single file instead of a submodule? It's almost the same either way.
D
Right. I mean, for a submodule I was thinking: usually you have to maintain a few things in the actual repo, at least three lines in the git configuration saying this is the URL location, and there's extra time needed to check out the submodule to actually fetch that file, compared to when it's a single file.
R
Yeah, I think it's almost the same here. So in this case checking in a single file doesn't provide too much advantage; I'd prefer submodules, yeah.
R
For this Jaeger IDL file, I tried checking in the generated file directly, and didn't choose the submodule approach, because the files need some preprocessing by the Jaeger IDL compiler, and I wanted to avoid the build process having to run and track the auto-generated output. It seems the file can just be used directly. Otherwise I'd prefer submodules.
D
If I had the submodule, everybody sees the submodule when it's checked out. If I drop it into the ETW exporter directory, everybody knows that this specific path is unique to the ETW exporter, so it's, like, isolated in the world.
D
It's more about the perception, because when I do the initial recursive cloning, I see an extra thing pulled in, and then my immediate reaction is: why do I need to pull that thing at all, and what did Microsoft add there? Whereas with a plain directory you wouldn't look at the dependency.
P
Totally. I was thinking about a third perspective: whether it really belongs to the OpenTelemetry main repo, or it can move to contrib, or it can be in some Microsoft repo. That's the only thing; I just want to get more discussion with community members, and, I don't know, rather...
D
Not to go back to that discussion: I want to contribute a contrib-repo example which shows how it is beneficial to everybody and how it does not couple you with Microsoft technologies per se. So I will show how to export through that exporter to Google Cloud, to JSON, to Elastic, whatever.
P
Okay, I think, just in the interest of time, we have just 10 minutes left. I think you wanted to talk about the contrib repo.
O
Hi, yes. So there are two things related to this. An issue was already open, and the issue actually touches two things: one is versioning, and the other one is...
P
I think that's totally... I get your point. I mean, we probably wanted to discuss this in this community meeting. In the context of the contrib repo, we wanted kind of end-to-end ownership for each of the components, belonging to the real owner of those components. With end-to-end, it would ideally mean they should be able to raise a PR, they should be able to do the approval and even merge it, and then, finally, they should be able to create a release out of that.
P
Somewhere, I think, the create-a-release permission... even I'm not able to understand where it is coming from. Probably, Max, you have some idea, because this repo was created, I mean, the request for the repo was done, by you. Do you have an idea how... because on GitHub I can see the permission to create a release, but Tomas is...
D
I think I have admin rights; if not, I can add the other folks that are interested and trusted by the community as administrators as well, and we can enable the feature. Okay.
D
For each component, we were thinking that code owners should be used to enforce the code reviews of specific components. For example, if I contribute some example in contrib related to it...
W
...I'm going to set myself as a code owner, and for all PRs related to that I'm going to be providing reviews. So feel free, for other components, to take ownership of specific subdirectories and use the code owners file for that.
D
Let me check the permission issue; I think we can actually create the release. Yeah, I'll figure it out.
O
Yeah, and if there are some other ways to communicate with you apart from the meetings, like Slack, that would be great.
D
Okay. As for Slack, I have tried a few times; sorry, I'll get it set up today.
Q
Yeah, I was just going over the issues, and I think this one can just be closed; I added a few comments to it some days ago.
P
Okay, fine. So you want permission to close it? I think you should be able to do it, right?
P
Okay, Tom, I mean, probably you want to talk about including the auto-generated files without any license. Hey, Sim, do you have something else to discuss? Sorry, I think I just...
R
I think my issue was covered in your earlier license topic, so, okay.
P
Okay, I just had one issue; probably we can quickly discuss it, I didn't put it here. The problem right now is that the latest version of gRPC has a dependency on GCC 4.8. Our API and SDK are compilable with GCC 4.7, and our CI system has the compile check for it. For the OTLP exporter, the GCC 4.7 Bazel and CMake compilations will fail if we upgrade gRPC to the latest version. So probably, I think, we should be splitting our CI: for OTLP the legacy build should be GCC 4.8, and for the API and SDK with the other exporters the legacy build should be GCC 4.7.
P
I mean, I can do that, but I just wanted to check if somebody has any issue with splitting those; that would be both for Bazel and CMake. Right now, for the Bazel legacy GCC version we use 4.7, and for CMake we also use 4.7, but that will not work if we upgrade to the latest version of gRPC.
P
And I think we are done. Probably we can quickly check: if somebody wants to talk about any of the pull requests that have been lying around for a long time, we can talk now. If not, that's good. So, this is what I was talking about: upgrading gRPC to 1.37.0. This is something which is failing as of now for the legacy Bazel and CMake builds.