From YouTube: 2021-07-21 meeting
C
Okay, so I think we should start, five minutes past the starting time. Juraci, you have the first one, but unfortunately I already read it, and I have a comment for you: that's something that we need to post on the community as an issue, because the Zoom meetings are not controlled by dc, they are controlled by the community.
E
Yeah, okay, so this is basically just a continuation of what was happening last week, because last week I just proposed the feature, and this week there's an actual design doc. It shouldn't be too complicated.
E
The background is essentially the same as last week: if a customer wants to submit a config file for the OpenTelemetry Collector, it must all sit locally. And then, especially if you're merging using Splunk's code, some of which has just been merged in, you might have to have a non-default config file sitting somewhere in your system, which can be fairly inconvenient depending on how and where you're deploying OTel.
E
So the proposed interface that I suggest is that we have a --remote-config flag: --remote-config followed by the name of the source and a URL. We do it in URL form like this, and then the other information will be part of the URL.
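A minimal Go sketch of what a repeatable --remote-config flag could look like; the <source>:<url> form follows the proposal above, but the names and parsing details here are assumptions, not the actual implementation:

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// remoteConfigs collects every --remote-config occurrence; implementing
// flag.Value is what lets the flag be passed more than once.
type remoteConfigs []string

func (r *remoteConfigs) String() string { return strings.Join(*r, ",") }

func (r *remoteConfigs) Set(v string) error {
	// Expected form, per the proposal: <name-of-source>:<url>
	if !strings.Contains(v, ":") {
		return fmt.Errorf("expected <source>:<url>, got %q", v)
	}
	*r = append(*r, v)
	return nil
}

func main() {
	var rc remoteConfigs
	flag.Var(&rc, "remote-config", "remote config source as <source>:<url>; repeatable")
	flag.Parse()
	fmt.Println("remote configs:", rc)
}
```

Invocation would then look something like `otelcol --remote-config etcd:https://... --remote-config s3:https://...`, with the source names made up for illustration.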
E
The way we would implement this is that most of the work would be done by each individual source. We would add to the config source interface that Splunk has recently proposed a "session from URL" function that would take in a URL string and then return a session and an error. We would include a default implementation that returns nil and a "does not support this functionality" error, so as not to break anything backwards. And then, in terms of the other bits of the implementation:
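A rough Go sketch of the interface change being described; the real ConfigSource and Session types live in Splunk's config-source proposal, so the definitions below are illustrative stand-ins only:

```go
package configsource

import "errors"

// Session stands in for the session type from the config-source
// proposal; it is stubbed here only to keep the sketch self-contained.
type Session interface {
	Retrieve(key string) (interface{}, error)
	Close() error
}

// ErrSessionFromURLUnsupported is the backwards-compatible default:
// sources that predate remote config return it instead of a session.
var ErrSessionFromURLUnsupported = errors.New("config source does not support SessionFromURL")

// ConfigSource gains a SessionFromURL method that takes the URL string
// from --remote-config and returns a Session and an error.
type ConfigSource interface {
	SessionFromURL(url string) (Session, error)
}

// UnsupportedSessionFromURL can be embedded by existing sources so that
// adding the method breaks nothing: it returns nil and the error above.
type UnsupportedSessionFromURL struct{}

func (UnsupportedSessionFromURL) SessionFromURL(string) (Session, error) {
	return nil, ErrSessionFromURLUnsupported
}
```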
E
Those would be submitted to the various config source implementations, which would implement the function listed above, each returning their own config source session. Then we would use Splunk's existing merge functionality to get a merged config source, use the value-injection functionality, and have the full config ready to go, which we would then just submit to the OTel Collector.
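Tying it together, the overall flow could look roughly like this, continuing the sketch above (imports of fmt and strings assumed); mergeSessions and injectValues are hypothetical stand-ins for Splunk's existing merge and injection functionality:

```go
// buildConfig resolves each --remote-config value into a session, merges
// the sessions, and injects the resolved values into the final config
// that is handed to the collector.
func buildConfig(remoteConfigs []string, sources map[string]ConfigSource) (map[string]interface{}, error) {
	var sessions []Session
	for _, rc := range remoteConfigs {
		parts := strings.SplitN(rc, ":", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("expected <source>:<url>, got %q", rc)
		}
		src, ok := sources[parts[0]]
		if !ok {
			return nil, fmt.Errorf("unknown config source %q", parts[0])
		}
		s, err := src.SessionFromURL(parts[1])
		if err != nil {
			return nil, err // includes ErrSessionFromURLUnsupported
		}
		sessions = append(sessions, s)
	}
	merged := mergeSessions(sessions) // Splunk's merge functionality (stubbed)
	return injectValues(merged)       // value injection -> full config
}

// Stubs for the pieces the proposal reuses; the names are hypothetical.
func mergeSessions(ss []Session) Session                   { return nil }
func injectValues(Session) (map[string]interface{}, error) { return nil, nil }
```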
E
Vocally is fine. I believe you should all also have edit and comment privileges, so feel free to mark it up. As you can see, I shared this within AWS, and there are some comments on the side which I've been responding to and incorporating feedback from. So comments should be good, or, if there's something you'd like to express right now, that's also good.
C
I would not... let's park this for the end of the meeting. If we have time, I will do that; otherwise, let's go through all the agenda items and I'll do it offline if it's not possible during this meeting. Okay, all right, thank you. And, by the way, nice work. And I cannot restrain myself: why do you need a different flag than the current --config?
E
The reason I decided to make it a different flag is that we're possibly submitting it multiple times, which --config currently doesn't support, and it might be confusing to users if we change that. I'm happy to have it be the same; I think it's a better user experience to keep it a separate flag, but there's no technical reason why it needs to be different.
E
What about now? It's a blank screen.
D
All right, so... oh yeah, okay, so I think I can do something different then.
C
The editor has an open...
D
...terminal in the IDE. That's exactly what I'm going to do, yeah. All right, so, this... I promised last week that I would, you know, try to get this done by next week, like, two weeks from that meeting, and I actually got it done for this one here. So I'd like to present the results of the PoC of the restructuring of the repositories.
D
So what I have here is: I forked, or cloned, the OpenTelemetry Collector as opentelemetry-collector-api, contrib as opentelemetry-collector-components, and the builder is still the builder. All right. One repository that is missing, based on the proposal, would be an opentelemetry-collector-manifests, and the manifests repository would then contain a manifest file, or a set of manifests, similar to this one here.
D
So this one is for the load balancer. We specify a module, so for people who've tried the builder before, it's really the same builder. I just had to do a couple of tricks in a couple of places, mostly adding replaces, to handle the situation where, you know, I'm changing the organization here.
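A hypothetical manifest along those lines; the schema loosely follows the opentelemetry-collector-builder config of the time, and the github.com/example/... module paths are stand-ins for the forked repositories:

```yaml
dist:
  module: github.com/example/otelcol-loadbalancer
  name: otelcol-loadbalancer
  output_path: ./dist

receivers:
  - gomod: github.com/example/opentelemetry-collector-api/receiver/otlpreceiver v0.0.1

exporters:
  - gomod: github.com/example/opentelemetry-collector-components/exporter/loadbalancingexporter v0.0.1
  - gomod: github.com/example/opentelemetry-collector-components/exporter/loggingexporter v0.0.1

# The "couple of tricks": replace directives pointing the canonical
# module paths at the renamed repositories under a personal namespace.
replaces:
  - go.opentelemetry.io/collector => github.com/example/opentelemetry-collector-api v0.0.1
```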
D
So
instead
of
open
telemetry,
it
is
using
the
one
for
my
own
namespace
and
you
can
see
you
know
the
api
and
the
components
instead
of
the
old
or
you
know
the
current
names
and
let
me
copy
them
what
I
wanted
to
share.
D
And what is it doing? Well, so, okay. What I expected to happen is that I would end up with a distribution that contains only the OTLP receiver, the logging exporter and the load-balancing exporter, where load balancing and logging are coming from...
D
...the new repository for components, and the OTLP receiver is coming from the old core, or the new collector-api. And, oops... when I run the builder, I get a binary out of it, and I can also run it with a local OTel, with an OpenTelemetry Collector configuration, and I get, you know, a valid OpenTelemetry Collector setup. So the configuration is this one here; it is verbatim the same as the one that we have in the README file for the load-balancer exporter.
D
So it's a little bit complex, because it is creating a lot of backends and one load balancer. So whenever you send data using OTLP to port 4317, it will load-balance, based on the trace ID, onto one of those backends here, and then, eventually, you know, those backends would be configured with the logging exporter.
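The configuration being shown is said to be verbatim from the loadbalancing exporter's README; the sketch below is only a loose reconstruction of that shape, with endpoints, ports and the single backend chosen for illustration:

```yaml
receivers:
  otlp/loadbalancer:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  otlp/backend-1:
    protocols:
      grpc:
        endpoint: 0.0.0.0:55690

exporters:
  logging:
  loadbalancing:
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:
          - localhost:55690

service:
  pipelines:
    traces/loadbalancer:
      receivers: [otlp/loadbalancer]
      exporters: [loadbalancing]
    # Each backend is just another pipeline wired to the logging exporter.
    traces/backend-1:
      receivers: [otlp/backend-1]
      exporters: [logging]
```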
D
Yeah, so it's 17 megabytes, right? So it's way smaller than the contrib, which would be required today to use the load-balancing exporter. And, yeah, what I wanted to show as well is... I probably cannot do that from, well... if I can show my browser, perhaps. Can you see my browser?
D
Oh, come on. All right, so let's see if I can do that from the code editor as well. So let me try to open...
D
All right, so in here I tried to split every step into its own commit. So the first one was to remove the OpenTelemetry Collector command and remove some, you know, the components, the default components, so we don't have a distribution as part of the api anymore. And then each one of those components, one by one, and each step here has all the tests passing. Now, of course, this exercise would have to be redone, because, you know, main has moved on from the stage that I started from.
D
But if we agree that this is what we want to do, then I would, you know, create a ticket with the steps that I performed, and then I can hopefully get help from other folks to take care of those individual commits here.
C
Okay, so we do have to do some other things, like the command and the core components, which we are not touching yet, but, as I mentioned, the other things are already proposed. Yeah.
D
Because, you know, when I moved things from contrib and the collector-api to components, they would just break, because they're using something that is internal to the api now. So those are small things; it's not only about moving, it's also about, you know, deciding what to do with internal dependencies. But yeah, that's mainly what it took for me to get from the current state into this one here. So it's a lot of commits.
D
Well, I have one that is not going to be representative of the future. Oh, come on.
D
And that is, you know, it would be similar to this one here, but instead of having only the OTLP receiver, it's going to repeat this one here for all the components that we have in there.
D
Yeah, you would have to list all the components individually here, yeah. Okay, today, what you can do is you can just switch this flag here, include core, to true, and what it's going to do is call the default components to get, you know, the core components. But we are going to get rid of the default components, so you'd have to list them individually here. Okay.
D
Yes, okay. The whole resolution, or, you know, pretty much everything can be customized: what is the import that we are doing in the Go code, what is the name of the package, and what is the actual path, and so on. Those are all assumed, but they can be overwritten on a per-module basis. And I do assume that they are called NewFactory, I think, yeah.
D
Well, so for the builder, it's, you know, the main code; there are only a couple of changes required to that, mostly on the templates. But for the api and for the components, I did create new repositories under my account, and I linked them in the meeting minutes from this call here.
D
All right, so let me stop sharing. One second... good.
D
Yeah, well, I think that's all from me on that part. So what are the next steps? Do we want to continue with that? Is it good enough, or is it matching what people expect? I would...
D
Yeah, so there are quite a lot of small but breaking steps to achieve, you know, the goal that is listed in the proposal, and it includes things like renaming repositories, or creating new repositories cloned from old repositories. Why? Why...
D
Right, so that's part of the proposal; that's what we should be discussing, and seeing whether we want them or not. So, I think... we can still name the repositories the way that we name them today; the problem is that it's not going to reflect the new reality, right? We want to make clear for people what is the api, what is the distribution, and what are the components. It is all explained somewhere, okay, but the contrib...
C
So, for the contrib, for example, yeah: why is it better to call it components?
D
I think components is a better explanation, or a better description, of what the new repository is going to be.
C
I think there is another contingency here, which is: where do we draw the line between what is api versus what do we leave in the same module as the api? So, for example, to my understanding, I was thinking that OTLP, and probably even the debug or logging exporter, whatever it's called, could be in the same core distribution, or, however you call it, api distribution, or whatever it's called.
C
So I think there is this contingency which we should... so there is the problem of removing the command, the default components and such, which I understand, but then, where the code lives, I think we can discuss a bit more, yeah.
D
So, in my opinion, OTLP would have to belong to the components. It's in the core, or it's in the api right now, because you requested it last week, because, you know, it makes it so much easier to test things in the api itself, and I agree. So I think we need more implementation in there, at least for testing purposes, for end-to-end tests, but ideally, or in theory, it would belong to the components.
C
Okay,
so
so
I
do
okay,
what
I'm
trying
to
say
is
so
far
my
100
percent,
I'm
100
on
you
with
the
we're,
not
building
distributions
from
this
rep
or
building
all
the
distribution
outside.
I
think
we
still
need
to
negotiate
a
bit
where
the
components
and
understand
better
where
the
every
component
leaves
and
and
stuff
like
that.
But
besides
that,
I
think
we
are
good.
Okay,
yeah
I
mean.
D
Those are, again... there are a lot of small steps that would cause breaking changes, and renaming repositories is one of them, right? So as long as we do all of those steps before GA, before v1, we are good. We can do all those individual steps one by one, and we can, you know, have a discussion about the renaming afterwards.
C
The other option, FYI, is: for the current collector, we do have a vanity URL, so there is no need... I don't think we need to rename the current collector to collector-api.
D
Yeah, yeah, I mean, if we think that "collector" is not confusing for whoever is looking at the import, then that's fine. I mean, if we can look at the import and easily see what is part of the api that I need to use to build components, and what is a component that I include in my distribution, then that's fine. If we don't have this ambiguity, or if it is very clear, that's fine by me.
C
Yeah,
so
let's,
let's,
let's
discuss
more
a
bit
of
these
names,
but
the
overall
goal
is
for
me:
it's
is
this
to
to
not
have
the
command
not
have
a
bunch
of
code
in
the
in
the
modules
that
we
have
right
now
and
rely
on
the
builder
for
all
these
things
and
let's
yeah,
as
I
said,
let's
start
by
moving
all
of
them
to
contrib.
Initially,
don't
do
another
repo
and
then
maybe
in
the
in
the.
C
The
other
thing
is,
you
said
that
otlp
is
not
contribute
but,
for
example,
would
would
load
balancer
be
a
component
or
a
country.
I
felt
somehow
that
by
using
component
you
you
put
a
statement
on
a
on
a
piece
of
code
that
is
maintained
by
us
or
by
the
community.
So
I
think
that's
another
thing
that
I
I
didn't
not.
I
don't
know.
C
If
that's
what
we
want
to
do-
or
this
is
the
right
thing
to
do-
but
I
think
like
I
would
like
to
hear
others
initially,
we
thought
that
everything
should
be
in
one
big,
rep
or
as
country,
but
we
have
a
status
on
every
page
name
it
it's
maintained
by
by
the
collector
maintainers
or
by
the
community.
C
That's
another
option.
I
think
I
think
alolita
promised
to
help
us
with
the
with
that
approach,
and
this
may
be
also
be
considered,
and
I
think
part
of
the
the
way
how
we
solve
that
jurassic
will
will
imply
some
of
the
the
like
repositories
that
we
need
or
even
urls,
that
we
need
for
different
things.
D
So, on making a difference between, you know, what is or is not necessarily contrib, or core, or, you know, maintained by us: I think each one of those components would need to state very explicitly, at the very top of the README file, what is the expectation that people should have about that component. So who is supporting it, and what is the level of support, I mean: is it alpha quality, is it production quality, and so on?
D
So
I
think
I
think,
we're
in
line
with
that,
and
then
you
know
I
don't
expect
end
users
to
use
that
information
directly
themselves.
Apart
from
some
very
you
know,
some
some
some
users
were
very
knowledgeable
about
the
collector
itself.
C
I think, okay, so, yeah. In my opinion, we need clarity there, and I don't know if we should jump directly into this. Maybe some kind of one-pager that describes how we would like to structure the code in different repositories, or what the names should be. Because, for example, in contrib we also have things like the packaging, packages that are shared between things: are those components? They are not components, but where do we keep those?
C
So
so
there
are
a
bunch
of
questions
that
we
need
to
answer
for,
for
if
we
want
to
rename
things,
I
think
we
should
rethink
a
bit
or
we
should
think
overall,
the
entire
picture,
not
not
just
components,
make
sense.
D
Yeah, so I tried to capture all those questions, or those points, here in the meeting minutes, and I'm going to create issues first to discuss each one of those points. But take a look and see if there's anything that I missed. I think I did capture everything, though.
C
But, as I said also, we are actively working; 80 percent of what you need is going to happen no matter what. So that's the good thing: the wheel is still rotating, and we're going to have some progress no matter what.
I
Yep, so hi everyone. At New Relic, we were trying to research the behavior of the OTLP receiver in the presence of a high-latency link.
Some
of
our
customers
are
operating
in
a
different
region,
and
so
we
wanted
to
make
sure
that
that
was
performant.
So
I'll
share
some
of
our
observations
here
and
try
to
get
some
opinions
on
what
was
going
on.
But
this
is
a
packet
capture
over
a
high
latency
link,
and
this
is
the
h2
frames
that
were
coming
across
for
the
grpc
transaction
over
otlp.
I
One of the things of note, though, is that one of the settings that got sent back in the initial handshake is this max frame size of 16k. So this is the H2 frame size. If you look at each one of these data frames that's coming through, they're all approximately 16k, and our average payload sizes for OTLP, uncompressed, are 300 to 600k, which means there are going to be a lot of these data frames coming through.
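As a rough back-of-the-envelope check on the numbers just quoted: a 600 KB payload cut into 16 KB HTTP/2 DATA frames is about 600/16 ≈ 38 frames per request, so any per-frame cost on a high-latency link is paid roughly 38 times per export.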
I
And each time these data frames come through... I'm not sure if there's any layer-7 communication, but it definitely has to hand off to the application. We looked into tuning this frame size, and it turns out that grpc-go hard-codes this frame size to 16k. So there are a couple of approaches: we can change grpc-go to allow for the tuning of this frame size, or we can explore potentially using another library to serve the OTLP traffic.
I
I don't know if this is easy to see, but... to bind the gRPC endpoints into the HTTP mux. So this actually has a second benefit, which is allowing the OTLP receiver to use a shared port. The initial specification called for serving both HTTP and gRPC on port 4317, and there was some discussion, which I read through on this thread, about some technical difficulties that were preventing this from occurring.
C
Okay, I will say the following: gRPC has its own protocol on top of HTTP/2, which means that just mapping the handler and using the generated code is not going to work for you. You need to send some metadata and some other things that gRPC relies on in order to be a gRPC implementation.
H
But, by the way, why is this only a problem for OTLP? I mean, this issue has been in grpc-go for the last five years, since gRPC came out.
H
I think they put in some sort of workaround for people who want to set the frame size explicitly. Is there such an API? You said that it's hard-coded; as far as I remember, they were thinking about allowing people to at least set the frame size explicitly, to find a value that works for their case. It's not, like, you know, an adaptive thing, but we can take a look.
I
Yeah, so it is hard-coded. I'll try and pull up the code in just a bit, but I just wanted to mention that we're still using the gRPC library here: the gRPC library actually exposes an interface which allows you to bind the gRPC handler into an HTTP mux. It's an experimental interface.
I
Yeah, I did. It looks like there's some other library that Josh was exploring, grpc-to-HTTP; this is a different approach. So I tested it; I mean, the code that I put together is just hacked together, just to test it, and it seemed to be okay.
I
Yes, so the frame size, the H2 frame size: SetMaxReadFrameSize. This framer exposes a public interface to actually tune the frame size, whereas this value is hard-coded inside of gRPC.
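For reference, the knob being pointed at lives in golang.org/x/net/http2; a minimal sketch of serving with a larger read-frame size (the 1 MiB value and the server setup are arbitrary illustrations, not a recommendation):

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":4318", Handler: http.DefaultServeMux}

	// golang.org/x/net/http2 exposes the setting that grpc-go hard-codes:
	// Server.MaxReadFrameSize (and, lower down, Framer.SetMaxReadFrameSize).
	_ = http2.ConfigureServer(srv, &http2.Server{
		MaxReadFrameSize: 1 << 20, // 1 MiB; arbitrary for illustration
	})

	_ = srv.ListenAndServeTLS("cert.pem", "key.pem")
}
```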
C
Sorry for my ignorance, but is there not a property on the server? Because this is on the server, or on the client? You have this on the server, correct? Yeah. So let me see if I can find it... there is a ReadBufferSize and a WriteBufferSize in gRPC.
I
I believe this is the code that deals with it. Yeah, so here's the value which is hard-coded in gRPC; I'll paste the link to this code.
I
So, I mean, this is kind of a smallish contributor to the overall latency on a highly latent link, but it does contribute: we saw an extra 50 to 100 milliseconds, I think, as a result of the reduced frame size. Looking at the PR that actually implemented this max length, they claimed that there was some sort of security issue, but I don't know what the details are; there's no information on the actual details of the security implications of the framing. Okay.
C
So, as I said, I will try to split this into two independent problems. First, I think we have some friends at Google that can push this and can help us, maybe talk to the gRPC team to figure out if this can be made tunable for us, and it would be good to have that anyway.
C
Yeah, I believe that's correct, but I would still prefer if we get them to make it non-experimental, sure.
A
But we were burning a lot of engineer time trying to make it happen. I don't really know what exactly was wrong, but it was something about the h2c upgrade, and it was just unreliable; we have lots of doubts about it. So if this were supported by the community, by Google, by OpenTelemetry, we'd love to have a single port, and the solutions in that ticket are great, but we don't trust it at the current level of support.
I
Yeah, that sounds good. Maybe the next step for me is I'll try and harden that implementation around using the HTTP/2 server mux and open a PR for discussion, perhaps.
A
I can try, but ultimately we gave up, because, you know, adding a plain gRPC port, to take plain gRPC traffic, as opposed to taking plain-text HTTP and doing this weird upgrade, just fixed our problem. So we never understood the problem; it was purely one of too many hours, where too many days were sunk into trying to make it work.
I
Yeah, yeah, that's fine. But if people are interested, I guess I can open that as a draft, maybe in the next week, and we can discuss.
C
Thank you. I mean, again, you may want to give it a try: initially it's not necessary to split the ports, but change gRPC to use a normal HTTP/2 server instead of their own hard-coded server.
C
Okay, let's see how much change that would be, because there are a bunch of things in gRPC that we rely on, including things like buffer sizes, like compression and stuff like that, which gRPC comes with standalone, and this could become a problem. In the meantime, Alan, if it's not too much: if there is no issue for the max frame size, maybe we can open an issue on gRPC and ask them to allow us to pass that as an option to the new server.
C
And the other issue that you may file on gRPC is to ask them if they can make that interface, the handler interface, stable for us.
I
Yep, that sounds reasonable. So, yeah: gRPC, asking for a frame-size API, and then gRPC, asking for stabilization of the handler interface.
C
You can ping Punya on Slack to get them to look into this and see what their comments are.
G
Yeah, hi, me, Punya. I'm okay; I'm a Google person. And, Alan, because, you know, I don't necessarily understand the technical issues you've raised in full depth, this may end up being something where we facilitate a call between you and someone on the gRPC team. So if that happens, I'll check in with you on time zones and such.
H
But I can suggest, like, if that doesn't... I mean, I was about to actually reach out to Brad Fitzpatrick, who has a lot of insights on this, but he left Google. Let me know if you can't hear anything from them, or, like, if they don't care about it, and I'll reach out for you.
C
I think the next one is Anthony.
J
Yeah, so I just wanted to circle back on the versioning policy proposal PR that is out. I think the changes I pushed up last night will address almost the last of the outstanding comments, regarding struct tags and ensuring that they're identified as part of the public API for configuration.
J
So I think we have the option to either remain at point zero for however long it takes us to get comfortable that those elements are stable, which, with logs, it looks like might be sometime next year, or we separate them into distinct modules.
C
Yeah, so on metrics we're making huge progress, thanks to Alex from Lightstep. I expect this month, or early next month, to be on 0.9, which means stability, at least on the pdata and such. For logs, I don't know.
C
My question, in my initial proposal (and let's maybe merge the document with this initial proposal, if everyone is fine with this), is to stay at a zero version for the moment and just mark everything that is stable with clear documentation that this API is stable, and we will apply the rules as if it were 1.0.
J
Yeah, I think that's reasonable. It's going to require a bit of diligence on our part, but I think the work that we've already done, in terms of adding CI checks for backwards-incompatible changes using the apidiff mechanism, should help us try to enforce that. Yes.
C
I mean, the tool should start failing, and we should manually check that that failure is acceptable to us because it's in an unstable component. So we should treat everything as 1.0, even though we don't put 1.0, when it comes to the tools; that's my proposal. And then be very worried whenever we see that tool failing: double-check two times that we are actually breaking what is expected to break. Again, I propose this for the next couple of months, until we understand better, and we can revisit, probably in September, when we are going to have much more code stable.
C
That's what I propose to do, if that makes sense for everyone.
D
I just wanted to share that we have a similar, or related, problem at Red Hat, because we have the distributed tracing team having OpenTelemetry as, you know, a key component for the tracing pipeline, and we have all our engineers working on the tracing side of things. So we have to tell our customers today that, you know, we do not support the metrics or logs pipelines, and we don't change the collector in any way, of course, but it's a weird situation that we're in.
D
We have everything there, but we have to tell them very explicitly that only the tracing part is stable. And it gets even more complicated because the OTLP receiver and exporter have the three signals as part of one component; so we say that, you know, only one part of this receiver is supported, and it's very hard, you know, to communicate this kind of thing and to set the correct expectations for our end users.
C
If you split by signals, the span-to-metrics processor would not be able to exist; you would not be able to have something like that.
D
Yeah, I'm not quite sure what the solution for that is; I'm just exposing that, you know, we have this difficulty. And it's not only... sorry, who was that again? I'm sorry, I missed that. But it's not only a problem on one side, right? So we have other...
D
For internal reasons. I mean, it could be because it is not stable, or I don't know, but there is one team here that is focused on the tracing aspects of it, and we want to support it, even as a tech preview, but we have bandwidth only for the tracing side of things.
C
Jurassic-
I
I
see
your
point,
I
I
don't
have
a
good
answer
right
now.
I
think
there
has
to
be
a
bigger
discussion
than
between
me
and
you.
D
Yeah, I was just going to say that, you know, perhaps there isn't a technical solution for that; perhaps there is a communication problem that we can address somehow. Perhaps, I don't know, a way for us to log things to the standard output, saying, I don't know, "the expectation is that only this one here is supported", or, you know... I don't know. I'm not saying that there has to be a split, or that there has to be a technical solution.
J
So can I propose that, if there are no concerns with the versioning policy outside of this question of what we do with metrics and logs in terms of a single module, we land this as it is, and then revisit whether we want to split them, or change the policy, at a later date? Yeah, because I think it's reflective of what our current thinking is, and it just highlights this problem, that we do have a communications problem.
D
But I can live with that part of the problem. I mean, if we have that kind of level as the baseline, then I'm, you know, more comfortable in supporting the OpenTelemetry Collector, knowing that I can get your support on the logs and metrics side of things. So, if my Red Hat customers are using that, I'm sure that, you know, if there is a performance problem caused by the logging receiver that is affecting the tracing part of things, then I can get your help...
D
...you know, the community's help, to solve that, instead of just getting a, you know, "it's experimental". If metrics is stable, so metrics...
C
What about logs, then? I mean... oh, logs, logs, yeah, logs. I need to ping Tigran; he's the owner of that thing, so I need to ping him to make progress on that. But yes, logs is the last one where we need to do stabilization on the protocol. For metrics and traces, as I said, we are stable, and we are making great progress on adopting the latest changes from OTLP, so then we'll be stable on metrics on our side as well.
J
I think that the proposal to split up components from the api may help with this as well, because then, Juraci, you can say: here are the components that we will support, you know, they're the ones that deal with traces; here's a collector build that includes the components that we support; you want to go outside of that, have fun, the community is there.
C
Juraci, you may only need a very small change, and that change is to split just the factories, because you don't care if the code is there; you care only about the distribution. And if you are able to build a distribution and choose which factory to put there, to make it available to the distribution... So imagine, for example, that we have a metrics factory, a traces factory, a logs factory, and we have a factory that is all of them.
C
So then, if the convention is NewFactory, NewTracesFactory, or however we do the conventions there... what I'm trying to say is: I think you will be able, from the builder, to choose which factory to include, include the factory for traces or for logs, and then you can build your own distribution without metrics factories, which means you cannot build metrics components. And that's it.
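A hypothetical sketch of that per-signal factory split; none of these names are the collector's actual API as discussed, they only illustrate letting the builder link in, say, the traces factory alone:

```go
package otlpreceiver

// Hypothetical per-signal factory split: a builder-generated
// distribution imports only the factories for the signals it supports.

type Factory interface {
	Type() string
}

type tracesFactory struct{}

func (tracesFactory) Type() string { return "otlp/traces" }

type metricsFactory struct{}

func (metricsFactory) Type() string { return "otlp/metrics" }

// NewTracesFactory is what a traces-only distribution would link in.
func NewTracesFactory() Factory { return tracesFactory{} }

// NewMetricsFactory would simply be omitted from that distribution,
// making it impossible to configure metrics components at all.
func NewMetricsFactory() Factory { return metricsFactory{} }

// NewFactory bundles every signal, for distributions that want it all.
func NewFactory() []Factory {
	return []Factory{NewTracesFactory(), NewMetricsFactory()}
}
```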
G
I have one sentence that I can say, but I'll write an issue for other people. The one sentence is: we can use git tricks to keep the history. The main thing is we have to allow ourselves to use merge, as opposed to squash-and-merge, for those pull requests. That's it.
C
Okay, if you want to be an owner of this issue, I'm more than happy to give you whatever permissions you need to handle all of these things.
G
Okay, I'll write up a doc, and maybe what I'll do is one sample, one example, just as a demonstration; if people are happy with that, we can move forward.
C
Perfect, thank you so much, Punya, and ping me on Slack if you need any permissions or anything from git and such. And also, the last comment: I think we are done, because we are out of time. Thanks, everyone, and thank you, Anthony.