From YouTube: 2022-09-20 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Yeah, over the weekend I put together this PR. It's not really a PR in the sense that I expected it to be merged; I just wanted to get my thoughts down on what I'm thinking about custom messages, and the easiest way for me to express that at the time, when I was just sketching this out in code, was to modify the protobufs.
B: Yeah, so I wonder if that's what we want to do. That's the number one question I have: if we do this, do we make it impossible to have communication where, let's say, there is a stream of messages in one direction and no responses at all? Or is that possible, and you can just ignore the responses, or should the responses be empty?
A: So the response (and I struggled a little bit with the language, whether it should be a reply or an acknowledgement or something) is really meant to be more of an acknowledgement. Because they are custom messages and they just have a message type, it's quite likely that the recipient doesn't know what to do with a given message, and I think it's useful for the sender to know that the recipient didn't know what to do with that message.
A: So that's why it responds with a status of either ignored, OK, or an error. If it is truly request/response, where the sender is expecting a payload in response, then there would be a separate custom message carrying that payload. The custom message response isn't meant to carry a response payload; it's just meant to carry some sort of acknowledgement that the message was ignored, was successfully processed, or produced an error.
A: So in terms of request/response-style communication, that would still be up to the custom message exchange to establish. If it were streaming, you might just have a bunch of messages sent, and the recipient just responding OK to all of them.
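The acknowledgement-only shape being discussed could be sketched roughly like this. This is a hypothetical illustration, not the protobufs from the PR; all field names, field numbers, and enum values here are invented:

```protobuf
// Hypothetical sketch of a custom message and its acknowledgement.
message CustomMessage {
  string type = 1;  // app-defined message type the recipient dispatches on
  bytes data = 2;   // opaque payload; may be empty (e.g. a "pause" message)
}

enum CustomMessageStatus {
  CUSTOM_MESSAGE_STATUS_OK = 0;       // recipient processed the message
  CUSTOM_MESSAGE_STATUS_IGNORED = 1;  // recipient had no handler for this type
  CUSTOM_MESSAGE_STATUS_ERROR = 2;    // recipient failed while processing it
}

message CustomMessageResponse {
  CustomMessageStatus status = 1;  // acknowledgement only, never a payload
}
```

A true request/response exchange would, per the discussion, be a second CustomMessage flowing the other way, not a payload inside the response.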
B: Yeah, okay, so you're saying (sorry, Sean) it's okay for the recipient to essentially ignore a request and not send anything back? Or, let's say it this way: if there is no handler for a custom message, the response is going to say that it was ignored?
C: Yeah, yes, thank you. I think I like the acknowledgement, the reply with a status to any given message. I think that's good, and it definitely helps answer some questions I had around what happens if a different implementation of the OpAMP client were connected and communicating with the server, sending custom messages. So I think that checks that box. What I was wondering about is the last custom message hash: is that the right way of identifying which custom message we're replying to?
B: I had the exact same question, actually. The thing is, if you're sending two requests that for some reason happen to be carrying the exact same payload, which is a possibility, then the hash is going to be the same, and you won't know which response corresponds to which request. So I guess an alternative here would be to have sequential IDs.
B: Every request is automatically assigned a sequence ID, and the response just uses the same sequence ID, which you can return as an opaque token to the caller on the client side. Let's say you send the custom message and get back a token. If you want to wait for the response on that particular one, you can just store it in some sort of map, and the responses will reference the same token. So you don't really need to know.
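The correlation scheme described here (assign a sequence ID, hand it back to the caller as an opaque token, and park waiting callers in a map) might look like this minimal Go sketch. All names are invented for illustration; this is not the opamp-go API:

```go
package main

import "fmt"

// Token is the opaque handle returned to the caller; internally it is
// just the sequence ID assigned to the outgoing custom message.
type Token uint64

type Sender struct {
	lastID  Token
	pending map[Token]chan string // waiting callers, keyed by sequence ID
}

func NewSender() *Sender {
	return &Sender{pending: make(map[Token]chan string)}
}

// Send assigns the next sequence ID, records a pending entry, and returns
// the ID as an opaque token. A real client would transmit the payload
// with this sequence ID attached.
func (s *Sender) Send(payload string) Token {
	s.lastID++
	s.pending[s.lastID] = make(chan string, 1)
	return s.lastID
}

// OnResponse routes an acknowledgement carrying the same sequence ID to
// whoever holds the matching token.
func (s *Sender) OnResponse(id Token, status string) {
	if ch, ok := s.pending[id]; ok {
		ch <- status
	}
}

// Wait blocks until the response for the given token arrives.
func (s *Sender) Wait(t Token) string {
	ch := s.pending[t]
	status := <-ch
	delete(s.pending, t)
	return status
}

func main() {
	s := NewSender()
	// Two requests with identical payloads still get distinct tokens,
	// which a content hash could not guarantee.
	a := s.Send("same payload")
	b := s.Send("same payload")
	s.OnResponse(b, "ignored")
	s.OnResponse(a, "ok")
	fmt.Println(s.Wait(a), s.Wait(b)) // ok ignored
}
```

This single-goroutine sketch omits locking; a concurrent client would guard the map with a mutex.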
A: Yep, we do have a sequence ID in the proto. We got rid of the other hash, which had a different use case (compression), but this one was again meant to correspond to the remote message. But I think the basic idea is that with WebSocket communication we can just keep firing away messages and eventually get these acknowledgements back, and understanding how they're correlated could be a challenge.
C: And it avoids the situation of confusion, yeah, I agree. May I ask about the use cases? Are you thinking around specific use cases or applications of custom metrics or custom messages right now?
A: So the idea is not that we're trying to be an OTLP receiver here, or endpoint, I guess I should say, but we want to grab some recent telemetry when requested. It's a bit of a hack right now: we send down a configuration with a separate configuration key that says to retrieve some recent telemetry, and it doesn't get returned as a message on the WebSocket.
A: It actually gets sent via an HTTP POST to an OTLP endpoint in our management server, BindPlane. That's how we receive, say, just the last hundred log lines or last hundred metrics, some recent telemetry, so that you can reason about: is data flowing, and what does that data generally look like?
A: What kinds of attributes are in there, and so on. So rather than have this config hack, where we send down a fake configuration that just instructs our custom agent to gather this telemetry and then report it over an HTTP POST, it would be great for us to send down a custom message saying "give us the recent telemetry" and have the agent send back a custom message saying "here's some recent telemetry."
A: We're also planning, in the next couple of weeks, to implement a kind of pause/resume feature, and I think the way we're going to do that is to actually send down a no-op config, and then on resume send back the real config. So the agent is oblivious to this idea of pausing and resuming; it's just a management-specific capability. But we had considered:
A: What if we just sent down a pause message to the agent? In that case there wouldn't even be a body or data fields, but the agent would then know what "pause" meant. There are scenarios where somebody might be running some tests in a cluster and they want all the logs and all the data, everything they can get out of there, but once they've finished running their tests, if they're benchmarking or whatever, they want to stop collecting; they want those agents to basically pause.
A: So that's another specific example. I think the first one is a little bit more compelling for this particular use case.
B: Yeah, I agree. Well, both of the use cases you described sounded like some sort of commands going from the server to the client, and we do have a message called ServerToAgentCommand. So I wonder: the restart, or pause, or retrieve-latest-telemetry, or stuff like that, arguably that's kind of a command, I guess. So I wonder...
B: The difficulty there is that the command is limited to one of the predefined values; it has to be one of those.
B: I don't know, yeah. If we need to go beyond that, then it becomes sort of a custom command, in which case I guess you'd just ask why it is a custom command rather than a custom message. But I think it would be very useful to somehow add these use cases that you have as the motivation for the change, so that it's clear why we're doing this.
C: Exactly, but the acknowledgement design around it, I think, lessened my fears around that kind of outcome. So yeah, I think detailing these kinds of use cases, and explaining why we can't leverage the command for some of them, would grease the wheels.
B: I guess the reality is that, if we're being very honest, it's none of our business to do this in the OpAMP protocol. If you go for purity of the solution, you would say that it needs to be a separate communication channel. The reality, though, is that for performance reasons, because connections may be expensive (WebSocket connections in particular), this may be a necessity. So this may be the pragmatic, realistic solution: we're essentially enveloping a communication channel inside of OpAMP.
B: That's what we're really doing. We probably shouldn't be doing it, but the justification here is that if we don't, everybody is going to come up with their own second connection, because the need is there, and that doubles the cost of the WebSockets, which we know are expensive because you keep them open. So to me it's justified to have this, provided that we clearly explain the use cases, that the use cases are there.
B: Otherwise, I definitely wouldn't want to be in the situation of saying that we invented something without knowing that it's really necessary and that somebody's going to use it. Obviously you do, but I think it's important that the spec, or somewhere, reflects why this is part of the specification.
A: Yeah, I hesitated, or I still do hesitate, about using the recent telemetry as an example. It is a good example, but there will certainly be a lot of hesitation about sending telemetry over a management protocol.
C: It's similar to... immediately my mind went to an area that I'm looking at right now, which is: I want the server to instruct the agent, "hey, I want to understand more context around the system that you're running on, from a management perspective. Give me your process tree; give me some facts about the system." I want to capture that and then provide guidance and suggestions around the remote config through this protocol. So, very similar.
B: Okay, so can we do the following? Maybe add the other use cases there in the PR, whether this PR or the PR against the spec. We can discuss the use cases and see if they really make sense, or whether we need other use cases there as well; clarify the communication styles that are possible here (just request/response, or whether you can do streaming if necessary, and how you do that); and the third was the hashes versus the sequence IDs, I believe.
A: I like the discovery use case. If you don't mind me taking that one, I will; I think that's a good use case.
B: Well, you described two use cases, right? Whatever you described, just add those, plus any others you have in mind, and then John's use case. Just list them all, run with it, and we'll take it from there.
A: No problem; I'm glad you'll have a use for it as well. And I do think, to the point you had mentioned about being able to use that connection, which is not cheap: we actually had a live-tail kind of implementation as well, where we always opened up another WebSocket, and it just felt terrible doing that.
A: I honestly really struggled to find a lot of examples of compatibility problems. A couple of interesting things I discovered: one is that Chrome does not do message fragmentation as a client, so there's a strict one-megabyte message size limit and a message will not be broken into frames, which seems like a problem on the receiving end, though a lot of WebSocket servers don't have limits either.
A: The particular limit I found was in AWS API Gateway. An old version of BindPlane, before we moved to GCP, actually lived behind API Gateway and used WebSockets, and we never ran into this, because we were just passing tiny messages back and forth. But it was interesting to realize that this limit exists.
A: I guess the one area I struggle with is how much... it almost seems like your choice is then to use HTTP, and maybe that is one of the reasons you would move in that direction: that WebSockets are not going to work, given the particular WebSocket implementations you're working with. I'm not sure how much of this we need to address in the spec.
B: I think we should probably make a recommendation at least, even if it's not a hard requirement, that clients need to limit the frame size to something like 32 kilobytes. And I quickly looked at the WebSocket implementation that we use.
B: On the sending side, it looks like there was a check: if it was the server sending, it would limit it to two times the size of the buffer, which would only be about eight K; otherwise messages are sent as a single frame. But if you send much larger payloads, it bypasses the buffer accumulation and sends the payload as-is; it doesn't try to chop it up inside a single frame. So from our implementation we can actually do the slicing ourselves, and if we send the data in smaller chunks, it will send them out as individual frames. So we can actually do this, knowing how Gorilla WebSocket is implemented.
B: We can adjust our sending logic to make sure that it actually stays under the frame size that we want to have. That kind of exploits the current implementation, so it's not a very good thing to do; maybe it needs to be filed as an issue against Gorilla WebSocket, to have this as a public option, a control that you could use. But for now it will probably work. I'm inclined to say that this probably needs to be somehow specified in the spec.
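The sender-side slicing being described could be sketched like this: a minimal illustration of chunking a payload before handing each piece to the WebSocket library as its own frame, using the 32-kilobyte figure from the discussion. The function name is invented, and the real fix would ideally live inside (or be exposed by) the WebSocket library itself:

```go
package main

import "fmt"

// splitIntoChunks slices a message payload into pieces no larger than max
// bytes, so each piece can be handed to the WebSocket library as its own
// frame instead of one oversized frame.
func splitIntoChunks(payload []byte, max int) [][]byte {
	var chunks [][]byte
	for len(payload) > max {
		chunks = append(chunks, payload[:max])
		payload = payload[max:]
	}
	return append(chunks, payload)
}

func main() {
	msg := make([]byte, 100_000) // e.g. a large effective-config message
	chunks := splitIntoChunks(msg, 32*1024)
	fmt.Println(len(chunks)) // 100,000 bytes at 32 KiB per frame -> 4 frames
}
```

Note the chunks share the original backing array, so this adds no copying; the library's own framing then writes each chunk as a separate frame.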
A: And there is (I linked to it, but I didn't mention it here) also a 128K max message size in API Gateway, so you get four frames of max size, basically. So it's not just a slicing issue; it's also potentially a max message size issue.
A: I did come across a few other discussions about this, and there was a lot of discouraging people from implementing their own chunking on top of WebSockets.
B: But if it's limited to that, then what do you do? I mean, it's a weirdly small limitation. I don't know why they would limit it to something like 128 kilobytes per message, and why does it even care? If it's a gateway, it should just be forwarding the frames as it receives them. Why does it even limit the message size? I don't understand that.
A: Yeah, I don't know if there's... I don't have any good data on that right now. That's interesting; it's a good question. I've just seen some that are much larger than I would have expected.
A: Config, and managing config, because...
C: Okay, but I thought I'd ask. Yeah, it'd be interesting to see if that's a metric or number that could be found.
A: So if this is compressed, that helps, but it's not clear.
C: All righty, okay. So I'm next on the list. I just wanted to bring up the topic of a rough idea: the viability of an upstream OpAMP config provider. Let me share my screen, if that's okay. This is just a proof of concept, a work in progress. I threw it together because I wanted to make the lightest, thinnest OpAMP agent implementation, make it as generic as possible, and wire it into the internal reload capabilities that a configuration provider provides.

C: So literally, I just grabbed the OpAMP agent example and then took some ideas (probably, Andy, from your agent as well), and I actually used BindPlane as the PoC back-end for this.
C: And there are a few things, like how BindPlane does authentication with the key, and a few required fields or attributes (identifying, non-identifying), so those things kind of squeezed out of it. But what I thought was interesting is that it was quite easy, at least conceptually, to get it going.
C: But it's interesting. There was some difference between implementations around authentication and authorization, and the other part was the state management and the flow: did the server expect the UID, the agent ID, to be persistent across reloads?
C: Did it expect a number of fields to describe it, or would it fail out? So there are a few cases. I was just kind of testing; I haven't put together any form of report of my findings or anything. This is still quite rough and raw, but I just wanted to put it out there as a topic, as an idea. I'm curious whether anyone else has tried this or is trying it.
B: I mean, this is how we thought initially we would be implementing this, so this is very much in line with the original intent. I guess once we decided that we were probably going to go with a supervisor, that's when I stopped pressing in this particular direction.
C: I'm working with my colleague Justin Kohlberg on this, and we have a new flow where we receive the config from the server and then run it through our own validation before persisting that effective config to our state management and triggering the reload. Because in the case where we're using BindPlane, BindPlane says "use this Prometheus exporter," and then the Sumo Logic distro doesn't have that, or not that particular version, and it exploded. So trying to guard against that is interesting.
C: So we have further experiments to exercise there. That was a big thing, and it led me back to our earlier conversations on the supervisor pattern. But there's something (I don't know, "special" is not the right word) about how lightweight this is, and about not introducing another entity or thing.
B: So if you wanted to use OpAMP without the configuration portion, which is completely valid (maybe you just use it for status reporting), then I would say this lightweight approach is probably preferable. You don't need a supervisor for that: if you're not receiving a config, you're not reloading the agent, and there's no danger of it crashing or anything like that. You just use it to report that the agent is fine, it's healthy.
C: Yes, yeah, I think I agree. It was interesting to... wait, maybe I can't even show it; it's kind of funny. Sorry, I should be more prepared for this; I just wanted to share it earlier on. But one workaround is, I was fixing the value of... where is it... oh yeah, anyway: BindPlane is interesting, I'm using it as an example, in that it requires a specific version of OpAMP to use, and also, yeah.
C: So this is my earlier comment around certain headers you had to present or it just wouldn't work, and working around that. So, to make this a generic agent implementation, these kinds of things need to be manipulated in some way; they need to be configurable, even if it's just for the "I'm just going to report my current config" use case.
C: Still required, and...
A: I can speak to that a bit; it's pretty funny. The protobufs have changed quite a few times, between 0.2 and 0.3, and again, and every time there's an incompatible change, we need the same version of OpAMP in the client and the server, and that presents some challenges.
A: When you install a bunch of clients, a bunch of agents, out in the field and then want to be able to manage them, and then suddenly you upgrade the server and it can no longer talk to any agents. So we have this validation we do on the version, but yeah, there's...
C: ...versus, like, using OpAMP client capabilities, or some way of broadcasting those capabilities.
A: It's not intended to be backwards compatible, because we're in an alpha state, and so it's fine. We're excited to get to stable; that's the march we've been on, getting OpAMP stable, and then we'll update to 1.0 in both our client and in BindPlane.
B: Once we stop making breaking changes, you should hopefully never need this anymore. I hope. But I think this is a great exercise. I mean, it's one of the goals that any arbitrary implementation of an OpAMP server should be interoperable with any client-side implementation, and if it's not, if there is a need for more, then I guess the question is: do we need more clarifications in the spec? What is it that we're missing? The authentication, for example?
B: Does it need to be part of the spec, or is it assumed to be part of the configuration of the OpAMP client and the OpAMP server, so that it becomes the responsibility of the end users and operators? So I think it would be great, Sean, if, as you observe those things, you could file issues against the OpAMP spec, and then we decide whether each of them needs to be defined in a specific location.
B: If it's missing in the spec, then we add it to the spec, right? But I think it's great; I'm very happy to see that you're trying this. It's natural that we missed some things, so this is a great exercise to uncover them, as you said.
A: I appreciate you digging into BindPlane a bit. We're still at the beginning of our journey of supporting multiple agents. Obviously there are not many agents right now that speak OpAMP, so understanding what that whole compatibility matrix looks like is going to be a challenge, and to your point about sending down configs that don't work in certain agents...
C: Well, it's a relative problem, yes. And I'm enjoying it, too; it was fun just to hack together an agent that would talk to it. So yeah, I will continue on these experiments, and I'll do a better job of creating a report of findings and reflecting that in GitHub issues against both BindPlane and the proto, or even the Go implementation of the proto.
B: Thanks a lot for doing this. The more I think about it, the more I'm inclined to say that, yes, I think we need something like this in the collector's core, or in contrib, or wherever, probably limited to just the reporting part, so that you don't actually receive a remote configuration.
B: But you can report the health, you can report the effective configuration, stuff like that, which is not risky: there's no risk of crashing the collector or doing anything like that, and it's still quite valuable. And if we can, by default, out of the box, make it work with BindPlane, for example, that's a great demonstration of the capabilities of OpAMP. The remote configuration part I definitely would not advocate for having in the collector out of the box.
C: So how would I go about approaching making this minimal one and getting it upstream and into core?
B: Let's do this: let's clarify the things on our end with OpAMP, whether anything needs to be changed in the spec, or in BindPlane, for example, to make a demo more compelling. Once that is settled, I think we can just go to the Collector SIG and make a proposal and tell them... I mean, you can show them; you have, I guess, a working implementation.
B: Right, and we can show it to the Collector SIG and tell them here's what we think we would like to do and why we think it's a good idea to do this, in addition to the idea of having a supervisor, a more full-blown implementation, and then we'll see where that goes. If the Collector SIG agrees to have it, then it shouldn't be a problem to abstract: if you have the implementation, we just make it an extension, a config source, in the collector.
C: Wonderful, all right. That helps, and it makes things more clear for me. All right, that's it for my topic. Thank you.
B: Great, thank you. I have the last one there: someone tried OpAMP with Kotlin, or Java, or both, I guess, and they found that the bit fields that we declare as enums are actually impossible to use in Kotlin, because it doesn't allow you to specify values which are not predefined in the enumeration, which is what you have to do if you're using multiple bits.
B: So I guess the simple solution there is to use integer fields instead of enums in the message declarations. We would still keep the enums so that it's clear what the bit definitions are, but in the messages themselves I think we need to use integers instead of enums, because the code just doesn't compile in the languages where they are more strongly typed, apparently. It's not a problem in Go, but it seems like it's a problem in other languages, so I don't know; that's probably the only possible solution.
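The integer-field workaround amounts to keeping named bit constants while typing the message field as a plain integer, roughly like this Go sketch. The capability names here are invented for illustration; the real constants live in the OpAMP proto:

```go
package main

import "fmt"

// Bit definitions stay as named constants (the role the enum used to play)...
const (
	CapabilityReportsStatus          uint64 = 1 << 0
	CapabilityAcceptsRemoteConfig    uint64 = 1 << 1
	CapabilityReportsEffectiveConfig uint64 = 1 << 2
)

func main() {
	// ...but the message field holding the combination is a plain integer,
	// so an ORed value that matches no single enum member is still legal
	// in strictly typed generated code (the Kotlin problem above).
	capabilities := CapabilityReportsStatus | CapabilityReportsEffectiveConfig

	fmt.Println(capabilities)                                    // 5
	fmt.Println(capabilities&CapabilityAcceptsRemoteConfig != 0) // false
}
```

Checking a capability is then a bitwise AND against the constant, exactly as with the enum-typed field, but without requiring the generated type to accept out-of-enum values.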
A: Yeah, I mean, it's certainly common. I just came across a C++ implementation where they recommend this, actually changing it from an int32 to an enum, so it was more strongly typed in that C++ library. So clearly they're not interoperating with Kotlin.
B: Okay, cool. That's the end of the agenda in the document that we have. I think the other thing is the open PR that, Peter, you did, the changes clarifying agent versus client. I saw that you updated the PR; I didn't have a chance to take a look at the changes myself. I will.
I
will.
A
Okay,
thank
you
yes,
so
I.
B: I accepted all the recommendations. I think the one thing left for me to do is to merge some changes from main, because my changes are now incompatible with... oh, did we manage to create...? I think, at least at one point, I saw it, so I think it's there; I need to double-check. Okay, yeah, easily.
C: I'm going to share a story: I was trying to explain how the OTel Collector had the OpAMP agent that implemented the client, and yeah. No, I just want to say I appreciate these changes and clarifications in the doc, because it definitely makes it easier to communicate this to engineering teams.
B: Cool. Anything else, anyone?
A: Just trying to understand this bit fields change: this is just the spec, and then you're saying there's going to be a bunch of changes in the Go library to accommodate this?