From YouTube: 2022-05-31 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
Okay, I think we can start. Let's see the agenda. So, 0.3 of the spec: yeah, I did the release after the last set of changes that we had. I think the spec is in a good enough state to have a release, and the Go implementation currently matches what we have in the spec, so I think it was useful to mark it with a version number. There are a couple of PRs pending, or one; I think one is already merged there.
A
So I put the link there for the reviewers. I think, yeah, the end is here; tremec is not here. When you have time, please do have a look at that. I did some refactoring of the callbacks there, so look at it when you have time. The next one is actually the issue that you created, the connection-settings-related stuff.
A
Okay, sounds good, thank you. Cool, so the next one: we have an open issue about basic health. It wasn't very clear to me whether we need it or not, but once I started working on the supervisor, I found that there are situations where, without a way to report the general health of the agent, it's a bit difficult to understand what to do when failures happen, particularly with the supervisor. For example, you receive a configuration and it looks fine, I guess you parse it and that looks good, you write it to a file.
A
You restart the agent, the agent starts, but then it fails at some point, right? So it's hard to tell whether it's because of the configuration or because of something else. We have a way to report configuration errors, but at this stage you don't really know whether it was because of the configuration, and so that's how you report it, or whether it's something else. Especially with the supervisor model, right? If it's all in-memory stuff, maybe you have more information and you know what to do there.
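The supervisor flow being described might look roughly like the sketch below. This is only an illustration of the ambiguity; the helpers (applyRemoteConfig, restartAgent, and so on) are hypothetical names, not part of the spec or the Go implementation.

```go
// Hypothetical sketch of the supervisor flow discussed above: the config
// parses and is written to disk successfully, but the agent may still fail
// after restart, and the supervisor cannot tell whether the config caused it.
package main

import (
	"fmt"
	"os"
)

// applyRemoteConfig is a hypothetical helper: validate, persist, restart.
func applyRemoteConfig(cfg []byte) error {
	if err := validate(cfg); err != nil {
		// Clearly attributable: can be reported as a configuration error.
		return fmt.Errorf("bad configuration: %w", err)
	}
	if err := os.WriteFile("agent.yaml", cfg, 0o644); err != nil {
		return fmt.Errorf("writing config: %w", err)
	}
	if err := restartAgent(); err != nil {
		// Ambiguous: the agent failed after a config change, but the cause
		// may be the config, a busy port, or something else entirely.
		// This is the "generic failure" the protocol has no message for yet.
		return fmt.Errorf("agent failed after restart: %w", err)
	}
	return nil
}

func validate(cfg []byte) error { return nil } // placeholder
func restartAgent() error       { return nil } // placeholder

func main() {
	if err := applyRemoteConfig([]byte("receivers: {}")); err != nil {
		fmt.Println("supervisor observed:", err)
	}
}
```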
A
There is agent status, but it contains no way to report an error message, for example; you can't tell what exactly is wrong. And there is also no way to tell whether the agent is up or down, whether it's running or not running. So I think that's the bare minimum, I guess, that we would want to have in the protocol.
A
That's
that's.
I
guess
what
I
call
basic
health
right.
So
the
way
to
tell
the
back
end
yeah
the
agent
is
up
and
running
or
no
it's
down
or
or
something
is
wrong
like
what's
wrong
at
least
show
an
error
message
right.
So
that's
that's
the
I
guess
the
absolute
minimum
that
you
probably
want
to
have
there
and
we
don't
have
that.
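As a rough illustration of that "bare minimum", a basic health report could carry little more than an up/down flag and a free-form error string. The struct below is only a sketch under that assumption; the type and field names are hypothetical, not taken from the OpAMP protobufs.

```go
// Hypothetical shape of a minimal "basic health" report: whether the agent
// is up, and a human-readable error when it is not. Names are illustrative.
package main

import "fmt"

type AgentHealthReport struct {
	Up        bool   // is the agent process running at all?
	LastError string // free-form description when something is wrong
}

func main() {
	// Supervisor-side example: the agent failed to start, cause unknown.
	report := AgentHealthReport{
		Up:        false,
		LastError: "agent exited 2s after start; cause unknown",
	}
	fmt.Printf("reporting to server: %+v\n", report)
}
```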
A
It's important, and when you can detect that the problem is because of the configuration, you certainly want to do that. But in this particular case, when I was implementing the supervisor, there are just situations where you know something is wrong but you don't know what to attribute it to. Is it because of the configuration?
A
But even with that, I guess in some cases the agent just doesn't start, right, so there is no way to communicate with it. So what do I do in that case? I need to report somehow that my agent failed to start. The inter-process communication is fine when the agent is up and running and you can query it and ask it: what's your state, are you doing okay, any problems there? Fine.
A
If
it
responds,
you
can
then
send
that
information
to
the
server,
but
if
you
worry,
if
it
doesn't
start
at
all
like
what
do
you
do,
I
think
it's
important
it's
even
more
important
to
report
that
right.
The
agent
is
failing
to
start
and
we
don't
have
a
way
to
report
it
like.
We
have
a
way
to
report
pet
configuration,
so
we
have
a
way
to
report
specific
upfront,
known
debt
situations
like
pet
configuration.
We
have
a
message
for
that
in
the
protocol.
A
We don't have a way to communicate generic failures, which is: okay, something is wrong, I have no idea what's wrong, but I need to tell the server that it's wrong. That's, I guess, what I find missing there. It wasn't very apparent until I started writing the supervisor portion of the code; now I can see it: I have an error here.
A
That's one of the cases. In the supervisor, I can try to validate the configuration as much as I want, but there is no guarantee that something is not still going to go wrong when the agent tries to start, because, as you said, maybe the port is not available to be listened on.
A
I think it's good to start tracking this so that we know we're making progress towards stability. If you follow the link, we have, I think, 16 open issues in total, of which I think only four are probably important. So if you find others that you think are important, please add the label, and if you can think of anything else that we need to do before we mark the protocol 1.0, please file some issues there so that we can work on those.
A
Yeah. Okay, that's all I see in the agenda; that's all I had. So, anyone: anything that you want to discuss?
B
I had an issue the other day about locking on the connection. Any thoughts about that? I can really get into it there... I didn't.
B
I think, given the way agents are connecting and disconnecting, and that configuration can come from the server pushing down configuration, there could be changes in the server that are simultaneously trying to push down changes to the agent. You have to lock somewhere: either lock in the server, or the library could lock for you, yeah.
B
The
websocket
layer
itself
does
not
lock
for
you
yeah,
absolutely.
B
For the library to lock, it's certainly easier to add it that way. I've added it as a protection in our implementation at the moment, but I think it'll be helpful to push that down into the library, and it's easy to add; I'm happy to submit a PR if we agree that's the right approach. But we don't have to discuss this right now. If anybody wants to add any comments to the issue, I think they'll be helpful.
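As a sketch of what "the library could lock for you" might mean: most Go websocket implementations, gorilla/websocket included, do not allow concurrent writers on one connection, so any send path reachable from multiple goroutines needs a per-connection mutex. The wrapper below is an assumed illustration of that pattern, not the actual opamp-go code.

```go
// Sketch of serializing writes to a single websocket connection.
// gorilla/websocket does not support concurrent writers, so the mutex
// guards every outgoing message. Illustrative only, not opamp-go code.
package wsutil

import (
	"sync"

	"github.com/gorilla/websocket"
)

type lockedConn struct {
	mu   sync.Mutex // one mutex per connection, held only for the write
	conn *websocket.Conn
}

// Send is safe to call from multiple goroutines, e.g. the server pushing a
// new config while another code path is replying to the same agent.
func (c *lockedConn) Send(msg []byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.conn.WriteMessage(websocket.BinaryMessage, msg)
}
```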
A
I thought it may be expensive, and technically you may have a single mutex for all connections, so it's kind of a decision that implementations need to make. But I don't have a strong opinion; maybe you're right, maybe for safety we could do that. I guess I'm a bit reluctant because of the cost of doing the locking there, but if we can maybe benchmark it and see that it's insignificant, then I guess, yeah, that's better. Let's make it safer; I don't mind.
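For the cost question, a quick micro-benchmark of the locking overhead alone (before measuring it against real websocket writes) might look like the sketch below; an uncontended sync.Mutex lock/unlock is typically tens of nanoseconds, which should be negligible next to a network write, but that is exactly what a benchmark in the library would confirm.

```go
// Rough micro-benchmark comparing a bare buffer write with a mutex-guarded
// one, as a proxy for the per-message locking cost being discussed.
// Run with: go test -bench=. -benchmem
package wsutil

import (
	"bytes"
	"sync"
	"testing"
)

var payload = make([]byte, 1024)

func BenchmarkWriteUnlocked(b *testing.B) {
	var buf bytes.Buffer
	for i := 0; i < b.N; i++ {
		buf.Reset()
		buf.Write(payload)
	}
}

func BenchmarkWriteLocked(b *testing.B) {
	var buf bytes.Buffer
	var mu sync.Mutex
	for i := 0; i < b.N; i++ {
		mu.Lock()
		buf.Reset()
		buf.Write(payload)
		mu.Unlock()
	}
}
```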
B
It should be okay. It's just that if you don't do it, the consequences are severe, so, yeah. And, yeah, it's not an easy thing; let's just say it doesn't happen very often.