From YouTube: 2022-05-17 meeting
B
Hey, sorry, somebody stepped in. I did have a chance to look at it. I think the separation of connection settings makes sense. We haven't gotten to implementing this in our server yet, but we were considering just passing it as configuration, based on our discussion two weeks ago. So I haven't really been able to weigh the benefits of passing it as an additional configuration field versus as connection settings.
B
I'm still not sure, but yeah, I was going to say I just think it kind of differs on what these API keys or the tokens or the files are and where they're used. I think this proposal makes sense, at least the three separate messages. I think those work.
A
Okay, yeah, I'll think a bit more about it, but I'll probably just go ahead with that and make changes to the spec, unless I come up with some other ideas. Another approach, if it's unclear how this is going to be used: another option is to delete it from the spec for now and add it back later when we have a better understanding of the use cases.
A
Yeah, that's the difference with the first two. If you read the comments: potentially we could add, let's say, a transport type to both the OpAMP and the OTLP connection settings, but the sets of transports that these protocols support are different. One has HTTP and WebSocket, the other has HTTP and gRPC. So if we add that transport type field, it's going to be a different type for each, and they may diverge in the future, even if right now they look similar.
A
Correct, yeah, correct. The problem is that we don't know what the enumeration will look like for any other connection types. We probably can't make assumptions there. That's the difference. So yeah, you're right in a sense. I guess this is the reason I wanted to hear what you guys think; I'm kind of leaning that way.
B
One is integral to the way the agent is managed, and the other is where the agent happens to be sending telemetry information. It just feels like they're very different things. Even if they're both network connections, and therefore share a lot of network connection fields, they serve very different purposes.
A
There's another thing that may happen with the telemetry connection settings: we may decide that we want to support other telemetry protocols, not just OTLP. Let's say I want to do, I don't know, Prometheus Remote Write or something like that. That could be a field that denotes the protocol, not just the transport, which is obviously not applicable to OpAMP, because for OpAMP the protocol is implied.
B
Basically, again, if you have sort of a concept of maybe an own-telemetry receiver and then an exporter, it would make sense to be able to support any exporters that OpenTelemetry itself can support, and therefore you would want all of the fields and the configuration that OpenTelemetry could support. At which point you probably just want, like, a fragment of OpenTelemetry YAML to sort of complete that configuration.
B
But I don't know if we have own telemetry right now. I haven't really dug into that area, what that looks like and how that's configured.
A
We do have the exporter in the Collector implementation, but you're right, essentially we would be mirroring what exists there. Well, it's hard to tell, I don't know. I guess the reason that we're not sure is because we don't clearly understand the use case, how exactly this will be used, and that's the problem.
C
Because I agree that there are a lot of arguments that this makes sense. Let's just experiment, continue using it like that, and if, let's say in two months, we come to the conclusion that this was not the right choice, we still have space to change it.
A
Okay, that's all I have. Does anybody have anything...
B
...to bring up? I put them on the agenda. I didn't have the time to put together issues, so I'm happy to put some notes on the agenda, but let me just mention them real quick. One is, I was wondering: we've run into some issues recently with upgrading OpAMP, and we've been fairly frequently...
B
...doing that. Obviously this is not a 1.0 in any sense, but it becomes hard to work with when you need to keep the agent and the server in sync.
B
We'll have a closed beta of a product that's using OpAMP available in a few weeks, and obviously, at the stage that we're at, both OpAMP itself and our product, some requirements like the agent and the server needing to be in sync are okay right now. But the issues we ran into were kind of a reminder. Yeah, I've been looking at a lot of PRs that change the protobufs in incompatible ways.
A
Yeah, no, I mean, of course it has to become stable at some point; otherwise you just can't use it in production. What we probably can do: there are typically three levels of stability, the convention is alpha, beta and stable. So maybe we can start thinking about going to beta, where it may still change, but a lot less frequently. So at least there's less work to do when things change, or, I guess, less churn, or less frequent churn. So maybe we can think about that.
B
Okay, and like I said, I'll update the agenda to cover what I said. But the other thing is, I've been struggling a little bit with implementing hashes properly, and hash comparisons. It seems to me that there are some places in the spec that say, with regard to packages for example, that the server must compute the hash and the agent should only compare and store it, and then there are other areas where...
B
Like agent description: we've made some changes recently where the library computes it on behalf of the agent, and in our implementation we've struggled a bit to get all of that right, because it's not clear to me. And again, this could just be my fault for not reading the spec carefully or something, but it's not clear to me who owns which data, who computes which hashes, and what you're expected to do. I guess the idea is you store the hash?
B
If you only receive a hash and the hash is exactly what you've stored, then you know you have the right data, and if it's different, then you request the full data. But like I said, the agent configuration and configuration status, or agent description, seem to be owned by the agent and computed by the agent, whereas effective configuration...
B
...seems to be computed by the server and owned by the server, and same with packages, which is also computed by the server and owned by the server. It just wasn't clear to me if those hash calculations should be identical, if they should be used for comparison, or if they should be recomputed on both sides. I think you get what I'm saying, sorry.
A
No, no, I get it. It's quite possible that the spec is unclear, that's definitely possible. Let's make sure we clarify it. Maybe there are even mistakes in the spec, that's also possible; maybe something is not just unclear but also plain wrong. So I guess the best way would be if you actually tell me which area was confusing. Let's try to read the relevant portion of the spec and see if something is missing there. I'm happy to fix it.
A
We do need the spec to be completely clear about how these things work, and you're right, we're using hashes in both directions. In some cases the agent computes the hashes, and the hashes are used to compress the messages when nothing changes. So in that case the agent is the one that computes them, and the server is just storing or comparing the hashes.
A
It doesn't compute them. But we also have the opposite direction, when the server makes offers to the agent: the hashes for the offers are computed by the server, and that hash is then also used when the agent reports in the opposite direction that it did receive the offer and is working on the offer, so it's no longer looking for it. That one is particularly in the packages area.
A
So when the server has an offer for the packages, it includes a hash, and the agent needs to respond and say: okay, I got your offer, don't send it anymore, because the offer can be large and we don't want it to be included in every message that is coming from the server. So we have it in both directions, but it's very possible that we do so just...
A
...information. But for the packages, it starts in the opposite direction.
A
For the effective configuration, the agent computes them. We have it, yeah, we have it in the code already. We just made that change: the agent computes them.
A
Okay, send the link, yeah, let's have a look, because it should not be. I believe it's the agent's responsibility, the effective configuration in particular, not the remote configuration. There is a remote configuration, which is the server's responsibility: the server computes the remote configuration and its hash and offers it to the agent, and the agent responds with the effective configuration, and the computation of that...
A
...is the agent's responsibility. Okay, so it flows in both directions, and essentially, depending on who produces the data, the original data, the source: that party is responsible for computing the hash. That's the way it is for everything there. It's just that the responsibility to produce the data is sometimes with the server, sometimes with the agent.
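The ownership rule described here (whichever party produces the data computes its hash; the other side only stores and byte-compares it) can be sketched in Go. This is a hypothetical illustration, not the opamp-go API: the function names are made up, and SHA-256 is an assumption since the protocol treats the hash as opaque bytes.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// producerHash is computed by whichever side owns the data: the agent
// for its effective configuration, the server for remote config or
// package offers. SHA-256 is an assumption here; the hash is opaque
// to the other side.
func producerHash(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

// consumerUpToDate is the receiving side's only job: byte-compare the
// received hash against the last stored one. It never recomputes the
// hash from content of its own.
func consumerUpToDate(received, stored []byte) bool {
	return bytes.Equal(received, stored)
}

func main() {
	cfg := []byte("receivers:\n  otlp:\n")
	stored := producerHash(cfg) // stored by the consumer from an earlier message

	// A later message carries only the hash; a match means the cached
	// copy is current and the full payload need not be resent.
	fmt.Println(consumerUpToDate(producerHash(cfg), stored))
	fmt.Println(consumerUpToDate(producerHash([]byte("changed")), stored))
}
```

The point of the sketch is that the consumer side contains no hashing code at all, which is why a Go agent and a Java server can interoperate regardless of hash function.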
B
Okay, I think that makes sense. Given that discussion, I'll dig through the spec and make sure I'm clear.
B
Yeah, I think what was confusing for me is there are some cases where it's very, very clear that the server is responsible for computing and the agent should not compute, and it's easy to read that and generalize it, since the hashes are sort of a general thing in the spec. But I'm understanding what you're saying, and I'll take a look.
D
I guess just to jump in on that: I've been looking at the spec recently as a newcomer. The spec doesn't specifically say how to compute hashes, but it looks like in the actual code we've made a decision, at least in the Go code, of how to compute the hash for the effective config or the agent description.
D
I don't know if that's meant to be the way to compute the hash, or if it should be open-ended. But it's a little concerning: if somebody uses the Go client to compute a hash and sends it to a Java server that doesn't have the same type of implementation, they may have to reverse engineer the Go code to figure out how you hashed it, if there's no spec.
A
The Go implementation of the client can use one hashing method and the Java implementation of the client can use another hashing method, and they can both work with the Python implementation of the server just fine, because the server doesn't care about the hashing method used. It's an opaque value from the server's perspective.
A
Yeah, and the same is true in the opposite direction: when the server computes a hash of any content, the agent only stores it or compares it to a previously stored value, but never tries to compute the hash again from some other content for comparison. It never does that. If you look at the code, it never does that.
B
I think that was my fault. In an early implementation of ours back in December, when I was first playing with this, we were computing the hash on the client, sending that, computing the hash of what we think we should have on the server, and comparing those hashes. That was obviously the wrong way to do it, but it seemed like a reasonable approach at the time.
A
Okay, I guess it's an omission on my end. Let me add this clarification to the spec.
B
Yeah, just to clarify that last question: the package is available as a downloadable file, and the downloadable file has a signature, but the package available has a hash that is not the hash of the file. It's the hash of the...
A
I mean, nothing prevents you from doing that. Let's say, for whatever reason, you make a new release: you want to bump the version number, but nothing really changes in the content of the package. It's possible, in theory. You do want the package to be updated, but the file is not changed, so you don't want to download it again. That's why we have the hashes in two places.
A
Maybe it's excessive, I don't know; maybe we don't need it and could get rid of one of those hashes. But that was the thinking behind why we have both: the downloadable file content hash, which is essentially the hash of the byte sequence of the file, whereas the hash of the package available field includes the version number, the type, all those other fields in the message.
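The two hashes just described can be sketched like this. This is a hypothetical illustration in Go, not the actual OpAMP proto fields (the real message has more fields, and the hash function is unspecified): the file content hash covers only the downloadable bytes, while the package hash also covers metadata such as the version, so a version bump with an unchanged file changes one hash but not the other.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// PackageAvailable is a simplified stand-in for the real message;
// field names here are illustrative.
type PackageAvailable struct {
	Version     string
	FileContent []byte
}

// fileContentHash covers only the downloadable bytes, so an unchanged
// file keeps the same hash and the agent can skip re-downloading it.
func fileContentHash(p PackageAvailable) [32]byte {
	return sha256.Sum256(p.FileContent)
}

// packageHash covers the metadata as well as the file reference, so
// any change to the package (e.g. a version bump) changes it.
func packageHash(p PackageAvailable) [32]byte {
	h := sha256.New()
	h.Write([]byte(p.Version))
	h.Write(p.FileContent)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	v1 := PackageAvailable{Version: "1.0", FileContent: []byte("binary")}
	v2 := PackageAvailable{Version: "1.1", FileContent: []byte("binary")}

	// Version bump, unchanged file: file hash stays the same (skip the
	// download), package hash changes (the package itself was updated).
	fmt.Println(fileContentHash(v1) == fileContentHash(v2))
	fmt.Println(packageHash(v1) == packageHash(v2))
}
```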
A
There's a section that explains these hashes specifically, and the download process in particular: how you do the downloading, how you compare the hashes, what you do there. There is a three-step process. If you read the downloading packages section, I tried to explain it there. Again, if it's not clear, please do submit issues and I will fix it.
B
I think the part I was a little bit confused about was just who owns what data and how this works in both directions, but again, I'll take a closer look and give you specific feedback.
A
It is specifically about sending the agent's state to the server, not the opposite direction. So again, maybe it's poorly worded there. The opposite direction is typically the packages and the remote configuration itself, which is not described in this synchronization section at all. So maybe that's why: I guess you were expecting that it should be described, and it's not; maybe that's the reason it is confusing.
B
Yeah, I think I was also confused by the difference between effective config and remote config, because the goal is for the server to control the configuration on the agent via remote config, and then the agent reports its effective config.
A
Is it required to be able to do that? No. I think in a lot of cases you're probably going to just use whatever you receive from the server, and in that case the effective configuration is going to be identical to what you received from the server. But I wanted the protocol to allow this other use case, when it's not identical after resolution.
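The remote-versus-effective distinction can be sketched as a merge step on the agent side. This is a hypothetical illustration (a flat string map stands in for real config, and the function names are made up): when the agent has no local overrides, the effective config it reports is identical to the remote config; with overrides, it diverges.

```go
package main

import "fmt"

// computeEffective derives the effective config the agent reports back
// to the server: the remote config it received, with any local
// overrides applied on top. With no overrides, effective == remote.
func computeEffective(remote, localOverrides map[string]string) map[string]string {
	effective := make(map[string]string, len(remote))
	for k, v := range remote {
		effective[k] = v
	}
	for k, v := range localOverrides {
		effective[k] = v // the local value wins over the remote one
	}
	return effective
}

func main() {
	remote := map[string]string{"endpoint": "collector:4317", "interval": "60s"}

	// No local overrides: effective config mirrors the remote config.
	fmt.Println(computeEffective(remote, nil))

	// A local override: effective config diverges from the remote one.
	fmt.Println(computeEffective(remote, map[string]string{"interval": "10s"}))
}
```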
A
Okay, so please do file individual issues for every single confusing part in the specification. Because I wrote most of it, I'm sometimes blind to these mistakes; it's difficult for me to notice them. But when you read it for the first time, sometimes it's obvious that something is off or something is missing there. So I'm happy to go and do all that work and fix it, but I just need to know.