From YouTube: 2021-12-14 meeting
A
B
Are you good? I've never joined this meeting before, but I've been meaning to.
C
D
Yeah, it's one of my favorite spots on this planet.
D
B
C
Okay, so I guess a couple of updates from my side since the last time we met. I implemented the agent and server in Go, the OpAMP protocol implementations, I mean. It is not the full protocol, just the status reporting and remote configuration features, and I'd like some feedback, primarily on the API, I guess.
C
If anybody wants to try it and see how it works, that would be great feedback to have; whether we need to change something in the API would be good to know. I then used that implementation to implement two examples of an agent and a server that use OpAMP, with some very basic UI as well. So you run the server, you run an agent, and then in the server UI you see the agent connected, and you can see its configuration.
C
You can change the configuration, so the basic functionality that you would expect. Obviously very rudimentary, very primitive, but it shows what is possible to do. So yeah, I'd love some feedback on that as well. I posted the links in the meeting notes document.
C
Those are the updates, and then I have a couple of questions that I wanted to discuss. One was brought up early, I think in the very first meeting: whether we want to be able to use the same protocol to manage OpenTelemetry SDKs. I spent a bit of time trying to understand how we can do that and what would need to be changed in the protocol for that to become possible.
C
So I linked the issue there, and there is also a linked draft PR which shows what we can change in the protocol to make this possible. It's primarily around what we call an agent: instead of having two hardcoded concepts, agent type and agent version, as the identifiers of what an agent is, it slightly generalizes that and allows a list of attributes, key-value pairs essentially, to be the identifiers of the agent, which then makes it possible to...
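As a rough sketch of the generalization described above, in Go: the struct shapes and attribute names here are illustrative assumptions, not the actual OpAMP protobuf definitions.

```go
package main

import "fmt"

// Current shape under discussion: two hardcoded identifying fields.
type AgentIdentityV0 struct {
	AgentType    string
	AgentVersion string
}

// Proposed shape: identity is an open-ended set of key-value
// attributes, so an SDK can describe itself the same way an agent does.
type AgentDescription struct {
	IdentifyingAttributes map[string]string
}

// FromV0 shows that the old hardcoded fields map cleanly onto attributes.
func FromV0(id AgentIdentityV0) AgentDescription {
	return AgentDescription{IdentifyingAttributes: map[string]string{
		"agent.type":    id.AgentType,
		"agent.version": id.AgentVersion,
	}}
}

func main() {
	collector := FromV0(AgentIdentityV0{AgentType: "otelcol", AgentVersion: "0.40.0"})
	// An SDK uses the same structure, but with its own dimensions.
	sdk := AgentDescription{IdentifyingAttributes: map[string]string{
		"telemetry.sdk.language": "go",
		"service.name":           "checkout",
	}}
	fmt.Println(collector.IdentifyingAttributes["agent.type"], sdk.IdentifyingAttributes["service.name"])
}
```

The point of the change is that the SDK case needs identifying dimensions (language, service name, and so on) that two fixed agent fields cannot express.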
C
Let's say it allows you to say that, well, this SDK is an agent of some sort, and here is the list of attributes that describe it, and then you can use it with the same management server, which is the goal, I guess. So it would be great if you guys could have a look at the proposal there; I'm interested in feedback on two things.
C
One is whether we think it's generally a good idea to manage OpenTelemetry SDKs remotely in this manner, and the other is whether the specific proposal I'm making there, about how we modify and structure the protocol, makes sense. So yeah, some feedback please; please have a look at the proposal there. If you already had a chance, maybe you can tell me now; that wouldn't be bad.
D
Yeah, I can share my feedback quickly. I was actually thinking about this for some time recently, and the more I think about it, the more I think it is actually very useful, especially for mobile devices. When you have, let's say, a cloud environment, maybe you can apply these configuration changes easily using different means, but with mobile it's a totally different story, and I think that especially for those use cases, having the capability to do remote management of the SDK might be extremely useful.
B
We also have a similar thing with the ability to do sampling changes in Jaeger, for example, that we just...
A
B
C
...on the fly, right, right. Yeah, and I guess the idea here would be that you define a configuration file format for the SDK, and the settings for Jaeger sampling would be some sort of config file settings. Then you push that file from the management server to the SDK, and the SDK applies it.
C
E
C
That's something I wanted to talk about, but setting that aside: yes, the idea is that you deliver the configuration from the management server to the SDK, the SDK applies the change, and one of those changes can be the sampling rate, for example, plus whatever else you want to be able to apply, whether at startup or continuously at runtime. Sampling, say, can be changed not just at the beginning but also later, whenever you need it to change over time. So yeah, I think I've said enough.
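A minimal sketch of applying such a remotely delivered change at runtime, assuming the config carries just a sampling probability; the `Sampler` type and `ApplyRemoteConfig` hook are hypothetical names, not any real SDK API.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Sampler holds a sampling probability that can be swapped at runtime
// when the management server pushes a new configuration. The
// probability is stored as parts-per-million so it can be read and
// written atomically from concurrent goroutines.
type Sampler struct {
	ppm int64
}

func (s *Sampler) Set(probability float64) {
	atomic.StoreInt64(&s.ppm, int64(probability*1e6))
}

func (s *Sampler) Get() float64 {
	return float64(atomic.LoadInt64(&s.ppm)) / 1e6
}

// ApplyRemoteConfig is the hook an OpAMP client could call when a new
// configuration arrives; here the "config" is just the probability.
func ApplyRemoteConfig(s *Sampler, probability float64) {
	s.Set(probability)
}

func main() {
	s := &Sampler{}
	s.Set(1.0)                 // startup default: sample everything
	ApplyRemoteConfig(s, 0.25) // server lowers the rate at runtime
	fmt.Println(s.Get())
}
```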
C
B
I mean, it's definitely a major security concern when you do that. That's the downside to this whole thing, right? So yeah.
B
A
C
There is a section about security in the specification which tries to caution whoever does the implementation to be careful with what they are doing with the stuff they are receiving from elsewhere. Tell us if you think it is necessary to have more advice there about how this should be applied, so that we don't end up in a bad situation.
C
I don't know, maybe have a look and read that security section. If we can improve it, we can improve the recommendations, or the way that things operate, maybe at the protocol level. Suggestions are very welcome.
C
So, to continue on that: the second thing that I posted there is what you mentioned, Jonah, the transport. When I started working on OpAMP, it was initially just for the agents, and there I was thinking about agents which are up and running for prolonged periods.
C
Let's say for hours or days, maybe, without anything happening from the perspective of changing the configuration, or anything that requires an interaction between the agent and the management server. Using polling would be very, very wasteful in these scenarios: you'll be polling, I don't know, once every 30 seconds when nothing really changes for days. That's very wasteful on the server side; you waste so many resources there. Plus, it also introduces latency.
C
If you change something, and the agent uses polling, then the polling period defines the upper bound of your latency for propagating changes to the agent. So I went with the permanent connection to avoid these problems. You trade a bit of memory for keeping the connections open on the server side, but you save a lot on CPU usage as a result, and you also gain on latency: you get almost real-time propagation of the changes to the agent.
C
Now, for sure there are challenges with a large number of connections, but that depends on how many. I tested it with half a million connections on a single machine, on a single EC2 server, and it's fine. So if you have, let's say, a few million, it should be okay: you put a few servers out there and they should work fine.
C
Then again, for something short-lived you really don't gain anything: it maybe asks for a configuration once at startup, maybe reports something at the end, and it's done. There is no polling necessary, there is no need to push any data in real time from the server to it, so there's not much that you can gain by using persistent connections there. So I guess the question here is: do we...?
C
You can have HTTP, you can have gRPC, you can use both, and both are considered valid. In terms of the complexity of the specification, it doesn't really add a lot on top of what we already have: instead of using the WebSocket messages for the payload, you use the HTTP body for that. Maybe it will add a page to the specification, but it's not terribly more complicated than what we have today.
D
C
D
C
The content-type and encoding headers are actually already used by OTLP to indicate whether protobufs or JSON or whatever are being sent, and we could use them here too, to understand whether it's a plain HTTP message or an upcoming WebSocket connection. So I see ways that we could easily make this work; it would fit nicely. Still, I'm slightly reluctant, because it is a bit of a complication, so I'm looking for more validation before I go forward and make that a prominent thing in the spec.
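One way a single endpoint could discriminate the two transports by headers, as discussed, is sketched below using only the standard library; the `/v1/opamp` path and the handler's responses are assumptions for illustration, not spec behavior.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// isWebSocketUpgrade reports whether the request asks to upgrade the
// connection to a WebSocket, per the RFC 6455 handshake headers.
func isWebSocketUpgrade(r *http.Request) bool {
	return strings.EqualFold(r.Header.Get("Upgrade"), "websocket") &&
		strings.Contains(strings.ToLower(r.Header.Get("Connection")), "upgrade")
}

// opampHandler sketches one endpoint serving both transports. A real
// server would complete the WebSocket handshake via a library, or
// decode one OpAMP message from the HTTP body; here we just branch.
func opampHandler(w http.ResponseWriter, r *http.Request) {
	if isWebSocketUpgrade(r) {
		fmt.Fprint(w, "websocket")
		return
	}
	fmt.Fprint(w, "plain-http")
}

// transportFor exercises the handler with and without upgrade headers.
func transportFor(h http.HandlerFunc, upgrade bool) string {
	req := httptest.NewRequest("GET", "/v1/opamp", nil)
	if upgrade {
		req.Header.Set("Upgrade", "websocket")
		req.Header.Set("Connection", "Upgrade")
	}
	rec := httptest.NewRecorder()
	h(rec, req)
	return rec.Body.String()
}

func main() {
	fmt.Println(transportFor(opampHandler, true), transportFor(opampHandler, false))
}
```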
E
So, just one comment: can you hear me? Okay. Yes, with the WebSocket transport: we recently did an implementation of the OpAMP server using WebSocket, and having that persistent connectivity was key to near real-time configuration application and immediate feedback. It did facilitate a fantastic user experience. So, from our experiment last week, it was a very positive experience.
E
We also did a version where we only applied configuration when we received a status update from the client, and then we would reply with the expected configuration. That also worked, but because Shimak put a 30-second periodic status update there, there was lag and latency, and it led to a very odd user experience.
C
E
I then took the agent, wrapped it in a bunch of shell loops to deal with port conflicts and naming, spun up a lot of sessions, and connected it to a back end, and yeah, it was inexpensive. Now, I wasn't using TLS or anything, it was all plaintext, but to your point, I really didn't see much of a performance issue maintaining all those connections.
B
It's all going to depend on scale. If you're operating a multi-tenant service like what we run, you know, it could get up to hundreds of millions of connections, so it's a bit more tricky, right? Absolutely.
C
Absolutely right, yeah, definitely. But think of the other side: if you have hundreds of millions of agents and each is polling every, let's say, 30 seconds, that's millions of HTTP requests per second on your servers. That is not going to be cheap either: you don't maintain the connection, but you are trading it for processing lots of actually pointless requests, where nothing really happens; the agent just connects.
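The back-of-the-envelope math behind that claim: the aggregate request rate is simply fleet size divided by polling interval.

```go
package main

import "fmt"

// pollingRPS gives the aggregate request rate a fleet generates when
// each agent polls on a fixed interval.
func pollingRPS(agents, intervalSeconds float64) float64 {
	return agents / intervalSeconds
}

func main() {
	// The scenario from the discussion: hundreds of millions of agents,
	// each polling every 30 seconds, even when nothing changes.
	fmt.Printf("%.0f requests/sec\n", pollingRPS(100e6, 30))
}
```

At 100 million agents and a 30-second interval that is over 3.3 million requests per second of mostly no-op traffic.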
B
E
F
E
Yeah, and also, in the implementation for our proof of concept, the status-checking interval that triggers updates was statically set. If that were configured by the server itself, we could reduce traffic further: we pay once for the upgrade and the minor maintainership of those connections, and then we could actually tune down the network connectivity. And, you know, the scaling pattern is the same regardless of polling or persistent: it's...
E
...you know, a small fronting microservice that you scale out as much as you can. Yeah, but yeah. So I shared a quick demo video, which I uploaded to YouTube, in the working group Slack, if you all are interested to see what we...
B
...need in a few days. The downside is, when you do a lot of CDN work, the persistent-connection thing becomes problematic. For us, where we work with a lot of CDNs, there are some advantages where we can cache certain types of data on the CDN, which you can't do with a persistent connection. So it's something to keep in mind, that's all. I've seen this type of design blow up before, specifically because of that.
E
B
And that's actually why Uber made the design decisions that they did when they created the remote sampling configuration: it was specifically for certain scenarios like that, where they're running edge infrastructure and they want to be able to do caching and replication of data, versus connecting everything to the mothership, and then the mothership goes away.
B
Yeah, no, I mean, I think the user experience is the key thing that we need to fix, but I could also see it blowing up a few years down the line if you don't think about these things up front. That's all, yeah.
A
C
Yeah, but I think, to me, it sounds like we do want to have both transports supported, and we should structure it in a way that a single server can handle both on a single port seamlessly: the server figures out who the connecting agent is and which transport they want to use, and just uses that. I think it's doable.
E
Yeah, so it's all open source. A few Sensu engineers and Shimak and we did a project: Sumo has an internal hackathon that happened last week, and we chose to implement the OpAMP protocol both in the Sensu back end as the server and as an extension within the Sumo distribution of the OT agent, to really just test and validate the protocol, see it in action, and feel the user experience.
E
You know, it's not production-ready, not something we would ship in either product, but it's a demo of standing up Sensu, connecting an OT agent to it, having an expected configuration pushed out to the connected OT agents that can be modified and will, in near real time, update the connected OT agents. We were able to represent OT agents within the Sensu data model as entities, and generate Sensu events to be processed by pipelines.
E
So it's kind of an intersection of these two projects, and our goal was to validate the protocol, get a feel for it, and have it all open source so that we can look at it. And, you know, I'm not...
C
E
...trying to extract that from the developers, from the other engineers, too, but there weren't inherent issues with the protocol; the experience was quite positive. So I'm trying to extract things out of everyone's brains while it's still fresh, but there's nothing overly negative, or that stood out as a problem, in our implementation.
E
D
Yeah, there are several items, and we started working on that before. You have added these agent and server examples, which are actually more sophisticated than the code for OpAMP handling, at least on the client side, that I was working on. There are two things that I think are worth sharing. The first one, and we actually discussed this under a GitHub issue recently, is the instance id, because each time I was starting a new OpenTelemetry Collector instance...
D
...I was sending a new instance id, and the server was identifying this as a new agent and assuming that it needed to have a new configuration applied, etc. It wasn't really much of an issue, but it would be a better experience if the instance id were persisted, and I think that this is what we need to do eventually. This is work to be done on the OpenTelemetry Collector side.
D
The second thing is that we just needed something quickly for the hackathon, so we were not using the work that Kartik has started on the supervisor. We started implementing something basic for the OpenTelemetry Collector, and I did not follow the supervisor model.
D
I followed the extension model, essentially using an extension that starts and talks to the OpAMP server, and when there's a new configuration, it shuts down the current instance and restarts a new one. I was anticipating it would not really work, because of closing connections and so on, but it actually worked quite nicely, and to be honest, I like having a single process that does it all rather than two, with the need to point to the executable, etc. I found that this worked really easily as an extension.
D
C
Okay, yeah, I mean, that's probably a valid way to run things, if you don't need updates of the executable, the auto-updating functionality, for which you probably do need the supervisor, and if you're not concerned with instability of the collector: if it crashes, something needs to watch it. And if the configuration doesn't work, let's say, and that can happen, you download the configuration and try to start the collector...
D
On the configuration, actually, this is not necessarily true, because, well, you can do two things. The first one is that before applying the new configuration, you can test it and just check that it can be applied. Of course, what can still happen is that the parsing might work fine, but maybe one of the configured components...
C
A
E
It's true. We looked at kind of papering it over, even by better representing the resulting OT YAML as a resource within Sensu, so that we could do further validation before even pushing it out, before going through agent-based validation. But ultimately, yeah, we can't; it's never...
C
...something may be listening on the port that you're trying to listen on with a receiver in this new configuration, and it just fails: today the receiver will refuse to start, and it will just exit the collector process. That's how things work today, at least. So no matter how much you try to validate the actual text of the configuration...
C
...unless you actually run the collector, you never know whether it's going to succeed or not. But yeah, again, I mean, that's probably fine in some cases; I'm not trying to say that what you did is not valuable. Actually, I was initially thinking that maybe that's the way to go, with an extension, so yeah.
C
E
There's another path that kind of came up through this experience: instead of rolling back, or rolling forward to a known previous working config, you could alternatively, given that the OpAMP extension or whatever is in the collector is still functional, just apply that configuration and then report to the mothership that, yeah...
C
It prints a log and exits the process; it says what's wrong and exits. The idea is that, because usually it's used manually, you just change the configuration locally, run it, and see whether it works; that provides the fastest feedback to whoever is doing this work. With remote management that's no longer really the case, so maybe that needs a slight change in the philosophy of how we handle the startup errors.
C
Yeah, yeah. But it's good to know that you guys were able to implement it, and I guess you didn't encounter major problems. That's feedback too; that's nice to hear.
E
Yeah, and if things change with that, if somebody says, oh, I didn't like how this worked, I'll be sure to update it in the Slack or the issue. Yeah.
D
G
Sure, just one question: the demo that you guys did, is it open source, so that I can take a look at it? Yep? Yeah.
E
Everything's open source. In the working group Slack channel, I included the GitHub compare between main and our hack branches. Everything that's in the demo you can absolutely download, build, and run. Thank you. Did you... did you implement...
B
E
...using the opamp-go library? Yeah, we used the library, and then I think there were two issues we ran into when using it, and I can't remember how we overcame them now; that was a week ago, so it's gone.
E
G
I have a very silly question; I'm new to this OT stuff, so bear with me. When I saw your agent and server examples, I was a little confused by the terminology, like when you introduce the concept of a supervisor.
G
What exactly is an agent in that sense? The collector is something that we call an agent, but in your example, you also had, like, a new agent as part of the example, right? So is that like a wrapper over the existing agent, which can potentially be managed by the supervisor, or can the supervisor manage it directly? Yeah.
C
No, the example is an unsupervised one. It's just to demonstrate how you use the opamp-go client implementation, that's all. It's not intended to look like a real agent, nothing like a real agent. You're right that it includes everything in a single code base, but in reality you would have a separate supervisor and a separate collector, and the actual implementation is not going to look anything like that; it's going to be very different. So yeah, sorry if it was confusing.
G
C
G
The closer example to that would be if the OpAMP part of it is inside the collector.
C
E
I'm just looking for some clarity on the path of progression here for the protocol. In terms of what we've gained: more things developed, there are some implementations, there are some POCs, the pattern around the supervisor is progressing, which is fantastic.
E
I'm trying to understand what the next milestone is around the spec itself, yeah.
C
Well, that's a good question. I think we need to finalize the part where we bring in the functionality to support the SDKs; that's kind of a somewhat significant change to the spec, and I think the remaining stuff we can probably leave open. There are issues that are open, but I don't think they prevent us from at least releasing, let's say, a 0.1 version of the specification and saying, you know what, now this is ready to be tried as an implementation of some sort. So I would like to do that, to actually make this...
C
...I mean, a supervisor that uses the implementation of the specification and can drive the OpenTelemetry Collector: that would be the second, I guess, major milestone for us. In parallel, probably once we have the spec 0.1 defined, we probably should go and talk to the OpenTelemetry SDK and specification SIGs and see what we do there from the perspective of the SDKs: are they willing to start adopting it in any way, or at least start thinking about how they'd adopt it in the OpenTelemetry SDK case? That would be...
C
...I guess we should do that in parallel to the supervisor work; it is not dependent on the supervisor in any way. And then they probably should provide some feedback to us, to see whether, for example, we go with HTTP or we go with WebSockets. That's another area where we need to make a decision, so yeah. I think that's the way we should probably go; I'm thinking a bit more short-term here. Longer-term, it's unclear, too.
E
I think there's already enough there, like, there's already enough to make it do what we need it to do, and there's enough there that provides the foundation that we can all fill in with our imaginations, of what we can layer on top. I don't know if more elaborate examples are necessary. So I, for one, am excited by the idea of a 0.1.
E
You know, getting SDK support in there, and then presenting and bringing that to the other folks, would be really exciting. Yeah, yeah.
C
And I guess another step after that would be to have some sort of OpAMP proxy implementation in the collector, so that when you have SDKs that connect to the collector, typically to send their telemetry through the collector today, they can also connect to the OpAMP server through the collector. Again, that can be a requirement for many deployments, where they don't want every single application, or the SDK...
C
...that is part of the application, to have a connection to the public internet. The collector usually serves as a gateway, and we want it to also be a gateway for the OpAMP protocol.
C
In that case, I would probably go and implement some sort of OpAMP proxy extension inside the collector, which can also help with reducing the number of connections; it can be a connection concentrator. You can accept connections from a large number of agents or SDKs, thousands possibly, in the collector, and forward all that data to the server using a single connection, a single WebSocket connection.
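The concentrator idea can be sketched with channels standing in for connections; the `Envelope` and `Concentrate` names are hypothetical, and a real proxy would multiplex OpAMP messages over one upstream WebSocket rather than a channel.

```go
package main

import (
	"fmt"
	"sync"
)

// Envelope tags a message with the originating agent's instance id so
// many agent connections can share a single upstream connection.
type Envelope struct {
	InstanceID string
	Payload    []byte
}

// Concentrate fans messages from many per-agent channels onto one
// upstream channel, then closes upstream once all agents are drained.
func Concentrate(agents map[string]<-chan []byte, upstream chan<- Envelope) {
	var wg sync.WaitGroup
	for id, ch := range agents {
		wg.Add(1)
		go func(id string, ch <-chan []byte) {
			defer wg.Done()
			for msg := range ch {
				upstream <- Envelope{InstanceID: id, Payload: msg}
			}
		}(id, ch)
	}
	wg.Wait()
	close(upstream)
}

func main() {
	a := make(chan []byte, 1)
	b := make(chan []byte, 1)
	a <- []byte("status-a")
	b <- []byte("status-b")
	close(a)
	close(b)
	up := make(chan Envelope, 2)
	Concentrate(map[string]<-chan []byte{"agent-a": a, "agent-b": b}, up)
	for env := range up {
		fmt.Println(env.InstanceID, string(env.Payload))
	}
}
```

The server then sees one connection carrying tagged traffic for thousands of agents, which is the fan-in the gateway deployment below relies on.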
C
So, something like that, and we would definitely want to use that in the Helm chart. We have an OpenTelemetry Helm chart which deploys one agent per node as a DaemonSet on Kubernetes, and then it has a single gateway collector which collects data from all of those agents and forwards it to the back end. So with this model, it also will serve as an OpAMP gateway, an OpAMP proxy if you will. So, anyway: support for the protocol inside the collector itself.
C
E
C
E
Yeah, I mean, just to be clear, I'm very excited about what we already have. I think it's got legs.
E
G
Yeah, I'm gonna... yeah, this is my first one, so I'm just getting the hang of all of this, so it's taking a bit more time, but I'll try to push something out. You know, I'll probably bug you guys on Slack if I run into some questions.
C
Yeah, feel free; I'm happy to answer any questions you have, if I can. But I think what you posted is a great start; let's continue.
G
Thank you, thank you. I'll also, you know... Sean offered some help before, so I'll also ask him if we can review my early designs before I post a PR, or whichever way. Yeah.