From YouTube: 2021-10-19 meeting
C
Yeah, it is what it is. Central Europe is nine hours from the West Coast, so.
B
Okay, I think we can start.
B
Okay, so I've been working on agent management internally at Splunk for a few weeks now, so I thought it may be useful to see if we can use what I have so far as a source of ideas, and then we as a working group can decide which of the ideas we want to use or what we want to do differently.
B
Just some very high-level thoughts to start. I'm thinking about, I guess, the obvious: a management server which controls agents, with the agent and server communicating using some sort of open-source or open-specification protocol. What do I mean by that? We are OpenTelemetry, so obviously it's going to be an open specification, but I mean a bit more than that.
B
I think it would be very nice if we manage to, let's say, convince other agents to adopt this protocol; then what you would have is a management server which can manage a fleet of different types of agents. I think that's a nice property to have if possible. And then I've been thinking about what it is that we actually want to manage.
B
What's the feature set? Here's the list I came up with. It's about configuration management, so telling the agent what configuration to use; status reporting, so the agent telling the server, or telling it in some other way, what its state is: what's its health, what are its metrics (maybe CPU usage, memory usage, or some other custom metrics), and what configuration it's currently running with; then some sort of plugin or add-on management.
B
We don't have any notion of dynamic plugins for the OpenTelemetry Collector today, but since we adopted Stanza's logging components: Stanza actually does have a notion of pluggable definitions for log formats which can be plugged in at runtime. So that is a kind of a plugin, if you will. Even though we don't technically have a notion of dynamically definable components in the Collector, I think it still may be useful to support this.
B
That is, the ability to deliver or push add-ons remotely from the server to the agent. Then another thing I think is important to consider is auto-updating of the agent itself. So you have an image of the agent running somewhere and you want to upgrade it, or maybe downgrade it.
B
If the new version is misbehaving, you want to change, replace, or update the agent's executable remotely and at scale. And finally, and this one came up a lot as a pain point when I was investigating, is credentials management. The agent connects somewhere for the purpose of being managed and for the purpose of doing its work, delivering data, and typically uses some form of authentication.
B
It's either access tokens or maybe client-side TLS certificates, and it turns out that in many cases this is a pain to manage, because the tokens or certificates expire or need to be revoked or rotated. Being able to do this, again at scale, from some sort of management server rather than individually for every agent would be beneficial.
B
So these are my very high-level thoughts. I'll pause here for comments before I move on. If anybody has any comments or questions, please feel free.
E
Do you mean, for example, the tracer that lives inside the application? Because with sampling, for example, there is a very strong need to configure the sampling rate remotely for a number of nodes at the same time.
B
Yeah, I think I get the question, possibly; I'm not sure. What I had in mind was more like standalone agents, like Fluent Bit, for example, but why not? I guess the answer to that is maybe yes, if we design it in a way that can be used by those. In the case of OpenTelemetry, that's the SDKs, the OpenTelemetry SDKs which are running alongside the application.
B
OpenTelemetry does not have that, or at least it does not have a comprehensive list of things that are configurable, in the sense that you can supply a configuration file and it affects the configuration of the SDK. But there is an open issue to do that. So I think once we have that notion of a holistic configuration of the OpenTelemetry SDK or agent, then yeah, possibly we could use this same solution to deliver the configuration of each individual application's instrumentation.
C
I just think that what's possible without too much hassle is that the receiver can be enhanced with the ability to advertise, for example, the sampling settings to the SDKs. That way it would be receiver configuration, which means it would be Collector configuration, which means it could be managed, even without bringing an abstraction of some SDK part into the configuration protocol. But that's just one of the many ways it could be implemented.
B
Yes, that's also possible. The OpenTelemetry Collector can be an intermediary between other collectors and your back end; that's a supported, possible scenario today. You have an agent running somewhere on some machine, or inside some container or sidecar, and these agents are OpenTelemetry Collectors.
B
These agents send the data to an intermediary OpenTelemetry Collector, which is a standalone service, which then forwards the data to the back end. This is a typical scenario today. Now, when you have this scenario, do you want to manage the edge agents directly from your management server, or do you want your intermediary collector to serve as the management server? I think that's also a possibility, in a sense. Well, there are options here.
G
Yeah, my question would be: what is the scope that we want to cover with this? Configuration management is okay; it's what we are here for. But the list seems a bit ambitious to me, I mean the agent auto-updating and the multi-agent support. It's really hard to define a protocol when there are so many different collectors available, like Fluent Bit, Tremor, Vector, and the OpenTelemetry Collector, with different kinds of features and even different kinds of configuration languages and capabilities.
B
Yeah, that's a very good question. I think we should maybe start by seeing what's possible. We do not necessarily have to implement all of these, that's for sure, and definitely there will be priorities. Some of these things are more important than others, particularly for the OpenTelemetry Collector; for some other agents maybe the priorities are a bit different, and maybe vendors also have different priorities. Maybe some vendors or some customer bases care more about certain capabilities than about others.
B
So yes, that definitely needs to be discussed, but I think it's very useful, at the minimum, to go through some sort of design phase to understand what is possible. And I think one of the ways we should approach this is through the feature set.
B
Even if we end up supporting all of these capabilities in the protocol, it needs to be designed in a way that you can use only a certain subset, the subset that you care about. A particular kind of agent may only implement, let's say, status reporting; that's all they care about, and the rest they just don't implement. So if they connect to a management server which supports all of the capabilities, then for those particular kinds of agents you can only see their status: status reporting works.
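One common way to get this kind of optional feature set is capability negotiation: each side advertises the capabilities it implements, and only the intersection is used. This is a minimal hypothetical sketch of that idea, assuming a simple bit-flag encoding; none of these names come from an actual protocol.

```go
package main

import "fmt"

// Capability flags for a hypothetical management protocol. The protocol
// defines the full feature set, but an agent advertises only the subset
// it implements, and the server acts only on the intersection.
type Capability uint32

const (
	CapStatusReporting Capability = 1 << iota
	CapRemoteConfig
	CapAddonManagement
	CapAutoUpdate
	CapCredentialsManagement
)

// Negotiate returns the capabilities both sides support.
func Negotiate(agent, server Capability) Capability {
	return agent & server
}

func main() {
	// An agent that only implements status reporting, talking to a
	// server that supports everything.
	agent := CapStatusReporting
	server := CapStatusReporting | CapRemoteConfig | CapAddonManagement |
		CapAutoUpdate | CapCredentialsManagement
	common := Negotiate(agent, server)
	fmt.Println(common&CapStatusReporting != 0) // status reporting is usable
	fmt.Println(common&CapRemoteConfig != 0)    // remote config is not
}
```

With this shape, a server supporting everything can still talk to a status-only agent: it simply sees that only the status bit is set and never sends configuration.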
B
You
can't
manage
the
configuration
and
I
think
that's
fine
right.
I
think
we
should
approach
it
from
that
perspective,
so
I
had
that.
I
guess
this
was
the
next
on
the
next
slide.
I
call
it
low
coupling
feature
set
right,
so
you
choose
a
particular
subset
of
features
that
you
you
want
to
support,
and
I
totally
agree
it's
very
likely
that
not
all
of
these
will
be
supported,
at
least
not
for
open
planetary
collector.
B
Well, I guess we have some large vendors here who probably would consider this a requirement, and I think it's important that whatever we design works for very simple cases but also for very complex cases. What I mean by that is, let's say all I care about is that I have a number of agents and I just want to see whether they are healthy or not. That's all I care about, or maybe something slightly more complicated.
B
So
I
think
whatever
we
design
it's
it's
it's
again
valuable
if
we
do
not
design
it
for
a
single
specific
use
case,
but
make
it
flexible
enough
that
whoever
implements
this
management
solution
and
likely
it's
going
to
be
vendors
right
on
the
server
end
of
it.
The
vendors
can
choose
the
way
they
decide
how
much,
how
how
deep
they
want
to
go
into
the
the
cases
right.
What
do
they
want
to
support
simple
or
a
lot
more
complicated
and
then
so
the
on
the
previous
slide?
B
We
had
that
auto
updating,
right
in
particular,
but
also
remote
configuration.
This
can
be
very
dangerous
capabilities
if
done
improperly
right.
So
we
need
to
be
very
careful
about
the
security
of
whatever
we
are
proposing
here
so
likely
it
needs
to
employ
some
sort
of
zero
trust
from
the
let's
say
from
the
agent
side,
so
that,
even
if
your
management
server
is
compromised,
you
well
at
least
you
limit
the
blast
radius
right.
B
You
limit
the
damage
that
can
be
done
to
the
agent,
so
so,
if
you're,
if
there
is
a
way
to
download
especially
download
code
remotely
and
then
execute
it
at
agent
site,
which
is
what
auto
update
needs
right,
then
this
needs
to
be
very,
very
well
thought
out,
so
that
you
don't
end
up
with
disasters
right
so
security
in
in
the
form
of
particularly.
How
do
you
design
the
the
protocol?
Where
obvious
things
right?
B
You
use
tls
you
secure
connections,
but
also
in
the
form
of
recommendations
to
how
do
you
implement
the
particular
capabilities
and
then
the
last
point
I
have
is
well:
let's
say
you
have
some
sort
of
remote
configuration,
but
what,
if
your
control
plane
already
provides
that
right?
So
if
you
use
kubernetes,
you
probably
are
using
kubernetes
configuration
capabilities
right,
you
use
the
configmaps
new.
B
You
already
you
already
have
that
feature
coming
from
your
control
plane,
so
what
you
probably
don't
even
need
in
that
case
the
remote
configuration
capabilities-
or
maybe
you
do
right
so
can
they
complement
each
other?
Is
it
possible
to
for
for
those
to
coexist?
So
what
we
need
to
make
sure
is
that
whatever
we
propose,
it's
not
going
to
end
up
with
a
situation
when
really
there
is
some
some
sort
of
conflicting
things
that
happen
on
the
agent's
side
and
it
receives
a
configuration
from
the
management
server.
B
Then
it
has
actually
has
a
configuration
on
the
kubernetes
side
to
restart
and
then
whatever
it
receives
from
the
server
is
lost.
But
then,
if
you
do
the
opposite,
let's
say
the
server
side
overrides.
Is
it
has
a
priority?
Then,
if
you
try
to
apply
or
change
the
configuration
using,
your
typical
kubernetes
means.
B
Well,
it
doesn't
work
anymore
because
it's
actually
managed
by
a
server.
So
we
need
to
consider
these
scenarios
to
make
sure
that
we're
actually
not
engaging
in
some
sort
of
conflict
with
the
ways
that
people
are
already
used
to
managing
their
agents.
But
rather
we
try
to
provide
an
alternative
possible,
but
also
co-exist
right
with
those
for
those
existing
solutions.
H
Possibly we might want to include some requirements around resiliency or robustness, so that, let's say, an auto-update occurs but it fails for some reason. This is an implementation-specific thing, but the protocol ought to allow for a scenario where the agent rolls back to its previous version and then reports a status saying that this event failed, and this should be part of going through the protocol. There also could be some order-of-operations concerns.
H
Things
like
we
need
to
make
sure
that
you
know
if
we
ask
an
agent
to
update
at
the
same
time
that
we
ask
it
to
reconfigure
what
is
the
expectation
a
couple,
some
requirements
around
that
at
some
point
here,
I
think,
would
probably
be
good
yeah.
G
Yeah, my concern about the requirements is that I see in the issue that there is a dependency on config reload and hot config changes in the OpenTelemetry Collector, which I think is a hard dependency for any config management we want to do.
G
We need state inside the application that we can vary, that we can update, that we can actually manage remotely. I just want to ask if we want to talk about that as well, or if that is a different thing to do.
B
No,
I
think
we
should
definitely
talk
about
that
right.
If,
if
there
is
a,
if
there
is
going
to
be
a
way
to
remotely
update
the
configuration
of
the
agent,
then
it
well
for
that
to
be
valuable,
it
has
to
happen
on
the
fly.
It
shouldn't
require
the
user
to
go
and
then
restart
the
agent
manually.
That's
that's
point.
Yeah.
B
So
yeah
that
I
think,
but
that's
more
on
the
well,
how
do
we
then
implement
this
in
the
collector
right?
What
it
means
for
particularly
for
the
collector,
but
I
think
that's
that's
a
prerequisite
if
we
want
to
have
the
remote
configuration
as
a
feature
of
the
collector,
I
don't
think
we're
very
far
away
from
having
that
capability
yeah.
B
Okay,
so
I
think
I
have
one
more,
maybe
yeah
so
kind
of
trying
to
paint
some
sort
of
a
mental
model
of
what
we're
talking
about
here
right.
So
we
have
this
agent.
B
It can send metrics, but maybe it can also be traces and logs if we want. Possibly these two things can be one box; nothing prevents them from being a single machine. But logically, those are two different connections. And then the agent also does some useful work; that's its job, and those jobs also have connections.
B
Well,
in
case
of
open
flange
collector,
we
have
what
we
call
exporters
user
definable,
and
then
these
are
connections
to
whatever
telemetry
backend
back-end.
B
The
agent
wants
to
send
the
data
to
the
the
reason
I
wanted
to
show.
This
is
connected
to
that.
One
of
those
points
I
had
there
is
credential
management,
so
I
think
it's
important
that
whatever
we
design
as
a
feature
that
allows
the
management
server
to
to
to
manage
these
credentials
for
the
agent
or
offer
credentials
if
you
will
needs
to
take
into
account
this
this
picture
right,
because
there
is
three
different
sets
of
credentials
involved
here
or
the
types
one
is
how
the
the
client
the
agent
communicates
with
the
management
server.
B
The
other
is
how
it
sends
its
own
telemetry
to
the
back
end
and
obviously
it
can
be
the
same
access
tokens
or
same
certificates,
but
they
may
be
different
as
well
right.
So
I
think
it's
important
to
recognize
the
difference
if
necessary,
and
then
there
is
connections
that
the
exporters
use
to
send
the
data
to
the
telemetry
backend.
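The three credential types just described could be modeled along these lines, keeping each set separate so a server can rotate one without touching the others. This is a hypothetical sketch; the field names and the rotation helper are invented for illustration only.

```go
package main

import "fmt"

// Credentials is one set of authentication material: an access token or
// a client-side TLS certificate and key, PEM-encoded.
type Credentials struct {
	AccessToken   string
	ClientCertPEM string
	ClientKeyPEM  string
}

// AgentCredentials keeps the three credential types from the discussion
// distinct: the agent's connection to the management server, the agent's
// own telemetry connection, and the per-exporter data connections.
type AgentCredentials struct {
	ManagementServer Credentials
	OwnTelemetry     Credentials
	Exporters        map[string]Credentials
}

// RotateExporter replaces the credentials for one named exporter, as a
// management server might do when a token expires or must be revoked.
func (a *AgentCredentials) RotateExporter(name string, c Credentials) {
	if a.Exporters == nil {
		a.Exporters = make(map[string]Credentials)
	}
	a.Exporters[name] = c
}

func main() {
	var a AgentCredentials
	a.RotateExporter("otlp", Credentials{AccessToken: "new-token"})
	fmt.Println(a.Exporters["otlp"].AccessToken)
}
```

Keeping the sets distinct is what makes rotation at scale tractable: the server can rotate a back-end token fleet-wide without invalidating the management connection it is rotating it over.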
C
I think this is a really good summary of the goals that we want to achieve, and I'm pretty much aligned with all of them. I think we will need to iterate on them and accomplish them, let's say, over time. I actually started doing research on status reporting, because this is an area where I have, let's say, an immediate need to solve in one way or another.
B
So I guess maybe moving on to what to do next. Based on what I presented, and it's totally open, don't consider it in any way obligating, here is my proposal, I guess.
B
I think we should maybe see if there is an existing management protocol. I looked and didn't find one, but maybe there is, so we don't reinvent the wheel here; or maybe we design our own, or maybe we choose something as a foundation and extend it. Then we do the design and validate that it works, especially at scale. Then I would want to make sure that we have reference implementations in Go.
B
Why Go? Obviously we need Go implementations for the Collector, but I definitely would want to implement this as independent libraries so that other Go-based agents can reuse the client portion of it, but also, why not, the server portion. If a vendor wants to implement this management solution that we offer, let's make it easy for the vendors: server and client libraries that implement the protocol. Then, what exactly does it mean to implement these management features in the Collector, which is what we were just discussing: how does the configuration work?
B
How do we reload configuration changes? There is ongoing work on figuring out what the internal configuration APIs should look like. There are some, I would say, draft capabilities at the moment, something called config sources, but it's not complete yet.
B
And then there is one thing that I was looking at: what do we do with our own telemetry in the Collector today? I found that we actually report our own metrics in a different way compared to how we report the metrics of other processes.
B
There is a receiver called hostmetrics in the OpenTelemetry Collector which reports metrics of other processes, things like CPU usage and memory, and the Collector also reports its own metrics today. Unfortunately, we ended up having two different ways of doing this, and the reason, I guess, why this happened is that there is no standard in OpenTelemetry right now which defines how this should happen.
B
There is a section in the OpenTelemetry specification which is still to be defined; we wanted to have that, but we don't yet, and because of that, I guess, we ended up with two different ways of process metric reporting. So I think it's also important to do this work, to make it part of the OpenTelemetry specification, and then we follow that for the Collector itself.
B
So I guess one more thing I forgot to mention. I said I've been working on an agent management solution internally at Splunk. What I have is some sort of draft design for this protocol.
B
It's not final; it's a work in progress. I got clearance from our legal team to publish it as open source, so I will do that, and then we'll see how much of it we want to use, how much we want to change or delete, or whatever. It's going to be Apache-licensed, so we will be free to do whatever we want with it, fork it or change it. I'll do that and I'll post the link in our Slack channel.
A
I just want to say thank you for the overview and for taking us through the high-level summary of the model and what you've been thinking about, and I look forward to digging into that draft protocol. I'm glad legal is on board.
B
Yeah, as next steps: first I will publish the draft protocol and post the link in Slack for everybody to review and comment. Then, based on the to-do list I had there, I will create some GitHub issues and link them from the agent management summary issue. And then feel free to think of whatever you think needs to be done and create issues for those things as well, and then we're going to look for contributors, for volunteers, to work on those things.
G
Yeah, I was thinking about creating some use cases to help along with that.
B
Now, that's a very good proposal as well. If we have non-engineers here, maybe we have product managers, then maybe they can help with that: define the use cases and scenarios for how they think this thing should work. That may also be useful.
E
I have a question with respect to your slides. I'm looking forward to looking at some partial implementation and so on, but I wonder if you thought in particular about identification of the clients, the management clients? Because if our goal is to scale up to millions of instances, we probably need to think pretty carefully about how we identify the clients.
B
I have thought about it; it is in the protocol. Very briefly: there is going to be some sort of unique identifier per agent, but also identifying attributes, as you said, which will allow grouping of the agents by some sort of property, for example the agent version, and maybe user-defined attributes. Those then feed into what you want to do with this agent, maybe pushing different configurations to different kinds of agents. So the answer is yes, absolutely; I think it is necessary.
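The identification scheme described in this answer, a unique identifier per agent plus identifying attributes the server can group on, could look roughly like this. All names here are illustrative, not the actual draft protocol.

```go
package main

import "fmt"

// AgentDescription is a hypothetical identification record: a unique
// instance identifier plus identifying attributes (agent type, version,
// user-defined labels) that let the server group agents and, for
// example, push different configurations to different groups.
type AgentDescription struct {
	InstanceUID string            // unique per running agent instance
	AgentType   string            // e.g. "otel-collector", "fluent-bit"
	Version     string
	Attributes  map[string]string // user-defined, e.g. "env": "prod"
}

// GroupBy buckets agents by the value of one identifying attribute, the
// kind of operation a server would use to target a configuration push.
func GroupBy(agents []AgentDescription, key string) map[string][]AgentDescription {
	groups := make(map[string][]AgentDescription)
	for _, a := range agents {
		groups[a.Attributes[key]] = append(groups[a.Attributes[key]], a)
	}
	return groups
}

func main() {
	agents := []AgentDescription{
		{InstanceUID: "a1", AgentType: "otel-collector", Version: "0.37.0", Attributes: map[string]string{"env": "prod"}},
		{InstanceUID: "a2", AgentType: "otel-collector", Version: "0.37.0", Attributes: map[string]string{"env": "dev"}},
	}
	fmt.Println(len(GroupBy(agents, "env")["prod"])) // 1
}
```

At the scale mentioned (millions of instances) the server would index rather than scan, but the shape of the data, stable unique ID plus groupable attributes, stays the same.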