From YouTube: 2023-03-15 meeting
Description: Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
E
Yeah, I can talk about the first one — Ping filed it, but I can talk about it. Our application is thinking about extending the current TLS settings to support loading the configuration as a string inside the struct. Today, the TLS settings that are shared across components only support file-based configuration: within the settings, it tries to load the certificate or private key from files.
E
However, the way our application uses the configuration is slightly different. We load the configuration dynamically, which means we receive the cert and private key already in memory — we use this external package called supervisor. Yeah, but...
D
E
Yeah, sure. That's why we're thinking about adding support to have those fields in the struct, because the alternative is that we have to write that data into a file locally and then specify those files in the configuration.
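The extension being discussed can be sketched roughly as follows. Note that the struct and field names here are illustrative stand-ins, not the Collector's actual configtls API:

```go
package main

import (
	"fmt"
	"os"
)

// TLSSetting is an illustrative stand-in for the shared TLS config struct
// being discussed: today only the *File field exists; CertPem is the
// proposed field for passing the PEM content directly in memory.
type TLSSetting struct {
	CertFile string // path to a certificate file (existing behavior)
	CertPem  string // certificate PEM supplied as a string (proposed)
}

// certBytes prefers the in-memory value, so callers that already hold the
// certificate never have to round-trip it through the filesystem.
func (s TLSSetting) certBytes() ([]byte, error) {
	if s.CertPem != "" {
		return []byte(s.CertPem), nil
	}
	if s.CertFile != "" {
		return os.ReadFile(s.CertFile)
	}
	return nil, fmt.Errorf("no certificate configured")
}

func main() {
	s := TLSSetting{CertPem: "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"}
	b, err := s.certBytes()
	fmt.Println(len(b) > 0, err)
}
```

The actual change would presumably thread these bytes into `tls.X509KeyPair` instead of loading from disk with `tls.LoadX509KeyPair`.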
E
However, I can see that some other people might want the same support as well, because writing, say, a key to a local file requires some cleanup — otherwise it's like a security breach. So that's why I'm proposing this, to see what the community thinks about it.
D
It's an edge case, to be honest. The reason it's an edge case is that not everyone is using a custom binary and fetching these from other sources. Most people will have to fetch them from somewhere, and that somewhere will not be memory, because there is no way you have them in memory unless you fetch them from some external source. So that's why I don't know how useful it would be for the majority of people, but...
D
I
mean
I
would
not
want
to
also
be
the
one
who
who
blocks
this,
because
it's
not
I
would
rather
prefer
to
have
couple
of
other
folks
saying
that
they
need
this,
and
then
we
we
should
look
into
add
support
for
this,
so
my
role
would
be
let's
find
out
if
two
two
people
or
three
people
need
these
and
then
if
there
is
a
at
least
a
minimum
of
two
three
users
that
need
this.
Let's
add
it.
Okay.
Does
it
make
sense
like.
E
Okay, so in general we only want to change it if there are multiple parties that want something like this. I would say that's fair. So what would you propose as an alternative for us, for now?
G
Yeah, so I just want to confirm: do you want the whole TLS settings struct to be serialized and then deserialized, or do you want only the actual certificate to be provided?
G
Okay, so only the certificate — as in, a base64-encoded version of the certificate as an environment variable, and then you use that as the value instead of a file? — Yeah, that's correct. — All right. I've seen that pattern used before in some other situations, so it's not entirely unheard of, and I would be okay with having something like that.
G
You
know
find
whatever
we
had
previous
requests
like
that
I,
don't
remember
seeing
one,
but
perhaps
there
is
or
waiting
for
someone
to
you
know,
of
course,
the
a
a
workaround
for
you
would
be
that
you
have
an
unique
container
that
reads
from
from
environment
variables
for
the
Pod
or
you
know,
for
for
whatever
they're
deploying
and
then
writing
into
a
specific
place.
That
is
in
read
by
the
The
Collector
itself.
So
you
have
a
feasible
workaround
for
the
situation.
I
believe.
E
I don't think I follow on the workaround, sorry. Can you repeat that?
G
You would have a common mount path shared between the init container and the actual container, and the init container is executed before the actual workload. It reads the environment variables containing the certificates that you want to load and basically makes the bridge between environment variables and a local file: it just writes to a file that can be read by the workload container.
E
Okay, so that's pretty much writing into the local file. Yeah.
H
E
Yeah, okay, I'll double-check and see if anyone else has requested this before. If not, we can just go with the alternative for now. Yeah — but I think someone else commented.
F
Yeah, from my side, I'm not someone who wants to use this, but it does feel like, for the rest of the components — say, configurations where we pass a sensitive value — we don't read it from a file. The way we usually push people to do it is: stop reading it from a file; rather, you pass the value, like this issue proposed, and then you use a provider — like a file provider or an environment provider — to read it from wherever you want.
F
G
Okay, yeah. I mean, it would need another value provider — as in a config provider — because the value for the file itself would be something like a base64-encoded string on one line, so that when decoded it shows where the new lines are; those are things that kind of matter for TLS certificates. So the regular providers that we have might not be sufficient.
G
The cert file property could accept a value that is prefixed with something like `file` or `env` or `static`, then a colon, and then the actual path to a file, or the base64 value, or things like that — and then a value provider reads this data and provides the actual value. We had a similar situation for, I think, the bearer token extension, where we...
G
We wanted to provide both a way for users to read a file and a way to provide a static token, and the solution was to just add two properties, because we didn't have a common mechanism to abstract the source of the information from the component itself.
D
Then I think we should look for a generic mechanism — something like what we have with the config map URIs, where we specify the source and everything: you can say `file` for this, you can say environment variable, you can embed it directly there, or Vault, or whatever. So I think we should provide a generic mechanism, whatever we call it, and we should use the same syntax as we have for embedding other things.
G
That sounds good. The only concern that I have, then, would be...
E
So for the receiver, that would mean we're adding an extra field to obsreport inside the OTel Collector. For the exporter it's a bit of a different scenario: since most of the instrumentation is done in the exporter helper, it doesn't really capture the exact bytes. So I have added some of the scenarios that I was thinking of, but in general I was just wondering — I guess it's back to...
D
Yeah, there is a very good way, but you need specific things for every environment. gRPC, for example, has its own way, and it exposes the size for you, but you have to implement some specific things inside gRPC, most likely. HTTP exposes the size somewhere, but you also need an interceptor there. So I think we will have to implement something very specific for every RPC or transport framework to have this — in gRPC, I bet.
C
I have a PR that I opened and closed. It does extend obsreport with a new interface allowing you to interact with those gRPC stats handlers — gRPC has an API for us; in HTTP you need to write a RoundTripper to do it. The thing we got stuck on, and why I closed my PR and decided to do it for myself without putting it in core...
C
...is that we were asked whether this should be done using the otel-go gRPC instrumentation package, and I don't think it's worth doing that, or worth waiting — especially because obsreport is already customized for a particular monitoring application. So extending it with two new metrics, in exactly the consistent style, without making decisions external to this group, makes sense to me, and that's what I would recommend. And it's very little code — you can see my PR; I put it in the chat. Sure.
J
I'd like to see at least an interface that each component can opt into, and if there are common strategies for some components, then that's great too. But I think at least in some cases there are components — like, I think, the filelog receiver, because I hear this all the time — where people want to know...
J
You
know,
what's
the
total
size
of
all
the
files
it's
suggesting
and
then
like
makes
sense,
that
would
be
reported
as
a
as
an
observability
metric
on
The,
Collector
itself
and
and
similarly
for
some
of
the
exporters
that
worked
with
commonly
I
think
we'd
want
to
know.
Yeah.
D
But
I
would
rather
I
would
rather
not
expose
interfaces.
I
think
we
have
enough
plugging
points
that
even
Eric
for
you,
for
example,
to
answer
your
thing:
if
you
are
controlling
everything
by
the
way
you
can
install
a
stats
Handler
in
the
grpc,
as
in
the
global
grpc,
stats,
handlers
and
you'll
be
everywhere,
so
you
don't
have
you
don't
need
any
support
from
us
to
to
achieve
that
to
to
get
injected
into
the
grpc
and
get
access
to
all
these
things
go.
K
I
have
had
ad
I
have
seen
customers
asking
for
this.
There's
been
a
couple
people
I've
talked
with.
That
would
like
to
know
the
like
the
size
of
the
traffic
that
the
collector
is
handling
essentially
are
exporting,
so
I
I
can
vouch
for
a
need
for
this
type
of.
G
Yeah, the easier way of doing things now is the stats handler part, so you...
C
If you haven't seen my PR, it is actually pretty small. It just creates an interface — an object that you can create as one of these components; if you use gRPC, you then go construct a stats handler. And I wasn't even proposing to put common code together for that stats handler — it's about 20 lines of code.
D
So we all agree that we want to have a metric — an OTel Go metric — recorded for this, and this should be an optional config for a component to say "I want my metric or not", or "no, I actually don't need that, because it's using..."
C
Okay, the way I did it: you have basic metrics — it's off; you have normal metrics — you get the predominant direction of traffic counted only; and if you have detailed, you get both directions of traffic. So exporters will count bytes sent and receivers will count bytes received, normally; and if you ask for detailed, it'll count both directions for each component. I don't think we need any more configuration.
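The basic/normal/detailed scheme described above can be sketched as a small decision function. The names are illustrative, not the Collector's actual telemetry-level constants:

```go
package main

import "fmt"

// MetricsLevel mirrors the scheme described: basic = off, normal = count
// the predominant direction only, detailed = count both directions.
type MetricsLevel int

const (
	LevelBasic MetricsLevel = iota
	LevelNormal
	LevelDetailed
)

// directions reports whether sent and received byte counters should be
// recorded for a component whose predominant direction is "send" (an
// exporter) or "receive" (a receiver).
func directions(level MetricsLevel, predominantSend bool) (countSent, countRecv bool) {
	switch level {
	case LevelBasic:
		return false, false
	case LevelNormal:
		return predominantSend, !predominantSend
	default: // LevelDetailed
		return true, true
	}
}

func main() {
	s, r := directions(LevelNormal, true) // an exporter at normal level
	fmt.Println(s, r)                     // counts sent only
}
```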
D
Yeah, and you have the views on top of that, which in OTel Go we can configure. Okay, I think that's fine. Let's not jump into saying "obsreport" or whatever — we want to have this, and how we implement it is up to whoever is going to do it.
D
I don't think we need to expose this. We can have it as an internal in our gRPC server settings and HTTP server settings: we can inject these gRPC stats handlers and such, and we don't have to expose an obsreport public API to record these things. That's why.
E
Okay, that makes sense. So it sounds like a lot of people want it, and then it's just a matter of how we do it. We can just leave the discussion on the GitHub issues themselves.
D
Unfortunately, there are two issues. Let's stick with the exporter PR and have the discussion there, because the receiver will follow the same result. Okay — I know there are two different issues, but just for sanity we'll keep the discussion only on the exporter one. Sounds good. Perfect.
L
All right, so: configuration reloading. This sounds like it might get a little bit general. Basically, the state of configuration reloading right now is that we have configuration providers: you can give them a URI and they give you back configuration. Optionally, they can also send notifications back to the Collector when their configuration changes, and then what the Collector does right now is reload the configuration unconditionally when it gets a notification like that. Also, none of our current providers actually implement this.
L
So
in
practice
nothing
nothing
happens
now
so,
and
the
question
kind
of
is
generally.
How
do
we
want
this
to
work
and
how
to
get
there,
because
originally
this
came
up
from
the
fact
that
I
wanted
to
implement
a
configuration
watching
for
files,
so
I
wanted
to
make
it
so
that
when
a
configuration
file
changes,
then
the
collector
reloads
it
automatically
and
the
actually
the
actual
change
for
this
is
not
very
interesting,
but
the
question
of
should
this
be
enabled
for
everyone.
If
not,
then
how
do
we
let
users
control
it
like?
L
Do
they
control
it
generally
or
do
they
control
it?
Per
file
I
had
a
change
where
I
entered
just
a
flag
that
just
enabled
it
with
a
single
flag
for
everyone
or
disabled,
and
it
was
disabled
by
default.
So
it
kind
of
kept
the
current
behavior,
but
then
Dimitri
recently
commented
on
it
that
he
thought
it
should
work
a
little
bit
differently.
L
Yeah,
so
I
think
kind
of
this
warrants
a
bit
of
discussion
as
what
should
happen
in
general
and
I
also
actually
looked
at
a
little
bit
of
prior
art
for
this,
because
I
was
I,
I
think
I
hallucinated.
That
other
agents
also
did
this.
But
I
I
checked
a
bunch.
I
checked
Prometheus
Telegraph
fluently
fluent,
but
none
of
them
actually
do
any
configuration
reloading
they
just
like
they.
Let
you
do
they.
L
Let
you
send
a
hang
up
signal
which
we
now
also
do,
and
that's
it
you
know
figure
it
out
yourself
so,
but
also
we
did
have
some
I
know.
We
had
some
users
asking
for
the
file
reloading
specifically
so
maybe
like
at
this
point,
I'm
also
kind
of
open
to
the
idea
of
just
scrapping
that
interface
and
not
doing
any
notification.
Loading
whatsoever.
I
think
that
should
be
on
the
table
as
well.
L
So
does
anyone
have
any
opinions
on
what
we
should
do,
because
this
seems
kind
of
like
a
hot
potato,
a
little
bit
like
there's
this
interface?
No,
but
nothing
uses
it
and
when
you
actually
try
to
use
it,
there's
a
lot
of
hesitation
about
what
should
happen
and
how
we
should
get
from
the
point
where
we
at
the
point
where
we
want
to
be.
J
Sorry, I don't know enough about this, so I might be asking silly questions, but do we have a well-defined understanding of what happens when the configuration is reloaded? Because I haven't seen this in the service package.
L
It
is
there,
it's
perhaps
I
think
it
also
needs
a
little
bit
more
robustness,
but
that's
a
separate
problem
right
now.
What
it
does
is
if
it
gets
a
new
configuration
and
that
configuration
is
wrong,
it
will
just
crash
which
is
probably
not
what
what
it
should
do,
but
it
does
work
if
you
send
a
sync,
hang
up
signal
which
is
kind
of
a
standard
and
that's
how
what
happens
if
you
do
like
systems
you
reload,
for
example,
it'll
reload,
the
configurations
that
it
has
and
and
like
construct
a
new
collector
instance
in
process.
K
About the way it's implemented now, also: it'll reload the config, but it won't reload telemetry — anything in pipeline telemetry. I can't remember why exactly; I think it's something about the way that we handle spinning up the Prometheus metrics exporting and all that stuff. But yeah, I think if you reload the config, the telemetry settings are not honored — that's a one-time thing.
L
I
think
they're
intertwined
in
practice.
Right
now,
at
the
very
least
like
we
have
kind
of
an
experimental
op-amp
extensions
people
have
been
playing
with
and
the
way
that
extension,
what
that
extension
does
is
it
literally
sends
the
signal
to
its
to
the
to
the
same
process,
which
is
really
hacky,
so
there
should
probably
be
some
way
like,
maybe
like
I
can
see
that
there's
a
need
for
this
interface
internally
for
components
but
but,
like
here,
I
think
we're
talking
about
the
user
facing
part
of
it.
L
Like
you
know,
users
configure
The
Collector,
they
started
with
some,
maybe
some
flags,
and
you
know
what
should
happen.
What
should
they
be
able
to
change
with
and
change
here
and
a
really
annoying
part
of
this
also,
is
that
you
can't
you
know
you
can't
use
the
proper
config
to
to
change
anything
because
you
haven't
loaded
it
yet.
L
So
the
only
way
you
can
change
it
is
is
by
switching
flags
and
then,
if
you
start
to,
for
example,
if
you
start
start
to
want
to
reload
config
from
one
file
but
not
from
the
other,
then
you
have
to
do
all
of
this.
In
the
in
the
flags,
so
it
becomes
very
unwieldy,
very
quickly.
M
Currently you can have all the configuration in the config map, and if you have your deployment with DaemonSets in a cluster of, say, a thousand nodes, and you need to change something — a small configuration adjustment in one of your processors, say — you will need to roll out the whole cluster, which could take hours. For those use cases, I think the current behavior is pretty decent: for example, if you change the configuration and it's broken, you only have one pod in that...
M
In
that
phrase
right
ideally,
but
this
is
also
something
we
need
to
separately
check
separately
because
I,
it's
not
like
how
to
work
out
of
the
box
with
this
integration
like
for
kubernetes
new
stuff,
something
else
so
it
like
it
changed
one
by
one
and
some
like
some
synchronization
need
to
be
applied
there
as
well.
But
that's
that's
the
use
case.
I.
Think
it's
worse.
Investigating
and
looking
into
that
and
yeah
go
ahead.
L
I,
don't
ask:
are
you
sure
you
want
that
in
kubernetes,
because
to
me
that
sounds
like
an
anti-use
case
in
kubernetes,
as
in
you
know,
yes,
we're
reloading
reloading.
However
many
demon
set
pods
takes
a
while,
but
kind
of
that's
kind
of
the
principle
of
kubernetes
right
that
you
have
immutable
everything
and
if
you're
changing
configuration
you
should
you
should
actually
reload
all
the
pods
and
then
you
can
use
kubernetes
own.
You
know
mechanisms
for
this,
for
example,
if
kubernetes
reloads
one
thing
like
you
said:
it'll
stop
the
rollout.
L
And actually, the way config map reloading specifically works in Kubernetes is very obscure, and I'm not sure if it's subject to actual guarantees about when that's going to happen and how quickly. Last time I actually checked, it was quite the quagmire inside the kubelet — so I don't know if it's a good idea to even suggest that anyone actually do this.
L
Realistically,
if
you
have
a
really
large
cluster,
then
you're
very
sophisticated,
and
you
should
you
can
probably
figure
out
some
way,
some
better
way
of
of
doing
this
and
mostly
kind
of
concerned
about
what
happens
with
like
the
very
basic
use
cases
right.
You
start
the
collector
on
some
single
host
and
how
how
should
it
behave
by
default
like
should
it
reload
the
configuration
or
not.
B
I mean, I think it makes a lot of sense to be able to reload a configuration. I do think it adds a documentation issue regarding which configuration may not be reloadable.
B
So
you
need
to
be
able
to
say
this
is
either
actively
reloadable
or
not,
and
you
you
know,
you
just
need
to
be
able
to
figure
that
out,
and
then
you
have
the
other
half
of
the
problem
that
you
mentioned
of
okay
I
just
tried
to
reload
the
configuration,
but
they
you
know
fat,
fingered
it
and
when
I
went
to
reload
it
it
didn't
reload.
Now
what
so,
all
that
that
has
to
be
documented
and
carefully
handled,
but
I
do
think
it's
worth
doing.
J
Yeah, I agree. It seems like something that would be valuable, but at the same time it seems like the default should be that none of this happens, right? The user deploys a configuration; I think they expect it's going to stay the same unless they very intentionally update it. And one way they should be able to intentionally update it...
J
It
is
by
specifying
the
mechanism
that
would
update
it,
because
I
could
see
that
being
part
of
the
configuration,
but
it
seems
it
would
seem
very
surprising
to
me
if,
if,
as
a
user,
I
think
that
things
would
just
get
updated
in
a
way
that
I
wasn't
anticipating
I.
J
Always
the
reload,
but
then
you've
lost
some
state
that
you
might
be
able
to
maintain
in
a
hot
reload.
M
Yeah
yeah
that
that's!
Why
that's
why
I
kind
of
suggested
and
lindowers
making
it
appear
config
source
definition
like
they
did,
that
configuration
what
this
result
will
or
not
opposed
to
like
having
one
flag
for
the
all
all
the
config
sources
in
the
yaml
file
because,
like
as
you
mentioned,
there
might
be
some
files
that
are
pretty
important
and
we
I
don't
expect
them
to
be
Auto
reload,
but
others
solids
might
be
less
important.
That
can
can
be
treated
with
how
to
reloading
interesting.
G
Yeah
so
I
mentioned
that
the
system
to
reload,
mostly
in
connection
to
the
way
that
kubernetes
works,
so
kubernetes,
will
bring
the
port
down
and
bring
Newports
up
for
the
with
the
new
config.
G
Now
kubernetes
will
hold
the
connections
thing
right
or
the
new
connections,
while
the
service
is
being
has
been
rolled
out
to
the
new
versions,
are
being
rolled
out
so
that
new
new
connections
can
can
go
to
the
new
Services,
the
new
parts,
with
the
new
configuration
and
if
we
Implement
that
right
on
the
collector
side
like
a
graceful
shutdown,
then
we
can
ensure
that
no
data
is
at
the
pipeline
by
the
time
that
we
shut
down.
G
And
if
we
do
that,
then
even
systemd
can
be
used
to
reload
the
The
Collector,
because
we
gracefully
shut
down
and
our
new
connections
that
are
coming
in
they.
We
can
start.
You
know
not
accepting
those
connections
anymore,
because
the
receivers
are
probably
down
already
and
the
clients
who
retry
the
same,
sending
the
same
data
in
a
few
moments
and
in
a
few
moments
the
new
collector
will
be
up
receiving
that
new
configuration.
So
you
know
I,
don't
think
it
is
something
that
we
should
be
having
on
The
Collector
itself.
G
There are many subtle things that are very hard to get right, especially on a stateful pipeline — "stateful" in air quotes, of course. And I think the easiest thing for us — not only the easiest, but the more understandable solution for our users — is if we just require a process reload to get the new configuration. Just like, I don't know, httpd: the way httpd reloads, it stops and starts, basically, right?
G
So
it
it
does
decrease
for
shutdown
holds
the
new
connections
and
when
it's
ready
to
accept
new
connections,
it
serves
the
ones
that
are
being
blocked
so
I
think
at
the
most.
We
should
be
doing
something
like
that
shut
everything
down
and
bring
everything
up
again,
but
I
think
that
can
also
be
done
by
by
kubernetes
or
system.
Do
you
know?
Whatever
is
managing
the
process.
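The httpd-style "stop, drain, start" reload can be sketched with net/http's graceful `Shutdown`. This is a minimal stand-in for what a receiver would do, not the Collector's actual shutdown path:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

// serveAndDrain starts an HTTP server (standing in for a receiver),
// serves one request, then performs a graceful stop: Shutdown stops
// accepting new connections and waits for in-flight requests to finish
// before returning — at which point a new instance could be started
// with the new configuration.
func serveAndDrain() string {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	})}
	go srv.Serve(ln)

	resp, err := http.Get("http://" + ln.Addr().String())
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Drain: in-flight requests complete, new connections are refused.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(serveAndDrain())
}
```

Clients that retry shortly after the drain would reach the restarted instance, which is the behavior being described for both systemd and Kubernetes rollouts.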
L
Yeah, so that's kind of how it works in principle, right? The reloading mechanism actually is there. It might not do everything it should do — like Tyler mentioned, something around the metrics might not actually be reloaded — but the actual Collector object, with all the component configurations and so on, is reloaded, and it works with systemd out of the box in this sense: you can do a systemd reload and it will reload.
L
This
is
not
actually
restarting
the
process,
but
but
it's
doing
a
lot
of
things
and
in
general,
I
kind
of
I
definitely
think
it
would
be
simpler
for
us.
If
we
just
didn't
do
it,
the
question
is
kind
of.
Is
there
you
know?
Are
there
like
serious?
Maybe
there
should
be.
We
should
like
circulate
the
issue
and
see
if
there's
like
enough
enthusiasm
for
the
automatic
for
the
foreign.
A
It is a bit different, because Prometheus is not the usual example of something that scales like a big DaemonSet, right? Mostly you have one Prometheus, maybe a couple of shards, so the reloading of that is a different story than for daemons. So, for example — Prometheus doesn't actually, as you said, have an auto-reload, but then for Kubernetes they have this config-reloader container, right — a sidecar.
L
Yeah, this is why I wanted to add the flag: it would be disabled by default, but then, if somebody actually wanted to play with the reloading, they could enable it wholesale. I know that adding flags is not great, but I'm not sure if adding some additional switch to the config stanzas is really going to be...
L
...much better in practice. Although, if you have some idea, Dimitri, about how that would look, we could talk about it.
G
Yeah, I just wanted to mention that we might be missing someone in this discussion — I think he has a lot of input in this area, because I remember him being very active on discussions related to hot reload.
K
This is a cry for help: I need to do the licenses-and-notices work for the Go auto-instrumentation, and it turns out we have to do it for the Collector as well, it sounds like — and it will be an absolute nightmare to do it for contrib. I've been looking for automated ways to do it, and I can't find a ton of great stuff.
K
There's
like
something
called
go
licenses
or
something,
but
it
doesn't
look
like
it
does
everything
that
at
least
the
the
community
issue
is
expecting
that
we
have
in
our
license
and
looking
at
some
other
public
repositories,
Like
Jaeger,
Prometheus
kubernetes,
there's
lots
of
different
ways
that
people
are
doing
it
like
kubernetes
has
a
pretty
ridiculous
amount
of
manual
code
in
order
to
update
the
licenses,
it
looks
like
Prometheus
is
doing
it
manually.
G
Tyler, I'll bring this to the GC. My current understanding is that we don't actually have to provide that with the binaries. What we have to do, for users who request that information, is provide them with the sources and licenses and so on. We don't have to bundle the sources with the binaries, or the license headers with the binaries, and things like that. So only if users ask for the sources do we have to provide the sources, alongside the licenses.
G
Yeah, it never went through. I don't know if it's a problem for anything in contrib, but that thing won't handle any C code. So at least for the Go auto-instrumentation, which has some C code bundled in there with its Go code, that tool doesn't handle it at all. It also doesn't spit out the actual author copyright.
K
It
just
gives
you
the
the
name
of
the
dependency,
the
the
link
to
the
license
and
the
like
license,
that's
being
used
like
Apache
or
MIT
or
whatever,
but
it
doesn't
include
any
of
like
the
author
copyright
information,
which
at
least
one
person
the
person
who's
driving.
This
issue
is
asking
for
so
yeah
I
would
really
appreciate
it.
If
we
could
take
this
to
the
GC,
it
feels
pretty
it
felt
pretty
overwhelming.
Yesterday,
when
I
was
trying
to
do
it,
I
was
like
I'm,
not
a
lawyer
and
I.
H
So,
for
that
reason,
can
I
suggest
that
if
gerasi
is
intending
to
take
this
to
the
GC
that
we
cut
this
conversation
here-
and
this
sounds
good
recorded
for
him.
G
Yeah,
the
only
thing
that
I
wanted
to
ask
you
Tyler
and
other
maintainers
is:
if
we,
if
we
get
the
answer,
that
it's
not
necessary,
would
you
still
like
to
provide
those
files?
So
are
you
looking
for
a
technical
solution
or
are
you
looking
for
more
or
broader
guidance,
I.
K
Am
selfishly
looking
for
a
way
through
this
issue,
so
I
can
have
a
Go
Auto
instrumentation
distribution
when
it
relates
to
this
Community
like
The
Collector
Community.
We
have
also
been
asked
to
address
this
issue,
and
I
am
worried
that
if
we
are
asked
to
accumulate
those
licenses
and
notices
for
contrib,
it
will
be
massive
and
difficult
to
maintain.
G
All
right,
independent
from
the
discussion
with
the
GC
I'll,
then
open
a
ticket
with
the
sincere
directly
and
ask
for
guidance
on
what
other
projects
are
doing,
because
if
they're
asking
Hotel,
they
are
very
likely
asking
privities
and
and
other
projects
as
well,
and
we
may
have
a
solution
already
for
that
sweet.
So.
M
Yeah, so the thing I want to talk about is updating the docs on our website. There is a suggestion to replace the definitions of the deployment scenarios of the Collector, and the replacements don't seem clear to me. For example, we are changing the concepts of agent and gateway to some other names — that's one problem — and another one is that those new names don't really reflect the same terms; their definitions are a bit confusing to me.
M
So,
of
course,
I
would
appreciate
more
eyes
on
that
from
nickel.
Collector,
stick
100
PR
and
also
like
to
discuss
those
terms.
Actually
so
from
my
I've,
been
thinking
always
like
those
two
types
of
deployments
are
like
the
most
probably
common
and
being
useful
to
to
Define,
at
least
like
Asian
is
a
host
installation
of
The
Collector,
whether
it's
demon
said
whether
it's
just
like
one,
whatever
very
visual
machine.
M
One
collector
on
one
virtual
machine,
Etc
and
Gateway
is
like
some
like
deployment
somewhere,
where
you
can
just
like
send
the
data
through,
but
so
it
does
like
be
some
bigger,
batch
and
stuff
like
additional
annotation
Etc
and
that
I
think
we
it's
not
being
used
a
lot
through
the
collector,
but
I
will
interact
a
few
places
where
it's
been
used,
and
maybe
we
need
to
establish
these
terms
or
other
and
like
use
them
actually
more
and
Define.
M
Maybe
in
the
like
header
over
there
read
me
whether
this
particular
receiver
can
be
run
in
both
modes
or
one
over
the
other.
So,
for
example,
host
metrics
receiver
is
only
agent
mode
and
the
deployment
supposed
to
be
Etc
yeah.
So
what
do
you
think
folks
and.
B
Sorry — just looking at the documentation, it specifically defines the terms and what they mean, the context in which they're used — like, "a centralized collector deployment pattern consists of..." — so they are defined there.
B
But as for the exact words used — I mean, what you're saying isn't... I think we can clarify further, but I'd sort of like to say: couldn't we commit this because it's better, and then go forward with additional proposals for improvements — "let's change this word into something else" or whatever — rather than getting bogged down shooting for perfect when what we have really needs improvement?
M
But
why
should
we
rename
to
something
that
we
don't
agree
on
and
then
find
another
another
name
to
that
sort
like
it
will
be
another
additional
step
with
like
introduce
A
New
Concept
that
may
not
stay
for
the
going
forward
right.
That's
my
concern
here
and
also
they
are
not
like
one-to-one
the
compact
like
compatible
with
each
other.
So
they
are
those
terms,
and
it's
like
decentralized.
Journal
is
a
bit
unclear
here.
It
says,
like
instrumentation
can
send
to
one
collector
or
another,
which
is
completely
different
from
the
agent.
M
So
yes
and
those
things
are
like
different,
so
we
need
to
solve
them
separately,
but
if
we
change
the
name
to
something
just
to
merge
the
pull
request-
and
we
want
to
figure
out
what
would
be
a
next
like
names
that
we
agree
on,
it
will
take
like
another.
Some
amount
of
time
with
this
intermediary
terms,
added
to
the
documentation,
I,
don't
believe.
That's
desirable.
K
Did
the
author
give
an
explanation
for
why
they
removed
and
why
they
switched
to
calling
it
decentralized
and
centralized
instead
of
just
naming
those
files,
agent
and
deployment
or
agent
and
Gateway
or
agent
and
deployment?
And
then
just
by
keeping
all
the
content
the
same.
K
I
guess
I
I
personally,
also
like
agent
in
Gateway
or
agent
and
deployment
more
agent
and
deployment
or
I
guess
technically
Damon
said
and
deployment
is
what
we're
using
the
helm
chart,
but
agent
and
agent
in
Gateway
works
for
me
is
the
reason
this
is
happening
is
because
of
the
whole
agent
discussion
between
Java
agent
and
auto
instrumentation,
and
all
of
that
is
that
why
they're
trying
to
get
away
from
the
word
agent.
M
G
G
To change them, yeah. So let me link the original things that I've written; perhaps you can help me figure out what's going on.
G
All right, so this is what I've actually written. I just basically put a link here in the chat; there's no centralization, or centralized, or decentralized in it.
G
Yeah, so I don't know; it would be better if we can discuss it in the open with Michael and see what the source of confusion is there.
K
The term agent, I think, I know that Philip at least has talked a lot about that term, and, like Brian was saying in the chat, those discussions definitely come up in the channel. So my guess is it's that word, but I personally would vote for keeping agent and gateway. I've already seen lots of people use those terms to refer to the Collector; I don't see why we should change, yeah.
G
G
Yeah, so I think agent should be fine. Gateway is not, I mean, we're introducing a new term there, and it's not very clear what it could be.
G
It matters, yeah. It was part of a discussion last week as well, on the GC/TC meeting, that we need consistent naming for things. Here we never use gateway, and the website does, and I think we are closer to the collector than the website is. So if we think that gateway is wrong, and I do think gateway is wrong, then we should fix the documentation on the website, perhaps.
M
M
So in the code base we don't use it, but we should. And I also think we have a few small places where it's been used in the code, both gateway and agent, but we should use whatever names we come up with more extensively. If we want to change gateway, let's change it and use it more. But if someone needs to come up with a name, I don't believe centralized and decentralized are better, just because agent is a pretty good one.
M
M
Yeah, and the Helm chart just switched to Kubernetes terminology, which makes sense, but in other places, like in the Collector, describing these different deployment scenarios, we have to use these terms and have to figure them out.
G
This is actually that documentation. So the documentation that you think we should have for the collector is actually this one here, that Michael is writing, and I think the right place is indeed the website. The main idea is to tell people how they can mix and match collector configurations to accomplish specific use cases, like if I want to do multi-tenancy, or if I want to do load balancing.
G
Then how do I do that? And this is something that does not belong in any specific place on the Collector GitHub repository, but it does belong on the website, because it tells a user how to accomplish a specific use case. So I think this is the right place for this documentation. I just don't agree with the centralized and decentralized naming. It is confusing to me; just by reading those names, I don't know what they are, yeah.
G
M
G
M
M
Thank you, but I don't like collector, because that's definitely confusing. Actually, I recently moved the documentation for the Kubernetes attributes processor from one place to another, and I replaced collector with gateway, because that's at least one way we stabilize the documentation. So we can think of new names, and I'll update those docs.
K
J
Can I just give the ten-second summary of my issue before we drop here? We can take the collector discussion offline. So, basically, the component.Host interface: we have three independent proposals to make a change to this interface. This is a pretty important interface, because changing it would basically break all component implementations, right, because the Start method accepts the interface. So there are some interesting and creative ways to maybe handle that somewhat gracefully, but they're all pretty ugly. So basically, what I'm...