From YouTube: 2022-12-08 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
E: An introduction round? I can, yeah. I'm also working on OpenTelemetry, on the Collector side, and I work together with Pavel, and that's basically it.
C: Sure, yeah, my name is Jacob Aronoff. I'm the tech lead for the Telemetry Pipeline team at Lightstep. We have some other folks from my team that I'll pass to after this.
C: My focus has been on the operator's target allocator for the collectors — like a sharded Prometheus setup — with a few random operator PRs here and there for features that I've been needing for testing. I have a few things on the agenda for today, but we can get to those later. I'll pass to Christina.
F: Hi, I'm Christina. I'm on Jacob's team, the Telemetry Pipeline team at Lightstep. Mostly I work on the target allocator code, just fixing things I find and asking questions. I'll pass to Mo.
G: Hey everyone, my name is Mo. I'm also on the Telemetry Pipeline team at Lightstep, with Jacob, Christina and Gustavo. I'm a relatively new engineer, kind of just trying to contribute where I can — the target allocator, a little bit on the HPA. I'll pass to Gustavo.
H: So, hey everyone, I'm Gustavo Paiva. I haven't actually contributed to the operator yet. I was an approver on OTel Go and have done some contributions to the OTel Collector, so I'm just looking around here a little bit as well. Yeah, I'll pass to Anthony.
I: Hi, Anthony Mirabella, SDE at Amazon, working on ADOT. I'm a maintainer of the Go SDK and an approver on the Collector, also working on Lambda and the operator — I get my fingers in a little bit of all the pies that are going on in OTel, basically. I guess I'll pass over to Michael, I believe you're next.
B: Hi, good morning, guys. So I'm Michael Tong, I'm at Apple. Currently I've just started learning to work with OpenTelemetry.
B: So recently I've been working on an idea to create a Helm chart that works similarly to kube-prometheus. What I mean is that you install the entire stack with one click or one command, and then you automatically get a production-ready deployment.
D: That's great, thanks, Michael. And we have someone newly joined — we're running the introductions; do you want to introduce yourself?
G: Yeah, sorry, having some webcam issues. My name is T; I'm an SRE on the Telemetry Pipeline team with Jacob, Christina and Mo. That's my introduction.
D: The team is quite, quite huge, yeah. So let's go to the agenda. Jacob, do you want to start with the documentation?
C: Sure, yeah. As Michael was sort of pointing out in his introduction, the documentation right now for the operator is relatively light, especially for the target allocator, but also for other features like auto-instrumentation.
C: The kube-prometheus-stack chart is like a one-click install for all of your infrastructure metrics, and that would be a thing I would love to be able to donate to the Helm chart repo — maybe something you can install that runs the operator, automatically collects all of your kube-state-metrics, and then uses the target allocator's CRD functionality to also pull in from ServiceMonitors as well.
C: Another thing that would be great is documentation on how one can use this. With all that said, I think I was going to make some issues today or tomorrow on the things to be documented.
C: Do we think — I think the best place to do it is probably in the READMEs of the individual subfolders? Does that make sense, or should we put the docs on the docs website?
D: I think that's a great question, and we should decide where our docs will live. I don't know what the direction is in other OpenTelemetry projects — do they focus on the website, or more on the READMEs in the repositories?
B: Yeah, I think what we can do is update the README in the individual folder, and then put a link in the main README.
B: It would say: for details about the target allocator or other features, click this link, and it leads to that individual README. Instead of now, where I have no idea where to find it and there's no link — you have to really go deep into the code base to find some documentation there.
D: I think it's a great idea to split it into quick starts and examples on the website, and then have more detailed docs in the READMEs, and cross-link them together. Probably, yes.
B: And I think the target allocator is actually a pretty important feature, right? If you don't use it, then it won't be scalable and reliable.
D: Yeah, I'm hearing a lot that there aren't good docs for the target allocator. Although in the past I found some design docs, and I'm not sure if they put them somewhere in the README or in between.
C: I don't think they're linked. I think you have to dig back through a lot of things to find them.
B: Yes, that's the problem. I think I can help contribute to the documentation if I get enough knowledge about it. For now I'm more interested in how to deploy and manage it, because the current documentation says it's standalone, and when I install the operator I don't see it installed. I also tried to extract all the values from the operator chart.
C: So yeah, the target allocator is enabled under the collector's CRD currently. When you're installing a collector, you would specify that you want the target allocator enabled for that collector pool.
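For reference, enabling it in the collector CRD looks roughly like this — a minimal sketch; the Prometheus scrape config shown is illustrative, not from the meeting:

```yaml
# Minimal OpenTelemetryCollector CRD sketch with the target allocator enabled.
# The scrape config below is illustrative.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: statefulset          # target allocation assumes a pool of collectors
  targetAllocator:
    enabled: true
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: otel-collector
              scrape_interval: 30s
    exporters:
      logging: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [logging]
```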
C: I think the only way you would find that right now is either via some examples that we've written, or by looking at the types — the types.go files — in the repository, which is far from ideal. Pavel, I think you mentioned a good idea earlier: maybe we can write some overall getting-started material on the website — just some basic examples and usage — and then link out to the READMEs in the repo itself, and those can be independent efforts as well, so that we don't have to wait on one or the other.
C: Does that sound good, Michael?

B: Yes.
B: So another question I've been thinking about: for now the target allocator gets a list of targets, right, which is leveraging the Prometheus style. What if people don't want Prometheus targets — for example, I only use the OTel Collector.
B: You probably know that there are some other receivers besides the Prometheus one. If I only use those, how can the target allocator support that?
C: Yeah, we've had a lot of folks come in and ask us questions internally about auto-instrumentation and how it works, so I think it'd be great to get that more documented.
D: There are also, I think, some of the official OpenTelemetry examples, added quite recently. Is that something we could extend for the operator — have a profile where, instead of explicit manual instrumentation, it would be deployed with the operator and instrumented by the operator?
C: Yeah, I think the community demo is interested in that. I think they may already have one or two examples that use the Java auto-instrumentation resources.
C: I don't know where that is. I feel like I did see that recently, though.
B: And, by the way, there are two places where we can find this operator. One is the operator repository itself, but there is another repository with Helm charts — it consists of three folders, and one of them is the operator chart. Which one do you recommend people use?
C: The Helm charts right now, I believe, install the Collector and — well, it's a little confusing. The operator Helm chart actually only installs the operator, and that's it.
C: It sort of follows the operator pattern for a lot of other services, where all that chart does is solely install and manage the operator, whereas the actual installation of a collector or any other CRDs is on the consumer. So you would have another chart that would install the Collector, rather than installing it in-band with the operator itself. Yeah.
B: That's actually what I'm doing now. Because to write a Helm chart you put in a dependency, right — you need a dependency on another Helm chart, which for me is currently the OpenTelemetry operator chart.
B: To use the operator directly as a dependency, I need a Helm chart — you cannot put it into a Helm chart directly, otherwise you are redoing what the operator Helm chart is doing. What I mean is: if you want to implement more complicated use cases with a Helm chart, then you need another Helm chart.
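The dependency wiring being described would look roughly like this in an umbrella chart's Chart.yaml — the chart name and version numbers here are illustrative:

```yaml
# Umbrella chart declaring the operator chart as a dependency.
# Name and versions are illustrative.
apiVersion: v2
name: my-collector-stack
version: 0.1.0
dependencies:
  - name: opentelemetry-operator
    version: "0.21.0"
    repository: https://open-telemetry.github.io/opentelemetry-helm-charts
```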
C: Yeah, you're saying you want to be able to install the operator Helm chart as a dependency of your collector installation.
B: Yes. What I mean is, for example, I don't want to install the operator using a plain `kubectl apply -f` — that's pretty much not workable in a real-world setup. So what do you do? You wrap it with a Helm chart, which is what has already been done in the operator Helm chart in the OpenTelemetry Helm charts repository.
C: I'm not sure I follow the question exactly. I think what we do right now is we install the Helm chart for the operator, and then we just have another chart that we install the collectors in.
B: Yes, so I was asking what the recommended way is. Okay.
C: Oh, I think you're asking — Michael, sorry — yeah, I think you could have the operator as a dependency of the chart that installs the collectors, but I think there's a condition with the operator right now where it actually doesn't allow you to do that, because it requires staged applies for the operator and then the collector CRDs; otherwise I don't think the operator will automatically reconcile the collectors.
C: That was at least the case when I tried to do that a few months ago, but I'm not sure if it's still the case. Okay.
C: So the operator manages the Collector CRDs; when you install a collector CRD, it will create the target allocator, which the operator will manage. So sort of no, sort of yes: the Helm chart itself doesn't manage the target allocator, but the Helm chart manages the operator, which then manages the target allocator.
B: Okay, I guess we should add the target allocator as a CRD in the Helm chart folder.
C: It's already there under the collector one — it's just embedded currently, because the only way you can run the target allocator is when you run a collector.
F: Yeah, so — this is the GitHub issue that I made for it.
F: I made one issue just for a general conversation about how we want to release and manage the target allocator, but there are two things. One is that the Go code — the module — doesn't currently have any versions in it, so when you go to pkg.go.dev it's on v0.0.0. I was wondering if we wanted to fix that and version it with the operator versions. And the second thing is that right now, when we build the target allocator, it's built off of main.
F: So when someone wants a specific version, like v0.66.0, they're not actually getting v0.66.0. If v0.67.0 has been released, v0.66.0 would be the last commit before that release. So essentially each version someone thinks they're getting for an image is actually the next one — unless they're getting `latest`, in which case it's main, not actually the latest released version.
F: I think people might, if they want to write their own allocator. We provide a way to write your own allocation strategy, and as we provide these options so that someone can build with their own dependencies, it's going to be more difficult for them to do that if there's no version on the module.
E: I think in the Collector repository — in contrib — I've seen there's always a commit which goes through all the submodules and increases the number. I think they have just a small script which does this.
I: Yeah, in the Collector they use a tool that we wrote called multimod, which has a versions.yaml file that defines the version that each module contained within the repository should be at — or sets of modules and the versions they should be at. That tool can then update any requires in other modules and also handle adding the git tags at release time. That may be overkill here — I don't know if we're going to have as many modules as collector and contrib have, which is kind of what led us to build that tool — but ensuring that we do have appropriate tags and are able to track the versions of the components would be beneficial, for the reasons Christina pointed out about people being able to write their own allocator components, or allocators entirely.
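For reference, multimod is driven by a versions.yaml along these lines; the module-set layout below is only a sketch of what it might look like in the operator repo — the set names and the allocator module path are assumptions:

```yaml
# Hypothetical versions.yaml for multimod in the operator repo.
# Module-set names and module paths are illustrative.
module-sets:
  operator:
    version: v0.66.0
    modules:
      - github.com/open-telemetry/opentelemetry-operator
  target-allocator:
    version: v0.66.0
    modules:
      - github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator
```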
D: I don't have any objections to putting the version in — it's just a matter of someone making a PR, actually, and probably changing the release process. And for the publishing of the image, is it again a problem with how the GitHub Action is set up, running on merge to main? Yeah.
F: I think what we probably need to do is: if we just released 0.66, then as soon as we do that we should set the version in the versions text to 0.67-pre or something like that, so that once the 0.66 release has happened, that version is never used again, and we're pre-release for 0.67 until 0.67 happens. Oops.
C: Maybe we can just bring the target allocator image push into the release action that happens for the main operator image — I think that might solve it. I like the idea of doing the pre, though, because then at least it's a versioned latest build, right — you still have a thing to tie it to, rather than just being tied to main constantly.
I: Yeah, I think to the extent that we're building and pushing to the GitHub image repository on every push to the target allocator, that's wrong — we should be doing it on tag, or at release. That's probably the first thing we can eliminate, and then we can figure out how to avoid reusing the prior released version for subsequent pre-release artifacts.
F: Yeah, I agree. I think it'd be good to have some sort of head build so that people can get the latest even if it's not released yet. And I realized there's one last thing on this issue: right now the changelog includes both operator and target allocator changes, but I think it's unclear, or easy to miss, if there's a breaking change or major changes in the target allocator. Does it make sense for the target allocator to have its own changelog?
E: For me, when I see it, I like it when there is one changelog and you see the components and then the breaking changes — so we have the breaking changes on top, and then maybe the components: target allocator, operator, auto-instrumentation, I don't know. Then you see the breaking changes for these components, which makes it quite clear: you go to the version, to the changelog, and it's on top. That would be my personal preference.
I: The thing the Collector does that could be useful here as well is to prefix every line in the changelog with the component affected by the change. If you look at the Collector contrib changelog, it'll be, you know: enhancements — hostmetrics receiver: added optional metric; Splunk HEC receiver: added..., et cetera, et cetera.
I: We've got a lot fewer components here, but that could also be a way to make clear that this change affects this component.
C: Yeah, I like how the Collector does the changelog too, and it makes it easier with the — what's the folder name — I don't know; there's a PR check where the author of a PR has to commit a changelog file as well, and I think there's a script that will generate the release notes with that formatting too. Yeah — chloggen.
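For reference, the per-PR entries that chloggen collects are small YAML files along these lines — the component name and issue number below are illustrative:

```yaml
# Example chloggen entry file, e.g. .chloggen/my-change.yaml.
# Values are illustrative.
change_type: enhancement   # one of: breaking, deprecation, new_component, enhancement, bug_fix
component: target allocator
note: Describe the user-visible change in one sentence.
issues: [1234]
```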
C: That thing is really useful. I think we could use that as well, because there have been a few times where the release notes for the operator have missed PRs that were in there — I think it's done manually currently. Especially if we're going to be more diligent about the versioning for the target allocator, I think it'd be great to use that tool as well.
I: It's actually eliminated a lot of overhead, because it eliminates the continual conflicts that happen in the changelog. I don't know that there's enough velocity in the operator for that to really be a problem, but in my experience it's a fairly easy, low-overhead tool to use.
C: Yeah, it's been pretty easy. I think the first time I made a PR there I forgot to do it, and the PR check just gives you exactly the instructions — what you need to make it — and there's a template in the repo ready to go as well. And it makes the release notes look really nice.
C: Yeah, so for this next one I have a document. I'm going to share my screen while I go through it — is that okay? Yeah, let me pull it up.
C: Okay, can everyone see my screen?
C: Big enough? Yes, great. So, as there have been a lot of conversations in the Collector SIG about remote configuration, I've been thinking a lot about the remote configuration story for the operator and how that might look. I wrote here a proposal for what it could look like, and I'm just going to walk through it.
C: I don't know if anyone had a chance to pre-read, so I'm just going to give the highlights of this doc. It's relatively high level as well — I didn't go into as much detail; if you read the Collector one, that one is very detailed.
C: This one is less detailed, also because it does less and has to do less, as you'll see. For those unfamiliar: OpAMP is a really requested capability at previous conferences — I was at SREcon EMEA, and a PM of ours, Clay, was at KubeCon in Detroit this year, and a lot of people were asking about it and talking about it. And right now in the operator we haven't had the conversation about what the needs will be around the Collector's OpAMP supervisor.
C: Their plan, to summarize briefly, is to have another container, or another image, called the supervisor, which is then going to do remote configuration for a single collector — eventually multiple collectors.
C: So the first part of this is what that might look like for the operator, and the second part is adding a new object for remote configuration of operator resources — rather than remote configuration of a single collector, this is remote configuration for all collectors in a cluster, essentially. This is the very high-level design: you have the operator, and you have your remote-configuration container; this application connects to an external SaaS that runs an OpAMP operator server.
C: The operator server can then push a map of collector name to collector CRD, for the remote configuration to then create the OTel collector CRD instances in the cluster, at which point the operator goes through its reconciliation flow to actually make those things exist. When the operator-server API pushes an update, the remote configuration would update the corresponding collector CRD, and the operator would see that change and go through the reconciliation flow as it usually would.
C: They also might want to configure the Collector to use its OpAMP extension — which the Collector SIG is also working on. That's easily done, because you're just pushing the collector config, and you embed in it the config to reach out to a collector OpAMP server.
C: So in doing this there are two main changes required. The first is a change for the supervisor — here is a slightly more detailed diagram of the supervisor change. Initially I was thinking you might want this as a separate CRD, but then I realized it's actually closer to something like the target allocator, where it is one-to-one with a pool of collectors rather than its own thing.
C: So here I have it embedded within the collector CRD. The flow for this — and again, there's no remote config yet, this is just for the supervisor — is: the operator reconciles a collector CRD, which creates a supervisor config map; it creates the Collector, and then it creates the supervisor. It sets up the connection for the collector's OpAMP extension, and the supervisor reads in its configuration. I actually forgot to include the external connection here.
C: I'll update this diagram. At that point the supervisor can do its work, pushing configuration and restarting the collector. It'll have the fields specified in the original document pretty much one-to-one, but we can do some smarter stuff, like automatically mounting the config map as a volume, and we can also automatically configure the Collector to reach out to it via the OpAMP extension, similar to the way we currently set up the target allocator in the config map for the Collector. I might stop there before I go to the next one — does anyone have any questions yet?
C: Both. So what happens here is the collector has this extension, which connects to the supervisor's server.
C: The supervisor is able to write to the collector's config, at which point it can tell the collector to dynamically restart — dynamic restart is the functionality they're building into the Collector right now. So for us, rather than just a config.yaml, this would be the config map that we update for the Collector. And then you'll see here —
C: This is the supervisor config — in my diagram, that's this config map that we would hook in — and the connection is done via this configuration, where you specify a server and the endpoint, and this would just be a local cluster endpoint.
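The supervisor config being described is shaped roughly like this — a sketch following the OpAMP supervisor design; the endpoint and exact field names are illustrative and may differ from the eventual implementation:

```yaml
# Sketch of a supervisor config map, per the OpAMP supervisor design.
# Endpoint and field names are illustrative.
server:
  endpoint: ws://opamp-server.default.svc.cluster.local:4320/v1/opamp
capabilities:
  accepts_remote_config: true
  reports_effective_config: true
agent:
  executable: /otelcol
```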
D: And it kind of restarts the Collector — makes a kind of new deployment?
C: That was just separation of concerns. I thought that if we embedded the supervisor within the operator — the way they've designed the supervisor right now, it only manages a single collector — the operator would have to spin up supervisors for every collector, at which point you have to manage all those supervisors, and tying that logic to the operator was going to make the flow of what the operator does more confusing.
I: The supervisor may be getting its configuration, or configuration changes, from external sources, though, that aren't coming from Kubernetes events or anything like that. So it may not be triggered by a normal Kubernetes operator control loop — it may be coming from, you know, another OpAMP server somewhere else, or from polling an S3 bucket, or wherever the operator decides to make that configuration available.
C: Yeah, and I'm actually going to update this graph right now to make that more clear, because I don't think it's clear currently.
C: Yeah, I think my fear with it is just adding more logic into the operator to do this work. The way they're designing it, we would probably have to keep our own supervisor up to date with the supervisor that the OpAMP folks will be writing, because ours would be doing a bit of a custom flow — rather than us having to keep ours up to date.
I: I think, similar to the target allocator as well, this could result in increasing usage of additional goroutines and memory inside the operator as you add more and more managed collector CRDs. So breaking that out enables you to scale the supervisor alongside the things it's managing, independently of the operator controller.
C: So I think this should be done the same way that, right now, you can configure the target allocator's expected config map via the collector CRD. Within a collector CRD you would specify that you want a supervisor, and the CRD would contain each of these fields — capabilities (it's so annoying to read YAML in Google Docs; this looks terrible, but you can see capabilities is a key), the connection mode, the configuration report spec — all of those would just be fields.
C: Like we did with the target allocator, right — and then that would configure the supervisor's config map. It would be pretty static, but I think we would still want it all.
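Embedded in the collector CRD, the proposal's supervisor section might look something like this — a sketch of the proposed, not implemented, API; all of the supervisor field names here are hypothetical:

```yaml
# Hypothetical shape of the proposed supervisor section in the collector CRD.
# None of these supervisor fields exist in the operator today.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: supervised-collector
spec:
  mode: deployment
  supervisor:
    enabled: true
    capabilities:
      accepts_remote_config: true
      reports_effective_config: true
  config: |
    # collector pipeline config as usual
```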
C: Hang on — oh, I see, I see. Yeah, that's not the case. It's the collector's config map that would be changing; only the collector's config map is what the third party would then go and change.
C: Cool. So are there any more questions before I go to the next one, which I think is maybe the more interesting?
C: The reason I wanted it — the reason I thought it should be done this way — to explain the flow first: a cluster user would create the remote configuration CRD, which is its own thing, separate from the Collector entirely.
C: This would contain an OpAMP agent which talks to an OpAMP server run by a third party. When this remote configuration receives anything from its connection to create a new collector, it talks to the Kubernetes API to actually create that CRD, at which point its job is pretty much done — all it does is CRUD operations against the Kube API. It doesn't need to talk to the operator whatsoever, and then the operator is going to do —
C: — you know, what it usually would do, which is just reconcile that CRD. The reason I opted for this was so that there are no actual changes in the operator's logic: it just continues to do what it already does, and we have an external object which can do the creation of these CRDs. If we were to move it in-band, the thing I would fear is what the actual creation flow for the Collector CRDs would look like.
C: I think doing it this way has the benefit that you're able to query the Collector CRD in your cluster, whereas if you did it within the operator, it would create a loop, or it wouldn't create it at all — because the operator would need to talk to the Kube API to create the CRD and then go and reconcile it, and I think that's kind of a weird flow of operations. But maybe that's me being too separation-of-concerns anxious.
C: Yeah, yeah, pretty much. The only difference is that, rather than something like Argo for GitOps, which is pull-based, this would be bidirectional and socket-based, so it would actually receive the update rather than going against git state to generate it.
I: And presumably the OpAMP server side would then be able to have awareness of: I expect these collectors to appear; here's what I've told the operator to do; here are the collectors now reporting to me, and here's their status as well — or here's a set of collectors that are supposed to exist but aren't.
C: Yeah, exactly. That would be reported back via the OpAMP server — the OpAMP protocol's effective configuration, which basically says "this is what's actually running" — and so you could use that to report back errors, like validation errors or actual runtime errors, for the configurations that you're running or attempting to run.
C: Yeah, exactly. The idea here — and I called it out up here; this is the pared-down version of that diagram — is that you might still want the collector to use the OpAMP extension, but for read-only operations, where it just reports its status and health, and you would actually disable the supervisor capabilities. You wouldn't want to use both the remote configuration and the supervisor at the same time, because they essentially do the same thing.
C: What would that entail, do you think?
C: So one of the things I think is coming in the future for the Collector — separate from all of this remote-configuration work — is dynamic restarting, so that you could actually restart individual parts of the collector pipeline.
C: Someone who isn't using any of this remote configuration would want that feature regardless, so I think that's something we could integrate into the operator: it would just tell the collector "you have a config change, restart the pipeline."
C: It looks kind of weird if I were to put it into this specific diagram, but separate from it, I think that would be good as a feature for anybody using the operator, not just a remote-config user.
C: I think we can do this separately from them building out the OpAMP extensions, given that this actual remote configuration is relatively simple and can be done using only the OpAMP Go SDKs that already exist. I think we could start on the — I don't know if I put it in here; in our internal documents I have the order of building, which I can add to this — but essentially the thing that needs to be built —
C: — first is just this remote-configuration container, which can run the OpAMP agent connected to a server and can talk to the Kube API. Once that is working, well tested, and we're happy with it, we can have that as part of CI/CD builds, and then we can add in the necessary CRDs for the operator to actually reconcile and create it.
C: Are there any concerns with this model that anyone can think of? Is there anything that — I don't know — Pavel, I think your concern about the latency of it, the time to reconcile essentially, is legit. But this functionality is, I'd say, pretty different from what the OpAMP folks are currently doing — I think it is, as Anthony said, closer to a GitOps-style thing and less of a dynamic-reloading scenario.
C: I think we definitely should, in the future, when dynamic reloading is possible, figure out how we can use that in the operator. But I think the latency of this is acceptable initially, as long as it's not like 30 minutes, because that would be ridiculous — and I don't think it takes 30 minutes.
C: Yeah, reading through the spec, it was actually interesting that they only mention the collector a few times, and that was what made this idea sort of possible — rather than having the effective configuration be for a single collector.
C: Like that, yeah — we definitely could. I think the thing we wouldn't be able to do is the header, or the annotations on pods.
C: Well, there is one other feature that — no, it's not worth talking about right now. I'm going to stay focused.
D: Yeah, it's great to see this — we're slowly extending the operator functionality. We are at the end of the hour. I'm not sure if we have to finish — I think we should, but I'm not sure if we have to — and I don't know if you want to continue, Jacob.
C: I think the one thing I just want to do is next steps.
D: Good, perfect, thanks.
D: Yeah, I think that's all from the agenda, and the meeting is scheduled for every two weeks, so let's try to actually meet in two weeks. Is that Christmas week or not yet? I think it might be.