From YouTube: 2021-03-18 meeting
A: Sure, yeah. So we kind of touched on this topic in the issue that I linked there, issue 2646, and one or both of you mentioned that we would be...
A: The idea is to move specific code somewhere else, like contrib, but that doesn't mean we are not going to have it here in the main distribution of the OpenTelemetry collector. So I think it was kind of implicit there that the code would be moved to the contrib repository.
A: But I wanted to ask you all if it would be acceptable to, instead of moving it to contrib, instead move it to Jaeger, to some place inside the Jaeger organization on GitHub. The reason is that we would probably then avoid a circular dependency that we have today. Right now the core depends on something in Jaeger, and Jaeger v2 would then depend on the OpenTelemetry core. So basically Jaeger depends on OpenTelemetry, which then depends on Jaeger, right?
A: Yeah, and having this code there allows us to fix issues closer to where the maintainers for that code are, right? The only problem that I see is that whenever you release something, like OTel 0.23-something, then you would be consuming a Jaeger that depends on OpenTelemetry core 0.22, for instance. But you can get around that by doing a replace directive.
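The replace-directive workaround mentioned here can be sketched roughly like this in Jaeger's go.mod. The module paths are real, but the version numbers are invented for the example, and the exact shape would depend on how the modules end up split:

```go
// go.mod (sketch; versions are hypothetical)
module github.com/jaegertracing/jaeger

require (
    // Jaeger v2 builds on the collector, which may itself have been
    // released against an older Jaeger/collector version.
    go.opentelemetry.io/collector v0.22.0
)

// Pin the collector dependency to the release this build actually
// targets, avoiding the one-release lag described above.
replace go.opentelemetry.io/collector => go.opentelemetry.io/collector v0.23.0
```

This only affects the build of the module that contains the directive, which is why it works for assembling a distribution even while the published releases lag each other by one version.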
A: Right, exactly. And you're not going to get around this problem entirely, because you're depending on Jaeger at some level anyway. It's just that today your exporters and receivers depend on code from Jaeger, and in the future you're going to depend on the components themselves, right? So you're going to depend on the receiver and exporter, instead of depending on their inner workings.
D: ...repository, as we do that anyway. Ideally, maybe you can have an artifact that exposes only the data model that we need, because that's what we need from...
F: The problem that I mentioned... sorry, the circular-dependency problem that you mentioned initially is likely going to be resolved, hopefully, if we actually split up the core module into separate sub-modules. That's one of the topics that we wanted to tackle as part of the GA, so that it's no longer a problem: you can actually import the core, and then we can put the components separately, and it's no longer circular. But also Bogdan's point, I think, is important.
F: Maybe, and I guess it would be beneficial regardless of where exactly we place the component, so you're right, that doesn't matter: we're actually importing, I believe, the entire Jaeger module at some point. Maybe you can consider exposing just the bits which matter for the agent and for the collector as a separate module. That would simplify things, the dependency poisoning, right?
A: Yeah, so I just posted a link here in the chat showing what the packages are that the Jaeger receiver depends on, and it's not only the data model. The inner workings of the receiver are also coming from Jaeger, unless this receiver and exporter are going to be refactored in a way that reduces those dependencies, and that means re-implementing quite a lot of things that we already have in Jaeger.
A: If it's in our own repository, we can build our own distributions without depending on contrib, for instance: we can depend only on the core and have our own exporters and receivers on the Jaeger side. And then, when you build the OpenTelemetry core distribution, you don't depend on the exporter and receiver from contrib; you depend on the one from the Jaeger organization. That's the dependency.
A: Separate... are we going to change that, though? I thought we were going to change that into one single module for the whole contrib. Not the core? No, the contrib. Okay.
A: Yeah, it still brings one level of indirection there. I mean, I can understand the technical details here, but from a logical perspective, looking from the outside, it doesn't make much sense for the Jaeger distribution to have the Jaeger exporter coming from the OpenTelemetry contrib repository when we can have it directly from Jaeger. You know, it's the Jaeger exporter and the Jaeger receiver.
D: Depends on where we want to build. I also don't want to just move the circular dependency to our side, so that you depend on us and then we depend on you because of the components. Somewhere we have to break these circular dependencies. Yeah, yeah.
A: Yeah, so if you are going to use something like the builder to build the distribution, then we are not going to have hard dependencies in the code on either contrib or core, right? So basically the dependency is a build-time dependency, when you're building the actual binary that you're distributing.
A: There are some cases that you mentioned, like other components making use of Jaeger libraries, and for those cases I think we need to have a separate discussion, because I think they would also be better suited consuming from the Jaeger repository directly. I don't know, I'm not familiar with their usage, but...
J: I have a question, if I may ask: will there be any use case where we consume internal packages from OTel core, like in the exporters or receivers, that might not be accessible once you move out of the main repo?
D: I don't know how you prefer me to call you, Jose Carlos, or just Jose, or Carlos? (Carlos is fine.) Thank you, Jose Carlos. Okay, so I think right now we try to not depend on internal, except for the OTLP receiver and exporter, which we do just because we want a way to create the pdata directly from the request, instead of going through the API to create it. But in general we avoid depending on internal.
A: Yeah, in the case of Jaeger, I'm just confirming that we don't have any dependencies on internal packages; I just removed them yesterday. (Yeah, I saw the PR for that.) So that's why I was double-checking. So how about we do this, then: I can make a, like a... oh, I don't actually know how to move forward, because whatever we do now is not going to be, like, final.
D: Yeah, so I think we should start with a diagram of dependencies; that will help us understand how things will look.
D: The second thing is, I'd be very interested in knowing how we are planning to keep consistency at the config level. One of the issues with people developing their own components that we import, or that we make tier-one supported for OpenTelemetry, is that we try to have consistency across at least the configuration, so that all of them speak kind of the same configuration language. That would be something to think about, Juraci, in the future, because there may be a tendency of somebody coming to your repo asking for something, and how do we make sure that you don't do the same thing as we do in other components in a slightly different way? That would be annoying for users.
A: Yeah, that's a good question. I mean, one way of handling that is actually experimenting and waiting for that scenario to happen. In the case of Jaeger, and I can see that it's going to be a problem for other components, but in the case of Jaeger I am active on both sides, so I can review the code on Jaeger and know when something doesn't make sense for the OpenTelemetry collector. But I can see that not all projects have someone on both sides, yeah. So before...
G: You can read the bug, the issue. Punya made a summary of all the discussions, like, every time.
D: Let me explain the whole issue, that is, 2646. It started as me wondering, based on the discussion with Punya and everyone, about how it would be a headache to have a lot of modules. We said, okay, in that case, should we consider moving these components outside of the core package, at least so that we keep a core package that has only minimal dependencies and minimal things? And that's how everything started. We can just say that no, we keep everything as it is right now, and that's it, but... yeah.
M: One thing I heard in this discussion, which I'm not sure was intended, is: hey, why should we have a different standard for core than for contrib? It seems we are comfortable having many modules in contrib. If we said modules are bad for core, should we then go and erase all the modules in contrib as well? I don't think it is as black and white as that.
M: I think modules are more difficult to manage in core than in contrib, because the interrelations are more complicated. In contrib it is a fairly hub-and-spoke kind of thing, where most modules have no interdependencies: each of the modules is independent, and it depends on core, right? In core, if we were to create modules, there would be a DAG of dependencies, yes, but a non-trivial web of dependencies, and so I think that's why there is a choice, a trade-off, to be made here.
M: I think it is, yes, but you have to do it quite carefully, because, and this is the experience from the Go repository, when you have multiple modules in the same git repository, it is easy for them to have unintended sharing because of the package hierarchy and internal packages. So we have to be quite disciplined, and potentially change the package hierarchy of core to disallow this kind of sharing. That's why I think it's a more delicate thing that might not be worth doing.
M: I can write up a bit more; there's a GitHub issue and I can add a bit more detail there. (Please.)
A: So one further question that I have: how is it going to work, building the core with things from the core and from contrib? Is it going to use something like the builder, or how are you going to do it?
D: I think, in my mind, I was thinking to maybe consider even not shipping a binary from core, and ship only the two binaries from contrib: one with the minimal dependencies, and the Prometheus components and stuff like that, and one with all the things. We would still have only two binaries that we ship, but we don't ship anything from core. Core would become more or less like a library and would not have a main file. That's a possibility. Again, how do we build the main file?
M: Actually, sorry, I just wanted to add a couple of things there, perhaps supporting the way you are thinking about pulling out the Jaeger component. I don't know the answer here either, but I want to state explicitly some things that may have been implicit in what you said.
M: One reason to try to move something out of contrib into a third-party repository is that you may just want more control over code reviews and governance, right? Or you may want to have your own CI pipeline. So on the Google... oh, I agree with that, that is definitely a true statement, but on the Google Cloud side we have discussed, again, we haven't proposed it formally, but we have discussed this idea.
M: You know, today a community contributor can go and make a change to the Stackdriver or Google Cloud exporter in contrib, and that is something we officially support, which is a bit scary for us, right? It may be that the contrib governance mechanisms are sufficient, but we definitely ask ourselves, and I'm sure other commercial entities ask themselves...
M: ...how can we make sure that the thing that is being shipped with our stamp of approval is really the thing we wanted to ship? So for us also there is an incentive to have a supported mechanism for building distributions containing non-contrib components, and of course the builder does this. So there could be a design, which we can flesh out, where you say core is a library. It is the library, it is...
M: ...the smallest possible unit of collector. Contrib contains a bunch of plugins with a variety of governance requirements and CI requirements. The upside of contrib is that the OTel community agrees to keep upgrading it, right? So you get free CI, you get free maintenance and review, which is valuable; but the downside is you lose some control. And then people who really value the control and are willing to assume the costs will maintain some components out of OpenTelemetry, right, out of contrib.
M: So Jaeger will maintain some, Google Cloud may maintain some, maybe Amazon wants to maintain some, maybe Azure wants to maintain some. And then there could be some repository containing distributions, and this distributions repository contains manifests that are inputs to the builder, and integration tests for them, saying: hey, here is a golden config file that I expect to run. This runs nightly, pulls the latest versions of these components, and assembles them using...
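A distributions repository of the kind being described would hold builder manifests, roughly like the following sketch. The layout mirrors the opentelemetry-collector-builder's manifest format as I understand it; the module paths and versions below are invented for illustration:

```yaml
# Hypothetical manifest consumed by the collector builder.
dist:
  name: jaeger-otel-distro
  output_path: ./dist

receivers:
  # A component pulled from core.
  - gomod: go.opentelemetry.io/collector v0.23.0
    import: go.opentelemetry.io/collector/receiver/otlpreceiver

exporters:
  # A component maintained outside contrib, e.g. in a vendor-owned
  # repository (path is made up for the example).
  - gomod: github.com/jaegertracing/jaeger-otel-components/exporter v0.1.0
```

The nightly job described above would run the builder against each manifest and then exercise the resulting binary with that manifest's golden config file.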
D: I already spent days trying to make everything build together, dependency-wise, and consider that the majority of the dependencies are in contrib right now, so I can control them; I can update them and so on. If you host your own thing, and Jaeger does its own thing, I'm pretty confident that you will not have everything up to date, dependency-wise. I will not be able to build, and how, how...
M: ...do I decide what to remove? So, this is exactly why I haven't proposed it. I'm stating this: I think this requires a doc, and exactly what you said would be in it, here's why we cannot do it today, and we have to ask ourselves, if this is so beneficial, what needs to change to make it happen. I agree with you, right? Like, maybe someone like Juraci, because he is part of both communities, may be able to pull this off.
A: So there's something going in the direction you're suggesting, Punya. It is cncf.ci, a CI server hosted by the CNCF that runs... I think it's only for Kubernetes, actually. I see Jaeger here, but we don't pay attention to this one. But basically it is something that runs nightly, or periodically, and it builds Kubernetes distributions.
A: So we could have something similar here, right? Each vendor that wants to have a stamp saying "we support the OpenTelemetry collector", "we have our own compatible OpenTelemetry collector", could plug their CI into this, and if it breaks, then it's their fault: it's on a status page somewhere, and they are responsible for fixing it.
A: I like the idea of having a registry, and we actually do have a registry page on opentelemetry.io, I think even for distributions, and I like the idea of having one place where we can list the manifests that those solutions are using. I think it goes in a direction that I particularly like, which is decentralizing the OpenTelemetry collector in general, as a general concept.
A: Is the pushback from one person, or really from different people? Oh...
A: Okay. Because, yeah, well, I don't know. I think we are the ones maintaining it, we are the ones working on it, and we are the ones who have to decide what the right way forward is for us. It doesn't matter what the loud voices out there are barking, right? Okay.
D: I would not like to disregard Juraci's option; let's see how that would look in practice. Maybe we can give it a try with one of the components, with the mindset that if we fail, Juraci, you commit to moving it back to us if this fails. I'm all for experiments, and let's say in a couple of months we make a decision: did this work? And if not, you move the code back; you promise us that you will move the code and help us fix it.
M: Interesting. In parallel, should we work on it? I would love to work together on a doc for decentralizing: what would be the success criteria, and then we can even record the concerns, right? Some of them are technical concerns, like how do we maintain stability?
M: How do we... who gets feedback, et cetera, et cetera. And then there are what I would call more definitional concerns, like what is the charter of the project. Then we can at least write down the concerns separately, and we can rebut the ones that different people care about, the ones that matter to them.
A: ...block anything, yeah, exactly, absolutely. So on your last comment, I agree with that. We kind of started a discussion about that with Granville from F5, I think, where he was saying that they might have some people to work on the UI, you know, a starter page for people to build their own solutions, and I think that's one success or exit criterion for this: that people can build their own distributions with the confidence...
A: ...that it will work, right. And having a web tool that allows people to do that is key, because it would mean that people don't have to know Go, or even have Go tooling installed locally, to build a distribution. But I think the outcome of that discussion back then, like a month ago or so, is that it's nice to have, but it's just not the right time right now, because we have so many things to work on. (Agreed.) So, but yeah, let's keep talking about that.
A: Yeah, and to Bogdan's previous point, I do promise to give the code back if needed. We are, you know, in a kind of highly experimental mode for Jaeger as well, for Jaeger v2. We are not quite sure it's all going to work the way that we have planned, right? We just started actually building things on top of the OpenTelemetry collector, that's what I was working on right before this meeting, and we have some challenges.
A: The storage mechanism is one challenge, like having the storages as extensions, and, you know, the whole discussion we had a few weeks ago. Embedding the UI is also another challenge, and the query server, and so on and so forth. So we're not quite sure that everything is going to work out the way that we're planning for v2. If it doesn't, then you're going to have this code back anyway.
D: I don't know how you want to build it on top of the collector, but maybe one thing you can do is consume the collector as a library. So you still have a bunch of other pieces hosted separately, but you have the collector for accepting the data, maybe processing it, and then storing the data somewhere, and then everything else is on your side: the query part is a standalone thing with its own main file.
D: I think before starting this experiment, Juraci, I still need to see a bit of the diagram of dependencies and how we plan to do this, but we can discuss whether we do this experiment, and I think I would be willing to give it a try to see how it goes.
F: Maybe let's not rush this; can we sleep on it? Can we not make the decision on the spot right now? Definitely, let's see the diagram, but let's give ourselves a bit of time, like a couple of days, before we make a decision.
K: I mean, I have a few more concerns about it, especially as we go forward: there's a number of things we want to do that will probably touch all the components, and those kinds of things are a lot easier when they're all in one place. So I feel like we're going to end up in this bifurcated world, where we can do certain things only to the components that live in contrib...
D: I think the work on stability that we are doing right now will help with this, because I don't think we will have to change too many things afterwards. Yes, right now it would be impossible to do it, since we kind of break some of the public APIs and we fix them easily in contrib and so on, and that's a huge advantage for us, but once we put a 1.0 on the core, I don't think that much work of that kind will come. But maybe I'm wrong.
K: You know, client settings, right: like making some improvement that applies to all components. Not that it's a breaking API change, but some kind of improvement, yeah. I think I feel like the more conservative thing would be to move it to contrib, and give people more ownership if they feel like they need more ownership and control, like restrictions on certain subdirectories in contrib. I feel like that would be the more conservative first step.
D: Okay, I think Tigran is calling time; let's follow up on this in the issue, asynchronously.
F: Deleted; I'm guessing it's resolved. So the next, I guess, major one is this. Oh, okay, so I guess I had these requirements listed here. Juraci, you looked into them and you had questions here; maybe we go over the questions, right? So one of the first things that you mentioned here is: how do we allow authenticators outside of the core repo?
F: So can we maybe follow the factories approach that we have for other components? What prevents us from doing that: having factories of authenticators, which are then supplied by the particular build of the collector, like we do for other components, and then the core will provide some concrete ones, and maybe we'll provide some more.
A: So I think the main problem that I was seeing is that those authenticators are part of the config package, right? And the config packages are not components, so we cannot inject the list of authenticators in there. So either...
A: I think the design decision now is whether we want receivers and exporters to opt in to authentication, or whether they should all be affected because they are using confighttp and configgrpc, for instance. So, as long as they consume confighttp and configgrpc, should they then have authentication support, either outgoing or incoming?
F: Today we hard-code some set of authenticators and use them directly from the config models, right? Can we, instead of hard-coding it, somehow actually inject the factories there at the time of use? Off the top of my head, I didn't look into exactly how we would do that; I'm guessing it should somehow be possible, at what time, like some sort of custom loading feature.
A: I'm not quite sure, I don't...

F: ...know how, like, I haven't looked into that.
A: So one idea that I had in the past, with other folks also interested in this, is to implement some sort of poor man's CDI, or poor man's dependency injection, here.
A: Basically, when you load the YAML file, which is where you create the config object, whenever you load it, you have a repository or a registry of authenticators, and then, when you construct the config objects, you inject those registries in there, in case they implement some method like, I don't know, SetAuthenticators; I think that was the suggestion there. But...
A: I think we're talking about different stages here. If you could show me the code that you have in mind, because you have the screen being shared here... the place that I mean is really at the bootstrapping part of the process, before the exporters, before the components are created. It is really during the parsing of... yeah, that's this.
F: Okay, I guess, to step back a little: I think this is an optional thing anyway. If we can't manage to make it work, I mean, fine; maybe the authenticators are only possible to define in the core. I would prefer that we create the possibility to do that elsewhere too, because it's nice, right? We do that for components, why not for authenticators? But if not, then, well, maybe not, in that case.
D: What is the problem? When we create an instance of one component, we need to have access to all the auth...
A: You have the name, but this one is not entirely the one that we should be looking at. We should really be looking at the incoming part. This PerRPCAuth is on the outgoing side; this is for the exporter.
A: Correct, and that would be better. So... yeah. And then we get the auth, which is the config object, and then we call the to-server-options helper, and that would then create a gRPC-based set of interceptors, so unary and stream interceptors.
F: Okay, you're asking how the generic config loader should know. It shouldn't, obviously; it should be a behavior of the configuration, right, of the particular configuration model. You call something like validate, or load, or pre-load, or whatever we call it; you pass the factories; and then it's the business of this particular gRPC server settings implementation to look up the auth key, find the right factory in the list by its name, and then do whatever it needs to do additionally. But this is embedded by somebody else.
A: Can you open this configauth... the Authentication, okay, object, I think.
A: A pre-load or whatever, yeah, something like that, that you call immediately after loading from YAML, from Viper. As you load, you also make another call which says initialize, or do something about it if you need to, right? Usually it's nothing, but some of the models may override it and do something. Something like that.
D: The thing is, if you embed that and you also have some initialization logic, you need to not forget to call the parent.
A: It's a possible path. Perhaps what can be done as well, or instead: right after the extensions are initialized, those objects here, like the authentication, can register a callback somewhere, to get called whenever the extensions are finished, so that they know about all the authenticator factories that exist. Because if you go back one page and scroll down, in one of the examples, if you...
A: The written file, yeah. So basically, within the authentication part we have the attribute "authorization"; that's fine, it can stay, it can still be there. And then, instead of "oidc" here, we can have "what is the authenticator to use", and then OIDC would be the authenticator, and OIDC would be an extension, all right? So we define an extension and then we pass a reference to that extension here: instead of "oidc", we have...
J: Will the receivers know what information exactly to pass into these extensions? For example, if we are talking about an HTTP receiver, the authentication is very likely to come in the headers, right? And then that should be put somewhere, and that somewhere should be passed into these authenticators.
F: Yeah, that's the behavior, that's the authentication behavior, not the configuration part, and you're right, that's the second point: how we do that. Are we clear with the config part? Can we move on to that second point? Because I think that's important and we don't have much time.
A: Just to wrap it up: so Bogdan is going to do some experiment in this part here, or... what is the outcome? The...
A: I can try that, but I'm kind of stuck. I tried it a couple of times already, but I can try it again, you know. But I think I'm going to need some help from...
F: Someone more experienced? Okay, okay. Maybe, yeah, maybe you can give it another try; if not, maybe I or Bogdan can have a look at it. All right, let's...
F: What we do now is we look at the context of the incoming request. We assume that the context contains the information that is necessary for the authentication, which is true for gRPC and HTTP: it contains the headers that we use for the authentication. And then we preserve the result of the authentication, again, in the context.
F: So this is the part that I don't like, and the reason I don't like it is because it doesn't work with asynchronous components, like the batcher, like anything that breaks the flow of the context, that does not pass the context as-is. That's the part that I would like to solve, and that's the part where we discussed, very briefly, doing this thing with the resource. Where is it? I think I spec'ed it somewhere here.
F: Because that's what exists today. You can do that, it's possible today, and I think that's a limitation that does not solve the problems that we have. Namely, we want to be able to, for example, route requests, route the data, to different exporters based on some incoming field in the authentication, right? So, like, the receiver has some sort of tenant ID or something like that.
F: Whatever, right, some sort of information based on which we need to make a decision about sending this data to exporter A versus exporter B. That's one use case. Another is passing the auth information through, in its entirety, through the collector: we receive some sort of headers in the receiver, we don't do auth in the collector, we don't care, we have a server behind it which does that, but we need to preserve this information, which we don't. It's dropped today.
F: So that's another thing that we want to do, and for that, keeping this information in the context is not sufficient: we just lose it at some point in the batcher. The batcher is a recommended component, we recommend you always use it, and if you use it, the information is gone. We also have the queued retry, which loses it again. It never reaches the exporter in reality.
A: So that wouldn't work either. It has to be at the individual data point, because we do have other batch-like components that group things based on other things, right? So we have the group-by-trace, and then, right...
A: Makes sense, yeah. It's in contrib, it's in contrib... hold on, looking for the groupbytrace.
F: This one goes here, right? So you have the thing here, which is what we pass through the pipeline, and, with the exception of things that do these reorganizations, like the group-by and all that stuff, everybody else preserves this, right? Nobody touches this. So if you put it here, it will be preserved.
A: It might actually work there, because we group by, we split batches by, trace IDs. So the batch that we receive would have authentication data associated with it, and then the resources that we create out of that batch, each one of those resources, would have the same token, the same authentication data. So I think it might actually work.
F: Okay, so that was the second point, right, that I had: I don't want this in the context, I want this somewhere else, where we actually keep it in the pipeline, hopefully, so it's piped properly all the way to the exporter.
A: Okay, sounds good. What kind of information do you want there? Only the source information, or also the outcome information? Like, when we are talking about OIDC, then we might want to store what the actual subject was that was passed.
F: Different authenticators may store different things, and then that thing is a callable object: you can call it and say, inject whatever you have on my outgoing HTTP connection. So this is the third thing, right? Let's say it propagated all the way to the exporter; now, how do you use that? To use that, you will need to have... let's assume we have a generic way of doing HTTP requests, the one that you mentioned, right?
F
We don't have that today, but let's assume every HTTP-based exporter uses some shared implementation of outgoing HTTP requests. Before you make that request, you pass it the authenticator when you're preparing the actual request, and then that implementation asks the authenticator: it gives it the request and tells it to inject whatever needs to be injected. And so, if it's a pass-through authenticator, it just copies the data that it saw on the incoming side.
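A minimal sketch of that idea, under the speaker's own assumption of a shared outgoing-HTTP helper (the `Authenticator` interface, `PassThroughAuthenticator` type, and `prepareRequest` helper are hypothetical names for this sketch, not the collector's actual API): the exporter hands the prepared request to the authenticator, and a pass-through authenticator simply replays the credentials it saw on the incoming side.

```go
package main

import (
	"fmt"
	"net/http"
)

// Authenticator is a hypothetical interface: something carried alongside the
// data that can inject credentials into an outgoing request.
type Authenticator interface {
	Inject(req *http.Request)
}

// PassThroughAuthenticator replays the Authorization header it saw on the
// incoming side onto the outgoing request.
type PassThroughAuthenticator struct {
	IncomingAuthorization string
}

func (p PassThroughAuthenticator) Inject(req *http.Request) {
	req.Header.Set("Authorization", p.IncomingAuthorization)
}

// prepareRequest stands in for the shared HTTP implementation hypothesized
// above: every HTTP-based exporter would pass its request through the
// authenticator before sending it.
func prepareRequest(url string, auth Authenticator) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, url, nil)
	if err != nil {
		return nil, err
	}
	auth.Inject(req)
	return req, nil
}

func main() {
	auth := PassThroughAuthenticator{IncomingAuthorization: "Bearer abc123"}
	req, err := prepareRequest("https://backend.example.com/v1/traces", auth)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Authorization"))
}
```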
D
Keep in mind, when you resolve this, I would like the person that takes care of it to look into what we have right now in the client context. You know, we put the IP and some other stuff from the caller there, just for some of the components to consume, and maybe that should be in the same structure. Correct?
F
F
Please have a look at that, because it's related functionality. And we're almost at time, guys, so I will have to drop. If you do want to stay, please do, but I have to go. Let's discuss the rest offline, okay? Okay.
F
F
F
F
H
F
F
Okay, let's start. So we have this new OTEP, which I submitted a few days ago. It's about what we want to do with logging libraries.
F
The next step here is to actually start the prototyping, or continue, because we already have some prototypes in Java and C++, and some basic one in Python, I believe. So here we're actually looking for volunteers, people who want to work on the libraries' implementation. I don't know who has any resources they are able to allocate.
I
Time for that. As I told you, I'm trying to find some folks; I'll let you know in a couple of days. But obviously, if anyone else starts on stuff, we can work together on it.
F
Does anybody else have any interest in this topic, like actually contributing to this stuff? We do a lot on the collector; I don't think I have seen much interest in terms of doing logging libraries, but I think it's very important too.
F
F
Hopefully everything is fine with that, and we're still looking for someone to also do performance tests for that; there is a separate issue recorded for that, for the system receiver. And now the big one, right? This one is the Helm chart: the addition of logging support to the OpenTelemetry Helm chart. We previously had traces and metrics only, so now we have the logs as well.
F
This is great. I believe it's based on all the work that you guys did on the Kubernetes log collection, using that framework. So I guess, again, it would be great to try this one out in various environments, to see how things work, right? Whether it's doing what we want it to do, whether we have the right configuration options or not. So some real-life, real-world usage, and feedback, would be very useful here.
B
F
O
F
Yes, please do. Or... I don't know if it requires any special permission; if you're able, please do. If not, you can ping me and I will do it for you if necessary. But yes, Rocky is working with me at Splunk, so we're both looking into using the new Helm chart for Splunk internally as well.
F
O
Yeah, the Kubernetes tagger and the Splunk exporter. I'm testing first with the Splunk exporter, and I'm not ingesting any logs yet, yeah.
N
O
Could you provide... could you comment on the issue on what I should do to make it work? Sure, yeah. Thank you, that would be great, yeah. I appreciate all the help you're providing for me. Thank you.
F
F
O
F
Okay, that's... yes, if you're building the image yourself, instead of pulling it from Docker Hub, then yes; obviously, if you're building an older version, then yes, that's a possibility. Okay, okay, cool. But this is a very nice thing, so we're going to use it, hopefully internally at Splunk as well.
F
That's good. Okay, so the next one is about something that came up during the implementation. We have these two things when we collect logs from Kubernetes. One is that we have the standard out and standard error coming from the containers, and there is a need to reflect that
F
somehow in the collected logs. Right now it's populating an attribute called stream, and that's not something that OpenTelemetry actually recommends doing; well, it recommends putting things into some appropriate namespaces, not just at the top level. Same thing for run id, which is, I'm guessing, some execution number for the container. Does anybody know what this actually is?
F
E
It can be... it says which instance, actually which container instance, the logs were coming from.
E
E
It would be useful, because you will know whether the logs are coming from the same instance of the container, after it restarts or before it restarts.
B
So in the Docker naming, that would be the container id then, right? Or something like that. I mean, this one goes from zero, but yes, the run id would change, because the container is different.
O
The... I think, run... yeah, I'm pretty sure the run id, what it is, is that when the pod's container fails and gets restarted, that run id goes up by one.
I
C
I do think that... honestly, again, I'm not a Kubernetes expert, but I think you would want to collect the container id if it was just a Docker thing, right? So that would sort of speak for having the run id there, so that you know there's a discontinuity: if you just look at the log and the run id changes, you know the damn thing was bounced, right?
O
F
O
Yeah, it's not like a unique id, but whenever the container gets restarted or recreated, that number gets incremented. So...
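The value of the run id described above, spotting a discontinuity in the log stream, can be sketched with a small helper (a hypothetical illustration, not collector code): any change in the run id between consecutive records marks a restart.

```go
package main

import "fmt"

// countRestarts counts discontinuities in a sequence of run ids taken from
// consecutive log records: each change marks a container restart.
func countRestarts(runIDs []int) int {
	restarts := 0
	for i := 1; i < len(runIDs); i++ {
		if runIDs[i] != runIDs[i-1] {
			restarts++
		}
	}
	return restarts
}

func main() {
	// The run id starts at zero and goes up by one on each restart,
	// per the discussion above.
	fmt.Println(countRestarts([]int{0, 0, 0, 1, 1, 2})) // prints 2
}
```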
F
O
O
F
Yeah, I'm looking at the semantic conventions right now, so we have something like that here. We have this: so this is the image name, and this is a container id, supposedly something that is populated, I'm guessing, every time the container is started. So this would serve that purpose, right? You would know that these logs are part of the same execution versus some other execution, so in a sense, yeah. What do we get in addition to this, right, by having this run id?
C
F
F
F
E
F
F
F
C
C
F
E
I have some thoughts on the stream, because to me it looks like there should be some namespace for stream, file path, file name: some attributes which we are not exactly going to use in the Kubernetes work, maybe, but someone may want to use elsewhere.
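The namespacing suggestion above can be sketched as a small attribute-mapping step. The `log.` prefix and the set of flat keys here are purely illustrative assumptions for this sketch, not an agreed-upon semantic convention:

```go
package main

import "fmt"

// namespaceAttributes moves flat, top-level attributes like "stream" or
// "file_path" under a shared (here, hypothetical) "log." namespace, leaving
// already-namespaced attributes untouched.
func namespaceAttributes(attrs map[string]string) map[string]string {
	flat := map[string]bool{"stream": true, "file_path": true, "file_name": true}
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if flat[k] {
			out["log."+k] = v
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	in := map[string]string{"stream": "stderr", "k8s.pod.name": "my-pod"}
	out := namespaceAttributes(in)
	fmt.Println(out["log.stream"], out["k8s.pod.name"]) // prints: stderr my-pod
}
```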
E
F
N
F
F
C
F
Right, right, that's another open thing we have. And do they have the same thing, like whether it's standard out or standard error? I don't see that. Severity, priorities, syslog things, yeah... I don't see that. But you're right, Trimec, I think it belongs somewhere here, assuming we have something equivalent, like a file path.
F
Okay, yeah. Anyway, please think about it. If you have any other thoughts, please comment on this issue, and then we can make a proposal to the specification.