From YouTube: 2021-03-31 meeting
B
C
C
F
Yeah, so it would be useful if we could have multiple config files. I feel like there are a bunch of different reasons why that could be useful. The use case that I have is this.
F
Actually, as OpenTelemetry expands and takes on all of telemetry, that's quite a lot of stuff. We've thought about putting even more than just logs, metrics, and traces in it: what about events, what about security events, and all sorts of other stuff? So, realistically, putting all your config in one file is insane, because it's going to get very long. I think I showed an example: one configuration that we created for one of our use cases is very, very long.

The other example that I gave is: maybe you have a team of people, and they're building their full configuration for all their use cases, for all of telemetry, which could be pretty big. Having them concatenate all of their different stuff together into one file is kind of inconvenient. For the same reason that you put code in multiple files, it's easier for organization to put it in multiple files. It also makes it so that if multiple teams own things, they can easily keep things in separate files. This is one thing that we've seen in our distribution of the collector.
F
We have the ability to get configuration from an external source, download it, and then run it with the collector when it starts. One use case that we've seen is that there might be some centralized team that has created some configuration and says: hey, all teams, I want you to use this configuration. But then the team itself might have some extra configuration that they also want, just for their special use cases. Yes, they could obviously concatenate it all together, but it's just more convenient if we could say: no, you can have your two configurations separately, maintain them separately, store them separately.
F
We can build a little startup script, or a little thing in our distribution of the collector, that lets you download from some easy, common places where you might want to store your configuration, and then it will just run with the multiple files. And if there's an error, let's say there are three files and there's an error in one of them, you'll get a useful error message that tells you which file is wrong.
F
I mention that because I'm imagining people are going to say: oh, hey Wesley, why don't you just build a thing in your distribution that concatenates the multiple config files together? And yes, I could do that, but I don't want to, because then the errors won't tell you which file was broken if there's a problem. So it'd be nice if you could just have multiple config files. I feel like it's a very simple and straightforward thing to do.
D
It's a generic notion of parts of configuration being fetchable from something called a config source, where one of the kinds of config source could be a local file. You could then include that file as a subsection of the primary config file. That will, I guess, cover some of the cases that you were describing.
D
We are also planning to use this functionality for what we call, or what was called in Stanza, plugins, which define the configuration parsing format for specific applications. So you're using, let's say, filelog, but with a concrete configuration which includes rules for parsing log files for that particular file, for that particular application, for that log class of the application. That is defined as an external file which is includable; in Stanza it was called plugins.
D
So we're planning to use that here as well, and the interesting part is that they are templated; they are parametrized. When you include the configuration, you don't only include the entirety of that file, you also supply some substitution values, which are then passed as parameters and expanded into the template. So it's a bit more powerful than just simple inclusion of the file.
D
G
Yeah, I'm curious, how would you see this merging part? Like, I have an initial file, and then the second file will just include other parts that are not defined in the first one, or override values?
F
To be honest, it would be cool if you could do both, but I was thinking just adding; overriding is not necessary. One example I had thought of, trying to think of a realistic one: let's say you got a file from a co-worker or another team, and it's the file that has the receivers and processors that generate metrics in some format. So you have that, but then you need to add your exporter.
F
So maybe those definitions are in one file, but then your exporter and the pipeline definition are in another file, and it should still all work. It should be able to load the definitions, and then, when it loads the next file and sees the pipeline, it should build it, since it has the definitions from the first file. And it would be great, actually, if you didn't have to load the files in order either.
F
Yeah, that's true. If we did overrides, then maybe an order would matter. I guess I don't care about that, to be honest; whichever one you guys think is better. If we do overrides, then yeah, order probably matters. If not, then I think order shouldn't matter. Okay.
G
I think, by the way, I don't know if you saw, I posted a PR this morning for what is named ParserProvider. So we have a notion of a parser of the config, and you have a parser provider.
G
D
F
G
Yes, I can tell you which processor or receiver. But let's assume, because this will be abused: you have one processor defined in one file, another one defined in another file, and the third file has a pipeline referencing a processor that is no longer in the previous ones, because somebody removed one of the processors in the base file.
G
G
H
Bogdan, I have a couple of comments. Joe Lynch here. This is something that we care deeply about and are actually going to be working on shortly on our side as well, so we'd be happy to get involved.
H
Some of the things that we've run up against: concatenation of files is definitely number one on the list. It's important that the concatenation is deterministic.
H
So, there are a lot of use cases that are important to us, and we're thinking about it too. Happy to get involved in any way that makes sense.
G
FYI, we do support a trick for overriding right now. If you use the --set flag, --set property.name=value, the new value will override the default.
G
You can just pass --set, say, ten times, and we will do the right thing. But I think support for doing this from a file is trivial, not a problem; it doesn't matter where we read the properties from. Anyway, I'm hearing you, and FYI, this is a great moment to raise these, because we are actively working on restructuring a bunch of configs.
G
I don't know if you saw the last changes about this, but I think we will be able to support the use cases that Joel and Wesley raised. In terms of the error stuff:
G
I think it would be hard because, as I mentioned, there may not be any error in any one of the files, but as the merge happens, there may be an error. For example, somebody may override one of the properties, let's say the endpoint for one of the processors, and put an empty string, and that will make the new configuration invalid, because that thing really needs an endpoint. I think it would be very hard for us to track that the config was valid without that change and no longer valid with that change, and so on. But we will be able to tell you: hey, the endpoint should not be empty.
G
D
G
Related to this, we are actually adding Validate to every component config; I don't know if you saw that progress. That will help us have a dry-run implementation of the collector, so you can validate the config without starting everything. And maybe during that we can dump the entire config and also point out the errors we found, where we can. Anyway, we are making progress on this configuration; that's what I'm trying to say.
J
One of the questions that we have about dry run: when we are doing, for example, upgrades on behalf of a customer, we are thinking about, you know, having the collector as a managed thing.
J
H
G
Dry run will tell you that we can parse the config, that we can load the components, and that the components have all the required fields. But it will not tell you, for example, that somebody changed a default port. So if you're relying on a default port in one version, and that changed between the two versions, I don't think we'll be able to detect that, probably.
G
Maybe one thing that we can do besides a dry run is have a "dump final config" that includes the default values. Then, if you have two versions, you can compare the final configs, including the default values, and that will answer some of these things. But there may still be hidden stuff, yeah.
J
Generally speaking, the availability of all this stuff will require... it's not an easy, static verification problem, right? It's more like running the whole pipeline, seeing what's breaking, rolling back, and stuff like that. So I was generally just asking this in the scope of configuration verification. Anyway, these are not very critical things to address right now. Yeah.
G
J
A
Yeah, when you're running on Kubernetes, you can then do a rollout with the config map, so that your old deployments don't get killed until new, sane ones have started.
A
H
G
We're not there yet, Joe. Right, right, understood. Probably the first core components that get there will be in a month and a half to two months, when we start declaring some of the first components stable in terms of configuration and stuff. But...
J
In the contrib repo, can we also have a stabilization table somewhere? Can we make, you know, all the contrib components mark themselves as stable? Because the stabilization work we are doing is for the main collector repo, right; it's not for contrib.
J
J
G
I think they will not be stable in terms of configuration until we stabilize core, because some of them depend on some of the core config definitions. That's why I think the way we want to do it is: stabilize this as phase one; then in phase two start stabilizing some of the core components' configuration and learn things; and then we'll document for contrib how vendors can follow the same process to declare their stuff. Okay.
L
Yeah, yeah. I think the assumption was that, as we stabilize core, we would also test the contrib components and make sure that they reflect that, yeah.
J
Yeah, I have no questions. I was just wondering if this was something that you were thinking about, because I hear from people: hey, how stable is this component? How stable is its configuration?
L
No, totally, it's a very good point. I mean, I've heard that several times; someone just has to do the work.
D
A
I'll try to share my screen now. Desktop one, share. Hopefully you're seeing my screen now. Probably Google Docs? You're seeing Slack? Actually, that's my Slack.
D
A
Okay, I'm trying to share. I don't know what's going on; it's supposed to be the desktop one.
A
Yeah, that was a Zoom window for sure. I don't know what's going on here. So if you just see only Slack, then I don't know what's going on. Perhaps if I close it? What do you see?
K
A
VS Code? No, I have the terminal, yeah. Well, that might work, yeah. Okay, all right. So if you see it moving, then it's working? Yeah, all right. I think the code is actually what I want to show anyway. So, the config change: it is a change in some parts. You're familiar with the way that configuration, sorry, authentication, works now in the OpenTelemetry Collector.
A
We have a configauth, and in there we define an authentication object. This one here is embedded by actual receivers or, for instance, by the gRPC server settings. So if we open configgrpc, we have the gRPC server settings, and then down there we have the configauth. So this is embedded by receiver configuration, basically.
A
D
A
Here I'm changing that, so that it only holds the name of the authenticator, and the authenticator itself is actually an extension; it's implemented as an extension. Let me see: extension, auth, the OIDC extension. So this is one concrete authenticator, all right. If I open the extension here, you see it is the code that we had for the OIDC authenticator in the previous generation, now wrapped into an extension.
A
It's nothing really fancy; it's the same code that we had before, a simple factory with the usual thing. And then, in our configuration file, what we can do is specify the extension, and this is the oidc extension, and the same configuration we had before now belongs here: this is the issuer URL, this is the audience. And in the receivers, this is what changed. Before, we had the auth node and then all the options, like issuer URL and audience, within an oidc node; we don't have the oidc node here anymore.
A
What we have is authenticator: oidc, which is basically a pointer to the extension with the same name, all right. So if we have two OIDCs, we can have, like, oidc/2 here, and then here it would have to be oidc/2 as well, right. This is consistent with how we do configuration now. The way it actually works internally, how things are actually tied together, is as part of the service.
A
I think this has been renamed builder in the latest main, and perhaps even renamed back to service, but I haven't caught up on that yet. The glue for all of those components is actually here: we have a function called setupConfigurationComponents. This exists already, and it starts by loading the configuration, then applying the configuration by setting up the extensions.
A
We don't do anything special here, and at this point we have all the extensions ready, right. They're set, and set means they're built and started. Then, from all the extensions, we get all of them and filter only the authenticator ones, and we do that by implementing, like we mentioned before...
A
I think we mentioned a couple of weeks ago doing a poor man's dependency injection, right. So we have here, where is it, oh yeah, an interface called WantsAuthenticators, which specifies a SetAuthenticators function, and our configauth actually does that. So we say that we want authenticators and implement the SetAuthenticators function.
A
We traverse the configuration using reflection: we go to every node in the configuration for that specific component, and we test whether that node implements the WantsAuthenticators interface.
A
If that's the case, we then inject the whole list of authenticators, so it is possible for a component to receive all the available authenticators, right. And then, again in our configauth, what we do is get only the one that was named in the configuration file. So if we specified oidc, then we get the extension named oidc here, and we set it as our sole authenticator.
A
Now, some other implementations might do, say, a chain of authenticators, or perhaps we might want to do something else. In this implementation here we have only one authenticator, and then, whenever the configuration calls the to-server-options, we do whatever we have to do here, right, and everything else is then the same as we had before.
A
The idea is that downstream distributions, or even contrib, can contain more extensions that are authenticators. An extension that is an authenticator has to implement both interfaces, so both the authenticator interface and also extension, right; somewhere here we have an assertion that this is an extension as well. That's pretty much it; those are the changes. I think most of them are in line with what we discussed before.
A
So you might have multiple values for a given key. And we don't specify, for instance, username and password, because some authentication methods might not have a username, which is the case for token-based authentication.
A
And then those here are mainly... I'm not happy with these ones here, the unary and the stream interceptors. They're here mostly for convenience, because this is what we had in the previous generation of authentication. It might not be necessary, so I'll try to remove those if they're not necessary.
G
I see, but okay, why do you do reflection? Versus: we make all the extensions available during Start of the component. So whenever an exporter is started, we pass you all the already started extensions, yeah, as a map. Based on the name that you just parsed, you do a lookup into that map, get the instance that you are looking for, yeah, convert it to an authenticator, and if it doesn't convert, it means bad luck for you.
A
This is all true. The only point there is that we are not talking about components here; we are talking about configuration files, or configuration options. The part where the authenticator gets injected is not a component, it's actually configuration: the configauth part. There we get a map of authenticators, and we need to do that because we then have the gRPC part here, which needs access to the authenticator to build the server options.
A
So this is required. But the server...
G
G
D
G
I
D
G
G
D
On this one, possibly; it's bigger, it's more changes compared to what this does. This one is almost seamless: it keeps what was working but makes it decoupled from the actual single implementations, makes it an extension. What you are describing is, I think, maybe possible, but yeah. Okay.
A
Yeah, I'm not sure I'm following, so here's...
G
The third method, on line 239. Yeah, if you pass the host...
D
When we call this, the host is not available there right now. So what you're describing requires making it available, which is a different phase in the startup, which, off the top of my head, I don't remember. So this one definitely has no way to know what the host is; there is no host yet, yeah, but...
I
D
A
The host is not available yet. So the first, well, so the gRPC... yeah, I'm not sure right now.
D
A
No, so this is exactly it. This is called, for instance, whenever the config is built. Sorry, I can show you where it is: it is called when the receiver is built, but not started. Where is it... the outline.
D
A
Let me show: setupPipelines. That code that we've seen before is called here, right, in buildReceivers, which is after building the extensions. The extensions are built here, in setupExtensions, which is called here. So: first setupExtensions, then this whole injection happens, and then setupPipelines, which is where the gRPC server settings' to-dial-options is then called. Yeah. What I don't like about that is that the component host doesn't really belong in that method. I mean, if I see the to-dial-options and I see a component host there, I wonder why it's there, you know, and it's only there because of an implementation detail.
D
D
A
Yeah, which is a requirement from the authentication, not from the actual gRPC part, right. So the authentication configuration, which is embedded into the gRPC one, needs extensions, not the gRPC itself, you know.
A
Yeah, so, just...
D
A
In general, I don't like reflection either, and I think even this code here might be... I don't know, I've never dealt with reflection in Go before, so it was mostly, you know, by trial and error.
A
But
what
I
and
that's
why
I
don't
like
here,
but
what
I've
I
made
my
piece
with
this
code
here,
because
it
is
not
in
the
hot
pass
right.
So
it
is
at
the
startup
and
if,
if
it
doesn't
work,
then
it
just
breaks
the
whole
collector
yeah.
D
Obviously, it's not about performance; it's more about whether this is fragile or not. Is this kind of robust, or is there a way for this to fail somehow because of some weird configuration structures? I don't know. I can't answer that; I can't guarantee that to myself looking at this code. I don't know what this means.
M
D
A
So if we have other types in the future, then it's really bad again, but... can we? Yeah.
D
If you don't mind, maybe try that as well, Juraci, and let's see where that leads us, yeah. But otherwise, I think, yeah, this is almost achieving what we want it to do. Right, it's extensible, it's orthogonal, this is nice: you don't need to do anything in your component. So, yeah.
A
Yeah, so this actually works already. I tried it, and I would demo it, except that apparently I can only show Visual Studio Code. But I did it just by following the instructions from a blog post that we had some months ago.
I
A
I'm facing a problem right now in the rebase, because this code here depends on code that is now, I think, one month old, and there were a lot of refactorings, so after the refactoring it stopped working. That's why I'm showing you this one here, but yeah.
So I'll try that as we discussed here: I'll try to make the server option accept a host, and I think it also means that we need a similar one for the client settings, right.
A
So the to-dial-options, yeah, even if you're not going to use it, you know, just for consistency. Yes, and we have several similar options for the HTTP configuration.
A
I'm sorry, I'm lost here. So should it be done here, in the to-server, the server? Yes? Okay, all right.
D
Hey, great. So I guess let's maybe also call time on this, if you don't mind, because we have a couple of other things. Okay, is Jack here on the call?
N
Yeah, I'm here. So, I'm from New Relic, and my team is interested in being able to translate cumulative to delta metrics in a standardized way. We've been discussing building and contributing a processor to the collector to do this, and we wanted to gauge the appetite of the collector working group to support this type of thing, and to see if there are any contacts or previous discussions around this that we should be taking into consideration.
G
There is a metrics working group. But besides that, I want to know if you are interested in doing it at the sidecar level, or at the region level, more or less as a service, because there will be different implications and different design discussions if you do it one way or the other.
N
I guess our thinking has been for when the collector is deployed as a service. That's...
G
N
Yeah, well, so I guess, you know, that's a question, right. For this to work, all the metrics have to be consistently routed to the same instance.
N
We do a little bit of this transformation in our existing New Relic exporter, and I think there are some other exporters or receivers that do this type of transformation as well, and they all rely on that same sort of consistent routing of metrics to an instance. We would probably just push those problems aside for now: have a processor that maintains the in-memory state needed to do this, and count on whoever is running it to have a routing strategy that makes sense.
G
Before making it a processor, I would most likely start by building it the way we did things in the pkg directory for contrib: we can have a library that initially implements only the consumer interface and is able to do this, and then that library can be embedded into an exporter to have everything right now, or it can also be turned into a standalone processor.
G
J
Is the metrics endpoint, the remote write one, not doing that already? Yeah.
G
The metrics exporter, the Prometheus exporter, not the remote write one, the classic one, does something like that. So yeah, I think it's very interesting, and as I said, we should start by building it as a library that just implements the consumer interface, and then figure out where to put it. I mean, it's very easy to make this a processor, and also to embed this consumer into the current New Relic exporter, or other places where we need it.
N
Yeah, I was just going to say that that makes sense to us. We were thinking that, to the extent that it's possible and reasonable, we'd like to be able to go both ways with whatever we contribute: cumulative to delta, or delta to cumulative. And yeah, I think the approach of starting out as a library that can potentially be integrated into any exporter or receiver, and then translating it into a standalone processor later, makes sense.
N
G
O
I am indeed. Hey guys, so, given we have 10 minutes left, I figure I can just talk a little bit about this work.
O
We had some discussions with Alolita and a few others from AWS as well, and we wanted to bring this to the community for a larger discussion. So Patrick had submitted that PR, and I think, Tigran, you had voiced some valid concerns we wanted to talk about with the wider audience, to see where things stand and just get some feedback on the thoughts.
I
Yeah, and if I may add, I think there are really two questions. The first is whether this is something the community is interested in having, something valuable that could help; in our eyes, this opens up the possibility of using all those 200-plus integrations that Telegraf has. The second question is, if the first answer is yes, then how to do it, and there are several options on the table to make it more convenient.
D
Right. So, I guess, my personal opinion: this is interesting and can be very useful. I shouldn't repeat everything I wrote in the issue; I have some concerns about the size of the dependency and how we manage it. But let's try to answer the first question first, right: what do people think about the usefulness of it? Does the community want it? Let's see; please speak up, people.
G
I mean, Telegraf as it is, is a standalone process, correct? An agent that has some kind of protocol. So, in that regard, I think it would be a no-brainer to support ingesting Telegraf wire formats. Now, embedding the whole of Telegraf into the collector, versus talking to it over some network, is a different discussion. But I think supporting input from Telegraf is probably a good thing to add.
B
I
Yeah, so there's the question of management, because it's much easier to have a single process and a single configuration file. Well, as we were discussing at the beginning of the call, until this configuration file gets too long. But this was the key item for us: it's much easier to have a single process and a single file than a couple of them, and this goes along with...
I
Another item, and this might not really be that significant right now, is that we are skipping the serialization, the network communication, and then the deserialization; we are just communicating within the process, which also simplifies a couple of things. But yeah, I think those are the answers we're looking for.
G
I would not like the dependency hell that I would have to deal with: 200-something extra dependencies. I don't know how you think we're going to manage that, because we already have half of Prometheus, and with the full Telegraf in the whole thing, I don't know who's going to be able to maintain all these dependencies.
I
Yeah, one of the ideas that was brought up by Tigran was using the OpenTelemetry Collector Builder. I think that this is probably one of the more interesting ways to solve that, because all these dependencies add about 50 megabytes to the binary size, which in certain cases might be considerable.
G
I'm not necessarily worried about the size; I'm worried about upgrading dependencies and breaking things. Dependency hell: what is going to happen with the whole Go ecosystem? It will be super fast at breaking things, and we'll be stuck with, I don't know, Thrift 0.9, which a bunch of people were talking about, and stuff like that. That will happen, and it will be impossible to coordinate upgrades across the entire ecosystem.
L
I mean, again, if this can be a contrib component that provides that integration for Telegraf users to migrate to OTel, I think it's a good addition. And also, if it's an optional contrib component that can be built separately, and performance testing can be added to make sure that it's still performant and doesn't bloat the collector in general when other components are not included, perhaps that's something worth examining.
H
Yeah, Joe Lynch here. From a functionality perspective, it sounds very, very interesting. From a maintenance perspective, I have similar concerns to Bogdan's, but maybe more along the lines of having solid integration tests and things like that. That's one of...
H
...the problems with these third-party apps: you get it to work the first time, and then they move from version five to version six and everything breaks, and you don't really know about it, because it'll be managed at Google with no... and I say "we" when we don't do it well. So, you know, I don't know whether it belongs in the contrib repo or not, but certainly having the functionality accessible somewhere would be very interesting to us, even if it's in another repo or something.
J
In terms of dependencies, is it transitive dependencies that are never used, or actual, real dependencies? Go is trying to improve this in the next version: they're going to be able to do more analysis on transitive libraries you never actually rely on, and remove all that stuff from go.mod and go.sum. So what is the actual concern here in terms of dependencies?
G
P
So, Bogdan, one small input here from the OpenTelemetry Go side. In OpenTelemetry Go there's a dependency on Thrift via Jaeger, and as part of stabilization we actually decided to vendor Thrift, so that we do not pass on that complexity to anyone using it.
G
But the problem is, the dependency is not in our repo. The dependency would be in Telegraf, and would be in Zipkin, and would be in others, and I don't know if you can vendor in there.
J
I'm not sure about its dependencies, but it's possible that we can vendor it in the contrib repo somewhere and then point the import path to use that particular directory.
G
It's not going to be possible, because if you have an incompatible version, say you have version 13 and Prometheus requires version 14, which breaks some API, you cannot point Prometheus to use the version 13 that you vendored, because it will not have the same signature for the function. So when you have two dependents, you cannot pin a single version, and that will get us stuck on old versions and stuff.
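What G describes is the classic diamond-dependency problem. A hedged go.mod sketch of why vendoring can't solve it (all module paths here are invented for illustration, not real modules):

```
// Illustrative go.mod for a collector build; module paths are invented.
module example.com/otelcol-custom

require (
    example.com/ingest-a v1.0.0 // compiled against shared/client v13 APIs
    example.com/ingest-b v1.0.0 // needs shared/client v14, which broke v13 APIs
)

// A replace directive must pick ONE version of the shared dependency for the
// whole build, so whichever consumer needed the other version fails to compile:
// replace example.com/shared/client => example.com/shared/client v13.0.0
```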
P
The diamond problem, right. So I guess, zooming out, this is all happening because we are trying to keep all these Go dependencies in one process. And I heard from Tremec (I'm probably misrepresenting your name, my apologies) two concerns: one is that it's nice to have a single process, and the other is that it's nice to have a single config file. Right? OpenTelemetry could say: hey, we are in the business of running multiple processes. We will supervise the processes.
P
We will do a good job with, you know, killing zombies and whatnot, but from the user's perspective it's one config file. Nginx runs many processes, but we don't care, right? Or Joe and team run multiple processes, but they do such a good job that their users don't need to know. And that means that this dependency-health concern goes away. Is that something we want to pursue as a long-term direction?
G
We may do that. I think it may be interesting if we have a receiver that actually does just that: accept the Telegraf config and start Telegraf as a subprocess with that specified config. That achieves the config goal. There will still be the performance issue that you mentioned, but I don't know, we should investigate. What I'm trying to say is:
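If that subprocess idea were pursued, the user-facing config might look something like the sketch below. To be clear, this `telegraf` receiver and all of its fields are purely hypothetical, no such component is being committed to in the discussion:

```yaml
# Hypothetical collector config: a receiver that supervises Telegraf as a
# subprocess. Component name and fields are assumptions for illustration only.
receivers:
  telegraf:
    config_file: /etc/telegraf/telegraf.conf  # passed through to Telegraf verbatim
    # The receiver would exec Telegraf with this config and ingest its output
    # as metrics, keeping Telegraf's Go dependencies out of the collector binary.
service:
  pipelines:
    metrics:
      receivers: [telegraf]
      exporters: [logging]
```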
G
We can even start it with the library, but it will be very hard for me to put it in the build file, to put it in the final build. If we don't put it there, I'm fine to provide it, but that's not very useful for users. If I put it there, I get into the whole dependency hell which I'm trying to avoid.
D
So sorry, guys, we're over time here, and there is another meeting scheduled in the same Zoom room. So please go to the issue and comment there, and we can discuss next time if we still want to do that. Thank you, everyone. For the Collector SIG Logs, please stay.
Q
What's going on? Yeah, sure. Hello, this is Rock. I'm in Splunk. I've been working on Fluentd for Kubernetes; it's a Fluentd-based logging and metrics collector that ingests data to Splunk. And I've been working on the OpenTelemetry Helm chart, and contributing to the OpenTelemetry collector itself, so that we can have OpenTelemetry ingest logs and metrics to Splunk. Yeah, nice to meet you. I joined the last session too; this is my second time.
D
Thank you. So yes, the plan here is that Rock is going to be contributing to the Helm chart that we have, and his primary focus is on the Kubernetes log collection. And we want to use what we're doing here at OpenTelemetry with logs more at Splunk internally as well, so we want to replace one of the use cases that we have using the Fluentd solution.
D
Moving to this new open-source solution, so yeah, exciting. We have one more engineer who's going to be helping. Does that sound good, folks? Cool. So I guess I only had one thing that I wanted to discuss: it was about the body of the log. There are probably other topics that we can do after that. So let me try to maybe open that, because I don't completely remember where we stopped. One second.
R
Yeah, I think we have some consensus, at least between Sherif and myself. I think Sherif was more or less echoing your opinion, or extending it. So, to summarize, the point that was made here was that Stanza, as we brought it in, when we parse data, automatically puts it into the body of the record. The idea behind that was that the body can then sort of act as a playground to manipulate the data.
R
Further, there may be subsequent parsing operations that want to add or remove certain things, but ultimately the idea would be that you sort of play with the data in the body, and then you move the appropriate fields up to attributes or resource as necessary, and also timestamp and severity.
R
So Tigran pointed out, you know, sort of challenged, this idea that that's always the right thing to do. And I think Sherif really articulated a good point about this: that in many specific cases, many, many cases, there's a really good case to just parse directly into attributes, maybe directly to resource, and it sort of just comes down to whether or not what you're parsing is.
R
Maybe this is an oversimplification, but whether what you're parsing is this sort of flat data, like with a regex parser where you're naming each of the fields. Those are probably significant enough things, if you're calling them out, that you would put most of them into the attributes straight away.
R
I can write this up into an issue, but basically: most parsers would parse to attributes by default, and then we'll have some mechanisms for overriding that. We want to make sure that it's easy to move things around. I think there's some functionality for this in Stanza already, but there's also another set of work in progress right now.
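A minimal sketch of what that default-plus-override could look like, using Stanza-style operator config. The field names here, especially `parse_to`, are approximations of the proposal, not a committed design:

```yaml
# Sketch only: parse to attributes by default, with an explicit override field.
pipeline:
  - type: file_input
    include: [/var/log/app.log]
  - type: regex_parser
    regex: '^(?P<time>\S+) (?P<sev>\w+) (?P<message>.*)$'
    parse_to: attributes   # proposed default; could be overridden to body
  - type: stdout
```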
R
That is the default input, yes, and you can specify alternate inputs, but yes, that's the default, I guess.
R
Obviously, we took a stance on this and designed it the way that it is because we thought that was the right call. But I think there's a good usability argument that, in the end, it's fewer steps to get to where you want it to be, and so maybe that pragmatic reason is enough to say that the right way is to parse from the body to the attributes, or perhaps.
R
That was one of my concerns: right now it is uniform, I think, and I personally like that. But I don't know that that needs to stand in the way. If we think the regex parser makes sense to parse directly to attributes, and we think the JSON parser makes sense to parse directly to the body, I think we could make that case.
R
I think that's possibly one of the drawbacks of this proposal: we'd be sort of fragmenting what is right now a uniform design choice. Yeah.
S
So I think there's one other argument that should be weighed, which is that in some contexts it's important not to lose the original. If you have an error in parsing and you lose the original data, in security contexts and such, that can be a really bad deal.
S
You know, was it the original or was it the processing? There are times that that's not important; there are times that it is.
R
And that's a good point: Stanza does support preserving the original, but it's not the default, so just making that note. Whether it should be the default or not, that's another question, I think.
T
From the sort of fundamentalist perspective, from the perspective of the spec, the log data model: looking at it again, the way it's formulated, it doesn't really preclude populating all the stuff that's in the envelope, including the stuff that you just mentioned, like severity. And then we have the resource stuff there, we have the attributes there. This is not just purely metadata, the way that the log data model is written.
T
There is no way lots of the envelope fields would ever make any sense if we wouldn't allow populating them from the actual message that we are parsing, right? This is not just sort of environmental metadata from the collector's perspective.
T
So with that in mind, fundamentally speaking, a parser ought to be able to populate any part of the log data record. From that perspective at least (I'm not deep in the Stanza implementation, so forgive me for that), just from a more high-level, abstract perspective, it almost feels like it's kind of prescribed: the parser ought to be able to populate any field.
T
I don't know if that orientation helps at all, practically speaking, but to me that sounds like: if the parser is written in a way such that, generically, it basically maps message to body, then that's not good enough.
R
Sure, yeah. And just for a little more context on this: I think maybe where the design philosophy differed is that we didn't really see it as the job of the parser to put the data into a specific place. You would use the parser to make the data available in some structure, and then you would use sort of restructuring operations to put the data where you wanted it. Obviously, that's not always the best usability.
T
The thing still makes sense on its own, though. So it comes down to less of a fundamental question, because you can do this mapping to the log data model; it's just different from a config perspective. There's a bunch of additional stuff you need to write, and the question is: can we collapse it?
S
I also don't know that it's necessarily an inconsistency to have structured data that gets parsed into a structured body.
D
Yeah, that would be great, I think. So if that means there will be changes, if we're actually changing the behavior of the parsers, it still makes sense to maybe fix this particular behavior with a flat body, even with the current implementation, so that it works correctly. I think it's doable: you can just move that to the appropriate place, if I'm not wrong.
Q
Yeah, anyway, I tried the suggested configuration where you move from log to the dollar sign, and it works like a charm; it does what I wanted it to do, so thanks. Other than that, I'm fine with it; anything's fine, as long as it doesn't have a strong negative impact on performance.
D
Thank you. Okay, I think what you just said is the right thing to do. I don't have any strong opinion here, to be honest. I don't see one solution that is actually much better than anything else that we can do, and maybe we even keep it as it is, right? Let's not change it just for the sake of a change; maybe keep it as it is, yeah.
R
Okay, yeah. Okay, so I think Joe Siriani is on the call. Joe, are you able to speak to this at all? I know we've dealt with this as well, and I think this has something to do with the persistence database, basically, but also making that available to the pod in its next life, basically.
C
Yeah, I haven't looked at the Helm chart very closely, but the way we handle it is, we use a hostPath volume to store the offset database, or the persistence database.
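A sketch of that approach, assuming a DaemonSet-style collector pod; the paths and names below are illustrative, not taken from any actual chart:

```yaml
# Sketch only: a hostPath volume so the offset/checkpoint database survives
# pod restarts by living on the node rather than in the container filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: otel-collector
spec:
  containers:
    - name: otel-collector
      image: otel/opentelemetry-collector-contrib  # illustrative image
      volumeMounts:
        - name: checkpoints
          mountPath: /var/lib/otelcol   # where the offset DB is written
  volumes:
    - name: checkpoints
      hostPath:
        path: /var/lib/otelcol
        type: DirectoryOrCreate
```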
C
Yeah, I can share some examples offline. You can find me on Slack, Joe Siriani.
R
I'll also say, I don't know that this will work until we fully implement the storage.
R
So yeah, the checkpoints basically only exist in memory right now. It keeps track of the files; it should prevent duplicates, it should prevent missing things, but pretty much only while the process persists, and then it will revert to whatever the behavior is when starting up from scratch. So we do need to get this in there.
R
The default is to start at the end of the file, but there is a configuration option to say to start at the beginning. So I am curious here: I haven't looked at the Helm chart closely, but if it is configured to start at the beginning, that might explain why it's starting over.
R
Right, so I think we need to differentiate: what's the right default for now, and then what's the right default once the checkpointing is working. It seems that starting at the beginning would be the right default once the checkpointing is working, yeah.
D
Okay, so I guess it means we keep it as is for now, until the checkpoints are actually implemented. And for your tests, Rock, if you need it to behave differently, maybe we can make it configurable right now. I don't know if we need it long term as a configuration option; if with checkpoints it works properly and robustly in all scenarios, maybe we don't need it as an option, but for now maybe we can expose it.
Q
Thank you. Well, another thing is there is data loss. I think it's coming from file rotation, and no matter what the EPS at which I ingest the data, from 5000 down to 2500, I always have some little bit of data loss, like 98 percent, 99 percent. And looking deeper into the pattern of the data loss, in conclusion, I'm missing a block of consecutive log records, from here to here.
Q
So just like 114 records I am missing. So I believe, I'm suspecting, that log file rotation happened after this number. My log generator is printing and incrementing this value, so yeah. Even if I reduce the EPS, I'm still seeing data loss. Yeah.
Q
I'm tailing a file that's just a symlink; it points to, links to, an actual log file, and then it's that file getting rotated, yeah.
R
Yeah, I'm not sure what's going on here. I mean, it sounds like a pretty clear bug, but okay.
Q
It's just Python print, yeah. Okay, I'll put the details here: the log generator and also my configuration. I used the same configuration as in another issue I created, but I'll share it here. So I would appreciate some expert looking into it. If you need help, I'm also ready to spend time on it, and I can bring in my teammates, because, you know, we are eager to get it working, get it stable, and share it with beta testers.
R
Yeah, honestly, this would be a great issue to get help on. It seems pretty isolated and particular, and, you know, Stanza is fairly new software; I'm not surprised there are going to be some issues like this. But I'm not personally intricately familiar with this part of the code base either, so someone may have to get up to speed on it one way or another.
Q
So, to make a contribution to the Stanza that OpenTelemetry uses, which repository do I contribute to?
Q
I will discuss it with my manager, Didi, and then he might assign it to me, or he might assign it to my teammate, or he might say otherwise.
R
Yeah, if you or any of your teammates get to this before I do, please feel free to reach out to me on Slack. I'm happy to kind of orient you in the code base. I know roughly where this is; I just don't know the exact details of what might be going on.
Q
And I've been running performance tests and was able to get like 41k EPS when I jack up the pod count, so yeah, it's good, with the 99 percent.
D
Is that how many pods you have? What's the count you see?
Q
Yeah, cool. Okay, I think it can go up, because I'm putting all 16 pods in a single node, it can only generate about 42k EPS total. But then, yeah, it's catching up really well. It's maintaining like a 98, 99 percent ingest rate; it's not 100 because of the rotation data loss. So it's scaling up well. Just sharing that with everyone.
Q
Oh yeah, sorry, I have one more feature request: we need multi-line concatenation per container. Users should be able to customize it: if the container name is this, then concatenate with this regex pattern. But with the filelog receiver, it's not possible currently.
R
I'm kind of surprised it's not possible. Okay, is it because the multiple containers are logging to the same files? Is that why?
R
First of all, I would try a setup where each receiver is a different instance of the filelog receiver with its own configured path. That might work. Okay, that's probably where I would start. We can maybe talk about optimizations if that isn't cutting it for any reason.
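The per-container workaround suggested above might be sketched like this; the container paths and patterns are illustrative assumptions:

```yaml
# Sketch only: one filelog receiver instance per container path, each with
# its own multiline rule.
receivers:
  filelog/app-a:
    include: [/var/log/pods/*/app-a/*.log]
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'  # timestamped lines start a record
  filelog/app-b:
    include: [/var/log/pods/*/app-b/*.log]
    multiline:
      line_start_pattern: '^\{'                 # JSON objects start a record
```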