From YouTube: 2021-07-14 meeting
A
Hello, everyone, welcome to [inaudible]. Tyrone is joining today.

D
Yeah, I can talk about this briefly — it should only take, yes, three minutes. Basically, with the new config sources PR that's coming from Splunk, you can have config files in cloud provider storage like Vault, S3, ZooKeeper and so forth.
D
One
idea
was
that,
right
now,
if
a
customer
has
their
entire
config
file,
you
know
so
there's
no
merging
happening
at
all
in,
say
vault.
Then
they
would
stop
to
create,
like
a
dummy,
local
config
file,
to
set
everything
up
and
call
it.
Our
thought
was
that
it
should
be
possible
to
call
simply
from
command
line
if
you
have
your
entire
config
file
in
a
single
in
a
single
cloud
provider,
storage
solution,
so
this
would
probably
involve
creating
a
new
flag
at
command
line.
D
Where
you
would
have
the
new
flag,
then
you
would
specify
the
cloud
provider
and
then
you
would
give
any
necessary
parameters
to
set
it
up
so
like
for
s3
that
might
be
like
region
and
bucket
name.
B
So right now, indeed, we don't have an easy solution via the command line. The option you have right now would be a minimal config file where you configure the source — the S3 credentials and everything you need for that — and then you say that the processors and so on are coming from that file. So you have maybe a ten-line config to write.
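The minimal bootstrap file B describes might look roughly like this — a hypothetical sketch only; the key names approximate the config-sources idea and are not the exact syntax of the Splunk PR:

```yaml
# Hypothetical sketch — key names are illustrative, not the PR's exact schema.
config_sources:
  s3:
    region: us-west-2
    bucket: my-otel-configs
    # credentials resolved through the usual AWS credential chain

receivers: ${s3:receivers.yaml}
processors: ${s3:processors.yaml}
exporters: ${s3:exporters.yaml}
service: ${s3:service.yaml}
```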
B
We thought about this, and we said it would be interesting, but this solution is decent for the foreseeable future, until we build the other option. Do you see this as a more urgent thing, or is this solution good enough for you?

D
I will just say, from Amazon's perspective, the reason why we want this a little bit is: we have a lot of cases where we expect customers to just use one of our default config files sitting in S3, and we would prefer, if possible, for them not to have to set up a local config file and deal with any complication that might bring — especially because, in certain cases, if you're deploying this somewhere other than your local machine, setting up a config file could be annoying and you might not want to.
B
Can we build something where — is the syntax that we're going to use very custom for every cloud provider, or can we come up with a syntax that matches, for example, importing the file from Vault whether we're running on AWS, running on GCP, or any other place?

D
My thought process — and if there isn't major objection to this, I can then make a more full-fledged issue — was: right now, with the way Paulo set it up, every configuration has a factory that takes in params in some certain order. So you could just, you know, have one flag that says "single config source", then the name of the type of config source as the first param, and then you could just list the params you want to give to the factory.
E
Can I suggest something else? For now, maybe it's too complicated to introduce those abstractions, but can we enable some S3 driver — like, if you pass a config file that includes s3://, it goes and reads from S3? You know, other cloud providers have similar endpoints for theirs.

E
I think that's where we should start, but eventually maybe we will need more complicated stuff — I think it will just require some discussion.

E
Yeah, let's start with the URL support, that's my thinking.

D
The URL sounds good, and I think that should, for most cloud providers, contain all the information necessary to do anything — at least for S3. I know for Vault you might need a username and password, which gets a little bit more complicated, but there could be some requirement to just store that in a local env variable or something.
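The scheme-based dispatch the group is converging on could be sketched like this — a toy illustration only, not the collector's actual code; the loader names and the local-file fallback are assumptions:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// schemeOf extracts the URL scheme from a --config value, treating bare
// paths (no scheme) as local files.
func schemeOf(raw string) string {
	u, err := url.Parse(raw)
	if err != nil || u.Scheme == "" {
		return "file"
	}
	return strings.ToLower(u.Scheme)
}

// loadConfig dispatches to a provider-specific loader based on the scheme.
// The loaders here are just placeholder strings for illustration.
func loadConfig(raw string) (string, error) {
	switch s := schemeOf(raw); s {
	case "file":
		return "local file loader", nil
	case "http", "https":
		// Well-known schemes; basic-auth credentials can ride in the URL.
		return "plain HTTP fetch", nil
	case "s3":
		// Needs special handling: the AWS SDK credential chain must
		// authenticate before a non-public object can be read.
		return "S3 loader via AWS SDK", nil
	case "vault":
		// Token could come from an env variable, as discussed above.
		return "Vault loader", nil
	default:
		return "", fmt.Errorf("unsupported config scheme %q", s)
	}
}

func main() {
	for _, c := range []string{"./config.yaml", "https://example.com/cfg.yaml", "s3://my-bucket/cfg.yaml"} {
		loader, err := loadConfig(c)
		fmt.Println(c, "->", loader, err)
	}
}
```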
E
Aditya, do you want to take a look at whether it's good, at least for the, you know, major cloud providers?

G
Do that — because there are some kinds of config sources that we've implemented that are not very standard for a URL; they need some different configurations, and that was the case. But, if you think so, we could even implement a config source that basically retrieves the stuff from URLs, you know.

B
Yeah, let's also, on our side, after we see this proposal, try to map it against what we have right now and see if URL-based would work for us. And maybe the long-term solution is: if you are passing the URL to the config flag, we pass it as a URL and such, and even when you embed it into the file as a subsection, we can still use the URL syntax.
D
I will say for S3 in particular, though, we would need a special setup where it was parsed slightly differently, because we would need to call the AWS SDK authentication in order to authenticate if you had a non-public S3 — and I assume that other cloud providers might have similar authentication issues. The URL is still fine for input, but you need to sense "this is an S3 URL, I need to do something specific", and I would guess that Azure or GCP might have similar needs, but yeah.

E
There is this default-credentials thing: by default, there are some credentials in a well-known place — it's very similar to AWS. But if you want to pass in a specific credential file, then we need to, you know, be able to pass that as well. Maybe we can start with this default thing, I don't know.
H
Maybe what I'm hearing is: we can start with well-known URL schemes — http and https, with basic-auth credentials in the URL — and then, depending on the need, we can add support for extra URL schemes. If there is a need to have s3 or s3n as a URL scheme supported by some special logic, then at least that doesn't complicate the user interface. Yeah, right — it sounded like the big benefit...

H
...is that if you're doing something standard — using a file URL, or even a publicly accessible http URL — life continues, and you don't need to think too much about it. And again, with your example of AWS providing standard configs, it seems like those standard configs could live in a public location, right? They're not secret.
E
Well, it depends, right — we can't assume that; nothing can live in a public location by default, you know. And I think we should start with the actual use case that we want to enable: we need this because of S3. I mean, for our use case, if we won't be able to achieve that, I think this might not be a good idea. I think we should think about: are we going to be able to support S3?

E
Yeah, that's a very good point. I think a lot of the time we don't, you know, expect people to pass in explicit credentials — it's almost like an anti-pattern. So what do you think, Aditya? Do you think that default credentials would be a good thing? I mean, there are a couple of things you would like to pass — a role or whatever — but we can maybe turn them into environment variables.
D
It makes a lot of sense. I do agree, though, that defaulting to everything being public feels like a bad idea — there are lots of good reasons to want things to be a little bit private overall. But I think, yeah, some sort of environment-variable solution sounds good.

B
Okay, so I'm going to wait for a bit more discussion on this and some due diligence. If URL-based will help, we are willing to support it — don't get me wrong — but before jumping into a solution, we need to know what we are going to implement.
E
Yeah, I'll suggest something: Aditya, maybe you can start a doc — you know, this is what it will look like, this is how we'll pass S3 or a couple of others, and these are the environment variables that we may need for each. So we can just document that, then come back here, and people can comment. — Yeah, that's a good suggestion.
J
Hi, yeah, I think I'm the next one here. I'm at Honeycomb, and at Honeycomb we have the need to figure out a way to do some transformation of metrics: we want to basically pull a specific time series out of existing metrics and put it into its own metric. So, for example, with system.memory.usage we want to take the "used" time series out of that and create a new metric called system.memory.usage.used.
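The transformation J describes — splitting one labeled time series out into its own metric — can be sketched on a deliberately simplified data model; the real collector uses pdata types, not these structs:

```go
package main

import "fmt"

// Simplified metric model for illustration only.
type DataPoint struct {
	Labels map[string]string
	Value  float64
}

type Metric struct {
	Name   string
	Points []DataPoint
}

// extractSeries pulls the data points whose label matches (e.g. state=used
// out of system.memory.usage) into a new metric named name+"."+value,
// dropping the now-redundant label from the copied points.
func extractSeries(m Metric, label, value string) Metric {
	out := Metric{Name: m.Name + "." + value}
	for _, dp := range m.Points {
		if dp.Labels[label] != value {
			continue
		}
		labels := map[string]string{}
		for k, v := range dp.Labels {
			if k != label {
				labels[k] = v
			}
		}
		out.Points = append(out.Points, DataPoint{Labels: labels, Value: dp.Value})
	}
	return out
}

func main() {
	mem := Metric{Name: "system.memory.usage", Points: []DataPoint{
		{Labels: map[string]string{"state": "used"}, Value: 512},
		{Labels: map[string]string{"state": "free"}, Value: 1536},
	}}
	used := extractSeries(mem, "state", "used")
	fmt.Println(used.Name, len(used.Points)) // system.memory.usage.used 1
}
```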
J
The metrics transform processor seems like the perfect place to be doing that; right now it doesn't seem like it does it, so we're looking into maybe making a change to that processor and submitting a PR for it. But I noted a comment on an earlier proposed change to that processor saying that there's a big refactor happening that is blocking any changes, and I just wanted to know the status of that and sort of see.
H
Exactly — and then you can rub your hands in glee and laugh like an evil mastermind. I'm Punya, I work at GCP, and so Bogdan and the GCP team came up with this shady agreement. But regardless — the point with this effort: there are two efforts happening right now, in early stages, unfortunately. One is to unify the various transform processors into a single mutation...

H
...properly, a data mutation processor. I think Minsha of AWS is driving that — you can see a design doc. There is another effort to kind of redo the internals of how the metrics transform processor works. The first thing I talked about is more about streamlining the config, and it has some impacts on the implementation; the second effort is all about making the implementation free of legacy dependencies like OpenCensus.
H
Okay — so, you know, we always welcome people joining this effort, and right now there's a lot of design work to be done. I would say in probably two to six weeks there'll be a lot of coding work to be done as well. Either way, if you have a strict deadline for getting this change done, I unfortunately have to recommend that you write a small little one-off processor and bundle it with whatever you're doing.
J
Yeah, I was actually just in the process of writing an issue for this. I'm not even sure whether this is part of the intent of the existing code or not — this might be a change that we want — but if it is part of the intent of the existing code, it doesn't seem to be working.

J
The way it is right now, you should be able to identify a specific label in a metric and say: hey, I want to pull that label out and create a new metric out of it.
B
Is that — I think that's the metrics generation processor or something like that? Is that the one doing that, not a transform? Yes, I know we have ten of them or something, don't get me started.
K
I guess the metrics transform processor also can do this. We have an experimental feature which does label matching: given a label set, we match the labels and generate a new metric out of that. The feature is experimental_match_label, I guess, and we are using it. If that's the same thing you're asking for, I guess it's already there.
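For reference, the experimental label matching K mentions might be configured roughly like this — field names are approximate and may not match the processor's current schema exactly:

```yaml
# Approximate sketch of a metricstransform rule that copies the
# state=used series into a new metric.
processors:
  metricstransform:
    transforms:
      - include: system.memory.usage
        match_type: strict
        experimental_match_labels: {"state": "used"}
        action: insert
        new_name: system.memory.usage.used
```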
J
I was looking at this processor — my assumption is that, no, it's in the metrics transform.

I
And this is one of the major ones for our traces GA objective, Juraci.

A
Yeah, and I think there's very little contention here — there's only one point of contention, actually — and both... so, go ahead.
A
Right, so the reasoning behind the third commit is that it is just an implementation of the behavior that we added as a comment to the main API. For the main API, we mentioned that the context should contain information about the authentication data that is coming in and the outcome — the groups, and the subject or the username.

A
So the context is mainly an implementation of that, and it is actually copied from another PR. I can remove that, but it does create a situation where, you know, we release v1 but we haven't defined where to find this information within the context.
B
No, my point is, I think we can have a much more focused discussion on that, and we're not going to release v1 without these, right? Okay.

B
It's still blocking, but what I'm trying to say is that at least we separate the discussion. We agree that we believe this API should also return a context, because it may change the context — so we are good on that — but let's split the changes into two parts, because right now the majority of the comments are related to the...
A
Yeah, all right, I can do that — I mean, we can do that quite quickly; it's just removing the third commit. So, do we have a couple more minutes to discuss the context itself — the third commit, in separation here? Because there are a few things here.

A
So, if I understood Tigran correctly here, he's concerned that this context here is too specific to the OIDC authenticator — he believes that it belongs within the OIDC auth extension API, and that extensions should provide their own APIs.

A
I think that's a valid concern, but we need one API for authentication that all components can rely on, no matter which authenticator was used. So we need at least information on how to retrieve the subject — which is the username, or the system or service name — and potentially the raw...
B
Quick question about this: if it's generic — or rather, the reason to have it generic would be that one component can create it and another component can consume it, yeah?

B
Is that the case? Because I think Tigran's concern is that if OIDC creates this, only OIDC will be able to understand it and consume it — hence it's an internal property on that extension, on that component, and is not something that we need to be generic. But if there is a need to share this between components — one component produces it and the other one consumes it — then there may be a need.
A
We
have
a
current
need
for
that,
so
we
have
the
routing,
processor
and
the
routing
processor
could
make
use
of
this
information
here
for
like
the
grouping,
the
group
membership
information,
at
least
to
make
decisions
on
routing
decisions,
or
you
know,
for
multi-tenancy
purposes,
based
on
on
membership
or
based
on
on
subject
information.
So
for
this
service
account
here
I
want
data
to
get
to
that
back
end.
But
for
this
one
here
I
want
you
to
go
to
this
vendor
here
right,
so
I
can.
I
can
display
the
destination
based
on
the
context
information.
A
So multi-tenancy is something that we need, especially downstream in Jaeger — multi-tenancy is something that people have requested for Jaeger quite a lot. And the way that we're going to implement it on the other side is: we have one Jaeger instance — or, thinking in collector terms here, we're going to have...

A
...one collector instance, and at the storage level we're going to have multiple storages, as extensions or as exporters, and based on tenancy information — so, based on the service accounts and whatnot from the authentication data — we direct the data to specific storages. So if you want data from tenant acme to go to the Elasticsearch index prefix "acme", then that's the Jaeger exporter that we want to use; if we want, you know, e-corp to go somewhere else, then we just create a new exporter and say...
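The tenant-based routing A describes boils down to a lookup from authenticated tenant identity to an exporter — a minimal sketch with made-up tenant labels and exporter IDs, not the routing processor's actual config or code:

```go
package main

import "fmt"

// exporterFor picks an exporter for a tenant derived from authentication
// context data, falling back to a default exporter for unknown tenants.
func exporterFor(tenant string, routes map[string]string, fallback string) string {
	if e, ok := routes[tenant]; ok {
		return e
	}
	return fallback
}

func main() {
	// Hypothetical routing table: tenant -> exporter ID.
	routes := map[string]string{
		"acme":   "jaeger/elasticsearch-acme", // index prefix "acme"
		"e-corp": "jaeger/somewhere-else",
	}
	fmt.Println(exporterFor("acme", routes, "jaeger/default"))
	fmt.Println(exporterFor("unknown", routes, "jaeger/default"))
}
```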
B
I see what you mean, but, that being said: still, if you have this as an internal property on the OIDC extension, for example — okay, one possibility, and we are brainstorming a bit here, I'm not saying that this is the right thing or not...

B
The idea is that they extract this thing from the context — because they know how it is encoded inside the context and so on — and get that back, and the routing processor does the routing based on that. So still, this will not be visible. The other API that you're probably going to need is an interface on the auth extension to say: give me property foo from the auth.
A
Does
it
mean
that
the
routing
processor
needs
to
know
or
needs
or
precludes
the
usage
of
oidc
extension,
because
I
can
I
mean
I
could
have
a
local
database
local
file
database
similar
to
apache
httpd
right,
so
you
can
have
an
hd
password,
an
hd
groups
file.
B
But
but
but
for
example,
this
will
be
an
extension
correct
for
http
or
yeah.
B
...so that if that httpd extension implements an API, getProperty, and you pass a string for what type of property you want — if that's the API that we implement. So on an extension we have an API, getProperty; we pass the context and we pass the property name. If this is the API that we add to all the auth extensions...
B
Correct, that's what I'm thinking, because most likely you'll not use multiple types of authentication. Even with multi-tenancy, you probably offer one type of authentication, or maybe two, not more than that.

B
Let's investigate that path a bit before adding these, because I feel like by adding these we're getting into challenges like: is this generic enough? Do we need more? Do we need less? What does a group mean? What does a subject mean? And stuff like that, in different scenarios — in httpd and so forth.
A
And so on — and, I mean, the raw type I agree is good to have, but it's not required. Not all systems would have a usage for the raw authentication data, but subject and group membership, you know, used for authorization — that is really ubiquitous; it exists everywhere.

B
The other option — sorry — the other option is: right now we have — we're going to have this auth, and we're going to have the other thing called client. Maybe we can think of having a metadata — like multiple things — that is kind of a map, and we can extract different properties, like the auth subject and stuff like that. It's another way to think about this. The concern that I have is: right now we have client, we add auth...
A
Sure, okay, yeah, I can start a new one. So what I'm going to do, right after this call, is remove the third commit, and I'll open an issue.

B
Actually, because we already started to do this, maybe it's easier if you just extract the first two commits as a separate PR — we just merge that, based on the fact that we already agreed on it — and then rebase this one so that it contains only the third commit, and we continue the discussion there, just so we don't lose the history of the discussion.

A
All right, absolutely, yeah, I can do that, and in any case I'm going to open a new issue with a summary of what we discussed here.
M
Yeah, I looked at the issue — sorry, I somehow missed the issue yesterday, but I just looked at it. I think the issue was fixed by another PR of mine; I just put a comment there. So if this user updates to the new collector — you know, when we have the new collector release — his issue will be gone. — Okay, yeah, perfect.
M
So,
and
for
my
issue,
the
pr
the
original
pr,
so
I
I
I
don't
know
if
you
you
know
kev,
I
follow
up
the
discussion.
We
had,
I
think
the
people
last
week.
We
also
discussed
that
sorry,
the
member
steal
the
memory
limiter
one,
so
we
decide
use
the
ballast
extension.
That's
the
only
place
to
configure
the
memory
ballast
and
all
the
other
play
the
memory
limiter
will
be.
M
You
know
we
are
reading
the
the
ballast
slides
from
the
ballast
extension
only
and
we
are
going
to
deprecate
the
bullet
size
in
the
memory
limiter
right,
as
well
as
the
command
of
mine.
Okay,.
B
Also, don't hesitate to build PRs on top of other PRs and just say that it depends on that one — usually, don't get stuck on this. Git has a very powerful way to merge from your PR, and then when you rebase your PR you can rebase the other one, and so on and so forth — so you can build a chain of PRs.

B
If that helps — because in this case, for example, you could have just sent me the follow-up PR and said: hey, this is a follow-up, see the other PR.
M
Okay, yeah — the last one I just added. Because, yeah, I think all of us are working on the GA thing, right? This is an issue that I observed exists there, and, you know, no one has been actively working on this one. I'm just wondering what our plan is for this one — should we move it out from our GA board, at least? I think this was something we needed to fix, at least in discussions with Tigran and Bogdan. So is this still a dependency? Because Tigran had also mentioned that we'd have to wait for folks to provide feedback here.
B
I
I
think
this
was
a
sign
to
me,
but
I
didn't
do
too
much.
To
be
honest,
I.
I
A
There
was
a
is
that
the
one
that
contains
a
good
discussion.
A
There
is
a
discussion
about
that.
There
was
a
bug
that
I
think
I
opened,
and
then
there
was
a
discussion
about
whether
or
not
is
it.
It's
a
good
idea
to.
N
Yeah, sorry — this was supposed to be merged; I was just waiting a little bit more in case somebody showed some interest. The problem is that — well, the problem, so to speak, is that we are bringing back an existing environment variable. I mentioned that yesterday in the specification call, but today I will merge it; I think we're ready to go.
M
Yeah — and maybe I misunderstood, but I think these two are different issues, right? Because the other one, the one that I posted — the request was, I think it's from Lightstep: they're requesting if we can make gRPC and HTTP use the same port. They're probably fine having two, you know, split endpoints, but they just want to use the same port for both protocols.
B
Yeah, it was hard for me to get that right, but the biggest problem was that we were using grpc-gateway, which we no longer use — I removed that dependency. We now use plain HTTP handlers and gRPC, so I think that can simplify things, yeah. As I said, I'm happy to review a PR that does this, but as long as we collect a bit of feedback from users, I think this...
B
I did, a long time ago, a PR to split them, because I couldn't fix the mTLS...

L
...explicitness issue — I think we can close it. We've also had a lot of trouble with our single port: even though we had it working, it was full of bugs, and I don't think we should try to fix it. — Oh, okay.
A
You've run into problems because of that, and you think it's better to have different ports — that will help us in the future, also downstream on the Jaeger side, because users are going to ask for the same features as you've asked for, you know. Opening more ports is like a big warning sign for sysadmins, and I understand that, but having split ports is — you know, explicit is better than implicit — so knowing which ports which protocols are talking to, I think that's a good idea.
B
Let's finalize a bit on this — this is great feedback we got. We have a problem: this PR was blocking another issue in the specs, and I need somebody to help us with that. We got an official port, I think, you know, from whatever authority assigns the ports, yeah.

B
I don't remember the name of that authority, but I think that was for gRPC; I think we need to ask for another port for HTTP — maybe the consecutive one, if possible.
B
Okay, I will.

B
Okay, so, I think — did you get all your answers?
F
Yeah, sorry, I just added that at the end. I was hoping that we were going to talk about some of Juraci's proposal — with the last link, he created a draft basically proposing moving components between core and distributions, and like a separate components repo. And I was hoping that I could point to this issue that I had posted there, where we are basically proposing to create an API with the collector builder functionality, and with this API we're thinking about...

F
...maybe, perhaps, instead of having to host pre-built distributions in a separate repo, we would be able to use the collector builder directly: basically, a user could give the collector builder a manifest file, and then perhaps the builder might have some pre-built binaries to return back to the user through some HTTP calls, for example. So we're wondering if we could get some feedback on this idea, and maybe whether it has anything to do with Juraci's proposal.
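A builder manifest of the kind F mentions looks roughly like this — module paths and versions here are illustrative, not pinned to any real release:

```yaml
# Illustrative collector-builder manifest sketch.
dist:
  name: otelcol-custom
  description: Custom collector built from a manifest
  output_path: ./dist

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.30.0
exporters:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/jaegerexporter v0.30.0
```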
E
I know you walked us through it, so, yes, you have that.

E
Yeah, I'm asking because, you know, others have a very established ecosystem: you can publish extensions, and the registry just lists them, and then when you click one, it actually builds you a custom build with the extensions that you chose. So it's a very nice experience. Of course, it requires a lot of work.
A
Yeah
there
was
a
comment
from
from
grenville
from
f5
some
time
ago,
and
he
said
that
he,
if
this
is
the
direction
we
want
to
go,
he
can
get
some
resources
to
work
on
that
yeah.
I
We
can
help
you,
I
mean,
we've
definitely
been
looking
at
it
right
and,
and
we
were
thinking
that,
at
least
if
we
could
build
this
on
the
cli
initially
and
then
you
know
attach
an
ui
that
would
be
phase
two,
so
we
do
have
a
full.
You
know
proposal
for
this,
and
and
it
really
builds
the
capability
of
building
your
own
release
over
time.
It
really
you
know,
and
and
really
separating
out
this
dependency
of
where
the
source
is
located.
B
That would be cool, but that is just, you know, the manifest file, and you just append a couple of other things. So...

B
...it's cool to say: start from the base collector and then add a couple of other things. But I will let Juraci lead this effort — I know how delegation works — so, Juraci, thank you for leading this effort.
C
There are two things going on here, right? There's the discussion of the collector builder — adding an API to that and building a web UI on top of that API — which is all one thing. But the other part of Juraci's proposal here is the separation of core and contrib, much more strictly than it currently is: taking everything out of core...

C
...that is not the interfaces for building components — taking all of the components out and moving them into other locations, getting rid of the otelcol build target in core, not having any binary built out of core, and having all of that in the distribution manifest, which would then use the collector builder. Which I think is a great idea, and it's the way we should go. But I think that's the question Juraci's asking: do we have consensus on that part of it?
B
So,
first
all
about
me
doing
less
work
so
jurassic.
You
are
all
my
guest
to
to
take
ownership
of
the
main
and
everything.
So
I'm
more
than
happy
yeah.
There
is
already
an
agreement
to
move
couple
of
components.
I
think
alolita
based
part
of
our
stable
plan
was
already
to
move
a
lot
of
the
components
yeah
so
so
now
now
there
is
another
discussion
which
is
we
decided
on
some
components
to
be
in
core.
We
can
keep
them
in
core
and
just
have
so
jurassic.
B
If
you
give
me
a
very
simple
way
to
define
three
manifest
files,
three
ml
files
and
be
able
to
build
what
we
want
to
build
from
core,
but
even
though
the
code
is
in
contribute,
I
can
tell
you
that
tomorrow,
I'm
gonna
move
everything
to
contribute
and
just
keep
the
interfaces
in
court.
So,
but
right
now
right
now
is
not
that
easy
and
we
don't
have
a
a
simple
process
of
me
going
and
pressing
the
button
and
say
build
build
the
core,
build
the
full,
contrib
and
stuff
like
that
right
away,
correct.
A
I
actually
have
a
manifest
that
does
what
you
what
you
just
said.
So
we
we
have
a
distribution
called
observatorium
hotel
call,
which
is
not
a
real
thing,
but
it
does
serve
as
a
guinea
pig
for
for
the
builder.
Can
you.
B
A
Sure
let
me
hold
on
a
second
share
screen,
I'm
not
quite
sure
it's
going
to
work
because
zoom
always
is
picking
with
me.
Let's
see
all
right.
So
if
you're,
if
you're
seeing
my
browser,
then
you're
seeing
the
right
screen
and
in.
I
I
mean
we're
at
four
minutes,
so
if
you
need
more
time,
maybe
we
should
dive
into
this
next
time.
A
Yeah,
so
let
me
share
only
the
link
here
then
chat,
so
there
is
a
so
this
is
the
repository.
It's
not
the
only
one
we
have
a
couple
of
other
ones,
and
what
this
one
here
is
doing
is
just
consuming
a
few.
A
So
what
I
can
propose
to
do
is
I
can
I
can
build
what
I
have
in
my
mind.
You
know,
based
on
the
proposal,
I
can
build
that
scheme
under
my
own
namespace.
So
I
can,
you
know,
have
a
a
an
open
parametric,
collector
core
and
an
open,
telemetry
components
and
open
telemetry
collector
distributions.
So
what
I'll
call
core
components?
Distribution
and
you
know
make
it
look
like
the
proposal
that
I'm
that
I
wrote
and
see
what
are
the
the
problems
that
we
face
with
that?
I
Juraci, I think that would be useful, because we are also looking at — and I want to have a clear understanding of — where specifically the Prometheus components would sit, because it is important; you know, we have several considerations for those specific components.

E
I mean, otherwise we would have — Jana, I mean, if we moved everything to contrib, we would have that, but right now, you know, the receiver is still in core — so, Bogdan, that's something that we'd have to move too, as I said, if...
E
...the plan is to move everything out of core, which...

B
...makes sense, if only for the reason that I can test all the helpers and everything in one place, because that...

A
For the POC, I'm probably stripping out the components that are complex — like the ones that Jana just mentioned, the ones that depend on each other — because that would, you know...
B
I don't think, Juraci, that the manifest should in any way know about these dependencies. The manifest should be a free-for-all: you specify the receivers, the exporters — and these kinds of dependencies we know about when we define the manifest, yeah.
A
For the POC, I'm viewing the three repositories, and what I mean is that I'm not going to care about the interdependencies between the components — I'm just going to take, you know, the simple components, at all the levels that we need, just for the POC, and then we can figure out the details later. Perfect.
B
Perfect
and
again,
if
you
are
able
to
deliver
this,
that
I
can
go
and
have
these
three
manifest
files
and
every
time
when
I
do
a
release,
I'm
able
to
go
and
press
a
button
to
run
a
github
action
to
build
this.
Then,
tomorrow,
next
day,
I
promise
you
that
all
components
will
be
gone
like.
A
Yeah
I
mean
the
problem:
is:
is
really
about
the
the
migration
sci
for
that
right.
So
we
need
to
do
to
test
the
things
that
we
have
in
the
core.
Right
now
have
to
be
moved
somewhere,
and
we
have
to
make
sure
that
you
know
we.
We
have
the
proper
tests
in
place,
but
you
know
clicking
a
button
and
generating
a
distribution
should
be
easy
to
do
with
a
github
action.
A
The
quality
control
in
place
once
we
have
that
generating
the
distribution
is
easy.
B
But
you
need
to
generate
msi
debian
packages
and
all
the
rpm
packages
and
all
the
the
things
that.
A
All right, okay, so I can work on a POC. I can't promise it for next week — things are just too crazy — but I can certainly start scratching something out right away, and perhaps, I don't know, present something in a couple of weeks' time on this call.
I
Yeah, yeah, that'd be great, Juraci — and let us know how we can help, because we'd definitely be happy to help you.

I
Thanks — cool. We're at time; thanks, everyone. See you next week. Thank you. Bye, bye.