From YouTube: 2021-09-22 meeting
Description
No description was provided for this meeting.
C
All right, I don't usually leave these, but I can leave this if people want. First topic is from Wesley: community members think that our new AWS ECS health check extension should be a generic health check extension rather than an ECS-specific one, since it doesn't have many ECS-specific elements. There's discussion on a PR. We need to talk to Bogdan and other members to see whether we want to create a general one or not.
D
Yeah yeah, I'm here. So I think this new health check extension is one we just created for AWS ECS. As you can see in the PR, many members are commenting right now.
D
I think we should create a general one, because we don't have many ECS-specific things in this extension. But honestly, I believe we had talked about this with other team members before, and we wanted to create a specific one. Right now there are some disagreements, so we want to discuss it again here to see what we should do: do we want to create one only for us, or one that other members can also use?
C
Yeah. So, Bogdan or anyone else: thoughts about whether we want to make this generic, or should we keep it specific to ECS and stick with what we have?
D
Okay, so yeah, you agree with creating a general one, right? Okay, okay. So what do you think about the new name for this new health check extension? Because we already have a general health check extension.
E
I think what I would like to see is this as an optional capability of the existing extension. Just make it something that could be enabled by configuration, "use these metrics" or whatever is an appropriate name for the configuration, to have the existing health check extension assume this capability.
D
Okay, got it, yeah. Okay, in this case I may need to change some configurations and the names of the new one, and I'll let you guys see the new one.
F
Hey, so I stumbled on this maybe earlier this week, and it's something that's really interesting to us as well as to myself. I think it's a great idea, and I just wondered if some of the core maintainers could share any insight: maybe whether there's any kind of timeline or version this is attached to, or just any kind of help, or any kind of insight on what's going on with this particular issue. It's pretty active.
B
I haven't had time to review, but I will look into it. If it comes to adding extra capabilities to the collector, probably. Again, I'm guessing that this is something related to us storing the telemetry in a columnar storage format.
H
The short-term step that was given in this discussion is that you could imagine only a receiver that would receive the columnar encoding and then inject traditional data types into the OpenTelemetry Collector pipelines. But it would mean performing an aggregation in the metrics step in order to change columnar raw metric events into aggregated univariate metrics, for example. So it's less appealing to the users that have proposed this.
H
Well, this was proposed for metrics, but the appeal for Lightstep, for example, is that you could do this with spans very easily. It's the same idea.
H
We think that we could get much better compression out of spans this way, but this is also proposed as a way to get multivariate metrics. So if you have one particular event with lots of attributes and lots of measurements in one particular request, you can output all of your metric events as one row in this columnar event store and then send them all at once. That way you can draw correlations between multiple metrics that you couldn't with a univariate metric stream.
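The row-versus-stream trade-off described here can be sketched roughly as follows. This is an illustrative sketch only: the type names are made up, and the actual proposal is based on Apache Arrow and the OTLP data model, not these structures.

```go
package main

import "fmt"

// MultivariateEvent records several measurements taken during one request,
// sharing one set of attributes ("one row" in a columnar event store).
type MultivariateEvent struct {
	Attributes   map[string]string
	Measurements map[string]float64
}

// UnivariateMetric is one measurement carrying its own copy of the
// attributes, as a traditional metric stream would transport it.
type UnivariateMetric struct {
	Name       string
	Value      float64
	Attributes map[string]string
}

// Split flattens a multivariate event into univariate metrics. The
// correlation "these values came from the same request" is lost: each
// point stands alone in its own stream.
func Split(e MultivariateEvent) []UnivariateMetric {
	out := make([]UnivariateMetric, 0, len(e.Measurements))
	for name, v := range e.Measurements {
		out = append(out, UnivariateMetric{Name: name, Value: v, Attributes: e.Attributes})
	}
	return out
}

func main() {
	e := MultivariateEvent{
		Attributes:   map[string]string{"http.route": "/checkout"},
		Measurements: map[string]float64{"duration_ms": 12.5, "bytes_sent": 2048},
	}
	// One row becomes independent points, one per measurement.
	fmt.Println(len(Split(e)))
}
```

The aggregation H mentions would go the other way: collapsing many such rows into per-name aggregated streams before they enter a univariate metrics pipeline.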
B
I can see that happening. I will have to read the PR, and probably you did compare with snappy or gzip or things like that? That would be the first thing that comes to my mind, always.
H
Yeah, actually the proposal is based on Apache Arrow, which is designed for this type of encoding to be compressed very well. So it goes beyond the wire-level encoding; we're talking about a data set encoding.
F
Yeah, I'd say our main interests are around the compression, as well as storage and analytics on top of trace data; that's where we're mostly coming from. But yeah, I was kind of excited about this when I saw it, because we had been talking about a columnar encoding for OpenTelemetry data for these exact reasons.
H
It's interesting because the original proposal came out of imagining that you have an HTTP request and you want to output a bunch of metrics at the end of that one HTTP request, and you could imagine doing that using this columnar store for metrics. That's how I would do the instrumentation of the HTTP service, if I was asked.
B
We should talk about this, and definitely will. Most likely it will not be as easy as it sounds to be accepted; it could maybe be a different type, like "columnar telemetry" or whatever we want to call it, that encodes this. It will be interesting to see how this goes.
B
There's also RUM, real user monitoring, where it comes down to the same issue: from mobile or from the web, whenever you do a request, you have a bunch of measurements or things that you want to export as one thing. So that may be a good incentive to support this.
F
Yeah, I'm glad there's some attention here. Like I said, honestly, we don't even need the collector to support it; I just want the encoding to exist, so I can store OpenTelemetry in this format. But if people are talking about it and it's making progress, then I'm happy.
C
The PR is adding support for exporting internal metrics via the OpenTelemetry library. Nathan, do we have you on the call? Yeah.
G
So it's a pretty small PR. It just lays the groundwork for switching from the OpenCensus library to using the OpenTelemetry one. It actually got marked stale a couple of days ago, so I was just wondering whether this can get merged or whether I need to do some more stuff on it.
B
A flag to enable one backend or the other, but I think there are a lot of...
B
Okay, I need to look into that. Also, there is a discussion about feature gates right now that Anthony is leading.
E
Yes, yes, I think this would be a good candidate for a feature gate. But if it's okay to have just a boolean constant that can be set at build time gating it right now, which is, I think, how we're gating some of the other debug kinds of features that would eventually move to feature gates, that may be a way to get this moving forward before we have the feature gates proposal worked out fully.
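The stopgap E describes, a plain boolean constant gating the experimental path until real feature gates exist, can be sketched like this. Names are illustrative, not the collector's actual code: the constant would be flipped at build time (by editing it, or via a build tag in a separate file).

```go
package main

import "fmt"

// useOTelTelemetry gates the experimental OpenTelemetry-based internal
// telemetry. It is a hypothetical name: false keeps the current
// OpenCensus path as the default until a proper feature gate replaces it.
const useOTelTelemetry = false

// setupInternalMetrics picks the internal-metrics backend based on the
// build-time constant.
func setupInternalMetrics() string {
	if useOTelTelemetry {
		return "opentelemetry" // experimental backend
	}
	return "opencensus" // current default backend
}

func main() {
	fmt.Println(setupInternalMetrics())
}
```

The appeal of this pattern is that nothing ships enabled: a released binary behaves exactly as before unless someone builds with the constant flipped.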
B
So the only worry that I have here, Anthony, is that we don't want to add yet another thing that we will want to deprecate in the next two, three weeks after we have the other things in place.
E
Yeah, I think this also gets into our discussion about how to handle the configuration for it, since we're still not 100% sure on how to configure the telemetry. But I'll take a look at it and see if this is something that can be handled in a clean manner.
B
Yeah, all right. Sorry, I want to finish one idea. So, Nathan, sorry for all the delay, but it just happened that this PR came when we had a bunch of GA work, and it was very hard to pay attention.
F
Yep, so along similar lines: I've issued my resignation at New Relic, so I won't be at New Relic after Friday, and at the company that I'm going to, I'm not sure what the procedure is for open source contributions, so I have to explore that. As a result, I may have to close these PRs. I was just wondering what the status update is, or whether there's an intention to merge these PRs; they've also been open since the beginning of August.
F
I think it's a little bit older than Nathan's PR, actually. So this was kind of a PR, the one that we're looking at now, for h2c traffic. This is a precursor for the work required to merge the OTLP-over-gRPC and OTLP-over-HTTP ports onto port 4317, because for gRPC traffic you have to use HTTP/2, and so for non-TLS traffic you need h2c.
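One common way to share a single port between gRPC and plain HTTP, once both speak HTTP/2, is to route each request by its Content-Type, since gRPC always uses `application/grpc` (possibly with a subtype suffix). This is a sketch of that technique only; the collector's actual implementation may differ, and the full work also needs h2c so HTTP/2 runs without TLS.

```go
package main

import (
	"fmt"
	"strings"
)

// isGRPC reports whether a request's Content-Type marks it as gRPC
// traffic, per the gRPC-over-HTTP/2 convention ("application/grpc",
// optionally followed by "+proto", "+json", etc.).
func isGRPC(contentType string) bool {
	return contentType == "application/grpc" ||
		strings.HasPrefix(contentType, "application/grpc+")
}

// route picks which OTLP handler should serve a request arriving on the
// shared port. Handler names here are illustrative.
func route(contentType string) string {
	if isGRPC(contentType) {
		return "otlp-grpc"
	}
	return "otlp-http"
}

func main() {
	fmt.Println(route("application/grpc"))      // gRPC handler
	fmt.Println(route("application/x-protobuf")) // plain OTLP/HTTP handler
}
```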
F
The failures are on the contrib tests for some reason, but yeah, so this is kind of a precursor for the actual full work required to combine the gRPC and HTTP ports.
B
Yeah, so because of this emergency situation, can you ping me directly on Slack after the meeting is done? I will pay full attention over the next two or three days and get as much done as we can, because otherwise I would have asked you to wait a couple of weeks until we are closer with the GA work. But because of the situation, let's prioritize this.
F
Yeah, that sounds great. Also, if there's still interest, I can work on finding a different owner at New Relic to continue pushing this code forward. It's obviously licensed under Apache 2, and I'd love to see this work continue, so if someone wants to pick up this work, you're totally free to do so. Yeah, and the only other thing was the cumulative-to-delta processor improvements. I know that was something that was discussed so far.
F
What we've been doing is shipping a fork to our customers to support them, but we haven't been able to get traction on that. I'll ping you directly then, and I guess we can work through some of this stuff to try and get this in.
F
Yeah, that sounds great. And yeah, just a final word: I really appreciated working with everyone. This is a great group of folks, so thanks for all the time.
C
All right, next up is from Sean: review and revisit the remote configuration proposal, and the related config remote reload without rebuilding the entire pipeline.
K
Awesome, awesome. It was kind of surprising, because it's just so open; I was like, no one's taken advantage of this yet? So I really just wanted to bring this up, and I'm curious to know what everyone thinks in terms of overall traction and interest in this as a proposal. I just really want to understand what would be required to further the conversation here and figure out what is really necessary to move forward.
K
Okay, I have not... I just gravitated towards this proposal draft. I had assumed that something wasn't, you know, underway in this space. So it's like: oh, is the parser provider and the config source work actually aimed at accomplishing remote configuration, or is it simply a way that we could shim it in?
B
So the parser provider is the interface that the collector uses to load the configuration from any source. Right now it loads from the flag passed as --config, but you can build your own custom collector that does different things. If you are asking me whether the core distribution will support multiple formats, I don't know when that will happen, but we will provide the entire framework, all the interfaces required, for contrib or different distributions to give you this option.
K
Now, in terms of: does it just do that on initialization, or could we implement like a polling or watching style? Because there...
B
There is a watcher on that interface. So if you implement the watch part, an extra interface, we will watch for that and we will reload the entire collector. Right now we shut down everything and restart all the components; that's how we do it. So yeah, there is a watch interface. If you look into the directory where the parser provider is listed, you will find the watchable interface that you can implement.
K
Interesting, wonderful. Well, this helps me greatly in figuring out what I can do next in order to learn more and see if that can facilitate what I'm after. I am curious what would be the benefit of actually doing the second piece here, which was the remote reload without rebuilding the whole pipeline.
B
That would be a great feature, but in terms of my personal priorities, it would come way after giving the capability of reloading the entire configuration first.
K
Gotcha, okay. Wonderful! Well, thank you very much; that's all I wanted to accomplish with that item.
C
Awesome, thanks Sean, and welcome to the meetings. Next up is PR guidance for adding metric support to the existing Tanzu Observability exporter. Looks like it's from Glenn and Travis at VMware.
I
Yeah, so we're providing a metrics exporter for Tanzu Observability, and we want to support all the metric types, like gauge and sum and histogram and summary. We have code for the gauge metrics, and that's like 572 lines of code, and we've read that a pull request should be like 500 lines of code, so we're thinking of breaking this into several smaller pull requests.
I
But the trick is that our exporter is already live, because someone already wrote code to do traces but not metrics. So our concern is that as we do all the metrics, it'll be live and it won't be complete.
B
So let me explain. First of all, you most likely have to do a transformation, correct, between OpenTelemetry data types (OTLP data) and your format, the one you're going to put on the wire, correct? If that's the case, you can send a PR that does only the transformation, with unit tests, and is not wired into anything: for gauge initially, then for sum, then for histogram, then for summary if you want, and so on. Once you have all of this transformation in place, unit tested, you can have a final PR that plugs everything in, with unit tests for that PR. And hence you've just got a story of four plus one, five PRs.
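The split B suggests works because each transformation can be a pure function with no I/O, reviewable and unit-testable on its own before anything is wired into the live exporter. A minimal sketch, with made-up stand-in types rather than the real OTLP or Tanzu Observability structures:

```go
package main

import "fmt"

// OTLPGauge is a stand-in for an OTLP gauge data point.
type OTLPGauge struct {
	Name       string
	Value      float64
	Attributes map[string]string
}

// TOMetric is a stand-in for the exporter's destination wire format.
type TOMetric struct {
	Name  string
	Value float64
	Tags  map[string]string
}

// transformGauge converts one OTLP gauge point to the destination format.
// It has no I/O, so its PR carries only this function plus unit tests; a
// later PR wires it into the exporter's push path.
func transformGauge(g OTLPGauge) TOMetric {
	tags := make(map[string]string, len(g.Attributes))
	for k, v := range g.Attributes {
		tags[k] = v
	}
	return TOMetric{Name: g.Name, Value: g.Value, Tags: tags}
}

func main() {
	m := transformGauge(OTLPGauge{
		Name:       "cpu.usage",
		Value:      0.42,
		Attributes: map[string]string{"host": "web-1"},
	})
	fmt.Println(m.Name, m.Value)
}
```

Sum, histogram, and summary would each get the same treatment in their own PRs, with the final PR connecting all four transforms to the exporter.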
I
Okay, okay. So just to make sure I understand: we write transformers to transform OTel metrics to Tanzu Observability metrics, and then, once we have all that done, we do one PR to actually send the Tanzu Observability metrics on to Tanzu Observability.
C
Travis, all right. Now we have: someone noted that the OTel Collector's most recent release was seven days ago, but they couldn't find a 0.35.0 image tag on Docker Hub. Also, the image names are a bit confusing in the OTel Docker Hub: what's the difference between otel/otelcol versus otel/opentelemetry-collector? Yeah, no kidding.
B
So it probably is on me to fix that right now. But that's the difference, and that's the reason why we have two of them. We will get rid of the otelcol one very soon.
D
Sorry, I have another question for Bogdan about the new idea of turning our new health check extension into part of the current one. I want to figure out what I should do: add a configuration to the existing one to use our new functionality and make it optional? Is that a possible way to do it?
B
I think Anthony had the same idea and he has something in mind; you may want to check with him first. But essentially what I envision is: inside the configuration there will be a "report exporter error = true" or something like that. If that is set, then you look for the metrics as well; otherwise you don't.
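A sketch of the opt-in B describes. The option name `report_exporter_error` is just his off-the-cuff suggestion from this discussion, not a finalized setting, and the types below are stand-ins, not the real extension's code:

```go
package main

import "fmt"

// Config is a stand-in for the health check extension's configuration,
// with the new capability behind an opt-in boolean.
type Config struct {
	Endpoint            string `mapstructure:"endpoint"`
	ReportExporterError bool   `mapstructure:"report_exporter_error"`
}

// healthSources lists what the extension would consult for this config.
func healthSources(c Config) []string {
	sources := []string{"pipeline-status"}
	if c.ReportExporterError {
		// Opt-in path: also watch exporter error metrics, as the new
		// ECS-motivated functionality would.
		sources = append(sources, "exporter-error-metrics")
	}
	return sources
}

func main() {
	fmt.Println(healthSources(Config{ReportExporterError: true}))
	fmt.Println(healthSources(Config{})) // default: old behavior only
}
```

With this shape, existing users see no change unless they set the flag, which matches E's earlier suggestion of making the new behavior an optional capability of the existing extension.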
E
What I'm thinking as well, Chandra: if you want to reach out to me after the call, we can talk through the options.