From YouTube: 2022-03-03 meeting
B: Yeah, I know, right, there's that. Or I like the Glengarry Glen Ross approach: just always be selling, you know, or always be closing. I guess so, just to always be busy, I guess.
B
Yeah,
I
wonder
how
long
this
means
to
go
for
today
it's
pretty
tight
on
the
agenda,
but
I
guess
oh,
we
are
past
the
hour,
so
yeah
we'll
start
it
up.
If
you
are
on
the
call,
please
make
sure
you
add
your
name
to
the
attendees
list
and
in
the
link
doc,
you
should
find
it
in
the
meeting.
Invite
if
you
have
anything
you
want
to
talk
about.
Please
add
this
to
the
agenda.
I
did
add
an
extra
section.
B: We included it last time: a section for cool and interesting things that you've done with OpenTelemetry. Turns out I've got something to share, but yeah, if you have something, add it there, and I will start sharing my screen.
B: Cool, so yeah, it should be a pretty quick meeting given the current agenda. The only question that I had is: what did the other maintainers and people on the call think about getting a release out? I had started this milestone a while ago.
B: It has a lot of great things included that we have accomplished in the past few weeks: a lot of cleanup for the tracing SDK, and implementation of support for environment variables. And the one thing I was going to add right before this meeting was that we do have the metrics API that just landed, so that would probably be included as a metrics release.
B: So I kind of wanted to ask about that, Aaron: whether you thought the metrics packaging, the experimental metrics stuff in the versions.yaml, would be included in this release.
C: I'll be honest, I'm not 100% sure what is covered by that right now. I'd have to look at it. But the one place where I think we do not have parity with the old API is the global: global has been removed, just so that we could actually get it out the door. I don't know if we want to add that back in before we release a major version, or do we just want to get this out the door and hopefully...
C: Yeah, that's kind of my feeling too. It's something that we need, but did we just hold off on the metrics portion being updated until...
B: Yeah, and I think that's totally fine. We can just do that.
B: The metrics can be on their own cadence as well; that's the whole point of the versions.yaml file in multimod.
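For anyone unfamiliar with the setup: multimod drives releases from a versions.yaml file that groups modules into sets, where each set is versioned on its own cadence. The set names and version numbers below are illustrative only, not the repository's actual current contents:

```yaml
# Sketch of a multimod versions.yaml: each module-set releases independently.
module-sets:
  stable-v1:
    version: v1.4.0          # illustrative version
    modules:
      - go.opentelemetry.io/otel
      - go.opentelemetry.io/otel/sdk
  experimental-metrics:
    version: v0.27.0         # illustrative version
    modules:
      - go.opentelemetry.io/otel/metric
```

This is how the experimental metrics modules can ship on a different version line than the stable tracing modules.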
C: I created an issue for replacing global, and I'm probably about 60% done with a replacement for it. It's just that adapting what we did prior to the new API is surfacing a lot of edge cases.
B: Yeah, this was the hardest one in the global package beforehand, so I can only imagine it's gotten just as complicated or more. Yeah, okay, that sounds good. I was looking for it; for some reason I thought this was a PR. Let me capture some of this.
B: Cool, yeah, that sounds like a plan. If you have time, also, anybody on the call: a great way to get involved in the project is reviewing, and when Aaron has something out, I think that's a good one to get some eyes on.
B: ...times, which is rough, but it's just what you need to do, because you need an implementation, so yeah.
B: Cool, well, that was the agenda. That was five minutes, so we have plenty of time. I definitely noticed there are some new faces on the call we could reach out to, but let me just pause and see if everybody has some items they wanted to bring up that aren't on the doc.
B: Cool, yeah, we can go ahead, David.
E: Sorry, I was just going to say, for those of us that are maybe less connected than others on the call: so the API merged. What are, like, the next steps that are coming? I know you said we're going to reintroduce globals. Are we going to work on the SDK next? Is that the plan, for the next week or two?
C: So for the API, I hope to get a PR out by tomorrow. I need to put a little more testing through it, but the next step is the SDK, and that's going to be a little bit more controversial, because the only prototype SDK we have right now is based off of Go 1.18.
C: So it may be a bit of time before we make any kind of breaking changes with the SDK, and the current plan is that you'll have to upgrade to 1.18 to get that support. So it may be that whatever version gets released here is a little bit more long-lived: the new API with the old SDK.
B: I'll create a milestone to track the SDK work after this, but yeah, it looks like the API one should be done. As for the issues still remaining, they're essentially POCs and exploratory design ideas, so we could probably close a lot of those. But yeah, we should try, I think in the next week, to get a milestone out for the SDK. One blocker there is Go 1.18, which was supposed to come out, I think, last month or the month prior. So what's going on there?
D: Yeah, any day now; I've been hearing that every day. I believe they did actually ship an RC, and it's usually two weeks between an RC and a release. So unless something has gone wrong, next week, I think.
B: Okay, yeah, cool, that makes sense. For those on the call who aren't aware: our plan for the SDK was to use generics. That's why we're waiting for 1.18.
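As context for why the SDK prototype targets Go 1.18: generics let a metrics SDK share one aggregation implementation across both integer and floating-point measurements. A minimal sketch of that idea; the type and method names here are invented for illustration, not the actual SDK API:

```go
package main

import "fmt"

// Number constrains measurements to the two numeric value types a metrics
// API typically records. (Name invented for this sketch.)
type Number interface {
	~int64 | ~float64
}

// sumAggregator accumulates measurements of either numeric type with a
// single generic implementation, instead of duplicated int64/float64 code.
type sumAggregator[N Number] struct {
	total N
}

// Record adds one measurement to the running sum.
func (a *sumAggregator[N]) Record(v N) { a.total += v }

func main() {
	ints := &sumAggregator[int64]{}
	ints.Record(2)
	ints.Record(3)

	floats := &sumAggregator[float64]{}
	floats.Record(1.5)
	floats.Record(2.5)

	fmt.Println(ints.total, floats.total) // prints: 5 4
}
```

Before 1.18 this would have required two near-identical aggregator types or interface-based boxing, which is part of why the release is a blocker here.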
B: Yeah, and we weren't really looking to do that, for a lot of reasons I've already pointed out, but the possibility is at least six months off.
C: That would be an implementation detail of the API, which I think would be an acceptable change, since that all should be internal.
B: Cool, so why don't we just go ahead and pause? I see some new faces on the call, if you guys want to introduce yourselves. Again, I don't want to put people on the spot, so if you don't want to, don't worry about it and we can just keep moving on. But Hill, I do see you on the call, and you just unmuted, so I know you well, as you're a fellow Splunker. Maybe you want to jump in?
B: Cool, I know we met Sam last time. Emilio, I see you are on the call as well.
B: If you don't want to say hi, that's cool too, I guess.

"Hey, real quick. Hey, yeah, I'm kind of new around here. I'm just calling in from the New Relic side of things, just getting caught up to speed with you guys, and you know, hopefully I'll be contributing sooner or later."

B: Awesome, yeah, definitely. For everyone on the call who's looking to contribute: if you need some help finding issues or something like that, Slack's a good place to reach out. There's a lot of work to do.
B: Also, there's a "help wanted" tag that I think we still keep pretty frequently updated, so yeah, those are really great starting points among our issues.
B: Cool, yeah, this might be short and sweet; there's only one other thing. I always love talking about user stories and any sort of use case of OpenTelemetry, to try to show that we're doing things that are impacting people, which, it turns out, we are. And so I have a user story, but I definitely want to pause, because I've been doing a lot of the talking. Does everybody else have some cool use cases from the past week or two, or just new ones that haven't been shared?
B: I created this span processor over the past week. It's in a project, it's called "flows", really, but the idea is that it was inspired by David, actually, though it didn't actually suit his use case, so I don't know whether he needs it. I've already used it a few different times. I think it's kind of cool; maybe it's not the most useful thing, but hopefully it inspires some people.
B: The idea is that you have some sort of span processing pipeline, but there are really no metrics for that pipeline currently, one reason being that we don't have metrics, so we would, you know, bake that into our tracing SDK. But it would be kind of cool, and this is what this does: it wraps...
B: You can wrap those span processors, because then you can see when they're getting new spans or when they're getting spans that have ended, and then produce some sort of metrics, in this case some Prometheus metrics. So you just essentially wrap something like a batcher, by just replacing the standard trace package with the flow package here, and then you can do, you know, very simple span processors just by wrapping with your own tracer options here, but then it'll just expose these spans-total metrics right here, and it's kind of cool.
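The wrapping idea described here can be sketched roughly as follows. This is a simplified stand-in, assuming a trimmed-down processor interface rather than the SDK's real sdktrace.SpanProcessor, and the type names are invented for illustration; the actual project exposes the counts as Prometheus metrics rather than plain integers:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Span is a minimal stand-in for the SDK's span types.
type Span struct{ Name string }

// SpanProcessor mirrors the shape of a tracing-SDK span processor,
// trimmed down for illustration.
type SpanProcessor interface {
	OnStart(s Span)
	OnEnd(s Span)
}

// countingProcessor wraps another SpanProcessor and counts the spans that
// flow through it; these counters are what would back the exposed metrics.
type countingProcessor struct {
	wrapped        SpanProcessor
	started, ended atomic.Int64
}

func (c *countingProcessor) OnStart(s Span) {
	c.started.Add(1)
	c.wrapped.OnStart(s)
}

func (c *countingProcessor) OnEnd(s Span) {
	c.ended.Add(1)
	c.wrapped.OnEnd(s)
}

// noopProcessor stands in for a real batcher/exporter.
type noopProcessor struct{}

func (noopProcessor) OnStart(Span) {}
func (noopProcessor) OnEnd(Span)   {}

func main() {
	p := &countingProcessor{wrapped: noopProcessor{}}
	for i := 0; i < 3; i++ {
		s := Span{Name: fmt.Sprintf("op-%d", i)}
		p.OnStart(s)
		p.OnEnd(s)
	}
	fmt.Println(p.started.Load(), p.ended.Load()) // prints: 3 3
}
```

Because the wrapper satisfies the same interface as what it wraps, it can slot in front of any existing processor (a batcher, an exporter-backed processor) without changing the rest of the pipeline.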
B: I don't know, I was just playing around with it a little bit while working on some troubleshooting guidelines and debugging stuff, to see, you know, metrics flowing. The original thing that David had brought up was that it'd be cool if you then built, like, an alert off of this, so that when, you know, your service just really starts dumping traces and nothing's actually reaching the endpoint, you do something about it. And I'm guessing I understand David's pain.
B: Point is, it's hard to do that, because a lot of the traces that you're going to get in the end are going to be reported by the collector or some other endpoint, which may be on a different scrape schedule from this. So it's really hard to compare those and get some sort of meaningful metric out of this, but I think it's kind of a cool proof of concept, and honestly it could be expanded a lot further.
B: I thought of a lot of different ideas that you could add to this, but this is just kind of a bare-bones setup. So yeah, that's something I put together. There's a link there if you wanted to take a look at it. Oh, and if you want, there's, like, a full example using Docker Compose, and I've been using this a lot to actually do deep work on debugging and, like, testing the logging stuff.
D: Very cool, yeah. The collector does something similar with the zPages span processor that it has, which keeps track of spans that are currently active, so you can go and look at them. I forget if it also records metrics about spans starting and stopping, but that would be another thing that might be useful there.
B: Yeah, I thought I also saw in the Slack message something about, like, a span metrics plugin or something for the collector, so I was going to take a look at that as well. I think that is sending OTLP, though, so honestly, one of the cool things I thought about was to stop sending Prometheus metrics and just send OTLP metrics with this. But without a lot of the metrics pipeline there, I'd have to build it myself. So yeah, again, I spent just a few hours working on this.
B: Cool. Also, I think span processors are kind of a cool place where we could do a little bit more work once we do get metrics; I think there are some really great opportunities there. I know we're logging right now for the batch span processor, but I was looking at it, and yeah, being able to report some metrics around the state of that is going to be really helpful in debugging, let alone performance optimization. I think that's going to be really cool to look at. So, awesome; well, short and sweet.
F: Tyler, I did make the crosslink updates, so I'm poking you again, just a reminder.
B: Yeah, thanks for the reminder. I should hopefully have some time; I guess this week's almost done, but yeah, I'll try to take a look at that again. There's been a lot of stuff, but that's on my list. Thanks, I appreciate it.

F: Yep, thank you.
B: Well, cool. Well, thanks everyone for spending the time and making it happen today. I appreciate everyone dialing in. I'm guessing this was a pretty slow meeting because we've gotten a lot done in the past week, especially around the metrics, so we're definitely excited to get back some time, and I think we can end it here.
B: Everyone, thanks again for joining, and we'll see you all next week, or via Slack. Bye. Thank you. Thanks.