From YouTube: 2021-03-31 meeting
B
So it seems like most of the inertia and effort is being put towards the integrations and instrumentation SIG, yeah.
A
So what I was going to say is, it might make sense at some point to consider merging the SIGs, or at least the SIG meetings, because the past couple of weeks it seems like it's been pretty quiet here.
B
No, it's true.
B
Any update on metrics?
A
I mean, I've been kind of halfway following along. I think they're supposedly getting close to having the data model solidified so that it'll interoperate with Prometheus better.
A
And they're redesigning the API from the ground up, although I think it may end up being very similar. But they're doing it very deliberately and slowly, so right now I think they're just making sure they have a really solid definition for the counter API. Then I think the goal is to have the counter API solidified, and an SDK definition of how it's supposed to work solidified, and then kind of build up from there.
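To make that concrete, here is a minimal sketch of what application code against a solidified counter API can look like, using the builder-style shape that OpenTelemetry Java eventually stabilized; the metric name and attribute key below are made up for illustration, not anything decided in this meeting.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

public class CounterExample {
  public static void main(String[] args) {
    // Obtain a Meter from whatever SDK has been registered globally.
    Meter meter = GlobalOpenTelemetry.getMeter("example-instrumentation");

    // Declare the counter once; the SDK decides aggregation and export.
    LongCounter requests = meter.counterBuilder("http.server.requests")
        .setDescription("Number of HTTP requests handled")
        .setUnit("1")
        .build();

    // Record a measurement with attributes (dimensions).
    requests.add(1, Attributes.of(AttributeKey.stringKey("http.method"), "GET"));
  }
}
```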
B
That's good. I also saw that there was some interest in standardizing profiling data.
A
Yeah, there's definitely starting to be some talk. Jana from AWS is, I think, starting to lead the discussion about how we want to model that data.
A
For all of us old-school APM folks, it's kind of like the bread and butter, because we did all this stuff before distributed tracing was around at all. What there was, was kind of what we now think of as profiling.
B
Well, just from the amount of mileage and benefit we've seen on the Datadog side from our product, our Java agent, we've been able to find a lot of areas for optimization just by our customers using our profiling products. So I definitely see value in having a profiling thing.
A
Yeah, absolutely. It's a little tricky, though, to come up with a generic data model for profiling that works across all languages. So I think that's where Jana is trying to drive things, trying to figure that out. And since she was at Google, where they did a lot of profiling work, I think she has a lot of experience in sorting that out.
B
Yeah, I think for us we've mostly just standardized around, you know, the JFR format for Java stuff, and then the more unified one for other languages. What's it called? I forget.
B
Dang it. Anyway, there are basically two formats that we've kind of adopted.
B
I'll mention it and see if anyone has some spare cycles. Hey Jason, hey, how's it going?
B
Were you thinking about pprof? Is that the other one? Yeah, pprof, yeah, thanks, yep. And then on the JFR side, you guys are just sending whole files to the back end, I think, right? Something like that, yeah. I haven't really worked on that, but I believe so, yeah.
B
JFR wasn't really designed to be sent as a stream of data. Fortunately, I think the Datadog folks have submitted changes to the JDK to actually make it streamable. Yep, I've seen all that work. Yeah, Java 14 plus has streaming, and the streaming has not been backported.
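For reference, the streaming support mentioned here is JFR event streaming (JEP 349, JDK 14), exposed through jdk.jfr.consumer.RecordingStream, which lets a process consume events as they are emitted instead of shipping whole recording files. A minimal in-process consumer, using the standard jdk.CPULoad event and its machineTotal field:

```java
import java.time.Duration;
import jdk.jfr.consumer.RecordingStream;

public class JfrStreamingExample {
  public static void main(String[] args) {
    try (RecordingStream rs = new RecordingStream()) {
      // Sample CPU load once per second and handle events as they arrive,
      // instead of parsing a finished .jfr file after the fact.
      rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
      rs.onEvent("jdk.CPULoad", event ->
          System.out.println("machine CPU: " + event.getDouble("machineTotal")));
      rs.start(); // blocks this thread while the stream is running
    }
  }
}
```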
A
So, especially when we hit the Unix epoch turnover, whenever that is, in ten years or something.
A
Let me take a look, actually. Let's go look at my notifications.
A
And hopefully the other thing that we want to get in is the throttling of exporter logs when your back end is down, because right now it'll basically spam the logs every time it tries to send something, which people have complained is too noisy. Like, if the local collector in Kubernetes goes into a crash loop, the logs just go crazy with log spam.
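As a hypothetical sketch of the kind of throttling being discussed (not the actual exporter code): log the first failure right away, then suppress repeats and emit a summary at most once per interval. The class and method names here are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.logging.Logger;

class ThrottlingLogger {
  private static final long INTERVAL_NANOS = 60_000_000_000L; // at most one warning per minute

  private final Logger delegate;
  private final AtomicLong lastLogNanos;
  private final AtomicInteger suppressed = new AtomicInteger();

  ThrottlingLogger(Logger delegate) {
    this.delegate = delegate;
    // Start "one interval in the past" so the very first failure is always logged.
    this.lastLogNanos = new AtomicLong(System.nanoTime() - INTERVAL_NANOS);
  }

  void logExportFailure(String message) {
    long now = System.nanoTime();
    long last = lastLogNanos.get();
    if (now - last >= INTERVAL_NANOS && lastLogNanos.compareAndSet(last, now)) {
      // Winner of the CAS logs once, folding in how many messages were dropped.
      int skipped = suppressed.getAndSet(0);
      delegate.warning(skipped == 0
          ? message
          : message + " (" + skipped + " similar failures suppressed)");
    } else {
      suppressed.incrementAndGet();
    }
  }
}
```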
A
The ArrayBlockingQueue actually eats a lot of CPU with the way that it does its polling. So it's not expensive if you're doing normal work, but when you're working on a tracing platform, you don't want to spend CPU cycles there, locking and unlocking the queue a lot to get stuff in and out of it.
A
That's a little bit more complicated, but a little bit more sophisticated, and it reduces the amount of polling on the queue that happens as well.
B
Cool.
A
So the CPU usage is definitely way, way, way lower, especially when you use JCTools rather than the built-in ArrayBlockingQueue; it's much, much more efficient from a CPU perspective. And is that being defaulted to now, then? Yeah, I mean, that's the only implementation with the batch span processor; we shaded in the JCTools queue.
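A rough sketch of the pattern described here, using JCTools' MpscArrayQueue (many producers, single consumer): application threads offer without blocking and drop on overflow, while a single exporter thread drains in batches. The element type, capacity, and batch size below are placeholders, not the batch span processor's actual values.

```java
import org.jctools.queues.MpscArrayQueue;

public class QueueSketch {
  public static void main(String[] args) {
    // Bounded, lock-free multi-producer/single-consumer queue.
    MpscArrayQueue<String> queue = new MpscArrayQueue<>(2048);

    // Producer side, called from many application threads: offer never blocks;
    // when the queue is full, drop the element instead of stalling the app thread.
    if (!queue.offer("span-1")) {
      // e.g. increment a dropped-spans counter here
    }

    // Consumer side, a single exporter thread: drain up to a batch of elements
    // without taking any locks.
    queue.drain(span -> System.out.println("exporting " + span), 512);
  }
}
```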