From YouTube: 2021-01-06 meeting
A
I don't like that way of governing, but I will do so if necessary.
C
Cool, we actually head over there more frequently than I expected. I grew up near Philly and then switched to the other side, the dirty side of the state according to Philly. But the secret is, Pittsburgh cleaned itself up, because pretty much all the steel business dried up, and then they planted trees and stuff, and yeah. That was actually...
C
No, I would say health insurance workers. What's...

C
Terrifying, honestly. Yeah.
A
Where'd you end up going, if I can ask? I went to Amherst College in western Mass: a super tiny little college in western Massachusetts. Yeah.
C
I have a friend who graduated from there. What year? That's a great question! I don't want to think about it, because he's a lot younger than me. Okay, so I probably would have run into him, since I graduated in 92. Yeah, wait, hold on.
C
I need to check, because... well, anyway, he's one of my old open source coding buddies. He's now with Twitter, but he's like 10 years younger than me, or something, or like eight years. So I don't like to think about that, like how old I'm getting, you know what I mean?
A
Well, you know, I have a daughter who has just finished applying to colleges. So, you know, that's where I am.
C
Nice, nice. We just hit the teenage years, so that's where we are.
A
Yeah, so I guess we should get started; it might be just the three of us. Jason should put his name on the attendee list so he gets credit, for whatever that's worth. We don't have much on the agenda except kind of a happy new year. Here's where we are: the GA burndown. There are still nine P2 issues in the two columns.
A
This GA project is where I've been tracking what needs to be done. So this topic here, which is currently assigned to Carlos: the OpenTracing and OpenCensus backwards-compatibility requirements.
A
It's a very old issue; you can see it's number 302, and we're up into the 2000s now. I think what would actually be the most helpful would be driving, or at least getting, some OpenCensus compatibility requirements into the specification. I do not believe that anything has been written about what our behavior should be with respect to OpenCensus at this point.
C
Right, okay. So, I don't know how public this has been: there have been some pull requests in OpenCensus, in Java and Go, with an initial kind of compatibility layer, where the theory is that instead of OpenTelemetry being compatible with OpenCensus, we make OpenCensus compatible with OpenTelemetry, and then we migrate everyone to OpenTelemetry. You replace your SDK in OpenCensus with OpenTelemetry, and suddenly things start to work. Was this the idea here, like the other direction of...
A
I think the idea is: since the project was formed as a way to merge the two, we should specify what it means to support OpenCensus, because I don't think it's actually well determined anywhere what it actually means, now that we're migrating. So if what it sounds like you are doing with OpenCensus is the strategy you want, then that should probably be written down somewhere, so people are aware.
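The migration strategy being discussed (make OpenCensus compatible with OpenTelemetry, then swap the SDK underneath) amounts to a dependency swap for users. A rough sketch of what that could look like in a Gradle build; the artifact coordinates and versions here are illustrative assumptions on my part, not something confirmed in the meeting:

```groovy
dependencies {
    // Existing instrumentation keeps calling the OpenCensus API as before.
    implementation("io.opencensus:opencensus-api:0.28.0")

    // Instead of depending on opencensus-impl, the OpenTelemetry-backed shim
    // provides the implementation, routing OpenCensus calls into the
    // OpenTelemetry SDK. (Coordinates and versions are assumed, not verified.)
    implementation("io.opentelemetry:opentelemetry-opencensus-shim:0.14.0-alpha")
    implementation("io.opentelemetry:opentelemetry-sdk:0.14.0")
}
```

With this swap, instrumentation written against OpenCensus keeps working while the data flows through the OpenTelemetry SDK and its exporters.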
C
Yeah, yeah, more than happy to do that. Yes, will do. I just commented on the issue, so hopefully you can assign it to me. I don't think I can assign things to myself; I forget how GitHub works sometimes, but...
B
Does that approach, Josh, the approach that you're describing (and I think it's nice to meet you, I don't think we've ever met) diminish the value of the shim, the old OpenCensus shim?
C
It's actually just kind of meant to improve the shim. Let's see: OpenCensus instrumentation working with the OpenTelemetry library, the library being OpenCensus. Yes, it's basically just that: the compatibility that I was going to push for was just this shim. Now, there's a notion of, if you use the OpenTelemetry shim on OpenCensus, should all of your metrics and traces be exactly the same coming out the gate, exactly one to one? And I think that's actually not practical.
C
So I wasn't going to promote that, because, also, the user actually has to take action to use the shim, right? It's not going to come for free. So I think we're okay there, but yeah. It would be this shim bit. Wait, you have the shim on the... yeah. This is Zoe's; he wrote most of its code, I believe. Yes, yeah, that's the current plan. So Zoe, and David Ashpole for Go, have been contributing those two. We're going to try to expand that for other OpenCensus
C
APIs as time allows, based on, like, priority and migratability and all that kind of stuff. But yeah, I can document all this up, and what the plans are. One of the things that's interesting, and I think will be a hot topic, is that we can't actually do a good shim for metrics, because of views.
A
Well, we're not... I mean, metrics is still... I mean, views is going to be a big topic, hopefully next week, about how we get to something at least OpenCensus-light. But the model, I know, is actually pretty significantly different, so yeah, I can understand it's going to be tricky to figure out what to do there.
C
Yeah, the issue is more, from an OpenCensus standpoint, where do you hook? Do you know what I mean? Do you hook as an exporter, or do you hook straight up to the API in some fashion, as an SDK? They don't have the same level of hooks that OpenTelemetry does. This would be way easier if they had OpenTelemetry's design, but they don't, so yeah.
A
Yeah, the metrics thing is, yeah, a big open question: how to properly manage that. I actually have a question for you related to this. So, very recently, you may have noticed that we have started tagging the metrics, and things that are not slated for being stable at 2.0, with a "-alpha" in the version numbers.
C
So I think we should fragment it, unfortunately: there should be one for traces, which is stable, and one for metrics, which is unstable, because I'd like to change how the metrics part works, as OpenTelemetry metrics stabilizes, to be better (it's not nice right now), but the tracing one is actually pretty good.
A
You
create
an
issue
in
the
java
project
to
do
that
split,
so
we
don't
lose
traffic.
We
should
that's
something
that
sounds
like.
We
really
need
to
do
before
1.0
and
we're
hoping
one
point.
I'm
I
mean
I'm
still
hoping
we
have
a
hope
of
a
chance
in
hell
of
1.0
by
the
end
of
the
month.
A
Cool
all
right,
so
I
wanted
to
give
just
a
status
update
on
the
work
that
monologue
might
have
been
doing
configuration
for
people
who
have
maybe
not
been
keeping
track
of.
What's
going
on
over
the
last
few
weeks.
The
big
the
big
change
is
removing
all
of
the
configuration
code.
A
All
of
the
kind
of
auto
configuration
code
did
not
programmatic
configuration
into
its
own
module,
so
that
is
all
the
spi
and
the
environment
variable
and
system
property
configuration
configuration
is
going
to
be
moved
into
a
new
module
that
is
specifically
built
for
auto
configuration.
A
So
all
of
that
code
will
live
in
one
place
rather
than
being
kind
of
fragmented
and
done
inconsistently
across
various
pieces
of
the
sdk,
so
that
there's
a
pr
that
I'm
getting
ready
to
merge
from
monorail
to
create
that
module
and
then
the
next
step
will
be
to
remove
all
the
other,
all
those
other
bits
and
pieces
that
are
still
lying
around.
A
So
that's
kind
of
where
we
are
with
that
and
once
those
things
have
done
are
done,
we've
got
that
cleaned
up.
I
think
we're
going
to
be
really
close
to
having
a
stable,
supportable
sdk.
C
Our
yeah
go
ahead,
all
right,
so
so
we
had
that
back
and
forth
on
parsing
of
units
after
numbers
yeah
yeah.
If
I
were
to
contrib,
so
I
just
want
to
make
sure
if
I
were
to
implement
that
and
contribute
a
parser
for
like
duration
and
maybe
megabyte
size.
Would
that
be
a
distraction
and
slow
you
down
or
would
it
be
welcome.
A
I
think
it
this
is
something
that
sounds
like
it
belongs
in
the
auto
configuration
module.
So
as
long
as
it's
kind
of
over
there,
I
don't
think
it'll
be
a
distraction
at
all.
I
think
it
I
mean,
I
don't
think,
like
you
said
it's
not
a
huge
amount
of
code
yeah,
so
I
think
that
would
be
fine.
I
think
this
is
one
of
the
advantages
of
moving
this
kind
of
code
into
a
separate
module,
because
it
doesn't
have
to
interfere
with
the
rest
of
the
sdk
and
api
stability
cool.
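For reference, the kind of parser being offered here (units after numbers, for durations and sizes) really is small. A minimal sketch; the class name, method names, and supported suffixes are assumptions for illustration, since the contribution didn't exist yet:

```java
import java.time.Duration;
import java.util.Locale;

/** Sketch of parsing "500ms"/"5s"-style durations and "4mb"-style sizes. */
public final class ConfigValueParsers {

    /** Parses values like "100ms", "5s", "2m", "1h"; a bare number is milliseconds. */
    public static Duration parseDuration(String value) {
        String v = value.trim().toLowerCase(Locale.ROOT);
        int i = 0;
        while (i < v.length() && Character.isDigit(v.charAt(i))) {
            i++;
        }
        long amount = Long.parseLong(v.substring(0, i));
        switch (v.substring(i).trim()) {
            case "ms":
            case "":
                return Duration.ofMillis(amount);
            case "s":
                return Duration.ofSeconds(amount);
            case "m":
                return Duration.ofMinutes(amount);
            case "h":
                return Duration.ofHours(amount);
            default:
                throw new IllegalArgumentException("Unrecognized duration unit in: " + value);
        }
    }

    /** Parses values like "128kb", "4mb"; a bare number is bytes. */
    public static long parseSizeBytes(String value) {
        String v = value.trim().toLowerCase(Locale.ROOT);
        int i = 0;
        while (i < v.length() && Character.isDigit(v.charAt(i))) {
            i++;
        }
        long amount = Long.parseLong(v.substring(0, i));
        switch (v.substring(i).trim()) {
            case "":
            case "b":
                return amount;
            case "kb":
                return amount * 1024;
            case "mb":
                return amount * 1024 * 1024;
            default:
                throw new IllegalArgumentException("Unrecognized size unit in: " + value);
        }
    }
}
```

Because it is self-contained like this, it can live entirely in the auto-configuration module without touching SDK or API surface.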
A
All right. So, the last little bit that I just wanted to throw out is something that Trask does in the instrumentation meeting that seems useful, which is kind of a "stuff that's changed since last time." SDK configuration: as I said, we're continuing to work on that.
A
This
is
kind
of
a
biggest
breaking
change
for
sdk
users
that
we're
going
we're
working
hard
to
make
the
sdk
configuration
immutable
so
that
you
can't
change
the
instance
of
trace
config,
that's
assigned
to
an
sdk
or
you
can't
change
the
span,
processors
that
are
assigned
to
an
sdk
after
it's
been
created.
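The immutability being described is the usual builder pattern: everything is fixed at build time, and the built object exposes no setters. A generic sketch with hypothetical names, not the actual opentelemetry-java classes:

```java
/**
 * Illustration of an immutable configuration object (hypothetical names).
 * All settings are supplied through the builder; once built, nothing can change.
 */
public final class TracerConfig {
    private final int maxAttributes;
    private final double samplingProbability;

    private TracerConfig(Builder builder) {
        this.maxAttributes = builder.maxAttributes;
        this.samplingProbability = builder.samplingProbability;
    }

    public static Builder builder() {
        return new Builder();
    }

    public int getMaxAttributes() {
        return maxAttributes;
    }

    public double getSamplingProbability() {
        return samplingProbability;
    }

    public static final class Builder {
        private int maxAttributes = 128;
        private double samplingProbability = 1.0;

        public Builder setMaxAttributes(int maxAttributes) {
            this.maxAttributes = maxAttributes;
            return this;
        }

        public Builder setSamplingProbability(double samplingProbability) {
            this.samplingProbability = samplingProbability;
            return this;
        }

        public TracerConfig build() {
            // The built object has no setters, so it cannot be mutated
            // after being handed to an SDK.
            return new TracerConfig(this);
        }
    }
}
```

Reconfiguring then means building a new object and constructing a new SDK with it, rather than mutating a live one.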
A
SDK tracer management: shutdown now returns CompletableResultCode instead of void. That's useful if you want or need to be able to wait until the shutdown is complete. Before, it would just block; now you have the option of blocking or not blocking. It's your choice.
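The blocking-or-not choice looks roughly like this. This stand-in uses java.util.concurrent.CompletableFuture purely to show the caller's two options; the SDK's actual return type is its own CompletableResultCode, with a similar shape:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

/** Stand-in for a shutdown that reports completion instead of returning void. */
public final class ShutdownExample {

    /** Kicks off shutdown work asynchronously and returns a handle to it. */
    public static CompletableFuture<Void> shutdown() {
        return CompletableFuture.runAsync(() -> {
            // Flush pending spans, close exporters, etc.
        });
    }

    public static void main(String[] args) throws Exception {
        // Option 1: block until shutdown finishes (the old void behavior).
        shutdown().get(10, TimeUnit.SECONDS);

        // Option 2: don't block; attach a callback that runs on completion.
        shutdown().whenComplete((ignored, error) ->
            System.out.println(error == null ? "shutdown complete" : "shutdown failed"));
    }
}
```

Either way the caller decides, instead of the SDK forcing a blocking call on everyone.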
A
The sampling probability sampler attribute: so, I don't know if people generally know this, but there's kind of a weird hidden feature that samplers can add attributes to spans when they do their sampling.
A
Hopefully,
no
one
was
actually
depending
on
that
yet,
but
if
so,
we're
still
pre-released
and
the
other
big
thing
is
semantic
conventions
are
moved
into
their
own
module,
a
module
that
contains
a
single
class
with
all
of
those
semantic
dimensions
in
it.
But
that
is
so
that
we
can,
if
we
need
to-
and
hopefully
we
won't,
but
if
we
need
to,
we
can
version
the
version
that
semantic
convention
separately
than
the
rest
of
the
api.
D
Good to meet you. Yes, good to meet you as well. I think I've been in a few other meetings, but I'm just kind of checking things out.
A
Excellent. Well, Joshua created a couple of issues for things for Googlers to work on, so it's all good.
A
Anyway, yeah: if you run across Phil Watson writing stuff on Medium about the processes x, that's my brother. He's a good dude, yeah. So anyway, that's all I had for the agenda, but if there are questions or concerns, please speak.
C
The only thing I have is: there are some issues I opened that weren't directly related to the GA burndown. Is there an issue that you think would be good for someone like myself? I have a lot of JVM knowledge, but I'm a Scala guy, so I apologize if my Java looks like crap. Anyway, here we go. I know, yeah. Just wait. Just wait.
A
We've done a lot of work on benchmarking and performance tuning, but I think there's one benchmarking issue, number 790, that's still open and that no one's picked up: doing something around the batch processor that would...
C
Yeah, yeah. I actually, for fun over Christmas, was using JMH for random toy experiments. Anyway, I shouldn't mention that; I don't know anything about JMH, I can't do it. Well, if you... I mean.
C
Cool, cool. The other question, around your benchmarks: do you have, like, target use cases that you're focused on with your benchmarks? You know, like, shaping the data that you shove through it is kind of important. Do you have an idea of what you're looking for, or should I just send a PR, and you can look at it and say "this isn't right"?
A
So what I would do is probably take a look at what's there and see if there's anything useful, and maybe even put in a PR to the spec for additional things that you think would be good but that we don't have. That thing has a few, like, shape-of-the-data cases in it, but it's very narrow, so I think expanding that would be very helpful.
A
Okay,
so
the
other
thing
I
think,
would
be
super
useful
and
I
didn't
create
a
ticket
for
this,
but
it
would
be.
Is
we
have
a
spam
processor
in
the
contrib
or
not
that
contributes?
I
don't
know
what
we
call
them
anymore
extension
module
that
uses
a
disruptor
for
shot
for
shuffling
spans
off
to
the
exporter.
A
That
has
not
had
any
benchmarking
done
on.
I
don't
believe
so.
That
would
be
another
thing
that
would
be
potentially
useful
aside
from
the
spam
processor,
but
the
the
disrupter
spam
processor.
A
It just overrides what was before. Awesome, I didn't remember that. I think you can configure the disruptor to block if the ring is full, can't you? Anyway.
C
Cool. Is there a section of documentation around, like, the performance results?
C
...to use. But yeah, if you're amenable, we (Google) might throw together a proposal around using that across all the SDKs. It will actually comment on performance regressions on PRs. The issue is that the performance regression could be from the VM that was chosen for your build.
A
Yeah,
so
that's
always
the
you
never
know
when
github
actions,
what
you're
going
to
get
is
a
you
know,
a
vm
that
you're
running
on
so
yeah,
and
this
is
something
we
don't
have
any
dedicated
hardware.
People
who
are
running
benchmarks
are
generally
just
running
them
on
their
laptop
or
you
know,
there's
there's
not
really.
We
don't
have
a
consistent
story,
so
I
think
what
we've
mostly
been
looking
for
in
the
benchmarks
is
a
having
them
written
so
that
we
have
something
that
we
can
work
from.
A
If
somebody
complains
about
an
issue
and
at
least
finding
gross
problems
or
pinpointing
some
gross
problems
in
parts
of
the
of
the
sdk
that
people
are
worried
about,
so
we
definitely
have
not
been
rigorous
in
any
way
shape
or
form
about
our
benchmarking.
So
anything
that
you
all
are
interested
in
doing
to
help
out
with
that
would
be
fantastic.
C
Okay, cool. All right, so then: are you worried about memory consumption, and do you want to track that?
A
I
think
actually,
that's
mostly
what
I
have
been
worried
about
so
far
when
I've
been
doing
any
benchmarking
and
performance
work
is
memory
rather
than
speed
and
cpu
overhead
okay,
I
mean
both,
but
memory
feels
like
it's
a
it's.
In
my
experience
at
least,
it
seems
harder
to
back
yourself
out
of
bad
memory
decisions
than
it
is
to
increase
the
cpu
usage
performance,
but
I
don't
know
I
mean
it
probably
depends
on
what
you're
working
and
what
you're
working
on,
but.
A
I've
I
have
I'm
definitely
conserving
memory
usage.
I
think
it's
something
especially
if
we
like
one
of
the
things
that
I've
spent
some
time
on.
I
have
this
stress
test
ticket
that's
been
open
for
forever
and
a
day
that
I
probably
should
just
close
is
just
making
sure
that,
when
things
go
bad
that
we
don't
leak,
that's
kind
of
been
my
one
of
my
primary
primary
issues
is
like
if
something
goes
wrong
with
the
network.
If
something
goes
wrong
somewhere
make
sure
we
don't
just
have
a
memory
leak
that
takes
down
the
user's
application.
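The "don't leak when things go bad" property usually comes down to bounding the buffer between the application and the exporter, and dropping (with a count) rather than accumulating. A minimal sketch of that idea; this is an illustration, not the SDK's actual batch processor:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

/** Sketch of a bounded span buffer that drops instead of growing when the exporter stalls. */
public final class BoundedSpanBuffer {
    private final BlockingQueue<String> queue;
    private final AtomicLong droppedSpans = new AtomicLong();

    public BoundedSpanBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Called on the application thread; never blocks and never grows without bound. */
    public void addSpan(String span) {
        if (!queue.offer(span)) {
            // Exporter is stalled (e.g. a glitchy network): drop and count,
            // instead of queueing forever and leaking memory.
            droppedSpans.incrementAndGet();
        }
    }

    public long getDroppedSpans() {
        return droppedSpans.get();
    }

    public int getQueuedSpans() {
        return queue.size();
    }
}
```

Under sustained backend failure, memory use stays capped at the queue capacity and the drop counter tells you what was lost, rather than the application's heap paying for the outage.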
C
Okay. We can use JMH to force high stress, if I recall correctly.
A
Yeah, I think that's probably true. This particular one that I wrote, the stress test, was actually mostly involving OTLP and what happens if the network gets glitchy with sending the data to the collector: making sure that we don't end up blocking things, or just stacking up data everywhere, or taking down the app in some way. But yeah, for JMH... thank you, awesome. Anything you can do there would be incredibly helpful. Very good.
A
Yeah
and
that's
kind
of
the
last,
I
think
that's
kind
of
the
last
coding
thing
that
of
significance.
There's,
there's
documentation,
there's
always
going
to
be
documentation,
I'm
sure,
but
that
feels
like
it's
the
last
kind
of
bit
that
we
really
need
to
get
nailed
aside
from
the
configuration
story,
that
is
long
term
cleanup
before
we're
ready
for
ready
for
1.0
and
I
think
we're
getting
really
close.
D
Thanks, see you all. Right, cool, have a great...