From YouTube: 2021-07-30 meeting
A
What a good meeting this morning, good crowd. Ben Evans came. He left New Relic recently and is at Red Hat now.
A
He said he's working on, like, the long-term sustainability of the Java ecosystem, the long-term health of Java, and so he thinks that telemetry is important to that story, which is awesome. And then I was kicking myself after the meeting that I should have brought up, and I think we should bring up with him, the context propagation problem in Java. That would be a big part of the long-term health of Java observability.
B
I found my second library to target Java 11; the first was Caffeine. I can't remember what it was, but I was like, another library is going to require 11.
A
A year ago, but then Evans...
A
But their data, I feel like they would have the best data source, because it has a lot of breadth and it's real, like, real production systems, as opposed to just asking developers, "Hey, what do you..."
A
Also, Jonah from Logz.io has joined before. I'm not sure if he's a dev manager or a product manager; I think not, because he was mentioning that the engineers were going to start working on the logging stuff next week. So...
A
So hoping that will come along. We talked a lot; I did a lot of just sort of... oh, at the end, I did updates on everything I...
A
One of us will send a PR for that. I still don't know what to do with, like, the lag: do we add it here once we release it?
B
We tried to hide the table. Yeah, I'll probably ask next week. I'm curious if there's any real difference between where we started and where we ended up, because we had muzzle working, then it stopped working, and now it's back to working. I guess I wonder if there were any big changes during that.
B
So we had one buildSrc with a bunch of plugins and muzzle worked. I thought just copying the files over would be enough to extract the plugin, but instead we rewrote everything, sort of. So I'm wondering if that ended up coming full circle back to where we were when everything was in buildSrc, or changed significantly.
A
But I mean, at least incrementally, they're good improvements on their own.
A
Yeah, so it seemed reasonable. It sounds like you don't need to prove ownership of a domain for the Gradle Plugin Portal, so Nikita's just going to register personal credentials, unless you have a different idea.
B
If we want to reuse our Maven coordinates, I'm sure there's some step. There used to be a step where you verify... like, I did this many years ago, so I can't remember exactly, but I had to verify ownership of a Maven coordinate so that I could reuse it. Otherwise it ends up as, like, gradle.plugin.com.github.whatever; sometimes I see that. I hope we don't end up like that. I want us to have our artifacts at io.opentelemetry. So that's all I would pay attention to: making sure that's what's happening, not gradle.plugin.org...
B
When specifying it as a Maven dependency in a buildSrc file, they would use this name; otherwise, inside a Gradle build file, it doesn't matter, they don't check that. It's only about the artifact that they used to check. Okay, but maybe, now that they're not using Bintray, maybe they don't care, and it's true that maybe they don't do the Gradle uploading thing anymore, because they just have their own repository.
B
I mean, I think for most of these pairs we have been working around the fact that the instrumentation is not the right thing to do. Right, like, we don't handle failures, because we're instrumenting Netty, not the framework, so we have to instrument some random path for failure. There's connection failure first, then there's read failure, and we find all the failure cases in our instrumentation, when really...
A
The first PR, though, is more than just that, because I think this one is: Splunk has a customer that wants...
A
Oh yeah, there was a new tracer added for netty-reactor, so I asked if it could be converted to an instrumenter, or added to the list that's now growing longer of...
A
So keeping the one, the connect failure that we had already, as CLIENT. And I think for the other one, the problem is that it's nesting the client span, so Splunk doesn't want it to be a CLIENT span yet, because then it'll be suppressed. But once we have the option not to suppress...
A
Your thoughts on this, the API change?
A
Right now, it's being passed here into the new builder, but it feels like potentially it would make sense... well, right now, at least, the focus is mainly for client spans, so we could potentially pass it to the new client instrumenter. But Ludmila was pointing out that the new client instrumenter is really for things that propagate.
A
Yeah, which feels a little odd, like... I don't know, the new client instrumenter feels like something you would call for...
A
Well, have you seen the... what is it that Google contributed?
A
Oh, like an extractor?
A
Yeah, I could definitely see some cases, like server, like, say...
A
Mateusz brought up something he was wondering about.
A
Yes, because there's some overlap between instrumentation type and span kind, but I don't know, they're also a little bit orthogonal. I mean, because if instrumentation type is just those semantic conventions, you can have HTTP server, client, RPC server...
A
But I think he was wondering... he said he was going to take a closer look at it tomorrow at the...
A
Yeah, yeah, because I think one of the concerns Ludmila had, one of the reasons she wanted it to be sort of explicit, was just to make sure people didn't forget to set it.
A
Cool, I think those are some good options. I will pass them along to Ludmila.
A
Oh, and then to Mateusz's point about span kind: I guess they are separate, though. Yeah, I see what you mean, especially when you think of it in terms of these attribute extractors.
A
You're right on time, too; we just finished here, and there's nothing... Anuraag already knows about that. So we're all caught up.
H
Right, yeah. So, the metrics API: I don't know if you saw, the feature freeze went through.
H
So we can take this API as long as it abides by the spec, which means I need to meet with Bogdan and talk to him. I think he saw on the PR that there was an ask John had around merging the counter builder and the up-down counter builder (yeah, they're both counters anyway), merging those two, and the async and the synchronous versions, for Java. I thought it was reasonable, but it is a divergence from the spec, so I'll have an interesting discussion around that one.
H
Yeah, to some extent there are still six separate instruments declared, and we have six separate instruments; we just have, like, one counter builder. Yeah, because the builder pattern is also not in the spec; the spec is, like, single methods.
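The "merge the counter builders" idea being discussed could look roughly like the sketch below. These shapes are hypothetical, not the actual OpenTelemetry API: one builder type covers both the monotonic counter and the up-down counter, while the instruments themselves remain distinct underneath.

```java
// Hypothetical sketch of a single builder producing either a monotonic
// counter or an up-down counter; not the real OpenTelemetry API.
public class CounterBuilderSketch {
    interface Counter { void add(long value); }

    static final class CounterBuilder {
        private boolean upDown = false;

        CounterBuilder upDown() { // opt in to negative increments
            this.upDown = true;
            return this;
        }

        Counter build() {
            final boolean allowNegative = upDown;
            final long[] total = {0};
            return value -> {
                if (value < 0 && !allowNegative) {
                    throw new IllegalArgumentException("monotonic counter");
                }
                total[0] += value;
            };
        }
    }

    public static void main(String[] args) {
        Counter requests = new CounterBuilder().build();       // monotonic
        requests.add(1);
        Counter queueDepth = new CounterBuilder().upDown().build();
        queueDepth.add(5);
        queueDepth.add(-2); // allowed only for the up-down variant
        System.out.println("ok");
    }
}
```

The point of contention in the spec discussion is that the spec declares the two builders separately; a merged builder keeps the six instruments but diverges in API shape.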
H
Yeah, so I'm going to try to get... I asked Bogdan to join either the early-morning Java SIG next week when John is back, or this one when John's back. Ideally, I'd like to have you and John and Bogdan have the discussion. I don't care; whatever you think is going to be best. I do agree with John's suggestion, like, I think this makes it easier. So whatever happens there happens; let me know and I'll fix the PR.
H
The only thing I did want to talk about is actually lazySet versus volatile. Specifically, I have been taught to never, ever use volatile when you can use AtomicReference and lazySet instead, because of that memory barrier on write that is almost never really needed but is costly, right. In this case it absolutely doesn't matter at all, yeah; so it's fun, but I just want to make sure that we have an understanding there.
H
Right, so... I don't know if you remember when mechanical sympathy was a thing, but in high-throughput concurrency, when you are writing, with, like, your cache... I'm going to keep the model simple in my head, because I can explain it well only if it's simple. So your CPU, you know, you have a cache, and you have writes to the cache, and those writes have to eventually make it from...
H
You
know,
l3
to
l2,
to
l1
to
ram
right,
and
so
there's
this
right,
cue
of
things
that
need
to
go
out
the
door
to
make
it
down
into
ram
from
the
cache
that
is
local
and
a
volatile
right,
basically
forces
you
to
flush
the
right
queue
to
get
things
down
into
ram.
H
So
you
have
to
wait
for
all
those
caches
to
flush,
which
is
a
synchronous
operation
which
can
put
a
you
know
several
nanosecond
delay
across
all
your
cpus
that
touch
that
piece
of
memory
for
doing
a
volatile
right.
Whereas
a
lazy
set
says
I
don't
care,
just
throw
it
on
the
queue
everything's
gravy
and
then
a
volatile
read
will
force
that
area
of
cash
to
all
be
flushed
across
all
the
caches
before
it
can
write,
read
and
you
always
have
to
pay
for
volatile
reads
anytime.
You
read
one
of
these
things.
H
So
what
you
do
is
you,
when
you
write
you
don't
pay
the
the
this
cost.
You
try
to
avoid
ever
touching
it
again
in
that
thread
after
you
write
and
then
your
read
threads
have
to
pay
the
cost
anyway,
so
you
only
take
one
hit
of
flushing
his
cash
cues,
these
cash
right
keys.
That's
that's
the
basic
idea
so,
but
volatile
in
java
does
a
flush
both
on
read
and.
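The lazySet-versus-volatile distinction being described can be sketched with the standard library. This is an illustrative snippet, not code from the SDK: `AtomicLong.lazySet` issues an "ordered" write (it cannot be reordered with earlier writes but does not force an immediate flush), while assigning a `volatile` field pays the full store barrier.

```java
import java.util.concurrent.atomic.AtomicLong;

public class OrderedWriteDemo {
    // A volatile write forces the store buffer to drain (full barrier),
    // which is the cost being discussed above.
    static volatile long volatileCounter = 0;

    public static void main(String[] args) {
        volatileCounter = 42; // full barrier on this write

        // lazySet performs an ordered write: ordered with respect to
        // earlier writes, but no immediate flush, so it is cheaper on
        // the writing thread; readers may see it slightly later.
        AtomicLong ordered = new AtomicLong();
        ordered.lazySet(42);

        // On the same thread the value is always visible immediately.
        System.out.println(ordered.get() + " " + volatileCounter); // prints "42 42"
    }
}
```

In a single-writer, eventually-read pattern (like the exemplar storage discussed later), the ordered write is usually all that's needed.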
H
My feeling... yeah, I've been trying to figure out how to optimize exemplar sampling, and I'm looking at using an atomic spin lock, when there are enough CPUs, just to poll, because the overhead on a single thread is only like 20 nanoseconds, but the overhead for multiple threads is like 70 nanoseconds.
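A minimal sketch of the kind of atomic spin lock being considered, assuming nothing about the actual SDK code: a test-and-set loop over an `AtomicBoolean`. (Note that `Thread.onSpinWait()` is JDK 9+, relevant given the Java 8 discussion below.)

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal test-and-set spin lock: cheap when the critical section is a
// few nanoseconds, because waiting threads poll instead of parking.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // busy-wait hint to the CPU (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) {
        SpinLock lock = new SpinLock();
        long[] counter = {0};
        lock.lock();
        try {
            counter[0]++; // a tiny critical section, e.g. swapping an exemplar
        } finally {
            lock.unlock();
        }
        System.out.println(counter[0]); // prints "1"
    }
}
```

Whether this beats a plain lock depends entirely on contention and critical-section length, which is exactly what the benchmarking below is probing.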
H
We might be able to drop our concurrent overhead for multi-CPU cases a lot by using spin locks appropriately in some of these cases, instead of just straight-up locks, but anyway...
H
I'll have to take a look, because, there again, my concurrent programming in Java is stale as of Java 9.
H
I should probably do my benchmarking on more JDKs, because maybe we just do JDK-specific fixes if we need that. Okay, yeah, I'm using JDK 11 and JDK 8; JDK 8 for sure you absolutely need to.
H
I mean, it could be fair to just have a different SDK for Android, yeah. It also could be fair that, for all of the crazy things I want to do to optimize on Java 8, we just make a different library, too.
H
Yeah, the next PR I'm planning to send, which, again, I'm out on vacation, but after we get this metrics API refactor in, is untangling...
H
...in the prototype, and I think it's a useful refactoring going forward. So my plan is, after the initial hit of code, to kind of do that detangling, so that there's a clear interface boundary between the instruments and how they wire into the API, and then the entire back-end implementation, so that we can have a little bit more flexibility with how we store things and how we implement them, if we need different implementations for different, you know, reasons, like it should.
H
To
do
that,
so
that
would
be
the
next
my
next
planned
pr.
I
can
try
to
get
that
out
earlier
rather
than
later,
but
it
it's
going
to
look
like
crap
until
the
first
pr,
because
the
diff
will
be
too
huge.
B
Maybe. I mean, we have talked about writing our tests in Kotlin. That might happen if one of us wants to do it, but yeah, our normal code wouldn't, right.
B
Yeah, yeah, and that mostly affects instrumentation, I think, because we have so many assertions, like attributes, like: here's a nested trace. So you can either write out that trace in a DSL type of thing or use Java, yeah.
A
But yes, basically, I think everybody except Nikita is in favor of moving these to Java, primarily for the new-contributor experience. Well, Anuraag also mentioned refactoring, I think.
B
Not enough people use it, so there's no structural replace; that's one obvious thing that's missing that I use sometimes for refactoring. Gotcha, yeah. But then it also reminds me of when I worked on YouTube: having to run the test to know that the test works, rather than finding the compile failure, just because of things like a wrongly named variable. That just drives you crazy, yeah.
B
It's
like
my
way
of
thinking
about
it
is
this
random
chocolate
python
the
tooling
is
developed
to
make
it
still
better
like
they
have
type
annotate
like
python,
is,
is
going
in
the
direction
where
it
can
still
be
more
static.
Well,
no
one
cares
enough
about
groovy
to
do
the
same,
and
so
groovy
is
always
going
to
be
the
sort
of
very
dynamic,
very
hard
to
refactor
language.
I
think.
H
Yeah
there
was
a
time
where
they
were
doing
types,
but
maybe
that
again,
I'm
like
super
super
stoked
yeah
all
right
so,
but
the
odds
that
I
can
use
scholar
three
for
tests
are
probably
pretty
loud.
H
Yeah, oh, that's because everyone implements their own threading library, yeah. So I'm kind of impressed with the amount of instrumentation you have already. To the extent I can offer help, maybe I'll try for some of your weird running problems. But, like, every Scala library reinvents fibers; back in, like, the year 2010, that was the cool thing to do, right. So I don't know how many instrumentation context-propagation things you have, but I imagine it's heavy on the Scala side; is that true?
H
Yeah, then I'm not surprised. Akka requires instrumentation; when I worked at Typesafe, that was a fun, fun, fun event, but I know who to call in if you need help. So if you have a set of bugs or things that you're having issues with, those are my ex-co-workers; I can call them in and bring them in. All right, my kids...
A
Have you seen the strict context option in the SDK? Yeah, it'll report if you don't... What does it do? All right: if you don't...
H
Are you turning all the Akka message-passing stuff into spans, or are you trying to just propagate context across Akka message passing? Oh, okay, there's the problem, all right. I have to talk... let me talk to my old co-worker Peter and see what they did before, but I think you might need to treat them as spans, like an actual message pass in Akka, because that's the model.
H
Otherwise your context could propagate to, like... you know, there's like a one-to-N mapping of context propagation inside of Akka, unless you are creating subspans for every single time it crosses a thread.
A
Yeah, I think this...
H
Yeah, I mean, the key, yeah, the key is, you know, Akka is a message-passing protocol. How are you doing things? Like, is it Reactor from Spring, or Monix, or whatever the hell it is?
H
So, actually, in Scala 3, I suspect that is what's going to happen. I gave a talk on Java OpenTelemetry in Scala 3, and we did explicit passing, because in Scala 3 they redid all the implicit stuff, and actually explicitly passing context is dead simple now. So I suspect that might be the case going forward.
H
If you look at what ZIO has... to the extent people want this auto agent, we might have to implement fibers in ZIO at some point. Or, I should say, I'm hoping the ZIO people come in and implement fiber propagation for us, because that is actually more standard, and that's in preparation for when Loom hits.
H
Do you mean Kotlin coroutines? Oh, no, no. So, Scala at a JVM level: what you'll see in instrumentation is you'll actually just see context as a method parameter in every single method.
H
But
in
the
language
you
don't
necessarily
see
it
as
a
method
parameter.
It's
an
implicit
thing
that
gets
passed
around
yeah,
but
but.
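What "context as a method parameter in every single method" looks like at the JVM level can be sketched in plain Java. The `Context` type and method shapes here are illustrative, not any specific library's API: the language-level implicit compiles down to an explicit parameter threaded through each call.

```java
// Illustrative sketch: explicit context threading, which is what
// Scala's implicit context parameters look like once compiled.
public class ExplicitContextDemo {
    static final class Context {
        final String traceId;
        Context(String traceId) { this.traceId = traceId; }
    }

    // Every call in the chain threads the context through explicitly,
    // instead of relying on a ThreadLocal that breaks across threads.
    static String handleRequest(Context ctx) {
        return fetchUser(ctx);
    }

    static String fetchUser(Context ctx) {
        return "user fetched in trace " + ctx.traceId;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest(new Context("abc123")));
        // prints "user fetched in trace abc123"
    }
}
```

The appeal for instrumentation is that explicit passing survives thread hops and fiber scheduling, where ThreadLocal-based propagation needs per-library instrumentation.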
H
We can fix that. Oh, here's the next thing I want to talk about: besides the API change, whether the refactoring that I mentioned makes sense for next steps. I should say, yeah, let's talk about next steps after the API change real quick, because we have two options. I think option number one is the refactoring to fragment the API... sorry, the API-to-storage hooks.
H
So I created, like, a storage interface, and the builders from the SDK kind of construct one of these storage interfaces and pass it to the API, and so there's this complete decoupling: the API talks to an interface, and then there's the back-end implementation. And the whole goal... then there's, like, a measurement processor whose whole responsibility is just to find the right, you know, funnel into storage. So if that makes any sense whatsoever, great.
H
If not, I should write down what the hell I'm trying to say and get you a PR. But that's option number one, and then option number two is renaming: just simple renaming to make the names of things as they are now line up with the instrument names.
H
No, no, no. So this would be an SDK-only refactoring. So, you know, there would be a kind of storage directory with a set of interfaces that only the SDK can see, and the builders would kind of use a class which constructs those and ties into the instrument, yeah. So we have a hook where we can kind of play with stuff. Okay.
H
Where is this? So it's in the metric prototype PR, actually, if you want to see what it looks like...
H
I think this aspect is going to disappear right here, and it's just going to be... there's going to be a storage mechanism that has collect and reset; that's all it'll have. I have two methods to create writable storage or asynchronous storage. And then, let me show you: writable storage just has a bind method and recordLong and recordDouble, and this is effectively the API that backs every single instrument. Sorry, is this an SPI? I don't know, whatever.
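The storage decoupling being described could be sketched roughly as below. The interface and method names are approximations of what's said in the meeting (bind/recordLong/recordDouble, collect-and-reset), not the actual prototype's code: the API-side instrument only ever talks to the storage interface, and the SDK decides what aggregation lives behind it.

```java
import java.util.HashMap;
import java.util.Map;

// Approximate sketch of the "writable storage" idea; names hypothetical.
interface WriteableStorage {
    void recordLong(long value, Map<String, String> attributes);
    void recordDouble(double value, Map<String, String> attributes);
}

// One possible backing implementation: a plain sum aggregation.
class SumStorage implements WriteableStorage {
    private double sum = 0;

    @Override public synchronized void recordLong(long value, Map<String, String> attributes) {
        sum += value;
    }

    @Override public synchronized void recordDouble(double value, Map<String, String> attributes) {
        sum += value;
    }

    // "collect and reset": hand the aggregated value to the exporter.
    synchronized double collectAndReset() {
        double result = sum;
        sum = 0;
        return result;
    }
}

public class StorageDemo {
    public static void main(String[] args) {
        SumStorage storage = new SumStorage();
        storage.recordLong(2, new HashMap<>());
        storage.recordDouble(3.5, new HashMap<>());
        System.out.println(storage.collectAndReset()); // prints "5.5"
        System.out.println(storage.collectAndReset()); // prints "0.0" after reset
    }
}
```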
H
This is the interface that backs every single API instrument, and then we have a piece of code that figures out how to construct one of these with the appropriate aggregator, with the appropriate metric names, with the appropriate output with exemplars, all that kind of junk. So all it is, is refactoring to kind of cut the builders and the actual instruments off from this notion of how to store things, which I thought actually cleaned up the code a little bit, made it easier to understand. If we look at...
H
Right: is the instrument the thing that is going to own the state, or is the view the thing that's going to own the state? And so this detangles that whole notion. Instead of having views be this thing on the side, when I register an instrument for a meter, I can use this sucker, and it just gives me an interface for where to shove all my data and how to get the metrics out, and the views are kind of hidden behind the scenes in there.
H
We take our shared state, we check our configured views; if this instrument has a view associated with it, we basically create a synchronous storage mechanism for that instrument with the aggregator that was configured, all of these parameters. By the way, I plan to clean the hell out of this, but this is just how we got it working initially.
H
If
we
don't
have
a
view
configured
right,
then
we
construct
the
default
storage
like
a
default
view,
so
for
counters.
That
means
that
we
create
synchronous
storage
with
some
aggregation
for
a
histogram.
We
created
with
histogram
aggregation
and
any
other
instrument
type
is
unsupported
at
this
time.
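The default-view fallback just described (counters get sum aggregation, histograms get histogram aggregation, everything else unsupported for now) could be sketched as a simple dispatch. Names here are hypothetical placeholders, not the SDK's types.

```java
// Hypothetical sketch of the default-aggregation fallback described above.
enum InstrumentType { COUNTER, HISTOGRAM, GAUGE }
enum Aggregation { SUM, EXPLICIT_BUCKET_HISTOGRAM }

public class DefaultViews {
    static Aggregation defaultAggregation(InstrumentType type) {
        switch (type) {
            case COUNTER:
                return Aggregation.SUM; // counters default to a sum
            case HISTOGRAM:
                return Aggregation.EXPLICIT_BUCKET_HISTOGRAM;
            default:
                // "any other instrument type is unsupported at this time"
                throw new UnsupportedOperationException(
                    "Instrument type not supported yet: " + type);
        }
    }

    public static void main(String[] args) {
        System.out.println(defaultAggregation(InstrumentType.COUNTER));   // prints "SUM"
        System.out.println(defaultAggregation(InstrumentType.HISTOGRAM)); // prints "EXPLICIT_BUCKET_HISTOGRAM"
    }
}
```

A configured view would be checked first; this path only runs when no view matches the instrument.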
H
So I felt like this actually cleaned up the notion of views versus default aggregations pretty clearly, and the detanglement of how things are stored from the builder itself, and from the instrument name itself, was helpful.
H
So this is basically the code I was planning to put into the existing API. Here's the same thing for asynchronous: when you go to build an asynchronous instrument, you have a descriptor, you have your state, and you have that callback for measurements, for how to grab measurements, and this does the same thing: check your views, see if you need to make an asynchronous piece of storage.
H
My hope is to make this so readable that I don't have to maintain it, because anyone can; but I will maintain it, because it's pretty fun code anyway. Yeah, cool. So that was number two... the other option is, if you look at... oh, why did I stop sharing? I need to share that. I'll show you more in here.
H
We'll start with aggregators. So what I did was I actually tried to prune down the aggregator interface to the bare minimum, and tried to streamline a little bit what synchronous storage versus asynchronous storage looks like. So right now, aggregators are stateless, and they provide a high-performance, stateful kind of thing that can aggregate.
H
The stream storage basically also creates aggregated values, but it does so in a highly concurrent way; I can show you that. This is basically how existing aggregators work; it's just hidden behind this abstract aggregator interface thing. But the last bit... ignore this, because I think this goes away. The last bit is just building a metric.
H
Someone else is responsible for storing the map of attributes and data and then giving it back to me and having me construct my metric stream, and this interface is slightly different than what we had before. It was a little bit simpler to implement and test, and so I thought it was a pretty decent cleanup.
H
This is another option for implementation; it's basically simplifying aggregators. And then, if you look at the set of aggregators here, you'll notice we're missing a whole ton compared to what was there before, and that's because the only ones planned for the initial spec implementation are sums, histograms, and last-value.
H
So this is option number two for a PR: whichever one you think is going to be easier to review and merge and kind of get through the door.
H
If you're curious what it looks like and what's going to be exposed: there's a notion of a filter that allows you to choose what measurements will get sampled, and there are three defaults: all measurements could get sampled; measurements that have traces that are sampled could get sampled; and nothing gets sampled. And then, additionally, there's a reservoir, and the reservoir is just: hey, here's a measurement; go keep it or not, it's up to you.
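The three filter defaults just listed could be sketched as a predicate over measurements. The enum and method names below are illustrative approximations, not the SDK's API.

```java
// Illustrative sketch of the three exemplar filter defaults described
// above; names are hypothetical, not the actual SDK types.
enum ExemplarFilter {
    ALWAYS_ON,          // every measurement is a sampling candidate
    WITH_SAMPLED_TRACE, // only measurements recorded inside a sampled trace
    ALWAYS_OFF;         // nothing is ever sampled

    boolean shouldSample(boolean insideSampledTrace) {
        switch (this) {
            case ALWAYS_ON: return true;
            case WITH_SAMPLED_TRACE: return insideSampledTrace;
            default: return false; // ALWAYS_OFF
        }
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        System.out.println(ExemplarFilter.ALWAYS_ON.shouldSample(false));          // prints "true"
        System.out.println(ExemplarFilter.WITH_SAMPLED_TRACE.shouldSample(false)); // prints "false"
        System.out.println(ExemplarFilter.WITH_SAMPLED_TRACE.shouldSample(true));  // prints "true"
        System.out.println(ExemplarFilter.ALWAYS_OFF.shouldSample(true));          // prints "false"
    }
}
```

The filter only decides candidacy; the reservoir (discussed next) decides which candidates are actually kept.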
H
Yeah, so the filter chooses what measurements can get... well, kind of. The filter chooses what could get sampled, and we're literally exposing three options: everything is possibly sampled; only measurements that occur within sampled traces can get sampled; and nothing gets sampled. And that would be a flag, so that part's not implemented; but if you read the spec, there's a flag that says the user should be able to select one of these three options. Now, in addition, the exemplar sampler is a...
H
It is a pairing of a... reservoir... sorry, a filter, and then a way of making reservoirs.
H
Let me show you the reservoir thing. So, a reservoir exemplar-storage strategy of "always off" means for every single aggregator you get an empty reservoir. The default storage strategy is: I give you a reservoir equal to the number of processors, to avoid contention.
H
Right, and so that's the notion of how many samples you keep, yep. And then, lastly, the first implementation of a reservoir sampler is the dumbest implementation of a reservoir sampler, which is: we draw a random number between zero and the number of measurements, and if it is within the reservoir, we keep it. So it's, like, the simplest possible reservoir implementation, but the idea is that this reservoir hook is open, and people can implement their own reservoirs.
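The "dumbest implementation" described is classic reservoir sampling (Algorithm R): draw a random index over the measurements seen so far and keep the measurement if the index lands inside the reservoir. The class below is an illustrative sketch, not the SDK's code.

```java
import java.util.Random;

// Naive reservoir sampler (Algorithm R), as described above: each
// measurement ends up in the reservoir with equal probability.
public class NaiveReservoir {
    private final double[] reservoir;
    private int measurementsSeen = 0;
    private final Random random = new Random();

    NaiveReservoir(int size) {
        this.reservoir = new double[size];
    }

    void offer(double measurement) {
        int index = measurementsSeen < reservoir.length
            ? measurementsSeen                       // still filling up
            : random.nextInt(measurementsSeen + 1);  // random candidate slot
        if (index < reservoir.length) {
            reservoir[index] = measurement;          // within the reservoir: keep it
        }
        measurementsSeen++;
    }

    public static void main(String[] args) {
        NaiveReservoir r = new NaiveReservoir(4); // e.g. one slot per processor
        for (int i = 0; i < 1000; i++) {
            r.offer(i);
        }
        // Every one of the 1000 measurements had an equal chance of surviving.
        for (double kept : r.reservoir) {
            System.out.println(kept);
        }
    }
}
```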
B
That's it. And so, if you wanted to only have failures as exemplars, that's the job of the filter; it sort of decides what's a candidate for an exemplar, and then the reservoir is what does the random...
H
...tries to get a good random sample, yeah. Yeah, you could also make a reservoir that does things differently; this one just does, you know, the simplest reservoir sampling. You could make an exemplar reservoir that is knowledgeable, right, and understands: here's my bucket strategy; let me make sure I keep one exemplar for every bucket, and I'll keep the last one I saw. Like, whatever you want to do there is open to you.
H
It's designed to never allocate in the hot path, so these cells are actually mutable, with a lock, and when you get a measurement offered, we actually lock and just override our current values, literally because we're trying to avoid... oh, and we do really dirty things, because we're trying to avoid allocations. When I ran the first performance benchmark, where I didn't do this, I had just enough code that HotSpot couldn't optimize me well, and it was atrocious.
H
So this brought it down to within about 20 nanoseconds of overhead per recorded value (sorry, I said milliseconds), versus previously it was upwards of one second of overhead, depending on the amount of contention; it was pretty, pretty darn bad.
H
So that's the other thing that might be contentious from a Java implementation standpoint. But the use case for exemplars is: I have a metric data point, right, and I want an exemplar that represents kind of the scope of my, say, latency, and I'll...
H
Yeah, yeah, I anticipate going forward that the naive exemplar reservoir is not going to be everything that people want; they'll probably want, yeah, more intelligent ones, got it. That's why I want to leave that open; if you look at the spec, I'm trying to specify that as something users can contribute. That PR is open, so please comment on it; it's in the spec repo, but that's what we're talking about right now.
H
Effectively, what I was running into was: if you have a high-contention sum, it's actually more efficient for the exemplar reservoir to store one per processor.
H
Like, I can see reason to have two, and yeah, the only reason it's the number of threads is because I was tweaking my performance, maybe too much, maybe too much, but it was rather significant.
H
Going from just one to the number of threads... going from two to the number of threads (I have a quad core), there was a modest boost; but there was a huge boost going to two, and there was a modest boost going to the number of CPUs.
A
And so the reservoir... you have those exemplars, those, say, min/max, and that gets flushed at the same frequency as, like, the metric interval reader frequency?
H
Yeah, so having a couple for a sum isn't so bad if that sum is, you know, something rather important to you and you want to have it, because, again, with exemplars you get all the attributes that get aggregated away, and you can possibly have baggage attributes, so you could have an exemplar that gives you a whole bunch of additional information.
H
Yeah, we have a back-end implementation I've been playing around with, and it's been interesting, it's been interesting. I was actually really surprised that the naive reservoir sampler does a very good job of giving me the interesting ones; or I'm really good at sending purely random data and assuming that the random data I get on the other end is interesting.
H
Yeah, if you look at the Prometheus implementation, they effectively have one exemplar per bucket, and then any time somebody adds to a bucket, they just swap the current exemplar.
H
I implemented that, but it was actually way more inefficient than this random sampling, and then I just tried to get a feel for how many buckets the random sampler covers, and it's not perfect, but it was pretty good; it was like 60%, you know.
H
Sure, yeah, yeah, that makes it a lot easier, yeah. I tried to use an AtomicReferenceArray and do that with the buckets... but actually... well, no, we actually might be okay. So it depends on the bucket contention across threads. So if we fix our performance benchmarks to select better numbers that are more distributed, our performance benchmarks will be a lot better and there'll be less contention on a particular bucket, so maybe, maybe we should implement that for histograms. But yeah, that looks almost exactly like my first implementation, yeah.
H
The main difference here is you're calling "add readable span", right, and with spans you're instantiating the span, and that's kind of a cost you just have to pay. For exemplars, I'm trying to avoid instantiating anything. So I have...
H
Okay, anyway, so I think I took a lot of time on that, so sorry. Lastly, so that's upcoming metrics SDK stuff; the last bit we were talking about was autoconf, or autoconfigure.
H
So right now, metric exporters are either OTLP or Prometheus, and those two are hard-coded, and then there's this, like, "configure the metric SDK" thing; and for trace, there's a "configure the SDK" thing and "configure some, you know, random trace exporter" thing. So I want to add the extension mechanism to autoconf for metric exporters.
H
The specification around exporting isn't done yet, but when it is, it's likely that we'll be able to have more than one exporter, and it's likely that the way you set up trace will be almost exactly the same for metrics... or, sorry, it'll be similar enough for metrics that I think we can add something there. So I was hoping to add one there, and wanted to run it by you and see what you think.
A
The agent extension also... that's basically our primary extension mechanism.
H
Yeah, I actually really like that consolidation; I think that's a really smart choice. So, to the extent we can figure out what in autoconfigure makes sense from a kind of standardization across OpenTelemetry, I would challenge you to, you know, let us know what you think is actually reusable. But maybe we can shore it up and get it out the door, because I do really like the autoconf stuff that you've designed.
H
Cool, interesting, yeah.
H
Yeah, so (I think you weren't here) we talked about that. Right now our exporters rely a hundred percent on the resource detectors; they're actually a pair, so that's why they're upstreamed together. We actually need the resource semantic conventions to stabilize before we can upstream that, because right now the only way we keep them in sync is to have them be the same version. But our metric exporter, like, we need to map the resources from OpenTelemetry back to the Stackdriver resource model, except it's not Stackdriver anymore.
H
It's the Google Cloud resource model, but we have to map back to our model, right, and if these things change, it completely breaks our exporter. Okay, yeah. We need the versions to be in sync right now, or we need it to be marked stable so no one changes it, one of the two. So that's why it's external right now. I really do want to get those in. One problem we have...
H
Is
the
integration
test
for
running
we're
running
them
on
infrastructure,
where
you
would
need
a
googler
to
type
a
command
to
run
the
integration
test?
We're
actually,
you
know,
running
the
resource,
detectors,
live
on
our
hardware
and
making
sure
they
get
the
right
resource
detected.
Yeah
to
the
extent
we
can
figure
out
how
to
get
that
into
this
this
code
base,
so
anyone
can
run
it.
That
would
be
ideal,
but
again,
another
reason
it's
external
is
because
we
can
do
it
there.
We
can't
do
it
in
open
telemetry
just
yet.
H
B
No, the integration tests currently are in our... okay. That was mainly, though, because Amazon's always been okay with contributing AWS accounts, but we prefer, I guess, in general to use the CNCF AWS account for any of those repos. But then asking for permission and getting those set up has always been annoying, so that causes it to be deprioritized.
B
I don't know if you know, but zipkin-gcp actually does have integration tests running on Google Cloud, based on an account that one of the Google, the Spring Cloud Sleuth, team members, I think, contributed. So they set up the credentials in Travis, and so these integration tests for zipkin-gcp actually run on Google Cloud. So I hope that someday we have a similar setup in OpenTelemetry.
H
H
H
Would we be able to get OpenTelemetry to take a dependency on that library? It would be like taking a dependency on, you know, gRPC or something, right?
G
H
F
B
B
But I wanted to do that at the same time as GCP. I don't think we would want to depend on something outside of OpenTelemetry, though, in the upstream Java agent, because that sort of opens up the whole bag of depending on the entire world, I guess. So that's sort of, that's the main reason I was hoping that those GCP detectors could be pushed upstream: then it'd be very easy to depend on them in our Java agent.
B
G
H
Yeah, so, let me... the idea behind this library would be there'd be some sort of method that you can call, and it will return to you the Google Cloud resource in Google Cloud vernacular, and then we would still need to write a resource detector that takes that object and turns it into, like, a set of attributes.
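A minimal sketch of that second piece, the detector that flattens the returned Google-vernacular description into attributes. The attribute keys are real OpenTelemetry resource semantic-convention names, but the signature and the idea of passing fields in directly are assumptions, since the shared metadata library doesn't exist yet; real code would build an `io.opentelemetry.sdk.resources.Resource` rather than a plain map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GcpResourceDetector {
    // Turn the Google Cloud resource description (whatever the hypothetical
    // metadata library returns) into a flat set of OpenTelemetry resource
    // attributes. A plain map keeps this sketch dependency-free.
    public static Map<String, String> detect(String platform, String projectId,
                                             String zone, String instanceId) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("cloud.provider", "gcp");
        attrs.put("cloud.platform", platform);
        attrs.put("cloud.account.id", projectId);
        attrs.put("cloud.availability_zone", zone);
        attrs.put("host.id", instanceId);
        return attrs;
    }
}
```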
H
B
Yep, okay, okay, so we didn't use the AWS...
G
H
We would only do this if the library was only resource detection. If it has a bunch of crap bundled in, then yeah, that's a good point, yeah, yeah. I mean, we might as well pull in the Scala standard library.
A
Gosh, what about the reverse: putting the resource detector in this repo, and having your repo wrap it and run the integration tests?
H
Yeah, let's see, my org chart looks different than that, so this resource detector is likely to exist no matter what, as part of our client libraries, like...
H
F
H
That we can use in both places, and we can work with them to expose that, and then we can depend on it in OpenTelemetry. Whereas getting them to depend on OpenTelemetry is a much, much bigger can of worms. But having this independent thing that both our SDK and OpenTelemetry depend on, that's a lot easier, and since we have shared ownership it can work out pretty well.
B
B
H
Yeah, but there are also weird limitations on what's in our SDK, so effectively minimizing the dependency there is super, super critical. So I feel like this is our best compromise, because it's one that I might be able to deliver on a lot quicker than the other alternative of trying to get what we have today into OpenTelemetry. This would be something different that does the same effective job, but is integration tested.
G
B
H
B
H
Yeah, well, we are in kind of a... we have sprints, right, and designing what we're going to do with resources was one of our big sprints, and efforts, for the last quarter. So we know what we want to do; we're just still working on making it happen. And the focus has been on the collector: getting the collector right first and then progressing to the rest of the ecosystem, mostly based on adoption.
A
And you could always pull it in shaded, if you're worried, and not worry about people.
B
H
This is one that I suspect is not going to have a lot of churn, and it's also going to be under our long-term support relatively quickly. So, I don't know if you're familiar with our long-term support, but it's like five years or so. So if you grab one of the Java libraries that's LTS, you're fine; you don't have to worry.
B
F
F
G
H
A
Yeah, like, so many things... there were plenty of things that got rewrites in, like, JDK 1.2, and you got StringBuilder, which finally got rid of StringBuffer. Like, a lot of the mistakes have kind of, all right, they just aren't used anymore: Hashtable, Vector, but...
G
D
B
That people upgraded their Java, and then after Java 8 people stopped upgrading. Yeah, it's because they backported... like, I'm amazed they even backported ALPN to Java 8, so HTTP/2 works on Java 8. Like, that's just way too much for a backport, in my opinion, but they just backported everything into Java 8.
B
F
H
E
G
H
Yeah, so effectively that's what led to this discussion around autoconfigure, because all my hooks weren't working, and I'm not sure if it's that I effed up the class loader or... and it turns out I just had to wait for Nikita's docs, and now I can read them and figure out what I did wrong. Yeah.
H
And, you know, I'm biased, because that's where it works; I was trying to get all our metrics out there. I can try it without, but right now I'm trying to get the GCP stuff, get the metrics into GCP, see what they look like. Yeah, I saw you got the histograms out. That's awesome.
F
B
H
Yeah, well, the code looked perfect. I haven't had a chance to get to it.
E
H
Yeah, I was debating, so... but I have been trying to avoid making our own Java agent.
H
Because I feel like those extension mechanisms provided are actually pretty darn good, and I really want to lean into them. But we're, like, the only ones, as far as I know. Is that correct?
B
A
Because, yeah, and I think, as we get more stable, other people, especially smaller places, will. Although, if they only need it for the... so many of the vendors are going the OTLP collector route and not writing their own Java exporters.
B
Yeah, yeah. I would say that my plan was always just to provide both options, like all of our customizations as an extension that is also autoloaded into our agent, and just have a readme for both. It's that our extension mechanism was a bit too unstable to do that yet. So once it stabilizes I might re-explore, or I might just not care anymore, because we have something that works. But that was my original thought; I did want to do both. That's what I was...
H
OTLP export is one thing, and, you know, that's on our roadmap, like just taking in OTLP; we do have a collector module to take in OTLP. But the thing that I am trying to figure out: Google has a legacy Cloud Trace propagation format for propagating trace IDs that we... you know, W3C isn't adopted everywhere, so we still use this for, like, Cloud Run and GKE, and our load balancer produces this thing.
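For reference, that legacy header is `X-Cloud-Trace-Context: TRACE_ID/SPAN_ID;o=OPTIONS`, where the trace ID is 32 hex characters, the span ID is decimal, and `o=1` marks the request as traced. A toy parser might look like this; it's an illustration of the format, not the real propagator code:

```java
public class CloudTraceContext {
    public final String traceId;   // 32 hex characters
    public final long spanId;      // decimal span id
    public final boolean sampled;  // o=1 means "trace this request"

    public CloudTraceContext(String traceId, long spanId, boolean sampled) {
        this.traceId = traceId;
        this.spanId = spanId;
        this.sampled = sampled;
    }

    // Parse e.g. "105445aa7843bc8bf206b12000100000/1;o=1".
    public static CloudTraceContext parse(String headerValue) {
        String[] idAndRest = headerValue.split("/", 2);
        String rest = idAndRest.length > 1 ? idAndRest[1] : "";
        String[] spanAndOpts = rest.split(";", 2);
        long spanId = spanAndOpts[0].isEmpty() ? 0L
                : Long.parseUnsignedLong(spanAndOpts[0]);
        boolean sampled = spanAndOpts.length > 1 && spanAndOpts[1].contains("o=1");
        return new CloudTraceContext(idAndRest[0], spanId, sampled);
    }
}
```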
H
That we have in most other languages. I wrote one in Scala, just because, you know, that's what I do, and we want to get this as an extension somewhere, you know, where you can use this context propagation. We also want to get the resource detectors in, and right now, as you know, they're external. Even without the exporter, like, even if people are speaking OTLP, I want to make sure that those two things can get into the agent.
B
Yeah, yeah, it's a propagator. I mean, we can add the propagator to the agent; we did that for AWS. I don't know why we wouldn't do it for GCP. And okay, and...
B
H
Okay, okay, that's fair. So the question around Cloud Trace propagation and your propagator: what do you do when you have both headers, like, your custom trace propagation header and...
F
H
H
B
D
H
That led to some interesting results; specifically, the sampled flags are different between the different mechanisms.
B
All right, we... I do have a propagator; we haven't hooked it up with the agent yet. I call it the AWS propagator, but it's really just a very simple thing, where it extracts in a priority order, W3C and X-Ray, and that one prioritizes X-Ray, because it's sort of meant for use internally within AWS. But then it injects the extracted format, rather than all the formats, so it keeps what it extracted and injects that.
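The "inject what you extracted" behavior just described could be sketched like this, using plain header maps instead of the real OpenTelemetry `TextMapPropagator` API. The header names (`X-Amzn-Trace-Id`, `traceparent`) are the real X-Ray and W3C ones, but this class is an illustration, not the actual AWS propagator:

```java
import java.util.Map;

public class StickyPropagator {
    private String lastExtractedHeader; // which format we saw on the way in

    // Extract in priority order: X-Ray first, then W3C trace context.
    // Returns the raw header value, or null if neither format is present.
    public String extract(Map<String, String> headers) {
        if (headers.containsKey("X-Amzn-Trace-Id")) {
            lastExtractedHeader = "X-Amzn-Trace-Id";
        } else if (headers.containsKey("traceparent")) {
            lastExtractedHeader = "traceparent";
        }
        return lastExtractedHeader == null ? null : headers.get(lastExtractedHeader);
    }

    // Inject only the format that was extracted, not every supported format,
    // so the caller gets back the same header style they sent in.
    public void inject(Map<String, String> headers, String value) {
        if (lastExtractedHeader != null && value != null) {
            headers.put(lastExtractedHeader, value);
        }
    }
}
```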
D
D
B
I think that is the appropriate solution for a cloud vendor, because, I mean, I don't know exactly what you're doing with this propagation, but I can imagine a customer sends B3 and they would want to get B3 back somewhere else. And so, yeah, probably even GCP has that same sort of scenario, where you might just want to inject the extracted format, not prioritize W3C, because you might trip up users using B3.
H
Right, right. This is... so my propagator would be: Google Cloud infrastructure will create traces for you, like our load balancer.
H
Yeah, and then you can choose to attach to it or not with your propagator, but the idea would be, if that's there, you attach by default. So, okay, I like, I like what you're suggesting. The only thing that's weird is, theoretically, what we want is, you know, the load balancer hits your compute and you attach to this trace that was made, so that you can holistically look at the load balancer, to your compute, to wherever you talk to; but between your machines...
H
...we kind of want you using W3C trace context, and not our proprietary, you know, this-is-what-our-load-balancer-makes format, because we kind of want to move towards W3C trace context as the lingua franca. So that's...
B
H
Yeah, there's an interesting issue there: the sampled flag. We had issues where, if you follow the spec for W3C, we broke the hell out of people, because the spec is more limited than what we have: we have a ternary thing, and that spec is binary.
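In other words, the legacy `o=` option has three states (unset, `o=0`, `o=1`), while the W3C trace-flags sampled bit is binary, so any mapping has to throw away the "no opinion" state. A hedged sketch of one possible downgrade (the enum and mapping here are illustrative, not the official behavior):

```java
public class SamplingDecision {
    // The legacy Cloud Trace option is ternary: the header may omit "o="
    // entirely (no opinion), or carry o=0 / o=1.
    public enum CloudTraceOption { UNSET, NOT_SAMPLED, SAMPLED }

    // Collapse the ternary option into the binary W3C sampled flag.
    // UNSET necessarily loses information: here it maps to "not sampled",
    // which is one choice among several a real implementation could make.
    public static boolean toW3cSampled(CloudTraceOption option) {
        return option == CloudTraceOption.SAMPLED;
    }
}
```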
B
B
A
A
A
H
G
A
A bunch of PRs right before you leave...
H
I was telling my co-workers that on a previous vacation... I actually sometimes write code to relax, and so I actually made a mean programming language on, like, one of my previous vacations. So you never know what I'm going to do. You have no idea.
H
D
A
H
You kind of, you kind of get into it. Yeah, I've never regretted learning languages, though, honestly; like, there are so many good ones out there too.
F
But yeah, TypeScript is pretty good.
A
H
TypeScript is to JavaScript everything that Scala was to Java, for me. Yeah, it just gives you way more compile-time safety, lots of really, really cool features, and it's freaking elegant. But TypeScript also has mass adoption, so...
H
H
F
A
I'm not gonna, yeah, I'm not gonna make a religious stand one way or another.
H
F
H
Anyway, I've been chatting too much. Thank you, I'll see y'all later. Let me know if you need anything from me on this, like, API thing or whatever. I won't be able to respond until I'm back from vacation, but I'm happy to make changes, or, I don't know if you have access to make changes to it; I don't know, whatever, it doesn't matter to me. But let me know what you need. I'll see you in a bit.