From YouTube: 2021-10-28 meeting
A: ...from you today. I've been promising it for, like, weeks and weeks and weeks, so it'll have to do. Oh god, it's not going to... I can't tell — my AirPods might give out. You know, yeah, I was just finishing it up minutes ago, so I left a TBD here and there, because I know I can find an old branch with this and that, but I basically have the right level of detail for this.
A: I am on a Zoom-in-the-browser experience, which makes it hard to share my screen. If you would like to wait for the right time, introduce us, and go through the front matter, I will be happy to start — if one of you will please share it.
B: Looks like, Steve Harris, you're the only one on.
B: Yeah, I kind of wanted to wait a little bit longer for people to show up. I know Anthony's back to work this week, so I was hoping to see him as well.
B: Well, okay, I guess we're almost four minutes into it. I think we'll just have to let everyone kind of trickle in. I'd like to get started, especially if Steve can only stay for a little bit — I'd like to get something together. So yeah, welcome everyone, thanks for joining. If you haven't already, please make sure you add yourself to the attendees list. We've got a few things on the agenda today to talk about.
B: First thing is going to be Josh giving us his overview of the metrics API and SDK in a little bit. I need to figure out how to share a screen, like, every week.
B: There we go. Yeah, hopefully this works with me sharing Josh's presentation as well. Let me see — I guess I could just do it from here, but I don't know if anybody can still see my screen at this point. Okay, cool.
A: Yeah, looks good. Thank you, Tyler. All right! Well, I'll try to be quick. Let's see — the next slide has some history. The key here is that this is a very old prototype, because I've been involved in both this repository and the SDK — the metrics SIG — from the beginning.
A: So I found that we had formed the SIG around August of 2019, and my first PR to implement the earliest of the SDK — you know, the metrics APIs — happened in PR 100. I was reminded just today, as I looked over it, that back then I had been making noises, early on in the early OpenTelemetry days, about remembering to keep it possible to have a streaming SDK. And so at that point in time, around PR 100...
A: ...we still had this early prototype, just to remind people that sometimes a tracing API doesn't need to have a span object in memory. So the actual original metrics prototype just passed metric events through — it didn't have any aggregators — and so a counter Add would eventually write out an object that was like a log record saying you added to the counter. Anyway, a year passed and it became fairly mature. It went through quite a lot of benchmarking and testing and adoption.
A: I guess improvements related to the movement of the OpenTelemetry data model. So it was a fairly stable SDK as far as test levels and performance levels and readiness for production, but it wasn't anywhere near where even the OpenTelemetry metrics SIG specification was at the time. It was from an earlier day, so it sat there for a while.
A: While I was involved in the metrics SIG, my employer had me working on this Prometheus sidecar, which was also part of finalizing the metrics data model, for me. And then, since earlier this summer, I was trying to help this group and return to this group — because I was basically absent for a year — and began making changes to bring it closer to the current place where the OpenTelemetry metrics specification is. And thanks to the patience of you all throughout the last...
A: ...months of the summer, we are in a fairly good place, I would say, and I'm going to talk about that more. So why don't you go to the next slide, please? This is a high-level diagram — if I were to try to give you the biggest block-level diagram I could of the SDK: the API brings in these events, which go through a layer of machinery that checks for errors and handles all the surface area of the metrics API, and passes these events into the SDK.
A: The decision to collect happens because the controller decided to, and it's responsible for making sure that whoever needs to lock the object is locking the object. That happens a little bit differently if Prometheus is pulling the metrics versus if OTLP or some other exporter is pushing the metrics, but for the most part that complexity is hidden within the controller and nowhere else. The next slide has another diagram.
A: Those events come in, and the most expensive part of mapping them into the correct location is turning the label set — which we now call an attribute set — into a map key. A lot of the work done during that sort of year of maturing this code was to get the best implementation of that function we could, and that's currently baked into the attribute-set code that we have for both traces and metrics. It was really built for metrics, though, because it's pretty high performance.
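The attribute-set-to-map-key step described above can be sketched roughly like this. This is illustrative, not the actual SDK code: sort and deduplicate the key/value pairs, then encode them into a single comparable string used as a map key, so that attribute order at the call site doesn't matter.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// KeyValue is a simplified stand-in for a metric attribute.
type KeyValue struct {
	Key, Value string
}

// mapKey sorts and deduplicates attributes (last value for a key
// wins), then encodes them into one comparable string usable as a
// Go map key.
func mapKey(attrs []KeyValue) string {
	sorted := make([]KeyValue, len(attrs))
	copy(sorted, attrs)
	// Stable sort keeps input order among equal keys, so the later
	// duplicate is the one kept below.
	sort.SliceStable(sorted, func(i, j int) bool { return sorted[i].Key < sorted[j].Key })

	var b strings.Builder
	for i, kv := range sorted {
		if i+1 < len(sorted) && sorted[i+1].Key == kv.Key {
			continue // duplicate key: skip, keep the later value
		}
		fmt.Fprintf(&b, "%q=%q;", kv.Key, kv.Value)
	}
	return b.String()
}

func main() {
	a := mapKey([]KeyValue{{"host", "h1"}, {"path", "/"}})
	b := mapKey([]KeyValue{{"path", "/"}, {"host", "h1"}})
	fmt.Println(a == b) // order-independent: true
}
```

The point of doing this once per call site is that the expensive work (sorting, deduplication, encoding) is paid a single time, and the hot path becomes a plain map lookup.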
A: So as all these concurrent updates are coming in, it's updating a live aggregator, and then at some moment in time a collection event happens, from the previous slide. And what happens is it atomically copies — actually, it's a move: it resets the live aggregator and copies its contents into another aggregator, atomically. The snapshot is then processed by calling the processor interface, and that is basically the data flow.
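A minimal sketch of the "move" described above, for a sum aggregator with one word of memory: updates are atomic adds, and collection atomically swaps the live value out into a snapshot while resetting the live aggregator to zero. Names here are illustrative, not the SDK's.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// sumAggregator holds a single word, updated atomically on the hot path.
type sumAggregator struct{ value int64 }

// Update is called concurrently by API users.
func (a *sumAggregator) Update(delta int64) { atomic.AddInt64(&a.value, delta) }

// SynchronizedMove atomically resets the live aggregator and
// deposits its contents into the snapshot aggregator.
func (a *sumAggregator) SynchronizedMove(snapshot *sumAggregator) {
	v := atomic.SwapInt64(&a.value, 0) // reset live, capture old value
	atomic.StoreInt64(&snapshot.value, v)
}

func main() {
	live, snap := &sumAggregator{}, &sumAggregator{}

	// Concurrent updates, as from instrumented application code.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				live.Update(1)
			}
		}()
	}
	wg.Wait()

	// Collection event: move live contents into the snapshot.
	live.SynchronizedMove(snap)
	fmt.Println(snap.value, live.value) // 4000 0
}
```

The snapshot can then be handed to a processor without blocking further updates, which keep accumulating in the (now reset) live aggregator.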
A: There's this moment, the first time you see a new label set for an instrument, where you have to decide what kind of aggregator to use. That's called aggregator choice. It's bundled with the processor API, because essentially the only dependency an accumulator has is the processor — but the processor is responsible, in that moment, for providing the first aggregator to use. Actually, two aggregators to use: one is the live copy and one's the snapshot copy. Okay.
A: So that's the overall picture. The hardest part of this SDK is getting the concurrency and the memory management right inside the accumulator, and then the interfaces are designed to essentially meet our performance needs and let you support push and pull and so on with different configurations.
A: So let's move on to the next slide — I can't remember what I put there. Oh yeah, I talk about APIs and interfaces. There are three that you have to pay attention to, otherwise you get confused. The OpenTelemetry project itself has this mandate to provide an API interface... oh, I'm going to read the chat. You can stop me to ask questions, Steve, if you want.
C: It looks like you take the snapshot, I'm assuming, so that you can run the processing without holding up inbound aggregation contributions, or events. So what I was wondering is: what happens if you decide to pause and make the move — as you said, you know, copy, clone, whatever — but the preceding call to the processor wasn't done yet? Is it possible that can happen?
A: The aggregator is responsible for locking itself, or else being atomic. The special case that we know of is the sum aggregator: it just has one word of memory, and it's going to use atomic operations to set, to move, and to reset. But for all the other aggregators we don't — we just use a lock.
A: There is a history here. We emulated the Prometheus library at one point. They have a lock-free algorithm for the histogram, for example, and it's fancy, and we could do it — and we did, at one point, actually. And then Bogdan filed an issue saying the complexity-versus-cost tradeoff is just not right here, so let's get rid of that, and we did. So we use a mutex in, for example, the histogram, or anywhere there's more than one field.
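A sketch of the mutex approach for a multi-field aggregator like the explicit-boundary histogram: because bucket counts, sum, and count must stay consistent with each other, a plain lock is used instead of a lock-free scheme. This is an illustration of the tradeoff being described, not the SDK's histogram code.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// histogram has more than one field, so (per the discussion above)
// it takes a mutex rather than attempting a lock-free design.
type histogram struct {
	mu         sync.Mutex
	boundaries []float64 // sorted explicit bucket boundaries
	counts     []uint64  // len(boundaries)+1 buckets
	sum        float64
	count      uint64
}

func newHistogram(boundaries []float64) *histogram {
	return &histogram{boundaries: boundaries, counts: make([]uint64, len(boundaries)+1)}
}

// Record updates all fields under one lock so a concurrent
// collection never observes a half-applied measurement.
func (h *histogram) Record(v float64) {
	h.mu.Lock()
	defer h.mu.Unlock()
	i := sort.SearchFloat64s(h.boundaries, v) // first boundary >= v
	h.counts[i]++
	h.sum += v
	h.count++
}

func main() {
	h := newHistogram([]float64{1, 5, 10})
	for _, v := range []float64{0.5, 2, 7, 20} {
		h.Record(v)
	}
	fmt.Println(h.counts, h.sum, h.count) // [1 1 1 1] 29.5 4
}
```

An uncontended mutex is cheap in Go, which is why the issue mentioned above concluded the lock-free version's complexity wasn't paying for itself.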
A: At one point I tried to make it so that we had a min-max-sum-count that just didn't care about atomicity, but that was borderline, and people flagged it enough that I just decided it wasn't worth it.
A: So, back to those APIs. There's the official OpenTelemetry metrics API — that's there to give us the way to plug in a different SDK. It gives us vendor neutrality and so on. The specification of the API has to be somehow totally separate from the SDK, so that, you know, a new vendor can bring in something crazy we never thought of. The SDK API package is the one that we created over the summer. It was always kind of there, but it was mixed in with the rest of the interface.
A: So I pulled that out to clarify that it plays an essential role, but it's not part of the API that the user uses. That's the descriptor type, the instrument-kind type, and MeterImpl, which has three methods. If you want to be an SDK, you just need to provide three methods.
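The shape of that narrow SDK entry point can be sketched as below. The method names and types here are illustrative assumptions, not the actual sdkapi package; the point is only that "being an SDK" reduces to implementing one small interface, which the API-surface machinery calls into.

```go
package main

import "fmt"

// Descriptor and Instrument are simplified stand-ins.
type Descriptor struct {
	Name string
	Kind string // e.g. "counter", "histogram"
}

type Instrument interface {
	Record(value float64)
}

// MeterImpl sketches a minimal three-method SDK plug-in point
// (names are hypothetical).
type MeterImpl interface {
	NewSyncInstrument(Descriptor) Instrument
	NewAsyncInstrument(Descriptor, func() float64) Instrument
	RecordBatch(values map[string]float64)
}

// printerMeter is a toy implementation that just prints events,
// much like the early pass-through prototype described earlier.
type printerMeter struct{}

type printerInstrument struct{ d Descriptor }

func (i printerInstrument) Record(v float64) {
	fmt.Printf("%s %s += %g\n", i.d.Kind, i.d.Name, v)
}

func (printerMeter) NewSyncInstrument(d Descriptor) Instrument { return printerInstrument{d} }
func (printerMeter) NewAsyncInstrument(d Descriptor, cb func() float64) Instrument {
	return printerInstrument{d}
}
func (printerMeter) RecordBatch(values map[string]float64) {
	for name, v := range values {
		fmt.Printf("batch %s += %g\n", name, v)
	}
}

func main() {
	var m MeterImpl = printerMeter{}
	c := m.NewSyncInstrument(Descriptor{Name: "requests", Kind: "counter"})
	c.Record(1) // prints "counter requests += 1"
}
```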
A: So that's the second category of interface, which is in the SDK API directory. And then the third category of interface is part of the SDK itself: the exporter, and these things called readers, processors, checkpointers — there's a little bit of confusion left in the terminology; it's not perfect, but it never is. The analogous APIs in the tracing realm are things like the sampler, the batch processor, the synchronous processor, and so on.
A: Okay — actually, the next slides are a little bit disorganized and uncategorized and loose, so I just kind of want to summarize the code base that we have: what's good and what's so-so. The accumulator is definitely high performance; it has stress tests and high-concurrency optimizations.
A: The other thing is that, if you're only implementing Prometheus, you don't know this, but delta exports can be powerful. They let you use more cardinality, and many of the vendors support it, but the Prometheus ecosystem is not quite ready for it. This SDK, though, can forget: if you're using cardinality that's rare or occasional, the SDK will not hold on to that memory, if you export deltas — which is possible with the OTLP exporter. What else?
A: What's so-so? So, there's a lot of complexity in the OTLP export path. That was just a mismatch — like an impedance mismatch — so it's been mostly improved by now. I fixed several of those aspects, but there are still a few left; there are tickets open about that.
A: It's really wickedly difficult to configure these, and there are some sharp edges which I myself run into, even when I know what I'm doing. So it's a little hard to get the right SDK configured even when you know what you're doing. Even the layout of packages is just not perfect. There was, of course, two years of history behind it, and some of you will remember when OpenCensus brought in a lot of ideas.
A: We had the sdk/export/trace package at one time, which held things like span data — common elements of the default SDK, but you don't need them for the user-facing interface. So there is still a parallel sdk/export/metric directory, which is totally confusing, because it's like a pure interface, but it's not user-facing. So why is it there? I'd like to get that consolidated and dissolved.
A: And then there are a few things missing from the SDK specification, which I'll cover. Let's see — what's the next slide? The OTel group has standardized quite a lot, but not everything, and there are things that were still there from OpenCensus days that are kind of lingering in this prototype. But they're lingering in many of the prototypes, and part of the reason is that, if you look at what Prometheus gives the user, in at least four of the Prometheus SDKs this concept of a bound instrument is there. So, is it useful?
A: Is it important? Well, I don't know — I'm not sure anymore — but it is part of the current API. So you can bind a synchronous instrument with some attributes and then reuse it, and it'll be really fast, and there are ways people can imagine doing that. I'm just not sure how contrived they are.
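The bound-instrument idea can be sketched like this, as an assumption-laden illustration rather than the SDK's API: binding resolves the attribute set's series once, and the returned handle makes each subsequent Add a single atomic operation with no per-call map lookup.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// counter maps an already-encoded attribute set to a per-series sum.
type counter struct {
	mu     sync.Mutex
	series map[string]*int64
}

// BoundCounter caches the series lookup so repeated Adds skip the
// attribute encoding and map access entirely.
type BoundCounter struct{ cell *int64 }

func newCounter() *counter { return &counter{series: map[string]*int64{}} }

// Bind resolves the attribute set once and returns a reusable handle.
func (c *counter) Bind(encodedAttrs string) BoundCounter {
	c.mu.Lock()
	defer c.mu.Unlock()
	cell, ok := c.series[encodedAttrs]
	if !ok {
		cell = new(int64)
		c.series[encodedAttrs] = cell
	}
	return BoundCounter{cell: cell}
}

// Add on the bound handle is a single atomic instruction.
func (b BoundCounter) Add(delta int64) { atomic.AddInt64(b.cell, delta) }

func main() {
	c := newCounter()
	b := c.Bind("host=h1")
	for i := 0; i < 100; i++ {
		b.Add(1) // hot loop: no map lookup per call
	}
	fmt.Println(*c.series["host=h1"]) // 100
}
```

Note the downside raised later in the discussion: because the handle pins a live series, the SDK can no longer freely forget that series' memory while any binding is outstanding.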
A: Okay, brief intermission.
B: Yeah, you're...
A: How's this — much better? Okay, I would have predicted that; anyway, it happened. Okay, so I just talked about bound instruments. There's definitely a will to standardize this — include it in the specification for OpenTelemetry — it's just not happened, and we've got to start somewhere. So the current SDK spec that's feature-frozen does not include bound instruments, but we expect it to become an option after 1.0.
A: I think — I don't know. I don't actually use it in my own programming, though, so I question it. It's also related to some other stuff, so we should talk about that as a group. There are two batch APIs present in the SDK that I have mixed feelings about. Both of them are important, in my opinion, but neither is specified in the OTel API — the user-facing API. So there's a question...
A: ...that's going to get answered at the sort of OpenTelemetry group level, of how much we are willing to tolerate these non-specified APIs that we think are really important. I don't know what we'll say, but both of these are there — there are good reasons for that — but they're not part of the specification.
A: Lastly, there's this thing called MeterMust, and if I could take it back, I would — in fact, I'd like to. It just adds a lot of boilerplate, and it's just not that expensive for the user to manage their own, and once we have generics that gets better, too. So anyway, that's just some complexity that could be removed.
A: I've kind of put the highlights here — a few of them need more issues and so on — but these are a medium-length list of small issues that I would take with the current code, and ways I'd like to make it better; but it's not a super priority.
A: The OTLP metrics export path just has one step left. Right now there's a map built during export that restores order that's present in the processor — which is to say, we can organize the entries in the checkpoint by instrument, and then you won't have to build a map inside the OTLP exporter. That's important to me because it impacts performance. The rest of the stuff is naming and package layout, for the most part, or SDK...
A: ...configuration, which is really hard still, and I want to talk about that next. Let's see what's on the next slide. Okay, so the rest of this is sort of trying to organize this into decisions for the API itself, i.e. the user-facing API. What do we want? Do we want to keep the batch record? The bound instruments? The bound observer? Do we want to keep things where they are, or do we want to move things into new packages?
A: I made a proposal, which some of you have seen, a couple of months ago, in the linked PR there. We don't want to look at it now, but it was basically saying: I like what we have, but I want to put things into subpackages so that it's easier to read the documentation. That's the only change — you put all of your synchronous instruments into one subpackage, and then you have integer types and floating-point types as subpackages.
A: By the time you click into a synchronous floating-point or integer instrument kind, you only have to look at one interface, not four, essentially. So the combinatorial explosion of synchronous/asynchronous and integer/floating-point can be hidden by restructuring. That's what I would like to do. Otherwise, I'm fairly happy with what we have.
A: I might remove bound instruments — question mark. Okay, that is, for the most part, the API question. For the SDK: the two big pieces of the SDK that are specified functionality but aren't implemented yet are big — it's not a little amount of stuff, but it's been planned for all along, and I can talk about both. The view configuration mechanism is essentially a way to configure what behavior you want from each instrument and each instrumentation library, essentially on an independent basis and with a fair amount of granularity.
A: This is an area where you may have filters of certain kinds to decide what to do and whether to suppress data. So we'll have to read that together and take a look at it. It is feature-frozen; that means we're still open to more prototypes showing us that we're doing it badly or that we need to make fixes. So this is the time to do this, and Lightstep is definitely interested in pursuing this with someone here. And then the second area...
A: ...that's pretty big, but not nearly as big as views, is exemplars. This is an area where it's not too difficult, but it does touch the accumulator, and it also touches the aggregator, in ways that are just a little challenging. The short-term objective is to make it so that you can get one exemplar per bucket in your explicit-boundary histogram, because that's fairly useful in today's world, and then there's a longer-term goal of supporting other aggregators.
A: Both of these are connected with each other as well, and I want to say that functionality has already been built in the SDK to do a lot of what we need to do. So, one of the major things a view can do is specify the attributes — and that means, if you're changing the attributes from what was present in the instrumentation, you're performing a second aggregation.
A: So the processor can be built up as a pipeline, and we have the tools to remove attributes, and the processor will do the right thing. So to implement views is really just an exercise in configuration — providing the proper implementation of each interface at just the right moment — which is not an easy problem, but it is fairly isolated from the business logic of producing metrics. And exemplars and sampling is an area that I love to talk about, but I don't think I should bother you any more about it.
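The "second aggregation" a view performs when it changes attributes can be sketched as follows (an illustration, not the SDK's view machinery): series whose attribute sets collapse onto the same reduced set, after dropping keys, are summed together.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// reaggregate applies a view that keeps only the listed attribute
// keys, summing series that collapse onto the same reduced set.
// Series keys are encoded as "k1=v1,k2=v2" for simplicity.
func reaggregate(series map[string]int64, keep map[string]bool) map[string]int64 {
	out := map[string]int64{}
	for encoded, sum := range series {
		var kept []string
		for _, kv := range strings.Split(encoded, ",") {
			key := strings.SplitN(kv, "=", 2)[0]
			if keep[key] {
				kept = append(kept, kv)
			}
		}
		sort.Strings(kept) // canonical order for the reduced key
		out[strings.Join(kept, ",")] += sum
	}
	return out
}

func main() {
	series := map[string]int64{
		"host=h1,path=/a": 3,
		"host=h2,path=/a": 4,
		"host=h1,path=/b": 5,
	}
	// View: keep only "path"; dropping "host" merges the first two series.
	out := reaggregate(series, map[string]bool{"path": true})
	fmt.Println(out["path=/a"], out["path=/b"]) // 7 5
}
```

Wiring this into a processor pipeline at the right moment, per instrument and instrumentation library, is the configuration exercise described above.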
A: I do have branches for each of these that I worked on, just to kind of feel it out, and I never got far enough to even open PRs. So I have to dig up the issues, or the other PRs, where those topics were discussed. Anyway, I think that might be the last slide — I don't know, there could be another one; if so, I'll keep talking. Yeah, we're done. Thank you all. I'd be happy to answer questions, or talk — you know, open discussion.
A: Honestly, I think that finishing the SDK is pretty complicated, and it's not something where I expect, like, a sort of small commitment from outside to help out. I mentioned that Lightstep wants to see this happen; there's very likely to be a full-time engineer assigned to this — I just can't say who at this time — and I think it'll take a lot of help from me to get this done, so I'm responsible for this stuff.
A: The stuff is on the tenth slide here. But what I want from you all is those decisions about the API. They're real, and they're hard, and there's no right answer. Now might be the time to click into that API proposal PR — I think I put a screenshot of what I was after. Basically, if you think about godoc... maybe I did, maybe I didn't.
A: Maybe I didn't. But while I was developing it, my goal was: I'm going to open the godoc for my API, look at it with my eyeballs, and see if it looks overly complicated. The nice thing about this refactoring that was done in this PR — it was never finished, and it was going to require the sdkapi package to be created before it could be finished, so now it can be done as a demonstration.
A: Basically, what it did was make it so that the top-level metric package will give you a meter object, and then you would ask the meter for your synchronous creator or your asynchronous creator. So it's like a builder pattern: you'd say meter.synchronous.float64.newcounter.
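The builder chain described can be sketched like this — type and method names are illustrative of the proposal's shape, not the actual PR. Each step narrows the godoc surface: by the time you reach a synchronous float64 sub-builder, you see one small interface instead of the full sync/async × int/float combinatorial set.

```go
package main

import "fmt"

// Meter is the top-level object the metric package hands you.
type Meter struct{}

// Synchronous and SyncFloat64 are intermediate builders; each
// exposes only the next narrowing step.
type Synchronous struct{}
type SyncFloat64 struct{}

type Float64Counter struct{ name string }

func (Meter) Synchronous() Synchronous   { return Synchronous{} }
func (Synchronous) Float64() SyncFloat64 { return SyncFloat64{} }
func (SyncFloat64) NewCounter(name string) Float64Counter {
	return Float64Counter{name: name}
}

func (c Float64Counter) Add(v float64) { fmt.Printf("%s += %g\n", c.name, v) }

func main() {
	var meter Meter
	// The chain described above: meter.synchronous.float64.newcounter
	counter := meter.Synchronous().Float64().NewCounter("requests")
	counter.Add(1) // prints "requests += 1"
}
```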
A: It just adds more surface area, and batch observers add more surface area, and so those are things we could remove. Removing the batch — the record batch, which is synchronous — that's cheap and easy to do; it also doesn't cost very much to have, or to keep. The batch observer ends up adding a lot of interface complexity, but I do think it's worth it.
A: If you look at the callbacks that we're using for, like, memstats and the GC stats and stuff like that, it really does cost you a lot to make these observations, and sometimes you want to make more than one of them, for more than one instrument kind, in the same callback. Stuff like: should we keep MeterMust? Honestly, I could see that gone, and I wouldn't be sad about it.
A: As for the bound instruments: if you remove it — you know, the whole structure of the accumulator is built around it — then the question is, would you still be able to justify this code architecture if it weren't for having bound instruments? I've gone through that thought experiment, and I think it's still justified.
A: It's related to that topic about being able to forget your delta accumulations. The way the management of memory happens, you would end up, I think, with the same structure anyway. So we could remove bound instruments and put them back later without a major change to the accumulator, because you have the same machinery to forget cardinality.
A: So that's an option. Yeah — there it is, thank you. We're looking at the stripped-down version. This might be overly stripped down — there's no bound instrument in this proposal, and there's no batch — but it gives you an idea of what's possible. That's the kind of help I'm looking for, and I really would be glad to have more people offering opinions about what to do here.
B: Yeah, I mean, I definitely have a personal preference that we take away everything that is not locked down in the specification, for at least the initial release, and build it back up if needed — ideally doing it in a deprecation-strategy manner. But yeah, I think having the minimal amount is a great way to start, and then seeing what people actually use this for, and finding actual user pain points, is probably worth evaluating before we start, you know, building the API into a larger API. That's my personal preference.
D
A: They've been taken out. They were in Census — OpenCensus had that, and Prometheus has it; that's kind of where it came from. And it's been raised — like, the Java community is clinging to it, it's dear to them — and I expect it to come back post-1.0.
D
A: Yeah, that's my understanding. Now, the reason why I find it complicated, and question it, is that, in addition to that performance question, there's the exemplar question. The API, the way it stands: you don't have a context when you're dealing with a bound instrument, at least the way it's been done — the whole idea is that you're not going to go through context. Well, or is it?
A: The question is how you deal with baggage and dynamic context in the setting where there are bound instruments. It is possible; it just gets harder to handle every configuration, you know. So, with the view — that's true, I left that out of my calculus in explaining how views are complicated: one of the things that you can configure in a view is the ability to extract baggage and turn it into metric attributes.
A: To do that is just a different mechanism when you're dealing with bound instruments, and it has to do with literal efficiency questions. Currently the attribute set is fairly optimized: when you build it, it's sorted, deduplicated, and then put into a map-key form that you can look up in a map. So handling baggage keys in the synchronous...
A: ...non-bound instrument path may end up looking different than the bound path, because you've made optimizations already on that path. So it gets harder to implement baggage-to-attribute translation, and it gets harder to implement exemplars. So bound instruments are a valuable but super complicated optimization, at the API level too — and it just gets worse from here, which is kind of what I'm saying. The performance is definitely viable.
A: If I'm interested in saving the cost of building an attribute set, I'm going to try to reduce the number of metric observation statements to, like, one, and then record all my metric observations under that one. So that, rather than binding instruments, which is a huge amount of overhead, I just make one record statement: I build the attribute set once, and then I don't need two different code paths, one optimized for bound and one for non-bound. Because if I'm turning baggage into attributes, I need to extend that set, and it's much easier to do if I'm just slapping them all together into one set at once — whereas with bound instruments, you need a way to map a set plus some attributes into a set, and that's just a new piece of code and complexity that you've got to test. And then, once you think about exemplars, it becomes even slightly more complicated.
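The alternative being described — one batch record statement sharing one attribute set — can be sketched like this (illustrative names, not the SDK's RecordBatch API): the attribute set is built and resolved once, and several measurements reuse it.

```go
package main

import "fmt"

// Measurement pairs an instrument name with a value.
type Measurement struct {
	Instrument string
	Value      float64
}

// RecordBatch records several measurements against one attribute
// set. In a real SDK, the encoded attribute set would be resolved
// once here, and every measurement would reuse that resolution;
// it returns the number of measurements recorded.
func RecordBatch(encodedAttrs string, ms ...Measurement) int {
	for _, m := range ms {
		fmt.Printf("%s{%s} += %g\n", m.Instrument, encodedAttrs, m.Value)
	}
	return len(ms)
}

func main() {
	// One statement, one attribute set, three observations.
	n := RecordBatch("host=h1,path=/a",
		Measurement{"requests", 1},
		Measurement{"bytes_sent", 512},
		Measurement{"latency_ms", 3.4},
	)
	fmt.Println(n) // 3
}
```

Because the set is assembled at the call, extending it — for example with attributes extracted from baggage — happens in one place, instead of needing a separate set-plus-extra-attributes merge path as bound instruments would.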
A: I feel like I owe you some issues to summarize a few things that weren't already filed, to help guide us towards 1.0, as well as the SDK work and so on. I'm definitely trying to bring in another person on this to help out.
B: Yeah, that'd be really helpful, having some more issues. But yeah, I think just having the asynchronous conversation is also going to be more helpful for people who are going to watch this afterwards, so yeah — really good idea. Okay, cool! Well, thanks a bunch, Josh, that was really helpful. I've been with you on most of that journey — well, not most, some of that journey — so even for me it's great to get a little recap. So, yeah.
B: Okay, that is the first part of our agenda. I've got two more things I kind of wanted to talk about today. These are follow-ups from last week — we were talking a little bit about this stuff, and some people were out; it was kind of a low attendance. So, for those that weren't there:
B: We did add two new project boards, tracking both configuration from environment variables and logging. Go ahead and check those out if you haven't already. And then, on top of that, one of the things that I was looking at for this logging and troubleshooting project...
B: ...is this idea of an interface and standardizing our logging practices. Just kind of a recap: I was looking at the logr package, and I think it's really well suited for what we're trying to do — structured, with an API and then an implementation of that API. And one of the open questions that came up was:
B: How much is that going to overlap with the log signal that, you know, OpenTelemetry is eventually going to include as well? So I stopped by the log SIG this week and asked some questions there. They have merged this OTEP —
B: I believe it's OTEP 150, if I'm not mistaken. Yeah. And it defines, you know, an SDK prototype that's very similar to the trace signal itself: the log emitter is provided by a provider; there's a log record, which is kind of analogous to a span; log processors, similar to span processors; and then there's the log data, included in a record, and a log exporter, very similar to a trace exporter.
B: So it has a very similar structure to what logging is. And so the question is: if we do have some sort of standardized way that we want to log in this project — I think there are two questions. Eventually, it'd be ideal, I think, if we could provide support and wrap whatever the signal is to go down the same pipeline, but you also kind of run into this issue of who watches the watcher, at that point.
B: So, if our SDK is logging using the logging signal, from a user's perspective, and then the SDK has an error, are the log messages going to arrive at the destination? So I think it's important that whatever solution we do come up with — which I'm still trying to work on — needs to be extensible, not only to just the OpenTelemetry log signal.
B: We definitely need to not have it rely just on OpenTelemetry logging, and I think, at the same time, it may be beneficial to have it so that you could plumb it in with the OpenTelemetry logging signal. But I don't know if that is necessarily a requirement. So I think it's still worth investigating; I think there are still some more design questions to ask here.
B: I was hoping to get a little bit more of a prototype up this week, but I ended up spending more time on the second topic, so we'll get there in just a second. But I want to pause on this agenda item.
F: The operator uses logr as well, and I think that works well there. I think that's probably the best kind of generic interface that I've seen for logging in Go, and if we want to go down that route — that path — it's probably the right way to go, rather than defining our own and asking people to implement it yet again for zap and zerolog and logrus and all of the other logging implementations out there.
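The shape of the approach being discussed — the SDK depending on a small structured-logging interface that any backend can implement once — can be sketched as below. This is NOT the real github.com/go-logr/logr API, just an illustration of the pattern; names here are hypothetical.

```go
package main

import "fmt"

// LogSink is a minimal logr-style structured logging interface:
// any backend (zap, zerolog, logrus, ...) can adapt to it once.
type LogSink interface {
	Info(msg string, keysAndValues ...interface{})
	Error(err error, msg string, keysAndValues ...interface{})
}

// stdoutSink is a trivial backend for demonstration.
type stdoutSink struct{}

func (stdoutSink) Info(msg string, kv ...interface{}) {
	fmt.Println("INFO", msg, kv)
}
func (stdoutSink) Error(err error, msg string, kv ...interface{}) {
	fmt.Println("ERROR", msg, "err:", err, kv)
}

// The SDK would hold a LogSink, never a concrete logger, so users
// plug in whichever logging implementation they already use and
// SDK self-diagnostics stay out of the telemetry pipeline itself
// (the "who watches the watcher" concern above).
func reportDropped(log LogSink, dropped int) {
	log.Info("spans dropped during export", "count", dropped)
}

func main() {
	reportDropped(stdoutSink{}, 3)
}
```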
B: Okay, cool. I think that kind of motivates me to try to get a logging prototype out next week, although that might be tough — I've got a few other things on my plate — but I think that's going to be my main goal for the open-source work. On top of that, I think we can jump into this next issue, about the gRPC client connections used in our retry logic.
B: So again, last week, one of the things I wanted to work on — and I have, in the past week — is this flaky test that just plagues us endlessly in our CI system. One of the things that I've noticed — and I think it was proposed in a few different places — is that our retry logic in the OTLP exporter for gRPC is external to the retry logic that already exists in the gRPC ClientConn type itself, which means that we essentially have duplicated work and, honestly, it's probably not as sophisticated as what gRPC natively offers.
B: So the idea that I put forward in the issue that I just linked from is that we just use the gRPC client connection's retry logic for any sort of connection interrupts, and I put together this proof of concept, which I think is probably worth going over a little bit here. For those that aren't aware, maybe just a little bit of background: the OTLP exporter is split into the trace components for our client, for gRPC and for HTTP.
B: This specifically talks about the tracing client — sorry, for gRPC, not HTTP — and eventually it would also be copied over to the metrics gRPC client; I think a very similar client proposal applies there. But the concept really comes down to this idea that we already accept client connections from the user.
B: We already create client connections internally, and so this restructures things — although it's pretty hard to see in this diff, so maybe looking at this code directly might be a little more helpful, which is kind of what I wanted to do last time. It just keeps track of the client connection and wraps it in some sort of a new trace service client, and that's it.
B: For Stop, essentially, there's a lot of locking to make sure that, if something's exporting concurrently, it's not going to drop it within the timeout period that's provided by the client. But other than that...
B
It's pretty straightforward in saying that, unless that's called, the client connection is up to the user to make sure it's handled, and the client connection itself from gRPC is guaranteed to handle export interrupts. So it essentially just uses our retry function, which handles the specific responses from the collector that are retriable; other than that, it calls out to the underlying gRPC library to actually handle any sort of connection interrupts and to re-establish itself. And with that said, you know, there are tests.
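The retry function's check for retriable collector responses has roughly this shape. The numeric values mirror the gRPC status codes, and the set shown follows the OTLP specification's list of retryable gRPC codes; the exporter's own list may differ.

```go
package main

import "fmt"

// gRPC status code values, mirrored here so the sketch is self-contained
// (they match google.golang.org/grpc/codes).
const (
	codeCanceled          = 1
	codeInvalidArgument   = 3
	codeDeadlineExceeded  = 4
	codeResourceExhausted = 8
	codeAborted           = 10
	codeOutOfRange        = 11
	codeUnavailable       = 14
	codeDataLoss          = 15
)

// retryable reports whether a collector response status should be retried,
// per the OTLP spec's retryable gRPC codes.
func retryable(code int) bool {
	switch code {
	case codeCanceled, codeDeadlineExceeded, codeResourceExhausted,
		codeAborted, codeOutOfRange, codeUnavailable, codeDataLoss:
		return true
	}
	return false
}

func main() {
	fmt.Println(retryable(codeUnavailable))     // true
	fmt.Println(retryable(codeInvalidArgument)) // false
}
```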
B
There is a replacement test included in this change set. Wow, this is really hard to read. It essentially looks for this reconnect, getting rid of the test that is really a problem for us; actually there are a few of them, but they're all essentially doing the same thing. And it does validate that the underlying implementation does handle this.
B
I don't know if this test is going to be useful in the long run, given the fact that it's just testing the underlying behavior of gRPC itself, but I included it here just to show that this is valid. So maybe let's jump into this file itself.
B
Yeah, here we go. What we're doing here is kind of boilerplate to, you know, cancel when the test cancels, running a mock collector. And this is essentially what the client will actually be doing in the end: creating some sort of gRPC connection themselves. I'm creating it directly here and passing it to the client explicitly so I can look at the state, but there's nothing prohibiting the user from not specifying their own connection and having the client create its own.
B
It should operate the same; it's just that they don't have access to the internal state. What this is then doing is validating that it actually is able to send on that connection, and then once it does, it drops the connection by killing the collector. Then, you know, it validates that there's actually an error there, that the collector has stopped and it can't send anymore, and that the connection itself (this is where I needed the internal state) is getting a transient failure. From there it starts up a mock collector again at the same endpoint, essentially just bringing that back up, and it waits for the connection to re-establish itself. That's what this eventually-check down here is actually doing: it just keeps trying to export spans, and the connection does come back up. So this is kind of just a proof of concept to show that it actually is doing this.
B
I've also run manual tests when looking at this bug, where I ran a collector, specifically in a Docker image, turned it off for 40 minutes, and turned it back on, and the whole time the client was waiting to reconnect. Once the collector came back up there was some reconnect cycle, so it took about two minutes, I think, for that one, but eventually the backoff period was exceeded and it reconnected and started to send spans again. So I have been able to validate
B
that it actually is working, which just kind of confuses me as to why we ever built our retry logic in here. I'm wondering if maybe that predated this in the gRPC library. I know at one point we were using gogo proto, so maybe that was why, because they don't have something there, but I'm not exactly sure why we had retry logic there.
B
Yeah, I think, if that's true, that might be where it came from, and that might have predated the retry logic as well, so I'm not exactly sure. Don't forget that gRPC has been a moving target; it's not clear that they had the interface that we're depending on today.
B
Then that's a really good point, yeah. So that's probably the best guess then, I think: to say that when we initially offered this, it did not actually exist. It's probably a good guess, but yeah. So maybe I'll pause there. I don't know if I've overloaded people with this description, but I think it's worth pursuing, and maybe putting up, not a proof of concept, but an actual change set to rip out that logic at this point.
B
Got a thumbs up from Josh, more thumbs up, a heart, and a happy face. Yeah, as someone who is trying to merge code and is continually having to reset the CI system, I am very motivated to not have this entire system anymore. So yeah, I think that's probably my higher priority for the next week: to try to do that.
B
Well, cool! I don't have anything else on the agenda, so let me stop sharing my screen and I'll just pause to see if anybody has something they want to talk about that isn't on the agenda.
A
It's something I want to do when I'm developing, maybe having control over debug level versus info level, but usually when I start logging in production it only goes badly and costs me a lot. And so whenever our users are asking, you know, how should we log this (and there have been questions about the Prometheus receiver and the collector this week: how should we log this?)
A
How should we log this? And my answer is: don't. Can we have metrics? And I want to know... the thing that Lightstep users are missing from our legacy tracers today is that we try (and I don't say we do a great job, but we try) to count dropped spans. And then I have some experience using this Prometheus sidecar I worked on a year ago: there are lots of ways you can fail to write an OTLP point.
A
It doesn't use the same exporter, of course, as the otel-go SDK, but, like, in the metrics SDK, how would you report failed points? It's not easy, but having some standardized ways to debug connections and dropped spans would be awesome, better than logging, in my opinion. Or for logging, I mean, I have this: do I log every... I have a mechanism to prevent logging the same statement more than once per minute.
A
That's been a great help for us in making it feasible to log in production, but I would need OTLP to start having... like, I would need to start sending logs over OTLP, and at that point a bunch of other questions come up that I've seen. Yeah, this is a tough question: meta-telemetry. And you said "who watches the watcher" earlier, and the same question has come up for us.
A
How do I know that the logs I'm seeing are by the client, about the client, rather than by the client from the user? You know, it's like two classes of data now, and there are no conventions in OpenTelemetry to do something there. I have ideas, but I don't want to talk anymore.
B
Yeah, I know Ted is also very interested in this as well, probably for very similar reasons, but I think that you are correct in identifying that logging is a contentious issue, I think because it has a lot of history and there's a lot of time people have spent thinking about it. And so I think that you're right: I think there's a class of situations, particularly where you have aberrant behavior in the pipeline, in some sort of signal pipeline,
B
that is not exposable. So, like, if you wanted to expose it as metrics or spans, that's, I think, probably preferable if you could do that. But if, you know, your pipeline itself is just not going to be able to do that, how do you go into the state of that system and find out? And honestly, like,
B
even then, I don't know if logging is going to be your solution. I think it's a band-aid, maybe, because eventually you're going to have to get into, you know, profiling or something, because you're going to get into weird parts of the code base that maybe you don't have logs in. So there's a whole host of questions there about how to actually troubleshoot.
B
But I do think that, especially for bug reports or something like that, being able to export the state of the system in some form, whether it's in a log or in some sort of file, is useful; I think it's ideal.
B
I'm not talking about, like, Erlang, where you just get a core dump, but something, something less, you know, verbose than that.
A
Yeah, I'm thinking of cases like: a counter Add with a negative increment, and that's illegal; or a non-number is passed to the metrics API; or a sampler API gets a failure to update trace state. I was just dealing with the prototype for the exponential histogram, or that probability sampling: what do I do with an error in the middle of a sampler call? The answer is: call otel.Handle. Well.
A
That could go very badly, and I want to make sure that that gets metric'd and counted, and not spamming the user's logs. And you can see all kinds of ways that it'll go badly if otel.Handle is able to just print a log statement.
B
Yeah, I agree. I also think that otel.Handle right now is extremely limited, because you can give it errors, but, like, that's it. And in a form it is some form of logging, right, because you could always plug your logging system into otel.Handle and build a wrapper around that. The problem then becomes: well, what happens if you need to, like, find out what the path through the metrics pipeline is?
B
If you want to see that, like, how do you go through that? Well, you either have to get some sort of debugger going in the Go ecosystem, or maybe, you know, we could have some sort of debug-level logging; that might be helpful in that situation.
A
One thing I've done with some success in the past is to have, well, essentially two phases of logging. During setup you're in a mode where you just don't know if it's going to work, and the user who typed the command is very likely the one who's going to end up diagnosing it, so up until success, like a self-check, a self-test, even up until the self-test passes, I'm going to be pretty verbose, because chances are something's misconfigured.
A
Once you pass the self-test, you've proven that you have a good configuration, and you start logging a whole lot less, because at that point in time it's some engineer on call at the customer or vendor that you're sending data to whose job it is to monitor that. Essentially, we should be able to see when you start dropping spans; it's not a misconfiguration after that initial setup.
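The two-phase verbosity idea can be sketched like this: verbose until a startup self-test passes, quiet afterwards. All names here are ours; this is not an SDK API.

```go
package main

import "fmt"

// phasedLogger is verbose during setup and quiet once a self-test passes.
type phasedLogger struct {
	selfTestPassed bool
}

// Debugf prints only during the setup phase; it reports whether the line
// was emitted.
func (l *phasedLogger) Debugf(format string, args ...interface{}) bool {
	if l.selfTestPassed {
		return false // configuration proven good; stop being chatty
	}
	fmt.Printf("DEBUG: "+format+"\n", args...)
	return true
}

// MarkSelfTestPassed flips the logger into its quiet, post-setup phase.
func (l *phasedLogger) MarkSelfTestPassed() { l.selfTestPassed = true }

func main() {
	l := &phasedLogger{}
	fmt.Println(l.Debugf("dialing collector at %s", "localhost:4317")) // true
	l.MarkSelfTestPassed()
	fmt.Println(l.Debugf("exporting batch of %d spans", 128)) // false
}
```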
B
Agreed, yeah. I think that there's again this kind of, yeah, and this is, like Josh was already saying, a problem for all of OpenTelemetry: having this behavior across SDKs, I think, is really important, but we don't really have any way to log anything right now. And I know that we've also had users come back to us and ask, like, how do I debug this situation? And I think our response of, like, "put the, you know, standardized exporter in there" works.
B
Yeah, right. Okay, we have four minutes left. I don't know if that's enough time to talk about user stories, but does anybody else have anything they've worked on this past week or two, maybe even a KubeCon timeline, cool things you saw or have been doing yourselves?
B
Okay, well, hopefully we get some more time to work on some cool things during the holidays, which are coming up. Awesome. Well, thanks, everyone, for joining. I think that's it for our agenda today. If you have some time, please take a look at Josh's slides and maybe put some responses in; Josh is going to have some more issues up and we can get some more feedback on that. But otherwise we'll see you all virtually, or next week at the meeting. Bye, thanks all.