From YouTube: 2021-04-27 meeting
C: Okay, so yeah, our release date is April 30th of 2021, which is just three days from now.
C: So when we get to blocking issues, effectively, I think we want to hammer on these a little bit. I know for a fact Josh MacDonald said he's going to be listening but not able to talk until 9:15, so I want to avoid talking about, well, possibly all of these until then, but I'll give you a quick status update. There is a pull request for the start time requirement that I think Josh put out — was it last week? And on rebuilding deltas from cumulatives there's a PR out that I put the status on. This is actually — if we want to have a discussion here, we can. Let me pull that up.
C: Okay, so Josh actually marked this as resolved, so this might be resolved, but this was actually around late-arriving packets and what we do — and basically the current proposed algorithm does not account for late-arriving packets. So...
F: Yeah, so I think — I haven't, I didn't read it just now — I think the gist of it is that there was a discussion previously in the Prometheus working group asking about, you know, delayed packets, and I think Josh is just saying that we can handle that, probably, in the future if we need to.
C: Yes, yes. What I'm saying, though, is there's an outlined algorithm that's meant to be simple, and it doesn't account for late-arriving packets in any way.
C: Yeah, yeah, that's fine. I guess my question is: for delta to cumulative, we post a simple algorithm people can use to do delta-to-cumulative right now. If you get a late-arriving packet, what happens is you get a reset on the late-arriving packet, where it basically resets the delta-to-cumulative, the way it's defined.
C: In fact, you'll get two resets. You'll get the packet after the late-arriving packet, right, and that will reset the delta-to-cumulative, because you have a time gap, so you can't account for what happened in that particular bit of time, so you have to actually reset. And then, when the late-arriving data comes in, you have a choice of...
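(Editor's note: a minimal Go sketch of the reset-on-gap behavior being described, added for clarity; the types, field names, and example timestamps are illustrative assumptions, not the spec's or the collector's actual code.)

```go
// Sketch of delta-to-cumulative conversion that resets whenever an incoming
// delta does not start exactly where the previous one ended.
package main

import "fmt"

// DeltaPoint is an illustrative delta data point covering [Start, End).
type DeltaPoint struct {
	Start, End int64 // assumed: unix seconds, for readability
	Value      float64
}

// Cumulative accumulates contiguous deltas into a cumulative sum.
type Cumulative struct {
	started    bool
	Start, End int64
	Sum        float64
}

// Add folds a delta into the cumulative. A point that does not line up with
// the previous end time (for example, the point after a late or missing
// packet) forces a reset: the cumulative restarts from this point.
func (c *Cumulative) Add(p DeltaPoint) (reset bool) {
	if !c.started || p.Start != c.End {
		*c = Cumulative{started: true, Start: p.Start, End: p.End, Sum: p.Value}
		return true
	}
	c.End = p.End
	c.Sum += p.Value
	return false
}

func main() {
	var c Cumulative
	deltas := []DeltaPoint{
		{Start: 0, End: 60, Value: 5},    // first interval
		{Start: 60, End: 120, Value: 3},  // contiguous: accumulates
		{Start: 180, End: 240, Value: 2}, // the 120-180 packet is late: reset #1
		{Start: 120, End: 180, Value: 7}, // the late packet arrives: reset #2
	}
	for _, d := range deltas {
		reset := c.Add(d)
		fmt.Printf("sum=%v reset=%v\n", c.Sum, reset)
	}
}
```

Run over these four example deltas, the sketch reports the two resets mentioned above: one for the point that follows the gap, and a second when the late packet finally arrives.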
C: So what we can do is: we can say this algorithm doesn't handle late-arriving data at all, we're going to define that later, and we can merge the PR as is and handle late-arriving data then; we could explicitly say in the spec that this doesn't handle late-arriving data, which is what I had proposed I would do; or we can actually specify how to handle it — you know, like, wait, wait on this until we specify how to handle late-arriving data.
G: I would prefer the third option, because at least we should have a proposed solution there.
C: Yeah, yeah — so we're clear, the current algorithm has a solution for late-arriving data, but that solution is basically to create a counter reset when you have late-arriving data.
C: The current proposed algorithm is for reconstructing deltas into cumulatives, like in memory in, you know, a collector or some kind of streaming service, right, and I think we could specify a way where late-arriving data is okay and make that work.
C: But I think it might take a little bit more time to kind of hash that out. So the real question is: is this a protocol-blocking thing, or is the algorithm as specified enough that we're comfortable we have the right pieces of data to eventually define a better algorithm for late-arriving data than what's currently proposed?
G: I don't think that the current solution is adequate, and I do think that we should actually have at least a proposed way of handling, you know, late-arriving data without data loss.
G: Could you open up an issue, Josh, in the Prometheus working group, if that's helpful, to support that, and identify what that design, or what the proposed solution, would look like?
H: I may chime in here a little bit, since I've thought about a problem like that before for quite a while. Essentially, having a consistent sum maintained over a stream of data, which is the problem we are talking about right here — this is a famously hard stream-processing problem that Apache Flink is, I think, the first system to have solved consistently. There's a lot of literature about event time versus processing time. There are operational concerns: how much backlog do you actually keep in the system, where do you keep that state, what are the limits? So, from my perspective, it's a hard problem, and you will not find an algorithm that will convincingly solve it, since it's also an operational problem.

H: Failover scenarios, for example — when your collector restarts, how are you reusing those sums in any case? So there will always be a lot of edge cases, a lot of problems that you will not convincingly solve unless you really make a big engineering effort to address that problem. So yeah, I'm not sure if this is in scope for the specification here — my personal feeling — but I'm just joining a little bit from the sidelines here.
G: I mean, I agree with that, Heinrich, and what I would say is that we can certainly say that some of the control-plane dependencies are, you know, again, supposed to be addressed downstream, but I still would say that for data loss per se there should be a recovery mechanism of some sort that's handled in the data model, right — that is, even if we say that it's logged or whatever. Again, I'd like...
H: ...to know that there was a data-loss event? Yes. But I would also see just the contract with the vendor — as I read before, there was a contract saying if your late-arriving data is more than five minutes, or more than 15 minutes, late, then it might get dropped. So I would also see something like that as a viable stance for the product, yeah.
C: That's kind of what the current approach is. If we assume — so again, there are a bunch of assumptions there — but if we assume a collection interval of, like, one minute or five minutes, the likelihood of late-arriving data is low enough that I think a reset is not the end of the world in terms of data loss there.
C: If we're talking about my getting deltas where I'm getting packets every minute — sorry, packets every second, like a statsd use case, or, like, every millisecond or so, right — that's a totally different use case. That's a totally different kind of conversion, and I think that's kind of the way this is designed: that's not actually handled in the algorithm specified. The algorithm specified is for when we have what we consider OpenTelemetry-looking deltas with a collection interval around one minute.
C: This gives you a decent bound on memory consumption on the server, with an accurate cumulative sum coming out the door, and I agree with you that this is a really, really, really freaking hard problem. We can go super, super deep with a really complex thing, or — the goal of this algorithm is just to give people kind of an outline, people who may be new to the space and need something reasonably decent, with relatively okay performance, to pick up and use. It's not meant to fully exhaust, you know, the whole problem, and I think, okay, with that in mind, I think in the specification...
I: Sorry — I've been listening the whole time and just got on the call. I feel like this is definitely a scoping question; it's probably something we can add later. What we're talking about — the phrase that I've heard used is bi-temporal modeling, like we have two time dimensions once you start to talk about late-arriving data — and my belief, or goal, always, with the OTLP protocol here for metrics was just that we're handling one of those dimensions right now, and we're giving you an algorithm in one time dimension for how to turn deltas into cumulatives.
I: The way Prometheus uses external labels for spatial replication, we can use external labels for temporal replication, and late-arriving data is exactly that — you can imagine a late-arriving copy of your data just being another replica forward in time — so we can follow the Prometheus example on external labels and solve this later.
I: I want to call out that there's an interesting Hacker News thread talking about the difference between eventual consistency and internal consistency in databases, and this is exactly what we're talking about now. The ability to reject data and say "I did not accept this data" is very important in those scenarios, and that's what Heinrich was calling for. I think what OTel should actually be doing to address this problem is developing conventions for signaling lost data, in a way where the SDK should be saying: I dropped some data.
C: Sorry — I know with statsd it's almost impossible to know that you lost data, especially if you're using, like, UDP. But anyway, point being, I think there's a potential bug to be opened for how we signal lost data, based on this discussion, right?
G: For sure. I mean, I agree with both of you Joshes; it's just that I think, you know, data loss can be handled in different ways downstream, but still, the notification that there was data loss — today it's completely silent, right?
I: I think that eventually there will be a push-based solution for metrics that addresses late-arriving data, and we might allow data to arrive for five seconds, five minutes, or 15 minutes, as was said — and I think that means we're going to have to re-report data, and that means we'll have another time dimension. I think it would be nice to use an external label, like virtual timestamps. So any time you have a single-writer output stream, you can add an external label to say which version of that output stream you're writing, much like Prometheus does with the spatial labels, the replicas. We'll say: I'm writing the first copy of this data, I'm writing the second copy of this data, I'm writing a third copy of this data — and then the consumer of the data can handle internal consistency using those records. That's eventually what we're going to get to.

I: But this is a solution designed to give push performance, or push semantics, to SDKs that choose to push their metrics data. For the Prometheus working group it's sort of a non-issue — there's no push-based solution — and Prometheus inserts a stale marker when there's no data, which is how it handles missing data.
C: Stale markers — I think I agree with that. I want to call time shenanigans right now, because we're, you know, 20 minutes in, so let's see if we have consensus on action items. So: we need a way to signal data loss, so we're going to open an issue. My question is — my personal feeling is that we can actually do this in a non-breaking way on the protocol.
I: I can, and I propose that we model that as another case of reporting metrics: you're going to report metrics about yourself, about dropped data, and it happens both from the SDK — like, the server just told me it couldn't accept this data, so I, the SDK, have to report the dropped data — and there are also cases where the server gets a bunch of data accepted but then looks at it and it has violations of its own data model, and the server returns...
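(Editor's note: a hypothetical Go sketch of the "report metrics about your own dropped data" idea discussed here; the type and method names, and any metric name implied, are illustrative only and not an agreed OpenTelemetry convention.)

```go
// Sketch of an SDK-side counter of dropped points, itself exported as a
// regular metric so downstream consumers learn about loss from the data.
package telemetry

import "sync/atomic"

// DropCounter tracks points the SDK (or a server) had to discard.
type DropCounter struct {
	dropped int64
}

// RecordDrop is called when an export is rejected (the server said it could
// not accept the data) or when a server discards points that violate its
// own data model.
func (d *DropCounter) RecordDrop(points int64) {
	atomic.AddInt64(&d.dropped, points)
}

// Snapshot returns the value that would be exported as an ordinary counter
// (under some agreed, to-be-defined name) alongside the rest of the stream.
func (d *DropCounter) Snapshot() int64 {
	return atomic.LoadInt64(&d.dropped)
}
```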
C: So it's something that we want to resolve. I guess what I'm suggesting is that I want to try to model it in a way that doesn't change the protocol.
I: I've prototyped some of this, and I can talk about what we've done, but we don't have time right here. I can put it in the ticket.
G: So, I mean, that's a good point, Josh. Again — so, Josh S., how do we make sure that's something we cover, you know, in the near term, not the long term?
C: So one of the things that I want to work on after this April deadline is getting a series of work streams that are a little bit more refined, with a little bit better notion of priority, and then we as a community can discuss what we think our next-highest-priority item is. I — implicitly, and I'm super biased — think that finishing the Prometheus working group items, and passing issues back and forth between this SIG and that SIG, is going to be the next thing that we have to do.

C: Yeah — yes, once we have this stable requirement out, I want to do a kind of, what do you call it, project-management run-through: taking the bugs, prioritizing them, shuffling them around. We should probably do it in the open. Yeah.

C: The TL;DR is: when you get an out-of-order point — so when you get a point that doesn't align with the previous point, you...
I: After April 30th, my priority, then, would be to specify and fill in all the details of an SDK pushing OTLP that gets processed and then exported as Prometheus remote write — so we're trying to push to Prometheus in a fully compatible way, and that does mean we're going to have to talk about some of the semantics. But I still don't think we need to talk about deltas to cumulatives at that point. So this is certainly important, but...
C: Okay, awesome, cool. So, now that Josh is here: is there anything we need to discuss around the start time requirement PR?
I: I don't know. What I wanted to say is that there's a companion PR now in the spec which talks about some of the same stuff, and really I changed my thinking a little bit as I wrote it, trying to think about how to detect overlaps for gauges, and I realized that that area has not been fully discussed in the open. If you think of this PR as mostly about handling start time for cumulatives, then it's definitely, I think, settled.

I: Overlapping gauges is a sort of corner case, but if we are going to talk about re-aggregation, we have to talk about all the incorrectness that can enter when you have overlapping data, or, you know, incorrect or misconfigured systems. So using start time to detect overlap for gauges is a little harder, or has options there.

I: I'd like people to look at it. And there's one more PR that I put up last night or yesterday, which is about temporal alignment; the goal that was assigned to me last week was to put up something about re-aggregation. The thing that I didn't put in that PR — I could put more in it; I'm kind of putting up rough drafts to get people's help and identify problem areas — but the conclusion I wanted to add, after thinking about it again, is that inside the SDK temporal alignment is done for you and overlap is done for you, which is why re-aggregation is very simple inside of an SDK. It's just much harder outside the SDK, and that's what we're trying to spell out here.
A: Let's pop that up here. Yeah, so I'm not even sure whether this is an SDK problem or a protocol problem, or fifty-fifty. One example: you have a queue, and people keep adding different items, and each item has a color, so it can be red or yellow or another color, and every time you receive a queue operation — like an enqueue or dequeue operation — you call the up-down counter, which is a synchronous operation, and the dimension you put there is the color of the item. But imagine this scenario: you have the application running a while and you don't have the metric system enabled, and at a certain point, all of a sudden, people start to say, "I want that." So through some dynamic configuration you enable it, but you missed all the previous points — you don't know how many yellow items have been seen so far. The only way you can start to report is to report the delta value. So how should we give this information to the collector, and how should the collector handle it? Like, can we tell the collector: this is what we believe so far?
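(Editor's note: a self-contained Go sketch of the scenario just described, added for clarity; it is not the real OpenTelemetry instrument API. It only illustrates why a synchronous up-down counter that is enabled partway through the process lifetime can report deltas but never the true absolute count per color.)

```go
// Toy stand-in for a synchronous up-down counter keyed by item color.
package queuemetrics

type Color string

type UpDownCounter struct {
	enabled bool
	deltas  map[Color]int64 // changes observed since metrics were enabled
}

// Enable turns the metric system on mid-run; everything seen before this
// point is simply unknown, so the counter can only ever report deltas.
func (c *UpDownCounter) Enable() {
	c.enabled = true
	c.deltas = map[Color]int64{}
}

// Add records +1 on enqueue and -1 on dequeue for the item's color.
func (c *UpDownCounter) Add(color Color, delta int64) {
	if !c.enabled {
		return // before enablement nothing is recorded at all
	}
	c.deltas[color] += delta
}
```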
I: Yeah, Reiley — similar things are happening, and I specced out something about unknown-start-time resets, because when you import data from Prometheus you end up in pretty much the same scenario. And I have, in the back of my mind — I've asked my back-end team for comments on this PR as well — because my proposal, the one that makes the Prometheus ecosystem happiest, is just to pass the data through and put up a new start time.

I: You're saying: this is what I have, and this is when it started — and you're saying it started at zero, but it's not the true zero. So it's a little different from the scenario that I gave, where I'm saying it's starting not at zero and I don't know when the true zero happened, but they're almost the same situation.

I: It seems like we could possibly use a bit of data somewhere in the metric data point to say: I'm a reset, and there's something unknown about me, or something faulty in this data. I thought about a few out-of-band ways to think about this as well, but I don't have any good ideas, roughly, other than to say there are points at which you just don't know.
C: There's also a cost to tracking — like, if you're trying to track confidence of signals all the way down, it gets pretty expensive. It's very cool — statistically it's a very fun problem, I'd love to work on it — but I think it's also ridiculously expensive: if you're trying to track unknowns, all possible unknowns, it explodes.

C: That said, if we have a very targeted problem of "this start time I'm not sure of," and we encode that by saying, like, zero means "I don't know when I started," right — I think that's a way, way, way better solution, one that's very practical and kind of efficient, as opposed to putting boolean flags on every single thing to say whether you're confident, and then being like, "oh, a boolean's not good enough, let's make it a double for how confident we are in this value." Right? Like, I don't want to go down this...
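(Editor's note: a sketch of the convention being floated here — a start timestamp of zero meaning "unknown start." The field names mirror OTLP's data-point fields, but this is illustrative, not spec text.)

```go
// Sketch: treat start_time_unix_nano == 0 as "true start unknown", so a
// consumer handles the point as a reset with an unknown origin instead of
// trusting the zero as a real timestamp.
package otlpsketch

type NumberDataPoint struct {
	StartTimeUnixNano uint64
	TimeUnixNano      uint64
	Value             float64
}

// IsUnknownStart reports whether the point declares its true start unknown.
func IsUnknownStart(p NumberDataPoint) bool {
	return p.StartTimeUnixNano == 0
}
```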
F: Yeah, so a question, then: is there any opportunity to, quote, combine two metrics, or — please stop me if, you know, I'm opening up old wounds — but this kind of calls for, like, you know, a set operator. So the concept here is that we only have an up-down counter with a given metric name. If somehow we were able to, using, let's say, an asynchronous up-down counter, report a set operation — would that help, whether that be the same instrument or a different instrument?
I: Victor, one thing — I mean, we talked about virtual queues last week, and the idea is: you're going to have to scan your entire queue to figure out the answer, or the current state, and we don't want you to do that, and you may not be able to do that. That's why you might have chosen the synchronous instrument in the first place.

I: So then, your other question — I feel that it's just a separate one. I apologize, I can't remember exactly what you said. I don't want to derail the meeting with a few of my ideas about this. It is a valid question, Victor.
H: And I just verified one of my assumptions here. I think this problem with the start time is more or less exclusively about write-side aggregation, when we are dropping labels on the collector, because I think what Josh had to say is that the problem is a lot easier when we are inside the SDK and do aggregations there, and then further downstream it's also not a concern. The only real issue I can see is counter resets that are overlapping.
I: How important is that in the future? I think it's important to the users of the OTel community; I don't know whether it's important to vendors and large-scale customers, because in the case of my system — the one that I work with, you know, like Lightstep — we'd prefer to do a lot of aggregation on the read path rather than on the write path.
H: That makes a lot of sense. My experience is very similar — usually I have not done write-side aggregation — but if Prometheus is the ecosystem where this is relevant, then yes, I understand why we're talking about that.
D: I'll give this one a plus-one, but I'm interested in the discussion on aggregation in the write path — this is what we do at New Relic — and I'd like to discuss that further.
I: So let's put that into this discussion about late-arriving data. And then there's a PR about re-aggregation which is not talking about late-arriving data, and clearly it needs to, or there's another dimension to that problem.
C: All right. So, yeah, I think that's important to call out. I'm going to call time shenanigans again, if that's acceptable: have we reached kind of a consensus on this particular PR and what to do with it? I guess one of my questions here, Josh, is: does this PR rely on the other two?
I: The change in the proto? No, it doesn't. I just — so, I want people to review it again if you've already approved it; I added one more change, later than the approvals I got, something about conventions.
C: Okay, all right. So I will take an AI that reviewers here, you know, re-review. And just so everyone knows, there are two more PRs to take a look at after that: one of them is around temporal alignment, and I forget what he called the other one.
I: Overlap resolution — overlapping, right. And finally the re-aggregation one, which essentially builds on the other two PRs.
C: Yes, that's the safe-label-removal one, yeah. Okay, cool. So this one, I think, is relatively recent, so I wasn't going to actually talk about it beyond just calling it out so that people can work on it. Let's talk about performance findings. So, to give everybody an update: Victor and I have been looking into this.
C: Yeah, so this was an issue reported where, basically, from OTLP 0.4 to head there were some major performance regressions, and so Victor and I have been investigating this. Let's see what the best result is.
F: Sure. So, from my perspective, you know, I was doing this primarily in C#, just to make sure that the protocol issues with the oneof don't, you know, carry over to other languages and so forth, and basically the results are pretty similar. In terms of C#, the oneof doesn't really affect anything; however, with the change from 0.4 to 0.8 we introduced two or three oneofs and a couple of extra message layers. That did affect some of the timing in terms of just, you know, encoding and decoding time per se, and, generally speaking, I think we're probably about eight percent slower on the new protocol, and the main issue is with gauges — encoding and decoding gauges — because that's where the majority of that particular oneof issue comes in.
F: Sorry — the extra layer of messaging does add a bit to the memory allocation per se. The worst-case scenario, I think — I have a result, I don't remember the number off the top of my head. If you scroll up or down — scroll up a tad; I think at the top I had some summary information — there you go. So, computation-wise overall, just between 0.4 and 0.8, it was about a six percent difference. By computation I mean CPU — you know, the time to do the encoding and decoding.
F: There is one outlier, which is decoding the gauge — that was slower by 16 percent. Now, I've gone back to look at some of the results, so I do have new numbers that I haven't posted yet, and the numbers are a little bit closer, but the general trend is still relatively the same. Memory allocation is more or less a wash, with some cases where memory allocation may be...
F: In this test, I think I forgot one of the test results here — I had missed an allocation — so that's why that latter statement says in some cases allocations were better, 20 percent smaller. But just in general, memory isn't as big of an issue; overall, with the introduction of the oneof and the multiple layers, I think we're off by about six percent — we're a little bit slower, by six percent or so.
C: Yeah, this aligns with what I would expect from general protocol buffer usage. Cool. Do you mind if I take over and start talking about some experiments?

C: Yep — oh, go ahead. Yeah, so we ran two experiments, then, to get a feel for where the slowdowns are coming from and the different ways that we can try to resolve them. So, experiment number one was: we basically just stripped out the oneofs.
C: Theoretically, there would eventually be, like, an enum here that would have a type, where you'd be able to select what kind of metric you have, but literally everywhere in the proto we just removed the `oneof` keyword and the brackets and flattened everything, just to see what happens. For the next one, we actually went back and did an evaluation.
C: You can see it above — of every single protocol version release, from 0.4, 0.5, 0.6, 0.7, 0.8, all the way up to head — and I noticed there was a drop between 0.4 and 0.5, and from 0.5 to 0.6. So I looked at what that was, and effectively what we had done was move from having a metric that has points on it...
C: Okay, wow, this did not — I should have made sure the syntax highlighting worked; sorry, my bad. Anyway, we had moved from a metric that had points on it to a metric that has an actual semantic message underneath it, and in case you're not familiar with protocol buffer performance: every time you add a nested message you're taking a small hit, no matter what. It's usually not significant — it's on the order of what Victor showed, like, you know, a percent or two — but you take a little hit.
C: So maybe it's bigger in Go — let's remove it and see what happens. So experiment number two is: we flattened out metrics, so you have a single message called Metric and it has a type on it — this is basically what the 0.4 protocol was — and then underneath that we have repeated point fields for the different kinds of points, and I collapsed is_monotonic and aggregation_temporality into optional fields on every metric, as opposed to just on some of the metrics.
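(Editor's note: a rough sketch of the "experiment two" shape, written as the Go structs such a flattened proto might generate — one Metric message carrying a type enum, hoisted is_monotonic and aggregation_temporality fields, and repeated point fields. Names and fields are illustrative, not the actual OTLP definition.)

```go
// Flattened metric shape used in the experiment: no oneof wrapper messages.
package flatproto

type MetricType int32

const (
	MetricTypeGauge MetricType = iota
	MetricTypeSum
	MetricTypeHistogram
	MetricTypeSummary
)

type Metric struct {
	Name                   string
	Description            string
	Unit                   string
	Type                   MetricType // replaces the oneof "data" wrapper
	IsMonotonic            bool       // only meaningful for sums
	AggregationTemporality int32      // only meaningful for sums/histograms
	NumberPoints           []NumberDataPoint
	HistogramPoints        []HistogramDataPoint
}

type NumberDataPoint struct {
	StartTimeUnixNano, TimeUnixNano uint64
	Value                           float64
}

type HistogramDataPoint struct {
	StartTimeUnixNano, TimeUnixNano uint64
	Count                           uint64
	Sum                             float64
	BucketCounts                    []uint64
	ExplicitBounds                  []float64
}
```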
C: So that was experiment number two. Then the results — and I want to call out that I just ran, right before the meeting, a jitter test where I run this, like, eight to ten times and check the jitter, because I learned about benchstat a little too late to do this well. There's a good bit of jitter in some of these results; however, what you're seeing reflects reality as closely as I could get it.
C: I did not put the jitter numbers on, though, because they're really expensive to calculate, but I've been able to duplicate these results pretty well. What we see, effectively, is: when you see this 43, okay, give yourself about a 20 percent buffer on the differential, depending on jitter, but for bytes and allocations these are exactly the numbers — I've been able to consistently replicate them. So, effectively, from 0.4 to 0.8 you notice we take a really big hit, and primarily this wasn't —
C: This was moving those metric types into that oneof — that was where a lot of this came from. This second hit here is actually the cost of moving from labels to attributes.
C: So my assumption is: labels-to-attributes is a non-starter, and it's a consistent problem across the whole OpenTelemetry protocol that we can investigate as a global thing, but we're going to target specific metric encodings that maybe could work. When you look at getting rid of the oneof, what we were able to do is recover some of the performance: in terms of nanoseconds per operation, right, it's only a 50 percent drop instead of an 85 percent drop, and again, when you look at the statistical average, it's between 20 and 50 percent slower — or, sorry, 20 to 70 percent slower — depending on which run you look at. The bytes per operation...
C: ...show a little bit of a bump, and we kind of expect this, because the new protocol actually encodes more information than the previous one, than 0.4 — we actually have more data types in here — so yes, there are a few more bytes; that's kind of expected. And, as always, for encoding the number of allocations is two, because of the way these byte strings are generated: Go does one giant allocation and then a second one. Yay. So, decoding is a different story.
C: You can see decoding, from a CPU standpoint, actually had less of an overhead in general — it was only from head that we actually saw a statistically meaningful number — and when we got rid of the oneofs it's actually more expensive to decode, and a good bit of that is that creating these flat structures means the allocations on decoding are insane in Go.
C: We actually put a lot of memory pressure on Go to then gain back some performance, although I guess you don't see the performance gain here much. So anyway — this number can swing by about 20 percent, just so you know, but it's relatively accurate — so effectively, on decode it's really, really bad, and on encode you get a little bit of a performance boost.
C: The flat metrics, though, are kind of a different story: we actually saw a pretty decent return on investment in terms of the actual CPU cost of encoding and the CPU cost of decoding. What we do see as well, with allocations, is a drop down to around where we are with head. It's not much better, and partly that's because we're not really getting rid of a ton of allocations here.
C: You know, we're still instantiating what is conceptually the metric oneof when we decode, so we don't get a huge gain, but we do get a gain in terms of CPU throughput, and the overall byte usage is about the same as current head — that's for decoding. For encoding it's a little bit better than head, or similar, I should say; the CPU is significantly better and the memory usage is about the same. Okay. So, in terms of thoughts:
C: Basically, what I see here is that the label-to-attribute shift gave us about a 10 percent performance penalty that we're just eating. Hold on.
C: Okay. The other thing is that unwinding the metric-type oneof into an enum does lead to a decent performance boost overall when you do encode and decode together, but it doesn't really lead to much better memory usage, surprisingly.
E: Josh, can you try — can you use that sed command that I sent to you, the nullable-equals one? Gogo proto has this beauty where you can say that a specific message should not be treated as a pointer but as a normal message, and Go has the capability of having slices of messages instead of slices of pointers, which is the default in protobuf, and that saved us around five or six percent.
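(Editor's note: the annotation being described is gogo proto's `[(gogoproto.nullable) = false]` field option. The sketch below shows, in simplified generated-Go terms, the pointer-slice versus value-slice difference it makes; the struct and field names are illustrative, not the actual generated code.)

```go
// Simplified illustration of what the generated Go looks like with and
// without the nullable=false annotation on a repeated message field.
package generatedsketch

type KeyValue struct {
	Key   string
	Value string // the real attribute value type is richer; simplified here
}

// Default generation: repeated message fields become slices of pointers,
// so every element is a separate heap allocation.
type DataPointDefault struct {
	Attributes []*KeyValue
}

// With [(gogoproto.nullable) = false] on the field, gogo proto generates a
// slice of values instead: elements live inline in one backing array, which
// is where the roughly five-to-six-percent saving mentioned above comes from.
type DataPointNonNullable struct {
	Attributes []KeyValue
}
```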
E: That one, by the way — if you look in the collector, it is something that we call the proto patch. Let me send you the link to that one, because there are more tricks that we do; that's why we use gogo proto. So here it is — I sent you the link with all the patches that we do.
C: Okay, yeah. So, you know, this was done in gogo proto — all the code I used for this you can find here, if you want to take a look at it and tell me how bad my Go is, because I did it quickly. Yeah, so feel free to take a look and verify it; we're looking for results. But my thinking here, though, is that this should only show a differential around the key-value usage, right? So we use —
E: Attributes — attributes are repeated key-values, okay? Now, every label — if you don't put that annotation that I added for you there, which is gogoproto.nullable equals false —
C: Yeah, yeah, okay. My thinking, though, with these performance benchmarks is that I have not been carrying the key-value/attribute performance differential in my investigation notes. Basically, I'm considering that a change we're making no matter what, and optimizing it is something we already have from the key-value usage in the collector — so thank you for all that. I'm going to add that in and incorporate it in the benchmark, so we only see differentials from things that we expect to see in production, but I'm assuming the attribute key-value problem will get investigated and looked at kind of separately. I'm actually trying to understand that other slowdown — okay, from the...
E: One more thing, on the oneof — another thing that I am planning to do right now. Oneofs are super badly implemented in Go, and one of the things I want to do is actually apply your trick there. So, even though we have a oneof in the proto, we know how a oneof works in protobuf, which is essentially that it applies last-one-wins logic: the last one coming in on the bytes of the stream replaces the thing.

E: So what I'm trying to say is: because we generate our own protobufs and we decode our own stuff, we can actually do the trick that you have there and reduce the allocations by a lot, and I'm planning to do that. That being said, I think we can reduce those numbers significantly by doing our own custom stuff.
C: Awesome, awesome, yeah. If you have suggestions and help, I'm definitely going to take them — anything we've done for key-value I'd like to incorporate. Just so you know, what I'm really comparing here is head against these experiments, because head represents the current state of the protocol, and these experiments represent what we think we can get to with some optimization — and whether or not we want to make any last-minute breaking changes for a more performant protocol.
C: The cost was a decent number of allocations, which I'm kind of curious whether you're going to run into or not. But yeah — if you think you can move in that direction without having to change the existing definition of the protocol, that's amazing. Yeah, FYI.
E: So, for example, instead of moving the data points up one level, you can make the message that includes the data points carry that annotation, which is essentially moving things up but without changing the protocol. You'd put it here, is what I mean — so yeah, you put it before the semicolon: between the field number and the semicolon you put that square-bracket annotation, and then you get that to be a value struct. Value structs work as in C++ — they are embedded into the containing struct.
C: Yes, okay. And what I like the most is that we might be able to do some value-struct business without changing the protocol — like, it's literally just an annotation that only affects Go. Beautiful.
C: What I want to call out here is my thinking from what we've learned and from what we see with the protocol: I actually think some of the protocol slowdown that we've seen is because we're encoding more things into the protocol buffers, and when it comes to performance I'm hoping that targeting the Go language will lead to better improvements. If you look at what Victor showed, the relative degradation compared to the amount of semantics we've added is, I think, reasonable.

C: When you look at Go, it's kind of unreasonable, but, as Bogdan's pointing out, we think we can solve this in Go, and I want to continue pushing on this. But the question is, you know, what are the next steps we want to take by this Friday to say: we think the performance of the shape of the protocol is as good as it's going to be right now, and any other performance gains we're going to eke out in the language implementations.
E: Right. To be honest, I'm very confident that — I mean, I don't think we should worry about it. Based on the examples in C# and so on, I'm now more confident, and we all know that Go protobuf is crap and we can do a way, way better job than that. Seeing C# be, you know, reasonable — I expect the C# results to be reflective of Java and C++ and others.

E: So I would not be worried; I'm just going to say that I think we are good enough. If you want to play a bit with the value structs and stuff, gogo proto also has an extension for embedding things — I think gogo proto even does real embedding, not only using value structs but proper embedding if you really want — but that's more aggressive, and I don't think we should do that. I think just using value structs is good enough, because it's almost the same.
I: Okay — maybe I'm imagining some other hacks we've put in for performance to our protobuf. Could be, could be.
E: But also, one thing you need to know is that the gRPC — I mean, the generated code for gRPC — is the hundred-and-five-or-so lines that implement two interfaces. If you really want, you can have those implemented by yourself. But... all right.
C: So, because we only have four minutes: are there any action items on the OpenTelemetry performance bug where we think we want to change the protocol itself? Is anyone feeling strongly that we need to investigate more, or look at other alternatives to represent our data, to address performance? Like — you saw the C# numbers, you saw the Go numbers; we know we need to look into Go.

C: Do we need to change the protocol at all to adapt for performance? The things that Victor and I tried — again, you saw it — it's like a five to six percent degradation, where we are now versus where we used to be, but then again we've added all these semantics, and we couldn't find a way that would, kind of wholesale, be generally better in terms of representing the data.
F: Right. So I think, to Josh's point: semantically, you know, we know that protobuf has that oneof, which provides that mutual-exclusivity syntax, if you will. But if we wanted to — and I'm not suggesting we do, I'm just throwing it out there — if we went with a, quote, enum and then repeated fields, we could gain some of those benefits. Obviously it is not semantically pretty, but all languages should support it easily. So...
E: I would say that if performance really matters — for example in the collector, because we're going to pass way more protos around, move protos a lot — we're probably going to do some tricky tricks like that, to remove the oneof and manually apply the same logic as the oneof, but that's going to be in the collector, not in the protobuf itself.
E: Okay. But I'm curious: between 0.7 and 0.8, has anyone measured the thing? Because the most important thing for me to watch is the number oneof.
F: Yes. If you look in the OTel bug, I did post that about two weeks ago, where I'm comparing just the IntGauge to the Gauge, which is just the introduction of the oneof for the number piece — which I think is what your question is, right? And there, the encoding was about 11 percent slower and the decoding was about five percent slower.
F: That's if you use the oneof. Then I also tried it — and this is in Go, not in C#; this is in Go — where if you change the oneof to be, quote, an enum, we basically gain it back on the encoding side: instead of 14 percent slower we become just six percent slower using the enum pieces. But just overall, you know, introducing the number data point means encoding is about 11 percent slower and decoding is about five percent slower.
E: Encoding is okay — and is that C++, or C#?
E: Personally, I think the bug should stay open. I think we've got enough results to be confident to do the release at the end of the week, but the bug should stay open to continue the investigation and understanding and the tricks that we can do — and stuff for the collector, okay, for the collector. But I think it's ongoing work, and the entire performance work we can do is not done yet. But that's my point — that's my idea.
C: Okay. The one thing I wanted to make sure of, though, is: can we mark metrics OTLP as stable — what we have now — where all changes we make to it from now on will be non-breaking?
E: Do you consider the current start-time PR breaking? Because, for me, we don't have documentation on whether it's optional or required — what happens when you have it, why wouldn't it be there — so I'm not 100 percent sure, but from a semantics perspective I think it's a breaking change, because right now it doesn't say anything, which means it can be interpreted in any way, and then we add, with the start-time PR that Josh put together...
C: Yeah, yeah — the start-time PR is one of them, so the start-time PR needs to go in. Okay, yes. So, sorry — we have four PRs that were marked as breaking that need to go in: two are on the spec, two are on the proto. The start-time proto change from Josh is one that needs to get in there before we cut a release, and the performance-related changes were the second thing that could have affected the protocol, right.
C: So Josh's is a documentation-only thing that, yes, needs to go in, because we need to make sure people know the required-or-not-required aspect of start time. If there's anything else like that, we can, you know, do a pass on it. But what I want to know is: when we cut that next OTLP release, can we move metrics from beta to stable, saying, like, hey, this will not break, right? That's what I want. Okay — is everyone comfortable with that? Speak now or forever hate metrics.
C: Cool, cool. Next week we'll go through our bug list and our task queues, do some prioritizing, and see what's coming out of the working group. Sorry for running a bit long, but thank you, everybody, for your time. Let's follow up on the performance in the performance bug — I will mark it not blocking for release and describe all the details and stuff in the bug. So thank you, everybody.