From YouTube: 2020-12-04 meeting
A
Yes, I've been poking away with my dogfooding again and discovered a bug in my OpenTelemetry exporter for Honeycomb.
A
It's probably also present in my OpenCensus reporter, given it's the same bloody code.
A
I'm feeling the envy, mate, but it is first thing in the morning for me. Well, not first thing — second thing.
A
So, while we're just warming up, Tristan: I found a gap in the typing exports, traced it back to the original issue, and you said, "Oh, I think this has mostly been fixed," so I've just commented on there.
A
Did Brian say he was coming? No, he didn't say he's coming — I was just nagging him, so you could treat that as optional.
D
I think that was in an airport in Italy. I've just been to a barista training course, that's all. Yeah, that's my coffee cup, actually.
A
Right, excellent — just the irony, you know. Here's the remainder of my latte, which I made on my machine downstairs. It's a little bit late for me, that is.
C
I've updated the doc. Oh yeah, look at the doc, Garth. Sorry, there's two items, but really they're just one, except the—
A
Yeah, I mean, I'm less averse — excuse me — to renaming everything to otel, but it would certainly mean having to ship a new version of literally everything.
B
Oh yeah, I'd rather not rename everything, right, even if we technically can.
A
On the other hand, it's not necessarily a strong branding move. There's such confusion about OpenTelemetry, OpenMetrics, and OpenCensus that, you know, sort of going, "Okay, well, it's OpenTelemetry now, except we'll name all of the modules otel" just splits the namespace up more.
A
I drew metrics on my floor and lit candles.
F
I've been getting hit up — they assume that I have influence or knowledge of whoever is actually in charge of your decisions over there.
A
All right — the most I know about them is that, you know, Charity has kind of blessed them as literally conforming to the definition of observability, unlike a lot of the other vendors, who are kind of saying, "Well, if you strap our logs and our metrics and our tracing together, you can squint — it's kind of observability-ish." She begs to differ. But she's fine with Lightstep; I think she described them as being trace-first.
F
Your recall window — yeah, your recall window is basically limited by the memory size and the number of nodes that you have up. Okay, yeah — funnily, it does a roll-up of events and some traces and whatnot and ships that to the SaaS, right. But if you're within that window, you have a hundred percent of your traces available.
A
Limited time — just for a limited time. I keep having this thing where I really get how valuable dynamic sampling can be, but every time I get close to needing to do dynamic sampling, Honeycomb changes their plan and then I've got so much headroom.
A
We're on the most basic plan at the moment, and there's no way we're going to hit our limit on event count per 60 days — except when something breaks, like it did about a month back. We had something go absolutely... it was brilliant. And so Honeycomb sent us an email saying, "Hey, we've activated your burst protection."
A
"You're not going to get charged for the rest of the events today, but you should totally look at why there are that many." And I looked, and was horrified. Yeah, we did that.
A
That's why I like the Honeycomb thing, right. They give you three days a month, right, where if you hit — I think it's something like 10 of your monthly bill — they just say, "Okay, you've cocked up; we're going to give you the rest of the events for free, just for today. Get it sorted." And it's a lovely little bit of financial engineering, because with our logging vendor, every time Elasticsearch decides that it's going to go a bit spare...
A
We end up with a whole lot of the same traceback repeated over and over and over again, and then they just send us a bill.
A
And that's still not a solved problem in the Go world, as far as I can tell.
A
Oh yeah, I wanted to be able to increment counters and have them make it into the final event without having to catch exceptions, trap exits, and — I can't even remember what the other one is — to be able to make sure it happened.
A
You know, that was like the meeting before the meeting before last, but it was something along the lines of: I had some hacked-together baggage-like code in Python, which I was using to roll up counters.
A
As I converted images — we've got a funny situation where we can't predict the output count from the input count — I just accumulate how much time we spent reading, how many pages we read, how many pages we output, and so on; and then at the end of it, I just take that bag and spit it out with the rest of the event.
A
It occurred to me that it would be kind of nice to have the telemetry — sorry, the metrics — for that particular request automatically get rolled up.
A
You know, :telemetry — get it rolled up into what ends up in the trace. But I think I'm going to have to put that machinery together myself. That's fine.
B
Gotcha, yeah. I just found out that baggage is going to be able to be automatically associated with your metrics if you declare a view, so that's something that's coming. But yes, the main thing for today is simply: if you have some time, look at both the GitHub issue and the Google Doc, and if you have any opinions or thoughts, please let them be known on both. The versioning thing is still being worked at as a group, across all the SIGs, so it's not, you know, our decision.
B
...for Elixir. I should make the meeting tomorrow to make sure they don't go with something that doesn't work for us — but we can always say it doesn't work for us. Though I think our package management is pretty similar to the other languages'; we're not like Elm or something that has literal restrictions on publishing package versions.
B
So I don't think there'll be a problem there. But people have opinions, and even if your opinion is just that you don't think we should change module names at this time, please let me know — because on that one I'm going to go with what the majority of people want. So if people really want names changed to specific things, that's fine with me.
D
Can I say I don't really care, as long as it doesn't change all the time? Well...
B
Yeah, we have a lot of integrations too; it'd be a pain to have to go back and change all those. But if people feel really strongly about it, it'd be good to get it done now rather than later, so, yeah.
A
I mean, what's top of my mind is less about whether it should be "opentelemetry" or "otel". It's more the API/SDK separation, and that it'd just be nice to know for sure which modules are going to be present in prod.
A
If you know what I mean. Actually, given the way that we're shipping instrumentation libraries, it's kind of a non-issue, right. If we were still holding out hope that people would bake the OpenTelemetry API into what they were shipping, then we would want a pretty solid separation between what's present in the SDK — which they would have to have there for testing purposes at development time —
A
and what was API-only, and thus what they could rely on using in their customers' production environments, even if those customers weren't using OT. And that's a great vision. But I think we're kind of settling on everything being really arm's-length and hooked up by :telemetry most of the time.
B
We've been getting some good feedback, like from this guy lately, on docs and examples and things like that, so that's been really helpful. And yeah, if anybody has any time: I've opened a bunch of tickets for what we need to have done to declare ourselves GA 1.0 — some of them pretty simple, like supporting environment variables for setting configuration.
A
Yeah, on the documentation and examples side of things, I'm very big on my kind of executable-documentation thing, so yeah. I've got plenty of pasteable examples for the OT Honeycomb exporter, and I will probably do the same for my dogfooding repo. I was doing the same for the dogfooding, which is how I tripped over the Ecto problem.
A
Well — the Ecto integration's problem with OT Honeycomb. So yeah, I'll clean up all that, but I'll end up with plenty of stuff that people can paste at the interactive prompt to see what's going on and learn how the system works.
B
Yeah, what I probably need is some sort of — it could be under the examples repo — something that pulls in all of these different repos and runs them or tests them. Because changes get merged into the API or something, it never gets updated in opentelemetry_ecto or whatever, and it's broken and we don't know.
A
Are we going to try something along the lines of making sure that everybody's shipping something starting with 0.5, just to indicate broad compatibility across that? And then once we get everyone stable — some of them will be 0.5.0 and some of them will be 0.5.2, I'm sure, but yeah.
A
Yeah — so now that the API and the SDK are both on 0.5, I'll bump the Honeycomb one to that once I fix this problem.
B
Yeah, I opened a pull request just now, because someone noticed they couldn't pass a context to the Elixir API. It'd be great if people could look at that as well — especially the Elixir folks.
A
Yeah, well, as part of this dogfooding I'll totally spelunk in there, because I'd really like to bump across — there's a whole lot of, you know, layers accreted in the work project of basically applying OT semantics on top of OC.
D
And we solved our woes over the async stuff, by the way — by choosing not to use it.
D
At this point we actually can't see the point of the async instruments, the way that they are created. I can live around that, if you wish to hear.
D
For small items like that, it's good. But we've got large workflows with large map data going through them — lots of messages, tight loops, all the rest of it — and our old metrics implementation, pre-otel, for everything else did things in a "let's go and fetch some data" kind of way. That has the problem that if you're calling into a process, you're blocking that process, and that's generally not great, because you can't choose when it happens; and if you're doing it all at the same time, you're blocking all the processes.
D
At the same time, of course, you've got one callback — one observer — going to our workflow, across all our processors, going, "Please go and get these values, the last PTSs you've seen" — and that's going to hang your entire workflow and cause jitter on streams and things like that. We know that from experience.
D
But you still want a last value, and only a last value — you don't want a whole histogram. So the only way to do that is to push things into another piece of state somewhere else — I don't know, more ETS — and set up a callback to go read the ETS. At which point, from the workflow's point of view, everything is now synchronous, because it's pushing all the data; but OpenTelemetry is calling things from callbacks to go and get the data. It sounds brutally complicated.
D
I know — but that's what happens when you've got thousands of processes running, all doing stuff, and you want data from them, and all the rest of it.
B
No — so that's okay, yeah. I thought there's a last-value aggregator that should be the default for the up-down counter. I don't think you get the min/max histogram thing, but you can also configure it not to be the min/max histogram thing. So if the min/max is specified as the default, then you can configure it — but I also don't think that's supposed to be the default, so we might... yeah, we should look at that, please.
D
"Just drop all my values" — well, I was looking at the docs for this, around both the exporter commit itself and the conversations around the spec, and of course everyone's opinion — or the average opinion — is that with things like that you can't do just last-value with synchronous stuff, because it means you're losing data.
A
With you there. I had a distantly familiar situation where I wanted to be able to correlate the time it took to wade through the cache, looking for something, with the size of the cache. But measuring the size of the cache —
A
you know, by the time I'd stuffed a few hundred thousand things in there — I'm just using ETS — was taking four milliseconds, and that was distorting the original measurement quite significantly. And then one ends up with the problem of: well, how often am I going to measure this thing, and where am I going to put it?
A
So I end up shoving some cached values in ETS — which is like a microsecond to pull a value out of, so that's nice — but it's up to a second stale, which is fine.
D
That's pretty much how it works for all our values. It has a nice side effect, actually: by throwing things in ETS, we can easily just do a big table lookup and dump things into our graph. We have big visualizers for our workflows that show a directed graph of all the work within the system, and being able to read everything out in one go and dump it in there is actually quite useful. So.
A
Look, there's a certain amount of fun with big numbers. I really enjoy it — once problems hit a certain scale, you kind of have to think. You know that time when you suddenly start going, "What I need now is a sketch, or a Bloom filter, or some other probabilistic data structure"? And I'm like, yeah, now we're doing some comp sci.
F
All of my problems fall into two categories: database write latencies and distributed race conditions. Like, that is my whole life — it revolves around making serialized isolation in a database fast.
A
Well, I get to — like, the amount of time I... I literally just did it again. I'm pasting setups for the dbg module everywhere now, even in unit tests. I'm like, "Yeah, what's the actual order of calls? How is this possibly going wrong? I'll just use dbg again and it'll tell me exactly what's going wrong" — and it does. It's beautiful.
D
I was just rereading the instruments spec, to see if I'd missed something and to remind myself why we had the whole async issue in the first place — and I don't think I'm wrong about the grouping, if you read the actual docs. The problem is this: we're not working with up-down counters and such. You're dealing with last value, and last value is very much last value. Very often it's something that only goes up in the first place anyway, but it's not a sum. It is the last value.
D
For example — I don't know — a timestamp. Timestamps are quite a good one. You have an inbound video frame, it's got a timestamp, and you want to know roughly, across the workflow, where the video currently is. It also gives you a good way of getting rough latency across parts of the workflow: you know, if the timestamp is zero at the beginning and the timestamp is 20 at the end...
D
...you've got 20 units in between — usual caveats about things between beginning and end. It's not a counter, and it's not a sum. It's not one of those things. It's a value — that's what it is. It's just a value, but a synchronous value. The default grouping —
D
the default aggregation — is doing histograms. But there's a big TBD on that in the specification as well; there are still conversations, and arguments, over what they're actually going to do with that, and that's why we're not using it. Okay, yeah.
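The distinction being drawn above — a synchronous stream of measurements where only the last value matters, versus a sum — is just a different fold over the same measurements:

```elixir
# Same measurements, two aggregations. For a PTS-style gauge only the
# last value is meaningful; a sum (counter) aggregation of the same
# stream produces a number with no useful interpretation.
measurements = [0, 7, 13, 20]        # e.g. frame timestamps seen in order
last_value = List.last(measurements) # => 20, what the workflow wants
sum = Enum.sum(measurements)         # => 40, what a sum aggregation reports
```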
B
Yeah, I think you should use the up-down counter with the last-value aggregation on it — yeah, I'll have to get to those. Is that allowed? Like I said, I guess it isn't. You know, the default aggregation being sum is irrelevant; you can just... I thought it's the default in the Erlang lib right now. I guess in the spec they changed it, but I'll go start arguing that that doesn't make sense in those meetings.
D
Yeah, last-value aggregation on a counter would solve our need completely. Yes — then we haven't got to faff around doing stupid things, because it is silly, you know, having to. Basically, we never want asynchronous observers in our workflows. It doesn't make any sense to have callbacks going into them, because it causes jitter. It's just not a good way of working for us.
B
Yeah, but you can change it — you can have it be last value. So ultimately you'll be able to use last-value aggregation on this instrument. Interesting that it isn't implemented yet. Well, I learned something.
A
Great one for Brian — but it's more of a :telemetry thing than an OpenTelemetry thing. I'm still trying to figure out rates. It strikes me that — am I just supposed to accumulate time on one counter and event count on another one, and then let something else do the rate determination, or...?
F
I do weird stuff, like only measuring things in RPM instead. I do a sum over the full interval — like a minute — so it just gives the full difference across that interval, and then I just divide it by a minute.
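Both approaches above boil down to the same arithmetic: keep monotonic totals and let the reader difference two snapshots over an interval. A hedged sketch (hypothetical module, not either speaker's code):

```elixir
# Sketch: given two snapshots of {total_time_ms, event_count} taken an
# interval apart, the event rate and mean duration fall out by
# differencing — no per-event rate bookkeeping needed.
defmodule RateCalc do
  def rate({t0_ms, n0}, {t1_ms, n1}, interval_s) do
    events = n1 - n0
    %{
      events_per_second: events / interval_s,
      mean_duration_ms: if(events == 0, do: nil, else: (t1_ms - t0_ms) / events)
    }
  end
end

# A snapshot a minute apart: 60 events, 3000 ms of accumulated time.
RateCalc.rate({1000, 10}, {4000, 70}, 60)
# => %{events_per_second: 1.0, mean_duration_ms: 50.0}
```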
A
Yeah, no, that makes sense. I mean, for a lot of these things it comes back to that "how big is the cache?" question. If the caching module itself — because I'm using Nebulex — is not maintaining, you know, the byte count or the size itself, because it's relying on ETS to report that via shards, then I've got to have...
F
There's an easy way to do it that's fast. Basically, if you run two atomics and use one as your index and one as your — or you can do it in one, but basically you just set one of your counters to where you're at in the index of the array, then just shove everything into whatever index you're at, and keep a counter of the last time.
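The two-slot atomics trick described above — one slot as a write index, the remaining slots as a ring of values — might look like this on top of Erlang's `:atomics` (a hedged sketch under my own naming, not the speaker's actual code):

```elixir
# Sketch of the atomics ring described above: slot 1 holds the write
# index; slots 2..size+1 hold the values. :atomics operations are
# lock-free and very fast, but slots are anonymous integers.
defmodule AtomicRing do
  # One extra slot for the index counter.
  def new(slots), do: :atomics.new(slots + 1, signed: true)

  def push(ref, value) do
    slots = :atomics.info(ref).size - 1
    # Atomically claim the next write position...
    i = :atomics.add_get(ref, 1, 1)
    # ...and store the value in the corresponding ring slot.
    :atomics.put(ref, rem(i - 1, slots) + 2, value)
  end

  def last_index(ref), do: :atomics.get(ref, 1)
end
```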
F
The atomics are insanely fast, with basically no overhead — way, way faster than that — but you have to get tricky with it, because you have no human-readable way of indexing into it without another data structure, and that's kind of what limits the usefulness right now.
F
You could still name it: if you know the names ahead of time, you basically create a map that has the name as the key and the index into the atomic as the value, and stick that in persistent_term. Then you can get an insanely fast counter there, because the lookup time out of persistent_term is incredibly fast — you don't have to do any copying — and so you can just write back into it.
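The named-counter layer just described — a name-to-index map in `:persistent_term` over one `:atomics` array — could be sketched as follows (hypothetical module; as the speaker notes, the counter names must be known up front):

```elixir
# Sketch: names are fixed at setup time. The name->index map lives in
# :persistent_term (near-zero-cost lookup, no copy on read), and the
# counters themselves live in a single :atomics array.
defmodule NamedCounters do
  @key {__MODULE__, :index}

  def setup(names) do
    ref = :atomics.new(length(names), signed: true)
    # Map each name to a 1-based slot in the atomics array.
    index = names |> Enum.with_index(1) |> Map.new()
    :persistent_term.put(@key, {ref, index})
  end

  def incr(name, n \\ 1) do
    {ref, index} = :persistent_term.get(@key)
    :atomics.add(ref, Map.fetch!(index, name), n)
  end

  def read(name) do
    {ref, index} = :persistent_term.get(@key)
    :atomics.get(ref, Map.fetch!(index, name))
  end
end
```

Note that `NamedCounters.setup/1` should run once; replacing a persistent_term later is what triggers the expensive scan the speaker goes on to describe.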
F
So that's the key: you have to know it ahead of time, or you do it on a limited interval. And even then, the copy — it's not like a VM-wide GC; that's a little bit of a misnomer. The persistent_term table has to be collected, but all it's doing is looking for anything that has a reference to that particular value, and then that's what gets copied into the process.
A
Fascinating. Okay, I'll dig in — thanks.
F
A reference-switching thing, kind of like what Tristan does in the exporter: because if your value on the key is one word, it doesn't garbage collect, so you can just set it and switch it. So if you had, like, two tables — or two things, whatever — you can just keep switching it out and there's no garbage-collection problem. Sweet — but you're limited to a word.
B
All right, last-minute thing, then we can call it.