From YouTube: 2020-09-25 meeting
A
Cool, well, it's a couple of minutes after the hour, so we can probably get to chatting at least. Paul, good to see you again.
A
Yeah, totally. I'm like, I'm not going to turn this blender off right now. Cool. Well, I was hoping that someone from Microsoft was also going to show up, but in the absence of that: where we last left our heroes last week, I think, was Honeycomb and Microsoft maybe joining forces to see if you two could come to an agreement on sampling priority. I'm wondering if anything happened along that front.
B
I have completely failed to make any progress on that this week, but it is on my agenda for next week too. I've got the OTEP that I'm going to be reviewing, and what I'd like to do is actually just see if I can make a proof of concept work, because we did make some progress on getting some of our deterministic samplers ported over to the sampler interface. So I can just try plugging in a description of the algorithm here.
A
Yeah, I would love to see these things work as straight plugins as a proof of concept before we add anything, you know, as something baked into the SDK or as a default plug-in. It would be really great to prove them out that way. Cool. This, and context management, is the other piece where, for whatever reason, it's not really possible to make headway without writing code. Yeah, totally.
B
I mean, you know, being super specific with us: there's the, I forget what it's called in the SDK now, but the trace ID deterministic sampler, the trace ID ratio sampler. We use a very similar mechanism in our Beelines, and so as a proof of concept I decided to port over the deterministic sampler that we use. The reason we couldn't just use the trace ID ratio one is that we wanted to interop with people who are using Beelines, and so the hashing algorithm has to be the same in order to make the same decision for a given trace ID. Totally reasonable, yeah. So, even though they're conceptually similar, the devil's in the details.
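The interop constraint described here can be sketched as follows. This is a hypothetical, simplified version of a Beeline-style deterministic sampler, not the actual Honeycomb or OpenTelemetry implementation: the point is that every process makes the same keep/drop decision for a trace only if it hashes the trace ID with the same algorithm and compares against the same threshold.

```python
import hashlib

MAX_UINT32 = 2**32 - 1

def deterministic_should_sample(trace_id: str, sample_rate: int) -> bool:
    """Keep roughly 1-in-`sample_rate` traces, decided purely by trace ID."""
    if sample_rate <= 1:
        return True
    # Hash the trace ID; any process using the same hash and threshold
    # will reach the same decision with no coordination needed.
    digest = hashlib.sha1(trace_id.encode("utf-8")).digest()
    value = int.from_bytes(digest[:4], "big")  # first 4 bytes as uint32
    upper_bound = (MAX_UINT32 + 1) // sample_rate
    return value < upper_bound

decision = deterministic_should_sample("7f1a2b3c", 4)
```

Swap in a different hash (or a different threshold scheme, as the stock trace ID ratio sampler uses) and two services will disagree on which traces to keep, which is exactly why the port was necessary.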
A
Yeah, and that is one thing I've wondered about sampling priority as a concept: whether or not there is a universal enough concept of sampling priority that you could nail it down, and whether it would be worth the switching cost, such that the people who care about sampling priority would actually adopt it. Yeah, and that's what we wanted.
B
Like, is this something our customers would just turn off? Or is this something that people would, you know, maybe borrow ideas from, but implement their own version of?
A
Yeah, and I think this will be a question going forward with the spec. The easy parts, actually, are the parts where we're providing support for existing systems and protocols, because there's not much design work there. It's just: should we or shouldn't we, is it expensive or cheap? But when it comes to actually forging new ground, that gets into more interesting territory, and it's something I think we're going to have to watch in this project.
A
We want to be able to figure these things out, yeah. That means the right hooks and, like, some rule, I don't know exactly, not a hard rule, but definitely something where you get at least two groups of notable size actually using something in production somewhere before it moves from some experimental or add-on thing to something that's in core. We're too busy focused on getting 1.0 and the spec done, but I think once we're over that finish line, this will be the next governance challenge we have to face.
A
You know, even if we come up with a nifty sampling priority concept, I don't want to bake that in if the reality is no one's going to pay the switching cost. Right, exactly. If the answer is "cool idea, buddy, but actually, in practice, no one's going to use it," then I don't think we should bake it in. We shouldn't be like, "well, maybe someone will find it useful one day." I think we should really try to avoid that kitchen-sink mentality.
B
Yeah, well, it makes a lot of sense right now that the SDK has an always-on sampler, a never sampler, and a deterministic sampler. That at least shows you the range of what can be done. Because for our customers, what we look at is: at the moment, all they can do with our Beeline is deterministic sampling, or, more commonly, they do custom sampling, and that's where we see more, you know, specific setups. And so that's the proof of concept that we're doing: can we do a baked-in...
B
The thing that we've run into, which isn't a big deal, but it was the only thing that was not super straightforward, was getting the sample rate back to us. Because one thing Honeycomb does that not every vendor does is we amplify sampled-out data in aggregate calculations. And that wasn't too bad, because the interface allows you to send back attributes. So, in the worst-case scenario, what it means for us is that, you know, whatever packaged samplers do make it in...
B
We would wrap them and just call shouldSample and return an attribute that represents the sample rate that was chosen. Or, if a customer is doing a custom sampler, then they would just, you know, calculate their sample rate and then send it back along with the decision, which really is a small price to pay. Yeah. It was the only real hiccup.
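The wrapping idea just described can be sketched like this. The names here (`SamplingResult`, `should_sample`, the `sample.rate` attribute key) loosely mirror the OpenTelemetry sampler interface but are illustrative, not the real SDK API: the inner sampler makes the keep/drop decision, and the wrapper attaches the chosen sample rate as an attribute so the backend can re-weight aggregate calculations.

```python
from dataclasses import dataclass, field

@dataclass
class SamplingResult:
    sampled: bool
    attributes: dict = field(default_factory=dict)

class KeepAll:
    """Trivial inner sampler standing in for any packaged sampler."""
    def should_sample(self, trace_id: str) -> SamplingResult:
        return SamplingResult(sampled=True)

class SampleRateWrapper:
    """Delegate the decision, then report the rate back as an attribute."""
    def __init__(self, inner_sampler, sample_rate: int):
        self.inner = inner_sampler
        self.sample_rate = sample_rate

    def should_sample(self, trace_id: str) -> SamplingResult:
        result = self.inner.should_sample(trace_id)
        if result.sampled:
            # 1-in-N sampling: each kept event stands in for N events.
            result.attributes["sample.rate"] = self.sample_rate
        return result

wrapped = SampleRateWrapper(KeepAll(), sample_rate=20)
result = wrapped.should_sample("abc123")
```

A custom sampler would skip the wrapper and set the attribute itself after computing its own rate.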
A
I do wonder, on this front, about the ability of different groups to go fully OpenTelemetry-native versus needing some flavor of installer, even if it does nothing other than configure things and possibly install some plugin things, whether they're SDK plugins or, like, stuff that the end user can use, like additional APIs for them. Yeah. It's not clear to me. It seems very convenient to say:
A
"Oh, we just accept OTLP data, just point the fire hose at us and you're good to go." But in practice I think it's actually trickier than that. If it seems likely, which it does right now, that people will want to ship their own, like, their own distros, kind of what I'm calling them, yeah, then we should do some work to make sure that that feels like a sane concept. Yeah, exactly, yeah.
B
I could see that being a reality, because even with the sample rate, we're sending it back in the sampler, but then we also need the exporter in order to send it on as a header. Exactly, yeah, we have to intercept that and then also...
A
If you want to talk to Lightstep, we'll accept OTLP data, but you have to send us a Lightstep access token. That's just a basic routing ID that we use, and you have to do it somewhere, somehow, you know. And, like, the configuration for setting a gRPC header on the built-in OTLP exporter, that's just a ridiculous piece of config information to ask an end user to copy-paste.
B
It gets a lot smaller, it just doesn't go away, yeah. It's basically just glue that's doing something.
A
We'd much rather give them a wrapper where they can say access_token equals blah, and then it's just sane. Yeah, even though it doesn't actually add any new features or install any plug-ins, just making it a simple one-liner instead of some kooky-looking block of stuff. It would be great for OpenTelemetry to have a simple one-liner, too, that works for everyone, but I just think it's actually going to be a bunch of one-liners, as far as I can tell.
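The vendor "one-liner" being described might look something like this. Everything here is hypothetical (`configure`, the endpoint, the header name are illustrative, not Lightstep's actual API): the point is that the wrapper hides the OTLP exporter and gRPC header plumbing behind a single call that only asks the user for an access token.

```python
def build_exporter_config(endpoint: str, headers: dict) -> dict:
    # Stand-in for constructing the built-in OTLP/gRPC exporter config,
    # i.e. the "kooky looking block of stuff" the wrapper hides.
    return {"endpoint": endpoint, "headers": headers}

def configure(access_token: str) -> dict:
    """The vendor one-liner: exporter plumbing hidden behind one argument."""
    return build_exporter_config(
        endpoint="ingest.example.com:443",       # assumed vendor endpoint
        headers={"access-token": access_token},  # assumed header name
    )

# What the end user actually writes:
config = configure(access_token="blah")
```

No new features, no plug-ins; the whole value is collapsing the copy-paste config block into one sane line.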
B
Our event model is always fun when you have trace-centric data structures. Because sampling decisions are made at the trace level and not per span, we have to figure out ways to create inheritance for sample rates. So the sampling decision is fine, but when we're rendering the data, we're going to figure out, on our side, a way to say: well, this span has no sample rate attached to the event, but its parent does, so just...
A
Yeah, that sounds like baggage, right, if it's getting propagated along the transaction. Exactly, yeah, baggage or trace state, depending on who needs access to it.
B
We're going to explore that next. It seems a solvable thing.
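The render-time inheritance just described can be sketched as a walk up the trace: a span with no sample rate attribute falls back to the nearest ancestor that carries one. The data shapes here (`parents`, `rates` dicts keyed by span ID) are illustrative, not Honeycomb's actual storage model.

```python
from typing import Optional

def effective_sample_rate(span_id: str,
                          parents: dict,
                          rates: dict) -> Optional[int]:
    """Return the span's own sample rate, or the nearest ancestor's."""
    current = span_id
    while current is not None:
        if current in rates:
            return rates[current]
        current = parents.get(current)  # None once we pass the root
    return None

# Trace: root -> a -> b; only the root span recorded a sample rate.
parents = {"a": "root", "b": "a", "root": None}
rates = {"root": 10}
rate = effective_sample_rate("b", parents, rates)
```

Propagating the rate in baggage or trace state, as suggested above, would attach it to every span up front and make this fallback walk unnecessary.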
A
I'm excited in general about people writing cool contraptions on top of the baggage API to make it useful. Right now we have tracing and metrics and baggage, but they're not really, like, super integrated. I think under the hood they'll start to get integrated more, like:
A
I just want my baggage to show up on my spans, please, somehow, you know. But I also think that'll be a nice toolkit for people building interesting stuff on top of it, like integrating that stuff with feature flags, for example, feature flagging in distributed systems, and other cool stuff, where you actually want to put a layer on top of it that's its own little program.
A
Cool. Well, just thinking about sampling, I'm having a look at what's open in the spec, and let me see if this makes sense to you if I just share my screen. So in this spec, alongside the OTEP I know you're working on, there's only one thing I'm seeing in here that's labeled as sampling and required for GA.
A
Paul, feel free to reach out if you feel like your OTEP is getting stalled. Otherwise, I'm going to try to poke Azure to get a sampling plug-in for next Friday, and hopefully, yeah, you're working on one from Honeycomb as well. Not sure if it fits your timeline, but it could; we'll see. Hoping, hoping for...