From YouTube: 2020-09-18 meeting
C
Some people from Microsoft, hopefully reporting back, I think with Mila, should be showing up here.
A
Let's see, just updating the sampling notes; it's the 18th.
C
And I added Microsoft to the agenda, but does anyone else have subjects they want to talk about?
C
Okay, it's a couple minutes past the hour; we could probably get started. This looks like the usual crew. Would you like to kick it off with your report back from Microsoft?
D
Yeah, I work on the Azure side of things at Microsoft. We're trying to work together with the SDK folks on OpenTelemetry, and we're also trying to provide guidance for internal Azure teams on using distributed tracing with OpenTelemetry. One of the huge epics we found is sampling for those scenarios, and I want to talk about two different custom samplers we prototyped. The first one intends to solve
D
the problem of interoperability with legacy SDKs. We have our own Azure Monitor SDKs, and they use a sampling algorithm which is trace-ID-ratio based, but the algorithm is different, so we basically cannot make two applications work together properly when one is on OpenTelemetry and the other is using the Azure Monitor SDK, because they have different algorithms to calculate the score. To address this problem, I created an OTEP that is under review, and basically what we are saying is:
D
we want to calculate the score once, on the first service, no matter what it is. Let's say it's OpenTelemetry, and it will use the trace-ID-ratio-based algorithm. Then let's put the score in the trace state and propagate it further, and then the next one, let's say, is an Azure Monitor instrumented service.
D
We cannot achieve that today without some changes in OpenTelemetry. First of all, we need the ability to change the trace state within the sampler. We have an issue for that; I think it's marked as required for GA, so it seems to be on the right track to be addressed.
D
Assuming we have that feature, I've tried to build a sampler that would leverage it. If you want, I can share my screen and show you the implementation.
D
And here we are. So basically, what we want to do: it's .NET, but that should be largely irrelevant.
D
What we want to have is an additional sampler in OpenTelemetry. Of course we can build it ourselves, but we don't want to ship it in every language, and we think it's generally useful. So that's the OTEP; this is up for discussion, but basically we want to propose the sampler as part of an OTEP or OpenTelemetry itself.
D
Like the probability sampler, we give it the probability, and we specify the algorithm for how to calculate the score. It could be random, so we don't have to depend on the distribution of trace IDs; it could be the OpenTelemetry trace-ID-ratio-based one; it can be Azure Monitor specific. But the logic is: if there is a score in the trace state, use it; if there is none, fall back to the additional algorithm that is provided.
D
So this is the external score sampler. It basically does what I described: it parses the trace state. One thing we don't have in .NET is we don't participate.
D
Okay, and this is the implementation of this external score sampler; the naming could certainly be improved. The basic logic parses the trace state. If there is no score in the trace state, it generates a new one using the algorithm provided by the user, then puts it in the trace state. Then, whether there was a score or we calculated a new one, it compares it with the probability. So, say for a given trace ID we generated a score of 0.1: if it's smaller than the probability, then cool, it's sampled in.
D
If it's bigger than the probability, it's sampled out. The cool thing about this score:
D
If we also put the score into the attributes, we can achieve something more. Imagine the situation where you have three different services and they all have different sampling probabilities. When you show traces, you want to pick the ones that are complete, and the complete traces are currently the ones with the smallest score. So in your UX you can sort the traces by score, and when users pick a sample they pick the smallest one, which will most likely be a complete one. Yeah.
D
I think there are a couple of questions here. The first one: why do we separate the generation of the score from the sampling? The answer is what I also found in my other prototype. Imagine you have this aggregating sampler that falls back to another one. Then you have all these questions: what if my internal one changed the trace state on its own? What if it changed attributes?
D
How do I merge those together, and is it an efficient thing to do at all? Should I ignore the attributes if I am sampling out the internal ones? So there are a lot of questions and a lot of complicated behavior issues.
D
So I thought that all this logic based on the attributes or anything else is really independent of the generation of the score. And this is probably something that would be a breaking change for the current sampling specification, because we have the trace-ID-ratio-based sampler, where the behavior to generate the score is bundled together with everything else in the sampler.
D
Yeah, so one thing that I want to change: we have the trace-ID-ratio-based sampler. What I want it to be instead:
D
What I want to create is an abstraction called a score generator. It can be based on the trace ID, or it can be random; this is something users can specify differently. Then the trace-ID-ratio-based sampler can be made very generic: it only needs the generator to calculate the score however you want, and if there is additional logic, like reacting to an HTTP header, or sorry, the HTTP method, for example, in attributes, then that can live in the sampler.
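D's proposed split might look roughly like this. All names are illustrative only, not taken from the OTEP:

```python
import hashlib
import random
from typing import Callable

# A score generator maps a trace ID to a score in [0, 1).
ScoreGenerator = Callable[[str], float]


def random_score(_trace_id: str) -> float:
    """Ignores the trace ID, so it does not depend on its distribution."""
    return random.random()


def trace_id_score(trace_id: str) -> float:
    """Deterministic: every service computes the same score for a trace."""
    digest = hashlib.sha256(trace_id.encode("ascii")).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


class RatioSampler:
    """Generic ratio sampler: score generation is pluggable, while any
    extra logic (e.g. keying off the HTTP method attribute) would stay
    in the sampler itself or a subclass."""

    def __init__(self, probability: float, generator: ScoreGenerator):
        self.probability = probability
        self.generator = generator

    def should_sample(self, trace_id: str) -> bool:
        return self.generator(trace_id) < self.probability
```

The point of the abstraction is that `RatioSampler` never cares how the score was produced, so swapping `trace_id_score` for `random_score` (or an Azure-Monitor-compatible generator) changes interoperability behavior without touching the sampling logic.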
D
Yeah, we can. At the same time, when this OTEP was reviewed, the concern was brought up that if we have to have the fallback mechanism, what happens if there is no score in the trace state? Then we should provide some fallback, and if we provide a sampler, we now have all these questions again, because users can provide any sampler they want, and we don't know if it generates a score, and if it generates one, how it populates it.
C
Yeah, the thing I'm trying to tease out here is whether it's feasible to build this entirely as a sampling plug-in. Not because that would be the best solution in the long run, but I want to make sure we're not saying "hey, this sampling API isn't working, so we need to bake this in."
D
Yeah, and from what I've seen it's possible to build it, certainly. Why we're trying to push to have agreement on the OTEP is that we have our Azure planning and we have certain commitments. We need to give guidance to certain services and say: okay, use this algorithm. If we tell them to use the OpenTelemetry algorithm, it won't work right away with all the legacy customers that we have; if we tell them to use the current algorithm, we don't know whether it will work with OpenTelemetry in the future.
C
Yeah, I definitely think sampling priority is a thing a number of people want. If it was possible to get agreement between the different groups who need something like sampling priority, I think that would be fabulous.
C
That requirement might diverge among the different groups, and that's probably where the difficulty will be. But I do think it would be great to have OpenTelemetry start coming out with standard sampling algorithms, in the sense that we've proven this works and there are systems that support it. It's not just pie in the sky, "wouldn't it be nice if we used, you know, the sampling algorithm." So I do think that's great.
C
I think, as you mentioned, one scary thing about OpenTelemetry being a framework is you may need multiple plug-ins to make it work correctly with a back end. So I'm also asking groups to look at shipping distros for the time being, where you provide a nice little wrapper that installs everything that's needed, just to have a clean user experience for your users.
C
So one thing I would like to see is whether that distro experience is feasible, and to see if we could do that while pushing this OTEP through in parallel. If the OTEP flies on through, then that's super great; I'm just slightly nervous about it.
C
I would suggest not completely relying on that happening quickly, just because, as we're going towards GA, we're trying to cut scope left and right on what's absolutely required, and I could see this potentially getting jammed up if people have divergent opinions of how sampling priority could work. So I think that's valuable, but I'd also suggest making sure you've built something that can work as a plug-in, just so you're not telling your end users.
D
Yeah, so let me check if I get it right. So now, you folks are busy with GA, and we realistically cannot say that this OTEP will go in, so the reasonable way to move forward is if we ship it as a plug-in in the languages that we care about and make sure it works. We can work in parallel with the other groups who are interested in the priority, and eventually this problem will be solved in this way or another.
C
Yeah, it seems like these are additive changes, right? So these are the kinds of changes we can make after GA, adding new kinds of sampling algorithms and stuff. Actually, I'm less nervous in general about sampling after GA, because sampling has no public API for end users, right? At the end of the day it's all encapsulated in the SDK. Plug-in authors might have to change things, and you may break people's setup when they start their SDK.
C
What I'm nervous about is if we discover we have some flaw in how that's working. For example, we're noticing right now that if you are moving work from one thread to another, and you have to do something where you manually pass a span from one place to another and then make that the parent, doing that with just a span separates the span from the rest of the context, like baggage and all these other things. So those are the kinds of concerns we're looking at right now.
C
We really need to make sure those are correct before we call this thing 1.0. Okay, so that's why I'm saying you may get some resistance to this being marked as required for GA, just because we're trying to cut scope on that front right now. But it does look reasonable, and I do think people will.
B
Right, yeah, yeah: we can do, like, a deterministic straight-up sampling rate, and sampling priorities for dynamic sampling.
C
Awesome, yeah, yeah, so those two could work together. And I'm not sure, is there anyone from Amazon Web Services here today?
D
Yeah, cool. I also wanted to share some findings; we don't need to go too deep into the details. For Azure, given the internals of Azure, we have a quite different problem in sampling, and I also created a prototype for it. What we want is to support two different tracing views.
D
One is the internal view, the other is the external view. So, like, you have your cluster of, let's say, Azure Storage, and there are hundreds of services there, and internally they all use OpenTelemetry and report telemetry, or spans, according to it. But to external users we have just the public boundaries exposed.
C
Yeah, well, if it's working with trace state: one of the core ideas of the trace state field in the W3C spec was that you might have multiple different tracing implementations taking part in the same trace, and the hope there is, if everyone can include their own section in trace state, then that's how you would be able to solve this. We definitely did some proofs of concept in that working group to prove that you could connect up a number of these systems.
C
So what would happen is: you would get a trace in, and you would see that there's some information in the trace state, but it's not yours, and so then you would generate your new trace state entry and append it, right? And if it somehow was going in and out of different systems, you could still scan through it and see: oh, okay, I already have something in here. Yeah.
D
That's what we've implemented. And one thing that came up, though I think this has already been addressed, is that we don't have a clear spec in OpenTelemetry on what trace state is and what operations are allowed on it. Yeah, but it's been addressed.
D
The other thing, for samplers: I think this is a gap, that we don't have clear documentation on who is responsible for what if you use multiple samplers, the aggregated ones. How do you propagate the attributes from one to another? How do you return things back from one to another?
D
Yeah, so in order to make a sampling decision, you can pass attributes into the sampler, and it can also update these attributes, so the sampling result has attributes on it. But it's not clear whether the incoming ones are populated into the outgoing ones, and what happens with outgoing attributes if the result is sampled out, or some corner cases like that; I don't think we have them addressed.
C
So, if they're propagated attributes, that sounds like baggage. The purpose of baggage is to have just this fundamental key-value storage that gets propagated along the trace, so you could think of baggage as trace-level attributes, in a sense. So if you're looking for a value that you want propagated down the stack, and you're going to check to see if it's there, the two places to put it are baggage or trace state.
C
Trace state is for something just internal to your thing that gets propagated around, so, you know, I do wonder whether it could just be put in there. Then, if it's something that needs to be exposed outside of trace state for some reason, baggage would be the place to put it, if it's getting propagated. And this is actually one of the things that is, I think, required for GA: making sure that stuff like baggage is accessible internally to plug-ins and things like that. That's what I'm pushing for in general; this is why I did this whole OTEP 66 to pull out context propagation.
C
The main insight I had about what we need to change about OpenTelemetry is that any API that only takes a part of the context, like a span, should really take the whole context. Or, if you're in a language where the active context is just accessible in the background, that's fine, you don't have to manually pass it in. But as long as those APIs are lifted so that they're working on a context, that leaves a lot more room under the hood to check if there's baggage that you need, or something else like that. So that is definitely pre-GA, to make sure that something like that works.
C
Okay, I have made some notes on this. Unfortunately, I can't stay past 9:30; I actually have to hit the road. I'm driving down to San Diego over the weekend, so I've got a very long drive ahead of me, so I have to sign off at 9:30. Other people, of course, are welcome to keep on talking.
C
Cool. I tried to take some notes to sort of capture what some of the requirements were and what we discussed. Maybe give them a glance over, and if they're not accurate, update them; I'm not the best at listening and typing at the same time.
C
Okay, is there anything else people want to discuss, or other topics people could give some feedback on, before we get out of here?
C
Okay, so it sounds like, yeah, again: making sure that we can build all this stuff in a sampling plug-in. I think baggage is the thing to look at there, since we already have trace state in the works. And then maybe trying to find some agreement, first and foremost between Azure and Honeycomb, about what a more standardized sampling algorithm would look like. Cool, awesome. That's progress! I'm really happy.