From YouTube: 2021-07-20 meeting
B: Yeah, yeah, I was saying it would be cool to have some time to do that. I have a short week this week, but next week I could start digging into that. I think, yeah.
B: Yeah, I was interested in just trying to do enough to get a sense of what an RAII proposal would look like in a language more like Go, and whether it could just be kind of the same thing, or what. So, yeah.
B: I have a short week this week; I'm pretty strapped. I'll probably have to cut out from the C++ meeting early because I'm on some deadlines, but I'll be back next Tuesday, and my hope is to be able to actually write some code. It should be nice; it's been a while, actually, since I've done that.
C: If I could hear a bit about whether there's any discussion on sampling, that would be interesting. In particular, I'm still sort of wondering what to do with the rate-limit sampling, because it sounds like there are a lot of people rate-limit sampling. So I have one spec issue about rate-limit sampling.
B: This is like the kind of thing Jaeger has, where it's about a minimum rate.
B: Yeah, so there's this progress report; let me just bring it up. I wasn't able to make the spec meeting this morning, unfortunately, but I have been going to the sampling SIG, so I'm happy to give a report back about that.
B: So it does seem like the rate-limit sampling is needed if we're going to add probabilistic sampling for spans.
B: So this is actually about span-based sampling, which I don't really have a background in, so I am nervous about this stuff. I do see that Jaeger has a form of it, but the basic idea, from what I understand, is: you've got a bunch of spans, right, a bunch of data coming in, it's too much, and so you'd like to back off on your collection.
B: There's all this aggregate data, and histograms people are trying to build out of traces, and that data tends to be per-span and per-operation based, not trace-based, and so there's some probabilistic sampling that you can do.
B: That's not head sampling, right; it's not flipping a coin at the beginning. It's saying that on every span you're making a sampling decision, and that's, I think, to weed out super-common operations on some level and not have them be over-represented.
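For illustration, here is a minimal Go sketch of the per-span decision being described. The sampler type, its fields, and the per-operation table are hypothetical, not the OpenTelemetry SDK API; the point is only that the decision happens on every span, not once per trace.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// spanSampler is a hypothetical sketch (not the OpenTelemetry SDK API)
// of per-span sampling: every span, not just the root, is checked
// against a probability, lowered for super-common operations so they
// aren't over-represented.
type spanSampler struct {
	defaultP float64
	perOpP   map[string]float64 // lower probability for noisy operations
}

func (s spanSampler) shouldSample(spanID [8]byte, op string) bool {
	p, ok := s.perOpP[op]
	if !ok {
		p = s.defaultP
	}
	// Treat the span ID's low 63 bits as a uniform random value and
	// compare against the probability threshold.
	v := binary.BigEndian.Uint64(spanID[:]) & (1<<63 - 1)
	return v < uint64(p*float64(uint64(1)<<63))
}

func main() {
	s := spanSampler{
		defaultP: 1.0,
		perOpP:   map[string]float64{"GET /healthz": 0.01},
	}
	id := [8]byte{0xde, 0xad, 0xbe, 0xef, 0, 1, 2, 3}
	fmt.Println(s.shouldSample(id, "GET /healthz")) // usually false
	fmt.Println(s.shouldSample(id, "POST /orders")) // true at p = 1.0
}
```

Because each span consults its own probability, two spans in the same trace can disagree, which is exactly the partial-trace effect raised next.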
B
It
kind
of
makes
sense
except
one.
It
means
if
you
start
cranking
that
up.
It
seems
to
me
that
you're
going
to
start
getting
partial
traces
right
like
like
the
inevitable
side
effect
of
flipping
a
coin
on
every
span
or
you
know,
hashing
the
taking
a
hash
the
trace
id
and
trying
to
make
a
decision
is
gonna
lead
to
having
exemplars
that
are
are
not
complete,
like
the
actual
trace.
B: Like, the actual trace exemplars are not complete, and what makes that even worse is that it's actually hard to tell whether or not you're looking at a complete trace. It's like the halting problem: you can't ever really know that you got a complete trace. And so a lot of Josh's questions were about, well, how could you guess? There are some algorithms you can use to guess.
B: Obviously, that span is missing, so the trace isn't complete, but it's not very rigorous. And in OpenCensus they were doing a thing where they started recording a count of child spans, to kind of help get a sense of when things might be missing, but it's all heuristic. And so I guess I have a general concern with all of that stuff.
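A sketch of that child-count heuristic, loosely after the OpenCensus idea; the record shape and names here are made up. It can flag obviously missing spans, but it can never certify completeness:

```go
package main

import "fmt"

// span is a pared-down record carrying the child-count hint: each span
// reports how many children it started, so a consumer can guess, but
// never prove, that a trace arrived complete.
type span struct {
	id, parentID string
	childCount   int
}

// looksComplete compares every span's claimed child count against the
// children actually received. Purely heuristic: a dropped subtree
// whose root was also dropped is invisible to this check.
func looksComplete(trace []span) bool {
	got := map[string]int{}
	for _, s := range trace {
		if s.parentID != "" {
			got[s.parentID]++
		}
	}
	for _, s := range trace {
		if got[s.id] < s.childCount {
			return false
		}
	}
	return true
}

func main() {
	trace := []span{
		{id: "a", childCount: 2},
		{id: "b", parentID: "a", childCount: 0},
		// the second child of "a" was sampled away
	}
	fmt.Println(looksComplete(trace)) // false: "a" claims 2 children
}
```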
B
You
know,
there's
always
like
you
could
be
dropping
data,
so
you
could
ever
truly
know.
But
if
you
turn
on
this
kind
of
sampling,
then
it's
like
you're
kind
of
guaranteeing
that
you're
going
to
end
up
with
with
non-representative.
B: And so that has me kind of concerned about how good of an idea this is. Is this a footgun that we're going to hand to our end users? And then, yeah, on top of that, you also have metrics and other things, where, if you're trying to build metrics off of this stuff, what does that do?
B
How
does
that
get
affected
by?
By
doing
this
kind
of
kind
of
thing?
I
guess
if
you're
building
the
metrics
with
metrics
instruments,
it
maybe
doesn't
affect
it
so
much
like
using
your
instrument,
or
I
don't
see
that
instrumenter
really
getting
affected
by
this
automatically.
But
if
you
were
trying
to
to
just
do
things
like
you
know,.
B
Make
metrics
out
of
your
actual
data,
like
counting,
you
know,
just
counting
the
number
of
500s
that
come
through
the
door
like
if
you
don't
this
kind
of
like
sampling
stuff
once
you
start
piling
it
all
together,
where
you're
gonna
have
probabilistic
sampling,
but
then
you're
gonna
weight
it
based
on
what
how
represented
the
data
is,
and
then
you're
gonna
supply
a
floor
to
that
data.
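A sketch of the weighting step being described, with hypothetical names: each kept span carries the probability it was sampled with, and a consumer weights it by 1/p to estimate the true count. A rate-limit floor that overrides the coin flip silently invalidates the recorded p, which is the untangling problem being raised here:

```go
package main

import "fmt"

// sampledSpan carries the probability it was sampled with, so a
// consumer can weight it back up. Hypothetical names, sketching the
// adjusted-count idea.
type sampledSpan struct {
	statusCode int
	p          float64 // sampling probability at decision time
}

// estimateCount weights each kept span by 1/p to estimate the true
// count (a Horvitz-Thompson style estimator). This only stays honest
// while p is the whole story; once a rate-limit floor overrides the
// coin flip, the recorded p no longer matches the real inclusion
// probability and the estimate drifts.
func estimateCount(spans []sampledSpan, status int) float64 {
	var total float64
	for _, s := range spans {
		if s.statusCode == status && s.p > 0 {
			total += 1 / s.p
		}
	}
	return total
}

func main() {
	spans := []sampledSpan{
		{500, 0.1}, {500, 0.1}, {200, 0.5},
	}
	// Two 500s kept at p=0.1 stand in for roughly twenty real ones.
	fmt.Println(estimateCount(spans, 500)) // 20
}
```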
B
It
just
seems
like
that
would
like
be
really
hard
to
untangle.
If
you
wanted
to
to
get,
you
know
a
proper
count
of
something,
and
if
you
had
like
a
predictable
sampling
algorithm
like
if
it
was
straightforward,
then
I
imagine
you
could
to
some
degree
take
that
into
account
when
you
were
like
sampled.
Metrics
seem
a
little
sketchy
to
me,
but
I
imagine
if
it
was.
If
what
we
were
doing
was
rigorous,
you
could
figure
it
out,
and
so
so
that's
where
I'm
at.
Unfortunately,
I'm
not
an
expert.
So
I
just.
B: So I just have these vague fears, but I feel kind of useless to the conversation, because I can't turn around and say, well, here's a solution. I'm just kind of worried about taking a hodgepodge approach to this stuff.
B: Are the users happy with what they're currently doing? Is it creating problems and confusion over there, or what? They do some of this stuff, using Dapper and Census and things, inside of Google, but I'm a little concerned that those are situations where you've got people who are basically really smart, have a lot of time on their hands, and are kind of super-users of this stuff to some degree. And so I'm a little concerned.
B: I'd be more comfortable with it if this was working fine in the world of Jaeger than hearing that it worked fine at Google, if that makes sense. Because you're not implementing this over and over and over again at Google: you did it once, and then you've got the same five people dealing with it. And I'm concerned about your average end user.
B
It's
just
really
hard
to
reason
about
this
stuff.
So
I'm
a
little
nervous
about
having
a
system
where
we
have
like
stuff
for
like
one
use
case.
But
if
you're
trying
to
do
another
use
case,
then
you
shouldn't
use
this
stuff
because
it
will
it
will
mess
you
up
like
like
it's
one
thing
to
have
like
plug-ins
that
do
that.
B: I feel like I'm not the right person to disentangle this stuff, so I would love it if other people who understood it could attend that meeting, or review some of the stuff Josh is talking about. And, unfortunately, I think the Jaeger people are mostly European, so it's hard to find a great time to actually get everyone together.
B: Yeah, if you're interested in this subject and feel like you've got a handle on this stuff, I would recommend reaching out to them, or at least trying to understand what it is that Jaeger's doing, versus what they're planning to do and what they think would be nice to do. I guess that's what I'm trying to sort out there. So yeah, that's it.
B: They have a remote sampler that they'd like to get merged, and things like that; I think we should definitely go ahead and do that. But as far as generalizing it to OpenTelemetry, I'm just a little nervous that we're doing a bunch of things at the same time, and this sampling stuff feels a bit like the approach of: all right, here's a problem, so here's the solution; and there's another problem, so there's another solution.
B
And
I
I
personally
not
want
to
rush
into
adding
it's
like
hard
to
take
things
away
once
we
add
them,
so
I'm
a
little
nervous
about
adding
this
stuff
until
we
can
feel
like
it
would
be
great
if,
if
someone
could
write
a
proposal
that
at
least
showed
how
this
was
gonna
work
with
metrics
and
how
it's
all
gonna
work
together,
how
would
you
how
would
you
count
spans
after
the
fact,
if
you
did
this
stuff,
how
how
do
end
users
ensure
that,
like
how
our
exemplar
is
going
to
work?
B: It's one thing for a user to misconfigure it, where it's like: oh, I dialed it up super high, so I don't have a lot of data; or I have it super low, so, you know, it costs more than it needs to. But that's a bit different from having it incoherently dialed, where now my data is actually wrong.
A: So I've been reading through this discussion that conorac had opened about the rate-limiting sampler, and it looks like Josh is coming to the same conclusion that I had sort of been thinking of, which is that, if we want to do this as head sampling, we probably need some sort of feedback mechanism, where we can say: okay, over this period, here's how many traces or how many spans were actually sampled; let's adjust the probability that we sample in the future. So we may go...
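One possible shape for that feedback loop, as a rough Go sketch with invented names: keep head sampling probabilistic, but once per period compare how much was actually sampled against a target and adjust the probability for the next period.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync/atomic"
	"time"
)

// adaptiveSampler periodically rescales its sampling probability so
// the observed sampled-span count tracks a per-period target.
type adaptiveSampler struct {
	p       atomic.Value // current probability (float64)
	sampled atomic.Int64 // spans sampled in the current period
	target  float64      // desired sampled spans per period
}

func newAdaptiveSampler(initialP, target float64, period time.Duration) *adaptiveSampler {
	s := &adaptiveSampler{target: target}
	s.p.Store(initialP)
	go func() {
		for range time.Tick(period) {
			n := float64(s.sampled.Swap(0))
			if n > 0 {
				p := s.p.Load().(float64)
				// Scale toward the target, clamped to (0, 1].
				s.p.Store(min(1.0, max(0.0001, p*s.target/n)))
			}
		}
	}()
	return s
}

func (s *adaptiveSampler) shouldSample() bool {
	if rand.Float64() < s.p.Load().(float64) {
		s.sampled.Add(1)
		return true
	}
	return false
}

func main() {
	s := newAdaptiveSampler(1.0, 100, time.Second)
	for i := 0; i < 10000; i++ {
		s.shouldSample()
		time.Sleep(300 * time.Microsecond) // stand-in for real traffic
	}
	fmt.Println("settled probability:", s.p.Load())
}
```

Where the adjustment signal lives, in-process as here or pushed back from a collector, is exactly the mechanism question that follows.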
A: It could be another back end, but then we would have to specify the mechanism, right? Then we would be talking about... I think last year there was a proposal for OTLP configuration communication: having configuration specified in OTLP, and being able to communicate that back to an SDK along the channel. So something like that might work.
B
Yeah
yeah-
and
this
is
the
thing
they
have
in
jager-
they
do
have
this.
They
call
it
like
remote
sampling
configuration
basically
so
and
they're
looking
to
add
that
as
a
jaeger
plug-in
for
like
the
jaeger
exporter,
jager
sampler,
which
is
fine,
but
if
we
were
to
generalize
that,
I
think
we
should
like
it
shouldn't
just
be
a
sampler
thing.
B: So yeah, a general-purpose way of sending updated configuration to SDKs would be cool, but it's kind of a can of worms again. You know, you have, like... you know, the most secure...
B: Have you, or anyone else on the call, looked at this kind of dynamic updating?
A: So I looked at that, but it was all a bit over my head at that point, and it got dropped, so I haven't really thought about it much since then.
B: Yeah, and I think also, I would guess, maybe people were presuming gRPC or something like that, if you're talking about using OTLP as the carrier there. But I definitely think it's reasonable, whether it's in the same channel or not. I do think the safer approach is that the endpoints reach out to get their configuration data, as opposed to the endpoints, the SDKs, exposing a port where anyone can come along and feed them.
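A sketch of that reach-out model, with an invented endpoint and payload (Jaeger's actual remote-sampling protocol differs): the SDK polls a collector for its sampling configuration instead of listening on a port anything can push to.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// samplingConfig is a made-up payload shape for illustration.
type samplingConfig struct {
	DefaultProbability float64            `json:"defaultProbability"`
	PerOperation       map[string]float64 `json:"perOperation"`
}

// pollConfig periodically fetches configuration and hands it to apply.
// On any error it keeps the last known config rather than failing.
func pollConfig(url string, every time.Duration, apply func(samplingConfig)) {
	for range time.Tick(every) {
		resp, err := http.Get(url)
		if err != nil {
			continue // keep the last known config
		}
		var cfg samplingConfig
		if json.NewDecoder(resp.Body).Decode(&cfg) == nil {
			apply(cfg)
		}
		resp.Body.Close()
	}
}

func main() {
	// Hypothetical collector endpoint; not a real OpenTelemetry URL.
	go pollConfig("http://collector:8080/sampling?service=checkout",
		30*time.Second,
		func(cfg samplingConfig) { fmt.Printf("new config: %+v\n", cfg) })
	select {} // stand-in for the rest of the application
}
```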
B: Yeah, yeah, and then you can use service meshes and load balancers to manage where they're looking for things. You can hose yourself with this approach, though: I've written schedulers and control planes and stuff, and as these things scale, you get more and more things going poke, poke, poke at some source of information.
B
So
that's
the
the
flip
side
of
having
these
things
reach
out.
Instead
of
having
them
be
reachable
right,
is
they
don't
know
when
there's
new
information
to
be
had
so
they
have
to
pull
or
otherwise
establish
a
connection
so
like
that
connection
load
just
grows
over
time,
but
they're
they're
already
talking
to
collectors,
then
they
can
get
their
information
from
the
collectors
and
you
can
end
up
with
a
tiered
architecture
that
kind
of
manages
that
so
so
it's
it's
doable,
it's
just
you
can.
B
I
should
go
too
I'm
trying
to
get
some
more
microsoft,
people
there's
a
guy
michael
lee,
who's,
a
pm
and
johannes
tax.
I
think
we're
gonna
help
out
with
instrumentation,
so
hopefully
they'll
be
able
to
show
up
to
one
of
these
meetings
in
the
future
I'll
try
to
poke
them.
Some
more
all
right!
Well,
good!
Talking
to
you
guys,
talk
to
you
guys
later.