From YouTube: 2020-08-04 Python SIG
A: Could just be, like, stuck at home with, like, no internship, I guess, too.
B: Okay, there's some visualization that I think is useful, but I'd have to find it. So basically, I mean, from the Google side right now, exemplars look like that, which is cool, yeah, okay. This is somewhere in development, but you have your histogram or whatever, and then you have exemplars, which are these points, which you can, I don't know, click to see the trace, right. So that's the Google concept: you say, okay, what is this high-latency request?
B: Or, I don't know, this one, yeah, because something unknown happened, because I didn't trace this properly, but, you know, whatever. So that's the original goal and use of it, and then obviously it expanded to be much more than that. So essentially, I didn't mention this in the OTEP because I didn't have a clear thought process here, but I essentially separated the concept of exemplars into trace exemplars (or I called them semantic exemplars; I guess trace exemplars is maybe better) and then statistical exemplars, which is the cool new thing that nobody uses. Anyway, I have a Jupyter notebook here that kind of shows some stuff.
A: Sure. So you're saying you didn't have the separation of the semantic and the statistical in your OTEP?
B: Yeah, I think so. How long ago... like a minute, maybe, god.
B: So the top of the notebook is just setting up, obviously, and then I create this input, which I "randomly" (quotation marks) generate, and which has some big customers and a bunch of small customers, as you can probably see.
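The notebook itself isn't visible in the transcript; as a rough sketch of the kind of input being described (a few big customers and a long tail of small ones; every name and size here is made up), it might look like:

```python
import random

def generate_requests(n=1000, seed=0):
    """Generate (customer_id, bytes_in) pairs with a few big customers
    and a bunch of small ones (hypothetical stand-in for the notebook)."""
    rng = random.Random(seed)
    big = ["big-0", "big-1"]                      # heavy hitters
    small = [f"small-{i}" for i in range(50)]     # long tail
    requests = []
    for _ in range(n):
        if rng.random() < 0.3:                    # ~30% of traffic is big
            requests.append((rng.choice(big), rng.randint(50_000, 100_000)))
        else:
            requests.append((rng.choice(small), rng.randint(100, 5_000)))
    return requests

reqs = generate_requests()
```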
B: I create a new aggregator for every single customer ID. But from exemplars, since they contain the dropped labels, you can still make aggregations for that customer ID, or estimations of the aggregation. So we can estimate the percentage that each customer... I don't know, in this case it's a bytes-in metric, so you could say how many bytes each customer sent, and then you can say, what are my biggest customers? This is an estimate, but depending on the number of exemplars you sample, it can be very accurate. So there's some magical formula, like this thing: if you know your standard deviation, you can basically say how many samples I need.
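The "magical formula" isn't shown in the transcript; what is being described sounds like the standard sample-size formula from statistics, n = (z·σ/E)², where σ is the known standard deviation and E the acceptable margin of error. A sketch under that assumption:

```python
import math

def required_sample_size(stddev, margin_of_error, z=1.96):
    """Classic sample-size formula: n = ceil((z * sigma / E)^2).
    z = 1.96 corresponds to a 95% confidence level."""
    return math.ceil((z * stddev / margin_of_error) ** 2)

# e.g. a standard deviation of 10,000 bytes, tolerating +/- 1,000 bytes:
required_sample_size(10_000, 1_000)  # -> 385
```

Halving the tolerated error quadruples the number of exemplars needed, which is why the accuracy depends so directly on how many you sample.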
A: Right, so normally, without this, we would only have data on the resulting label set, right, after the keys are dropped.
A: And if you configure your views to include exemplars, it would maintain... it would hold all of the labels.
B: Yeah, so this data is taking our exported aggregations, and then this view of the actual values for your different customers is completely separate from the actual export: pretty much new aggregators, with supported aggregations and useful data on top of things. And then, through exemplars, you can obviously come up with essentially whatever you feel like, even if you didn't initially record it. So here I have percentiles of our bytes-in request, the 50th, 90th, and 99th percentiles, which is somewhat interesting.
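One way to estimate percentiles like these from exemplars alone is to weight each exemplar by the number of measurements it represents; a minimal sketch (the notebook's actual code isn't shown, and the data here is invented):

```python
def weighted_percentile(exemplars, q):
    """Estimate the q-th percentile (0-100) from (value, sample_count)
    pairs, weighting each value by the measurements it represents."""
    pts = sorted(exemplars)                       # sort by value
    total = sum(count for _, count in pts)
    target = q / 100 * total
    cumulative = 0
    for value, count in pts:
        cumulative += count
        if cumulative >= target:
            return value
    return pts[-1][0]

exemplars = [(100, 9), (5_000, 9), (90_000, 2)]   # made-up bytes-in data
weighted_percentile(exemplars, 50)   # -> 5000
weighted_percentile(exemplars, 99)   # -> 90000
```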
A: You're using some standard probability distribution kind of thing, right? Like a random draw to choose which pieces of exported telemetry actually get added into the exemplars?
B: Yeah, we pick... yeah, I mean, it's a pretty common, very simple one, yeah.
B: So for each bucket, we can record an exemplar randomly, and then we also do this... there's also this thing called a sample count, which is how many measurements one specific exemplar represents. That allows you to multiply the sample count by the exemplar value to figure out, say, how many bytes this customer actually sent. And when you have histograms, the sample count will be different depending on which exemplars come from which buckets, and so the less common buckets will have a lower sample count, which means that by doing this multiplication you can actually still recalculate what it would actually be. But then you have exemplars representing more of the distribution, which is useful.
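The multiplication described above can be sketched as follows; the field names are assumptions for illustration, not the SDK's actual exemplar schema:

```python
from collections import defaultdict

def estimate_totals(exemplars):
    """Estimate per-customer totals from exemplars: each exemplar keeps
    the dropped customer_id label, its measured value, and a sample_count
    saying how many measurements it stands in for, so value * sample_count
    approximates that exemplar's share of the true total."""
    totals = defaultdict(float)
    for ex in exemplars:
        totals[ex["customer_id"]] += ex["value"] * ex["sample_count"]
    return dict(totals)

exemplars = [
    {"customer_id": "big-0", "value": 80_000, "sample_count": 2},
    {"customer_id": "small-3", "value": 1_000, "sample_count": 10},
]
estimate_totals(exemplars)  # -> {'big-0': 160000.0, 'small-3': 10000.0}
```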
A: All right, so sample count... for an exemplar, its sample count represents the number of things that were actually in that bucket?
B: And so when you have histograms... say a less common bucket has ten measurements, and that was all within one bucket; then each exemplar has been picked from ten measurements, right. So if you take all the exemplar values and multiply them by 10, you should get an estimate of what all the data together was.
A: Oh, I see, got it.
B: And then, for histograms, that number is per bucket, and so some exemplars are more weighted than others, which is useful. You get data from a wide variety of the distribution, but you also get it to be statistically significant. So we're basically measuring the bias here in how we pick things, and then you can use that to create an unbiased estimation.
B: Sure, like this one, for example... and it's actually right. Anyway, so that's statistical, and then there are the semantic, or trace, exemplars I showed you, which are the things Google finds useful, or most people find useful: they provide a link between metrics and traces, where for some metric you can, I don't know, say that in this top bucket here is a link to a trace that represents when that was called, right. And that's obviously very useful.
B: Yeah, yeah, that's right. I try to separate things as much as possible. So basically I have this exemplar sampler class, or exemplar manager class, which will hold the exemplars for each measure. I have this exemplar sampler for statistical exemplars, for example, and then you can select based on the aggregation you used, and on when to record things. So for trace exemplars, we only record exemplars when there's a trace attached, so when a trace will be recorded, then we sample it.
B: Okay, and for statistical exemplars... when a trace is on but, sorry, not sampled, it might not be recorded, but that would bias the sample. So we just sample from everything, and this manages that: you know what was sampled, and then it just records.
B: And so there's a few different ones. There's the random one, which uses the Joshua link here, which I'll add in a comment; it's selecting a random set from a stream of data, which is... I don't know, it's nice, it's pretty simple. Cool, yeah.
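The linked algorithm isn't identified in the transcript, but "selecting a random set from a stream of data" describes reservoir sampling; a minimal Algorithm R sketch, assuming that is the technique meant:

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Keep a uniformly random sample of k items from a stream of
    unknown length, using O(k) memory (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)           # item survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5, seed=42)
len(sample)  # -> 5
```

Each measurement is seen once and either kept or discarded, which fits a metrics pipeline where you cannot buffer the whole stream.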
B: And then for trace exemplars, we can also have a bias toward the values of the exemplars, and so here what I decided on was min and max: we'll record exemplars close to the min and the max of the data, and only when that measurement also has a trace attached. So you get the general range of the data from that, and then, for the min and max, the exemplar that was recorded is selecting your outliers.
B: So maybe the data was some high value that you do care about, but that's normally the rare case: you're looking at outliers here, basically, is what this does. And then there's the bucketed one, which is for histograms, where, yeah, we have a random exemplar sampler for every single bucket, and then we sample within the bucket.
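The bucketed sampler might look like the following sketch (hypothetical names, not the SDK's API): one random slot per histogram bucket, with the bucket's measurement count serving as each exemplar's sample count.

```python
import bisect
import random

class BucketedExemplarSampler:
    """One uniformly random exemplar per histogram bucket (sketch).
    Rare buckets still keep an exemplar, just with a low sample count."""
    def __init__(self, boundaries, seed=0):
        self.boundaries = boundaries              # e.g. [10, 100]
        self.rng = random.Random(seed)
        n = len(boundaries) + 1
        self.kept = [None] * n                    # one exemplar slot per bucket
        self.seen = [0] * n                       # measurements per bucket

    def offer(self, value):
        b = bisect.bisect_right(self.boundaries, value)
        self.seen[b] += 1
        # size-1 reservoir: keep with probability 1/seen, uniform in bucket
        if self.rng.randint(1, self.seen[b]) == 1:
            self.kept[b] = value

    def exemplars(self):
        """(value, sample_count) pairs; counts sum to total measurements."""
        return [(v, self.seen[b])
                for b, v in enumerate(self.kept) if v is not None]

sampler = BucketedExemplarSampler([10, 100])
for value in [1, 2, 3, 50, 500]:
    sampler.offer(value)
sampler.exemplars()  # sample counts per bucket: 3, 1, 1
```

Multiplying each kept value by its sample count then recovers the estimate of the whole distribution, as described earlier.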
B: They are meaningful for all of them except for gauge.
A: I see, okay.
A: Yeah, okay, yeah, this makes sense. In terms of the implementation, I haven't really taken a look at it too closely, but how invasive is this if someone doesn't want to use exemplars at all?
B: It's hard to... it's very hard.
A: Metrics is complex in general, you know.
A: Mostly because there's just so much in metrics, you know. Tracing is just so easy; it's a well-defined context manager and everything. So, I don't know, which is why we're still in dire need of an SDK spec, and which is why I think it's taking so long.
B: People don't do anything about it, yeah. So what they had in the other OTEP, like the mini views, where exporters can specify which aggregators to use, or set their own views, is very important for us. Yeah, I saw the other OTEP for that; it creates some issues. But basically, on the Google side, what we want to do is be able to set this view inside our exporter.
B: So that users don't have to go about setting them up every time. Like, we make value recorders always a histogram, instead of MinMaxSumCount, and then have the exemplars automatically set up. That would be very useful to us.
A: I guess that's something on the roadmap, yeah. What's your timeline like for this?
A: I don't know if you said this before, but you have to wrap up everything OpenTelemetry-related before you leave?
B: Yeah, and Chris said he would review it, and he also told Andrew to review it, and he's definitely not going to do that. That's okay!