From YouTube: Rust CV meeting 2021 09 29 - Potential Future For Observability In CV Pipelines - Daniel McKenna
There we go. Okay, can everyone hear me? Yep. Right, so I'm just going to do a quick lightning talk on observability in computer vision pipelines, and a project I plan on starting at some point soon, once other things calm down a bit and I get a bit more of my free time back. To introduce myself briefly: I'm Daniel McKenna. I also go by xd009642 online, so you might have seen me around before; I work on a code coverage tool.
So what do we want to solve with observability? Computer vision pipelines are hard to debug. There may be a lot of stages and a lot of transformations: image blurs, feature extractions, various detectors, etc., and that can make things hard to debug, especially when so many algorithms rely on heuristics.
For example, I was looking at HOG (histogram of oriented gradients) and all of the implementations around: OpenCV, PIL, imageproc, dlib. They basically all return different things, and if you look at HOG, that's kind of because of how it works. Often people will apply some sort of blur first, a Gaussian blur or say a mean filter, which they might not expose as hyperparameters. They do a line detection like Sobel, and while some might do something else, most have tended to settle on Sobel. Then they do gradient binning and generate the histograms, but they might compute the orientation gradients over 180 degrees or over 360 degrees. So there are a lot of areas where things can change. Also, when I tried OpenCV out, the default settings actually did multi-scale detections, so I think it actually creates an image pyramid.
So what do people do now? Well, when I work on computer vision stuff, I've in the past used a visual debugger like Image Watch, and I've built OpenCV with debug symbols so I can set breakpoints inside the algorithms and look at the images like this when I've been trying to figure things out. Or, if the code is stuff I've implemented myself, a lot of people will just add some print lines which they later remove.
But obviously we can do things better, because we're engineers, and this is 2021, come on. So what do we want to do? Well, instead of having to rebuild code in debug mode or modify it to add things in, what if we could just write code which can emit images to some sort of data sink that takes them in: something which we can disable and have it be zero or near-zero cost, so we don't actually have to remove it from our production code if we don't want to. There are always gains to removing extra code from production, but if it can be as low cost as possible, it's potentially something we can keep in production and use for debugging.
We have to collect and store these events. The file system may not always be available, and it may have limited room. There may want to be some sort of sampling, the ability to search for things, and other finer-grained filtering, especially if you're running something like I do at my own work, where we have computer vision services in the cloud and you may get thousands of requests.
You can always disable it completely at compile time, if you say wanted to, and for the next version of tracing they're planning on integrating valuable, which is kind of like serde: anything which implements Serialize or Deserialize in serde will be compatible with valuable. That matters because right now tracing can only send string, float or integer data.
So you can see everything that's been spawned. You can see the total time an async task has existed, how much time it spends busy, how much time it spends idle, the number of polls, and you can get performance information like timing histograms. We'd be creating something similar, but it'll be emitting images and it won't be a terminal UI, because of the image parts of it. But it can show similar statistics. I've potentially skipped a bit ahead; so then, for collecting and displaying:
I've also seen OpenTelemetry work really well in production, especially going between different microservices, because it can keep the context of what service is calling what. If you're working on something a bit more microservice-based with computer vision, you can see how your images are moving from service to service and how they're changing, and you can also use OpenTelemetry with tracing.
There is a subscriber for it. What happens is the application has a client which sends the trace data to a collector, which stores it, and then we have a web UI which can query spans and display this essential timeline of what's being spawned, with all the hierarchical information of the pipeline. It can work like this and sort of show associated data as well: if you drop something down, you can see a ton of fields and extra data values, as well as getting timing information. And then there's Image Watch or Bonsai.
So, Bonsai is like a visual programming thing that lets you inspect images in a pipeline. I haven't used it personally; I know Andrew's used it, because he actually mentioned it to me, so that's how I heard about it. But yeah, from what I gather you can inspect the images, work in a sort of more visual-programming style, and modify things. And we have Image Watch, which I've used. So then, bringing all this together:
I'm thinking of a system based on tracing which collects the image data and emits it to some subscriber, which moves it into some sort of serialisable format and emits that to a server, which stores the images and the information we want. Then we can bring in a GUI application, query it, and look at how the pipeline works. And all of this can be completely disabled; when it's turned off, we're looking at things in the order of single nanoseconds just for disabled spans.
But when it's on, you're getting rich, semantic, hierarchical data, and you're able to look at images and inspect everything at quite a deep level, including things happening inside libraries, without having to recompile the library or reconfigure it between your development builds and production. That will hopefully keep the development workflow a lot snappier and a lot more fluid, and make the sort of hard bits of computer vision development a lot easier. And that is me done for this lightning talk.