From YouTube: Kubernetes WG IoT Edge 20210519
Description
May 19, 2021 meeting of the Kubernetes IoT Edge Working Group. Small-group birds-of-a-feather recapping KubeCon Europe edge-related sessions, plus discussion of vision/camera and machine learning use cases at the edge.
A
So Kilton has joined us, and I think we may have one other person joining a little bit late. One thing I wanted to call out was that the session recordings from KubeCon Europe have now been published; they're up on YouTube.

A
There is a CNCF channel dedicated to those, and there was also a Kubernetes on Edge Day that had a full day of sessions.

A
Plus there were three Kubernetes edge-related topics in the main portion of the KubeCon event. Kilton, since you're here, were there any of those that you particularly enjoyed that you might recommend to somebody who wants to try to catch up on YouTube?

B
If you haven't gotten to see it, that's just 10 minutes and it will introduce you to some really interesting concepts, so you should catch that one. Yeah, that stood out to me, and there was another lightning talk too; I can't recall it at the moment, they were paired up. I don't know if you got to catch the lightning talks, Steve, but yeah.

A
You kind of stole my thunder on the Akri one.

A
Top recommendation. And by the way, it wasn't just the lightning talk; it turns out that one of the main sessions at the KubeCon event was a full presentation on Akri and, of course, we've had the Akri presentation here. If somebody's interested in that: Akri, if you didn't know, is integrating device discovery and management into the Kubernetes control plane through CRDs. There's a lot more to it than that, but we had some Akri people present at this group a few months back, and that session is recorded and up on YouTube too.
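(Since Akri just came up: Akri surfaces each discovered leaf device as a Kubernetes custom resource, which means plain Kubernetes tooling can inspect them. Here is a minimal sketch using the official Kubernetes Python client; the CRD group, version, and plural names are taken from Akri's public docs of the time, so treat them as assumptions rather than gospel.)

```python
# Hedged sketch: list the devices Akri has discovered, as custom resources.
# Assumes the Akri CRDs use group "akri.sh", version "v0", plural "instances".
from kubernetes import client, config

def list_discovered_devices():
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    api = client.CustomObjectsApi()
    instances = api.list_cluster_custom_object("akri.sh", "v0", "instances")
    for item in instances.get("items", []):
        name = item["metadata"]["name"]
        shared = item.get("spec", {}).get("shared")
        print(f"discovered device instance: {name} (shared: {shared})")

if __name__ == "__main__":
    list_discovered_devices()
```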
A
I thought it was particularly good because he had a real high-level perspective on where things are headed with Kubernetes and edge in relation to some common use cases, and I thought it was a good general overview. It wouldn't necessarily tell you how to go out there and do something tomorrow, but in terms of just kind of the big picture of things, I thought it was a quite interesting session, done in an interview format; you know, like a press interview of somebody who knows a thing or two about Kubernetes.

A
I see we've got, I think, a first-timer here. Darren, hey, would you like to introduce yourself? We've got myself, and you and I met a while back; I think I referred you to this group. Kilton here, I think, is actually based in your general area, and I'll let Kilton introduce himself after you introduce yourself. What we've got going on here is that we originally had planned a speaker giving us an introduction to SuperEdge, but they couldn't make it today.

A
When that happens, we generally start the meeting but just have a free-form discussion, birds of a feather. So introduce yourself, and maybe, if you've got a topic or two you're curious about or would like to talk about, throw that out there.
C
Yeah, and thanks again, Steve, for the invite, and nice to meet you, Kilton. I'm Darren Shay from Matroid. Matroid is based in Palo Alto; we are a team of computer vision engineers. We're a Series B company; we raised the next round of funding late last year. I think the focus for us is to have this end-to-end solution for enterprises in the industrial setting, and the concept is, you know, drag and drop.

C
There's no-code programming, you know, low code, required to enable more and more developers to generate models, because we really believe complex vision can be much simpler than it used to be. And that's why, Steve, thank you for reminding me: you know, this is the forum to talk about open source, and that's also what we believe is a path to...
A
Just so you know, Darren, too: you joined late, so you missed the introduction, but, as is the policy with the Kubernetes project based on the governance rules, all meetings are public and recorded, and they'll eventually be published up on YouTube. So hopefully that's not an issue with what you're saying here.

A
I can delete this out of the recording before it gets posted, but since you missed that notice and you're not a regular, I just thought I'd mention it. So, on the subject of, I think you said, this being related to computer vision: there's a couple forms of that, like live video versus kind of snapshot static pictures, right?
A
We do have a member who's not here today but usually attends, Moritz, out of Germany, who is using vision in a project that does, I believe, laser cutting for manufacturing, kind of a challenging application. So someday when you're both at a meeting, you two should definitely get together and see if you have some common interests.

A
So Kilton, why don't you introduce yourself for the benefit of... yeah, obviously you and I know who you are, but Darren doesn't, and you're both affiliated with open source plus a startup in the... yes, yeah, the Palo Alto area. Go for it.
B
So my name is Kilton Hopkins. I'm the CTO and co-founder of a company called Edgeworx, and I'm the original creator of the Eclipse ioFog open source edge computing application framework. I'm still the lead of that project at the Eclipse Foundation, and currently the chair of the Edge Native Working Group, which is an Eclipse working group that is like a sister group to this working group that we're in right here, right now.

B
I do all things technical, including doing a lot of computer vision stuff lately. So, Darren, it's actually very timely that you are, I think, doing what you're doing.
B
I also believe that, using open source and solving the common infrastructure (not getting rid of it, but rather solving it), people should be able to not only execute computer vision model building and all those tasks more simply, but should be able to deploy it as well, onto a number of devices, some of which are AI accelerated, some of which are not, allowing you to use distributed computer vision very quickly and very practically.

B
That's the part that's related to your task. So I have the same belief, and there may in fact be a lot that you gather from visiting this group here today that can lead you to some stuff that would be helpful for your company and your vision. That's what this is all about, right: using the open stuff to have everybody benefit. And it's my point of view that infrastructure for edge computing is really the same as infrastructure for cloud computing.
B
It needs to be solved by everybody so that we can all use it to get back to business. There's not any one company that owns cloud, right, nor should there be, and the same is happening for edge. So there's a lot of stuff going on; some of it's a little consolidated, most of it's all kind of spread out, and things are disjointed still, because it's early days. But yeah, there's a lot out there.
A
So, Darren, when it comes to vision, what kind of camera devices are you currently using or hoping to use in the future? Is it kind of off-the-shelf things, you know, like ONVIF cameras and security cameras, or is it specialized stuff that needs to get up to, you know, infrared or heat vision or thousands-of-frames-per-second kind of stuff?

C
Great question. Yes, we are camera agnostic, meaning you can pick any off-the-shelf camera, ONVIF, right, and that can be of different sensors.

C
They have to, you know, hire a bunch of experts, and that became a barrier, or a bottleneck. And I definitely would love to introduce my founder to you guys, Reza, who is also a Stanford professor, and he's the man orchestrating this infrastructure platform, because he also believes this should be available for everybody, not just for a limited group of people. Everybody.

A
And is machine learning slash AI creeping into what you're putting together here too?
C
That's right, that's right. So machine learning, yeah. And it's good you mentioned you have members working on manufacturing, because that's another barrier, meaning those enterprises, I mean enterprises or users:

C
they used to require some very expensive machine vision equipment, and every time, for, like, a new generation of the product, they have to buy new equipment, and that equipment only applies to certain things, one or two things, and could cost like a quarter million, half a million, multi-millions. Versus this new concept, right: it's more flexible.

C
They can do everything in-house. It's open box, not closed box; people can always learn what really happened from their data. You know, because using closed-box equipment, they have no idea how those machines work and why, you know, the result is positive or negative. Versus, you know, we want to open up everything, so they really know what happened in their factory or any operation. Yeah. Well...
A
I think as soon as you combine vision plus machine learning, then there's definitely a call for Kubernetes to get involved, because it sort of implies that you're going to have a multi-tiered system that needs a little bit more management. You know, if it's something as small as five security cameras inside a 7-Eleven convenience store, yeah, that's maybe an appliance that has no need for cloud involvement, no need for data analysis, no need for...

A
Even, you know, when you get into machine learning, I think with today's technology at least, often you're going to have a scenario where you want to do training up in a cloud, where you've got a little more resource to get the job done faster, but execution out at the edge locations. And thus you have kind of a tiered system that, you know, could really benefit from an overall management control plane with some containerization going on.
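(To make the tiered idea concrete: one common pattern is to label edge nodes and pin the inference workload there with a node selector, while training jobs run in the cloud cluster. A minimal sketch with the Kubernetes Python client follows; the label key and container image are hypothetical placeholders, not anything from the meeting.)

```python
# Hedged sketch: schedule an inference Deployment onto edge-labeled nodes.
from kubernetes import client, config

def deploy_edge_inference():
    config.load_kube_config()
    container = client.V1Container(
        name="vision-inference",
        image="registry.example.com/vision-inference:latest",  # hypothetical image
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "vision-inference"}),
        spec=client.V1PodSpec(
            containers=[container],
            node_selector={"node-role.example.com/edge": "true"},  # assumed edge label
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="vision-inference"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "vision-inference"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment("default", deployment)

if __name__ == "__main__":
    deploy_edge_inference()
```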
C
Yeah, I'll definitely take a look. And we are also an early member of Kubernetes; I think we've been using it since 2016, I believe, and it's a great platform for us to deliver different things. We have been using it, and we plan to use it even more and more. So yeah, thanks for the feedback, that's great. But yeah, you also mentioned the cloud.

C
I mean training in the cloud, right. So I'm just wondering, maybe you, Steve, or Kilton: what's your vision of training in the cloud versus the edge? What are the trends you are seeing nowadays? Because that's something I've been thinking about these days too, yeah.
B
So there are the obvious drawbacks to trying to do training at the edge: having the compute power to do the training quickly, and needing to have the data actually go through the process of preparation before you execute the training. If you get the raw data for machine learning, with a proper setup at the edge you could, you know, especially if it's like sensor streams, you can probably train maybe overnight, maybe even in near real time, meaning that you're constantly, you know, polishing things up.
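(A toy illustration of that "constantly polishing" mode: incremental learners can be updated batch by batch as sensor windows arrive, rather than retraining offline. The sketch below uses scikit-learn's partial_fit; the stream and feature layout are invented for the example.)

```python
# Hedged sketch: near-real-time training on a sensor stream via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # e.g., "normal" vs "anomalous" sensor window

def on_sensor_batch(features: np.ndarray, labels: np.ndarray):
    """Called whenever a small batch of labeled sensor windows is ready."""
    model.partial_fit(features, labels, classes=classes)

# Simulated stream: each batch is 32 windows of 8 aggregated sensor features.
rng = np.random.default_rng(0)
for _ in range(100):
    X = rng.normal(size=(32, 8))
    y = (X.mean(axis=1) > 0).astype(int)
    on_sensor_batch(X, y)
```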
B
Maybe you do a batch, and then you've got a break, and you do another batch. But when it comes to computer vision, usually you're talking about a much larger data set, and you're often talking about having to at least classify, you know, the data in some way before you go in to do the training; and then the training you would like to execute quickly.
A
The reason (there are many, I think) is the capex: unless you go out and buy an expensive GPU or a specialized FPGA for this... they're expensive, and if you don't use it 24 hours a day, seven days a week, that capex largely sits there underutilized. So going up to a cloud helps you share it. I mean, it isn't necessarily... I think, Darren, you said "the" cloud, but there's more than one cloud.

A
It's also possible to do it on-prem, by buying the hardware but sharing it across a region or the world. You know, I think the other aspect that drives taking this to cloud is that if your training set is coming from many locations (you know, maybe you've got cameras out there in multiple countries, or multiple locations within even your single city), the training really needs to assemble that to be effective; you don't do it on a one-by-one basis, at one location at a time.

A
So I think that calls for a higher tier, and even if that tier isn't AWS or GCE, you know, from a Kubernetes perspective you can still operate a regional data center that you would call a cloud; it just happens to be, you know, maybe a privately or personally operated cloud. Hey, I see Moritz has just joined us, Darren, and he is the exact individual we were talking about who does vision. So this is perhaps fortuitous.
C
Awesome, yeah. I think we are all mentally in sync; that's why I joined at a similar time, in the same meeting. Nice to meet you, Moritz. Hey there, what's up.

A
Let me introduce you: Darren is actually with a startup based in the Palo Alto area that's doing vision with Kubernetes and machine learning. Kilton and I were just talking about you being somebody who routinely shows up and is engaged in this type of application, and you just gave a talk about it at the Kubernetes on Edge Day too, which I think just got posted up there. But Moritz, why don't you introduce yourself and what you're up to with Kubernetes, edge, and vision.
D
Yeah, sure. So basically, I'm a researcher working at Aachen University on a research project called Internet of Production, and the idea is to have, well, let's say some kind of a system to integrate machine vision and machine learning

D
algorithms into manufacturing machines. Manufacturing machines especially tend to be, well, let's say, quite high-frequency, so we normally deal with sensor data that is somewhere between, for example, for time series data, in the, I don't know, 100 kilohertz range, and, for visual data like camera systems, from, I don't know, 100 hertz up to 15,000 hertz. So really high-performing systems, let's say, which normally require FPGA systems to do
D
the aggregation. And the whole research topic is basically spun around, like: how do we integrate these systems into, yeah, Kubernetes, basically? And how do we leverage Kubernetes to get the code that is analyzing these data streams onto the machine?

D
For, I don't know, 1,500 different FPGAs somewhere in your machine, somewhere in an edge location, it's quite different. So that's basically the research topic, and, well, it's quite fun, because it's, like, a little bit pushing the limit of what's doable right now.
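(To illustrate the aggregation step Moritz is describing: the point is to collapse a 100 kHz raw stream into per-window features before anything leaves the machine, which is the sort of job often delegated to FPGAs. Plain NumPy stands in here purely to show the data-rate reduction.)

```python
# Hedged sketch: windowed feature extraction over a high-rate sensor stream.
import numpy as np

SAMPLE_RATE_HZ = 100_000
WINDOW_MS = 10  # aggregate every 10 ms
WINDOW = SAMPLE_RATE_HZ * WINDOW_MS // 1000  # 1,000 samples per window

def aggregate(stream: np.ndarray) -> np.ndarray:
    """Collapse raw samples into (rms, peak, mean) features per window."""
    n_windows = len(stream) // WINDOW
    windows = stream[: n_windows * WINDOW].reshape(n_windows, WINDOW)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    peak = np.abs(windows).max(axis=1)
    mean = windows.mean(axis=1)
    return np.stack([rms, peak, mean], axis=1)

one_second = np.random.default_rng(1).normal(size=SAMPLE_RATE_HZ)
print(aggregate(one_second).shape)  # (100, 3): 100k samples -> 100 feature rows
```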
C
Definitely, definitely happy to do that. I'm Darren Shay. So Matroid, which I'm working with, is a startup, you know, focusing on computer vision. It's our belief, right: we make computer vision simple. What that means is we have some very user-friendly UI/UX, so the users, developers, don't need to write a bunch of code.

C
They can use our workflow, or platform, to generate models much faster, and then they can generate as many as they want, helping build the library of models to share with many other users who don't have a background in CV/ML, because we really believe those can be accessible, well, I mean, by everybody, right. And our founder, he's also a Stanford professor, and that's also his belief.

C
So we don't have to rely on a specific interface or architecture. And, you know, because it's so powerful, why is there nowadays a lot of barrier to using it? It's really unnecessary. So that's what we are driving at: to, you know, make sure everybody has the benefit of CV, and then attract more developers, more users, onto Kubernetes.
C
Yeah, I think for Matroid, like the facial, like common object detection, you said the license plate: those are quite standard and also already part of our public library. So, you know, the user can just log in and just, you know, try it, and then they can use that to retrain their specific model. For example, if they want to detect a car, and now they want to detect, like, a Toyota, then they can use our model to retrain and detect the Toyota.

C
On top of that, because we are also working with clients on some custom models, meaning the data is not in the public domain, they can also use our framework to create their own model. So we do both.

A
I know a few use cases that have come up in this group that people have been interested in, you know, just on a short-term basis. There are retailers looking for things like utilizing the cameras that are out in the parking lot of a retail venue to identify cars in particular parking slots, you know, for the kind of store-pickup service, or fast-food operations that would carry somebody's food order out to their car when they arrive at the location.
A
The other group that I've seen in this group are people doing vehicles and robotics that are trying to detect... as, you know, the object that has the camera mounted is moving around, they're trying to get bearings and distance to other relevant objects. Do you have examples of that in this framework that your project is targeting?

C
Yeah, that's a great question. In fact, I was just chatting, you know, about the different use cases in retail yesterday, with, you know, another partner and also my colleagues, and it's interesting: in retail, especially during the pandemic, we are seeing people are not going to the store as often as before.

C
However, at the same time, worker safety remains the higher priority, right: making sure they are healthy, they are safe, they are, you know, managing stuff properly. So that, I mean, is what we believe is the more popular use case, versus security, right, making sure no one is stealing stuff.

C
You know, though, I mean, those can be covered more easily by just, you know, security, like, like human security, right. And at the end of the day, we want to bring value to those retailers as well, and that's why we are learning, because yesterday I learned they actually don't care about shoplifting, because it's already baked into their cost, yeah.
A
Retail is interesting because there are a lot of open-ended applications that I've heard talked about. Like even things like: if you've got cameras there that maybe originally were put in place for security, they also can glimpse portions of the store shelves, and maybe keep track of inventory situations, like, hey, all of the paper towels have run out, so maybe somebody should go to the warehouse and restock the shelf; just being able to police things like that. There are also marketing

A
people who are just kind of curious as to what shoppers are actually interested in. You know, it isn't necessarily that they bought it, but if they went to a shelf and lingered around a particular category, maybe seemed to be more interested in one color versus another, that's potentially really valuable information for a retailer to spot trends with, you know. So I think that in the retail space alone there's a lot of opportunity here, even though I think manufacturing is huge as well.
B
I'm gonna jump in with an additional area; there's a whole area that we haven't talked about yet, which is real-time interaction with people. So, obviously, we're not aiming to, like, get rid of the front desk attendants when you check in at the airport, or get rid of the people that are there to help you choose which tablet to buy at the electronics store. But there are certain tasks that we currently have people doing that require you to interact with the visitor.

B
You know, like, let's say the customer in a retail store: it requires you to interact in real time, but it's not really, like, widely varied; kind of the same routine task again and again. A perfect opportunity for using computer vision to build automated systems. I don't frequently talk in this group about the commercial stuff that Edgeworx does, but I think it's worth noting at this point. So we have this system called Darcy, and Darcy is an artificial intelligence meant for doing work.
B
I would say, like, Alexa or Siri: they are AI, they're artificial intelligences that are oriented around helping you individually, right? You know, let's schedule a meeting; let's, you know, like, turn up the music or turn down the lights in the living room. But Darcy is designed to do real-time-oriented work, and now, using computer vision, she's currently out there helping people check in for COVID safety. And so we make a camera.

B
That is what we call a Darcy body, and it has a thermal camera as well as a visual camera. By merging those streams and doing a lot of data computation, we're able to actually have Darcy determine, if your forehead is exposed as you walk toward her, where on you to take your body temperature from, off of the areas around your eyes and your forehead, and whether or not you're wearing a face mask. And then you can actually check in to a school or a church or a college campus, whatever it is, with Darcy, because she actually gives you real-time interaction feedback via a screen, and kind of says:
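(A rough sketch of the thermal-plus-visual idea, not Edgeworx's actual code: detect a face in the visual frame, take the upper part of the face box as the forehead, and sample the aligned thermal frame there. It assumes the two frames are already registered to the same coordinates, which in practice is the hard part.)

```python
# Hedged sketch: estimate forehead temperature from paired visual/thermal frames.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def forehead_temperature(visual_bgr: np.ndarray, thermal: np.ndarray):
    """Return an estimated reading for the first detected face, or None."""
    gray = cv2.cvtColor(visual_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Forehead: upper quarter of the face box (a crude heuristic).
        roi = thermal[y : y + h // 4, x : x + w]
        if roi.size:
            return float(roi.max())  # hottest point around forehead/eyes
    return None
```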
B
Like, hey, please wear a mask. And then: oh, shoot, I didn't realize I had it pulled down. And then you pull up your mask, and then she says: okay, you're good there. And then you show her a QR code, which contains your COVID survey, anonymously, and then she says: okay, great, I'll tie that all together and you are checked in. All stuff that we currently have people sitting in a chair outside the door doing, with a forehead thermometer and a notepad, right. So that's the whole area I want to open here for discussion.

B
I don't want to make this any kind of an advertisement for what we're doing commercially. The whole idea is that when you get down to some repeat interactions, the kind of "let me check in your library book and tell you you're good to go" ones, these are the types of things that we should probably be automating, freeing people to do the things that require higher brain power and more expertise. And I think AI/ML, computer vision, all of this, it's all converging on doing real work, and this is an exciting area.
B
From my point of view, it's like: hey, you know, Steve, you're welcome to, you know, like, the tarmac here; you're about to get on the flight; we're just going to ask you a couple things. Can you show me this? Can you show me your boarding pass, this and that? Let me scan your body temperature. Okay, you're good to go. Well, we're not going to have a staff member doing that for the next five years, but we still want to do it for five years, right; so why not have it automated, yeah? That sounds...

B
It looks like augmented reality, exactly, but you do see the video, because people need to understand what Darcy is seeing, right; so make it like a mirror.
A
Years ago, when I worked in the IoT space at a different company, I know we were doing a lot of business in what, at the time, seemed super high-tech, but now it's old school, and what's going on with machine learning and machine vision, I think, could totally replace it.

A
But it was a perfectly valid use case, so let me describe it. We put in place these systems for industrial facility walkthroughs. It's sort of like, you know, you're all familiar with the situation in a building where security guards might be told to walk through the building once an hour and just check that all the doors are locked and there are no broken windows. But when you get to things like oil refineries and process plants, they have people charged with similar things: to just walk around and make sure no pipe is leaking,

A
you don't hear steam hissing, that kind of thing. But old-school people augmented this with simple barcodes, giving the person doing the walkthrough a barcode scanner. They could prove that they actually went to these locations on a timely basis, and also bring up details: if they spotted a leaking or dripping pipe, or a hissing vessel, they could shoot it with the barcode reader and it would tell them what's in there. You know, is this hazardous? Is this "pull the alarm and evacuate the facility", or is this just water?

A
You know, it strikes me that you could build something like that with augmented reality, where a person walking around a plant could see temperature and pressure readings that are coming from remote sensors, but you visually superimpose them on these things; or even color-code them, like something that you know is hot starts glowing red in the augmented vision, to give people additional clues, help them troubleshoot, help them become super aware of problem areas. And I think that people spent money on this for barcode-related systems that couldn't do half of what is possible today, and there maybe is an opportunity to take this to the modern era.
B
Even... so now, computer-vision-enabled cameras... and I do want to talk about AI acceleration as a topic while we're at it.

B
But you could, Steve, since they're so inexpensive now: you know, little AI-accelerated cameras could be added all over the place, and detecting whether that steam is present or not present, you'll probably be able to get that pretty accurate on a fairly basic camera. And if the cameras are sub-$100, you can just always be watching all of the pipelines and things. So yeah, I think you combine that with someone touring around, and that person is like a superpower at this point, right.
A
Effectively, maybe the cameras are a replacement for the person walking around, particularly in areas where you have to put on a hazmat suit. I mean, it isn't just chemical plants, either; in semiconductor plants it's expensive and time-consuming to have people suit up and de-suit and then do huge long shifts in those things. So anything that you could replace with that kind of automation, that's equivalent to somebody walking around, is potentially a big economic win.
B
That would explain why we're getting so many requests for a Darcy body that is IP67-plus rated, and, you know, maybe one that's even hazardous-environment explosion-proof. Like, yeah: if you have one that you can drop in that's already rated for all these environments, you can just get to work. Yeah.

B
Yeah, it could be anywhere; anywhere has... I mean, Moritz's environments can be hazardous: if the cutting equipment is active, right, you don't have people on the floor, or you try to avoid it anyway. Those are multi-kilowatt lasers.
D
It depends. Like, if you're in a research institute, you are actually allowed to stand right next to the lasers, because we know what we're doing. But I'm definitely with you: you don't want to be next to these things when they go up, and also there's stuff we have flying around; this is powders and all that stuff that you don't want to breathe in.

D
I think that's, like, even one more system, especially, like, if you combine these systems with other sensor systems, like, for example, argon detectors: like, if something in your workspace is actually poisoning you. You can pretty easily plug that into Kubernetes, for example, and build, let's say, like, a security map, and then the people that are working around your shop floor actually know where they have to look.
D
You also get, like, a time representation: if, for example, you have some kind of gas leak, you know where to look first, because, well, the first sensor basically spun up at this point and then others followed, so you actually know where it's coming from. That's, I think, one of the larger use cases for, like, having some kind of data plane where you combine all of these different systems, for example with Darcy, and something like that.
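(The localization idea reduces to something very simple once the sensor events share a data plane: order the alarms by timestamp and start looking where the earliest one fired. Sensor names, positions, and times below are invented.)

```python
# Hedged sketch: infer the likely leak origin from alarm trigger order.
from dataclasses import dataclass

@dataclass
class Alarm:
    sensor_id: str
    position: tuple   # (x, y) on the shop-floor map
    timestamp: float  # seconds since epoch

def likely_origin(alarms: list[Alarm]) -> Alarm:
    """The first sensor to trip is the best first place to look."""
    return min(alarms, key=lambda a: a.timestamp)

alarms = [
    Alarm("argon-03", (12.0, 4.5), 1621456301.2),
    Alarm("argon-01", (3.0, 4.0), 1621456298.7),  # earliest: start here
    Alarm("argon-02", (8.5, 4.2), 1621456299.9),
]
print(likely_origin(alarms).sensor_id)  # argon-01
```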
B
Very cool, very cool; it makes sense. In the time we have left, can we talk about AI acceleration? Darren, I'm curious: do you guys tune your models to any particular AI accelerator chipsets? Do you prefer GPUs? Have you tried out any of the Intel VPUs, from the Myriad X, or the Google TPUs, the Coral?

C
Do you pick, or do you let the developer pick, the AI chip, or, you know, the different type of GPU or CPU? What's your vision?
B
What product could you release that would have that in it? It's early days, so you might be looking at an industrial design cycle in order to launch the product that has your chosen accelerator in it. The developer's disconnect, you know, from their process to the production deployment is so severe that I think it's stonewalling everybody from getting their computer vision applications out the door at production scale. Pilots are a different story: you can get pilots out the door just by choosing something; you've got a dev board.

B
In addition to that, let's say that you have an AI accelerator that you've preferred, and your customer tells you they have these other ones, and you say: okay, well, we could revise the way the model runs, and so on. How do you do the deployment where you detect what capabilities are on the devices? The worst environment I can think of is one that has three different AI accelerator chips across all of its, you know...
B
Device 30 is this, and device 35 is this. So part of edge computing infrastructure is detecting whether there are any accelerators, or available FPGAs that could be flashed, or anything like that, and passing that information up into the orchestration layer so that you can make decisions. I would deploy a different container for using Intel's Myriad X than I would for using Google's Coral. Now, it doesn't mean that I didn't build both of them.

B
Maybe I did, so that I could use them across a wider variety of hardware; but which one am I currently looking at? And then, on top of that: you have, let's say, a Myriad X and an x86 CPU; then you have a Google Coral and an Arm 64-bit CPU. You have permutations of images that you need to throw down. So knowing what it is you're looking at, detecting it, and then orchestrating down the right thing built for that: that is the future.
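(A hedged sketch of that detect-then-orchestrate flow. The device paths and image tags are assumptions for illustration; a real agent would enumerate PCI/USB devices properly and report the result to the orchestrator, which then pulls the matching image permutation.)

```python
# Hedged sketch: detect the local AI accelerator and pick a matching image.
import os
import platform

def detect_accelerator() -> str:
    if os.path.exists("/dev/apex_0"):   # assumed Coral Edge TPU (PCIe) device node
        return "coral"
    if os.path.exists("/dev/myriad0"):  # hypothetical Myriad X device node
        return "myriadx"
    return "cpu"

def pick_image() -> str:
    accel = detect_accelerator()
    arch = platform.machine()  # e.g. "x86_64" or "aarch64"
    # One image permutation per (accelerator, cpu arch) pair, as discussed.
    return f"registry.example.com/vision-inference:{accel}-{arch}"

print(pick_image())
```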
B
There's no way that we're going to standardize on a chipset, right; that's never happened. And so, since that's not going to happen, it means you're going to have different developers, and I would think that to address the market you really would have to have infrastructure that would allow you to use multiple. The same way that you say your stuff is camera agnostic, I think being AI accelerator silicon agnostic... you have to do that, unless you are Intel or Google or Xilinx; in those cases you can be opinionated.
A
Now, I obviously am coming from a potential position of bias here, working for VMware, but another model for perhaps looking at this GPU situation is to abstract them out into a virtualization layer. You know, effectively that's what happened with x86, where there are different variants of these, but there are techniques to kind of standardize them, to make your applications portable. And maybe, in the long run... you'd hate to lock it down, because there's so much going on with machine learning AI acceleration chips that coming up with the least-common-denominator

A
abstraction is probably not something you want to do at an early stage. But at some point it might get to a situation where you could come up with an abstract model that is consistent across these; maybe the version from one manufacturer runs faster at a higher cost, but from the perspective of the app that would consume it, let them access a pool of these. And once again, particularly when the learning stage is not required 24x7, having this back-end pool be a shared resource has a lot of economic advantage, potentially.
C
Yeah, and then, Kilton, to answer your question: we are a startup as well, so this is still something we are discussing internally, especially our edge strategy. Like you said, there are so many things we can choose from, and we can build, and it's far from being standardized. You know, should we have our own edge? Should we follow some others?

C
We don't have an answer yet. So that's why, in the meantime, yeah, we're gonna keep our options open, and, you know... so that's my answer.
B
I think that's a fine answer to have. Often, because of the success that we've had over the last couple of decades centralizing and consolidating a lot of things that used to be very dynamic, or, I guess I would say, diverse, we tend to think that now we're supposed to have a unified answer for just about everything. Well, it took us so long to get there with some of the architectures that we currently are enjoying, that seem to be stable.

B
The edge is really just an extension of the internet, and we have to solve it the way we did before, which is, honestly, do what Steve was saying: go in with caution, not to lock yourself out of the future, but try your best to provide something that allows you some flexibility today, and work with this and work with that so you can make headway. In 10 years' time we'll be talking about whatever is the standard of the day, the standard approach that works; and until then I think you really shouldn't be too opinionated.
B
You should be open to working with whatever it is that people want to get their hands on. I think that's the only way to go.
C
Yeah, I appreciate your feedback; I feel better, because, I mean, every time we have this discussion it can be intense; it can be, like, days of back and forth.
B
Yeah, within the team: wait, which is the right path? There is one thing that I will say, because this is basically what we do in this working group and so on: if you have infrastructure issues, meaning, like, you're wondering how do I deploy, you know, an update to both the model and the code that executes it, because we're gonna change out the driver that runs against the AI-accelerated silicon...

B
that's what I call edge computing infrastructure. If you're wondering how to, you know, retain the data on the device but make it accessible to one of Steve's proposed on-premise training supercomputers, right, which needs to tap into that huge pool of sensor data, and how do you allow it without opening a VPN?

B
This is the bread-and-butter, standard infrastructure of edge computing. It needs to be, in my opinion, it needs to be done in open source, contributed to by everyone, and solved just once or twice, right. In the end, we don't need 11 versions of it; we need just a couple of major paths that you can take, that all have a good ecosystem, because none of that stuff is what matters to your end business solution.
B
It's the same everywhere, right, and it doesn't matter what you're running; it's all running on top of the same type of pipelines and stuff. So for edge computing infrastructure, I would think, if you're not doing it with open source stuff, then that's going to be a painful future. But for all the other stuff on top of that, I think there's a lot of business advantage to it, right; make your picks of what you're going to use here and there, and what runs faster, and so on.
D
Yeah, I just thought about: the whole discussion is also about securing your workload on the edge, on maybe an edge partner that you don't trust. So, for example, I write my own AI algorithm that I want to deploy on an edge system that I don't know. How do they

D
check my model, how do I protect it? There was our paper on that... when did we release it? One year ago, somewhere there.

A
On that topic: are you talking just about distributing binary artifacts, or pieces of code, to be run at an edge location, or is this an issue of who is authorized to manage, or have administrative authority over, a system?
D
Well, both. But, like, my imagination is always like: I want to deploy some kind of code from somebody. Of course, he has to make sure that his code is not stolen by anybody else, and at the same time I have to make sure that the code that is running is secured, so it's not doing some crazy shenanigans on my system, and, maybe, I don't know, accessing the camera and accessing everything else without being allowed to.
A
If you manage to package this in containers, you know, Docker images if you will, there are mechanisms for signing those, so there's no reason to invent something to establish provenance. And then there are mechanisms with the common image registries that hold and publish those containers that enforce governance, so that you, as a policy in your organization, can say: we don't allow anything that isn't signed off by these five certificates.

A
If it isn't signed by one of these entities, it's not going... you know, we're not using it; it is not being held in our registry, not being delivered to anybody. And those might get the job done. When it gets to edge devices that handle binaries that are not container images, though, I'm not sure, unless you forcibly kind of repackage them as a container image just to use it as a delivery mechanism, a vector. Whether there's something for that, I can't say I've come across it, but I sure appreciate the need for it.
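(One concrete, existing instance of the signing mechanism mentioned here is Docker Content Trust: with DOCKER_CONTENT_TRUST=1 set, `docker pull` refuses image tags that lack valid trust data. A minimal wrapper sketch, not a full admission policy:)

```python
# Hedged sketch: only pull container images that carry valid signing data.
import os
import subprocess

def pull_signed_only(image: str) -> bool:
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")  # enforce signed images
    result = subprocess.run(
        ["docker", "pull", image],
        env=env, capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"refused (unsigned or untrusted): {image}\n{result.stderr.strip()}")
        return False
    return True

# Example: this succeeds only if the tag has valid trust data.
pull_signed_only("registry.example.com/vision-inference:latest")  # hypothetical image
```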
D
I think I heard a talk on Wasm, so basically WebAssembly, touching on that a little bit. I found it really interesting, so I'm basically just bringing it up here, because I think that would probably be the next step, by, like: you provide the edge with all the connectivity, and somebody else provides the software for it.
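(The security appeal in a nutshell: a WebAssembly module only sees what the host hands it, so untrusted code cannot reach the camera or filesystem unless explicitly allowed. A minimal sketch with the `wasmer` Python package follows; the exact calls reflect its 1.x releases and should be treated as an assumption, and a compiler package such as wasmer-compiler-cranelift must be installed.)

```python
# Hedged sketch: run an untrusted module in a wasm sandbox with Wasmer.
from wasmer import wat2wasm, Store, Module, Instance

wat = """
(module
  (func (export "sum") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

wasm_bytes = wat2wasm(wat)        # compile the text format to wasm bytes
store = Store()                   # default engine/compiler
module = Module(store, wasm_bytes)
instance = Instance(module)       # no WASI, no filesystem, no camera: just `sum`
print(instance.exports.sum(40, 2))  # -> 42
```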
A
I'll have to check that out; that Wasm stuff is definitely in my

B
queue for something, yeah. One of the most popular frameworks for Wasm is called Wasmer, and that's Syrus Akbary, and I think he would be happy to come in and present to this group. So let's see if we can get Syrus to come in and give a talk. Moritz, I think you've touched on an area that... I think we've timed out for today.
B
What would be good in the future is... we're definitely moving beyond just Linux kernel containers, right, in the world of edge computing, and my plan is to integrate Wasmer with Eclipse ioFog, such that you can leverage it, or you can leverage Linux kernel containers, or whatever. But boy, is there a bunch of work to be done. Yeah, I do think that the security model offered by Wasm is...

B
I think I'm happy... I need to catch up with Syrus anyway, so why don't I ask him if he'd be willing to join in, you know, I guess, well, a month from today, right, at this time.
A
Let's call this to an end; we're already three minutes over. So thanks, everybody, for attending. For Darren, you're new to the group: we have a meeting intended for Asia that's about two weeks from now, but it's in the middle of the night in your time zone; and then this meeting will meet exactly four weeks from now. So once again, thanks everybody for attending. Sounds good, nice to meet you all.