From YouTube: Kubernetes WG IoT Edge 20200115
Description
January 15, 2020 meeting of the Kubernetes IoT Edge Working Group - discussion of Kubernetes edge architecture
B
Oh, it's super easy: a couple of lines and you're started in a minute. And the fact is, those claims are actually true; there's not much to it, even on something like what I used, a Raspberry Pi 3 B+, which is 2 gig of RAM, and it runs and still has plenty... well, not an enormous amount of headroom, but enough to run a container or two.
D
How do they actually run the containers in there? Because I have tried running containers which access the host USB and which access the GPIOs. I haven't tried this with MicroK8s, and it depends on how they actually start the Docker container in there; I mean, Minikube uses some kind of VM magic.
D
In the background, do you know how MicroK8s does it?
B
Well, both k3s and MicroK8s have you make a change from the defaults on the underlying Pi to get the container runtime to run the way it's supposed to. I don't remember offhand; it's very prominent in the documentation of one of them and not so much in the other, but I can't remember which was which. It just warns you that you might have issues, if you don't flip that on, when it comes to accessing the I/O.
B
I'm not sure, you know, what the limitations are on the OS letting you get through, whether you might have to run your containers privileged to get at the underlying hardware. I think it may depend on what you're trying to do, but running just an nginx container worked for me; I wasn't trying to use USB or the GPIO.
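The host-device question above usually comes down to the pod spec. A minimal sketch of the kind of manifest involved, assuming a Raspberry Pi node; the pod name, the image, and the choice of a blanket `privileged` flag versus the narrower `/dev/gpiomem` mount are all illustrative, not something either distribution mandates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpio-reader                    # illustrative name
spec:
  containers:
  - name: sensor
    image: example/sensor-app:latest   # hypothetical image
    securityContext:
      privileged: true                 # blunt but common way to reach hardware
    volumeMounts:
    - name: gpiomem
      mountPath: /dev/gpiomem          # Pi GPIO device; narrower than all of /dev
  volumes:
  - name: gpiomem
    hostPath:
      path: /dev/gpiomem
```

Whether `privileged: true` is actually needed, or a hostPath device mount plus the right group membership suffices, depends on the container runtime and the host OS, as discussed above.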
B
Yeah, I think a lot of systems presume that, if you're getting direct access to the hardware, it ought to be a privileged operation. Granted, there might not be a way to steal information out of another container by getting at the GPIO, but there are a lot of hardware things at just random addresses where you could pierce security boundaries, so maybe, as a default, they lock down everything. I don't know.
A
Yeah, that's a problem. And also, I don't know, have you tried to do a long-running k3s or MicroK8s? Because my friend just ran it, and he let me know that with k3s, after it's running for a while, the API server has a cache. So it's not like they claim, that they only use a few megabytes, or less than 100 megabytes; it grows really fast, up to 300-400 megabytes of memory consumption. I haven't tried that myself yet, so I will give it a try, yeah, maybe.
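The memory-growth claim above is easy to check empirically on a Linux box by sampling the k3s server process's resident set size over time. A minimal sketch; the helper names are ours, and only the use of the standard `/proc/<pid>/status` interface is assumed, nothing from k3s itself:

```python
import re

def parse_vmrss(status_text: str) -> int:
    """Extract resident set size in kB from a /proc/<pid>/status dump."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    if m is None:
        raise ValueError("no VmRSS line found")
    return int(m.group(1))

def rss_of(pid: int) -> int:
    """Current resident memory of a live process, in kB (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        return parse_vmrss(f.read())

# Sample the k3s server PID periodically (e.g. from cron) to see whether
# memory really grows from under 100 MB toward 300-400 MB over a week.
sample = "Name:\tk3s-server\nVmPeak:\t  512000 kB\nVmRSS:\t  98304 kB\n"
print(parse_vmrss(sample))  # 98304 kB, i.e. about 96 MB
```

Logging these samples alongside timestamps gives exactly the growth curve the speakers say they have not yet measured.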
B
I haven't checked it this evening, because I've been away on a trip, but it was running at least a week and it didn't crash or anything, is about all I can say. But I didn't go back to check for growing memory usage or anything like that on a Grafana dashboard. Yeah, in fairness, I'm not sure that either of those claims to be a full production-ready release yet. Maybe they do, but a lot of those things have version levels that are below 1.0, implying that maybe they're still in the development stage.
A
So yeah, I briefly talked to Sean from Rancher while doing the KubeCon thing. He told me they already have one million deployments of k3s. However, the majority is using k3s more like a resource-constrained runtime on the IoT side, instead of the whole control plane, the cloud-edge hierarchy architecture.
A
So I am really interested to see what the use cases are, how they leverage this, and basically how they use it in production. Is the purpose more like a cloud-edge architecture, where the cloud oversees or shadows the whole edge side, or is it more like independent edge management? Because they claim a few users have a couple of small machines, so basically it's a standalone machine.
B
If they have stats and they're willing to disclose them, even what the figures are for running on ARM versus x86. And I know I've seen an awful lot of bloggers writing about using k3s just as a personal home lab, you know, to learn Kubernetes, because it's small; so if you're trying to do this at home, you don't need much resource.
B
I know I've got a lot of co-workers who have just stood up a cluster with Raspberry Pis, just because it's so inexpensive, and it actually gives you a real cluster to leave around, built very inexpensively. People are using those even for home automation projects. So I wonder.
A
That's likely their purpose: more like running independently, right, individually, as their own platform. I don't know how they compete with Minikube. I think it's probably much smaller than Minikube, and maybe they have potential to cluster, to make a cluster of different k3s instances, or... yeah, I need to do more research on this.
B
It was kind of interesting: I met one of the Google people that supports Minikube, and they actually had some ability to detect the type of operating system being used to download Minikube, and were surprised that a slight majority were coming from Windows boxes rather than Linux.
A
I think it depends on the community, right? For a developer it's more standard to have a MacBook or MacBook Pro with an external monitor; however, for regular people, maybe Windows is more popular.
E
Right, speaking of which: I think we're all probably fans, but I've noticed that when I've been doing workshops for edge computing and stuff, like ioFog workshops, at least 50 percent of the crowd turns up with Windows.
B
I'm a fan here, though maybe not for conventional reasons. I've got a co-worker who just flipped over; I think he's running his MacBook on Linux. But I chose what's nominally a Windows box, just because you can get it with more RAM.
E
That's true. And so, on the topic of, you know, pulling down Minikube on Windows: I think people, from what I've seen anyway, want a local environment that translates over to where they're going to be doing production stuff, but they want to take it with them as they go. And you know, that's one of the things that we do with ioFog: let you stand up the whole environment right on your laptop, whether it be Windows or whatever. And I think, you know, this is not the first time.
C
You would like to have it locally, like your kind of IDE experience, so you can run things quickly and then iterate over your code quickly. But on the other hand, these systems are becoming too big to be run even on a more powerful laptop. It's still, you know, a pain.
B
My own belief is that those super-thin laptops just can't get decent cooling; I used to be a hardware designer. Those really skinny ones, I contend, especially when they put an i7 in them, will run at that high speed for about two minutes max, and then they'll thermally down-rate, and you don't really get the speed. I think they're just glamour items for an executive to show off, and no developer should be on one of those skinny ones. I mean, the physics of a fan in that thin form factor.
B
I made the mistake... you know, I get to refresh it every few years, and the one I'm on now is sort of the mid-range 15-inch, but they have a 17-inch that can now go to at least 256 gig of RAM and three hard disks, and you can put Xeon chips in them if you want to. The trouble is, the thing was just such a power pig, and the 17-inch screen was unusable in an airplane; the power supply also made it unusable.
E
Right. Can we... are you guys interested in talking about the developer experience for IoT and edge stuff, in relation to, like, the data streams and the environments that you wish you could test in, but that are never going to be available on a laptop, such as you would like? Go ahead and paste your laptop into the chat.
B
They talk about landing Kubernetes there, but not the whole CI/CD workflow: building container images, getting them out there, you know, using the CI/CD. And one of the things, Deon, you just mentioned is that your build tools are tough to fit on your laptop. But one of the things in this article was the point that it was important to minimize the size of your container images, and you don't get that without putting some thought into it.
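The usual way that "put some thought into image size" plays out in practice is a multi-stage build, where the toolchain never ships to the edge device. A generic sketch, assuming a Go program; the program path and names are illustrative, not taken from the article being discussed:

```dockerfile
# Build stage: full toolchain, never shipped to the edge device
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
# Static binary so the runtime stage needs no libc
RUN CGO_ENABLED=0 go build -o /app ./cmd/sensor   # hypothetical entry point

# Runtime stage: only the binary, a few MB instead of the ~800 MB build image
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

For interpreted languages the same idea applies with slim or Alpine base images, which matters doubly on constrained edge hardware and slow uplinks.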
B
Yeah, well, I mean, if you look at that article, it's really long, and I think there's more in there than you could put in a 35-minute presentation, if you covered all the topics in that blog. It's actually, I think, typical of blog posts, where I feel that in a long-ish blog you can cover a lot more than you would in a presentation.
C
If you're going to run a cluster on the edge, or a device on the edge, the question is how you're going to keep developing that software and push it out there. So, Moritz, I don't want to put you on the hot spot, but this crosses a little bit with what we talked about over the past couple of days. So.
C
In a nutshell, Moritz, we started considering a new white paper. We talked a little bit about Kubernetes architectures on the edge and started laying it down, and then exactly this topic popped up, where I said, maybe we can extend the white paper and talk a little bit about the developer experience.
D
I'm not quite sure if it's ready to share, but let me give you a quick run-through.
D
Okay, perfect. So yeah, basically the idea of this document, which we had in our research project, was to get a broad overview of different architectures on the edge. Because right now, I think, there's a lot of hype around the edge, but nobody really knows what it's really about: how do I actually build systems with Kubernetes on the edge, or with containers on the edge?
D
I just copied the stuff from James Kirkland, which he talked about in his last meeting, and my ultimate goal right now would be something like this in the end: you have the different architectures on the right, and then you have flexibility, performance, security, robustness and so on.
D
As a quick overview on top of the different architectures, so that, of course, if you are starting to implement new architectures or new products, you can pick the architecture that you want to go with. Because maybe when you just do, like, small, I don't know, IoT edge devices which just, I don't know, read the temperature, you want to go fast.
D
Flexibility and security might not be the biggest concern for you, while other applications might be super critical. So yeah, I just got started over the Christmas holidays, sent it around to James Kirkland, to John Kilton and a few colleagues of mine, and we got started talking a little bit on it. And I think, John, you suggested that we might broaden up the whole thing, which I find a fantastic idea; so we kind of go away from just Kubernetes and maybe go more toward architectures on the edge.
D
One of my ideas was to maybe split it into two working papers: we do the more theoretical stuff on architectures in one paper, and then more like a hands-on guide in another one, where we actually pull in different projects and say, okay, this is a use case for this project, and this is a use case for this architecture, and so on.
C
So, just to explain my thinking: when I saw that you were mentioning operators and pub/subs, my brain clicked and said, maybe we can add this. But giving it an afterthought, I thought, you know, yeah, maybe it's actually two papers. One is on the Kubernetes architectures; another is on the cloud-native, edge-native application development. So yeah, we might start with what you had in mind, see how long it is and whether it can stand by itself, and then...
E
I definitely have my favorites, but they're all alive and kicking at the moment. So I find that very interesting and definitely worth documenting the way that you've structured it. I know it came from, you know, what James first presented, and then you're expanding on it, but I think this breakdown is going to be really helpful for everybody to see how on earth do I use Kubernetes in tandem with, or at, the edge directly. Yeah.
C
Yeah, so James and I submitted a session for KubeCon EU on this topic, so I think we can work on it here. Basically, yeah, it's easier to get the slides from the white paper than the other way around, right?
C
Yeah... I don't... when is it?
B
It's end of March, the last week of March, and I think it leaks into the very beginning of April, and I'm speaking then. The other issue is, I didn't pay attention to when they're supposed to get notifications out, but a lot of us procrastinators don't actually fully commit to the presentation until you get the notice that, yeah, the talk was accepted.
C
And we can definitely, even if James's session isn't accepted, we can definitely at least mention it at the working group session right here and get more eyes on it.
D
Yeah, I mean, I'm not like the super expert on every architecture. So if you just leave me little hints in there, I can try to get a little bit more review on the literature and everything else, because coming from an academic standpoint that's what we normally do, and I'm quite good at that. So we can definitely do that: just try to get a broad overview of all architectures, and then we can pick the different ones and maybe try to get a nice paper together.
B
Yeah, I mean, I can't say that this list is wrong, but another way to approach it is just to go by the number of nodes. You know, you start with zero Kubernetes worker nodes; in other words, maybe the control plane is just managing something that's less than a worker node, like just a runtime that's on some box but not claiming to be a worker node. Then up to a single worker node, and a cluster.
B
And a zero-node setup might sound bizarre, but my contention there is that you've done something like using the Kubernetes control plane with a CRD or something to manage a containerized workload that is technically not running on a worker node, in that maybe it doesn't have a kubelet at all.
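The zero-worker-node pattern described here rests on registering a custom resource that an external, non-kubelet agent reconciles. A minimal sketch; the group and kind are invented for illustration, and the `v1beta1` API version is simply what was still common for CRDs at the time of this meeting:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # CRD API still widely used in early 2020
kind: CustomResourceDefinition
metadata:
  name: edgeworkloads.example.io           # hypothetical group and kind
spec:
  group: example.io
  versions:
  - name: v1alpha1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: edgeworkloads
    singular: edgeworkload
    kind: EdgeWorkload
```

An agent on the edge box watches `EdgeWorkload` objects through the API server and starts the containers itself; the control plane stores and distributes desired state, but no node ever registers a kubelet.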
D
It's good. The thing we just copied here, like, the more I read over it, it's basically two things: we have Virtual Kubelet, MicroK8s and small K8s installs, which are basically about how to manage Kubernetes; then we have CRD-based agents, which, in my understanding, are more like an architectural pattern; and then we have the full K8s install, which is again K8s on the edge. So we might need to pull this one out, yeah.
E
What I guess we'll probably want to mention too, then, is what federation looks like, right, across whatever Kubernetes is at the edge. Otherwise you're talking about basically an on-premise control system. Not how to do it, but just to say, right, that the architectural approach requires federation if the edge nodes are in fact full clusters.
B
Yeah, to some people edge means lots of them, rather than where they're located; to others, edge has to do with the geographical location. But once you get to hundreds, thousands of them, if you are using full Kubernetes rather than just worker nodes at the edge, then federation, how you manage thousands of instances of Kubernetes clusters, is arguably an unsolved problem, although there are vendors who maybe contend that they are working on it or have solutions.
B
I mean, I think it would be valid if you sold some product that happened to have a Kubernetes cluster embedded and had no need to connect to the outside world. There's no rule that says you couldn't build that. So perhaps there are people with thousands of clusters that don't have any interest in federation.
E
Yeah, and in that case I would argue that each cluster is an independent case, right? Because then thousands of clusters doesn't matter; each one's on its own, yeah.
E
But industrial automation is probably one of those cases, yeah, yeah.
E
So, Moritz, what can we do to help you get this paper advanced? It sounds like we just gave you some real-time notes. Yeah.
D
Maybe, yeah. Just give me comments, and give me as many architectures as you know; like, just put everything in there, and then I will try to put all the things together and then draft some written stuff out of that.
D
Give comments on the architectures I already put down, whether they're right or wrong; I don't care, like, comment your heart out.
B
You know, an interesting thing, getting back around to how you do apps out at edge locations: if anybody comes across any interesting blog posts or articles or talks on running functions out at the edge, I'm kind of interested in the subject. You know, a lot of people talk about getting container images out there and running a containerized app, but suppose I want to run my own lambda out at the edge. What's out there to support it?
B
Yeah, if it's a proprietary black box, it's tough to learn from. Other than that, Kilton, maybe... it sounds like neither one of us is a subject-matter expert at this time, but if you wanted to have an organized learning activity, directed toward giving some joint presentation at some conference in the future, yeah.
E
That's right. I mean, that's how I got started with Docker way back in 2013: I agreed to tell somebody about it. But in all seriousness, you know, I'm road-mapping what we're going to add into the ioFog runtimes, and we've got to add serverless, because we've got demand for it, and I've got to figure this out. So I'm happy to dive in with you, Stephen. I did hear a complaint.
B
There was a pre-conference called Rejekts at KubeCon North America, and at the Rejekts conference, in a hallway conversation with somebody (it wasn't even a presentation), we had an interesting discussion about the startup time.
B
It was a person associated with OpenFaaS, which is a way to get functions running on Kubernetes and, I think, potentially on non-Kubernetes platforms, complaining about how the transition from zero instances to one actually has a big startup penalty on Kubernetes. And, you know, the comment by one of the architects of Kubernetes was that, when we designed these control loops in Kubernetes...
B
...if you make a declarative statement that you want one instance of this container, and it took a second to start that up, that's going to pass all the test cases we have. And yet, if this is a function call that you'd expect to take 10 milliseconds, 100 milliseconds, having it be so unpredictable that, if it happened to go quiet, so there are no back-end services running, it suddenly takes several seconds versus a hundred milliseconds...
B
It's a tough call. You know, one of the measures of production-worthiness is that the behavior is predictable, and here it becomes unpredictable. And the hallway conclusion was that, at this point in time, hosting serverless on Kubernetes, that might just be something you have to live with, because if the act of bringing up your serverless back end requires instantiation of a pod running somewhere, there's no guarantee, short of never letting it get to zero.
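The scale-from-zero penalty being described can be captured in a toy model; this is our own illustration, not OpenFaaS or Kubernetes code. The point it shows is that only the first call after the backend has been scaled away pays the pod-instantiation cost, which is what makes latency unpredictable:

```python
class ColdStartGateway:
    """Toy model of a scale-to-zero function runtime: the first call after
    the backend has been scaled down pays a large startup penalty."""

    STARTUP_MS = 3000   # simulated pod-instantiation cost (seconds, not ms)
    INVOKE_MS = 100     # steady-state function call cost

    def __init__(self):
        self.backend_up = False
        self.cold_starts = 0

    def invoke(self) -> int:
        """Return the simulated latency of one call, in milliseconds."""
        cost = self.INVOKE_MS
        if not self.backend_up:
            self.backend_up = True
            self.cold_starts += 1
            cost += self.STARTUP_MS
        return cost

    def scale_to_zero(self):
        """What the autoscaler does after an idle period."""
        self.backend_up = False

gw = ColdStartGateway()
print(gw.invoke())   # 3100: cold start, seconds instead of milliseconds
print(gw.invoke())   # 100: the warm path is predictable
gw.scale_to_zero()
print(gw.invoke())   # 3100: the penalty returns whenever we hit zero
```

The "never let it get to zero" remedy from the conversation amounts to never calling `scale_to_zero`, trading idle resource cost for predictable latency.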
E
Guys, but I mean, there's the Firecracker VM, right? I think that's what it's called, which was, I think, the basis of the AWS Lambdas, if I'm not mistaken; fact-check me on that, please. So anyway, there are different ways of encapsulating your serverless function environment, right? I would think that if you are going to host a serverless environment for dynamically, you know, scaling up and down...
E
...you would want that infrastructure to, as you said, remain at a count of one or greater, so that you don't have the startup time of the engine itself. It's like: I'm not going to wait for the OS to boot to start the application, if the OS is supposed to be always online, waiting for me to, you know, start up an app.
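Keeping "a count of one or greater" is exactly what Knative Serving's autoscaler annotation expresses. A sketch assuming Knative is installed on the cluster; the service name and image are illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: edge-fn                                 # illustrative name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"   # never scale to zero: no cold start
    spec:
      containers:
      - image: example/edge-fn:latest           # hypothetical image
```

Setting `minScale` to `"1"` trades the zero-use-zero-cost property for predictable invocation latency, which is the tension the discussion below turns on.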
E
The OS boot, at least, yeah. I'm going to actually find out what maybe the right approach is for, like, an IoT gateway, if you're going to offer serverless on it and also have, you know, a container environment, and also make those things play nicely together.
B
But I mean, for some consumers, the way they were sold on serverless is that if there's zero use, there's zero cost. But if you're always staging one, and it's your own equipment, which I think at the edge might be a fair statement, it is never zero cost. So is it really serverless, by whatever you call the definition of it?
E
It reminds me of the, you know, the fuel-saving and emission-saving trick that a lot of automobiles use now, which is that the engine turns off when you're stopped at a red light, right. And then, you know, you just can't drag race anymore. It'll start right up as soon as the light turns green, and you're not really missing anything, except maybe half a second, right, but you can't drag race anymore. And it depends on what your use case is.
B
Well, the interesting thing is, if I made a car, I might be able to get away with some prediction. Because suppose they put a camera there and you're watching the traffic light in the other direction, and you see it turn yellow, so you get advance warning that you're about to get a green, and you start it. I mean, that might actually be a desirable feature for one of those cars. But unfortunately, with function-as-a-service, I'm not sure... it's almost like there must be some math proof.
E
We'll have quantum communication, which is, you know, near-instantaneous, for the infrastructure, and then regular communication for the control-plane call. How's that?
E
Yeah, in all seriousness, I'm thinking that probably having the serverless infrastructure at the ready, if you're really going to be changing workloads quickly, means just bearing the cost of having that infrastructure sitting there and consuming resources, ready to take on jobs. That's what I'm assuming, but I'm going to find out as we dive into it.
E
That's interesting: how can we, you know, make the frozen-dinner version here, right? Basically bake it, get it ready, have it just isolated, frozen, and then just thaw it real quick. Yeah, I like this.
D
I tried this out a few weeks ago and it worked quite nicely; I think it's from Bitnami, yeah. And you're basically up and running in, like, five minutes: you just download the pattern and you're good to go, and I think they run the functions in a container.
E
Cool. Well, to bring it full circle: if you're building serverless for edge environments, or you're building containers for edge environments, what is that like? What is your tooling like, and what is your life cycle like? With serverless you don't have to worry as much about processor architecture; with containers you do. That's something that's always on my mind, right: someone's building on a Windows laptop, and they want to...
E
They know they're going to run their container on a variety of devices, or they know they're going to run it on, like, a Raspberry Pi, right. And now you've got to have a Raspberry Pi nearby, or you've got to use QEMU, right, to cross-compile and build a container for the other architecture, and there are no guarantees it's going to work, right; I've definitely run into snags. Dan, I'm trying to bring it back to flesh out some of your points and give you some material here.
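The QEMU route mentioned here is usually done with user-mode binfmt emulation plus Docker Buildx. A hedged command sketch; the image name and tag are illustrative, and exact invocations vary by Docker version:

```shell
# Register QEMU binfmt handlers so an amd64 Docker host can run ARM build stages
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Create a builder instance that can target multiple platforms
docker buildx create --name edge --use

# Build for 32- and 64-bit Pi targets and push one multi-arch manifest
docker buildx build \
  --platform linux/arm/v7,linux/arm64 \
  -t example/sensor-app:latest \
  --push .
```

The snags the speaker mentions are real: emulated builds are slow, and native-extension compiles under QEMU can fail in ways a real Pi would not, so testing on target hardware is still worthwhile.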
E
In serverless... oh, maybe, right, because you can't... it's not compiled, right? Sorry, right, it's compiled, not interpreted, and then, yeah, it could...
E
...run it first, and then, yeah, I don't know. That's actually a really good question, because I totally understand how they do Node.js and Python, all of this; that's a piece of cake, right. But that's worth understanding and looking into, yeah, yeah.
E
That comes up a lot, because it certainly comes up with containers, where, if you're going to, you know, tap into something for computer vision, OpenCV on the host, and it's not built properly on the host for, you know, whatever you're trying to do, then either your container is going to take about two hours to build, and then it's going to be huge, which I've done, right, to force it in and make it work...
E
...or, you know, you just can't run on that board, and that stuff happens a lot more with C++-related things. But computer vision, you know, people use Python to use the computer vision, but Python does not do the OpenCV operations; that's C++. Yeah, yeah, same with TensorFlow, basically, right. Right, yeah, exactly: your usage of it is flexible, but the engine is not. Yeah.
C
I think, yeah, Knative and CloudEvents are trying to solve that, at least for Knative. I know that they put a lot of work into defining the spec for the events and then doing the mappings to different protocols and things like that, so that you can run it over anything.
B
The whole topic of serverless at the edge would actually be a great subject. We'd need a long runway to get ready for it, because it sounds like we have no experts on the call, but maybe there are no experts in the world. And if we would plan this for some KubeCon later in the year... everything gets planned around KubeCon, sure. Well, if you're going to bother to do it, you want people to show up, and what conferences are going to give you a big audience? So what would be... so then?