From YouTube: Kubernetes WG IoT Edge 20200729
Description
July 29 2020 meeting of the Kubernetes IoT Edge Working Group - apps at edge discussion
A
Hi, we're just about to start the meeting in the APAC cycle for the Kubernetes IoT Edge Working Group. As always, the conduct of this group and this meeting is governed by the Kubernetes code of conduct, so be nice to each other.
A
This meeting is public, recorded, and will be posted to YouTube, so if you're not prepared to be recorded, you can drop off now. With that said, let's get started. We had two items on the agenda. The first was Tina giving a presentation on the Akraino project, but she isn't here yet, so we'll just leave that for now and hopefully she'll show up.
A
The second item on the agenda is me giving a presentation about running apps at edge. What this is: Kilton Hopkins, who maybe will join us, specifically asked for coverage of this, and Cindy, the co-chair, and I are about to prepare a presentation for the online KubeCon Europe event, and the agenda for our presentation is specifically on running apps at edge. So I'm going to use this as an opportunity to test some material that I'm thinking of putting in that KubeCon presentation.
A
So if you as an audience would give me an honest review, if it needs work, or if you think I should substitute other material, please let me know before I go on in front of the broader audience. I'll put it up, but understand that I was putting these slides together right up until 15 minutes before this meeting started, so I'm still working on them now and they're potentially pretty rough. But I think the important thing maybe isn't the polish on the slide deck but the actual content.
A
Okay, I think you should be able to see a title slide for a section of my deck.
A
Yes, okay. So, applications at edge: some characteristics. I won't show it to you, because I know you're not likely to need it, but for the KubeCon audience I'll go into a slide with all the different edge use cases, anything from device edge to gateway edge, etc.
A
I'm
going
to
just
skip
that
tonight,
but
in
my
mind,
one
of
the
biggest
considerations
for
from
an
app
perspective
with
kubernetes
is
you
know,
does
it
go
low
enough
as
portrayed
by
the
limbo
bar
here?
What
do
I
mean
by?
Does
it
go
low
enough?
Well,
I
put
together
this
drawing
that
on
the
right
side
shows
different
classes
of
apps
and
on
the
left,
shows
different
classes
of
compute
hardware
that
you
might
be
running
it
on
an
edge.
A
So
obviously,
if
edge
to
you
is
something
like
there
are
places
that
are
going
to
run
a
half
rack
of
equipment
and
they're
pretty
much
server.
You
know
full
rack
mount
servers,
it
ends
up
being
you
know,
that's
an
environment
where
your
apps
are
packaged
in
containers
and
kubernetes
should
have
no
issue
with
whatsoever.
A
The
where
this
becomes
an
issue
is
when
you
get
down
to
things
like
equipment,
the
size
of
intel
nooks,
I'm
not
trying
to
plug
one
particular
vendor
here,
but
that's
perhaps
the
best
known
one
in
its
class
where
I
live,
and
the
nook
is
a
system
that
might
have
as
little
as
four
gig
of
ram
four
cores
the
raspberry
pi
four.
A
I
guess
they
recently
announced
one
that
goes
up
to
eight
gig,
but
when
it
first
came
out,
those
things
were,
I
think,
two
and
four
gig
systems,
so
that
would
be
kind
of
a
top
of
the
line
for
some
edge
deployments.
You
drop
down
a
little
lower.
I
haven't
heard
of
people
actually
using
it,
but
I'm
not
sure
why
but
a
mid-range
cell
phone.
A
An actual cell phone might actually be attractive simply because the battery in the thing is equivalent to a UPS, and it would already have a 4G- or 5G-capable radio. Going further down the stack, you've got the Pi 3, which has only one gig of RAM, and even further yet, the Arduino, which is something that doesn't even run Linux or an operating system at all. Going over to the right side, you've got different platforms that host apps, and then, of course, the apps themselves. So, starting at the top.
A
I
think
that
hypervisors
generally
are
out
there
citing
a
minimal
requirement
to
make
sense
of
4
gig
of
ram.
I
think
they
could
probably
go
a
little
bit
lower.
You
know,
and
I
think
the
two
models
that
I
looked
up
on,
that
four
gig
spec
were
the
my
own
employer
has
one
called
vsphere,
but
there's
an
open
source
project
called
acorn
believe
they're,
both
citing
four
gig.
A
Once
again,
I
think
you
could
go
lower,
but
you
need
to
have
enough
left
out
around
after
you
load
the
hypervisor
to
make
it
worth
the
bother,
and
you
know
worth
the
bother
is
probably
running
at
least
three
vms.
I
mean
if,
if
you're
only
going
to
run
one
vm
one
might
question
the
value
of
the
hypervisor.
A
You've
got
kubernetes
variants.
Anything
from,
I
think
the
one
that
may
cite
the
lowest
memory
requirement
is
k3s
where
their
documentation
actually
says
it's
cape,
it's
capable
of
running
in
512
megabytes.
Now,
whether
that's
a
practical
production
grade
thing-
I
I
I
don't
know
I
I'm
skeptical.
It
may
be
something
that's
more
along
the
lines
of
yes,
it
booted.
I
can
run
hello
world
and
use
it
as
a
learning
pro
platform
to
learn
cube
cuddle
commands.
A
But
it
strikes
me
that
if
you
look
at
the
other
things
on
that
chart,
the
ubuntu
server
operating
system
and
the
docker
runtime,
both
also
quote
512
megabyte
minimals.
So
you
know
if
you're
going
to
load
a
kubernetes
on
top
of
that
gee.
What
do
you
have
left
to
run
your
containerized
apps,
probably
very
little
you're
on
the
call,
but
I
think
the
current
cube
edge
docks
suggest
four
gig
as
a
a
recommendation.
If
I'm
not
mistaken,
microcase
is
sort
of
in
between
there,
where
I
believe
they
say.
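The arithmetic behind that worry can be sketched directly. The figures below are the minimums quoted in this discussion, not verified vendor specs, so treat the numbers as illustrative:

```python
# Back-of-envelope memory budget for a small edge node, using the minimums
# quoted in the talk (assumptions, not verified specs): Ubuntu Server
# ~512 MB, Docker ~512 MB, a Kubernetes distribution claiming ~512 MB.

def remaining_for_apps(total_mb, components):
    """RAM left for containerized apps after platform components load."""
    return total_mb - sum(components.values())

stack = {"os": 512, "container_runtime": 512, "kubernetes": 512}

# A 4 GB NUC-class box leaves a usable budget; a 1 GB Pi 3 can't even
# fit the stack before the first app container starts.
print(remaining_for_apps(4096, stack))
print(remaining_for_apps(1024, stack))
```

The point is not the exact numbers but that the platform overhead is roughly fixed, so the share of the box left for actual workloads collapses as the hardware shrinks.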
A
One
gigabyte
and
I
personally
have
installed
microgates
and
k3s
on
raspberry
pi
3s,
and
I
can
verify
that
they
definitely
boot
up.
They
don't
have
a
whole
lot
left
for
running
a
whole
lot
of
containerized
apps,
but
it
is
fair.
I
have
personal
experience
where
I
can
verify
they
will
actually
run
sorry.
I
don't
want
to
interrupt.
B
Yeah, I'd just say KubeEdge alone we can boot up in 250 meg, okay, for the recommendation. The footprint of KubeEdge running on the edge always takes about 40 to 70 megabytes, depending on your deployment. If you choose more stuff, it's up to 70 megabytes of footprint on the edge.
B
Yeah, we did. To hold up an example: in the KubeEdge examples on github.com there's a lot of examples, a traffic light and so on, right. That only takes... yeah, at that time it was only the Pi 3; the Pi 4 was not up for sale yet. So that's okay: a Pi 3 with one gig of memory.
A
I'll clarify that when I go on with the presentation, assuming I keep this material. Then, looking at that app stack, you see a few other variants that are probably potential solutions already out there now for what I call sub-Kubernetes, maybe sub-Docker. Some of the Java runtimes are capable of running in 256 meg, and those Java runtimes have been out there for a decade or more in the embedded space. So there already is stuff out there like that.
A
If
you
go
down
to
compile
languages,
you
know
compiled
c
c
plus
plus
python,
and
there
are
people
who
have
plef
tools,
that'll
even
allow
you
to
compile
java
as
opposed
to
running
it
with
a
jre
from
bytecode,
and
that
stuff
is
capable
of
getting
pretty
small.
Some
of
it.
Hostable
on
platforms
like
the
arduino
and
kubernetes
out
of
the
box.
A
Just
is
not
going
to
manage
those,
so
the
point
being
can
kubernetes
go
low
enough
now,
a
solution
to
that
is
to
go
with
the
crd
model,
then
something
which
cube
edge
did
so.
Potentially,
you
can
have
kubernetes
just
standard
out
of
the
box,
no
extensions
whatsoever
managing
containerized
apps,
where
those
make
sense.
Clearly
they
make
sense
in
large
clouds,
but
also
at
regional
and
mid-tier,
maybe
even
gateway
nodes.
A
If
those
gateway
nodes
are
pretty
sizable,
you
know
in
the
four
gig
of
memory
class,
but
there
is
some
point
where
kubernetes
just
isn't
going
to
go
low
enough,
yet
there
is
still
the
potential
to
use
it
as
a
control
plane.
If
you
were
to
implement
crds
for
managing
these
sub
containerized
apps,
even
if
you
take
kubernetes
and
take
it
with
the
crds
to
manage
it,
kubernetes
alone
doesn't
really
solve
all
the
challenges.
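As a sketch of what that CRD approach might look like, here is a hypothetical resource definition for apps too small to run as containers, built as a plain Python dict so it can be inspected without a cluster. The group, kind, and field names are invented for illustration; they are not KubeEdge's actual CRDs:

```python
# Hypothetical CRD sketch: teach the Kubernetes API server about a new
# "MicroApp" resource describing an app below the container line (e.g. a
# compiled binary for an Arduino-class device). All names are invented.
micro_app_crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "microapps.edge.example.com"},
    "spec": {
        "group": "edge.example.com",
        "scope": "Namespaced",
        "names": {"plural": "microapps", "singular": "microapp",
                  "kind": "MicroApp"},
        "versions": [{
            "name": "v1alpha1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {
                        # Where to fetch the firmware/binary from.
                        "imageURL": {"type": "string"},
                        # Which class of device this targets.
                        "deviceClass": {"type": "string"},
                    },
                }},
            }},
        }],
    },
}

# A custom controller (the pattern the CRD model implies) would watch
# MicroApp objects and push the referenced binary to matching devices.
print(micro_app_crd["spec"]["names"]["kind"])
```

Kubernetes then stores and serves these objects like any built-in resource, while the controller does the edge-specific delivery work.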
A
Some of the challenges I'll list here are: how do you actually get the applications to the edge? Registries can do this; things like the Harbor project can do Docker images, but there's potentially other things you need to get to edge as well.
A
For a lot of these edge use cases, additional work is going to be the provisioning and onboarding of new nodes. If you're using OSes and hypervisors, they're going to have drivers, and those, in my experience, are always going to need updates. There's the cost of initial provisioning, but then there's the day-two-and-beyond cost of maintaining those, and if you don't maintain them, you're potentially running up against huge security risks.
A
Then you have to install the proper reporting and observability features down on these lower tiers, and I believe that stuff is available, but it might require today a little bit of work on the part of the users to go discover it and manage it in parallel with Kubernetes. And this group has often discussed the security aspects, and we even have a white paper about the challenges of managing unattended nodes where you don't have physical security.
A
I'm
going
to
move
on
now
to
take
on
the
topic.
Kilton
wanted
to
hear,
discussed
and
that's
server,
serverless
and
edge.
You
know,
amazon
has
a
web
page
talking
about
their
lambdas
in
usa
for
edge
applications,
and
I
cut
and
pasted
this
diagram
that
uses
sensors
on
a
tractor
feeding
into
amazon
kinesis,
triggering
a
lambda
and
outputting
up
outputting.
A
Some
result
for
processing
some
result,
and
this
example
appeared
to
me
when
I
read
it
to
be
one
where
you're
doing
serverless
at
edge,
where
the
data
is
created
at
the
edge
but
feeds
up
fairly
quickly
into
the
aws
public
cloud.
A
I saw that in some of those examples. But you know, this is an open source group, and what if you wanted to stage serverless yourself at edge? What would that look like? Is it feasible? Well, let's evaluate some of the key elements. Here I've gone and cut and pasted some text from the AWS Lambda marketing materials, and I'm not using Lambda here to pick a favorite.
A
It's
just
that
in
my
experience.
It's
the
best
known
example.
Other
cloud
providers,
like
google,
have
their
own
version.
For
example,
google
cloud
functions
are
at
least
in
my
mind
something
pretty
much
like
able
aws
lambdas.
A
So
I'm
I'm
picking
on
lambda,
but
it's
just
because,
like
I
say,
I
think
it's
the
best
known,
but
we
look
at
that
quote,
and
it
says
it's
a
compute
service
that
lets
you
run
code
without
provisioning
or
managing
servers.
It
executes
the
code
only
when
needed,
scales
automatically
and
no
charge.
When
your
code's
not
running
now.
A
You've
noticed
that
I've
color
coded
it.
It
wasn't
that
it
wasn't
color
coded
like
this
when
on
the
web
page
that
I
got
it
from
and
no
I
it
didn't
end
up
all
colorful,
because
I
had
a
bunch
of
kids
that
sitting
at
home
because
of
kovit
who
broke
out
the
crayon
box.
I
did
this
because
the
green
things
make
sense
to
me.
Obviously
it
runs
code
well,
pretty
much.
Any
app
solution
is
going
to
run
code,
but
the
ones
in
red
struck
me
as
maybe
not
actually
being
valid
or
making
sense.
A
If I already paid the capex on buying that server and staging it, in a sense the part about "no charge when not running" doesn't make sense; I effectively already bought it, whether it's running or not. The yellow part is true: when you have Lambdas sitting there and they fire based on events,
A
They
only
execute
code
when
needed,
hey,
but
what
code
doesn't
only
execute
when
needed?
I
mean
it
if
your
code
executes,
when
it's
not
needed,
isn't
that
a
bug
scaling
automatically
if
you're
looking
at
a
resource
constrained
edge
thing
you
know
and
some
of
that
low
end
hardware
we
were
talking
about
raspberry
pi
grade
or
even
upgrade,
is
so
low
that
effectively,
you
may
not
have
the
miracle
of
elastic
scalability
in
the
sense
that
you
can
code
apps
as
if.
A
They
can
go
bump
off
other
less
important
things
and
get
scale
only
when
they're
needed
so
to
me,
even
though
I've
been
nitpicking,
this
quote
and
picking
on
the
red
stuff.
There
is
some
value
in
here.
They,
the
the
key
here
with
lambdas
at
edge
our
event
driven
execution.
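The event-driven model being described can be sketched in a few lines. This is a toy dispatcher with invented names, not any particular serverless framework's API:

```python
# Minimal sketch of event-driven execution: handlers are registered
# against event types and only run when a matching event arrives --
# nothing polls, nothing runs idle. All names are illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type, handler):
        """Register a function to fire when event_type occurs."""
        self._handlers[event_type].append(handler)

    def emit(self, event_type, payload):
        """Deliver an event; returns each handler's result."""
        return [h(payload) for h in self._handlers[event_type]]

bus = EventBus()
# E.g. a sensor reading crossing a threshold triggers a small function,
# the way a Lambda fires off a Kinesis record.
bus.on("temperature", lambda reading: "alert" if reading > 90 else "ok")

print(bus.emit("temperature", 72))   # ['ok']
print(bus.emit("temperature", 101))  # ['alert']
```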
A
But potentially, if you go with a model where you pre-stage your Lambda, quote, "serverless", to cut out the startup latency, you can build some apps that really can deliver kind of a new class of value. And I can tell already, presenting this for the first time, that I might want to rearrange my slides.
A
That
server
listed
edge
has
this
issue
of
maybe
being
too
fat.
You
know,
k
native
is
potentially
pretty
resource
intensive
because
it
demands
istio
and
I'm
not
sure
that
that
you
can
actually
run
kubernetes
plus
k
native
on
something
like
a
pi
three,
a
pi.
Four,
probably
you
could
open
fast,
is
a
thinner
one
and
clearly
you
can
run
it
on
a
pie.
I
believe
alex
ella
alex
ellis's
blog
actually
has
instructional
material
on
installing
it
on
a
pi
3.,
but
the
biggest
issue
of
serverless.
Is
this
startup
latency
the
zero
to
one?
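The pre-staging idea can be illustrated with a minimal sketch: pay the expensive initialization once, at deploy time, so an incoming event only pays for the handler body. The names and the "initialization" stand-in are invented for illustration:

```python
# Sketch of pre-staging a serverless backend to avoid the zero-to-one
# cold start. INIT_COUNT stands in for expensive work (loading a
# runtime, a model, etc.) that should not happen on the request path.
INIT_COUNT = 0

class Function:
    def __init__(self):
        global INIT_COUNT
        INIT_COUNT += 1            # the costly cold-start work
        self.model = "loaded"

    def invoke(self, event):
        return f"processed {event} with {self.model}"

class PreWarmedPool:
    """Keep an instance warm so invocation skips initialization."""
    def __init__(self):
        self._instance = Function()   # paid at deploy time, not call time

    def invoke(self, event):
        return self._instance.invoke(event)

pool = PreWarmedPool()          # init happens here, before any event
first = pool.invoke("frame-1")  # no init on the request path
second = pool.invoke("frame-2")
print(INIT_COUNT)               # 1: both calls reused the warm instance
```

The trade-off, of course, is that a pre-warmed instance occupies memory even when idle, which cuts against the "no charge when not running" pitch on constrained hardware.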
A
If
you
really
don't
preach
stage
the
serverless
back
end
and
have
it
ready
to
go
when
called
upon,
you
can
be
dealing
with
issues
of
startup
latencies
of
a
second
or
more
and
if
you've
moved
out
to
edge
or
are
hosting
apps
at
ads
because
of
latency
challenges,
this
probably
isn't
going
to
work
for
you,
so
the
other
thing
that
I'm
concerned
about
with
serverless
at
edge,
if
you're,
not
ultimately
just
feeding
a
data
pipeline.
All
the
way
up
to
a
public
cloud
is
that
lambdas
by
design
are
stateless.
A
They
don't
maintain
state.
Yet
from
the
aerial
view,
any
practical
application
is
going
to
need
resilient
state
storage,
even
if
it's
only
to
act
as
a
cl.
A
So
to
me,
the
biggest
value
of
serverless
is
a
you
know:
there's
a
there's,
a
lot
of
stupid
hype
about
iot,
being
the
the
value
of
connecting
every
sensor
and
controller
up
through
the
public
internet,
and
I
submit
to
you
that
the
real
value
in
edge
processing
is
actually
instead,
something
more
analogous
to
serverless,
but
instead
of
removing
costs,
which
is
that
story
of
you
only
pay
for
these
things
when
you
use
them
we're
gonna,
remove
something
else:
that's
costly
and
induces
latency.
A
The 60s was the decade of the IBM mainframe, where major organizations had one of these mainframes, and it was often processing paper records that got collected out at retail locations and transcoded by humans on keypunch machines, ultimately producing reports and value. But the average employee or human in those organizations was very far removed from that mainframe; in fact, the mainframe was generally in a locked room with a raised floor, and that was the IT lifestyle of the 60s.
A
Many
computers
came
about
in
the
70s
and
they
they
were
cheaper
and
something
that
could
move
out
to
non-central
locations
because
they
were
affordable
and
didn't
didn't,
require
customized
buildings
to
host
them.
So
the
move
from
mainframes
to
mini
computers,
I
would
contend-
was
a
move
of
compute
moving
closer
to
the
humans
to
do
service
for
them.
A
And
sometimes
these
computers
were
shared.
You
know.
In
a
household
environment,
the
computers
were
still
expensive
enough.
That
members
of
a
household
would
actually
share
one
computer,
often
small
businesses
or
branch
offices
of
large
businesses
would
not
have
a
computer
per
person,
but
eventually,
as
we
move
to
the
90s,
the
cost
of
those
personal
computers
moved
down
to
the
point
where
people
sometimes
had
a
computer
when
they
went
to
work
provided
by
the
employee
employer,
another
one
when
they
went
home
and
maybe
multiple
family
members
in
a
wealthy
household
had
their
own
computers.
A
Finally,
moving
on
to
cell
phones,
the
that
compute
moved
became
portable
and
became
even
more
ubiquitous,
but
the
trend
was
that
compute
moved
down
closer
and
closer
to
the
humans.
A
Well,
I
would
contend
that
the
trend
for
edge
going
into
the
next
decade
is
actually
going
to
be
for
the
computers
to
finally
leap
ahead
of
the
humans.
When
I
say
that
is
that
crazy
talk?
No
the
humans
now
are
a
big
yeah.
If
this
information
has
to
be
collected
by
a
human
reacting
to
something
it's
a
big
latency
impediment
and
the
trend
with
things
like
machine
learning,
image,
recognition
is
actually
not
to
have
human
beings
watching
analog
camera
feeds,
but
to
have
ais
and
machine
learning.
A
Taking
the
first
bite
at
this
state
at
these
data
flows,
spotting
the
anomalies
or
deducing
value
out
of
these
before
the
human
even
sees
these
data
flows,
and
I
think
serverless
has
the
potential
to
be
of
great
value
here,
but
at
edge
it
should
be
called
personless,
because
what
you're
really
doing
isn't
replacing
the
server?
A
You
still
have
the
server
if
you're
the
one
staging
it
on
your
own
edge
hardware,
but
what
you
are
potentially
doing
is
building
the
equivalent
of
robots
that
maybe
do
difficult
work
for
humans
in
a
better
way
than
the
humans
can
or
maybe
outright
replace
humans.
You
know
it's
not
especially
a
rewarding
job
to
be
somebody
looking
at
the
security
cameras
for
an
eight
hour.
Work
shift-
and
maybe
those
you
know
maybe
what's
going
on
here-
is
the
computer
actually
turns
this
into
personless.
B
Yeah, it's really great. Time-wise it was a little bit longer, but maybe that's just my interruption, right? More than 25 or maybe 30 minutes, yeah.
B
Yeah, it does make sense. Actually, we are thinking about whether we should integrate Knative into KubeEdge, because we are looking at collaborating with other open source projects to bring the serverless ability to the edge. It's an interesting topic. I think my only suggestion: it would help to have some more concrete examples when you talk about the more "personless" part, I mean, dealing with the data at the location, right?
A
B
A
Think
there's
I
probably
have
to
lose
one
either.
I
can
go
into
hosting
serverless
at
edge,
using
the
existing
solutions
like
k,
native
or
open
fast,
or
I
can
talk
about
kind
of
this
philosophy
of
serverless
not
really
being
about
serverless
so
much
as
just
the
value
of
event
driven
being
used
as
a
replacement
for
people
which
of
those
two
do,
you
think,
is
more
interesting.
B
The second one, the one without pointing out any particular product, because I think it fits your talk more properly: your talk is not to advertise any direction, right? You just gave the trend. That's only my opinion, say: without pointing out any particular product.
A
The reason I say it is that I gave a presentation on Istio at Open Source Summit a year ago, and gave a demo running on my laptop, and I just know that that took an awful lot of resources on my laptop. I was lucky I had a 32 gigabyte laptop, because I'm not sure that was really viable on a smaller one.
B
Yeah
the
thing
is
in
our
thought,
in
my
case,
I'm
looking
for
is
just
like
you
said
a
little
bit
more
powerful
edge
like
mec
edge
or
enterprise
edge.
Now
the
lt,
the
iot
one
I
mean
we,
we
see
they
have
a
limited
resource
right.
They
have
different
purpose
but
the
edge
computing
you
have
from
its
big
variations
right
from
very
small
edge
to
very
big
edge.
So
I
mean
there's
so
many
different
configurations.
C
Hi, I'm here, I'm sorry I'm late. I didn't know, and in my schedule for today, I'm scheduled for the morning time.
A
Oh
okay,
so
if
you
wanted
the
next
one,
that's
fine,
we
don't
have
it
we'll
just
roll
it
into
that,
especially
in
fairness.
Since
you
didn't
know
this
was
the
slot,
so
it
wasn't
clear
to
me
which
of
those
you
wanted.
Sorry,
oh
okay,
so
sure
you
can
have
the
the
morning.
Slob
would
be
in
about
two
weeks.
So,
let's
plan
on
that,
especially
because
we
got
somewhat
light
attendance
this
evening
too,.
A
Okay, so yeah, I'll put you on the calendar for the next one, and you're welcome to stick around anyway and listen if you want. What I was doing, and maybe you caught the tail end of it, is a test run of some material I was thinking of putting on for KubeCon Europe; I was just trying to give a presentation on serverless and edge.
B
Well, yeah. The one example we used to run is more like JavaScript; you have about a couple of lines of code. Basically, for example, you turn on or turn off the smart switch, through the MQTT protocol. That was the engine we were running... I forget what the JS engine is called; it's really small, the compiler only takes about less than 20 kilobytes or something in memory consumption. So that fits.
B
Yeah. The thing is: do you want a complicated serverless framework, or does it already fit the principle of serverless, right? The event triggers, and the function only runs when there's data, and because it's very light, it's already kind of serverless.
A
Well,
the
other
thing
I
got
into
when
you
literally
took
lambdas
hosted
an
edge
if
you
could
do.
That
is
the
fact
that
they're,
stateless
and
once
again
I
contend
that
for
some
of
these
edge
locations
you
really
need
state
somewhere.
So
maybe
some
other
technologies
are
better
suited.
You
know
you
need
the
state,
even
if
we're
staging
some
of
these
transports
inherently
are
capable
of
queuing.
You
know
where
they
not
only
will
handle
events,
but
maybe
keep
them
possibly
resiliently
enough,
so
that
things
don't
get
lost.
When
you
have
temporary
network
outages.
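That resilient-queuing idea can be sketched with a small durable buffer. This is purely illustrative, using SQLite as the persistence layer, not any particular transport's implementation:

```python
# Sketch of a resilient local event buffer: events are persisted at the
# edge so a temporary network outage doesn't lose them; when the uplink
# returns, the buffer drains in order. Names are invented.
import sqlite3

class DurableQueue:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS events "
            "(id INTEGER PRIMARY KEY, body TEXT)")

    def enqueue(self, body):
        self.db.execute("INSERT INTO events (body) VALUES (?)", (body,))
        self.db.commit()           # durable even if the process dies

    def pending(self):
        return self.db.execute("SELECT COUNT(*) FROM events").fetchone()[0]

    def drain(self, send):
        """Try to send each buffered event; delete only on success."""
        rows = self.db.execute(
            "SELECT id, body FROM events ORDER BY id").fetchall()
        for row_id, body in rows:
            if send(body):
                self.db.execute("DELETE FROM events WHERE id = ?", (row_id,))
        self.db.commit()

q = DurableQueue()
q.enqueue("sensor-reading-1")
q.enqueue("sensor-reading-2")

q.drain(lambda body: False)   # network down: nothing is lost
q.drain(lambda body: True)    # network back: buffer empties
```

Deleting an event only after a successful send gives at-least-once delivery, which is usually the right trade for intermittently connected edge nodes.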
A
As you look at serverless like Knative, is there any ongoing work integrating things like Kafka into KubeEdge as well, to go with models of kind of continuous event flows?
B
Yeah,
the
cup
cup,
because
we
try
that,
but
even
the
the
slim
version
is
still
too
big
because
we
were
talking,
as
I
said,
we're
targeting
as
a
256
megabytes
memory.
Only
so
we
have
a
mini
flink,
but
it's
for
the
stream
data
processing
is
different
from
a
kafka,
is
take
about
130
megabytes
I
mean
so.
If
we
only
run
the
small
functions
we
need
to
still
I
mean
leave
a
few
extra
space
for
the
user
applications
so
for
the
pi,
but
the
one
gig
memory
is
pretty
pretty
generous
already.
A
And I'm just curious: did you ever evaluate OpenFaaS, to see if that might make more sense or be useful?
B
We
haven't
started
yet
so
this
is
still
on
the
table.
Everything
is
on
table
okay
native
open
fast.
Probably
it's
all
depends
on
adoptions
right.
We
want
to
pick
the
one
of
the
options
and
leave
the
gener
generic
interface
for
other
people
can
switch.
A
So, Tina, if you're still there, I'm just curious about Akraino: are there any people on that project who are pursuing serverless solutions?
A
Okay,
yeah
I'd
be
interested
in
hearing
about
that
then,
and
I
think
others
would
be
too
if
that
isn't
too
advanced
to
put
into
your
presentation
when
you
give
it.
A
And then, for that serverless, is it based on some existing project like Knative, or is it something that's entirely within Akraino?
A
I think, before the recording started, and maybe before some of you joined, Yin mentioned that KubeEdge is up for vote with the TOC in the CNCF for advancement.
A
And then, do you have sessions on KubeEdge at... I know you do at the Cloud Native Con China event, because I saw two of them, but do you have any at the Europe event?
A
So I'm curious: how do the current users shape up geographically, like Asia versus North America versus Europe, for KubeEdge usage? It's Asia?
A
America, okay. Last call: does anybody have anything they want to discuss? Otherwise, we'll end this five minutes early.
A
Yep.
Thank
you.
No
for
me,
okay,
thank
you.
Everybody
I'll,
shut
down
the
recording
and
that's
a
wrap.
Thank.