From YouTube: Kubernetes WG IoT Edge 20220713
Description
July 13 meeting of the CNCF IoT Edge Working Group. “Edge Native” white paper brainstorm and scope.
C
So welcome to the July 13 meeting of the CNCF IoT Edge Working Group. On the agenda today we have one item, which is to talk about a new white paper that we're planning to kick off. I guess in some sense it already is kicked off, because there is an early-stage draft already in existence, and the goal was to cover things related to running workloads out at edge locations.
C
There's a shell of the scope in that draft document, but I think at this stage we're still potentially open to modifying the scope of it too. We had a plan to advance that draft doc up until a future publication, both through discussions in meetings and through asynchronous comments and edits on the shared document.
C
I don't have the link to the document readily available, but if somebody could post that in the chat, that would probably be useful for Brian. So that's on the agenda, and as usual, sometimes this group has meetings where literally nothing is on the agenda, but we still tend to have the meeting anyway and just conduct a free-form birds-of-a-feather discussion about any questions users might have, or cool things they've seen, whatever. As long as it fits in the category of edge IoT, it's fair game.
C
The other thing I'll point out is that a few years ago we did take on a white paper, and I think it largely addressed security issues when dealing with edge. So we've kind of been through this before, but like many things in tech, these sorts of documents get sell-by dates: you wait a couple of years and maybe half the document is more dangerous than useful, in terms of how the world has moved on. So I think we'll aspire to making this white paper a maintained document that isn't a one-shot.
B
If you've read the outline already — I think the main idea is to try to specify the superset of requirements for workloads, on top of what we refer to as cloud native principles, in terms of what we need of them to be able to run on the edge. I don't know — what kind of edge are you coming from, and what's your interest?
A
So that's why I joined this one instead. I will review that paper and that outline, but yeah, I've been working on edge stuff for many years now. And just to give you guys a heads-up too, I'm doing Google Summer of Code, mentoring three contributors, and we're doing analytics workloads at the edge. I'm teaching them how to do that on top of Kubernetes, and we're basically just creating open source workloads for specific verticals.
C
Yeah. So far, the white paper was initially proposed by a group of people that — I don't know if this is their personal description, but I'd describe them as largely telco focused. That's certainly a key group that's interested in edge, but there are definitely others, and the use cases differ for telco versus, say, industrial IoT, gaming, and there's a class of analytics for analyzing image...
C
...recognition coming from camera feeds, and all these niches tend to be a little bit different. Another one I see a lot out there is retail edge, where people are landing things out at edge locations to support various forms of stores — from large ones that might have multiple Kubernetes cluster nodes, to very small ones that may be so resource challenged that they have difficulty even standing up Kubernetes at all, versus just a container runtime.
A
You know, doing the classic image inferencing of X-rays and stuff like that, and I'm teaching them how to do the data pipeline. We have one guy that's working in the smart traffic space, just looking at webcams and coming up with the right thing to do from a stoplight perspective or a travel perspective or whatever, and then one of the other guys is working in the finance space as well.
C
Are all of these based on machine learning in some sense?
C
Yeah. I just saw an interesting presentation yesterday on advances being made in integrated circuits to assist machine learning applications out at edge, and it was kind of interesting.
C
It went into the premise that generation one was simply applying GPUs that were really not made for machine learning, but worked out much better than just using generic CPUs; then people doing things like Google's TensorFlow-optimized hardware; to now a trend of vendors coming up with dedicated hardware that takes into account unique characteristics found at edge, like power and cost constraints.
C
So I think that field is potentially on the cusp of a big transition.
D
...That's focused on these core principles, and then we could have breakout papers that focus on different areas of the edge. I think there is potential for a future white paper to focus on what machine learning means for the edge, and what it means to make your machine learning application portable to the edge, and to have that be a separate area of focus.
A
That's actually what we're doing with this project. I'm writing an enterprise architecture piece about analytics at the edge and things like that — which is the "why you want to go down this path" — and then we were going to publish papers like solution stacks for each one of the verticals, saying here's an example of how you can do that.
D
And is this all part of the Google Summer of Code, or is this a separate project that you're pursuing? And would this all be public? Because that would be great information to reference from the white paper as well.
A
Yeah, so that was the goal of the Google Summer of Code, at least: to make those solution stacks all publicly visible. I'm going to ask them to do it in a GitHub repo, and I'm going to teach them how to use some open source documentation tools and such too.
A
Sure, yeah. And just to give you a sense: in the Google Summer of Code there are basically three phases. There's one about learning up front, then there's a first coding phase and a second coding phase. We do two meetings per week, sprint-wise, and then I tell them that at the end of each sprint I want them to do a lightning talk.
A
You know, like a five-minute recording. They're doing that; we haven't publicly published those yet, but I'm going to put those out there with blog postings at the end of the next phase.
C
So I just dropped a link in the chat: it's a video recording of that presentation on the AI assist chips for edge.
D
One question that I had with the white paper, if there's interest in discussing it a bit: I think one assumption should be made right at the top, which is that edge native — whatever that coined term ends up being — is a superset of cloud native. And I'm curious: can you all think of any cloud native principles that wouldn't pertain to the edge, and whether we would have kind of an exception clause there?
C
I think the biggest elephant in the room, which has been coming up in this group for years, is that cloud native started with this premise of large public clouds, where — I don't know if I'd call it a goal or just something that fell out of it — what was going on was that they would centralize resources into a big pool and maximize the efficiency of that pool of servers.
C
You know, to cut costs and optimize efficiency. And when you look at edge, it's kind of the polar opposite of that centralization. You don't really have this pool of resources where workloads don't care which machine they run on, so that you can direct them into one big pool and share them around. If your compute resource gets split up across thousands of locations, and maybe those locations have very dedicated I/O devices on them...
C
...that is a really different animal than what you've got in the traditional cloud native public cloud data center.
C
I'm not sure the official CNCF cloud native definition called that out, but there was an original cloud description that came out of NIST that defined the cloud as a situation where things were, on a short-term level, infinitely scalable: you could get resources on demand and count on them being there. That was really an effect of having this massive pool. They also made a declaration in the NIST definition about the longer term.
C
Over a longer-term period, you could add resources to the pool — like if the cloud was oversubscribed, the vendor of that cloud could start buying more servers and building it out to scale up resources. At edge, once again, that is a little bit more difficult, where one of these locations may be capacity constrained and the workload really has to be at that location to be useful — a common scenario.
C
Unless anybody thinks that's such a big change that cloud native doesn't even apply — the one principle that does carry over, I think the key principle of cloud native, is that a properly written app should work the same everywhere. It shouldn't be able to tell where it is, what hardware it's on, etc. And even if the edge jobs are very location-centric, it doesn't mean there aren't advantages to an abstraction layer that covers up differences in the actual underlying compute resource at one location versus another.
C
I think it's fair to say that hardware life cycles at edge tend to be much longer than the ones in data centers. Data center operators are okay with obsoleting hardware on life cycles of three to five years, but at edge, where there are costs of landing and installing equipment — certainly in factory automation — life cycles of 10 to 20 years aren't uncommon at all. It means that if you're a big multinational with thousands of these locations, you shouldn't have an expectation that every location, installed over a decade, has identical hardware.
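(The abstraction-layer idea discussed above can be sketched in a few lines: rather than assuming specific hardware, a workload probes advertised capabilities at startup and degrades gracefully. This is a hypothetical illustration, not from the meeting; the capability names and the `EDGE_ACCELERATORS` variable are made up.)

```python
import os


def select_inference_backend(available: set) -> str:
    """Pick the best available accelerator, falling back to CPU.

    The workload never assumes a specific device; the platform (or
    an abstraction layer) reports what this location offers.
    """
    # Preference order reflects typical edge constraints:
    # dedicated ML silicon first, then GPU, then plain CPU.
    for backend in ("npu", "gpu", "cpu"):
        if backend in available:
            return backend
    return "cpu"  # safe default for decade-old hardware


def backends_from_env() -> set:
    """Read advertised capabilities, e.g. from node labels surfaced
    as an environment variable by the orchestrator (hypothetical)."""
    raw = os.environ.get("EDGE_ACCELERATORS", "cpu")
    return {b.strip() for b in raw.split(",") if b.strip()}
```

The point being that the same container image then runs unchanged on a decade-old retail box (`cpu`) and a newer site with ML silicon (`npu,cpu`).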
D
Yeah, I think that's an interesting distinction: between something that abstracts away that detail from the application, and the application itself being built for the edge. In the white paper we're talking about edge-native applications, but when we started discussing it, the question was: is this about the infrastructure, or is this about the application? If you build out an extended orchestration system that abstracts away all those details, that's also a different approach.
C
As you bring this up, my gut feeling is that we could start this white paper with the assumption that people are already aware of what the traditional cloud native definition is, and just describe the delta. But my thinking — just seat-of-the-pants thinking about it right now — is that I don't think that's as useful to an audience, because some of these people might not be familiar with that traditional CNCF cloud native definition.
C
So doing this as a delta from that is interesting to some, but not to the entire audience. I think maybe we could have an intro in the white paper saying, you know, there are some key differences, and this is part of the explanation of why we even need this white paper. This is a realm where clearly some things are different — part of the mission statement is different...
C
...when you're going out to an edge — to justify why this white paper exists. But then, as we go on into the content, I think we should drop describing things as a delta to the original that came before, and do it as something that stands on its own two feet: best practices for these abstraction layers, what value you get out of them, how you should operate at edge, etc. And in the long term, there are people forecasting that these edge clouds could be bigger than the centralized public clouds.
C
Ultimately — and I don't think it's outlandish; it remains to be seen whether that comes about — but what if, in five to ten years, the amount of compute out at edge is actually bigger than in the centralized clouds? It would seem silly at that point for this document to be worded as a reference for the smaller player out there.
D
...It's an assumed starting point that some people reading it might not have. So I do think that in the first paragraph, regardless, we should address cloud native and how we've evolved from there. Then whether or not we assume cloud native principles apply is the next question, because I honestly don't know what the established document on cloud native principles is. Brandon linked to one that Google wrote that had like five principles, and then there's another one.
D
That's like an open source working document of principles. But I think we could quickly call out, in a couple of sentences: this is what we assume cloud native applications are — a microservice approach, portability, things like that — and then say which of those still apply, or which don't, and move on. Making sure we're defining what we're building off of is important.
D
Yeah, I think an interesting thing is that developers and organizations want it to be a superset. It seems like they want their same cloud native applications to be portable to the edge, and they're asking, "How do I take this and move it to that edge?" From that logic it's a superset, because if you're just pasting something onto a pre-existing application to make it edge ready, I can see that as a superset.
D
So maybe there are two options: you can make a pure-from-the-start edge native application, or maybe there's some super glue you add to your cloud native application that makes it edge native. I think the latter is something people are interested in, because they don't want to be tied to one region of compute.
C
Yeah, I just posted a link in the chat to the NIST definition, which is a little dated. Looking at that — I just opened that doc and looked at the header of essential characteristics — "rapid elasticity" is what they call that ability to scale your workloads up out of a pooled resource, and it's not the only thing in there. They also defined "on-demand self-service" as an essential characteristic; I'm not sure how much that applies to edge. The one that is still clearly...
C
Some of these edge clouds might indeed have that as an aspect — if the telco goes into the business of multi-tenant hosting as a commercial service at edge, you obviously would need to measure it. If you're doing it for yourself, I'm not so sure. It might be a desirable characteristic, but essential? I'm not so sure.
D
I think the network one's actually interesting, because I wouldn't necessarily say network access is a given on the edge. I would say the delta of that principle is that some network access, periodically, is important, so that you can have your centralized source of truth. I think that's an interesting example: in the cloud, continuous network access is assumed; on the edge, the delta is that it's intermittent and we need to be prepared for that — or maybe it's fully disconnected.
C
Yeah, you're right. And clearly, I think, "network" doesn't mean public internet. It could be essentially still air gapped — meaning air gapped from the internet — but there is at least a localized pool of things that are communicating with each other. If you had no way to centrally manage and update through policy, you couldn't run these nodes at scale.
C
I think it's fair to say the control plane might not be permanently attached 24x7, but there is some mechanism to have a semblance of control through some means of communication. It could be very intermittent — something like a cruise ship that comes into port once every three weeks — but it's still there periodically.
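(A minimal sketch of the intermittent-connectivity pattern being discussed: the node keeps operating on its last known policy, queues outbound state locally, and flushes whenever the control plane happens to be reachable. All names here are illustrative assumptions, not from any real project; a real system would use a durable on-disk queue.)

```python
from collections import deque


class IntermittentSync:
    """Buffer outbound reports while disconnected; flush whenever a
    link appears (even the once-every-three-weeks cruise-ship case)."""

    def __init__(self, transport):
        self.transport = transport  # object with .connected and .send()
        self.outbox = deque()       # in-memory stand-in for a durable queue
        self.policy = {}            # last policy received; acted on offline

    def report(self, event: dict) -> None:
        """Queue an event; it is delivered later if we are offline now."""
        self.outbox.append(event)
        self.flush()

    def flush(self) -> int:
        """Send whatever we can while the link is up; return count sent."""
        sent = 0
        while self.outbox and self.transport.connected:
            self.transport.send(self.outbox.popleft())
            sent += 1
        return sent
```

The design choice here mirrors the point above: the node never blocks on connectivity — control is a semblance maintained through whatever communication windows exist.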
C
Yeah, I think the opening should have some scope. Just for somebody who discovers this white paper, one of the first things they're going to ask themselves is, "Is this something I'm really interested in?" They do that by reading the first paragraph or the first page, and the document has to clearly identify that.
C
I see we've got a few other people who joined late here, so just to fill you in: we're talking about a planned white paper on the subject of application workloads running at edge.
E
Yeah, thanks, Steven. I'm just trying to catch up.
C
Anyway, to further fill you in: we were talking about how there have been white papers before on cloud native in general, describing best practices and principles for running cloud native apps in the traditional public and private cloud data centers, but edge is different. The group had an edge security white paper a few years ago, but this is more an attempt to do an update and a more broad-brushed, general coverage of the subject at edge. In terms of another way to get started, it might actually be there...
D
There are a couple that are linked in there — it'd be interesting to read around some of the other CNCF white papers, because I think one of the things that Brandon mentioned when he brought this idea to the working group is that the idea of "edge native" has been thrown around a lot, but no one's really defined what it means. This working group really is in a good position to define that, with the CNCF being kind of the hub for cloud computing.
D
It would also be interesting to get some of those authors to come present. Maybe, in the exploration and research phase, we can reach out to people who have been part of drafting other white papers — whether they align with cloud or any edge-adjacent technology. It would be interesting to have them talk through the process of how they narrowed down a list of principles and went about defining them.
D
Yeah, and I think the meeting notes or the white paper, Josh, would be a good space for that. We could even have a section in the meeting notes — I'll put one here — for research: previous CNCF and edge/cloud white papers. We can add links to the ones we've looked at, maybe with your name if you're reviewing one, and some notes as well, just to keep the draft clean. And for folks who joined a little later...
D
...the top comment is the link to the working draft for the white paper. If you're interested in getting involved, everyone's invited to join in and help produce it. And then in the agenda notes that we base the meeting off of, you can add yourself as an attendee — let me go ahead and drop the link to that.
D
Great, thanks, Steve. And then here's the link for the agenda and attendee notes.
E
Yeah, this is Kevin. I have been working on the KubeEdge project for years, and I just quickly read the beginning part of the paper draft. Yeah, I think we can share some input for the paper — for example, the challenges of, or why we need, edge computing, and the challenges of adopting cloud native at the edge, things like that.
B
Yeah, that's cool. As you said, the original proposers of the current white paper are, we think, coming from the telco industry, so a point of view from the folks doing IoT is much appreciated — that way we have a balanced approach.
E
Yeah, and actually we do have some discussion in the KubeEdge community, and we've also reached out to ETSI a few times before to talk about their — they have some MEC kind of standard or blueprint or something. We've had some discussion with them, and we can also share our previous work.
D
Yeah, input from the KubeEdge community would certainly be interesting, given that that was the exact evolution of that approach: taking Kubernetes-native technology and making it edge optimized. It'd be really interesting to hear what principles defined those architecture decisions — they could probably be fairly applicable.
D
So I would just say the best place for that input would probably be in the document itself. Feel free to leave comments where you see fit, or add sections about what you're interested in, and we can make it an asynchronous form of discussion — because, like Steve mentioned, it's hard to keep up with all the notes. So yeah, feel free to comment with sources and ideas there.